[PATCH] gcmaes_crypt_by_sg: don't use GFP_ATOMIC allocation if the request doesn't cross a page

2018-09-05 Thread Mikulas Patocka
This patch fixes gcmaes_crypt_by_sg so that it won't allocate memory
if the data doesn't cross a page boundary.

Authenticated encryption may be used by dm-crypt. If the encryption or
decryption fails, it results in an I/O error and filesystem corruption.
The function gcmaes_crypt_by_sg uses a GFP_ATOMIC allocation, which can
fail at any time. This patch fixes the logic so that it won't attempt
the failing allocation if the data doesn't cross a page boundary.

Signed-off-by: Mikulas Patocka 
Cc: sta...@vger.kernel.org

---
 arch/x86/crypto/aesni-intel_glue.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Index: linux-2.6/arch/x86/crypto/aesni-intel_glue.c
===
--- linux-2.6.orig/arch/x86/crypto/aesni-intel_glue.c   2018-05-31 
18:04:37.80800 +0200
+++ linux-2.6/arch/x86/crypto/aesni-intel_glue.c2018-09-04 
21:56:35.33000 +0200
@@ -817,7 +817,7 @@ static int gcmaes_crypt_by_sg(bool enc,
/* Linearize assoc, if not already linear */
if (req->src->length >= assoclen && req->src->length &&
(!PageHighMem(sg_page(req->src)) ||
-   req->src->offset + req->src->length < PAGE_SIZE)) {
+   req->src->offset + req->src->length <= PAGE_SIZE)) {
scatterwalk_start(&assoc_sg_walk, req->src);
assoc = scatterwalk_map(&assoc_sg_walk);
} else {

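The boundary condition the patch corrects, restated as a standalone check (the PAGE_SIZE value is an assumption for illustration): data ending exactly on the page boundary still fits in one page, so the old strict '<' needlessly forced the GFP_ATOMIC fallback:

#include <stdbool.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL	/* assumed 4 KiB pages */

/* True when [offset, offset + length) lies within a single page. */
static bool fits_in_one_page(size_t offset, size_t length)
{
	return offset + length <= PAGE_SIZE;
}
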

[PATCH] dm-integrity: don't store cipher request on the stack (was: [QUESTION] hash import and request initialization)

2018-01-10 Thread Mikulas Patocka


On Wed, 27 Dec 2017, Herbert Xu wrote:

> On Tue, Dec 26, 2017 at 02:21:53PM +0200, Gilad Ben-Yossef wrote:
> > 
> > See how SKCIPHER_REQUEST_ON_STACK is being used with an asymmetric skcipher
> > in drivers/md/dm-integrity.c
> 
> That's just broken.  SKCIPHER_REQUEST_ON_STACK is only meant for
> sync algorithms and this code needs to be changed to either do the
> proper request allocation or switch over to allocating sync
> algorithms.
> 
> Cheers,

Hi

Here I send a patch that moves those allocations to the heap.

Mikulas



From: Mikulas Patocka <mpato...@redhat.com>
Subject: [PATCH] dm-integrity: don't store cipher request on the stack

dm-integrity: don't store cipher request on the stack

Some asynchronous cipher implementations may use DMA. The stack may be
mapped in the vmalloc area that doesn't support DMA. Therefore, the cipher
request and initialization vector shouldn't be on the stack.

This patch allocates the request and iv with kmalloc.

Signed-off-by: Mikulas Patocka <mpato...@redhat.com>
Cc: sta...@vger.kernel.org  # v4.12+
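
A minimal sketch of the heap-allocation pattern the patch below adopts (the helper name and error handling are illustrative, not the exact dm-integrity code):

#include <linux/slab.h>
#include <crypto/skcipher.h>

/* Allocate the request and IV on the heap so neither can land in a
 * vmalloc-backed stack that DMA cannot reach. */
static int alloc_journal_crypt(struct crypto_skcipher *tfm, unsigned ivsize,
			       struct skcipher_request **req, u8 **iv)
{
	*req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!*req)
		return -ENOMEM;

	*iv = kmalloc(ivsize, GFP_KERNEL);
	if (!*iv) {
		skcipher_request_free(*req);
		*req = NULL;
		return -ENOMEM;
	}
	return 0;
}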

Index: linux-2.6/drivers/md/dm-integrity.c
===
--- linux-2.6.orig/drivers/md/dm-integrity.c
+++ linux-2.6/drivers/md/dm-integrity.c
@@ -2559,7 +2559,8 @@ static int create_journal(struct dm_inte
int r = 0;
unsigned i;
__u64 journal_pages, journal_desc_size, journal_tree_size;
-   unsigned char *crypt_data = NULL;
+   unsigned char *crypt_data = NULL, *crypt_iv = NULL;
+   struct skcipher_request *req = NULL;
 
ic->commit_ids[0] = cpu_to_le64(0x1111111111111111ULL);
ic->commit_ids[1] = cpu_to_le64(0x2222222222222222ULL);
@@ -2617,9 +2618,20 @@ static int create_journal(struct dm_inte
 
if (blocksize == 1) {
struct scatterlist *sg;
-   SKCIPHER_REQUEST_ON_STACK(req, ic->journal_crypt);
-   unsigned char iv[ivsize];
-   skcipher_request_set_tfm(req, ic->journal_crypt);
+
+   req = skcipher_request_alloc(ic->journal_crypt, GFP_KERNEL);
+   if (!req) {
+   *error = "Could not allocate crypt request";
+   r = -ENOMEM;
+   goto bad;
+   }
+
+   crypt_iv = kmalloc(ivsize, GFP_KERNEL);
+   if (!crypt_iv) {
+   *error = "Could not allocate iv";
+   r = -ENOMEM;
+   goto bad;
+   }
 
ic->journal_xor = dm_integrity_alloc_page_list(ic);
if (!ic->journal_xor) {
@@ -2641,9 +2653,9 @@ static int create_journal(struct dm_inte
sg_set_buf(&sg[i], va, PAGE_SIZE);
}
sg_set_buf(&sg[i], &ic->commit_ids, sizeof ic->commit_ids);
-   memset(iv, 0x00, ivsize);
+   memset(crypt_iv, 0x00, ivsize);
 
-   skcipher_request_set_crypt(req, sg, sg, PAGE_SIZE * ic->journal_pages + sizeof ic->commit_ids, iv);
+   skcipher_request_set_crypt(req, sg, sg, PAGE_SIZE * ic->journal_pages + sizeof ic->commit_ids, crypt_iv);
init_completion(&comp.comp);
comp.in_flight = (atomic_t)ATOMIC_INIT(1);
if (do_crypt(true, req, &comp))
@@ -2659,10 +2671,22 @@ static int create_journal(struct dm_inte
crypto_free_skcipher(ic->journal_crypt);
ic->journal_crypt = NULL;
} else {
-   SKCIPHER_REQUEST_ON_STACK(req, ic->journal_crypt);
-   unsigned char iv[ivsize];
unsigned crypt_len = roundup(ivsize, blocksize);
 
req = skcipher_request_alloc(ic->journal_crypt, GFP_KERNEL);
+   if (!req) {
+   *error = "Could not allocate crypt request";
+   r = -ENOMEM;
+   goto bad;
+   }
+
+   crypt_iv = kmalloc(ivsize, GFP_KERNEL);
+   if (!crypt_iv) {
+   *error = "Could not allocate iv";
+   r = -ENOMEM;
+   goto bad;
+   }
+
crypt_data = kmalloc(crypt_len, GFP_KERNEL);
if (!crypt_data) {
*error = "Unable to allocate crypt data";
@@ -2670,8 +2694,6 @@ static int create_journal(struct dm_inte
goto b

Re: [dm-devel] [PATCH v5 12/19] dm: move dm-verity to generic async completion

2017-08-19 Thread Mikulas Patocka


On Mon, 14 Aug 2017, Gilad Ben-Yossef wrote:

> dm-verity is starting async. crypto ops and waiting for them to complete.
> Move it over to generic code doing the same.
> 
> This also fixes a possible data corruption bug created by the
> use of wait_for_completion_interruptible() without dealing
> correctly with an interrupt aborting the wait prior to the
> async op finishing.

What is the exact problem there? The interruptible sleep is called from a 
workqueue and workqueues have all signals blocked. Are signals unblocked 
for some reason there?

Should there be another patch for stable kernels that fixes the 
interruptible sleep?

Mikulas
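
For reference, a minimal sketch (assuming the verity_result layout quoted below, not the actual dm-verity code) of why the interruptible wait is suspect: if the wait returns -ERESTARTSYS, the function exits while the async op can still complete against the on-stack result; an uninterruptible wait avoids that window:

static int wait_for_crypto_op(struct verity_result *res, int ret)
{
	if (ret == -EINPROGRESS || ret == -EBUSY) {
		/* No -ERESTARTSYS path: we never return while the
		 * async op may still touch the on-stack res. */
		wait_for_completion(&res->completion);
		ret = res->err;
	}
	return ret;
}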

> Signed-off-by: Gilad Ben-Yossef 
> ---
>  drivers/md/dm-verity-target.c | 81 
> +++
>  drivers/md/dm-verity.h|  5 ---
>  2 files changed, 20 insertions(+), 66 deletions(-)
> 
> diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
> index 79f18d4..8df08a8 100644
> --- a/drivers/md/dm-verity-target.c
> +++ b/drivers/md/dm-verity-target.c
> @@ -92,74 +92,33 @@ static sector_t verity_position_at_level(struct dm_verity 
> *v, sector_t block,
>   return block >> (level * v->hash_per_block_bits);
>  }
>  
> -/*
> - * Callback function for asynchronous crypto API completion notification
> - */
> -static void verity_op_done(struct crypto_async_request *base, int err)
> -{
> - struct verity_result *res = (struct verity_result *)base->data;
> -
> - if (err == -EINPROGRESS)
> - return;
> -
> - res->err = err;
> - complete(>completion);
> -}
> -
> -/*
> - * Wait for async crypto API callback
> - */
> -static inline int verity_complete_op(struct verity_result *res, int ret)
> -{
> - switch (ret) {
> - case 0:
> - break;
> -
> - case -EINPROGRESS:
> - case -EBUSY:
> - ret = wait_for_completion_interruptible(>completion);
> - if (!ret)
> - ret = res->err;
> - reinit_completion(>completion);
> - break;
> -
> - default:
> - DMERR("verity_wait_hash: crypto op submission failed: %d", ret);
> - }
> -
> - if (unlikely(ret < 0))
> - DMERR("verity_wait_hash: crypto op failed: %d", ret);
> -
> - return ret;
> -}
> -
>  static int verity_hash_update(struct dm_verity *v, struct ahash_request *req,
>   const u8 *data, size_t len,
> - struct verity_result *res)
> + struct crypto_wait *wait)
>  {
>   struct scatterlist sg;
>  
>   sg_init_one(, data, len);
>   ahash_request_set_crypt(req, , NULL, len);
>  
> - return verity_complete_op(res, crypto_ahash_update(req));
> + return crypto_wait_req(crypto_ahash_update(req), wait);
>  }
>  
>  /*
>   * Wrapper for crypto_ahash_init, which handles verity salting.
>   */
>  static int verity_hash_init(struct dm_verity *v, struct ahash_request *req,
> - struct verity_result *res)
> + struct crypto_wait *wait)
>  {
>   int r;
>  
>   ahash_request_set_tfm(req, v->tfm);
>   ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP |
>   CRYPTO_TFM_REQ_MAY_BACKLOG,
> - verity_op_done, (void *)res);
> - init_completion(>completion);
> + crypto_req_done, (void *)wait);
> + crypto_init_wait(wait);
>  
> - r = verity_complete_op(res, crypto_ahash_init(req));
> + r = crypto_wait_req(crypto_ahash_init(req), wait);
>  
>   if (unlikely(r < 0)) {
>   DMERR("crypto_ahash_init failed: %d", r);
> @@ -167,18 +126,18 @@ static int verity_hash_init(struct dm_verity *v, struct 
> ahash_request *req,
>   }
>  
>   if (likely(v->salt_size && (v->version >= 1)))
> - r = verity_hash_update(v, req, v->salt, v->salt_size, res);
> + r = verity_hash_update(v, req, v->salt, v->salt_size, wait);
>  
>   return r;
>  }
>  
>  static int verity_hash_final(struct dm_verity *v, struct ahash_request *req,
> -  u8 *digest, struct verity_result *res)
> +  u8 *digest, struct crypto_wait *wait)
>  {
>   int r;
>  
>   if (unlikely(v->salt_size && (!v->version))) {
> - r = verity_hash_update(v, req, v->salt, v->salt_size, res);
> + r = verity_hash_update(v, req, v->salt, v->salt_size, wait);
>  
>   if (r < 0) {
>   DMERR("verity_hash_final failed updating salt: %d", r);
> @@ -187,7 +146,7 @@ static int verity_hash_final(struct dm_verity *v, struct 
> ahash_request *req,
>   }
>  
>   ahash_request_set_crypt(req, NULL, digest, 0);
> - r = verity_complete_op(res, crypto_ahash_final(req));
> + r = crypto_wait_req(crypto_ahash_final(req), wait);
>  out:
>   return r;
>  

Re: [PATCH v2] crypto/mcryptd: Check mcryptd algorithm compatibility

2016-12-06 Thread Mikulas Patocka
I think a proper fix would be to find the reason why mcryptd crashes and 
fix that reason (i.e., make it fail in mcryptd_create_hash rather than 
crash).

Testing flags could fix the bug for now, but it could backfire later 
when more algorithms are added.

Mikulas



On Mon, 5 Dec 2016, Tim Chen wrote:

> Algorithms not compatible with mcryptd could be spawned by mcryptd
> with a direct crypto_alloc_tfm invocation using a "mcryptd(alg)" name
> construct.  This causes mcryptd to crash the kernel if an arbitrary
> "alg" is incompatible and not intended to be used with mcryptd.  It is
> an issue if AF_ALG tries to spawn mcryptd(alg) to expose it externally.
> But such algorithms must be used internally and not be exposed.
> 
> We added a check to enforce that only internal algorithms are allowed
> with mcryptd at the time mcryptd is spawning an algorithm.
> 
> Link: http://marc.info/?l=linux-crypto-vger&m=148063683310477&w=2
> Cc: sta...@vger.kernel.org
> Reported-by: Mikulas Patocka <mpato...@redhat.com>
> Signed-off-by: Tim Chen <tim.c.c...@linux.intel.com>
> ---
>  crypto/mcryptd.c | 19 ---
>  1 file changed, 12 insertions(+), 7 deletions(-)
> 
> diff --git a/crypto/mcryptd.c b/crypto/mcryptd.c
> index 94ee44a..c207458 100644
> --- a/crypto/mcryptd.c
> +++ b/crypto/mcryptd.c
> @@ -254,18 +254,22 @@ static void *mcryptd_alloc_instance(struct crypto_alg 
> *alg, unsigned int head,
>   goto out;
>  }
>  
> -static inline void mcryptd_check_internal(struct rtattr **tb, u32 *type,
> +static inline bool mcryptd_check_internal(struct rtattr **tb, u32 *type,
> u32 *mask)
>  {
>   struct crypto_attr_type *algt;
>  
>   algt = crypto_get_attr_type(tb);
>   if (IS_ERR(algt))
> - return;
> - if ((algt->type & CRYPTO_ALG_INTERNAL))
> - *type |= CRYPTO_ALG_INTERNAL;
> - if ((algt->mask & CRYPTO_ALG_INTERNAL))
> - *mask |= CRYPTO_ALG_INTERNAL;
> + return false;
> +
> + *type |= algt->type & CRYPTO_ALG_INTERNAL;
> + *mask |= algt->mask & CRYPTO_ALG_INTERNAL;
> +
> + if (*type & *mask & CRYPTO_ALG_INTERNAL)
> + return true;
> + else
> + return false;
>  }
>  
>  static int mcryptd_hash_init_tfm(struct crypto_tfm *tfm)
> @@ -492,7 +496,8 @@ static int mcryptd_create_hash(struct crypto_template 
> *tmpl, struct rtattr **tb,
>   u32 mask = 0;
>   int err;
>  
> - mcryptd_check_internal(tb, , );
> + if (!mcryptd_check_internal(tb, , ))
> + return -EINVAL;
>  
>   halg = ahash_attr_alg(tb[1], type, mask);
>   if (IS_ERR(halg))
> -- 
> 2.5.5
> 


Crash in crypto mcryptd

2016-12-01 Thread Mikulas Patocka
Hi

There is a bug in mcryptd initialization.

This is a test module that tries various hash algorithms. When you load 
the module with "insmod test.ko 'alg=mcryptd(md5)'", the machine crashes.

Mikulas


#include <linux/module.h>
#include <linux/crypto.h>
#include <crypto/hash.h>

static char *alg = "md5";

module_param_named(alg, alg, charp, 0444);
MODULE_PARM_DESC(alg, "the algorith to test");

static bool sync = true;

module_param_named(sync, sync, bool, 0444);
MODULE_PARM_DESC(alg, "sync flag");

static int __init dump_init(void)
{
struct crypto_shash *h;
char key[4];
int r;
printk("testing algorithm '%s'\n", alg);
h = crypto_alloc_shash(alg, 0, sync ? CRYPTO_ALG_ASYNC : 0);
if (IS_ERR(h)) {
printk("error %d\n", (int)PTR_ERR(h));
return PTR_ERR(h);
}
printk("setting key\n");
r = crypto_shash_setkey(h, key, sizeof key);
if (r)
printk("setkey: %d\n", r);
crypto_free_shash(h);
printk("module loaded\n");
return 0;
}

static void __exit dump_exit(void)
{
printk("dump exit\n");
}

module_init(dump_init)
module_exit(dump_exit)
MODULE_LICENSE("GPL");


[898029.802035] BUG: unable to handle kernel NULL pointer dereference at (null)
[898029.806060] IP: [] md5_final+0xad/0x210 [md5]
[898029.808156] PGD 11a5d8067 [898029.809051] PUD 11a491067 
PMD 0 [898029.810280] 
[898029.810904] Oops: 0002 [#1] PREEMPT SMP
[898029.812239] Modules linked in: md5 testdump(O+) mcryptd uvesafb 
cfbfillrect cfbimgblt cn cfbcopyarea fbcon bitblit fbcon_rotate fbcon_ccw 
fbcon_ud fbcon_cw softcursor fb fbdev font ipv6 binfmt_misc mousedev 
af_packet psmouse pcspkr virtio_net virtio_balloon button ext4 crc16 jbd2 
mbcache dm_mod virtio_blk evdev virtio_pci virtio_ring virtio
[898029.817178] CPU: 9 PID: 187 Comm: kworker/9:1 Tainted: G   O
4.9.0-rc7+ #6
[898029.818066] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[898029.818732] Workqueue: crypto mcryptd_queue_worker [mcryptd]
[898029.819394] task: 88011aa2bd80 task.stack: 88011848
[898029.820077] RIP: 0010:[]  [] 
md5_final+0xad/0x210 [md5]
[898029.821050] RSP: 0018:880118483d48  EFLAGS: 00010286
[898029.821661] RAX: 04b2008fd98c1dd4 RBX: 880119cd7f28 RCX: 
980980e9
[898029.822464] RDX: 7e42f8ec980980e9 RSI: ef1c4f74 RDI: 
880119cd7f30
[898029.823293] RBP: 880118483d68 R08: 1b99d513 R09: 

[898029.824117] R10:  R11: b8b56373 R12: 
880119cd7f18
[898029.824944] R13:  R14: 880119cd7f38 R15: 
a01ee43c
[898029.825776] FS:  () GS:88011fd2() 
knlGS:
[898029.826712] CS:  0010 DS:  ES:  CR0: 80050033
[898029.827376] CR2:  CR3: 00011a6c9000 CR4: 
06a0
[898029.828204] Stack:
[898029.828452]  880119cd7f18 88011fd3bb00  
880119cd7e00
[898029.829351]  880118483da0 8119f281 880119cd7f18 
88011fd3bb00
[898029.830242]  88011fd3bae0 880119cd7e00 a01ee43c 
880119cd7ec8
[898029.831141] Call Trace:
[898029.831460]  [] ? crypto_shash_final+0x31/0xb0
[898029.832151]  [] ? mcryptd_queue_worker+0x1c/0x190 
[mcryptd]
[898029.832980]  [] ? shash_ahash_finup+0x73/0x80
[898029.833672]  [] ? __switch_to+0x27f/0x460
[898029.834305]  [] ? mcryptd_hash_digest+0x4f/0x80 
[mcryptd]
[898029.835125]  [] ? mcryptd_queue_worker+0x47/0x190 
[mcryptd]
[898029.835963]  [] ? process_one_work+0x1bf/0x3f0
[898029.836681]  [] ? worker_thread+0x42/0x4c0
[898029.837362]  [] ? process_one_work+0x3f0/0x3f0
[898029.838045]  [] ? process_one_work+0x3f0/0x3f0
[898029.838739]  [] ? kthread+0xb9/0xd0
[898029.839318]  [] ? kthread_park+0x70/0x70
[898029.839959]  [] ? ret_from_fork+0x25/0x30
[898029.840594] Code: 14 c5 00 00 00 00 48 c1 e8 1d 41 89 44 24 5c 41 89 
54 24 58 e8 45 ea 0e e1 49 8b 44 24 10 49 8b 54 24 18 48 8d 7b 08 48 83 e7 
f8 <49> 89 45 00 49 89 55 08 31 c0 49 c7 44 24 10 00 00 00 00 48 c7 
[898029.843633] RIP  [] md5_final+0xad/0x210 [md5]
[898029.844354]  RSP 
[898029.844769] CR2: 
[898029.845166] ---[ end trace 2ecde0bf66717337 ]---



[PATCH 2/2] arm aes: fix encryption of unaligned data

2014-07-25 Thread Mikulas Patocka
Fix the same alignment bug as in arm64 --- we need to pass the number of
residual unprocessed bytes as the last argument to blkcipher_walk_done.

Signed-off-by: Mikulas Patocka <mpato...@redhat.com>
Cc: sta...@vger.kernel.org  # 3.13+

Index: linux-3.16.0-0.rc6.git1.1.fc21.aarch64/arch/arm/crypto/aesbs-glue.c
===
--- linux-3.16.0-0.rc6.git1.1.fc21.aarch64.orig/arch/arm/crypto/aesbs-glue.c
+++ linux-3.16.0-0.rc6.git1.1.fc21.aarch64/arch/arm/crypto/aesbs-glue.c
@@ -137,7 +137,7 @@ static int aesbs_cbc_encrypt(struct blkc
dst += AES_BLOCK_SIZE;
} while (--blocks);
}
-   err = blkcipher_walk_done(desc, &walk, 0);
+   err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
}
return err;
 }
@@ -158,7 +158,7 @@ static int aesbs_cbc_decrypt(struct blkc
bsaes_cbc_encrypt(walk.src.virt.addr, walk.dst.virt.addr,
  walk.nbytes, &ctx->dec, walk.iv);
kernel_neon_end();
-   err = blkcipher_walk_done(desc, &walk, 0);
+   err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
}
while (walk.nbytes) {
u32 blocks = walk.nbytes / AES_BLOCK_SIZE;
@@ -182,7 +182,7 @@ static int aesbs_cbc_decrypt(struct blkc
dst += AES_BLOCK_SIZE;
src += AES_BLOCK_SIZE;
} while (--blocks);
-   err = blkcipher_walk_done(desc, &walk, 0);
+   err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
}
return err;
 }
@@ -268,7 +268,7 @@ static int aesbs_xts_encrypt(struct blkc
bsaes_xts_encrypt(walk.src.virt.addr, walk.dst.virt.addr,
  walk.nbytes, &ctx->enc, walk.iv);
kernel_neon_end();
-   err = blkcipher_walk_done(desc, &walk, 0);
+   err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
}
return err;
 }
@@ -292,7 +292,7 @@ static int aesbs_xts_decrypt(struct blkc
bsaes_xts_decrypt(walk.src.virt.addr, walk.dst.virt.addr,
  walk.nbytes, &ctx->dec, walk.iv);
kernel_neon_end();
-   err = blkcipher_walk_done(desc, &walk, 0);
+   err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
}
return err;
 }
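
The contract being fixed, sketched as a fragment (not compilable on its own): the third argument of blkcipher_walk_done() is the number of bytes NOT yet processed, so the unaligned tail must be reported back rather than 0:

	while (walk.nbytes) {
		u32 blocks = walk.nbytes / AES_BLOCK_SIZE;

		/* ... process 'blocks' full AES blocks ... */

		/* Hand the partial trailing block back to the walker
		 * instead of silently discarding it: */
		err = blkcipher_walk_done(desc, &walk,
					  walk.nbytes % AES_BLOCK_SIZE);
	}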


Re: [PATCH] crypto/arc4: convert this stream cipher into a block cipher

2010-02-22 Thread Mikulas Patocka


On Tue, 16 Feb 2010, Herbert Xu wrote:

 On Fri, Feb 12, 2010 at 09:42:28AM +0100, Sebastian Andrzej Siewior wrote:
 
  -static void arc4_crypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
  +static void arc4_ivsetup(struct arc4_ctx *ctx, u8 *iv)
   {
  -   struct arc4_ctx *ctx = crypto_tfm_ctx(tfm);
   +   if (unlikely(!ctx->new_key))
   +   return;
   +   memcpy(iv, ctx->iv, sizeof(ctx->iv));
   +   ctx->new_key = 0;
 
 Sorry, but this doesn't work.
 
 A ctx is supposed to be reentrant.  That is, while one thread
 is working away with a given ctx I should be able to use that
 same ctx in a different thread without them clobbering each
 other.
 
 So that means (in general) you must not modify the ctx in any
 function other than setkey.
 
 This also brings up the bigger question of how we transition to
 this new arc4.  I don't think we need to maintain exactly the
 same behaviour as the existing ecb(arc4).
 
 So what we could do is simply add a new blkcipher arc4, alongside
 the existing cipher arc4.  Then we can convert the existing users
 across, and finally remove the old arc4.

arc4 can't be used as a block cipher --- see this paper 
http://www.wisdom.weizmann.ac.il/~itsik/RC4/Papers/Rc4_ksa.ps , which 
shows that initialization vectors are unreliable with RC4: if you use 
(unknown key concatenated with known IV) or (known IV concatenated with 
unknown key) as an RC4 key, the RC4 state can be exposed and the cipher 
is broken.

If you want to avoid this attack, you'd have to do the following for 
each sector (see the sketch below):
- take the secret key and the IV and hash them cryptographically
- use this hash as the RC4 key (to avoid the related-key attack above)
- discard the first 256 bytes of the RC4 output (to avoid another 
problem with RC4 --- the beginning of the output is biased)
- use the resulting RC4 state to encrypt a single 512-byte sector
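
A userspace sketch of those four steps (rc4_* is the textbook algorithm; mix_key_iv() is a deliberately NON-cryptographic stand-in for the hash in step 1, included only so the sketch is self-contained):

#include <stdint.h>
#include <stddef.h>

struct rc4 { uint8_t S[256], i, j; };

static void rc4_setkey(struct rc4 *r, const uint8_t *key, size_t len)
{
	unsigned i, j = 0;
	uint8_t t;

	for (i = 0; i < 256; i++)
		r->S[i] = i;
	for (i = 0; i < 256; i++) {
		j = (j + r->S[i] + key[i % len]) & 0xff;
		t = r->S[i]; r->S[i] = r->S[j]; r->S[j] = t;
	}
	r->i = r->j = 0;
}

static uint8_t rc4_byte(struct rc4 *r)
{
	uint8_t t;

	r->i += 1;
	r->j += r->S[r->i];
	t = r->S[r->i]; r->S[r->i] = r->S[r->j]; r->S[r->j] = t;
	return r->S[(uint8_t)(r->S[r->i] + r->S[r->j])];
}

/* Placeholder mixer, NOT a real hash --- substitute e.g. SHA-256. */
static void mix_key_iv(const uint8_t *key, size_t len, uint64_t sector,
		       uint8_t out[32])
{
	size_t n;

	for (n = 0; n < 32; n++)
		out[n] = key[n % len] ^ (uint8_t)(sector >> (8 * (n & 7))) ^ (uint8_t)n;
}

static void rc4_crypt_sector(const uint8_t *key, size_t len,
			     uint64_t sector, uint8_t buf[512])
{
	uint8_t skey[32];
	struct rc4 r;
	int n;

	mix_key_iv(key, len, sector, skey);	/* 1: hash key+IV        */
	rc4_setkey(&r, skey, sizeof(skey));	/* 2: fresh key schedule */
	for (n = 0; n < 256; n++)		/* 3: drop biased prefix */
		(void)rc4_byte(&r);
	for (n = 0; n < 512; n++)		/* 4: XOR one sector     */
		buf[n] ^= rc4_byte(&r);
}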

Now, the question is whether RC4, with all this dancing around its 
problems, would still be faster than AES. I doubt it.

I think there is not much sense in writing code to allow arc4 to be used 
for disk encryption. You should just ban it.

Mikulas


Re: [PATCH] dm-crypt: disable block encryption with arc4

2010-02-09 Thread Mikulas Patocka
On Tue, 9 Feb 2010, Herbert Xu wrote:

 Mikulas Patocka <mpato...@redhat.com> wrote:
  
  You should rather add a flag CRYPTO_ALG_CHANGES_STATE to determine that a 
  cipher can't be used to encrypt disks.
 
 No, please see my reply in the previous thread.  What we should
 do is fix arc4.  I just haven't got around to doing it yet.

What is the fix for arc4? Copy the internal state after a key schedule and 
restore it with every encryption?

 As to blacklisting algorithms not suitable for disk encryption,
 that is up to the dm-crypt maintainers to decide.
 
 Cheers,

I think blacklisting arc4 is better, because it provides a fix now. 
Otherwise, people will just keep arguing about what the clean solution 
is and nothing gets done.

Mikulas


Re: [PATCH] dm-crypt: disable block encryption with arc4

2010-01-26 Thread Mikulas Patocka
  This patch disables the use of arc4 on block devices.
 
  arc4 again. it is simply not a block cipher:-)
 
  This should be solved inside cryptoAPI and not blacklist it in dm-crypt,
  see that thread
  http://article.gmane.org/gmane.linux.kernel.cryptoapi/3441
  
  I somehow remember Herbert saying to test for block size > 1. Wouldn't
  this be acceptable to block all stream ciphers in one go?
 
 yes, I think it is better.
 (...and I just forgot to add that test to dm-crypt after that suggestion.)
 
 Milan

Hmm, there is salsa20 that has block size 1, larger initialization 
vectors, and can be used to encrypt disks (although salsa20 doesn't 
currently work with dm-crypt, because it doesn't accept ecb(), cbc(), 
etc. chaining modes --- but if you remove the chaining mode manually, it 
works).

You should rather add a flag CRYPTO_ALG_CHANGES_STATE to determine that a 
cipher can't be used to encrypt disks.

Mikulas


Re: [PATCH] dm-crypt: disable block encryption with arc4

2010-01-26 Thread Mikulas Patocka
On Tue, 26 Jan 2010, Sebastian Andrzej Siewior wrote:

 * Mikulas Patocka | 2010-01-26 07:27:18 [-0500]:
 
  yes, I think it is better.
  (...and I just forgot to add that test to dm-crypt after that suggestion.)
  
  Milan
 
 Hmm, there is salsa20 that has block size 1, larger initialization 
 vectors, and can be used to encrypt disks (although salsa20 doesn't 
 currently work with dm-crypt, because it doesn't accept ecb(), cbc(), 
 etc. chaining modes --- but if you remove the chaining mode manually, it 
 works).
 
 You should rather add a flag CRYPTO_ALG_CHANGES_STATE to determine that a 
 cipher can't be used to encrypt disks.
 
 Just because it will work does not make it a good idea.
 
 SALSA20 is a stream cipher, not a block cipher.
 Block ciphers are used to encrypt data.
 Stream ciphers are used to create one-time pads, sets of encryption
 keys, ...
 There are block modes like CTR which can turn a block cipher into a
 stream cipher. Those should not be used for disk encryption either.

Salsa20 is unsuitable for disk encryption in most cases. It would be 
suitable only if we knew that the attacker can obtain the image of the 
encrypted device at most once --- it is OK as protection against laptop 
theft (that happens just once), but not against a support technician 
spying on your data (he can read it multiple times if you file multiple 
support requests).

Anyway, what I wanted to say is that a "block_size == 1" test is no less 
hacky than a !strcmp(cipher, "arc4") test.

Mikulas


[PATCH] dm-crypt: disable block encryption with arc4

2010-01-25 Thread Mikulas Patocka
Hi

When using arc4 to encrypt a block device, the resulting device is 
unreliable: it reads garbage. That's because arc4 is a stream cipher --- 
writing advances its internal state, so a later attempt to decrypt the 
same sector uses a different state.

This patch disables the use of arc4 on block devices.

A question for the crypto maintainers: is there some general method to 
determine that a cipher is a stream cipher --- that it changes its state 
as it progresses and is thus unusable for block devices? I haven't found 
any flag for that.

Mikulas

---

Disable arc4 for encrypting block device

Arc4 is a stream cipher: it is initialized once with a key, it outputs a
stream of bytes (which are XORed with the data to be encrypted), and it
changes its internal state as it goes.

Because the cipher changes its internal state, it is not usable for
encrypting block devices --- once someone encrypts a sector of data, the
internal state changes, and further attempts to decrypt the same block of
data use the new internal state. Thus, the encrypted device returns
garbage.

This patch disables the use of arc4 for dm-crypt.

If we wanted to use arc4, we would have to set up the key before
encrypting each sector. That is slow. Because arc4 works by XORing the
keystream with the data, it is not suitable for encrypting block devices
anyway: if an attacker obtains two images of the same block device from
two different times, he can XOR them with each other, eliminating the
cipher and obtaining the XOR of the two plaintexts.
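
A tiny self-contained demonstration of that XOR attack: with the keystream reused, c1 ^ c2 equals p1 ^ p2 byte for byte, so the cipher drops out entirely (the keystream bytes here are arbitrary):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint8_t ks[8] = { 0x3a, 0x91, 0x07, 0xc4, 0x55, 0xee, 0x12, 0x88 };
	uint8_t p1[8] = "SECRET1";	/* old image of the sector */
	uint8_t p2[8] = "SECRET2";	/* new image of the sector */
	int i;

	for (i = 0; i < 8; i++) {
		uint8_t c1 = p1[i] ^ ks[i];
		uint8_t c2 = p2[i] ^ ks[i];
		/* The keystream cancels out: */
		if ((c1 ^ c2) != (p1[i] ^ p2[i]))
			return 1;
	}
	printf("c1 ^ c2 == p1 ^ p2 for every byte: cipher eliminated\n");
	return 0;
}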

Signed-off-by: Mikulas Patocka mpato...@redhat.com

---
 drivers/md/dm-crypt.c |5 +
 1 file changed, 5 insertions(+)

Index: linux-2.6.32-devel/drivers/md/dm-crypt.c
===
--- linux-2.6.32-devel.orig/drivers/md/dm-crypt.c   2010-01-25 
18:55:14.0 +0100
+++ linux-2.6.32-devel/drivers/md/dm-crypt.c2010-01-25 18:57:02.0 
+0100
@@ -1035,6 +1035,11 @@ static int crypt_ctr(struct dm_target *t
goto bad_cipher;
}
 
+   if (!strcmp(cc->cipher, "arc4")) {
+   ti->error = "Stream cipher arc4 not supported";
+   goto bad_cipher;
+   }
+
if (snprintf(cc->cipher, CRYPTO_MAX_ALG_NAME, "%s(%s)",
 chainmode, cipher) >= CRYPTO_MAX_ALG_NAME) {
ti->error = "Chain mode + cipher name is too long";


Re: [PATCH] dm-crypt: disable block encryption with arc4

2010-01-25 Thread Mikulas Patocka
BTW, I created a script that tests all possible ciphers, key sizes, 
chaining modes and IV modes for dm-crypt. arc4 is the only one that 
fails. You can add it to your regression test suite if you want.

Mikulas

#!/bin/sh
dmsetup remove cr0
set -e
cipher=$@
cryptsetup -d key $cipher create cr0 /dev/ram0
mkfs.ext3 -b 1024 /dev/mapper/cr0
fsck.ext2 -fn /dev/mapper/cr0
dmsetup remove cr0
cryptsetup -d key $cipher create cr0 /dev/ram0
fsck.ext2 -fn /dev/mapper/cr0
dmsetup remove cr0
#!/bin/sh
set -e
log
# arc4 - doesn't work
for cipher in aes anubis blowfish camellia cast5 cast6 des des3_ede fcrypt khazad seed serpent tnepres tea xtea xeta; do
	if [ -n "$1" -a "$cipher" != "$1" ]; then
		continue
	fi
	if [ "$cipher" = aes ]; then
		keysizes="128 192 256"
		blocksize=128
	elif [ "$cipher" = anubis ]; then
		keysizes="`seq 128 32 320`"
		blocksize=128
	elif [ "$cipher" = arc4 ]; then
		keysizes="`seq 8 256 2048` 2048"
		blocksize=1
	elif [ "$cipher" = blowfish ]; then
		keysizes="`seq 32 8 448`"
		blocksize=64
	elif [ "$cipher" = camellia ]; then
		keysizes="128 192 256"
		blocksize=128
	elif [ "$cipher" = cast5 ]; then
		keysizes="`seq 40 8 128`"
		blocksize=64
	elif [ "$cipher" = cast6 ]; then
		keysizes="`seq 128 32 256`"
		blocksize=128
	elif [ "$cipher" = des ]; then
		keysizes=64
		blocksize=64
	elif [ "$cipher" = des3_ede ]; then
		keysizes=192
		blocksize=64
	elif [ "$cipher" = fcrypt ]; then
		keysizes=64
		blocksize=64
	elif [ "$cipher" = khazad ]; then
		keysizes=128
		blocksize=64
	elif [ "$cipher" = seed ]; then
		keysizes=128
		blocksize=128
	elif [ "$cipher" = serpent -o "$cipher" = tnepres ]; then
		keysizes="`seq 0 8 256`"
		blocksize=128
	elif [ "$cipher" = tea -o "$cipher" = xtea -o "$cipher" = xeta ]; then
		keysizes=128
		blocksize=64
	elif [ "$cipher" = twofish ]; then
		keysizes="128 192 256"
		blocksize=128
	else
		echo "UNKNOWN CIPHER"
		exit 1
	fi
	for keysize in $keysizes; do
#		if ! echo "$keysizes" | grep -qw "$keysize"; then continue; fi
		for chaining in cbc ctr ecb lrw pcbc xts; do
			chaining_keysize=$keysize
			if [ "$chaining" = xts ]; then
				chaining_keysize=`expr $chaining_keysize \* 2 | cat`;
				if [ "$blocksize" != 128 ]; then continue; fi
			fi
			if [ "$chaining" = lrw ]; then
				chaining_keysize=`expr $chaining_keysize + $blocksize | cat`
				if [ "$blocksize" != 128 ]; then continue; fi
			fi
			for iv in null plain benbi essiv:md4 essiv:md5 essiv:michael_mic essiv:rmd128 essiv:rmd160 essiv:rmd256 essiv:rmd320 essiv:sha1 essiv:sha224 essiv:sha256 essiv:sha384 essiv:sha512 essiv:tgr128 essiv:tgr160 essiv:tgr192 essiv:wp256 essiv:wp384 essiv:wp512; do
				if [ "$chaining" = ecb ] && echo "$iv" | grep -q "^essiv"; then continue; fi
				if [ "$iv" = essiv:md4 ] && ! echo "$keysizes" | grep -qw 128; then continue; fi
				if [ "$iv" = essiv:md5 ] && ! echo "$keysizes" | grep -qw 128; then continue; fi
				if [ "$iv" = essiv:michael_mic ] && ! echo "$keysizes" | grep -qw 64; then continue; fi
				if [ "$iv" = essiv:rmd128 ] && ! echo "$keysizes" | grep -qw 128; then continue; fi
				if [ "$iv" = essiv:rmd160 ] && ! echo "$keysizes" | grep -qw 160; then continue; fi
				if [ "$iv" = essiv:rmd256 ] && ! echo "$keysizes" | grep -qw 256; then continue; fi
				if [ "$iv" = essiv:rmd320 ] && ! echo "$keysizes" | grep -qw 320; then continue; fi
				if [ "$iv" = essiv:sha1 ] && ! echo "$keysizes" | grep -qw 160; then continue; fi
				if [ "$iv" = essiv:sha224 ] && ! echo "$keysizes" | grep -qw 224; then continue; fi
				if [ "$iv" = essiv:sha256 ] && ! echo "$keysizes" | grep -qw 256; then continue; fi
				if [ "$iv" = essiv:sha384 ] && ! echo "$keysizes" | grep -qw 384; then continue; fi
				if [ "$iv" = essiv:sha512 ] && ! echo "$keysizes" | grep -qw 512; then continue; fi
				if [ "$iv" = essiv:tgr128 ] && ! echo "$keysizes" | grep -qw 128; then continue; fi
 

Re: [dm-devel] Desynchronizing dm-raid1

2008-05-23 Thread Mikulas Patocka

On Thu, May 22, 2008 at 08:32:45AM -0400, Mikulas Patocka wrote:


There may be external modules.


Sorry but we don't support external modules.  They should be merged
upstream rather than distributed in the wild.

Cheers,


If you want to negate the meaning of the flag, then you have to write it 
yourself. I, as a non-developer of crypto code, can prove that on a given 
path the input data is read only once --- but I can't prove that on all 
paths and all possible chaining modes of algorithms the data is read 
once, because I don't know about all of them.


Mikulas


Re: [dm-devel] Desynchronizing dm-raid1

2008-05-22 Thread Mikulas Patocka

All the ciphers comply, so the bug is only a theoretical issue (but I
didn't check the assembler versions --- they should be checked by the
people who wrote them; assembler is a write-only language).


Since every current algorithm sets the flag could you invert
its sense? Sorry to have to do this to you :)

Thanks,


There may be external modules.

If you don't set the flag when it should be set, nothing happens (just a 
slight performance drop); if you set the flag when it shouldn't be set, 
you get data corruption. So this meaning of the flag is the safest: 
not-yet-reviewed algorithms keep the flag at 0 and data corruption is 
prevented.
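
Sketched as code (the flag name CRYPTO_ALG_READS_ONCE and its bit value are hypothetical; the thread never settles on one): the bit is strictly opt-in, so an unreviewed algorithm defaults to the safe copying path:

#define CRYPTO_ALG_READS_ONCE 0x80000000	/* hypothetical bit */

static inline int can_skip_copy(unsigned int cra_flags)
{
	/* Opt-in: only audited algorithms set the bit; everything else
	 * takes the copy path, trading a little speed for safety. */
	return cra_flags & CRYPTO_ALG_READS_ONCE;
}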


Mikulas


Re: [dm-devel] Desynchronizing dm-raid1

2008-05-21 Thread Mikulas Patocka



On Wed, 14 May 2008, Herbert Xu wrote:


On Tue, May 13, 2008 at 04:35:03PM -0400, Mikulas Patocka wrote:


And where would you propose to place this bit?

One possibility would be struct crypto_tfm->crt_flags
Another possibility is struct crypto_alg->cra_flags


The latter definitely because this is an algorithm property.


Can chaining mode change the value of the flag? (I presume that yes)


If you mean templates like CBC then it depends.  You should
set it to zero by default for safety but most of them should
be able to turn it on once audited.

If it turns out that the majority of algorithms support this,
you could even decide to only select those algorithms that do.
Suppose your bit is

CRYPTO_ALG_FOO

then you could do

crypto_alloc_blkcipher(name, CRYPTO_ALG_FOO, CRYPTO_ALG_FOO)

to demand only those algorithms that comply.

Cheers,


Hi

Here I send the patches. The first one copies data in dm-crypt 
unconditionally. The second one adds a flag to complying algorithms. The 
third one skips the copy for complying algorithms. The fourth one removes 
a useless increment in arc4 that I found while reviewing the ciphers.


All the ciphers comply, so the bug is only a theoretical issue (but I 
didn't check the assembler versions --- they should be checked by the 
people who wrote them; assembler is a write-only language).


Please review my changes to the crypto code; I am not a crypto developer 
and I do not understand it as well as the people maintaining it.


Mikulas

Copy data when writing to the device.

This patch fixes damaged blocks on encrypted device when the computer crashes.

The Linux IO architecture allows data to be modified while it is being
written.

While we are processing write requests, the data may change, and it is
expected that each byte written to the device is either the old byte or
the new byte.

Some crypto functions may read the input buffer more than once, so they
produce garbage if the buffer changes under them. When this garbage is
read again and decrypted, it may damage all bytes in the block. This
damage is visible only if the computer crashes; if it didn't crash, the
memory manager writes the correct buffer or page contents sometime later.

So we first copy the data to a temporary buffer with memcpy and then
encrypt it in place.

Signed-off-by: Mikulas Patocka [EMAIL PROTECTED]

---
 drivers/md/dm-crypt.c |   14 --
 1 file changed, 12 insertions(+), 2 deletions(-)

Index: linux-2.6.25.3/drivers/md/dm-crypt.c
===
--- linux-2.6.25.3.orig/drivers/md/dm-crypt.c   2008-05-14 18:41:10.0 
+0200
+++ linux-2.6.25.3/drivers/md/dm-crypt.c2008-05-14 18:41:11.0 
+0200
@@ -361,8 +361,18 @@ static int crypt_convert_block(struct cr
 			       crypto_ablkcipher_alignmask(cc->tfm) + 1);
 
 	sg_init_table(&dmreq->sg_in, 1);
-	sg_set_page(&dmreq->sg_in, bv_in->bv_page, 1 << SECTOR_SHIFT,
-		    bv_in->bv_offset + ctx->offset_in);
+	if (bio_data_dir(ctx->bio_in) == WRITE) {
+		char *page_in = kmap_atomic(bv_in->bv_page, KM_USER0);
+		char *page_out = kmap_atomic(bv_out->bv_page, KM_USER1);
+		memcpy(page_out + bv_out->bv_offset + ctx->offset_out, page_in + bv_in->bv_offset + ctx->offset_in, 1 << SECTOR_SHIFT);
+		kunmap_atomic(page_in, KM_USER0);
+		kunmap_atomic(page_out, KM_USER1);
+
+		sg_set_page(&dmreq->sg_in, bv_out->bv_page, 1 << SECTOR_SHIFT,
+			    bv_out->bv_offset + ctx->offset_out);
+	} else
+		sg_set_page(&dmreq->sg_in, bv_in->bv_page, 1 << SECTOR_SHIFT,
+			    bv_in->bv_offset + ctx->offset_in);
 
 	sg_init_table(&dmreq->sg_out, 1);
 	sg_set_page(&dmreq->sg_out, bv_out->bv_page, 1 << SECTOR_SHIFT,
Add a flag that indicates that the algorithm reads the input data only
once when encrypting.

It will be used by dm-crypt to skip the copy operation when it is not
needed.

Signed-off-by: Mikulas Patocka [EMAIL PROTECTED]

---
 crypto/aes_generic.c |2 +-
 crypto/anubis.c  |2 +-
 crypto/arc4.c|2 +-
 crypto/blkcipher.c   |2 +-
 crypto/blowfish.c|2 +-
 crypto/camellia.c|2 +-
 crypto/cast5.c   |2 +-
 crypto/cast6.c   |2 +-
 crypto/cbc.c |2 +-
 crypto/cryptd.c  |2 +-
 crypto/crypto_null.c |2 +-
 crypto/ctr.c |2 +-
 crypto/des_generic.c |4 ++--
 crypto/ecb.c |2 +-
 crypto/fcrypt.c  |2 +-
 crypto/khazad.c  |2 +-
 crypto/lrw.c |2 +-
 crypto/pcbc.c|2 +-
 crypto/salsa20_generic.c |2 +-
 crypto/seed.c|2 +-
 crypto/serpent.c |4 ++--
 crypto/tea.c |4 ++--
 crypto/twofish.c |2 +-
 crypto/xts.c |2 +-
 include/linux/crypto.h   |9 +
 25 files changed

Re: [dm-devel] Desynchronizing dm-raid1

2008-05-13 Thread Mikulas Patocka



On Tue, 13 May 2008, Herbert Xu wrote:


On Mon, May 12, 2008 at 11:28:44PM -0400, Mikulas Patocka wrote:


Or do you think it would be useless (do all major disk-encryption
algorithms read the input buffer multiple times?) or that it would create
too much code complexity?


I'm open to this approach.  As long as you do the work and come up
with the patch that is :)

Cheers,


And where would you propose to place this bit?

One possibility would be struct crypto_tfm->crt_flags
Another possibility is struct crypto_alg->cra_flags

Can chaining mode change the value of the flag? (I presume that yes)

So make a bit in struct crypto_alg, and propagate it to 
crypto_alg->cra_flags of the chaining modes that read input data only once?


I'm not familiar with the Linux crypto API, so the location of the flag 
should be selected by someone who knows it well.


Here I'm sending the patch for dm-crypt. To optimize it further, you 
should add the test after the bio_data_dir(ctx->bio_in) == WRITE test --- 
i.e., if the algorithm is safe to read input only once, the smaller and 
faster branch can be executed.


Mikulas

---

Copy data when writing to the device.

This patch fixes damaged blocks on encrypted device when the computer crashes.

The Linux IO architecture allows data to be modified while it is being
written.

While we are processing write requests, the data may change, and it is
expected that each byte written to the device is either the old byte or
the new byte.

Some crypto functions may read the input buffer more than once, so they
produce garbage if the buffer changes under them. When this garbage is
read again and decrypted, it may damage all bytes in the block. This
damage is visible only if the computer crashes; if it didn't crash, the
memory manager writes the correct buffer or page contents sometime later.

So we first copy the data to a temporary buffer with memcpy and then
encrypt it in place.

Mikulas Patocka [EMAIL PROTECTED]

---
 drivers/md/dm-crypt.c |   14 --
 1 file changed, 12 insertions(+), 2 deletions(-)

Index: linux-2.6.25.3/drivers/md/dm-crypt.c
===
--- linux-2.6.25.3.orig/drivers/md/dm-crypt.c   2008-05-13 21:48:32.0 
+0200
+++ linux-2.6.25.3/drivers/md/dm-crypt.c2008-05-13 22:14:12.0 
+0200
@@ -360,8 +360,18 @@ static int crypt_convert_block(struct cr
 			       crypto_ablkcipher_alignmask(cc->tfm) + 1);
 
 	sg_init_table(&dmreq->sg_in, 1);
-	sg_set_page(&dmreq->sg_in, bv_in->bv_page, 1 << SECTOR_SHIFT,
-		    bv_in->bv_offset + ctx->offset_in);
+	if (bio_data_dir(ctx->bio_in) == WRITE) {
+		char *page_in = kmap_atomic(bv_in->bv_page, KM_USER0);
+		char *page_out = kmap_atomic(bv_out->bv_page, KM_USER1);
+		memcpy(page_out + bv_out->bv_offset + ctx->offset_out, page_in + bv_in->bv_offset + ctx->offset_in, 1 << SECTOR_SHIFT);
+		kunmap_atomic(page_in, KM_USER0);
+		kunmap_atomic(page_out, KM_USER1);
+
+		sg_set_page(&dmreq->sg_in, bv_out->bv_page, 1 << SECTOR_SHIFT,
+			    bv_out->bv_offset + ctx->offset_out);
+	} else
+		sg_set_page(&dmreq->sg_in, bv_in->bv_page, 1 << SECTOR_SHIFT,
+			    bv_in->bv_offset + ctx->offset_in);
 
 	sg_init_table(&dmreq->sg_out, 1);
 	sg_set_page(&dmreq->sg_out, bv_out->bv_page, 1 << SECTOR_SHIFT,


Re: [dm-devel] Desynchronizing dm-raid1

2008-05-06 Thread Mikulas Patocka

On Tue, 6 May 2008, Herbert Xu wrote:


Mikulas Patocka [EMAIL PROTECTED] wrote:


BTW: is it guaranteed for crypto functions that they read the source data
only once?


There is no such guarantee.

Cheers,


So we'll have to copy the sector to temporary space before encrypting it.

I'll look at it.

Mikulas
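
The eventual shape of that fix, as a minimal userspace-flavoured sketch (encrypt_in_place is a hypothetical helper standing in for the cipher): snapshot the sector first, then encrypt the snapshot, so the cipher never re-reads bytes the writer may still be changing:

#include <string.h>

#define SECTOR_SIZE 512

static void encrypt_sector_stable(unsigned char *dst,
				  const unsigned char *src,
				  void (*encrypt_in_place)(unsigned char *,
							   unsigned int))
{
	memcpy(dst, src, SECTOR_SIZE);		/* stable snapshot       */
	encrypt_in_place(dst, SECTOR_SIZE);	/* src may keep changing */
}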


Re: [dm-devel] Desynchronizing dm-raid1

2008-05-05 Thread Mikulas Patocka



On Mon, 7 Apr 2008, Martin K. Petersen wrote:


"Malahal" == malahal [EMAIL PROTECTED] writes:


[I sent this last week but it never made it to the list]

Malahal Very few drivers require it, so how about an interface to
Malahal lock the pages of an I/O available to drivers. Only needed
Malahal RAID drivers would lock the I/O while it is in progress and
Malahal they only pay the performance penalty.  mmap pages are a bit
Malahal tricky. They need to go into read-only mode when an I/O is in
Malahal progress. I know this would likely be rejected too!!!

I have exactly the same problem with the data integrity stuff I'm
working on.

Essentially a checksum gets generated when a bio is submitted, and
both the I/O controller and the disk drive verify the checksum.

With ext2 in particular I often experience that the page (usually
containing directory metadata) has been modified before the controller
does the DMA.  And the I/O will then be rejected by the controller or
drive because the checksum doesn't match the data.

So this problem is not specific to DM/MD...

--
Martin K. Petersen  Oracle Linux Engineering


BTW: is it guaranteed for crypto functions that they read the source data 
only once?

I.e., if you encrypt a block while another CPU modifies it, and then 
decrypt it, is it guaranteed that each byte of the result will be either 
the old byte or the new byte of the original data?

If not, we have a subtle case of disk corruption here too (imagine 
updating, for example, the atime field in an inode while the block is 
being written --- the crypto interface would then trash the inode block).
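
A toy userspace illustration of that corruption mode (mix() is an arbitrary stand-in for a cipher round that reads its input twice): if the writer updates the byte between the two reads, the result encrypts neither the old nor the new value:

#include <stdio.h>
#include <stdint.h>

static uint8_t mix(uint8_t a, uint8_t b, uint8_t key)
{
	return (uint8_t)((a ^ key) + b);	/* stand-in for a real round */
}

int main(void)
{
	uint8_t key = 0x5a, oldb = 0x10, newb = 0x20;
	uint8_t clean_old = mix(oldb, oldb, key);
	uint8_t clean_new = mix(newb, newb, key);
	/* Torn case: the first read sees the old byte, the writer
	 * updates it (e.g. the atime field), the second read sees
	 * the new one. */
	uint8_t torn = mix(oldb, newb, key);

	printf("enc(old)=%02x enc(new)=%02x torn=%02x\n",
	       clean_old, clean_new, torn);
	return 0;	/* torn matches neither clean value */
}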


Mikulas