[PATCH v5 REPOST 6/6] hw_random: don't init list element we're about to add to list.

2014-12-08 Thread Amos Kong
From: Rusty Russell ru...@rustcorp.com.au

Another interesting anti-pattern.
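
A minimal userspace sketch (hypothetical; it re-implements the two list
primitives rather than using the kernel's) of why the init is dead code:
list_add_tail() unconditionally rewrites both of the element's pointers,
so whatever INIT_LIST_HEAD() stored is overwritten before it is ever read.

#include <assert.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

/* same effect as the kernel helpers of the same name */
static void INIT_LIST_HEAD(struct list_head *h) { h->next = h; h->prev = h; }

static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->prev = head->prev;		/* clobbers new->prev */
	new->next = head;		/* clobbers new->next */
	head->prev->next = new;
	head->prev = new;
}

int main(void)
{
	struct list_head list, elem = { (void *)1, (void *)2 };	/* garbage */

	INIT_LIST_HEAD(&list);
	list_add_tail(&elem, &list);	/* elem's old contents never read */
	assert(list.next == &elem && elem.next == &list && elem.prev == &list);
	puts("linked fine without initializing elem first");
	return 0;
}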

Signed-off-by: Rusty Russell ru...@rustcorp.com.au
---
 drivers/char/hw_random/core.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
index a9286bf..4d13ac5 100644
--- a/drivers/char/hw_random/core.c
+++ b/drivers/char/hw_random/core.c
@@ -489,7 +489,6 @@ int hwrng_register(struct hwrng *rng)
goto out_unlock;
}
}
-   INIT_LIST_HEAD(&rng->list);
list_add_tail(&rng->list, &rng_list);
 
if (old_rng && !rng->init) {
-- 
1.9.3



[PATCH v5 REPOST 1/6] hw_random: place mutex around read functions and buffers.

2014-12-08 Thread Amos Kong
From: Rusty Russell ru...@rustcorp.com.au

There's currently a big lock around everything, and it means that we
can't query sysfs (e.g. /sys/devices/virtual/misc/hw_random/rng_current)
while the rng is reading.  This is a real problem when the rng is slow,
or blocked (e.g. virtio_rng with qemu's default /dev/random backend).

This patch doesn't fix that by itself (it leaves the current lock
untouched); it just adds a lock to protect the read function and the
static buffers, in preparation for the transition.
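
Where this is headed, in miniature: once reads serialize only on the new
lock, a query of the device state no longer waits behind a slow read.  A
pthreads sketch of that end state (hypothetical userspace stand-in, not
the driver code):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t rng_mutex = PTHREAD_MUTEX_INITIALIZER;     /* rng list/current */
static pthread_mutex_t reading_mutex = PTHREAD_MUTEX_INITIALIZER; /* read fn + buffers */
static unsigned char rng_buffer[32];
static const char *rng_current = "virtio_rng";

static void *dev_read(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&reading_mutex);
	sleep(2);			/* stands in for a blocked rng->read() */
	rng_buffer[0] = 0x5a;
	pthread_mutex_unlock(&reading_mutex);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, dev_read, NULL);
	usleep(100 * 1000);		/* let the read start and block */

	/* the sysfs-style query needs only rng_mutex, so it returns
	   immediately instead of waiting ~2s behind the read */
	pthread_mutex_lock(&rng_mutex);
	printf("rng_current: %s\n", rng_current);
	pthread_mutex_unlock(&rng_mutex);

	pthread_join(t, NULL);
	return 0;
}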

Signed-off-by: Rusty Russell ru...@rustcorp.com.au
---
 drivers/char/hw_random/core.c | 20 +---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
index aa30a25..b1b6042 100644
--- a/drivers/char/hw_random/core.c
+++ b/drivers/char/hw_random/core.c
@@ -53,7 +53,10 @@
 static struct hwrng *current_rng;
 static struct task_struct *hwrng_fill;
 static LIST_HEAD(rng_list);
+/* Protects rng_list and current_rng */
 static DEFINE_MUTEX(rng_mutex);
+/* Protects rng read functions, data_avail, rng_buffer and rng_fillbuf */
+static DEFINE_MUTEX(reading_mutex);
 static int data_avail;
 static u8 *rng_buffer, *rng_fillbuf;
 static unsigned short current_quality;
@@ -81,7 +84,9 @@ static void add_early_randomness(struct hwrng *rng)
unsigned char bytes[16];
int bytes_read;
 
+   mutex_lock(&reading_mutex);
bytes_read = rng_get_data(rng, bytes, sizeof(bytes), 1);
+   mutex_unlock(&reading_mutex);
if (bytes_read > 0)
add_device_randomness(bytes, bytes_read);
 }
@@ -128,6 +133,7 @@ static inline int rng_get_data(struct hwrng *rng, u8 *buffer, size_t size,
int wait) {
int present;
 
+   BUG_ON(!mutex_is_locked(&reading_mutex));
if (rng->read)
return rng->read(rng, (void *)buffer, size, wait);
 
@@ -160,13 +166,14 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
goto out_unlock;
}
 
+   mutex_lock(&reading_mutex);
if (!data_avail) {
bytes_read = rng_get_data(current_rng, rng_buffer,
rng_buffer_size(),
!(filp->f_flags & O_NONBLOCK));
if (bytes_read < 0) {
err = bytes_read;
-   goto out_unlock;
+   goto out_unlock_reading;
}
data_avail = bytes_read;
}
@@ -174,7 +181,7 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
if (!data_avail) {
if (filp->f_flags & O_NONBLOCK) {
err = -EAGAIN;
-   goto out_unlock;
+   goto out_unlock_reading;
}
} else {
len = data_avail;
@@ -186,7 +193,7 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
if (copy_to_user(buf + ret, rng_buffer + data_avail,
len)) {
err = -EFAULT;
-   goto out_unlock;
+   goto out_unlock_reading;
}
 
size -= len;
@@ -194,6 +201,7 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
}
 
mutex_unlock(&rng_mutex);
+   mutex_unlock(&reading_mutex);
 
if (need_resched())
schedule_timeout_interruptible(1);
@@ -208,6 +216,9 @@ out:
 out_unlock:
mutex_unlock(&rng_mutex);
goto out;
+out_unlock_reading:
+   mutex_unlock(&reading_mutex);
+   goto out_unlock;
 }
 
 
@@ -348,13 +359,16 @@ static int hwrng_fillfn(void *unused)
while (!kthread_should_stop()) {
if (!current_rng)
break;
+   mutex_lock(&reading_mutex);
rc = rng_get_data(current_rng, rng_fillbuf,
  rng_buffer_size(), 1);
+   mutex_unlock(&reading_mutex);
if (rc <= 0) {
pr_warn("hwrng: no data available\n");
msleep_interruptible(1);
continue;
}
+   /* Outside lock, sure, but y'know: randomness. */
add_hwgenerator_randomness((void *)rng_fillbuf, rc,
   rc * current_quality * 8 >> 10);
}
-- 
1.9.3



[PATCH v5 REPOST 0/6] fix hw_random stuck

2014-12-08 Thread Amos Kong
When I hotunplug a busy virtio-rng device or try to access hwrng
attributes in a non-SMP guest, it gets stuck.

My hotplug tests:

| test 0:
|   hotunplug rng device from qemu monitor
|
| test 1:
|   guest) # dd if=/dev/hwrng of=/dev/null &
|   hotunplug rng device from qemu monitor
|
| test 2:
|   guest) # dd if=/dev/random of=/dev/null &
|   hotunplug rng device from qemu monitor
|
| test 4:
|   guest) # dd if=/dev/hwrng of=/dev/null &
|   cat /sys/devices/virtual/misc/hw_random/rng_*
|
| test 5:
|   guest) # dd if=/dev/hwrng of=/dev/null
|   cancel dd process after 10 seconds
|   guest) # dd if=/dev/hwrng of=/dev/null &
|   hotunplug rng device from qemu monitor
|
| test 6:
|   use a fifo as rng backend, execute test 0 ~ 5 with no input of fifo

V5: reset cleanup_done flag, drop redundant init of reference count, use
compiler barrier to prevent reordering.
V4: updated patch 4 to fix corruption, decrease last reference to trigger
the cleanup, fix unregister race pointed out by Herbert
V3: initialize kref to 1
V2: added patch 2 to fix a deadlock, updated patch 3 to fix a reference
counting issue

Amos Kong (1):
  hw_random: move some code out mutex_lock for avoiding underlying
deadlock

Rusty Russell (5):
  hw_random: place mutex around read functions and buffers.
  hw_random: use reference counts on each struct hwrng.
  hw_random: fix unregister race.
  hw_random: don't double-check old_rng.
  hw_random: don't init list element we're about to add to list.

 drivers/char/hw_random/core.c | 173 ++
 include/linux/hw_random.h |   3 +
 2 files changed, 126 insertions(+), 50 deletions(-)

-- 
1.9.3



[PATCH v5 REPOST 2/6] hw_random: move some code out mutex_lock for avoiding underlying deadlock

2014-12-08 Thread Amos Kong
In the next patch, we use reference counting for each struct hwrng;
changing the reference count also requires taking the mutex_lock.
If, before releasing the lock, we try to stop a kthread that is
itself waiting to take the lock so it can drop its reference,
deadlock will occur.
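
The shape of the deadlock, as a pthreads sketch (hypothetical userspace
stand-in; pthread_join() plays the role of kthread_stop()):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t rng_mutex = PTHREAD_MUTEX_INITIALIZER;
static volatile bool should_stop;	/* kthread_should_stop() stand-in */

static void *fill_thread(void *arg)
{
	(void)arg;
	while (!should_stop) {
		pthread_mutex_lock(&rng_mutex);	/* e.g. to drop a reference */
		pthread_mutex_unlock(&rng_mutex);
	}
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, fill_thread, NULL);

	pthread_mutex_lock(&rng_mutex);
	/* ... unregister work done under the lock ... */
	pthread_mutex_unlock(&rng_mutex);	/* must release first ... */

	should_stop = true;
	pthread_join(t, NULL);	/* ... joining while still holding rng_mutex
				   could deadlock behind fill_thread's lock() */
	puts("stopped cleanly");
	return 0;
}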

Signed-off-by: Amos Kong ak...@redhat.com
---
 drivers/char/hw_random/core.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
index b1b6042..a0905c8 100644
--- a/drivers/char/hw_random/core.c
+++ b/drivers/char/hw_random/core.c
@@ -474,12 +474,12 @@ void hwrng_unregister(struct hwrng *rng)
}
}
if (list_empty(&rng_list)) {
+   mutex_unlock(&rng_mutex);
unregister_miscdev();
if (hwrng_fill)
kthread_stop(hwrng_fill);
-   }
-
-   mutex_unlock(&rng_mutex);
+   } else
+   mutex_unlock(&rng_mutex);
 }
 EXPORT_SYMBOL_GPL(hwrng_unregister);
 
-- 
1.9.3



[PATCH v5 REPOST 3/6] hw_random: use reference counts on each struct hwrng.

2014-12-08 Thread Amos Kong
From: Rusty Russell ru...@rustcorp.com.au

current_rng holds one reference, and we bump it every time we want
to do a read from it.

This means we only hold the rng_mutex to grab or drop a reference,
so accessing /sys/devices/virtual/misc/hw_random/rng_current doesn't
block on read of /dev/hwrng.

Using a kref is overkill (we're always under the rng_mutex), but
a standard pattern.

This also solves the problem that the hwrng_fillfn thread was
accessing current_rng without a lock, which could change (eg. to NULL)
underneath it.
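
The get/put discipline in miniature, as a userspace sketch (hypothetical;
a plain counter under the mutex stands in for kref):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct rng {
	int refs;
	const char *name;
};

static pthread_mutex_t rng_mutex = PTHREAD_MUTEX_INITIALIZER;
static struct rng *current_rng;

static struct rng *get_current_rng(void)
{
	struct rng *r;

	pthread_mutex_lock(&rng_mutex);
	r = current_rng;
	if (r)
		r->refs++;		/* kref_get() */
	pthread_mutex_unlock(&rng_mutex);
	return r;			/* mutex NOT held across the read */
}

static void put_rng(struct rng *r)
{
	pthread_mutex_lock(&rng_mutex);
	if (r && --r->refs == 0) {	/* kref_put() -> cleanup_rng() */
		printf("cleanup %s\n", r->name);
		free(r);
	}
	pthread_mutex_unlock(&rng_mutex);
}

int main(void)
{
	struct rng *r, *old;

	current_rng = malloc(sizeof(*current_rng));
	current_rng->refs = 1;		/* the reference current_rng holds */
	current_rng->name = "demo";

	r = get_current_rng();
	/* ... slow read happens here, without rng_mutex held ... */
	put_rng(r);

	old = current_rng;		/* drop_current_rng() equivalent */
	current_rng = NULL;
	put_rng(old);
	return 0;
}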

v5: drop redundant kref_init()
v4: decrease last reference for triggering the cleanup
v3: initialize kref (thanks Amos Kong)
v2: fix missing put_rng() on exit path (thanks Amos Kong)
Signed-off-by: Rusty Russell ru...@rustcorp.com.au
Signed-off-by: Amos Kong ak...@redhat.com
---
 drivers/char/hw_random/core.c | 135 --
 include/linux/hw_random.h |   2 +
 2 files changed, 94 insertions(+), 43 deletions(-)

diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
index a0905c8..83516cb 100644
--- a/drivers/char/hw_random/core.c
+++ b/drivers/char/hw_random/core.c
@@ -42,6 +42,7 @@
 #include <linux/delay.h>
 #include <linux/slab.h>
 #include <linux/random.h>
+#include <linux/err.h>
 #include <asm/uaccess.h>
 
 
@@ -91,6 +92,60 @@ static void add_early_randomness(struct hwrng *rng)
add_device_randomness(bytes, bytes_read);
 }
 
+static inline void cleanup_rng(struct kref *kref)
+{
+   struct hwrng *rng = container_of(kref, struct hwrng, ref);
+
+   if (rng->cleanup)
+   rng->cleanup(rng);
+}
+
+static void set_current_rng(struct hwrng *rng)
+{
+   BUG_ON(!mutex_is_locked(&rng_mutex));
+   kref_get(&rng->ref);
+   current_rng = rng;
+}
+
+static void drop_current_rng(void)
+{
+   BUG_ON(!mutex_is_locked(&rng_mutex));
+   if (!current_rng)
+   return;
+
+   /* decrease last reference for triggering the cleanup */
+   kref_put(&current_rng->ref, cleanup_rng);
+   current_rng = NULL;
+}
+
+/* Returns ERR_PTR(), NULL or refcounted hwrng */
+static struct hwrng *get_current_rng(void)
+{
+   struct hwrng *rng;
+
+   if (mutex_lock_interruptible(&rng_mutex))
+   return ERR_PTR(-ERESTARTSYS);
+
+   rng = current_rng;
+   if (rng)
+   kref_get(&rng->ref);
+
+   mutex_unlock(&rng_mutex);
+   return rng;
+}
+
+static void put_rng(struct hwrng *rng)
+{
+   /*
+* Hold rng_mutex here so we serialize in case they set_current_rng
+* on rng again immediately.
+*/
+   mutex_lock(&rng_mutex);
+   if (rng)
+   kref_put(&rng->ref, cleanup_rng);
+   mutex_unlock(&rng_mutex);
+}
+
 static inline int hwrng_init(struct hwrng *rng)
 {
if (rng->init) {
@@ -113,12 +168,6 @@ static inline int hwrng_init(struct hwrng *rng)
return 0;
 }
 
-static inline void hwrng_cleanup(struct hwrng *rng)
-{
-   if (rng && rng->cleanup)
-   rng->cleanup(rng);
-}
-
 static int rng_dev_open(struct inode *inode, struct file *filp)
 {
/* enforce read-only access to this chrdev */
@@ -154,21 +203,22 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
ssize_t ret = 0;
int err = 0;
int bytes_read, len;
+   struct hwrng *rng;
 
while (size) {
-   if (mutex_lock_interruptible(&rng_mutex)) {
-   err = -ERESTARTSYS;
+   rng = get_current_rng();
+   if (IS_ERR(rng)) {
+   err = PTR_ERR(rng);
goto out;
}
-
-   if (!current_rng) {
+   if (!rng) {
err = -ENODEV;
-   goto out_unlock;
+   goto out;
}
 
mutex_lock(&reading_mutex);
if (!data_avail) {
-   bytes_read = rng_get_data(current_rng, rng_buffer,
+   bytes_read = rng_get_data(rng, rng_buffer,
rng_buffer_size(),
!(filp->f_flags & O_NONBLOCK));
if (bytes_read  0) {
@@ -200,8 +250,8 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
ret += len;
}
 
-   mutex_unlock(&rng_mutex);
mutex_unlock(&reading_mutex);
+   put_rng(rng);
 
if (need_resched())
schedule_timeout_interruptible(1);
@@ -213,12 +263,11 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
}
 out:
return ret ? : err;
-out_unlock:
-   mutex_unlock(&rng_mutex);
-   goto out;
+
 out_unlock_reading:
mutex_unlock(&reading_mutex);
-   goto out_unlock;
+   put_rng(rng);
+   goto out;
 }
 
 
@@ -257,8 +306,8 @@ static ssize_t hwrng_attr_current_store(struct device *dev,

[PATCH v5 REPOST 4/6] hw_random: fix unregister race.

2014-12-08 Thread Amos Kong
From: Rusty Russell ru...@rustcorp.com.au

The previous patch added one potential problem: we can still be
reading from a hwrng when it's unregistered.  Add a wait for the
reference count to reach zero in the hwrng_unregister path.
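
The wait-for-last-reader handshake, as a condition-variable sketch
(hypothetical userspace stand-in; the kernel side uses wait_event()
and wake_up_all() instead):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t rng_done = PTHREAD_COND_INITIALIZER;
static int refs = 1;			/* current_rng's own reference */
static bool cleanup_done;

static void put_ref(void)
{
	pthread_mutex_lock(&lock);
	if (--refs == 0) {
		cleanup_done = true;		   /* cleanup_rng() finished */
		pthread_cond_broadcast(&rng_done); /* wake_up_all() */
	}
	pthread_mutex_unlock(&lock);
}

static void *reader(void *arg)
{
	(void)arg;
	sleep(1);			/* a read still in flight */
	put_ref();
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_mutex_lock(&lock);
	refs++;				/* the in-flight reader's reference */
	pthread_mutex_unlock(&lock);
	pthread_create(&t, NULL, reader, NULL);

	put_ref();			/* drop_current_rng() equivalent */

	/* hwrng_unregister(): block until cleanup ran and refcount hit 0 */
	pthread_mutex_lock(&lock);
	while (!(cleanup_done && refs == 0))
		pthread_cond_wait(&rng_done, &lock);
	pthread_mutex_unlock(&lock);

	puts("unregister may now return");
	pthread_join(t, NULL);
	return 0;
}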

v5: reset cleanup_done flag, use compiler barrier to prevent reordering.
v4: add cleanup_done flag to ensure that cleanup is done

Signed-off-by: Rusty Russell ru...@rustcorp.com.au
Signed-off-by: Amos Kong ak...@redhat.com
---
 drivers/char/hw_random/core.c | 12 
 include/linux/hw_random.h |  1 +
 2 files changed, 13 insertions(+)

diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
index 83516cb..067270b 100644
--- a/drivers/char/hw_random/core.c
+++ b/drivers/char/hw_random/core.c
@@ -60,6 +60,7 @@ static DEFINE_MUTEX(rng_mutex);
 static DEFINE_MUTEX(reading_mutex);
 static int data_avail;
 static u8 *rng_buffer, *rng_fillbuf;
+static DECLARE_WAIT_QUEUE_HEAD(rng_done);
 static unsigned short current_quality;
 static unsigned short default_quality; /* = 0; default to off */
 
@@ -98,6 +99,11 @@ static inline void cleanup_rng(struct kref *kref)
 
if (rng->cleanup)
rng->cleanup(rng);
+
+   /* cleanup_done should be updated after cleanup finishes */
+   smp_wmb();
+   rng->cleanup_done = true;
+   wake_up_all(&rng_done);
 }
 
 static void set_current_rng(struct hwrng *rng)
@@ -498,6 +504,8 @@ int hwrng_register(struct hwrng *rng)
add_early_randomness(rng);
}
 
+   rng->cleanup_done = false;
+
 out_unlock:
mutex_unlock(rng_mutex);
 out:
@@ -529,6 +537,10 @@ void hwrng_unregister(struct hwrng *rng)
kthread_stop(hwrng_fill);
} else
mutex_unlock(&rng_mutex);
+
+   /* Just in case rng is reading right now, wait. */
+   wait_event(rng_done, rng->cleanup_done &&
+  atomic_read(&rng->ref.refcount) == 0);
 }
 EXPORT_SYMBOL_GPL(hwrng_unregister);
 
diff --git a/include/linux/hw_random.h b/include/linux/hw_random.h
index c212e71..7832e50 100644
--- a/include/linux/hw_random.h
+++ b/include/linux/hw_random.h
@@ -46,6 +46,7 @@ struct hwrng {
/* internal. */
struct list_head list;
struct kref ref;
+   bool cleanup_done;
 };
 
 /** Register a new Hardware Random Number Generator driver. */
-- 
1.9.3



Re: [PATCH v2 25/25] crypto: ansi_cprng - If non-deterministic, don't buffer old output

2014-12-08 Thread Neil Horman
On Mon, Dec 08, 2014 at 11:43:13AM -0500, George Spelvin wrote:
 Wait, I'm confused. You mention in this note that this is an RFC patch,
 but not anywhere else in the series.  Are you proposing this for
 inclusion or not?
 
 Er, in the 0/25, I mentioned that I put the least certain stuff last,
 and in particular I wasn't sure if the last three patches were wanted
 or not:
 
  Pending issues:
  * Is non-deterministic mode (last three patches) wanted?
 
 I certainly wouldn't be unhappy if they went in, but with the comment
 clarification just before, I wouldn't be unhappy if they didn't, either.
 
 They're "if we wanted to do this, this is how it could be done".  Is this
 something we want to do?
 
 Sorry if my motivations are confusing.  I did indeed start with wanting
Not your motivations, just the posting mechanics.  If you just want to discuss a
patch, and aren't yet proposing it for inclusion, you should put RFC in the
prefix of every patch header.

 to add the seeding because I misunderstood the comments: I thought
 this was claiming to be X9.31 *and* I haven't seen the later versions
 of the standard (which you have) that back off on the requirements for
 the DT[] vector.
 
 Since you've patiently explained both of those to me, I'm more interested
 in the other, more generic code cleanups.
 
 You also sent me two detailed explanations of the consequences of making
 the generator non-deterministic in a way that gave me a general impression
 of disliking of the idea.  So I've been weaning myself off the idea.
 
Not particularly opposed to the idea, I just know that several use cases rely on
deterministic behavior for those entities that share the secret information, so
I need to be sure that the deterministic behavior remains and is the default.

 I put those patches at the end so they can easily be dropped from the series.
 
 Or, as I also mentioned, simply postponed until there's been more discussion. 
  
 Since that's an actual semantic change, collecting a few other opinions
 would be valuable.
I'll look at this series in detail shortly.
Neil



[PATCH v3] crypto: algif - Mark sgl end at the end of data

2014-12-08 Thread Tadeusz Struk
Hi,
algif_skcipher sends 127 sgl buffers for encryption regardless of how
many buffers actually have data to process: the first few have valid
lengths and the rest are zero length. This is not very efficient.
This patch marks the last buffer with data as the last one to process.
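
The idea in miniature, as a userspace sketch (hypothetical; a plain
array with an end flag stands in for the scatterlist and sg_mark_end()):

#include <stdio.h>

#define MAX_SGL_ENTS 127

struct ent { unsigned int length; int is_end; };

static void mark_end(struct ent *e, int cur)
{
	e[cur - 1].is_end = 1;	/* sg_mark_end() on the last filled entry */
}

int main(void)
{
	struct ent sgl[MAX_SGL_ENTS] = { { 4096, 0 }, { 512, 0 } };
	int cur = 2, walked = 0;

	mark_end(sgl, cur);
	for (int i = 0; i < MAX_SGL_ENTS; i++) {	/* consumer walk */
		walked++;
		if (sgl[i].is_end)
			break;
	}
	printf("walked %d of %d entries\n", walked, MAX_SGL_ENTS);
	return 0;
}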

Changes:
v2 - use data len to find the last buffer instead of nents in RX list.
v3 - Mark/unmark end when data is added and sgl-cur changed.

Signed-off-by: Tadeusz Struk tadeusz.st...@intel.com
---
 crypto/algif_skcipher.c |8 
 1 file changed, 8 insertions(+)

diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index 3e84f4a..9b84765 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -330,6 +330,7 @@ static int skcipher_sendmsg(struct kiocb *unused, struct socket *sock,
 
sgl = list_entry(ctx->tsgl.prev, struct skcipher_sg_list, list);
sg = sgl->sg;
+   sg_unmark_end(sg + sgl->cur);
do {
i = sgl->cur;
plen = min_t(int, len, PAGE_SIZE);
@@ -355,6 +356,9 @@ static int skcipher_sendmsg(struct kiocb *unused, struct socket *sock,
sgl->cur++;
} while (len && sgl->cur < MAX_SGL_ENTS);
 
+   if (!size)
+   sg_mark_end(sg + sgl->cur - 1);
+
ctx->merge = plen & (PAGE_SIZE - 1);
}
 
@@ -401,6 +405,10 @@ static ssize_t skcipher_sendpage(struct socket *sock, struct page *page,
ctx->merge = 0;
sgl = list_entry(ctx->tsgl.prev, struct skcipher_sg_list, list);
 
+   if (sgl->cur)
+   sg_unmark_end(sgl->sg + sgl->cur - 1);
+
+   sg_mark_end(sgl->sg + sgl->cur);
get_page(page);
sg_set_page(sgl->sg + sgl->cur, page, size, offset);
sgl->cur++;



[PATCH] crypto: qat - Fix assumption that sg in and out will have the same nents

2014-12-08 Thread Tadeusz Struk
Fixed the invalid assumption that the in and out sgls will always have
the same number of entries.
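
A sketch of the corrected accounting (hypothetical userspace stand-in
for the sg walk): each list is sized from its own entry count, and
zero-length entries are skipped rather than assumed to match the other
side.

#include <stdio.h>

struct ent { unsigned int length; };

static int count_mapped(const struct ent *sgl, int nents)
{
	int sg_nctr = 0;

	for (int i = 0; i < nents; i++)
		if (sgl[i].length)	/* skip zero-length entries */
			sg_nctr++;
	return sg_nctr;
}

int main(void)
{
	struct ent in[]  = { { 4096 }, { 512 }, { 0 } };
	struct ent out[] = { { 4096 }, { 0 }, { 0 }, { 0 } };

	/* 3 vs 4 entries, 2 vs 1 actually carrying data */
	printf("in: %d mapped, out: %d mapped\n",
	       count_mapped(in, 3), count_mapped(out, 4));
	return 0;
}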

Signed-off-by: Tadeusz Struk tadeusz.st...@intel.com
---
 drivers/crypto/qat/qat_common/qat_algs.c   |   82 +---
 drivers/crypto/qat/qat_common/qat_crypto.h |1 
 2 files changed, 50 insertions(+), 33 deletions(-)

diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
index 19eea1c..e4e32d8 100644
--- a/drivers/crypto/qat/qat_common/qat_algs.c
+++ b/drivers/crypto/qat/qat_common/qat_algs.c
@@ -557,7 +557,8 @@ static void qat_alg_free_bufl(struct qat_crypto_instance *inst,
dma_addr_t blp = qat_req->buf.blp;
dma_addr_t blpout = qat_req->buf.bloutp;
size_t sz = qat_req->buf.sz;
-   int i, bufs = bl->num_bufs;
+   size_t sz_out = qat_req->buf.sz_out;
+   int i;
 
for (i = 0; i < bl->num_bufs; i++)
dma_unmap_single(dev, bl->bufers[i].addr,
@@ -567,14 +568,14 @@ static void qat_alg_free_bufl(struct qat_crypto_instance *inst,
kfree(bl);
if (blp != blpout) {
/* If out of place operation dma unmap only data */
-   int bufless = bufs - blout->num_mapped_bufs;
+   int bufless = blout->num_bufs - blout->num_mapped_bufs;
 
-   for (i = bufless; i < bufs; i++) {
+   for (i = bufless; i < blout->num_bufs; i++) {
dma_unmap_single(dev, blout->bufers[i].addr,
 blout->bufers[i].len,
 DMA_BIDIRECTIONAL);
}
-   dma_unmap_single(dev, blpout, sz, DMA_TO_DEVICE);
+   dma_unmap_single(dev, blpout, sz_out, DMA_TO_DEVICE);
kfree(blout);
}
 }
@@ -587,19 +588,20 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
   struct qat_crypto_request *qat_req)
 {
struct device *dev = GET_DEV(inst->accel_dev);
-   int i, bufs = 0, n = sg_nents(sgl), assoc_n = sg_nents(assoc);
+   int i, bufs = 0, sg_nctr = 0;
+   int n = sg_nents(sgl), assoc_n = sg_nents(assoc);
struct qat_alg_buf_list *bufl;
struct qat_alg_buf_list *buflout = NULL;
dma_addr_t blp;
dma_addr_t bloutp = 0;
struct scatterlist *sg;
-   size_t sz = sizeof(struct qat_alg_buf_list) +
+   size_t sz_out, sz = sizeof(struct qat_alg_buf_list) +
((1 + n + assoc_n) * sizeof(struct qat_alg_buf));
 
if (unlikely(!n))
return -EINVAL;
 
-   bufl = kmalloc_node(sz, GFP_ATOMIC,
+   bufl = kzalloc_node(sz, GFP_ATOMIC,
dev_to_node(GET_DEV(inst->accel_dev)));
if (unlikely(!bufl))
return -ENOMEM;
@@ -620,15 +622,20 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
goto err;
bufs++;
}
-   bufl->bufers[bufs].addr = dma_map_single(dev, iv, ivlen,
-DMA_BIDIRECTIONAL);
-   bufl->bufers[bufs].len = ivlen;
-   if (unlikely(dma_mapping_error(dev, bufl->bufers[bufs].addr)))
-   goto err;
-   bufs++;
+   if (ivlen) {
+   bufl->bufers[bufs].addr = dma_map_single(dev, iv, ivlen,
+DMA_BIDIRECTIONAL);
+   bufl->bufers[bufs].len = ivlen;
+   if (unlikely(dma_mapping_error(dev, bufl->bufers[bufs].addr)))
+   goto err;
+   bufs++;
+   }
 
for_each_sg(sgl, sg, n, i) {
-   int y = i + bufs;
+   int y = sg_nctr + bufs;
+
+   if (!sg->length)
+   continue;
 
bufl->bufers[y].addr = dma_map_single(dev, sg_virt(sg),
  sg->length,
@@ -636,8 +643,9 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
bufl->bufers[y].len = sg->length;
if (unlikely(dma_mapping_error(dev, bufl->bufers[y].addr)))
goto err;
+   sg_nctr++;
}
-   bufl->num_bufs = n + bufs;
+   bufl->num_bufs = sg_nctr + bufs;
qat_req->buf.bl = bufl;
qat_req->buf.blp = blp;
qat_req->buf.sz = sz;
@@ -645,11 +653,15 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
if (sgl != sglout) {
struct qat_alg_buf *bufers;
 
-   buflout = kmalloc_node(sz, GFP_ATOMIC,
+   n = sg_nents(sglout);
+   sz_out = sizeof(struct qat_alg_buf_list) +
+   ((1 + n + assoc_n) * sizeof(struct qat_alg_buf));
+   sg_nctr = 0;
+   buflout = kzalloc_node(sz_out, GFP_ATOMIC,
   dev_to_node(GET_DEV(inst->accel_dev)));
if (unlikely(!buflout))
   

[PATCH] crypto: qat - add support for cbc(aes) ablkcipher

2014-12-08 Thread Tadeusz Struk
Add support for cbc(aes) ablkcipher.

Signed-off-by: Tadeusz Struk tadeusz.st...@intel.com
Acked-by: Bruce W. Allan bruce.w.al...@intel.com
---
 drivers/crypto/qat/qat_common/icp_qat_hw.h |2 
 drivers/crypto/qat/qat_common/qat_algs.c   |  528 ++--
 drivers/crypto/qat/qat_common/qat_crypto.h |   15 +
 3 files changed, 433 insertions(+), 112 deletions(-)

diff --git a/drivers/crypto/qat/qat_common/icp_qat_hw.h b/drivers/crypto/qat/qat_common/icp_qat_hw.h
index 5031f8c..68f191b 100644
--- a/drivers/crypto/qat/qat_common/icp_qat_hw.h
+++ b/drivers/crypto/qat/qat_common/icp_qat_hw.h
@@ -301,5 +301,5 @@ struct icp_qat_hw_cipher_aes256_f8 {
 
 struct icp_qat_hw_cipher_algo_blk {
struct icp_qat_hw_cipher_aes256_f8 aes;
-};
+} __aligned(64);
 #endif
diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
index e4e32d8..f32d0a5 100644
--- a/drivers/crypto/qat/qat_common/qat_algs.c
+++ b/drivers/crypto/qat/qat_common/qat_algs.c
@@ -63,15 +63,15 @@
 #include icp_qat_fw.h
 #include icp_qat_fw_la.h
 
-#define QAT_AES_HW_CONFIG_ENC(alg) \
+#define QAT_AES_HW_CONFIG_CBC_ENC(alg) \
ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
-   ICP_QAT_HW_CIPHER_NO_CONVERT, \
-   ICP_QAT_HW_CIPHER_ENCRYPT)
+  ICP_QAT_HW_CIPHER_NO_CONVERT, \
+  ICP_QAT_HW_CIPHER_ENCRYPT)
 
-#define QAT_AES_HW_CONFIG_DEC(alg) \
+#define QAT_AES_HW_CONFIG_CBC_DEC(alg) \
ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_CBC_MODE, alg, \
-   ICP_QAT_HW_CIPHER_KEY_CONVERT, \
-   ICP_QAT_HW_CIPHER_DECRYPT)
+  ICP_QAT_HW_CIPHER_KEY_CONVERT, \
+  ICP_QAT_HW_CIPHER_DECRYPT)
 
 static atomic_t active_dev;
 
@@ -108,19 +108,31 @@ struct qat_auth_state {
uint8_t data[MAX_AUTH_STATE_SIZE + 64];
 } __aligned(64);
 
-struct qat_alg_session_ctx {
+struct qat_alg_aead_ctx {
struct qat_alg_cd *enc_cd;
-   dma_addr_t enc_cd_paddr;
struct qat_alg_cd *dec_cd;
+   dma_addr_t enc_cd_paddr;
dma_addr_t dec_cd_paddr;
-   struct icp_qat_fw_la_bulk_req enc_fw_req_tmpl;
-   struct icp_qat_fw_la_bulk_req dec_fw_req_tmpl;
-   struct qat_crypto_instance *inst;
-   struct crypto_tfm *tfm;
+   struct icp_qat_fw_la_bulk_req enc_fw_req;
+   struct icp_qat_fw_la_bulk_req dec_fw_req;
struct crypto_shash *hash_tfm;
enum icp_qat_hw_auth_algo qat_hash_alg;
+   struct qat_crypto_instance *inst;
+   struct crypto_tfm *tfm;
uint8_t salt[AES_BLOCK_SIZE];
-   spinlock_t lock;	/* protects qat_alg_session_ctx struct */
+   spinlock_t lock;	/* protects qat_alg_aead_ctx struct */
+};
+
+struct qat_alg_ablkcipher_ctx {
+   struct icp_qat_hw_cipher_algo_blk *enc_cd;
+   struct icp_qat_hw_cipher_algo_blk *dec_cd;
+   dma_addr_t enc_cd_paddr;
+   dma_addr_t dec_cd_paddr;
+   struct icp_qat_fw_la_bulk_req enc_fw_req;
+   struct icp_qat_fw_la_bulk_req dec_fw_req;
+   struct qat_crypto_instance *inst;
+   struct crypto_tfm *tfm;
+   spinlock_t lock;	/* protects qat_alg_ablkcipher_ctx struct */
 };
 
 static int get_current_node(void)
@@ -144,7 +156,7 @@ static int qat_get_inter_state_size(enum icp_qat_hw_auth_algo qat_hash_alg)
 }
 
 static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
- struct qat_alg_session_ctx *ctx,
+ struct qat_alg_aead_ctx *ctx,
  const uint8_t *auth_key,
  unsigned int auth_keylen)
 {
@@ -267,8 +279,6 @@ static void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header)
header->comn_req_flags =
ICP_QAT_FW_COMN_FLAGS_BUILD(QAT_COMN_CD_FLD_TYPE_64BIT_ADR,
QAT_COMN_PTR_TYPE_SGL);
-   ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(header->serv_specif_flags,
-  ICP_QAT_FW_LA_DIGEST_IN_BUFFER);
ICP_QAT_FW_LA_PARTIAL_SET(header->serv_specif_flags,
  ICP_QAT_FW_LA_PARTIAL_NONE);
ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(header->serv_specif_flags,
@@ -279,8 +289,9 @@ static void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header)
   ICP_QAT_FW_LA_NO_UPDATE_STATE);
 }
 
-static int qat_alg_init_enc_session(struct qat_alg_session_ctx *ctx,
-   int alg, struct crypto_authenc_keys *keys)
+static int qat_alg_aead_init_enc_session(struct qat_alg_aead_ctx *ctx,
+int alg,
+struct crypto_authenc_keys *keys)
 {
struct crypto_aead *aead_tfm = 

Re: [PATCH] crypto: qat - add support for cbc(aes) ablkcipher

2014-12-08 Thread Tadeusz Struk
On 12/08/2014 12:08 PM, Tadeusz Struk wrote:
 Add support for cbc(aes) ablkcipher.
 

Hi Herbert,
These two:
[PATCH] crypto: qat - add support for cbc(aes) ablkcipher
[PATCH] crypto: qat - Fix assumption that sg in and out will have the...

are generated against cryptodev with these two on top:

[PATCH v2 1/2] crypto: qat - Prevent dma mapping zero length assoc data
[PATCH v2 2/2] crypto: qat - Enforce valid numa configuration

Thanks,
Tadeusz




Re: [PATCH v2 25/25] crypto: ansi_cprng - If non-deterministic, don't buffer old output

2014-12-08 Thread George Spelvin
 Not your motivations, just the posting mechanics.  If you just want to
 discuss a patch, and aren't yet proposing it for inclusion, you should
 put RFC in the prefix of every patch header.

I understand the principle, and I should have on those patches (mea
culpa), but really *all* patch postings are for comment; I think of RFC
as "comment only; please don't apply this".

But it wasn't marked RFC, so that's why I posted a note downgrading
it when I realized I messed it up.  The note was basically "oh, shit,
I introduced a bug at the last minute"; thankfully that was the most RFC
of the entire series, so nobody's likely to have merged it.

But it certainly is the case that for any significant patch series,
I really don't expect v1 to get merged as-is.

I'm serious about the changes, and it wouldn't have been a problem if
you had applied v1, but it would have surprised me.  Realistically,
I expect a couple of rounds of discussion and tweaking of the specific
form of the changes before people agree it's ready to go in.

And I think that's the case here; I adjusted a lot of details based on
feedback, but at a high level nothing changed; v2 makes the same changes
that v1 did.

 Not particularly opposed to the idea, I just know that several use cases
 rely on deterministic behavior for those entities that share the secret
 information, so I need to be sure that the deterministic behavior remains
 and is the default.

Right, because it's advertised as a PRNG.  Thinking about it, would
a separate crypto_alg with a different seedsize be a better solution
than obscure rules about seed size?  And something in the cra_flags
to indicate it's nondeterministic?