No comments so far? :(
What's wrong with the patch?
Reviewed-by: Huang Ying ying.hu...@intel.com
Best Regards,
Huang Ying
--
To unsubscribe from this list: send the line unsubscribe linux-crypto in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
not understand it. It
should be as big as CBC.
Best Regards,
Huang Ying
CBC:  84.8  262.3  +209.3%
LRW: 108.6  222.1  +104.5%
XTS: 105.0  205.5   +95.7%
x86-64:  old    new   delta
ECB: 121.1  123.0   +1.5%
CBC: 285.3  290.8   +1.9%
LRW: 263.7  265.3
On Fri, 2010-11-12 at 15:30 +0800, Mathias Krause wrote:
On 12.11.2010, 01:33 Huang Ying wrote:
Hi, Mathias,
On Fri, 2010-11-12 at 06:18 +0800, Mathias Krause wrote:
All tests were run five times in a row using a 256-bit key and doing I/O
to the block device in chunks of 1 MB
with test_ahash_speed) to
test cipher in asynchronous mode.
Best Regards,
Huang Ying
Hi, Andrew,
On Wed, 2010-03-24 at 05:23 +0800, Andrew Morton wrote:
On Fri, 12 Mar 2010 15:01:47 +0800
Huang Ying ying.hu...@intel.com wrote:
Andrew Morton reported that the AES-NI CTR optimization failed to compile
with gas 2.16.1; the error message is as follows:
arch/x86/crypto/aesni
-by: Huang Ying ying.hu...@intel.com
---
arch/x86/crypto/aesni-intel_asm.S |4 -
arch/x86/include/asm/inst.h | 96 --
2 files changed, 95 insertions(+), 5 deletions(-)
--- a/arch/x86/crypto/aesni-intel_asm.S
+++ b/arch/x86/crypto/aesni-intel_asm.S
@@ -749,8
Because ghash needs setkey, the setkey and keysize template support
for test_hash_speed is added.
v2:
- Move klen into struct hash_speed.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
crypto/tcrypt.c |7 +++
crypto/tcrypt.h | 29 +
2 files changed
Because ghash needs setkey, the setkey and keysize template support
for test_hash_speed is added.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
crypto/tcrypt.c | 122 +++-
crypto/tcrypt.h |1
2 files changed, 78 insertions(+), 45
% reduction of
encryption/decryption time.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
arch/x86/crypto/aesni-intel_asm.S | 115
arch/x86/crypto/aesni-intel_glue.c | 130 +++--
2 files changed, 238 insertions(+), 7 deletions
On Tue, 2009-11-10 at 02:56 +0800, Herbert Xu wrote:
On Thu, Nov 05, 2009 at 02:44:17PM +0800, Huang Ying wrote:
Old binutils do not support the AES-NI instructions; to allow the kernel to be
compiled with them, .byte sequences are used instead of AES-NI assembly
instructions. But the readability
complete(), which accepts a struct
aead_request *req instead of areq, so avoid using areq after it is
destroyed.
- Expand complete_for_next_step().
The fixing method is based on the idea of Herbert Xu.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
crypto/gcm.c | 107
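The areq hazard discussed in that patch can be sketched in plain C. This is a userspace model with hypothetical stand-in types (struct aead_request here is a stub, not the kernel's), only illustrating the rule that a completion step must read everything it needs from the request before handing it to the final completion, which may destroy it:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for the kernel's request type; not the real API. */
struct aead_request {
    void (*complete)(struct aead_request *req, int err);
};

static int completed_err = -1;

/* Final completion: in the kernel this callback may free the request. */
static void final_complete(struct aead_request *req, int err)
{
    completed_err = err;
    free(req);              /* the request is destroyed here */
}

/* Per-step callback: copy the function pointer out of req BEFORE
   completing; after the call, req must not be touched again. */
static int step_done(struct aead_request *req, int err)
{
    void (*complete)(struct aead_request *, int) = req->complete;
    complete(req, err);
    /* no further use of req here: it may already be freed */
    return err;
}
```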
-by: Huang Ying ying.hu...@intel.com
---
arch/x86/crypto/ghash-clmulni-intel_asm.S | 29 ++---
1 file changed, 10 insertions(+), 19 deletions(-)
--- a/arch/x86/crypto/ghash-clmulni-intel_asm.S
+++ b/arch/x86/crypto/ghash-clmulni-intel_asm.S
@@ -17,7 +17,7 @@
*/
#include
-by: Huang Ying ying.hu...@intel.com
---
arch/x86/crypto/aesni-intel_asm.S | 517 --
1 file changed, 173 insertions(+), 344 deletions(-)
--- a/arch/x86/crypto/aesni-intel_asm.S
+++ b/arch/x86/crypto/aesni-intel_asm.S
@@ -16,6 +16,7 @@
*/
#include linux
complete(), which accepts a struct
aead_request *req instead of areq, so avoid using areq after it is
destroyed.
- Expand complete_for_next_step().
The fixing method is based on the idea of Herbert Xu.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
crypto/gcm.c | 120
On Tue, 2009-11-03 at 23:53 +0800, Herbert Xu wrote:
On Tue, Nov 03, 2009 at 10:40:17AM +0800, Huang Ying wrote:
The flow of the complete function (xxx_done) in gcm.c is as follows:
void complete(struct crypto_async_request *areq, int err)
{
	if (!err) {
		err
.
- Expand complete_for_next_step().
Signed-off-by: Huang Ying ying.hu...@intel.com
---
crypto/gcm.c | 43 ---
1 file changed, 28 insertions(+), 15 deletions(-)
--- a/crypto/gcm.c
+++ b/crypto/gcm.c
@@ -267,8 +267,7 @@ static int gcm_hash_final(struct aead_re
-intel_glue.c is not
changed accordingly. This patch fixes this.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
arch/x86/crypto/ghash-clmulni-intel_glue.c |8
1 file changed, 4 insertions(+), 4 deletions(-)
--- a/arch/x86/crypto/ghash-clmulni-intel_glue.c
+++ b/arch/x86/crypto/ghash
it.
How about something as below? But it seems not appropriate to put these
bits into i387.h, that is, to combine C and gas syntax.
Best Regards,
Huang Ying
.macro xmm_num opd xmm
.ifc \xmm,%xmm0
\opd = 0
.endif
.ifc \xmm,%xmm1
\opd = 1
.endif
.ifc \xmm,%xmm2
\opd = 2
.endif
.ifc \xmm,%xmm3
\opd
is not changed
accordingly. This patch fixes this.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
arch/x86/crypto/aesni-intel_glue.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -82,7
On Tue, 2009-09-15 at 22:42 +0800, Daniel Walker wrote:
On Tue, 2009-09-15 at 13:42 +0800, Huang Ying wrote:
Hi, Herbert,
The dependency on irq_fpu_usable has been merged into Linus' tree.
Best Regards,
Huang Ying
, performance increase about 2x.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
arch/x86/crypto/Makefile |3
arch/x86/crypto/ghash-clmulni-intel_asm.S | 157 +
arch/x86/crypto/ghash-clmulni-intel_glue.c | 333 +
arch/x86/include/asm
Hi, Herbert,
The dependency on irq_fpu_usable has been merged into Linus' tree.
Best Regards,
Huang Ying
--
PCLMULQDQ is used to accelerate the most time-consuming part of GHASH,
carry-less multiplication. More information
and PCLMULQDQ
accelerated GHASH implementation.
v3:
- Renamed to irq_fpu_usable to reflect the purpose of the function.
v2:
- Renamed to irq_is_fpu_using to reflect the real situation.
Signed-off-by: Huang Ying ying.hu...@intel.com
CC: H. Peter Anvin h...@zytor.com
---
arch/x86/crypto/aesni-intel_glue.c
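The carry-less multiplication mentioned above (the operation PCLMULQDQ performs in hardware) can be illustrated in a few lines of plain C. This is only a sketch of the arithmetic, not the accelerated implementation: XOR replaces addition, so no carries propagate between bit positions.

```c
#include <stdint.h>

/* Low 64 bits of the carry-less (GF(2)[x]) product of a and b.
   PCLMULQDQ computes the full 128-bit product in one instruction;
   this scalar loop shows the same arithmetic. */
static uint64_t clmul_lo(uint64_t a, uint64_t b)
{
    uint64_t r = 0;
    for (int i = 0; i < 64; i++)
        if ((b >> i) & 1)   /* for each set bit of b ... */
            r ^= a << i;    /* ... XOR in a shifted copy of a */
    return r;
}
```

For example, 3 × 3 carry-less is 5 (since (x+1)² = x²+1 over GF(2)), where ordinary multiplication would give 9.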
it cannot
guarantee IV uniqueness. I think reverting to chainiv is the safer
option.
I see seqiv is used in rfc3686 mode; does that mean seqiv cannot be used with
raw counter mode but can be used for rfc3686?
Best Regards,
Huang Ying
GHASH is implemented as a shash algorithm. The actual implementation
is copied from gcm.c. This makes it possible to add
architecture/hardware accelerated GHASH implementation.
v2:
- Fix a bug in Makefile (Thanks Sebastian)
- Some other minor fixes
Signed-off-by: Huang Ying ying.hu
Hi, Herbert,
On Sun, 2009-06-21 at 21:51 +0800, Herbert Xu wrote:
Huang Ying ying.hu...@intel.com wrote:
PCLMULQDQ is used to accelerate the most time-consuming part of GHASH,
carry-less multiplication. More information about PCLMULQDQ can be
found at:
http://software.intel.com/en-us
On Sun, 2009-06-21 at 21:46 +0800, Herbert Xu wrote:
Huang Ying ying.hu...@intel.com wrote:
+ ghash = crypto_alloc_ahash("ghash", 0, 0);
+ if (IS_ERR(ghash))
+ return PTR_ERR(ghash);
We should add this as an extra parameter to gcm_base. This is
so
On Thu, 2009-06-18 at 15:27 +0800, Sebastian Andrzej Siewior wrote:
* Huang Ying | 2009-06-18 10:08:27 [+0800]:
On Thu, 2009-06-18 at 04:04 +0800, Sebastian Andrzej Siewior wrote:
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/crypto.h>
On Thu, 2009-06-18 at 19:40 +0800, Herbert Xu wrote:
On Mon, Jun 15, 2009 at 05:04:57PM +0800, Huang Ying wrote:
Because AES-NI instructions touch XMM state, the corresponding code
must be enclosed within kernel_fpu_begin/end, which use
preempt_disable/enable, so sleeping must be prevented
On Thu, 2009-06-18 at 04:04 +0800, Sebastian Andrzej Siewior wrote:
* Huang Ying | 2009-06-11 15:10:26 [+0800]:
GHASH is implemented as a shash algorithm. The actual implementation
is copied from gcm.c. This makes it possible to add
architecture/hardware accelerated GHASH implementation
On Thu, 2009-06-18 at 04:47 +0800, Sebastian Andrzej Siewior wrote:
* Huang Ying | 2009-06-11 15:10:28 [+0800]:
Remove the dedicated GHASH implementation in GCM, and uses the GHASH
digest algorithm instead. This will make GCM uses hardware accelerated
GHASH implementation automatically
Because AES-NI instructions touch XMM state, the corresponding code
must be enclosed within kernel_fpu_begin/end, which use
preempt_disable/enable, so sleeping must be prevented between
kernel_fpu_begin/end.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
arch/x86/crypto/aesni-intel_glue.c
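The constraint described above can be modeled in userspace C. These are stubs, not the real kernel functions: the real kernel_fpu_begin()/kernel_fpu_end() also save and restore FPU state, and aesni_enc_block() here is a hypothetical stand-in for one AES-NI block operation. The point is only the shape of the no-sleep section:

```c
/* Track the preemption-disabled (no-sleep) section that the real
   kernel_fpu_begin()/kernel_fpu_end() pair creates. */
static int no_sleep_depth;

static void kernel_fpu_begin(void) { no_sleep_depth++; }  /* preempt_disable() */
static void kernel_fpu_end(void)   { no_sleep_depth--; }  /* preempt_enable()  */

/* Hypothetical stand-in for one AES-NI block operation. */
static int aesni_enc_block(int x) { return x ^ 0x5a; }

/* Process all blocks inside one begin/end pair; nothing inside the
   loop may sleep (no GFP_KERNEL allocation, no mutex_lock, ...). */
static int encrypt_blocks(int *blocks, int n)
{
    kernel_fpu_begin();
    for (int i = 0; i < n; i++)
        blocks[i] = aesni_enc_block(blocks[i]);
    kernel_fpu_end();
    return no_sleep_depth;   /* back to 0: section properly closed */
}
```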
kernel_fpu_begin/end use preempt_disable/enable, so sleeping must be
prevented between kernel_fpu_begin/end.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
arch/x86/crypto/fpu.c |4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/x86/crypto/fpu.c
+++ b/arch/x86/crypto
Hi, Herbert,
This patchset adds PCLMULQDQ accelerated GHASH. Because conversion from
crypto_hash to crypto_shash has not been done, this patchset is not
intended to be merged now.
Please take a look at the general design.
Best Regards,
Huang Ying
Needed to use shash in cryptd hash.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
crypto/shash.c |6 ++
include/crypto/algapi.h |8
2 files changed, 14 insertions(+)
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -240,6 +240,14 @@ static
This is used by AES-NI accelerated AES implementation and PCLMULQDQ
accelerated GHASH implementation.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
arch/x86/crypto/aesni-intel_glue.c |7 ---
arch/x86/include/asm/i387.h|7 +++
2 files changed, 7 insertions(+), 7
asynchronous interface.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
crypto/Kconfig |2
crypto/gcm.c | 531 +++--
2 files changed, 367 insertions(+), 166 deletions(-)
--- a/crypto/gcm.c
+++ b/crypto/gcm.c
@@ -12,6 +12,7 @@
#include
, its usage must be enclosed within
kernel_fpu_begin/end, which can be used only in process context, so the
acceleration is implemented as a crypto_ahash. That is, requests in soft
IRQ context are deferred to the cryptd kernel thread.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
arch/x86/crypto
On Sun, 2009-03-29 at 15:43 +0800, Herbert Xu wrote:
On Wed, Mar 18, 2009 at 04:52:12PM +0800, Huang Ying wrote:
To accelerate GCM with it, I make the following design:
1. Implement ghash as an ahash algorithm and use it in the gcm
implementation.
2. Provide a new implementation of ghash
as that of AES-NI, that is, XMM registers are
used.
To accelerate GCM with it, I make the following design:
1. Implement ghash as an ahash algorithm and use it in the gcm
implementation.
2. Provide a new implementation of ghash with PCLMULQDQ-NI.
What do you think about that?
Best Regards,
Huang Ying
Use crypto_alloc_base() instead of crypto_alloc_ablkcipher() to
allocate the underlying tfm in cryptd_alloc_ablkcipher(), because
crypto_alloc_ablkcipher() prefers a GENIV-encapsulated crypto instead of
the raw one, while cryptd_alloc_ablkcipher() needs the raw one.
Signed-off-by: Huang Ying ying.hu
these operations to be
invoked for each request.
v2: Make FPU mode invisible to end user
Signed-off-by: Huang Ying ying.hu...@intel.com
---
crypto/Kconfig |5 +
crypto/Makefile |1
crypto/fpu.c| 166
3 files changed, 172
shows that encryption/decryption time can be
reduced to 50% of the general mode implementation + aes-aesni implementation.
v2: Add description of mode acceleration support in Kconfig
v3: Fix some bugs of CTR block size, LRW and XTS min/max key size.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
arch/x86
Use crypto_alloc_base() instead of crypto_alloc_ablkcipher() to
allocate the underlying tfm in cryptd_alloc_ablkcipher(), because
crypto_alloc_ablkcipher() prefers a GENIV-encapsulated crypto instead of
the raw one, while cryptd_alloc_ablkcipher() needs the raw one.
Signed-off-by: Huang Ying ying.hu
shows that encryption/decryption time can be
reduced to 50% of the general mode implementation + aes-aesni implementation.
v2: Add description of mode acceleration support in Kconfig
Signed-off-by: Huang Ying ying.hu...@intel.com
---
arch/x86/crypto/aesni-intel_glue.c | 256
Use crypto_alloc_base() instead of crypto_alloc_ablkcipher() to
allocate the underlying tfm in cryptd_alloc_ablkcipher(), because
crypto_alloc_ablkcipher() prefers a GENIV-encapsulated crypto instead of
the raw one, while cryptd_alloc_ablkcipher() needs the raw one.
Signed-off-by: Huang Ying ying.hu
these operations to be
invoked for each request.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
crypto/Kconfig |7 ++
crypto/Makefile |1
crypto/fpu.c| 166
3 files changed, 174 insertions(+)
--- a/crypto/Kconfig
+++ b
shows that encryption/decryption time can be
reduced to 50% of the general mode implementation + aes-aesni implementation.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
arch/x86/crypto/aesni-intel_glue.c | 256 +
crypto/Kconfig |1
2 files changed
Signed-off-by: Huang Ying ying.hu...@intel.com
---
drivers/md/dm-crypt.c | 14 +++---
1 file changed, 11 insertions(+), 3 deletions(-)
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -60,6 +60,8 @@ struct dm_crypt_io {
};
struct dm_crypt_request {
+ struct
Hi, Milan,
On Fri, 2009-02-27 at 16:41 +0800, Milan Broz wrote:
Herbert Xu wrote:
On Fri, Feb 27, 2009 at 01:31:56PM +0800, Huang Ying wrote:
I have heard from you before that the only thing guaranteed in the
completion function of async ablkcipher encryption is that req->data has
the value you
: kcryptd_async_done. This makes my AES-NI cryptd usage panic.
Do you think that is a bug?
Best Regards,
Huang Ying
The middle value of elapsed time is:
wo cryptwq: 0.31
w cryptwq: 0.26
The performance gain is about (0.31-0.26)/0.26 = 0.192.
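The gain arithmetic quoted above can be expressed as a one-line helper (an illustration only; the function name is mine):

```c
/* Relative gain from an old elapsed time to a new one, computed as
   (old - new) / new, matching the (0.31-0.26)/0.26 = 0.192 above. */
static double relative_gain(double old_t, double new_t)
{
    return (old_t - new_t) / new_t;
}
```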
Signed-off-by: Huang Ying ying.hu...@intel.com
---
crypto/Kconfig |1
crypto/cryptd.c | 220 ++--
2 files
Uses kcrypto_wq instead of keventd_wq in chainiv
keventd_wq has a potential starvation problem, so use the dedicated
kcrypto_wq instead.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
crypto/Kconfig |1 +
crypto/chainiv.c |3 ++-
2 files changed, 3 insertions(+), 1 deletion
keventd_wq has a potential starvation problem, so use the dedicated
kcrypto_wq instead.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
crypto/Kconfig |1 +
crypto/chainiv.c |3 ++-
2 files changed, 3 insertions(+), 1 deletion(-)
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -56,6 +56,7
A dedicated workqueue named kcrypto_wq is created for use by the crypto
subsystem. The system-shared keventd_wq is not suitable for
encryption/decryption because of a potential starvation problem.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
crypto/Kconfig |3 +++
crypto
On Thu, 2009-01-22 at 11:04 +0800, Herbert Xu wrote:
On Thu, Jan 22, 2009 at 10:32:17AM +0800, Huang Ying wrote:
This is the first attempt to use a dedicated workqueue for crypto. It is
not intended to be merged. Please send your comments, especially on the
design.
Thanks for the patch
cryptd_alloc_ablkcipher() will allocate a cryptd-ed ablkcipher for the
specified algorithm name. The newly allocated one is guaranteed to be a
cryptd-ed ablkcipher, so the underlying blkcipher can be obtained via
cryptd_ablkcipher_child().
Signed-off-by: Huang Ying ying.hu...@intel.com
---
crypto
On Thu, 2009-01-15 at 16:47 +0800, Herbert Xu wrote:
On Thu, Jan 15, 2009 at 04:28:33PM +0800, Huang Ying wrote:
+ tfm = crypto_alloc_ablkcipher(cryptd_alg_name, type, mask);
+ BUG_ON(crypto_ablkcipher_tfm(tfm)->__crt_alg->cra_module !=
+ THIS_MODULE);
You need to check
On Thu, 2009-01-15 at 17:23 +0800, Herbert Xu wrote:
On Thu, Jan 15, 2009 at 05:21:47PM +0800, Huang Ying wrote:
On Thu, 2009-01-15 at 16:47 +0800, Herbert Xu wrote:
On Thu, Jan 15, 2009 at 04:28:33PM +0800, Huang Ying wrote:
+ tfm = crypto_alloc_ablkcipher(cryptd_alg_name
On Fri, 2009-01-16 at 09:53 +0800, Herbert Xu wrote:
On Fri, Jan 16, 2009 at 09:20:58AM +0800, Huang Ying wrote:
On Thu, 2009-01-15 at 17:47 +0800, roel kluin wrote:
+ kernel_fpu_begin();
+ while ((nbytes = walk.nbytes)) {
+ aesni_ecb_enc(ctx
, that is,
create a dedicated workqueue for the crypto subsystem. This way, chainiv
can use this crypto workqueue too.
I will implement it if you have no plan to do it yourself.
Best Regards,
Huang Ying
On Fri, 2009-01-16 at 11:26 +0800, Herbert Xu wrote:
On Fri, Jan 16, 2009 at 10:37:02AM +0800, Huang Ying wrote:
But after checking blkcipher_walk_done() in 2.6.28: if the input argument
err != 0 and walk->flags & BLKCIPHER_WALK_SLOW != 0, then when
blkcipher_walk_done() returns non-zero, walk->nbytes
processing in cryptd_alloc_ablkcipher()
Signed-off-by: Huang Ying ying.hu...@intel.com
---
crypto/cryptd.c | 33 +
include/crypto/cryptd.h | 27 +++
2 files changed, 60 insertions(+)
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
processing in cryptd_alloc_ablkcipher()
Signed-off-by: Huang Ying ying.hu...@intel.com
---
crypto/cryptd.c | 35 +++
include/crypto/cryptd.h | 27 +++
2 files changed, 62 insertions(+)
--- a/crypto/cryptd.c
+++ b/crypto
.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
arch/x86/crypto/Makefile |3
arch/x86/crypto/aesni-intel_asm.S | 896 +
arch/x86/crypto/aesni-intel_glue.c | 461 +++
arch/x86/include/asm/cpufeature.h |1
crypto
, but some users may not want to use the cryptd-ed
version.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
crypto/cryptd.c |4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -215,7 +215,9 @@ static struct crypto_instance *cryptd_al
ctx
On Wed, 2009-01-14 at 14:53 +0800, Herbert Xu wrote:
On Wed, Jan 14, 2009 at 02:44:08PM +0800, Huang Ying wrote:
Because:
1. if %s is used, you can only request cryptd(driver name), not
cryptd(alg name), because the generated new algorithm instance has
algorithm name: alg name and driver
cryptd_alloc_ablkcipher() will allocate a cryptd-ed ablkcipher for the
specified algorithm name. The newly allocated one is guaranteed to be a
cryptd-ed ablkcipher, so the underlying blkcipher can be obtained via
cryptd_ablkcipher_child().
Signed-off-by: Huang Ying ying.hu...@intel.com
---
crypto
implementation.
- The AES key scheduling algorithm is re-implemented with higher
performance.
- The ablkcipher asynchronous mechanism is used to defer a crypto request
to work queue context when the FPU state is in use by another kernel
context.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
arch/x86
On Mon, 2009-01-12 at 18:43 +0800, Herbert Xu wrote:
On Mon, Jan 12, 2009 at 02:55:10PM +0800, Huang Ying wrote:
I use a shell cbc(aes) algorithm which chooses between
cryptd(__cbc-aes-aesni) and __cbc-aes-aesni according to context. But
the struct ablkcipher_request passed in can
a simple method.
Best Regards,
Huang Ying
with the 16-byte
alignment requirement of the AES-NI implementation.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
arch/x86/crypto/aes-i586-asm_32.S | 18 +-
arch/x86/crypto/aes-x86_64-asm_64.S |6 ++
arch/x86/crypto/aes_glue.c | 20
arch
it in aes_x86_64 and aes_generic.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
arch/x86/crypto/aes_glue.c |1 +
crypto/aes_generic.c | 20 +++-
include/crypto/aes.h |1 +
3 files changed, 21 insertions(+), 1 deletion(-)
--- a/crypto/aes_generic.c
+++ b
On Tue, 2008-12-23 at 21:01 +0800, Sebastian Andrzej Siewior wrote:
* Huang Ying | 2008-12-23 16:49:26 [+0800]:
If aes_x86_64 and aes_generic are compiled as built-in, the
initialization order is undetermined. That is, aes_x86_64 may be
initialized before aes_generic is initialized
On Wed, 2008-12-17 at 09:26 +0800, Herbert Xu wrote:
Huang Ying ying.hu...@intel.com wrote:
f. if TS is clear, then use x86_64 implementation. Otherwise if
user-space has touched the FPU, we save the state, if not then simply
clear TS.
Well I'd rather avoid using the x86_64
On Sat, 2008-12-13 at 03:57 +0800, Sebastian Andrzej Siewior wrote:
* Huang Ying | 2008-12-12 12:08:46 [+0800]:
Add support to Intel AES-NI instructions for x86_64 platform.
Intel AES-NI is a new set of Single Instruction Multiple Data (SIMD)
instructions that are going to be introduced
On Mon, 2008-12-15 at 13:21 +0800, Herbert Xu wrote:
On Mon, Dec 15, 2008 at 01:14:59PM +0800, Huang Ying wrote:
The PadLock instructions don't use/touch SSE registers, but might cause a
DNA fault when CR0.TS is set. So it is sufficient just to clear CR0.TS
before execution.
The AES-NI
or soft_irq context, the general x86_64 implementation is used
instead.
Signed-off-by: Huang Ying ying.hu...@intel.com
---
arch/x86/crypto/aes_glue.c| 10 -
arch/x86/include/asm/aes.h|9 +
arch/x86/include/asm/cpufeature.h |1
drivers/crypto/Kconfig| 11
of my testing machine.
Best Regards,
Huang Ying
also -- please drop the #define for R16 to %rsp ... it obfuscates more
than it helps anything.
thanks
-dean
On Wed, 30 Apr 2008, Sebastian Siewior wrote:
* Huang, Ying | 2008-04-25 11:11:17 [+0800]:
Hi, Sebastian,
Hi Huang
Hi, Sebastian,
On Wed, 2008-04-30 at 00:12 +0200, Sebastian Siewior wrote:
* Huang, Ying | 2008-04-25 11:11:17 [+0800]:
Hi, Sebastian,
Hi Huang,
sorry for the delay.
I changed the patches to group the reads or writes together instead of
interleaving them. Can you help me to test these new