Re: 2.6.13rc3: crypto horribly broken on all 64bit archs

2005-07-19 Thread Andy Isaacson
On Sun, Jul 17, 2005 at 02:20:21PM +0200, Andreas Steinmetz wrote:
> from include/linux/kernel.h:
> 
> #define ALIGN(x,a) (((x)+(a)-1)&~((a)-1))
> 
> from crypto/cipher.c:
> 
> unsigned int alignmask = ...
> u8 *src = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);

The type foolery you suggested is un-obvious, especially at review time.
How about the following instead?

-andy

# HG changeset patch
# User [EMAIL PROTECTED]
# Node ID ff4d8285dcb581b8340b133c22780c4fcc0dabb3
# Parent  2b0b2208676a22741fc3b1681ca9dd555ca40dfd
Add new PTR_ALIGN macro to encapsulate necessary casting, and use it in
the places where ALIGN was used on pointers.

Signed-off-by: Andy Isaacson <[EMAIL PROTECTED]>

diff -r 2b0b2208676a -r ff4d8285dcb5 crypto/cipher.c
--- a/crypto/cipher.c   Sun Jul 17 03:06:51 2005
+++ b/crypto/cipher.c   Tue Jul 19 18:51:11 2005
@@ -43,7 +43,7 @@
 {
unsigned int alignmask = crypto_tfm_alg_alignmask(desc->tfm);
u8 buffer[bsize * 2 + alignmask];
-   u8 *src = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
+   u8 *src = PTR_ALIGN(buffer, alignmask + 1);
u8 *dst = src + bsize;
unsigned int n;
 
@@ -166,7 +166,7 @@
if (unlikely(((unsigned long)iv & alignmask))) {
unsigned int ivsize = tfm->crt_cipher.cit_ivsize;
u8 buffer[ivsize + alignmask];
-   u8 *tmp = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
+   u8 *tmp = PTR_ALIGN(buffer, alignmask + 1);
int err;
 
desc->info = memcpy(tmp, iv, ivsize);
diff -r 2b0b2208676a -r ff4d8285dcb5 drivers/crypto/padlock-aes.c
--- a/drivers/crypto/padlock-aes.c  Sun Jul 17 03:06:51 2005
+++ b/drivers/crypto/padlock-aes.c  Tue Jul 19 18:51:11 2005
@@ -287,7 +287,7 @@
 
 static inline struct aes_ctx *aes_ctx(void *ctx)
 {
-   return (struct aes_ctx *)ALIGN((unsigned long)ctx, PADLOCK_ALIGNMENT);
+   return PTR_ALIGN(ctx, PADLOCK_ALIGNMENT);
 }
 
 static int
diff -r 2b0b2208676a -r ff4d8285dcb5 include/linux/kernel.h
--- a/include/linux/kernel.h	Sun Jul 17 03:06:51 2005
+++ b/include/linux/kernel.h	Tue Jul 19 18:51:11 2005
@@ -29,6 +29,8 @@
 
 #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
 #define ALIGN(x,a) (((x)+(a)-1)&~((a)-1))
+#define PTR_ALIGN(p,a) \
+	((void *)(((unsigned long)(p)+(a)-1)&~(((unsigned long)(a)-1))))
 
 #define	KERN_EMERG	"<0>"	/* system is unusable			*/
 #define	KERN_ALERT	"<1>"	/* action must be taken immediately	*/
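
For comparison, a minimal LP64 userspace sketch (not part of the original
mail; the macro bodies are copied from the patch above, and the address and
mask are made up for illustration) of ALIGN truncating a 64-bit pointer
while PTR_ALIGN preserves it:

#include <stdio.h>

/* Userspace stand-ins for the kernel.h definitions above.
 * Assumes an LP64 build (unsigned long is 64 bits wide). */
#define ALIGN(x,a) (((x)+(a)-1)&~((a)-1))
#define PTR_ALIGN(p,a) \
	((void *)(((unsigned long)(p)+(a)-1)&~(((unsigned long)(a)-1))))

int main(void)
{
	/* Pretend this buffer lives above 4GB, as stack buffers commonly
	 * do on 64-bit machines. */
	void *buffer = (void *)0xdeadbeef12345678UL;
	unsigned int alignmask = 15;	/* 16-byte alignment, as in cipher.c */

	/* Broken: ~((a)-1) is computed in 32-bit unsigned int, so the &
	 * zero-extends the mask and clears the upper 32 address bits.
	 * Prints 0x12345680. */
	printf("ALIGN:     %#lx\n", ALIGN((unsigned long)buffer, alignmask + 1));

	/* Fixed: the mask is built in unsigned long, keeping all 64 bits.
	 * Prints 0xdeadbeef12345680. */
	printf("PTR_ALIGN: %p\n", PTR_ALIGN(buffer, alignmask + 1));
	return 0;
}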

Re: 2.6.13rc3: crypto horribly broken on all 64bit archs

2005-07-18 Thread David S. Miller
From: Herbert Xu <[EMAIL PROTECTED]>
Date: Mon, 18 Jul 2005 06:45:54 +1000

> On Sun, Jul 17, 2005 at 02:20:21PM +0200, Andreas Steinmetz wrote:
> > 
> > The compiler first does ~((a)-1) and then expands the unsigned int to
> > unsigned long for the & operation. So we end up with only the lower 32
> > bits of the address. Who smoked what to do this? Patch attached.
> 
> Thanks for the patch, Andreas.  A similar patch will be in the
> mainline kernel as soon as everybody is back home from Canada.

Yes, the fix is in my tree and will make its way upstream
as soon as possible.

Re: 2.6.13rc3: crypto horribly broken on all 64bit archs

2005-07-17 Thread Herbert Xu
On Sun, Jul 17, 2005 at 02:20:21PM +0200, Andreas Steinmetz wrote:
> 
> The compiler first does ~((a)-1) and then expands the unsigned int to
> unsigned long for the & operation. So we end up with only the lower 32
> bits of the address. Who smoked what to do this? Patch attached.

Thanks for the patch, Andreas.  A similar patch will be in the
mainline kernel as soon as everybody is back home from Canada.
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[EMAIL PROTECTED]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


2.6.13rc3: crypto horribly broken on all 64bit archs

2005-07-17 Thread Andreas Steinmetz
from include/linux/kernel.h:

#define ALIGN(x,a) (((x)+(a)-1)&~((a)-1))

from crypto/cipher.c:

unsigned int alignmask = ...
u8 *src = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
...
unsigned int alignmask = ...
u8 *tmp = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
...
unsigned int align;
addr = ALIGN(addr, align);
addr += ALIGN(tfm->__crt_alg->cra_ctxsize, align);

The compiler first does ~((a)-1) and then expands the unsigned int to
unsigned long for the & operation. So we end up with only the lower 32
bits of the address. Who smoked what to do this? Patch attached.
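
Concretely, on an LP64 target (a minimal standalone sketch with made-up
values, not part of the original report):

#include <stdio.h>

int main(void)
{
	unsigned int a = 16;	/* alignmask + 1, as in the code above */
	unsigned long addr = 0xffffffff12345678UL;

	/* ~((a)-1) is evaluated in 32-bit unsigned int: ~15u == 0xfffffff0.
	 * The & zero-extends it to 0x00000000fffffff0, wiping out the
	 * upper 32 bits of the address.  Prints 0x12345680. */
	printf("%#lx\n", (addr + a - 1) & ~(a - 1));

	/* Widening to unsigned long before the ~ keeps the mask 64 bits
	 * wide, which is the effect of the patch below.
	 * Prints 0xffffffff12345680. */
	printf("%#lx\n", (addr + a - 1) & ~((unsigned long)a - 1));
	return 0;
}
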
-- 
Andreas Steinmetz   SPAMmers use [EMAIL PROTECTED]

--- linux.orig/crypto/cipher.c  2005-07-17 13:35:15.0 +0200
+++ linux/crypto/cipher.c   2005-07-17 14:04:00.0 +0200
@@ -41,7 +41,7 @@
   struct scatter_walk *in,
   struct scatter_walk *out, unsigned int bsize)
 {
-   unsigned int alignmask = crypto_tfm_alg_alignmask(desc->tfm);
+   unsigned long alignmask = crypto_tfm_alg_alignmask(desc->tfm);
u8 buffer[bsize * 2 + alignmask];
u8 *src = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
u8 *dst = src + bsize;
@@ -160,7 +160,7 @@
  unsigned int nbytes)
 {
struct crypto_tfm *tfm = desc->tfm;
-   unsigned int alignmask = crypto_tfm_alg_alignmask(tfm);
+   unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
u8 *iv = desc->info;
 
if (unlikely(((unsigned long)iv & alignmask))) {
@@ -424,7 +424,7 @@
}

if (ops->cit_mode == CRYPTO_TFM_MODE_CBC) {
-   unsigned int align;
+   unsigned long align;
unsigned long addr;

switch (crypto_tfm_alg_blocksize(tfm)) {

