Hi Michal:

On Tue, Jul 11, 2006 at 02:01:14PM +1200, Michal Ludvig wrote:
>
> Changes from last time:
> * fallback TFM allocated in cra_init, not in dia_init
> * dropped padlock_sha1_init and introduced padlock_sha1_cra_init (ditto
> for 256). Both sha1 and sha256 use common padlock_sha_init() now.
> * padlock_init checks for fallback modules but only prints warning if
> they are not available, i.e. doesn't prevent loading the module if
> fallbacks are not loadable.
> * doesn't hold fallback TFMs throughout the whole lifetime anymore

Looks much better!

> Anything else to address?

Just a few minor issues.

> +#include <linux/module.h>
> +#include <linux/init.h>
> +#include <linux/types.h>
> +#include <linux/errno.h>
> +#include <linux/crypto.h>
> +#include <linux/cryptohash.h>
> +#include <linux/interrupt.h>
> +#include <linux/kernel.h>
> +#include <linux/scatterlist.h>
> +#include <asm/byteorder.h>

You can drop asm/byteorder.h and linux/types.h since they're
always provided by linux/kernel.h.

> +#include "padlock.h"
> +
> +#define PADLOCK_CRA_PRIORITY 300

Perhaps this macro can be moved into padlock.h since it's shared
with AES?

> +struct padlock_sha_ctx {
> +     char            *data;
> +     size_t          used;
> +     size_t          data_len;

Do we need data_len if it's always going to be a page?

> +     int             bypass;
> +     void (*f_sha_padlock)(const char *in, char *out, int count);
> +     const char      *fallback_driver_name;

How about dropping this variable since it seems to be pretty useless?

> +#define CTX(tfm)     ((struct padlock_sha_ctx*)(crypto_tfm_ctx(tfm)))

Please put a space before the asterisk.  It's also better to use an
inline function instead of macros whenever possible, and stick with
lower-case names please :)
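Something like this (untested sketch) should do:

	static inline struct padlock_sha_ctx *ctx(struct crypto_tfm *tfm)
	{
		return crypto_tfm_ctx(tfm);
	}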

> +/* We'll need aligned address on the stack */
> +#define NEAREST_ALIGNED(ptr) ((unsigned char *)(ptr) +       \
> +     ((0x10 - ((size_t)(ptr) & 0x0F)) & 0x0F))

How about using the ALIGN macro from kernel.h?
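E.g. (untested, reusing PADLOCK_ALIGNMENT from padlock.h):

	#define NEAREST_ALIGNED(ptr) \
		((unsigned char *)ALIGN((unsigned long)(ptr), PADLOCK_ALIGNMENT))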

> +static void padlock_sha_bypass(struct crypto_tfm *tfm)
> +{
> +     if (CTX(tfm)->bypass)
> +             return;
> +
> +     BUG_ON(!CTX(tfm)->fallback_tfm);
> +
> +     crypto_digest_init(CTX(tfm)->fallback_tfm);
> +     if (CTX(tfm)->data && CTX(tfm)->used) {
> +             struct scatterlist sg[8];

How about sg[1] :)

> +             sg_set_buf(&sg[0], CTX(tfm)->data, CTX(tfm)->used);

It's shorter to say sg rather than &sg[0].
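I.e. simply:

	sg_set_buf(sg, CTX(tfm)->data, CTX(tfm)->used);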

> +static void padlock_sha_update(struct crypto_tfm *tfm, const uint8_t *data, unsigned int length)

Please use u8/u16/u32/u64 instead of uintX_t for kernel code.  Until
there is a change in general kernel policy it's better to stick with
the prevailing coding style.

Please also keep lines to 80 chars.
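With both of those fixed the prototype would read:

	static void padlock_sha_update(struct crypto_tfm *tfm, const u8 *data,
				       unsigned int length)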

> +static inline void
> +padlock_htonl_block(uint32_t *data, size_t count)

BSD style alert :)

> +{
> +     while (count--) {
> +             asm volatile ("bswapl %0" : "+r"(*data));

How about sticking with swab32? It'll do the same thing.
Perhaps you could even combine the swab with the copy to the
final output buffer.
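Something along these lines (untested; assuming the PadLock output has
already been written to an aligned local buffer "result", five words in
the SHA1 case):

	for (i = 0; i < 5; i++)
		((u32 *)out)[i] = swab32(((u32 *)result)[i]);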

> +void padlock_do_sha1(const char *in, char *out, int count)
> +{
> +     /* We can't store directly to *out as it
> +      * doesn't have to be aligned. But who cares,
> +      * it's only a few bytes... */
> +     char buf[128+16];

Does it really need 128 bytes? It's also better to put this in the
ctx since there it's easier to get the required alignment (see what
I did to the padlock AES driver).

Also the output buffer is guaranteed to be aligned because you've
set the alignment mask.
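A rough sketch of the ctx approach (untested), along the lines of what
the AES driver does:

	static inline struct padlock_sha_ctx *sha_ctx(struct crypto_tfm *tfm)
	{
		unsigned long addr = (unsigned long)crypto_tfm_ctx(tfm);
		return (struct padlock_sha_ctx *)ALIGN(addr, PADLOCK_ALIGNMENT);
	}

with cra_ctxsize padded by PADLOCK_ALIGNMENT so the aligned pointer
still fits inside the allocation.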

> +static void padlock_sha_final(struct crypto_tfm *tfm, uint8_t *out)
> +{
> +     if (unlikely(CTX(tfm)->bypass)) {
> +             BUG_ON(!CTX(tfm)->fallback_tfm);
> +             crypto_digest_final(CTX(tfm)->fallback_tfm, out);

BTW, these BUG_ON's are pointless since as soon as you use the thing
it'll oops anyway.

> +     /* For now we'll try to allocate one page. This
> +      * could eventually be configurable one day. */
> +     CTX(tfm)->data = (char*)__get_free_page(GFP_KERNEL);
> +     if (!CTX(tfm)->data)
> +             padlock_sha_bypass(tfm);

This can't work because the padlock_sha_init function will wipe out the
bypass flag.  So just fail the whole thing if you can't grab a page.
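I.e. something like (untested):

	ctx->data = (char *)__get_free_page(GFP_KERNEL);
	if (!ctx->data)
		return -ENOMEM;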

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu <[EMAIL PROTECTED]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt