[PATCH] crypto: testmgr: Use linear alias for test input

2016-12-19 Thread Laura Abbott
Christopher Covington reported a crash on aarch64 on recent Fedora
kernels:

kernel BUG at ./include/linux/scatterlist.h:140!
Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
Modules linked in:
CPU: 2 PID: 752 Comm: cryptomgr_test Not tainted 4.9.0-11815-ge93b1cc #162
Hardware name: linux,dummy-virt (DT)
task: 80007c650080 task.stack: 8891
PC is at sg_init_one+0xa0/0xb8
LR is at sg_init_one+0x24/0xb8
...
[] sg_init_one+0xa0/0xb8
[] test_acomp+0x10c/0x438
[] alg_test_comp+0xb0/0x118
[] alg_test+0x17c/0x2f0
[] cryptomgr_test+0x44/0x50
[] kthread+0xf8/0x128
[] ret_from_fork+0x10/0x50

The test vectors used for input are part of the kernel image. These
inputs are passed as a buffer to sg_init_one which eventually blows up
with BUG_ON(!virt_addr_valid(buf)). On arm64, virt_addr_valid returns
false for the kernel image since virt_to_page will not return the
correct page. The kernel image is also aliased to the linear map so get
the linear alias and pass that to the scatterlist instead.

Reported-by: Christopher Covington 
Fixes: d7db7a882deb ("crypto: acomp - update testmgr with support for acomp")
Signed-off-by: Laura Abbott 
---
x86 supports virt_addr_valid working on kernel image addresses but arm64 is
more strict. This is the direction things have been moving with my
CONFIG_DEBUG_VIRTUAL series for arm64 which is tightening the definition of
__pa/__pa_symbol.
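
For reference, the check that fires is the CONFIG_DEBUG_SG sanity test in
sg_set_buf() (include/linux/scatterlist.h), which sg_init_one() ends up
calling; roughly:

static inline void sg_set_buf(struct scatterlist *sg, const void *buf,
                              unsigned int buflen)
{
#ifdef CONFIG_DEBUG_SG
        BUG_ON(!virt_addr_valid(buf));  /* the BUG_ON in the trace above */
#endif
        sg_set_page(sg, virt_to_page(buf), buflen, offset_in_page(buf));
}

virt_to_page()/virt_addr_valid() are only meaningful for linear-map addresses
on arm64, hence passing the image address through __va(__pa_symbol()) below.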
---
 crypto/testmgr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index f616ad7..f5bac10 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -1464,7 +1464,7 @@ static int test_acomp(struct crypto_acomp *tfm, struct comp_testvec *ctemplate,
 
memset(output, 0, dlen);
init_completion();
-   sg_init_one(, ctemplate[i].input, ilen);
+   sg_init_one(, __va(__pa_symbol(ctemplate[i].input)), ilen);
sg_init_one(, output, dlen);
 
req = acomp_request_alloc(tfm);
@@ -1513,7 +1513,7 @@ static int test_acomp(struct crypto_acomp *tfm, struct comp_testvec *ctemplate,
 
memset(output, 0, dlen);
init_completion();
-   sg_init_one(, dtemplate[i].input, ilen);
+   sg_init_one(, __va(__pa_symbol(dtemplate[i].input)), ilen);
sg_init_one(, output, dlen);
 
req = acomp_request_alloc(tfm);
-- 
2.7.4



Re: HalfSipHash Acceptable Usage

2016-12-19 Thread Jason A. Donenfeld
Hi JP,

On Mon, Dec 19, 2016 at 9:49 PM, Jean-Philippe Aumasson wrote:
>
> On Mon, Dec 19, 2016 at 6:32 PM Jason A. Donenfeld  wrote:
>>
>> Hi JP,
>>
>> With the threads getting confusing, I've been urged to try and keep
>> the topics and threads more closely constrained. Here's where we're
>> at, and here's the current pressing security concern. It'd be helpful
>> to have a definitive statement on what you think is best, so we can
>> just build on top of that, instead of getting lost in the chorus of
>> opinions.
>>
>> 1) Anything that requires actual long-term security will use
>> SipHash2-4, with the 64-bit output and the 128-bit key. This includes
>> things like TCP sequence numbers. This seems pretty uncontroversial to
>> me. Seem okay to you?
>
>
>
> Right, since 2012 when we published SipHash many cryptanalysts attempted to
> break SipHash-2-4 with a 128-bit key, for various notions of "break", and
> nothing worth worrying about was ever found. I'm totally confident that
> SipHash-2-4 will live up to its security promises.
>
> Don't use something weaker for things like TCP sequence numbers or RNGs. Use
> SipHash2-4 for those. That is the correct choice.
>
>>
>>
>> 2) People seem to want something competitive, performance-wise, with
>> jhash if it's going to replace jhash. The kernel community
>> instinctively pushes back on anything that could harm performance,
>> especially in networking and in critical data structures, so there
>> have been some calls for something faster than SipHash. So, questions
>> regarding this:
>>
>
> No free lunch I guess: either go with a cryptographically secure,
> time-proved keyed hash such as SipHash, or go with some simpler hash deemed
> secure cos its designer can't break it :) #DontRollYourOwnCrypto
>
>> 2a) George thinks that HalfSipHash on 32-bit systems will have roughly
>> comparable speed as SipHash on 64-bit systems, so the idea would be to
>> use HalfSipHash on 32-bit systems' hash tables and SipHash on 64-bit
>> systems' hash tables. The big obvious question is: does HalfSipHash
>> have a sufficient security margin for hashtable usage and hashtable
>> attacks? I'm not wondering about the security margin for other usages,
>> but just of the hashtable usage. In your opinion, does HalfSipHash cut
>> it?
>
>
> HalfSipHash takes its core function from Chaskey and uses the same
> construction as SipHash, so it *should* be secure. Nonetheless it hasn't
> received the same amount of attention as 64-bit SipHash did. So I'm less
> confident about its security than about SipHash's, but it obviously inspires
> a lot more confidence than non-crypto hashes.
>
> Too, HalfSipHash only has a 64-bit key, not a 128-bit key like SipHash, so
> only use this as a mitigation for hash-flooding attacks, where the output of
> the hash function is never directly shown to the caller. Do not use
> HalfSipHash for TCP sequence numbers or RNGs.
>
>
>>
>>
>> 2b) While I certainly wouldn't consider making the use case in
>> question (1) employ a weaker function, for this question (2), there
>> has been some discussion about using HalfSipHash1-3 (or SipHash1-3 on
>> 64-bit) instead of 2-4. So, the same question is therefore posed:
>> would using HalfSipHash1-3 give a sufficient security margin for
>> hashtable usage and hashtable attacks?
>
>
> My educated guess is that yes, it will, but that it may not withstand
> cryptanalysis as a pseudorandom function (PRF). For example I wouldn't be
> surprised if there were a "distinguishing attack" that detects non-random
> patterns in HalfSipHash-1-3's output. But most of the non-crypto hashes I've
> seen have obvious distinguishing attacks. So the upshot is that HSH will get
> you better security than AnyWeakHash even with 1 & 3 rounds.
>
> So, if you're willing to compromise on security, but still want something
> not completely unreasonable, you might be able to get away with using
> HalfSipHash1-3 as a replacement for jhash—in circumstances where the output
> of the hash function is kept secret—in order to mitigate hash-flooding
> attacks.
>

Thanks for the detailed response. I will continue exactly as you've specified.

1. SipHash2-4 for TCP sequence numbers, syncookies, and RNG. IOW, the
things that MD5 is used for now.

2. HalfSipHash1-3 for hash tables where the output is not revealed,
for jhash replacements. On 64-bit this will alias to SipHash1-3.

3. I will write Documentation/siphash.txt detailing this.

4. I'll continue to discourage other kernel developers from rolling
their own crypto or departing from the tried and true in substantial ways.
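
As a rough illustration of those two tiers at a call site (a sketch only; the
helper and variable names below assume the interface from the posted series,
essentially what was later merged as <linux/siphash.h>):

#include <linux/siphash.h>

static siphash_key_t  net_secret;   /* 128-bit key for long-term secrets */
static hsiphash_key_t ht_secret;    /* 64-bit key, hash-table use only */

/* tier 1: e.g. TCP sequence number material, SipHash-2-4 */
u64 seq = siphash_3u32(saddr, daddr, ((u32)sport << 16) | dport, &net_secret);

/* tier 2: jhash replacement where the output stays internal;
 * on 64-bit this aliases back to SipHash */
u32 bucket = hsiphash_1u32(obj_id, &ht_secret) & (table_size - 1);

Both keys get filled once with get_random_bytes() at boot.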

Thanks again,
Jason


RE: [PATCH v5 1/4] siphash: add cryptographically secure PRF

2016-12-19 Thread George Spelvin
David Laight wrote:
> From: George Spelvin
...
>> uint32_t
>> hsiphash24(char const *in, size_t len, uint32_t const key[2])
>> {
>>  uint32_t c = key[0];
>>  uint32_t d = key[1];
>>  uint32_t a = 0x6c796765 ^ 0x736f6d65;
>>  uint32_t b = d ^ 0x74656462 ^ 0x646f7261;

> I've not looked closely, but is that (in some sense) duplicating
> the key length?
> So you could set a = key[2] and b = key[3] and still have a
> working hash - albeit not exactly the one specified.

That's tempting, but not necessarily effective.  (A similar unsuccessful
idea can be found in discussions of "DES with independent round keys".
Or see the design discussion of Salsa20 and the constants in its input.)

You can increase the key size, but that might not increase the *security*
any.

The big issue is that there are a *lot* of square root attacks in
cryptanalysis.  Because SipHash's state is twice the size of the key,
such an attack will have the same complexity as key exhaustion and need
not be considered.  To make a stronger security claim, you need to start
working through them all and show that they don't apply.

For SipHash in particular, an important property is asymmetry of the
internal state.  That's what duplicating the key with XORs guarantees.
If the two halves of the state end up identical, the mixing is much
weaker.

Now the probability of ending up in a "mirror state" is one over the square
root of the number of states (1 in 2^64 for HalfSipHash's 128-bit state),
which is the same probability as guessing a key, so it's not a
problem that has to be considered when making a 64-bit security claim.

But if you want a higher security level, you have to think about
what can happen.

That said, I have been thinking very hard about

a = c ^ 0x48536970; /* 'HSip' */
d = key[2];

By guaranteeing that a and c are different, we get the desired
asymmetry, and the XOR of b and d is determined by the first word of
the message anyway, so this isn't weakening anything.

96 bits is far beyond the reach of any brute-force attack, and if a
more sophisticated 64-bit attack exists, it's at least out of the reach
of the script kiddies, and will almost certainly have a non-negligible
constant factor and more limits on when it can be applied.

> Is it worth using the 32bit hash for IP addresses on 64bit systems that
> can't do misaligned accesses?

Not a good idea.  To hash 64 bits of input:

* Full SipHash has to do two loads, a shift, an or, and two rounds of mixing.
* HalfSipHash has to do a load, two rounds, another load, and two more rounds.

In other words, in addition to being less secure, it's half the speed.  
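
In code, hashing those same 64 bits each way looks roughly like this (sketch
only; helper names as in the posted series / the later <linux/siphash.h>):

u64 h64 = siphash_1u64(val, &key);        /* full SipHash: one 64-bit word */
u32 h32 = hsiphash_2u32(lo, hi, &hkey);   /* HalfSipHash: two 32-bit words */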

Also, what systems are you thinking about?  x86, ARMv8, PowerPC, and
S390 (and ia64, if anyone cares) all handle unaligned loads.  MIPS has
efficient support.  Alpha and HPPA are for retrocomputing fans, not
people who care about performance.

So you're down to SPARC.  Which conveniently has the same maintainer as
the networking code, so I figure DaveM can take care of that himself. :-)


HalfSipHash Acceptable Usage

2016-12-19 Thread Jason A. Donenfeld
Hi JP,

With the threads getting confusing, I've been urged to try and keep
the topics and threads more closely constrained. Here's where we're
at, and here's the current pressing security concern. It'd be helpful
to have a definitive statement on what you think is best, so we can
just build on top of that, instead of getting lost in the chorus of
opinions.

1) Anything that requires actual long-term security will use
SipHash2-4, with the 64-bit output and the 128-bit key. This includes
things like TCP sequence numbers. This seems pretty uncontroversial to
me. Seem okay to you?

2) People seem to want something competitive, performance-wise, with
jhash if it's going to replace jhash. The kernel community
instinctively pushes back on anything that could harm performance,
especially in networking and in critical data structures, so there
have been some calls for something faster than SipHash. So, questions
regarding this:

2a) George thinks that HalfSipHash on 32-bit systems will have roughly
comparable speed as SipHash on 64-bit systems, so the idea would be to
use HalfSipHash on 32-bit systems' hash tables and SipHash on 64-bit
systems' hash tables. The big obvious question is: does HalfSipHash
have a sufficient security margin for hashtable usage and hashtable
attacks? I'm not wondering about the security margin for other usages,
but just of the hashtable usage. In your opinion, does HalfSipHash cut
it?

2b) While I certainly wouldn't consider making the use case in
question (1) employ a weaker function, for this question (2), there
has been some discussion about using HalfSipHash1-3 (or SipHash1-3 on
64-bit) instead of 2-4. So, the same question is therefore posed:
would using HalfSipHash1-3 give a sufficient security margin for
hashtable usage and hashtable attacks?

My plan is essentially to implement things according to your security
recommendation. The thread started with me pushing a heavy duty
security solution -- SipHash2-4 -- for _everything_. I've received
understandable push back on that notion for certain use cases. So now
I'm trying to discover what the most acceptable compromise is. Your
answers on (2a) and (2b) will direct that compromise.

Thanks again,
Jason


Re: [kernel-hardening] Re: [PATCH v5 1/4] siphash: add cryptographically secure PRF

2016-12-19 Thread Jason A. Donenfeld
Hi Ted,

On Sat, Dec 17, 2016 at 4:41 PM, Theodore Ts'o  wrote:
> On Fri, Dec 16, 2016 at 09:15:03PM -0500, George Spelvin wrote:
>> >> - Ted, Andy Lutomirski and I will try to figure out a construction of
>> >>   get_random_long() that we all like.
>
> We don't have to find the most optimal solution right away; we can
> approach this incrementally, after all.

Thanks for your call for moderation. This is the impression I have too.
And with all the back and forth of these threads, I fear nothing will
get done. I'm going to collect the best ideas amongst all of these,
and try to get it merged. Then after that we can incrementally improve
on it.

David Miller -- would you merge something into your 4.11 tree? Seems
like you might be the guy for this, since the changes primarily affect
net/*.

Latest patches are here:
https://git.zx2c4.com/linux-dev/log/?h=siphash


> So long as we replace get_random_{long,int}() with something which is
> (a) strictly better in terms of security given today's use of MD5, and
> (b) which is strictly *faster* than the current construction on 32-bit
> and 64-bit systems, we can do that, and can try to make it be faster
> while maintaining some minimum level of security which is sufficient
> for all current users of get_random_{long,int}() and which can be
> clearly articulated for future users of get_random_{long,int}().
>
> The main worry at this point I have is benchmarking siphash on a
> 32-bit system.  It may be that simply batching the chacha20 output so
> that we're using the urandom construction more efficiently is the
> better way to go, since that *does* meet the criterion of strictly more
> secure and strictly faster than the current MD5 solution.  I'm open to
> using siphash, but I want to see the 32-bit numbers first.

Sure, I'll run some benchmarks and report back.

> As far as half-siphash is concerned, it occurs to me that the main
> problem will be those users who need to guarantee that output can't be
> guessed over a long period of time.  For example, if you have a
> long-running process, then the output needs to remain unguessable over
> potentially months or years, or else you might be weakening the ASLR
> protections.  If on the other hand, the hash table or the process will
> be going away in a matter of seconds or minutes, the requirements with
> respect to cryptographic strength go down significantly.
>
> Now, maybe this doesn't matter that much if we can guarantee (or make
> assumptions) that the attacker doesn't have unlimited access to the
> output stream of get_random_{long,int}(), or if it's being used in an
> anti-DOS use case where it ultimately only needs to be harder than
> alternate ways of attacking the system.

The only acceptable usage of HalfSipHash is for hash table lookups
where top security isn't actually a goal. I'm still a bit queasy about
going with it, but George S has very aggressively been pursuing a
"save every last cycle" agenda, which makes sense given how
performance sensitive certain hash tables are, and so I suspect this
could be an okay compromise between performance and security. But,
only for hash tables. Certainly not for the RNG.

>
> Rekeying every five minutes doesn't necessarily help with respect
> to ASLR, but it might reduce the amount of the output stream that
> would be available to the attacker in order to be able to attack the
> get_random_{long,int}() generator, and it also reduces the value of
> doing that attack to only compromising the ASLR for those processes
> started within that five minute window.

The current implementation of get_random_int/long in my branch uses
128-bit key siphash, so the chances of compromising the key are pretty
much nil. The periodic rekeying is to protect against direct-ram
attacks or other kernel exploits -- a concern brought up by Andy.

With siphash, which takes a 128-bit key, if you got an RNG output once
every picosecond, I believe it would take approximately 10^19 years...
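
A rough sketch of that shape (hypothetical names, not the actual patch):

static siphash_key_t rnd_key;        /* 128-bit key from get_random_bytes() */
static atomic64_t rnd_ctr;
static unsigned long rnd_rekey_deadline;

u64 sketch_get_random_u64(void)
{
        /* periodic rekey: limits what a later key/RAM compromise can correlate */
        if (time_after(jiffies, rnd_rekey_deadline)) {
                get_random_bytes(&rnd_key, sizeof(rnd_key));
                rnd_rekey_deadline = jiffies + 5 * 60 * HZ;
        }
        return siphash_1u64(atomic64_inc_return(&rnd_ctr), &rnd_key);
}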

Jason


Re: [PATCH v5 1/4] siphash: add cryptographically secure PRF

2016-12-19 Thread Jason A. Donenfeld
On Sat, Dec 17, 2016 at 3:55 PM, Jeffrey Walton  wrote:
> It may be prudent to include the endian reversal in the test to ensure
> big endian machines produce expected results. Some closely related
> testing on an old Apple PowerMac G5 revealed that result needed to be
> reversed before returning it to a caller.

The function [1] returns a u64. Originally I had it returning a
__le64, but that was considered unnecessary by many prior reviewers on
the list. It returns an integer. If you want uniform bytes out of it,
then use the endian conversion function, the same as you would do with
any other type of integer.

Additionally, this function is *not* meant for af_alg or any of the
crypto/* code. It's very unlikely to find a use there.


> Forgive my ignorance... I did not find reading on using the primitive
> in a PRNG. Does anyone know what Aumasson or Bernstein have to say?
> Aumasson's site does not seem to discuss the use case:

He's on this thread so I suppose he can speak up for himself. But in
my conversations with him, the primary take-away was, "seems okay to
me!". But please -- JP - correct me if I've misinterpreted.


Re: [RFC PATCH 1/3] crypto: zip - Add ThunderX ZIP driver core

2016-12-19 Thread Sasha Levin
On Mon, Dec 12, 2016 at 10:04 AM, Jan Glauber  wrote:
> +/* error messages */
> +#define zip_err(fmt, args...) pr_err("ZIP ERR:%s():%d: " \
> + fmt "\n", __func__, __LINE__, ## args)
> +
> +#ifdef MSG_ENABLE
> +/* Enable all messages */
> +#define zip_msg(fmt, args...) pr_info("ZIP_MSG:" fmt "\n", ## args)
> +#else
> +#define zip_msg(fmt, args...)
> +#endif
> +
> +#if defined(ZIP_DEBUG_ENABLE) && defined(MSG_ENABLE)
> +
> +#ifdef DEBUG_LEVEL
> +
> +#define FILE_NAME (strrchr(__FILE__, '/') ? strrchr(__FILE__, '/') + 1 : \
> +   strrchr(__FILE__, '\\') ? strrchr(__FILE__, '\\') + 1 : __FILE__)
> +
> +#if DEBUG_LEVEL >= 4
> +
> +#define zip_dbg(fmt, args...) pr_info("ZIP DBG: %s: %s() : %d: " \
> + fmt "\n", FILE_NAME, __func__, __LINE__, ## args)
> +
> +#define zip_dbg_enter(fmt, args...) pr_info("ZIP_DBG: %s() in %s" \
> +   fmt "\n", __func__, FILE_NAME, ## args)
> +
> +#define zip_dbg_exit(fmt, args...) pr_info("ZIP_DBG:Exit %s() in %s" \
> +  fmt "\n", __func__, FILE_NAME, ## args)
> +
> +#elif DEBUG_LEVEL >= 3
> +
> +#define zip_dbg(fmt, args...) pr_info("ZIP DBG: %s: %s() : %d: " \
> + fmt "\n", FILE_NAME, __func__, __LINE__, ## args)
> +
> +#elif DEBUG_LEVEL >= 2
> +
> +#define zip_dbg(fmt, args...) pr_info("ZIP DBG: %s() : %d: " \
> + fmt "\n", __func__, __LINE__, ## args)
> +
> +#else
> +
> +#define zip_dbg(fmt, args...) pr_info("ZIP DBG:" fmt "\n", ## args)
> +
> +#endif /* DEBUG LEVEL >= */
> +
> +#if DEBUG_LEVEL <= 3
> +
> +#define zip_dbg_enter(fmt, args...)
> +#define zip_dbg_exit(fmt, args...)
> +
> +#endif /* DEBUG_LEVEL <= 3 */
> +#else
> +
> +#define zip_dbg(fmt, args...) pr_info("ZIP DBG:" fmt "\n", ## args)
> +
> +#define zip_dbg_enter(fmt, args...)
> +#define zip_dbg_exit(fmt, args...)
> +
> +#endif /* DEBUG_LEVEL */
> +#else
> +
> +#define zip_dbg(fmt, args...)
> +#define zip_dbg_enter(fmt, args...)
> +#define zip_dbg_exit(fmt, args...)
> +
> +#endif /* ZIP_DEBUG_ENABLE */

The whole zip_dbg_[enter,exit] thing can just be done with tracepoints
instead of reinventing the wheel. No reason to carry that code.
> +u32 zip_load_instr(union zip_inst_s *instr,
> +  struct zip_device *zip_dev)
> +{
> +   union zip_quex_doorbell dbell;
> +   u32 queue = 0;
> +   u32 consumed = 0;
> +   u64 *ncb_ptr = NULL;
> +   union zip_nptr_s ncp;
> +
> +   /*
> +* Distribute the instructions between the enabled queues based on
> +* the CPU id.
> +*/
> +   if (raw_smp_processor_id() % 2 == 0)
> +   queue = 0;
> +   else
> +   queue = 1;

Is this just a simplistic load-balancing scheme? There's no guarantee
that the CPU won't change later on.

> +
> +   /* Poll for the IQ cmd completion code */
> +   zip_dbg_exit();

The comment doesn't match what actually happens?

> +static inline u64 zip_depth(void)
> +{
> +   struct zip_device *zip_dev = zip_get_device(zip_get_node_id());
> +
> +   if (!zip_dev)
> +   return -ENODEV;
> +
> +   return zip_dev->depth;
> +}
> +
> +static inline u64 zip_onfsize(void)
> +{
> +   struct zip_device *zip_dev = zip_get_device(zip_get_node_id());
> +
> +   if (!zip_dev)
> +   return -ENODEV;
> +
> +   return zip_dev->onfsize;
> +}
> +
> +static inline u64 zip_ctxsize(void)
> +{
> +   struct zip_device *zip_dev = zip_get_device(zip_get_node_id());
> +
> +   if (!zip_dev)
> +   return -ENODEV;
> +
> +   return zip_dev->ctxsize;
> +}

Where is it all used?

> +/*
> + * Allocates new ZIP device structure
> + * Returns zip_device pointer or NULL if cannot allocate memory for zip_device
> + */
> +static struct zip_device *zip_alloc_device(struct pci_dev *pdev)
> +{
> +   struct zip_device *zip = NULL;
> +   int idx = 0;
> +
> +   for (idx = 0; idx < MAX_ZIP_DEVICES; idx++) {
> +   if (!zip_dev[idx])
> +   break;
> +   }
> +
> +   zip = kzalloc(sizeof(*zip), GFP_KERNEL);
> +
> +   if (!zip)
> +   return NULL;
> +
> +   zip_dev[idx] = zip;

If for some odd reason we try to allocate more than MAX_ZIP_DEVICES
this will deref an invalid pointer.
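
Something like this would close that hole (sketch only):

        for (idx = 0; idx < MAX_ZIP_DEVICES; idx++)
                if (!zip_dev[idx])
                        break;

        if (idx == MAX_ZIP_DEVICES)
                return NULL;    /* all slots taken, don't index past the array */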

> +/**
> + * zip_get_device - Get ZIP device based on node id of cpu
> + *
> + * @node: Node id of the current cpu
> + * Return: Pointer to Zip device structure
> + */
> +struct zip_device *zip_get_device(int node)
> +{
> +   if ((node < MAX_ZIP_DEVICES) && (node >= 0))
> +   return zip_dev[node];
> +
> +   zip_err("ZIP device not found for node id %d\n", node);
> +   return NULL;
> +}
> +
> +/**
> + * zip_get_node_id - Get the node id of the current cpu
> + *
> + * Return: Node id of the current cpu
> + */
> +int zip_get_node_id(void)
> +{
> +   return 

Re: [RFC PATCH 1/3] crypto: zip - Add ThunderX ZIP driver core

2016-12-19 Thread Jan Glauber
Hi Corentin,

thanks for your review! Your comments all look reasonable to me,
Mahipal will address them.

Since I posted this series at the beginning of the merge window
I'd like to wait for some more time before we post an updated version.

--Jan

On Tue, Dec 13, 2016 at 02:39:00PM +0100, Corentin Labbe wrote:
> Hello
> 
> I have some comment below
> 
> On Mon, Dec 12, 2016 at 04:04:37PM +0100, Jan Glauber wrote:
> > From: Mahipal Challa 
> > 
> [...]
> > --- a/drivers/crypto/Makefile
> > +++ b/drivers/crypto/Makefile
> > @@ -27,6 +27,7 @@ obj-$(CONFIG_CRYPTO_DEV_MXC_SCC) += mxc-scc.o
> >  obj-$(CONFIG_CRYPTO_DEV_TALITOS) += talitos.o
> >  obj-$(CONFIG_CRYPTO_DEV_UX500) += ux500/
> >  obj-$(CONFIG_CRYPTO_DEV_QAT) += qat/
> > +obj-$(CONFIG_CRYPTO_DEV_CAVIUM_ZIP) += cavium/
> >  obj-$(CONFIG_CRYPTO_DEV_QCE) += qce/
> >  obj-$(CONFIG_CRYPTO_DEV_VMX) += vmx/
> >  obj-$(CONFIG_CRYPTO_DEV_SUN4I_SS) += sunxi-ss/
> 
> Try to keep some alphabetical order
> 
> [...]
> > +/* ZIP invocation result completion status codes */
> > +#define ZIP_NOTDONE0x0
> > +
> > +/* Successful completion. */
> > +#define ZIP_SUCCESS0x1
> > +
> > +/* Output truncated */
> > +#define ZIP_DTRUNC 0x2
> > +
> > +/* Dynamic Stop */
> > +#define ZIP_DYNAMIC_STOP   0x3
> > +
> > +/* Uncompress ran out of input data when IWORD0[EF] was set */
> > +#define ZIP_ITRUNC 0x4
> > +
> > +/* Uncompress found the reserved block type 3 */
> > +#define ZIP_RBLOCK 0x5
> > +
> > +/* Uncompress found LEN != ZIP_NLEN in an uncompressed block in the input */
> > +#define ZIP_NLEN   0x6
> > +
> > +/* Uncompress found a bad code in the main Huffman codes. */
> > +#define ZIP_BADCODE0x7
> > +
> > +/* Uncompress found a bad code in the 19 Huffman codes encoding lengths. */
> > +#define ZIP_BADCODE2   0x8
> > +
> > +/* Compress found a zero-length input. */
> > +#define ZIP_ZERO_LEN   0x9
> > +
> > +/* The compress or decompress encountered an internal parity error. */
> > +#define ZIP_PARITY 0xA
> 
> Perhaps all errors could begin with ZIP_ERR_xxx ?
> 
> [...]
> > +static inline u64 zip_depth(void)
> > +{
> > +   struct zip_device *zip_dev = zip_get_device(zip_get_node_id());
> > +
> > +   if (!zip_dev)
> > +   return -ENODEV;
> > +
> > +   return zip_dev->depth;
> > +}
> 
> This function is not used.
> 
> > +
> > +static inline u64 zip_onfsize(void)
> > +{
> > +   struct zip_device *zip_dev = zip_get_device(zip_get_node_id());
> > +
> > +   if (!zip_dev)
> > +   return -ENODEV;
> > +
> > +   return zip_dev->onfsize;
> > +}
> 
> Same for this
> 
> > +
> > +static inline u64 zip_ctxsize(void)
> > +{
> > +   struct zip_device *zip_dev = zip_get_device(zip_get_node_id());
> > +
> > +   if (!zip_dev)
> > +   return -ENODEV;
> > +
> > +   return zip_dev->ctxsize;
> > +}
> 
> Again
> 
> [...]
> > +
> > +/*
> > + * Allocates new ZIP device structure
> > + * Returns zip_device pointer or NULL if cannot allocate memory for zip_device
> > + */
> > +static struct zip_device *zip_alloc_device(struct pci_dev *pdev)
> 
> pdev is not used, so you can remove it from arglist.
> Or keep it and use devm_kzalloc for allocating zip
> 
> > +{
> > +   struct zip_device *zip = NULL;
> > +   int idx = 0;
> > +
> > +   for (idx = 0; idx < MAX_ZIP_DEVICES; idx++) {
> > +   if (!zip_dev[idx])
> > +   break;
> > +   }
> > +
> > +   zip = kzalloc(sizeof(*zip), GFP_KERNEL);
> > +
> > +   if (!zip)
> > +   return NULL;
> > +
> > +   zip_dev[idx] = zip;
> > +   zip->index = idx;
> > +   return zip;
> > +}
> 
> [...]
> > +static int zip_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
> > +{
> > +   struct device *dev = >dev;
> > +   struct zip_device *zip = NULL;
> > +   int err;
> > +
> > +   zip_dbg_enter();
> > +
> > +   zip = zip_alloc_device(pdev);
> > +
> > +   if (!zip)
> > +   return -ENOMEM;
> > +
> > +   pr_info("Found ZIP device %d %x:%x on Node %d\n", zip->index,
> > +   pdev->vendor, pdev->device, dev_to_node(dev));
> 
> You use lots of pr_info, why not use more dev_info/dev_err?
> 
> > +
> > +   zip->pdev = pdev;
> > +
> > +   pci_set_drvdata(pdev, zip);
> > +
> > +   err = pci_enable_device(pdev);
> > +   if (err) {
> > +   zip_err("Failed to enable PCI device");
> > +   goto err_free_device;
> > +   }
> > +
> > +   err = pci_request_regions(pdev, DRV_NAME);
> > +   if (err) {
> > +   zip_err("PCI request regions failed 0x%x", err);
> > +   goto err_disable_device;
> > +   }
> > +
> > +   err = pci_set_dma_mask(pdev, DMA_BIT_MASK(48));
> > +   if (err) {
> > +   dev_err(dev, "Unable to get usable DMA configuration\n");
> > +   goto err_release_regions;
> > +   }
> > +
> > +   err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(48));
> > +   if (err) {
> > +   dev_err(dev, "Unable to get 

RE: [PATCH v5 1/4] siphash: add cryptographically secure PRF

2016-12-19 Thread David Laight
From: George Spelvin
> Sent: 17 December 2016 15:21
...
> uint32_t
> hsiphash24(char const *in, size_t len, uint32_t const key[2])
> {
>   uint32_t c = key[0];
>   uint32_t d = key[1];
>   uint32_t a = 0x6c796765 ^ 0x736f6d65;
>   uint32_t b = d ^ 0x74656462 ^ 0x646f7261;

I've not looked closely, but is that (in some sense) duplicating
the key length?
So you could set a = key[2] and b = key[3] and still have a
working hash - albeit not exactly the one specified.

I'll add another comment here...
Is it worth using the 32bit hash for IP addresses on 64bit systems that
can't do misaligned accesses?

David



Re: Test AEAD/authenc algorithms from userspace

2016-12-19 Thread Harsh Jain
Hi Herbert,

TLS's default mode of operation is MAC-then-Encrypt for authenc algorithms.
Currently the framework only supports EtM, as used in IPsec, so user-space
programs like OpenSSL cannot use the af_alg interface to encrypt/decrypt in
TLS mode.
Are we going to support MAC-then-Encrypt mode in future kernel releases?
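
For reference, the EtM case that does work today can be driven over a raw
AF_ALG socket roughly like this (a minimal sketch; error handling and the
rtattr-wrapped authenc() key blob are omitted):

#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

#ifndef SOL_ALG
#define SOL_ALG 279
#endif

int main(void)
{
        struct sockaddr_alg sa = {
                .salg_family = AF_ALG,
                .salg_type   = "aead",
                .salg_name   = "authenc(hmac(sha256),cbc(aes))",
        };
        int tfm = socket(AF_ALG, SOCK_SEQPACKET, 0);

        bind(tfm, (struct sockaddr *)&sa, sizeof(sa));

        /* full hmac(sha256) tag */
        setsockopt(tfm, SOL_ALG, ALG_SET_AEAD_AUTHSIZE, NULL, 32);

        /* ALG_SET_KEY must be given the authenc() key blob (HMAC key and AES
         * key wrapped in an rtattr header) before accept() will hand out a
         * request fd for sendmsg()/read(); omitted here. */

        close(tfm);
        return 0;
}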


Regards
Harsh Jain

On Tue, May 31, 2016 at 12:35 PM, Stephan Mueller  wrote:
> On Tuesday, 31 May 2016, 12:31:16, Harsh Jain wrote:
>
> Hi Harsh,
>
>> Hi All,
>>
>> How can we open a socket of type "authenc(hmac(sha256),cbc(aes))" from a
>> userspace program? I checked the libkcapi library. It has test programs for
>> GCM/CCM. There are 3 types of approaches to Authenticated Encryption.
>> Which of them is supported in the crypto framework?
>>
>> 1) Encrypt-then-MAC (EtM)
>>  The plaintext is first encrypted, then a MAC is produced based on
>> the resulting ciphertext. The ciphertext and its MAC are sent
>> together.
>> 2) Encrypt-and-MAC (E&M)
>>  A MAC is produced based on the plaintext, and the plaintext is
>> encrypted without the MAC. The plaintext's MAC and the ciphertext are
>> sent together.
>>
>> 3) MAC-then-Encrypt (MtE)
>>  A MAC is produced based on the plaintext, then the plaintext and
>> MAC are together encrypted to produce a ciphertext based on both. The
>> ciphertext (containing an encrypted MAC) is sent.
>
> The cipher types you mention refer to the implementation of authenc(). IIRC,
> authenc implements EtM as this is mandated by IPSEC.
>
> When you use libkcapi, you should simply be able to use your cipher name with
> the AEAD API. I.e. use the examples you see for CCM or GCM and use those with
> the chosen authenc() cipher. Do you experience any issues?
>
> Ciao
> Stephan