> 64-bit x86_64:
> [0.509409] test_siphash: SipHash2-4 cycles: 4049181
> [0.510650] test_siphash: SipHash1-3 cycles: 2512884
> [0.512205] test_siphash: HalfSipHash1-3 cycles: 3429920
> [0.512904] test_siphash: JenkinsHash cycles: 978267
I'm not sure what these numbers
> Dozens of languages are already using this internally for their hash
> tables. Some of the BSDs already use this in their kernels. SipHash is
> a widely known high-speed solution to a widely known problem, and it's
> time we catch up.
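The numbers above compare the SipHash variants against JenkinsHash. For readers who have not seen it, the core of SipHash-2-4 is small enough to sketch in full; this is a userspace illustration written from the public algorithm description, NOT the kernel's lib/siphash.c, and it assumes a little-endian host for the memcpy-based block loads:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Userspace sketch of SipHash-2-4 for illustration only. */
static inline uint64_t rotl64(uint64_t x, unsigned r)
{
	return (x << r) | (x >> (64 - r));
}

#define SIPROUND do {					\
	v0 += v1; v1 = rotl64(v1, 13); v1 ^= v0;	\
	v0 = rotl64(v0, 32);				\
	v2 += v3; v3 = rotl64(v3, 16); v3 ^= v2;	\
	v0 += v3; v3 = rotl64(v3, 21); v3 ^= v0;	\
	v2 += v1; v1 = rotl64(v1, 17); v1 ^= v2;	\
	v2 = rotl64(v2, 32);				\
} while (0)

uint64_t siphash24(const void *data, size_t len, uint64_t k0, uint64_t k1)
{
	uint64_t v0 = k0 ^ 0x736f6d6570736575ULL;
	uint64_t v1 = k1 ^ 0x646f72616e646f6dULL;
	uint64_t v2 = k0 ^ 0x6c7967656e657261ULL;
	uint64_t v3 = k1 ^ 0x7465646279746573ULL;
	const uint8_t *p = data;
	uint8_t tail[8] = { 0 };
	uint64_t m, t;
	size_t i;

	for (i = 0; len - i >= 8; i += 8) {
		memcpy(&m, p + i, 8);
		v3 ^= m; SIPROUND; SIPROUND; v0 ^= m;
	}

	/* last block: remaining bytes, with the low byte of len on top */
	memcpy(tail, p + i, len - i);
	memcpy(&t, tail, 8);
	m = t | ((uint64_t)(len & 0xff) << 56);
	v3 ^= m; SIPROUND; SIPROUND; v0 ^= m;

	v2 ^= 0xff;	/* finalization */
	SIPROUND; SIPROUND; SIPROUND; SIPROUND;
	return v0 ^ v1 ^ v2 ^ v3;
}
```

The per-message 128-bit key is the whole point versus JenkinsHash: without the key an attacker can precompute colliding inputs and flood a hash table.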
It would be nice if the network code could be converted to
On Sun, Jun 19, 2016 at 06:00:21PM +0200, Stephan Mueller wrote:
> The LRNG with all its properties is documented in [1]. This
> documentation covers the functional discussion as well as testing of all
> aspects of entropy processing. In addition, the documentation explains
> the conducted
> In addition, on NUMA systems we make the CRNG state per-NUMA socket, to
> address the NUMA locking contention problem which Andi Kleen has been
> complaining about. I'm not entirely sure this will work well on the
> crazy big SGI systems, but they are rare. Whether they ar
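The per-socket idea can be shown with a toy model. Everything here is a hypothetical stand-in (the real code keeps a ChaCha20 state plus a spinlock per NUMA node); the point is only that draws on different nodes touch disjoint state and so never contend:

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_NODES 4

/* Toy per-node generator state; in the kernel each would also carry a
 * spinlock, and the state would be a real ChaCha20 block. */
struct toy_crng {
	uint64_t state;
};

static struct toy_crng node_crng[MAX_NODES];

/* Map a node id to its private state; different nodes -> different locks. */
static struct toy_crng *crng_for_node(int node)
{
	return &node_crng[node % MAX_NODES];
}

/* xorshift64 as a stand-in generator, NOT cryptographic. */
static uint64_t toy_draw(struct toy_crng *c)
{
	uint64_t x = c->state ? c->state : 0x9e3779b97f4a7c15ULL;
	x ^= x << 13; x ^= x >> 7; x ^= x << 17;
	c->state = x;
	return x;
}
```

With one global state the lock serializes every draw; with per-node state the contention is bounded by the CPUs on one socket.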
> Suggestions:
>
> a) Going forward, I suggest that UB should not be invoked
> unless there is a good solid reason.
Good luck rewriting most of the kernel source.
This discussion is insane!
-Andi
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a
On Wed, May 04, 2016 at 03:06:04PM -0700, John Denker wrote:
> On 05/04/2016 02:56 PM, H. Peter Anvin wrote:
> >> Beware that shifting by an amount >= the number of bits in the
> >> word remains Undefined Behavior.
>
> > This construct has been supported as a rotate since at least gcc2.
>
> How
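For reference, the rotate can be written so that no shift count ever reaches the word width; this standalone sketch of the UB-free form is still recognized by gcc and clang as a single rotate instruction:

```c
#include <stdint.h>

/* The familiar (x << r) | (x >> (32 - r)) is Undefined Behavior when
 * r == 0, because it shifts by 32.  Masking both counts keeps every
 * shift in [0, 31] while preserving rotate semantics (rotate by 32
 * equals rotate by 0). */
static inline uint32_t rotl32(uint32_t x, unsigned r)
{
	return (x << (r & 31)) | (x >> (-r & 31));
}
```

The `-r & 31` form avoids a second UB trap: writing `x >> (32 - r)` would again shift by 32 when r is 0.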
On Tue, Apr 26, 2016 at 07:04:15PM +0800, Herbert Xu wrote:
> Theodore Ts'o wrote:
> >
> > Yet another difference which I've noticed as I've been going over the
> > patches is that that since it relies on CRYPTO_DRBG, it drags in a
> > fairly large portion of the crypto subsystem,
> > > If it is the latter, can you explain where the scalability issue comes in?
> >
> > A single pool which is locked/written to does not scale. Larger systems
> > need multiple pools
>
> That would imply that even when you have a system with 1000 CPUs, you want to
> have a large amount of
On Mon, Apr 25, 2016 at 07:25:55PM +0200, Stephan Mueller wrote:
> Am Montag, 25. April 2016, 09:06:03 schrieb Andi Kleen:
>
> Hi Andi,
>
> > Sandy Harris <sandyinch...@gmail.com> writes:
> >
> > There is also the third problem of horrible scalability of /dev
Sandy Harris writes:
There is also the third problem of horrible scalability of /dev/random
output on larger systems, for which patches are getting ignored.
https://lkml.org/lkml/2016/2/10/716
Ignoring problems does not make them go away.
-Andi
--
a...@linux.intel.com
Theodore Ts'o ty...@mit.edu writes:
#undef __NR_syscalls
-#define __NR_syscalls 277
+#define __NR_syscalls 278
You need to add the syscall to kernel/sys_ni.c too, otherwise it will be
impossible to build without random device.
-Andi
--
a...@linux.intel.com -- Speaking for myself only
--
From: Andi Kleen a...@linux.intel.com
Tables used from assembler should be marked __visible to let
the compiler know.
Signed-off-by: Andi Kleen a...@linux.intel.com
---
arch/x86/crypto/camellia_glue.c | 16
crypto/aes_generic.c | 8
crypto/cast_common.c
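As a standalone illustration of what the patch does (the #define here is a local stand-in; in the kernel `__visible` comes from compiler.h and expands to `__attribute__((externally_visible))`):

```c
#include <stdint.h>

/* Local stand-in for the kernel's __visible; under -fwhole-program or
 * LTO, gcc may otherwise assume nothing outside this translation unit
 * uses the symbol and discard it -- but hand-written assembler that
 * references the table is invisible to the compiler. */
#if defined(__GNUC__) && !defined(__clang__)
#define __visible __attribute__((externally_visible))
#else
#define __visible
#endif

/* Hypothetical example table of the kind the patch annotates. */
__visible const uint32_t example_sbox[4] = {
	0x63636363, 0x7c7c7c7c, 0x77777777, 0x7b7b7b7b
};
```

Without the annotation the build works until someone enables whole-program optimization, at which point the assembler reference fails to link.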
I agree with that. Currently when I boot my PC with a new 3.4 kernel all the
ciphers from the intel-aesni module get loaded whether I need them or not. As
Jussi stated, most people using distros probably won't need the
serpent-avx-x86_64 module loaded automatically, so it's probably better
On Wed, May 30, 2012 at 01:36:51PM +0200, Johannes Goetzfried wrote:
On Tue, May 29, 2012 at 07:27:43PM -0700, Andi Kleen wrote:
Also drivers should never print anything when they cannot find hardware.
Remove that printk.
I tried to be consistent with the existing ciphers in arch/x86
Can you provide an example where it doesn't work as intended?
It works for the current algorithms except serpent.
What we could do is to use the cpuid-based probing when an algorithm
is needed to selectively load the relevant implementations instead
of all of them. However, for most
On Thu, May 31, 2012 at 07:44:48AM +1000, Herbert Xu wrote:
On Wed, May 30, 2012 at 11:40:06PM +0200, Andi Kleen wrote:
What we could do is to use the cpuid-based probing when an algorithm
is needed to selectively load the relevant implementations instead
of all of them. However
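A userspace sketch of that kind of cpuid probe (the function name is illustrative, not the kernel's; the kernel would do the check at module-init time or via module aliases):

```c
#include <cpuid.h>

/* Probe CPUID leaf 1 for the AES-NI feature flag (ECX bit 25) before
 * deciding to load an accelerated implementation.  __get_cpuid comes
 * from gcc/clang's <cpuid.h>. */
static int cpu_has_aesni(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
		return 0;
	return (ecx >> 25) & 1;
}
```

Probing first means a machine without AES-NI never pays the memory cost of the accelerated modules at all.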
From: Andi Kleen a...@linux.intel.com
This is a C port of the google snappy compressor. It has roughly
comparable compression to LZO, but is significantly faster on many file
types. For example it beats all other compressors on already
compressed data.
I ported the original C++ code over to C
From: Andi Kleen a...@linux.intel.com
Add support in btrfs for snappy compression.
This is based on the lzo code with minor modifications.
The btrfs glue code could be significantly improved over LZO
by exploiting some snappy features, but hasn't so far.
Open: implement scatter-gather support
From: Andi Kleen a...@linux.intel.com
Mainly so that ubifs can use it.
Snappy is a better compressor in the same niche as LZO.
Only lightly tested so far. Experiences welcome.
Cc: herb...@gondor.apana.org.au
Cc: dedeki...@gmail.com
Cc: adrian.hun...@intel.com
Signed-off-by: Andi Kleen
+ do {\
+ struct __scrub { typeof(*p) c[n]; };\
The typeof(*p) suggestion doesn't work. It would require p to always be
a pointer type with an accurate (for memset) sizeof(*p). In general however
we'll memset some array described
+#define ARRAY_PREVENT_DSE(p, n)
Who says the Intel compiler doesn't need this?
There was a comment in include/linux/compiler-intel.h that it's not supported.
That's true for the ia64 version, but not for the x86 version which supports
gcc compatible inline assembler. So on x86 you
On Sun, Feb 28, 2010 at 09:15:11PM -0800, Arjan van de Ven wrote:
On Sat, 27 Feb 2010 21:47:42 +0100
Roel Kluin roel.kl...@gmail.com wrote:
+void secure_bzero(void *p, size_t n)
+{
+ memset(p, 0, n);
+ ARRAY_PREVENT_DSE(p, n);
+}
+EXPORT_SYMBOL(secure_bzero);
please don't
Every byte in the [p, p+n) range must be used. If you only use the
first byte, via e.g. asm("" :: "m"(*(char *)p)), then the compiler
_will_ skip scrubbing bytes beyond the first. This works with
gcc-3.2.3 up to gcc-4.4.3.
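A minimal sketch of the barrier under discussion (the macro name is from the thread; the exact asm constraints are precisely what is being debated, and a bare memory clobber is the blunt-but-safe variant that covers the whole range rather than one byte):

```c
#include <string.h>
#include <stddef.h>

/* After the memset, tell the compiler that memory may be read through
 * the pointer: the "memory" clobber forces gcc to assume the zeroed
 * range is live, so dead-store elimination cannot drop the scrub the
 * way it can for a plain memset before kfree(). */
#define ARRAY_PREVENT_DSE(p, n) \
	__asm__ __volatile__("" : : "r"(p), "r"(n) : "memory")

void secure_bzero(void *p, size_t n)
{
	memset(p, 0, n);
	ARRAY_PREVENT_DSE(p, n);
}
```

The cost is that the clobber also forbids reordering unrelated memory accesses across it, which is why a tighter constraint would be preferable if one could be shown correct across compiler versions.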
You forgot to credit Mikael who did all the hard work figuring
this out?
roel kluin roel.kl...@gmail.com writes:
And it's wrong because the reason the memset() is there seems to be
to clear out key information that might exist on the kernel stack so that
it's more difficult for rogue code to get at things.
If the memset is optimized away then the clear out does not
Sebastian Siewior [EMAIL PROTECTED] writes:
Anything wrong with get_random_bytes()?
Whats the advantage over get_random_bytes()?
get_random_bytes() is not a _pseudo_ random number generator,
it doesn't have a seed and you cannot get repeatable sequences
out of it.
random32.c is though, but
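The distinction can be shown with a toy seedable generator (xorshift64 here as a stand-in; random32.c actually uses a Tausworthe generator, and neither is cryptographic):

```c
#include <stdint.h>

/* Toy seeded PRNG: same seed -> same sequence, which is exactly what
 * get_random_bytes() cannot and should not provide, and what test code
 * wanting reproducible "random" inputs needs. */
struct toy_rnd {
	uint64_t s;
};

static void toy_srandom(struct toy_rnd *r, uint64_t seed)
{
	r->s = seed ? seed : 1;		/* xorshift state must be non-zero */
}

static uint32_t toy_random(struct toy_rnd *r)
{
	uint64_t x = r->s;

	x ^= x << 13; x ^= x >> 7; x ^= x << 17;
	r->s = x;
	return (uint32_t)x;
}
```

A test that stores only its seed can reproduce a failing sequence exactly; with get_random_bytes() the failing input is gone once the test ends.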
FYI -- with a linus git kernel of yesterday or so I ran into a problem where
one cipher module (CBC) was compiled in and the rest modular. dm_crypt
could not resolve the cipher module until I turned it into a module.
I don't remember if cryptomanager was modular or not.
I don't have the exact
On Thursday 31 January 2008 13:48:34 Sebastian Siewior wrote:
* Andi Kleen | 2008-01-31 10:21:24 [+0100]:
FYI -- with a linus git kernel of yesterday or so I ran into a problem where
one cipher module (CBC) was compiled in and the rest modular. dm_crypt
could not resolve the cipher module
The inline and not inline performance is quite similar. I guess the
little difference here and there is due to some random ctx switches (I
Are you sure you were not just IO bound? It would have been better
to test in memory (e.g. using ramfs or just some direct test client)
-Andi
On Thu, Jan 10, 2008 at 11:17:10AM +0200, Ilpo Järvinen wrote:
On Thu, 10 Jan 2008, Andi Kleen wrote:
Ilpo Järvinen [EMAIL PROTECTED] writes:
Bloat-o-meter shows rather high readings for cast6...
Have you measured if the performance doesn't suffer from that
change? Inner loops
On Thu, Jan 10, 2008 at 02:35:29PM +0100, Sebastian Siewior wrote:
* Herbert Xu | 2008-01-10 20:27:46 [+1100]:
On Thu, Jan 10, 2008 at 10:25:55AM +0100, Andi Kleen wrote:
Then I don't think the patch should have been applied.
I disagree. There isn't any evidence showing
Ilpo Järvinen [EMAIL PROTECTED] writes:
Bloat-o-meter shows rather high readings for cast6...
Have you measured if the performance doesn't suffer from that
change? Inner loops of ciphers tend to be quite performance
sensitive and the inlines might actually help a lot.
On the other hand setkey
On Thursday 04 October 2007 10:48, Herbert Xu wrote:
On Thu, Oct 04, 2007 at 10:35:12AM +0200, Sebastian Siewior wrote:
Two last questions:
- What about the i386 assembly vs generic implementation? Do you prefer
the patch that I have send earlier (choose the assembly by default
making
What about:
Looks good.
-Andi
[crypto] do not use generic AES on i386 and x86_64
Are you sure you get the C version when both are built-in
or loaded as modules? If so then we have a bug in the priority
code.
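A toy model of the selection that "priority code" refers to (the real field is cra_priority on struct crypto_alg; the structures and values here are illustrative):

```c
#include <string.h>
#include <stddef.h>

/* Each registered implementation carries a priority; lookup by name
 * returns the highest-priority match, so an assembly variant
 * (priority ~200) wins over the generic C one (~100) when both are
 * registered. */
struct toy_alg {
	const char *name;
	int priority;
};

static const struct toy_alg *toy_alg_find(const struct toy_alg *tbl,
					  size_t n, const char *name)
{
	const struct toy_alg *best = NULL;

	for (size_t i = 0; i < n; i++) {
		if (strcmp(tbl[i].name, name) != 0)
			continue;
		if (!best || tbl[i].priority > best->priority)
			best = &tbl[i];
	}
	return best;
}
```

Getting the C version despite both being registered would mean the priority comparison, not module loading, is at fault.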
The usual use case is: Somebody -- either admin or some command
implicitly -- executes modprobe aes because some text file
specifies the aes cipher. At least on my
Not modprobe, but the crypto subsystem. If you have the generic C code
and the assembly variant it picks the assembly over C. The selection is
But only if they're both loaded. Who loads both?
In that case yes. Would it help to add MODULE_ALIAS(aes) to the
assembly version in order to load
On Mon, Aug 20, 2007 at 12:08:19PM +0200, Sebastian Siewior wrote:
* Andi Kleen | 2007-08-20 12:47:14 [+0200]:
Not modprobe, but the crypto subsystem. If you have the generic C code
and the assembly variant it picks the assembly over C. The selection is
But only if they're both loaded
That would be the best. However, it's not hard to do a
simple probing in the kernel until modprobe(8) gets this
feature.
Sounds like a big hack, and at least for aes / aes-x86_64 and
twofish it's not needed. Just disable aes on x86.
The only problem is the select issue with wireless.
On Wednesday 07 June 2006 21:37, Joachim Fritschi wrote:
On Sunday 04 June 2006 15:16, Joachim Fritschi wrote:
I have revised my initial twofish assembler patchset according to the
criticisms I received on this list:
This patch splits up the twofish crypto routine into a common part (key