On Sun, Nov 24, 2013 at 06:08:59PM -0800, Shawn Landden wrote:
Commit 35f9c09fe (tcp: tcp_sendpages() should call tcp_push() once)
added an internal flag MSG_SENDPAGE_NOTLAST, similar to
MSG_MORE.
algif_hash and algif_skcipher used MSG_MORE from tcp_sendpages()
and need to see the new flag
On Sun, Nov 24, 2013 at 10:36:28PM -0800, Shawn Landden wrote:
Commit 35f9c09fe (tcp: tcp_sendpages() should call tcp_push() once)
added an internal flag MSG_SENDPAGE_NOTLAST, similar to
MSG_MORE.
algif_hash, algif_skcipher, and udp used MSG_MORE from tcp_sendpages()
and need to see the new
On Thu, 2014-07-17 at 05:18 -0400, Theodore Ts'o wrote:
SYNOPSIS
#include <linux/random.h>
int getrandom(void *buf, size_t buflen, unsigned int flags);
Cool, I think the interface is sane.
Btw. couldn't libressl etc. fall back to binary_sysctl
kernel.random.uuid and seed with that
On Thu, 2014-07-17 at 08:52 -0400, Theodore Ts'o wrote:
On Thu, Jul 17, 2014 at 12:57:07PM +0200, Hannes Frederic Sowa wrote:
Btw. couldn't libressl etc. fall back to binary_sysctl
kernel.random.uuid and seed with that as a last resort? We have it
available for few more years.
Yes
On Sun, 2014-07-20 at 13:03 -0400, George Spelvin wrote:
In the end people would just recall getentropy in a loop and fetch 256
bytes each time. I don't think the artificial limit does make any sense.
I agree that this allows a potential misuse of the interface, but
doesn't a warning in
On Mon, 2014-07-21 at 07:21 -0400, George Spelvin wrote:
I don't like partial reads/writes and think that a lot of people get
them wrong, because they often only check for negative return values.
The v1 patch, which did it right IMHO, didn't do partial reads in the
case we're talking about:
On Mon, 2014-07-21 at 17:27 +0200, Hannes Frederic Sowa wrote:
On Mon, 2014-07-21 at 07:21 -0400, George Spelvin wrote:
I don't like partial reads/writes and think that a lot of people get
them wrong, because they often only check for negative return values.
The v1 patch, which did
Hello,
On Tue, 2014-07-22 at 00:44 -0400, Theodore Ts'o wrote:
On Tue, Jul 22, 2014 at 03:02:20AM +0200, Hannes Frederic Sowa wrote:
Ted, would it make sense to specify a 512 byte upper bound limit for
random entropy extraction (I am not yet convinced to do that for
urandom) and in case
Hi,
On Wed, Jul 23, 2014, at 00:59, Theodore Ts'o wrote:
But why would you need to use GRND_RANDOM in your scenario, and accept
your application potentially getting stalled and stuck in amber for
perhaps hours? If you are going to accept your application stalling
like that, you can do the
On Wed, Jul 23, 2014, at 13:52, George Spelvin wrote:
I keep wishing for a more general solution. For example, some way to
have a spare extra fd that could be accessed with a special O_NOFAIL
flag.
That would allow any number of library functions to not fail, such as
logging from nasty
Hi,
On Tue, 2014-07-29 at 12:32 +0300, Cristian Stoica wrote:
This set of patches introduces support for TLS 1.0 record layer
encryption/decryption with a corresponding algorithm called
tls10(hmac(hash),cbc(cipher)).
Similarly to authenc.c on which it is based, this module mixes the base
of the drop-in replacement memzero_explicit() for
exactly such cases instead of using memset().
Signed-off-by: Daniel Borkmann dbork...@redhat.com
Cc: Julia Lawall julia.law...@lip6.fr
Cc: Herbert Xu herb...@gondor.apana.org.au
Cc: Theodore Ts'o ty...@mit.edu
Cc: Hannes Frederic Sowa han
On Wed, Mar 18, 2015, at 10:53, mancha wrote:
Hi.
The kernel RNG introduced memzero_explicit in d4c5efdb9777 to protect
memory cleansing against things like dead store optimization:
void memzero_explicit(void *s, size_t count)
{
memset(s, 0, count);
On Wed, Mar 18, 2015, at 12:09, Stephan Mueller wrote:
On Wednesday, 18 March 2015 at 11:56:43, Daniel Borkmann wrote:
On 03/18/2015 11:50 AM, Hannes Frederic Sowa wrote:
On Wed, Mar 18, 2015, at 10:53, mancha wrote:
Hi.
The kernel RNG introduced memzero_explicit in d4c5efdb9777
On Wed, Mar 18, 2015, at 13:14, Stephan Mueller wrote:
On Wednesday, 18 March 2015 at 13:02:12, Hannes Frederic Sowa wrote:
Hi Hannes,
On Wed, Mar 18, 2015, at 12:09, Stephan Mueller wrote:
On Wednesday, 18 March 2015 at 11:56:43, Daniel Borkmann wrote:
On 03/18/2015 11:50 AM, Hannes
On Wed, Mar 18, 2015, at 13:42, Daniel Borkmann wrote:
On 03/18/2015 01:20 PM, Stephan Mueller wrote:
On Wednesday, 18 March 2015 at 13:19:07, Hannes Frederic Sowa wrote:
My proposal would be to add a
#define OPTIMIZER_HIDE_MEM(ptr, len) __asm__ __volatile__ ("" : : "m"(
({ struct { u8 b
/gmane.linux.kernel.cryptoapi/13764/
Fixes: d4c5efdb9777 (random: add and use memzero_explicit() for clearing
data)
Cc: Hannes Frederic Sowa han...@stressinduktion.org
Cc: Stephan Mueller smuel...@chronox.de
Cc: Theodore Ts'o ty...@mit.edu
Signed-off-by: mancha security manc...@zoho.com
Signed-off-by: Daniel Borkmann
On Wed, Mar 18, 2015, at 18:41, Theodore Ts'o wrote:
Maybe we should add a kernel self-test that automatically checks
whether or not memzero_explicit() gets optimized away? Otherwise we
might not notice when gcc or how we implement barrier() or whatever
else we end up using ends up changing.
Hello,
Stephan Mueller writes:
> On Tuesday, 24 November 2015 at 18:34:55, Herbert Xu wrote:
>
> Hi Herbert,
>
>>On Mon, Nov 23, 2015 at 09:43:02AM -0800, Dave Watson wrote:
>>> Userspace crypto interface for TLS. Currently supports gcm(aes) 128bit
>>> only, however the
On 14.12.2016 13:53, Jason A. Donenfeld wrote:
> Hi David,
>
> On Wed, Dec 14, 2016 at 10:51 AM, David Laight
> wrote:
>> From: Jason A. Donenfeld
>>> Sent: 14 December 2016 00:17
>>> This gives a clear speed and security improvement. Rather than manually
>>> filling
On 15.12.2016 16:41, David Laight wrote:
> Try (retyped):
>
> echo 'struct { long a; long long b; } s; int bar(void) { return sizeof s; }' >foo.c
> gcc [-m32] -O2 -S foo.c; cat foo.s
>
> And look at what is generated.
I used __alignof__(unsigned long long) with -m32.
>> Right now ipv6 addresses have
On 16.12.2016 00:43, Jason A. Donenfeld wrote:
> Hi Hannes,
>
> Good news.
>
> On Thu, Dec 15, 2016 at 10:45 PM, Hannes Frederic Sowa
> <han...@stressinduktion.org> wrote:
>>> How's that sound?
>>
>> I am still very much concerned about the API.
>
Hello,
On 15.12.2016 19:50, Jason A. Donenfeld wrote:
> Hi David & Hannes,
>
> This conversation is veering off course.
Why?
> I think this doesn't really
> matter at all. Gcc converts u64 into essentially a pair of u32 on
> 32-bit platforms, so the alignment requirements for 32-bit is at a
>
On 15.12.2016 22:04, Peter Zijlstra wrote:
> On Thu, Dec 15, 2016 at 09:43:04PM +0100, Jason A. Donenfeld wrote:
>> On Thu, Dec 15, 2016 at 9:31 PM, Hannes Frederic Sowa
>> <han...@stressinduktion.org> wrote:
>>> ARM64 and x86-64 have memory operations
On 15.12.2016 21:43, Jason A. Donenfeld wrote:
> On Thu, Dec 15, 2016 at 9:31 PM, Hannes Frederic Sowa
> <han...@stressinduktion.org> wrote:
>> ARM64 and x86-64 have memory operations that are not vector operations
>> that operate on 128 bit memory.
>
> Fair enough
On 15.12.2016 22:25, Jason A. Donenfeld wrote:
> On Thu, Dec 15, 2016 at 10:17 PM, Hannes Frederic Sowa
> <han...@stressinduktion.org> wrote:
>> And I was exactly questioning this.
>>
>> static unsigned int inet6_hash_frag(__be32 id
On 15.12.2016 14:56, David Laight wrote:
> From: Hannes Frederic Sowa
>> Sent: 15 December 2016 12:50
>> On 15.12.2016 13:28, David Laight wrote:
>>> From: Hannes Frederic Sowa
>>>> Sent: 15 December 2016 12:23
>>> ...
>>>> Hmm? Even the
Hey Jason,
On 14.12.2016 20:38, Jason A. Donenfeld wrote:
> On Wed, Dec 14, 2016 at 8:22 PM, Hannes Frederic Sowa
> <han...@stressinduktion.org> wrote:
>> I don't think this helps. Did you test it? I don't see reason why
>> padding could be left out between `d' and `end'
On 14.12.2016 13:46, Jason A. Donenfeld wrote:
> Hi David,
>
> On Wed, Dec 14, 2016 at 10:56 AM, David Laight
> wrote:
>> ...
>>> +u64 siphash24(const u8 *data, size_t len, const u8 key[SIPHASH24_KEY_LEN])
>> ...
>>> + u64 k0 = get_unaligned_le64(key);
>>> + u64
On 14.12.2016 19:06, Jason A. Donenfeld wrote:
> Hi David,
>
> On Wed, Dec 14, 2016 at 6:56 PM, David Miller wrote:
>> Just marking the structure __packed, whether necessary or not, makes
>> the compiler assume that the members are not aligned and causes
>> byte-by-byte
On 15.12.2016 00:29, Jason A. Donenfeld wrote:
> Hi Hannes,
>
> On Wed, Dec 14, 2016 at 11:03 PM, Hannes Frederic Sowa
> <han...@stressinduktion.org> wrote:
>> I fear that the alignment requirement will be a source of bugs on 32 bit
>> machines, where you cannot e
On 15.12.2016 12:04, David Laight wrote:
> From: Hannes Frederic Sowa
>> Sent: 14 December 2016 22:03
>> On 14.12.2016 13:46, Jason A. Donenfeld wrote:
>>> Hi David,
>>>
>>> On Wed, Dec 14, 2016 at 10:56 AM, David Laight <david.lai...@aculab.com>
&
On 15.12.2016 13:28, David Laight wrote:
> From: Hannes Frederic Sowa
>> Sent: 15 December 2016 12:23
> ...
>> Hmm? Even the Intel ABI expects alignment of unsigned long long to be 8
>> bytes on 32 bit. Do you question that?
>
> Yes.
>
> The linux ABI
Hello,
On 14.12.2016 14:10, Jason A. Donenfeld wrote:
> On Wed, Dec 14, 2016 at 12:21 PM, Hannes Frederic Sowa
> <han...@stressinduktion.org> wrote:
>> Can you show or cite benchmarks in comparison with jhash? Last time I
>> looked, especially for short inputs, siphash
On Fri, Dec 16, 2016, at 22:01, Jason A. Donenfeld wrote:
> Yes, on x86-64. But on i386 chacha20 incurs nearly the same kind of
> slowdown as siphash, so I expect the comparison to be more or less
> equal. There's another thing I really didn't like about your chacha20
> approach which is that it
On Fri, 2016-12-23 at 11:04 +0100, Daniel Borkmann wrote:
> On 12/22/2016 05:59 PM, Hannes Frederic Sowa wrote:
> > On Thu, 2016-12-22 at 08:07 -0800, Andy Lutomirski wrote:
> > > On Thu, Dec 22, 2016 at 7:51 AM, Hannes Frederic Sowa
> > > <han...@stressinduktion.or
On Thu, 2016-12-22 at 19:07 -0500, George Spelvin wrote:
> Hannes Frederic Sowa wrote:
> > A lockdep test should still be done. ;)
>
> Adding might_lock() annotations will improve coverage a lot.
Might be hard to find the correct lock we take later down the code
path, but if t
Hi,
On Fri, 2016-12-23 at 13:26 -0500, George Spelvin wrote:
> (Cc: list trimmed slightly as the topic is wandering a bit.)
>
> Hannes Frederic Sowa wrote:
> > On Thu, 2016-12-22 at 19:07 -0500, George Spelvin wrote:
> > > Adding might_lock() annotations wil
Hi,
On 24.12.2016 00:39, George Spelvin wrote:
> Hannes Frederic Sowa wrote:
>> In general this looks good, but bitbuffer needs to be protected from
>> concurrent access, thus needing at least some atomic instruction and
>> disabling of interrupts for the lo
On 23.12.2016 17:42, Andy Lutomirski wrote:
> On Fri, Dec 23, 2016 at 8:23 AM, Andy Lutomirski <l...@amacapital.net> wrote:
>> On Fri, Dec 23, 2016 at 3:59 AM, Daniel Borkmann <dan...@iogearbox.net>
>> wrote:
>>> On 12/23/2016 11:59 AM, Hannes Frederic Sowa
Hello,
On Fri, 2016-12-23 at 20:17 -0500, George Spelvin wrote:
> Hannes Frederic Sowa wrote:
> > On 24.12.2016 00:39, George Spelvin wrote:
> > > We just finished discussing why 8 bytes isn't enough. If you only
> > > feed back 8 bytes, an attacker who can d
On Thu, 2016-12-22 at 08:07 -0800, Andy Lutomirski wrote:
> On Thu, Dec 22, 2016 at 7:51 AM, Hannes Frederic Sowa
> <han...@stressinduktion.org> wrote:
> > On Thu, 2016-12-22 at 16:41 +0100, Jason A. Donenfeld wrote:
> > > Hi Hannes,
> > >
> > > On
On Thu, 2016-12-22 at 16:29 +0100, Jason A. Donenfeld wrote:
> On Thu, Dec 22, 2016 at 4:12 PM, Jason A. Donenfeld wrote:
> > As a first step, I'm considering adding a patch to move halfmd4.c
> > inside the ext4 domain, or at the very least, simply remove it from
> >
On Thu, 2016-12-22 at 16:41 +0100, Jason A. Donenfeld wrote:
> Hi Hannes,
>
> On Thu, Dec 22, 2016 at 4:33 PM, Hannes Frederic Sowa
> <han...@stressinduktion.org> wrote:
> > IPv6 you cannot touch anymore. The hashing algorithm is part of uAPI.
> > You don't want to
On 22.12.2016 00:42, Andy Lutomirski wrote:
> On Wed, Dec 21, 2016 at 3:02 PM, Jason A. Donenfeld wrote:
>> unsigned int get_random_int(void)
>> {
>> - __u32 *hash;
>> - unsigned int ret;
>> -
>> - if (arch_get_random_int(&ret))
>> - return ret;
>> -
On Thu, 2016-12-22 at 09:25 -0800, Andy Lutomirski wrote:
> On Thu, Dec 22, 2016 at 8:59 AM, Hannes Frederic Sowa
> <han...@stressinduktion.org> wrote:
> > On Thu, 2016-12-22 at 08:07 -0800, Andy Lutomirski wrote:
> > >
> > > You mean:
> > >
> >
On 22.12.2016 16:54, Theodore Ts'o wrote:
> On Thu, Dec 22, 2016 at 02:10:33PM +0100, Jason A. Donenfeld wrote:
>> On Thu, Dec 22, 2016 at 1:47 PM, Hannes Frederic Sowa
>> <han...@stressinduktion.org> wrote:
>>> following up on what appears to be a random subje
On 22.12.2016 20:56, Andy Lutomirski wrote:
> It's also not quite clear to me why userspace needs to be able to
> calculate the digest on its own. A bpf(BPF_CALC_PROGRAM_DIGEST)
> command that takes a BPF program as input and hashes it would seem to
> serve the same purpose, and that would allow
Hi Ted,
On Thu, 2016-12-22 at 00:41 -0500, Theodore Ts'o wrote:
> On Thu, Dec 22, 2016 at 03:49:39AM +0100, Jason A. Donenfeld wrote:
> >
> > Funny -- while you guys were sending this back & forth, I was writing
> > my reply to Andy which essentially arrives at the same conclusion.
> > Given
On 22.12.2016 22:11, George Spelvin wrote:
>> I do tend to like Ted's version in which we use batched
>> get_random_bytes() output. If it's fast enough, it's simpler and lets
>> us get the full strength of a CSPRNG.
>
> With the ChaCha20 generator, that's fine, although note that this abandons
>
On 22.12.2016 14:10, Jason A. Donenfeld wrote:
> On Thu, Dec 22, 2016 at 1:47 PM, Hannes Frederic Sowa
> <han...@stressinduktion.org> wrote:
>> following up on what appears to be a random subject: ;)
>>
>> IIRC, ext4 code by default still uses half_md4 for hashin
Hello,
On 29.03.2017 19:41, David Miller wrote:
> From: Aviad Yehezkel
> Date: Tue, 28 Mar 2017 16:26:17 +0300
>
>> TLS Tx crypto offload is a new feature of network devices. It
>> enables the kernel TLS socket to skip encryption and authentication
>> operations on the
Hello Dave,
On Wed, Jun 14, 2017, at 21:47, David Miller wrote:
> From: Dave Watson
> Date: Wed, 14 Jun 2017 11:36:54 -0700
>
> > This series adds support for kernel TLS encryption over TCP sockets.
> > A standard TCP socket is converted to a TLS socket using a setsockopt.
>
Hello,
On Tue, Dec 5, 2017, at 12:40, Atul Gupta wrote:
> CPL handlers for TLS session, record transmit and receive
This does very much looks like full TCP offload with TLS on top? It
would be nice if you could give a few more details in the patch
descriptions.
Bye,
Hannes