Re: [PATCH] crypto: aesni-intel - RFC4106 can zero copy when !PageHighMem

2016-12-13 Thread Dave Watson
On 12/13/16 04:32 PM, Ilya Lesokhin wrote: > --- a/arch/x86/crypto/aesni-intel_glue.c > +++ b/arch/x86/crypto/aesni-intel_glue.c > @@ -903,9 +903,11 @@ static int helper_rfc4106_encrypt(struct aead_request > *req) > *((__be32 *)(iv+12)) = counter; > > if (sg_is_last(req->src) && > -

Re: [PATCH v3 net-next 1/4] tcp: ULP infrastructure

2017-07-31 Thread Dave Watson
On 07/29/17 01:12 PM, Tom Herbert wrote: > On Wed, Jun 14, 2017 at 11:37 AM, Dave Watson wrote: > > Add the infrastructure for attaching Upper Layer Protocols (ULPs) over TCP > > sockets. Based on a similar infrastructure in tcp_cong. The idea is that > > any > > U

[RFC PATCH 0/2] Crypto kernel TLS socket

2015-11-23 Thread Dave Watson
https://lwn.net/Articles/657999/ 3) NIC offload. To support running aesni routines on the NIC instead of the processor, we would probably need enough of the framing interface put in kernel. Dave Watson (2): Crypto support aesni rfc5288 Crypto kernel tls socket arch/x86/crypto/ae

[RFC PATCH 2/2] Crypto kernel tls socket

2015-11-23 Thread Dave Watson
diff --git a/crypto/algif_tls.c b/crypto/algif_tls.c new file mode 100644 index 000..123ade3 --- /dev/null +++ b/crypto/algif_tls.c @@ -0,0 +1,1233 @@ +/* + * algif_tls: User-space interface for TLS + * + * Copyright (C) 2015, Dave Watson + * + * This file provides the user-space API for AEAD c

[RFC PATCH 1/2] Crypto support aesni rfc5288

2015-11-23 Thread Dave Watson
Support rfc5288 using Intel AES-NI routines. See also rfc5246. AAD length is 13 bytes, padded out to 16. Padding bytes currently have to be passed in via the scatterlist, which probably isn't quite the right fix. The assoclen checks were moved to the individual rfc stubs, and the common routines suppor

Re: [RFC PATCH 2/2] Crypto kernel tls socket

2015-11-23 Thread Dave Watson
On 11/23/15 02:27 PM, Sowmini Varadhan wrote: > On (11/23/15 09:43), Dave Watson wrote: > > Currently gcm(aes) represents ~80% of our SSL connections. > > > > Userspace interface: > > > > 1) A transform and op socket are created using the userspace crypto

[PATCH net-next 0/4] kernel TLS

2017-05-24 Thread Dave Watson
This series adds support for kernel TLS encryption over TCP sockets. A standard TCP socket is converted to a TLS socket using a setsockopt. Only symmetric crypto is done in the kernel, as well as TLS record framing. The handshake remains in userspace, and the negotiated cipher keys/iv are provided

[PATCH net-next 1/4] tcp: ULP infrastructure

2017-05-24 Thread Dave Watson
v4/tcp_available_ulp tls There is currently no functionality to remove or chain ULPs, but it should be possible to add these in the future if needed. Signed-off-by: Boris Pismenny Signed-off-by: Dave Watson --- include/net/inet_connection_sock.h | 4 ++ include/net/tcp.h | 25 +++

[PATCH net-next 3/4] tls: kernel TLS support

2017-05-24 Thread Dave Watson
: Ilya Lesokhin Signed-off-by: Aviad Yehezkel Signed-off-by: Dave Watson --- MAINTAINERS | 10 + include/linux/socket.h | 1 + include/net/tls.h| 223 ++ include/uapi/linux/tls.h | 79 + net/Kconfig | 1 + net/Makefile | 1

[PATCH net-next 2/4] tcp: export do_tcp_sendpages and tcp_rate_check_app_limited functions

2017-05-24 Thread Dave Watson
Signed-off-by: Dave Watson --- include/net/tcp.h | 2 ++ net/ipv4/tcp.c | 5 +++-- net/ipv4/tcp_rate.c | 1 + 3 files changed, 6 insertions(+), 2 deletions(-) diff --git a/include/net/tcp.h b/include/net/tcp.h index fcc39f8..2b35100 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h

[PATCH net-next 4/4] tls: Documentation

2017-05-24 Thread Dave Watson
Add documentation for the tcp ULP tls interface. Signed-off-by: Boris Pismenny Signed-off-by: Dave Watson --- Documentation/networking/tls.txt | 120 +++ 1 file changed, 120 insertions(+) create mode 100644 Documentation/networking/tls.txt diff --git a

[PATCH v2 net-next 0/4] kernel TLS

2017-06-06 Thread Dave Watson
This series adds support for kernel TLS encryption over TCP sockets. A standard TCP socket is converted to a TLS socket using a setsockopt. Only symmetric crypto is done in the kernel, as well as TLS record framing. The handshake remains in userspace, and the negotiated cipher keys/iv are provided

[PATCH v2 net-next 1/4] tcp: ULP infrastructure

2017-06-06 Thread Dave Watson
v4/tcp_available_ulp tls There is currently no functionality to remove or chain ULPs, but it should be possible to add these in the future if needed. Signed-off-by: Boris Pismenny Signed-off-by: Dave Watson --- include/net/inet_connection_sock.h | 4 ++ include/net/tcp.h | 25 +++

[PATCH v2 net-next 2/4] tcp: export do_tcp_sendpages and tcp_rate_check_app_limited functions

2017-06-06 Thread Dave Watson
Signed-off-by: Dave Watson --- include/net/tcp.h | 2 ++ net/ipv4/tcp.c | 5 +++-- net/ipv4/tcp_rate.c | 1 + 3 files changed, 6 insertions(+), 2 deletions(-) diff --git a/include/net/tcp.h b/include/net/tcp.h index fcc39f8..2b35100 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h

[PATCH v2 net-next 3/4] tls: kernel TLS support

2017-06-06 Thread Dave Watson
: Ilya Lesokhin Signed-off-by: Aviad Yehezkel Signed-off-by: Dave Watson --- MAINTAINERS | 10 + include/linux/socket.h | 1 + include/net/tls.h| 222 + include/uapi/linux/tls.h | 79 + net/Kconfig | 1 + net/Makefile | 1 + net

[PATCH v2 net-next 4/4] tls: Documentation

2017-06-06 Thread Dave Watson
Add documentation for the tcp ULP tls interface. Signed-off-by: Boris Pismenny Signed-off-by: Dave Watson --- Documentation/networking/tls.txt | 135 +++ 1 file changed, 135 insertions(+) create mode 100644 Documentation/networking/tls.txt diff --git a

[PATCH v3 net-next 0/4] kernel TLS

2017-06-14 Thread Dave Watson
This series adds support for kernel TLS encryption over TCP sockets. A standard TCP socket is converted to a TLS socket using a setsockopt. Only symmetric crypto is done in the kernel, as well as TLS record framing. The handshake remains in userspace, and the negotiated cipher keys/iv are provided

[PATCH v3 net-next 1/4] tcp: ULP infrastructure

2017-06-14 Thread Dave Watson
v4/tcp_available_ulp tls There is currently no functionality to remove or chain ULPs, but it should be possible to add these in the future if needed. Signed-off-by: Boris Pismenny Signed-off-by: Dave Watson --- include/net/inet_connection_sock.h | 4 ++ include/net/tcp.h | 25 +++

[PATCH v3 net-next 2/4] tcp: export do_tcp_sendpages and tcp_rate_check_app_limited functions

2017-06-14 Thread Dave Watson
Signed-off-by: Dave Watson --- include/net/tcp.h | 2 ++ net/ipv4/tcp.c | 5 +++-- net/ipv4/tcp_rate.c | 1 + 3 files changed, 6 insertions(+), 2 deletions(-) diff --git a/include/net/tcp.h b/include/net/tcp.h index b439f46..e17ec28 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h

[PATCH v3 net-next 4/4] tls: Documentation

2017-06-14 Thread Dave Watson
Add documentation for the tcp ULP tls interface. Signed-off-by: Boris Pismenny Signed-off-by: Dave Watson --- Documentation/networking/tls.txt | 135 +++ 1 file changed, 135 insertions(+) create mode 100644 Documentation/networking/tls.txt diff --git a

[PATCH v3 net-next 3/4] tls: kernel TLS support

2017-06-14 Thread Dave Watson
: Ilya Lesokhin Signed-off-by: Aviad Yehezkel Signed-off-by: Dave Watson --- MAINTAINERS | 10 + include/linux/socket.h | 1 + include/net/tls.h| 237 +++ include/uapi/linux/tls.h | 79 + net/Kconfig | 1 + net/Makefile | 1

Re: [PATCH v3 net-next 0/4] kernel TLS

2017-06-14 Thread Dave Watson
Hi Hannes, On 06/14/17 10:15 PM, Hannes Frederic Sowa wrote: > one question for this patch set: > > What is the reason for not allowing key updates for the TX path? I was > always loudly pointing out the problems with TLSv1.2 renegotiation and > TLSv1.3 key update alerts. This patch set uses encry

Re: [PATCH v3 net-next 0/4] kernel TLS

2017-06-14 Thread Dave Watson
On 06/14/17 01:54 PM, Tom Herbert wrote: > On Wed, Jun 14, 2017 at 11:36 AM, Dave Watson wrote: > > This series adds support for kernel TLS encryption over TCP sockets. > > A standard TCP socket is converted to a TLS socket using a setsockopt. > > Only symmetric crypto is d

Re: [PATCH v3 net-next 3/4] tls: kernel TLS support

2017-06-16 Thread Dave Watson
On 06/16/17 01:58 PM, Stephen Hemminger wrote: > On Wed, 14 Jun 2017 11:37:39 -0700 > Dave Watson wrote: > > > --- /dev/null > > +++ b/net/tls/Kconfig > > @@ -0,0 +1,12 @@ > > +# > > +# TLS configuration > > +# > > +config TLS > > + tr

Re: [PATCH v3 net-next 1/4] tcp: ULP infrastructure

2017-06-26 Thread Dave Watson
On 06/25/17 02:42 AM, Levin, Alexander (Sasha Levin) wrote: > On Wed, Jun 14, 2017 at 11:37:14AM -0700, Dave Watson wrote: > >Add the infrastructure for attaching Upper Layer Protocols (ULPs) over TCP > >sockets. Based on a similar infrastructure in tcp_cong. The idea is that any

Re: [PATCH v3 net-next 0/4] kernel TLS

2017-07-06 Thread Dave Watson
Hi Richard, On 07/06/17 04:30 PM, Richard Weinberger wrote: > Dave, > > On Wed, Jun 14, 2017 at 8:36 PM, Dave Watson wrote: > > Documentation/networking/tls.txt | 135 +++ > > MAINTAINERS| 10 + > > include/linux/socket.h

Re: [PATCH v3 net-next 3/4] tls: kernel TLS support

2017-07-11 Thread Dave Watson
On 07/11/17 08:29 AM, Steffen Klassert wrote: > Sorry for replying to old mail... > > +int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx) > > +{ > > ... > > > + > > + if (!sw_ctx->aead_send) { > > + sw_ctx->aead_send = crypto_alloc_aead("gcm(aes)", 0, 0); > > +

Re: [PATCH v3 net-next 3/4] tls: kernel TLS support

2017-07-12 Thread Dave Watson
On 07/12/17 09:20 AM, Steffen Klassert wrote: > On Tue, Jul 11, 2017 at 11:53:11AM -0700, Dave Watson wrote: > > On 07/11/17 08:29 AM, Steffen Klassert wrote: > > > Sorry for replying to old mail... > > > > +int tls_set_sw_offload(struct soc

Re: [RFC crypto v3 8/9] chtls: Register the ULP

2018-01-25 Thread Dave Watson
<1513769897-26945-1-git-send-email-atul.gu...@chelsio.com> On 12/20/17 05:08 PM, Atul Gupta wrote: > +static void __init chtls_init_ulp_ops(void) > +{ > + chtls_base_prot = tcp_prot; > + chtls_base_prot.hash= chtls_hash; > + chtls_base_prot.unhash =

Re: [RFC crypto v3 8/9] chtls: Register the ULP

2018-01-30 Thread Dave Watson
On 01/30/18 06:51 AM, Atul Gupta wrote: > What I was referring is that passing "tls" ulp type in setsockopt > may be insufficient to make the decision when multi HW assist Inline > TLS solution exists. Setting the ULP doesn't choose HW or SW implementation; I think that should be done later when

Re: [PATCHv2] tls: Add support for encryption using async offload accelerator

2018-01-31 Thread Dave Watson
On 01/31/18 09:34 PM, Vakul Garg wrote: > Async crypto accelerators (e.g. drivers/crypto/caam) support offloading > GCM operation. If they are enabled, crypto_aead_encrypt() return error > code -EINPROGRESS. In this case tls_do_encryption() needs to wait on a > completion till the time the response

Re: [RFC crypto v3 8/9] chtls: Register the ULP

2018-01-31 Thread Dave Watson
On 01/31/18 04:14 PM, Atul Gupta wrote: > > > On Tuesday 30 January 2018 10:41 PM, Dave Watson wrote: > > On 01/30/18 06:51 AM, Atul Gupta wrote: > > > > > What I was referring is that passing "tls" ulp type in setsockopt > > > may be insuf

Re: [PATCHv2] tls: Add support for encryption using async offload accelerator

2018-01-31 Thread Dave Watson
On 01/31/18 05:22 PM, Vakul Garg wrote: > > > On second though in stable we should probably just disable async tfm > > > allocations. > > > It's simpler. But this approach is still good for -next > > > > > > > > > Gilad > > > > I agree with Gilad, just disable async for now. > > > > How to do it

[PATCH 01/14] x86/crypto: aesni: Merge INITIAL_BLOCKS_ENC/DEC

2018-02-12 Thread Dave Watson
Use macro operations to merge implementations of INITIAL_BLOCKS, since they differ by only a small handful of lines. Use macro counter \@ to simplify implementation. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_asm.S | 298 ++ 1 file changed, 48

[PATCH 02/14] x86/crypto: aesni: Macro-ify func save/restore

2018-02-12 Thread Dave Watson
Macro-ify function save and restore. These will be used in new functions added for scatter/gather update operations. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_asm.S | 53 ++- 1 file changed, 24 insertions(+), 29 deletions(-) diff --git a

[PATCH 03/14] x86/crypto: aesni: Add GCM_INIT macro

2018-02-12 Thread Dave Watson
Reduce code duplication by introducing the GCM_INIT macro. This macro will also be exposed as a function for implementing scatter/gather support, since INIT only needs to be called once for the full operation. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_asm.S | 84

[PATCH 04/14] x86/crypto: aesni: Add GCM_COMPLETE macro

2018-02-12 Thread Dave Watson
Merge encode and decode tag calculations in GCM_COMPLETE macro. Scatter/gather routines will call this once at the end of encryption or decryption. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_asm.S | 172 ++ 1 file changed, 63 insertions

[PATCH 00/14] x86/crypto gcmaes SSE scatter/gather support

2018-02-12 Thread Dave Watson
%aes_loop_initial_4974 1.27%gcmaes_encrypt_sg.constprop.15 Dave Watson (14): x86/crypto: aesni: Merge INITIAL_BLOCKS_ENC/DEC x86/crypto: aesni: Macro-ify func save/restore x86/crypto: aesni: Add GCM_INIT macro x86/crypto: aesni: Add GCM_COMPLETE macro x86/crypto: aesni: Merge encode and

[PATCH 09/14] x86/crypto: aesni: Move ghash_mul to GCM_COMPLETE

2018-02-12 Thread Dave Watson
Prepare to handle partial blocks between scatter/gather calls. For the last partial block, we only want to calculate the aadhash in GCM_COMPLETE, and a new partial block macro will handle both aadhash update and encrypting partial blocks between calls. Signed-off-by: Dave Watson --- arch/x86

[PATCH 07/14] x86/crypto: aesni: Split AAD hash calculation to separate macro

2018-02-12 Thread Dave Watson
AAD hash only needs to be calculated once for each scatter/gather operation. Move it to its own macro, and call it from GCM_INIT instead of INITIAL_BLOCKS. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_asm.S | 71 --- 1 file changed, 43

[PATCH 08/14] x86/crypto: aesni: Fill in new context data structures

2018-02-12 Thread Dave Watson
Fill in aadhash, aadlen, pblocklen, curcount with appropriate values. pblocklen, aadhash, and pblockenckey are also updated at the end of each scatter/gather operation, to be carried over to the next operation. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_asm.S | 51

[PATCH 11/14] x86/crypto: aesni: Introduce partial block macro

2018-02-12 Thread Dave Watson
Before this diff, multiple calls to GCM_ENC_DEC will succeed, but only if all calls are a multiple of 16 bytes. Handle partial blocks at the start of GCM_ENC_DEC, and update aadhash as appropriate. The data offset %r11 is also updated after the partial block. Signed-off-by: Dave Watson

[PATCH 12/14] x86/crypto: aesni: Add fast path for > 16 byte update

2018-02-12 Thread Dave Watson
We can fast-path any < 16 byte read if the full message is > 16 bytes, and shift over by the appropriate amount. Usually we are reading > 16 bytes, so this should be faster than the READ_PARTIAL macro introduced in b20209c91e2 for the average case. Signed-off-by: Dave Watson ---

[PATCH 05/14] x86/crypto: aesni: Merge encode and decode to GCM_ENC_DEC macro

2018-02-12 Thread Dave Watson
Make a macro for the main encode/decode routine. Only a small handful of lines differ for enc and dec. This will also become the main scatter/gather update routine. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_asm.S | 293 +++--- 1 file changed

[PATCH 10/14] x86/crypto: aesni: Move HashKey computation from stack to gcm_context

2018-02-12 Thread Dave Watson
need to be calculated once per key and could be moved to when set_key is called, however, the current glue code falls back to generic aes code if fpu is disabled. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_asm.S | 205 -- 1 file changed, 106

[PATCH 14/14] x86/crypto: aesni: Update aesni-intel_glue to use scatter/gather

2018-02-12 Thread Dave Watson
them out with scatterlist_map_and_copy. Only the SSE routines are updated so far, so leave the previous gcmaes_en/decrypt routines, and branch to the sg ones if the keysize is inappropriate for avx, or we are SSE only. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_glue.c | 166

[PATCH 13/14] x86/crypto: aesni: Introduce scatter/gather asm function stubs

2018-02-12 Thread Dave Watson
The asm macros are all set up now, introduce entry points. GCM_INIT and GCM_COMPLETE have arguments supplied, so that the new scatter/gather entry points don't have to take all the arguments, and only the ones they need. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_asm.S

[PATCH 06/14] x86/crypto: aesni: Introduce gcm_context_data

2018-02-12 Thread Dave Watson
Introduce a gcm_context_data struct that will be used to pass context data between scatter/gather update calls. It is passed as the second argument (after crypto keys), other args are renumbered. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_asm.S | 115

Re: [PATCH 14/14] x86/crypto: aesni: Update aesni-intel_glue to use scatter/gather

2018-02-13 Thread Dave Watson
On 02/12/18 03:12 PM, Junaid Shahid wrote: > Hi Dave, > > > On 02/12/2018 11:51 AM, Dave Watson wrote: > > > +static int gcmaes_encrypt_sg(struct aead_request *req, unsigned int > > assoclen, > > + u8 *hash_subkey, u8 *iv, voi

Re: [PATCH 14/14] x86/crypto: aesni: Update aesni-intel_glue to use scatter/gather

2018-02-13 Thread Dave Watson
On 02/13/18 08:42 AM, Stephan Mueller wrote: > > +static int gcmaes_encrypt_sg(struct aead_request *req, unsigned int > > assoclen, + u8 *hash_subkey, u8 *iv, void *aes_ctx) > > +{ > > + struct crypto_aead *tfm = crypto_aead_reqtfm(req); > > + unsigned long auth_tag_len = crypto

[PATCH v2 00/14] x86/crypto gcmaes SSE scatter/gather support

2018-02-14 Thread Dave Watson
14: merge enc/dec; also use new routine if cryptlen < AVX_GEN2_OPTSIZE; optimize case if assoc is already linear. Dave Watson (14): x86/crypto: aesni: Merge INITIAL_BLOCKS_ENC/DEC x86/crypto: aesni: Macro-ify func save/restore x86/crypto: aesni: Add GCM_INIT macro x86/

[PATCH v2 01/14] x86/crypto: aesni: Merge INITIAL_BLOCKS_ENC/DEC

2018-02-14 Thread Dave Watson
Use macro operations to merge implementations of INITIAL_BLOCKS, since they differ by only a small handful of lines. Use macro counter \@ to simplify implementation. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_asm.S | 298 ++ 1 file changed, 48

[PATCH v2 02/14] x86/crypto: aesni: Macro-ify func save/restore

2018-02-14 Thread Dave Watson
Macro-ify function save and restore. These will be used in new functions added for scatter/gather update operations. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_asm.S | 53 ++- 1 file changed, 24 insertions(+), 29 deletions(-) diff --git a

[PATCH v2 10/14] x86/crypto: aesni: Move HashKey computation from stack to gcm_context

2018-02-14 Thread Dave Watson
need to be calculated once per key and could be moved to when set_key is called, however, the current glue code falls back to generic aes code if fpu is disabled. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_asm.S | 205 -- 1 file changed, 106

[PATCH v2 11/14] x86/crypto: aesni: Introduce partial block macro

2018-02-14 Thread Dave Watson
Before this diff, multiple calls to GCM_ENC_DEC will succeed, but only if all calls are a multiple of 16 bytes. Handle partial blocks at the start of GCM_ENC_DEC, and update aadhash as appropriate. The data offset %r11 is also updated after the partial block. Signed-off-by: Dave Watson

[PATCH v2 12/14] x86/crypto: aesni: Add fast path for > 16 byte update

2018-02-14 Thread Dave Watson
We can fast-path any < 16 byte read if the full message is > 16 bytes, and shift over by the appropriate amount. Usually we are reading > 16 bytes, so this should be faster than the READ_PARTIAL macro introduced in b20209c91e2 for the average case. Signed-off-by: Dave Watson ---

[PATCH v2 13/14] x86/crypto: aesni: Introduce scatter/gather asm function stubs

2018-02-14 Thread Dave Watson
The asm macros are all set up now, introduce entry points. GCM_INIT and GCM_COMPLETE have arguments supplied, so that the new scatter/gather entry points don't have to take all the arguments, and only the ones they need. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_asm.S

[PATCH v2 14/14] x86/crypto: aesni: Update aesni-intel_glue to use scatter/gather

2018-02-14 Thread Dave Watson
ous gcmaes_en/decrypt routines, and branch to the sg ones if the keysize is inappropriate for avx, or we are SSE only. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_glue.c | 133 + 1 file changed, 133 insertions(+) diff --git a/arch/x86/crypto/ae

[PATCH v2 09/14] x86/crypto: aesni: Move ghash_mul to GCM_COMPLETE

2018-02-14 Thread Dave Watson
Prepare to handle partial blocks between scatter/gather calls. For the last partial block, we only want to calculate the aadhash in GCM_COMPLETE, and a new partial block macro will handle both aadhash update and encrypting partial blocks between calls. Signed-off-by: Dave Watson --- arch/x86

[PATCH v2 06/14] x86/crypto: aesni: Introduce gcm_context_data

2018-02-14 Thread Dave Watson
Introduce a gcm_context_data struct that will be used to pass context data between scatter/gather update calls. It is passed as the second argument (after crypto keys), other args are renumbered. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_asm.S | 115

[PATCH v2 08/14] x86/crypto: aesni: Fill in new context data structures

2018-02-14 Thread Dave Watson
Fill in aadhash, aadlen, pblocklen, curcount with appropriate values. pblocklen, aadhash, and pblockenckey are also updated at the end of each scatter/gather operation, to be carried over to the next operation. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_asm.S | 51

[PATCH v2 07/14] x86/crypto: aesni: Split AAD hash calculation to separate macro

2018-02-14 Thread Dave Watson
AAD hash only needs to be calculated once for each scatter/gather operation. Move it to its own macro, and call it from GCM_INIT instead of INITIAL_BLOCKS. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_asm.S | 71 --- 1 file changed, 43

[PATCH v2 04/14] x86/crypto: aesni: Add GCM_COMPLETE macro

2018-02-14 Thread Dave Watson
Merge encode and decode tag calculations in GCM_COMPLETE macro. Scatter/gather routines will call this once at the end of encryption or decryption. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_asm.S | 172 ++ 1 file changed, 63 insertions

[PATCH v2 03/14] x86/crypto: aesni: Add GCM_INIT macro

2018-02-14 Thread Dave Watson
Reduce code duplication by introducing the GCM_INIT macro. This macro will also be exposed as a function for implementing scatter/gather support, since INIT only needs to be called once for the full operation. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_asm.S | 84

[PATCH v2 05/14] x86/crypto: aesni: Merge encode and decode to GCM_ENC_DEC macro

2018-02-14 Thread Dave Watson
Make a macro for the main encode/decode routine. Only a small handful of lines differ for enc and dec. This will also become the main scatter/gather update routine. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_asm.S | 293 +++--- 1 file changed

Re: [Crypto v5 03/12] support for inline tls

2018-02-15 Thread Dave Watson
On 02/15/18 12:24 PM, Atul Gupta wrote: > @@ -401,6 +430,15 @@ static int do_tls_setsockopt_tx(struct sock *sk, char > __user *optval, > goto out; > } > > + rc = get_tls_offload_dev(sk); > + if (rc) { > + goto out; > + } else { > + /* Retai

Re: [Crypto v5 03/12] support for inline tls

2018-02-15 Thread Dave Watson
On 02/15/18 04:10 PM, Atul Gupta wrote: > > -Original Message- > > From: Dave Watson [mailto:davejwat...@fb.com] > > Sent: Thursday, February 15, 2018 9:22 PM > > To: Atul Gupta > > Cc: da...@davemloft.net; herb...@gondor.apana.org.au; s...@quea

Re: [Crypto v7 03/12] tls: support for inline tls

2018-02-23 Thread Dave Watson
On 02/22/18 11:21 PM, Atul Gupta wrote: > @@ -403,6 +431,15 @@ static int do_tls_setsockopt_tx(struct sock *sk, char > __user *optval, > goto err_crypto_info; > } > > + rc = tls_offload_dev_absent(sk); > + if (rc == -EINVAL) { > + goto out; > + } else

Re: [Crypto v7 03/12] tls: support for inline tls

2018-02-23 Thread Dave Watson
On 02/23/18 04:58 PM, Atul Gupta wrote: > > On 02/22/18 11:21 PM, Atul Gupta wrote: > > > @@ -403,6 +431,15 @@ static int do_tls_setsockopt_tx(struct sock *sk, > > > char __user *optval, > > > goto err_crypto_info; > > > } > > > > > > + rc = tls_offload_dev_absent(sk); > > > + if (rc

[PATCH RFC 0/5] TLS Rx

2018-03-08 Thread Dave Watson
imally zero copies vs. userspace's one, vs. previous kernel's two. https://marc.info/?l=linux-crypto-vger&m=151931242406416&w=2 [2] https://github.com/Mellanox/openssl/commits/tls_rx [3] https://github.com/ktls/af_ktls-tool/tree/RX Dave Watson (5): tls: Generalize zerocopy_fr

[PATCH RFC 2/5] tls: Move cipher info to a separate struct

2018-03-08 Thread Dave Watson
Separate tx crypto parameters to a separate cipher_context struct. The same parameters will be used for rx using the same struct. tls_advance_record_sn is modified to only take the cipher info. Signed-off-by: Dave Watson --- include/net/tls.h | 26 +--- net/tls/tls_main.c

[PATCH RFC 3/5] tls: Pass error code explicitly to tls_err_abort

2018-03-08 Thread Dave Watson
Pass EBADMSG explicitly to tls_err_abort. Receive path will pass additional codes - E2BIG if framing is larger than max TLS record size. Signed-off-by: Dave Watson --- include/net/tls.h | 6 +++--- net/tls/tls_sw.c | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/include

[PATCH RFC 4/5] tls: RX path for ktls

2018-03-08 Thread Dave Watson
are provided to decrypt in to. sk_poll is overridden, and only returns POLLIN if a full TLS message is received. Otherwise we wait for strparser to finish reading a full frame. Actual decryption is only done during recvmsg or splice_read calls. Signed-off-by: Dave Watson --- include/net/tls.h

[PATCH RFC 1/5] tls: Generalize zerocopy_from_iter

2018-03-08 Thread Dave Watson
Refactor zerocopy_from_iter to take arguments for pages and size, such that it can be used for both tx and rx. RX will also support zerocopy direct to output iter, as long as the full message can be copied at once (a large enough userspace buffer was provided). Signed-off-by: Dave Watson

[PATCH RFC 5/5] tls: Add receive path documentation

2018-03-08 Thread Dave Watson
Add documentation on rx path setup and cmsg interface. Signed-off-by: Dave Watson --- Documentation/networking/tls.txt | 59 ++-- 1 file changed, 57 insertions(+), 2 deletions(-) diff --git a/Documentation/networking/tls.txt b/Documentation/networking

Re: [PATCH RFC 4/5] tls: RX path for ktls

2018-03-08 Thread Dave Watson
On 03/08/18 09:48 PM, Boris Pismenny wrote: > Hi Dave, > > On 03/08/18 18:50, Dave Watson wrote: > > Add rx path for tls software implementation. > > > > recvmsg, splice_read, and poll implemented. > > > > An additional sockopt TLS_RX is added, with th

[PATCH net-next 0/6] TLS Rx

2018-03-20 Thread Dave Watson
/marc.info/?l=linux-crypto-vger&m=151931242406416&w=2 [2] https://github.com/Mellanox/openssl/commits/tls_rx2 [3] https://github.com/ktls/af_ktls-tool/tree/RX Dave Watson (6): tls: Generalize zerocopy_from_iter tls: Move cipher info to a separate struct tls: Pass error code explicitly t

[PATCH net-next 2/6] tls: Move cipher info to a separate struct

2018-03-20 Thread Dave Watson
Separate tx crypto parameters to a separate cipher_context struct. The same parameters will be used for rx using the same struct. tls_advance_record_sn is modified to only take the cipher info. Signed-off-by: Dave Watson --- include/net/tls.h | 26 +--- net/tls/tls_main.c

[PATCH net-next 1/6] tls: Generalize zerocopy_from_iter

2018-03-20 Thread Dave Watson
Refactor zerocopy_from_iter to take arguments for pages and size, such that it can be used for both tx and rx. RX will also support zerocopy direct to output iter, as long as the full message can be copied at once (a large enough userspace buffer was provided). Signed-off-by: Dave Watson

[PATCH net-next 3/6] tls: Pass error code explicitly to tls_err_abort

2018-03-20 Thread Dave Watson
Pass EBADMSG explicitly to tls_err_abort. Receive path will pass additional codes - EMSGSIZE if framing is larger than max TLS record size, EINVAL if TLS version mismatch. Signed-off-by: Dave Watson --- include/net/tls.h | 6 +++--- net/tls/tls_sw.c | 2 +- 2 files changed, 4 insertions(+), 4

[PATCH net-next 5/6] tls: RX path for ktls

2018-03-20 Thread Dave Watson
finish reading a full frame. Actual decryption is only done during recvmsg or splice_read calls. Signed-off-by: Dave Watson --- include/net/tls.h| 27 ++- include/uapi/linux/tls.h | 2 + net/tls/Kconfig | 1 + net/tls/tls_main.c | 62 - net/tls/tls_sw.c | 587

[PATCH net-next 4/6] tls: Refactor variable names

2018-03-20 Thread Dave Watson
Several config variables are prefixed with tx, drop the prefix since these will be used for both tx and rx. Signed-off-by: Dave Watson --- include/net/tls.h | 2 +- net/tls/tls_main.c | 26 +- 2 files changed, 14 insertions(+), 14 deletions(-) diff --git a/include/net

[PATCH net-next 6/6] tls: Add receive path documentation

2018-03-20 Thread Dave Watson
Add documentation on rx path setup and cmsg interface. Signed-off-by: Dave Watson --- Documentation/networking/tls.txt | 67 ++-- 1 file changed, 65 insertions(+), 2 deletions(-) diff --git a/Documentation/networking/tls.txt b/Documentation/networking

Re: [PATCH net-next 5/6] tls: RX path for ktls

2018-03-21 Thread Dave Watson
On 03/21/18 07:20 AM, Boris Pismenny wrote: > > > On 3/20/2018 7:54 PM, Dave Watson wrote: > > + ctx->control = header[0]; > > + > > + data_len = ((header[4] & 0xFF) | (header[3] << 8)); > > + > > + cipher_overhead = tls_ctx->rx.tag

[PATCH v2 net-next 0/6] TLS Rx

2018-03-22 Thread Dave Watson
ent crypto patchset to remove copies, resulting in optimally zero copies vs. userspace's one, vs. previous kernel's two. https://marc.info/?l=linux-crypto-vger&m=151931242406416&w=2 [2] https://github.com/Mellanox/openssl/commits/tls_rx2 [3] https://github.com/ktls/af_ktls

[PATCH v2 net-next 2/6] tls: Move cipher info to a separate struct

2018-03-22 Thread Dave Watson
Separate tx crypto parameters to a separate cipher_context struct. The same parameters will be used for rx using the same struct. tls_advance_record_sn is modified to only take the cipher info. Signed-off-by: Dave Watson --- include/net/tls.h | 26 +--- net/tls/tls_main.c

[PATCH v2 net-next 4/6] tls: Refactor variable names

2018-03-22 Thread Dave Watson
Several config variables are prefixed with tx, drop the prefix since these will be used for both tx and rx. Signed-off-by: Dave Watson --- include/net/tls.h | 2 +- net/tls/tls_main.c | 26 +- 2 files changed, 14 insertions(+), 14 deletions(-) diff --git a/include/net

[PATCH v2 net-next 1/6] tls: Generalize zerocopy_from_iter

2018-03-22 Thread Dave Watson
Refactor zerocopy_from_iter to take arguments for pages and size, such that it can be used for both tx and rx. RX will also support zerocopy direct to output iter, as long as the full message can be copied at once (a large enough userspace buffer was provided). Signed-off-by: Dave Watson

[PATCH v2 net-next 6/6] tls: Add receive path documentation

2018-03-22 Thread Dave Watson
Add documentation on rx path setup and cmsg interface. Signed-off-by: Dave Watson --- Documentation/networking/tls.txt | 66 ++-- 1 file changed, 64 insertions(+), 2 deletions(-) diff --git a/Documentation/networking/tls.txt b/Documentation/networking

[PATCH v2 net-next 3/6] tls: Pass error code explicitly to tls_err_abort

2018-03-22 Thread Dave Watson
Pass EBADMSG explicitly to tls_err_abort. Receive path will pass additional codes - EMSGSIZE if framing is larger than max TLS record size, EINVAL if TLS version mismatch. Signed-off-by: Dave Watson --- include/net/tls.h | 6 +++--- net/tls/tls_sw.c | 2 +- 2 files changed, 4 insertions(+), 4

[PATCH v2 net-next 5/6] tls: RX path for ktls

2018-03-22 Thread Dave Watson
finish reading a full frame. Actual decryption is only done during recvmsg or splice_read calls. Signed-off-by: Dave Watson --- include/net/tls.h| 27 ++- include/uapi/linux/tls.h | 2 + net/tls/Kconfig | 1 + net/tls/tls_main.c | 62 - net/tls/tls_sw.c

[PATCH] crypto: aesni - Use unaligned loads from gcm_context_data

2018-08-15 Thread Dave Watson
accesses to gcm_context_data already use unaligned loads. Reported-by: Mauro Rossi Fixes: 1476db2d12 ("Move HashKey computation from stack to gcm_context") Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_asm.S | 66 +++ 1 file changed, 33 inserti

Re: Deadlock when using crypto API for block devices

2018-08-24 Thread Dave Watson
On 08/24/18 09:22 PM, Herbert Xu wrote: > > > BTW. gcmaes_crypt_by_sg also contains GFP_ATOMIC and -ENOMEM, behind a > > > pretty complex condition. Do you mean that this condition is part of the > > > contract that the crypto API provides? > > > > This is an implementation defect. I think for

[PATCH 00/12] x86/crypto: gcmaes AVX scatter/gather support

2018-12-10 Thread Dave Watson
adds support for those keysizes. The final patch updates the C glue code, passing everything through the crypt_by_sg() function instead of the previous memcpy based routines. Dave Watson (12): x86/crypto: aesni: Merge GCM_ENC_DEC x86/crypto: aesni: Introduce gcm_context_data x86/crypto: a

[PATCH 01/12] x86/crypto: aesni: Merge GCM_ENC_DEC

2018-12-10 Thread Dave Watson
will be used by both AVX and AVX2. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_avx-x86_64.S | 951 --- 1 file changed, 318 insertions(+), 633 deletions(-) diff --git a/arch/x86/crypto/aesni-intel_avx-x86_64.S b/arch/x86/crypto/aesni-intel_avx-x86_64.S index

[PATCH 03/12] x86/crypto: aesni: Macro-ify func save/restore

2018-12-10 Thread Dave Watson
Macro-ify function save and restore. These will be used in new functions added for scatter/gather update operations. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_avx-x86_64.S | 94 +--- 1 file changed, 36 insertions(+), 58 deletions(-) diff --git a/arch/x86

[PATCH 02/12] x86/crypto: aesni: Introduce gcm_context_data

2018-12-10 Thread Dave Watson
stores to the new struct are always done unaligned to avoid compiler issues, see e5b954e8 "Use unaligned loads from gcm_context_data" Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_avx-x86_64.S | 378 +++ arch/x86/crypto/aesni-intel_glue.c | 58 ++-

[PATCH 05/12] x86/crypto: aesni: Add GCM_COMPLETE macro

2018-12-10 Thread Dave Watson
Merge encode and decode tag calculations in GCM_COMPLETE macro. Scatter/gather routines will call this once at the end of encryption or decryption. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_avx-x86_64.S | 8 1 file changed, 8 insertions(+) diff --git a/arch/x86/crypto

[PATCH 04/12] x86/crypto: aesni: support 256 byte keys in avx asm

2018-12-10 Thread Dave Watson
that this diff depends on using gcm_context_data - 256 bit keys require 16 HashKeys + 15 expanded keys, which is larger than struct crypto_aes_ctx, so they are stored in struct gcm_context_data. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_avx-x86_64.S | 188 +--

[PATCH 08/12] x86/crypto: aesni: Fill in new context data structures

2018-12-10 Thread Dave Watson
Fill in aadhash, aadlen, pblocklen, curcount with appropriate values. pblocklen, aadhash, and pblockenckey are also updated at the end of each scatter/gather operation, to be carried over to the next operation. Signed-off-by: Dave Watson --- arch/x86/crypto/aesni-intel_avx-x86_64.S | 51
