On 12/13/16 04:32 PM, Ilya Lesokhin wrote:
> --- a/arch/x86/crypto/aesni-intel_glue.c
> +++ b/arch/x86/crypto/aesni-intel_glue.c
> @@ -903,9 +903,11 @@ static int helper_rfc4106_encrypt(struct aead_request
> *req)
> *((__be32 *)(iv+12)) = counter;
>
> if (sg_is_last(req->src) &&
> -
On 07/29/17 01:12 PM, Tom Herbert wrote:
> On Wed, Jun 14, 2017 at 11:37 AM, Dave Watson wrote:
> > Add the infrastructure for attaching Upper Layer Protocols (ULPs) over TCP
> > sockets. Based on a similar infrastructure in tcp_cong. The idea is that
> > any
> > U
https://lwn.net/Articles/657999/
3) NIC offload. To support running the aesni routines on the NIC instead
of the processor, we would probably need enough of the framing
interface moved into the kernel.
Dave Watson (2):
Crypto support aesni rfc5288
Crypto kernel tls socket
arch/x86/crypto/ae
diff --git a/crypto/algif_tls.c b/crypto/algif_tls.c
new file mode 100644
index 000..123ade3
--- /dev/null
+++ b/crypto/algif_tls.c
@@ -0,0 +1,1233 @@
+/*
+ * algif_tls: User-space interface for TLS
+ *
+ * Copyright (C) 2015, Dave Watson
+ *
+ * This file provides the user-space API for AEAD c
Support rfc5288 using intel aesni routines. See also rfc5246.
AAD length is 13 bytes, padded out to 16. The padding bytes currently
have to be passed in via the scatterlist, which probably isn't quite the
right fix.
The assoclen checks were moved to the individual rfc stubs, and the
common routines suppor
On 11/23/15 02:27 PM, Sowmini Varadhan wrote:
> On (11/23/15 09:43), Dave Watson wrote:
> > Currently gcm(aes) represents ~80% of our SSL connections.
> >
> > Userspace interface:
> >
> > 1) A transform and op socket are created using the userspace crypto
This series adds support for kernel TLS encryption over TCP sockets.
A standard TCP socket is converted to a TLS socket using a setsockopt.
Only symmetric crypto is done in the kernel, as well as TLS record
framing. The handshake remains in userspace, and the negotiated
cipher keys/iv are provided
v4/tcp_available_ulp
tls
There is currently no functionality to remove or chain ULPs, but
it should be possible to add these in the future if needed.
Signed-off-by: Boris Pismenny
Signed-off-by: Dave Watson
---
include/net/inet_connection_sock.h | 4 ++
include/net/tcp.h | 25 +++
: Ilya Lesokhin
Signed-off-by: Aviad Yehezkel
Signed-off-by: Dave Watson
---
MAINTAINERS | 10 +
include/linux/socket.h | 1 +
include/net/tls.h| 223 ++
include/uapi/linux/tls.h | 79 +
net/Kconfig | 1 +
net/Makefile | 1
Signed-off-by: Dave Watson
---
include/net/tcp.h | 2 ++
net/ipv4/tcp.c | 5 +++--
net/ipv4/tcp_rate.c | 1 +
3 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index fcc39f8..2b35100 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
Add documentation for the tcp ULP tls interface.
Signed-off-by: Boris Pismenny
Signed-off-by: Dave Watson
---
Documentation/networking/tls.txt | 120 +++
1 file changed, 120 insertions(+)
create mode 100644 Documentation/networking/tls.txt
diff --git a
This series adds support for kernel TLS encryption over TCP sockets.
A standard TCP socket is converted to a TLS socket using a setsockopt.
Only symmetric crypto is done in the kernel, as well as TLS record
framing. The handshake remains in userspace, and the negotiated
cipher keys/iv are provided
v4/tcp_available_ulp
tls
There is currently no functionality to remove or chain ULPs, but
it should be possible to add these in the future if needed.
Signed-off-by: Boris Pismenny
Signed-off-by: Dave Watson
---
include/net/inet_connection_sock.h | 4 ++
include/net/tcp.h | 25 +++
Signed-off-by: Dave Watson
---
include/net/tcp.h | 2 ++
net/ipv4/tcp.c | 5 +++--
net/ipv4/tcp_rate.c | 1 +
3 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index fcc39f8..2b35100 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
: Ilya Lesokhin
Signed-off-by: Aviad Yehezkel
Signed-off-by: Dave Watson
---
MAINTAINERS | 10 +
include/linux/socket.h | 1 +
include/net/tls.h| 222 +
include/uapi/linux/tls.h | 79 +
net/Kconfig | 1 +
net/Makefile | 1 +
net
Add documentation for the tcp ULP tls interface.
Signed-off-by: Boris Pismenny
Signed-off-by: Dave Watson
---
Documentation/networking/tls.txt | 135 +++
1 file changed, 135 insertions(+)
create mode 100644 Documentation/networking/tls.txt
diff --git a
This series adds support for kernel TLS encryption over TCP sockets.
A standard TCP socket is converted to a TLS socket using a setsockopt.
Only symmetric crypto is done in the kernel, as well as TLS record
framing. The handshake remains in userspace, and the negotiated
cipher keys/iv are provided
v4/tcp_available_ulp
tls
There is currently no functionality to remove or chain ULPs, but
it should be possible to add these in the future if needed.
Signed-off-by: Boris Pismenny
Signed-off-by: Dave Watson
---
include/net/inet_connection_sock.h | 4 ++
include/net/tcp.h | 25 +++
Signed-off-by: Dave Watson
---
include/net/tcp.h | 2 ++
net/ipv4/tcp.c | 5 +++--
net/ipv4/tcp_rate.c | 1 +
3 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index b439f46..e17ec28 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
Add documentation for the tcp ULP tls interface.
Signed-off-by: Boris Pismenny
Signed-off-by: Dave Watson
---
Documentation/networking/tls.txt | 135 +++
1 file changed, 135 insertions(+)
create mode 100644 Documentation/networking/tls.txt
diff --git a
: Ilya Lesokhin
Signed-off-by: Aviad Yehezkel
Signed-off-by: Dave Watson
---
MAINTAINERS | 10 +
include/linux/socket.h | 1 +
include/net/tls.h| 237 +++
include/uapi/linux/tls.h | 79 +
net/Kconfig | 1 +
net/Makefile | 1
Hi Hannes,
On 06/14/17 10:15 PM, Hannes Frederic Sowa wrote:
> one question for this patch set:
>
> What is the reason for not allowing key updates for the TX path? I was
> always loud pointing out the problems with TLSv1.2 renegotiation and
> TLSv1.3 key update alerts. This patch set uses encry
On 06/14/17 01:54 PM, Tom Herbert wrote:
> On Wed, Jun 14, 2017 at 11:36 AM, Dave Watson wrote:
> > This series adds support for kernel TLS encryption over TCP sockets.
> > A standard TCP socket is converted to a TLS socket using a setsockopt.
> > Only symmetric crypto is d
On 06/16/17 01:58 PM, Stephen Hemminger wrote:
> On Wed, 14 Jun 2017 11:37:39 -0700
> Dave Watson wrote:
>
> > --- /dev/null
> > +++ b/net/tls/Kconfig
> > @@ -0,0 +1,12 @@
> > +#
> > +# TLS configuration
> > +#
> > +config TLS
> > + tr
On 06/25/17 02:42 AM, Levin, Alexander (Sasha Levin) wrote:
> On Wed, Jun 14, 2017 at 11:37:14AM -0700, Dave Watson wrote:
> >Add the infrastructure for attaching Upper Layer Protocols (ULPs) over TCP
> >sockets. Based on a similar infrastructure in tcp_cong. The idea is that any
Hi Richard,
On 07/06/17 04:30 PM, Richard Weinberger wrote:
> Dave,
>
> On Wed, Jun 14, 2017 at 8:36 PM, Dave Watson wrote:
> > Documentation/networking/tls.txt | 135 +++
> > MAINTAINERS| 10 +
> > include/linux/socket.h
On 07/11/17 08:29 AM, Steffen Klassert wrote:
> Sorry for replying to old mail...
> > +int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx)
> > +{
>
> ...
>
> > +
> > + if (!sw_ctx->aead_send) {
> > + sw_ctx->aead_send = crypto_alloc_aead("gcm(aes)", 0, 0);
> > +
On 07/12/17 09:20 AM, Steffen Klassert wrote:
> On Tue, Jul 11, 2017 at 11:53:11AM -0700, Dave Watson wrote:
> > On 07/11/17 08:29 AM, Steffen Klassert wrote:
> > > Sorry for replying to old mail...
> > > > +int tls_set_sw_offload(struct soc
<1513769897-26945-1-git-send-email-atul.gu...@chelsio.com>
On 12/20/17 05:08 PM, Atul Gupta wrote:
> +static void __init chtls_init_ulp_ops(void)
> +{
> + chtls_base_prot = tcp_prot;
> + chtls_base_prot.hash= chtls_hash;
> + chtls_base_prot.unhash =
On 01/30/18 06:51 AM, Atul Gupta wrote:
> What I was referring is that passing "tls" ulp type in setsockopt
> may be insufficient to make the decision when multi HW assist Inline
> TLS solution exists.
Setting the ULP doesn't choose the HW or SW implementation; I think that
should be done later when
On 01/31/18 09:34 PM, Vakul Garg wrote:
> Async crypto accelerators (e.g. drivers/crypto/caam) support offloading
> GCM operation. If they are enabled, crypto_aead_encrypt() return error
> code -EINPROGRESS. In this case tls_do_encryption() needs to wait on a
> completion till the time the response
On 01/31/18 04:14 PM, Atul Gupta wrote:
>
>
> On Tuesday 30 January 2018 10:41 PM, Dave Watson wrote:
> > On 01/30/18 06:51 AM, Atul Gupta wrote:
> >
> > > What I was referring is that passing "tls" ulp type in setsockopt
> > > may be insuf
On 01/31/18 05:22 PM, Vakul Garg wrote:
> > > On second though in stable we should probably just disable async tfm
> > > allocations.
> > > It's simpler. But this approach is still good for -next
> > >
> > >
> > > Gilad
> >
> > I agree with Gilad, just disable async for now.
> >
>
> How to do it
Use macro operations to merge implementations of INITIAL_BLOCKS,
since they differ by only a small handful of lines.
Use macro counter \@ to simplify implementation.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_asm.S | 298 ++
1 file changed, 48
Macro-ify function save and restore. These will be used in new functions
added for scatter/gather update operations.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_asm.S | 53 ++-
1 file changed, 24 insertions(+), 29 deletions(-)
diff --git a
Reduce code duplication by introducing a GCM_INIT macro. This macro
will also be exposed as a function for implementing scatter/gather
support, since INIT only needs to be called once for the full
operation.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_asm.S | 84
Merge encode and decode tag calculations in GCM_COMPLETE macro.
Scatter/gather routines will call this once at the end of encryption
or decryption.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_asm.S | 172 ++
1 file changed, 63 insertions
%aes_loop_initial_4974
1.27%  gcmaes_encrypt_sg.constprop.15
Dave Watson (14):
x86/crypto: aesni: Merge INITIAL_BLOCKS_ENC/DEC
x86/crypto: aesni: Macro-ify func save/restore
x86/crypto: aesni: Add GCM_INIT macro
x86/crypto: aesni: Add GCM_COMPLETE macro
x86/crypto: aesni: Merge encode and
Prepare to handle partial blocks between scatter/gather calls.
For the last partial block, we only want to calculate the aadhash
in GCM_COMPLETE, and a new partial block macro will handle both
aadhash update and encrypting partial blocks between calls.
Signed-off-by: Dave Watson
---
arch/x86
AAD hash only needs to be calculated once for each scatter/gather operation.
Move it to its own macro, and call it from GCM_INIT instead of
INITIAL_BLOCKS.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_asm.S | 71 ---
1 file changed, 43
Fill in aadhash, aadlen, pblocklen, curcount with appropriate values.
pblocklen, aadhash, and pblockenckey are also updated at the end
of each scatter/gather operation, to be carried over to the next
operation.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_asm.S | 51
Before this diff, multiple calls to GCM_ENC_DEC will
succeed, but only if all calls are a multiple of 16 bytes.
Handle partial blocks at the start of GCM_ENC_DEC, and update
aadhash as appropriate.
The data offset %r11 is also updated after the partial block.
Signed-off-by: Dave Watson
We can fast-path any < 16 byte read if the full message is > 16 bytes,
and shift over by the appropriate amount. Usually we are
reading > 16 bytes, so this should be faster than the READ_PARTIAL
macro introduced in b20209c91e2 for the average case.
Signed-off-by: Dave Watson
---
Make a macro for the main encode/decode routine. Only a small handful
of lines differ for enc and dec. This will also become the main
scatter/gather update routine.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_asm.S | 293 +++---
1 file changed
need to be calculated once per key and could
be moved to when set_key is called, however, the current glue code
falls back to generic aes code if fpu is disabled.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_asm.S | 205 --
1 file changed, 106
them out
with scatterlist_map_and_copy.
Only the SSE routines are updated so far, so leave the previous
gcmaes_en/decrypt routines, and branch to the sg ones if the
keysize is inappropriate for avx, or we are SSE only.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_glue.c | 166
The asm macros are all set up now, introduce entry points.
GCM_INIT and GCM_COMPLETE have arguments supplied, so that
the new scatter/gather entry points don't have to take all the
arguments, and only the ones they need.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_asm.S
Introduce a gcm_context_data struct that will be used to pass
context data between scatter/gather update calls. It is passed
as the second argument (after crypto keys), other args are
renumbered.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_asm.S | 115
On 02/12/18 03:12 PM, Junaid Shahid wrote:
> Hi Dave,
>
>
> On 02/12/2018 11:51 AM, Dave Watson wrote:
>
> > +static int gcmaes_encrypt_sg(struct aead_request *req, unsigned int
> > assoclen,
> > + u8 *hash_subkey, u8 *iv, voi
On 02/13/18 08:42 AM, Stephan Mueller wrote:
> > +static int gcmaes_encrypt_sg(struct aead_request *req, unsigned int
> > assoclen, + u8 *hash_subkey, u8 *iv, void *aes_ctx)
> > +{
> > + struct crypto_aead *tfm = crypto_aead_reqtfm(req);
> > + unsigned long auth_tag_len = crypto
14: merge enc/dec
also use new routine if cryptlen < AVX_GEN2_OPTSIZE
optimize case if assoc is already linear
Dave Watson (14):
x86/crypto: aesni: Merge INITIAL_BLOCKS_ENC/DEC
x86/crypto: aesni: Macro-ify func save/restore
x86/crypto: aesni: Add GCM_INIT macro
x86/
Use macro operations to merge implementations of INITIAL_BLOCKS,
since they differ by only a small handful of lines.
Use macro counter \@ to simplify implementation.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_asm.S | 298 ++
1 file changed, 48
Macro-ify function save and restore. These will be used in new functions
added for scatter/gather update operations.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_asm.S | 53 ++-
1 file changed, 24 insertions(+), 29 deletions(-)
diff --git a
need to be calculated once per key and could
be moved to when set_key is called, however, the current glue code
falls back to generic aes code if fpu is disabled.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_asm.S | 205 --
1 file changed, 106
Before this diff, multiple calls to GCM_ENC_DEC will
succeed, but only if all calls are a multiple of 16 bytes.
Handle partial blocks at the start of GCM_ENC_DEC, and update
aadhash as appropriate.
The data offset %r11 is also updated after the partial block.
Signed-off-by: Dave Watson
We can fast-path any < 16 byte read if the full message is > 16 bytes,
and shift over by the appropriate amount. Usually we are
reading > 16 bytes, so this should be faster than the READ_PARTIAL
macro introduced in b20209c91e2 for the average case.
Signed-off-by: Dave Watson
---
The asm macros are all set up now, introduce entry points.
GCM_INIT and GCM_COMPLETE have arguments supplied, so that
the new scatter/gather entry points don't have to take all the
arguments, and only the ones they need.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_asm.S
Leave the previous
gcmaes_en/decrypt routines, and branch to the sg ones if the
keysize is inappropriate for avx, or we are SSE only.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_glue.c | 133 +
1 file changed, 133 insertions(+)
diff --git a/arch/x86/crypto/ae
Prepare to handle partial blocks between scatter/gather calls.
For the last partial block, we only want to calculate the aadhash
in GCM_COMPLETE, and a new partial block macro will handle both
aadhash update and encrypting partial blocks between calls.
Signed-off-by: Dave Watson
---
arch/x86
Introduce a gcm_context_data struct that will be used to pass
context data between scatter/gather update calls. It is passed
as the second argument (after crypto keys), other args are
renumbered.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_asm.S | 115
Fill in aadhash, aadlen, pblocklen, curcount with appropriate values.
pblocklen, aadhash, and pblockenckey are also updated at the end
of each scatter/gather operation, to be carried over to the next
operation.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_asm.S | 51
AAD hash only needs to be calculated once for each scatter/gather operation.
Move it to its own macro, and call it from GCM_INIT instead of
INITIAL_BLOCKS.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_asm.S | 71 ---
1 file changed, 43
Merge encode and decode tag calculations in GCM_COMPLETE macro.
Scatter/gather routines will call this once at the end of encryption
or decryption.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_asm.S | 172 ++
1 file changed, 63 insertions
Reduce code duplication by introducing a GCM_INIT macro. This macro
will also be exposed as a function for implementing scatter/gather
support, since INIT only needs to be called once for the full
operation.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_asm.S | 84
Make a macro for the main encode/decode routine. Only a small handful
of lines differ for enc and dec. This will also become the main
scatter/gather update routine.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_asm.S | 293 +++---
1 file changed
On 02/15/18 12:24 PM, Atul Gupta wrote:
> @@ -401,6 +430,15 @@ static int do_tls_setsockopt_tx(struct sock *sk, char
> __user *optval,
> goto out;
> }
>
> + rc = get_tls_offload_dev(sk);
> + if (rc) {
> + goto out;
> + } else {
> + /* Retai
On 02/15/18 04:10 PM, Atul Gupta wrote:
> > -Original Message-
> > From: Dave Watson [mailto:davejwat...@fb.com]
> > Sent: Thursday, February 15, 2018 9:22 PM
> > To: Atul Gupta
> > Cc: da...@davemloft.net; herb...@gondor.apana.org.au; s...@quea
On 02/22/18 11:21 PM, Atul Gupta wrote:
> @@ -403,6 +431,15 @@ static int do_tls_setsockopt_tx(struct sock *sk, char
> __user *optval,
> goto err_crypto_info;
> }
>
> + rc = tls_offload_dev_absent(sk);
> + if (rc == -EINVAL) {
> + goto out;
> + } else
On 02/23/18 04:58 PM, Atul Gupta wrote:
> > On 02/22/18 11:21 PM, Atul Gupta wrote:
> > > @@ -403,6 +431,15 @@ static int do_tls_setsockopt_tx(struct sock *sk,
> > > char __user *optval,
> > > goto err_crypto_info;
> > > }
> > >
> > > + rc = tls_offload_dev_absent(sk);
> > > + if (rc
optimally zero copies vs. userspace's one, vs. previous kernel's two.
https://marc.info/?l=linux-crypto-vger&m=151931242406416&w=2
[2] https://github.com/Mellanox/openssl/commits/tls_rx
[3] https://github.com/ktls/af_ktls-tool/tree/RX
Dave Watson (5):
tls: Generalize zerocopy_fr
Separate tx crypto parameters to a separate cipher_context struct.
The same parameters will be used for rx using the same struct.
tls_advance_record_sn is modified to only take the cipher info.
Signed-off-by: Dave Watson
---
include/net/tls.h | 26 +---
net/tls/tls_main.c
Pass EBADMSG explicitly to tls_err_abort. Receive path will
pass additional codes - E2BIG if framing is larger than max
TLS record size.
Signed-off-by: Dave Watson
---
include/net/tls.h | 6 +++---
net/tls/tls_sw.c | 2 +-
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/include
are provided to decrypt into.
sk_poll is overridden, and only returns POLLIN if a full TLS message is
received. Otherwise we wait for strparser to finish reading a full frame.
Actual decryption is only done during recvmsg or splice_read calls.
Signed-off-by: Dave Watson
---
include/net/tls.h
Refactor zerocopy_from_iter to take arguments for pages and size,
such that it can be used for both tx and rx. RX will also support
zerocopy direct to output iter, as long as the full message can
be copied at once (a large enough userspace buffer was provided).
Signed-off-by: Dave Watson
Add documentation on rx path setup and cmsg interface.
Signed-off-by: Dave Watson
---
Documentation/networking/tls.txt | 59 ++--
1 file changed, 57 insertions(+), 2 deletions(-)
diff --git a/Documentation/networking/tls.txt b/Documentation/networking
On 03/08/18 09:48 PM, Boris Pismenny wrote:
> Hi Dave,
>
> On 03/08/18 18:50, Dave Watson wrote:
> > Add rx path for tls software implementation.
> >
> > recvmsg, splice_read, and poll implemented.
> >
> > An additional sockopt TLS_RX is added, with th
/marc.info/?l=linux-crypto-vger&m=151931242406416&w=2
[2] https://github.com/Mellanox/openssl/commits/tls_rx2
[3] https://github.com/ktls/af_ktls-tool/tree/RX
Dave Watson (6):
tls: Generalize zerocopy_from_iter
tls: Move cipher info to a separate struct
tls: Pass error code explicitly t
Separate tx crypto parameters to a separate cipher_context struct.
The same parameters will be used for rx using the same struct.
tls_advance_record_sn is modified to only take the cipher info.
Signed-off-by: Dave Watson
---
include/net/tls.h | 26 +---
net/tls/tls_main.c
Refactor zerocopy_from_iter to take arguments for pages and size,
such that it can be used for both tx and rx. RX will also support
zerocopy direct to output iter, as long as the full message can
be copied at once (a large enough userspace buffer was provided).
Signed-off-by: Dave Watson
Pass EBADMSG explicitly to tls_err_abort. Receive path will
pass additional codes - EMSGSIZE if framing is larger than max
TLS record size, EINVAL if TLS version mismatch.
Signed-off-by: Dave Watson
---
include/net/tls.h | 6 +++---
net/tls/tls_sw.c | 2 +-
2 files changed, 4 insertions(+), 4
finish reading a full frame.
Actual decryption is only done during recvmsg or splice_read calls.
Signed-off-by: Dave Watson
---
include/net/tls.h| 27 ++-
include/uapi/linux/tls.h | 2 +
net/tls/Kconfig | 1 +
net/tls/tls_main.c | 62 -
net/tls/tls_sw.c | 587
Several config variables are prefixed with tx; drop the prefix
since these will be used for both tx and rx.
Signed-off-by: Dave Watson
---
include/net/tls.h | 2 +-
net/tls/tls_main.c | 26 +-
2 files changed, 14 insertions(+), 14 deletions(-)
diff --git a/include/net
Add documentation on rx path setup and cmsg interface.
Signed-off-by: Dave Watson
---
Documentation/networking/tls.txt | 67 ++--
1 file changed, 65 insertions(+), 2 deletions(-)
diff --git a/Documentation/networking/tls.txt b/Documentation/networking
On 03/21/18 07:20 AM, Boris Pismenny wrote:
>
>
> On 3/20/2018 7:54 PM, Dave Watson wrote:
> > + ctx->control = header[0];
> > +
> > + data_len = ((header[4] & 0xFF) | (header[3] << 8));
> > +
> > + cipher_overhead = tls_ctx->rx.tag
ent crypto patchset to remove copies, resulting in optimally
zero copies vs. userspace's one, vs. previous kernel's two.
https://marc.info/?l=linux-crypto-vger&m=151931242406416&w=2
[2] https://github.com/Mellanox/openssl/commits/tls_rx2
[3] https://github.com/ktls/af_ktls
Separate tx crypto parameters to a separate cipher_context struct.
The same parameters will be used for rx using the same struct.
tls_advance_record_sn is modified to only take the cipher info.
Signed-off-by: Dave Watson
---
include/net/tls.h | 26 +---
net/tls/tls_main.c
Several config variables are prefixed with tx; drop the prefix
since these will be used for both tx and rx.
Signed-off-by: Dave Watson
---
include/net/tls.h | 2 +-
net/tls/tls_main.c | 26 +-
2 files changed, 14 insertions(+), 14 deletions(-)
diff --git a/include/net
Refactor zerocopy_from_iter to take arguments for pages and size,
such that it can be used for both tx and rx. RX will also support
zerocopy direct to output iter, as long as the full message can
be copied at once (a large enough userspace buffer was provided).
Signed-off-by: Dave Watson
Add documentation on rx path setup and cmsg interface.
Signed-off-by: Dave Watson
---
Documentation/networking/tls.txt | 66 ++--
1 file changed, 64 insertions(+), 2 deletions(-)
diff --git a/Documentation/networking/tls.txt b/Documentation/networking
Pass EBADMSG explicitly to tls_err_abort. Receive path will
pass additional codes - EMSGSIZE if framing is larger than max
TLS record size, EINVAL if TLS version mismatch.
Signed-off-by: Dave Watson
---
include/net/tls.h | 6 +++---
net/tls/tls_sw.c | 2 +-
2 files changed, 4 insertions(+), 4
finish reading a full frame.
Actual decryption is only done during recvmsg or splice_read calls.
Signed-off-by: Dave Watson
---
include/net/tls.h| 27 ++-
include/uapi/linux/tls.h | 2 +
net/tls/Kconfig | 1 +
net/tls/tls_main.c | 62 -
net/tls/tls_sw.c
Accesses to gcm_context_data already use
unaligned loads.
Reported-by: Mauro Rossi
Fixes: 1476db2d12 ("Move HashKey computation from stack to gcm_context")
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_asm.S | 66 +++
1 file changed, 33 inserti
On 08/24/18 09:22 PM, Herbert Xu wrote:
> > > BTW. gcmaes_crypt_by_sg also contains GFP_ATOMIC and -ENOMEM, behind a
> > > pretty complex condition. Do you mean that this condition is part of the
> > > contract that the crypto API provides?
> >
> > This is an implementation defect. I think for
adds support for those
keysizes.
The final patch updates the C glue code, passing everything through
the crypt_by_sg() function instead of the previous memcpy based
routines.
Dave Watson (12):
x86/crypto: aesni: Merge GCM_ENC_DEC
x86/crypto: aesni: Introduce gcm_context_data
x86/crypto: a
will be used by both AVX and AVX2.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_avx-x86_64.S | 951 ---
1 file changed, 318 insertions(+), 633 deletions(-)
diff --git a/arch/x86/crypto/aesni-intel_avx-x86_64.S
b/arch/x86/crypto/aesni-intel_avx-x86_64.S
index
Macro-ify function save and restore. These will be used in new functions
added for scatter/gather update operations.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_avx-x86_64.S | 94 +---
1 file changed, 36 insertions(+), 58 deletions(-)
diff --git a/arch/x86
stores to the new struct are always done unaligned to
avoid compiler issues, see e5b954e8 "Use unaligned loads from
gcm_context_data"
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_avx-x86_64.S | 378 +++
arch/x86/crypto/aesni-intel_glue.c | 58 ++-
Merge encode and decode tag calculations in GCM_COMPLETE macro.
Scatter/gather routines will call this once at the end of encryption
or decryption.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_avx-x86_64.S | 8
1 file changed, 8 insertions(+)
diff --git a/arch/x86/crypto
that this diff depends on using gcm_context_data - 256 bit keys
require 16 HashKeys + 15 expanded keys, which is larger than
struct crypto_aes_ctx, so they are stored in struct gcm_context_data.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_avx-x86_64.S | 188 +--
Fill in aadhash, aadlen, pblocklen, curcount with appropriate values.
pblocklen, aadhash, and pblockenckey are also updated at the end
of each scatter/gather operation, to be carried over to the next
operation.
Signed-off-by: Dave Watson
---
arch/x86/crypto/aesni-intel_avx-x86_64.S | 51