diff --git a/crypto/algif_tls.c b/crypto/algif_tls.c
new file mode 100644
index 000..123ade3
--- /dev/null
+++ b/crypto/algif_tls.c
@@ -0,0 +1,1233 @@
+/*
+ * algif_tls: User-space interface for TLS
+ *
+ * Copyright (C) 2015, Dave Watson <davejwat...@fb.com>
+ *
+ * This file provides the user-sp
3) NIC offload. To support running aesni routines on the NIC instead
of the processor, we would probably need enough of the framing
interface put into the kernel.
Dave Watson (2):
Crypto support aesni rfc5288
Crypto kernel tls socket
arch/x86/crypto/aesni-intel_asm.S
Support rfc5288 using intel aesni routines. See also rfc5246.
AAD length is 13 bytes, padded out to 16. The padding bytes currently
have to be passed in via the scatterlist, which probably isn't quite
the right fix.
The assoclen checks were moved to the individual rfc stubs, and the
common routines
On 11/23/15 02:27 PM, Sowmini Varadhan wrote:
> On (11/23/15 09:43), Dave Watson wrote:
> > Currently gcm(aes) represents ~80% of our SSL connections.
> >
> > Userspace interface:
> >
> > 1) A transform and op socket are created using the userspace crypto
On 12/13/16 04:32 PM, Ilya Lesokhin wrote:
> --- a/arch/x86/crypto/aesni-intel_glue.c
> +++ b/arch/x86/crypto/aesni-intel_glue.c
> @@ -903,9 +903,11 @@ static int helper_rfc4106_encrypt(struct aead_request
> *req)
> *((__be32 *)(iv+12)) = counter;
>
> if (sg_is_last(req->src) &&
> -
On 07/11/17 08:29 AM, Steffen Klassert wrote:
> Sorry for replying to old mail...
> > +int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx)
> > +{
>
> ...
>
> > +
> > + if (!sw_ctx->aead_send) {
> > + sw_ctx->aead_send = crypto_alloc_aead("gcm(aes)", 0, 0);
> > +
On 07/12/17 09:20 AM, Steffen Klassert wrote:
> On Tue, Jul 11, 2017 at 11:53:11AM -0700, Dave Watson wrote:
> > On 07/11/17 08:29 AM, Steffen Klassert wrote:
> > > Sorry for replying to old mail...
> > > > +int tls_set_sw_offload(struct soc
Hi Richard,
On 07/06/17 04:30 PM, Richard Weinberger wrote:
> Dave,
>
> On Wed, Jun 14, 2017 at 8:36 PM, Dave Watson <davejwat...@fb.com> wrote:
> > Documentation/networking/tls.txt | 135 +++
> > MAINTAINERS| 10 +
> > include
On 07/29/17 01:12 PM, Tom Herbert wrote:
> On Wed, Jun 14, 2017 at 11:37 AM, Dave Watson <davejwat...@fb.com> wrote:
> > Add the infrastructure for attaching Upper Layer Protocols (ULPs) over TCP
> > sockets. Based on a similar infrastructure in tcp_cong. The idea is that
On 06/25/17 02:42 AM, Levin, Alexander (Sasha Levin) wrote:
> On Wed, Jun 14, 2017 at 11:37:14AM -0700, Dave Watson wrote:
> >Add the infrastructure for attaching Upper Layer Protocols (ULPs) over TCP
> >sockets. Based on a similar infrastructure in tcp_cong. The idea is that any
cp_available_ulp
tls
There is currently no functionality to remove or chain ULPs, but
it should be possible to add these in the future if needed.
Signed-off-by: Boris Pismenny <bor...@mellanox.com>
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
include/net/inet_connection_so
This series adds support for kernel TLS encryption over TCP sockets.
A standard TCP socket is converted to a TLS socket using a setsockopt.
Only symmetric crypto is done in the kernel, as well as TLS record
framing. The handshake remains in userspace, and the negotiated
cipher keys/iv are
bor...@mellanox.com>
Signed-off-by: Ilya Lesokhin <il...@mellanox.com>
Signed-off-by: Aviad Yehezkel <avia...@mellanox.com>
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
MAINTAINERS | 10 +
include/linux/socket.h | 1 +
include/net/tls.h| 223 ++
in <il...@mellanox.com>
Signed-off-by: Boris Pismenny <bor...@mellanox.com>
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
include/net/tcp.h | 2 ++
net/ipv4/tcp.c | 5 +++--
net/ipv4/tcp_rate.c | 1 +
3 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/include
Add documentation for the tcp ULP tls interface.
Signed-off-by: Boris Pismenny <bor...@mellanox.com>
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
Documentation/networking/tls.txt | 120 +++
1 file changed, 120 insertions(+)
create
On 06/16/17 01:58 PM, Stephen Hemminger wrote:
> On Wed, 14 Jun 2017 11:37:39 -0700
> Dave Watson <davejwat...@fb.com> wrote:
>
> > --- /dev/null
> > +++ b/net/tls/Kconfig
> > @@ -0,0 +1,12 @@
> > +#
> > +# TLS configuration
> > +#
bor...@mellanox.com>
Signed-off-by: Ilya Lesokhin <il...@mellanox.com>
Signed-off-by: Aviad Yehezkel <avia...@mellanox.com>
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
MAINTAINERS | 10 +
include/linux/socket.h | 1 +
include/net/tls.h| 237 +++
Add documentation for the tcp ULP tls interface.
Signed-off-by: Boris Pismenny <bor...@mellanox.com>
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
Documentation/networking/tls.txt | 135 +++
1 file changed, 135 insertions(+)
create
On 06/14/17 01:54 PM, Tom Herbert wrote:
> On Wed, Jun 14, 2017 at 11:36 AM, Dave Watson <davejwat...@fb.com> wrote:
> > This series adds support for kernel TLS encryption over TCP sockets.
> > A standard TCP socket is converted to a TLS socket using a setsockopt.
> > On
Hi Hannes,
On 06/14/17 10:15 PM, Hannes Frederic Sowa wrote:
> one question for this patch set:
>
> What is the reason for not allowing key updates for the TX path? I was
> always loud pointing out the problems with TLSv1.2 renegotiation and
> TLSv1.3 key update alerts. This patch set uses
bor...@mellanox.com>
Signed-off-by: Ilya Lesokhin <il...@mellanox.com>
Signed-off-by: Aviad Yehezkel <avia...@mellanox.com>
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
MAINTAINERS | 10 +
include/linux/socket.h | 1 +
include/net/tls.h| 222 +
<1513769897-26945-1-git-send-email-atul.gu...@chelsio.com>
On 12/20/17 05:08 PM, Atul Gupta wrote:
> +static void __init chtls_init_ulp_ops(void)
> +{
> + chtls_base_prot = tcp_prot;
> + chtls_base_prot.hash = chtls_hash;
> + chtls_base_prot.unhash
On 01/30/18 06:51 AM, Atul Gupta wrote:
> What I was referring is that passing "tls" ulp type in setsockopt
> may be insufficient to make the decision when multi HW assist Inline
> TLS solution exists.
Setting the ULP doesn't choose HW or SW implementation, I think that
should be done later when
On 01/31/18 04:14 PM, Atul Gupta wrote:
>
>
> On Tuesday 30 January 2018 10:41 PM, Dave Watson wrote:
> > On 01/30/18 06:51 AM, Atul Gupta wrote:
> >
> > > What I was referring is that passing "tls" ulp type in setsockopt
> > > may be insuf
On 01/31/18 05:22 PM, Vakul Garg wrote:
> > > On second though in stable we should probably just disable async tfm
> > > allocations.
> > > It's simpler. But this approach is still good for -next
> > >
> > >
> > > Gilad
> >
> > I agree with Gilad, just disable async for now.
> >
>
> How to do
On 01/31/18 09:34 PM, Vakul Garg wrote:
> Async crypto accelerators (e.g. drivers/crypto/caam) support offloading
> GCM operation. If they are enabled, crypto_aead_encrypt() return error
> code -EINPROGRESS. In this case tls_do_encryption() needs to wait on a
> completion till the time the
them out
with scatterlist_map_and_copy.
Only the SSE routines are updated so far, so leave the previous
gcmaes_en/decrypt routines, and branch to the sg ones if the
keysize is inappropriate for avx, or we are SSE only.
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
arch/x86/crypto
The asm macros are all set up now, introduce entry points.
GCM_INIT and GCM_COMPLETE have arguments supplied, so that
the new scatter/gather entry points don't have to take all the
arguments, and only the ones they need.
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
arch/x86/crypto
Use macro operations to merge implementations of INITIAL_BLOCKS,
since they differ by only a small handful of lines.
Use macro counter \@ to simplify implementation.
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
arch/x86/crypto/aesni-intel_asm.S
Macro-ify function save and restore. These will be used in new functions
added for scatter/gather update operations.
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
arch/x86/crypto/aesni-intel_asm.S | 53 ++-
1 file changed, 24 insertions(+), 29 del
Prepare to handle partial blocks between scatter/gather calls.
For the last partial block, we only want to calculate the aadhash
in GCM_COMPLETE, and a new partial block macro will handle both
aadhash update and encrypting partial blocks between calls.
Signed-off-by: Dave Watson <davej
AAD hash only needs to be calculated once for each scatter/gather operation.
Move it to its own macro, and call it from GCM_INIT instead of
INITIAL_BLOCKS.
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
arch/x86/crypto/aesni-intel_asm.S | 71 ---
Fill in aadhash, aadlen, pblocklen, curcount with appropriate values.
pblocklen, aadhash, and pblockenckey are also updated at the end
of each scatter/gather operation, to be carried over to the next
operation.
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
arch/x86/crypto/aesni-intel
Introduce a gcm_context_data struct that will be used to pass
context data between scatter/gather update calls. It is passed
as the second argument (after crypto keys), other args are
renumbered.
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
arch/x86/crypto/aesni-intel_asm.S
Reduce code duplication by introducing the GCM_INIT macro. This macro
will also be exposed as a function for implementing scatter/gather
support, since INIT only needs to be called once for the full
operation.
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
arch/x86/crypto/aesni-intel
Merge encode and decode tag calculations in GCM_COMPLETE macro.
Scatter/gather routines will call this once at the end of encryption
or decryption.
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
arch/x86/crypto/aesni-intel_asm.S | 172 ++
1 file c
%aes_loop_initial_4974
1.27%gcmaes_encrypt_sg.constprop.15
Dave Watson (14):
x86/crypto: aesni: Merge INITIAL_BLOCKS_ENC/DEC
x86/crypto: aesni: Macro-ify func save/restore
x86/crypto: aesni: Add GCM_INIT macro
x86/crypto: aesni: Add GCM_COMPLETE macro
x86/crypto: aesni: Merge encode
We can fast-path any < 16 byte read if the full message is > 16 bytes,
and shift over by the appropriate amount. Usually we are
reading > 16 bytes, so this should be faster than the READ_PARTIAL
macro introduced in b20209c91e2 for the average case.
Signed-off-by: Dave Watson <davejw
Make a macro for the main encode/decode routine. Only a small handful
of lines differ for enc and dec. This will also become the main
scatter/gather update routine.
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
arch/x86/crypto/aesni-intel_asm.S
On 02/12/18 03:12 PM, Junaid Shahid wrote:
> Hi Dave,
>
>
> On 02/12/2018 11:51 AM, Dave Watson wrote:
>
> > +static int gcmaes_encrypt_sg(struct aead_request *req, unsigned int
> > assoclen,
> > + u8 *hash_subkey, u8 *iv, voi
On 02/13/18 08:42 AM, Stephan Mueller wrote:
> > +static int gcmaes_encrypt_sg(struct aead_request *req, unsigned int
> > assoclen, + u8 *hash_subkey, u8 *iv, void *aes_ctx)
> > +{
> > + struct crypto_aead *tfm = crypto_aead_reqtfm(req);
> > + unsigned long auth_tag_len =
need to be calculated once per key and could
be moved to when set_key is called, however, the current glue code
falls back to generic aes code if fpu is disabled.
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
arch/x86/crypto/aesni-intel_asm.S | 205 ---
Before this diff, multiple calls to GCM_ENC_DEC will
succeed, but only if all calls are a multiple of 16 bytes.
Handle partial blocks at the start of GCM_ENC_DEC, and update
aadhash as appropriate.
The data offset %r11 is also updated after the partial block.
Signed-off-by: Dave Watson
ous
gcmaes_en/decrypt routines, and branch to the sg ones if the
keysize is inappropriate for avx, or we are SSE only.
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
arch/x86/crypto/aesni-intel_glue.c | 133 +
1 file changed, 133 insertions(+)
diff --g
14: merge enc/dec
also use new routine if cryptlen < AVX_GEN2_OPTSIZE
optimize case if assoc is already linear
Dave Watson (14):
x86/crypto: aesni: Merge INITIAL_BLOCKS_ENC/DEC
x86/crypto: aesni: Macro-ify func save/restore
x86/crypto: aesni: Add GCM_INIT macro
x86/
On 02/15/18 12:24 PM, Atul Gupta wrote:
> @@ -401,6 +430,15 @@ static int do_tls_setsockopt_tx(struct sock *sk, char
> __user *optval,
> goto out;
> }
>
> + rc = get_tls_offload_dev(sk);
> + if (rc) {
> + goto out;
> + } else {
> + /*
On 02/15/18 04:10 PM, Atul Gupta wrote:
> > -Original Message-
> > From: Dave Watson [mailto:davejwat...@fb.com]
> > Sent: Thursday, February 15, 2018 9:22 PM
> > To: Atul Gupta <atul.gu...@chelsio.com>
> > Cc: da...@davemloft.net; herb...@gondo
On 02/23/18 04:58 PM, Atul Gupta wrote:
> > On 02/22/18 11:21 PM, Atul Gupta wrote:
> > > @@ -403,6 +431,15 @@ static int do_tls_setsockopt_tx(struct sock *sk,
> > > char __user *optval,
> > > goto err_crypto_info;
> > > }
> > >
> > > + rc = tls_offload_dev_absent(sk);
> > > + if
On 02/22/18 11:21 PM, Atul Gupta wrote:
> @@ -403,6 +431,15 @@ static int do_tls_setsockopt_tx(struct sock *sk, char
> __user *optval,
> goto err_crypto_info;
> }
>
> + rc = tls_offload_dev_absent(sk);
> + if (rc == -EINVAL) {
> + goto out;
> + } else
are provided to decrypt in to.
sk_poll is overridden, and only returns POLLIN if a full TLS message is
received. Otherwise we wait for strparser to finish reading a full frame.
Actual decryption is only done during recvmsg or splice_read calls.
Signed-off-by: Dave Watson <davejwat...@fb.
Pass EBADMSG explicitly to tls_err_abort. Receive path will
pass additional codes - E2BIG if framing is larger than max
TLS record size.
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
include/net/tls.h | 6 +++---
net/tls/tls_sw.c | 2 +-
2 files changed, 4 insertions(+), 4 del
Refactor zerocopy_from_iter to take arguments for pages and size,
such that it can be used for both tx and rx. RX will also support
zerocopy direct to output iter, as long as the full message can
be copied at once (a large enough userspace buffer was provided).
Signed-off-by: Dave Watson
Separate tx crypto parameters to a separate cipher_context struct.
The same parameters will be used for rx using the same struct.
tls_advance_record_sn is modified to only take the cipher info.
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
include/net/tls.h
copies vs. userspace's one, vs. previous kernel's two.
[1] https://marc.info/?l=linux-crypto-vger&m=151931242406416&w=2
[2] https://github.com/Mellanox/openssl/commits/tls_rx
[3] https://github.com/ktls/af_ktls-tool/tree/RX
Dave Watson (5):
tls: Generalize zerocopy_from_iter
tls: Move cipher i
Add documentation on rx path setup and cmsg interface.
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
Documentation/networking/tls.txt | 59 ++--
1 file changed, 57 insertions(+), 2 deletions(-)
diff --git a/Documentation/networking/tls
On 03/08/18 09:48 PM, Boris Pismenny wrote:
> Hi Dave,
>
> On 03/08/18 18:50, Dave Watson wrote:
> > Add rx path for tls software implementation.
> >
> > recvmsg, splice_read, and poll implemented.
> >
> > An additional sockopt TLS_RX is added, with th
remove copies, resulting in optimally
zero copies vs. userspace's one, vs. previous kernel's two.
[1] https://marc.info/?l=linux-crypto-vger&m=151931242406416&w=2
[2] https://github.com/Mellanox/openssl/commits/tls_rx2
[3] https://github.com/ktls/af_ktls-tool/tree/RX
Dave Watson (6):
tls: Gener
to finish reading a full frame.
Actual decryption is only done during recvmsg or splice_read calls.
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
include/net/tls.h| 27 ++-
include/uapi/linux/tls.h | 2 +
net/tls/Kconfig | 1 +
net/tls/tls_main.c | 62 -
n
Add documentation on rx path setup and cmsg interface.
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
Documentation/networking/tls.txt | 66 ++--
1 file changed, 64 insertions(+), 2 deletions(-)
diff --git a/Documentation/networking/tls
Pass EBADMSG explicitly to tls_err_abort. Receive path will
pass additional codes - EMSGSIZE if framing is larger than max
TLS record size, EINVAL if TLS version mismatch.
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
include/net/tls.h | 6 +++---
net/tls/tls_sw.c | 2 +-
2 files c
Several config variables are prefixed with tx, drop the prefix
since these will be used for both tx and rx.
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
include/net/tls.h | 2 +-
net/tls/tls_main.c | 26 +-
2 files changed, 14 insertions(+), 14 deletions(-)
vger=151931242406416=2
[2] https://github.com/Mellanox/openssl/commits/tls_rx2
[3] https://github.com/ktls/af_ktls-tool/tree/RX
Dave Watson (6):
tls: Generalize zerocopy_from_iter
tls: Move cipher info to a separate struct
tls: Pass error code explicitly to tls_err_abort
tls: Refactor variable na
Add documentation on rx path setup and cmsg interface.
Signed-off-by: Dave Watson <davejwat...@fb.com>
---
Documentation/networking/tls.txt | 67 ++--
1 file changed, 65 insertions(+), 2 deletions(-)
diff --git a/Documentation/networking/tls