tree:
https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
head: 88d905e20b11f7ad841e3afddaf1d59b6693c4a1
commit: 17c18f9e33282a170458cb5ea20759bfcb0da7d8 [81/86] crypto: user - Split
stats in multiple structures
reproduce: make htmldocs
All warnings (new ones
The crypto API wants the updated IV in req->info after decryption. The
updated IV used to be copied correctly to req->info after running the
decryption job. Since 115957bb3e59 this is done before running the job
so instead of the updated IV only the unmodified input IV is given back
to the crypto
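To illustrate the ordering point in user-space terms, here is a minimal Python sketch (the XOR "cipher" and block size are invented for illustration; only the ordering carries over): for CBC-style decryption the updated IV is the last *ciphertext* block, so it must be captured before an in-place job overwrites the buffer, and copied back only after the job has run.

```python
BS = 16  # illustrative block size

def decrypt_inplace(buf):
    # stand-in for the hardware decryption job; a real driver would
    # overwrite the ciphertext buffer with the plaintext
    return bytes(b ^ 0xFF for b in buf)

def run_decrypt(buf):
    # the updated IV is the last ciphertext block, so save it before
    # the in-place job destroys the ciphertext ...
    next_iv = buf[-BS:]
    plaintext = decrypt_inplace(buf)
    # ... and copy it back only after the job (here: return it)
    return plaintext, next_iv
```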
On Thu, Nov 29, 2018 at 02:42:15PM +, Corentin Labbe wrote:
> Hello
>
> This patchset fixes all the problems reported by Eric Biggers.
>
> Regards
>
> Changes since v4:
> - Inlined functions when !CRYPTO_STATS
>
> Changes since v3:
> - Added a crypto_stats_init as asked by Neil Horman
> - Fixed
On Fri, Nov 30, 2018 at 02:31:48PM +0530, Atul Gupta wrote:
> Immediate packets sent to hardware should include the work
> request length when calculating the flits. A WR occupies one flit
> and, if not accounted for, results in an invalid request which
> stalls the HW queue.
>
> Cc: sta...@vger.kernel.org
>
From: Eric Biggers
The 2018-11-28 revision of the Adiantum paper has revised some notation:
- 'M' was replaced with 'L' (meaning "Left", for the left-hand part of
the message) in the definition of Adiantum hashing, to avoid confusion
with the full message
- ε-almost-∆-universal is now
From: Eric Biggers
The kernel's ChaCha20 uses the RFC7539 convention of the nonce being 12
bytes rather than 8, so actually I only appended 12 random bytes (not
16) to its test vectors to form 24-byte nonces for the XChaCha20 test
vectors. The other 4 bytes were just from zero-padding the
From: Eric Biggers
There is a draft specification for XChaCha20 being worked on. Add the
XChaCha20 test vector from the appendix so that we can be extra sure the
kernel's implementation is compatible.
I also recomputed the ciphertext with XChaCha12 and added it there too,
to keep the tests for
I was curious if it might make implementing F() faster to use
instructions that are meant to work with sets of data similar to what
would be processed
From: Eric Biggers
If the stream cipher implementation is asynchronous, then the Adiantum
instance must be flagged as asynchronous as well. Otherwise someone
asking for a synchronous algorithm can get an asynchronous algorithm.
There are no asynchronous xchacha12 or xchacha20 implementations
On Thu, Sep 06, 2018 at 12:43:41PM +0200, Ard Biesheuvel wrote:
> On 5 September 2018 at 21:24, Eric Biggers wrote:
> > From: Eric Biggers
> >
> > fscrypt doesn't use the CTR mode of operation for anything, so there's
> > no need to select CRYPTO_CTR. It was added by commit 71dea01ea2ed
> >
Improve the performance of NEON based ChaCha:
Patch #1 adds a block size of 1472 to the tcrypt test template so we have
something that reflects the VPN case.
Patch #2 improves performance for arbitrary length inputs: on deep pipelines,
throughput increases ~30% when running on input blocks
In order to have better coverage of algorithms operating on block
sizes that are in the ballpark of a VPN packet, add 1472 to the
block_sizes array.
Signed-off-by: Ard Biesheuvel
---
crypto/tcrypt.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/crypto/tcrypt.c
To some degree, most known AArch64 micro-architectures appear to be
able to issue ALU instructions in parallel to SIMD instructions
without affecting the SIMD throughput. This means we can use the ALU
to process a fifth ChaCha block while the SIMD is processing four
blocks in parallel.
Update the 4-way NEON ChaCha routine so it can handle input of any
length >64 bytes in its entirety, rather than having to call into
the 1-way routine and/or memcpy()s via temp buffers to handle the
tail of a ChaCha invocation that is not a multiple of 256 bytes.
On inputs that are a multiple of
Send SPI, 64b seq nos and 64b IV with aadiv drop for inline crypto.
This information is added in outgoing packet after the CPL TX PKT XT
and removed by hardware.
The aad, auth and cipher offsets are then adjusted for ESN enabled tunnel.
Signed-off-by: Atul Gupta
---
Immediate packets sent to hardware should include the work
request length when calculating the flits. A WR occupies one flit
and, if not accounted for, results in an invalid request which
stalls the HW queue.
Cc: sta...@vger.kernel.org
Signed-off-by: Atul Gupta
---
drivers/crypto/chelsio/chcr_ipsec.c | 5 -
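A rough sketch of the accounting rule described above (the 8-byte flit size matches the hardware's 64-bit flit; the WR header length here is an illustrative placeholder, not the driver's actual structure size):

```python
FLIT_BYTES = 8  # one flit is 64 bits on this hardware

def calc_tx_flits(imm_payload_len, wr_hdr_len=16):
    """Flits for an immediate packet: the work request header itself
    must be counted alongside the inlined payload, otherwise the
    request is short by one flit."""
    total = wr_hdr_len + imm_payload_len
    return -(-total // FLIT_BYTES)  # ceiling division
```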
On Tue, Nov 20, 2018 at 05:30:47PM +0100, Martin Willi wrote:
> In the quest for pushing the limits of chacha20 encryption for both IPsec
> and Wireguard, this small series adds AVX-512VL block functions. The VL
> variant works on 256-bit ymm registers, but compared to AVX2 can benefit
> from the
On Tue, Nov 20, 2018 at 07:09:53AM +, gongchen (E) wrote:
> Hi Dear Herbert,
>
> Sorry to bother you, but we’ve met a problem in the crypto module,
> would you please kindly help us look into it ? Thank you very much.
>
> In the below function chain,
--
Firmware upgraded to v10
Signed-off-by: Nagadheeraj Rottela
---
WHENCE | 2 +-
cavium/cnn55xx_se.fw | Bin 27698 -> 35010 bytes
2 files changed, 1 insertion(+), 1 deletion(-)
diff --git a/WHENCE b/WHENCE
index a188c0d..ed10d5b 100644
--- a/WHENCE
+++ b/WHENCE
@@ -3586,7 +3586,7
Hi Martin,
On Tue, Nov 20, 2018 at 5:29 PM Martin Willi wrote:
> Thanks for the offer, no need at this time. But I certainly would
> welcome if you could do some (Wireguard) benching with that code to see
> if it works for you.
I certainly will test it in a few different network circumstances,
This version uses the same principle as the AVX2 version by scheduling the
operations for two block pairs in parallel. It benefits from the AVX-512VL
rotate instructions and the more efficient partial block handling using
"vmovdqu8", resulting in a speedup of the raw block function of ~20%.
In the quest for pushing the limits of chacha20 encryption for both IPsec
and Wireguard, this small series adds AVX-512VL block functions. The VL
variant works on 256-bit ymm registers, but compared to AVX2 can benefit
from the new instructions.
Compared to the AVX2 version, these block functions
This version uses the same principle as the AVX2 version. It benefits
from the AVX-512VL rotate instructions and the more efficient partial
block handling using "vmovdqu8", resulting in a speedup of ~20%.
Unlike the AVX2 version, it is faster than the single block SSSE3 version
to process a
This variant is similar to the AVX2 version, but benefits from the AVX-512
rotate instructions and the additional registers, so it can operate without
any data on the stack. It uses ymm registers only to avoid the massive core
throttling on Skylake-X platforms. Nonetheless it brings a ~30%
Hi Jason,
> [...] I have a massive Xeon Gold 5120 machine that I can give you
> access to if you'd like to do some testing and benching.
Thanks for the offer, no need at this time. But I certainly would
welcome if you could do some (Wireguard) benching with that code to see
if it works for you.
Hi Dear Herbert,
Sorry to bother you, but we’ve met a problem in the crypto module,
would you please kindly help us look into it ? Thank you very much.
In the below function chain, scatterwalk_start() doesn't check the
result of sg_next(), so the kernel will crash if
On Wed, Nov 14, 2018 at 12:21:11PM -0800, Eric Biggers wrote:
> From: Eric Biggers
>
> 'shash' algorithms are always synchronous, so passing CRYPTO_ALG_ASYNC
> in the mask to crypto_alloc_shash() has no effect. Many users therefore
> already don't pass it, but some still do. This inconsistency
On Wed, Nov 14, 2018 at 12:19:39PM -0800, Eric Biggers wrote:
> From: Eric Biggers
>
> 'cipher' algorithms (single block ciphers) are always synchronous, so
> passing CRYPTO_ALG_ASYNC in the mask to crypto_alloc_cipher() has no
> effect. Many users therefore already don't pass it, but some
On Wed, Nov 14, 2018 at 11:35:48AM -0800, Eric Biggers wrote:
> From: Eric Biggers
>
> Some algorithms initialize their .cra_list prior to registration.
> But this is unnecessary since crypto_register_alg() will overwrite
> .cra_list when adding the algorithm to the 'crypto_alg_list'.
>
On Wed, Nov 14, 2018 at 11:10:53AM -0800, Eric Biggers wrote:
> From: Eric Biggers
>
> Remove the unnecessary setting of CRYPTO_ALG_TYPE_SKCIPHER.
> Commit 2c95e6d97892 ("crypto: skcipher - remove useless setting of type
> flags") took care of this everywhere else, but a few more instances made
Hi Martin,
On Mon, Nov 19, 2018 at 8:52 AM Martin Willi wrote:
>
> Adding AVX-512VL support is relatively simple. I have a patchset mostly
> ready that is more than competitive with the code from Zinc. I'll clean
> that up and do more testing before posting it later this week.
Terrific.
On Mon, Nov 19, 2018 at 05:19:10PM +0800, Kenneth Lee wrote:
> On Mon, Nov 19, 2018 at 05:14:05PM +0800, Kenneth Lee wrote:
> > Date: Mon, 19 Nov 2018 17:14:05 +0800
> > From: Kenneth Lee
> > To: Leon Romanovsky
> > CC: Tim Sell , linux-...@vger.kernel.org,
> > Alexander Shishkin , Zaibo Xu
> >
On Mon, Nov 19, 2018 at 05:14:05PM +0800, Kenneth Lee wrote:
> Date: Mon, 19 Nov 2018 17:14:05 +0800
> From: Kenneth Lee
> To: Leon Romanovsky
> CC: Tim Sell , linux-...@vger.kernel.org,
> Alexander Shishkin , Zaibo Xu
> , zhangfei@foxmail.com, linux...@huawei.com,
>
On Thu, Nov 15, 2018 at 04:54:55PM +0200, Leon Romanovsky wrote:
> Date: Thu, 15 Nov 2018 16:54:55 +0200
> From: Leon Romanovsky
> To: Kenneth Lee
> CC: Kenneth Lee , Tim Sell ,
> linux-...@vger.kernel.org, Alexander Shishkin
> , Zaibo Xu ,
> zhangfei@foxmail.com, linux...@huawei.com,
Hi Jason,
> I'd be inclined to roll with your implementation if it can eventually
> become competitive with Andy Polyakov's, [...]
I think for the SSSE3/AVX2 code paths it is competitive; especially for
small sizes it is faster, which is not that unimportant when
implementing layer 3 VPNs.
>
On Thu, Nov 08, 2018 at 03:36:26PM +0200, Horia Geantă wrote:
> This patch set adds support for CAAM Era 10, currently used in LX2160A SoC:
> -new register mapping: some registers/fields are deprecated and moved
> to different locations, mainly version registers
> -algorithms
> chacha20 (over
On Sun, Nov 11, 2018 at 10:36:24AM +0100, Martin Willi wrote:
> This patchset improves performance of the ChaCha20 SIMD implementations
> for x86_64. For some specific encryption lengths, performance is more
> than doubled. Two mechanisms are used to achieve this:
>
> * Instead of calculating the
Hi Martin,
This is nice work, and given that it's quite clean -- and that it's
usually hard to screw up chacha in subtle ways when test vectors pass
(unlike, say, poly1305 or curve25519), I'd be inclined to roll with
your implementation if it can eventually become competitive with Andy
On Sun, Nov 11, 2018 at 10:36:24AM +0100, Martin Willi wrote:
> This patchset improves performance of the ChaCha20 SIMD implementations
> for x86_64. For some specific encryption lengths, performance is more
> than doubled. Two mechanisms are used to achieve this:
>
> * Instead of calculating the
Hi Eric,
On Wed, Nov 14, 2018 at 11:10:53AM -0800, Eric Biggers wrote:
> From: Eric Biggers
>
> Remove the unnecessary setting of CRYPTO_ALG_TYPE_SKCIPHER.
> Commit 2c95e6d97892 ("crypto: skcipher - remove useless setting of type
> flags") took care of this everywhere else, but a few more
From: Eric Biggers
'shash' algorithms are always synchronous, so passing CRYPTO_ALG_ASYNC
in the mask to crypto_alloc_shash() has no effect. Many users therefore
already don't pass it, but some still do. This inconsistency can cause
confusion, especially since the way the 'mask' argument works
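Per my reading of the API's lookup test, the core matching rule is that every bit selected by the mask must agree between the algorithm's flags and the requested type. Since shash implementations never set CRYPTO_ALG_ASYNC, passing it in the mask (requesting the bit be clear) can never exclude anything. A small sketch:

```python
CRYPTO_ALG_ASYNC = 0x00000080  # flag value as in include/linux/crypto.h

def alg_matches(alg_flags, type_, mask):
    # lookup rule: the bits selected by `mask` must agree between the
    # algorithm's cra_flags and the requested `type_`
    return ((alg_flags ^ type_) & mask) == 0

# shash algorithms never set CRYPTO_ALG_ASYNC, so asking for a cleared
# ASYNC bit via the mask always matches them:
shash_flags = 0x0
```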
From: Eric Biggers
'cipher' algorithms (single block ciphers) are always synchronous, so
passing CRYPTO_ALG_ASYNC in the mask to crypto_alloc_cipher() has no
effect. Many users therefore already don't pass it, but some still do.
This inconsistency can cause confusion, especially since the way
From: Eric Biggers
Some algorithms initialize their .cra_list prior to registration.
But this is unnecessary since crypto_register_alg() will overwrite
.cra_list when adding the algorithm to the 'crypto_alg_list'.
Apparently the useless assignment has just been copy+pasted around.
So, remove
From: Eric Biggers
Remove the unnecessary setting of CRYPTO_ALG_TYPE_SKCIPHER.
Commit 2c95e6d97892 ("crypto: skcipher - remove useless setting of type
flags") took care of this everywhere else, but a few more instances made
it into the tree at about the same time. Squash them before they get
On Mon, Nov 12, 2018 at 09:44:41AM +0200, Gilad Ben-Yossef wrote:
> Hi,
>
> It seems that the cryptodev-2.6 tree at
> https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git
> has somehow rolled back 3 months ago.
>
> Not sure if it's a git.kernel.org issue or something else
Hi,
It seems that the cryptodev-2.6 tree at
https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git
has somehow rolled back 3 months ago.
Not sure if it's a git.kernel.org issue or something else but probably
worth taking a look?
Thanks,
Gilad
--
Gilad Ben-Yossef
Chief
On Sat, 2018-11-10 at 15:51 +0100, Stefan Wahren wrote:
> Adopt the SPDX license identifier headers to ease license compliance
> management. While we are at this fix the comment style, too.
>
> Cc: Lubomir Rintel
> Signed-off-by: Stefan Wahren
> ---
> drivers/char/hw_random/bcm2835-rng.c | 7
This variant builds upon the idea of the 2-block AVX2 variant that
shuffles words after each round. The shuffling has a rather high latency,
so the arithmetic units are not optimally used.
Given that we have plenty of registers in AVX, this version parallelizes
the 2-block variant to do four
Add a length argument to the eight block function for AVX2, so the
block function may XOR only a partial length of eight blocks.
To avoid unnecessary operations, we integrate XORing of the first four
blocks in the final lane interleaving; this also avoids some work in
the partial lengths path.
Now that all block functions support partial lengths, engage the wider
block sizes more aggressively. This prevents using smaller block
functions multiple times, where the next larger block function would
have been faster.
Signed-off-by: Martin Willi
---
arch/x86/crypto/chacha20_glue.c | 39
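The dispatch idea can be sketched as follows (a simplified model of the glue logic, not the actual chacha20_glue.c code): once every block function accepts a partial length, the tail of an input is handed to the narrowest function that still covers it in one call, instead of chaining several smaller full-width calls.

```python
BLOCK = 64  # ChaCha20 block size in bytes

def dispatch(nbytes, widths=(8, 4, 1)):
    """Return the sequence of (blocks_per_call, bytes_handled) SIMD
    calls, preferring one wider partial call over several narrower
    full calls."""
    calls = []
    widest = widths[0]
    # full-width calls while a whole widest batch fits
    while nbytes >= widest * BLOCK:
        calls.append((widest, widest * BLOCK))
        nbytes -= widest * BLOCK
    # one partial call with the narrowest function covering the tail
    for w in reversed(widths):
        if nbytes <= w * BLOCK:
            if nbytes:
                calls.append((w, nbytes))
            break
    return calls
```

For example, five blocks become a single partial 8-block call rather than a 4-block call followed by a 1-block call.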
Add a length argument to the single block function for SSSE3, so the
block function may XOR only a partial length of the full block. Given
that the setup code is rather cheap, the function does not process more
than one block; this allows us to keep the block function selection in
the C glue code.
Add a length argument to the quad block function for SSSE3, so the
block function may XOR only a partial length of four blocks.
As we already have the stack set up, the partial XORing does not need
to. This gives a slightly different function trailer, so we keep that
separate from the 1-block
This patchset improves performance of the ChaCha20 SIMD implementations
for x86_64. For some specific encryption lengths, performance is more
than doubled. Two mechanisms are used to achieve this:
* Instead of calculating the minimal number of required blocks for a
given encryption length,
This variant uses the same principle as the single block SSSE3 variant
by shuffling the state matrix after each round. With the wider AVX
registers, we can do two blocks in parallel, though.
This function can increase performance and efficiency significantly for
lengths that would otherwise
Stefan Wahren writes:
> Adopt the SPDX license identifier headers to ease license compliance
> management. While we are at this fix the comment style, too.
Reviewed-by: Eric Anholt
signature.asc
Description: PGP signature
On Sat, Nov 10, 2018 at 03:51:16PM +0100, Stefan Wahren wrote:
> Adopt the SPDX license identifier headers to ease license compliance
> management. While we are at this fix the comment style, too.
>
> Cc: Lubomir Rintel
> Signed-off-by: Stefan Wahren
> ---
>
Adopt the SPDX license identifier headers to ease license compliance
management. While we are at this fix the comment style, too.
Cc: Lubomir Rintel
Signed-off-by: Stefan Wahren
---
drivers/char/hw_random/bcm2835-rng.c | 7 ++-
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git
Hi All,
PCI based devices can be shutdown from sysfs interface
echo "unbind" > /sys/bus/pci/drivers/cxgb4/unbind
In case the device has an active transformation (tfm), drivers cannot un-register the
algorithms because alg->cra_refcnt will be non-zero.
Can driver use the "CRYPTO_ALG_DEAD" flag to mark
On Sat, Oct 20, 2018 at 02:01:52AM +0300, Dmitry Eremin-Solenikov wrote:
> crypto_cfb_decrypt_segment() incorrectly XOR'ed generated keystream with
> IV, rather than with data stream, resulting in incorrect decryption.
> Test vectors will be added in the next patch.
>
> Signed-off-by: Dmitry
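The CFB rule being fixed can be shown with a minimal Python model (the "block cipher" below is a hash-based stand-in for illustration only; full-segment handling is assumed): decryption XORs the keystream with the *ciphertext*, and the next keystream input is the previous ciphertext segment, never the IV again.

```python
import hashlib

BS = 16  # illustrative segment size

def toy_block_encrypt(key, block):
    # stand-in for a real block cipher, for illustration only
    return hashlib.sha256(key + block).digest()[:BS]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cfb_encrypt(key, iv, plaintext):
    out, prev = b"", iv
    for i in range(0, len(plaintext), BS):
        ks = toy_block_encrypt(key, prev)
        seg = plaintext[i:i + BS]
        c = xor(seg, ks[:len(seg)])
        out += c
        prev = c  # feed the ciphertext segment forward
    return out

def cfb_decrypt(key, iv, ciphertext):
    out, prev = b"", iv
    for i in range(0, len(ciphertext), BS):
        c = ciphertext[i:i + BS]
        ks = toy_block_encrypt(key, prev)
        out += xor(c, ks[:len(c)])  # XOR with the data stream, not the IV
        prev = c
    return out
```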
On Wed, Oct 17, 2018 at 09:37:57PM -0700, Eric Biggers wrote:
> This series makes the "aes-fixed-time" and "aes-arm" implementations of
> AES more resistant to cache-timing attacks.
>
> Note that even after these changes, the implementations still aren't
> necessarily guaranteed to be
On 9 November 2018 at 10:45, Herbert Xu wrote:
> On Fri, Nov 09, 2018 at 05:44:47PM +0800, Herbert Xu wrote:
>> On Fri, Nov 09, 2018 at 12:33:23AM +0100, Ard Biesheuvel wrote:
>> >
>> > This should be
>> >
>> > reqsize += max(crypto_skcipher_reqsize(_tfm->base);
>> >
On Fri, Nov 09, 2018 at 05:44:47PM +0800, Herbert Xu wrote:
> On Fri, Nov 09, 2018 at 12:33:23AM +0100, Ard Biesheuvel wrote:
> >
> > This should be
> >
> > reqsize += max(crypto_skcipher_reqsize(_tfm->base);
> >crypto_skcipher_reqsize(cryptd_skcipher_child(cryptd_tfm)));
> >
> > since
On Fri, Nov 09, 2018 at 12:33:23AM +0100, Ard Biesheuvel wrote:
>
> This should be
>
> reqsize += max(crypto_skcipher_reqsize(_tfm->base);
>crypto_skcipher_reqsize(cryptd_skcipher_child(cryptd_tfm)));
>
> since the cryptd path in simd still needs some space in the subreq for
> the
On Fri, Nov 9, 2018 at 8:42 AM Ard Biesheuvel wrote:
>
> (+ Masahiro, kbuild ml)
>
> On 8 November 2018 at 21:37, Jason A. Donenfeld wrote:
> > Hi Ard, Eric, and others,
> >
> > As promised, the next Zinc patchset will have less generated code! After a
> > bit of work with Andy and Samuel, I'll
> On Nov 8, 2018, at 6:33 PM, Ard Biesheuvel wrote:
>
> On 8 November 2018 at 23:55, Ard Biesheuvel wrote:
>> The simd wrapper's skcipher request context structure consists
>> of a single subrequest whose size is taken from the subordinate
>> skcipher. However, in simd_skcipher_init(), the
Hey Ard,
On Fri, Nov 9, 2018 at 12:42 AM Ard Biesheuvel
wrote:
> Wonderful! Any problems doing that for x86_64 ?
The x86_64 is still a WIP, but hopefully we'll succeed.
> I agree 100%. When I added this the first time, it was at the request
> of the ARM maintainer, who was reluctant to rely on
(+ Masahiro, kbuild ml)
On 8 November 2018 at 21:37, Jason A. Donenfeld wrote:
> Hi Ard, Eric, and others,
>
> As promised, the next Zinc patchset will have less generated code! After a
> bit of work with Andy and Samuel, I'll be bundling the perlasm.
>
Wonderful! Any problems doing that for
On 8 November 2018 at 23:55, Ard Biesheuvel wrote:
> The simd wrapper's skcipher request context structure consists
> of a single subrequest whose size is taken from the subordinate
> skcipher. However, in simd_skcipher_init(), the reqsize that is
> retrieved is not from the subordinate skcipher
The simd wrapper's skcipher request context structure consists
of a single subrequest whose size is taken from the subordinate
skcipher. However, in simd_skcipher_init(), the reqsize that is
retrieved is not from the subordinate skcipher but from the
cryptd request structure, whose size is
Hi Ard, Eric, and others,
As promised, the next Zinc patchset will have less generated code! After a
bit of work with Andy and Samuel, I'll be bundling the perlasm.
One thing I'm wondering about, though, is the wisdom behind the current
.S_shipped pattern. Usually the _shipped is for big
Add support for Chacha20 + Poly1305 combined AEAD:
-generic (rfc7539)
-IPsec (rfc7634 - known as rfc7539esp in the kernel)
Signed-off-by: Horia Geantă
---
drivers/crypto/caam/caamalg.c | 4 +-
drivers/crypto/caam/caamalg_desc.c | 24 ++-
drivers/crypto/caam/caamalg_desc.h | 3 +-
Add support for Chacha20 + Poly1305 combined AEAD:
-generic (rfc7539)
-IPsec (rfc7634 - known as rfc7539esp in the kernel)
Signed-off-by: Cristian Stoica
Signed-off-by: Horia Geantă
---
drivers/crypto/caam/caamalg.c | 221 -
This patch set adds support for CAAM Era 10, currently used in LX2160A SoC:
-new register mapping: some registers/fields are deprecated and moved
to different locations, mainly version registers
-algorithms
chacha20 (over DPSECI - Data Path SEC Interface on fsl-mc bus)
rfc7539(chacha20,poly1305)
Era 10 changes the register map.
The updates that affect the drivers:
-new version registers are added
-DBG_DBG[deco_state] field is moved to a new register -
DBG_EXEC[19:16] @ 8_0E3Ch.
Signed-off-by: Horia Geantă
---
drivers/crypto/caam/caamalg.c| 47 +
From: Cristian Stoica
Move CHACHAPOLY_IV_SIZE to header file, so it can be reused.
Signed-off-by: Cristian Stoica
Signed-off-by: Horia Geantă
---
crypto/chacha20poly1305.c | 2 --
include/crypto/chacha20.h | 1 +
2 files changed, 1 insertion(+), 2 deletions(-)
diff --git
Add support for ChaCha20 skcipher algorithm.
Signed-off-by: Carmen Iorga
Signed-off-by: Horia Geantă
---
drivers/crypto/caam/caamalg_desc.c | 6 --
drivers/crypto/caam/caamalg_qi2.c | 27 +--
drivers/crypto/caam/compat.h | 1 +
drivers/crypto/caam/desc.h
On 07/11/18 14:55, Will Deacon wrote:
> On Wed, Nov 07, 2018 at 09:40:05AM +, Vladimir Murzin wrote:
>> There are cases where the whole feature, for instance arm64/lse or
>> arm/crypto, can depend on assembler. Current practice is to report
>> buildtime that selected feature is not supported,
On Wed, Nov 07, 2018 at 09:40:05AM +, Vladimir Murzin wrote:
> There are cases where the whole feature, for instance arm64/lse or
> arm/crypto, can depend on assembler. Current practice is to report
> buildtime that selected feature is not supported, which can be quite
> annoying...
Why is it
So we can simply hide LSE support if the dependency is not satisfied.
Cc: Will Deacon
Signed-off-by: Vladimir Murzin
---
arch/arm64/Kconfig | 1 +
arch/arm64/Makefile | 13 ++---
arch/arm64/include/asm/atomic.h | 2 +-
arch/arm64/include/asm/lse.h| 6 +++---
There are cases where the whole feature, for instance arm64/lse or
arm/crypto, can depend on assembler. Current practice is to report
buildtime that selected feature is not supported, which can be quite
annoying...
It'd be nicer if we could check the assembler first and opt in to
feature visibility in
So it is available everywhere and there is no need to keep
CONFIG_ARM64 workaround ;)
Cc: Marc Zyngier
Signed-off-by: Vladimir Murzin
---
arch/arm64/Kconfig | 3 +++
arch/arm64/Makefile | 9 ++---
2 files changed, 5 insertions(+), 7 deletions(-)
diff --git a/arch/arm64/Kconfig
With recent changes in Kconfig processing it is now possible to expose
dependency on specific tools and supported options via Kconfig rather
than bury it deep in Makefile.
This small series tries to address the case where the whole feature, for
instance arm64/lse or arm/crypto, depends on GAS.
So we can advertise only those entries whose dependency is satisfied.
Cc: Ard Biesheuvel
Signed-off-by: Vladimir Murzin
---
arch/arm/crypto/Kconfig | 31 +--
arch/arm/crypto/Makefile | 31 ++-
2 files changed, 27 insertions(+), 35
Thu, 1 Nov 2018 at 11:41, Herbert Xu :
>
> On Thu, Nov 01, 2018 at 11:32:37AM +0300, Dmitry Eremin-Solenikov wrote:
> >
> > Since the 4.20 pull went into Linus's tree, any chance of getting these two
> > patches
> > in crypto tree?
>
> These aren't critical enough for the current mainline so they
On Thu, Nov 01, 2018 at 11:32:37AM +0300, Dmitry Eremin-Solenikov wrote:
>
> Since the 4.20 pull went into Linus's tree, any chance of getting these two
> patches
> in crypto tree?
These aren't critical enough for the current mainline so they will
go in at the next merge window.
Cheers,
--
Email:
Hello,
Sun, 21 Oct 2018 at 11:07, James Bottomley
:
>
> On Sun, 2018-10-21 at 09:05 +0200, Ard Biesheuvel wrote:
> > (+ James)
>
> Thanks!
>
> > On 20 October 2018 at 01:01, Dmitry Eremin-Solenikov
> > wrote:
> > > crypto_cfb_decrypt_segment() incorrectly XOR'ed generated keystream
> > > with
On Wed, 24 Oct 2018, James Bottomley wrote:
+static void KDFa(u8 *key, int keylen, const char *label, u8 *u,
+u8 *v, int bytes, u8 *out)
Should this be in lower case? I would rename it as tpm_kdfa().
This one is defined as KDFa() in the standards and it's not TPM
specific
On Wed, 24 Oct 2018, James Bottomley wrote:
On Wed, 2018-10-24 at 02:51 +0300, Jarkko Sakkinen wrote:
I would consider sending first a patch set that would iterate the
existing session stuff to be ready for this i.e. merge in two
iterations (emphasis on the word "consider"). We can probably
On Wed, 2018-10-24 at 02:48 +0300, Jarkko Sakkinen wrote:
> On Mon, 22 Oct 2018, James Bottomley wrote:
> > [...]
I'll tidy up the descriptions.
> These all sould be combined with the existing session stuff inside
> tpm2-cmd.c and not have duplicate infrastructures. The file name
> should be
On Tue, 23 Oct 2018, Ard Biesheuvel wrote:
On 23 October 2018 at 04:01, James Bottomley
wrote:
On Mon, 2018-10-22 at 19:19 -0300, Ard Biesheuvel wrote:
[...]
+static void hmac_init(struct shash_desc *desc, u8 *key, int
keylen)
+{
+ u8 pad[SHA256_BLOCK_SIZE];
+ int i;
+
+
On Wed, 2018-10-24 at 02:51 +0300, Jarkko Sakkinen wrote:
> I would consider sending first a patch set that would iterate the
> existing session stuff to be ready for this i.e. merge in two
> iterations (emphasis on the word "consider"). We can probably merge
> the groundwork quite fast.
I
The tag in the short description does not look right at all. Should be either
"tpm:" or "keys, trusted:".
On Mon, 22 Oct 2018, James Bottomley wrote:
If some entity is snooping the TPM bus, they can see the data going in
to be sealed and the data coming out as it is unsealed. Add parameter
and
I would consider sending first a patch set that would iterate the existing
session stuff to be ready for this i.e. merge in two iterations
(emphasis on the word "consider"). We can probably merge the groundwork
quite fast.
/Jarkko
On Mon, 22 Oct 2018, James Bottomley wrote:
By now, everybody