Re: [v3 PATCH 0/8] crypto: Convert all AEAD users to new interface

2015-05-27 Thread Steffen Klassert
On Wed, May 27, 2015 at 04:01:05PM +0800, Herbert Xu wrote:
 Hi:
 
 The only changes from the last version are that set_ad no longer
 takes a cryptoff argument and testmgr has been updated to always
 supply space for the authentication tag.
 
 The algif_aead patch has been removed and will be posted separately.
 
 Series description:
 
 This series of patches converts all in-tree AEAD users that I
 could find to the new single SG list interface.  For IPsec it
 also adopts the new explicit IV generator scheme.
 
 To recap, the old AEAD interface takes an associated data (AD)
 SG list in addition to the plain/cipher text SG list(s).  That
 forces the underlying AEAD algorithm implementors to try to stitch
 those two lists together where possible in order to maximise the
 contiguous chunk of memory passed to the ICV/hash function.  Things
 get even more hairy for IPsec as it has a third piece of memory,
 the generated IV (giv) that needs to be hashed.  One look at the
 nasty things authenc does for example is enough to make anyone
 puke :)
 
 In fact the interface is just getting in our way because for the
 main user IPsec the data is naturally contiguous as the protocol
 was designed with this in mind.
 
 So the new AEAD interface gets rid of the separate AD SG list
 and instead simply requires the AD to be at the head of the src
 and dst SG lists.
 
 The conversion of in-tree users is fairly straightforward.  The
 only non-trivial bit is IPsec as I'm taking this opportunity to
 move the IV generation knowledge into IPsec as that's where it
 belongs since we may in future wish to support different generation
 schemes for a single algorithm.
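The single-list layout the cover letter describes can be modelled in plain userspace C. This is only an illustrative sketch (the struct and helper names below are hypothetical, not kernel API): the associated data sits at the head of the src/dst buffers, the text follows it, and the output leaves room for the tag.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Userspace model of the new AEAD buffer layout:
 * [ AD | plain/cipher text | tag space (dst only, on encrypt) ].
 * Hypothetical names, for illustration only.
 */
struct aead_layout {
    size_t assoclen;   /* bytes of AD at the head of the buffer */
    size_t cryptlen;   /* plain/cipher text length */
    size_t authsize;   /* ICV/tag length */
};

/* Total source length for encryption: AD + plaintext. */
static size_t aead_src_len(const struct aead_layout *l)
{
    return l->assoclen + l->cryptlen;
}

/* Total destination length for encryption: AD + ciphertext + tag. */
static size_t aead_dst_len_enc(const struct aead_layout *l)
{
    return l->assoclen + l->cryptlen + l->authsize;
}

/* Offset where the text starts: immediately after the AD. */
static size_t aead_text_offset(const struct aead_layout *l)
{
    return l->assoclen;
}
```

In the real API the same information is conveyed on a single scatterlist via aead_request_set_crypt() and aead_request_set_ad(); the point of the layout is that the ICV/hash function sees one contiguous run instead of stitched-together AD and text lists.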

Not sure if I missed something in the flood of patches, but if I
apply your v3 patchset on top of the cryptodev tree, it crashes
like this during boot:

[4.668297] [ cut here ]
[4.669143] kernel BUG at 
/home/klassert/git/linux-stk/include/linux/scatterlist.h:67!
[4.670457] invalid opcode:  [#1] SMP DEBUG_PAGEALLOC
[4.671595] CPU: 0 PID: 1363 Comm: cryptomgr_test Not tainted 4.0.0+ #951
[4.672025] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 
Bochs 01/01/2011
[4.672025] task: ce9e7300 ti: ceb54000 task.ti: ceb54000
[4.672025] EIP: 0060:[c11d45b5] EFLAGS: 00010206 CPU: 0
[4.672025] EIP is at scatterwalk_ffwd+0xf5/0x100
[4.672025] EAX: ceb43b20 EBX: ceb55c94 ECX: 0014 EDX: c11db23f
[4.672025] ESI: 0010 EDI: 0003 EBP: ceb55c7c ESP: ceb55c6c
[4.672025]  DS: 007b ES: 007b FS: 00d8 GS:  SS: 0068
[4.672025] CR0: 8005003b CR2: bfbb6fc0 CR3: 0eb26000 CR4: 06d0
[4.672025] Stack:
[4.672025]  cffd28c0 0014 ceb35400 cea33618 ceb55cd0 c11d45e8 ceb43b20 

[4.672025]  ceb35438 c11db220 ceb55c9c c11db23f ceb55cac c11da470 ceb35438 
ceb353c8
[4.672025]  ceb55cb4 c11da763 ceb55cd0 c11f2c6f ceb35400 0200 ceb35358 
ceb353c8
[4.672025] Call Trace:
[4.672025]  [c11d45e8] scatterwalk_map_and_copy+0x28/0xc0
[4.672025]  [c11db220] ? shash_ahash_finup+0x80/0x80
[4.672025]  [c11db23f] ? shash_async_finup+0x1f/0x30
[4.672025]  [c11da470] ? crypto_ahash_op+0x20/0x50
[4.672025]  [c11da763] ? crypto_ahash_finup+0x13/0x20
[4.672025]  [c11f2c6f] ? crypto_authenc_ahash_fb+0xaf/0xd0
[4.672025]  [c11f2dfc] crypto_authenc_genicv+0xfc/0x340
[4.672025]  [c11f3526] crypto_authenc_encrypt+0x96/0xb0
[4.672025]  [c11f3490] ? crypto_authenc_decrypt+0x3e0/0x3e0
[4.672025]  [c11d4eb7] old_crypt+0xa7/0xc0
[4.672025]  [c11d4f09] old_encrypt+0x19/0x20
[4.672025]  [c11ddbe8] __test_aead+0x268/0x1580
[4.672025]  [c11d28a7] ? __crypto_alloc_tfm+0x37/0x120
[4.672025]  [c11d28a7] ? __crypto_alloc_tfm+0x37/0x120
[4.672025]  [c11d7742] ? skcipher_geniv_init+0x22/0x40
[4.672025]  [c11d7d73] ? eseqiv_init+0x43/0x50
[4.672025]  [c11d2936] ? __crypto_alloc_tfm+0xc6/0x120
[4.672025]  [c11df101] test_aead+0x31/0xc0
[4.672025]  [c11df1d3] alg_test_aead+0x43/0xa0
[4.672025]  [c11def2e] ? alg_find_test+0x2e/0x70
[4.672025]  [c11dfe42] alg_test+0xa2/0x240
[4.672025]  [c106dd83] ? finish_task_switch+0x83/0xe0
[4.672025]  [c159c002] ? __schedule+0x412/0x1067
[4.672025]  [c1085f57] ? __wake_up_common+0x47/0x70
[4.672025]  [c11dbc10] ? cryptomgr_notify+0x450/0x450
[4.672025]  [c11dbc4f] cryptomgr_test+0x3f/0x50
[4.672025]  [c1066dfb] kthread+0xab/0xc0
[4.672025]  [c15a1a41] ret_from_kernel_thread+0x21/0x30
[4.672025]  [c1066d50] ? __kthread_parkme+0x80/0x80
[4.672025] Code: 83 c4 04 5b 5e 5f 5d c3 81 3b 21 43 65 87 75 13 8b 43 04 
83 e0 fe 83 c8 02 89 43 04 89 d8 e9 4d ff ff ff 0f 0b 0f 0b 0f 0b 0f 0b 0f 0b 
0f 0b 8d b4 26 00 00 00 00 55 89 e5 57 56 53 83 ec 40 3e
[4.672025] EIP: [c11d45b5] scatterwalk_ffwd+0xf5/0x100 SS:ESP 
0068:ceb55c6c
[4.721562] ---[ end trace 94a02f0816fe7c7f ]---

--
To unsubscribe from this list: send the line unsubscribe linux-crypto in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Re: crypto: algif_aead - Switch to new AEAD interface

2015-05-27 Thread Herbert Xu
On Wed, May 27, 2015 at 12:10:03PM +0200, Stephan Mueller wrote:
 
 -
 -if (ctx->enc) {
 -/* round up output buffer to multiple of block size */
 -outlen = ((used + bs - 1) / bs * bs);
 
 Why wouldn't the round-up for the output be needed any more? If the
 caller provides input data that is not a multiple of the block size and
 the output buffer is also not a multiple of the block size, wouldn't an
 encrypt overstep boundaries?

No, the AEAD algorithm should fail them instead.  We do the same
thing in algif_skcipher, where it's up to the underlying algorithm
to fail requests that do not contain full blocks.
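The policy Herbert describes can be sketched in a few lines of C: instead of user space (or algif) rounding the output up to a block multiple, the algorithm simply rejects lengths that are not whole blocks. This is an illustrative model only, not the kernel code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Model of "let the algorithm fail short requests": a block cipher
 * mode accepts a request only if the text length is a whole number
 * of blocks. (Stream-like AEADs such as GCM have blocksize 1 and
 * accept any length.)
 */
static bool blockcipher_len_ok(size_t cryptlen, size_t blocksize)
{
    return blocksize != 0 && cryptlen % blocksize == 0;
}
```

With this check in the algorithm itself, the rounding logic removed from algif_aead becomes redundant rather than missing.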

Cheers,
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v3 14/16] ARM: marvell/dt: enable crypto on armada-xp-gp

2015-05-27 Thread Gregory CLEMENT
Hi Thomas, Boris,

On 27/05/2015 13:23, Thomas Petazzoni wrote:
 Dear Gregory CLEMENT,
 
 On Wed, 27 May 2015 12:20:49 +0200, Gregory CLEMENT wrote:
 
  But does it really depend on the board itself?
  I see that the first lines are the same in all the dts files; I just remember
  that there was a reason why we could not put it in the dtsi.
 
 Yes, because the DT language doesn't have a += operator, basically.
 
 Some of the MBus ranges are inherently board-specific: when you have a
 NOR flash, you need a specific MBus range for it. And such a MBus range
 is board-specific.
 
 Since it's not possible to do:
 
    ranges = <SoC level ranges>
 
 in .dtsi, and:
 
    ranges += <board level ranges>
 
 in .dts, we simply decided to always put:
 
    ranges = <SoC level and board level ranges>
 
 in the .dts.
 
 It does create some duplication, but that's the best we could do with
 the existing DT infrastructure.

Thanks for the reminder.

So I think we should duplicate the crypto-related part in all the dts
files which use an Armada XP SoC. And we don't have to test it again
once it has been tested on an Armada XP board (which is the case with
the Armada XP GP).

Gregory


 
 Best regards,
 
 Thomas
 


-- 
Gregory Clement, Free Electrons
Kernel, drivers, real-time and embedded Linux
development, consulting, training and support.
http://free-electrons.com


Re: [PATCH v3 02/16] crypto: add a new driver for Marvell's CESA

2015-05-27 Thread Imre Kaloz
On Mon, 25 May 2015 13:17:13 +0200, Boris Brezillon  
boris.brezil...@free-electrons.com wrote:


Sorry, I didn't word it right - the series is missing the crypto nodes
for the Orion, 375 and 38x platforms.


I only add nodes for platforms I have tested on.
If you're able to test on those platforms I'd be happy to include those
changes in the upcoming version.


I would test on 38x but I don't have access to the CPU datasheets to fill
in the missing pieces. Thomas, Gregory, could one of you add those?



Imre


Re: [v3 PATCH 0/8] crypto: Convert all AEAD users to new interface

2015-05-27 Thread Johannes Berg
On Wed, 2015-05-27 at 17:07 +0800, Herbert Xu wrote:
 On Wed, May 27, 2015 at 11:00:40AM +0200, Johannes Berg wrote:
 
  Right. Unfortunately, I can't typically rely on being able to make
  changes to the kernel our driver is built against, and I don't think we
  could do these changes otherwise.
 
 You could provide your own version of crypto_aead_encrypt and
 crypto_aead_decrypt that did the same thing as old_crypt.

Ah, good point, thanks. I'll look into it once these changes hit my
tree :)

johannes



Re: [v3 PATCH 0/8] crypto: Convert all AEAD users to new interface

2015-05-27 Thread Johannes Berg
On Wed, 2015-05-27 at 16:39 +0800, Herbert Xu wrote:
 On Wed, May 27, 2015 at 10:15:50AM +0200, Johannes Berg wrote:
  
  Do you think it'd be feasible at all to somehow override the
  aead_request_set_crypt() and aead_request_set_ad() functions or so to do
  something that works on older kernels (and thus older crypto subsystems)
  or do you think I just shouldn't bother looking at that and just add
  ifdefs to undo your changes in this series on older kernels?
 
 Another option is to backport the new interface to the older kernel.
 
 You only need something like
 
 https://patchwork.kernel.org/patch/6452601/
 
 for the older kernel to support the new interface along with the
 old interface.

Right. Unfortunately, I can't typically rely on being able to make
changes to the kernel our driver is built against, and I don't think we
could do these changes otherwise.

johannes



Re: [PATCH 2/11] crypto: scatterwalk - Add missing sg_init_table to scatterwalk_ffwd

2015-05-27 Thread Stephan Mueller
Am Mittwoch, 27. Mai 2015, 14:37:27 schrieb Herbert Xu:

Hi Herbert,

We need to call sg_init_table as otherwise the first entry may
inadvertently become the last.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au

Although the following remark applies to the previous patch, which added
scatterwalk_ffwd, I would like to ask it here:
---

 crypto/scatterwalk.c |1 +
 1 file changed, 1 insertion(+)

diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c
index 8690324..2ef9cbb 100644
--- a/crypto/scatterwalk.c
+++ b/crypto/scatterwalk.c
@@ -158,6 +158,7 @@ struct scatterlist *scatterwalk_ffwd(struct scatterlist dst[2],
 		src = sg_next(src);

Shouldn't there be a check for src == NULL here? I see scatterwalk_ffwd 
being used in the IV generators, where they simply use the AD length and 
other values. For AF_ALG, those values may be set by user space in a 
deliberately wrong way (e.g. a larger AD length than the provided buffers).
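The hazard Stephan raises can be shown with a small userspace model of a scatterlist fast-forward: if the skip count claimed by user space exceeds the data actually supplied, the walk runs off the end of the list. The stand-in types and the error convention below are hypothetical, for illustration only:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-in for a scatterlist entry: just a byte length. */
struct sg_stub {
    size_t length;
};

/*
 * Bounds-checked fast-forward: skip `skip` bytes into the list and
 * return the index of the entry where the remaining data starts, or
 * -1 if the list ends first (e.g. user space claimed more AD than it
 * actually provided) -- the case where the request must be rejected.
 */
static int ffwd_checked(const struct sg_stub *sg, size_t nents, size_t skip)
{
    size_t i;

    for (i = 0; i < nents; i++) {
        if (skip < sg[i].length)
            return (int)i;
        skip -= sg[i].length;
    }
    return -1; /* ran off the end of the list */
}
```

An unchecked walk would instead dereference sg_next() past the terminator, which is exactly what a deliberately oversized AD length from AF_ALG could trigger.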


Ciao
Stephan


Re: [PATCH] crypto: nx - tweak Makefile dependencies

2015-05-27 Thread Herbert Xu
On Mon, May 25, 2015 at 02:45:16PM +1000, Cyril Bur wrote:
 Selecting CRYPTO_DEV_NX causes a conditional include of nx/Kconfig but
 options within nx/Kconfig do not depend on it. The included options should
 depend on CRYPTO_DEV_NX since currently CRYPTO_DEV_NX cannot be built for
 little endian. While Kconfig appears to understand this convoluted
 dependency situation, it isn't explicitly stated.
 
 This patch addresses the missing dependencies for CRYPTO_DEV_NX_ENCRYPT and
 CRYPTO_DEV_NX_COMPRESS which should depend on CRYPTO_DEV_NX. It also makes
 more sense to put all three options into the nx/Kconfig file and have the
 file included unconditionally.
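The shape of the fix described above can be sketched as a Kconfig fragment. This is a hedged illustration of the dependency structure, not the actual patch (prompt strings and defaults are assumed):

```kconfig
# nx/Kconfig -- included unconditionally; the explicit "depends on"
# lines replace the implicit dependency via conditional inclusion.
config CRYPTO_DEV_NX_ENCRYPT
	tristate "Encryption acceleration support"
	depends on CRYPTO_DEV_NX

config CRYPTO_DEV_NX_COMPRESS
	tristate "Compression acceleration support"
	depends on CRYPTO_DEV_NX
```

Making the dependency explicit means the sub-options disappear from menuconfig whenever CRYPTO_DEV_NX is unavailable, instead of relying on the include being skipped.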
 
 CC: Marcelo Henrique Cerri mhce...@linux.vnet.ibm.com
 CC: Fionnuala Gunter f...@linux.vnet.ibm.com
 CC: linux-crypto@vger.kernel.org
 CC: linuxppc-...@lists.ozlabs.org
 Signed-off-by: Cyril Bur cyril...@gmail.com

Your patch doesn't apply against the cryptodev tree.  Please rebase
it.

Thanks,
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


[cryptodev:master 158/160] crypto/jitterentropy.c:133:2: error: implicit declaration of function 'timekeeping_valid_for_hres'

2015-05-27 Thread kbuild test robot
tree:   git://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git 
master
head:   d725332208ef13241fc435eece790c9d0ea16a4e
commit: bb5530e4082446aac3a3d69780cd4dbfa4520013 [158/160] crypto: 
jitterentropy - add jitterentropy RNG
config: i386-randconfig-r1-0527 (attached as .config)
reproduce:
  git checkout bb5530e4082446aac3a3d69780cd4dbfa4520013
  # save the attached .config to linux build tree
  make ARCH=i386 

All error/warnings:

   crypto/jitterentropy.c: In function 'jent_get_nstime':
 crypto/jitterentropy.c:133:2: error: implicit declaration of function 
 'timekeeping_valid_for_hres' [-Werror=implicit-function-declaration]
  if ((0 == tmp) &&
  ^
   cc1: some warnings being treated as errors

vim +/timekeeping_valid_for_hres +133 crypto/jitterentropy.c

   127   * hoping that there are timers we can work with.
   128   *
   129   * The list of available timers can be obtained from
   130   * 
/sys/devices/system/clocksource/clocksource0/available_clocksource
   131   * and are registered with clocksource_register()
   132   */
   133  if ((0 == tmp) &&
   134  #ifndef MODULE
   135      (0 == timekeeping_valid_for_hres()) &&
   136  #endif

---
0-DAY kernel test infrastructureOpen Source Technology Center
http://lists.01.org/mailman/listinfo/kbuild Intel Corporation

Re: [PATCH 03/13] serial: 8250_dma: Support for deferred probing when requesting DMA channels

2015-05-27 Thread Peter Ujfalusi
On 05/27/2015 01:41 PM, Peter Ujfalusi wrote:
 On 05/26/2015 05:44 PM, Greg Kroah-Hartman wrote:
 On Tue, May 26, 2015 at 04:25:58PM +0300, Peter Ujfalusi wrote:
 Switch to use ma_request_slave_channel_compat_reason() to request the DMA
 channels. In case of error, return the error code we received including
 -EPROBE_DEFER

 I think you typed the function name wrong here :(
 
 Oops. Also in other drivers :(

I mean in other patches ;)

 I will fix up the messages for the v2 series, which is not going to include
 the patch against 8250_dma.
 
 If I understand the 8250_* code correctly, serial8250_request_dma(), which is
 called from serial8250_do_startup(), is not called at module probe time, so it
 cannot be used to handle deferred probing.
 
 Thus this patch can be dropped IMO.
 


-- 
Péter


Re: [PATCH 03/13] serial: 8250_dma: Support for deferred probing when requesting DMA channels

2015-05-27 Thread Peter Ujfalusi
On 05/26/2015 06:08 PM, Tony Lindgren wrote:
 * Peter Ujfalusi peter.ujfal...@ti.com [150526 06:28]:
 Switch to use ma_request_slave_channel_compat_reason() to request the DMA
 channels. In case of error, return the error code we received including
 -EPROBE_DEFER

 Signed-off-by: Peter Ujfalusi peter.ujfal...@ti.com
 CC: Greg Kroah-Hartman gre...@linuxfoundation.org
 ---
  drivers/tty/serial/8250/8250_dma.c | 18 --
  1 file changed, 8 insertions(+), 10 deletions(-)

 diff --git a/drivers/tty/serial/8250/8250_dma.c 
 b/drivers/tty/serial/8250/8250_dma.c
 index 21d01a491405..a617eca4e97d 100644
 --- a/drivers/tty/serial/8250/8250_dma.c
 +++ b/drivers/tty/serial/8250/8250_dma.c
 @@ -182,21 +182,19 @@ int serial8250_request_dma(struct uart_8250_port *p)
  dma_cap_set(DMA_SLAVE, mask);
  
  /* Get a channel for RX */
 -dma->rxchan = dma_request_slave_channel_compat(mask,
 -   dma->fn, dma->rx_param,
 -   p->port.dev, "rx");
 -if (!dma->rxchan)
 -return -ENODEV;
 +dma->rxchan = dma_request_slave_channel_compat_reason(mask, dma->fn,
 +dma->rx_param, p->port.dev, "rx");
 +if (IS_ERR(dma->rxchan))
 +return PTR_ERR(dma->rxchan);
  
  dmaengine_slave_config(dma->rxchan, &dma->rxconf);
  
  /* Get a channel for TX */
 -dma->txchan = dma_request_slave_channel_compat(mask,
 -   dma->fn, dma->tx_param,
 -   p->port.dev, "tx");
 -if (!dma->txchan) {
 +dma->txchan = dma_request_slave_channel_compat_reason(mask, dma->fn,
 +dma->tx_param, p->port.dev, "tx");
 +if (IS_ERR(dma->txchan)) {
  dma_release_channel(dma->rxchan);
 -return -ENODEV;
 +return PTR_ERR(dma->txchan);
  }
  
  dmaengine_slave_config(dma->txchan, &dma->txconf);
 
 In general the drivers need to work just fine also without DMA.
 
 Does this handle the case properly where no DMA channel is configured
 for the driver in the dts file?

The 8250 core will fall back to PIO mode if the DMA channel cannot be
requested. This morning I was looking at the 8250 stack and realized that
serial8250_request_dma() is not called at driver probe time, so this patch
can be ignored and will be dropped from the v2 series.

-- 
Péter


Re: [v3 PATCH 0/8] crypto: Convert all AEAD users to new interface

2015-05-27 Thread Herbert Xu
On Wed, May 27, 2015 at 11:00:40AM +0200, Johannes Berg wrote:

 Right. Unfortunately, I can't typically rely on being able to make
 changes to the kernel our driver is built against, and I don't think we
 could do these changes otherwise.

You could provide your own version of crypto_aead_encrypt and
crypto_aead_decrypt that did the same thing as old_crypt.

Cheers,
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v3 14/16] ARM: marvell/dt: enable crypto on armada-xp-gp

2015-05-27 Thread Gregory CLEMENT
On 26/05/2015 10:59, Boris Brezillon wrote:
 On Mon, 25 May 2015 17:10:37 +0200
 Gregory CLEMENT gregory.clem...@free-electrons.com wrote:
 
 Hi Boris,

 On 22/05/2015 15:34, Boris Brezillon wrote:
 Enable the crypto IP on armada-xp-gp.

 Signed-off-by: Boris Brezillon boris.brezil...@free-electrons.com
 ---
  arch/arm/boot/dts/armada-xp-gp.dts | 4 +++-
  1 file changed, 3 insertions(+), 1 deletion(-)

 diff --git a/arch/arm/boot/dts/armada-xp-gp.dts 
 b/arch/arm/boot/dts/armada-xp-gp.dts
 index 565227e..8a739f4 100644
 --- a/arch/arm/boot/dts/armada-xp-gp.dts
 +++ b/arch/arm/boot/dts/armada-xp-gp.dts
 @@ -94,7 +94,9 @@
  soc {
  ranges = <MBUS_ID(0xf0, 0x01) 0 0 0xf1000000 0x100000
    MBUS_ID(0x01, 0x1d) 0 0 0xfff00000 0x100000
 - MBUS_ID(0x01, 0x2f) 0 0 0xf0000000 0x1000000>;
 + MBUS_ID(0x01, 0x2f) 0 0 0xf0000000 0x1000000
 + MBUS_ID(0x09, 0x09) 0 0 0xf1100000 0x10000
 + MBUS_ID(0x09, 0x05) 0 0 0xf1110000 0x10000>;

 As the crypto engine really depends on the SoC itself and not on the board,
 what about updating the dts of the other boards using an Armada XP?
 
 But that means introducing changes I haven't tested. Are you okay with
 that?

Maybe I missed something, but as the crypto engine is fully integrated in the
SoC, if it works on one board for a given SoC it should work on all the boards
using the same SoC.

The board-specific part seems to be about setting memory addresses on the mbus.

By the way, could you add a comment in front of the new lines? Then the next
time someone copies one of the dts files, they will understand the meaning of
these two lines.

But does it really depend on the board itself?
I see that the first lines are the same in all the dts files; I just remember
that there was a reason why we could not put it in the dtsi. My point here is
that, as the configuration is the same on all the boards, adding the crypto
nodes to all the boards should work without any issue.


Thanks,

Gregory





 
 


-- 
Gregory Clement, Free Electrons
Kernel, drivers, real-time and embedded Linux
development, consulting, training and support.
http://free-electrons.com


Re: [PATCH v3 14/16] ARM: marvell/dt: enable crypto on armada-xp-gp

2015-05-27 Thread Thomas Petazzoni
Dear Gregory CLEMENT,

On Wed, 27 May 2015 12:20:49 +0200, Gregory CLEMENT wrote:

 But does it really depend on the board itself?
 I see that the first lines are the same in all the dts files; I just remember
 that there was a reason why we could not put it in the dtsi.

Yes, because the DT language doesn't have a += operator, basically.

Some of the MBus ranges are inherently board-specific: when you have a
NOR flash, you need a specific MBus range for it. And such a MBus range
is board-specific.

Since it's not possible to do:

ranges = <SoC level ranges>

in .dtsi, and:

ranges += <board level ranges>

in .dts, we simply decided to always put:

ranges = <SoC level and board level ranges>

in the .dts.

It does create some duplication, but that's the best we could do with
the existing DT infrastructure.
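Concretely, the board .dts ends up carrying one merged property. A hypothetical sketch (the MBUS_ID targets and addresses below are made up for illustration, not taken from a real board file):

```dts
soc {
	/* DT has no "+=", so the SoC-level windows and the
	 * board-level windows must be listed together here. */
	ranges = <MBUS_ID(0xf0, 0x01) 0 0 0xf1000000 0x100000   /* SoC internal regs */
		  MBUS_ID(0x01, 0x2f) 0 0 0xf0000000 0x1000000>; /* board NOR flash  */
};
```

Any board that omits the SoC-level entries would lose access to those windows, which is why the duplication is accepted.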

Best regards,

Thomas
-- 
Thomas Petazzoni, CTO, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com


Re: [v3 PATCH 0/8] crypto: Convert all AEAD users to new interface

2015-05-27 Thread Johannes Berg

 The conversion of in-tree users is fairly straightforward.

It pretty much is - but here is a related question (that you totally don't
have to answer if you don't want to think about it).

I'm going to have to (continue) backport(ing) this code to older kernels
for customer support, and I prefer making as few modifications to the
code as possible and putting all the logic into the external backports
project.

Do you think it'd be feasible at all to somehow override the
aead_request_set_crypt() and aead_request_set_ad() functions or so to do
something that works on older kernels (and thus older crypto subsystems)
or do you think I just shouldn't bother looking at that and just add
ifdefs to undo your changes in this series on older kernels?

johannes



Re: [v3 PATCH 0/8] crypto: Convert all AEAD users to new interface

2015-05-27 Thread Herbert Xu
On Wed, May 27, 2015 at 10:15:50AM +0200, Johannes Berg wrote:
 
 Do you think it'd be feasible at all to somehow override the
 aead_request_set_crypt() and aead_request_set_ad() functions or so to do
 something that works on older kernels (and thus older crypto subsystems)
 or do you think I just shouldn't bother looking at that and just add
 ifdefs to undo your changes in this series on older kernels?

Another option is to backport the new interface to the older kernel.

You only need something like

https://patchwork.kernel.org/patch/6452601/

for the older kernel to support the new interface along with the
old interface.

Note that this patch itself won't be good enough because I have since
removed cryptoff.  But it illustrates the amount of code you need.

Cheers,
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


crypto: algif_aead - Switch to new AEAD interface

2015-05-27 Thread Herbert Xu
This patch makes use of the new AEAD interface which uses a single
SG list instead of separate lists for the AD and plain text.

Note that the user-space interface now requires both input and
output to be of the same length, and both must include space for
the AD as well as the authentication tag.
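The user-space contract stated above can be reduced to a little length arithmetic. A sketch of the model in plain C (function name is hypothetical; the expression mirrors the `used -= ...` logic in the patch below, but is not the kernel code itself):

```c
#include <assert.h>
#include <stddef.h>

/*
 * algif_aead buffer model: `used` bytes cover AD + text (+ tag).
 * The length handed to the cipher operation is what remains after
 * removing the AD, and on encryption also the space reserved for
 * the authentication tag.
 */
static size_t algif_aead_text_len(size_t used, size_t assoclen,
                                  size_t authsize, int enc)
{
    return used - assoclen - (enc ? authsize : 0);
}
```

Because input and output are the same length, the output buffer length (`outlen`) is simply `used`, with no block-size rounding needed.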

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au

diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 53702e9..72a94dc 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -72,7 +72,7 @@ static inline bool aead_sufficient_data(struct aead_ctx *ctx)
 {
unsigned as = crypto_aead_authsize(crypto_aead_reqtfm(ctx-aead_req));
 
-   return (ctx-used = (ctx-aead_assoclen + (ctx-enc ? 0 : as)));
+   return ctx-used = ctx-aead_assoclen + as;
 }
 
 static void aead_put_sgl(struct sock *sk)
@@ -353,12 +353,8 @@ static int aead_recvmsg(struct socket *sock, struct msghdr 
*msg, size_t ignored,
struct sock *sk = sock-sk;
struct alg_sock *ask = alg_sk(sk);
struct aead_ctx *ctx = ask-private;
-   unsigned bs = crypto_aead_blocksize(crypto_aead_reqtfm(ctx-aead_req));
unsigned as = crypto_aead_authsize(crypto_aead_reqtfm(ctx-aead_req));
struct aead_sg_list *sgl = ctx-tsgl;
-   struct scatterlist *sg = NULL;
-   struct scatterlist assoc[ALG_MAX_PAGES];
-   size_t assoclen = 0;
unsigned int i = 0;
int err = -EINVAL;
unsigned long used = 0;
@@ -407,23 +403,13 @@ static int aead_recvmsg(struct socket *sock, struct 
msghdr *msg, size_t ignored,
if (!aead_sufficient_data(ctx))
goto unlock;
 
+   outlen = used;
+
/*
 * The cipher operation input data is reduced by the associated data
 * length as this data is processed separately later on.
 */
-   used -= ctx-aead_assoclen;
-
-   if (ctx-enc) {
-   /* round up output buffer to multiple of block size */
-   outlen = ((used + bs - 1) / bs * bs);
-   /* add the size needed for the auth tag to be created */
-   outlen += as;
-   } else {
-   /* output data size is input without the authentication tag */
-   outlen = used - as;
-   /* round up output buffer to multiple of block size */
-   outlen = ((outlen + bs - 1) / bs * bs);
-   }
+   used -= ctx-aead_assoclen + (ctx-enc ? as : 0);
 
/* convert iovecs of output buffers into scatterlists */
while (iov_iter_count(msg-msg_iter)) {
@@ -453,47 +439,11 @@ static int aead_recvmsg(struct socket *sock, struct 
msghdr *msg, size_t ignored,
if (usedpages  outlen)
goto unlock;
 
-   sg_init_table(assoc, ALG_MAX_PAGES);
-   assoclen = ctx->aead_assoclen;
-   /*
-* Split scatterlist into two: first part becomes AD, second part
-* is plaintext / ciphertext. The first part is assigned to assoc
-* scatterlist. When this loop finishes, sg points to the start of the
-* plaintext / ciphertext.
-*/
-   for (i = 0; i < ctx->tsgl.cur; i++) {
-   sg = sgl->sg + i;
-   if (sg->length <= assoclen) {
-   /* AD is larger than one page */
-   sg_set_page(assoc + i, sg_page(sg),
-   sg->length, sg->offset);
-   assoclen -= sg->length;
-   if (i >= ctx->tsgl.cur)
-   goto unlock;
-   } else if (!assoclen) {
-   /* current page is to start of plaintext / ciphertext */
-   if (i)
-   /* AD terminates at page boundary */
-   sg_mark_end(assoc + i - 1);
-   else
-   /* AD size is zero */
-   sg_mark_end(assoc);
-   break;
-   } else {
-   /* AD does not terminate at page boundary */
-   sg_set_page(assoc + i, sg_page(sg),
-   assoclen, sg->offset);
-   sg_mark_end(assoc + i);
-   /* plaintext / ciphertext starts after AD */
-   sg->length -= assoclen;
-   sg->offset += assoclen;
-   break;
-   }
-   }
+   sg_mark_end(sgl->sg + sgl->cur - 1);
 
-   aead_request_set_assoc(&ctx->aead_req, assoc, ctx->aead_assoclen);
-   aead_request_set_crypt(&ctx->aead_req, sg, ctx->rsgl[0].sg, used,
-  ctx->iv);
+   aead_request_set_crypt(&ctx->aead_req, sgl->sg, ctx->rsgl[0].sg,
+  used, ctx->iv);
+   aead_request_set_ad(&ctx->aead_req, ctx->aead_assoclen);
 
err = af_alg_wait_for_completion(ctx->enc ?
 crypto_aead_encrypt(&ctx->aead_req) :
-- 
Email: Herbert 

Re: [PATCH v1 3/3] crypto: ccp - Protect against poorly marked end of sg list

2015-05-27 Thread Herbert Xu
On Wed, May 27, 2015 at 05:43:05PM +0800, Herbert Xu wrote:
 Tom Lendacky thomas.lenda...@amd.com wrote:
  Scatter gather lists can be created with more available entries than are
  actually used (e.g. using sg_init_table() to reserve a specific number
  of sg entries, but in actuality using something less than that based on
  the data length).  The caller sometimes fails to mark the last entry
  with sg_mark_end().  In these cases, sg_nents() will return the original
  size of the sg list as opposed to the actual number of sg entries that
  contain valid data.
  
  On arm64, if the sg_nents() value is used in a call to dma_map_sg() in
  this situation, then it causes a BUG_ON in lib/swiotlb.c because an
  empty sg list entry results in dma_capable() returning false and
  swiotlb trying to create a bounce buffer of size 0. This occurred in
  the userspace crypto interface before being fixed by
  
  0f477b655a52 (crypto: algif - Mark sgl end at the end of data)
  
  Protect against this in the future by counting the number of sg entries
  needed to meet the length requirement and supplying that value to
  dma_map_sg().
 
 Is this needed for any reason other than this bug that's already
 been fixed?

Could this be needed if you have a properly marked SG list say of
100 bytes but len is only 10 bytes?

Cheers,
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
--
To unsubscribe from this list: send the line unsubscribe linux-crypto in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 2/11] crypto: scatterwalk - Add missing sg_init_table to scatterwalk_ffwd

2015-05-27 Thread Herbert Xu
On Wed, May 27, 2015 at 11:00:55AM +0200, Stephan Mueller wrote:

 Shouldn't there be a check for src == NULL here? I see the scatterwalk_ffwd 
 being used in the IV generators where they simply use the AD len and others. 
 For AF_ALG, those values may be set by user space in a deliberately wrong way 
 (e.g. more AD len than provided buffers).

algif_aead should be verifying the user provided input.  AFAICS it
is doing exactly that.  The crashes we had previously were due to
bugs in my algif_aead patch.

Cheers,
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [v3 PATCH 0/8] crypto: Convert all AEAD users to new interface

2015-05-27 Thread Steffen Klassert
On Wed, May 27, 2015 at 05:29:22PM +0800, Herbert Xu wrote:
 On Wed, May 27, 2015 at 11:25:33AM +0200, Steffen Klassert wrote:
  
  Not sure if I missed something in the flood of patches, but if I
  apply your v3 patchset on top of the cryptodev tree, it crashes
  like that during boot:
 
 Sorry, I forgot to mention that v3 depends on the series of fixes
 posted just before it (but only to linux-crypto):
 
 https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg14487.html
 

OK, I'll try with this.

Thanks!



Re: [PATCH v1 0/3] crypto: ccp - CCP driver updates 2015-05-26

2015-05-27 Thread Herbert Xu
On Tue, May 26, 2015 at 01:06:13PM -0500, Tom Lendacky wrote:
 The following patches are included in this driver update series:
 
 - Remove the checking and setting of the device dma_mask field
 - Remove an unused field from a structure to help avoid any confusion
 - Protect against poorly marked end of scatter-gather list
  
 This patch series is based on cryptodev-2.6.

Patches 1 and 2 applied.  I'll wait for your response before
deciding on patch 3.

Thanks,
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 11/13] spi: omap2-mcspi: Support for deferred probing when requesting DMA channels

2015-05-27 Thread Peter Ujfalusi
Mark,

On 05/26/2015 06:27 PM, Mark Brown wrote:
 On Tue, May 26, 2015 at 04:26:06PM +0300, Peter Ujfalusi wrote:
 
 Switch to use ma_request_slave_channel_compat_reason() to request the DMA
 channels. Only fall back to pio mode if the error code returned is not
 -EPROBE_DEFER, otherwise return from the probe with the -EPROBE_DEFER.
 
 I've got two patches from a patch series here with no cover letter...
 I'm guessing there's no interdependencies or anything?  Please always
 ensure that when sending a patch series everyone getting the patches can
 tell what the series as a whole looks like (if there's no dependencies
 consider posting as individual patches rather than a series).

I have put the maintainers of the relevant subsystems as CC in the commit
message and sent the series to all of the mailing lists. This series was
touching 7 subsystems and I thought not spamming every maintainer with all the
mails might be better.

In v2 I will keep this in mind.

The series depends on the first two patches, which add the
dma_request_slave_channel_compat_reason() function.

-- 
Péter


Re: crypto: algif_aead - Switch to new AEAD interface

2015-05-27 Thread Stephan Mueller
Am Mittwoch, 27. Mai 2015, 17:24:41 schrieb Herbert Xu:

Hi Herbert,

-
-  if (ctx->enc) {
-  /* round up output buffer to multiple of block size */
-  outlen = ((used + bs - 1) / bs * bs);

Why wouldn't the round-up for the output be needed any more? If the caller 
provides input data that is not a multiple of the block size and the output 
buffer is also not a multiple of the block size, wouldn't an encrypt overstep 
boundaries?

-  /* add the size needed for the auth tag to be created */
-  outlen += as;
-  } else {
-  /* output data size is input without the authentication tag */
-  outlen = used - as;
-  /* round up output buffer to multiple of block size */
-  outlen = ((outlen + bs - 1) / bs * bs);

Same here.

-  }
+  used -= ctx->aead_assoclen + (ctx->enc ? as : 0);



Ciao
Stephan


Re: [PATCH 03/13] serial: 8250_dma: Support for deferred probing when requesting DMA channels

2015-05-27 Thread Peter Ujfalusi
On 05/26/2015 05:44 PM, Greg Kroah-Hartman wrote:
 On Tue, May 26, 2015 at 04:25:58PM +0300, Peter Ujfalusi wrote:
 Switch to use ma_request_slave_channel_compat_reason() to request the DMA
 channels. In case of error, return the error code we received including
 -EPROBE_DEFER
 
 I think you typed the function name wrong here :(

Oops. Also in other drivers :(
I will fix up the messages for the v2 series, which will not going to include
the patch against 8250_dma.

If I understand things right, in the 8250_* code serial8250_request_dma(),
which is called from serial8250_do_startup(), is not called at module probe
time, so it can not be used to handle deferred probing.

Thus this patch can be dropped IMO.

-- 
Péter


Re: [PATCH 2/11] crypto: scatterwalk - Add missing sg_init_table to scatterwalk_ffwd

2015-05-27 Thread Stephan Mueller
Am Mittwoch, 27. Mai 2015, 17:08:55 schrieb Herbert Xu:

Hi Herbert,

On Wed, May 27, 2015 at 11:00:55AM +0200, Stephan Mueller wrote:
 Shouldn't there be a check for src == NULL here? I see the scatterwalk_ffwd
 being used in the IV generators where they simply use the AD len and
 others.
 For AF_ALG, those values may be set by user space in a deliberately wrong
 way (e.g. more AD len than provided buffers).

algif_aead should be verifying the user provided input.  AFAICS it
is doing exactly that.  The crashes we had previously were due to
bugs in my algif_aead patch.

To be precise, the concerns I currently have are as follows. But I will test it 
later and report back:

The seqiv.c uses the following call:

scatterwalk_ffwd(dstbuf, req->dst,
 req->assoclen + ivsize),
scatterwalk_ffwd(srcbuf, req->src,
 req->assoclen + ivsize),

That, together with my other tests for seqniv(rfc4106()), indicates that 
the input SGL must contain AD || IV || PT.

algif_aead, however, only slurps in AD || PT via the sendmsg call and 
processes that as documented in the recvmsg call. So, the IV part is missing 
from the picture as the IV is set via setsockopt.

So, the aforementioned call unconditionally advances the SGL by AD + 8 bytes 
where I am not sure that the 8 bytes are always accounted for by algif_aead.


Ciao
Stephan


Re: [PATCH 2/11] crypto: scatterwalk - Add missing sg_init_table to scatterwalk_ffwd

2015-05-27 Thread Herbert Xu
On Wed, May 27, 2015 at 01:24:48PM +0200, Stephan Mueller wrote:
 
 To be precise, the concerns I currently have are as follows. But I will test 
 it 
 later and report back:
 
 The seqiv.c uses the following call:
 
 scatterwalk_ffwd(dstbuf, req->dst,
  req->assoclen + ivsize),
 scatterwalk_ffwd(srcbuf, req->src,
  req->assoclen + ivsize),
 
 That, together with my other tests for seqniv(rfc4106()), indicates that 
 the input SGL must contain AD || IV || PT.

seqniv verifies that the IV is present.

Cheers,
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


[PATCH] crypto: jitterentropy - remove timekeeping_valid_for_hres

2015-05-27 Thread Stephan Mueller
The patch removes the use of timekeeping_valid_for_hres which is now
marked as internal for the time keeping subsystem. The jitterentropy
does not really require this verification as a coarse timer (when
random_get_entropy is absent) is discovered by the initialization test
of jent_entropy_init, which would cause the jitter rng to not load in
that case.

Reported-by: kbuild test robot fengguang...@intel.com
Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 crypto/jitterentropy.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/crypto/jitterentropy.c b/crypto/jitterentropy.c
index 1ebe58a..a60147e 100644
--- a/crypto/jitterentropy.c
+++ b/crypto/jitterentropy.c
@@ -131,9 +131,6 @@ static inline void jent_get_nstime(__u64 *out)
 * and are registered with clocksource_register()
 */
if ((0 == tmp) &&
-#ifndef MODULE
-  (0 == timekeeping_valid_for_hres()) &&
-#endif
   (0 == __getnstimeofday(&ts))) {
tmp = ts.tv_sec;
tmp = tmp << 32;
-- 
2.1.0




Re: [PATCH 0/8] ARM: mvebu: Add support for RAID6 PQ offloading

2015-05-27 Thread Boaz Harrosh
On 05/26/2015 07:31 PM, Dan Williams wrote:
 [ adding Boaz as this discussion has implications for ore_raid ]
 

 You're not talking about deprecating it, you're talking about removing
 it entirely.
 
 True, and adding more users makes that removal more difficult.  I'm
 willing to help out on the design and review for this work, I just
 can't commit to doing the implementation and testing.
 


Hi

So for ore_raid, Yes it uses both xor and pq functions, and I expect
that to work also after the API changes.

That said, I never really cared for the HW offload engines of these
APIs. Actually I never met any. On a modern machine I always got
the DCE/MMX kick in or one of the other CPU variants. With preliminary
testing of XOR I got almost memory speed for xor (read n pages
+ write one). So with multi-core CPUs I fail to see how HW does
better, memory caching and all. The PQ was not that far behind.

All I need is an abstract API that gives me the best implementation
on any ARCH / configuration. Actually the async_tx API is a pain
and a sync API would make things simple. I do not use the concurrent
async submit-and-wait-later at all. I submit, then wait.

So anything you change this to is good with me, as long as you keep
the wonderful dce implementation; just make sure the code keeps running
after the new API.

(And the common structures between XOR and PQ was also nice, but I
 can also use a union, its always either or in ore_raid)

Once you make API changes and modify code, CC me and I'll run tests.

good luck
Boaz



[ipsec PATCH 0/3] Preserve skb-mark through VTI tunnels

2015-05-27 Thread Alexander Duyck
These patches are meant to address the fact that VTI tunnels are
currently overwriting the skb->mark value.  I am generally happy with the
first two patches; however, the third patch still modifies the skb->mark,
though it undoes the change after the fact.

The main problem I am trying to address is the fact that currently if I use
a v6 over v6 VTI tunnel I cannot receive any traffic on the interface, as
the skb->mark is bleeding through and causing the traffic to be dropped.

---

Alexander Duyck (3):
  ip_vti/ip6_vti: Do not touch skb-mark on xmit
  xfrm: Override skb-mark with tunnel-parm.i_key in xfrm_input
  ip_vti/ip6_vti: Preserve skb-mark after rcv_cb call


 net/ipv4/ip_vti.c |   14 ++
 net/ipv6/ip6_vti.c|   13 ++---
 net/xfrm/xfrm_input.c |   17 -
 3 files changed, 36 insertions(+), 8 deletions(-)

--


Re: [PATCH v1 3/3] crypto: ccp - Protect against poorly marked end of sg list

2015-05-27 Thread Tom Lendacky

On 05/27/2015 04:43 AM, Herbert Xu wrote:

Tom Lendacky thomas.lenda...@amd.com wrote:

Scatter gather lists can be created with more available entries than are
actually used (e.g. using sg_init_table() to reserve a specific number
of sg entries, but in actuality using something less than that based on
the data length).  The caller sometimes fails to mark the last entry
with sg_mark_end().  In these cases, sg_nents() will return the original
size of the sg list as opposed to the actual number of sg entries that
contain valid data.

On arm64, if the sg_nents() value is used in a call to dma_map_sg() in
this situation, then it causes a BUG_ON in lib/swiotlb.c because an
empty sg list entry results in dma_capable() returning false and
swiotlb trying to create a bounce buffer of size 0. This occurred in
the userspace crypto interface before being fixed by

0f477b655a52 (crypto: algif - Mark sgl end at the end of data)

Protect against this in the future by counting the number of sg entries
needed to meet the length requirement and supplying that value to
dma_map_sg().


Is this needed for any reason other than this bug that's already
been fixed?



I added this just to protect against any other users of the API that
may do something similar in the future (or if the user should re-use
an sg list and leave leftover sg entries in it). Since software
crypto implementations walk the sg list based on length and do not use
DMA mappings it is possible for this bug to pop up again in another
location since it is likely that the testing won't be done with
hardware crypto devices.


The reason I'm asking is because while this patch fixes your driver
everybody else will still crash and burn should something like this
happen again.


A number of other drivers already have similar sg-count functions in
them.

I'm ok if you decide that this patch shouldn't be applied. It's just
that this is typically an issue that won't be found until after the
release of a kernel rather than during the development stages.

Thanks,
Tom



Cheers,




Re: [PATCH v1 3/3] crypto: ccp - Protect against poorly marked end of sg list

2015-05-27 Thread Tom Lendacky

On 05/27/2015 04:45 AM, Herbert Xu wrote:

On Wed, May 27, 2015 at 05:43:05PM +0800, Herbert Xu wrote:

Tom Lendacky thomas.lenda...@amd.com wrote:

Scatter gather lists can be created with more available entries than are
actually used (e.g. using sg_init_table() to reserve a specific number
of sg entries, but in actuality using something less than that based on
the data length).  The caller sometimes fails to mark the last entry
with sg_mark_end().  In these cases, sg_nents() will return the original
size of the sg list as opposed to the actual number of sg entries that
contain valid data.

On arm64, if the sg_nents() value is used in a call to dma_map_sg() in
this situation, then it causes a BUG_ON in lib/swiotlb.c because an
empty sg list entry results in dma_capable() returning false and
swiotlb trying to create a bounce buffer of size 0. This occurred in
the userspace crypto interface before being fixed by

0f477b655a52 (crypto: algif - Mark sgl end at the end of data)

Protect against this in the future by counting the number of sg entries
needed to meet the length requirement and supplying that value to
dma_map_sg().


Is this needed for any reason other than this bug that's already
been fixed?


Could this be needed if you have a properly marked SG list say of
100 bytes but len is only 10 bytes?


I don't think that situation matters because the DMA mapping should
succeed just fine at 100 bytes even if only needing/using 10 bytes.

Thanks,
Tom



Cheers,




[ipsec PATCH 2/3] xfrm: Override skb-mark with tunnel-parm.i_key in xfrm_input

2015-05-27 Thread Alexander Duyck
This change makes it so that if a tunnel is defined we just use the mark
from the tunnel instead of the mark from the skb header.  By doing this we
can avoid the need to set skb->mark inside of the tunnel receive functions.

Signed-off-by: Alexander Duyck alexander.h.du...@redhat.com
---
 net/xfrm/xfrm_input.c |   17 -
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
index 526c4feb3b50..b58286ecd156 100644
--- a/net/xfrm/xfrm_input.c
+++ b/net/xfrm/xfrm_input.c
@@ -13,6 +13,8 @@
 #include <net/dst.h>
 #include <net/ip.h>
 #include <net/xfrm.h>
+#include <net/ip_tunnels.h>
+#include <net/ip6_tunnel.h>
 
 static struct kmem_cache *secpath_cachep __read_mostly;
 
@@ -186,6 +188,7 @@ int xfrm_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
struct xfrm_state *x = NULL;
xfrm_address_t *daddr;
struct xfrm_mode *inner_mode;
+   u32 mark = skb->mark;
unsigned int family;
int decaps = 0;
int async = 0;
@@ -203,6 +206,18 @@ int xfrm_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
   XFRM_SPI_SKB_CB(skb)->daddroff);
family = XFRM_SPI_SKB_CB(skb)->family;
 
+   /* if tunnel is present override skb->mark value with tunnel i_key */
+   if (XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip4) {
+   switch (family) {
+   case AF_INET:
+   mark = be32_to_cpu(XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip4->parms.i_key);
+   break;
+   case AF_INET6:
+   mark = be32_to_cpu(XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip6->parms.i_key);
+   break;
+   }
+   }
+
/* Allocate new secpath or COW existing one. */
if (!skb->sp || atomic_read(&skb->sp->refcnt) != 1) {
struct sec_path *sp;
@@ -229,7 +244,7 @@ int xfrm_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
goto drop;
}
 
-   x = xfrm_state_lookup(net, skb->mark, daddr, spi, nexthdr, family);
+   x = xfrm_state_lookup(net, mark, daddr, spi, nexthdr, family);
if (x == NULL) {
XFRM_INC_STATS(net, LINUX_MIB_XFRMINNOSTATES);
xfrm_audit_state_notfound(skb, family, spi, seq);



[ipsec PATCH 3/3] ip_vti/ip6_vti: Preserve skb-mark after rcv_cb call

2015-05-27 Thread Alexander Duyck
The vti6_rcv_cb and vti_rcv_cb calls were leaving the skb->mark modified
after completing the function.  This resulted in the original skb->mark
value being lost.  Since we only need skb->mark to be set for
xfrm_policy_check we can pull the assignment into the rcv_cb calls and then
just restore the original mark after xfrm_policy_check has been completed.

Signed-off-by: Alexander Duyck alexander.h.du...@redhat.com
---
 net/ipv4/ip_vti.c  |9 +++--
 net/ipv6/ip6_vti.c |9 +++--
 2 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/net/ipv4/ip_vti.c b/net/ipv4/ip_vti.c
index 4c318e1c13c8..0c152087ca15 100644
--- a/net/ipv4/ip_vti.c
+++ b/net/ipv4/ip_vti.c
@@ -65,7 +65,6 @@ static int vti_input(struct sk_buff *skb, int nexthdr, __be32 spi,
goto drop;
 
XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip4 = tunnel;
-   skb->mark = be32_to_cpu(tunnel->parms.i_key);
 
return xfrm_input(skb, nexthdr, spi, encap_type);
}
@@ -91,6 +90,8 @@ static int vti_rcv_cb(struct sk_buff *skb, int err)
struct pcpu_sw_netstats *tstats;
struct xfrm_state *x;
struct ip_tunnel *tunnel = XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip4;
+   u32 orig_mark = skb->mark;
+   int ret;
 
if (!tunnel)
return 1;
@@ -107,7 +108,11 @@ static int vti_rcv_cb(struct sk_buff *skb, int err)
x = xfrm_input_state(skb);
family = x->inner_mode->afinfo->family;
 
-   if (!xfrm_policy_check(NULL, XFRM_POLICY_IN, skb, family))
+   skb->mark = be32_to_cpu(tunnel->parms.i_key);
+   ret = xfrm_policy_check(NULL, XFRM_POLICY_IN, skb, family);
+   skb->mark = orig_mark;
+
+   if (!ret)
return -EPERM;
 
skb_scrub_packet(skb, !net_eq(tunnel->net, dev_net(skb->dev)));
diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c
index 104de4da3ff3..ff3bd863fa03 100644
--- a/net/ipv6/ip6_vti.c
+++ b/net/ipv6/ip6_vti.c
@@ -322,7 +322,6 @@ static int vti6_rcv(struct sk_buff *skb)
}
 
XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip6 = t;
-   skb->mark = be32_to_cpu(t->parms.i_key);
 
rcu_read_unlock();
 
@@ -342,6 +341,8 @@ static int vti6_rcv_cb(struct sk_buff *skb, int err)
struct pcpu_sw_netstats *tstats;
struct xfrm_state *x;
struct ip6_tnl *t = XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip6;
+   u32 orig_mark = skb->mark;
+   int ret;
 
if (!t)
return 1;
@@ -358,7 +359,11 @@ static int vti6_rcv_cb(struct sk_buff *skb, int err)
x = xfrm_input_state(skb);
family = x->inner_mode->afinfo->family;
 
-   if (!xfrm_policy_check(NULL, XFRM_POLICY_IN, skb, family))
+   skb->mark = be32_to_cpu(t->parms.i_key);
+   ret = xfrm_policy_check(NULL, XFRM_POLICY_IN, skb, family);
+   skb->mark = orig_mark;
+
+   if (!ret)
return -EPERM;
 
skb_scrub_packet(skb, !net_eq(t->net, dev_net(skb->dev)));



[ipsec PATCH 1/3] ip_vti/ip6_vti: Do not touch skb-mark on xmit

2015-05-27 Thread Alexander Duyck
Instead of modifying skb->mark we can simply modify the flowi_mark that is
generated as a result of the xfrm_decode_session.  By doing this we don't
need to actually touch the skb->mark and it can be preserved as it passes
out through the tunnel.

Signed-off-by: Alexander Duyck alexander.h.du...@redhat.com
---
 net/ipv4/ip_vti.c  |5 +++--
 net/ipv6/ip6_vti.c |4 +++-
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/net/ipv4/ip_vti.c b/net/ipv4/ip_vti.c
index 9f7269f3c54a..4c318e1c13c8 100644
--- a/net/ipv4/ip_vti.c
+++ b/net/ipv4/ip_vti.c
@@ -216,8 +216,6 @@ static netdev_tx_t vti_tunnel_xmit(struct sk_buff *skb, struct net_device *dev)
 
memset(&fl, 0, sizeof(fl));
 
-   skb->mark = be32_to_cpu(tunnel->parms.o_key);
-
switch (skb->protocol) {
case htons(ETH_P_IP):
xfrm_decode_session(skb, &fl, AF_INET);
@@ -233,6 +231,9 @@ static netdev_tx_t vti_tunnel_xmit(struct sk_buff *skb, struct net_device *dev)
return NETDEV_TX_OK;
}
 
+   /* override mark with tunnel output key */
+   fl.flowi_mark = be32_to_cpu(tunnel->parms.o_key);
+
return vti_xmit(skb, dev, &fl);
 }
 
diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c
index ed9d681207fa..104de4da3ff3 100644
--- a/net/ipv6/ip6_vti.c
+++ b/net/ipv6/ip6_vti.c
@@ -495,7 +495,6 @@ vti6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
int ret;
 
memset(&fl, 0, sizeof(fl));
-   skb->mark = be32_to_cpu(t->parms.o_key);
 
switch (skb->protocol) {
case htons(ETH_P_IPV6):
@@ -516,6 +515,9 @@ vti6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
goto tx_err;
}
 
+   /* override mark with tunnel output key */
+   fl.flowi_mark = be32_to_cpu(t->parms.o_key);
+
ret = vti6_xmit(skb, dev, &fl);
if (ret < 0)
goto tx_err;



Re: [PATCH 11/13] spi: omap2-mcspi: Support for deferred probing when requesting DMA channels

2015-05-27 Thread Mark Brown
On Tue, May 26, 2015 at 04:26:06PM +0300, Peter Ujfalusi wrote:
 Switch to use ma_request_slave_channel_compat_reason() to request the DMA
 channels. Only fall back to pio mode if the error code returned is not
 -EPROBE_DEFER, otherwise return from the probe with the -EPROBE_DEFER.

Acked-by: Mark Brown broo...@kernel.org




Re: [PATCH 13/13] ASoC: omap-pcm: Switch to use dma_request_slave_channel_compat_reason()

2015-05-27 Thread Mark Brown
On Tue, May 26, 2015 at 04:26:08PM +0300, Peter Ujfalusi wrote:
 dmaengine provides a wrapper function to handle DT and non DT boots when
 requesting DMA channel. Use that instead of checking for of_node in the
 platform driver.

Acked-by: Mark Brown broo...@kernel.org




Re: [PATCH 11/13] spi: omap2-mcspi: Support for deferred probing when requesting DMA channels

2015-05-27 Thread Mark Brown
On Wed, May 27, 2015 at 02:15:12PM +0300, Peter Ujfalusi wrote:

 I have put the maintainers of the relevant subsystems as CC in the commit
 message and sent the series to all of the mailing lists. This series was
 touching 7 subsystems and I thought not spamming every maintainer with all the
 mails might be better.

You need to at least include people on the cover letter, otherwise
they'll have no idea what's going on.




[PATCH] xfrm6: Do not use xfrm_local_error for path MTU issues in tunnels

2015-05-27 Thread Alexander Duyck
This change makes it so that we use icmpv6_send to report PMTU issues back
into tunnels in the case that the resulting packet is larger than the MTU
of the outgoing interface.  Previously xfrm_local_error was being used in
this case; however, this was resulting in no changes, I suspect due to the
fact that the tunnel itself was being kept out of the loop.

This patch fixes PMTU problems seen on ip6_vti tunnels and is based on the
behavior seen if the socket was orphaned.  Instead of requiring the socket
to be orphaned this patch simply defaults to using icmpv6_send in the case
that the frame came though a tunnel.

Signed-off-by: Alexander Duyck alexander.h.du...@redhat.com
---
 net/ipv6/xfrm6_output.c |   18 --
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/net/ipv6/xfrm6_output.c b/net/ipv6/xfrm6_output.c
index 09c76a7b474d..6f9b514d0e38 100644
--- a/net/ipv6/xfrm6_output.c
+++ b/net/ipv6/xfrm6_output.c
@@ -72,6 +72,7 @@ static int xfrm6_tunnel_check_size(struct sk_buff *skb)
 {
int mtu, ret = 0;
struct dst_entry *dst = skb_dst(skb);
struct xfrm_state *x = dst->xfrm;
 
mtu = dst_mtu(dst);
if (mtu < IPV6_MIN_MTU)
@@ -82,7 +83,7 @@ static int xfrm6_tunnel_check_size(struct sk_buff *skb)
 
if (xfrm6_local_dontfrag(skb))
xfrm6_local_rxpmtu(skb, mtu);
-   else if (skb->sk)
+   else if (skb->sk && x->props.mode != XFRM_MODE_TUNNEL)
xfrm_local_error(skb, mtu);
else
icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
@@ -149,11 +150,16 @@ static int __xfrm6_output(struct sock *sk, struct sk_buff *skb)
else
mtu = dst_mtu(skb_dst(skb));
 
-   if (skb->len > mtu && xfrm6_local_dontfrag(skb)) {
-   xfrm6_local_rxpmtu(skb, mtu);
-   return -EMSGSIZE;
-   } else if (!skb->ignore_df && skb->len > mtu && skb->sk) {
-   xfrm_local_error(skb, mtu);
+   if (!skb->ignore_df && skb->len > mtu) {
+   skb->dev = dst->dev;
+
+   if (xfrm6_local_dontfrag(skb))
+   xfrm6_local_rxpmtu(skb, mtu);
+   else if (skb->sk && x->props.mode != XFRM_MODE_TUNNEL)
+   xfrm_local_error(skb, mtu);
+   else
+   icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+
return -EMSGSIZE;
}
 



[PATCH 0/11] crypto: aead - Tweaks/fixes to new AEAD interface

2015-05-27 Thread Herbert Xu
Hi:

Previously the AD was required to exist in both the source and
destination buffers.  This creates a rather confusing situation
where the destination served as both input as well as output.

This series rectifies this by allowing the destination to contain
the AD (e.g., it always does for in-place encryption) but not
require it.  Those AEAD algorithms that need the AD to be in
the destination buffer will do their own copying.

This series also merges some common code between echainiv and
seqiv.  In particular, the entire compatibility layer is now
shared.

Finally a number of bugs have been quashed.

Cheers,
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


[PATCH 2/11] crypto: scatterwalk - Add missing sg_init_table to scatterwalk_ffwd

2015-05-27 Thread Herbert Xu
We need to call sg_init_table as otherwise the first entry may
inadvertently become the last.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---

 crypto/scatterwalk.c |1 +
 1 file changed, 1 insertion(+)

diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c
index 8690324..2ef9cbb 100644
--- a/crypto/scatterwalk.c
+++ b/crypto/scatterwalk.c
@@ -158,6 +158,7 @@ struct scatterlist *scatterwalk_ffwd(struct scatterlist dst[2],
src = sg_next(src);
}
 
+   sg_init_table(dst, 2);
sg_set_page(dst, sg_page(src), src-length - len, src-offset + len);
scatterwalk_crypto_chain(dst, sg_next(src), 0, 2);
 


[PATCH 9/11] crypto: seqiv - Use common IV generation code

2015-05-27 Thread Herbert Xu
This patch makes use of the new common IV generation code.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---

 crypto/seqiv.c |   92 ++---
 1 file changed, 36 insertions(+), 56 deletions(-)

diff --git a/crypto/seqiv.c b/crypto/seqiv.c
index b55c685..9c4490b 100644
--- a/crypto/seqiv.c
+++ b/crypto/seqiv.c
@@ -13,7 +13,7 @@
  *
  */
 
-#include <crypto/internal/aead.h>
+#include <crypto/internal/geniv.h>
 #include <crypto/internal/skcipher.h>
 #include <crypto/null.h>
 #include <crypto/rng.h>
@@ -37,30 +37,14 @@ struct seqiv_ctx {
 };
 
 struct seqiv_aead_ctx {
-   struct crypto_aead *child;
-   spinlock_t lock;
+   /* aead_geniv_ctx must be the first element */
+   struct aead_geniv_ctx geniv;
struct crypto_blkcipher *null;
u8 salt[] __attribute__ ((aligned(__alignof__(u32))));
 };
 
 static void seqiv_free(struct crypto_instance *inst);
 
-static int seqiv_aead_setkey(struct crypto_aead *tfm,
-const u8 *key, unsigned int keylen)
-{
-   struct seqiv_aead_ctx *ctx = crypto_aead_ctx(tfm);
-
-   return crypto_aead_setkey(ctx->child, key, keylen);
-}
-
-static int seqiv_aead_setauthsize(struct crypto_aead *tfm,
- unsigned int authsize)
-{
-   struct seqiv_aead_ctx *ctx = crypto_aead_ctx(tfm);
-
-   return crypto_aead_setauthsize(ctx->child, authsize);
-}
-
 static void seqiv_complete2(struct skcipher_givcrypt_request *req, int err)
 {
struct ablkcipher_request *subreq = skcipher_givcrypt_reqctx(req);
@@ -289,7 +273,7 @@ static int seqiv_aead_givencrypt(struct aead_givcrypt_request *req)
return err;
return err;
 }
 
-static int seqiv_aead_encrypt_compat(struct aead_request *req)
+static int seqniv_aead_encrypt(struct aead_request *req)
 {
struct crypto_aead *geniv = crypto_aead_reqtfm(req);
struct seqiv_aead_ctx *ctx = crypto_aead_ctx(geniv);
@@ -309,7 +293,7 @@ static int seqiv_aead_encrypt_compat(struct aead_request *req)
if (req->assoclen > 12)
return -EINVAL;
 
-   aead_request_set_tfm(subreq, ctx->child);
+   aead_request_set_tfm(subreq, ctx->geniv.child);
 
compl = seqniv_aead_encrypt_complete;
data = req;
@@ -359,7 +343,7 @@ static int seqiv_aead_encrypt(struct aead_request *req)
if (req->cryptlen < ivsize)
return -EINVAL;
 
-   aead_request_set_tfm(subreq, ctx->child);
+   aead_request_set_tfm(subreq, ctx->geniv.child);
 
compl = req->base.complete;
data = req->base.data;
@@ -403,7 +387,7 @@ static int seqiv_aead_encrypt(struct aead_request *req)
return err;
 }
 
-static int seqiv_aead_decrypt_compat(struct aead_request *req)
+static int seqniv_aead_decrypt(struct aead_request *req)
 {
struct crypto_aead *geniv = crypto_aead_reqtfm(req);
struct seqiv_aead_ctx *ctx = crypto_aead_ctx(geniv);
@@ -419,7 +403,7 @@ static int seqiv_aead_decrypt_compat(struct aead_request *req)
if (req->cryptlen < ivsize + crypto_aead_authsize(geniv))
return -EINVAL;
 
-   aead_request_set_tfm(subreq, ctx->child);
+   aead_request_set_tfm(subreq, ctx->geniv.child);
 
compl = req->base.complete;
data = req->base.data;
@@ -472,7 +456,7 @@ static int seqiv_aead_decrypt(struct aead_request *req)
if (req->cryptlen < ivsize + crypto_aead_authsize(geniv))
return -EINVAL;
 
-   aead_request_set_tfm(subreq, ctx->child);
+   aead_request_set_tfm(subreq, ctx->geniv.child);
 
compl = req->base.complete;
data = req->base.data;
@@ -536,27 +520,27 @@ unlock:
return seqiv_aead_givencrypt(req);
 }
 
-static int seqiv_aead_encrypt_compat_first(struct aead_request *req)
+static int seqniv_aead_encrypt_first(struct aead_request *req)
 {
struct crypto_aead *geniv = crypto_aead_reqtfm(req);
struct seqiv_aead_ctx *ctx = crypto_aead_ctx(geniv);
int err = 0;
 
-   spin_lock_bh(&ctx->lock);
-   if (geniv->encrypt != seqiv_aead_encrypt_compat_first)
+   spin_lock_bh(&ctx->geniv.lock);
+   if (geniv->encrypt != seqniv_aead_encrypt_first)
goto unlock;
 
-   geniv->encrypt = seqiv_aead_encrypt_compat;
+   geniv->encrypt = seqniv_aead_encrypt;
err = crypto_rng_get_bytes(crypto_default_rng, ctx->salt,
   crypto_aead_ivsize(geniv));
 
 unlock:
-   spin_unlock_bh(&ctx->lock);
+   spin_unlock_bh(&ctx->geniv.lock);
 
if (err)
return err;
 
-   return seqiv_aead_encrypt_compat(req);
+   return seqniv_aead_encrypt(req);
 }
 
 static int seqiv_aead_encrypt_first(struct aead_request *req)
@@ -565,7 +549,7 @@ static int seqiv_aead_encrypt_first(struct aead_request *req)
struct seqiv_aead_ctx *ctx = crypto_aead_ctx(geniv);
int err = 0;
 
-   spin_lock_bh(&ctx->lock);
+   spin_lock_bh(&ctx->geniv.lock);
if (geniv->encrypt != 

[PATCH 7/11] crypto: echainiv - Fix IV size in context size calculation

2015-05-27 Thread Herbert Xu
This patch fixes a bug in the context size calculation where we
were still referring to the old cra_aead.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---

 crypto/echainiv.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/crypto/echainiv.c b/crypto/echainiv.c
index 0f79fc6..62a817f 100644
--- a/crypto/echainiv.c
+++ b/crypto/echainiv.c
@@ -280,7 +280,7 @@ static int echainiv_aead_create(struct crypto_template *tmpl,
 
inst->alg.base.cra_alignmask |= __alignof__(u32) - 1;
inst->alg.base.cra_ctxsize = sizeof(struct echainiv_ctx);
-   inst->alg.base.cra_ctxsize += inst->alg.base.cra_aead.ivsize;
+   inst->alg.base.cra_ctxsize += inst->alg.ivsize;
 
 done:
err = aead_register_instance(tmpl, inst);


[PATCH 6/11] crypto: echainiv - Use common IV generation code

2015-05-27 Thread Herbert Xu
This patch makes use of the new common IV generation code.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---

 crypto/echainiv.c |  230 --
 1 file changed, 18 insertions(+), 212 deletions(-)

diff --git a/crypto/echainiv.c b/crypto/echainiv.c
index 02d0543..0f79fc6 100644
--- a/crypto/echainiv.c
+++ b/crypto/echainiv.c
@@ -18,7 +18,7 @@
  *
  */
 
-#include <crypto/internal/aead.h>
+#include <crypto/internal/geniv.h>
 #include <crypto/null.h>
 #include <crypto/rng.h>
 #include <crypto/scatterwalk.h>
@@ -33,39 +33,15 @@
 
 #define MAX_IV_SIZE 16
 
-struct echainiv_request_ctx {
-   struct scatterlist src[2];
-   struct scatterlist dst[2];
-   struct scatterlist ivbuf[2];
-   struct scatterlist *ivsg;
-   struct aead_givcrypt_request subreq;
-};
-
 struct echainiv_ctx {
-   struct crypto_aead *child;
-   spinlock_t lock;
+   /* aead_geniv_ctx must be the first element */
+   struct aead_geniv_ctx geniv;
struct crypto_blkcipher *null;
u8 salt[] __attribute__ ((aligned(__alignof__(u32))));
 };
 
 static DEFINE_PER_CPU(u32 [MAX_IV_SIZE / sizeof(u32)], echainiv_iv);
 
-static int echainiv_setkey(struct crypto_aead *tfm,
- const u8 *key, unsigned int keylen)
-{
-   struct echainiv_ctx *ctx = crypto_aead_ctx(tfm);
-
-   return crypto_aead_setkey(ctx->child, key, keylen);
-}
-
-static int echainiv_setauthsize(struct crypto_aead *tfm,
- unsigned int authsize)
-{
-   struct echainiv_ctx *ctx = crypto_aead_ctx(tfm);
-
-   return crypto_aead_setauthsize(ctx->child, authsize);
-}
-
 /* We don't care if we get preempted and read/write IVs from the next CPU. */
 static void echainiv_read_iv(u8 *dst, unsigned size)
 {
@@ -90,36 +66,6 @@ static void echainiv_write_iv(const u8 *src, unsigned size)
}
 }
 
-static void echainiv_encrypt_compat_complete2(struct aead_request *req,
-int err)
-{
-   struct echainiv_request_ctx *rctx = aead_request_ctx(req);
-   struct aead_givcrypt_request *subreq = &rctx->subreq;
-   struct crypto_aead *geniv;
-
-   if (err == -EINPROGRESS)
-   return;
-
-   if (err)
-   goto out;
-
-   geniv = crypto_aead_reqtfm(req);
-   scatterwalk_map_and_copy(subreq->giv, rctx->ivsg, 0,
-crypto_aead_ivsize(geniv), 1);
-
-out:
-   kzfree(subreq->giv);
-}
-
-static void echainiv_encrypt_compat_complete(
-   struct crypto_async_request *base, int err)
-{
-   struct aead_request *req = base->data;
-
-   echainiv_encrypt_compat_complete2(req, err);
-   aead_request_complete(req, err);
-}
-
 static void echainiv_encrypt_complete2(struct aead_request *req, int err)
 {
struct aead_request *subreq = aead_request_ctx(req);
@@ -154,59 +100,6 @@ static void echainiv_encrypt_complete(struct crypto_async_request *base,
aead_request_complete(req, err);
 }
 
-static int echainiv_encrypt_compat(struct aead_request *req)
-{
-   struct crypto_aead *geniv = crypto_aead_reqtfm(req);
-   struct echainiv_ctx *ctx = crypto_aead_ctx(geniv);
-   struct echainiv_request_ctx *rctx = aead_request_ctx(req);
-   struct aead_givcrypt_request *subreq = &rctx->subreq;
-   unsigned int ivsize = crypto_aead_ivsize(geniv);
-   crypto_completion_t compl;
-   void *data;
-   u8 *info;
-   __be64 seq;
-   int err;
-
-   if (req->cryptlen < ivsize)
-   return -EINVAL;
-
-   compl = req->base.complete;
-   data = req->base.data;
-
-   rctx->ivsg = scatterwalk_ffwd(rctx->ivbuf, req->dst, req->assoclen);
-   info = PageHighMem(sg_page(rctx->ivsg)) ? NULL : sg_virt(rctx->ivsg);
-
-   if (!info) {
-   info = kmalloc(ivsize, req->base.flags &
-  CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL :
- GFP_ATOMIC);
-   if (!info)
-   return -ENOMEM;
-
-   compl = echainiv_encrypt_compat_complete;
-   data = req;
-   }
-
-   memcpy(&seq, req->iv + ivsize - sizeof(seq), sizeof(seq));
-
-   aead_givcrypt_set_tfm(subreq, ctx->child);
-   aead_givcrypt_set_callback(subreq, req->base.flags,
-  req->base.complete, req->base.data);
-   aead_givcrypt_set_crypt(subreq,
-   scatterwalk_ffwd(rctx->src, req->src,
-req->assoclen + ivsize),
-   scatterwalk_ffwd(rctx->dst, rctx->ivsg,
-ivsize),
-   req->cryptlen - ivsize, req->iv);
-   aead_givcrypt_set_assoc(subreq, req->src, req->assoclen);
-   aead_givcrypt_set_giv(subreq, info, be64_to_cpu(seq));
-
-   err = crypto_aead_givencrypt(subreq);
- 

[PATCH 1/11] crypto: aead - Document behaviour of AD in destination buffer

2015-05-27 Thread Herbert Xu
This patch defines the behaviour of AD in the new interface more
clearly.  In particular, it specifies that the user must copy
the AD to the destination manually when src != dst if they wish
to guarantee that the destination buffer contains a copy of the
AD.

The reason for this is that otherwise every AEAD implementation
would have to perform such a copy when src != dst.  In reality
most users do in-place processing where src == dst so this is
not an issue.

This patch also kills some remaining references to cryptoff.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---

 include/crypto/aead.h |   14 ++
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/include/crypto/aead.h b/include/crypto/aead.h
index 94141dc..61306ed 100644
--- a/include/crypto/aead.h
+++ b/include/crypto/aead.h
@@ -473,8 +473,15 @@ static inline void aead_request_set_callback(struct aead_request *req,
  * destination is the ciphertext. For a decryption operation, the use is
  * reversed - the source is the ciphertext and the destination is the plaintext.
  *
- * For both src/dst the layout is associated data, skipped data,
- * plain/cipher text, authentication tag.
+ * For both src/dst the layout is associated data, plain/cipher text,
+ * authentication tag.
+ *
+ * The content of the AD in the destination buffer after processing
+ * will either be untouched, or it will contain a copy of the AD
+ * from the source buffer.  In order to ensure that it always has
+ * a copy of the AD, the user must copy the AD over either before
+ * or after processing.  Of course this is not relevant if the user
+ * is doing in-place processing where src == dst.
  *
  * IMPORTANT NOTE AEAD requires an authentication tag (MAC). For decryption,
  *   the caller must concatenate the ciphertext followed by the
@@ -525,8 +532,7 @@ static inline void aead_request_set_assoc(struct aead_request *req,
  * @assoclen: number of bytes in associated data
  *
  * Setting the AD information.  This function sets the length of
- * the associated data and the number of bytes to skip after it to
- * access the plain/cipher text.
+ * the associated data.
  */
 static inline void aead_request_set_ad(struct aead_request *req,
   unsigned int assoclen)


[PATCH 10/11] crypto: seqiv - Fix IV size in context size calculation

2015-05-27 Thread Herbert Xu
This patch fixes a bug in the context size calculation where we
were still referring to the old cra_aead.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---

 crypto/seqiv.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/crypto/seqiv.c b/crypto/seqiv.c
index 9c4490b..c0dba8f 100644
--- a/crypto/seqiv.c
+++ b/crypto/seqiv.c
@@ -812,7 +812,7 @@ static int seqniv_create(struct crypto_template *tmpl, struct rtattr **tb)
 
inst->alg.base.cra_alignmask |= __alignof__(u32) - 1;
inst->alg.base.cra_ctxsize = sizeof(struct seqiv_aead_ctx);
-   inst->alg.base.cra_ctxsize += inst->alg.base.cra_aead.ivsize;
+   inst->alg.base.cra_ctxsize += inst->alg.ivsize;
 
 done:
err = aead_register_instance(tmpl, inst);


[PATCH 11/11] crypto: seqiv - Fix module unload/reload crash

2015-05-27 Thread Herbert Xu
On module unload we weren't unregistering the seqniv template,
thus leading to a crash the next time someone walks the template
list.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---

 crypto/seqiv.c |1 +
 1 file changed, 1 insertion(+)

diff --git a/crypto/seqiv.c b/crypto/seqiv.c
index c0dba8f..2333974 100644
--- a/crypto/seqiv.c
+++ b/crypto/seqiv.c
@@ -874,6 +874,7 @@ out_undo_niv:
 
 static void __exit seqiv_module_exit(void)
 {
+   crypto_unregister_template(&seqniv_tmpl);
crypto_unregister_template(&seqiv_tmpl);
 }
 


[PATCH 4/11] crypto: aead - Add common IV generation code

2015-05-27 Thread Herbert Xu
This patch adds some common IV generation code currently duplicated
by seqiv and echainiv.  For example, the setkey and setauthsize
functions are completely identical.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---

 crypto/aead.c   |  205 +++-
 include/crypto/internal/geniv.h |   24 
 2 files changed, 226 insertions(+), 3 deletions(-)

diff --git a/crypto/aead.c b/crypto/aead.c
index 35c55e0..8cdea89 100644
--- a/crypto/aead.c
+++ b/crypto/aead.c
@@ -12,7 +12,7 @@
  *
  */
 
-#include <crypto/internal/aead.h>
+#include <crypto/internal/geniv.h>
 #include <crypto/scatterwalk.h>
 #include <linux/err.h>
 #include <linux/init.h>
@@ -27,6 +27,14 @@
 
 #include internal.h
 
+struct compat_request_ctx {
+   struct scatterlist src[2];
+   struct scatterlist dst[2];
+   struct scatterlist ivbuf[2];
+   struct scatterlist *ivsg;
+   struct aead_givcrypt_request subreq;
+};
+
 static int aead_null_givencrypt(struct aead_givcrypt_request *req);
 static int aead_null_givdecrypt(struct aead_givcrypt_request *req);
 
@@ -373,6 +381,185 @@ static int crypto_grab_nivaead(struct crypto_aead_spawn *spawn,
return crypto_grab_spawn(&spawn->base, name, type, mask);
 }
 
+static int aead_geniv_setkey(struct crypto_aead *tfm,
+const u8 *key, unsigned int keylen)
+{
+   struct aead_geniv_ctx *ctx = crypto_aead_ctx(tfm);
+
+   return crypto_aead_setkey(ctx->child, key, keylen);
+}
+
+static int aead_geniv_setauthsize(struct crypto_aead *tfm,
+ unsigned int authsize)
+{
+   struct aead_geniv_ctx *ctx = crypto_aead_ctx(tfm);
+
+   return crypto_aead_setauthsize(ctx->child, authsize);
+}
+
+static void compat_encrypt_complete2(struct aead_request *req, int err)
+{
+   struct compat_request_ctx *rctx = aead_request_ctx(req);
+   struct aead_givcrypt_request *subreq = &rctx->subreq;
+   struct crypto_aead *geniv;
+
+   if (err == -EINPROGRESS)
+   return;
+
+   if (err)
+   goto out;
+
+   geniv = crypto_aead_reqtfm(req);
+   scatterwalk_map_and_copy(subreq->giv, rctx->ivsg, 0,
+crypto_aead_ivsize(geniv), 1);
+
+out:
+   kzfree(subreq->giv);
+}
+
+static void compat_encrypt_complete(struct crypto_async_request *base, int err)
+{
+   struct aead_request *req = base->data;
+
+   compat_encrypt_complete2(req, err);
+   aead_request_complete(req, err);
+}
+
+static int compat_encrypt(struct aead_request *req)
+{
+   struct crypto_aead *geniv = crypto_aead_reqtfm(req);
+   struct aead_geniv_ctx *ctx = crypto_aead_ctx(geniv);
+   struct compat_request_ctx *rctx = aead_request_ctx(req);
+   struct aead_givcrypt_request *subreq = &rctx->subreq;
+   unsigned int ivsize = crypto_aead_ivsize(geniv);
+   struct scatterlist *src, *dst;
+   crypto_completion_t compl;
+   void *data;
+   u8 *info;
+   __be64 seq;
+   int err;
+
+   if (req->cryptlen < ivsize)
+   return -EINVAL;
+
+   compl = req->base.complete;
+   data = req->base.data;
+
+   rctx->ivsg = scatterwalk_ffwd(rctx->ivbuf, req->dst, req->assoclen);
+   info = PageHighMem(sg_page(rctx->ivsg)) ? NULL : sg_virt(rctx->ivsg);
+
+   if (!info) {
+   info = kmalloc(ivsize, req->base.flags &
+  CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL :
+ GFP_ATOMIC);
+   if (!info)
+   return -ENOMEM;
+
+   compl = compat_encrypt_complete;
+   data = req;
+   }
+
+   memcpy(&seq, req->iv + ivsize - sizeof(seq), sizeof(seq));
+
+   src = scatterwalk_ffwd(rctx->src, req->src, req->assoclen + ivsize);
+   dst = req->src == req->dst ?
+ src : scatterwalk_ffwd(rctx->dst, rctx->ivsg, ivsize);
+
+   aead_givcrypt_set_tfm(subreq, ctx->child);
+   aead_givcrypt_set_callback(subreq, req->base.flags,
+  req->base.complete, req->base.data);
+   aead_givcrypt_set_crypt(subreq, src, dst,
+   req->cryptlen - ivsize, req->iv);
+   aead_givcrypt_set_assoc(subreq, req->src, req->assoclen);
+   aead_givcrypt_set_giv(subreq, info, be64_to_cpu(seq));
+
+   err = crypto_aead_givencrypt(subreq);
+   if (unlikely(PageHighMem(sg_page(rctx->ivsg))))
+   compat_encrypt_complete2(req, err);
+   return err;
+}
+
+static int compat_decrypt(struct aead_request *req)
+{
+   struct crypto_aead *geniv = crypto_aead_reqtfm(req);
+   struct aead_geniv_ctx *ctx = crypto_aead_ctx(geniv);
+   struct compat_request_ctx *rctx = aead_request_ctx(req);
+   struct aead_request *subreq = &rctx->subreq.areq;
+   unsigned int ivsize = crypto_aead_ivsize(geniv);
+   struct scatterlist *src, *dst;
+   crypto_completion_t compl;
+   void 

[PATCH 3/11] crypto: aead - Preserve in-place processing in old_crypt

2015-05-27 Thread Herbert Xu
This patch tries to preserve in-place processing in old_crypt as
various algorithms are optimised for in-place processing where
src == dst.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---

 crypto/aead.c |3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/crypto/aead.c b/crypto/aead.c
index 7c3d725..35c55e0 100644
--- a/crypto/aead.c
+++ b/crypto/aead.c
@@ -107,7 +107,8 @@ static int old_crypt(struct aead_request *req,
return crypt(req);
 
src = scatterwalk_ffwd(nreq->srcbuf, req->src, req->assoclen);
-   dst = scatterwalk_ffwd(nreq->dstbuf, req->dst, req->assoclen);
+   dst = req->src == req->dst ?
+ src : scatterwalk_ffwd(nreq->dstbuf, req->dst, req->assoclen);
 
aead_request_set_tfm(&nreq->subreq, aead);
aead_request_set_callback(&nreq->subreq, aead_request_flags(req),


[PATCH 5/11] crypto: echainiv - Copy AD along with plain text

2015-05-27 Thread Herbert Xu
As the AD does not necessarily exist in the destination buffer
it must be copied along with the plain text.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---

 crypto/echainiv.c |   10 ++
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/crypto/echainiv.c b/crypto/echainiv.c
index bd85dcc..02d0543 100644
--- a/crypto/echainiv.c
+++ b/crypto/echainiv.c
@@ -228,19 +228,13 @@ static int echainiv_encrypt(struct aead_request *req)
info = req->iv;
 
if (req->src != req->dst) {
-   struct scatterlist src[2];
-   struct scatterlist dst[2];
struct blkcipher_desc desc = {
.tfm = ctx->null,
};
 
err = crypto_blkcipher_encrypt(
-   &desc,
-   scatterwalk_ffwd(dst, req->dst,
-req->assoclen + ivsize),
-   scatterwalk_ffwd(src, req->src,
-req->assoclen + ivsize),
-   req->cryptlen - ivsize);
+   &desc, req->dst, req->src,
+   req->assoclen + req->cryptlen);
if (err)
return err;
}


[PATCH 8/11] crypto: seqiv - Copy AD along with plain/cipher text

2015-05-27 Thread Herbert Xu
As the AD does not necessarily exist in the destination buffer
it must be copied along with the plain/cipher text.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---

 crypto/seqiv.c |   33 ++---
 1 file changed, 6 insertions(+), 27 deletions(-)

diff --git a/crypto/seqiv.c b/crypto/seqiv.c
index 127970a..b55c685 100644
--- a/crypto/seqiv.c
+++ b/crypto/seqiv.c
@@ -315,19 +315,12 @@ static int seqiv_aead_encrypt_compat(struct aead_request *req)
data = req;
 
if (req->src != req->dst) {
-   struct scatterlist srcbuf[2];
-   struct scatterlist dstbuf[2];
struct blkcipher_desc desc = {
.tfm = ctx->null,
};
 
-   err = crypto_blkcipher_encrypt(
-   &desc,
-   scatterwalk_ffwd(dstbuf, req->dst,
-req->assoclen + ivsize),
-   scatterwalk_ffwd(srcbuf, req->src,
-req->assoclen + ivsize),
-   req->cryptlen - ivsize);
+   err = crypto_blkcipher_encrypt(&desc, req->dst, req->src,
+  req->assoclen + req->cryptlen);
if (err)
return err;
}
@@ -373,19 +366,12 @@ static int seqiv_aead_encrypt(struct aead_request *req)
info = req->iv;
 
if (req->src != req->dst) {
-   struct scatterlist src[2];
-   struct scatterlist dst[2];
struct blkcipher_desc desc = {
.tfm = ctx->null,
};
 
-   err = crypto_blkcipher_encrypt(
-   &desc,
-   scatterwalk_ffwd(dst, req->dst,
-req->assoclen + ivsize),
-   scatterwalk_ffwd(src, req->src,
-req->assoclen + ivsize),
-   req->cryptlen - ivsize);
+   err = crypto_blkcipher_encrypt(&desc, req->dst, req->src,
+  req->assoclen + req->cryptlen);
if (err)
return err;
}
@@ -446,19 +432,12 @@ static int seqiv_aead_decrypt_compat(struct aead_request *req)
}
 
if (req->src != req->dst) {
-   struct scatterlist srcbuf[2];
-   struct scatterlist dstbuf[2];
struct blkcipher_desc desc = {
.tfm = ctx->null,
};
 
-   err = crypto_blkcipher_encrypt(
-   &desc,
-   scatterwalk_ffwd(dstbuf, req->dst,
-req->assoclen + ivsize),
-   scatterwalk_ffwd(srcbuf, req->src,
-req->assoclen + ivsize),
-   req->cryptlen - ivsize);
+   err = crypto_blkcipher_encrypt(&desc, req->dst, req->src,
+  req->assoclen + req->cryptlen);
if (err)
return err;
}


Re: [ipsec PATCH 0/3] Preserve skb->mark through VTI tunnels

2015-05-27 Thread Steffen Klassert
On Wed, May 27, 2015 at 07:16:37AM -0700, Alexander Duyck wrote:
 These patches are meant to try and address the fact that the VTI tunnels are
 currently overwriting the skb->mark value.  I am generally happy with the
 first two patches, however the third patch still modifies the skb->mark,
 though it undoes it after the fact.
 
 The main problem I am trying to address is the fact that currently if I use
 a v6 over v6 VTI tunnel I cannot receive any traffic on the interface as
 the skb->mark is bleeding through and causing the traffic to be dropped.
 
 ---
 
 Alexander Duyck (3):
   ip_vti/ip6_vti: Do not touch skb->mark on xmit
   xfrm: Override skb->mark with tunnel->parm.i_key in xfrm_input
   ip_vti/ip6_vti: Preserve skb->mark after rcv_cb call

All applied to the ipsec tree, thanks a lot Alexander!


Re: [PATCH v1 3/3] crypto: ccp - Protect against poorly marked end of sg list

2015-05-27 Thread Herbert Xu
On Wed, May 27, 2015 at 09:12:02AM -0500, Tom Lendacky wrote:

 The reason I'm asking is because while this patch fixes your driver
 everybody else will still crash and burn should something like this
 happen again.
 
 A number of other drivers already have similar sg-count functions in
 them.

Perhaps you can help abstract this into a helper that everybody can
call?

Cheers,
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] xfrm6: Do not use xfrm_local_error for path MTU issues in tunnels

2015-05-27 Thread Herbert Xu
On Wed, May 27, 2015 at 10:40:32AM -0700, Alexander Duyck wrote:
 This change makes it so that we use icmpv6_send to report PMTU issues back
 into tunnels in the case that the resulting packet is larger than the MTU
 of the outgoing interface.  Previously xfrm_local_error was being used in
 this case, however this was resulting in no changes, I suspect due to the
 fact that the tunnel itself was being kept out of the loop.
 
 This patch fixes PMTU problems seen on ip6_vti tunnels and is based on the
 behavior seen if the socket was orphaned.  Instead of requiring the socket
 to be orphaned this patch simply defaults to using icmpv6_send in the case
 that the frame came through a tunnel.
 
 Signed-off-by: Alexander Duyck alexander.h.du...@redhat.com

Does this still work with normal tunnel mode and identical inner
and outer addresses? I recall we used to have a bug where in that
situation the kernel would interpret the ICMP message as a reduction
in outer MTU and thus resulting in a loop where the MTU keeps
getting smaller.

Cheers,
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH RFC v2 1/2] crypto: add PKE API

2015-05-27 Thread Herbert Xu
On Sat, May 23, 2015 at 07:20:15AM -0700, Tadeusz Struk wrote:

 The length would be redundant. It can be obtained by sg_nents(req->inparams)
 I don't limit the number of parameters. You can pass as many as you want. For 
 instance to pass 3 in and 2 out you do:
 
   struct scatterlist in[3];
   struct scatterlist out[2];
 
   sg_init_table(in, 3);
   sg_init_table(out, 2);
 
   sg_set_buf(in, first_in_param, len_of_first_in_param);
   sg_set_buf(in + 1, second_in_param, len_of_second_in_param);
   sg_set_buf(in + 2, third_in_param, len_of_third_in_param);
   
   sg_set_buf(out, first_out_param, len_of_first_out_param);
   sg_set_buf(out + 1, second_out_param, len_of_second_out_param);
 
   akcipher_request_set_crypt(req, in, out);
 
 The limitation here is that one parameter can not span multiple sgs. This 
 should be ok as they will never be bigger than one page.
 In fact MPI limits it to 2K max with #define MAX_EXTERN_MPI_BITS 16384.
 I'm ok to rename it to src and dst.

Do you have a specific piece of hardware in mind? What are its
capabilities?

If we are going to go with just contiguous memory then we might
as well just do u8 *src, *dst, unsigned int slen, dlen.

The whole point of the SG complexity is to deal with non-contiguous
memory (e.g., fragmented packets with IPsec).  If you can't do that
then why add the SG complexity?

Cheers,
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] xfrm6: Do not use xfrm_local_error for path MTU issues in tunnels

2015-05-27 Thread Steffen Klassert
On Thu, May 28, 2015 at 12:49:19PM +0800, Herbert Xu wrote:
 On Wed, May 27, 2015 at 10:40:32AM -0700, Alexander Duyck wrote:
  This change makes it so that we use icmpv6_send to report PMTU issues back
  into tunnels in the case that the resulting packet is larger than the MTU
  of the outgoing interface.  Previously xfrm_local_error was being used in
  this case, however this was resulting in no changes, I suspect due to the
  fact that the tunnel itself was being kept out of the loop.
  
  This patch fixes PMTU problems seen on ip6_vti tunnels and is based on the
  behavior seen if the socket was orphaned.  Instead of requiring the socket
  to be orphaned this patch simply defaults to using icmpv6_send in the case
  that the frame came through a tunnel.
  
  Signed-off-by: Alexander Duyck alexander.h.du...@redhat.com
 
 Does this still work with normal tunnel mode and identical inner
 and outer addresses? I recall we used to have a bug where in that
 situation the kernel would interpret the ICMP message as a reduction
 in outer MTU and thus resulting in a loop where the MTU keeps
 getting smaller.

Right, I think this reintroduces a bug that I fixed some years ago with
commit dd767856a36e ("xfrm6: Don't call icmpv6_send on local error")


Revert crypto: algif_aead - Disable AEAD user-space for now

2015-05-27 Thread Herbert Xu
This reverts commit f858c7bcca8c20761a20593439fe998b4b67e86b as
the algif_aead interface has been switched over to the new AEAD
interface.
 
Signed-off-by: Herbert Xu herb...@gondor.apana.org.au

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 0ff4cd4..af011a9 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1532,6 +1532,15 @@ config CRYPTO_USER_API_RNG
  This option enables the user-spaces interface for random
  number generator algorithms.
 
+config CRYPTO_USER_API_AEAD
+   tristate User-space interface for AEAD cipher algorithms
+   depends on NET
+   select CRYPTO_AEAD
+   select CRYPTO_USER_API
+   help
+ This option enables the user-spaces interface for AEAD
+ cipher algorithms.
+
 config CRYPTO_HASH_INFO
bool
 
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] crypto: jitterentropy - remove timekeeping_valid_for_hres

2015-05-27 Thread Herbert Xu
On Wed, May 27, 2015 at 01:50:12PM +0200, Stephan Mueller wrote:
 The patch removes the use of timekeeping_valid_for_hres which is now
 marked as internal for the time keeping subsystem. The jitterentropy
 does not really require this verification as a coarse timer (when
 random_get_entropy is absent) is discovered by the initialization test
 of jent_entropy_init, which would cause the jitter rng to not load in
 that case.
 
 Reported-by: kbuild test robot fengguang...@intel.com
 Signed-off-by: Stephan Mueller smuel...@chronox.de

Applied.
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] xfrm6: Do not use xfrm_local_error for path MTU issues in tunnels

2015-05-27 Thread Steffen Klassert
On Wed, May 27, 2015 at 10:40:32AM -0700, Alexander Duyck wrote:
 This change makes it so that we use icmpv6_send to report PMTU issues back
 into tunnels in the case that the resulting packet is larger than the MTU
 of the outgoing interface.  Previously xfrm_local_error was being used in
 this case, however this was resulting in no changes, I suspect due to the
 fact that the tunnel itself was being kept out of the loop.
 
 This patch fixes PMTU problems seen on ip6_vti tunnels and is based on the
 behavior seen if the socket was orphaned.  Instead of requiring the socket
 to be orphaned this patch simply defaults to using icmpv6_send in the case
 that the frame came though a tunnel.

We can use icmpv6_send() only in the case that the packet
was already transmitted by a tunnel device, otherwise we
get the bug back that I mentioned in my other mail.

Not sure if we have a way to know whether the packet
traversed a tunnel device. That's what I asked in the
thread 'Looking for a lost patch'.



Re: [net-next PATCH RFC 0/3] Preserve skb->mark through VTI tunnels

2015-05-27 Thread Steffen Klassert
On Tue, May 26, 2015 at 03:41:10PM -0700, Alexander Duyck wrote:
 These patches are meant to try and address the fact that the VTI tunnels are
 currently overwriting the skb->mark value.  I am generally happy with the
 first two patches, however the third patch still modifies the skb->mark,
 though it undoes it after the fact.

I don't see any better solution, so I think this should be ok for now.
In the long run we need to replace this GRE key/mark matching with
a separate interface.

 
 The main problem I am trying to address is the fact that currently if I use
 a v6 over v6 VTI tunnel I cannot receive any traffic on the interface as
 the skb->mark is bleeding through and causing the traffic to be dropped.

This is broken in the current mainline, so it should go into the ipsec
tree as a bugfix. I'd merge this patchset if you submit it to that tree.

Thanks!


Re: [net-next PATCH RFC 0/3] Preserve skb->mark through VTI tunnels

2015-05-27 Thread Herbert Xu
On Tue, May 26, 2015 at 03:41:10PM -0700, Alexander Duyck wrote:
 These patches are meant to try and address the fact that the VTI tunnels are
 currently overwriting the skb->mark value.  I am generally happy with the
 first two patches, however the third patch still modifies the skb->mark,
 though it undoes it after the fact.
 
 The main problem I am trying to address is the fact that currently if I use
 a v6 over v6 VTI tunnel I cannot receive any traffic on the interface as
 the skb->mark is bleeding through and causing the traffic to be dropped.

Looks good to me.  Thanks for following up on this!
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


[v3 PATCH 0/8] crypto: Convert all AEAD users to new interface

2015-05-27 Thread Herbert Xu
Hi:

The only changes from the last version are that set_ad no longer
takes a cryptoff argument and testmgr has been updated to always
supply space for the authentication tag.

The algif_aead patch has been removed and will be posted separately.

Series description:

This series of patches converts all in-tree AEAD users that I
could find to the new single SG list interface.  For IPsec it
also adopts the new explicit IV generator scheme.

To recap, the old AEAD interface takes an associated data (AD)
SG list in addition to the plain/cipher text SG list(s).  That
forces the underlying AEAD algorithm implementors to try to stitch
those two lists together where possible in order to maximise the
contiguous chunk of memory passed to the ICV/hash function.  Things
get even more hairy for IPsec as it has a third piece of memory,
the generated IV (giv) that needs to be hashed.  One look at the
nasty things authenc does for example is enough to make anyone
puke :)

In fact the interface is just getting in our way, because for the
main user, IPsec, the data is naturally contiguous as the protocol
was designed with this in mind.

So the new AEAD interface gets rid of the separate AD SG list
and instead simply requires the AD to be at the head of the src
and dst SG lists.

The conversion of in-tree users is fairly straightforward.  The
only non-trivial bit is IPsec as I'm taking this opportunity to
move the IV generation knowledge into IPsec as that's where it
belongs since we may in future wish to support different generation
schemes for a single algorithm.

Cheers,
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


[v3 PATCH 7/8] mac80211: Switch to new AEAD interface

2015-05-27 Thread Herbert Xu
This patch makes use of the new AEAD interface which uses a single
SG list instead of separate lists for the AD and plain text.

Tested-by: Johannes Berg johan...@sipsolutions.net
Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---

 net/mac80211/aes_ccm.c  |   30 ++
 net/mac80211/aes_gcm.c  |   30 ++
 net/mac80211/aes_gmac.c |   12 +---
 3 files changed, 33 insertions(+), 39 deletions(-)

diff --git a/net/mac80211/aes_ccm.c b/net/mac80211/aes_ccm.c
index 70d53da..7663c28 100644
--- a/net/mac80211/aes_ccm.c
+++ b/net/mac80211/aes_ccm.c
@@ -22,7 +22,7 @@ void ieee80211_aes_ccm_encrypt(struct crypto_aead *tfm, u8 
*b_0, u8 *aad,
   u8 *data, size_t data_len, u8 *mic,
   size_t mic_len)
 {
-   struct scatterlist assoc, pt, ct[2];
+   struct scatterlist sg[3];
 
char aead_req_data[sizeof(struct aead_request) +
   crypto_aead_reqsize(tfm)]
@@ -31,15 +31,14 @@ void ieee80211_aes_ccm_encrypt(struct crypto_aead *tfm, u8 
*b_0, u8 *aad,
 
memset(aead_req, 0, sizeof(aead_req_data));
 
-   sg_init_one(&pt, data, data_len);
-   sg_init_one(&assoc, &aad[2], be16_to_cpup((__be16 *)aad));
-   sg_init_table(ct, 2);
-   sg_set_buf(&ct[0], data, data_len);
-   sg_set_buf(&ct[1], mic, mic_len);
+   sg_init_table(sg, 3);
+   sg_set_buf(&sg[0], &aad[2], be16_to_cpup((__be16 *)aad));
+   sg_set_buf(&sg[1], data, data_len);
+   sg_set_buf(&sg[2], mic, mic_len);
 
aead_request_set_tfm(aead_req, tfm);
-   aead_request_set_assoc(aead_req, &assoc, assoc.length);
-   aead_request_set_crypt(aead_req, &pt, ct, data_len, b_0);
+   aead_request_set_crypt(aead_req, sg, sg, data_len, b_0);
+   aead_request_set_ad(aead_req, sg[0].length);
 
crypto_aead_encrypt(aead_req);
 }
@@ -48,7 +47,7 @@ int ieee80211_aes_ccm_decrypt(struct crypto_aead *tfm, u8 
*b_0, u8 *aad,
  u8 *data, size_t data_len, u8 *mic,
  size_t mic_len)
 {
-   struct scatterlist assoc, pt, ct[2];
+   struct scatterlist sg[3];
char aead_req_data[sizeof(struct aead_request) +
   crypto_aead_reqsize(tfm)]
__aligned(__alignof__(struct aead_request));
@@ -59,15 +58,14 @@ int ieee80211_aes_ccm_decrypt(struct crypto_aead *tfm, u8 
*b_0, u8 *aad,
 
memset(aead_req, 0, sizeof(aead_req_data));
 
-   sg_init_one(&pt, data, data_len);
-   sg_init_one(&assoc, &aad[2], be16_to_cpup((__be16 *)aad));
-   sg_init_table(ct, 2);
-   sg_set_buf(&ct[0], data, data_len);
-   sg_set_buf(&ct[1], mic, mic_len);
+   sg_init_table(sg, 3);
+   sg_set_buf(&sg[0], &aad[2], be16_to_cpup((__be16 *)aad));
+   sg_set_buf(&sg[1], data, data_len);
+   sg_set_buf(&sg[2], mic, mic_len);
 
aead_request_set_tfm(aead_req, tfm);
-   aead_request_set_assoc(aead_req, &assoc, assoc.length);
-   aead_request_set_crypt(aead_req, ct, &pt, data_len + mic_len, b_0);
+   aead_request_set_crypt(aead_req, sg, sg, data_len + mic_len, b_0);
+   aead_request_set_ad(aead_req, sg[0].length);
 
return crypto_aead_decrypt(aead_req);
 }
diff --git a/net/mac80211/aes_gcm.c b/net/mac80211/aes_gcm.c
index b91c9d7..3afe361f 100644
--- a/net/mac80211/aes_gcm.c
+++ b/net/mac80211/aes_gcm.c
@@ -18,7 +18,7 @@
 void ieee80211_aes_gcm_encrypt(struct crypto_aead *tfm, u8 *j_0, u8 *aad,
   u8 *data, size_t data_len, u8 *mic)
 {
-   struct scatterlist assoc, pt, ct[2];
+   struct scatterlist sg[3];
 
char aead_req_data[sizeof(struct aead_request) +
   crypto_aead_reqsize(tfm)]
@@ -27,15 +27,14 @@ void ieee80211_aes_gcm_encrypt(struct crypto_aead *tfm, u8 
*j_0, u8 *aad,
 
memset(aead_req, 0, sizeof(aead_req_data));
 
-   sg_init_one(&pt, data, data_len);
-   sg_init_one(&assoc, &aad[2], be16_to_cpup((__be16 *)aad));
-   sg_init_table(ct, 2);
-   sg_set_buf(&ct[0], data, data_len);
-   sg_set_buf(&ct[1], mic, IEEE80211_GCMP_MIC_LEN);
+   sg_init_table(sg, 3);
+   sg_set_buf(&sg[0], &aad[2], be16_to_cpup((__be16 *)aad));
+   sg_set_buf(&sg[1], data, data_len);
+   sg_set_buf(&sg[2], mic, IEEE80211_GCMP_MIC_LEN);
 
aead_request_set_tfm(aead_req, tfm);
-   aead_request_set_assoc(aead_req, &assoc, assoc.length);
-   aead_request_set_crypt(aead_req, &pt, ct, data_len, j_0);
+   aead_request_set_crypt(aead_req, sg, sg, data_len, j_0);
+   aead_request_set_ad(aead_req, sg[0].length);
 
crypto_aead_encrypt(aead_req);
 }
@@ -43,7 +42,7 @@ void ieee80211_aes_gcm_encrypt(struct crypto_aead *tfm, u8 
*j_0, u8 *aad,
 int ieee80211_aes_gcm_decrypt(struct crypto_aead *tfm, u8 *j_0, u8 *aad,
  u8 *data, size_t data_len, u8 *mic)
 {
-   struct scatterlist assoc, pt, ct[2];
+   struct scatterlist 

[v3 PATCH 5/8] esp6: Switch to new AEAD interface

2015-05-27 Thread Herbert Xu
This patch makes use of the new AEAD interface which uses a single
SG list instead of separate lists for the AD and plain text.  The
IV generation is also now carried out through normal AEAD methods.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---

 net/ipv6/esp6.c |  200 ++--
 1 file changed, 122 insertions(+), 78 deletions(-)

diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
index 31f1b5d..060a60b 100644
--- a/net/ipv6/esp6.c
+++ b/net/ipv6/esp6.c
@@ -76,7 +76,7 @@ static void *esp_alloc_tmp(struct crypto_aead *aead, int 
nfrags, int seqihlen)
len = ALIGN(len, crypto_tfm_ctx_alignment());
}
 
-   len += sizeof(struct aead_givcrypt_request) + crypto_aead_reqsize(aead);
+   len += sizeof(struct aead_request) + crypto_aead_reqsize(aead);
len = ALIGN(len, __alignof__(struct scatterlist));
 
len += sizeof(struct scatterlist) * nfrags;
@@ -96,17 +96,6 @@ static inline u8 *esp_tmp_iv(struct crypto_aead *aead, void 
*tmp, int seqhilen)
 crypto_aead_alignmask(aead) + 1) : tmp + seqhilen;
 }
 
-static inline struct aead_givcrypt_request *esp_tmp_givreq(
-   struct crypto_aead *aead, u8 *iv)
-{
-   struct aead_givcrypt_request *req;
-
-   req = (void *)PTR_ALIGN(iv + crypto_aead_ivsize(aead),
-   crypto_tfm_ctx_alignment());
-   aead_givcrypt_set_tfm(req, aead);
-   return req;
-}
-
 static inline struct aead_request *esp_tmp_req(struct crypto_aead *aead, u8 
*iv)
 {
struct aead_request *req;
@@ -125,14 +114,6 @@ static inline struct scatterlist *esp_req_sg(struct 
crypto_aead *aead,
 __alignof__(struct scatterlist));
 }
 
-static inline struct scatterlist *esp_givreq_sg(
-   struct crypto_aead *aead, struct aead_givcrypt_request *req)
-{
-   return (void *)ALIGN((unsigned long)(req + 1) +
-crypto_aead_reqsize(aead),
-__alignof__(struct scatterlist));
-}
-
 static void esp_output_done(struct crypto_async_request *base, int err)
 {
struct sk_buff *skb = base->data;
@@ -141,32 +122,57 @@ static void esp_output_done(struct crypto_async_request 
*base, int err)
xfrm_output_resume(skb, err);
 }
 
+/* Move ESP header back into place. */
+static void esp_restore_header(struct sk_buff *skb, unsigned int offset)
+{
+   struct ip_esp_hdr *esph = (void *)(skb->data + offset);
+   void *tmp = ESP_SKB_CB(skb)->tmp;
+   __be32 *seqhi = esp_tmp_seqhi(tmp);
+
+   esph->seq_no = esph->spi;
+   esph->spi = *seqhi;
+}
+
+static void esp_output_restore_header(struct sk_buff *skb)
+{
+   esp_restore_header(skb, skb_transport_offset(skb) - sizeof(__be32));
+}
+
+static void esp_output_done_esn(struct crypto_async_request *base, int err)
+{
+   struct sk_buff *skb = base->data;
+
+   esp_output_restore_header(skb);
+   esp_output_done(base, err);
+}
+
 static int esp6_output(struct xfrm_state *x, struct sk_buff *skb)
 {
int err;
struct ip_esp_hdr *esph;
struct crypto_aead *aead;
-   struct aead_givcrypt_request *req;
+   struct aead_request *req;
struct scatterlist *sg;
-   struct scatterlist *asg;
struct sk_buff *trailer;
void *tmp;
int blksize;
int clen;
int alen;
int plen;
+   int ivlen;
int tfclen;
int nfrags;
int assoclen;
-   int sglists;
int seqhilen;
u8 *iv;
u8 *tail;
__be32 *seqhi;
+   __be64 seqno;
 
/* skb is pure payload to encrypt */
aead = x->data;
alen = crypto_aead_authsize(aead);
+   ivlen = crypto_aead_ivsize(aead);
 
tfclen = 0;
if (x->tfcpad) {
@@ -187,16 +193,14 @@ static int esp6_output(struct xfrm_state *x, struct 
sk_buff *skb)
nfrags = err;
 
assoclen = sizeof(*esph);
-   sglists = 1;
seqhilen = 0;
 
if (x->props.flags & XFRM_STATE_ESN) {
-   sglists += 2;
seqhilen += sizeof(__be32);
assoclen += seqhilen;
}
 
-   tmp = esp_alloc_tmp(aead, nfrags + sglists, seqhilen);
+   tmp = esp_alloc_tmp(aead, nfrags, seqhilen);
if (!tmp) {
err = -ENOMEM;
goto error;
@@ -204,9 +208,8 @@ static int esp6_output(struct xfrm_state *x, struct sk_buff 
*skb)
 
seqhi = esp_tmp_seqhi(tmp);
iv = esp_tmp_iv(aead, tmp, seqhilen);
-   req = esp_tmp_givreq(aead, iv);
-   asg = esp_givreq_sg(aead, req);
-   sg = asg + sglists;
+   req = esp_tmp_req(aead, iv);
+   sg = esp_req_sg(aead, req);
 
/* Fill padding... */
tail = skb_tail_pointer(trailer);
@@ -227,36 +230,53 @@ static int esp6_output(struct xfrm_state *x, struct 
sk_buff *skb)
esph = ip_esp_hdr(skb);
*skb_mac_header(skb) = IPPROTO_ESP;
 
-   esph->spi = x->id.spi;

[v3 PATCH 8/8] crypto: tcrypt - Switch to new AEAD interface

2015-05-27 Thread Herbert Xu
This patch makes use of the new AEAD interface which uses a single
SG list instead of separate lists for the AD and plain text.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---

 crypto/tcrypt.c |   15 +++
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 2bff613..4b4a931 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -277,7 +277,6 @@ static void test_aead_speed(const char *algo, int enc, 
unsigned int secs,
const char *key;
struct aead_request *req;
struct scatterlist *sg;
-   struct scatterlist *asg;
struct scatterlist *sgout;
const char *e;
void *assoc;
@@ -309,11 +308,10 @@ static void test_aead_speed(const char *algo, int enc, 
unsigned int secs,
if (testmgr_alloc_buf(xoutbuf))
goto out_nooutbuf;
 
-   sg = kmalloc(sizeof(*sg) * 8 * 3, GFP_KERNEL);
+   sg = kmalloc(sizeof(*sg) * 9 * 2, GFP_KERNEL);
if (!sg)
goto out_nosg;
-   asg = &sg[8];
-   sgout = &asg[8];
+   sgout = &sg[9];
 
tfm = crypto_alloc_aead(algo, 0, 0);
 
@@ -339,7 +337,8 @@ static void test_aead_speed(const char *algo, int enc, 
unsigned int secs,
do {
assoc = axbuf[0];
memset(assoc, 0xff, aad_size);
-   sg_init_one(&asg[0], assoc, aad_size);
+   sg_set_buf(&sg[0], assoc, aad_size);
+   sg_set_buf(&sgout[0], assoc, aad_size);
 
if ((*keysize + *b_size) > TVMEMSIZE * PAGE_SIZE) {
pr_err("template (%u) too big for tvmem (%lu)\n",
@@ -375,14 +374,14 @@ static void test_aead_speed(const char *algo, int enc, 
unsigned int secs,
goto out;
}
 
-   sg_init_aead(&sg[0], xbuf,
+   sg_init_aead(&sg[1], xbuf,
*b_size + (enc ? authsize : 0));
 
-   sg_init_aead(&sgout[0], xoutbuf,
+   sg_init_aead(&sgout[1], xoutbuf,
*b_size + (enc ? authsize : 0));
 
aead_request_set_crypt(req, sg, sgout, *b_size, iv);
-   aead_request_set_assoc(req, asg, aad_size);
+   aead_request_set_ad(req, aad_size);
 
if (secs)
ret = test_aead_jiffies(req, enc, *b_size,


[v3 PATCH 1/8] crypto: testmgr - Switch to new AEAD interface

2015-05-27 Thread Herbert Xu
This patch makes use of the new AEAD interface which uses a single
SG list instead of separate lists for the AD and plain text.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---

 crypto/testmgr.c |   87 ++-
 1 file changed, 48 insertions(+), 39 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 1817252..eff8eba 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -427,7 +427,6 @@ static int __test_aead(struct crypto_aead *tfm, int enc,
char *key;
struct aead_request *req;
struct scatterlist *sg;
-   struct scatterlist *asg;
struct scatterlist *sgout;
const char *e, *d;
struct tcrypt_result result;
@@ -454,11 +453,10 @@ static int __test_aead(struct crypto_aead *tfm, int enc,
goto out_nooutbuf;
 
/* avoid the frame size is larger than 1024 bytes compiler warning */
-   sg = kmalloc(sizeof(*sg) * 8 * (diff_dst ? 3 : 2), GFP_KERNEL);
+   sg = kmalloc(sizeof(*sg) * 8 * (diff_dst ? 4 : 2), GFP_KERNEL);
if (!sg)
goto out_nosg;
-   asg = &sg[8];
-   sgout = &asg[8];
+   sgout = &sg[16];
 
if (diff_dst)
d = -ddst;
@@ -537,23 +535,27 @@ static int __test_aead(struct crypto_aead *tfm, int enc,
goto out;
}
 
+   k = !!template[i].alen;
+   sg_init_table(sg, k + 1);
+   sg_set_buf(&sg[0], assoc, template[i].alen);
+   sg_set_buf(&sg[k], input,
+  template[i].ilen + (enc ? authsize : 0));
+   output = input;
+
if (diff_dst) {
+   sg_init_table(sgout, k + 1);
+   sg_set_buf(&sgout[0], assoc, template[i].alen);
+
output = xoutbuf[0];
output += align_offset;
-   sg_init_one(&sg[0], input, template[i].ilen);
-   sg_init_one(&sgout[0], output, template[i].rlen);
-   } else {
-   sg_init_one(&sg[0], input,
-   template[i].ilen + (enc ? authsize : 0));
-   output = input;
+   sg_set_buf(&sgout[k], output,
+  template[i].rlen + (enc ? 0 : authsize));
}

-   sg_init_one(&asg[0], assoc, template[i].alen);
-
aead_request_set_crypt(req, sg, (diff_dst) ? sgout : sg,
   template[i].ilen, iv);
 
-   aead_request_set_assoc(req, asg, template[i].alen);
+   aead_request_set_ad(req, template[i].alen);
 
ret = enc ? crypto_aead_encrypt(req) : crypto_aead_decrypt(req);
 
@@ -633,9 +635,29 @@ static int __test_aead(struct crypto_aead *tfm, int enc,
authsize = abs(template[i].rlen - template[i].ilen);
 
ret = -EINVAL;
-   sg_init_table(sg, template[i].np);
+   sg_init_table(sg, template[i].anp + template[i].np);
if (diff_dst)
-   sg_init_table(sgout, template[i].np);
+   sg_init_table(sgout, template[i].anp + template[i].np);
+
+   ret = -EINVAL;
+   for (k = 0, temp = 0; k < template[i].anp; k++) {
+   if (WARN_ON(offset_in_page(IDX[k]) +
+   template[i].atap[k] > PAGE_SIZE))
+   goto out;
+   sg_set_buf(&sg[k],
+  memcpy(axbuf[IDX[k] >> PAGE_SHIFT] +
+ offset_in_page(IDX[k]),
+ template[i].assoc + temp,
+ template[i].atap[k]),
+  template[i].atap[k]);
+   if (diff_dst)
+   sg_set_buf(&sgout[k],
+  axbuf[IDX[k] >> PAGE_SHIFT] +
+  offset_in_page(IDX[k]),
+  template[i].atap[k]);
+   temp += template[i].atap[k];
+   }
+
for (k = 0, temp = 0; k < template[i].np; k++) {
if (WARN_ON(offset_in_page(IDX[k]) +
template[i].tap[k] > PAGE_SIZE))
@@ -643,7 +665,8 @@ static int __test_aead(struct crypto_aead *tfm, int enc,
 
q = xbuf[IDX[k] >> PAGE_SHIFT] + offset_in_page(IDX[k]);
memcpy(q, template[i].input + temp, template[i].tap[k]);
-   sg_set_buf(&sg[k], q, template[i].tap[k]);
+   sg_set_buf(&sg[template[i].anp + k],
+  q, template[i].tap[k]);
 
if (diff_dst) {
q = 

[v3 PATCH 2/8] xfrm: Add IV generator information to xfrm_algo_desc

2015-05-27 Thread Herbert Xu
This patch adds IV generator information for each AEAD and block
cipher to xfrm_algo_desc.  This will be used to access the new
AEAD interface.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---

 include/net/xfrm.h   |2 ++
 net/xfrm/xfrm_algo.c |   16 
 2 files changed, 18 insertions(+)

diff --git a/include/net/xfrm.h b/include/net/xfrm.h
index 36ac102..30bca86 100644
--- a/include/net/xfrm.h
+++ b/include/net/xfrm.h
@@ -1314,6 +1314,7 @@ static inline int xfrm_id_proto_match(u8 proto, u8 
userproto)
  * xfrm algorithm information
  */
 struct xfrm_algo_aead_info {
+   char *geniv;
u16 icv_truncbits;
 };
 
@@ -1323,6 +1324,7 @@ struct xfrm_algo_auth_info {
 };
 
 struct xfrm_algo_encr_info {
+   char *geniv;
u16 blockbits;
u16 defkeybits;
 };
diff --git a/net/xfrm/xfrm_algo.c b/net/xfrm/xfrm_algo.c
index 12e82a5..67266b7 100644
--- a/net/xfrm/xfrm_algo.c
+++ b/net/xfrm/xfrm_algo.c
@@ -31,6 +31,7 @@ static struct xfrm_algo_desc aead_list[] = {
 
.uinfo = {
.aead = {
+   .geniv = "seqniv",
.icv_truncbits = 64,
}
},
@@ -49,6 +50,7 @@ static struct xfrm_algo_desc aead_list[] = {
 
.uinfo = {
.aead = {
+   .geniv = "seqniv",
.icv_truncbits = 96,
}
},
@@ -67,6 +69,7 @@ static struct xfrm_algo_desc aead_list[] = {
 
.uinfo = {
.aead = {
+   .geniv = "seqniv",
.icv_truncbits = 128,
}
},
@@ -85,6 +88,7 @@ static struct xfrm_algo_desc aead_list[] = {
 
.uinfo = {
.aead = {
+   .geniv = "seqniv",
.icv_truncbits = 64,
}
},
@@ -103,6 +107,7 @@ static struct xfrm_algo_desc aead_list[] = {
 
.uinfo = {
.aead = {
+   .geniv = "seqniv",
.icv_truncbits = 96,
}
},
@@ -121,6 +126,7 @@ static struct xfrm_algo_desc aead_list[] = {
 
.uinfo = {
.aead = {
+   .geniv = "seqniv",
.icv_truncbits = 128,
}
},
@@ -139,6 +145,7 @@ static struct xfrm_algo_desc aead_list[] = {
 
.uinfo = {
.aead = {
+   .geniv = "seqiv",
.icv_truncbits = 128,
}
},
@@ -353,6 +360,7 @@ static struct xfrm_algo_desc ealg_list[] = {
 
.uinfo = {
.encr = {
+   .geniv = "echainiv",
.blockbits = 64,
.defkeybits = 64,
}
@@ -373,6 +381,7 @@ static struct xfrm_algo_desc ealg_list[] = {
 
.uinfo = {
.encr = {
+   .geniv = "echainiv",
.blockbits = 64,
.defkeybits = 192,
}
@@ -393,6 +402,7 @@ static struct xfrm_algo_desc ealg_list[] = {
 
.uinfo = {
.encr = {
+   .geniv = "echainiv",
.blockbits = 64,
.defkeybits = 128,
}
@@ -413,6 +423,7 @@ static struct xfrm_algo_desc ealg_list[] = {
 
.uinfo = {
.encr = {
+   .geniv = "echainiv",
.blockbits = 64,
.defkeybits = 128,
}
@@ -433,6 +444,7 @@ static struct xfrm_algo_desc ealg_list[] = {
 
.uinfo = {
.encr = {
+   .geniv = "echainiv",
.blockbits = 128,
.defkeybits = 128,
}
@@ -453,6 +465,7 @@ static struct xfrm_algo_desc ealg_list[] = {
 
.uinfo = {
.encr = {
+   .geniv = "echainiv",
.blockbits = 128,
.defkeybits = 128,
}
@@ -473,6 +486,7 @@ static struct xfrm_algo_desc ealg_list[] = {
 
.uinfo = {
.encr = {
+   .geniv = "echainiv",
.blockbits = 128,
.defkeybits = 128,
}
@@ -493,6 +507,7 @@ static struct xfrm_algo_desc ealg_list[] = {
 
.uinfo = {
.encr = {
+   .geniv = "echainiv",
.blockbits = 128,
.defkeybits = 128,
}
@@ -512,6 +527,7 @@ static struct xfrm_algo_desc ealg_list[] = {
 
.uinfo = {
.encr = {
+   .geniv = "seqiv",
.blockbits = 128,
.defkeybits = 160, /* 128-bit key + 32-bit nonce */
}

[v3 PATCH 3/8] ipsec: Add IV generator information to xfrm_state

2015-05-27 Thread Herbert Xu
This patch adds IV generator information to xfrm_state.  This
is currently obtained from our own list of algorithm descriptions.

Signed-off-by: Herbert Xu herb...@gondor.apana.org.au
---

 include/net/xfrm.h   |1 +
 net/key/af_key.c |1 +
 net/xfrm/xfrm_user.c |   40 +++-
 3 files changed, 33 insertions(+), 9 deletions(-)

diff --git a/include/net/xfrm.h b/include/net/xfrm.h
index 30bca86..f0ee97e 100644
--- a/include/net/xfrm.h
+++ b/include/net/xfrm.h
@@ -168,6 +168,7 @@ struct xfrm_state {
struct xfrm_algo*ealg;
struct xfrm_algo*calg;
struct xfrm_algo_aead   *aead;
+   const char  *geniv;
 
/* Data for encapsulator */
struct xfrm_encap_tmpl  *encap;
diff --git a/net/key/af_key.c b/net/key/af_key.c
index f0d52d7..3c5b8ce 100644
--- a/net/key/af_key.c
+++ b/net/key/af_key.c
@@ -1190,6 +1190,7 @@ static struct xfrm_state * pfkey_msg2xfrm_state(struct 
net *net,
memcpy(x->ealg->alg_key, key+1, keysize);
}
x->props.ealgo = sa->sadb_sa_encrypt;
+   x->geniv = a->uinfo.encr.geniv;
}
}
/* x->algo.flags = sa->sadb_sa_flags; */
diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
index 2091664..bd16c6c 100644
--- a/net/xfrm/xfrm_user.c
+++ b/net/xfrm/xfrm_user.c
@@ -289,6 +289,31 @@ static int attach_one_algo(struct xfrm_algo **algpp, u8 
*props,
return 0;
 }
 
+static int attach_crypt(struct xfrm_state *x, struct nlattr *rta)
+{
+   struct xfrm_algo *p, *ualg;
+   struct xfrm_algo_desc *algo;
+
+   if (!rta)
+   return 0;
+
+   ualg = nla_data(rta);
+
+   algo = xfrm_ealg_get_byname(ualg->alg_name, 1);
+   if (!algo)
+   return -ENOSYS;
+   x->props.ealgo = algo->desc.sadb_alg_id;
+
+   p = kmemdup(ualg, xfrm_alg_len(ualg), GFP_KERNEL);
+   if (!p)
+   return -ENOMEM;
+
+   strcpy(p->alg_name, algo->name);
+   x->ealg = p;
+   x->geniv = algo->uinfo.encr.geniv;
+   return 0;
+}
+
 static int attach_auth(struct xfrm_algo_auth **algpp, u8 *props,
   struct nlattr *rta)
 {
@@ -349,8 +374,7 @@ static int attach_auth_trunc(struct xfrm_algo_auth **algpp, 
u8 *props,
return 0;
 }
 
-static int attach_aead(struct xfrm_algo_aead **algpp, u8 *props,
-  struct nlattr *rta)
+static int attach_aead(struct xfrm_state *x, struct nlattr *rta)
 {
struct xfrm_algo_aead *p, *ualg;
struct xfrm_algo_desc *algo;
@@ -363,14 +387,15 @@ static int attach_aead(struct xfrm_algo_aead **algpp, u8 
*props,
algo = xfrm_aead_get_byname(ualg->alg_name, ualg->alg_icv_len, 1);
if (!algo)
return -ENOSYS;
-   *props = algo->desc.sadb_alg_id;
+   x->props.ealgo = algo->desc.sadb_alg_id;
 
p = kmemdup(ualg, aead_len(ualg), GFP_KERNEL);
if (!p)
return -ENOMEM;
 
strcpy(p->alg_name, algo->name);
-   *algpp = p;
+   x->aead = p;
+   x->geniv = algo->uinfo.aead.geniv;
return 0;
 }
 
@@ -515,8 +540,7 @@ static struct xfrm_state *xfrm_state_construct(struct net 
*net,
if (attrs[XFRMA_SA_EXTRA_FLAGS])
x->props.extra_flags = nla_get_u32(attrs[XFRMA_SA_EXTRA_FLAGS]);
 
-   if ((err = attach_aead(&x->aead, &x->props.ealgo,
-  attrs[XFRMA_ALG_AEAD])))
+   if ((err = attach_aead(x, attrs[XFRMA_ALG_AEAD])))
goto error;
if ((err = attach_auth_trunc(&x->aalg, &x->props.aalgo,
 attrs[XFRMA_ALG_AUTH_TRUNC])))
@@ -526,9 +550,7 @@ static struct xfrm_state *xfrm_state_construct(struct net 
*net,
   attrs[XFRMA_ALG_AUTH])))
goto error;
}
-   if ((err = attach_one_algo(&x->ealg, &x->props.ealgo,
-  xfrm_ealg_get_byname,
-  attrs[XFRMA_ALG_CRYPT])))
+   if ((err = attach_crypt(x, attrs[XFRMA_ALG_CRYPT])))
goto error;
if ((err = attach_one_algo(&x->calg, &x->props.calgo,
   xfrm_calg_get_byname,
--
To unsubscribe from this list: send the line unsubscribe linux-crypto in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html