On 8/8/19 8:01 AM, Mikulas Patocka wrote:
Note that the patch bd293d071ffe doesn't really prevent the deadlock from
occurring - if we look at the stacktrace reported by Junxiao Bi, we see
that it hangs in bit_wait_io and not on the mutex - i.e. it has already
successfully taken the mutex.
On Thu, Aug 08, 2019 at 06:01:49PM +, Horia Geanta wrote:
>
> -- >8 --
>
> Subject: [PATCH] crypto: testmgr - Add additional AES-XTS vectors for covering
> CTS (part II)
Patchwork doesn't like it when you do this and it'll discard
your patch. To make it into patchwork you need to put the
Hi Steffen,
I love your patch! Yet something to improve:
[auto build test ERROR on linus/master]
[cannot apply to v5.3-rc3 next-20190808]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system]
url:
https://github.com/0day-ci/linux/commits/Steffen
On Thu, Aug 08, 2019 at 05:50:10AM -0400, Mikulas Patocka wrote:
> A deadlock with this stacktrace was observed.
>
> The obvious problem here is that in the call chain
> xfs_vm_direct_IO->__blockdev_direct_IO->do_blockdev_direct_IO->kmem_cache_alloc
>
> we do a GFP_KERNEL allocation while we
On 8/8/2019 4:43 PM, Pascal Van Leeuwen wrote:
> Hi Horia,
>
> This is the best I can do on short notice w.r.t vectors with 8 byte IV.
> Format is actually equivalent to that of the XTS specification, with
> the sector number being referred to as "H".
>
> Actually, the input keys, plaintext and
On Thu, 8 Aug 2019, Junxiao Bi wrote:
>
> On 8/8/19 8:01 AM, Mikulas Patocka wrote:
>
> Note that the patch bd293d071ffe doesn't really prevent the deadlock from
> occurring - if we look at the stacktrace reported by Junxiao Bi, we see
> that it hangs in bit_wait_io and not on the mutex - i.e.
On 8/8/19 8:17 AM, Mikulas Patocka wrote:
> A deadlock with this stacktrace was observed.
>
> The loop thread does a GFP_KERNEL allocation, it calls into dm-bufio
> shrinker and the shrinker depends on I/O completion in the dm-bufio
> subsystem.
>
> In order to fix the deadlock (and other
A deadlock with this stacktrace was observed.
The loop thread does a GFP_KERNEL allocation, it calls into dm-bufio
shrinker and the shrinker depends on I/O completion in the dm-bufio
subsystem.
In order to fix the deadlock (and other similar ones), we set the flag
PF_MEMALLOC_NOIO in the loop thread
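Below is a minimal sketch of the approach being described: marking the loop
worker thread with PF_MEMALLOC_NOIO so that every allocation it makes behaves
like GFP_NOIO and reclaim cannot recurse back into the I/O path. This is
illustrative only, not the actual drivers/block/loop.c patch; the scoped
helpers are shown as an alternative.

/*
 * Sketch only: mark the worker thread so every allocation it makes
 * implicitly drops __GFP_IO/__GFP_FS, which keeps reclaim from
 * reentering the I/O path (for example the dm-bufio shrinker).
 */
#include <linux/sched.h>
#include <linux/sched/mm.h>

static void loop_worker_mark_noio(void)
{
	/* From now on, allocations by this task behave like GFP_NOIO. */
	current->flags |= PF_MEMALLOC_NOIO;
}

/* Alternative: limit the effect to a bounded region with the scoped API. */
static void do_work_without_io_recursion(void)
{
	unsigned int noio_flags = memalloc_noio_save();

	/* ... GFP_KERNEL allocations here will not issue or wait on I/O ... */

	memalloc_noio_restore(noio_flags);
}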
On Thu, 8 Aug 2019, Matthew Wilcox wrote:
> On Thu, Aug 08, 2019 at 05:50:10AM -0400, Mikulas Patocka wrote:
> > A deadlock with this stacktrace was observed.
> >
> > The obvious problem here is that in the call chain
> >
On Thu, 8 Aug 2019, Mike Snitzer wrote:
> On Thu, Aug 08 2019 at 5:40am -0400,
> Mikulas Patocka wrote:
>
> > Revert the patch bd293d071ffe65e645b4d8104f9d8fe15ea13862. A proper fix
> > should be not to use GFP_KERNEL in the function do_blockdev_direct_IO.
>
> Matthew Wilcox pointed out
On 8/7/2019 11:58 PM, Pascal Van Leeuwen wrote:
>> -Original Message-
>> From: Horia Geanta
>> Sent: Wednesday, August 7, 2019 5:52 PM
>> To: Pascal Van Leeuwen ; Ard Biesheuvel
>>
>> Cc: Milan Broz ; Herbert Xu
>> ; dm-
>> de...@redhat.com; linux-cry...@vger.kernel.org
>> Subject: Re:
On Thu, Aug 08 2019 at 5:40am -0400,
Mikulas Patocka wrote:
> Revert the patch bd293d071ffe65e645b4d8104f9d8fe15ea13862. A proper fix
> should be not to use GFP_KERNEL in the function do_blockdev_direct_IO.
Matthew Wilcox pointed out that the "proper fix" is loop should be using
Hi Mikulas,
This doesn't seem to be an issue on mainline: the mutex in dm_bufio_shrink_count() has
already been removed there.
Thanks,
Junxiao.
On 8/8/19 2:50 AM, Mikulas Patocka wrote:
A deadlock with this stacktrace was observed.
The obvious problem here is that in the call chain
On Thu, Aug 08, 2019 at 05:50:10AM -0400, Mikulas Patocka wrote:
> A deadlock with this stacktrace was observed.
>
> The obvious problem here is that in the call chain
> xfs_vm_direct_IO->__blockdev_direct_IO->do_blockdev_direct_IO->kmem_cache_alloc
>
> we do a GFP_KERNEL allocation while we
Hi Horia,
This is the best I can do on short notice w.r.t vectors with 8 byte IV.
Format is actually equivalent to that of the XTS specification, with
the sector number being referred to as "H".
Actually, the input keys, plaintext and IV should be the same as before,
with the exception of the IV
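For context, vectors like these typically land in crypto/testmgr.h as entries
in the AES-XTS cipher_testvec template. The sketch below only shows the
general shape of such an entry with placeholder bytes; it is not one of the
vectors discussed here, and the field names are quoted from memory of that
era's testmgr and may differ slightly.

/* Illustration only: placeholder data, not a real vector. */
static const struct cipher_testvec aes_xts_tv_example[] = {
	{
		/* Two concatenated AES-128 keys, 32 bytes total. */
		.key	= "\x00\x00\x00\x00\x00\x00\x00\x00"
			  "\x00\x00\x00\x00\x00\x00\x00\x00"
			  "\x11\x11\x11\x11\x11\x11\x11\x11"
			  "\x11\x11\x11\x11\x11\x11\x11\x11",
		.klen	= 32,
		/* Sector number ("H"), little endian, zero padded to 16 bytes. */
		.iv	= "\x21\x00\x00\x00\x00\x00\x00\x00"
			  "\x00\x00\x00\x00\x00\x00\x00\x00",
		.ptext	= "\x00\x01\x02\x03\x04\x05\x06\x07"
			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
		/* Placeholder ciphertext, not a computed result. */
		.ctext	= "\x00\x00\x00\x00\x00\x00\x00\x00"
			  "\x00\x00\x00\x00\x00\x00\x00\x00",
		.len	= 16,
	},
};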
> -Original Message-
> From: Milan Broz
> Sent: Thursday, August 8, 2019 2:53 PM
> To: Pascal Van Leeuwen ; Eric Biggers
>
> Cc: Ard Biesheuvel ; linux-cry...@vger.kernel.org;
> herb...@gondor.apana.org.au; a...@redhat.com; snit...@redhat.com;
> dm-devel@redhat.com
> Subject: Re: [RFC
Gentle ping?
This feature would be pretty useful if we want to log really heavy
operations on a relatively small log device.
Thanks,
Qu
On 2019/6/19 4:03 PM, Qu Wenruo wrote:
> Current dm-log-writes will record all bios, no matter if the bios is
> METADATA (normally what we care) or is DATA
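A purely hypothetical sketch of what such filtering could look like inside
dm-log-writes, assuming a new "metadata only" table option and keying off the
REQ_META flag that filesystems set on metadata bios. This is not Qu's actual
patch; the context struct and option name below are made up for illustration.

#include <linux/bio.h>
#include <linux/blk_types.h>

/* Minimal illustrative context; the real struct log_writes_c has many more fields. */
struct log_writes_ctx {
	bool metadata_only;	/* hypothetical new table option */
};

/*
 * Hypothetical: decide whether a bio should be recorded in the log.
 * Flushes and discards are always logged; plain data writes are skipped
 * when metadata-only mode is enabled.
 */
static bool log_writes_should_log(struct log_writes_ctx *lc, struct bio *bio)
{
	if (op_is_flush(bio->bi_opf) || bio_op(bio) == REQ_OP_DISCARD)
		return true;

	if (lc->metadata_only && !(bio->bi_opf & REQ_META))
		return false;

	return true;
}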
On 08/08/2019 11:31, Pascal Van Leeuwen wrote:
>> -Original Message-
>> From: Eric Biggers
>> Sent: Thursday, August 8, 2019 10:31 AM
>> To: Pascal Van Leeuwen
>> Cc: Ard Biesheuvel ; linux-cry...@vger.kernel.org;
>> herb...@gondor.apana.org.au; a...@redhat.com; snit...@redhat.com;
>>
Hi,
On 07/08/2019 07:50, Ard Biesheuvel wrote:
> Instead of instantiating a separate cipher to perform the encryption
> needed to produce the IV, reuse the skcipher used for the block data
> and invoke it one additional time for each block to encrypt a zero
> vector and use the output as the IV.
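A sketch of what that extra invocation might look like, pieced together from
the description above; it is not necessarily the exact code in Ard's patch.
The sector's byte offset (assumed little endian here) is used as the IV while
encrypting one block of zeroes from ZERO_PAGE, and the ciphertext becomes the
per-sector IV. The output buffer is mapped through a scatterlist, so it must
not live on the stack; iv_size is assumed to be at most one AES block.

#include <crypto/skcipher.h>
#include <linux/scatterlist.h>
#include <linux/mm.h>

/*
 * Sketch only: reuse the data skcipher 'tfm' to derive the IV for one
 * sector.  'iv' (iv_size bytes, not on the stack) receives the result.
 */
static int derive_iv_with_data_cipher(struct crypto_skcipher *tfm,
				      u64 byte_offset, u8 *iv,
				      unsigned int iv_size)
{
	u8 ivbuf[16] __aligned(__alignof__(__le64)) = {};
	struct skcipher_request *req;
	struct scatterlist src, dst;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	req = skcipher_request_alloc(tfm, GFP_NOIO);
	if (!req)
		return -ENOMEM;

	/* The sector's byte offset serves as the IV of this extra call. */
	*(__le64 *)ivbuf = cpu_to_le64(byte_offset);

	/* Encrypt one block of zeroes; the ciphertext becomes the IV. */
	sg_init_one(&src, page_address(ZERO_PAGE(0)), iv_size);
	sg_init_one(&dst, iv, iv_size);
	skcipher_request_set_crypt(req, &src, &dst, iv_size, ivbuf);
	skcipher_request_set_callback(req, 0, crypto_req_done, &wait);

	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
	skcipher_request_free(req);
	return err;
}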
A deadlock with this stacktrace was observed.
The obvious problem here is that in the call chain
xfs_vm_direct_IO->__blockdev_direct_IO->do_blockdev_direct_IO->kmem_cache_alloc
we do a GFP_KERNEL allocation while we are in a filesystem driver and in a
block device driver.
This patch changes
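The patch text is cut off above, but the problem statement lends itself to a
short illustration. A hypothetical sketch of the kind of change being argued
for (not the patch itself): an allocation made while servicing I/O uses
GFP_NOIO instead of GFP_KERNEL, so direct reclaim cannot issue or wait on I/O
that this path may itself be responsible for completing. The names below are
made up; the real allocation lives in fs/direct-io.c.

#include <linux/slab.h>

/* Illustrative stand-in for the per-request structure of a direct I/O. */
struct example_dio {
	int flags;
	/* ... */
};

static struct kmem_cache *example_dio_cache;

/*
 * Hypothetical: allocate without __GFP_IO/__GFP_FS, so reclaim triggered
 * here cannot recurse into filesystems or block drivers and deadlock on
 * I/O completion.
 */
static struct example_dio *example_dio_alloc(void)
{
	return kmem_cache_alloc(example_dio_cache, GFP_NOIO);
}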
Revert the patch bd293d071ffe65e645b4d8104f9d8fe15ea13862. A proper fix
should be not to use GFP_KERNEL in the function do_blockdev_direct_IO.
Note that the patch bd293d071ffe doesn't really prevent the deadlock from
occurring - if we look at the stacktrace reported by Junxiao Bi, we see
that it
Hi
This is not a bug in dm-bufio.
> #14 [88272f5af880] kmem_cache_alloc at 811f484b
> #15 [88272f5af8d0] do_blockdev_direct_IO at 812535b3
> #16 [88272f5afb00] __blockdev_direct_IO at 81255dc3
> #17 [88272f5afb30] xfs_vm_direct_IO at
Hi James, Martin, Paolo, Ming,
Multipathing with linux-next has been broken in our CI since 20190723.
The patches fix a memleak and a severe dh/multipath functional regression.
It would be nice if we could get them to 5.4/scsi-queue and also next.
I would have preferred if such a new feature had used
This was missing from scsi_mq_ops_no_commit, introduced by linux-next commit
8930a6c20791 ("scsi: core: add support for request batching")
from Martin's scsi/5.4/scsi-queue or James' scsi/misc.
See also linux-next commit b7e9e1fb7a92 ("scsi: implement .cleanup_rq
callback") from block/for-next.
This was missing from scsi_device_from_queue() due to the introduction
of the new scsi_mq_ops_no_commit in linux-next commit
8930a6c20791 ("scsi: core: add support for request batching")
from Martin's scsi/5.4/scsi-queue or James' scsi/misc.
Only device handler code seems to call
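To make the breakage concrete: scsi_device_from_queue() decides whether a
request_queue belongs to SCSI by comparing q->mq_ops against the known ops
table, so the second table has to be accepted too. A sketch under that
assumption (not necessarily the exact upstream fix):

/*
 * Sketch: with two ops tables, the queue ownership test has to accept
 * both, otherwise device handler code relying on this helper stops
 * finding SCSI devices.
 */
struct scsi_device *scsi_device_from_queue(struct request_queue *q)
{
	struct scsi_device *sdev = NULL;

	if (q->mq_ops == &scsi_mq_ops_no_commit ||
	    q->mq_ops == &scsi_mq_ops)
		sdev = q->queuedata;

	if (!sdev || !get_device(&sdev->sdev_gendev))
		sdev = NULL;

	return sdev;
}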
> +++ b/drivers/md/dm-raid1.c
> @@ -878,12 +878,9 @@ static struct mirror_set *alloc_context(unsigned int
> nr_mirrors,
> struct dm_target *ti,
> struct dm_dirty_log *dl)
> {
> - size_t len;
> struct mirror_set
-Maier/scsi-core-fix-missing-cleanup_rq-for-SCSI-hosts-without-request-batching/20190808-052017
config: riscv-defconfig (attached as .config)
compiler: riscv64-linux-gcc (GCC) 7.4.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross