Re: [dm-devel] Thin provisioning bug in dm-thin

2020-05-04 Thread Drew Hastings
My mistake - I had been looking at an old version (and had a typo in my original message). I can see it's correct in the latest version. You can disregard this.
On Mon, May 4, 2020 at 11:54 AM Drew Hastings wrote:
> In process_create_snap_mesg, when dm_thinner_pool_create_snap fa

[dm-devel] Thin provisioning bug in dm-thin

2020-05-04 Thread Drew Hastings
In process_create_snap_mesg, when dm_thinner_pool_create_snap fails, the DMWARN is for ARGV[1] and ARGV[2], but should be for 0 and 1.
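For reference, the create_snap pool message described in Documentation/device-mapper/thin-provisioning.txt takes the new snapshot's device id followed by the origin's device id, with the message name itself in argv[0]. A minimal sketch, using a hypothetical pool device and device ids:

  # argv[0] = "create_snap", argv[1] = new snapshot dev id, argv[2] = origin dev id
  dmsetup message /dev/mapper/pool 0 "create_snap 1 0"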

[dm-devel] Unused functions in dm-thin-metadata

2020-04-05 Thread Drew Hastings
Not super important, but both dm_thin_remove_block and __remove are unused. I'm assuming this is because at some point the remove_range logic was implemented.

[dm-devel] Fix for dm-thin pool resizing

2020-03-10 Thread Drew Hastings
Expanding the data device consumes metadata space. Right now the data device is resized *before* the metadata device is. This creates a situation where you can reload the pool with a much larger metadata device and a larger data device, but it will fail the pool by running out of
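For context, a pool grow is normally driven from userspace by enlarging the underlying metadata and data devices and then reloading and resuming the pool target so dm-thin picks up both new sizes; a rough sketch, with hypothetical device names and a placeholder size:

  # the underlying metadata and data devices have already been grown
  dmsetup reload pool --table "0 <new_data_sectors> thin-pool /dev/vg/meta /dev/vg/data 128 32768 0"
  dmsetup suspend pool
  dmsetup resume pool   # dm-thin extends the pool on resume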

Re: [dm-devel] multipath - unable to use multiple active paths at once, and deprecated example in docs

2019-11-25 Thread Drew Hastings
On Mon, Nov 25, 2019 at 12:48 PM Martin Wilck wrote:
> I think you are seeing this FIXME:
> https://elixir.bootlin.com/linux/v4.19.79/source/drivers/md/dm-mpath.c#L612
> For your testing, perhaps you just remove that if(!pgpath) condition.
> Regards,
> Martin
That's correct, thanks.

[dm-devel] multipath - unable to use multiple active paths at once, and deprecated example in docs

2019-11-22 Thread Drew Hastings
My use case doesn't lend itself well to multipathd, so I'm trying to implement multipathing with device mapper directly. My table is (kernel 4.19.79):
0 1562378240 multipath 4 queue_if_no_path retain_attached_hw_handler queue_mode bio 0 1 1 queue-length 0 4 1 253:11 1 253:8 1 253:9 1 253:10 1
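For anyone reproducing this, a table like the one above can be loaded directly with dmsetup; the device name below is a placeholder, and the paths are the 253:* devices from the table:

  # features: queue_if_no_path, retain_attached_hw_handler, queue_mode bio (4 words total);
  # one priority group using the queue-length selector, four paths with one path arg each
  dmsetup create mpath0 --table "0 1562378240 multipath 4 queue_if_no_path retain_attached_hw_handler queue_mode bio 0 1 1 queue-length 0 4 1 253:11 1 253:8 1 253:9 1 253:10 1"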

[dm-devel] Possible bug in mirror target

2019-02-04 Thread Drew Hastings
Hi, I'm assuming all user space code is expected to use the handle_errors feature, so this isn't that big of a deal. I'm also using 4.19.13, which I think is more recent than the latest update to dm-raid1.c. That said, there may be a bug that causes the entire mirror to crash if there is an error
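For context, handle_errors is enabled as a trailing feature argument on the mirror table; a rough sketch with placeholder devices and sizes:

  # core log with one arg (region size 1024 sectors), two mirror legs, handle_errors enabled
  dmsetup create mirr0 --table "0 2097152 mirror core 1 1024 2 /dev/sdb 0 /dev/sdc 0 1 handle_errors"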

Re: [dm-devel] multipath target and non-request-stackable devices

2018-10-31 Thread Drew Hastings
Perfect, thank you! For what it's worth, the error did not happen with regular, locally attached NVME devices. It only occurred with NVMEoF devices, with the chelsio driver shipped with the kernel. You seemed to discuss this a bit in

[dm-devel] multipath target and non-request-stackable devices

2018-10-31 Thread Drew Hastings
Firstly, thanks for the hard work you guys are doing on the dm drivers. I'm curious to know if I correctly understand the limitations of the multipath target. I'm using kernel 4.19.0-rc5. When attempting to create a device from two NVMEs connected over nvme_rdma / nvmf, I get the following

Re: [dm-devel] Is thin provisioning still experimental?

2018-08-04 Thread Drew Hastings
Jul 23 2018 at 1:06am -0400, Drew Hastings wrote:
> I love all of the work you guys do @dm-devel . Thanks for taking the time
> to read this.
> I would like to use thin provisioning targets in production, but it's hard

[dm-devel] Is thin provisioning still experimental?

2018-07-23 Thread Drew Hastings
I love all of the work you guys do @dm-devel. Thanks for taking the time to read this. I would like to use thin provisioning targets in production, but it's hard to ignore the warning in the documentation. It seems like, with an understanding of how thin provisioning works, it should be safe to
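For concreteness, a minimal thin pool plus thin volume of the kind being considered here looks roughly like the following; device names, sizes and ids are placeholders:

  # pool: metadata dev, data dev, 64KiB block size (128 sectors), low water mark of 32768 blocks
  # (the metadata device is assumed to be freshly zeroed)
  dmsetup create pool --table "0 419430400 thin-pool /dev/vg/meta /dev/vg/data 128 32768 0"
  # create thin device id 0 inside the pool, then activate a 100GiB thin volume on it
  dmsetup message /dev/mapper/pool 0 "create_thin 0"
  dmsetup create thin0 --table "0 209715200 thin /dev/mapper/pool 0"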

[dm-devel] Device mapper for temporarily buffering writes on fast device before pushing to slower device

2018-02-20 Thread Drew Hastings
I found stochastic multi-queue and writeboost to be insufficient for this purpose. I'm wondering if anything exists that fits this description: Device mapper creates a "cache" device on a fast device (SSD/NVME, etc)... and writes to the device *always* hit the fast device. Writes are later
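For reference, the writeback-mode dm-cache (smq) setup of the sort that proved insufficient here looks roughly like this, with placeholder devices and sizes; the behaviour being asked for differs in that every write would land on the fast device unconditionally, not only when the policy promotes the block:

  # writeback mode: writes complete on the fast device, but only for blocks smq has promoted
  dmsetup create cached0 --table "0 1562378240 cache /dev/fast_meta /dev/fast /dev/slow 512 1 writeback smq 0"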