My mistake - I had been looking at an old version (and had a typo in my
original message). I can see it's correct in the latest version. You can
disregard this.
On Mon, May 4, 2020 at 11:54 AM Drew Hastings wrote:
> In process_create_snap_mesg, when dm_thinner_pool_create_snap fails, the
> DMWARN uses argv[1] and argv[2], but should use indices 0 and 1.
In process_create_snap_mesg, when dm_thinner_pool_create_snap fails, the
DMWARN uses argv[1] and argv[2], but should use indices 0 and 1.
Not super important, but both dm_thin_remove_block and __remove are unused.
I'm assuming this is because at some point the remove_range logic was
implemented.
Expanding the data device consumes metadata space. Right now the data
device is resized *before* the metadata device is. This creates a
situation where you can reload the pool with a much larger metadata device
and a larger data device, but it will fail the pool by running out of
metadata space before the metadata resize takes effect.
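To make the ordering concrete, here is a minimal sketch of growing the
metadata side first and only then the data side with dmsetup; all device
names, sizes and the block size below are hypothetical:

  # Step 1: grow only the metadata device.  Its size is not part of the
  # pool's table line, so once the device underneath it is bigger (a
  # hypothetical linear mapping here), a suspend/resume cycle is enough
  # for the pool to pick up the new size.
  dmsetup suspend pool
  dmsetup suspend pool_tmeta
  dmsetup reload pool_tmeta --table "0 8388608 linear /dev/nvme0n1 2048"
  dmsetup resume pool_tmeta
  dmsetup resume pool

  # Step 2: only now grow the data space, by reloading the pool with a
  # larger <length>, since the pool takes its data size from the target
  # length (table: <start> <length> thin-pool <metadata dev> <data dev>
  # <data block size> <low water mark>).
  dmsetup suspend pool
  dmsetup reload pool --table \
    "0 419430400 thin-pool /dev/mapper/pool_tmeta /dev/mapper/pool_tdata 128 32768"
  dmsetup resume pool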
On Mon, Nov 25, 2019 at 12:48 PM Martin Wilck wrote:
>
> I think you are seeing this FIXME:
>
> https://elixir.bootlin.com/linux/v4.19.79/source/drivers/md/dm-mpath.c#L612
>
> For your testing, perhaps you just remove that if(!pgpath) condition.
>
> Regards,
> Martin
>
That's correct, thanks.
My use case doesn't lend itself well to multipathd, so I'm trying to
implement multipathing with device mapper directly.
My table is (kernel 4.19.79):
0 1562378240 multipath 4 queue_if_no_path retain_attached_hw_handler
queue_mode bio 0 1 1 queue-length 0 4 1 253:11 1 253:8 1 253:9 1 253:10 1
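As a minimal sketch of what driving the target without multipathd looks
like (assuming the map above is named mpath0), dm-mpath can be controlled
with target messages:

  dmsetup message mpath0 0 fail_path 253:8        # administratively fail one path
  dmsetup message mpath0 0 reinstate_path 253:8   # bring it back into service
  dmsetup message mpath0 0 queue_if_no_path       # queue I/O while all paths are down
  dmsetup message mpath0 0 fail_if_no_path        # error I/O instead of queueing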
Hi,
I'm assuming all user-space code is expected to use the handle_errors
feature, so this isn't that big of a deal. I'm also using 4.19.13, which I
think is more recent than the latest update to dm-raid1.c.
That said, there may be a bug that causes the entire mirror to crash if
there is an error.
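For reference, a hypothetical mirror table with the handle_errors feature
enabled looks roughly like this (devices, length and region size are made
up; format: <start> <len> mirror <log type> <#log args> <log args...>
<#mirrors> <dev> <offset> ... [<#features> <feature...>]):

  dmsetup create mirror0 --table \
    "0 2097152 mirror core 2 1024 nosync 2 /dev/sdb 0 /dev/sdc 0 1 handle_errors"

Without the trailing "1 handle_errors", the target does no in-kernel error
handling and leaves recovery to user space.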
Perfect, thank you!
For what it's worth, the error did not happen with regular, locally
attached NVMe devices. It only occurred with NVMe-oF devices, with the
Chelsio driver shipped with the kernel. You seemed to discuss this a bit in
Firstly, thanks for the hard work you guys are doing on the dm drivers.
I'm curious to know whether I correctly understand the limitations of the
multipath target. I'm using kernel 4.19.0-rc5.
When attempting to create a device from two NVMe devices connected over
nvme_rdma / nvmf, I get the following
> > On Jul 23 2018 at 1:06am -0400, Drew Hastings wrote:
> > >
> > > I love all of the work you guys do @dm-devel. Thanks for taking the time
> > > to read this.
> > > I would like to use thin provisioning targets in production, but it's hard
I love all of the work you guys do @dm-devel. Thanks for taking the time
to read this.
I would like to use thin provisioning targets in production, but it's hard
to ignore the warning in the documentation. It seems like, with an
understanding of how thin provisioning works, it should be safe to use in
production.
I found stochastic multi-queue (the dm-cache smq policy) and writeboost to
be insufficient for this purpose. I'm wondering if anything exists that
fits this description:
Device mapper creates a "cache" device on a fast device (SSD/NVMe, etc)...
and writes to the device *always* hit the fast device. Writes are later
written back to the slow backing device.
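For contrast, here is a minimal sketch of the dm-cache/smq setup I'm
comparing against, in its default writeback mode (all device paths and
sizes are hypothetical):

  #   <start> <len> cache <metadata dev> <cache dev> <origin dev>
  #           <block size> <#feature args> <policy> <#policy args>
  dmsetup create cached0 --table \
    "0 1562378240 cache /dev/mapper/cache_meta /dev/nvme0n1p1 /dev/sdb 512 0 smq 0"

The policy, not the table, decides which blocks are promoted to the fast
device, so even in writeback mode there is no guarantee that every write
lands on it first.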