Re: [dm-devel] Potential enhancements to dm-thin v2

2022-04-14 Thread Demi Marie Obenour
On 4/12/22 05:32, Zdenek Kabelac wrote:
> Dne 12. 04. 22 v 0:30 Demi Marie Obenour napsal(a):
>> On Mon, Apr 11, 2022 at 10:16:43PM +0200, Zdenek Kabelac wrote:
>>> Dne 11. 04. 22 v 19:22 Demi Marie Obenour napsal(a):
>>>> On Mon, Apr 11, 2022 at 10:16:02AM +0200, Zdenek Kabelac wrote:
>>>>> Dne 11. 04. 22 v 0:03 Demi Marie Obenour napsal(a):
>>>>>
>>>>> Your proposal actually breaks this sequence and would move things to the
>>>>> state of 'guess at which states we are now'. (And IMHO it presents much more
>>>>> risk than the hypothetical problem with suspend from user-space - which is
>>>>> only a problem if you are using the suspended device as 'swap' or 'rootfs' -
>>>>> so there are very easy ways to orchestrate your LVs to avoid such problems.)
>>>> The intent is less “guess what states we are now” and more “It looks
>>>> like dm-thin already has the data structures needed to store some
>>>> per-thin metadata, and that could make writing a simple userspace volume
>>>> manager FAR FAR easier”.  It appears to me that the only change needed
>>>
>>> I won't spend hours explaining all the details - but running just the
>>> suspend alone may result in many different problems, where things like
>>> running the thin-pool out of data space is one of the easiest.
>>>
>>> Basically each step must be designed with a 'power-off' happening during the
>>> operation in mind. For each step you need to know what the recovery step looks
>>> like and how the lvm2 & kernel metadata could/would match together.
>> That is absolutely the case, and is in fact the reason I proposed this
>> change to begin with.  By having dm-thin store a small amount of
>> userspace-provided metadata for each thin volume, and by providing an
>> API to enumerate the thin volumes in a pool, I can store all of the
>> metadata I need in the thin pool itself.  This is much simpler than
>> having to store metadata outside of the pool.
> 
> Hi
> 
> Here is actually the fundamental problem with your proposal - our design was
> about a careful split between user-space and kernel, 'who is the owner/holder of
> information' - your proposal unfortunately does not fit the model where lvm2
> is the authoritative owner of info about devices. Note - we also tried the
> 'model' where the info is held within the target - our mdraid dm wrapper - but
> it has more troubles compared with the very clear thin logic.  So from the lvm2
> position - we do not have any plans to change this proven model.

This does not surprise me.  lvm2 already has the infrastructure to
store its own metadata and update it in a crash-safe way, so having the
kernel be able to store additional metadata would be of no benefit to
lvm2.  The intended use-case for this feature is tools that are dm-thin
specific, and which do not already have such infrastructure.

> What you are asking for is that the 'kernel' module does all the work - lvm2
> would be obtaining info from the kernel metadata - and eventually you would be
> able to command everything with the ioctl() interface, letting the complexity
> sit completely in the kernel - but as explained, our design is heading in the
> opposite direction - what can be done in user-space stays in user-space and the
> kernel does the necessary minimum, which can then be developed and traced much
> more easily.
Not at all.  I just want userspace to be able to stash some data in
each thin and retrieve it later.  The complex logic would still remain
in userspace.  That’s why I dropped the “lookup thin by blob”
functionality: it requires a new data structure in the kernel, and
userspace can achieve almost the same effect with a cache.  Qubes OS
has a persistent daemon that has exclusive ownership of the storage,
so there are no cache invalidation problems.  The existing thin
pool already has a btree that could store the blob, so no new data
structures are required on the kernel side.

>>> Combining many
>>> steps together into a single 'kernel' call just increases the already large
>>> range of errors.  So in many cases we simply favour keeping operations more
>>> 'low-level-atomic' even at a slightly higher performance price (as said - we've
>>> never seen the creation of a snapshot as a 'msec'-critical operation - as the
>>> 'suspend' with implicit flush & fsfreeze itself might be a far more expensive
>>> operation).
>> Qubes OS should never be snapshotting an in-use volume of any kind.
>> Right now, there is one case where it does so, but that is a bug, and I
>> am working on fixing it.  A future API might support snapshotting to an
>> in-use volume, but that would likely require a way to tell the VM to
>> freeze its own filesystem.
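
A freeze could be as simple as the guest running fsfreeze around the snapshot
(a minimal sketch of an assumed flow; only fsfreeze itself is an existing
util-linux command, the coordination with dom0 is hand-waved):

  # inside the VM, before dom0 snapshots the backing thin volume
  fsfreeze --freeze /
  # ... dom0 takes the snapshot here ...
  fsfreeze --unfreeze /
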
> 
> 
> Yeah - you have a very unusual use case - in fact the lvm2 goal is usually to
> support as many things as we can while devices are in use, so the user does not
> need to take them offline - which surely complicates everything a lot - also
> there was basically never any user demand to operate on offline devices in a
> very quick way - so admittedly not the focused area of development.

Re: [dm-devel] Potential enhancements to dm-thin v2

2022-04-14 Thread Demi Marie Obenour
On Wed, Apr 13, 2022 at 09:55:00AM +0200, Zdenek Kabelac wrote:
> Dne 12. 04. 22 v 16:29 David Teigland napsal(a):
> > Dne 11. 04. 22 v 0:03 Demi Marie Obenour napsal(a):
> > > For quite a while, I have wanted to write a tool to manage thin volumes
> > > that is not based on LVM.
> > 
> > On Tue, Apr 12, 2022 at 11:32:09AM +0200, Zdenek Kabelac wrote:
> > > Here is actually the fundamental problem with your proposal - our design was
> > > about a careful split between user-space and kernel, 'who is the owner/holder
> > > of information' - your proposal unfortunately does not fit the model where
> > > lvm2 is the authoritative owner of info about devices
> > 
> > The proposal is a new tool to manage dm-thin devices, not to rewrite lvm.
> > I would hope the tool is nothing at all like lvm, but rather "thinsetup"
> > in the tradition of dmsetup, cryptsetup.  I think it's a great idea and
> > have wanted such a tool for years.  I have a feeling that many have
> > already written ad hoc thinsetup-like tools, and there would be fairly
> > broad interest in it (especially if it has a proper lib api.)
> > 
> 
> 
> The problem with these 'ad-hoc' tools is their 'support' - aka how to proceed
> in case of any failure.
> 
> So while there will be no problem generating many devices in a very fast way -
> the recoverability from failure will then always be individual, based on the
> surrounding environment.

That’s why I want to stick a name and UUID in each thin device’s
metadata.  That makes creating a thin device and associating it with a
name an atomic operation, and means that if there is a failure, the
sysadmin or management toolstack knows what it needs to do to clean up.
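
For contrast, today the association is two separate steps (device names are
illustrative; "create_thin" is the documented dm-thin pool message):

  # step 1: allocate thin id 42 inside the pool's metadata
  dmsetup message /dev/mapper/vg-pool-tpool 0 "create_thin 42"
  # step 2: record "thin id 42 <-> volume name/UUID" in a userspace database
  # a crash between the two steps leaves an anonymous thin id behind

With a per-thin blob, the name/UUID record would be written in the same pool
metadata transaction as the creation, which is what makes the association atomic.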

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab




Re: [dm-devel] Potential enhancements to dm-thin v2

2022-04-13 Thread Zdenek Kabelac

Dne 12. 04. 22 v 16:29 David Teigland napsal(a):

Dne 11. 04. 22 v 0:03 Demi Marie Obenour napsal(a):

For quite a while, I have wanted to write a tool to manage thin volumes
that is not based on LVM.


On Tue, Apr 12, 2022 at 11:32:09AM +0200, Zdenek Kabelac wrote:

Here is actually the fundamental problem with your proposal - our design was
about a careful split between user-space and kernel, 'who is the owner/holder
of information' - your proposal unfortunately does not fit the model where
lvm2 is the authoritative owner of info about devices


The proposal is a new tool to manage dm-thin devices, not to rewrite lvm.
I would hope the tool is nothing at all like lvm, but rather "thinsetup"
in the tradition of dmsetup, cryptsetup.  I think it's a great idea and
have wanted such a tool for years.  I have a feeling that many have
already written ad hoc thinsetup-like tools, and there would be fairly
broad interest in it (especially if it has a proper lib api.)




The problem with these 'ad-hoc' tools is their 'support' - aka how to proceed
in case of any failure.


So while there will be no problem generating many devices in a very fast way -
the recoverability from failure will then always be individual, based on the
surrounding environment.


So it's in principle the very same case as the request for support of
managing DM devices with 'external' metadata - if there are different
constraints to match - you end up with different requirements on the tool.


If there is a pure focus on thin device management - surely a standalone tool
does this job faster.



Zdenek



Re: [dm-devel] Potential enhancements to dm-thin v2

2022-04-12 Thread David Teigland
Dne 11. 04. 22 v 0:03 Demi Marie Obenour napsal(a):
> For quite a while, I have wanted to write a tool to manage thin volumes   
> that is not based on LVM. 

On Tue, Apr 12, 2022 at 11:32:09AM +0200, Zdenek Kabelac wrote:
> Here is actually the fundamental problem with your proposal - our design was
> about a careful split between user-space and kernel, 'who is the owner/holder
> of information' - your proposal unfortunately does not fit the model where
> lvm2 is the authoritative owner of info about devices

The proposal is a new tool to manage dm-thin devices, not to rewrite lvm.
I would hope the tool is nothing at all like lvm, but rather "thinsetup"
in the tradition of dmsetup, cryptsetup.  I think it's a great idea and
have wanted such a tool for years.  I have a feeling that many have
already written ad hoc thinsetup-like tools, and there would be fairly
broad interest in it (especially if it has a proper lib api.)
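
Purely as a strawman (no such tool exists; every command and option here is
hypothetical, only the general "dmsetup/cryptsetup-style" shape is taken from
the paragraph above), such a tool might be driven like:

  thinsetup create --pool /dev/mapper/pool --name work --size 20G
  thinsetup snap   --pool /dev/mapper/pool --origin work --name work-snap1
  thinsetup list   --pool /dev/mapper/pool          # enumerate thins and names
  thinsetup remove --pool /dev/mapper/pool --name work-snap1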

Dave



Re: [dm-devel] Potential enhancements to dm-thin v2

2022-04-12 Thread Zdenek Kabelac

Dne 12. 04. 22 v 0:30 Demi Marie Obenour napsal(a):

On Mon, Apr 11, 2022 at 10:16:43PM +0200, Zdenek Kabelac wrote:

Dne 11. 04. 22 v 19:22 Demi Marie Obenour napsal(a):

On Mon, Apr 11, 2022 at 10:16:02AM +0200, Zdenek Kabelac wrote:

Dne 11. 04. 22 v 0:03 Demi Marie Obenour napsal(a):

Your proposal actually breaks this sequence and would move things to the
state of 'guess at which states we are now'. (And IMHO it presents much more
risk than the hypothetical problem with suspend from user-space - which is only a
problem if you are using the suspended device as 'swap' or 'rootfs' - so there
are very easy ways to orchestrate your LVs to avoid such problems.)

The intent is less “guess what states we are now” and more “It looks
like dm-thin already has the data structures needed to store some
per-thin metadata, and that could make writing a simple userspace volume
manager FAR FAR easier”.  It appears to me that the only change needed


I won't spend hours explaining all the details - but running just the
suspend alone may result in many different problems, where things like
running the thin-pool out of data space is one of the easiest.

Basically each step must be designed with a 'power-off' happening during the
operation in mind. For each step you need to know what the recovery step looks like
and how the lvm2 & kernel metadata could/would match together.

That is absolutely the case, and is in fact the reason I proposed this
change to begin with.  By having dm-thin store a small amount of
userspace-provided metadata for each thin volume, and by providing an
API to enumerate the thin volumes in a pool, I can store all of the
metadata I need in the thin pool itself.  This is much simpler than
having to store metadata outside of the pool.


Hi

Here is actually the fundamental problem with your proposal - our design was
about a careful split between user-space and kernel, 'who is the owner/holder of
information' - your proposal unfortunately does not fit the model where lvm2
is the authoritative owner of info about devices. Note - we also tried the
'model' where the info is held within the target - our mdraid dm wrapper - but it
has more troubles compared with the very clear thin logic.  So from the lvm2
position - we do not have any plans to change this proven model.


What you are asking for is that the 'kernel' module does all the work - lvm2
would be obtaining info from the kernel metadata - and eventually you would be
able to command everything with the ioctl() interface, letting the complexity
sit completely in the kernel - but as explained, our design is heading in the
opposite direction - what can be done in user-space stays in user-space and the
kernel does the necessary minimum, which can then be developed and traced much
more easily.



Combining many
steps together into a single 'kernel' call just increases the already large
range of errors.  So in many cases we simply favour keeping operations more
'low-level-atomic' even at a slightly higher performance price (as said - we've
never seen the creation of a snapshot as a 'msec'-critical operation - as the
'suspend' with implicit flush & fsfreeze itself might be a far more expensive
operation).

Qubes OS should never be snapshotting an in-use volume of any kind.
Right now, there is one case where it does so, but that is a bug, and I
am working on fixing it.  A future API might support snapshotting to an
in-use volume, but that would likely require a way to tell the VM to
freeze its own filesystem.



Yeah - you have a very unusual use case - in fact the lvm2 goal is usually to
support as many things as we can while devices are in use, so the user does not
need to take them offline - which surely complicates everything a lot - also
there was basically never any user demand to operate on offline devices in a
very quick way - so admittedly not the focused area of development.



But IMHO creation and removal of thousands of devices in a very short period
of time rather suggests there is something sub-optimal in your original
software design, as I'm really having a hard time imagining why you would need
this?

There very well could be (suggestions for improvement welcome).


If you wish to operate lots of devices - simply keep them created and ready
- and eventually blkdiscard them for the next device reuse.

That would work for volatile volumes, but those are only about 1/3 of
the volumes in a Qubes OS system.  The other 2/3 are writable snapshots.
Also, Qubes OS has found blkdiscard on thins to be a performance
problem.  It used to lock up entire pools until Qubes OS moved to doing
the blkdiscard in chunks.

Always make sure you use recent Linux kernels.

Should the 5.16 series be recent enough?


Blkdiscard should not differ from lvremove too much - also experiment with how
'lvchange --discards  passdown|nopassdown poolLV' works.

I believe this was with passdown on, which is the default in Qubes OS.
The bug was tracked down by Jinoh Kang in
https://github.com/QubesOS/qubes-issues/issues/5426#issuecomment-761595524
and found to be due to dm-thin deleting B-tree nodes one at a time,
causing large amounts of time to be wasted on B-tree rebalancing and node
locking.
Re: [dm-devel] Potential enhancements to dm-thin v2

2022-04-12 Thread Demi Marie Obenour
On Mon, Apr 11, 2022 at 10:16:02AM +0200, Zdenek Kabelac wrote:
> Dne 11. 04. 22 v 0:03 Demi Marie Obenour napsal(a):
> > For quite a while, I have wanted to write a tool to manage thin volumes
> > that is not based on LVM.  The main thing holding me back is that the
> > current dm-thin interface is extremely error-prone.  The only per-thin
> > metadata stored by the kernel is a 24-bit thin ID, and userspace must
> > take great care to keep that ID in sync with its own metadata.  Failure
> > to do so results in data loss, data corruption, or even security
> > vulnerabilities.  Furthermore, having to suspend a thin volume before
> > one can take a snapshot of it creates a critical section during which
> > userspace must be very careful, as I/O or a crash can lead to deadlock.
> > I believe both of these problems can be solved without overly
> > complicating the kernel implementation.
> 
> 
> Hi
> 
> These things come from the initial design of the whole DM world - where there
> is a split of complexity between kernel & user-space. Projects like
> btrfs and ZFS decided to go the other way and create a monolithic 'all-in-one'
> solution, where they avoid some problems related to communication between
> kernel & user-space - but at the price of having pretty complicated and
> very hard to develop & debug kernel code.
> 
> So let me explain one of the reasons we have this logic with suspend; it is
> this basic principle:
> 
> write new lvm metadata ->  suspend (with all table preloads) ->  commit new
> lvm2 metadata -> resume
> 
> with this we ensure the user space maintains the only valid 'view' of the metadata.
> 
> Your proposal actually breaks this sequence and would move things to the
> state of 'guess at which states we are now'. (And IMHO it presents much more
> risk than the hypothetical problem with suspend from user-space - which is only a
> problem if you are using the suspended device as 'swap' or 'rootfs' - so there
> are very easy ways to orchestrate your LVs to avoid such problems.)

The intent is less “guess what states we are now” and more “It looks
like dm-thin already has the data structures needed to store some
per-thin metadata, and that could make writing a simple userspace volume
manager FAR FAR easier”.  It appears to me that the only change needed
would be reserving some space (amount fixed at pool creation) after
‘struct disk_device_details’ for use by userspace, and providing a way
for userspace to enumerate the thin devices on a volume and to set and
retrieve that extra data.  Suspend isn’t actually that big of a problem,
since new Qubes OS 4.1 (and later) installs use one pool for the root
filesystem and a separate one for VMs.  As a userspace writer, the
scariest part of managing thin volumes is actually making sure I don’t
lose track of which thin ID corresponds to which volume name.  The
*only* metadata Qubes OS would need would be a per-thin name, size, thin
ID, and possibly UUID.  All of those could be put in that extra space.
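
To make the shape of that concrete: a purely hypothetical sketch of the
interface (none of these pool messages exist in dm-thin today; the names,
the 64-byte figure, and the retrieval path are invented for illustration):

  # reserve e.g. 64 bytes of userspace data per thin at pool creation time
  dmsetup message /dev/mapper/pool 0 "set_userdata 42 <hex-encoded blob>"   # hypothetical
  dmsetup message /dev/mapper/pool 0 "get_userdata 42"                      # hypothetical
  dmsetup message /dev/mapper/pool 0 "list_thins"                           # hypothetical

The blob would hold exactly the name/size/thin-ID/UUID record described above.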

> Basically you are essentially wanting to move the whole management into the kernel
> for some not-so-great speed gains (relative to the rest of the running system),
> and you can certainly do that by writing your own kernel module to manage
> your rather unique software problem.

From a storage perspective, my problem is basically the same as Docker’s
devicemapper driver.  Unlike Docker, though, Qubes OS must work at the
block level; it can’t work at the filesystem level.  So overlayfs and
friends aren’t options.

> But IMHO creation and removal of thousands of devices in a very short period
> of time rather suggests there is something sub-optimal in your original
> software design, as I'm really having a hard time imagining why you would need
> this?

There very well could be (suggestions for improvement welcome).

> If you wish to operate lots of devices - simply keep them created and ready
> - and eventually blkdiscard them for the next device reuse.

That would work for volatile volumes, but those are only about 1/3 of
the volumes in a Qubes OS system.  The other 2/3 are writable snapshots.
Also, Qubes OS has found blkdiscard on thins to be a performance
problem.  It used to lock up entire pools until Qubes OS moved to doing
the blkdiscard in chunks.
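
A rough sketch of the chunked approach (illustrative; blkdiscard's
--offset/--length options are real, the 1 GiB step is an arbitrary choice):

  dev=/dev/mapper/vg-volume
  size=$(blockdev --getsize64 "$dev")
  step=$((1 << 30))                        # discard 1 GiB per call
  for ((off = 0; off < size; off += step)); do
      len=$(( size - off < step ? size - off : step ))
      blkdiscard --offset "$off" --length "$len" "$dev"
  done

Bounding each discard keeps the pool's metadata work per operation small, so
other thin volumes in the pool are not starved while the discard runs.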

> I'm also unsure where any special need to instantiate that many snapshots would
> arise from - if there is some valid & logical purpose, lvm2 could maybe have an
> extended user-space API to create multiple snapshots at once (i.e. create 10
> snapshots with name-%d of a single thinLV)

This would be amazing, and Qubes OS should be able to use it.  That
said, Qubes OS would prefer to be able to choose the name of each volume
separately.  Could there be a more general batching operation?  Just
supporting ‘lvm lvcreate’ and ‘lvm lvs’ would be great, but support for
‘lvm lvremove’, ‘lvm lvrename’, ‘lvm lvextend’, and ‘lvm lvchange
--activate=y’ as well would be even better.
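
For reference, the kind of per-volume sequence that would benefit from batching
looks roughly like this today (VG/LV names are illustrative):

  lvm lvcreate --snapshot --name vm-work-snap vg/vm-work-private
  # thin snapshots carry the activation-skip flag by default, hence -K
  lvm lvchange --activate=y -K vg/vm-work-snap
  lvm lvextend --size +2G vg/vm-work-snap
  lvm lvs --noheadings -o lv_name,thin_id,lv_size vg

Each invocation pays its own metadata commit and udev sync, which is what makes
a grouped form attractive.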

> Not to mention that operating that many thin volumes from a single thin-pool is
> also nothing close to the high-performance goal you are trying to reach...

Re: [dm-devel] Potential enhancements to dm-thin v2

2022-04-12 Thread Demi Marie Obenour
On Mon, Apr 11, 2022 at 10:16:43PM +0200, Zdenek Kabelac wrote:
> Dne 11. 04. 22 v 19:22 Demi Marie Obenour napsal(a):
> > On Mon, Apr 11, 2022 at 10:16:02AM +0200, Zdenek Kabelac wrote:
> > > Dne 11. 04. 22 v 0:03 Demi Marie Obenour napsal(a):
> > > 
> > > Your proposal actually breaks this sequence and would move things to the
> > > state of 'guess at which states we are now'. (And IMHO it presents much more
> > > risk than the hypothetical problem with suspend from user-space - which is
> > > only a problem if you are using the suspended device as 'swap' or 'rootfs' -
> > > so there are very easy ways to orchestrate your LVs to avoid such problems.)
> > The intent is less “guess what states we are now” and more “It looks
> > like dm-thin already has the data structures needed to store some
> > per-thin metadata, and that could make writing a simple userspace volume
> > manager FAR FAR easier”.  It appears to me that the only change needed
> 
> 
> I won't spend hours explaining all the details - but running just the
> suspend alone may result in many different problems, where things like
> running the thin-pool out of data space is one of the easiest.
> 
> Basically each step must be designed with a 'power-off' happening during the
> operation in mind. For each step you need to know what the recovery step looks
> like and how the lvm2 & kernel metadata could/would match together.

That is absolutely the case, and is in fact the reason I proposed this
change to begin with.  By having dm-thin store a small amount of
userspace-provided metadata for each thin volume, and by providing an
API to enumerate the thin volumes in a pool, I can store all of the
metadata I need in the thin pool itself.  This is much simpler than
having to store metadata outside of the pool.

> Combining many
> steps together into a single 'kernel' call just increases the already large
> range of errors.  So in many cases we simply favour keeping operations more
> 'low-level-atomic' even at a slightly higher performance price (as said - we've
> never seen the creation of a snapshot as a 'msec'-critical operation - as the
> 'suspend' with implicit flush & fsfreeze itself might be a far more expensive
> operation).

Qubes OS should never be snapshotting an in-use volume of any kind.
Right now, there is one case where it does so, but that is a bug, and I
am working on fixing it.  A future API might support snapshotting to an
in-use volume, but that would likely require a way to tell the VM to
freeze its own filesystem.

> > > But IMHO creation and removal of thousands of devices in a very short period
> > > of time rather suggests there is something sub-optimal in your original
> > > software design, as I'm really having a hard time imagining why you would
> > > need this?
> > There very well could be (suggestions for improvement welcome).
> > 
> > > If you wish to operate lots of devices - simply keep them created and
> > > ready - and eventually blkdiscard them for the next device reuse.
> > That would work for volatile volumes, but those are only about 1/3 of
> > the volumes in a Qubes OS system.  The other 2/3 are writable snapshots.
> > Also, Qubes OS has found blkdiscard on thins to be a performance
> > problem.  It used to lock up entire pools until Qubes OS moved to doing
> > the blkdiscard in chunks.
> 
> Always make sure you use recent Linux kernels.

Should the 5.16 series be recent enough?

> Blkdiscard should not differ from lvremove too much - also experiment with how
> 'lvchange --discards  passdown|nopassdown poolLV' works.

I believe this was with passdown on, which is the default in Qubes OS.
The bug was tracked down by Jinoh Kang in
https://github.com/QubesOS/qubes-issues/issues/5426#issuecomment-761595524
and found to be due to dm-thin deleting B-tree nodes one at a time,
causing large amounts of time to be wasted on btree rebalancing and node
locking.

> > > I'm also unsure where any special need to instantiate that many snapshots
> > > would arise from - if there is some valid & logical purpose, lvm2 could
> > > maybe have an extended user-space API to create multiple snapshots at once
> > > (i.e. create 10 snapshots with name-%d of a single thinLV)
> > This would be amazing, and Qubes OS should be able to use it.  That
> > said, Qubes OS would prefer to be able to choose the name of each volume
> > separately.  Could there be a more general batching operation?  Just
> > supporting ‘lvm lvcreate’ and ‘lvm lvs’ would be great, but support for
> > ‘lvm lvremove’, ‘lvm lvrename’, ‘lvm lvextend’, and ‘lvm lvchange
> > --activate=y’ as well would be even better.
> 
> There is a kind of 'hidden' plan inside the command-line processing to allow
> 'grouped' processing.
> 
> lvcreate --snapshot  --name lv1  --snapshot --name lv2 vg/origin
> 
> However there is currently no manpower to proceed further on this part, as we
> have other parts of the code needing enhancements.
> 
> But we may put this on our TODO plans...

That would be 

Re: [dm-devel] Potential enhancements to dm-thin v2

2022-04-11 Thread Zdenek Kabelac

Dne 11. 04. 22 v 19:22 Demi Marie Obenour napsal(a):

On Mon, Apr 11, 2022 at 10:16:02AM +0200, Zdenek Kabelac wrote:

Dne 11. 04. 22 v 0:03 Demi Marie Obenour napsal(a):

Your proposal actually breaks this sequence and would move things to the
state of 'guess at which states we are now'. (And IMHO it presents much more
risk than the hypothetical problem with suspend from user-space - which is only a
problem if you are using the suspended device as 'swap' or 'rootfs' - so there
are very easy ways to orchestrate your LVs to avoid such problems.)

The intent is less “guess what states we are now” and more “It looks
like dm-thin already has the data structures needed to store some
per-thin metadata, and that could make writing a simple userspace volume
manager FAR FAR easier”.  It appears to me that the only change needed



I won't spend hours explaining all the details - but running just the suspend
alone may result in many different problems, where things like running the
thin-pool out of data space is one of the easiest.


Basically each step must be designed with a 'power-off' happening during the
operation in mind. For each step you need to know what the recovery step looks like
and how the lvm2 & kernel metadata could/would match together.  Combining many steps
together into a single 'kernel' call just increases the already large range of
errors.  So in many cases we simply favour keeping operations more
'low-level-atomic' even at a slightly higher performance price (as said - we've
never seen the creation of a snapshot as a 'msec'-critical operation - as the
'suspend' with implicit flush & fsfreeze itself might be a far more expensive
operation).





But IMHO creation and removal of thousands of devices in a very short period
of time rather suggests there is something sub-optimal in your original
software design, as I'm really having a hard time imagining why you would need
this?

There very well could be (suggestions for improvement welcome).


If you wish to operate lots of devices - simply keep them created and ready
- and eventually blkdiscard them for the next device reuse.

That would work for volatile volumes, but those are only about 1/3 of
the volumes in a Qubes OS system.  The other 2/3 are writable snapshots.
Also, Qubes OS has found blkdiscard on thins to be a performance
problem.  It used to lock up entire pools until Qubes OS moved to doing
the blkdiscard in chunks.


Always make sure you use recent Linux kernels.

Blkdiscard should not differ from lvremove too much - also experiment with how
'lvchange --discards  passdown|nopassdown poolLV' works.




I'm also unsure where any special need to instantiate that many snapshots would
arise from - if there is some valid & logical purpose, lvm2 could maybe have an
extended user-space API to create multiple snapshots at once (i.e. create 10
snapshots with name-%d of a single thinLV)

This would be amazing, and Qubes OS should be able to use it.  That
said, Qubes OS would prefer to be able to choose the name of each volume
separately.  Could there be a more general batching operation?  Just
supporting ‘lvm lvcreate’ and ‘lvm lvs’ would be great, but support for
‘lvm lvremove’, ‘lvm lvrename’, ‘lvm lvextend’, and ‘lvm lvchange
--activate=y’ as well would be even better.


There is a kind of 'hidden' plan inside the command-line processing to allow
'grouped' processing.


lvcreate --snapshot  --name lv1  --snapshot --name lv2 vg/origin

However there is currently no manpower to proceed further on this part, as we
have other parts of the code needing enhancements.


But we may put this on our TODO plans...


Not to mention that operating that many thin volumes from a single thin-pool is
also nothing close to the high-performance goal you are trying to reach...

Would you mind explaining?  My understanding, and the basis of
essentially all my feature requests in this area, was that virtually all
of the cost of LVM is the userspace metadata operations, udev syncing,
and device scanning.  I have been assuming that the kernel does not have
performance problems with large numbers of thin volumes.



The main idea behind the comment is that when there is increased disk usage,
the manipulation of thin-pool metadata and the locking will soon start to be a
considerable performance problem.


So while it's easy to have 1000 active thinLVs from a single thin-pool that are
UNUSED, the situation is dramatically different when those LVs are under some
heavy load.  There you should keep the number of active thinLVs down to low tens
of LVs, especially if you are performance oriented.  Lighter usage, less
provisioning, and especially a bigger block size all improve this.
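
If a bigger block size helps a given workload (a judgment call; 64k is the usual
lvm2 default), the pool chunk size has to be chosen at creation time, for
example:

  lvcreate --type thin-pool --size 100G --chunksize 1m --name pool0 vg
  lvs -o lv_name,chunk_size vg/pool0     # confirm the chunk size in use

The chunk size is fixed for the life of the pool, so it cannot be tuned later.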





Right now, my machine has 334 active thin volumes, split between one
pool on an NVMe drive and one on a spinning hard drive.  The pool on an
NVMe drive has 312 active thin volumes, of which I believe 64 are in use.
Are these numbers high enough to cause significant performance
penalties for dm-thin v1, and would they cause problems for dm-thin v2?

Re: [dm-devel] Potential enhancements to dm-thin v2

2022-04-11 Thread Zdenek Kabelac

Dne 11. 04. 22 v 0:03 Demi Marie Obenour napsal(a):

For quite a while, I have wanted to write a tool to manage thin volumes
that is not based on LVM.  The main thing holding me back is that the
current dm-thin interface is extremely error-prone.  The only per-thin
metadata stored by the kernel is a 24-bit thin ID, and userspace must
take great care to keep that ID in sync with its own metadata.  Failure
to do so results in data loss, data corruption, or even security
vulnerabilities.  Furthermore, having to suspend a thin volume before
one can take a snapshot of it creates a critical section during which
userspace must be very careful, as I/O or a crash can lead to deadlock.
I believe both of these problems can be solved without overly
complicating the kernel implementation.



Hi

These things come from the initial design of the whole DM world - where there is
a split of complexity between kernel & user-space. Projects like btrfs and
ZFS decided to go the other way and create a monolithic 'all-in-one'
solution, where they avoid some problems related to communication between
kernel & user-space - but at the price of having pretty complicated and very
hard to develop & debug kernel code.


So let me explain one of the reasons we have this logic with suspend; it is this
basic principle:


write new lvm metadata ->  suspend (with all table preloads) ->  commit new
lvm2 metadata -> resume


with this we ensure the user space maintains the only valid 'view' of the metadata.
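
At the device-mapper level, the step this sequence wraps for a thin snapshot is
the documented suspend / message / resume dance (device names here are
illustrative, the messages are from the kernel's thin-provisioning
documentation):

  dmsetup suspend /dev/mapper/thin                     # quiesce the origin thinLV
  dmsetup message /dev/mapper/pool 0 "create_snap 1 0" # snapshot dev_id 0 as dev_id 1
  dmsetup resume /dev/mapper/thin                      # resume I/O on the origin

The window between suspend and resume is the critical section the original mail
worries about.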

Your proposal actually breaks this sequence and would move things to the state
of 'guess at which states we are now'. (And IMHO it presents much more risk than
the hypothetical problem with suspend from user-space - which is only a problem if
you are using the suspended device as 'swap' or 'rootfs' - so there are very easy
ways to orchestrate your LVs to avoid such problems.)


Basically you are essentially wanting to move the whole management into the kernel
for some not-so-great speed gains (relative to the rest of the running system), and
you can certainly do that by writing your own kernel module to manage your
rather unique software problem.


But IMHO creation and removal of thousands of devices in a very short period of
time rather suggests there is something sub-optimal in your original software
design, as I'm really having a hard time imagining why you would need this?


If you wish to operate lots of devices - simply keep them created and ready -
and eventually blkdiscard them for the next device reuse.


I'm also unsure where any special need to instantiate that many snapshots would
arise from - if there is some valid & logical purpose, lvm2 could maybe have an
extended user-space API to create multiple snapshots at once (i.e. create 10
snapshots with name-%d of a single thinLV)


Not to mention that operating that many thin volumes from a single thin-pool is
also nothing close to the high-performance goal you are trying to reach...


Regards

Zdenek
