SJS wrote:
> begin quoting James G. Sack (jim) as of Mon, Dec 10, 2007 at 04:26:06PM
> -0800:
>> Tracy R Reed wrote:
>>> James G. Sack (jim) wrote:
>>>> Whether that's right or not, it's still convenient to call the resulting
>>>> capabilities LVM. Now, it strikes me that the unique contribution by LVM
>>>> is snapshot and data migration (pvmove). That is, could not the
>>>> re-allocation stuff be done outside of LVM? Though I suppose, perhaps
>>>> not as dynamically, eh?
> 
> Are those two contributions really unique?

Don't really know. They are not even quite "unrelated to each other", I
suppose. Where else do we see snapshot or this kind of migration?

 VMware?
 Xen?
 something similar in other OSes?
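For concreteness, the two LVM features in question boil down to a couple of commands. A minimal sketch, with hypothetical names (volume group "vg0", LV "home", disks /dev/sdb1 and /dev/sdc1) standing in for a real setup:

```shell
# Take a 1 GiB copy-on-write snapshot of an existing logical volume:
lvcreate --snapshot --size 1G --name home_snap /dev/vg0/home

# Migrate all extents off a physical volume (e.g. before retiring a disk),
# while the volume stays online:
pvmove /dev/sdb1 /dev/sdc1

# Then drop the emptied PV from the volume group:
vgreduce vg0 /dev/sdb1
```

Both run against mounted, in-use volumes, which is the part that's hard to replicate outside LVM without similar block-level indirection.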

>.. 
>>> Oddly enough I have never really made use of those features. I have
>>> played with them of course but 99% of my LVM use is increasing the size
>>> of volumes. This is because I usually don't allocate all of my disk
>>> space. I usually only allocate what I need and then leave the rest to
>>> expand into later. This is much easier than shrinking volumes and then
>>> expanding.
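That grow-on-demand workflow is just two commands. A sketch assuming a volume group "vg0" with free extents, an LV "data", and ext4 on top (names hypothetical):

```shell
# Grow the LV by 20 GiB out of the unallocated space in the VG:
lvextend --size +20G /dev/vg0/data

# Grow the filesystem to fill the enlarged LV (ext4 supports this online):
resize2fs /dev/vg0/data
```

Newer lvextend can do both steps at once with the -r/--resizefs flag. Shrinking, by contrast, means unmounting, shrinking the filesystem first, then the LV, in exactly the right order -- hence the under-allocate-and-grow habit.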
> 
> This strikes me as a superb solution for a single-disk machine.
> 
> Especially with today's monster disks.
> 
>>>> Am I talking any sense, here? Is removing indirection/complication worth
>>>> the pain?
>>> What do you expect to gain by it?
>> Perhaps I should have used virtualization instead of indirection, and
>> it's all handwaving, of course, but I was thinking that removing a layer
>> could improve performance, mainly latency, I suppose. No less
>> important, I was thinking that there might be simplification leading to
>> better maintainability and fewer places for bugs to hide.
>  
> Aren't you talking about ZFS now? ;-)

Maybe? I know I've seen gripes about them short-circuiting the
"layering". It's sometimes useful to reinvent the wheel, no?

From Wikipedia:
"""
Storage pools

Unlike traditional file systems, which reside on single devices and thus
require a volume manager to use more than one device, ZFS filesystems
are built on top of virtual storage pools called zpools. A zpool is
constructed of virtual devices (vdevs), which are themselves constructed
of block devices: files, hard drive partitions, or entire drives, with
the last being the recommended usage.[6] Block devices within a vdev may
be configured in different ways, depending on needs and space available:
non-redundantly (similar to RAID 0), as a mirror (RAID 1) of two or more
devices, as a RAID-Z group of two or more devices, or as a RAID-Z2 group
of three or more devices.[7] The storage capacity of all vdevs is
available to all of the file system instances in the zpool.

A quota can be set to limit the amount of space a file system instance
can occupy, and a reservation can be set to guarantee that space will be
available to a file system instance.
"""

Hmmm, maybe that's had a subconscious impact on my question?

Regards,
..jim


-- 
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-list
