Re: Vinum deprecation for FreeBSD 14 - are there any remaining Vinum users?

2021-04-18 Thread Edward Sanford Sutton, III
  Been years since I first used Vinum (before the g transition?) for
RAID5 and had to repair a failure where one of the 3 disks died and it
swapped one or two remaining disks to different positions within that
RAID5 sequence. I was able to fix it with a text editor and dd if I recall.
  I had ZFS on an unstable system corrupt a pool to the point that it
caused a panic during scrub, which would then start again on reboot. Had
another pool corrupted when a system crashed due to a hard drive failure
in a mirrored configuration.
  I remember liking the flexibility of how any storage could be combined,
but found it confusing to have to choose between that and the GEOM
providers as they came along, though the GEOM providers were not as
complete the last time I compared them.
  Unless I'm mistaken, ZFS is only a viable replacement for either
when it is on a system with enough resources to make it workable, and then
you have to decide how much of those resources you want its features to use.
It has been a battle to use when dealing with high total storage space
utilization, many nonsequential writes of single larger files, and
random writes+deletes of many small files. Trying to tweak performance
to minimize the impact then led me down the route of making a system that
was easy to crash/freeze.
  It wasn't flawless when I worked with it, but it did its job while it
was running. If alternatives with good
reliability/development/maintenance/performance exist, then I can
survive its disappearance, but it was a nice option to have.
Edward Sanford Sutton, III





Re: Vinum deprecation for FreeBSD 14 - are there any remaining Vinum users?

2021-04-09 Thread Scott Bennett via freebsd-stable
Eugene Grosbein  wrote:

> 07.04.2021 12:49, Scott Bennett via freebsd-stable wrote:
>
> >  At least w.r.t. gvinum's raid5, I can attest that the kernel panics
> > are real.  Before settling on ZFS raidz2 for my largest storage pool, I
> > experimented with gstripe(8), gmirror(8), graid3(8), and graid5(8) (from
> > sysutils/graid5).  All worked reasonably well, except for one operation,
> > namely, "stop".  Most/all such devices cannot actually be stopped because
> > a stopped device does not *stay* stopped.  As soon as the GEOM device
> > node is destroyed, all disks are retasted, their labels, if any, are
> > recognized, and their corresponding device nodes are recreated and placed
> > back on line. :-(  All of this happens too quickly for even a series of
> > commands entered on one line to be able to unload the kernel module for
> > the device node type in question, so there is no practical way to stop
> > such a device once it has been started.
>
> In fact, you can disable re-tasting with sysctl kern.geom.notaste=1,
> stop a GEOM, clear its labels, and re-enable tasting by setting
> kern.geom.notaste=0.

 Thank you for this valuable, but undocumented, workaround!  However, it
serves to demonstrate the bugs in gstripe(8), gmirror(8), graid3(8), and
graid5(8), and perhaps a few others: either the commands themselves do
not behave as advertised in their respective man pages, or the man pages
do not correctly document the commands' actual behavior.
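
 Spelled out, the sequence amounts to something like the following
(untested; the gmirror device and member names are hypothetical):

   # sysctl kern.geom.notaste=1    # suppress re-tasting of providers
   # gmirror stop gm0              # now the device actually stays stopped
   # gmirror clear ada1 ada2       # erase the on-disk gmirror metadata
   # sysctl kern.geom.notaste=0    # re-enable tasting
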
>
> >  A special note is needed here regarding gcache(8) and graid3(8).  The
> > documentation of gcache parameters for sector size for physical devices
> > and gcache logical devices is very unclear, such that a user must have the
> > device nodes and space on them available to create test cases and do so,
> > whereas a properly documented gcache(8) would obviate the need to set up
> > such experiments.  There is similar lack of clarity in various size
> > specifications for blocks, sectors, records, etc. in many of the man pages
> > for native GEOM commands.
>
> I found gcache(8) very nice at first; it really boosts UFS performance
> provided you have extra RAM to dedicate to its cache. gcache can be stacked
> with gmirror etc., but I found it guilty of some obscure UFS-related panics.
> It seems there were races or something.
> No data loss, though, as it is intended to be transparent for writing.

 There are other, also undocumented, problems.  For example, I played with
gcache(8) for a short time as a method of dividing a ZFS pool into two extents
on a drive in order to place a frequently accessed partition between them.  It
worked nicely for a while, but the first time that gcache(8) choked, it made a
real mess of the ZFS pool's copy on that drive.  As a result I immediately
abandoned that use of gcache(8).
 gcache(8) uses two poorly defined sysctl values, kern.geom.cache.used_hi
and kern.geom.cache.used_lo.  Its man page shows them with default values, but
neglects to mention whether they are enforced limits or merely sysctl variables
that report current usage or high and low watermarks.
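
 At minimum they can be inspected the usual way, though without better
documentation it is unclear what effect, if any, setting them would have:

   # sysctl kern.geom.cache.used_hi kern.geom.cache.used_lo
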
>
> I was forced to stop using gcache for sake of stability and it's a shame.
> For example, dump(8) speed-up due to gcache was 2x at least with big cache
> comparing to dump -C32 without gcache.
>
 I used it to make all accesses to a graid3(8) set of partitions work with
64 KB and 32 KB block sizes for UFS2 efficiency.  That use
worked very nicely, but it took some experimentation to figure out how to do it
because the man page is so ambiguous about the gcache command's options and
arguments.
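
 What I ended up with was along the following lines (the device and label
names are made up for illustration, and the exact option spellings should be
checked against gcache(8) rather than trusted from my memory):

   # graid3 label -v gr0 ada1 ada2 ada3     # 3-disk RAID3 -> /dev/raid3/gr0
   # gcache label -b 65536 gc0 raid3/gr0    # 64 KB cache blocks -> /dev/cache/gc0
   # newfs -b 65536 -f 8192 /dev/cache/gc0  # UFS2 with matching block size
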
 A similar complaint could be leveled at the man pages for gstripe(8),
graid3(8), and graid5(8) w.r.t. their undocumented definitions of stripe size,
sector size, and block size.  At present, without reading the command and kernel
source code for each or experimenting extensively, it is difficult to understand
what the commands' options and arguments will do and which combinations of their
numerical values can be valid and accepted.


  Scott Bennett, Comm. ASMELG, CFIAG
**
* Internet:   bennett at sdf.org   *xor*   bennett at freeshell.org  *
**
* "A well regulated and disciplined militia, is at all times a good  *
* objection to the introduction of that bane of all free governments *
* -- a standing army."   *
*-- Gov. John Hancock, New York Journal, 28 January 1790 *
**


Re: Vinum deprecation for FreeBSD 14 - are there any remaining Vinum users?

2021-04-06 Thread Eugene Grosbein
07.04.2021 12:49, Scott Bennett via freebsd-stable wrote:

>  At least w.r.t. gvinum's raid5, I can attest that the kernel panics
> are real.  Before settling on ZFS raidz2 for my largest storage pool, I
> experimented with gstripe(8), gmirror(8), graid3(8), and graid5(8) (from
> sysutils/graid5).  All worked reasonably well, except for one operation,
> namely, "stop".  Most/all such devices cannot actually be stopped because
> a stopped device does not *stay* stopped.  As soon as the GEOM device
> node is destroyed, all disks are retasted, their labels, if any, are
> recognized, and their corresponding device nodes are recreated and placed
> back on line. :-(  All of this happens too quickly for even a series of
> commands entered on one line to be able to unload the kernel module for
> the device node type in question, so there is no practical way to stop
> such a device once it has been started.

In fact, you can disable re-tasting with sysctl kern.geom.notaste=1,
stop a GEOM, clear its labels, and re-enable tasting by setting
kern.geom.notaste=0.

>  A special note is needed here regarding gcache(8) and graid3(8).  The
> documentation of gcache parameters for sector size for physical devices
> and gcache logical devices is very unclear, such that a user must have the
> device nodes and space on them available to create test cases and do so,
> whereas a properly documented gcache(8) would obviate the need to set up
> such experiments.  There is similar lack of clarity in various size
> specifications for blocks, sectors, records, etc. in many of the man pages
> for native GEOM commands.

I found gcache(8) very nice at first; it really boosts UFS performance provided
you have extra RAM to dedicate to its cache. gcache can be stacked with gmirror
etc., but I found it guilty of some obscure UFS-related panics. It seems there
were races or something.
No data loss, though, as it is intended to be transparent for writing.

I was forced to stop using gcache for the sake of stability, and it's a shame.
For example, dump(8) speed-up due to gcache was 2x at least with a big cache,
compared to dump -C32 without gcache.
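
For reference, the baseline invocation was of this sort (paths hypothetical):

   # dump -0a -C32 -f /backup/usr.dump /usr  # -C32: 32 MB read cache in dump itself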



Re: Vinum deprecation for FreeBSD 14 - are there any remaining Vinum users?

2021-04-06 Thread Scott Bennett via freebsd-stable
Ed,
 On Thu, 25 Mar 2021 13:25:44 -0400 Ed Maste  wrote:

>Vinum is a Logical Volume Manager that was introduced in FreeBSD 3.0,
>and for FreeBSD 5 was ported to geom(4) as gvinum. gvinum has had no
>specific development at least as far back as 2010 and it is not clear
>how well it works today. There are open PRs with reports of panics
>upon removing disks, etc. And, it also imposes an ongoing cost as it

 First off, the "port" to geom(4) was incomplete in that gvinum is
somehow not restricted to the geom(4) device nodes presented to it, but
instead always grabs the entire physical device to do its own label
processing.
 Second, gvinum is completely incompatible with GPT partitioning
because, regardless of the device nodes given it to use, it always writes
and reads its own label to and from the ends of the physical drives.
That means it overwrites the GPT secondary partition table with its own
labels, which soon causes error/warning messages from the kernel about
a damaged/missing secondary partition table and recommending a recovery
of that damaged/missing partition table.  Doing the recovery then will
overwrite gvinum's labels, which is likely to cause a kernel panic or
worse.
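
 The failure sequence is easy to reproduce (hypothetical device name; do
not try this on a disk holding data you care about):

   # gpart show ada3     # GPT is reported CORRUPT after gvinum writes its label
   # gpart recover ada3  # rebuilding the backup GPT then clobbers the gvinum label
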
 My memory on gvinum's compatibility with glabel(8) labels is fuzzy
at the present remove, but I seem to recall having encountered problems
there, too.  This is not unique, unfortunately, to gvinum(8).  For
example, using glabel(8) to label swap partitions, as well as
bsdlabel(8)ed partitions, can lead to many unexpected problems.  Such
inconsistencies should be researched and fixed.
 GPT labels allow a partition type of "freebsd-vinum".  I did not try
to play with that one, but I suspect that it would not work correctly
because gvinum is somehow not limited to the GEOM device node for a
partition.  However, if you decide to keep gvinum(8) for some reason,
then this matter should be checked out in detail and its inconsistencies
fixed.
 At least w.r.t. gvinum's raid5, I can attest that the kernel panics
are real.  Before settling on ZFS raidz2 for my largest storage pool, I
experimented with gstripe(8), gmirror(8), graid3(8), and graid5(8) (from
sysutils/graid5).  All worked reasonably well, except for one operation,
namely, "stop".  Most/all such devices cannot actually be stopped because
a stopped device does not *stay* stopped.  As soon as the GEOM device
node is destroyed, all disks are retasted, their labels, if any, are
recognized, and their corresponding device nodes are recreated and placed
back on line. :-(  All of this happens too quickly for even a series of
commands entered on one line to be able to unload the kernel module for
the device node type in question, so there is no practical way to stop
such a device once it has been started.  Because gvinum's raid5 was
always unbearably slow and also subject to kernel panics, I soon excluded
it from further consideration.  GEOM is one of the brightest gems of
modern FreeBSD design.  GEOM's native functions should not be corrupted
or ignored as a result of a botched attempt to "modernize" an old
monstrosity like gvinum, which was originally written for a system that
lacked GEOM and has not fit well into a modern system that *has* GEOM,
not to mention GPT partitioning.
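
 To make the re-tasting race concrete: even a one-liner of the following
sort (hypothetical gstripe device) loses, because the disks are re-tasted
and the device node recreated before the module can be unloaded:

   # gstripe stop st0 && kldunload geom_stripe  # class is busy again before kldunload runs
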
  All of these specific, native GEOM second-level devices otherwise
work pretty much as advertised.  graid5(8), however, was recently marked
as deprecated, which is a real shame.  I would vote for finishing its man
page, which is very incomplete, and for adding a subcommand to do some
sort of scrub procedure like many hardware RAID5 controllers do.  There
are perfectly valid reasons to use these devices in some situations
instead of ZFS, e.g., better performance for temporary/disposable data,
especially for situations involving millions of very short files like
ccache(1) directory trees, portmaster(8)'s $WRKDIRPREFIX, and likely
others.  gvinum(8) appears to have been broken in several ways since
FreeBSD 5.0, is unmaintained as you wrote, and should be deprecated and
eliminated for the reasons given above.  The simple GEOM devices provide
much the same flexibility that gvinum was intended to provide without
the need to learn gvinum's peculiar configuration method.  Once one
understands how GEOM devices work and can be stacked, they are generally
very simple to use in contrast to gvinum, which remains broken in
multiple ways.

>must be updated when other work is done (such as the recent MAXPHYS
>work). I suspect that by now all users have migrated to either
>graid(8) or ZFS.

 graid(8) is not always a good option.  If you read its man page,
you will see that RAID5 is usually only supported as read-only devices,
where it is supported at all.  This can be helpful for recovering data
from a proprietary RAID device, but is not generally useful for actively
used and updated data.  IOW, it can be helpful in a potentially large
number of situations for some users, especially for

Re: Vinum deprecation for FreeBSD 14 - are there any remaining Vinum users?

2021-04-01 Thread Doug Ambrisko
On Thu, Apr 01, 2021 at 11:20:44AM +0300, Lev Serebryakov wrote:
| On 01.04.2021 2:39, Doug Ambrisko wrote:
| 
| > | > I can only state that I use it only occasionally, and that when I do, I
| > | > have had no problems with it. I'm glad that it's there when I need it.
| > |
| > | Thanks for the reply. Can you comment on your use cases - in
| > | particular, did you use mirror, stripe, or raid5? If the first two
| > | then gmirror, gconcat, gstripe, and/or graid are suitable
| > | replacements.
| > |
| > | I'm not looking to deprecate it just because it's old, but because of
| > | a mismatch between user and developer expectations about its
| > | stability.
| > 
| > It would be nice if graid got full support for RAID5 at least.  I'm not
| > sure how much the other levels that are not fully supported (RAID4,
| > RAID5, RAID5E, RAID5EE, RAID5R, RAID6, RAIDMDF, according to the man
| > page) are used.  I started to hack in full RAID5 support and tried to
| > avoid writes if members didn't change.  This lack limits our VROC support.
|   My experience, as co-author and maintainer of `sysutils/graid5`,
|   shows that it is a very non-trivial task.  It contains many subtle
|   problems.
| 
|   `graid5` still has some undiscovered problems, and I don't think it is
|   worth fixing in 2021, when we have had ZFS for many years.

The only advantages I see of graid supporting RAID5 would be better support
for VROC, and the fact that people like RAID5.  I don't like RAID5 for SSDs
since it adds to write amplification issues, but people like it.  RAID5 had
terrible write performance in Linux with concurrent I/O.  I wanted to
see if FreeBSD could do better.

Intel seems to be pushing VMD; we recently had a FreeBSD user who needed
newer VMD support because they couldn't turn it off in the BIOS.
VMware doesn't support VROC.  We support it a bit, in that VMD allows
graid to access the drives and deals with the Intel metadata.  It
doesn't read the info from the EFI runtime.  So RAID 0, 1, and 10
should work.  It would be nice if someone could install FreeBSD on a
working Linux config.  No one has asked for it, so it doesn't seem
very important.

Thanks,

Doug A.


Re: Vinum deprecation for FreeBSD 14 - are there any remaining Vinum users?

2021-04-01 Thread Lev Serebryakov

On 01.04.2021 2:39, Doug Ambrisko wrote:


| > I can only state that I use it only occasionally, and that when I do, I
| > have had no problems with it. I'm glad that it's there when I need it.
|
| Thanks for the reply. Can you comment on your use cases - in
| particular, did you use mirror, stripe, or raid5? If the first two
| then gmirror, gconcat, gstripe, and/or graid are suitable
| replacements.
|
| I'm not looking to deprecate it just because it's old, but because of
| a mismatch between user and developer expectations about its
| stability.

| It would be nice if graid got full support for RAID5 at least.  I'm not
| sure how much the other levels that are not fully supported (RAID4,
| RAID5, RAID5E, RAID5EE, RAID5R, RAID6, RAIDMDF, according to the man
| page) are used.  I started to hack in full RAID5 support and tried to
| avoid writes if members didn't change.  This lack limits our VROC support.

 My experience, as co-author and maintainer of `sysutils/graid5`, shows that
it is a very non-trivial task.  It contains many subtle problems.

 `graid5` still has some undiscovered problems, and I don't think it is worth
fixing in 2021, when we have had ZFS for many years.


--
// Lev Serebryakov


Re: Vinum deprecation for FreeBSD 14 - are there any remaining Vinum users?

2021-03-31 Thread Doug Ambrisko
On Fri, Mar 26, 2021 at 10:22:53AM -0400, Ed Maste wrote:
| On Thu, 25 Mar 2021 at 15:09, Chris  wrote:
| >
| > I can only state that I use it only occasionally, and that when I do, I
| > have had no problems with it. I'm glad that it's there when I need it.
| 
| Thanks for the reply. Can you comment on your use cases - in
| particular, did you use mirror, stripe, or raid5? If the first two
| then gmirror, gconcat, gstripe, and/or graid are suitable
| replacements.
| 
| I'm not looking to deprecate it just because it's old, but because of
| a mismatch between user and developer expectations about its
| stability.

It would be nice if graid got full support for RAID5 at least.  I'm not
sure how much the other levels that are not fully supported (RAID4,
RAID5, RAID5E, RAID5EE, RAID5R, RAID6, RAIDMDF, according to the man
page) are used.  I started to hack in full RAID5 support and tried to
avoid writes if members didn't change.  This lack limits our VROC support.

Thanks,

Doug A.


Re: Vinum deprecation for FreeBSD 14 - are there any remaining Vinum users?

2021-03-26 Thread Chris

On 2021-03-26 07:22, Ed Maste wrote:

> On Thu, 25 Mar 2021 at 15:09, Chris  wrote:
> >
> > I can only state that I use it only occasionally, and that when I do, I
> > have had no problems with it. I'm glad that it's there when I need it.
>
> Thanks for the reply. Can you comment on your use cases - in
> particular, did you use mirror, stripe, or raid5? If the first two
> then gmirror, gconcat, gstripe, and/or graid are suitable
> replacements.

Thank you for the reply. :-)
Sure. My only needs have been for
gstripe, gmirror, or gconcat.
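
For what it's worth, the native-GEOM equivalents are one-liners; a sketch
with made-up device names:

   # gmirror label -v gm0 ada1 ada2   # mirror        -> /dev/mirror/gm0
   # gstripe label -v st0 ada3 ada4   # stripe        -> /dev/stripe/st0
   # gconcat label -v cc0 ada5 ada6   # concatenation -> /dev/concat/cc0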


> I'm not looking to deprecate it just because it's old, but because of
> a mismatch between user and developer expectations about its
> stability.

Sure. I understand. Thanks for mentioning it.

--Chris



Re: Vinum deprecation for FreeBSD 14 - are there any remaining Vinum users?

2021-03-26 Thread Ed Maste
On Thu, 25 Mar 2021 at 15:09, Chris  wrote:
>
> I can only state that I use it only occasionally, and that when I do, I
> have had no problems with it. I'm glad that it's there when I need it.

Thanks for the reply. Can you comment on your use cases - in
particular, did you use mirror, stripe, or raid5? If the first two
then gmirror, gconcat, gstripe, and/or graid are suitable
replacements.

I'm not looking to deprecate it just because it's old, but because of
a mismatch between user and developer expectations about its
stability.


Re: Vinum deprecation for FreeBSD 14 - are there any remaining Vinum users?

2021-03-25 Thread Chris

On 2021-03-25 10:25, Ed Maste wrote:

> Vinum is a Logical Volume Manager that was introduced in FreeBSD 3.0,
> and for FreeBSD 5 was ported to geom(4) as gvinum. gvinum has had no
> specific development at least as far back as 2010 and it is not clear
> how well it works today. There are open PRs with reports of panics
> upon removing disks, etc. And, it also imposes an ongoing cost as it
> must be updated when other work is done (such as the recent MAXPHYS
> work). I suspect that by now all users have migrated to either
> graid(8) or ZFS.
>
> I plan to add a deprecation notice after a short discussion period,
> assuming no reasonable justification is made to retain it. The notice
> would suggest graid and ZFS as alternatives, and would be merged in
> advance of FreeBSD 13.1. Then, gvinum would be removed in advance of
> FreeBSD 14.0.
>
> Please follow up if you have experience or input on vinum in FreeBSD,
> including past use but especially if you are still using it today and
> expect to continue doing so.

I can only state that I use it only occasionally, and that when I do, I
have had no problems with it. I'm glad that it's there when I need it.
Further, I find it easier to set up and use compared to the
alternatives. It is also "lighter" than the alternatives.
While it wouldn't be "the end of the world" if it disappeared, I'm
really glad it's there.

--Chris



Vinum deprecation for FreeBSD 14 - are there any remaining Vinum users?

2021-03-25 Thread Ed Maste
Vinum is a Logical Volume Manager that was introduced in FreeBSD 3.0,
and for FreeBSD 5 was ported to geom(4) as gvinum. gvinum has had no
specific development at least as far back as 2010 and it is not clear
how well it works today. There are open PRs with reports of panics
upon removing disks, etc. And, it also imposes an ongoing cost as it
must be updated when other work is done (such as the recent MAXPHYS
work). I suspect that by now all users have migrated to either
graid(8) or ZFS.

I plan to add a deprecation notice after a short discussion period,
assuming no reasonable justification is made to retain it. The notice
would suggest graid and ZFS as alternatives, and would be merged in
advance of FreeBSD 13.1. Then, gvinum would be removed in advance of
FreeBSD 14.0.

Please follow up if you have experience or input on vinum in FreeBSD,
including past use but especially if you are still using it today and
expect to continue doing so.