Re: ZFS status

2020-03-04 Thread Dan LaBell



On Feb 21, 2020, at 5:45 AM, Sad Clouds wrote:


Hi, does anyone know the current status of ZFS in the recently
released NetBSD-9? There is a message on the console - "WARNING: ZFS
on NetBSD is under development". OK, but what does this mean? Is
there a good chance it may lose/corrupt data, or is it pretty stable,
with only minor issues to watch out for?


It means: should you presently presume you have enough people, or
time? If not, maybe you shouldn't.

A backup strategy isn't the same thing as the number of backups made
and kept. Do you have bit-for-bit (forensic) backups?

More software tools? Maybe. But rather than buying after a dip, or
moving corporate stocks into bonds, correct issues you may not know
you now have, and move stocks into cash on hand. Hire a developer for
an admin position for a while, if or when one presents themselves to
you - for example, if you're also a business or corporate entity.


Re: ZFS status

2020-02-25 Thread David Brownlee
On Tue, 25 Feb 2020 at 11:36, Rocky Hotas  wrote:
>
> On feb 24 23:39, David Brownlee wrote:
>
> [...]
>
> > Or if another disk is plugged in that appears as a lower numbered
> > device, for example making the disk switch from wd1 to wd2.
>
> Ok!
>
> > Apologies, poor phrasing on my part - I meant whole devices rather
> > than a partition within a device - so wd4, ld0, sd3, as opposed to
> > wd1e or raid2f. I suspect wedges such as dk3 should be fine, but
> > again, better to give zfs the whole device. (On a note of light
> > amusement, I have used all of the mentioned devices in zfs pools at
> > least once :)
>
> No problem. Yes, according to the previous message from Chavdar Ivanov,
> wedges can be used as well.
> If you tested any sort of device, it's definitely good :)! ZFS seems
> very flexible about that.

Yup - to clarify, I've used zfs without issue on wedges, and while I
believe I have restarted with a wedge appearing as a different device
id, I cannot confirm it.
I have restarted with different device ids for the other device
types, and the only one to have an issue was on disklabelled
partitions. I should file a PR :)

Thanks

David


Re: ZFS status

2020-02-25 Thread Brad Spencer
Rocky Hotas  writes:

> On feb 24 19:48, Chavdar Ivanov wrote:
>

[snip]

>> dk2: zfsroot, 58720256 blocks at 4456576, type: ffs
>> dk5: SWAP2, 3931999 blocks at 63176832, type: ffs
>
> Thanks also for this second example. What is confusing to me is that,
> while `dk2' is used for ZFS, it still has `type: ffs'. Is ZFS an
> alternative to FFS (v1, v2), or does it sit `above' FFS, backed by FFS?
>
> Rocky

In my testing with GPT and ZFS on NetBSD, it does not really seem to
matter what the type of the wedge is.  I used fbsd-zfs most of the
time and it worked fine too.  I am fairly sure anything will work.
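
For anyone following along, a minimal sketch of that kind of setup,
assuming a spare disk wd1 and the gpt(8)/dkctl(8) tools; the
partition label "zpool0", the pool name "tank" and the resulting
wedge number are all made up:

  gpt create wd1
  gpt add -l zpool0 -t fbsd-zfs wd1   # any type seems to work
  dkctl wd1 listwedges                # note which dkN the wedge got
  zpool create tank dk4               # assuming the wedge came up as dk4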




-- 
Brad Spencer - b...@anduin.eldar.org - KC8VKS - http://anduin.eldar.org


Re: ZFS status

2020-02-25 Thread Rocky Hotas
On feb 24 23:39, David Brownlee wrote:

[...]

> Or if another disk is plugged in that appears as a lower numbered
> device, for example making the disk switch from wd1 to wd2.

Ok!

> Apologies, poor phrasing on my part - I meant whole devices rather
> than a partition within a device - so wd4, ld0, sd3, as opposed to
> wd1e or raid2f. I suspect wedges such as dk3 should be fine, but
> again, better to give zfs the whole device. (On a note of light
> amusement, I have used all of the mentioned devices in zfs pools at
> least once :)

No problem. Yes, according to the previous message from Chavdar Ivanov,
wedges can be used as well.
If you tested any sort of device, it's definitely good :)! ZFS seems
very flexible about that.

Rocky


Re: ZFS status

2020-02-25 Thread Rocky Hotas
On feb 24 19:48, Chavdar Ivanov wrote:

[...]

> I never specify the raw device; as you see, it works both with the
> bare name - wd1 - and with the whole-disk partition - /dev/wd2d;
> both cover whole disks.
> 
> I haven't used a zfs pool backed by a disklabel slice yet, which is
> what the warning above was about.

Ok! So it is not clear what could happen when using /dev/wd2 with
slice a, b, e, f, or any other custom slice. Thank you!

> dk2: zfsroot, 58720256 blocks at 4456576, type: ffs
> dk5: SWAP2, 3931999 blocks at 63176832, type: ffs

Thanks also for this second example. What is confusing to me is that,
while `dk2' is used for ZFS, it still has `type: ffs'. Is ZFS an
alternative to FFS (v1, v2), or does it sit `above' FFS, backed by FFS?

Rocky


Re: ZFS status

2020-02-24 Thread David Brownlee
On Mon, 24 Feb 2020 at 18:58, Rocky Hotas  wrote:
> > - If you make a zfs filesystem on a disklabel partition (eg wd0f)
> > and the disk moves, zfs does not seem to be able to find it again.
>
> Do you mean if the disk is removed from the system and then plugged
> back in again?

Or if another disk is plugged in that appears as a lower numbered
device, for example making the disk switch from wd1 to wd2.

> > zfs best practice is to use raw devices, so this shouldn't be an issue
> > for most people
>
> For example, assume that you would like to create a new pool called
> `newpool' using disk /dev/wd0.
> By `using the raw device', do you mean `zpool create newpool rwd0'?

Apologies, poor phrasing on my part - I meant whole devices rather
than a partition within a device - so wd4, ld0, sd3, as opposed to
wd1e or raid2f. I suspect wedges such as dk3 should be fine, but
again, better to give zfs the whole device. (On a note of light
amusement, I have used all of the mentioned devices in zfs pools at
least once :)
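
As a hedged illustration of the difference (pool and device names
here are made up, not from my actual setup):

  zpool create newpool wd4    # whole device - fine across renumbering
  zpool create newpool wd1e   # disklabel partition - the risky case above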

David


Re: ZFS status

2020-02-24 Thread Chavdar Ivanov
On Mon, 24 Feb 2020 at 18:59, Rocky Hotas  wrote:
>
> Hi!
> I was also trying to learn ZFS and still have some doubts. Thanks for
> the suggestions in this whole thread. A couple of questions:
>
> On feb 21 13:40, David Brownlee wrote:
>
> [...]
>
> > - If you make a zfs filesystem on a disklabel partition (eg wd0f)
> > and the disk moves, zfs does not seem to be able to find it again.
>
> Do you mean if the disk is removed from the system and then plugged
> back in again?
>
> > zfs best practice is to use raw devices, so this shouldn't be an issue
> > for most people
>
> For example, assume that you would like to create a new pool called
> `newpool' using disk /dev/wd0.
> By `using the raw device', do you mean `zpool create newpool rwd0'?

My two pools:
...
xci # zpool history tank | grep zpool\ create
2019-07-28.13:17:24 zpool create tank /dev/wd2d
xci # zpool history pail | grep zpool\ create
2019-08-09.19:45:49 zpool create -f pail wd1
...

I never specify the raw device; as you see, it works both with the
bare name - wd1 - and with the whole-disk partition - /dev/wd2d;
both cover whole disks.

I haven't used a zfs pool backed by a disklabel slice yet, which is
what the warning above was about.

On the other hand, the latest root-on-zfs VM I set up for testing has
its pool on a gpt partition - /dev/dk2:

dkctl wd0 listwedges
/dev/rwd0: 6 wedges:
dk3: FILLER, 30 blocks at 34, type: ffs
dk0: d6dd2b31-4705-4a83-a239-f80a44bcfa66, 262144 blocks at 64, type: msdos
dk4: F1LLER, 64 blocks at 262208, type: ffs
dk1: boot, 4194304 blocks at 262272, type: ffs
dk2: zfsroot, 58720256 blocks at 4456576, type: ffs
dk5: SWAP2, 3931999 blocks at 63176832, type: ffs

However, within the system itself I can't see the create history, as
it was created outside it and only imported once populated:

zpool history rpool
History for 'rpool':
2020-02-22.16:03:34 zfs set mountpoint=legacy rpool/ROOT
2020-02-22.16:09:17 zpool import -f rpool
2020-02-22.17:11:06 zpool import -f -N rpool
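
For reference, a rough sketch of that create-elsewhere-then-import
flow, assuming the target disk's zfsroot wedge shows up as dk2 on the
build host (the altroot path is arbitrary):

  zpool create -o altroot=/mnt rpool dk2
  zfs create rpool/ROOT
  zfs set mountpoint=legacy rpool/ROOT
  # ... populate the root filesystem under /mnt ...
  zpool export rpool
  # then, booted into the target system:
  zpool import -f rpool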



>
> Rocky






Re: ZFS status

2020-02-24 Thread Rocky Hotas
Hi!
I was also trying to learn ZFS and still have some doubts. Thanks for
the suggestions in this whole thread. A couple of questions:

On feb 21 13:40, David Brownlee wrote:

[...]

> - If you make a zfs filesystem on a disklabel partition (eg wd0f)
> and the disk moves, zfs does not seem to be able to find it again.

Do you mean if the disk is removed from the system and then plugged
back in again?

> zfs best practice is to use raw devices, so this shouldn't be an issue
> for most people

For example, assume that you would like to create a new pool called
`newpool' using disk /dev/wd0.
By `using the raw device', do you mean `zpool create newpool rwd0'?

Rocky


Re: ZFS status

2020-02-22 Thread Roy Marples

On 21/02/2020 13:40, David Brownlee wrote:

- If you make a zfs filesystem on a disklabel partition (eg wd0f) and
the disk moves, zfs does not seem to be able to find it again. If you
run MAKEDEV for the affected device into a new directory and point
zfs at that, then it picks up the disk. This gave me something of a
scare. zfs best practice is to use raw devices, so this shouldn't be
an issue for most people.


I hit this issue in my experiments with the Root On ZFS ramdisk.
Coupled with the fact that the ramdisk needs the FFS boot partition
to be labelled, this makes GPT a must-have.
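
For concreteness, a hedged sketch of the labelling step, assuming the
FFS boot partition sits at GPT index 2 on wd0 (the index and label
here are made up):

  gpt label -i 2 -l boot wd0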


Roy


Re: ZFS status

2020-02-21 Thread Sad Clouds
On Fri, 21 Feb 2020 13:40:03 +
David Brownlee  wrote:

> On Fri, 21 Feb 2020 at 10:45, Sad Clouds
>  wrote:
> >
> > Hi, does anyone know the current status of ZFS in the recently
> > released NetBSD-9? There is a message on the console - "WARNING:
> > ZFS on NetBSD is under development". OK, but what does this mean?
> > Is there a good chance it may lose/corrupt data, or is it pretty
> > stable, with only minor issues to watch out for?
> 
> I would say the latter - I'm using it on a couple of boxes and they
> have had the usual selection of test cases - apps filling up file
> systems, switching between zfs and legacy mount and back, the box
> being rebooted with the disks on different ports, disks moved between
> boxes, and at least one case with a disk from a set missing then added
> back on next reboot. Not lost any data yet (though see note below on
> disklabel partitions)

OK thanks, this is good news. 


Re: ZFS status

2020-02-21 Thread David Brownlee
On Fri, 21 Feb 2020 at 10:45, Sad Clouds  wrote:
>
> Hi, does anyone know the current status of ZFS in the recently
> released NetBSD-9? There is a message on the console - "WARNING: ZFS
> on NetBSD is under development". OK, but what does this mean? Is
> there a good chance it may lose/corrupt data, or is it pretty
> stable, with only minor issues to watch out for?

I would say the latter - I'm using it on a couple of boxes and they
have had the usual selection of test cases - apps filling up file
systems, switching between zfs and legacy mount and back, the box
being rebooted with the disks on different ports, disks moved between
boxes, and at least one case with a disk from a set missing then added
back on next reboot. Not lost any data yet (though see note below on
disklabel partitions)

I've encountered two issues:
- I have one box where zfs does not work - trying to mount a new
filesystem, or an existing one copied from another machine, just
panics.
- If you make a zfs filesystem on a disklabel partition (eg wd0f) and
the disk moves, zfs does not seem to be able to find it again. If you
run MAKEDEV for the affected device into a new directory and point
zfs at that, then it picks up the disk. This gave me something of a
scare. zfs best practice is to use raw devices, so this shouldn't be
an issue for most people.
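
Roughly what I did to recover, from memory - the directory and device
names here are illustrative, and I believe the re-import went via
zpool import's -d flag:

  mkdir /tmp/zfsdev
  cd /tmp/zfsdev && sh /dev/MAKEDEV wd2
  zpool import -d /tmp/zfsdev tank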

David


ZFS status

2020-02-21 Thread Sad Clouds
Hi, does anyone know the current status of ZFS in the recently
released NetBSD-9? There is a message on the console - "WARNING: ZFS
on NetBSD is under development". OK, but what does this mean? Is
there a good chance it may lose/corrupt data, or is it pretty stable,
with only minor issues to watch out for?