Re: zfs problems after rebuilding system [SOLVED]

2018-03-13 Thread Pete French




I based my fix heavily on that patch from the PR, but I rewrote it
enough that I might've made any number of mistakes, so it needs fresh
testing. 


Ok, have been rebooting with the patch every ten minutes for 24 hours 
now, and it comes back up perfectly every time, so as far as I am 
concerned that's sufficient testing for me to say it's fixed, and I would 
be very happy to have it merged into STABLE (and I'll then roll it out 
everywhere). Thanks!


-pete.


Re: zfs problems after rebuilding system [SOLVED]

2018-03-12 Thread Ian Lepore
On Mon, 2018-03-12 at 17:21 +, Pete French wrote:
> 
> On 10/03/2018 23:48, Ian Lepore wrote:
> > 
> > I based my fix heavily on that patch from the PR, but I rewrote it
> > enough that I might've made any number of mistakes, so it needs fresh
> > testing.  The main change I made was to make it a lot less noisy while
> > waiting (it only mentions the wait once, unless bootverbose is set, in
> > which case it's once per second).  I also removed the logic that
> > limited the retries to nfs and zfs, because I think we can remove all
> > the old code related to waiting that only worked for ufs and let this
> > new retry be the way it waits for all filesystems.  But that's a bigger
> > change we can do separately; I didn't want to hold up this fix any
> > longer.
> Thanks for the patch, it is very much appreciated! I applied this 
> earlier today, and have been continuously rebooting the machine in Azure 
> ever since (every ten minutes). This has worked flawlessly, so I am very 
> happy that this fixes the issue for me. I am going to leave it running 
> though, just to see if anything happens. I haven't examined dmesg, but I 
> should be able to see the output from the patch there to verify that it's 
> waiting, yes ?
> 
> cheers,
> 
> -pete.

Yes, if the root filesystem isn't available on the first attempt, it
should emit a single line saying it will wait for up to N seconds for
it to arrive, where N is the vfs.mountroot.timeout value (3 seconds if
not set in loader.conf).
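
If you want to double-check after a boot, grepping dmesg should be
enough; something like this (the exact wording of the new message may
differ, so treat the pattern as a rough guess):

# dmesg | grep -iE 'mount root|mountroot|retrying'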

-- Ian


Re: zfs problems after rebuilding system [SOLVED]

2018-03-12 Thread Pete French



On 10/03/2018 23:48, Ian Lepore wrote:

I based my fix heavily on that patch from the PR, but I rewrote it
enough that I might've made any number of mistakes, so it needs fresh
testing.  The main change I made was to make it a lot less noisy while
waiting (it only mentions the wait once, unless bootverbose is set, in
which case it's once per second).  I also removed the logic that
limited the retries to nfs and zfs, because I think we can remove all
the old code related to waiting that only worked for ufs and let this
new retry be the way it waits for all filesystems.  But that's a bigger
change we can do separately; I didn't want to hold up this fix any
longer.


Thanks for the patch, it is very much appreciated! I applied this 
earlier today, and have been continuously rebooting the machine in Azure 
ever since (every ten minutes). This has worked flawlessly, so I am very 
happy that this fixes the issue for me. I am going to leave it running 
though, just to see if anything happens. I haven't examined dmesg, but I 
should be able to see the output from the patch there to verify that it's 
waiting, yes ?


cheers,

-pete.


Re: zfs problems after rebuilding system [SOLVED]

2018-03-10 Thread Ian Lepore
On Sat, 2018-03-10 at 23:42 +, Pete French wrote:
> > 
> > It looks like r330745 applies fine to stable-11 without any changes,
> > and there's plenty of value in testing that as well, if you're already
> > set up for that world.
> > 
> 
> I've been running the patch from the PR in production since the original 
> bug report and it works fine. I haven't looked at r330745 yet, but I can 
> replace the PR patch with that and give it a whirl; will take a look 
> Monday at what's possible.
> 
> -pete.
> 

I based my fix heavily on that patch from the PR, but I rewrote it
enough that I might've made any number of mistakes, so it needs fresh
testing.  The main change I made was to make it a lot less noisy while
waiting (it only mentions the wait once, unless bootverbose is set, in
which case it's once per second).  I also removed the logic that
limited the retries to nfs and zfs, because I think we can remove all
the old code related to waiting that only worked for ufs and let this
new retry be the way it waits for all filesystems.  But that's a bigger
change we can do separately; I didn't want to hold up this fix any
longer.

-- Ian


Re: zfs problems after rebuilding system [SOLVED]

2018-03-10 Thread Pete French



It looks like r330745 applies fine to stable-11 without any changes,
and there's plenty of value in testing that as well, if you're already
set up for that world.




I've been running the patch from the PR in production since the original 
bug report and it works fine. I haven't looked at r330745 yet, but I can 
replace the PR patch with that and give it a whirl; will take a look 
Monday at what's possible.


-pete.



Re: zfs problems after rebuilding system [SOLVED]

2018-03-10 Thread Ian Lepore
On Sat, 2018-03-10 at 23:08 +, Pete French wrote:
> Ah, thank you! I haven't run current before, but as this is such an issue 
> for us I'll set up an Azure machine running it and have it reboot every 
> five minutes or so to check it works OK. Unfortunately the error doesn't 
> show up consistently, as it's a race condition. Will let you know if it
> fails for any reason.
> 
> -pete. [time to take a dive into the exciting world of current]

It looks like r330745 applies fine to stable-11 without any changes,
and there's plenty of value in testing that as well, if you're already
set up for that world.

-- Ian


Re: zfs problems after rebuilding system [SOLVED]

2018-03-10 Thread Pete French
Ah, thank you! I haven't run current before, but as this is such an issue 
for us I'll set up an Azure machine running it and have it reboot every 
five minutes or so to check it works OK. Unfortunately the error doesn't 
show up consistently, as it's a race condition. Will let you know if it 
fails for any reason.

-pete. [time to take a dive into the exciting world of current]



Re: zfs problems after rebuilding system [SOLVED]

2018-03-10 Thread Ian Lepore
On Sat, 2018-03-03 at 16:19 +, Pete French wrote:
> 
> > 
> > That won't work for the boot drive.
> > 
> > When no boot drive is detected early enough, the kernel goes to the
> > mountroot prompt.  That seems to hold a Giant lock which inhibits
> > further progress being made.  Sometimes progress can be made by
> > trying
> > to mount unmountable partitions on other drives, but this usually
> > goes
> > too fast, especially if the USB drive often times out.
> 
> 
> We have this problem in Azure with a ZFS root; it was fixed by the patch 
> in this bug report, which actually starts off being about USB.
> 
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=208882
> 
> You can then set the mountroot timeout as normal and it works.
> 
> I would really like this patch to be applied, but it seems to have 
> languished since last summer. We use this as standard on all our cloud 
> machines now, and it works very nicely.
> 
> -pete.

I've committed a fix to -current (r330745) based on that patch.  It
would be good if people running -current who've had this problem could
give it some testing.  I'd like to get it merged back to 11 before the
11.1 release (and back to 10-stable as well).

With r330745 in place, the only setting that should be needed if your
rootfs is on a device that is slow to arrive is vfs.mountroot.timeout=
in loader.conf; the value is the number of seconds to wait before
giving up and going to the mountroot prompt.
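
For example, to allow up to 30 seconds (the value here is only an
illustration, pick whatever suits your hardware):

# /boot/loader.conf
vfs.mountroot.timeout="30"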

-- Ian


Re: zfs problems after rebuilding system [SOLVED]

2018-03-05 Thread Mark Millard via freebsd-stable
Eugene Grosbein eugen at grosbein.net wrote on
Mon Mar 5 12:20:47 UTC 2018 :

> 05.03.2018 19:10, Dimitry Andric wrote:
> 
>>> When no boot drive is detected early enough, the kernel goes to the
>>> mountroot prompt.  That seems to hold a Giant lock which inhibits
>>> further progress being made.  Sometimes progress can be made by trying
>>> to mount unmountable partitions on other drives, but this usually goes
>>> too fast, especially if the USB drive often times out.
>> 
>> What I would like to know, is why our USB stack has such timeout issues
>> at all.  When I boot Linux on the same type of hardware, I never see USB
>> timeouts.  They must be doing something right, or maybe they just don't
>> bother checking some status bits that we are very strict about?
> 
> This is heavily hardware-dependent. You may have no issues with some
> software+hardware combination and long timeouts with same software
> but different hardware.

Dimitry's example is for changing the software for the same(?) hardware,
if I understand right. (FreeBSD vs. some Linux distribution.)

(?: He did say "type of".)

Perhaps that type of hardware can be used to figure out the difference.

===
Mark Millard
marklmi at yahoo.com
( markmi at dsl-only.net is
going away in 2018-Feb, late)



Re: zfs problems after rebuilding system [SOLVED]

2018-03-05 Thread Eugene Grosbein
05.03.2018 19:10, Dimitry Andric wrote:

>> When no boot drive is detected early enough, the kernel goes to the
>> mountroot prompt.  That seems to hold a Giant lock which inhibits
>> further progress being made.  Sometimes progress can be made by trying
>> to mount unmountable partitions on other drives, but this usually goes
>> too fast, especially if the USB drive often times out.
> 
> What I would like to know, is why our USB stack has such timeout issues
> at all.  When I boot Linux on the same type of hardware, I never see USB
> timeouts.  They must be doing something right, or maybe they just don't
> bother checking some status bits that we are very strict about?

This is heavily hardware-dependent. You may have no issues with some
software+hardware combination and long timeouts with same software
but different hardware.







Re: zfs problems after rebuilding system [SOLVED]

2018-03-05 Thread Dimitry Andric
On 3 Mar 2018, at 13:56, Bruce Evans  wrote:
> 
> On Sat, 3 Mar 2018, tech-lists wrote:
>> On 03/03/2018 00:23, Dimitry Andric wrote:
...
>>> Whether this is due to some sort of BIOS handover trouble, or due to
>>> cheap and/or crappy USB-to-SATA bridges (even with brand WD and Seagate
>>> disks!), I have no idea.  I attempted to debug it at some point, but
>>> a well-placed "sleep 10" was an acceptable workaround... :)
>> 
>> That fixed it, thank you again :D
> 
> That won't work for the boot drive.
> 
> When no boot drive is detected early enough, the kernel goes to the
> mountroot prompt.  That seems to hold a Giant lock which inhibits
> further progress being made.  Sometimes progress can be made by trying
> to mount unmountable partitions on other drives, but this usually goes
> too fast, especially if the USB drive often times out.

What I would like to know, is why our USB stack has such timeout issues
at all.  When I boot Linux on the same type of hardware, I never see USB
timeouts.  They must be doing something right, or maybe they just don't
bother checking some status bits that we are very strict about?

-Dimitry





Re: zfs problems after rebuilding system [SOLVED]

2018-03-03 Thread Pete French




That won't work for the boot drive.

When no boot drive is detected early enough, the kernel goes to the
mountroot prompt.  That seems to hold a Giant lock which inhibits
further progress being made.  Sometimes progress can be made by trying
to mount unmountable partitions on other drives, but this usually goes
too fast, especially if the USB drive often times out.




We have this problem in Azure with a ZFS root; it was fixed by the patch in 
this bug report, which actually starts off being about USB.


https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=208882

You can then set the mountroot timeout as normal and it works.

I would really like this patch to be applied, but it seems to have 
languished since last summer. We use this as standard on all our cloud 
machines now, and it works very nicely.


-pete.


Re: zfs problems after rebuilding system [SOLVED]

2018-03-03 Thread Eugene Grosbein
03.03.2018 19:56, Bruce Evans wrote:

> On Sat, 3 Mar 2018, tech-lists wrote:
> 
>> On 03/03/2018 00:23, Dimitry Andric wrote:
>>> Indeed.  I have had the following for a few years now, due to USB drives
>>> with ZFS pools:
>>>
>>> --- /usr/src/etc/rc.d/zfs   2016-11-08 10:21:29.820131000 +0100
>>> +++ /etc/rc.d/zfs   2016-11-08 12:49:52.971161000 +0100
>>> @@ -25,6 +25,8 @@
>>>
>>>  zfs_start_main()
>>>  {
>>> +   echo "Sleeping for 10 seconds to let USB devices settle..."
>>> +   sleep 10
>>>     zfs mount -va
>>>     zfs share -a
>>>     if [ ! -r /etc/zfs/exports ]; then
>>>
>>> For some reason, USB3 (xhci) controllers can take a very, very long time
>>> to correctly attach mass storage devices: I usually see many timeouts
>>> before they finally get detected.  After that, the devices always work
>>> just fine, though.
> 
> I have one that works for an old USB hard drive but never works for a not
> so old USB flash drive and a new SSD in a USB dock (just to check the SSD
> speed when handicapped by USB).  Win7 has no problems with the xhci and
> USB flash drive combination, and FreeBSD has no problems with the drive
> on other systems.
> 
>>> Whether this is due to some sort of BIOS handover trouble, or due to
>>> cheap and/or crappy USB-to-SATA bridges (even with brand WD and Seagate
>>> disks!), I have no idea.  I attempted to debug it at some point, but
>>> a well-placed "sleep 10" was an acceptable workaround... :)
>>
>> That fixed it, thank you again :D
> 
> That won't work for the boot drive.
> 
> When no boot drive is detected early enough, the kernel goes to the
> mountroot prompt.  That seems to hold a Giant lock which inhibits
> further progress being made.  Sometimes progress can be made by trying
> to mount unmountable partitions on other drives, but this usually goes
> too fast, especially if the USB drive often times out.

In fact, we have enough loader.conf quirks for that:

kern.cam.boot_delay         "Bus registration wait time" # milliseconds
vfs.mountroot.timeout       "Wait for root mount" # seconds
vfs.root_mount_always_wait  "Wait for root mount holds even if the root device already exists" # boolean

No need for extra hacks in the zfs rc.d script.
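
For example, in /boot/loader.conf (the values here are only illustrations):

kern.cam.boot_delay="10000"       # wait up to 10 s for bus registration (milliseconds)
vfs.mountroot.timeout="30"        # wait up to 30 s for the root device (seconds)
vfs.root_mount_always_wait="1"    # always wait for root mount holds (boolean)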



Re: zfs problems after rebuilding system [SOLVED]

2018-03-03 Thread tech-lists
On 03/03/2018 12:56, Bruce Evans wrote:
> That won't work for the boot drive.

In my case the workaround is fine because it's not a boot drive.

-- 
J.


Re: zfs problems after rebuilding system [SOLVED]

2018-03-03 Thread Bruce Evans

On Sat, 3 Mar 2018, tech-lists wrote:


On 03/03/2018 00:23, Dimitry Andric wrote:

Indeed.  I have had the following for a few years now, due to USB drives
with ZFS pools:

--- /usr/src/etc/rc.d/zfs   2016-11-08 10:21:29.820131000 +0100
+++ /etc/rc.d/zfs   2016-11-08 12:49:52.971161000 +0100
@@ -25,6 +25,8 @@

 zfs_start_main()
 {
+   echo "Sleeping for 10 seconds to let USB devices settle..."
+   sleep 10
zfs mount -va
zfs share -a
if [ ! -r /etc/zfs/exports ]; then

For some reason, USB3 (xhci) controllers can take a very, very long time
to correctly attach mass storage devices: I usually see many timeouts
before they finally get detected.  After that, the devices always work
just fine, though.


I have one that works for an old USB hard drive but never works for a not
so old USB flash drive and a new SSD in a USB dock (just to check the SSD
speed when handicapped by USB).  Win7 has no problems with the xhci and
USB flash drive combination, and FreeBSD has no problems with the drive
on other systems.


Whether this is due to some sort of BIOS handover trouble, or due to
cheap and/or crappy USB-to-SATA bridges (even with brand WD and Seagate
disks!), I have no idea.  I attempted to debug it at some point, but
a well-placed "sleep 10" was an acceptable workaround... :)


That fixed it, thank you again :D


That won't work for the boot drive.

When no boot drive is detected early enough, the kernel goes to the
mountroot prompt.  That seems to hold a Giant lock which inhibits
further progress being made.  Sometimes progress can be made by trying
to mount unmountable partitions on other drives, but this usually goes
too fast, especially if the USB drive often times out.

Bruce


Re: zfs problems after rebuilding system [SOLVED]

2018-03-03 Thread tech-lists
On 03/03/2018 00:23, Dimitry Andric wrote:
> Indeed.  I have had the following for a few years now, due to USB drives
> with ZFS pools:
> 
> --- /usr/src/etc/rc.d/zfs 2016-11-08 10:21:29.820131000 +0100
> +++ /etc/rc.d/zfs 2016-11-08 12:49:52.971161000 +0100
> @@ -25,6 +25,8 @@
> 
>  zfs_start_main()
>  {
> + echo "Sleeping for 10 seconds to let USB devices settle..."
> + sleep 10
>   zfs mount -va
>   zfs share -a
>   if [ ! -r /etc/zfs/exports ]; then
> 
> For some reason, USB3 (xhci) controllers can take a very, very long time
> to correctly attach mass storage devices: I usually see many timeouts
> before they finally get detected.  After that, the devices always work
> just fine, though.
> 
> Whether this is due to some sort of BIOS handover trouble, or due to
> cheap and/or crappy USB-to-SATA bridges (even with brand WD and Seagate
> disks!), I have no idea.  I attempted to debug it at some point, but
> a well-placed "sleep 10" was an acceptable workaround... :)

That fixed it, thank you again :D
-- 
J.


Re: zfs problems after rebuilding system

2018-03-02 Thread tech-lists
On 03/03/2018 00:09, Freddie Cash wrote:
> You said it's an external USB drive, correct? Could it be a race condition
> during the boot process where the USB mass storage driver hasn't detected
> the drive yet when /etc/rc.d/zfs is run?
> 
> As a test, add a "sleep 30" in that script before the "zfs mount -a" call
> and reboot.

Yes, it's an external usb3 drive. That's interesting and I'll test that
tomorrow. I recently commented out the USB debug line in a GENERIC kernel
because the console was filling up with usb attach messages on boot.
They were appearing after the login prompt. I have a couple of usb3 hubs
attached and the disk is attached through one such hub. (Although it was
done this way because sometimes it'd be seen as /dev/da0 and other times
as /dev/da4. And possibly linked to this, when it came up as /dev/da0 it
was always at usb2 speed rather than usb3.)

thanks everyone for your input.
-- 
J.


Re: zfs problems after rebuilding system

2018-03-02 Thread Dimitry Andric
On 3 Mar 2018, at 01:09, Freddie Cash  wrote:
> 
> You said it's an external USB drive, correct? Could it be a race condition
> during the boot process where the USB mass storage driver hasn't detected
> the drive yet when /etc/rc.d/zfs is run?
> 
> As a test, add a "sleep 30" in that script before the "zfs mount -a" call
> and reboot.

Indeed.  I have had the following for a few years now, due to USB drives
with ZFS pools:

--- /usr/src/etc/rc.d/zfs   2016-11-08 10:21:29.820131000 +0100
+++ /etc/rc.d/zfs   2016-11-08 12:49:52.971161000 +0100
@@ -25,6 +25,8 @@

 zfs_start_main()
 {
+   echo "Sleeping for 10 seconds to let USB devices settle..."
+   sleep 10
zfs mount -va
zfs share -a
if [ ! -r /etc/zfs/exports ]; then

For some reason, USB3 (xhci) controllers can take a very, very long time
to correctly attach mass storage devices: I usually see many timeouts
before they finally get detected.  After that, the devices always work
just fine, though.

Whether this is due to some sort of BIOS handover trouble, or due to
cheap and/or crappy USB-to-SATA bridges (even with brand WD and Seagate
disks!), I have no idea.  I attempted to debug it at some point, but
a well-placed "sleep 10" was an acceptable workaround... :)

-Dimitry





Re: zfs problems after rebuilding system

2018-03-02 Thread Freddie Cash
You said it's an external USB drive, correct? Could it be a race condition
during the boot process where the USB mass storage driver hasn't detected
the drive yet when /etc/rc.d/zfs is run?

As a test, add a "sleep 30" in that script before the "zfs mount -a" call
and reboot.

Cheers,
Freddie

Typos courtesy of my phone.


Re: zfs problems after rebuilding system

2018-03-02 Thread tech-lists
On 02/03/2018 21:56, Alan Somers wrote:
> dmesg only shows stuff that comes from the kernel, not the console.  To
> see what's printed to the console, you'll actually have to watch it.  Or
> enable /var/log/console.log by uncommenting the appropriate line in
> /etc/syslog.conf.

OK, did that; chmodded it to 600, then sent a kill -1 to the syslogd
process ID, and rebooted.

# cat /var/log/console.log | grep -i zfs
# 

lots of info if I less the file, but nothing about zfs

here's output of mount:

# mount
/dev/ada0s1a on / (ufs, local, journaled soft-updates)
devfs on /dev (devfs, local, multilabel)
linprocfs on /compat/linux/proc (linprocfs, local)
tmpfs on /compat/linux/dev/shm (tmpfs, local)
zpool0 on /zpool0 (zfs, local, nfsv4acls)
zpool0/home on /zpool0/home (zfs, local, nfsv4acls)
zpool0/usr on /zpool0/usr (zfs, local, nfsv4acls)
zpool0/usr/local on /zpool0/usr/local (zfs, local, nfsv4acls)
zpool0/vms on /zpool0/vms (zfs, local, nfsv4acls)
zpool0/usr/oldsrc on /usr/oldsrc (zfs, local, nfsv4acls)
zpool0/usr/ports on /usr/ports (zfs, local, nfsv4acls)
zpool0/usr/src on /usr/src (zfs, local, nfsv4acls)

#

output of zfs mount

# zfs mount
zpool0             /zpool0
zpool0/home        /zpool0/home
zpool0/usr         /zpool0/usr
zpool0/usr/local   /zpool0/usr/local
zpool0/vms         /zpool0/vms
zpool0/usr/oldsrc  /usr/oldsrc
zpool0/usr/ports   /usr/ports
zpool0/usr/src     /usr/src

now I'll do zfs mount -a and then zfs mount

# zfs mount -a

# zfs mount
zpool0             /zpool0
zpool0/home        /zpool0/home
zpool0/usr         /zpool0/usr
zpool0/usr/local   /zpool0/usr/local
zpool0/vms         /zpool0/vms
zpool0/usr/oldsrc  /usr/oldsrc
zpool0/usr/ports   /usr/ports
zpool0/usr/src     /usr/src
zpool1             /zpool1
zpool1/compressed  /zpool1/compressed
zpool1/important   /zpool1/important

and everything is there as it should be, after zfs mount -a.

It's as if /etc/rc.d/zfs either isn't running, or is somehow failing to
run the main section, where it uses -va rather than just -a.
Is this file the only one that's called to load zfs?

I mean, in that file, zfs mount is never called without a parameter. To
me, it doesn't look like the file is being run at all.

thanks,
-- 
J.


Re: zfs problems after rebuilding system

2018-03-02 Thread tech-lists
On 02/03/2018 21:56, Alan Somers wrote:
> dmesg only shows stuff that comes from the kernel, not the console.  To see
> what's printed to the console, you'll actually have to watch it.  Or enable
> /var/log/console.log by uncommenting the appropriate line in
> /etc/syslog.conf.

ok will do this asap, thanks

-- 
J.


Re: zfs problems after rebuilding system

2018-03-02 Thread Alan Somers
On Fri, Mar 2, 2018 at 2:53 PM, tech-lists  wrote:

> On 02/03/2018 21:38, Alan Somers wrote:
> > The relevant code is in /etc/rc.d/zfs, and it already uses "-a".  Have
> > you checked if /etc/rc.d/zfs is printing any errors to the console
> > during boot?
>
> Nothing much in dmesg -a
>
> # dmesg -a | egrep -i zfs
> ZFS filesystem version: 5
> ZFS storage pool version: features support (5000)
>
> # cat /etc/rc.conf | egrep -i zfs
> zfs_enable="YES"
>
> very puzzling
> --
> J.
>

dmesg only shows stuff that comes from the kernel, not the console.  To see
what's printed to the console, you'll actually have to watch it.  Or enable
/var/log/console.log by uncommenting the appropriate line in
/etc/syslog.conf.
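
In a stock /etc/syslog.conf the relevant line should look roughly like
this; uncomment it, create /var/log/console.log (mode 600), and HUP
syslogd:

#console.info                           /var/log/console.log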

-Alan


Re: zfs problems after rebuilding system

2018-03-02 Thread tech-lists
On 02/03/2018 21:38, Alan Somers wrote:
> The relevant code is in /etc/rc.d/zfs, and it already uses "-a".  Have
> you checked if /etc/rc.d/zfs is printing any errors to the console
> during boot?

Nothing much in dmesg -a

# dmesg -a | egrep -i zfs
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)

# cat /etc/rc.conf | egrep -i zfs
zfs_enable="YES"

very puzzling
-- 
J.


Re: zfs problems after rebuilding system

2018-03-02 Thread Alan Somers
On Fri, Mar 2, 2018 at 2:26 PM, tech-lists  wrote:

> Hi, thanks for looking at this,
>
> On 02/03/2018 20:39, Alan Somers wrote:
> > This doesn't make sense.  vdevs have nothing to do with mounting.  You
> see
> > your vdevs by doing "zpool status".  What are you expecting to see that
> you
> > don't?
>
> sorry, I was confusing terms. I was expecting to see similar to output
> of zfs list from both zpools instead of just zpool0.
>
> (just rebooted the system.)
>
> OK here's zpool status:
>
> # zpool status
>   pool: zpool1
>  state: ONLINE
>   scan: scrub repaired 0 in 0h39m with 0 errors on Mon Feb  5 22:55:31 2018
> config:
>
> NAME                      STATE   READ WRITE CKSUM
> zpool1                    ONLINE     0     0     0
>   diskid/DISK-NA7DKXXF    ONLINE     0     0     0
>
> errors: No known data errors
>
>   pool: zpool0
>  state: ONLINE
>   scan: scrub repaired 0 in 3h46m with 0 errors on Thu Mar  1 23:01:29 2018
> config:
>
> NAME          STATE   READ WRITE CKSUM
> zpool0        ONLINE     0     0     0
>   raidz1-0    ONLINE     0     0     0
>     ada1      ONLINE     0     0     0
>     ada2      ONLINE     0     0     0
>     ada3      ONLINE     0     0     0
> >
> >> Confusingly, I didn't need to and don't have to do any of that for
> >> zpool0. What am I doing wrong/what am I missing? Why is zpool0
> >> automatically loading but zpool1 is not? Before ada0 (the failed disk)
> >> was replaced, both loaded on boot.
> >>
> > Please post the output of "zfs list -r -o name,mountpoint,canmount,
> mounted"
> > and also the contents of /etc/fstab.
>
> # zfs list -r -o name,mountpoint,canmount,mounted
> NAME               MOUNTPOINT           CANMOUNT  MOUNTED
> zpool1             /zpool1              on        no
> zpool1/compressed  /zpool1/compressed   on        no
> zpool1/important   /zpool1/important    on        no
> zpool0             /zpool0              on        yes
> zpool0/home        /zpool0/home         on        yes
> zpool0/usr         /zpool0/usr          on        yes
> zpool0/usr/local   /zpool0/usr/local    on        yes
> zpool0/usr/oldsrc  /usr/oldsrc          on        yes
> zpool0/usr/ports   /usr/ports           on        yes
> zpool0/usr/src     /usr/src             on        yes
> zpool0/vms         /zpool0/vms          on        yes
>
> # cat /etc/fstab
> # Device        Mountpoint              FStype     Options       Dump  Pass#
> /dev/ada0s1a    /                       ufs        rw            1     1
> /dev/ada0s1b    none                    swap       sw            0     0
> linprocfs       /compat/linux/proc      linprocfs  rw            0     0
> tmpfs           /compat/linux/dev/shm   tmpfs      rw,mode=1777  0     0
> fdescfs         /dev/fd                 fdescfs    rw,late       0     0
>
> > Also, have you set "zfs_enable=YES" in /etc/rc.conf?
>
> yes.
>
> If I run zfs mount -a, everything zfs is mounted as expected. I'm
> wondering if at bootup, when zfs mount is called (I suppose it must be
> called from somewhere), whether it needs to specify -a. I would not know
> the first place to look though.
>

The relevant code is in /etc/rc.d/zfs, and it already uses "-a".  Have you
checked if /etc/rc.d/zfs is printing any errors to the console during boot?
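
A quick sanity check is to confirm the script is actually enabled and then
run it by hand to see whether it prints anything (an untested suggestion,
but it should be harmless, since already-mounted datasets are skipped):

# service -e | grep zfs
# service zfs start
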
-Alan


Re: zfs problems after rebuilding system

2018-03-02 Thread tech-lists
Hi, thanks for looking at this,

On 02/03/2018 20:39, Alan Somers wrote:
> This doesn't make sense.  vdevs have nothing to do with mounting.  You see
> your vdevs by doing "zpool status".  What are you expecting to see that you
> don't?

sorry, I was confusing terms. I was expecting to see similar to output
of zfs list from both zpools instead of just zpool0.

(just rebooted the system.)

OK here's zpool status:

# zpool status
  pool: zpool1
 state: ONLINE
  scan: scrub repaired 0 in 0h39m with 0 errors on Mon Feb  5 22:55:31 2018
config:

NAME                      STATE   READ WRITE CKSUM
zpool1                    ONLINE     0     0     0
  diskid/DISK-NA7DKXXF    ONLINE     0     0     0

errors: No known data errors

  pool: zpool0
 state: ONLINE
  scan: scrub repaired 0 in 3h46m with 0 errors on Thu Mar  1 23:01:29 2018
config:

NAME          STATE   READ WRITE CKSUM
zpool0        ONLINE     0     0     0
  raidz1-0    ONLINE     0     0     0
    ada1      ONLINE     0     0     0
    ada2      ONLINE     0     0     0
    ada3      ONLINE     0     0     0
> 
>> Confusingly, I didn't need to and don't have to do any of that for
>> zpool0. What am I doing wrong/what am I missing? Why is zpool0
>> automatically loading but zpool1 is not? Before ada0 (the failed disk)
>> was replaced, both loaded on boot.
>>
> Please post the output of "zfs list -r -o name,mountpoint,canmount,mounted"
> and also the contents of /etc/fstab.  

# zfs list -r -o name,mountpoint,canmount,mounted
NAME               MOUNTPOINT           CANMOUNT  MOUNTED
zpool1             /zpool1              on        no
zpool1/compressed  /zpool1/compressed   on        no
zpool1/important   /zpool1/important    on        no
zpool0             /zpool0              on        yes
zpool0/home        /zpool0/home         on        yes
zpool0/usr         /zpool0/usr          on        yes
zpool0/usr/local   /zpool0/usr/local    on        yes
zpool0/usr/oldsrc  /usr/oldsrc          on        yes
zpool0/usr/ports   /usr/ports           on        yes
zpool0/usr/src     /usr/src             on        yes
zpool0/vms         /zpool0/vms          on        yes

# cat /etc/fstab
# Device        Mountpoint              FStype     Options       Dump  Pass#
/dev/ada0s1a    /                       ufs        rw            1     1
/dev/ada0s1b    none                    swap       sw            0     0
linprocfs       /compat/linux/proc      linprocfs  rw            0     0
tmpfs           /compat/linux/dev/shm   tmpfs      rw,mode=1777  0     0
fdescfs         /dev/fd                 fdescfs    rw,late       0     0

> Also, have you set "zfs_enable=YES" in /etc/rc.conf?

yes.

If I run zfs mount -a, everything zfs is mounted as expected. I'm
wondering if at bootup, when zfs mount is called (I suppose it must be
called from somewhere), whether it needs to specify -a. I would not know
the first place to look though.
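
I suppose something like this would at least show where it gets called
from, though I haven't dug into it yet:

# grep -rn "zfs mount" /etc/rc.d/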

thanks,
-- 
J.


Re: zfs problems after rebuilding system

2018-03-02 Thread Alan Somers
On Fri, Mar 2, 2018 at 1:25 PM, tech-lists  wrote:

> Hi,
>
> Importing two zpools after a hd crash, the first pool I imported
> auto-loads at boot. But the second one I'm always having to zfs mount
> (zpool name) then mount the zfs subdirs. The system was like this:
>
> ada0 (this had the OS on. It was replaced after it crashed. Not a member
> of any zpool, not zfs at all, no root-on-zfs). New
> freebsd-11-stable-snapshot was installed to this disk.
>
> ada1 \
> ada2  -- these made up zpool0, a raidz pool. Each device is a 1TB disk.
> ada3 /
>
> zpool1 - this is striped, one 4TB disk, attached via usb3
>
> zpool1/c - this is compressed with lz4
> zpool1/important - this has copies=2 enabled
>
> I ran zpool import zpool0 and then zpool import zpool1 and both imported
> without error. However despite setting the mountpoint for zpool1 as
> /zpool1, on reboot I always have to:
>
> # zfs mount zpool1
>
> or I won't see the drive in df -h or mount or zfs mount.
>
> But I *will* find it at its mountpoint /zpool1. If I do a ls -lah on
> that, I can see zpool1/c and zpool1/important as dirs but not in zfs
> mount or (normal) mount, nor can I see the other dirs that are not vdevs.
>
> If I then zfs mount zpool1 I can see all the dirs and vdevs on that disk
> off its root, in ls -lah. But I don't see the vdevs in zfs mount. I have
> to zfs mount zpool1/c and zfs mount zpool1/important to see the vdevs in
> zfs mount.
>

This doesn't make sense.  vdevs have nothing to do with mounting.  You see
your vdevs by doing "zpool status".  What are you expecting to see that you
don't?


>
> Confusingly, I didn't need to and don't have to do any of that for
> zpool0. What am I doing wrong/what am I missing? Why is zpool0
> automatically loading but zpool1 is not? Before ada0 (the failed disk)
> was replaced, both loaded on boot.
>

Please post the output of "zfs list -r -o name,mountpoint,canmount,mounted"
and also the contents of /etc/fstab.  Also, have you set "zfs_enable=YES"
in /etc/rc.conf?

-Alan


zfs problems after rebuilding system

2018-03-02 Thread tech-lists
Hi,

Importing two zpools after a hd crash, the first pool I imported
auto-loads at boot. But the second one I'm always having to zfs mount
(zpool name) then mount the zfs subdirs. The system was like this:

ada0 (this had the OS on. It was replaced after it crashed. Not a member
of any zpool, not zfs at all, no root-on-zfs). New
freebsd-11-stable-snapshot was installed to this disk.

ada1 \
ada2  -- these made up zpool0, a raidz pool. Each device is a 1TB disk.
ada3 /

zpool1 - this is striped, one 4TB disk, attached via usb3

zpool1/c - this is compressed with lz4
zpool1/important - this has copies=2 enabled

I ran zpool import zpool0 and then zpool import zpool1 and both imported
without error. However despite setting the mountpoint for zpool1 as
/zpool1, on reboot I always have to:

# zfs mount zpool1

or I won't see the drive in df -h or mount or zfs mount.

But I *will* find it at its mountpoint /zpool1. If I do a ls -lah on
that, I can see zpool1/c and zpool1/important as dirs but not in zfs
mount or (normal) mount, nor can I see the other dirs that are not vdevs.

If I then zfs mount zpool1 I can see all the dirs and vdevs on that disk
off its root, in ls -lah. But I don't see the vdevs in zfs mount. I have
to zfs mount zpool1/c and zfs mount zpool1/important to see the vdevs in
zfs mount.

Confusingly, I didn't need to and don't have to do any of that for
zpool0. What am I doing wrong/what am I missing? Why is zpool0
automatically loading but zpool1 is not? Before ada0 (the failed disk)
was replaced, both loaded on boot.

thanks,
-- 
J.