I based my fix heavily on that patch from the PR, but I rewrote it
enough that I might've made any number of mistakes, so it needs fresh
testing.
Ok, have been rebooting with the patch every ten minutes for 24 hours
now, and it comes back up perfectly every time, so as far as I am
On Mon, 2018-03-12 at 17:21 +, Pete French wrote:
>
> On 10/03/2018 23:48, Ian Lepore wrote:
> >
> > I based my fix heavily on that patch from the PR, but I rewrote it
> > enough that I might've made any number of mistakes, so it needs fresh
> > testing. The main change I made was to make
On 10/03/2018 23:48, Ian Lepore wrote:
I based my fix heavily on that patch from the PR, but I rewrote it
enough that I might've made any number of mistakes, so it needs fresh
testing. The main change I made was to make it a lot less noisy while
waiting (it only mentions the wait once, unless
On Sat, 2018-03-10 at 23:42 +, Pete French wrote:
> >
> > It looks like r330745 applies fine to stable-11 without any changes,
> > and there's plenty of value in testing that as well, if you're already
> > set up for that world.
> >
>
> I've been running the patch from the PR in production
It looks like r330745 applies fine to stable-11 without any changes,
and there's plenty of value in testing that as well, if you're already
set up for that world.
I've been running the patch from the PR in production since the original
bug report and it works fine. I haven't looked at
On Sat, 2018-03-10 at 23:08 +, Pete French wrote:
> Ah, thank you! I haven't run current before, but as this is such an issue
> for us I'll set up an Azure machine running it and have it reboot every
> five minutes or so to check it works OK. Unfortunately the error doesn't
> show up
Ah, thank you! I haven't run current before, but as this is such an issue
for us I'll set up an Azure machine running it and have it reboot every
five minutes or so to check it works OK. Unfortunately the error doesn't
show up consistently, as it's a race condition. Will let you know if it
fails
On Sat, 2018-03-03 at 16:19 +, Pete French wrote:
>
> >
> > That won't work for the boot drive.
> >
> > When no boot drive is detected early enough, the kernel goes to the
> > mountroot prompt. That seems to hold a Giant lock which inhibits
> > further progress being made. Sometimes
Eugene Grosbein eugen at grosbein.net wrote on
Mon Mar 5 12:20:47 UTC 2018:
> 05.03.2018 19:10, Dimitry Andric wrote:
>
>>> When no boot drive is detected early enough, the kernel goes to the
>>> mountroot prompt. That seems to hold a Giant lock which inhibits
>>> further progress being made.
05.03.2018 19:10, Dimitry Andric wrote:
>> When no boot drive is detected early enough, the kernel goes to the
>> mountroot prompt. That seems to hold a Giant lock which inhibits
>> further progress being made. Sometimes progress can be made by trying
>> to mount unmountable partitions on other
On 3 Mar 2018, at 13:56, Bruce Evans wrote:
>
> On Sat, 3 Mar 2018, tech-lists wrote:
>> On 03/03/2018 00:23, Dimitry Andric wrote:
...
>>> Whether this is due to some sort of BIOS handover trouble, or due to
>>> cheap and/or crappy USB-to-SATA bridges (even with brand WD
That won't work for the boot drive.
When no boot drive is detected early enough, the kernel goes to the
mountroot prompt. That seems to hold a Giant lock which inhibits
further progress being made. Sometimes progress can be made by trying
to mount unmountable partitions on other drives, but
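For anyone hitting the mountroot prompt because a slow USB boot device has not attached yet, one mitigation worth knowing about (independent of the patch being tested in this thread) is the vfs.mountroot.timeout loader tunable, which tells the kernel how long to keep retrying the root mount; the value below is illustrative, not from this thread:

```shell
# /boot/loader.conf -- illustrative value; vfs.mountroot.timeout is the
# number of seconds the kernel keeps waiting for the root device before
# dropping to the mountroot> prompt.
vfs.mountroot.timeout="30"
```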
03.03.2018 19:56, Bruce Evans wrote:
> On Sat, 3 Mar 2018, tech-lists wrote:
>
>> On 03/03/2018 00:23, Dimitry Andric wrote:
>>> Indeed. I have had the following for a few years now, due to USB drives
>>> with ZFS pools:
>>>
>>> --- /usr/src/etc/rc.d/zfs 2016-11-08 10:21:29.820131000 +0100
On 03/03/2018 12:56, Bruce Evans wrote:
> That won't work for the boot drive.
In my case the workaround is fine because it's not a boot drive
--
J.
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
On Sat, 3 Mar 2018, tech-lists wrote:
On 03/03/2018 00:23, Dimitry Andric wrote:
Indeed. I have had the following for a few years now, due to USB drives
with ZFS pools:
--- /usr/src/etc/rc.d/zfs 2016-11-08 10:21:29.820131000 +0100
+++ /etc/rc.d/zfs 2016-11-08 12:49:52.971161000 +0100
On 03/03/2018 00:23, Dimitry Andric wrote:
> Indeed. I have had the following for a few years now, due to USB drives
> with ZFS pools:
>
> --- /usr/src/etc/rc.d/zfs 2016-11-08 10:21:29.820131000 +0100
> +++ /etc/rc.d/zfs 2016-11-08 12:49:52.971161000 +0100
> @@ -25,6 +25,8 @@
>
>
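The diff above is truncated in the archive, so here is a sketch of the kind of change it describes: polling for the pool's backing device for a bounded time before /etc/rc.d/zfs runs its mount. The function name, device path, and timeout are illustrative, not from the original patch.

```shell
# Sketch only -- not the actual patch from the thread (the archived
# diff is truncated). Poll for a device node for a bounded number of
# seconds; returns success only if the node appeared in time.
wait_for_dev() {
        # $1 = device node to wait for, $2 = max seconds to wait
        _n=0
        while [ ! -e "$1" ] && [ "$_n" -lt "$2" ]; do
                sleep 1
                _n=$((_n + 1))
        done
        [ -e "$1" ]
}

# In the rc script this would gate the real mount, e.g. (illustrative):
#   wait_for_dev /dev/da0 30 && zfs mount -a
```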
On 03/03/2018 00:09, Freddie Cash wrote:
> You said it's an external USB drive, correct? Could it be a race condition
> during the boot process where the USB mass storage driver hasn't detected
> the drive yet when /etc/rc.d/zfs is run?
>
> As a test, add a "sleep 30" in that script before the
On 3 Mar 2018, at 01:09, Freddie Cash wrote:
>
> You said it's an external USB drive, correct? Could it be a race condition
> during the boot process where the USB mass storage driver hasn't detected
> the drive yet when /etc/rc.d/zfs is run?
>
> As a test, add a "sleep 30"
You said it's an external USB drive, correct? Could it be a race condition
during the boot process where the USB mass storage driver hasn't detected
the drive yet when /etc/rc.d/zfs is run?
As a test, add a "sleep 30" in that script before the "zfs mount -a" call
and reboot.
Cheers,
Freddie
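Concretely, Freddie's test amounts to something like this in /etc/rc.d/zfs. The function name and flags are from memory of the stock script, not from the thread, so treat this as a sketch:

```shell
# Sketch of the suggested test change in /etc/rc.d/zfs; the function
# name and the mount flags are assumptions, not quoted from the thread.
zfs_start_main()
{
        sleep 30        # crude test: give the USB mass storage driver
                        # time to attach the drive before mounting
        zfs mount -va
        zfs share -a
}
```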
On 02/03/2018 21:56, Alan Somers wrote:
> dmesg only shows stuff that comes from the kernel, not the console. To
> see what's printed to the console, you'll actually have to watch it. Or
> enable /var/log/console.log by uncommenting the appropriate line in
> /etc/syslog.conf.
ok did that,
On 02/03/2018 21:56, Alan Somers wrote:
> dmesg only shows stuff that comes from the kernel, not the console. To see
> what's printed to the console, you'll actually have to watch it. Or enable
> /var/log/console.log by uncommenting the appropriate line in
> /etc/syslog.conf.
ok will do this
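For reference, enabling the console log on a stock system is a matter of uncommenting one line and creating the log file with restrictive permissions (paths shown are the FreeBSD defaults):

```shell
# In /etc/syslog.conf, uncomment this line:
#console.info                                   /var/log/console.log
# then create the file and restart syslogd:
touch /var/log/console.log
chmod 600 /var/log/console.log
service syslogd restart
```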
On Fri, Mar 2, 2018 at 2:53 PM, tech-lists wrote:
> On 02/03/2018 21:38, Alan Somers wrote:
> > The relevant code is in /etc/rc.d/zfs, and it already uses "-a". Have
> > you checked if /etc/rc.d/zfs is printing any errors to the console
> > during boot?
>
> Nothing much in
On 02/03/2018 21:38, Alan Somers wrote:
> The relevant code is in /etc/rc.d/zfs, and it already uses "-a". Have
> you checked if /etc/rc.d/zfs is printing any errors to the console
> during boot?
Nothing much in dmesg -a
# dmesg -a | egrep -i zfs
ZFS filesystem version: 5
ZFS storage pool
On Fri, Mar 2, 2018 at 2:26 PM, tech-lists wrote:
> Hi, thanks for looking at this,
>
> On 02/03/2018 20:39, Alan Somers wrote:
> > This doesn't make sense. vdevs have nothing to do with mounting. You
> see
> > your vdevs by doing "zpool status". What are you expecting
Hi, thanks for looking at this,
On 02/03/2018 20:39, Alan Somers wrote:
> This doesn't make sense. vdevs have nothing to do with mounting. You see
> your vdevs by doing "zpool status". What are you expecting to see that you
> don't?
sorry, I was confusing terms. I was expecting to see similar
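To make the terminology concrete: vdevs show up in `zpool status`, while what mounts where is a per-dataset property. A quick way to inspect both, using the pool name from later in this thread (adjust to taste):

```shell
zpool status tank                      # vdev tree: disks, mirrors, raidz groups
zfs list -r tank                       # datasets and their mountpoints
zfs get -r canmount,mountpoint tank    # properties that govern auto-mounting
```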
On Fri, Mar 2, 2018 at 1:25 PM, tech-lists wrote:
> Hi,
>
> Importing two zpools after a hd crash, the first pool I imported
> auto-loads at boot. But the second one I'm always having to zfs mount
> (zpool name) then mount the zfs subdirs. The system was like this:
>
> ada0
Hi,
Importing two zpools after a hd crash, the first pool I imported
auto-loads at boot. But the second one I'm always having to zfs mount
(zpool name) then mount the zfs subdirs. The system was like this:
ada0 (this had the OS on. It was replaced after it crashed. Not a member
of any zpool, not
- Alexandre Biancalana [EMAIL PROTECTED] wrote:
On 1/29/08, ZsUM ZsUM [EMAIL PROTECTED] wrote:
Dump header from device /dev/ad4s1b
Architecture: amd64
Architecture Version: 2
Dump Length: 253394944B (241 MB)
Blocksize: 512
Dumptime: Sun Jan 20 23:51:58 2008
Hostname:
Magic:
I'm not clear about which version of ZFS code is in FreeBSD, and whether the
free space map bug is present there or not. Does anyone know which ZFS is in
FreeBSD?
Anyways, if you really need the data, it might be possible to boot the newest version of OpenSolaris, and let it repair your
- Hugo Silva [EMAIL PROTECTED] wrote:
I'm not clear about which version of ZFS code is in FreeBSD, and
whether the free space map bug is present there or not. Does anyone
know which ZFS is in FreeBSD?
Anyways, if you really need the data, it might be possible to boot
the newest
Hi,
I use FreeBSD 7.0 RC1 AMD64 and 4 WD 250GB HDDs in RAID-Z, and a few weeks ago
my server crashed. In the log I found an I/O error, and since then I can't
import my zpool.
When I run zpool import, I get the following message:
ginger# zpool import
pool: tank
id: 9268868588611347691
state: ONLINE
On 1/29/08, ZsUM ZsUM [EMAIL PROTECTED] wrote:
Dump header from device /dev/ad4s1b
Architecture: amd64
Architecture Version: 2
Dump Length: 253394944B (241 MB)
Blocksize: 512
Dumptime: Sun Jan 20 23:51:58 2008
Hostname:
Magic: FreeBSD Kernel Dump
Version String: FreeBSD 7.0-RC1 #0: Mon