So I noticed this during a scrub:
scrub in progress for 307445734561825855h10m, 89.55% done,
307445734561825859h41m to go
Which comes to more than 35 trillion years (roughly 3.07 x 10^17 hours divided
by 8,766 hours per year). This makes ZFS the most enduring technology ever!
Not really a bug--my clock was reset during the scrub. Just thought it was
amusing.
Posted this reply in the help forum, copying it here:
I frequently use mirrors to replace disks, or even as a backup with an esata
dock. So I set up v134 with a mirror in VB, ran installgrub, then detached each
drive in turn. I completely duplicated and can confirm your problem, and since
I'm
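In case anyone wants to repeat the same kind of test, the sequence I mean
looks roughly like this; the pool and device names (rpool, c0t1d0s0) are
just placeholders for whatever your setup uses:

# confirm both halves of the root mirror are ONLINE
zpool status rpool
# make the second disk bootable before detaching anything
/usr/sbin/installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
# detach one side, then try booting from each disk in turn
zpool detach rpool c0t1d0s0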
Good idea (importing from a LiveCD). I just did this, and it imported without
any unusual complaint, except for the usual DEGRADED state because a member
is missing.
Also, for whatever this is worth, I noticed that v134 now shows the mirror (or
the first mirror) as mirror-0 instead of just
Depends on a lot of things. I'd let it sit for at least half an hour to see if
you get any messages. 30 seconds, if it's waiting for the driver stack
timeouts, is way too short.
I'm not the OP, but I've let my VB guest sit for an hour now, and nothing new has
What I don't understand, then, is why I can do this fairly frequently without
any delays on my 2009.06 and S10 systems. I have a three disk mirror at home,
one disk in an esata dock. Sometimes I don't turn on the dock, and the system
boots just as quickly. Likewise, I've done this with
I have an external disk that was offline yesterday, so today when I booted my
system I made sure it was turned on. ZFS of course brought it current with the
pool (I have a 3 disk zfs mirror), and for the first time I saw this result for
the resilver process:
resilver completed after
Richard already addressed this process, but I do this sort of thing all the
time (moving to a larger disk or new computer). I simply create the partition
on the new disk with format, then zpool attach -f the larger drive. Once it's
done resilvering, use installgrub as normal. Remove the smaller
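For reference, a rough sketch of that sequence; the device names here
(c0t0d0s0 for the old disk, c1t2d0s0 for the new, larger one) and the pool
name rpool are only examples:

# label/partition the new disk first (select it from the format menu)
format
# attach the larger disk as a new mirror member and wait for the resilver
zpool attach -f rpool c0t0d0s0 c1t2d0s0
zpool status rpool
# once resilvering completes, make the new disk bootable
/usr/sbin/installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t2d0s0
# then the smaller disk can be detached
zpool detach rpool c0t0d0s0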
This comment has only to do with booting an old drive on a different
computer--a bit of a tangent to this discussion:
I've also used this to migrate to a new computer with larger disks. The only
caveat I've run into is that you need to move from SATA/AHCI to SATA/AHCI, or
SATA/IDE to SATA/IDE. They
I have a much more general question regarding this thread. I have a Sun T5120
(T2 quad core, 1.4GHz) with two 10K RPM SAS drives in a mirrored pool running
Solaris 10 u7. The disk performance seems horrible. I have the same apps
running on a Sun X2100M2 (dual core 1.8GHz AMD) also running
I don't swear. The word it bleeped was not a bad word.
2009.06 is v111b, but you're running v111a. I don't know, but perhaps the a-b
transition addressed this issue, among others?
Did you run installgrub on both disks:
/usr/sbin/installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/cxtydzs0
Or the equivalent. If you can't boot from either, how did either become your
boot disk?
If you want to use a single mirror member disk to boot from (i.e. for
testing), I
Just trying to help since no one has responded.
Have you tried importing with an alternate root? We don't know your setup,
such as other pools, types of controllers and/or disks, or how your pool was
constructed.
Try importing something like this:
zpool import -R /tank2 -f
By the way, if you try my idea and both disks remain physically attached, both
should be found and the mirror will be intact, regardless of which disk you
boot from. If one is physically disconnected, then you will have complaints
about the missing disk, but it should still work if everything
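For completeness, the full form of that import; the pool name (mypool) and
alternate root (/mnt/altroot) are only examples, substitute your own:

# list pools that are visible for import
zpool import
# force-import the pool under an alternate root so its mounts stay out of
# the way of anything already mounted
zpool import -f -R /mnt/altroot mypool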
I have the time slider enabled on two 2008.11 systems, and I noticed that for
both systems, the weekly snapshots did not run on 12/15/08.
This is no big deal. It's just an observation that I thought I'd mention in
case it leads to something that needs to be fixed.
Interesting that the weekly
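If anyone wants to check the same thing on their system, something like
this should do it (service and snapshot naming can vary between releases):

# see whether the auto-snapshot services are online
svcs -a | grep auto-snapshot
# list the weekly snapshots that actually exist
zfs list -t snapshot | grep weekly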
I've done some more research, but would still greatly appreciate someone
helping me understand this.
It seems that only writes to the home directory of the person logged in at the
console suffer from degraded performance. If I write to a subdirectory
beneath my home, or to any other
If that were the case, why would it matter if I was logged into the console,
and why would subdirectories of my home exhibit better write performance than
the top level home directory? A write to /export/home/username is slower than
to /export/home/username/blah, but ONLY if that user is
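For anyone who wants to time a write like that themselves, something like
the following works; the file size and paths are only examples:

# time a write into the top-level home directory
ptime mkfile 256m /export/home/username/testfile
# time the same write one level down
ptime mkfile 256m /export/home/username/blah/testfile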
This smells of name resolution delays somewhere. Do you have a shell prompt
that gets some host name or user name from name services? Is your /home
directory owned by a non-existing user or group? Do you accidentally have
something enabled in /etc/nsswitch.conf that does not exist
This sounds plausible, I suppose. Being unfamiliar with this tracker daemon,
I can blindly accept it as a maybe!
For clarity, here's how you can reproduce what I'm asking about:
This is for local file systems on build 86 and not about NFS or
any remote mounts. You can repeat these 100 times and always get
the same result, whether you reboot between trials or leave the
system running.
1. Log into the
I cannot recreate this on b101. There is no significant difference between
the two on my system.
That's encouraging... unless no one can reproduce it on 86 either, in which
case I'm forgetting something. I've done this a dozen times on several systems,
so maybe ZFS performance has been improved.
What
Bingo! I just updated a system from 86 to 99 and the problem is gone. Even
better, it was a VB guest, and the ZFS performance on the guest increased 5x in
this test, as I mentioned earlier. Granted, a VB guest may not be the best
test and it only applies to top level home directories, but it
After a zpool upgrade, this simple test's write speed jumped up yet another
20%. Looks like ZFS is getting better. As one would hope and expect.
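In case it's useful to anyone else, the pool upgrade itself is just this
(the pool name rpool is only an example):

# show the on-disk format versions this build supports
zpool upgrade -v
# upgrade a specific pool to the latest supported version
zpool upgrade rpool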
I apologize if this has been addressed countless times, but I have searched
and searched and have not found the answer.
I'm rather new to ZFS and have learned a lot about it so far. At least one
thing confuses me, however. I've noticed that writes to the boot disk in
OpenSolaris (i.e. pool