Just to close the loop on this, for some other poor soul having similar problems and googling away: I believe I have resolved it. The problem was somewhere on the 750G drive, and was fixed by detaching and re-attaching it to my mirrors.
I actually took the extra step of creating a UFS on
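For anyone in the same spot, the detach/re-attach fix described above boils down to two zpool commands. This is a sketch only: the device names (c5t0d0s0, c1t0d0s0) are hypothetical placeholders; substitute whatever 'zpool status' shows for your own mirror.

```shell
# Drop the suspect slice on the 750G drive out of its mirror.
# (Device names here are examples, not from the original post.)
zpool detach local c5t0d0s0

# Re-attach it alongside the surviving side of the mirror;
# this triggers a full resilver onto the re-attached slice.
zpool attach local c1t0d0s0 c5t0d0s0

# Watch resilver progress and error counters.
zpool status -v local
```

Detaching and re-attaching forces ZFS to rewrite the slice from the healthy side, which is why it can clear a bad label or stale metadata on that device.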
On Tue, Jul 8, 2008 at 8:56 AM, Darren J Moffat [EMAIL PROTECTED] wrote:
Pete Hartman wrote:
I'm curious which enclosures you've had problems with?
Mine are both Maxtor One Touch; the 750 is slightly different in that it has
a FireWire port as well as USB.
I've had VERY bad experiences
ah == Al Hopper [EMAIL PROTECTED] writes:
ah I've had bad experiences with the Seagate products.
I've had bad experiences with all of them.
(maxtor, hgst, seagate, wd)
ah My guess is that it's related to duty cycle -
Recently I've been getting a lot of drives from companies like
On Jul 9, 2008, at 11:12 AM, Miles Nordin wrote:
ah == Al Hopper [EMAIL PROTECTED] writes:
ah I've had bad experiences with the Seagate products.
I've had bad experiences with all of them.
(maxtor, hgst, seagate, wd)
ah My guess is that it's related to duty cycle -
Recently
Also worth noting is that the enterprise-class drives have protection from heavy load that the consumer-class drives don't. In particular, there's no temperature sensor on the voice coil in consumer drives, which means that under heavy seek load (constant I/O), the drive will eventually
James,
May I ask what kind of USB enclosures and hubs you are using? I've had some very bad experiences over the past month with not-so-cheap enclosures.
With regard to eSATA, I found the following chipsets on the SHCL. Are there any others you can recommend?
Silicon Image 3112A
Intel S5400
Intel S5100
Silicon Image
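Before trusting a new eSATA controller in a pool, it's worth a quick sanity check that Solaris actually sees the controller and its disks. A sketch using standard Solaris tools (output varies by system; none of this is from the original thread):

```shell
# List attachment points; SATA/eSATA ports typically show up as sata0/0 etc.
cfgadm -al

# List the disks the kernel actually sees (format exits after listing).
format </dev/null

# Show which driver bound to the controller.
prtconf -D | grep -i sata
```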
I'm curious which enclosures you've had problems with?
Mine are both Maxtor One Touch; the 750 is slightly different in that it has a
FireWire port as well as USB.
This message posted from opensolaris.org
zfs-discuss mailing list
Pete Hartman wrote:
I'm curious which enclosures you've had problems with?
Mine are both Maxtor One Touch; the 750 is slightly different in that it has
a FireWire port as well as USB.
I've had VERY bad experiences with the Maxtor One Touch and ZFS. To the
point that we gave up trying to
I got a 750 and sliced it and mirrored the other pieces.
Maybe you ran into a bug, because that situation would not be tested much in the wild... or maybe you just got unlucky and your computer toasted some data.
Thanks Jeff. I hope my frustration in all this doesn't sound directed at
However, my pool is not behaving well. I have had
insufficient replicas for the pool and corrupted
data for the mirror piece that is on both the USB
drives.
I'm learning about ZFS for the same reason: I want a reliable home server. So I've been reading the archives. In March 2007 there was
Bohdan Tashchuk wrote:
However, my pool is not behaving well. I have had
insufficient replicas for the pool and corrupted
data for the mirror piece that is on both the USB
drives.
I'm learning about ZFS for the same reason: I want a reliable home server.
So I've been reading the
I have a zpool which has grown organically. I had a 60GB disk, I added a 120, I added a 500, then I got a 750 and sliced it and mirrored the other pieces.
The 60 and the 120 are internal PATA drives, the 500 and 750 are Maxtor
OneTouch USB drives.
The original system I created the 60+120+500 pool
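An organically grown layout like this can be built up roughly as follows. This is a sketch with invented device names (c0d0, c5t0d0s0, etc.), not the poster's actual configuration; and note that mixing internal disks with USB-enclosure slices this way is exactly what caused trouble later in this thread.

```shell
# Start with single-disk vdevs, no redundancy yet (names are hypothetical).
zpool create local c0d0 c0d1 c2t0d0

# Later, partition the larger 750G drive into slices (via format),
# then attach one slice to each existing vdev, converting each
# single-disk vdev into a two-way mirror.
zpool attach local c0d0   c5t0d0s0
zpool attach local c0d1   c5t0d0s1
zpool attach local c2t0d0 c5t0d0s3
```

Each 'zpool attach' kicks off a resilver, so the slices become full copies of their partners over time.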
I'm doing another scrub after clearing insufficient replicas only to find
that I'm back to the report of insufficient replicas, which basically leads me
to expect this scrub (due to complete in about 5 hours from now) won't have any
benefit either.
-bash-3.2# zpool status local
pool: local
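The clear-and-scrub cycle described above amounts to a few commands; a sketch, assuming the pool name 'local' from the status output:

```shell
# Reset the error counters and any faulted state on the pool's devices.
zpool clear local

# Re-read and verify every block; errors that recur point at a real
# device or cabling problem rather than a stale fault.
zpool scrub local
zpool status -v local

# If the pool reports insufficient replicas again, ask FMA why.
fmdump -ev
```

If the same device keeps accumulating errors after a clear and scrub, the fault is persistent and the fmdump output is the next place to look.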
As a first step, 'fmdump -ev' should indicate why it's complaining
about the mirror.
Jeff
On Sun, Jul 06, 2008 at 07:55:22AM -0700, Pete Hartman wrote:
I'm doing another scrub after clearing insufficient replicas only to find
that I'm back to the report of insufficient replicas, which
I'm not sure how to interpret the output of fmdump:
-bash-3.2# fmdump -ev
TIME                 CLASS                          ENA
Jul 06 23:25:39.3184 ereport.fs.zfs.vdev.bad_label  0x03b3e4e8b1900401
Jul 07 03:32:14.3561 ereport.fs.zfs.checksum        0xdaffb466a7e1
Jul 07 03:32:14.3561
15 matches