OK, I have finally settled on hardware:
2x LSI SAS3081E-R controllers
2x Seagate Momentus 5400.6 rpool disks
15x Hitachi 5K3000 'data' disks
I am still undecided as to how to group the disks. I have read elsewhere that
raid-z1 is best suited to either 3 or 5 disks, and raid-z2 is better suited
The LSI2008 chipset is supported and works very well.
I would actually use 2 vdevs; 8 disks in each. And I would configure each vdev
as raidz2. Maybe use one hot spare.
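The capacity trade-off between the two layouts being discussed can be sketched with some back-of-envelope arithmetic. This is a sketch only: it assumes 3TB 5K3000 drives (the post does not state the size) and ignores ZFS metadata overhead; note also that 2x8 needs 16 drives, one more than the 15 listed.

```shell
#!/bin/sh
# Rough usable-capacity comparison in TB. DISK_TB=3 is an assumption;
# the 5K3000 shipped in several sizes.
DISK_TB=3
# 2 vdevs of 8-disk raidz2: 2 parity disks per vdev
RZ2=$(( (8 - 2) * 2 * DISK_TB ))
# 3 vdevs of 5-disk raidz1: 1 parity disk per vdev
RZ1=$(( (5 - 1) * 3 * DISK_TB ))
echo "2x8 raidz2: ${RZ2} TB usable, any 2 disks per vdev may fail"
echo "3x5 raidz1: ${RZ1} TB usable, only 1 disk per vdev may fail"
```

Under these assumptions the two layouts give the same usable space, so raidz2's extra parity comes essentially for free (minus the 16th drive).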
And I also have personal, subjective reasons: I like using the number 8 in
computers. 7 is an ugly number. Everything is
Thanks.
I ruled out the SAS2008 controller as my motherboard is only PCIe 1.0 so would
not have been able to make the most of the difference in increased bandwidth.
I can't see myself upgrading every few months (my current WHZ build has lasted
over 4 years without a single change) so by the
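The PCIe 1.0 reasoning above can be sanity-checked with the published per-lane rates (PCIe 1.0 moves 250 MB/s per lane, PCIe 2.0 moves 500 MB/s; both LSI cards are x8). The ~100 MB/s per-drive sequential figure is an assumption for 5400 rpm disks.

```shell
#!/bin/sh
# Per-slot bandwidth for an x8 card, in MB/s.
LANES=8
PCIE1=$(( 250 * LANES ))   # PCIe 1.0 x8
PCIE2=$(( 500 * LANES ))   # PCIe 2.0 x8
# ~8 drives per controller at an assumed ~100 MB/s sequential each:
DRIVES_MBS=$(( 8 * 100 ))
echo "PCIe 1.0 x8: ${PCIE1} MB/s; PCIe 2.0 x8: ${PCIE2} MB/s"
echo "8 drives need roughly ${DRIVES_MBS} MB/s"
```

So even a PCIe 1.0 x8 slot has headroom over what eight 5400 rpm drives can stream, which supports the choice of the older SAS1068E-based card here.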
Hey guys,
I had a zfs system in raidz1 that was working until there was a power outage
and now I'm getting this:
pool: tank
state: DEGRADED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
OK, so now I have no idea what to do. The scrub is not working either. The pool
is only 3x 1.5TB drives so it should not take so long. Does anyone know what I
should do next?
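A first-response checklist for a pool in this state might look like the sketch below, assuming the pool name "tank" from the status output above. The zpool and Solaris commands shown in the comments are standard tools; run them as root on the affected host.

```shell
#!/bin/sh
# Sketch of initial diagnostics for a pool degraded after a power outage.
POOL=tank
# Show per-device state and any files with permanent errors:
#   zpool status -v "$POOL"
# Confirm the OS still sees all three drives after the outage:
#   format            # should return promptly and list every disk
#   iostat -En        # per-device error counters
# Only once the devices respond again:
#   zpool clear "$POOL" && zpool scrub "$POOL"
echo "pool to check: $POOL"
```

The key point is ordering: verify the drives are actually visible and error-free at the OS level before clearing pool errors and scrubbing, otherwise the scrub will just hit the same I/O failures.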
On Tue, Jul 5, 2011 at 6:54 AM, Lanky Doodle lanky_doo...@hotmail.com wrote:
OK, I have finally settled on hardware;
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Orvar Korvar
Here is my problem:
I have a 1.5TB disk with OpenSolaris (b134, b151a) using non-AHCI (IDE) mode.
I then changed to AHCI in the BIOS, which results in severe problems: I cannot
boot the
On Tue, Jul 5, 2011 at 8:03 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
On Tue, Jul 5, 2011 at 7:47 AM, Lanky Doodle lanky_doo...@hotmail.com wrote:
I ruled out the SAS2008 controller as my motherboard is only PCIe 1.0 so
Only PCIe 1.0? What chipset is that based on?
Hello,
Are you certain that after the outage your disks are indeed accessible?
* What does BIOS say?
* Are there any errors reported in dmesg output or the
/var/adm/messages file?
* Does the format command return in a timely manner?
** Can you access and print the disk labels with the format command?
OK, so I switch back and then I have my data back?
But it does not work, because in the meantime, after I switched, I tried a
zpool import, which messed up the drive. Then I switched back, but my data is
still not accessible.
Earlier, when I switched, I did not do a zpool import, and when I switched
2011-07-05 17:11, Fajar A. Nugraha wrote:
On Tue, Jul 5, 2011 at 9:11 AM, Fajar A. Nugraha w...@fajar.net wrote:
On Tue, Jul 5, 2011 at 12:54 PM, Lanky Doodle lanky_doo...@hotmail.com wrote:
OK, I have finally settled on hardware;
2x LSI SAS3081E-R controllers
Beware that this controller does not support drives larger than 2TB.
--
Trond Michelsen
2011-07-05 19:21, Paul Kraus wrote:
While I agree that you should not change the controller mode with data
behind it, I did do that on a Supermicro system running OpenSuSE and
Linux LVM mirrors with no issues. I suspect because Linux both loads
the AHCI drivers in the mini-root (to use a Sun
I have already formatted one disk, so I can no longer try this.
(But importing the zpool with the name rpool and exporting the rpool again
was successful. I can now use the disk as usual. This did not work on the
other disk, so I formatted it.)
Thanks for your help,
I did a zpool clear and now this happens:
pool: tank
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
Reading through this page
(http://dlc.sun.com/osol/docs/content/ZFSADMIN/gbbwl.html), it seems like all I
need to do is 'rm' the file. The problem is finding it in the first place. Near
the bottom of this page it says:
If the damage is within a file data block, then the file can safely be
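Finding the damaged file is the part `zpool status -v` handles for you: it lists the affected paths after the "Permanent errors" line. A small sketch of pulling just those paths out, using a hypothetical status tail (the file names are made up for illustration):

```shell
#!/bin/sh
# "zpool status -v" lists damaged files after the "Permanent errors" line;
# this extracts just the paths so they can be restored or rm'd.
# The sample output below is hypothetical.
cat > /tmp/status_tail.txt <<'EOF'
errors: Permanent errors have been detected in the following files:

        /tank/media/example.avi
        /tank/docs/example.txt
EOF
# Print every non-blank line after the marker:
awk '/Permanent errors/ {hit=1; next} hit && NF {print $1}' /tmp/status_tail.txt
```

On the real pool you would feed `zpool status -v tank` into the same awk instead of the sample file.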
On 2011-Jul-05 21:03:50 +0800, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
...
I suspect the problem is because I changed to AHCI.
This is normal,