In reading the list archives, am I right to conclude that disks larger than
1 TB need to support EFI? In one of my projects the SAN does not support EFI
labels under Solaris. Does this mean I would have to create a pool with
disks smaller than 1 TB?
TIA.
-Doug
In reading the list archives, am I right to conclude that disks larger than
1 TB need to support EFI? In one of my projects the SAN does not support EFI
labels under Solaris. Does this mean I would have to create a pool with
disks smaller than 1 TB?
I would assume so.
The Solaris VTOC label (likely your only option here) is limited to 1 TB,
so anything larger needs an EFI label.
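If it helps, a minimal sketch of the usual workaround (the LUN name c2t0d0
is hypothetical): ZFS only writes an EFI label when it is handed a whole
disk, so building the pool from a slice keeps the SMI/VTOC label:

  # format c2t0d0                  # put an SMI/VTOC label on the LUN, size slice 0
  # zpool create mypool c2t0d0s0   # a slice, not the whole disk, keeps the VTOC label

The trade-off is that ZFS will not enable the disk write cache when it is
given a slice rather than the whole disk.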
Platform:
- old Dell workstation with an Andataco GigaRAID enclosure
plugged into an Adaptec 39160
- Nevada b51
Current zpool config:
- one two-disk mirror with two hot spares
In my ferocious pounding of ZFS I've managed to corrupt my data
pool. This is what I've been doing to test
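(The test procedure itself is cut off here.) For context, a hedged sketch
of how such corruption is normally surfaced, with a hypothetical pool name:

  # zpool scrub mypool       # re-read every block and verify its checksum
  # zpool status -v mypool   # -v lists any files with unrecoverable errors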
Hi Krzys,
On Thu, 2006-11-30 at 12:09 -0500, Krzys wrote:
my drive did go bad on me, how do I replace it?
You should be able to do this using zpool replace. There's output below
from me simulating your situation with file-based pools.
This is documented in Chapters 7 and 10 of the ZFS Administration Guide.
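A sketch of that simulation; the backing files under /var/tmp are
hypothetical stand-ins, and the commands are the same with real c#t#d#
device names:

  # mkfile 100m /var/tmp/disk1 /var/tmp/disk2 /var/tmp/disk3
  # zpool create testpool mirror /var/tmp/disk1 /var/tmp/disk2
  # zpool replace testpool /var/tmp/disk2 /var/tmp/disk3   # swap the "failed" device for the spare
  # zpool status testpool                                  # shows the resilver in progress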
Douglas Denny wrote:
In reading the list archives, am I right to conclude that disks larger
than 1 TB need to support EFI? In one of my projects the SAN does not
support EFI labels under Solaris. Does this mean I would have to
create a pool with disks smaller than 1 TB?
Out of curiosity ... what array is
Krzys wrote:
my drive did go bad on me, how do I replace it? I am running Solaris 10
U2. (By the way, I thought U3 would be out in November; will it be out
soon? Does anyone know?)
[11:35:14] server11: /export/home/me zpool status -x
  pool: mypool2
 state: DEGRADED
status: One or more
Dave
Which BIOS manufacturers and revisions? That seems to be more of
the problem, as choices are typically limited across vendors... and I take it
you're running 6/06 U2?
Jonathan
On Nov 30, 2006, at 12:46, David Elefante wrote:
Just as background:
I attempted this process on the
Hi Jason,
It seems to me that a full analysis would need some more detailed
information. So, to keep the ball rolling, I'll respond generally.
Jason J. W. Williams wrote:
Hi Richard,
Been watching the stats on the array and the cache hits are 3% on
these volumes. We're
Great, thank you, it certainly helped. I did not want to lose data on that disk,
so I wanted to be safe rather than sorry.
Thanks for the help.
Chris
On Thu, 30 Nov 2006, Bart Smaalders wrote:
Krzys wrote:
my drive did go bad on me, how do I replace it? I am running Solaris 10 U2
(by the
Ah, I did not see your follow-up. Thanks.
Chris
On Thu, 30 Nov 2006, Cindy Swearingen wrote:
Sorry, Bart is correct:
If new_device is not specified, it defaults to
old_device. This form of replacement is useful after an
existing disk has failed and has been physically replaced.
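A minimal sketch of that one-argument form, assuming the failed disk
(hypothetical name c1t2d0) has been physically swapped in the same slot:

  # zpool replace mypool2 c1t2d0   # new disk inherits the old /dev/dsk path
  # zpool status mypool2           # watch the resilver complete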
I would like to update some of our Solaris 10 OS systems to the new ZFS
version that supports hot spares. The Solaris 10 6/06 release does have
ZFS but does not have this feature. What is the best way to upgrade to
this functionality? Also we have a 3/05 version of Solaris and the Sun
Express
On 30/11/06, Michael Barto [EMAIL PROTECTED] wrote:
I would like to update some of our Solaris 10 OS systems to the new ZFS
version that supports hot spares. The Solaris 10 6/06 release does have ZFS but does
not have this feature. What is the best way to upgrade to this functionality?
Hot
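The reply is cut off above; for what it's worth, a hedged sketch of the
usual path once an OS level with the newer ZFS bits is installed (the pool
name tank and spare device c3t0d0 are hypothetical):

  # zpool upgrade                 # report the on-disk version of each pool
  # zpool upgrade -a              # upgrade all pools to the version this OS supports
  # zpool add tank spare c3t0d0   # hot spares can then be attached

Note that an upgraded pool can no longer be imported on an older release.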
In the same vein...
I currently have a 400GB disk that is full of data on a Linux system.
If I buy 2 more disks and put them into a RAID-Z'ed ZFS pool under Solaris,
is there a generally accepted way to build a degraded array with the
2 disks, copy the data to the new filesystem, and then move the original
disk into the array?
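A hedged sketch of the commonly cited (and unsupported) trick, with
hypothetical device names: stand a sparse file in for the missing third
disk, take it offline at once, and replace it with the real disk after the
copy:

  # mkfile -n 350g /var/tmp/fakedisk                # sparse; size it no larger than the real disks
  # zpool create tank raidz c1t1d0 c1t2d0 /var/tmp/fakedisk
  # zpool offline tank /var/tmp/fakedisk            # run degraded while copying the data in
  # zpool replace tank /var/tmp/fakedisk c1t3d0     # the original disk, once emptied

Until that replace finishes resilvering, the pool has no redundancy at all.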
I finally found the answer myself.
By re-reading the docs, I rediscovered the term resilvering, which I had
not understood properly the first time.