Hello... Since there has been much discussion about zpool import failures
resulting in loss of an entire pool, I thought I would illustrate a scenario
I just went through to recover a faulted pool that wouldn't import under
Solaris 10 U5. While this is a simple scenario, and the data was not
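Recovery in a case like this usually starts with seeing what zpool import
can find, then forcing the import. A minimal sketch, assuming a pool named
tank with devices under /dev/dsk:

    zpool import                # scan the default device paths for importable pools
    zpool import -d /dev/dsk    # scan an explicit directory if the devices moved
    zpool import -f tank        # force the import if the pool claims to be in use
    zpool status -v tank        # see which vdevs, if any, are still faulted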
[EMAIL PROTECTED] wrote on 09/15/2008 11:32:15 PM:
Brandon High wrote:
On Fri, Sep 12, 2008 at 11:49 AM, Dale Ghent [EMAIL PROTECTED]
wrote:
Did I detect a (well-done) metaphor for shared ZFS?
Probably not. It looks like a deduplication / MAID solution.
Yeah, I think they blew
s == Solaris [EMAIL PROTECTED] writes:
s Point being that even if you can't run OpenSolaris due to
s support issues, you may still be able to use OpenSolaris to
s help resolve ZFS issues that you might run into in Solaris 10.
glad ZFS is improving, but this sentence is a
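In practice that means booting the newer bits and working on the pool
there. A minimal sketch, assuming an OpenSolaris live environment and a
pool named tank (this only helps if the pool version hasn't been upgraded
past what Solaris 10 understands):

    zpool import -f tank    # import under the newer ZFS code
    zpool scrub tank        # let it repair what it can
    zpool status tank       # wait for the scrub to complete cleanly
    zpool export tank       # hand the pool back to Solaris 10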
2008/9/15 gm_sjo:
2008/9/15 Ben Rockwood:
On Thumpers I've created single pools of 44 disks, in 11-disk RAIDZ2s.
I've come to regret this. I recommend keeping pools reasonably sized
and keeping stripes thinner than this.
Could you clarify why you came to regret it? I was intending to
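For reference, the layout under discussion would be built roughly like
this (device names are hypothetical; a real Thumper spreads its disks
across six controllers):

    zpool create tank \
        raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
               c0t6d0 c0t7d0 c1t0d0 c1t1d0 c1t2d0 \
        raidz2 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c2t0d0 \
               c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
    # ...plus two more 11-disk raidz2 vdevs to reach all 44 data disks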
I've recently upgraded my x4500 to Nevada build 97, and am having problems with
the iSCSI target.
Background: this box is used to serve NFS underlying a VMware ESX environment
(zfs filesystem-type datasets) and presents iSCSI targets (zfs zvol datasets)
for a Windows host and to act as
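For context, zvol-backed targets in a setup like that are typically
created and exported with the shareiscsi property from that era. A minimal
sketch, assuming a pool named tank:

    zfs create -V 100G tank/winvol       # block-device-backed dataset (zvol)
    zfs set shareiscsi=on tank/winvol    # export it through the iscsitgt daemon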
On Tue, Sep 16, 2008 at 10:03 PM, Ben Rockwood [EMAIL PROTECTED] wrote:
gm_sjo wrote:
2008/9/15 gm_sjo:
2008/9/15 Ben Rockwood:
On Thumpers I've created single pools of 44 disks, in 11-disk RAIDZ2s.
I've come to regret this. I recommend keeping pools reasonably sized
and keeping stripes
jd == Jim Dunham [EMAIL PROTECTED] writes:
jd If at the time the SNDR replica is deleted the set was
jd actively replicating, along with ZFS actively writing to the
jd ZFS storage pool, I/O consistency will be lost, leaving the
jd ZFS storage pool in an indeterminate state on the
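The usual way to avoid that window is to quiesce ZFS before touching the
replica. A rough sketch, assuming a pool named tank (the sndradm arguments
naming the set depend entirely on how it was configured, so they are
elided here):

    zpool export tank    # flush and cleanly close the pool first
    sndradm -n -d ...    # then disable the SNDR set (set arguments elided)
    zpool import tank    # resume local use of the pool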
Moore, Joe wrote:
I've recently upgraded my x4500 to Nevada build 97, and am having problems
with the iSCSI target.
Background: this box is used to serve NFS underlying a VMware ESX environment
(zfs filesystem-type datasets) and presents iSCSI targets (zfs zvol datasets)
for a Windows
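When the target misbehaves after an upgrade like this, the first things
worth checking are what the target daemon reports and whether its service
is healthy. A quick sketch using the iscsitadm CLI from that era:

    iscsitadm list target -v    # verify the targets and their backing stores
    svcs -x iscsitgt            # check the iSCSI target service for faults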
On Tue, Sep 16, 2008 at 2:28 PM, Peter Tribble [EMAIL PROTECTED] wrote:
For what it's worth, we put all the disks on our thumpers into a single pool -
mostly it's 5x 8+1 raidz1 vdevs with a hot spare and 2 drives for the OS - and
we would happily go much bigger.
so you have 9-drive raidz1 (8 disks
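The arithmetic fills all 48 bays: 5 vdevs x 9 drives = 45, plus one hot
spare and two OS disks. A sketch of that layout with hypothetical device
names:

    zpool create tank \
        raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 \
               c0t5d0 c0t6d0 c0t7d0 c1t0d0 \
        raidz1 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
               c1t6d0 c1t7d0 c2t0d0 c2t1d0 \
        spare c5t7d0
    # ...plus three more 9-disk raidz1 vdevs to reach 45 data drives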
Sorry, I popped up to Hokkaido for a holiday. I want to thank you all
for the replies.
I mentioned AVS as I thought it to be the only product close to
enabling us to do a (makeshift) fail-over setup.
We have 5-6 ZFS filesystems, and 5-6 zvols with UFS (for quotas). To do
zfs send snapshots
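A send/recv-based makeshift fail-over usually amounts to a scripted
incremental loop. A minimal sketch, assuming a standby host named standby
and a filesystem tank/fs:

    zfs snapshot tank/fs@t1
    zfs send tank/fs@t1 | ssh standby zfs recv -F tank/fs    # initial full copy
    # later, ship only the changes since the last snapshot:
    zfs snapshot tank/fs@t2
    zfs send -i tank/fs@t1 tank/fs@t2 | ssh standby zfs recv -F tank/fs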
Just one more thing on this:
Run with a 64-bit processor. Don't even think of using a 32-bit one -
there are known issues with ZFS not quite properly using 32-bit only
structures. That is, ZFS is really 64-bit clean, but not 32-bit clean.
grin
--
Erik Trimble
Java System Support
Mailstop:
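Whether the installed kernel is actually running 64-bit is easy to
confirm. A quick check on Solaris:

    isainfo -kv    # prints e.g. "64-bit amd64 kernel modules" on a 64-bit kernel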
On Wed, Sep 17, 2008 at 6:06 AM, Erik Trimble [EMAIL PROTECTED] wrote:
Just one more thing on this:
Run with a 64-bit processor. Don't even think of using a 32-bit one -
there are known issues with ZFS not quite properly using 32-bit only
structures. That is, ZFS is really 64-bit clean, but