Jorgen Lundman wrote:
> If we were interested in finding a method to replicate data to a 2nd
> X4500, what other options are there for us?
If you already have an X4500, I think the best option for you is a cron
job with incremental 'zfs send'. Or rsync.
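A minimal sketch of what that cron job might do, written as a dry run: the replication commands are echoed rather than executed, and the pool, host, and snapshot names are hypothetical placeholders, not taken from the thread.

```shell
# Dry-run sketch of one incremental 'zfs send' replication cycle.
# POOL, HOST, PREV and CURR are made-up placeholders.
POOL=tank
HOST=second-x4500
PREV=repl-prev
CURR=repl-curr

# 1. Take a fresh snapshot on the primary.
echo "zfs snapshot $POOL@$CURR"
# 2. Send only the delta since the previous snapshot to the second X4500.
echo "zfs send -i $POOL@$PREV $POOL@$CURR | ssh $HOST zfs receive -F $POOL"
# 3. Roll the snapshot names forward so the next cron run repeats the cycle.
echo "zfs destroy $POOL@$PREV"
echo "zfs rename $POOL@$CURR $POOL@$PREV"
```

Each run then only ships the blocks changed since the last run, which is what makes this cheaper than a nightly full rsync walk.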
--
Ralf Ramge
Senior Solaris
Am I right in thinking though that for every raidz1/2 vdev, you're
effectively losing the storage of one/two disks in that vdev?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Wed, Sep 17, 2008 at 8:40 AM, gm_sjo [EMAIL PROTECTED] wrote:
> Am I right in thinking though that for every raidz1/2 vdev, you're
> effectively losing the storage of one/two disks in that vdev?

Well yeah - you've got to have some allowance for redundancy.
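As a back-of-the-envelope check (the disk counts and sizes below are made-up examples, not figures from the thread), usable raidz capacity is simply the vdev width minus the parity count:

```python
def raidz_usable_tb(disks: int, parity: int, disk_tb: float) -> float:
    """Usable capacity of one raidz vdev: total disks minus parity disks."""
    return (disks - parity) * disk_tb

# E.g. an 8-disk raidz2 vdev of 1 TB drives keeps 6 TB for data...
print(raidz_usable_tb(8, 2, 1.0))   # 6.0
# ...while a 2-way mirror of the same drives keeps only 1 TB.
print(raidz_usable_tb(2, 1, 1.0))   # 1.0
```

So yes, each raidz1/raidz2 vdev gives up one/two disks' worth of space; the trade against mirroring is space versus redundancy and random-read performance.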
--
-Peter Tribble
2008/9/17 Peter Tribble:
> On Wed, Sep 17, 2008 at 8:40 AM, gm_sjo [EMAIL PROTECTED] wrote:
>> Am I right in thinking though that for every raidz1/2 vdev, you're
>> effectively losing the storage of one/two disks in that vdev?
>
> Well yeah - you've got to have some allowance for redundancy.

This is
If 2 disks of a mirror fail, will the pool be faulted?

        NAME        STATE     READ WRITE CKSUM
        homez       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
Francois wrote:
> If 2 disks of a mirror fail, will the pool be faulted?
>
>         NAME        STATE     READ WRITE CKSUM
>         homez       ONLINE       0     0     0
>           mirror    ONLINE       0     0     0
>             c0t2d0  ONLINE       0     0     0
>             c0t3d0
On Wed, Sep 17, 2008 at 10:11 AM, gm_sjo [EMAIL PROTECTED] wrote:
> 2008/9/17 Peter Tribble:
>> On Wed, Sep 17, 2008 at 8:40 AM, gm_sjo [EMAIL PROTECTED] wrote:
>>> Am I right in thinking though that for every raidz1/2 vdev, you're
>>> effectively losing the storage of one/two disks in that vdev?
>> Well
gm_sjo wrote:
> Are you not in fact losing performance by reducing the
> number of spindles used for a given pool?

This depends. Usually, RAIDZ1/2 isn't a good performer when it comes
to random access read I/O, for instance. If I wanted to scale
performance by adding spindles, I would use
Darren J Moffat wrote:
> If c0t6d0 and c0t7d0 both fail (i.e. both sides of the same mirror vdev)
> then the pool will be unable to retrieve all the data stored in it. If
> c0t6d0 and c0t3d0 both fail then there are sufficient replicas of data
> available in that case because it was disks from
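Darren's point can be checked with a toy model. The two-mirror layout below is an assumption for illustration (the thread's zpool status output shows only the c0t2d0/c0t3d0 mirror); a pool of mirrors survives as long as every mirror vdev keeps at least one healthy disk:

```python
# Hypothetical pool of two 2-way mirrors, using disk names from the thread.
vdevs = [{"c0t2d0", "c0t3d0"}, {"c0t6d0", "c0t7d0"}]

def pool_survives(failed):
    # The pool survives iff every mirror vdev still has a healthy disk,
    # i.e. the set difference (vdev - failed) is non-empty for each vdev.
    return all(vdev - failed for vdev in vdevs)

print(pool_survives({"c0t6d0", "c0t3d0"}))  # True: one disk from each mirror
print(pool_survives({"c0t6d0", "c0t7d0"}))  # False: both sides of one mirror
```

So the answer depends on *which* two disks fail, not just on the count.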
Moore, Joe wrote:
> I believe the problem you're seeing might be related to a deadlock
> condition (CR 6745310). If you run pstack on the
> iSCSI target daemon you might find a bunch of zombie
> threads. The fix
> was putback into snv_99; give snv_99 a try.

Yes, a pstack of the core I've generated from iscsitgtd does have
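For reference, the inspection suggested above might look like the following. This is a dry-run sketch: the commands are printed rather than executed here, since they need a live iscsitgtd and root privileges.

```shell
# Dry run: print the inspection commands instead of running them.
DAEMON=iscsitgtd
echo "pgrep -x $DAEMON"              # find the target daemon's PID
echo "pstack \$(pgrep -x $DAEMON)"   # dump every thread's stack; many
                                     # threads parked in the same lock-wait
                                     # stack would suggest the deadlock
```

pstack also works on a core file directly (pstack core), which is what the reply above is describing.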
djm == Darren J Moffat [EMAIL PROTECTED] writes:
djm If c0t6d0 and c0t7d0 both fail (ie both sides of the same
djm mirror vdev) then the pool will be unable to retrieve all the
djm data stored in it.
won't be able to retrieve ANY of the data stored on it. It's correct
as you wrote it,
Running Nevada build 95 on an ultra 40.
Had to replace a drive.
Resilver in progress, but it looks like each
time I do a zpool status, the resilver starts over.
Is this a known issue?
On 17 September, 2008 - Neal Pollack sent me these 0,3K bytes:
> Running Nevada build 95 on an ultra 40.
> Had to replace a drive.
> Resilver in progress, but it looks like each
> time I do a zpool status, the resilver starts over.
> Is this a known issue?

I recall some issue with 'zpool status' as root restarting
resilvering.. Doing it as a regular user will not..
t == Tomas Ögren [EMAIL PROTECTED] writes:
t I recall some issue with 'zpool status' as root restarting
t resilvering.. Doing it as a regular user will not..
is there an mdb command similar to zpool status? maybe it's safer.
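mdb does ship ZFS dcmds; '::spa' summarizes pool state straight from the kernel. Whether it sidesteps the restart is another question, since 'mdb -k' itself needs root. Shown as a dry run (printed, not executed) because it requires a live Solaris kernel, and the availability of ::spa in a given build is an assumption here:

```shell
# Dry run: print the mdb invocation instead of executing it.
# Assumes the ZFS dcmds (::spa, ::vdev, ...) are present in this build's mdb.
CMD="echo ::spa | mdb -k"
echo "$CMD"
```

Unlike 'zpool status', this reads kernel state directly rather than going through the ZFS ioctl path.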
Cyril Plisko wrote:
> On Wed, Sep 17, 2008 at 6:06 AM, Erik Trimble [EMAIL PROTECTED] wrote:

Just one more thing on this:

Run with a 64-bit processor. Don't even think of using a 32-bit one -
there are known issues with ZFS not quite properly using 32-bit-only
structures. That is, ZFS is
Are you doing snaps? If so, unless you have the new bits to handle the
issue, each snap restarts a scrub or resilver.

Thanks!
Wade Stuart
On 09/17/08 02:29 PM, [EMAIL PROTECTED] wrote:
> Are you doing snaps?

No, no snapshots ever. Logged in as root to do:

    zpool replace poolname deaddisk

and then did a few zpool status runs as root. It restarted each time.

> If so, unless you have the new bits to handle the
> issue, each snap restarts
On Sep 16, 2008, at 5:39 PM, Miles Nordin wrote:
jd == Jim Dunham [EMAIL PROTECTED] writes:
jd If at the time the SNDR replica is deleted the set was
jd actively replicating, along with ZFS actively writing to the
jd ZFS storage pool, I/O consistency will be lost, leaving ZFS