Bob Friesenhahn wrote:
On Fri, 18 Sep 2009, David Magda wrote:
If you care to keep your pool up and alive as much as possible, then
mirroring across SAN devices is recommended.
One suggestion I heard was to get a LUN that's twice the size, and
set copies=2. This way you have some
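Something like the following should do it: a minimal sketch, assuming a hypothetical pool named 'tank'. Note that copies=2 only applies to blocks written after the property is set:

# zfs set copies=2 tank
# zfs get copies tank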
All this reminds me: how much work (if any) has been done on the
asynchronous mirroring option? That is, for supporting mirrors with
radically different access times? (useful for supporting a mirror
across a WAN, where you have hundred(s)-millisecond latency to the other
side of the
I asked the same question about one year ago here, and the posts poured in.
Search for my user id. There is more info in that thread about which is best:
ZFS vs ZFS+HWraid
On 18.09.09 22:18, Dave Abrahams wrote:
I just did a fresh reinstall of OpenSolaris and I'm again seeing
the phenomenon described in
http://article.gmane.org/gmane.os.solaris.opensolaris.zfs/26259
which I posted many months ago and got no reply to.
Can someone *please* help me figure out
On 17.09.09 21:44, Chris Murray wrote:
Thanks David. Maybe I misunderstand how a replace works? When I added disk
E, and used 'zpool replace [A] [E]' (still can't remember those drive names),
I thought that disk A would still be part of the pool, and read from in order
to build the contents of
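For what it's worth, a replace looks roughly like this (hypothetical pool and device names); while it runs, zpool status shows both the old and new disks under a temporary 'replacing' vdev until the resilver finishes:

# zpool replace tank c0t1d0 c0t4d0
# zpool status tank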
Hey, thanks for following up.
on Sat Sep 19 2009, Victor Latushkin Victor.Latushkin-AT-Sun.COM wrote:
Can you provide output of
zdb -l /dev/rdsk/c8t1d0p0
zdb -l /dev/rdsk/c8t1d0s0
zdb -l /dev/rdsk/c9t0d0p0
zdb -l /dev/rdsk/c9t0d0s0
zdb -l /dev/rdsk/c9t1d0p0
zdb -l /dev/rdsk/c9t1d0s0
on Fri Sep 18 2009, Cindy Swearingen Cindy.Swearingen-AT-Sun.COM wrote:
Not much help, but some ideas:
1. What does the zpool history -l output say for the phantom pools?
d...@hoss:~# zpool history -l Xc8t1d0p0
History for 'Xc8t1d0p0':
2009-05-14.06:00:20 zpool create Xc8t1d0p0 c8t1d0p0
Yeah, after I learned that ditto blocks don't protect against failed
drives, I started working on a plan to move to raidz. I couldn't find
any good documentation on setting up multiple filesystems in one
pool, though I know it is possible. I think I have enough storage to
work this together
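A minimal sketch of that kind of layout, with hypothetical disk names and a pool called 'tank' (each 'zfs create' makes a separate filesystem sharing the pool's space):

# zpool create tank raidz c1t1d0 c1t2d0 c1t3d0
# zfs create tank/home
# zfs create tank/backups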
Hello folks,
I am sure this topic has been asked, but I am new to this list. I have read
a ton of docs on the web, but wanted to get some opinions from you all.
Also, if someone has a digest of the last time this was discussed, you can
just send that to me. In any case, I am reading a lot of
Victor Latushkin wrote:
I think you need to take a closer look at your other disk.
Is it possible to get result of (change controller/target numbers as
appropriate if needed)
dd if=/dev/rdsk/c8t0d0p0 bs=1024k count=4 | bzip2 -9 > c8t0d0p0.front.bz2
while booted off OpenSolaris CD?
On Fri, Sep 18, 2009 at 1:51 PM, Steffen Weiberle
steffen.weibe...@sun.com wrote:
I am trying to compile some deployment scenarios of ZFS.
# of systems
3
amount of storage
10 TB on storage server (can scale to 30)
application profile(s)
NFS and CIFS
type of workload (low, high; random,
In the Eat-Your-Own-Dogfood mode:
Here in CSG at Sun (which is mainly all Java-related things):
Steffen Weiberle wrote:
I am trying to compile some deployment scenarios of ZFS.
If you are running ZFS in production, would you be willing to provide
the following (publicly or privately)?
# of systems
All
-- Forwarded message --
From: Al Hopper a...@logical-approach.com
Date: Sat, Sep 19, 2009 at 5:55 PM
Subject: Re: [zfs-discuss] ZFS HW RAID
To: Scott Lawson scott.law...@manukau.ac.nz
On Fri, Sep 18, 2009 at 4:38 PM, Scott Lawson scott.law...@manukau.ac.nz wrote:
[snip]
I added a disk to the rpool of my zfs root:
# zpool attach rpool c1t0d0s0 c1t1d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
I waited for the resilver to complete, then I shut the system down.
Then I physically removed c1t0d0 and put c1t1d0 in its place.
I tried to
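For reference, the check I'd run before shutting down, to confirm the resilver really finished and both halves of the mirror are healthy:

# zpool status rpool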