On Oct 8, 2008, at 4:27 PM, Jim Dunham wrote:
With AVS, a single Solaris node cannot be both
the primary and secondary node.
If one wants this type of mirror functionality on a single node, use
host-based or controller-based mirroring software.
If one is running multiple zones, couldn't
Brian Hechinger
On Mon, Oct 06, 2008 at 10:47:04AM -0400, Moore, Joe wrote:
I wonder if an AVS-replicated storage device on the
backends would be appropriate?
write - ZFS-mirrored slog - ramdisk -AVS- physical disk
                                  \
                                   +-iscsi- ramdisk -AVS- physical disk
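I've not driven AVS against ramdisks myself, but the SNDR leg of that diagram would be enabled with something like the sketch below; the hostnames, device paths, and bitmap volumes are all invented for illustration:

```shell
# Hedged sketch only: enable an SNDR (AVS) replica of the ramdisk-backed
# slog onto a physical disk on the peer node. Every name below is made up.
#
# Arguments to -e are: primary host, primary volume, primary bitmap,
# secondary host, secondary volume, secondary bitmap, protocol, mode.
sndradm -n -e nodea /dev/ramdisk/slogrd /dev/rdsk/c1t0d0s1 \
              nodeb /dev/rdsk/c2t0d0s0  /dev/rdsk/c2t0d0s1 \
              ip async

# Kick off the initial full synchronization of the set.
sndradm -n -m nodeb:/dev/rdsk/c2t0d0s0
```

Note the async mode at the end, which is what would keep the replication off the slog's write-latency path.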
On Sat, Oct 04, 2008 at 10:37:26PM -0700, Chris Greer wrote:
The big thing here is I ended up getting a MASSIVE boost in
performance even with the overhead of the 1GB link, and iSCSI.
The iorate test I was using went from 3073 IOPS on 90% sequential
writes to 23953 IOPS with
On Wed, Oct 08, 2008 at 08:50:57AM -0400, Moore, Joe wrote:
I've not worked with AVS other than looking at the basic concepts, but to me
this looks like a don't-shoot-yourself-in-the-foot critical warning rather
than an actual functionality restriction. Is there a -force option to
I was using EMC's iorate for the comparison.
ftp://ftp.emc.com/pub/symm3000/iorate/
I had 4 processes running on the pool in parallel doing 4K sequential writes.
I've also been playing around with a few other benchmark tools (I just had
results from other storage tests with this same iorate
Joe,
Brian Hechinger
On Wed, Oct 08, 2008 at 06:27:51PM -0400, Jim Dunham wrote:
If one wants this type of mirror functionality on a single node, use
host based or controller based mirroring software.
Is there mirroring software that can do async copies to a mirror?
-brian
Or would they? A box dedicated to being a RAM based slog is going to be
faster than any SSD would be. Especially if you make the expensive jump
to 8Gb FC.
Not necessarily. While this has some advantages in terms of price/performance,
at ~$2400 the 80GB ioDrive would give it a run for
Hello Nicolas,
Monday, October 6, 2008, 10:51:58 PM, you wrote:
NW> I'm pretty sure that local RAM beats remote-anything, no matter what the
NW> anything (as long as it isn't RAM) and what the protocol to get to it
NW> (as long as it isn't a normal backplane). (You could claim with NUMA
NW> memory
Very interesting idea, thanks for sharing it.
Infiniband would definitely be worth looking at for performance, although I
think you'd need iSER to get the benefits, and that might still be a little new:
http://www.opensolaris.org/os/project/iser/Release-notes/.
It's also worth bearing in mind that you can have multiple mirrors.
Nicolas Williams wrote:
There have been threads about adding a feature to support slow mirror
devices that don't stay synced synchronously. At least IIRC. That
would help. But then, if the pool is busy writing then your slow ZIL
mirrors would generally be out of sync, thus being of no help
On Sun, Oct 05, 2008 at 11:30:54PM -0500, Nicolas Williams wrote:
There have been threads about adding a feature to support slow mirror
devices that don't stay synced synchronously. At least IIRC. That
would help. But then, if the pool is busy writing then your slow ZIL
That would
On Mon, Oct 06, 2008 at 10:47:04AM -0400, Moore, Joe wrote:
I wonder if an AVS-replicated storage device on the backends would be
appropriate?
write - ZFS-mirrored slog - ramdisk -AVS- physical disk
                                  \
                                   +-iscsi- ramdisk -AVS- physical disk
You'd
On Mon, Oct 06, 2008 at 05:38:33PM -0400, Brian Hechinger wrote:
On Sun, Oct 05, 2008 at 11:30:54PM -0500, Nicolas Williams wrote:
There have been threads about adding a feature to support slow mirror
devices that don't stay synced synchronously. At least IIRC. That
would help. But then,
On Mon, Oct 06, 2008 at 01:13:40AM -0700, Ross wrote:
It's also worth bearing in mind that you can have multiple mirrors. I don't
know what effect that will have on the performance, but it's an easy way to
boost the reliability even further. I think this idea configured on a set of
2-3
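For what it's worth, ZFS doesn't cap the slog mirror at two sides, so the extra reliability could come from a wider mirror; a hypothetical example (pool and device names invented):

```shell
# Three-way mirrored slog: e.g. the local ramdisk plus two remote
# iSCSI-imported ramdisks. All names here are illustrative.
zpool add tank log mirror c2t0d0 c3t0d0 c4t0d0

# Verify that all sides of the log mirror show ONLINE.
zpool status tank
```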
On Sat, Oct 04, 2008 at 10:37:26PM -0700, Chris Greer wrote:
So I tried this experiment this week...
On each host (OpenSolaris 2008.05), I created an 8GB ramdisk with ramdiskadm.
I shared this ramdisk on each host via the iSCSI target and initiator over a
1GB crossconnect cable (jumbo
So what are the downsides to this? If both nodes were to crash and
I used the same technique to recreate the ramdisk I would lose any
transactions in the slog at the time of the crash, but the physical
disk image is still in a consistent state right (just not from my
apps point of
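Pieced together from the description above, the recipe looks roughly like this on OpenSolaris 2008.05; the target name, IP address, pool name, and the device name the initiator ends up with are all assumptions on my part:

```shell
# On each host: create the 8GB ramdisk.
ramdiskadm -a slogrd 8g              # appears as /dev/ramdisk/slogrd

# Export it as an iSCSI target (target name is illustrative).
iscsitadm create target -b /dev/ramdisk/slogrd slogtarget

# On the peer: discover and log in to the remote ramdisk.
iscsiadm add discovery-address 192.168.1.2
iscsiadm modify discovery --sendtargets enable
devfsadm -i iscsi                    # surface the new LUN under /dev

# Mirror the local ramdisk with the imported remote one as the slog
# (the cXtYdZ name the initiator assigns is a placeholder here).
zpool add tank log mirror /dev/ramdisk/slogrd c4t1d0
```

Since the ramdisks vanish on reboot, this would need to be re-run (and the slog mirror resilvered) after any crash, which is exactly the failure mode the question above is probing.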
On Sun, Oct 05, 2008 at 09:07:31PM -0400, Brian Hechinger wrote:
On Sat, Oct 04, 2008 at 10:37:26PM -0700, Chris Greer wrote:
I'm not sure I could survive a crash of both nodes, going to try and
test some more.
Ok, so taking my idea above, maybe a pair of 15K SAS disks in those
boxes so
I currently have a traditional NFS cluster hardware setup in the lab (2 hosts
with FC-attached JBOD storage) but no cluster software yet. I've been wanting
to try out the separate ZIL to see what it might do to boost performance. My
problem is that I don't have any cool SSD devices, much less