The 4 database servers are part of an Oracle RAC configuration. 3 databases are
hosted on these servers: BIGDB1 on all 4, littledb1 on the first 2, and
littledb2 on the last two. The Oracle backup system spawns DB backup jobs that
could occur on any node based on traffic and load. All nodes are
On 8/25/07, Matt B [EMAIL PROTECTED] wrote:
Originally, we tried using our tape backup software
to read the Oracle flash recovery area (an Oracle raw
device on a separate set of SAN disks), however our
backup software has a known issue with the
particular version of Oracle we are using.
So one option is to get the backup vendor to
I'm not sure what you mean
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Sat, Aug 25, 2007 at 12:36:34 -0700, Matt B wrote:
: I'm not sure what you mean
I think what he's trying to tell you is that you need to consult a storage
expert.
--
Dickon Hood
Due to digital rights management, my .sig is temporarily unavailable.
Normal service will be resumed as soon as
Here is what seems to be the best course of action, assuming IP over FC is
supported by the HBAs (which I am pretty sure they do, since this is all brand
new equipment):
Mount the shared disk backup LUN on Node 1 via the FC link to the SAN as a
non-redundant ZFS volume.
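A minimal sketch of that mount step; the device name c6t0d0 and the pool/filesystem names are placeholders for illustration, and redundancy here would come from the SAN array rather than ZFS:

```shell
# Placeholder LUN device name; identify the real SAN LUN with `format`.
zpool create backup c6t0d0

# A dedicated filesystem for the backup pieces, with compression enabled.
zfs create backup/rman
zfs set compression=on backup/rman
```

Note that a single-device pool like this cannot self-heal checksum errors, so any protection depends entirely on the array behind the LUN.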
On node 1, RMAN (Oracle
On Sat, 25 Aug 2007, Matt B wrote:
snip
I still wonder if NFS could be used over the FC network in some way, similar
to how NFS works over an Ethernet/TCP network.
If you're running Qlogic FC HBAs, you can run a TCP/IP stack over the
FC links. That would allow NFS traffic over the FC
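If the driver stack supports it, plumbing IP over FC on Solaris via the fcip driver might look like the sketch below; the interface name, addresses, and share path are all assumptions:

```shell
# fcip instances appear as network interfaces once the driver attaches.
ifconfig fcip0 plumb 192.168.50.1 netmask 255.255.255.0 up

# After that, NFS traffic rides the FC link like any other IP network:
share -F nfs -o rw /backup                 # on the server
mount -F nfs 192.168.50.1:/backup /mnt     # on the client
```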
Hi:
I plan to configure the remaining 46 disks on the X4500 as:
5x(8+1) raidz1, 1 hot spare
Any problems?
Does it mean I can survive 2 disk failures in each of the five groups?
thanks.
zhanyang
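The layout described above could be created along these lines; the pool name and device names are placeholders (an X4500 has 48 disks across 6 controllers, with 2 typically reserved for boot), and note that each raidz1 group tolerates a single disk failure, not two:

```shell
# Five 9-disk raidz1 groups (one per line) plus one hot spare = 46 disks.
zpool create tank \
  raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c1t0d0 \
  raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 \
  raidz c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c3t0d0 c3t1d0 c3t2d0 \
  raidz c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c4t0d0 c4t1d0 c4t2d0 c4t3d0 \
  raidz c4t4d0 c4t5d0 c4t6d0 c4t7d0 c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 \
  spare c5t5d0
```

Surviving 2 failures per group would require raidz2 instead, at the cost of one more parity disk per group.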
Before I open a new case with Sun, I am wondering if anyone has seen this
kernel panic before? It happened on an X4500 running Sol10U3 while it was
receiving incremental snapshot updates.
Thanks.
Aug 25 17:01:50 ldasdata6 ^Mpanic[cpu0]/thread=fe857d53f7a0:
Aug 25 17:01:50 ldasdata6
Stuart Anderson wrote:
Before I open a new case with Sun, I am wondering if anyone has seen this
kernel panic before? It happened on an X4500 running Sol10U3 while it was
receiving incremental snapshot updates.
Looks like it could be 6569719, which we expect to be fixed (in OpenSolaris)
I have tried TCP/IP over FC in the lab; the performance was no different
compared to gigabit Ethernet.
-Original Message-
From: Al Hopper [EMAIL PROTECTED]
To: Matt B [EMAIL PROTECTED]
Cc: zfs-discuss@opensolaris.org
Sent: 8/26/2007 9:29 AM
Subject: Re: [zfs-discuss] Single SAN Lun presented