Branimir:

Comments inlined below.


--- Branimir Petrovic <[EMAIL PROTECTED]> wrote:
> I need help clearing up one conceptual issue:
> My understanding is that with Oracle RAC, one set of physical database
> files "sitting" in the "middle" (shared storage) is accessed by
> multiple Oracle instances running on multiple physical servers
> (nodes), all instances "attacking" (sharing) the very same set of
> data files at the same time. Right or wrong?


Not necessarily. You can run RAC on a single machine too; I have a few
guys here who run RAC on their windoze laptops. But in general your
understanding is correct.
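
If you ever need to confirm whether a given setup really is running as
a cluster, a quick sanity check from SQL*Plus (a minimal sketch,
assuming 9i; on 8i OPS the parameter was parallel_server instead) is:

    -- TRUE when the instance was started as part of a RAC cluster
    SHOW PARAMETER cluster_database

    -- one row per instance currently open against the shared database
    SELECT inst_number, inst_name FROM v$active_instances;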

 
> Provided the answer to the above question happens to be "yes" - I'd
> like to ask List Folks how feasible it is to assemble and
> successfully (smoothly?) run a 0.5-1 TB database, using Oracle RAC
> and high performance shared storage (say SAN) served ("pumped") by a
> number of Windows 2K servers?
> 
> The "number" of Win2K servers I have on mind is at least 4 "beefy"
> (as   beefy as it gets in Windows wrld) Win2K "boxes" each running 
> Win2K AS  with lots of RAM and at least 4 CPUs, with perspective of 
> adding more  later. 

Installing and configuring RAC is about as simple as installing a
regular Oracle database. But scalability is limited to just 4 nodes on
Windows, and I think that limitation comes from Windows clustering and
NOT from Oracle. Also, on Windows you can use OCFS (Oracle Cluster File
System), so there is no need to create raw partitions. (I think
nowadays all platforms have their own CFS; for Linux and windoze,
Oracle itself supplies the CFS, so there is no need to use raw
partitions for OPS/RAC.)

> It would be nice (for me) to know if new nodes can be added to the 
> cluster at any later time to improve performance (in order to deal 
> with increases in usage or to accommodate growth over a period of time).

The nice thing is: yes, you can add new nodes dynamically (depending
on the OS), and the maximum number of nodes is limited by the OS. For
example, on windoze and Solaris you can have at most 4 nodes; HP-UX,
AIX and Linux clusters support up to 8 nodes; and IBM SP clusters
support up to 128 nodes. Again, as I said earlier, all these limits
come from the respective OS/hardware vendors. On the Oracle side,
there is no limit on the number of nodes in a RAC cluster.
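
For example (a sketch, assuming 9i RAC), after adding a node you can
watch it join the cluster through the global GV$ views, which return
one row per instance across all nodes:

    -- one row for every instance currently running in the cluster
    SELECT inst_id, instance_name, host_name, status
      FROM gv$instance;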

> It would be very nice to know if the number of nodes is or is not
> limited (other than by the raw I/O capabilities of the shared storage).
> Has anyone seen/run/stumbled over a similar beast, and if so - does it
> "fly" or does it "stink"?

I think the mandatory raw partition requirement only comes with AIX
(or does Oracle support GPFS?), so you don't need to worry about raw
partition limitations.


> Thanks (for any help, hints, links, etc.),
> 
> Branimir
> 
> P.S.
> 
> I've looked at a number of Metalink articles and found none yet to 
> "scratch" this specific "itch" of mine.
> 
> Oracle RAC on Win2K is, for some bizarre reason, a REQUIREMENT.

So I was not the only one on this earth who got caught in that trap.
Nice to know somebody else got caught in it too. We implemented RAC on
W2K Advanced Server some time back. TAF (SELECT failover) will not
work on W2K, so make sure TAF is not a MANDATORY requirement.
PRECONNECT is an alternative if your customer agrees to it.
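
For reference, a PRECONNECT-style TAF alias in tnsnames.ora looks
roughly like this (a sketch only; host names, ports, the service name
and the alias names are all made up). The BACKUP alias is opened at
logon time, so failover does not have to wait for a fresh connection:

    # TYPE=SESSION (not SELECT, which is the part that fails on W2K);
    # METHOD=PRECONNECT opens the backup connection at logon time.
    RACDB_TAF =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = racnode1)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = racnode2)(PORT = 1521))
        (CONNECT_DATA =
          (SERVICE_NAME = racdb)
          (FAILOVER_MODE =
            (TYPE = SESSION)
            (METHOD = PRECONNECT)
            (BACKUP = RACDB_NODE2))))

    # the pre-established backup connection points at the second node
    RACDB_NODE2 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = racnode2)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = racdb)))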



=====
Have a nice day !!
------------------------------------------------------------
Best Regards,
K Gopalakrishnan,
Bangalore, INDIA.