Matt Ingenthron wrote:
Gavin Maltby wrote:
On 08/25/05 17:34, Felix Schulte wrote:
8 SF15k running *one* OS instance?
I don't think so; I think it is some form of compute cluster.

I bet I know which customer that is.

Sun Fire Link was a high-bandwidth, very low-latency interconnect, and through Sun Cluster you had the Remote Shared Memory APIs (a.k.a. RSM) on top of it. I recall from reading the cluster whitepaper at the time that there was also something called xdoors for fast IPC between nodes. Sun Cluster also brought global device trees (kinda) and global filesystems, so while it wasn't one OS instance, you had a set of really interesting APIs you could do some serious damage with over Sun Fire Link. Cluster also supported the same APIs over cheaper interconnects.

Sadly, I think the only major commercial program to use it was Oracle 9i RAC (they called it "cache fusion", though a search for that makes it seem like a retread term -- there was a whitepaper somewhere once), and Oracle eventually eschewed it in favour of a more cross-platform (and more purely Oracle-software) approach. Now they have their own volume manager, etc.
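
For anyone who never saw the RSMAPI, the export/publish/import/put flow looked roughly like the sketch below. This is from memory, not a working program: the controller name ("wrsm0"), the remote node id and the segment id are placeholders I made up, error handling is mostly elided, and the real signatures (plus the barrier calls I've left out) live in the rsmapi(3RSM) man pages -- you'd link against librsm.

/*
 * Rough sketch of the RSMAPI flow, from memory -- see rsmapi(3RSM)
 * for the real signatures.  Normally the importer is a separate
 * process on a different node; both halves are shown here only to
 * keep the example short.
 */
#include <stdio.h>
#include <stdlib.h>
#include <rsmapi.h>

#define SEG_ID   0x123456   /* application-chosen segment id (placeholder) */
#define SEG_SIZE 0x10000    /* 64 KB; exported memory must be page aligned */

int
main(void)
{
	rsmapi_controller_handle_t ctl;
	rsm_memseg_export_handle_t exp_seg;
	rsm_memseg_import_handle_t imp_seg;
	rsm_memseg_id_t segid = SEG_ID;
	void *buf;
	char greeting[] = "hello over the interconnect";

	/* Bind to an RSM-capable controller (name is hypothetical). */
	if (rsm_get_controller("wrsm0", &ctl) != RSM_SUCCESS) {
		perror("rsm_get_controller");
		return (1);
	}

	/* Exporter side: carve out page-aligned memory and publish it. */
	buf = valloc(SEG_SIZE);
	(void) rsm_memseg_export_create(ctl, &exp_seg, buf, SEG_SIZE, 0);
	(void) rsm_memseg_export_publish(exp_seg, &segid, NULL, 0);

	/*
	 * Importer side: connect to the published segment and push
	 * data straight into the exporter's memory (node id 1 is a
	 * placeholder for the exporting node).
	 */
	(void) rsm_memseg_import_connect(ctl, 1, segid, RSM_PERM_RDWR,
	    &imp_seg);
	(void) rsm_memseg_import_put(imp_seg, 0, greeting,
	    sizeof (greeting));

	/* Tear down in reverse order. */
	(void) rsm_memseg_import_disconnect(imp_seg);
	(void) rsm_memseg_export_unpublish(exp_seg);
	(void) rsm_memseg_export_destroy(exp_seg);
	(void) rsm_release_controller(ctl);
	return (0);
}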

I was on the review team for the cluster 3.0 blueprint book,
in which this feature was covered -- it did indeed look very
interesting, but sadly it went mostly unused in a direct sense.
There aren't all that many customers out there running RAC with
more than 2 nodes and Sun Fire Link.

I believe Oracle did take the knowledge they gained with cache
fusion and made it work better in 10g. However, I'm not too
impressed with the clustering (CRS) stuff -- then again, it's a
1.0 product, along with their storage manager (ASM). Time will
tell. Fortunately for Sun, our Sun Cluster engineering teams
have a head start and are fantastically well endowed with talent
and skill :)


James C. McPherson
--
Pacrim PTS Engineer            828 Pacific Highway
                               Gordon NSW
Sun Microsystems Australia     2072

_______________________________________________
opensolaris-discuss mailing list
[email protected]
