> Now when you talk about the mid-range to high-end
> market, such as the M-series it's a difficult
> ballpark. Unless you throw in someone like Unisys or
> Bull, you can't find an x86 system that'll have the
> amount of CPU's, Memory, or I/O; let alone RAS
> features.

Maybe not, but as I keep hammering on, RAS can be solved with:

1. IPMI (as crappy as it is, it's just "good enough")
2. clusters.

Got a fried CPU, motherboard, or power supply? Take the whole node down, 
without taking the service down, and pay a lot less money in the process!
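To make that concrete, here's the kind of glue script I'm talking about -- just a sketch in Python, where the BMC hostnames, credentials, and the drop_from_pool() hook are made up for illustration (the ipmitool call is the usual lanplus one): poll each node's BMC, and if it stops answering or reports the chassis powered off, evict that node from the pool and let the rest of the cluster carry the service.

#!/usr/bin/env python
# Sketch: evict dead cluster nodes based on IPMI chassis status.
# BMC hostnames, credentials and drop_from_pool() are illustrative only.
import subprocess

NODES = {"node01": "node01-bmc", "node02": "node02-bmc"}   # node -> BMC address

def chassis_power_on(bmc, user="admin", password="secret"):
    """True if the BMC answers and reports chassis power on."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", bmc,
           "-U", user, "-P", password, "chassis", "power", "status"]
    try:
        out = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
        return b"Chassis Power is on" in out
    except subprocess.CalledProcessError:
        return False    # BMC unreachable or errored: treat the node as dead

def drop_from_pool(node):
    # Placeholder: tell your load balancer / cluster manager to stop
    # sending work to this node (the real call depends on your stack).
    print("evicting %s from the service pool" % node)

if __name__ == "__main__":
    for node, bmc in NODES.items():
        if not chassis_power_on(bmc):
            drop_from_pool(node)

Crude, but that's exactly the point: IPMI plus a pool of interchangeable nodes buys you the part of RAS that actually matters.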

I used to work with really, really expensive supercomputers (SGI, Sun, HP), and 
it was really cool, so long as I wasn't the one footing the bills for them, and 
so long as SGI, Sun, and HP kept lending or even giving us the hardware.

But now that I'm the one footing the bills, it's a whole different ballgame.  
Now I realize just how ridiculously expensive that big iron hardware is. And 
now that I'm forced to compensate for not having big iron, I've realized there 
are creative ways of getting big iron performance without paying big iron 
prices. Sure, it might take more (re)engineering on the software side, but hey, 
I save tons of money on electricity bills alone, not to mention the upfront 
savings on hardware!

As Jonathan Schwartz once wrote, there are always people willing to sacrifice 
their time and effort instead of doling out cash. I'm one of those people. If I 
have the skill and experience, why not put it to good use to pay less?
Even counting my time, effort, and materials, it's still a whole lot less money 
than doling out, say, $75K for an entry-level SPARC/SPARC64 system.

> In that space only IBM Power and HP Itanium
> systems can compete.

Well, people are changing that game with armies of cheap clustered boxes. The 
junk is cheap and reliable enough, and if a node does die, you take the whole 
thing out, throw it away, and put another one in.  It's throw-away, expendable, 
cheap hardware.
It's like going to a store and buying two or three drill bits for half the 
price of one quality drill bit, because you know the first one is likely to 
break, the second one is likely to finish the job, and the third one is just, 
well, a bonus held in reserve.

All you have to do is make sure your software is designed around running on a 
cluster. Yes, that's hard; yes, that's challenging; but that's what that degree 
in computer science was for!
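Designing around the cluster mostly boils down to assuming any single node can vanish at any time. A trivial illustration (the replica addresses, port, and toy GET protocol are all made-up assumptions here): the client treats every node as expendable and just walks the list until one answers.

# Sketch: a client that survives node failure by trying the next replica.
# Replica addresses, port and the toy GET protocol are assumptions.
import socket

REPLICAS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]   # any one of these may be dead

def fetch(key, port=7000, timeout=2.0):
    """Ask each replica in turn; the first one that answers wins."""
    last_error = None
    for host in REPLICAS:
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.sendall(("GET %s\n" % key).encode())
                return s.recv(4096).decode()
        except OSError as e:          # refused, timed out, unreachable...
            last_error = e            # node is expendable; try the next one
    raise RuntimeError("all replicas failed: %s" % last_error)

Once every client thinks that way, losing a node is a non-event, and the expensive single-box RAS story stops mattering.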

> The other reality is that you can do some serious
> consolidation with the T-Series and the M-series. You
> don't need as many physical servers to replace your
> old Ultra Enterprise or Sun Fire servers. Ultimately,
> this is what's hurting Sun and even IBM. The
> difference is that IBM doesn't give away the
> software, virtualization, and management components
> away for free.

Neither does Sun, for that matter. Zones is a runaway project that perfectly 
illustrates my point: there can be 8192 zones per server. Now, if you have tens 
of thousands of servers, how do you manage all those zones? xVM Ops Center? 
That's just an afterthought, and it lacks software, inventory, and change 
management.
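Today you end up scripting that inventory yourself, with something along these lines (the host list is a placeholder, but "zoneadm list -cp" really does print one colon-delimited line per configured zone, with the zone name in the second field):

# Sketch: collect a zone inventory across many Solaris hosts over ssh.
# The host list is a placeholder; zoneadm's -p output is colon-delimited.
import subprocess

HOSTS = ["solhost%03d" % n for n in range(1, 4)]   # thousands in real life

def zones_on(host):
    out = subprocess.check_output(
        ["ssh", host, "/usr/sbin/zoneadm", "list", "-cp"])
    zones = []
    for line in out.decode().splitlines():
        fields = line.split(":")
        if len(fields) > 2 and fields[1] != "global":
            zones.append((fields[1], fields[2]))    # (zonename, state)
    return zones

if __name__ == "__main__":
    for host in HOSTS:
        for name, state in zones_on(host):
            print("%s %s %s" % (host, name, state))

That gets you a list; it does not get you software, inventory, or change management, which is my point.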

It's the stereotypical pattern: Sun invents some cool new tech, but only makes 
it work for a single developer on a single system. Zones is a perfect example. 
Clustering zones? An afterthought, and how many years did it take Sun to even 
bring Sun Cluster to the point where it's zone-aware?

Examples like this go on, and on, and on. No wonder small players like JomaSoft 
with their VDCF are skimming all the cream, while Sun scrambles to get bought 
by someone else.

So consolidation with the Sun T-series and Fujitsu M-series isn't as simple or 
straightforward as it seems. Without the right tools for the job, you need an 
army of skilled Solaris system engineers (not even admins will do!). Good luck 
finding those in this day and age of cheap Linux and Windows "sysadmins".