I've had a look at Coda and GFS2.

Although far superior to NFS, these are still file-based. I don't see
this kind of solution scaling to the million-transactions-per-second
range anytime soon.

Of course for many (in fact most) use cases, even thousands of TPS is
plenty already.


On 10/3/07, Rogelio Serrano <[EMAIL PROTECTED]> wrote:
..
> ok the resource is the state. in our http based system the resources
> are stored in a coda filesystem hosted in 3 servers. well cheap pcs
> actually but it has not been a problem. im more worried about the
> switch suddenly dying on us.

This is a problem that all cluster designers have to worry about.

If you have a 100-node cluster grouped into 2 racks, each with 50
1-RU servers, and the switch interconnect between the racks dies, you
have a split brain: each half of the cluster suddenly can't see the
other half.

To quote one of my colleagues from London, "in case of split-brain,
the side with the heart wins."
:-D
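More seriously, the standard defense is quorum: a partition keeps
serving only if it can see a strict majority of the configured
membership, so at most one side of a split stays live. A minimal
sketch of the rule (pure Python, node IDs are made up):

```python
def has_quorum(visible_nodes, cluster_size):
    """A partition may stay live only if it sees a strict
    majority of the full cluster membership."""
    return len(visible_nodes) > cluster_size // 2

# The 2-rack scenario above: a dead interconnect splits 100
# nodes exactly 50/50. Neither half has a majority, so BOTH
# halves must stop serving -- which is why clusters are sized
# (or given a tie-breaker node) to avoid even splits.
print(has_quorum(set(range(50)), 100))   # False -- 50 of 100
print(has_quorum(set(range(51)), 100))   # True  -- 51 of 100
```

The colleague's joke is the tie-breaker in disguise: put something on
one side (a quorum disk, an odd node) so one half always "has the
heart."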


> > Most people use a centralized backing DB to store the session state.
> > What if that one goes south?
> >
>
> i dont do that but they are screwed.

You'd be surprised how many people do that. Most people, actually.



> > I know of no open-source or Free Software project that handles write
> > coherency over large clusters efficiently.
>
> try gfs2.


Like I said, file-based.  Ewww.

Memcached with a write-coherency implementation (so that more than one
node can do PUTs into the cache) would be fantastic. Not as good as
Coherence (not by a mile -- Coherence does distributed processing in
addition to distributed storage), but good enough for a good subset.


On 10/3/07, Rogelio Serrano <[EMAIL PROTECTED]> wrote:
..
> > That's why it sucks to be on Linux sometimes. Because Solaris X86 on
> > the same hardware performs better, due to use of the Sun Studio
> > compiler.
> >
>
> Cant do anything about that. They got there first. But cost? Would you
> rather buy a large number of commodity hardware?

I'd like to correct an impression you might have.

I am perfectly happy with commodity hardware. The HP DL585, for
example, or the IBM BladeCenter HS21 are pretty nice hardware for the
buck, and they don't cost too much.

I don't believe in "big IBM pSeries" or "big Sun Enterprise" either.
But unless your software stack is hardware fault-tolerant, many
enterprises would still rather have the big iron with hardware
resiliency.


..
> Of course. The guys who needs these implementations the most are the
> ones with a lot  of cash. But its a matter of preferences. DB based
> applications specially those based on sql need lots of resources.
> Those who prefer non db based REST style applications can do the same
> with cheap commodity hardware. in terms of number of transactions per
> second and reliability.

Well, the solution follows the customer. Most customers are still in a
DB-centric world.

What use is your innovative lightweight transaction solution if nobody
uses it? I do not think you can say that your solution is truly
resilient and useful unless it's been used in a very high-load
environment for a long time.


..
> > A compiler license is much cheaper than buying an 8-way server
> > (instead of a 4-way server) because you're paying a performance tax.
>
> oh no. you need the multi processor server to see the benefits anyway.
> why not use commodity hardware then? they are so cheap nowadays.
>
> i find the term "enterprise class hardware" overrated nowadays. it
> sounds like a marketing gimmick. but thats only me.

Commodity 4-way and 8-way servers exist. See above (HP DL585,
SunFire X4600). But an 8-way costs a lot more than a 4-way.

So if I need 20 boxes, I'd rather buy one Intel Compiler license and
20 4-way boxes, than have to buy 20 8-way boxes to make up for the
performance shortfall of using GCC.
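Back-of-envelope, with made-up 2007-ish prices (every number below is
hypothetical, illustrative only -- it's the ratio that matters):

```python
# Hypothetical list prices -- NOT real quotes.
price_4way = 15_000        # one commodity 4-way box
price_8way = 40_000        # one 8-way box
compiler_license = 2_500   # one Intel Compiler license

# Option A: 20 4-way boxes, compiled with the faster compiler.
option_a = 20 * price_4way + compiler_license
# Option B: 20 8-way boxes to paper over the GCC shortfall.
option_b = 20 * price_8way

print(option_a, option_b)  # 302500 vs 800000
```

Even if you quibble with every price, one license amortized over 20
boxes is noise compared to doubling the socket count on each box.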





..
> True. And the enterprise class high reliability servers cost an arm
> and leg. for me at least. for others thats just the heel.

See above. I am not an advocate of big iron. First-tier-vendor
commodity hardware is good enough for me. But not for the banks!


> > 2) NFS! locking issues galore! horrific file lock timeouts! I can't
> > imagine this scaling up well
>
> true. i used coda instead.


Like I said, it's still file-based. No harm in that; many ostensibly
enterprise products are still file-based. But it's not as elegant as,
say, a memory-grid solution. Which is what Memcached is.



..
> Yep. But remember never to bet against the cheap plastic solutions. We
> have a way of creeping up on you.

Well, this is more of a value statement / religious belief than a
technical argument.


..
> > I guess if you don't have that huge a revenue, you can tolerate
> > outages and less-than-enterprise class software.
>
> Not really. We have a small revenue but we cant tolerate outages. My
> boss even wanted our data to survive a nuclear strike in manila or
> amsterdam.


Ok, let me change that. Huge revenue OR huge transaction rate.

It's not rocket science to build a fault-tolerant, modest-sized app.
Building a fault-tolerant app that scales WELL (say... hundred- or
thousand-server clusters), now THAT'S rocket science. There are
open-source tools for this (PVM, MPI) but they are more geared to
compute grids, which traditionally are not transactional, generally
not 24x7 available, and can tolerate hardware failure.
_________________________________________________
Philippine Linux Users' Group (PLUG) Mailing List
[email protected] (#PLUG @ irc.free.net.ph)
Read the Guidelines: http://linux.org.ph/lists
Searchable Archives: http://archives.free.net.ph
