On Monday, August 18, 2014 08:09:00 PM thegeezer wrote:
> On 18/08/14 15:31, J. Roeleveld wrote:
> > <snip>
> valid points, and interesting to see the corrections of my
> understanding, always welcome :)
You're welcome :)

> > Looks nice, but is not going to help with performance if the
> > application is not designed for distributed processing.
> >
> > --
> > Joost
>
> this is the key point i would raise about clusters really -- it would be
> nice to not need for example distcc configured and just have portage run
> across all connected nodes without any further work, or to use a tablet
> computer which is "borrowing" cycles from a GFX card across the network
> without having to configure nvidia grid: specifically these two use
> cases have wildly different characteristics and are a great example of
> why clustering has to be designed first to fit the application and
> vice versa.

I had a better look at that site you linked to. It won't be as "hidden"
as you'd like. The software you run on it needs to be designed to
actually use the infrastructure.

This means that for your ideal to work, the "industry" needs to decide
on a single clustering technology for this.
I wish you good luck on that venture. :)

> /me continues to wonder if 10GigE is fast enough to page fault across
> the network ... ;)

Depends on how fast you want the environment to be.
Old i386 speed, probably. Performance equivalent to a modern system, no.

Check the bus speeds between the CPU and the memory being employed
these days. That is the minimum speed the network link needs to match
to actually work. And that assumes a perfect link, with no errors
occurring in the wiring.

--
Joost
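[Editor's note: a rough back-of-envelope sketch of the bandwidth gap Joost describes. The figures below are assumed, order-of-magnitude numbers typical of 2014-era hardware (10GigE line rate, dual-channel DDR3-1600 peak), not measurements from any specific system, and they ignore latency, which hurts network paging even more than raw bandwidth does.]

```python
# Back-of-envelope: 10GigE link bandwidth vs. a typical memory bus.
# All figures are assumed, theoretical-peak values (no protocol
# overhead, no link errors) -- i.e. the best case for the network.

# 10 Gigabit Ethernet: 10 Gbit/s line rate, converted to bytes/s.
ten_gige_bytes_per_s = 10e9 / 8          # 1.25e9 B/s

# Dual-channel DDR3-1600: ~12.8 GB/s per channel, two channels.
memory_bus_bytes_per_s = 2 * 12.8e9      # 25.6e9 B/s

ratio = memory_bus_bytes_per_s / ten_gige_bytes_per_s

print(f"10GigE link : ~{ten_gige_bytes_per_s / 1e9:.2f} GB/s")
print(f"memory bus  : ~{memory_bus_bytes_per_s / 1e9:.1f} GB/s")
print(f"memory bus is ~{ratio:.0f}x faster than the link")  # ~20x
```

Even with these generous assumptions, the memory bus outruns the network link by roughly a factor of twenty, which is why paging across 10GigE could only feel like a much older machine.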

