On Sun, 18 Feb 2007 14:25:57 +0100, Michel Talon wrote:

> Rupert Pigott wrote:
>
>> On Thu, 01 Feb 2007 09:39:30 -0500, Justin C. Sherrill wrote:
>
>> True, but Matt has explained that ZFS doesn't provide the functionality
>> that DragonFlyBSD needs for cluster computing.
>>
>> ZFS solves the problem of building a bigger fileserver, but it doesn't
>> help you distribute that file system across hundreds or thousands of
>> grid nodes. ZFS doesn't address the issue of high-latency comms links
>> between nodes, and NFS just curls up and dies when you try to run it
>> across the Atlantic with 100+ms of latency.
>>
>> I don't know if IBM's GridFS does any better with the latency, and it
>> certainly scales a lot better, but the barrier to adoption is $$$. It
>> costs $$$ to buy, and a lot more $$$ to train up and hire the SAs to
>> run it. There are other options like AFS too, but people tend to be
>> put off by the learning curve and the fact that it's an extra rather
>> than something that is packaged with the OS.
>
> Of course it is none of my business, but I have always wondered about
> the real usefulness of a clustering OS in the context of free systems,
> and your post allows me to explain why.
Here's one reason: a free project wants to crack a particular problem that
needs massive amounts of cycles and/or IO bandwidth, but no individual can
afford to run a datacentre. A distributed compile farm would fit that bill.

> People who have the money to buy machines by the thousands, run them,
> pay the electricity bill, etc. should also have the money to pay $$$ to
> IBM, and not count on the generosity of unpaid developers. Small
> installations are the natural target of free systems,

It doesn't always happen that way. Quite frequently internal politics,
short-sightedness, NIH, budget battles, etc. get in the way.

> and in this context I remain convinced that the clustering ideas have a
> utility close to null. And frankly, I doubt they have any utility for big

This would be an enabling technology. Big business doesn't innovate; the
little guys do the innovation. I didn't see big business amongst the early
adopters of Ethernet, TCP/IP, UNIX, etc.

> systems if you don't use high speed, low latency interconnects which are
> far more expensive than the machines themselves. And even with this highly

Tell that to the folks who crack codes. Low latency is highly desirable,
but it isn't essential for all problems... Render farms and compile farms
are good examples (there's a back-of-envelope sketch in the P.S. below).

> expensive hardware, if you don't have high-brain programmers able to
> really make use of concurrency.

They help, but they aren't essential. There are a surprising number of
problems out there that can be cracked in a dumb way. :)

> On the contrary, the disks of Joe User are becoming bigger and bigger,
> his processor is getting more and more cores, so there is clearly a need
> for file systems appropriate for big disks and sufficiently reliable
> (ZFS being an example) and operating systems able to use multicores
> efficiently.

I suspect that smaller, slower cores are on the agenda for the great
unwashed masses. I am one of those people who think the days of the
foot-warming tower case are numbered. Laptops, PDAs and game consoles
already out-ship desktops by a few orders of magnitude, and I don't see
that trend swinging back the other way anytime soon.

I think you have also missed a point here. Applications like SETI just
weren't possible without the Grid concept (funding) - and people really do
want to do that kind of stuff. Sure, you and I might question the utility
of it, but the fact is it gave those guys a shot at doing something way
beyond their budget *without* having to resort to exotic hardware or
software.

For the record, I cut my Parallel Processing teeth on OCCAM & Transputers.
This Grid stuff is neanderthal by comparison, but I have seen people get
real work out of it, and I can see a bunch of folks out there who could
also find it useful...

Perhaps in the future you could contribute your unused cycles & storage to
web-serving & compiling for the DFly project. I wouldn't mind that. :)

Cheers,
Rupert
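
P.S. Since the latency point keeps coming up, here's a minimal
back-of-envelope sketch of the arithmetic. It's plain Python with invented
numbers (10,000 small ops, 10 work units, 60s of compute each - nothing
measured), just to show why a chatty synchronous protocol dies over a
100+ms link while coarse-grained farm work barely notices:

    # Back-of-envelope only: invented numbers, no real network involved.

    rtt = 0.100   # seconds; a rough transatlantic round trip

    # NFS-style chatty protocol: one synchronous RPC per small operation,
    # so every operation waits out a full round trip before the next.
    ops = 10000   # e.g. the stat/open/read calls of a single build
    print("chatty protocol: %.0f s spent purely on round trips"
          % (ops * rtt))

    # Grid/farm style: ship a few big self-contained work units, compute
    # locally, ship results back.  Latency is paid once per unit.
    units = 10
    compute = 60.0   # seconds of real work per unit
    total = units * (rtt + compute)
    print("grid farm: latency is %.2f%% of total runtime"
          % (100.0 * units * rtt / total))

With those numbers the chatty protocol burns 1000 seconds on round trips
alone, while the farm spends about 0.17% of its runtime on latency. A real
render or compile farm has its own overheads, of course, but the shape of
those numbers is why 100+ms links are fatal for one and a non-issue for
the other.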