Hmmm, Pipermail seems to have neglected to send me your reply; I saw it in the archives. As far as the kernel goes, I am still nailing down the config. I did see that openSUSE now has CuBox support and I am looking at that config. It is based on a 3.3.0-rc5 kernel, but I may make progress by comparing it to the original shipping kernel config. I am not an expert with kernel configs; I haven't really messed with one since FreeBSD 5 was current. I will try compiling the source on the cloud instance I am playing with and see if I can come up with something useful to add to the wiki.
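For what it's worth, this is the sort of comparison I have in mind — a minimal sketch (the file names and CONFIG_ lines below are made-up placeholders, not the real configs) that sorts both .config files first so that option-ordering differences don't drown out the real ones:

```shell
#!/usr/bin/env bash
# Placeholder configs standing in for the shipping and openSUSE kernel configs.
cat > config-shipping <<'EOF'
CONFIG_SMP=y
CONFIG_ARM=y
CONFIG_HIGHMEM=n
EOF
cat > config-opensuse <<'EOF'
CONFIG_ARM=y
CONFIG_SMP=y
CONFIG_HIGHMEM=y
EOF

# Sort each file, then diff; keep only the lines that actually differ.
diff <(sort config-shipping) <(sort config-opensuse) | grep '^[<>]'
# prints:
#   < CONFIG_HIGHMEM=n
#   > CONFIG_HIGHMEM=y
```

(The kernel tree also ships scripts/diffconfig, which does roughly this with nicer output, if it's present in the source you're building from.)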
On Wednesday, August 22, 2012, Gordan Bobic wrote:
> On 08/22/2012 10:45 AM, Adrian Hardy wrote:
>> On Tue, Aug 21, 2012 at 1:29 AM, Gordan Bobic <[email protected]> wrote:
>>> On 08/20/2012 03:48 PM, Adrian Hardy wrote:
>>>> A 32 core ARM v7 1U server which draws < 1A on max load?
>>>> http://www.baserock.com/
>>>> ....
>>>
>>> Most interesting. I've just dropped them a message.
>>>
>>> I've also similarly had a chat with Boston about their Viridis
>>> servers (Calxeda based) and we got RS running on those. The only
>>> downside is that the fully loaded machine is about £30K and the
>>> minimal configuration is over £6K. Awesome machine, but hard to
>>> stretch to for an unfunded volunteer project.
>>
>> Glad to hear it's of interest. I'm still interested to know how a
>> regular person would make use of it without virtualisation. Either that,
>> or make use of cheap clustering like MySQL NDB / round robin CGI, etc. I
>> suppose for redsleeve, it could function as a relatively simple parallel
>> build farm with 8 RPMs being built at any given time. Please be sure to
>> post back with their reply :)
>
> I've exchanged a couple of emails with them and they don't appear to be
> particularly interested in RedSleeve. The product also seems to be somewhat
> vaporware, much more so than the Boston Viridis. The relevant quote from
> their email is:
>
> "There is no simple pricing model due to the limited run"
>
> They also didn't seem to respond when asked if their system would be
> competitively priced against the Boston Viridis' minimal configuration (4
> nodes x 4 cores at about £6K), which leads me to believe it won't be
> significantly cheaper, if at all, especially considering the Viridis
> pricing per node improves as you add more CPU modules.
>
> On the subject of how you would use it, these are farm-in-a-box blade-type
> machines, same as x86 blade systems. They are many separate computers. The
> only way you could glue them together into a single SMP-ish machine would
> be to use an SSI (Single System Image) kernel, e.g. Kerrighed (only
> supports x86-64, not even x86-32, let alone ARM or another architecture)
> or Mosix (not free, no idea if it supports anything but x86). There could
> be other options - you might be interested to have a look at this:
>
> http://openssi.org/cgi-bin/view?page=openssi.html
>
> Unfortunately, the kernels supported by OpenSSI are quite ancient.
>
> The other thing to consider is that this sort of technology usually comes
> with substantial performance penalties, and is only suitable for _very_
> coarse-grained tasks. I tried distributed kernel compiling on a 4 cores x 2
> nodes x86-64 Kerrighed cluster and the time it takes to compile most .c
> files is less than the time it takes to migrate the process to another
> node, so this is not a suitable application for SSI.
>
> When you use a setup like this, the only efficient way to get performance
> out of it is to use a load balancing solution such as LNLB to distribute
> the load of incoming network requests, to shard the data in the database,
> or, for an rpm build farm, to use koji or something similar to build
> different rpms on different nodes.
>
> Gordan

-- 
Thanks,
Ian M Perkins
_______________________________________________
users mailing list
[email protected]
http://lists.redsleeve.org/mailman/listinfo/users
