And, as a last note, I accept your challenge, sir. I will debate you in a fortress of our peers. I stand ready at a moment's notice!! ON GUARD!!
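P.S. For anyone curious whether their own box has the VT extensions discussed below, here's a quick check on Linux (a rough sketch; it just greps the advertised CPU flags, so a 0 can also mean VT is disabled in the BIOS):

```shell
# Count logical CPUs advertising hardware virtualization support.
# Intel exposes the "vmx" flag, AMD exposes "svm".
# A result of 0 means no usable VT (or it is switched off in the BIOS).
grep -c -E 'vmx|svm' /proc/cpuinfo || echo "no VT flags found"
```

The Q8200 in the VM server will report 0 here, which is exactly why the Q9550 swap (or any VT-capable CPU) matters for 64-bit guests.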
On Sun, Aug 26, 2012 at 9:53 AM, chris kluka <asd...@asdlkf.net> wrote:
> So, back to the debate, I doubt that it would have "capacity to spare".
>
> Even if we kill any boxen doing prime grid or pi or similar tasks, I just
> don't feel like a single quad-core server (even with VT) is going to be
> enough horsepower for (presumably 15-45) virtual machines.
>
> More importantly, I do not like the prospect of running that many workloads
> on desktop-grade equipment, without iLO or any proper form of remote
> management.
>
>
> Let's be clear on this point: $2500 does not represent "more cores and more
> ram"; $2500 represents "20x as many cores, 20x as much ram, iLO remote
> management, precise power metering and historical graphing, reconfigurable
> switching fabric with a 20Gbps internal switching stack, up to 8 Gbps blade
> enclosure to switch stack aggregate trunk, 6 redundant power supplies, 2
> redundant network cards per blade, and all the blades themselves would be
> fully redundant and clustered."
>
> The iLO management is relatively important to most of us, I assume, who
> would be working with this system, and I personally put a fair bit of weight
> behind that component in deciding (1x beefy server vs blades).
>
>
> On Sun, Aug 26, 2012 at 9:45 AM, chris kluka <asd...@asdlkf.net> wrote:
>
>> Quick comparison page: http://ark.intel.com/compare/36547,33924
>>
>>
>> On Sun, Aug 26, 2012 at 9:43 AM, chris kluka <asd...@asdlkf.net> wrote:
>>
>>> What socket is the processor slot in the VMServer?
>>>
>>> I see it has a Core 2 Q8200 in it. I have a Core 2 Q9550 lying around I
>>> would trade 1:1 for.
>>>
>>> The Q9550 is slightly faster, has way more L2 cache, and has the VT
>>> extensions.
>>>
>>>
>>> On Sun, Aug 26, 2012 at 9:32 AM, Stefan Penner
>>> <stefan.pen...@gmail.com> wrote:
>>>
>>>> +1 Improve a VM server, over a farm of machines that will be totally
>>>> under-utilized.
>>>>
>>>> On 2012-08-26, at 10:27 AM, Mark Jenkins <m...@parit.ca> wrote:
>>>>
>>>> > I watched the blade discussion with interest.
>>>> >
>>>> > The amount of compute capacity being contemplated is massive -- it's
>>>> well beyond the peak resource needs of everything in the server room at
>>>> present.
>>>> > (That is, if we ignore anyone doing optimal Golomb rulers, prime
>>>> hunting, RSA numbers, etc., as these are infinite needs of indefinite
>>>> scope that suck up whatever you throw at them.)
>>>> >
>>>> > If your goal is just to consolidate the current workloads in the most
>>>> energy-efficient way, it doesn't make sense to spend a lot of money on an
>>>> action that puts even more capacity online. Whatever you make available
>>>> to people will in the end get used. [That is, we will often have a higher
>>>> percentage of the blades blazing.]
>>>> > (Even if you ban or put a limit on the infinite-indefinite stuff.)
>>>> >
>>>> > You don't need to spend $2,500 when $400 to $900 in upgrades to our
>>>> VM server would be enough to consolidate everything running right now,
>>>> and with capacity to spare. As such, I am launching a capital campaign
>>>> for that:
>>>> > http://skullspace.ca/wiki/index.php/Vmsrv#Capital_Campaign
>>>> >
>>>> > Also seeking project funding:
>>>> > http://www.skullspace.ca/wiki/index.php/Proposed_projects#VM_server_hardware_upgrades
>>>> > """
>>>> > The current upgrade project is to switch to a CPU with VT extensions,
>>>> which will improve VM performance, allow for 64-bit guest OSes, and also
>>>> make available more guest operating systems, such as OpenBSD and FreeBSD,
>>>> that are currently a no-go with VirtualBox and no hardware extensions.
>>>> > """
>>>> >
>>>> > There's a stronger case for power-use ROI here -- not only because
>>>> less money is being spent, but total compute capacity is actually going
>>>> down.
That is, except for those doing infinite-indefinites, we'll be taking
>>>> heavy servers offline that are mostly running idle.
>>>> >
>>>> > ------------------------
>>>> >
>>>> > And now, to the subject line, as there's no interesting debate in the
>>>> above.
>>>> >
>>>> > Someday we will grow and not be saying "shit, we need to consolidate
>>>> and reduce energy use". We'll be saying "more power!" and want to add a
>>>> lot more capacity.
>>>> >
>>>> > It is conceivable that we'll be able to have a successful, special
>>>> fund-raising drive just for that, and reach a nice target like $2,500.
>>>> >
>>>> > But if you're going to spend $2,500, I say spend it on one super
>>>> kick-ass server vs the blade approach of scaling out RAM and CPU in
>>>> parallel.
>>>> >
>>>> > I have nothing against blades in general -- for many scientific,
>>>> engineering, artistic, and business use cases it makes sense to scale
>>>> out in the blade way.
>>>> >
>>>> > Nor am I against mixing blades with virtualization. (Such as here:
>>>> > http://web.archive.org/web/20090204223932/http://get-admin.com/blog/?p=392)
>>>> >
>>>> >
>>>> > What I want to argue is that the workloads of hackers in a
>>>> hackerspace are better suited to scaling in a vertical direction over a
>>>> horizontal one. Call it /The one grand machine to rule them all/.
>>>> >
>>>> > Seeing how donors are already in contemplation mode, I feel the need
>>>> to challenge the blade advocates to a debate.
>>>> >
>>>> > I'm not going to have that debate here on the mailing list (which is
>>>> why I haven't said *why* it's better for a hackerspace to spend $2,500
>>>> to scale vertically) -- I'm going to give a formal presentation on the
>>>> subject.
>>>> >
>>>> > To the blade advocates -- do you wish to accept my challenge to a
>>>> duel by scheduling presentations back to back (in random order?)?
>>>> Alternatively, I could go first (perhaps late September..)
and you could
>>>> opt for a rebuttal on a separate day, once you've seen it?
>>>> >
>>>> >
>>>> > Mark
>>>> > _______________________________________________
>>>> > SkullSpace Discuss Mailing List
>>>> > Help: http://www.skullspace.ca/wiki/index.php/Mailing_List#Discuss
>>>> > Archive: https://groups.google.com/group/skullspace-discuss-archive/