Re: [eX-bulk] : Re: Raspberry Pi - high density

2015-05-14 Thread charles

On 2015-05-13 19:42, na...@cdl.asgaard.org wrote:

Greetings,

Do we really need them to be swappable at that point?  The reason we
swap HDD's (if we do) is because they are rotational, and mechanical
things break.


Right.


Do we swap CPUs and memory hot?


Nope. Usually just toss the whole thing. Well, I keep spare RAM around 
because it's so cheap. But if a CPU goes, chuck it in the e-waste pile 
in the back.



 Do we even replace memory on a server that's gone bad, or just pull 
the whole thing during the periodic "dead body collection" and replace 
it?



Usually swap memory. But yeah, oftentimes the hardware ops folks just 
cull old boxes on a quarterly basis and backfill with the latest batch 
of inbound kit. At large scale (which many on this list operate at), you 
have pallets of gear sitting in the to-deploy queue, and another couple 
of pallets' worth racked up but not even imaged yet.


(This is all supposition, of course. I'm used to working with $HUNDREDS 
of racks' worth of gear.) Containers, Moonshot-type things, etc. are 
certainly on the radar.



 Might it not be more efficient (and space saving) to just add 20% more 
storage to a server than the design goal, and let the software use the 
extra space to keep running when an SSD fails?


Yes. Also, a few months ago I read an article about several SSD brands 
having $MANY terabytes written to them. Can't find it just now. But they 
seem to take quite a long time (in terms of data written / number of 
writes) to fail.
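For a sense of scale, here's a rough back-of-the-envelope lifetime from an 
endurance (TBW) rating. The numbers are assumptions for illustration, not 
figures from that article:

# Back-of-envelope SSD lifetime from an endurance (TBW) rating.
# All numbers below are assumed for illustration.
rated_tbw_gb = 300_000        # assume a 500 GB drive rated for ~300 TB written
writes_per_day_gb = 50        # assume ~50 GB of host writes per day

days_to_wear_out = rated_tbw_gb / writes_per_day_gb
print(f"~{days_to_wear_out:.0f} days (~{days_to_wear_out / 365:.1f} years) to reach the rated TBW")
# ~6000 days (~16.4 years) -- wear-out is rarely what kills the box first

Under those assumed numbers, something else in the chassis is likely to fail 
long before the flash wears out.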


  When the overall storage falls below tolerance, the unit is dead.  I 
think we will soon need to (if we aren't already) stop thinking about 
individual components as FRUs.  The server (or rack, or container) is 
the FRU.

Christopher



Yes. Agree.

Most of the very large-scale shops (the ones I've worked at) are 
massively horizontally scaled, cookie-cutter. Many boxes 
replicating/extending/expanding a set of well-defined workloads.
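The "add ~20% more flash than the design goal and declare the box dead when 
usable capacity drops below it" idea is easy to sketch. The counts and sizes 
below are illustrative assumptions only, not anyone's actual deployment:

# Sketch of overprovisioning flash and treating the whole server as the FRU.
# Sizes and counts are assumed for illustration.
design_goal_tb = 20.0                # capacity the node is supposed to provide
ssd_tb = 2.0                         # assumed per-SSD capacity
overprovision = 0.20                 # 20% headroom above the design goal

installed = int(round(design_goal_tb * (1 + overprovision) / ssd_tb))  # 12 SSDs

def node_alive(failed_ssds: int) -> bool:
    usable_tb = (installed - failed_ssds) * ssd_tb
    return usable_tb >= design_goal_tb   # below tolerance -> the whole unit is dead

for failed in range(4):
    print(failed, "failed ->", "alive" if node_alive(failed) else "dead (pull the box)")
# 0, 1, or 2 failed SSDs -> still alive; 3 failed -> below tolerance, cull it

With those assumed sizes, the node rides through two SSD failures before it 
drops below the design goal and gets swept up in the next "dead body collection".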


Re: [eX-bulk] : Re: Raspberry Pi - high density

2015-05-14 Thread nanog

Greetings,

	Do we really need them to be swappable at that point?  The reason we 
swap HDD's (if we do) is because they are rotational, and mechanical 
things break.  Do we swap CPUs and memory hot?  Do we even replace 
memory on a server that's gone bad, or just pull the whole thing during 
the periodic "dead body collection" and replace it?  Might it not be 
more efficient (and space saving) to just add 20% more storage to a 
server than the design goal, and let the software use the extra space to 
keep running when an SSD fails?  When the overall storage falls below 
tolerance, the unit is dead.  I think we will soon need to (if we aren't 
already) stop thinking about individual components as FRUs.  The server 
(or rack, or container) is the FRU.


Christopher



On 9 May 2015, at 12:26, Eugeniu Patrascu wrote:


On Sat, May 9, 2015 at 9:55 PM, Barry Shein  wrote:



On May 9, 2015 at 00:24 char...@thefnf.org (char...@thefnf.org) 
wrote:



So I just crunched the numbers. How many pies could I cram in a rack?


For another list I just estimated how many M.2 SSD modules one could
cram into a 3.5" disk case. Around 40 w/ some room to spare (assuming
heat and connection routing aren't problems), at 500GB/each that's
20TB in a standard 3.5" case.

It's getting weird out there.
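As a rough volume sanity check of that ~40-module figure, assuming nominal 
M.2 2280 dimensions and a generous packing-loss factor (all assumed numbers, 
not from the post):

# Rough volume check on "~40 M.2 modules in a 3.5-inch case".
# Dimensions are nominal/assumed; real packing depends on connectors and cooling.
case_mm = (146.0, 101.6, 26.1)     # approximate 3.5" HDD envelope (L x W x H)
m2_2280_mm = (80.0, 22.0, 4.0)     # M.2 2280 card plus rough component height

case_vol = case_mm[0] * case_mm[1] * case_mm[2]
module_vol = m2_2280_mm[0] * m2_2280_mm[1] * m2_2280_mm[2]

packing_efficiency = 0.75          # assumed loss to spacing, connectors, airflow
modules = int(case_vol * packing_efficiency // module_vol)
print(modules, "modules ->", modules * 0.5, "TB at 500 GB each")
# ~41 modules -> ~20 TB, in the same ballpark as the estimate above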


I think the next logical step in servers would be to remove the 
traditional hard drive cages and put in SSD module slots that can be 
hot-swapped. Imagine inserting small SSD modules on the front side of 
the servers and directly connecting them via PCIe to the motherboard. 
No more bottlenecks, and a software RAID of some sort would actually 
make a lot more sense than the current controller-based solutions.
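To make that trade-off concrete, here's a small sketch comparing usable 
capacity and failure tolerance for a few software RAID layouts across 
hypothetical front-panel NVMe modules (slot count and module size are 
assumptions, not from the thread):

# Compare usable capacity vs. failure tolerance for software RAID layouts
# across hot-swappable NVMe modules. Counts and sizes are assumed.
modules = 12                 # hypothetical front-panel NVMe slots
module_tb = 2.0              # assumed per-module capacity

layouts = {
    "raid0 (stripe)":  (modules * module_tb, 0),
    "raid10 (mirror)": (modules * module_tb / 2, 1),   # at least one failure, per mirror pair
    "raid6 (P+Q)":     ((modules - 2) * module_tb, 2),
}

for name, (usable_tb, tolerated) in layouts.items():
    print(f"{name:18s} usable={usable_tb:5.1f} TB, survives >= {tolerated} module failure(s)")

Nothing here depends on a hardware controller; the same arithmetic applies 
whether the modules sit behind a RAID card or straight on PCIe lanes.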



--
李柯睿
Avt tace, avt loqvere meliora silentio
Check my PGP key here: http://www.asgaard.org/cdl/cdl.asc
Current vCard here: http://www.asgaard.org/cdl/cdl.vcf
keybase: https://keybase.io/liljenstolpe