Re: [eX-bulk] : Re: Raspberry Pi - high density

2015-05-14 Thread charles

On 2015-05-13 19:42, na...@cdl.asgaard.org wrote:

Greetings,

Do we really need them to be swappable at that point?  The reason we
swap HDDs (if we do) is because they are rotational, and mechanical
things break.


Right.


Do we swap CPUs and memory hot?


Nope. Usually just toss the whole thing. Well, I keep spare RAM around 
because it's so cheap. But if a CPU goes, it gets chucked in the e-waste 
pile in the back.



 Do we even replace

memory on a server that's gone bad, or just pull the whole thing
during the periodic "dead body collection" and replace it?



Usually we swap memory. But yeah, oftentimes the hardware ops folks just 
cull old boxes on a quarterly basis and backfill with the latest batch 
of inbound kit. At large scale (which many on this list operate at), you 
have pallets of gear sitting in the to-deploy queue, and another couple 
of pallets' worth racked up but not even imaged yet.


(This is all supposition, of course. I'm used to working with $HUNDREDS 
of racks' worth of gear.) Containers, Moonshot-type things, etc. are 
certainly on the radar.



 Might it

not be more efficient (and space saving) to just add 20% more storage
to a server than the design goal, and let the software use the extra
space to keep running when an SSD fails?


Yes. Also, a few months ago I read an article about several SSD brands 
having $MANY terabytes written to them. Can't find it just now. But they 
seem to take quite a long time (data-wise / number-of-writes-wise) to fail.


  When the overall storage

falls below tolerance, the unit is dead.  I think we will soon need to
(if we aren't already) stop thinking about individual components as
FRUs.  The server (or rack, or container) is the FRU.

Christopher



Yes. Agree.

Most of the very large-scale shops (the ones I've worked at) are 
massively horizontally scaled, cookie-cutter: many boxes 
replicating/extending/expanding a set of well-defined workloads.
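
As a back-of-the-envelope illustration of the overprovisioning idea above 
(the 20% headroom comes from the thread; the design goal, per-drive size, 
and resulting drive count are made-up numbers), a minimal sketch in Python:

    # Build each node with 20% more raw SSD capacity than the design goal and
    # only retire the whole node once usable capacity falls below that goal.
    design_goal_tb = 100.0              # capacity the workload actually needs
    overprovision = 0.20                # extra headroom baked in at build time
    drive_tb = 8.0                      # per-SSD capacity (assumed)

    raw_tb = design_goal_tb * (1 + overprovision)   # 120 TB installed
    drives = round(raw_tb / drive_tb)               # 15 drives

    # In-place SSD failures the node can absorb before it becomes the dead FRU
    tolerable_failures = int((drives * drive_tb - design_goal_tb) // drive_tb)

    print(f"{drives} x {drive_tb:.0f} TB SSDs installed, "
          f"node survives {tolerable_failures} in-place failures")

Under those assumptions the node rides out two failed SSDs before usable 
capacity drops below the design goal and the whole box gets pulled at the 
next dead-body collection.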


Re: [eX-bulk] : Re: Raspberry Pi - high density

2015-05-14 Thread nanog

Greetings,

	Do we really need them to be swappable at that point?  The reason we 
swap HDDs (if we do) is because they are rotational, and mechanical 
things break.  Do we swap CPUs and memory hot?  Do we even replace 
memory on a server that's gone bad, or just pull the whole thing during 
the periodic "dead body collection" and replace it?  Might it not be 
more efficient (and space saving) to just add 20% more storage to a 
server than the design goal, and let the software use the extra space to 
keep running when an SSD fails?  When the overall storage falls below 
tolerance, the unit is dead.  I think we will soon need to (if we aren't 
already) stop thinking about individual components as FRUs.  The server 
(or rack, or container) is the FRU.


Christopher



On 9 May 2015, at 12:26, Eugeniu Patrascu wrote:


On Sat, May 9, 2015 at 9:55 PM, Barry Shein  wrote:



On May 9, 2015 at 00:24 char...@thefnf.org (char...@thefnf.org) 
wrote:



So I just crunched the numbers. How many Pis could I cram in a rack?


For another list I just estimated how many M.2 SSD modules one could
cram into a 3.5" disk case. Around 40 w/ some room to spare (assuming
heat and connection routing aren't problems), at 500GB/each that's
20TB in a standard 3.5" case.

It's getting weird out there.
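
A rough sanity check of that "around 40 modules / 20TB" figure; the M.2 
2280 and 3.5" form-factor dimensions and the 3.5 mm slot pitch below are 
assumptions, and connectors, carriers, and airflow are ignored:

    import math

    # 3.5" drive envelope (mm): width x depth x height
    case_w, case_d, case_h = 101.6, 147.0, 26.1

    # M.2 2280 module: 22 mm wide, 80 mm long; assume a 3.5 mm slot pitch
    mod_w, mod_len, slot_pitch = 22.0, 80.0, 3.5

    # Stand each module on its 22 mm edge (fits under the 26.1 mm height),
    # run the 80 mm length across the 101.6 mm width, and stack modules
    # side by side along the 147 mm depth.
    fits = mod_w <= case_h and mod_len <= case_w
    modules = math.floor(case_d / slot_pitch) if fits else 0

    capacity_tb = modules * 0.5         # 500 GB per module
    print(f"~{modules} modules, ~{capacity_tb:.0f} TB raw")  # ~42 modules, ~21 TB

which lands in the same ballpark as the estimate quoted above.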


I think the next logical step in servers would be to remove the 
traditional hard drive cages and put in SSD module slots that can be hot 
swapped. Imagine inserting small SSD modules on the front side of the 
servers and directly connecting them via PCIe to the motherboard. No more 
bottlenecks, and a software RAID of some sort would actually make a lot 
more sense than the current controller-based solutions.



--
李柯睿
Avt tace, avt loqvere meliora silentio
Check my PGP key here: http://www.asgaard.org/cdl/cdl.asc
Current vCard here: http://www.asgaard.org/cdl/cdl.vcf
keybase: https://keybase.io/liljenstolpe


Re: Is anyone working on an RFC for standardized maintenance notifications

2015-05-14 Thread Bill Woodcock

Whoo...  Yeah, we had a WG on that, back around 2000 or so...  The 
determination was, as I recall, that it didn't need to be part of SNMP, but it 
kind of went off the rails in an all-things-to-all-people sort of way.  But my 
memory is vague.  Erik Guttman might remember more clearly.  

Anyway, the idea is a good one, and if you can keep it constrained to a 
reasonable scope, I think you should find good support. 


-Bill


> On May 14, 2015, at 06:10, Robert Drake  wrote:
> 
> Like the "Automated Copyright Notice System" (http://www.acns.net/spec.html) 
> except I don't think they went through any official standards body besides 
> their own MPAA, or whatever.
> 
> I get circuits from several vendors and get maintenance notifications from 
> them all the time.  Each has a different format and each supplies different 
> details for their maintenance.  Most of the time there are core things that 
> everyone wants, and it would be nice if they were machine-readable so 
> automation could be performed (e.g., our NOC gets the email into our 
> ticketing system, it is recognized as part of an existing maintenance by 
> maintenance ID # (or as new, whatever), and fields are automatically 
> populated or updated accordingly).
> 
> If you're uncomfortable with the phrase "automatically populated accordingly" 
> for security reasons, then you can replace that with "NOC technician verifies 
> all fields are correct and hits update ticket," or whatever.
> 
> The main fields I think you would need:
> 
> 1.  Company Name
> 2.  Maintenance ID
> 3.  Start Date
> 4.  Expected length
> 5.  Circuits impacted (if known or applicable)
> 6.  Description/Scope of Work (free form)
> 7.  Ticket Number
> 8.  Contact
>
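
A minimal sketch of what a machine-readable version of those eight fields 
might look like; the field names, types, and JSON framing are assumptions 
rather than any existing standard:

    from dataclasses import dataclass, asdict
    from datetime import datetime, timedelta
    import json

    @dataclass
    class MaintenanceNotice:
        company: str                # 1. Company Name
        maintenance_id: str         # 2. Maintenance ID
        start: datetime             # 3. Start Date
        expected_length: timedelta  # 4. Expected length
        circuits: list[str]         # 5. Circuits impacted (if known/applicable)
        description: str            # 6. Description/Scope of Work (free form)
        ticket: str                 # 7. Ticket Number
        contact: str                # 8. Contact

    notice = MaintenanceNotice(
        company="Example Carrier",
        maintenance_id="MAINT-2015-0042",
        start=datetime(2015, 5, 20, 2, 0),
        expected_length=timedelta(hours=4),
        circuits=["CKT-12345", "CKT-67890"],
        description="Fiber splice; brief hits expected",
        ticket="TT-98765",
        contact="noc@example.net",
    )

    # Serialize for the ticketing system; a NOC tool could key tickets on
    # maintenance_id and update them as revised notices arrive.
    print(json.dumps(asdict(notice), default=str, indent=2))

That is essentially the workflow described above: the notice is parsed, 
matched to an existing maintenance by maintenance_id (or recognized as new), 
and the ticket fields are populated or updated accordingly.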