Re: "Hypothetical" Datacenter Overheating

2024-01-18 Thread Lamar Owen

On 1/15/24 10:14, sro...@ronan-online.com wrote:

I’m more interested in how you lose six chillers all at once.

According to a post on a support forum for one of the clients in that 
space: "We understand the issue is due to snow on the roof affecting the 
cooling equipment."


Never overlook the simplest single points of failure.  Snow on cooling 
tower fan blades... failed fan motors are possible or even likely at 
that point, and that's assuming the airflow isn't clogged.  Conceptually 
it's much like having multiple providers for redundancy only to find 
they're all in the same cable or conduit.




Re: "Hypothetical" Datacenter Overheating

2024-01-18 Thread Lamar Owen

On 1/17/24 20:06, Tom Beecher wrote:


If these chillers are connected to BACnet or similar network, then
I wouldn't rule out the possibility of an attack.


Don't insinuate something like this without evidence. Completely 
unreasonable and inappropriate.


I wasn't meaning to insinuate anything; it's as much of a reasonable 
possibility as any other these days.


Perhaps I should have worded it differently: "if my small data centers' 
chillers were connected to some building management network such as 
BACnet and all of them went down concurrently I would be investigating 
my building management network for signs of intrusion in addition to 
checking other items, such as shared points of failure in things like 
chilled water pumps, electrical supply, emergency shut-off circuits, 
chiller/closed-loop configurations for various temperature, pressure, 
and flow set points, etc."  Bit more wordy, but doesn't have the same 
implication.  But I would think it unreasonable, if I were to find 
myself in this situation in my own operations, to rule any possibility 
out that can explain simultaneous shutdowns.


And this week we did have a chiller go out on freeze warning, but the DC 
temp never made it quite up to 80F before the outside temperature rose 
back into double digits and the chiller restarted.


Re: "Hypothetical" Datacenter Overheating

2024-01-17 Thread Lamar Owen
>This sort of mass failure seems to point towards either design issues 
>(like equipment selection/configuration vs temperature range for the 
>location), systemic maintenance issues, or some sort of single failure 
>point that could take all the chillers out, none of which I'd be happy 
>to see in a data center.



If these chillers are connected to BACnet or similar network, then I wouldn't 
rule out the possibility of an attack.

Re: "Hypothetical" Datacenter Overheating

2024-01-15 Thread Lamar Owen
On Mon, Jan 15, 2024 at 7:14 AM  wrote:
>> I’m more interested in how you lose six chillers all at once.
>Extreme cold. If the transfer temperature is too low, they can reach a
>state where the refrigerant liquifies too soon, damaging the
>compressor.
>Regards,
>Bill Herrin

Our 70-ton Tranes here have kicked out on 'freeze warning' before; there's a 
strainer in the water loop at the evaporator that can clog, restricting flow 
enough to allow freezing to occur if the chiller is actively cooling.  It's so 
strange to have an overheating data center in subzero (F) temps.  The flow 
sensor in the water loop can sometimes get too cold and not register the flow 
as well.





Re: Reminder: Never connect a generator to home wiring without transfer switch

2021-08-30 Thread Lamar Owen

On 8/25/21 11:26 AM, Dave wrote:
Back feed is a significant problem but bringing a generator that is 
not synchronized to the grid can have dramatic results, typically only 
once 


This, IMO, is a great thread, lots of good reading here.

My $dayjob is at a site where the previous occupants did indeed operate, 
under PE supervision and special permit from the electric cooperative, 
grid-synchronized generators.  Most of the required switchgear is still 
here, including the dual-incandescent-bulb sync indicators.  Since we 
don't have the required 24x7 PE (that's licensed Professional Engineer, 
by the way) supervision, we don't do this, but over a period of several 
years had normal ATS setups with isolated generation installed, with a 
more distributed setup with only critical loads on UPS.  But grid-sync 
generation is a form of UPS.


The process of going from grid being offline, then adjusting the 
generator governors to sync-in and closing the main breakers (three of 
them, 2,500A each) at the very instant of sync, was apparently quite the 
sight to behold, with the sync indicators blinking and pulsing, until 
they locked in... At the 2,500A level it is definitely not a pretty 
sight to reclose out of sync.
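The 'close at the very instant of sync' condition is essentially what a sync-check relay (ANSI device 25) automates: frequency slip, voltage difference, and phase angle must all be inside a window before the breaker is permitted to close.  A rough sketch of that permissive logic, with illustrative tolerance values that are not any real relay's settings:

```python
# Hypothetical sketch of sync-check (ANSI device 25) permissive logic.
# Tolerance values below are illustrative assumptions, not from any
# standard or real relay settings.

def ok_to_close(gen_hz, bus_hz, gen_v, bus_v, phase_deg,
                max_slip_hz=0.1, max_v_pct=5.0, max_phase_deg=10.0):
    """Permit breaker close only when generator and bus are close
    enough in frequency, voltage, and phase angle."""
    slip = abs(gen_hz - bus_hz)
    v_diff_pct = abs(gen_v - bus_v) / bus_v * 100.0
    return (slip <= max_slip_hz
            and v_diff_pct <= max_v_pct
            and abs(phase_deg) <= max_phase_deg)

# Nearly in sync: tiny slip, matched voltage, small angle
print(ok_to_close(60.02, 60.00, 480.0, 482.0, 4.0))    # True
# 180 degrees out: the case that wrecks generators
print(ok_to_close(60.00, 60.00, 480.0, 480.0, 180.0))  # False
```

The blinking sync lamps in the old switchgear were the human-eye version of the same three comparisons.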


Synchronous generators were more reliable and less expensive in those 
days than battery backups, especially in the megawatt class, and if you 
only required intermittent uninterruptible power.  The largest battery 
backup on site was 500KVA and was a Piller motor-generator with a huge 
battery on the DC bus.  The total site drew a bit over 2MW on a normal 
day, and so loads were prioritized and during critical operations only 
the required generator capacity was brought online in synchronous mode.  
The main breakers were set up with power-loss instant trip; when we went 
ATS and regular generator operations (much much less load for us) we 
disabled the power-loss trip functions on the three 2,500A mains.


I have some friends who work for the local electric cooperative, and all 
of them have backfeed stories.  Around here, which is very rural, it's 
not at all uncommon to have a single house isolated on a distribution 
spur; nor is it at all uncommon for some people to have the 'suicide 
dryer cord' 'ATS' in use.  One story I heard was that one individual had 
been a repeat offender, and several line workers had gotten bit (none 
were injured, thankfully) by his 4KVA generator; so after the repair of 
one outage, a line worker taught the individual a lesson by hitting the 
recloser without letting the individual know ahead of time (they had 
been letting the individual know that power was coming back on) and 
watching the individual's generator, out in his yard, explode from the 
out-of-sync condition.




Re: Impacts of Encryption Everywhere (any solution?)

2018-05-29 Thread Lamar Owen

On 05/28/2018 06:13 PM, Matthew Petach wrote:

Your 200mbit/sec link that costs you $300 in hardware
is going to cost you $4960/month to actually get IP traffic
across, in Nairobi.   Yes, that's about $60,000/year.
I live in the US of A, and this is what 200Mb/s roughly would cost me as 
well here in Rural Monopoly-land.  Rural ILEC also has the CATV 
business, and, well, they are _not_ going to run cable up here.  I've 
actually priced 150Mb/s bandwidth from the ILEC over the years; in 2003 
the cost would have been about $100,000 per month. As of five years ago 
10Mb/s symmetrical cost roughly $1,000 per month, the lion's share of 
that being per-mile NECA Tariff 5 transport costs.


The terrain here prevents fixed wireless.  The terrain also prevents 
satellite comms to the Clarke belt (mountain to the south with trees on 
US Forest Service property in the line of sight).  I get 1XRTT in one 
room of my house when the humidity is below 70% and it's winter, and 
once in a blue moon 3G will light up, but it's not stable enough to 
actually use; it's the speed of dialup.  If I traipse about a hundred 
yards up the mountain to the south (onto US Forest Service property, so, 
no repeater for me) I can get semi-usable 4G; nothing like being in the 
middle of the woods with an active black bear population trying to get a 
usable signal.


I'm paying $50 per month for 7/0.5 DSL (I might add that they provide 
excellent DSL that has been extremely reliable) from the only ISP 
available in the area.


I remember a usable web experience not too long ago on 28.8K/33.6K 
dialup (it was quite a while before said ILEC got a 56K-capable modem 
bank).  DSL started out here at 384k/128k.  On the positive side, we 
have a very low oversubscription ratio, so I actually get the full 
bandwidth the majority of the time, even video streaming. I also know 
all the network engineers there, too, and that also has its advantages.


(Yes, I am aware that rural living is a choice, and there are things 
worth a great deal more than bandwidth, that it's a tradeoff, etc.)


So it's not just '3rd-world' countries with expensive bandwidth.



Re: Best practices for telcoflex -48VDC cabling & other power OSI layer 1

2016-08-10 Thread Lamar Owen

On 07/18/2016 12:12 PM, Eric Kuhnke wrote:

I'm looking for a document or set of photos/presentation on best practices
for telcoflex/-48VDC power cabling installation. Labeling, routing,
organization and termination, etc. Or a recommendation on a printed book
that covers this topic.
I apologize for the late reply, but even if just for the archives, the 
best resource for DC power systems and cabling I have ever found is "DC 
Power System Design for Telecommunications" by Whitham D. Reeve, 
published by Wiley and Sons as part of the IEEE Telecommunications 
Handbook Series.





Re: Cost-effectivenesss of highly-accurate clocks for NTP

2016-05-16 Thread Lamar Owen

On 05/15/2016 03:16 PM, Måns Nilsson wrote:
...If you think the IP implementations in IoT devices are naïve, wait 
until you've seen what passes for broadcast quality network 
engineering. Shoving digital audio samples in raw Ethernet frames is 
at least 20 years old, but the last perhaps 5 years has seen some 
progress in actually using IP to carry audio streams. (this is 
close-to-realtime audio, not file transfers, btw.) 


Close to realtime is a true statement.  Using an IP STL 
(studio-transmitter link) has enough latency that the announcer can no 
longer use the air signal as a monitor.
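A rough, purely illustrative budget of where that IP STL latency comes from (codec framing, packetization, jitter buffer, network transit); every number below is an assumption for the sketch, not a measurement of any product:

```python
# Illustrative one-way latency budget for an IP studio-transmitter
# link.  All figures are assumptions, not vendor specifications.

def stl_latency_ms(codec_frame_ms=20.0, frames_per_packet=2,
                   jitter_buffer_ms=100.0, network_ms=15.0):
    """One-way latency: codec framing delay, packetization of N
    frames per packet, receive jitter buffer, and network transit."""
    packetization = codec_frame_ms * frames_per_packet
    return codec_frame_ms + packetization + jitter_buffer_ms + network_ms

print(stl_latency_ms())  # 175.0 ms one way
```

Even with generous assumptions the total lands far beyond the couple dozen milliseconds at which hearing your own delayed voice becomes disruptive, which is why the announcer can't monitor off-air.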


And the security side of things is a pretty serious issue; just ask a 
major IP STL appliance vendor about the recent hijacking of some of 
their customers' IP STL devices... yeah, a random intruder on the 
internet hijacked several radio stations' IP STL's and began 
broadcasting their content over the radio.  Not pretty.  I advise any of 
my remaining broadcast clients that if they are going to an IP STL that 
they put in a dedicated point to point IP link without publicly routable 
IP addresses.


Digital audio for broadcast STL's is old tech; we were doing G.722/G.723 
over switched-56 in the early 90's.  But using a public-facing internet 
connection with no firewalling for an IP STL appliance like the Barix 
boxes and the Tieline boxes and similar? That borders on networking 
malpractice.


... But, to try to return to "relevant for NANOG", there are actual 
products requiring microsecond precision being sold. And used. And 
we've found that those products don't have a very good holdover. ... 

Television broadcast is another excellent example of timing needs; thanks.

Valdis mentioned the scariest thing... the scariest thing I've seen 
recently?  Windows NT 3.5 being used for a transmitter control system, 
within the past five years.  I will agree with Valdis on the scary 
aspects of the public safety communications Mel mentioned. Thanks, Mel, 
for the educational post.




Re: Cost-effectivenesss of highly-accurate clocks for NTP

2016-05-16 Thread Lamar Owen

On 05/15/2016 01:05 PM, Eric S. Raymond wrote:
I'm not used to thinking of IT as a relatively low-challenge environment! 


I actually changed careers from broadcast engineering to IT to lower my 
stress level and 'personal bandwidth challenge.'  And, yes, it worked.  
In my case, I'm doing IT for radio and optical astronomy, and at least 
the timing aspect is higher-challenge than most IT environments.


You're implicitly suggesting there might be a technical case for 
replacing these T1/T3 trunks with some kind of VOIP provisioning less 
dependent on accurate time synch. Do you think that's true? 


While I know the question was directed at Mel specifically, I'll just 
say from the point of view of a T1 voice trunk customer that I hope to 
never see it go to a VoIP solution.  VoIP codecs can have some serious 
latency issues; I already notice the round-trip delay if I try to carry 
on a conversation between our internal VoIP system and someone on a cell 
phone.  And this is before codec artifacting (and cascaded codec 
scrambling) is counted.  Can we please keep straight μ-law (A-law if 
relevant) lossless DS0 PCM timeslices for trunklines so we get at least 
one less lossy codec cascade?  Or have you never experimented with what 
happens when you cascade G.722 with G.729 with G.726 and then G.711 and 
back?  Calls become mangled gibberish.
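The cascade point can be illustrated with the standard G.711 μ-law companding algorithm itself (the single hop I'm arguing for): the first encode/decode pass costs a small quantization error, but re-encoding that output is then exact, so error does not keep compounding the way it does when different codecs are chained.  A minimal sketch, intended to match the common reference algorithm:

```python
# Minimal G.711 mu-law companding sketch (standard algorithm) showing
# that the first pass quantizes but further mu-law round-trips of its
# own output are exact -- unlike a cascade of different lossy codecs.

BIAS, CLIP = 0x84, 32635

def mulaw_encode(pcm):
    """16-bit linear PCM sample -> 8-bit mu-law code."""
    sign = 0x80 if pcm < 0 else 0
    pcm = min(abs(pcm), CLIP) + BIAS
    exponent, mask = 7, 0x4000
    while exponent > 0 and not pcm & mask:
        exponent -= 1
        mask >>= 1
    mantissa = (pcm >> (exponent + 3)) & 0x0F
    return ~(sign | exponent << 4 | mantissa) & 0xFF

def mulaw_decode(code):
    """8-bit mu-law code -> 16-bit linear PCM sample."""
    code = ~code & 0xFF
    exponent = (code >> 4) & 0x07
    pcm = ((((code & 0x0F) << 3) + BIAS) << exponent) - BIAS
    return -pcm if code & 0x80 else pcm

sample = 1000
once = mulaw_decode(mulaw_encode(sample))   # small quantization error
twice = mulaw_decode(mulaw_encode(once))    # ...but no further loss
print(sample, once, twice)  # 1000 988 988
```

Chain that same sample through G.722, G.729, and G.726 instead and each stage re-quantizes on a different grid; that is the mangled-gibberish mechanism.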


I would find it interesting to see how many carriers are still doing 
large amounts of SONET, as that is the biggest use-case for 
high-stability timing.




Re: Cost-effectivenesss of highly-accurate clocks for NTP

2016-05-14 Thread Lamar Owen

On 05/13/2016 03:39 PM, Eric S. Raymond wrote:
Traditionally dedicated time-source hardware like rubidium-oscillator 
GPSDOs is sold on accuracy, but for WAN time service their real draw 
is long holdover time with lower frequency drift that you get from the 
cheap, non-temperature-compensated quartz crystals in your PC. There 
is room for debate about how much holdover you should pay for, but 
you'll at least be thinking more clearly about the problem if you 
recognize that you *should not* buy expensive hardware for accuracy. 
For WAN time service, in that price range, you're either buying 
holdover and knowing you're doing so or wasting your money. 


Eric,

Thanks for the pointers; nice information.

A cheap way to get a WAN frequency standard is to use a WAN that is 
delivered over something derived from the telco's synchronous network; a 
POS on an OC3 with the clock set to network has an exceptionally stable 
frequency standard available.  Less expensive, get a voice T1 trunk 
delivered (robbed-bit signaled will typically be less expensive than 
PRI) and grab clock from that; tariffs for RBS T1/fractional T1 around 
here at least are less than an analog POTS line.  Very stable.  The 
plesiochronous digital hierarchy on copper or synchronous digital 
hierarchy/SONET on fiber have cesium clocks behind them, and you can get 
that stability by doing clock recovery on those WAN circuits.  Back when 
this was the most common WAN technology frequency standards were there 
for the taking; Ethernet, on the other hand, not so much.


But a nice catch on using the isochronous nature of USB.  Cheap webcams 
also take advantage of the isochronous transfer mode.  Do note that 
isochronous is often not supported in USB-passthrough for 
virtualization, though.  But you shouldn't use a VM to do timing, 
either. :-)


Now I'm looking for myself one of those Navisys devices you 
mentioned... do any of them have external antenna inputs, say on an 
SMA connector (MCX is in my experience just too fragile) with a bias tee 
to drive phantom to an active antenna?  The quick search I did seemed to 
indicate that the three you mentioned are self-contained with their own 
smart antenna.  External antenna input would be required here, where we 
use timing-grade GPS antennas to feed our Z3816's.  But for straight 
1PPS and GPS timecode, dealing with the Z3816's complexity is overkill.


Thanks again for the info; looking forward to seeing how NTPsec develops.



Re: NIST NTP servers

2016-05-14 Thread Lamar Owen

On 05/13/2016 04:38 PM, Mel Beckman wrote:

But another key consideration beyond accuracy is the reliability of a server's 
GPS constellation view. If you can lose GPS sync for an hour or more (not 
uncommon in terrain-locked locations), the NTP time will go free-running and 
could drift quite a bit. You need an OCXO to minimize that drift to acceptable 
levels.
While this is drifting a bit off-topic for NANOG (and drifting into the 
topic range for time-n...@febo.com), I'll just add one more thing to 
this.  The holdover time (when the oscillator is free-running) is a very 
important consideration, especially, as you say, when terrain is an 
issue. For us it is even more important, as the 10MHz output from the 
timing rack is used as a site-wide frequency standard.  Of course, you 
never discipline a cesium PRS, but the rubidium secondary is disciplined 
by circuitry in the SSU2000.
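As a back-of-the-envelope illustration of why holdover matters: the time error accrued in free-run is roughly the fractional frequency offset times the holdover duration.  The stability figures below are order-of-magnitude assumptions for common oscillator classes, not any vendor's spec:

```python
# Back-of-the-envelope holdover drift: accumulated time error is
# approximately (fractional frequency offset) x (free-run duration).
# Offsets below are typical order-of-magnitude assumptions.

def holdover_error_ms(frac_freq_offset, holdover_s):
    """Time error in milliseconds for a constant fractional
    frequency offset over a free-run period in seconds."""
    return frac_freq_offset * holdover_s * 1000.0

DAY = 86400
for name, offset in [("plain XO ~1e-6", 1e-6),
                     ("TCXO     ~1e-7", 1e-7),
                     ("OCXO     ~1e-9", 1e-9),
                     ("Rb      ~1e-11", 1e-11)]:
    print(f"{name}: {holdover_error_ms(offset, DAY):12.5f} ms/day")
```

A plain crystal at 1 ppm free-runs away at roughly 86 ms per day, while an OCXO stays under a tenth of a millisecond; that gap is exactly what you're paying for when terrain can cost you GPS for an hour.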


Back in the days of common backbone delivery over SONET discussion of 
cesium standards would have been on-topic, as some SONET gear (Nortel 
Optera for instance) needs a master clock; especially if you were 
delivering channelized circuits or interfacing with customers and telcos 
with DS3 or even DS1 circuits or DS0 fractions within them.  Ethernet is 
far more forgiving.




Re: NIST NTP servers

2016-05-13 Thread Lamar Owen

On 05/13/2016 10:38 AM, Mel Beckman wrote:

You make it sound like TCXOs are rare, but they're actually quite common in 
most single board computers. True, you're probably not gonna find them in the 
$35 cellular-based SBCs, but since these temperature compensated oscillators 
cost less than a dollar each in quantity, they're quite common in most 
industrial SBCs for well under $100.


Correct, they're not rare in the industrial line (for that matter you 
can get TCXO-equipped RTL-SDR dongles, but that's not NTP-related).  
Something like a Transko TFC or TX-P or similar is enough for reasonable 
timing for basic purposes, and they're not expensive.  They're also not 
a stock item on the consumer-level SBC's either.  I looked at one of our 
half-dozen ODroid C2's, and the main processor clock, X3, is under the 
heatsink, so I can't see what part is being used.  X1 and X2 are 
outside, and it doesn't appear that they are TCXO modules, although I 
didn't use a magnifier to check the part number and might have made an 
error.


The Nicegear DS3231 RTC has a TCXO, and might be the best low-cost 
choice at $12 (need to have an RPi, ODroid, or similar on which to mount 
it).  It's not that TCXO's are rare or expensive, it's that they're not 
often considered to be important to accuracy in many circles.



An Ovenized XCO is absolutely not required for IT-grade NTP servers.


No, but it is for my purposes here.  But, as I said in my post:



You really have to have at least a temperature compensated quartz crystal 
oscillator (TCXO) to even begin to think about an NTP server, for anything but 
the most rudimentary of timing.  Ovenized quartz oscillators (OCXO) and 
rubidium standards are the next step up, ...


I was just saying that OCXO and Rb are just the next step up if you 
would like more stability, that's all.  For 'within a second' on a 
GPS-disciplined clock (even without the 1PPS signal) you wouldn't 
necessarily need a TCXO.  But that's what I meant by 'anything but the 
most rudimentary of timing.'  Rudimentary is down to the millisecond in 
my environment.  PTP takes you to the next level, and beyond that you 
don't use network timing but put a dedicated time distribution network 
running IRIG-B or similar.  But that is beyond the scope of a typical IT 
NTP server's needs.




Re: NIST NTP servers

2016-05-13 Thread Lamar Owen

On 05/11/2016 09:46 PM, Josh Reynolds wrote:

maybe try [setting up an NTP server] with an odroid?


...

I have several ODroid C2's, and the first thing to note about them is 
that there is no RTC at all.  Also, the oscillator is just a 
garden-variety non-temperature-compensated quartz crystal, and not 
necessarily a very precise one, either (precise quartz oscillators can 
cost more than the whole ODroid board costs).  The XU4 and other ODroid 
devices make nice single-board ARM computers, but have pretty ratty 
oscillator precision.


You really have to have at least a temperature compensated quartz 
crystal oscillator (TCXO) to even begin to think about an NTP server, 
for anything but the most rudimentary of timing.  Ovenized quartz 
oscillators (OCXO) and rubidium standards are the next step up, and most 
reasonably good GPS-disciplined clocks have at least an ovenized quartz 
oscillator module (the Agilent Z3816 and kin are of this type).




Re: NIST NTP servers

2016-05-11 Thread Lamar Owen

On 05/11/2016 07:46 AM, Baldur Norddahl wrote:
But would you not need to actually spend three times $300 to get a 
good redundant solution?


While we are there, why not go all the way and get a rubidium standard 
with GPS sync? Anyone know of a (relatively) cheap solution with NTP 
output?


Ebay has several Symmetricom, Microsemi, Datum, Spectracom, and even 
Agilent solutions for prices from a few hundred US$ to a couple of 
thousand US$.  Even something like the Agilent Z3801, Z3805, or Z3816 
can be found for a few hundred US$.   New, these things are in the 
$10,000+ territory.  About the same range as mid-range ethernet gear.


I like our SSU2000, personally.



Re: NIST NTP servers

2016-05-11 Thread Lamar Owen

On 05/11/2016 12:05 AM, Joe Klein wrote:

Is this group aware of the incident with tock.usno.navy.mil &
tick.usno.navy.mil on November 19. 2012 2107 UTC, when the systems lost 12
years for the period of one hour, then return?


...

I remember it like it was only four years ago... oh, wait

We have multiple sync sources ourselves, with a Symmetricom (formerly 
Datum) SSU2000 setup with a cesium PRS, a rubidium secondary, and an 
ovenized quartz for tertiary oscillators.  SSU2000 architecture is 
separate control and data planes, with time-sync on a different 
interface from the LAN-facing NTP NIC.  And the control plane is 
firewalled off from the main LAN.  An Agilent (now Symmetricom) Z3816 is 
secondary.


PC and SBC (RasPi, etc) oscillators are just not accurate enough for 
Stratum 1 standards; at best stratum 3 or 4, even when directly 
GPS-disciplined (stratum is NOT just a synonym for 'level' as a 
particular stratum really has stability, precision, and accuracy 
requirements).  WWV plus GPS; GPS, as you may or may not be aware, is 
spoofable and is not as accurate as one might want.  Neither is WWV.


Good reference for time-nuts is, well, the 'time-n...@febo.com' mailing 
list.


(We're a radio astronomy observatory; accurate time and frequency 
standards are a must here, especially as the position accuracy of radio 
telescopes approaches tens of arcseconds.)




Re: NANOG list attack

2015-10-29 Thread Lamar Owen

On 10/26/2015 03:17 PM, Larry Blunk wrote:

   As Job Snijders (a fellow Communications Committee member) noted
in an earlier post, we will be implementing some additional protection
mechanisms to prevent this style of incident from happening again. We
will be more aggressively moderating posts from addresses who have
not posted recently, in addition to other filtering mechanisms.

For what it's worth, while I did see all of these that made it through 
the list itself, the larger portion that I saw did not come through the 
list but were sent directly to me, and the Received header trail shows 
that those did not come through the nanog mailman.  So I applaud what 
you do with the list itself, but it wouldn't have made (and won't make, 
in the future) much difference, since e-mails were sent out bypassing 
the list server.


And thanks for this note.



Re: Ear protection

2015-09-23 Thread Lamar Owen

On 09/23/2015 10:09 AM, Keith Stokes wrote:

Since I’m in our colo facility this morning, I decided to put some numbers on 
it in my little isolated corner with lots of blowers running.

According to my iPhone SPL meter, average SPL is 81 - 82 dB with peaks 88 - 89 
dB.


With SPL that close to the recommended maximum, the accuracy of the SPL 
measurement is rather critical.  I would not trust my smartphone's mic 
to have sufficient accuracy to protect my hearing unless it is 
calibrated to a known source SPL using pink noise of a particular 
weight.  The calibration SLM should be a 'real' SLM, such as a Bruel & 
Kjaer Type 2250 or similar with proper transducers.  (Yes, I know, a B&K 
2250 will set you back nearly $4K, but, just what is your hearing 
worth?  A pair of hearing aids will set you (or your insurance company 
at least) back $4K too).  I used a vintage B&K transducer with a 
custom-built SLM-rated spec-an years ago at a local manufacturer's sound 
testing lab; the manufacturer makes ballasts and luminaires for HID 
lighting, and measuring ballast noise is a big deal.  But reasonably 
accurate SLM's are available for less than $500 (some are available for 
less than $100, but you get what you pay for).
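For reference, the NIOSH recommended exposure limit works out to T = 8 / 2^((L − 85)/3) hours (85 dBA criterion level, 3 dB exchange rate); that formula is published, and the SPL figures plugged in below are the ones quoted earlier in the thread:

```python
# Permissible daily exposure under the NIOSH recommended exposure
# limit: 85 dBA criterion, 3 dB exchange rate.  The formula is the
# published NIOSH one; the SPLs are the figures from the thread.

def niosh_hours(spl_dba, criterion=85.0, exchange=3.0):
    """Permissible daily exposure time in hours at a given A-weighted
    sound pressure level."""
    return 8.0 / 2 ** ((spl_dba - criterion) / exchange)

print(round(niosh_hours(82), 1))  # 16.0 h: the 81-82 dB average is OK
print(round(niosh_hours(89), 1))  # 3.2 h: the 88-89 dB peaks are not,
                                  # for a full shift on the floor
```

Which is the whole point about measurement accuracy: a 3 dB meter error halves or doubles the permissible time.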


The particular whine of high-speed fans is a known risky noise source, 
particularly as white noise, due to the high frequency content (140dB SPL 
at 45Hz is not as harmful as 140dB at 3kHz or 15kHz, due to the outer 
ears' acting as waveguide-beyond-cutoff attenuators, and cavity 
resonators, too, for that matter).  Spinning drives are no better, 
particularly 15k RPM drives.


If it's at all uncomfortable, wear the earplugs.  You're already having 
to shout to be heard anyway.




Re: Rasberry pi - high density

2015-05-13 Thread Lamar Owen

On 05/11/2015 06:50 PM, Brandon Martin wrote:


8kW/rack is something it seems many a typical computing oriented 
datacenter would be used to dealing with, no?  Formfactor within the 
rack is just a little different which may complicate how you can 
deliver the cooling - might need unusually forceful forced air or a 
water/oil type heat exchanger for the oil immersion method being 
discussed elsewhere in the thread.


You still need giant wires and busses to move 800A worth of current. ...


This thread brings me back to 1985, what with talk of full immersion 
cooling (Fluorinert, anyone?) and hundreds of amps at 5VDC... reminds 
me of the Cray-2, which dropped 150-200KW in 6 rack location units of 
space; 2 for the CPU itself, 2 for space, and 2 for the cooling 
waterfall [ https://en.wikipedia.org/wiki/File:Cray2.jpeg by referencing 
floor tile space occupied and taking 16 sq ft (four tiles) as one RLU 
].  Each 'stack' of the CPU pulled 2,200A at 5V [source: 
https://en.wikipedia.org/wiki/Cray-2#History ].  At those currents you 
use busbar, not wire.  Our low-voltage (120/208V three-phase) switchgear 
here uses 6,000A rated busbar, so it's readily available, if expensive.




Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-09 Thread Lamar Owen

On 05/08/2015 02:53 PM, John Levine wrote:

...
Most of the traffic will be from one node to another, with
considerably less to the outside.  Physical distance shouldn't be a
problem since everything's in the same room, maybe the same rack.

What's the rule of thumb for number of hosts per switch, cascaded
switches vs. routers, and whatever else one needs to design a dense
network like this?  TIA

You know, I read this post and immediately thought 'SGI Altix'... 
scalable to 512 CPU's per system image and 20 images per cluster 
(NASA's Columbia supercomputer had 10,240 CPUs in that 
configuration... twelve years ago, using 1.5GHz 64-bit RISC CPUs 
running Linux... my, how we've come full circle (today's equivalent 
has less power consumption, at least)).  The NUMA technology in 
those Altix CPU's is a de-facto 'memory-area network' and thus can have 
some interesting topologies.


Clusters can be made using nodes with at least two NICs in them, and no 
switching.  With four or eight ports you can do some nice mesh 
topologies.  This wouldn't be L2 bridging, either, but a L3 mesh could 
be made that could be rather efficient, with no switches, as long as you 
have at least three ports per node, and you can do something reasonably 
efficient with a switch or two and some chains of nodes, with two NICs 
per node.  L3 keeps the broadcast domain size small, and broadcast 
overhead becomes small.


If you only have one NIC per node, well, time to get some seriously 
high-density switches... but even then how many nodes are going to be 
per 42U rack?  A top-of-rack switch may only need 192 ports, and that's 
only 4U, with 1U 48 port switches. 8U you can do 384 ports, and three 
racks will do a bit over 1,000.  Octopus cables going from an RJ21 to 
8P8C modular are available, so you could use high-density blades; Cisco 
claims you could do 576 10/100/1000 ports in a 13-slot 6500.  That's 
half the rack space for the switching.  If 10/100 is enough, you could 
do 12 of the WS-X6196-21AF cards (or the RJ-45 'two-ports-per-plug' 
WS-X6148X2-45AF) and get in theory 1,152 ports in a 6513 (one SUP; drop 
96 ports from that to get a redundant SUP).


Looking at another post in the thread, these moonshot rigs sound 
interesting... 45 server blades in 4.3U.  4.3U?!?!?  Heh, some custom 
rails, I guess, to get ten in 47U.  They claim a quad-server blade, so 
1,800 servers (with networking) in a 47U rack.  Yow.  Cost of several 
hundred thousand dollars for that setup.


The effective limit on subnet size would of course be broadcast 
overhead; 1,000 nodes on a /22 would likely be painfully slow due to 
broadcast traffic alone.
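The /22 sizing checks out with the stdlib ipaddress module (the prefix below is just an example network for the sketch):

```python
# Sanity check on the /22 figure: just over 1,000 usable hosts,
# i.e. a ~1,000-node cluster fills one broadcast domain.
import ipaddress

net = ipaddress.ip_network("10.0.0.0/22")  # example prefix, assumed
usable = net.num_addresses - 2             # minus network + broadcast
print(net.num_addresses, usable)           # 1024 1022
```

Every ARP or other broadcast frame interrupts all ~1,000 hosts at once, which is the overhead being traded away by the L3-mesh designs above.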




Re: FCC releases Open Internet document

2015-03-12 Thread Lamar Owen

On 03/12/2015 12:13 PM, Bryan Tong wrote:
I read through the introduction. This document seems like a good thing 
for everyone.



I'm about 50 pages in, reading a little bit at a time.  Paragraph 31 is 
one that anyone who does peering or exchanges should read and 
understand.  I take it to mean something like 'Guys who abuse peering 
and engage in peering disputes, take note of what we just did to the 
last mile people; you have been warned.'  But, having read Commission 
RO's before on the broadcast (Media Bureau) side of the house for 
years, maybe I'm a bit cynical.


Paragraphs 37 through 40, including footnotes, appear tailored as a 
reply to Verizon's creative Morse reply.  It's impossible to know which 
was first, but it is an interesting thought.


The 'Verizon Court' is mentioned numerous times.  Paragraph 43 and 
footnote 40 mention the 'Brand X' decision of the Supreme Court, 
mentioning that that decision left open the reclassification avenue.  
This could cause any legislation that attempted to thwart this RO to 
eventually be ruled unconstitutional, citing Brand X.  Prior to reading 
this RO I wasn't familiar with this decision, so I've already learned 
something new... and I think the reference in paragraph 43, footnote 
41, is rather interesting as well.  And Justice Scalia's pizza delivery 
analogy makes a humorous (in the political context!) appearance.  
Delightful.


Paragraphs 60 through 74 give a concise history of the action, and are a 
great read.  And it also shows me that I should have paid a bit closer 
attention to the Part 8 I read a few days back; that's the part 8 from 
the RO of 2010; the part 8 as of today in the eCFR has not been updated 
with the new sections, including 8.18.  So the rules as set into place 
by this RO were not public earlier; I stand corrected.


Paragraphs 78 through 85 and associated footnotes (I found footnote 131 
particularly relevant) state in a nutshell why the FCC thought that this 
action had to be taken.  And I am just in awe of the first sentence of 
paragraph 92.  And paragraph 99 is spot-on on wireless carrier switching 
costs.


One of the more interesting side effects of this is that it would appear 
that a mass-market BIAS (the FCC's term, not mine, for Broadband 
Internet Access Service) provider cannot block outbound port 25 (RO 
paragraph 105 and footnote 241 in particular). Well, it does depend upon 
what Paragraph 113 means about a device that 'does not harm the network.'


Whether you agree with the RO or not, I believe that you will find it a 
very readable document.  Some will no doubt strongly disagree.






Re: Unlawful transfers of content and transfers of unlawful content

2015-03-12 Thread Lamar Owen

On 03/12/2015 04:58 PM, Donald Kasper wrote:



More then website blocking I've been wondering what this means for 
spam prevention?


That's a pretty interesting thought, and it is pretty well addressed by 
paragraphs 376, 377, and 378.  Basically, the FCC found that spam 
blocking is a separate add-on information service.  It may be that the 
consumer now must opt-in to that service after clear disclosure of what 
the service entails.  The FCC even found that DNS is not an information 
service (paragraphs 366-371), and the argument is compelling.  This 
Commission is not technically illiterate, that's for sure, whether you 
agree with the RO or not.




Re: FCC releases Open Internet document

2015-03-12 Thread Lamar Owen

On 03/12/2015 02:02 PM, Rob McEwen wrote:
Nevertheless, in such a circumstance, 47 USC 230(c)(2) should prevail 
and trump any such interpretation of this!


(If anyone thinks that 47 USC 230(c)(2) might not prevail over such an 
interpretation, please let me know... and let me know why?)


Found it; paragraph 532 addresses 230(c).  In a nutshell, the 
applicability does not change due to the reclassification of BIAS 
providers as telecommunications services.




Re: FCC releases Open Internet document

2015-03-12 Thread Lamar Owen

On 03/12/2015 10:58 AM, Ca By wrote:

For the first time to the public
http://transition.fcc.gov/Daily_Releases/Daily_Business/2015/db0312/FCC-15-24A1.pdf



The actual final rules are in Appendix A, pages 283 through 290 (8 
pages), although that's a bit misleading, as the existing Part 8 is not 
included in full in that Appendix.  There are also three amendments to 
Part 20, in the Definitions, which means other paragraphs of 
Part 20 may apply.


It's interesting that pages 321 through 400 (80 pages) are taken up 
entirely by the dissenting Commissioner's statements, and Tom Wheeler's 
statement begins on page 314.


This will indeed be an interesting read.



Unlawful transfers of content and transfers of unlawful content (was:Re: Verizon Policy Statement on Net Neutrality)

2015-03-12 Thread Lamar Owen

On 02/27/2015 02:14 PM, Jim Richardson wrote:

What's a lawful web site?

Paragraphs 304 and 305 in today's released RO address some of this.  
The wording 'Unlawful transfers of content and transfers of unlawful 
content' is pretty good, and covers what the Commission wanted to cover.




Re: FCC releases Open Internet document

2015-03-12 Thread Lamar Owen

On 03/12/2015 01:28 PM, Lamar Owen wrote:

On 03/12/2015 12:13 PM, Bryan Tong wrote:
I read through the introduction. This document seems like a good 
thing for everyone.



I'm about 50 pages in, reading a little bit at a time.  Paragraph 31 
is one that anyone who does peering or exchanges should read and 
understand.  I take it to mean something like 'Guys who abuse peering 
and engage in peering disputes, take note of what we just did to the 
last mile people; you have been warned.'  But, having read Commission 
RO's before on the broadcast (Media Bureau) side of the house for 
years, maybe I'm a bit cynical.
Another 40 pages, and found the detailed paragraphs related to this 
introductory paragraph.  Those here who know how peering works in the 
real world, read paragraphs 194 through 206 of this RO, including 
footnotes, and see if the FCC 'gets it' when it comes to how peering works.




Re: FCC releases Open Internet document

2015-03-12 Thread Lamar Owen

On 03/12/2015 02:02 PM, Rob McEwen wrote:

On 3/12/2015 1:30 PM, William Kenny wrote:

NO BLOCKING:
A person engaged in the provision of broadband Internet access service,
insofar as such person is so engaged, shall not block lawful content,
applications, services, or nonharmful devices, subject to reasonable
network management.


The document (if I read it correctly) states that reasonable network 
management includes spam filtering (yeah!)


However, in spite of that... it seems to give the MISTAKEN impression 
that:


(1) ALL spam is ALWAYS... NOT-lawful content
(2) ALL lawful content is NEVER spam.




I think the issue is adequately addressed by the RO's paragraph 222 and 
its neighbors, with footnotes 571, 572, and 573 elucidating.  The short 
version: the FCC is not going to rigidly define this and leave it up to 
the providers, but they will address it on a case-by-case basis if need 
be.  At least that was my takeaway.


Nevertheless, in such a circumstance, 47 USC 230(c)(2) should prevail 
and trump any such interpretation of this!


(If anyone thinks that 47 USC 230(c)(2) might not prevail over such an 
interpretation, please let me know... and let me know why?)


It would seem, but I am not a lawyer, that perhaps it would.  It's not 
directly addressed in the portions of the RO that I've read thus far, 
and that specific paragraph is not cited that I could find.  A Good 
Samaritan law, right there in 47 USC.  Fun stuff.




Re: symmetric vs. asymmetric [was: Verizon Policy Statement on Net Neutrality]

2015-03-04 Thread Lamar Owen

On 03/03/2015 08:07 AM, Scott Helms wrote:

For consumers to care about symmetrical upload speeds as much as you're
saying why have they been choosing to use technologies that don't deliver
that in WiFi and LTE?
For consumers to have choice, there must be an available alternative 
that is affordable.




Re: symmetric vs. asymmetric [was: Verizon Policy Statement on Net Neutrality]

2015-03-02 Thread Lamar Owen

On 03/02/2015 03:31 PM, Owen DeLong wrote:

On Mar 2, 2015, at 08:28 , Lamar Owen lo...@pari.edu wrote:

...it would be really nice to have 7Mb/s up for just a minute or ten 
so I can shut the machine down and go to bed. 

How much of your downstream bandwidth are you willing to give up in order to 
get that?

Let’s say your current service is 10Mbps/512Kbps. Would you be willing to 
switch to 3Mbps/7Mbps in order to achieve what you want?

What about 5.25Mbps/5.25Mbps? (same total bandwidth, but split symmetrically)?


Any of those would be nice.  Nicer would be something adaptive, but 
that's a pipe dream, I know.  I'm aware of the technological limitations 
of ADSL, especially the crosstalk and power limitations, how the 
spectrum is divided, etc.


The difference between 10/.5 and 5.25/5.25 on the download would be 
minimal (half as fast); on the upload, not so minimal (ten times 
faster).  But even a 'less asymmetrical' connection would be better than 
a 20:1 ratio.  4:1 (with 10Mb/s aggregate) would be better than 20:1.




Re: symmetric vs. asymmetric [was: Verizon Policy Statement on Net Neutrality]

2015-03-02 Thread Lamar Owen

On 02/28/2015 05:46 PM, Mark Andrews wrote:

Home users should be able to upload a content in the same amount
of time it takes to download content.

This.

Once a week I upload a 100MB+ MP3 (that I produced myself, and for which 
I own the copyright) to a cloud server.  I have a reasonable ADSL 
circuit at home, but it takes quite a bit of my time to upload that one 
file.  Even if the average BW was throttled to 512k, it would be really 
nice to have 7Mb/s up for just a minute or ten so I can shut the machine 
down and go to bed.  Cloud services are becoming the choice for all 
kinds of content distribution, and there are more content creators out 
there than you might think who need to do exactly what I need to do.
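To put numbers on that upload pain, here is a rough back-of-the-envelope sketch, assuming an ideal link with no TCP or DSL framing overhead; the 512 kb/s and 7 Mb/s rates are the ones discussed in this thread:

```python
def upload_minutes(size_mb: float, rate_mbps: float) -> float:
    """Minutes to push size_mb megabytes through an ideal rate_mbps uplink."""
    return (size_mb * 8) / rate_mbps / 60  # MB -> megabits, then Mb / (Mb/s) / 60 s

# The weekly 100 MB MP3 on a typical ADSL uplink vs. a 7 Mb/s burst:
slow = upload_minutes(100, 0.512)  # roughly 26 minutes
fast = upload_minutes(100, 7.0)    # roughly 2 minutes
print(f"512 kb/s: {slow:.1f} min; 7 Mb/s: {fast:.1f} min")
```

Call it a factor of about fourteen; that is the difference between babysitting the machine and going to bed.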


Yes, I do remember the days of dialup, in particular I remember the 
quite interesting business model of free.org, which dramatically reduced 
my long distance bill that I had been paying to dial up Eskimo North 
(I'm in the Southeast US, incidentally).  And then we got dialup 
locally, and my old Okidata 9600 modem got a workout.


And, well, I still use my connection in much the same way as I used 
dialup, turning it off when I'm not using it.  I almost never leave it 
up all night; if my router isn't online it can't be used for malicious 
purposes, etc.  And, no, I have no alternatives to the ILEC's DSL here, 
as 3G/4G cell service simply doesn't get to my house (now on the ridge 
behind my house, great 4G bandwidth, but I'm down in a valley, and the 
shadowing algorithms show the story; I ran a Splat simulation from the 
cell tower site; across the creek from my house is the edge of one of 
the diffraction zones where good service can be found, and my house is 
in a deep null).


Thanks all for the interesting symmetry discussion; this has been enjoyable.



Re: content regulation, was Verizon Policy Statement on Net Neutrality

2015-03-02 Thread Lamar Owen

On 02/28/2015 07:33 PM, Jimmy Hess wrote:

On Sat, Feb 28, 2015 at 8:34 AM, John R. Levine jo...@iecc.com wrote:
[...]

Until yesterday, there were no network neutrality rules, not for spam or for
anything else.

There still aren't any network neutrality rules, until the FCC makes
the documents public, which they haven't yet.

The rules themselves are public.  The area of uncertainty is whether the 
Report and Order will pull in more rules than just the newly published 
47CFR§8.  For instance, there's 47CFR§6, which deals with 
'telecommunications' carriers and the ADA.


But as far as net neutrality is concerned, the actual rules dealing with 
the gist of it are embodied in 47CFR§8 Preserving the Open Internet.  
Link to the eCFR page on it was posted elsewhere on the list.




Re: Verizon Policy Statement on Net Neutrality

2015-02-28 Thread Lamar Owen

On 02/27/2015 04:49 PM, Stephen Satchell wrote:
So did I. Also, do you recall that the FCC changed the definition of 
broadband to require 25 Mbps downstream? Does this mean that all 
these rules on broadband don't apply to people providing Internet 
access service on classic ADSL?
The FCC regulations do not have to use consistent definitions (and many 
times definitions are not consistent!); the local-to-the-section 
definition usually (but not always; it's always up for interpretation at 
hearing time!) trumps any other.  The local definitions for the context 
of 47CFR§8 are found in §8.11, and do not mention required bandwidth.  
It seems to include any 'eyeball' network, regardless of bandwidth.  The 
definition in 47CFR§8.11(a) is classic FCC wordsmithing.


Think of 'scope of definition' as being similar to 'longest prefix 
matching' in routing, and it will be clear what is going on here. Hint: 
a particular section of the Rules can hijack a term out from under the 
general definitions, much like prefixes can be hijacked out from under 
their containing prefix.  The difference is that in the Rules, a 
particular paragraph or subparagraph can hijack a term and say 'for the 
purposes of this paragraph, term 'A' means the opposite of what it means 
everywhere else' and that definition in that scope will stand the test 
of hearing.
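The 'longest prefix matching' analogy can be sketched in a few lines. The scopes and definitions below are invented purely for illustration and are not taken from the actual Rules:

```python
# Term definitions scoped like route prefixes: the most specific
# (longest) matching scope wins, just as the longest matching prefix
# wins in a routing table.  All scopes/definitions here are made up.
definitions = {
    (): "general definition of the term",
    ("part8",): "Part 8's own definition",
    ("part8", "s8.11"): "Section 8.11's definition",
}

def lookup(scope):
    """Return the definition from the longest scope that prefixes `scope`."""
    best = max((s for s in definitions if scope[:len(s)] == s), key=len)
    return definitions[best]

print(lookup(("part8", "s8.11")))  # the section-level definition wins
print(lookup(("part20",)))         # falls back to the general definition
```

A section can 'hijack' the term simply by adding a longer-scoped entry, exactly like announcing a more-specific prefix.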





Re: One FCC neutrality elephant: disabilities compliance

2015-02-28 Thread Lamar Owen

On 02/27/2015 03:12 PM, Mel Beckman wrote:

Two pages? Read the news, man.


I'd rather read the actual regulations, from the source, in 47CFR§8.  
They're public.  The enforcement won't come from what the news said.


You say you haven't read the actual RO. Nobody in the public sector, 
or even in Congress AFAIK, has read it. The Order's 300-plus pages 
were never publicly released or openly debated.  This is another "you 
must pass it to see what's in it" debacle, without the luxury of 
having any semblance of democratic process or transparency.
The RO is not limited to just the text of the actual regulations.  The 
RO will include the discussion and the rationale behind the adopted 
rules, along with quotes from those who commented on the action, and 
further language, including the derivation of the regulatory authority.  
The actual regulation, much shorter than the RO, is already public, in 
47CFR§8.  The RO is the 'what' plus the 'why,' 'how,' and 'when' 
whereas the new section in 47CFR is just the 'what.'  It takes a lot 
more time to get the 'why,' 'how,' and 'when' into shape for publication 
than it does to get the 'what' into shape for publication. The 
enforcement will come from the 'what.'


This is standard, normal FCC procedure.  The NPRM was 99-plus pages, 
with proposed rules of two pages.  The RO is reported as being around 300 
pages, with actual adopted rules of about 8 pages (depending 
upon the font used; I took the eCFR version of 47CFR§8 and printed it to 
PDF, and that PDF ran 8 pages).  This is not unusual, and is something 
I've seen many times.  The process is quite transparent, just with 
greater latency than many people like, and you do need to know where to 
look, although the FCC has made it a lot easier to find stuff than it 
was a few years back.  The statement from the FCC spokesperson doesn't 
quote a length; we'll see how long it will be.  I personally look 
forward to reading it; FCC RO's tend to be better reading than the 
resulting sections in 47CFR, but when the EB knocks on your door they're 
going to hold you to 47CFR, not the establishing RO.


This is a lot better than the days where you had to subscribe to a 
service, like Pike and Fischer's, to get even the Daily Digest, much 
less up to the day copies of the CFR, like we now can have.  The latency 
for Commission actions is typically on the order of months; the NPRM's 
date is May 15, 2014.


You can see more into this by looking at the docket's page at 
http://apps.fcc.gov/ecfs/proceeding/view?name=14-28 .  There were over 2 
million filings in this docket, with almost 7,000 in the last 30 days 
alone.  I would imagine the first place to have the actual RO text will 
be the docket's page linked above; you can even follow it with its RSS 
feed and get it as soon as its released.




Re: Verizon Policy Statement on Net Neutrality

2015-02-28 Thread Lamar Owen

On 02/27/2015 02:14 PM, Jim Richardson wrote:

 From 47CFR§8.5(b):
(b) A person engaged in the provision of mobile broadband Internet
access service, insofar as such person is so engaged, shall not block
consumers from accessing lawful Web sites, subject to reasonable
network management; nor shall such person block applications that
compete with the provider's voice or video telephony services, subject
to reasonable network management.

What's a lawful web site?
That would likely be determined on a case-by-case basis during 
Commission review of a complaint, I would imagine, with each FCC 
document related to each case becoming part of the collection of 
precedent (whether said document is a NAL, NOV, or RO would be somewhat 
immaterial).  The obvious answer is 'a website that has no illegal 
content' but once something is brought to a hearing, what is 'obvious' 
doesn't really matter.


If you want to read about the types of rationale that can be used to 
determine terms like 'lawful' in this context, search through 
Enforcement Bureau actions relating to 47CFR§73.3999   Enforcement of 
18 U.S.C. 1464 (restrictions on the transmission of obscene and indecent 
material).  For more technical considerations, you might find the 
collection of precedent on what satisfies 47CFR§73.1300, 1350, and 1400 
to be more interesting reading, if you're into this sort of arcana.





Re: Verizon Policy Statement on Net Neutrality

2015-02-28 Thread Lamar Owen

On 02/28/2015 02:29 PM, Rob McEwen wrote:
For roughly two decades of having a widely-publicly-used Internet, 
nobody realized that they already had this authority... until suddenly 
just now... we were just too stupid to see the obvious all those 
years, right? 


Having authority and choosing to exercise it are two different things.  
Of course it was realized that they had this authority already; that's 
why these regulations were fought so strongly.


Nobody has refuted my statement that their stated intentions for use 
of this authority, and their long term application of that authority, 
could be frighteningly different.


It's impossible to refute such a vaguely worded supposition. Refuting a 
'could be' is like nailing gelatin to the wall, because virtually 
anything 'could be' even at vanishingly small probabilities.  I 'could 
be' given a million dollars by a random stranger tomorrow, but it's very 
unlikely.




FOR PERSPECTIVE... CONSIDER THIS HYPOTHETICAL: Suppose that the EPA 
was given a statutory power to monitor air quality (which is likely 
true, right)... decades later, a group of EPA officials have a little 
vote amongst themselves and they decide that they now define the air 
INSIDE your house is also covered by those same regulations and 
monitoring directives for outside air. 


Ok, I'll play along.  So far, a reasonable analogy, except that such an 
ex parte action (a 'little vote amongst themselves') wouldn't survive 
judicial review.  The FCC Commissioners didn't just 'have a little vote 
amongst themselves;' they held a complete, according to statute 
rulemaking proceeding.  That is what our elected representatives have 
mandated that the FCC is to do when decisions need to be made.


Therefore, to carry out their task of monitoring the air inside your 
home, they conduct random warrent-less raids inside your homes, thus 
violating your 4th amendment rights. 


This is where your analogy drops off the deep end.  The FCC will hear 
complaints from complainants who must follow a particular procedure and 
request specific relief after attempting to resolve the dispute by 
direct communication with the ISP in question.  There aren't any 'raids' 
provided for by the current regulation; have you ever heard of any raids 
from a Title II action previously?  There is no provision in the current 
regulation as passed for the FCC to do any monitoring; it's up to the 
complainant to make their case that the defendant violated 47CFR§8.  
This doesn't change the statute, just the regulations derived from the 
statute.


To go with your analogy, as part of the newly added powers of the EPA 
under your hypothetical, it would now be possible for a complainant, 
after attempting to satisfy an 'inside the building unclean air' 
complaint with a particular establishment but failing, and having to go 
through a significant procedure, to get the EPA to rule that the owner 
of that establishment must provide relief to the complainant or be 
fined.  No authority to raid, just authority to respond to complaints 
and fine accordingly.  Any change to that rule requires another 
rulemaking proceeding.


Before the FCC can change the wording to add any of your supposed power 
grab increases they will have to go through another full docket, with 
required public notices and the NPRM.  And the courts can throw it all out.


The FCC's primary power is economic, by fining.

I know that hypothetical example is even more preposterous than this 
net neutrality ruling... but probably not that much more! (in BOTH 
cases, the power grab involves an intrusion upon privately-owned 
space.. using a statute that was originally intended for public space)


The telecommunications infrastructure is in reality public space, not 
private, and has been for a really long time.  Or are there any 
physical-layer facilities that are not regulated in some way?  Let's 
see: 1.) Telephone copper and fiber?  Nope, regulated as a common 
carrier already.  2.) Satellite?  Nope, regulated.  3.) Wireless (3G, 
4G)?  Nope, regulated, and many of the spectrum auctions have strings 
attached, as Verizon Wireless found out last year.  4.) 2.4GHz ISM?  
Nope, regulated under §15 and subject to being further regulated.  5.) 
Municipal fiber?  Nope, it's public by definition. 6.) Point to point 
optical?  Maybe, but this is a vanishingly small number of links; I 
helped install one of these several years back. 7.) Point to point 
licensed microwave?  Nope, regulated; license required.


Even way back in NSFnet days the specter of regulation, in the form of 
discouragement of commercial traffic across the NSFnet, was present.  I 
don't understand why people are so surprised at this ruling; the 
Internet is becoming a utility for the end user; it's no longer a 
free-for-all in the provider space.




But the bigger picture isn't what the FCC STATES that they will do 
now.. it is what unelected FCC officials could do, with LITTLE 
accountability, in 

Re: Verizon Policy Statement on Net Neutrality

2015-02-28 Thread Lamar Owen

On 02/27/2015 02:58 PM, Rob McEwen wrote:

On 2/27/2015 1:28 PM, Lamar Owen wrote:
You really should read 47CFR§8.  It won't take you more than an hour 
or so, as it's only about 8 pages. 


The bigger picture is (a) HOW they got this authority--self-defining 
it in, and (b) the potential abuse and 4th amendment violations, not 
just today's foot in the door details!


How they got the authority is through the Communications Act of 1934, as 
passed and amended by our elected representatives in Congress, with the 
approval of our elected President.  The largest amendments are from 
1996, as I recall.  The specific citations are 47 U.S.C. secs. 151, 152, 
153, 154, 201, 218, 230, 251, 254, 256, 257, 301, 303, 304, 307, 309, 
316, 332, 403, 503, 522, 536, 548, and 1302 (that list is from the 
Authority section of §8 itself, and will be elaborated upon in the RO, 
likely with multiple paragraphs explaining why each of those enumerated 
sections of 47 USC apply here.  Commission RO's will typically spend a 
bit of time on the history of each relevant section, and it wouldn't 
surprise me in the least to see the Telecom Act of 1996 quoted there.).


It will be interesting to see how the judiciary responds, or how 
Congress responds, for that matter, as Congress could always amend the 
Communications Act of 1934 again (subject to Executive approval, of 
course).  In any case, the Report and Order will give us a lot more 
information on why the regulations read the way they do, and on how this 
authority is said to derive from the portions of the USC as passed by 
Congress (and signed by the President).  And at that point things could 
get really interesting.  Our governmental system of checks and balances 
at work.


In the same way, I don't like the BASIS for this authority... and what 
it potentially means in the long term... besides what they state that 
they intend to do with this new authority they've appointed themselves 
in the short term.


Had some people not apparently taken advantage of the situation as it 
existed before the proceeding in docket 14-28, it's likely no regulatory 
actions would have been initiated.


I'm not cheerleading by any means; I would much prefer less regulation 
than more in almost every situation; but the simple fact is that people 
do tend to abuse the lack of regulations long enough for regulatory 
agencies to take notice, and then everyone loses when regulations come.


As an extreme example of how onerous regulations could be, if the 
Commission were to decide to decree that all ISP's have to use ATM cells 
instead of variable length IP packets on the last mile, they actually do 
have the regulatory authority to set that standard (they did exactly 
this for AM Stereo in the 80's, for IBOC HD Radio, and then the ATSC DTV 
standard (it was even an unfunded mandate in that case), not to mention 
the standards set in §68 for equipment connected to the public switched 
telephone network, etc).  The FCC even auctioned off spectrum already in 
use by §15 wireless microphones and amended §15 making those wireless 
mics (in the 700MHz range) illegal to use, even though many are still 
out there. So it could be very much worse; this new section is one of 
the shortest sections of 47CFR I've ever read.  Much, much, simpler and 
shorter than my bread and butter in 47CFR§§11, 73, and 101.


Reading the RO once it is released will be very interesting, at least 
in my opinion, since we'll get a glimpse into the rationale and the 
thought processes that went into each paragraph and subparagraph of this 
new section in 47CFR.  I'm most interested in the rationale behind the 
pleading requirements, like requiring complainants to serve  the 
complaint by hand delivery on the named defendant, requiring the 
complainant to serve two copies on the Market Disputes Resolution 
Division of the EB, etc.   This seems to be a pretty high bar to filing 
a complaint; it's not like you can just fill out a form on the FCC 
website to report your ISP for violating 47CFR§8.  Heh, part of the 
rationale might be the fact that they got over 2 million filings on this 
docket...




Re: content regulation, was Verizon Policy Statement on Net Neutrality

2015-02-28 Thread Lamar Owen

On 02/28/2015 09:53 AM, Rich Kulawiec wrote:
...Spam, the slang term for unsolicited bulk email (UBE), is a form of 
denial-of-service attack and may/should be treated in the same way as 
other DoS attacks. ---rsk 
47CFR§8.11(d) Reasonable network management. A network management 
practice is reasonable if it is appropriate and tailored to achieving a 
legitimate network management purpose, taking into account the 
particular network architecture and technology of the broadband Internet 
access service.


Classic FCC wordsmithing, and seems to cover the DoS case nicely when 
you look through the rest of §8 for instances of the term 'reasonable 
network management.'


Yes, it is amorphous, ambiguous, and all those things regulations 
usually are.  (Like being up for interpretation.)  It remains to be seen 
whether §8 will survive appeal.





Re: Verizon Policy Statement on Net Neutrality

2015-02-27 Thread Lamar Owen

On 02/27/2015 09:05 AM, Larry Sheldon wrote:
http://publicpolicy.verizon.com/blog/entry/fccs-throwback-thursday-move-imposes-1930s-rules-on-the-internet 



Cute.  Obviously they never watched the Leno segment where a pair of 
amateur radio ops using Morse code outperformed a couple of teens using 
texting, in terms of speed of communications.




Re: Verizon Policy Statement on Net Neutrality

2015-02-27 Thread Lamar Owen

On 02/27/2015 09:50 AM, Rob McEwen wrote:
btw - does anyone know if that thick book of regulations, you know... 
those hundreds of pages we weren't allowed to see before the vote... 
anyone know if that is available to the public now? If so, where?
You were allowed to see the proposed rules in the NPRM's appendix A.  
The RO will state which of those were adopted, which were reconsidered 
after reading the public comments, etc.  Watch docket 14-28 and when the 
RO (or MRO maybe) is released you'll be able to read that.  The RO 
will contain a pointer to which section of 47CFR the rules will be in, 
and you can get those from multiple places.  The easiest way is through 
eCFR (www.ecfr.gov), a part of the GPO, which publishes all these sorts 
of things.


Now, the RO isn't available yet, but the regs themselves are. Check out 
47CFR§8.1-17, already available through the eCFR.  Here's a link:

http://www.ecfr.gov/cgi-bin/text-idx?SID=3f0ad879cf046fa8e4edd14261ef70f2&node=pt47.1.8&rgn=div5

That has got to be the smallest full section of 47CFR I've ever read.



Re: One FCC neutrality elephant: disabilities compliance

2015-02-27 Thread Lamar Owen

On 02/27/2015 01:06 PM, Mel Beckman wrote:

Section 255 of Title II applies to Internet providers now, as does section 225 
of the Americans with Disabilities Act (ADA).
These regulations are found in 47CFR§6, not 47CFR§8, which is the 
subject of docket 14-28.


I haven't read the actual RO in docket 14-28, so I'm basing the following 
statements on the NPRM instead.  Since the NPRM had 47CFR§8 limited to 
47CFR§8.11, and the actual amendment goes to 47CFR§8.17, the adopted 
rules are different than originally proposed.  You can read the proposed 
regulations yourself in FCC 14-61 ( 
http://apps.fcc.gov/ecfs/document/view?id=7521129942 ) pages 66-67.  
Yes, two pages.  The actual regulations are a bit, but not much, longer.


47CFR§6 was already there before docket 14-28 came about.



Re: Verizon Policy Statement on Net Neutrality

2015-02-27 Thread Lamar Owen

On 02/27/2015 01:19 PM, Rob McEwen wrote:
We're solving an almost non-existing problem.. by over-empowering an 
already out of control US government, with powers that we can't even 
begin to understand the extend of how they could be abused... to fix 
an industry that has done amazingly good things for consumers in 
recent years.


You really should read 47CFR§8.  It won't take you more than an hour or 
so, as it's only about 8 pages.


The procedure for filing a complaint is pretty interesting, and requires 
the complainant to do some pretty involved things. (47CFR§8.14 for the 
complaint procedure, 47CFR§8.13 for the requirements for the pleading, 
etc).  Note that the definitions found in 47CFR§8.11(a) and (b) are 
pretty specific in who is actually covered by 'net neutrality.'




Re: Linux: concerns over systemd adoption and Debian's decision to switch [OT]

2014-10-27 Thread Lamar Owen

On 10/25/2014 04:55 PM, Matthew Petach wrote:
Completely agree on this point--but I fail to see why it has to be one 
or the other? Why can't systemd have a --text flag to tell it to 
output in ascii text mode for those of us who prefer it that way? 
It still logs to syslog, and syslog can still log to text.  Systemd 
certainly writes a nice text /var/log/messages on my CentOS 7 system.


There is also a --log-target command line option, where there are 
several possible targets.


Further, the binary log is generated by journald, not by systemd itself, 
which can log directly to syslog without using the binary journal (see: 
http://fitzcarraldoblog.wordpress.com/2014/09/20/change-systemds-binary-logging-to-text-logging-in-sabayon-linux/ 
for how to do this in one particular Linux distribution, Sabayon).
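For the record, keeping logging in classic text form amounts to a two-line configuration change; this is a sketch of /etc/systemd/journald.conf using current journald option names (exact behavior varies a bit by systemd version and distribution defaults):

```ini
# /etc/systemd/journald.conf -- sketch: keep the journal out of
# persistent binary storage and hand everything to a text syslog daemon.
[Journal]
Storage=volatile        # or 'none' to skip the binary journal entirely
ForwardToSyslog=yes     # rsyslog/syslog-ng keep writing /var/log/messages
```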


The more I dig into systemd, the less I dislike it.  I'm still not 
thrilled, but it's not as bad as I first heard it was going to be.


Re: Linux: concerns over systemd [OT]

2014-10-27 Thread Lamar Owen

On 10/27/2014 11:35 AM, Jay Ashworth wrote:
I will counter with you wouldn't be running a real distro in that 
case anyway; you'd be running something super trimmed down, and 
possibly custom built, or based on something like CoreOS, that only 
does one job. Well. 


Hmm, now this one I wasn't aware of.  This tidbit alone has made this 
thread worthwhile to me, as we work on developing some clustered 
'things' for use here.  CoreOS wasn't even on the 'look at this at 
some point in time' list before, but it is now.  Thanks, Jay.




Re: Linux: concerns over systemd adoption and Debian's decision to switch [OT]

2014-10-24 Thread Lamar Owen

On 10/24/2014 03:35 AM, Tei wrote:

I pled the Linux people to stay inside the unix philosophy to use text files.


You do realize that the systemd config files are still text, right? As 
to the binary journal, well, by default RHEL 7 (and rebuilds) do at 
least mirror the journal output to syslog, so /var/log/messages and 
friends are still there, in plain text.  I just verified this on my 
CentOS 7 evaluation server; yep, /var/log/messages and friends still 
there and still being used.


As to systemd being a big binary, well, the typical initscript is being 
run by a binary also, even if it is somewhat smaller, and, as Shellshock 
showed, that still has an attack surface.


The systemd config files are much easier to understand than the typical 
initscript (and since the 'functions' most distributions provide are 
directly sourced, you need to include that code as well) is, by a very 
large margin.
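As a concrete contrast, here is roughly what a unit file replacing a hundred-line initscript looks like; the daemon name and paths are hypothetical, shown only to illustrate the declarative style:

```ini
# /etc/systemd/system/example.service -- hypothetical unit; all names
# are made up.  Compare this with the equivalent shell initscript.
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/sbin/exampled --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```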


I'm not thrilled by this change, but after stepping back and looking 
over all the various systems I've dealt with over the last 25+ years 
it's honestly not as big of a change as some of the things I've seen 
(and my experiences include VMS and a number of Unix variants, including 
Xenix, Irix, SunOS/Solaris, and Domain/OS. And don't get me started on 
the various CLI's for various switch and router vendors, or I'll throw 
some Proteon gear your way.).  And while I should be able to enjoy a 
better desktop experience (I have used Linux as my primary desktop for 
17 years), I can also see the server-side uses for the systemd approach, 
most of which have to do with highly dynamic cloud-style systems (and 
I'm thinking private cloud, not public).  I can see how being 
load-responsive and rapidly spinning up compute resources as needed and 
for only as long as needed could help reduce my cost of power; spread 
out to millions of servers (like Google or Facebook) and the energy 
savings could be very significant.  Much like how package delivery 
companies plan routes to use only right-hand turns to save megabucks per 
year on fuel costs.




Re: Linux: concerns over systemd adoption and Debian's decision to switch [OT]

2014-10-23 Thread Lamar Owen

On 10/22/2014 03:51 PM, Barry Shein wrote:

I wish I had a nickel for every time I started to implement something
in bash/sh, used it a while, and quickly realized I needed something
like perl and had to rewrite the whole thing.


Barry, you've been around a long time, and these words are pearls of wisdom.

This seems to be the reasoning behind replacing the spaghetti known as 
'initscripts' currently written in sh with something written in a real 
programming language.  Upstart and systemd are both responses to the 
inflexible spaghetti now lurking in the initscript system, of which the 
pile steaming in /etc/init.d is but a part.



Sure, one can insist on charging forward in sh but at some point it
becomes, as Ken Thompson so eloquently put it on another topic
entirely, like kicking a dead whale down the beach.

This seems to be the attitude of the systemd developers, although 
they're not as eloquent as Thompson in saying it.  But I remember being 
just as abrasive, just on a smaller scale, myself...


Now, I've read the arguments, and I am squarely in the 'do one thing and 
do it well' camp.  But, let's turn that on its head, shall we? Why oh 
why do we want every single package to implement its own initscript and 
possibly do it poorly?  Wouldn't it be a case of 'do one thing well' if 
one could do away with executable shell code in each individual package 
that is going to run as root?  Wouldn't it be more 'do one thing well' 
if you had a 'super' inetd setup that can start services in a better way 
than with individually packaged (by different packagers in most cases) 
shell scripts that are going to run as root?  Shouldn't the 
initialization logic, which is 'one thing that needs doing,' be in one 
container and not thousands?


Now, I say that having been a packager for a largish RPM package several 
years ago.  I waded through the morass of the various packages' initscripts; 
each packager was responsible for their own script, and it was a big 
mess with initscripts doing potentially dangerous things (mine included; 
to clear it up, I maintained the PostgreSQL packages for the PostgreSQL 
upstream project from 1999 to 2004). Ever since 1999 there have been 
issues with distributed initialization logic (that runs as *root* no 
less) under hundreds of different packagers' control.  It was and is a 
kludge of the kludgiest sort.


Having a single executable program interpret a thousand config files 
written by a hundred packagers is orders of magnitude better, 
security-wise, than having thousands of executable (as *root*) scripts 
written by hundreds of different packagers, in my experienced opinion.  
If anything, having all initialization executable code in one monolithic 
package very closely monitored by several developers (and, well, for 
this purpose 'developers with attitudes' might not be the worst thing in 
the world) is more secure.  It *is* a smaller attack surface than the 
feeping creaturism found in the typical /etc/init.d directory.  And 
Barry's pearl of wisdom above most definitely applies to /etc/rc.sysinit 
and its cousin /etc/rc.local.


Now, as much as I dislike this magnitude of change, it seems to me that 
systemd actually is more in line with the traditional Unix philosophy 
than the current initialization system is.  But I always reserve the 
right to be wrong.  And I am definitely not fond of the attitudes of the 
various systemd developers; systemd assuredly has its shortcomings.  But 
it *is* here to stay, at least in RHEL-land, for at least the next ten 
years.


Having said that, if you want to use Upstart, by all means use Upstart; 
RHEL6 (and rebuilds) will have Upstart until 2020.  So you're covered 
for quite a while yet, if you use CentOS, Scientific Linux, or another 
RHEL6 rebuild (or actual RHEL6).


And for those who bugle that systemd will be the 'end of unix as we know 
it' I just have one thing to trumpet:


Death of Internet Predicted.  Film at Eleven.



Re: Linux: concerns over systemd adoption and Debian's decision to switch [OT]

2014-10-23 Thread Lamar Owen

On 10/23/2014 02:22 PM, valdis.kletni...@vt.edu wrote:

On Thu, 23 Oct 2014 13:43:03 -0400, Lamar Owen said:


Now, I've read the arguments, and I am squarely in the 'do one thing and
do it well' camp.  But, let's turn that on its head, shall we? Why oh
why do we want every single package to implement its own initscript and
possibly do it poorly?

Umm.. because maybe, just maybe, the package maintainers know more about
the ugly details of what it takes to start a given package than the init
system authors know?


Speaking from my own experience, the actually relevant and 
package-specific guts of the typical initscript could be easily replaced 
by a simple text configuration that simply gives:


1.) What to start
2.) When to start it (traditional initscripts work on a linear timeline 
of priority slots; systemd units have more flexibility)

3.) How to start it (command line options)

This should not need to be an executable script.  This is what systemd 
brings to the table (Upstart brought some of this, too).  It allows the 
packager to declare those details that the packager knows about the 
package and eliminates the boilerplate (that is different between 
versions of the same distribution; I for one maintained initscripts 
across multiple versions of multiple distributions, all of which had 
different boilerplate and different syntax).  I should not have needed 
to learn all that different boilerplate; it was a distraction from the 
real meat of packaging.  (It could be argued that syntactically arcane 
boilerplate is a problem in and of itself: for instance, the nice 
'daemon' function most RPM-based distributions supply in 
/etc/init.d/functions works for some initscripts and not for others; 
PostgreSQL is one for which it doesn't work, and it's not obvious at 
first glance why.)  I should simply have been able to tell the init 
system, in a declarative syntax, that I needed to start the program 
'/usr/bin/postmaster' with the command-line options for database 
directory and listening port (among some other items).


And that includes 99% of what the various initscripts do (yeah, the 
PostgreSQL script of which I was one author did one thing that in 
hindsight should simply not have been in the initscript at all). Many of 
the 1% exceptions perhaps don't belong in code that is run as root every 
single time that particular daemon needs to start or stop.  The perhaps 
0.5% remaining that absolutely must be run before starting or stopping, 
well, yes, there should be an option in that declarative syntax to say, 
for instance, 'before starting /usr/bin/postmaster, check the version of 
the database and fail if it's older with a message to the log telling 
the admin they need to DUMP/RESTORE the database prior to trying to 
start again' .. (the systemd syntax does allow this).


I have personally compared the current PostgreSQL systemd unit 
(/usr/lib/systemd/system/postgresql.service on my CentOS 7 system) to 
the old initscript.  I wish it had been that simple years ago; the 
.service file is much cleaner, clearer, and easier to understand; no 
funky shell quote escapes needed.  And it doesn't execute as root, nor 
does it source arcane boilerplate functions, the source of some of which 
will make your eyes bleed.  And to do a customized version you just drop 
your edited copy in /etc/systemd/system/ and you're done, since it won't 
get overwritten when you update packages.
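For illustration, a minimal declarative unit in this spirit might look 
like the sketch below.  This is hypothetical, not the actual packaged 
postgresql.service; paths, options, and the ExecStartPre helper name 
are illustrative only.

```ini
# Hypothetical sketch of a declarative service unit; not the
# distribution's actual postgresql.service file.
[Unit]
Description=An example PostgreSQL-style database server
After=network.target

[Service]
Type=forking
User=postgres
Environment=PGDATA=/var/lib/pgsql/data
Environment=PGPORT=5432
# 'What to start' and 'how to start it', declared rather than scripted:
ExecStart=/usr/bin/pg_ctl start -D ${PGDATA} -o "-p ${PGPORT}" -w
ExecStop=/usr/bin/pg_ctl stop -D ${PGDATA} -m fast
# The 'check the database version before starting' case fits here
# (helper name is hypothetical):
# ExecStartPre=/usr/libexec/check-db-version ${PGDATA}

[Install]
WantedBy=multi-user.target
```

A customized copy dropped into /etc/systemd/system/ overrides this 
without being touched by package updates, as described above.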


When configuring Cisco IOS, we use a declarative syntax; the systemd 
.service file's syntax is as readable as the typical startup-config is.  
Imagine if we used the typical bash initscript syntax to bring up 
interfaces and services on our routers.


Re: The FCC is planning new net neutrality rules. And they could enshrine pay-for-play. - The Washington Post

2014-04-28 Thread Lamar Owen

On 04/27/2014 06:18 PM, Jay Ashworth wrote:

- Original Message -

From: Hugo Slabbert hslabb...@stargate.ca
I guess that's the question here: if additional transport directly
between POPs of the two parties was needed, somebody has to pay for the
links.

And the answer is: at whose instance (to use an old Bell term) is that
traffic moving.

The answer is at the instance of the eyeball's customers.

So there's no call for the eyeball to charge the provider for it.




Now, Jay, I don't often disagree with you, but today the business case 
here occurred to me (I've had to put on my businessman's hat far too 
frequently lately, dealing with trying to make a data center operation 
profitable, or at least break even).  This should be taken as a 
'devil's advocate' post more than anything else, and if I missed 
someone else in the thread making the same point, my apologies to the 
Department of Redundancy Department.


Sure, the content provider is paying for their transit, and the eyeball 
customer is paying for their transit.  But the content provider is 
further charging the eyeball's customer for the content, and thus is 
making money off of the eyeball network's pipes.  Think like a 
businessman for a moment instead of like an operator.


Now, I can either think of it as double dipping, or I can think of it as 
getting a piece of the action. (One of my favorite ST:TOS episodes, by 
the way).  The network op in me thinks double-dipping; the businessman 
in me (hey, gotta make a living, no?) thinks I need to get a piece of 
that profit, since that profit cannot be made without my last-mile 
network, and I'm willing to 'leverage' that if need be.  How many 
mail-order outfits won't charge for a customer list?  Well, in this case 
it's actual connectivity to customers, not just a customer list.   The 
argument about traffic congestion is just a strawman, disguising the 
real profit-sharing motive.




Re: The FCC is planning new net neutrality rules. And they could enshrine pay-for-play. - The Washington Post

2014-04-28 Thread Lamar Owen

On 04/28/2014 02:23 PM, Jack Bates wrote:

On 4/28/2014 12:05 PM, Lamar Owen wrote:


Now, I can either think of it as double dipping, or I can think of it 
as getting a piece of the action


However, as a cable company, comcast must pay content providers for 
video. In addition, they may be losing more video subscribers due to 
netflix. In reality, Netflix is direct competition to Comcast's video 
branch.



That's exactly right.  But it somehow sounds better to blame it on the 
bandwidth consumed.





Re: Heartbleed Bug Found in Cisco Routers, Juniper Gear

2014-04-12 Thread Lamar Owen

On 04/11/2014 07:16 AM, Glen Kent wrote:

VPN, on the other hand, is a totally different world of pain for this
issue.


What about VPNs?




SSL VPNs could possibly be vulnerable.




Re: IPv6 isn't SMTP

2014-03-27 Thread Lamar Owen

On 03/26/2014 08:12 PM, Jimmy Hess wrote:

As far as I'm concerned, if you can force the spammer to use their own
IP range, that they can set up rDNS for, then you have practically won,
for all intents and purposes, as it makes blacklisting feasible, once
again!

Spammers can jump through these hoops --- but spammers aren't going to
effectively scale up their spamming operation by using IP address ranges
they can set up rDNS on.

Tell that to the 100,000+ e-mails I blocked last week (and the several 
hundred that got through before I was able to get all the blocks entered 
into my ingress ACLs) from proper rDNS addresses, where the addresses 
were hopping all over a /24, a /22, three /21s, four /20s, and six 
/19s in widely separated blocks.  Every single address in those blocks 
eventually attempted to send e-mail, and every address had proper rDNS 
for its pseudorandom domain name, mostly in the .in TLD, but some 
others, too.  (The blocks were all over, with some registered through 
ARIN, some through RIPE, some through AfriNIC, and some through APNIC, 
with hosters in Europe, North and South America, Asia, and Africa.)  
Note that these passed full FCrDNS verification in postfix.  They all 
had very similar characteristics, including an embedded image 
payload/ad and a couple of hundred kB of anti-Bayesian text, including 
the full text of Zilog's Z80 manual at one point.


Of course, the other tens of thousands per day that get blocked for not 
having rDNS from residential bots make the case for leaving rDNS (and 
the FCrDNS variant) turned on, but it is not a cure-all.
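The FCrDNS check mentioned above can be sketched roughly as follows.  
The function and parameter names here are mine, not postfix's, and the 
resolver calls are injectable so the logic can be exercised without 
live DNS:

```python
# Sketch of a forward-confirmed reverse DNS (FCrDNS) check: the IP's
# PTR name must resolve forward to a set of addresses that includes
# the original IP.  Names are illustrative, not postfix internals.
import socket

def fcrdns_ok(ip, ptr_lookup=None, a_lookup=None):
    """True if ip -> PTR name -> A records resolves back to ip."""
    ptr_lookup = ptr_lookup or (lambda addr: socket.gethostbyaddr(addr)[0])
    a_lookup = a_lookup or (lambda name: socket.gethostbyname_ex(name)[2])
    try:
        name = ptr_lookup(ip)        # reverse: IP -> hostname
        return ip in a_lookup(name)  # forward: hostname -> IPs
    except OSError:
        return False                 # no PTR or no A record: fail

# The spam runs described above *pass* this check: the spammer controls
# the rDNS for his own blocks, so FCrDNS alone cannot stop him.
```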





Re: why IPv6 isn't ready for prime time, SMTP edition

2014-03-26 Thread Lamar Owen

On 03/25/2014 10:51 PM, Jimmy Hess wrote:


[snip]

I would suggest the formation of an IPv6 SMTP Server operator's club,
with a system for enrolling certain IP address source ranges as  Active
mail servers, active IP addresses and SMTP domain names under the
authority of a member.


...

As has been mentioned, this is old hat.

There is only one surefire way of doing away with spam for good, IMO.  
No one is currently willing to do it, though.


That way?  Make e-mail cost; have e-postage.  No, I don't want it 
either.  But where is the pain point for spam where this becomes less 
painful?  If an enduser gets a bill for sending several thousand e-mails 
because they got owned by a botnet they're going to do something about 
it; get enough endusers with this problem and you'll get a class-action 
suit against OS vendors that allow the problem to remain a problem; you 
can get rid of the bots.  This will trim out a large part of spam, and 
those hosts that insist on sending unsolicited bulk e-mail will get 
billed for it.  That would also eliminate a lot of traffic on e-mail 
lists, too, if the subscribers had to pay the costs for each message 
sent to a list; I wonder what the cost would be for each post to a list 
the size of this one.  If spam ceases to be profitable, it will stop.


Of course, I reserve the right to be wrong, and this might all just be a 
pipe dream.  (and yes, I've thought about what sort of billing 
infrastructure nightmare this could be.)




Re: why IPv6 isn't ready for prime time, SMTP edition

2014-03-26 Thread Lamar Owen

On 03/26/2014 12:59 PM, John Levine wrote:

That way?  Make e-mail cost; have e-postage.

Gee, I wondered how long it would take for this famous bad idea to
reappear.

I wrote a white paper ten years ago explaining why e-postage is a
bad idea, and there is no way to make it work.  Nothing of any
importance has changed since then.

http://www.taugh.com/epostage.pdf


And I remember reading this ten years ago.

And I also remember thinking at the time that you missed one very 
important angle, and that is that the typical ISP has the technical 
capability to bill based on volume of traffic already, and could easily 
bill per-byte for any traffic with 'e-mail properties' like being on 
certain ports or having certain characteristics.  Yeah, I'm well aware 
of the technical issues with that; I never said it was a good idea, but 
what is the alternative?


I agree (and agreed ten years ago) with your assessment that the 
technical hurdles are large, but I disagree that they're completely 
insurmountable.  At some point somebody is going to have to make an 
outgoing connection on port 25, and that would be the point of billing 
for the originating host.  I don't like it, and I don't think it's a 
good idea, but the fact of the matter is that as long as spam is 
profitable there is going to be spam.  Every technical anti-spam 
technique yet developed has a corresponding anti-anti-spam technique.  
Bayesian filters?  Easy-peasy: just load Hamlet or the Z80 programmer's 
manual or somesuch as invisible text inside your e-mail (yes, I got a 
copy of the text of Zilog's Z80 manual inside spam this past week!).  
DNSBLs got you stopped?  Easy-peasy: do a bit of address hopping.  The 
only way to finally and fully stop spam is financial motivation; there 
is no 'final' technical solution to spam, however much I and all my 
users wish there were.






Re: misunderstanding scale, SMTP edition

2014-03-26 Thread Lamar Owen

On 03/26/2014 01:09 PM, John Levine wrote:
Quite right. If I were a spammer or an ESP who wanted to listwash, I 
could easily use a different IP address for every single message I 
sent. R's, John 

Week before last I saw this in great detail, with nearly 100,000 
messages sent to our users per day from probably the same spammer (lots 
of similarities, including an image payload with invisible anti-bayesian 
text and a .in TLD) where no two messages came from the same IP.  It did 
all come from the same hosting provider, though, and at least for now 
that hoster's whole address space (all twenty blocks, varying between a 
/23 and a /17) is in my border router's deny acl for incoming on port 
25.  I did send an e-mail out to the abuse contact, waited 72 hours, 
then put the blocks in the incoming acl.  This hoster 
was adding rwhois entries for each /32 allocated (yes, IPv4 /32) and 
they had different NIC handles.  I'll probably wait a month, then pull 
the acl to see if it starts back up.  Oh, and each and every /32 that 
sent mail had fully proper DNS, including PTR etc.  SpamAssassin's score 
was well in the 'ham' category for all of those messages.
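For the record, the mechanism here is nothing exotic; an ingress filter 
along these lines (IOS-style syntax, with RFC 5737 documentation 
prefixes standing in for the hoster's actual blocks) is all it takes:

```
! Hypothetical IOS-style sketch; the prefixes are documentation
! ranges, not the hoster's real address space, and the ACL name
! is illustrative.
ip access-list extended BORDER-INBOUND
 deny   tcp 192.0.2.0 0.0.0.255 any eq smtp
 deny   tcp 198.51.100.0 0.0.0.255 any eq smtp
 deny   tcp 203.0.113.0 0.0.0.255 any eq smtp
 permit ip any any
```

Applied inbound on the border interface, this drops port-25 traffic 
from the listed blocks while leaving everything else alone.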


IP reputation lists are one weapon in the arsenal, but not nearly as 
effective as one would like.  There is no technical magic bullet that 
I've seen work over the long haul.


But that's not really on-topic for NANOG.




Re: why IPv6 isn't ready for prime time, SMTP edition

2014-03-26 Thread Lamar Owen

On 03/26/2014 01:38 PM, Tony Finch wrote:
Who do I send the bill to for mail traffic from 41.0.0.0/8 ? Tony. 


You don't.  Their upstream(s) in South Africa would bill them for 
outgoing e-mail.


Postage, at least for physical mail, is paid by the sender at the point 
of ingress to the postal network.  Yes, there are ways of gaming 
physical mail, but they are rarely actually used; the challenge of an 
e-mail version of such a system would be making it dirt simple and 
relatively resistant to gaming; or at least making gaming the system 
more expensive than using the system.


And let me reiterate: I don't like the idea, and I don't even think it 
is a good idea.  But how else do we make spamming unprofitable? I'd love 
to see a real solution, but it just isn't here yet.





Re: why IPv6 isn't ready for prime time, SMTP edition

2014-03-26 Thread Lamar Owen

On 03/26/2014 01:42 PM, John Levine wrote:

And I also remember thinking at the time that you missed one very
important angle, and that is that the typical ISP has the technical
capability to bill based on volume of traffic already, and could easily
bill per-byte for any traffic with 'e-mail properties' like being on
certain ports or having certain characteristics.  Yeah, I'm well aware
of the technical issues with that; I never said it was a good idea, but
what is the alternative?

Where do you expect them to send the bill?


The entity with whom they already have a business relationship. 
Basically, if I'm an ISP I would bill each of my customers, with whom I 
already have a business relationship, for e-mail traffic.  Do this as 
close to the edge as possible.


And yes, I know, it will happen just about as soon as all edge networks 
start applying BCP38.  I'm well aware of the limitations and challenges, 
but I'm also well aware of how ungainly and broken current anti-spam 
measures are.



  One of the things I
pointed out in that white paper is that as soon as you have real money
involved, you're going to have a whole new set of frauds and scams that
are likely to be worse than the ones you thought you were solving.

Yes, and this is the most challenging aspect.

Again, I'm not saying e-postage is a good idea (because I don't think it 
is), but the fact of the matter is, like any other crime, as long as 
unsolicited commercial e-mail is profitable it will be done.


So, what other ways are there to make unsolicited commercial e-mail 
unprofitable?





Re: why IPv6 isn't ready for prime time, SMTP edition

2014-03-26 Thread Lamar Owen

On 03/26/2014 02:59 PM, valdis.kletni...@vt.edu wrote:
You *do* realize that the OS vendor can't really do much about users 
who click on stuff they shouldn't, or reply to phishing emails, or 
most of the other ways people *actually* get pwned these days? Hint: 
Microsoft *tried* to fix this with UAC. The users rioted. 

Yep, I do realize that, and I do remember the UAC 'riots.'  But the OS 
vendor can make links that are clicked run in a sandbox and make said 
sandbox robust.  A user clicking on an e-mail link should not be able to 
pwn the system.  Period.


Most of the phishing e-mails I've sent don't have a valid reply-to, 
from, or return-path; replying to them is effectively impossible, and 
the linked/attached/inlined payload is the attack vector.




Re: why IPv6 isn't ready for prime time, SMTP edition

2014-03-26 Thread Lamar Owen

On 03/26/2014 03:56 PM, Lamar Owen wrote:


Most of the phishing e-mails I've sent don't have a valid reply-to, 
from, or return-path; replying to them is effectively impossible, and 
the linked/attached/inlined payload is the attack vector.




Blasted spellcheck!  Now that everybody has had a good laugh: I've not 
'sent' *any* phishing e-mails, but I have *seen* plenty.  Argh.





Re: IPv6 Security [Was: Re: misunderstanding scale]

2014-03-25 Thread Lamar Owen

On 03/24/2014 09:39 PM, Paul Ferguson wrote:

I'll leave it as an exercise for the remainder of... everywhere to
figure out why there is resistance to v6 migration, and it isn't just
because people can't be bothered.

I'm sure there are numerous enterprises in the same shape I am in, with 
significant equipment investment in not-quite-ipv6-ready gear, and 
insufficient technology refresh capex monies to get ipv6-ready 
capacity-equivalent replacements.  Cisco 6500/7600 even with Sup720 has 
issues, and I know of a number of networks still running Sup2 on 
6500/7600 or even older (including some gear in my own network, where I 
still have old gear, older even than I'm willing to admit publicly, 
serving in core roles; I just decommissioned a failing Extreme Summit 1i 
this past Saturday, and still have two more in core roles, doing Layer 3 
IPv4 in one case).  I know I'm not alone.


While much of this gear may be fully depreciated, the cost of the 
forklift upgrade is major, and the gear is not the biggest part of the 
cost.  Repairs are not anywhere near as draining on the capex budget as 
complete chassis upgrades are, and so we keep old gear running because 
it's what we can afford to do.


So capex is a big part of it; but then there's training costs and the 
opex of dealing with a new-to-us technology.


Just my very-late-to-the-party opinion, and not likely to change 
anything at all, but in hindsight it seems we might have been better off 
with ipv4.1 instead of ipv6, which, IMO, just simply bit off too much in 
one bite.  Much like how the Fountainhead project at DG got eclipsed by 
the much less ambitious Eagle, and never really went anywhere due to its 
pie-in-the-sky goals, when all the customers really wanted was a 32-bit 
Eclipse, which Eagle provided.  (Tracy Kidder, The Soul of a New 
Machine which should be on every tech's must-read list).  Yeah, I know, 
too late to matter, as ipv6 is here and here to stay.  But the 
transition could have been smoother and less traumatic to equipment 
vendors' customers.  At least that's my opinion and experience, your 
mileage may vary.







Re: Level 3 blames Internet slowdowns on Technica

2014-03-24 Thread Lamar Owen

On 03/23/2014 11:08 PM, Frank Bulk wrote:

Not sure which rural LECs are exempt from competition.

This is a quagmire, but it boils down to this: if the FCC says they're exempt, 
then they're exempt and have a 'rural monopoly' (there's a lot of 
caselaw and a number of FCC Report and Orders (and further Report and 
Orders and Notices of Rulemaking and Public Notices and the like) on the 
subject, but it goes back essentially to the definition found in 47 USC 
§ 153(37) of a Rural Telephone Company).


Just being covered by an NECA (National Exchange Carriers Association) 
tariff doesn't automatically grant this, since there is a subsection in 
47 CFR § 61 dealing with Rural CLEC's and their exemptions.  This 
landscape is changing constantly, and it has been quite some time since 
I've traced the threads in the various RO's and PN's from the FCC on 
the subject; it would take probably a full week just to get up to date 
on the current state of things, since it's been five years since I last 
looked at it.


This is one case where you would have to ask a good communications 
attorney to know for sure.





Re: L6-20P - L6-30R

2014-03-20 Thread Lamar Owen

On 03/19/2014 06:33 PM, Rob Seastrom wrote:
It's not the conductor that you're derating; it's the breaker. Per NEC 
Table 310.16, ampacity of #12 copper THHN/THWN2 (which is almost 
certainly what you're pulling) with 3 conductors in a conduit is 30 
amps. Refer to Table 310.15(B)(2)(a) for derating of more than 3 
current-carrying conductors in a conduit. 4-6 is 80%, 7-9 is 70%. 
Plenty good for 20 amps for any conceivable number of conductors in a 
datacenter whip. Thermal breakers are typically deployed in an 80% 
application for continuous loads, per NEC 384-16(c). See the 
references to 125% of continuous load, which of course is the 
reciprocal of 80%. 


Actually, there is no NEC 384.16 any more, at least in the 2011 code.  
The current relevant, perhaps even replacement, article seems to be the 
exception listed to article 210.20(A).  Now, 210.21(B)(2) indicates 
that, for each individual receptacle on a multi-receptacle branch 
circuit, the total 
cord and plug connected load cannot be above certain values (which are 
80% of the branch circuit rating for 15, 20, and 30A circuits) 
regardless of overcurrent protection device rating.  If you have a 100% 
rated overcurrent device you could connect a total load on multiple 
receptacles beyond 80%, it appears.


While 210.21(B)(1) requires receptacles on single-receptacle branch 
circuits to be rated for the full load, any one piece of utilization 
equipment on a 20A or 30A branch circuit cannot be rated to draw more 
than 80% of the branch circuit's rating (210.23(A)(1) for 20A, 210.23(B) 
for 30A).  So even if you have a single receptacle on the branch circuit 
you can't have any single piece of equipment use 100% continuously.  The 
idea is to give the branch circuit some 'headroom;' in the ideal world, 
we don't load networking links past a certain percentage, depending on 
link technology, for similar reasons.
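The 80%/125% reciprocal relationship reads more clearly as arithmetic.  
This is only an illustration of the rule of thumb discussed above 
(function names are mine); the actual code articles carry many 
qualifications and exceptions this sketch ignores:

```python
# NEC continuous-load rule of thumb: a standard (non-100%-rated)
# overcurrent device is sized at 125% of the continuous load,
# i.e. the continuous load is held to 80% of the device rating.

def max_continuous_load(breaker_amps: float) -> float:
    """Largest continuous load on a standard-rated breaker (80%)."""
    return breaker_amps * 0.80

def min_breaker_rating(continuous_amps: float) -> float:
    """Overcurrent device sizing at 125% of the continuous load."""
    return continuous_amps * 1.25

# A 20A branch circuit carries at most 16A continuously; a 30A
# circuit, 24A -- and 16 * 1.25 = 20, the reciprocal relationship.
```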


Tracking code changes fuels an entire industry, and several 
websites. :-)  Not to mention continuing education and license 
renewals for electricians, and headaches for those who think they 
understand the code but then get a surprise at inspection time (been 
there, done that, got the t-shirt and the NEC Handbook so I'll halfway 
know what I'm talking about when dealing with these things.)


A new NEC Handbook is in my budget every three years due to the 
substantial changes that are made by the committees.   The physics of 
electricity don't change, but our understanding of those physics and our 
ideas about how to deal safely with electricity do.  And what is 
allowable and available can change in a moment; I'm still a bit puzzled 
how the L6-30P to L6-20R adapters can actually be on the market in the 
first place, given that they can easily create an unsafe condition.  
Well, I'm puzzled from a technical viewpoint, but not from a marketing 
viewpoint: if it makes money, it is marketable, until pulled or 
recalled.







Re: L6-20P - L6-30R

2014-03-20 Thread Lamar Owen

On 03/20/2014 12:27 PM, Gary Buhrmaster wrote:
Think of the children! I hear the 2017 edition of NFPA 70 (aka NEC) 
may require one to turn off the power to the entire household in order 
to plug in a coffee maker to minimize potential arc flash hazard 
(just kidding). Gary 


ROTFL.

No, I'll just don my $700 arc-flash suit (8 cal per sq cm rated) before 
making coffee in the morning.


While I say that somewhat tongue-in-cheek, arc flash really is serious 
business; watch the YouTube video called 'Donnie's Accident' to see how 
serious.  I had to have a suit because I am in charge of the power 
monitoring for our data centers, and hooking up our Fluke 435 on the 
input to our Mitsubishi 9900B UPS requires full arc flash protection at 
the 8 cal level.  I'm glad it's not on our main switchgear, though, as 
the 6,000A busses there require 40 cal suits, and those are really 
expensive.  The smaller feeders don't require the full suit, but I have 
made a habit of wearing it any time I make a measurement with the 435, 
even on the small 30KVA PDU's, mainly just to make it a habit, since one 
wrong move can be very painful.


All of this just to measure our actual PUE so we can adjust the 
receptacle costs for our data centers.  (Our PUE, depending upon the 
time of year, runs between 1.1 and 1.4, by the way.)
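For anyone unfamiliar with the metric, PUE is just total facility power 
over IT load; a trivial sketch, with illustrative figures rather than 
our actual metered values:

```python
# PUE (power usage effectiveness): total facility power divided by
# the power delivered to the IT equipment itself.  Numbers below are
# illustrative only.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

# A 550 kW facility draw supporting a 500 kW IT load gives a PUE of
# 1.10; 700 kW for the same IT load gives 1.40.
```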


But that's drifting even farther off-topic.





Re: L6-20P - L6-30R

2014-03-19 Thread Lamar Owen

On 03/18/2014 09:39 PM, William Herrin wrote:
Meh. It depends. Plug that 30 amp power strip into a 20 amp circuit. 
Try to use more than 20 amps and the main breaker trips. No problem. 
Plug that 20 amp power strip into a 30 amp circuit. Try to use more 
than 20 amps and the strip's breaker trips. No problem. Get a short 
before the strip breaker and the main breaker trips before the wires 
can heat. There just aren't a whole lot of failure modes here that 
result in fire short of one or the other breaker failing. And that 
results in fire regardless of the amperage mismatch.


The amount of misinformation in this thread is astonishing.

This, by the way, is why you're allowed to plug that 22 gauge 
Christmas light wire into a 15 amp receptacle even though it can't 
handle 15 amps: the 3 amp fuse will blow if there's a short. Just 
don't plug in anything with lower-rated wire that doesn't have its own 
breaker or fuse. Regards, Bill Herrin 


Note that in those cases the fuse is in the plug; anywhere else wouldn't 
be OK.  Wire as small as 18 AWG may be used for fixture wire on a 20A 
circuit, per 240.5(B)(2)(1).


2011 NEC article 210.23(A) permits 15A receptacles on 20A branch 
circuits; 30A branch circuits must use 30A receptacles.  If the OP's 30A 
branch circuit has an L6-20R on it then this would be a violation; see 
NEC Table 210.24 for a summary of the code.


406.8 is the article requiring that cord caps (plugs) not be 
interchangeable.


Now, article 240.5 is the relevant article in the NEC.  This can get a 
bit tricky to apply; if the PDU in question is *listed* for connection 
to a 30A circuit then that's OK (240.5(B)(1)); the individual fixture 
wires within the PDU for a 30A PDU can be as small as 14AWG as long as 
they're protected (240.5(B)(2)(4)), but field assembled extension cord 
sets for a 30A circuit would need 10AWG conductors, as they aren't 
covered by the exception in 240.5(B)(4) and thus fall under 240.5(A).  
It's definitely allowed to connect a 30A PDU with 10AWG conductors to a 
30A branch circuit; anything else could be OK, depending upon the local 
authority having jurisdiction and its interpretation of the 240.5 
exceptions, which aren't the clearest section of the NEC, IMO.  And 
article 645, dealing with ITE rooms, only requires that cords be listed 
for use with IT equipment and be less than 4.5m in length.


IMO, and my degree is in EE, it is possible to have a fault condition in 
a 12AWG cord, ahead of the input breaker in the PDU, that won't trip a 
30A branch breaker but could still cause a fire.
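As a back-of-the-envelope sketch of that failure mode (illustrative numbers only, not an NEC calculation): the danger is any sustained fault current above the cord's ampacity but below the branch breaker's trip rating, since the breaker never opens while the cord overheats.

```python
# Illustrative sketch (not an NEC calculation): the dangerous window is any
# sustained fault current above the cord's ampacity but below the branch
# breaker's trip rating -- the breaker never opens, the cord overheats.

def fault_window(cord_ampacity_a: float, breaker_rating_a: float):
    """Return the (low, high) range of fault currents the breaker won't clear,
    or None if the breaker protects the cord."""
    if breaker_rating_a <= cord_ampacity_a:
        return None  # cord is protected: breaker trips before cord overloads
    return (cord_ampacity_a, breaker_rating_a)

# A ~20A-rated 12AWG cord on a 30A branch circuit leaves a 20-30A window:
print(fault_window(20, 30))   # -> (20, 30)
# The same cord on a 20A branch circuit has no such window:
print(fault_window(20, 20))   # -> None
```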


The OP appears to be doing the right thing and getting a 30A PDU.





Re: L6-20P - L6-30R

2014-03-19 Thread Lamar Owen

On 03/19/2014 09:51 AM, William Herrin wrote:
Nobody is talking about putting an L6-20R on a 30 amp circuit. OP was 
talking about putting an L6-30P on a 20 amp appliance: a PDU that has 
its own 20 amp breaker. Big difference. 


If the PDU isn't listed for 30A then it's essentially the same 
thing, safety-wise. Unless there is overcurrent protection at the source 
of the feed to the conductors of the flexible cord (240.21) that meets 
the ampacity of the conductors of said flexible cord, unless one of the 
exceptions of 240.5 apply, then it's a potentially unsafe condition (NEC 
doesn't directly apply to supply cords of appliances themselves; that's 
what the 'listing' is for from UL or similar; see 
http://ecmweb.com/code-basics/nec-rules-overcurrent-protection-equipment-and-conductors 
for more info, and see UL's FAQ entry for modifications to listed 
equipment at 
www.ul.com/global/eng/pages/offerings/perspectives/regulator/faq/).


Just replacing an L6-20P with an L6-30P on a 20A-listed PDU would be 
unsafe and (IMO) unwise, since the breaker in the input of the PDU does 
not protect the flexible cord's conductors from internal overcurrent 
faults.  A 20A listed PDU should have 20A overcurrent protection to the 
connected receptacle, in addition to any overcurrent protection internal 
to the PDU.  A cord with a 20A ampacity may overheat significantly if it 
faults internally in such a way as to cause more than 20A, but less than 
30A (or whatever overcurrent protection is in the branch circuit), to 
flow; there are numerous ways cords can fault in this manner.  You could 
easily get a situation where the cord is partially faulted internally 
but the PDU's breaker doesn't detect it because the fault shunts current 
ahead of that breaker; again, not a dead short but still an overcurrent 
fault.  I've seen this type of fault before, where the cord itself was 
shunting a few amps prior to the PDU input breaker (in this particular 
case the cord was damaged by lightning, even though the equipment to 
which it was connected still had power).


But the other condition, where a 20A breaker is feeding a 30A PDU, could 
result in dropping power to the PDU but is not unsafe.


I know that I wouldn't approve (in the NEC-speak sense of that word) of 
the use of any of these adapters or similar kludges in my data centers, 
as the insurance liability issues are potentially much more costly than 
just buying the right PDU or running a branch circuit with the correct 
overcurrent protection in the first place.


It also depends a bit on exactly how the PDU is listed.  You can look up 
the listing's details in the UL White Book (download link: 
http://www.ul.com/global/documents/offerings/perspectives/regulators/2013_WB_LINKED_FINAL.pdf 
).


But the final say rests with the authority having jurisdiction, AHJ in 
NEC-speak.







Re: Fusion Splicer

2014-03-19 Thread Lamar Owen

On 03/19/2014 09:20 AM, Eric Dugas wrote:

We have the 70S, it's pretty awesome. We paid around $15K CAD new. You might 
want to look for the 12S or 19S if the price is an issue. I believe you can 
also find them refurbished.


We have a 17S, and are very happy with it.  We paid a little more than 
$8K for ours new, and used units should be available for quite a bit 
less these days.





Re: L6-20P - L6-30R

2014-03-19 Thread Lamar Owen
[Whee.  This discussion is good for me, as I need to refresh my memory 
on the relevant code sections for some new data center clients. 
Thanks, Bill, you're a great help!]


On 03/19/2014 12:24 PM, William Herrin wrote:
Yet an 18 awg PC power cable is perfectly safe when plugged in to a 
5-20R on a circuit with a 20 amp breaker. Get real man. 
The NFPA thinks so.  They also allow interoperability between a 20A 
T-slot receptacle and a 15A plug (so that a 2-15P can work in a T-slot 
2-20R, or a 5-15P can work in a 5-20R, etc).  Things are different above 
20A, at least in the NFPA's view.  NFPA 75 is interesting reading, 
especially in those sections where its committee and the NFPA 70 
committee seem to see things differently.


However, my SOP is to use no smaller than 16AWG for a 5-15P or 6-15P 
(with a 14AWG preference), and no smaller than 12AWG for 20A use, etc, 
unless protected by suitable overcurrent devices (for 18AWG, that's 7A, 
and for 16AWG that's 10A, so a power strip with a 10A breaker or a PDU 
with individual 10A breakers is fine for use with 16AWG power cords).  
I do have an EE background and degree, and so I do tend to be very 
conservative in those things.  I have seen the results of pinched 18AWG 
zipcord in a 5-15R, and it's not pretty.
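A hedged sketch of the SOP above, using only the figures given in this post (these are the author's conservative working numbers, not an NEC ampacity table):

```python
# Sketch of the SOP above: the maximum overcurrent protection (in amps)
# acceptable ahead of a flexible cord of a given gauge, per the figures
# in the post (the author's conservative numbers, not an NEC table).
MAX_OCP_BY_AWG = {
    18: 7,    # 18AWG zipcord: needs 7A protection
    16: 10,   # 16AWG: 10A (a power strip with a 10A breaker is fine)
    14: 15,   # 14AWG: typical 15A branch circuit
    12: 20,   # 12AWG: 20A use
}

def cord_ok(awg: int, breaker_amps: int) -> bool:
    """True if a cord of this gauge is acceptable behind this breaker."""
    return breaker_amps <= MAX_OCP_BY_AWG.get(awg, 0)

print(cord_ok(16, 10))  # -> True
print(cord_ok(18, 15))  # -> False: 18AWG on a 15A breaker needs its own fuse
```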


The 22AWG Christmas lights get away with it by having overcurrent 
protection in the plugs.


You got two things right: The NEC (and related fire codes) don't apply 
to supply cords of appliances in circumstances such as OP's PDU. The 
modification cancels the UL certification. If you have an external 
requirement to use only UL certified components then you can't make 
any modifications no matter how obviously safe they are. By the way, 
you either don't have that requirement or you're breaking it. Your 
custom network cables are not UL certified.
Here's the bottom line, at least in my data centers:  if it could be 
considered questionable by the insurers (that's where UL got its start) 
then it's not likely to happen.  Modifying a piece of utilization 
equipment with a UL QPQY listing is likely to be considered questionable.


Now, network cable installation is covered by the NEC in article 800, 
which got some revisions in 2011, and the class 2 and class 3 cables 
used are also covered, in article 725 (fiber is covered by article 770, 
and ITE rooms by article 645).  The major theme there is reduction in 
spread of products of combustion, and the UL DUZX listing reflects that 
purpose.  Yes, listed cables are required by code when part of the 
premises wiring, but putting a listed connector on listed cable is 
within the listing.


Further, 802.3af and even 802.3at are considered Class 2 power limited 
sources under article 725 of the NEC (that is, there's not enough 
available power to initiate combustion).


So, sure, I can still use custom network cabling and stay within using 
only listed items.






Re: L6-20P - L6-30R

2014-03-19 Thread Lamar Owen

On 03/19/2014 02:05 PM, William Herrin wrote:
50 watts DC. It won't electrocute you (that's AC) but it's the same 
power that makes a 40 watt bulb burning hot.
802.3af is limited to 15.4W, and 802.3at to 25.5W.  The limits for Class 
2 and 3 circuits are found in Chapter 9, Tables 11(A) and 11(B), of the NEC 
(Table 11(B) covers DC circuits; for a power source of 30 to 60 volts, a 
Class 2 circuit with a 44VDC supply can have up to 3.4A available, with a 
max nameplate rating of 100VA).  For AC, Table 11(A) tells me that a 
120VAC circuit, to meet Class 2, must be current-limited to 5mA.
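Quick arithmetic on those figures, using the 100VA nameplate ceiling cited above (a rough comparison, not a compliance calculation):

```python
# Both PoE flavors sit well below the 100VA Class 2 nameplate ceiling
# cited above, which is consistent with 802.3af/at cabling being treated
# as power-limited under NEC article 725.
CLASS2_MAX_VA = 100                           # nameplate ceiling from the post
POE_W = {"802.3af": 15.4, "802.3at": 25.5}    # per-port source power limits

for std, watts in POE_W.items():
    frac = watts / CLASS2_MAX_VA
    print(f"{std}: {watts}W, {frac:.0%} of the Class 2 ceiling")
```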


BICSI has a good set of slides on the NEC at 
http://www.bicsi.org/uploadedfiles/Conference_Websites/Winter_Conference/2012/presentations/Interpreting%20the%20National%20Electrical%20Code.pdf







Re: GPS attack vector

2013-01-17 Thread Lamar Owen

On 01/16/2013 08:06 PM, Jay Ashworth wrote:

Do you use GPS to provide any mission critical services (like time of day)
in your network?

Have you already see this? (I hadn't)

   
http://arstechnica.com/security/2012/12/how-to-bring-down-mission-critical-gps-networks-with-2500/


Hi, Jay,

Yes, saw this about a month ago.  We have a UNAVCO Plate Boundary 
Observatory station (779) on our site, and it uses a Trimble NetRS.  We 
also use GPS timing locally to generate NTP stratum 1 for our LAN via 
Agilent/HP Z3816 disciplined receivers, and individual GPS receivers 
for both of our 26 meter radio telescopes for precision local standard 
of rest calculations.


But as a frequency standard for 10MHz, we only use the output of the 
frequency locked loops in the Z3816s as references for our Efratom 
rubidium standard; even cesium clocks have more drift than rubidium 
ones, and the rubidium is manually locked, and is the master reference 
for anything that needs a frequency reference; the Z3816's can have 
significant jitter (well, 'significant' is relative).  Last I 
checked, the rubidium was 8.5uHz (yes, microHertz) off according to the 
GPS disciplined 10MHz signal from one of the Z3816s (we use an HP 
differential counter with a very long gate time to get that measurement 
precision).
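For readers curious what an 8.5 µHz offset at 10 MHz means in practice, the arithmetic is straightforward (figures taken from the post):

```python
# What an 8.5 microhertz offset at 10 MHz works out to: the fractional
# frequency error, and the time error it would accumulate per day if
# the oscillator ran uncorrected at that offset.
offset_hz = 8.5e-6     # measured offset from the post
nominal_hz = 10e6      # 10 MHz reference

fractional = offset_hz / nominal_hz          # dimensionless frequency error
ns_per_day = fractional * 86_400 * 1e9       # seconds/day -> nanoseconds

print(f"fractional offset: {fractional:.2e}")   # on the order of 8.5e-13
print(f"time error: {ns_per_day:.1f} ns/day")   # roughly 73 ns/day
```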


It was interesting timing for the release of this paper, as it was 
around the time tick and tock were rebooted and went all 'Doc Brown' on us.


Anyone interested in the vagaries of serious time precision, please 
reference the 'Time-Nuts' mailing list, and other content, hosted by 
febo.com.





Re: Programmers with network engineering skills

2012-03-08 Thread Lamar Owen
On Monday, March 05, 2012 09:36:41 PM Jimmy Hess wrote:
  Other common, but misguided assumptions (even in 2012):
  1. You will be using IPv4.  We have no idea what this IPv6 nonsense is.
  Looks complicated and scary.
  2. 255.255.255.0 is the only valid netmask.
...
(16)  The default gateway's IP address is always 192.168.0.1
(17) The user portion of E-mail addresses never contain special
 characters like - + $ ~ . , [ ]

Hilarious.  Wish I'd seen this a few days ago; my whole week would have been 
brighter.  I'll add one from my 'I asked the programmer about a problem in 
the code, which the programmer proceeded to say wasn't a problem' list:

(18) No, our control protocol doesn't have authentication, it's up to the 
network to keep undesired users out. (I won't say what this software is, but 
suffice to say the package in which it was a part cost over $250,000).   



Re: Programmers with network engineering skills

2012-02-28 Thread Lamar Owen
On Monday, February 27, 2012 07:53:07 PM William Herrin wrote:
 .../SCI clearance.
 
 The clearance is killing me. The two generalists didn't have a
 clearance and the cleared applicants are programmers or admins but
 never both.

I just about spewed my chai tea seeing 'SCI' and 'generalist' in the same 
post... aren't those mutually exclusive?



Re: Programmers with network engineering skills

2012-02-28 Thread Lamar Owen
On Monday, February 27, 2012 05:14:00 PM Owen DeLong wrote:
 Who is a strong network engineer
 Who has been a professional software engineer (though many years ago and my
 skills are rusty and out of date)

Owen, you nailed it here.  Even the ACM recognizes that a 'Software Engineer' 
and a 'Computer Scientist' are different animals (ACM recognizes five 'computer 
related' degree paths with unique skill maps: Computer Engineering, Computer 
Science, Software Engineering, Information Services, and Information 
Technology; see https://www.acm.org/education/curricula-recommendations for 
more details).

A true 'network engineer' will have a different mindset and different focus 
than a 'Computer Scientist' who has all the theoretical math skills that a 
Computer Scientist needs (a reply to one of my recent posts mentioned that 
math, and was somewhat derogatory about engineers and timeliness, but I 
digress). 

Coding and development can bridge across the differences; but it is very useful 
to understand some of the very basic differences in mindset, and apply that to 
the position being sought.  

It boils down to whether the OP wants strong engineering skills with the 
accompanying mindset, or strong CS skills with the accompanying mindset.  Given 
the other clearance issues, I would be more inclined to say that the OP would 
want a 'Software Engineer' with some network engineering skills rather than a 
CS grad with some network guy skills.  It's a different animal, and software 
engineering teaches change control and configuration management at a different 
depth than the typical CS track will do (and that sort of thing would be 
required in such a cleared environment).  On the flip side, that same 'Software 
Engineer' isn't nearly as steeped in CS fundamentals of algorithms and the 
associated math.



Re: Most energy efficient (home) setup

2012-02-23 Thread Lamar Owen
On Wednesday, February 22, 2012 04:13:47 PM Jeroen van Aart wrote:
 Any suggestions and ideas appreciated of course. :-)

www.aleutia.com

DC-powered everything, including a 12VDC LCD monitor.  We're getting one of 
their D2 Pro dual core Atoms (they have other options for more money) for a 
solar powered telescope controller, and the specs look good. 

There is a whole market segment out there for the 'Mini ITX' crowd with DC 
power, low power budgets, and reasonable processors.  Solid-state drives have 
helped immensely.



Re: common time-management mistake: rack stack

2012-02-23 Thread Lamar Owen
On Wednesday, February 22, 2012 03:37:57 PM Dan Golding wrote:
 I disagree. The best model is - gasp - engineering, a profession which
 many in networking claim to be a part of, but few actually are. In the
 engineering world (not CS, not development - think ME and EE), there is
 a strongly defined relationship between junior and senior engineers, and
 real mentorship. 

My degree is in EE, and I spent over a decade in the field as a 'broadcast 
engineer.'  Now, a 'broadcast engineer' is not a PE, and is not currently 
licensed as such, although many of the best consulting broadcast engineers do 
take the extra step and expense to get the PE license; technically, in many 
states, you're not even supposed to use the word 'engineer' in your title 
without having a PE license.

By way of background, my specialty was phased array directional AM broadcast 
systems in the 5-50KW range, doing 'technician' things like phasor rocking, 
transmitter retubing and retuning, monitor point and radial measurements, 
transmitter installation, and tower climbing, in addition to more mathematical 
and engineering sorts of things like initial coverage and protection studies 
for pattern/power changes, measured radial ground conductivity/dielectric 
constant curve fitting/prediction contour overlap studies and models, as well 
as keeping up with FCC regulations and such.   

I left broadcasting full-time in 2003 for an IT position, as a stress-reducer 
(and it worked).  So I say this with some experience. 

Mentoring in broadcast is still practiced by associations like the Society of 
Broadcast Engineers and others.  These days, much of this sort of thing is 
online with websites like www.thebdr.net and mailing lists like those hosted by 
broadcast.net; in this regard the network ops community isn't too dissimilar 
from the broadcast community.

Now, while in the broadcast role I had the distinct honor of having three 
fantastic personal mentors, two of whom still stay in touch, and one who died 
twenty years ago, a few years after I got started in the field.  RIP W4QA.  He 
taught me more in half an hour about phased arrays and the way they worked in 
practice than ten hours of class time could have.  Likewise, I know some old 
hands here, even if I've not physically met them, whose opinions I trust, even 
if I don't agree with them. 

 The problem with networking is that TOO MANY skills
 are learned on the job (poorly), rather than folks coming in with solid
 fundamentals. 

This is not limited to networking. 

The parallels between broadcast engineering and IT/networking are a little too 
close for comfort, since there are more potential mentors with weak teaching 
skills and bad misconceptions that were valid 'back in the day' than there are 
who will teach practical, working, correct ways of doing things 'today' and why 
they are done the way they are done (which can involve some history, one of my 
favorite things). 

A mentor who will teach how to think about why you are doing what you are doing 
is priceless.  A mentor who will honestly go over the pros and cons of his or 
her favorite technique and admit that it isn't the single 'correct' way to do 
something, and a mentor who will teach you how to think sideways, especially 
when things are broken, are beyond priceless.  I especially like it when a 
mentor has told me 'now, this isn't necessarily the way I'd do it, and I'm not 
really fond of it, but you *can* do this to get this result if you need to do 
so...just don't ask me to fix it later.'

And the recent thread on common misconceptions has been priceless, in my book.  
Especially due to some of the respectful disagreements.

 I blame our higher education system for being ineffectual
 in this regard. Most of the book learning that you refer to is not
 true theory - it's a mix of vendor prescriptions and
 overgeneralizations. In networking, you don't learn real theory until
 you're very senior - you learn practice, first. 

Vendor-specific certifications don't help much, either, really, in the 'why' 
department.

 They also lack real licensing, which
 is a separate problem. 

Now you've stirred the pot!  In the broadcast field, SBE offers some good 
things in the line of vendor-neutral certification; in the networking field 
there are some vendor-neutral avenues, such as BiCSI for general stuff and SANS 
for security stuff.

Having said that, and going back to the broadcast example, not too long ago you 
had to have an FCC 'First Phone' to even be qualified to read a transmitter's 
meters, and every broadcast licensee (holding the station's operating license, 
that is) had to employ 'operators' holding an active First Phone to keep an eye 
on the transmitter during all operating hours, with the First Phone of every 
operator posted at the site, and even the DJ's had to have a Third Class Permit 
to run the audio board, and FCC inspections were frequent.  So that's 
the extreme situation in terms of 

Re: Most energy efficient (home) setup

2012-02-23 Thread Lamar Owen
On Thursday, February 23, 2012 04:53:06 PM Joe Greco wrote:
 So, good group to ask, probably...  anyone have suggestions for a low-
 noise, low-power GigE switch in the 24-port range ... managed, with SFP?
 That doesn't require constant rebooting?

I can't comment on the rebooting, but a couple of years ago I looked at the 
Allied Telesis AT-9000-28SP, which is a bit steeply priced (~$1,500) but has 
flexible optics and is managed.  And at ~35 watts it was the lowest-powered 
managed gigabit switch I was able to find for our solar powered telescopes.  The grant 
that was going to fund that fell through, so I'm still running the 90W+ 
Catalyst 2900XL with two 1000Base-X modules and 24 10/100 ports instead, but 
the AT unit looked pretty good as a pretty much direct replacement with extra 
bandwidth.



Re: Common operational misconceptions

2012-02-21 Thread Lamar Owen
On Monday, February 20, 2012 09:07:20 PM Jimmy Hess wrote:
 RJ45 is really an example of what was originally a misconception
 became so widespread, so universal, that reality has actually shifted
 so the misconception became reality.   When was the last time you ever
 heard anyone say 8P8C connector?

And then there's the 10C variant used on some serial port interfaces (like 
those from Equinox). 

'8 pin modular plug' is reasonable, though, and is what I'll typically say, 
with the modifier 'for stranded' or 'for solid' conductors, as it does make a 
difference.  I haven't said 'RJ45 plug' in years.

Yeah, it's a bummer that the keyed RJ45 plug got genericized to the unkeyed 
variant; at least the unkeyed plug used for TIA568A/B will work in a true RJ45 
jack.  



Re: WW: Colo Vending Machine

2012-02-20 Thread Lamar Owen
On Friday, February 17, 2012 01:44:57 PM Jay Ashworth wrote:
 2) Power cords: C19 to L6-15, C19 to C20, C13 to C20 (latter 2 for 208V PDUs)
 (If you don't have your own C13 to L6-15 cords, you're in the wrong biz)

An interesting thread.

I'd say it would be a tad more universal with a C15 instead of a C13 on one 
end (Cisco 7507 and 12012 power supplies, for instance, are C16 and not C14 
inlets), since a C15 will mate with a C14 but a C13 won't mate with a C16.  
As I'm still running some old kit (a 7507 still in production in an internal 
role, and a 12012 just removed) I have run into that.

And I'll have to be one of those who has no equipment needing an L6-15; plenty 
of L5-20's, L5-30's, L6-20's, L6-30's, and a few L21-30's but no 15 amp locking 
NEMA receptacles to be found.  Is an L6-15R standard on some PDU's or 
something, while others are C14 and C20 (such as EMC's PDU's)?



Re: Common operational misconceptions

2012-02-17 Thread Lamar Owen
On Friday, February 17, 2012 01:30:30 AM Carsten Bormann wrote:
 Ah, one of the greatest misconceptions still around in 2012:

 -- OSI Layer numbers mean something.
 or
 -- Somewhere in the sky, there is an exact definition of what is layer 2, 
 layer 3, layer 4, layer 5 (!), layer 7

Misconception: Layers are not recursive. 

Thanks to tunneling/MPLS/other encapsulation techniques, they are.



Re: XBOX 720: possible digital download mass service.

2012-01-28 Thread Lamar Owen
On Friday, January 27, 2012 05:56:19 AM Randy Bush wrote:
  Can internet in USA support that?   Call of Duty 15 releases may 2014
  and 30 million gamers start downloading a 20 GB file.  Would the
  internet collapse like a house of cards?

 not a problem.  the vast majority of the states is like a developing
 country [0], the last mile is pretty much a tin can and a string.  so
 this will effectively throttle the load.

Being in 'the middle of nowhere' as I write, even we are a few notches above 
RFC1149 capabilities.  As one visitor to our site (who had recently been to 
NRAO Green Bank) said, 'if this isn't the middle of nowhere, you can probably see 
it from here.'

Our base DSL is 7Mb/s down, 0.5Mb/s up, with 11Mb/s down and 1Mb/s up available 
to over 99% of our very rural county.  We (work) have 1Gb/s on the local loop 
fiber pair, throttled to the amount of IP we actually pay for at the ISP's PoP.

Now if RFC1149 supported jumbo frames, it might give tin-cans-and-string a run 
for its money...



Re: DC wiring standards

2012-01-26 Thread Lamar Owen
[Digging up an older post; I let a couple of thousand NANOG posts pile up in my 
NANOG folder]

On Tuesday, January 03, 2012 02:40:39 PM Leigh Porter wrote:
 Does anybody know where I can find standards for DC cabling for -48v systems?

Book Resource that anyone dealing with telecom DC power systems should have on 
their shelf:

'DC Power System Design for Telecommunications' by Whitham D. Reeve, published 
by Wiley, ISBN (print) 9780471681618, and is available in the Wiley online 
library.

It is not an inexpensive book, but is written from the point of view of someone 
with 30 years of practical 'in-the-CO' experience.

The various standards for DC distribution are referenced in this volume.  If 
you have access to the Telcordia standards, the relevant standard is referenced 
in this volume (I left my copy at home, so can't quote the Telcordia standard 
right now). 

Saying all that, the NEC does have covering articles, and a good rule of thumb 
is to use black or red (or other normal AC 'HOT' color like blue, brown, 
orange, or yellow) for the ungrounded conductor, white or gray for the grounded 
conductor, and green, yellow with a green stripe, or bare for the grounding 
conductor (using the definitions in the NEC for those conductors).  

(In an AC circuit the grounded conductor is commonly referred to as the 
'neutral' for center-tapped or wye systems, but grounded phase three-phase 
systems (corner-grounded) are known that have no neutral.)

In the typical 'protect the outside plant's lead sheathed buried cable' -48VDC 
system, the battery/rectifier positive is the grounded conductor and should be 
white or gray per NEC, with the negative ungrounded conductor being black, red, 
blue, or other approved NEC ungrounded conductor color (basically anything 
except an approved color for the grounded or grounding conductors) or using 
other site-specific and posted identifiers per the relevant NEC article(s).  
I'm citing the 2008 edition of the NEC here, even though the 2011 edition is 
out, simply because I don't have a 2011 edition handy, and I do have a 2008 
edition... and article numbers have been known to change between editions.

You can find the requirements for identifying conductors in NEC (2008) articles 
250.119 (grounding conductors), 200.6 (grounded conductors), 210.5(C) (branch 
circuit ungrounded conductors), and 215.12 (feeder ungrounded conductors).  
Examples of the colors are found in the Handbook version's exhibit 200.3, and 
accompanying commentary around that exhibit. (The handbook version of the NEC 
is worth the extra expense for the exhibits and commentary alone). 

Now, having said all that, I have seen common 'in the rack DC rectifiers with 
no battery' setups with black and red as negative and positive, respectively.  
And, as long as neither positive nor negative are grounded, that seems to meet 
NEC.  As soon as you ground one conductor, and get into NEC-covered territory, 
you need to use white or gray (or other 200.6 approved means with the 200.7 
exceptions allowed) for the grounded conductor, regardless of polarity.
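A memory-aid sketch of the conventions just described (this summarizes the paragraphs above; site rules and the AHJ govern, and this is no substitute for the NEC articles cited):

```python
# Color conventions for a grounded -48VDC telecom plant, as described
# above (NEC 2008 terminology).  A memory aid, not a code reference.
CONDUCTOR_COLORS = {
    "grounded (battery positive in a -48V plant)": ["white", "gray"],
    "ungrounded (battery negative, -48V)": ["black", "red", "blue",
                                            "brown", "orange", "yellow"],
    "grounding (equipment ground)": ["green", "green/yellow stripe", "bare"],
}

for role, colors in CONDUCTOR_COLORS.items():
    print(f"{role}: {', '.join(colors)}")
```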

Hope that helps, and doesn't overwhelm.



Re: DC wiring standards

2012-01-26 Thread Lamar Owen
On Thursday, January 26, 2012 11:29:03 AM Jay Ashworth wrote:
  'DC Power System Design for Telecommunications' by Whitham D. Reeve,
  published by Wiley, ISBN (print) 9780471681618, and is available in
  the Wiley online library.
 
 Disappointingly, that book does *not* appear to be in Safari, unless you've
 misremembered the title...

It wasn't in Safari when I last checked (last year, right before I canceled my 
subscription, since I didn't really use Safari like I once had).  I looked on 
the Wiley site for the ISBN and double-checked the title prior to the post.

This book is worth its price, even though it's steep.  Here's an Amazon link 
(wraps):
http://www.amazon.com/Power-System-Design-Telecommunications-Handbook/dp/047168161X/

$80.95 lowest new copy found.  I paid more than that for my copy in 2007.

What's interesting here is that this is the third book I've seen on Amazon 
where the used price is higher than the new; last week I ordered a new 
paperback copy of 'Pierce's Piano Atlas, 12th Edition' for 30-something 
dollars, but the used price was like a thousand dollars odd.  Not that high 
today, just $118 (versus $37.57 new)... But that's still higher than new (and 
way OT... sorry).




Re: Steve Jobs has died

2011-10-11 Thread Lamar Owen
On Tuesday, October 11, 2011 04:00:44 PM Douglas Otis wrote:
 On 10/6/11 7:26 PM, Paul Graydon wrote:
  On 10/6/2011 4:02 PM, Wayne E Bouchard wrote:
  In some circles, he's being compared to Thomas Edison. 

  It's probably not a bad analogy, like Ford and many other champions of 
  industry he didn't invent groundbreaking technology 

 Steve demonstrated any number of times, when excellent hardware + 
 software engineering + quality control is applied, even commodity 
 products are able to provide good returns.  In this view, the analogy 
 holds when price alone is not considered.

And, like Edison, Mr. Jobs fiercely championed his own technologies over all 
others; just one example is in the field of electricity where Edison's DC lost 
the war to Tesla's AC.  Time has yet to tell how well Mr. Jobs' walled-garden 
devices and OSes will fare in the end.  

Edison would have loved today's intellectual property wars and software patents 
and their attendant trolls.  And Edison would have been right at home with the 
concept of lock-in.

Brilliant man, Edison was, and he did do a great deal for humanity in general.  
But historical facts are historical facts.

Don't get me wrong; I have a great deal of respect for both men, even though I 
disagree with some of their ideologies and methods.  And the phonograph really 
was pure brilliance.



Re: Were A record domain names ever limited to 23 characters?

2011-10-07 Thread Lamar Owen
On Friday, September 30, 2011 05:54:38 PM steve pirk [egrep] wrote:
 I seem to recollect back the 1999 or 2000 times that I was unable to
 register a domain name that was 24 characters long. Shortly after that, I
 heard that the character limit had been increased to like 128 characters,
 and we were able to register the name.
 
 Can anyone offer some input, or is this a memory of a bad dream?
 ;-]

At least as of 2008, you could go 45 characters, not counting the TLD:
++
$ whois pneumonoultramicroscopicsilicovolcanoconiosis.com
[Querying whois.verisign-grs.com]
[Redirected to whois.above.com]
[Querying whois.above.com]
[whois.above.com]
The Registry database contains ONLY .COM, .NET, .EDU domains and
Registrars.Registration Service Provided By: ABOVE.COM, Pty. Ltd.
Contact: +613.95897946
  
Domain Name: PNEUMONOULTRAMICROSCOPICSILICOVOLCANOCONIOSIS.COM
  
Registrant:
Above.com Domain Privacy
8 East concourse
Beaumaris
VIC
3193
AU
pneumonoultramicroscopicsilicovolcanoconiosis@privacy.above.com
Tel. +61.395897946
Fax. 


Creation date: 2008-02-20
Expiration Date: 2012-02-20
...
$ ping www.pneumonoultramicroscopicsilicovolcanoconiosis.com
PING www.pneumonoultramicroscopicsilicovolcanoconiosis.com (69.43.161.151) 
56(84) bytes of data.
64 bytes from 69.43.161.151: icmp_req=1 ttl=50 time=85.9 ms
64 bytes from 69.43.161.151: icmp_req=3 ttl=50 time=85.7 ms
64 bytes from 69.43.161.151: icmp_req=4 ttl=50 time=85.8 ms
64 bytes from 69.43.161.151: icmp_req=5 ttl=50 time=85.6 ms
^C
--- www.pneumonoultramicroscopicsilicovolcanoconiosis.com ping statistics ---
5 packets transmitted, 4 received, 20% packet loss, time 4002ms
rtt min/avg/max/mdev = 85.618/85.815/85.984/0.132 ms
+
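The length limits at play here come from RFC 1035: each label (the text between dots) may be at most 63 octets, and a full name at most 255 octets in wire form (roughly 253 characters as written), so a 45-character second-level label fits comfortably. A minimal sketch of that check:

```python
# Rough check of the RFC 1035 length limits for an ASCII domain name:
# each label at most 63 octets, whole name at most ~253 characters as
# written (255 octets in wire form).

def dns_name_ok(name: str) -> bool:
    """True if the name fits within RFC 1035 length limits."""
    if len(name) > 253:
        return False
    return all(1 <= len(label) <= 63 for label in name.rstrip(".").split("."))

sld = "pneumonoultramicroscopicsilicovolcanoconiosis"
print(len(sld))                        # -> 45
print(dns_name_ok(sld + ".com"))       # -> True
print(dns_name_ok("a" * 64 + ".com"))  # -> False: label exceeds 63 octets
```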

FWIW.



Re: East Coast Earthquake 8-23-2011

2011-08-23 Thread Lamar Owen
On Tuesday, August 23, 2011 06:13:02 PM William Herrin wrote:
 B. The crust on the east coast is much more solid than on the west
 coast, so the seismic waves propagate much further. Los Angeles
 doesn't feel an earthquake north of San Francisco unless it's huge.
 New York City felt this earthquake near Richmond VA. So yes, we're
 seeing relatively minor damage... but we're seeing it over a much
 wider area than someone in California would.

We felt it, and it overloaded our seismometer, too.  The link to the trace is 
at:
http://www.pari.edu/about_pari/pari-photos/archived-photos/miscellaneous/august-23-2011-richmond-earthquake/ch1-virginia-quake-20110823-1.jpg/view

Live data is at:
http://www.pari.edu/telescopes/geoscience/seismic-readings/readings/

Film at 11 (and 10; local TV station came by and interviewed).

We're 300+ miles away from the epicenter.

At the time I wondered if anything near the IX's in that area might be 
impacted, almost posted about it, and figured that anyone who actually knew 
would be too busy to talk about it



Re: Japan electrical power?

2011-05-11 Thread Lamar Owen
On Wednesday, May 11, 2011 10:08:00 AM Robert Boyle wrote:
 I know voltage varies 
 from town to town and prefecture to prefecture. It seems most is 
 90V-110V. 

Also, part of the country is 50Hz and part is 60Hz. 



Re: Cent OS migration

2011-05-09 Thread Lamar Owen
On Monday, May 09, 2011 04:45:36 PM Kevin Oberman wrote:
 Depends on what he is doing. BSDs tend to be far more mature than any
 Linux. They are poor systems for desktops or anything like that. They
 are heavily used as servers by many very large providers and as the
 basis for many products like Ironport (Cisco) and JunOS (Juniper). 

Cisco had an RHEL rebuild (internal) at one time, called, refreshingly enough, 
Cisco Enterprise Linux.  Cisco also uses/used a Linux base for their Content 
Engines and subsequent ACNS-running boxen.

The rather high-priced ADVA-sourced Cisco  Metro 1500 DWDM boxes used a 486 ISA 
single-board computer running off of DiskOnChip SSD for control and SNMP.

Having said that, I'd be just about as comfortable with a BSD as with a Linux.

And I do, and will continue to, run CentOS in production. 



Re: [c-nsp] 7600 SXF and IPv6

2011-05-04 Thread Lamar Owen
On Tuesday, May 03, 2011 01:54:00 PM Jay Ford wrote:
 On Tue, 3 May 2011, vince anton wrote:
  Anyone has experiences to share or known v6 issues with SXF (or v4 issues
  with v6 enabled for that matter), or should I be looking at SRC/SRD/SRE for
  7600 ?

 I have 9 6500+SUP720-3BXL boxes with a mix of CFCs & DFCs running 
 ADVIPSERVICESK9_WAN-M 12.2(33)SXI1, 12.2(33)SXI4a, & 12.2(33)SXI5 

While the hardware capabilities are essentially the same, he is on a 7600, not 
a 6500.  With SXF it didn't matter; with later releases it does, and he can't 
use SXI, but must derail that train to pick up the SRx train, which has 
different capabilities and issues.

To Anton, the OP: you need to look at the SXF and its rebuilds' release notes 
and see what they say; there are a lot of things to look at, and going SRx is 
going to change things for you.  Since I'm on Sup2 with SXF I can't go SRx with 
our 7600's anyway (the chassis won't take a Sup720; these are the 
OSR7609-branded (and EPROMed) clones of the 6509NEBS chassis that need the 
special high-speed fan upgrade that is so hard to find these days).

And be sure to peruse the NANOG mailing list archives and read about the 
6500-7600 split, especially the comments from Gert.



Re: How do you put a TV station on the Mbone? (was: Royal Wedding...)

2011-04-30 Thread Lamar Owen
On Friday, April 29, 2011 03:37:04 PM Jay Ashworth wrote:
 You've conflated my two points.  That would tell the *carriers* who's watching
 what, but they probably don't care.  I was talking about *the providers* 
 knowing (think DRM and 3096 viewers online).

And then if there's music, the SoundExchange rules... to be 'legal' you have 
to count 'performances' and file forms with information on performances, and 
pull out the information on the work performed... preferably with ISRC 
information.



Re: How do you put a TV station on the Mbone? (was: Royal Wedding...)

2011-04-30 Thread Lamar Owen
On Friday, April 29, 2011 05:16:51 PM George Bonser wrote:
 But if broadcast events over the internet are treated the same as 
 broadcast events over RF,  who cares?

They're not; that's the problem.  For the US, at least, the Copyright Office of 
the Library of Congress has statutory authority in this; for digital 
performances there is one and only one avenue, and that's through SoundExchange.



Re: quietly....

2011-02-18 Thread Lamar Owen
On Tuesday, February 15, 2011 11:57:46 pm Jay Ashworth wrote:
  From: Michael Dillon wavetos...@googlemail.com

  This sounds a lot like bellhead speak.

 As a long time fan of David Isen, I almost fell off my chair laughing at 
 that, Michael: Bell *wanted* things -- specifically the network -- smart
 and complicated; Isen's POV, which got him... well, I don't know if 
 laughed out of ATT is the right way to phrase it, but it's close enough, 
 was:
 
 Stupid network; smart endpoints.

The bellhead PoV isn't wrong; it's just different.  Stupid endpoints tend to be 
more usable when such usage matters, such as emergencies (power outages, need 
to call 911, etc).

The problem is we're in neither of the two worlds at the moment; we're in 
between, with complex/smart networks (QoS, etc) and smart/complex endpoints.  
Which, IMO, is the worst of both worlds.

Stupid network and smart endpoint: a smart endpoint user or said user's tech 
person has a chance to fully troubleshoot and correct issues;
Smart network and stupid endpoint: net op tech has a chance to fully 
troubleshoot and correct issues;
Smart network and smart endpoint: nobody can fully troubleshoot anything, and 
much fingerpointing and hilarity ensues.



Re: ISDN BRI

2011-02-17 Thread Lamar Owen
On Thursday, February 17, 2011 10:30:12 am Jay Ashworth wrote:
 Off hand, I wouldn't expect a carrier to do any special engineering on
 a BRI -- can you even *order* a BRI these days?  :-)

Seems to still be in NECA Tariff 5, at least the last copy I looked at.  So the 
rurals still are tariffed for it.



Re: IPv6 mistakes, was: Re: Looking for an IPv6 naysayer...

2011-02-12 Thread Lamar Owen
On Friday, February 11, 2011 05:33:37 pm valdis.kletni...@vt.edu wrote:
 So riddle me this - what CPE stuff were they giving out in 2009 that was
 already v6-able? (and actually *tested* as being v6-able, rather than It's
 supposed to work but since we don't do v6 on the live net, nobody's ever
 actually *tried* it...)

Well, while no one that I know 'gave out' Linksys WRT54G's capable of running 
OpenWRT or similar (Sveasoft firmware, too), a WRT54G of the right (read: old 
enough) version can run the IPv6 modules (ipkg's) for OpenWRT, and there was at 
least one version of the Sveasoft WRT firmware that could do IPv6.

While I have a few WRT54G's lying around, I've never tried IPv6 on them, and 
would find it interesting if anyone has.

Owen, in particular, should know, because one of the HOWTO's I found was posted 
on an HE forum... back in April of 2009.

I found a few other HOWTO's, some in 2006, some in 2005, detailing IPv6 setup 
for the WRT54G running either Sveasoft or OpenWRT (one was for DD-WRT, and 
another for Tomato).

Yeah, only the tech-savvy customers will be able to use this, unless the ISP 
sets up a 'Golden' CPE firmware image and recycles all those WRT54G's into 
useful things and then, of course, the DSL/Cable gateway needs to be in 
bridge mode.

I'm sure there are other Linux-based firmwares for other CPE that can run Linux 
and IPv6; they just need enough flash and RAM to do it.  vxWorks boxen, not so 
sure.  And then there's all the Zoom stuff out there.

My own Netgear DG834G can, too, with some interesting tinkering involved.

So the firmware is out there to do this, it just requires flashing and 
configuring.



Re: Failure modes: NAT vs SPI

2011-02-10 Thread Lamar Owen
On Monday, February 07, 2011 04:33:23 am Owen DeLong wrote:
 1. Scanning even an entire /64 at 1,000 pps will take 18,446,744,073,709,551 
    seconds, which is 213,503,982,334 days or 584,542,000 years.
 
    I would posit that since most networks cannot absorb a 1,000 pps attack 
    even without the deleterious effect of incomplete ND on the router, no 
    network has yet had even a complete /64 scanned. IPv6 simply hasn't been 
    around that long.

Sounds like a job for a 600 million node botnet.  You don't think this has 
already crossed botnet ops' minds?
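As a sanity check on the figures quoted above (my own back-of-the-envelope script, not something from the thread), here's the /64 arithmetic, plus the hypothetical 600-million-node botnet case:

```python
# Check the /64 scan-time figures: 2^64 addresses probed at 1,000 pps.
ADDRESSES = 2 ** 64          # hosts in a single /64
PPS = 1_000                  # probes per second, per scanner

seconds_single = ADDRESSES / PPS
days_single = seconds_single / 86_400
years_single = days_single / 365.25

# One scanner: ~18.4 quadrillion seconds, ~213 billion days, ~584M years,
# matching the numbers in the post above.
print(f"{seconds_single:,.0f} s = {days_single:,.0f} days = {years_single:,.0f} years")

# Hypothetical: 600 million bots, each at 1,000 pps, splitting the /64 evenly.
BOTS = 600_000_000
seconds_botnet = ADDRESSES / (PPS * BOTS)
print(f"botnet: {seconds_botnet:,.0f} s (~{seconds_botnet / 86_400:.0f} days)")
```

Even the enormous hypothetical botnet needs roughly a year per /64 at those rates, so the scale argument holds up either way.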



Re: Weekend Gedankenexperiment - The Kill Switch

2011-02-07 Thread Lamar Owen
On Saturday, February 05, 2011 11:29:44 pm Fred Baker wrote:
 To survive an EMP, electronics needs some fancy circuitry. I've never worked 
 with a bit of equipment that had it. It would therefore have to have been 
 through path redundancy.

Surviving EMP is similar to surviving several (dozen) direct lightning strikes, 
and requires the same sort of protection, both in terms of shielding and in 
terms of filtering, as well as the methods used for connections, etc.  There is 
plenty of documentation out there on how to do this, even with commercial 
stuff, if you look.

The biggest issue in EMP is power, however, since the grid in the affected area 
will likely be down.



Re: quietly....

2011-02-04 Thread Lamar Owen
On Friday, February 04, 2011 09:05:09 am Derek J. Balling wrote:
 I think they'll eventually notice a difference. How will an IPv4-only 
 internal host know what to do with an IPv6 AAAA record it gets from a DNS 
 lookup?

If the CPE is doing DNS proxy (most do) then it can map the AAAA record to an A 
record it passes to the internal client, with an internal address for the 
record chosen from RFC1918 space, and perform IPv4-IPv6 1:1 NAT from the 
assigned RFC1918 address to the external IPv6 address from the AAAA record 
(since you have at least a /64 at your CPE, you can even use the RFC1918 
address in the lower 32 bits :-P).  

This may already be a standard, or a draft, or implemented somewhere; I don't 
know.  But that is how I would do it, just thinking off the top of my head.
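That off-the-top-of-the-head scheme can be sketched in a few lines. Everything here (the class name, the 10.64.0.0/16 pool) is my own illustration of the idea, not a deployed standard or product:

```python
import ipaddress

class V4MappingProxy:
    """Toy sketch of the CPE idea above: hand an IPv4-only client an
    RFC1918 A record in place of each AAAA answer, remembering the
    mapping so a 1:1 NAT46 translator could rewrite the traffic."""

    def __init__(self, pool="10.64.0.0/16"):
        self._free = ipaddress.ip_network(pool).hosts()  # RFC1918 allocator
        self._v4_to_v6 = {}

    def map_aaaa(self, v6_addr: str) -> str:
        v6 = ipaddress.IPv6Address(v6_addr)
        v4 = next(self._free)        # allocate the next free RFC1918 address
        self._v4_to_v6[v4] = v6      # NAT46 state: fake v4 -> real v6 dest
        return str(v4)               # this is what goes in the synthesized A

    def lookup(self, v4_addr: str) -> str:
        return str(self._v4_to_v6[ipaddress.IPv4Address(v4_addr)])

proxy = V4MappingProxy()
fake_a = proxy.map_aaaa("2001:db8::1")   # what the client's A query gets back
print(fake_a, "->", proxy.lookup(fake_a))
```

The real work would be in the data-plane translator keeping that table in sync, but the control-plane bookkeeping really is this simple.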



Re: Using IPv6 with prefixes shorter than a /64 on a LAN

2011-02-03 Thread Lamar Owen
On Thursday, February 03, 2011 10:39:28 am TJ wrote:
 Correct me if I am wrong, but won't Classified networks will get their
 addresses IAW the DoD IPv6 Addressing Plan (using globals)?

'Classified' networks are not all governmental.  HIPAA requirements can be met 
with SCIFs, and those need 'classified' networks.

Here, we have some control networks that one could consider 'classified' in the 
access control sense of the word, that is, even if a host is allowed access it 
must have a proven need to access, and such access needs supervision by another 
host.  

This type of network is used here for our large antenna controls, which need to 
be network accessible on-campus but such access must have two points of 
supervision (one of which is an actual person), with accessing hosts not 
allowed to access other networks while accessing the antenna controller.  This 
has been an interesting network design problem, and turns traditional 
'stateful' firewalling on its ear, as the need is to block access when certain 
connections are open, and permit access otherwise.  It's made some easier since 
wireless access is not an option (interferes with the research done with the 
antennas), and wireless AP's and cell cards are actively hunted down, as well 
as passively hindered with shielding in the areas which have network access to 
the antenna controllers.

It's a simple matter of protecting assets that would cost millions to replace 
if the controllers were given errant commands, or if the access to those 
controllers were to be hacked.



Re: quietly....

2011-02-03 Thread Lamar Owen
On Thursday, February 03, 2011 01:35:46 pm Jack Bates wrote:
 I understand and agree that CPEs should not use NAT66. It should even be 
 a MUST NOT in the cpe router draft. 

Do you really think that this will stop the software developers of some CPE 
routers' OS code from just making it work?  Do you really think the standard 
saying 'thou shalt not NAT' will produce the desired effect of preventing such 
devices from actually getting built?




Re: quietly....

2011-02-03 Thread Lamar Owen
On Thursday, February 03, 2011 02:28:32 pm valdis.kletni...@vt.edu wrote:
 The only reason FTP works through a NAT is because the NAT has already
 been hacked up to further mangle the data stream to make up for the
 mangling it does.

FTP is in essence a peer-to-peer protocol, as both ends initiate TCP streams. 
 I know that's nitpicking, but it is true.
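The "hacked up" NAT behavior Valdis describes is an FTP ALG rewriting the PORT command, whose payload carries the client's inside address in text form. A rough illustration of the rewrite (the addresses are invented, and a real ALG must also fix up TCP sequence numbers when the line length changes):

```python
def rewrite_port_cmd(line: str, outside_ip: str, outside_port: int) -> str:
    """Sketch of what an FTP ALG on a NAT does: the PORT command carries
    the client's *inside* IP and port in its payload (h1,h2,h3,h4,p1,p2,
    per RFC 959), so the NAT must rewrite the data stream itself, not
    just the IP/TCP headers."""
    if not line.upper().startswith("PORT "):
        return line                          # pass other commands untouched
    octets = outside_ip.split(".")
    p1, p2 = divmod(outside_port, 256)       # port encoded as two bytes
    return "PORT {},{},{},{},{},{}\r\n".format(*octets, p1, p2)

# Client behind NAT announces 192.168.1.10:1234 (4*256+210 = 1234);
# the ALG substitutes the NAT's public address and an allocated port.
inside = "PORT 192,168,1,10,4,210\r\n"
print(rewrite_port_cmd(inside, "203.0.113.5", 50000))
```

This is exactly the in-band addressing that makes FTP (and SIP, and friends) need protocol-aware help to cross a NAT at all.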

 I'm told that IPSEC through a NAT can be interesting too...  And that's
 something I'm also told some corporations are interested in.

IPsec NAT Traversal over UDP port 4500 works OK, but it does require 
port-forwarding (either manual, automatic-in-the-router, or UPnP) to work.  
There are a number of HOWTO's out there to make it work, and we've been doing 
it between the native Windows L2TP VPN client (PPTP is insecure; L2TP as 
implemented by Microsoft is a three layer melange of PPP on top, with L2TP 
carrying that, encapsulated in IPsec between two endpoints) and SmoothWall's 
SmoothTunnel for several years.  It does work, and it's not as hard as it could 
be.

But it's not as easy as it should be, at least on the network plumbing side of 
things. 

However, that's not typically the hardest part of setting up a Microsoft-style 
PPPoL2TPoIPsec VPN, though, especially if you use certificates instead of 
preshared keys.  



Re: quietly....

2011-02-03 Thread Lamar Owen
On Thursday, February 03, 2011 03:59:56 pm Matthew Palmer wrote:
 On Thu, Feb 03, 2011 at 03:20:25PM -0500, Lamar Owen wrote:
  FTP is a in essence a peer-to-peer protocol, as both ends initiate TCP
  streams.  I know that's nitpicking, but it is true.

 So is SMTP, by the same token.  Aptly demonstrating why the term P2P is so
 mind-alteringly stupid.

Yeah, SMTP between servers is peer-to-peer, since both ends can transmit and 
both ends can receive, using the same protocol, but in different sessions, 
unlike FTP, where one session needs two streams, and one originates at the file 
storage end.  But it's also used as a client-server protocol between an SMTP 
sender and an SMTP receiver, which we commonly call the SMTP server.  If it 
were peer-to-peer at that connection there would be no POP3 or IMAP stacks 
needed to go get the mail, rather, every workstation would receive its mail 
directly through SMTP.  The peer-to-peer nature of SMTP is broken not by NAT, 
but by dynamically addressed and often disconnected clients, whether their IP 
addresses are globally routable or not.  Sometimes it would be better to get a 
five day bounce than for the mail to be delivered to the smarthost but the 
client never picks it up.  There's a reason POP is the "Post Office Protocol", 
as the addresses are then essentially PO Boxes.

But, with my apologies to Moe, Larry, and Curly:
NATagara Falls!  Slowly I turned, step by step, inch by inch...  (With a 
subject of 'quietly' I've been wanting to quote that all thread.)  Some are 
that knee-jerk whenever the Three Letter Acronym That Must Not Be Mentioned is 
writ large...



Re: quietly....

2011-02-03 Thread Lamar Owen
On Thursday, February 03, 2011 05:30:15 pm Jay Ashworth wrote:
 C'mon; this isn't *your* first rodeo, either.  From the viewpoint of 
 The Internet, *my edge router* is The Node 

Isn't that where this thing all started, with ARPAnet 'routers' on those leased 
lines?

End-to-end is in reality, these days, AS-to-AS.  Beyond that, each AS can do 
whatever it wants with those packets; if it wants to insert the full text of 
the Niagara Falls skit (with copyright owner's permission) into every packet, 
it can do that, and no other AS can make it do differently.

Sure, it would be nice in ways to have full end-to-end at the individual host 
level: everybody has static addresses, domain names are free, and address 
space at the /64 level is portable to kingdom come and back without routing 
table bloat...

NAT in IPv4 came about because people were doing it, and the standards were 
after the fact.  Deja Vu, all over again.

Make it easy to do what people want to do, but without NAT, perhaps 
overloading, port-translating NAT66 won't get traction.



Re: quietly....

2011-02-03 Thread Lamar Owen
On Thursday, February 03, 2011 05:47:44 pm valdis.kletni...@vt.edu wrote:
 ETRN (RFC1985) FTW.

POP (RFC 918) and the current version, POP3 (RFC 1081), both predate the ETRN 
RFC, by 12 and 8 years respectively.  By 1996, POP3 was so thoroughly 
entrenched that ETRN really didn't have a chance to replace POP3 in normal use; 
of course, there was the point you mention below, too, that makes it less than 
useful for most e-mail tasks.  The ETRN portion, however, introduces the idea 
of a distinct server and a distinct client that the server holds state for.

 (Of course, the operational problem with ETRN is that it in fact *does*
 implement every workstation gets its mail directly through SMTP, when the
  actual need is every *mail recipient*.)

That has its advantages for certain uses.  And its distinct disadvantages, as 
you correctly note, for most 'normal' uses.




Re: quietly....

2011-02-02 Thread Lamar Owen
On Wednesday, February 02, 2011 10:52:46 am Iljitsch van Beijnum wrote:
 No, the point is that DNS resolvers in different places all use the same 
 addresses. So at the cyber cafe 3003::3003 is the cyber cafe DNS but at the 
 airport 3003::3003 is the airport DNS. (Or in both cases, if they don't run a 
 DNS server, one operated by their ISP.)

Hrmph.  Shades of 47.0079.00A03E01.00 and all that that 
implies, here.  Well, different syntactic sugar, but, anyway...

Haven't we withdrawn from this ATM before?



Re: quietly....

2011-02-02 Thread Lamar Owen
On Wednesday, February 02, 2011 10:23:28 am Iljitsch van Beijnum wrote:
 Who ever puts NTP addresses in DHCP? That doesn't make any sense. I'd rather 
 use a known NTP server that keeps correct time.

We do, for multiple reasons.  And we have some stringent timing requirements.
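For reference, handing out NTP servers via DHCP is just option 42; in ISC dhcpd it looks something like the following (the addresses are documentation examples, not our actual config):

```conf
subnet 192.0.2.0 netmask 255.255.255.0 {
  range 192.0.2.100 192.0.2.200;
  option ntp-servers 192.0.2.10, 192.0.2.11;   # DHCP option 42
}
```

Clients that honor the option then sync to the servers the operator chose, which is exactly the point when you have stringent timing requirements and want every host on the same local reference.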


