nanog list spam filtering

2014-04-24 Thread Michael DeMan
Hi All,

Sorry for being a bit off-topic and having a boring subject, but we really 
should clean up whatever has been going on with so much spam hitting this 
mailing list.


NO - I am not complaining about people who post things I disagree with or on 
topics I have little interest in. I am tired of stuff like...
subject: Top rated...wireless security cameras--for both indoor & outdoor
subject: Save on: Hawaii, vacation packages
(classic) subject: Compare: the best, email marketing services
subject: Cards reward plans\rates compared 
(MC-Capital1-Venture-ChaseFreedom-Citicard-BankAmericard-Discover-Visa)
...and what not.


Has it perhaps always been the policy that this mailing list does no spam 
filtering whatsoever?  If so, that would be understandable.

Meanwhile, I think it would be better for the community if at least some of 
the most egregious and obvious stuff did not have CPU and network cycles spent 
forwarding it onwards.
As always, I can imagine that although I think this is simple, it could turn 
into a politically charged debate.

Cheers and please fix if possible,
- Michael DeMan




Re: nanog list spam filtering

2014-04-24 Thread Michael DeMan
I take this back.
The spam I received was not from anybody sending to/from nanog@nanog.org, but 
rather came directly to my subscribed e-mail address.
- Mike


On Apr 24, 2014, at 1:29 AM, Michael DeMan na...@deman.com wrote:

 Hi All,
 
 Sorry for being a bit off-topic and having a boring subject, but we really 
 should clean up whatever has been going on with so much spam hitting this 
 mailing list.
 
 
 NO - I am not complaining about people who post things I disagree with or on 
 topics I have little interest in. I am tired of stuff like...
 subject: Top rated...wireless security cameras--for both indoor & outdoor
 subject: Save on: Hawaii, vacation packages
 (classic) subject: Compare: the best, email marketing services
 subject: Cards reward plans\rates compared 
 (MC-Capital1-Venture-ChaseFreedom-Citicard-BankAmericard-Discover-Visa)
 ...and what not.
 
 
 Has it always been the policy perhaps that this mailing list does no spam 
 filtering whatsoever?  If so that would be understandable.
 
 Meanwhile, I think it would be better for the community if at least some of 
 the most egregious and obvious stuff did not have CPU and network cycles spent 
 forwarding it onwards.
 As always, I could imagine that although I think this is simple it could be a 
 politically charged debate.
 
 Cheers and please fix if possible,
 - Michael DeMan
 




Re: Need trusted NTP Sources

2014-02-06 Thread Michael DeMan
Hi Alexander,

I think you or your consultant may have an overly strict reading of the PCI 
documents.
Looking at section 10.4 of PCI DSS 3.0, and from having gone through PCI a few 
times...
If you have your PCI hosts directly going against ntp.org or similar, then you 
are not in compliance.

My understanding is that you need to:

A) Run a local set of NTP servers - these are your 'trusted' servers, under 
your control, properly managed/secured, fully meshed, etc.
These in turn (section 10.4.3) can get their time from 'industry-accepted time 
sources'.

B) The rest of your PCI infrastructure in turn uses these NTP servers and only 
these NTP servers.
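
As an illustration of that split (an addition here, not text from the PCI 
documents), a minimal ntp.conf sketch; hostnames and the upstream sources are 
placeholders and would be tuned to your environment:

    # ntp.conf on the internal 'trusted' NTP servers (e.g. ntp1.example.net)
    server 0.pool.ntp.org iburst      # industry-accepted upstream time sources
    server 1.pool.ntp.org iburst
    peer ntp2.example.net             # mesh with the other internal server(s)
    restrict default kod nomodify notrap nopeer noquery
    restrict 127.0.0.1

    # ntp.conf on every other PCI host: the internal servers, and only them
    server ntp1.example.net iburst
    server ntp2.example.net iburst
    restrict default kod nomodify notrap nopeer noquery
    restrict 127.0.0.1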

- Michael DeMan

On Feb 6, 2014, at 2:27 AM, Alexander Maassen outsi...@scarynet.org wrote:

 www.pool.ntp.org
 
  Original message 
 From: Notify Me notify.s...@gmail.com 
 Date:  
 To: nanog@nanog.org list nanog@nanog.org,af...@afnog.org 
 Subject: Need trusted NTP Sources 
 
 Hi !
 
 I'm trying to help a company I work for to pass an audit, and we've
 been told we need trusted NTP sources (RedHat doesn't cut it). Being
 located in Nigeria, Africa, I'm not very knowledgeable about trusted
 sources therein.
 
 Please can anyone help with sources that wouldn't mind letting us sync
 from them?
 
 Thanks a lot!
 



Re: BCP38.info, RELATING: TWC (AS11351) blocking all NTP?

2014-02-03 Thread Michael DeMan
Hi,

I think I might have already deleted the earlier messages on this subject from 
a few days ago, in re: BCP38.

What exactly are you trying to do?

I agree that my general comment - that the recent NTP weaknesses should be 
addressed via an IPv6 RFC - may have been misunderstood.
I meant mostly that with IPv6 NAT goes away, all devices are exposed, and we 
also have the 'internet of things' - much more subject to potential abuse.
An NTPv5 solution could be done with NTP services already, and would be more a 
matter of 'best practices for how this shit starts up and what it can do' and 
educating vendors to have reasonable behavior in the first place. A similar 
NTPv6 solution/RFC/guideline could help as well.
Neither will 'solve the problem' - but I think the idea of managing what 
somebody can do by having the provider filter in/out on IPv4 and/or mobile 
IPv4, much less IPv6, is very unorthodox and much against the spirit of having 
global m:n communications be helpful for humanity.


My apologies if I have misunderstood your last few e-mails.

I think that 'filtering' or 'blocking' any kind of IPv4 or IPv6 traffic to 
'protect the end user' is the wrong way to go when compared to just having 
things work in a secure manner.

- Mike

On Feb 3, 2014, at 12:07 AM, Dobbins, Roland rdobb...@arbor.net wrote:

 
 On Feb 3, 2014, at 2:55 PM, Dobbins, Roland rdobb...@arbor.net wrote:
 
 It would be useful to know whether there are in fact NATs, or are 'DNS 
 forwarders' . . .
 
 Another question is whether or not it's possible that in at least some cases, 
 MITMing boxes on intermediary networks are grabbing these queries and then 
 spoofing the scanner source IP as they redirect the queries . . . . if this 
 is taking place, then it would be the network(s) with the MITMing box(es) 
 which allow spoofing, irrespective of whether or not the intended destination 
 networks do, yes?
 
 ---
 Roland Dobbins rdobb...@arbor.net // http://www.arbornetworks.com
 
 Luck is the residue of opportunity and design.
 
  -- John Milton
 
 




Re: TWC (AS11351) blocking all NTP?

2014-02-02 Thread Michael DeMan
The recently publicized mechanism to leverage NTP servers for amplified DoS 
attacks is seriously effective.
I had a friend whose local ISP was affected by this on Thursday, and also saw 
another case where just two Asterisk servers saturated a 100 Mbps link to the 
point of being unusable.
Once more - this exploit is seriously effective at consuming bandwidth by 
reflection.

From a provider's point of view, given the choice between contacting the 
end-users vs. mitigating the problem: if I were in TW's position and unable to 
immediately contact the numerous downstream customers affected by this, I 
would take the option to block NTP on a case-by-case basis (perhaps even 
taking a broad brush) rather than allow it to continue and cause disruptions 
elsewhere.
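
(Added note, not part of the original post: the amplification vector here was 
ntpd's 'monlist' query; a commonly cited ntp.conf hardening sketch along these 
lines shuts that off while still serving time - illustrative only, not a 
complete configuration.)

    # Serve time, but refuse mode 6/7 status queries (monlist etc.) from the world
    restrict default kod nomodify notrap nopeer noquery
    restrict -6 default kod nomodify notrap nopeer noquery
    restrict 127.0.0.1
    restrict -6 ::1
    disable monitor        # belt and braces: turn the monlist facility off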


- Mike

On Feb 2, 2014, at 12:44 PM, John Levine jo...@iecc.com wrote:

 In article 20140202163313.gf24...@hijacked.us you write:
 The provider has kindly acknowledged that there is an issue, and are
 working on a resolution.  Heads up, it may be more than just my region.
 
 I'm a Time-Warner cable customer in the Syracuse region, and both of
 the NTP servers on my home LAN are happily syncing with outside peers.
 
 My real servers are hosted in Ithaca, with T-W being one of the
 upstreams and they're also OK.  They were recruited into an NTP DDoS
 last month (while I was at a meeting working on anti-DDoS best
 practice, which was a little embarrassing) but they're upgraded and 
 locked down now.
 
 R's,
 John
 
 
 




Re: TWC (AS11351) blocking all NTP?

2014-02-02 Thread Michael DeMan

On Feb 2, 2014, at 10:02 PM, Dobbins, Roland rdobb...@arbor.net wrote:

 
 On Feb 3, 2014, at 12:45 PM, Michael DeMan na...@deman.com wrote:
 
 From a provider's point of view, given the choice between contacting the 
 end-users vs. mitigating the problem: if I were in TW's position and unable 
 to immediately contact the numerous downstream customers affected by this, I 
 would take the option to block NTP on a case-by-case basis (perhaps even 
 taking a broad brush) rather than allow it to continue and cause disruptions 
 elsewhere.
 
 Per my previous post in this thread, there are ways to do this without 
 blocking client access to ntp servers; in point of fact, unless the ISP in 
 question isn't performing antispoofing at their customer aggregation edge, 
 blocking client access to ntp servers does nothing to address (pardon the 
 pun) the issue of ntp reflection/amplification DDoS attacks.
Agreed, and I was not trying to get into an argument about whether 'blocking' 
is appropriate or not.  I was simply suggesting that a provider, if they found 
themselves in a position where this was causing lots of trouble and impacting 
things in a large way, might have taken a 'broad brush' approach to stabilize 
things while they worked on a more proper solution.

 
 All that broadband access operators need to do is to a) enforce antispoofing 
 as close to their customers as possible, and b) enforce their AUPs (most 
 broadband operators prohibit operating servers) by blocking *inbound* UDP/123 
 traffic towards their customers at the customer aggregation edge (same for 
 DNS, chargen, and SNMP).
I certainly would not want, as part of the AUP (as seller or buyer), a policy 
under which fundamentals like NTP are 'blocked' to customers.  Seems like too 
much of a slippery slope for my taste.

In regard to anti-spoofing measures - I think there are a couple of vectors in 
the latest NTP attack where more rigorous client-side anti-spoofing could 
help, but it will not solve the problem overall.  Trying to be fair and 
practical (from my perspective) - it is a lot easier and quicker to patch/work 
around IPv4 problems and address proper solutions via IPv6 and the associated 
RFCs.
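
(An aside added here, not in the original message: 'anti-spoofing at the 
customer edge' usually means BCP38-style filtering or strict uRPF; a sketch in 
IOS-style syntax, with a placeholder interface name.)

    ! Strict uRPF: drop packets whose source address is not reachable back
    ! through the interface they arrived on, i.e. spoofed traffic.
    interface GigabitEthernet0/1
     description customer aggregation port
     ip verify unicast source reachable-via rx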

- Michael DeMan



 
 ---
 Roland Dobbins rdobb...@arbor.net // http://www.arbornetworks.com
 
 Luck is the residue of opportunity and design.
 
  -- John Milton
 
 




Re: tools and techniques to pinpoint and respond to loss on a path

2013-07-16 Thread Michael DeMan
What I have done in the past - and this presumes you have a /29 or bigger on 
the peering session to your upstreams - is to check with the direct upstream 
provider at each connection and get approval to put a Linux diagnostics box on 
the peering side of each BGP upstream connection you have, default-routed out 
to their BGP router(s).  This is typically not a problem with the upstream as 
long as they know it is for diagnostic purposes and will be taken down later.  
It also helps the upstreams know you are seriously watching the reliability 
they, and their competitors, are giving you.

On that diagnostics box, run some quick & dirty tools to start isolating 
whether the problem is related to one upstream link or another, or a 
combination of them.  Have each box monitor all the distant peer connections, 
and possibly even the other local peers, if you are uber-detailed.  The 
problem could be anywhere in between, but if you notice that one link has the 
issues and the other does not, and/or a particular combination of src/dst, 
then you are in better shape to help your upstreams diagnose as well.  Tools 
like smokeping, plus traceroute and ping run on a scripted basis, are not 
perfect but are easy to set up.  Log it all so that when the problem impacts 
production systems you can go back through the logs for clues.  nettop is 
another handy tool for dumping stuff out, and is very handy in the nearly 
impossible case that you happen to be on the console when the problem occurs.
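
(A rough sketch of the 'scripted ping/traceroute' idea, added here rather than 
taken from the original post; targets, log path, and interval are 
placeholders.)

    #!/usr/bin/env python3
    """Quick & dirty path monitor: ping and traceroute each target on a
    schedule and append timestamped output to a log for later correlation."""
    import datetime
    import subprocess
    import time

    TARGETS = ["192.0.2.10", "198.51.100.20"]   # far-end diagnostics boxes / peers
    LOGFILE = "/var/log/path-monitor.log"
    INTERVAL = 60                                # seconds between sweeps

    def run(cmd):
        """Run a command and return its combined stdout/stderr as text."""
        try:
            out = subprocess.run(cmd, stdout=subprocess.PIPE,
                                 stderr=subprocess.STDOUT, text=True,
                                 timeout=120)
            return out.stdout
        except subprocess.TimeoutExpired:
            return "TIMEOUT: %s\n" % " ".join(cmd)

    while True:
        stamp = datetime.datetime.now().isoformat()
        with open(LOGFILE, "a") as log:
            for target in TARGETS:
                log.write("=== %s %s ===\n" % (stamp, target))
                log.write(run(["ping", "-c", "10", target]))    # loss check
                log.write(run(["traceroute", "-n", target]))    # path check
        time.sleep(INTERVAL)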

From there, let that run for a while - hours, days, or weeks depending on the 
frequency of the problem - and typically you will find that the 'hiccup' 
happens either via one peering partner or all of them, and/or from one end or 
the other.  More than likely something will fall out of the data as to where 
the problem is, and often it is not with your direct peers but with their 
peers or somebody else further down the chain.

This kind of stuff is notoriously difficult to troubleshoot, and I generally 
agree with the opinion that, for better or worse, global IP connectivity is 
still just 'best effort' without spending immense amounts of money.
I remember a few years ago having blips and near one-hour outages from NW 
Washington State over to Europe; the problem was that Global Crossing was 
doing a bunch of maintenance and it was not going well for them.  They were 
the 'man in the middle' for the routing from two different peers, and just 
knowing where the problem was was a big help - with some creative BGP 
announcements we were able to minimize the impact.

- mike


On Jul 15, 2013, at 2:18 PM, Andy Litzinger andy.litzin...@theplatform.com 
wrote:

 Hi,
 
 Does anyone have any recommendations on how to pinpoint and react to packet 
 loss across the internet?  preferably in an automated fashion.  For detection 
 I'm currently looking at trying smoketrace to run from inside my network, but 
 I'd love to be able to run traceroutes from my edge routers triggered during 
 periods of loss.  I have Juniper MX80s on one end- which I'm hopeful I'll be 
 able to cobble together some combo of RPM and event scripting to kick off a 
 traceroute.  We have Cisco4900Ms on the other end and maybe the same thing is 
 possible but I'm not so sure.
 
 I'd love to hear other suggestions and experience for detection and also for 
 options on what I might be able to do when loss is detected on a path.
 
 In my specific situation I control equipment on both ends of the path that I 
 care about with details below.
 
 we are a hosted service company and we currently have two data centers, DC A 
 and DC B.  DC A uses juniper MX routers, advertises our own IP space and 
 takes full BGP feeds from two providers, ISPs A1 and A2.  At DC B we have a 
 smaller installation and instead take redundant drops (and IP space) from a 
 single provider, ISP B1, who then peers upstream with two providers, B2 and B3
 
 We have a fairly consistent bi-directional stream of traffic between DC A and 
 DC B.  Both of ISP A1 and A2 have good peering with ISP B2 so under normal 
 network conditions traffic flows across ISP B1 to B2 and then to either ISP 
 A1 or A2
 
 oversimplified ascii pic showing only the normal best paths:
 
           /-- ISP A1 -- ISP B2 --\
   DC A --|                        |-- ISP B1 -- DC B
           \-- ISP A2 -- ISP B2 --/
 
 
 with increasing frequency we've been experiencing packet loss along the path 
 from DC A to DC B.  Usually the periods of loss are brief,  30 seconds to a 
 minute, but they are total blackouts.
 
  I'd like to be able to collect enough relevant data to pinpoint the trouble 
 spot as much as possible so I can take it to the ISPs and request a solution. 
  The blackouts are so quick that it's impossible to log in and get a trace- 
 hence the desire to automate it.
 
 I can provide more details off list 

Can we not just fix it? WAS:Re: Open Resolver Problems

2013-03-28 Thread Michael DeMan
As I think we all know, the deficiency is in the design of the DNS system overall.

No disrespect to anybody, but lots of companies make money off of the design 
deficiencies and try to position themselves as offering 'value add services' 
or something similar.  Basically they make money because the inherent design 
of the DNS system is the antithesis of being able to deliver information on a 
best-effort basis.  Entire 'value add' economic ecosystems are created around 
these kinds of things, and once that is done it is extremely difficult to undo.

If the endpoint is not available or is unreliable, that is all well understood 
and 100% captured in the modern implementations of the Internet, whether it be 
OSI or TCP/IP, and even with the numerous extensions from there.

The fundamental cause and source of failure for these kinds of attacks comes 
from the way the DNS (and let's not even get into 'valid' SSL certs) is 
designed.  It is fundamentally flawed.  I am sure there were plenty of 
political reasons for it to have ended up this way instead of being done in a 
more robust fashion.

For all the gripes and complaints - all I see are complaints about the 
symptoms and nobody calling out the original cause of the disease.


On Mar 27, 2013, at 6:47 AM, William Herrin b...@herrin.us wrote:

 On Tue, Mar 26, 2013 at 10:07 PM, Tom Paseka t...@cloudflare.com wrote:
 Authoritative DNS servers need to implement rate limiting. (a client
 shouldn't query you twice for the same thing within its TTL).
 
 Right now that's a complaint for the mainstream software authors, not
 for the system operators. When the version of Bind in Debian Stable
 implements this feature, I'll surely turn it on.
 
 Regards,
 Bill Herrin
 
 
 -- 
 William D. Herrin  her...@dirtside.com  b...@herrin.us
 3005 Crane Dr. .. Web: http://bill.herrin.us/
 Falls Church, VA 22042-3004
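
(An aside not from the original thread: the rate limiting Tom Paseka describes 
later shipped in BIND as Response Rate Limiting; a minimal named.conf sketch 
of enabling it on an authoritative-only server, with illustrative numbers and 
assuming a BIND build that includes RRL.)

    // named.conf sketch: response rate limiting on an authoritative-only server
    options {
        directory "/var/named";
        recursion no;
        rate-limit {
            responses-per-second 5;   // identical answers allowed per client block
            window 5;
            slip 2;                   // answer every 2nd dropped query with TC=1
        };
    };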
 




fiber cut in Portland OR / Vancouver WA area

2012-07-11 Thread Michael DeMan
We have seen some spottiness with IP and cell phone connectivity in this area.  
It has been going on for several hours now.

Thanks,
- mike




Re: fiber cut in Portland OR / Vancouver WA area

2012-07-11 Thread Michael DeMan
AT&T said they have been doing maintenance; I called in about the cell service 
issues only.

They have an ETA of about 2-3 days for completion of the work, which should 
not drop service but will impact services.  I would guess it is for 4G 
upgrades or something, combined with poor planning on the upgrades.

- mike

Duh - area is Vancouver WA area

On Jul 11, 2012, at 9:42 PM, Michael DeMan wrote:

 We have seen some spottiness with IP and cell phone connectivity in this area. 
  It has been going on for several hours now.
 
 Thanks,
 - mike
 




Re: Reliable Cloud host ?

2012-02-26 Thread Michael DeMan
We have found it effective, at least for things like DNS and backup MX, to 
simply swap some VPS and/or physical colo with another ISP outside our 
geographic area.  Both protocols are designed for that kind of redundancy.  It 
definitely has limitations, but it is also probably the cheapest solution.
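
(An illustration added here, not in the original message: what that swap looks 
like in a zone file, with the partner ISP carrying a secondary NS and a backup 
MX; all names and priorities are placeholders.)

    ; example.net zone fragment - redundancy via a partner ISP in another region
    example.net.   IN  NS   ns1.example.net.                  ; in our facility
    example.net.   IN  NS   ns2.partner-isp.example.          ; secondary at the partner
    example.net.   IN  MX   10 mail.example.net.              ; primary MX
    example.net.   IN  MX   20 mx-backup.partner-isp.example. ; off-site backup MX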

- Mike

On Feb 26, 2012, at 2:56 PM, Randy Carpenter wrote:

 
 
 Does anyone have any recommendation for a reliable cloud host?
 
 We require 1 or 2 very small virtual hosts to host some remote services to 
 serve as backup to our main datacenter. One of these services is a DNS 
 server, so it is important that it is up all the time.
 
 We have been using Rackspace Cloud Servers. We just realized that they have 
 absolutely no redundancy or failover after experiencing a outage that lasted 
 more than 6 hours yesterday. I am appalled that they would offer something 
 called cloud without having any failover at all.
 
 Basic requirements:
 
 1. Full redundancy with instant failover to other hypervisor hosts upon 
 hardware failure (I thought this was a given!)
 2. Actual support (with a phone number I can call)
 3. reasonable pricing (No, $800/month is not reasonable when I need a tiny 
 256MB RAM Server with 1GB/mo of data transfers)
 
 thanks,
 -Randy
 




Re: Microsoft deems all DigiNotar certificates untrustworthy, releases updates

2011-09-09 Thread Michael DeMan
Sorry for being ignorant here - I was not even aware that it is possible to 
buy a '*.*.com' certificate at all.

I thought wildcards were limited to hosts under a domain off a TLD - like 
'*.mydomain.tld'.

Is it true that my browser on a Windows, Mac, or Linux desktop may have listed 
as trusted authorities an outfit that sells '*.*.tld' certificates?

Thanks,

- Mike

On Sep 9, 2011, at 2:54 PM, Paul wrote:

 On 09/09/2011 11:48 AM, Marcus Reid wrote:
 On Wed, Sep 07, 2011 at 09:17:10AM -0700, Network IP Dog wrote:
 FYI!!!
 
 http://seattletimes.nwsource.com/html/microsoftpri0/2016132391_microsoft_dee
 ms_all_diginotar_certificates_untrust.html
 
 Google and Mozilla have also updated their browsers to block all DigiNotar
 certificates, while Apple has been silent on the issue, a emblematic zombie
 response!
 Apple has sent out a notification saying that they are removing
 DigiNotar from their list of trusted root certs.
 
 I like this response; instant CA death penalty seems to put the
 incentives about where they need to be.
 
 Marcus
 
 Instant?  This has been going on for over a week, and a lot of damage could 
 have been done in that time, especially given certs for *.*.com were signed 
 against Diginotar.  Most cell phones are unable to update their certificates 
 without an upgrade and you know how long it takes to get them through Cell 
 Phone carriers.  A number of alternative android builds are adding the 
 ability to control accepted root certs to their builds in the interest of 
 speeding this up.  The CA system is fundamentally flawed.
 
 Paul
 




Re: Wacky Weekend: NERC to relax power grid frequency strictures

2011-06-25 Thread Michael DeMan
It is my understanding also that most commercial-grade gensets have built into 
the ATS logic that, when utility power comes back online, the transfer back to 
utility power is coordinated, with the ATS driving the generator until both 
frequency and phase are within a user-specified range?

- mike

On Jun 25, 2011, at 3:12 PM, Leo Bicknell bickn...@ufp.org wrote:

 In a message written on Fri, Jun 24, 2011 at 06:29:14PM -0400, Jay Ashworth 
 wrote:
 I believe the answer to that question is contained here:
 
  http://yarchive.net/car/rv/generator_synchronization.html [1]
 
 I wouldn't use a colo that had to sync their generator to the grid.
 That is a bad design.
 
 Critical load should be on a battery or flywheel system.  When the
 utility is bad (including out) the load should be on the battery or
 flywheel for 5-15 seconds before the generators start.  The generators
 need to sync to each other.
 
 Essential load (think lighting, AC units) get dropped completely for
 30-60 seconds until the generators are stable, and then that load gets
 hooked back in.
 
 I have never seen a generator that syncs to the utility for live, no
 break transfer.  I'm sure such a thing exists, but that sounds crazy
 dangerous to me.  Generators sync to each other, not the utility.
 
 -- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/



Re: Wacky Weekend: NERC to relax power grid frequency strictures

2011-06-25 Thread Michael DeMan


On Jun 25, 2011, at 4:47 PM, Andrew D Kirch trel...@trelane.net wrote:

 On 6/25/2011 7:43 PM, Paul Graydon wrote:
 Take a guess what the datacenter our equipment is currently hosted in uses.  
 Yet another reason to be glad of a datacenter move that's coming up.
 
 Why can't we just all use DC and be happy?
 

Because, with the short-term pain versus long-term gain involved, frequently 
(pardon the pun) better ideas do not make it through the free market process?




Re: good geographic for servers reaching the South East Asia market

2011-06-16 Thread Michael DeMan
Hi,

I wanted to thank everybody for their feedback.  Everything seems to correlate 
with what I have heard - generally Hong Kong and Singapore are the major hubs, 
with Tokyo being an option even though it is not 'geographically' close, and 
possibly there are also options in Malaysia.

I think I have what I need for this.  I am just a worker-bee on this project, 
getting preliminary information for a potential project by a client next year, 
which may or may not even be a 'go' anyway.

Out of curiosity, I stumbled across this kind of cool map of submarine cables - 
does anybody know if it is very accurate or up to date?  If nothing else, it is 
kind of fun to play with since you can slide around and zoom in/out with it and 
stuff.

http://www.cablemap.info/



- Mike




On Jun 15, 2011, at 4:50 PM, Michael DeMan wrote:

 Hi All,
 
 I guess this is a bit off-topic since this is the North American network 
 operators group, but I was wondering if anybody had much experience with 
 fiber infrastructure in the South East Asia area.
 
 For reference, generally the WikiPedia entry on South East Asia describes the 
 service delivery area:
 http://en.wikipedia.org/wiki/Southeast_Asia
 
 Basically looking for tips on what cities/countries/locations have as much 
 (mostly submarine cabling in this case?) fiber connectivity and redundancy.  
 From there I can trim down where to begin looking specifically at data 
 centers and colocation options.
 
 Also, if anybody offhand has any tips on political stability and/or the risk 
 of some kind of unwanted censorship by a given country, that would be helpful 
 as well.
 
 Feel free to post back on-list or off-list.
 
 Thanks,
 
 - Michael DeMan
 
 
 
 
 
 




Re: good geographic for servers reaching the South East Asia market

2011-06-16 Thread Michael DeMan
Hi Janne,

Any thoughts about Malaysia?  The outfit I am working for on this right now 
already has manufacturing facilities there and it would be easier for them to 
do it in-country.

I would guess that probably everything from the Kuala Lumpur area is trunked 
via Singapore anyway?

- mike

On Jun 16, 2011, at 6:21 AM, Janne Snabb sn...@epipe.com wrote:

 Hello from Cambodia. I am familiar with the situation in Cambodia
 and some surrounding countries.
 
 On Wed, 15 Jun 2011, Michael DeMan wrote:
 
 Basically looking for tips on what cities/countries/locations
 have as much (mostly submarine cabling in this case?) fiber
 connectivity and redundancy.  From there I can trim down where to
 begin looking specifically at data centers and colocation options.
 
 Hong Kong, Singapore (and Taiwan).
 
 Hong Kong is the best choice for some countries in the region:
 countries such as Cambodia and Vietnam have their uplinks mostly
 through Hong Kong.
 
 Singapore is the best choice for others: Thailand, Malaysia and
 Indonesia have good connectivity to Singapore.
 
 Taiwan I would place as the 3rd option.
 
 No other realistic options exist in the region beyond that... US
 west coast is often the best option if you are not prepared to spend
 a lot of money (but check your upstream's peering with major SEA
 providers first).
 
 Also, if anybody offhand has any tips on political stability
 and/or the risk of some kind of unwanted censorship by a given
 country, that would be helpful as well.
 
 All countries within the region are unsafe when it comes to censorship.
 Hong Kong is probably the only nearby place which does not openly
 practice censorship currently, but I would not count on that as
 it is just a (autonomous) province of China.
 
 --
 Janne Snabb / EPIPE Communications
 sn...@epipe.com - http://epipe.com/



good geographic for servers reaching the South East Asia market

2011-06-15 Thread Michael DeMan
Hi All,

I guess this is a bit off-topic since this is the North American network 
operators group, but I was wondering if anybody had much experience with fiber 
infrastructure in the South East Asia area.

For reference, generally the WikiPedia entry on South East Asia describes the 
service delivery area:
http://en.wikipedia.org/wiki/Southeast_Asia

Basically looking for tips on what cities/countries/locations have as much 
(mostly submarine cabling in this case?) fiber connectivity and redundancy.  
From there I can trim down where to begin looking specifically at data centers 
and colocation options.

Also, if anybody offhand has any tips on political stability and/or the risk of 
some kind of unwanted censorship by a given country, that would be helpful as 
well.

Feel free to post back on-list or off-list.

Thanks,

- Michael DeMan








Re: Top-posting (was: Barracuda Networks is at it again: Any Suggestions as to anAlternative? )

2011-04-12 Thread Michael DeMan
Call me an old 'hard case' - but I prefer that when I get information via 
email, if possible, the relevant information shows up immediately.

Call me lazy I guess - but I would expect that most folks on this list have 
also understood good user interface design, and that the least amount of work 
that needs to be done for the receiver to be able to get their information is 
frequently the best solution.

On the other hand - I must admit that I do often top post and note 'see 
inline', with heavy use of snipping, in order to make what has turned into a 
long topic shorter and more concise.

I absolutely agree with anybody (or everybody), that wants mailing list 
archives to be readable.  Fortunately we have things called 'computers' that do 
that quite well - and reorganize the email correspondence on mailing lists back 
into standard chronological order.

I am also not averse to changing formats - I just think that it is 
inefficient.


- mike

On Apr 11, 2011, at 11:15 AM, John Levine wrote:

 It's really impressive how insular a bunch of old timers can be.
 
 Coming up next: rants about HTML mail!
 
 R's,
 John
 
 In article BANLkTi=v11tghfgmxstjxscjtgpb6ct...@mail.gmail.com you write:
 On Mon, Apr 11, 2011 at 8:21 AM, Kevin Oberman ober...@es.net wrote:
 Of late I have started to get responses from people (not even the person
 who top-posted) saying that I should f*** off and that they would post
 however they wanted. Very hostile and even threatening.
 
 My wife complained once that my responses are hard to read and that I
 should just put at the top like the rest of the Internet.  I fear I
 have been passed by...
 




Re: Top-posting

2011-04-12 Thread Michael DeMan
Hi Paul,

Your point is taken - but actually this is a bit of a conundrum, at least for 
me.

Generally what I see is that younger people who grew up using email when they 
were children desire to bottom post or post inline whereas folks that 
originally utilized email primarily to communicate technical information only 
generally prefer to top-post.

I believe that top-posting is fine, and I have also found (what do they call 
it, reverse-Hungarian or reverse-Polish?) notation immensely useful for doing 
things like naming and structuring software packages.

Either way, I ultimately agree with you - with the possible exception that, if 
the NANOG list really cares, it could set up a survey of all list members and 
have everybody vote; then we would know that when we ask questions on this 
list where we expect timely answers, the answers may be buried in a myriad of 
text.  Another problem with bottom-posting is the SNIP of anything above, etc.

Cheers - and sorry for having a little late night fun bothering everybody with 
noting something that I have seen mostly as a social change on how people 
communicate via email over the past 30 years.

- Mike

P.S. - meanwhile, for an email list like NANOG - I am still hoping that most 
folks want efficiency on answers to questions - and, if they need old data, 
are clever enough to realize that there are plenty of ways via HTTP to find 
those 'weirdo top-post commenters' listed with their posts in chronological 
and/or relevance order - with prior commentary properly sorted.


On Apr 11, 2011, at 11:06 PM, Paul Ferguson wrote:

 I am top-posting to show that this entire thread is retarded.
 
 I certainly could have bottom-posted, because I don't use Outlook for
 this list, but the point here is -- is this what the NANOG list has
 really become? Really?
 
 So sad.
 
 - ferg
 
 
 On Mon, Apr 11, 2011 at 10:55 PM, Dobbins, Roland rdobb...@arbor.net wrote:
 
 
 On Apr 12, 2011, at 12:42 PM, Owen DeLong wrote:
 
 I have used Evolution and IMAP with exchange servers in the past, so, I'm 
 not convinced this is an entirely accurate statement.
 
 
 And in fact, I'm posting this message in plain-text via the OSX Mail.app 
 connected via native Exchange protocols to an Exchange server.
 
 There's even a plug-in for Mail.app in order to make inline posting easier.
 
 
 -- 
 Fergie, a.k.a. Paul Ferguson
  Engineering Architecture for the Internet
  fergdawgster(at)gmail.com
  ferg's tech blog: http://fergdawg.blogspot.com/
 




Re: Top-posting (was: Barracuda Networks is at it again: Any Suggestions as to anAlternative? )

2011-04-12 Thread Michael DeMan
I really don't think anybody is concerned about how fast the email downloads 
anymore.

Rather it is more of a matter of how long it takes us humans to process the 
incredible volume of information we are expected to process.

I have no problem either 'top posting' or 'bottom posting' - but I agree it 
would be good for the NANOG list to decide on a policy.

I say we all vote.

The ultimate question on email etiquette is naturally how to properly identify 
inline commentary.

Top-posting is definitely the most efficient for that.  For instance, if I 
have a lengthy correspondence with a peer who may or may not speak English 
well, the top-post is always respected, and from there it is quite easy 
(because it is at the top) to note that other commentary is inline - and (as I 
mentioned before) to remove unnecessary material while leaving short portions 
of relevant material.

To get back on topic about using email efficiently and get away from people's 
personal preferences, I will say the following.

#1) I have no disagreement about whether to top-post or bottom-post on this 
list or any other - given that there is a policy in place.  Maintaining 
communications is the most important thing.

#2) I still do not understand how 'bottom posters' reference material from 
prior e-mails in their replies?  Perhaps I am just ignorant.  I often have 
lengthy business and technical communications which sometimes require a bit of 
snipping here and there - the best way to notify somebody that you have 
SNIPPED the prior conversation is to say it right up front?

#3) These kinds of things become even more important when working with 
non-native English speakers.

#4) I still seem to believe (maybe I am wrong) that 'bottom posters' think 
that an individual email to the list is supposed to be an 'archive' - I wholly 
disagree.



On Apr 11, 2011, at 11:49 PM, Tim Chown wrote:

 
 On 12 Apr 2011, at 07:33, Michael DeMan wrote:
 
 Call me and old 'hard case' - but I prefer that when I get information via 
 email, that if possible, the relevant information show up immediately.
 
 Call me lazy I guess - but I would expect that most folks on this list have 
 also understood good user interface design, and that the least amount of 
 work that needs to be done for the receiver to be able to get their 
 information is frequently the best solution.
 
 Well indeed, top-posting is just so much more efficient given the volumes of 
 email most of us probably see each day.
 
 Back when receiving an email was an event, and your xbiff flag popping up was 
 a cause for excitement, taking time to scroll/page down to the new 
 bottom-posted content in the reply was part of the enjoyment of the whole 
 'You have new mail' process. But I'm afraid times have changed; 
 bottom-posted email is now an annoyance to most just as a slow-loading web 
 page would be.
 
 Tim




Re: Nortel, in bankruptcy, sells IPv4 address block for $7.5 million

2011-03-25 Thread Michael DeMan

On Mar 25, 2011, at 12:05 PM, Owen DeLong wrote:

 
 On Mar 24, 2011, at 10:08 AM, Randy Bush wrote:
 
 They can only get them _at all_ if they can document need.  All
 receipt of address space, whether from the free-pool or through a
 transfer, is needs-based.  Anything else would be removing a critical
 resource from use.
 http://en.wikipedia.org/wiki/Canute
 Thank you Randy.  Give Canute a community-developed set of marching
 orders, and make the ocean a little more pliable and you might have 
 something there.
 
 at some point, the arin policy wonk weenies will face reality.  or not.
 it really makes little difference.  
 
 i don't particularly like the reality either, but i find it easier and
 more productive to align my actions and how i spend my time.  not a lot
 of high paying jobs pushing water uphill.
 
 randy
 
 At some point we will see which reality actually pans out. Both the 
 perspective
 of we ARIN Policy wonk weenies as Randy so kindly calls us, and, Randy's
 perspective are speculations about future events. I think both are probably 
 equally
 based in reality based on different sets of experiences.
 
 Since my reality has the potential to preserve many good aspects of the 
 internet,
 I hope it turns out that Randy is the one who is wrong.
 
 Owen
 

Or possibly, if we cannot sort this out on our own and set a good precedent 
(for ARIN and the other registries as well) that we can handle this ourselves 
in a way that is agreeable and beneficial to all stakeholders, we will just be 
adding another piece of lumber onto the ready-to-light bonfire of arguments 
that government needs to step in somehow?
 
- Mike




Re: [Nanog] Re: US .mil blocking in Japan

2011-03-17 Thread Michael DeMan
Wasn't this announced on the news already?

That because the infrastructure in Japan was hit (not highly publicized) but 
was still working, the US military also said they were blocking YouTube and 
other high-bandwidth sites in order to conserve resources?

I am definitely not one to be beyond hearing out a conspiracy theory or 
something, but I know that up in our neck of the woods in the far northwest of 
Washington State, this is just common sense to do.


On Mar 17, 2011, at 6:57 PM, Jason 'XenoPhage' Frisvold wrote:

 On Mar 16, 2011, at 4:22 PM, William Warren wrote:
 As a former Military Member I can tell you we don't have unlimited amounts 
 of bandwidth...especially overseas.  There's been several undersea cables 
 damaged or completely knocked offline.  I don't find this policy very 
 surprising due to the disaster in Japan.
 
 Could this also be part of a communications blackout ?  No, not in a 
 sinister, government keeping secrets, manner.  A friend of mine serves on a 
 ship that's over there right now.  He dropped me a note last night that they 
 were going into a communications blackout to try and control some of the wild 
 miscommunication being sent out.
 
 It seems reasonable enough if only to prevent widespread panic from someone 
 close to the situation saying something incorrect.
 
 
 ---
 Jason 'XenoPhage' Frisvold
 xenoph...@godshell.com
 ---
 Any sufficiently advanced magic is indistinguishable from technology.
 - Niven's Inverse of Clarke's Third Law
 
 
 
 




Re: FAA - ASDI servers

2011-01-04 Thread Michael DeMan
Is that the FFA or the FAA?


On Jan 4, 2011, at 8:57 PM, Ryan Finnesey wrote:

 Can they simply extend the mandate?   We need to setup new connectivity
 to the FFA and was hoping to go IPv6 right out of the gate.
 Cheers
 Ryan
 
 
 -Original Message-
 From: Kevin Oberman [mailto:ober...@es.net] 
 Sent: Tuesday, January 04, 2011 11:12 PM
 To: Christopher Morrow
 Cc: Ryan Finnesey; nanog@nanog.org
 Subject: Re: FAA - ASDI servers 
 
 Date: Tue, 4 Jan 2011 22:49:34 -0500
 From: Christopher Morrow morrowc.li...@gmail.com
 
 On Tue, Jan 4, 2011 at 10:39 PM, Ryan Finnesey 
 ryan.finne...@harrierinvestments.com wrote:
 Very true but why the reference to vacuum tubes?
 
 sadly it was an FAA computer system joke.
 
 But, since the F stands for Federal, if it is still up in two years,
 it must be reachable by IPv6. Today, the odds are pretty slim as almost
 no federal systems are reachable by IPv6. It will be an interesting two
 years for a lot of federal IT folks as the mandate is from the OMB who
 can pull a budget for non-compliance.
 --
 R. Kevin Oberman, Network Engineer
 Energy Sciences Network (ESnet)
 Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
 E-mail: ober...@es.netPhone: +1 510 486-8634
 Key fingerprint:059B 2DDF 031C 9BA3 14A4  EADA 927D EBB3 987B 3751
 




Re: Muni Fiber Last Mile - a contrary opinion

2010-12-26 Thread Michael DeMan

On Dec 26, 2010, at 8:07 PM, Chris Adams wrote:

 The ATT (formerly BellSouth) cabinets around here mostly have natural
 gas generators included, so they almost never go out.  The cable
 companies, on the other hand, might have enough battery to last through
 a brownout.

Interesting - out of curiosity, how big are these cabinets/pedestals?  Or would 
you by chance know details on the natgas power system they are using?

Natgas is not ideal in a full-on disaster scenario like an earthquake, but 
probably could add another '9' onto service levels?  I have never heard of or 
seen such a thing, but it is a really good idea.

- Michael DeMan


Re: Some truth about Comcast - WikiLeaks style

2010-12-20 Thread Michael DeMan

On Dec 19, 2010, at 5:48 PM, Richard A Steenbergen wrote:

 
 Personally I think the right answer is to enforce a legal separation 
 between the layer 1 and layer 3 infrastructure providers, and require 
 that the layer 1 network provide non-discriminatory access to any 
 company who wishes to provide IP to the end user. But that would take a 
 lot of work to implement, and there are billions of dollars at work 
 lobbying against it, so I don't expect it to happen any time soon. :)


+1 on this - it is the source of a huge number of problems in the industry.





Warrant Canaries

2010-12-05 Thread Michael DeMan

On Dec 4, 2010, at 9:06 PM, Jay Ashworth wrote:

  Original Message -
 From: Adrian Chadd adr...@creative.net.au
 
 On Sat, Dec 04, 2010, Ken Chase wrote:
 And if they come and ask the same but without a court order is a bit
 trickier and more confusing, and this list is a good place to track the
 frequency of and responce to that kind of request.
 
 Except of course when you're asked not to share what has occured
 with anyone. I hear that kind of thing happens today.
 
 It does.  Hence, the Warrant Canary:
 
 http://blog.kozubik.com/john_kozubik/2010/08/the-warrant-canary-in-2010-and-beyond.html
 
 Cheers,
 -- jra
 

Actually, my intuition is that warrant canaries are not a workable solution 
either.  I would presume that a 'secret' court order or national security 
letter, where you are expressly ordered not to divulge the fact that you have 
received it, could be violated either by 'action' or by 'inaction'.  So the 
'inaction' of not updating the warrant canary would be a violation.

The interesting thing, of course, is that to avoid the 'inaction' - if your 
regular process is to, say, update the warrant canary daily - you would be 
placed in the position where the government was asking you to lie to the 
public at large?

I have wondered about this for quite a while - has anybody on the list ever 
talked with an attorney with specific expertise in this area of law about this? 
 I am not expecting formal legal advice by any means, just curious if anybody 
has done any research on this topic and could share what they discovered.

- Mike

P.S. - Intent here is not to drag out the wikileaks thread, but rather start a 
new thread on the more general topic of legal/policies and warrant canaries, 
which although not a purely technical discussions seems more on-topic for the 
nanog list.  My apologies in advance if it is OT.











Re: wikileaks dns (was Re: Blocking International DNS)

2010-12-03 Thread Michael DeMan
wikileaks.no and wikileaks.se seem to accept requests on port 80 but appear to 
be having trouble generating responses, perhaps just overloaded.


On Dec 3, 2010, at 12:45 AM, Stephane Bortzmeyer wrote:

 On Fri, Dec 03, 2010 at 12:52:29AM -0500,
 Ken Chase k...@sizone.org wrote 
 a message of 24 lines which said:
 
 Anyone have records of what wikileaks (RR, i assume) A record was? 
 
 91.121.133.41
 46.59.1.2
 
 Translated into an URL, the first one does not work (virtual hosting,
 may be) but the second does.
 
 I've found also, thanks to a new name resolution protocol, TDNS
 (Tweeter DNS), 213.251.145.96, which works.
 
 I should have queried my favourite open rDNS servers before they
 expired,
 
 dig A wikileaks.org > backup.txt
 
 (from cron)
 
 is a useful method. Other possible solution would be a DNSarchive, in
 the same way there is a WebArchive. Any volunteer?
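
(An aside added here, not part of the quoted message: put into a crontab, the 
suggestion above might look like the line below; the output path is a 
placeholder.)

    # archive the A records hourly
    0 * * * *  dig +short A wikileaks.org >> /var/log/dnsarchive/wikileaks.org.txt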
 
 
 
 




Terminology Request, WAS: Enterprise DNS providers

2010-10-18 Thread Michael DeMan
Hi,

I have been following this thread, and am mostly curious - can somebody (or 
preferably several folks) define what is meant by 'Enterprise DNS' ?

Thanks,

- Mike

On Oct 16, 2010, at 3:03 AM, Ken Gilmour wrote:

 Hello any weekend workers :)
 
 We are looking at urgently deploying an outsourced DNS provider for a
 critical domain which is currently unavailable but are having some
 difficulty. I've tried contacting UltraDNS who only allow customers from US
 / Canada to sign up (we are in Malta) and their Sales dept are closed, and
 Easy DNS who don't have .com.mt as an option in the dropdown for
 transferring domain names (and also support is closed).
 
 Black Lotus looks like the next best contender, has anyone had experience
 with these or any other recommendations for how we can transfer a .com.mt to
 a reliable hosting provider during the weekend?
 
 Thanks!
 
 Ken




Re: Software-based Border Router

2010-09-27 Thread Michael DeMan
I have seen software based routers (FreeBSD+Quagga) in production at pennies on 
the dollar compared to Cisco for quite some years.

Up front, as other people have noted, you need to know what you are doing.  
There is no 'crying for help 24x7'.  By the same token, if you know what you 
are doing, then they can be a very cost-effective solution.

I have yet to see (or try out) MPLS and such, so if requirements need features 
like that, then probably open source may not be the solution.

The above said, other comments inline below...


On Sep 27, 2010, at 3:48 PM, Heath Jones wrote:

 Do jitter sensitive applications have problems at all running?
 What would you say is the point at which people should be looking for
 a hardware forwarding solution?
 
 Differences:
 - Hardware forwarding

Yes, absolutely, no hardware forwarding.  This must be compensated for by 
utilizing as advanced/expensive 'commodity PC hardware' as possible.  You want 
lots of CPU horsepower, fast buses (PCI-E x16 if possible) and good NICs so 
the OS can offload as much as possible to the hardware and not be bandwidth 
constrained.  Even then, no way are you going to get anything close to what 
you can from a 'real' router.  A classic trade-off between technical needs & 
desires vs. financial constraints.

 - Interface options

Make sure there are at least two NIC platforms, i.e., a pair of onboard dual 
gigabit ports plus another dual gigabit card.  Bond the interfaces across the 
separate NIC platforms so that each bonded pair has one gigabit link off, say, 
the onboard and one off the NIC card.  Utilize LACP.
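
(A sketch added here, not from the original post: on FreeBSD this bonding 
could be expressed in /etc/rc.conf roughly as follows; interface names and the 
address are placeholders.)

    # LACP lagg across two different NIC platforms: onboard em0 + add-in igb0
    ifconfig_em0="up"
    ifconfig_igb0="up"
    cloned_interfaces="lagg0"
    ifconfig_lagg0="laggproto lacp laggport em0 laggport igb0 192.0.2.2 netmask 255.255.255.0"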

 - Port density

Use VLANs - again, a quality NIC will help with this by offloading a good 
portion of the overhead to hardware.

 - Redundancy

Use a /29 to your eBGP provider and turn up two routers side-by-side.  Again, 
if you are looking for hard core 'carrier grade' stuff, you should not be 
asking about open source.  Pair the two routers, for eBGP sessions, and use a 
separate interface for them to talk to each other.
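
(A sketch added here, not from the original post: a minimal Quagga bgpd.conf 
fragment for one of the two routers; ASNs, prefixes, and addresses are 
documentation placeholders.)

    ! bgpd.conf on router A of the pair sharing the /29 with the upstream
    router bgp 64496
     bgp router-id 192.0.2.2
    ! our announced space
     network 198.51.100.0/24
    ! the upstream's router on the shared /29
     neighbor 192.0.2.1 remote-as 64511
    ! the other local router, reached over the dedicated inter-router interface
     neighbor 10.255.0.3 remote-as 64496
     neighbor 10.255.0.3 next-hop-self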

 - Power consumption

Always an issue, no way are you going to get pps from this kind of stuff like 
you would from Cisco.

 - Service Provider stuff - MPLS TE? VPLS? VRF??

Yup.

 
 Any others?
 

If somebody is on an extremely tight budget, is technically capable of 
utilizing open source to do what they need, and their requirements are limited 
enough that an open source platform would work for them, I would suggest they 
check into it.  Ultimately, as always, it is buyer beware.  Often with 
dedicated routers a support contract can cost as much as the router itself 
after a year or two, but sometimes companies need that support contract 
because they don't have the in-house skills already, etc.

I would never recommend either open source or dedicated hardware routers to 
anybody as a 'this is the only way to go' solution.




Re: .se disappeared?

2009-10-12 Thread Michael DeMan (OA)

Yes.

On Oct 12, 2009, at 1:38 PM, Ben White wrote:


Does anyone else also see trouble reaching .se domains at the moment?

--
Ben