RE: Wifi SIP WPA/PSK Support

2006-01-26 Thread Frank Bulk


I've been tracking the Wi-Fi SIP phone space for some time, and have
documented all the phones that I could find here:
http://www.mtcnet.net/~fbulk/VoWLAN.doc
It's about a 7 MB file because I've included pictures of these devices where
I could find them.

Because we just installed a SIP proxy server on our switch, I took the
opportunity to purchase and try out 4 Wi-Fi SIP phones:
- Hitachi IPC-5000
- UTStarcom F1000
- Pulver WiSIP/ZyXEL P-2000W v1
- ZyXEL P-2000 v2

The last two offer an identical user interface and functionality, but a
different shell.

The Hitachi doesn't offer WPA support, but it does do 802.1X (specifically
EAP-MD5, EAP-TLS, PEAP, and EAP-TTLS) with WEP.  That probably means the
network can hand WEP keys to it, and perhaps perform dynamic WEP.  The only
phone on that short list to support WPA is the F1000 with 3.60 or higher
firmware, and that's only WPA-PSK.

If you look in the Word document you'll see there are other phones that
offer WPA support, but they are not readily available in the North American
market.

Kind regards,

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Mike
Leber
Sent: Wednesday, January 25, 2006 10:35 PM
To: nanog@merit.edu
Subject: Wifi SIP WPA/PSK Support



I'm working on finding a Wifi SIP phone that supports WPA/PSK that we can
recommend to VOIP clients.  As everybody knows, currently most Wifi SIP
phones support only WEP, which is demonstrably insecure.  For banking and
financial customers, or companies that are given passwords or credit cards
over the phone, this is a serious security issue.

We recently bought a Hitachi-Cable Wireless IPC-5000 WiFi SIP Phone from
voipsupply.com after finding some web pages that said that phone supported
WPA (the pages were in German), yet once we got the phone all it supported
was WEP even after updating the firmware to the latest version using the
website mentioned in the documentation that came with the phone.

I've had a few people say that there was some sort of conspiracy to keep
US citizens from using secure phones.  I find that laughable, because the
potential for terrorist or criminal interception of all the Wifi telephone
conversations involving credit cards (let alone social security numbers,
bank account numbers, passwords, what have you) sent in the clear would
create an attack vector so large as to exceed all other possible attack
vectors... I mean, why work on cracking anything when you can just listen
to everybody in the clear (well, virtually in the clear with WEP)?

So, back in reality, could anybody in the US that bought their Wifi SIP
phone in the US share a success story at getting Wifi SIP setup with
WPA/PSK?  What model of phone did you buy?  Where did you get it?  Did you
have to upgrade it to any special version of firmware or what?

Mike.

+- H U R R I C A N E - E L E C T R I C -+
| Mike Leber   Direct Internet Connections   Voice 510 580 4100 |
| Hurricane Electric Web Hosting  Colocation   Fax 510 580 4151 |
| [EMAIL PROTECTED]   http://www.he.net |
+---+





RE: Yahoo, Google, Microsoft contact?

2006-02-03 Thread Frank Bulk

I'm sorry, but being a larger company requires more resources to support it.
Our upstream provider has only 3 to 5 people in their NOC during the day,
but they only serve a couple dozen ITCs.  A bigger company generates more
revenue and accordingly has increased responsibilities.  Largish companies
benefit from economies of scale (their overnight crew *actually* has calls
to take) and will likely have better processes in place to handle things
efficiently.

What do you think the messages:NOC man-hours ratio is?  I would argue that
smaller operations provide better service, but it costs them more per
message, or whatever metric you want to use.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Christopher L. Morrow
Sent: Friday, February 03, 2006 10:37 AM
To: Ivan Groenewald
Cc: [EMAIL PROTECTED]; nanog@merit.edu; 'Gadi Evron'; 'n3td3v'
Subject: RE: Yahoo, Google, Microsoft contact?



On Fri, 3 Feb 2006, Ivan Groenewald wrote:


 Earlier, Valdis scribbled:
  There's also the deeper question:  Why do we let the situation persist?
  Why do we tolerate the continued problems from unreachable companies?
  (And yes, this *is* an operational issue - what did that 4 hours on the
  phone cost your company's bottom line in wasted time?)


 To a certain extent, it's simple economic logic.
 At the end of the day, I got my issue sorted and it cost me 4 hours of
 billable time. It cost the other party 15 minutes of time. Why employ
 another person full time to deal with queries or man an email desk, to
 save *me* 3h45min? It makes economic sense for bigger companies not to,
 well, care. They aren't going to go away, you're not going to get in the
 way of the big Google/MS/BigCorp(tm) engine with gripes on your blog, so
 why bother spending more money on helping *you*?

 It might sound very black and white, but I can tell you now that a lot of
 these companies use that as a rationale even without thinking about it so
 directly.

actually, working for a largish company, I'd say one aspect not recognized
is the scale on their side of the problem... [EMAIL PROTECTED]|uu|vzb gets (on a
bad month) 800k messages, on a 'good' month only 400k ... how many do
yahoo/google/msn get? How many do their role accounts get for
hostmaster/postmaster/routing/peering?  Expecting that you can send an
email and get a response 'quickly' is just not reasonable, unless you
expect just an auto-ack from their ticketing system.

-Chris



RE: Middle Eastern Exchange Points

2006-02-07 Thread Frank Bulk

A look at Telegeography's bandwidth maps suggests that the African routes
are predominantly coastal.

http://www.afridigital.net/downloads/DFIDinfrastructurerep.doc
adds some more detail.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Joe
Abley
Sent: Tuesday, February 07, 2006 3:12 PM
To: Martin Hannigan
Cc: Howard C. Berkowitz; nanog@merit.edu
Subject: Re: Middle Eastern Exchange Points



On 7-Feb-2006, at 11:54, Martin Hannigan wrote:

 I know of a Cairo IXP, and possibly one in the UAE.  Is there one  
 in Kuwait as yet?

 Yes, KIX. Note, there's CIX and CRIX. If you are trying to
 reach African users, there's also KIX ala Kenya.

The exchange point in Nairobi is called KIXP, not KIX, in case it  
helps avoid that confusion. The KIXP is The Place to reach Kenyan  
users, but no ISPs from parts of Africa outside Kenya participate in  
it, as far as I know. http://www.kixp.net/.

Terrestrial paths between adjacent African countries are still somewhat
rare. I don't have science to back this up, but I would not be surprised
if the topological centre of today's African Internet turned out to be
the LINX.


Joe




RE: ISP filter policies

2006-02-14 Thread Frank Bulk

Same question here.  

We have a filtering appliance that filters for porn, etc. on a
subscription basis, but I've considered filtering phishing and spyware
sites for all our customers.  At what point does an ISP's desire to do
good infringe upon the 'rights' of those who accidentally hurt themselves
(many) and those who want to do everything (few)?
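
(One way to do the phishing/spyware piece without a separate appliance is
a DNS blackhole on the resolvers: declare a master zone for each known-bad
domain and point it at a local warning page.  A minimal BIND sketch, with
hypothetical zone and server names, and 192.0.2.1 standing in for the
warning-page host:

// named.conf on the resolvers: answer authoritatively for a bad domain
zone "phish-site.example" {
    type master;
    file "db.blackhole";
};

; db.blackhole: send the domain and everything under it to the warning page
$TTL 600
@   IN  SOA ns1.isp.example. hostmaster.isp.example. (
            2006021401 3600 600 86400 600 )
    IN  NS  ns1.isp.example.
    IN  A   192.0.2.1
*   IN  A   192.0.2.1

Customers using our resolvers would then land on the warning page instead
of the phishing site; anyone running their own resolver bypasses it, which
is maybe the right balance for the 'rights' question above.)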

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Ricardo V. Oliveira
Sent: Monday, February 13, 2006 10:35 PM
To: nanog@merit.edu
Subject: ISP filter policies


Hi, I would like to know where I can get info about ISP filter policies,
namely the use of community values. Is there any page other than
http://www.nanog.org/filter.html (lots of broken links)?

Thanks!

--Ricardo




RE: Quarantine your infected users spreading malware

2006-02-20 Thread Frank Bulk

We're one of those user/broadband ISPs, and I have to agree with the other
commentary that setting up an appropriate filtering system (whether per
user, port, or conversation) across all our internet access platforms would
be difficult.  Put it on the edge and you miss the intra-network traffic;
put it in the core and you need a box on every router, which for a larger
or geographically distributed ISP could be cost-prohibitive.

In relation to that ThreatNet model, we just wish there were a place where
we could quickly and accurately aggregate information about the bad things
our users are doing -- a combination of RBL listings, abuse@, SenderBase,
MyNetWatchman, etc.  We don't have our own traffic monitoring and analysis
system in place, and even if we did, I'm afraid our work would still be
very reactive.

And for the record, we are one of those ISPs that blocks ports 139 and 445
on our DSLAM and CMTS, and we've not received one complaint, but I'm
confident it has cut down on a host of infections.
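
(On a Cisco-style CMTS the filter amounts to something like the following
sketch -- hypothetical interface name, and the DSLAM side has its own
vendor-specific syntax:

access-list 110 deny   tcp any any eq 139
access-list 110 deny   tcp any any eq 445
access-list 110 permit ip  any any
!
interface Cable3/0
 ip access-group 110 in

Applied inbound on the customer-facing interface, it keeps infected hosts
from hunting for open Windows shares beyond the access network.)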

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Gadi
Evron
Sent: Monday, February 20, 2006 3:41 PM
To: nanog@merit.edu
Subject: Quarantine your infected users spreading malware


Many ISPs who do care about issues such as worms and infected users
spreading the love simply do not have the man-power to handle their entire
population of infected users.

It is becoming more and more obvious that the answer may not lie at the
ISP's doorstep, but ISPs are indeed a critical part of the solution. What
their eventual role in user safety will be I can only guess, but it is
clear (to me) that this subject is going to become a lot hotter in coming
years.

Aunty Jane (as Dr. Alan Solomon (drsolly) likes to call your average
user) is your biggest risk to the Internet today, and none of us has a
good idea quite yet of how to fix the user. Especially since it's not
quite one, as the Heinlein quote below suggests.

Some who are user/broadband ISPs (not, say, tier-1s and tier-2s, who would
be against it: "don't be the Internet's firewall") have been blocking ports
such as 139 and 445 for a long time now, successfully preventing many of
their users from becoming infected. This is also an excellent first step
for responding to relevant outbreaks and halting their progress.

Philosophy aside, it works. It stops infections. Period.

Back to the philosophy, there are some other solutions as well. Plus, should
this even be done?

One of them has been around for a while, but is just now beginning to
mature: quarantining your users.

Quarantining infected users may sound a bit harsh, but consider: if a user
is indeed infected and does spread the joy on your network as well as
others', and you could simply firewall him (or her) off from the world (a
VLAN, or other solutions which may be far better), letting him (or her) go
only to a web page explaining the problem, it's pretty nifty.

As many of us know, handling such users through tech support is not very
cost-effective for ISPs; if a user makes a call, the ISP already loses
money on that user. Then again, paying abuse desk personnel just so that
they can disconnect your users is losing money too.

Which one would you prefer?

Jose (Nazario) points to many interesting papers on the subject on his
blog: http://www.wormblog.com/papers/

Is it the ISP's place to do this? Should the ISP do this? Does the ISP have
a right to do this?

If the ISP is nice enough to do it, and users know the ISP might, why not?

This (as well as port blocking) is more true for organizations other than
ISPs, but for genuine user/broadband ISPs, I see this as both the effective
and the ethical thing to do, provided the users are notified this might
happen when they sign their contracts. Then the whole "don't be the
Internet's firewall" debate goes away.

I respect the "don't be the Internet's firewall" position, not only for
the sake of the cause but also because friends such as Steven Bellovin and
others believe in it a lot more strongly than I do. But bigger issues, such
as the safety of the Internet, exist now. That doesn't mean user rights are
to be ignored, but certainly neither should ours be, especially if theirs
are mostly unaffected.

I believe both are good and necessary solutions, but every organization
needs to choose what is best for it, rather than follow some pre-determined
blueprint. What's good for one may be horrible for another.

"You don't approve? Well too bad, we're in this for the species, boys and
girls. It's simple numbers: they have more, and every day I have to make
decisions that send hundreds of people, like you, to their deaths." --
Carl Jenkins, Starship Troopers, the movie.

I don't think the second part of the quote is quite right (to say the
least), but I felt bad leaving it out; it's Heinlein, after all... anyone
who claims he is a fascist, though, will have to deal with me. :) This
isn't only about users, it's about the bad guys and how they outnumber us,
too. They have far

RE: Wiltel has gone pink.

2006-03-14 Thread Frank Bulk

This discussion is now drifting back to the one we had several weeks ago
about properly and adequately staffing the abuse desk (email, phone, and
otherwise) in spite of the temptation to take advantage of the
'efficiencies' of scale.  It's beyond me how an abuse@ desk can afford to
drop emails via its spam filter, unless the required spamminess value is
set *very* high.  Again, auto-responding to spam email can just perpetuate
the spam, though it is effective for those legitimate senders whose email
was marked as spam.

Anyone want to start a pool to guess when Level3 will update the Wiltel
contact records with the correct Level3 information? =)

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Simon Lyall
Sent: Tuesday, March 14, 2006 3:35 PM
To: [EMAIL PROTECTED]
Subject: Re: Wiltel has gone pink.


On Tue, 14 Mar 2006, Jo Rhett wrote:
 Complete and utter incompetence (i.e. spam filtering their abuse
 mailbox)

Considering the amount of spam that abuse mailboxes get, spam filtering
them is actually a good idea. You just have to be a little careful not to
block the complaints.

One way I did this was to look for a Received: header in the body of the
suspected spam and allow the message through even if it would otherwise be
rejected. A backup for that was to have the reject say "Please include the
word 'xyzzy' in the subject to bypass the filters" and allow anything with
that through (which happened less than once per month).
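
(A rough sketch of that accept logic in procmail terms -- the real setup
was presumably at the MTA, since it issued SMTP-time rejects, but the
tests translate roughly like this, assuming SpamAssassin's spamc:

# bypass word in the subject: deliver straight to the abuse mailbox
:0:
* ^Subject:.*xyzzy
$DEFAULT

# body quotes a Received: header, so it looks like a complaint with the
# offending spam attached: deliver it unfiltered
:0B:
* ^Received:
$DEFAULT

# everything else goes through the spam filter
:0fw
| spamc
)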

--
Simon J. Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
"To stay awake all night adds a day to your life" - Stilgar | eMT.




RE: DS3/OC3 to FE bridge?

2006-03-16 Thread Frank Bulk

I just saw this in today's VON FOCUS on Hardware newsletter:
RAD INTRODUCES MINIATURE ETHERNET OVER T1/T3 BRIDGE AT OFC/NFOEC
http://www.radusa.com/Home/0,6583,2519,00.html
but the link doesn't reveal anything.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
matthew zeier
Sent: Wednesday, March 15, 2006 6:48 PM
To: nanog@merit.edu
Subject: DS3/OC3 to FE bridge?



I'm looking for something that can take a DS3 or OC3 and turn it into
FE.  Basically, something similar to what http://www.ds3switch.com/ does.



RE: AT&T: 15 Mbps Internet connections irrelevant

2006-04-01 Thread Frank Bulk

The majority of U.S.-based IP TV deployments are not using MPEG-4; in
fact, you would be hard-pressed to find an MPEG-4 capable STB working with
middleware.

SD MPEG-2 runs at ~4 Mbps today and HD MPEG-2 at ~19 Mbps.  With ADSL2+
you can get up to 24 Mbps per home on very short loops, but if you look at
the loop length/rate graphs, you'll see that even with VDSL2 only the very
short loops will have sufficient capacity for multiple HD streams.  FTTP/H
is inevitable.
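
A quick sanity check with those numbers shows why:

  2 HD streams:         2 x 19 Mbps       = 38 Mbps  (> 24 Mbps ADSL2+ best case)
  1 HD + 2 SD streams:  19 + (2 x 4) Mbps = 27 Mbps  (still > 24 Mbps)

and that's before any Internet data or voice rides the same loop.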

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Edward B. DREGER
Sent: Saturday, April 01, 2006 1:16 AM
To: [EMAIL PROTECTED]
Subject: Re: AT&T: 15 Mbps Internet connections irrelevant


MA Date: Sat, 1 Apr 2006 08:34:36 +0200 (CEST)
MA From: Mikael Abrahamsson

MA http://arstechnica.com/news.ars/post/20060331-6498.html
MA 
MA In the foreseeable future, having a 15 Mbps Internet capability is

[ snip ]

MA Is this something held generally true in the US, or is it just 
MA pointed hair-talk? Sounds like nobody should need more than 640kb 
MA of memory all over again.

I think the Comcast and cheaper cable plant references answer your
question.  With new AT&T adverts, political lobbying, selling retail DSL
below loop/backhaul-only pricing, and consolidation costs, how much money
is left over for last-mile upgrades?

Call me cynical.  I just seem to recall AT&T ads in US news magazines
bragging about backbone size _and_ the large portion of Internet traffic
they [supposedly] carry.  (I say "supposedly" because the claims might be
technically true, but misleading, when traffic passes over AT&T _lines_ via
other providers' IP networks.  Shades of UUNet and Sprint[link] from years
gone by, anyone?)

So... uh... assuming all three claims -- "backbone is the bottleneck", "we
have big backbone capacity", and "we carry big chunks of Internet traffic"
-- are true... I'm puzzling over what appears a bit paradoxical.

The IPTV reference is also amusing.  Let's assume a channel can be encoded
at 1.0 Mbps -- roughly a 1.5 hr show on a CD-ROM.  I don't see two
simultaneous programs, Internet traffic, and telephone fitting on a DSL
connection.
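
(Checking that figure:

  1.0 Mbps x 90 min x 60 sec/min / 8 bits/byte = 675 MB

which is indeed about one CD-ROM.)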

Perhaps the real question is which regulatory agency, or shareholders,
needed to hear what the article said. ;-)


Eddy
--
Everquick Internet - http://www.everquick.net/
A division of Brotsman & Dreger, Inc. - http://www.brotsman.com/
Bandwidth, consulting, e-commerce, hosting, and network building
Phone: +1 785 865 5885 Lawrence and [inter]national
Phone: +1 316 794 8922 Wichita

DO NOT send mail to the following addresses:
[EMAIL PROTECTED] -*- [EMAIL PROTECTED] -*- [EMAIL PROTECTED]
Sending mail to spambait addresses is a great way to get blocked.
Ditto for broken OOO autoresponders and foolish AV software backscatter.



RE: AT&T: 15 Mbps Internet connections irrelevant

2006-04-01 Thread Frank Bulk

Sorry if I wasn't clear, but I meant IP-based STBs, like those made by
Amino, Entone, i3 Micro, Motorola's Kreatel, Cisco's Scientific-Atlanta,
Wegener, and Sentivision, and middleware from vendors such as Infogate,
Microsoft, Minerva, Orca Interactive, and Siemens' Myrio.  And now that
content providers are starting to require encryption, none of these
earlier pairings can actually be used unless they include a conditional
access solution from the likes of Irdeto, Latens, Nagravision, Verimatrix,
or Widevine.

DIRECTV does not use an IP-based STB, AFAIK, and delivers its content to
consumers via satellite, not over AT&T's last-mile infrastructure, which is
what initiated this thread.

Frank

-Original Message-
From: Matt Ghali [mailto:[EMAIL PROTECTED] 
Sent: Saturday, April 01, 2006 6:05 PM
To: Frank Bulk
Cc: [EMAIL PROTECTED]
Subject: RE: AT&T: 15 Mbps Internet connections irrelevant

On Sat, 1 Apr 2006, Frank Bulk wrote:

 Yes, there are quite a few MPEG4-capable STB vendors with lots of 
 middleware vendors standing behind them, but I challenge you to 
 document one STB/middleware combination in GA.  I haven't seen it.  
 Talk to me in six months, and it will be a different story.

err. directv?

matto

[EMAIL PROTECTED]<darwin>
   "Moral indignation is a technique to endow the idiot with dignity."
 - Marshall McLuhan



RE: Verizonwireless.com Blacklisted SMTP

2006-04-25 Thread Frank Bulk



This posting on broadbandreports.com might add some background to your
issues:
http://www.broadbandreports.com/shownews/73818

Regards,

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Chris Riling
Sent: Monday, April 24, 2006 3:12 PM
To: nanog@merit.edu
Subject: Verizonwireless.com Blacklisted SMTP

Hi,

What's up with VZW blacklisting a large amount of IP space on their MXes?
I have tried sending mail to verizonwireless.com from several boxes on
Cogent's network to no avail... Is Cogent on the "bad boy list" yet again?
Anyone have a useful contact there? We're not able to correspond with our
sales rep via email and they're losing business. I have contacted some NOC
engineers with no luck...

Trying 162.115.163.69...
Connected to mars.verizonwireless.com.
Escape character is '^]'.
554-venus.verizonwireless.com
554-Your access to the VZW mail systems has been rejected due to the
554-sending MTA or Network Service Provider's poor reputation / e-mail
554-hygiene on the Internet.
554-
554-Please reference the following URL for more information:
554-http://www.senderbase.org/search?searchString=
554-
554 If you believe that this failure is in error, please contact the
recipient via alternate means.
Connection closed by foreign host.

Thanks,
Chris


RE: Geo location to IP mapping

2006-05-15 Thread Frank Bulk

Quova seems to be the premier service: http://www.quova.com/ 

I read a story on them some time ago and was left with the impression that
all the other players are rookies, but then again, you will probably pay
heavily for this service.

Geobytes is another one I've played with.

We're a small ISP, and I know they've never asked for our ranges, so the
best any of these could do would be on a multi-county basis.  For kicks I
would like to try an IP address from each of our subnets and see how they
do.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Ashe
Canvar
Sent: Monday, May 15, 2006 11:36 AM
To: [EMAIL PROTECTED]
Subject: Geo location to IP mapping


Hi all,

Can any of you please recommend some IP-to-geo mapping database / web
service ?

I would like to get resolution down to city if possible.

Thanks and Regards,
-ashe



RE: private ip addresses from ISP

2006-05-23 Thread Frank Bulk

While we're on the topic, perhaps I should ask for some best practices
(where 'best' equals one for every listserv member) on the use of RFC 1918
addresses within a network provider's infrastructure.

We use private addresses for some stub routes, as well as our cable modems.
Should we aggressively move away from private stub networks?  And for the
second, should we specifically limit access to those cable modem IPs to just
our management network ?  Right now any of customers could do an SNMP sweep
and identify them all, but I don't really care that much about that, or
should I?

And yes, I do have RFC 1918 filters on our outbound traffic. =)
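
(A sketch of the kind of filter I mean, in hypothetical Cisco-style syntax
with made-up ranges -- say the modems live in 10.128.0.0/9 and the
management network is 10.1.0.0/24:

access-list 120 permit udp 10.1.0.0 0.0.0.255 10.128.0.0 0.127.255.255 eq snmp
access-list 120 deny   udp any 10.128.0.0 0.127.255.255 eq snmp
access-list 120 permit ip any any

applied inbound on the customer-facing aggregation interfaces.)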

Frank




RE: voip calea interfaces

2006-06-20 Thread Frank Bulk

USTelecom has put on a free webinar about this, with guests from VeriSign.
It might be of interest.
http://www.ustelecom.org/events.php?urh=home.events.web2006_0615

Frank 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Eric
A. Hall
Sent: Tuesday, June 20, 2006 11:49 AM
To: nanog list
Subject: voip calea interfaces



I'm looking into the FCC ruling to require CALEA support for certain
classes of VoIP providers, as upheld by the DC circuit court a couple of
weeks ago [1]. The portion of VoIP that is covered by this order is pretty
narrow (ie, you provide telephony-like voip services for $$ [read the
specs for the real definition]), and the FCC is looking at narrowing it
down further but has not done so yet. Meanwhile, the deadline for
implementation -- May 14, 2007 -- is starting to get pretty close.

The operational part of this subject, and the reason for this mail, is the
implementation of the wiretap interface. Obviously there are going to be a
range of implementation approaches, given that there are a wide variety of
providers. I mean, big-switch users probably just enable a feature, but
small providers that rely on IP PBX gear with FXO cards will have to do
something specific. Are vendors stepping up to the plate? Did you even
know about this?

Off-list is fine, and I'll summarize if there's interest.

Thanks

[1] http://pacer.cadc.uscourts.gov/docs/common/opinions/200606/05-1404a.pdf

-- 
Eric A. Hall                                      http://www.ehsco.com/
Internet Core Protocols          http://www.oreilly.com/catalog/coreprot/



RE: voip calea interfaces

2006-06-20 Thread Frank Bulk

Sorry, I should have given a link to the actual archived copy:
http://w.on24.com/r.htm?e=24039&s=1&k=38C852E931DEFE2A92A709EDE5FCF209&partnerref=website

The master list of events can be found on this page:
http://www.ustelecom.org/webinars.php?urh=home.events.webinars

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Frank Bulk
Sent: Tuesday, June 20, 2006 3:14 PM
To: nanog list
Subject: RE: voip calea interfaces


USTelecom has put on a free webinar about this, with guests from VeriSign.
It might be of interest.
http://www.ustelecom.org/events.php?urh=home.events.web2006_0615

Frank 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Eric
A. Hall
Sent: Tuesday, June 20, 2006 11:49 AM
To: nanog list
Subject: voip calea interfaces



I'm looking into the FCC ruling to require CALEA support for certain
classes of VoIP providers, as upheld by the DC circuit court a couple of
weeks ago [1]. The portion of VoIP that is covered by this order is pretty
narrow (ie, you provide telephony-like voip services for $$ [read the
specs for the real definition]), and the FCC is looking at narrowing it
down further but has not done so yet. Meanwhile, the deadline for
implementation -- May 14, 2007 -- is starting to get pretty close.

The operational part of this subject, and the reason for this mail, is the
implementation of the wiretap interface. Obviously there are going to be a
range of implementation approaches, given that there are a wide variety of
providers. I mean, big-switch users probably just enable a feature, but
small providers that rely on IP PBX gear with FXO cards will have to do
something specific. Are vendors stepping up to the plate? Did you even
know about this?

Off-list is fine, and I'll summarize if there's interest.

Thanks

[1] http://pacer.cadc.uscourts.gov/docs/common/opinions/200606/05-1404a.pdf

-- 
Eric A. Hall                                      http://www.ehsco.com/
Internet Core Protocols          http://www.oreilly.com/catalog/coreprot/




RE: Who wants to be in charge of the Internet today?

2006-06-26 Thread Frank Bulk

Sometimes we can't get hold of each other's NOCs during 'peacetime';
imagine in times of disaster!
Frank 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Mikael Abrahamsson
Sent: Monday, June 26, 2006 2:43 PM
To: nanog@merit.edu
Subject: Re: Who wants to be in charge of the Internet today?


On Mon, 26 Jun 2006, Wayne E. Bouchard wrote:

 something like 75% service restoration. The independent efforts of
 individuals and individual companies will probably be the best
 mechanism for repairing any injury to the 'net.

Totally agree.

What needs to be in place are lines of communication between these 
individuals and their management, both within and between companies and 
authorities. Much can be accomplished by spreading information about 
what equipment is lacking, etc.

If everybody just agrees to "fix it all, and deal with the commercial 
issues afterwards", wonders can be achieved in a very short time. But 
will, authority, communication and information need to exist.

The biggest example I can think of was during the worst storm in the last 
50 years here in Sweden; there was much devastation in telecommunications 
and power, most of it power related (power lines torn down). In the EU 
there are contingency plans to handle this, and countries can request help 
from other countries to get access to their disaster relief equipment, 
such as generators. What DOES need to be in place is someone to pay for 
the transportation of this equipment. The head of the Swedish state agency 
handling this didn't have the authority and budget to pay for the 
transportation, so he had to call and more or less beg one of the power 
companies to pay for it. This delayed the delivery of the equipment by 
some time, totally unnecessarily.

So a very important part of disaster planning is "how do we communicate 
with everybody involved?" and "what are our authorities regarding money 
and resources?" If there is a will, there is a way :P

-- 
Mikael Abrahamsson    email: [EMAIL PROTECTED]



RE: Qwest Long Distance Network

2006-07-21 Thread Frank Bulk

We are experiencing it, too.  We are being told by ZONETELECOM (which
purchased WRLD Alliance Communications a few months back) that a Nortel
switch in the Midwest is the cause of the trouble.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Wallace Keith
Sent: Friday, July 21, 2006 2:36 PM
To: S. Ryan; nanog@merit.edu
Subject: RE: Qwest Long Distance Network


We use Qwest as our LD provider at several call centers and have not had
any issues reported (and they ARE finicky!). Perhaps it's some sort of
issue between your provider and Qwest, or something more localized?
-Keith

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of
S. Ryan
Sent: Friday, July 21, 2006 2:56 PM
To: nanog@merit.edu
Subject: Qwest Long Distance Network



Perhaps not the best place to ask, but I thought I would try.

Anyone know or have more information on the Qwest 14-State LD Network 
Outage?

It's been going on for the better part of this morning.

At times, one cannot call LD from the Qwest network, nor can anyone call 
into the Qwest network.





RE: Hot weather and power outages continue

2006-07-24 Thread Frank Bulk



Depending on the state you live in, the PUC generally requires 4 to 8
hours of dialtone if it's generated from the C.O.  Dialtone generated from
an SLC may not be explicitly covered under the rules.

Regards,

Frank


From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Daniel Senie
Sent: Monday, July 24, 2006 5:47 PM
To: Brandon Galbraith
Cc: nanog@merit.edu
Subject: Re: Hot weather and power outages continue

At 06:26 PM 7/24/2006, Brandon Galbraith wrote:

> While hardwired (fiber/coax/copper) aggregation points usually don't have
> backup power on them, most cellular towers have either batteries or
> generators for backup power, correct?

We see good cable modem connectivity during power outages. Batteries must
still be good in the HFC nodes in our area. Verizon POTS service, on the
other hand, dies when the power does. The batteries in their SLC units are
toast. I've got the local police chief off conversing with the Verizon E911
folks to find out why it's OK to have no 911 service to a large part of
town when there's no power (Verizon repair kept trying to tell me it must
be my equipment, despite testing at the network interface).

I am looking at moving telephone services off to Comcast or VoIP because
they're more reliable than Verizon is, in my particular neighborhood.

> -brandon
>
> On 7/24/06, William S. Duncanson [EMAIL PROTECTED] wrote:
>
>> Indeed, my RoadRunner connection is the same way. All of my stuff stays
>> up, but "teh Interweb is broken." I'm guessing that they (DSL/CableCos)
>> find it too cost-prohibitive to roll out UPSes to the customer
>> aggregation points. Surprisingly, my cable TV goes out as well when the
>> power goes, so it might just be more than the CMTS that's going out.
>>
>> -Original Message-
>> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
>> Michael Loftis
>> Sent: Monday, July 24, 2006 16:20
>> To: nanog@merit.edu
>> Subject: Re: Hot weather and power outages continue
>>
>> --On July 24, 2006 2:22:26 AM -0400 Sean Donelan [EMAIL PROTECTED] wrote:
>>
>>> While its expected for individual customers to go down during power
>>> outages, usually because the customer does not have local backup
>>> power, it is less common for major web sites and co-location centers
>>> to experience downtime during power outages.
>>
>> Except if you're in Qwest territory. Apparently they don't put any
>> battery backup at their mini-DSLAMs and such. Every time we lose power,
>> I'm still up, but the DSL signal goes away. Haven't checked dialtone,
>> but I keep meaning to during the next outage.
>>
>> Now I know it's not exactly fair singling out Qwest, because I'll bet
>> Verizon and others share the same thing, and I'm pretty sure it's just
>> their ADSL service and not the voice service (I haven't checked though);
>> it's still becoming more and more common that as an individual user your
>> connection to the internet, unless you're paying for something other
>> than ADSL or Cable, will be just as affected by local power outages.
>
> --
> Brandon Galbraith
> Email: [EMAIL PROTECTED]
> AIM: brandong00
> Voice: 630.400.6992
> "A true pirate starts drinking before the sun hits the yard-arm. Ya. --thelost"

RE: Hot weather and power outages continue

2006-07-24 Thread Frank Bulk

Our small operation has outfitted our Calix shelves in the field with a
minimum of 8 hours of run time.  If they run low, we re-charge them with
portable generators.  We just consider it the cost of doing business.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Michael Loftis
Sent: Monday, July 24, 2006 4:20 PM
To: nanog@merit.edu
Subject: Re: Hot weather and power outages continue




--On July 24, 2006 2:22:26 AM -0400 Sean Donelan [EMAIL PROTECTED] wrote:

 While its expected for individual customers to go down during power
 outages, usually because the customer does not have local backup power, it
 is less common for major web sites and co-location centers to experience
 downtime during power outages.

Except if you're in Qwest territory.  Apparently they don't put any battery 
backup at their mini-DSLAMs and such.  Every time we lose power, I'm still 
up, but the DSL signal goes away.  Haven't checked dialtone, but I keep 
meaning to during the next outage.

Now I know it's not exactly fair singling out Qwest, because I'll bet 
Verizon and others share the same thing, and I'm pretty sure it's just 
their ADSL service and not the voice service (I haven't checked though); 
it's still becoming more and more common that as an individual user your 
connection to the internet, unless you're paying for something other than 
ADSL or Cable, will be just as affected by local power outages.




RE: Hot weather and power outages continue

2006-07-24 Thread Frank Bulk



In the state of Iowa the PUC is called the Iowa Utilities Board (IUB).
According to the IUB, each C.O. must provide 2 hours of battery reserve
(not 4, as I said), and if the C.O. serves over 4000 lines, a generator is
required.
http://www.legis.state.ia.us/Rules/2003/iac/199iac/19922/19922pp1.pdf
Page 26, 22.6(5)
Frank


From: Daniel Senie [mailto:[EMAIL PROTECTED]
Sent: Monday, July 24, 2006 9:12 PM
To: [EMAIL PROTECTED]
Cc: nanog@merit.edu
Subject: RE: Hot weather and power outages continue

At 09:59 PM 7/24/2006, Frank Bulk wrote:

> Depending on the state you live in, the PUC generally requires 4 to 8
> hours of dialtone if it's generated from the C.O.  Dialtone generated
> from an SLC may not be explicitly covered under the rules.

So when they moved about 1/2 of the town from CO to SLC, they got out of
the obligation to provide dial tone in a power outage? That seems, ummm,
interesting. Will be interesting to see what the police chief learns (I'm
on a town board that works closely with the police, so he and I get to
talk often and get along well). He was quite concerned to learn about the
telephone outages that happen.

Thanks for the info. I will follow up with the PUC folks as well and see
what they have to say. The Verizon service folks did tell me they expected
dialtone to work during power failures, and kept claiming it must be my
telephones that are at fault (plugging a very basic, known-functional POTS
phone into the network interface says they're wrong).



RE: OT: Good list for VoIP

2006-08-03 Thread Frank Bulk

The isp-voip list is pretty quiet, and probably not the caliber you're
looking for.

Frank 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Netfortius
Sent: Thursday, August 03, 2006 8:41 AM
To: nanog@merit.edu
Subject: Re: OT: Good list for VoIP


I've had some decent success with other lists from this site:

http://isp-lists.isp-planet.com/about/

so you may want to try their VoIP one. I cannot personally endorse that 
specific one, though, as I am not a subscriber.

Stefan

On Thursday 03 August 2006 07:20, Mike Callahan wrote:
 Sorry for the OT post but I'm wondering if anyone can recommend a good
list
 for ISP level VoIP discussion.  On that's focus is on technical issues
 would be preferred.

 Thanks,

 M. Callahan



RE: Anyone else lost power at Fisher Plaza this afternoon?

2006-08-05 Thread Frank Bulk

AFAIK, you don't need to have someone onsite to trip a breaker... if it
doesn't happen automatically, there are a multitude of SCADA systems
available to flip them on manually.  Unless, of course, the
electromechanical components that physically flip the breaker have failed.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Michael K. Smith
Sent: Friday, August 04, 2006 12:49 PM
To: Jim Popovitch
Cc: Nanog
Subject: Re: Anyone else lost power at Fisher Plaza this afternoon?


Hello Jim:


On 8/4/06 9:30 AM, Jim Popovitch [EMAIL PROTECTED] wrote:

 
 Michael K. Smith wrote:
 It was a breaker in the main bypass from city power to the generators.
 The breaker failed to close, so the generators happily fed power to
 nowhere.  Then, everyone's UPS failed and down we/they went.  The outage
 lasted approximately 26 minutes.
 
 Nobody checked to make sure that at least one of the UPSs showed a
 status of "ONLINE" instead of "ONBATTERY"?  Were there no UPSs
 configured to alert during continued and extended PF?  Surely people
 didn't just trust the sound/vibration of the running generator.
 
 -Jim P.

Indeed.  The problem was there wasn't an engineer on site who could manually
trip the breaker.  They got onsite pretty quickly, but not quickly enough to
trip the breaker in time to avoid an outage.  So we watched the UPS drain
all the way down which took about 24 minutes in our case.  So close, yet so
far.

Regards,

Mike




RE: [Fwd: Important ICANN Notice Regarding Your Domain Name(s)]

2006-10-05 Thread Frank Bulk

GoDaddy's abuse desk is not so easy to work with... on two different
occasions a whole /24 was blocked even though parts of the address space
were split between different providers (and customers), and GoDaddy would
hardly relent.  It took over a week to get that resolved.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Steve Sobol
Sent: Thursday, October 05, 2006 10:05 AM
To: Alexander Harrowell
Cc: Joe Abley; Chris Stone; [EMAIL PROTECTED]
Subject: Re: [Fwd: Important ICANN Notice Regarding Your Domain Name(s)]


On Thu, 5 Oct 2006, Alexander Harrowell wrote:

 Are you sure it's genuine? Those WWD domains (especially
 secureserver.net) account for a large fraction of the spam and
 phishing attempts I receive.

SecureServer.net is GoDaddy.

If you have domains hosted at GoDaddy or a reseller, your customer 
notifications come from that domain.

They also do web and email hosting, which is probably why you're seeing 
the abusive behavior, but they do have a working abuse desk, so if you see 
stuff from there, definitely report it.

-- 
Steve Sobol, Professional Geek ** Java/VB/VC/PHP/Perl ** Linux/*BSD/Windows
Apple Valley, California PGP:0xE3AE35ED

It's all fun and games until someone starts a bonfire in the living room.




RE: CO fire St. Johns Newfoundland

2006-10-21 Thread Frank Bulk

Apparently it was a DC power cable:
http://www.canada.com/topics/news/national/story.html?id=16fff79a-1848-41f9-
a635-ac645e423308&k=83532
Too much current?

Frank 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Dan
Armstrong
Sent: Saturday, October 21, 2006 2:53 AM
To: Sean Donelan
Cc: nanog@merit.edu
Subject: Re: CO fire St. Johns Newfoundland


I bet it was set by the codfather.

:-)



Sean Donelan wrote:



 Its been a while since the last big telephone central office fire.

 100,000+ lines are out of service in St. John's Newfoundland (Canada, 
 the other part of North America).






RE: Yahoo Postmaster contact, please

2006-11-03 Thread Frank Bulk

I have one customer that's been having trouble (it's not specific to him;
all of our ISP subs send out via well-known gateways).  The messages
started off with "451 Message temporarily deferred - 4.16.50" and the most
recent one was "Remote host said: 451 Message temporarily deferred -
[190]".  When I had this problem about 2 weeks ago I could manually
initiate a connection to any of Yahoo's MX records and get the first error
message.  After I filled out the Yahoo! Mail Feedback form on their website
(http://add.yahoo.com/fast/help/us/mail/cgi_defer/) I got this response
four days later:

===
Thank you for contacting Yahoo! Customer Care.

There appears to have been an incident involving capacity issues within 
our delivery infrastructure. The error message 451 Message temporarily 
deferred - 4.16.50 indicates that our MTAs are currently experiencing 
heavy, unusual traffic. You may retry sending at a later time when you 
see this message.

However please note that emails from the mail server(s) you are using 
may also have recently become deprioritized due to potential issues with
its mailings. 

These deprioritizations were temporary but may be re-triggered if the 
sending IP profile continues to be poor. Typically, deprioritizations 
are triggered by bad individual sender or MAIL FROM profiles. 
===

Googling for some of these error codes yields this Yahoo discussion
(http://tinyurl.com/y3nm6p) of a week ago that talks about greylisting and
site upgrades, so hopefully this goes away sooner rather than later.

Regards,

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
chuck goolsbee
Sent: Friday, November 03, 2006 3:42 PM
To: nanog@merit.edu
Subject: Re: Yahoo Postmaster contact, please


Greetings, NANOGers.  I've got a mail cluster that's been spooling about
5 messages for the past week or so (with very little drain and
traffic passing), and my mail admin reports that attempted contacts to
the Yahoo Postmaster are not getting answered.  Can someone over there
drop me a line off-list, please?

Welcome to a very NON-exclusive club Matt.

You are not alone*.  It seems as if every other mail server on the 
planet is having the same issue.

As for an actual human being at Yahoo getting back in touch with you, 
I suspect I'll be refereeing a Flyers** vs Chiefs*** ice hockey game 
in hell before that happens. However, if an actual human being 
affiliated with Yahoo does get back to you prior to the Zamboni being 
delivered to the netherworld, please pass them over my way when you're 
done with them.

--chuck


* http://www.forest.net/support/archives/2006/10/000792.php#000792
** http://en.wikipedia.org/wiki/Philadelphia_Flyers#Broad_Street_Bullies
*** http://en.wikipedia.org/wiki/Slap_Shot_%28film%29





RE: DNS - connection limit (without any extra hardware)

2006-12-08 Thread Frank Bulk
You could also look at Cloudshield.  I was following the EveryDNS issue this
weekend and this item among the regular VON press release blast jumped out
at me:
http://www.cloudshield.com/news_events/2006_Releases/EveryDNS%20FINAL.pdf
 
Regards,
 
Frank

  _  

From: Frank Bulk 
Sent: Friday, December 08, 2006 8:59 AM
To: '[EMAIL PROTECTED]'
Subject: DNS - connection limit (without any extra hardware)


Hi,
as a consequence of a virus spreading in my customer base, I often receive
big bursts of traffic on my DNS servers.
Unluckily, a lot of clients start to bombard my DNSs at a certain hour, so
I am facing a distributed attempt at denial of service.
I can't blacklist them on my DNSs, because there are too many infected
clients.

For this reason, I would like a DNS server to respond to a maximum of 10
queries per second from each single IP address.
Does anybody know a solution using just iptables/netfilter/kernel
tuning/BIND tuning, without any hardware traffic shaper?
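
(For reference, the kind of policy being asked about can be expressed with
netfilter's hashlimit match, assuming a Linux DNS host that has it -- a
sketch:

# accept up to 10 UDP queries/second per source IP (small burst allowed),
# drop everything over the limit; TCP/53 would need a matching pair
iptables -A INPUT -p udp --dport 53 -m hashlimit \
    --hashlimit 10/second --hashlimit-burst 20 \
    --hashlimit-mode srcip --hashlimit-name dns-per-ip \
    -j ACCEPT
iptables -A INPUT -p udp --dport 53 -j DROP
)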

Thanks
Best Regards

Luke




RE: Home media servers, AUPs, and upstream bandwidth utilization.

2006-12-26 Thread Frank Bulk

What hasn't yet been discussed is the upstream/downstream disparity on the
link to the upstream provider.  At least in our ISP operations, downstream
peaks out at about 3x the upstream, and downstream only dips to the
upstream utilization level in the wee hours of the morning.

I wouldn't mind if upstream utilization matched downstream rates as we're
essentially paying for downstream utilization, not upstream.  Are there more
pieces to the bandwidth puzzle that would start getting messed up if ISPs
and end-users were more symmetrical in their usage?

Frank

-Original Message-
From: Frank Bulk 
Sent: Sunday, December 24, 2006 8:56 PM
To: NANOG
Subject: Home media servers, AUPs, and upstream bandwidth utilization.

I recently purchased a Slingbox Pro, and have set it up so that I can  
remotely access/control my home HDTV DVR and stream video remotely.   
My broadband access SP specifically allows home users to run servers,  
as long as said servers don't cause a problem for the SP  
infrastructure or for other users and aren't doing anything illegal; as  
long as I'm not breaking the law or making problems for others, they  
don't care.

The Slingbox is pretty cool; when I access it, both the video and  
audio quality are more than acceptable.  It even works well when I  
access it via EVDO; on average, I'm pulling down about 450kb/sec up  
to about 580kb/sec over TCP (my home upstream link is a theoretical  
768kb/sec, minus overhead; I generally get something pretty close to  
that).

What I'm wondering is, do broadband SPs believe that this kind of  
system will become common enough to make a significant difference in  
traffic patterns, and if so, how do they believe it will affect their  
access infrastructures in terms of capacity, given the typical  
asymmetries seen in upstream vs. downstream capacity in many  
broadband access networks?  If a user isn't doing something like  
breaking the law by illegally redistributing copyrighted content, is  
this sort of activity permitted by your AUPs?  If so, would you  
change your AUPs if you saw a significant shift towards non-  
infringing upstream content streaming by your broadband access  
customers?  If not, would you consider changing your AUPs in order to  
allow this sort of upstream content streaming of non-infringing  
content, with the caveat that users can't cause problems for your  
infrastructure or for other users, and perhaps with a bandwidth cap?

Would you police down this traffic if you could readily classify it,  
as many SPs do with P2P applications?  Would the fact that this type  
of traffic doesn't appear to be illegal or infringing in any way lead  
you to treat it differently than P2P traffic (even though there are  
many legitimate uses for P2P file-sharing systems, the presumption  
always seems to be that the majority of P2P traffic is in illegally- 
redistributed copyrighted content, and thus P2P technologies seem  
to've acquired a taint of distaste from many quarters, rightly or  
wrongly).

Also, have you considered running a service like this yourselves, a  
la VoIP/IPTV?

Videoconferencing is somewhat analogous, but in most cases,  
videoconference calls (things like iChat, Skype videoconferencing,  
etc.) generally seem to use less bandwidth than the Slingbox, and it  
seems to me that they will in most cases be of shorter duration than,  
say, a business traveler who wants to keep up with Lost or 24 and so  
sits down to stream video from his home A/V system for 45 minutes to  
an hour at a stretch.

Sorry to ramble, this neat little toy just sparked a few questions,  
and I figured that some of you are dealing with these kinds of issues  
already, or are anticipating doing so in the not-so-distant future.   
Any insight or informed speculation greatly appreciated!


---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

All battles are perpetual.

   -- Milton Friedman







RE: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-06 Thread Frank Bulk

Colm:

What does the Venice Project see in terms of the number of upstreams
required to feed one view, and how much does the size of the upstream pipe
affect this all?  Do you see trends where 10 upstreams can feed one view if
they are at 100 kbps each, as opposed to 5 upstreams at 200 kbps each, or
is there no tight relation?  Supposedly FTTH-rich countries contribute much
more to P2P networks because they have a symmetrical connection and are
more attractive to the P2P clients.

And how much does being in the same AS help compare to being geographically
or hopwise apart?

Regards,

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Colm
MacCarthaigh
Sent: Saturday, January 06, 2007 8:08 AM
To: Robert Boyle
Cc: Thomas Leavitt; nanog@merit.edu
Subject: Re: Network end users to pull down 2 gigabytes a day, continuously?


On Sat, Jan 06, 2007 at 03:18:03AM -0500, Robert Boyle wrote:
 At 01:52 AM 1/6/2007, Thomas Leavitt [EMAIL PROTECTED] wrote:
 If this application takes off, I have to presume that everyone's 
 baseline network usage metrics can be tossed out the window...

That's a strong possibility :-) 

I'm currently the network person for The Venice Project, and busy
building out our network, but also involved in the design and planning
work and a bunch of other things. 

I'll try and answer any questions I can, I may be a little restricted in
revealing details of forthcoming developments and so on, so please
forgive me if there's later something I can't answer, but for now I'll
try and answer any of the technicalities. Our philosophy is to be pretty
open about how we work and what we do. 

We're actually working on more general purpose explanations of all this,
which we'll be putting on-line soon. I'm not from our PR dept, or a
spokesperson, just a long-time NANOG reader and occasional poster
answering technical stuff here, so please don't just post the archive
link to digg/slashdot or whatever. 

The Venice Project will affect network operators and we're working on a
range of different things which may help out there.  We've designed our
traffic to be easily categorisable (I wish we could mark a DSCP, but the
levels of access needed on some platforms are just too restrictive) and
we know how the real internet works. Already we have aggregate per-AS
usage statistics, and have some primitive network proximity clustering.
AS-level clustering is planned.

This will reduce transit costs, but there's not much we can do for other
infrastructural, L2 or last-mile costs. We're L3 and above only.
Additionally, we predict a healthy chunk of usage will go to our "Long
tail" servers, which are explained a bit here:

http://www.vipeers.com/vipeers/2007/01/venice_project_.html

and in the next 6 months or so, we hope to turn up at IX's and arrange
private peerings to defray the transit cost of that traffic too. 
Right now, our main transit provider is BT (AS5400) who are at some
well-known IX's.

 Interesting. Why does it send so much data? 

It's full-screen TV-quality video :-) After adding all the overhead for
p2p protocol and stream resilience we still only use a maximum of 320MB
per viewing hour. 

The more popular the content is, the more sources it can be pulled from
and the less redundant data we send, and that number can be as low as
220MB per hour viewed. (Actually, I find this a tough thing to explain
to people in general; it's really counterintuitive to see that more
peers == less bandwidth - I'm still searching for a useful user-facing
metaphor, anyone got any ideas?).

To put that in context; a 45 minute episode grabbed from a file-sharing
network will generally eat 350MB on-disk, obviously slightly more is
used after you account for even the 2% TCP/IP overhead and p2p protocol
headers. And it will usually take longer than 45 minutes to get there.

Compressed digital television works out at between 900MB and 3GB an hour
viewed (raw is in the tens of gigabytes). DVD is of the same order.
YouTube works out at about 80MB to 230MB per-hour, for a mini-screen
(though I'm open to correction on that, I've just multiplied the
bitrates out).
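
(For anyone checking my arithmetic, the conversion is simply:

  MB per viewed hour = bitrate in kbps x 3600 / 8 / 1000  ~  kbps x 0.45

so 200 kbps works out to ~90 MB/hour and 500 kbps to ~225 MB/hour.)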

 Is it a peer to peer type of system where it redistributes a portion
 of the stream as you are viewing it to other users?

Yes, though not necessarily as you are viewing it. A proportion of what
you have viewed previously is cached and can be made available to other
peers.

-- 
Colm MacCárthaigh                       Public Key: [EMAIL PROTECTED]



RE: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-12 Thread Frank Bulk

If we're becoming a VOD world, does multicast play any practical role in
video distribution?

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Michal Krsek
Sent: Wednesday, January 10, 2007 2:28 AM
To: Marshall Eubanks
Cc: nanog@merit.edu
Subject: Re: Network end users to pull down 2 gigabytes a day, continuously?


Hi Marshall,

 - the largest channel has 1.8% of the audience
 - 50% of the audience is in the largest 2700 channels
 - the least watched channel has ~ 10 simultaneous viewers
 - the multicast bandwidth usage would be 3% of the unicast.

I'm a bit skeptical about the future of channels. To make money from the
long tail, you have to adapt your distribution to users' needs. That is
not only format and codec ... but also time frame. You can organize your
programs into channels, but they will not run simultaneously for all the
users. I want to control my TV; I don't want my TV to jockey my life.

For the distribution, you as the content owner have to help the ISP find
the right way to distribute your content. For example: having a
distribution center in a Tier 1 ISP's network will make money from the
Tier 2 ISPs connected directly to that Tier 1. Probably, having a CDN
(your own, or a paid service) will be the only way to do large-scale
non-synchronous programming.

Regards
Michal




RE: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-12 Thread Frank Bulk

You mean the NCTC?  Yes, they did close their doors for new membership, but
there are regional head ends that represent a larger number of ITCs that
have been able to directly negotiate with the content providers.  

And then there are the turnkey vendors: IPTV Americas, SES Americom's IP-PRIME,
and Falcon Communications.

It's not entirely impossible.

Frank



From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Gian
Constantine
Sent: Wednesday, January 10, 2007 7:47 AM
To: [EMAIL PROTECTED]
Cc: Marshall Eubanks; nanog@merit.edu
Subject: Re: Network end users to pull down 2 gigabytes a day, continuously?


Many of the small carriers, who are doing IPTV in the U.S., have acquired
their content rights through a consortium, which has since closed its doors
to new membership. 

I cannot stress this enough: content is the key to a good industry-changing
business model. Broad appeal content will gain broad interest. Broad
interest will change the playing field and compel content providers to
consider alternative consumption/delivery models.

The ILECs are going to do it. They have deep pockets. Look at how quickly
they were able to get franchising laws adjusted to allow them to offer
video. 

Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.




RE: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-12 Thread Frank Bulk
Gian:
 
I haven't spoken to any of those turnkey providers.  Sounds like just the
hardware, plant infrastructure, and transport are turnkey. =)
 
Getting content rights is a [EMAIL PROTECTED]  That and the associated price tag are
probably the largest non-technical barriers to IP TV deployments today.
 
Frank


From: Gian Constantine [mailto:[EMAIL PROTECTED] 
Sent: Friday, January 12, 2007 9:24 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; Marshall Eubanks; nanog@merit.edu
Subject: Re: Network end users to pull down 2 gigabytes a day, continuously?


Yes, the NCTC. 

I have spoken with two of the vendors you mentioned. Neither has
pass-through licensing rights. I still have to go directly to most of the
content providers to get the proper licensing rights.

There are a few vendors out there who will help a company attain these
rights, but the solution is not turnkey on licensing. To be clear, it is not
turnkey for the major U.S. content providers.

Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.


On Jan 12, 2007, at 10:14 AM, Frank Bulk wrote:


You mean the NCTC?  Yes, they did close their doors for new membership, but
there are regional head ends that represent a larger number of ITCs that
have been able to directly negotiate with the content providers.  

And then there are the turnkey vendors: IPTV Americas, SES Americom's IP-PRIME,
and Falcon Communications.

It's not entirely impossible.

Frank



From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Gian
Constantine
Sent: Wednesday, January 10, 2007 7:47 AM
To: [EMAIL PROTECTED]
Cc: Marshall Eubanks; nanog@merit.edu
Subject: Re: Network end users to pull down 2 gigabytes a day, continuously?


Many of the small carriers, who are doing IPTV in the U.S., have acquired
their content rights through a consortium, which has since closed its doors
to new membership. 

I cannot stress this enough: content is the key to a good industry-changing
business model. Broad appeal content will gain broad interest. Broad
interest will change the playing field and compel content providers to
consider alternative consumption/delivery models.

The ILECs are going to do it. They have deep pockets. Look at how quickly
they were able to get franchising laws adjusted to allow them to offer
video. 

Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.






RE: Pac Rim Cable Damage Defies Repair [was: AFP article on Taiwan cable repair effort]

2007-01-17 Thread Frank Bulk

This article paints a rather dismal picture:

Despite optimistic estimates that it would take only three weeks
to repair the massive damage done to what are now said to be 
eight submarine cables by the Dec. 26, 2006, magnitude-6.7 
earthquake near Taiwan, reports today indicate that not one of 
the cables is back in service.
http://www.telecomweb.com/tnd/21168.html 

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Marshall Eubanks
Sent: Tuesday, January 16, 2007 3:33 PM
To: Robert E. Seastrom
Cc: Joel Jaeggli; Jim Segrave; Bill Woodcock; [EMAIL PROTECTED]
Subject: Re: AFP article on Taiwan cable repair effort


Furlongs per fortnight.

On Jan 16, 2007, at 3:46 PM, Robert E. Seastrom wrote:



 Joel Jaeggli [EMAIL PROTECTED] writes:

 Is it just me or is this article a migraine inducing mix of  
 metric and
 English measures?

 you're lucky they also didn't use nautical miles and fathoms (1.829
 meters in si units)...

 Leagues...  mustn't forget leagues.

 ---rob





RE: Wireless Network Question

2007-02-15 Thread Frank Bulk

If you forced your customers to use 802.1X for authentication, they wouldn't
get an IP address unless they were authorized.

If 802.1X is not in the mix, another solution is to give them a very short
lease (say 2 minutes) until they've completed web-based authentication, and
then give them the one-hour lease.  Any portal-based product for wireless
hotspots can help you out here.
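
The decision logic is trivial -- a minimal Python sketch of the idea, not
any particular vendor's API (the MAC set and the two lease values are
illustrative):

    authenticated = set()  # MACs that have passed the web portal

    def lease_seconds(mac):
        # two-minute lease until web auth completes, then the normal hour
        return 3600 if mac in authenticated else 120

    # the portal adds the MAC on a successful login, and the client
    # picks up the one-hour lease at its next (frequent) DHCP renewal
    authenticated.add("00:11:22:33:44:55")
    print(lease_seconds("00:11:22:33:44:55"))  # 3600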

Frank

-Original Message-
From: Frank Bulk 
Sent: Wednesday, February 14, 2007 5:40 PM
To: nanog@merit.edu
Subject: Wireless Network Question


Hello-  I'm looking for anyone that can send me some suggestions based on
experience with a wireless network.

My problem:
It is possible with our current wireless network that a situation could
arise where the IP address pool for a specific service location could be
exhausted due to Windows clients acquiring an IP address without being
authenticated. Thus, if we have a large event taking place in-market, the IP
addresses would be assigned and reassigned out (on a one-hour lease) to each
Windows client connected to the network, possibly quickly exhausting a small
IP address pool if enough clients were simultaneously up and connected. 

Does anyone have a good suggestion on how to avoid this from happening
(aside from over assigning and wasting IP addresses or ignoring the
problem)?  

Thank you for your time
Marla Azinger
Frontier Communications





RE: 96.2.0.0/16 Bogons

2007-02-26 Thread Frank Bulk

We found out last Thursday we were blocking that range (our customer base is
across the state line from this Midcon).  Our upstream internet provider,
who manages the BGP side of things, had had their automated Bogon update
process stalled since last fall. =)

Frank



From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Eric
Ortega
Sent: Monday, February 26, 2007 11:26 AM
To: nanog@merit.edu
Subject: 96.2.0.0/16 Bogons

I am a network engineer for Midcontinent Communications.  We are an ISP in the
American Midwest. Recently, we were allocated a new network assignment:
96.2.0.0/16. We've been having major issues with sites still blocking this
network. I normally wouldn't blanket post to the group, but I'm looking to
hit as many direct network engineers/operators as possible.  Would it be
possible to have people do a quick check on their inbound filters?

Thanks! 

Eric Ortega 
Midcontinent Communications 
Network Engineer 
605.357.5720 
[EMAIL PROTECTED] 




RE: 96.2.0.0/16 Bogons

2007-02-26 Thread Frank Bulk

Randy:

Sorry, our upstream provider's ASN is not listed in that
filter-candidates.txt document.

Kind regards,

Frank 

-Original Message-
From: Randy Bush [mailto:[EMAIL PROTECTED] 
Sent: Monday, February 26, 2007 4:34 PM
To: Frank Bulk
Cc: nanog@merit.edu
Subject: RE: 96.2.0.0/16 Bogons

 We found out last Thursday we were blocking that range (our customer base is
 across the state line from this Midcon).

frank, could your links be in

   http://psg.com/filter-candidates.txt

would love to know if anyone knows that indeed they were caught in
that list.  fyi, the ASNs listed are below.  would very much appreciate
  o if you see your asn listed
  o go to http://psg.com/filter-candidates.txt
  o get the links associated with your as
  o and tell us if indeed that link was filtering 96/8 97/8 or 98/8
about 22/23 jan 2007

thanks!

randy

174
209
286
293
701
702
703
721
1239
1267
1273
1299
1668
2152
2497
2500
2828
2854
2914
3257
3292
3320
3343
3356
3491
3549
3561
4323
4637
4755
4761
4766
5400
5539
5568
6429
6453
6461
6467
6471
6517
6619
6730
7018
7473
7474
7575
8468
8928
9942
10026
11908
11955
12180
12695
12956
13237
15412
15830
17557
19029
19094
19151
19158
19752
21318
21413
21414
21922
22291
22773
23342
23504
25462
25973
29226
29278
29668
32869

-30-




RE: FCC on wifi at hotel

2007-02-28 Thread Frank Bulk

While the hotel cannot prevent you from using Wi-Fi, they could:
a) restrict you from attaching equipment to their internet connection
(unless you contracted for that and the contract didn't restrict
attachments) or electrical outlets
b) ask you to leave and charge you for trespassing if you didn't

It's highly unlikely those renting facilities from the hotel would agree to
such onerous restrictions, and a hotel renting you the facilities is unlikely
to boot you out.

See:
http://www.wifinetnews.com/archives/007102.html
for some good coverage on the Massport incident.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Carl
Karsten
Sent: Wednesday, February 28, 2007 2:36 PM
To: nanog@merit.edu
Subject: FCC on wifi at hotel

me again.

So wifi at pycon 07 was 'better than 06' which I hear was a complete
disaster. 
More on 07's coming soon.

Now we are talking about wifi at pycon 08, which will be at a different
hotel 
(Crown Plaza in Rosemont, IL) and the question came up: Can the hotel
actively 
prevent us from using our own wifi?

_maney: although - wasn't the hotel stuck on our wifi or no wifi at last
report?

CarlFK: only the FCC can restrict radio

tpollari: it's their network and their power; the FCC has no legal right to
that.
and no, you show me where they do.  I'm not wasting my day with that tripe
-- 
the caselaw you're likely thinking of has to do with an airline and an
airport 
and the airline's lounge, in which case they're paying for the power and
paying 
for their bandwidth from a provider that's not the airport. We're not.

I know that there are all sorts of factors, and just cuz the FCC says boo
isn't 
the end of the story, but i don't even know what the FCC's position on this
is. 
  google gave me many hits, and after looking at 10 or so I decided to look 
elsewhere.

Carl K



SaidCom disconnected by Level 3 (former Telcove property)

2007-03-14 Thread Frank Bulk

http://www.phillyburbs.com/pb-dyn/articlePrint.cfm?id=1310151

Is this a normal thing for Level 3 to do, cut off small, responsive
providers?

Frank



RE: [funsec] Not so fast, broadband providers tell big users (fwd)

2007-03-14 Thread Frank Bulk

Could you please clarify that comment?  USF has made it possible for us to
serve DSL to almost every customer in our exchanges.

Frank 

-Original Message-
From: Frank Bulk 
Sent: Wednesday, March 14, 2007 6:50 AM
To: NANOG list
Subject: Re: [funsec] Not so fast, broadband providers tell big users (fwd)



On Mar 13, 2007, at 11:19 AM, Daniel Senie wrote:

 A universal service charge could be applied to all bills, with the  
 funds going to subsidize rural areas.

This is already done in the U.S., to no discernible effect.

---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

 Words that come from a machine have no soul.

   -- Duong Van Ngo





RE: [funsec] Not so fast, broadband providers tell big users (fwd)

2007-03-15 Thread Frank Bulk

In regards to gold-plating, it makes a difference if it's average-schedule
or cost-company.  If it's the latter, then yes, all actual costs are
included in building the rate base.

Frank

-Original Message-
From: Frank Bulk 
Sent: Thursday, March 15, 2007 6:48 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: [funsec] Not so fast, broadband providers tell big users (fwd)


 USF has made it possible for us to
 serve DSL to almost every customer in our exchanges.

I'm glad to hear it - the reports of how that fund is (un)used are  
almost overwhelmingly negative, I'm glad some folks, somewhere are  
benefiting from it.

There's a lot not to like about USF, notably the way it encourages
rural telcos to gold-plate everything to increase their rate base,
but it still does the same job it's done for the past 80 years or
so: make phone service in the boondocks affordable.

R's,
John

PS: My telco has about 8000 lines, but just in case we have a population
boom, their GTD-5 switch can expand to 100,000.




RE: SaidCom disconnected by Level 3 (former Telcove property)

2007-03-16 Thread Frank Bulk

I've been working at a smaller ISP (~4000 subs, plus businesses), and not
one customer has asked me if we're multi-homed.

When we or our upstream provider have a problem the telephones light up and
people act as if it's a big problem, but the reality is that they're not
communicating multi-homing, up front, as a business requirement.

Frank

-Original Message-
Sent: Friday, March 16, 2007 8:54 PM
To: nanog@merit.edu
Subject: Re: SaidCom disconnected by Level 3 (former Telcove property)


Almost ALL?

Any company, or any person for that matter, that relies on their
Internet connectivity for their livelihood should be multihomed.

-wil

On Mar 16, 2007, at 4:42 PM, Mike Hammett wrote:

 Almost ALL providers should be multihomed.

 --Mike

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On  
 Behalf Of
 virendra rode //
 Sent: Thursday, March 15, 2007 11:26 AM
 To: NANOG
 Subject: Re: SaidCom disconnected by Level 3 (former Telcove property)


 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Frank Bulk wrote:
 http://www.phillyburbs.com/pb-dyn/articlePrint.cfm?id=1310151

 Is this a normal thing for Level 3 to do, cut off small, responsive
 providers?

 Frank
 - 
 Just curious, should small responsive providers be multi-homed?



 regards,
 /virendra



 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.2.2 (GNU/Linux)
 Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

 iD8DBQFF+XOApbZvCIJx1bcRAtkwAJ9vNak3F8FlCf9VDycf6IlAr445nACg59kB
 w2OWAGdchd2XQyxxgZWQaug=
 =Yb1+
 -END PGP SIGNATURE-







RE: [funsec] Not so fast, broadband providers tell big users (fwd)

2007-03-23 Thread Frank Bulk

Don't confuse USF with ICC.  It's USF that you're contributing to directly
on your telephone bill and ICC through your long distance payments (which
relates to the AT&T case).

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Andy
Davidson
Sent: Tuesday, March 20, 2007 8:38 PM
To: Roland Dobbins
Cc: NANOG list
Subject: Re: [funsec] Not so fast, broadband providers tell big users (fwd)


On 13 Mar 2007, at 20:31, Roland Dobbins wrote:


 On Mar 13, 2007, at 11:19 AM, Daniel Senie wrote:

 A universal service charge could be applied to all bills, with the  
 funds going to subsidize rural areas.

 This is already done in the U.S., to no discernible effect.


That isn't *quite* the opinion that AT&T has ...

... http://gigaom.com/2007/02/07/atts-free-call-bill-2-million/


Although that is people using the rural kickback as a loophole to
provide free telephony to people outside the area... it still shows that
regulation always comes with an unexpected effect when times,
technology and ideas advance.

Cheers
-a





RE: On-going Internet Emergency and Domain Names

2007-03-31 Thread Frank Bulk

What about a worldwide clearing house where all registrars must submit their
domains for some basic verification?  

Naming: For phishing reasons. I think detection of possible trademark
violations would be too contentious.
Contact info: It's fine to use a proxy to hide true ownership to the public,
but the clearing house would verify telephone numbers and addresses against
public and private databases, and for those countries that don't have that
well built-out, something that ties payment (whether that be credit card,
bank transfer, or check) to a piece of identification as strong as a
passport.
Funding of such a clearing house: a flat fee per domain
Maintenance: It can't be a one-time event, but I'm not sure how this would
look.

Of course, the above is only utopia and the problem has to get much worse
before we'll see international cooperation.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Douglas Otis
Sent: Saturday, March 31, 2007 9:47 AM
To: Gadi Evron
Cc: nanog@merit.edu
Subject: Re: On-going Internet Emergency and Domain Names


On Sat, 2007-03-31 at 06:16 -0500, Gadi Evron wrote:

 Or we can look at it from a different perspective:
 Should bad guys be able to register thousands of domains with amazon and
 paypal in them every day? Should there be black hat malicious registrars
 around? Shouldn't there be an abuse route for domain names?
 
 One problem at a time, please.

Based on Lorenzen's data, domain tasting enables millions of domain
names to be in flux every day.  Exchanging lists this large with end users
is extremely costly.  When small handguns became a weapon of choice for
holdups, a waiting period was imposed to allow enforcement agencies time
to block exchanges.

Even when bad actors can be identified, a reporting lag of 12 to 24
hours in the case of global registries ensures there can be no
preemptive response.  If enforcement at this level is to prevent crime,
registries would need to help by providing some advanced notice.
Perhaps all registries should be required to report public details of
domain name additions 24 hours in advance of the same details being
published in the TLD zones.

-Doug




RE: On-going Internet Emergency and Domain Names

2007-03-31 Thread Frank Bulk

For some operations or situations 24 hours would be too long a time to wait.
There would need to be some mechanism where the delay could be bypassed.

Frank

-Original Message-
From: Douglas Otis [mailto:[EMAIL PROTECTED] 
Sent: Saturday, March 31, 2007 4:05 PM
To: [EMAIL PROTECTED]
Cc: nanog@merit.edu
Subject: RE: On-going Internet Emergency and Domain Names


On Sat, 2007-03-31 at 11:09 -0500, Frank Bulk wrote:
 On Sat, 31 Mar 2007 07:46:47 -0700, Douglas Otis wrote:
  
  Even when bad actors can be identified, a reporting lag of 12 to 24
  hours in the case of global registries ensures there can be no
  preemptive response.  If enforcement at this level is to prevent crime,
  registries would need to help by providing some advanced notice.
  Perhaps all registries should be required to report public details of
  domain name additions 24 hours in advance of the same details being
  published in the TLD zones.
 

 What about a worldwide clearing house where all registrars must submit
 their domains for some basic verification?

Rather than a clearinghouse, require gTLDs, ccTLDs, and SLDs establish
rules regarding access to a 24 hour preview of zone transfers.
Establish some type of international domain dispute resolution agency
that responds to hold requests made by recognized legal authorities.

Establishing transfers for the next day's zone provides extremely
valuable information that would significantly aid efforts in fighting
crime.  An advanced warning permits deployment of preemptive
technologies.  This technology could be bind10, but there are other
solutions as well.

Legal authorities should also be able to request holds placed on
specific domains when the minimal details appear related to criminal
activity, such as names commonly used for look-alike attacks.  Only then
would additional information become relevant, and be handled by the
domain dispute resolution agency.  They would not be a general
clearinghouse.

 Naming: For phishing reasons. I think detection of possible trademark
 violations would be too contentious.

Agreed.

 Contact info: It's fine to use a proxy to hide true ownership to the public,
 but the clearing house would verify telephone numbers and addresses against
 public and private databases, and for those countries that don't have that
 well built-out, something that ties payment (whether that be credit card,
 bank transfer, or check) to a piece of identification as strong as a
 passport.

While this sounds like an excellent idea, it also seems unlikely that the
current levels of trust permit a broad sharing of such detail in the
fashion of a clearinghouse.  Just a 24 hour advance peek at tomorrow's
zone file would not represent any additional data preparation, nor would
this be information someone wishes to keep private.  After all, there is
competition between registrars. 

 Funding of such a clearing house: a flat fee per domain
 Maintenance: It can't be a one-time event, but I'm not sure how this would
 look.

Perhaps registries should be allowed to charge a small fee to cover just
the expense related to the transfers.  

 Of course, the above is only utopia and the problem has to get much worse
 before we'll see international cooperation.

The financial damage caused by crime taking advantage of DNS features to
then dance rapidly over the globe should justify rather minor changes to
the current mode of registry operations.

-Doug




RE: GoDaddy's abuse procedures [was: ICANNs role [was: Re: On-going ...]]

2007-04-07 Thread Frank Bulk

While you have your friend's ear, ask him why they maintain a spam policy of
blocking complete /24's when:
a) the space has been divided into multiple sub-blocks and assigned to
different companies, all well-documented and queryable in ARIN
b) there have been repeated pleas to whitelist a certain IP in separate
sub-block that is only being punished for the behavior of others in a
different sub-block.

Frank

-Original Message-
Sent: Tuesday, April 03, 2007 8:20 AM
To: '[EMAIL PROTECTED]'
Cc: '[EMAIL PROTECTED]'
Subject: Re: ICANNs role [was: Re: On-going ...]

I think the shutdown of seclists.org by GoDaddy is a perfect example of 
exactly why the registrars should NOT be making these decisions.

I know the head abuse guy at Godaddy.  He is a reasonable person.  He
turns off large numbers of domains but he is human and makes the
occasional mistake.  The fact that everyone cites the same mistake
tells me that he doesn't make very many of them.  If you demand that
the shutdown process be perfect and never make any mistakes ever, even
ones that involve peculiar e-mail failures that are fixed in a day or
two, you're saying there can't be any shutdown process at all.

If you want a really simple, and probably very effective first step,
then stop domain tasting. It doesn't help anyone but the phishers.

Actually, I have never seen any evidence that phishers use domain
tasting.  Phishers use stolen credit cards, so why would they bother
asking for a refund?  The motivation for tasting is typosquatting and
monetization, parking web pages full of pay per click ads on them.
Tasting is a bad idea that should go away, but phishing isn't the
reason.

R's,
John





RE: Abuse procedures... Reality Checks

2007-04-07 Thread Frank Bulk

Joe:

I understand your frustration and appreciate your efforts to contact the
sources of abuse, but why indiscriminately block a larger range of IPs than
what is necessary?  

Here's the /24 in question:
Combined Systems Technologies NET-CST (NET-207-177-31-0-1)
207.177.31.0 - 207.177.31.7
Elkader Public Library NET-ELKRLIB (NET-207-177-31-8-1)
207.177.31.8 - 207.177.31.15
Plastech Grinnell Plant NET-PLASTECH (NET-207-177-31-16-1)
207.177.31.16 - 207.177.31.31 (dial-up, according to DNS)
Griswold Telephone Co. NET-GRIS (NET-207-177-31-32-1)
207.177.31.32 - 207.177.31.63
Griswold Telephone Co. NET-GRIS2 (NET-207-177-31-64-1)
207.177.31.64 - 207.177.31.95 (dial-up, according to DNS)
Jesco Electrical Supplies NET-JESCOELEC (NET-207-177-31-96-1)
207.177.31.96 - 207.177.31.103
American Equity Investment NET-AMREQUITY (NET-207-177-31-104-1)
207.177.31.104 - 207.177.31.111
** open **
Butler County REC NET-BUTLERREC (NET-207-177-31-120-1)
207.177.31.120 - 207.177.31.127
Northeast Missouri Rural Telephone Co. NET-NEMR2
(NET-207-177-31-128-1)
207.177.31.128 - 207.177.31.191
Montezuma Mutual Telephone NET-MONTEZUMA (NET-207-177-31-192-1)
207.177.31.192 - 207.177.31.254 (dial-up, according to DNS) 

Block the /24 and you cause problems for potentially 8 other companies.  Now
the RBL maintainer, or in this case, GoDaddy, has to interact with 8 other
companies -- what a lot of work and overhead!  If they just dealt with the
problem in a more surgical manner they wouldn't have to deal with the other
companies asking for relief.  
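
The lookup itself is trivial.  A minimal Python sketch (occasional manual
use only -- bulk automated queries trip the RIRs' volume limits, as comes
up elsewhere in this thread) that asks ARIN's whois server which registered
sub-block an IP sits in:

    import socket

    def arin_whois(query, server="whois.arin.net"):
        # plain whois (RFC 3912): send the query, read until EOF
        with socket.create_connection((server, 43), timeout=10) as s:
            s.sendall((query + "\r\n").encode())
            data = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    break
                data += chunk
        return data.decode(errors="replace")

    # per the listing above, this should land in one of Griswold's blocks
    print(arin_whois("207.177.31.70"))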

Frank

-Original Message-
From: J. Oquendo [mailto:[EMAIL PROTECTED] 
Sent: Saturday, April 07, 2007 2:08 PM
To: nanog@merit.edu
Cc: Frank Bulk
Subject: Abuse procedures... Reality Checks

On Sat, 07 Apr 2007, Frank Bulk wrote:

 
 While you have your friend's ear, ask him why they maintain a spam policy of
 blocking complete /24's when:
 a) the space has been divided into multiple sub-blocks and assigned to
 different companies, all well-documented and queryable in ARIN
 b) there have been repeated pleas to whitelist a certain IP in separate
 sub-block that is only being punished for the behavior of others in a
 different sub-block.
 
 Frank

realitycheck

You're complaining of blocked /24's. I block off up to /6's from reaching
certain ports on my networks. Sound crazy? How many times should I contact
the netblock owner and here the same generic well you have to open up a
complaint with our abuse desk... golly gee Joseph. Only to have the same
repeat attacks over and over and over. Sure, I'll start out blocking the
offensive address, then shoot off an email here and there, even post to
this or another list or search Jared's list for a contact and ask them
politely Hey... I see X amount of attackers hitting me from your net
But how long should I go on for before I could just say to hell with
your users and network... They just won't connect. It's my own right to
when it comes to my network.

People complain? Sure, then I explain why, point out the fact that I
HAVE made attempts at resolutions to no avail. So should the entire
network be punished... No, but the engineers who now have to answer
THEIR clients on why they've been blacklisted surely are punished, aren't
they? Now they have to hear X amount of clients moan about not being
able to send a client, vendor or relative an email. They have to
either find an alternative method to connect, or complain to their
provider about connectivity issues.

Is it fair? Yes it's fair to me, my clients, networks, etc., that
I protect it. Is it fair to complain to deaf ears when those deaf
ears are the ones actually clueful enough to fix? On a daily basis
I have clients who should be calling customer service for issues
contact me directly. Know what I do? ... My best to fix it, enter
a ticket number on the issue and go about the day. One way or the
other I'm going to see the ticket/problem so will it kill me to
take a moment or two to fix something? Sure I will bitch moan and
yell about it, a minute later AFTER THE FIX since things of this
nature usually don't take that much time, guess what? Life returns
to normal.

http://www.infiltrated.net/bforcers/5thWeek-Organizations

Have a look will you? These are constant offending networks with
hosts that are repeatedly ssh'ing into servers I maintain. Is it
unfair to block off their entire netblock from connecting via
ssh to my servers? Hell no it isn't. If I have clients on this
netblock, in all honesty tough. Let them contact their providers
after I tell them their provider has been blocked because of the
garbage on their network. Let their provider do something before
I do because heaven knows how many times have I tried reaching
someone diplomatically before I went ahead and blocked their
entire /6 /7 /8 /9 /10 and so on from

RE: On-going Internet Emergency and Domain Names

2007-04-07 Thread Frank Bulk

One of the reasons that registrars are slow to take down sites that are paid
for with a credit card is that there is little financial incentive to do
so: they've lost the money already, so why have a department whose priority is
speed if you can hire a person to do it at their own pace and minimize the
loss?

For almost all things prudent and effective there needs to be a financial
incentive.  For those registrars who take stolen credit cards, it's the
rates and fees they are charged to process credit card transactions.  It
appears the rates that are charged and the penalties assessed aren't enough
to dissuade them from these fraudulent transactions, which means that the
monetary externalities of DNS registration abuse (spam, phishing sites, etc)
are not fully assessed by financial institutions.  We have a similar
parallel in the cost of gasoline and the impact on the environment.

Frank

-Original Message-
Sent: Monday, April 02, 2007 9:36 PM
To: David Conrad
Cc: Joseph S D Yao; nanog
Subject: Re: On-going Internet Emergency and Domain Names

On Mon, 2 Apr 2007, David Conrad wrote:



 On Apr 2, 2007, at 7:12 PM, Joseph S D Yao wrote:
  On Mon, Apr 02, 2007 at 05:33:08PM -0700, David Conrad wrote:
  I think this might be a bit in conflict with efforts registries have
  to reduce the turnaround in zone modification to the order of tens of
  minutes.
 
  Why is this necessary?  Other than the cool factor.

 I think the question is why should the Internet be constrained to
 engineering decisions made in 1992?

or victims of policy of that same 'vintage'... doing things faster isn't
bad, but doing it with fewer checks and balances and more people willing to
abuse the lack of checks/balances seems like a bad idea.  If you can get a
domain added to the system fresh in 5min or less, why does it take +90
days to get it removed when all data about the domain is patently false
and the CC used to purchase the domain was reported stolen 2+years ago?

I don't mean to pick on anyone in particular, but wow, to me this seems
like just a policy update requirement.




RE: Abuse procedures... Reality Checks

2007-04-07 Thread Frank Bulk

 On Sat, Apr 07, 2007 at 02:31:25PM -0500, Frank Bulk wrote:
  I understand your frustration and appreciate your efforts to contact the
  sources of abuse, but why indiscriminately block a larger range of IPs than
  what is necessary?
 
 1. There's nothing indiscriminate about it.
 
 I often block /24's and larger because I'm holding the 
 *network* operators responsible for what comes out of 
 their operation.  

Define network operator: the AS holder for that space or the operator of
that smaller-than-/24 sub-block?  If the problem consistently comes
from a /29, why not just leave that /29 block in place and be done with it?

I guess this begs the question: Is it best to block with a /32, /24, or some
other range?  Sounds a lot like throwing something against the wall and
seeing what sticks.  Or vigilantism.

 If they can't hold the outbound abuse down to a minimum, then 
 I guess I'll have to make up for their negligence on my end.  

Sure, block that /29, but why block the /24, /20, or even /8?  Perhaps your
(understandable) frustration is preventing you from agreeing with me on this
specific case.  Because what you usually see is an IP from a /20 or larger
and the network operators aren't dealing with it.  In the example I gave
it's really the smaller /29 that's the culprit; it sounds like you want to
punish a larger group, perhaps as large as an AS, for the fault of a smaller
network.

 I don't care why it happens -- they should have thought through 
 all this BEFORE plugging themselves in and planned accordingly.  
 (Never build something you can't control.)

Agreed.

 
 Neither I nor J. Oquendo nor anyone else are required to 
 spend our time, our money, and our resources figuring out which 
 parts of X's network can be trusted and which can't.  

It's not that hard; the ARIN records are easy to look up.  Figuring out that
a network operator has a /8 that you want to block based on 3 or 4 IPs in
their range requires just as much work.

 It is entirely X's responsibility to make sure that its _entire_ 
 network can be permitted the privilege of access to ours.  
 And (while I don't wish to speak for anyone else),
 I think we're prepared to live with a certain amount of low-level,
 transient, isolated noise.  

Noise like that is an inevitable part of the job.

 We are not prepared to live with persistent, systemic attacks 
 that are not dealt with even *after* complaints are
 filed.  (Which shouldn't be necessary anyway: if we can see inbound
 hostile traffic to our networks, surely X can see it outbound from
 theirs.  Unless X is too stupid, cheap or lazy to look.  Packets do
 not just fall out of the sky, y'know?)

Smaller operators, like those that require just a /29, often don't have that
infrastructure.  Those costs, as I'm sure you're aware, are passed on to
companies like yourself that have to maintain their own network's security.
Again, block them, I say, just don't swallow others up in the process.

 2. necessary is a relative term.
 
 Example: I observed spam/spam attempts from 3,599 hosts on 
 pldt's network  during January alone. I've blocked 
 everything they have, because I find it *necessary* 
 to not wait for the other N hosts on their network 
 to pull the same stunt.  I've found it *necessary* to take
 many other similar measures as well because my time, 
 money and resources are limited quantities, so I must 
 expend them frugally while still protecting the operation 
 from overtly hostile networks.  

That's my point: you want to spend time dealing with the other 8 networks
because you blacked them out, too?

 That requires pro-active measures and it requires ones 
 that have been proven to be effective.
 
 If X, for some value of X, is unhappy about this, then X should have
 thought of that before permitting large amounts of abuse to escape
 its operation over an extended period of time.  Had X done its job
 to a baseline level of professionalism, then this issue would not
 have arisen, and we'd all be better off for it.

Agreed, but economics usually dictate otherwise.
 
 So.  If you (generic you) can't keep your network from being 
 a persistent and systemic abuse source, then unplug it.  Now.

They want to run a business, too.  So when you blacklist they will end up
calling you asking for mercy, telling you that it's been cleaned up.
Inevitably something/someone gets infected, you black them out, rinse,
repeat.

 If on other hand, you decide to stick around anyway while letting the
 crap flow: no whining when other people find it necessary to 
 take steps to defend themselves from your incompetence.
 
 ---Rsk



RE: Abuse procedures... Reality Checks

2007-04-07 Thread Frank Bulk

If they're properly SWIPed, why punish the ISP for networks they don't even
operate, networks that obviously belong to their business customers?  And if the
granular blocking is effectively shutting down the abuse from that
sub-allocated block, didn't the network operator succeed in protecting
themselves?  Or is the netop looking to the ISP to push back on their
customers to clean up their act?  Or is the netop trying to teach the ISP a
lesson?  

Of course, it doesn't hurt to copy the ISP or AS owner for abuse issues from
a sub-allocated block -- you would hope that ISPs and AS owners would want
to have clean customers.  

Frank 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
william(at)elan.net
Sent: Saturday, April 07, 2007 5:58 PM
To: Fergie
Cc: [EMAIL PROTECTED]; nanog@merit.edu
Subject: Re: Abuse procedures... Reality Checks

On Sat, 7 Apr 2007, Fergie wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 - -- Rich Kulawiec [EMAIL PROTECTED] wrote:

 1. There's nothing indiscriminate about it.

 I often block /24's and larger because I'm holding the *network* operators
 responsible for what comes out of their operation.  If they can't hold
 the outbound abuse down to a minimum, then I guess I'll have to make
 up for their negligence on my end.  I don't care why it happens -- they
 should have thought through all this BEFORE plugging themselves in
 and planned accordingly.  (Never build something you can't control.)

 I would have to respectfully disagree with you. When network
 operators do due diligence and SWIP their sub-allocations, they
 (the sub-allocations) should be authoritative in regards to things
 like RBLs.

 $.02,

Yes. But the answer is that it also depends on how many other cases like
this exist from the same operator. If they have 16 suballocations in a /24
but, say, 5 of them are spewing, I'd block the /24 (or a larger ISP block).
The exact % of bad blocks (i.e. when to start blocking the ISP) depends
on your point of view and history with that ISP, but most in fact do
hold ISPs partially responsible.

-- 
William Leibzon
Elan Networks
[EMAIL PROTECTED]



RE: Abuse procedures... Reality Checks

2007-04-07 Thread Frank Bulk

Stephen:

Are you saying that if there's a nefarious IP out there we should automatically
blacklist the /24 of that IP?  J. Oquendo was describing his own methods and
they sounded quite manual, manual enough that he's getting down to a /8 as
necessary to blacklist a non-responsive operator.  My point is that if
you're going to block something, either block the /32 or do the research to
justify blocking a larger group.

And despite ToS, I think many operators are running automated lookups, and
there are lots of examples out there for ARIN.

Frank

-Original Message-
From: Stephen Satchell [mailto:[EMAIL PROTECTED] 
Sent: Saturday, April 07, 2007 5:44 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: Abuse procedures... Reality Checks

Frank Bulk wrote:
  [[Attribution deleted by Frank Bulk]]
 Neither I nor J. Oquendo nor anyone else are required to 
 spend our time, our money, and our resources figuring out which 
 parts of X's network can be trusted and which can't.  
 
 It's not that hard; the ARIN records are easy to look up.  Figuring out that
 a network operator has a /8 that you want to block based on 3 or 4 IPs in
 their range requires just as much work.

It's *very* hard to do it with an automated system, as such automated 
look-ups are against the Terms of Service for every single RIR out there.

Please play the bonus round:  try again.



RE: Abuse procedures... Reality Checks

2007-04-07 Thread Frank Bulk

That sounds like a very reasonable perspective and generally the route I
follow both as an operator and as someone who works with others.

Frank 

-Original Message-
From: william(at)elan.net [mailto:[EMAIL PROTECTED] 
Sent: Saturday, April 07, 2007 6:23 PM
To: Frank Bulk
Cc: nanog@merit.edu
Subject: RE: Abuse procedures... Reality Checks


On Sat, 7 Apr 2007, Frank Bulk wrote:

 If they're properly SWIPed, why punish the ISP for networks they don't even
 operate, networks that obviously belong to their business customers?

All ISPs have AUPs that prohibit spam (or at least I hope all of you do),
though they are enforced better at some places than at others... But the point
is that each and every customer ISP is responsible for following that
AUP and is responsible for making sure their customers follow it as well.
So to answer you: the view is that even if the ISP does not operate the
network, by providing services and IP addresses it does in fact operate it
at a higher level and is partially, directly responsible for what happens
there, including enforcing its AUP on its sub-ISPs or business customers
(and making sure they enforce the same AUP provisions on their customers).
A chain of responsibility, if you like to think of it that way...

 And if the granular blocking is effectively shutting down the abuse from
 that sub-allocated block, didn't the network operator succeed in protecting
 themselves?  Or is the netop looking to the ISP to push back on their
 customers to clean up their act?  Or is the netop trying to teach the ISP a
 lesson?

 Of course, it doesn't hurt to copy the ISP or AS owner for abuse issues from
 a sub-allocated block -- you would hope that ISPs and AS owners would want
 to have clean customers.

Yes, of course blocking a larger ISP block would happen only after attempts
to notify the ISP of the problem for each and every one of those subblocks
did not lead to any results.

 Frank

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
 william(at)elan.net
 Sent: Saturday, April 07, 2007 5:58 PM
 To: Fergie
 Cc: [EMAIL PROTECTED]; nanog@merit.edu
 Subject: Re: Abuse procedures... Reality Checks

 On Sat, 7 Apr 2007, Fergie wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 - -- Rich Kulawiec [EMAIL PROTECTED] wrote:

 1. There's nothing indiscriminate about it.

 I often block /24's and larger because I'm holding the *network* operators
 responsible for what comes out of their operation.  If they can't hold
 the outbound abuse down to a minimum, then I guess I'll have to make
 up for their negligence on my end.  I don't care why it happens -- they
 should have thought through all this BEFORE plugging themselves in
 and planned accordingly.  (Never build something you can't control.)

 I would have to respectfully disagree with you. When network
 operators do due diligence and SWIP their sub-allocations, they
 (the sub-allocations) should be authoritative in regards to things
 like RBLs.

 $.02,

 Yes. But the answer is that it also depends on how many other cases like
 this exist from the same operator. If they have 16 suballocations in a /24
 but, say, 5 of them are spewing, I'd block the /24 (or a larger ISP block).
 The exact % of bad blocks (i.e. when to start blocking the ISP) depends
 on your point of view and history with that ISP, but most in fact do
 hold ISPs partially responsible.

-- 
William Leibzon
Elan Networks
[EMAIL PROTECTED]



RE: Abuse procedures... Reality Checks

2007-04-07 Thread Frank Bulk

Robert:

You still haven't answered the question: how wide do you block?  You got an
IP address that you know is offensive.  Is your default policy to blacklist
just that one, do the /24, go to ARIN and find out the size of that block
and do the whole thing, or identify the AS and block traffic from the dozens
if not hundreds of allocations they have?  Only in the first two cases is no
research required, but I would hope that a network that wants to blacklist
(i.e. GoDaddy) would do a little bit of (automated) legwork to focus its
abuse control.

You also have too dim and narrow a view of customer relationships.  In my
case the upstream ISP is a member-owned cooperative of which the
sub-allocated space is either a member or a customer of a member.  1, 2, and
3 don't apply, rather, the coop works with their members to identify the
source of the abuse and shut it down.  It's not adversarial as you paint it
to be.  BTW, do you think the member-owned coop should be monitoring the
outflow of dozens of member companies and hundreds of sub-allocations they
have?

And it's not *riddled* with abuse, it's just one abuser, probably a dial-up
customer who is unwittingly infected, who while connected for an hour or two
sends out junk.  GoDaddy takes that and blacklists the whole /24, affecting
both large and small businesses alike who are in other sub-allocated blocks
in that /24.  Ideally, of course, each sub-allocated customer would have
their own /24 so that when abuse protection policies kick in and that
automatically blacks out a /24 only they are affected, but for address
conservation reasons that did not occur.  

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Robert Bonomi
Sent: Saturday, April 07, 2007 8:41 PM
To: nanog@merit.edu
Subject: RE: Abuse procedures... Reality Checks

 From: Frank Bulk [EMAIL PROTECTED]
 Subject: RE: Abuse procedures... Reality Checks
 Date: Sat, 7 Apr 2007 16:20:59 -0500

  If they can't hold the outbound abuse down to a minimum, then 
  I guess I'll have to make up for their negligence on my end.  

 Sure, block that /29, but why block the /24, /20, or even /8?  Perhaps your
 (understandable) frustration is preventing you from agreeing with me on this
 specific case.  Because what you usually see is an IP from a /20 or larger
 and the network operators aren't dealing with it.  In the example I gave
 it's really the smaller /29 that's the culprit; it sounds like you want to
 punish a larger group, perhaps as large as an AS, for the fault of a smaller
 network.

BLUNT QUESTIONS:  *WHO* pays me to figure out 'which parts' of a provider's
network are riddled with problems and 'which parts' are _not_?  *WHO* pays
me to do the research to find out where the end-user boundaries are?  *WHY*
should _I_ have to do that work -- if the 'upstream provider' is incapable of
keeping _their_own_house_ clean, why should I spend the time trying to figure
out which of their customers are 'bad guys' and which are not?

A provider *IS* responsible for the 'customers it _keeps_'.

And, unfortunately, a customer is 'tarred by the brush' of the reputation
of its provider.

 Smaller operators, like those that require just a /29, often don't have that
 infrastructure.  Those costs, as I'm sure you're aware, are passed on to
 companies like yourself that have to maintain their own network's security.
 Again, block them, I say, just don't swallow others up in the process.

If the _UPSTREAM_ of that 'small operator' cannot 'police' its own customers,
why should _I_ absorb the costs that _they_ are unwilling to internalize?

If they want to sell 'cheap' service without 'doing what is necessary', I
see no reason to 'facilitate' their cut-rate operations.

Those who buy service from such a provider, 'based on cost',  *deserve* what
they get, when their service doesn't work as well as that provided by the
full-price competition.

_YOUR_ connectivity is only as good as the 'reputation' of whomever it is 
that you buy connectivity from.

You might want to consider _why_ the provider *keeps* that 'offensive'
customer.  There would seem to be only a few possible explanations:  (1) they
are 'asleep at the switch', (2) that customer pays enough that they can
'afford' to have multiple other customers who are 'dis-satisfied', or who
may even leave that provider, (3) they aren't willing to 'spend the money'
to run a clean operation.  (_None_ of those seems like a good reason for _me_
to spend extra money 'on behalf of' _their_ clients.)




RE: Abuse procedures... Reality Checks

2007-04-07 Thread Frank Bulk

I guess our upstream provider is a nobody because they have lots of small
sub-allocated blocks less than a /24 that they route to different member
ISPs. =)

What is the point of blocking a /24 on the basis of a /32 if the ISP manages
dozens of other /24 or larger blocks?  If you're going to do it, block *all*
the IPs associated with the 'bad' ISP.  Then at least you're consistent;
otherwise expanding to a /24 is a half-done (or 1%-done) job, or laziness.

Frank

-Original Message-
From: Frank Bulk 
Sent: Saturday, April 07, 2007 10:45 PM
To: [EMAIL PROTECTED]
Subject: Re: Abuse procedures... Reality Checks


 Sure, block that /29, but why block the /24, /20, or even /8?

Since nobody will route less than a /24, you can be pretty sure that
regardless of the SWIPs, everyone in a /24 is served by the same ISP.

I run a tiny network with about 400 mail users, but even so, my
semiautomated systems are sending off complaints about a thousand
spams a day that land in traps and filters.  (That doesn't count about
50,000/day that come from blacklisted sources that I package up and
sell to people who use them to tune filters and look for phishes.)  I
log the sources, when a particular IP has more than 50 complaints in a
month I usually block it, if I see a bunch of blocked IP's in a range
I usually block the /24.  Now and then I get complaints from users
about blocked mail, but it's invariably from an individual IP at an
ISP or hosting company that has both a legit correspondent and a
spam-spewing worm or PHP script.  It is quite rare for an expansion to
a /24 to block any real mail.
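
In outline the heuristic is simple enough to sketch in Python (the 50
complaints per IP per month is the figure above; the widen-to-/24 trigger
of 4 blocked hosts is my arbitrary stand-in for "a bunch"):

    from collections import Counter
    from ipaddress import ip_network

    complaints = Counter()  # IP -> complaints this month
    blocked = set()         # ip_network objects we refuse mail from

    def record_complaint(ip, per_ip=50, per_24=4):
        complaints[ip] += 1
        if complaints[ip] >= per_ip:
            blocked.add(ip_network(ip + "/32"))       # block the host
        covering = ip_network(ip + "/24", strict=False)
        bad = sum(1 for b in blocked
                  if b.prefixlen == 32 and b.subnet_of(covering))
        if bad >= per_24:
            blocked.add(covering)                     # widen to the /24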

My goal is to keep the real users' mail flowing, to block as much spam
as cheaply as I can, and to get some sleep.  I can assure you from
experience that any sort of automated RIR WHOIS lookups will quickly
trip volume checks and get you blocked, so I do a certain number
manually, typically to figure out how likely there is to be someone
reading the spam reports.  But on today's Internet, if you want to get
your mail delivered, it would be a good idea not to live in a bad
neighborhood, and if your ISP puts you in one, you need a better ISP.
That's life.

Regards,
John Levine, [EMAIL PROTECTED], Primary Perpetrator of The Internet for
Dummies,
Information Superhighwayman wanna-be, http://www.johnlevine.com, ex-Mayor
More Wiener schnitzel, please, said Tom, revealingly.





RE: Abuse procedures... Reality Checks

2007-04-09 Thread Frank Bulk

The managed services they currently offer don't include egress filtering (L3
to L7) on their business customers' networks.

From the discussion here it sounds like naked pipes, even if properly
SWIPed, ought not to be sold, but that all traffic should be checked on the
way out.  It sounds like a good idea, but I'm guessing few network operators
do that for their customer networks, whether that's due to lack of
centralization or cost.

Frank

-Original Message-
From: Frank Bulk 
Sent: Monday, April 09, 2007 3:49 PM
To: 'nanog@merit.edu'
Subject: RE: Abuse procedures... Reality Checks


 If they're properly SWIPed, why punish the ISP for networks they don't even
 operate, networks that obviously belong to their business customers?

How can you tell that they don't operate a network from SWIP records? 

Seems to me that lots of network operators sell managed services to
businesses which means that the network operator is the one operating
the business customers' networks.

Let's face it, the whole SWIP system and whois directory concept was
poorly implemented way back in the 1980s and it is completely inadequate
on an Internet that is thousands of times larger than it was when SWIP
and whois were first developed. How many of you were aware that whois
was originally intended to record all users of the ARPAnet from each
site so that networking departments could justify the funds they were
spending on high-speed 56k frame relay links?

--Michael Dillon




RE: Abuse procedures... Reality Checks

2007-04-09 Thread Frank Bulk

That's been my entire point.  Network operators who properly SWIP don't get
credit for doing the legwork from other networks, which apply
quasi-arbitrary bit masks to their blocks anyway.

As I said before, if you're going to block a /24, why not do it right and
block *all* the IPs in their ASN?  My DSL and cable modem subscribers are
spread across a dozen non-contiguous /24s.  If the bothered network is upset
with one of my cable modem subs and blocks just one /24, they will be exposed
again when that CPE obtains a new IP in a different /24.
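
And finding *all* of an ASN's prefixes is not much work either: the routing
registries will return the route objects registered for an origin AS.  A
minimal Python sketch against RADb (the AS number is just an example from
earlier in the thread, and IRR data is only as complete as what the
operator registered):

    import socket

    def prefixes_for_asn(asn, server="whois.radb.net"):
        # '-i origin ASxxx' asks an IRR for route objects with that origin
        with socket.create_connection((server, 43), timeout=10) as s:
            s.sendall(("-i origin %s\r\n" % asn).encode())
            data = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    break
                data += chunk
        return [line.split()[1]
                for line in data.decode(errors="replace").splitlines()
                if line.startswith("route:")]

    print(prefixes_for_asn("AS5400"))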

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Pete
Templin
Sent: Monday, April 09, 2007 3:42 PM
To: Chris Owen
Cc: nanog@merit.edu
Subject: Re: Abuse procedures... Reality Checks


Chris Owen wrote:
 Well, well managed to me would mean that allocations from that /20 
 were SWIPed or a rwhois server was running so that if any of those 4,000 
 IP addresses does something bad you don't get caught in the middle.

Due diligence with SWIP/rwhois only means that one customer is well 
documented apart from another.  As this thread has highlighted, some 
people filter/block based on random variables: the covering /24, the 
covering aggregate announcement, and/or arbitrary bit lengths.  If a 
particular server is within the scope of what someone decides to 
filter/block, it gets filtered or blocked.  Good SWIPs/rwhois entries 
don't mean jack to those admins.

pt



RE: Abuse procedures... Reality Checks

2007-04-10 Thread Frank Bulk

Comcast is known to emit lots of abuse -- are you blocking all their
networks today?

Frank 

-Original Message-
From: Frank Bulk 
Sent: Tuesday, April 10, 2007 7:43 AM
To: nanog@merit.edu
Subject: Re: Abuse procedures... Reality Checks


On Sat, Apr 07, 2007 at 09:50:34PM +, Fergie wrote:
 I would have to respectfully disagree with you. When network
 operators do due diligence and SWIP their sub-allocations, they
 (the sub-allocations) should be authoritative in regards to things
 like RBLs.

After thinking it over: I partly-to-mostly agree.  In principle, yes.
In practice, however, [some] negligent network operators have built
such long and pervasive track records of large-scale abuse that their
allocations can be classified into two categories:

1. Those that have emitted lots of abuse.
2. Those that are going to emit lots of abuse.

In such cases, I'm not inclined to wait for (2) to become reality.

---Rsk





RE: Abuse procedures... Reality Checks

2007-04-11 Thread Frank Bulk

It truly is a wonder that Comcast doesn't apply DOCSIS config file filters
on their consumer accounts, leaving just the IPs of their email servers
open.  Yes, it would take an education campaign on their part for all the
consumers that do use alternate SMTP servers, but imagine how much work it
would save their abuse department in the long run.

Frank

-Original Message-
From: Frank Bulk 
Sent: Wednesday, April 11, 2007 5:10 PM
To: 'nanog@merit.edu'
Subject: Re: Abuse procedures... Reality Checks


On Tue, Apr 10, 2007 at 07:44:59AM -0500, Frank Bulk wrote:
 Comcast is known to emit lots of abuse -- are you blocking all their
 networks today?

All?  No.  But I shouldn't find it necessary to block ANY, and wouldn't,
if Comcast weren't so appallingly negligent.

( I'm blocking huge swaths of Comcast space from port 25.  This shouldn't
really surprise anyone; Comcast runs what may well be the most prolific
spam-spewing network in the world.  I saw attempts from 80,000+ distinct
IP addresses during January 2007 alone -- to a *test* mail server.
I should have seen zero.  The mitigation techniques for making that
happen are well-known, have been well-known for years, and can be
implemented easily by any competent organization.)

This, by the way, should not be taken as indicative of either what
I've done in the past or may do in the future.   Nor should it be
taken as indicative of what decisions I've made in re other networks.

---Rsk




RE: Limiting email abuse by subscribers [was: Abuse procedures... Reality Checks]

2007-04-12 Thread Frank Bulk

Leigh:

How many customers do you serve that you have just 50 exceptions?

It's my understanding that the most efficient way to keep things clean for
cable modem subscribers is to educate subscribers to use port 587 with SMTP
AUTH for both the ISP's own servers and their customer's external mail
server, and then block destination port 25 on the cable modem.  For
alternative access technologies, block destination port 25 on the access
gear or core routers/firewalls.
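
For what it's worth, here is what "port 587 with SMTP AUTH" looks like from
the subscriber side (a Python sketch; the server name, addresses, and
credentials are placeholders):

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "subscriber@example.net"
    msg["To"] = "friend@example.com"
    msg["Subject"] = "via the submission port"
    msg.set_content("Sent on 587 with SMTP AUTH; the port-25 block never applies.")

    with smtplib.SMTP("mail.example.net", 587) as s:
        s.starttls()                                 # encrypt first
        s.login("subscriber@example.net", "secret")  # then authenticate
        s.send_message(msg)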

Regards,

Frank

-Original Message-
From: Frank Bulk 
Sent: Thursday, April 12, 2007 7:48 AM
To: Mikael Abrahamsson
Cc: [EMAIL PROTECTED]
Subject: Re: Abuse procedures... Reality Checks


Mikael Abrahamsson wrote:

 On Wed, 11 Apr 2007, Frank Bulk wrote:

 It truly is a wonder that Comcast doesn't apply DOCSIS config file filters
 on their consumer accounts, leaving just the IPs of their email servers
 open.  Yes, it would take an education campaign on their part for all the
 consumers that do use alternate SMTP servers, but imagine how much work it
 would save their abuse department in the long run.

 There are several large ISPs (millions of subscribers) that have done
 away with TCP/25 altogether. If you want to send email thru the ISPs
 own email system you have to use TCP/587 (SMTP AUTH).

 Yes, this takes commitment and resources, but it's been done
 successfully.


You don't even need to do that. We just filter TCP/25 outbound and force
people to use our mail servers that have sensible rate limiting etc.
People who use alternate SMTP servers can fill in a simple web form to
have them added to the exception list. We have about 50 on this list so far.

--
Leigh Porter






RE: IP Block 99/8

2007-04-20 Thread Frank Bulk

Please provide a pingable IP address in each block so that we can check.
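
Something as simple as this would let us verify (a Python sketch; the .1
addresses below are placeholders, not known live hosts):

    import subprocess

    test_hosts = ["99.224.0.1", "99.240.0.1", "99.248.0.1",
                  "99.252.0.1", "99.253.128.1"]  # one per announced block

    for host in test_hosts:
        r = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                           capture_output=True)
        print(host, "ok" if r.returncode == 0 else "no reply (still filtered?)")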

Thanks,

Frank

-Original Message-
Sent: Friday, April 20, 2007 1:09 PM
To: 'nanog@merit.edu'
Subject: IP Block 99/8

Hi,

I am Shai from Rogers Cable Inc., an ISP in Canada.  We have the IP block
99.x.x.x assigned to our customers.  It happened to be a bogon block in
the past and was given to ARIN in Oct 2006.  As we have recently started
using this block, we are getting complaints from customers who are
unable to reach some web sites.  After investigation we found that there
are still some prefix lists/ACLs blocking this IP block.

We own the following blocks:

99.224.0.0/12
99.240.0.0/13
99.248.0.0/14
99.252.0.0/16
99.253.128.0/19

Please update your bogons list.
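
If you want to audit your own filters against these blocks, here is a
minimal sketch with Python's ipaddress module (the sample ACL entry below
is a placeholder for whatever is actually configured):

from ipaddress import ip_network

rogers = [ip_network(p) for p in (
    "99.224.0.0/12", "99.240.0.0/13", "99.248.0.0/14",
    "99.252.0.0/16", "99.253.128.0/19")]

# Replace with the prefixes configured in your bogon ACL/prefix list.
local_bogons = [ip_network("99.0.0.0/8")]

for acl_entry in local_bogons:
    for net in rogers:
        if acl_entry.overlaps(net):
            print("stale ACL entry %s still blocks %s" % (acl_entry, net))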

Shai.

end




RE: iPhone and Network Disruptions ...

2007-07-24 Thread Frank Bulk

Duke runs both Cisco's distributed and autonomous APs, I believe.  Kevin's
report on EDUCAUSE mentioned autonomous APs, but with details as hazy as
they are right now, I don't dare say whether one system or another caused or
received the problem.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Dale
W. Carder
Sent: Sunday, July 22, 2007 2:51 PM
To: Bill Woodcock
Cc: Sean Donelan; North American Network Operators Group
Subject: Re: iPhone and Network Disruptions ...



On Jul 21, 2007, at 8:52 PM, Bill Woodcock wrote:
 Cisco, Duke has now come to see the elimination of the problem, see:
 *Duke Resolves iPhone, Wi-Fi Outage Problems* at
 http://www.eweek.com/article2/0,1895,2161065,00.asp

 it's an ARP storm, or something similar, when the iPhone roams onto a new
 802.11 hotspot.  Apple hasn't issued a fix yet, so Cisco had to do an
 emergency patch for some of their larger customers.

As I understand, Duke is using cisco wireless controllers to run their
wireless network.  Apparently there is some sort of interop issue where
one system was aggravating the other to cause arp floods in rfc1918
space.

We've seen 116 distinct iPhones so far on our campus and have had sniffers
watching ARPs all week to look for any similar nonsense.  However, we
are running the APs in autonomous (regular IOS) mode without any magic
central controller box.

Dale

--
Dale W. Carder - Network Engineer
University of Wisconsin at Madison / WiscNet
http://net.doit.wisc.edu/~dwcarder





RE: iPhone and Network Disruptions ...

2007-07-24 Thread Frank Bulk

If you look at Kevin's example traces on the EDUCAUSE WIRELESS-LAN listserv
you'll see that the ARP packets are in fact unicast.

Iljitsch's point that iPhones remain on while crossing wireless switch
boundaries is dead on.  If you read the security advisory you'll see that
it involves either L3 roaming or two or more WLCs
that share a common L2 network.  Most wireless clients don't roam in such a
big way.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Iljitsch van Beijnum
Sent: Tuesday, July 24, 2007 4:35 PM
To: Prof. Robert Mathews (OSIA)
Cc: North American Network Operators Group
Subject: Re: iPhone and Network Disruptions ...


On 24-jul-2007, at 15:27, Prof. Robert Mathews (OSIA) wrote:

 Looking at this issue with an 'interoperability lens,' I remain
 puzzled by a personal observation that at least in the publicized
 case of Duke University's Wi-Fi net being affected, the ARP
 storms did not negatively impact network operations UNTIL the
 presence of iPhones on campus.  The nagging point in my mind
 therefore, is: why have other Wi-Fi devices (laptops, HPCs/PDAs,
 Smartphones etc.,) NOT caused the 'type' of ARP flooding, which was
 made visible in Duke's Wi-Fi environment?

Reading the Cisco document, the conclusion seems obvious: the iPhone
implements RFC 4436, whose unicast ARP probes cause the problem.

I don't have an iPhone on hand to test this and make sure, though.

The difference between an iPhone and other devices (running Mac OS
X?) that do the same thing would be that an iPhone is online while
the user moves around, while laptops are generally put to sleep prior
to moving around.




RE: Bee attack, fiber cut, 7-hour outage

2007-09-21 Thread Frank Bulk

There's a difference between folding a ring or pushing out a spoke to feed a
few customers and providing connectivity to a town.

I think building a SONET ring, or any kind of redundancy, has more to do
with a rural telco's commitment to its customers than the bottom line.
Remember, the building of plant contributes to the cost study, so it may end
up having zero cost in the end.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Wayne E. Bouchard
Sent: Friday, September 21, 2007 7:00 PM
To: Justin M. Streiner
Cc: nanog@merit.edu
Subject: Re: Bee attack, fiber cut, 7-hour outage


On Fri, Sep 21, 2007 at 04:49:22PM -0400, Justin M. Streiner wrote:
 Anytime you talk about rural I'm impressed with 7 hours, however --
 isn't SONET supposed to make this better?

 Sure, if:
 1. the protect path is configured and enabled
 2. both the working and protect paths don't run through the same
 conduit/duct/buffer

I am continually amazed at how often this is the case.

I realize that it's expensive to run these lines but when you put your
working and protect in the same cable or different cables in the same
trench (not even trenches a few feet apart, but the same trench and
same innerduct), you have to EXPECT that you're gonna have angry
customers.  And yet when telco folks learn that this has occurred, they
often feign being as surprised as the customers.

Truly amazing.

---
Wayne Bouchard
[EMAIL PROTECTED]
Network Dude
http://www.typo.org/~web/



RE: New TransPacific Cable Projects:

2007-09-24 Thread Frank Bulk

Here is a TeleGeography news article worth a quick read:
http://www.telegeography.com/cu/article.php?article_id=19783&email=html

It appears that the article assumes capacity will not be increased by
WDM upgrades... have the WDM systems already deployed on those links
reached the cables' maximum capacity with current technology?

Frank 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
[EMAIL PROTECTED]
Sent: Sunday, September 23, 2007 9:11 AM
To: nanog@merit.edu
Subject: RE: New TransPacific Cable Projects:


 Not to mention that the Taiwan straits earthquake showed a
 clear lack of physical diversity on a number of important
 Pacific routes, which I know some companies are laying fiber
 to address.

Anyone who took the trouble to read the two articles knows
that one of the two cables is a USA-to-China direct cable
that does not hop through Japan. This is really part of a
larger connectivity story for the People's Republic of China
along with the trans-Russia cable being built by Russia's
railway-backed TTC and China Unicom.
http://europe.tmcnet.com/news/2007/09/20/2954870.htm
I wouldn't be surprised if this is somehow connected with
GLORIAD as well. In any case, the USA-China direct route is
clearly avoiding the Taiwan Straits weak point.

And the other cable, which Google is involved in, is connecting
the USA and Australia, a country that has always had connectivity
issues, especially pricing issues. This has led to a much higher
use of web proxies in Australia to reduce international traffic
levels and this may be the key to why, Google, an application
developer and ASP/SaaS operator, is trying to build a cable link
to the major English language market in Asia-Pacific.

Seems to me both builds are addressing diversity issues in different
ways, and if this results in a bandwidth glut to the region, that
may be part of the plan.

--Michael Dillon



RE: New TransPacific Cable Projects:

2007-09-24 Thread Frank Bulk

What you said makes sense; I'm just pretty sure that eventually they'll come
up with a way to put 100 to 500 waves on a fiber.

Frank

-Original Message-
From: Rod Beck [mailto:[EMAIL PROTECTED] 
Sent: Monday, September 24, 2007 1:57 PM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]; nanog@merit.edu
Subject: RE: New TransPacific Cable Projects:

Here is a TeleGeography news article worth a quick read:
http://www.telegeography.com/cu/article.php?article_id=19783&email=html

It appears that the article assumes capacity will not be increased by
WDM upgrades... have the WDM systems already deployed on those links
reached the cables' maximum capacity with current technology?

Frank

I think you are going to find that the number of waves that can be put on an
undersea fiber is a function of the distance between the landing stations.
Obviously most TransPacific cables traverse greater distances and hence
probably cannot carry as many waves as TransAtlantic cables.

There is also a need for cables that are diverse from the existing cables.
So lighting more capacity will not solve the physical diversity problems
that were highlighted by the December earthquakes. 

Most modern undersea cables have four fiber pairs per cable. And each of
those fiber pairs can handle from 24 to 80 10 gig waves. 

Hibernia can do 80 10 gig waves, but only because we replaced the undersea
DWDM kit deployed at our landing stations.

Regards, 

- Roderick. 



RE: Yahoo! Mail/Sys Admin

2007-10-04 Thread Frank Bulk

You're right, they've shuffled things around.

Try this form:
http://help.yahoo.com/l/us/yahoo/mail/yahoomail/postmaster/defer.html

Regards,

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Justin Wilson
Sent: Thursday, October 04, 2007 8:55 AM
To: nanog@merit.edu
Subject: RE: Yahoo! Mail/Sys Admin



We've been having trouble sending to [EMAIL PROTECTED].  Getting the infamous
"421 Message from (x.x.x.x) temporarily deferred - 4.16.50.  Please refer
to http://help.yahoo.com/help/us/mail/defer/defer-06.html" response.


When I follow the referred link I get to
http://help.yahoo.com/l/us/yahoo/mail/original/abuse/abuse-60.html,
which then points you to this URL:
http://help.yahoo.com/fast/help/us/mail/cgi_defer which is supposed to
be a form.

Sadly, that link loops you back to the Yahoo mail login page.  Once you
login your choices are quite limited and are for basic E-mail help.
I've tried contacting yahoo through those links but I get a canned
reply.  It's been over a month of consistent deliverability issues to
Yahoo and we're not one step closer to solving the problem.  The one
thing I did notice is when I modified SPF to include the IP address
instead of the domain of the deferred MTA, E-mail would get through, but
only for a few days then it was back to deferral.
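
(Concretely, that was a change from a domain-based mechanism, something
like "v=spf1 include:mail.example.net ~all", to "v=spf1 ip4:192.0.2.25
~all" -- the host name and address here are placeholders -- so the
deferred MTA's address is listed directly rather than resolved through
the domain.)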

I've read the older posts on NANOG and various gripes about Yahoo
greylisting on google but all the leads have come to a dead end.  Does
anyone know an interactive yahoo contact they could share with me?

Thank you,
Justin Wilson





RE: BitTorrent swarms have a deadly bite on broadband nets

2007-10-22 Thread Frank Bulk

I wonder how quickly applications and network gear would implement QoS
support if the major ISPs offered their subscribers two queues: a default
queue, which handled regular internet traffic but squashed P2P, and then a
separate queue that allowed P2P to flow uninhibited for an extra $5/month,
but then ISPs could purchase cheaper bandwidth for that.

But perhaps at the end of the day Andrew O. is right and it's best off to
have a single queue and throw more bandwidth at the problem.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Joel
Jaeggli
Sent: Sunday, October 21, 2007 9:31 PM
To: Steven M. Bellovin
Cc: Sean Donelan; nanog@merit.edu
Subject: Re: BitTorrent swarms have a deadly bite on broadband nets


Steven M. Bellovin wrote:

 This result is unsurprising and not controversial.  TCP achieves
 fairness *among flows* because virtually all clients back off in
 response to packet drops.  BitTorrent, though, uses many flows per
 request; furthermore, since its flows are much longer-lived than web or
 email, the latter never achieve their full speed even on a per-flow
 basis, given TCP's slow-start.  The result is fair sharing among
 BitTorrent flows, which can only achieve fairness even among BitTorrent
 users if they all use the same number of flows per request and have an
 even distribution of content that is being uploaded.

 It's always good to measure, but the result here is quite intuitive.
 It also supports the notion that some form of traffic engineering is
 necessary.  The particular point at issue in the current Comcast
 situation is not that they do traffic engineering but how they do it.


Dare I say it, it might be somewhat informative to engage in a priority
queuing exercise like the Internet-2 scavenger service.

In one priority queue goes all the normal traffic, and it's allowed to
use up to 100% of link capacity; in the other queue goes the traffic
you'd like to deliver at lower priority, which given an oversubscribed
shared resource on the edge is capped at some percentage of link
capacity beyond which performance begins to noticeably suffer... when the
link is under-utilized, low-priority traffic can use a significant chunk
of it.  When high-priority traffic is present it will crowd out the
low-priority stuff before the link saturates.  Now obviously if
high-priority traffic fills up the link then you have a provisioning issue.

I2 characterized this as "worst effort" service.  Apps and users could
probably be convinced to set DSCP bits themselves in exchange for better
performance of interactive apps and control traffic vs. worst-effort
bulk data transfer.
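
(Setting the bits is the cheap part; a minimal sketch of the socket call
in Python, using CS1 as one plausible scavenger marking and a placeholder
peer:)

import socket

# DSCP occupies the upper six bits of the IP TOS byte; CS1 (DSCP 8, the
# value used by the I2 scavenger service) gives a TOS byte of 8 << 2.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 8 << 2)
s.connect(("peer.example.net", 6881))  # placeholder peer and port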

Obviously there's room for a discussion of net-neutrality in here
someplace. However the closer you do this to the cmts the more likely it
is to apply some locally relevant model of fairness.

   --Steve Bellovin, http://www.cs.columbia.edu/~smb





RE: BitTorrent swarms have a deadly bite on broadband nets

2007-10-22 Thread Frank Bulk

I'm not claiming that squashing P2P is easy, but apparently Comcast has
been successful enough to generate national attention, and the bandwidth
shaping providers are not totally a lost cause.

The reality is that copper-based internet access technologies -- dial-up,
DSL, and cable modems -- have made the design trade-off of substantially
more downstream than upstream.  With North American DOCSIS-based cable
modem deployments there is generally a 6 MHz wide band at 256 QAM
downstream while the upstream is only 3.2 MHz wide at 16 QAM (or even
QPSK).  Even BPON and GPON follow that same asymmetrical track.  And the
reality is that most residential internet access patterns reflect that
(whether it's a cause or a contributor, I'll let others debate).

Generally ISPs have been reluctant to pursue usage-based models because it
adds an undesirable cost and isn't as attractive a marketing tool to attract
customers.  Only in business models where bandwidth (local, transport, or
otherwise) is expensive has usage-based billing become a reality.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Crist Clark
Sent: Monday, October 22, 2007 7:16 PM
To: nanog@merit.edu
Subject: RE: BitTorrent swarms have a deadly bite on broadband nets

 On 10/22/2007 at 3:02 PM, Frank Bulk [EMAIL PROTECTED] wrote:

 I wonder how quickly applications and network gear would implement
 QoS support if the major ISPs offered their subscribers two queues:
 a default queue, which handled regular internet traffic but 
 squashed P2P, and then a separate queue that allowed P2P to flow 
 uninhibited for an extra $5/month, but then ISPs could purchase 
 cheaper bandwidth for that.

 But perhaps at the end of the day Andrew O. is right and it's best
 off to  have a single queue and throw more bandwidth at the problem.

How does one squash P2P?  How fast will BitTorrent start hiding its
trivial-to-spot "BitTorrent protocol" banner in the handshakes?  How
many P2P protocols are already blocking/shaping evasive?

It seems to me that what hurts the ISPs is the accompanying upload
streams, not the download (or at least the ISP feels the same
download pain no matter what technology their end user uses to get
the data[0]).  Throwing more bandwidth at it does not scale to the number
of users we are talking about.  Why not suck it up and go with the
economic solution?  Seems like the easy thing is for the ISPs to come
clean, admit their "unlimited" service is not, and put in upload
caps and charge for overages.

[0] Or is this maybe P2P's fault only in the sense that it makes
so much more content available that there is more for end-users
to download now than ever before.




RE: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Frank Bulk

Here's a few downstream/upstream numbers and ratios:
ADSL2+:     24/1.5    = 16:1  (sans Annex M)
DOCSIS 1.1: 38/9      = 4.2:1 (best-case up/downstream modulations and
            carrier widths)
BPON:       622/155   = 4:1
GPON:       2488/1244 = 2:1

Only the first is non-shared, so that even though the ratio is poor, a
person can fill their upstream pipe up without impacting their neighbors.

It's an interesting question to ask how much engineering decisions have led
to the point where we are today with bandwidth-throttling products, or if
that would have happened in an entirely symmetrical environment.

DOCSIS 2.0 adds support for higher levels of modulation on the upstream,
plus wider bandwidth
(http://i.cmpnet.com/commsdesign/csd/2002/jun02/imedia-fig1.gif), but still
not enough to compensate for the higher downstreams possible with channel
bonding in DOCSIS 3.0.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Jack
Bates
Sent: Monday, October 22, 2007 12:35 PM
To: Bora Akyol
Cc: Sean Donelan; nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?


Bora Akyol wrote:
 1) Legal liability due to the content being swapped.  This is not a
 technical matter IMHO.

Instead of sending an ICMP host unreachable, they are closing the connection
via spoofing.  I think it's kinder than just dropping the packets altogether.

 2) The breakdown of network engineering assumptions that are made when
 network operators are designing networks.

 I think network operators that are using boxes like the Sandvine box are
 doing this due to (2).  This is because P2P traffic hits them where it
 hurts, aka the pocketbook.  I am sure there are some altruistic network
 operators out there, but I would be sincerely surprised if anyone else was
 concerned about fairness


As has been pointed out a few times, there are issues with CMTS systems,
including maximum upstream bandwidth allotted versus maximum downstream
bandwidth.  I agree that there is an engineering problem, but it is not on
the part of network operators.  DSL fits in its own little world, but until
VDSL2 was designed, there were hard caps set on down speed versus up speed.
This has been how many last-mile systems were designed, even in
shared-bandwidth mediums.  More downstream capacity will be needed than
upstream.  As traffic patterns have changed, the equipment and the standards
it is built upon have become antiquated.

As a tactical response, many companies do not support the operation of
servers for last mile, which has been defined to include p2p seeding.  This
is their right, and it allows them to protect the precious upstream
bandwidth until technology can adapt to a high-capacity upstream as well as
downstream for the last mile.

Currently I show an average 2.5:1-4:1 ratio at each of my pops.  Luckily, I
run a DSL network.  I waste a lot of upstream bandwidth on my backbone.
Most downstream/upstream ratios I see on last-mile standards and equipment
derived from such standards aren't even close to 4:1.  I'd expect such
ratios if I filtered out the p2p traffic on my network.  If I ran a
shared-bandwidth last-mile system, I'd definitely be filtering unless my
overall customer base was small enough to not care about maximums on the
CMTS.

Fixed downstream/upstream ratios must die in all standards and
implementations.  It seems a few newer CMTSes are moving that direction
(though I note one I quickly found mentions its flexible ratio as a
"beyond DOCSIS 3.0" feature, which implies the standard is still
fixed-ratio), but I suspect it will be years before networks can adapt.


Jack Bates



RE: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Frank Bulk

With PCMM (PacketCable Multimedia,
http://www.cedmagazine.com/out-of-the-lab-into-the-wild.aspx) support it's
possible to dynamically adjust service flows, as has been done with
Comcast's Powerboost.  There also appears to be support for flow
prioritization.

Regards,

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Mikael Abrahamsson
Sent: Monday, October 22, 2007 1:02 AM
To: nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?


On Sun, 21 Oct 2007, Eric Spaeth wrote:

 They have.  Enter DOCSIS 3.0.  The problem is that the benefits of DOCSIS
 3.0 will only come after they've allocated more frequency space, upgraded
 their CMTS hardware, upgraded their HFC node hardware where necessary, and
 replaced subscriber modems with DOCSIS 3.0-capable versions.  On an
 optimistic timeline that's at least 18-24 months before things are going to
 be better; the problem is things are broken _today_.

Could someone who knows DOCSIS 3.0 (perhaps these are general
DOCSIS questions) enlighten me (and others?) by responding to a few things
I have been thinking about.

Let's say a cable provider is worried about aggregate upstream capacity for
each HFC node that might have a few hundred users.  Do the modems support
schemes such as "everybody is guaranteed 128 kilobit/s; if there is
anything to spare, people can use it, but it's marked differently in IP
PRECEDENCE and treated accordingly" to the HFC node, and then carry it
into the IP aggregation layer, where packets could also be treated
differently depending on IP PREC?

This is in my mind a much better scheme (guarantee subscribers a certain
percentage of their total upstream capacity, mark their packets
differently if they burst above this), as this is general and not protocol
specific. It could of course also differentiate on packet sizes and a lot
of other factors. Bad part is that it gives the user an incentive to
hack their CPE to allow them to send higher speed with high priority
traffic, thus hurting their neighbors.
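
The per-subscriber logic would amount to a token bucket that remarks
instead of drops; a toy sketch in Python (rate and burst are illustrative):

import time

class GuaranteeMarker:
    # Token bucket that demotes excess traffic rather than dropping it.
    def __init__(self, rate_bps=128000, burst_bytes=16000):
        self.rate = rate_bps / 8.0    # refill rate, bytes per second
        self.burst = float(burst_bytes)
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def classify(self, pkt_len):
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_len:
            self.tokens -= pkt_len
            return "guaranteed"   # within 128 kbit/s: normal IP PREC
        return "excess"           # burst traffic: marked down

marker = GuaranteeMarker()
print(marker.classify(1500))      # "guaranteed" until the bucket drains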

--
Mikael Abrahamssonemail: [EMAIL PROTECTED]



RE: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Frank Bulk

I don't see how this Oversi caching solution will work with today's HFC
deployments -- the demodulation happens in the CMTS, not in the field.  And
if we're talking about de-coupling the RF from the CMTS, which is what is
happening with M-CMTSes
(http://broadband.motorola.com/ips/modular_CMTS.html), you're really
changing an MSO's architecture.  Not that I'm dissing it, as that may be
what's necessary to deal with the upstream bandwidth constraint, but that's
a future vision, not a current reality.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Rich
Groves
Sent: Monday, October 22, 2007 3:06 PM
To: nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?


I'm a bit late to this conversation but I wanted to throw out a few bits of
info not covered.

A company called Oversi makes a very interesting solution for caching
Torrent and some Kad-based overlay networks as well, all done through some
cool, strategically placed taps and prefetching.  This way you could cache
out at whatever rates you want and mark traffic how you wish as well.  This
does move a statistically significant amount of traffic off of the upstream
and onto a gigabit-ethernet-attached (or similar) cache server, solving
large bits of the HFC problem.  I am a fan of this method as it does not
require a large footprint of inline devices, but rather a smaller footprint
of statistics-gathering sniffers and caches distributed in places that make
sense.

Also, the people at BitTorrent Inc have a cache discovery protocol so that
their clients have the ability to find cache servers with their hashes on
them.

I am told these methods are in fact covered by the DMCA but remember I am no
lawyer.

Feel free to reply direct if you want contacts


Rich


--
From: Sean Donelan [EMAIL PROTECTED]
Sent: Sunday, October 21, 2007 12:24 AM
To: nanog@merit.edu
Subject: Can P2P applications learn to play fair on networks?


 Much of the same content is available through NNTP, HTTP and P2P. The
 content part gets a lot of attention and outrage, but network engineers
 seem to be responding to something else.

 If its not the content, why are network engineers at many university
 networks, enterprise networks, public networks concerned about the impact
 particular P2P protocols have on network operations?  If it was just a
 single network, maybe they are evil.  But when many different networks
 all start responding, then maybe something else is the problem.

 The traditional assumption is that all end hosts and applications
 cooperate and fairly share network resources.  NNTP is usually considered
 a very well-behaved network protocol.  Big bandwidth, but sharing network
 resources.  HTTP is a little less behaved, but still roughly seems to
 share network resources equally with other users. P2P applications seem
 to be extremely disruptive to other users of shared networks, and causes
 problems for other polite network applications.

 While it may seem trivial from an academic perspective to do some things,
 for network engineers the tools are much more limited.

 User/programmer/etc education doesn't seem to work well.  Unless the
 network enforces a behavior, the rules are often ignored.  End users
 generally can't change how their applications work today even if they
 wanted to.

 Putting something in-line across a national/international backbone is
 extremely difficult.  Besides network engineers don't like additional
 in-line devices, no matter how much the sales people claim its fail-safe.

 Sampling is easier than monitoring a full network feed.  Using netflow
 sampling or even a SPAN port sampling is good enough to detect major
 issues.  For the same reason, asymmetric sampling is easier than requiring
 symmetric (or synchronized) sampling.  But it also means there will be
 a limit on the information available to make good and bad decisions.

 Out-of-band detection limits what controls network engineers can implement
 on the traffic. USENET has a long history of generating third-party cancel
 messages. IPS systems and even passive taps have long used third-party
 packets to respond to traffic. DNS servers been used to re-direct
 subscribers to walled gardens. If applications responded to ICMP Source
 Quench or other administrative network messages that may be better; but
 they don't.





RE: BitTorrent swarms have a deadly bite on broadband nets

2007-10-22 Thread Frank Bulk

A lot of the MDUs and apartment buildings in Japan are doing fiber to the
basement and then VDSL or VDSL2 in the building, or even Ethernet.  That's
how symmetrical bandwidth is possible.  Considering that much of the
population does not live in high-rises, this doesn't easily apply to the
U.S. population.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Leo
Bicknell
Sent: Monday, October 22, 2007 8:55 PM
To: nanog@merit.edu
Subject: Re: BitTorrent swarms have a deadly bite on broadband nets

In a message written on Mon, Oct 22, 2007 at 08:24:17PM -0500, Frank Bulk
wrote:
 The reality is that copper-based internet access technologies -- dial-up,
 DSL, and cable modems -- have made the design trade-off of substantially
 more downstream than upstream.  With North American DOCSIS-based cable
 modem deployments there is generally a 6 MHz wide band at 256 QAM
 downstream while the upstream is only 3.2 MHz wide at 16 QAM (or even
 QPSK).  Even BPON and GPON follow that same asymmetrical track.  And the
 reality is that most residential internet access patterns reflect that
 (whether it's a cause or a contributor, I'll let others debate).

Having now seen the cable issue described in technical detail over
and over, I have a question.

At the most recent Nanog several people talked about 100Mbps symmetric
access in Japan for $40 US.

This leads me to a few questions:

1) Is that accurate?

2) What technology do they use to offer the service at that price point?

3) Is there any chance US providers could offer similar technologies at
   similar prices, or are there significant differences (regulation,
   distance etc) that prevent it from being viable?

-- 
   Leo Bicknell - [EMAIL PROTECTED] - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
Read TMBG List - [EMAIL PROTECTED], www.tmbg.org



RE: BitTorrent swarms have a deadly bite on broadband nets

2007-10-24 Thread Frank Bulk
The key thing is that it can't be too complicated for the subscriber.  What
you've described is already too difficult for the masses to consume.  

 

The scavenger class, as has been described in other postings, is probably
the simplest way to implement things.  Let the application developers take
care of the traffic marking and expose priorities in the GUI, and the
marketing from the MSO needs to be "$xx.xx per month for general-use
internet, with unlimited bulk traffic for $y.yy".  Of course, the MSOs
wouldn't say that the first category excludes bulk traffic, or mention caps
or upstream limitations or P2P control, because that would be bad for
marketing.

 

Frank

 

From: Dorn Hetzel [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, October 24, 2007 8:12 AM
To: Joe Greco
Cc: [EMAIL PROTECTED]; nanog@merit.edu
Subject: Re: BitTorrent swarms have a deadly bite on broadband nets

 

How about a system where I tell my customers that for a given plan X at
price Y they get U bytes of high priority upload per month (or day or
whatever) and after that all their traffic is low priority until the next
cycle starts.  

Now here's the fun part.  They can mark the priority on the packets they
send (diffserv/TOS) and decide what they want treated as high priority and
what they want treated as not-so-high priority.

If I'm a low usage customer with no p2p applications, maybe I can mark ALL
my traffic high priority all month long and not run over my limit.  If I run
p2p, I can choose to set my p2p software to send all its traffic marked low
priority if I want to, and save my high-priority traffic quota for more
important stuff.

Maybe the default should be high priority so that customers who do nothing
but are light users get the best service.

low priority upstream traffic gets dropped in favor of high priority, but
users decide what's important to them. 

If I want all my stuff to be high priority, maybe there's a metered plan I
can sign up for so I don't have any hard cap on high priority traffic each
month but I pay extra over a certain amount.

This seems like it would be reasonable and fair and p2p wouldn't have to be
singled out. 
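
A rough sketch of the bookkeeping this needs, in Python (the quota value
and customer name are arbitrary):

from collections import defaultdict

QUOTA = 5 * 10**9        # U: high-priority bytes per cycle (arbitrary)
used = defaultdict(int)  # high-priority bytes each customer has spent

def queue_for(customer, pkt_len, marked_high):
    # Honor the customer's own marking until the quota is exhausted.
    if marked_high and used[customer] + pkt_len <= QUOTA:
        used[customer] += pkt_len
        return "high"
    return "low"         # over quota, or the customer marked it low

print(queue_for("cust-42", 1500, marked_high=True))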

Any thoughts?

On 10/22/07, Joe Greco [EMAIL PROTECTED] wrote:


 I wonder how quickly applications and network gear would implement QoS
 support if the major ISPs offered their subscribers two queues: a default
 queue, which handled regular internet traffic but squashed P2P, and then a

 separate queue that allowed P2P to flow uninhibited for an extra $5/month,
 but then ISPs could purchase cheaper bandwidth for that.

 But perhaps at the end of the day Andrew O. is right and it's best off to 
 have a single queue and throw more bandwidth at the problem.

A system that wasn't P2P-centric could be interesting, though making it
P2P-centric would be easier, I'm sure.  ;-)

The idea that Internet data flows would ever stop probably doesn't work 
out well for the average user.

What about a system that would /guarantee/ a low amount of data on a low
priority queue, but would also provide access to whatever excess capacity
was currently available (if any)? 

We've already seen service providers such as Virgin UK implementing things
which essentially try to do this, where during primetime they'll limit the
largest consumers of bandwidth for 4 hours.  The method is completely 
different, but the end result looks somewhat similar.  The recent
discussion of AU service providers also talks about providing a baseline
service once you've exceeded your quota, which is a simplified version of 
this.

Would it be better for networks to focus on separating data classes and
providing a product that's actually capable of quality-of-service style
attributes?

Would it be beneficial to be able to do this on an end-to-end basis (which 
implies being able to QoS across ASN's)?

The real problem with the throw more bandwidth solution is that at some
point, you simply cannot do it, since the available capacity on your last
mile simply isn't sufficient for the numbers you're selling, even if you 
are able to buy cheaper upstream bandwidth for it.

Perhaps that's just an argument to fix the last mile.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then
I
won't contact you again. - Direct Marketing Ass'n position on e-mail
spam(CNN) 
With 24 million small businesses in the US alone, that's way too many
apples.

 



RE: Internet access in Japan (was Re: BitTorrent swarms have a deadly bite on broadband nets)

2007-10-24 Thread Frank Bulk

Here's a timely article: KDDI says 900k target for fibre users 'difficult'
http://www.telegeography.com/cu/article.php?article_id=20215&email=html

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
David Andersen
Sent: Monday, October 22, 2007 9:21 PM
To: Leo Bicknell
Cc: nanog@merit.edu
Subject: Internet access in Japan (was Re: BitTorrent swarms have a deadly
bite on broadband nets)

On Oct 22, 2007, at 9:55 PM, Leo Bicknell wrote:

 Having now seen the cable issue described in technical detail over
 and over, I have a question.

 At the most recent Nanog several people talked about 100Mbps symmetric
 access in Japan for $40 US.

 This leads me to a few questions:

 1) Is that accurate?

 2) What technology do they use to offer the service at that price point?

 3) Is there any chance US providers could offer similar technologies at
similar prices, or are there significant differences (regulation,
distance etc) that prevent it from being viable?

http://www.washingtonpost.com/wp-dyn/content/article/2007/08/28/ 
AR2007082801990.html

The Washington Post article claims that:

Japan has surged ahead of the United States on the wings of better  
wire and more aggressive government regulation, industry analysts say.
The copper wire used to hook up Japanese homes is newer and runs in  
shorter loops to telephone exchanges than in the United States.

...

a)  Dense, urban area (less distance to cover)

b)  Fresh new wire installed after WWII

c)  Regulatory environment that forced telcos to provide capacity to
Internet providers

Followed by a recent explosion in fiber-to-the-home buildout by NTT.
"About 8.8 million Japanese homes have fiber lines -- roughly nine
times the number in the United States" -- particularly impressive
when you count that in per-capita terms.

Nice article.  Makes you wish...



   -Dave



RE: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Frank Bulk

Ah, but the reality is that you *think* you're paying for something, but the
operator never really intended to deliver it to you.

If anything, we need better full disclosure, preferably voluntary, and
failing that, legislatively required.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Paul
Ferguson
Sent: Friday, October 26, 2007 12:19 AM
To: [EMAIL PROTECTED]
Cc: nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

- -- Sean Donelan [EMAIL PROTECTED] wrote:

When 5% of the users don't play nicely with the rest of the 95% of
the users; how can network operators manage the network so every user
receives a fair share of the network capacity?

I don't know if that's a fair argument.

If I'm sitting at the end of 8Mb/768k cable modem link, and paying
for it, I should damned well be able to use it anytime I want.

24x7.

As a consumer/customer, I say Don't sell it it if you can't
deliver it. And not just sometimes or only during foo time.

All the time. Regardless of my applications. I'm paying for it.

- - ferg

-BEGIN PGP SIGNATURE-
Version: PGP Desktop 9.6.3 (Build 3017)

wj8DBQFHIXiYq1pz9mNUZTMRAnpdAJ98sZm5SfK+7ToVei4Ttt8OocNPRQCgheRL
lq9rqTBscFmo8I4Y8r1ZG0Q=
=HoIx
-END PGP SIGNATURE-


--
Fergie, a.k.a. Paul Ferguson
 Engineering Architecture for the Internet
 fergdawg(at)netzero.net
 ferg's tech blog: http://fergdawg.blogspot.com/




RE: Can P2P applications learn to play fair on networks?

2007-10-29 Thread Frank Bulk

There's a large installed base of asymmetric-speed internet access links.
Considering that even BPON and GPON solutions are designed for asymmetric
use, it's going to take a fiber-based Active Ethernet solution to change
the residential experience to something symmetrical.  (I'm making the
underlying presumption that copper-based symmetric technologies will not
become part of the residential broadband market any time in the near
future, if ever.)

Until the time that we are all on FTTH, ISPs will continue to manage their
customers' upstream links.

Regards,

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Sean
Donelan
Sent: Saturday, October 27, 2007 6:31 PM
To: Mohacsi Janos
Cc: nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?


On Sat, 27 Oct 2007, Mohacsi Janos wrote:
 Agreed.  Measures like NAT, spoofing-based accelerators, and quarantining
 computers are developed for fairly small networks, not for 1Gbps and above
 and 20+ sites/customers.

"Small" is a relative term.  Hong Kong is already selling 1Gbps access
links to residential customers, and once upon a time 56Kbps was a big
backbone network.

Last month folks were complaining about ISPs letting everything through
the networks; this month people are complaining that ISPs aren't letting
everything through the networks.  Does this mean next month we will be
back the other direction again?

Why artificially keep access link speeds low just to prevent upstream
network congestion?  Why can't you have big access links?





RE: cpu needed to NAT 45mbs

2007-11-12 Thread Frank Bulk

I would have to disagree with your point on centralized AP controllers --
almost all the vendors have some form of high availability, and Trapeze's
new offering (which may not yet be G.A.) purports to be almost entirely
seamless in its load sharing and failover support.

Now that dual-band radios in laptops are becoming more prevalent, it's
possible to get 30 to 50% of your user population using 802.11a.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Joel
Jaeggli
Sent: Saturday, November 10, 2007 11:51 PM
To: Adrian Chadd
Cc: Suresh Ramasubramanian; nanog@merit.edu
Subject: Re: cpu needed to NAT 45mbs

Adrian Chadd wrote:
 On Sat, Nov 10, 2007, Suresh Ramasubramanian wrote:

 Speaking of all that, does someone have a "conference wireless" bcp
 handy?  The sort that starts off with "don't deploy $50 unbranded
 taiwanese / linksys etc routers that fall over and die at more than 5
 associations", "place them so you don't get RF interference all over the
 place" etc before going on to more faqs like "what to do so worms don't
 run riot"?

 Comes in handy for that, as well as for public wifi access points.

 Everyone I speak to says something along the lines of "Why would I put
 that sort of stuff up?  I want people to pay me for that kind of clue."

I did a presentation a couple of years ago at nanog on high-density
conference style wireless deployments. It's in the proceedings from
Scottsdale. Fundamentally the game hasn't changed that much since then:

Newer hardware is a bit more robust.

Centralized AP controllers are beguiling but have to be deployed with
high availability in mind because putting all your eggs in a smaller
number of baskets carries some risk...

If you can, deploy A to draw off some users from 2.4ghz.

Design to keep the number of users per radio at 50 or less in the worst
case.

Instrument everything...


 There are slides covering basic stuff and observations out there.

 (I'm going through a wireless deployment at an ISP conference next week;
 I'll draft up some notes on the nanog cluepon site.)




 Adrian





RE: large-scale wireless [was: cpu needed to NAT 45mbs]

2007-11-13 Thread Frank Bulk

 
If you're going with Extricom you don't need to worry about channel planning
beyond adding more channel blankets.  

Frank

-Original Message-
From: Carl Karsten [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 12, 2007 10:56 PM
To: nanog@merit.edu
Cc: [EMAIL PROTECTED]; Adrian Chadd; Suresh Ramasubramanian
Subject: Re: cpu needed to NAT 45mbs

Thank you for all the advice - it was nice to see 20 replies that all
basically agreed (and with me, too).  If only the 6 people involved in this
project were so agreeable.

On Wifi for 1000:

I have tried to make sure everyone involved in this PyCon Wifi project has
read http://www.nanog.org/mtg-0302/ppt/joel.pdf - too bad some have read it
and don't get it.  I think it will be OK, because someone else wrote up the
plan, which is basically to use http://wavonline.com/vendorpages/extricom.htm

If anyone would like to see it in action, I am sure something can be
arranged.  (You are welcome to come look at it, but I would think you would
want to actually peek under the hood and see some stuff in real time, etc.)
March 13-16 in Chicago.

Carl K

Joel Jaeggli wrote:
 Frank Bulk wrote:
 I would have to disagree with your point on centralized AP controllers --
 almost all the vendors have some form of high availability, and Trapeze's
 new offering (which may not yet be G.A.) purports to be almost entirely
 seamless in its load sharing and failover support.

 I have a few scars to show from deploying centralized ap controllers,
 from several vendors including the one that you mention above. Hence my
 observation that they must be deployed in a HA setup in that sort of
 environment...

 When you lose a fat AP, unless cascading failure ensues you just lost one
 ap... When your ap-controller with 80 radios attached goes boom, you
 are dead.  So, as I said, if you're going to use a central ap controller
 for an environment like this you need to avail yourself of its HA
 features.

 Now that dual-band radios in laptops are becoming more prevalent, it's
 possible to get 30 to 50% of your user population using 802.11a.

 Frank

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Joel
 Jaeggli
 Sent: Saturday, November 10, 2007 11:51 PM
 To: Adrian Chadd
 Cc: Suresh Ramasubramanian; nanog@merit.edu
 Subject: Re: cpu needed to NAT 45mbs

 Adrian Chadd wrote:
 On Sat, Nov 10, 2007, Suresh Ramasubramanian wrote:

 Speaking of all that, does someone have a "conference wireless" bcp
 handy?  The sort that starts off with "don't deploy $50 unbranded
 taiwanese / linksys etc routers that fall over and die at more than 5
 associations", "place them so you don't get RF interference all over the
 place" etc before going on to more faqs like "what to do so worms don't
 run riot"?

 Comes in handy for that, as well as for public wifi access points.
 Everyone I speak to says something along the lines of "Why would I put
 that sort of stuff up?  I want people to pay me for that kind of clue."
 I did a presentation a couple of years ago at nanog on high-density
 conference style wireless deployments. It's in the proceedings from
 Scottsdale. Fundamentally the game hasn't changed that much since then:

 Newer hardware is a bit more robust.

 Centralized AP controllers are beguiling but have to be deployed with
 high availability in mind because putting all your eggs in a smaller
 number of baskets carriers some risk...

 If you can, deploy A to draw off some users from 2.4ghz.

 Design to keep the number of users per radio at 50 or less in the worst
 case.

 Instrument everything...


 There are slides covering basic stuff and observations out there.

 (I'm going through a wireless deployment at an ISP conference next week;
 I'll draft up some notes on the nanog cluepon site.)




 Adrian







RE: large-scale wireless [was: cpu needed to NAT 45mbs]

2007-11-13 Thread Frank Bulk

Elmar:

Marketing and theory -- I haven't had a chance to test it myself.

BTW, I'm not regurgitating Extricom's marketing rhetoric when I say you
don't need to worry about channel planning -- their product is designed with
that specifically in mind.  The technical benefits and caveats of this
single-channel architecture, and the possible concerns that a network
planner might have around the requirement to have L1 connectivity from
Extricom's APs to their switch, are better discussed in another forum.

Frank

-Original Message-
From: Elmar K. Bins [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 13, 2007 7:46 AM
To: Frank Bulk
Cc: nanog@merit.edu
Subject: Re: large-scale wireless [was: cpu needed to NAT 45mbs]

[EMAIL PROTECTED] (Frank Bulk) wrote:

 If you're going with Extricom you don't need to worry about channel
 planning beyond adding more channel blankets.

Is that based on marketing, theory (based on the whitepapers and patent
descriptions) or practical experience?

Elmar.



RE: large-scale wireless [was: cpu needed to NAT 45mbs]

2007-11-13 Thread Frank Bulk

Also, some issues with Intel, too:

http://www.intel.com/support/wireless/wlan/sb/cs-006205.htm
http://listserv.educause.edu/cgi-bin/wa.exe?A2=ind0608&L=wireless-lan&D=1&H=1&T=0&P=5230

I know that this has been at least somewhat addressed, but I'm not sure if
the issues are fully resolved.

Regards,

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Casey Callendrello
Sent: Tuesday, November 13, 2007 1:20 PM
To: nanog@merit.edu; [EMAIL PROTECTED]
Subject: Re: large-scale wireless [was: cpu needed to NAT 45mbs]


Hard-earned knowledge:
Meru's single-channel approach has some compatibility issues with
certain drivers, most notably Lenovo laptops with the Atheros chipset.
If you decide to go that route, make sure you have a USB key lying
around with the latest drivers from the Lenovo site for the T60's
wireless network.
Regardless of your deployment, make sure your front-line support staff
(you DO have a help table, right?) has the ability to update drivers on
PCs without requiring wireless connectivity.  An ethernet cable should
work just fine :)

--Casey

Jeff Kell wrote:

Frank Bulk wrote:


Foundry OEMs from Meru, which also uses a single-channel approach.  It
does
not have an L1 requirement.



Meru APs tunnel back to the controller, so any old L3 will do.  We took an
AP home (just for grins) and it still worked back to our controller through
residential broadband.

Jeff






RE: unwise filtering policy from cox.net

2007-11-21 Thread Frank Bulk

To be clear, should one be white listing *all* the addresses suggested in
RFC 2142?

Regards,

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Joe
Greco
Sent: Wednesday, November 21, 2007 8:30 AM
To: Eliot Lear
Cc: nanog@merit.edu
Subject: Re: unwise filtering policy from cox.net


 Given what Sean wrote goes to the core of how mail is routed, you'd
 pretty much need to overhaul how MX records work to get around this one,
 or perhaps go back to try to resurrect something like a DNS MB record,
 but that presumes that the problem can't easily be solved in other
 ways.  Sean demonstrated one such way (move the high volume stuff to its
 own domain).

Moving abuse@ to its own domain may work; however, fixing this problem at
the DNS level is probably an error, and probably non-RFC-compliant anyway.

The real problem here is probably one of:

1) Mail server admin forgot (FSVO forgot, which might be didn't even
   stop to consider, considered it and decided that it was worthwhile to
   filter spam sent to abuse@, not realizing the implications for abuse
   reporting, didn't have sufficient knowledge to figure out how to
   exempt abuse@, etc.)

2) Server software doesn't allow exempting a single address; this is a
   common problem with certain software, and the software should be fixed,
   since the RFC's essentially require this to work.  Sadly, it is
   frequently assumed that if you cannot configure your system to do X,
   then it's all right to not do X, regardless of what the RFC's say.

The need to be able to accept unfiltered recipients has certain
implications for mail operations, such as that it could be bad to use IP
level filtering to implement a shared block for bad senders.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then
I
won't contact you again. - Direct Marketing Ass'n position on e-mail
spam(CNN)
With 24 million small businesses in the US alone, that's way too many
apples.



RE: Creating a crystal clear and pure Internet

2007-11-27 Thread Frank Bulk

Rather than go after distilled water via reverse osmosis, I think a carbon
filter would be a good place to start.  

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Sean
Donelan
Sent: Tuesday, November 27, 2007 8:39 AM
To: nanog@merit.edu
Subject: Creating a crystal clear and pure Internet

Some people have compared unwanted Internet traffic to water pollution,
and proposed that ISPs should be required to be like water utilities and
be responsible for keeping the Internet water crystal clear and pure.

Several new projects have started around the world to achieve those goals.

ITU anti-botnet initiative

http://www.itu.int/ITU-D/cyb/cybersecurity/projects/botnet.html

France anti-piracy initiative

http://www.culture.gouv.fr/culture/actualites/index-olivennes231107.htm



RE: Any earthlink mail admins?

2007-11-28 Thread Frank Bulk

I found their NOC line:
http://www.merit.edu/mail.archives/nanog/msg01583.html

Their business tech support line is 888-698-4357, they might be able to
direct you to the right person.

Also: http://kb.earthlink.net/case.asp?article=89393

I know it's lame, but as a last resort you might also want to try their chat
feature on their support site.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Barry Shein
Sent: Wednesday, November 28, 2007 12:40 PM
To: nanog@merit.edu
Subject: Any earthlink mail admins?



I can't get thru via their abuse.

Your email servers have been pounding us (theworld.com / std.com) with
a non-stop dictionary attack for about a week.

Logs available upon request.

Nov 28 13:37:46 pcls5 sendmail[26487]: NOUSER: jbart1
relay=elasmtp-galgo.atl.sa.earthlink.net [209.86.89.61]
Nov 28 13:37:49 pcls5 sendmail[26487]: NOUSER: jbart10
relay=elasmtp-galgo.atl.sa.earthlink.net [209.86.89.61]
Nov 28 13:37:53 pcls5 sendmail[26487]: NOUSER: jbart2
relay=elasmtp-galgo.atl.sa.earthlink.net [209.86.89.61]
Nov 28 13:37:56 pcls5 sendmail[26487]: NOUSER: jbart3
relay=elasmtp-galgo.atl.sa.earthlink.net [209.86.89.61]
Nov 28 13:37:59 pcls5 sendmail[26487]: NOUSER: jbart4
relay=elasmtp-galgo.atl.sa.earthlink.net [209.86.89.61]
  ...etc etc etc...

--
-Barry Shein

The World  | [EMAIL PROTECTED]   |
http://www.TheWorld.com
Purveyors to the Trade | Voice: 800-THE-WRLD| Login: Nationwide
Software Tool  Die| Public Access Internet | SINCE 1989 *oo*



Looking for generic OID to transmit free-form text in an SNMP trap

2007-12-17 Thread Frank Bulk

I'm looking to do some custom monitoring of a system and the contracted NOC
only supports pings, SNMP queries, and SNMP traps.  My first choice was to
send an e-mail and have their system ingest it, but that's not possible, and
the first two aren't an option, which means I'm looking to send them SNMP
traps.

What OID should/do I use for sending traps with free-form text?  Do I just
use sysDescr?  Or is there another OID that's recommended?
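
Absent a better suggestion, my fallback would be an enterprise-specific
notification carrying a single OCTET STRING varbind, sent with net-snmp's
snmptrap; a sketch wrapped in Python (the OIDs under 1.3.6.1.4.1.99999 and
the receiver name are placeholders, not a registered arc):

import subprocess

subprocess.check_call([
    "snmptrap", "-v", "2c", "-c", "public",
    "noc.example.net",             # the NOC's trap receiver (placeholder)
    "",                            # empty uptime -> agent's own sysUpTime
    "1.3.6.1.4.1.99999.0.1",       # notification OID (placeholder)
    "1.3.6.1.4.1.99999.1.1", "s",  # varbind OID (placeholder), string type
    "RAID degraded on db1, volume /dev/md0",
])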

Regards,

Frank



RE: ISPs slowing P2P traffic...

2008-01-14 Thread Frank Bulk

Geo:

That's an over-simplification.  Some access technologies have different
modulations for downstream and upstream; i.e. if a symmetric split gives
rates a:b with a=b, and an asymmetric split gives c:d with c>d, then
a+b < c+d, because the downstream spectrum supports denser modulation.

In other words, you're denying the reality that people download 3 to 4
times more than they upload, and penalizing everyone in trying to attain a
1:1 ratio.
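
To put numbers on it: 6 MHz at 256 QAM carries roughly 38 Mbps of DOCSIS
payload, while the same 6 MHz at upstream-grade 16 QAM would carry about
half that, since 16 QAM encodes 4 bits per symbol against 256 QAM's 8.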

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Geo.
Sent: Sunday, January 13, 2008 1:47 PM
To: nanog list
Subject: Re: ISPs slowing P2P traffic...

 The vast majority of our last-mile connections are fixed wireless.   The
 design of the system is essentially half-duplex with an adjustable ratio
 between download/upload traffic.

This in a nutshell is the problem, the ratio between upload and download
should be 1:1 and if it were then there would be no problems. Folks need to
stop pretending they aren't part of the internet. Setting a ratio where
upload:download is not 1:1 makes you a leech. It's a cheat designed to allow
technology companies to claim their devices provide more bandwidth than they
actually do. Bandwidth is 2 way, you should give as much as you get.

Making the last mile an 18x unbalanced pipe (i.e. 6Mb down and 384K up) is
what has created this problem, not file sharing, not running backups, not
any of the things that require up speed.  For the entire internet, up speed
must equal down speed or it can't work.  You can't leech and expect everyone
else to pay for your unbalanced approach.

Geo.




RE: ISPs slowing P2P traffic...

2008-01-14 Thread Frank Bulk

Interesting, because we have a whole college attached, with 10/100/1000
users, and they still have a 3:1 ratio of downloading to uploading.  Of
course, that might be because the school is rate-limiting P2P traffic.  That
further confirms that P2P, generally illegal in content, is the source of
what I would call disproportionate ratios.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Mikael Abrahamsson
Sent: Monday, January 14, 2008 11:22 AM
To: nanog list
Subject: RE: ISPs slowing P2P traffic...


On Mon, 14 Jan 2008, Frank Bulk wrote:

 In other words, you're denying the reality that people download 3 to 4
 times more than they upload, and penalizing everyone in trying to attain a
 1:1 ratio.

That might be your reality.

My reality is that people with 8/1 ADSL download twice as much as they
upload, people with 10/10 upload twice as much as they download.

--
Mikael Abrahamssonemail: [EMAIL PROTECTED]



RE: ISPs slowing P2P traffic...

2008-01-14 Thread Frank Bulk

We're delivering full IP connectivity, it's the school that's deciding to
rate-limit based on application type.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Mikael Abrahamsson
Sent: Monday, January 14, 2008 1:28 PM
To: nanog list
Subject: RE: ISPs slowing P2P traffic...


On Mon, 14 Jan 2008, Frank Bulk wrote:

 Interesting, because we have a whole college attached, with 10/100/1000
 users, and they still have a 3:1 ratio of downloading to uploading.  Of
 course, that might be because the school is rate-limiting P2P traffic.
 That further confirms that P2P, generally illegal in content, is the
 source of what I would call disproportionate ratios.

You're not delivering "Full Internet IP connectivity"; you're delivering
some degraded pseudo-Internet connectivity.

If you take away one of the major reasons for people to upload (ie P2P)
then of course they'll use less upstream bw.  And what you call a
disproportionate ratio is just an idea of "users should be consumers, and
we want to make money at both ends by selling download capacity to users
and upload capacity to webhosting" instead of the Internet idea that
you're fully part of the internet as soon as you're connected to it.

--
Mikael Abrahamssonemail: [EMAIL PROTECTED]



RE: ISPs slowing P2P traffic...

2008-01-14 Thread Frank Bulk

You're right, I shouldn't let the access technologies define the services I
offer, but I have to deal with the equipment I have today.  Although that
equipment doesn't easily support a 1:1 product offering, I can tell you that
all the decisions we're making with regard to upgrades and replacements are
moving toward that goal.  In the meantime, it is what it is and we need to
deal with it.

Frank

-Original Message-
From: Joe Greco [mailto:[EMAIL PROTECTED] 
Sent: Monday, January 14, 2008 3:17 PM
To: [EMAIL PROTECTED]
Cc: nanog@merit.edu
Subject: Re: ISPs slowing P2P traffic...

 Geo:

 That's an over-simplification.  Some access technologies have different
 modulations for downstream and upstream.
 i.e. if a:b with a=b, and c:d with c>d, then a+b < c+d.

 In other words, you're denying the reality that people download 3 to 4
 times more than they upload, and penalizing everyone in trying to attain a
 1:1 ratio.

So, is that actually true as a constant, or might there be some
cause-effect mixed in there?

For example, I know I'm not transferring any more than I absolutely must
if I'm connected via GPRS radio.  Drawing any sort of conclusions about
my normal Internet usage from my GPRS stats would be ... skewed ... at
best.  Trying to use that reality as proof would yield you an exceedingly
misleading picture.

During those early years of the retail Internet scene, it was fairly easy
for users to migrate to usage patterns where they were mostly downloading
content; uploading content on a 14.4K modem would have been unreasonable.
There was a natural tendency towards eyeball networks and content networks.

However, these days, more people have always on Internet access, and may
be interested in downloading larger things, such as services that might
eventually allow users to download a DVD and burn it.

http://www.engadget.com/2007/09/21/dvd-group-approves-restrictive-download-to-burn-scheme/

This means that they're leaving their PC on, and maybe they even have other
gizmos or gadgets besides a PC that are Internet-aware.

To remain doggedly fixated on the concept that an end-user is going to
download more than they upload ...  well, sure, it's nice, and makes
certain things easier, but it doesn't necessarily meet up with some of
the realities.  Verizon recently began offering a 20M symmetrical FiOS
product.  There must be some people who feel differently.

So, do the modulations of your access technologies dictate what your
users are going to want to do with their Internet in the future, or is it
possible that you'll have to change things to accommodate different
realities?

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then
I won't contact you again." - Direct Marketing Ass'n position on e-mail
spam (CNN)
With 24 million small businesses in the US alone, that's way too many
apples.



RE: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Frank Bulk

I'm not aware of MSOs configuring their upstreams to attain rates of 9 and
27 Mbps for versions 1 and 2, respectively.  The numbers you quote are the
theoretical max, not the deployed values.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Mikael Abrahamsson
Sent: Tuesday, January 15, 2008 3:27 AM
To: nanog@merit.edu
Subject: Re: FW: ISPs slowing P2P traffic...


On Tue, 15 Jan 2008, Brandon Galbraith wrote:

 I think no matter what happens, it's going to be very interesting as
 Comcast rolls out DOCSIS 3.0 (with speeds around 100-150 Mbps possible),
 Verizon FiOS

Well, according to Wikipedia, DOCSIS 3.0 gives 108 megabit/s upstream, as
opposed to 27 and 9 megabit/s for v2 and v1 respectively. That's not what
I would call a revolution, as I still guess hundreds if not thousands of
subscribers share those 108 megabit/s, right? Yes, a fourfold increase, but
... that's still only a factor of 4.

 expands its offering (currently, you can get 50 Mb/s down and 30 Mb/s
 up), etc. If things are really as fragile as some have been saying, then
 the bottlenecks will slowly make themselves apparent.

Upstream capacity will still be scarce on shared media as far as I can
see.

--
Mikael Abrahamsson    email: [EMAIL PROTECTED]



RE: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Frank Bulk

Except that upstreams are not at 27 Mbps
(http://i.cmpnet.com/commsdesign/csd/2002/jun02/imedia-fig1.gif shows that
you would need to be using 32-QAM at 6.4 MHz).  The majority of MSOs are at
16-QAM at 3.2 MHz, which is about 10 Mbps.  We just took over two systems
that were at QPSK at 3.2 MHz, which is about 5 Mbps.

And upstreams are usually sized not to be more than 250 users per upstream
port.  So that would be a 10:1 oversubscription on upstream, not too bad, by
my reckoning.  The 1000 you are thinking of is probably 1000 users per
downstream port, and there is usually a 1:4 to 1:6 ratio of downstream to
upstream ports.
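
For anyone checking that math, here's a quick Python sketch (raw channel
rates before FEC/MAC overhead; the 0.8 factor is the DOCSIS symbol rate per
MHz of channel width, and the 384 Kbps per-subscriber figure is the one
from your question below):

    # Raw DOCSIS upstream channel rates for the modulations discussed.
    BITS_PER_SYMBOL = {"QPSK": 2, "16-QAM": 4, "64-QAM": 6}

    def upstream_mbps(channel_mhz, modulation):
        """Raw channel rate in Mbps, before FEC/MAC overhead."""
        return channel_mhz * 0.8 * BITS_PER_SYMBOL[modulation]

    print(upstream_mbps(3.2, "QPSK"))    # 5.12  -> "about 5 Mbps"
    print(upstream_mbps(3.2, "16-QAM"))  # 10.24 -> "about 10 Mbps"
    print(upstream_mbps(6.4, "64-QAM"))  # 30.72 -> ~27 Mbps usable

    # Oversubscription: 250 subscribers at 384 Kbps sharing one ~10 Mbps
    # upstream port:
    print(round(250 * 0.384 / 10.24, 1)) # 9.4 -> roughly 10:1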

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Mikael Abrahamsson
Sent: Tuesday, January 15, 2008 5:41 PM
To: nanog@merit.edu
Subject: RE: FW: ISPs slowing P2P traffic...


On Tue, 15 Jan 2008, Frank Bulk wrote:

 I'm not aware of MSOs configuring their upstreams to attain rates of 9
 and 27 Mbps for versions 1 and 2, respectively.  The numbers you quote
 are the theoretical max, not the deployed values.

But with 1000 users on a segment, don't these share the 27 megabit/s for
v2, even though they are configured to only be able to use 384 kilobit/s
peak individually?

--
Mikael Abrahamsson    email: [EMAIL PROTECTED]



RE: FW: ISPs slowing P2P traffic...

2008-01-16 Thread Frank Bulk

The Wikipedia article is simplified to the extent that it doesn't reflect
actual deployment practices.  Those are best obtained at SCTE meetings and
in discussions with CMTS vendors.

A 10x oversubscription rate for residential broadband access doesn't seem
too unreasonable to me, based on practice and what I've heard, but perhaps
other operators have differing opinions or experiences.

The '250' is really 250 subscribers in my case, but you're right, you see
different figures bandied about with regard to homes passed and penetration.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Mikael Abrahamsson
Sent: Wednesday, January 16, 2008 1:07 AM
To: nanog@merit.edu
Subject: RE: FW: ISPs slowing P2P traffic...


On Tue, 15 Jan 2008, Frank Bulk wrote:

 Except that upstreams are not at 27 Mbps
 (http://i.cmpnet.com/commsdesign/csd/2002/jun02/imedia-fig1.gif shows that
 you would need to be using 32-QAM at 6.4 MHz).  The majority of MSOs are
 at 16-QAM at 3.2 MHz, which is about 10 Mbps.  We just took over two
 systems that were at QPSK at 3.2 MHz, which is about 5 Mbps.

Ok, so the wikipedia article http://en.wikipedia.org/wiki/Docsis is
heavily simplified? Any chance someone with good knowledge of this could
update the page to be more accurate?

 And upstreams are usually sized not to be more than 250 users per
 upstream port.  So that would be a 10:1 oversubscription on upstream, not
 too bad, by my reckoning.  The 1000 you are thinking of is probably 1000
 users per downstream port, and there is usually a 1:4 to 1:6 ratio of
 downstream to upstream ports.

250 users sharing 10 megabit/s would mean 40 kilobit/s average utilization,
which to me seems very tight. Or is this 250 apartments, meaning perhaps
40% subscribe to the service, indicating that those 250 are really 100,
and that the average utilization can then be 100 kilobit/s upstream?
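
Treating the port as a round 10 megabit/s, the division works out as (a
trivial sketch):

    # Average upstream share per subscriber on a ~10 Mbps upstream port:
    for subs in (250, 100):  # 250 homes passed vs. 100 actual subscribers
        print(subs, round(10_000 / subs), "kbit/s each")
    # 250 -> 40 kbit/s, 100 -> 100 kbit/s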

With these figures I can really see why companies using HFC/Coax have a
problem with P2P, the technical implementation is not really suited for
the application.

--
Mikael Abrahamsson    email: [EMAIL PROTECTED]



RE: qwest outage?

2008-01-19 Thread Frank Bulk

Funny, I saw nothing on Qwest's stat site, either:

http://stat.qwest.net/statqwest/perfRptIndex.jsp
http://stat.qwest.net/index_flash.html

Frank 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Jeff
Shultz
Sent: Sunday, January 20, 2008 12:16 AM
To: Daniel; nanog@merit.edu
Subject: Re: qwest outage?


Daniel wrote:
 Anyone currently aware of a Qwest outage? My qwest sites are down, even
 qwest.com http://qwest.com

 daniel

Nope.

traceroute www.qwest.com
traceroute to www.qwest.com (155.70.40.252), 30 hops max, 40 byte packets
 1  192.168.255.1 (192.168.255.1)  0.287 ms   0.232 ms   0.332 ms
 2  stayton2-stinger-gw.wvi.com (67.43.68.1)  7.627 ms   7.986 ms   7.097 ms
 3  wvi-gw.wvi.com (204.119.27.254)  7.637 ms   8.202 ms   7.607 ms
 4  69.59.218.105 (69.59.218.105)  8.889 ms   9.814 ms   8.926 ms
 5  sst-6509-gi13-p2p-peak.silverstartelecom.com (12.111.189.105)
22.849 ms   20.245 ms   16.434 ms
 6  sst-m10-fe002-p2p-6509-fa347.silverstartelecom.com (12.111.189.233)
 10.069 ms   10.456 ms   9.801 ms
 7  12.118.177.73 (12.118.177.73)  10.369 ms   11.057 ms   9.951 ms
 8  gr1.st6wa.ip.att.net (12.123.44.122)  33.398 ms   32.790 ms   32.975 ms
 9  tbr1.st6wa.ip.att.net (12.122.12.157)  37.985 ms   38.693 ms   37.595 ms
10  tbr2.sffca.ip.att.net (12.122.12.113)  33.806 ms   34.252 ms   34.272 ms
11  ggr2.sffca.ip.att.net (12.123.13.185)  32.995 ms   32.302 ms   32.994 ms
12  * * *
(nothing after this, but I can bring up Qwest.com just fine.)


--
Jeff Shultz



RE: Lessons from the AU model

2008-01-21 Thread Frank Bulk

Is this story relevant?
Undersea cable to slash Aust broadband costs
http://www.nzherald.co.nz/section/2/story.cfm?c_id=2&objectid=10486793

They seem to have the sales angle all locked up.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Matthew Moyle-Croft
Sent: Sunday, January 20, 2008 8:54 PM
To: Geoff Huston
Cc: Randy Bush; Andy Davidson; Andrew Odlyzko; nanog@merit.edu
Subject: Re: Lessons from the AU model



 Southern Cross cost some US $1B to construct about a decade ago
RFS was Nov 2001.  They fully paid off the debt from a US$1.3B construction
cost in Oct 2005.
(see
http://www.southerncrosscables.com/public/News/newsdetail.cfm?StoryID=14)

So, they're making some VERY decent money out of the duopoly with AJC.

Hence why Telstra's building their OWN cable to Hawaii.   It's cheaper
to build than buy!

MMC

--
Matthew Moyle-Croft - Internode/Agile - Networks
Level 5, 150 Grenfell Street, Adelaide, SA 5000 Australia
Email: [EMAIL PROTECTED]  Web: http://www.on.net
Direct: +61-8-8228-2909 Mobile: +61-419-900-366
Reception: +61-8-8228-2999  Fax: +61-8-8235-6909

   The difficulty lies, not in the new ideas,
 but in escaping from the old ones - John Maynard Keynes




RE: An Attempt at Economically Rational Pricing: Time Warner Trial

2008-01-21 Thread Frank Bulk

You're right, the major cost isn't the bandwidth (at least in the U.S.),
but the current technologies (cable modem, DSL, and wireless) are thoroughly
asymmetric, and high upstream usage kills the performance of the first and
third.  In the shorter term, it's cheaper to find some way to minimize
upstream usage so that everyone has decent performance than to do the
expensive field work to split the shared medium (via deeper fiber, more
radios, overlaying frequencies, etc.).

Long-term, fiber avoids the upstream performance issues.  

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Alex
Rubenstein
Sent: Sunday, January 20, 2008 2:02 PM
To: Taran Rampersad; nanog@merit.edu
Subject: RE: An Attempt at Economically Rational Pricing: Time Warner Trial

snip

Am I the only one here who thinks that the major portion of the cost of
having a customer is *not* the bandwidth they use?




RE: An Attempt at Economically Rational Pricing: Time Warner Trial

2008-01-21 Thread Frank Bulk

Your points about the marketing and usage value of higher asymmetric speeds
are right on.  Not only are the higher numbers attractive, they generally
reflect a residential subscriber's usage pattern (there are some on this
listserv who have pointed out that those with very high symmetrical speeds,
100 Mbps, for example, do have higher upstream usage, but I think that's
because they are more attractive P2P nodes), and so residential broadband
networks have been designed for asymmetric service.  One of the reasons
that business broadband is more expensive is that businesses not only use
their 'pipe' more heavily than a typical user provisioned with the same
speeds (i.e. bandwidth costs are higher), they also prefer a symmetrical
connection for their e-mail server and web traffic, which requires
different (lower volume and more expensive) equipment, and/or they consume
more of that shared upstream link.

BPON/GPON is also asymmetric, as you point out, but because the marketed
highest-end speeds are a fraction of the standards' capabilities, the
asymmetry and potential oversubscription are easily overlooked.  This works
to Verizon FiOS' advantage in marketing its symmetrical plans.
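
To put rough numbers on that, here is a sketch assuming GPON's 2488/1244
Mbps line rates and a 32-way split (split ratios and usable rates vary by
deployment) against a symmetric 20/20 Mbps retail plan:

    # GPON asymmetry vs. a symmetric 20/20 Mbps retail plan.
    down, up, split = 2488, 1244, 32  # Mbps line rates, 32-way split
    print(down / split, up / split)   # ~77.8 / ~38.9 Mbps per-ONT share
    plan = 20                         # symmetric plan speed, Mbps
    print(plan * split / down)        # 0.26 -> downstream load factor
    print(plan * split / up)          # 0.51 -> upstream load factor
    # Even a fully subscribed 32-way split of 20/20 users doesn't
    # saturate the PON in either direction, so the 2:1 asymmetry of the
    # PON itself never shows.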

I personally prefer Active Ethernet-based fiber solutions for the reasons
you allude to: they more closely match enterprise network architectures,
symmetrical in speed and switched (that's why we see Cisco in this space,
e.g. Amsterdam's fiber network, and why networks of this type can leverage
that equipment, its volumes, and its pricing).  The challenge with the
Active Ethernet architecture is that active electronics most often need to
be placed in the field, while many PON solutions can use passive optical
splitters.

Frank

-Original Message-
From: Sean Donelan [mailto:[EMAIL PROTECTED] 
Sent: Monday, January 21, 2008 4:47 PM
To: Frank Bulk
Cc: nanog@merit.edu
Subject: RE: An Attempt at Economically Rational Pricing: Time Warner Trial

On Mon, 21 Jan 2008, Frank Bulk wrote:
 You're right, the major cost isn't the bandwidth (at least in the U.S.),
 but the current technologies (cable modem, DSL, and wireless) are
 thoroughly asymmetric, and high upstream usage kills the performance of
 the first and third.

There are symmetric versions of all of those.  But ever since the dialup
days (e.g. 56Kbps modems had a slower reverse direction) consumers have
shown a preference for a bigger number on the box, even if it meant giving
up bandwidth in one direction.

For example, how many people want SDSL at 1.5Mbps symmetric versus ADSL at
6Mbps/768Kbps?  The advertisement with the bigger number wins the consumer.

I expect the same thing would happen with 100Mbps symmetric versus
400Mbps/75Mbps asymmetric.  Consumers would choose 400Mbps over 100Mbps.

 Long-term, fiber avoids the upstream performance issues.

Asymmetric fiber technologies exist too, and like other technologies
give you much more bandwidth than symmetric fiber (in one direction).

The problem for wireless and cable (and probably PON) is using shared
access bandwidth.  Sharing the access bandwidth lets you advertise much
bigger numbers than using dedicated access bandwidth, as long as everyone
doesn't use it at once.  The advantage of dedicated access technologies
like active fiber (or old-fashioned T-1, T-3) is that your neighbor's bad
antics don't affect your bandwidth.

Remember the good old days of thicknet Ethernet and what happened when
a single transceiver went crazy: the 10Mbps Ethernet coax slowed to a
crawl for everything connected to it.  The token ring folks may have
been technically correct, but they lost that battle.

There was a reason why IT people replaced shared thicknet/thinnet coax
Ethernet with dedicated 10Base-T pairs and switches replaced hubs.



RE: An Attempt at Economically Rational Pricing: Time Warner Trial

2008-01-21 Thread Frank Bulk

Which of the telecom service providers is moaning about being a provider?
This conversation started with Time Warner's metered trial, and they aren't
doing it in response to people complaining -- I'm pretty sure there was a
financial/marketing motive here.

There are some subscribers who complain about asymmetrical speeds, and some
members of this listserv fall into that category, but I would hazard a
guess that less than 5% of the entire North American residential subscriber
base would actually pay a premium for higher upstream speeds (we provide
that option with our service today for an extra $10, and very few take it).
And for that small base, an operator isn't about to rebuild or overbuild
their network.  Oh, they'll keep it in mind as they upgrade and enhance
their network, but upstream speeds aren't an issue that causes them to lie
awake at night.  I think FiOS as a competitive factor will move them more
quickly to better their upstreams, though.

So I don't think telecom providers think they are in the ghettos, and
neither do most customers.  As for creative technology, I'll let someone
else buy DOCSIS 3.0 first and drive down prices with their volumes -- I'll
join them in 3-5 years.  On the DSL side, the work on VDSL2 demonstrates the
greatest benefits on short loops.  I haven't seen any technology that
delivers fantastic upstream speeds at 1, 2 and 3x a CSA.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
[EMAIL PROTECTED]
Sent: Monday, January 21, 2008 5:36 PM
To: nanog@merit.edu
Subject: RE: An Attempt at Economically Rational Pricing: Time Warner Trial


 There are symmetric versions for all of those.  But ever
 since the dialup days (e.g. 56Kbps modems had slower reverse
 direction) consumers have shown a preference for a bigger
 number on the box, even if it meant giving up bandwidth in
 the one direction.

 For example, how many people want SDSL at 1.5Mbps symmetric
 versus ADSL at 6Mbps/768Kbps. The advertisment with the
 bigger number wins the consumer.

Seems to me that Internet SERVICE Providers have all turned
into telecom companies and the only thing that matters now
is providing IP circuits.

If P2P is such a problem for providers who supply IP circuits
over wireless and cable, why don't they try going up a level
and provide Internet SERVICE instead? For instance, every
customer could get a virtual server that they can access via
VNC with some popular P2P packages preinstalled. The P2P software
could recognize when it's talking over preferred circuits
such as local virtual servers or over peering connections that
aren't too expensive, and prefer those. If the virtual servers
are implemented on Linux, there is a technology called FUSE
that could be used to greatly increase the capacity of the
disk farm by not storing multiple copies of the same file.
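
The idea in miniature (plain Python with hard links standing in for a
real FUSE filesystem; the pool path is a made-up example):

    import hashlib, os

    def store(path, pool="/srv/pool"):
        """Keep one copy per unique content; hard-link duplicates to it.
        Assumes path and pool are on the same filesystem."""
        digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
        canonical = os.path.join(pool, digest)
        if not os.path.exists(canonical):
            os.link(path, canonical)  # first copy of this content
        else:
            os.remove(path)           # duplicate: re-link to the one copy
            os.link(canonical, path)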

Rather than moaning about the problems of being a telecom
provider, people could apply some creative technology to get
out of the telecom ghetto.

--Michael Dillon



RE: Lessons from the AU model

2008-01-22 Thread Frank Bulk

We've figured our customer base averages between 8 and 12 kbps per customer.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Alastair Johnson
Sent: Tuesday, January 22, 2008 4:09 AM
To: nanog@merit.edu
Subject: Re: Lessons from the AU model


Mark Newton wrote:

 Despite the best efforts of some people to run their broadband
 access at line rate, residential broadband is very much a
 CIR + burst kind of service.  All of our customers can burst
 to line rate (they're paying for it, so they should be able to
 get it).  None of our customers can burst at line rate 24x7 for
 a month without paying for it.  You can work out the CIR by
 dividing the number of bits in the quota by the number of
 seconds in a month.

Indeed.  If you look at New Zealand, a very similar economic model to
Australia (except with a smaller population and a much bigger density
problem), there are regulated wholesale products[1,2,3] that offer a 32Kbps
CIR per subscriber and line-rate PIR.

32Kbps working out to approximately 10GB per month, you can guess what
the most common subscriber data cap is - and surprisingly few actually
exceed it, although it has definitely gone up.  Incidentally, the
incumbent in NZ launched a flat rate DSL package.  It did not go well,
and ultimately cost them several million dollars in subscriber refunds.
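
Mark's bits-over-seconds arithmetic, spelled out (assuming a 30-day month):

    # CIR implied by a monthly quota: bits in the quota divided by
    # seconds in the month.
    def cir_kbps(quota_gb, days=30):
        return quota_gb * 8e9 / (days * 86400) / 1e3

    print(round(cir_kbps(10), 1))  # 30.9 kbit/s -- the ~32 Kbps CIR above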

Perhaps some of the guys posting on this thread (Mark? MMC?) would be
able to provide the average subscriber bandwidth use (in GB or Kbit/s) of
their subscriber base.  Break it down by 10G and 40G type accounts?

aj

[1]
http://www.comcom.govt.nz/IndustryRegulation/Telecommunications/Wholesale/Overview.aspx
for what the regulator is doing
[2] http://www.telecom.co.nz/content/0,8748,205743-204225,00.html?nv=tpd
[3] http://www.telecom.co.nz/content/0,8748,204215-204225,00.html?nv=sd



RE: An Attempt at Economically Rational Pricing: Time Warner Trial

2008-01-22 Thread Frank Bulk

I'm not struggling -- will anyone else volunteer that they are?  It costs
to upgrade plant/equipment to meet traffic growth, but it's being done, and
no one is saying that their prices are going up.  Even from the customer
perspective, the bang for the buck has continued to rise.

Frank

-Original Message-
From: Roderick Beck [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, January 22, 2008 4:45 AM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]; nanog@merit.edu
Subject: Re: An Attempt at Economically Rational Pricing: Time Warner Trial

Hi Frank,

My impression is that IP networks are struggling.

Do you disagree?

-R.
Sent wirelessly via BlackBerry from T-Mobile.





RE: Level3 in the Midwest is KIA

2008-01-24 Thread Frank Bulk

Ah, that age-old problem of designing redundancy to cover one failure, but
not two.

Frank 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Justin Shore
Sent: Wednesday, January 23, 2008 4:41 PM
To: nanog@merit.edu
Subject: Re: Level3 in the Midwest is KIA


I've been told that there are 2 issues.  One was a fiber cut, I believe in
the Houston area.  The second issue was a card failure, also in the
Houston area.  Both failures contributed to a loss on a backbone ring
that covers a portion of the Midwest.  The master ticket number is
2332102 for those who want updates.

Justin


Justin Shore wrote:

 L3 dropped us at 13:30CST.  I've been told that whatever happened took
 out everything from KC to Wichita to Little Rock to Houston.  No word on
 the cause and no ETA yet.  They're handing us 37 routes which is a far
 cry from the roughly 237,000 we'd normally get.  I recognize 3 of the
 routes too as routes local to the Wichita area.

 FYI
  Justin





RE: IBM report reviews Internet crime

2008-02-14 Thread Frank Bulk

Hear, hear: most of our customers' e-mail problems are resolved when we
turn off the inbound and outbound scanning offered by their favorite AV
vendor. =)  I bet we've had more support calls about e-mail scanning than
the number of viruses that feature has ever trapped for them.

And another anecdote: we experienced a rash of malware-infected subscribers
spewing out spam last weekend.  Most of them had some kind of AV, but of
course that AV didn't prevent them from getting infected.  Many of them
updated their definitions, scanned, and thought they were clean, but
because the virus/Trojan was so new, they started spewing spam again.  In
this case, their AV software gave them a false sense of assurance.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Mark
Radabaugh
Sent: 2008-02-13 17:35
To: nanog list
Subject: Re: IBM report reviews Internet crime


JC Dill wrote:

 I'm really surprised that ISPs haven't banded together to sue
 Microsoft for negligently selling and distributing an insecure OS that
 is an Attractive Nuisance - causing the ISPs (who don't own the OS
 infected computers) harm from the network traffic the infected OSs
 send, and causing them untold support dollars to handle the problem.

 If every big ISP joined a class action lawsuit to force Microsoft to
 pay up for the time ISPs spend fixing viruses on Windows computer,
 Microsoft would get a LOT more proactive about solving this problem
 directly.  The consumers have no redress against MS because of the
 EULA, but this doesn't extend to other computer owners (e.g. ISPs) who
 didn't agree to the EULA on the infected machine but who are impacted
 by the infection.

 jc

I think I would rather see a class action against Symantec for the
hundreds of hours ISPs waste fixing customers' mail server settings that
Symantec sees fit to screw up with every update.   We can always tell
when they have pushed a major update - hundreds of calls from mail users
who can no longer send mail.

It's 2008.   How bloody hard is it to notice that the mail server SMTP
port is 587 and authentication is turned on?   Why do they mess with it?

--

Mark Radabaugh
Amplex
419.837.5015 x21
[EMAIL PROTECTED]




RE: Power outages in Florida

2008-02-27 Thread Frank Bulk
For power conservation, the units might automatically shut down data
services.

 

Frank

 

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
David Diaz
Sent: Tuesday, February 26, 2008 11:44 PM
To: [EMAIL PROTECTED]
Cc: nanog@merit.edu
Subject: Re: Power outages in Florida

 

Being that Miami is my home town, I found it interesting today that in areas
affected by the blackout, services like Verizon EVDO lost their backbone
connections. The towers were up with signal, but no one could get to the IP
gateway.  Driving a few miles to a lit area provided connectivity.

 

This is a concern for those of us with hurricane experience in the area.

 

David

 

On Tue, Feb 26, 2008 at 7:01 PM, Scott Weeks [EMAIL PROTECTED] wrote:




--- [EMAIL PROTECTED] wrote:
snip

Being in the lightning capital of the world, systems are generally well
protected from power issues. None of our peers have had any issues.

---


There has been a lot of lightning there recently...

http://flash.ess.washington.edu/TOGA_network_global_maps.htm

http://webflash.ess.washington.edu/AmericaL_plot_weather_map.jpg



http://www.nytimes.com/2008/02/26/us/26cnd-florida.html?hp

says: "The company and state officials said the blackout began with a
failure in an electrical substation near the Turkey Point nuclear station
south of Miami, the division of emergency management said. That failure
caused other parts of the system to shut down to protect the integrity of
the electrical grid."


scott

 



RE: Customer-facing ACLs

2008-03-07 Thread Frank Bulk

Same concerns here.  Glad to know we're not alone.

I think a transition to blocking outbound SMTP (except for one's own e-mail
servers) would benefit from an education campaign, but perhaps the pain
level is small enough that it can be implemented without one.  One could
start by blocking a subnet a day to keep the helpdesk people sane, and then
apply a global block at the edge once done to catch any subnets that one
might have missed.
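
The edge filter itself is simple enough; a Cisco-IOS-style sketch
(192.0.2.25 is a placeholder mail-farm address, and your platform's syntax
will differ):

    ! Let customers reach our own mail farm, drop all other outbound
    ! TCP/25, and permit everything else.
    access-list 125 permit tcp any host 192.0.2.25 eq smtp
    access-list 125 deny   tcp any any eq smtp
    access-list 125 permit ip any any
    !
    interface GigabitEthernet0/1
     description Customer aggregation
     ip access-group 125 in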

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Kameron Gasso
Sent: Friday, March 07, 2008 2:44 PM
To: Justin M. Streiner
Cc: NANOG
Subject: Re: Customer-facing ACLs


Justin M. Streiner wrote:
 I do recall weighing the merits of extending that to drop outbound SMTP
 to everything except our mail farm, but it wasn't deployed because there
 was a great deal of fear of customer backlash and that it would drive more
 calls into the call center.

This seems to be very common practice these days for larger ISPs/dialup
aggregators using the appropriate RADIUS attributes on supported access
servers.

We generally restrict outbound SMTP on our dial-up users so they may
only reach our hosts (or the mail hosts of our wholesale customers).
Our DSL subscribers, both dynamic and static, are currently unfiltered
-- but we're very quick to react to abuse incidents and apply filters
when necessary until the user cleans up their network.

I'm currently on the fence with regards to filtering SMTP for all of our
dynamic DSL folks.  It'd be nice to prevent abuse before it happens, but
it's a matter of finding the time to integrate the filtering into our
wholesale backend and making sure there aren't any unforeseen issues.

-- Kameron



RE: Customer-facing ACLs

2008-03-07 Thread Frank Bulk

During the last few spam incidents I measured an outflow of about 2
messages per second.  Does anyone know how aggressive Telnet and SSH
scanning is?  Even if it were greater, my guess is that there are many more
hosts spewing spam than there are running abusive Telnet and SSH scans.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Mark
Foster
Sent: Friday, March 07, 2008 10:02 PM
To: Dave Pooser
Cc: nanog@merit.edu
Subject: Re: Customer-facing ACLs


 Blocking port 25 outbound for dynamic users until they specifically
 request it be unblocked seems to me to meet the no undue burden test; so
 would ports 22 and 23. Beyond that, I'd probably be hesitant until I
 started getting a significant number of abuse reports about a certain
 flavor of traffic that I had reason to believe was used by only a tiny
 minority of my own users.


Sorry, I must've missed something.
Port 25 outbound (excepting the ISP's SMTP server) seems entirely logical to
me.

Port 22 outbound? And 23?  Telnet and SSH _outbound_ cause that much of a
concern? I can only assume it's to stop clients' exploited boxen being used
to anonymise further telnet/ssh attempts - but I have to admit this
discussion is the first I've heard of it being done 'en masse'.

It'd frustrate me if I jacked into a friend's Internet connection in order
to do some legitimate SSH-based server administration, I imagine...

Is this not 'reaching' or is there a genuine benefit in blocking these
ports as well?

Mark.





