Re: wrt joao damas' DLV talk on wednesday

2006-06-12 Thread Paul Vixie

 I'd like to hear about DLV. For example, Randy Bush asked (twice) the
 following:
 
  my question was a bit simpler.  what is the security policy that isc
  plans to use over the content of the isc dlv registry?  and how will
  the dlv trust key roll-over and revocation be handled?
 
 I would also like to understand the security policy, and to hear how DLV
 at ISC will handle key roll-over and revocation.

since joao is probably still sleeping-off the time shift from san jose to
madrid, i'll chime in here.  the last plan i saw was the same as the last
draft i heard about for what any other important zone would do with a
key that has to be hard coded in a lot of places: allocate more than one
KSK, each with an infinite lifetime.  use these KSK's offline (only) to
generate ZSK's with short lifetimes that are in turn used online to sign
the zone.
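
(a sketch of that two-tier split, with HMAC standing in for the RSA
signatures real DNSSEC uses; the key values and RR text here are invented:)

```python
import hashlib
import hmac

def sign(key, data):
    # stand-in "signature": HMAC-SHA256 instead of real RSA signing
    return hmac.new(key, data, hashlib.sha256).hexdigest()

ksk = b"long-lived-ksk"   # hard-coded trust anchor, kept and used offline only
zsk = b"short-lived-zsk"  # rotated often, used online by the signing host

# the offline KSK signs only the key set, i.e. it vouches for the ZSK ...
keyset_sig = sign(ksk, zsk)

# ... and the online ZSK signs the actual zone data
rr = b"www.example.com. IN A 192.0.2.1"
zone_sig = sign(zsk, rr)

# rolling the ZSK never touches the hard-coded trust anchor: the offline
# KSK just signs a new key set, and the new ZSK re-signs the zone
```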

many are those who argue that DNSSEC-bis, having failed to address key-rollover,
is unimplementable.  DNSSEC-ter may or may not come about (depending on the
continuing faith and patience of those who funded DNSSEC and DNSSEC-bis) in
order to (a) prevent zone-walking, (b) allow for unsecured subdelegations,
and (c) automate key-rollover.  (that's NSEC3 and TAKREM in a nutshell.)
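
(for the curious: the zone-walking fix in NSEC3 replaces owner names with
salted, iterated hashes.  a stdlib-only sketch of that hashing, following the
RFC 5155 recipe rather than any particular implementation:)

```python
import base64
import hashlib

# RFC 5155 presents NSEC3 hashes in base32 with the extended-hex alphabet
_B32_TO_B32HEX = bytes.maketrans(b"ABCDEFGHIJKLMNOPQRSTUVWXYZ234567",
                                 b"0123456789ABCDEFGHIJKLMNOPQRSTUV")

def nsec3_hash(name, salt_hex="", iterations=0):
    # wire-format the owner name: lowercase, length-prefixed labels, root byte
    labels = name.lower().rstrip(".").split(".")
    wire = b"".join(bytes([len(l)]) + l.encode("ascii") for l in labels) + b"\x00"
    salt = bytes.fromhex(salt_hex)
    # iterated, salted SHA-1: H(name||salt), then H(prev||salt) `iterations` times
    digest = hashlib.sha1(wire + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return base64.b32encode(digest).translate(_B32_TO_B32HEX).decode().lower()
```

a walker of an NSEC3 chain sees only these hashes, not the owner names.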

on the other hand i believe that DNSSEC-bis is good enough to solve some
real world problems, and that the thing that makes it unimplementable is
merely its dependence on cooperation between US-DoC, ICANN, and VeriSign
around the myriad issues touched on by the sign the root zone work item.
that's why ISC is helping (under contract to VeriSign and Nominet) with
NSEC3 and stands ready to help with automated trust anchor work as well--
these are important problems.

if hand-edited trust anchors backed by infinite-lifetime offline KSK's are
unacceptable to you, then you are already not a candidate for DNSSEC-bis
and you're going to be waiting for DNSSEC-ter.  so, no complaints about
the fact that DLV uses the only thing DNSSEC-bis specifies in that area,
unless you have a proposal for automated rollover that's as easy to
implement as DLV was, and IPR-unencumbered, in which case, send it over!

  as providing a tld key registry is tantamount to emulating the root key
  responsibilities of the iana, potential users should be rather concerned.

tantamount is an unruly word, it accuses without specification.  in any
case anyone who is concerned about DLV should seek alternatives, such as:

|   1. figure out why the root zone isn't signed and fix whatever it is.
|
|   2. design your own version of DLV (as sam weiler has done, long before
|  ISC's although i didn't learn this until later), publish it, and
|  urge adoption (find someone to run a DLV registry, implement the
|  validator side, and so on.)
|
|   3. rubber-stamp ISC's DLV design, adopt our BSD-licensed source code
|  for the validator side, start your own DLV registry.
|
|   4. go to IETF and say i think something like DLV should be a standard but
|  i don't like ISC's, so let's make an RFC together.

and i forgot to mention:

   5. forget about DNSSEC until all these problems are solved by others.

whichever (or whatever else related) you want to do, you can count on ISC's
support.  just don't count on ISC's inaction; ISC isn't adept at inaction.

that URL again is http://www.isc.org/ops/dlv/.
-- 
Paul Vixie


Re: wrt joao damas' DLV talk on wednesday

2006-06-12 Thread Paul Vixie

[EMAIL PROTECTED] (David Conrad) writes:

 Can you have a power play when at least one party doesn't play?

what i find fascinating about the whole why don't you and him fight? angle
being played out here is that there is *no* trusted entity for this.  drc,
can you check with your corporate masters to find out whether ICANN, ISOC,
ITU, NRO, and the other alphabet-soup denizens of your choice could somehow
do a joint venture around DLV?  it seems to me that if we dilute the stew
with enough disparate international unaligned interests, we'll eventually
reach a point where the result appears so dilute as to be powerless and
therefore trustworthy, but still barely potent enough to operate a DLV
zone.
-- 
Paul Vixie


Re: wrt joao damas' DLV talk on wednesday

2006-06-13 Thread Paul Vixie

  can you say does not scale?
 
 Indeed.

this is why we're trying to sign up some registrars, starting with alice's,
who can send us blocks of keys based on their pre-existing trust
relationships.
-- 
Paul Vixie


Re: wrt joao damas' DLV talk on wednesday

2006-06-13 Thread Paul Vixie

[EMAIL PROTECTED] (Brian McMahon) writes:

  why can isc not simply say we plan to vet zones as follows:.  and we
  plan to manage maintenance of key rollover as follows: etc.?
 
 Would it help if I volunteered to talk to folks and help write  
 something up?

not at the moment.  joao heard this question at the podium, and i've
touched on it since then, and there doesn't seem to be any reason (yet)
to assume that the answer won't be posted to www.isc.org/ops/dlv/ soon.

 I mean, if there's some other issue that is preventing ISC from nailing
 this down, then that's one thing.

i believe it's called jet lag.

 But if it's just a case of never seems to bubble up to the top of the
 stack, then maybe a little outside assistance can do the trick.

i don't think so.  no bank in its right mind, for example, would allow its
identity to be held or represented by a middleman whose security policies
weren't auditable.  on the other hand, joao heard this question at the
podium and i don't think it's time yet to declare ISC late in answering it.

 Besides, now that the semester's over, I need something besides just  
 firing off resumes (gotta fill that summer time, and not completely  
 lose touch with the Real World!) to keep myself entertained.
 
 You may flame when ready, Gridley.

isc depends on a lot of volunteers, i'm happy to hear of your availability
and i assume that joao will also be happy to hear it when he catches up on
[EMAIL PROTECTED]
-- 
Paul Vixie


Re: wrt joao damas' DLV talk on wednesday

2006-06-13 Thread Paul Vixie

   ... we're trying to sign up some registrars, starting with alice's,
   who can send us blocks of keys based on their pre-existing trust
   relationships.
  
  so a key roll or change of delegation requires two levels of human
  intervention to work?

no.

in the normal, non-DLV DNSSEC-bis model, a registrant informs its registrar of
new KSK's before existing KSK's expire (or perhaps during revocation events)
using the same authenticated automation they would use to change NS RRs or
arrange for payment of fees or whatever.  the registrar (like alice's for ISC)
then tells the appropriate registry (like Afilias-PIR-UltraDNS, for .ORG) the
new DS RR data using the same (EPP? RRP? fax? carrier pigeon?) authenticated
automated model they would use when changing NS RR data.  no human
intervention at all.

in the DLVified DNSSEC-bis model, the DNS registry (like VeriSign for .COM)
is not yet accepting DS RR data via their EPP interface to their registrars,
although i note with admiration that VeriSign has led the effort to add new
EPP protocol elements to support this new data.  as far as i know, no existing
DNS registry will accept DS RR data.

therefore registrars (like alice's... remember alice? this is a song about
alice) have no place to go with registrant KSK data at this time.  this in
turn keeps most registrars from bothering to collect or store this useless
data.  ISC proposes to accept this KSK data (in the form of DLV RRs) via
authenticated automated processes whereby lots of keys can be sent to us
by interested/participating registrars.  we do not have a good way of knowing
whether somebody is or isn't the registrant for bankofamerica.com, but we
think that bank of america's registrar does have a way of authenticating the
registrant.  and we know how to authenticate bankofamerica.com's registrar.
so there IS a more scalable, untouched-by-human-hands, trust path available.
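
(for concreteness, here's roughly how a DLV-aware validator maps a zone into
the registry's namespace -- a sketch of the lookup scheme, not ISC's exact
code; it tries the most specific enclosing name first:)

```python
def dlv_candidates(zone, registry="dlv.isc.org."):
    # append the registry suffix to each enclosing name of the zone,
    # most specific first; the validator queries these for DLV RRs
    labels = zone.rstrip(".").split(".")
    return [".".join(labels[i:]) + "." + registry for i in range(len(labels))]
```

so a validator configured with the dlv.isc.org trust anchor would ask about
bankofamerica.com.dlv.isc.org before falling back to com.dlv.isc.org.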

until we get that working, we're left with the least desirable alternative,
which is accepting keys directly from registrants, and authenticating these
folks the hard way, with human hands and eyeballs.

alas, i repeat myself.  i've said this already.  and if folks aren't going to
read the explanations i really need to discipline myself into not repeating
them.

 DNS-SEC will live and die on the business model. How user-friendly it is
 vs. how necessary it is against what alternatives there are.
 
 To be honest, waiting for so many years for DNS-SEC, if these questions
 were not answered by now...

to be equally honest, i'm now weary of hearing what can't be done or shouldn't
be done.  anyone who wants to not do dnssec is free to do that, they don't
need to shout it from the rooftop.  anyone else who wants to wait until the
root is signed and NSEC3 is done and automated trust key rollover is done is
welcome to wait -- no shouting is required from any rooftop by those, either.


Re: wrt joao damas' DLV talk on wednesday

2006-06-13 Thread Paul Vixie

  thanks for actual technalia.

i've also been warned that this isn't ops-related and told to move elsewhere.

  ( first, i suspect much of the confusion could come from your
  thinking that the place up on skyline is *the* alice's restaurant.

*the* alice's restaurants are the ones in our own private idaho's.

  i think if you amplified on and detailed the above, and went into
  how re-delegation and key changes would handled, it would go a long
  way to clarifying the isc dlv registry's security process.

i feel sure that joao said at the podium that he would do that and put it on
the www.isc.org/ops/dlv/ web site.  so, you're just selling after the close.

  you're also welcome to use some of the cctlds and other zones i
  manage as outlying/strange examples.  e.g. NG, which i could sign,
  but neither ng nor i have an established relationship to isc.

it's possible that no trust path can be found for some domains.  for example,
i cannot imagine who could represent the root zone for the purpose of sending
in a key for it.  (not that DLV has a way to publish the root key; it doesn't;
i'm just using the root as the ideal strange example of this problem.)

  and how it would be rolled would be of interest.

key-roll through DLV is no different, from the high level, than key roll
through non-DLV.  either way you have to instantiate a new key and get it
to your registry somehow (either through your registrar or otherwise) before
you start using it.  either way you have to remove your old keys after you've
stopped using them.  either way you'll have two keys in your key registry
(either DLV or DNS) during the rollover.  the only thing that changes with
DLV is that you actually *have* someone to send your key to even if your
DNS registrar and/or DNS registry isn't ready to accept/publish them yet.
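
(the overlap just described, reduced to phases; "registry" here is whichever
key store you have -- DLV today, a parent zone's DS RRset eventually.  the
key names are invented:)

```python
def rollover(old_key, new_key):
    # the three phases of the roll; both keys coexist at the registry
    # until the old key's last signatures have expired
    return [
        ("steady state", {old_key}),
        ("overlap", {old_key, new_key}),
        ("rolled", {new_key}),
    ]

phases = rollover("ksk-2005", "ksk-2006")
```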

  and say psg.com, registered through retsiger, who we might assume,
  for sake of example, will not play.

anyone whose registrar won't play, will have to follow the procedure outlined
on www.isc.org/ops/dlv/, which involves much manual labour, but can be done.
(see http://www.isc.org/ops/dlv/#how_register in particular.)


Re: wrt joao damas' DLV talk on wednesday

2006-06-14 Thread Paul Vixie

 Has anyone ever considered trying to come up
 with a way that these crypto projects could be
 explained in plain English?

yes.

 I think a lot of the problem with adoption of
 DNSSEC stems from the fact that most people who
 might make a decision to use it, haven't got a
 clue what it is, how it works, or whether it even
 works at all.

then they should go read steve crocker and russ
mundy's most excellent www.dnssec-deployment.org.

 And it's not their fault that they don't understand.
 It's the fault of a technical community that likes to
 cloak its discussions in TLAs and twisted jargon.

that's just bitterness, though.
-- 
Paul Vixie


on topic?

2006-06-14 Thread Paul Vixie

The effect of Nanog is remarkable. All the hybrid cells became fully
converted to embryonic stem cells, said Jose Silva of the University of
Edinburgh, Scotland, who reported the findings in the journal Nature.

http://news.com.com/Gene+may+mean+adult+cells+can+be+reprogrammed/2100-1008_3-6083878.html?tag=nefd.top


Re: DNS Based Load Balancers

2006-07-01 Thread Paul Vixie

 I'm soliciting recommendations for DNS based load balancers.  

my recommendation is: don't do it.  for background, see:

http://www.ops.ietf.org/lists/namedroppers/namedroppers.2002/msg02168.html
http://www.cctec.com/maillists/nanog/current/msg03572.html
http://www.cctec.com/maillists/nanog/current/msg00671.html
-- 
Paul Vixie


Re: DNS Based Load Balancers

2006-07-02 Thread Paul Vixie

[EMAIL PROTECTED] (David Temkin) writes:

 So, you guys have been pretty clear on what he shouldn't do.
 
 What should he do as an alternative to using DNS for a proximity based
 solution?

http://www.redbooks.ibm.com/redbooks/pdfs/sg245858.pdf
http://www.cisco.com/univercd/cc/td/doc/product/iaabu/distrdir/dd2501/ovr.htm
http://www.radware.com/content/products/library/faq_wsd.pdf
http://www.foundrynet.com/solutions/appNotes/GSLB.html
http://www.ifi.unizh.ch/ifiadmin/staff/rofrei/DA/DA_Arbeiten_2000/Masutti_Oliver.pdf

note that several of these describe or offer a dns-based solution as an option,
but they all describe session-level redirection and most recommend that (as i
do) and some even say using dns for this is bad (as i do, but for different
reasons.)
-- 
Paul Vixie


Re: DNS Based Load Balancers

2006-07-02 Thread Paul Vixie

 The problem being that most of what you linked to below is either A) out
 of date, or B) the only way to get proximity based load balancing (GSLB
 type stuff) with them is with DNS tricks.

most of, huh?  let's have a looksie.

 Breaking it down in order:
 
  The IBM solution hasn't been updated since 1999.  It also seems
 relatively proprietary.

the ibm white paper i referred you to was written in 1999.  websphere is
quite current, and its implementation of GSLB functionality has been updated
plenty since 1999.  and the competitors james baldwin said he was eval'ing
(cisco, f5) are certainly patent-holders offering proprietary solutions.

  The Cisco solution relies on either doing HTTP redirects (which is
 useless if you're not doing HTTP) or DNS.

james baldwin said he was using the cisco solution today, so clearly HTTP is
the main target.  i can't think of a protocol requiring GSLB that isn't HTTP
based (either web browsing or web services).  FTP just isn't a growth industry
and the transaction processing systems i know of (the ones that aren't based
on HTTP, that is) have GSLB hooks built into them.

IOW, either you can do GSLB with session redirects, or you don't need GSLB.

  Both Foundry and Radware rely 100% on DNS to do their GSLB.  You can do
 local load balancing on both boxes without, however.

did you read the same radware white paper i did?  in

http://www.radware.com/content/products/library/faq_wsd.pdf

it says that they can do session level redirects.  so, less than 100% of
radware is dns.  i can see that i misread the foundry whitepaper i ref'd
(perhaps we both saw most readily that data which fit our preconceptions?)

  The last link is an outdated thesis paper that makes reference moreso
 to local load balancing and not global.

why is it outdated?  as a survey of the desired functionality it's still
pretty good background.  no new GSLB has been invented since then, surely?

 It seems that in lieu of a real, currently produced solution, the only
 option is presently DNS to meet the requirements.  Others have sent me
 off-list stuff they're working on, but none of it's ready for prime
 time.

well, i see that fezhead is dead.  but 3-party TCP is alive and well:
http://www.cs.bu.edu/~best/res/projects/DPRClusterLoadBalancing/.

see also http://www.tenereillo.com/GSLBPageOfShame.htm
and  http://www.tenereillo.com/GSLBPageOfShameII.htm.

the references sections of those last three are particularly informative.
-- 
Paul Vixie


Re: DNS Based Load Balancers

2006-07-03 Thread Paul Vixie

 Without getting into a massive back and forth, I just want to make 3
 points:

as long as the back-and-forth remains informative and constructive, i'll play:

 1) Websphere is proprietary to IBM and requires their servers.  It's not
 scalable to other applications.  It's also not targeted to the same
 market as, say, F5.

websphere is a trade name for a family of products and services.  the GSLB
component is able to play as a proxy to someone else's web server.  (don't
take my word for it, call an ibm salesweenie.)

 2) There are definitely protocols that require GSLB that aren't HTTP.
 Off the top of my head: RTSP/MMS, VoIP services.  I'd say that, at the
 very least, VoIP protocols are the killer app for GSLB moreso than HTTP.
 Surely the internet isn't only the web, right?

according to http://www.isc.org/pubs/tn/isc-tn-2004-2.html, the internet
is much larger than the web.  but i'm not sure what you're replying to.  i
said that session level redirection would be possible in all cases where
GSLB was needed.  voip has session level redirection (several kinds).

 3) TCP-redirect solutions, such as the Radware one you pointed out, do
 not work in large scales.  Have you ever met anyone who's actually
 implemented that in a large scale?  The solution they point to they
 don't even sell anymore (the WSD-DS/NP).  If you talk to their sales,
 they'll point you at the DNS based solution because they know that doing
 Triangulation is a joke.  Triangulation and NAT-based methods both
 crumble under any sort of DoS and provide no site isolation.

i did not know radware has given up on wsd.  but i don't see an explanation
of what you mean by not work in large scales beyond radware gave up.  i
gave another reference to third-party TCP, have you looked at it or surveyed
the rest of the field to find out how asymmetric IP (satellite downlink,
terrestrial uplink) and third-party TCP is working for the various pacific
islands who depend on it?

 Pete Tenereillo's papers are interesting, but they're also slanted and
 ignore other implementation methods of DNS GSLB.  How about handing out
 NS records instead of A records?   That's an method that would make
 large parts of his papers irrelevant.

just as one can always find an example that supports one's preconceptions,
one can always find a single counterexample that will support one's
prejudices.  i'm sure that any technology can be successfully demo'd or
successfully counter-demo'd.  this conversation started out as what DNS
GSLB should i use? and then if DNS GSLB is such a bad idea then what do
you propose as an alternative? and now it's every alternative has known
failure modes that are as bad as DNS GSLB's worst case.  does that mean
we're done with the informative and constructive part of this thread?

 My main point here is that each solution has it's evils, and when faced
 with a choice, he needs to evaluate what method works best for him.
 Anyone could just as easily say that Triangulation and NAT are a hack
 just the same as GSLB DNS is a hack.   Akamai and UltraDNS will actually
 sell you GSLB without even buying localized hardware to do it - are
 these bad services, too?  Patrick said it best: Just in case we like to
 decide things for ourselves.

nobody ever got fired for buying akamai's or ultradns's DNS GSLB services,
that's for sure.
-- 
Paul Vixie


Re: DNS Based Load Balancers

2006-07-05 Thread Paul Vixie

 As someone who has also deployed GSLB's with hardware applicances I would
 also like to know real world problems and issues people are running into
 today on modern GSLB implementations and not theoretical ones, as far
 as I can tell our GSLB deployment was very straight forward and works
 flawlessly.

since works flawlessly could just mean that you don't have any reported
problems with the technology -- no complaints from your users, no bugs logged
with your vendor, etc, i have two bracketing questions.

first, have you measured the improvement you got -- in terms of
min/max/avg/stddev of TTFB/TTLB (time to first byte / last byte)
with the appliances turned on vs. turned off?

second, have you measured the dns damage your gslb might cause or
contribute to, due to things not responding to unhandled QTYPES
( comes to mind) or use of abnormally low DNS TTL?

i'm not as much interested in whether a technology causes no problems for its
operator as whether its cost:benefit is worthwhile to the internet community.
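
(what i mean by the first measurement, concretely: collect TTFB samples with
the appliance enabled and bypassed and compare the summaries.  the sample
values below are invented for illustration:)

```python
import statistics

def summarize(samples_ms):
    # min/max/avg/stddev of a batch of latency samples, in milliseconds
    return {
        "min": min(samples_ms),
        "max": max(samples_ms),
        "avg": statistics.mean(samples_ms),
        "stddev": statistics.pstdev(samples_ms),
    }

ttfb_on = [110, 120, 115, 130]    # appliance enabled (invented numbers)
ttfb_off = [140, 125, 160, 150]   # appliance bypassed (invented numbers)

avg_improvement_ms = summarize(ttfb_off)["avg"] - summarize(ttfb_on)["avg"]
```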
-- 
Paul Vixie


Re: DNS Based Load Balancers

2006-07-05 Thread Paul Vixie

 What would be a better solution then?

multiple A RR's for your web service, each leading to an independent web
server (which might be leased capacity rather than your own hardware),
each having excellent (high bandwidth, low latency, etc) connectivity to
a significant part of the internet.  the law of averages is a good friend
to those who can adequately provision, so the likely outcome is that you
won't need anything fancy.  but if you need something fancy, use session
level redirects to tell a web browser or sip client that there's a better
and closer place for them to get their service.  pundits please note that
the fancy thing i'm recommending sits perfectly on top of the non-fancy
thing i'm recommending.
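
(a toy of that layering: every mirror in the A RRset can serve a request
outright, or hand back an HTTP 302 session-level redirect to a better-placed
mirror.  the hostnames and the region test are invented:)

```python
# mirrors that would all appear as A RRs for the service name
MIRRORS = {
    "us": "http://us1.www.example.net/",
    "eu": "http://eu1.www.example.net/",
}

def respond(client_region, local_region="us"):
    # serve locally unless we know of a mirror closer to the client;
    # the redirect rides on top of plain multiple-A-RR provisioning
    if client_region == local_region or client_region not in MIRRORS:
        return 200, None
    return 302, MIRRORS[client_region]
```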
-- 
Paul Vixie


Re: DNS Based Load Balancers

2006-07-06 Thread Paul Vixie

 There is a new player on the block that I see more and more 
 http://www.infoblox.com/company/

infoblox isn't new.  i'm familiar with them since they use BIND as their
DNS protocol engine, and are long time members of the ISC BIND Forum.  i
recently did colour commentary for an o'reilly/infoblox webinar (see
http://infoblox.market2lead.com/go/dnsbind5 for more info on that.)

most importantly to this thread, infoblox doesn't offer GSLB, they just
do network identity (dhcp, dns, ldap, that kind of thing) on an appliance
platform.  folks seem to like it pretty well.  (i wonder if ISC should
print up some BIND Inside stickers? :-))


Re: final agenda for August 10th DA Workshop

2006-07-22 Thread Paul Vixie

  The agenda is quite tight.
 
 Perhaps *too* tight.  When I was at Usenix SRUTI '05 last year, the
 single biggest problem with an otherwise good workshop was a lack of
 cross-pollination time.

agreed.  (i was on the progcomm for that; thanks for your kind words.)

 Remember that the hallway track of a conference is where much of the
 most interesting stuff happens:
 
 http://www.google.com/search?q=conference+%22hallway+track%22
 
  11:55 - 12:30   Lunch break Got chow?
 
 Indicative of the problem.

also agreed.  which is why there's this little ditty at the end:

After-party:
Dinner, hosted by the ISC.

this is pizza and beer in the warehouse but it'll allow cross-pollination.
-- 
Paul Vixie


myspace

2006-07-24 Thread Paul Vixie

http://news.bbc.co.uk/2/hi/technology/5209496.stm


Re: mitigating botnet CCs has become useless

2006-07-31 Thread Paul Vixie

[EMAIL PROTECTED] (Gadi Evron) writes:

 The subject line why mitigating botnet CCs has become useless is
 misleading. It has been useless for a long time, but ...
 
 Today it has become (close to) completely useless. ...

i wish that the value of this activity were zero.  instead, it's negative.

see http://fm.vix.com/internet/security/superbugs.html for details.
-- 
Paul Vixie


Re: mitigating botnet CCs has become useless

2006-08-01 Thread Paul Vixie

[EMAIL PROTECTED] (Scott Weeks) writes:

 From: Paul Vixie [EMAIL PROTECTED]
 
 http://fm.vix.com/internet/security/superbugs.html
 
 ... I'd like to see ...jackbooted [US is implied in the text]
 government thugs...kicking in a door somewhere ...

i apologize for writing so sloppily that you mistook my meaning in this way.
i am a citizen of the US but i have always recognized that the internet is
a transnational entity.  nowhere and in no way did i mean to imply that all
potential kickers in of doors are US LEOs.  barry shein understood correctly.
-- 
Paul Vixie


Re: mitigating botnet CCs has become useless

2006-08-02 Thread Paul Vixie

[EMAIL PROTECTED] (Scott Weeks) writes:

 ... I'm just saying that there has to be a better way than police-type
 actions on a global scale.  ...

no, there doesn't have to be such a way.  where the stakes are in meatspace
(pun unintended), the remediation has to be in meatspace.  cyberspace is
just a meatspace overlay, it can only pretend to have different laws when
nothing outside of cyberspace is at stake.  i think that the days when
botnets were mostly used for kiddie-on-kiddie violence or even gangster-on-
gangster violence are permanently behind us.  it's up to the real LEOs now,
because it's on their turf now, which is to say, it's in the real world now.

as was true of spam when i said this about spam ten years ago, it is true
now of botnets that the only technical solution is gated communities.  but
the internet's culture, which merely mirrors the biases of those who use it,
requires the ability for children to go door to door selling girl scout
cookies, without necessarily having the key code to every one of the doors.

so the internet community has no appetite for the trappings of any technical
solution to botnets.  the meatspace community and their LEOs absolutely *do*.
-- 
Paul Vixie


Re: SORBS Contact

2006-08-10 Thread Paul Vixie

hit D now, i've been trolled.

[EMAIL PROTECTED] (Allan Poindexter) writes:

 ...  I have one email address that has:

 ...
 
 In short it should be one of the worst hit addresses there is.  All I
 have to do to make it manageable is run spamassassin over it.

may the wind always be at your back.  my troubles are different than yours,
and i hope i can count on your support if i feel compelled to take more drastic
measures than you're taking.  especially since one of my troubles is about a
moral issue having to do with mutual benefit.  if an isp's business success
depends on them using access granted under an implied mutual benefit covenant
and they decide to operate in a sole benefit manner, they can't expect me to
continue to accept their traffic or their customer's traffic.  simpler put,
i won't run spamassassin to figure out what might or might not be spam after
i receive it -- i'll just reject everything they send me.

just because i think the linux kernel people are insane when they illegalize
binary or proprietary kernel modules, doesn't mean i'm ready to live in a
world where anyone on the internet can shift their costs to me with impunity.

but i respect your right to treat your inbox as you see fit.  can you say the
same about me and my rights and my inbox, mr. poindexter?

 That is the mildest of several measures I could use to fix the spam
 problem.  If it became truly impossible I could always fall back to
 requiring an address of the form apoindex+password and blocking all
 the one's that don't match the password(s).  That would definitely fix
 the problem and doesn't require any pie in the sky re-architecting of the
 entire Internet to accomplish.

if you wish to accept those costs, i hope no one opposes you.  but i'm not
willing to live that way, and i hope you won't try to force me to?

 For almost a decade now I have listened to the antispam kooks say that
 spam is going to be this vast tidal wave that will engulf us all.

that would be me, and it has.

 Well it hasn't.  It doesn't show any sign that it ever will.  In the
 meantime in order to fix something that is at most an annoyance people
 in some places have instigated draconian measures that make some mail
 impossible to deliver at all or *even in some case to know it wasn't
 delivered*.  The antispam kooks are starting to make snail mail look
 good.  It's pathetic.

that paragraph seems to be semantically equal to shut up and eat your spam
so i hope i'm misinterpreting you.  otherwise, it's your word, pathetic.

 The functionality of my email is still almost completely intact.  The
 only time it isn't is when some antispam kook somewhere decides he
 knows better than me what I want to read.  Spam is manageable problem
 without the self appointed censors.  Get over it and move on.

damn.  i've been trolled.  sorry everybody.
-- 
Paul Vixie


i am not a list moderator, but i do have a request

2006-08-13 Thread Paul Vixie

which is, please move these threads to a non-SP mailing list.

R  [  41: Danny McPherson ] Re: mitigating botnet CCs has become useless
R  [  22: Laurence F. Sheldon] 
R45: Danny McPherson  
R  [  62: Laurence F. Sheldon] 
R  [ 162: J. Oquendo] Re: [Full-disclosure] what can be done with 
botnet CC's?
R   211: Payam Tarverdyan Ch 
R  [  66: Michael Nicks   ] 

i already apologized to the moderators for participating in a non-ops thread
here.  there are plenty of mailing lists for which botnets are on-topic.
nanog is not one and should not become one.  nanog has other useful purposes.
-- 
Paul Vixie


Re: i am not a list moderator, but i do have a request

2006-08-14 Thread Paul Vixie

   http://www.whitestar.linuxbox.org/mailman/listinfo/botnets
 
 thanks, didn't know about it. But isn't it still usefull, when urgent
 matters concerning botnets will still discussed on the nanog-list?
 Please let me disabussed to it, but it's just my opinion.

almost everything that happens in the world is urgent to somebody somewhere.

not everything that happens on the internet is urgent to everybody on nanog.

there are too many topics (and too many botnets) for nanog to cover them all.
-- 
Paul Vixie


fyi-- [dns-operations] early key rollover for dlv.isc.org

2006-09-21 Thread Paul Vixie
fyi:

---BeginMessage---
EARLY KEY ROLLOVER

---

In light of the recently announced OpenSSL security advisory: RSA Signature
Forgery (CVE-2006-4339), ISC has instigated an early rollover of the DLV Key
Signing Key (KSK). ISC recommends reconfiguration of resolvers to use the DLV
KSK published on September 21, 2006. 

The old KSK will be retired on September 29, 2006.

---

see http://www.isc.org/ops/dlv/ for details, and note that there's now a
dlv-announce@ mailing list where folks can subscribe to learn about changes
to the dlv trust anchor.
___
dns-operations mailing list
[EMAIL PROTECTED]
http://lists.oarci.net/mailman/listinfo/dns-operations
---End Message---


Re: fyi-- [dns-operations] early key rollover for dlv.isc.org

2006-09-21 Thread Paul Vixie

[EMAIL PROTECTED] (Paul Vixie) writes:

 EARLY KEY ROLLOVER
 
 ---
 
 In light of the recently announced OpenSSL security advisory: RSA Signature
 Forgery (CVE-2006-4339), ISC has instigated an early rollover of the DLV Key
 Signing Key (KSK). ISC recommends reconfiguration of resolvers to use the DLV
 KSK published on September 21, 2006. 
 
 The old KSK will be retired on September 29, 2006.
 
 ---
 
 see http://www.isc.org/ops/dlv/ for details, and note that there's now a
 dlv-announce@ mailing list where folks can subscribe to learn about changes
 to the dlv trust anchor.
 ___
 dns-operations mailing list
 [EMAIL PROTECTED]
 http://lists.oarci.net/mailman/listinfo/dns-operations

[EMAIL PROTECTED] (Laurence F. Sheldon, Jr.) writes:

 My mail reader can sanitize HTML mail for me, but it was stymied by this 
 one.  What is it?

included as above in even plainer text.  my mail user-agent is emacs/mh-e, and
as far as i know it could not generate or consume HTML mail even if i tried.

[EMAIL PROTECTED] (Steven M. Bellovin) wrote:

 Paul, what exponent does the new key use?  (I clicked on the public key
 link, but I can't decode the base64 that easily...)

it was made with bind9's dnssec-keygen utility, using the -e option, so...

-e use large exponent (RSAMD5/RSASHA1 only)

...hopefully it's a good exponent.  (every few years someone tries to explain
to me what a key exponent is, i think you steve have tried, but it just doesn't
stick.)
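
(for anyone else staring at the base64: RFC 3110 puts the exponent at the
front of the RSA key material -- a one-byte exponent length, or a zero byte
plus a two-byte length, then the exponent, then the modulus.  a small
decoder, plus a made-up key blob for illustration:)

```python
import base64

def rsa_exponent(dnskey_b64):
    # RFC 3110 wire form: [explen | 0x00 + 2-byte explen] exponent modulus
    data = base64.b64decode(dnskey_b64)
    if data[0] != 0:
        explen, off = data[0], 1
    else:
        explen, off = int.from_bytes(data[1:3], "big"), 3
    return int.from_bytes(data[off:off + explen], "big")

# invented blob: exponent 65537 (a common "large" exponent), dummy modulus
demo = base64.b64encode(bytes([3]) + (65537).to_bytes(3, "big") + b"\x00" * 64).decode()
```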
-- 
ISC Training!  October 16-20, 2006, in the San Francisco Bay Area,
covering topics from DNS to DHCP.  Email [EMAIL PROTECTED]
--
Paul Vixie


Re: tech support being flooded due to IE 0day

2006-09-21 Thread Paul Vixie

[EMAIL PROTECTED] (Jared Mauch) writes:

   I was thinking sql-slammer, massive flood causing significant
 amount of network infrastructure to go down.  (people on low speed links
 with large blocks of address space were DoS'ed off the network).

right.

   I don't think of drive-by browser/desktop infection as a networking
 issue, more of an end-host issue.

given that network operations now includes all kinds of non-bgp activities
like datacenter design, tcp syn flood protection, nonrandom initial tcp
sequence number prediction, and a googolplex or two of other issues, i've
assumed that the hardcore bgp engineering community now meets elsewhere.
(i wouldn't be needed or welcome there if so, so i'm just guessing.)  so,
for lack of a better forum, things that can beat the hell out of your abuse
desk do indeed seem like safe fare for nanog@ in 2006, even though in 1996
maybe not so much so.  (hell, in 1996 one could still send MIME attachments
to abuse desks, since they were generally running solaris on NCD terminals
rather than microsoft outlook, and attachments were just opaque data, grrr.)

can we all agree to stop shooting the messenger?  every time gadi speaks up
here, three or four folks bawl him out for being off-topic.  time has proved
that (a) gadi's not going to STFU no matter whether he's flamed or isn't, (b)
those flaming arrows sticking out of his chest don't seem to injure him at all,
(c) the flames completely outweigh gadi's own original posts, and (d) some of
the folks lurking here actually tell me that they benefit from gadi's stuff.
henceforth if you see a post, a poster, or a thread that you aren't interested
in, just hit delete.  it'll save more bandwidth than flaming about it would.
-- 
ISC Training!  October 16-20, 2006, in the San Francisco Bay Area,
covering topics from DNS to DHCP.  Email [EMAIL PROTECTED]
--
Paul Vixie


Re: tech support being flooded due to IE 0day

2006-09-21 Thread Paul Vixie

[EMAIL PROTECTED] (Joel Jaeggli) writes:

 Even in an enterprise it's really hard to justify the expenditure that a
 rapid response to a host security problem involves. For an isp which is
 not likely to be in the position to recover the cost of being reactive
 let alone pro-active I can't imagine how they would possibly support
 desktop issues like this.

and yet, when i consider my nontechnical friends with their DSL and cablemodem
connections, i know that if they get hit by an exploding DLL, their ISP is one
of the likely places they will place a call.  and then they'll carefully
navigate their way through what they call voice mail hell until they can
talk to a live operator, no matter how complex that is, no matter how many
steps, and no matter how much muzak-on-hold they'll have to listen to.

the perfect storm is a million extra customers calling over the course of a
week just to explain that they have exploding DLL symptoms and listen to a
live operator tell them that this isn't a network problem and they should
contact the dealer where they bought their computer, which is likely Costco.
assuming that this takes less than 60 seconds per affected customer, it's
still a nasty unbudgeted expense and as a secondary burn it will make real
network problems harder to report.
-- 
ISC Training!  October 16-20, 2006, in the San Francisco Bay Area,
covering topics from DNS to DHCP.  Email [EMAIL PROTECTED]
--
Paul Vixie


Re: tech support being flooded due to IE 0day

2006-09-22 Thread Paul Vixie

[EMAIL PROTECTED] (Sean Donelan) writes:

 For assistance with Microsoft security issues in the US, call (866) PC-SAFETY

according to http://www.eweek.com/article2/0,1895,2019162,00.asp, microsoft has
not released a patch for the VML thing, so calling (866) PC-SAFETY isn't going
to be a universal fix (and who will $user call after that, we wonder?)

according to http://www.websense.com/securitylabs/alerts/alert.php?AlertID=628,
there is now malware-in-the-field that exploits the VML thing.  and according
to http://www.auscert.org.au/render.html?it=6771, there's already phishing.

last but not least, according to http://isotf.org/zert/ there is a non-MSFT
patch for the VML thing.  i don't expect ISP's to recommend its use, due to
liability reasons, but mentioning it or even proactively notifying about it
might be a way to get people off the phone (or keep them from calling in).

(i'll remove the ISC training ad from my .signature for this post, since i've
gone way over my NANOG quota here -- three messages in 24 hours, oops.)
--
Paul Vixie


Re: Aggregation path information [was: 200K prefixes - Weekly Routing Table Report]

2006-10-14 Thread Paul Vixie

[EMAIL PROTECTED] (Patrick W. Gilmore) writes:

 Obviously the table contains cruft.  But I know we could not shrink  
 it to 109K prefixes without losing something from where I sit.  Are  
 you sure there's no additional path info?

before we could be sure that an aggregation proposal was nondestructive,
we'd have to model it from where a lot of people sit, not just patrick.

on the one hand this seems to be a useful endeavour.  in addition to
measuring the total number of routes, we probably ought to measure the
number of non-TE-related routes, and focus our attention on those routes
and also on their ratio (the global TE cost borne by the routing system).

on the other hand i despair of finding a set of observation posts and
metrics that will abstract TE out of the observed routes in a way that
wouldn't be seen as controversial or useless by most of the community.
-- 
ISC Training!  October 16-20, 2006, in the San Francisco Bay Area,
covering topics from DNS to DHCP.  Email [EMAIL PROTECTED]
--
Paul Vixie


Re: register.com down sev0?

2006-10-25 Thread Paul Vixie

   I'm seeing *.register.com down (including ns*) from everywhere.

  They are apparently under a multi-gbps ddos of biblical proportions.

i wonder if that's due to the spam they've been sending out?

 As pointed out by Rob Seastrom in private email, RFC2182 addresses things
 of biblical proportions -

no.  really, not.

   such as dispersion of nameservers geographically
 and topologically. Having 3 secondaries, only one of them on separate /24,
 and none of them on topologically different network does not qualify.

there is no zone anywhere, including COM, the root zone, or any other, that
is immune from worst-case DDoS.  anycast all you want.  diversify.  build a
name service infrastructure larger than the earth's moon.  none of that will
matter as long as OPNs (the scourge of internet robustness) still exist.

 Given that register.com is/was public (I think?) - I wonder what are their 
 sarbox auditors saying about it now ;)

that's an easy but catty criticism, and baseless.  i'm sure that some way
could be found to improve register.com's infrastructure, and i don't just
mean by stopping the spamming they've been doing.  but it's not trivial and
in the face of well-tuned worst-case DDoS, nothing will help.

 Compliance of icann-accredited gtld-registrars with rfc2182 might be a
 good subject for research (again, thanks to rs for idea)

i've been wondering if ICANN's accreditation could be revoked for spammers,
and register.com has indeed been spamming.  and it may also be that they
are out of compliance with RFC 2182.  but that would be like catching al
capone for income tax evasion just because you couldn't pin murder on him.

(OPNs = Other People's Networks)
--
Paul Vixie


Re: OT: How to stop UltraDNS sales people calling

2006-11-28 Thread Paul Vixie

here i am replying to an offtopic post.  what's the world coming to?

[EMAIL PROTECTED] (Andy Davidson) writes:

 I am really fed up of calls from UltraDNS - we seem to get them every
 few days.  We don't need their product.  

every month or two somebody will ask me "does BIND really drop 20% of all
queries it receives?" and i say, "um, no, why do you ask?" and the answer
is always "that's what the ultradns salesman told me."  i can't argue with
their success, but i guess i am ready to quibble over their manners.
-- 
Paul Vixie


Re: OT: How to stop UltraDNS sales people calling

2006-11-28 Thread Paul Vixie

 Hi Paul, just curious, someone over at UltraDNS called and told me my own
 bind server is dropping 20% of queries. Can you please explain to me how did
 they log into my systems?

:-)


Re: Colocation in the US.

2007-01-24 Thread Paul Vixie

[EMAIL PROTECTED] (david raistrick) writes:

  I had a data center tour on Sunday where they said that the way they
  provide space is by power requirements.  You state your power
  requirements, they give you enough rack/cabinet space to *properly*
  house gear that consumes that much power.
 
 properly is open for debate here.  ...  It's possible to have a
 facility built to properly power and cool 10kW+ per rack.  Just that most
 colo facilties aren't built to that level.

i'm spec'ing datacenter space at the moment, so this is topical.  at 10kW/R
you'd either cool ~333W/SF at ~30sf/R, or you'd dramatically increase sf/R
by requiring a lot of aisleway around every set of racks (~200sf per 4R
cage) to get it down to 200W/SF, or you'd compromise on W/R.  i suspect
that the folks offering 10kW/R are making it up elsewhere, like 50sf/R
averaged over their facility.  (this makes for a nice-sounding W/R number.)
i know how to cool 200W/SF but i do not know how to cool 333W/SF unless
everything in the rack is liquid cooled or unless the forced air is
bottom-top and the cabinet is completely enclosed and the doors are never
opened while the power is on.
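spelling out the arithmetic above as a quick sanity check (the helper name is
mine; this is just the kW-per-rack over square-feet-per-rack division):

```python
def watts_per_sf(kw_per_rack: float, sf_per_rack: float) -> float:
    """Cooling density: rack power spread over the floor area it occupies."""
    return kw_per_rack * 1000.0 / sf_per_rack

# 10 kW/R in a standard ~30 sf footprint is ~333 W/SF -- beyond plain forced air
print(round(watts_per_sf(10, 30)))   # -> 333
# the same 10 kW/R averaged over 50 sf/R of facility is a nicer-sounding 200 W/SF
print(round(watts_per_sf(10, 50)))   # -> 200
# ~6 kW/R at ~30 sf/R also lands right at the ~200 W/SF air-cooling ceiling
print(round(watts_per_sf(6, 30)))    # -> 200
```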

you can pay over here, or you can pay over there, but TANSTAAFL.  for my
own purposes, this means averaging ~6kW/R with some hotter and some
colder, and cooling at ~200W/SF (which is ~30SF/R).  the thing that's
burning me right now is that for every watt i deliver, i've got to burn a
watt in the mechanical to cool it all.  i still want the rackmount
server/router/switch industry to move to liquid which is about 70% more
efficient (in the mechanical) than air as a cooling medium.

  It's a good way of looking at the problem, since the flipside of power
  consumption is the cooling problem.  Too many servers packed in a small
  space (rack or cabinet) becomes a big cooling problem.
 
 Problem yes, but one that is capable of being engineered around (who'd 
 have ever thought we could get 1000Mb/s through cat5, after all!)

i think we're going to see a more Feynman-like circuit design where we're
not dumping electrons every time we change states, and before that we'll
see a standardized gozinta/gozoutta liquid cooling hookup for rackmount
equipment, and before that we're already seeing Intel and AMD in a
watts-per-computron race.  all of that would happen before we'd air-cool
more than 200W/SF in the average datacenter, unless Eneco's chip works out
in which case all bets are off in a whole lotta ways.
-- 
Paul Vixie


Re: [cacti-announce] Cacti 0.8.6j Released (fwd)

2007-01-24 Thread Paul Vixie

[EMAIL PROTECTED] (Jason LeBlanc) writes:

 After looking for 'the ideal' tool for many years, it still amazes me
 that no one has built it.  Bulk gets, scalable schema and good portal/UI.
 RTG is better than MRTG, but the config/db/portal are still lacking.

if funding were available, i know some developers we could hire to build the
ultimate scalable pluggable network F/L/OSS management/monitoring system.  if
funding's not available then we're depending on some combination of hobbiests
(who've usually got rent to pay, limiting their availability for this work)
and in-house toolmakers at network owners (who've usually got other work to
do, or who would be under pressure to monetize/license/patent the results if
That Much Money was spent in ways that could otherwise directly benefit their
competitors.)

been there, done that, got the t-shirt.  is there funding available yet?
like, $5M over three years?  spread out over 50 network owners that's ~$3K
a month.  i don't see that happening in a consolidation cycle like this one,
but hope springs eternal.  give randy and hank the money, they'll take care
of this for us once and for all.
-- 
Paul Vixie


Re: [cacti-announce] Cacti 0.8.6j Released (fwd)

2007-01-24 Thread Paul Vixie

[EMAIL PROTECTED] (Jeroen Massar) writes:

  ..., $5M over three years?  spread out over 50 network owners that's
  $3K a month.  i don't see that happening in a consolidation cycle like
  this one, but hope springs eternal.  give randy and hank the money,
  they'll take care of this for us once and for all.
 
 Heh, for that kind of money you can even convince me to do it ;)

glibly said, sir.  but i disastrously underestimated the amount of time
and money it would take to build BIND9.  since i'm talking about a scalable
pluggable portable F/L/OSS framework that would serve disparate interests
and talk to devices that will never go to an snmp connectathon, i'm trying
to set a realistic goal.  anyone who wants to convince me that it can be done
for less than what i'm saying will have to first show me their credentials,
second convince david conrad and jerry scharf.  (after that, i'm all ears.)
-- 
Paul Vixie


Re: Colocation in the US.

2007-01-24 Thread Paul Vixie

 If you have water for the racks:

we've all gotta have water for the chillers. (compressors pull too much power,
gotta use cooling towers outside.)

 http://www.knuerr.com/web/en/index_e.html?products/miracel/cooltherm/cooltherm.html~mainFrame

i love knuerr's stuff.  and with mainframes or blade servers or any other
specialized equipment that has to come all the way down when it's maintained,
it's a fine solution.  but if you need a tech to work on the rack for an
hour, because the rack is full of general purpose 1U's, and you can't do it
because you can't leave the door open that long, then internal heat exchangers
are the wrong solution.

knuerr also makes what they call a "CPU cooler" which adds a top-to-bottom
liquid manifold system for cold and return water, and offers connections to
multiple devices in the rack.  by collecting the heat directly through paste
and aluminum and liquid, and not depending on moving-air, huge efficiency 
gains are possible.  and you can dispatch a tech for hours on end without
having to power off anything in the rack except whatever's being serviced.
note that by "CPU" they mean "rackmount server" in nanog terminology.  CPU's
are not the only source of heat, by a long shot.  knuerr's stuff is expensive
and there's no standard for it so you need knuerr-compatible servers so far.

i envision a stage in the development of 19-inch rack mount stuff, where in
addition to console (serial for me, KVM for everybody else), power, ethernet,
and IPMI or ILO or whatever, there are two new standard connectors on the
back of every server, and we've all got boxes of standard pigtails to connect
them to the rack.  one will be cold water, the other will be return water.
note that when i rang this bell at MFN in 2001, there was no standard nor any
hope of a standard.  today there's still no standard but there IS hope for one.

 (there are other vendors too, of course)

somehow we've got standards for power, ethernet, serial, and KVM.  we need
a standard for cold and return water.  then server vendors can use conduction
and direct transfer rather than forced air and convection.  between all the
fans in the boxes and all the motors in the chillers and condensers and
compressors, we probably cause 60% of datacenter related carbon for cooling.
with just cooling towers and pumps it ought to be more like 15%.  maybe
google will decide that a 50% savings on their power bill (or 50% more
computes per hydroelectric dam) is worth sinking some leverage into this.

 http://www.spraycool.com/technology/index.asp

that's just creepy.  safe, i'm sure, but i must be old, because it's creepy.


Re: Colocation in the US.

2007-01-25 Thread Paul Vixie

 How long before we rediscover the smokestack? After all, a colo is an
 industrial facility.  A cellar beneath, a tall stack on top, and let physics
 do the rest.

odd that you should say that.  when building out in a warehouse with 28 foot
ceilings, i've just spec'd raised floor (which i usually hate, but it's safe
if you screw all the tiles down) with horizontal cold air input, and return
air to be taken from the ceiling level.  i agree that it would be lovely to
just vent the hot air straight out and pull all new air rather than just
make-up air from some kind of ground-level outside source... but then i'd
have to run the dehumidifier on a 100% duty cycle.  so it's 20% make-up air
like usual.  but i agree, use the physics.  convected air can gather speed,
and i'd rather pull it down than suck it up.  woefully do i recall the times
i've built out under t-bar.  hot aisles, cold aisles.  gack.

 Anyway, RJ45 for Water is a cracking idea.  I wouldn't be surprised if
 there aren't already standardised pipe connectors in use elsewhere - perhaps
 the folks on NAWOG (North American Water Operators Group) could help?  Or
 alt.plumbers.pipe? But seriously folks, if the plumbers don't have that,
 then other people who use a lot of flexible pipework might.  Medical,
 automotive, or aerospace come to mind.

the wonderful thing about standards is, there are so many to choose from.
knuerr didn't invent the fittings they're using, but, i'll betcha they aren't
the same as the fittings used by any of their competitors.  not yet anyway.

 All I can think of about that link is a voice saying Genius - or Madman?

this thread was off topic until you said that.


what the heck do i do now?

2007-01-31 Thread Paul Vixie

bear with me, this appears to be about DNS but it's actually about e-mail.

maps.vix.com has been gone since 1999 or so.  mail-abuse.org is the new thing.
i've tried just about everything to get traffic toward the old domain name to
stop... right now there's a DNAME but it made no real difference.  i've taken
the maps.vix.com domain away.  i've set its NS to localhost.  i've put long
TTL's on both good and bad data.  the traffic continues.  clearly this is my
penance for starting MAPS, and i hear you giggling about it, but i need some
advice.  once upon a time, someone more insane than myself wanted to close an
RBL and did so by replacing it with a wildcard entry.  we all hated that since
it caused a lot of mail to bounce.  (all mail that would otherwise have been
received by that RBL's subscribers, in fact.)  it did however have the effect
of causing the subscribers to reconfigure their mailers to stop querying the
now-dead RBL in question.  what's the current thinking on this?
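for those who missed that episode, the wildcard shutdown amounts to something
like this in the dead zone's file (a hypothetical sketch, not the actual zone
data; SOA/NS records omitted):

```
$ORIGIN maps.vix.com.
$TTL 604800
; every name under the dead RBL now "matches", so subscribers suddenly
; reject ALL inbound mail -- painful, but it gets their attention.
*       IN  A    127.0.0.2
*       IN  TXT  "this RBL is gone; remove it from your mailer configuration"
```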

oh and even though this isn't about bgp i can put some numbers in so that it
will seem on-topic.  out of 100K DNS queries received by a vix.com nameserver
(which is about five minutes worth), here are the toptalkers for maps.vix.com.
(and we all know by now that public shaming and notification won't work, i'm
not sure why this is relevant, but it looks good, so here it is.)  thoughts?

2208 68.216.187.10
2156 192.106.1.99
1348 213.239.240.162
1024 192.106.1.100
 808 192.106.1.1
 742 216.156.2.29
 659 216.55.144.5
 594 192.203.136.10
 592 80.247.227.1
 556 24.111.1.180
 535 217.18.160.2
 523 87.127.246.222
 438 192.106.1.9
 430 192.203.136.1
 384 69.20.2.227
 378 213.249.17.10
 355 87.98.222.35
 353 216.218.185.16
 331 213.251.136.18
 319 72.41.223.229
 274 216.65.0.148
 264 61.194.193.9
 257 200.75.51.132
 251 213.234.128.211
 248 213.251.134.167
 222 69.38.230.2
 208 219.232.224.89
 193 213.249.17.11
 178 200.75.51.133
 175 204.212.38.12
 170 209.244.4.235
 167 211.5.1.220
 158 200.62.191.36
 150 195.178.70.10
 147 212.125.128.91
 147 192.247.72.254
 145 202.180.64.9
 144 216.46.201.220
 135 164.164.149.13
 134 203.97.32.3
 132 66.98.240.151
 129 84.14.44.250
 122 206.40.201.230
 118 195.14.50.1
 108 211.5.1.219
 102 64.234.192.7
  98 66.235.216.48
  88 200.205.163.168
  85 69.20.2.243
  85 69.20.2.231
  85 146.83.183.94
  80 202.239.113.18
  79 199.60.229.4
  79 140.186.4.4
  78 68.125.191.131
  78 203.97.32.5
  72 80.88.192.200
  71 192.189.54.17
  70 82.210.64.18
  69 217.106.235.214
  68 202.229.192.20
  66 202.154.3.2
  64 217.19.24.18
  61 213.203.124.146
  61 195.96.33.249
  57 168.95.1.128
  56 203.94.129.130
  56 200.69.193.111
  56 168.95.1.133
  55 72.3.128.125
  54 168.95.1.145
  53 216.231.41.2
  53 210.119.192.6
  52 203.141.32.247
  52 168.95.1.132
  50 80.80.231.223
  50 204.145.230.31
  49 213.225.90.203
  48 61.129.66.75
  48 38.115.131.10
  47 195.220.32.99
  46 64.60.208.40
  46 207.12.35.170
  46 168.95.1.129
  46 168.95.1.126
  44 212.42.168.116
  44 209.128.208.11
  44 168.95.1.148
  43 84.14.176.166
  43 200.62.191.38
  43 154.11.147.2
  42 168.95.1.144
  41 168.95.1.131
  41 168.95.1.127
  40 81.240.254.45
  40 218.223.31.252
  40 209.82.111.202
  39 69.20.2.237
  39 217.22.50.3
  39 211.72.171.75
  39 131.188.3.89
  38 203.166.97.12
  38 199.201.145.162
  38 168.95.1.147
  38 12.98.160.66
  37 69.36.241.228
  37 168.95.1.146
  37 131.130.199.155
  36 200.31.192.18
  34 216.174.17.6
  34 209.61.163.233
  34 200.207.88.142
  34 168.95.1.92
  32 80.247.228.1
  32 69.60.117.147
  31 67.18.97.50
  30 202.175.151.10
  27 168.95.1.99
  26 216.127.136.207
  26 212.245.255.2
  26 195.2.96.2
  25 216.244.191.38
  25 212.86.129.142
  25 199.64.0.252
  24 212.55.197.117
  24 199.166.210.2
  24 154.11.136.2
  23 216.65.0.156
  23 198.63.210.55
  23 194.204.0.1
  22 202.134.64.12
  21 84.22.6.100
  21 211.125.124.33
  21 203.198.7.66
  21 203.144.168.6
  21 203.116.1.94
  21 202.96.102.3
  21 158.234.250.70
  20 65.106.2.117
  20 213.191.73.65
  19 66.77.137.9
  19 64.65.208.6
  19 64.122.97.116
  19 203.23.72.2
  19 194.106.218.42
  17 80.255.128.145
  17 168.95.192.81
  16 194.242.40.3
  16 168.95.192.83
  15 69.20.2.225
  15 210.104.1.13
  15 207.7.4.66
  15 207.218.192.26
  15 207.106.1.2
  15 168.95.192.89
  15 168.95.192.84
  15 140.123.181.1
  14 217.10.104.109
  14 216.145.96.10
  14 212.45.26.98
  14 210.238.234.242
  14 210.236.36.2
  14 193.50.240.2
  14 193.225.16.111
  14 168.95.192.86
  14 150.186.1.1
  13 24.96.32.18
  13 218.232.110.37
  13 213.130.10.10
  13 202.237.13.66
  13 161.142.201.17
  12 168.95.192.80
  11 62.252.64.17
  11 213.171.195.168
  11 211.125.124.34
  11 210.253.165.8
  11 203.30.161.1
  11 202.45.84.68
  10 216.127.136.213
  10 212.49.128.65
  10 203.10.110.104
  10 202.134.99.162
  10 192.114.65.50
  10 168.95.192.88
  10 168.95.192.85
...
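a list like the one above can be boiled out of a BIND query log in a few lines.
this is a hypothetical sketch, not the script actually used; the log-line shape
in the comment is an assumption about the format:

```python
import collections
import re

def toptalkers(log_lines, zone="maps.vix.com", n=50):
    """Count queries per client IP for names under `zone`.

    Assumes BIND query-log lines shaped roughly like:
      ... client 192.0.2.1#53210: query: 2.0.192.maps.vix.com IN A ..."""
    pat = re.compile(r"client ([0-9.]+)#\d+: query: (\S+)")
    counts = collections.Counter()
    for line in log_lines:
        m = pat.search(line)
        if m and m.group(2).rstrip(".").lower().endswith(zone):
            counts[m.group(1)] += 1
    return counts.most_common(n)
```

printing it in the same shape as the table above would then be something like
`print("\n".join("%5d %s" % (n, ip) for ip, n in toptalkers(lines)))`.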


Re: what the heck do i do now?

2007-01-31 Thread Paul Vixie

  ... the effect of causing the subscribers to reconfigure their mailers to
  stop querying the now-dead RBL in question.  what's the current thinking
  on this?
 
 one problem with this is that the pain is not felt by the misconfigured
 folk, but by distant innocents.

i am one of those who believes that e-mail is a shared benefit.  so in my
worldview, both the intended recipients and actual senders would feel pain.
(bulk e-mail disproportionately benefits the sender, but i'm thinking 1x1
e-mail in this thought experiment.)


Re: what the heck do i do now?

2007-01-31 Thread Paul Vixie

 One thing you might consider is putting together a script to harvest email
 addresses from whois records that correspond to the PTR for the querying
 IPs.  Add to that list abuse, postmaster, webmaster, hostmaster, etc @ the
 poorly run domain.  Then fire off a message explaining the situation and
 that you'll be adding a wildcard record on such and such date (preferably
 not 4/1).  Script all of this and run it every couple of days until the
 date you gave and then follow through with the wildcard entry.  This
 undoubtedly won't stop all of the whining but you can at least say you
 tried.

volunteers are welcome to apply for that job.

 Perhaps you can get CNet or InfoWorld to pick it up and write a story about the
 service and the impending change.

that, conversely, would be fun.

 When it comes right down to it, you've got to do what you've got to do to
 recover your domain.  You provided a service that many of us relied upon.
 The responsibility rests on our shoulders to keep up with the changing
 times.  7-8 years is more than enough time for even the laziest of mail
 admins to update their config.  Think about how much bandwidth has been
 wasted over the years with these errant queries...

about 1 billionth as much as has been wasted by RFC1918-sourced DNS queries
sent to the root name servers OR RFC1918-domained DNS updates sent to AS112.
therefore i treat it as a personal annoyance but NOT a waste in its own right.


the authors of RFC 2317 have a question for att worldnet

2007-02-01 Thread Paul Vixie

(this must be my week for past-sins penance related to RBL's.)

today someone whose e-mail was blocked when they tried to send it to an att
customer, asked the authors of RFC 2317 to please unblock their address.  as
the only such author whose e-mail address hasn't changed since RFC publication
i pretty much assumed that the other two guys weren't hearing this, and so i
investigated.  the complainer showed me this text:

  [EMAIL PROTECTED]: host gateway2.att.net[12.102.240.23] said:
550-24.248.126.43 blocked by ldap:ou=rblmx,dc=worldnet,dc=att,dc=net 550
Blocked for abuse. See http://www.att.net/general-info/rblinquiry.html;
(in reply to MAIL FROM command)

i looked at the URL thus indicated, and the link for

  Information for end-users whose messages have been blocked.

is

  http://www.att.net/general-info/mail_info/block_enduser.html

which says:

  What to do: Ask your system administrator to submit identifying information
  to the DNS. For more information, your administrator should refer to
  http://www.faqs.org/rfcs/rfc2317.html In the meantime, you should use a
  fully registered domain for sending your messages, such as the mail system
  from an ISP or one of the major free e-mail services.

now, i count myself as a master of the obscure reference, but this is over
the top.  can someone from att worldnet please contact me for the purpose
of explaining what RFC 2317 could possibly have to do with spam complaints?

(and btw, if you're going to block inbound e-mail, you need to give senders
some idea of how to get unblocked.  not for fairness, just for practicality.
and this parenthesized paragraph is why i count this screed as not-off-topic.)


Re: what the heck do i do now?

2007-02-01 Thread Paul Vixie

[EMAIL PROTECTED] (Brian Wallingford) writes:

 ...  Considering the time passed since maps went defunct, Paul is
 entirely justified in doing whatever is necessary to cluebat the
 offending networks, imho.

thanks for those supportive words.  note that MAPS is not defunct.  the
domain MAPS.VIX.COM is defunct, in favour of MAIL-ABUSE.ORG, which was
originally an asset of MAPS LLC, then Kelkea, and lately Trend Micro.

i've received some excellent private suggestions due to this thread.  my
two leading candidates are (a) ask dan bernstein to take over MAPS.VIX.COM
and run his own RBL there; vs (b) hack up a BIND server so that it can
return a positive answer 1% of the time (chosen randomly).
-- 
Paul Vixie


internet idealism (Re: what the heck do i do now?)

2007-02-01 Thread Paul Vixie

[EMAIL PROTECTED] (Brian Wallingford) writes:

 Ultimately, the problem is that the idealism which was more or less the
 rule a decade ago has taken a backseat to commercialism ...

i dunno about that.  i see a lot of idealism still.  volunteers at spamhaus,
and within the da/mwp community, and at cymru, are still going quite strong.

and in an odd twist of fate's knife, i still hold the cix.net domain which
was very quiet until COX went into the internet business a few years back.
since i and o are adjacent in qwertyland, i get a whole lotta misdirected
e-mail, including a lot of 1x1 correspondence from folks who mistyped their
source-email-address in their e-mail reader and then proceeded to correspond.

rather than bounce it all, i answer it with the following template:

there is no such person here at cix.net.

try cox.net.

re:

and then i include-all the mail they sent to me by mistake.  eventually i
got tired of explaining to the senders why [EMAIL PROTECTED] was answering
their e-mail, and so i started forging the source of my response to be the cix.net
address they were trying to reach.  i've got it all down to a couple of MH-E
keystrokes and macros and e-lisp functions now.  i just don't like the idea
of bouncing the stuff outright, since a lot of the senders will never guess
what went wrong.  (i also appreciate the extra spam, for robot-training use.)
it's only a dozen messages a day, on average, and thus: idealism isn't dead.
-- 
Paul Vixie


Re: WTH does Paul do now?

2007-02-01 Thread Paul Vixie

[EMAIL PROTECTED] (Jon Lewis) writes:

 Why do I even bother?

  (reason: 553 5.7.1 Service unavailable; \
   Client host [69.28.69.2] blocked using reject-all.vix.com; \
   reason / created)

here's what you ran into.

*.69.28.69.reject-all.vix.com. 1800 IN  TXT reason sa.vix.com \
watchmaillog sqlgrey \
[EMAIL PROTECTED] - \
[EMAIL PROTECTED] \
at 2006-11-09 17:55:26.932919

obviously, autoblackholing /24's based on a single greylist failure (mail
not retried within 24 hours after receiving the initial 4XX) was over the
top.  i've disabled that part of the inbound processing robotics, and i've
removed your /24 from the list.
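the (now-disabled) robotics described above amount to: when a greylist entry
ages out un-retried, emit a wildcard record covering the sender's /24 into the
reject-all zone.  a hypothetical sketch -- the reversed-octet naming and the
record shape are my assumptions from the TXT record quoted above:

```python
import ipaddress

def reject_all_record(sender_ip: str, reason: str,
                      zone: str = "reject-all.vix.com") -> str:
    """Build a wildcard TXT record covering the sender's /24, using the
    reversed-octet RBL naming convention (an assumption here)."""
    net = ipaddress.ip_network(f"{sender_ip}/24", strict=False)
    o1, o2, o3, _ = str(net.network_address).split(".")
    name = f"*.{o3}.{o2}.{o1}.{zone}."
    return f'{name} 1800 IN TXT "{reason}"'
```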
-- 
Paul Vixie


Re: what the heck do i do now?

2007-02-01 Thread Paul Vixie

[EMAIL PROTECTED] (Jon Lewis) writes:

 As for trying to make it stop, the two methods thought to be most 
 successful are:
 
 1) maps.vix.com.  604800  IN  NS  .

i've tried that.  the retry rate actually goes up rather than down.

 2) maps.vix.com.  604800  IN  NS  u1.vix.com.
 maps.vix.com. 604800  IN  NS  u2.vix.com.
 maps.vix.com. 604800  IN  NS  u3.vix.com.
 ... [as many as you like]
 u1.vix.com.   604800  IN  A   192.0.2.1
 u2.vix.com.   604800  IN  A   192.0.2.2
 u3.vix.com.   604800  IN  A   192.0.2.3
 ... [as many as you like]

i hadn't thought of that.  i'll think seriously about it, thanks.

 Successful here doesn't necessarily mean the traffic stopped but rather 
 the traffic has been mitigated as much as is possible without actually 
 getting people to fix their systems and stop querying the dead zone.

right you are.  it sort of goes against my personal grain to cause folks'
mail to bounce when their only offense against the community is not reading
the qmail man page and understanding what the defaults are.
-- 
Paul Vixie


death of the net predicted by deloitte -- film at 11

2007-02-11 Thread Paul Vixie

(i'm guessing kc will be on the phone soon, to get their data from them?)

...

A recent report from Deloitte said 2007 could be the year the internet
approaches capacity, with demand outstripping supply. It predicted bottlenecks
in some of the net's backbones as the amount of data overwhelms the size of
the pipes.

...

http://news.bbc.co.uk/2/hi/technology/6342063.stm


Re: death of the net predicted by deloitte -- film at 11

2007-02-11 Thread Paul Vixie

-Chris, still-waiting-for-the-rapture, wrote as follows:

 (or did I miss the hue and cry on nanog-l about full pipes and no more fiber
 to push traffic over? wasn't there in fact a hue and cry about a 1) fiber
 glut, 2) only 4% of all fiber actually lit?)

:-).  however, you did seem to miss the hue and cry about how ALL YOUR BASE
ARE BELONG TO GOOGLE now.  a smattering of this can be found at:

* http://www.internetoutsider.com/2006/04/how_much_dark_f.html
* http://dondodge.typepad.com/the_next_big_thing/2005/11/google_data_cen.html

now as to whether this is true, or whether it's a prevent-defense meant to
strangle the redmond folks before the redmond folks know they needed fiber,
or whether google actually needs the capacity, or whether it's possible to
lock up the market for more than a couple of years, given that more capacity
can be laid in once all the IRU's are signed... who the heck knows or cares?

but hue there has been, and cry also, and measurement weenettes are likely
banging their foreheads against their powerbook screens while they read our
uninformed 4% estimates.


Re: death of the net predicted by deloitte -- film at 11

2007-02-11 Thread Paul Vixie

 Has anyone considered that perhaps google is not looking at beating
 Microsoft but instead at beating TIVO, ABC, CBS, Warner Cable, etc?

sure, but...

 You can't possibly believe that there is enough bandwidth to stream 
 HD video to everyone, that's just not going to happen any time soon.

...wouldn't there be, if interdomain multicast existed and had a billing
model that could lead to a compelling business model?  right now, to the
best of my knowledge, all large multicast flows are still intradomain.
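the arithmetic behind that claim can be sketched in a few lines; the bitrate
and audience figures below are illustrative assumptions, not numbers from this
thread:

```python
# rough unicast-vs-multicast bandwidth comparison (illustrative numbers only)
HD_STREAM_MBPS = 8        # assumed bitrate of a single HD stream
VIEWERS = 1_000_000       # assumed concurrent audience for one channel

# unicast: every viewer receives a separate copy across the backbone
unicast_mbps = HD_STREAM_MBPS * VIEWERS

# interdomain multicast: one copy per link, regardless of audience size
multicast_mbps = HD_STREAM_MBPS

print(f"unicast: {unicast_mbps / 1_000_000} Tbps, multicast: {multicast_mbps} Mbps")
```

the point is not the exact numbers but the shape: unicast cost scales with the
audience, multicast cost does not.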

so if tivo and the others wanted to deliver all that crap using IP, would
they do what broadcast.com did (lots of splitter/repeater stations), or
do what google is presumably doing (lots of fiber), or would they put
some capital and preorder into IDMR?

 All you need is someone like Cisco to team with who can produce a network
 consumer DVD player capable of assuming the role of a physical tivo box,
 say something like the kiss technology DP-600 box (cisco bought kiss last
 year) that the MPAA loves so much (MPAA bought thousands of them for their
 own purposes) and presto things are suddenly taking a whole new shape and
 direction.

yeah.  sadly, that seems like the inevitable direction for the market leaders
and disruptors.  but i still wonder if a dark horse like IDMR can emerge
among the followers and incumbents (or the next-gen disruptors)?

 So now you get a choice, buy a new HD TV tuner or buy a new DVD player that
 does standard or HD tv even after the over the air broadcast change happens
 in the US.

at some point tivo will disable my fast-forward button and i'll give up 
network TV altogether.  irritatingly, hundreds of millions of others will
not.  but we digress.


Re: Every incident is an opportunity (was Re: Hackers hit key Internet

2007-02-11 Thread Paul Vixie

[EMAIL PROTECTED] (Sean Donelan) writes:

 ... don't believe everything you read on the net.

you had me right up until that last part, which is completely unreasonable.
-- 
Paul Vixie


Re: Every incident is an opportunity (was Re: Hackers hit key Internet

2007-02-11 Thread Paul Vixie

   ... don't believe everything you read on the net.
  
  you had me right up until that last part, which is completely unreasonable.
 
 I think it's not only reasonable, but is the only sane way to approach 
 content on the net. Why do you feel it's unreasonable? Or are you being 
 sarcastic? (It's impossible to tell) 

i mean that it's never going to happen and is therefore totally unrealistic,
that any plan with that as a required element is doomed at the outset, and
that we had better figure out alternative plans.

you might just as well ask for rivers to flow backwards, or dogs and cats to
live together in harmony, or an educated american electorate, as to ask that
folks stop believing everything they read on the net | see on tv | etc.

are we off-topic yet?


Re: death of the net predicted by deloitte -- film at 11

2007-02-11 Thread Paul Vixie

[EMAIL PROTECTED] (Geo.) writes:

 IDMR is great if you're a broadcaster or a backbone, but how does it help 
 the last 2 miles, the phoneco ATM network or the ISP network where you have 
 10k different users watching 10k different channels?

http://tools.ietf.org/html/draft-ietf-mboned-auto-multicast-00 is what i
expect.  note: i've drunk that koolaid  am helping on the distribution side.
-- 
Paul Vixie


Re: death of the net predicted by deloitte -- film at 11

2007-02-12 Thread Paul Vixie

[EMAIL PROTECTED] (Geo.) writes:

 Multicast isn't going to help the phoneco atm network. ...

nothing can help, or for that matter save, the phoneco atm network.
-- 
Paul Vixie


Re: Every incident is an opportunity

2007-02-12 Thread Paul Vixie

warning-- this thread is so far off topic, i can't even REMEMBER a topic
that it might once have had.  hit D now.


[EMAIL PROTECTED] (Barry Shein) writes:

 ... If your goal is invasion then value preservation is important
 (factories, bridges, civilian infrastructure, etc.) ...

so if the last remaining superpower were to bomb a country in the middle
east in preparation for invasion, regime change, etc., that superpower
would be well advised to avoid hitting civilian infrastructure, assuming
that its bombs were smart enough to target like that?

(i'm sorry, but your theory doesn't sound plausible given recent events.)
-- 
Paul Vixie


Re: DNS: Definitely Not Safe?

2007-02-14 Thread Paul Vixie

[EMAIL PROTECTED] (Stephane Bortzmeyer) writes:

 It may be on-topic but it is full of FUD, mistakes and blatant
 b...t. Certainly not the recommended reading for the sysadmin.

i think you're being way too kind here.

 The best stupid sentence is the one asking firewalls in front of the
 DNS servers... to prevent tunneling data over DNS!

just as the most common lie told by spammers is "dear friend", so it is
that the biggest error in this piece is in the first sentence:

"When it comes to the Web's domain name system (DNS),"

this guy was probably writing netware-vs-smb comparisons during the two
decades that the internet existed before the web came along.  the web is
an internet application, and the dns is part of the internet, not part of
the web.  the rest of the article is equally horrific in its maltreatment
and ignorance of facts.
-- 
Paul Vixie


Re: PGE on data centre cooling..

2007-03-29 Thread Paul Vixie

[EMAIL PROTECTED] (Dorn Hetzel) writes:

 I preferred the darkness of PAIX back in the late 90's.  We had a
 christmas tree in our cage and it looked great in the dark :)

that was brian reid's idea, and it was a great one, and equinix-san-jose
was merely copying paix (where al and jay had just spent a few years).
most importantly, it's STILL dark, and still looks great.


Re: On-going Internet Emergency and Domain Names

2007-03-30 Thread Paul Vixie

whoa.  this is like deja vu all over again.  when [EMAIL PROTECTED] asked me to
patch BIND gethostbyaddr() back in 1994 or so to disallow non-ascii host
names in order to protect sendmail from a /var/spool/mqueue/qf* formatting
vulnerability, i was fresh off the boat and did as i was asked.  a dozen
years later i find that that bug in sendmail is long gone, but the pain
from BIND's check-names logic is still with us.  i did the wrong thing
and i should have said "just fix sendmail, i don't care how much easier
it would be to patch libc, that's just wrong."
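the check-names logic in question amounted to the RFC 952/1123
letters-digits-hyphen rule; a minimal sketch of such a validator follows (an
illustration of the rule, not BIND's actual code):

```python
import re

# RFC 952/1123 "LDH" rule: each label is letters, digits, and interior
# hyphens, 1-63 octets, beginning and ending with a letter or digit --
# roughly what BIND's check-names option enforced
_LABEL = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?$")

def checkname_ok(hostname: str) -> bool:
    """Return True if every label of hostname passes the LDH check."""
    labels = hostname.rstrip(".").split(".")
    return all(_LABEL.match(label) for label in labels)

print(checkname_ok("u2.vix.com"))        # a legal name
print(checkname_ok("bad_host.example"))  # the underscore fails the check
```

the trouble vixie describes is that once a check like this sits in the
resolver library, every application inherits it, whether or not it shares
sendmail's vulnerability.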

are we really going to stop malware by blackholing its domain names?  if
so then i've got some phone calls to make.
-- 
Paul Vixie


Re: On-going Internet Emergency and Domain Names

2007-03-31 Thread Paul Vixie

 ...
 Back to reality and 2007:
 In this case, we speak of a problem with DNS, not sendmail, and not bind.
 
 As to blacklisting, it's not my favorite solution but rather a limited
 alternative I also saw you mention on occasion. What alternatives do you
 offer which we can use today?

on any given day, there's always something broken somewhere.

in dns, there's always something broken everywhere.

since malware isn't breaking dns, and since dns is not a vector per se, the
idea of changing dns in any way to try to control malware strikes me as
a way to get dns to be broken in more places more often.

in practical terms, and i've said this to you before, you'll get as much
traction by getting people to switch from windows to linux as you'd get by
trying to poison dns.  that is, neither solution would be anything close to
universal.  that rules it out as an alternative we can use today.

but, isp's responsible for large broadband populations could do this in their
recursion farms, and no doubt they will contact their dns vendors to find a
way.  BIND9, sadly, does not make this easy.  i'll make sure that poison at
scale makes the BIND10 feature list, since clustering is already coming.

at the other end, authority servers (which means registries and registrars)
ought, as you've oft said, to be more responsible about ripping down domains
used by bad people.  whether phish, malware, whatever.  what we need is some
kind of public shaming mechanism, a registrar wall of sheep if you will, to
put some business pressure on the companies who enable this kind of evil.

fundamentally, this isn't a dns technical problem, and using dns technology
to solve it will either not work or set a dangerous precedent.  and since
the data is authentic, some day, dnssec will make this kind of poison
impossible.


redirect (Re: On-going Internet Emergency and Domain Names )

2007-03-31 Thread Paul Vixie

  since malware isn't breaking dns, and since dns not a vector per se,
  the idea of changing dns in any way to try to control malware
  strikes me as a way to get dns to be broken in more places more
  often.
 
 Well, once more people learn about DLV (especially the NS override
 extension that has been requested by zone operators), more and more
 questions will pop up why we can't do this for NS records they don't
 like for some reason.  The genie is out of the bottle, I'm afraid.

i'm going to fwd this to [EMAIL PROTECTED] and answer it there,
since this is now far afield of "can i type that into an IOS prompt?".


Re: On-going Internet Emergency and Domain Names

2007-03-31 Thread Paul Vixie

  at the other end, authority servers which means registries and registrars
  ought, as you've oft said, be more responsible about ripping down domains
  used by bad people.  whether phish, malware, whatever.  what we need is
  some kind of public shaming mechanism, a registrar wall of sheep if you
  will, to put some business pressure on the companies who enable this kind
  of evil.
 
 I have done public shaming in the past, as you know. I'd rather avoid it
 if policy/technology can help out.

technology can help someone protect their own assets.  policy can help other
people protect their assets.  public shaming can motivate other people to
protect their own assets.  YMMV.

 Conversationally though, how would you suggest to proceed on that front?

a push-pull.  first, advance the current effort to get registrars and
dynamic-dns providers to share information about bad CC#'s, bad customers,
bad domains, whatever.  arrange things so that a self-vetting society of
both in-industry and ombudsmen have the communications fabric they need to
behave responsibly.  push hard on this, make sure everybody hears about it
and that the newspapers are full of success stories about it.

then, whenever there's a phish or malware domain whose dyndns provider or
dns registrar is notified but takes no action, put it on the wall of shame.
something akin to ROKSO would work.  (in fact, spamhaus could *do* this.)
make sure that the lack of responsible takedown is a matter of public record
and that a sustained pattern of such irresponsibility is always objectively 
verifiable by independent observers who can each make independent judgements.

  fundamentally, this isn't a dns technical problem, and using dns
  technology to solve it will either not work or set a dangerous precedent.
  and since the data is authentic, some day, dnssec will make this kind of
  poison impossible.
 
 Not for the bad guys, unfortunately. :/

by "this kind of poison" i meant something that would be used by good guys
to white out the domains needed/used by bad guys.  it'll be inauthentic
data, and if dnssec is ever launched, this kind of data will be transparently,
obviously inauthentic, and will just not be seen by the client population.
so, yes, dnssec will end up helping the bad guys in that particular way.


Re: On-going Internet Emergency and Domain Names

2007-04-01 Thread Paul Vixie

 From: [EMAIL PROTECTED] (Dave Rand)
 
 ...
 
 We are not fighting technology.  We are dealing with very well organized,
 smart, and well-funded people.
 
 We need to focus on solutions that we can deploy, which will address the
 problems at hand, as we discover them.  That means we will deploy things
 that do not solve underlying problems, but address the symptoms as best we
 can, to prevent the entire mess from falling down.
 
 That means that we must look at short-range solutions to address things in
 near-real-time, ...
 
 There is no one true solution to this.  That means you, as network
 operators, need to look at what makes sense *today*, and *DEPLOY IT*.
 
 ...

As Dave is certainly aware (as CTO of Trend Micro, which bought MAPS/Kelkea),
his daytime employer has a product (called ICSS, and which I had a hand in
building) that proposes to let enterprises or ISP's use recursive DNS as a
delivery mechanism for security policy (like, poison this malware domain).

I've got no heartburn about deploying these technologies at a customer level,
but my experience with both BIND's check-names facility and VeriSign's
sitefinder wildcard (*.COM) has taught me that it's best to creatively
rulebreak at the edge, and keep the core pristine.  I helped Dave build ICSS
and I know that customers of that technology could easily white-out domains
used for Gadi's 0-day and that it would be a good thing for them to do so.

But, that's the DNS edge, I'm not ready to see the DNS core gain features
like this.  Or if they do come, I'd like them to come as a result of consensus
driven protocol engineering (like inside the IETF) and take longer than this
week to be defined.  I hope this clarifies the incompatibility between me
helping dave build ICSS (an edge solution) and me saying that whiting out
malware domain names as a way to stop malware isn't a real (core) solution.
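a customer-facing (edge) white-out of the kind described can be sketched as a
policy check in front of normal recursion; the domain names and the `recurse`
helper here are hypothetical, and this is an illustration of the idea, not the
ICSS implementation:

```python
# edge-level "white-out": answer NXDOMAIN for listed malware domains,
# pass everything else through to normal recursion (sketch only)
BLOCKED = {"evil-0day.example", "phish.example"}  # hypothetical policy feed

def resolve(qname: str, recurse):
    """recurse is the normal resolution function; policy is applied first."""
    name = qname.rstrip(".").lower()
    labels = name.split(".")
    # match the name itself or any parent domain on the policy list
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKED:
            return "NXDOMAIN"          # the "white-out" answer
    return recurse(qname)              # untouched: the core stays pristine

print(resolve("www.evil-0day.example", lambda q: "NOERROR"))
print(resolve("www.isc.org", lambda q: "NOERROR"))
```

the design point is that the policy lives entirely in the customer-facing
resolver; the authoritative (core) data is never altered.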

Some references to ICSS, in case you all missed it.  (Note that I am not an
employee, shareholder, representative, or agent of Trend Micro and I have no
financial stake in ICSS at this point.)

http://www.trendmicro.com/en/products/nss/icss/evaluate/overview.htm
http://www.eweek.com/article2/0,1895,2020286,00.asp
http://www.vnunet.com/itweek/news/2164897/trend-appliance-sniffs-bot-nets
http://www.computerwire.com/industries/research/?pid=2E16BA11-5976-42B0-9C13-EC19B10DB2F3
http://www.computing.co.uk/itweek/news/2164897/trend-appliance-sniffs-bot-nets


Re: On-going Internet Emergency and Domain Names

2007-04-01 Thread Paul Vixie

 From: Dave Crocker [EMAIL PROTECTED]
 To: Paul Vixie [EMAIL PROTECTED], nanog@merit.edu, Gadi Evron [EMAIL 
 PROTECTED]
 Subject: Re: On-going Internet Emergency and Domain Names
 
 offlist.

actually, not, according to the headers shown above.

 Paul Vixie wrote:
  a push-pull.  first, advance the current effort to get registrars and
  dynamic-dns providers to share information about bad CC#'s, bad customers,
  bad domains, whatever.  arrange things so that a self-vetting society of
  both in-industry and ombudsmen have the communications fabric they need to
  behave responsibly.  push hard on this, make sure everybody hears about it
  and that the newspapers are full of success stories about it.
 
 IP Address blacklists are a sufficiently solid staple of email anti-abuse
 effort, that I suspect similar approaches, for other information tidbits,
 would be quite useful.

as the inventor of the internet's first ip address blackhole list (not
blacklist), i agree that it's a solid staple, but i'm not sure it was
the most effective 10-year plan we could have made at the time, had we
been making 10-year plans.

 This is less about shaming and more about filtering.  In this case,
 filtering at DNS registration time, ISP account setup, or the like.

agreed.  i'd be happy to see the DNS registration front end (one of its
edges) gain some kind of reputation filtering.  i just don't want to see
core-level filtering like we did in e-mail, unless it's at the customer-
facing (edge) level, like Trend ICSS offers.

 The difficulties, here, are to a) establish a credible organization for
 creating and maintaining the list(s), b) getting folks to submit data to
 it, and c) getting folks to use it.

those are Gadi's three areas of strength and i'd help him if he did this.

 Since there is quite a lot of track-record on doing this -- both well and
 poorly -- the challenge here is all about implementation, rather than
 design, of the service.

having designed a reputation system inadequately once upon a time, i think
it's important to get both the design and implementation right.


Re: On-going Internet Emergency and Domain Names

2007-04-01 Thread Paul Vixie

[EMAIL PROTECTED] (Gadi Evron) writes:

 On Sun, 1 Apr 2007, Adrian Chadd wrote:

  Stop trying to fix things in the core - it won't work, honest - and start
  trying to fix things closer to the edge where the actual problem is.
 
 Thing is, the problem IS in the core.

nope.  read what he wrote-- "it won't work, honest."  the problem is on the
front-end, an edge, specifically in the way domain tasting works.  does
anyone really believe that there will ever again be a million domains added
to the DNS in a 24-hour period?  (of course not.)  then why do verisign and
the other TLD registries have to cope with many millions of updates per day?
if we solve THAT problem, which is difficult and barely tractable, then the
dns core will go on as before, working just fine all the while.

 DNS is no longer just being abused, it is pretty much an abuse
 infrastructure.

do you mean DNS or do you mean every Internet technology including IP, UDP,
TCP, ICMP, BGP, etc; plus most non-Internet-specific technologies including
ASCII, Unicode, 32-bit, 64-bit, and binary?

"the internet, and technology in general, is no longer just being abused,
it is pretty much an abuse infrastructure." --- i'd agree with *that*.
(but this is not the first time I've been irritated that I can't choose which
other humans to share the galaxy with and which ones I'd like to kick out.)
-- 
Paul Vixie


Re: On-going Internet Emergency and Domain Names (kill this thread)

2007-04-01 Thread Paul Vixie

[EMAIL PROTECTED] (Jeff Shultz) writes:

 As I see it, the problem at hand is the current Windows 0day. What Gadi
 is doing is concentrating on a tactic it is using to justify solving
 what he sees as a more general problem (DNS abuse) that could be used by 
 an exploit to any operating system.  By solving it, this could mitigate 
 future problems.

the more general problem is hard to agree about.  i think it's that every
day neustar and afilias and verisign and the other TLD registries handle
many millions of new-domain transactions, most of which will never be paid
for (domain tasting) and most of which are being held with stolen credit
cards.  i don't know if these companies book the revenue (ship bricks) or
if this is just a hell hole of wasted time and money for them (or, both?)

i do know that a small number of criminals and wastrels among the registrant
and registrar communities are responsible for between 95% and 99.98% of each
day's domain churn, and that most of the domains will never be used or will
only be used for evil.  some of the costs of this infrastructure-for-evil
are passed on to the rest of the registrants, and all of the costs of the
evil itself are passed on to the rest of humanity.

now we can try to pour widescale poison on the domains we see used for evil,
and hope that everyone who would like to be protected by that poison is able
to get in on the action; or we can look at the registrars and registrants,
and track their actions, and build a reputation system indicating who has
done evil and who has irresponsibly or greedily profited from enabling evil.

in the first case we have an infinite set of possible choke points; in the
second we have a finite set.  in the first case we have to pay the cost on
every DNS lookup, in the second case we have to pay the cost on every DNS
registration event.

 We're looking at the alligators surrounding us. Gadi is trying to 
 convince us to help him in draining the swamp (which may indeed be a 
 positive thing in the long run).
 
 Does that sound about right?

that sounds exactly wrong.  harkening back to my experience with check-names
i can tell you that all i did was scare away a few alligators and the swamp
remained.  (probably the same was true of the original MAPS RBL.)  what we've
got in the DNS registry/registrar market today is as corrupt and abusable as
the California electricity market was back in 2000-2001, and we're seeing the
same kind of windfalls enjoyed by the same kind of assholes now as then.  the
system is ripe for policing, which icann has shown that they will not do.  i
want to see gadi in ralph nader mode, shining a light on all this, making it
harder to profit from building the infrastructure of evil.  if that's what
you meant by swamp-draining, then i apologize for misunderstanding you.
-- 
Paul Vixie


Re: Blocking mail from bad places

2007-04-08 Thread Paul Vixie

  ...and why aren't bounce messages standardized in content and formatting?
 
 Jiminy creepers, why can't people run software that implements standards
 from the last frikking *millennium*??!?

because those are feel-good standards, with no selfishness hooks.  emitting
standardized bounce messages helps the internet but does little local good.
and indeed, in the previous fracking millennium, we did well by doing good,
but this is now.  my personal blackhole list has at least 20K entries in it
whose only offense was bouncing a joe-job back to me in non-RFC-1891..1894 format.

the rest of the world will no doubt go on JHD'ing this pre-compliant chaff,
and eventually false-positive so much wheat that there will be no benefit to
sending any kind of error-mail, much less compliant error-mail, since it
won't be read no matter what it looks like.  there's an argument to be made
that we're already in that situation.  store-and-forward should be a priv'd
operation (like relay had to become); the universal message transport should
be synchronous end-to-end.  any errors must be reportable in real time unless
there's a high-privilege relationship with the sender that permits queuing.
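a compliant bounce in the RFC 1891..1894 sense is, structurally, a
multipart/report message carrying report-type=delivery-status (RFC 1894, later
obsoleted by RFC 3464); a rough sketch of the test a receiving filter might
apply follows -- my illustration, not the actual blackhole-list logic:

```python
from email import message_from_string

def is_compliant_dsn(raw: str) -> bool:
    """True if the message is a multipart/report DSN per RFC 1894/3464."""
    msg = message_from_string(raw)
    return (msg.get_content_type() == "multipart/report"
            and msg.get_param("report-type") == "delivery-status")

# a minimal structured bounce (hypothetical addresses)
bounce = (
    "From: MAILER-DAEMON@example.org\n"
    "To: sender@example.net\n"
    "MIME-Version: 1.0\n"
    "Content-Type: multipart/report; report-type=delivery-status;\n"
    ' boundary="b1"\n'
    "\n--b1\nContent-Type: text/plain\n\nDelivery failed.\n--b1--\n"
)
print(is_compliant_dsn(bounce))                               # structured
print(is_compliant_dsn("Subject: failure\n\nyour mail bounced"))  # freeform
```

a freeform bounce fails the check, which is exactly the kind of chaff the
paragraph above says ends up blackholed or discarded unread.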

i have an unrelated question.  understand that i did my time in the messaging
salt mines, i maintained a version of sendmail while eric allman was at
britton-lee, i wrote a book about sendmail with fred avolio, i started the
first e-mail reputation project and was the employer of eric ziegast when he
invented the RBL DNS format universally used today.  in other words i think
i'm qualified to think hard thoughts about messaging.

my question is, is there a network operations e-list that's like NANOG used
to be, someplace where routers and switches and routes and packets and ones
and zeroes are discussed, and where abuse policy, economics, morality, bots,
web, e-mail, ftp, firewalls, uucp, and bitnet are considered irrelevant and
off-topic?  i did my time in the messaging salt mines.  i'm ready to graduate.
-- 
Paul Vixie


Re: Abuse procedures... Reality Checks

2007-04-08 Thread Paul Vixie

  Neither I nor J. Oquendo nor anyone else are required to spend our
  time, our money, and our resources figuring out which parts of X's
  network can be trusted and which can't.

you should only spend resources on activities which will benefit you, of
course.  research into a /N to find out which /(MN)'s are good and which
are evil can pay back in a lower false-positive rate, which will matter to
some blockers more than others.

  It's not that hard, the ARIN records are easy to look up.  Figuring out
  that network operator has a /8 that you want to block based on 3 or 4
  IPs in their range requires just as much work.

as several others have pointed out, detailed records are often unavailable
and are sometimes wrong.  my theory is that folks don't want to put abuse
contact info into WHOIS that will just cause them to be reportbombed with
low quality automated trash having no particular format, lacking useful
detail, and often complaining to the wrong place.  (for example, as one of
the WHOIS contacts for AS112, i am reportbombed frequently by folks whose
reportbot's best guess at who-spammed-them is an RFC 1918 address.)

 It's *very* hard to do it with an automated system, as such automated 
 look-ups are against the Terms of Service for every single RIR out there.

perhaps apropos of this, http://www.arin.net/announcements/article_352.html
says that there's a movement afoot to remove one of the WHOIS query limits
at ARIN.  if someone here thinks that a TOS change that permitted automated
lookups for the purpose of abuse reporting would be good, then in the ARIN
region, http://www.arin.net/policy/irpep.html says how you can suggest such.
-- 
Paul Vixie


Re: Abuse procedures... Reality Checks

2007-04-08 Thread Paul Vixie

[EMAIL PROTECTED] (Douglas Otis) writes:

 Good advice.  For various reasons, a majority of IP addresses within a
 CIDR of any size being abusive is likely to cause the CIDR to be blocked.
 While a majority could be considered as being half right, the existence
 of the bad neighborhood demonstrates a lack of oversight for the entire
 CIDR, which is also fairly predictive of future abuse.

that sounds like a continuum, but my experience requires more dimensions
than you're describing.  for example, this weekend two /24's were hijacked
and used for spam spew.  as my receivebot started blackholing /32's, the
sender started cycling to other addresses in the block.  each address was
used continuously until it stopped working, then the next address came in.
while there were two /24's and two self-similar spam flows, there was not a
strict mapping of spam flow to packet flow -- both /24's emitted both kinds
of spam.  uniq -c results are below.  i've nominated both blocks to the
MAPS RBL, and i can't tell from whois whether it's worthwhile to complain
to the ISP's.  would you say that i've learned anything of predictive value
concerning future spam from the containing /17 (CARI) or /15 (THEPLANET)?
or is this just another run of the mill BGP hijack due to some other ISP's
router having enable passwords still set to the factory default?  (we all
owe randy bush a debt of gratitude for pushing on RPKI, by the way.  anybody
can complain about the weather but very few people do something about it.)

   7 67.18.239.66
   2 67.18.239.67
   1 67.18.239.68
   1 67.18.239.69
   2 67.18.239.70
   5 67.18.239.71
   1 67.18.239.82
   1 67.18.239.83
   2 67.18.239.85
   2 67.18.239.87
   1 67.18.239.88
   3 67.18.239.89
   2 67.18.239.91
   2 67.18.239.92
   3 67.18.239.93
   4 67.18.239.94
   1 71.6.213.103
   1 71.6.213.105
   1 71.6.213.108
   4 71.6.213.159
   1 71.6.213.16
   5 71.6.213.160
   1 71.6.213.161
   7 71.6.213.162
   8 71.6.213.163
   6 71.6.213.166
   1 71.6.213.168
   6 71.6.213.170
   6 71.6.213.171
   2 71.6.213.172
   6 71.6.213.176
   5 71.6.213.179
   6 71.6.213.180
   2 71.6.213.181
   3 71.6.213.182
   3 71.6.213.19
   3 71.6.213.190
   1 71.6.213.191
   1 71.6.213.193
   1 71.6.213.202
   2 71.6.213.23
   5 71.6.213.26
   3 71.6.213.32
   5 71.6.213.65
   4 71.6.213.75
   6 71.6.213.8
   1 71.6.213.80
   1 71.6.213.87
   1 71.6.213.94
   1 71.6.213.96
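collapsing a flow of blackholed /32's into their covering prefixes, as the
receivebot's operator would before nominating whole blocks, can be sketched
with the stdlib ipaddress module; the addresses below are a small subset of
the list above:

```python
from collections import Counter
from ipaddress import ip_network

# a sample of the /32's blackholed during the weekend's spam run
hits = ["67.18.239.66", "67.18.239.94", "71.6.213.163", "71.6.213.8"]

# map each /32 onto its covering /24 to see which blocks to nominate
per_24 = Counter(str(ip_network(f"{a}/24", strict=False)) for a in hits)
print(per_24)   # two covering /24's, matching the two hijacked blocks
```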
-- 
Paul Vixie


Re: Question on 7.0.0.0/8

2007-04-14 Thread Paul Vixie

  And who, exactly, gets to tell IANA/ICANN how to do its job??
 
 As far as I can tell, pretty much everyone on the planet... :-)

but you never LISTEN!  :-)


Re: DHCPv6, was: Re: IPv6 Finally gets off the ground

2007-04-16 Thread Paul Vixie

since somebody made the mistake of cc'ing me, i actually saw this message even
though i long ago killed-by-thread the offtopic noise it's part of.  hereis:

  What's weird is that they don't just return a 0-record NOERROR when you
  do the follow-up A query, which would be the most logical failure mode
  -- they return an authoritative answer of 0.0.0.1 instead.
 
 Ick. These folks really need a clue batting don't they?

this kind of outrageous behaviour has made the introduction of new RR types
almost pointless, which is in turn the reason most often cited for just use
TXT (as in SPF for example).  AAAA is just a current example.  some of these
boxes only handle A RR's (by redirecting folks to a proxy) and answer with
NOERROR/ANCOUNT=0, or just don't answer at all, for everything else.
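the difference between the tolerable and the outrageous failure modes
described above can be sketched as a classifier over (rcode, answers) pairs;
the labels are mine, not protocol terminology:

```python
def classify(rcode: str, answers: list) -> str:
    """Label a middlebox's response to a query for a new RR type (e.g. AAAA)."""
    if rcode == "NOERROR" and not answers:
        return "tolerable"            # empty NOERROR: no data, but honest
    if rcode == "NOERROR" and "0.0.0.1" in answers:
        return "outrageous"           # fabricated authoritative garbage
    if rcode == "NOERROR":
        return "real answer"
    return rcode                      # NXDOMAIN, SERVFAIL, etc. pass through

print(classify("NOERROR", []))           # the logical failure mode
print(classify("NOERROR", ["0.0.0.1"]))  # what the hotel boxes actually did
```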

  Of course, dealing with idiot consumers on a regular basis, their tech
  support folks insist the problem is on the user's machine and that it's
  a bug in their v6 stack, despite Ethereal captures showing the bad DNS
  response packets coming from their box...
 
 Argh, I can sort-of understand their way of handling it, but still, they
 should have fixed this by now, and their clearly broken DNS is simply a
 real reason to avoid those hotels at all.

lack of clear channel DNS has also made the introduction of DNSSEC take
at least five of its thirteen years-too-long.  ultimately we'll have to make
an HTTPS transport for DNS or tunnel all of our hotel queries back to our
home networks over VPN's.  anything left in the clear is a target, not just
for phishers and identity thieves, but for startup CEO's and their VC's.

 Can somebody please sponsor a trip to any of these hotels for either one
 or both of the Pauls, that is Mockapetris or Vixie, and let THEM call
 techsupport on this!? :) At least the "eh dude, I kinda like (invented
 DNS|coded BIND) and I really do think I sort of know what I am talking
 about" discussion would be worth an "extremely priceless" rating and a
 good laugh for the coming years for most of the Ops community :)

been there, done that, trust me it wasn't even mildly amusing for anybody.
what i'm wondering now is, if a 501(c)(3) patented something that was to be
used on the internet, and granted a free/unlimited use/distribute license on
sole condition that users/distributors actually implement it correctly, then
(a) would it hold up in court, and (b) would the 501(c)(3) CEO get lynched?


Re: Comcast blocking all Gmail!

2007-04-25 Thread Paul Vixie

[EMAIL PROTECTED] (William Allen Simpson) writes:

 Heads up on operational problem!

i block all gmail, too, and it causes me no operational problem at all.
-- 
Paul Vixie


Re: Slightly OT: datacenter cage providers in SF Bay Area?

2007-05-03 Thread Paul Vixie

 That should read:
 
 I have an internal datacenter. I need someone to come out and build
 out a cage for me.

[EMAIL PROTECTED] has been known to take on that kind of project.
-- 
Paul Vixie


Re: Interesting new dns failures

2007-05-22 Thread Paul Vixie

apropos of this...

 As to NS fastflux, I think you are right. But it may also be an issue of
 policy. Is there a reason today to allow any domain to change NSs
 constantly?

...i just now saw the following on comp.protocols.dns.bind (bind-users@):

+---
| From: Wiley Sanders [EMAIL PROTECTED]
| Newsgroups: comp.protocols.dns.bind
| Subject: Hooray, glue updates are instantaneous!
| Date: Tue, 22 May 2007 12:08:13 -0700
| X-Original-Message-ID: [EMAIL PROTECTED]
| X-Google-Sender-Auth: fbac9c128e6c36c7
| 
| Well, maybe I've been out of the loop for a while but I just changed the IP
| address of one of our authoritative name servers on Network Solutions' web
| site and it propagated to all the gtld servers within 5 minutes.
| 
| I don't know how this got fixed but for all readers out there who may have
| contributed to making this magic happen, my hat is off to you, and I will
| quaff a brew (or more) in your honor as I consider this a significant
| contribution to the march of civilization.
| 
| -W Sanders
|   http://wsanders.net
+---

in general, we ought to be willing to implement almost anything if free beer
is going to be offered by non-criminal beneficiaries.
-- 
Paul Vixie


Re: IPv6 Advertisements

2007-05-29 Thread Paul Vixie

  I understand the problems but I think there are clear cut cases where
  /48's make sense- a large scale anycast DNS provider would seem to be a
  good candidate for a /48 and I would hope it would get routed. Then
  again that might be the only sensible reason...
 
 f-root does this on the IPv6 side:  2001:500::/48
 
 Whether that's available everywhere on IPv6 networks, is as Bill 
 pointed-out, another question.

http://www.arin.net/reference/micro_allocations.html explains what's going
on with that /48.  http://www.root-servers.org/ shows some other /48's.  if
the RIR community wants critical infrastructure to use a /48, then f-root's
operator will comply.  if the RIR community changes its mind, then f-root's
operator will comply with that, too.
-- 
Paul Vixie


Re: IPv6 Advertisements

2007-05-29 Thread Paul Vixie

[EMAIL PROTECTED] (David Conrad) writes:

 I once suggested that due to the odd nature of the root name server  
 addresses in the DNS protocol (namely, that they must be hardwired  
 into every caching resolver out there and thus, are somewhat  
 difficult to change), the IETF/IAB should designate a bunch of /32s  
 as root server addresses as DNS protocol parameters.  ISPs could  
 then explicitly permit those /32s.
 
 However, the folks I mentioned this to (some root server operators)  
 felt this would be inappropriate.

as one of the people who told drc that this was a bad idea, i ought to
say that my reason is based on domain name universalism.  if root name
service addresses were protocol parameters (fixed everywhere) they'd
be intercepted (served locally) even more often by local ISP's and
governments for the purpose of overloading the namespace with political
or economic goals in mind.  this would be great for local ISP's and
governments with political or economic goals in mind, but bad for the
end users, bad for the community, bad for the internet, and bad for the
world.  right now, the people who intercept f-root traffic for fun or
profit could conceivably be in violation of law or treaty, could have
the pleasure of receiving letters from ISC's attorney, and so on.  if
root name service addresses were unowned protocol parameters used only
by convention (like port numbers or AS112 server addresses or RFC1918
addresses), then we'd see a far less universal namespace than we do now,
and the coca cola company would probably see far fewer hits at COKE.COM
than they see now.

whether drc's idea is bad depends on what one thinks the internet is.
-- 
Paul Vixie


Re: NANOG 40 agenda posted

2007-06-02 Thread Paul Vixie

  how much of the v4 prefix count is de-aggregation for te or by TWits?
  why won't they do this in v6?
 
 you mean like:
   
 AS4755
 AS4134
 AS18566
 AS4323
 AS9498
 AS6478
 AS11492
 AS22773
 AS8151
 AS19262
 AS6197
 
 I'm sure they'll claim they have valid business reasons.

i wish that the community had the means to do revenue sharing with such
folks.  carrying someone else's TE routes is a global cost for a point
benefit.
-- 
Paul Vixie


Re: NAT Multihoming (was:Re: NANOG 40 agenda posted)

2007-06-02 Thread Paul Vixie

 Cisco has a whitepaper entitled Enabling Enterprise Multihoming with Cisco 
 IOS NAT that addresses this.  See 
 http://www.cisco.com/en/US/tech/tk648/tk361/technologies_white_paper09186a0080091c8a.shtml
 as well as RFC2260.

see also http://sa.vix.com/~vixie/proxynet.pdf.

 There are indeed a few thorny issues with this approach; the largest
 issue is that all connectivity becomes DNS-dependent and raw IP addresses
 (from both the inside and outside) become virtually useless.  Running
 servers behind this scheme, while doable, is difficult.

and also much fun to watch, once you get it working.
-- 
Paul Vixie


Re: ULA BoF

2007-06-02 Thread Paul Vixie

 Although ISPs tend to let packets with RFC 1918 source addresses slip  
 out from time to time, ...

maybe some isp's, or even most isp's in some parts of the world, but not
isp's in general.  we see a continuous barrage of rfc1918-sourced queries
at f-root, along with a continuous blast of rfc1918-related updates in
AS112.  i don't think you want to use RFC 1918 as your poster child for
getting filtering right.

 ... they're actually pretty good at rejecting RFC 1918 routes: currently,
 route-views.oregon-ix.net doesn't have the 10.0.0.0, 172.16.0.0 or
 192.168.0.0 networks in its BGP table (there are two entries for
 192.0.2.0, though). So in IPv4 the magic is of sufficient quality.

route-views is run by competent people, and the networks who feed routing
tables to it are usually run by competent people.  filtering this kind of
trash is probably a normal part of operations for this class of networks.
i don't think you can use route-views as a poster child for filtering having
been gotten right.
-- 
Paul Vixie


Re: NANOG 40 agenda posted

2007-06-03 Thread Paul Vixie

 ipv6 load balancers exist, one's current load balancer is/may probably
 not be up to the task.

my favourite load balancer is OSPF ECMP, since there are no extra boxes,
just the routers and switches and hosts i'd have to have anyway.

quagga ospf6d works great, and currently lacks only a health check API.
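for the curious, the server-side quagga configuration for this kind of setup
is short.  the following is only a sketch from memory -- interface names,
router-id, and exact ospf6d syntax should be checked against your quagga
version:

```
! each server speaks OSPFv3 to the upstream router and advertises a
! connected route for the shared service address (assumed to live on
! the loopback); the router's ECMP then spreads flows across all the
! servers advertising that same route.
interface eth0
 ipv6 ospf6 cost 10
!
router ospf6
 router-id 192.0.2.11
 redistribute connected
 interface eth0 area 0.0.0.0
```

when a server dies, its OSPF adjacency drops and the router recomputes the
equal-cost set -- which is the whole load balancer.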
-- 
Paul Vixie


Re: NANOG 40 agenda posted

2007-06-04 Thread Paul Vixie

two replies here.  i ([EMAIL PROTECTED]) said:

  quagga ospf6d works great, and currently lacks only a health check API.

Donald Stahl [EMAIL PROTECTED] answered:

 Health checks are unfortunately the most important aspect of a LB for some
 people.

understood.

 Can you elaborate on where you use ECMP and specifics about your
 implementation that might interest people?

i could, but joe abley already did, and i wouldn't want to plagiarize him.
plz see http://www.isc.org/pubs/tn/index.pl?tn=isc-tn-2004-1.html.

---

Colm MacCarthaigh [EMAIL PROTECTED] answered:

 If you're load-balancing N nodes, and 1 node dies, the distribution hash
 is re-calced and TCP sessions to all N are terminated simultaneously. 

i could just say that since i'm serving mostly UDP i don't care about this,
but then i wouldn't have a chance to say that paying the complexity and bug
and training cost of an extra in-path powered box 24x365.24 doesn't weigh
well against the failure rate of the load balanced servers.  somebody could
drop an anvil on one of my servers twice a day (so, 730 times per year) and
i would still come out ahead, given that most TCP traffic comes from web
browsers and many users will click Reload before giving up.  then there's
CEF which i think keeps existing flows stable even through an OSPF recalc.
finally, there's the fact that we see less than one server failure per month
among the 100 or so servers we've deployed behind OSPF ECMP.

i know a lot of people who get paid well for building and selling and
supporting Extra Powered Boxes, and a lot of other people who will never
get fired for buying one... but that doesn't make it right.


Re: NANOG 40 agenda posted

2007-06-04 Thread Paul Vixie

 It depends on the length of those TCP sockets. If you were load-balancing
 the increasingly common video-over-http, it would be very unacceptable.

yes.  i believe i said that my preferred approach works really well with UDP
and marginally well with current WWW.  video over http is an example of an
application who wouldn't like its ECMP recalced many times per month (which
is the worst case; my actual experience is less-than-once-per-month.)

 ... The limits are anywhere from just 6 ECMP routes to 32 (though of course
 you could do staggered load-balancing using multiple CEF devices). I'm open
 to correction on the 32, but it's the highest I've yet come across.

this is the more interesting scaling limit.  i know of a company with ~150
recursive name servers all answering at the same IP address.  ECMP won't help
at that level.  i believe that even a hardware load balancer would have to be
multistage at that level, but once you've got multiple levels then the Powered
Boxes aren't Extra any more and i wouldn't prefer an ECMP design.

 ... This *is* possible with many load-balancers (plug: Including Apache's
 own load-balancing proxy), but with OSPF I'm forced to drop *all* sessions
 to the cluster 20 times (or yes I could do 10 nodes at a time, but you get
 the picture).

yes.

 I *like* OSPF ECMP load-balancing, it's *great*, and I use it in production,
 even load-balancing a tonne of https traffic, but in my opinion you are
 over-stating its abilities.  It is not close to the capabilities of a good
 intelligent load-balancer.  It is however extremely cost-effective and good
 enough for a lot of usage, as long as it's taken with some operational and
 engineering considerations.

or if, as in my case, the primary app is UDP based.  your points are well
taken in the case of large scale TCP though.


Re: NANOG 40 agenda posted

2007-06-04 Thread Paul Vixie

 As with all things, the trick is to weigh the risk of disaster against the
 probability of benefit and do whatever makes sense within your own
 particular constraints.

is nobody using a host based solution to this?  that is, are times when HA LB
is needed for TCP (like video over http) also seen as times when a single UNIX
host is too unreliable, even if it's fast enough, and an appliance is better?

even last year's model of BSD or Linux 1U with a couple of broadcom GigE ports
can run proxynetd at near wire speed.  so performance isn't the issue unless
we're talking 10GE, and there can't be many appliances operating at 10GE yet.

or is the problem simply that there isn't a port or pkg or rpm of proxynet,
and in spite of being 12 years old, nobody but me runs anything like it?  (so,
this boils down to, are folks only using proxies on outbound, still, in 2007?)
((and did you think squid was your only inbound proxying option?))


peter lothberg's mother slashdotted

2007-07-12 Thread Paul Vixie

http://slashdot.org/article.pl?sid=07/07/12/1236231

http://www.thelocal.se/7869/20070712/


Re: San Francisco Power Outage

2007-07-24 Thread Paul Vixie

[EMAIL PROTECTED] (Seth Mattinen) writes:

 I have a question: does anyone seriously accept oh, power trouble as a 
 reason your servers went offline? Where's the generators? UPS? Testing 
 said combination of UPS and generators? What if it was important? I 
 honestly find it hard to believe anyone runs a facility like that and 
 people actually *pay* for it.
 
 If you do accept this is a good reason for failure, why?

sometimes the problem is in the redundancy gear itself.  PAIX lost power
twice during its first five years of operation, and both times it was due
to faulty GFI in the UPS+redundancy gear.  which had passed testing during
construction and subsequently, but eventually some component just wore out.
-- 
Paul Vixie


Re: San Francisco Power Outage

2007-07-25 Thread Paul Vixie

[EMAIL PROTECTED] (Jonathan Lassoff) writes:

 Well, the fact still remains that operating a datacenter smack-dab in
 the center of some of the most inflated real estate in recent history
 is quite a costly endeavor.

yes.  (speaking for both 365 main, and 529 bryant.)

 I really wouldn't be all that surprised if 365 Main cut some corners
 here and there behind the scenes to save costs while saving face.

no expense was spared in the conversion of this tank turret factory into
a modern data center.  if there was a dark start option, MFN ordered it.
(but if it required maintenance, MFN's bankruptcy interrupted that; the
current owner, though, has never been bankrupt.)

 As it is, they don't have remotely enough power to fill that facility
 to capacity, and they've suffered some pretty nasty outages in the
 recent past. I'm strongly considering the possibility of completely
 moving out of there.

2MW/floor seemed like a lot at the time.  ~6kW/rack wasn't contemplated.

(is it time to build out the land adjacent to 200 paul, then?)
-- 
Paul Vixie


Re: San Francisco Power Outage

2007-07-25 Thread Paul Vixie

[EMAIL PROTECTED] (Jeff Aitken) writes:

 ..., we had a failure at another datacenter that uses Piller units, which
 operate on the same basic principle as the Hitec ones.  ...

i guess i never understood why anyone would install a piller that far from
the equator.  (it spins like a top, on a vertical axis, and the angular
momentum is really quite gigantic for its size -- it's heavy and it spins
really really fast -- and i remember asking a piller tech why his machine
wasn't tipped slightly southward to account for Coriolis, and he said i was
confused.  probably i am.)  but for north america, whenever i had a choice,
i chose hitec.  (which spins with an axis parallel to gravity.)
-- 
Paul Vixie


buncha updates to http://www.vix.com/personalcolo/ today

2007-08-03 Thread Paul Vixie

seems i've been ignoring it for two years.  sorry about that.  all the
mail i had on this topic has been processed.  check your entries.  i'm
in the mood for more updates if anybody's got anything.  note that CCCP
died and i replaced it with an entry for SFCCP, don't know if that's
correct.  i'd like all community colo projects in the world to be listed.


Re: large organization nameservers sending icmp packets to dns servers.

2007-08-08 Thread Paul Vixie

i normally agree with doug

[EMAIL PROTECTED] (Douglas Otis) writes:
 Ensuring an authoritative domain name server responds via UDP is a
 critical security requirement.  TCP will not create the same risk of a
 resolver being poisoned, but a TCP connection will consume a significant
 amount of a name server's resources.

...but this is flat out wrong, dead wrong, no way to candy coat it, wrong.
-- 
Paul Vixie


Re: large organization nameservers sending icmp packets to dns servers.

2007-08-08 Thread Paul Vixie

  ... but a TCP connection will consume a
  significant amount of a name server's resources.
 
  ...wrong.
 
 Wanting to understand this comment, ...

the resources given a nameserver to TCP connections are tightly controlled,
as described in RFC 1035 4.2.2.  so while TCP/53 can become unreliable during
high load, the problems will be felt by initiators not targets.

(this is why important AXFR targets have to be firewalled down to a very small
population of just one's own nameservers, and is why important zones have to
use unpublished primary master servers, and is why f-root's open AXFR of the
root zone is a diagnostic service not a production service.)


Re: large organization nameservers sending icmp packets to dns servers.

2007-08-09 Thread Paul Vixie

  the resources given a nameserver to TCP connections are tightly
  controlled, as described in RFC 1035 4.2.2.  so while TCP/53 can become
  unreliable during high load, the problems will be felt by initiators not
  targets.
 
 The relevant entry in Section 1035 4.2.2 recommends that the server not
 block other activities waiting for TCP data.  This is not exactly a
 requirement that TCP should fail before UDP.

it is semantically equivalent to such a requirement, in that UDP/53 is an
"other activity" performed by name servers.  it happens to be implemented
this way in all versions of BIND starting in 4.8 or so (when named-xfer was
made a separate executable), all versions of Windows DNS, and all current
name server implementations i am aware of (including powerdns, nominum ANS,
and NSD).  so while not a requirement in the strict sense, it's effectively
one, and i think we ought to design our systems with this constraint as a
given.

 The concern leading to a suggestion that TCP always fail was a bit
 different.  A growing practice treats DNS as a type of web server when used
 to publish rather bulky script-like resource records.  Due to typical sizes,
 it is rather common to find these records depend upon TCP fallback.  This
 problem occurred with paypal, for example.  TCP fallback is especially
 problematic when these records are given wildcards.  Such fallback increases
 the amplification associated with an exploit related to the use of the
 script within the record.
 
 Of course there are better ways to solve this problem, but few are as
 certain.

i think you're advising folks to monitor their authority servers to find out
how many truncated responses are going out and how many TCP sessions result
from these truncations and how many of these TCP sessions are killed by the
RFC1035 4.2.2 connection management logic, and if the numbers seem high, then
they ought to change their applications and DNS content so that truncations
no longer result.
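as a crude illustration of the first of those measurements, here is a sketch
of a shell function that tallies truncated responses out of tcpdump-style
text output.  the log format it assumes (responses identifiable by ".53 >",
truncation flagged with "|") is illustrative, not authoritative -- verify
the exact markers against your tcpdump version before trusting the counts:

```shell
# count_tc: read tcpdump-style text lines on stdin and report how many
# DNS responses had the TC (truncation) bit set.  the assumed markers
# (".53 >" for a packet sourced from port 53, "|" for truncation) are
# assumptions about tcpdump's classic text output.
count_tc() {
    total=0
    trunc=0
    while IFS= read -r line; do
        case $line in
            *".53 >"*)                  # a packet *from* port 53: a response
                total=$((total + 1))
                case $line in
                    *"|"*) trunc=$((trunc + 1)) ;;  # truncation marker
                esac
                ;;
        esac
    done
    echo "responses: $total truncated: $trunc"
}
```

if truncations (and thus TCP retries downstream) run high, that's the cue to
shrink the records or get EDNS deployed.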

or perhaps you're asking that EDNS be more widely implemented, that it not
be blocked by firewalls or perverted by hotelroom DNS middleboxes, and that
firewalls start allowing UDP fragments (which don't have port numbers and
therefore won't be allowed by UDP/53 rules).

i would agree with either recommendation.
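a hedged ipfw sketch of the fragment half of that second recommendation
(the address and rule numbers are made up):

```
# permit ordinary DNS queries and answers over UDP to the name server
add 1000 allow udp from any to 192.0.2.1 53
# permit IP fragments too: fragment tails carry no port numbers, so a
# udp/53 rule alone silently drops the tails of large EDNS answers
add 1010 allow ip from any to 192.0.2.1 frag
```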

but i won't agree that TCP creates stability or load problems for servers.


Re: large organization nameservers sending icmp packets to dns servers.

2007-08-09 Thread Paul Vixie

[EMAIL PROTECTED] writes:

  ... advising folks to monitor their authority servers to find out how
  many truncated responses are going out and how many TCP sessions result
  from these truncations and how many of these TCP sessions are killed by
  the RFC1035 4.2.2 connection management logic, and if the numbers seem
  high, then they ought to change their applications and DNS content so
  that truncations no longer result.
 
 How does the (eventual) deployment of DNSSEC change these numbers?

DNSSEC cannot be signalled except in EDNS.

 And who's likely to feel *that* pain first?

the DNSSEC design seems to distribute pain very fairly.
-- 
Paul Vixie


Re: Industry best practices (was Re: large organization nameservers

2007-08-09 Thread Paul Vixie

[EMAIL PROTECTED] (Doug Barton) writes:

 ... I took this a step further and worked (together with others) on a
 patch to restrict the size of DNS answers to < 512 by returning a random
 selection of any RR set larger than that.

note that this sounds like a DNS protocol violation, and usually is.  every
time someone sent me a BIND patch adding this kind of deliberate instability
(see RFC 1794 for an example) i said no.
-- 
Paul Vixie


Re: large organization nameservers sending icmp packets to dns servers.

2007-08-10 Thread Paul Vixie

 Your comments have helped.

groovy.

 When TCP is designed to readily fail, reliance upon TCP seems questionable.

i caution against being overly cautious about DNS TCP if you're using RFC 1035
section 4.2.2 as your basis for special caution.  DNS TCP only competes
directly against other DNS TCP.  there are only two situations where a DNS TCP
state blob is present in a DNS target (server) long enough to be in any
danger: when doing work upstream to fulfill the query, and in zone transfers.

when answering DNS TCP queries in an authority server, there is by definition
no upstream work to be done, other than possible backend database lookups
which are beyond the scope of this discussion.  these responses will usually
be generated synchronous to the receipt of the last octet of a query, and the
response will be put into the TCP window (if it's open, which it usually is),
and the DNS target (server) will then wait for the initiator (client) to
close the connection or send another query.  (usually it's a close.)

when answering DNS TCP zone transfer requests in an authority server, there is
a much larger window of doom, during which spontaneous network congestion can
close the outgoing TCP window and cause a DNS target (server) to think that
a TCP session is idle for the purpose of RFC 1035 section 4.2.2 TCP resource
management.  while incremental zone transfer is slightly less prone to this
kind of doom than full zone transfer, since the sessions are shorter, it can
take some time for the authority server to compute incremental zone diffs,
during which the TCP session may appear idle through no fault of the DNS
initiator (client) who is avidly waiting for its response.

lastly, when answering DNS TCP queries in a recursive caching nameserver, it
can take a while (one or more round trips to one or more authority servers)
before there is enough local state to satisfy the query, during which time the
TCP resources held by that query might be reclaimed under RFC 1035 section
4.2.2's rules.

the reason why not to be overly cautious about TCP is that in the case where
you're an authority server answering a normal query, the time window during
which network congestion could close the outbound TCP window long enough for
RFC 1035 section 4.2.2's rules to come into effect, is vanishingly short.  so
while it's incredibly unwise to depend on zone transfer working from a small
number of targets to a large number of initiators, and it is in fact wise to
firewall or ACL your stealth master server so that only your designated
secondary servers can reach it, none of this comes into play for normal
queries to authority servers -- only zone transfers to authority servers.

the unmanageable risk is when a recursive caching nameserver receives a 
query by TCP and forwards/iterates upstream.  if this happens too often, then
the RFC 1035 section 4.2.2 rules will really hurt.  and thus, it's wise, just
as you say, to try to make sure other people don't have to use TCP to fetch
data about your zone.  the counterintuitive thing is that you won't be able
to measure the problems at your authority server since that's not where they
will manifest.  they'll manifest at caching recursive servers downstream.

 As DNSSEC in introduced, TCP could be relied upon in the growing number of
 instances where UDP is improperly handled.

this would be true if TCP fallback was used when EDNS failed.  it's not.
if EDNS fails, then EDNS will not be used, either via UDP or TCP.  so if
improper handling of UDP prevents EDNS from working, then EDNS and anything
that depends on EDNS, including DNSSEC, will not be used.

 UDP handling may have been easier had EDNS been limited to 1280 bytes.

if you mean, had EDNS been limited to nonfragmentation cases, then i think
you might mean 576 bytes or even 296 bytes.  1280 is an IPv6 (new era) limit.

 On the other hand, potentially larger messages may offer the necessary
 motivation for adding ACLs on recursive DNS, and deploying BCP 38.

i surely do hope so.  we need those ACLs and we need that deployment, and if
message size and TCP fallback is a motivator, then let's turn UP the volume.


wierd dns thread (Re: Discovering policy)

2007-08-16 Thread Paul Vixie

i wasn't reading this thread at all since i thought it was about discovering
policy, like the subject says.  horror of horrors, it's about dns internals,
which means the thread is not only mislabelled, but also off-topic.  i think
it could go to [EMAIL PROTECTED], [EMAIL PROTECTED], or perhaps even
[EMAIL PROTECTED]  but it's a long way from being something
related to routers, and i think it should stop here.

[EMAIL PROTECTED] (Douglas Otis) writes:

 On Aug 15, 2007, at 5:34 PM, Mark Andrews wrote:
 
  If you have applications which don't honour SRV's "." processing  
  rules report the bug.
 
 Even Paul Vixie, the author, will likely agree the RFC has the bug.

i'm only one author, but in any case i ain't sayin', since this is nanog,
and my only purpose in joining this thread is to say enough already!  if
you want to know what i think about SRV's . rules, ask me in some forum
where it's on-topic.
-- 
Paul Vixie


Re: SpamHaus Drop List

2007-08-23 Thread Paul Vixie

 Does anyone use spamhaus drop list ?
 http://www.spamhaus.org/drop/index.lasso

i do.

 I'm glad to listen opinions or experience.

no false positives yet.  mostly seems to drop inbound tcp/53.


Re: SpamHaus Drop List

2007-08-23 Thread Paul Vixie

[EMAIL PROTECTED] (Sean Donelan) writes:

  I'm glad to listen opinions or experience.
 
  no false positives yet.  mostly seems to drop inbound tcp/53.
 
 Waving a dead chicken over your computer will have no false positives too.

whoa -- that wasn't called for.

 Is it a placebo or does it actually have an effect?

the inbound tcp/53 i see blocked by SH-DROP isn't the result of truncation
or any other response of mine that could reasonably trigger TCP retry.  so
on the basis that it's no longer reaching me and can't have been for my
good, SH-DROP has at least that good effect.  i also see a lot of nameserver
transaction timeouts in my own logs, and it's all (*ALL*) for garbage domains
such as must be used by phishers or spammers.  so i'm getting failures in my
SMTP logs (because i've got postfix wired up to high paranoia and if it
can't resolve the HELO name or if the A/PTR doesn't match, i bounce stuff.)
but even if i weren't bouncing more stuff, or bouncing it earlier (since most
of what i'm bouncing is also listed on various blackhole lists), the fact of
me not making DNS queries about these malicious domain names means i'm denying
criminals a potentially valuable (if they know how to use it) source of
telemetry about their spam runs.  so, no placebos here.
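the postfix settings that implement that kind of HELO/PTR paranoia look
roughly like the following main.cf sketch (restriction names as of the
postfix 2.3 era; not the exact configuration described above):

```
# require HELO, reject clients whose HELO name doesn't resolve, and
# reject clients whose IP lacks a matching PTR/A pair -- aggressive
# settings, as discussed above; expect some false positives
smtpd_helo_required = yes
smtpd_helo_restrictions = reject_unknown_helo_hostname
smtpd_client_restrictions = reject_unknown_client_hostname
```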

 Although very little good or bad will come from those networks, just like
 the various BOGON lists, the Spamhause DROP list does require
 maintenance.  If you don't have a process in place to maintain it even
 after you are gone, proceed with caution.

why would i install something that required manual maintenance or depended
on me still being present?  other than putting system level logic in my home
directory, i detect no sysadmin sin here.  take a look, tell me your thoughts.

here is the root crontab entry i'm using on my freebsd firewall:

14 * * * * /home/vixie/spamhaus-drop/cronrun.sh

here is the full text of that shell script:

#!/bin/sh -x
cd ~vixie/spamhaus-drop
rm -f drop.txt.new
fetch -o drop.txt.new http://www.spamhaus.org/drop/drop.lasso && {
    [ -r drop.txt ] || touch drop.txt
    cmp -s drop.txt drop.txt.new || {
        ./ipfw-merge.pl 29 < drop.txt.new | /sbin/ipfw /dev/stdin
        mv drop.txt.new drop.txt
    }
}
exit 0

the ipfw-merge.pl perl script is just:

#!/usr/bin/perl
# august 17, 2007
use strict;
use warnings;
my ($tblno) = @ARGV;
die "usage: $0 tblno" unless defined $tblno && $tblno;
# load in the existing table
my %old = ();
open(ipfw, "ipfw table $tblno list |") || die "ipfw: $!";
while (<ipfw>) {
    chop;
    my @ary = split;
    $_ = $ary[$[];
    next unless length;
    $old{$_} = '';
}
close(ipfw);
# use mark and sweep to compute differences
my $now = time;
while (<STDIN>) {
    chop;
    s/\;.*//o; s/\s+//go;
    next unless length;
    if (defined $old{$_}) {
        delete $old{$_};
    } else {
        print "table $tblno add $_ $now\n";
    }
}
my ($key, $val);
while (($key, $val) = each %old) {
    print "table $tblno delete $key\n";
}
exit 0;

(note, i've squished out vertical whitespace to make cut/paste easier, at
the expense of readability.  sorry i still write in perl3, old habits die
hard.)

here is the relevant component of my ipfw rule file.

add deny log all from table(29) to any
add deny log all from any to table(29)

 If you do have a process in place, not only for routing but also for
 your new customer order process, it is a useful source of information.

agreed.
-- 
Paul Vixie


Re: SpamHaus Drop List

2007-08-23 Thread Paul Vixie

[EMAIL PROTECTED] (Derek) writes:

  Does anyone use spamhaus drop list ?
  http://www.spamhaus.org/drop/index.lasso

 My experience is not specific to the DROP list but regarding the RBL/Zen 
 service I have found the 'moderators' of the lists can abuse their power 
 and unable to provide any proof to their entries.

having once upon a time maintained such a list, and having been accused by
a lot of people, sometimes in court papers, of abusing my powers, i agree
that proof ought to be available.  spamhaus does a fine job at this, from my
experience thus far.  the thing i like about SH-DROP is that it includes all
of the russian business network, and it's very short, and changes very slowly.

 I think it works well, I don't operate a large scale mail service and
 have not had too many complaints.  But when your on the wrong side of the
 fence it is very annoying, if one of the moderators has a beef with your
 provider - look out!

agree.
-- 
Paul Vixie


Re: SpamHaus Drop List

2007-08-24 Thread Paul Vixie

[EMAIL PROTECTED] (Sean Donelan) writes:

 Unfortunately, on today's Internet if you randomly picked a couple of 
 hundred network blocks of the same size you would see the same thing.

no.  really.  just not.  you'd have to search nonrandomly among thousands
or tens of thousands of netblocks to equal the russian business network.

 Lame delegations and brokeness is well distributed across the Internet.

that's not the kind of maliciousness i'm interested in avoiding.

 Unfortunately again, if you use your favorite search engine you will find
 several instances that read something like we also have the DROP list in
 an ACL on our router, but we don't monitor it.  I  have found two year 
 old copies of the DROP list in networks.

that's an argument for not statically importing policy.

 Network blocks are regularly added *AND REMOVED* from the Spamhaus DROP 
 list.

and that's another.

nobody here is claiming that external policy should be fired and forgot.
in fact, cymru's BOGON list comes with lots of disclaimers about how much
pain your successors will be in if you import these things and forget them.

 It can be useful if used correctly, it can be harmful if used incorrectly.

like anything else.  remember, all power tools can kill.  that's an argument
for using them correctly, more than it's an argument for living without them.
-- 
Paul Vixie


i think the cogent depeering thing is a myth of some kind

2007-09-28 Thread Paul Vixie

at http://www.e-gerbil.net/cogent-t1r there is a plain text document with
the following HTTP headers:

Date: Fri, 28 Sep 2007 21:56:34 GMT
Server: Apache/2.2.3 (Unix) PHP/5.2.3
Last-Modified: Fri, 28 Sep 2007 19:15:53 GMT
ETag: 92c1e1-a85-43b36ea5bcc40
Content-Length: 2693
Content-Type: text/plain

the plain text title is:

Cogent shows hypocrisy with de-peering policy

the plain text authorship is ascribed to:

Dan Golding

the first plain text assertion that caught my eye was:

Cogent, has, in fact, de-peered other Internet networks in the last 24
hours, including content-delivery network Limelight Networks and
wholesale transit provider nLayer Communications, along with several
European networks.

since i appear to be reaching the aforementioned web server by a path that
includes cogent-to-nlayer, i think this part of the plain text is inaccurate.

traceroute to www.e-gerbil.net (69.31.1.2), 64 hops max, 52 byte packets
 1  rc-main.f1.sql1.isc.org (204.152.187.254)  0.336 ms
 2  149.20.48.65 (149.20.48.65)  0.509 ms
 3  gig-0-1-0-606.r2.sfo2.isc.org (149.20.65.3)  1.163 ms
 4  g0-8.core02.sfo01.atlas.cogentco.com (154.54.11.177)  2.757 ms
 5  t4-2.mpd01.sfo01.atlas.cogentco.com (154.54.2.89)  2.958 ms
 6  g3-0-0.core02.sfo01.atlas.cogentco.com (154.54.3.117)  2.525 ms
 7  p6-0.core01.sjc04.atlas.cogentco.com (66.28.4.234)  4.183 ms
 8  g3-3.ar1.pao1.us.nlayer.net (69.22.153.21)  2.637 ms
 9  ge-2-1-1.cr1.sfo1.us.nlayer.net (69.22.143.161)  3.806 ms
10  so-0-2-0.cr1.ord1.us.nlayer.net (69.22.142.77)  69.022 ms
11  60.po1.ar1.ord1.us.nlayer.net (69.31.111.130)  69.491 ms
12  0.tge4-4.ar1.iad1.us.nlayer.net (69.22.142.113)  81.580 ms
...

the second plain text assertion which caught my eye was:

Why is this happening? There are a few possibilities. First, Cogent
may simply want revenue from the networks it has de-peered, in the
form of Internet transit. Of course, few de-peered networks are
willing to fork over cash to those that have rejected them. Another
possibility is that Cogent is seeing threats from other peers
regarding its heavy outbound ratios, and it seeks to disconnect
Limelight and other content-heavy peers to help balance those ratios
out.

this makes no sense, since dan golding would know that cogent's other peers
would not be seeing traffic via cogent from the allegedly de-peered peers.

so, i think the document is a hoax of some kind.  (i saw it mentioned here.)


Re: i think the cogent depeering thing is a myth of some kind

2007-09-28 Thread Paul Vixie

Randy Epstein [EMAIL PROTECTED] wrote:

 Clearly you can see the article was published by T1R in their Daily T1R
 report: http://www.t1r.com/
 
 (listed under The Daily T1R Headlines)
 
 If you subscribe to the Daily T1R, you can find Dan's report issued today.

Sorry, T1R.com requires Flash 8 or above: Get Flash

 I think Dan overstepped here.  Richard has made comments of a de-peering
 notice received by nLayer, not an actual de-peering occurrence.

ok.

 AFAIK, the only two networks in recent weeks that have been de-peered are WV
 Fiber and LimeLight.  WV was de-peered a couple of weeks ago, on September
 17th, and LimeLight was de-peered yesterday.

it's still really hard to believe that dan golding, of all people, could have
written text that makes it seem as though traffic from one set of cogent's
peers would be seen as input from cogent by another set of cogent's peers.
i'll take your word for it, since you've got Flash 8 or above, and i haven't.

are any of the de-peering letters online someplace?


Re: i think the cogent depeering thing is a myth of some kind

2007-09-29 Thread Paul Vixie

 This is a proven maneuver and Cogent is not the first to do it.

i guess that without knowing who else these de-peered networks are customers
of, it's hard for an outsider to guess which ratios into cogent's network by
other peers will improve as a result of de-peering these networks.  had you
been writing for a technical audience, i'm sure you would have alluded to
this.  now that i know the article was a leak rather than a publication, it
all becomes clear.
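the ratio argument can be made concrete as an outbound:inbound byte ratio per peer, with content-heavy peers blowing past whatever threshold a peering policy names.  a sketch with invented numbers (the peer names, byte counts, and the 2:1 threshold are illustrative assumptions, not anything from cogent's actual policy):

```python
# sketch: flag peers whose outbound:inbound traffic ratio exceeds a
# policy threshold.  all names and numbers are invented for illustration.
THRESHOLD = 2.0  # assumed policy limit, e.g. 2:1 out:in

peers = {
    # peer: (bytes_out, bytes_in) -- invented figures
    "content-heavy-peer": (9_000, 1_000),
    "balanced-peer": (1_100, 1_000),
}

def out_of_ratio(traffic, threshold=THRESHOLD):
    """Return peer names whose out:in ratio exceeds the threshold."""
    return sorted(name for name, (out, inbound) in traffic.items()
                  if out / inbound > threshold)

print(out_of_ratio(peers))  # ['content-heavy-peer']
```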

 ... That full explanation was missing from the writeup that is posted (and
 I'll allow it to stay up for now), because that report was aimed at folks
 who may not be fully conversant in peering - financial professionals. BTW,
 thanks for dropping me an email to ask me about it, before you posted to NANOG.

the text i saw was so uncharacteristically non-dan-golding, that i really did
think it was a hoax.  you're right that i should have asked you about it; in
my defense i was leaving for the weekend and didn't have as much time as this
should've gotten.

 As far as reachability from one provider to another - I've heard that one
 can make routing changes quickly and easily on this crazy Internet thing.
 Perhaps in the 24 hours since I wrote that, a few changes occurred?

i'm a cogent customer, and my path to nlayer at the moment i read your note
still went through cogent.  what was i to think?  anyway, problem solved.


Re: WG Action: Conclusion of IP Version 6 (ipv6)

2007-10-02 Thread Paul Vixie

 On Oct 1, 2007, at 9:15 AM, John Curran wrote:
  What happens if folks can somehow obtain IPv4 address blocks
  but the cumulative route load from all of these non-hierarchical
  blocks prevents ISP's from routing them?

[EMAIL PROTECTED] (David Conrad) writes:
 Presumably, the folks with the non-hierarchical address space that  
 might get filtered would have potentially limited connectivity (as  
 opposed to no connectivity if they didn't have IPv4 addresses).

i had a totally different picture in my head, which was of a rolling
outage of routers unable to cope with full routing in the face of
this kind of unaggregated/nonhierarchical table, followed by a surge
of bankruptcies and mergers and buyouts as those without access to
sufficient new-router capital gave way to those with such access,
followed by another surge of bankruptcies and mergers as those who
thought they had access to such capital couldn't make their payments.

call me a glass-half-full kind of guy, but the picture in my head in
response to john's question is of a whole lot of network churn as the
community jointly answers the question "who can still play in this
world?" rather than "how useful will those new routes really be?"
internet economics don't admit the possibility of not-full-routes, and
so david's view that nonhierarchical routes won't be as useful as
hierarchical makes me wonder, what isp anywhere will stay in business
while not routing everything if other isp's can route everything?
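the rolling-outage picture is at bottom a table-capacity problem: routers that could hold full routes yesterday can't tomorrow.  a back-of-envelope sketch (the per-route byte cost and the table sizes are round illustrative numbers, not measurements of any real router):

```python
# back-of-envelope: memory cost of a full routing table as it grows.
# bytes-per-route and the table sizes are illustrative round numbers,
# not measurements of any real platform.
BYTES_PER_ROUTE = 256  # assumed aggregate per-prefix cost across RIB/FIB

def table_megabytes(prefixes, bytes_per_route=BYTES_PER_ROUTE):
    """Rough memory footprint in MiB for a table of `prefixes` routes."""
    return prefixes * bytes_per_route / 2**20

for prefixes in (250_000, 1_000_000, 4_000_000):
    print(f"{prefixes:>9,} prefixes -> {table_megabytes(prefixes):8.1f} MiB")
```

the absolute numbers matter less than the shape: a table that grows nonhierarchically forces a forklift upgrade on everyone at roughly the same time, which is where the capital-access argument above comes from.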

we're all in this stew pot together.
-- 
Paul Vixie

