Re: Whois vs GDPR, latest news

2018-05-23 Thread Roger Marquis

Dan Hollis wrote:

How about the ones with broken contact data - deliberately or not?
A whois blacklist sounds good to me. DNS WBL?


Many sites are already doing this locally.  It's just a matter of time
before Spamhaus or an up-and-coming entity has an RBL for it.  The data
is perhaps not precise enough for a blacklist but obfuscated whois
records are certainly useful in calculating the reputation of
ingress/egress SMTP, HTTP, and other services.  This is not a new idea; it's
similar to the (unmaintained?) whois.abuse.net contact lookup service,
razor/pyzor, and other useful SIEM and Spamassassin inputs.
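
As a rough sketch of the idea (the privacy markers, weights, and scoring
below are invented for illustration, not any RBL's actual rules), something
like this, shelling out to the system whois client, could feed a local
reputation score:

  #!/usr/bin/env python3
  # Sketch: score whois opacity as one input to a local reputation list.
  import subprocess

  PRIVACY_MARKERS = ("redacted for privacy", "whoisguard", "privacy protect",
                     "data protected", "gdpr masked")

  def whois_opacity_score(domain):
      """Return a small penalty score; higher means less usable contact data."""
      try:
          raw = subprocess.run(["whois", domain], capture_output=True,
                               text=True, timeout=15).stdout.lower()
      except (OSError, subprocess.TimeoutExpired):
          return 1                    # lookup failure is itself mildly suspicious
      score = 0
      if any(marker in raw for marker in PRIVACY_MARKERS):
          score += 2                  # contact data deliberately obscured
      if "abuse" not in raw:
          score += 2                  # no abuse contact listed at all
      if "@" not in raw:
          score += 1                  # no e-mail addresses anywhere in the record
      return score

  if __name__ == "__main__":
      for d in ("example.com", "example.net"):
          print(d, whois_opacity_score(d))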

Roger Marquis


Re: Rate of growth on IPv6 not fast enough?

2010-04-21 Thread Roger Marquis

William Herrin wrote:

Not to take issue with either statement in particular, but I think there
needs to be some consideration of what fail means.


Fail means that an inexperienced admin drops a router in place of the
firewall to work around a priority problem while the senior engineer
is on vacation. With NAT protecting unroutable addresses, that failure
mode fails closed.


In addition to fail-closed, NAT also means:

  * search engines and connectivity providers cannot (easily)
  differentiate and/or monitor your internal hosts, and

  * multiple routes do not have to be announced or otherwise accommodated
  by internal re-addressing.

Roger Marquis



Re: Rate of growth on IPv6 not fast enough?

2010-04-21 Thread Roger Marquis

Jack Bates wrote:

If you mean, do we still need protocols similar to uPNP the answer is
yes. Of course, uPNP is designed with a SPI in mind. However, we
simplify a lot of problems when we remove address mangling from the
equation.


Let's not forget why UPNP is what it is and why it should go away.  UPNP
was Microsoft's answer to Sun's JINI.  It was never intended to provide
security.  All MS wanted to do with UPNP was derail a competing vendor's
(vastly superior) technology.  Not particularly different than MS' recent
efforts around OOXML.

Roger Marquis



Re: Rate of growth on IPv6 not fast enough?

2010-04-20 Thread Roger Marquis

Owen DeLong wrote:

The hardware cost of supporting LSN is trivial. The management/maintenance
costs and the customer experience - dissatisfaction - support calls -
employee costs will not be so trivial.


Interesting opinion but not backed up by experience.

By contrast John Levine wrote:

My small telco-owned ISP NATs all of its DSL users, but you can get your
own IP on request. They have about 5000 users and I think they said I was
the eighth to ask for a private IP. I have to say that it took several
months to realize I was behind a NAT


I'd bet good money John's experience is a better predictor of what will
begin occurring when the supply of IPv4 addresses runs low.  Then, as now,
few consumers are likely to notice or care.

Interesting how the artificial roadblocks to NAT66 are both delaying the
transition to IPv6 and increasing the demand for NAT in both protocols.
Nicely illustrates the risk when customer demand (for NAT) is ignored.

That said the underlying issue is still about choice.  We (i.e., the
IETF) should be giving consumers the _option_ of NAT in IPv6 so they
aren't required to use it in IPv4.

IMO,
Roger Marquis



Re: Rate of growth on IPv6 not fast enough?

2010-04-20 Thread Roger Marquis

Simon Perreault wrote:

http://tools.ietf.org/html/draft-ford-shared-addressing-issues


The Ford Draft is quite liberal in its statements regarding issues with
NAT.  Unfortunately, in the real world, those examples are somewhat fewer
and farther between than the draft RFC would lead you to believe.

Considering how many end-users sit behind NAT firewalls and non-firewall
gateways at home, at work, and at public access points all day without
issue, this is a particularly good example of the IETF's ongoing issues
with design-by-committee, particularly committees short on security
engineering and long on special interest.  While LECs and ISPs may or may
not feel some pain from LSN, they're equally sure to feel better after
crying all the way to the bank.

IMO,
Roger Marquis




Re: Rate of growth on IPv6 not fast enough?

2010-04-20 Thread Roger Marquis

Jack Bates wrote:

.01%? heh. NAT can break xbox, ps3, certain pc games, screw with various
programs that dislike multiple connections from a single IP, and the
crap load of vpn clients that appear on the network and do not support
nat traversal (either doesn't support it, or big corp A refuses to
enable it).


If this were really an issue I'd expect my nieces and nephews, all of whom
are big game players, to have mentioned it.  They haven't though, despite
being behind cheap NATing CPE from D-Link and Netgear.

Address conservation aside, the main selling point of NAT is its filtering
of inbound session requests.  NAT _always_ fails-closed by forcing inbound
connections to pass validation by stateful inspection.  Without this you'd
have to depend on less reliable (fail-open) mechanisms and streams could be
initiated from the Internet at large.  In theory you could enforce
fail-closed reliably without NAT, but the rules would have to be more
complex and complexity is the enemy of security.  Worse, if non-NATed CPE
didn't do adequate session validation, inspection, and tracking, as low-end
gear might be expected to cut corners on, end-user networks would be more
exposed to nefarious outside-initiated streams.
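
To make the fail-closed point concrete, here is a toy connection-tracking
sketch (purely illustrative, not any vendor's implementation): inbound
packets are forwarded only when they match state created by earlier
outbound traffic, and everything unsolicited is dropped by default.

  import time

  class ToyNatTable:
      """Toy state table: unsolicited inbound traffic has no entry, so it is dropped."""
      def __init__(self, idle_timeout=300):
          self.idle_timeout = idle_timeout
          self.state = {}   # (ext_ip, ext_port, nat_port) -> (int_ip, int_port, last_seen)

      def outbound(self, int_ip, int_port, ext_ip, ext_port, nat_port):
          # Outbound traffic creates the translation and its tracking state.
          self.state[(ext_ip, ext_port, nat_port)] = (int_ip, int_port, time.time())

      def inbound(self, ext_ip, ext_port, nat_port):
          # Forward only if fresh matching state exists; otherwise drop (fail closed).
          entry = self.state.get((ext_ip, ext_port, nat_port))
          if entry is None:
              return None                      # no one inside asked for this
          int_ip, int_port, last_seen = entry
          if time.time() - last_seen > self.idle_timeout:
              del self.state[(ext_ip, ext_port, nat_port)]
              return None                      # stale state, drop
          return (int_ip, int_port)

  nat = ToyNatTable()
  nat.outbound("10.0.0.5", 51000, "198.51.100.7", 443, 62000)
  print(nat.inbound("198.51.100.7", 443, 62000))   # matches state -> forwarded
  print(nat.inbound("203.0.113.9", 12345, 62000))  # unsolicited -> None (dropped)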

Arguments against NAT uniformly fail to give credit to these security
considerations, which is a large reason the market has not taken IPv6
seriously to-date.  Even in big business, CISOs are able to shoot-down
netops recommendations for 1:1 address mapping with ease (not that vocal
NAT opponents get jobs where internal security is a concern).

IMO,
Roger Marquis



Re: Rate of growth on IPv6 not fast enough?

2010-04-20 Thread Roger Marquis

Jack Bates wrote:

Disable the uPNP (some routers lack it, and yes, it breaks and microsoft
will tell you to get uPNP capable NAT routers or get a new ISP).


Thing is, neither of these cheap CPE has UPNP enabled, which leads me to
question whether claims regarding large numbers of serverless multi-user
game users are accurate.

I disable UPNP as standard practice since it cannot be enabled securely,
at least not on cheap CPE.


Your argument has nothing to do with this part of the thread and
discussion of why implementing NAT at a larger scale is bad. I guess it
might have something to do in other tangents of supporting NAT66.


I should have been clearer, apologies.  WRT LSN, there is no reason
individual users couldn't upgrade to a static IP for their insecurely
designed multi-user games, and no reason to suspect John Levine's ISP is
not representative, with 0.16% of its users (8 of ~5,000) requesting upgrades.

Roger Marquis



Re: Spamhaus...

2010-02-20 Thread Roger Marquis

William Herrin wrote:

Indeed, and the ones who are more than minimally competent have
considered the protocol as a whole and come to understand that at a
technical level the 'reject, don't bounce' theory has more holes in it
than you can shake a stick at.


No way.  You are kidding, aren't you Bill?  Honestly, I have not
encountered a sysadmin who allows their company's mail exchangers to send
backscatter in close to 8 years (outside of non-technical SMBs who also
have numerous other systems and network issues, and one university
hamstrung by vendor lock-in).

The real issue here IMO is the RFC process.  Given how badly the
implementation of IPv6 is coming along it really should not surprise
anyone that SMTP RFCs are no less dated and no less out of sync with real
world conditions.  Deja vu X.400.

Roger Marquis



Re: D/DoS mitigation hardware/software needed.

2010-01-10 Thread Roger Marquis

Then you need to get rid of that '90's antique web server and get
something modern.  When you say interrupt-bound hardware, all you
are doing is showing that you're not familiar with modern servers
and quality operating systems that are designed to mitigate things
like DDoS attacks.


Modern servers?   IP is processed in the kernel on web servers,
regardless of OS.  Have you configured a kernel lately?  Noticed there
are ~3,000 lines in the Linux config file alone?  _Lots_ of device
drivers in there, which are interrupt driven and have to be timeshared.
No servers I know do realtime processing (RT kernels don't) or process IP
in ASICs.

What configurations of Linux / BSD / Solaris / etc does web / email / ntp
/ sip / iptables / ipfw / ... and doesn't have issues with kernel
locking?  Test it on your own servers by mounting a damaged DVD on the
root directory, and dd'ing it to /dev/null.  Notice how the ATA/SATA/SCSI
driver impacts the latency of everything on the system.  How would you
replicate that on a firmware- and ASIC-driven appliance?
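
If you want to see the effect without a damaged disc, a rough sketch along
these lines (the device path and thresholds are placeholders, not a
prescribed test) will show timer latency on the same box climbing while one
thread hammers a block device:

  #!/usr/bin/env python3
  # Sketch: measure how heavy disk I/O in one thread inflates timer latency
  # in another, illustrating the interrupt/kernel contention argument above.
  import threading, time, statistics

  DEVICE = "/dev/sr0"   # placeholder: any slow or damaged block device

  def hammer_disk(stop):
      # Sequentially read the device, discarding data (rough dd-to-/dev/null analogue).
      try:
          with open(DEVICE, "rb", buffering=0) as f:
              while not stop.is_set():
                  if not f.read(1 << 20):
                      f.seek(0)
      except OSError:
          pass  # device missing/unreadable; the latency probe still runs

  def probe_latency(seconds=10, interval=0.01):
      # Sleep for a fixed interval and record how late we actually wake up.
      lates = []
      end = time.monotonic() + seconds
      while time.monotonic() < end:
          t0 = time.monotonic()
          time.sleep(interval)
          lates.append(time.monotonic() - t0 - interval)
      return lates

  if __name__ == "__main__":
      stop = threading.Event()
      threading.Thread(target=hammer_disk, args=(stop,), daemon=True).start()
      lates = probe_latency()
      stop.set()
      print("median extra latency: %.3f ms" % (statistics.median(lates) * 1e3))
      print("worst  extra latency: %.3f ms" % (max(lates) * 1e3))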

Roger Marquis



Re: D/DoS mitigation hardware/software needed.

2010-01-10 Thread Roger Marquis

Dobbins, Roland wrote:

My employer's products don't compete with firewalls, they *protect* them;
if anything, it's in my pecuniary interest to *encourage* firewall
deployments, so said firewalls will fall down and need protection, heh.


Nobody's disputing that Roland, or the fact that different specialized
appliances will protect against different perimeter attacks.  The only
thing you've said that is being disputed is the claim that a firewall
under a DDoS type of attack will fail before a server under the same type
of attack.

I question this claim for several reasons.

 * because it doesn't correlate with my 22 years of experience in systems
 administration and 14 years in netops (including Yahoo netsecops where I
 did use IXIAs to compile stats on FreeBSD and Linux packet filtering),

 * it doesn't correlate with experience in large networks with multiple
 geographically dispersed data centers where we did use Arbor, Cisco and
 Juniper equipment,

 * it doesn't correlate with server and firewall hardware and software
 designs, and last but not least,

 * because you have shown no objective evidence to support the claim.


I did this kind of testing when I worked for the largest
manufacturer of firewalls in the world


Where then, can we find the results of your testing?


Here's the thing; you're simply mistaken, and you hurl insults
instead of listening to the multiple people on this
thread who have vastly more large-scale Internet experience than
you do and who concur with these prescriptions.


Nobody has hurled insults in this thread other than yourself, Roland.
Shame on you for such disreputable tactics.  To make the case, you need
more than repeated dismissal of requests for evidence and repeated
unsupported claims of vast experience with failing servers and
firewalls.  We just need some actual statistics.

Roger Marquis



Re: D/DoS mitigation hardware/software needed.

2010-01-09 Thread Roger Marquis

Dobbins, Roland wrote:

Firewalls do have their place in DDoS mitigation scenarios, but if used as
the ultimate solution you're asking for trouble.


In my experience, their role is to fall over and die, without
exception.


That hasn't been my experience but then I'm not selling anything that
might have a lower ROI than firewalls, in small to mid-sized
installations.


I can't imagine what possible use a stateful firewall has being
placed in front of servers under normal conditions, much less
during a DDoS attack; it just doesn't make sense.


Firewalls are not designed to mitigate large scale DDoS, unlike Arbors,
but they do a damn good job of mitigating small scale attacks of all
kinds including DDoS.  Firewalls actually do a better job for small to
medium sites whereas you need an Arbor-like solution for large scale
server farms.

Firewalls do a good job of protecting servers, when properly configured,
because they are designed exclusively for the task.  Their CAM tables,
realtime ASICs and low latencies are very much unlike the CPU-driven,
interrupt-bound hardware and kernel-locking, multi-tasking software on a
typical web server.  IME it is a rare firewall that doesn't fail long,
long after (that's after, not before) the hosts behind them would have
otherwise gone belly-up.

Rebooting a hosed firewall is also considerably easier than repairing
corrupt database tables, cleaning full log partitions, identifying
zombie processes, and closing their open file handles.

Perhaps a rhetorical question, but do systems administration or
operations staff agree with netops' assertion that they 'don't need no
stinking firewall'?

Roger Marquis



Re: D/DoS mitigation hardware/software needed.

2010-01-09 Thread Roger Marquis

Dobbins, Roland wrote:

Firewalls are not designed to mitigate large scale DDoS, unlike
Arbors, but they do a damn good job of mitigating small scale
attacks of all kinds including DDoS.


Not been my experience at all - quite the opposite.


Ok, I'll bite.  What firewalls are you referring to?


Their CAM tables, realtime ASICs and low latencies are very
much unlike the CPU-driven, interrupt-bound hardware and
kernel-locking, multi-tasking software on a typical web server.
IME it is a rare firewall that doesn't fail long, long after
(that's after, not before) the hosts behind them would have
otherwise gone belly-up.


Completely incorrect on all counts.


So then you're talking about CPU-driven firewalls without ASICs, e.g.,
consumer-level gear?  Well, that would explain why you think they fail
before the servers behind them.


I've been a sysadmin


Have you noticed how easily Drupal servers go down with corrupt MyISAM
tables?  How would S/RTBH and/or flow-spec protect against that?

Roger Marquis



Re: D/DoS mitigation hardware/software needed.

2010-01-09 Thread Roger Marquis

Dobbins, Roland wrote:

See here for a high-profile example:
http://files.me.com/roland.dobbins/k54qkv


Reads like a sales pitch to me.  No apples-to-apples comparisons, nothing
like an ANOVA of PPS, payload sizes, and other vectors across different
types of border defenses.  Your presentation makes a good case for
Arbor-type defenses, against a certain type of attack, but it doesn't
make the case you're referring to.

What would convince me is an IXIA on a subnet with ten hosts running a
db-bound LAMP stack.  Plot the failure points under different loads.
Then add an ASA or Netscreen and see what fails under the same loads.
That would be an objective measure, unlike what has been offered as
evidence in this thread so far.


Placing a stateful inspection device in a topological position where no
stateful inspection is possible due to every incoming packet being
unsolicited makes zero sense whatsoever from an architectural standpoint,
even without going into implementation-specific details.


Which is basically claiming that the general purpose web server, running
multiple applications, is more capable of inspecting every incoming packet
than hardware specifically designed for the task and doing only the task
it was designed for.

Christopher Morrow wrote:

have you noticed how putting your DB and WEB server on the same hardware
is a bad plan?


While often true, this is entirely tangential to the thread.

Roger Marquis



Re: Consumer Grade - IPV6 Enabled Router Firewalls.

2009-12-11 Thread Roger Marquis

Joe Greco wrote:

Everyone knows a NAT gateway isn't really a firewall, except more or less
accidentally.  There's no good way to provide a hardware firewall in an
average residential environment that is not a disaster waiting to happen.


Gotta love it.  A proven technology, successfully implemented on millions
of residential firewalls, isn't really a firewall but rather a disaster
waiting to happen.  Makes you wonder what disaster, and when exactly it's
going to happen?

Simon Perreault wrote:

We have thus come to the conclusion that there shouldn't be a
NAT-like firewall in IPv6 home routers.


And that, in a nutshell, is why IPv6 is not going to become widely
feasible any time soon.

Whether or not there should be NAT in IPv6 is a purely rhetorical
argument.  The markets have spoken, and they demand NAT.

Is there a natophobe in the house who thinks there shouldn't be stateful
inspection in IPv6?  If not then could you explain what overhead NAT
requires that stateful inspection hasn't already taken care of?

Far from the issue some try to make it out to be, NAT is really just a
component of stateful inspection.  If you're going to implement
statefulness there is no technical downside to implementing NAT as well.
No downside, plenty of upsides, no brainer...

Roger Marquis



Re: Important New Requirement for IPv4 Requests

2009-04-21 Thread Roger Marquis

Rich Kulawiec wrote:

If the effort that will go into administering this went instead
into reclaiming IPv4 space that's obviously hijacked and/or being
used by abusive operations, we'd all benefit.


But they can't do that without impacting revenue.  In order to continue
charging fees that are wholly out of proportion to their cost ARIN must:

  A) ignore all the unneeded legacy /16 allocations, even those owned by
  organizations with fewer than 300 employees (like net.com) who could
  easily get by with a /24

  B) do nothing while IPv6 languishes due to the absence of a standard for
  one-to-many NAT and NAPT for v6 and v4/v6

  C) periodically raise fees and implement minimal measures like requiring
  someone to sign a statement of need, so they can at least appear to have
  been proactive when the effects of this artificial shortage really begin
  to impact communications

Bottom line: it's about the money.  Money and short-term self-interest,
same as is causing havoc in other sectors of the economy.  Nothing new
here.

IMO,
Roger Marquis



Re: Important New Requirement for IPv4 Requests [re impacting revenue]

2009-04-21 Thread Roger Marquis

John Curran wrote:

A) ARIN's not ignoring unneeded legacy allocations, but can't take
 action without the Internet community first making some policy
 on what action should be taken...  Please get together with folks
 of similar mind either via PPML or via Public Policy meeting at
 the Open Policy BoF, and then propose a policy accordingly.


Thanks for the reply, John, but PPML has not worked to date.  Too many
legacy interests willing and able to veto any such attempt at a sustainable
netblock return policy.  Not sure how us folks, of a similar mind as it
were, would be able to change that equation.  IMO this change has to come
from the top down.  Towards that goal can you give us any hint as to how to
effect that?


B) Technical standards for NAT & NAPT are the IETF's job, not ARIN's.


Too true, but no reason ARIN could not be taking a more active role.  This
is, after all, in ARIN's best interest, not the IETF's.


C) We've routinely lowered fees since inception, not raised them.


Not raised since they were raised, granted.  Not raised for large
unnecessary allocations either.  Is that the job of the PPML as well?

What telecommunications consumers need here is leadership and direction.
What we see is, well, not what we are looking for.

Roger Marquis



Re: Important New Requirement for IPv4 Requests

2009-04-21 Thread Roger Marquis

David Conrad wrote:

The term legacy here is relevant.  Under what agreement would an RIR
evaluate an allocation that occurred prior to the existence of the RIR?
And when the folks who received legacy space don't like this upstart RIR
nosing around in their business, the legal fees that the RIR incur will
cost non-trivial amounts of, well, money.


Good points all.  I fully admit to ignorance of how to remedy this and the
other valid points raised in defence of the status quo (except by raising
the issue when appropriate).

Not sure what could be cited as precedent either, except perhaps the
transition from feudal landowning aristocracies a few centuries back.

Roger Marquis



Re: Important New Requirement for IPv4 Requests [re impacting revenue]

2009-04-21 Thread Roger Marquis

Jo Rhett wrote:

Let's translate that:  There is no consensus in the community who defines
goals and objectives for ARIN to do Something.


And there is no consensus because the process and/or community has not been
capable of the task.  Design-by-committee is a problem we are all familiar
with.  The resolution is to either A) apply direction from outside the
committee, B) wait until things get bad enough that they can achieve
consensus (if that is an option), or C) wait for a higher authority to step
in (as occurred recently when the DOC gave ICANN direction regarding TLDs).

Given a choice I'd take plan A.  Direction could come from ARIN directors
by way of their advocacy, issuing RFCs, offering financial incentives, and
a number of other things to speed the process (of reclaiming unused IPs and
of incentivizing the IETF).  Taking a hands-off position and waiting for
consensus to develop, well, that will just lead to B or C.  Do you
disagree?  Are there other options?


Can you tell me how we can hijack the process and subjugate the
community to our will?


Would the process survive address exhaustion?

Roger



RE: Fiber cut in SF area

2009-04-11 Thread Roger Marquis

Jo? wrote:

I'm confused, but please pardon the ignorance.
All the data centers we have are at minimum keys to access
data areas. Not that every area of fiber should have such, but
at least should they? Manhole covers can be keyed. For those of
you arguing that this is not enough, I would say at least it's a start.


That is an option, but it doesn't address the real problem.

The real problem is route redundancy.  This is what the original contract
from DARPA to BBM, to create the Internet, was about!  The net was
created to enable communications between point A and point B in this exact
scenario.

No one should be surprised that AT&T would cut corners on critical
infrastructure. The good news is that this incident will likely result in
increased Federal scrutiny if not regulation.  We know how spectacularly
energy and banking deregulation failed.  Is that mistake being repeated
with telecommunications?

The bad news is that some of the $16M/yr AT&T spends lobbying Congress (for
things like fighting number portability and getting a free pass on illegal
domestic surveillance) will likely be redirected to ask for money to fix
the problem they created.  This assumes AT&T is as badly managed, and the US
FCC and DHS are better managed, than has been the case for the last 8
years.  Time will tell.

For a good man-in-the-street perspective of how the outage affected
things like a pharmacy's ability to fill prescriptions and a university
computer's ability to boot, check out a couple of shows broadcast on KUSP
(Santa Cruz Public Radio) this morning:

  http://www.jivamedia.com/askdrdawn/askdrdawn.php

  http://geekspeak.org/

Roger Marquis



Re: Fiber cut in SF area

2009-04-11 Thread Roger Marquis

Jorge Amodio wrote:

s/DARPA/ARPA/; s/BBM/BBN/; s/Internet/ARPAnet/.


/DARPA/ARPA/ may be splitting hairs.  According to

  http://www.livinginternet.com/i/ii_roberts.htm

DARPA head Charlie Hertzfeld promised IPTO Director Bob Taylor a million
dollars to build a distributed communications network.

And apologies WRT /BBM/BBN/.  Guess it really has been a while now
(given the 4 and 5 figure checks to BBN I signed back in the day).

Sean Donelan wrote:

On Sat, 11 Apr 2009, Roger Marquis wrote:

The real problem is route redundancy.  This is what the original contract
from DARPA to BBM, to create the Internet, was about!  The net was
created to enable communications between point A and point B in this exact
scenario.


Uh, not exactly.  There was diversity in this case, but there was also
N+1 breaks.  Outside of a few counties in the Bay Area, the rest of the
country's telecommunication system was unaffected.  So in that sense the
system worked as designed.

Read the original DARPA papers, they were not about making sure grandma
could still make a phone call.


Apparently even some network operators don't yet grasp the significance of
this event.


Why didn't the man in the street pharmacy have its own backup plans?


I assume they, as most of us, believed the government was taking care of
the country's critical infrastructure.  Interesting how well this
illustrates the growing importance of the Internet vis-a-vis other
communications channels.

Roger Marquis



Re: Yahoo and their mail filters...

2009-02-25 Thread Roger Marquis

Brian Keefer wrote:

Regarding taking automatic action based on luser feedback, that is
ridiculous in my opinion.


It is that, i.e., non-standard, but no more so than many other things at Y!
Many of their internal mailing lists, for internal use only, get more spam
than actual mail.

Just another example of the profound extent of deferred maintenance that
handicaps so much of what Yahoo does.  Their new CEO knows this and is
capable of addressing it.  Look for changes soon, mostly to mid-level
managers hired during the Semel years, managers who failed to keep their
technical skills up to date.

Roger Marquis



Re: v6 DSL / Cable modems [was: Private use of non-RFC1918 IP space

2009-02-05 Thread Roger Marquis

* NAT disadvantage #3: RFC1918 was created because people were afraid of
running out of addresses. (in 1992?)


Yes. One of my colleagues, who participated in development of RFC 1918,
confirmed it.


Your colleague was wrong.  I was one of several engineers who handed out
private addresses back before RFC1918 even though we could get public
assignments.  We did it for SMBs, large law firms, even departments of
companies that were otherwise publicly addressed.  Nobody expressed
concern for the size of the address pool (this was 1993 after all, only a
year or two after uunet and psi made it possible to connect).  You can
confirm this by looking for references to address exhaustion in periodicals
and usenet archives.  It was simply not an issue until 95 or 96 at the
earliest.

The reason we used non-routable addresses was security and privacy.  These
subnets had no intention of ever communicating with the Internet
directly.  Web browsers changed that equation but the original business
case for security and privacy has not changed.  That business case is also
why several of those otherwise legally addressed companies and departments
switched to rfc1918, also before anyone gave a thought to address
exhaustion.


* NAT disadvantage #5: it provides no real security. (even if it were true
this could not, logically, be a disadvantage)


It is true. Lots of administrator behind the NAT thinks, that because of the 
NAT they can run a poor, careless software update process. Majority of the 
malware infection is coming from application insecurity. This cannot be 
prevented by NAT!


Can you cite a reference? Can you substantiate lots?  I didn't think so.
This is yet another case where the rhetoric gets a little over the top,
leading those of us who were doing this before NAT to suspect a non-technical
agenda.

Roger Marquis



Re: v6 DSL / Cable modems [was: Private use of non-RFC1918 IP space

2009-02-04 Thread Roger Marquis

Mark Andrews wrote:

All IPv6 address assignments are leases.  Whether you get
the address from a RIR, LIR or ISP.  The lease may not be
renewed when it next falls due.  You may get assigned a
different set of addresses at that point.  You should plan
accordingly.


Exactly the problem, and the reason A) IPv6 is not and will not be a viable
option any time soon (soon being before the publication of an IPv6 NAT
RFC), and B) why network providers (and other parties who stand to gain
financially) are firmly against IPv6 NAT.


 If we could get a true accounting of the extra cost imposed
 by NAT's I would say it would be in the trillions of dollars.


This is exactly the sort of hyperbole, like RFC4864's proposing that
application-layer proxies are a viable substitute for NAT, that discredits
IPv6 proponents.  Those who remember the financial industry's push for SET,
a failed encryption technology, will be struck by the similarities in
technical vs rhetorical arguments.

Perhaps what we need is an IPv6 NAT FAQ?  I suspect many junior network
engineers will be interested in the rationale behind statements like:

 * NAT disadvantage #1: it costs a lot of money to do NAT (compared to what
 it saves consumers, ILECs, or ISPs?)

 * NAT disadvantage #2 (re: your IPv6 address space) Owned by an ISP?  It
 isn't much different than it is now.  (say again?)

 * NAT disadvantage #3: RFC1918 was created because people were afraid of
 running out of addresses. (in 1992?)

 * NAT disadvantage #4: It requires more renumbering to join conflicting
 RFC1918 subnets than would IPv6 to change ISPs. (got stats?)

 * NAT disadvantage #5: it provides no real security. (even if it were true
 this could not, logically, be a disadvantage)

OTOH, the claimed advantages of NAT do seem to hold water somewhat better:

 * NAT advantage #1: it protects consumers from vendor (network provider)
 lock-in.

 * NAT advantage #2: it protects consumers from add-on fees for addresses
 space. (ISPs and ARIN, APNIC, ...)

 * NAT advantage #3: it prevents upstreams from limiting consumers'
 internal address space. (will anyone need more than a /48, to be asked in
 2018)

 * NAT advantage #4: it requires new (and old) protocols to adhere to the
 ISO seven layer model.

 * NAT advantage #5: it does not require replacement security measures to
 protect against netscans, portscans, broadcasts (particularly microsoft
 netbios), and other malicious inbound traffic.

IMHO,
Roger Marquis



Re: v6 DSL / Cable modems [was: Private use of non-RFC1918 IP space

2009-02-04 Thread Roger Marquis

Seth Mattinen wrote:

Far too many people see NAT as synonymous with a firewall so they think
if you take away their NAT you're taking away the security of a firewall.


NAT provides some security, often enough to make a firewall unnecessary.
It all depends on what's inside the edge device.  But really, I've never
heard anyone seriously equate a simple NAT device with a firewall.

People do, and justifiably, equate NAT with the freedom to number, subnet,
and route their internal networks however they choose.  To argue against
that freedom is anti-consumer.  Continue to ignore consumer demand and the
marketplace will continue to respond accordingly.

Give consumers a choice (of NAT or not) and they will come (to IPv6).  It's
just about as simple as that.  Well, that and a few unresolved issues with
CAMs, routing tables, and such.

Roger Marquis



Re: Private use of non-RFC1918 IP space

2009-02-03 Thread Roger Marquis

Stephen Sprunk wrote:

Patrick W. Gilmore wrote:

Except the RIRs won't give you another /48 when you have only used one
trillion IP addresses.


Are you sure?  According to ARIN staff, current implementation of policy
is that all requests are approved since there are no defined criteria
that would allow them to deny any.  So far, nobody's shown interest in
plugging that hole in the policy because it'd be a major step forward if
IPv6 were popular enough for anyone to bother wasting it...


Catch 22?  From my experience IPv6 is unlikely to become popular until it
fully supports NAT.

Much as network providers love the thought of owning all of your address
space, and ARIN of billing for it, and RFCs like 4864 of providing
rhetorical but technically flawed arguments against it, the lack of NAT
only pushes adoption of IPv6 further into the future.

Roger Marquis



Re: Mail Server best practices - was: Pandora's Box of new TLDs

2008-06-29 Thread Roger Marquis

Rich Kulawiec wrote:

notification is essential in order to provide a heads-up about
problems (and that once problems are noticed, cooperation is
essential in order to fix them). But mail should never be
discarded without notice


In practice we've found that (notification) is the core issue.  Reject vs.
discard is a non-issue by comparison.

The format of those notifications does not have to be a spam folder, as
many seem to assume.  A summary report serves the purpose far better than
tagging and delivery with far less overhead.  Quoting
http://www.postconf.com/docs/spamrep/ :

  The only reliable way to avoid false-positives is by monitoring
  the email server or gateway logs and allowing end-users to receive
  a daily report of email sent to their account that was identified
  as spam and filtered.
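
A minimal sketch of what generating such a report can look like (the log
format below is invented for illustration; a real version would parse the
actual MTA and filter logs):

  #!/usr/bin/env python3
  # Sketch: group filtered-mail log entries into a per-user daily report.
  import collections, sys

  def build_report(log_lines):
      """Assumed (hypothetical) line format:
          <timestamp> <recipient> <action> <sender> <subject>
      where action is REJECTED or DISCARDED."""
      per_user = collections.defaultdict(list)
      for line in log_lines:
          parts = line.rstrip("\n").split(None, 4)
          if len(parts) != 5:
              continue
          ts, rcpt, action, sender, subject = parts
          if action in ("REJECTED", "DISCARDED"):
              per_user[rcpt].append((ts, action, sender, subject))
      return per_user

  def render(per_user):
      for rcpt, entries in sorted(per_user.items()):
          print(f"Filtered mail report for {rcpt}: {len(entries)} message(s)")
          for ts, action, sender, subject in entries:
              print(f"  {ts}  {action:9}  {sender}  {subject}")
          print()

  if __name__ == "__main__":
      render(build_report(sys.stdin))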

This is more or less identical to the issue ISPs like Comcast face when
implementing QoS or other filtering and failing to notify end-users.

Backscatter / NDNs are another issue.  In practice they are no longer
feasible without assurance that the sender is both valid and legitimate.
Bounces without these validations are usually spam and will get your server
blacklisted.

Roger Marquis



Re: Mail Server best practices - was: Pandora's Box of new TLDs

2008-06-29 Thread Roger Marquis

Rich Kulawiec wrote:

Quoting http://www.postconf.com/docs/spamrep/ :

  The only reliable way to avoid false-positives is by monitoring
  the email server or gateway logs and allowing end-users to receive
  a daily report of email sent to their account that was identified
  as spam and filtered.


First, it is impossible to avoid false positives (unless you turn all
spam filtering off) or false negatives


A bit of a red herring, as nobody expects 100%.


Second, while in principle this isn't a bad approach, in reality it
tends not to work well.


Judging by user acceptance, over a decade's use and several thousand users,
it works better than any other method, certainly better than silently
rejecting and discarding spam on the one hand, or tagging and delivering it
on the other.

Not that any ISP delivers everything (since ~1996).  The ones that try
learn a hard lesson in DOS or they lose customers (remember netcom.com).
The issue isn't delivery, it's reporting, and only ISPs that inform users
about _every_ rejected or discarded email are capable of effectively
minimizing false positives.


It requires that users weed through the daily reports


Looks like you haven't looked at the reports.  Nothing in them is more
difficult to parse than a large spam folder.  I'm guessing you also don't
intend to imply that users look in their spam folder?


and it requires accepting and storing considerable volumes of
mail which are likely spam/phish/virus/etc.


You must be talking about some other system.  Volumes of mail stored in
spam folders are what reports _avoid_.  Simple text reports take almost no
disk and, for most users, a year's worth of searchable daily reports takes
just a few MBs.


can make FP detection difficult, since senders do not get a
reject (mail was accepted, after all; why should they?) and thus
may be unaware that their message was dropped in a probable-spam
folder.


You really should read the URL cited above, and have a look at the sample
reports.

Whether spam is rejected outright or discarded after delivery is not
relevant since good reports list both.  Users don't make a distinction
either, as long as they know what was filtered.

Whether to A) reject or B) accept and discard is also a bit of a red
herring.  Most spam will get rejected by RBLs but you still _must_ run
everything else through Spamassassin and AV, and there's no way to do those
checks pre-queue without SMTP timeouts and DOS.

Roger Marquis



Re: ICANN opens up Pandora's Box of new TLDs

2008-06-29 Thread Roger Marquis

Stephane Bortzmeyer wrote:

I am very curious of what tests a security-aware programmer can do,
based on the domain name, which will not be possible tomorrow, should
ICANN allow a few more TLDs.


The difference between '[a-z0-9\-\.]*\.[a-z]{2,5}' and
'[a-z0-9\-\.]*\.[a-z\-]*' is substantial from a security perspective.
Aside from the IP issues it effectively precludes anyone from defining a
hostname that cannot also be someone else's domain name. It's not too hard
to see the problems with that. An analogous network scenario would be IP
addresses of varying length and without a netmask.
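
A quick illustration of the difference (the patterns follow the ones above;
the test names are made up):

  import re

  OLD_STYLE = re.compile(r"^[a-z0-9\-\.]*\.[a-z]{2,5}$")   # pre-expansion: short, letters-only TLD
  NEW_STYLE = re.compile(r"^[a-z0-9\-\.]*\.[a-z\-]*$")     # arbitrary-length TLD label

  names = [
      "www.example.com",           # passes both
      "db1.backend.internal-lan",  # "internal" hostname now shaped like a valid domain
      "files.example.management",  # long new-gTLD-style label passes only the loose pattern
  ]

  for name in names:
      print("%-28s old=%-5s new=%s" % (name,
            bool(OLD_STYLE.match(name)), bool(NEW_STYLE.match(name))))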


If you test that the TLD exists... it will still work.


Only if A) you are always online with B) reliable access to the tld's
nameserver/s, and C) can deal with the latency.  In practice this is
often not the case.


If you test that the name matches (com|net|org|[a-z]{2}), then you are
not what I would call a security-aware programmer.


Will you still think that when someone buys the right to the .nic tld and
starts harvesting your queries and query related traffic?  Not that that
doesn't happen now, to a far lesser degree.  But it's the extent to which
this presents new opportunities for black hats that should have given ICANN
pause.  Odds are that RBLs will be among the first targets.

Bottom line is the decision was made for its _monetization_ value, not
security, and customer service was just a pretense.

Roger Marquis



Re: ICANN opens up Pandora's Box of new TLDs

2008-06-27 Thread Roger Marquis

Phil Regnauld wrote:

As business models go, it's a fine example of how to build demand
without really servicing the community.


Of all the ways new tlds could have been implemented this has to be the
most poorly thought out. Security-aware programmers will now be unable to
apply even cursory tests for domain name validity. Phishers and spammers
will have a field day with the inevitable namespace collisions. It is,
however, unfortunately consistent with ICANN's inability to address other
security issues such as fast flux DNS, domain tasting (botnets), and
requiring valid domain contacts.

The core problem seems to be financial, as this is likely the most revenue
generating plan (both over and under the table) ICANN bean-counters could
have dreamed up.  It certainly was not the foreseen outcome when non-profit
status was mandated.

I have to conclude that ICANN has failed, simply failed, and should be
returned to the US government.  Perhaps the DHL would at least solicit for
RFCs from the security community.

Roger Marquis



Re: ICANN opens up Pandora's Box of new TLDs

2008-06-27 Thread Roger Marquis

On Fri, 27 Jun 2008, Christopher Morrow wrote:

1) Fast flux 2) Botnets 3) Domain tasting 4) valid contact info
These are separate and distinct issues...


They are separate but also linked by being issues that can only be addressed
the registrar level, through TOS.  Since some registrars have a financial
incentive not to address these issues, in practice, they can be implemented
only by ICANN policy (mandated much like the domain refund period).


I'd point out that FastFlux is actually sort of how Akamai does
its job (inconsistent dns responses)


That's not really fast flux.  FF uses TTLs of just a few seconds with
dozens of NS.  Also, in practice, most FF NS are invalid.  Not that FF has
a fixed definition...
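
For what it's worth, a crude scoring sketch of those traits (thresholds
invented for illustration; real detectors use far richer features, and a
CDN's low TTLs alone shouldn't trip it):

  def fastflux_score(ttl_seconds, ns_count, unreachable_ns):
      """Crude 0-3 score; higher looks more fast-flux-like."""
      score = 0
      if ttl_seconds <= 60:                             # TTLs of seconds, not hours
          score += 1
      if ns_count >= 12:                                # dozens of NS is unusual for one zone
          score += 1
      if ns_count and unreachable_ns / ns_count > 0.5:  # most NS never answer
          score += 1
      return score

  print(fastflux_score(ttl_seconds=30, ns_count=24, unreachable_ns=18))  # 3: flux-like
  print(fastflux_score(ttl_seconds=20, ns_count=4, unreachable_ns=0))    # 1: CDN-ish, low TTL only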


Domain tasting has solutions on the table (thanks drc for
linkages) but was a side effect of some
customer-satisfaction/buyers-remorse loopholes placed in the
regs...


The domain tasting policy was, if I recall, intended to address buyers of
one to a few domains, not thousands.  Would be a simple matter to fix, in a
functional organization.


I'm not sure a shipping company really is the best place to
solicit... or did you mean DHS? and why on gods green earth
would you want them involved with this?


Yes, sorry, DHS. :-)  At least they are sensitive to security matters and
would, in theory, not be as easily influenced by politics as was the NSF.

Roger Marquis