On Mon, May 21, 2007 11:02 pm, Steve Gibbard wrote:
Is the above situation any different from the decision of whether to use
locally-expected ccTLDs for local content, or to use the international
.com for everything?
Ah, assuming local content, no. I was coming more from the 'must protect
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Cisco Security Advisory: Vulnerability In Crypto Library
Advisory ID: cisco-sa-20070522-crypto
http://www.cisco.com/warp/public/707/cisco-sa-20070522-crypto.shtml
Revision 1.0
For Public Release 2007 May 22 1300 UTC (GMT)
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Cisco Security Advisory:
Multiple Vulnerabilities in Cisco IOS While Processing SSL Packets
Advisory ID: cisco-sa-20070522-SSL
http://www.cisco.com/warp/public/707/cisco-sa-20070522-SSL.shtml
Revision 1.0
For Public Release 2007 May 22 1300 UTC
On 5/21/2007 at 2:09 PM, Edward Lewis [EMAIL PROTECTED] wrote:
At 3:50 PM -0500 5/21/07, Gadi Evron wrote:
As to NS fastflux, I think you are right. But it may also be an issue of
policy. Is there a reason today to allow any domain to change NSs
constantly?
Although I rarely find
apropos of this...
As to NS fastflux, I think you are right. But it may also be an issue of
policy. Is there a reason today to allow any domain to change NSs
constantly?
...i just now saw the following on comp.protocols.dns.bind (bind-users@):
+---
| From: Wiley Sanders [EMAIL PROTECTED]
|
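As an illustration of the NS-churn question quoted above: a monitor at a
resolver or registry could simply poll a domain's NS set and flag rapid
turnover. A minimal sketch in Python, assuming the dnspython package is
available; the polling interval and thresholds are illustrative, not
anything proposed in the thread:

    import time
    import dns.resolver

    def ns_set(domain):
        """Return the current set of NS target names for a domain."""
        answer = dns.resolver.resolve(domain, "NS")
        return {rr.target.to_text().lower() for rr in answer}

    def watch(domain, interval=300, window=3600, max_changes=3):
        """Alert when the NS set changes more than max_changes per window."""
        change_times = []
        previous = ns_set(domain)
        while True:
            time.sleep(interval)
            current = ns_set(domain)
            now = time.time()
            if current != previous:
                change_times.append(now)
                previous = current
            # keep only the changes inside the sliding window
            change_times = [t for t in change_times if t > now - window]
            if len(change_times) > max_changes:
                print("%s: %d NS changes in %ds -- fastflux?"
                      % (domain, len(change_times), window))

A registry could run the same sliding-window count against zone-file diffs,
which is closer to where a policy like the one asked about above would
actually be enforced.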
Gadi Evron wrote:
On Mon, 21 May 2007, Chris L. Morrow wrote:
ok, so 'today' you can't think of a reason (nor can I really easily) but
it's not clear that this will remain the case tomorrow. It's possible that,
as a way to 'better loadshare' traffic, Akamai (just to make an example)
could start
On 22 May 2007, Paul Vixie wrote:
apropos of this...
As to NS fastflux, I think you are right. But it may also be an issue of
policy. Is there a reason today to allow any domain to change NSs
constantly?
...i just now saw the following on comp.protocols.dns.bind (bind-users@):
On Tue, 22 May 2007, David Ulevitch wrote:
Gadi Evron wrote:
On Mon, 21 May 2007, Chris L. Morrow wrote:
ok, so 'today' you can't think of a reason (nor can I really easily) but
it's not clear that this will remain the case tomorrow. It's possible that,
as a way to 'better loadshare'
Why are people trying to solve these problems in the core?
Because that's the only place it can be done.
These issues must be solved at the edge.
Been there, done that, with smtp/spam, netbios, and any number of
other protocols that would also be ideally addressed at the
Gadi Evron wrote:
People are suggesting it become the rule because nobody is trying
anything else.
I was with you up to this sentence. Obviously avoiding the core is key,
but should we not have the capability of preventing abuse in the core
rather than mitigating it there? Allowing NS
On Tue, 22 May 2007, David Ulevitch wrote:
[snip]
These questions, and more (but I'm biased to DNS), can be solved at the
edge for those who want them. It's decentralized there. It's done the
right way there. It's also doable in a safe and fail-open kind of way.
This is what I'm
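To make "safe and fail-open" concrete: the edge resolver consults a local
policy check before answering, but if the check itself breaks, it resolves
normally rather than blackholing the user. A minimal Python sketch, again
assuming dnspython; the is_flagged() policy source and its file path are
hypothetical, not a description of any vendor's mechanism:

    import dns.resolver

    def is_flagged(domain):
        # Hypothetical local policy source (a blocklist file, a feed, etc.)
        with open("/etc/resolver/flagged-domains.txt") as f:
            return domain.rstrip(".") in {line.strip() for line in f}

    def resolve(domain, qtype="A"):
        try:
            if is_flagged(domain):
                return None   # local policy says refuse
        except Exception:
            pass              # fail open: a broken check must not break resolution
        return dns.resolver.resolve(domain, qtype)

The except clause is the fail-open property: the filtering layer can
disappear without taking resolution down with it.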
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
- -- David Ulevitch [EMAIL PROTECTED] wrote:
But very few people (okay, not nobody) are saying, "Hey, why should I
allow that compromised Windows box that has never sent me an MX request
before to suddenly request 10,000 MX records
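That heuristic is straightforward to state in code. A minimal sketch of a
resolver-side, per-client MX budget in Python; the window and limit are
illustrative numbers, not anything David proposed:

    import time
    from collections import defaultdict, deque

    WINDOW = 60        # seconds; illustrative
    MX_LIMIT = 100     # MX queries per client per window; illustrative

    mx_queries = defaultdict(deque)   # client IP -> recent MX query timestamps

    def allow_query(client_ip, qtype):
        """Refuse a client that suddenly floods MX lookups."""
        if qtype != "MX":
            return True
        now = time.time()
        q = mx_queries[client_ip]
        q.append(now)
        # drop timestamps that have fallen out of the window
        while q and q[0] < now - WINDOW:
            q.popleft()
        return len(q) <= MX_LIMIT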
Fergie wrote:
David,
As you (and some others) may be aware, that's an approach that we
(Trend Micro) took a while back, but we got a lot (that's an
understatement) of push-back from service providers, specifically,
because they're not very inclined to change out their infrastructure
(in this
Roger Marquis wrote:
Simply saying it is dangerous is indistinguishable from any other
Verisign astroturfing.
It's not every day that you get accused of astroturfing for Verisign.
I'm printing this, framing it, putting it on my wall, and leaving this
thread.
Thanks!
-David
On Mon, May 21, 2007 at 03:08:06PM +, Chris L. Morrow wrote:
[snip]
This is sort of the point of the NRIC document/book... 'we need to
find/make/use a directory system for the internet'. Then much talk of how
DNS was supposed to be that, but for a number of reasons it's not;
google/insert
The directory that was contracted
and 'supposed to' exist as part of the NNSC-to-InterNIC dance
was to be built by old-ATT Labs. As far as I can recall, it
was only ever an ftp repository and not much of a 'directory
and database service' (corrections welcome).
Anyone remember the
On Wed, 23 May 2007 01:32:41 BST, [EMAIL PROTECTED] said:
Anyone remember the Internet Scout? Even back then, labors of love like
John December's list were more useful than the InterNIC services.
That worked well for 14,000 .coms. It doesn't work for 140,000,000 .coms.
Does everybody on this
On Tue, 22 May 2007, David Ulevitch wrote:
Fergie wrote:
David,
As you (and some others) may be aware, that's an approach that we
(Trend Micro) took a while back, but we got a lot (that's an
understatement) of push-back from service providers, specifically,
because they're not
On Tue, 22 May 2007, Roger Marquis wrote:
Why are people trying to solve these problems in the core?
Because that's the only place it can be done.
it is A PLACE, not necessarily THE PLACE. Every decision about where
involves tradeoffs; be prepared to accept/defend them.
These
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
- -- Chris L. Morrow [EMAIL PROTECTED] wrote:
Sure, work on an expedited removal process inside a real procedure from
ICANN down to the registry. Work on a metric and monetary system used to
punish/disincent registries from allowing their systems to