Re: default routes question or any way to do the rebundant

2008-03-21 Thread Scott McGrath


If we do not help the newbies, how will they ever become clued?   I can 
certainly remember when I did not know a bit from a byte.


Oh and btw I'll take 5 of those STM64's on special...

Regards all - Scott

Martin Hannigan wrote:

On Fri, Mar 21, 2008 at 4:29 PM, Barry Shein [EMAIL PROTECTED] wrote:
  

 Is this for real?

 Someone asks a harmless question about setting up multiple default
 routes, not about Barack Obama or whether the moon is made of green
 cheese, but about default routes.

 Then 10 people decide to respond that this isn't appropriate for nanog.

 Then 25 people decide to dispute that.

 Then 50 people are arguing (ok maybe I exaggerate but just a little)
 about it.

 So the person who asked the original question feels bad and apologizes.

 And 5 people decide to tell her there's nothing to apologize for.

 And 10 people dispute that...and...what next? Oh, right, and next I
 feel an urge to write this idiotic meta-meta-meta-note.

 I think psychologists have a term for this, chaotic instability
 disorder or something like that.

 Maybe what we need are NANOG GREETERS!

 Hello, welcome to Nanog, can we help you find something? Hello,
 welcome to Nanog, can we help you find something?...



Blue light special in slot 5? V6 only STM64's now half price!

personal opinion

I don't think that there's any issue at all, to be honest. NANOG isn't
just for the clued.

/personal opinion

Best,

Marty
  




Re: default routes question or any way to do the rebundant

2008-03-21 Thread Scott McGrath


I'll take that bet, Valdis.

[EMAIL PROTECTED] wrote:

On Fri, 21 Mar 2008 16:44:39 EDT, Martin Hannigan said:

  

personal opinion

I don't think that there's any issue at all, to be honest. NANOG isn't
just for the clued.

/personal opinion



And more to the point - if somebody manages to go through all the hoops needed
to ask a basic question on the NANOG list, it demonstrates a desire to
accumulate clue - so we should encourage those people.  I'll make the
prediction that in 5 years, the person who *started* this thread will be
substantially more clued than the lead network engineer at many AS's (you all
know the ones I mean - that AS that's 1 or 2 hops away from you that on a
weekly basis do something that makes you want to go and inject clue with a
baseball bat..)

  




Re: EU Official: IP Is Personal

2008-01-24 Thread Scott McGrath


We have a similar system based around Cisco's CNR, which is a popular 
DHCP/DNS system used by large ISPs and other large organizations.  It is 
the IP + timestamp, coupled with the owner-to-MAC relationship, which 
allows unique identification of a user, and we have strict data 
retention policies, so that after the data has been maintained for the 
interval specified by the Provost it is permanently removed from the 
database.
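
As a minimal sketch of that IP + timestamp walk (hypothetical table names and
sample data, not the actual CNR schema), the lookup amounts to something like:

from datetime import datetime

# Hypothetical lease history (IP -> MAC over a time window) and the
# registration table (MAC -> owner) captured at network sign-up time.
leases = [
    {"ip": "10.1.2.3", "mac": "00:11:22:33:44:55",
     "start": datetime(2008, 1, 20, 9, 0), "end": datetime(2008, 1, 20, 17, 0)},
]
registrations = {"00:11:22:33:44:55": "jdoe"}

def owner_for(ip, when):
    # Walk IP + timestamp -> MAC -> registered owner, as described above.
    for lease in leases:
        if lease["ip"] == ip and lease["start"] <= when <= lease["end"]:
            return registrations.get(lease["mac"])
    return None

print(owner_for("10.1.2.3", datetime(2008, 1, 20, 12, 30)))   # -> jdoe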


We treat IP/MAC information as personally identifiable information and 
as such limit access to this information to authorized users only.


But there seems to be a misapprehension that a dynamically assigned 
address cannot be associated with an individual.


Eric Gauthier wrote:

Heya,

  

In the US, folks are fighting the RIAA claiming that an IP address isn't
enough to identify a person.

In Europe, folks are fighting the Google claiming that an IP address is
enough to identify a person.

I guess it depends on which side of the pond you are on.

  

They are both right. If you have a dynamic IP such as most college students
have, it is here-today-gone-tomorrow.




Our University uses dynamic addressing, but we are able to identify likely users
in response to the RIAA stuff.  There is a hidden step in here, at least for our 
University, in the IP-to-Person mapping.  Our network essentially tracks the 
IP-to-MAC relationship and the MAC-to-Owner relationship.  For us, it's not the 
IP that identifies a person, but the combination of IP plus timestamp, which can 
be used to walk our database and produce a system owner.


I'm guessing that Google et al. have a similar multi-factor token set (IP, time,
cookie, etc.) which allows them to map back to a person.

Eric :)
  


Re: An Attempt at Economically Rational Pricing: Time Warner Trial

2008-01-21 Thread Scott McGrath


I think a rate-limited plan would appeal to most customers, as it would 
give them a fixed monthly budget item.   But I am pretty sure this will 
not happen in the US, based on experiences with the broadband-by-cell 
providers, who prefer a 'bill-by-byte' method with no mechanism to stop 
loss in the event of a runaway process or compromised host.   It seems 
the market has lost its taste for a known revenue stream with known 
costs and profits in exchange for a Vegas go-for-broke, growth-at-all-costs 
mentality.   And too often the vendor does go broke.   It seems the 
market has lost a degree of rationality, to the point where ISPs are 
'firing' customers who are using 'too many' resources instead of trying 
to help them fix their issue or, if they really are using all those bits, 
finding a mutually beneficial method of profiting from them.


Mark Foster wrote:




The big advantage of these plans is that the cost is fixed
even if I've used up all my allotted transfer.



This is the success of systems that implement rate limiting (not 
additional charging) once a specified ceiling has been reached.


It provides some fiscal security that you're not going to blow out 
your upper limit.  (I've seen some horrendous bills in the face of 
'overage' caused by virus/drone infections, spammers hitting 
mailservers run on SME broadband links, etc etc.)


Both .nz and .au have implemented this.  No reason that .us can't do 
the same?


Heck were I in the USA and I had to choose between 'flat rate' and 
some figure in the vicinity of 10-15GB/month then 'rate limiting' 
(especially then including the option to buy more bandwidth as a 
one-off), the latter would win hands down.  Flat rate (in my world) 
often includes port-based and/or time based throughput limiting that's 
designed to prevent the ISP from being ground to a halt by P2P during 
peak hours, etc


I'd rather have a (reasonable) monthly limit for an affordable price, 
thanks.


Mark. (In .nz)





Re: An Attempt at Economically Rational Pricing: Time Warner Trial

2008-01-19 Thread Scott McGrath


No, we do not.   However, we are allocated an invariant budget to deliver 
services for a fixed period of time.   We cannot 'raise' prices, as the 
pot of funds needs to be allocated to scholarships, teaching, housing and 
all the other things which make up a university; we provide a service 
which must be available 7x24 for a fixed amount of funds.   So even 
though we do not face a profit/loss calculus, cost control is a key 
driver for us, as every dollar not spent on IT can be redirected into 
scholarships for deserving students.


Our problem on the residential networking side is finding the balance 
between unfettered access, which is untenable, and a service which 
allocates the available pool of bits fairly among the average customers 
while trying to accommodate the large users who are downloading 
the latest Ubuntu ISOs, without causing undue pain for either.   For 
instance, Columbia's resnets implement the same policy I posited in my 
initial response.

But I REALLY don't want to go back to the days of 0.25 cts/minute access to 
the Internet.   If we do that, the entire thing will collapse due to the 
financial uncertainty, and the Internet will go back to being a
curiosity for education and government, as it will be deemed 'too 
expensive' by the masses.


But it just seems that the telcos just cannot give up the concept of 
metered access.   For instance, I use DSL at home, which is PPPoE, which 
means many 'broadband' devices are unusable here.   Sure, it terminates 
on a PIX, but the PIX does not have a finger to press the reset button on 
the 'required by contract' access device.   Sure, I could directly 
terminate it, but since I live in a rural area I need my ISP more than 
the ISP needs me; hence devices which need 'always on' access are a pipe 
dream, as my service is 'on most of the time'.





Roderick Beck wrote:

Universities don't face a profit calculus.

And universities are also instituting rationing as well.

-R.
Sent wirelessly via BlackBerry from T-Mobile.

-Original Message-
From: Scott McGrath [EMAIL PROTECTED]

Date: Fri, 18 Jan 2008 17:00:19 
To:Patrick W. Gilmore [EMAIL PROTECTED]

Cc:nanog@merit.edu
Subject: Re: An Attempt at Economically Rational Pricing: Time Warner Trial



Why does the industry as a whole keep trying to drag us back to the old
days of Prodigy, CompuServe, AOL and really high rates per minute of
access?   I am old enough to remember BOSc202202  return

The 'Internet' only took off in adoption once flat rate pricing became
the norm for access.   Yes, there are P2P pigs out there, but a more
common scenario is the canonical Little Old Lady in a Pink Sweater
with a compromised box which is sending spam at a great rate.    Should
she pay the $500 bill when it arrives, or would a more prudent and
rational approach be to do what some universities do?

i.e. an unthrottled pipe until you hit some daily limit like 1-2 GB, and
then your pipe drops to something like 64k until midnight or so.    This
keeps the 'pigs' in line, and you might want to add a SUPER tier which
would allow truly unlimited use of the pipe for $200-300, because for
some people it would be worth it.  It's human nature to
desire a degree of predictability
in day-to-day affairs, and as another poster noted, that's why prepaid
phones are popular now.   Further, with the compromised-system analogy, I
purchased a prepaid phone for my wife, who is a teacher, so in the event
it was stolen at school the financial loss would be limited to the
prepaid balance, no multi-thousand dollar bill for overseas calls.  You
used the minutes (bandwidth), didn't you?

Ultimately there is no option but to build out the network, as we have
found on the university side of the house, since digital instructional
materials and entertainment delivery over the net will
become the norm instead of sending bits of plastic through the mail
(except for luddites like me ;-}).

Patrick W. Gilmore wrote:
  

On Jan 18, 2008, at 3:11 PM, Michael Holstein wrote:



The problem is the inability of the physical media in TWC's case
(coax) to support multiple simultaneous users. They've held off
infrastructure upgrades to the point where they really can't offer
unlimited bandwidth. TWC also wants to collect on their unlimited
package, but only to the 95% of the users that don't really use it,
and it appears they don't see working to accommodate the other 5% as
cost-effective.
  

I seriously doubt it is the coax that is the problem.

And even if that is a limitation, upgrading the last mile still will
not allow for unlimited use by a typical set of users these days.
Backhaul, peering, colocation, etc., are not free, plentiful, or
trivial to operate.




My guess is the market will work this out. As soon as it's
implemented, you'll see AT&T commercials in that town slamming cable
and saying how DSL is really unlimited.
  

I do not doubt that.  But do you honestly expect the AT&T DSL line to
provide faster / more reliable access?

Hint

Re: An Attempt at Economically Rational Pricing: Time Warner Trial

2008-01-18 Thread Scott McGrath


Why does the industry as a whole keep trying to drag us back to the old 
days of Prodigy, CompuServe, AOL and really high rates per minute of 
access?   I am old enough to remember BOSc202202  return   

The 'Internet' only took off in adoption once flat rate pricing became 
the norm for access.   Yes, there are P2P pigs out there, but a more 
common scenario is the canonical Little Old Lady in a Pink Sweater 
with a compromised box which is sending spam at a great rate.    Should 
she pay the $500 bill when it arrives, or would a more prudent and 
rational approach be to do what some universities do?


i.e. an unthrottled pipe until you hit some daily limit like 1-2 GB, and 
then your pipe drops to something like 64k until midnight or so.    This 
keeps the 'pigs' in line, and you might want to add a SUPER tier which 
would allow truly unlimited use of the pipe for $200-300, because for 
some people it would be worth it.  It's human nature to 
desire a degree of predictability
in day-to-day affairs, and as another poster noted, that's why prepaid 
phones are popular now.   Further, with the compromised-system analogy, I 
purchased a prepaid phone for my wife, who is a teacher, so in the event 
it was stolen at school the financial loss would be limited to the 
prepaid balance, no multi-thousand dollar bill for overseas calls.  You 
used the minutes (bandwidth), didn't you?
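
As a concrete illustration of that kind of policy, here is a minimal
accounting sketch in Python; the 2 GB daily cap, 64k penalty rate and
midnight reset are just the example numbers from above, not anyone's
production policy:

from collections import defaultdict

DAILY_CAP_BYTES = 2 * 1024**3      # example cap: 2 GB per subscriber per day
FULL_RATE_KBPS  = 10000            # hypothetical unthrottled rate
PENALTY_KBPS    = 64               # throttled rate once the cap is hit

usage = defaultdict(int)           # subscriber -> bytes used since midnight

def record_traffic(subscriber, nbytes):
    usage[subscriber] += nbytes

def allowed_rate_kbps(subscriber):
    # Full rate until the daily cap, then 64k until the midnight reset.
    return FULL_RATE_KBPS if usage[subscriber] < DAILY_CAP_BYTES else PENALTY_KBPS

def midnight_reset():
    usage.clear()

record_traffic("subscriber-a", 3 * 1024**3)   # heavy user blows through the cap
print(allowed_rate_kbps("subscriber-a"))      # -> 64
print(allowed_rate_kbps("subscriber-b"))      # -> 10000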


Ultimately there is no option but to build out the network, as we have 
found on the university side of the house, since digital instructional 
materials and entertainment delivery over the net will
become the norm instead of sending bits of plastic through the mail 
(except for luddites like me ;-}).


Patrick W. Gilmore wrote:


On Jan 18, 2008, at 3:11 PM, Michael Holstein wrote:

The problem is the inability of the physical media in TWC's case 
(coax) to support multiple simultaneous users. They've held off 
infrastructure upgrades to the point where they really can't offer 
unlimited bandwidth. TWC also wants to collect on their unlimited 
package, but only to the 95% of the users that don't really use it, 
and it appears they don't see working to accommodate the other 5% as 
cost-effective.


I seriously doubt it is the coax that is the problem.

And even if that is a limitation, upgrading the last mile still will 
not allow for unlimited use by a typical set of users these days.  
Backhaul, peering, colocation, etc., are not free, plentiful, or 
trivial to operate.



My guess is the market will work this out. As soon as it's 
implemented, you'll see AT&T commercials in that town slamming cable 
and saying how DSL is really unlimited.


I do not doubt that.  But do you honestly expect the AT&T DSL line to 
provide faster / more reliable access?


Hint: Whatever your answer, it will be right or wrong for a given time 
in the near future.




Re: Collateral Damage

2006-01-18 Thread Scott McGrath

1 Yes
2 No
3 No
4 No

-Original Message-

From:  Patrick W. Gilmore [EMAIL PROTECTED]
Subj:  Collateral Damage
Date:  Tue Jan 17, 2006 4:44 pm
Size:  2K
To:  [EMAIL PROTECTED]
cc:  Patrick W. Gilmore [EMAIL PROTECTED]


My previous post sparked quite a bit of traffic (mostly to me  
personally).  It also sparked some confusion.  That's mostly my fault  
for writing e-mails far too late at night and mixing it with an  
emotionally charged thread.

So I would like to separate my questions out of the GoDaddy thread,  
write them slightly differently, and give a little more scope for  
clarity.

These questions are designed as yes/no, not it depends.  The idea  
being if there are general circumstances (not one-in-a-billion corner  
cases) which would make the action in question acceptable, please  
answer yes, and move to the next question.

For instance, I would answer the first question as yes, because  
there are circumstances which happen reasonably often where I would  
take down an innocent domain to stop network abuse.  (E.g. I would  
null-route a /24 that is sending gigabits of DoS traffic, even if  
there is an innocent mail server in that block.)

Anyway, on to the poll.  You are welcome and encouraged to send the  
answers to me privately, I will collate and post back to the list in  
a few days.


* Please answer yes/no.
   - Additional text is encouraged, but I need a yes/no to tabulate  
the vote.
* These questions are not regarding a specific provider or even  
specific abuse type.
   - You can consider spam, DoS, phishing, hacking, etc.
   - Please assume what you consider to be the worst abuse which is  
common on the Internet today.
* There is a basic assumption that due diligence has been applied.
   - You have investigated and are certain this is not a false  
positive or such.
   - I hope we can all agree that shutting someone down without doing  
proper investigation is a Bad Thing.
* There is a basic assumption of notification and grace period.
   - The provider in question knows Bad Things are happening.
   - The provider in question has had a reasonable amount of time to  
fix said Bad Things.
   - Bad Things are still happening.
* Please do not consider extremely rare occurrences or ultra-extreme  
scenarios.
   - Null-routing an IP address to stop nuclear war is not in scope  
of this survey.

If you have any questions, please feel free to e-mail me.


1) Do you think it is ever acceptable to cause collateral damage to  
innocent bystanders if it will stop network abuse?

2) If yes, do you still think it is ever acceptable to take down a  
provider with 100s of innocent customers because one customer is  
misbehaving?

3) If yes, do you still think it is ever acceptable if the  
misbehaving customer is not intentionally misbehaving - i.e.  
they've been hacked?

4) If yes, do you still think it is ever acceptable if the collateral  
damage (taking out 100s of innocent businesses) doesn't actually stop  
the spam run / DoS attack / etc.?


Thank you all for your time.

-- 
TTFN,
patrick




Re: the future of the net

2005-11-17 Thread Scott McGrath


Thought-provoking article, and the consumer side of the 'net is already
heading there, i.e. no VPN on many 'broadband' lines unless you pay for
'business' CoS (which I do).  Does anyone here remember the Dow Jones
Information Service, in which you were billed by the minute AND by the service
you accessed?  From a business point of view, the consumer has had a 'free
ride' for too long and it's time to start charging for the access to
'content' and hopefully the content as well.

Never mind that the unmetered pricing model facilitated the growth of the
'net to a point where it was 'commercially viable'.  All the Wall Street
model rewards is consistent growth quarter over quarter, and with a finite
number of households growth will eventually slow and then stop; at that point
the only way to facilitate growth will be to 'bill by the byte'.

This model has already been a commercial failure in the wireless
market in the US, as the 3G services (i.e. Internet access and data transport)
are too expensive at 0.05 cents/kbyte even for most businesses, since it is not
feasible to 'budget' your data needs.

The US will continue to fall further behind the rest of the world as the
business community attempts to monetize the innovations, while the rest
of the world continues to invent new stuff; at the current rate the
'Next Big Thing' will not be from the US.




Scott C. McGrath

On Wed, 16 Nov 2005, Steven M. Bellovin wrote:


 In message [EMAIL PROTECTED], Warren Kumari writes:
 
 Oh, the irony - all I get is:
 
 Access denied
 You are not authorized to access this page.
 

 Same here.

   --Steven M. Bellovin, http://www.cs.columbia.edu/~smb




Re: fcc ruling on dsl providers' access to infrastructure

2005-08-08 Thread Scott McGrath


I believe it is called fascism.
A big bald Italian mentioned something about trains running on time.

Randy Bush wrote:


From: Randy Bush [EMAIL PROTECTED]
Date: Sun, 7 Aug 2005 11:22:23 -1000
To: Christopher L. Morrow [EMAIL PROTECTED]
Subject: Re: fcc ruling on dsl providers' access to infrastructure



Yes there is a major concern that the government has
just eliminated every ISP that is currently permitted
to use another carrier's DSL lines to provide
services.

will the ilec's start offering competitive services (not bw,
but non-dynamic ips or small blocks to end-users?)

if their competition has been eliminated by fcc ruling, what
does 'competitive' pricing mean?

that which is set by the gov't rulings? :)


and, for this morning's pop quiz, what is the classic term for an
economy of private ownership and government control?

randy


Re: OMB: IPv6 by June 2008

2005-07-08 Thread Scott McGrath


On the subject of how many entities should be multihomed: any entity 
whose operations would be significantly impacted by the loss of their 
connectivity to the global Internet.


A personal example with names withheld to protect the guilty

A distributor who took 85% of their orders over the Internet (the rest was 
phone and EDI): the telecom coordinator got a 'great deal' on Internet service 
and LD from an unnamed vendor.   Well, we cut over our links, and within a 
week our major customers had trouble reaching us due to the SP relying only 
on the public peering points to exchange traffic with other networks.


At that point I set up BGP, got an AS, and reconnected our new provider and 
our old provider so that we had service from both SPs.

A 30-year-old company almost went out of business due to being single-homed.

Being dependent on a single SP is a Bad Thing (tm)

At 04:02 AM 7/8/2005, Alexei Roudnev wrote:


Moreover, if you are not multihomed, you can be aggregated. If you become
multihomed - yes, you take a slot; how many entities in the world should be
multihomed?

- Original Message -
From: Kuhtz, Christian [EMAIL PROTECTED]
To: David Conrad [EMAIL PROTECTED]; Alexei Roudnev
[EMAIL PROTECTED]
Cc: Mohacsi Janos [EMAIL PROTECTED]; Daniel Golding
[EMAIL PROTECTED]; Scott McGrath [EMAIL PROTECTED];
nanog@merit.edu
Sent: Thursday, July 07, 2005 11:02 AM
Subject: RE: OMB: IPv6 by June 2008



 Alexei,

 On Jul 7, 2005, at 9:58 AM, Alexei Roudnev wrote:
  What's the problem with independent address space for every entity
  (company,
  family, enterprise) which wants it?

 It doesn't scale.  Regardless of Moore's law, there are some
 fundamental physical limits that constrain technology.

I would contend that is not true.  What says that every device inside a
company, family, enterprise etc has to be available and reachable by
anyone on the planet in a bidirectional fashion as far as session
initiation is concerned?

Once you add that bit of reality to it, the scaling requirement goes
down substantially.  Wouldn't you agree?

Trust me, I would like to just see us get it over with as far as IPv6 is
concerned, provided we have a working, palatable IPv6 mh solution.  But,
man, I can't pass the red face test on a lot of these hypothesis :(

Thanks,
Christian






Re: OMB: IPv6 by June 2008

2005-07-07 Thread Scott McGrath


Alexei,

Ah, you mean the excellent 'The Mythical Man-Month'; Fred Brooks wrote a
second edition a few years back.  I had not thought of IPv6 in terms of
the second-system effect, but you are absolutely correct in your appraisal.

Scott C. McGrath

On Wed, 6 Jul 2005, Alexei Roudnev wrote:


 IPv6 is an excellent example of a _second system_ (do you remember the book
 written by Brooks many years ago?). Happy engineers put all their crazy ideas
 together into the second version of the first (successful) thing, and then
 they wonder why it does not work properly.
 OS/360 is one example, IPv6 will be another.

 IPv6 address allocation schema is terrible (who decided to use SP-dependent
 spaces?), security is terrible (who designed the IPSec protocol?) and so on.

 Unfortunately, it can fail only if something else is created, which does
 not look likely.
 - Original Message -
 From: Daniel Golding [EMAIL PROTECTED]
 To: Scott McGrath [EMAIL PROTECTED]; David Conrad
 [EMAIL PROTECTED]
 Cc: nanog@merit.edu
 Sent: Wednesday, July 06, 2005 8:58 AM
 Subject: Re: OMB: IPv6 by June 2008


 
 
  There is an element of fear-mongering in this discussion - that's why many
  of us react poorly to the idea of IPv6. How so?
 
  - We are running out of IPv4 space!
  - We are falling behind #insert scary group to reinforce fear of Other!
  - We are not on the technical cutting edge!
 
  Fear is a convenient motivator when facts are lacking. I've read the above
  three reasons, all of which are provably incorrect or simple fear mongering,
  repeatedly. The assertions that we are falling behind the Chinese or
  Japanese are weak echoes of past fears.
 
  The market is our friend. Attempts to claim that technology trumps the
  market end badly - anyone remember 2001? The market sees little value in
 v6
  right now. The market likes NAT and multihoming, even if many of us don't.
 
  Attempts to regulate IPv6 into use are as foolish as the use of fear-based
  marketing. The gain is simply not worth the investment required.
 
  - Daniel Golding
 
  On 7/6/05 11:41 AM, Scott McGrath [EMAIL PROTECTED] wrote:
 
  
  
   You do make some good points as IPv6 does not address routing
 scalability
   or multi-homing which would indeed make a contribution to lower OPEX and
   be easier to 'sell' to the financial people.
  
   As I read the spec it makes multi-homing more difficult since you are
   expected to receive space only from your SP there will be no 'portable
   assignments' as we know them today.  If my reading of the spec is
   incorrect someone please point me in the right direction.
  
    IPv6's hex-based nature is really a joy to work with; IPv6 definitely fails
    the human factors part of the equation.
  
   Scott C. McGrath
  
   On Wed, 6 Jul 2005, David Conrad wrote:
  
   On Jul 6, 2005, at 7:57 AM, Scott McGrath wrote:
   IPv6 would have been adopted much sooner if the protocol had been
   written
   as an extension of IPv4 and in this case it could have slid in
   under the
   accounting departments radar since new equipment and applications
   would
   not be needed.
  
   IPv6 would have been adopted much sooner if it had solved a problem
   that caused significant numbers of end users or large scale ISPs real
   pain.  If IPv6 had actually addressed one or more of routing
   scalability, multi-homing, or transparent renumbering all the hand
   wringing about how the Asians and Europeans are going to overtake the
   US would not occur.  Instead, IPv6 dealt with a problem that, for the
   most part, does not immediately affect the US market but which
   (arguably) does affect the other regions.  I guess you can, if you
   like, blame it on the accountants...
  
   Rgds,
   -drc
  
 
  --
  Daniel Golding
  Network and Telecommunications Strategies
  Burton Group
 
 



Re: OMB: IPv6 by June 2008

2005-07-07 Thread Scott McGrath


My day to day is primarily supporting high-performance research computing
on the network side.  If I can add new functionality without incurring
acquisition costs or operational expenses AND without changing experimental
regimes in my area of responsibility, that is a BIG win and one that
'slides past the accountants'.  As it stands now, IPv6 functionality
requires that the researchers replace their network-connected instruments,
many of which are purpose-built.  Some of the instruments are old (but
network attached) and are used in long-term experiments, and instrument
replacement would invalidate the results.

An interoperable IPv6 would have been adopted quickly in my environment,
especially since it could have been added along with routine scheduled
network element software maintenance.

With the current IPv6 implementation I have to

1 - Get new (non-multihomed) address space from each of our upstreams
2 - Replace network elements with IPv6 compatible network elements and S/W
3 - Convince all the researchers to dump all their instruments and buy
new ones
4 - Retrain entire staff to support IPv6

No matter how hard I try, I just am not going to be able to make any
cogent argument which will allow the implementation of IPv6, since it
appears to offer no benefits to the user community, which in my case is
extremely well informed on technologies which will benefit their research.

The best I can hope for is IPv4 to IPv6 gateways.

Scott C. McGrath

On Wed, 6 Jul 2005, Edward Lewis wrote:

 At 10:57 -0400 7/6/05, Scott McGrath wrote:

 IPv6 would have been adopted much sooner if the protocol had been written
 as an extension of IPv4 and in this case it could have slid in under the
 accounting departments radar since new equipment and applications would
 not be needed.

 Sliding anything past the accountants is bad practice.  Is the goal
 to run IPv6 or to run a communications medium to support society?  If
 it costs $1M to adopt IPv6 in the next quarter, what would you take
 the $1M from?  (I used to work at a science research center.  Having
 a good network wasn't the goal, doing science was.  Without good
 science, there would be no FY++ budget for a better network.)

 The Internet serves society, society owes nothing to the Internet.
 Members of this list may prioritize communications technology, other
 members of society may prioritize different interests and concerns.
  That is why IPv6 must offer a benefit greater than its cost.

 --
 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 Edward Lewis+1-571-434-5468
 NeuStar

 If you knew what I was thinking, you'd understand what I was saying.



Re: OMB: IPv6 by June 2008

2005-07-07 Thread Scott McGrath


On the training issue: everybody in our organization understands IPv4 at
some basic level.  The senior staff here, myself included, are conversant
with IPv6, but you have the level 1 and 2 people who for the most part are
not even aware IPv6 exists, and there are a LOT more of them than there are
of us.  These are the people who are going to get their world rocked and
who will need extensive training to be effective in an IPv6 world.

Scott C. McGrath

On Thu, 7 Jul 2005, Jeroen Massar wrote:

 On Thu, 2005-07-07 at 18:02 +0200, Andre Oppermann wrote:
  Jeroen Massar wrote:
   On Thu, 2005-07-07 at 10:39 -0400, Scott McGrath wrote:
  4 - Retrain entire staff to support IPv6
  
   You have to train people to drive a car, to program a new VCR etc. What
   is so odd about this?
 
  I had training to drive a car once in my life when I got my drivers
  license.  I don't have to get a fresh training for every new car I
  end up driving throughout my life.

 You will have to get an additional license for driving a truck or even
 when you are getting a caravan behind that car of yours though.
 Motorbikes also have different licenses and you get separate trainings
 for those. They all have wheels, look the same, operate somewhat the
 same, but are just a little bit different and need a bit different
 education.

 You also either read something, educated yourself or even got a training
 to operate IPv4 networks, now you will just need a refresh for IPv6.
 You can opt to not take it, but then don't complain you don't understand
  it. For that matter, if you don't understand IPv6 you most likely don't
  understand IPv4 (fully) either.

  If I need training to program my new VCR then the operating mode of
  that VCR is broken and I'm going to return it asap.

 Then a lot of VCR's will be returned because if there is one thing many
 people don't seem to understand, even after reading the manual then it
 is a VCR.

  It's that simple.  Why are people buying iPod's like crazy?  Because
  these thingies don't require training.  People intuitively can use
  them because the GUI is designed well.

 So you didn't read the manual of or train yourself to use your compiler|
 bgp|isis|rip|operatingsystem| and a lot of other things ?

 IP networks are not meant for the general public, they only care that
 the apps that use it work, they don't type on routers.
 Protocols don't have GUI's or do you have a point and click BGP? :)

 Greets,
  Jeroen




Re: OMB: IPv6 by June 2008

2005-07-06 Thread Scott McGrath


We are already behind in innovation, as most networks these days are run by
accountants instead of people with an entrepreneur's spirit.   We need good
business practices so that the network will stay afloat financially; I do
not miss the 'dot.com' days.

But what we have now is an overemphasis on cost-cutting, and like it or not,
IPv6 implementation is seen as a 'frill' which will not reduce OPEX.  I
really fear we have lost the edge here in the West due to too much
emphasis on the cost side of the equation.  Ironically, this has been driven
by the current network, where financial information is available instantly
for decision making, whereas in the past financial information about a
far-flung operation took up to a year to arrive, so if a division was
profitable it was 'left alone'.  Now, with the instant availability, we are
seeing profitable divisions of companies shut down because the numerical
analysis shows the capital could be used to generate a higher return
elsewhere.

Innovation is expensive and it does not return an immediate benefit, and
right now all the average corporation cares about is the next quarter's
figures, not whether the company will be profitable in 5 years.   We are
seeing many instances of companies eating their seed corn instead of
investing in the future.

IPv6 would have been adopted much sooner if the protocol had been written
as an extension of IPv4; in that case it could have slid in under the
accounting department's radar, since new equipment and applications would
not be needed.





Scott C. McGrath

On Thu, 30 Jun 2005, Fred Baker wrote:


 On Jun 30, 2005, at 5:37 PM, Todd Underwood wrote:
  where is the service that is available only on IPv6? i can't seem to
  find it.

 You might ask yourself whether the Kame Turtle is dancing at
 http://www.kame.net/. This is a service that is *different* (returns a
 different web page) depending on whether you access it using IPv6 or
 IPv4. You might also look at IP mobility, and the routing being done
 for the US Army's WIN-T program. Link-local addresses and some of the
 improved flexibility of the IPv6 stack has figured in there.

 There are a number of IPv6-only or IPv6-dominant networks, mostly in
 Asia-Pac. NTT Communications runs one as a trial customer network, with
 a variety of services running over it. The various constituent networks
 of the CNGI are IPv6-only. There are others.

 Maybe you're saying that all of the applications you can think of run
 over IPv4 networks a well as IPv6, and if so you would be correct. As
 someone else said earlier in the thread, the reason to use IPv6 has to
 do with addresses, not the various issues brought up in the marketing
 hype. The reason the CNGI went all-IPv6 is pretty simple: on the North
 American continent, there are ~350M people, and Arin serves them with
 75 /8s. In the Chinese *University*System*, there are ~320M people, and
 the Chinese figured they could be really thrifty and serve them using
 only 72 /8s. I know that this is absolutely surprising, but APNIC
 didn't give CERNET 72 /8s several years ago when they asked. I really
 can't imagine why. The fact that doing so would run the IPv4 address
 space instantly into the ground wouldn't be a factor would it? So CNGI
 went where they could predictably get the addresses they would need.

 Oh, by the way. Not everyone in China is in the Universities. They also
 have business there, or so they tell me...

 The point made in the article that Fergie forwarded was that Asia and
 Europe are moving to IPv6, whether you agree that they need to or not,
 and sooner or later we will have to run it in order to talk with them.
 They are business partners, and we *will* have to talk with them. We,
 the US, have made a few my-way-or-the-highway stands in the past, such
 as who makes cell phones and such. When the rest of the world went a
 different way, we wound up be net consumers of their products.
 Innovation transfered to them, and market share.

 The good senator is worried that head-in-the-sand attitudes like the
 one above will similarly relegate us to the back seat in a few years in
 the Internet.

 Call him Chicken Little if you like. But remember: even Chicken
 Little is occasionally right.



Re: OMB: IPv6 by June 2008

2005-07-06 Thread Scott McGrath


You do make some good points, as IPv6 does not address routing scalability
or multi-homing, which would indeed make a contribution to lower OPEX and
be easier to 'sell' to the financial people.

As I read the spec, it makes multi-homing more difficult, since you are
expected to receive space only from your SP; there will be no 'portable
assignments' as we know them today.  If my reading of the spec is
incorrect, someone please point me in the right direction.

IPv6's hex-based nature is really a joy to work with; IPv6 definitely fails
the human factors part of the equation.

Scott C. McGrath

On Wed, 6 Jul 2005, David Conrad wrote:

 On Jul 6, 2005, at 7:57 AM, Scott McGrath wrote:
  IPv6 would have been adopted much sooner if the protocol had been
  written
  as an extension of IPv4 and in this case it could have slid in
  under the
  accounting departments radar since new equipment and applications
  would
  not be needed.

 IPv6 would have been adopted much sooner if it had solved a problem
 that caused significant numbers of end users or large scale ISPs real
 pain.  If IPv6 had actually addressed one or more of routing
 scalability, multi-homing, or transparent renumbering all the hand
 wringing about how the Asians and Europeans are going to overtake the
 US would not occur.  Instead, IPv6 dealt with a problem that, for the
 most part, does not immediately affect the US market but which
 (arguably) does affect the other regions.  I guess you can, if you
 like, blame it on the accountants...

 Rgds,
 -drc



Re: 3rd Party Cisco CWDM GBICs?

2005-02-15 Thread Scott McGrath
Look into Finisar.
I believe Finisar is the OEM for the Cisco CWDM GBICs, as they look
identical (with the obvious exception of the label).

They have 16 lambdas available.
At 05:33 PM 2/14/2005, Arnold Nipper wrote:
On 14.02.2005 20:52 Aaron Thomas wrote
Hi List,
Cisco currently provides 8 lambdas for CWDM and we have a 10 lambda
mux/de-mux system we want to make use of over a single fibre (5 data
channels).  The 1430 and 1450nm lambdas are dark and I was wondering if
there are any 3rd party vendors out there that have produced Cisco
compatible GBICs for these wavelengths.  I have looked around and seen
Finisar does make Cisco GBICs, but not in the 1430/1450 lambdas.
Have a look at Optoway 
(http://www.optoway.com.tw/html/products/CWDM_GE.htm) I did not yet test 
their CWDM GBICs but I'm about to use their BiDI GBICs which come with 
great distance granularity and excellent price.

Arnold
--
Arnold Nipper / nIPper consulting, Sandhausen, Germany
email: [EMAIL PROTECTED]
phone/mobile: +49 172 2650958
fax: +49 6224 9259 333



Re: High Density Multimode Runs BCP?

2005-01-26 Thread Scott McGrath


Look into MPO cabling.

MPO uses fiber ribbon cables, the most common of which is 6x2:
six strands by two layers.

Panduit has several solutions which use cartridges, so you get a
cartridge with your desired termination type and run the MPO cable between
the cartridges.

This cabling, under another name, is also used for IBM mainframe channel
connections.

Scott C. McGrath

On Tue, 25 Jan 2005, Deepak Jain wrote:



 I have a situation where I want to run Nx24 pairs of GE across a
 datacenter to several different customers. Runs are about 200meters max.

 When running say 24-pairs of multi-mode across a datacenter, I have
 considered a few solutions, but am not sure what is common/best practice.

 a) Find/adapt a 24/48 thread inside-plant cable (either multimode, or
 condition single mode) and connectorize the ends. Adv: Clean, Single,
 high density cable runs, Dis: Not sure if such a beast exists in
 multimode, and the whole cable has to be replaced/made redundant if one
 fiber dies and you need a critical restore, may need a break out shelf.

 b) Run 24 duplex MM cables of the proper lengths. Adv: Easy to trace,
 color code, understand. Easy to replace/repair one cable should
 something untoward occur. Can buy/stock pre-terminated cables of the
 proper length for easy restore. Dis: Lots of cables, more riser space.

 c) ??

 

 So... is there an option C? Does a multimode beastie like A exist
 commonly? Is it generally more cost effective to terminate your own MM
 cables or buy them pre-terminated?

 Assume that each of these pairs is going to be used for something like
 1000B-SX full duplex, and that these are all aggregated trunk links so
 you can't take a single pair of 1000B-SX and break it out to 24xGE at
 the end points with a switch.

 I priced up one of these runs at 100m, and I was seeing a list price in
 the ballpark of $2500-$3000 plenum. So I figured it was worth asking if
 there is a better way when we're talking about N times that number. :)

 Thanks in advance, I'm sure I just haven't had enough caffeine today.

 DJ




RE: High Density Multimode Runs BCP?

2005-01-26 Thread Scott McGrath


Hi, Thor

We used it to create zone distribution points throughout our datacenters,
which ran back to a central distribution point.   This solution has been
in place for almost 4 years.   We have 10Gb SM ethernet links traversing
the datacenter which link to the campus distribution center.

The only downsides we have experienced are

1 - Lead time in getting the component parts

2 - easily damaged by careless contractors

3 - somewhat higher than normal back reflection
on poor terminations

Scott C. McGrath

On Wed, 26 Jan 2005, Hannigan, Martin wrote:



  -Original Message-
  From: Thor Lancelot Simon [mailto:[EMAIL PROTECTED]
  Sent: Wednesday, January 26, 2005 3:17 PM
  To: Hannigan, Martin; nanog@merit.edu
  Subject: Re: High Density Multimode Runs BCP?
 
 
  On Wed, Jan 26, 2005 at 02:49:29PM -0500, Hannigan, Martin wrote:

 When running say 24-pairs of multi-mode across a
  datacenter, I have
 considered a few solutions, but am not sure what is
common/best practice.
   
I assume multiplexing up to 10Gb (possibly two links
  thereof) and then
back down is cost-prohibitive?  That's probably the
  best practice.
  
   I think he's talking physical plant. 200m should be fine. Consult
   your equipment for power levels and support distance.
 
  Sure -- but given the cost of the new physical plant installation he's
  talking about, the fact that he seems to know the present maximum data
  rate for each physical link, and so forth, I think it does
  make sense to
  ask the question is the right solution to simply be more economical
  with physical plant by multiplexing to a higher data rate?
 
  I've never used fibre ribbon, as advocated by someone else in
  this thread,
  and that does sound like a very clever space- and possibly cost-saving
  solution to the puzzle.  But even so, spending tens of thousands of
  dollars to carry 24 discrete physical links hundreds of
  meters across a

 Tens of thousands? 24 strand x 100' @ $5 = $500. Fusion splice
 is $25 per splice per strand including termination. The 100m
 patch cords are $100.00. It's cheaper to bundle and splice.

 How much does the mux cost?


  datacenter, each at what is, these days, not a particularly high data
  rate, may not be the best choice.  There may well be some
  question about
  at which layer it makes sense to aggregate the links -- but to me, the
  question is it really the best choice of design constraints to take
  aggregation/multiplexing off the table is a very substantial one here
  and not profitably avoided.

 Fiber ribbon doesn't fit in any long distance (+7') distribution
 system, rich or poor, that I'm aware of. Racks, cabinets, et. al.
 are not very conducive to it. The only application I've seen was
 IBM fiber channel.

 Datacenters are sometimes permanent facilities and it's better,
 IMHO, to make things more permanent with cross connect than
 aggregation. It enables you to make your cabinet cabling and
 your termination area cabling almost permanent and maintenance
 free - as well as giving you test,add, move, and drop. It's more
 cable, but less equipment to maintain, support, and reduces
 failure points. It enhances security as well. You can't open
 the cabinet and just jack something in. You have to provision
 behind the locked term area.

 I'd love to hear about a positive experience using ribbon cable
 inside a datacenter.


 
  Thor
 



RE: High Density Multimode Runs BCP?

2005-01-26 Thread Scott McGrath


Hi, Martin

Yes indeed, the ribbon cable.  Tho' due to the damage factor I probably
would not specify it again unless I could use innerduct to protect it, as
we had some machine room renovations done and the construction workers
managed to kink the underfloor runs as well as setting off the Halon
system several times...


The ribbon cables work well if they are adequately protected.  If the
people in the machine room environment are skilled at handling fiber
there should be no problems.   If however J. Random Laborer has access I
would go with conventional armored runs.


Scott C. McGrath

On Wed, 26 Jan 2005, Hannigan, Martin wrote:


 The ribbon cable?




 --
 Martin Hannigan (c) 617-388-2663
 VeriSign, Inc.  (w) 703-948-7018
 Network Engineer IV   Operations  Infrastructure
 [EMAIL PROTECTED]



  -Original Message-
  From: Scott McGrath [mailto:[EMAIL PROTECTED]
  Sent: Wednesday, January 26, 2005 6:44 PM
  To: Hannigan, Martin
  Cc: Thor Lancelot Simon; nanog@merit.edu
  Subject: RE: High Density Multimode Runs BCP?
 
 
 
  Hi, Thor
 
  We used it to create zone distribution points throughout our
  datacenter's
  which ran back to a central distribution point.   This
  solution has been
  in place for almost 4 years.   We have 10Gb SM ethernet links
  traversing
  the datacenter which link to the campus distribution center.
 
  The only downsides we have experienced are
 
  1 - Lead time in getting the component parts
 
  2 - easiliy damaged by careless contractors
 
  3 - somewhat higher than normal back reflection
  on poor terminations
 
  Scott C. McGrath
 
  On Wed, 26 Jan 2005, Hannigan, Martin wrote:
 
  
  
-Original Message-
From: Thor Lancelot Simon [mailto:[EMAIL PROTECTED]
Sent: Wednesday, January 26, 2005 3:17 PM
To: Hannigan, Martin; nanog@merit.edu
Subject: Re: High Density Multimode Runs BCP?
   
   
On Wed, Jan 26, 2005 at 02:49:29PM -0500, Hannigan, Martin wrote:
  
   When running say 24-pairs of multi-mode across a
datacenter, I have
   considered a few solutions, but am not sure what is
  common/best practice.
 
  I assume multiplexing up to 10Gb (possibly two links
thereof) and then
  back down is cost-prohibitive?  That's probably the
best practice.

 I think he's talking physical plant. 200m should be
  fine. Consult
 your equipment for power levels and support distance.
   
Sure -- but given the cost of the new physical plant
  installation he's
talking about, the fact that he seems to know the present
  maximum data
rate for each physical link, and so forth, I think it does
make sense to
ask the question is the right solution to simply be more
  economical
with physical plant by multiplexing to a higher data rate?
   
I've never used fibre ribbon, as advocated by someone else in
this thread,
and that does sound like a very clever space- and
  possibly cost-saving
solution to the puzzle.  But even so, spending tens of
  thousands of
dollars to carry 24 discrete physical links hundreds of
meters across a
  
   Tens of thousands? 24 strand x 100' @ $5 = $500. Fusion splice
   is $25 per splice per strand including termination. The 100m
   patch chords are $100.00. It's cheaper to bundle and splice.
  
   How much does the mux cost?
  
  
datacenter, each at what is, these days, not a
  particularly high data
rate, may not be the best choice.  There may well be some
question about
at which layer it makes sense to aggregate the links --
  but to me, the
question is it really the best choice of design
  constraints to take
aggregation/multiplexing off the table is a very
  substantial one here
and not profitably avoided.
  
   Fiber ribbon doesn't fit in any long distance (+7') distribution
   system, rich or poor, that I'm aware of. Racks, cabinets, et. al.
   are not very conducive to it. The only application I've seen was
   IBM fiber channel.
  
   Datacenters are sometimes permanent facilities and it's better,
   IMHO, to make things more permanent with cross connect than
   aggregation. It enables you to make your cabinet cabling and
   your termination area cabling almost permanent and maintenance
   free - as well as giving you test,add, move, and drop. It's more
   cable, but less equipment to maintain, support, and reduces
   failure points. It enhances security as well. You can't open
   the cabinet and just jack something in. You have to provision
   behind the locked term area.
  
   I'd love to hear about a positive experience using ribbon cable
   inside a datacenter.
  
  
   
Thor
   
  
 



Re: Setting up DS-3 and 2 4xT1

2004-12-02 Thread Scott McGrath


7206VXR with appropriate PAM's

Scott C. McGrath

On Thu, 2 Dec 2004, Joshua Brady wrote:


 My apologies if some may find this a little off-topic.

 However, here is my issue. I need a router, which can take 2 4xT1's
 and a DS-3, while handing a Gbit for internal use. Now to complicate
 the entire situation, this needs to go into a 3 bedroom apartment, so
 I need to keep the power bills down if I can :)

 What would everyone recommend? Off-List replies are fine, I will
 summarize at the end.

 Thanks,
 Joshua Brady



Name resolution in the .MIL domain

2004-11-19 Thread Scott McGrath


Several of our researchers have pointed out that sites in the .MIL TLD are
unreachable.   Did an nslookup and got an interesting result:

> server ns.mit.edu
Default Server:  NOC-CUBE.mit.edu
Address:  18.18.2.25
Aliases:  ns.mit.edu

> www.army.mil
Server:  NOC-CUBE.mit.edu
Address:  18.18.2.25
Aliases:  ns.mit.edu

*** NOC-CUBE.mit.edu can't find www.army.mil: No response from server
> www.navy.mil
Server:  NOC-CUBE.mit.edu
Address:  18.18.2.25
Aliases:  ns.mit.edu

*** NOC-CUBE.mit.edu can't find www.navy.mil: No response from server

I know it's MIT's nameserver; I just wanted to be sure the problem was not on
our end.

Send Harvard/MIT jokes to me offlist

Back to the subject at hand: is anyone else seeing the same issue with the
.MIL domain?
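
A quick way for anyone to compare notes - a small standard-library loop that
tries the same names shown above against whatever resolver your host is
configured to use:

import socket

# The .MIL names from the nslookup session above.
for host in ("www.army.mil", "www.navy.mil"):
    try:
        print(host, "->", socket.gethostbyname(host))
    except socket.gaierror as err:
        print(host, "-> lookup failed:", err)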

Thanks in advance - Scott


Re: 10GE access switch router

2004-09-30 Thread Scott McGrath


Extreme makes such a device, but it is not truly wirespeed; i.e. it goes
wirespeed on ports associated with a particular ASIC, but the ASIC-to-ASIC
links apparently cannot forward from a full ASIC to another full ASIC without
dropping frames.  That may be an academic concern, though, and is unlikely to
matter in most network environments, as most real-world environments do not
generate sustained flows of hundreds of Gbytes as research
environments do.


But YMMV

Scott C. McGrath

On Tue, 28 Sep 2004, Frederic NGUYEN wrote:


 Is anyone of you aware of the existence of an access switch router (L2/L3)
 with GE interfaces and 10GE uplinks? I'm looking for
 something small (around 1U to 2U). Is there any vendor selling such a product?

 Thanks,
 Fred




RE: Cisco moves even more to china.

2004-09-24 Thread Scott McGrath


Too late.

CDL drivers are already outsourced; a couple of years ago we agreed to
allow Mexican trucking firms access to the entire CONUS.  Before that they
were limited to 100 miles from the border.

Become a mechanic or plumber instead...

Scott C. McGrath

On Thu, 23 Sep 2004, Dan Mahoney, System Admin wrote:


 On Thu, 23 Sep 2004, Jason Graun wrote:

  I think the IT field as a whole, programmers, network guys, etc... are going
  to go the way of the auto workers in the 70's and 80's.  I am a CCIE working
  and on a second one and it saddens me that all my hard work and advanced
  knowledge could be replaced by a chop-shop guy because from a business
  standpoint quarter to quarter the chop-shop guy is cheaper on the books.
  Never mind the fact that I solve problems on the network in under 30mins and
  save the company from downtime but I am too expensive.  I used to love
  technology and all it had to offer but now I feel cheated, I feel like we
  all have been burned by the way the business guys look at the technology, as
  a commodity.  Thankfully I am still young (mid 20's) I can make a career
  switch but I'll still love the technology.  Anyway I am going to start the
  paper work to be an H1b to China and brush up on my Mandarin.

 I've felt this way about things at times.  It's why I'm getting my CDL.  I
 highly doubt they can find a way to outsource *that* to some third-world
 country.

 -Dan



 
  Jason
 
  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Erik
  Haagsman
  Sent: Thursday, September 23, 2004 7:55 PM
  To: Dan Mahoney, System Admin
  Cc: Nicole; [EMAIL PROTECTED]
  Subject: Re: Cisco moves even more to china.
 
 
  On Fri, 2004-09-24 at 02:29, Dan Mahoney, System Admin wrote:
  I've always personally taken anyone who said but I'm an MCSE with a
  grain of salt.  I've had equal respect for the A-plus and Net-Plus
  certifications, which are basically bought.
 
  I take most certifications with a grain of salt, including degrees,
   unless someone clearly demonstrates he knows what he's talking about,
  is able to make intelligent decisions and learns new techniques quickly.
  In which case a certification is still just an add-on ;-)
 
  I used to have more trust in the /CC../ certifications but I find I may be
 
  laughing those off too quite soon.
 
  The vendor's introductory certs (CCNA, CCNP, JNCIA, JNCIS) don't say
  anything about a candidate, except exactly that (I got the cert). CCIE
  and JNCIE are still at least an indicator someone was at a certain level
  at the time of getting the certification, but are still no substitute
  for experience and a brain in good working order. It's too bad there
  aren't better general (non-vendor specific) certs, since what often
  lacks is general understanding of network architecture and protocols.
  You can teach anyone the right commands for Vendor X and they'll prolly
  get a basic config going on a few nodes, but when troubleshooting time
  comes it's useless without good knowledge of the underlying technology,
  which none of the vendor certs teach very well (IMHO anyway ;-)
 
  Cheers,
 
  Erik
 
 
 
  --
  ---
  Erik Haagsman
  Network Architect
  We Dare BV
  tel: +31.10.7507008
  fax: +31.10.7507005
  http://www.we-dare.nl
 
 
 
 

 --

 Don't be so depressed dear.

 I have no endorphins, what am I supposed to do?

 -DM and SK, February 10th, 1999

 Dan Mahoney
 Techie,  Sysadmin,  WebGeek
 Gushi on efnet/undernet IRC
 ICQ: 13735144   AIM: LarpGM
 Site:  http://www.gushi.org
 ---



Re: Cisco moves even more to china.

2004-09-24 Thread Scott McGrath


The current wave of outsourcing is driven by greed and greed alone.
What's going on now would make Gordon Gekko blush.   There is nothing
stopping the companies from paying the workers in India or China the
prevailing wage in the developed countries, which would really accelerate
growth in these countries and would have the side effect of making the
playing field level, as in: let the best engineer win rather than the
cheapest.

Right now outsourcers are moving jobs from India to Bangladesh and Africa
because wages and the standard of living in India are rising, so the
Indians are seeing what we see here in the US.

What is often forgotten is that innovation in an industry comes from its
practitioners, not a collective of marketing types and systems
architects.   So by outsourcing we are sending the wellspring of
innovation and the attendant wealth creation elsewhere.

Scott C. McGrath

On Fri, 24 Sep 2004, Robin Lynn Frank wrote:

 On Fri, 24 Sep 2004 14:49:54 +0100 (IST)
 Paul Jakma [EMAIL PROTECTED] wrote:

   Modern capitalism does create a race to the bottom effect for labor
   which seems to have no end.
 
  This race exists because of imbalances in prosperity in world.


 This race exists because governments beholden to corporate interests,
 permit it to exist.  A US company that sacrifices the welfare of US
 workers for the sake of its bottom line, or to curry favor with a
 foreign government, is not one I care to do business with.

 I usually lurk, not post.  I just needed to say this.


 --
 Robin Lynn Frank
 Director of Operations
 Paradigm-Omega, LLC
 http://www.paradigm-omega.com
 ==
 Sed quis custodiet ipsos custodes?



Re: Multi-link Frame Relay OR Load Balancing

2004-09-16 Thread Scott McGrath


In my experience the break-even point for a Frame Relay DS3 is 6 DS1
circuits.  DS3s tend to be more reliable than DS1s, as the ILEC usually
installs a mux at your site instead of running to the nearest channel bank
and carrying the T1s over copper with a few repeaters thrown in for
good measure.

Another nice thing about DS3s is that it is easy to scale bandwidth in
the future by modifying the CIR on your link.  Since the link is faster,
the serialization delay is lower, which gives you better latency.  Last
but not least, the PA3+ port adapters for Cisco 7[2|5]xx routers are
inexpensive and give you one call for service, not separate calls for
the CSU/DSUs and the serial line card you would need to support a
multilink solution.
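
For illustration, shaping to the CIR on a DS3 ends up looking something
like the sketch below (the interface, DLCI and 12 Mbps CIR are made-up
values for illustration, not anyone's real config):

  interface Serial1/0
   no ip address
   encapsulation frame-relay
   frame-relay traffic-shaping
  !
  interface Serial1/0.16 point-to-point
   ip address 192.0.2.1 255.255.255.252
   frame-relay interface-dlci 16
    class FR-SHAPE
  !
  map-class frame-relay FR-SHAPE
   frame-relay cir 12000000

Bumping the CIR later is a one-line change to the map-class rather than
another circuit order.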


Scott C. McGrath

On Thu, 16 Sep 2004, Bryce Enevoldson wrote:


 We are in the process of updating our internet connection to 8 t1's bound
 together.  Due to price, our options have been narrowed to ATT and MCI.
 I have two questions:
 1.  Which technology is better for binding t1's:  multi link frame relay
 (mci's) or load balancing (att's)
 2.  Which company has a better pop in Atlanta: mci or att?

 We are in the Chattanooga TN area and our current connection is 6 t1's
 through att but they will only bond 4 so they are split 4 and 2.

 Bryce Enevoldson
 Information Processing
 Southern Adventist University





Odd behavior from p4-0-0.MAR1.Austin-TX.us.xo.net

2004-09-10 Thread Scott McGrath


We are originating traffic from AS11 and we are seeing an apparent loop
downstream from the router listed in the header when attempting to connect
to rsync1.spamhaus.org.

Is this problem unique to us, or are others seeing the same behavior?
Scott C. McGrath


Re: optics pricing (Re: Weird GigE Media Converter Behavior)

2004-09-02 Thread Scott McGrath


Ordered them when they first became available; the order is still on New
Product Hold.

BTW they use standard infiniband cables

Scott C. McGrath

On Thu, 2 Sep 2004, Thomas Kernen wrote:


 
   On the other hand, it'd be nice to see a copper 10GBIC, even if its max
   cable length were a few metres. ;-)
 
  There is one. It's called CX4 and has a reach of 15 meters. Cisco sold it
  for $600 list price at first but it has now disappeared from the price
  list. I don't know why.
 
 
 http://www.cisco.com/en/US/products/hw/modules/ps4835/products_data_sheet09186a008007cd00.html
 

 According to my info sources (I tried to purchase some a couple of weeks
 ago) they have not yet been released (= delayed) and that's why they have
 been removed from the GPL... should be back within a month (hopefully).

 I'm trying to get my sticky fingers on a few for testing in our lab... the
 other problem is finding people that actually stock the CX4 patch cables.

 Thomas



Re: sms messaging without a net?

2004-08-05 Thread Scott McGrath


Use TAP (Telocator Alphanumeric Protocol): your monitoring application dials a
modem pool, logs on, and sends a text message to the subscriber.

Verizon, Cingular, and Nextel all offer this service, as do Skytel and most
of the paging vendors.

Scott C. McGrath

On Tue, 3 Aug 2004, Adrian Chadd wrote:


 On Tue, Aug 03, 2004, Dan Hollis wrote:
 
  Does anyone know of a way to send SMS messages without an internet
  connection?
 
  Having a network monitoring system send sms pages via email very quickly
  runs into chicken-egg scenario. How do you email a page to let the admins
  know their net has gone down. :-P

 GNOKII and a suitable nokia phone.

 http://www.gnokii.org/




 Adrian


 --
 Adrian Chadd  I'm only a fanboy if
 [EMAIL PROTECTED]   I emailed Wesley Crusher.





Re: Surge Protection

2004-07-22 Thread Scott McGrath


Polyphaser does make excellent surge suppression gear; they make it for all
communications services, i.e. broadcast radio, television, cell sites,
gov't/military.

Being a ham I use their gear myself.  It's expensive, but cheaper than a new
rig, especially since the rig is connected to a structure designed to attract
electromagnetic fields.

Scott C. McGrath

On Thu, 22 Jul 2004, Mike Lewinski wrote:


 Daniel Senie wrote:

  The cost of installing a surge protector is unlikely to impact your
  bottom line. One successful lightning strike on the other hand will hurt
  quite a bit, and probably happen at 4AM just to be more annoying.

 Yes... we had a strike hit a remote mountain POP via the T1. From the
 router it managed to propogate onto the switch and from the switch onto
 the connected hosts and caused a catastrophic failure. Fortunately the
 hosts mainly lost their NICs.

 We have since purchased some polyphaser surge protectors. Can't remember
 if this was the vendor or not:

 http://www.comm-omni.com/polyweb/t1.htm

 Google has +400 matches on the exact phrase T1 surge protector



RE: concern over public peering points [WAS: Peering point speed publicly available?]

2004-07-09 Thread Scott McGrath


A minitel - in the United States!

Scott C. McGrath

On Thu, 8 Jul 2004, Ian Dickinson wrote:


 Which almost begs the question - what's the oddest WTF?? anybody's willing to
 admit finding under a raised floor, or up in a ceiling or cable chase or
 similar location? (Feel free to change names to protect the guilty if need
 be:)
 
 Water -- about 8 of it...

 Air -- about 8 feet of it...
 In a comms room in a tunnel under London.
 Luckily for those working there, there was a ladder stored there too.
 The term 'raised floor' was never so apt.
 --
 Ian Dickinson
 Development Engineer
 PIPEX
 [EMAIL PROTECTED]
 http://www.pipex.net



RE: Strange behavior of Catalyst4006

2004-06-29 Thread Scott McGrath


Joe,

If you are using nat 0 you need to have a static translation enabled.
Otherwise, when the machine first comes up it ARPs, which creates an xlate
entry on the PIX that times out when the inactivity timer runs out.

This causes behavior similar to what you are experiencing.
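
A minimal sketch of what I mean, using a made-up inside host of
192.168.5.10 (an identity static, so the entry never ages out):

  static (inside,outside) 192.168.5.10 192.168.5.10 netmask 255.255.255.255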




Scott C. McGrath

On Mon, 28 Jun 2004, Greg Schwimer wrote:



  Some things you can look into:

  firewall interface(10.10.1.122/30).
  ip route 192.168.5.0 255.255.255.0 10.10.1.124

 Is that the firewall interface is 10.10.1.122, or is it 10.10.1.124?
 10.10.1.122 is a host address in the 10.10.1.120/30 subnet.
 10.10.1.124 is a /30 network.  Either way, you're dealing with two
 different subnets.  Oddly, it's working sometimes.


  At the very begining all system works fine. After sometime  they said they could 
  not  acces their email/web/dns
  server from host outside their company's network... We restart ( shut; noshut) the 
  fastethernet interface on Catalyst4006,
  and then servers' network access recovered.
 

 Sounds suspiciously like an IP conflict or some MAC weirdness with the
 firewall's or 4006's IP.  Is the connection between the 4006 and the
 customer's firewall a basic crossover, or does the customer have a
 hub/switch on their side?  Assuming the subnetting statement I've made
 above is based on erroneous info, check your arp cache/mac table when
 it *is* working.  Write down the MAC for the customer's firewall.  When
 it stops working, check the arp cache/mac table again.  Compare the
 MACs to be sure they're the same.  Just for giggles, clear the arp
 cache and see if that fixes it.  If that doesn't, clear the entry from
 the cam table.

 Good luck...

 Greg Schwimer



Re: Attn MCI/UUNet - Massive abuse from your network

2004-06-25 Thread Scott McGrath


Well said sir!

Scott C. McGrath

On Fri, 25 Jun 2004 [EMAIL PROTECTED] wrote:


  From the AOL theft article:
   The revelations come as AOL and other Internet providers have
  ramped up their efforts to track down the purveyors of spam, which
  has grown into a maddening scourge that costs consumers and
  businesses billions of dollars a year.

 Interesting. An insider at a network operator steals
 a copy of some interesting operational data and sells
 it to a 3rd party with an interest in doing nasty things
 with said data.

 And if Homeland Security really does require all outages
 to be reported to a clearing house where only network
 operations insiders can get access to it, then what?
 Will someone sell this to a terrorist organization?

 Better to leave all this information semi-public as
 it is now so that we all know it is NOT acceptable
 to build insecure infrastructure or to leave infrastructure
 in an insecure state. Fear of a terrorist attack is
 a much stronger motive for doing the right thing
 than a government order to file secret reports to
 a secret bureaucratic agency.

 --Michael Dillon



RE: Homeland Security now wants to restrict outage notifications

2004-06-24 Thread Scott McGrath


I did read the article, and having worked for gov't agencies twice in my
career, a proposal like the one floated by DHS is just the camel's nose.

I should hope the carriers oppose this.

Now a call comes into our ops center: "I can't reach my experiment at
Stanford."  Ops looks up the outages: "Oh yeah, there's a fiber cut affecting
service, we will let you know when it's fixed."  They check, it's fixed, and
they call the customer telling them to try it now.

Under the proposed regime: "We know it's dead, but we don't know why or when it
will be fixed, because it's classified information."  This makes for
absolutely wonderful customer service, and it protects public safety how?



Scott C. McGrath

On Thu, 24 Jun 2004, Tad Grosvenor wrote:

 Did you read the article?  The DHS is urging that the FCC drop the proposal
 to require outage reporting for significant outages.   This isn't the DHS
 saying that outage notifications should be muted.  The article also
 mentions: Telecom companies are generally against the proposed new
 reporting requirements, arguing that the industry's voluntary efforts are
 sufficient.

 -Tad



 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
 Scott McGrath
 Sent: Thursday, June 24, 2004 12:58 PM
 To: [EMAIL PROTECTED]
 Subject: Homeland Security now wants to restrict outage notifications



 See

 http://www.theregister.co.uk/2004/06/24/network_outages/

 for the gory details.  The Sean Gorman debacle was just the beginning
 this country is becoming more like the Soviet Union under Stalin every
 passing day in its xenophobic paranoia all we need now is a new version of
 the NKVD to enforce the homeland security directives.

 Scott C. McGrath




RE: Homeland Security now wants to restrict outage notifications

2004-06-24 Thread Scott McGrath


I also believe that critical infrastructure needs to be protected and I am
charged with protecting a good chunk of it.   Also as a Ham operator I
work in concert with the various emergency management organizations in
dealing with possible worst case scenarios.

No, not everyone who asks about some piece of infrastructure under my
control gets an answer, but for now we can still choose who receives an
answer, without having to contact a gov't agency and ask whether I can
respond to a query from Joe Shmoe.

Unfortunately information = power and control of information is power^2, and
many people in the permanent bureaucracy are there only in pursuit of
power over others; 9/11 was a wonderful excuse to extend their scope
of control over people's everyday lives.

Right now in Boston cameras are illegal in the subway for 'security
reasons'.  Who hasn't had a picture taken with their friends on the way
to or from a gathering on the subway?

Back when I was younger the only places with restrictions like that were
the countries behind the Iron Curtain.  In the 50's my family helped resettle
refugees from Hungary in the aftermath of the failed Hungarian Revolution;
freedom is a valuable thing, and unfortunately we are losing it bit by bit.


Scott C. McGrath

On Thu, 24 Jun 2004, Harris, Michael C. wrote:

   Scott McGrath said:
   See

   http://www.theregister.co.uk/2004/06/24/network_outages/

   for the gory details.  The Sean Gorman debacle was just the
 beginning this country
   is becoming more like the Soviet Union under Stalin every
 passing day in its xenophobic
   paranoia all we need now is a new version of the NKVD to enforce
 the homeland security directives.

 Scott C. McGrath
 --

 Ask and you shall receive! Fresh from the DHS website yesterday morning.

 (quoting the end of the 4th paragraph below)

 In addition, HSIN-CI network, in partnership with the FBI, provides a
 reporting feature that allows the public to submit information about
 suspicious activities through the FBI Tips Program that is then shared
 with the Department's HSOC.

 Just call the party hotline and report your neighbors, coworkers and
 friends...

 Don't get me wrong, I am a supporter of protecting critical
 infrastructure. There are already programs, Infragard is an example,
 that perform the same kind of information sharing by choice rather than
 decree.  Infragard is supported by public private and sectors both, with
 similar support from the FBI.

 (yes, I am an Infragard member just to be 100% above board)
 Mike Harris
 Umh.edu

 --
 http://www.dhs.gov/dhspublic/display?content=3748

 Homeland Security Launches Critical Infrastructure Pilot Program to
 Bolster Private Sector Security
 - Dallas First of Four Pilot Communities Sharing Targeted Threat
 Information

 For Immediate Release
 Office of the Press Secretary
 Contact: 202-282-8010
 June 23, 2004

 Homeland Security Information Network - Critical Infrastructure

 The U.S. Department of Homeland Security in partnership with local
 private sector and the Federal Bureau of Investigation, today launched
 the first Homeland Security Information Network-Critical Infrastructure
 (HSIN-CI) Pilot Program in Dallas, Texas with locally operated pilot
 programs in Seattle, Indianapolis and Atlanta to follow.  The pilot
 program will operate throughout the course of this year to determine the
 feasibility of using this model for other cities across the country.

 The HSIN-CI pilot program, modeled after the FBI Dallas Emergency
 Response Network expands the reach of the Department's Homeland Security
 Information Network (HSIN) initiative--a counterterrorism communications
 tool that connects 50 states, five territories, Washington, D.C., and 50
 major urban areas to strengthen the exchange of threat information--to
 critical infrastructure owners and operators in a variety of industries
 and locations, first responders and local officials.  As part of the
 HSIN-CI pilot program, more than 25,000 members of the network will have
 access to unclassified sector specific information and alert
 notifications on a 24/7 basis.

 The Homeland is more secure when each hometown is more secure, said
 Secretary of Homeland Security Tom Ridge. HSIN-CI connects our
 communities - the government community to the private sector community
 to the law enforcement community -- the better we share information
 between our partners, the more quickly we are able to implement security
 measures where necessary.

 The HSIN-CI network allows local and regional areas to receive targeted
 alerts and notifications in real-time from Department's Homeland
 Security Operations Center (HSOC) using standard communication devices
 including wired and wireless telephones, email, facsimile and text
 pagers.  The network requires no additional hardware or software

RE: Even you can be hacked

2004-06-11 Thread Scott McGrath


But wouldn't an interocitor with electron sorter option give you much more
reliable packet delivery...

Scott C. McGrath

On Fri, 11 Jun 2004, Fisher, Shawn wrote:


 Hmm, so your on earth?

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of
 Mike Walter
 Sent: Friday, June 11, 2004 5:03 PM
 To: nanog
 Subject: RE: Even you can be hacked



 Now you are just getting silly, we know Flux Capacitors don't work on
 earth.

 Mike Walter

 -Original Message-
 From: Matthew McGehrin [mailto:[EMAIL PROTECTED]
 Sent: Friday, June 11, 2004 5:00 PM
 To: nanog
 Subject: was: Even you can be hacked



 Coupled with a Flux Capacitor for the ultimate in message delivery :)

 - Original Message -
 From: Scott Stursa [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Friday, June 11, 2004 4:44 PM
 Subject: Re: Even you can be hacked


  Ah. A tunneling implementation.
  You'll need a cold fusion generator to power that.




Re: OT: Looking for Ethernt/Optical Device

2004-06-01 Thread Scott McGrath


Finisar also has CWDM optics in both the SFP and GBIC form factors, and they
are quite a bit less expensive than the Cisco solution; they also have a
16-lambda passive OADM as well as the 4 and 8 lambda models.

Scott C. McGrath

On Tue, 1 Jun 2004, Erik Haagsman wrote:


 What you could try is use the Cisco CWDM-MUX-4 and it's pluggable optics
 that can be fit into any GBIC 802.3z compliant slot. It's just an OADM
 with 4 or 8 wavelengths that delivers GigE to any box with pluggable
 GBICs provided you use the right optics and it's quite a bit cheaper
 than using ONS stuff. That said, CWDM doesn't get you much further than
 80 kilometres, above that DWDM is your only option, and a hell of a lot
 more expensive.

 Cheers,

 --
 ---
 Erik Haagsman
 Network Architect
 We Dare BV
 tel: +31(0)10 7507008
 fax:+31(0)10 7507005
 http://www.we-dare.nl


 On Tue, 2004-06-01 at 17:30, Michael Smith wrote:
 
  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA1
 
  Hello All:
 
  I'm wondering if anyone has seen a good and cheap(er) solution for
  providing multiple Gigabit Ethernet circuits over single pair of
  fiber.  I'm looking for a way to do CWDM or DWDM that's cheaper than
  putting in a Cisco 15454 or 15327.  I'm only going to be doing 2 GigE
  circuits between two switches, so I don't need to plan for future
  growth.
 
  If anyone knows of a magic box that will do the above I would love to
  hear about it.
 
  Thanks,
 
  Mike
 
  - --
  Michael K. SmithNoaNet
  206.219.7116 (work) 866.662.6380 (NOC)
  [EMAIL PROTECTED]  http://www.noanet.net
 
  -BEGIN PGP SIGNATURE-
  Version: PGP 8.0.3
 
  iQA/AwUBQLyiVJzgx7Y34AxGEQIDewCfR8JQG2jqbxsBopUE6u3FUnfiX3UAoODx
  41QL7T1eyK1EQ4ZMnVJU+l2p
  =hDVT
  -END PGP SIGNATURE-




Re: Type of Service (TOS)

2004-05-10 Thread Scott McGrath


The answer is it depends.  Routers _usually_ honor the TOS bits unless
they are configured to clear or rewrite them.  We use the TOS bits for
designating traffic classes, so in some cases we rewrite the TOS bits set
by the host; in your case we would modify the TOS bits.
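
As a rough illustration only (the class names and markings below are
placeholders, not our real policy), the rewrite looks something like this
in IOS MQC:

  class-map match-any HOST-MARKED
   match ip precedence 5
  !
  policy-map EDGE-REMARK
   class HOST-MARKED
    set ip precedence 0
  !
  interface FastEthernet0/0
   service-policy input EDGE-REMARK

That ingress policy knocks host-set precedence 5 back down to 0; the same
construct is used to set whatever markings your WAN is supposed to carry.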

Scott C. McGrath

On Mon, 10 May 2004, Vicky Rode wrote:


 Hi there,

 Say if I had a qos appliance installed on networks between a lan and a
 wan box would the qos policies be carried across wan end points (point
 to point connection)? In other words, will the router retain the TOS
 bits across to the other side of the wan connection to provide QoS-style
 priority for the packets or will it clear the TOS bits? BTW, the other
 side of the wan connection also has the qos appliance sitting between a
 lan and a wan box.

 Just so that I'm clear, I'm not talking about an upstream neighbor being
 an ISP connection  which I know they will likely ignore the TOS bits
 unless I pay them extra for the feature. The above scenario is a point
 to point connection to a remote site.


 Any insight will be appreciated.


 regards,
 /vicky



Re: Type of Service (TOS)

2004-05-10 Thread Scott McGrath


Cisco and Enterasys definitely pass the TOS bits by default.  You need to
talk to your engineering group to see whether it is your site's
policy to propagate TOS bits, to make sure the TOS bits set by your
appliance will arrive at their destination.
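
A quick way to check end to end is an extended ping from an IOS box with
the TOS field set, then confirm on the far side (sniffer or the appliance's
own counters) that the value survives.  Roughly, with a placeholder target
address:

  Router# ping
  Protocol [ip]:
  Target IP address: 192.0.2.10
  Repeat count [5]:
  Datagram size [100]:
  Timeout in seconds [2]:
  Extended commands [n]: y
  Source address or interface:
  Type of service [0]: 184
  ...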

Scott C. McGrath

On Mon, 10 May 2004, Vicky Rode wrote:

 Hi,

 Do you know by default if the routers pass the TOS bits?


 regards,
 /vicky


 Scott McGrath wrote:

 
  The answer is it depends.  routers _usually_ honor the TOS bits unless
  they are configured to clear or rewrite them.  We use the TOS bits for
  designating traffic classes so in some cases we rewrite the TOS bits set
  by the host so in your case we would modify the TOS bits.
 
  Scott C. McGrath
 
  On Mon, 10 May 2004, Vicky Rode wrote:
 
 
 Hi there,
 
 Say if I had a qos appliance installed on networks between a lan and a
 wan box would the qos policies be carried across wan end points (point
 to point connection)? In other words, will the router retain the TOS
 bits across to the other side of the wan connection to provide QoS-style
 priority for the packets or will it clear the TOS bits? BTW, the other
 side of the wan connection also has the qos appliance sitting between a
 lan and a wan box.
 
 Just so that I'm clear, I'm not talking about an upstream neighbor being
 an ISP connection  which I know they will likely ignore the TOS bits
 unless I pay them extra for the feature. The above scenario is a point
 to point connection to a remote site.
 
 
 Any insight will be appreciated.
 
 
 regards,
 /vicky
 
 
 



Re: Filtering network content based on User Subscription

2004-05-08 Thread Scott McGrath


Joe,

Your best bet in this case is to place an appropriately sized firewall at
the customer's site, e.g. a Cisco PIX 501 - 515 series or SonicWall's
equivalent, and link it to a WebSense or N2H2 content filtering server at
your NOC.

The short version of how this works is: the firewall sends the URL your
customer is requesting to the filter server, and the filter server tells
the firewall whether to grant or deny access to the URL.  Both products
can be configured to fail hard or soft, i.e. if the content server is down
the firewall will either block all URLs or grant all URLs.

Both products do what you want them to do right out of the box and can be
tuned easily by your staff or the customer.
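
The PIX side of it is only a couple of lines; something along these lines
(the filter-server address is a placeholder):

  url-server (outside) vendor websense host 192.0.2.5 timeout 30 protocol TCP version 4
  filter url http 0 0 0 0 allow

The trailing allow keyword is the fail-soft behavior mentioned above
(permit web traffic if the filter server is unreachable); leave it off to
fail hard.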


Scott C. McGrath



Re: The Uneducated Enduser (Re: Microsoft XP SP2 (was Re: Lazy network operators - NOT))

2004-04-20 Thread Scott McGrath


Operating systems bundled with a retail computer _should_ be reasonably
secure out of the box.

OS X can be placed on an unprotected internet connection in an unpatched
state, and its default configuration allows it to be patched to current
levels without being compromised.

On the other hand Win2k and XP will be compromised in under 5 minutes if
connected to the same unfiltered connection (the record here is 35 seconds
for time to compromise).

I am not saying that OS X is the paragon of all things good, but its
basic settings take into account the average user's skill level and
ability to secure the OS; if you want less security the user needs to
_specifically_ configure the machine to allow the reduced level of
protection.

The desire for chrome on Windows, by contrast, has produced a platform which is
virtually impossible for the average user to secure.

I use both on a daily basis as well as Solaris and Linux so I consider
myself somewhat agnostic on OS choices as each does something better than
the others and I use it for that function.


Scott C. McGrath



Re: UPS and generator interaction?

2004-03-29 Thread Scott McGrath


Brian,

The way the generators usually are set up is with a transfer switch at the
input of the UPS.  When commercial power is lost the ATS signals the
genset to start, and once the input voltage stabilizes the UPS stops
running on battery.  This scenario assumes the use of a line-interactive UPS,
which includes the UPS you describe.  In the case of an online UPS, the UPS
simply sees that line power has been restored.

When power is restored the ATS switches back to commercial power and
signals the generator to shut down.  The ATS usually exercises the
generator on a set schedule as well.

My advice is to contact a local electrician who specializes in generator
installations as local codes define what you are allowed to do.

BTW APC has an environmental monitor card with relay outputs which can
be used to start a compatible generator.  Once again, you need to talk to
your local electrician.

Scott C. McGrath

On Mon, 29 Mar 2004, Brian (nanog-list) wrote:


 Does anyone know of a way to get a UPS to trigger a generator to start, and
 to switch over to the generator power automatically or does this type of
 thing just not exist?

 Right now we've got a APC Symmetra UPS at 12kva, with no generator.  The UPS
 keeps us running for about 45 minutes, which just isn't enough time.  I
 called APC, but they didn't seem to have any type of automatic solution.
 Their method is to hook it up to a switch, and manually change the feed to
 the UPS from the building power to the generator power and back, but it sure
 would be nice to have something more automated (to save me from running like
 a madman when the UPS page wakes me up at 4am).

 I'd be very grateful to hear of any solutions that you guys have come up
 with in this arena.  Also, any recommendations for generators?  I'm not
 looking for something huge, just something that can be mounted on a roof.
 If I have to pour diesel into it every couple hours, that's fine too.

 Thanks in advance,
 Brian



Re: Redirecting mail (Re: Throttling mail)

2004-03-25 Thread Scott McGrath


Ray,

Take a look at IOS server load balancing (SLB).  You create a virtual server
with your public IP address and bind one or more real servers to a
server farm behind it.

The nice thing about IOS SLB is that it is part of the IOS image in native
mode on the 65xx and the 72xx series.  It runs on a couple of other
platforms but you would need to search CCO to find out which ones.
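
A minimal sketch of the config (names and addresses below are made up):

  ip slb serverfarm MAILFARM
   real 10.1.1.10
    inservice
   real 10.1.1.11
    inservice
  !
  ip slb vserver MAIL-VIP
   virtual 192.0.2.25 tcp smtp
   serverfarm MAILFARM
   inservice

Clients keep pointing at the virtual address and IOS hands the connections
to whichever real server(s) you have in service.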

Scott C. McGrath

On Thu, 25 Mar 2004, Ray Burkholder wrote:


 Quoting Adi Linden [EMAIL PROTECTED]:

 
 
  Is there a way do transparently redirect smtp traffic to a server
  elsewhere on the network using Cisco gear? It would be much easier to
  implement this solution if smtp traffic is transparently sent through the
  dedicated box rather than 'cutting off' all users until they manually
  reconfigure their clients to use the new mail relay.
 
  Adi

 Will the Cisco WCCP protocol do what is necessary in this case?



 -
 This mail sent through IMP: http://horde.org/imp/

 --
 Scanned for viruses and dangerous content at
 http://www.oneunified.net and is believed to be clean.



Re: who offers cheap (personal) 1U colo?

2004-03-16 Thread Scott McGrath


Painting with a broad brush, the differentiation between student and
administrative networks is based on location, role and ownership.  A public
Ethernet port in a library is a student network even though
administrative computers may be connected from time to time.  The
librarian's machine is attached to an administrative network.  This is a
fluid definition since the students often work on administrative
computers.

The real differentiator is that the student networks are comprised of
machines the university does not own or have direct administrative control
over, and securing these machines is up to the owner.

An administrative network is a network of machines owned and controlled by
the university, hence the security policy is defined, implemented and
enforced by the responsible parties within the university.

Scott C. McGrath

On Tue, 16 Mar 2004, Laurence F. Sheldon, Jr. wrote:


 Curtis Maurand wrote:

  Then anyone can walk up to the machine and get onto the network simply by
  turning on the machine.
 
  The system you're looking for involve biometrics or smartcards.  Firewalls
  between student and administration areas would be a good idea as well.

 It must be dreadful to work in a place where everybody is The Enemy.

 In case I every get another job at a University, how do you separate
 student areas from administration areas?

 In my limited experience, we had students in labs, classrooms, and
 offices in the Administration Building, administrators (RA'a, residents,
 offices) in the Residence Halls, all kinds of creepy people in the
 libraries, classrooms, offices, dining rooms, and recreational and
 exercise facilities.  Do you use armed guards to keep everybody in
 their proper areas?

 --
 Requiescas in pace o email




RE: Will your cisco have the FBI's IOS?

2004-03-15 Thread Scott McGrath


This is part of a law enforcement wishlist which has been around for a
long time (See Magic Lantern, Clipper Chip et. al. for examples).

What is desired here is a system by which all communications
originating/or terminating at $DESIGNATED_TARGET can be intercepted with
no intervention by and/or knowledge of the carrier hence ensuring the
security of the investigation.

The trouble with a system like this is that, like all backdoors, it can be
exploited by non-legitimate users, but law enforcement personnel tend to
have a very limited understanding of technology and communications tech,
especially since to the majority of LEAs AOL == Internet; at many local LEAs
the only internet access is AOL.

I've been asked "how do you track down all $NET_MISCREANTS in town."  I told
the chief that it requires good old-fashioned police work; the net is not
magic, and it is decentralized.  But what is wanted is a centralized place
where, with the press of a button, you can see who Joe Smith has been
talking to, who he has been sending email to, and what web pages he is
looking at, to make investigations easy.  From a civil liberties standpoint
that is a _bad_ thing, human nature being what it is.

It is our job as members of the NANOG community to educate our politicians
and police so that we do not end up living in a system which would be the
envy of the Stasi and the Soviet-era KGB.


Scott C. McGrath

On Sun, 14 Mar 2004, Sean Donelan wrote:


 On Sat, 13 Mar 2004, Christopher J. Wolff wrote:
  I believe that CALEA versions of IOS are already available on cisco.com.  It
  has a backdoor for any traffic originating from dhs.gov address space. ;)

 If law enforcement was satisified with the solutions already available, I
 don't think they would have spent the time creating this filing.  It's
 probably a good idea for anyone associated in the Internet industry to
 read the filing because it may be requesting the FCC change definitions
 of who is covered and what they must do. Even if you thought CALEA didn't
 apply to you for the last 10 years; you might find out after this you will
 be required to provide complete CALEA capabilities.  The requested
 capabilities may be more than are currently available from vendors.

 Do you know what is the difference between call-identifying information
 and communications-identifying information?  They both have the intials
 CII.  What is the difference between the phone number of a fax machine and
 the from/to lines on the cover page of the fax?



Re: hey had eric sent you

2004-03-15 Thread Scott McGrath


Bit hard by the same bug.  What version of code are you running on the 6513?
8.1(2) fixes the bug on the 6x48 line cards.  What happens is that packets
of 64 bytes or less are silently dropped.  Replacing line cards will not
help unless there is another bug of which I am not aware.  With a little
digging I can dredge up the relevant DDTS.

Scott C. McGrath

On Sat, 13 Mar 2004, joe wrote:


 MessageThis in reply to the earlier thread Weird Problems?

 Well barring that, I've seen simuliar issues, maybe not the exact same
 timings but.
 I've noticed a couple of things while working with a roll out of
 Active-Directory
 and a recent upgrade to I.E 6.0 for the user base. Since there were several
 thousand
 users involved some of the issues were simply bad configs/drivers/etc.
 However one
 of the stats I have noticed is that in certain situations where a system is
 connected to
 a Cisco 3548, and the client is running in an Auto detect (AD/AN) mode that
 things
 are horendiously slow during boot up, and at various times seem to hang
 unexplainably.
 It seemed corrected by setting the client to 100/Full, but not in all cases.
 Lots of HTTP
 complaints still remain about accessing webpages etc. but never consistant.
 This of course is a pretty fresh problem and is still in my queue for
 research to start this
 Monday. As well, we've found that there was an odd bug with Cisco's 6513s
 and their
 48 port 10/100/1000 line cards. This was using the latest IOS/CAT software
 at the time.
 Again not sure if its a documented problem but, several users were unable to
 Telnet or
 FTP to systems that teminated to the 6513, oddly we were able to icmp echo
 and pass
 HTTP. After sometime and 2 TACs I found that there was a bug regarding these
 items
 and real small packets (I Think less than 64bits??) being passed thru the
 6513 and got an
 RMA for new Line cards. Again, perhaps nothing to do with your situation.
 Since the Nix systems
 and non-Doze seem not to have an issue, perhaps you can enlighten me with
 further
 Sniffs/Captures of these events directly?
 As soon as I get some more data/Captures on my end from the problems I'm
 seeing I can
 forward those apon request so as to keep S/N ratio down on Nanog (:

 Cheers,
 -Joe




 - Original Message -
 From: Riley, Marty
 To: [EMAIL PROTECTED]
 Sent: Friday, March 12, 2004 11:17 PM
 Subject: FW: hey had eric sent you





RE: Will your cisco have the FBI's IOS?

2004-03-15 Thread Scott McGrath


I have read the filing; it's another step down the road.  True, all comms
are subject to intercept _already_; what is desired is a way to _easily_
perform the intercept, and the easily part is the kicker.  Some things
should be hard, especially where civil rights are involved.

See all the light and noise about the MATRIX system which is simply a
means of collecting and indexing information which is already available to
LEA's.

However MATRIX removes the step of asking the provider for information
on an individual basis, hence law-abiding people are now in the position of
having their information searched, without the oversight of the judicial
system, in fishing expeditions.

Human nature being what it is, the act of having to ask a judge to grant
access to the information keeps honest people honest, and judges almost
never deny this type of request.  In a perfect world we would not need
locks on our doors or passwords for our systems.  In situations like this,
who watches the watchers?  Currently a judge does; in the future...

Scott C. McGrath

On Mon, 15 Mar 2004, Sean Donelan wrote:


 On Mon, 15 Mar 2004, Scott McGrath wrote:
  What is desired here is a system by which all communications
  originating/or terminating at $DESIGNATED_TARGET can be intercepted with
  no intervention by and/or knowledge of the carrier hence ensuring the
  security of the investigation.

 I don't think that is correct.  Read the Justice Department's filing.

 With correct legal authorization, law enforcement already has access to
 any electronic communications through a carrier.


 From the Washington Post:
   The Justice Department wants to significantly expand the government's
   ability to monitor online traffic, proposing that providers of high-speed
   Internet service should be forced to grant easier access for FBI
   wiretaps and other electronic surveillance, according to documents and
   government officials.

   A petition filed this week with the Federal Communications Commission
   also suggests that consumers should be required to foot the bill.

 Is this a modem tax by another name.  Should every ISP add a fee to their
 subscriber's bill to pay for it?

 Read the filing.



Re: Enterprise Multihoming

2004-03-12 Thread Scott McGrath


As Marshall noted multi-homing gives you the ability to switch providers
easily.  This ability also gives you leverage with your network providers
since vendor lock-in does not exist.

This is a strong business case for multihoming and is one the financial
types understand and appreciate.

In a prior incarnation I worked for a distributor who had an online
ordering system.  Our telecom coordinator got a great deal on bundled
internet service and telephony from an unnamed vendor.  Due to the peering
arrangements the carrier had, major customers were unable to place orders
in a timely fashion.

I set up a new AS and set up multihoming with another carrier, and made our
customers happy again.  Subsequently said carrier had an outage which took
down our link to them for 7 weeks.  Since this was an internal problem at
our provider, multiple links to this carrier would not have benefited us in
the least.  A multihoming strategy also allows you to select providers who
provide connectivity to your business partners and customers, which is
another win for obvious reasons.

Scott C. McGrath

On Thu, 11 Mar 2004, Marshall Eubanks wrote:


 There is another  thing - if you are multi-homed, and want to switch
 providers, it is pretty seamless and painless - no renumbering, no
 loss of connection, etc., as you always have a redundant path.


 On Thursday, March 11, 2004, at 12:34 PM, Pekka Savola wrote:

 
  On Thu, 11 Mar 2004, Gregory Taylor wrote:
  Mutli-homing a non-ISP network or system on multiple carriers is a
  good
  way to maintain independent links to the internet by means of
  different
  peering, uplinks, over-all routing and reliability.  My network on
  NAIS
  is currently multi-homed through ATT.  I use a single provider as
  both
  of my redundant links via 100% Fiber network.  Even though this is
  cheaper for me, all it takes is for ATT to have some major outage
  and I
  will be screwed.  If I have a backup fiber line from say, Global
  Crossing, then it doesn't matter if ATT takes a nose dive, I still
  have
  my redundancy there.
 
  Well, I think this, in many cases, boils down to being able to pick
  the right provider.
 
  I mean, some providers go belly-up from time to time.  Others are
  designed/run better.
 
  For a major provider, complete outage of all of its customers is such
  a big thing they'll want to avoid it always.  If it happens, for a
  brief moment, once in five years (for example), for most companies
  that's an acceptable level of risk.
 
  --
  Pekka Savola You each name yourselves king, yet the
  Netcore Oykingdom bleeds.
  Systems. Networks. Security. -- George R.R. Martin: A Clash of Kings
 
 
   Regards
   Marshall Eubanks

 T.M. Eubanks
 e-mail : [EMAIL PROTECTED]
 http://www.telesuite.com



Re: T1 Customer CPE Replacement?

2004-02-23 Thread Scott McGrath


Have you tried a softnet depot maintenance agreement?  This entitles you
to IOS upgrades, but H/W replacement is some negotiated percentage of list
price.

The other guys _may_ be cheaper in the short run, but hardware replacement
is always like having a root canal.  I can't speak for Netopia, but I have
dealt with Adtran and ended up buying a spare DS3 CSU/DSU because of the
experience.

Cisco has many warts but they do honor their maintenance contracts unlike
some other vendors who I have had the misfortune of dealing with.


Scott C. McGrath

On Mon, 23 Feb 2004, Claydon, Tom wrote:

 Hello,

 We're looking for a good replacement for fractional T1 customers with Cisco
 1600-   1700-series routers as their CPE. They are good routers, but the
 ongoing support costs are an issue, and we need to replace them ASAP.

 Someone had mentioned several CPE vendors, such as Adtran and Netopia. Are
 there any others, and does anyone have any pros/cons of what they're
 familiar with?


 Thanks,

 = TC

 --
 Tom Claydon, IT/ATM Network Engineer
 Dobson Telephone Company
 phone: (405) 391-8201  cell: (405) 834-0341




RE: Anti-spam System Idea

2004-02-17 Thread Scott McGrath


We do block port 25 as suggested earlier in the thread.  Now the
problem is the spambots use our smarthost(s) to spew their garbage, and the
smarthosts are blocked.

There is an easy, if somewhat impractical, answer ;~}

ip access-list extended network-egress
 deny ip any any log

Think of all the bandwidth charges this would save...

Seriously though, if anyone on the list has any solutions for rate limiting
SMTP in a sendmail environment, please reply off list.
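
The closest stock-sendmail knobs I'm aware of are the sendmail.mc settings
below (the values are arbitrary examples, and whether they are enough for a
busy smarthost is exactly the question):

  dnl limit new inbound SMTP connections accepted per second
  define(`confCONNECTION_RATE_THROTTLE', `5')dnl
  dnl cap simultaneous daemon children as a crude overall brake
  define(`confMAX_DAEMON_CHILDREN', `40')dnl

Pointers to anything smarter are welcome off list.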

Scott C. McGrath

On Mon, 16 Feb 2004, Timothy R. McKee wrote:


 Personally I don't see where ingress filters that only allow registered
 SMTP servers to initiate TCP connections on port 25 is irresponsible.

 Any user sophisticated enough to legitimately require a running SMTP server
 should also have the sophistication to create a dns entry and register it
 with
 his upstream in whatever manner is required.

 There will never be a painless or easy solution to this problem, only a
 choice where we select the lesser of all evils.

 Tim

 -Original Message-
 From: Petri Helenius [mailto:[EMAIL PROTECTED]
 Sent: Monday, February 16, 2004 16:06
 To: Timothy R. McKee
 Cc: 'J Bacher'; [EMAIL PROTECTED]
 Subject: Re: Anti-spam System Idea

 Timothy R. McKee wrote:

 There will *never* be a concerted action by all service providers to
 filter ingress/egress on abused ports unless there is a legal
 requirement to do so.  Think 'level playing field'...
 
 
 Haven´t it been stated enough times previously that blindly blocking ports
 is irresponsible?

 There are ways to similar, if not more accurate results without resorting to
 shooting everything that moves.

 Pete



Re: ISS X-Force Security Advisories on Checkpoint Firewall-1 and VPN-1

2004-02-05 Thread Scott McGrath


On PIXen and the FWSM it is very easy to disable the evil NAT; all you
need is to enter the nat 0 command in global configuration mode.  This
allows the PIX to pass addresses untranslated.
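
A minimal sketch, with a placeholder inside network:

  access-list NONAT permit ip 10.1.1.0 255.255.255.0 any
  nat (inside) 0 access-list NONAT

(The ACL-less form, nat (inside) 0 10.1.1.0 255.255.255.0, also exists,
with slightly different connection-initiation behavior.)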

The PIXen are still based on Intel hardware, but to the best of my
knowledge they have never had a HDD.  I have worked with them since the
original PIX and PIX 1; I attended the initial product announcement
seminar when they first came out.



Scott C. McGrath

On Thu, 5 Feb 2004, Crist Clark wrote:


 Martin Hepworth wrote:

 
  Alexei Roudnev wrote:
 
  Checkpoint is a very strange brand. On the one hand, it is _well known
  brand_, _many awards_, _editors choice_, etc etc. I know network
  consultant,
  who installed few hundred of them, and it works.
 
  On the other hand, every time, when I have a deal with this beasts (we do
  not use them, but some our customers use), I have an impression, that
  it is
  the worst firewall in the world:
  - for HA, you need very expansive Solaris cluster (compare with
  PIX-es) /I
  can be wrong, but it is overall opinion/.
  - to change VPN, you must reapply all policy, causing service
  disruption (I
  saw  1 day outage due to unsuccesfull Checkpoint reconfiguration);
  - VPN have numerous bugs (it is not 100% compatible with Cisco's by
  default;
  of couse, I can blame Cisco, but Checkpoint is _the only_ one of my peers
  which have this problem);
  - Configuration is not packed in 1 single file, so making difficult
  change
  control, etc etc...
 
  All this is _very_ subjective, of course; but - those customers, who uses
  Checkpoints, are the only ones who had a problems with firewalls. If I
  compare it with plain, reliable and _very simple_ PIX (PIX is not
  state of
  art, of course) and some others... I begin to think about checkpoint as
  about one more _brand bubble_. At least, I always advice _against_ it.
 
  PS. Security for dummies... interesting idea. Unfortunately, this book
  should start with _100% secure computer = dead computer_ -:)
  Why not? People really need such book!
 
 
  Of course 'back in days' when Firewall-1 started and
  [EMAIL PROTECTED] was *the* network security ML, PIX was an
  utter pile of poo and F-1 was very nice thankyou.
 
  Now PIX is quite good,

 Is it still very counter intuitive to set up a PIX to _not_
 do the eevul NAT? Is the PIX no longer PeeCee hardware underneath
 (I know they got rid of the HDD) so not as to bring NOs down to the
 level of the great unwashed throngs of desktop users?

  and Firewall-1 has become the Microsoft of
  firewalls - ie everywhere and not particularly well administratored.
 
  Interesting how things change isn't it?

 At least Checkpoint had the sense to kill the FWZ VPN protocol
 early and go with IPsec. More than I can say for M$. Not that
 IPsec interoperability is fully realized. Checkpoint has its own
 proprietary icky tricks to try to sneak IPsec through NAT just
 like every other commercial vendor. But Checkpoint admins are
 worst part, I check the box to use IKE VPN but someone said that
 uses the ESP service. Which port number is that? I read port 50
 somewhere, but should I make it a TCP or UDP service?

 The Checkpoint feature/bug that frustrates me is at the GUI
 level there is no association between a rule and an interface.
 To cover up this problem, there is the automatic anti-spoofing
 feature which is a bitch, if not impossible, to properly configure
 for a complicated topology.
 --
 Crist J. Clark   [EMAIL PROTECTED]
 Globalstar Communications(408) 933-4387



Re: Misplaced flamewar... WAS: RE: in case nobody else noticed it, there was a mail worm released today

2004-01-29 Thread Scott McGrath


On Wed, 28 Jan 2004, Alexei Roudnev wrote:

 
 
 
  Most Windows boxes are running with administrative privledges.  That makes
  Windows a willing accomplice.  The issue isn't that people click on
  attachments, but that there are no built in safeguards from what happens
  next.
 This is problem #1. Unfortunately, Windose is too complex and have too much
 legacy, so everyone must run as a administrator (try to install Visio
 without admin privileges...).

The whole point of the infamous *.DLL was to provide local libraries for
applications, like Unix shared libraries (*.so files).  This was corrupted
by app vendors who were too deadline-focused to install their DLLs in the
application directory.

Of course this was abetted by the ability of an application to write
into the system directories.

When NTFS came out an ordinary user could not write to the system directory
tree.  Hence most users are running as Administrator or equivalent so that
they can write into the system tree.  This was a bad design decision by
MS _and_ application developers.  This _is_ fixable by MS by simply not
allowing apps to write into the system tree.  This is of course a small
matter of programming, but it would really improve the overall security
posture of Windows.

Now there are well-written applications which do install their DLLs into
their own tree; these apps can usually be recognized by _not_ requiring a
reboot after installation.

 
 Problem #2 - using extentions to select an application - may be, it's a very
 good idea, but it complicates virus (worm) problem.
 
 Agreed
 However magic numbers in the header or having the execute permission bit 
 set bring the same problem to the table.
 

 Problemm #3 - Monoculture.
  This greatly exacerbates problems 1 and 2 but is not so much of a 
  problem on its own.  i.e. Apache which has over 75% of the webserver
  market and is infrequently compromised.


Problem #4

MS applications have an unfortunate predilection to run any bit of 
executable code they find.  i.e. a WMA file can contain executable code 
which media player will happily execute.   This is a perfect example of 
just because you can do something it does not necessarily follow that you 
_should_ do something.   This dates back to [*]BASIC and the RUN command.  
It was somewhat useful 10+ years ago not so much today.




Re: How does one reach a human being at ATT?

2004-01-28 Thread Scott McGrath


What about using byte intervals to BEEFDEAD its space in memory ;~)

Scott C. McGrath

On Wed, 28 Jan 2004, Adam Maloney wrote:


 On Wed, 2004-01-28 at 00:12, Jay Hennigan wrote:
  I have an ATT T-1 taking errors.  Their trouble reporting number dumps
  me into the IVR from hell.  It even has machines calling me back at
  intervals with status.  The status says A test was run...  No hint as to
  the results of the test.
 
  One of the choices is to say or hit 2 if you need further assistance.
 
  Doing so gets a response telling you to call their maintenance center which
  is the same machine that I used to generate the ticket in the first place.
 
  Furrfu!  The telephone company doesn't have anyone to answer the telephone.
 
  Even Floyd[1] is looking pretty good at this point.
 
  Anyone have a secret number or touchtone sequence to share?  Swearing at
  it doesn't work.  This is a point-to-point circuit, not an Internet T-1.
 
  [1] 
  http://www.sfgate.com/cgi-bin/article.cgi?file=/chronicle/archive/2003/02/21/BU227355.DTL

 The ATT TickeTron loves you!  It will open your ticket, work your
 ticket, and then close your ticket for no reason.  Then you can call
 back into it and open a new ticket, which will again be closed.  You can
 yell and scream at TickeTron all you want, and it will still give you
 the same friendly, useless service as it did the first 10 times you
 opened your ticket!

 Open the fscking Ticket, TickeTron
 I'm sorry Jay, I'm afraid I can't do that, your ticket has been
 closed.

 I have a number for Richmond Maintenance Center, e-mailed to you
 off-list.  It may not be the right group for PtP, but at least you'll
 get a real person to vent at.  They will probably be able to open your
 ticket and get it to a warm body without getting HAL involved.

 Make sure you ask the engineer you speak with what the ATT techs call
 that system internally.  They have their own name for it (and it's not
 TickeTron), and it's absolutely hilarious...and appropriate.  For the
 life of me, I can't remember what it was.  At least the engineers know
 how frustrating it is.  Really though, the worst part is that yelling at
 it is just no fun.  And threatening to DEADBEEF it's space in memory
 won't earn you any points either :)

 Adam Maloney
 Systems Administrator
 Sihope Communications



Re: Any 1U - 2U Ethernet switches that can handle 4K VLANs?

2004-01-26 Thread Scott McGrath


Both the ISL _and_ the dot1q headers are stripped off at the trunk
interface, so they _both_ change the packet size but neither alters the
payload.


Scott C. McGrath

On Mon, 26 Jan 2004 [EMAIL PROTECTED] wrote:


  ISL _DOES NOT CHANGE_ packet size.

 An 802.1q tag adds 4 bytes to the Ethernet frame.

 ISL encapsulation adds 30 bytes to the Ethernet frame.

 Steinar Haug, Nethelp consulting, [EMAIL PROTECTED]



Re: Outbound Route Optimization.

2004-01-26 Thread Scott McGrath


This was one of the pipe dreams that RSVP was _supposed_ to solve, in that
you could set up an end-to-end path with precisely specified
characteristics.  The problem is that _all_ the devices in the path need to
support RSVP.

Now the snake-oil salesmen are coming out with boxes which purport to
monitor all paths on the internet and will individually select the
best path for your flow.  The racket will be deafening when all these
boxes start running scripted ICMP sweeps to find the best path.

The solution is simple: buy adequate pipes and possibly partner with a
content delivery network which peers with _all_ the major carriers, so that
your traffic will not need to transit the major public peering points.



Scott C. McGrath


Re: What's the best way to wiretap a network?

2004-01-20 Thread Scott McGrath



Scott C. McGrath

On Tue, 20 Jan 2004, Eriks Rugelis wrote:


 Sean Donelan wrote:
  Assuming lawful purposes, what is the best way to tap a network
  undetectable to the surveillance subject, not missing any
  relevant data, and not exposing the installer to undue risk?

 'Best' rarely has a straight-forward answer.  ;-)

 Lawful access is subject to many of the same scaling issues which we
 confront in building up our networks.  Solutions which can work well for
 'small' access or hosting providers may not be sensible for larger scale
 environment.

 If you have only a low rate of warrants to process per year,
and if your facilities are few in number and/or geographically close
 together,
and if your 'optimum' point of tap insertion happens to be a link which
 can be reasonably traced without very expensive ASIC-based gear
and if your operation can tolerate breaking open the link to insert the
 tap,
and if the law enforcement types agree that the surveillance target is
 unlikely to notice the link going down to insert the tap...

then in-line taps such as Finisar or NetOptics can be quite sensible.

 If your operation can tolerate the continuing presence of the in-line tap
 and you only ever need a small number of them then leaving the taps
 permanently installed may be entirely reasonable.

 On the other hand, if your environment consists of a large number (100's) of
 potential tapping points, then you will quickly determine that in-line taps
 have very poor scaling properties.
   a) They are not rack-dense
   b) They require external power warts
   c) They are not cheap (in the range of US$500 each)
   d) Often when you have that many potential tapping points, you are
 likely to be processing a larger number of warrants in a year.  An in-line
 tap arrangement will require a body to physically install the recording
 equipment and cables to the trace-ports on the tap.  You may also need to
 make room for more than one set of recording gear at each site.

 Large-scale providers will probably want to examine solutions based on
 support built directly into their traffic-carrying infrastructure (switches,
 routers.)

Using Cisco's feature set on a uBR it would be something like

 interface Cable x/y
  cable intercept <target MAC> <logging server IP> <UDP port>

as an example of lawful access on infrastructure equipment.

 You should be watchful for law enforcement types trying dictate a 'solution'
 which is not a good fit to your own business environment.  There are usually
 several ways of getting them the data which they require to do their jobs.

 Eriks
 ---
 Eriks Rugelis  --  Senior Consultant
 Netidea Inc.  Voice:  +1 416 876 0740
 63 Charlton Boulevard,FAX:+1 416 250 5532
 North York, Ontario,  E-mail: [EMAIL PROTECTED]
 Canada
 M2M 1C1

 PGP public key is here:
 http://members.rogers.com/eriks.rugelis/certs/pgp.htm





Re: sniffer/promisc detector

2004-01-19 Thread Scott McGrath



That's what I assumed but I asked the question anyhow just to confirm my
assumption(s).


Scott C. McGrath

On Mon, 19 Jan 2004, Gerald wrote:

 On Sat, 17 Jan 2004, Scott McGrath wrote:

  The question here is what are you trying to defend against?.

 If that question was directed at me, I am just checking to make sure
 nothing is new on the packet sniffing / detecting scene that I haven't
 heard about. It also seemed to me to have been a long time since the
 subject of detecting packet sniffers was brought up. (not just on NANOG)

 I know there are ways to get around being detected, but I'm just trying to
 make sure I'm doing my best to catch the less than professional sniffers
 on my networks.

 Gerald



Re: sniffer/promisc detector

2004-01-17 Thread Scott McGrath


It is also possible to sniff a network using only the RX pair, so most of
the tools to detect cards in promiscuous mode will fail.  The new Cisco 6548s
have TDR functionality, so you could detect unauthorized connections by their
physical characteristics.

But there are also tools like ettercap which exploit weaknesses within
switched networks.  See http://ettercap.sourceforge.net/ for more details
(and gain some add'l grey hairs in the process).

The question here is what are you trying to defend against?.


Scott C. McGrath

On Sat, 17 Jan 2004, Sam Stickland wrote:



 - Original Message -
 From: Laurence F. Sheldon, Jr. [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Friday, January 16, 2004 10:49 PM
 Subject: Re: sniffer/promisc detector


 
  Gerald wrote:
  
   Subject says it all. Someone asked the other day here for sniffers. Any
   progress or suggestions for programs that detect cards in promisc mode
 or
   sniffing traffic?
 
  I can't even imagine how one might do that.  Traditionally the only
  way to know that you have a mole is to encounter secrets that had to
  have been stolen.

 In an all switched network, sniffing can normally only be accomplished with
 MAC address spoofing (Man In The Middle). Watching for MAC address changes
 (from every machines perspective), along with scanning for seperate machines
 with the same ARP address, and using switches that can detect when a MAC
 address moves between ports will go a long way towards detecting sniffing.

 It can also be worthwhile setting up a machine on a switch to detect
 non-broadcast traffic that isn't for it - sometimes older switches get
 'leaky' when they shouldn't be used.

 I'm not sure if it's still the case, but it used to be the case that when
 Linux is in promiscuous mode, it will answer to TCP/IP packets sent to its
 IP address even if the MAC address on that packet is wrong. Sending TCP/IP
 packets to all the IP addresses on the subnet, where the MAC address
 contains wrong information, will tell you which machines are Linux machines
 in promiscuous mode (the answer from those machines will be a RST packet).

 Some tools that google turned up (haven't tried them myself):

 http://www.securityfriday.com/ToolDownload/PromiScan/promiscan_doc.html

 http://www.packetstormsecurity.org/sniffers/antisniff/

 Apparently Man In The Middle attacks can also be detected by measuring the
 latency under different traffic loads, but I haven't looked to much into
 that.

 Sam




Re: One-element vs two-element design

2004-01-17 Thread Scott McGrath


I personally favor the N+1 design model as it allows maintenance to be
performed on network elements without causing outages, which makes the
customers happy.

In many instances you can leverage the N+1 model to share the load between
the devices, thereby increasing network capacity.  As an additional benefit,
in the event of an element failure your network degrades gracefully rather
than failing hard and requiring an all-hands operation to get it back
online.  This tends to reduce the operational costs of your network: even
though your implementation cost is higher, over the lifetime of the
network the overall cost is lower, e.g. service contracts can be NBD
rather than 24x7x2.

The N+1 model also takes into account the simple fact that stuff breaks.
I was reading the FIPS standards for machine room design one day and an
entire page was devoted to ALL EQUIPMENT WILL FAIL EVENTUALLY; this is a
lesson which is often forgotten.

This is why commercial airliners have multiple engines: even though the
system is less reliable overall than a well designed single engine craft,
the failure of a single component does not entail the catastrophic failure
of the entire system.  (There are exceptions to this, but the overall
concept does work.)

In the end it comes down to a reliable vs a resilient network.  In a
reliable network components fail infrequently but they have catastrophic
failure modes; in a resilient network component failure is taken as a given,
but the overall system reliability is much higher than in a reliable network
since a component failure does not equal a functional failure.
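
A quick illustration of the arithmetic behind that claim, using an assumed
per-element availability figure rather than anything measured:

# Availability of a single element vs. an N+1 pair of the same elements.
# The per-element availability below is an assumed example figure.
single = 0.999                      # one "reliable" element: 99.9% available
pair = 1 - (1 - single) ** 2        # service is up if either element is up

print("single element : %.4f%% available" % (single * 100))
print("redundant pair : %.6f%% available" % (pair * 100))
# Prints roughly 99.9000% vs 99.9999%; the resilient design wins even
# though each individual element is no more reliable than before.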


Scott C. McGrath

On Fri, 16 Jan 2004 [EMAIL PROTECTED] wrote:

 One key consideration you should think about is the ability to perform
 maintenance on redundant devices in the N+1 model without impacting the
 availability of the network.

 Brent




 Timothy Brown [EMAIL PROTECTED]
 Sent by: [EMAIL PROTECTED]
 01/16/2004 10:14 PM


 To: [EMAIL PROTECTED]
 cc:
 Subject:One-element vs two-element design



 I fear this may be a mother of a debate.

 In my (short?) career, i've been involved in several designs, some
 successful,
 some less so.  I've recently been asked to contribute a design for one of
 the
 networks I work on.  The design brings with it a number of challenges, but
 also, unlike a greenfield network, has a lot of history.

 One of the major decisions i'm being faced with is a choice between
 one-element
 or two-element design.  When I refer to elements, what I really mean to
 say
 is N or N+1.  For quite some time now, vendors have been improving
 hardware
 to the point where most components in a given device, with the exception
 of
 a line card, can be made redundant.  This includes things like routing and
 switching processors, power supplies, busses, and even, in the case of
 vendor
 J and several others, the possibility of inflight restarts of  particular
 portions of the software as part of either scheduled maintenance or to
 correct
 a problem.

 I have always been traditionally of the school of learning that states
 that
 it is best to have two devices of equal power and on the same footing,
 and,
 in multiple site configurations, four devices of equal power and equal
 footing.
 I feel like a safe argument to make is N+1, so that is the philosophy that
 I tend to adopt.  N+2 or N...whatever doesn't seem to add a lot of
 additional
 security into the network's model of availability.  This adds complexity,
 but
 I prefer to think of this in terms of,  Well, I can manage software or
 design
 complexity in my configurations, but I can't manage the loss of a single
 device which holds my network together.  Now I must view this assertion
 in
 the context of better designed hardware and cheap spares-on-hand.

 Of course, like many other folks, I have tried to drink as deeply as I can
 from the well of knowledge.  I've perused at length Cisco Press' High
 Availability Network Fundamentals, and understand MTBF calculations and
 some of the design issues in building a highly available network.  But
 from
 a cost perspective, it seems that a single, larger box may be able to
 offer me
 as much redundancy as two equally configured boxes handling the same
 traffic
 load.  Of course, there's that little demon on my shoulder, that tells me
 that I could always lose a complete device due to a power issue or short,
 and then i'd be up a creek.

 We have a history of adopting the N+1 model on the specific network i'm
 talking about, and it has worked very well so far in the face of
 occasional
 software failures by a vendor we occasionally have ridiculed here on
 nanog-l.
 However, in considering a comprehensive redesign, another vendor offers
 significantly more software stability, so i'm re-evaluating the need for
 multiple devices.

 My mind's more or less already made up, but i'd like to hear the design
 philosophies of other members of the operational 

Re: One-element vs two-element design

2004-01-17 Thread Scott McGrath


Point taken; availability would have been a better term to use.

From a customer's standpoint, limited availability of bits is still better
than no bits flowing, and in an ideal world your published capacity would
be N rather than N+1.

Appreciate the thoughtful comments

Regards - Scott

Scott C. McGrath

On Sat, 17 Jan 2004, Deepak Jain wrote:

 [stuff snipped]

  but the overall system reliability is much higher than a reliable network
  since a component failure does not equal a functional failure.


 s/reliability/availability.

 You meant reliability when comparing a 1 vs 2 engine airplane, but a
 network (from a customer point of view) isn't defined by reliability,
 its defined by availability.

 If you are using your backup (N+1) router(s) for extra capacity, then
 you don't fail back to full capacity, but you do have limited availability.

 Availability/Performance of the overall system (network) is what we all
 engineer for. Customers don't care about reliability as long as the
 first two items are not impuned. (For example, they don't care if you
 have to replace their physical dialup port every hour on the hour,
 provided that they can get in and off in between service intervals --not
 a very reliable port, but a highly available network from the customer
 perspective).

 Maybe I am just picking on semantics.

 Deepak





Re: Looking for power metering equipment...

2004-01-15 Thread Scott McGrath


Concur: you need wattage, not amperage.  There is a 'relatively' cheap
method of doing this, however local electrical codes may put a damper on
this type of project.

You put a current transformer on each branch circuit.  A 'typical' current
transformer will generate 1 millivolt per milliampere.  You then install an
A/D board in a PC and write a simple application to query each channel of
the A/D, or purchase a commercially available SNMP datalogger.
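
A minimal sketch of such an application, assuming a 1 mV/mA current
transformer and a placeholder read_channel_mv() call standing in for
whatever driver or SNMP interface the A/D board provides; the voltage and
power factor below are assumptions, which is why a CT alone only yields an
estimate of wattage:

#!/usr/bin/env python
# Poll each A/D channel, convert the CT output (1 mV per mA) to amps, and
# estimate watts.  VOLTS and POWER_FACTOR are assumed values for illustration.
import time

CHANNELS = range(16)      # one channel per branch circuit
VOLTS = 120.0             # nominal branch-circuit voltage (assumed)
POWER_FACTOR = 0.8        # typical for switched supplies (assumed)

def read_channel_mv(channel):
    # Placeholder for the A/D board's real driver or SNMP call;
    # returns a fixed simulated reading (in millivolts) for illustration.
    return 4200.0

while True:
    for ch in CHANNELS:
        mv = read_channel_mv(ch)
        amps = mv / 1000.0                    # 1 mV per mA means mV == mA
        watts = VOLTS * amps * POWER_FACTOR   # rough estimate only
        print("circuit %2d: %5.1f A  ~%6.0f W" % (ch, amps, watts))
    time.sleep(60)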


Scott C. McGrath

On Thu, 15 Jan 2004, David Lesher wrote:

 
 Speaking on Deep Background, the Press Secretary whispered:
  
  
  Question: We are looking for something that sits in the PDUs or branch
  circuit-breaker distribution load centers, that, on a branch-circuit by
  branch-circuit basis, can monitor amperage, and be queried by SNMP.
  
  Considering there are several hundreds of circuits to be monitored, cheap
  and featureless (all we need is amperage via SNMP) is fine.
 
 You really want wattage. The power factor of switched supplies
 is far from unity.
 
 Take a look at http://www.quadlogic.com/transmeter1.html
 
 Also, recall you sell each watt twice -- once to heat up
 a chassis, and a 2nd time for the HVAC to cool it.
 
 
 
 
 -- 
 A host is a host from coast to [EMAIL PROTECTED]
  no one will talk to a host that's close[v].(301) 56-LINUX
 Unless the host (that isn't close).pob 1433
 is busy, hung or dead20915-1433
 



RE: GSR, 7600, Juniper M?, oh my!

2004-01-08 Thread Scott McGrath


If you choose the 7600s I would highly recommend going with the Sup720s;
the price difference is not that great, and they incorporate the SFM, which
gives you the option of running dCEF on your WAN cards.

Scott C. McGrath

On Thu, 8 Jan 2004, Josh Fleishman wrote:

 
 Back to the original question..
 
 A lot of your decision comes down to what you're going to be doing with the
 box and when you expect your next jump from OC3 to OC12(or greater).  Also,
 you need to consider your comfort level with JUNOS vs IOS.  If you're cool
 with JUNOS then multiple M series boxes are worth investigating.  Our
 experience with them has been almost nothing but positive, plus they will
 allow you to expand to greater than OC3, providing you with some future
 proofing.
 
 7600's have proven to be fine boxes, especially if you have need for
 Ethernet port density at the same layer as your optical circuits.  A lot of
 feature support is going to depend on your supervisor/msfc selection.  If
 you go this route, and the coffers are full, check out the new(er) sup720's.
 However, based on your ACL and Policing requirements, the Sup2/MSFC2 combo
 should be sufficient.  Also, keeping in mind the emergence of point to point
 Ethernet solutions in the WAN/MAN (ie Metro Ethernet, and MPLS and L2TPv3
 pseudowires) keeping Ethernet at your edge might prove useful one day.
 
 The GSR, IMHO, is a higher tier box based on both its scalability to OC192
 and cost.  Since you're just going to OC3's now, I doubt the GSR will be
 your best bet for the cost, but then again I haven't priced one out lately.
 
 If you're really pinching pennies, then check out upgrading your 7500's with
 RSP8/16s and faster VIPs.  But, if you're putting multiple OC3's on a box,
 then your down links will likely start turning to GE.  I'd stay away from
 the GEIPs if possible.  And for your 7200's, look into the NPE-G1 which has
 line rate GE ports onboard.  We've used them and they are pretty solid.  A
 head to head GRE bakeoff between the NPE-G1 and an RSP8(with dCEF) proved
 the NPE to be far superior.  
 
 Josh
 
 
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of bcm
 Sent: Tuesday, January 06, 2004 1:11 PM
 To: [EMAIL PROTECTED]
 Subject: GSR, 7600, Juniper M?, oh my!
 
 
 Hello all,
 
 I'm faced with a difficult decision.  I work for a large multi-node
 regional ISP (and Cisco shop).  In our largest nodes we've found the Cisco
 7500 series routers to be at the end of their useful life due to the
 throughput generated by POS OC-3 feeds and 10,000+ broadband users whose
 traffic needs to be moved out of the node.  Short of building a farm of
 7500's the need to upgrade seems clear.
 
 But where to go?  The Cisco GSR platform seems a logical choice, but
 their new 7600 series units are attractive for their cost.  Juniper may also
 have a place at this end of the processing spectrum.  I'd also like to
 ensure that the new platform supports doing CAR and ACLs at line rate, given
 the client base.
 
 I wanted to see what other operators in this situation have done, so I
 would appreciate anyone's input or insight into the pros and cons of these
 platforms or any other ideas as to how I can grow beyond the Cisco 7500.
 



Re: Wirespeed 24-port L3 switches

2004-01-08 Thread Scott McGrath


I think you are expecting too much from a 24-port switch.  All these
devices are meant to sell at a price point, and most buyers use
the L3 features only as a checklist elimination option.  In my experience
most of these switches are never used as anything other than a dumb L2
switch, but the purchaser feels good because they have a manageable L3
switch, the fact that these features are never configured in their
installations notwithstanding.

Also, the manufacturers do not want to cannibalize the sales of higher end
products, so the cheap ones tend to have limited functionality.

This community is an exception: we really do configure the features and
expect them to work.


Scott C. McGrath

On Thu, 8 Jan 2004 [EMAIL PROTECTED] wrote:

 
 Hi all,
 
 We're looking at L3 switches which have decent L3 packet forwarding
 performance (wirespeed if possible), a reasonable amount of L4 ACLs/ACEs
 (an average of at least 80 per port) and comes in a 24-port 10/100 port
 package with a couple of GBIC slots for uplinking to the core network.  
 OSPF, but no BGP.
 
 We've looked at the Cisco 3550-24, but they seem to have resource
 exhaustion issues[1] if you create more than 8 SVI's (i.e. it goes back
 to software routing).  Extreme 200 switches look OK, but are limited to
 about 1000 ACE[2] (averages 32 rules per port).  Allied Telesyn's
 8800/Rapier series currently only manage half that figure in hardware and
 don't support UDP/TCP port ranges in a single ACE[3].
 
 Are our expectations of a 24-port switch too high?  Would it be better to
 move over to higher density switches and put in large amounts of
 underfloor cabling in large installations and keep putting separate
 routers and switches into the smaller locations (100 ports)?
 
 Or are L3 switches not a mature product and we should all stick to using 
 switches for L2 and have L3+ dealt with by dedicated routers for the time 
 being?
 
 Cheers,
 
 Rich
 
 [1] http://www.cisco.com/warp/public/473/145.html
 [2] 
 http://www.extremenetworks.com/libraries/prodpdfs/products/summit200_24_48.asp
 [3] They do support ranges, but a rule to cover a single range may require 
 multiple ACEs.
 



Re: Minimum Internet MTU

2003-12-22 Thread Scott McGrath



Or the X.25/IP gateways beloved of airlines, who are also good at
complaining when traffic is dropped on the floor.

Scott C. McGrath

On 22 Dec 2003, Robert E. Seastrom wrote:

 
 
 Chris Brenton [EMAIL PROTECTED] writes:
 
  I agree, this is a bit of a loaded question. I guess by safe I mean Is
  anyone aware of a specific link or set of conditions that could cause
  _legitimate_ non-last fragmented packets on the wire that have a size of
  less than 1200 bytes. I agree there are bound to be inexperienced users
  who have shot themselves in the foot and tweaked their personal system
  lower than this threshold, thus my 99.9% requirement.
 
 You mean like everyone who's still running TCP/IP over AX.25 in the
 ham radio community?  They're generally technically adept and good at
 complaining...  I'm sure rbush would encourage his competitors to do this.
 
 What are you trying to accomplish by killing off the fragments?
 
 ---Rob
 
 



Re: WLAN shielding

2003-12-01 Thread Scott McGrath


There is an adage in the wireless industry: if it will hold water, it will
hold RF energy.  Unfortunately this is true, and the only method by which
you can prevent the egress of 2.4 GHz signals from a defined area is a
Faraday cage; since the wavelength is short, you need a
very fine mesh screen or solid metal walls.  This is expensive.

If you really want to use wireless, I would recommend a VPN solution with
one-time-password authentication, e.g. SecurID.
Scott C. McGrath

On Wed, 26 Nov 2003, Andy Grosser wrote:

 
 Apologies in advance if this may not quite be the proper list for such a
 question...
 
 My company is investigating the use of wireless in a couple of our
 conference rooms.  Aside from limiting the scope of reception with various
 directional antennae, does anyone have any suggestions or pointers for
 other ways to limit the propagation of signals (i.e. special shielding
 paint, panels or other wall coatings)?
 
 Feel free to reply off-list.
 
 Thanks!
 
 Andy
 
 ---
 Andy Grosser, CCNP
 andy at meniscus dot org
 ---
 
 
 



Re: Anit-Virus help for all of us??????

2003-11-25 Thread Scott McGrath


The minimalist approach has support advantages as well.  Because of the
small image size, a reimage can be accomplished quickly.

For better or worse many network tools/utilities only run under Win[*],
requiring a Windows box; for many of these Win98SE fits nicely.  My app
load is small, i.e. browser, ssh client, sftp client and the inevitable
Office suite.

We are primarily a *nix house here but we do need Windows at times.



Scott C. McGrath

On Tue, 25 Nov 2003, Brian Bruns wrote:

 
 - Original Message - 
 From: Vivien M. [EMAIL PROTECTED]
 To: 'Daniel Karrenberg' [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Sent: Tuesday, November 25, 2003 9:39 AM
 Subject: RE: Anit-Virus help for all of us??
 
 
 
  Have either of you actually followed this advice?
 
  Win98SE is totally useless as a desktop OS due to the archaic GDI/USER
  resource limits. When one average consumerish app (eg: a media player)
 eats
  up 10% of those resources, one window in an IM program eats up 2%, etc...
 it
  does not take much to bring down an entire system. Last time I  was
 running
  Win98SE (which is about 3 years ago), it took about 20 minutes after
 booting
  while running boring normal apps to get to a dangerously low resource
 level
  (30%ish free). That machine got totally unstable needing a reboot after
  about 3 days. On the same hardware (with additional RAM), Win2K could
 easily
  run 3-4 weeks and run any app I wanted just fine.
  So, some people might say I'm a power user, but the average users I know
  these days tend to multitask at least a web browser, an IM client with a
  couple open windows, some bloated media player, perhaps a P2P app, and
 some
  office app. This is already stretching Win9X to its limits, and I would
  expect it to be worse (code just gets sloppier...) than it was three years
  ago...
 
 Yes I do follow my own advice.  Back from the days when I was an OEM, I
 still have a box full of win98SE cd packs/licenses for when I build people
 new machines.  Its what I put on them standard unless you ask for Win2k or
 XP or NT4 (or any other OS for that matter, ie Linux, BSD).
 
 I know full well about the resource limits.  Its a PITA, but as long as you
 run a decent set of apps that don't suffer from resource leaks (Mozilla
 without a GDI patch does this for example) that eventually use up all
 GDI/USER memory, you'll be fine.  I use Win98SE here all day with only one
 reboot needed most days, and I run WinAMP, Putty, K-Meleon, Outlook Express,
 Cygwin, mIRC, Xnews (which has a bad habit of crashing the whole system at
 times), as well as AIM, Miranda IM, SST, Yahoo Messenger, and various other
 tools.  Thats all at once, multitasking.  I know, I could reduce the clutter
 by letting Miranda IM do AIM and Yahoo, but thats not the point. :-)
 
 Many times, resource suckage comes from those ugly faceless background
 programs that run at startup.  Kill as many icons as you can on the desktop
 and the task bar, and clean out your startup list, and you'll free up a lot
 of GDI resources.
 
 
 
 
  No wonder people think Windows is unreliable. 98SE may be preferable from
 a
  security-from-external-threats POV, yes, but for any type of real use,
 it's
  useless. Not to mention the other quirks, like needing to reboot to change
  network settings, the lack of any local security (or even attempt at local
  security), etc. I'll take rebooting every week or two for the latest XP
  security patch any day over rebooting every day or two because Win98SE is
 an
  unreliable piece of poorly designed legacy junk.
 
  The way I see it, there are two uses for 98SE (or 95, 98, Me, etc) in the
  modern world:
  1) People who use their computers as game-only machines (or who dual boot
 a
  real OS for non-game purposes)
  2) Advertising for $OTHER_OS, where $OTHER_OS can be Win2K, XP, or your
  favourite Linux distro with KDE, GNOME, etc. Anything that actually WORKS
  reliably.
 
 Lets not forget those people who just don't have the CPU power or memory to
 support 2k or XP.
 
 Just because something is new and 'improved' doesn't make it better.  Yes,
 9x has a lot of legacy crap.  Yes, 9x has various issues with resource usage.
 But sometimes, its just right.
 
 --
 Brian Bruns
 The Summit Open Source Development Group
 Open Solutions For A Closed World / Anti-Spam Resources
 http://www.sosdg.org
 
 The AHBL - http://www.ahbl.org
 



RE: [Activity logging & archiving tool]

2003-11-25 Thread Scott McGrath


CiscoWorks also polls the devices for configuration changes and generates 
a diff if you so desire.  If you have set up AAA you will have an audit 
log of when changes were applied and who applied them.

Scott C. McGrath

On Tue, 25 Nov 2003 [EMAIL PROTECTED] wrote:

 
 Or Ciscoworks. A config change sends a syslog event to CW which in
 turn knows to go grab the latest copy of the config. I believe
 there are some reporting capabilities too, simple diff routines and
 archives
 of past configs. 
 
 I think CW is more of the CVS-like approach whereas ACS is sort of a
 simple logging method. 
 
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
 Dan Lockwood
 Sent: Tuesday, November 25, 2003 3:54 PM
 To: joshua sahala; Priyantha; [EMAIL PROTECTED]
 Subject: RE: [Activity logging & archiving tool]
 
 
 
 If you are in a Cisco shop you might consider Secure ACS.  We use ACS to
 log all of our changes and have very good success with it. Unfortunately
 it is not free.
 
 Dan
 
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
 joshua sahala
 Sent: Tuesday, November 25, 2003 11:45 AM
 To: Priyantha; [EMAIL PROTECTED]
 Subject: Re: [Activity logging & archiving tool]
 
 
 Priyantha [EMAIL PROTECTED] wrote:
  
  In my company, there are several technical guys make changes to the
  existing network and  it's very difficult to keep track of what we did
  when, etc.
 
 i feel your pain - except when it was happening, they weren't as 
 technical as they thought they were...
  
  I'm looking for a simple tool, in which each and every one has to
  manually record whatever (s)he has done or any incident (s)he observed
  so that the tool archives that data someway. Later, in case if someone
  needs, (s)he should be able to search for that archive by date, by 
  person, by a random phrase, etc.
 
 rancid (http://www.shrubbery.net/rancid) and
 cvs-web (http://stud.fh-heilbronn.de/~zeller/cgi/cvsweb.cgi/)
 
 rancid does nice proactive checking of device configs, and cvs-web is a
 pretty front end to look through change history
 
 for tracking:
 request tracker (http://www.bestpractical.com/rt/) - it is a ticketing
 system, but you could probably customize it to fit your needs
 
 netoffice (http://sourceforge.net/projects/netoffice/) - haven't used it
 personally, but it looks like it might work too
 
 track+ (http://sourceforge.net/projects/trackplus/) - same as netoffice
 
 of course, nothing will work unless everyone uses it, so you have to
 have clear, concise policies for change management, and then enforce 
 them.
 
 hth
 
 /joshua
 
  Any help in this regard is appreciated,
  
  Priyantha Pushpa Kumara
  ---
  Manager - Data Services
  Wightman Internet Ltd.
  Clifford, ON
  N0G 1M0
  Fax: 519-327-8010
  
  
  
 
 
 
 Walk with me through the Universe,
  And along the way see how all of us are Connected.
  Feast the eyes of your Soul,
  On the Love that abounds.
  In all places at once, seemingly endless,
  Like your own existence.
  - Stephen Hawking -
 
 
 
 



Re: uRPF-based Blackhole Routing System Overview

2003-11-12 Thread Scott McGrath


Vendor C calls it DHCP snooping and to the best of my knowledge it is only
available under IOS, not CatOS.


Scott C. McGrath

On Fri, 7 Nov 2003, Greg Maxwell wrote:

 
 On Fri, 7 Nov 2003, Robert A. Hayden wrote:
 
 [snip]
  One final note.  This system is pretty useless for modem pools, VPN
  concentrators, and many DHCP implementations.  The dynamic IP nature of
  these setups means you will just kill legitimate traffic next time someone
  gets the IP.  You can attempt to correlate your detection with the time
  they were handed out, of course, in the hopes you find them.
 
 Another approach to address this type of problem is the source spoofing
 preventing dynamic-acls support that some vendors have been adding to
 their products. I don't know if it's in anyone's production code-trains
 yet.
 
 The basic idea is that your switch snoops DHCP traffic to the port and
 generates an ACL based on the address assigned to the client. Removing a
 host is as simple as configuring your DHCP server to ignore its requests
 and perhaps sending a crafty packet (custom written DECLINE) to burp the
 existing ACL out of the switch.
 
 Vendor F calls this feature Source IP Port Security, I'm not sure what
 vendor C calls it.
 
 Since this is a layer 2 feature you can configure it far out on the edge
 and not just at the router.
 
 



Re: IPv6 NAT

2003-10-31 Thread Scott McGrath


Agreed, NATs do not create security, although many customers believe they
do.  NATs _are_ extremely useful in hiding network topologies from casual
inspection.

What I usually recommend to those who need NAT is a stateful firewall in
front of the NAT.  The rationale is that the NAT hides the topology and the
stateful firewall provides the security boundary.



Scott C. McGrath

On Thu, 30 Oct 2003, Stephen Sprunk wrote:


 Thus spake [EMAIL PROTECTED]
  Now, I'm not claiming that every device capable of IPv4 NAT is currently
  able to function in this way, but there are no technical barriers to
 prevent
  manufacturers from making IPv6 devices that function in this way. The
  IPv6 vendor marketing folks can even invent terms like NAT (Network
  Authority Technology) to describe this simple IPv6 firewall function, i.e.
  IPv6 NAT.

 Or you could simply call it what it is -- a firewall -- since that's what
 most consumers think NAT is anyways.

 While I disagree with the general sentiment that NATs create security, the
 standard usage of such devices is certainly that of a stateful firewall.

 S

 Stephen Sprunk God does not play dice.  --Albert Einstein
 CCIE #3723 God is an inveterate gambler, and He throws the
 K5SSSdice at every possible opportunity. --Stephen Hawking



Re: Yankee Group declares core routing obsolete (was Re: Anybodyusing GBICs?)

2003-10-31 Thread Scott McGrath


Funny, I thought a switch was a multiport bridge... uses the MAC
headers to flood.  Ahh, makes me long for the days of Kalpana.

Scott C. McGrath

On Fri, 31 Oct 2003, Stephen Sprunk wrote:


 Thus spake Daniel Golding [EMAIL PROTECTED]
  Hmm. Don't you just love it when folks say things like Layer 3 Switches
 are
  better than routers. Its very illuminating as to clue level.
 
  I suppose what they were trying to say, is that products that were
 designed
  as switches, but are now running routing code, are superior to products
 that
  were designed as routers, and are running routing code. Of course, this is
  demonstrably false.
 
  Layer 3 Switch is like Tier 1 ISP - meaningless marketing drivel,
  divorced from any previous technical meaning.

 I've always stated that switch is a marketing term meaning fast.  Thus a
 L2 switch is a fast bridge and a L3 switch is a fast router.  In
 this light, the Yankee Group is just now catching on to something we all
 knew a decade ago -- slow (i.e. software) routers are dead.

 There's a more interesting level to the discussion if you look at what
 carriers are interested in for their backbone hardware today; while I'm
 obviously biased based on my employer, I've seen a lot more emphasis on
 $20k-per-10GE-port L3 switches than $200k-per-10GE-port core routers in
 the current economic climate.

 S

 Stephen Sprunk God does not play dice.  --Albert Einstein
 CCIE #3723 God is an inveterate gambler, and He throws the
 K5SSSdice at every possible opportunity. --Stephen Hawking



Re: [arin-announce] IPv4 Address Space (fwd)

2003-10-30 Thread Scott McGrath


That was _exactly_ the point I was attempting to make.  If you recall,
there was a case recently where a subcontractor at a power generation
facility linked their system to an isolated network, which gave
unintentional global access to the isolated network.  A NAT at the
subcontractor's interface would have prevented this.


Scott C. McGrath

On Wed, 29 Oct 2003, Jack Bates wrote:

 
 David Raistrick wrote:
 
  
  You seem to be arguing that NAT is the only way to prevent inbound access.
  While it's true that most commercial IPv4 firewalls bundle NAT with packet
  filtering, the NAT is not required..and less-so with IPv6.
  
 
 I think the point that was being made was that NAT allows the filtering 
 of the box to be more idiot proof. Firewall rules tend to be complex, 
 which is why mistakes *do* get made and systems still get compromised. 
 NAT interfaces and setups tend to be more simplistic, and the IP 
 addresses of the device won't route publicly through the firewall or any 
 unknown alternate routes.
 
 -Jack
 




Re: [arin-announce] IPv4 Address Space (fwd)

2003-10-29 Thread Scott McGrath


And sometimes you use NAT because you really do not want the NAT'ed device
to be globally addressable, but it needs to have a link to the outside to
download updates.  Instrument controllers et al.

The wisdom of the design decision to use the internet as the only method
to provide software updates is left for individual cogitation.  (And no, I
am not talking about Win[*] products here.)

Scott C. McGrath




Re: [arin-announce] IPv4 Address Space (fwd)

2003-10-29 Thread Scott McGrath


Life would be much simpler without NAT, however there are non-computer
devices which use the internet to get updates for their firmware that most
of us would prefer not to be globally reachable, due to the human error
factor, i.e. Oops, forgot a rule to protect X.

The radar on your cruise ship uses an IP network to communicate with the
chartplotter, GPS, and depthsounder; do you really want _this_ gear globally
reachable via the internet?  Remember, if it's globally reachable it is
subject to compromise.

A good example of this is building control systems which get firmware
updates via FTP from their maker.  Usually there is no manual system
for updating them offline, which would allow them to be disconnected from the
internet, as in my opinion they _should_ be.

NAT is not security; just look at what you can do with sFlow to identify
machines behind a NAT.  NAT is useful for machines which need to
periodically make a connection to perform some function involving the
network.

This class of devices should not have a globally routable address,
because in many cases security on them is less than an afterthought (short
fixed passwords, no support for secure protocols, etc.).

The other case as pointed out by another poster is overlapping networks 
which need NAT until a renumbering can be accomplished.


Scott C. McGrath

On Wed, 29 Oct 2003, Miquel van Smoorenburg wrote:

 
 In article [EMAIL PROTECTED],
 Scott McGrath  [EMAIL PROTECTED] wrote:
 And sometimes you use NAT because you really do not want the NAT'ed device
 to be globally addressible but it needs to have a link to the outside to 
 download updates.  Instrument controllers et.al.
 
 I don't understand. What is the difference between a /24 internal
 NATted network, and a /64 internal IPv6 network that is firewalled
 off: only packets to the outside allowed, and packets destined for
 the inside need to have a traffic flow associated with it.
 
 As I see it, NAT is just a stateful firewall of sorts. A broken one,
 so why not use a non-broken solution ?
 
 We can only hope that IPv6 capable CPE devices have that sort
 of stateful firewalling turned on by default. Or start educating
 the vendors of these el-cheopo CPE devices so that they will
 all have that kind of firewalling enabled before IPv6 becomes
 mainstream.
 
 Mike.
 



Re: NTP, possible solutions, and best implementation

2003-10-03 Thread Scott McGrath


The recommendations of others to place the Stratum 1 source behind another
box are indeed good operational practice.  However, if you _really_ want to
provide Stratum 1 services there are a couple of options:

1 - Purchase a cesium clock.  This is a primary time/frequency standard
which does not require access to a reference standard to maintain
accuracy.

This is a Stratum 0 source, so once placed behind a Unix/Cisco/Juniper
box you have a Stratum 1 source.   This will cost you 30,000 -
100,000 US per unit.   The beam tube will require replacement
approx every 5 years for about 20,000 US.

2 - Set up a Stratum 1 source but use MD5 authentication to prevent
unauthorized users from accessing the service.

 

Scott C. McGrath

On Thu, 2 Oct 2003, Ariel Biener wrote:

 
 
 
   Hi,
 
 
Assuming one wanted to provide a high profile (say, at the TLD level) NTP 
 service, how would you go about it ?
 
The possibilities I encountered are diverse, the problem is not the 
 back-end device (be it a GPS based NTP source + atomic clock backup, based on 
 cesium or similar), but the front end to the network. Such a time service is 
 something that is considered a trusted stratum 1 server, and assuring that no 
 tampering with the time is possible is of very high priority, if not top 
 priority.
 
 There are a few NTP servers solutions, I like the following comparison 
 between one company's products (Datum, merged into Symmetricom):
 
 http://www.ntp-systems.com/product_comparison.asp
 
 However, when you put such a device on a network, you want to have some 
 kind of clue about the investment made in that product when security comes to 
 mind, and also the turnaround time for bug fixes should such security bug 
 become public. Here is the problem, or actually, my problem with these 
 devices. I know that if I use a Unix machine or a Cisco router as front end 
 to the network for this back-end device, then if a bug in NTP occurs, Cisco 
 or the Unix vendor will fix it quickly. BUT!, if I want to put the device 
 itself on the network, as this is what a NTP device was built for, I feel 
 that I have no real sense of how secure the device really is, and how long it 
 would take for the vendor to actually fix the bug, should such be discovered. 
 It's a black box, and I am supposed to provide a secure time source based on 
 ... what ?
 
 This is my dilemma. While I don't want to put a NTP front end, which
 becomes a stratum 2 in this case, but to provide direct stratum 1 service to 
 stratum 2 servers in the TLD in question, I do not know how can I safely 
 trust a device that I have no experience with how the vendor deals with bugs, 
 and also, I have no idea what is the underlying software (although it's safe 
 to assume that it is an implementation of xntpd, in one form or the other).
 
Did any of you have to create/run/maintain such a service, and does any of 
 you have experience with vendors/products that can be trusted when security 
 is concerned (including the vendor and the products I specified above).
 
 thanks for your time,
 
 --Ariel 
 
 
 --
 Ariel Biener
 e-mail: [EMAIL PROTECTED]
 PGP(6.5.8) public key http://www.tau.ac.il/~ariel/pgp.html
 



Re: NTP, possible solutions, and best implementation

2003-10-03 Thread Scott McGrath


Two relevant points on GPS/LORAN

1 - GPS has two positioning systems

1 - SPS, the Standard Positioning Service, which is what all civilian
uses of GPS rely on for positioning and timing.  This can
be degraded or disabled with no notice to the user community
by the National Command Authority.

2 - PPS, the Precision Positioning Service.  This is the military GPS system,
which uses encrypted signals on a different frequency to provide
location services accurate to 30 cm.   SPS can be disabled with no
effect on PPS.

I have no knowledge of why there are two systems, since the system
was initially designed for military use only, but as a guess the
SPS system was designed as a test system so GPS
functionality could be checked without the need to disclose keys.


2 - GPS is more accurate than LORAN; however, SPS is by design much less
repeatable than LORAN.  A LORAN may not give you as accurate
a fix as GPS, but the LORAN will always bring you back to the
same spot +/- a few feet, which is why aviators and sailors like LORAN
better than GPS.

2.5 - Both systems use atomic clocks for their time reference systems.

Scott C. McGrath

On Thu, 2 Oct 2003, joe mcguckin wrote:

 
 
  It depends upon how low a probability failure you're willing to consider
  and how paranoid you are. For one thing, the U.S. National Command Authority
  could decide that GPS represents a threat to national security and disable
  or derate GPS temporarily or indefinitely over a limited or unlimited area.
  
 
 Derating GPS wouldn't affect the time reference functionality. Turning off
 GPS entirely would seriously affect military aviation operations.
 
  It is well known that GPS is vulnerable to deliberate attacks in limited
  areas, perhaps even over large areas (see Presidential Decision Directive
  63). Backup systems are officially recommended for safety-critical
  applications and the US government is actively intersted in developing
  low-cost backup systems (presumably because they're concerned about GPS as a
  SPOF too).
  
  The US government, and other entities, do perform GPS interference
  testing. This basically means they interfere with GPS. The government is
  also actively investigating phase-over to private operation, which could
  mean changes to operation, fee system, or reliability of the GPS system.
  
  One could also imagine conditions that would result in concurrent failures
  of large numbers of satellites. Remember what happened to Anik E-1 and E-2
  (space weather caused them to spin out of control).
  
  If you do develop a system with GPS as a SPOF, you should certainly be
  aware of these risks and monitor any changes to the political and technical
  climate surrounding GPS. I do believe that it is currently reasonable to
  have GPS as a SPOF for a timing application that is not life critical (that
  is, where people won't die if it fails).
  
  Aviators try very, very hard not to trust their lives to GPS.
 
 
 As opposed to LORAN ?
 



Re: Cisco filter question

2003-08-22 Thread Scott McGrath


Geo,

Look at your 'set interface Null0' command; the rest is correct.
You want to set the next hop to be Null0.  How to do this is left as an
exercise for the reader.

Scott C. McGrath

On Fri, 22 Aug 2003, Geo. wrote:

 
 Perhaps one of you router experts can answer this question. When using the cisco 
 specified filter
 
  access-list 199 permit icmp any any echo
 access-list 199 permit icmp any any echo-reply

 route-map nachi-worm permit 10
   ! --- match ICMP echo requests and replies (type 0 & 8) 
   match ip address 199
 
   ! --- match 92 bytes sized packets
   match length 92 92
  
   ! --- drop the packet
   set interface Null0

 
 interface incoming-interface
   ! --- it is recommended to disable unreachables
   no ip unreachables
  
   ! --- if not using CEF, enabling ip route-cache flow is recommended
   ip route-cache policy
  
   ! --- apply Policy Based Routing to the interface
   ip policy route-map nachi-worm 
 
 why would it not stop this packet
 
 15 1203.125000 0003E3956600 AMERIC6625D4 ICMP Echo: From 216.144.20.69 To 
 216.144.00.27 216.144.20.69 216.144.0.27 IP 
 FRAME: Base frame properties
 FRAME: Time of capture = 8/22/2003 11:54:16.859
 FRAME: Time delta from previous physical frame: 0 microseconds
 FRAME: Frame number: 15
 FRAME: Total frame length: 106 bytes
 FRAME: Capture frame length: 106 bytes
 FRAME: Frame data: Number of data bytes remaining = 106 (0x006A)
 ETHERNET: ETYPE = 0x0800 : Protocol = IP:  DOD Internet Protocol
 ETHERNET: Destination address : 00C0B76625D4
 ETHERNET: ...0 = Individual address
 ETHERNET: ..0. = Universally administered address
 ETHERNET: Source address : 0003E3956600
 ETHERNET: ...0 = No routing information present
 ETHERNET: ..0. = Universally administered address
 ETHERNET: Frame Length : 106 (0x006A)
 ETHERNET: Ethernet Type : 0x0800 (IP:  DOD Internet Protocol)
 ETHERNET: Ethernet Data: Number of data bytes remaining = 92 (0x005C)
 IP: ID = 0x848; Proto = ICMP; Len: 92
 IP: Version = 4 (0x4)
 IP: Header Length = 20 (0x14)
 IP: Precedence = Routine
 IP: Type of Service = Normal Service
 IP: Total Length = 92 (0x5C)
 IP: Identification = 2120 (0x848)
 IP: Flags Summary = 0 (0x0)
 IP: ...0 = Last fragment in datagram
 IP: ..0. = May fragment datagram if necessary
 IP: Fragment Offset = 0 (0x0) bytes
 IP: Time to Live = 124 (0x7C)
 IP: Protocol = ICMP - Internet Control Message
 IP: Checksum = 0x70D8
 IP: Source Address = 216.144.20.69
 IP: Destination Address = 216.144.0.27
 IP: Data: Number of data bytes remaining = 72 (0x0048)
 ICMP: Echo: From 216.144.20.69 To 216.144.00.27
 ICMP: Packet Type = Echo
 ICMP: Echo Code = 0 (0x0)
 ICMP: Checksum = 0x82AA
 ICMP: Identifier = 512 (0x200)
 ICMP: Sequence Number = 7680 (0x1E00)
 ICMP: Data: Number of data bytes remaining = 64 (0x0040)
 0:  00 C0 B7 66 25 D4 00 03 E3 95 66 00 08 00 45 00   .À·f%Ô..ã•f...E.
 00010:  00 5C 08 48 00 00 7C 01 70 D8 D8 90 14 45 D8 90   .\.H..|.pØؐ.Eؐ
 00020:  00 1B 08 00 82 AA 02 00 1E 00 AA AA AA AA AA AA   ‚ªªª
 00030:  AA AA AA AA AA AA AA AA AA AA AA AA AA AA AA AA   
 
 00040:  AA AA AA AA AA AA AA AA AA AA AA AA AA AA AA AA   
 
 00050:  AA AA AA AA AA AA AA AA AA AA AA AA AA AA AA AA   
 
 00060:  AA AA AA AA AA AA AA AA AA AA ªª  
 



Re: Sea sponge builds a better glass fiber

2003-08-21 Thread Scott McGrath


The natural enemy in this case would be the filefish or the angelfish who
eat the sponges...

Scott C. McGrath

On Thu, 21 Aug 2003, David Meyer wrote:


  I'm still waiting for the discovery of its natural enemy, the Backhoeiosaur.

   All kidding aside, my concern is that it's natural enemy
   has just found it.

  It's such a wonderful example of how exquisite nature is as a
  designer and builder of complex systems, said Geri Richmond, a
  chemist and materials scientist at the University of Oregon who
  wasn't involved in the study.
 
  We can draw it on paper and think about engineering it but
  we're in the stone age compared to nature, she said.

   That much seems clear.

   Dave




Re: Plano, TX Legacy: Fiber Provider or Wireless Wireless question

2003-08-20 Thread Scott McGrath


Wireless is a good option but you might want to look at the licensed 
services as well.  Licensing in  most cases is a formality handled by the 
vendor along with a nominal user fee sent to the FCC.

Unlicensed systems are regulated by Part 15 of the FCC regulations, which
reads DEVICE MUST ACCEPT INTERFERENCE.  This means that if another service with
primary allocation in those frequency bands begins to interfere with your
service, you are up a well known creek without propulsion.

Secondly, if your device/link interferes with a licensed device, YOU must
fix the interference at your expense or terminate the operation of the
interfering device.

This part of the US Code has the full power and majesty of the federal
government behind it, and since the primary services in these bands are the
Government Radiolocation Service in fedspeak, better known as military
radar to the rest of us, the enforcement stick is quite large
(5-10k$/day fines and prison terms).



Scott C. McGrath

On Wed, 20 Aug 2003, N. Richard Solis wrote:

 
 Wireless is a good option with a few caveats:
 
 1. At the speeds you are talking about, you need line of sight. 
 Usually, this means getting up high to account for curvature of the 
 earth and clearing of what is called the fresnel zone for the particular 
 frequency you are using.
 
 2. You will need to use some of the higher frequency systems to get link
 speeds of a gig or more.  There are 23 GHz unlicensed systems as well as
 60 GHz unlicensed systems.  The 60 GHz systems will get you higher speeds
 but the link distance will be on the order of hundreds of meters.
 
 3. Link planning will be a critical exercise.  If you really NEED the
 high availability, you can get it by properly considering the distance
 you need to go, the speeds you will use, the frequencies you will
 transmit at, and the statistical expectations of weather and other
 factors that will affect the total path attenuation the system will
 encounter (a rough sketch of the basic path math follows below).  Systems
 that average availability of 99.99% are commonplace
 and 99.999% can be achieved by using shorter path distances.
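
 A rough sketch of the free-space numbers mentioned above (purely
 illustrative; a real link budget must also include antenna gains, rain
 fade, and the radio's specifications; the 1 km / 23 GHz figures are
 assumptions):

 #!/usr/bin/env python
 # First Fresnel zone radius at mid-path and free-space path loss.
 # The distance and frequency below are example assumptions only.
 import math

 C = 3.0e8                      # speed of light, m/s

 def fresnel_radius_m(d_m, f_hz):
     # radius of the first Fresnel zone at the middle of the path
     wavelength = C / f_hz
     return 0.5 * math.sqrt(wavelength * d_m)

 def fspl_db(d_km, f_ghz):
     # free-space path loss: 92.45 + 20*log10(d_km) + 20*log10(f_GHz)
     return 92.45 + 20 * math.log10(d_km) + 20 * math.log10(f_ghz)

 print("Fresnel radius : %.1f m" % fresnel_radius_m(1000.0, 23.0e9))  # ~1.8 m
 print("Free-space loss: %.1f dB" % fspl_db(1.0, 23.0))               # ~119.7 dB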
 
 Try the guys at www.ydi.com.  They will steer you right.
 
 -Richard
 
 
 
 
 [EMAIL PROTECTED] wrote:
 
  
   Looking for any advice or pointers for obtaining
   multiple Gig links (last mile) in the Plano, TX
   area.  The abundance of fiber options here seems
   to be decidedly underwhelming. Looking for suggestions
   including creative options such as wireless. I
   need to get from Plano to any closest better place for
   picking up multiple Gig Internet links.  Wondering
   too what other large companies in this area have done
   for large internet links...any advice appreciated.
  
   Also, I'm reading now that more ISP's are using
   wireless for last mile provisioning on the new
   unlicensed frequencies.  Was wondering if anyone
   had experience using Dragonwave or any similar
   wireless products in Texas. Do sandstorms and
   golf ball sized hail pose significant issues?
   Severe thunderstorms?  Would like to chat with
   anyone with significant wireless experience in
    the Dallas area. Wouldn't mind speaking with an
    unfluffed sales person either. :-)
  
  
  
  
 
 



Re: Did Sean Gorman's maps show the cascading vulnerability in Ohio?

2003-08-18 Thread Scott McGrath


A measured response is needed.  Obviously we do not want the
vulnerabilities disclosed to bored teenagers looking for excitement.
We need controlled access to this data, so that those of us who need the
data to fix vulnerabilities can gain access to it while access is denied to
people without a legitimate need for the data.

The Dig Safe program might be a good model for controlling access to
Sean's work.   This would not preclude further scholarship on Sean's work
but it would keep the data out of the hands of the 31337 crowd.


Scott C. McGrath

On Sun, 17 Aug 2003, Sean Donelan wrote:



 So, the US Government wants to classify Sean Gorman's student project.
 The question is did Mr. Gorman's maps divulge the vulnerability in the
 East Coast power grid that resulted in the blackouts this week?

 Would it be better to know about these vulnerabilities, and do something
 about them; or is it better to keep them secret until they fail in a
 catastrophic way?






Re: Did Sean Gorman's maps show the cascading vulnerability in Ohio?

2003-08-18 Thread Scott McGrath


Remember that Dig Safe is implemented on a state by state basis, and some of
the programs, like the one you describe, are dreadful.  The one in my home
state is fairly thorough in checking bona fides before providing the data.

I believe in setting a fairly low bar for access to this information, i.e.
can you _prove_ that you have legitimate cause for access to it?
The proof would be: do you have fiber/conduit/circuits/pipelines?
These all have identifiers which can be checked, and generally only the
customer and the service provider have this information, not simply whose
fibers are in the conduit attached to the railroad bridge.  If you own one
of those fibers you get access to the information on who else is in the
conduit; if you do not, you are not
privy to the information.

We had an incident where someone accidentally started a fire under a bridge
and burned through a PVC conduit, knocking phone and data out for the
better part of a week for 100,000+ lines.  I really do not want that type
of information in the hands of a bored teenager, who would then be able
to identify potential targets so that they can be _famous_.

Remember, when you go to a library to study rare manuscripts you generally
need to prove to the curator that you have a legitimate scholarly interest
in the documents, not simply random curiosity.

Scott C. McGrath

On Mon, 18 Aug 2003, Mr. James W. Laferriere wrote:


   Hello Scott ,

 On Mon, 18 Aug 2003, Scott McGrath wrote:
  A measured response is needed.  Obviosly we do not want the
  vulnerabilities disclosed to bored teenagers looking for excitement.
  We need controlled access to this data so that those of us who need the
  data to fix vulnerabilities can gain access to it but access is denied to
  people without a legitimate need for the data.
   And my statement would be ,  And who is that authority ?
   The government ?  The Utilities ?  The ... ?

  The Dig Safe program might be a good model for controlling access to
  Sean's work.   This would not preclude further scholarship on Sean's work
  but it would keep the data out of the hands of the 31337 crowd.
   Huh ?,  Try this on for size ,  Hello ,  I am joe's contracting
   service  I have a building permit(I do) and I need to dig at ...
    If I remember correctly the Dig Safe program will give me the
   info without so much as a check on the permit or my company name .

   But ,  Something (may) need to be put in place .  I for one am not
   a great fan of any group of X that has a vested interest in
   keeping the information out of the public hands as being the ones
   to administer or setup or even give suggestions to a body who'd be
   involved in setting up such a commitee/org./...

   I'd really like to see a Public forum be used to take
   suggestions from the PUBLIC (ie: you  I  that neighbor you hate
   so well) for the guide lines as to who /or when such info s/b
   released .  Not the Gov. or the Util Alone .

  On Sun, 17 Aug 2003, Sean Donelan wrote:
   So, the US Government wants to classify Sean Gorman's student project.
   The question is did Mr. Gorman's maps divulge the vulnerability in the
   East Coast power grid that resulted in the blackouts this week?
   Would it be better to know about these vulnerabilities, and do something
   about them; or is it better to keep them secret until they fail in a
   catastrophic way?
   Twyl ,  JimL
 --
+--+
| James   W.   Laferriere | SystemTechniques | Give me VMS |
| NetworkEngineer | P.O. Box 854 |  Give me Linux  |
| [EMAIL PROTECTED] | Coudersport PA 16915 |   only  on  AXP |
+--+




Re: Did Sean Gorman's maps show the cascading vulnerability in Ohio?

2003-08-18 Thread Scott McGrath


Information should be free.  This, however, assumes that people will be
_responsible_ for what is done with the information.

On Manuel and Jose: with a valid permit number they get the information.
If Bubba and Joe do not have a _valid_ permit number they do not get the
information, because in the absence of a legitimate need for this information
they probably should not have it.

Try going to a presidential library and trying to access the information
there: you still need a legitimate scholarly interest to access any of the
information deemed _sensitive_ by the curator.  In most of these cases the
documents are available on microfilm or digitally, so fragility has nothing
to do with the access restrictions on the document, but harm to the
subject of the documents does play a significant role in what information
is released to the general public and what is restricted to scholarly
interests.

I want to live in a world where information can be free; however, this is a
utopian ideal which does not work in the _real_ world.  We as a group need
to create a system which allows access to this information WITHOUT
resorting to having GOVERNMENT control access to the information.  BUT we
also need to ensure that the information is used responsibly.   Having
secrets benefits no one except the keeper of the secrets.

Scott C. McGrath

On Mon, 18 Aug 2003, Paul Wouters wrote:


 On Mon, 18 Aug 2003, Scott McGrath wrote:

  Remember when you go to a library to study rare manuscripts you generally
  need to prove to the curator that you have a legitimate scholarly interest
  in the documents not simply random curiousity.

 That's because those old manuscripts are fragile, not because they think
 the information should stay secret.

 If you want to live in a world where this type of information needs to
 be hidden, go ahead and finish your totalitarian state. The US isn't far
 off anyway.

 Paul




Re: Did Sean Gorman's maps show the cascading vulnerability in Ohio?

2003-08-18 Thread Scott McGrath


We have a permutation of this in NH.  When the hole is greater than 1'
deep we need a permit.  This does illustrate the difficulties, though.  We
have too much government interference now, _but_ we do need some way of
ensuring that information is used responsibly, and I do not think that a
government agency is the right way to go about solving this dilemma.

Out here in the sticks a popular form of entertainment seems to be
shooting out the insulators on transmission lines.  I really do not want
to tell Bubba and Joe which lines will plunge the region into darkness.
On the other hand, I need the information so that I can put in place the
appropriate measures to ensure that services stay online in the event
Bubba and Joe hit the wrong line.

Scott C. McGrath

On Mon, 18 Aug 2003, Kevin Oberman wrote:

  Date: Mon, 18 Aug 2003 11:15:02 -0400 (EDT)
  From: Scott McGrath [EMAIL PROTECTED]
  Sender: [EMAIL PROTECTED]
 
 
 
  Information should be free.  This however assumes that people will be
  _responsible_ for what is done with the information.
 
  On Manuel and Jose - with a valid permit number they get the information
  if Bubba and Joe do not have a _valid_ permit number they do not get the
  information because in the absence of legitimate need for this information
  they probably should not have it

 This does not at all match reality. People don't have to have a permit
 to need to call USA (Call before you dig where I live.) Many things
 not requiring a permit do require calling USA before digging. I just
 tell them that I am digging at a location and they tell me if it's OK
 and if anything is near-by.

 I've been told to call for any dig deeper than 1 foot. Planting a tree
 does not require a permit, but the hole is plenty deep enough to be a
 problem!
 --
 R. Kevin Oberman, Network Engineer
 Energy Sciences Network (ESnet)
 Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
 E-mail: [EMAIL PROTECTED] Phone: +1 510 486-8634




RE: Microsoft to ship new versions with firewall enabled

2003-08-14 Thread Scott McGrath



The Checkpoint and PIX boxen are what we use here, but we also use
ipchains to secure things at a host level.

Scott C. McGrath

On Thu, 14 Aug 2003, Drew Weaver wrote:



 ipchains and similar firewalls are indeed far superior.  I manage real
 firewalls as part of my responsibilities.

 However the new microsoft policy will help protect the network from Joe
 and Jane average who buy a PC from the closest big box store and hook it
 up to their cable modem so they can exchange pictures of the kids with the
 grandparents in Fla.  This is the class of users who botnet builders dream
 about because these people do not see a computer as a complex system which
 _requires_ constant maintenance but as a semi-magical device for moving
 images and text around.

 

 I don't believe that many people really see ipchains as a real viable
 firewall. I think it is awesome, but in many corporations simply mentioning
 it gets you a stern eyeing. Of course these corporations can spend tons of
 money on Checkpoint and PIX boxen.

 -Drew






Re: Microsoft to ship new versions with firewall enabled

2003-08-14 Thread Scott McGrath


No answer on that one; however, Mac OS X also includes a built-in firewall.

On the configuration angle, the Microsoft ICF (Internet Connection
Firewall) blocks all unsolicited inbound traffic by default.
Scott C. McGrath

On Thu, 14 Aug 2003, John Neiberger wrote:


  Sean Donelan [EMAIL PROTECTED] 8/14/03 8:29:07 AM 
 John Markoff reports in the New York Times that Microsoft plans to
 change
 how it ships Windows XP due to the worm.  In the future Microsoft
 will
 ship both business and consumer verisons of Windows XP with the
 included
 firewall enabled by default.

 [Veering further off-topic]

 Hmm...I didn't even know XP had a built-in firewall.  Any bets on how
 long it is before other companies with software firewall products bring
 suit against Microsoft for bundling a firewall in the OS?
 --




Re: Gigabit Media Converter

2003-08-14 Thread Scott McGrath


Where can you get CWDM GBICs for under $400?  Most vendors charge 5-10x
that price.

Scott C. McGrath

On Tue, 12 Aug 2003, Vincent J. Bono wrote:


 Thanks for all the links and help!

 The issue is cost and space, and all the products that will work seem to
 cost upwards of $3,000 and do a lot more than we need or take up a few rack
 units of space. I am probably going to build a small circuit to handle
 connecting two GBICs back to back.  The pinout from molex was readily
 available and we can get CWDM GBICs these days for $400 or less and more
 normal frequencies for sub $150.

 Anyway, anyone who is interested in the final product send me email off-list
 and I'll keep you posted.

 -vb

 - Original Message -
 From: Mikael Abrahamsson [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Tuesday, August 12, 2003 7:21 AM
 Subject: Re: Gigabit Media Converter


 
  On Tue, 12 Aug 2003, Stephen J. Wilcox wrote:
 
   Sounds like you need a singlemode-multimode convertor, available from
 various
   places, cost around $600
 
  Highly unlikely that it'll do CWDM, at least at that price.
 
  Transmode (www.transmode.se) does converters to order, they'll fix things
  that'll do pretty much any to any (850 / 1310 / (1490-1610) in any
  combination) including 3R and management (which implies that you need
  ethernet onsite which might be tricky).
 
  I'd believe they're more in the $3k-$5k range with CWDM optics though. If
  you need OC48 that'll hike the price up even more.
 
  --
  Mikael Abrahamssonemail: [EMAIL PROTECTED]
 




Re: Cisco vulnerability and dangerous filtering techniques

2003-07-23 Thread Scott McGrath


Another argument for OSPF authentication, it seems.   However, we are
still out of luck on rogue STP announcements
unless you configure all the neat little *guard features (bpduguard, rootguard,
etc.) from Cisco et al.



On Wednesday, July 23, 2003, at 12:34 PM, [EMAIL PROTECTED] wrote:


Like I said, it's not going to be perfect, but it is better than 
blindly
spewing out evil packets.
Between me and you, ospf packets or bad stp packets are a lot more 
dangerous
than the whack a cisco router. Just try it.

Alex



IOS Vulnerability

2003-07-16 Thread Scott McGrath


For full details about the vulnerability see

http://www.cisco.com/en/US/products/hw/routers/ps341/products_security_advisory09186a00801a34c2.shtml

Scott C. McGrath