Re: qwest.net dropping packets... wife would like someone to pick them up please...

2012-11-04 Thread PC
For some more information, this previous document and presentation make
good resources:

Document:

http://www.nanog.org/meetings/nanog47/presentations/Sunday/RAS_Traceroute_N47_Sun.pdf

There's also a presentation here:

http://www.nanog.org/meetings/nanog45/presentations/Interpret_traceroutes.wmv




On Sat, Nov 3, 2012 at 11:10 PM, Randy randy_94...@yahoo.com wrote:

 --- On Sat, 11/3/12, Christopher Morrow morrowc.li...@gmail.com wrote:

  From: Christopher Morrow morrowc.li...@gmail.com
  Subject: Re: qwest.net dropping packets... wife would like someone to
 pick them up please...
  To: Randy Bush ra...@psg.com
  Cc: North American Network Operators' Group nanog@nanog.org
  Date: Saturday, November 3, 2012, 7:04 PM
  On Sat, Nov 3, 2012 at 3:07 AM, Randy
  Bush ra...@psg.com
  wrote:
   one router along the path showing loss that does
  not continue to
   affect the rest of the path simply means the cpu on
  that router
   is a bit too busy to respond to icmp messages
  
   trivial footnote: some folk configure some routers to
  rate limit
   icmp
 
  other trivial footnote, not all traceroute is icmp.
 
 True, with respect to the destination. However, all intermediate (including penultimate)
 hops reveal themselves via ICMP Type 11, Code 0 (TTL exceeded in transit)
 ./Randy
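Randy's point can be illustrated with a short sketch (illustrative bytes only, Python stdlib): whatever probe type traceroute sends, intermediate hops answer with an ICMP Time Exceeded message, identified by the first two bytes of the ICMP header.

```python
import struct

ICMP_TIME_EXCEEDED = 11      # ICMP Type 11
TTL_EXCEEDED_IN_TRANSIT = 0  # Code 0

def parse_icmp_header(message: bytes):
    """Return (type, code, checksum) from the 4-byte ICMP header."""
    return struct.unpack("!BBH", message[:4])

# Illustrative only: build a Time Exceeded header (checksum value is fake)
sample = struct.pack("!BBH", ICMP_TIME_EXCEEDED, TTL_EXCEEDED_IN_TRANSIT, 0)
icmp_type, icmp_code, _ = parse_icmp_header(sample)
print(icmp_type, icmp_code)  # 11 0
```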





Re: IPv6 Netowrk Device Numbering BP

2012-11-04 Thread Tore Anderson
* Owen DeLong

 What do you get from SIIT that you don't get from dual stack in a
 datacenter?

In no particular order:

- Single stack is much simpler than dual stack. A single stack to
configure, a single ACL to write, a single service address to monitor,
staff needs to know only a single protocol, development staff needs only
develop and do QA for a single protocol, it's a single topology to
document, a single IGP to run and monitor, a single protocol to debug
and troubleshoot, one less attack vector for the bad guys, and so on. I
have a strong feeling that the reason why dual stack failed so miserably
as a transition mechanism was precisely because of the fact that it adds
significant complexity and operational overhead, compared to single stack.

- IPv4 address conservation. If you're running out of IPv4 addresses,
you cannot use dual stack, as dual stack does nothing to reduce your
dependency on IPv4 compared to single stack IPv4. With dual stack,
you'll be using (at least) one IPv4 address per server, plus a bit of
overhead due to the server LAN prefixes needing to be rounded up to the
nearest power of two (or higher if you want to accommodate future
growth), plus overhead due to the network infrastructure. With SIIT, on
the other hand, you'll be using a single IPv4 address per publicly
available service - one /32 out of a pool, with nothing going to waste
due to aggregation, network infrastructure, and so on.

- Promotes first-class native IPv6 deployment. Not that dual stack isn't
native IPv6 too, but I do have the impression that often, IPv6 in a dual
stacked environment is a second class citizen. IPv6 might be only
partially deployed, not monitored as well as IPv4, or that there are
architectural dependencies on IPv4 in the application stack, so that you
cannot just shut off IPv4 and expect it to continue to work fine on IPv6
only. With SIIT, you get only a single, first-class citizen - IPv6. And
it'll be the only IPv6 migration/transition/deployment project you'll
ever have to do. When the time comes to discontinue support for IPv4,
you just remove your IN A records and shut down the SIIT gateway(s),
there will be no need to touch the application stack at all.
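As a concrete sketch of the stateless address mapping involved (my illustration, not necessarily the exact mechanism in Tore's deployment): translators of this kind commonly map the whole IPv4 space into an IPv6 /96, e.g. the RFC 6052 well-known prefix 64:ff9b::/96, so the translation is pure arithmetic:

```python
import ipaddress

# RFC 6052 well-known prefix; a deployment may use a network-specific /96.
PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def embed_ipv4(v4: str) -> ipaddress.IPv6Address:
    """Return the IPv4-embedded IPv6 address for v4 under PREFIX."""
    return ipaddress.IPv6Address(
        int(PREFIX.network_address) | int(ipaddress.IPv4Address(v4))
    )

print(embed_ipv4("192.0.2.1"))  # 64:ff9b::c000:201
```

Because the mapping is a function of the address alone, no per-flow state is needed anywhere.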

As I said earlier, I will submit an IETF draft about the use case
shortly (it seems that the upload page is closed right now, due to the
upcoming IETF meeting), and I invite you to participate in the
discussion - hopefully, we can work together to address your technical
concerns with the solution.

I did present the use case at RIPE64, by the way - I hope you will find
these links interesting:

https://ripe64.ripe.net/archives/video/37
https://ripe64.ripe.net/presentations/67-20120417-RIPE64-The_Case_for_IPv6_Only_Data_Centres.pdf

Best regards,
-- 
Tore Anderson
Redpill Linpro AS - http://www.redpill-linpro.com/



Re: mail-abuse.org down?

2012-11-04 Thread Alexander Maassen
Looks like it's down again

From ge0-1-v201.r2.mst1.proxility.net (77.93.64.146) icmp_seq=1
Destination Host Unreachable

Now that could be through a filter... however:

--2012-11-04 11:07:25--  http://www.mail-abuse.org/
Resolving www.mail-abuse.org... 150.70.74.99
Connecting to www.mail-abuse.org|150.70.74.99|:80... failed: No route to
host.


trace itself ends at my own provider's gateway...




Re: mail-abuse.org down?

2012-11-04 Thread Suresh Ramasubramanian
MAPS was taken over by Trend Micro years back; maybe they just retired the
old domain?

--srs (htc one x)
On Nov 4, 2012 4:14 PM, Alexander Maassen outsi...@scarynet.org wrote:

 Looks like it's down again

 From ge0-1-v201.r2.mst1.proxility.net (77.93.64.146) icmp_seq=1
 Destination Host Unreachable

 Now that could be through a filter... however:

 --2012-11-04 11:07:25--  http://www.mail-abuse.org/
 Resolving www.mail-abuse.org... 150.70.74.99
 Connecting to www.mail-abuse.org|150.70.74.99|:80... failed: No route to
 host.


 trace itself ends at my own provider's gateway...



Re: IPv6 Netowrk Device Numbering BP

2012-11-04 Thread Owen DeLong

On Nov 4, 2012, at 1:55 AM, Tore Anderson tore.ander...@redpill-linpro.com 
wrote:

 * Owen DeLong
 
 What do you get from SIIT that you don't get from dual stack in a
 datacenter?
 
 In no particular order:
 
 - Single stack is much simpler than dual stack. A single stack to
 configure, a single ACL to write, a single service address to monitor,
 staff needs to know only a single protocol, development staff needs only
 develop and do QA for a single protocol, it's a single topology to
 document, a single IGP to run and monitor, a single protocol to debug
 and troubleshoot, one less attack vector for the bad guys, and so on. I
 have a strong feeling that the reason why dual stack failed so miserably
 as a transition mechanism was precisely because of the fact that it adds
 significant complexity and operational overhead, compared to single stack.
 

Except that with SIIT, you're still dealing with two stacks, just moving
the place where you deal with them around a bit. Further, you're adding
the complication of NAT into your world (SIIT is a form of NAT whether
you care to admit that to yourself or not).

 - IPv4 address conservation. If you're running out of IPv4 addresses,
 you cannot use dual stack, as dual stack does nothing to reduce your
 dependency on IPv4 compared to single stack IPv4. With dual stack,
 you'll be using (at least) one IPv4 address per server, plus a bit of
 overhead due to the server LAN prefixes needing to be rounded up to the
 nearest power of two (or higher if you want to accommodate future
 growth), plus overhead due to the network infrastructure. With SIIT, on
 the other hand, you'll be using a single IPv4 address per publicly
 available service - one /32 out of a pool, with nothing going to waste
 due to aggregation, network infrastructure, and so on.

Since you end up dealing with NAT anyway, why not just use NAT for IPv4
conservation. It's what most engineers are already used to dealing with
and you don't lose anything between it and SIIT. Further, for SIIT to
work, you don't really conserve any IPv4 addresses, since address
conservation requires state.

 
 - Promotes first-class native IPv6 deployment. Not that dual stack isn't
 native IPv6 too, but I do have the impression that often, IPv6 in a dual
 stacked environment is a second class citizen. IPv6 might be only
 partially deployed, not monitored as well as IPv4, or that there are
 architectural dependencies on IPv4 in the application stack, so that you
 cannot just shut off IPv4 and expect it to continue to work fine on IPv6
 only. With SIIT, you get only a single, first-class citizen - IPv6. And
 it'll be the only IPv6 migration/transition/deployment project you'll
 ever have to do. When the time comes to discontinue support for IPv4,
 you just remove your IN A records and shut down the SIIT gateway(s),
 there will be no need to touch the application stack at all.

Treating IPv6 as a second class citizen is a choice, not an inherent
consequence of dual-stack. IPv6 certainly isn't a second class citizen
on my network or on Hurricane Electric's network.

 As I said earlier, I will submit an IETF draft about the use case
 shortly (it seems that the upload page is closed right now, due to the
 upcoming IETF meeting), and I invite you to participate in the
 discussion - hopefully, we can work together to address your technical
 concerns with the solution.

I'll look forward to reading the draft.

 I did present the use case at RIPE64, by the way - I hope you will find
 these links interesting:
 
 https://ripe64.ripe.net/archives/video/37
 https://ripe64.ripe.net/presentations/67-20120417-RIPE64-The_Case_for_IPv6_Only_Data_Centres.pdf

I'll try to look them over when I get some time.

Owen




Re: IPv6 Netowrk Device Numbering BP

2012-11-04 Thread Tore Anderson
* Owen DeLong

 On Nov 4, 2012, at 1:55 AM, Tore Anderson 
 tore.ander...@redpill-linpro.com wrote:
 
 * Owen DeLong
 
 What do you get from SIIT that you don't get from dual stack in a
 datacenter?
 
 In no particular order:
 
 - Single stack is much simpler than dual stack. A single stack to 
 configure, a single ACL to write, a single service address to 
 monitor, staff needs to know only a single protocol, development 
 staff needs only develop and do QA for a single protocol, it's a 
 single topology to document, a single IGP to run and monitor, a 
 single protocol to debug and troubleshoot, one less attack vector 
 for the bad guys, and so on. I have a strong feeling that the 
 reason why dual stack failed so miserably as a transition
 mechanism was precisely because of the fact that it adds
 significant complexity and operational overhead, compared to single
 stack.
 
 Except that with SIIT, you're still dealing with two stacks, just 
 moving the place where you deal with them around a bit. Further, 
 you're adding the complication of NAT into your world (SIIT is a
 form of NAT whether you care to admit that to yourself or not).

The difference is that only a small number of people will need to deal
with the two stacks, in a small number of places. The way I envision
it, the networking staff would ideally operate SIIT as a logical function
on the data centre's access routers, or in their backbone's
core/border routers.

A typical data centre operator/content provider has a vastly larger
number of servers, applications, systems administrators, and software
developers, than they have routers and network administrators. By making
IPv4 end-user connectivity a service provided by the network, you make
the amount of dual stack-related complexity a fraction of what it would
be if you had to run dual stack on every server and in every application.

I have no problem admitting that SIIT is a form of NAT. It is. The «T»
in both cases stands for «Translation», after all.

 - IPv4 address conservation. If you're running out of IPv4 
 addresses, you cannot use dual stack, as dual stack does nothing
 to reduce your dependency on IPv4 compared to single stack IPv4.
 With dual stack, you'll be using (at least) one IPv4 address per
 server, plus a bit of overhead due to the server LAN prefixes
 needing to be rounded up to the nearest power of two (or higher if
 you want to accommodate for future growth), plus overhead due to
 the network infrastructure. With SIIT, on the other hand, you'll be
 using a single IPv4 address per publicly available service - one
 /32 out of a pool, with nothing going to waste due to aggregation,
 network infrastructure, and so on.
 
 Since you end up dealing with NAT anyway, why not just use NAT for 
 IPv4 conservation. It's what most engineers are already used to 
 dealing with and you don't lose anything between it and SIIT. 
 Further, for SIIT to work, you don't really conserve any IPv4 
 addresses, since address conservation requires state.

Nope! The «S» in SIIT stands for «Stateless». That is the beauty of it.

NAT44, on the other hand, is stateful, a very undesirable trait.
Suddenly, things like flows per second and flow initiation rate are
relevant for the overall performance of the architecture. It requires
flows to pass bidirectionally across a single instance - the servers'
default route must point to the NAT44, and a failure will cause the
disruption of all existing flows. It is probably possible to find ways
to avoid some or all of these problems, but it comes at the expense of
added complexity.

SIIT, on the other hand, is stateless, so you can use anycasting with
normal routing protocols or load balancing using ECMP. A failure is
handled just like any IP re-routing event. You don't need the server's default
route to point to the SIIT box, it is just a regular IPv6 route
(typically a /96). You don't even have to run it in your own network.
Assuming we have IPv6 connectivity between us, I could sell you SIIT
service over the Internet or via a direct peering. (I'll be happy to
give you a demo just for fun, give me an IPv6 address and I'll map up a
public IPv4 front-end address for you in our SIIT deployment.)

Finally, by putting your money into NAT44 for IPv4 conservation, you
have accomplished exactly *nothing* when it comes to IPv6 deployment.
You'll still have to go down the dual stack route, with the added
complexity that will cause. With SIIT, you can kill both birds with one
stone.
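The contrast Tore draws can be sketched in a few lines (a toy illustration, not a real translator; all names and addresses are mine): NAPT44 must keep per-flow state keyed on the five-tuple, while a stateless SIIT mapping is a pure function of static configuration, so any anycast instance gives the same answer.

```python
# Stateful NAPT44 (toy): a flow table keyed on the 5-tuple.
# If this instance dies, the table -- and every active flow -- dies with it.
napt_flows: dict = {}

def napt_lookup(five_tuple, public_ip="198.51.100.1"):
    if five_tuple not in napt_flows:                 # first packet of a flow
        napt_flows[five_tuple] = (public_ip, 40000 + len(napt_flows))
    return napt_flows[five_tuple]

# Stateless SIIT (toy): a static 1:1 config table, identical on every box.
siit_map = {"203.0.113.80": "2001:db8::80"}  # one public /32 per service

def siit_lookup(dst_v4):
    return siit_map[dst_v4]  # no per-flow state; safe to anycast/ECMP

flow = ("198.18.0.1", 12345, "203.0.113.80", 80, "tcp")
napt_lookup(flow)            # state created as a side effect of traffic
print(len(napt_flows))       # 1 -- the NAPT box now carries flow state
```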

 - Promotes first-class native IPv6 deployment. Not that dual stack 
 isn't native IPv6 too, but I do have the impression that often, 
 IPv6 in a dual stacked environment is a second class citizen. IPv6 
 might be only partially deployed, not monitored as well as IPv4,
 or that there are architectural dependencies on IPv4 in the 
 application stack, so that you cannot just shut off IPv4 and
 expect it to continue to work fine on IPv6 only. With SIIT, you get
 only a single, first-class, 

Re: IPv6 Netowrk Device Numbering BP

2012-11-04 Thread Owen DeLong

On Nov 4, 2012, at 5:15 AM, Tore Anderson tore.ander...@redpill-linpro.com 
wrote:

 * Owen DeLong
 
 On Nov 4, 2012, at 1:55 AM, Tore Anderson 
 tore.ander...@redpill-linpro.com wrote:
 
 * Owen DeLong
 
 What do you get from SIIT that you don't get from dual stack in a
 datacenter?
 
 In no particular order:
 
 - Single stack is much simpler than dual stack. A single stack to 
 configure, a single ACL to write, a single service address to 
 monitor, staff needs to know only a single protocol, development 
 staff needs only develop and do QA for a single protocol, it's a 
 single topology to document, a single IGP to run and monitor, a 
 single protocol to debug and troubleshoot, one less attack vector 
 for the bad guys, and so on. I have a strong feeling that the 
 reason why dual stack failed so miserably as a transition
 mechanism was precisely because of the fact that it adds
 significant complexity and operational overhead, compared to single
 stack.
 
 Except that with SIIT, you're still dealing with two stacks, just 
 moving the place where you deal with them around a bit. Further, 
 you're adding the complication of NAT into your world (SIIT is a
 form of NAT whether you care to admit that to yourself or not).
 
 The difference is that only a small number of people will need to deal
 with the two stacks, in a small number of places. The way I envision
 it, the networking staff would ideally operate SIIT as a logical function
 on the data centre's access routers, or in their backbone's
 core/border routers.
 

I suppose if you're not moving significant traffic, that might work.

In the data centers I deal with, that's a really expensive approach
because it would tie up a lot more router CPU resources that really
shouldn't be wasted on things end-hosts can do for themselves.

By having the end-host just do dual-stack, life gets a lot easier
if you're moving significant traffic. If you only have a few megabits
or even a couple of gigabits, sure. I haven't worked with anything
that small in a long time.

 A typical data centre operator/content provider has a vastly larger
 number of servers, applications, systems administrators, and software
 developers, than they have routers and network administrators. By making
 IPv4 end-user connectivity a service provided by the network, you make
 the amount of dual stack-related complexity a fraction of what it would
 be if you had to run dual stack on every server and in every application.

In a world where you have lots of network/system administrators that fully
understand IPv6 and have limited IPv4 knowledge, sure. In the real world,
where the situation is reversed, you just confuse everyone and make the
complexity of troubleshooting a lot of things that much harder because it
is far more likely to require interaction across teams to get things fixed.

 I have no problem admitting that SIIT is a form of NAT. It is. The «T»
 in both cases stands for «Translation», after all.
 

Yep.

 - IPv4 address conservation. If you're running out of IPv4 
 addresses, you cannot use dual stack, as dual stack does nothing
 to reduce your dependency on IPv4 compared to single stack IPv4.
 With dual stack, you'll be using (at least) one IPv4 address per
 server, plus a bit of overhead due to the server LAN prefixes
 needing to be rounded up to the nearest power of two (or higher if
 you want to accommodate for future growth), plus overhead due to
 the network infrastructure. With SIIT, on the other hand, you'll be
 using a single IPv4 address per publicly available service - one
 /32 out of a pool, with nothing going to waste due to aggregation,
 network infrastructure, and so on.
 
 Since you end up dealing with NAT anyway, why not just use NAT for 
 IPv4 conservation. It's what most engineers are already used to 
 dealing with and you don't lose anything between it and SIIT. 
 Further, for SIIT to work, you don't really conserve any IPv4 
 addresses, since address conservation requires state.
 
 Nope! The «S» in SIIT stands for «Stateless». That is the beauty of it.
 

Right… As soon as you make it stateless, you lose the ability to
overload the addresses unless you're using a static mapping of ports,
in which case, you've traded dynamic state tables for static tables
that, while stateless, are a greater level of complexity and create
even more limitations.

 NAT44, on the other hand, is stateful, a very undesirable trait.
 Suddenly, things like flows per second and flow initiation rate are
 relevant for the overall performance of the architecture. It requires
 flows to pass bidirectionally across a single instance - the servers'
 default route must point to the NAT44, and a failure will cause the
 disruption of all existing flows. It is probably possible to find ways
 to avoid some or all of these problems, but it comes at the expense of
 added complexity.
 
 SIIT, on the other hand, is stateless, so you can use anycasting with
 normal routing protocols or load 

Re: IPv6 Netowrk Device Numbering BP

2012-11-04 Thread Tore Anderson
* Owen DeLong

 The difference is that only a small number of people will need to 
 deal with the two stacks, in a small number of places. The way I 
 envision it, the networking staff would ideally operate SIIT as a 
 logical function on the data centre's access routers, or in 
 their backbone's core/border routers.
 
 I suppose if you're not moving significant traffic, that might work.
 
 In the data centers I deal with, that's a really expensive approach 
 because it would tie up a lot more router CPU resources that really 
 shouldn't be wasted on things end-hosts can do for themselves.
 
 By having the end-host just do dual-stack, life gets a lot easier if 
 you're moving significant traffic. If you only have a few megabits
 or even a couple of gigabits, sure. I haven't worked with anything
 that small in a long time.

For a production deployment, we would obviously not do this in our
routers' CPUs, for the same reasons that we wouldn't run regular IP
forwarding in their CPUs.

If a data centre access router gets a mix of dual-stacked input traffic
coming in from the Internet, that same amount of traffic has to go out
in the rear towards the data centre. Whether it comes out as the
same dual-stacked mix that came in, or as IPv6 only, does not change
the total amount of bandwidth the router has to pass. So the amount of
bandwidth is irrelevant, really.

I would agree with you if this were a question of doing SIIT in software
instead of in hardware like regular IP forwarding. But that is no different
from a problem we had a while back - routers that did IPv4 in hardware and
IPv6 in software. Under such conditions, you just don't deploy.

 A typical data centre operator/content provider has a vastly
 larger number of servers, applications, systems administrators, and
 software developers, than they have routers and network
 administrators. By making IPv4 end-user connectivity a service
 provided by the network, you make the amount of dual stack-related
 complexity a fraction of what it would be if you had to run dual
 stack on every server and in every application.
 
 In a world where you have lots of network/system administrators that
 fully understand IPv6 and have limited IPv4 knowledge, sure. In the
 real world, where the situation is reversed, you just confuse
 everyone and make the complexity of troubleshooting a lot of things
 that much harder because it is far more likely to require interaction
 across teams to get things fixed.

With dual stack, they would need to fully understand *both* IPv6 and
IPv4. This sounds to me more like an argument for staying IPv4 only.

And even if they do know both protocols perfectly, they still have to
operate them both, monitor them both, document them both, and so on.
That is a non-negligible operational overhead, in my experience.

 Right… As soon as you make it stateless, you lose the ability to 
 overload the addresses unless you're using a static mapping of
 ports, in which case, you've traded dynamic state tables for static
 tables that, while stateless, are a greater level of complexity and
 create even more limitations.

I would claim that stateful NAPT44 mapping, which requires a router to
dynamically maintain a table of all concurrent flows using a five-tuple
identifier based on both L3 and L4 headers, is a vastly more complex
thing than a statically configured mapping table with two L3
addresses for each entry.

 Without state, how are you overloading the IPv4 addresses?

We're not.

 If I don't have a 1:1 mapping between public IPv4 addresses and IPv6 
 addresses at the SIIT box, what you have described doesn't seem 
 feasible to me.

SIIT is 1:1.

 If I have a 1:1 mapping, then, I don't have any address conservation 
 because the SIIT box has an IPv4 address for every IPv6 host that 
 speaks IPv4.

The SIIT box has indeed an IPv4 address for every IPv6 host that speaks
IPv4, true. The conservation comes from elsewhere, from the fact that a
content provider will often have a large number of servers, but a much
smaller number of publicly available IPv4 service addresses.

Let's say you have a small customer with a web server and a database
server. Using dual stack, you'll need to assign that customer at least a
/29. One address for each server, three for network/broadcast/default
router, and three more that go away just because those five get
rounded up to the nearest power of two.

With SIIT, that customer would instead get an IPv6 /64 on his LAN, and
no IPv4 at all. Instead, a single IPv4 address gets mapped to the
IPv6 address of the customer's web server. You've now just saved 7 out
of 8 addresses, which you may use for 7 other customers like this one.
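The arithmetic above checks out with the standard library (the example prefix is mine):

```python
import ipaddress

lan = ipaddress.IPv4Network("192.0.2.0/29")  # example /29 for the customer
servers = 2                                   # web + database server
overhead = 3                                  # network, broadcast, default router
rounding_waste = lan.num_addresses - servers - overhead
print(lan.num_addresses, rounding_waste)      # 8 total, 3 lost to rounding
# Dual stack consumes all 8 public addresses; SIIT consumes exactly 1,
# saving 7 of the 8.
```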

That's just a small example. For most of my customers, the ratio of
assigned IPv4 addresses to publicly available services is *at least* one
order of magnitude. So I have huge potential for savings here.

 Finally, by putting your money into NAT44 for IPv4 conservation,
 you have accomplished 

Re: mail-abuse.org down?

2012-11-04 Thread Tom Paseka
from the website:

This website has been moved to http://ers.trendmicro.com.
Please update your bookmarks with this URL.

On Sun, Nov 4, 2012 at 2:47 AM, Suresh Ramasubramanian
ops.li...@gmail.com wrote:

 MAPS was taken over by Trend Micro years back; maybe they just retired the
 old domain?



Re: IPv6 Netowrk Device Numbering BP

2012-11-04 Thread Jimmy Hess
On 11/1/12, Karl Auer ka...@biplane.com.au wrote:
 I espouse four principles (there are others, but these are the biggies):

Sounds like what is suggested are anti-practices, rather than
affirmative practices.
I would suggest slightly differently.

  Complexity results in failure modes that are difficult to predict, so
   - Keep addressing design as simple as possible, with as few
interesting things or distinctions as possible
 (such as multiple different prefix lengths for different nets,
different autoconfig methods, different host IDs for default gateways,
unique numbering schemes for different network or host types, etc).

 Without omitting requirements, or overall opportunities for efficient,
reliable network operations.

   - Keep addressing complexity in addressing.
  E.g. Addressing may be simpler with a flat network, but don't use
that as an excuse to relocate 2x the cost of addressing complexity into
switching infrastructure/routing design, with scalability limits that
will foreseeably be reached.

 Don't implement carrier-grade NAT just because it ensures the user's
default gateway is always 192.168.1.1.

 Ensure that the simplicity and benefits of the whole are maximized, not
those of the individual design elements.

 - don't overload address bits with non-addressing information

You suggest building networks with address bits that contain only
addressing information.
It sounds like an IPv4 principle whose days are done: that
addressing bits are precious, so don't waste a single bit encoding
extra information.

If encoding additional information can provide something worthwhile,
then it should be considered. I'm not sure what exactly would be
worthwhile to encode, but something such as a POP#, serving router,
label/tag ID, security level, type of network, or hash of a circuit
ID might be a potential contender for some network operators
to encode in portions of the network ID for some networks.

Specifically, P-t-P networks, to which a /48 end site numbered
differently may be routed.
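A hypothetical sketch of the kind of encoding described above (the field layout is entirely invented for illustration): pack a POP number and a network-type tag into the 16 subnet-ID bits of a /48 site.

```python
import ipaddress

SITE = ipaddress.IPv6Network("2001:db8::/48")  # documentation prefix

def subnet_for(pop: int, net_type: int) -> ipaddress.IPv6Network:
    """Build a /64 whose subnet ID is 8 bits of POP# plus 8 bits of type."""
    assert 0 <= pop < 256 and 0 <= net_type < 256
    subnet_id = (pop << 8) | net_type
    return ipaddress.IPv6Network(
        (int(SITE.network_address) | (subnet_id << 64), 64)
    )

print(subnet_for(pop=3, net_type=1))  # 2001:db8:0:301::/64
```

Whether the operational gain is worth tying address bits to out-of-band meaning is exactly the trade-off debated in this thread.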

 - keep the network as flat as reasonably possible

You are suggesting the avoidance of multiple networks, preferring instead
large single IP subnets for large areas, whenever possible? IPv6 has not
replaced ethernet.

Bottlenecks such as unknown unicast flooding, broadcast domain chatter,
and scalability limits still exist on IPv6oE.

Ample subnet IDs are available. With IPv6, there are more reasons than ever
to avoid flat networks, even in cases where a flat network might be an option.

My suggestion would be:

-  Avoid flat networks; implement segmentation. Make new subnets
whenever possible, provided the reliability, security, and serviceability
gains exceed the basic configuration and continuous management costs.

   Make flat networks when the benefits are limited, or segmented
networks are not possible, or too expensive due to poorly designed
hardware/software (e.g. software requiring a flat network between
devices that should be segmented).


 - avoid tying topology to geography

It sounds like IPv4 thinking again --- avoid creating additional
topology for geographic
locations, in order to conserve address space.

 - avoid exceptions

Consistency is something good designs should strive for,
when it can be achieved without major risks, costs, or
technical sacrifices exceeding the value of that consistency.


 I too would be really interested in whatever wisdom others have
 developed, even if (especially if!) it doesn't agree with mine.

 Regards, K.
--
-JH



Cisco devices mass config

2012-11-04 Thread sharon saadon
Hi,
A small open source app that I wrote to configure a mass of devices
(tested with Cisco devices)
http://sharontools.com/Products/MassConfig.php

Regards,
Sharon


Re: IPv6 Netowrk Device Numbering BP

2012-11-04 Thread Karl Auer
On Sun, 2012-11-04 at 13:26 -0600, Jimmy Hess wrote:
 On 11/1/12, Karl Auer ka...@biplane.com.au wrote:
  I espouse four principles (there are others, but these are the biggies):
 
 Sounds like what is suggested are anti-practices, rather than
 affirmative practices. I would suggest slightly differently.

I agree that positive is best, but my rules can be expressed in a few
phrases. There is value in being concise.

 You suggest building networks with address bits that contain only
 addressing information.
 It sounds like an IPv4 principle whose days are done;   that,
 addressing bits are precious, so don't waste a single bit  encoding
 extra information.

Address bits are not precious; subnetting bits are. We have moved, with
IPv6, from conserving address space to conserving subnet space. Your
address space in a leaf subnet numbers in the billions of billions; your
subnetting space in a typical /48 site numbers in the tens of thousands.
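Karl's numbers, made explicit:

```python
# A /48 site carved into /64 leaf subnets has 16 bits of subnet ID,
# while each leaf subnet has 64 bits of host address space.
subnets_per_site = 2 ** (64 - 48)   # 65536 -- "tens of thousands"
hosts_per_subnet = 2 ** 64          # ~1.8e19 -- "billions of billions"
print(subnets_per_site)             # 65536
```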

By the way, there are two very different kinds of subnet - the leaf
subnet that actually contains hosts, and the structural subnet that
divides your network up. I'm talking about structural subnetting.

 If encoding additional information can provide something worthwhile,
 then it should be considered.

Of course. But *as a rule* it should not be done. Only you can weigh the
benefits against the downsides. Remember I said these were rules for
students; I'm not going to tell a competent professional what to do. I
personally would however strenuously avoid overloading address bits
merely for the sake of human convenience or readability.

Systems with no necessary technical links inevitably diverge; if you
have encoded one in the other, the latter will eventually start lying to
you. You will have to work to keep the two things in sync, but if they
diverge enough that may not be possible, and you end up with one system
containing the irrelevant, misleading remnants of another.

  - keep the network as flat as reasonably possible
 
 You are suggesting the avoidance of multiple networks, preferring instead
 large single IP subnets for large areas, whenever possible?

No, that's not what I'm suggesting. See above for the distinction I make
between leaf and structural subnets. I am suggesting keeping the network
structure as flat as possible.

 -  Avoid flat networks; implement segmentation.

Yes at the edge, no in the network structure.

  - avoid tying topology to geography
 
 It sounds like IPv4 thinking again --- avoid creating additional
 topology for geographic locations, in order to conserve address space.

Conserving address space is NOT one of my goals, though conserving
subnet space is. Without being foolishly parsimonious, though. As you
say, there is ample subnet space, but it's not nearly as ample as
address space.

I use geography in the broadest sense of the physical world. The
reason for avoiding tying your network topology to geography is that
networks move and flex all the time. So does the physical world, in a
way - companies buy extra buildings, occupy more floors, open new state
branches, move to new buildings with different floor plans. New
administrative divisions arise, old ones disappear, divisions merge.
Building your topology on these shifting sands means that every time
they change, either your address schema moves further away from reality,
or you spend time adjusting it to the new reality. If you chose poorly
at the start, it may not be possible to adjust it easily. Again, I can't
second-guess what physical constructs you will want or need to mirror in
your network topology, but I can say with confidence that you should
avoid doing so unless absolutely technically necessary.

These rules are not really IPv6-specific. It's just that in the IPv4
world, we basically can't follow them, because the technical
requirements of the cramped address space drive us towards particular,
unavoidable solutions. With IPv6, the rules gain new currency because we
*can* mostly follow them.

Regards, K.

PS: I've put a lot of this, word for word, in my blog at
www.biplane.com.au/blog
-- 
~~~
Karl Auer (ka...@biplane.com.au)
http://www.biplane.com.au/kauer
http://www.biplane.com.au/blog

GPG fingerprint: AE1D 4868 6420 AD9A A698 5251 1699 7B78 4EEE 6017
Old fingerprint: DA41 51B1 1481 16E1 F7E2 B2E9 3007 14ED 5736 F687




RE: Cisco devices mass config

2012-11-04 Thread Network IPdog
Hi Sharon!

Program looks great on the webpage, but... you cannot download it or
look at the PDF of the manual.

Not Found

The requested URL /Products/Download\Mass config v1.8.rar was not found on
this server.
Apache/2.2.9 (Debian) PHP/5.4.0-3 mod_python/3.3.1 Python/2.5.2
mod_perl/2.0.4 Perl/v5.10.0 Server at sharontools.com Port 80


Not Found

The requested URL /Products/Download\MassConfig v1.8 - User manual.pdf was
not found on this server.
Apache/2.2.9 (Debian) PHP/5.4.0-3 mod_python/3.3.1 Python/2.5.2
mod_perl/2.0.4 Perl/v5.10.0 Server at sharontools.com Port 80


Ephesians 4:32

Cheers!!!

A password is like a... toothbrush  ;^) 
Choose a good one, change it regularly and don't share it.



-Original Message-
From: sharon saadon [mailto:sharon...@gmail.com] 
Sent: Sunday, November 04, 2012 12:19 PM
To: nanog@nanog.org
Subject: Cisco devices mass config

Hi,
A small open-source app that I wrote to configure a mass of devices
(tested with Cisco devices): http://sharontools.com/Products/MassConfig.php

Regards,
Sharon




RE: Cisco devices mass config

2012-11-04 Thread Andrew Jones
Downloads work fine for me; it looks like something is changing forward
slashes to backslashes in the download URL for you. Try these links:

http://sharontools.com/Products/Download/Mass%20config%20v1.8.rar
http://sharontools.com/Products/Download/MassConfig%20v1.8%20-%20User%20manual.pdf
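In case it helps, building the links programmatically sidesteps the
slash-mangling entirely. A small sketch (the helper name is mine; the
base path is taken from the links above):

```python
from urllib.parse import quote

BASE = "http://sharontools.com/Products/Download"  # base path from the thread

def download_url(filename: str) -> str:
    # quote() percent-encodes the spaces; "/" is in its safe set by
    # default, so the path separator is never turned into a backslash.
    return f"{BASE}/{quote(filename)}"

print(download_url("Mass config v1.8.rar"))
# http://sharontools.com/Products/Download/Mass%20config%20v1.8.rar
```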

-Jonesy

On 05.11.2012 12:47, Network IPdog wrote:





Sandy seen costing telco, cable hundreds of millions of dollars

2012-11-04 Thread Roy
http://www.reuters.com/article/2012/11/01/storm-sandy-telecoms-idUSL1E8M1L9Z20121101 





Re: Cisco devices mass config

2012-11-04 Thread sharon saadon
Hi,
I updated the app file name; now it's also possible to download the app. :)

Sharon

On Sun, Nov 4, 2012 at 10:18 PM, sharon saadon sharon...@gmail.com wrote:
