What I want to see is reasonably priced 40G single mode transceivers.
I have no idea why 40G, and now 100G, weren't rolled out with single mode as the
preference. The argument that there's a large multimode install base doesn't
hold water.
For one thing, you're using enormous amounts of MM fiber
I've been working with 40 gig for a few years. When I first ordered a
switch, one of the first publicly available with full 40 gig, I was
appalled that I was going to have to use 4 pairs of multimode fiber for each
of my connections. I had planned on using single mode because I can do that
with 1
I also wonder about re-inventing the wheel. The router part is easy; you could
even do that with a Windows box (that's a joke).
Obviously capital cost is part of it, but the man hours involved in doing what
you're talking about, especially since you are talking about a telco
whatever you
I've seen this sort of thing popping up before.
Don't quite understand how it's going to work. Leasing I understand, so long
as you are willing to suffer the revocation of the IP space should the
company that was actually ISSUED the space lose it for whatever
reason...
Buying I really don't
Raritan has a good line with the usual features; we use a lot of 2U, 208V/30A
units with 20x C13, which is a good config these days.
Their central management software, while not perfect, is excellent for PDU
control
On Jun 23, 2013, at 8:37 AM, shawn wilson ag4ve...@gmail.com wrote:
We currently use
Greetings
I may be needing 10 gig from the West Coast to the East Coast some time in
the next year. I've got my ideas on what that would cost, but I don't have
anything that big.
This could be a leased line, part of a cloud with Verizon, NTT, Sprint, or
whoever as the provider, etc. I'm just
Fair enough
Seattle to Boston is the general route, real close.
On Monday, June 17, 2013, wrote:
On Mon, 17 Jun 2013 12:51:28 -0700, eric clark said:
I may be needing 10 gig from the West Coast to the East Coast
Might want to be more specific. Catalina Island, CA to Buxton, NC
(home
-Original Message-
From: eric clark cabe...@gmail.com
Date: Monday, June 17, 2013 3:22 PM
To: valdis.kletni...@vt.edu valdis.kletni...@vt.edu
Cc: nanog@nanog.org nanog@nanog.org
Subject: Re: 10gig coast to coast
Fair
I'm looking for options.
With dark fiber, obviously, I have the ultimate in options.
However, it's also the ultimate in cost, as you say.
The requirement we have is 10gig of actual throughput. Precisely what mechanism
is used to transport it isn't all that important, though I'm certain that there
I'm turning up a facility
With APC gear now. I'll let you know.
On Tuesday, May 21, 2013, Morgan Miskell wrote:
I realize this topic is semi off point so feel free to reply to the list
or to me personally. I am wondering if anyone has any experience using
the APC In-row cooling units in
You didn't include RJ11 in your question; it goes back further.
One reason is that as we push the limits of cable from CAT3 (10 meg) to CAT5
(100 meg) to 5e (gig) to 6 (not sure what that was for) to 7 (10 gig), the
cable doesn't get any smaller. We're dealing with higher and higher
frequencies
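That progression can be tabulated; a minimal sketch using the commonly cited maximum speeds per category (the CAT6 10-gig figure is distance-limited, which may be the "not sure what that was for" part):

```python
# Commonly cited maximum Ethernet speeds per twisted-pair cable category.
# Values in Mb/s; CAT6 carries 10G only over short runs (~55 m).
CABLE_SPEEDS_MBPS = {
    "CAT3": 10,        # 10BASE-T
    "CAT5": 100,       # 100BASE-TX
    "CAT5e": 1_000,    # 1000BASE-T
    "CAT6": 10_000,    # 10GBASE-T, distance-limited
    "CAT7": 10_000,    # 10GBASE-T over the full 100 m
}

def min_category(required_mbps):
    """Return the lowest-rated category that meets the required speed."""
    for cat, speed in CABLE_SPEEDS_MBPS.items():  # dicts keep insertion order
        if speed >= required_mbps:
            return cat
    raise ValueError("no listed category supports that speed")

print(min_category(1_000))  # CAT5e
```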
I was working with a vendor down there and couldn't get files in or out to
save our lives. Additionally, he was having trouble locally.
I didn't see anything on the pulse site.
did you start your browser before looking at your connection list?
However, you're on a Windows box, so it wouldn't surprise me if they helpfully
started IE for you.
If you didn't start the browser you use to go to Facebook (and it's not IE),
it's fairly interesting.
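A quick way to separate "the far end is down" from "my path is broken" in cases like this is a raw TCP connect test, independent of any browser. A minimal sketch (the host and port in the comment are placeholders):

```python
import socket

def tcp_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port completes within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, DNS failure, etc.
        return False

# Example (hypothetical target): tcp_reachable("example.com", 443)
```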
On Sep 29, 2011, at
Thanks for all the replies everyone.
Some good options, though I'm surprised by how few of the options I'm finding
have a good centralized management system. I have to deploy monitoring to a
bunch of sites spread around the world, so centralized management is key.
Thanks for all the suggestions.
As far as best practices, I'm not sure.
I've generally built an out-of-band network for the express purpose of saving
my behind in the event of an unanticipated traffic problem on the primary
network. Secondarily, it allows secured access to equipment, and you can monitor
(which is often not
Wondering what people are using to provide security from their wireless
environments to their corporate networks? Two or more factors seem to be the
accepted standard, and yet we're being told that Microsoft's equipment can't
do it. Our system being a Microsoft domain, it seemed logical, but they can
with LDAP that way; have to check that.
On Thu, Jun 9, 2011 at 3:08 PM, John Adams j...@retina.net wrote:
On Thu, Jun 9, 2011 at 3:02 PM, eric clark cabe...@gmail.com wrote:
Wondering what people are using to provide security from their Wireless
environments to their corporate networks? 2
Don't remember about the v4 part, but 3 years ago they issued me a /48,
specifically for my first site and indicated that a block was reserved for
additional sites. I can probably dig that up.
Sent from my iPad
On Feb 10, 2011, at 12:18 PM, Jason Iannone jason.iann...@gmail.com wrote:
It
Figure I'll throw my 2 cents into this.
The way I read the RFCs, IPv6 is not IP space. Its network space. Unless I
missed it last time I read through them, the RFCs do not REQUIRE
hardware/software manufacturers to support VLSM beyond /64. Autoconfigure
the is the name of the game for the IPv6
I've been troubleshooting an issue all day. Traffic leaving our site, on
Verizon public transport, destined for the Spokane area is routing to Qwest
and hitting 400ms rapidly. The offending router seems to be a Verizon router
(number 6 here).
On top of that, we're seeing this via Level3 coming in
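Spotting where a path "hits 400ms rapidly" can be automated from traceroute output; a minimal sketch over hypothetical per-hop RTTs (the values below are made up to resemble the path described, with the jump at hop 6):

```python
def first_latency_jump(hop_rtts_ms, jump_ms=100.0):
    """Return the 1-based index of the first hop whose RTT exceeds the
    previous hop's by more than jump_ms, or None if the path looks sane."""
    for i in range(1, len(hop_rtts_ms)):
        if hop_rtts_ms[i] - hop_rtts_ms[i - 1] > jump_ms:
            return i + 1
    return None

# Hypothetical per-hop RTTs; the offending router shows up as hop 6.
rtts = [1.2, 2.5, 8.0, 11.0, 14.0, 412.0, 415.0]
print(first_latency_jump(rtts))  # 6
```

Note the usual caveat: a single hop answering ICMP slowly isn't proof of a problem unless the latency persists to every hop after it, as it does here.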
and they forgot to mention it. I'll stick to /64, though it does seem a
horrible waste of space.
Someone else might have read the RFC differently though.
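The /48-into-/64 arithmetic behind that "horrible waste" is easy to check with the standard library; a minimal sketch (2001:db8::/48 is the documentation prefix, standing in for an issued /48):

```python
import ipaddress

site = ipaddress.ip_network("2001:db8::/48")  # placeholder for an issued /48

# A /48 subdivides into 2**(64-48) = 65536 possible /64 subnets...
num_64s = 2 ** (64 - site.prefixlen)
print(num_64s)  # 65536

# ...and every one of those /64s still holds 2**64 addresses.
first_64 = next(site.subnets(new_prefix=64))
print(first_64)                            # 2001:db8::/64
print(first_64.num_addresses == 2 ** 64)   # True
```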
Eric Clark
A friend of mine has services through Yieldbook (in New York) that
he accesses from Santa Barbara. He noticed he couldn't get to them
around 2pm via his Cox cable inet link, dying after
gar9.n54ny.ip.ATT.net (12.122.131.245), but from his Verizon link, he
had no issues. The problem persists
Most provider-type datacenters I've worked with get a lot of flak from
customers when they announce they're doing network failover testing, because
there's always going to be a certain amount of chance (at least) of
disruption. It's the exception to find a provider that does it, I think (or
maybe