Re: Another Big day for IPv6 - 10% native penetration
On 02/01/16 15:35, Tomas Podermanski wrote:
> Hi, according to Google's statistics (https://www.google.com/intl/en/ipv6/statistics.html), on 31st December 2015 IPv6 penetration reached 10% for the very first time. Just a little reminder: on 20th Nov 2012 the number was 1%. In December we also celebrated the 20th anniversary of IPv6 standardization - RFC 1883. I'm wondering when we'll reach another significant milestone - 50% :-)
> Tomas

Given the recent doubling growth, and assuming this trend follows a logistic function, then, rounding the numbers a bit for neatness, I get:

Jan 2016: 10%
Jan 2017: 20%
Jan 2018: 33%
Jan 2019: 50%
Jan 2020: 67%
Jan 2021: 80%
Jan 2022: 90%

with IPv4 traffic then halving year by year from then on, and IPv4 switch-off (i.e. traffic < 1%) around 2027.

Neil
Re: Another Big day for IPv6 - 10% native penetration
On 04/01/16 16:09, Ca By wrote:
> On Mon, Jan 4, 2016 at 3:26 AM, Neil Harris <n...@tonal.clara.co.uk> wrote:
>> On 02/01/16 15:35, Tomas Podermanski wrote:
>>> Hi, according to Google's statistics (https://www.google.com/intl/en/ipv6/statistics.html), on 31st December 2015 IPv6 penetration reached 10% for the very first time. Just a little reminder: on 20th Nov 2012 the number was 1%. In December we also celebrated the 20th anniversary of IPv6 standardization - RFC 1883. I'm wondering when we'll reach another significant milestone - 50% :-)
>>
>> Given the recent doubling growth, and assuming this trend follows a logistic function, then, rounding the numbers a bit for neatness, I get: Jan 2016: 10%, Jan 2017: 20%, Jan 2018: 33%, Jan 2019: 50%, Jan 2020: 67%, Jan 2021: 80%, Jan 2022: 90%, with IPv4 traffic then halving year by year from then on, and IPv4 switch-off (i.e. traffic < 1%) around 2027.
>>
>> Neil
>
> Just a reminder: that 10% is a global number. The number in the USA is 25% today in general, and 37% for mobile devices.
>
> Furthermore, forecasting is a dark art that frequently simply extends the past onto the future. It does not account for purposeful engineering design like the "World IPv6 Launch" or iOS updates. For example, once Apple cleanses the App Store of IPv4-only apps in 2016, as they have committed to, and pushes one of their ubiquitous iOS updates, you may see substantial jumps overnight in IPv6 eyeballs, possibly moving that 37% number to over 50% in a few short weeks. This will make it squarely clear that IPv4 is a minority legacy protocol for all of mobile, and thus for the immediate future of the Internet.
>
> CB

Absolutely. So these figures should be regarded as conservative. The logistic growth model is just the default model choice for predicting new-things-replacing-old transitions.
Any number of things could make the transition go faster: particularly, as you say, pushes by major platform vendors like Apple, and the move to mobile-first in the expansion of the Internet in the developing world. Companies like search engine providers and streaming video providers could also exert pressure to speed up the IPv6 transition, if they wished. Also, passing psychological thresholds like 50% or 90% -- or even just fashion, in the sense of decision makers wanting to be associated with success and the future rather than the rapidly contracting legacy of the past -- might act to accelerate the eventual collapse of IPv4 traffic volumes.

I can only imagine the scale of the schadenfreude IPv6 proponents will be able to feel once they're able to start talking about IPv4 as a legacy protocol.

Neil
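For what it's worth, the projection in the thread can be reproduced with a few lines of Python. The only inputs are the 10% figure for Jan 2016 and the assumption that the odds of a connection being IPv6 roughly double each year; this is a sketch of the model, not a forecast:

```python
import math

def ipv6_share(year, midpoint=2019.0, k=math.log(81) / 6):
    """Logistic adoption curve matching the rounded figures above:
    10% in Jan 2016 and 90% in Jan 2022 imply the odds of a
    connection being IPv6 multiply by ~2.08 each year."""
    return 1.0 / (1.0 + math.exp(-k * (year - midpoint)))

for year in range(2016, 2023):
    print(year, f"{ipv6_share(year):.0%}")
```

The curve hits 10%, 50% and 90% at 2016, 2019 and 2022 exactly; the in-between years land within a point of the rounded figures quoted.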
Re: iOS 7 update traffic
On 23/09/13 10:32, John Smith wrote:
> Picked this off www.jaluri.com (network and Cisco blog aggregator): http://routingfreak.wordpress.com/2013/09/23/ios7s-impact-on-networks-worldwide/
> The consensus seems to be for providers to install CDN servers if they aren't able to cope with occasional OS-update traffic. http://news.idg.no/cw/art.cfm?id=391B4B64-F693-41B7-6BBAC6D7017C3B8A
> John

Perhaps Apple, Microsoft etc. should consider using BitTorrent as a way of distributing their updates? If ISPs were to run their own BitTorrent servers (with appropriate restrictions, see below), this would then create an instant CDN, with no need to define any other protocols or pay any third parties. The hard bit would be to create a way for Apple etc. to be able to authoritatively say "we are the content owners, and are happy for you to replicate this locally": but perhaps this could be as simple as serving the initial seed from an HTTPS server with a valid certificate? It would then be trivial to create a whitelist of the domains of the top 10 or so distributors of patches, and everything would work automatically from then on.

-- N.
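A minimal sketch of the whitelist check proposed above, in Python. The domain list is purely illustrative; in practice an ISP would curate it out of band:

```python
from urllib.parse import urlsplit

# Hypothetical whitelist of large patch distributors an ISP might trust.
TRUSTED_SEED_DOMAINS = {"apple.com", "microsoft.com", "adobe.com"}

def seed_is_trusted(seed_url: str) -> bool:
    """A torrent seed qualifies for local replication only if it is
    served over HTTPS (so the certificate vouches for the origin)
    from one of the whitelisted distributor domains."""
    parts = urlsplit(seed_url)
    if parts.scheme != "https":
        return False
    host = parts.hostname or ""
    return any(host == d or host.endswith("." + d)
               for d in TRUSTED_SEED_DOMAINS)
```

Note the suffix check requires a dot boundary, so lookalike hosts such as `evilapple.com` don't slip through.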
Re: net neutrality and peering wars continue
On 22/06/13 13:08, Matthew Petach wrote:
> On Thu, Jun 20, 2013 at 2:29 PM, valdis.kletni...@vt.edu wrote:
>> On Thu, 20 Jun 2013 22:39:56 +0200, Niels Bakker said:
>>> You're mistaken if you think that CDNs have an equal number of packets going in and out.
>> And even if the number of packets match, there's the whole 1500 bytes of data, 64 bytes of ACK thing to factor in...
> That's easily solved by padding the ACK to 1500 bytes as well.
> Matt

Or indeed by the media player sending large amounts of traffic back to the CDN via auxiliary HTTP POST requests?

Neil
Re: net neutrality and peering wars continue
On 22/06/13 16:34, Owen DeLong wrote:
>>> That's easily solved by padding the ACK to 1500 bytes as well.
>>> Matt
>> Or indeed by the media player sending large amounts of traffic back to the CDN via auxiliary HTTP POST requests?
>> Neil
> That would assume that the client has symmetrical upstream bandwidth over which to send such datagrams. At least in the US, that is the exception, not the rule.
> Owen

Hi Owen,

You only need to match the video stream bandwidth, not the full download speed of the link. Given that current multicore CPUs are now fast enough to decode HEVC in software, and with HEVC being roughly twice as efficient as H.264, you should be able to do quite decent full-HDTV-quality video at an average bandwidth of about 5 Mbps, given sufficient buffering to smooth out the traffic. Less, if you're willing to compromise on picture quality a bit and go for, say, 720p. So, given an HEVC-capable decoder, this strategy should work for any connection with an upstream speed of better than about 4 to 5 Mbps, which is becoming more and more common on cable Internet service as DOCSIS 3.0 is rolled out and faster links become more common.

Neil
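As a quick sanity check on those numbers (the 10 Mbit/s figure for decent 1080p H.264 is an assumption on my part; HEVC halving it follows the rough 2x efficiency claim above):

```python
# Back-of-the-envelope check of the symmetric-stream idea.
h264_1080p_mbps = 10.0                  # assumed H.264 rate for full HD
hevc_1080p_mbps = h264_1080p_mbps / 2   # HEVC ~2x as efficient

# Upstream volume needed to echo a 2-hour movie back to the CDN:
upload_gb = hevc_1080p_mbps / 8 * 2 * 3600 / 1000
print(f"{hevc_1080p_mbps:.0f} Mbit/s stream, "
      f"{upload_gb:.1f} GB re-uploaded per movie")
```

So matching a full-HD HEVC stream upstream means about 4.5 GB of extra upload per two-hour film, which fits within a 4-5 Mbit/s upstream link.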
Re: 10 Mbit/s problem in your network
On 26/02/13 17:19, Warren Bailey wrote:
> Perhaps I don't understand... Generally in wireless we look at two things: bits to hertz, and noise components. If the noise is LESS and the carrier is the same power spectral density, you will have a greater C/N. I've always wondered why wifi didn't implement an array of modcods which can be used with a given system. That way, when you attenuate, you have lower-efficiency modulation and coding which will allow you to deal with fades better. Maybe they do use it and I'm just not hip to 802.11?

They do it, all right, and much, much more, including MIMO -- 802.11 has evolved into something only marginally less complex than the mobile phone wireless stack in the process.

-- N.
Re: The 100 Gbit/s problem in your network
On 12/02/13 14:14, fredrik danerklint wrote:
> Just to clarify, Patrick is right here.
>
> Assumptions: all the movies are 120 minutes long, and each movie has an average bitrate of 50 Mbit/s (50 Mbit/s / 8 (bits) * 7 200 (2 hours) / 1000 = 45 GB).
>
> That means that the storage capacity for the movies is going to be: 10 000 000 * 45 (GB) / 1000 (TB) / 1000 (PB) = 450 PB of storage.
>
> Some of you might want to raise your hand to say that this movie quality is too good. OK, so we make it 10 times smaller, to 5 Mbit/s on average: 450 PB / 10 = 45 PB, or 45 000 TB. If we are using 800 GB SSD drives: 45 000 TB / 0.8 TB = 56 250 SSD drives! (And we don't have any kind of backup of the content here. That needs more SSD drives as well. And don't forget the power consumption.)
>
> So, over to the streaming part. 10 000 000 customers watching, each with a bandwidth of 5 Mbit/s = 50 000 000 Mbit/s / 1000 = 50 000 Gbit/s. We only need 500 * 100 Gbit/s connections to handle this kind of demand. For each ISP around the world with 10 000 000 customers.
>
> Will TLMC be able to solve the 100k users watching 10 different movies? Yes. Will TLMC be able to solve the other 10 million watching 10 million movies? No, since your network cannot handle this kind of load in the first place.

Fortunately, we have some fascinating recent research on exactly this: http://www.land.ufrj.br/~classes/coppe-redes-2012/trabalho/youtube_imc07.pdf

-- N.
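The arithmetic above checks out; here it is as a short Python script, for anyone who wants to vary the assumptions:

```python
movies = 10_000_000           # catalogue size
minutes = 120                 # assumed running time
mbps = 50.0                   # per-movie average bitrate

# Per-movie size: Mbit/s -> MB over two hours -> GB
gb_per_movie = mbps / 8 * minutes * 60 / 1000
total_pb = movies * gb_per_movie / 1_000_000
print(f"{gb_per_movie:.0f} GB per movie, {total_pb:.0f} PB total")

# At 5 Mbit/s instead, storage drops tenfold; on 800 GB SSDs:
ssds = total_pb / 10 * 1_000_000 / 800
print(f"{ssds:,.0f} SSD drives")

# Streaming side: 10M concurrent viewers at 5 Mbit/s
gbps = movies * 5 / 1000
print(f"{gbps:,.0f} Gbit/s, i.e. {gbps / 100:,.0f} x 100 Gbit/s links")
```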
Re: NTP Issues Today
On 21/11/12 12:34, Ryan Malayter wrote:
> On Nov 19, 2012, at 6:12 PM, Scott Weeks sur...@mauigateway.com wrote:
>> Lesson learned: Use more than one NTP source.
> The lesson is: use MORE THAN TWO diverse NTP sources. A man with two watches has no idea what time it actually is.

Per David Mills, from the discussion linked upthread, this should be FOUR OR MORE: "Every critical server should have at least four sources, no two from the same organization and, as much as possible, reachable only via diverse, nonintersecting paths." Four, so that the remaining three can reach consensus even if one fails.

-- Neil
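As a toy illustration of why four is the magic number -- with four sources, one falseticker can be outvoted by the remaining three. This is a crude stand-in for NTP's actual selection and clustering algorithms, not a description of them:

```python
from statistics import median

def consensus_time(offsets, tolerance=0.128):
    """Discard any reading that disagrees with the median by more
    than `tolerance` seconds, then average the survivors; fail if
    no majority of the sources agree with each other."""
    med = median(offsets)
    truechimers = [o for o in offsets if abs(o - med) <= tolerance]
    if len(truechimers) < len(offsets) // 2 + 1:
        raise RuntimeError("no majority agreement")
    return sum(truechimers) / len(truechimers)

# Three good servers and one falseticker 30 seconds off:
print(consensus_time([0.012, -0.004, 0.009, 30.0]))
```

With only two or three sources, a single bad clock can leave you without a clear majority; with four, the wild reading above is simply voted out.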
Re: Laptop with reverse VGA
On 21/02/12 14:48, Jay Ashworth wrote:
> ----- Original Message -----
>> From: Jake Khuon kh...@neebu.net
>> I think the form-factor is already there. I have a Motorola Atrix smartphone. It's available with a laptop-dock unit. This is essentially a USB hub and display. The display is connected by outputting from the phone's HDMI port. The rest of the input/output devices (keyboard and trackpad) are seen as USB-connected devices and interfaced via the phone's USB port (the Atrix supports USB host mode). Essentially, this laptop dock is what people are talking about, except for a generic host instead of a phone. We would want to expose the HDMI input generically, and probably add a VGA input. Of course there are also VGA-HDMI converters. Anyone wanna ring up Motorola to see if they're interested in adapting the Atrix laptop-dock technology?
> As someone who's done video for 20 years, I can tell you, Jake: it ain't that easy. The interface on the Atrix is purpose-built, and it's almost certainly just a DVI/HDMI digital interface to a panel that expects that. What's necessary for a standalone KVM of the sort we're talking about is what the video people call a genlock circuit -- most machines that need this at all have analog VGA out, and you have to have a chip that can lock up to it and extract the video from that analog signal cleanly. This is, to quote the Jargon File, decidedly non-trivial to do well. That's the reason why a single-port unit, not on sale, is generally around $400. If it was DVI/HDMI *only*, it could be substantially cheaper, but I've never seen one that was.
> Cheers, - jra

High prices are more likely to do with the small market for such devices than with the cost of the underlying technology. It isn't so much genlock as accurate pixel clock recovery that's the hard thing. It is indeed hard to do well, but fortunately the chipmakers have done all that for you.

It's a common enough need (think flat panel monitors) that there are inexpensive single-chip solutions for it that not only do the A/D conversion, but handle the pixel clock recovery for you as well: see, for example, the Analog Devices AD9884A or ADV7441A. Data sheets at http://www.analog.com/en/audiovideo-products/analoghdmidvi-interfaces/ad9884a/products/product.html and http://www.analog.com/en/analog-to-digital-converters/video-decoders/adv7441a/products/product.html respectively.

-- Neil
Re: Dear RIPE: Please don't encourage phishing
On 11/02/12 01:16, Masataka Ohta wrote:
> Randy Bush wrote:
>>> My $0.02 on this issue is: if the message is rich text, I hover over the link and see where it actually sends me.
>> idn has made this unsafe
> I pointed this out at IETF Munich in 1997, with an example of: MICROSOFT.COM, where the 'C' of MICROSOFT is actually a Cyrillic character. But people insisted on working on useless IDN.
> Masataka Ohta

Techniques to deal with this sort of spoofing already exist: see http://www.mozilla.org/projects/security/tld-idn-policy-list.html for one quite effective approach.

-- Neil
Re: Dear RIPE: Please don't encourage phishing
On 12/02/12 00:09, Masataka Ohta wrote:
> Neil Harris wrote:
>> Techniques to deal with this sort of spoofing already exist: see http://www.mozilla.org/projects/security/tld-idn-policy-list.html
> It does not make sense that .COM allows Cyrillic characters: http://www.iana.org/domains/idn-tables/tables/com_cyrl_1.0.html if the script of a domain name is Cyrillic. Domain names do not have such a property as script. Is the following domain name: CCC.COM Latin or Cyrillic?
>> for one quite effective approach.
> The only reasonable thing to do is to disable so-called IDN.
> Masataka Ohta
> PS: Isn't it obvious from the page you referred to that IDN is not internationalization, but an uncoordinated collection of poor localizations?

I'm not a flag-waver for IDN, so much as a proponent of ways to make IDN safer, given that it already exists. Lots of people have thought about this quite carefully. See RFC 4290 for a technical discussion of the thinking behind this policy, and RFC 5992 for a policy mechanism designed to resolve the problem you raised in your example above. You will notice that the .com domain does not appear on the Mozilla IDN whitelist.

-- N.
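For illustration, here's a rough sketch of the kind of mixed-script check such policies imply. Classifying script by Unicode character name is a shortcut on my part; a real implementation would use the Unicode Script property and the confusables machinery:

```python
import unicodedata

def scripts(label):
    """Rough per-character script tags, taken from the first word
    of each character's Unicode name (e.g. LATIN, CYRILLIC)."""
    tags = set()
    for ch in label:
        name = unicodedata.name(ch, "")
        tags.add(name.split()[0] if name else "UNKNOWN")
    return tags

def looks_spoofy(label):
    # Flag labels mixing Latin with Cyrillic or Greek lookalikes,
    # in the spirit of Mozilla's per-TLD IDN policy checks.
    s = scripts(label)
    return "LATIN" in s and bool({"CYRILLIC", "GREEK"} & s)

# "MICROSOFT" with a Cyrillic ES in place of the Latin C:
print(looks_spoofy("MI\u0421ROSOFT"))
```

This catches exactly the MICROSOFT.COM example from the earlier message, while leaving pure single-script labels alone.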
Re: Why no IPv6-only day (Was: Protocol-41 is not the only tunneling protocol)
On 07/06/11 15:28, Mark Andrews wrote:
> In message 8a6a00c3-bd6d-4fb4-ae82-73816dfd9...@delong.com, Owen DeLong writes:
>>> Things like happy-eyeballs diminish it even with perfect IPv6 connectivity. 100ms rtt doesn't cover the world, and to make multi-homed servers (includes dual stack) work well, clients will make additional connections.
>> Is happy eyeballs actually running code ANYWHERE?
>> Owen
> Chrome does something close, using 300ms. There is code out there that does it, and there really should be lots more of it, as it mitigates lots of problems.

There's also a bug currently open for the equivalent functionality in Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=621558

-- Neil
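The algorithm itself is simple enough to sketch with simulated connection attempts. The 300ms stagger below matches the Chrome figure mentioned above, and the connect() coroutine is just a stand-in for a real TCP connect:

```python
import asyncio

async def connect(family, delay):
    # Stand-in for a TCP connect; `delay` models the path RTT,
    # or a broken path that takes far too long to complete.
    await asyncio.sleep(delay)
    return family

async def happy_eyeballs(v6_delay, v4_delay, stagger=0.3):
    """Race IPv6 against IPv4, giving IPv6 a head start of
    `stagger` seconds before falling back to trying both."""
    v6 = asyncio.ensure_future(connect("IPv6", v6_delay))
    done, _ = await asyncio.wait({v6}, timeout=stagger)
    if done:
        return v6.result()
    v4 = asyncio.ensure_future(connect("IPv4", v4_delay))
    done, pending = await asyncio.wait(
        {v6, v4}, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    return done.pop().result()

# Broken IPv6 path (5s) loses to working IPv4 (20ms):
print(asyncio.run(happy_eyeballs(5.0, 0.02)))
# Healthy IPv6 (50ms) wins within the grace period:
print(asyncio.run(happy_eyeballs(0.05, 0.02)))
```

The point of the stagger is that dual-stack users with a broken IPv6 path only pay a ~300ms penalty, instead of the multi-second connect timeouts that made early IPv6 deployment painful.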
Re: BGP (in)security makes the AP wire
On 18/02/11 12:26, Eugen Leitl wrote:
> On Sun, May 09, 2010 at 09:38:18AM -0700, Joel Jaeggli wrote:
>> geographic location doesn't map to topology
> In LEO satellite constellations and mesh wireless it typically does. When bootstrapping a global mesh, one could use VPN tunnels over the Internet to emulate long-distance links initially. Eben Moglen recently proposed a FreedomBox initiative, using ARM wall warts to build an open source cloud with an anonymizing layer. Many of these come with 802.11x radio built in. If this project ever happens, it could become a basis for end-user-owned infrastructure. Long-range WiFi can compete with LR fiber in principle, though at a tiny fraction of the throughput.

"Tiny fraction" is putting it mildly. I once considered starting up a low-infrastructure wireless ISP using mesh radio based on wifi radio technology adapted to work in licensed bands. If you work out the numbers, the bandwidth you get in any substantial deployment is pitiful compared to technologies like DSL and cable modems, let alone fiber. New technologies such as distributed space-time multipath coding on the wireless side, and multipath network coding on the bitstream side, look like the way forward on this, but these are brand new and still the subject of research -- you certainly can't just hot-wire them onto wifi hardware.

> Presumably, one could prototype something simple and cheap at the L2 level with WGS 84-MAC (about ~m^2 resolution), custom switch firmware and GBICs for longish (1-70 km) distances, but without a mesh it won't work. The local 64-bit part of IPv6 has enough space for global ~2 m resolution, including altitude (24, 24, 16 bit). With DAD and fuzzing of the least significant bits, address collisions could be prevented reliably. Central authority and decentralism can co-exist.

Indeed.
The fact that the usable bandwidth resulting from ad-hoc mesh wifi would be tiny compared to broadband connections doesn't mean this sort of thing isn't worth trying: a few tens of kilobits a second is plenty for speech, and even a few hundred bits per second is useful for basic text messaging. Given that the cost of doing this is almost zero, since only software is required to implement it on any modern wifi/GPS-equipped mobile hardware, this seems like a great thing to have in the general portfolio of networking technologies: having something like this available could be invaluable in disaster/crisis situations.

-- Neil
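For concreteness, here's a sketch of the 24/24/16-bit packing of a WGS 84 position into the 64-bit IPv6 interface identifier, as suggested above. The exact field layout and quantisation are my own guesses at the scheme, not a specification:

```python
def geo_interface_id(lat, lon, alt_m):
    """Pack a WGS 84 position into a 64-bit interface identifier:
    24 bits of latitude, 24 bits of longitude, 16 bits of altitude
    in metres. 24 bits over 360 degrees of longitude gives roughly
    2.4 m of resolution at the equator, matching the ~2 m claim."""
    lat_q = int((lat + 90) / 180 * (2**24 - 1))
    lon_q = int((lon + 180) / 360 * (2**24 - 1))
    alt_q = max(0, min(2**16 - 1, int(alt_m)))
    return (lat_q << 40) | (lon_q << 16) | alt_q

iid = geo_interface_id(51.5074, -0.1278, 35)   # central London
print(f"{iid:016x}")
```

In a real deployment the low bits would then be fuzzed, with duplicate address detection catching the (now rare) collisions, as the post suggests.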
Re: What's really needed is a routing slot market
On 07/02/11 14:25, Jamie Bowden wrote:
> It would help if we weren't shipping the routing equivalent of the pre-DNS /etc/hosts all over the network (it's automated, but it's still the equivalent). There has to be a better way to handle routing information than what's currently being done. The old voice telephony guys built a system that built SVCs on the fly from any phone in the world to any other phone in the world; it (normally) took less than a second to do it between any pair of phones under the NANPA, and only slightly longer for international calls outside the US and Canada. There have to be things to be learned from there.
> Jamie

They did indeed, but they did it by centrally precomputing and then downloading centrally-built routing tables to each exchange, with added statically-configured routing between telco provider domains, and then doing step-by-step call setup, with added load balancing and crankback on the most-favoured links in the static routing table at each stage. All this works fine in a fairly static environment where there are only a few, well-known, and fairly trustworthy officially-endorsed entities involved within each country, and topology changes can be centrally planned.

BGP is a hack, but it's a hack that works. I'm not sure how PSTN-style routing could have coped with the explosive growth of the Internet, with its very large number of routing participants, no central planning or central authority to establish trust, and an endlessly-churning routing topology. Still, every good old idea is eventually reinvented, so it may have its time again one day.

-- Neil
Re: The scale of streaming video on the Internet.
On 02/12/10 20:21, Leo Bicknell wrote:
> Comcast has around ~15 million high-speed Internet subscribers (based on year-old data; I'm sure it is higher), which means at peak usage around 0.3% of all Comcast high-speed users would be watching. That's an interesting number, but let's run back the other way. Consider what happens if folks cut the cord and watch Internet-only TV. I went and found some TV ratings: http://tvbythenumbers.zap2it.com/2010/11/30/tv-ratings-broadcast-top-25-sunday-night-football-dancing-with-the-stars-finale-two-and-a-half-men-ncis-top-week-10-viewing/73784
> Sunday Night Football was at the top last week, with 7.1% of US homes watching. That's over 23 times as many folks watching as the 0.3% in our previous math! OK, 23 times 150 Gbps: 3.45 Tb/s. Yowzer. That's a lot of data. 345 10GE ports for a SINGLE TV show. But that's 7.1% of homes, so scale up to 100% of homes and you get 48 Tb/s; that's right, 4830 simultaneous 10GEs if all of Comcast's existing high-speed subs dropped cable and watched the same shows over the Internet.
> I think we all know that streaming video is large. Putting real numbers to it shows the real engineering challenges on both sides, generating and sinking the content, and why companies are fighting so much over it.

You might be interested in the EU-funded P2P-NEXT research initiative, which is creating a P2P system capable of handling P2P broadcasting at massive scale: http://www.p2p-next.org/

-- Neil (full disclosure: I'm associated with one of the participants in the project)
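Leo's arithmetic, redone in Python from the 150 Gb/s at 0.3% baseline in his post. (He rounded the viewing ratio to 23x, which is why his 3.45 Tb/s is slightly below the exact figure.)

```python
base_share = 0.003      # 0.3% of subs streaming -> 150 Gb/s (from the post)
base_gbps = 150.0

snf_share = 0.071       # Sunday Night Football: 7.1% of US homes
snf_gbps = base_gbps * snf_share / base_share
print(f"SNF: {snf_gbps / 1000:.2f} Tb/s, ~{snf_gbps / 10:.0f} x 10GE ports")

all_gbps = base_gbps / base_share   # every home watching at once
print(f"100% of homes: {all_gbps / 1000:.0f} Tb/s, ~{all_gbps / 10:.0f} x 10GE")
```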
Re: Internationalized domain names in the root
On 06/05/10 21:27, Zaid Ali wrote:
> I agree the Safari experience looks much nicer, and yes, there's a whole host of potential for malice to arise. Firefox shows punycode: http://xn--4gbrim.xnrmckbbajlc6dj7bxne2c.xn--wgbh1c/ar/default.aspx Now, if I understood Arabic only, and was travelling or happened to use Firefox, which showed punycode, how would I trust it? If it was directly translated to Latin characters I could trust it with verification from someone I know who understands English. I would not trust punycode, because an end user does not know what it means. I think there is potential for a lot of issues here.
> Zaid

This is indeed a security issue, and the behaviour in Firefox is currently that way by design. To fix it, the .eg / .xn--4gbrim TLD registrar needs to contact the Mozilla Foundation in order to inform the Foundation of their official IDN name allocation policy, so that native-script URL display can then be switched on for their domain. See https://bugzilla.mozilla.org/show_bug.cgi?id=564213 and http://www.mozilla.org/projects/security/tld-idn-policy-list.html

-- Neil
Re: APNIC Allocated 14/8, 223/8 today
On 14/04/10 15:54, Dave Hart wrote:
> On Wed, Apr 14, 2010 at 14:35 UTC, Vincent Hoffman wrote:
>> PING 014.0.0.1 (12.0.0.1): 56 data bytes
>> C:\Documents and Settings\Administrator> ping 014.0.0.01
>> Pinging 12.0.0.1 with 32 bytes of data:
>> Connecting to 014.0.0.1|12.0.0.1|:80...
>> Connecting to 014.0.0.1 (014.0.0.1)|14.0.0.1|:80...
>> When it comes to IP addresses, it's not history, it's important :)
> Good point. In most of these classic utility contexts, octal is generally accepted. 32-bit unsigned decimal representation has provided obfuscation for fun and profit in HTTP URIs. I'm sure you can find some software that still accepts it, and some that doesn't. For me, with no proxy, Chrome and IE both accept a non-dotted numeric IPv4 URI, but rewrite it in the address bar to the familiar dotted-quad format. FireFox shows an error page that appears equivalent to: <h1>Bad Request (Invalid Hostname)</h1> FireFox is probably violating some spec. Thankfully.
> Cheers, Dave Hart

This is a historical issue with inet_aton(). See http://tools.ietf.org/html/draft-main-ipaddr-text-rep-00 for more details on the history behind this. Firefox bug 554596 addresses this problem.

-- N.
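For anyone curious, the classic inet_aton() behaviour that the draft describes is easy to sketch in Python. The C library is the authority here; this is just an illustration of the parsing rules:

```python
def classic_inet_aton(s):
    """Parse IPv4 text the way classic BSD inet_aton() does: each
    dotted part may be decimal, octal (leading 0) or hex (0x), and
    with fewer than four parts the final number fills all the
    remaining bytes. Returns the address as a 32-bit integer."""
    def part(p):
        if p[:2].lower() == "0x":
            return int(p, 16)
        if len(p) > 1 and p[0] == "0":
            return int(p, 8)
        return int(p, 10)

    nums = [part(p) for p in s.split(".")]
    if not 1 <= len(nums) <= 4:
        raise ValueError(s)
    *leading, last = nums
    if any(not 0 <= n <= 0xFF for n in leading):
        raise ValueError(s)
    width = 4 - len(leading)          # bytes covered by the last number
    if not 0 <= last < 256 ** width:
        raise ValueError(s)
    addr = last
    for shift, n in enumerate(reversed(leading)):
        addr |= n << (8 * (width + shift))
    return addr

# "014" is octal, so 014.0.0.1 is really 12.0.0.1 -- hence the ping output:
print(".".join(str(b) for b in classic_inet_aton("014.0.0.1").to_bytes(4, "big")))
```

The same function also explains the non-dotted obfuscation trick: `classic_inet_aton("2130706433")` yields 127.0.0.1.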
Re: 1/8 and 27/8 allocated to APNIC
On 22/01/10 01:22, Jon Lewis wrote:
> On Thu, 21 Jan 2010, George Bonser wrote:
>> Some of that water is dirtier than the rest. I wouldn't want to be the person who gets 1.2.3.0/24
> The whole /8 should be fun. http://en.wikipedia.org/wiki/AnoNet "To avoid addressing conflict with the internet itself, the range 1.0.0.0/8 is used. This is to avoid conflicting with internal networks such as 10/8, 172.16/12 and 192.168/16, as well as assigned Internet ranges. In the event that 1.0.0.0/8 is assigned by IANA, anoNet could move to the next unassigned /8, though such an event is unlikely, as 1.0.0.0/8 has been reserved since September 1981."
> I thought there was some other group that had been squatting in 1/8, something about radio and peer to peer... but not AnoNet (at least, that name was totally unfamiliar)... but this was all I could find with a quick google.

This? http://lists.arin.net/pipermail/arin-ppml/2003-May/001628.html

-- Neil
Re: Wireless bridge
Peter Boone wrote:
> From: Michael Dillon [mailto:wavetos...@googlemail.com]
>>> (for example, after a good thunderstorm, the wireless link will be down for at least 12 hours, but will fix itself eventually.)
>> Sounds like there are trees in the line of sight, and maybe they are getting leafier over the years. The only solution to that is to change the path, if it is possible.
> The line of sight is all clear, no trees. Only one building along the way has a rooftop of similar height, but the antennas are extended far above the roofline. We have used a rifle scope to confirm line of sight is all clear at all angles.

Given that you have optical line of sight, and that your path length is only 800m, have you considered line-of-sight optical links for this application?

-- Neil
Re: Fiber cut in SF area
Ong Beng Hui wrote:
> The problem of getting LoS is a big problem in metro areas, as far as I know. You can't just put up a pair of FSO gear without going to the building owner to talk about rights and cost. Not forgetting lightning protection and other stuff.

Murphy, Brian S CTR USAF ACC 83 NOS/Det 4 wrote:
> I haven't seen any mention of the possible use of FSO (Free Space Optics) by the provider to restore some reasonable amount of connectivity during an outage due to a fiber cut. I would expect that having 2 or 3 pairs of FSO boxes to provide a reduced failover capacity in metro areas would be a reasonable measure to ensure service for extended physical (fiber break, cut, backhoe) outages - although not necessarily for power. Yes, it would take some time to roll them out and set them up, but less time than the crew working the splices, and the folks handling the FSO boxes should be different from the fiber splice truck roll crew. Note that a power outage would not allow microwave to be an effective remediation method either. Plus, FSO's use of lasers (vice microwaves) means no issues with spectrum (AFAIK). Granted, they have limited distance and require LoS, but using two or more pairs can probably handle the 80% situation in the metro (unless there is data to indicate otherwise).
> murph

Based on my experience with operating FSOs as infrastructure some years ago, the major limiting factor for FSOs is weather. In good weather, they should work just fine even at quite long ranges, provided that there is no obstruction or source of heat shimmer in the path, and you have carefully aimed your link to avoid sun outages. Bad weather (rain, snow, sandstorms, fog) causes very high levels of attenuation, with particularly bad weather reducing effective range to a few hundred metres at most. When this happens, the effect is area-wide, with a typical rain cell being a few km in size, so adding extra FSO links for redundancy is useless.
If you've got a local airport nearby, you should be able to get good historical data for the frequency and duration of such weather conditions from METAR visibility data. For long-term standby installations, you've also got to watch out for building work and cranes, which can pop up unexpectedly. However, if the link is being used solely as a protection path for rare failures in otherwise reliable fiber, and the alternative is either no protection path or a prohibitively expensive one, this may be perfectly acceptable: quite long ranges can be achieved with around 95-99% availability in typical European climates.

You should expect installing and aiming a couple of FSO links at one another to take about a day in practice, unless you have a crack team of mobile laser ninjas trained and in readiness at all times (although the USAF may have greater access to ninjas than the rest of us). There is still the matter of getting permission for physical access, safety approval, and access to power and network connectivity at the vantage points you will need to install the FSOs on, which can take much longer unless you already have it pre-planned.

For truly rapid temporary links, I've seen one major UK operator actually just manually grout fiber in place along a kerbside to cover a few hundred metres of (presumably) temporary fiber run. This is probably faster to install than FSOs, even if the lifespan of such a link might be measured in days before someone crunches the fiber.

-- Neil
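To put those availability figures in concrete terms (remembering that, unlike random component failures, weather downtime arrives in correlated multi-hour blocks, so the annual total is not evenly spread):

```python
# Annual downtime implied by the 95-99% availability range quoted above.
HOURS_PER_YEAR = 365.25 * 24

for availability in (0.95, 0.99):
    downtime_h = (1 - availability) * HOURS_PER_YEAR
    print(f"{availability:.0%} available: ~{downtime_h:.0f} h/year of outage")
```

So even the optimistic end of the range implies several days of outage a year, which is fine for a protection path but not for a primary link.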
Re: Diversity - was: Fiber cut in SF area
Rod Beck wrote:
> That service is probably very expensive. There is no known way to provide cheap 10 gig wave protection. Not carrier grade. Protected 10 GigE service (LAN PHY 10 GigE) will tolerate a very high BER before switching, and the cost of switching STM64 is very high as well. The bottom line is that it will cost more than two diversely routed 10 gig waves. There is no real market for protected 10 gig waves. Occasionally a bank will request the service, but they back off as soon as they see the price tag.
>
>> Hopefully none of these customers had service and protect ckts that went down... I would be pissed as a CEO if that happened to my company. Hopefully Level3's new service offering is 1...@percent redundant as stated. The new service offerings include:
>> - Protected Wavelengths: Level 3 now provides automatic protection switching to a dedicated diversely routed wavelength in the event of a network failure. The protection switch, fully automated and managed by Level 3, happens at switching speeds approaching SONET restoration times. The single interface to the customer requires no additional capital cost for customer optical ports, and the diverse restoration path is fixed and fully known to the customer. These features allow customers to achieve fast restoration with predictable performance in their network without adding significant cost and routing complexity.

Surely a simple wideband optomechanical switch, actuated by detected signal degradation on a pilot wavelength or wavelengths, would do the job with high reliability and relatively low cost, without any extra need for switching the STM64 signal at the bitstream level?

-- Neil
Re: Diversity - was: Fiber cut in SF area
Rod Beck wrote:
> And if the 10 gig wave is from 1 Wilshire to 60 Hudson, with hundreds of regen huts and 30 POPs in between? How does that affect the capex cost?

Sure, the capex cost of offering full diversity is substantial; my point was just that the cost of switching STM64 signals at the endpoints need not be a significant issue, since you only have to switch the optical path, which is cheap to do and highly reliable, and the kit to do that will only make up a tiny fraction of the rest of the capital and operations cost.

-- Neil