Re: Backup DC power standardization with Photovoltaic battery systems?
On Fri, Apr 14, 2023 at 07:17:23PM -0400, Sean Donelan wrote:
> All these darn wall warts are almost, but slightly different (5v, 12v,
> 24v). No -48v CPE?

Ubiquiti EdgeRouter PoE 5 can use 48VDC.

...

JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"The strain of anti-intellectualism has been a constant thread winding its
way through our political and cultural life, nurtured by the false notion
that democracy means that 'my ignorance is just as good as your
knowledge.'" -Asimov
Court orders for blocking of streaming services
Greetings -

Recently, a court issued a troubling set of rulings in a default decision against "Israel.TV" and some other sites.

https://storage.courtlistener.com/recap/gov.uscourts.nysd.572373/gov.uscourts.nysd.572373.49.0.pdf
https://storage.courtlistener.com/recap/gov.uscourts.nysd.572374/gov.uscourts.nysd.572374.49.0.pdf
https://storage.courtlistener.com/recap/gov.uscourts.nysd.572375/gov.uscourts.nysd.572375.53.0.pdf

While it is a bit concerning that a United States court would confiscate an international domain with no obvious nexus to the United States and hand it over to a prevailing plaintiff, via orders to companies that happen to be in the United States, that's not operationally relevant. What's more concerning is that the ruling includes an expansive clause B, "Against Internet Service Providers (ISPs):"

    IT IS FURTHER ORDERED that all ISPs (including without limitation
    those set forth in Exhibit B hereto) and any other ISPs providing
    services in the United States shall block access to the Website at
    any domain address known today (including but not limited to those
    set forth in Exhibit A hereto) or to be used in the future by the
    Defendants ("Newly-Detected Websites") by any technological means
    available on the ISPs' systems. The domain addresses and any
    Newly-Detected Websites shall be channeled in such a way that users
    will be unable to connect and/or use the Website, and will be
    diverted by the ISPs' DNS servers to a landing page operated and
    controlled by Plaintiffs (the "Landing Page") which can be reached
    as follows:

This expansive clause basically demands that ISPs implement a DNS override in their recursers, which may be dubiously effective given complications such as DNSSEC and DNS-over-HTTPS. This is not an insignificant amount of work to implement, and since they have not limited the list to big players, that means us small guys would need to do this too.
Perhaps more worrying is the clause "by any technological means available," which seems like it could be opening the door to mandatory DPI filtering of port 53 traffic, an expensive and dicey proposition, or filtering at the CPE for those who run dnsmasq on busybox-based CPE, etc., etc. This seems to be transferring the expense of compliance to third parties who had nothing to do with the pirate sites.

Complying with random court orders where there isn't even a formal notice that there's been a court order is problematic. I would guess that the 96 ISPs listed in the order are going to receive a formal notice, but by what mechanism does the court think that a small service provider would even be aware of such an order? What happens with respect to the "Newly-Detected Websites"? What mechanism exists here? Who is going to pay for the costs? And how is this practical when this scales to hundreds or thousands of such rulings?

It seems to me like the court overstepped here and issued a ruling that contained a lot of wishful thinking that doesn't reflect the ability of miscreants on the Internet to just rapidly register a new domain name with a new fake credit card. Certainly it is trivial to host the actual websites well out of legal reach of US courts, and with domain registrars without US presence.

This leaves those of us in the network operations community in the position of shouldering costs to comply with a court order, but without a clear mechanism to continue to be in compliance. This could become a full-time job, if the defendants want to play the game right. "israel.tv"? "1srael.tv" (with a "1" or "L" for the first letter, etc.)?

Is anybody here considering recovering compliance costs from the plaintiffs?

...
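For concreteness, the kind of DNS override being demanded is what BIND calls a response-policy zone. A minimal sketch of what a small recurser operator would be signing up to maintain (the zone name, file name, and the 192.0.2.1 landing-page address are illustrative placeholders, not taken from the order):

```
; blocklist.rpz.db -- response-policy zone data, loaded on the recurser
; via:  response-policy { zone "rpz.local"; };  in named.conf options{}
; Zone name, file name, and landing-page address are placeholders.
$TTL 300
@               IN SOA  localhost. hostmaster.localhost. (
                        1 3600 900 604800 300 )
                IN NS   localhost.
; Owner names are relative to the RPZ apex, so "israel.tv" here means
; "rewrite answers for israel.tv and divert them to the landing page".
israel.tv       IN A    192.0.2.1
*.israel.tv     IN A    192.0.2.1
```

Note that a validating stub or a DNS-over-HTTPS client bypasses or rejects exactly this kind of rewrite, which is the dubious-effectiveness point above, and every "Newly-Detected Website" means another manual edit to a file like this.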
Re: V6 still not supported
On Mon, Apr 04, 2022 at 04:24:49PM +0200, JORDI PALET MARTINEZ via NANOG wrote:
> Related to the LEA agencies and CGN:
>
> https://www.europol.europa.eu/media-press/newsroom/news/are-you-sharing-same-ip-address-criminal-law-enforcement-call-for-end-of-carrier-grade-nat-cgn-to-increase-accountability-online

And how is this really horribly different from all the Napster crap where the "owner" of an ISP account got blamed for the activities of a family member or guest? Maybe the LEA agencies need some better clue. I'm fine with them advocating for IPv6, but I have a suspicion that IPv6 is just another can of worms, because when you have "an IPv4 internet's worth of internets" (64 bits) available as the host portion of an IPv6 address, and stuff like RFC 4941, they're going to continue to mistarget the account owner even in the absence of CG-NAT.

Finding a law-enforcement-compatible method of attributing who generated traffic currently ends up being an exercise in keeping detailed logs. Which could be done with CG-NAT. Which makes the referenced article an example of a failure to understand the true (and horrifying) nature of the problem of traffic attribution. Doesn't even begin to touch on pwnage issues.

...
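To put numbers on "an IPv4 internet's worth of internets": the interface-identifier half of a standard /64 is the square of the entire IPv4 space, and an RFC 4941 host rotates through it at will, so a logged /128 attributes to "some machine behind this prefix" at best. A quick back-of-the-envelope sketch (standalone illustration, addresses are documentation-range examples):

```python
import ipaddress

# Host portion of a standard /64: 64 bits of interface identifier.
hosts_per_64 = 2 ** 64
ipv4_space = 2 ** 32

# 2^64 == (2^32)^2: one IPv4-internet's worth of IPv4 internets per subnet.
print(hosts_per_64 == ipv4_space ** 2)   # True

# An RFC 4941 host picks effectively random interface identifiers and
# rotates them over time, so any single address seen in a log only maps
# back to the /64 -- attribution stops at the prefix, not the machine.
addr = ipaddress.ip_address("2001:db8::a1b2:c3d4:e5f6:1234")
prefix = ipaddress.ip_network("2001:db8::/64")
print(addr in prefix)                    # True
```

Which is why per-connection logging, not the presence or absence of CG-NAT, ends up being the actual attribution mechanism.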
Re: Cogent ...
On Thu, Mar 31, 2022 at 03:38:15PM +, Laura Smith via NANOG wrote:
> However, perhaps someone would care to elaborate (either on or off-
> list) what the deal is with the requirement to sign NDAs with Cogent
> before they'll discuss things like why they still charge for BGP, or
> indeed any other technical or pricing matters. Seems weird ?!?

Because they know that the sillier bits will be poked fun at on NANOG if they allow them to be disclosed? Because if you can't talk about your pricing, then they aren't as likely to be facing customers who know how cheap it was sold to some other party?

...
Re: PoE, Comcast Modems, and Service Outages
On Wed, Mar 30, 2022 at 05:52:06PM +, Livingood, Jason wrote:
> > Their crappy equipment needing rebooting every few weeks, not ridiculous.
> > Their purchasing gear from incompetent vendors who cannot be standards
> > compliant for PoE PD negotiation, tragically plausible.
>
> Many customers buy their own cable modem. You can lease an Xfinity
> device as well and those function pretty nicely these days but YMMV.
> But typically a device reboot is a way to quickly solve a few
> different kinds of problems, which is why techs will often recommend
> it as an initial step (you can generally assume that there's data
> behind what occurs when any one of tens of thousands of support reps
> suggesting something to a customer - support at scale is data-driven).

No one's doubting all of that -- support is best when data-driven, at scale or otherwise. But that's actually the issue here. There's no data that I know of to suggest widespread PoE ghost current buildups, and, given the audience here, no one else has popped up with a clear "me too". PoE is typically negotiated by modern switches, 24v Unifi special jobbies aside, so it's all DC on cables that are already handling differential signalling.

> > He's got graphs showing it every 24 hours? Liar, liar, pants on fire,
> > lazy SOB is looking for an excuse to clear you off the line.
>
> Could well be from noise ingress - lots of work goes into finding &
> fixing ingress issues. Hard to say unless we look in detail at the
> connection in question and the neighborhood node.

No doubt. There's a huge amount of room for problems to be introduced into last-mile networks. But, again, this isn't about general problems. This is about a tech claiming it's due to PoE, and that he's seen it often before. I certainly have a lot of sympathy for cable techs, but that doesn't mean I want to swallow any random garbage they want to blame issues on.
Please just tell me it's the chipmunks getting feisty and nibbling on the copper if you want to feed me a line...

...
Re: PoE, Comcast Modems, and Service Outages
On Tue, Mar 29, 2022 at 03:42:54PM -0400, Josh Luthman wrote:
> There's a certain manufacturer of TDD radio where the CPU clock is at the
> same frequency as what Verizon's enodeB will transmit. Even at miles away,
> it can and will cause PIM issues. Again, don't rule it out.

I'm not ruling anything out, but on the flip side, here in this group of professional networkers, you'd think lots of people would have piped up by now with "me too"s if PoE ghosts killing cable CPE on a 24-hour cycle were a common thing.

> Maybe he's just looking for a simple answer that 99% of callers will accept
> and it makes them happy. When a customer of mine tells me they think it's
> something and I know it's off, I just let them believe in their statement.

I'm unclear on how this is making the caller happy. I'm trying to envision under what circumstances a customer site that has purchased PoE switches, presumably to power PoE gear, would be delighted to hear that their not-directly-connected PoE gear would need to be removed, presumably replaced, and then, what? Run extension cords and bricks to all the access points, IP phones, cameras, door terminals, and other PoE-powered gear?

> There's no reason to go after this tech and insult him,

I'd agree it isn't sporting, but on the other hand, a poster here asked for an evaluation. I did not immediately blow it off, but instead tossed out some thoughts for consideration. Then blew it off. But I am still pondering the issue.

> all that's doing is making everyone miserable.

I am guessing lots of people laughed. It's an El Reg grade tale of woe. I have to assume the poster who asked is frustrated but trying to resolve a real issue.

So if you want the $100 test to eliminate PoE electrical effects, get a pair of media converters and run fiber between them. Put the CPE on the far end. Optimize as appropriate if you have SFP-capable switches.

...
Re: PoE, Comcast Modems, and Service Outages
On Tue, Mar 29, 2022 at 03:07:47PM -0400, Josh Luthman wrote:
> We've routinely seen where lines not even connected to the same circuit in
> any way (ie an OTA antenna coax line and cat5 POE) cause issues with one
> another. As much as we would all love to have a perfect line in the sand,
> there isn't. Don't rule anything out until the issue is resolved.
>
> As someone that sees this in the field and watches people simply hate on
> someone because there's a frustrating situation, it's worth taking a breath
> before too upset.

You can run cable lines next to A/C wiring and get problems too. Or ethernet lines next to A/C wiring. That does not justify wild claims about PoE such as what this tech was making, and until someone shows me a graph of "PoE buildups" observable via SNMP or whatever the cable company is using to graph trends, it seems pretty clear that this is a bogus answer. There's a lot of difference between "we observed this very specific kind of interference related to PoE in a particular circumstance" and the crazy generalizations being made by the tech.

Asking to please make sure your switch is grounded properly? That'd be good. Asking for PoE to be disabled on the port? Yeah, fine. Suggesting separation of cables? Sure. Checking for proper grounding of the ground block (on the cable inlet)? Sure. There's room for things to happen. I'm all for investigating with an open mind, but I draw the line at crazy.

Given that so much of the world works on PoE, it seems like the other potential resolution would be to note that there's an implication here by the tech that Comcast's hardware is standards-noncompliant and ask them what they plan to replace their cheap CPE with.

...
Re: PoE, Comcast Modems, and Service Outages
What crontab is there that would clear out such buildups in the router's daily run? What capacitor would store up juice for precisely 24 hours? What's the mechanism here? CURIOUS MINDS WANT TO KNOW!

Been doing PoE everywhere for years and this is the stupidest thing I've heard this year so far in the networking category.

Next time, play dumb. People who can, do. People who can't, tech support. Worth remembering.

...
Re: Let's Focus on Moving Forward Re: V6 still not supported
On Sat, Mar 26, 2022 at 12:37:59PM -0400, Tom Beecher wrote:
> > Have you ever considered that this may be in fact:
> >
> > */writing/* and */deploying/* the code that will allow the use of 240/4
> > the way you expect
>
> While Mr. Chen may have considered that, he has repeatedly hand waved that
> it's 'not that big a deal.', so I don't think he adequately grasps the
> scale of that challenge.

It seems like it should only require changes on a few billion nodes, given the size of the IPv4 address space, right? Oh, wait, NAT...

So I guess the question here is how do you plan to incentivize the patching of all these devices, many of which are legacy devices with no maintainer for the firmware/software, in roles where they may not be accessible, and protected by firewalls that understand Class E to be unusable space.

I am unclear on the desirability of "fixing" the IPv4 network by touching lots of nodes, in a manner which will never be comprehensive, in order to free up a relatively small block of space. It's going to be crippled, less-valuable space. It seems to me like it'd be much more productive, if you're going to be touching gear, to move towards IPv6.

...
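For a sense of how entrenched the "Class E is unusable" assumption is: even current stock address libraries flag 240/4 as reserved, so any software doing sanity checks on addresses would need the same kind of patch as the firewalls and legacy devices above. A quick illustration with Python's stdlib (not specific to any particular vendor's gear):

```python
import ipaddress

# 240.0.0.0/4 ("Class E") is reserved-for-future-use per IANA, and
# stock libraries reflect that -- software has to be patched before it
# will treat the space as ordinary unicast.
class_e_addr = ipaddress.ip_address("240.0.0.1")
print(class_e_addr.is_reserved)                      # True

# Compare with ordinary unicast space:
print(ipaddress.ip_address("8.8.8.8").is_reserved)   # False

# And the block being fought over is comparatively small:
class_e = ipaddress.ip_network("240.0.0.0/4")
print(class_e.num_addresses)   # 268435456 -- 1/16th of the 32-bit space
```

One /4, in exchange for touching every node that embeds the reservation.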
Re: ISP data collection from home routers
On Thu, Mar 24, 2022 at 09:26:31AM -0400, Josh Luthman wrote:
> I'm surprised we're having this discussion about an internet device that
> the customer is using to publicize all of their information on Facebook and
> Twitter. Consumers do not care enough about their privacy to the point
> where they are providing the information willingly.

So your theory is that just because YOU have Facebook and you're fine sharing information (/don't care/whatever), *I* have to suffer that fate as well?

Perhaps you hadn't noticed, but there's a very active business in the form of VPNs, DNS-over-HTTPS, and other privacy-enhancing technologies that seems to indicate that people do have an interest in privacy and in limiting the amount of ISP monetization of their data that can go on. Just because some people might be fine with their data being leaked does not mean that everyone is fine with it.

...
Re: "Permanent" DST
On Tue, Mar 15, 2022 at 02:06:50PM -0700, Brandon Svec via NANOG wrote:
> "..rational time worldwide"? Like all of China in one timezone and Mumbai
> :30 off center? and Arizona? and that one county in Idaho?

The word "rational" does not belong in a sentence discussing timezones or even general time issues.

We're taught from a young age that you wake up at, well, for the sake of argument, let's agree on 7AM. You learn that businesses are "9AM to 5PM", etc. These are basically all arbitrary choices, based on hysterical raisins that have to do with the position of the sun at noon, in an era when there weren't better ways to synchronize clocks.

We COULD all work in UTC and un-learn the weird system of hour offsets and timezones. This would be convenient for people at a distance, since it would be simply a matter of stating availability hours, rather than giving someone hours AND a timezone and making them do the math. If I say that I'm available for an hour at 22:00 UTC, that works out anywhere on the globe. But do you know what timezone "CDT" is? When's "17:00 CDT"?

...
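The "making them do the math" point is exactly what tz libraries spend their lives doing on our behalf. As a sketch of the asymmetry, using Python's zoneinfo (the date is an arbitrary example chosen to fall inside US daylight time):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# "Available at 22:00 UTC" names one unambiguous instant, anywhere on
# the globe -- no further information required.
meeting = datetime(2022, 3, 15, 22, 0, tzinfo=ZoneInfo("UTC"))

# "17:00 CDT" requires knowing that CDT means America/Chicago under
# daylight saving (UTC-5), AND whether DST is in effect on that date.
local = meeting.astimezone(ZoneInfo("America/Chicago"))
print(local.strftime("%H:%M %Z"))   # 17:00 CDT
```

The conversion is trivial for a computer and a perennial source of missed meetings for humans, which is the whole argument for just saying the UTC time.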
Re: V6 still not supported (was Re: CC: s to Non List Members (was Re: 202203080924.AYC Re: 202203071610.AYC Re: Making Use of 240/4 NetBlock))
On Thu, Mar 10, 2022 at 09:55:42AM +0200, Saku Ytti wrote:
> On Wed, 9 Mar 2022 at 21:00, Joe Greco wrote:
> > I really never thought it'd be 2022 and my networks would be still
> > heavily v4. Mind boggling.
>
> Same. And if we don't voluntarily agree to do something to it, it'll
> be the same in 2042, we fucked up and those who come after us pay the
> price of the insane amount of work and cost dual stack causes.
>
> It is solvable, easily and cheaply, like most problems (energy,
> climate), but not when so many poor leaders participate in decision
> making.

I am reading your response as implying that this is somehow my fault (for my networks) and that I am a poor leader for not having embraced v6. If that's not what you meant, great, because I feel like there have been systemic issues.

There are several ASNs I run infrastructure for, on an (as you put it) "voluntary" basis, for organizations that run critical bits of Internet infrastructure but which aren't funded like they are critical bits. The problem is that I really don't have the ability to donate more of my time, since I am already 150% booked, and I'm not willing to hire someone just to donate their time. I have no idea what it is I can agree to do to make something happen here that is accomplished "easily and cheaply". From my perspective, IPv4+6 is many times the effort to deploy as just IPv4, somewhere between 5x-10x as much work depending on the specifics.

I love many of the ideas behind v6, but adoption seems tepid. I had to fight years ago to get IPv6 via broadband, and most common end-user gear still does not seem to support it, or enable it by default. Looking at the results, I think we've screwed this up. Just like the e-mail ecosystem was screwed up by poor design and then stupid bolt-on fixes, so we've finally arrived at a point where people just don't even want to deal with the problem. At least with e-mail, you can plausibly outsource it if you're not masochistic.
I feel like IPv6 is that same sort of problem, except you can't outsource it. You can avoid it by throwing some more IPv4 NAT and proxies into the mix though. And tragically, that seems to be what's happened.

...
Re: V6 still not supported (was Re: CC: s to Non List Members (was Re: 202203080924.AYC Re: 202203071610.AYC Re: Making Use of 240/4 NetBlock))
On Wed, Mar 09, 2022 at 09:46:41AM -0800, David Conrad wrote:
> Tim,
>
> On Mar 9, 2022, at 9:09 AM, Tim Howe wrote:
> > Some of our biggest vendors who have supposedly supported
> > v6 for over a decade have rudimentary, show-stopping bugs.
>
> Not disagreeing (and not picking on you), but despite hearing
> this with some frequency, I haven't seen much data to corroborate
> these sorts of statements.

Fine. We could start at the top, with protocols that are defective by design, such as OSPFv3, which lacks built-in authentication and relies on IPsec. That's great if you have a system where this is all tightly and neatly integrated, but smaller-scale networks may be built on Linux or BSD platforms, and this can quickly turn into a trainwreck of loosely cooperating but separate subsystems, maintaining IPsec with one set of tools and the routing with another.

Or ... FreeBSD's firewall has a DEFAULT_TO_DENY option for IPv4 but not for IPv6. Perhaps not a show-stopping bug, granted. But, wait, if you really want end-to-end IPv6 (without something like NAT in between doing its "faux-firewalling") to endpoints, wouldn't you really want a firewall that defaults to deny, just in case something went awry? If I've got a gateway host that normally does stateful firewalling but it fails to load due to a typo, I'd really like it to die horribly, not forwarding any packets, because someone will then notice that. But if it fails open, that's pretty awful, because it may not be noticed for months or years. So that's a show-stopper.

As exciting as it would be to go all-in on v6, it's already quite a bit of a challenge to build everything dual-stack and get to feature parity. The gratuitous differences feel like arrogant protocol developers who know what's best for you and are going to make you comply with their idea of how the world should work, complexity be damned.

I really never thought it'd be 2022 and my networks would be still heavily v4. Mind boggling.

...
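For what it's worth, the fail-closed behavior described above is expressible on the BSDs with pf, which applies a single ruleset to both address families; a minimal sketch (the interface name is a placeholder, and this is one way to get the property, not a claim about ipfw):

```
# /etc/pf.conf -- minimal default-deny skeleton covering v4 and v6 alike
block all                          # fail closed for every address family
pass out on em0 keep state         # then explicitly allow what you intend
pass in on em0 proto tcp to port 22 keep state
# On a reload, a parse error anywhere in this file aborts the whole load
# and pf keeps the previously loaded rules, rather than silently passing
# everything the way an ipfw script that dies mid-run can.
```

The larger point stands, though: getting equivalent deny-by-default semantics for v6 should not require switching firewall packages.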
Re: Is soliciting money/rewards for 'responsible' security disclosures when none is stated a thing now?
On Fri, Mar 04, 2022 at 11:33:47PM +0200, Denys Fedoryshchenko wrote:
> This is typical "Beg bounty".
> https://www.troyhunt.com/beg-bounties/

This probably isn't even that. I've seen a bunch of similar spam to various role accounts, some at domains that don't even have a website, in the last month or so. Several contained "real names" of alleged security researchers who did not seem to exist in the real world.

It is worth remembering that bad guys may be interested in collecting the e-mail addresses of people who are responsible for security within your organization. These could be used to target those people with malware, or to forge legitimate-looking e-mails "from" your security department to your other employees.

It is likely that no good can come of engaging with these.

...
Re: Russian aligned ASNs?
On Thu, Feb 24, 2022 at 05:59:08PM -0800, Seth David Schoen wrote:
> I also imagine (without data) that most DoS attacks continue to be
> performed by botnets, using other people's connections, rather than
> directly by their ultimate perpetrators. So, the most effective and
> meaningful mitigation would be trying to clean up bots, and prevent
> ongoing bot infections, rather than cutting off suspected or actual
> perpetrators.
>
> I realize that's much easier said than done!

It is, and it isn't. There was a time when we mostly all had staffed abuse desks and took action on complaints. Some of us still do. If we took the security of the Internet seriously, we could at least make a reasonable effort to develop ways to cope with the growing problems that are only exacerbated by stuff like the explosive growth of IoT, and the resulting IoT malware. But this has to include service providers giving a damn about what they let their customers spew out onto the network, and it's been many years since it became clear that profit margin won out over being a decent netizen.

...
Re: Russian aligned ASNs?
On Thu, Feb 24, 2022 at 07:40:54PM -0500, William Allen Simpson wrote:
> Apparently some Russian government sites have already cut themselves
> off, presumably to avoid counterattacks.
>
> Would it improve Internet health to refuse Russian ASN announcements?
>
> What is our community doing to assist Ukraine against these attacks?

Keeping the free flow of information going seems to be the best way to counter a history of isolationist tendencies by authoritarian governments and repressive regimes. Countries that have dabbled with the idea of firewalls, content filters, alternative DNS or even alternative networks, etc., are given encouragement if you cut them off. It may be best to focus on things that are less IP-centric and more of a problem-solving variety.

Running a good Tor node, by any chance?

...
Re: LEC copper removal from commercial properties
On Wed, Feb 16, 2022 at 08:58:21PM -0500, Martin Hannigan wrote:
> At least in Boston, commercial property owners are receiving notices that
> 'copper lines are being removed per FCC rules' and replaced with fiber.
> The property owner, not the network operators (or users of unbundled
> elements if that's even still a thing) are being presented with an
> agreement that acknowledges the removal, authorizes the fiber installation
> and provides for a minor oversight of the design. It suggests that no costs
> are involved in terms of hosting equipment. No power reimbursement. No rent
> for spaces used.

I have the opposite story, of a commercial property where fiber was installed, but they refused to remove the 12-pair copper, refused to remove a massive demarc cabinet, and then threatened the property owner that he couldn't remove it either. Pity I didn't know that when I removed it while cleaning up the huge mess. And yes, of course I checked that all the pairs were dead.

...
Re: Authoritative Resources for Public DNS Pinging
On Fri, Feb 11, 2022 at 09:58:19AM -0500, Jon Lewis wrote:
> So...here's a pair of "what if"s:
>
> What if instead of pinging 8.8.8.8, all these things using it to "test the
> Internet" sent it DNS requests instead? i.e.
>
>     GOOG=$(dig +short @8.8.8.8 google.com)
>     if [ -z "$GOOG" ] ; then
>         echo FAIL
>     fi
>
> Would that make things better or worse for GOOG (Trading lots more DNS
> requests for the ICMP echo requests)?

ping is relatively ubiquitous. There are certainly platforms on which it isn't installed, but compare/contrast to the DNS options. Is it host? nslookup? dig? No tool? "ping internet" or "ping 8.8.8.8" are fairly straightforward by comparison.

> 8.8.8.8 is already anycasted. What if each large ISP (for whatever
> definition of large floats your boat) setup their own internal instance(s)
> of 8.8.8.8 with a caching DNS server listening, and handled the traffic
> without bothering GOOG? For users using 8.8.8.8 as a lighthouse, this
> would change the meaning of their test...i.e. a response means their
> connection to their ISP is up, and the ISP's network works at least enough
> to reach an internal 8.8.8.8, but the question of their connectivity to
> the rest of the Internet would be unanswered.

Certainly that is true. Hijacking of any mechanism is a potential risk. Tying it into the DNS somehow at least introduces the opportunity for DNSSEC to reduce the chance of an ISP mucking with the intended result.

We could even call it the Enhanced Link Verification Internet Service. "ping elvis" :-P

...
Re: Authoritative Resources for Public DNS Pinging
On Wed, Feb 09, 2022 at 11:21:26PM +, Mark Delany wrote:
> On 09Feb22, Joe Greco allegedly wrote:
>
> > So what people really want is to be able to "ping internet" and so far
> > the easiest thing people have been able to find is "ping 8.8.8.8" or
> > some other easily remembered thing.
>
> Yes, I think "ping internet" is the most accurate description thus far.
> Or perhaps "reach internet".
>
> > Does this mean that perhaps we should seriously consider having some
> > TLD being named "internet"
>
> Meaning you need to have a functioning DNS resolver first? I'm sure you
> see the problem with that clouding the results of a diagnostic test.

Perhaps. As I noted, the problem with this is that someone will try to make what could be a relatively simple thing complicated, and you're already implying the first step down that road, which is trying to do something more than a simple pass/fail test.

> > service providers register appropriate upstream targets for their
> > customers, and then maybe also allow for some form of registration such
> > that if I wanted to provide a remote ping target for AS14536, I could
> > somehow register "as14536.internet" or "solnet.internet"?
>
> Possibly. You'd want to be crystal clear on the use cases. As a starting
> point, maybe:
>
> 1. Do packets leave my network?
> 2. Do packets leave my ISP's network?
> 3. Mainly for IOT - is the internet reachable?
>
> Because of 2 and 3, I don't think creative solutions such as ISPs
> anycasting some memorable IP or name will do the trick. And because of 1,
> anything relying on DNS resolution is probably a non-starter. Much as I
> like "ping ping.ripe.net" it alone is too intertwined with DNS resolution
> to be a reliable alternative.

I dunno. I think I'd find that being unable to resolve a hostname and being unable to exchange packets result in a similar level of Internet brokenness. It is going to be hard to quantify all the things you might want to test for.
You already enumerated several. But if it has to be a comprehensive "Internet is fully working" test, what do you do to be able to detect that your local coffee shop isn't implementing a net nanny filter? Just to take it too far down the road. ;-) > > Fundamentally, this is a valid issue. > > Yup. There are far more home-gamers and tiny network admins (the networks are > tiny, not > the admins) who just want to run a reachability test or add a command to a > cheap network > monitor cron job. Those on this list who can - or should - do something more > sophisticated > are numerically in the minority of people who care about reachability and are > not really > the target audience for a better "ping 8.8.8.8". Well, that's sorta true. > > and we'll end up needing a special non-ping client and some trainwreck of > > names and > > other hard-to-grok > > I'm not sure the two are fundamentally intertwined tho it could easily be an > unintended > consequence. However, being constrained to creating a new ping target does > severely limit > the choices. And including ipv6 just makes that more complicated. > > The other matter is that the alternative probably has to present a compelling > case to > cause change in behavior. I can see an industry standard ping target being of > possible use > to tests built into devices. But again it'd have to be compelling for most > manufacturers > to even notice. Change happens. Look at pool.ntp.org. > But for humans, I'd be surprised if you can create a compelling alternative > ping > target. For them, I'd be going down the path of a "ping-internet" command > which answers > use-cases 1. & 2. while carefully avoiding the second-system syndrome - he > says with a > laugh. Well, I've lamented many times over the years about how we (as a network operations community) have failed to address issues in a meaningful way. End users are using "ping 8.8.8.8" to test basic connectivity. 
I'm happy to see the development of resources such as the RIPE Atlas monitor, because for too many years I've had to guess at strategic points to monitor. However, tools for the average end user that could also be used by the more experienced folks would be nice. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
Re: Authoritative Resources for Public DNS Pinging
On Wed, Feb 09, 2022 at 05:02:01PM +0200, Mark Tinka wrote: > > > On 2/9/22 16:53, Łukasz Bromirski wrote: > > >Yup. And Google folks accounted for the world pinging them all day long. > > > >I wouldn't call using DNS resolvers as best "am I connected to internet > >over this interface" tool though. A day, year or 5 years from now the same > >team may decide to drop/filter and then thousands of hardcoded "handmade > >automation solutions" will break. And I believe that's closer to what > >Masataka was trying to convey. > > I get that, but what I'm saying is that users tend to expect things to > remain the same. In reality, they don't, because as abstract as the > Internet seems to most users, it is run by actual people, who have to > apply mind and muscle to not only stand things up, but keep them > standing. The movement of those people has an impact on that, even in > very well established institutions. > > So unless there is some specific accommodation from Google et al, that > the servers they run for one service can be used for liveliness > detection, expect breakage when that changes, at their whim. Until then, > do not expect users to honour the original intent of the service. If it > can serve some other purpose (like liveliness detection), they will use > it for that purpose in the hopes that it will always be there, for that > purpose. So what people really want is to be able to "ping internet" and so far the easiest thing people have been able to find is "ping 8.8.8.8" or some other easily remembered thing. Does this mean that perhaps we should seriously consider having some TLD being named "internet", with maybe a global DNS redirector that lets service providers register appropriate upstream targets for their customers, and then maybe also allow for some form of registration such that if I wanted to provide a remote ping target for AS14536, I could somehow register "as14536.internet" or "solnet.internet"? Fundamentally, this is a valid issue. 
As the maintainer of several BGP networks, I can't really rely on an upstream consumer ISP to be the connectivity helpdesk when something is awry. It would really be nice to have a list of officially sanctioned testing points so that one could just do "ping google.internet" or "ping level3.internet" or "ping comcast.internet" or "ping aws.internet" and get a response. The problem with this is that someone will try to make what could be a relatively simple thing complicated, and we'll end up needing a special non-ping client and some trainwreck of names and other hard-to-grok garbage, and then we're perilously close to coming back to the current situation where people are using arbitrary targets out on the Internet for connectivity testing. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
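A minimal sketch, assuming Python 3, of the simple pass/fail test being discussed: check raw-IP reachability separately from DNS resolution, so "packets don't leave my network" can be told apart from "my resolver is broken". The 8.8.8.8 target comes from the thread; example.com, port 53, and the function names are illustrative assumptions, not sanctioned test points.

```python
# Hedged sketch of a pass/fail connectivity test that separates raw-IP
# reachability from DNS resolution.
import socket

def tcp_reachable(host, port=53, timeout=3.0):
    """True if a TCP handshake to host:port completes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

def resolves(name):
    """True if the local resolver can resolve the name."""
    try:
        socket.getaddrinfo(name, None)
        return True
    except socket.gaierror:
        return False

def diagnose():
    ip_ok = tcp_reachable("8.8.8.8")   # no DNS involved at all
    dns_ok = resolves("example.com")   # exercises the resolver path
    if ip_ok and dns_ok:
        return "pass: connectivity and DNS both work"
    if ip_ok:
        return "fail: packets flow, but DNS resolution is broken"
    return "fail: no IP connectivity (DNS result is moot)"
```

Something like `print(diagnose())` is about as much as a helpdesk script needs; anything fancier starts down the complexity road the thread warns about.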
Re: Log4j mitigation
On Mon, Dec 13, 2021 at 03:50:11PM +0100, Jörg Kost wrote: > But in a world where the attacker can leak out a whole 16-bit integer, > monitoring that 0.003% for two-port states may be irrelevant. > Not saying you shall not, but you will miss 99.997%. Agree? There's all sorts of statements I might agree with. However, if I have an easy indicator of a known problem, such as "LDAP traffic to an unknown server", I might be very tempted to set the IDS to notify me if it sees the weird thing, and then let the very fast moron just do its job. That's what it's there for, after all. Right? I don't care if it misses 9% or 99% or 99.997%. If I can generate some cheap and easy hits, without finding out about problems the Equifax way, I don't see the harm in that. Sometimes we do things "just in case." ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
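The "cheap and easy hits" idea above can be sketched in a few lines: flag any outbound LDAP/LDAPS flow whose destination isn't a known directory server. The flow records, field names, and allowlist below are made-up illustrations, not any particular IDS's API.

```python
# Hedged sketch of the canary check described above: alert on LDAP/LDAPS
# traffic to servers we don't recognize. In a real deployment this would
# be fed from an IDS or flow collector.
ALLOWED_LDAP_SERVERS = {"10.0.0.5", "10.0.0.6"}  # your actual directory servers
LDAP_PORTS = {389, 636}

def suspicious_flows(flows):
    """Return flows that speak LDAP to a server not on the allowlist."""
    return [f for f in flows
            if f["dst_port"] in LDAP_PORTS
            and f["dst_ip"] not in ALLOWED_LDAP_SERVERS]

sample = [
    {"src_ip": "10.1.2.3", "dst_ip": "10.0.0.5",     "dst_port": 389},  # normal
    {"src_ip": "10.1.2.4", "dst_ip": "203.0.113.99", "dst_port": 389},  # canary hit
    {"src_ip": "10.1.2.5", "dst_ip": "198.51.100.7", "dst_port": 443},  # not LDAP
]
for f in suspicious_flows(sample):
    print("ALERT:", f["src_ip"], "->", f["dst_ip"], "port", f["dst_port"])
```

It misses plenty, as conceded above, but it costs nearly nothing to run.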
Re: Log4j mitigation
On Mon, Dec 13, 2021 at 01:49:07PM +0100, Jörg Kost wrote: > I understand what you want to say, but I disagree in this point. When > you have a cup full of water and someone remotely can drill holes into > the outer shell, just checking the bottom for leaks won't help. You may > want a new mug instead. :-) The initial posting was about looking at the > bottom only. Maybe I'm the only one who puts cheap wireless leak sensors near toilets, drains, and other less-likely sources of water, in addition to the big alarm system hardwired ones in all the usual places. Of course, then again, we also have two AC sump pumps and one that is battery backup, all protected by generator and ATS. I prefer to know. You, of course, are free to disregard as you see fit. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
Re: Log4j mitigation
On Mon, Dec 13, 2021 at 01:12:25PM +0100, Jörg Kost wrote: > Yes, but it won't change the outcome. We shall run with assuming breach > paradigm. In this scenario, it might be useless looking around for port > 389 only; it can give you a wrong assumption. That's like arguing that it isn't worth having a canary in the coal mine. Which, come to think of it, was implicitly the point of the message I sent that you're replying to as well. Just because there are other sources of fatalities, doesn't mean you can't check for the quick obvious stuff. In my experience, this tends to reveal issues that might have been forgotten or never known about to begin with. Most organizations have a variety of zombie legacy systems that were set up by people on staff several generations ago. The more tools at your disposal to identify breached systems, the better. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
Re: Log4j mitigation
On Mon, Dec 13, 2021 at 12:39:58PM +0100, Jörg Kost wrote: > You can't see it. I think you meant "you can't reliably see it". This doesn't mean that it isn't worth looking for obvious cases where you CAN see it. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
Re: Facebook post-mortems...
On Tue, Oct 05, 2021 at 03:40:39PM +0200, Mark Tinka wrote: > Yes, total nightmare yesterday, but sure that 9,999 of the helpdesk > tickets had nothing to do with DNS. They likely all were - "Your > Internet is down, just fix it; we don't wanna know". Unrealistic user expectations are not the point. Users can demand whatever unrealistic claptrap they wish to. The point is that there are a lot of helpdesk staff at a lot of organizations who are responsible for responding to these issues. When Facebook or Microsoft or Amazon take a dump, you get a storm of requests. This is a storm of requests not just to one helpdesk, but to MANY helpdesks, across a wide number of organizations, and this means that you have thousands of people trying to investigate what has happened. It is very common for large companies to forget (or not care) that their technical failures impact not just their users, but also external support organizations. I totally get your disdain and indifference towards end users in these instances; for the average end user, yes, it indeed makes no difference if DNS works or not. However, some of those end users do have a point of contact up the chain. This could be their ISP support, or a company helpdesk, and most of these are tasked with taking an issue like this to some sort of resolution. What I'm talking about here is that it is easier to debug and make a determination that there is an IP connectivity issue when DNS works. If DNS isn't working, then you get into a bunch of stuff where you need to do things like determine if maybe it is some sort of DNSSEC issue, or other arcane and obscure issues, which tends to be beyond what front line helpdesk is capable of. These issues often cost companies real time and money to figure out. 
It is unlikely that Facebook is going to compensate them for this, so this brings me back around to the point that it's preferable to have DNS working when you have a BGP problem, because this is ultimately easier for people to test and reach a reasonable determination that the problem is on Facebook's side quickly and easily. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
Re: Facebook post-mortems...
On Tue, Oct 05, 2021 at 02:57:42PM +0200, Mark Tinka wrote: > > > On 10/5/21 14:52, Joe Greco wrote: > > >That's not quite true. It still gives much better clue as to what is > >going on; if a host resolves to an IP but isn't pingable/traceroutable, > >that is something that many more techy people will understand than if > >the domain is simply unresolvable. Not everyone has the skill set and > >knowledge of DNS to understand how to track down what nameservers > >Facebook is supposed to have, and how to debug names not resolving. > >There are lots of helpdesk people who are not expert in every topic. > > > >Having DNS doesn't magically get you service back, of course, but it > >leaves a better story behind than simply vanishing from the network. > > That's great for you and me who believe in and like troubleshooting. > > Jane and Thando who just want their Instagram timeline feed couldn't > care less about DNS working but network access is down. To them, it's > broken, despite your state-of-the-art global DNS architecture. You don't think at least 10,000 helpdesk requests about Facebook being down were sent yesterday? There's something to be said for building these things to be resilient in a manner that isn't just convenient internally, but also externally to those people that network operators sometimes forget also support their network issues indirectly. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
Re: Facebook post-mortems...
On Tue, Oct 05, 2021 at 02:22:09PM +0200, Mark Tinka wrote: > > > On 10/5/21 14:08, Jean St-Laurent via NANOG wrote: > > >Maybe withdrawing those routes to their NS could have been mitigated by > >having NS in separate entities. > > Well, doesn't really matter if you can resolve the A/AAAA/MX records, > but you can't connect to the network that is hosting the services. > > At any rate, having 3rd party DNS hosting for your domain is always a > good thing to have. But in reality, it only hits the spot if the service > is also available on a 3rd party network, otherwise, you keep DNS up, > but get no service. That's not quite true. It still gives much better clue as to what is going on; if a host resolves to an IP but isn't pingable/traceroutable, that is something that many more techy people will understand than if the domain is simply unresolvable. Not everyone has the skill set and knowledge of DNS to understand how to track down what nameservers Facebook is supposed to have, and how to debug names not resolving. There are lots of helpdesk people who are not expert in every topic. Having DNS doesn't magically get you service back, of course, but it leaves a better story behind than simply vanishing from the network. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
Re: Rack rails on network equipment
On Sat, Sep 25, 2021 at 04:23:38PM -0700, Jay Hennigan wrote: > On 9/25/21 16:14, George Herbert wrote: > >(Crying, thinking about racks and racks and racks of AT&T 56k modems > >strapped to shelves above PM-2E-30s...) > > And all of their wall-warts [...] You were doing it wrong, then. :-) ExecPC had this down to a science, and had used a large transformer to power a busbar along the back of two 60-slot literature organizers, with 4x PM2E30's on top, a modem in each slot, and they snipped off the wall warts, using the supplied cable for power. A vertical board was added over the top so that the rears of the PM2s were exposed, and the board provided a mounting point for an ethernet hub and three AMP RJ21 breakouts. This gave you a modem "pod" that held 120 USR Courier 56K modems, neatly cabled and easily serviced. The only thing coming to each of these racks was 3x AMP RJ21, 1x power, and 1x ethernet. They had ten of these handling their 1200 (one thousand two hundred!) modems before it got unmanageable, and part of that was that US Robotics offered a deal that allowed them to be a testing site for Total Control. At which point they promptly had a guy solder all the wall warts back on to the power leads and proceeded to sell them at a good percentage of original price to new Internet users. The other problem was that they were getting near two full DS3's worth of analog lines being delivered this way, and it was taking up a TON of space. A full "pod" could be reduced to 3x USR TC's, so two whole pods could be replaced with a single rack of gear. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
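The arithmetic behind the pod and DS3 figures above checks out, assuming standard DS3 channelization (28 T1s of 24 DS0 voice channels each); a quick sketch:

```python
# Back-of-envelope check of the modem-pod numbers, assuming standard
# channelization: one DS3 = 28 T1s x 24 DS0 voice channels = 672 lines.
MODEMS_PER_POD = 120        # two 60-slot literature organizers
PODS = 10
DS0_PER_DS3 = 28 * 24       # 672

total_modems = MODEMS_PER_POD * PODS
print(total_modems)                          # 1200
print(round(total_modems / DS0_PER_DS3, 2))  # 1.79 -- "near two full DS3's"
```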
Re: Rack rails on network equipment
On Fri, Sep 24, 2021 at 02:49:53PM -0500, Doug McIntyre wrote: > You mention about hardware lockin, but I wouldn't trust Dell to not switch > out the design on their "next-gen" product, when they buy from a > different OEM, as they are wont to do, changing from OEM to OEM for > each new product line. At least that is their past behavior over many years > in the past that I've been buying Dell switches for simple things. > Perhaps they've changed their tune. That sounds very much like their 2000's-era behaviour when they were sourcing 5324's from Accton, etc. Dell has more recently acquired switch companies such as Force10 and it seems like they have been doing more in-house stuff this last decade. There has been somewhat better stability in the product line IMHO. > For me, it really doesn't take all that much time to mount cage nuts > and screw a switch into a rack. It's all pretty 2nd nature to me, look > at holes to see the pattern, snap in all my cage nuts all at once and > go. If you are talking rows of racks of build, it should be 2nd nature? The quick rails on some of their new gear is quite nice, but the best part of having rails is having the support on the back end. > Also, I hate 0U power, for that very reason, there's never room to > move devices in and out of the rack if you do rear-mount networking. Very true. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
Re: Carriers need to independently verify LOAs
On Mon, Apr 19, 2021 at 01:20:22PM -0400, Sean Donelan wrote: > On Sat, 17 Apr 2021, Eric Kuhnke wrote: > >Anecdotal: With the prior consent of the DID holders, I have successfully > >ported peoples' numbers using nothing more than a JPG scan of a signature > >that looks like an illegible 150 dpi black and white blob, pasted in an > >image editor on top of a generic looking 'phone bill'. > > All carriers should independently verify any LOAs received for account > changes. > > Documents received from third-parties, without independently verifying > with the customer of record, using the carriers own records, are just junk > papers. > > Almost no carriers verify LOAs by contacting the customer of record. > Worse, they call the phone number on the letterhead provide by the scammer > for "verification." Presumably we're kinda talking about a problem parallel to the Internet ASN/IP space LOA problem here. It would be awesome if there were a nice easy way to identify the responsible parties, so you could figure out WHOIS the appropriate party to contact. If you've ever tried Googling a company with a hundred thousand employees, calling their contact number on the Web, and getting through to anybody who knows anything at all about IT, well, you can spend a day at it and still have gotten nowhere. It's too bad that this information is so frequently redacted for privacy. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
Re: more bad lawyering about Parler
On Mon, Jan 11, 2021 at 02:33:08AM -0800, William Herrin wrote: > On Mon, Jan 11, 2021 at 2:19 AM Danny O'Brien wrote: > > On Sun, Jan 10, 2021 at 8:54 PM William Herrin wrote: > >> there have been some real post-CDA head scratchers where > >> a court decided that an online service exercised sufficient control of > >> the content to have made itself a publisher. > > > > You really need to give citations here, because IMHO not only is this > > *exactly* the scenario that Section 230 was intended to provide legal > > clarity regarding (and so protect service providers from this kind of > > moderation double-bind), but as I understand it pretty much all the > > subsequent caselaw has *strengthened* the ability for providers to moderate > > and manage content, including user-generated content, without triggering > > liability. > > Well, for example, Oberdorf v. Amazon.com, No. 18-1041 (3rd Cir. July > 3, 2019) which found that Amazon was a seller of goods and not merely > hosting information about a third party's sale, and thus subject to > product liability law for the product that was sold. But in the Erie > Insurance case, with similar circumstances, the court found the > opposite, that section 230 barred the plaintiff from suing Amazon over > a defective third-party product. These seem to be examples of situations where Amazon and Erie were selling things (other than Internet access/services), and were not merely acting as a service provider. I don't think that, back when the CDA was written, the service provider world ever expected random retailers or other sellers of products and services to be able to claim section 230 protections just because the transaction happened to be enabled by the Internet. It also isn't clear under what theory 230 protections would take precedence over other protections such as product liability law. I don't think that the fact that you might also sell Internet services creates an umbrella. 
Are there examples that do not conflate other areas of the law? Given the subject here, it seems relevant to want examples closer to what Parler and service providers providing them services or connectivity might need to consider. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
Re: Parler
On Sun, Jan 10, 2021 at 10:03:51AM -0500, sro...@ronan-online.com wrote: > Another interesting angle here is that it was ruled the President > couldn't block people, because his Tweets were government > communication. So has Twitter now blocked government communication? That's not interesting or even a reasonable comparison. Twitter wasn't involved in the former. There is a huge difference in the President being told that he cannot block random citizens from reading his tweets (no Twitter involvement), and Twitter declaring that they no longer wish to provide service to the President (Twitter's right as it is their private property). The President is free to pursue alternative venues for his messaging. Conflating unrelated things and drawing bad conclusions is not useful. At some point, it seems likely that the networking community may be faced with more choices such as what Cloudflare faced with 8chan. In an ideal world, people would act responsibly and we could have the nice things like libertarian ideals, but the reality as demonstrated by the last quarter century seems to indicate otherwise, in many small and not-so-small ways. I find that distressing, but I am not so libertarian as to insist that others pay for this stuff with their lives. I don't have any idea what the correct answer is, though. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
Re: 60 ms cross-continent
On Sat, Jun 20, 2020 at 09:24:11AM -0700, William Herrin wrote: > Howdy, > > Why is latency between the east and west coasts so bad? Speed of light > accounts for about 15ms each direction for a 30ms round trip. Where > does the other 30ms come from and why haven't we gotten rid of it? > > c = 186,282 miles/second > 2742 miles from Seattle to Washington DC mainly driving I-90 > > 2742/186282 ~= 0.015 seconds Speed of light in a fiber is more like 124K miles per second. It depends on the refractive index. And of course amplifiers and stuff. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
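To make the correction concrete, here is the quoted back-of-envelope math redone with the in-fiber figure. The refractive index of ~1.5 is an approximation (silica is roughly 1.46-1.5), and road mileage understates the actual fiber path, so real RTTs run longer still:

```python
# Reworking the quoted calculation with light-in-fiber speed: v = c / n.
C_MILES_PER_SEC = 186_282
N_FIBER = 1.5               # approximate refractive index of silica fiber
DISTANCE_MILES = 2742       # Seattle to Washington DC, per the original post

v_fiber = C_MILES_PER_SEC / N_FIBER            # ~124k miles/second
one_way_ms = DISTANCE_MILES / v_fiber * 1000
print(round(v_fiber))        # 124188
print(round(one_way_ms, 1))  # 22.1 -- so ~44 ms round trip in fiber alone
```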
Re: Partial vs Full tables
On Mon, Jun 08, 2020 at 07:14:01PM +0100, Nick Hilliard wrote: > William Herrin wrote on 08/06/2020 18:53: > >4 gigs and 2 cores is more than sufficient for a 1 gbps router at > >the current 800k routes > > 1gbps is residential access speed. Is this still useful in the dfz? Yes, it is. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
Re: Curious Cloudflare DNS behavior
On Sun, May 31, 2020 at 10:07:41AM -0600, Keith Medcalf wrote: > On Saturday, 30 May, 2020 13:18, Joe Greco wrote: > > >The Internet didn't evolve in the way its designers expected. Early > >mistakes and errors required terrible remediation. As an example, look > >at the difficulty involved in running a service like e-mail or DNS. > >E-mail requires all sorts of things to interoperate well, including > SPF, > >DKIM, SSL, DNSBL's, etc., etc., and it is a complicated service to run > >self-hosted. DNS is only somewhat better, with the complexity of > DNSSEC > >and other recent developments making for more difficulties in > maintaining > >self-hosted services. > > I've been running my own DNS and e-mail for more than a quarter century. > Contrary to your proposition it hasn't gotten much more complicated over > that time. Really? Because nowadays, there's all this extra crap that didn't used to exist. From my perspective, it's gone from "configure Sendmail on your Sun workstation and compile Elm (back in the '80's)" to something a lot more complicated. Now you need to sign your mail with DKIM, have SPF records, and even if you cross all the T's and dot all the I's, you can expect your mail to be rejected at some major mail sites because the LACK of a consistent high volume of mail being sent by your site is actually scored against you. On the inbound side, you now need to be filtering your mail with Spamassassin and DNSBL's, and also virus scanners because it's likely some of your users won't be. You need to support both IMAP _and_ webmail if you want to be able to support users, because we are now in that "post-PC" era where people expect to be able to sit down at an arbitrary PC and have an experience on par with that of any of the mail service providers. I've watched in dismay as many technically competent sysadmins, and even whole service providers, have given up and outsourced e-mail, because it is so difficult to do well. 
Even Apple finally ditched their OSX Server product's email services, which had for years been one of my best examples of "it's still possible to run this yourself." If this is your idea of "hasn't gotten much more complicated", I salute your technical prowess. It's not that I want this to be the status quo, but I'm also not so blind as to deny what is going on. :-( ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
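For a sense of the "extra crap" being described, the DNS side alone of a minimally respectable self-hosted mail domain now looks something like the following; example.com, the selector name, and the policy choices are placeholders, not a recommended configuration:

```
; SPF: declare which hosts may send mail for the domain
example.com.                      IN TXT "v=spf1 mx a:mail.example.com -all"

; DKIM: public key published under a selector chosen by the signer
selector1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<base64 public key>"

; DMARC: policy tying the SPF/DKIM results together
_dmarc.example.com.               IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```

None of this existed in the Sendmail-and-Elm era.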
Re: Curious Cloudflare DNS behavior
On Sat, May 30, 2020 at 01:52:58PM -0500, Constantine A. Murenin wrote: > When you're not paying for service, you're not the customer, you're the > product. A pleasantly misleading statement. Most easily observed in that there are many cases where there is multiple monetization. You may be your broadband provider's customer, but it's likely they're still selling you in other ways. On the flip side, some of us provide free services with no ulterior motive. Go figure. > I don't understand why anyone, especially anyone frequenting NANOG, would > use Cloudflare for their DNS. The early '90's called and said you're missing (don't worry, they said it about me too). :-) ;-) The Internet didn't evolve in the way its designers expected. Early mistakes and errors required terrible remediation. As an example, look at the difficulty involved in running a service like e-mail or DNS. E-mail requires all sorts of things to interoperate well, including SPF, DKIM, SSL, DNSBL's, etc., etc., and it is a complicated service to run self-hosted. DNS is only somewhat better, with the complexity of DNSSEC and other recent developments making for more difficulties in maintaining self-hosted services. Some people want basic services that "just work" without having to put any effort into them. That isn't limited to non-technical users. Outsourcing stuff like DNS is just a continuation of the trend of sending your workloads onto someone else's cloud. It seems easy -- right up until it isn't working the way you want it to. But for most people, even those frequenting NANOG, maybe they just don't want to go set up their own recursion nameservice. I'm not saying I agree with that strategy, but at least it's understandable. ... 
JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
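For anyone who does want to "go set up their own recursion nameservice", the entry barrier is admittedly low; a minimal sketch of an unbound configuration, with listener address and access netblock as placeholders:

```
# Minimal recursive resolver sketch for unbound(8); adjust interface and
# access-control for your own network.
server:
    interface: 127.0.0.1
    access-control: 127.0.0.0/8 allow
    hide-identity: yes
    hide-version: yes
```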
Re: Contact at Ubiquiti Networks?
On Tue, May 26, 2020 at 02:44:40PM +, Mel Beckman wrote: > JG, > > I empathize with your BGP problems. I've had problems with BGP > on anything other than Cisco for my entire networking life. It's > just the nature of the beast, although that's not an excuse for > Ubiquiti not fixing it. > > But what is an excuse is market demand. How many people do you > think speak BGP on Ubiquiti routers? I know Ubiquiti, like every > company, likes to claim that they do everything. But no company > can do everything, so you have to find out where their strengths > are and avoid their weaknesses. Well, my point was more about the nature of software (it's fixable!) and the "market pressures deter edge case bug fixes" argument, which appears to be a fallacy if you, as a hardware vendor, have paid a license for some professionally developed product, like ZebOS. ZebOS is the commercial offspring of Zebra, which was later forked as Quagga, which in turn was forked as FRR. I have minor complaints about all of them, but the open source developers have generally done well over the years. ZebOS is integrated into a variety of networking devices. A quick Google suggests this includes F5, SonicWall, Ubiquiti, Fortinet, and other devices. Ubiquiti produced its EdgeRouter Lite back in late 2012, able to do a million PPS on a $100 platform, so there's little doubt about their ability to create devices that do "hardware assisted" software packet routing. I was kinda hoping that the marriage of their hardware and ZebOS would result in a usable product. I am pretty sure that's what Ubiquiti expected to happen, so that they would not need to worry about the finer points of arcane routing protocols. I don't really have a need to do a bazillion PPS. There's still an Ascend GRF 400 here, and having passed the 150K routes mark, it now serves to lift my office laser printer to a better height. It's also part of why I try to avoid buying the hardware routers. 
There's no budget for it and hardware routers generally provide far more router than is needed here. > Personally, I always put a pair of stacked Cisco layer3 switches > at the edge of every BGP network. This gives me reliable, redundant > BGP peering that operates at wire speed and can still carry full > backbone tables. Use Cisco hardware let me do this for less money > then I would pay for a buggy ubiquiti router. [assuming that was supposed to be "Used Cisco hardware"] Veering way off topic here, I wasn't aware that there were layer 3 stackable Cisco switches that could handle full BGP tables. The Ubiquiti Infinity is $1,800. I am curious what you're using. I have nothing against used hardware. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
Re: Contact at Ubiquiti Networks?
On Tue, May 26, 2020 at 01:43:02PM +, Mel Beckman wrote: > I deploy Ubiquiti equipment quite a bit, both in WLANs and WISP > distribution networks. It's excellent quality at a dirt cheap > price. As with all software-based products, there will be bugs. > Your or my pet bug may never get fixed, based on market demand. > That's simply capitalism, not low quality. None of us can > afford to pay for perfection, because it would never ship. Bugs exist in hardware products too. The difference is that with the software products, you'd hope for them to be fixed, whereas the ones in hardware generally turn into RMA. My current pet peeves with Ubiquiti are all on the router side of things. OSPFv3 (IPv6) doesn't work correctly past EdgeOS v1.10.9, and their BGP blows chunks - I've got an Infinity connected to a pair of route reflectors handling a single IX (two route servers) and it loses its mind, with the bgpd process actually going away. Whether that's Ubiquiti's fault or should be blamed on ZebOS is a debatable question. If you've got a vendor supplying your routing software, it seems like fixing advertised features that are clearly broken would be a matter of applying pressure on the upstream vendor whose code used to work and then was broken, not a matter of "market demand." What's not debatable is that this has been the status quo for around nine months. That's nine months without proper IPv6 support. And this is their high end 10G full BGP tables router. Buyer beware. The wifi side of things? Yes, the Ubiquiti stuff is very inexpensive and it provides better value-per-dollar than just about anything else out there. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
Don't forget RFG (was: Re: RIPE NCC Executive Board election)
On Fri, May 15, 2020 at 11:23:28AM +0200, Terrence Koeman via NANOG wrote: > FYI, the voting results for the three positions on the RIPE exec > board were just announced and Elad was NOT elected. https://www.ripe.net/participate/meetings/gm/meetings/may-2020 Congratulations to Maria Håll, Raymond Jetten, and Christian Kaufmann on their election. I'm not familiar with any of them, but a quick search suggests that they are all eminently qualified. > No doubt we should thank the super illegal, criminal and anonymous > Spamhaus cabal as well as the super shady and corrupt IPv6 lobby > for manipulating this election from the shadows! Please don't forget the efforts of Ronald F. Guilmette. I may not agree with RFG on various things, but I applaud the tenacity he has always displayed, and the ability to instigate a 100+-message flamefest on NANOG that so clearly demonstrated important qualities about his target -- now archived in perpetuity. Ronald, thanks for this public service. Electing qualified board members to RIPE, etc., is important, especially in this era that sees things like the .ORG/PIR/ICANN debacle. I expect that it is also outside the charter of the list, but on the other hand, it isn't clear that there's a better place. These things are not operational or technical issues, but eventually do come to have an impact on operations. Regards, ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
Re: RIPE NCC Executive Board election
On Wed, May 13, 2020 at 01:46:01PM +, Elad Cohen wrote: > Hello Everyone, > > My apology for not providing an official response to the first "The > Ronald Show" that took place here many months ago, I was out of > hospital after full anesthesia and it took me months to get back to > myself. Ah, yeah. > I cannot help but be reminded of a catch-phrase that I saw somewhere, > not too long ago: > > "Democracy dies in darkness." > -- anon That's the Washington Post's tagline, not "anon". Whatever happened to the good old days of Jim Fleming? sigh. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
Re: Abuse Desks
On Wed, Apr 29, 2020 at 03:41:06PM +, Mel Beckman wrote: > Joe, > > Is there any reason to have a root-enabled (or any) ssh server > exposed to the bare Internet? Any at all? Can you name one? > I can't. That's basically pilot error. Mel, I think you're looking at it the wrong way. Blaming a potential victim doesn't solve the problem. I often like to apply the metric of "if everybody did this, would it be a good thing?"

    If everybody...                                          Good thing?
    Didn't run SSHD on public Inet                           Yes
    Ran SSH scanners against the rest of the Inet            No
    Ran SSH scanners against their own gear and used
      it to shut down unnecessary SSH                        Yes

The problem is that you're talking about the first case, but the actual problem is the second case. If this trash is allowed to continue, there is a point where your server will just get swamped by a growing number of SSH probes. Also, exposing SSH to the Internet is, for better or for worse, the way many cloud services enable access to their cloud VM's/instances/droplets/ whatever. And, finally, yes, there are reasons to expose SSH servers to the Internet. A well-defended SSH server can do things such as allow other parties access to your server; I run a number of bastion SSH servers for various purposes, and you do not need to do so in an obvious manner. That doesn't mean I'm inviting unauthorized parties to try to connect to them. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
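[Editorial sketch: a "well-defended" bastion of the sort described above usually boils down to a handful of stock OpenSSH sshd_config directives. The directive names are real OpenSSH options; the account name is illustrative only.]

    # Keys only -- removes the password-guessing surface the scanners want
    PasswordAuthentication no
    ChallengeResponseAuthentication no
    PermitRootLogin no
    # Only the accounts that are supposed to be there
    AllowUsers jumpuser
    # Shed half-open probe connections quickly
    LoginGraceTime 20
    MaxAuthTries 3

(Newer OpenSSH releases spell the second directive KbdInteractiveAuthentication.)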
Re: Abuse Desks
On Wed, Apr 29, 2020 at 10:12:29AM -0500, Chris Adams wrote: > Once upon a time, Mukund Sivaraman said: > > If an abuse report is incorrect, then it is fair to complain. > > The thing is: are 3 failed SSH logins from an IP legitimately "abuse"? > > I've typoed IP/FQDN before and gotten an SSH response, and taken several > tries before I realized my error. Did I actually "abuse" someone's > server? I didn't get in, and it's hard to say that the server resources > I used with a few failed tries were anything more than negligible. > > I've had users tripped up by fail2ban because they were trying to access > a server they don't use often and took several tries to get the password > right or had the wrong SSH key. Should that have triggered an abuse > email? So your theory is that it is necessary for there to be a threshold of abuse? Is there any reason to expect that a random server is going to be able to figure out that a large pool of a million compromised IoT devices on a million different IP addresses is slowly probing their server for the root password and that a SPECIFIC probe is a member of this set? The way this stuff is trending today, you don't have a single host that is banging on another single host for hours or days at a password per second, which I hope we would agree would be well beyond any reasonable threshold to consider abuse. On the flip side, is it so much to ask that an abuse desk maybe take a look at both the ingress and egress packet stream of their customer, to see if there seems to be something untoward happening? And which one of these is a less damaging strategy? I know we're in the minority here, but policy over here at SOL hasn't changed much in the last quarter century. If you are getting unwanted and unsolicited traffic from us, and you contact abuse@, we're willing to make it stop. If it didn't originate here (forged, etc) then there isn't much to be done -- the community has been trying to encourage BCP38 for years. 
It's probably jumping the gun a bit for a single connection attempt to result in an abuse@ message, but then again when I look at the stream of trash addressed at SOL's IP space, maybe not. Some of it is clearly trying to scan from large botnets. There's also a lot of room for computers to be doing the hard work of detecting and reporting, and helping to analyze, while letting a human look at what's actually transpired and see if it feels problematic. However, the general solution that seems to have been adopted by the majority of the industry is to hire Dave Null for abuse@ ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
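[Editorial sketch of the "let computers do the detecting and aggregating, let a human judge" idea above. The log pattern and threshold are purely illustrative, not any particular abuse desk's tooling.]

```python
import re
from collections import Counter

# Hypothetical auth-log line shape; real sshd log formats vary by platform.
FAILED = re.compile(r"Failed password for \S+ from (\d+\.\d+\.\d+\.\d+)")

def candidates(log_lines, threshold=20):
    """Count failed SSH logins per source IP and flag only sources over a
    threshold for human review -- a user fat-fingering a password three
    times should never generate an abuse@ report on its own."""
    counts = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            counts[m.group(1)] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}
```

A human then looks at the flagged sources and decides whether what transpired actually feels problematic, rather than the script firing off mail on every typo.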
Re: free collaborative tools for low BW and losy connections
On Tue, Mar 31, 2020 at 01:46:09PM +0100, Nick Hilliard wrote: > Joe Greco wrote on 29/03/2020 23:14: > >>>>Flood often works fine until you attempt to scale it. Then it breaks, > >>>>just like Björn admitted. Flooding is inherently problematic at scale. > >>> > >>>For... what, exactly? General Usenet? > >> > >>yes, this is what we're talking about. It couldn't scale to general > >>usenet levels. > > > >The scale issue wasn't flooding, it was bandwidth and storage. > > the bandwidth and storage problems happened because of flooding. Short > of cutting off content, there's no way to restrict bandwidth usage, but > cutting off content restricts the functionality of the ecosystem. You > can work around this using remote readers and manually distributing, > but there's still a fundamental scaling issue going on here, namely that > the model of flooding all posts in all groups to all nodes has terrible > scaling design characteristics. It's terrible because it requires all > core nodes to linearly scale their individual resourcing requirements > according to the overall load of the entire system. You can manually > configure load splitting to work around some of these limitations, but > it's not possible to ignore the design problems here. There's a strange disconnect here. The concept behind Usenet is to have a distributed messaging platform. It isn't clear how this would work without ... well, distribution. The choice is between flood fill and perhaps something a little smarter, for which options were proposed and designed and never really caught on. Without the distribution mechanism (flooding), you don't have Usenet, you have something else entirely. > [...] > >The Usenet "backbone" with binaries isn't going to be viable without a > >real large capex investment and significant ongoing opex. This isn't a > >failure in the technology. > > We may need to agree to disagree on this then. 
Reasonable engineering > entails being able to build workable solutions within a feasible budget. > If you can't do this, then there's a problem with the technology at > the design level. Kinda like how there's a problem with the technology of the Internet because if I wanna be a massive network or a tier 1 or whatever, I gotta have a massive investment in routers and 100G circuits and all that? Why can't we just build an Internet out of 10 megabit ethernet and T1's? Isn't this just another example of your "problem with the technology at the design level?" See, here's the thing. Twenty six years ago, one of the local NSP's here spent some modest thousands of dollars on a few routers, a switch, and a circuit down to Chicago and set up shop on a folding table (really). This was not an unreasonable outlay of cash to get bootstrapped back in those days. However, within just a few years, the amount of cash that you'd need to invest to get started as an NSP had exploded dramatically. Usage grows. I used to run Usenet on a 24MB Sun 3/60 with a pile of disks and Telebits. Now I'm blowing gigabits through massive machines. This isn't a poorly designed technology. It's scaled well past what anyone would have expected. > >Usenet is a great technology for doing collaboration on low bandwidth and > >lossy connections. > > For small, constrained quantities of traffic, it works fine. It seems like that was the point of this thread... ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
Re: free collaborative tools for low BW and losy connections
On Mon, Mar 30, 2020 at 12:18:37PM -0600, Keith Medcalf wrote: > >The thing that mailing lists lack is a central directory of their > >existence. The discovery problem is a pretty big one. > > Where is this to be found for webforums? I have never seen one. Or do > you think Google is such a master index? Can you please pose your > Google query that you think results in a comprehensive index of *all* > webforums? > > Or is your comment nothing more that you noticing that NEITHER e-mail > lists NOR webforums have a master index, which is a rather useless > observation that would indicate that webforums have zero advantage > over mailing lists in this regard, so what is the point of the > whataboutism? In the context of the discussion, I expect the point is that Usenet had such functionality. This is a significant feature advantage. As are a bunch of other things pointed out earlier today. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
Re: free collaborative tools for low BW and losy connections
On Sun, Mar 29, 2020 at 04:18:51PM -0700, Michael Thomas wrote: > > On 3/29/20 1:46 PM, Joe Greco wrote: > >On Sun, Mar 29, 2020 at 07:46:28PM +0100, Nick Hilliard wrote: > >>Joe Greco wrote on 29/03/2020 15:56: > >> > >>The concept of flooding isn't problematic by itself. > >>Flood often works fine until you attempt to scale it. Then it breaks, > >>just like Björn admitted. Flooding is inherently problematic at scale. > >For... what, exactly? General Usenet? Perhaps, but mainly because you > >do not have a mutual agreement on traffic levels and a bunch of other > >factors. Flooding works just fine within private hierarchies, and since > >I thought this was a discussion of "free collaborative tools" rather than > >"random newbie trying to masochistically keep up with a full backbone > >Usenet feed", it definitely should work fine for a private hierarchy and > >collaborative use. > > AFAIK, Usenet didn't die because it wasn't scalable. It died because > people figured out how to make it a business model. Not at all. I can see why you say that, but it isn't the reality, any more than commercial uses killed the Internet when it was opened up to people who made it a business model. The introduction of the DMCA ratcheted up the potential for enforcement and penalties against end users doing Napster, bittorrent, illicit web and FTP, or whatever your other favorite form of digital piracy might happen to have been back in the '90's. The CDA 230 protection for providers allowed Usenet to be served without significant concern. So pirates had a safe model where they could distribute pirated traffic, posting it somewhere "safe" and then it would be available everywhere, and it was HARD to get it taken down. This, along with legitimate binaries traffic increases, caused an explosion in traffic, which made Usenet increasingly impractical for ISP's to self-host. The problem is that it scaled far too well as a binary traffic distribution system. 
As this happened, most ISP's outsourced to Usenet service providers, and end users often picked up "premium" Usenet services from such providers directly as well. I do not see the people who made it a business model as responsible for the state of affairs. Had commercial USP's not stepped up, Usenet probably would have died off in the late 90's-early 2000's as ISP's dropped support. They (and I have to include myself as I run a Usenet company) are arguably the ones who kept it going. Demand was there. The users who are dumping binaries on Usenet are helping to kill it. Actual text traffic has been slowly dying off for years as webforums have matured and become a better choice of technology for nontechnical end users on high speed Internet connections. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
Re: free collaborative tools for low BW and losy connections
On Sun, Mar 29, 2020 at 07:46:28PM +0100, Nick Hilliard wrote: > Joe Greco wrote on 29/03/2020 15:56: > >On Sun, Mar 29, 2020 at 03:01:04PM +0100, Nick Hilliard wrote: > >>because it uses flooding and can't guarantee reliable message > >>distribution, particularly at higher traffic levels. > > > >That's so hideously wrong. It's like claiming web forums don't > >work because IP packet delivery isn't reliable. > > Really, it's nothing like that. Sure it is. At a certain point you can get web forums to stop working by DDoS. You can't guarantee reliable interaction with a web site if that happens. > >Usenet message delivery at higher levels works just fine, except that > >on the public backbone, it is generally implemented as "best effort" > >rather than a concerted effort to deliver reliably. > > If you can explain the bit of the protocol that guarantees that all > nodes have received all postings, then let's discuss it. There isn't, just like there isn't a bit of the protocol that guarantees that an IP packet is received by its intended recipient. No magic. It's perfectly possible to make sure that you are not backlogging to a peer and to contact them to remediate if there is a problem. When done at scale, this does actually work. And unlike IP packet delivery, news will happily backlog and recover from a server being down or whatever. > >The concept of flooding isn't problematic by itself. > > Flood often works fine until you attempt to scale it. Then it breaks, > just like Björn admitted. Flooding is inherently problematic at scale. For... what, exactly? General Usenet? Perhaps, but mainly because you do not have a mutual agreement on traffic levels and a bunch of other factors. 
Flooding works just fine within private hierarchies, and since I thought this was a discussion of "free collaborative tools" rather than "random newbie trying to masochistically keep up with a full backbone Usenet feed", it definitely should work fine for a private hierarchy and collaborative use. > > If you wanted to > >implement a collaborative system, you could easily run a private > >hierarchy and run a separate feed for it, which you could then monitor > >for backlogs or issues. You do not need to dump your local traffic on > >the public Usenet. This can happily coexist alongside public traffic > >on your server. It is easy to make it 100% reliable if that is a goal. > > For sure, you can operate mostly reliable self-contained systems with > limited distribution. We're all in agreement about this. Okay, good. > >>The fact that it ended up having to implement TAKETHIS is only one > >>indication of what a truly awful protocol it is. > > > >No, the fact that it ended up having to implement TAKETHIS is a nod to > >the problem of RTT. > > TAKETHIS was necessary to keep things running because of the dual > problem of RTT and lack of pipelining. Taken together, these two > problems made it impossible to optimise incoming feeds, because of ... > well, flooding, which meant that even if you attempted an IHAVE, by the > time you delivered the article, some other feed might already have > delivered it. TAKETHIS managed to sweep these problems under the > carpet, but it's a horrible, awful protocol hack. It's basically cheap pipelining. If you want to call pipelining in general a horrible, awful protocol hack, then that's probably got some validity. > >It did and has. The large scale binaries sites are still doing a > >great job of propagating binaries with very close to 100% reliability. > > which is mostly because there are so few large binary sites these days, > i.e. limited distribution model. 
No, there are so few large binary sites these days because of consolidation and buyouts. > >I was there. > > So was I, and probably so were lots of other people on nanog-l. We all > played our part trying to keep the thing hanging together. I'd say most of the folks here were out of this fifteen to twenty years ago, well before the explosion of binaries in the early 2000's. > >I'm the maintainer of Diablo. It's fair to say I had a > >large influence on this issue as it was Diablo's distributed backend > >capability that really instigated retention competition, and a number > >of optimizations that I made helped make it practical. > > Diablo was great - I used it for years after INN-related head-bleeding. > Afterwards, Typhoon improved things even more. > > >The problem for smaller sites is simply the immense traffic volume. > >If you want to carry binaries, you need double digits Gbps. If you > >filter them out, the load is actually quite trivial. > > Right, so
Re: free collaborative tools for low BW and losy connections
On Sun, Mar 29, 2020 at 03:01:04PM +0100, Nick Hilliard wrote: > Björn Mork wrote on 29/03/2020 13:44: > >How is nntp non-scalable? > > because it uses flooding and can't guarantee reliable message > distribution, particularly at higher traffic levels. That's so hideously wrong. It's like claiming web forums don't work because IP packet delivery isn't reliable. Usenet message delivery at higher levels works just fine, except that on the public backbone, it is generally implemented as "best effort" rather than a concerted effort to deliver reliably. The concept of flooding isn't problematic by itself. If you wanted to implement a collaborative system, you could easily run a private hierarchy and run a separate feed for it, which you could then monitor for backlogs or issues. You do not need to dump your local traffic on the public Usenet. This can happily coexist alongside public traffic on your server. It is easy to make it 100% reliable if that is a goal. > The fact that it ended up having to implement TAKETHIS is only one > indication of what a truly awful protocol it is. No, the fact that it ended up having to implement TAKETHIS is a nod to the problem of RTT. > Once again in simpler terms: > > > How is nntp non-scalable? > [...] > > Binaries broke USENET. That has little to do with nntp. > > If it had been scalable, it could have scaled to handling the binary groups. It did and has. The large scale binaries sites are still doing a great job of propagating binaries with very close to 100% reliability. I was there. I'm the maintainer of Diablo. It's fair to say I had a large influence on this issue as it was Diablo's distributed backend capability that really instigated retention competition, and a number of optimizations that I made helped make it practical. The problem for smaller sites is simply the immense traffic volume. If you want to carry binaries, you need double digits Gbps. If you filter them out, the load is actually quite trivial. ... 
JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov
Re: free collaborative tools for low BW and losy connections
On Sun, Mar 29, 2020 at 10:31:50PM +0100, Nick Hilliard wrote: > Joe Greco wrote on 29/03/2020 21:46: > >On Sun, Mar 29, 2020 at 07:46:28PM +0100, Nick Hilliard wrote: > >>>That's so hideously wrong. It's like claiming web forums don't > >>>work because IP packet delivery isn't reliable. > >> > >>Really, it's nothing like that. > > > >Sure it is. At a certain point you can get web forums to stop working > >by DDoS. You can't guarantee reliable interaction with a web site if > >that happens. > > this is failure caused by external agency, not failure caused by > inherent protocol limitations. Yet we're discussing "low BW and losy(sic) connections". Which would be failure of IP to be magically available with zero packet loss and at high speeds. There are lots of people for whom low speed DSL, dialup, WISP, 4G, GPRS, satellite, or actually nothing at all are available as the Internet options. > >>>Usenet message delivery at higher levels works just fine, except that > >>>on the public backbone, it is generally implemented as "best effort" > >>>rather than a concerted effort to deliver reliably. > >> > >>If you can explain the bit of the protocol that guarantees that all > >>nodes have received all postings, then let's discuss it. > > > >There isn't, just like there isn't a bit of the protocol that guarantees > >that an IP packet is received by its intended recipient. No magic. > > tcp vs udp. IP vs ... what exactly? > >>Flood often works fine until you attempt to scale it. Then it breaks, > >>just like Björn admitted. Flooding is inherently problematic at scale. > > > >For... what, exactly? General Usenet? > > yes, this is what we're talking about. It couldn't scale to general > usenet levels. The scale issue wasn't flooding, it was bandwidth and storage. It's actually not problematic to do history lookups (the key mechanism in what you're calling "flooding") because even at a hundred thousand per second, that's well within the speed of CPU and RAM. 
Oh, well, yes, if you're trying to do it on HDD, that won't work anymore, and quite possibly SSD will reach limits. But that's a design issue, not a scale problem. Most of Usenet's so-called "scale" problems had to do with disk I/O and network speeds, not flood fill. > >Perhaps, but mainly because you > >do not have a mutual agreement on traffic levels and a bunch of other > >factors. Flooding works just fine within private hierarchies and since > >I thought this was a discussion of "free collaborative tools" rather than > >"random newbie trying to masochistically keep up with a full backbone > >Usenet feed", it definitely should work fine for a private hierarchy and > >collaborative use. > > Then we're in violent agreement on this point. Great! Okay, fine, but it's kinda the same thing as "last week some noob got a 1990's era book on setting up a webhost, bought a T1, and was flummoxed at why his service sucked." The Usenet "backbone" with binaries isn't going to be viable without a real large capex investment and significant ongoing opex. This isn't a failure in the technology. > >>delivered it. TAKETHIS managed to sweep these problems under the > >>carpet, but it's a horrible, awful protocol hack. > > > >It's basically cheap pipelining. > > no, TAKETHIS is unrestrained flooding, not cheap pipelining. It is definitely not unrestrained. Sorry, been there inside the code. There's a limited window out of necessity, because you get interesting behaviours if a peer is held off too long. > >If you want to call pipelining in > >general a horrible, awful protocol hack, then that's probably got > >some validity. > > you could characterise pipelining as a necessary reaction to the fact > that the speed of light is so damned slow. Sure. > >>which is mostly because there are so few large binary sites these days, > >>i.e. limited distribution model. > > > >No, there are so few large binary sites these days because of consolidation > >and buyouts. 
> > 20 years ago, lots of places hosted binaries. They stopped because it > was pointless and wasteful, not because of consolidation. I thought they stopped it because some of us offered them a better model that reduced their expenses and eliminated the need to have someone who was an expert in an esoteric '80's era service, while also investing in all the capex/opex. Lots of companies sold wholesale Usenet, usually just by offering access to a remote service. As the amount of Usenet content exploded, the increasing cost of storage for
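[Editorial sketch of the "history lookup" mechanism discussed earlier in this post: the check at the heart of flood fill reduces to a message-ID membership test, which is why it stays cheap even at six-figure rates when held in RAM. A toy in-memory version, assuming nothing about any real server's on-disk history format:]

```python
class History:
    """Toy article history: remember every message-ID seen, so duplicate
    offers arriving via other flooding paths can be rejected cheaply."""

    def __init__(self):
        self._seen = set()

    def offer(self, message_id):
        # True = we want the article (never seen); False = duplicate.
        if message_id in self._seen:
            return False
        self._seen.add(message_id)
        return True
```

Production servers persist this to a history file and expire old entries, but the hot path is the same O(1) lookup.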
Re: How to wish you hadn't forced ipv6 adoption (was "How to force
> Greetings, > > Excuse my probable ignorance of such matters, but would it not then be > preferred to create a whitelist of proven Email servers/ip's , and just > drop the rest? Granted, one would have to create a process to vet anyone > creating a new email server, but would that not be easier then trying to > create and maintain new blacklists? That hasn't worked spectacularly well even under IPv4. There's no reason to think it'd magically work better under v6. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: ARIN Region IPv4 Free Pool Reaches Zero
> > And this may trigger a refresh on routers, as people with old or refurbed > > equipment find they need to change. The whole reason for the inertia > > against going to IPv6 is "it ain't broke, so I not gonna 'fix' it." > > Yea, well, it would be nice if upgrading existing home routers > remained legal, so we could, indeed add ipv6 capability and more. > > http://prpl.works/2015/09/21/yes-the-fcc-might-ban-your-operating-system/ That's not guaranteed to happen, and, I'd note, it has little-to-nothing to do with existing home 'routers' but rather wifi gear. While many home users do have a combined NAT gateway and wireless access point, the vast majority of them are not running custom firmware and would just buy a new device anyways. Part of the real problem here is that manufacturers have generally treated devices like home 'routers' as abandonware. Usually there is just barely enough RAM and flash on these things to hold whatever firmware the company was intending to ship, and sometimes they would not even see any firmware updates ever made available as the software dev team would move on to the next device. This is the same thing we here on this list should all be pretty scared of as the IoT stormfront comes this way. You're unlikely to be able to add code to handle IPv6 to a Belkin F5D6231, which IIRC used some unusual SoC to provide its modest services on something like 1MB of flash and 2MB RAM (it's been a decade so the particulars may be wrong). Only in the relatively rare cases where a manufacturer left a lot of extra room (WRT54GL, etc) are you likely to have sufficient extra space to do updates to gear. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: ARIN Region IPv4 Free Pool Reaches Zero
> > The question really at hand: what happens when you need to host a new > > pile of servers, need/can-justify a /24, and your hosting provider > > quotes you $2560/month just for the IP space (at $10/IP)? > > You probably laugh and go to some other provider or BYOA from a broker. That works until all the hosting providers are charging similar rates, and even a decade ago I saw providers who would charge you for bringing your own space. > > > > That'd be an incentive to look seriously at IPv6 I *think*. > > I hope so, but most likely people will continue to do the lazy thing as long as they can get away with it. > > > Switching hosting providers will probably become a popular game for > > the early depletion era, as providers attempt to rob each other of > > customers. That's probably a losing game in the long run. > > Let's hope (that it's a losing game). Just because something is a losing game doesn't mean people won't play that game. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: Ear protection
> > On Sep 23, 2015, at 7:33 AM, Joe Greco <jgr...@ns.sol.net> wrote: > > > > Passive cooling typically translates to lower performance but also can > > be more expensive. > > $DAYJOB uses an immersion cooling system so it's higher performance and much quieter. That's not typical passive cooling. And it's going to be much more expensive and complicated to implement than "air based" passive cooling, or active air cooling, etc. As an example, many mobile devices are underclocked so that their components dissipate less heat, and may actually vary the clock based in part on current temperature. This allows the device to more easily dissipate heat without active cooling measures, but you get the lowered performance of a slower part. It's totally possible to build quieter gear - we do that kind of work here, as some of you know - but it is a matter of tradeoffs. I can show you a Xeon E3 system that consumes a peak of 100 watts. SSDs for storage, fanless oversized PSU to reduce heat, massive CPU heatsink, and 120mm fans in a 4U chassis. Very quiet running, has a higher tolerance to heat as well. But most people don't want their hypervisors to take 4U of space for a mere 32GB of RAM and 12GHz of CPU. They'd rather stick 300 watts of E5's into a 1U and let it scream away. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: ARIN Region IPv4 Free Pool Reaches Zero
> According to http://business.comcast.com/internet/business-internet/static-ip > Comcast charges $19.95 per month for one static IPv4 address. High dollar amounts for a single static IPv4 address are nothing new, and are IMHO a side effect of monopoly/duopoly last mile providers being able to shake down end users because the end user's financially viable options are typically just "pay up or don't get a static." The question really at hand: what happens when you need to host a new pile of servers, need/can-justify a /24, and your hosting provider quotes you $2560/month just for the IP space (at $10/IP)? That'd be an incentive to look seriously at IPv6 I *think*. Switching hosting providers will probably become a popular game for the early depletion era, as providers attempt to rob each other of customers. That's probably a losing game in the long run. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
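For reference, the arithmetic behind that hypothetical quote (the $10/IP/month rate is purely an illustrative figure from the post, not any provider's actual price list):

```python
# A /24 contains 2**(32-24) = 256 addresses; at an assumed
# $10/IP/month surcharge, the monthly bill for the space alone:
addresses = 2 ** (32 - 24)     # 256 addresses in a /24
rate_per_ip = 10               # USD per IP per month (assumption from the post)
monthly_cost = addresses * rate_per_ip
print(monthly_cost)            # 2560
```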
Re: Ear protection
> Why not just build a Datacenter that is quiet? Because the cost differential to do so is a lot greater than the $10 to get some hearing protection? Passive cooling typically translates to lower performance but also can be more expensive. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: Ear protection
> Maybe I've always listened to my music to loud and spend the bulk of time > via ssh, but I've never felt a need for hearing protection in a DC, is this > generally an issue for people? Depends on how long and how noisy. As I've gotten older, I find loud noise in general is less tolerable, so I've taken to always keeping a pair of earplugs with me. It makes being around loud music, etc., much more enjoyable. Long term exposure to noise is widely considered to be a hazard, but walking into an average data center for an hour once a month is probably not that risky. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: Ear protection
> On Wed, Sep 23, 2015 at 2:34 AM, Nick Hilliard <n...@foobar.org> wrote: > > What are people using for ear protection for datacenters these days? > > Telecommuting, in my case. > > had to say it! :0 I carry these around in my pocket all the time: http://www.amazon.com/gp/product/B000W2CPCC Not just for datacenter use. I find myself pulling them out every few months when I happen to be somewhere uncomfortably noisy. Not the same (or as much dB reduction) as some of the other foam ones, but they seem nearly indestructible and washable as well. I also have some of these around, because they fold up really nicely. http://www.amazon.com/gp/product/B000U439KO About the same dB reduction (21) as the above. You can of course use both of these products together for a much higher degree of noise reduction but I rarely find myself needing that. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: Exploits start against flaw that could hamstring huge swaths of
On Tue, Aug 04, 2015 at 10:03:33AM -0400, Jay Ashworth j...@baylink.com wrote a message of 6 lines which said: Everyone got BIND updated? For instance by replacing it with NSD or Unbound? Or doing something better like not just replacing one evil with another, and instead moving to a heterogeneous environment where possible. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: RES: Exploits start against flaw that could hamstring huge swaths of
So, you guys recommend replace Bind for another option ? No. Replacing one occasionally faulty product with another occasionally faulty product is foolish. There's no particular reason to think that another product will be impervious to code bugs. What I was suggesting was to use several different devices, much as some networks prefer to buy some Cisco gear and some Juniper gear and make them redundant, or as a well-built ZFS storage array consists of drives from different manufacturers. Heterogeneous environments tend to be more resilient because they are less likely to all suffer the same defect at once. Problems still result in some pain and trouble, but it usually doesn't result in a service outage. This doesn't seem like a horribly catastrophic bug in any case. Anyone who is reliant on a critical bit like a DNS server probably has it set up to automatically restart if it doesn't exit cleanly. If you don't, you should! So if it matters to you, I suggest that you instead use a combination of different products, and you'll be more resilient. If you have two recursers for your customers, one can be BIND and one can be Unbound. And when some critical vuln comes along and knocks out Unbound, you'll still be resolving names. Ditto BIND. You're not likely to see both happen at the same time. However, at least here, we actually *use* TSIG updates, and other functionality that'd be hard to replace (BIND9 is pretty much THE only option for some functionality). ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
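The "automatically restart if it doesn't exit cleanly" advice maps directly onto a process supervisor. A minimal sketch as a systemd drop-in override (the unit name and path are assumptions; adjust to however your distribution packages BIND or Unbound):

```
# /etc/systemd/system/unbound.service.d/restart.conf  (hypothetical path)
[Service]
Restart=on-failure   # relaunch only after an unclean exit, per the advice above
RestartSec=2         # brief back-off so a crash loop doesn't spin at full speed
```

Pair that with two recursers running different codebases (one BIND, one Unbound, both listed in clients' resolver configuration) and a single-implementation vulnerability costs you capacity rather than service.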
Re: RES: Exploits start against flaw that could hamstring huge swaths
With the (large) caveat that heterogenous networks are more subject to human error in many cases. Indeed. Everything comes with tradeoffs. More intimate familiarity with the product and a uniformity of deployment strategy has made it more practical here to stick with BIND; an update is a simple matter of a tarball and running a script that manages the dirty work. However, the original point was that switching from BIND to Unbound or other options is silly, because you're just trading one codebase for another, and they all have bugs. However, collectively, two different products cooperatively providing a service are likely to have a higher uptime in a well-designed environment. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: Windows 10 Release
I was just thinking about my remaining Win 7 box _after_ I hit send and I believe you're correct (I have one still to upgrade). Which means users upgrading from 7 to 10 will need to create an ID, but users of 8 and 8.1 will use the one they already have. This is incorrect. While the Win 8{,.1} install process makes it appear as though you need a Microsoft ID, you can actually go into the create a new Microsoft ID option and there's a way to proceed without creating a Microsoft ID, which leaves you with all local accounts. It does appear to be designed to make you THINK you need a Microsoft account however. I have a freshly installed Windows 8.1 box here (no Microsoft ID) that I then upgraded to Windows 10, and it also does not have any Microsoft ID attached to it. Activation shows as Windows 10 Home and Windows is activated. There's a beggy-screen on the user account page saying something like Windows is better when your settings and files automatically sync. Switch to a Microsoft Account now! So, again, totally optional, but admittedly the path of least resistance has users creating a Microsoft Account or linking to their existing one. You have to trawl around a little to get the better (IMHO) behaviour. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: Windows 10 Release
Justin, That's true, but it takes effort for people to either set up a local account or change to one, and very few consumers will do that or have. Wow, then, problem solved, because it's at least twice as hard to get your Microsoft Account set up, configured, and verified. The sticky point is that very few consumers will KNOW that they can avoid the Microsoft account, and most won't take the time to explore the various options and possibilities. This isn't an effort thing. Setting up a local account is fairly effortless. It's a matter of the option being hidden away, because it is in Microsoft's interest to get everyone using the Windows cloud magic. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: Windows 10 Release
You can download an ISO and burn it to install... Guessing if your upgrading multiple machines, that would be the way to go... You don't even need to burn it to install. Just mount the ISO and run setup.exe ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: Windows 10 Release
http://www.metzdowd.com/pipermail/cryptography/2015-July/026136.html Which appears to be about 25% crap, 30% FUD, and the remainder consists of concerns of varying levels of validity. For privacy-minded individuals who are not interested in sharing lots of stuff with Microsoft, there are install-time options to shut most of that off. Don't use Express Settings. Select Customize settings and then turn most of the switches on the next two pages off. The real issue is that lots of people will select the express settings and then might have to do more work to undo the decisions made at this step on their behalf. I do think it is rotten that the defaults for the options are all on. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: Windows 10 Release
http://www.metzdowd.com/pipermail/cryptography/2015-July/026136.html Which appears to be about 25% crap, 30% FUD, and the remainder consists of concerns of varying levels of validity. really? read the legal fine print https://edri.org/microsofts-new-small-print-how-your-personal-data-abused/ Again, a lot of crap and FUD mixed in there. It's legalese designed to cover their arses, because they default the options to On and assume most people will take the default. You CAN shut off the sharing. The legalese doesn't mean that the information is shared despite the fact that you configured your box not to share it. The real problem is that so many people have outsourced their problems to ${THE_CLOUD} that those of us who run our own services are now in the tiny minority. I'm disappointed (but hardly shocked) to discover that Microsoft doesn't support arbitrary CalDAV or CardDAV services with their built-in apps, for example. I realize I'm probably in another minority here, but as a Windows-hater, I nevertheless find that there's a bunch of stuff I need to do that only really works on Windows. What I really want is an up-to-date version of Windows 98 or maybe XP. I don't need all the Microsoft-added cruft for other things. I don't use their e-mail, or their calendar, or their contacts, or their web browser (usually). Or pretty much any Metro app. I understand why a lot of that crap is there, especially as they now need to try to make Win10 workable and usable on multiple device types, so I don't mind that they added it, and I understand that using any of that crap could mean that ${MICROSOFT_CLOUD} gets involved. Doesn't mean you have to use it! Windows 10 turns out to be fairly useful once you take a hatchet to it and bludgeon out all the stuff obviously intended for the average home user or the average phone user that is just supposed to magically work. The legal fine print for most software is atrocious these days. This isn't shocking, sadly. 
I can find egregious crap in lots of license agreements and privacy statements out there. You don't actually *HAVE* to use a Microsoft Account to sign into Windows 10. If you DO sign in using a Microsoft Account, you're going to be hooking your Windows box up to a bunch of cloud services that you might not want or need. For many people, this may actually be the right choice, because how else do you sync things between your desktop, your laptop, your tablet, and your Windows phone? That carries with it a lot of benefits for the average non-techie user, but is a privacy issue as well. If you do that and then don't use any of their apps, because maybe you browse the web with Chrome and you're heavily invested in Google or Yahoo for mail, contacts, and calendar, you're still not sharing the data ... with Microsoft. But that data's still out there in someone's cloud. I could be very cynical (and yet probably come frighteningly close to the mark) by noting that Microsoft is following in Google's footsteps in amassing a wealth of data about users by providing these add-on cloud services to users, and that the users are slowly becoming the product instead of being the customer, which makes it even more attractive to do the sorts of data collection and mining pioneered by some of those other companies. Yet none of this commits you. You don't have to share information. You don't have to use a Microsoft Account. You don't have to use any of their programs or apps that would share data. Screw them. So, again, I say, the quoted article contains about 25% crap, 30% FUD, and the remainder consists of concerns of varying levels of validity. This isn't the apocalypse. People have willingly been exchanging their privacy for free services on the Internet for many years. Those of us who prefer not to rely on those services are also able to navigate that maze. ... 
JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: SIP trunking providers
Why not set up a small Asterisk box in a local datacenter and only trunk out the non-local calls? And do what with the local calls, then? You're still left with the problem of getting calls to and from the PSTN. Not everyone wants to deal with the hassle of dealing with POTS or T1 gatewaying. In general, it isn't practical to do on a small scale any more, especially as one looks forward a few years to the inevitable dismantling of the legacy POTS network. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: Fwd: [ PRIVACY Forum ] Windows 10 will share your Wi-Fi key with
On 7/7/2015 5:39 PM, Joe Greco wrote: Unclear at best. The way it is implemented, the user has the potential to go either way. A network might not want the user to have the choice, clearly, but there is certainly a subset of users who will opt out of the feature and I cannot see how those would be in violation of any sane network usage policy. It's certainly a mess in any case. Now that windows mobile and desktop versions are converging, I doubt there is a way to really tell if a device is a PC or a phone or a tablet. Some network administrators banned mobile phones from wifi connections because of Google's password storage violating their security policy. Now administrators don't even get that knob. We could fix it in a couple of ways (or, they could fix it.. depending on who pushes around money and if anyone cares enough to bother): 1. Wifi sends password policy during handshaking. If you save passwords you aren't allowed to connect here (or, you aren't allowed to backup/share this password) but we will allow the user to connect. This can be transparent to the user and handled by the OS.* 2. The client device sends I am configured to backup/share passwords to the wifi. This allows the AP to either deny the user outright, or redirect them to a page explaining what is wrong or whatever. This might be accomplished via DHCP option if we want to keep it all in software. * The fact that we need an IEEE level fix for a security problem created by Google and then propagated by Microsoft is just pathetic. These are two companies that should know better than to do this. Yes, I agree. It makes me wonder how much of this is new-feature-ism promoted by a management that is looking at the(ir) big picture, then having people without sufficient technical depth do that new feature. Or are they really drinking their own koolaid and thinking that everything is in the cloud today and so there aren't local security concerns? I best go before I delve into the truly cynical. ... 
JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
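The "DHCP option" idea floated above could be sketched as a simple TLV on the wire. Everything here is hypothetical: no such option is standardized, and the option code 224 is just picked from the site-specific range (224-254):

```python
import struct

# Hypothetical "password-sharing policy" DHCP option in TLV form:
# code (1 byte, from the site-specific range), length (1 byte), then a
# one-byte flag saying whether this client backs up / shares Wi-Fi keys.
OPT_SHARING_POLICY = 224   # 224-254 reserved for site-specific use
SHARES_PASSWORDS = 1

def build_sharing_option(shares: bool) -> bytes:
    value = bytes([SHARES_PASSWORDS if shares else 0])
    return struct.pack("!BB", OPT_SHARING_POLICY, len(value)) + value

print(build_sharing_option(True).hex())   # e00101
```

An AP or DHCP server seeing the flag set could then deny the lease or redirect the client to an explanatory page, as suggested in option 2 above.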
Re: Fwd: [ PRIVACY Forum ] Windows 10 will share your Wi-Fi key with
On Mon, 06 Jul 2015 21:12:55 -0500, Joe Greco said: http://winaero.com/blog/windows-10-build-10074-features-a-reworked-setup-experience/ Anyways, if you look on the first page of Customize settings, yes there's an option for Automatically connect to networks shared by my contacts and it CAN be turned off, but it defaults to on. There's a subtle but important difference between that and Allow this device to send sharing info to contacts. Is there? The problem is that the text that's presented there is so vague as to what it means that it is completely worthless to try to infer anything from it. Without going and researching it further, which may or may not be feasible for some poor soul deploying the damn thing since it is quite possible it is their only computer, it is unclear whether it might mean any one of a dozen or more things. I could easily believe that setting this option could automagically sign you up for SSID password sharing with your contacts. Especially the first time I saw it, I had no idea what it meant other than that it was likely something that was probably in the bad to evil range, because, well, that's the point, it doesn't actually SAY what it is you're committing to. The stuff later on (which is referenced in The Register article that was initially quoted) may help make it a little clearer, but again, there's a lot of bad, and you get to answer that first question without knowing what the context is. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: Fwd: [ PRIVACY Forum ] Windows 10 will share your Wi-Fi key with
On 06/07/15 19:12, Joe Greco wrote: Terrible idea. These are the kind of features that should be opt in, and Microsoft could have done that instead. It *is* an option. Opt-in and opt-out are two models of having an option. Also I meant being opt-out for the network administrator regarding the availability of the _optout suffix. Instead it should have been opt-in by the use of some _share suffix. No, it should have been opt-in by the use of some standards-track mechanism. Substituting less-screwed for more-screwed is still just screwed at the end of the day. Anyways, if you look on the first page of Customize settings, yes there's an option for Automatically connect to networks shared by my contacts and it CAN be turned off, but it defaults to on. That's an option for the users, not for the network administrator. That's unclear. It is likely settable as policy at some level. I'm not going to defend Microsoft since I think it is total crap, but I am not going to be totally unfair about it. As a network administrator (at home, at work, whatever) I have some trust for my users but not necessarily for the friends of my users. The decision should be up to the network administrator, not the user. The way it's implemented, user inaction makes him/her violate network usage policy. Unclear at best. The way it is implemented, the user has the potential to go either way. A network might not want the user to have the choice, clearly, but there is certainly a subset of users who will opt out of the feature and I cannot see how those would be in violation of any sane network usage policy. It's certainly a mess in any case. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: Fwd: [ PRIVACY Forum ] Windows 10 will share your Wi-Fi key with
Sean Donelan s...@donelan.com writes: On Mon, 6 Jul 2015, Joe Greco wrote: Anyways, if you look on the first page of Customize settings, yes there's an option for Automatically connect to networks shared by my contacts and it CAN be turned off, but it defaults to on. Defaults matter. Every configuration parameter has a default setting, whether intentional or not. Well of course defaults matter. We work in an industry where the defaults supplied by most tech companies for the average user are quite depressing to me. People want easy and many don't bother to understand or (even worse) care about privacy. Just look at web advertising and tracking. As bad as that is on the general Internet, even I was a bit shocked to find yesterday while training NoScript on a new VM that a certain wireless carrier's customer portal was reaching out to maybe as many as twenty different ad and tracking networks, including Bing, Yahoo, and Google, in order for you to log in and pay your bill. http://www.sol.net/tmp/nanog/mytmobile-login.jpg This stuff is frickin' pervasive. The default is track the hell out of everyone and share everything you can. I remember first seeing the Windows 10 share networks to contacts and trying to imagine that it meant anything other than wifi access creds. That's part of the problem. They don't even tell you what the words are actually saying, or why it matters one way or another. For those on this list, that may not be a problem, but my 80 year old mom isn't going to have a clue. Bacon Zombie baconzom...@gmail.com writes: This is on by default in the beta like all the reporting in MS. Will probably be either a prompt in the RTM version. Sure. A prompt that defaults to on, on a screen that most people probably bypass, because the new thing is to make tech easy, and bogging them down with a bunch of questions that only computer geeks and privacy wonks and network gearheads care about (or even understand) is anti-user. ... 
JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: Fwd: [ PRIVACY Forum ] Windows 10 will share your Wi-Fi key with
Terrible idea. These are the kind of features that should be opt in, and Microsoft could have done that instead. It *is* an option. When you're setting up Windows 10, it asks you two screens of configuration questions, but most people will hit the Use express settings option and just blow past the choice. I don't know, most of the express settings seem to be craptacular to me, so I always go through all the defaults and usually find myself flipping many/most of them. But that's probably because I am not in search of automated Cortana and Bing magic page prediction goodness that automatically shares my name, location, and advertising ID with every random website that it possibly can (hyperbole?? maybe??) http://winaero.com/blog/windows-10-build-10074-features-a-reworked-setup-experience/ Anyways, if you look on the first page of Customize settings, yes there's an option for Automatically connect to networks shared by my contacts and it CAN be turned off, but it defaults to on. I didn't spend a lot of time trying to figure out exactly how that'd work. I don't really want my contacts or any other data being sent to Microsoft's servers. I have my own servers that I'm reasonably happy with. I have an uneasy feeling that if set I'd find it to be slurping a lot of data over to Microsoft's servers and I guess I would not be shocked to find that 50 of my best friends on NANOG are suddenly (and unexpectedly) populating WiFi passwords at me. I suppose I could be wrong, but it's amazing how many LinkedIn invites I get from people I've never heard of, who seem to only have a mailing list in common, etc. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: Low Cost 10G Router
How cheap is cheap and what performance numbers are you looking for? About as cheap as you can get: For about $3,000 you can build a Supermicro OEM system with an 8-core Xeon E5 V3 and 4-port 10G Intel SFP+ NIC with 8G of RAM running VyOS. The pro is that BGP convergence time will be good (better than a 7200 VXR), and number of tables likely won't be a concern since RAM is cheap. The con is that you're not doing things in hardware, so you'll have higher latency, and your PPS will be lower. What 8 core Xeon E5 v3 would that be? The 26xx's are hideously pricey, and for a router, you're probably better off with something like a Supermicro X10SRn fsvo n with a Xeon E5-1650v3. Board is typically around $300, 1650 is around $550, so total cost I'm guessing closer to $1500-$2000 that route. The edge you get there is the higher clock on the CPU. Only six cores and only 15M cache, but 3.5GHz. The E5-2643v3 is three times the cost for very similar performance specs. Costwise, E5 single socket is the way to go unless you *need* more. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
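To make the VyOS suggestion concrete, a minimal sketch of what the software side of such a box looks like (VyOS 1.1-era syntax; the AS numbers and addresses are documentation placeholders, not a working deployment):

```
set interfaces ethernet eth0 address '203.0.113.2/30'
set protocols bgp 64512 parameters router-id '203.0.113.2'
set protocols bgp 64512 neighbor 203.0.113.1 remote-as '64511'
set protocols bgp 64512 network '198.51.100.0/24'
```

The hardware debate above doesn't change this part at all; the CPU and NIC choice only determine how fast the box forwards once the session is up.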
Re: Low Cost 10G Router
Chat in my nerds irc channel about 10G routers paralleling this 14:21 b the Xeon D-1540 has 8 cores / 16 threads, 2GHz base clock with 2.6GHz turbo, and dual 10G nics on chip 14:21 b 45W TDP Right, but that's a pretty lame clock. 14:31 b supposedly an asrock board is coming that can be 10Gbase-T or SFP+ Also the only one so far I've seen able to support multiple PCIe. The Supermicro is mini-ITX. But the AsRock has some weird power arrangement too. 14:58 a supermicro are shipping some SFP+ 10G E5 boards 15:00 b but the xeon E5 doesn't have the on die 10G nic 15:07 a X9DRW-7TPF+ http://www.supermicro.com/products/motherboard/xeon/c600/x9drw-7tpf_.cfm Yes, but that's a big wattsy thing. The X10SRW comes in some 1U variants that can handle two PCIe so it'd be an interesting router platform that does not eat lots of space. Also: 1.4Mpps per 10G link doesnt seem like the minimum packetsize one wants for handling DOS attacks, but I might be bad at math. Always an issue. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
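For reference, the line-rate math behind that last concern: at minimum-size 64-byte Ethernet frames, each frame occupies 84 bytes on the wire once preamble/SFD and the inter-frame gap are counted, so a 10G link can deliver about 14.88 Mpps, roughly ten times the 1.4 Mpps figure quoted:

```python
# Worst-case packet rate on 10GbE: 64-byte minimum frame plus 8 bytes
# of preamble/SFD and 12 bytes of inter-frame gap = 84 bytes per frame.
line_rate_bps = 10_000_000_000
wire_bytes = 64 + 8 + 12
pps = line_rate_bps / (wire_bytes * 8)
print(round(pps))   # 14880952
```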
Re: Optic Vendor Coding Question
Do Dell 8132s have SFP+ vendor code issues? As in, do they not-work with non-Dell optics? They don't work with Intel SR optics (whatever it is that comes with the X520-SR's). They do seem to work with generic Finisar 1G optics. Since the Dell-branded FTLX8571D3BCL SR's seem to go for only $20-$30 on eBay I haven't been highly motivated to identify other 10G modules that work/don't-work. The strategy here has been to simply load up 10G gear with compatible SR optics and then forget about it. I'm guessing that's not helpful because you're probably interested in non-SR optics, but feel free to ping me if you think I might be able to answer further questions.

#show interfaces transceiver properties
Yes: Dell Qualified    No: Not Qualified    N/A: Not Applicable

Port      Type     Media           Serial Number  Dell Qualified
--------  -------  --------------  -------------  --------------
Te1/0/3   SFP      1000BASE-T      PL7            No
Te1/0/4   SFP      1000BASE-T      PKL            No
Te1/0/5   SFP+     10GBASE-SR      AL3            Yes
Te1/0/6   SFP+     10GBASE-SR      AK3            Yes
Te1/0/7   SFP+     10GBASE-SR      AP9            Yes
Te1/0/8   SFP+     10GBASE-SR      AP9            Yes
Te1/0/9   SFP+     10GBASE-SR      AP9            Yes
Te1/0/10  SFP+     10GBASE-SR      AP9            Yes
Te1/0/11  SFP+     10GBASE-SR      AJC            Yes
Te1/0/12  SFP+     10GBASE-SR      AL2            Yes
Te1/0/13  SFP+     10GBASE-SR      AJQ            Yes
Te1/0/14  SFP+     10GBASE-SR      AP9            Yes
Te1/0/15  SFP+     10GBASE-SR      AHG            Yes
Te1/0/16  SFP+     10GBASE-SR      AJQ            Yes
Te1/0/17  SFP      1000BASE-T      PKL            No
Te1/0/18  SFP      1000BASE-T      PL7            No
Te1/0/19  UNKNOWN  N/A             H11            N/A
Te1/0/20  UNKNOWN  N/A             P11            N/A
Te1/0/22  UNKNOWN  N/A             H51            N/A
Te1/0/23  SFP+     10GBASE-CU1M    22808          Yes
Te1/0/24  SFP+     10GBASE-CU1M    11560          Yes
Fo1/1/2   QSFP     40GBASE-CR4-1M  CN0V42N6F37    Yes

The 19, 20, 22 SFP's are Finisar 1G; the 1000BASE-T's are Dell branded but apparently not qualified for use in the switch. I think they showed up differently before the firmware upgrade that made the unit think it's a Dell Networking N4032F. Overall we're pleased with the 8132F, but we're not doing anything too awful stressy with them.
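If you have a stack of these switches, eyeballing that output gets old. A quick sketch of a triage script (the column layout is an assumption based on the sample output; real firmware revisions may format it differently):

```python
# Quick-and-dirty triage of `show interfaces transceiver properties`
# output from a Dell 8132F: list which ports carry optics the switch
# does not consider Dell-qualified.  SAMPLE is a made-up excerpt; the
# column order (port first, qualification flag last) is assumed.
SAMPLE = """\
Te1/0/3   SFP      1000BASE-T   PL7   No
Te1/0/5   SFP+     10GBASE-SR   AL3   Yes
Te1/0/19  UNKNOWN  N/A          H11   N/A
"""

def unqualified_ports(text):
    flagged = []
    for line in text.splitlines():
        fields = line.split()
        if fields and fields[-1] == "No":   # last column: Dell Qualified
            flagged.append(fields[0])
    return flagged

print(unqualified_ports(SAMPLE))   # ['Te1/0/3']
```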
Re: Verizon Policy Statement on Net Neutrality
On 02/28/2015 07:55 PM, Barry Shein wrote: And given lousy upload speeds the opportunities to develop for example backup services in a world of terabyte disks is limited. At 1 Mb/s it takes approximately 8,000,000 seconds to upload 1TB; that's roughly three months, blue sky. If that terabyte drive holds little files and the backup program uses incremental backup, a slow upload rate shouldn't be all that painful. Video editors need to look at local-network solutions for their backup, at least until upload rates increase by a factor of 10 or better. It just hit me: when one has just a hammer in his toolbox, everything starts to look like nails. Network-based storage could just be one of those. That was probably true back when Ethernet was 10Mbps ... let's say 1992. But then along came 100Mbps in 1995, and 1GbE in 1999, and then 10GbE in 2002. In the space of ten years, the technology became 1000x faster. I don't buy that network-based storage could just be one of those. Just because the broadband networks we have today aren't up to the task doesn't make this a reasonable point. Remember that the National Information Infrastructure was supposed to deliver 45Mbps symmetric connections to the end user back in the '90s, a visionary goal but one that was ultimately subverted in the name of telco profits. http://it.tmcnet.com/topics/it/articles/70379-net-that-got-away.htm
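The upload-time arithmetic, for a few illustrative rates (the rates are just examples, not anyone's actual service tier):

```python
# How long does it take to push 1 TB upstream at a given upload rate?
# Size in bytes, rate in bits per second; result in days.
def upload_days(size_bytes, rate_bps):
    return size_bytes * 8 / rate_bps / 86_400

TB = 10**12
for mbps in (1, 5, 30, 100):
    print(f"{mbps:>3} Mb/s: {upload_days(TB, mbps * 10**6):6.1f} days")
# at 1 Mb/s it is on the order of three months; even at 100 Mb/s
# it is most of a day
```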
Re: Verizon Policy Statement on Net Neutrality
(replying to a few different points by different people): In general, I find my 30M/7M is not too terribly painful most of the time. Do I wish I had more upstream? Yes, but not as much as I wish I had more downstream. I think an ideal minimum that would probably be comfortable most of the time today would be 100M/30M. But around here, the best you can get is 50M/5M (cable) or 12M/1M (VDSL). The 5M upstream on the cable is also a fairly recent improvement; it used to be 1M as well - and still is for most non-super-ultra-mega-premium tiers, I believe. And perfect symmetry is not necessary. Would I notice the difference between 60/60 and 60/40 or even 60/20? Probably not really, as long as both numbers are significantly more than the expected peak rate. But 24/1.5, a factor of 16, is a very different story. And both those variables are the problem. The current service offerings have been carefully designed to balance existing technology and observed actual usage characteristics, leaving essentially nothing for future technological evolution to grow into. The problem is that if you make service offerings significantly more than the expected peak rate, then there is no longer incentive for customers to buy more than the most basic tier of service.
Re: Verizon Policy Statement on Net Neutrality
On 27/Feb/15 19:13, valdis.kletni...@vt.edu wrote: Consider a group of 10 users, who all create new content. If each one creates at a constant rate of 5 mbits, they need 5 up. But to download all the new content from the other 9, they need close to 50 down. And when you expand to several billion people creating new content, you need a *huge* pipe down. Bottom line is that perfect symmetry isn't needed for content distribution - most people can't create content fast enough to clog their uplink, but have trouble picking and choosing what to downlink to fit in the available bandwidth. Isn't this a phenomenon of the state of our (uplink) networks? Remove the restriction and see what happens? Only partially. It is also a phenomenon of having built the first broadband networks with that asymmetry, which in turn discouraged a whole host of potential applications, which in turn creates a sort of bizarre self-fulfilling prophecy: broadband networks don't see much call for tons of upstream because it wasn't available, and so there aren't lots of apps for it, and so users don't ask for it, and so the cycle continues. In many cases, users who had high upstream requirements have been instead working around the brokenness by, for example, renting a server at a datacenter. I know lots of gamers do this, etc. So even if we were to create massive new upstream capacity tomorrow, it might appear for many years that there's little interest. Consider streaming video. We theoretically had sufficient speed to do this at least ten years ago, but it took a long time for the technology to mature and catch on. However, it should be obvious that the best route to guaranteeing that new technologies do not develop is to keep the status quo. With wildly asymmetric speeds, upstream speeds are sometimes barely enough for the things we do today (and are already insufficient for network based backup strategies, etc). Just try uploading a DVD ISO image for VM deployment from home to work ... 
The current service offerings generally seem to avoid offering high upstream speeds entirely, and so effectively eliminate even the potential to explore the problem on a somewhat less-rigged basis.
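Valdis's create-versus-consume arithmetic generalizes simply: n peers each producing at rate r only need r upstream, but need (n-1)*r downstream to follow everyone else. A minimal sketch:

```python
# Upstream vs. downstream demand for n users who each create content
# at rate r (Mbps) and want to pull every other user's output.
def demand(n_users, rate_mbps):
    up = rate_mbps                     # you only send your own stream
    down = (n_users - 1) * rate_mbps   # you pull the other n-1 streams
    return up, down

print(demand(10, 5))   # (5, 45) -- the "5 up, close to 50 down" case
```

Which is why asymmetry is defensible for pure content distribution; the argument in the post is about what that asymmetry forecloses, not about the arithmetic.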
Re: scaling linux-based router hardware recommendations
I know that specially programmed ASICs on dedicated hardware like Cisco, Juniper, etc. are going to always outperform a general purpose server running gnu/linux, *bsd... but I find the idea of trying to use proprietary, NSA-backdoored devices difficult to accept, especially when I don't have the budget for it. I've noticed that even with a relatively modern system (Supermicro with a 4-core 1265LV2 CPU with 9MB cache, Intel E1G44HTBLK server adapters, and 16GB of RAM), you still tend to get a high percentage of time working on softirqs on all the CPUs when pps reaches somewhere around 60-70k and the traffic approaches 600-900Mbit/sec (during a DDoS, such hardware cannot typically cope). It seems like finding hardware more optimized for very high packet-per-second counts would be a good thing to do. I just have no idea what is out there that could meet these goals. I'm unsure if faster CPUs, or more CPUs, is really the problem, or networking cards, or just plain old-fashioned tuning. 10-15 years ago, we were seeing early Pentium 4 boxes capable of moving 100Kpps+ on FreeBSD. See for example http://info.iet.unipi.it/~luigi/polling/ Luigi moved on to netmap, which looks promising for this sort of thing. https://www.usenix.org/system/files/conference/atc12/atc12-final186.pdf I was under the impression that some people have been using this for 10G routing. Also I'll note that Ubiquiti has some remarkable low-power gear capable of 1Mpps+.
Re: Got a call at 4am - RAID Gurus Please Read
I'm just going to chime in here since I recently had to deal with bit-rot affecting a 6TB Linux RAID5 setup using mdadm (6x 1TB disks). We couldn't rebuild because of 5 URE sectors on one of the other disks in the array after a power/UPS issue rebooted our storage box. We are now using ZFS RAIDZ and the question I ask myself is, why wasn't I using ZFS years ago? +1 for ZFS and RAIDZ I hope you are NOT using RAIDZ. The chances of an error showing up during a resilver are uncomfortably high, and there are no automatic tools to fix pool corruption with ZFS. Ideally use RAIDZ2 or RAIDZ3 to provide more appropriate levels of protection. Errors introduced into a pool can cause substantial unrecoverable damage to the pool, so you really want the bitrot detection and correction mechanisms to be working as designed.
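"Uncomfortably high" can be put in numbers. Consumer drives are commonly spec'd at one unrecoverable read error per 10^14 bits, and a single-parity resilver of a 6x 1TB pool has to read the five surviving disks end to end. A rough sketch of the odds (this assumes independent errors at exactly the quoted spec rate, both idealizations):

```python
import math

# Probability of hitting at least one unrecoverable read error (URE)
# while reading `tb_read` terabytes, for a drive spec'd at one URE
# per 10**14 bits (a common consumer-drive spec sheet figure).
def p_ure(tb_read, ure_rate=1e-14):
    bits = tb_read * 1e12 * 8
    return 1 - math.exp(-bits * ure_rate)   # Poisson approximation

# RAIDZ1 resilver of a 6x 1TB pool: read the 5 surviving disks in full.
print(f"{p_ure(5):.0%}")   # roughly a 1-in-3 chance of hitting a URE
```

With RAIDZ2 a single URE during resilver is still correctable, which is the practical argument for double parity above.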
Re: Equinix Virginia - Ethernet OOB suggestions
Hey, VPN setup is not really a viable option (for us) in this scenario. Honestly, I'd prefer to just call it done already and have a VPN but due to certain restraints, we have to go down this route. Without explaining the restraints, this kinda boils down to "'cuz we want it", which stopped being good justification many years ago. I doubt you'll find many takers who would want to provide you with a circuit for a few Mbps with a /23 for OOB purposes "just 'cuz". I note that we're present in Equinix Ashburn and could do it, and that this is basically a nonstarter for us.
Re: Industry standard bandwidth guarantee?
Consider a better analogy from the provider side: A customer bakes a nice beautiful fruitcake for their Aunt Eddie in the wilds of Saskatchewan. The cake is 10kg - but they want to make sure it gets to Eddie properly, so they wrap it in foil, then bubble wrap, then put it in a box. They have this 10kg cake and 1kg of packaging to get it up north. They then go to the ISP store to get it delivered - and are surprised that to get it there, they have to pay to ship 11kg. But the cake is only 10kg! If they pay to ship 11kg for a 10kg cake, obviously the ISP is trying to screw them. The ISP should deliver the 10kg cake at the 10kg rate and eat the cost of the rest - no matter how many kg the packaging is or how much space they actually have on the delivery truck. And then the customer goes to the Internet to decry the nerve of the ISP for not explaining the concept of packaging up front and in big letters. Why, they should tell you - to ship 10kg, buy 11kg up front! Or better yet, they shouldn't count the box when weighing for shipping! I should pay for the contents only; the wrapping, no matter how much it is, shouldn't even be considered! It's plain robbery. Harrumph. Perhaps that's because in the case of shipping, it is usual and customary to expect an item to be packaged carefully and that the packaging is counted as part of the shipped package. From the provider side (bearing in mind I've been in that business for a few decades), usually what the customer wants is to understand what they're purchasing, and if you as a provider tell them that they're buying a 100Mbps circuit, they kinda expect that they can shovel 100Mbps down that circuit. No amount of "but you should expect that there's packaging and you should just /knoow/" (whine added for emphasis) is really going to change that. That's why I designed an analogy that is much more representative of reality than yours.
Re: Industry standard bandwidth guarantee?
You can't just ignore protocol overhead (or any system's overhead). If an application requires X bits per second of actual payload, then your system should be designed properly and take into account overhead, as well as failure rates, peak utilization hours, etc. This is valid for networking, automobile production, etc. Are you saying that the service provider should take into account overhead? And report the amount of bandwidth available for payload? Even there we have some wiggle room, but at least it is something the customer will be able to work out (IP header overhead, etc). If not, I'm at a bit of a loss. As a customer, how do I identify that my traffic is actually going over an ATM-over-MPLS-over-VPN-over-whatever-other-bitrobbing-tech circuit and that I should only expect to see 60% of the speed advertised?
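For the part the customer can work out: even on plain Ethernet with no exotic transport underneath, a "100Mbps" circuit never yields 100Mbps of TCP payload, because each layer takes its cut. A sketch of the stacking overhead, using standard header sizes with no options:

```python
# TCP goodput fraction over Ethernet for a given MTU.  Each 1500-byte
# IP packet rides in a frame with a 14B Ethernet header + 4B FCS,
# plus 8B preamble/SFD and a 12B inter-frame gap on the wire; inside
# it, 20B IP + 20B TCP headers (no options) precede the payload.
def tcp_goodput_fraction(mtu=1500):
    wire = mtu + 14 + 4 + 8 + 12       # bytes on the wire per packet
    payload = mtu - 20 - 20            # TCP payload bytes per packet
    return payload / wire

frac = tcp_goodput_fraction()
print(f"{frac:.1%} -> about {100 * frac:.0f} Mbps of payload on a 100 Mbps port")
```

That ~5% tax is something a customer can derive from public specs; the 40% tax of an undisclosed ATM-over-MPLS-over-VPN stack is not, which is the point of the complaint above.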
Re: Linux: concerns over systemd [OT]
Which leads me to ask - those of you running server farms - what distros are popular these days, for server-side operations? been running bsd forever. but moving to debian and ganeti, as bsd does not host virtualization. Simply not true; http://bhyve.org/ It is a bit immature compared to Xen+Ganeti or something like that. would love it if debian ditched this systemd monstrosity and provided solid zfs.
Re: Linux: concerns over systemd [OT]
Which leads me to ask - those of you running server farms - what distros are popular these days, for server-side operations? been running bsd forever. but moving to debian and ganeti, as bsd does not host virtualization. Simply not true; http://bhyve.org/ It is a bit immature compared to Xen+Ganeti or something like that. apologies. i thought we were talking about production systems. my mistake. Oh, c'mon Randy, you've been around long enough to know how this all works. You can't honestly tell me that VMware ESX was born handling production loads. You can't honestly tell me that Xen was born handling production loads. All hypervisor technologies were new at one point in their life cycle, and most were also catastrofails at one point in their life cycle. The fact that bhyve is new means it's more immature, but people are certainly trying noncritical production loads on it. Y'know, the same way they did years ago with ESX. No one's saying you have to trust it with your production workloads, but it's pretty unfair to characterize BSD as "not host(ing) virtualization" when so much effort has been put into that very issue, specifically so that we could gain the advantages of a BSD hypervisor that supported ZFS natively...
Re: Why is .gov only for US government agencies?
Wondering if some of the long-time list members can shed some light on the question--why is the .gov top level domain only for use by US government agencies? Where do other world powers put their government agency domains? With the exception of the cctlds, shouldn't the top-level gtlds be generically open to anyone regardless of borders? Would love to get any info about the history of the decision to make it US-only. In part due to RFC1480. At one point, everything here in the US was set to transition away from the US- and TLD-centric models. It is now only a fuzzy memory, but at one point commercial entities could not just register a random .NET or .ORG domain name ... which would have resulted in a nicer-looking Internet domain system today. But to make a long story short, and my memory's perhaps a bit rusty now, but my recollection is that shorter URL's looked nicer and there was significant money to be had running the registry, so there was some heavy lobbying against retiring .GOV in favor of .FED.US (and other .US locality domains).
Re: Why is .gov only for US government agencies?
On Sun, Oct 19, 2014 at 7:12 AM, Joe Greco jgr...@ns.sol.net wrote: But to make a long story short, and my memory's perhaps a bit rusty now, but my recollection is that shorter URL's looked nicer and there was significant money to be had running the registry, so there was some heavy lobbying against retiring .GOV in favor of .FED.US (and other .US locality domains). [snip] The same problem exists with .EDU capriciously adopting new criteria that exclude any non-US-based institutions from being eligible. I believe the major issue is that if a TLD is in the global namespace, then it should NOT be allowed to restrict registrations based on country; the internet is global and .GOV and .EDU are in Global Namespace. So then, why aren't .EDU and .GOV just allowed to continue to exist but a community decision made to require whichever registry will be contracted to manage .GOV to accept registrations from _all_ government entities regardless of nationality? Because the US has historically held control over the whole process. Regardless of what it may seem like, it's not a community process. In other words, rejection of the idea that a registry operating GTLD namespace can be allowed to impose overly exclusive eligibility criteria. In the specific case of .gov, I'd say that there's some danger to having multiple nations operating in that single namespace; .gov should probably be retired and federal institutions migrated to .fed.us. There's also namespace available for localities. But given the choice between rationality and insanity, usually the process seems to prefer insanity.
Re: Marriott wifi blocking
On Sat, Oct 04, 2014 at 11:19:57PM -0700, Owen DeLong wrote: There's a lot of amateur lawyering going on in this thread, in an area where there's a lot of ambiguity. We don't even know for sure that what Marriott did is illegal -- all we know is that the FCC asserted it was and Marriott decided to settle rather than litigate the matter. And that was an extreme case -- Marriott was making transmissions for the *sole purpose of preventing others from using the spectrum*. I don't see a lot of ambiguity in a plain text reading of Part 15. Could you please read Part 15 and tell me what you think is ambiguous? Marriott was actually accused of violating 47 USC 333: No person shall willfully or maliciously interfere with or cause interference to any radio communications of any station licensed or authorized by or under this chapter or operated by the United States Government. In cases like the Marriott case, where the sole purpose of the transmission is to interfere with other usage of the spectrum, there's not much ambiguity. But other cases aren't clear from the text. For example, you've asserted that if I've been using ABCD as my SSID for two years, and then I move, and my new neighbor is already using that, that I have to change. But that if, instead of duplicating my new neighbor's pre-existing SSID, I operate with a different SSID but on the same channel, I don't have to change. I'm not saying your position is wrong, but it's certainly not clear from the text above that that's where the line is. That's what I meant by ambiguity. I've watched this discussion with much amusement. In a manner similar to our legal system, where a lot of the law is actually defined by what is commonly called case law, most of the non-radio geeks here are talking about radios and spectrum as though all of this represents some sort of new problem, when in fact the agency tasked with handling it is older than any of us.
(What's your position on a case where someone puts up, say, a continuous-carrier point-to-point system on the same channel as an existing WiFi system that is now rendered useless by the p-to-p system that won't share the spectrum? Illegal or legal? And do you think the text above is unambiguous on that point?) It doesn't matter if you think your quoted text on this point is ambiguous. The fact of the matter is that decades of policy are that the FCC decided many years ago that you cannot go onto shared, unlicensed spectrum with a powerful transmitter and hold the mic open with the intent to disrupt the legitimate communications traffic of others on that channel. This logically derives fairly straightforwardly from the quoted text, and the fact that wifi deauth interference is merely a packet-pushing variant of this isn't really hard for the average person to extrapolate. But they also have decades of experience with other aspects of more subtle radio shenanigans, and they have the authority to sort it all out, so what we should really be hoping for is that the FCC doesn't do something onerous like mandate registration of access point MACs and SSIDs if and when it gets to a point where it is considered a true problem. That could well be the regulatory solution to your ABCD problem, but it would be a heavy-handed fix to a minor problem.
Re: Observations of an Internet Middleman (Level3)
Blake Dunlap iki...@gmail.com wrote: And the unbalanced peers / transit? Surely it is too much to expect a service provider to actually provide service even if it is not entirely fair and balanced. It's not like, you know, anyone was paying them to provide a service ... [...rewind...] kevin_mcelear...@cable.comcast.com wrote: This is a smart group. Well, smart enough to at least try to see it for what it actually is. Telling us we're smart and then expecting us to swallow a load doesn't quite seem to work, judging from the last few responses you've had. Some of us are actually businesspeople, so we understand the issues from multiple dimensions. Including historical ones, where there are both examples of Monopolies Gone Wild! (Spring Break Edition!) and also Government Regulation Gone Overboard. If you'd like to say that you're trying to leverage as much revenue as possible by taking advantage of new trends (i.e. cord cutting) in a customer base that's at least partially without other reasonable options, while keeping investment costs as low as possible, well, then, we have the potential for an honest conversation. But if you're going to tell us about how you've managed to acquire transit customers, that feels like the start of a dishonest discussion because basically most of us here wouldn't buy transit from a cable company unless it was the only available option, or there was some other distorting reason - such as congestion - that caused such an arrangement to be needed.
Re: Observations of an Internet Middleman (Level3)
On Thu, May 15, 2014 at 1:06 PM, Ryan Brooks r...@hack.net wrote: On 5/15/14, 11:58 AM, Joe Greco wrote: 2) Netflix purchases 5Mbps fast lane I appreciate Joe's use of quotation marks here. A lot of the dialog has included this 'fast lane' terminology, yet all of us know there's no 'fast lane' being constructed, rather just varying degrees of _slow_ applied to existing traffic. please correct me if I'm wrong, but 'fast lane' really is (in this example): 'cableco' port from 'moviecompany' has 'qos' marking configuration to set all 'moviecompany' traffic (from this port!) to some priority level. I think that's a possibility, but that we're actually talking at a less-technical level. [...] 3 x 5 == 15 ... not 10. How will 'cableco' manage this when their 100gbps inter-metro links are seeing +100gbps of 'fastlane' traffic and 'fastlane' traffic can't make it to the local metro from the remote one? #whocares You've made a technical implementation issue out of a mostly non-tech issue. This all seems much, much more complicated and expensive than just building out networking, which they will have to do in the end anyway, right? Only with 'fastlanes' there's extra capacity management and configuration and testing and ... all on top of: Gosh, does the new umnptyfart card from routerco actually work in old routerco routers? I certainly agree. This isn't a technical issue though. A majority of the people on this list should appreciate the costs associated with building and maintaining networks, and there are lots of them to be sure. This is about other aspects of the business. This looks, to me, like nuttiness... like mutually assured destruction that the cableco folk are driving both parties into intentionally. No. I don't actually believe that. Businesses are in the habit of making money. There's a reasonably strong desire to remain in business and hopefully make a profit.
To that end, in the capitalist model, competition serves to lower prices and increase quality to levels that the average consumer finds acceptable. A monopoly or duopoly environment distorts that; in a market with a constrained number of providers, the conventional capitalistic model can perform poorly or even fail entirely - as an example, consider the LCD price fixing scandal last decade, where prices ended up artificially high. The current situation is worse; the telcos and cablecos have a bunch of incentives to prevent cannibalizing their existing profitable pay TV product lines... which are seeing competition from the likes of Netflix. And there's even some legitimacy there: if all of those customers suddenly dropped their pay TV service and went to Netflix, the whole economic underpinnings of a cable TV company could be thrown into disarray. Because those pay TV subscribers are in some way contributing to covering the opex and capex of the cable TV distribution network. That'd also be damaging to the last mile IP connectivity, heh. But it's hard to have an honest discussion about all of this when those involved are so busy trying to spin things in their favor, and to keep the status quo, etc.
Re: Observations of an Internet Middleman (Level3) (was: RIP
Throttling is taking, say, a link from 10G and applying policy to constrain it to 1G, for example. Throttling is also trying to cram 20G of traffic through that same 10G link. What if a peer wants to go from a balanced relationship to 10,000:1, well outside of the policy binding the relationship? What if you're running a 10G port at saturation in both directions and you decide to stop accepting announcements from the peer on that port? Now you have a 10,000:0 ratio. Then what? Should we just unquestionably toss out our published policy - which is consistent with other networks - and ignore expectations for other peers? What's your goal at the end of the day? You have customers who are paying you for connectivity to Teh Interwebz. Do you have an obligation to run a dedicated 100GbE to each and every host on the planet? No. Do you have an obligation to make a reasonable effort to move the traffic that your customer is paying you for? Yes. At the end of the day, if I'm your customer and I'm trying to pull 50Mbps of data on my 50Mbps connection that I am buying from you, then it seems like a reasonable thing to expect that you'll have the 50Mbps of capacity to actually fulfill the demand. That does not mean that I will actually GET 50Mbps - it just means that you should be making a reasonable effort and especially that you are not actively sabotaging it, by aggregating it through a congested 10Gbps port, or forcing the packets through a congested peer, or any of a number of other underhanded things. If you cannot figure out how to arrange your transit and peering affairs in a manner that allows you to deliver on what you've sold to customers in the current unregulated model, I think you'll find that the alternative of regulation is very much less palatable.
So, to answer your question, yes, if you're unable to figure out that Netflix is always going to generate tons more traffic than it receives, and that your customers desperately want to get good connectivity to there, then that's dumb. Perhaps you should figure out how to arrange peering with sites where there's obviously going to be an unrectifiable traffic imbalance. You're a service provider. What should your goal be? I would have thought it obvious: Provide the service. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
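The capacity argument in the message above can be made concrete with a back-of-envelope oversubscription calculation. This is a sketch with illustrative numbers only (the subscriber count and port size are assumptions, not figures from the post):

```python
# Back-of-envelope oversubscription check (all numbers hypothetical).
# If an ISP sells N customers a 50 Mbps tier and aggregates them behind
# a single 10 Gbps port, what can each customer expect at peak?

sold_rate_mbps = 50
uplink_mbps = 10_000   # one 10 Gbps aggregation port
customers = 5_000      # hypothetical subscriber count behind that port

# Ratio of total sold capacity to actual uplink capacity.
oversub_ratio = customers * sold_rate_mbps / uplink_mbps

# Equal share per customer if everyone pulls traffic at once.
fair_share_mbps = uplink_mbps / customers

print(f"oversubscription ratio: {oversub_ratio:.0f}:1")
print(f"fair share at full saturation: {fair_share_mbps:.1f} Mbps")
```

With these numbers the ratio is 25:1 and the saturated fair share is 2 Mbps - which illustrates the post's point: oversubscription itself is normal engineering, but deliberately running the aggregation port at saturation collapses the delivered rate far below the sold 50 Mbps.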
Re: Observations of an Internet Middleman (Level3) (was: RIP
So by extension, if you enter an agreement and promise to remain balanced you can just willfully throw that out and abuse the heck out of it? Where does it end? Why even bother having peering policies at all then? It doesn't strike you as a ridiculous promise to extract from someone? Hi, I'm an Internet company. I don't actually know what the next big thing next year will be, but I promise that I won't host it on my network and cause our traffic to become lopsided. Wow. Is that what you're saying? To use an analogy, if you and I agree to buy a car together and agree to switch off who uses it every other day, can I just say "forget our agreement - I'm just going to drive the car myself every single day - it's all mine"? Seems like a poor analogy since I'm pretty sure both parties on a peering can use the port at the same time. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: Observations of an Internet Middleman (Level3)
That link is broken and insists that I install a Windows upgrade for Flash on my Mac. Try http://arstechnica.com/tech-policy/2014/05/fcc-votes-for-internet-fast-lanes-but-could-change-its-mind-later/ ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: Residential CPE suggestions
It uses a Cavium Octeon processor which does have dedicated HW packet processing. A moderate number of prefixes won't slow it down doing vanilla forwarding; not sure about 2 million though... I believe they have recently optimized some of the FW stuff to take advantage of the HW as well. Layering services like FW, NAT, and tunneling definitely drops the packet rate significantly, but it is still capable of 100+Mbps at IMIX packet sizes. I think there are a couple of in-depth tests out there. In my experience the ERL works really well for a $99 device. I sent them an inquiry and they sent a friendly but fact-free response, so it is probably safe to assume that it is relatively good at basic packet forwarding but the services will kill it. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: Residential CPE suggestions
I was also going to recommend the EdgeRouter Pro as it has dual SFP ports and the Vyatta/Linux stuff works quite well. I suspect you will be very surprised with the quality experience. If you've not used Vyatta, it's very JunOS-like. Does anyone have any practical experience with the EdgeRouter with a largish number of prefixes? http://dl.ubnt.com/datasheets/edgemax/EdgeRouter_DS.pdf The 2 million+ packets per second leads me to believe that this is merely a highly optimized software-based router, but under Hardware Specs it specifically says hardware acceleration for packet processing. I have no idea what's being accelerated, since the layer 3 forwarding performance specs for the ER-8 are 2Mpps (an 800MHz CPU) and the ERPro-8 are 2.4Mpps (1GHz), which suggests software lookup. Do these things suffer if you load them down with a full table? Or a handful of firewall rules? ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
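The "suggests software lookup" inference above can be sanity-checked with some quick arithmetic following the post's own reasoning (the datasheet pps and clock figures are from the message; the conclusion is a rough heuristic, not a definitive analysis):

```python
# Rough sanity check on the datasheet numbers: CPU cycles available per
# packet. If forwarding were done in dedicated hardware, the pps rating
# would not track CPU clock speed so closely; here it does, which hints
# at a software fast path.

specs = {
    "ER-8 (800 MHz)":    (800e6, 2.0e6),  # (clock Hz, claimed pps)
    "ERPro-8 (1 GHz)":   (1.0e9, 2.4e6),
}

for name, (clock_hz, pps) in specs.items():
    cycles_per_packet = clock_hz / pps
    print(f"{name}: ~{cycles_per_packet:.0f} CPU cycles per packet")
```

Both models come out in the 400 to 420 cycles-per-packet range and scale roughly linearly with clock speed, which is consistent with an optimized software forwarding path rather than a dedicated forwarding ASIC.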
Re: Level 3 blames Internet slowdowns on Technica
The economic reality is that if I build out an expensive infrastructure I have to pile on as many high priced services as possible in order to maximize the revenue from it. A customer who does not balk at a $200 a month TV/voice/Internet service is not going to be happy getting a bill of $50 a month for a fiber loop. The services are what the customer really wants and where you can add bells and whistles with little added expense. The infrastructure is the expensive part. That's correct, but it is still the wrong way to try to approach the problem. It is simply not practical for N different companies to all try to build out their own networks; we already had the cable and telco monopolies each building out communications infrastructure, which in hindsight seems a little foolish, though it was largely due to the available technologies at the time. BTW, if you think that NRC infrastructure charge would ever go away, you are kidding yourself. The N in NRC means non-recurring. Here in Illinois, we have been paying for the construction of our tollway in perpetuity. When it was originally built the state promised to remove the tolls as soon as construction costs were recovered. We are still waiting and will be forever. As someone who has worked in the Loop on and off for twenty years, I am fully aware of the history and folly of the Illinois trollway. As an out-of-stater, I've watched the way that the tollways have been modified over the years to more heavily impact those of us coming from the north (Deerfield/Waukegan restructuring), to more heavily impact those paying cash, etc. I note that it wasn't all that many years ago that I was paying 40c cash at the Waukegan toll; today that same toll is $2.80. If you want, you can criticize the free-market economic model that uses profit to determine viability, but unfortunately someone pays the bill in the end. Whether it is government funded, a grant, or a commercial enterprise, expenses get recovered. 
The only difference is that in a free market the customer gets to choose what they pay for. In any other model, everyone pays whether they like it or not. I think our communications model had to develop as a managed monopoly otherwise it would not have been the universal solution that it is today. Now we have to deal with the downside of the monopoly as well. The problem is that if you accept such a fatalistic position as the only possible way, you end up with Comcast and U-Verse. Unfortunately it is a fallacy to imagine that this is the only way it can be. We've seen last mile infrastructure built by municipalities, for example. We know from the historical examples of gas, water, sewer, power, oh and also telephone and cable that it is perfectly possible to create a monopoly to deliver basic services. The entire point, in fact, of my first post in this thread was to point out that this is in fact what Ma Bell had promised to deliver as part of the NII, to provide the last mile fiber to the house, and then to allow competitive access to that network. They did want - and in fact got - concessions and other inducements to actually deliver such a network, by some accounts as much as $200 billion in incentives, which they promptly kept, but then slowly chipped away at what they were expected to deliver in return, until they were finally allowed to just deliver their own services on the infrastructure. So guess what. In this case, we actually spent the money to do it already and in return we got shafted with U-Verse. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: misunderstanding scale
On Mon, Mar 24, 2014 at 3:00 AM, Karl Auer ka...@biplane.com.au wrote: Addressable is not the same as accessible; routable is not the same as routed. Indeed. However, all successful security is about _defense in depth_. If it is inaccessible, unrouted, unroutable and unaddressable then you have four layers of security. If it is merely inaccessible and unrouted you have two. Yet there is significant value in providing unique address space. The proponents of this sort of defense in depth typically view NAT as a way to protect their networks, which it does, in some limited sense, by making them unaddressable from the outside world. The problem is that it has broken one of the key design principles of IPv4, and so we've had to suffer for years under broken NAT regimes and workarounds and other folly. This is overall a bad thing for the Internet, and for the development of future protocols and applications. Time to give up two layers of meaningless security for the riches offered by the vastness of the new address space. If this job were easy, anyone could do it. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: misunderstanding scale
Hi Mike, You can either press the big red button and fire the nukes or you can't, so what difference does it make how many layers of security are involved with the Football? I say this with the utmost respect, but you must understand the principle of defense in depth in order to make competent security decisions for your organization. Smart people disagree on the details but the principle is not only ironclad, it applies to all forms of security, not just IP network security. The problem here is that you're now enshrining as a security device a hacky, ill-conceived workaround for a lack of flexibility/space/etc in IPv4. NAT was not designed to act as a security feature. If you want more layers of security, put a second firewall into your design. Don't perpetuate horrid IPv4 hacks that were necessary for specific reasons into IPv6 where those hacks are no longer needed. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
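For concreteness, the "second firewall, not NAT" advice might look like the following stateful IPv6 filter sketch. This is an illustrative nftables-style config fragment, not anything from the thread; the table/chain layout and the interface name "lan0" are assumptions:

```
# Minimal stateful IPv6 filter sketch: provides the property people
# attribute to NAT (outsiders cannot initiate connections inward)
# without rewriting any addresses.
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy drop;

        # Allow replies to connections initiated from inside.
        ct state established,related accept

        # Allow anything the inside LAN originates.
        iifname "lan0" accept

        # Everything else inbound is dropped by the chain policy.
    }
}
```

The point being made in the message is exactly this: the drop-by-default stateful filter, not the address translation, is what blocks unsolicited inbound traffic, so the same protection is available in IPv6 with globally unique addresses end to end.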
Re: misunderstanding scale
On Mon, Mar 24, 2014 at 8:31 AM, Joe Greco jgr...@ns.sol.net wrote: all successful security is about _defense in depth_. If it is inaccessible, unrouted, unroutable and unaddressable then you have four layers of security. If it is merely inaccessible and unrouted you have two. Time to give up two layers of meaningless security for the riches offered by the vastness of the new address space. Hi Joe, You'd expect folks to give up two layers of security at exactly the same time as they're absorbing a new network protocol with which they're yet unskilled? Does that make sense to you from a risk-management standpoint? Actually, yes, it does. Using the product as intended is substantially less risky than trying to figure out how to use some sort of proxy or gateway functionality to emulate NAT, and then screwing that up. If you're afraid that you're insufficiently competent, help for hire is available, as are two levels of firewalling, which isn't really a bad idea anyways. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: misunderstanding scale
it involves two layers of heterogeneous firewalls (protecting multiple ^ Ugh. Knew I was forgetting something. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: Level 3 blames Internet slowdowns on Technica
How do you get around the problem of natural monopolies, then? Or should we be moving to a world where, say, a dozen or more separate companies are all running fiber or coax on the poles on my street in an effort to get to my house? IMHO, the only way to get real competition on the last mile is to have the actual fiber/wire infrastructure owned by a neutral party that's required to pass anyone's traffic. Which closely resembles the original goal of the National Information Infrastructure, back in the early 1990's. Fiber to the homes. 86 million of them by 2006. The Bells volunteered to do it in exchange for incentives, which they got, and kept, and then never delivered what was promised. The best short summary of what happened is probably here: http://www.newnetworks.com/ShortSCANDALSummary.htm This booklet is now maybe ~5-10 years old so it doesn't reflect more recent developments. We *let* the monopolies (er, duopolies in some cases) get away with the regulatory and legislative manipulation that led to the current outcome, and the irony that the message I'm responding to was authored by someone who appears to work for one of those companies is not lost upon me. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: Level 3 blames Internet slowdowns on Technica
We don't know because the service provider rolls that cost up along with the services they sell. That is my point. They are able to spread the costs out based on the profitable services they sell. Okay. If they were not able to sell us services I am not sure they could afford to provide that infrastructure. That's a crock. You can always provide infrastructure without selling services on top of it. It's wire. Or fiber. Or whatever. If you're not able to subsidize the infrastructure with services, then what you actually get is a less distorted reality where you can actually identify the component costs (circuit, services, etc). In fact, having been a service provider I can tell you that I paid the LEC about $4 a month for a copper pair to your house to sell DSL service at around ten times that cost. I am sure the LEC was not making money at the $4 a month and I know I could not fund a build-out for that price. Why would you try to fund a build-out on that? Why wouldn't you instead charge for the build-out as an NRC and then charge for maintenance as an MRC? What you're suggesting reeks of the deliberate cost distortion games that go on so often. My personal favorite is cell phone contracts where the cost of the phone is *cough* subsidized by the carrier. But what's really happening is that the customer is paying for the phone over the term of the contract, and if the customer doesn't get a different phone at the end of the contract, then the carrier ... lowers their monthly rate accordingly? No, of course not... they keep it as profit. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
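The handset-subsidy distortion described above is easy to quantify with a quick sketch. All of the dollar figures here are hypothetical, chosen only to illustrate the mechanism:

```python
# Hypothetical illustration of the "subsidized" handset model: the
# subsidy is really an installment loan buried in the plan price.

phone_cost = 600.00      # carrier's cost of the "free" phone (assumed)
contract_months = 24
plan_price = 70.00       # monthly rate, unchanged after the contract ends

# The installment payment hidden inside the monthly plan price.
implied_installment = phone_cost / contract_months
print(f"implied phone payment baked into the plan: ${implied_installment:.2f}/mo")

# After month 24 the phone is paid off, but the plan price doesn't drop,
# so the carrier keeps collecting the installment as pure margin.
post_contract_margin_per_year = implied_installment * 12
print(f"extra margin once the phone is paid off: ${post_contract_margin_per_year:.2f}/yr")
```

With these assumed numbers, $25/month of the plan is really a phone payment, and a customer who keeps the same phone past the contract hands the carrier an extra $300 a year - which is the "they keep it as profit" point in the message above.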