Re: Does anybody here have a problem
On Mon, 9 Aug 2021, C. A. Fillekes wrote:

> telling the difference between their NANOG and SCA mail?

Can't say that I do. I've been on NANOG for about 17 years. I used to be
subscribed to over 100 technical lists. Despite these two facts I have no
idea what you mean by SCA in this context. I keep reading it as "Society
for Creative Anachronism".

Would anyone care to enlighten me about this SCA list? My Google-fu has
failed me.

Cheers, Rob
QUIC, Connection IDs and NAT
QUIC has Connection IDs independent of IP addresses. This was done to make
it easier to move from one IP network to another while keeping connections
active, as most here will know.

Does the existence of Connection IDs separate from IP mean that the
host/IP contention ratio in CGNAT can be higher? I.e., can a single CGNAT
device provide Internet access for a greater number of end-users? And if
so, does this reduce demand on IPv4 resources?

It's ok, I'm wearing a fire-resistant suit with self-contained breathing
apparatus as I type this.

Rob
RE: wow, lots of akamai
On Thu, 1 Apr 2021, Jean St-Laurent via NANOG wrote:

> What happened is that it would create a kind of internal DDoS and they
> would all timed out and give a weird error message. Something very
> useful like Error Code 0x8098808 Please call our support line at this
> phone number.

If only there was a way to address the Thundering Herd problem before the
cloud. :)

> This simple change to add 3 lines of code to add a random artificial
> boot penalty of few seconds, completely solve the problem.

Bingo. Now, the trick is to catch this before it causes a self-DDoS.

This is a problem that has been recognised for decades, and this is
unfortunately a good example of how operational experience is still not
being distributed properly. Too many managers think that operational work
is obvious and just a result of common sense. It isn't.

Cheers, Rob
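The fix Jean describes, adding a small random start-up delay, generalises to the standard jittered-backoff pattern. A minimal sketch in Python (function and parameter names are illustrative, not from any particular product):

```python
import random

def backoff_delay(attempt, base=1.0, cap=60.0, rng=random):
    """Full-jitter exponential backoff: draw the delay uniformly from
    [0, min(cap, base * 2**attempt)] so that thousands of clients
    restarting at the same moment spread their retries out instead of
    hammering the server in lock-step (the Thundering Herd problem)."""
    return rng.uniform(0.0, min(cap, base * 2.0 ** attempt))

# Each client sleeps for backoff_delay(n) seconds before its n-th retry;
# even attempt 0 gets a random delay, which is essentially all Jean's
# three-line boot-penalty fix amounted to.
```

The cap matters: without it, a long outage pushes retry delays out indefinitely; with it, recovering clients still return within a bounded window, just not all at once.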
Re: Perhaps it's time to think about enhancements to the NANOG list...?
On Tue, 23 Mar 2021, Valdis Klētnieks wrote:

> The problem comes when the younger generation *does* need access to the
> same knowledge - and the older generation is unreachable and/or
> actually gone.

Exactly. Let's keep in mind that it is not fanciful that networks may need
to be built from the ground up again. A major environmental disaster, a
nuclear war (even a very limited one) or another Carrington Event could
require years of reconstruction. The world narrowly missed another
Carrington Event in 2012. I recall reading that the US Government
Accountability Office estimates full recovery from such an event would
take 4-10 years.

In a crisis like this, network restoration would be a priority as it would
facilitate communication right when it is most needed. Networks save
lives.

I would suggest, though, that anyone with a passion for networking should
take the time to understand as much of it as possible. I'm sure there are
plenty of young network engineers who have pored over RFCs and other
documentation, as well as experimenting as much as they can.

Rob
Re: Are the days of the showpiece NOC office display gone forever?
On Thu, 17 Dec 2020, Tom Beecher wrote:

> I'm sure when the automation is perfect and widespread to the point
> that it catches and alerts on every network event, the monitoring rooms
> will disappear. The chances of this happening are exactly 0%.

Indeed. More broadly, a lot of people have tried to get rid of operations
staff and suffered the consequences.

> Contrary to what salespeople will say, the answer is not 100%
> automation, or 100% humans. The proper answer is an often changing
> combination of the two.

Exactly. There is an argument to be made that human operators are actually
part of the computer system. This is implied in terms like 'wetware' but
not often explicitly stated. If the last 50 years have shown us anything,
it is that humans and computers working together can achieve far more than
either in isolation.

Cheers, Rob
Re: It's been 20 years today (Oct 16, UTC). Hard to believe.
On Tue, 16 Oct 2018, Michael Thomas wrote:

> I believe that the IETF party line these days is that Postel was wrong
> on this point.

Security is one consideration, but there are others. Postel's Law is about
robustness of network communications. As such it can *increase* network
security by improving availability [CIA triad], although it could
potentially reduce confidentiality and integrity in some circumstances.
Whether or not Postel's Law improves or degrades security would need to be
assessed on a case-by-case basis.

Cheers, Rob
Re: 2017 NANOG Elections General Information
On Tue, 5 Sep 2017, Dave Temkin wrote:

> Hi NANOG Community,
>
> Nominations are rapidly coming to a close - September 8th is the last
> day to submit nominees. Unfortunately, to follow up on my paragraph
> about diversity: So far, every single candidate that has completed the
> nomination process is a white male.

What you're describing is a very coarse form of diversity based on
physical characteristics. A white man who has lived his entire life as a
peasant in Ukraine may well have a very different outlook and life
experience to a white man who grew up in Australia. These two white men
could bring quite diverse viewpoints to any situation even though they
share some superficial characteristics.

I have always supported the most suitable candidates for any role,
irrespective of their physical characteristics. I will always continue to
do so.

Rob
Re: recommendations for external montioring services?
On Mon, 12 Dec 2011, Eric J Esslinger wrote:

> I'm not looking to monitor a massive infrastructure: 3 web sites, 2
> mail servers (pop, imap, submission port, https webmail), 4 dns servers
> (including lookups to ensure they're not listening but not talking),
> and one inbound mx. A few network points to ping to ensure connectivity
> throughout my system. Scheduled notification windows (for example,
> during work hours I don't want my phone pinged unless it's everything
> going offline. Off hours I do. Secondary notifications if problem
> persists to other users, or in the event of many triggers. That sort of
> thing). Sensitivity settings (If web server 1 shows down for 5 min,
> that's not a big deal. Another one if it doesn't respond to repeated
> queries within 1 minute is a big deal) A weekly summary of issues would
> be nice. (especially the 'well it was down for a short bit but we
> didn't notify as per settings') I don't have a lot of money to throw at
> this.

Hi Eric. The feature set you are describing should be in any monitoring
system worthy of the name.

I've used Nagios to good effect for the best part of the last 12 years.
Before that I used Big Brother, which sucked in various ways. I did an
evaluation of a wide variety of FOSS monitoring systems 2-3 years ago and
Nagios won at the time (again). Generally I found the alternatives had
problems that I considered to be quite serious (such as being overly
complicated, or doing checks so frequently that they loaded the systems
they were supposed to be monitoring[1]). I'm currently trialling Icinga, a
fork of Nagios.

Puppet can be set up to manage Nagios/Icinga config, which cuts down on
the admin overhead. Nagios/Icinga can be hooked up to Collectd to provide
performance data as well as alert monitoring.

One concern about external monitoring services is the level of visibility
they need to have into your network to adequately monitor it. My
recommendation is to do a proper risk assessment on the available options.
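For what it's worth, the notification windows and sensitivity settings Eric asks for map fairly directly onto Nagios timeperiod and service options. A hypothetical sketch (host, timeperiod and contact-group names are invented for illustration):

```
# Only page during business hours; off-hours alerting to a secondary
# contact would use another timeperiod plus a serviceescalation block.
define timeperiod {
    timeperiod_name  workhours
    alias            Business Hours
    monday           09:00-17:00
    tuesday          09:00-17:00
    wednesday        09:00-17:00
    thursday         09:00-17:00
    friday           09:00-17:00
}

define service {
    use                  generic-service
    host_name            web1
    service_description  HTTP
    check_command        check_http
    check_interval       5    ; minutes between routine checks
    retry_interval       1    ; recheck faster once a problem is seen
    max_check_attempts   5    ; tolerate brief blips before alerting
    notification_period  workhours
    contact_groups       admins
}
```

The max_check_attempts/retry_interval pair is what gives the "down for 5 minutes is fine, hard-down for 1 minute is not" behaviour: nothing is notified until the check has failed the configured number of consecutive times.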
> I DO have detailed internal monitoring of our systems but sometimes
> that is not entirely useful, due to the fact that there are a few
> 'single points of failure' within our network/notification system, not
> to mention if the monitor itself goes offline it's not exactly going to
> be able to tell me about it. (and that happened once, right before the
> mail server decided to stop receiving mail).

There are a couple of ways to deal with this. Some monitoring applications
can fail over to a standby server if the primary fails. But this isn't
really necessary. You will arguably gain higher reliability by running
multiple _independent_ monitors and having them monitor each other[2]. I
have often used this approach.

The principal aim here is to guarantee that you are alerted to any single
failure (of a production service, a system or a monitor). Multiple
simultaneous failures could still produce a black spot. It is possible to
design a system that will discover multiple simultaneous failures, but it
takes more effort and resources.

[1] Sometimes I wonder if the people developing certain systems have any
operational experience at all.
[2] A system designed to fail over on certain conditions may fail to fail
over, ah, so to speak.

Cheers, Rob

-- 
Email: rob...@timetraveller.org          Linux counter ID #16440
IRC: Solver (OFTC, Freenode)             Web: http://www.practicalsysadmin.com
Director, Software in the Public Interest (http://spi-inc.org/)
Free Open Source: The revolution that quietly changed the world
"One ought not to believe anything, save that which can be proven by
nature and the force of reason" -- Frederick II (26 December 1194 - 13
December 1250)
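The cross-monitoring arrangement in [2] needs nothing exotic: each monitor simply includes the other in its target list. A toy sketch in Python (hostnames and ports are invented for illustration; a real deployment would check service responses, not just TCP reachability):

```python
import socket

def tcp_alive(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds.
    A production check would also validate the service's response."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Two independent monitors, each watching the production services *and*
# the other monitor, so the failure of either monitor is itself detected
# and alerted on by the survivor.
MONITOR_A_TARGETS = [("www.example.net", 443), ("monitor-b.example.net", 5666)]
MONITOR_B_TARGETS = [("www.example.net", 443), ("monitor-a.example.net", 5666)]
```

The key design point is independence: the two monitors should not share a host, a power feed or, ideally, a network path, or a single failure can still blind both.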
Re: Internet Edge and Defense in Depth
On Tue, 6 Dec 2011, Holmes, David A wrote:

> Some firewall vendors are proposing to collapse all Internet edge
> functions into a single device (border router, firewall, IPS, caching
> engine, proxy, etc.). A general Internet edge design principle has been
> the defense in depth concept. Is anyone collapsing all Internet edge
> functions into one device?

Hi David. A principle of network firewall design has long been that you
want to minimise the services (proxy, etc.) running there, as they can be
a vector for attack against the firewall itself. In the end this is about
risk analysis. In most cases I would recommend against loading the
firewall with additional functionality, for a variety of reasons. In some
cases it may make sense to do so.

This is completely separate from the question of whether servers should
even have a firewall or IPS in front of them. That's another (interesting)
discussion :)

Cheers, Rob
Re: OT: VM slicing and dicing
On Fri, 12 Nov 2010, Charles N Wyble wrote:

> I use Proxmox exclusively and am very happy with it. It's a great
> product. You might need to do a bit of CLI work if you want to support
> multiple VLANs or other slightly advanced features. I'm lazy but I
> might get around to patching the web UI at some point to support the
> stuff I do manually. The OpenVZ docs are very clear and the process is
> pretty trivial to do on the CLI.

I've used OpenVZ at many sites and been really happy with it. Managing
OpenVZ from the CLI is easy; I wrote wrapper scripts to perform the
desired functions. It has extensive documentation available. From a
documentation point of view it really stands out among OSS and even
commercial apps.

Cheers, Rob
Re: Terry Childs conviction
On Thu, 29 Apr 2010, William Pitcock wrote:

> Same difference, he still committed a crime and anyone who is defending
> him seems to not understand this. Whatever we want to call that crime,
> it's still a crime, and he got the appropriate penalty.

Hi William. I have to agree that it does seem he committed an offence, but
we will have to agree to disagree on the penalty. Two years (or more) in
jail for withholding a password for one week seems disproportionate to me.
I wonder how expensive the trial was.

Rob
Re: Rate of growth on IPv6 not fast enough?
On Mon, 19 Apr 2010, Owen DeLong wrote:

> I'm looking at both, and, frankly, LSN (large scale NAT) is not as
> trivial as you think. I actually talk to and work with some of these
> very large providers on a regular basis. None of them is looking
> forward to deploying LSN with anything but dread. The support issues,
> user experience, CALEA problems, and other issues with LSN are huge.
> None of them that I am aware of are considering using LSN to free up
> addresses to hand over to hosting providers.

Well said. I've been pondering LSN lately. I think people who haven't been
involved in large-scale service changes or migrations can't appreciate
just how many unanticipated edge cases can appear and blindside a project.
I expect that deploying IPv6 will be far less problematic than deploying
LSN for a large ISP.

Rob
Re: What is The Internet TCP/IP or UNIX-to-UNIX ?
On Sun, 4 Apr 2010, Jim Burwell wrote:

> I agree. I remember back in the 80s when I first got access to UseNet
> and UUCP based email, thinking and saying things like "the net will
> change the world", because for the first time people from all over the
> globe were communicating fairly openly and inexpensively, and somehow
> the internet and UUCP seemed to come in under the radar back then. I
> had more than a few people scoff at me for thinking that way though.
> :-)

I know exactly what you mean. I first got online in 1992 (late by the
standards of some around here :) ). Right away I knew it was going to
change everything. I tried to explain this to people but mostly got a
blank stare in return. These days you occasionally hear people say that no
one predicted the explosive impact of the Internet. I did, and so did a
lot of others.

I will say that I expected it was going to take us longer to get to this
point (near-ubiquitous network access in the developed world, and network
technologies widely adopted by government and business to deliver
services). In any case, we're at the beginning of this revolution, not the
end. I expect it will take several centuries for the full impact of the
information revolution to become evident.

Cheers, Rob
Re: legacy /8
On Fri, 2 Apr 2010, jim deleskie wrote:

> Just like 640k of memory :)

But what if I said 640 petabytes will be more than anyone will ever need?
The future might prove me wrong, but it probably won't happen for a long
time. That's a better analogy for IPv6.

IPv6 could have included a larger address space, or it could have been
allocated differently[1], but the reality is we have no way of knowing how
future generations will use the network. I fully expect that advances in
computing will render IPv6 obsolete well before the address space is
exhausted, though perhaps this won't occur for 100 years or more. Future
generations may well have to go back to the drawing board to develop new
protocols to best utilise the technology they have available at the time.

[1] A 48-bit host identifier rather than 64-bit, for example.

Rob

> On Fri, Apr 2, 2010 at 9:25 PM, Randy Bush <ra...@psg.com> wrote:
>> IPv6 as effectively reindroduced classful addressing. but it's not
>> gonna be a problem this time, right? after all, 32^h^h128^h^h^h64 bits
>> is more than we will ever need, right? randy
Re: Time for a lounge mailing list
On Wed, 31 Mar 2010, Michael Dillon wrote:

> Then we can just remind people to take the non technical discussions to
> the social networking site.

I find that mailing lists flow much better than the discussions on social
networking sites. The tools available to manage messages on forums and
social networking sites are typically very primitive. There's a reason
that the lion's share of technical discussions occur on mailing lists and
over IRC (especially in the OSS world).

nanog-chat or nanog-lounge lists sound great to me.

Cheers, Rob
Re: Time for a lounge mailing list
On Wed, 31 Mar 2010, Jorge Amodio wrote:

> Interesting idea. Then we'll start posting at nanog that there must be
> some operational problem because nobody is posting on the other list.

-lounge and -chat lists are common in many technical organisations I'm a
member of. They are generally used the way they are intended to be used.
Tech types tend to be a lot better than the average person at following
list rules, in my experience.

Cheers, Rob
Re: IP4 Space
On Sat, 6 Mar 2010, Shon Elliott wrote:

> I would love to move to IPv6. However, the IPv6 addressing, I have to
> say, is really tough to remember and understand for most people.

Hi Shon. But we have a system in place which allows non-technical people
to ignore IP addresses entirely. Up to this point the ease of remembering
IPv4 addresses has allowed their use to leak out into the user community.
It is quite common today for users to ssh to servers by IP address in many
organisations. I consider this an historical accident. When setting up or
upgrading corporate networks (even for small companies) I use split-view
DNS. I like to point out that once IPv6 is mainstream, no one is going to
remember IP addresses ever again :)

> Whereas a four number dotted quad was easy to remember, an IPv6
> address.. not so much. I wished they had made that a little easier when
> they were drafting up the protocol specs.

I don't believe making it easier for humans to remember or understand IP
addresses would have been a good design criterion. IP addresses are
principally designed for computers to understand. We humans have a
parallel structure of names that we can use. In any case, humans got a
break with the :: notation in IPv6 :)

> Basically, you need technical knowledge to even understand how the IP
> address is split up. I wished ARIN would waive the fee for service

That's actually still true in IPv4. A knowledgeable user may be able to
ping an IP address, but few of them will understand the concept of a
subnet.

Cheers, Rob
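The split-view DNS arrangement I mean can be sketched in BIND along these lines (zone names, client ranges and file paths are purely illustrative):

```
// Internal clients resolve names in corp.example.com to RFC 1918
// addresses; everyone else sees only the public zone data. Users never
// need to touch an IP address, v4 or v6.
view "internal" {
    match-clients { 10.0.0.0/8; 192.168.0.0/16; };
    zone "corp.example.com" {
        type master;
        file "internal/corp.example.com.zone";
    };
};

view "external" {
    match-clients { any; };
    zone "corp.example.com" {
        type master;
        file "external/corp.example.com.zone";
    };
};
```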
Re: Email Portability Approved by Knesset Committee
On Mon, 22 Feb 2010, James Jones wrote:

> Why does this seem like a really bad idea?

While I think the principle is noble, there are operational problems:

1) A large and increasing quantity of email will be forwarded between
Israeli ISPs, loading their networks with traffic that could have been
avoided.

2) Every time someone changes ISP and wants to continue using this
address, they will need to notify their original ISP, with whom they may
not have had a business relationship for many years. This will be a
significant operational challenge, I expect. How do you confirm the person
notifying you is the real owner of the address, for example?

IMHO it would have been better to require the ISPs to forward the email
for a reasonable period of time (say 3 months) to allow the user to make
the relevant notifications (or just stop using an ISP-bound email
address).

Unfortunately the links cited are in Hebrew so I'm only going on Gadi's
report here.

Cheers, Rob
Re: Email Portability Approved by Knesset Committee
On Mon, 22 Feb 2010, Dorn Hetzel wrote:

> I am sure the various carriers faced with the onset of Local Number
> Portability and WLNP in this part of the world would have been happy to
> escape with only forwarding phone calls for 3 months.

I'm sure they would :) I know very little of the workings of cell (or
landline) phone networks, but I expect that if it worked the same way
Internet routing does then the telco networks would have had serious
problems under the weight of rerouted calls.

> I would watch out for this idea, it might actually catch on in various
> places, warts and all...

OTOH if it fails in a screaming heap in Israel it may show everyone else
why it is a bad idea :)

Cheers, Rob
Re: Email Portability Approved by Knesset Committee
On Mon, 22 Feb 2010, Larry Sheldon wrote:

> Believe it or not, some people have email addresses that are not
> intrinsically ISP addresses.

Indeed. I'm sure pretty much everyone here knows why ISPs offer email
services.

> My reaction, if I were in a position to do so, would be to stop
> providing email addresses.

Yes, this may well be a sensible business decision.

>> Unfortunately the links cited are in Hebrew so I'm only going on
>> Gadi's report here.
>
> Why is that relevant?

Because I don't speak Hebrew. The statement is a disclaimer that I need to
rely on Gadi's summary rather than reading the thing in detail for myself,
as I would have preferred to do.

Cheers, Rob
Re: I don't need no stinking firewall!
On Tue, 5 Jan 2010, Dobbins, Roland wrote:

> In the most basic terms, a stateful firewall performs bidirectional
> classification of communications between nodes, and makes a pass/fail
> determination on each packet based on a) whether or not a bidirectional
> communications session is already open between the nodes and b) any
> policy rules configured on the firewall as to what ports/protocols
> should be allowed between said nodes. Stateful firewalls make good
> sense in front of machines which are primarily clients; the stateful
> inspection part keeps unsolicited packets away from the clients.
> Stateful firewalls make absolutely no sense in front of servers, given
> that by definition, every packet coming into the server is unsolicited
> (some protocols like ftp work a bit differently in that there're
> multiple bidirectional/omnidirectional communications sessions, but the
> key is that the initial connection is always unsolicited). Putting
> firewalls in front of servers is a Really Bad Idea - besides the

Hi Roland. I disagree strongly with this position.

> fact that the stateful inspection premise doesn't apply (see above),

The problem is that your premise is wrong. Stateful firewalls (hereafter
just called firewalls) offer several advantages. This list is not
necessarily exhaustive.

(1) Security in depth. In an ideal world every packet arriving at a server
would be for a port that is intended to be open and listening.
Unfortunately ports can be opened unintentionally on servers in several
ways: sysadmin error, package management systems pulling in an extra
package which starts a service, etc. By having a firewall in front of the
server we gain security in depth against errors like this.

(2) Centralised management of access controls. Many services should only
be open to a certain set of source addresses.
While this could be managed on each server, we may find that some
applications don't support this well, and management of access controls is
then spread across any number of management interfaces. Using a firewall
for network access control reduces the management overhead and the chances
of error. Even if network access control is managed on the server, doing
it on the firewall offers additional security in depth.

(3) Outbound access controls. In many cases we want to stop certain types
of outbound traffic. This may contain an intrusion and prevent attacks
against other hosts owned by the organisation or by other organisations.
Trying to do outbound access control on the server won't help: if the
server is compromised, the attacker can likely get around it.

(4) Rate limiting. The ability to rate limit incoming and outgoing data
can prevent certain sorts of DoSes.

(5) Signature-based blocking. Modern firewalls can be tied to intrusion
prevention systems which will 'raise the shields' in the face of certain
attacks. Many exploits require repeated probing, and this provides a way
to stop the attack before it is successful.

> rendering the stateful firewall superfluous, even the biggest, baddest
> firewalls out there can be easily taken down via state-table
> exhaustion;

Do you have any evidence to support this assertion? You've just asserted
that all firewalls have a specific vulnerability. It isn't even possible
to know the complete set of architectures (hardware/software) used for
firewalls, so I don't see how you can assert they all have this
vulnerability. In any case, my experience tells me that a DDoS will be
successful due to exhaustion of available network capacity before it
exhausts the state table on the firewall.

> an attacker can craft enough programmatically-generated, well-formed
> traffic which conforms to the firewall policies to 'crowd out'
> legitimate traffic, thus DoSing the server.
> Additionally, the firewall can be made to collapse far quicker than the
> server itself would collapse, as the overhead on the state-tracking is
> less than what the server itself could handle on its own.

Again, I don't believe such a global claim can be made given the wide
variety of architectures used for firewalls.

Cheers, Rob
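Returning to point (4) above: the rate limiting most firewalls offer is some variant of a token bucket. A minimal sketch in Python (a real firewall keeps one bucket per source address or flow, in the fast path; names here are illustrative):

```python
class TokenBucket:
    """Token-bucket rate limiter: tokens accrue at 'rate' per second up
    to a depth of 'burst'; each packet or new connection consumes one
    token, and traffic arriving with the bucket empty is dropped."""

    def __init__(self, rate, burst):
        self.rate = float(rate)    # tokens added per second
        self.burst = float(burst)  # maximum bucket depth
        self.tokens = float(burst)
        self.last = 0.0            # timestamp of the previous decision

    def allow(self, now):
        """Return True if a packet arriving at time 'now' may pass."""
        # Credit tokens for the time elapsed since the last packet.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

The burst parameter is what lets legitimate traffic spike briefly while a sustained flood is still held to the configured average rate.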