> On Apr 21, 2018, at 3:48 PM, Mark Andrews wrote:
>
> You have a logic fail. This fails because it STILL depends on the DNS for
> the zone working. If the DNS fails to that extent, everything fails.

I was addressing the single application endpoint point-of-failure. But from a practical standpoint, you're probably right :-(

--lyndon
> On Apr 21, 2018, at 2:47 PM, Keith Medcalf wrote:
>
> Actually, I doubt that there are any "real" people with vanity domains
> behind this move. I suspect that it is the scammers and spammers who want to
> hide their information for very good reason.
>
> And of course, the "powers of the EU" seem to be in cahoots with those
> scammers and spammers (if they are not the scammers and spammers who
> themselves are wanting to hide).

I also think more fine-grained control of the data would satisfy many needs. E.g., for my own domains, for the purposes of contacting me to deal with abuse (or insanely large but unlikely buyout offers), all you need is an email contact address. I can put my personal or company name out there, along with a working email address, and still not hand over my home address, phone numbers, etc.

For domains backing consumer-facing companies, they would likely want to expose more information. There is no one-size-fits-all here. And that's where the EU is losing credibility.

--lyndon
> On Apr 21, 2018, at 2:27 PM, Lyndon Nerenberg <lyn...@orthanc.ca> wrote:
>
>> But backup and failover are reasonably well understood technologies
>> where one cares. Registrars could for example cache copies of those
>> zone records and act as failover whois servers.

Sorry! I left out the last line that was the point of my diatribe. Using SRV to point to multiple domain-specific whois servers eliminates the caching problem Barry raised.
> On Apr 21, 2018, at 1:58 PM, b...@theworld.com wrote:
>
> That's actually an excellent point and counterpoint to my suggestion
> to move the WHOIS information into DNS RRs.
>
> But backup and failover are reasonably well understood technologies
> where one cares. Registrars could for example cache copies of those
> zone records and act as failover whois servers.

Instead of putting the contact info directly into the DNS, put pointers to the locations of the data instead. I.e. whois moves off dedicated ports and hardwired servers and into zone-controlled SRV records:

  _whois._tcp.orthanc.ca SRV 0 0 43 orthanc.ca.
                         SRV 5 0 43 backup.otherdomain.example.com.

This gives each zone control of the information they want to export (by directing whois(1) to what they consider to be authoritative servers). The domain owners themselves could control the information they choose to expose to the public, through the SRV records, and the information they choose to publish in the whois servers those records point at. If the domain owner is happy with their (say) registrar providing that information, they would just point the appropriate SRV record at the registrar. This is no different from how people handle email outsourcing via MX records.

The idea that whois is in any way authoritative is long gone. Those who want to hide have been able to do that for ages. (I think I pay $15/year to mask some of the domains I control.) But for law enforcement, a warrant will always turn up the payment information used to register a domain, should the constabulary want to find that information out. And for court proceedings, whois data is useless. (I speak from $WORK experience.)

--lyndon
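To make the client-side mechanics concrete, here is a minimal sketch of how a whois client might build the SRV owner name and order the answers. The record values are made up, not live DNS data, and the ordering sorts by priority and weight only, ignoring RFC 2782's randomized weight-based selection for brevity.

```python
# Sketch: how a whois client might choose a server from _whois._tcp SRV
# records. Record data below is hypothetical, not fetched from the DNS.

def srv_name(domain):
    """Build the SRV owner name a whois client would look up."""
    return "_whois._tcp." + domain.rstrip(".") + "."

def pick_servers(records):
    """records: list of (priority, weight, port, target) tuples.
    Return (target, port) pairs in preference order: lowest priority
    first, highest weight first within a priority class.  (RFC 2782
    actually randomizes within a priority class proportionally to
    weight; a plain sort is enough to show the idea.)"""
    ordered = sorted(records, key=lambda r: (r[0], -r[1]))
    return [(target, port) for _pri, _w, port, target in ordered]

records = [
    (5, 0, 43, "backup.otherdomain.example.com."),
    (0, 0, 43, "orthanc.ca."),
]

print(srv_name("orthanc.ca"))   # _whois._tcp.orthanc.ca.
for target, port in pick_servers(records):
    print(target, port)
```

A real client would then try each target in order on the given port until one answers, exactly as MTAs walk MX records.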
> On Dec 28, 2017, at 7:50 PM, valdis.kletni...@vt.edu wrote:
>
> Comcast is passing out CPE that provides a subnet for the actual subscriber,
> and another one for *other* Comcast roaming customers. And somehow this
> works for a company the size of Comcast without the customers needing to know
> how to set them up.

This sounds like the Shaw "Wifi to Go" routers they (Shaw) push out. Agree to make your Shaw internet connection "shared" and they drop in a hotspot that provides a second wifi network that subscribers can connect to, in exchange for a discount on your local internet bill. (Separate subnet.)

Of course, Shaw STILL doesn't do IPv6 ... ;-P
> On Dec 28, 2017, at 7:26 PM, Brock Tice wrote:
>
> Most of our customers only have 2-5 devices. I know this is not the case
> in most of America but we are quite rural and for many people they've
> never had better than 1.5Mbps DSL until we install service at their
> location. Most of them have no idea what a subnet is. Let us say that
> over the next ten years they get quite savvy and decide to isolate their
> wireless clients, some public servers, their IoT devices, and their
> security cameras. We have given them a /52 which contains 4096 /64s. So,
> most likely, they will use one of those for their LAN and be done. In
> case they decide to make several VLANs or whatever they have used 4 /64s
> and they have 4092 left.

And that's where you're missing the IPv6 addressing concept. You are thinking in terms of "devices." That's not how the v6 address space was planned out. Future address (really, subnet) consumption will be driven by the networks behind the CPE, not the number of devices behind the CPE. That's why there is such a huge address space allocation to each end point. What people do in the privacy of their own routing domain is their own business.

As I mentioned in an earlier post to the list, think of IPv6 as a /64 address space; ignore the noise to the right. ARIN allots you a /48 for each of your customer end points. That means, as an ISP, you get a /32 right away. That covers 64K customer end points out of the gate. If you need a larger allocation, you can get that immediately.

So there is no need to carve up end point /48s. And you don't want to. It just makes more work configuring your routers. Your monitoring software will assume a /48 per CPE, so you'll have to explicitly configure around that, etc. All you are doing is making work for yourself. And messing things up for your customers.

--lyndon
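As a quick sanity check of the allocation arithmetic above (a /48 per customer end point out of an ISP /32), the Python stdlib can do the bookkeeping; 2001:db8::/32, the documentation prefix, stands in for a real ISP allocation.

```python
# Check: how many customer /48s fit in an ISP /32, and what the first
# few customer prefixes look like.  2001:db8::/32 is the documentation
# prefix, used here as a stand-in for a real allocation.
import ipaddress

isp = ipaddress.ip_network("2001:db8::/32")

customers_per_32 = 2 ** (48 - 32)
print(customers_per_32)   # 65536 -- the "64K customer end points"

# Carving off the first couple of customer /48s:
it = isp.subnets(new_prefix=48)
print(next(it))   # 2001:db8::/48
print(next(it))   # 2001:db8:1::/48
```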
> On Dec 28, 2017, at 7:28 PM, Tony Wicks wrote:
>
> I think its time you all had a bit of a holiday break and stopped thinking
> of IP networking for a little while, Just saying...

Nah. This is a useful conversation (and argument) to have.
> On Dec 28, 2017, at 6:54 PM, Ricky Beam wrote:
>
> Home networks with multiple LANs??? Never going to happen; people don't know
> how to set them up, and there's little technical need for it.

Again, you are assuming you know how people will use networks forever. Stop overthinking things, and just concentrate on routing the packets.

Your ONLY concern as a network operator is the number of routing table entries you need to carry. The size of my netmask (giggity) is irrelevant. Ship me a /48, /52, /64, /120, doesn't matter. It's a single RTE as far as you are concerned. (Again, unless you can't $afford renting the extra address space from ARIN. In which case you don't have a network infrastructure problem ...)
Peripherally, it's worth noting that, in far less time than we have spent not migrating from IPv4 to IPv6, the UK moved from 7-digit to 11-digit telephone numbers. If that's not embarrassing ...

--lyndon
> On Dec 28, 2017, at 6:11 PM, Scott Weeks wrote:
>
> All I was trying to say is there're going to be things
> not thought of yet that will chew up address space
> faster than ever before now that everyone believes it's
> essentially inexhaustible. And, I expect, sooner than
> imagined.

If that's the case, it will be because there were few restrictions placed upon that address space. And if some genius comes up with something that burns through all the IPv6 address space, you can rest assured the market (and not the IETF) will come up with a replacement that extends things beyond 128 bits in a ripping big hurry.
> :: Isn't this the utopia we've been seeking out?
>
> I like that one! :-)

Seriously. If we run out of networks while handing out /48s, by migrating everything to HTTPS we can claw back the 16-bit 'port' field and reassign it as part of the 144-bit IPv6.1 address space.

Mind you, the FCC will likely auction off those extra 16 bits to Amazon, so you'll need a Prime membership to use them.

--lyndon
> On Dec 28, 2017, at 4:57 PM, Lyndon Nerenberg <lyn...@orthanc.ca> wrote:
>
> Instead, think about how we can carve up a 2^61 address space (based on the
> current /3 active global allocation pool) between 2^32 people (Earth's
> current population)

Of course, I screwed up the numbers (thanks Javier for pointing this out). Earth's population is closer to 2^33 than 2^32, so adjust the netmasks accordingly.
> On Dec 28, 2017, at 3:28 PM, Brock Tice wrote:
>
> We are currently handing out /52s to customers. Based on a reasonable
> sparse allocation scheme that would account for future growth that
> seemed like the best option.

Could you detail the reasoning behind your allocation scheme? I.e., what are the assumptions you're making about customers deploying hardware? How will they need those devices isolated? What data fed the model you used to come up with those numbers?

I ask because I have seen many ISPs advocate for smaller-than-/48 customer allocations, but I haven't seen anyone present the model they used to come up with those numbers. I really am curious to know the assumptions and rationale behind the various allocation schemes ISPs are coming up with.

> I can't really see how /52 is too small for a residential customer. I
> know originally it was supposed to be /48 but after doing a bit of
> reading I think many people have admitted there is room for nuance.

What reading? Can you provide pointers to the documents you were reading? Again, I'm curious to understand how and why ISPs are making these decisions.

Also, the fact that you "can't see it" doesn't mean they (or someone else) can't or won't. An ISP's job is to shovel packets around. No more, no less.

> Do you think I could go to ARIN and say, well, we haven't used hardly
> any of this but based on such-and-such allocation scheme, it would be
> much better if you gave us a /32 instead of a /36?

Hardly used any of what? Are you talking about the density of customer hosts inside each of these /64 subnets? This is where I think the biggest misunderstandings of the IPv6 allocation strategy come from. Ask yourself this: do you think the intention was to have 2^64 hosts on a single LAN segment? Can you imagine any practical switch fabric that could handle that? (I'd be curious to know the size of the largest - in the number-of-hosts sense - 10-gigabit Ethernet LAN anyone has deployed.)
The number of hosts per /64 will always be limited by the associated switch hardware. This will be true until the universe collapses, I suspect.

> Also, does anyone know whether ARIN is using sparse allocation, such
> that if we go back later and ask for more they will just increase the
> size of our allocation starting from the same point?

You could just ask them. But the policies for ISP allocations (last time I read them) make it pretty straightforward for you to get a block that fits your growth needs for the foreseeable future.†

But really, if you are worried about having to advertise, say, eight IPv6 prefixes to the DFZ for all your allocations, haven't you just argued against the fragmented /52 allocations to your downstream customers?

You need to treat IPv6 addresses as being 64 bits long. Those extra 64 bits on the right are just noise – ignore them. Instead, think about how we can carve up a 2^61 address space (based on the current /3 active global allocation pool) between 2^32 people (Earth's current population), each having 2^16 devices, needing their own network. That makes for a densely allocated /48 for each person on the planet. (Coincidence?) But when we get to the point of filling up that /3, we still have five more /3s to work with.

Now think about scaling. If the population doubles, we're now down to four spare /3s. If that doubled population doubles the number of devices, we're down to two spare /3s. If the population doubles again, there will be no civilization left, let alone an Internet. Etc.

So realistically, the current address space allocation policies can handle a doubling of the planet's population, with each person having a quarter of a million addressable nodes. Each node having its own /64 to address individual endpoints within whatever that 'node' represents. Just think, 2^64 port-443 HTTPS servers per "thing." Isn't this the utopia we've been seeking out?
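For anyone who wants to audit the arithmetic above mechanically, Python's stdlib can do it; the numbers below follow the post's assumptions (2000::/3 as the currently active global unicast pool).

```python
# Back-of-envelope check of the /3 pool arithmetic using the stdlib.
import ipaddress

pool = ipaddress.ip_network("2000::/3")  # active global unicast pool

subnets_per_48 = 2 ** (64 - 48)       # /64 networks inside one /48
fortyeights_in_pool = 2 ** (48 - 3)   # /48 sites inside the /3

print(pool.num_addresses == 2 ** 125)  # True: 128 - 3 = 125 bits
print(subnets_per_48)                  # 65536
print(fortyeights_in_pool)             # 35184372088832 (2^45)

# A /48 per person at ~2^33 people: the /3 holds 2^12 = 4096 times
# that many /48s, before touching the other five /3s.
print(fortyeights_in_pool // 2 ** 33)  # 4096
```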
I'm pretty confident IPv6 as a protocol (and, really, IP as a networking concept) will be dead *long* before we run out of address space. Not because we exhaust the bits allocated to hosts, or subnets, or ports; but because the current topology of routed networks won't fit with what we want or need to do in the future. (My prediction is that everything will move to ad-hoc meshes, with no control planes at all. But that's completely out of scope for this discussion.)

--lyndon

† https://www.arin.net/resources/ipv6_planning.html states that ISP allocations for larger-than-/32 block sizes are based on a /48-per-customer-site allocation policy.
> On Dec 28, 2017, at 2:31 PM, Thomas Bellman wrote:
>
> My problem with the IPv6 addressing scheme is not the waste of 64 bits
> for the interface identifier, but the lack of bits for the subnet id.
> 16 bits (as you normally get a /48) is not much for a semi-large
> organization, and will force many to have a dense address plan, handing
> out just one or a few subnets at a time, resulting in a patch-work of
> allocations. 24 bits for subnet id would be more usable.

If you need (and can justify) more than a site /48, you can easily get that.

> Consider e.g. a university or company campus. There are probably at
> least 16 departments, so I would like to use 8 bits as department id.
> Several departments are likely to have offices on more than one floor,
> or in more than one building, so I would like to let them have 4 bits
> to specify location, and then 8 bits to specify office/workplace within
> each location. And allow them to hand out 16 subnets per workplace.
> That adds up to 24 bits. So a /40 would be nice, not a /48.

IPv6 prefixes are not databases. Coding this sort of thing into your address space is silly. You can (and should) track this info externally. It's pretty simple to do this using easily parsable text files. If you encode policy into your address space like this, sooner or later you will find yourself painted into a corner you can't get out of.

Instead, carve up the 16 bits of subnet space by allocating network bits from left to right. This gives you the ability to slice your subnet space into variable-length allocations. When you allocate in the subnet space, do so by bisecting the current network prefix. That gives you room to grow your /64s into something larger, while keeping the routing simple.

This is the same thing we've been doing to carve up IPv4 space for years: number hosts right to left, number networks left to right. For IPv6, just s;hosts;/64s;.

--lyndon
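A minimal illustration of the left-to-right bisection scheme just described, using Python's stdlib and the 2001:db8::/48 documentation prefix (a stand-in, not anyone's real allocation):

```python
# Sketch of "bisect the current prefix" allocation inside the 16 bits
# of subnet space in a /48.  Each allocation takes the left half of
# the free space; the right half stays in reserve, so every allocated
# block has room to grow rightward later.
import ipaddress

site = ipaddress.ip_network("2001:db8::/48")  # documentation prefix

# First bisection: the /48 splits into two /49s.
left, right = site.subnets(prefixlen_diff=1)
print(left)   # 2001:db8::/49       -> first allocation
print(right)  # 2001:db8:0:8000::/49 -> held in reserve

# Next allocation bisects the reserved half in turn.
a, b = right.subnets(prefixlen_diff=1)
print(a)      # 2001:db8:0:8000::/50 -> second allocation
print(b)      # 2001:db8:0:c000::/50 -> still in reserve
```

Note how the network bits fill in from the left (0x8000, then 0xc000), the mirror image of numbering hosts from the right in IPv4.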
> On Dec 4, 2017, at 3:19 AM, Edwin Pers wrote:
>
> As an anecdotal aside, approx. 70% of incoming portscanners/rdp bots/ssh
> bots/etc that hit the firewalls at my sites are coming from AWS.
> I used to send abuse emails but eventually gave up after receiving nothing
> beyond "well, aws ip's are dynamic/shared so we can't help you"

Last week we found out that Helpscout sends email from AWS servers. Thank you, Helpscout, for forcing me to lift the AWS blocks on my incoming MTAs, that were cutting down my incoming spam scanning load by a factor of two. At least.

Note that I work for an email hosting company, which makes this infinitely more annoying. A factor of two, in this case, is a non-trivial number.

--lyndon
> On Oct 5, 2017, at 4:52 PM, Steve Feldman wrote:
>
> I have a vague recollection of parts of 192.168.0.0/16 being used as default
> addresses on early Sun systems. If that's actually true, it might explain
> that choice.

192.9.200.X rings a bell; but those might have been the example addresses they used in the SunOS 3.X documentation.
> On Sep 20, 2017, at 6:40 PM, Sean Donelan wrote:
>
> Some ham radio operators have been verified as operating from Dominica. Its
> an unfortunate, but necessary thing that needs to be verified during disaster
> communications.

I'm not clear what you're getting at here. Are you saying people are faking operating from the islands? That seems unlikely. Basic RDF is going to tell you in short order where they are transmitting from.

And for the smaller islands, the local operators are well known in the region, so it seems unlikely someone would be able to set up shop in, say, Tennessee and claim to be a new ham who just moved to Anguilla last week.

--lyndon
> On Aug 27, 2016, at 6:46 PM, Matt Palmer wrote:
>
> On Sat, Aug 27, 2016 at 01:25:42AM -, John Levine wrote:
>> In article you write:
>>> I was working within the limits of what I had available.
>>
>> Here's the subscription page for mailop. It's got about as odd
>> a mix of people as nanog, ranging from people with single user linux
>> machines to people who run some of the largest mail systems in
>> the world, including Gmail:
>>
>> https://chilli.nosignal.org/cgi-bin/mailman/listinfo/mailop
>
> I know they're mailops, and not tlsops, but surely presenting a cert that
> didn't expire six months ago isn't beyond the site admin's capabilities?

I tried again, ten months later. Still broken :-(

Is there a replacement site I'm missing out on?
> On Feb 23, 2017, at 6:10 PM, Ricky Beam wrote:
>
> When you can do that in the timespan of weeks or days, get back to me.

Stop thinking in the context of bits of fake news on your phone. Start thinking in the context of trans-national agreements that will soon be signed by such keys.

--lyndon
Canada should just have Comcast (or is it "Xfinity"?) provide nation-wide Internet service as a for-profit monopoly. Just as long as we have *someone* to Telus whom to choose.
> On Oct 3, 2016, at 6:52 PM, Lyndon Nerenberg <lyn...@orthanc.ca> wrote:
>
> It's the closed software that is fscking everything up right now. A little
> sunshine on the code base will go a long way towards those people not losing
> their Ferraris after all.

Or coming from a more legalistic view, if they lock things down that hard, they cannot possibly blame anyone else for having "rooted" the gear, therefore no passing the buck. They would have to admit that it was their - and only their - code that was responsible for inflicting the damages.

I've been in the tech biz for 30+ years, and have worked for a wide range of organizations over that time. The only common denominator across them all (small, large, and everything between - commercial and not) is that rapid-response, high-level organizational change ONLY happens when the executives see the possibility of an imminent, significant, personal loss. That might be monetary loss, or loss of reputation. But it must be personally hurtful. When the reaper appears on the horizon, it's amazing how quickly they see the path to redemption.

The sooner we all admit this is not a *technical* problem, the sooner we will eradicate it.

--lyndon
> On Oct 3, 2016, at 6:33 PM, Matthew Petach wrote:
>
> If you hold the executives of the hardware manufacturer
> responsible for the software running on their devices,
> then the next generation of hardware from every
> manufacturer is going to be hardware locked to
> ONLY run their software. No OpenWRT, no Tomato,
> no third party software that could be compromised
> and leave them holding the liability bag.

It's the closed software that is fscking everything up right now. A little sunshine on the code base will go a long way towards those people not losing their Ferraris after all.
> On Oct 3, 2016, at 5:39 PM, Jay R. Ashworth wrote:
>
> You're not familiar with CPSC mandatory recalls, are you?

I'm not sure how you could make the case that a compromised DVR, e.g., directly creates a risk of physical injury to a person. Without that, I don't see how the CPSA would apply.

But even if a mandatory recall was made under some law, how many of those devices do you think would realistically be returned or exchanged? And what percentage of those devices would fall under the jurisdiction of any one country's laws?

The only way to stop this sort of thing once and for all is to make it punitively costly to the humans at the helm of the corporations selling this crap in the first place. Under corporate law, this almost always means the directors. Only when they start losing their homes/yachts/Jaguars, or start spending some quality time in jail, will this problem go away.

Of course, this does require governments to grow some balls :-P

--lyndon
This is where device profiles could help. If enough devices register profiles with the local router, at some point the router's default could be closed, so devices with no profile can't talk to the outside.

That would be nice, but a manufacturer who can't be bothered to take even the most basic security precautions certainly isn't going to implement this, either.

The only cure to this will be changing the law so that the directors of the companies that ship massively insecure devices like these are personally liable for all the financial loss attributed to their products. Bankrupt a few companies' boards of directors and you'll start seeing things change in a hurry.

--lyndon
But that does not remove those devices from the network. That ship has sailed.
In thinking over the last DDoS involving IoT devices, I think we don't have a good technical solution to the problem. Cutting off people with defective devices that they don't understand, and have little control over, is an action that makes sense, but hurts the innocent. "Hey, Grandma, did you know your TV set is hurting the Internet?"

The way this will get solved is for a couple of large ISPs and DDoS targets to sue a few of these IoT device manufacturers into oblivion.

--lyndon
> On Oct 1, 2016, at 8:37 PM, Hugo Slabbert wrote:
>
> So, kudos, Rogers Wireless!

This has also been live on Rogers' Fido sub-brand for a while now. 2605:8d80:484:: is live in Vancouver.

--lyndon
> On Aug 31, 2016, at 6:36 PM, Matt Palmer wrote:
>
> Thanks, Netscape. Great ecosystem you built.

Nobody at that time had a clue how this environment was going to scale, let alone what the wide-ranging security issues would be.

And where were you back then, not saving us from our erroneous path ...
Is there a Yahoo MTA admin listening who can help diagnose what might be a network ACL block to one of our SMTP server subnets? Thanks, --lyndon
> In other words, it's not just Netflix that has this problem... No, it's Netflix that has the problem. Audible actually gives a fuck about their customers.
> 1. C-band teleport in Singapore with SingTel IPs, remote terminals in
>    Afghanistan.
>
> 2. Ku-band teleport in Germany with IP space in an Intelsat /20, remote
>    terminal on the roof of a US government diplomatic facility in
>    $DEVELOPING_COUNTRY
>
> 3. Teleports in Miami with IP space that looks indistinguishable (in terms
>    of BGP-adjacency and traceroutes) from any other ISP in the metro Miami
>    area, providing services to small TDMA VSAT terminals in west Africa.
>
> 4. Things in Antarctica that are on the other end of a C-band SCPC pipe
>    from a large earth station in southern California.
>
> 5. Maritime Ku and C-band VSAT services with 2.5 meter size 3-axis tracking
>    antennas on top of cruise ships that could be literally anywhere in the
>    Mediterranean or Caribbean oceans, with the terrestrial end of the
>    connection in Switzerland, Italy, Maryland or Georgia.
>
> 6. Small pacific island nations that have no submarine fiber connectivity
>    and are now using o3b for IP backhaul, or C-band connectivity to teleports
>    in Australia.

Yes. All big Netflix customers.
> On Jun 3, 2016, at 4:59 PM, jim deleskie wrote:
>
> I don't suspect many folks that are outside of this list would likely have
> any idea how to set up a v6 tunnel. Those of us on the list, likely have a
> much greater ability to influence v6 adoption or not via day job
> deployments then Netflix supporting v6 tunnels or not.

In western Canada, Telus is on a big push to deploy IPv6. TekSavvy less so. But it's happening.

I cancelled my Netflix subscription last summer. I needed native IPv6 more than I needed Grace and Frankie. Which isn't to say I didn't want to watch Grace and Frankie more than having IPv6 access to machines I need to have access to in order to earn the money I need to pay to (not) watch Grace and Frankie ...

--lyndon
> [...] but I would also have doubts over running anything business critical
> on a RP2.

We use them as reverse terminal servers, for dhcp/tftp bootstrapping other machines, and soon, NTP. They are absolutely rock solid. There's something to be said for "no moving parts inside."

--lyndon
> On May 11, 2016, at 5:42 PM, Scott Weeks wrote:
>
> Wouldn't the buffers empty in a FIFO manner?

They will empty in whatever order the implementation decides to write them. But what's more important is the order in which the incoming packets are presented to the syslogd process.

If you're listening on TCP connections, the receive order is very much determined by the strategy the syslogd implementation uses to read from FDs with available data. I.e. elevator scan, lowest/highest first, circular queue, ... In a threaded implementation, your reader workers, buffer writers, etc., are all at the mercy of the threading implementation; it's difficult to control thread dispatch ordering at that level of granularity.

--lyndon
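A toy simulation of that point - the FD names and messages below are invented, and a real syslogd reads sockets rather than in-memory deques - showing that the same buffered data, drained under two different FD-scan orders, produces two different global orderings:

```python
# Toy model: per-connection buffers drained one message per ready FD,
# visiting FDs in a fixed scan order.  Different scan orders give
# different interleavings of the same data.
import copy
from collections import deque

buffers = {
    "fd3": deque(["a1", "a2"]),
    "fd4": deque(["b1"]),
    "fd5": deque(["c1", "c2"]),
}

def drain(bufs, order):
    """Round-robin-ish scan: one message per non-empty FD per pass,
    visiting FDs in the given order, until everything is drained."""
    out = []
    while any(bufs.values()):
        for fd in order:
            if bufs[fd]:
                out.append(bufs[fd].popleft())
    return out

low_first = drain(copy.deepcopy(buffers), ["fd3", "fd4", "fd5"])
high_first = drain(copy.deepcopy(buffers), ["fd5", "fd4", "fd3"])
print(low_first)   # ['a1', 'b1', 'c1', 'a2', 'c2']
print(high_first)  # ['c1', 'b1', 'a1', 'c2', 'a2']
```

Same messages, different global order; within each FD, FIFO order is preserved, which is the most any receiver can promise.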
> I'd get something like a 1U ATOM server ($120 eBay) with small SSD ($18).
> Run up your favorite FOSS OS, and conserver. For more than the single real
> serial port, you can most likely fit a USB hub inside the case still, and
> hang a number of USB serial dongles off it.

We use Raspberry Pi 2s with single- and 8-port USB serial dongles. Works like a charm, especially with tmux installed.

--lyndon
Are any of you pushing MACsec (802.1AE) out from your switches to the edge hosts? Vs. just running it on the network cross-connect fabric?

We have a scenario where, if we could MACsec encrypt those (switch <-> host) links, we could eliminate a lot of application level TLS. But searching for a list of PHYs that support this turned up a very thin set of chips, with most of them being several years old now.

Are people even using MACsec in anything other than an "encrypt cross connects between the cages" context? I would be very interested in chatting with anyone who has tried pushing this out from their switches to the connected hosts.

--lyndon
On Dec 3, 2015, at 6:28 PM, Lyndon Nerenberg <lyn...@orthanc.ca> wrote:

> Are we perhaps, finally, reaching the cusp where everyone has realized that
> if we all, collectively, tell the rodents to f*** off, they just might?

I should also mention that, despite their bluster, they can't keep it up for more than half an hour. By then, the upstream networks have figured it out and have null routed anything of consequence - far upstream. Meanwhile, back haul your traffic in via a private network and they won't be able to do shit to you. (E.g. the standard Cloudflare model.)

They are not as smart as they make themselves out to be. Don't let fear drive your decisions.

--lyndon
On Dec 3, 2015, at 9:14 PM, Lyndon Nerenberg <lyn...@orthanc.ca> wrote:

> I should also mention that, despite their bluster, they can't keep it up for
> more than half an hour.

The mailing list has been quiet. All step forward who are scared to say "me too" on account of Armada.

--lyndon
> Afaik, the DDoS is "only" a UDP based one (or much of the attack), you
> should be able to mitigate some to much of the damage caused by filled
> pipes by blocking incomming UDP trafic at your ISP level.

This is the Armada Collective, based on the description. We just went through a round with them. The hardest they were able to hit us peaked at a little under 80 Gbits/second. Primarily DNS and NTP amplification attacks. They also hit our web servers with a little over 80 million requests over a one-hour period, and played some games with TCP to try to mess with the protocol stacks on the servers and network gear.

Cloudflare took care of the web attacks. For DDoS, something like Incapsula will take care of the layer 3 stuff. Not cheap, but very effective.

--lyndon
On Dec 3, 2015, at 5:00 PM, alvin nanog wrote:

> run tcpdump and/or etherreal to capture the DDoS attacks

Of course! If we had only thought of this sooner! :-)

--lyndon
Typically, businesses hide from admitting they've been hit by drive-by attacks like the one Armada is trying to pull off. It has been interesting to see the public reaction from the post-Protonmail targets, many of whom are being very visible about 1) admitting they have been hit by the attacks, and 2) making it very clear the Armada crew can f*** right off as far as collecting ransom is concerned. (Also, 3) the amazing support from customers who understand why we are working on putting up defences instead of just paying, and therefore put up with the inevitable downtime as we reconfigure sometimes large chunks of our networks.)

The money asked for was a pittance (around USD$6K) for the attacks I'm personally aware of. The targets were willing to spend far in excess of that to deploy the necessary wall of DDoS protection to keep their services running, if they didn't have it there already.

What does that say for the business model of the botnet handlers? They can't up their ransom demands, because nobody is paying at the current rates. And they can't lower them, for the same reason. And if they change their targets to sites that might potentially pay a few hundred dollars at best, those sites will just shut down anyway.

Are we perhaps, finally, reaching the cusp where everyone has realized that if we all, collectively, tell the rodents to f*** off, they just might?

Happy Holidays!

--lyndon
On Jul 14, 2015, at 11:56 AM, Tony Hain <alh-i...@tndh.net> wrote:

> IPv6 is not the last protocol known to mankind. IF it burns out in 400-500
> years, something will have gone terribly wrong, because newer ideas about
> networking will have been squashed along the way. 64 bits for both hosts
> and routing was over 3 orders of magnitude more than sufficient to meet
> the design goals for the IPv4 replacement, but in the context of the
> dot-com bubble there was a vast outcry from the ops community that it
> would be insufficient for the needs of routing. So the entire 64 bits of
> the original proposal was given to routing, and the IETF spent another
> year arguing about how many bits more to add for hosts. Now, post bubble
> burst, we are left with 32,768x the already more than sufficient number
> of routing prefixes, but IPv4-think conservation believes we still need
> to be extremely conservative about allocations.

If you look at how the IoT model is evolving, the entire host+service (i.e. IP address + port number) model is rapidly disintegrating. Services are the end-points now. They need to be individually addressable, since they really have no affinity to physical hardware in the sense we currently think of hosts, with IP and MAC addresses. Host hardware is fungible; services are mobile.

The IPv6 address space conservatives are missing the entire point that IPv6, as a global addressing scheme, will collapse in the next couple of decades. Host+port endpoint identifiers are already done. We just haven't noticed yet.

--lyndon
On Jul 14, 2015, at 6:33 PM, Curtis Maurand cmaur...@xyonet.com wrote: Since IPV6 does not have NAT, it's going to be difficult for the layman to understand their firewall. deployment of ipv4 is pretty simple. ipv6 on the other hand is pretty difficult at the network level. yes, all the clients get everything automatically except for the router/firewall. Are we *still* having this argument?!?

block all
pass out on $extif keep state

Is it that fucking difficult for people to figure out? Really?
On Jul 14, 2015, at 7:26 PM, valdis.kletni...@vt.edu wrote: But.. But... How does that work without using UPNP? :) SHOUT LOUDER!
For a bit of fun, the results after 30 minutes of https://orthanc.ca/figure-1 being out on the nanog list: IPv4: 315 IPv6: 22 This is strictly GETs on the target page, not tainted by CSS or favicon nonsense. I don't know what this says about the proclivity of Nanog readers to blindly click on email URLs. (But the delineation between web and MUA email client user behaviours is ... interesting ...)
On Jul 13, 2015, at 1:57 PM, Mel Beckman m...@beckman.org wrote: David, Did you consider running an IPv6 tunnel through HE.net? Tunnels work, but they really are getting old. I have run 3ffe:: 6bone, HE tunnels, and (currently) aiccu. They all work very reliably, and I have immense gratitude towards the people who commit the time, the hardware, and the software, to making that go. But the bottom line is that 200+ms RTTs to my servers over v6 tunnels simply can't compete with 20ms RTTs on native v4. I know my code works over v6, but how can I ever know it works well when I'm behind a v6 dialup link? This past weekend I bit the projectile and decided to flip my service over to Teksavvy. The latter have native v6, claim to offer /48s, and have the audacity to charge me $10/month less than Telus. I'm game. More importantly, after eight years of Telus promising an IPv6 beta, I can tell them to see https://orthanc.ca/figure-1 :-P --lyndon
I've been poking around looking for an inexpensive xDSL circuit tester to do some measurements on my home DSL line, in opposition to the telco. $2K+ is not in the budget, so I'm curious about the accuracy of the $300 Chinese units kicking around eBay (e.g. the ST332B). Anyone out there have experience with them? Are they even remotely close to accurate? --lyndon
Re: ARIN just subdivided their last /17, /18, /19, /20, /21 and /22. Down to only /23s and /24s now.
On Jun 27, 2015, at 5:35 AM, Rafael Possamai raf...@gav.ufsc.br wrote: How long do you think it will take to completely get rid of IPv4? Or is it even going to happen at all? IPX ruled the roost, very popularly, for a little while. How long did it take to die? Why did it die? What were the triggers that pushed it over the cliff? I think there's a lot to be learned from that piece of recent history. Specifically, as a demonstration of how a hugely popular protocol can find itself ejected from the arena in the blink of an eye. I knew several people who built their career path on the assumptions of IPX. Ouch. --lyndon
On Jun 22, 2015, at 5:27 PM, Scott Weeks sur...@mauigateway.com wrote: I do SSH over geostationary satellite links (C-band) all the time. I'd say it's slow, but not excruciating, unless you type really fast on the network device's CLI. :-) SSH client/server authors would do well to learn the lessons of telnet line mode. As would authors of 'interactive' command line applications. The NVT concept is still useful in this day and age, far beyond the LA36. (I.e., if the termcap entry says 'dumb', honour it. There is a damn good reason we are saying 'turn off the bling'.) --lyndon
What problem do you expect this to solve? This is a real question, since you can be 100% sure that any DMARC policy will wreak havoc on any of your users who use mailing lists like this one. *Any* mailing list. Please help stamp out this abomination by refusing to capitulate to its insane desire to pretend the last three+ decades of email functionality never existed. --lyndon
On Jun 11, 2015, at 9:06 PM, Karl Auer ka...@biplane.com.au wrote: You don't get to just say I'm not going to implement this because I don't agree with it, which is what Google is doing in the case of Android. Actually, you DO get to just say that. Anyone can, but especially something as big as Google. And if DHCPv6 turns out to be important enough to enough people, Android will lose market share and either fork, die or change its mind. It wouldn't be the first mobile platform to disappear into the sludge of history. Sadly, there is another side to this. Witness how the DMARC crew are destroying the functionality of email as we have known it for decades. Sometimes the 800 pound gorillas DO have the ability to fsck things over for everyone. --lyndon P.S. But it is the mindless-sheep-like behaviour that lets them. So instead I should be complaining to the masses who are incapable of thinking for themselves? We know how well *that* works ...
On Jun 10, 2015, at 11:18 AM, goe...@anime.net wrote: Indeed, the interview process is a two way street. Lets you evaluate who you would be working for -- or if you really would want to. I wrote most of a very long follow-up to this. But what it boils down to is: +10,000 For all of you sitting across the table, consider that you are being interviewed even more intensely than you think you are interviewing us. (By anyone who has been in the game for a while, at least. Which means the people you have short-listed, right?) Over the past 25 years or so, I can think of a half-dozen offers I've turned down because the employer failed the interview. (Which doesn't make me a geeenious ... just someone who values low blood pressure, and prefers an interesting work environment over $$$) --lyndon
On Jun 10, 2015, at 8:39 PM, Stephen Satchell l...@satchell.net wrote: After the phone screen, the company called me in for the face-to-face interview. I put the word interview in quotes because, for 25 minutes, the chief programmer of the place played a video game he wrote. That was the extent of the interview! Mmm hmm. E.g. I spent half+ an hour being grilled on the internals and efficiencies of various regular expression library implementations. '[a-z]' vs. '[:islower:]' or something equally irrelevant to the interview at hand, for a position creating/managing the kernel - not apps - for an email spam filtering appliance. The second half hour devolved into a rant by the interviewer about 'volatile' in whatever was the latest version of the ANSI C standard. You can have a lot of fun, though, by playing the interviewers. When you discover your interest in the company is a noop, steering things into the Brazil regime can generate endless entertainment ;-) In fact, fishing for silliness can produce plenty of results. --lyndon
Where is Mr. Protocol? When we need him most?!
On Feb 28, 2015, at 4:37 PM, Jack Bates jba...@paradoxnetworks.net wrote: The question is, if YOU paid for the fiber to be run to their ped, would they hook you up? No. But that's because they are using the fibre pedestals to deliver a high bandwidth DSL service. The condo customers still get DSL on copper, but because the copper pipe is so short they can crank a hell of a lot of bps over it. Enough to deliver HDTV, at whatever compression rate they use to their set top boxes. It's way more than the 5 Mb/s up/down (S)DSL I would be quite happy with :-) --lyndon
It's not about "that's all they need", "that's all they want", etc. Whenever any vendor spouts "this is what our customers want" you know they are talking pure bullshit. The only customers who know what they want are the microscopic percentage who know what's actually possible, and we are dismissed as cranks. Even though they keep hiring us to run their networks. In the spirit of adding real data to the symmetry conversation, let me describe why I would prefer symmetric. Currently I have all-copper DSL running at 3 Mb/s down and about 640 Kb/s up. There are days I wish I had 1.5 Mb/s each way, as there are times when I need to push large files out (well in excess of 1 GB each). Doing that now is painfully slow, but I can live with the long transfer times because I'm not doing it every day. Where it is painful is how the clogged pipe breaks other things. The big one is my SIP phone service. Because the ACKs on the file upload come back faster than the data can leave, it's almost impossible to avoid queueing delays in my border router, despite it being a real UNIX box vs. a cheap appliance NAT router with buffer bloat. TCP doesn't deal well with the asymmetry, so the only way to address this is to drastically reduce the sendspace window on my uploading box in order to throttle it back to where TCP's flow control works as designed. So do I hack FTP and ssh on my machines to take a command line option to squash the sendspace? Or worse, do I use the existing knobs to turn sendspace down for the entire host? Neither one is pleasant, and I shouldn't have to implement either. Having a DSL link that allocated bandwidth based on real-time need would solve this for me. But since that's not an option, converting the link I have from ADSL to SDSL would solve my problem. I would gladly trade in a portion of my downstream *bandwidth* for a corresponding reduction in my upstream *latency*.
And I suspect a lot of those bullshitting ISPs would find "this is what our customers want" if their customers ever learned that it is this asymmetry that underlies many of their perceived performance issues. Mind you, the truly annoying part of this story (for me) is knowing Telus has fibre pedestals not a block away, with enough bandwidth to serve up IPTV to all the condos in the neighbourhood. But I'm in the marina across the street. Since there are only a handful of us here with service of any sort, they aren't about to come out and reroute us to the fibre pedestal. So I get to stay on the very long and corroded copper circuit back to one of the original downtown Vancouver exchanges. As one of the Telus techs said when he came out to help troubleshoot a failing DSL modem: "I am amazed it works at all" :-) And he's right -- the dB line losses are horrific. --lyndon
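For what it's worth, the per-connection version of that sendspace hack can be sketched in a few lines, assuming a Berkeley-sockets stack (the 16 KB figure is an arbitrary illustration, not a tuned value):

```python
import socket

def make_upload_socket(sndbuf=16 * 1024):
    """Create a TCP socket with a deliberately small send buffer.

    Shrinking SO_SNDBUF caps how much unacknowledged data TCP will
    keep in flight, which stops a bulk upload from building a deep
    queue in front of a thin ADSL uplink.  It must be set before
    connect() so the window is negotiated against the small buffer.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, sndbuf)
    return s
```

The kernel may round the value up (Linux, for one, doubles it for its own bookkeeping), but the effect is the same: the upload throttles itself instead of drowning the SIP traffic.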
On Feb 28, 2015, at 5:24 PM, Stephen Satchell l...@satchell.net wrote: (N.B.: we forced long TTLs to reduce the traffic necessary across our peering points. At one point, the cable people said they had one, count 'em one, peering link at 44 megabits/s, to serve all cable companies [with their own internal network]. I still don't know whether to buy that or not, and now it's moot.) Who was the late-90s ISP that had their geek mouthing off on TV about them having multiple T3's to the Internet ??? It was a very well done commercial. I remember having a good laugh at how it parodied both ends of the internet pipe of the time. --lyndon
On Feb 28, 2015, at 7:17 PM, Barry Shein b...@world.std.com wrote: I remember when downloading still images (dial-up days) was considered bandwidth hogging and only something very few people did. Of course no one did it, it took minutes to download even a rather small image and there was little market for image-oriented software (other than porn.) That was 1992-4ish? I helped spin up early ISPs back then. The traffic analysis we did at the time showed the porn crowd came out after dark and left at sunrise. And they were facing self-limiting servers that could not keep up with the demand anyway, so the 'average' customer base wasn't affected. That was across a pair of ISPs, each backed by a single T1 trunk to the internet and two T1 channel banks for the dialup pool. Not quite The World but it was big time stuff for where we were.
In my part of the world, a well-known service provider runs FTTC and then runs VDSL into the home. Ummh... I live in a 3rd world country. Oh Canada!
I'm running into TLS interoperability problems with some of the SMTP servers under the inbound.protection.outlook.com domain. Are there any Outlook postmasters lurking here that could contact me off list to help debug this? Thanks, --lyndon
On Nov 23, 2014, at 7:41 PM, Brian Henson marin...@gmail.com wrote: Is anyone else seeing their local craigslist redirected to another site other than craigslist? I see it loading http://digitalgangster.com/5um. *.craigslist.ca and *.craigslist.org have been offline since about 16:40 Pacific Standard Time from at least three different networks I have access to. From the limited poking around I've done it looks like it could be a DNS hijacking. I haven't seen anything about it on the outages list yet ... --lyndon
On Nov 10, 2014, at 4:24 PM, Izaac iz...@setec.org wrote: If you're stuck working in a completely isolated environment, then work it into the contract. That's the cost of being on an island. This is the argument being made against all the citizens who have the temerity to live in British Columbia, yet not within the borders of a sanctioned municipality. Izaac, spend a year getting shot at in Surrey, then get back to us. --lyndon
On Aug 18, 2014, at 3:05 PM, Randy Bush ra...@psg.com wrote: the request message was a forge, see below. damned shame i did not think of it, though. otoh, i consider the contact requests useful. You just blew an opportunity to get on every north american late night talk show. Oh ... (sorry)
On Jul 14, 2014, at 5:39 PM, Matt Palmer mpal...@hezmatt.org wrote: I assume that there's a leopard involved there somewhere? It's noodling around in the disused lavatory with Moaning Myrtle.
On Jun 20, 2014, at 6:24 AM, Jacques Latour jacques.lat...@cira.ca wrote: Just as an indicator, we have 316 .ca domains with IPv6 glue records :-( Part of the problem might be that two of the bigger registrars (Webnames and easyDNS) *still* can't handle input of IPv6 addresses in their management panels - you have to initiate a support request and have them enter the records manually. And neither can do a zone transfer from an IPv6-only master. I tried a little experiment a couple of months back. I set up a new domain, IPv6 only - not an A record in sight, including for the name servers. I then tried getting the name servers and glue records registered, first through Webnames, then through easyDNS. Both required manual intervention to set this up. I then tried using their secondary DNS services to add them as additional slave servers. Neither of them is capable of providing secondary DNS to an IPv6-only domain. In both cases, they can only initiate an xfer against an IPv4 master. Given the current state of affairs with the registrar infrastructure, it doesn't surprise me one bit those numbers are so low. --lyndon
On Feb 16, 2014, at 7:59 PM, Mark Tinka mark.ti...@seacom.mu wrote: Juniper's Junos implementation (which is based on FreeBSD) hasn't been patched Using firewall filters is the only way to mitigate the vulnerability. But doesn't the JunOS ntpd read/parse ntp.conf? It's worth getting to the admin mode shell prompt and taking a poke around /etc.
On Feb 16, 2014, at 8:30 PM, Christopher Morrow morrowc.li...@gmail.com wrote: and good luck with figuring out: 1) when you need to re-do that magic move 2) making sure that the move is automatable over time I was suggesting it as an alternative to just chopping off NTP at your border. Presumably it would be a one-off thing until Juniper issues a patch. As for automating it, 'expect' can be a very useful tool in situations like this. --lyndon
On Nov 1, 2013, at 7:18 PM, Mike Lyon mike.l...@gmail.com wrote: So even if Goog or Yahoo encrypt their data between DCs, what stops the NSA from decrypting that data? Or would it be done simply to make their lives a bit more of a PiTA to get the data they want? Markov chain text generators are cheap. Rather than amping up the crypto, why not bury them under heaping piles of steaming bullshit? After all, it would be the patriotic thing to do. Not only would you be helping employ your fellow network engineers (someone has to increase the size of the effluent pipes), you would be boosting manufacturing (disks for storage, high-end network gear for capture, mainframes and asics for filtering and analysis) and helping the much-maligned coal industry ensure its future prospects (that gear isn't built from electron sipping Atom CPUs, you know!). --lyndon
On 2013-06-25, at 7:58 PM, Sean Donelan s...@donelan.com wrote: The memo provides an overview and principles regarding Lawful Intercept(LI) of networks using RFC 1149, A Standard for the Transmission of IP Datagrams on Avian Carriers. National requirements are not addressed. Is scooping pigeon shit off my front lawn considered meta-data collection? --lyndon
On 2013-06-25, at 8:24 PM, Caruso, Anthony acar...@mre-consulting.com wrote: Yes, if you can identify the source of the grains, you know origin and flight path prior to your lawn. The NSA's approach is getting the pigeon shit off of everyone's lawn... Then I am in favour of PRISM. NSA: come vacuum all the pigeon shit off my boat! Please!!!
On 2013-06-25, at 8:54 PM, Jason Hellenthal jhellent...@dataix.net wrote: Anyone got a pentagram packet and a ouija board ? Be careful, when you pull out the chalk to draw a pentaGRAM around your data centre, that you don't – accidentally – draw a pentaGONE.
On 2012-12-20, at 12:13 PM, Michael Thomas wrote: Do these things need to have gig-e speeds? Probably not... for a lot even Bluetooth speeds are probably fine. But they do want to be really small and really inexpensive. Then run RS-422 or RS-485 over a single twisted pair. You don't even need a connector – you can solder directly to the PCB. --lyndon
I'm looking for innovative ideas on how to find such a rogue device, ideally as soon as it is plugged in to the network. There was a SIGCOMM paper a few years back that described a scheme based on measuring the ACK delays of TCP sessions. In a nutshell, you can detect nodes on the wireless network by looking for the extra delay added by the radio link. It had very good accuracy, and caught new nodes quickly. It didn't require any prior knowledge of the network. I don't have a copy of the paper at hand, and I don't remember the title/author or the publication date (2007ish?), but maybe this will ring a bell for someone else on the list who does. --lyndon
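A toy version of that heuristic, assuming you can passively collect per-host ACK round-trip samples at the border; the hostnames, baseline, and threshold below are made up for illustration:

```python
from statistics import median

def flag_wireless(rtt_samples, wired_baseline_ms=1.0, extra_ms=2.0):
    """Return hosts whose median ACK delay exceeds the wired baseline
    by more than `extra_ms` -- the extra latency a radio hop tends to
    add, which is the signal the measurement paper keys on."""
    suspects = []
    for host, samples in rtt_samples.items():
        if median(samples) > wired_baseline_ms + extra_ms:
            suspects.append(host)
    return suspects

# Synthetic samples: one wired host, one that looks like it is
# behind a radio link.
samples = {"10.0.0.5": [0.7, 0.9, 0.8], "10.0.0.9": [5.1, 6.3, 4.8]}
print(flag_wireless(samples))  # ['10.0.0.9']
```

The real scheme is considerably more careful (it has to separate radio delay from ordinary congestion), but the core signal is just this delta.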
On 2012-10-14, at 14:56, Matthias Waehlisch wrote: do you mean http://conferences.sigcomm.org/imc/2007/papers/imc122.pdf ? That's the one!
On 2012-08-24, at 10:33 AM, valdis.kletni...@vt.edu wrote: If you can use 3ms to extract enough money out of the market to pay for a cable, that market is *way* too volatile in the first place. Heh. Think things are volatile now? Wait 'til they get it down to pico-payment based trading of quantum virtual particles. Oh, I have to nip down to the patent office ... --lyndon
It is far preferable for the merchant to request ID and verify that the signature matches the ID _AND_ the picture in the ID matches the customer. In the late 1990s I had a Visa card from (I think) Citibank that had my picture embossed on the front of the card. I'm surprised this didn't catch on with more card issuers. I see that Bank of America offers this free of charge to their Visa clients, as do some US based credit unions. That card was never lost or stolen, so I don't know if the photo verification would fail as spectacularly as signatures do. --lyndon
On 2012-06-08, at 12:48 PM, Michael Thomas wrote: I'm sorry, my brain doesn't hold that many passwords. Unless you're a savant, neither does yours. So what you're telling me and the rest of the world is impossible. https://agilebits.com/onepassword (1Password) is one solution to managing web site passwords. --lyndon
On 2012-06-08, at 1:02 PM, Scott Weeks wrote: Only if you have an OS you have to pay for: apple or ms. I don't pay for them. $WORK pays for them. If your complaint is about 1Password not running on your particular operating systems, then pick a solution that *does* run on your OS. There are several open source alternatives you can use.
On 2012-06-08, at 1:22 PM, Michael Thomas wrote: Does your password safe know how to change the password on each website every several months? Yes.
On 2012-06-08, at 1:41 PM, Michael Thomas wrote: I run a website. If it can change it on mine, I'd like to understand how it manages to do that. I log in to your website, change my password, and the software picks up that I've changed the password and updates the safe accordingly. The software doesn't initiate the password change, it just notices it and updates its database accordingly. Sorry, I should have explained that more clearly. If you have a Mac or a Windows box, download the 1Password 30-day trial and take it for a run. It really is a useful bit of software. No, it doesn't work on my *BSD, Solaris, or Plan 9 machines. But it does sync across all my Mac, Windows, and Android gear, and the Android client lets me pull up passwords on my phone when I'm on one of the systems that doesn't have a native 1Password client, or when I am on the road. --lyndon
On 2012-06-08, at 2:07 PM, Andrew Sullivan wrote: I'm not trying to be dismissive. Those are excellent stopgap measures. They're not a solution. There is no solution. Security is about risk management, nothing more. The only way to ensure your personal passwords are never compromised is to kill yourself after destroying all physical copies of those passwords. While ultimately secure, you won't be able to do your daily online banking. --lyndon
I have a couple of wiring projects coming up on salt water-going vessels and I'm curious as to people's experiences with different types of cable marking products in a high-humidity / salt air / bilge environment. None of the markers will be directly exposed to the outside elements, but quite a bit will be running below decks and will have to put up with the bilge. Anyone have any horror stories to share? My preference is for a direct printing system rather than stock card markers. --lyndon
On 2012-03-08, at 2:10 PM, George Herbert wrote: Which fuel is present affects the label durability... Diesel.
I just went through some calculations for a (government) site that has the following rules: [...] Under the plausible assumption that very many people will start with a string of digits, continue with a string of lower-case letters to reach seven characters, and then add a period, there are only ~5,000,000,000 choices. That's not many at all -- but the rules look just fine... 1234;lkj rolls off the fingers quite nicely, don't you think?
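The ~5,000,000,000 figure is easy to reproduce: count the 8-character strings built from at least one leading digit, lower-case letters out to seven characters, then the period.

```python
# d leading digits, (7 - d) lower-case letters, then a period;
# d runs from 1 to 6 so there is at least one digit and one letter.
total = sum(10**d * 26**(7 - d) for d in range(1, 7))
print(f"{total:,}")  # 5,003,631,360 -- about five billion
```

A few days of GPU time against a space that size, and the rules still "look just fine" on paper.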
There really is no winner or right way on this thread. In IPv4 as a security guy we have often implemented NAT as an extra layer of obfuscation. It's worse than just obfuscation. The 'security' side effect of NAT can typically be implemented by four or five rules in a traditional firewall. But a NAT implementation adds thousands of lines of code to the path the packets take, and any time you introduce complexity you decrease the overall security of the system. And the complexity extends beyond the NAT box. Hacking on IPsec, SIP, and lord knows what else to work around address rewriting adds even more opportunities for something to screw up. If you want security, you have to DEcrease the number of lines of code in the switching path, not add to it. Complexity is evil. It's a shame this is no longer taught in computing courses. And I mean taught as a philosophy, not as a function of line count or any other bean-counter metrics. --lyndon
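For concreteness, the handful of stateful rules that buy you NAT's accidental "security" can be sketched in pf syntax (the `$ext_if` and `$web_srv` macros are assumed, and the ruleset is illustrative, not a drop-in config):

```
block in on $ext_if                 # default deny inbound
pass out on $ext_if keep state      # allow replies to our own traffic
# explicitly admit any inbound services you actually run:
pass in on $ext_if proto tcp to $web_srv port { 80 443 } keep state
```

Three lines of declarative policy versus thousands of lines of address-rewriting code in the packet path.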
The last mile for the Level3 is coming on Telus (after a punch to the face and gut for build out fee) so I'd like someone else. Shaw does not offer service without what I suspect is another punch to the face for a build out. Bell didn't return any of my inquiries via email or voice message. You will pay for buildout no matter who you talk to. Somebody already mentioned Terago. I can't vouch for how good their service or coverage is downtown, but they're definitely worth a call if you can't afford the cost of the fibre pull. --lyndon
I hope someone will explain the operational relevance of this ...

Sun V100          FreeBSD firewall/border gateway
Sun V100          Plan 9 kernel porting test bed
Sun V100          OpenBSD build/test/port box
Intel 8-core      Solaris fileserver and zones host
AMDx4             Random OS workstation crash box
Epia-EK           Plan 9 terminal
MacBook           xSnow Leopard build/test host
Intel-mumble-ITX  Win2K8.2 development host
Supermicro XLS7A  Plan 9 file server
Supermicro XLS7A  Plan 9 CPU/Auth server
Sun V100          Oracle (blech) new-Solaris test/porting box
Sun V100          crashbox for *BSD firewall failover tests
Sun V100          *BSD ham radio stuff, plus Plan 9 terminal kernel testing

sound-of-pants-zipping-up
Sorry, poorly worded. What I was wondering is there is an equivalent of KA9Q for IPv6. I believe one of the comments we got back when we were trying to reclaim 44/8 was that folks couldn't migrate to IPv6 because no software was available... We've come a little way since NOS. Linux has native AX25, and it's pretty simple to write a KISS adapter for any version of UNIX with a tun driver.
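For anyone curious what sits between an AX.25 stack and a tun/serial device, the KISS framing itself is tiny. A sketch of the encoder/decoder (not targeted at any particular TNC; port 0, data frames only):

```python
# KISS framing constants: frame delimiter, escape byte, and the
# transposed forms used when those bytes appear in the payload.
FEND, FESC, TFEND, TFESC = 0xC0, 0xDB, 0xDC, 0xDD

def kiss_encode(payload: bytes, port: int = 0) -> bytes:
    """Wrap a raw AX.25 frame for the serial link."""
    out = bytearray([FEND, port << 4])  # command 0 = data frame
    for b in payload:
        if b == FEND:
            out += bytes([FESC, TFEND])
        elif b == FESC:
            out += bytes([FESC, TFESC])
        else:
            out.append(b)
    out.append(FEND)
    return bytes(out)

def kiss_decode(frame: bytes) -> bytes:
    """Unwrap one complete KISS data frame back to the raw payload."""
    body = frame.strip(bytes([FEND]))[1:]  # drop the command byte
    out, esc = bytearray(), False
    for b in body:
        if esc:
            out.append(FEND if b == TFEND else FESC)
            esc = False
        elif b == FESC:
            esc = True
        else:
            out.append(b)
    return bytes(out)
```

Pair this with a tun device on one side and a serial port (or a sound-modem daemon) on the other and you have the skeleton of a modern NOS replacement.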
Does it make sense that ham radio operators have routable IP address space any longer? Yes. Keep your mitts off 44!
no no no.. it's simply, since the OP posited a math solution, md5. ship the size of file + hash, compute file on the other side. All files can be moved anywhere regardless of the size of the file in a single packet. MD5 compression is lossy in this context. Given big enough files you're going to start seeing hash collisions.
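The pigeonhole arithmetic behind that statement:

```python
# An MD5 digest is 128 bits, so there are only 2**128 possible
# digests, while there are 2**(8*n) possible n-byte files.  Any
# file longer than 16 bytes therefore cannot be recovered uniquely
# from its digest: collisions are guaranteed, not merely likely.
digests = 2**128
files_17_bytes = 2**(8 * 17)
print(files_17_bytes // digests)  # 256 distinct files per digest, minimum
```

Size-plus-hash pins down which of those colliding files you meant only if you already have the file, which rather defeats the purpose.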
sorry for the noise, but my contact at Syngenta says they have 220.127.116.11/8 18.104.22.168/8 and 22.214.171.124/8, Bugger. Now I have to renumber out of my 172.16/12 subnets :-(
and pigs fly Well, sometimes they do. The underlying problem here is flying sheep: http://www.youtube.com/watch?v=Vkw2DdoskPY Note the accurate summarization of the entire issue.
Guess that move to Amazon EC2 wasn't such a good idea. First reddit, now netflix. http://techblog.netflix.com/2010/12/four-reasons-we-choose-amazons-cloud-as.html FWIW, at $DAYJOB we haven't been able to run a pool of a couple of dozen EC2 instances for more than two weeks (since last June) without at least one of them going down. The same number of hardware servers we ran ourselves in Peer1 ran for a couple of years with no unplanned outages. Amortized over five years, Peer1 colo + hardware is also cheaper than the equivalent EC2 cost. Hey everyone! Join the cloud, and stand in the pissing rain. --lyndon
Just how much free time do you have? :) 1 minute to google the capacity of a 747-400F. 1 minute to google the dimensions and weight of an lto-4 cartridge. 1 minute to punch the numbers into bc(1). --lyndon
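Roughly the arithmetic involved, with admittedly ballpark figures for the airframe payload and the cartridge specs (treat every constant below as a googled approximation, not gospel):

```python
# Never underestimate the bandwidth of a 747 full of tapes.
payload_kg = 124_000      # 747-400F max payload, give or take
cart_kg    = 0.2          # one LTO-4 cartridge, give or take
cart_bytes = 800e9        # LTO-4 native (uncompressed) capacity
flight_s   = 10 * 3600    # a ten-hour flight

cartridges = payload_kg / cart_kg                        # 620,000 tapes
terabits_per_s = cartridges * cart_bytes * 8 / flight_s / 1e12
print(round(terabits_per_s))  # ~110 Tb/s, ignoring volume limits
```

bc(1) works just as well, and the answer dwarfs any submarine cable of the era either way.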
Also, who you will really trust to run it ? The UUCP network chugged along quite nicely for many years without any central authority. (Pathalias and the maps weren't an authority, just a hint.) --lyndon
because most of the end users who would be querying it are in Canada, and, with one nameserver in Canada and one in Japan, they would get a long RTT on DNS queries roughly half the time. But only, say, once per week if you're running a reasonable TTL on your zone.
File transfer wasn't multihop It was, for at least some versions (V2 and later?), if the intermediate site(s) allowed execution of the uucp command. 25 years on the brain is fuzzy on the details ... --lyndon
You could certainly add uux and uucp to the list of legal remote commands, but I confess that my memory is also dim about whether uucp file a!b!c would be translated automatically. It has indeed been a while... I'm pretty sure it was adding 'uucp' in the commands list that enabled the behaviour. HDB might have used a different config file syntax for turning this on. I would have to dig out the source code to remember the details. The command syntax you show above worked -- UUCP handled the re-queueing internally. --lyndon
s...@cs.columbia.edu: I am seriously suggesting that a redirect mechanism -- perhaps the email equivalent of HTTP's 301/302 -- would be worth considering. We already have SMTP's 251 and 551 response codes for this. But because the response text is free-form there's no way to reliably parse out the new address. Fixing this is a bit tricky since the SMTP grammar defines Reply-line in a way that makes it difficult to return the sort of structured response you would need. --lyndon
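RFC 5321's 551 reply ("User not local; please try <forward-path>") illustrates the parsing problem. A sketch with hypothetical reply lines from three different MTAs and a naive extractor:

```python
import re

# Hypothetical 551 replies.  RFC 5321 only *suggests* the
# "please try <forward-path>" wording; the text is free-form,
# so every server phrases it differently.
replies = [
    "551 User not local; please try <bob@new.example.com>",
    "551 5.1.6 recipient has moved, forward to bob@new.example.com",
    "551 No such user here anymore (ask postmaster)",
]

def forward_path(reply):
    """Naive parser: prefer an angle-bracketed path, fall back to
    anything that merely resembles an address."""
    m = re.search(r"<([^>]+)>", reply) or re.search(r"\b(\S+@\S+)\b", reply)
    return m.group(1) if m else None

for r in replies:
    print(forward_path(r))
# The first two happen to work; the third yields nothing, and a
# reply that quoted the *old* address would be silently mis-parsed.
```

A structured reply (a machine-readable forward-path field, as 301/302 has Location:) is exactly what the current Reply-line grammar makes awkward to retrofit.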