Re: Filter-based routing table management (was: Re: minimum IPv6 announcement size)
On Sep 26, 2013, at 11:07 AM, John Curran jcur...@istaff.org wrote: On Sep 26, 2013, at 4:52 AM, bmann...@vacation.karoshi.com wrote: sounds just like folks in 1985, talking about IPv4... If there ever were a need for a market/settlement model, it is with respect to routing table slots. https://www.cs.columbia.edu/~smb/papers/piara/index.html, from 1997. We even had a BoF at an IETF, but you can imagine the reaction it got. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Practical effects of DNSSEC deployment
There was an interesting paper at Usenix Security on the effects of deploying DNSSEC; see https://www.usenix.org/conference/usenixsecurity13/measuring-practical-impact-dnssec-deployment . The difference in geographical impact was quite striking. --Steve Bellovin, https://www.cs.columbia.edu/~smb
IPMI vulnerabilities
http://www.wired.com/threatlevel/2013/07/ipmi/ Capsule summary: watch out! --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: skype shoots self in foot
On Apr 26, 2013, at 3:24 AM, Randy Bush ra...@psg.com wrote: until widespread availability of webrtc, a bunch of us are using jitsi for video, https://jitsi.org/ And last I tried it, it kept segfaulting on something dumb ;) try the nightlies I'm trying the latest two nightlies -- two annoying bugs so far, and I haven't even tried contacting anyone yet with it. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: RFC 1149
On Apr 2, 2013, at 9:16 PM, Jay Ashworth j...@baylink.com wrote: - Original Message - From: Steven Bellovin s...@cs.columbia.edu DLT? I first heard it as a station wagon full of (9-track, 1600 bpi, that having been the state of the art) mag tapes on the Taconic Parkway, circa 1970. I suspect, though, that Herman Hollerith expressed the idea about a stage coach full of punchcards, back in the 1880s. The earliest reference to this I've been able to pin down is Andy Tanenbaum's, and TTBOMK -- and you of all people should know this, Steve -- he was talking about Usenet, which a few sites actually *got feeds of on magtape*, in the very early 80s. Some of those tapes, in addition to UTZoo's backups of their spool, constituted the very earliest material given to Dejagoo. Yes, I know that story. I'm talking what was said to me personally -- not hearsay, earwitness evidence. The road mentioned was the Taconic Parkway, part of the direct route between where I was working at the time (IBM Watson Lab #2, http://www.columbia.edu/cu/computinghistory/watsonlab.html) and IBM Yorktown -- https://maps.google.com/maps?saddr=612+West+115th+Street,+New+York,+NYdaddr=ibm+watson+labs,+yorktown,+nyhl=enll=41.027571,-73.66745spn=0.872312,0.95993sll=40.807717,-73.965464sspn=0.013675,0.014999geocode=FSWtbgIdaGCX-ylpY-dMOfbCiTEUPDIPtH_nMw%3BFfTUdAIdCtuZ-yF0j-k3CpyMSikvG-JPT7jCiTF0j-k3CpyMSgmra=lst=mz=10 The context was the speed of an RJE link between the IBM 1130 I was running (http://www.columbia.edu/cu/computinghistory/1130.html) and a mainframe in Yorktown. (If memory serves, it was a 2400 bps half-duplex link, probably via a Bell 201 data set. I don't remember for sure, though. Anyway, that was my first contact with networking, though I worried more about the host part of it. I did learn bisync rather thoroughly in my next gig, at City College of New York Computer Center, at that time the central computing hub for the entire City University system.) 
--Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: RFC 1149
DLT? I first heard it as a station wagon full of (9-track, 1600 bpi, that having been the state of the art) mag tapes on the Taconic Parkway, circa 1970. I suspect, though, that Herman Hollerith expressed the idea about a stage coach full of punchcards, back in the 1880s. On Apr 2, 2013, at 3:41 PM, Owen DeLong o...@delong.com wrote: Never underestimate the bandwidth of a 747 full of DLT cartridges. Owen On Apr 2, 2013, at 11:31 , Scott Berkman sc...@sberkman.net wrote: Hey careful, Pigeons have won this fight before: http://news.bbc.co.uk/2/hi/8248056.stm -Original Message- From: George Herbert [mailto:george.herb...@gmail.com] Sent: Monday, April 01, 2013 10:37 PM To: Jeff Kell Cc: NANOG Subject: Re: RFC 1149 Packets, shmackets. I'm just upset that my BGP over Semaphore Towers routing protocol extension hasn't been experimentally validated yet. Whoever you are who keeps flying pigeons between my test towers, you can't deliver packets without proper routing updates! Knock it off long enough for me to converge the #@$#$@ routing table... On Mon, Apr 1, 2013 at 7:19 PM, Jeff Kell jeff-k...@utc.edu wrote: On 4/1/2013 10:15 PM, Eric Adler wrote: Make sure you don't miss the QoS implementation of RFC 2549 (and make sure that you're ready to implement RFC 6214). You'll be highly satisfied with the results (presuming you and your packets end up in one of the higher quality classes). I'd also suggest a RFC 2322 compliant DHCP server for devices inside the hurricane zone, but modified by implementing zip ties such that the C47s aren't released under heavy (wind or water) loads. Actually, given recent events, I'd emphasize and advocate RFC3514 (http://www.ietf.org/rfc/rfc3514.txt) which I think is LONG overdue for adoption. The implementation would forego most of the currently debated topics as related to network abuse or misuse :) Jeff -- -george william herbert george.herb...@gmail.com --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Line cut in Mediterranean?
The BBC has a similar story: http://www.bbc.co.uk/news/world-middle-east-21963100 On Mar 27, 2013, at 6:41 PM, Neil J. McRae n...@domino.org wrote: Via renesys http://www.washingtonpost.com/world/middle_east/egypt-naval-forces-capture-3-scuba-divers-trying-to-sabotage-undersea-internet-cable/2013/03/27/dd2975ec-9725-11e2-a976-7eb906f9ed9b_story.html Sent from my iPhone On 27 Mar 2013, at 21:53, Neil J. McRae n...@domino.orgmailto:n...@domino.org wrote: quite a few EU to India cables are impacted right now 4/7 down. Sent from my iPad On 27 Mar 2013, at 18:14, Aftab Siddiqui aftab.siddi...@gmail.commailto:aftab.siddi...@gmail.com wrote: Well, it's not just the SMW4 outage; we've been witnessing serious issues on IMEWE for a couple of weeks now, and this outage just made it worse. So right now most of the traffic is taking eastbound routes. Who needs DDoS at this stage? These links are already choked up :) Maybe it was because of this: Global Internet Slows after 'biggest attack in history' http://www.bbc.co.uk/news/technology-21954636 -- Regards, Aftab A. Siddiqui --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: NYT covers China cyberthreat
On Feb 20, 2013, at 9:07 PM, Steven Bellovin s...@cs.columbia.edu wrote: On Feb 20, 2013, at 1:33 PM, valdis.kletni...@vt.edu wrote: On Wed, 20 Feb 2013 15:39:42 +0900, Randy Bush said: boys and girls, all the cyber-capable countries are cyber-culpable. you can bet that they are all snooping and attacking each other, the united states no less than the rest. news at eleven. The scary part is that so many things got hacked by a bunch of people who made the totally noob mistake of launching all their attacks from the same place This strongly suggests that it's not their A-team, for whatever value of their you prefer. (My favorite mistake was some of them updating their Facebook pages when their work took them outside the Great Firewall.) They just don't show much in the way of good operational security. Mandiant apparently feels the same way: http://www.forbes.com/sites/andygreenberg/2013/02/21/the-shanghai-army-unit-that-hacked-115-u-s-targets-likely-wasnt-even-chinas-a-team/ --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Network security on multiple levels (was Re: NYT covers China cyberthreat)
On Feb 20, 2013, at 3:20 PM, Jack Bates jba...@brightok.net wrote: On 2/20/2013 1:05 PM, Jon Lewis wrote: See thread: nanog impossible circuit Even your leased lines can have packets copied off or injected into them, apparently so easily it can be done by accident. This is especially true with pseudo-wire and mpls. Most of my equipment can do filter-based mirroring to alternative mpls circuits where I can drop packets into my analyzers. If I misconfigure, those packets could easily find themselves back on public networks. An amazing percentage of private lines are pseudowires, and neither you nor your telco salesdroid can know or tell; even the real circuits are routed through DACS, ATM switches, and the like. This is what link encryptors are all about; use them. (Way back when, we had a policy of using link encryptors on all overseas circuits -- there was a high enough probability of underwater fiber cuts, perhaps by fishing trawlers, that our circuits might suddenly end up on a satellite link. And we were only worrying about commercial-grade security.) --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: NYT covers China cyberthreat
On Feb 20, 2013, at 1:33 PM, valdis.kletni...@vt.edu wrote: On Wed, 20 Feb 2013 15:39:42 +0900, Randy Bush said: boys and girls, all the cyber-capable countries are cyber-culpable. you can bet that they are all snooping and attacking each other, the united states no less than the rest. news at eleven. The scary part is that so many things got hacked by a bunch of people who made the totally noob mistake of launching all their attacks from the same place This strongly suggests that it's not their A-team, for whatever value of their you prefer. (My favorite mistake was some of them updating their Facebook pages when their work took them outside the Great Firewall.) They just don't show much in the way of good operational security. Aside: A few years ago, a non-US friend of mine mentioned a conversation he'd had with a cyber guy from his own country's military. According to this guy, about 130 countries had active military cyberwarfare units. I don't suppose that the likes of Ruritania has one, but I think it's a safe assumption that more or less every first and second world country, and not a few third world ones are in the list. The claim here is not that China is engaging in cyberespionage. That would go under the heading of I'm shocked, shocked to find that there's spying going on here. Rather, the issue that's being raised is the target: commercial firms, rather than the usual military and government secrets. That is what the US is saying goes beyond the usual rules of the game. In fact, the US has blamed not just China but also Russia, France, and Israel (see http://www.israelnationalnews.com/News/News.aspx/165108 -- and note that that's an Israeli news site) for such activities. France was notorious for that in the 1990s; there were many press reports of bugged first class seats on Air France, for example. The term for what's going on is cyberexploitation, as opposed to cyberwar. 
The US has never come out against it in principle, though it never likes it when aimed at the US. (Every other nation feels the same way about its companies and networks, of course.) For a good analysis of the legal aspects, see http://www.lawfareblog.com/2011/08/what-is-the-government%E2%80%99s-strategy-for-the-cyber-exploitation-threat/ --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: OOB core router connectivity wish list
On Jan 9, 2013, at 1:18 PM, Leo Bicknell bickn...@ufp.org wrote: In a message written on Wed, Jan 09, 2013 at 06:39:28PM +0100, Mikael Abrahamsson wrote: IPMI is exactly what we're going for. For Vendors that use a PC motherboard, IPMI would probably not be difficult at all! :) I think IPMI is a pretty terrible solution though, so if that's your target I do think it's a step backwards. Most IPMI cards are prime examples of my worries, Linux images years out of date, riddled with security holes and universally not trusted. You're going to need a firewall in front of any such solution to deploy it, so you can't really eliminate the extra box I proposed just change its nature. https://www.schneier.com/blog/archives/2013/01/the_eavesdroppi.html --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Gmail and SSL
On Jan 3, 2013, at 3:52 PM, Matthias Leisi matth...@leisi.net wrote: On Thu, Jan 3, 2013 at 4:59 AM, Damian Menscher dam...@google.com wrote: While I'm writing, I'll also point out that the Diginotar hack which came up in this discussion as an example of why CAs can't be trusted was discovered due to a feature of Google's Chrome browser when a cert was Similar to http://googleonlinesecurity.blogspot.ch/2013/01/enhancing-digital-certificate-security.html? Thanks; I was just about to post that link to this thread. Certificates don't spread virally, and random browsers don't go looking for whatever interesting certificates they find. They also don't like certs that say *.google.com when the user is trying to go somewhere else; that web site would be non-functional unless it was trying to impersonate a Google domain. Taken all together, this sounds to me like deliberate mischief by someone. In fact, were it not for the facts that the blog post says that Google learned of this on December 24 and this thread started on December 14, I'd wonder if there was a connection -- was this the incident that made Google reassess its threat model? Of course, this attack was carried out within the official PKI framework... --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Gmail and SSL
On Jan 2, 2013, at 7:53 AM, valdis.kletni...@vt.edu wrote: On Sun, 30 Dec 2012 19:25:04 -0600, Jimmy Hess said: I would say those claiming certificates from a public CA provide no assurance of authentication of server identity greater than that of a self-signed one would have the burden of proof to show that it is no less likely for an attempted forger to be able to obtain a false bought certificate from a public trusted CA that has audited certification practices statement, a certificate improperly issued contrary to their CPS, than to have created a self-issued false self-signed certificate. There's a bit more trust (not much, but a bit) to be attached to a cert signed by a reputable CA over and above that you should attach to a self-signed cert you've never seen before. However, if you trust a CA-signed cert more than you trust a self-signed cert *that you yourself created*, there's probably a problem there someplace. (In other words, you should be able to tell Gmail yes, you should expect to see a self-signed cert with fingerprint 'foo' - only complain if you see some *other* fingerprint.) To the best of my knowledge, there's no currently known attack that allows the forging of a certificate with a pre-specified fingerprint. Though I'm sure Steve Bellovin will correct me if I'm wrong... :) No, you're quite correct. Depending on what you assume, that would take a preimage or second preimage attack. None are known for any current hash functions, even MD5. I think, though, that that isn't the real issue. We're talking about a feature that would be used by about .0001% of gmail users. Apart from code development and database maintenance by Google -- and even for Google, neither is free -- it requires a UI that is comprehensible, robust, and doesn't confuse the 99.9999% of people who think that a certificate is something you hang on the wall. (Aside: do you remember how Netscape displayed certs -- in a frame with a curlicue border? 
These are *certificates*; they should look the part, right? I'm just glad that the signature wasn't denoted by 3-D shadowing on a raised seal) Furthermore, the UI has to have a gentle way of telling people that the cert has changed, which may be correct. (Recall that for some of these users, they didn't create the cert; it was done by the admin of a site they use.) Do you run Cert Patrol (a Firefox extension) in your browser? It's amazing how much churn there is among certificates used by big sites (including Google itself). Certificate pinning is a great idea for experts, but it requires expert maintenance. I haven't yet seen a scalable, comprehensible version. I wish Google did support this, but I don't think it's unreasonable of them not to. Recall that they've been targeted by governments around the world, precisely the sort of adversary who can launch active attacks. Now, if you want to say that these adversaries can also corrupt CAs, whether they do it technically, procedurally, financially, or by sending around several large visitors who know where the CEO's kids go to school -- well, I won't argue; I certainly remember the Diginotar case. There may even be a lesser threat from using self-signed certs, since these large individuals operate on a human time frame, so it's more scalable to hit a few large CAs than a few thousand dissidents or other targets of interest. I think, though, that there are arguments on both sides. (The issue of you yourself accepting your own certs is quite different, of course.) --Steve Bellovin, https://www.cs.columbia.edu/~smb
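The fingerprint-pinning scheme Valdis describes above is simple to express in code. This is an editor's sketch, not anything Gmail actually implemented: the "fingerprint" is just a hash of the DER-encoded certificate, and pinning means comparing it against a value the user recorded earlier. Forging a different cert with the same fingerprint would require exactly the second-preimage attack discussed above. The `cert` bytes here are a placeholder, not a real certificate.

```python
import hashlib

# Sketch: a certificate "fingerprint" is just a hash of the DER bytes;
# pinning means remembering it and complaining when it changes.
def fingerprint(der_cert: bytes) -> str:
    return hashlib.sha256(der_cert).hexdigest()

def pin_ok(der_cert: bytes, pinned: str) -> bool:
    # Only complain if we see some *other* fingerprint.
    return fingerprint(der_cert) == pinned

cert = b"...DER bytes from the server..."   # placeholder, not a real cert
pinned = fingerprint(cert)                  # what the user recorded earlier
print(pin_ok(cert, pinned))                 # True until the cert changes
```

The scalability problem raised in the message is visible even here: every legitimate cert rotation by the site invalidates the pin, and someone has to decide whether the change was benign.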
Re: Gmail and SSL
On Jan 2, 2013, at 7:15 PM, Randy Bush ra...@psg.com wrote: Do you run Cert Patrol (a Firefox extension) in your browser? yes, but my main browser is chrome (ff does poorly with nine windows and 60+ tabs). there is some sort of pinning, or at least discussion of it. but it is not clear what is actually provided. and i don't see evidence of churn reporting. Google uses certificate pinning for a very, very few sites. From http://blog.chromium.org/2011/06/new-chromium-security-features-june.html : In addition in Chromium 13, only a very small subset of CAs have the authority to vouch for Gmail (and the Google Accounts login page). You can turn it on for other sites but: Advanced users can enable stronger security for some web sites by visiting the network internals page: chrome://net-internals/#hsts You can now force HTTPS for any domain you want, and even “pin” that domain so that only a more trusted subset of CAs are permitted to identify that domain. _It’s an exciting feature but we’d like to warn that it’s easy to break things! We recommend that only experts experiment with net internals settings._ Emphasis theirs. The only Chrome browser I have lying around right now is on a Nexus 7 tablet; I don't see any way to list the pinned certs from the browser. There is a list at http://www.chromium.org/administrators/policy-list-3, and while I don't know how current it is you'll notice a decided dearth of interesting sites with the exceptions of paypal.com and lastpass.com. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Gmail and SSL
On Jan 2, 2013, at 8:25 PM, Seth David Schoen sch...@loyalty.org wrote: Steven Bellovin writes: The only Chrome browser I have lying around right now is on a Nexus 7 tablet; I don't see any way to list the pinned certs from the browser. There is a list at http://www.chromium.org/administrators/policy-list-3, and while I don't know how current it is you'll notice a decided dearth of interesting sites with the exceptions of paypal.com and lastpass.com. You can see the current list of cert pins and HSTS preloads in the Chromium source tree at https://src.chromium.org/viewvc/chrome/trunk/src/net/base/transport_security_state_static.h?view=markup or https://src.chromium.org/viewvc/chrome/trunk/src/net/base/transport_security_state_static.json?view=markup Thanks. The list is longer, but with the exception of Twitter (and possibly intuit -- a subdomain is shown), not a lot more interesting. I don't see major banks, I don't see Facebook or Hotmail, I don't see the big CAs, etc. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: F-ckin Leap Seconds, how do they work?
On Jul 5, 2012, at 10:49:48 AM, Peter Lothberg wrote: On one of my BSD boxes. /usr/src/share/zoneinfo/leapseconds, I see no - No, but they're allowed; see Figure 9 of RFC 5905: Steve, I commented that it was stated that we were doing both positive and negative corrections. Only positive corrections have been made, and yes, negative are possible. I pointed out in a previous post that we can count 57, 58, 00 or 57, 58, 59, 00 or 57, 58, 59, 60, 00. And actually, this is the only thing operating systems and applications need to be capable of handling to make it a non-issue. Fair enough. LI Leap Indicator (leap): 2-bit integer warning of an impending leap second to be inserted or deleted in the last minute of the current month with values defined in Figure 9.

+-------+---------------------------------------+
| Value | Meaning                               |
+-------+---------------------------------------+
|   0   | no warning                            |
|   1   | last minute of the day has 61 seconds |
|   2   | last minute of the day has 59 seconds |
|   3   | unknown (clock unsynchronized)        |
+-------+---------------------------------------+

That's NTP packet format, used to implement NTP's representation of UTC, but not the definition of UTC... (What do I do if I receive a packet with 3?) Or better, all the UTC(k) are free-running and the (old) recommendation was to try to keep them within 1us; is that unsynchronized? -:) And oops, I did not catch that before; should it not say last minute of the month? The text as I copied it is certainly not consistent... If I remember right, the POSIX standard doesn't allow 60 in seconds... -Peter --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: F-ckin Leap Seconds, how do they work?
On Jul 3, 2012, at 5:06 PM, Peter Lothberg wrote: On one of my BSD boxes. /usr/src/share/zoneinfo/leapseconds, I see no - No, but they're allowed; see Figure 9 of RFC 5905: LI Leap Indicator (leap): 2-bit integer warning of an impending leap second to be inserted or deleted in the last minute of the current month with values defined in Figure 9.

+-------+---------------------------------------+
| Value | Meaning                               |
+-------+---------------------------------------+
|   0   | no warning                            |
|   1   | last minute of the day has 61 seconds |
|   2   | last minute of the day has 59 seconds |
|   3   | unknown (clock unsynchronized)        |
+-------+---------------------------------------+

Figure 9: Leap Indicator --Steve Bellovin, https://www.cs.columbia.edu/~smb
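For readers unfamiliar with where LI lives on the wire: per RFC 5905, the Leap Indicator is the top two bits of the first byte of an NTP packet (followed by the 3-bit version and 3-bit mode). A minimal decoder, as an editor's sketch:

```python
# Sketch: decode the 2-bit Leap Indicator from the first byte of an
# NTPv4 packet (RFC 5905): LI is bits 0-1, VN is bits 2-4, Mode bits 5-7.
LI_MEANINGS = {
    0: "no warning",
    1: "last minute of the day has 61 seconds",
    2: "last minute of the day has 59 seconds",
    3: "unknown (clock unsynchronized)",
}

def parse_li(first_byte: int) -> str:
    li = (first_byte >> 6) & 0x3    # top two bits
    return LI_MEANINGS[li]

# Example: 0x64 = LI 1, VN 4, Mode 4 (server) -> a leap second is pending
print(parse_li(0x64))
```

Note that this decodes only NTP's announcement of an impending leap; as the thread points out, what the host clock does with second 60 is a separate (and messier) question.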
Re: F-ckin Leap Seconds, how do they work?
On Jul 2, 2012, at 11:47 AM, AP NANOG wrote: Do you happen to know all the kernels and versions affected by this? See http://landslidecoding.blogspot.com/2012/07/linuxs-leap-second-deadlocks.html --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: FYI Netflix is down
On Jul 2, 2012, at 3:43 PM, Greg D. Moore wrote: At 03:08 PM 7/2/2012, George Herbert wrote: If folks have not read it, I would suggest reading Normal Accidents by Charles Perrow. Strong second to that suggestion. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Protocols for Testing Intrusion Detection?
On May 14, 2012, at 7:52 PM, Bill Stewart wrote: - Is there any application that can actually set the RFC3514 Evil Bit? Code was added to FreeBSD to set it (though I think the commit was later reverted); see the change logs at https://www.cs.columbia.edu/~smb/3514.html --Steve Bellovin, https://www.cs.columbia.edu/~smb
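For the curious: RFC 3514 puts the evil bit in the high-order, formerly-reserved bit of the IPv4 flags/fragment-offset field. An editor's sketch of a header with the bit set (actually emitting it would need a raw socket and privileges; this just builds the 20 bytes):

```python
import struct

# Sketch: RFC 3514's "evil bit" is the reserved high-order bit of the
# IPv4 flags/fragment-offset 16-bit word.  Build a minimal 20-byte
# header with it set; checksum left at zero for brevity.
EVIL = 0x8000

def ipv4_header(src: bytes, dst: bytes, evil: bool = True) -> bytes:
    ver_ihl = (4 << 4) | 5              # IPv4, header length 5 words
    flags_frag = EVIL if evil else 0
    return struct.pack("!BBHHHBBH4s4s",
                       ver_ihl, 0, 20, 0, flags_frag,
                       64, 6, 0, src, dst)   # TTL 64, proto TCP, cksum 0

hdr = ipv4_header(bytes([192, 0, 2, 1]), bytes([192, 0, 2, 2]))
print(hdr[6] & 0x80 != 0)    # is the evil bit set?
```

Benign packets MUST, of course, leave the bit clear.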
Re: Host scanning in IPv6 Networks
Also see https://www.cs.columbia.edu/~smb/papers/v6worms.pdf (Worm propagation strategies in an IPv6 Internet. ;login:, pages 70-76, February 2006.) On Apr 20, 2012, at 3:08:50 AM, Fernando Gont wrote: FYI Original Message Subject: IPv6 host scanning in IPv6 Date: Fri, 20 Apr 2012 03:57:48 -0300 From: Fernando Gont fg...@si6networks.com Organization: SI6 Networks To: IPv6 Hackers Mailing List ipv6hack...@lists.si6networks.com Folks, We've just published an IETF internet-draft about IPv6 host scanning attacks. The aforementioned document is available at: http://www.ietf.org/id/draft-gont-opsec-ipv6-host-scanning-00.txt The Abstract of the document is: cut here IPv6 offers a much larger address space than that of its IPv4 counterpart. The standard /64 IPv6 subnets can (in theory) accommodate approximately 1.844 * 10^19 hosts, thus resulting in a much lower host density (#hosts/#addresses) than their IPv4 counterparts. As a result, it is widely assumed that it would take a tremendous effort to perform host scanning attacks against IPv6 networks, and therefore IPv6 host scanning attacks have long been considered unfeasible. This document analyzes the IPv6 address configuration policies implemented in most popular IPv6 stacks, and identifies a number of patterns in the resulting addresses that lead to a tremendous reduction in the host address search space, thus dismantling the myth that IPv6 host scanning attacks are unfeasible. cut here Any comments will be very welcome (note: this is a drafty initial version, with lots of stuff still to be added... but hopefully a good starting point, and a nice reading ;-) ). Thanks! Best regards, --Steve Bellovin, https://www.cs.columbia.edu/~smb
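The address-pattern point in the abstract is easy to make concrete. With classic SLAAC, the interface ID is derived from the MAC address via modified EUI-64 (RFC 4291), so an attacker who guesses the vendor OUI only has to search the low 24 bits: 2^24 candidates, not 2^64. An editor's sketch (the MAC below is made up for illustration):

```python
# Sketch: modified EUI-64 (RFC 4291) -- flip the universal/local bit of
# the first MAC octet and insert ff:fe in the middle.  Everything except
# the low 24 bits is predictable once the vendor OUI is known.
def eui64_iid(mac: str) -> bytes:
    b = bytes.fromhex(mac.replace(":", ""))
    return bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:]

iid = eui64_iid("00:1b:21:aa:bb:cc")   # hypothetical MAC for illustration
print(iid.hex(":"))                     # 02:1b:21:ff:fe:aa:bb:cc
print(2 ** 24)                          # search space per known OUI
```

Randomized (RFC 4941 style) interface IDs avoid this particular pattern, which is part of what the draft goes on to analyze.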
Re: Most energy efficient (home) setup
On Apr 19, 2012, at 6:31:43 PM, Douglas Otis wrote: On 4/18/12 8:09 PM, Steven Bellovin wrote: On Apr 18, 2012, at 5:55:32 PM, Douglas Otis wrote: Dear Jeroen, In the work that led up to RFC3309, many of the errors found on the Internet pertained to single interface bits, and not single data bits. Working at a large chip manufacturer, I saw that removing internal memory error detection to foolishly save space cost them dearly: they then needed to do far more exhaustive four-corner testing. Checksums used by TCP and UDP are able to detect single bit data errors, but may miss as much as 2% of single interface bit errors. It would be surprising to find memory designs lacking internal error detection logic. mallet:~ smb$ head -14 doc/ietf/rfc/rfc3309.txt | sed 1,7d | sed 2,5d; date Request for Comments: 3309 Stanford September 2002 Wed Apr 18 23:07:53 EDT 2012 We are not in a static field... (3309 is one of my favorite RFCs -- but the specific findings (errors happen more often than you think), as opposed to the general lesson (understand your threat model), may be OBE.) Dear Steve, You may be right. However, back then most were also only considering random single bit errors as well. Although there was plentiful evidence for where errors might be occurring, it seems many worked hard to ignore the clues. Reminiscent of a drunk searching for keys dropped in the dark under a light post, mathematics for random single bit errors offer easier calculations and simpler solutions. While there are indeed fewer parallel buses today, these structures still exist in memory modules and other networking components. Manufacturers confront increasingly temperamental bit storage elements, where most include internal error correction to minimize manufacturing and testing costs. Error sources are not easily ascertained with simple checksums when errors are not random. Yes -- that's precisely why I like that RFC so much. --Steve Bellovin, https://www.cs.columbia.edu/~smb
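The RFC 3309 point about non-random errors is easy to demonstrate. The Internet checksum sums 16-bit words with end-around carry, so swapping two 16-bit words (a plausible hardware/interface fault) leaves it unchanged; a CRC catches the swap. An editor's sketch, using the standard library's plain CRC-32 as a stand-in for the CRC32c that RFC 3309 adopted for SCTP:

```python
import zlib  # zlib.crc32 is CRC-32, not CRC32c; used here only to contrast

# Sketch: the TCP/UDP ones'-complement checksum is commutative over
# 16-bit words, so an entire class of errors is invisible to it.
def inet_cksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"
    s = sum(int.from_bytes(data[i:i+2], "big") for i in range(0, len(data), 2))
    while s >> 16:                      # fold end-around carries
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

a = b"\x12\x34\x56\x78"
b = b"\x56\x78\x12\x34"                 # same 16-bit words, reordered
print(inet_cksum(a) == inet_cksum(b))   # True: the checksum is fooled
print(zlib.crc32(a) == zlib.crc32(b))   # False: the CRC sees the swap
```

This is the "understand your threat model" lesson in four bytes: if your error model includes word swaps on a parallel bus, a summing checksum is the wrong tool.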
Re: Most energy efficient (home) setup
On Apr 18, 2012, at 5:55:32 PM, Douglas Otis wrote: On 4/18/12 12:35 PM, Jeroen van Aart wrote: Laurent GUERBY wrote: Do you have reference to recent papers with experimental data about non ECC memory errors? It should be fairly easy to do Maybe this provides some information: http://en.wikipedia.org/wiki/ECC_memory#Problem_background Work published between 2007 and 2009 showed widely varying error rates with over 7 orders of magnitude difference, ranging from 10^−10 to 10^−17 error/bit·h (roughly one bit error per hour per gigabyte of memory, down to one bit error per century per gigabyte of memory).[2][4][5] A very large-scale study based on Google's very large number of servers was presented at the SIGMETRICS/Performance’09 conference.[4] The actual error rate found was several orders of magnitude higher than previous small-scale or laboratory studies, with 25,000 to 70,000 errors per billion device hours per megabit (about 3–10 × 10^−9 error/bit·h), and more than 8% of DIMM memory modules affected by errors per year. Dear Jeroen, In the work that led up to RFC3309, many of the errors found on the Internet pertained to single interface bits, and not single data bits. Working at a large chip manufacturer, I saw that removing internal memory error detection to foolishly save space cost them dearly: they then needed to do far more exhaustive four-corner testing. Checksums used by TCP and UDP are able to detect single bit data errors, but may miss as much as 2% of single interface bit errors. It would be surprising to find memory designs lacking internal error detection logic. mallet:~ smb$ head -14 doc/ietf/rfc/rfc3309.txt | sed 1,7d | sed 2,5d; date Request for Comments: 3309 Stanford September 2002 Wed Apr 18 23:07:53 EDT 2012 We are not in a static field... (3309 is one of my favorite RFCs -- but the specific findings (errors happen more often than you think), as opposed to the general lesson (understand your threat model), may be OBE.) --Steve Bellovin, https://www.cs.columbia.edu/~smb
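To make the quoted per-bit-hour rates tangible, here is an editor's back-of-the-envelope conversion (decimal gigabytes, so 8 × 10^9 bits per GB) for the two endpoints of the seven-orders-of-magnitude range:

```python
# Sketch: convert an error rate in errors/bit-hour into errors per hour
# for a given amount of memory.  Uses decimal GB (8e9 bits) for simplicity.
def errors_per_hour(rate_per_bit_hour: float, gigabytes: float) -> float:
    bits = gigabytes * 8e9
    return rate_per_bit_hour * bits

# The two ends of the range quoted above, for 1 GB of memory:
print(errors_per_hour(1e-10, 1.0))   # 0.8 errors/hour: nearly one per hour
print(errors_per_hour(1e-17, 1.0))   # 8e-08 errors/hour: vanishingly rare
```

At 10^−10 error/bit·h a gigabyte sees close to an error per hour, which is why the high-end measurements made non-ECC memory look so alarming.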
Re: BBC reports Kenya fiber break
On Feb 29, 2012, at 11:17:17 AM, Marshall Eubanks wrote: On Wed, Feb 29, 2012 at 10:08 AM, Justin M. Streiner strei...@cluebyfour.org wrote: On Wed, 29 Feb 2012, Rodrick Brown wrote: There's about 1/2 a dozen or so known private and government research facilities on Antarctica and I'm surprised to see no fiber end points on that continent? This can't be true. Constantly shifting ice shelves and glaciers make a terrestrial cable landing very difficult to implement on Antarctica. Satellite connectivity is likely the only feasible option. There are very few places in Antarctica that are reliably ice-free enough of the time to make a viable terrestrial landing station. Getting connectivity from the landing station to other places on the continent is another matter altogether. Apparently at least one long fiber pull has been contemplated. http://news.bbc.co.uk/2/hi/sci/tech/2207259.stm (Note : the headline is incorrect - the Internet reached the South Pole in 1994, via satellite, of course : http://www.southpolestation.com/trivia/90s/ftp1.html ) As far as I can tell, this was never done, and the South Pole gets its Internet mostly via TDRSS. http://www.usap.gov/technology/contentHandler.cfm?id=1971 Yes. I had discussions with some of their network support folks circa 1994 -- with limited bandwidth (DS0, as I recall) and only a few hours of connectivity per day, when a satellite was over the horizon, they were very concerned about attackers clogging their link. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: do not filter your customers
On Feb 24, 2012, at 7:46:40 AM, Danny McPherson wrote: On Feb 23, 2012, at 10:42 PM, Randy Bush wrote: the problem is that you have yet to rigorously define it and how to unambiguously and rigorously detect it. lack of that will prevent anyone from helping you prevent it. You referred to this incident as a leak in your message: a customer leaked a full table I was simply agreeing with you -- i.e., looked like a leak, smelled like a leak - let's call it a leak. I'm optimistic that all the good folks focusing on this in their day jobs, and expressly funded and resourced to do so, will eventually recognize what I'm calling leaks is part of the routing security problem. Sure; I don't disagree, and I don't think that Randy does. But just because we can't solve the whole problem, does that mean we shouldn't solve any of it? As Randy said, we can't even try for a strong technical solution until we have a definition that's better than I know it when I see it. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: do not filter your customers
On Feb 24, 2012, at 2:26:14 PM, Danny McPherson wrote: On Feb 24, 2012, at 1:10 PM, Steven Bellovin wrote: But just because we can't solve the whole problem, does that mean we shouldn't solve any of it? Nope, we most certainly should decompose the problem into addressable elements, that's core to engineering and operations. However, simply because the currently envisaged solution doesn't solve this problem doesn't mean we shouldn't acknowledge it exists. The IETF's BGP security threats document [1] describes a threat model for BGP path security, which constrains itself to the carefully worded SIDR WG charter, which addresses route origin authorization and AS_PATH semantics -- i.e., this leak problem is expressly out of scope of a threats document discussing BGP path security - eh? How the heck we can talk about BGP path security and not consider this incident a threat is beyond me, particularly when it happens by accident all the time. How we can justify putting all that BGPSEC and RPKI machinery in place and not address this leak issue somewhere in the mix is, err.., telling. I repeat -- we're in violent agreement that route leaks are a serious problem. No one involved in BGPSEC -- not me, not Randy, not anyone -- disagrees. Give us an actionable definition and we'll try to build a defense. Right now, we have nothing better than what Justice Potter Stewart once said in an opinion: I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description [hard-core pornography]; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it... Again -- *please* give us a definition. --Steve Bellovin, https://www.cs.columbia.edu/~smb P.S. It was routing problems, including leaks between RIP and either EIGRP or OSPF (it's been 20 years; I just don't remember), that got me involved in Internet security in the first place. I really do understand the issue.
Re: Common operational misconceptions
The timer for Linux is 5 minutes by default, but you can change it. Timer timeouts do not affect TCP MSS. RFC 2923: TCP should notice that the connection is timing out. After several timeouts, TCP should attempt to send smaller packets, perhaps turning off the DF flag for each packet. If this succeeds, it should continue to turn off PMTUD for the connection for some reasonable period of time, after which it should probe again to try to determine if the path has changed. It's Informational, not standards track, but the problem -- and the fix -- have been known for a very long time. --Steve Bellovin, https://www.cs.columbia.edu/~smb
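The "send smaller packets" fix boils down to simple arithmetic: the TCP MSS is the link MTU minus the IP and TCP header overhead. A minimal sketch (assuming no IP options, no TCP options, and no IPv6 extension headers):

```python
# Illustrative only: the arithmetic behind "send smaller packets".
# MSS = MTU minus fixed IP and TCP header sizes (no options assumed).

IPV4_HEADER = 20   # bytes, no options
IPV6_HEADER = 40   # bytes, no extension headers
TCP_HEADER = 20    # bytes, no options

def mss_for(mtu: int, ipv6: bool = False) -> int:
    """Largest TCP payload that fits in a single packet of size mtu."""
    ip = IPV6_HEADER if ipv6 else IPV4_HEADER
    return mtu - ip - TCP_HEADER

print(mss_for(1500))              # classic 1500-byte Ethernet, IPv4: 1460
print(mss_for(1280, ipv6=True))   # IPv6 minimum MTU: 1220
```

A blackhole-probing TCP simply retries with successively smaller values from this calculation until segments get through.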
Re: Common operational misconceptions
On Feb 20, 2012, at 10:27 PM, Masataka Ohta wrote: Steven Bellovin wrote: Timer timeouts do not affect TCP MSS. RFC 2923: TCP should notice that the connection is timing out. After several timeouts, TCP should attempt to send smaller packets, perhaps turning off the DF flag for each packet. If this succeeds, it should continue to turn off PMTUD for the connection for some reasonable period of time, after which it should probe again to try to determine if the path has changed. So? It's Informational, not standards track, but the problem -- and the fix -- have been known for a very long time. I'm not sure what you think the problem is, because the paragraph of RFC 2923 you quote has nothing to do with TCP MSS. Sure it does. That's in 2.1; the start of it discusses PMTUD failing for various reasons, including firewalls. The relevant section of the RFC (relevant to MSS) should be: The MSS should be determined based on the MTUs of the interfaces on the system, as outlined in [RFC1122] and [RFC1191]. which means MSS is constant. The text I quoted says, in so many words, send smaller packets. I don't know how it's possible to be more explicit than that. Note also that the next paragraph (next to the paragraph you quote) of the RFC eventually says to use a PMTU of 1280B for IPv6 if there are black holes. It is not a very good thing to do, especially for IP over IP tunnels, because 1280B packets are always fragmented if they are carried over a tunnel with an MTU of 1280B. Please cite in context. The text I quoted says that one option is to try turning off DF; the next paragraph notes that you can't do that on v6. It also doesn't say to use a PMTU of 1280; it says that that's a good fallback, and notes that v6 support requires that. Although it doesn't say so, I'll note that IP in IP makes the outer IP effectively a link layer for the inner IP; as such, it has to preserve all of the relevant properties, including a link MTU of 1280. 
If that doesn't work -- though it most likely will, since the most common hardware MTU is from the ancient 1500-byte Ethernet size -- the outer IP endpoint has to deal with it appropriately, such as by intentional fragmentation, just as is done for IP over ATM with its 53-byte cell size (RFC 2225). As implosion caused by multicast PMTUD of IPv6 requires ICMP PTB to be black-holed, you can expect a lot of black holes. Masataka Ohta --Steve Bellovin, https://www.cs.columbia.edu/~smb
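The tunnel arithmetic in this exchange can be sketched directly: encapsulation adds an outer header, so a full-size inner packet no longer fits in a tunnel whose own MTU equals the inner packet size. A hedged illustration (IPv4-in-IPv4 assumed; IPv6-in-IPv6 would add 40 bytes instead):

```python
# Illustrative: why 1280-byte packets fragment on a tunnel whose MTU is 1280.
# IPv4-in-IPv4 encapsulation adds a 20-byte outer header.

OUTER_IPV4 = 20  # bytes of encapsulation overhead

def fits_in_tunnel(inner_packet: int, tunnel_mtu: int,
                   overhead: int = OUTER_IPV4) -> bool:
    """True if the encapsulated packet fits without fragmentation."""
    return inner_packet + overhead <= tunnel_mtu

print(fits_in_tunnel(1280, 1280))  # False: 1300 bytes, must fragment
print(fits_in_tunnel(1260, 1280))  # True: leaves room for the outer header
```

This is the property the tunnel endpoint, acting as a link layer for the inner IP, has to paper over, by fragmenting the outer packet itself if necessary.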
Re: public scalable vpn?
On Feb 18, 2012, at 6:51 PM, George Bonser wrote: academics in ontario are gonna need a scalable vpn service until they find jobs elsewhere. http://www.cautbulletin.ca/en_article.asp?SectionID=1386&SectionName=News&VolID=336&VolumeName=No%202&VolumeStartDate=2/10/2012&EditionID=36&EditionName=Vol%2059&EditionStartDate=1/19/2012&ArticleID=3400 i can only handle a dozen or so. anyone running anything at scale? randy The agreement reached last month with the licensing agency includes provisions defining e-mailing hyperlinks as equivalent to photocopying a document I certainly hope that is some politicized hype printed in that article and not real. That is absolutely idiotic on the face of it. When I have seen stuff like this printed in the past it has generally been over-the-top catastrophizing of an issue in order to inflame emotions. I sure hope that's the case here. Otherwise my impression of Canadians has sunk to a new low. Why would it be in anyone's interest to sign such an agreement? It seems to be real -- see http://communications.uwo.ca/western_news/stories/2012/February/copyright_deal_struck.html I asked a Canadian friend of mine (who has serious privacy and security expertise) about it. She said Yes - we were discussing this vile decision in my Technopolicy law class... I have no idea what they were thinking! Idiots. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Dear RIPE: Please don't encourage phishing
Oh, and 'i' and 'l' need to be banned as well, because a sans-serif uppercase I looks a lot like a sans-serif lowercase l. (In fact, in the font I'm currently using, the two are pixel-identical.) I don't see anybody calling for the banning of 'i' and 'l' in domain names due to that. The very first phishing message I ever saw was from paypa1.com. --Steve Bellovin, https://www.cs.columbia.edu/~smb
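The paypa1.com trick can be sketched as a "skeleton" comparison: map visually confusable characters onto one canonical form and see whether two domains collide. The tiny mapping table below is a hand-picked assumption for illustration, not a real confusables database (Unicode maintains a much larger one):

```python
# Sketch of look-alike detection: fold confusable characters together and
# compare. The CONFUSABLE table is a tiny illustrative assumption.

CONFUSABLE = str.maketrans({"1": "l", "0": "o", "I": "l"})

def skeleton(domain: str) -> str:
    """Canonical form in which look-alike characters collide."""
    return domain.translate(CONFUSABLE).lower()

print(skeleton("paypa1.com") == skeleton("paypal.com"))  # True: a collision
```

Registries and browsers that do flag homographs use exactly this kind of folding, just with far bigger tables.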
Dear RIPE: Please don't encourage phishing
I received the enclosed note, apparently from RIPE (and the headers check out). Why are you sending messages with clickable objects that I'm supposed to use to change my password? --- From: ripe_dbannou...@ripe.net Subject: Advisory notice on passwords in the RIPE Database Date: February 9, 2012 1:16:15 PM EST To: [Apologies for duplicate e-mails] Dear Colleagues, We are contacting you with some advice on the passwords used in the RIPE Database. There is no immediate concern and this notice is only advisory. At the request of the RIPE community, the RIPE NCC recently deployed an MD5 password hash change. Before this change was implemented, there was a lot of discussion on the Database Working Group mailing list about the vulnerabilities of MD5 passwords with public hashes. The hashes can now only be seen by the user of the MNTNER object. As a precaution, now that the hashes are hidden, we strongly recommend that you change all MD5 passwords used by your MNTNER objects in the RIPE Database at your earliest convenience. When choosing new passwords, make them as strong as possible. To make it easier for you to change your password(s) we have improved Webupdates. On the modify page there is an extra button after the auth: attribute field. Click this button for a pop up window that will encrypt a password and enter it directly into the auth: field. Webupdates: https://apps.db.ripe.net/webupdates/search.html There is a RIPE Labs article explaining details of the security changes and the new process to modify a MNTNER object in the RIPE Database: https://labs.ripe.net/Members/denis/securing-md5-hashes-in-the-ripe-database We are sending you this email because this address is referenced in the MNTNER objects in the RIPE Database listed below. If you have any concerns about your passwords or need further advice please contact our Customer Services team at ripe-...@ripe.net. (You cannot reply to this email.) 
Regards, Denis Walker Business Analyst RIPE NCC Database Group Referencing MNTNER objects in the RIPE Database: maint-rgnet
Re: Dear RIPE: Please don't encourage phishing
If they're intended as a path to log in with a typed password, that's correct. Sad, but correct. On Feb 10, 2012, at 12:18 PM, Richard Barnes wrote: So because of phishing, nobody should send messages with URLs in them? On Fri, Feb 10, 2012 at 8:56 AM, Steven Bellovin s...@cs.columbia.edu wrote: I received the enclosed note, apparently from RIPE (and the headers check out). Why are you sending messages with clickable objects that I'm supposed to use to change my password? [quoted RIPE advisory elided; see the preceding message] --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Dear RIPE: Please don't encourage phishing
On Feb 10, 2012, at 12:29:30 PM, Randy Bush wrote: So because of phishing, nobody should send messages with URLs in them? more and more these days, i have taken to not clicking the update messages, but going to the web site manually to get it. Yup -- I wrote about that a while back (https://www.cs.columbia.edu/~smb/blog/2011-10/2011-10-02.html) way too much phishing, and it is getting subtle and good. What's the line -- I know I'm paranoid, but am I paranoid enough? --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Dear RIPE: Please don't encourage phishing
On Feb 10, 2012, at 12:37:01 PM, Leo Bicknell wrote: In a message written on Fri, Feb 10, 2012 at 09:29:30AM -0800, Randy Bush wrote: more and more these days, i have taken to not clicking the update messages, but going to the web site manually to get it. way too much phishing, and it is getting subtle and good. We know how to sign and encrypt web sites. We know how to sign and encrypt e-mail. We even know how to compare keys between the web site and e-mail via a variety of mechanisms. We know how to sign DNS. Remind me again why we live in this sad world Randy (correctly) described? There's no reason my mail client shouldn't validate that the signed e-mail came from the same entity as the signed web site I'd previously logged into, and give me a green light that the link actually points to said same web site with the same key. It should be transparent, and secure for the user. The really hard parts are (a) getting the users to pay attention to the validation state (or, more precisely, its absence on a phishing email), and (b) getting them to do it *correctly*. Some of the browser password managers have protection against phishing as a very useful side effect: if they don't recognize the URL, they won't pony up the correct login and password. That's much better than hoping that someone notices the absence of a little icon that means this was signed. The correctly part has to do with the PKI mess. --Steve Bellovin, https://www.cs.columbia.edu/~smb
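The password-manager side effect described above comes from exact-match lookup: the vault is keyed on the origin, so a look-alike site simply gets nothing. A minimal sketch with hypothetical data (real managers match on the registered domain and scheme, not a raw string, but the principle is the same):

```python
# Sketch of the anti-phishing side effect: credentials keyed on the exact
# origin mean a look-alike site has nothing to harvest. Hypothetical data.

vault = {"https://www.paypal.com": ("alice", "s3cret")}

def credentials_for(origin: str):
    """Exact lookup, no fuzzy matching: unknown origin -> None."""
    return vault.get(origin)

print(credentials_for("https://www.paypal.com"))  # ('alice', 's3cret')
print(credentials_for("https://www.paypa1.com"))  # None: nothing to phish
```

The user never has to notice a missing padlock icon; the manager's silence on the phishing page is itself the warning.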
Re: LAw Enforcement Contact
On Jan 23, 2012, at 2:46 AM, Chris wrote: The appropriately named SS mainly deals with counterfeit currency, widespread ID theft (See also: Ryan1918) and threats to the President. Actually, they have statutory authority to deal with computer crime, too; see http://www.secretservice.gov/criminal.shtml and http://www.law.cornell.edu/uscode/18/1030.html --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Megaupload.com seized
On Jan 21, 2012, at 8:00 PM, Jay Ashworth wrote: - Original Message - From: Lyle Giese l...@lcrcomputer.net Not that I would not be a bit miffed if personal files disappeared, but that's one of the risks associated with using a cloud service for file storage. It could have been a fire, a virus erasing files, bankruptcy, malicious insider damage... Doesn't matter, you lost access to legit content in the crossfire. I'm not sure this is actually true. The Law generally recognizes 'accident' as a means for relieving people of responsibility for criminal acts -- it can't *be* a criminal act without scienter on the part of the doer. Actually, that's often not true in recent laws. There was an article in the Wall Street Journal a month or so ago that gave some glaring examples of not just laws but actual convictions. In this case, the doer was negligent, rather than purposefully malicious, but we have solutions for that as well. I'm not sure what you mean by doer here. http://opinion.latimes.com/opinionla/2012/01/copyrights-feds-push-novel-theories-in-megaupload-case.html has an interesting analysis. It presents a number of factual statements that are capable of multiple interpretations. This in turn means that much of the case is likely to turn on scienter, which in turn means heavy reliance on the seized emails. This will be an interesting case to watch. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Megaupload.com seized
On Jan 19, 2012, at 6:44 PM, ja...@smithwaysecurity.com wrote: You guys serious, when did the order come in to seize the domain? http://arstechnica.com/tech-policy/news/2012/01/why-the-feds-smashed-megaupload.ars has a good analysis; also see http://online.wsj.com/article_email/SB10001424052970204616504577171060611948408-lMyQjAxMTAyMDEwOTExNDkyWj.html (which seems to be outside their paywall). What differentiates this from many of the earlier domain name seizures is that this is based on a grand jury indictment, not just an administrative decision by Immigration and Customs Enforcement. It may be heavy-handed or questionable, per the Ars Technica analysis, but as a matter of process it's about as good as you'll get. Sent from my HTC - Reply message - From: Ryan Gelobter rya...@atwgpc.net To: NANOG nanog@nanog.org Subject: Megaupload.com seized Date: Thu, Jan 19, 2012 6:41 pm The megaupload.com domain was seized today, has anyone noticed significant drops in network traffic as a result? http://www.scribd.com/doc/78786408/Mega-Indictment http://techland.time.com/2012/01/19/feds-shut-down-megaupload-com-file-sharing-website/ --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Megaupload.com seized
On Jan 19, 2012, at 10:07 PM, Suresh Ramasubramanian wrote: I would agree. They've dotted every i and crossed every t here. This will inevitably be followed by a prosecution of some sort and/or there's also scope for Megaupload to sue the USG for restitution. It'll be interesting to see how this pans out - especially wrt any safe harbor provisions in the DMCA for providers (which do have a provision for due diligence being exercised etc). Note this from the NY Times article: The Megaupload case is unusual, said Orin S. Kerr, a law professor at George Washington University, in that federal prosecutors obtained the private e-mails of Megaupload’s operators in an effort to show they were operating in bad faith. The government hopes to use their private words against them, Mr. Kerr said. This should scare the owners and operators of similar sites. And see 17 USC 512(c)(1)(A) (http://www.law.cornell.edu/uscode/17/512.html) for why that's significant. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Megaupload.com seized
I don't mean either -- I've only skimmed the indictment. But from the news stories, it would *appear* that they got a search or wiretap warrant to get at employees' email. I don't see how that would make it not private. (Btw -- due diligence is a civil suit concept; this is a criminal case.) The prosecution is trying to claim that the targets had actual knowledge of what was going on. I do know Orin Kerr, however. He's a former federal prosecutor and he's *very* sharp, and I've never known him to be wrong on straightforward legal issues like this. He may not have all the facts himself. But here are two sample paragraphs from the indictment: On or about August 31, 2006, VAN DER KOLK sent an e-mail to an associate entitled lol. Attached to the message was a screenshot of a Megaupload.com file download page for the file Alcohol 120 1.9.5 3105complete.rar with a description of Alcohol 120, con crack By ChaOtiX!. The copyrighted software Alcohol 120 is a CD/DVD burning software program sold by www.alcohol-soft.com. and On or about June 24, 2010, members of the Mega Conspiracy were informed, pursuant to a criminal search warrant from the U.S. District Court for the Eastern District of Virginia, that thirty-nine infringing copies of copyrighted motion pictures were believed to be present on their leased servers at Carpathia Hosting in Ashburn, Virginia. On or about June 29, 2010, after receiving a copy of the criminal search warrant, ORTMANN sent an e-mail entitled Re: Search Warrant Urgent to DOTCOM and three representatives of Carpathia Hosting in the Eastern District of Virginia. In the e-mail, ORTMANN stated, The user/payment credentials supplied in the warrant identify seven Mega user accounts, and further that The 39 supplied MD5 hashes identify mostly very popular files that have been uploaded by over 2000 different users so far[.] 
The Mega Conspiracy has continued to store copies of at least thirty-six of the thirty-nine motion pictures on its servers after the Mega Conspiracy was informed of the infringing content. (I got the indictment from http://static2.stuff.co.nz/files/MegaUpload.pdf -- while I'd prefer to cite a DoJ site, for some reason their web server is very slow right now...) On Jan 19, 2012, at 10:48 PM, Suresh Ramasubramanian wrote: Er I'm sorry but do you mean joesch...@corp.megaupload.com type emails, or joesch...@hotmail.com type emails? If megaupload's corporate email was seized to provide due diligence in such a prosecution - it would quite probably not constitute private mail On Fri, Jan 20, 2012 at 8:49 AM, Steven Bellovin s...@cs.columbia.edu wrote: The Megaupload case is unusual, said Orin S. Kerr, a law professor at George Washington University, in that federal prosecutors obtained the private e-mails of Megaupload’s operators in an effort to show they were operating in bad faith. The government hopes to use their private words against them, Mr. Kerr said. This should scare the owners and operators of similar sites. -- Suresh Ramasubramanian (ops.li...@gmail.com) --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: DNS Attacks
On Jan 18, 2012, at 10:41:30 AM, Christopher Morrow wrote: On Wed, Jan 18, 2012 at 10:05 AM, Nick Hilliard n...@foobar.org wrote: On 18/01/2012 14:18, Leigh Porter wrote: Yeah like I say, it wasn't my idea to put DNS behind firewalls. As long as it is not *my* firewalls I really don't care what they do ;-) As you're posting here, it looks like it's become your problem. :-D Seriously, though, there is no value to maintaining state for DNS queries. You would be much better off to put your firewall production interfaces on a routed port on a hardware router so that you can implement ASIC packet filtering. This will operate at wire speed without dumping you into the colloquial poo every time someone decides to take out your critical infrastructure. I get the feeling that Leigh had implemented this against his own advice for a client... that he's on board with the 'putting a firewall in front of a DNS server is dumb' meme... In principle, this is certainly correct (and I've often said the same thing about web servers); in practice, though, a lot depends on the specs. For example: can the firewall discard useless requests more quickly? Does it do a better job of discarding malformed packets? Is the vendor better about supplying patches to new vulnerabilities? Can it do a better job filtering on source IP address? Does it do load-balancing? Are there other services on the same server IP address that do require stateful filtering? As I said, most of the time a dedicated DNS appliance doesn't benefit from firewall protection. Occasionally, though, it might. --Steve Bellovin, https://www.cs.columbia.edu/~smb
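The "no value in state" argument rests on each DNS query and response over UDP being self-contained: a filter can accept or drop every packet on its header fields alone, with no connection table to exhaust under attack. A minimal sketch of such a stateless rule check (addresses and rules are hypothetical):

```python
# Sketch of a stateless ACL for a DNS server: each packet is judged on its
# own header fields; no per-flow state is ever created or consulted.
# The server address and rule set below are hypothetical.

RULES = {
    # (protocol, dst_ip, dst_port) tuples that are permitted; all else drops
    ("udp", "192.0.2.53", 53),
    ("tcp", "192.0.2.53", 53),  # zone transfers / large responses
}

def permit(proto: str, dst_ip: str, dst_port: int) -> bool:
    """Per-packet decision: constant-time lookup, no connection tracking."""
    return (proto, dst_ip, dst_port) in RULES

print(permit("udp", "192.0.2.53", 53))   # query to the DNS server: permitted
print(permit("udp", "192.0.2.53", 123))  # NTP to the same host: dropped
```

This is exactly what a router ACL does in ASICs at wire speed, which is why a stateful firewall in front of it usually adds fragility rather than protection.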
Re: question regarding US requirements for journaling public email (possible legislation?)
On Jan 5, 2012, at 11:05:37 PM, Suresh Ramasubramanian wrote: There's no shortage of stuff that reaches you 80..90 days after the fact The UK voluntary retention rules make a lot more sense, compared to a few days, which is entirely impractical On Fri, Jan 6, 2012 at 9:30 AM, valdis.kletni...@vt.edu wrote: You need to track down a miscreant user *right now*? You got the last 48 hours of logs right at hand. It's been a week? Meh, if somebody's been getting hit by a DDoS for a week and is just now calling you, the fact they have a DDoS is the least of their problems. Toss the logs. :) The answer from the EFF is the same: retain what *you* have an operational or administrative need for. This is very different from a legislative mandate for multiyear retention. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: question regarding US requirements for journaling public email (possible legislation?)
On Jan 5, 2012, at 2:16 PM, Fred Baker wrote: On Jan 5, 2012, at 10:42 AM, William Herrin wrote: On Thu, Jan 5, 2012 at 10:56 AM, Eric J Esslinger eesslin...@fpu-tn.com wrote: His response was there is legislation being pushed in both House and Senate that would require journalling for 2 or 5 years, all mail passing through all of your mail servers. Hi Eric, The only relatively recent thing I'm aware of in the Congress is the Protecting Children From Internet Pornographers Act of 2011. Since you bring it up, I sent this to Eric a few moments ago. Like you, IANAL, and this is not legal advice. From: Fred Baker f...@cisco.com Date: January 5, 2012 10:46:30 AM PST To: Eric J Esslinger eesslin...@fpu-tn.com Subject: Re: question regarding US requirements for journaling public email (possible legislation?) I don't know of anything on email journaling, but you might look into section 4 of the Protecting Children From Internet Pornographers Act of 2011, which asks you to log IP addresses allocated to subscribers. My guess is that the concern is correct, but the details have morphed into urban legend. http://www.govtrack.us/congress/billtext.xpd?bill=h112-1981 http://www.techdirt.com/articles/20110707/04402514995/congress-tries-to-hide-massive-data-retention-law-pretending-its-anti-child-porn-law.shtml I'm not sure I see this as shrilly as the techdirt article does, but it is in fact enabling legislation for a part of Article 20 of the COE Cybercrime Convention http://conventions.coe.int/Treaty/en/Treaties/html/185.htm. US is a signatory. Article 21 is Lawful Intercept as specified in OCCSSS, FISA, CALEA, and PATRIOT. Article 20 essentially looks for retention of mail/web/etc logs, and in the Danish interpretation, maintaining Netflow records for every subscriber in Denmark along with a mapping between IP address and subscriber identity in a form that can be data mined with an appropriate warrant. 
I can't say (I don't know) whether the Danish Police have in fact implemented what they proposed in 2003. What they were looking for at the time was that the netflow records would be kept for something on the order of 6-18 months. From a US perspective, you might peruse http://en.wikipedia.org/wiki/Telecommunications_data_retention#United_States The Wikipedia article goes on to comment on the forensic value of data retention. I think it is fair to say that the use of telephone numbers in TV shows like CSI (gee, he called X a lot, maybe we should too) is the comic book version of the use but not far from the mark. A law enforcement official once described it to me as mapping criminal networks; if Alice and Bob are known criminals that talk with each other, and both also talk regularly with Carol, Carol may simply be a mutual friend, but she might also be something else. Further, if Alice and Bob are known criminals in one organization, Dick and Jane are known criminals in another, and a change in communication patterns is observed - Alice and Bob don't talk with Dick or Jane for a long period, and then they start talking - it may signal a shift that law enforcement is interested in. Yah, but that's all non-content records; it's a far cry from having to retain the body of every email, which is what he asked about. As far as I know -- and I'm on enough tech policy lists that I probably would know -- nothing like that is being proposed. That said, for a few industries -- finance comes to mind -- companies are required to do things like that by the SEC, but not ISPs per se. See http://www.archivecompliance.com/Laws-governing-email-archiving-compliance.html for some details. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: AD and enforced password policies
On Jan 3, 2012, at 8:09:19 AM, Greg Ihnen wrote: On Jan 3, 2012, at 4:14 AM, Måns Nilsson wrote: Subject: RE: AD and enforced password policies Date: Mon, Jan 02, 2012 at 11:15:08PM + Quoting Blake T. Pfankuch (bl...@pfankuch.me): However I would say 365 day expiration is a little long, 3 months is about the average in a non financial oriented network. If you force me to change a password every three months, I'm going to start doing g0ddw/\ssPOrd-01, ..-02, etc immediately. Net result, you lose. Let's face it, either the bad guys have LANMAN hashes/unsalted MD5 etc, and we're all doomed, or they will be lucky and guess. None of these attack modes will be mitigated by the 3-month scheme; success/fail as seen by the bad guys will be a lot quicker than three months. If they do not get lucky with john or rainbow tables, they'll move on. (Some scenarios still are affected by this, of course, but there is a lot to be done to stop bad things from happening like not getting your hashes stolen etc. On-line repeated login failures aren't going to work because you'll detect that, right? ) Either way, expiring often is the first and most effective step at making the lusers hate you and will only make the Post-It(tm) makers happy. If your password crypto is NSA KW-26 or similar, OTOH, just don the Navy blues and start swapping punchcards at ZULU. (http://en.wikipedia.org/wiki/File:Kw-26.jpg) -- Måns Nilsson primary/secondary/besserwisser/machina MN-1334-RIPE +46 705 989668 Life is a POPULARITY CONTEST! I'm REFRESHINGLY CANDID!! A side issue is the people who use the same password at fuzzykittens.com as they do at bankofamerica.com. Of course fuzzykittens doesn't need high security for their password management and storage. After all, what's worth stealing at fuzzykittens? All those passwords. I use and recommend a popular password manager, so I can have unique strong passwords without making a religion out of it. 
It's not a side issue; in my opinion it's a far more important issue in most situations. I do the same thing that you do for all but my most critical passwords. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: AD and enforced password policies
On Jan 2, 2012, at 7:05 PM, Gary Buhrmaster wrote: On Mon, Jan 2, 2012 at 22:32, Jimmy Hess mysi...@gmail.com wrote: The sole root cause for easily guessable passwords is not lack of technical restrictions. It's also: lazy or limited-memory humans who need passwords that they can remember. Firstname1234! is very easy to guess, and meets complexity and usual length requirements. Obligatory xkcd reference: http://xkcd.com/936/ Thanks; you saved me the trouble. There's a discussion of the topic going on right now on a cryptography mailing list; check out http://lists.randombit.net/mailman/listinfo/cryptography if you want. Also see my (mostly tongue in cheek) blog post at https://www.cs.columbia.edu/~smb/blog/2011-12/2011-12-27.html and the very serious followup at https://www.cs.columbia.edu/~smb/blog/2011-12/2011-12-28.html I should add that except for targeted attacks, strong passwords are greatly overrated; neither phishing attacks nor keystroke loggers care how good your password is. I just went through some calculations for a (government) site that has the following rules: Minimum Length: 8; Maximum Length: 12; Maximum Repeated Characters: 2; Minimum Alphabetic Characters Required: 1; Minimum Numeric Characters Required: 1; Starts with a Numeric Character; No User Name; No past passwords; At least one character must be from ~!@#$%^*()-_+={}[]\|;:/?.,'` Under the plausible assumption that very many people will start with a string of digits, continue with a string of lower-case letters to reach seven characters, and then add a period, there are only ~5,000,000,000 choices. That's not many at all -- but the rules look just fine... --Steve Bellovin, https://www.cs.columbia.edu/~smb
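The ~5,000,000,000 figure can be reproduced directly from the assumed pattern: k digits (k ≥ 1, since the password starts with a digit), then lowercase letters to fill out seven characters (so at least one letter, k ≤ 6), then a final period, for eight characters total. This sketch ignores the maximum-repeated-characters rule, which would shave the count down slightly:

```python
# Reproducing the back-of-the-envelope count: k digits, then (7 - k)
# lowercase letters, then '.' -- eight characters satisfying every rule.

total = sum(10**k * 26**(7 - k) for k in range(1, 7))
print(f"{total:,}")  # 5,003,631,360 -- the "~5,000,000,000 choices"
```

At a modest billion guesses per second across a botnet, that space falls in seconds, which is the point: the rules look strict while constraining users into a tiny, predictable corner of the space.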
Re: AD and enforced password policies
On Jan 2, 2012, at 9:10 PM, Lyndon Nerenberg wrote: I just went through some calculations for a (government) site that has the following rules: [...] Under the plausible assumption that very many people will start with a string of digits, continue with a string of lower-case letters to reach seven characters, and then add a period, there are only ~5,000,000,000 choices. That's not many at all -- but the rules look just fine... 1234;lkj rolls off the fingers quite nicely, don't you think? OK -- let's let the set of punctuation be .,; and allow seven choices for where it goes. That increases the work factor by 21 -- still not that large a space for someone with a good botnet. The real question is what you're trying to protect. If the attacker's goal is to get *some* password, then I think he or she will succeed, because I think that very many people will follow my assumed pattern -- enough that the attacker has a good chance of winning. Sure, some people will pick stronger ones -- but that isn't the point of the exercise. Passwords and password rules are the *enemy* to most people. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Does anybody out there use Authentication Header (AH)?
On Jan 1, 2012, at 8:34 PM, TR Shaw wrote: John, Unlike AH, ESP in transport mode does not provide integrity and authentication for the entire IP packet. However, in Tunnel Mode, where the entire original IP packet is encapsulated with a new packet header added, ESP protection is afforded to the whole inner IP packet (including the inner header) while the outer header (including any outer IPv4 options or IPv6 extension headers) remains unprotected. Thus, you need AH to authenticate the integrity of the outer header packet information. Not quite. While the cryptographic integrity check does not cover the source and destination addresses -- the really interesting part of the outer header -- they're bound to the security association, and hence checked separately. Below is a note I sent to the IPsec mailing list in 1999. That, however, is not the question that is being asked here. The IPsecME working group has been over those issues repeatedly; your (non)-issue and (slightly) more substantive issues about IPv6 have been rehashed ad nauseam. The questions on the table now are, first, are operators using AH, and, if so, is ESP with NULL encryption an option? --Steve Bellovin, https://www.cs.columbia.edu/~smb One of the biggest reasons we have AH is because there _are_ some things in the middle of the IP header that need to be authenticated for them to be simultaneously safe and useful. The biggest example of this is source routing. In my opinion -- and I've posted this before -- there's nothing in the IP header that's both interesting and protected. You can't protect the source routing option, since the next-hop pointer changes en route. Appendix A of the AH draft recognizes that, and lists it as 'mutable -- zeroed'. When you look over the list of IP header fields and options that are either immutable or predictable, you find that the only things that are really of interest are the source and destination addresses and the security label. 
To the extent that we want to protect the addresses -- a point that's very unclear to me -- they're bound to the security association. The security label certainly should be. If you're using security labels (almost no one does) and you don't have the facilities to bind it at key management time, use tunnel mode and be done with it. I'll admit that I've never been in the operations business, but I've been told that source routing is a very useful tool for diagnosing some classes of problems. AH allows source routing to be useful again w/o opening the holes it opens. Well, yes, but not for the reason you specify. The problem with source routing is that it makes address-spoofing trivial. With AH, people will either verify certificate names -- the right way to do things -- or they'll bind a certificate to the source address, and use AH to verify the legitimacy of it. The route specified has nothing to do with it, and ESP with null encryption does the same thing. I don't like AH, either in concept or design (and in particular I don't like the way it commits layer violations). Its only real use, as I see it, is to answer Greg Minshall's objections -- it leaves the port numbers in the clear, and visible in a context-independent fashion. With null encryption, the monitoring station has to know that that was selected. But I'm very far from convinced that these issues are important enough to justify AH. All that notwithstanding, this is not a new issue. We've been over this ground before in the working group. Several of us, myself included, suggested deleting AH. We lost. Fine; so be it. Let's ship the documents and be done with it.
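The mutable-field handling described in the note above (Appendix A of the AH draft, later RFC 4302) can be sketched as follows. This is an illustrative model only, not a real IPsec implementation; the field names and sample values are hypothetical, chosen to show which IPv4 header fields are zeroed before the integrity check value is computed -- and hence are not really authenticated.

```python
# Illustrative model of AH's treatment of IPv4 header fields (RFC 4302,
# Appendix A): mutable fields are zeroed before the ICV is computed, so
# en-route changes to them are invisible. A sketch, not an IPsec stack.

MUTABLE = {"dscp", "ecn", "flags", "frag_offset", "ttl", "checksum"}

def zero_mutable(header: dict) -> dict:
    """Return the header as seen by the AH integrity check."""
    return {f: (0 if f in MUTABLE else v) for f, v in header.items()}

# Hypothetical header; addresses are documentation addresses.
hdr = {"version": 4, "dscp": 0x2E, "ecn": 0, "ttl": 57, "flags": 0x2,
       "frag_offset": 0, "checksum": 0xBEEF, "protocol": 51,
       "src": "192.0.2.1", "dst": "198.51.100.7"}

protected = zero_mutable(hdr)
# The addresses survive the zeroing -- which is why the remaining fields
# of interest reduce to source, destination, and the security label.
```

Note that the source routing option gets the same 'mutable -- zeroed' treatment, which is exactly the point being argued: what AH covers beyond ESP is less than it appears.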
Re: Does anybody out there use Authentication Header (AH)?
Yes, I know; I'm on that list. John Smith decided to see if reality matched theory -- always a good thing to do -- and asked here. Btw, it's not just this time that there is some support for it; AH was downgraded to MAY in RFC 4301 in 2005. On Jan 1, 2012, at 8:56 PM, Jack Kohn wrote: The __exact__ same discussion happening on IPsecME WG right now. http://www.ietf.org/mail-archive/web/ipsec/current/msg07346.html It seems there is yet another effort being made to retire AH so that we have less # of options to deal with. This time there is some support for it .. Jack On Mon, Jan 2, 2012 at 7:20 AM, Steven Bellovin s...@cs.columbia.edu wrote: On Jan 1, 2012, at 8:34 PM, TR Shaw wrote: John, Unlike AH, ESP in transport mode does not provide integrity and authentication for the entire IP packet. However, in Tunnel Mode, where the entire original IP packet is encapsulated with a new packet header added, ESP protection is afforded to the whole inner IP packet (including the inner header) while the outer header (including any outer IPv4 options or IPv6 extension headers) remains unprotected. Thus, you need AH to authenticate the integrity of the outer header packet information. Not quite. While the cryptographic integrity check does not cover the source and destination addresses -- the really interesting part of the outer header -- they're bound to the security association, and hence checked separately. Below is a note I sent to the IPsec mailing list in 1999. That, however, is not the question that is being asked here. The IPsecme working group has been over those issues repeatedly; your (non)-issue and (slightly) more substantive issues about IPv6 have been rehashed ad nauseam. The questions on the table now are, first, are operators using AH, and if so, is ESP with NULL encryption an option? 
--Steve Bellovin, https://www.cs.columbia.edu/~smb One of the biggest reasons we have AH is because there _are_ some things in the middle of the IP header that need to be authenticated for them to be simultaneously safe and useful. The biggest example of this is source routing. In my opinion -- and I've posted this before -- there's nothing in the IP header that's both interesting and protected. You can't protect the source routing option, since the next-hop pointer changes en route. Appendix A of the AH draft recognizes that, and lists it as 'mutable -- zeroed'. When you look over the list of IP header fields and options that are either immutable or predictable, you find that the only things that are really of interest are the source and destination addresses and the security label. To the extent that we want to protect the addresses -- a point that's very unclear to me -- they're bound to the security association. The security label certainly should be. If you're using security labels (almost no one does) and you don't have the facilities to bind it at key management time, use tunnel mode and be done with it. I'll admit that I've never been in the operations business, but I've been told that source routing is a very useful tool for diagnosing some classes of problems. AH allows source routing to be useful again w/o opening the holes it opens. Well, yes, but not for the reason you specify. The problem with source routing is that it makes address-spoofing trivial. With AH, people will either verify certificate names -- the right way to do things -- or they'll bind a certificate to the source address, and use AH to verify the legitimacy of it. The route specified has nothing to do with it, and ESP with null encryption does the same thing. I don't like AH, either in concept or design (and in particular I don't like the way it commits layer violations). 
Its only real use, as I see it, is to answer Greg Minshall's objections -- it leaves the port numbers in the clear, and visible in a context-independent fashion. With null encryption, the monitoring station has to know that that was selected. But I'm very far from convinced that these issues are important enough to justify AH. All that notwithstanding, this is not a new issue. We've been over this ground before in the working group. Several of us, myself included, suggested deleting AH. We lost. Fine; so be it. Let's ship the documents and be done with it. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Misconceptions, was: IPv6 RA vs DHCPv6 - The chosen one?
On Dec 29, 2011, at 5:30:16 PM, Masataka Ohta wrote: valdis.kletni...@vt.edu wrote: IGP snooping is not necessary if the host has only one next hop router. You don't need an IGP either at that point, no matter what some paper from years ago tries to assert. :) IGP is the way for routers to advertise their existence, though, in this simplest case, an incomplete proxy of relying on a default router works correctly. Beyond that, if there are multiple routers, having a default router and relying on the default router for forwarding to other routers and/or supplying ICMP redirects stops working when the default router, the single point of failure, goes down, which is the incompleteness and/or incorrectness predicted by the end-to-end argument paper. Considering that the reason to have multiple routers should be for redundancy, there is no point in using one of them as the default router. VRRP? The Router Discovery Protocol (RFC 1256). But given how much more reliable routers are today than in 1984, I'm not convinced it's that necessary these days. Developing a more complicated IGP proxy makes the incompleteness and the incorrectness not disappear but become more complicated. Masataka Ohta PS Note that the paper was written in 1984, whereas RFC 791 was written in 1981. There was a lot less understanding of the difference between hosts and routers in 1984 than there is today -- if nothing else, note how 4.2BSD and 4.3BSD considered all multihomed machines to be routers. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: IPv6 RA vs DHCPv6 - The chosen one?
On Dec 26, 2011, at 1:23:46 PM, Mark Radabaugh wrote: On 12/26/11 12:56 PM, valdis.kletni...@vt.edu wrote: On Mon, 26 Dec 2011 12:32:46 EST, Ray Soucy said: 2011/12/26 Masataka Ohtamo...@necom830.hpcl.titech.ac.jp: And, if RA is obsoleted, which is a point of discussion, there is no reason to keep so bloated an ND only for address resolution. By who? Sources please. A few people on NANOG complaining about RA is pretty far from deprecation of RA. Especially when some of the biggest IPv6 networks out there are still using it pretty heavily. (C'mon you guys, *deploy* already. It's pretty sad when people are arguing about stuff like this, and a frikkin' cow college out in the boonies pushing 300-400mbits/sec of IPv6 off-campus is still a large deployment. It's embarrassing for the industry as a whole) Find me some decent consumer CPE and I would be more than happy to deploy IPv6. So far the choices I have found for consumer routers are pathetic. A fair number of them still have IPv4 issues. Not quite what you're asking for, but I was very pleasantly surprised to see that some (at least) Brother printers support IPv6. Progress... --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: what if...?
On Dec 22, 2011, at 7:04 PM, Jeroen van Aart wrote: Marshall Eubanks wrote: Does your Mom call you up every time she gets a dialog box complaining about an invalid certificate? If she has been conditioned just to click OK when that happens, then she probably can't. Everyone I have observed clicks OK or confirm exception (if I remember the phrase correctly) as soon as possible. Sadly I think only a few security-conscious (IT) people will actually think twice and reject it if they don't trust it. That to me proves this aspect of SSL is somewhat flawed. But then I am preaching to the choir. :-) See the definition of dialog box at http://www.w3.org/2006/WSC/wiki/Glossary --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Traceroute explanation
On Dec 7, 2011, at 2:51:08 PM, Meftah Tayeb wrote: Big thanks for that, but I am testing that for one day :) Can you do an astraceroute or manually translate those addresses into AS#s? That is, might level3 and tinet be using multiple AS#s, in which case this isn't unreasonable? - Original Message - From: Fred Baker f...@cisco.com To: Meftah Tayeb tayeb.mef...@gmail.com Cc: nanog@nanog.org Sent: Thursday, December 08, 2011 11:23 PM Subject: Re: Traceroute explanation This is just a guess, but I'll bet the route changed while you were measuring it. Traceroute sends a request, awaits a response, sends a request, ... Suppose that the route was 172.28.0.1 - 10.16.0.2 - 41.200.16.1 - 172.17.2.25 - 213.140.58.10 - 195.22.195.125 - 4.69.151.13 - 213.200.68.61 - somewhere else and after the test got that far, two systems got inserted into the path before level3, resulting in the route entering level3 at a different point, 4.69.141.249. What you now have is 172.28.0.1 - 10.16.0.2 - 41.200.16.1 - 172.17.2.25 - 213.140.58.10 - 195.22.195.125 - unknown - unknown - 4.69.141.249 - 77.67.66.154 - and so on The effect would be to get a result like this. Next time you see something like this, suggestion: repeat the traceroute and see what you get. 
On Dec 7, 2011, at 12:12 PM, Meftah Tayeb wrote: Hey folks, I see a strange traceroute there. Tracing route to www.rri.ro [193.231.72.52] over a maximum of 30 hops:
 1     2 ms    1 ms    1 ms  172.28.0.1
 2     1 ms    1 ms    1 ms  localhost [10.16.0.2]
 3    10 ms   10 ms   13 ms  41.200.16.1
 4    11 ms   10 ms   11 ms  172.17.2.25
 5    21 ms   21 ms   21 ms  213.140.58.10
 6    34 ms   31 ms   55 ms  pos14-0.palermo9.pal.seabone.net [195.22.197.125]
 7    34 ms   33 ms   35 ms  ae-5-6.bar2.marseille1.level3.net [4.69.151.13]
 8   106 ms   68 ms   67 ms  xe-1-1-0.mil10.ip4.tinet.net [213.200.68.61]
 9    74 ms   73 ms   74 ms  ae-1-12.bar1.budapest1.level3.net [4.69.141.249]
10    63 ms   63 ms   79 ms  euroweb-gw.ip4.tinet.net [77.67.66.154]
11    85 ms   84 ms   84 ms  v15-core1.stsisp.ro [193.151.28.1]
12   100 ms  100 ms  102 ms  inet-crli1.qrli1.buh.ew.ro [81.24.28.226]
13    81 ms   81 ms   81 ms  193.231.72.10
14    92 ms   92 ms   93 ms  ip4-89-238-225-90.euroweb.ro [89.238.225.90]
15    89 ms   89 ms   89 ms  webrri.rri.ro.72.231.193.in-addr.arpa [193.231.72.52]
Trace complete. C:\Documents and Settings\TAYEB Seabone, then level3, then Tinet, then level3, then tinet? If that is some routing stuff that I don't know, please let me know :) I never saw that before. Meftah Tayeb IT Consulting http://www.tmvoip.com/ phone: +21321656139 Mobile: +213660347746 __ Information from ESET NOD32 Antivirus, version of virus signature database 6695 (20111208) __ The message was checked by ESET NOD32 Antivirus. http://www.eset.com --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Traceroute explanation
I don't know what platform you're using, but there's a separate command. See http://www.shrubbery.net/astraceroute/ . If you're using Linux, there's probably a package in your favorite repository. There seem to be other variants floating around the net. If you're using Windows, I have no idea what's available. On Dec 7, 2011, at 2:56:16 PM, Meftah Tayeb wrote: Please tell me how to? I don't know astraceroute :) - Original Message - From: Steven Bellovin s...@cs.columbia.edu To: Meftah Tayeb tayeb.mef...@gmail.com Cc: Fred Baker f...@cisco.com; nanog@nanog.org Sent: Thursday, December 08, 2011 11:33 PM Subject: Re: Traceroute explanation On Dec 7, 2011, at 2:51:08 PM, Meftah Tayeb wrote: Big thanks for that, but I am testing that for one day :) Can you do an astraceroute or manually translate those addresses into AS#s? That is, might level3 and tinet be using multiple AS#s, in which case this isn't unreasonable? - Original Message - From: Fred Baker f...@cisco.com To: Meftah Tayeb tayeb.mef...@gmail.com Cc: nanog@nanog.org Sent: Thursday, December 08, 2011 11:23 PM Subject: Re: Traceroute explanation This is just a guess, but I'll bet the route changed while you were measuring it. Traceroute sends a request, awaits a response, sends a request, ... Suppose that the route was 172.28.0.1 - 10.16.0.2 - 41.200.16.1 - 172.17.2.25 - 213.140.58.10 - 195.22.195.125 - 4.69.151.13 - 213.200.68.61 - somewhere else and after the test got that far, two systems got inserted into the path before level3, resulting in the route entering level3 at a different point, 4.69.141.249. What you now have is 172.28.0.1 - 10.16.0.2 - 41.200.16.1 - 172.17.2.25 - 213.140.58.10 - 195.22.195.125 - unknown - unknown - 4.69.141.249 - 77.67.66.154 - and so on The effect would be to get a result like this. Next time you see something like this, suggestion: repeat the traceroute and see what you get. 
On Dec 7, 2011, at 12:12 PM, Meftah Tayeb wrote: Hey folks, I see a strange traceroute there. Tracing route to www.rri.ro [193.231.72.52] over a maximum of 30 hops:
 1     2 ms    1 ms    1 ms  172.28.0.1
 2     1 ms    1 ms    1 ms  localhost [10.16.0.2]
 3    10 ms   10 ms   13 ms  41.200.16.1
 4    11 ms   10 ms   11 ms  172.17.2.25
 5    21 ms   21 ms   21 ms  213.140.58.10
 6    34 ms   31 ms   55 ms  pos14-0.palermo9.pal.seabone.net [195.22.197.125]
 7    34 ms   33 ms   35 ms  ae-5-6.bar2.marseille1.level3.net [4.69.151.13]
 8   106 ms   68 ms   67 ms  xe-1-1-0.mil10.ip4.tinet.net [213.200.68.61]
 9    74 ms   73 ms   74 ms  ae-1-12.bar1.budapest1.level3.net [4.69.141.249]
10    63 ms   63 ms   79 ms  euroweb-gw.ip4.tinet.net [77.67.66.154]
11    85 ms   84 ms   84 ms  v15-core1.stsisp.ro [193.151.28.1]
12   100 ms  100 ms  102 ms  inet-crli1.qrli1.buh.ew.ro [81.24.28.226]
13    81 ms   81 ms   81 ms  193.231.72.10
14    92 ms   92 ms   93 ms  ip4-89-238-225-90.euroweb.ro [89.238.225.90]
15    89 ms   89 ms   89 ms  webrri.rri.ro.72.231.193.in-addr.arpa [193.231.72.52]
Trace complete. C:\Documents and Settings\TAYEB Seabone, then level3, then Tinet, then level3, then tinet? If that is some routing stuff that I don't know, please let me know :) I never saw that before. Meftah Tayeb IT Consulting http://www.tmvoip.com/ phone: +21321656139 Mobile: +213660347746 --Steve Bellovin, https://www.cs.columbia.edu/~smb
--Steve Bellovin, https://www.cs.columbia.edu/~smb
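Absent an astraceroute binary, the manual translate-into-AS#s suggestion above can be crudely approximated by grouping hops on their reverse-DNS domains (hostnames below are taken from the trace in this thread). This is only a sketch: a real mapping needs an IP-to-ASN lookup service, and one network can legitimately run under several AS numbers.

```python
# Crude stand-in for astraceroute: group traceroute hops by the operator
# implied by their reverse-DNS domain. Real AS mapping needs an actual
# IP-to-ASN lookup; this sketch only eyeballs the rDNS suffix.

def operator(hostname: str) -> str:
    """Return the last two labels of a hostname, e.g. 'level3.net'."""
    parts = hostname.rstrip(".").split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else hostname

hops = [
    "pos14-0.palermo9.pal.seabone.net",
    "ae-5-6.bar2.marseille1.level3.net",
    "xe-1-1-0.mil10.ip4.tinet.net",
    "ae-1-12.bar1.budapest1.level3.net",
    "euroweb-gw.ip4.tinet.net",
]
path = [operator(h) for h in hops]
# An operator that disappears and then reappears in the path suggests
# either a route change mid-trace (Fred's guess) or one network using
# multiple AS numbers.
print(path)
```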
Re: [fyo...@insecure.org: C|Net Download.Com is now bundling Nmap with malware!]
On Dec 6, 2011, at 12:34:31 PM, William Allen Simpson wrote: On 12/6/11 12:00 PM, Eric Tykwinski wrote: Maybe it's just me, but I would think that simply getting them listed on stopbadware.org and other similar sites would probably have much more of an effect. The bad publicity can cause them to change tactics, but it takes some time. I've seen much quicker results from blacklisting on Google and other search engines. I've reported it as a malware site via Firefox. Have you? But the whole site should be scanned for other/similar malware, and blocked accordingly. Probably a harder problem, as it gives different downloads depending on browser and OS. Per the Krebs on Security link that Kyle just posted (and beat me to it), the installer is already flagged as malware by a number of different scanners. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: [fyo...@insecure.org: C|Net Download.Com is now bundling Nmap with malware!]
F*ck them! If anyone knows a great copyright attorney in the U.S., please send me the details or ask them to get in touch with me. Hmm -- did you say copyright? I wonder what would happen if you sent them a DMCA takedown notice. To quote Salvor Hardin, It's a poor atom blaster that doesn't point both ways. (And there's another Hardin quote that seems particularly apt when talking about wielding the DMCA: Never let your sense of morals prevent you from doing what is right.) --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: IPv6 prefixes longer than /64: are they possible in DOCSIS networks?
On Nov 28, 2011, at 4:51:52 PM, Owen DeLong wrote: On Nov 28, 2011, at 7:29 AM, Ray Soucy wrote: It's a good practice to reserve a 64-bit prefix for each network. That's a good general rule. For point to point or link networks you can use something as small as a 126-bit prefix (we do). Technically, absent buggy {firm,soft}ware, you can use a /127. There's no actual benefit to doing anything longer than a /64 unless you have buggy *ware (ping pong attacks only work against buggy *ware), and there can be some advantages to choosing addresses other than ::1 and ::2 in some cases. If you're letting outside packets target your point-to-point links, you have bigger problems than neighbor table attacks. If not, then the neighbor table attack is a bit of a red herring. The context is DOCSIS, i.e., primarily residential cable modem users, and the cable company ISPs do not want to spend time on customer care and hand-holding. How are most v6 machines configured by default? That is, what did Microsoft do for Windows Vista and Windows 7? If they're set for stateless autoconfig, I strongly suspect that most ISPs will want to stick with that and hand out /64s to each network. (That's apart from the larger question of why they should want to do anything else...) --Steve Bellovin, https://www.cs.columbia.edu/~smb
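The /127-versus-/64 point above is easy to check with Python's standard ipaddress module; a small sketch using documentation prefixes (2001:db8::/32), purely illustrative:

```python
import ipaddress

# A /127 point-to-point link has exactly two addresses -- no leftover
# address space for a neighbor-discovery "ping pong" between buggy
# implementations to bounce around in.
p2p = ipaddress.ip_network("2001:db8:0:1::/127")
print(p2p.num_addresses)  # 2

# A /64, by contrast, leaves 2**64 addresses on the link; exhaustion-style
# neighbor-table attacks rely on that sparseness -- which is moot if
# outside packets can't target the link in the first place.
lan = ipaddress.ip_network("2001:db8:0:2::/64")
print(lan.num_addresses == 2**64)  # True
```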
Re: First real-world SCADA attack in US
On Nov 22, 2011, at 7:51:59 PM, valdis.kletni...@vt.edu wrote: On Tue, 22 Nov 2011 13:32:23 -1000, Michael Painter said: http://jeffreycarr.blogspot.com/2011/11/latest-fbi-statement-on-alleged.html And In addition, DHS and FBI have concluded that there was no malicious traffic from Russia or any foreign entities, as previously reported. It's interesting to read the rest of the text while doing some deconstruction: There is no evidence to support claims made in the initial Fusion Center report ... that any credentials were stolen, or that the vendor was involved in any malicious activity that led to a pump failure at the water plant. Notice that they're carefully framing it as no evidence that credentials were stolen - while carefully tap-dancing around the fact that you don't need to steal credentials in order to totally pwn a box via an SQL injection or a PHP security issue, or to log into a box that's still got the vendor-default userid/passwords on them. You don't need to steal the admin password if Google tells you the default login is admin/admin ;) No evidence that the vendor was involved - *HAH*. When is the vendor *EVER* involved? The RSA-related hacks of RSA's customers are conspicuous by their uniqueness. And I've probably missed a few weasel words in there... They do state categorically that After detailed analysis, DHS and the FBI have found no evidence of a cyber intrusion into the SCADA system of the Curran-Gardner Public Water District in Springfield, Illinois. I'm waiting to see Joe Weiss's response. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: First real-world SCADA attack in US
On Nov 22, 2011, at 8:08:58 PM, Steven Bellovin wrote: On Nov 22, 2011, at 7:51:59 PM, valdis.kletni...@vt.edu wrote: On Tue, 22 Nov 2011 13:32:23 -1000, Michael Painter said: http://jeffreycarr.blogspot.com/2011/11/latest-fbi-statement-on-alleged.html And In addition, DHS and FBI have concluded that there was no malicious traffic from Russia or any foreign entities, as previously reported. It's interesting to read the rest of the text while doing some deconstruction: There is no evidence to support claims made in the initial Fusion Center report ... that any credentials were stolen, or that the vendor was involved in any malicious activity that led to a pump failure at the water plant. Notice that they're carefully framing it as no evidence that credentials were stolen - while carefully tap-dancing around the fact that you don't need to steal credentials in order to totally pwn a box via an SQL injection or a PHP security issue, or to log into a box that's still got the vendor-default userid/passwords on them. You don't need to steal the admin password if Google tells you the default login is admin/admin ;) No evidence that the vendor was involved - *HAH*. When is the vendor *EVER* involved? The RSA-related hacks of RSA's customers are conspicuous by their uniqueness. And I've probably missed a few weasel words in there... They do state categorically that After detailed analysis, DHS and the FBI have found no evidence of a cyber intrusion into the SCADA system of the Curran-Gardner Public Water District in Springfield, Illinois. I'm waiting to see Joe Weiss's response. See http://www.wired.com/threatlevel/2011/11/scada-hack-report-wrong/ --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: First real-world SCADA attack in US
On Nov 21, 2011, at 4:30 PM, Mark Radabaugh wrote: Probably nowhere near that sophisticated. More like somebody owned the PC running Windows 98 being used as an operator interface to the control system. Then they started poking buttons on the pretty screen. Somewhere there is a terrified 12 year old. Please don't think I am saying infrastructure security should not be improved - it really does need help. But I really doubt this was anything truly interesting. That's precisely the problem: it does appear to have been an easy attack. (My thoughts are at https://www.cs.columbia.edu/~smb/blog/2011-11/2011-11-18.html) --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: using IPv6 address block across multiple locations
On Oct 31, 2011, at 12:30:49 PM, Joel jaeggli wrote: On 10/31/11 03:43, Jeroen Massar wrote: On 2011-10-31 08:56, Dmitry Cherkasov wrote: Hello, Please advise what is the best practice to use an IPv6 address block across distributed locations. You go to multiple RIRs and get multiple prefixes. Heck, you apparently can even get multiple disjunct prefixes from the same RIR. There went the whole idea of aggregation. Or you could just get an aggregatable block of the appropriate size from one RIR and deaggregate it as necessary, which should be the normal course of action... One important question: if data for one of your locations were to be sent from somewhere that is closer (as the packets fly) to another, would you prefer that it be sent over your VPN or over the open Internet? The latter may be cheaper for you, since you don't have to pay for that bandwidth; the former may be more secure if your VPN is encrypted. To send stuff only over the open Internet in this situation, use a separate /48 for each location. To send stuff only over your VPN, put everything in a single /44 or so and advertise only it. Advertising the /44 and having each location advertise its own /48 within that /44 will usually cause the traffic to go over the open Internet, with your VPN as backup in case of reachability problems if some ISPs won't carry the longer /48s because of their own policies. --Steve Bellovin, https://www.cs.columbia.edu/~smb
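The aggregate-plus-more-specifics behavior described above comes down to longest-prefix match. Here is an illustrative sketch, not anyone's actual configuration: the routing table, next-hop descriptions, and documentation prefixes (2001:db8::/32) are all hypothetical.

```python
import ipaddress

# Longest-prefix-match sketch: each site advertises its own /48, and the
# whole organization also advertises the covering /44. Networks that
# accept the /48s deliver to the site directly over the open Internet;
# networks that filter long prefixes fall back to the /44 (and the
# traffic then rides the internal VPN from wherever the /44 lands).
table = {
    ipaddress.ip_network("2001:db8:a000::/44"): "aggregate (then internal VPN)",
    ipaddress.ip_network("2001:db8:a001::/48"): "site 1 directly",
    ipaddress.ip_network("2001:db8:a002::/48"): "site 2 directly",
}

def lookup(addr: str) -> str:
    """Return the next hop for addr: the most specific covering prefix."""
    ip = ipaddress.ip_address(addr)
    matches = [n for n in table if ip in n]
    return table[max(matches, key=lambda n: n.prefixlen)]

print(lookup("2001:db8:a002::1"))  # the /48 wins: direct to site 2
print(lookup("2001:db8:a00f::1"))  # no /48 covers it: the /44 aggregate
```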
Re: 13 years ago today - October 16, 1998...
On Oct 15, 2011, at 11:20:58 PM, Jay Ashworth wrote: - Original Message - From: Rodney Joffe rjo...@centergate.com Subject: 13 years ago today - October 16, 1998... we lost Jon. It feels like just yesterday. http://www.apps.ietf.org/rfc/rfc2468.html My path didn't cross Jon's much... but he was nice enough to reserve the really cool RFC number that graces my AFJ contribution from 1997 -- 3 or 4 RFCs with higher numbers came out in March. Ah, I'm not the only one he did that for. I asked if the IAB/IESG statement on crypto could be RFC 1984. He told me that he never reserved RFC numbers -- but that coincidences could happen... --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: East Coast Earthquake 8-23-2011
On Aug 24, 2011, at 9:44:20 AM, Patrick W. Gilmore wrote: On Aug 24, 2011, at 8:55 AM, JC Dill wrote: On 23/08/11 3:13 PM, William Herrin wrote: A. Our structures aren't built to seismic zone standards. Our construction workers aren't familiar with *how* to build to seismic zone standards. We don't secure equipment inside our buildings to seismic zone standards. They should be. They should be. You should. Earthquakes can happen anywhere. There's no excuse to fail to build/secure to earthquake standards. Tornadoes can happen anywhere; there's no excuse to fail to build/secure for tornadoes. [Etc.] Things that cost money are not done unless the probability of the danger is higher than vanishingly small. This temblor - at 5.8 with no injuries or fatalities - was the largest earthquake on the entire east coast in 67 years, and the largest in VA in well over a century. Think of the _trillions_ of dollars which could have been put into healthcare, public safety, hell, better networking equipment :) we could have used instead of making all buildings on the east coast earthquake safe. It's more complex than that: http://www.wired.com/wiredscience/2011/08/east-coast-earthquakes/ And eastern cities can experience quakes of a magnitude noteworthy even on the West Coast -- see http://en.wikipedia.org/wiki/Charleston,_South_Carolina#Postbellum_era_.281865.E2.80.931945.29 --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: How long is your rack?
On Aug 15, 2011, at 10:12:21 AM, Randy Bush wrote: I've always wondered if the next cisco/juniper 0 day will be delivered via a set of exploits delivered via a link posted to NANOG. :) Maybe I'll do a talk at DEFCON next year about that. more likely a 'shortened' url. how anyone can click those is beyond me. I'm curious what your objection is. Mine is privacy -- the owner of the shortening site gets to see every place you visit using one of those. I don't think there's a significant incremental security risk, because the URL you click on doesn't tell you what you'll receive in any event. Case in point: https://www.cs.columbia.edu/~smb/SMBlog-in-PDF.pdf does *not* yield a PDF. (As far as I know, it's a completely safe URL to click on, but I can't guarantee that someone else didn't hack my site. I, at least, haven't put any nasties there.) Yes, when you avoid shortened URLs you get some assurance of the owner of the content. But given the rate of hacking -- is anyone really safe from a determined amateur attack, let alone state-sponsored nastiness? -- and given the amount of third-party content served up by virtually all ad-containing sites, you really have no idea what you're going to receive when you click on any link. --Steve Bellovin, http://www.cs.columbia.edu/~smb
Re: NANOGers home data centers - What's in your closet?
The holy grail I'm searching for now? A GigE switch with POE, unmanaged is ok, and probably preferred from a price perspective; but with NO FAN. I can't help with the POE part. I have a 16-port D-Link DGS-1016D -- GigE, no fan, unmanaged. --Steve Bellovin, http://www.cs.columbia.edu/~smb
Re: NANOGers home data centers - What's in your closet?
On Aug 12, 2011, at 10:17:39 PM, Joe Greco wrote: What, nobody wired their abode with fiber? Am I the only one here? I ran a bunch of fiber from the telco rack to the server rack to reduce the risk of damage to expensive servers ... it's likely to be meaningless but it is just a little extra precaution. The server rack is at least a little bit isolated from everything else. That's overkill. I have very little in the house except what's needed to support ordinary client machines for everyone in the house. That means GigE to several locations, some of which have small GigE switches of their own. For example, my wife's computer is colocated with a network-connected color printer/scanner/fax. The basement location has a WiFi access point, the home backup server (though lately, I've started using a colo machine for that), etc. For me -- two generations of laptops (one as backup for the other), and a Mac Mini as backup desktop. Then there's another access point, a BW laser printer, etc. But anything noisy? Nope. --Steve Bellovin, http://www.cs.columbia.edu/~smb
Re: Comcast Bussiness Class and GRE Tunnels
On Jul 26, 2011, at 11:07:37 AM, Nate Burke wrote: Hello, I'm hoping that someone here might have run into a similar issue and might be able to offer me some pointers. I have a customer that I am providing redundant paths to, one link over a microwave connection, and a backup link over a Comcast Business Class connection. Everything on the microwave link is working fine. On the Comcast connection, I have a static IP from Comcast, and I want to set up a vendor-specific GRE tunnel (Mikrotik EoIP) from my NOC to the Comcast static IP address. It looks like the SPI firewall inside the SMC gateway required by Comcast is blocking the GRE packets; I'm basing this on the fact that when I power cycle the modem, I get 1 ICMP packet through the GRE tunnel while the modem is booting up, then it stops again. I have gotten to Tier 2 support, who swears that all firewalls on the SMC gateway are disabled. As a workaround, I was able to establish a PPTP tunnel to my NOC, however it seems like the tunnel will only run for a few hours, then becomes slow to the point of being unusable. In my mind this would be no different than setting up a permanent VPN back to a corporate office, which I would think happens all the time, so I'm not sure why I'm running into issues with it. I had to make the LAN end of the tunnel the DMZ host (under Firewall settings on my SMC). --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Strange TCP connection behavior 2.0 RC2 (+3)
On Jun 29, 2011, at 8:59:49 AM, Ryan Malayter wrote: On Jun 28, 3:35 pm, Cameron Byrne cb.li...@gmail.com wrote: AFAIK, Verizon and all the other 4 largest mobile networks in the USA have transparent TCP proxies in place. Do you have a reference for that information? Neither AT&T nor Sprint seem to have transparent *HTTP* proxies according to http://www.lagado.com/tools/cache-test. I would have thought that would be the first and most important optimization a mobile carrier could make. I used to see mobile-optimized images and HTTP compression for sites that weren't using it at the origin on Verizon's 3G network a few years ago, so Verizon clearly had some form of HTTP proxy in effect. Aside from that, how would one check for a transparent *TCP* proxy? By looking at IP or TCP option fingerprints at the receiver? Or comparing TCP ACK RTT versus ICMP ping RTT? Or see what bandwidth is like if you use IPsec or the like. --Steve Bellovin, https://www.cs.columbia.edu/~smb
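One concrete version of the fingerprint idea floated above is TTL-distance inference: guess the sender's initial TTL from a small set of common defaults, and compare the implied hop count for a TCP SYN-ACK against that of an ICMP echo reply from the same host. This sketch shows only the arithmetic under that assumption; the packet capture itself, and the sample TTL values, are hypothetical.

```python
# Sketch of TTL-based middlebox detection: infer hop distance from a
# received TTL by assuming the sender used one of the common initial
# TTLs (64, 128, or 255). If the TCP handshake appears to terminate much
# "closer" than ICMP replies from the same host, something in the middle
# is likely terminating TCP. Illustrative only; no real capture here.

INITIAL_TTLS = (64, 128, 255)

def hops(received_ttl: int) -> int:
    """Estimated hop count, assuming a common initial TTL."""
    initial = min(t for t in INITIAL_TTLS if t >= received_ttl)
    return initial - received_ttl

# Hypothetical observation: ICMP echo reply arrives with TTL 46, but the
# SYN-ACK for the same destination arrives with TTL 62.
icmp_hops, tcp_hops = hops(46), hops(62)
print(icmp_hops, tcp_hops)  # 18 vs 2 -- a transparent proxy is plausible
```

The IPsec suggestion in the message is complementary: a proxy can't splice itself into an encrypted transport, so a large throughput difference between IPsec and cleartext TCP also points at a middlebox.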
Re: Address Assignment Question
On Jun 20, 2011, at 5:52:27 PM, John Levine wrote: They have inquired about IPv6 already, but it's only gone so far as that. I would gladly give them a /64 and be done with it, but my concern is that they are going to want several /64 subnets for the same reason and I don't really *think* it's a legitimate reason. No legitimate mailer needs more than one /64 per physical network. Same reason. Note that the OP spoke of assigning them one /64, rather than one per physical net. I also note that ARIN, at least, suggests /56 for small sites, those expected to need only a few subnets over the next 5 years, which would seem to include this site even without their justification. All they need -- or, I suspect, need to assert -- is to have multiple physical networks. They can claim a production net, a DMZ, a management net, a back-end net for their databases, a developer net, and no one would question an architecture like that. --Steve Bellovin, https://www.cs.columbia.edu/~smb
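The arithmetic behind the /56 suggestion is easy to check: a /56 contains 256 /64 subnets, far more than the handful of networks listed above. A minimal sketch using Python's standard `ipaddress` module, with the documentation prefix 2001:db8::/32 standing in for a real assignment:

```python
import ipaddress

# A /56 assignment holds 2^(64-56) = 256 /64 subnets -- plenty for a
# production net, DMZ, management net, back-end net, and developer net.
# 2001:db8:0:100::/56 is a placeholder from the documentation prefix.
site = ipaddress.ip_network("2001:db8:0:100::/56")
subnets = list(site.subnets(new_prefix=64))

print(len(subnets))    # → 256
print(subnets[0])      # → 2001:db8:0:100::/64
```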
Re: Address Assignment Question
On Jun 20, 2011, at 10:22:45 PM, John R. Levine wrote: All they need -- or, I suspect, need to assert -- is to have multiple physical networks. They can claim a production net, a DMZ, a management net, a back-end net for their databases, a developer net, and no one would question an architecture like that. My impression is that this is about a client whose stuff is all hosted in a single data center. Then take out the developer net (or make it a VPN), but the rest remains. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Yup; the Internet is screwed up.
On Jun 11, 2011, at 5:34:10 AM, Jeroen van Aart wrote: Ricardo Ferreira wrote: Funny how the title refers to the Internet globally when the article is specific about the USA. I live in Europe and we have at home 100 Mbps. Mid-sized city of 500k people. Some ISPs even spread WiFi across town so that subscribers can have internet access outside their homes. Though it's nice to have, why would one *need* 100 Mbps at home? I understand the necessity of internet access and agree everyone has a right to it. But that necessity can be perfectly fulfilled with a stable internet connection of a reasonable speed (say low to mid range DSL speed tops). When I was in grad school, the director of the computer center (remember those?) felt that there was no need for 1200 bps modems -- 300 bps was fine, since no one could read the scrolling output any faster than that anyway. Right now, I'm running an rsync job to back up my laptop's hard drive to my office. I hope it finishes before I leave today for Denver. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: IPv6 day fun is beginning!
On Jun 7, 2011, at 7:22:58 PM, john.herb...@usc-bt.com wrote: No issues connecting to FB for me on IPv6 (both to www.v6.facebook.com and to the AAAA returned by www.facebook.com now). Interesting (perhaps) side note - www.facebook.com has a AAAA, but facebook.com does not. Google / Youtube records are up and running nicely also. J. I was hoping for a v6 Google logo. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: IT Survey Request: Win an iPad2 or Kindle!
On May 27, 2011, at 10:24:22 AM, Michael Holstein wrote: I am a student at UCLA Anderson School of Management and my MBA field study team is working on a research project that involves conducting a survey of CIOs, IT Managers/Administrators, IT Engineers to understand challenges in managing IT infrastructure. Could you please help by filling out this really short survey? A more cynical view would be as an MBA student, you're researching cheaper ways to recruit contact information and current projects. A Kindle is $139 .. that's pretty cheap for a list of people/projects considering what that lead information is worth to vendors of the solutions to the challenges you ask about. I know nothing of this student, the school, or the study. I will say -- as an academic who frequently does research involving human subjects, generally including surveys -- that this is a very normal way to proceed. Finding enough subjects is always hard; it's the single biggest obstacle we encounter. Paying people is the usual approach, but for a group like this, the usual nominal amount we pay undergrads ($10-25) isn't enough. Other common approaches -- flyers all over campus, offers on Mechanical Turk, ads on Facebook or Google Adwords, etc. -- won't work if you're trying to get people with specialized knowledge or skills. What's left? I might add that by federal law, all government-funded research involving human subjects has to be approved by an IRB -- an Institutional Review Board -- and many universities (including my own) impose that requirement on all research, even if no federal funds are involved. While it's certainly not rare to do studies that involve (initial) deceit of the subjects (you want them reacting normally, rather than giving the answers they think you want), the IRB has to see the full protocol and experiment design. You may be right, of course; I can't say. I haven't contacted the student's professor nor have I asked to see the IRB protocol. 
Given that any legitimate study of this type would be conducted along the lines explained in the original post, I'd say that the burden of proof is on you. (Of course, as a security guy I know full well that that notion of normal behavior is the best way to hide an attack.) References: http://www.usenix.org/events/upsec08/tech/full_papers/garfinkel/garfinkel.pdf https://www.cs.columbia.edu/~smb/papers/wecsr2011-irb.pdf --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Rogers Canada using 7.0.0.0/8 for internal address space
On May 24, 2011, at 9:29:06 PM, Jay Ashworth wrote: - Original Message - From: Jimmy Hess mysi...@gmail.com On Tue, May 24, 2011 at 4:34 PM, vinny_abe...@dell.com wrote: I think those within the organization that deploy those vehicles or are Navy SEALs might sit at different lunch tables than the guys worried about IP address collisions. ;-) The F/A-18 Hornets and F/A-22 Raptors are all well and good, but that's old technology. The folks in charge of the MQ-1 predator drones might sit closer to the guys worried about the IP addresses. And automated drone strikes can always be blamed on a malfunction caused by the hijacking. If packets that control armed drones cross any router that has access even to SIPRnet, much less the Internet, someone's getting relieved. http://www.eweek.com/c/a/Security/Militants-Hack-Unencrypted-Drone-Feeds-477219/ --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.
On May 19, 2011, at 9:48:35 AM, Jamie Bowden wrote: I know you're having fun with him, but I think what the original poster had in mind was more like thinking of a file as just a string of numbers. Create an equation that generates that string of numbers, send equation, regenerate string on other end. Of course, if it was that easy, someone would already have done it Yes. I guess I was too terse with my answer, but this is known as Kolmogorov complexity. It's a well-known concept, and in general you can't construct such equations/programs/what-have-yous. Wikipedia even gives a proof of that... --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.
On May 18, 2011, at 4:07:32 PM, Landon Stewart wrote: Let's say you had a file that was 1,000,000,000 characters consisting of 8,000,000,000 bits. What if instead of transferring that file through the interwebs you transmitted a mathematical equation to tell a computer on the other end how to *construct* that file. First you'd feed the file into a cruncher of some type to reduce the pattern of 8,000,000,000 bits into an equation somehow. Sure this would take time, I realize that. The equation would then be transmitted to the other computer where it would use its mad-math-skillz to *figure out the answer* which would theoretically be the same pattern of bits. Thus the same file would emerge on the other end. The real question here is how long would it take for a regular computer to do this kind of math? Just a weird idea I had. If it's a good idea then please consider this intellectual property. LOL http://en.wikipedia.org/wiki/Kolmogorov_complexity --Steve Bellovin, https://www.cs.columbia.edu/~smb
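The Kolmogorov-complexity point in these two messages can be made concrete with ordinary compression, which is a computable upper bound on a string's shortest description: a highly regular file really does have a short "equation," while (almost all) random data does not, and no general procedure can find the shortest description for arbitrary input. A minimal sketch:

```python
import os
import zlib

# Compression as an upper bound on Kolmogorov complexity: a repetitive
# string has a tiny description, random bytes essentially do not.
regular = b"0" * 10_000        # the "pattern" the OP hopes to crunch
random_ = os.urandom(10_000)   # typical data: incompressible

print(len(zlib.compress(regular, 9)))   # tiny -- the pattern IS the equation
print(len(zlib.compress(random_, 9)))   # ~10,000 -- no shorter description
```

This is exactly why the scheme only "works" for files that are already highly structured; for a typical file, the shortest program that reproduces it is about as long as the file itself.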
Re: user-relative names - was:[Re: Yahoo and IPv6]
On May 17, 2011, at 10:30:13 PM, Joel Jaeggli wrote: On May 17, 2011, at 6:09 PM, Scott Weeks wrote: --- joe...@bogus.com wrote: From: Joel Jaeggli joe...@bogus.com On May 17, 2011, at 4:30 PM, Scott Brim wrote: On May 17, 2011 6:26 PM, valdis.kletni...@vt.edu wrote: On Tue, 17 May 2011 15:04:19 PDT, Scott Weeks said: What about privacy concerns Privacy is dead. Get used to it. -- Scott McNealy Forget that attitude, Valdis. Just because privacy is blown at one level doesn't mean you give it away at every other one. We establish the framework for recovering privacy and make progress step by step, wherever we can. Someday we'll get it all back under control. if you put something in the dns you do so because you want to be discovered. scoping the nameservers such that they only express certain resource records to queriers in a particular scope is fairly straightforward. The article was not about DNS. It was about Persistent Personal Names for Globally Connected Mobile Devices where Users normally create personal names by introducing devices locally, on a common WiFi network for example. Once created, these names remain persistently bound to their targets as devices move. Personal names are intended to supplement and not replace global DNS names. you mean like mac addresses? those have a tendency to follow you around in ipv6... This is why RFC 3041 (replaced by 4941) was written, 10+ years ago. The problem is that it's not enabled by default on many (possibly all) platforms, so I have to have # cat /etc/sysctl.conf net.inet6.ip6.use_tempaddr=1 set on my Mac. --Steve Bellovin, https://www.cs.columbia.edu/~smb
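The "MAC addresses follow you around" remark refers to the classic SLAAC interface identifier: modified EUI-64 flips the universal/local bit of the MAC and splices ff:fe into the middle, so the hardware address is directly recoverable from the IPv6 address — which is precisely what RFC 4941 temporary addresses avoid. A minimal sketch of the EUI-64 construction, using a made-up example MAC:

```python
# Modified EUI-64 interface identifier from a 48-bit MAC (RFC 4291):
# flip bit 0x02 of the first octet, insert 0xFFFE between the OUI and
# the device half. The MAC below is a made-up example.

def eui64_from_mac(mac: str) -> str:
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                  # flip universal/local bit
    eui = b[:3] + bytearray([0xFF, 0xFE]) + b[3:] # splice in ff:fe
    groups = [f"{eui[i] << 8 | eui[i + 1]:x}" for i in range(0, 8, 2)]
    return ":".join(groups)

# The resulting 64 bits become the host half of the SLAAC address,
# identical on every network the device visits.
print(eui64_from_mac("00:11:22:33:44:55"))   # → 211:22ff:fe33:4455
```

Because the identifier is constant across prefixes, it acts as a global tracking cookie; RFC 4941 replaces it with periodically regenerated random identifiers, which is what the `use_tempaddr` sysctl enables.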
Re: 23,000 IP addresses
On May 10, 2011, at 9:07:11 AM, Marshall Eubanks wrote: A Federal Judge has decided to let the U.S. Copyright Group subpoena ISPs over 23,000 alleged downloads of some Sylvester Stallone movie I have never heard of; subpoenas are expected to go out this week. I thought that there might be some interest in the list of these addresses : http://www.wired.com/images_blogs/threatlevel/2011/05/expendibleipaddresses.pdf If you have IP addresses on this list, expect to receive papers shortly. Has anyone converted that file to some useful format like ASCII? You know -- something greppable? Here is more of the backstory : http://www.wired.com/threatlevel/2011/05/biggest-bittorrent-case/ This is turning into quite a legal racket (on the order of $3000 for sending a threatening letter); I expect to see a lot more of this until some sense returns to the legal system. There's amazing slime behind some similar efforts -- in another case, of people charged with downloading Nude Nuns with Big Guns (yes, you read that correctly), there are two different companies that each claim the rights to the movie and hence the right to sue (alleged) downloaders: http://www.wired.com/threatlevel/2011/05/nude-nuns-brouhaha/ --Steve Bellovin, https://www.cs.columbia.edu/~smb
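The "something greppable" step mentioned here is mechanical once the PDF has been dumped to text (e.g. with a tool like pdftotext): pull out anything shaped like an IPv4 address. A minimal sketch — the sample lines below are invented stand-ins for the converted PDF, not data from the actual filing:

```python
import re

# Invented sample standing in for the pdftotext output of the filing.
sample = """Doe, John   98.242.1.1   2011-02-09 07:41:33 EST
Roe, Jane   71.56.33.7   2011-02-10 18:02:15 EST"""

# Quick-and-dirty IPv4 matcher; it accepts some invalid octets
# (e.g. 999.1.1.1), which is fine for a first grep pass.
ipv4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

for addr in ipv4.findall(sample):
    print(addr)
```

The same regex works directly with `grep -Eo` on the converted text, which is presumably how the greppable versions posted later in the thread were produced.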
Re: 23,000 IP addresses
On May 10, 2011, at 2:10:10 PM, Wil Schultz wrote: On May 10, 2011, at 10:56 AM, Steven Bellovin wrote: On May 10, 2011, at 9:07:11 AM, Marshall Eubanks wrote: Has anyone converted that file to some useful format like ASCII? You know -- something greppable? I've converted it to ascii, but I don't have a place to host it. I can send to anyone that would like it. Thanks. I've uploaded it as https://www.cs.columbia.edu/~smb/23000.txt.gz and https://www.cs.columbia.edu/~smb/23000-clean.txt.gz ; the latter has page breaks, headers, etc., stripped out; nothing but data. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: 23,000 IP addresses
On May 10, 2011, at 3:02:33 PM, Owen DeLong wrote: On May 10, 2011, at 11:49 AM, Michael Holstein wrote: In the EU you have Directive 2006/24/EC: But I'm not, and neither are most of the ISPs in the linked document. Regards, Michael Holstein Information Security Administrator Cleveland State University In the US, I believe that CALEA requires you to have those records for 7 years. Source, please -- I've never heard of this, nor can I find anything like it at askcalea.com. All I've found is that you have to keep records of *interceptions*. I've also seen numerous news stories about how the FBI wants that to be added to the law, thus implying that it isn't there now. See, for example, http://news.cnet.com/8301-13578_3-10448060-38.html --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: 23,000 IP addresses
On May 10, 2011, at 3:51:32 PM, Michael Holstein wrote: In the US, I believe that CALEA requires you to have those records for 7 years. No, it doesn't (records *of the requests* are required, but no obligation to create subscriber records exists). Even if it did .. academic institutions are exempt (from CALEA) as private networks.* There are various legislative attempts afoot to create one here in the US .. but none have passed. Regards, Michael Holstein Information Security Administrator Cleveland State University (*): US Court of Appeals, District of Columbia, 50-1504. If I've found the right case, it was 05-1404, and published as 451 F.3d 226 (2006); see http://law.justia.com/cases/federal/appellate-courts/F3/451/226/627290/ I have no idea if it's still good law. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: 23,000 IP addresses
On May 10, 2011, at 9:53:16 PM, Michael Painter wrote: Deepak Jain wrote: For examples, see the RIAA's attempts and more recently the criminal investigations of child porn downloads from unsecured access points. From what I understand (or wildly guess) is that ISPs with remote diagnostic capabilities are being asked if their provided access point is secure or unsecure BEFORE they serve their warrants to avoid further embarrassments. [It'll probably take another 6 months and more goofs before they realize that customers are perfectly capable of poorly installing their own access points behind ISP provided gear]. Exactly...what about those who choose WEP/WPA-TKIP for their 'secured' access point? I can just imagine being in front of a judge/jury after having been arrested for, as you say, child porn downloads and listening to my law^H^H^H public defender explain the mechanisms of how the access point was 'cracked' and may have been used by someone sitting in their car down the street. shudder It's happened -- here are two cases I know of: http://news.cnet.com/Wi-Fi-arrest-highlights-security-dangers/2100-1039_3-5112000.html http://news.nationalpost.com/2010/05/27/ontario-man-accused-of-downloading-child-porn-because-of-free-wifi-connection/ --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: How do you put a TV station on the Mbone?
On May 5, 2011, at 1:55:54 AM, George Bonser wrote: There is a security aspect to such things, though, as how do you know the content is from a trusted source? That is the bugaboo with multicast. It needs to be information that isn't going to hurt anything if it is bogus. Also, it opens up a DoS possibility with noise traffic sent to the multicast group. SSM with encryption? Well, certainly, but source address can be very easily spoofed with a UDP multicast stream. Now that could be mitigated with a lot of network configuration rules but something is needed that just works without all that. So using multicast for things like software updates to computers over the general internet to the general public probably isn't going to work. Encryption is also an issue because it doesn't really work well over multicast. How do I encrypt something in a way that anyone can decrypt but nobody can duplicate? If I have a separate stream per user, that is easy. If I have one stream for all users, that is harder. The answer is probably in some sort of digital signature but not really encryption. Using public/private key encryption over multicast, I would have to distribute the private key so others could decrypt the content. If they have the private key, they can generate a public key to use to generate content. Encryption is probably overkill anyway. What is needed is a mechanism simply to say that the content is certified to have come from the source it claims to come from. So ... basically ... better not to use multicast for anything you really might have any security issues with. Fine for broadcasting a video, not so fine for a kernel update. See the work of the IETF MIKEY working group. --Steve Bellovin, https://www.cs.columbia.edu/~smb
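George's "digital signature, not encryption" intuition is exactly right: the sender publishes a verification key, anyone can check the stream, and only the holder of the signing key can produce valid signatures — no shared secret to leak. A toy sketch with textbook RSA and tiny primes, purely pedagogical and nowhere near secure (real multicast source authentication is the territory of MIKEY- and TESLA-style protocols):

```python
import hashlib

# Toy textbook RSA signature over a hash. Primes this small are for
# illustration only; every number here is an invented example.
p, q = 61, 53
n = p * q                          # public modulus (3233)
e = 17                             # public verification exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private signing exponent

def sign(msg: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(h, d, n)            # only the key holder can do this

def verify(msg: bytes, sig: int) -> bool:
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(sig, e, n) == h     # anyone with (n, e) can check

packet = b"kernel-update chunk 42"
s = sign(packet)
print(verify(packet, s))           # → True
# A tampered packet fails verification (with a modulus this tiny there
# is a small collision chance; real key sizes make it negligible).
```

This is why a signed multicast stream is safe to *verify* publicly even though nobody but the source can *forge* it — the property George was reaching for when he noted that distributing a decryption key would let recipients generate content.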
Re: How do you put a TV station on the Mbone?
On May 4, 2011, at 3:37:48 PM, Jeff Wheeler wrote: On Wed, May 4, 2011 at 2:22 PM, Scott Helms khe...@ispalliance.net wrote: Local caching is MUCH more efficient than having the same traffic running in streams and depending on everyone's PC to try and update in the same time. This only works, of course, if there is a local cache which PCs are aware of. Same issue as above, even if I am watching the latest popular movie moving between a multicast and unicast stream everytime I pause it to get another beer isn't realistic. The chances that there will be a multicast stream that will be in synch with me is not high at all. You must have skipped over the word cache when reading my post. I'll explain again in a little more detail, so you can understand why the consumer who pauses the film to go get a snack is actually an advantage for this system. Let's say your typical movie is 5Mb/s and you want to start watching it right away; you aren't willing to wait several minutes (or longer) until the next multicast loop begins. You press play and begin receiving a 5Mb/s unicast stream, but your STB also joins an mcast group for that movie, because it is very popular and being watched by a huge number of users during peak time. The mcast stream is 20Mb/s, or 400% of real-time. No matter what point the loop is at when you join, you will cache the multicast data and eventually reach a point in the movie where you no longer need the unicast stream. Given a 2 hour movie, the worst-case is that you'll join just a minute after the stream/loop started, in which case it will be about 30 minutes before you start viewing from multi-casted, STB-cached data, instead of unicast streamed data. With two subscribers watching the movie given worst-case circumstances, there is a bandwidth conservation of: (users - 1) * 5Mb/s * 90min, or a mean savings around 37%, for only two users. If ten users are watching, your worst-case bandwidth savings will be greater, 33.7Mb/s, or about 67%. 
If, on the other hand, you start watching the movie, then realize it would be more enjoyable with some popcorn, your STB is already listening to the mcast stream and caching the movie for you. The longer it takes your popcorn to cook, the greater the chance that the STB will start receiving mcast data for the beginning of the movie before you un-pause it, which means you would not need the unicast stream at all. In fact, if you include the probability that some users will be able to receive data via mcast earlier than 30 minutes into the movie, because they didn't get unlucky and press play at the worst-case moment, your bandwidth savings for a group of ten viewers and a 400% real-time mcast stream will be about 80%. The potential savings is limited by the over-speed of the mcast stream vs real-time, and the density of mcast listener groups. Given that access network speeds continue to increase, yet ISPs are really not increasing bandwidth caps, it is reasonable to assume that an ISP might like to allow its subscribers to receive a very fast mcast stream for a short period of time, instead of all of those subscribers receiving many, slow mcast streams. A crucial point here is the cost ratio between bandwidth and disk space, since ultimately consumers pay for both. My own STB can cache the movie -- but that requires local disk. On the other hand, as you point out, it saves on bandwidth. (Note that I'm interpreting cost broadly to include not just the capital cost of, say, the disk, but all of the associated operational costs, including what ISPs need to spend on provisioning and operating multicast, consumer reactions to local disks being full or dying, etc.) Of course, I don't know what the answer is now, let alone over time... --Steve Bellovin, https://www.cs.columbia.edu/~smb
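Jeff's worst-case figures check out arithmetically. Taking his numbers as given — a 5 Mb/s movie, 120 minutes long, with each viewer needing at most 30 minutes of unicast before the 4x-speed multicast cache takes over (so 90 minutes are served from cache) — his `(users - 1) * 5Mb/s * 90min` savings formula reproduces both the "around 37%" and "about 67%" claims:

```python
# Back-of-the-envelope check of the figures in the post above; all
# parameters are taken from it, not independently derived.

def savings_fraction(viewers: int, rate_mbps: float = 5.0,
                     movie_min: float = 120.0,
                     cached_min: float = 90.0) -> float:
    without_mcast = viewers * rate_mbps * movie_min       # everyone unicast
    saved = (viewers - 1) * rate_mbps * cached_min        # post's formula
    return saved / without_mcast

print(round(savings_fraction(2) * 100, 1))    # → 37.5  (the "around 37%")
print(round(savings_fraction(10) * 100, 1))   # → 67.5  (the "about 67%")
```

Note this counts only the avoided unicast traffic; the multicast stream's own 20 Mb/s is shared across all listeners, which is why the savings grow with group size.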
Re: VPN over slow Internet connections
On Apr 21, 2011, at 12:55:32 PM, Ben Whorwood wrote: Dear all, Can anyone share any thoughts or experiences for VPN links running over slow Internet connections, typically 2kB/s - 3kB/s (think 33.6k modem)? We are looking into utilising OpenVPN for out-of-office workers who would be running mobile broadband in rural areas. Typical data across the wire would be SQL queries for custom applications and not much else. Some initial thoughts include... * How well would the connection handle certificate (>= 2048 bit key) based authentication? You're doing this rarely; it shouldn't be a problem. * Is UDP or TCP better considering the speed and possibility of packet loss (no figures to hand)? For your application or for the VPN? For the VPN, I *strongly* suggest you use UDP, or you're going to get dueling retransmissions and spend a lot of time sending many copies of the same thing. Consider: if a packet is dropped, either due to line noise or queuing delay for the slow link, the sending TCP will resend. If you're using TCP for OpenVPN, that session's TCP will resend. Of course, the TCP running on top of it will resend as well, so you'll get two copies of the data sent to the application's TCP, wasting precious bandwidth. If, on the other hand, OpenVPN is running UDP, it won't resend; the application's TCP will, so you'll only get one copy. I should note: IPsec, being datagram-based, will also work well. PPTP, which runs over TCP as far as I know, will suffer all of the ills I just outlined. I'm assuming that your application is using TCP. Unless the data characteristics are such that you're able to fit every query and every response into a single packet, you'll spend more effort (and probably bandwidth) doing your own retransmissions, backoffs, segmentation, etc. * Is VPN over this type of connection simply a bad idea? If you do it correctly, a VPN is actually better: you can assign a static internal IP address to each certificate. 
If the modem connection drops, when you reconnect the applications will still have the same IP address, so their connections won't be interrupted. You do have to watch out for queue limits -- as Jim Gettys has reminded us, many of today's queues are far too long, so we're not getting the very beneficial effects of early drop and consequent TCP slow-down. This will require tuning your end nodes' OS and/or the router at your head end. Use active queue management (e.g., RED), and consider a priority queueing scheme. Watch out for other applications -- I've had trouble with MacOS's Mail.app on slow links; it's gotten very confused and more or less forgotten about my mail folders, with the consequent need to rebuild them, reactivate my sorting rules, etc. (Note that this paragraph applies whether or not you're using a VPN; it's the effect of a slow connection, not the crypto.) The real VPN question is what the overhead is. I've never calculated it for OpenVPN; I did for IPsec some years ago (long enough ago that IPsec was still using DES or 3DES because AES and its 16-byte blocksize didn't exist); using average packet size distribution, it came to (as I recall) about 12%. That's unlikely to make or break your system. However, there's no substitute for real data -- what do your packets look like? Fairly obviously, the shorter the packet, the higher the overhead percentage. Someone suggested trying it using a FreeBSD flakeway; that's a good idea. In short -- if your VPN is set up properly, any ill effects are much more likely to be from the link speed itself, not the VPN. --Steve Bellovin, https://www.cs.columbia.edu/~smb
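The two pieces of advice in this thread — run OpenVPN over UDP, and pin a static internal IP to each certificate — correspond to a handful of standard OpenVPN directives. A minimal server-side sketch; the subnet, directory name, and client common name below are placeholder examples, not values from the thread:

```
# server.conf -- sketch only; addresses and names are placeholders
proto udp                 # datagram transport: avoids TCP-over-TCP
                          # dueling retransmissions on a lossy slow link
dev tun
server 10.8.0.0 255.255.255.0
client-config-dir ccd     # per-certificate overrides live in ccd/

# ccd/fieldworker1 -- file named after the client certificate's CN
ifconfig-push 10.8.0.5 10.8.0.6   # same internal IP on every reconnect
```

With `ifconfig-push`, a client that redials after a modem drop comes back with the same tunnel address, so established application TCP connections can survive the outage — the property described above.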
Re: VPN over slow Internet connections
On Apr 21, 2011, at 4:31:32 PM, Phil Regnauld wrote: Steven Bellovin (smb) writes: I should note: IPsec, being datagram-based, will also work well. PPTP, which runs over TCP as far as I know, will suffer all of the ills I just outlined. PPTP uses 1723/tcp for control, but the tunneled traffic is GRE, so that would work fine as well. Ah, thanks for the correction. If you do it correctly, a VPN is actually better: you can assign a static internal IP address to each certificate. If the modem connection drops, when you reconnect the applications will still have the same IP address, so their connections won't be interrupted. Absolutely, that's the case with OpenVPN, if you assign static IPs to each profile. PPtP can do this as well, for instance using MPD. Very big advantage in fact. Yup, I've done this myself with OpenVPN. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: VPN over slow Internet connections
On Apr 21, 2011, at 5:28:46 PM, Terry Baranski wrote: On Apr 21, 2011, at 4:20PM, Steven Bellovin wrote: For your application or for the VPN? For the VPN, I *strongly* suggest you use UDP, or you're going to get dueling retransmissions and spend a lot of time sending many copies of the same thing. Consider: if a packet is dropped, either due to line noise or queuing delay for the slow link, the sending TCP will resend. If you're using TCP for OpenVPN, that session's TCP will resend. Of course, the TCP running on top of it will resend as well, so you'll get two copies of the data sent to the application's TCP, wasting precious bandwidth. Is this actually how OpenVPN's TCP encapsulation works? I'd be curious to know. It isn't how Cisco's TCP/1 encapsulation works, at least not with the IOS devices I have experience with. Cisco's TCP/1 looks like TCP to a firewall, but it really isn't. There is no reliability -- no retransmits, etc. It's pretty close to UDP behavior but with a TCP header, which was confusing to troubleshoot at first but quickly made perfect sense to me for the reasons you state above. To the OS, OpenVPN is an application that uses the underlying TCP (or UDP)/IP stack; it can't behave any differently than any other application. Since (as far as I know) Windows, Linux, NetBSD, FreeBSD, MacOS, and all of the other platforms that OpenVPN runs on just have normal TCPs, that's what OpenVPN does. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: Comcast's 6to4 Relays
On Apr 20, 2011, at 3:50:03 PM, Owen DeLong wrote: On Apr 20, 2011, at 11:25 AM, Doug Barton wrote: On 04/20/2011 10:54, Brzozowski, John wrote: Doug, I am aware of the drafts you cited earlier, as Mikael mentions below the existence of the same will not result in 6to4 being turned off automatically or immediately. This process will likely take years. I was going to let this go, but after so many responses in the same vein I feel compelled to clarify. *I personally* believe that the answer to 6to4 is to just turn it off. These things have long tails because we insist that they do, not because they have to. *However,* I am realistic enough to know that it isn't going to happen, regardless of how disappointed I may be about that. :) Turning off the servers will not reduce the brokenness of 6to4, it will increase it. The best way to get rid of 6to4 is to deploy native IPv6. The best way to improve 6to4 behavior until that time is to deploy more, not less 6to4 relays. Hurricane Electric has proven this. Comcast has proven this. Every provider that has deployed more 6to4 relays has proven this. Please note the goal here is not to make 6to4 great, like many others we hope to see 6to4 use diminish over time. Hope is not a plan. Meanwhile, my main goal in posting was to make sure that to the extent that you (Comcast) intend to make changes to your 6to4 infrastructure that you take into account the current thinking about that, and I'm very pleased to hear that you have. The best way to make 6to4 diminish has always been and still remains: Deploy Native IPv6 Now. That's a plan and a necessity at this point, but, execution is still somewhat lagging. Of course, Comcast *is* deploying native IPv6; see, for example, http://mailman.nanog.org/pipermail/nanog/2011-January/031624.html It just takes a while -- and a non-trivial number of zorkmids -- to do things like replacing all of the non-v6 CPE. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: 365x24x7
On Apr 17, 2011, at 11:47:20 PM, Frank Bulk wrote: Timely article on the FAA's involvement with sleep schedules: http://www.ajc.com/news/air-traffic-controller-scheduling-913244.html Union spokesman Doug Church said up to now, 25 percent of the nation's air traffic controllers work what he called a "2-2-1" schedule, working afternoon to night the first two days, followed by a mandatory minimum of eight hours for rest before starting two morning-to-afternoon shifts, another eight or more hours for sleep, then a final shift starting between 10 p.m. to midnight. Maybe we need to work in more time for rest, Church said. You're forcing yourself to work at a time when the body is used to sleeping. Also see http://www.google.com/hostednews/ap/article/ALeqM5hstTegGafIYTakRavF4WEEPblz-Q?docId=f174db27ddb44dadbcad8419dfe138a7 People who change shifts every few days are going to have all kinds of problems related to memory and learning, Fishbein said. This kind of schedule especially affects what he called relational memories, which involve the ability to understand how one thing is related to another. ... Controllers are often scheduled for a week of midnight shifts followed by a week of morning shifts and then a week on swing shifts. This pattern, sleep scientists say, interrupts the body's natural sleep cycles. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: 365x24x7 (sleep patterns)
On Apr 15, 2011, at 1:41:26 PM, Marshall Eubanks wrote: On Apr 15, 2011, at 12:44 PM, Mark Green wrote: Suggestion; once on the 'night shift' stay put for at least three months... Sleep patterns take time to adjust. Jumping between day and night shifts will burn out even the most motivated employee. What we found was that we would find people who wanted to be on the night shift, and would NOT like to be changed, at all. Some people like night work, or have family situations where it is ideal for them. Yah. Read the current news coverage about sleeping U.S. air traffic controllers, especially the articles about how hard it is to switch shifts, and very especially if you do it often. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: v6 Avian Carriers?
On Apr 1, 2011, at 8:41:11 AM, Sachs, Marcus Hans (Marc) wrote: I was wondering which April 1st this would happen on. Now I know. So if a v6 carrier swallows a v4 datagram does that count as packet loss or tunneling? http://datatracker.ietf.org/doc/rfc6214/ I was disappointed in this RFC -- Section 3.1 didn't include the proper discussion of the difference between African and European avian carriers, and we know what happens if that question is asked at the wrong time. --Steve Bellovin, https://www.cs.columbia.edu/~smb
Re: v6 Avian Carriers?
On Apr 1, 2011, at 9:49 PM, Owen DeLong o...@delong.com wrote: Which? African or European Swallows? (Watches Chad fly over the cliff edge) ;-) So the RFC needed more text in its Security Considerations section, too... Owen On Apr 1, 2011, at 6:34 PM, Chad Dailey wrote: Swallows have MTU issues. On Fri, Apr 1, 2011 at 8:27 PM, Owen DeLong o...@delong.com wrote: On Apr 1, 2011, at 10:45 AM, Steven Bellovin wrote: On Apr 1, 2011, at 8:41:11 AM, Sachs, Marcus Hans (Marc) wrote: I was wondering which April 1st this would happen on. Now I know. So if a v6 carrier swallows a v4 datagram does that count as packet loss or tunneling? http://datatracker.ietf.org/doc/rfc6214/ I was disappointed in this RFC -- Section 3.1 didn't include the proper discussion of the difference between African and European avian carriers, and we know what happens if that question is asked at the wrong time. --Steve Bellovin, https://www.cs.columbia.edu/~smb That applies to swallows. I'm not sure pigeons pose the same issue. I think in general, swallows provide poor platforms for avian transport of IP datagrams. Owen
Re: The state-level attack on the SSL CA security model
On Mar 26, 2011, at 12:21:12 AM, Franck Martin wrote: On 3/26/11 15:36 , Joe Sniderman joseph.snider...@thoroquel.org wrote: On 03/25/2011 11:12 PM, Steven Bellovin wrote: On Mar 25, 2011, at 12:19:52 PM, Akyol, Bora A wrote: One could argue that you could try something like the facebook model (or facebook itself). I can see it coming. Facebook web of trust app ;-) Except, of course, for the fact that people tend to have hundreds of friends, many of whom they don't know at all, and who achieved that status simply by asking. You need a much stronger notion of interaction, to say nothing of what the malware in your friends' computers are doing to simulate such interaction. Then again there are all the friend us for a chance to win $prize gimmicks... not a far jump to friend us, _with trust bits enabled_ for a chance to win $prize Yeah sounds like a wonderful idea. :P Wasn't PGP based on a web of trust too? Yes -- see Valdis' posting on that: http://mailman.nanog.org/pipermail/nanog/2011-March/034651.html --Steve Bellovin, http://www.cs.columbia.edu/~smb
Re: The state-level attack on the SSL CA security model
On Mar 25, 2011, at 12:19:52 PM, Akyol, Bora A wrote: One could argue that you could try something like the facebook model (or facebook itself). I can see it coming. Facebook web of trust app ;-) Except, of course, for the fact that people tend to have hundreds of friends, many of whom they don't know at all, and who achieved that status simply by asking. You need a much stronger notion of interaction, to say nothing of what the malware in your friends' computers are doing to simulate such interaction. --Steve Bellovin, http://www.cs.columbia.edu/~smb
Re: Nortel, in bankruptcy, sells IPv4 address block for $7.5 million
On Mar 24, 2011, at 10:27:58 AM, Aaron Wendel wrote: That's a good question. Maybe they can't qualify under ARIN rules. Another question will be: how is ARIN going to handle it? I'm pretty sure that the RSA says that in the event of bankruptcy IPs revert to the ARIN pool. I understand that these were legacy addresses but... I wonder if the bankruptcy court agrees with that. Does it have the power to order ARIN to accept this? Send lawyers, guns, and money... --Steve Bellovin, http://www.cs.columbia.edu/~smb
Re: IPv4 address shortage? Really?
...well, kind of. What you don't mention is that it was thought to be ugly and rejected solely on aesthetic grounds. Which is somewhat different from being rejected because it cannot work. Now, I'd be the first to admit that using LSRR as a substitute for straightforward address extension is ugly. But so is iBGP, CIDR/route aggregation, running interior routing over CLNS, and (God forbid, for it is ugly as hell) NAT. No. It was rejected because routers tended to melt down into quivering puddles of silicon from seeing many packets with IP options set -- a fast trip to the slow path. It also requires just as many changes to applications and DNS content, and about as large an addressing plan change as v6. There were more reasons, but they escape me at the moment. --Steve Bellovin, http://www.cs.columbia.edu/~smb
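For readers who haven't looked at RFC 791 lately, a minimal sketch of what an LSRR option actually looks like on the wire may help: it is exactly this kind of variable-length option in the IPv4 header that pushed packets off routers' fast paths. The function name and sample addresses below are mine, purely for illustration.

```python
import struct

def build_lsrr_option(hops):
    """Build an IPv4 Loose Source and Record Route option (RFC 791).

    Layout: type=131 (0x83), total option length, pointer (offset of
    the next address to consume, starting at 4), then the route's
    IPv4 addresses.  Any packet carrying an option like this is
    typically punted to a router's slow path -- the meltdown
    described in the post above.
    """
    addrs = b"".join(bytes(int(o) for o in h.split(".")) for h in hops)
    return struct.pack("!BBB", 131, 3 + len(addrs), 4) + addrs

opt = build_lsrr_option(["192.0.2.1", "198.51.100.7"])
assert opt[0] == 131   # LSRR option type
assert opt[1] == 11    # 3 header bytes + 2 addresses * 4 bytes
assert opt[2] == 4     # pointer: no address consumed yet
```

The variable length and the per-hop rewrite of the destination address are why option processing could not stay in the hardware fast path.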
Re: IPv4 address shortage? Really?
On Mar 8, 2011, at 8:32:59 AM, valdis.kletni...@vt.edu wrote: On Tue, 08 Mar 2011 07:37:27 EST, Steven Bellovin said: No. It was rejected because routers tended to melt down into quivering puddles of silicon from seeing many packets with IP options set -- a fast trip to the slow path. It also requires just as many changes to applications and DNS content, and about as large an addressing plan change as v6. There were more reasons, but they escape me at the moment. Steve, you of all people should remember the other big reason why: pathalias tended to do Very Bad Things like violating the Principle of Least Surprise if there were two distinct nodes both called 'turtlevax' or whatever. That, and if you think BGP convergence sucks, imagine trying to run pathalias for a net the size of the current Internet. :) It wouldn't -- couldn't -- work that way. Leaving out longer paths (for many, many reasons) and sticking to 64-bit addresses, every host would have a 64-bit address: a gateway and a local address. For multihoming, there might be two or more such pairs. (Note that this isn't true loc/id split, since the low-order 32 bits aren't unique.) There's no pathalias problem at all, since we don't try to have a unique turtlevax section. --Steve Bellovin, http://www.cs.columbia.edu/~smb
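One way to read the scheme Bellovin sketches is a 64-bit address formed by concatenating a globally routable 32-bit gateway address with a 32-bit locally scoped address; the sketch below is my interpretation under that assumption, with hypothetical function names, not anything from the actual proposal.

```python
import ipaddress

def pack64(gateway, local):
    """Pack a (gateway, local) pair of 32-bit IPv4 addresses into a
    single 64-bit extended address: gateway in the high 32 bits,
    local in the low 32.  The low 32 bits are only meaningful behind
    that gateway, so -- as the post notes -- this is not a true
    locator/identifier split."""
    g = int(ipaddress.IPv4Address(gateway))
    l = int(ipaddress.IPv4Address(local))
    return (g << 32) | l

def unpack64(addr64):
    """Recover the (gateway, local) pair from a 64-bit address."""
    return (str(ipaddress.IPv4Address(addr64 >> 32)),
            str(ipaddress.IPv4Address(addr64 & 0xFFFFFFFF)))

a = pack64("203.0.113.1", "10.0.0.5")
assert unpack64(a) == ("203.0.113.1", "10.0.0.5")
```

Multihoming would then simply mean publishing two or more such pairs for one host, which is why no pathalias-style global uniqueness of the low-order part is required.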
Re: IPv4 address shortage? Really?
On Mar 8, 2011, at 11:21:09 AM, valdis.kletni...@vt.edu wrote: On Tue, 08 Mar 2011 08:43:53 EST, Steven Bellovin said: It wouldn't -- couldn't -- work that way. Leaving out longer paths (for many, many reasons) and sticking to 64-bit addresses, every host would have a 64-bit address: a gateway and a local address. For multihoming, there might be two or more such pairs. (Note that this isn't true loc/id split, since the low-order 32 bits aren't unique.) There's no pathalias problem at all, since we don't try to have a unique turtlevax section. Sticking to 64-bit won't work, because some organizations *will* try to dig themselves out of an RFC1918 quagmire and get reachability to the other end of our private net by applying this 4 or 5 times to get through the 4 or 5 layers of NAT they currently have. And then some other dim bulb will connect one of those 5 layers to the outside world... Those are just a few of the many, many reasons I alluded to... The right fix there is to define AA records that only have pairs of addresses. --Steve Bellovin, http://www.cs.columbia.edu/~smb
Re: Mac OS X 10.7, still no DHCPv6
On Feb 28, 2011, at 1:10:21 AM, Randy Bush wrote: I'm not saying there are no uses for DHCPv6, though I suspect that some of the reasons proposed are more people wanting to do things the way they always do, rather than making small changes and ending up with equivalent effort. add noc and doc costs of all changes, please Sure. How do they compare to the total cost of the IPv6 conversion excluding SLAAC? (Btw, for the folks who said that enterprises may not want privacy-enhanced addresses -- that isn't clear to me. While they may want it turned off internally, or even when roaming internally, I suspect that many companies would really want to avoid having their employees tracked when they're traveling. Imagine -- you know the CEO's laptop's MAC address from looking at Received: lines in headers. (Some CEOs do send email to random outsiders -- think of the Steve Jobs-grams that some people have gotten.) You then see the same MAC address with a prefix belonging to some potential merger or joint venture target. You may turn on DHCPv6 to avoid that, but his/her home ISP or takeover target may not.) --Steve Bellovin, http://www.cs.columbia.edu/~smb
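The tracking concern above rests on how classic SLAAC embeds the MAC address in the interface identifier. A small sketch of the modified EUI-64 construction from RFC 4291 shows why the same identifier reappears under every visited prefix (the function name and sample MAC are mine):

```python
def slaac_iid(mac):
    """Derive the modified EUI-64 interface identifier that classic
    SLAAC (RFC 4291) builds from a 48-bit MAC address: insert ff:fe
    between the OUI and the device half, and flip the universal/local
    bit of the first octet.  Because the MAC survives intact, the same
    64-bit identifier shows up under every prefix the laptop visits --
    exactly the cross-network tracking worry in the post above.
    Privacy extensions (RFC 4941) instead randomize this identifier."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02  # flip the universal/local bit
    eui64 = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])
    return ":".join(f"{eui64[i]:02x}{eui64[i+1]:02x}" for i in range(0, 8, 2))

# The last 64 bits of the SLAAC address are the same everywhere:
assert slaac_iid("00:1b:63:84:45:e6") == "021b:63ff:fe84:45e6"
```

DHCPv6-assigned or RFC 4941 temporary addresses break this linkability, which is the trade-off the thread is weighing.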