Re: [Sks-devel] TLS 1.3 and HKPS pool
On 2018-03-23 at 13:55 +, Daniel Kahn Gillmor wrote:
> Sadly, SNI and ALPN are both still in the clear in the TLS 1.3
> handshake.

Ah, thank you. I hadn't read the draft, but have just read the relevant parts of v26. I don't recall which source led me to believe otherwise, only that it was a source I considered reliable enough not to double-check. And now I've repeated misinformation. I'm sorry.

-Phil

___
Sks-devel mailing list
Sks-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/sks-devel
Re: [Sks-devel] SKS apocalypse mitigation
On Fri 2018-03-23 11:10:49 +, Andrew Gallagher wrote:
> Updating the sets on each side is outside the scope of the recon
> algorithm, and in SKS it proceeds by a sequence of client pull requests
> to the remote server. This is important, because it opens a way to
> implement object blacklists in a minimally-disruptive manner.

As both an SKS server operator and a user of the pool, I do not want SKS server operators to be in the position of managing a blacklist of specific data.

> The trick is to ensure that all the servers in the pool agree (to a
> reasonable level) on the blacklist. This could be as simple as a file
> hosted at a well known URL that each pool server downloads on a
> schedule. The problem then becomes a procedural one - who hosts this,
> who decides what goes in it, and what are the criteria?

This is a really sticky question, and I don't believe we have a global consensus on how it should be done. I don't think this approach is feasible.

> Another effective method that does not require an ongoing management
> process would be to blacklist all image IDs - this would also have many
> other benefits (I say this as someone who once foolishly added an
> enormous image to his key). This would cause a cliff edge in the number
> of objects and, unless carefully choreographed, could result in a mass
> failure of recon.
>
> One way to prevent this would be to add the blacklist of images in the
> code itself during a version bump, but only enable the filter at some
> timestamp well in the future - then a few days before the deadline,
> increase the version criterion for the pool. That way, all pool members
> will move in lockstep and recon interruptions should be temporary and
> limited to clock skew.

I have no problems with blacklisting User Attribute packets from SKS, and I like Andrew's suggestion of an implementation roll-out, followed by a "switch on" date for the filter. I support this proposal.
I've had no luck getting new filters added to SKS in the past [0], so I'd appreciate it if someone who *does* have the skills/time/commit access could propose a patch for this. I'd be happy to test it.

--dkg

[0] see for example https://bitbucket.org/skskeyserver/sks-keyserver/pull-request/20/trim-local-certifications-from-any-handled
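For illustration only, the roll-out dkg endorses above (ship the filter in a version bump, but only switch it on at an agreed future timestamp) could be sketched like this. SKS itself is written in OCaml; this Python sketch uses hypothetical names and a made-up cutoff value, and only fixes the packet tag from RFC 4880 (tag 17 is "User Attribute", which carries image attributes):

```python
import time

# Hypothetical switch-on timestamp (seconds since epoch), agreed across the
# pool and compiled into the release; the value here is illustrative.
UAT_FILTER_ACTIVE_AFTER = 1_530_000_000

# OpenPGP packet tag 17 is "User Attribute" (RFC 4880, section 5.12).
USER_ATTRIBUTE_TAG = 17

def keep_packet(packet_tag, now=None):
    """Return False for packets the filter should drop once active."""
    now = time.time() if now is None else now
    if now < UAT_FILTER_ACTIVE_AFTER:
        return True  # filter has shipped but is not yet switched on
    return packet_tag != USER_ATTRIBUTE_TAG
```

Because every pool member carries the same cutoff constant, all servers start dropping the same packets at (roughly) the same moment, so recon divergence is limited to clock skew, as Andrew described.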
Re: [Sks-devel] TLS 1.3 and HKPS pool
On Mon 2018-03-19 17:24:07 -0400, Phil Pennock wrote:
> On 2018-03-19 at 22:14 +0100, Kristian Fiskerstrand wrote:
>> On 03/19/2018 10:08 PM, Phil Pennock wrote:
>>> Do we care?
>>
>> I'm tempted to say no..

I also agree that we do not care, and should issue no guidance that encourages servers or clients to disable TLS 1.3. If we need any guidance on the selection of transport crypto parameters, we need guidance like:

 * support and prefer forward-secure key exchanges (ECDHE or FFDHE) over
   non-forward-secure RSA key exchange
 * use ephemeral keys of at least 2048 bits (FFDHE) or 256 bits (ECDHE)
 * use a reasonable selection of ciphersuites
 * do not enable export ciphers
 * do not support SSLv3
 * and so on...

In other words, the same guidance we'd give anyone running an HTTPS endpoint.

> Another point in favor of that: I'd forgotten that TLS 1.3 moves
> certificate exchange to be protected by the session, so encrypted. Thus
> I suspect that we won't have SNI available for choosing TLS versions and
> ciphersuites until after TLS 1.3 has already been negotiated.

Sadly, SNI and ALPN are both still in the clear in the TLS 1.3 handshake. I was unable to convince the TLS working group that the additional latency cost of protecting SNI from passive monitoring was a price worth paying for the additional metadata privacy. :/ So we don't have to worry about the effect of that on the pool.

Note that there are features coming down the pike for HTTPS+TLS+DNS that might allow hiding the SNI behind a "fronting service" name, but that would require special configuration, and we should probably have that discussion separately. TLS 1.3 itself should just be a smooth upgrade.

One issue that we might have (and we might also have today in TLS 1.2) is failed TLS session resumption from an HKPS client that switches between servers in the pool, depending on how the client handles TLS session tickets.
That also probably deserves a separate thread, though, since it's orthogonal to the TLS 1.3 question.

--dkg
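As an aside, the parameter checklist in the message above maps fairly directly onto a modern TLS stack. A minimal sketch (illustrative, not pool policy) of pinning those properties on an HTTPS endpoint using Python's standard `ssl` module:

```python
import ssl

# Server-side context: rule out SSLv3/TLS1.0/TLS1.1 outright, and restrict
# TLS 1.2 to forward-secure (ECDHE/DHE) AEAD suites. TLS 1.3 suites are
# negotiated separately and are always forward-secure, so they need no
# special handling here. The cipher string is an illustrative choice.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20:DHE+AESGCM")
```

In a real deployment the context would also be loaded with the server certificate via `ctx.load_cert_chain(...)` before being handed to the HTTPS server.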
Re: [Sks-devel] SKS apocalypse mitigation
On 23/03/18 11:10, Andrew Gallagher wrote:
> Another effective method that does not require an ongoing management
> process would be to blacklist all image IDs

It occurs to me that this would be more wasteful of bandwidth than blocking objects by their hash, as the server would have to request the object contents before deciding whether to keep them. This is assuming that recon is calculated on pure hashes with no type hints (I'm 99% sure this is the case; correct me if I'm wrong).

We could minimise this by maintaining a local cache of the hashes of already-seen image objects. This would be consulted during recon and submission in the same way as an externally-sourced blacklist.

--
Andrew Gallagher
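The local cache Andrew describes could look roughly like the following. This is a hedged sketch under the assumption stated above (recon operates on bare hashes with no type hints); the class and method names are hypothetical, and a real implementation would persist the set on disk rather than keep it in memory:

```python
class SeenImageCache:
    """Remember recon hashes of objects already identified as images,
    so they are never requested from a peer again after the first
    (unavoidable) fetch."""

    def __init__(self):
        self._hashes = set()

    def record(self, obj_hash):
        # Called once an object has been fetched and found to be an
        # image (User Attribute) packet.
        self._hashes.add(obj_hash)

    def wanted(self, obj_hash):
        # Consulted during recon and submission, in the same way as an
        # externally-sourced blacklist.
        return obj_hash not in self._hashes
```

Each image object still costs one wasted fetch per server the first time it is seen, but never again afterwards.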
Re: [Sks-devel] TLS 1.3 and HKPS pool
On Mon, Mar 19, 2018 at 11:08:13PM +0100, Kristian Fiskerstrand wrote:
> On 03/19/2018 10:40 PM, Hendrik Visage wrote:
>>> Now.. if anyone were to actually disable everything but 1.3, that'd be
>>> exclusion worthy from the pool, but lets do this manually if so.
>>
>> I’ve not seen any TLS 1.2 security issues yet (but then I might’ve
>> missed it in the deluge of meltdown/spectre/memcached) so I don’t see
>> the need/reason to disable TLS 1.2
>
> I was referring to server operators here, not clients, if that wasn't
> clear :)

Further to this point, server-side controls have historically amounted to specifying the preferred ciphers: when negotiation happens, the server gets to pick a preference, and the client just needs to accept the best it can support. That doesn't change at all with TLS 1.3 with regard to interoperability with TLS 1.2. In fact, the majority of objections to TLS 1.3 have simply been that it forces the use of a much more constrained set of ciphers which are known to be secure as technology currently stands. Aside from a few performance-focused changes and some additional security options (note: options are optional), TLS 1.2 can be configured to provide the vast majority of the benefits that one would see by enabling TLS 1.3. I've never actually looked at the implementation in any client for negotiating with an HKPS server, but if your client supports ECDHE curves, AEAD hashing, and GCM rather than only CBC ciphers, then you're basically at the same level of security as you would be with TLS 1.3.
So, given the incredible lack of incentive to turn off TLS 1.2 (especially as it was designed to be easily extended when flaws were found), there shouldn't be any number of operators suddenly switching it off. I would imagine that anyone so focused on maintaining the "best" TLS options would also know that turning off TLS 1.2 would make their host impossible to interface with from longer-lived libraries and distros (think OSes like Red Hat, still shipping OpenSSL 1.0.2k). Hopefully they would be savvy enough with their config to secure their TLS 1.2 options adequately, allowing sufficiently secure TLS 1.2 communications alongside their TLS 1.3 config, and would therefore allow the downgrade by clients that don't yet support TLS 1.3.

Just my 2c on the seemingly very small potential for an issue.

Thanks,
Henry
Re: [Sks-devel] SKS apocalypse mitigation
FWIW, while I'm effectively no longer involved in SKS development, I do agree that this is a problem with the underlying design, and Andrew's suggestions all sound sensible to me.

On Fri, Mar 23, 2018 at 7:10 AM, Andrew Gallagher wrote:
> Hi, all.
>
> I fear I am reheating an old argument here, but news this week caught
> my attention:
>
> https://www.theguardian.com/technology/2018/mar/20/child-abuse-imagery-bitcoin-blockchain-illegal-content
>
> tl;dr: Somebody has uploaded child porn to Bitcoin. That opens the
> possibility that *anyone* using Bitcoin could be prosecuted for
> possession. Whether this will actually happen or not is unclear, but
> similar abuse of SKS is an apocalyptic possibility that has been
> discussed before on this list.
>
> I've read Minsky's paper. The reconciliation process is simply a way of
> comparing two sets without having to transmit the full contents of each
> set. The process is optimised to be highly efficient when the difference
> between the sets is small, and gets less efficient as the sets diverge.
>
> Updating the sets on each side is outside the scope of the recon
> algorithm, and in SKS it proceeds by a sequence of client pull requests
> to the remote server. This is important, because it opens a way to
> implement object blacklists in a minimally-disruptive manner.
>
> An SKS server can unilaterally decide not to request any object it
> likes from its peers. In combination with a local database cleaner that
> deletes existing objects, and a submission filter that prevents them
> from being reuploaded, it is entirely technically possible to blacklist
> objects from a given system.
>
> The problems start when differences in the blacklists between peers
> cause their sets to diverge artificially. The normal reconciliation
> process will never resolve these differences and a small amount of
> extra work will be expended during each reconciliation. This is not
> fatal in itself, as SKS imposes a difference limit beyond which peers
> will simply stop reconciling, so the increase in load should be
> contained.
>
> The trick is to ensure that all the servers in the pool agree (to a
> reasonable level) on the blacklist. This could be as simple as a file
> hosted at a well known URL that each pool server downloads on a
> schedule. The problem then becomes a procedural one - who hosts this,
> who decides what goes in it, and what are the criteria?
>
> It has been argued that the current technical inability of SKS
> operators to blacklist objects could be used as a legal defence. I'm
> not convinced this is tenable even now, and legal trends indicate that
> it is going to become less and less tenable as time goes on.
>
> Another effective method that does not require an ongoing management
> process would be to blacklist all image IDs - this would also have many
> other benefits (I say this as someone who once foolishly added an
> enormous image to his key). This would cause a cliff edge in the number
> of objects and, unless carefully choreographed, could result in a mass
> failure of recon.
>
> One way to prevent this would be to add the blacklist of images in the
> code itself during a version bump, but only enable the filter at some
> timestamp well in the future - then a few days before the deadline,
> increase the version criterion for the pool. That way, all pool members
> will move in lockstep and recon interruptions should be temporary and
> limited to clock skew.
>
> These two methods are complementary and can be implemented either
> together or separately. I think we need to start planning now, before
> events take over.
>
> --
> Andrew Gallagher
[Sks-devel] SKS apocalypse mitigation
Hi, all.

I fear I am reheating an old argument here, but news this week caught my attention:

https://www.theguardian.com/technology/2018/mar/20/child-abuse-imagery-bitcoin-blockchain-illegal-content

tl;dr: Somebody has uploaded child porn to Bitcoin. That opens the possibility that *anyone* using Bitcoin could be prosecuted for possession. Whether this will actually happen or not is unclear, but similar abuse of SKS is an apocalyptic possibility that has been discussed before on this list.

I've read Minsky's paper. The reconciliation process is simply a way of comparing two sets without having to transmit the full contents of each set. The process is optimised to be highly efficient when the difference between the sets is small, and gets less efficient as the sets diverge.

Updating the sets on each side is outside the scope of the recon algorithm, and in SKS it proceeds by a sequence of client pull requests to the remote server. This is important, because it opens a way to implement object blacklists in a minimally-disruptive manner.

An SKS server can unilaterally decide not to request any object it likes from its peers. In combination with a local database cleaner that deletes existing objects, and a submission filter that prevents them from being reuploaded, it is entirely technically possible to blacklist objects from a given system.

The problems start when differences in the blacklists between peers cause their sets to diverge artificially. The normal reconciliation process will never resolve these differences and a small amount of extra work will be expended during each reconciliation. This is not fatal in itself, as SKS imposes a difference limit beyond which peers will simply stop reconciling, so the increase in load should be contained.

The trick is to ensure that all the servers in the pool agree (to a reasonable level) on the blacklist. This could be as simple as a file hosted at a well known URL that each pool server downloads on a schedule.
The problem then becomes a procedural one - who hosts this, who decides what goes in it, and what are the criteria?

It has been argued that the current technical inability of SKS operators to blacklist objects could be used as a legal defence. I'm not convinced this is tenable even now, and legal trends indicate that it is going to become less and less tenable as time goes on.

Another effective method that does not require an ongoing management process would be to blacklist all image IDs - this would also have many other benefits (I say this as someone who once foolishly added an enormous image to his key). This would cause a cliff edge in the number of objects and, unless carefully choreographed, could result in a mass failure of recon.

One way to prevent this would be to add the blacklist of images in the code itself during a version bump, but only enable the filter at some timestamp well in the future - then a few days before the deadline, increase the version criterion for the pool. That way, all pool members will move in lockstep and recon interruptions should be temporary and limited to clock skew.

These two methods are complementary and can be implemented either together or separately. I think we need to start planning now, before events take over.

--
Andrew Gallagher
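The three enforcement points described above (never pull a blacklisted object, clean it out of the local database, and reject its resubmission) can be sketched as follows. This is an illustrative Python sketch with hypothetical names; SKS itself is written in OCaml, and real recon hashes would come from the reconciliation layer:

```python
# A shared blacklist of object hashes, e.g. refreshed on a schedule from a
# well-known URL as proposed above. Contents here are placeholders.
BLACKLIST = set()

def filter_pull_request(wanted_hashes):
    """Unilaterally decline to request blacklisted objects from a peer."""
    return [h for h in wanted_hashes if h not in BLACKLIST]

def clean_database(db):
    """Local database cleaner: delete blacklisted objects already stored."""
    for h in list(db):
        if h in BLACKLIST:
            del db[h]

def accept_submission(obj_hash):
    """Submission filter: prevent blacklisted objects being re-uploaded."""
    return obj_hash not in BLACKLIST
```

Note that this only works smoothly if peers agree on BLACKLIST; as the post explains, divergent blacklists leave permanent set differences that recon can never resolve.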