[cryptography] ZKPs and other stuff at Zero Knowledge Systems (Re: Zero knowledge as a term for end-to-end encryption)
I don't think it's too bad; it's fairly intuitive and related to the English meaning as well. At Zero-Knowledge we had a precedent for the same use: it was an intentional pun that we had zero knowledge about our customers, and in one of the later versions we actually did have a ZKP (to do with payment privacy). They were a licensee for Brands credentials, but shamefully the early versions relied on nothing more than a no-logging policy on a server to get privacy for the paid-up account status needed to establish a connection.

One of the fun aspects was totting up the number of people in the company who actually understood the company name (i.e., what a ZKP is at a mathematical/crypto level): perhaps a dozen out of a peak of 300 employees!

My contribution to their crypto was end-to-end forward anonymity, which didn't get implemented in the Freedom network before they closed it down to focus on selling personal firewalls via ISPs (under their new brand radialpoint.com). But the e2e forward anonymity concept was implemented by Zach Brown and Jerome Etienne in a ZKS skunkworks project that never got deployed. After ZKS, Zach reimplemented something similar in the open-source project Cebolla, which isn't actively developed at present, but the code is here: http://www.cypherspace.org/cebolla/

Now there are Tor and I2P, which are actively developed, and I presume at this point they would both have forward anonymity. Without e2e forward anonymity, any one of the default 3 hops in your connection could record traffic passing through and then subpoena the other hops to identify the source and destination (and web logs, perhaps, at the destination).

E2e forward anonymity is pretty simple: establish a forward-secret connection between the user and node A; call that tunnel 1. Tunnel a forward-secret connection establishment through tunnel 1 between the user and node B; call that tunnel 2. Then tunnel a forward-secret establishment through tunnel 2 between the user and node C.
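The three-step establishment just described can be sketched as a loop of ephemeral Diffie-Hellman handshakes, one per hop, with each hop's private keys discarded once the session key is derived. This is purely illustrative: the toy 127-bit prime, node names, and helper functions are all hypothetical, and a real network (Freedom, Tor, I2P) would carry each handshake inside the previously built tunnels over an authenticated transport.

```python
# Toy sketch of telescoped, forward-secret tunnel establishment.
# Hypothetical names and parameters; NOT a real protocol.
import hashlib
import secrets

P = (1 << 127) - 1   # Mersenne prime M127: toy modulus, far too small for real use
G = 5

def ephemeral():
    """Generate a fresh ephemeral DH key pair."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared(priv, their_pub):
    """Derive a symmetric hop key from the DH shared secret."""
    s = pow(their_pub, priv, P)
    return hashlib.sha256(str(s).encode()).digest()

hop_keys = []
for node in ("A", "B", "C"):
    # In the real scheme, the handshake for B is tunneled through
    # tunnel 1, and the handshake for C through tunnel 2, so no single
    # node sees both the user and the rest of the path.
    u_priv, u_pub = ephemeral()   # user's fresh key for this hop
    n_priv, n_pub = ephemeral()   # node's fresh key
    k_user = shared(u_priv, n_pub)
    k_node = shared(n_priv, u_pub)
    assert k_user == k_node       # both ends derive the same hop key
    hop_keys.append(k_user)
    # The ephemeral private keys now go out of scope: recorded traffic
    # on this hop cannot be decrypted later (forward secrecy).

print(len(hop_keys), "forward-secret hop keys established")
```

Because every key is ephemeral, a node that records traffic holds nothing it can later be compelled to hand over that would decrypt it.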
Node A is the entry node; node C is the exit node. QED. It costs no more than the previous method, and as I remember the establishment was actually faster and more reliable as well.

Adam

> Not without some precedent: there was a company called Zero Knowledge Systems back in the early 2000s that tried to build what we would now see as a Skype or Tor competitor.

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography
Re: [cryptography] Q: CBC in SSH
On Wed, Feb 13, 2013 at 12:52 PM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:

> active use of ECC suites on the public Internet is practically nonexistent

That's not entirely accurate; try www.google.com.

Bodo
Re: [cryptography] Q: CBC in SSH
[Bodo Moeller bmoel...@acm.org (2013-02-13 14:26:56 UTC)]

> On Wed, Feb 13, 2013 at 12:52 PM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:
>> active use of ECC suites on the public Internet is practically nonexistent
> That's not entirely accurate; try www.google.com.

I didn't know that. Here is more: http://www.imperialviolet.org/2011/11/22/forwardsecret.html

- Harald
Re: [cryptography] [zfs] Edon-R hashing and dedup
- Forwarded message from Sašo Kiselkov skiselkov...@gmail.com -
From: Sašo Kiselkov skiselkov...@gmail.com
Date: Tue, 12 Feb 2013 23:14:36 +0100
To: z...@lists.illumos.org
CC: Nico Williams n...@cryptonector.com, Richard Elling richard.ell...@gmail.com
Subject: Re: [zfs] Edon-R hashing and dedup

On 02/12/2013 10:31 PM, Nico Williams wrote:
> On Tue, Feb 12, 2013 at 12:34 PM, Garrett D'Amore garrett.dam...@gmail.com wrote:
>> I think security could be important here. A determined attacker with access to the file system (perhaps very indirectly) could cause devastating corruption if he could arrange for collisions against a key file or block on a dataset with dedup and no verify set. Admittedly the likelihood of a successful attack is low, but that isn't the same as saying that security is not a consideration for the ZFS hash function. That's why dedup requires a cryptographic hash.
>
> No, the cryptographic bit was chosen as a simple designator of a hash with collision resistance, since cryptographic security implies that. Any SHA-3 candidate that was not rejected due to weaknesses discovered in cryptanalysis is almost certainly good enough for dedup.

I would contend that the priorities for ZFS dedup are:

1) software speed
2) collision resistance
3) cryptographic security

The last two are not the same. Other hashes I considered were:

BLAKE2:
1) It's slower than Edon-R (considerably so; the best I managed to get was ~5 cycles per byte).
2) The above result requires SSE 4.1 or other advanced vector instructions, which are difficult to support in the kernel. This also means that performance will suffer significantly where these instructions aren't available (e.g. on older CPUs or non-x86 CPUs).
3) It's derived from BLAKE, one of the SHA-3 finalists, which should make it theoretically as safe (though BLAKE2 itself hasn't been subjected to cryptanalysis to make sure the implementors haven't made a mistake).

Blue Midnight Wish:
1) It's somewhat slower than Edon-R (though not as slow as BLAKE2).
2) It has been subjected to SHA-3 cryptanalysis, and currently no preimage attacks exist against it (and no practical attacks on the full version of the hash either).
3) Its implementation is pure C, like Edon-R (no SSE trickery required).

Let's not lose focus on what is really important, however: Edon-R has *not* been broken. A first preimage at complexity 2^343 is at most a pause for thought; it is nowhere near being practical. We'd be using the hash truncated to 256 bits anyway, so a generic exhaustive search at 2^256 is already cheaper than the published attack. The reason SHA-3 discarded it is that SHA-3 has a much broader scope and is supposed to set a standard for hashing all sorts of data at all possible sensitivity levels. ZFS dedup has an extremely narrow scope and the nature of our data is much more constrained, meaning that even if some theoretical attack could be made remotely practical, it would still be nowhere near posing a problem for our particularly narrow-scoped application.

Cheers,
-- Saso

- End forwarded message -

-- Eugen Leitl, http://leitl.org
Re: [cryptography] [zfs] Edon-R hashing and dedup
- Forwarded message from Sašo Kiselkov skiselkov...@gmail.com -
From: Sašo Kiselkov skiselkov...@gmail.com
Date: Wed, 13 Feb 2013 00:01:08 +0100
To: z...@lists.illumos.org
CC: Pawel Jakub Dawidek p...@freebsd.org, Garrett D'Amore garrett.dam...@gmail.com, Richard Elling richard.ell...@gmail.com
Subject: Re: [zfs] Edon-R hashing and dedup

On 02/12/2013 11:37 PM, Pawel Jakub Dawidek wrote:
> On Tue, Feb 12, 2013 at 08:26:34PM +0100, Sašo Kiselkov wrote:
>> Hi Garrett,
>> On 02/12/2013 07:34 PM, Garrett D'Amore wrote:
>>> I think security could be important here.
>> I don't dispute that; all I'm saying is that security isn't our primary concern, and all in all dedup doesn't really focus on it: the hash isn't a security feature. I'll elaborate below.
>
> I'm sorry, but ZFS is a general-purpose file system, and the hash function used for dedup simply has to be secure. You don't know how someone will use the file system.

By that logic we should mandate verification with dedup, because the chance of a random collision is on the order of 1 in 2^128!

> Take UFS for example. When you create a file, it has to write the data, create an inode, and point the inode at the data. The order is very important here. If you first create the inode, then point the inode at the data, and only then write the data, that is a security bug: if you crash before writing the data, the inode will point at some previous data, and who knows what kind of data was there before; maybe sshd's private key?

And it's possible that cosmic rays will hit your SAS interface in just the right manner so as to correctly recompute the Fletcher checksum and make the in-flight BP point to a block containing sshd's private keys. Oh, the horror!
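For concreteness, the verify knob being argued about amounts to a byte-for-byte comparison on every dedup hit, so that even a hash collision cannot silently alias two different blocks. A minimal sketch; the class name, hash choice (SHA-512 truncated to 256 bits), and in-memory table are ours, not ZFS's, whose dedup works on a persistent DDT:

```python
# Minimal sketch of hash-indexed block dedup with an optional verify step.
# Illustrative only: names and structures are hypothetical, not ZFS's.
import hashlib

class DedupTable:
    def __init__(self, verify=False):
        self.blocks = {}     # digest -> stored block
        self.verify = verify

    def _digest(self, block):
        # stand-in for the pool's dedup checksum, truncated to 256 bits
        return hashlib.sha512(block).digest()[:32]

    def write(self, block):
        d = self._digest(block)
        existing = self.blocks.get(d)
        if existing is not None:
            if self.verify and existing != block:
                # refuse to alias two different blocks with equal digests
                raise ValueError("hash collision detected")
            return d             # dedup hit: reference the existing block
        self.blocks[d] = block   # dedup miss: store the new block
        return d

t = DedupTable(verify=True)
ref1 = t.write(b"A" * 4096)
ref2 = t.write(b"A" * 4096)   # identical content dedups to a single copy
```

With verify off, the write path trusts the digest alone; with verify on, a collision costs one extra block read and comparison per dedup hit, which is the performance trade-off driving this thread.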
>> There are basically two types of attack you could mount against such a system:
>>
>> 1) Secret plaintext retrieval: you know the hash of the target and want to retrieve some secret plaintext from the target (e.g. /etc/shadow).
>> 2) Plaintext modification: you know the plaintext and the hash, and want to induce data corruption or alteration of the plaintext (e.g. modifying a known /etc/shadow to change the root password).
>>
>> We can comfortably dismiss #1: if you know the hash of the secret plaintext, you've already broken into the storage system so deeply that you can just go ahead and read or modify the file anyway.
>
> We absolutely cannot dismiss either #1 or #2. Just because you cannot come up with a feasible attack vector doesn't mean there isn't one. #1 gives you the ability to read a file you have no permission to read. #2 gives you the ability to modify a file you have no permission to modify. How you get the hash in #1, or how you trick someone to make #2 possible, is a totally different story, but let me provide some examples.
>
> #1 attack: You have access to a company's shared storage and you can sniff HTTP traffic. The shared storage is used to hold important documents that are also time-stamped using an external service over HTTP. The company is happy to use this external service as a TSA, since it (and you) see only hashes. Now you have the document's hash without deeply breaking into the storage system. If you can find a collision, you can get this important document as well. I know that's not the best example, since time-stamping only makes sense when the hash is collision-resistant, but you get the idea: there is no need to break into the storage system to get the hash.

Except that nobody uses Edon-R-512/256 in such a manner.

> #2 attack: Attend a key-signing party that your victim is going to attend. Download the victim's public key and create your own key with the same weak ZFS hash as the victim's public key. The fingerprint based on a strong hash will of course be different.
> Give your fingerprint to the same people your victim is giving his fingerprint to. Now, if someone running ZFS with dedup and the weak checksum downloads your key first and then your victim's key, your attack has succeeded: he will be encrypting messages to your victim using your public key.
>
> Another #2 attack: The certificate of some new CA is added to the major browsers. Generate a certificate that matches the weak ZFS hash of this new CA cert. Start spamming, or send e-mail to a few selected targets. With maildirs it should be pretty easy to arrange the proper alignment so that when the new browser version is installed and the new CA cert is stored, dedup, instead of writing it, will point at your cert instead.

I'm sorry, but these are pure fantasy land. The attacks you describe are so unbelievably contrived that you might as well resort to rubber-hose cryptanalysis and be done with it: https://xkcd.com/538/ What you propose is akin to performing dental surgery through the anus. This is exactly the sort of discussion I was hoping to avoid: armchair experts
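Whatever one thinks of the scenarios, the mechanism under debate is easy to demonstrate once the hash is weak enough to collide: with verify off, whichever block is written first wins, and a later write of different content with the same digest is silently aliased. A toy demonstration with a deliberately crippled 16-bit hash (all names here are ours; no real dedup hash is remotely this weak):

```python
# Toy collision/substitution demo against dedup with verify off.
# The 16-bit "hash" is deliberately broken for demonstration purposes.
import hashlib

def weak_hash(block):
    return hashlib.sha256(block).digest()[:2]   # 16 bits: trivially collidable

store = {}   # digest -> block; first writer wins when verify is off

def write(block):
    d = weak_hash(block)
    store.setdefault(d, block)   # dedup "hit" if the digest is already present
    return d

victim = b"ssh-rsa AAAA... victim's public key"
target = weak_hash(victim)

# The attacker brute-forces a block with the same weak digest and
# writes it before the victim's data arrives.
evil = next(b"evil-%d" % i for i in range(1 << 20)
            if weak_hash(b"evil-%d" % i) == target)
write(evil)
write(victim)   # aliased to the attacker's block; victim's data never stored

assert store[target] == evil
```

With verify on, the byte comparison at the second write would catch the mismatch, which is exactly why "cheap hash plus verify" keeps coming up in this thread as the defensive configuration.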
Re: [cryptography] Q: CBC in SSH
On Feb 13, 2013, at 3:22 PM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:
> Bodo Moeller bmoel...@acm.org writes:
>> On Wed, Feb 13, 2013 at 12:52 PM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:
>>> active use of ECC suites on the public Internet is practically nonexistent
>> That's not entirely accurate; try www.google.com.
> It was based on the last (SSL Observatory?) scans at the time, which found about five or six servers worldwide, presumably the test servers run by Certicom, Red Hat, Microsoft, etc. If Google supports ECC now, that'd be good: one more site to test against.

We see quite a bit of ECDHE traffic at the sites that feed our notary. At the moment, the top three cipher suites we see (by connection count) are TLS_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, and TLS_ECDHE_RSA_WITH_RC4_128_SHA. We also see TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (7th most popular). If http://www.imperialviolet.org/2012/03/02/ieecdhe.html is still correct, RC4+ECDHE is chosen by Chrome and Firefox, while AES+ECDHE is Safari and Internet Explorer. The first non-AES/RC4 cipher suite is TLS_RSA_WITH_3DES_EDE_CBC_SHA (9th most popular), followed by TLS_RSA_WITH_CAMELLIA_256_CBC_SHA.

Bernhard
Re: [cryptography] Q: CBC in SSH
> Those are some pretty odd stats... Camellia is almost as popular as 3DES?

Well, it is what we see :). And all in all, Camellia is even more popular than 3DES in our data set (there are some additional, less popular cipher suites for both 3DES and Camellia). It is pretty close, though.

Bernhard
Re: [cryptography] Zero knowledge as a term for end-to-end encryption
On Tue, Feb 12, 2013 at 10:27 PM, ianG i...@iang.org wrote:
> AFAIK, the term 'least authority' as used by Tahoe-LAFS folks does not refer to 'zero knowledge' as per cryptographic protocols, but to the concept of least authority as derived from the 'capabilities' school of security thought.

I strongly agree that capabilities are quite important to the Tahoe-LAFS idea of least authority, and I have been following the project for many years. But I think the Tahoe style of least authority and end-to-end encryption go hand in hand. Tahoe's capabilities are crypto capabilities, a.k.a. capabilities as keys: the capability tokens are the cryptographic keys themselves. This means the entire storage system is opaque to anyone who doesn't hold at least a readcap. The system, by design, deals only in ciphertext. It's ciphertext all the way down.

After the launch of MEGA, I've seen several sites (e.g. SpiderOak) trying to claim to be the first to have invented this concept. I don't know who did it first, but I'm pretty sure Tahoe was the first to actually get it right.

-- Tony Arcieri
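A minimal sketch of the capabilities-as-keys idea described above: the readcap bundles the storage index, the decryption key, and an integrity check, so the server only ever stores and serves ciphertext. This is a heavy simplification with our own names, and a hash-counter keystream standing in for Tahoe's real AES-based cap format:

```python
# Sketch of "capabilities as keys": holding the readcap = ability to read.
# Hypothetical simplification; Tahoe-LAFS's actual cap format differs.
import hashlib
import secrets

SERVER = {}   # storage index -> ciphertext; the server never sees plaintext

def keystream(key, n):
    """Hash-counter keystream (stand-in for a real stream cipher)."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def put(plaintext):
    key = secrets.token_bytes(32)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))
    index = hashlib.sha256(ct).hexdigest()   # server-side name, derived from ciphertext
    SERVER[index] = ct
    # The readcap is the key material itself, plus location and integrity check.
    return (index, key, hashlib.sha256(plaintext).digest())

def get(readcap):
    index, key, check = readcap
    ct = SERVER[index]
    pt = bytes(c ^ k for c, k in zip(ct, keystream(key, len(ct))))
    if hashlib.sha256(pt).digest() != check:
        raise ValueError("integrity check failed")
    return pt

cap = put(b"it's ciphertext all the way down")
assert get(cap) == b"it's ciphertext all the way down"
```

Anyone holding only the server's state has ciphertext and nothing else; anyone holding the readcap can read. The least-authority property is expressed directly in the crypto rather than in access-control lists.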