Re: [cryptography] the Zcash Open Source Miner Challenge (and about Zcash in general)

2016-11-14 Thread Zooko Wilcox-OHearn
On Wed, Nov 9, 2016 at 3:07 PM, Jaromil <jaro...@dyne.org> wrote:
>
> ...but ZCash feels a bit scammy. Its pumped up entry on the market
> burnt a lot of people's money... is it just their fault being stupid?
…
> Sincerely, I'm not trolling. Seeing there is some space for a civil
> conversation, I'd be interested in reading answers from the Zcash ppl
> themselves here, what they are going to make out this market hype
> stun. I'm a big fan of all Z- things (ZFS, ZSh, Zorro) but
> ZCash still.. meh. how about helping us understand?


I'm not quite sure what your question or objection is. Can you spell it out?

I and the Zcash dev team have no control over the market price. We
don't operate an exchange, we haven't bought or sold any ZEC, we have
never given anyone investment advice, and we've always striven in our
public communications to be clear about the risks and limitations of
the Zcash project.

Sincerely,

Zooko
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] the Zcash Open Source Miner Challenge (and about Zcash in general)

2016-10-10 Thread Zooko Wilcox-OHearn
Hi folks!

I've been quiet on this list for a while now. I've been hard at work
on creating a Bitcoin-like cryptocurrency with zero-knowledge-based
crypto:

https://z.cash

This is the most sophisticated crypto that I've ever seen someone
attempt to deploy at scale to the Internet. (By all means feel free to
reply and teach me about counter-examples to that generalization.)

There's a lot going on there. To jump into the technical side, I'd
suggest the Zcash protocol spec:
https://github.com/zcash/zips/blob/master/protocol/protocol.pdf . For
an introduction to the bigger picture, probably our blog
(https://z.cash/blog/) and FAQ (https://z.cash/support/faq.html).

Okay the reason I'm writing today is to let you know about the Zcash
Open Source Miner Challenge:

https://zcashminers.org/

The Zcash company has donated $30,000 for prize money to reward better
open-source implementations of Equihash by Biryukov & Khovratovich:

https://www.internetsociety.org/sites/default/files/blogs-media/equihash-asymmetric-proof-of-work-based-generalized-birthday-problem.pdf
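As a toy illustration (mine, not the actual Equihash construction) of the birthday-problem trade-off that Equihash builds on: finding a partial hash collision takes only about 2^(k/2) hash evaluations, but only if you can afford to store what you've hashed, which is what makes memory the binding resource.

```python
import hashlib

def find_partial_collision(prefix_bits=20):
    """Find two inputs whose SHA-256 digests agree on the first
    `prefix_bits` bits. The birthday bound says this needs roughly
    2**(prefix_bits/2) hashes -- but only if we keep a table of
    everything seen so far, trading memory for time."""
    seen = {}
    counter = 0
    while True:
        h = hashlib.sha256(counter.to_bytes(8, "big")).digest()
        prefix = int.from_bytes(h[:4], "big") >> (32 - prefix_bits)
        if prefix in seen:
            return seen[prefix], counter
        seen[prefix] = counter
        counter += 1

a, b = find_partial_collision()
```

Equihash layers Wagner's k-list generalized birthday algorithm on top of this idea, so that verifying a solution stays cheap while finding one stays memory-bound.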

Jump in! The worst that can happen is that you get the fun and
education of implementing an interesting new proof-of-work algorithm.
:-)

Regards,

Zooko


Re: [cryptography] Should Sha-1 be phased out?

2015-11-06 Thread Zooko Wilcox-OHearn
On Tue, Oct 20, 2015 at 8:00 AM, Joachim Strömbergson
<joac...@strombergson.com> wrote:
>
> Esp in embedded space, md5 is still very, very common even in new
> designs. And SHA-1 is the new black.
>
> A typical setup is that someone has found out that there is a secure
> hash function called md5 and decided to implement it in their new
> system. When told that md5 is in fact broken since ages, the response is
> usually a at the moment-decision that it is not used for security, and
> that the application doesn't really have any security implications (i.e.
> that the service performed by the system has no value).

Yep. Actually the post-hoc rationalization is usually that
collision-resistance isn't needed, only (2nd-)pre-image resistance.

Some of the time this is actually true, but I think the people making
the claim don't really know whether it is true. I think what they
typically do is spend 60 seconds trying to imagine how they could
attack their own system using collisions, and then having failed to
find such an attack, they conclude that collision-resistance isn't
needed for their system.

Here's one of my favorite examples of this methodology, from Linus
Torvalds: 
http://git.vger.kernel.narkive.com/9lgv36un/zooko-zooko-com-revctrl-colliding-md5-hashes-of-human-meaningful#post2

So, my attempted contribution to this pattern was to help specify
BLAKE2, so that instead of telling people "MD5 is broken! Switch to
this secure but slower hash function!" we could tell them "MD5 is
broken! Switch to this secure but faster hash function!"

https://blake2.net/acns/slides.html
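Concretely, in an environment with both available (here Python's hashlib, names as in the standard library), the switch is a one-line change:

```python
import hashlib

data = b"hello world"

# The old, broken-but-fast choice:
legacy = hashlib.md5(data).hexdigest()

# The drop-in replacement: secure *and* typically faster than MD5
# on 64-bit CPUs. Same interface, configurable digest size.
modern = hashlib.blake2b(data, digest_size=32).hexdigest()
```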

It remains to be seen if they are any more responsive to the new
argument than they have been for the last couple of decades to the old
argument.

Regards,

Zooko


Re: [cryptography] hashes based on lots of concatenated LUT lookups

2014-07-11 Thread Zooko Wilcox-OHearn
Dear Eugen:

There have been several experiments in this direction, using
memory-hard proofs-of-work. For example, this was the motivation for
Litecoin (https://en.wikipedia.org/wiki/Litecoin) to use scrypt in its
Proof-of-Work. To my knowledge, the state-of-the-art design is John
Tromp's Cuckoo PoW: https://github.com/tromp/cuckoo
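To make "memory-hard" concrete: scrypt exposes its cost knobs directly, e.g. via Python's hashlib. The parameter values below are illustrative, not Litecoin's (Litecoin actually runs scrypt at the very low setting N=1024, r=1, p=1, a common criticism of its ASIC resistance):

```python
import hashlib

# n (CPU/memory cost) and r (block size) together fix the memory
# footprint at 128 * n * r bytes: n=2**14, r=8 means ~16 MiB per
# evaluation, which is what squeezes the ASIC advantage.
digest = hashlib.scrypt(b"block header", salt=b"per-block nonce",
                        n=2**14, r=8, p=1, dklen=32)
```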

In my opinion, this is a promising direction to take. It might still
succumb to centralization-of-mining in the long-term, but maybe not.
There's a possibility it would settle into an economic equilibrium in
which independent/hobbyist/small-time mining is sufficiently
rewarding, but customized, large-scale, vertically-integrated mining
is not rewarding enough to justify its costs.

Among anti-mining-centralization techniques that I've studied, this is
the only one that is easy to implement in the near-term, and doesn't
come with too many complications and risks for near-term deployment.

For the contrarian view, arguing that ASIC-resistance is undesirable,
impossible, or both, see this whitepaper by andytoshi:
http://download.wpsoftware.net/bitcoin/asic-faq.pdf . I disagree with
the conclusions, but it makes some good arguments.

For a survey of state-of-the-art ideas about Proof-of-Stake — ideas
which *aren't* easily implementable and which *do* come with
complexity, uncertainty, and risk — see Vitalik Buterin's latest opus:
https://blog.ethereum.org/2014/07/05/stake/ . That guy is a good
thinker and writer! And he appears to have been reading my mind. As
well as adding in a bunch of ideas that were not in my mind, from such
sources as http://eprint.iacr.org/2014/452.pdf .

Regards,

Zooko


Re: [cryptography] [Cryptography] Cuckoo Cycles: a new memory-hard proof-of-work system

2014-01-09 Thread Zooko O'Whielacronx
Hello John Tromp!

That is neat! The paper could use a related work section, for example
Litecoin uses scrypt in the attempt to make it harder to implement in
ASIC:

https://litecoin.info/Scrypt

The current Password Hashing Contest (disclosure: I am on the panel)
may be relevant to your interests:

https://password-hashing.net/

Regards,

Zooko


Re: [cryptography] [zfs] [Review] 4185 New hash algorithm support

2013-10-29 Thread Zooko Wilcox-OHearn
On Mon, Oct 28, 2013 at 6:49 AM, Richard Elling
richard.ell...@gmail.com wrote:

> I hate to keep this thread going, but it cannot end with an open-ended
> threat... please, let's kill it off nice and proper.

Hey, I don't want to waste anyone's time, including my own. If nobody
is interested in this — possibly including the original author of the
patch, Saso Kiselkov, judging from ¹ — then by all means let's drop
the subject.

¹ http://article.gmane.org/gmane.os.illumos.zfs/3103

However, in case someone out there is reading this…

> Do you agree that if the attacker does not have the DDT key (including
> the hash) of the future intended write (ignoring the fact that we
> haven't invented a properly working time machine yet) that this attack
> is extraordinarily difficult to conduct with any hope of a fruitful
> outcome? If so, let's kill this thread.

I'm not sure what you mean about the future intended write. The risk I
was talking about was that an attacker can cause two blocks (on
someone else's ZFS system) to hash to the same fingerprint.

Assuming that “the DDT key” is the secret which is prefixed to the
block contents in the current patch, then I agree it is extremely
difficult to cause two blocks to hash to the same fingerprint. A way
to be more precise about how difficult it is, is to talk about what
property we depend on the hash function to have in order to prevent
this attack.

If the attacker steals the secret, or if there is some variant of ZFS
which shares that secret among multiple parties ², then the property
that we rely on the hash function to have is “collision-resistance”.
If the attacker doesn't have the secret, then the property that we
rely on the hash function to have is one which is closely related to,
and even easier to achieve than, being a “MAC”.

² http://article.gmane.org/gmane.os.illumos.zfs/3015
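To illustrate the two cases (function and key names are mine; the actual patch prefixes the secret to the block contents rather than using BLAKE2's keyed mode, but the construction is analogous):

```python
import hashlib, secrets

# A secret known only to the pool makes the fingerprint MAC-like:
# without it, an attacker can't even compute a block's fingerprint,
# let alone craft two blocks that share one.
dedup_secret = secrets.token_bytes(32)

def fingerprint(block: bytes) -> bytes:
    # BLAKE2 supports keyed hashing natively, so no HMAC wrapper
    # is needed around the hash function.
    return hashlib.blake2b(block, key=dedup_secret, digest_size=32).digest()
```

If `dedup_secret` leaks, or is shared among mutually distrusting parties, the scheme falls back to needing full collision resistance from the underlying hash.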

Functions which, in my opinion, have this easier-to-achieve-than-MAC
property include SHA-256, HMAC-MD5, Skein, BLAKE2, and
BLAKE2-reduced-to-5-rounds. Almost all cryptographic hash functions
have this property! One of the few cryptographic hash functions which
I would be not so confident in is Edon-R. It *probably* still has this
property, but it might not, and cryptographers haven't studied it
much.

Functions which, in my opinion, have the much harder-to-achieve
“collision-resistance” property include SHA-256, Skein, BLAKE2, and
*probably* BLAKE2-reduced-to-5-rounds.

> I'll let the fact that there is no “future dedup run” and there is no
> “replace blocks later” in ZFS fall quietly in the forest with nobody
> listening.

I'm sorry if I've misunderstood; I'm not an expert on ZFS. If you'd
like to take some of your valuable time to explain it to me, I'll
spend some of my valuable time to learn, because I'm interested in
filesystems in general and ZFS in particular. If not, I'm pretty sure
everything I've written above is still true.

Regards,

Zooko Wilcox-O'Hearn

Founder, CEO, and Customer Support Rep
https://LeastAuthority.com
Freedom matters.


---
illumos-zfs
Archives: https://www.listbox.com/member/archive/182191/=now
RSS Feed: https://www.listbox.com/member/archive/rss/182191/22842876-ced276b8


Re: [cryptography] [zfs] [Review] 4185 New hash algorithm support

2013-10-22 Thread Zooko Wilcox-OHearn
On Tue, Oct 22, 2013 at 6:05 AM, Schlacta, Christ aarc...@aarcane.org wrote:

> If any weakened algorithm is to be implemented, how can we know how
> weak is too weak, and how strong is sufficient? Each professional
> cryptographer has given different opinions and all those at our
> immediate disposal have now been biased.

A good way to do that is to use an algorithm that has attracted
interest from a large number of independent cryptographers. If many
cryptographers have invested extensive effort trying to find
weaknesses in an algorithm, and haven't reported any, then we can feel
more confident that it is less likely to harbor undiscovered
weaknesses.

Among the algorithms we've been talking about in this thread, SHA-256,
HMAC-MD5, Skein, Keccak, and BLAKE are all in this category of being
well-studied.

Cryptographers publish when they find a weakness in a reduced-round
variant of an important algorithm. You can see a summary of the best
results against weakened variants of BLAKE in ¹ (Table 1).

¹ http://eprint.iacr.org/2013/467

The rows labeled “perm.” and “cf.” are attacks on just one component
of the hash, not the whole algorithm. The “# Rounds” column shows how
many rounds of a reduced-round variant would be vulnerable to that
attack.

Don't forget to look at the “Complexity” column, too! That shows
(roughly) how many calculations would be necessary to implement the
attack. Yes, almost all of them are computations that are completely
impossible for anyone to actually execute in the foreseeable future.
But still, they are the best attacks that anyone has (publicly) come
up with against those weakened variants of BLAKE, so they serve as a
heuristic indicator of how strong it is.

Among the well-studied algorithms listed above, BLAKE is one of the
best-studied. It was one of the five finalists in the SHA-3 contest,
and in the final report of the contest ², NIST wrote “The
cryptanalysis performed on BLAKE […] appears to have a great deal of
depth”. Here is a list of research reports that analyzed BLAKE: ³.

² http://dx.doi.org/10.6028/NIST.IR.7896
³ https://131002.net/blake/#cr

Now, BLAKE2 is not necessarily as secure as BLAKE. We could have
accidentally introduced weaknesses into BLAKE2 when making tweaks to
optimize it. The paper ¹ looked for such weaknesses and reported that
they found nothing to make them distrust BLAKE2.

We use a stream cipher named ChaCha ⁴,⁵ as the core of BLAKE and
BLAKE2, and nobody has found any weakness in ChaCha. Again, that
doesn't mean we didn't manage to screw it up somehow, but I think it
helps! If anyone found a weakness in ChaCha, it would *probably* also
show them a weakness in BLAKE2, and vice versa.

⁴ https://en.wikipedia.org/wiki/ChaCha_%28cipher%29#ChaCha_variant
⁵ https://tools.ietf.org/html/draft-agl-tls-chacha20poly1305-02
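For the curious, the shared core is small enough to show in full. This is the ChaCha quarter-round as published (a sketch in Python; BLAKE2's G function is the same add-rotate-xor skeleton with two message words mixed in):

```python
MASK = 0xFFFFFFFF  # work in 32-bit words

def rotl32(x, n):
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(a, b, c, d):
    """One ChaCha quarter-round: four add-rotate-xor steps."""
    a = (a + b) & MASK; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK; b = rotl32(b ^ c, 7)
    return a, b, c, d
```

The heavy cryptanalysis of ChaCha applies to exactly this function, which is why weaknesses found in one would likely transfer to the other.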

In sum, there has been a lot of independent analysis of BLAKE2, BLAKE,
and ChaCha, and I hope there will be more in the future. If you use a
reduced-round version of BLAKE2, you can look at these results to see
whether anyone has published an attack that would break that
reduced-round version. Of course, more rounds is safer against future
breakthroughs.

It was in that context that I recommended that ZFS use the most rounds
of BLAKE2 that it can while still being faster than Edon-R. ☺ That
will probably be around 5 rounds.

Regards,

Zooko Wilcox-O'Hearn

Founder, CEO, and Customer Support Rep
https://LeastAuthority.com
Freedom matters.




[cryptography] my comment to NIST about reduced capacity in SHA-3

2013-10-16 Thread Zooko Wilcox-OHearn
Date: Tue, 1 Oct 2013 15:45:27 -0400
From: zooko zo...@zooko.com
To: Multiple recipients of list hash-fo...@nist.gov
Subject: Re: On 128-bit security

Folks:

Here are my personal opinions about these issues. I'm not expert at
cryptanalysis. Disclosure: I'm one of the authors of BLAKE2 (but not
one of the authors of BLAKE).

I personally do not believe that there is any secret agenda behind
this proposal, even though I believe that there was a secret agenda
behind Dual EC DRBG.

One reason that I believe that the motivation behind this proposal is
the stated motivation of improving performance, is that Joan Daemen
told me in person in January of 2013 that the Keccak team had
considered defining a reduced Keccak to compete with BLAKE2, but had
decided against it because they didn't want to disrupt the SHA-3
standardization process.

Apparently they changed their minds, and apparently their fears of
disruption turned out to be prescient!

I also do not think that a security level of 2^256 is necessarily
better than a security level of 2^128. *Maybe* it is better, but I'm
not aware of any examples where that sort of distinction has turned
out to matter in practice, and I can't really judge if it is likely to
matter in the future (except, of course, if you forget to take into
account multi-target issues…). I suspect nobody else can, either.

However, even though I *personally* would have confidence that a
Keccak with a 256-bit capacity would be safe and would be free of
maliciously induced weakness, I want a standard to be widely accepted
in addition to being safe.

This is the “Caesar's wife must be above suspicion” argument. It isn't
enough to make a secure standard; we also need other people to have
confidence in it.

And I don't know if we can persuade people that, no, it isn't actually
backdoored or weakened. It may be the kind of thing where, if that's
the conversation we're having, then we've already lost.

Would it make sense to go ahead and standardize
SHA3-as-a-replacement-for-SHA2 by standardizing the form of Keccak
which is most widely accepted by cryptographers and which is closest
to what was studied during the contest, and then separately offer
SHAKE and reduced-for-speed-Keccak as additional new things?

A lot of uses of secure hash functions don't need to be particularly
efficient. In my slides about BLAKE2
(https://blake2.net/acns/slides.html) I argue that there are use-cases
where efficiency is critical, but it is equally true that there are
common and important use cases where a 576-bit capacity Keccak would
be fine, e.g. public key certificates.
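The standardized instances ended up exposing exactly this trade-off. A quick sketch using Python's hashlib names (a sponge with capacity c offers roughly c/2 bits of generic security):

```python
import hashlib

msg = b"public key certificate bytes"

# SHA3-256: 512-bit capacity -> 256-bit preimage resistance, but a
# smaller rate, so fewer message bits absorbed per permutation call.
conservative = hashlib.sha3_256(msg).hexdigest()

# SHAKE128: 256-bit capacity -> a 128-bit security ceiling, with a
# larger rate (faster) and extensible output length.
fast = hashlib.shake_128(msg).hexdigest(32)
```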

---

Joan Daemen, one of the inventors of AES and one of the inventors of
Keccak (SHA-3), replied to my mailing list post as follows:

Date: Fri, 4 Oct 2013 05:08:07 -0400
From: Joan DAEMEN joan.dae...@st.com
To: Multiple recipients of list hash-fo...@nist.gov
Subject: RE: On 128-bit security

Hello all,

Zooko wrote:

> I personally do not believe that there is any secret
> agenda behind this proposal, even though I believe that
> there was a secret agenda behind Dual EC DRBG.
>
> One reason that I believe that the motivation behind
> this proposal is the stated motivation of improving
> performance, is that Joan Daemen told me in person in
> January of 2013 that the Keccak team had considered
> defining a reduced Keccak to compete with BLAKE2, but
> had decided against it because they didn't want to
> disrupt the SHA-3 standardization process.
>
> Apparently they changed their minds, and apparently
> their fears of disruption turned out to be prescient!

Yes, Zooko and I met at the end-of-Ecrypt II event on Tenerife in
early 2013 (24° C in January!).
I don't remember our conversation in detail, but I'm sure Zooko is
citing me correctly, because that is what we were thinking about at
the time.

Actually, what we had in mind was to propose something like Keccak2
to compete with BLAKE2 by drastically cutting the number of rounds,
e.g., down to 12 rounds for Keccak-f[1600], but otherwise keeping the
algorithm as it is. That might have sent the wrong message indeed, but
we just didn't do it.

In contrast, the capacity is an integral parameter of the Keccak
family that we even proposed as user-tunable in our SHA-3 submission.
Matching the capacity to the security strength levels of [NIST SP
800-57] is simply exploiting that flexibility.

Kind regards,

Joan, also on behalf of my Keccak companions

---

Regards,

Zooko Wilcox-O'Hearn

Founder, CEO, and Customer Support Rep
https://LeastAuthority.com
Freedom matters.


Re: [cryptography] [Cryptography] RSA equivalent key length/strength

2013-09-26 Thread zooko
On Wed, Sep 18, 2013 at 02:23:11PM -0700, Lucky Green wrote:

> Moti Young and others wrote a book back in the 90's (or perhaps 80's)
> that detailed the strength of various RSA key lengths over time. I am
> too lazy to look up the reference or locate the book on my bookshelf.
> Moti: help me out here? :-)

This is a very good resource because it includes recommendations from multiple
sources and makes it easy to compare them:

http://www.keylength.com/

Regards,

Zooko


Re: [cryptography] Asynchronous forward secrecy encryption

2013-09-26 Thread zooko
Let me just mention that this conversation is AWESOME. I only wish the folks
over at Perry's Crypto List (http://www.metzdowd.com/pipermail/cryptography/)
knew that we were having such a great conversation over here.

On Thu, Sep 19, 2013 at 09:20:04PM +0100, Michael Rogers wrote:

> The key reuse issue isn't related to the choice between time-based and
> message-based updates. It's caused by keys and IVs in the current
> design being derived deterministically from the shared secret and the
> sequence number. If an endpoint crashes and restarts, it may reuse a
> key and IV with new plaintext. Not good.

Another defense against this is to generate the IV from the plaintext,
possibly combined with other inputs. There are three things that you
might want to feed into your IV generator: 1. the plaintext, 2. a
persistent secret key used only for this purpose and known only to
this client, 3. a random nonce read from the operating system.

I would suggest including 1 and 2 but not 3.
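A sketch of items 1 and 2 combined, using HMAC as the PRF (the function and parameter names are mine, not from Michael's design):

```python
import hmac, hashlib

def derive_iv(iv_key: bytes, plaintext: bytes, iv_len: int = 16) -> bytes:
    """SIV-style derivation: the IV is a PRF of the plaintext under a
    key reserved solely for IV generation. After a crash and restart,
    re-encrypting the same plaintext reproduces the same (key, IV,
    plaintext) triple instead of pairing an old IV with new plaintext.
    The cost: encrypting identical plaintexts reveals their equality."""
    return hmac.new(iv_key, plaintext, hashlib.sha256).digest()[:iv_len]
```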

This *could* be seen as an alternative to the defense you described:

> In the new design, the temporary keys are still derived
> deterministically from the shared secret, but the IVs and ephemeral
> keys are random.

Or it could be used as an added, redundant defense. I guess if it is an added,
redundant defense then this is the same as including the random nonce -- number
3 from the list above.

Regards,

Zooko


Re: [cryptography] LeastAuthority.com announces PRISM-proof storage service

2013-08-29 Thread zooko
On Thu, Aug 29, 2013 at 02:44:37PM +0200, danimoth wrote:
> On 29/08/13 at 03:09pm, Nikos Fotiou wrote:
> > A suspicious user may wonder, how can he be sure that the service
> > indeed uses the provided source code. IMHO, end-to-end security can
> > be really verifiable--from the user perspective--if it can be
> > attested by examining only the source code of the applications
> > running on the user side.
>
> I agree with you, and I propose a simple protocol which follows your
> statement:
>
> - encrypt your data with a symmetric cipher and a private and robust key
> - make a hash of the encrypted data and store it securely (no loss
>   possible) offline
> - upload the encrypted data to some service.
> - download the encrypted data when you need it, check the hash, and
>   decrypt with the key used in the first pass.
>
> In this (simple) case, what is run server-side does not nullify the
> security properties (confidentiality and integrity in this example),
> provided that what is run user-side is ok.

The Least-Authority Filesystem does all of the above. We have some pretty good
docs:

https://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/docs/about.rst

http://code.google.com/p/nilestore/wiki/TahoeLAFSBasics

https://tahoe-lafs.org/trac/tahoe-lafs/wiki/FAQ
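danimoth's four steps can be sketched end-to-end like this (a toy: the SHA-256 counter-mode keystream stands in for a real vetted cipher, and is only here to keep the sketch self-contained):

```python
import hashlib, hmac, secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR with a SHA-256 counter-mode keystream.
    # Encryption and decryption are the same operation.
    stream = b""
    ctr = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(d ^ s for d, s in zip(data, stream))

# 1. encrypt with a symmetric cipher under a strong private key
key = secrets.token_bytes(32)
ciphertext = xor_cipher(key, b"my private data")

# 2. hash the ciphertext; keep the hash safely offline
offline_hash = hashlib.sha256(ciphertext).digest()

# 3. upload `ciphertext` to the untrusted service...
# 4. ...later download it, verify the hash, then decrypt
assert hmac.compare_digest(hashlib.sha256(ciphertext).digest(), offline_hash)
recovered = xor_cipher(key, ciphertext)
```

As danimoth says, nothing the server does can break confidentiality or integrity here, provided the client-side code is honest.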

Regards,

Zooko


Re: [cryptography] Reply to Zooko (in Markdown)

2013-08-29 Thread zooko
On Sat, Aug 24, 2013 at 09:18:33PM +0300, ianG wrote:
 
> I'm not convinced that the US feds can at this stage order the
> backdooring of software, carte blanche.  Is there any evidence of
> that?
>
> (I suspect that all their powers in this area are from pressure and
> horse trading.  E.g., the export of cryptomunitions needs a
> licence...)

I don't know. I asked a lawyer a few days ago -- a person who is, as far as I
can tell, one of the leading experts in this field. Their answer was that
nobody knows.

In any case, you don't appear to be arguing that Silent Text is different than
Silent Mail, only that the U.S. Federal Government would not require Silent
Circle to actively backdoor their own products. This argument applies equally
to the canceled product and the current ones.

In fact, I don't think it is a useful question for evaluating the security of
services that you rely on. If a service provider could spy on you at the behest
of their government, then an attacker who infiltrated that service provider's
systems could also spy on you.

Imagine that your adversary is not the U.S. NSA, but instead Chinese
cyber-warriors, and instead of contacting your service provider and demanding
cooperation, they simply remotely infiltrate your service provider's employee's
laptops. They've apparently done this many times in recent years, to Adobe,
Google, Microsoft, Nortel Networks, and basically every other company you can
name.

So I don't think the question of “To whom is my service provider
vulnerable?” is the right question. You can't really know the answer,
so it doesn't help you much to wonder about it. The right question is
“Am I vulnerable to my service provider?”. The answer, as far as
Silent Circle's current products go, is “yes”.


> I would be surprised if there was a single stated reason.

Here are the first five hits from DuckDuckGo for the query “silent
circle mail”:

We knew USG would come after us. That's why Silent Circle CEO Michael
Janke tells TechCrunch his company shut down its Silent Mail encrypted
email service.


http://techcrunch.com/2013/08/08/silent-circle-preemptively-shuts-down-encrypted-email-service-to-prevent-nsa-spying/

Silent Circle, the provider of a range of secure communications services,
has pre-emptively closed its Silent Mail email service in order to stop
U.S.  authorities from spying on its customers


http://gigaom.com/2013/08/09/another-u-s-secure-email-service-shuts-down-to-protect-customers-from-authorities/

Silent Circle, the global encrypted communications firm revolutionizing
mobile security for organizations and individuals alike, today announced it
has discontinued its Silent Mail e-mail encryption service in order to
preempt governments' demands for customer information in the escalating
surveillance environment targeting global communications. 


http://www.darkreading.com/privacy/silent-circle-ends-silent-mail-service-t/240159779

the Lavabit e-mail service used by National Security Agency leaker Edward
Snowden announced Thursday that it would shut down, implying heavily that
it had received some sort of government request for information. Hours
later ... Silent Circle, said it would preemptively shut down its Silent
Mail service to avoid ending up in the same position.


http://m.washingtonpost.com/business/technology/lavabit-silent-circle-shut-down-e-mail-what-alternatives-are-left/2013/08/09/639230ec-00ee-11e3-96a8-d3b921c0924a_story.html

There are far too many leaks of information and metadata intrinsically in
the email protocols themselves. Email as we know it with SMTP, POP3, and
IMAP cannot be secure.

https://silentcircle.wordpress.com/2013/08/09/to-our-customers/

(Kudos to Jon for saying something sensical in that last one!)

Regards,

Zooko


Re: [cryptography] Reply to Zooko (in Markdown)

2013-08-23 Thread Zooko Wilcox-OHearn
Dear Jon:

Thank you for your kind words and your detailed response.

I am going to focus only on the issue that I think is most relevant
and urgent for your customers and mine.

That urgent issue is: what's the difference between the now-canceled
Silent Mail product and the products that you are still offering, such
as Silent Text?

I don't understand why the Lavabit shutdown and the related domestic
surveillance disclosures imply that Silent Mail was unsafe in any way
that wouldn't also mean Silent Text is unsafe.

Before I go on, I'd like to point out a critical fact that some
readers might not be aware of: Ladar Levison, the owner of Lavabit,
now claims that he is being threatened with jail time *for having shut
down the service*:

http://investigations.nbcnews.com/_news/2013/08/13/20008036-lavabitcom-owner-i-could-be-arrested-for-resisting-surveillance-order?lite

This changes the equation, because it means the U.S. federal espionage
authorities can say not only “Backdoor all of your customers or close
your business” but also “Backdoor all of your customers or go to
jail”. As the owner and CEO of a
privacy-protecting service (https://LeastAuthority.com) and a U.S.
citizen, and as the father of three precious boys who do not want to
be separated from me for any length of time, this concerns me greatly.

Now, maybe the U.S. espionage authorities wouldn't make that threat
again. Maybe Ladar Levison's resistance will teach them that it was a
mistake. I don't know, but we have to take into account this
possibility for now. Your decision to shutter the Silent Mail product
was made because of such possibilities.

But your decision to *keep* the Silent Text service (and the others)
still operating while shutting down the Silent Mail service would make
sense only in the following scenario:

Attacker: We're here to compel you to give us access to the
confidential communications of all of your customers.

Silent Circle: But, to do that we would have to change our client —
for example, change its random number generator to produce output that
we can predict — and then upload a software update to the Apple and
Google app stores, and then wait for all of our customers to
automatically upgrade to the new version!

Attacker: Oh, well in that case nevermind.

Why do you think that this scenario is plausible? I don't think it is
plausible. Instead, I think the conversation would go like this:

Silent Circle: … and then wait for all of our customers to
automatically upgrade to the new version!

Attacker: Okay. Do that.


Now, there is a big, complex, and interesting question about how to
enable others to *verify* the security of software. It is not
impossible, as you suggested. Good progress on enabling independent
verification of security is being made, by Whisper Systems
(https://whispersystems.org/), my own company LeastAuthority.com, the
Tor Project 
(https://blog.torproject.org/blog/deterministic-builds-part-one-cyberwar-and-global-compromise),
Gitian (https://gitian.org/), Debian
(https://wiki.debian.org/ReproducibleBuilds), and Bitcoin
(https://en.bitcoin.it/wiki/Release_process).

But before we get into the nuts and bolts of how to facilitate
verification of end-to-end security, I want to hammer on the first
issue: before going forth to try to improve an issue, we should first
admit to our current customers and to the public that the issue
exists. We shouldn't mislead our customers into thinking that they are
safe from something that they are not. Silent Circle's closure of
Silent Mail for the stated reason is inconsistent with its continued
operation of the Silent Text service. The stated reason was that the
US federal government could compel Silent Circle to backdoor the
Silent Mail service. That same reason applies today to the Silent Text
service and the other services that Silent Circle is still operating.

To be clear, I'm not asking you to shut down your other services. I
think that would be a loss for everyone. And I'm not asking you to
magically fix all of the problems by tomorrow. I know, in part from
your detailed letter, that you are currently working on improving some
parts of your process, and I think that there are other techniques
that you could use (including licensing your source code as Free and
Open Source software) that would help. But I understand the challenges
of running a business, actively serving customers, and performing
sophisticated engineering all at once. I know that improvement takes
time. What I'm asking you to do is to *be clear* with your customers
and with the public about the current limitations.

Currently, the US federal espionage agencies can compel Silent Circle
to secretly provide access to all of Silent Circle's customers'
private communications. That's too bad. But it is fixable! And fixing
it starts with admitting what the problem is.


Regards,

Zooko Wilcox-O'Hearn

Founder, CEO, and Customer Service Rep
https://LeastAuthority.com
Freedom matters

Re: [cryptography] LeastAuthority.com announces PRISM-proof storage service

2013-08-16 Thread zooko
On Tue, Aug 13, 2013 at 03:16:33PM -0500, Nico Williams wrote:
 
> Nothing really gets anyone past the enormous supply of zero-day vulns in
> their complete stacks.  In the end I assume there's no technological PRISM
> workarounds.

I agree that compromise of the client is relevant. My current belief is that
nobody is doing this on a mass scale, pwning entire populations at once, and
that if they do, we will find out about it.

My goal with the S4 product is not primarily to help people who are being
targeted by their enemies, but to increase the cost of indiscriminately
surveilling entire populations.

Now maybe it was a mistake to label it as PRISM-Proof in our press release
and media interviews! I said that because to me PRISM means mass surveillance
of innocents. Perhaps to other people it doesn't mean that. Oops!

Regards,

Zooko

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] LeastAuthority.com announces PRISM-proof storage service

2013-08-16 Thread zooko
On Tue, Aug 13, 2013 at 01:52:38PM -0500, Nicolai wrote:
 
> Zooko: Congrats on the service.  I'm wondering if you could mention on the
> site which primitives are used client-side.  All I see is that combinations
> of sftp and ssl are used for data-in-flight.

Thanks!

I'm not sure what your question is. The available interfaces to the gateway -- 
i.e. the cleartext side that is marked in red on [1] -- are:

* the tahoe command-line tool [2]

* your unadorned web browser, even with JavaScript turned off, pointed at the 
gateway over localhost (or over SSL to a remote host, or whatever you want)

* your FTP or SFTP client

* FUSE (although in a Rube Goldberg-esque setup where FUSE is chained to the 
aforementioned SFTP server through the sshfs tool; like a Rube Goldberg
device, it actually does work once you get all the pieces set up next to each 
other.)

The semantics of what you can do with this are described in summary here:

https://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/docs/about.rst#access-control

And in much more detail in the documentation pages linked from there.

Does that answer your question?

Regards,

Zooko

[1] https://tahoe-lafs.org/trac/chrome/LAFS.svg

[2] https://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/docs/frontends/CLI.rst

P.S. This is a test of charset handling through GNU screen, mutt, and GNU
mailman: ¹

(That should be a superscript 1.)


[cryptography] open letter to Phil Zimmermann and Jon Callas of Silent Circle, re: Silent Mail shutdown

2013-08-16 Thread Zooko Wilcox-OHearn
-to-prevent-nsa-spying/

We're trying an approach to this problem, here at LeastAuthority.com,
of “*verifiable* end-to-end security”. For our data backup and storage
service, all of the software is Free and Open Source, and it is
distributed through channels which are out of our direct control, such
as Debian and Ubuntu. Of course this approach is not perfectly secure
— it doesn't guarantee that a state-level actor cannot backdoor our
customers. But it does guarantee that *we* cannot backdoor our
customers.

This currently imposes inconvenience on our customers, and I'm not
saying it is the perfect solution, but it shows that there is more
than one way to go at this problem.

Thank you for your attention to these important matters, and your
leadership in speaking out about them.

(By the way, LeastAuthority.com is not a competitor to Silent Circle.
We don't offer voice, text, video, or email services, like Silent
Circle does/did. What we offer is simply secure offsite *backup*, and
a secure cloud storage API that people use to build other services.)

Regards,

Zooko Wilcox-O'Hearn

.. _recent shutdown of Lavabit:
http://boingboing.net/2013/08/08/lavabit-email-service-snowden.html

.. _shutdown of Silent Circle's “Silent Mail” product:
http://silentcircle.wordpress.com/2013/08/09/to-our-customers/

.. _Jon Callas's posts about the topic on G+:
https://plus.google.com/112961607570158342254/posts/9uySMokvg7k

.. _Phil Zimmermann's interview in Forbes:
http://www.forbes.com/sites/parmyolson/2013/08/09/e-mails-big-privacy-problem-qa-with-silent-circle-co-founder-phil-zimmermann/

.. _2013 Mass Surveillance Scandal:
https://en.wikipedia.org/wiki/2013_mass_surveillance_scandal


[cryptography] LeastAuthority.com announces PRISM-proof storage service

2013-08-13 Thread Zooko Wilcox-OHearn
Dear people of the cryptography@randombit.net mailing list:

For obvious reasons, the time has come to push hard on *verifiable*
end-to-end encryption. Here's our first attempt. We intend to bring
more!

We welcome criticism, suggestions, and requests.

Regards,

Zooko Wilcox-O'Hearn

Founder, CEO, and Customer Support Rep
https://LeastAuthority.com
Freedom matters.

---




LeastAuthority.com Announces A PRISM-Proof Storage Service
==========================================================

Wednesday, July 31, 2013

`LeastAuthority.com`_ today announced “Simple Secure Storage Service
(S4)”, a backup service that encrypts your files to protect them from
the prying eyes of spies and criminals.

.. _LeastAuthority.com: https://LeastAuthority.com

“People deserve privacy and security in the digital data that make up
our daily lives,” said the company's founder and CEO, Zooko
Wilcox-O'Hearn. “As an individual or a business, you shouldn't have to
give up control over your data in order to get the benefits of cloud
storage.”

verifiable end-to-end security
------------------------------

The Simple Secure Storage Service offers *verifiable* end-to-end security.

It offers “end-to-end security” because all of the customer's data is
encrypted locally — on the customer's own personal computer — before
it is uploaded to the cloud. During its stay in the cloud, it cannot
be decrypted by LeastAuthority.com, nor by anyone else, without the
decryption key which is held only by the customer.

S4 offers “*verifiable* end-to-end security” because all of the source
code that makes up the Simple Secure Storage Service is published for
everyone to see. Not only is the source code publicly visible, but it
also comes with Free (Libre) and Open Source rights granted to the
public allowing anyone to inspect the source code, experiment on it,
alter it, and even to distribute their own version of it and to sell
commercial services.

Wilcox-O'Hearn says “If you rely on closed-source, proprietary
software, then you're just taking the vendor's word for it that it
actually provides the end-to-end security that they claim. As the
PRISM scandal shows, that claim is sometimes a lie.”

The web site of LeastAuthority.com proudly states “We can never see
your data, and you can always see our code.”

trusted by experts
------------------

The Simple Secure Storage Service is built on a technology named
“Least-Authority File System (LAFS)”. LAFS has been studied and used
by computer scientists, hackers, Free and Open Source software
developers, activists, the U.S. Defense Advanced Research Projects
Agency, and the U.S. National Security Agency.

The design has been published in a peer-reviewed scientific workshop:
*Wilcox-O'Hearn, Zooko, and Brian Warner. “Tahoe: the least-authority
filesystem.” Proceedings of the 4th ACM international workshop on
Storage security and survivability. ACM, 2008.*
http://eprint.iacr.org/2012/524.pdf

It has been cited in more than 50 scientific research papers, and has
received plaudits from the U.S. Comprehensive National Cybersecurity
Initiative, which stated: “Systems like Least-Authority File System
are making these methods immediately usable for securely and availably
storing files at rest; we propose that the methods be further
reviewed, written up, and strongly evangelized as best practices in
both government and industry.”

Dr. Richard Stallman, President of the Free Software Foundation
(https://fsf.org/) said “Free/Libre software is software that the
users control. If you use only free/libre software, you control your
local computing — but using the Internet raises other issues of
freedom and privacy, which many network services don't respect. The
Simple Secure Storage Service (S4) is an example of a network service
that does respect your freedom and privacy.”

Jacob Appelbaum, Tor project developer (https://www.torproject.org/)
and WikiLeaks volunteer (http://wikileaks.org/), said “LAFS's design
acknowledges the importance of verifiable end-to-end security through
cryptography, Free/Libre release of software and transparent
peer-reviewed system design.”

The LAFS software is already packaged in several widely-used operating
systems such as Debian GNU/Linux and Ubuntu.

https://LeastAuthority.com

Re: [cryptography] LeastAuthority.com announces PRISM-proof storage service

2013-08-13 Thread Zooko Wilcox-OHearn
On Tue, Aug 13, 2013 at 5:16 PM, Peter Saint-Andre stpe...@stpeter.im wrote:
> On 8/13/13 11:02 AM, ianG wrote:
>> Super!  I think a commercial operator is an essential step forward.
>
> How so? Centralization via commercial operators doesn't seem to have helped
> in the email space lately.

It helps because we at LeastAuthority.com
(https://LeastAuthority.com/about_us ) can spend our days improving
the performance and reliability of our ciphertext storage servers and
contributing patches back to the free-and-open-source client
(https://Tahoe-LAFS.org ).

If we weren't running LeastAuthority.com, we would presumably have to
get different jobs which would take a lot of time away from LAFS
hacking!

It helps our customers because they can avoid the effort and
expense of setting up and managing servers, and instead pay us a
monthly fee to maintain those servers and the storage of their
ciphertext. Also our customers and business partners like having the
option of hiring us for support when they are integrating the
free-and-open-source LAFS software into their own products.

Regards,

Zooko Wilcox-O'Hearn

Founder, CEO, and Customer Support Rep
https://LeastAuthority.com
Freedom matters.


[cryptography] ANNOUNCING Tahoe-LAFS v1.10

2013-05-13 Thread Zooko Wilcox-OHearn
ANNOUNCING Tahoe, the Least-Authority File System, v1.10

The Tahoe-LAFS team is pleased to announce the immediate
availability of version 1.10.0 of Tahoe-LAFS, an extremely
reliable distributed storage system. Get it here:

https://tahoe-lafs.org/source/tahoe-lafs/trunk/docs/quickstart.rst

Tahoe-LAFS is the first distributed storage system to offer
provider-independent security — meaning that not even the
operators of your storage servers can read or alter your data
without your consent. Here is the one-page explanation of its
unique security and fault-tolerance properties:

https://tahoe-lafs.org/source/tahoe-lafs/trunk/docs/about.rst

The previous stable release of Tahoe-LAFS was v1.9.2, released
on July 3, 2012.

v1.10.0 is a feature release which adds a new Introducer
protocol, improves the appearance of the web-based user
interface, improves grid security by making introducer FURLs
unguessable, and fixes many bugs. See the NEWS file [1] for
details.


WHAT IS IT GOOD FOR?

With Tahoe-LAFS, you distribute your filesystem across
multiple servers, and even if some of the servers fail or are
taken over by an attacker, the entire filesystem continues to
work correctly, and continues to preserve your privacy and
security. You can easily share specific files and directories
with other people.
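The “some servers can fail” property comes from erasure coding. Tahoe-LAFS
actually uses Reed-Solomon coding (via the zfec library) with configurable
k-of-n parameters; the following toy 2-of-3 XOR-parity scheme only
illustrates the idea, and is not Tahoe-LAFS's real encoding:

```python
# Toy 2-of-3 erasure code: split the file into halves A and B and
# store A, B, and A^B on three servers; any two shares recover it.
# (Tahoe-LAFS really uses Reed-Solomon codes, not this XOR scheme.)
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data = b"secret file data"             # toy example: even length
a, b = data[:8], data[8:]
shares = {0: a, 1: b, 2: xor(a, b)}    # one share per server

def recover(s0=None, s1=None, s2=None):
    if s0 is not None and s1 is not None:
        return s0 + s1
    if s0 is not None and s2 is not None:
        return s0 + xor(s0, s2)        # B = A ^ (A^B)
    if s1 is not None and s2 is not None:
        return xor(s1, s2) + s1        # A = B ^ (A^B)
    raise ValueError("need at least 2 of the 3 shares")

# Server 1 is down; servers 0 and 2 still reconstruct the file.
assert recover(s0=shares[0], s2=shares[2]) == data
```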

In addition to the core storage system itself, volunteers
have built other projects on top of Tahoe-LAFS and have
integrated Tahoe-LAFS with existing systems, including
Windows, JavaScript, iPhone, Android, Hadoop, Flume, Django,
Puppet, bzr, mercurial, perforce, duplicity, TiddlyWiki, and
more. See the Related Projects page on the wiki [3].

We believe that strong cryptography, Free and Open Source
Software, erasure coding, and principled engineering practices
make Tahoe-LAFS safer than RAID, removable drive, tape,
on-line backup or cloud storage.

This software is developed under test-driven development, and
there are no known bugs or security flaws which would
compromise confidentiality or data integrity under recommended
use. (For all important issues that we are currently aware of
please see the known_issues.rst file [2].)


COMPATIBILITY

This release should be compatible with the version 1 series of
Tahoe-LAFS. Clients from this release can write files and
directories in the format used by clients of all versions back
to v1.0 (which was released March 25, 2008). Clients from this
release can read files and directories produced by clients of
all versions since v1.0. Servers from this release can serve
clients of all versions back to v1.0 and clients from this
release can use servers of all versions back to v1.0.

Except for the new optional MDMF format, we have not made any
intentional compatibility changes. However we do not yet have
the test infrastructure to continuously verify that all new
versions are interoperable with previous versions. We intend
to build such an infrastructure in the future.

The new Introducer protocol added in v1.10 is backwards
compatible with older clients and introducer servers, however
some features will be unavailable when an older node is
involved. Please see docs/nodekeys.rst [14] for details.

This is the eighteenth release in the version 1 series. This
series of Tahoe-LAFS will be actively supported and maintained
for the foreseeable future, and future versions of Tahoe-LAFS
will retain the ability to read and write files compatible
with this series.


LICENCE

You may use this package under the GNU General Public License,
version 2 or, at your option, any later version. See the file
COPYING.GPL [4] for the terms of the GNU General Public
License, version 2.

You may use this package under the Transitive Grace Period
Public Licence, version 1 or, at your option, any later
version. (The Transitive Grace Period Public Licence has
requirements similar to the GPL except that it allows you to
delay for up to twelve months after you redistribute a derived
work before releasing the source code of your derived work.)
See the file COPYING.TGPPL.rst [5] for the terms of the
Transitive Grace Period Public Licence, version 1.

(You may choose to use this package under the terms of either
licence, at your option.)


INSTALLATION

Tahoe-LAFS works on Linux, Mac OS X, Windows, Solaris, *BSD,
and probably most other systems. Start with
docs/quickstart.rst [6].


HACKING AND COMMUNITY

Please join us on the mailing list [7]. Patches are gratefully
accepted -- the RoadMap page [8] shows the next improvements
that we plan to make and CREDITS [9] lists the names of people
who've contributed to the project. The Dev page [10] contains
resources for hackers.


SPONSORSHIP

Atlas Networks has contributed several hosted servers for
performance testing. Thank you to Atlas Networks [11] for
their generous and public-spirited support.

And a special thanks to Least Authority [12], which employs several
Tahoe-LAFS developers, for their continued support.

HACK TAHOE-LAFS!

If you can find a security 

Re: [cryptography] Bitcoin-mining Botnets observed in the wild? (was: Re: Bitcoin in endgame

2012-05-11 Thread Zooko Wilcox-O'Hearn
Folks:

Here's a copy of a post I just made to my Google+ account about this
alleged Botnet herder who has been answering questions about his
operation on reddit:

https://plus.google.com/108313527900507320366/posts/1oi1v7RxR1i

=== introduction ===

Someone is posting to reddit claiming to be a malware author, botnet
operator, and that they use their Botnet to mine Bitcoin: ¹.

I asked them a question about the economics of using a Botnet for the
Bitcoin distributed transaction-verification service (Bitcoin
mining): ².

They haven't provided any proof of their claims, but on the other hand
what they write and how they write it sounds plausible to me.


=== details ===

Here are my notes where I try to double-check their numbers and see if
they make sense.

They say in their initial post ¹ that they do 13-20 gigahashes/sec of work
on the Bitcoin distributed transaction verification service.

The screenshot they provided ³ shows 10.6 gigahashes/sec (GH/s) in
progress, and that they're using a mining pool named BTCGuild.
According to this chart of mining pools ⁴, BTCGuild currently totals
about 12.5% of all known hashing power, and according to ⁵ the current
total hashing power on the network is about 12.5 terahashes/sec
(TH/s), so BTCGuild probably accounts for about 1.5 TH/s.

They say that their Botnet has about 10,000 bots. The screen shot
shows a count of total bots = 12,000 and connected in the last 24
hours = 3500. This ratio of total bots to bots connected in the last
24 hours is consistent with other reports I've read of Botnets ⁶, and
also consistent with my experience in p2p networking. The number of
live bots available at any one time for this Botnet herder should
probably average out to somewhere between 350 and 550. Let's pick 500
as an easy number to work with. Does it make sense that 500 bots
could generate 10 GH/s? That's 20 MH/s per live bot. According to the
Bitcoin wiki's page on mining hardware ⁷, a typical widely-available
GPU should provide about 200 MH/s. Hm, so they are claiming only 1/10
the total hashpower that our back-of-the-envelope estimates would
assign to them. Here is an answer they give to another person's
question that sheds light on this: ⁸.

Q: Isn't Bitcoin mining pretty resource intensive on a computer? Like
to the point someone would notice something is up on their system form
it slowing eveyrthing down?

A: My Botnet only mines if the computer is unused for 2 minutes and
if the owner gets back it stops mining immidiatly, so it doesn't suck
your fps at MW3. Also it mines as low priority so movies don't lag. I
also set up a very safe threshold, the cards work at around 60% so
they don't get overheated and the fans don't spin as crazy.

It sounds plausible to me that those stealth measures could cut the
throughput by 10 compared to running flat-out 24/7. Also it isn't
clear if the botnet counts computers that don't have a GPU at all, or
don't have a usable one. Maybe such computers are rare nowadays?
Anyway if they are counted in there then that would be another reason
why the hashing throughput per bot is lower than I calculated.
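For what it's worth, the per-bot arithmetic above can be checked in a few
lines of Python (the figures are the rough estimates quoted in this post,
not measurements):

```python
# Back-of-envelope check of the claimed botnet hashrate, using the
# rough numbers quoted above (estimates, not measurements).
claimed_ghs = 10.6        # screenshot: ~10.6 GH/s at BTCGuild
live_bots = 500           # my estimate of bots live at any one time
typical_gpu_mhs = 200     # Bitcoin wiki figure for a common GPU

per_bot_mhs = claimed_ghs * 1000 / live_bots   # GH/s -> MH/s per live bot
print(f"~{per_bot_mhs:.0f} MH/s per live bot")  # ~21 MH/s, vs ~200 MH/s flat-out
```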

In answer to another question ¹⁰, they said they get a steady $40/day
from running the Bitcoin transaction-confirmation (mining) service.
According to this chart ¹¹ from ¹², the current U.S. Dollar value of
Bitcoin mining is (or was a couple of days ago when they wrote that)
about $0.33 per day for 100 MH/s. Multiplying that out by their claim
of 10.6 GH/s results in $35/day. So that adds up, too.
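The revenue claim checks out the same way (again using the rough figures
quoted above):

```python
# Cross-check the claimed $40/day against the mining-yield chart
# cited above (~$0.33/day per 100 MH/s at the time).
usd_per_day_per_100mhs = 0.33
claimed_ghs = 10.6

revenue = usd_per_day_per_100mhs * claimed_ghs * 1000 / 100
print(f"~${revenue:.0f}/day")   # ~$35/day, in the ballpark of the claimed $40
```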

(Note that it sounds like their primary business is stealing and
selling credit card numbers, and the Bitcoin transaction-verification
service is a sideline.)

I don't see a reason to doubt that they really generate about 10.6
GH/s of the Bitcoin distributed transaction verification service.

My primary question is: if this is profitable on a per-bot basis, then
why don't they scale up their operation? Of course, the answer to this
presumably sheds light on the related question of why competitors of
theirs don't launch similar operations. Perhaps one limiting factor is
that the larger your Botnet, the more likely you'll be arrested by
police or extorted by competitors. That may be a limiting factor that
this person doesn't yet know about or doesn't like to think about.
They mentioned ⁹ that most of their fellow cybercriminals are too
inexperienced to accept Bitcoin, so it may be that this person is
just ahead of the curve and more people will launch operations like
this in the future.

That's the question that I asked them on reddit—why don't they scale
up? They haven't yet replied to my question, but they earlier
mentioned in response to a different question ⁹:

Q: How many botted machines do you typically gain per month or per campaign.

A: about 500-1000 a day, weekends more. I'm thinking about just
buying them in bulks and milking them for bitcoins. Asian installs are
very cheap, 15$/1000 installs and have good GPUs.

If they're really gaining 

Re: [cryptography] DIAC: Directions in Authenticated Ciphers

2012-05-09 Thread Zooko Wilcox-O'Hearn
following-up to my own post:

On Wed, May 9, 2012 at 6:34 AM, Zooko Wilcox-O'Hearn zo...@zooko.com wrote:

> 1. Decrypt the data,
> 2. Verify the integrity of the data,
> 3. Generate MAC tags for other data which would pass the integrity check.
>
> The fact that 3 is included in that bundle of authority means that I can't
> use this notion of authenticated encryption to implement any of the current
> Tahoe-LAFS filesystem semantics. We need to be able to grant authorities 1
> and 2 while withholding 3.

I forgot to mention that we also need to be able to grant someone the
ability to do 2 without giving them the ability to do 1 or 3. This is
so that you can hire someone to verify the integrity of your data, and
repair damage to it, without giving them the ability to read or change
the data. That requirement might be an interesting requirement to
throw into the mix of symmetric-key-oriented Option A research.
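A minimal sketch of that verify-without-read capability, using nothing but a
hash over the stored ciphertext (illustrative only; this is not the actual
Tahoe-LAFS verify-cap construction):

```python
import hashlib

# The repairer is handed only H(ciphertext): enough to detect damage
# and pick an intact replica (authority 2), but no decryption key
# (authority 1) and no way to forge a matching digest for altered
# data (authority 3).
ciphertext = b"...opaque encrypted bytes from the storage servers..."
verify_cap = hashlib.sha256(ciphertext).digest()

def check_and_repair(stored, replicas):
    """Return an intact copy, using only the verify cap."""
    for candidate in (stored, *replicas):
        if hashlib.sha256(candidate).digest() == verify_cap:
            return candidate
    raise RuntimeError("no intact replica found")

damaged = ciphertext[:-1] + b"\x00"   # one corrupted byte
assert check_and_repair(damaged, [ciphertext]) == ciphertext
```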

Regards,

Zooko Wilcox-O'Hearn

Founder, CEO, and Customer Support Representative, Least Authority Enterprises
take advantage of cloud storage without losing control of your data:
https://leastauthority.com


Re: [cryptography] data integrity: secret key vs. non-secret verifier; and: are we winning? (was: “On the limits of the use cases for authenticated encryption”)

2012-04-26 Thread Zooko Wilcox-O'Hearn
On Wed, Apr 25, 2012 at 9:27 PM, Marsh Ray ma...@extendedsubset.com wrote:
> On 04/25/2012 10:11 PM, Zooko Wilcox-O'Hearn wrote:
>
>> 1. the secret-oriented way: you make a MAC tag of the chunk (or equivalently
>> you use Authenticated Encryption on it) using a secret key known to the good
>> guy(s) and unknown to the attacker(s).
>>
>> 2. the verifier-oriented way: you make a secure hash of the chunk, and make
>> the resulting hash value known to the good guy(s) in an authenticated way.
>
> Is option 2 sort of just pushing the problem around?
>
> What's going on under the hood in the term “in an authenticated way”?
>
> How do you do authentication in an automated system without someone somewhere
> keeping something secret?
>
> Is authenticating the hash value fundamentally different from ensuring the
> integrity of a chunk of data?

Those are definitely the right sorts of questions, Marsh. I think that
from our bias as crypto engineers familiar with protocols like TLS and
SSH, it seems like approach 1 is natural, or easy, or even the only
real, complete solution, but I think it is deceptive. I think _both_
approaches are sort of just pushing the problem around, and I suspect
that approach 1 actually leaves you with a harder problem left to
solve than approach 2 does. Observe that there is an “in an
authenticated way” problem hiding in option 1 as well -- someone has
to distribute the secret keys to the legitimate readers in an
authenticated way at some point.

Basically, as security engineers we tend to assume as a starting point
that there is some secret we can use which is known to the good guy(s)
and unknown to the attackers. But this isn't really a fair assumption.
It implies quite a lot of work that someone else is going to have to
do to make that true for us (especially in the multi-party case, but
we already don't seem to have solved the problem very well in the
traditional 2 party case!), and in practice it seems to often fail.
The secrets often turn out to be known to the attackers or unknown to
the intended recipients.

...

Um, frankly I'm having a hard time understanding exactly why my
intuitions about this come out so differently for data-at-rest tools
like Tahoe-LAFS and ZFS than for data-in-motion tools like TLS. My
intuition is that secret-based integrity-checking is fine for the
traditional two-party, time-limited session encryption a la TLS, but
that hash-based integrity-checking is more robust and pushes less of
the problem around when there are more than two parties and when the
data is persistent. The intuitions of the ZFS crypto designers (Darren
Moffat and others including Nico Williams) seem to have been that
secret-based integrity-checking still had some use even in that
scenario. However, I don't remember precisely what ZFS settled on for
secure data integrity checking.

I think to understand the trade-offs of these two options better we
would need an example system in which they could be used. My favorite
example is obviously Tahoe-LAFS, but I'm not sure if you would learn
from considering that example. There is no use of symmetric MAC (nor
Authenticated Encryption) anywhere in the Tahoe-LAFS data formats [*].
All data-integrity mechanisms are in the style of option 2 -- using
secure hashes of the data as the verification tag. In some cases
(immutable files and directories) that secure-hash-based integrity
check is the only integrity check. In others (mutable files and
directories), it is combined with a public key (RSA) digital
signature, where the message being signed includes the hash value.

I can't imagine how one could build a Tahoe-LAFS-like thing using the
secret-key integrity-checking approach instead. It seems like it would
make it impossible to give someone read-access to a file without also
giving them write-access. I.e. they would have to know the secret in
order to verify the contents of the file, and knowledge of the secret
would empower them to undetectably alter the contents of the file. I
guess this would make you want to share files with as few other people
as possible, and for as short of a time as possible, thus pushing you
back toward the two-party session type of usage.
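To make the asymmetry between the two approaches concrete, here is a
stdlib-only Python sketch (a toy, not Tahoe-LAFS's actual formats):

```python
import hashlib
import hmac

data = b"chunk of file data"

# Option 1: secret-key MAC. Anyone who can verify necessarily holds
# the key, and with the key can also forge a tag for altered data --
# the ability to verify implies the ability to undetectably write.
key = b"shared secret key"
tag = hmac.new(key, data, hashlib.sha256).digest()
assert hmac.compare_digest(tag, hmac.new(key, data, hashlib.sha256).digest())
forged_tag = hmac.new(key, b"altered data", hashlib.sha256).digest()  # verifier can do this

# Option 2: public hash as the verifier. Knowing the hash lets anyone
# check integrity, but producing a matching hash for different data
# would require breaking SHA-256.
verifier = hashlib.sha256(data).digest()
assert hashlib.sha256(data).digest() == verifier
assert hashlib.sha256(b"altered data").digest() != verifier
```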

Regards,

Zooko

[*] There are actually a couple of uses of symmetric MAC in the
Tahoe-LAFS system, but not for data integrity checking on the actual
file contents nor on the file metadata or directories, so let's ignore
those. They have to do with control plane stuff -- network
establishment and so on...


[cryptography] “On the limits of the use cases for authenticated encryption”

2012-04-25 Thread Zooko Wilcox-O'Hearn
Folks:

I posted this on Google+, which I'm effectively using as a blog:

https://plus.google.com/108313527900507320366/posts/cMng6kChAAW

I'll paste the content of my essay below. It elicited some keen
observations from Nikita Borisov in the comments on G+, but I guess you'll
have to actually load the page yourself to read those.

I also posted it on the tahoe-dev mailing list, where a small thread ensued:

https://tahoe-lafs.org/pipermail/tahoe-dev/2012-April/007315.html

Regards,

Zooko

*“On the limits of the use cases for authenticated encryption”*

*What is authenticated encryption?*

“Authenticated Encryption” is an abstraction that is getting a lot of
attention among cryptographers and crypto programmers nowadays.
Authenticated Encryption is just like normal (symmetric) encryption, in
that it prevents anyone who doesn't know the key from learning anything [*]
about the text. The authenticated part is that it *also* prevents anyone
who doesn't know the key from undetectably altering the text. (If someone
who doesn't know the key does alter the text, then the recipient will
cleanly reject it as corrupted rather than accepting the altered text.)

It is a classic mistake for engineers using crypto to confuse encryption
with authentication. If you're trying to find weaknesses in someone's
crypto protocol, one of the first things to check is whether the designers
of the protocol assumed that by encrypting some data they were preventing
that data from being undetectably modified. Encryption doesn't accomplish
that, so if they made that mistake, you can attack the system by modifying
the ciphertext. Depending on the details of their system, this could lead
to a full break of the system, such that you can violate the security
properties that they had intended to provide to their users.
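As a toy illustration of that attack (the “cipher” here is plain XOR against
a random keystream, standing in for any unauthenticated stream mode such as
CTR):

```python
import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"PAY  100 TO ALICE"
keystream = secrets.token_bytes(len(plaintext))
ciphertext = xor(plaintext, keystream)

# The attacker knows the message layout but not the key. XOR-ing the
# ciphertext bytes at the amount field with ("100" XOR "900") changes
# the decrypted amount -- and nothing detects the tampering.
delta = xor(b"100", b"900")
tampered = ciphertext[:5] + xor(ciphertext[5:8], delta) + ciphertext[8:]

print(xor(tampered, keystream))   # b'PAY  900 TO ALICE'
```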

Since this is such a common mistake, with such potentially bad
consequences, and because fixing it is not that easy (especially due to
timing and exception-oracle attacks against authentication schemes),
cryptographers have studied how to efficiently and securely integrate both
encryption and authentication into one package. The resulting schemes are
called “Authenticated Encryption” schemes.

In the years since cryptographers developed some good authenticated
encryption schemes, they've started thinking of them as a drop-in
replacement for normal old unauthenticated encryption schemes, and started
suggesting that everyone should use authenticated encryption schemes
instead of unauthenticated encryption schemes in all cases. There was a
recent move among cryptographers, spearheaded by the estimable Daniel J.
Bernstein, to collectively focus on developing new improved authenticated
encryption schemes. This would be a sort of community-wide collaboration,
now that the community-wide collaboration on secure hash functions—the
SHA-3 contest—is coming to an end.

Several modern cryptography libraries, including “Keyczar” and Daniel J.
Bernstein's “nacl”, try to make it easy for the programmer to use an
authenticated encryption mode and some of them make it difficult or
impossible to use an unauthenticated encryption mode.

When Brian Warner and I presented Tahoe-LAFS at the RSA Conference in 2010,
I was surprised and delighted when an audience member who approached me
afterward turned out to be Prof. Phil Rogaway, renowned cryptographer and
author of a very efficient authenticated encryption scheme (OCB mode). He
said something nice about our presentation and then asked why we didn't use
an authenticated encryption mode. Shortly before that conversation he had
published a very stimulating paper named “Practice-Oriented Provable
Security and the Social Construction of Cryptography”, but I didn't read it
until years later. In that fascinating and wide-ranging paper he opines,
among many other ideas, that authenticated encryption is one of “the most
useful abstraction boundaries”.

So, here's what I wish I had been quick-witted enough to say to him when we
met in 2010: authenticated encryption can't satisfy any of my use cases!

*Tahoe-LAFS access control semantics*

I'm one of the original and current designers of the Tahoe-LAFS secure
distributed filesystem. We started out, in 2006, by choosing the access
control semantics that we wanted to offer our users and that we knew how to
implement. Here's what we chose:

*There are two kinds of files: immutable and mutable. When you write a file
to the filesystem you can choose which kind of file it will be in the
filesystem. Immutable files can't be modified once they have been written.
A mutable file can be modified by someone with read-write access to it. A
user can have read-write access to a mutable file or read-only access to
it, or no access to it at all.*

*In addition to read-write access and read-only access, we implement a
third, more limited, form of access which is verify-only access. You can
grant someone the ability to check the integrity of your ciphertexts
without also
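This kind of graded access can be implemented by deriving each weaker capability one-way from a stronger one, for example by hashing. The following is an illustrative sketch with hypothetical labels, not the actual Tahoe-LAFS key-derivation scheme:

```python
import hashlib

def attenuate(cap: bytes, label: bytes) -> bytes:
    """Derive a weaker capability from a stronger one by hashing.

    Because the derivation is one-way, whoever holds the read-write
    capability can hand out read-only or verify-only access, but no
    holder of a weaker capability can climb back up.  (Illustrative
    only -- the real Tahoe-LAFS derivations are specified in its docs,
    and the labels here are hypothetical.)
    """
    return hashlib.sha256(label + b"|" + cap).digest()

write_cap = b"\x01" * 32                     # hypothetical root secret
read_cap = attenuate(write_cap, b"read")     # can read, cannot write
verify_cap = attenuate(read_cap, b"verify")  # can check integrity only
```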

[cryptography] what do you get when you combine Phil Zimmermann, Jon Callas, and a couple of ex-Navy SEALs?

2012-04-24 Thread Zooko Wilcox-O'Hearn
http://allthingsd.com/20120423/pgp-creator-phil-zimmerman-has-a-new-venture-called-silent-circle/

https://silentcircle.com/

Continually nowadays I think I'm living in one of the science fiction
novels of my youth. This one is by Neal Stephenson, I think.

Regards,

Zooko


Re: [cryptography] Doubts over necessity of SHA-3 cryptography standard

2012-04-13 Thread Zooko Wilcox-O'Hearn
 a
function more efficient than SHA-256 that is still secure, and perhaps
even possible to have a function more efficient than SHA-1 or even MD5
that is still secure. I guess it is going to be quite a few years
before we gain confidence in any such function, though, unfortunately.

To be clear, I'm not exactly recommending that you should *use* a
reduced-round SHA-256, a reduced-round SHA-3 finalist like Blake, or a
SHA-3 reject like Edon-R. I'm not saying you shouldn't use such a
thing either. What I'm saying is: their existence is reason to believe
that a secure hash function with this kind of efficiency could exist.

Regards,

Zooko

[¹] 
http://csrc.nist.gov/groups/ST/hash/sha-3/Round1/Feb2009/documents/EnRUPT_2009.pdf
[²] http://ehash.iaik.tugraz.at/wiki/The_SHA-3_Zoo
[³] http://bench.cr.yp.to/results-hash.html#h6dragon ; 32-bit ARM
h6dragon, 4096 byte input, worst quartile
[⁴] http://ehash.iaik.tugraz.at/wiki/Skein
[⁵] http://ehash.iaik.tugraz.at/wiki/BLAKE


[cryptography] workaround for length extension attacks (was: Doubts over necessity of SHA-3 cryptography standard)

2012-04-13 Thread Zooko Wilcox-O'Hearn
If you're using one of the pre-SHA-3-era secure hash functions that
are vulnerable to length-extension attacks (e.g. SHA-256), then a good
fix is the HASH_d technique suggested in Ferguson and Schneier's
Practical Cryptography book (whose new edition is Ferguson,
Schneier, and Kohno's Cryptography Engineering book).

HASH_d(x) = HASH(HASH(x))

That puts a stop to all length-extension attacks, and seems pretty
unlikely to introduce any other problems in a good hash function like
SHA-256.

I pretty much always use the HASH_d technique, and that way I don't
have to spend time figuring out what length-extension attacks can or
can't do to my designs.
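A minimal sketch of the HASH_d construction, instantiated here with SHA-256:

```python
import hashlib

def hash_d(data: bytes) -> bytes:
    """HASH_d(x) = HASH(HASH(x)), instantiated with SHA-256.

    The outer hash hides the Merkle-Damgard chaining state of the
    inner hash, so knowing hash_d(x) does not let an attacker compute
    hash_d(x || padding || suffix) by length extension.
    """
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()
```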

Of course, once you upgrade to a shiny new hash function with built-in
protection against length-extension attack, then you should drop the
HASH_d technique.

Regards,

Zooko


Re: [cryptography] workaround for length extension attacks (was: Doubts over necessity of SHA-3 cryptography standard)

2012-04-13 Thread Zooko Wilcox-O'Hearn
On Fri, Apr 13, 2012 at 9:50 AM, Marsh Ray ma...@extendedsubset.com wrote:

> But now SHA-2 takes a 50% performance hit on messages of 55 bytes and shorter.

Good point.

> So something like IPsec AH would see around a 66% loss in performance if its
> bottleneck were actually the authentication (estimating from a handy packet
> capture).

Is that actually its bottleneck? According to ¹, SHA-256 for short
messages (64 bytes) costs about 150 cycles per byte on ARM and around
50 cpb on x86_64. So if the HASH_d approach doubles the cost, that's
~16,000 cycles per packet instead of ~8,000 on ARM, and ~5,000 cycles
per packet instead of ~2,500 on x86_64. How many packets per second do
your traces call for?
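For a rough sense of scale, here is that arithmetic carried one step further, assuming (purely for illustration) a single hypothetical 1 GHz core and the doubled HASH_d per-packet costs above:

```python
# Packets per second achievable if hashing were the only work,
# using the assumed per-packet costs from the text (~16,000 cycles
# on ARM and ~5,000 on x86_64 with HASH_d) and an assumed 1 GHz core.
CLOCK_HZ = 1_000_000_000

for name, cycles_per_packet in [("ARM + HASH_d", 16_000),
                                ("x86_64 + HASH_d", 5_000)]:
    pps = CLOCK_HZ // cycles_per_packet
    print(f"{name}: ~{pps:,} packets/sec")
```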

But anyway, yes, I wouldn't hesitate to use any old
length-extension-vulnerable hash function like SHA-256 in HMAC! If
you're talking about using a different hash function in your HMAC in
your IPsec AH, what about using a different MAC entirely, like say
Poly1305-AES? :-)

Regards,

Zooko

¹ http://bench.cr.yp.to/results-hash.html


Re: [cryptography] workaround for length extension attacks

2012-04-13 Thread Zooko Wilcox-O'Hearn
On Fri, Apr 13, 2012 at 1:51 PM, Marsh Ray ma...@extendedsubset.com wrote:
> On 04/13/2012 02:38 PM, James A. Donald wrote:
>
>> To construct a case where length extension matters, one must
>> contrive a rather dreadful protocol.
>
> http://vnhacker.blogspot.com/2009/09/flickrs-api-signature-forgery.html

Yes, I think that's quite common. Web developers tasked with adding
authorization to requests seem to come up with tag = H(key | request)
more often than not. I guess one really good thing about SHA-3 is
that the next generation of those web developers, after SHA-2 is
removed from standard libraries, will accidentally have safe auth. :-)
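The vulnerable pattern and its standard fix can be sketched as follows (the key and request values are hypothetical):

```python
import hashlib
import hmac

key = b"server-secret-key"               # hypothetical shared secret
request = b"user=alice&action=read"      # hypothetical API request

# The common-but-broken pattern: tag = H(key || request).  With a
# Merkle-Damgard hash such as SHA-256, an attacker who knows this tag
# can forge a valid tag for request + padding + suffix without ever
# learning the key (a length-extension attack).
bad_tag = hashlib.sha256(key + request).hexdigest()

# The standard fix: HMAC never exposes the hash's internal chaining
# state, so it resists length extension even when built from SHA-1
# or SHA-256.
good_tag = hmac.new(key, request, hashlib.sha256).hexdigest()
```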

I really don't know when that will be, though. I think they currently
use SHA-1, and occasionally MD5, because those are the ones that they
have heard about and they are prominently documented in their standard
libraries. I suspect Linus Torvald's decision to use SHA-1 in git is
going to mean that those web developers choose SHA-1 for many, many
years to come.

Regards,

Zooko


[cryptography] Bitcoin-mining Botnets observed in the wild? (was: Re: Bitcoin in endgame

2012-03-28 Thread Zooko Wilcox-O'Hearn
.


Regards,

Zooko


[cryptography] announcing Tahoe-LAFS v1.8.3, fixing a security issue

2011-09-14 Thread Zooko O'Whielacronx
announcing Tahoe-LAFS v1.8.3, fixing a security issue

Dear People of the cryptography@randombit.net mailing list:

We found a vulnerability in Tahoe-LAFS (all versions from v1.3.0 to v1.8.2
inclusive) that might allow an attacker to delete files. This vulnerability
does not enable anyone to read file contents without authorization
(confidentiality), nor to change the contents of a file (integrity). How
exploitable this vulnerability is depends upon some details of how you use
Tahoe-LAFS. If you upgrade your Tahoe-LAFS storage server to v1.8.3, this
fixes the vulnerability.

We've written detailed docs about the issue and how to manage it in
the Known Issues document:

http://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/docs/known_issues.rst

I am sorry that we introduced this bug into Tahoe-LAFS and allowed it
to go undetected until now. We aim for a high standard of security and
reliability in Tahoe-LAFS, and we're not satisfied until our users are
safe from threats to their data.

We've been working with the packagers who maintain packages of
Tahoe-LAFS in various operating systems, so if you get your Tahoe-LAFS
through your operating system there may already be a fixed version
available:

http://tahoe-lafs.org/trac/tahoe-lafs/wiki/OSPackages

Please contact us through the tahoe-dev mailing list if you have
further questions.

Regards,

Zooko Wilcox-O'Hearn

ANNOUNCING Tahoe, the Least-Authority File System, v1.8.3

The Tahoe-LAFS team announces the immediate availability of version 1.8.3 of
Tahoe-LAFS, an extremely reliable distributed storage system. Get it here:

http://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/docs/quickstart.rst

Tahoe-LAFS is the first distributed storage system to offer
provider-independent security — meaning that not even the
operators of your storage servers can read or alter your data
without your consent. Here is the one-page explanation of its
unique security and fault-tolerance properties:

http://tahoe-lafs.org/source/tahoe/trunk/docs/about.html

The previous stable release of Tahoe-LAFS was v1.8.2, which was
released January 30, 2011 [1].

v1.8.3 is a stable bugfix release which fixes a security issue. See [2]
and the known_issues.rst file [3] for details.


WHAT IS IT GOOD FOR?

With Tahoe-LAFS, you distribute your filesystem across
multiple servers, and even if some of the servers fail or are
taken over by an attacker, the entire filesystem continues to
work correctly, and continues to preserve your privacy and
security. You can easily share specific files and directories
with other people.

In addition to the core storage system itself, volunteers
have built other projects on top of Tahoe-LAFS and have
integrated Tahoe-LAFS with existing systems, including
Windows, JavaScript, iPhone, Android, Hadoop, Flume, Django,
Puppet, bzr, mercurial, perforce, duplicity, TiddlyWiki, and
more. See the Related Projects page on the wiki [4].

We believe that strong cryptography, Free and Open Source
Software, erasure coding, and principled engineering practices
make Tahoe-LAFS safer than RAID, removable drive, tape,
on-line backup or cloud storage.

This software is developed under test-driven development, and
there are no known bugs or security flaws which would
compromise confidentiality or data integrity under recommended
use. (For all important issues that we are currently aware of
please see the known_issues.rst file [3].)


COMPATIBILITY

This release is compatible with the version 1 series of
Tahoe-LAFS. Clients from this release can write files and
directories in the format used by clients of all versions back
to v1.0 (which was released March 25, 2008). Clients from this
release can read files and directories produced by clients of
all versions since v1.0. Servers from this release can serve
clients of all versions back to v1.0 and clients from this
release can use servers of all versions back to v1.0.

This is the fourteenth release in the version 1 series. This
series of Tahoe-LAFS will be actively supported and maintained
for the foreseeable future, and future versions of Tahoe-LAFS
will retain the ability to read and write files compatible
with this series.


LICENCE

You may use this package under the GNU General Public License,
version 2 or, at your option, any later version. See the file
COPYING.GPL [5] for the terms of the GNU General Public
License, version 2.

You may use this package under the Transitive Grace Period
Public Licence, version 1 or, at your option, any later
version. (The Transitive Grace Period Public Licence has
requirements similar to the GPL except that it allows you to
delay for up to twelve months after you redistribute a derived
work before releasing the source code of your derived work.)
See the file COPYING.TGPPL.html [6] for the terms of the
Transitive Grace Period Public Licence, version 1.

(You may choose to use this package under the terms of either
licence, at your option.)


INSTALLATION

Tahoe-LAFS works on Linux, Mac OS X

Re: [cryptography] preventing protocol failings

2011-07-22 Thread Zooko O'Whielacronx
On Tue, Jul 12, 2011 at 5:25 PM, Marsh Ray ma...@extendedsubset.com wrote:

> Everyone here knows about the inherent security-functionality tradeoff. I
> think it's such a law of nature that any control must present at least some
> cost to the legitimate user in order to provide any effective security.
> However, we can sometimes greatly optimize this tradeoff and provide the best
> tools for admins to manage the system's point on it.

From http://www.hpl.hp.com/techreports/2009/HPL-2009-53.pdf :

“1. INTRODUCTION
Most people agree with the statement, ‘There is an inevitable tension
between usability and security.’ We don’t, so we set out to build a
useful tool to prove our point.”

> Hoping to find security for free somewhere is akin to looking for free
> energy. The search may be greatly educational or produce very useful
> related discoveries, but at the end of the day the laws of
> thermodynamics are likely to remain satisfied.

If they've done what they claim (which I find plausible), then how
could it be possible? Where does this free energy come from?

I think it comes from taking advantage of information which is already
present but which is just lying about unused by the security
mechanism: expressions of intent that the user makes but that some
security mechanisms ignore.

For example, if you send a file to someone, then there is no need for
your tools to interrupt your workflow with security-specific
questions, like prompting for a password or access code, popping up a
dialog that says “This might be insecure! Are you sure?”, or asking
you to specify a public key of your recipient. You've already
specified (as part of your *normal* workflow) what file to send and who to
send it to, and that information is sufficient for the security system to
figure out what to do. Likewise there is no need for the recipient of
the file to have her workflow interrupted by security issues.

Again, the point is that *you've already specified*. The human has
already communicated all of the necessary information to the computer.
Security tools that request extra steps are usually being deaf to what
the human has already told the computer. (Or else they are just doing
“CYA Security”, a.k.a. “Blame The Victim Security”, where if anything
goes wrong later they can say “Well, I popped up an ‘Are You Sure?’
dialog box, so what happened wasn't my fault!”.)

Okay, now I admit that once we have security tools that integrate into
user workflow and take advantage of the information that is already
present, *then* we'll still have some remaining hard problems about
fitting usability and security together.

Regards,

Zooko


Re: [cryptography] Is BitCoin a triple entry system?

2011-06-13 Thread Zooko O'Whielacronx
Also related, Eric Hughes posted about something he called Encrypted
Open Books on 1993-08-16. The idea was to allow an auditor to confirm
the correctness of the accounts without being able to see the details
of people's accounts.

Regards,

Zooko


Re: [cryptography] rolling hashes, EDC/ECC vs MAC/MIC, etc.

2011-05-21 Thread Zooko O'Whielacronx
Dear Nico Williams:

Thanks for the reference! Very cool.

What I would most want is for ZFS (and every other filesystem) to
maintain a Merkle Tree over the file data with a good secure hash.
Whenever a change to a file is made, the filesystem can update the
Merkle Tree with mere O(log N) work in the size of the file plus
O(N) work in the size of the change. For a modern filesystem like ZFS
which is already maintaining a checksum tree the *added* cost of
maintaining the secure hash Merkle Tree could be minimal.

Then, the filesystem should make this Merkle Tree available to
applications through a simple query.

This would enable applications—without needing any further
in-filesystem code—to perform a Merkle Tree sync, which would range
from noticeably more efficient to dramatically more efficient than
rsync or zfs send. :-)
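A canonical Merkle root over fixed-size chunks can be sketched as follows (an illustrative layout chosen for simplicity, not the tree shape ZFS actually uses):

```python
import hashlib

def merkle_root(chunks):
    """Compute a Merkle root over a list of file chunks.

    A filesystem maintaining this tree persistently can refresh the
    root after a one-chunk write by rehashing only the O(log N) nodes
    on the path from that leaf to the root; two machines can then
    compare whole files by exchanging a single 32-byte root.
    """
    if not chunks:
        return hashlib.sha256(b"").digest()
    level = [hashlib.sha256(c).digest() for c in chunks]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the odd node out
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

Note that the root only enables the fast sync described above if both sides compute the tree over the same canonical chunking, which is exactly the reproducibility concern raised in 2.b. below.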

Of course it is only more efficient because we're treating the
maintenance of the secure-hash Merkle Tree as free. There are two
senses in which this is legitimate and it is almost free:

1. Since the values get maintained persistently over the file's
lifetime then the total computation required is approximately O(N)
where N is the total size of all deltas that have been applied to the
file in its life. (Let's just drop the logarithmic part for now,
because see 2. below.)

Compare this to the cost of doing a fast, insecure CRC over the whole
file such as in rsync. The cost of that is O(N) * K where N is the
(then current) size of the file and K is the number of times you run
rsync on that file.

The extreme case is if the file hasn't changed. Then for the
application-level code to confirm that the file on this machine is the
same as the file on that machine, it merely has to ask the filesystem
for the root hash on each machine and transmit that root hash over the
network. This is optimally fast compared to rsync, and unlike zfs
send|recv it is optimally fast whenever the two files are identical
even if they have both changed since the last time they were synced.

2. Since the modern, sophisticated filesystem like ZFS is maintaining
a tree of checksums over the data *anyway* you can piggy-back this
computation onto that work, avoiding any extra seeks and minimizing
extra memory access.

In fact, ZFS itself can actually use SHA-256 for the checksum tree,
which would make it provide almost exactly what I want, except for:

2. a. From what I've read, nobody uses the SHA-256 configuration in
ZFS because it is too computationally expensive, so they use an
insecure checksum (fletcher2/4) instead.

2. b. I assume the shape of the resulting checksum tree is modified by
artifacts of the ZFS layout instead of being a simple canonical shape.
This is a show-stopper for this use case because if the same file data
exists on a different system, and some software on that system
computes a Merkle Tree over the data, it might come out with different
hashes than the ZFS checksum tree, thus eliminating all of the
performance benefits of this approach.

But, if ZFS could be modified to fix these problems or if a new
filesystem would add a feature of maintaining a canonical,
reproducible Merkle Tree, then it might be extremely useful.

Thanks to Brian Warner and Dan Shoutis for discussions about this idea.

Regards,

Zooko


Re: [cryptography] rolling hashes, EDC/ECC vs MAC/MIC, etc.

2011-05-20 Thread Zooko O'Whielacronx
On Fri, May 20, 2011 at 3:30 PM,
travis+ml-rbcryptogra...@subspacefield.org wrote:

> I wonder if A/V shouldn't use something similar?

What's A/V?

> I assume MD4 is an outdated choice - perhaps some cryppie needs to
> design a hash function that is specifically designed for a FIFO kind
> of window?  Maybe there is and I'm just out of the loop.

> Potentially another application is for metadata silvering on file
> systems like ZFS, where we want to keep an updated checksum for a
> file, to detect corruption, but still want to have, say, efficient
> writing to the file - can you support appending?  How about random access?

> Also, FEC defends against an unintelligent adversary; I wonder if we
> couldn't defend against stronger ones (MAC/MIC) efficiently and
> neutralize the unintelligent one (nature and errors) for free?  It
> seems a shame to tack two sets of metadata onto our data.

All of the above seems well suited to maintaining a Merkle Tree over
the file data with a secure hash.

Regards,

Zooko


Re: [cryptography] Point compression prior art?

2011-05-20 Thread Zooko O'Whielacronx
Dear Paul Crowley:

How about the Compact Representation, section 4.2, of RFC 6090:

http://www.rfc-editor.org/rfc/rfc6090.txt

Is that the same point compression that you were looking for?

Regards,

Zooko


Re: [cryptography] Merkle Signature Scheme is the most secure signature scheme possible for general-purpose use

2010-09-01 Thread Zooko O'Whielacronx
On Wed, Sep 1, 2010 at 2:55 PM, Ben Laurie b...@links.org wrote:

>> Therefore, you would end up hashing your messages with a
>> secure hash function to generate message representatives short
>> enough to sign.
>
> Way behind the curve here, but this argument seems incorrect. Merkle
> signatures rely on the properties of chained hash functions, whereas
> RSA, for example, only needs a single iteration of the hash function to
> be good.

All digital signatures, including RSA and including the hash-based
signatures that I am advocating, require a message representative
which is a small fixed-length thing, and since your message is an
arbitrarily large thing we need to use a compressing function, which
we do today with Merkle-Damgård chaining and in the future with SHA-3
(which will probably have some mechanism that looks a little bit like
a Merkle-Damgård chain if you squint at it just right).

A Merkle-Damgård chain is definitely relying on the properties of
chained inner compression functions, and several practical and
theoretical weaknesses of this reliance have been identified (length
extension, herding, multi-collisions, entropy-loss).

The Merkle Trees which are used in hash-based signatures don't seem
obviously weaker than normal linear hashes and indeed seem stronger in
at least some theoretical ways against collisions (they should not
suffer from entropy-loss, for example). In addition, using a full hash
function with initialization and finalization on larger inputs, instead
of an inner compression function on smaller inputs, is almost certainly
safer against preimage attacks.

Oh, but there's the rub! The security of the message-representative
depends on collision-resistance, but the security of the hash-based
signature depends only on pre-image resistance! This is a vast gulf
both practically and theoretically. Consider:

MD5: collisions: seconds on your laptop; pre-images: perhaps in a
hundred years, if we make more progress [1]

SHA-1: collisions: a year or two of great expense and effort;
pre-images: perhaps never unless we have a breakthrough

SHA-3-256: collisions: 2¹²⁸; pre-images: 2²⁵⁶
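The dependence on nothing more than preimage resistance can be made concrete with a Lamport one-time signature, the kind of primitive that sits at the leaves of a Merkle signature scheme. This is a toy sketch, not a production design; note that it still hashes the message down to a 256-bit representative first:

```python
import hashlib
import secrets

H = lambda data: hashlib.sha256(data).digest()

def lamport_keygen():
    # Secret key: 256 pairs of random 32-byte preimages.
    # Public key: the hash of each preimage.  Forging a signature on a
    # fresh message requires inverting H on a published hash -- a
    # preimage attack -- not merely finding a collision.
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)]
          for _ in range(256)]
    pk = [[H(pair[0]), H(pair[1])] for pair in sk]
    return sk, pk

def lamport_sign(sk, msg):
    digest = int.from_bytes(H(msg), "big")
    # Reveal one preimage per bit of the message representative.
    return [sk[i][(digest >> i) & 1] for i in range(256)]

def lamport_verify(pk, msg, sig):
    digest = int.from_bytes(H(msg), "big")
    return all(H(sig[i]) == pk[i][(digest >> i) & 1] for i in range(256))
```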


> Or, to put it another way, in order to show that a Merkle signature is
> at least as good as any other, then you'll first have to show that an
> iterated hash is at least as secure as a non-iterated hash (which seems
> like a hard problem, since it isn't).

I'm not sure that I agree with you that security of a hash function
used once on an arbitrarily large message is likely to be better than
security of a hash function used a few times iteratively on its own
outputs. But regardless of that, I think the fair comparison here is:

... show that an iterated hash is more likely to have preimage
resistance than a non-iterated hash is to have collision-resistance.

And I think it is quite clear that for any real hash function such as
MD5, SHA-1, Tiger, Ripemd, SHA-2, and the SHA-3 candidates that this
does hold!

What do you think of that argument?

Regards,

Zooko

[1] http://www.springerlink.com/content/d7pm142n58853467/


Re: [cryptography] 1280-Bit RSA

2010-07-17 Thread Zooko O'Whielacronx
Dan:

You didn't mention the option of switching to elliptic curves. A
256-bit elliptic curve is probably stronger than 2048-bit RSA [1]
while also being more efficient in every way except for CPU cost for
verifying signatures or encrypting [2].

I like the Brainpool curves, which come with a better demonstration
than the NIST curves that they were generated without any possible
back door [3].

Regards,

Zooko

[1] http://www.keylength.com/
[2] http://bench.cr.yp.to/results-sign.html
[3] 
http://www.ecc-brainpool.org/download/draft-lochter-pkix-brainpool-ecc-00.txt