Re: replay integrity

2003-07-10 Thread Zooko

 Ian Grigg wrote:

 So, some protocols don't need replay prevention
 from lower layers because they have sufficient
 checks built in.  This would apply to any protocols
 that have financial significance;  in general, no
 protocol should be without its own unique Ids.

I'll try to make this concrete.  My thesis is different from Ian's -- rather 
than saying that those apps need less than what TLS offers, I say that they 
need more!  (So that each app need no longer implement the added features 
itself.)

[Disclaimer: My understanding of SSL/TLS is incomplete.  Eric Rescorla's book 
is on my amazon wishlist.  Please be polite when correcting my errors, and 
I'll do the same for you.]

Replay prevention in SSL/TLS is related to the concept of sessions.  A given 
sequence of bytes can't be replayed within a given session, nor replayed in a 
different session, nor can a session itself be replayed in whole or in part.

Sounds good, right?

But suppose at the higher layer you have a message which you wish to send, and 
you wish to ensure that the message is processed by the recipient at most 
once, and you wish to keep trying to send the message even if you suffer a 
network failure.

For example: <transfer><o>ut account_id=100876</o><i>n account_id=975231</i>
<c>urrency=USD</c><a>mount=1000.00</a></transfer>

Assume that the user has delivered the instructions to you, through clicking 
on a GUI, sending you a signed snail mail letter, or whatever, and now it is 
your job to convince the computer at the other end of the TLS connection -- 
your counterparty -- to implement this transaction.

Now if you send this message, and you get a response from your counterparty 
saying <transfer_status>completed</transfer_status>, then you are finished.

But suppose you send this message, and then the TCP connection breaks and the 
TLS session ends?

You don't know if your counterparty got the message, much less if he was able 
to implement the transaction on his end.  If you open a new TLS connection and 
send the message again, you might inadvertently transfer *two* thousand 
dollars instead of one.

Now the state of the art in apps like these, as Ian has pointed out, is to 
implement replay protection at the app level, for example by adding a 
transaction sequence number to the message.
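
To make the retry problem concrete, here is a minimal sketch of the
transaction-sequence-number approach.  All names (process_transfer, the
txn id format, the in-memory ledger) are invented for illustration, not
taken from any real protocol:

```python
# App-level replay protection ("retriability") via a client-chosen
# transaction id.  The counterparty remembers which ids it has already
# processed, so a retried message is a harmless no-op.

processed = {}                                  # txn_id -> remembered result
balances = {"100876": 5000.00, "975231": 0.00}  # toy account ledger

def process_transfer(txn_id, out_acct, in_acct, amount):
    # A retried message carries the *same* txn_id, so the counterparty
    # detects the duplicate and returns the remembered result instead
    # of moving the money a second time.
    if txn_id in processed:
        return processed[txn_id]
    balances[out_acct] -= amount
    balances[in_acct] += amount
    processed[txn_id] = "completed"
    return processed[txn_id]

process_transfer("txn-0001", "100876", "975231", 1000.00)
# The TCP connection breaks; the sender reconnects and retries:
process_transfer("txn-0001", "100876", "975231", 1000.00)
assert balances["100876"] == 4000.00  # one thousand moved, not two
```

The sticking point mentioned below -- state management -- shows up here
as the `processed` table, which one side must store for as long as
retries are possible.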

To me, this sounds like an opportunity for another layer, which provides a 
general solution to this problem.  (I would expect Ian's SOX protocol to be 
one such design.)

Of course, not all problems are amenable to a general, reusable solution.
Not even when, as in this case, almost all applications independently 
re-invent a special-purpose solution.

The particular sticking point in this problem seems to be state management -- 
you have to be careful that one side or the other isn't stuck with excessive 
requirements to store information in order to complete the protocol.

As Ian mentioned, apps can have several other possible requirements in 
addition to this one (which I call "retriability").  Consider a situation 
where the message has to be printed out and stuck in a folder for a lawyer to 
review.  If the integrity guarantee is encoded into a long-term, multi-packet 
TLS stream, then this guarantee cannot easily be stuck into the folder.  If 
the integrity guarantee appears as a MAC or digital signature specific to that 
message, then perhaps it is reasonable for it to be printed out in the header 
of the message.

Now to be clear, I'm not saying that TLS ought to provide this kind of 
functionality, nor am I even asserting that a generic layer *could* provide 
functionality sufficient for these sorts of apps, but I am saying that the 
notion of replay-prevention and integrity which is implemented in TLS is 
insufficient for these sorts of apps, and that I'm interested in attempts to 
offer a higher-level abstraction.

Regards,

Zooko

http://zooko.com/
 ^-- under re-construction: some new stuff, some broken links

P.S.  I am aware that TLS encompasses the notion of stored or cached sessions, 
originally conceived for performance reasons.  Perhaps a higher-level 
abstraction could be built by requiring each party to use that facility in a 
specific way...

P.P.S.  A lot of the app-specific solutions that get deployed, such as the 
"add a sequence number" one mentioned in the example above, *depend* upon 
TLS's session-specific replay-prevention for security.  Ian suggested that 
this was a good test of the cryptographic robustness of a higher-layer protocol.


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Announcing httpsy://, a YURL scheme

2003-07-15 Thread Zooko

Tyler should probably reference SFS on his HTTPSY pages.  Here's a good paper 
focussed specifically on this issue.

http://citeseer.nj.nec.com/mazieres99separating.html

Although I haven't looked closely at HTTPSY yet, I'm pretty sure that it 
simply applies to the Web the same notion that SFS applies to remote 
filesystems.

It is an excellent idea.

Regards,

Zooko




Re: Announcing httpsy://, a YURL scheme

2003-07-16 Thread Zooko

 Ed Gerck wrote:

 IF Alice is trusted by Bob to introduce ONLY authentic parties, yes. And that is the
 problem.

Cryptography can't prevent Alice from telling lies about the web page that she 
is showing to Bob.  But it can prevent Bob from seeing a page different from 
the one that Alice meant for him to see.

Regards,

Zooko

http://zooko.com/



Re: Humorous anti-SSL PR

2004-07-28 Thread Zooko
Eric:
On 2004, Jul 15, at 17:55, Eric Rescorla wrote:
There are advantages to message-oriented
security (cf. S-HTTP) but this doesn't seem like a very convincing
one.
Could you please elaborate on this, or refer me to a document which 
expresses your views?  I just read [1] in search of such ideas, but I 
have not yet read your book on TLS.

Thanks,
Zooko
[1] http://www.terisa.com/shttp/current.txt


Re: no surprise - Sun fails to open source the crypto part of Java

2007-05-14 Thread zooko

 Ian G wrote:

 Third option:  the architecture of Sun's Java crypto 
 framework is based on motives that should have been avoided, 
 and have come back to bite (again).
 
 The crypto framework in Java as designed by Sun was built on 
 motives (nefarious, warped or just plain stupid, I don't 
 know) such as
 
 * the need or desire to separate out encryption from 
 authentication,
...
 * some notion that crypto code should be (must be) a 
 competitive market,
...
 * circular dependency 
...
 * Being dependent on PKI style certificates for signing, 
...

The most important motivation at the time was to avoid the risk of Java being
export-controlled as crypto.  The theory within Sun was that "crypto with a
hole" would be free from export controls but also be useful for programmers.

Regards,

Zooko



Re: Fingerprint Firefox Plugin?

2007-10-24 Thread zooko

On Oct 23, 2007, at 12:46 AM, Arcane Jill wrote:

Can anyone tell me... is there a Firefox plugin which allows one to  
view the fingerprint of the SSL certificate of each page you visit  
(e.g. in the status bar or address bar or something)?


Better still if it can learn which ones you trust, but just being  
able to view them without having to jump through hoops would be a  
good start.


Suppose you did have a convenient way to display the SSL certificate  
for every site whenever you loaded a page from the site.  You  
probably wouldn't want to memorize all the certificates for the  
secure sites that you care about, so you might instead write some  
notes on a piece of paper next to your computer, for example writing  
down an SSL certificate and then next to it writing "bank", and then  
writing down another one and then next to it writing "mail", and so on.


Then, whenever you load a page, you would look at the SSL certificate  
that is linked to that page and glance at your notepad to see which  
description it maps to.  If you are looking at a random web site that  
you've never seen before, and the certificate doesn't appear on your  
notes, then no big deal.  If you are looking at a page that appears  
to belong to your bank, and the certificate that came with that page  
doesn't appear on your notes, then this is a big red flag!  Likewise,  
if you are looking at a page that appears to belong to your bank, and  
the certificate appears on your notes, but the note next to it  
doesn't say "bank", then this is a red flag, too!  For example, it  
might be the certificate of your mail service, which appears on your  
paper along with the note "mail".  Or it might just be a certificate  
that appears on your paper along with the note "joke site from Harry".


Note that a system which classified certificates into "trusted" or  
"untrusted" categories might give you the green flag even when a  
certificate that you trust to serve up good jokes is serving up  
something that appears to be your bank account.
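
The notepad procedure above can be sketched as a simple lookup table.
The fingerprints and site names below are made up (real certificate
fingerprints are of course much longer):

```python
# Toy model of the handwritten-notes scheme: map certificate
# fingerprints to short petnames, and compare the petname against
# what the page claims to be.

petnames = {
    "AB:CD:12:34": "bank",
    "EF:56:78:90": "mail",
    "11:22:33:44": "joke site from Harry",
}

def check_page(claimed_identity, fingerprint):
    petname = petnames.get(fingerprint)
    if petname is None:
        # A site you've never noted down: no big deal by itself.
        return "unknown certificate"
    if petname != claimed_identity:
        # The cert is on your notes, but under a different name: red flag!
        return "RED FLAG: certificate is noted as %r, page claims to be %r" % (
            petname, claimed_identity)
    return "ok: %s" % petname

print(check_page("bank", "AB:CD:12:34"))  # -> ok: bank
print(check_page("bank", "EF:56:78:90"))  # the "mail" cert posing as the bank
```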


So, this business of writing down certificates and mapping them to  
short hand-written notes is what the Pet Name Toolbar automates for you:


https://addons.mozilla.org/en-US/firefox/addon/957

Please let us know how it works for you.

Regards,

Zooko




Re: crypto class design

2007-12-20 Thread zooko
On Dec 17, 2007, at 9:38 AM, [EMAIL PROTECTED] wrote:



So... supposing I was going to design a crypto library for use within
a financial organization, which mostly deals with credit card numbers
and bank accounts, and wanted to create an API for use by developers,
does anyone have any advice on it?


I'm curious if your crypto library is to be implemented by use of  
another one, perhaps an open-source one that I am familiar with.   
Nowadays I prefer Crypto++ [1].


Regards,

Zooko

[1] http://cryptopp.com/



Re: [tahoe-dev] Surely M$ can patent this process?

2008-01-27 Thread zooko
[adding Cc: p2p-hackers and cryptography mailing lists as explained  
below; Please trim your follow-ups as appropriate.]


Dear Gary Sumner:


On Jan 26, 2008, at 9:44 PM, Gary Sumner wrote:
I was researching on the weekend and came across Tahoe…very  
exciting and can’t wait to delve in and understand more in detail.


I was reading over Plank’s work around erasure encoding and that  
led me to Tahoe. One thing that I was really looking for was to be  
able to encrypt the data before storing it  and so was very excited  
when I read your architecture doc and it says “When a file is to be  
added to the grid, it is first encrypted using a key that is  
derived from the hash of the file itself.” This seems perfectly  
logical and natural way to apply this technique. However,  
researching also led me to a patent M$ has been granted on this  
exact process:


Encryption Systems and Methods for Identifying and Coalescing  
Identical Objects Encrypted with Different Keys -
http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&co1=AND&d=PTXT&s1=6983365.PN.&OS=PN/6983365&RS=PN/6983365




I haven't read that patent, so I can't say whether it applies to what  
allmydata.org Tahoe does or not.  By default, for immutable files  
(but not for mutable files or directories), Tahoe sets the encryption  
key equal to the tagged hash of the file contents.  (A tagged hash is  
simply a hash of the data prefixed by a tag to distinguish it from  
other uses of hash functions).  You don't have to use Tahoe this way,  
however:



The encryption before storing is critical for my application.

If, for any reason, you don't want to let your encryption key be  
produced from the secure hash of the file contents, then Tahoe can  
instead use a randomly-generated encryption key.  The drawback of  
doing it this way -- with a random encryption key -- is that you lose  
the deduplication feature: two people who independently store the  
same file contents will use twice as much space, instead of each of  
them having a pointer to a single stored copy.  The advantages of  
doing it with a random encryption key are that you get a stronger  
guarantee about the confidentiality of the contents of your files,  
and it is faster as you don't need to process the whole file (in  
order to generate the encryption key) before beginning to upload the  
file.


Surely there must be prior art on this technique to refute this  
patent?




That's an interesting question, and I'm carbon-copying the  
p2p-hackers and cryptography mailing lists to ask if anyone knows.  I  
learned about this technique from Jim McCoy and Doug Barnes in their  
design of Mojo Nation.  I don't remember whether this technique was  
mentioned in Jim McCoy's personal communication of Mojo Nation to me  
in the summer of 1998, but it was definitely present in the design  
when I started working for Jim and Doug on Mojo Nation in 1999, and  
when Mojo Nation was first announced to the world at DefCon in July  
2000 [1, 2].  I don't know if Jim came up with the idea ex nihilo or  
was exposed to it in the swirling soup of ideas that we lived in at  
the time: cypherpunks / Electric Communities (which had many ideas  
gleaned from Xanadu) / Financial Cryptography / etc..


I remember reading about the newly announced Freenet project in 2000  
and being surprised at how many similarities its design had to our  
unannounced Mojo Nation project.  The influential Freenet paper [3]  
was published in July, 2000 -- one month too late to count as prior  
art for that patent, which was filed May 2000.  However, that paper  
was based on Ian Clarke's master's thesis, which was published in  
1999.  Let's see...  Ah, there it is: [4].  Hm, no, it does not seem to  
contain the notion that the 2000 Freenet paper would popularize as  
"Content Hash Keys".


I've also just now re-read The Eternity Service (Anderson, 1996) [5],  
and it, like Clarke 1999, omits details of encryption.


It's an interesting puzzle of intellectual history.  The idea  
certainly seems to have been in the air, as both Mojo Nation and  
Freenet were working on it before the May 2000 patent submission by  
Douceur et al., but Mojo Nation and Freenet each published the idea  
shortly after May 2000.  According to my limited understanding of  
patent law, this means that they don't count as prior art on that  
patent.


Regards,

Zooko

[1] http://www.mccullagh.org/image/950-12/jim-mccoy-mojonation.html
[2] http://web.archive.org/web/20001118214000/http://www.mojonation.net/docs/technical_overview.shtml

[3] http://citeseer.ist.psu.edu/420356.html
[4] http://citeseer.ist.psu.edu/380453.html
[5] http://citeseer.ist.psu.edu/anderson96eternity.html


announcing allmydata.org Tahoe v0.8

2008-02-21 Thread zooko
 reports, suggestions, demands, and money (employing several
allmydata.org Tahoe hackers and allowing them to spend part of their
work time on the next-generation, free-software project).  We are
eternally grateful!


Zooko O'Whielacronx
on behalf of the allmydata.org team
February 15, 2008
Boulder, Colorado, USA


[1] http://allmydata.org/trac/tahoe/browser/relnotes.txt?rev=1805
[2] http://allmydata.org/tahoe-figleaf/current/
[3] http://allmydata.org/trac/tahoe/wiki/Security
[4] http://allmydata.org/trac/tahoe/wiki/UseCases
[5] http://allmydata.org/source/tahoe/trunk/docs/install.html
[6] http://allmydata.org/cgi-bin/mailman/listinfo/tahoe-dev
[7] http://allmydata.org/trac/tahoe/roadmap
[8] http://allmydata.org/trac/tahoe/browser/CREDITS?rev=1804
[9] http://allmydata.org/trac/tahoe/wiki/Dev
[10] http://allmydata.com



announcing allmydata.org Tahoe v0.9

2008-03-15 Thread zooko
 on the next-generation, free-software project).  We are
eternally grateful!


Zooko O'Whielacronx
on behalf of the allmydata.org team
March 13, 2008
Boulder, Colorado, USA

[1] http://allmydata.org/trac/tahoe/browser/relnotes.txt?rev=2183
[2] http://allmydata.org/tahoe-figleaf/current/
[3] http://allmydata.org/trac/tahoe/wiki/Security
[4] http://allmydata.org/trac/tahoe/wiki/UseCases
[5] http://allmydata.org/source/tahoe/trunk/docs/install.html
[6] http://allmydata.org/cgi-bin/mailman/listinfo/tahoe-dev
[7] http://allmydata.org/trac/tahoe/roadmap
[8] http://allmydata.org/trac/tahoe/browser/CREDITS?rev=2307
[9] http://allmydata.org/trac/tahoe/wiki/Dev
[10] http://allmydata.com

P.S.  Someday, people might search for some of the earliest occurrences of the
name LAUGFS, which stands for Least AUthority Grid FileSystem.  Hello,
searchers from the future!



convergent encryption reconsidered

2008-03-21 Thread zooko
(This is an ASCII rendering of
https://zooko.com/convergent_encryption_reconsidered.html .)


 Convergent Encryption Reconsidered

   Written by Zooko Wilcox-O'Hearn, documenting ideas due to Drew
   Perttula, Brian Warner, and Zooko Wilcox-O'Hearn, 2008-03-20.

  Abstract

   Convergent encryption is already known to suffer from a
   confirmation-of-a-file attack. We show that it suffers also
   from a learn-partial-information attack. The conditions under
   which this attack works cannot be predicted by a computer
   program nor by an unsophisticated user. We propose a solution
   which trades away part of the space savings benefits of
   convergent encryption in order to prevent this new attack. Our
   defense also prevents the old attack. The issues are presented
   in the context of the Tahoe Least-AUthority Grid File System, a
   secure decentralized filesystem.

  Background -- The Confirmation-Of-A-File Attack

   Convergent encryption, also known as "content hash keying", was
   first mentioned by John Pettitt on the cypherpunks list in 1996
   [1], was used by Freenet [2] and Mojo Nation [3] in 2000, and
   was analyzed in a technical report by John Douceur et al. in
   2002 [4]. Today it is used by at least Freenet, GNUnet [5],
   flud [6], and the Tahoe Least-AUthority Grid File System [7].
   The remainder of this note will focus on the Tahoe LAUGFS
   filesystem. The use of convergent encryption in other systems
   may have different consequences than described here, because of
   the different use cases or added defenses that those systems
   may have.

   Convergent encryption is simply encrypting a file using a
   symmetric encryption key which is the secure hash of the
   plaintext of the file.
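
The definition above can be stated in a few lines of code.  This is a
sketch, not Tahoe's actual implementation; the choice of SHA-256 and the
tag string are illustrative:

```python
import hashlib

def convergent_key(plaintext, tag=b"example-tag-v1"):
    # Tagged hash: the tag separates this use of the hash function from
    # other uses.  The resulting key is then fed to a symmetric cipher.
    return hashlib.sha256(tag + plaintext).digest()

# Convergence: identical plaintexts yield identical keys (and hence,
# with a deterministic cipher mode, identical ciphertexts -- which is
# what makes deduplication work, and what enables the attacks below).
assert convergent_key(b"same contents") == convergent_key(b"same contents")
assert convergent_key(b"same contents") != convergent_key(b"other contents")
```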

   Security engineers have always appreciated that convergent
   encryption allows an attacker to perform a
   confirmation-of-a-file attack -- if the attacker already knows
   the full plaintext of a file, then they can check whether a
   given user has a copy of that file.

   Whether this confirmation-of-a-file attack is a security or
   privacy problem depends on the situation. If you want to store
   banned books or political pamphlets without attracting the
   attention of an oppressive government, or store pirated copies
   of music or movies without attracting the attention of
   copyright holders, then the confirmation-of-a-file attack is
   potentially a critical problem. On the other hand, if the
   sensitive parts of your data are secret personal things like
   your bank account number, passwords, and so forth, then it
   isn't a problem. Or so I -- and as far as I know everyone else
   -- thought until March 16, 2008.

   I had planned to inform users of the current version of Tahoe
   -- version 0.9.0 -- about the confirmation-of-a-file attack by
   adding a FAQ entry:

 Q: Can anyone else see the contents of files that I have not
 shared?

 A: The files that you store are encrypted so that nobody can
 see a file's contents (unless of course you intentionally
 share the file with them). However, if the file that you
 store is something that someone has already seen, such as if
 it is a file that you downloaded from the Internet in the
 first place, then they can recognize it as being the same
 file when you store it, even though it is encrypted. So
 basically people can tell which files you are storing if they
  are publicly known files, but they can't learn anything
 about your own personal files.

   However, four days ago (on March 16, 2008) Drew Perttula and
   Brian Warner came up with an attack that shows that the above
   FAQ is wrong.

  The Learn-Partial-Information Attack

   They extended the confirmation-of-a-file attack into the
   learn-partial-information attack. In this new attack, the
   attacker learns some information from the file. This is done by
   trying possible values for unknown parts of a file and then
   checking whether the result matches the observed ciphertext.
   For example, if you store a document such as a form letter from
   your bank, which contains a few pages of boilerplate legal text
   plus a few important parts, such as your bank account number
   and password, then an attacker who knows the boilerplate might
   be able to learn your account number and password.
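
To illustrate, here is the brute-force loop of the
learn-partial-information attack, with an invented document format and a
deliberately tiny search space (a real attack would enumerate, say,
account numbers or passwords):

```python
import hashlib

def convergent_fingerprint(plaintext):
    # Stands in for whatever the attacker can observe of the stored
    # ciphertext -- with plain convergent encryption it is derivable
    # from the plaintext alone.
    return hashlib.sha256(plaintext).hexdigest()

# The attacker knows the boilerplate; only the 4-digit PIN is unknown.
boilerplate = "Dear customer, your PIN is %04d. Sincerely, The Bank."
victim_doc = boilerplate % 1234
observed = convergent_fingerprint(victim_doc.encode())

# Try every possible value of the unknown field until one matches:
recovered = next(pin for pin in range(10000)
                 if convergent_fingerprint((boilerplate % pin).encode())
                 == observed)
assert recovered == 1234
```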

   For another example, if you use Tahoe to backup your entire
   home directory, or your entire filesystem, then the attacker
   gains the opportunity to try to learn partial information about
   various files which are of predictable format but have
   sensitive fields in them, such as .my.cnf (MySQL configuration
   files), .htpasswd, .cvspass, .netrc, web browser cookie files,
   etc.. In some cases, files such as these will contain too much
   entropy from the perspective of the attacker to allow this
   attack, but in other cases the attacker will know, or be able
   to guess, most of the fields, and brute force

Fwd: [tahoe-dev] [p2p-hackers] convergent encryption reconsidered

2008-03-21 Thread zooko

Dear Perry Metzger:

Jim McCoy asked me to forward this, as he is not subscribed to  
cryptography@metzdowd.com, so his posting bounced.


Regards,

Zooko


Begin forwarded message:


From: Jim McCoy [EMAIL PROTECTED]
Date: March 20, 2008 10:56:58 PM MDT
To: theory and practice of decentralized computer networks p2p- 
[EMAIL PROTECTED]

Cc: [EMAIL PROTECTED], Cryptography cryptography@metzdowd.com
Subject: Re: [tahoe-dev] [p2p-hackers] convergent encryption  
reconsidered

Reply-To: [EMAIL PROTECTED]


On Mar 20, 2008, at 12:42 PM, zooko wrote:


  Security engineers have always appreciated that convergent
  encryption allows an attacker to perform a
  confirmation-of-a-file attack -- if the attacker already knows
  the full plaintext of a file, then they can check whether a
  given user has a copy of that file.


The truth of this depends on implementation details, and is an
assertion that cannot be said to cover all or even most of the
potential use-cases for this technique.  This property only holds if
it is possible for the attacker to link a selected ciphertext/file to
a user.  Systems which use convergent encryption to populate a shared
storage pool _might_ have this property, but it is by no means a
certainty; if a system is implemented correctly it is not necessary
for users to expose their list of files in order to maintain this
shared storage space.


  basically people can tell which files you are storing if they
  are publicly known files, but they can't learn anything
  about your own personal files.


It sounds like you have a design problem.  If nodes that participate
in the system can distinguish between publication and
_re_-publication/replication (or whatever you want to call the random sharing of
arbitrary data blocks for the purposes of increasing file
availability) then you have a problem.  If these two activities are
indistinguishable then an observer knows you have some blocks to a
file but should not be able to distinguish between you publishing the
blocks and the act of re-distribution to increase block availability.


 The Learn-Partial-Information Attack [...]


A better title for this would be "Chosen-Plaintext Attack on
Convergent Encryption", since what you are talking about is really a
chosen-plaintext attack.  To be a bit more specific, this is really
just a version of a standard dictionary attack.  The solution to this
problem is to look at similar systems that suffered from dictionary
attacks and see what solutions were created to solve the problem.

The most widely known and studied version of this is the old crypt()/
passwd problem.


  For another example, if you use Tahoe to backup your entire
  home directory, or your entire filesystem, then the attacker
  gains the opportunity to try to learn partial information about
  various files which are of predictable format but have
  sensitive fields in them, such as .my.cnf (MySQL configuration
  files), .htpasswd, .cvspass, .netrc, web browser cookie files,
  etc..


The problems with this imagined attack are twofold.  I will use your
Tahoe example for my explanations because I have a passing familiarity
with the architecture.  The first problem is isolating the original
ciphertext in the pool of storage.  If a file is encrypted using
convergent encryption and then run through an error-correction
mechanism to generate a number of shares that make up the file an
attacker first needs to be able to isolate these shares to generate
the original ciphertext.  FEC decoding speeds may be reasonably fast,
but they are not without some cost.  If the storage pool is
sufficiently large and you are doing your job to limit the ability of
an attacker to see which blocks are linked to the same FEC operation
then the computational complexity of this attack is significantly
higher than you suggest.

Assuming an all-seeing oracle who can watch every bit sent into the
storage pool will get us around this first problem, but it does raise
the bar for potential attackers.

The second problem an attacker now faces is deciding what sort of
format a file might have, what the low-entropy content might be, and
then filling in values for these unknowns.  If your block size is
small (and I mean really small in the context of the sort of systems
we are talking about) there might be only a few kilobits of entropy in
the first couple of blocks of a file so either a rainbow-table attack
on known file formats or a dedicated effort to grab a specific file
might be possible, but this is by no means certain.  Increase your
block size and this problem becomes much harder for the attacker.


 Defense Against Both Attacks

  [...]
  However, we can do better than that by creating a secret value
  and mixing that value into the per-file encryption key (so
  instead of symmetric_key = H(plaintext), you have symmetric_key
  = H(added_secret, plaintext), where "," denotes an unambiguous
  encoding of both operands). This idea is due to Brian Warner
  and Drew Perttula

Re: [p2p-hackers] convergent encryption reconsidered

2008-03-26 Thread zooko

Jim:

Thanks for your detailed response on the convergent encryption issue.

In this post, I'll just focus on one very interesting question that  
you raise: When do either of these attacks on convergent encryption  
apply?.


In my original note I was thinking about the allmydata.org Tahoe  
Least Authority Filesystem.  In this post I will attempt to follow  
your lead in widening the scope.  In particular GNUnet and Freenet  
are currently active projects that use convergent encryption.  The  
learn-partial-information attack would apply to either system if a  
user were using it with files that she intended not to divulge, but  
that were susceptible to being brute-forced in this way by an attacker.



On Mar 20, 2008, at 10:56 PM, Jim McCoy wrote:


On Mar 20, 2008, at 12:42 PM, zooko wrote:


  Security engineers have always appreciated that convergent
  encryption allows an attacker to perform a
  confirmation-of-a-file attack -- if the attacker already knows
  the full plaintext of a file, then they can check whether a
  given user has a copy of that file.


The truth of this depends on implementation details, and is an
assertion that cannot be said to cover all or even most of the
potential use-cases for this technique.


You're right.  I was writing the above in the context of Tahoe,  
where, as Brian Warner explained, we do not attempt to hide the  
linkage between users and ciphertexts.  What I wrote above doesn't  
apply in the general case.


However, there is a very general argument about the applicability of  
these attacks, which is: "Why encrypt?".


If your system has strong anonymity properties, preventing people  
from learning which files are associated with which users, then you  
can just store the files in plaintext.


Ah, but of course you don't want to do that, because even without  
being linked to users, files may contain sensitive information that  
the users didn't intend to disclose.  But if the files contain such  
information, then it might be acquired by the learn-partial- 
information attack.


When designing such a system, you should ask yourself "Why  
encrypt?".  You encrypt in order to conceal the plaintext from  
someone, but if you use convergent encryption, and they can use the  
learn-partial-information attack, then you fail to conceal the  
plaintext from them.


You should use traditional convergent encryption (without an added  
secret) if:


1.  You want to encrypt the plaintext, and
2.  You want convergence, and
3.  You don't mind exposing the existence of that file (ignoring the  
confirmation-of-a-file attack), and
4.  You are willing to bet that the file has entropy from the  
attacker's perspective which is greater than his computational  
capacity (defeating the learn-partial-information attack).


You should use convergent encryption with an added secret (as  
recently implemented for the Tahoe Least Authority Filesystem) if:


1.  You want to encrypt the plaintext, and
2.  You want convergence within the set of people who know the added  
secret, and
3.  You don't mind exposing the existence of that file to people in  
that set, and
4.  You are willing to disclose the file to everyone in that set, or  
else you think that people in that set to whom you do not wish to  
disclose the file will not try the learn-partial-information attack,  
or if they do that the file has entropy from their perspective which  
is greater than their computational capacity.


I guess the property of unlinkability between user and file addresses  
issue 3 in the above list -- the existence of a file is a much less  
sensitive bit of information than the existence of a file in a  
particular user's collection.


It could also affect issue 4 by increasing the entropy the file has  
from an attacker's perspective.  If he knows that the ciphertext  
belongs to you then he can try filling in the fields with information  
that he knows about you.  Without that linkage, he has to try filling  
in the fields with information selected from what he knows about all  
users.  But hiding this linkage doesn't actually help in the case the  
attacker is already using everything he knows about all users to  
attack all files in parallel.


Note that using an added secret does help in the parallel attack  
case, because (just like salting passwords) it breaks the space of  
targets up into separate spaces which can't all be attacked with the  
same computation.
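
That salting analogy can be sketched as follows.  The hash choice and
the length-prefix encoding are illustrative, not Tahoe's exact
construction:

```python
import hashlib

def keyed_convergent_key(added_secret, plaintext):
    # key = H(added_secret, plaintext): an unambiguous encoding of both
    # operands, here via a length prefix on the secret so that no two
    # (secret, plaintext) pairs encode to the same byte string.
    encoded = len(added_secret).to_bytes(8, "big") + added_secret + plaintext
    return hashlib.sha256(encoded).digest()

data = b"same file contents"
k_alice = keyed_convergent_key(b"alice-group-secret", data)
k_bob = keyed_convergent_key(b"bob-group-secret", data)

# Like salted passwords: one precomputed dictionary no longer attacks
# everyone, because keys differ across added secrets...
assert k_alice != k_bob
# ...while convergence (deduplication) still holds within one group:
assert k_alice == keyed_convergent_key(b"alice-group-secret", data)
```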




The first problem is isolating the original
ciphertext in the pool of storage.  If a file is encrypted using
convergent encryption and then run through an error-correction
mechanism to generate a number of shares that make up the file, an
attacker first needs to be able to isolate these shares to generate
the original ciphertext.  FEC decoding speeds may be reasonably fast,
but they are not without some cost.  If the storage pool is
sufficiently large and you are doing your job to limit the ability of
an attacker to see which blocks

announcing allmydata.org Tahoe, the Least-Authority Filesystem, v1.0

2008-03-26 Thread zooko

ANNOUNCING Allmydata.org Tahoe, the Least-Authority Filesystem, v1.0

We are pleased to announce the release of version 1.0 of the Tahoe
Least Authority Filesystem.

The Tahoe Least Authority Filesystem is a secure, decentralized,
fault-tolerant filesystem.  All of the source code is available under
a Free Software, Open Source licence (or two).

This filesystem is encrypted and distributed over multiple peers in
such a way that it continues to function even when some of the peers are
unavailable, malfunctioning, or malicious.

A one-page explanation of the security and fault-tolerance properties
that it offers is visible at:

http://allmydata.org/source/tahoe/trunk/docs/about.html


We believe that this version of Tahoe is stable enough to rely on as a
permanent store of valuable data.  The version 1 branch of Tahoe will
be actively supported and maintained for the foreseeable future, and
future versions of Tahoe will retain the ability to read files and
directories produced by Tahoe v1.0 for the foreseeable future.

This release of Tahoe will form the basis of the new consumer backup
product from Allmydata, Inc. -- http://allmydata.com .


This is the successor to Allmydata.org Tahoe Least Authority
Filesystem v0.9, which was released March 13, 2008 [1].  Since v0.9
we've made the following changes:

 * Use an added secret for convergent encryption to better protect the
   confidentiality of immutable files, and remove the publicly
   readable hash of the plaintext (ticket #365).

 * Add a mkdir-p feature to the WAPI (ticket #357).

 * Many updates to the Windows installer and Windows filesystem
   integration.


Tahoe v1.0 produces files which can't be read by older versions of
Tahoe, although files produced by Tahoe >= 0.8 can be read by Tahoe
1.0.  The reason that older versions of Tahoe can't read files
produced by Tahoe 1.0 is that those older versions require the file to
come with a publicly-readable hash of the plaintext, but exposing
such a hash is a confidentiality leak, so Tahoe 1.0 does not do it.


WHAT IS IT GOOD FOR?

With Tahoe, you can distribute your filesystem across a set of
computers, such that if some of the computers fail or turn out to be
malicious, the filesystem continues to work from the remaining
computers.  You can also share your files with other users, using a
strongly encrypted, capability-based access control scheme.

Because this software is the product of less than a year and a half of
active development, we do not categorically recommend it for the
storage of data which is extremely confidential or precious.  However,
we believe that the combination of erasure coding, strong encryption,
and careful engineering makes the use of this software a much safer
alternative than common alternatives, such as RAID, or traditional
backup onto a remote server, removable drive, or tape.

This software comes with extensive unit tests [2], and there are no
known security flaws which would compromise confidentiality or data
integrity.  (For all currently known security issues please see the
Security web page: [3].)

This release of Tahoe is suitable for the "friendnet" use case [4] --
it is easy to create a filesystem spread over the computers of you and
your friends so that you can share files and disk space with one
another.


LICENCE

You may use this package under the GNU General Public License, version
2 or, at your option, any later version.  See the file COPYING.GPL
for the terms of the GNU General Public License, version 2.

You may use this package under the Transitive Grace Period Public
Licence, version 1.0.  The Transitive Grace Period Public Licence says
that you may distribute proprietary derived works of Tahoe without
releasing the source code of that derived work for up to twelve
months, after which time you are obligated to release the source code
of the derived work under the Transitive Grace Period Public Licence.
See the file COPYING.TGPPL.html for the terms of the Transitive
Grace Period Public Licence, version 1.0.

(You may choose to use this package under the terms of either licence,
at your option.)


INSTALLATION

Tahoe works on Linux, Mac OS X, Windows, Cygwin, and Solaris.  For
installation instructions please see docs/install.html [5].


HACKING AND COMMUNITY

Please join us on the mailing list [6] to discuss uses of Tahoe.
Patches that extend and improve Tahoe are gratefully accepted -- the
RoadMap page [7] shows the next improvements that we plan to make and
CREDITS [8] lists the names of people who've contributed to the
project.  The wiki Dev page [9] contains resources for hackers.


SPONSORSHIP

Tahoe is sponsored by Allmydata, Inc. [10], a provider of consumer
backup services.  Allmydata, Inc. contributes hardware, software,
ideas, bug reports, suggestions, demands, and money (employing several
allmydata.org Tahoe hackers and instructing them to spend part of
their work time on this free-software project).  We are eternally
grateful!


Zooko O'Whielacronx

convergent encryption reconsidered -- salting and key-strengthening

2008-03-31 Thread zooko
, making a brute-force/dictionary  
attack infeasible.  Key strengthening allows you to choose an amount  
of wasted CPU that you are willing to impose on your users during  
normal use, and multiply the attacker's costs by exactly that  
amount.  If the attacker has 2^64 computational capacity, and the  
users are willing to waste 2^10 extra computrons on each file access,  
then the attacker's effective capacity is reduced to 2^54.


The trade-off is actually worse than it appears since the attacker is  
attacking multiple users at once (in traditional convergent  
encryption, he is attacking *all* users at once), so he gains an  
economy of scale, and can profitably invest in specialized tools,  
even specialized hardware such as a COPACOBANA [1].  At the very  
least he can profitably devote many CPU cores to churning out new  
guesses 24/7, while for normal users it is not profitable to allocate  
a 24/7 CPU load to strengthening their keys.  The reason for this  
disparity is that the attacker gets to attack everyone at once for  
the same cost as attacking only one target, where the defenders have  
to pay each for his own defense.



Next, one could imagine a variant of Tahoe's convergent encryption  
with added secret which adds key strengthening:


s = random()
k = H^1000(s, p)
c = E(k, p)

This would likewise be costly to normal users, but moreover it is not  
needed because the s = random() part of the algorithm locks out all  
attackers except those with whom s is shared from mounting such an  
attack at all.
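The three-line pseudocode above can be made concrete in Python (a toy sketch only: SHA-256 stands in for H, the iteration count is the 1000 from the pseudocode, and as the text explains, Tahoe does not actually implement this variant because the random s already locks out outside attackers):

```python
import hashlib
import os

def strengthen(s: bytes, p: bytes, iterations: int = 1000) -> bytes:
    """k = H^iterations(s, p): iterate the hash so that every
    brute-force guess costs the attacker `iterations` hash
    evaluations instead of one."""
    k = hashlib.sha256(s + p).digest()
    for _ in range(iterations - 1):
        k = hashlib.sha256(k).digest()
    return k

s = os.urandom(32)          # s = random()
p = b"some file plaintext"  # the plaintext to be encrypted
k = strengthen(s, p)        # k = H^1000(s, p)
# c = E(k, p) would then encrypt p under k (cipher omitted here).
assert len(k) == 32
```

Note that the same key comes out only when both s and p are the same, which is why sharing s is what defines the set of people who converge.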


Thank you for your comments on this issue.  If you have further  
ideas, especially as would be relevant to the Tahoe Least-Authority  
Filesystem, I would love to hear them.


Regards,

Zooko O'Whielacronx

[1] http://copacobana.org/
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: [p2p-hackers] convergent encryption reconsidered -- salting and key-strengthening

2008-04-02 Thread zooko

On Mar 31, 2008, at 4:47 AM, Ivan Krstić wrote:


Tahoe doesn't run this service either. I can't use it to make guesses
at any of the values you mentioned. I can use it to make guesses at
whole documents incorporating such values, which is in most cases a
highly non-trivial distinction.


The way that I would phrase this is that convergent encryption  
exposes whatever data is put into it, in whatever batch-size is put  
into it, to brute-force/dictionary attacks.
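Such an attack is easy to demonstrate in a few lines of Python.  This is a toy model: the plaintext-derived hash stands in for the convergent key/storage index, and the document template is invented for illustration.  It shows the learn-partial-information case, where the attacker already knows everything about the document except one low-entropy field:

```python
import hashlib

def convergent_id(plaintext: bytes) -> bytes:
    # In traditional convergent encryption the key (and hence the
    # ciphertext/storage index) is derived from the plaintext alone,
    # so it doubles as a recognizer for that exact plaintext.
    return hashlib.sha256(plaintext).digest()

# Victim stores a document whose only unknown field is low-entropy.
victim_doc = b"SSN: 078-05-1120"
observed = convergent_id(victim_doc)

def attack(observed_id: bytes):
    # Learn-partial-information: the attacker knows the template and
    # enumerates the small space of values for the unknown field.
    for serial in range(10000):
        guess = b"SSN: 078-05-%04d" % serial
        if convergent_id(guess) == observed_id:
            return guess
    return None

assert attack(observed) == victim_doc
```

With a per-user added secret unknown to the attacker, computing `convergent_id` for a guess is no longer possible, which is exactly the defense discussed in this thread.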


If the data that you put in is unguessable, then you needn't worry  
about these attacks.  (Likewise, as Ben Laurie reminds us, using  
strong passwords is a sufficient defense against these attacks on  
passwords.)


You correctly emphasize that typical convergent encryption services  
(which operate on files, or, in the case of GNUnet, on 32 KiB  
blocks), and typical uses of those services (which typically store  
files as produced by apps written for traditional filesystems),  
batch together data in such a way that the aggregate is more likely  
to be unguessable than if each field were stored separately.  I don't  
disagree with this observation.


I am often reminded of Niels Ferguson's and Bruce Schneier's dictum,  
in the excellent _Practical_Cryptography_, that security needs to be  
a *local* property.  They argue that one should be able to tell  
whether a component is secure by inspecting that component itself,  
rather than by reasoning about interactions between that component  
and other components.


Concretely, convergent encryption with a per-user added secret, as  
currently implemented in Tahoe, can be shown to guarantee  
confidentiality of the data, regardless of what the data is.


Traditional convergent encryption can be shown to offer  
confidentiality only with the proviso that the data put into it  
conform to certain criteria -- criteria that cannot be verified by a  
computer nor by a user who is not a skilled security expert.


You may argue that the chance that a user would put non-conformant
data into it is small.  I don't necessarily disagree, although before  
I became willing to bet on it I would require more quantitative  
investigation.


However, arguing that component A is secure as long as component B  
behaves a certain way, and that component B is very likely to behave  
that way, is a different sort of argument than arguing that component  
A is secure regardless of the behavior of component B.


For one thing, the behavior of component B may change in the future.   
Concretely, people may write apps that store data in Tahoe in a way  
that previous apps didn't.  Those people will almost certainly be  
completely unaware of the nature of convergent encryption and brute- 
force/dictionary attacks.


Now obviously making the security properties of a system modular in  
this way might impose a performance cost.  In the case of Tahoe, that  
cost is the loss of universal convergence.  Allmydata.com analyzed  
the space savings due to convergence among our current customers and  
found that it was around 1% savings.  We (allmydata.com) intend to  
monitor the potential savings of universal convergence in an on-going  
way, and if it turns out that there are substantial benefits to be  
gained then I will revisit this issue and perhaps I will be forced to  
rely on an argument of the other form -- that users are unlikely to  
use it in an unsafe way.


Thank you again for your thoughtful comments on this issue.

Regards,

Zooko O'Whielacronx



OpenSparc -- the open source chip (except for the crypto parts)

2008-05-01 Thread zooko

On Apr 24, 2008, at 7:58 PM, Jacob Appelbaum wrote:


If we could convince (this is the hard part) companies to publish what
they think their chips should look like, we'd have a starting point.


I would think that it also helps if a company publishes the source  
code and complete verification tools for their chips, such as Sun has  
done with the Ultrasparc T2 under the GPL.


I was excited about this, and also about the fact that the T2 came  
with extremely efficient crypto implementations, until I read this  
bizarre comment in the news:


"When the UltraSPARC T2 specifications are released Tuesday, Mehta
said the company plans on releasing most of the source code,
including the designs for the logic gate circuitry and the test
suites. The one part of the source code that Sun can not release are
the algorithms approved by the National Security Agency as part of
the chip's cryptographic accelerations units."


http://www.eweek.com/c/a/Linux-and-Open-Source/Sun-Brings-Niagara-2- 
Chip-to-Open-Source/


I investigated and sure enough the crypto parts of the T2 have all
been stubbed out of the source (all of them, not just "algorithms
approved by the NSA", whatever that means).


I sent e-mails inquiring about this to two journalists (the author of  
that article -- Scott Ferguson -- and noted crypto/security/libertarian
gadfly Declan McCullagh) and three Sun employees, including Shrenik  
Mehta (quoted above), the open sparc community support e-mail  
address, and the Sun open source ombudsman, Simon Phipps.  None of  
them ever wrote back.


This experience rather dampened my enthusiasm about relying on T2  
hardware as a higher-assurance, but still pretty commodified, crypto  
implementation.


Regards,

Zooko



Re: The perils of security tools

2008-05-26 Thread zooko

On May 24, 2008, at 9:18 PM, Steven M. Bellovin wrote:


I believe that all open source Unix-like systems have /dev/random
and /dev/urandom; Solaris does as well.


By the way, Solaris is an open source Unix-like system nowadays.  ;-)

Regards,

Zooko



Why doesn't Sun release the crypto module of the OpenSPARC? Crypto export restrictions!

2008-06-11 Thread zooko

Dear people of the cryptography mailing list:

I received a note from Sridhar Vajapey, head of the Sun OpenSPARC  
programme, which releases a complete modern CPU under the GPL.   
Except that it isn't complete -- the parts that do AES, SHA-1 and  
SHA-2, and public key crypto acceleration are all mysteriously  
omitted from the released source [1].  I have previously posted about  
this issue on this list [2].


I inquired about this with Sridhar Vajapey, and he wrote "US export
control regulations prevent Sun from opensourcing the crypto portion
of N2."  (N2 is the development code-name for the most recent
OpenSPARC -- its product name is T2.)


Appended is my reply.  If anyone on this list knows more about the  
relevant export regulations, please share.


Regards,

Zooko

[1] http://www.opensparc.net/opensparc-t2/downloads.html
[2] http://www.mail-archive.com/cryptography@metzdowd.com/msg09090.html


From: [EMAIL PROTECTED]
	Subject: 	Re: Please contact me about open source of the crypto  
modules in T2

Date:   June 8, 2008 3:07:02 PM PDT
To: Sridhar Vajapey
Cc: Shrenik Mehta, Roberta Pokigo, Simon Phipps

Dear Sridhar Vajapey:

Thank you for the prompt reply.  Having participated in the struggle  
in the 1990's to make crypto freely available and to end the export  
restrictions, and having thought that we won, I am saddened to find  
out that this is why Sun hasn't open sourced that component.


So far, I have failed to understand why the current US crypto export  
regime (see survey here [1] -- be sure to follow the timeline as the  
laws have been relaxed many times over the last decade) doesn't  
permit Sun to post the source code of the crypto components of the  
T2.  It would appear to me that that source code falls under the  
rubric of publically available crypto source code, as described  
here [2], which would mean that Sun need only send an e-mail to the  
right address giving them the URL of the source code in order to  
satisfy the law.  On the other hand if the source code for building  
chips doesn't count as source code, then presumably it would count  
as mass-market crypto which means that Sun need only do slightly  
more paperwork in order to gain such approval.


If Sun applied for approval of GPL'ed crypto under such a regulation  
and was *denied* by BIS then I would really like to know why.


Another guess, and please don't take this the wrong way, is that NSA  
baloneyed you into *thinking* that you couldn't, or shouldn't,  
release the crypto components when legally you can.  (I have personal  
knowledge of two such extra-legal attempts by NSA to deter crypto  
proliferation in the 1990's -- once with Netscape and once with Cisco.)


Oh, in fact this leads me to another question:  Even in the (in my  
humble opinion unlikely) case that Sun is disallowed from exporting  
the source of the crypto modules to foreign countries, there is  
certainly no law which would constrain Sun from sharing that source  
with US persons within the US.  I originally became aware of this  
issue as a potential customer who was interested in the T2, rather  
than as an activist.  I am a US citizen residing in the US, and there  
is certainly no law which would preclude Sun from giving me that  
source under the GPL.  So, please do.  You can just attach it to your  
reply.  ;-)


Thanks again.  Adding cc: Simon Phipps (the Open Source Guy at  
Sun), as I have previously corresponded with him on this topic.


Regards,

Zooko Wilcox-O'Hearn

[1] http://rechten.uvt.nl/koops/cryptolaw/cls2.htm#us_1
[2] http://www.bis.doc.gov/encryption/default.htm



ANNOUNCING Allmydata.org Tahoe, the Least-Authority Filesystem, v1.1

2008-06-11 Thread zooko

ANNOUNCING Allmydata.org Tahoe, the Least-Authority Filesystem, v1.1

We are pleased to announce the release of version 1.1 of the Tahoe
Least Authority Filesystem.

The Tahoe Least Authority Filesystem is a secure, decentralized,
fault-tolerant filesystem.  All of the source code is available under
a Free Software, Open Source licence (or two).

This filesystem is encrypted and distributed over multiple peers in
such a way that it continues to function even when some of the peers are
unavailable, malfunctioning, or malicious.

A one-page explanation of the security and fault-tolerance properties
that it offers is visible at:

http://allmydata.org/source/tahoe/trunk/docs/about.html


This is the successor to Allmydata.org Tahoe Least Authority
Filesystem v1.0, which was released March 25, 2008 [1].  This release
fixes several serious issues in Tahoe v1.0, and improves the user
interfaces.  See the known_issues.txt file [2] and the NEWS file [3]
for details.


COMPATIBILITY

The version 1 branch of Tahoe is used as the basis of the consumer
backup product from Allmydata, Inc. -- http://allmydata.com .

Tahoe v1.1 is fully compatible with Tahoe v1.0.  v1.1 clients produce
files which can be read by v1.0 clients.  v1.1 clients can read files
produced by clients of all versions >= v0.8.  v1.1 servers can serve
v1.0 clients and v1.1 clients can use v1.0 servers.

This is the second release in the version 1 series.  We believe that
this version of Tahoe is stable enough to rely on as a permanent store
of valuable data.  The version 1 branch of Tahoe will be actively
supported and maintained for the foreseeable future, and future
versions of Tahoe will retain the ability to read files and
directories produced by Tahoe v1 for the foreseeable future.


WHAT IS IT GOOD FOR?

With Tahoe, you can distribute your filesystem across a set of
computers, such that if some of the computers fail or turn out to be
malicious, the filesystem continues to work from the remaining
computers.  You can also share your files with other users, using a
cryptographic capability-based access control scheme.

Because this software is the product of less than two years of active
development, we do not categorically recommend it for the storage of
data which is extremely confidential or precious.  However, we believe
that the combination of erasure coding, strong encryption, and careful
engineering makes the use of this software a much safer alternative
than common alternatives, such as RAID, or traditional backup onto a
remote server, removable drive, or tape.

This software comes with extensive unit tests [4], and there are no
known security flaws which would compromise confidentiality or data
integrity.  (For all currently known issues please see the
known_issues.txt file [2].)

This release of Tahoe is suitable for the "friendnet" use case [5] --
it is easy to create a filesystem spread over the computers of you and
your friends so that you can share files and disk space with one
another.


LICENCE

You may use this package under the GNU General Public License, version
2 or, at your option, any later version.  See the file COPYING.GPL [6]
for the terms of the GNU General Public License, version 2.

You may use this package under the Transitive Grace Period Public
Licence, version 1.0.  The Transitive Grace Period Public Licence says
that you may distribute proprietary derived works of Tahoe without
releasing the source code of that derived work for up to twelve
months, after which time you are obligated to release the source code
of the derived work under the Transitive Grace Period Public Licence.
See the file COPYING.TGPPL.html [7] for the terms of the Transitive
Grace Period Public Licence, version 1.0.

(You may choose to use this package under the terms of either licence,
at your option.)


INSTALLATION

Tahoe works on Linux, Mac OS X, Windows, Cygwin, and Solaris.  For
installation instructions please see docs/install.html [8].


HACKING AND COMMUNITY

Please join us on the mailing list [9] to discuss uses of Tahoe.
Patches that extend and improve Tahoe are gratefully accepted -- the
RoadMap page [10] shows the next improvements that we plan to make and
CREDITS [11] lists the names of people who've contributed to the
project.  The wiki Dev page [12] contains resources for hackers.


SPONSORSHIP

Tahoe is sponsored by Allmydata, Inc. [13], a provider of consumer
backup services.  Allmydata, Inc. contributes hardware, software,
ideas, bug reports, suggestions, demands, and money (employing several
allmydata.org Tahoe hackers and instructing them to spend part of
their work time on this free-software project).  We are eternally
grateful!


Zooko O'Whielacronx
on behalf of the allmydata.org team
June 11, 2008
San Francisco, California, USA

[1] http://allmydata.org/trac/tahoe/browser/relnotes.txt?rev=2348
[2] http://allmydata.org/trac/tahoe/browser/docs/known_issues.txt
[3] http://allmydata.org/trac/tahoe/browser/docs/NEWS
[4

Re: Why doesn't Sun release the crypto module of the OpenSPARC?

2008-06-13 Thread zooko

On Jun 12, 2008, at 4:35 PM, David G. Koontz wrote:


There's the aspect of competition.


I've also wondered if a reason they didn't release it is because  
they bought

the 'IP' from someone.


Those are good guesses, David, and I guessed similar things myself  
and inquired of various Sun folks if this was the real reason.   
Nobody could give me any definite answer, however, until Sridhar  
Vajapey wrote:


"US export control regulations prevent Sun from opensourcing the
crypto portion of N2."


This is consistent with other public statements such as the comment  
that originally set me wondering about this in this press piece:


"When the UltraSPARC T2 specifications are released Tuesday, Mehta
said the company plans on releasing most of the source code,
including the designs for the logic gate circuitry and the test
suites. The one part of the source code that Sun can not release are
the algorithms approved by the National Security Agency as part of
the chip's cryptographic accelerations units."


http://www.eweek.com/c/a/Linux-and-Open-Source/Sun-Brings-Niagara-2- 
Chip-to-Open-Source/


Also, I've been watching Sun carefully for a couple of years now, and  
the top leadership is really fanatical about open source.  It would  
be inconsistent with their current pattern of behavior to withhold a  
component from GPL release for reason of competitive advantage.


My best guess remains that NSA or some such shadowy agency bamboozled  
them into thinking that it would be illegal to release it, or  
threatened them with unfortunate coincidences if they went ahead, or  
persuaded them that GPL'ing it would aid terrorists and cause the  
needless deaths of innocents.


Regards,

Zooko



Re: Why doesn't Sun release the crypto module of the OpenSPARC?

2008-06-29 Thread zooko

On Jun 26, 2008, at 6:55 PM, David G. Koontz wrote:


[Moderator's note: this seems to be much more about the open source
 wars and such than about crypto and security. I'm not going to
 forward replies on this topic that don't specifically address
 security issues -- those who were not interested in the original
 thread may want to skip this message, too. --Perry]



The high-order bit here is that the reason Sun has not open sourced  
the crypto module of the Sparc T2 along with all the other modules is  
the US government's export restrictions and their extra-legal  
implicit threats.  I've received another e-mail from a Sun employee  
stating that crypto export restrictions are the issue and that Sun  
management feels that it is too risky to defy the government's  
pressure because the government has the power to do billions of  
dollars in damage to the company by temporarily suspending their  
export licences for their whole suite of products.


My conclusions are:

1.  We didn't exactly win the free-crypto struggle after all (see Ian  
Grigg's and Sameer Parekh's comments [1, 2]), and


2.  I'm going to keep designing my security systems to be optimized  
for software crypto and not to rely on hardware acceleration.  In  
particular, that means that I can continue to consider the Tiger hash  
(faster in software but not available in commodity hardware) to be  
faster than the SHA-256 hash (slower in software but available in  
hardware in the Sparc T2 and probably other commodity products).   
Likewise newfangled ciphers like Salsa20 and EnRUPT will be  
considered by me to be faster than AES (because they are faster in  
software) rather than slower (because AES might be built into the  
commodity hardware).


Note that it would also be a reasonable stance to rely on hardware  
implementations of crypto even though there are not commodity open  
source hardware implementations.  The beginning of this thread was  
the question of how to weigh the threat of hardware backdoors, and  
what countermeasures we can use to gain assurance that we're not  
vulnerable to hardware backdoors.  I'm not saying that having the  
source code for your hardware is either necessary or sufficient to  
protect yourself from that threat, but it might help, and I currently  
think it is a better strategy to design around the assumptions of  
software crypto.


Regards,

Zooko

[1] https://financialcryptography.com/mt/archives/001064.html
[2] http://www.creativedestruction.com/archives/000937.html



Re: how bad is IPETEE?

2008-07-16 Thread zooko

On Jul 15, 2008, at 16:33, Leichter, Jerry wrote:


The goal is
to use some form of opportunistic encryption to make as much
Internet traffic as possible encrypted as quickly as possible -
which puts all kinds of constraints on a solution,


Oh, then they should learn about Adam Langley's Obfuscated TCP:

http://code.google.com/p/obstcp/

One of the design constraints for Obfuscated TCP was that an
Obfuscated TCP connection is required to take zero additional round
trips to set up and use, compared to a normal TCP connection.  Way to
go, Adam!


Regards,

Zooko



ANNOUNCING the Hack Tahoe! contest

2008-07-19 Thread zooko

Folks:

This contest is inspired by Sameer Parekh's Hack Netscape! contest  
in the fall of 1995.


It is already eliciting some really good security insights from smart  
people.


Regards,

Zooko


 ANNOUNCING the Hack Tahoe! contest

http://hacktahoe.org

Tahoe, the Least-Authority Filesystem [1], is a secure, decentralized
filesystem.  It is developed as a Free Software, Open Source project.

The Least-Authority Filesystem offers security and fault-tolerance
properties far greater than those of other distributed filesystems --
in addition to being protected against external attackers, users of
Tahoe are protected from the servers themselves, even if some of the
servers are malicious, and they are protected from other users, even
though they can choose to share specific files or directories with
specific users.

Security is nothing without usability, and to that end Tahoe
integrates cleanly with the World Wide Web using the principles of
REST [2], and it provides a simple and flexible method of sharing
access to your files (by sharing the URL of that file, using the
principles of Capability Security [3]).

We have created and deployed an implementation of the Least-Authority
Filesystem -- Tahoe v1.1 -- which we believe provides these strong
security properties.  However, we know that there is no substitute for
peer review, and so we are challenging the hackers of the world to
prove us wrong.  If you find a major security flaw in the design of
the Least-Authority Filesystem, or in the implementation of Tahoe,
then you win a customized t-shirt with your exploit and a big "Thank
you" from us printed on the front.  Also, you will be entered into the
Hall of Fame on http://hacktahoe.org .

Two people who discovered security flaws in earlier designs and helped
us to fix them have been retroactively declared as the -2nd and -1st
winners of the Hack Tahoe! contest.  Explanations of the security
flaws that they discovered, how we fixed them, and pictures of them
with their customized t-shirts are on the http://hacktahoe.org web
site.

If you want to be the 1st winner of the Hack Tahoe! contest, you'll
have to find a security design flaw that we overlooked, or an
implementation mistake that you can exploit. The metric of success is
that if you discover anything which compels us to change Tahoe and to
alert current users about the issue, then your discovery is worthy of
a customized t-shirt.

Other than that anything goes, because one of the first rules of
security is that you can win by breaking the rules.  People are
already relying on Tahoe to store their files safely and privately, so
if there is any way in which Tahoe is endangering their data, we want
to learn about it as soon as possible.

To get started, see the description on http://hacktahoe.org of what
security properties Tahoe is supposed to provide.  That web site has
news, a live Tahoe storage grid which you can play with, example
targets you can attack, the Hall of Fame, detailed design notes, and
full source code.

Thanks, and good luck!

Regards,

Zooko O'Whielacronx, on behalf of the allmydata.org team

[1] http://allmydata.org
[2] http://en.wikipedia.org/wiki/Representational_State_Transfer
[3] http://erights.org/talks/index.html



ANNOUNCING Allmydata.org Tahoe, the Least-Authority Filesystem, v1.2

2008-07-21 Thread zooko

Dear people of the Cryptography mailing list:

The Hack Tahoe! contest (http://hacktahoe.org ) has already led a
security researcher to spot a flaw in our crypto design.  This
release fixes that flaw.


Regards,

Zooko


ANNOUNCING Allmydata.org Tahoe, the Least-Authority Filesystem, v1.2

We are pleased to announce the release of version 1.2.0 of the Tahoe
Least Authority Filesystem.

The Tahoe Least Authority Filesystem is a secure, decentralized,
fault-tolerant filesystem.  All of the source code is available under
a Free Software, Open Source licence (or two).

This filesystem is encrypted and distributed over multiple peers in
such a way that it continues to function even when some of the peers are
unavailable, malfunctioning, or malicious.

A one-page explanation of the security and fault-tolerance properties
that it offers is visible at:

http://allmydata.org/source/tahoe/trunk/docs/about.html


This is the successor to Allmydata.org Tahoe Least Authority
Filesystem v1.1, which was released June 11, 2008 [1].  This release
fixes a security issue in Tahoe v1.1, fixes a few small issues in the
web interface, adds a "check health" operation for mutable files, and
adds logging/operations/deployment improvements.

See the known_issues.txt file [2] and the NEWS file [3] for details.


COMPATIBILITY

The version 1 branch of Tahoe is used as the basis of the consumer
backup product from Allmydata, Inc. -- http://allmydata.com .

Tahoe v1.2 is fully compatible with Tahoe v1.0.  v1.2 clients produce
files which can be read by v1.0 clients.  v1.2 clients can read files
produced by clients of all versions >= v0.8.  v1.2 servers can serve
v1.0 clients and v1.2 clients can use v1.0 servers.

This is the third release in the version 1 series.  We believe that
this version of Tahoe is stable enough to rely on as a permanent store
of valuable data.  The version 1 branch of Tahoe will be actively
supported and maintained for the foreseeable future, and future
versions of Tahoe will retain the ability to read files and
directories produced by Tahoe v1 for the foreseeable future.


WHAT IS IT GOOD FOR?

With Tahoe, you can distribute your filesystem across a set of
computers, such that if some of the computers fail or turn out to be
malicious, the filesystem continues to work from the remaining
computers.  You can also share your files with other users, using a
cryptographic capability-based access control scheme.

Because this software is the product of less than two years of active
development, we do not categorically recommend it for the storage of
data which is extremely confidential or precious.  However, we believe
that the combination of erasure coding, strong encryption, and careful
engineering make Tahoe safer than common alternatives, such as RAID,
or traditional backup onto a remote server, removable drive, or tape.

This software comes with extensive unit tests [4], and there are no
known security flaws which would compromise confidentiality or data
integrity.  (For all currently known issues please see the
known_issues.txt file [2].)

This release of Tahoe is suitable for the "friendnet" use case [5] --
it is easy to create a filesystem spread over the computers of you and
your friends so that you can share disk space and share files.


LICENCE

You may use this package under the GNU General Public License, version
2 or, at your option, any later version.  See the file COPYING.GPL
[6] for the terms of the GNU General Public License, version 2.

You may use this package under the Transitive Grace Period Public
Licence, version 1.0.  The Transitive Grace Period Public Licence says
that you may distribute proprietary derived works of Tahoe without
releasing the source code of that derived work for up to twelve
months, after which time you are obligated to release the source code
of the derived work under the Transitive Grace Period Public
Licence. See the file COPYING.TGPPL.html [7] for the terms of the
Transitive Grace Period Public Licence, version 1.0.

(You may choose to use this package under the terms of either licence,
at your option.)


INSTALLATION

Tahoe works on Linux, Mac OS X, Windows, Cygwin, and Solaris.  For
installation instructions please see docs/install.html [8].


HACKING AND COMMUNITY

Please join us on the mailing list [9] to discuss uses of Tahoe.
Patches that extend and improve Tahoe are gratefully accepted -- the
RoadMap page [10] shows the next improvements that we plan to make and
CREDITS [11] lists the names of people who've contributed to the
project.  The wiki Dev page [12] contains resources for hackers.


SPONSORSHIP

Tahoe is sponsored by Allmydata, Inc. [13], a provider of commercial
backup services.  Allmydata, Inc. contributes hardware, software,
ideas, bug reports, suggestions, demands, and money (employing several
allmydata.org Tahoe hackers and instructing them to spend part of
their work time on this free-software project).  Also they distribute
customized t-shirts just

multicore hash functions (was: 5x speedup for AES using SSE5?)

2008-08-25 Thread zooko

Hello Peter Gutmann.

I'm working on a contribution to the SHA-3 process, and I've been  
using exactly the sort of abstraction that you describe -- counting  
one computation of a hash compression function as a unit of work  
which could be computed concurrently by some sort of parallel computer.


I vaguely think that once I get this level of analysis done, I should  
add some terms to show how the velocity of data into the computer and  
from core to core is not infinite.


I certainly think that I should code up some actual implementations  
and benchmark them.  However, I don't have a machine available with  
lots of cores -- I'm considering requesting of Sun.com that they lend  
me a T2.  (Despite my earlier declaration to Sun that I had lost  
interest in their stupid architecture since they wouldn't release the  
source to the crypto module.)


Anyway, if you have a better way to think about parallelism of hash  
functions, I'm all ears.


Thanks,

Zooko
---
http://allmydata.org -- Tahoe, the Least-Authority Filesystem
http://allmydata.com -- back up all your files for $5/month



Re: ADMIN: no money politics, please

2008-11-08 Thread zooko
Hey folks: you are welcome to discuss money politics over at the p2p- 
hackers mailing list:


http://lists.zooko.com/mailman/listinfo/p2p-hackers

I'm extremely interested in the subject myself, having taken part in  
two notable failed attempts to deploy Chaumian digital cash and  
currently being involved in a project that might lead to a third  
attempt.


Regards,

Zooko
---
http://allmydata.org -- Tahoe, the Least-Authority Filesystem
http://allmydata.com -- back up all your files for $10/month



ANNOUNCING allmydata.org Tahoe, the Least-Authority Filesystem, v1.3

2009-02-14 Thread zooko

Folks:

We make some strong security claims about this distributed storage
system (I guess it's called "Cloud Storage" now):


"This filesystem is encrypted and distributed over multiple peers in
such a way that it continues to function even when some of the peers are
unavailable, malfunctioning, or malicious."


Such ambitious security goals benefit greatly from public criticism  
and review, so please kick the tires and let us know what you think.


Regards,

Zooko

ANNOUNCING allmydata.org Tahoe, the Least-Authority Filesystem, v1.3

We are pleased to announce the release of version 1.3.0 of Tahoe, the
Least Authority Filesystem.

Tahoe-LAFS is a secure, decentralized, fault-tolerant filesystem.  All
of the source code is available under a choice of two Free Software,
Open Source licences.

This filesystem is encrypted and distributed over multiple peers in
such a way that it continues to function even when some of the peers are
unavailable, malfunctioning, or malicious.

Here is the one-page explanation of the security and fault-tolerance
properties that it offers:

http://allmydata.org/source/tahoe/trunk/docs/about.html

This is the successor to v1.2, which was released July 21, 2008 [1].
This is a major new release, adding a repairer, an efficient backup
command, support for large files, an (S)FTP server, and much more.

See the NEWS file [2] and the known_issues.txt file [3] for more
information.

In addition to the many new features of Tahoe itself, a crop of related
projects have sprung up, including Tahoe frontends for Windows and
Macintosh, two front-ends written in JavaScript, a Tahoe plugin for
duplicity, a Tahoe plugin for TiddlyWiki, a project to create a new
backup tool, CIFS/SMB integration, an iPhone app, and three incomplete
Tahoe frontends for FUSE. See Related Projects on the wiki: [4].


COMPATIBILITY

The version 1 branch of Tahoe is the basis of the consumer backup
product from Allmydata, Inc. -- http://allmydata.com .

Tahoe v1.3 is fully compatible with the version 1 branch of Tahoe.
Files written by v1.3 clients can be read by clients of all versions
back to v1.0 unless the file is too large -- files greater than about
12 GiB (depending on the configuration) can't be read by older clients.
v1.3 clients can read files produced by clients of all versions since
v1.0.  v1.3 servers can serve clients of all versions back to v1.0 and
v1.3 clients can use servers of all versions back to v1.0 (but can't
upload large files to them).

This is the fourth release in the version 1 series.  We believe that
this version of Tahoe is stable enough to rely on as a permanent store
of valuable data.  The version 1 branch of Tahoe will be actively
supported and maintained for the foreseeable future, and future versions
of Tahoe will retain the ability to read files and directories produced
by Tahoe v1 for the foreseeable future.


WHAT IS IT GOOD FOR?

With Tahoe, you can distribute your filesystem across a set of
computers, such that if some of the computers fail or turn out to be
malicious, the entire filesystem continues to be available, thanks to
the remaining computers.  You can also share your files with other
users, using a simple and flexible access control scheme.

Because this software is new, we do not categorically recommend it as
the sole repository of data which is extremely confidential or
precious.  However, we believe that erasure coding, strong encryption,
Free/Open Source Software and careful engineering make Tahoe safer than
common alternatives, such as RAID, removable drives, tape, or on-line
or Cloud storage systems.

This software comes with extensive unit tests [5], and there are no
known security flaws which would compromise confidentiality or data
integrity.  (For all currently known issues please see the
known_issues.txt file [2].)

This release of Tahoe is suitable for the "friendnet" use case [6] --
it is easy to create a filesystem spread over the computers of you and
your friends so that you can share disk space and files.


LICENCE

You may use this package under the GNU General Public License, version
2 or, at your option, any later version.  See the file COPYING.GPL
[7] for the terms of the GNU General Public License, version 2.

You may use this package under the Transitive Grace Period Public
Licence, version 1.0.  The Transitive Grace Period Public Licence has
requirements similar to the GPL except that it allows you to wait for
up to twelve months after you redistribute a derived work before
releasing the source code of your derived work. See the file
COPYING.TGPPL.html [8] for the terms of the Transitive Grace Period
Public Licence, version 1.0.

(You may choose to use this package under the terms of either licence,
at your option.)


INSTALLATION

Tahoe works on Linux, Mac OS X, Windows, Cygwin, and Solaris, and
probably most other systems.  Start with docs/install.html [9].


HACKING AND COMMUNITY

Please join us on the mailing list [10].  Patches that extend

ANNOUNCING Tahoe-LAFS v1.4

2009-04-30 Thread zooko
 to spend part of their work time on
this Free Software project).  Also they awarded customized t-shirts to
hackers who find security flaws in Tahoe (see http://hacktahoe.org
). After discontinuing funding of Tahoe R&D in early 2009, Allmydata,
Inc. has continued to provide servers, co-lo space and bandwidth to the
open source project. Thank you to Allmydata, Inc. for their generous and
public-spirited support.


Zooko Wilcox-O'Hearn
on behalf of the allmydata.org team

Special acknowledgment goes to Brian Warner, whose superb engineering
skills and dedication are primarily responsible for the Tahoe
implementation, and significantly responsible for the Tahoe design as
well, not to mention most of the docs and tests and many other things
besides.

April 13, 2009
Boulder, Colorado, USA

[1] http://allmydata.org/pipermail/tahoe-dev/2009-March/001461.html
[2] http://allmydata.org/trac/tahoe/browser/relnotes.txt?rev=3620
[3] http://allmydata.org/trac/tahoe/browser/NEWS?rev=3835
[4] http://allmydata.org/trac/tahoe/browser/docs/known_issues.txt
[5] http://allmydata.org/trac/tahoe/wiki/RelatedProjects
[6] http://allmydata.org/trac/tahoe/wiki/UseCases
[7] http://allmydata.org/trac/tahoe/browser/COPYING.GPL
[8] http://allmydata.org/source/tahoe/trunk/COPYING.TGPPL.html
[9] http://allmydata.org/source/tahoe/trunk/docs/install.html
[10] http://allmydata.org/cgi-bin/mailman/listinfo/tahoe-dev
[11] http://allmydata.org/trac/tahoe/roadmap
[12] http://allmydata.org/trac/tahoe/browser/CREDITS?rev=3758
[13] http://allmydata.org/trac/tahoe/wiki/Dev
[14] http://allmydata.com


---
Tahoe, the Least-Authority Filesystem -- http://allmydata.org
store your data: $10/month -- http://allmydata.com/?tracking=zsig
I am available for work -- http://zooko.com/résumé.html



Re: [Cryptography] Keeping backups (was Re: Separating concerns

2013-08-29 Thread zooko
On Thu, Aug 29, 2013 at 01:30:35PM -0400, Perry E. Metzger wrote:
 
 So, as has been discussed, I envision people having small cheap
 machines at home that act as their cloud, and the system prompting
 them to pick a friend to share encrypted backups with.

The Least-Authority Filesystem is designed for this use case (among a small
number of other use cases).

 Inevitably this means that said backups are going to either be
 protected by a fairly weak password or that the user is going to have
 to print the key out and put it in their desk drawer and risk having
 it lost or stolen or destroyed in a fire.

In LAFS, the keys are strong, computer-generated keys, so you have to print
them out or write them down. Printing them in triplicate and storing them in
separate locations seems like a good trade-off of the risk of theft vs. the
risk of loss, for the reasons you give:

 I think I can live with either problem. Right now, most people
 have very little protection at all. I think making the perfect the
 enemy of the good is a mistake. If doing bad things to me requires
 breaking in to my individual home, that's fine. If it is merely much
 less likely that I lose my data rather than certain that I have no
 backup at all, that's fine.
 
 BTW, automation *does* do a good job of making such things invisible.
 I haven't lost any real data since I started using Time Machine from
 Apple, and I have non-technical friends who use it and are totally
 happy with the results. I wish there was an automated thing in Time
 Machine to let me trade backups with an offsite friend as well.

The Least-Authority Filesystem comes with a nice backup tool ("tahoe
backup"), but it does not come with a nice GUI for your non-technical friends.

Regards,

Zooko
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] People should turn on PFS in TLS

2013-09-10 Thread zooko
On Fri, Sep 06, 2013 at 06:18:05PM +0100, Ben Laurie wrote:
 On 6 September 2013 18:13, Perry E. Metzger pe...@piermont.com wrote:
 
  It would be good to see them abandon RC4 of course, and soon.
 
 
 In favour of what, exactly? We're out of good ciphersuites.

Please ask your friendly neighborhood TLS implementor to move fast on
http://tools.ietf.org/id/draft-josefsson-salsa20-tls-02.txt .

Regards,

Zooko


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-11 Thread zooko
I agree that randomness-reuse is a major issue. Recently about 55 Bitcoin were
stolen by exploiting this, for example:

http://emboss.github.io/blog/2013/08/21/openssl-prng-is-not-really-fork-safe/

However, it is quite straightforward to make yourself safe from re-used nonces
in (EC)DSA, like this:

https://github.com/trezor/python-ecdsa/commit/8efb52fad5025ae87b649ff78faa9f8076768065

Whenever the public-key crypto spec says that you have to come up with a random
number, don't do it! Instead of just pulling a random number from your PRNG,
mix the message into your PRNG to generate a random number which will therefore
be unique to this message.

Note that you don't have to get anyone else's cooperation in order to do this
-- interoperating implementations can't tell how you chose your random
number, so they can't complain if you do it this way.
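
The idea can be sketched in a few lines.  This is a simplified illustration
only -- it is not the exact construction used by Crypto++, Ed25519, or RFC
6979, and the key, messages, and group order are made up for the example:

```python
# Derive the "random" nonce deterministically from the secret key and the
# message, so it is unique per message and unpredictable without the key.
import hashlib

def deterministic_nonce(secret_key: int, message: bytes, order: int) -> int:
    digest = hashlib.sha256(
        secret_key.to_bytes(32, "big") + hashlib.sha256(message).digest()
    ).digest()
    # Reduce into the group order; avoid the degenerate value 0.
    return int.from_bytes(digest, "big") % order or 1

# P-256 group order, purely for the sake of the example.
n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

k1 = deterministic_nonce(12345, b"transfer $100", n)
k2 = deterministic_nonce(12345, b"transfer $200", n)
assert k1 != k2                                               # unique per message
assert k1 == deterministic_nonce(12345, b"transfer $100", n)  # repeatable
```

An interoperating verifier sees only the resulting signature, so it cannot
tell (or object) that the nonce was derived this way.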

Wei Dai's Crypto++ library has done this for ages, for *all* nonces generated
in the course of public-key operations.

DJB's Ed25519 takes this one step further, and makes the nonce determined
*solely* by the message and the secret key, avoiding the PRNG part altogether:

http://ed25519.cr.yp.to/papers.html

In my opinion, that's the way to go. It applies equally well to (EC)DSA, and
still enjoys the above-mentioned interoperability.

There is now a standard for this fully-deterministic approach in the works,
edited by Thomas Pornin: https://tools.ietf.org/html/rfc6979 .

Therefore, Ed25519 or RFC-6979-enhanced (EC)DSA is actually safer than RSA-PSS
is with regard to this issue.

Regards,

Zooko


Re: anonymous DH MITM

2003-10-02 Thread Zooko O'Whielacronx

 Bear wrote:

 DH is an open protocol; it doesn't rely on an initial shared
 secret or a Trusted Authority.
 
 There is a simple proof that an open protocol between anonymous
 parties is _always_ vulnerable to MITM.
 
 Put simply, in an anonymous protocol, Alice has no way of knowing
 whether she is communicating with Bob or Mallory, and Bob has no way
 of knowing whether an incoming communication is from Mallory or from
 Alice.  (that's what anonymous means).  If there is no shared secret
 and no Trent, then nothing prevents Mallory from being the MITM.
 
 You can have anonymous protocols that aren't open be immune to MITM.
 And you can have open protocols that aren't anonymous be immune to
 MITM.  But you can't have both.

I'd like to see the proof.

I think it depends on what you mean by MITM.  Take the Chess Grandmaster 
Problem: can Alice and Bob play a game of chess against one another while 
preventing Mitch (the Man In The CHannel) from proxying their moves to one 
another while taking the credit for being a good chess player?

To make it concrete, suppose we limit it to the first two moves of a chess 
game.  One player is going to make the first move for White, and the other 
player is going to make the first move for Black.

Now, obviously Mitch could always act as a passive proxy, forwarding exactly 
the bits he receives, but in that case he can be defeated by e.g. DH.  To make 
it concrete, suppose that the first player includes both his move and his 
public key (or his public DH parameters) in his message, and the second player 
encrypts his message with the public key that arrived in the first message.

Mitch wins if the first player accepts the second player's move (the first 
move for Black).  The players win if the first player accepts a different move 
that the second player didn't make.  (This is the case where Mitch is no 
longer doing the Chess Grandmaster Attack, but is instead just playing two 
games of chess, one game with the first player and another game with the 
second player.)

If the players reject a message and end the protocol, then neither Mitch nor 
the players win -- it is a tie.  (A denial-of-service situation.)

Now, you might intuitively believe that this is one of those situations where 
Mitch can't lose.  But there are several protocols published in the literature 
that can help the players against Mitch, starting with Rivest  Shamir's 
Interlock Protocol from 1984.

The funny thing is that all of these published protocols seem to require a 
constraint on the game of chess.  For example, the Interlock Protocol works 
only with full duplex games where both sides are making a move at the same 
time.  There is no obvious way to apply it to chess, although you *could* 
apply it to a full-duplex game like the chess variant "Bughouse", or, um, 
the children's card game "War".  Or a Real-Time Strategy computer game 
where both players are sending moves to one another simultaneously.

Now, you might go back to the beginning of this message and say "Well, Chess 
Grandmaster isn't MITM!".  In that case, I would like to see a definition of 
what exactly is this "MITM" which can't be countered in the no-shared-trust 
setting.  I'm not saying that there isn't such a thing -- indeed I can think 
of a definition of "MITM" which satisfies that requirement, but I'm not sure 
it is the same definition that other people are thinking of.

Anyway, it is a funny and underappreciated niche in cryptography, IMO.  AFAIK 
nobody has yet spelled out in the open literature what the actual theoretical 
limitations are.


Regards,

Zooko

http://zooko.com/log.html



Re: anonymous DH MITM

2003-10-03 Thread Zooko O'Whielacronx

 Perhaps I spoke too soon?  It's not in Eurocrypt or Crypto 84 or 85,
 which are on my shelf.  Where was it published?

R. L. Rivest and A. Shamir.  "How to expose an eavesdropper."  Communications
of the ACM, 27:393-395, April 1984.



Strong-Enough Pseudonymity as Functional Anonymity

2003-10-04 Thread Zooko O'Whielacronx

I can think of three different goals one could have for identifying the
person behind a name.  If goal A is possible, I say that the name was a
verinym.  If goal C is possible, I say that the name was a pseudonym.  If
none of the goals are possible, the transaction was anonymous.

Unfortunately, there's no word for the kind of name where goal B is possible 
but goal A isn't.

Suppose Alice the Argulant visited the tavern that you own and operate in a 
virtual reality MUD world, and behaved badly and you had her thrown out.

Goal A: figure out the real human who operates the Alice persona, and break 
his or her kneecaps, or at least threaten to do so, while making it clear that 
you have the ability to make good on your threat.

Goal B: make sure that the real human who operates the Alice persona doesn't 
come back the next day under a different name: Bobo the Burbulant.

Goal C: make sure that the real human who operates the Alice persona suffers 
a loss of reputation capital or escrowed gold pieces or something, thus 
deterring him or her from behaving badly.

I imagine it might be nice to have Goal B achievable in a certain setting 
where Goal A remains unachievable.

Regards,

Zooko the Zoogulant



Re: anonymous DH MITM

2003-10-04 Thread Zooko O'Whielacronx

(about the Interlock Protocol)

 Benja wrote:

 The basic idea is that Alice sends *half* of her ciphertext, then Bob 
 *half* of his, then Alice sends the other half and Bob sends the other 
 half (each step is started only after the previous one was completed). 
 The point is that having only half of the first ciphertext, Mitch can't 
 decrypt it, and thus not pass on the correct thing to Bob in the first 
 step and to Alice in the second, so both can actually be sure to have 
 the public key of the person that made the other move.

That sounds like an accurate summary to me.

I think that the important thing is that the first message commits the sender 
to the contents while withholding knowledge of the contents from the recipient.  
The second message reveals the contents to the recipient.

The fact that this is implemented by sending half of the ciphertext at a time 
seems peripheral.  The same qualities would arise if this were implemented 
with a different commitment protocol, such as sending a secure hash of the 
tuple of (my_message, a_random_nonce).
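
A minimal sketch of that alternative commitment, with SHA-256 standing in
for the secure hash (names and the sample moves are illustrative):

```python
# Hash-based commit/reveal: the first message binds the sender to a move
# without revealing it; the second message reveals it.
import hashlib
import os

def commit(move: bytes) -> tuple[bytes, bytes]:
    nonce = os.urandom(16)  # blinds the commitment against guessing
    return hashlib.sha256(move + nonce).digest(), nonce

def verify(commitment: bytes, move: bytes, nonce: bytes) -> bool:
    return hashlib.sha256(move + nonce).digest() == commitment

c, r = commit(b"e2e4")   # step 1: send only c
# ... wait for the counterparty's commitment ...
# step 2: reveal (b"e2e4", r); the counterparty checks:
assert verify(c, b"e2e4", r)
assert not verify(c, b"d2d4", r)  # a substituted move fails the check
```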

Regards,

Zooko

http://zooko.com/log.html



Re: OOAPI-SSL/TLS (Was: Simple SSL/TLS - Some Questions)

2003-10-04 Thread Zooko O'Whielacronx

 Rich Salz wrote:

 You know about Wei's Crypto++, right?

I use it and like it.  I don't have to dig into the guts very often, which is 
good because I don't like mucking around in C++.

You have to understand templates to understand the API.  The docs are spartan, 
but the design is clean so it is okay.

It's difficult to compile on new platforms, new versions of compilers, etc., 
but that's probably true of any C++ library that doesn't deliberately restrict 
itself to a small subset of C++'s features.


 If you keep our C++ reasonably simple (no templates) then SWIG
 (http://www.swig.org) will make the scripting language glue code for you
 automatically.

I use SWIG and like it.  They say that the new SWIG handles templates better 
than good old 1.1.

I haven't tried SWIG on Crypto++.  I would really *like* for someone else to 
do so and share the results...


Regards,

Zooko



Re: Simple SSL/TLS - Some Questions

2003-10-06 Thread Zooko O'Whielacronx

 Jill Ramonsky [EMAIL PROTECTED] wrote:

 I confess ignorance in matters concerning licensing. The basic rules 
 which I want, and which I believe are appropriate are:
 (i) Anyone can use it, royalty free. Even commercial applications.
 (ii) Anyone can get the source code, and should be able to compile it to 
 executable from this.
 (iii) Copyright notices must be distributed along with the toolkit.
 (iv) Anyone can modify the code (this is important for fixing bugs and 
 adding new features) and redistribute the modified version. (Not sure 
 what happens to the copyright notices if this happens though).

#include <disclaimers/legality>
#include <disclaimers/truth>
#include <disclaimers/appropriateness>
#include <disclaimers/miscellaneous>

I entered your preferences (I think) into the handy dandy interactive license 
chooser at http://pgl.yoyo.org/lqr/, and it said the following.  I may have
misunderstood your desiderata though, so don't take my word for it.  ;-)

Regards,

Zooko

License
   |   Hackers like accepting code under it
   | | Combine with proprietary and redistribute
   | |   | Combine with GPL'ed code and redistribute
   | |   |   | Can redistribute binaries without source
   | |   |   |   | Required to include patent license with contrib
   | |   |   |   |   |
   | |   |   |   |   |
   v v   v   v   v   v
  ---   --- --- --- --- ---
 permissive  -   Y   -   Y   -
 GNU LGPL    -2  Y1  -   N   -
 GNU GPL -2  N   -   N   -
 Mozilla PL 1.1  -2  Y   -3  N   -

notes:

   1. The LGPL imposes some conditions on redistributing a combination of
LGPL'ed and proprietary code, including some requirement on how the LGPL'ed code
and the proprietary code are linked at run-time on the user's machine. It
appears to me that these clauses are intended to prevent people from violating
the spirit of the LGPL by using an obfuscating linker which prevents the user
from swapping in alternative versions of the LGPL'ed code. Read Section 6 of the
LGPL for details.
   2. Some members of the community refuse to accept GPL'ed source code into
their projects, although other members of the community strongly prefer GPL'ed
source code over other licenses. Contrast with code under permissive licenses
such as BSD, X11, MIT, and expat, which nobody refuses to accept. Almost nobody
refuses to accept LGPL'ed code, except that the Apache Foundation does refuse,
saying they think it would impose LGPL requirements upon the proprietary code (when
they are linked via the Java class-loading mechanism). The FSF disagrees with
this statement, asserting that such linking falls under section 6 of the LGPL.
As far as I know, nobody refuses to accept code which is licensed under the
Mozilla PL 1.1-plus-GPL-compatibility-clause (see note #3).
   3. MPL 1.1 can be specifically amended to allow combining with GPL, according
to the FSF's license list. 



Re: Protection against offline dictionary attack on static files

2003-11-16 Thread Zooko Journeyman

 Arcane Jill wrote:

... a way to make decryption more expensive ...

I think it is a neat idea.  I think it is best understood as a kind of 
key-stretching akin to iterated hashing of a password, as in:

Secure Applications of Low-Entropy Keys (1998)
John Kelsey, Bruce Schneier, Chris Hall, David Wagner 
http://citeseer.nj.nec.com/kelsey98secure.html

I invented it myself at one point, and then subsequently learned that it had 
already been published.  

Here are some notes I wrote about it earlier this year:

  
  I've learned that Udi Manber, Martín Abadi [1], Mark Lomas, and Roger 
  Needham [2] have already published one of my ideas -- that of an extra salt 
  used to hash passwords, erased, and then brute-force-rediscovered when needed. 
  This kind of thing reassures me that my own part-time, self-directed crypto 
  research isn't too far off the mainstream. Manber's paper [3] is earliest, 
  but Abadi's [4] (published as a Technical Report) contains extra goodies such 
  as consideration of off-line brute force attacks on weak keys used in 
  communication protocols and a comparison to the more widely used key-
  strengthening of iterated hashing.  

  [1] http://www.cse.ucsc.edu/~abadi
  [2] http://research.microsoft.com/users/needham/
  [3] http://citeseer.nj.nec.com/manber96simple.html
  [4] http://www.cse.ucsc.edu/~abadi/Papers/pwd-revised.ps
  

Regards,

Zooko



Re: potential new IETF WG on anonymous IPSec

2004-09-13 Thread Zooko O'Whielacronx
On 2004, Sep 11, , at 17:20, Sandy Harris wrote:
Zooko O'Whielacronx wrote:
I believe that in the context of e-mail [1, 2, 3, 4] and FreeSWAN  
this is called opportunistic encryption.
That is certainly not what FreeS/WAN meant by opportunistic  
encryption.
http://www.freeswan.org/freeswan_trees/freeswan-1.99/doc/glossary.html#carpediem
That link leads to the following definition: "A situation in which any
two IPsec-aware machines can secure their communications, without a
pre-shared secret and without a common PKI or previous exchange of
public keys. This is one of the goals of the Linux FreeS/WAN project,
discussed in our introduction section. Setting up for opportunistic
encryption is described in our configuration document."

This definition is indeed consistent with the concept that we are  
discussing.

If FreeS/WAN's implementation boils down to using DNS as a common PKI  
that is too bad, but their definition (which explicitly excludes a  
common PKI) seems to be the same as mine.

This concept is too important to go without a name.  Currently the best
way to tell your interlocutor what concept you are talking about seems
to be "you know, the way SSH does it, with the
first-time-unauthenticated public key exchange".  I heartily
approve of Peter Gutmann's suggestion to write an RFC for it.

Regards,
Zooko


Re: The Pointlessness of the MD5 attacks

2005-01-04 Thread Zooko O'Whielacronx
Something that is interesting about this issue is that it involves 
transitive vulnerability.

If there are only two actors there is no issue.  If Alice is the user 
and Bob is the software maintainer and Bob is bad, then Alice will be 
exploited regardless of the hash function.  If Alice is the user and 
Bob the maintainer and Bob is good then Alice will be safe, regardless. 
 However if there is a third actor, Charles, from whom Bob accepts 
information that he will use in a limited way (for example an image or 
sound file, or a patch to the source code which contains extensive 
comments and whitespace), then whether the hash function is 
collision-resistant becomes an issue.  If Alice and Bob use a 
collision-resistant hash function, they can rest assured that any 
software package matching the hash is the package that Bob intended for 
Alice to use.  If they use a hash function which is not 
collision-resistant they can't, even if the function is second 
pre-image resistant.

This is interesting to me because the problem doesn't arise with only 
Alice and Bob nor with only Bob and Charles.  It is a problem specific 
to the transitive nature of the relationship: Alice is vulnerable to 
Charles's choice of package because she trusts Bob to choose packages 
and Bob trusts Charles to provide image files.  And because they are 
using a non-collision-resistant hash function.
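The three-party trust chain can be sketched in a few lines of Python.
This is only a toy illustration of the verification step, not any
particular package system's code; the names are hypothetical:

```python
import hashlib

def publish(package: bytes) -> str:
    # Bob inspects `package` (which embeds content supplied by Charles),
    # approves it, and publishes its digest for Alice to check against.
    return hashlib.sha256(package).hexdigest()

def alice_installs(package: bytes, published_digest: str) -> bool:
    # Alice accepts any package whose digest matches what Bob published.
    return hashlib.sha256(package).hexdigest() == published_digest

benign = b"program code" + b"harmless image file from Charles"
digest = publish(benign)
assert alice_installs(benign, digest)

# With a collision-resistant hash, Charles cannot produce a second,
# malicious package with the same digest.  With a broken hash like MD5
# he can prepare a *colliding pair* in advance, hand Bob the benign
# half, and later feed Alice the malicious half -- even though the hash
# may remain second-preimage resistant against outsiders.
malicious = b"program code" + b"exploit payload from Charles"
assert not alice_installs(malicious, digest)
```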

Regards,
Zooko


switching from SHA-1 to Tiger ?

2006-07-11 Thread Zooko O'Whielacronx

Hal:

Thanks for the news about the planned NIST-sponsored hash function 
competition.  I'm glad to hear that it is in the works.


Yesterday I profiled my on-line data backup application [1] and 
discovered that for certain operations one third of the time is spent in 
SHA-1.  For that reason, I've been musing about the possibility of 
switching away from SHA-1.  Not to SHA-256 or SHA-512, but to Tiger.


The implementation of Tiger in Crypto++ on Opteron is more than twice as 
fast as SHA-1 and almost four times as fast as SHA-256 [2].
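Tiger isn't in Python's standard library, but the kind of profiling
described above is easy to reproduce for the hashes that are. A quick
hashlib/timeit sketch (absolute throughput is machine-dependent; this
only shows the measurement, not the Crypto++/Opteron numbers):

```python
import hashlib
import timeit

data = b"\x00" * (1 << 20)  # 1 MiB of input per call

for name in ("sha1", "sha256", "sha512"):
    func = getattr(hashlib, name)
    # 50 calls over 1 MiB each => (50 / elapsed) gives MiB per second.
    elapsed = timeit.timeit(lambda: func(data).digest(), number=50)
    print(f"{name}: {50 / elapsed:.1f} MiB/s")
```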


I hope that the hash function designers will be aware that hash 
functions are being used in more and more contexts outside of the 
traditional digital signatures and MACs.  These new contexts include 
filesystems like ZFS [3], decentralized revision control systems like 
Monotone [4], git [5], mercurial [6] and bazaar-ng [7], and peer-to-peer 
file-sharing systems such as Direct Connect, Gnutella, and Bitzi [6].


The AES competition resulted in a block cipher that was faster as well 
as safer than the previous standards.  I hope that the next generation 
of hash functions achieve something similar, because for my use cases 
speed in a hash function is more important than speed in encryption.


By the way, the traditional practice of using a hash function as a 
component of a MAC should, in my humble opinion, be retired in favor of 
a Carter-Wegman alternative such as Poly1305-AES [7].


Regards,

Zooko

[1] http://allmydata.com/
[2] http://www.eskimo.com/~weidai/amd64-benchmarks.html
[3] http://www.opensolaris.org/os/community/zfs/
ZFS offers the option of performing a SHA-256 on every block of data
on every access.  The default setting is to use a non-cryptographic
256-bit checksum instead.
[4] http://www.venge.net/monotone/
[5] http://git.or.cz/
[6] http://en.wikipedia.org/wiki/Tiger_(hash)
[7] http://cr.yp.to/mac.html



Re: Proof of Work - atmospheric carbon

2009-01-27 Thread Zooko O'Whielacronx
On Jan 26, 2009, at 13:08, John Levine wrote:

 If only.  People have been saying for at least a decade that all we
 have to do to solve the spam problem is to charge a small fee for
 every message sent.

I was one of those people, a decade and a half ago, on the cypherpunks
mailing list.  In fact, as I recall I once discussed with John Gilmore
after a Bay Area Cypherpunks Physical Meeting whether he would pay me to
implement some sort of solution to spam, but we didn't agree on a
strategy.

 Unfortunately, there's a variety of reasons that's never going to work.

Hey, the future is long.  (We hope.)

 One of the larger reasons is that despite a lot of smart people
 working on micropayments, we have nothing approaching a system that
 will work for billions of transactions per day, where 90% of the
 purported payments are bogus, along with the lack of any interface to
 the real world financial system that would scale and withstand the
 predictable attacks.

Coincidentally, I just blogged today about how we are much closer to
this now than we were then, even though none of the smart people that
you were probably thinking of are involved in the new deployments:

http://testgrid.allmydata.org:3567/uri/URI:DIR2-RO:j74uhg25nwdpjpacl6rkat2yhm:kav7ijeft5h7r7rxdp5bgtlt3viv32yabqajkrdykozia5544jqa/wiki.html#%5B%5BDecentralized%20Money%5D%5D

WoW-gold, for example, appears to have at least millions of transactions
a day.  Does anyone have more detail about the scale and scope of these
currencies?

 My white paper could use a little updating, but the basic conclusions
 remain sound:

 http://www.taugh.com/epostage.pdf

Thanks!  I'll read this.

Regards,

Zooko



Re: What's the state of the art in factorization?

2010-04-22 Thread Zooko O'Whielacronx
On Wed, Apr 21, 2010 at 8:49 PM, Jerry Leichter leich...@lrw.com wrote:

 There are some concrete complexity results - the kind of stuff Rogaway does,
 for example - but the ones I've seen tend to be in the block
 cipher/cryptographic hash function spaces.  Does anyone know of similar
 kinds of results for systems like RSA?

There is some interesting work in public key cryptosystems that reduce
to a *random* instance of a specific problem.

Here is a very cool one:

http://eprint.iacr.org/2009/576


Public-Key Cryptographic Primitives Provably as Secure as Subset Sum

Vadim Lyubashevsky and Adriana Palacio and Gil Segev

Abstract: We propose a semantically-secure public-key encryption
scheme whose security is polynomial-time equivalent to the hardness of
solving random instances of the subset sum problem. The subset sum
assumption required for the security of our scheme is weaker than that
of existing subset-sum based encryption schemes, namely the
lattice-based schemes of Ajtai and Dwork (STOC '97), Regev (STOC '03,
STOC '05), and Peikert (STOC '09). Additionally, our proof of security
is simple and direct. We also present a natural variant of our scheme
that is secure against key-leakage attacks, as well as an oblivious
transfer protocol that is secure against semi-honest adversaries.


Unless I misunderstand, if you read someone's plaintext without having
the private key then you have proven that P=NP!

Nice. :-)
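For readers unfamiliar with the underlying problem, here is a toy
brute-force subset sum search in Python. Real cryptographic instances
use hundreds of large (and, in such schemes, modular) weights, putting
this exponential search far out of reach; the numbers below are made up
for illustration:

```python
from itertools import combinations

def subset_sum(weights, target):
    # Exhaustive search over all subsets: exponential in len(weights).
    for r in range(len(weights) + 1):
        for combo in combinations(range(len(weights)), r):
            if sum(weights[i] for i in combo) == target:
                return combo
    return None

weights = [267, 493, 869, 961, 1000, 1153, 1246, 1598]
target = 267 + 961 + 1246
found = subset_sum(weights, target)
print(found)  # indices of a subset of `weights` summing to `target`
```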

Regards,

Zooko



Re: What's the state of the art in factorization?

2010-04-22 Thread Zooko O'Whielacronx
On Wed, Apr 21, 2010 at 5:29 PM, Samuel Neves sne...@dei.uc.pt wrote
(on the cryptography@metzdowd.com list):
 [2] http://www.cs.umd.edu/~jkatz/papers/dh-sigs-full.pdf

I've been looking at that one, with an eye to using it in the One
Hundred Year Cryptography project that is being sponsored by Google as
part of the Google Summer of Code (see recent discussions on the
tahoe-dev archives for April 2010 [1]).

Later I discovered this paper [2] which appears to be an improvement
on that one in terms of performance (see Table 1 in [2]) while still
having a tight reduction to the Computational Diffie-Hellman (CDH)
problem. Strangely, this paper [2] doesn't appear to have been
published anywhere except as an eprint on eprint.iacr.org. I wonder
why not. Is there something wrong with it?

I still have some major questions about the funky hash into a curve
part of these schemes. I'm hoping that [3] will turn out to be wrong
and a nice simple dumb efficient hack will be secure for these
particular digital signature schemes.

Of course if the newfangled schemes which reduce to a random instance
of a classic hard problem work out, that would provide an even
stronger assurance of long-term safety than the ones that reduce to
CDH. See for example the paper [4] that I mentioned previously on the
cryptography@metzdowd.com mailing list. Unless I misunderstand, if you
can break that scheme by learning someone's plaintext without knowing
their private key, then you've also proven that P=NP!

Unfortunately that one in particular doesn't provide digital
signatures, only public key encryption, and what I most need for the
One Hundred Year Cryptography project is digital signatures.

Regards,

Zooko

[1] http://allmydata.org/pipermail/tahoe-dev/2010-April/date.html
[2] http://eprint.iacr.org/2007/019
[3] http://eprint.iacr.org/2009/340
[4] http://eprint.iacr.org/2009/576



What's the state of the art in digital signatures? Re: What's the state of the art in factorization?

2010-07-09 Thread Zooko O'Whielacronx
By the way, the general idea of One Hundred Year Security as far as
digital signatures go would be to combine digital signature
algorithms. Take one algorithm which is bog standard, such as ECDSA
over NIST secp256r1 and another which has strong security properties
and which is very different from ECDSA. Signing is simply generating a
signature over the message using each algorithm in parallel.
Signatures consist of both of the signatures of the two algorithms.
Verifying consists of checking both signatures and rejecting if either
one is wrong.
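The combiner logic itself is tiny. Here is a Python sketch using two
independent HMAC instances as stand-ins for the two signature
algorithms (HMAC is symmetric-key, not a real public-key signature; it
is used only so the sketch runs with the standard library -- in the
scheme described above the components would be ECDSA plus some very
different algorithm):

```python
import hmac

class AndCombiner:
    """Accept a message only if *both* component signatures verify."""
    def __init__(self, alg_a, alg_b):
        self.alg_a, self.alg_b = alg_a, alg_b

    def sign(self, msg: bytes):
        # A combined signature is simply the pair of signatures.
        return (self.alg_a.sign(msg), self.alg_b.sign(msg))

    def verify(self, msg: bytes, sig) -> bool:
        sig_a, sig_b = sig
        return self.alg_a.verify(msg, sig_a) and self.alg_b.verify(msg, sig_b)

class HmacStandIn:
    """Stand-in 'signature algorithm' backed by HMAC."""
    def __init__(self, key: bytes, hashname: str):
        self.key, self.hashname = key, hashname

    def sign(self, msg: bytes) -> bytes:
        return hmac.new(self.key, msg, self.hashname).digest()

    def verify(self, msg: bytes, sig: bytes) -> bool:
        return hmac.compare_digest(self.sign(msg), sig)

dual = AndCombiner(HmacStandIn(b"key-1", "sha256"),
                   HmacStandIn(b"key-2", "sha512"))
sig = dual.sign(b"transfer $100")
assert dual.verify(b"transfer $100", sig)
assert not dual.verify(b"transfer $999", sig)
```

A breakthrough against either component alone leaves forgery as hard as
the surviving component, which is the point of the AND-combiner.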

Since the digital signature algorithms that we've been discussing such
as [1] are related to discrete log/Diffie-Hellman and since an
efficient implementation would probably be in elliptic curves, then
those are not great candidates to pair with ECDSA in this combiner
scheme.

Unfortunately I haven't stumbled on a digital signature scheme which
has good properties (efficiency, simplicity, ease of implementation)
and which is based on substantially different ideas and which isn't
currently under patent protection (therefore excluding NTRUSign).

Any ideas?

[1] http://eprint.iacr.org/2007/019

Regards,

Zooko



Re: [cryptography] What's the state of the art in factorization?

2010-07-09 Thread Zooko O'Whielacronx
On Fri, Apr 23, 2010 at 3:57 AM, Paul Crowley p...@ciphergoth.org wrote:

 My preferred signature scheme is the second, DDH-based one in the linked
 paper, since it produces shorter signatures - are there any proposals which
 improve on that?

http://eprint.iacr.org/2007/019

Has one. Caveat lector.

Regards,

Zooko



What's the state of the art in digital signatures? Re: What's the state of the art in factorization?

2010-07-09 Thread Zooko O'Whielacronx
On Thu, Apr 22, 2010 at 12:40 PM, Jonathan Katz jk...@cs.umd.edu wrote:
 On Thu, 22 Apr 2010, Zooko O'Whielacronx wrote:

 Unless I misunderstand, if you read someone's plaintext without having
 the private key then you have proven that P=NP!
…
 The paper you cite reduces security to a hard-on-average problem, whereas
 all that P \neq NP guarantees is hardness in the worst case.

I see. I did misunderstand. So although cracking the Lyubashevsky,
Palacio, Segev encryption scheme [1] doesn't mean that you've proven
P=NP, because NP is about worst-case rather than average-case, it
*does* mean that you've solved the subset sum problem for a random
instance. If you can do that for all keys that people use in real life
then you can solve the subset sum problem for almost all random
instances, which seems like it would still be a breakthrough in
complexity theory. If you can do it for only a few keys then this
means that the Lyubashevsky, Palacio, Segev scheme is susceptible to
weak keys.

Is that right?

Anyway, although this is not one, there do exist proposals for public
key crypto schemes where breaking the scheme implies solving a worst
case instance of a supposedly hard problem, right?

Here is a recent paper which surveys several of them (all
lattice-based) and estimates secure key sizes: [2].

None of the signature schemes mentioned therein appear to have the
sort of efficiency that we are used to. For example the ecdonaldp
(ECDSA) signature schemes measured on
http://bench.cr.yp.to/results-sign.html have key sizes on the order of
tens of bytes, where the most efficient digital signature algorithm
described in [2] has key sizes on the order of thousands of bytes.
(And that one is a one-time signature scheme!)

Okay, so I'm still searching for a signature algorithm which has the
following properties (or as many of them as I can get):

1. efficient (signing time, verification time, key generation time,
key size, signature size)

2. some kind of strong argument that it really is secure (the gold
standard would be reduction to a worst-case instance of an NP-complete
problem)

or, if we can't have (2) then at least we want (3) and (4):

3. rather different from ECDSA, so that a breakthrough is unlikely to
invalidate both ECDSA and this other scheme at once
and
4. not known to be vulnerable to quantum computers

and finally but importantly:

5. easy to understand and to implement

Suggestions welcome!

Regards,

Zooko Wilcox-O'Hearn

[1] http://eprint.iacr.org/2009/576
[2] http://eprint.iacr.org/2010/137



Merkle Signature Scheme is the most secure signature scheme possible for general-purpose use

2010-07-09 Thread Zooko O'Whielacronx
Folks:

Regarding earlier discussion on these lists about the difficulty of
factoring and post-quantum cryptography and so on, you might be
interested in this note that I just posted to the tahoe-dev list:

100-year digital signatures

http://tahoe-lafs.org/pipermail/tahoe-dev/2010-June/004439.html

Here is an excerpt:


As David-Sarah [Hopwood] has pointed out, a Merkle Signature Scheme is at least
as secure as *any* other digital signature scheme, even in the
long-term—even if attackers have quantum computers and the knowledge
of how to solve math problems that we don't know how to solve today.

If you had some other digital signature scheme (even, for the sake of
argument, a post-quantum digital signature scheme with some sort of
beautiful reduction from some classic math problem), then you would
probably start wanting to digitally sign messages larger than the few
hundreds of bits that the digital signature algorithm natively
handles. Therefore, you would end up hashing your messages with a
secure hash function to generate message representatives short
enough to sign. Therefore, your system will actually depend on both
the security of the digital signature scheme *and* the security of a
hash function. With a Merkle Signature Scheme you rely on just the
security of a hash function, so there is one less thing that can go
wrong. That's why a Merkle Signature Scheme is at least as secure as
the best digital signature scheme that you can imagine. :-)
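The leaf scheme underneath a Merkle Signature Scheme can be as simple
as a Lamport one-time signature, which indeed relies on nothing but a
hash function. A minimal Python sketch (remember: each key pair may
sign at most one message, which is what the Merkle tree construction
fixes):

```python
import hashlib
import secrets

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen():
    # Two secret 32-byte preimages per digest bit; the public key is
    # the corresponding pair of hashes for each bit position.
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)]
          for _ in range(256)]
    pk = [[H(pair[0]), H(pair[1])] for pair in sk]
    return sk, pk

def digest_bits(msg: bytes):
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one preimage per bit of the message digest.
    return [sk[i][b] for i, b in enumerate(digest_bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(sig[i]) == pk[i][b]
               for i, b in enumerate(digest_bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"hello")
assert verify(pk, b"hello", sig)
assert not verify(pk, b"goodbye", sig)
```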


In that note I go on to talk about more Tahoe-LAFS-specific
engineering considerations and expose my ignorance about exactly what
properties are required of the underlying secure hash functions.

Regards,

Zooko



ANNOUNCING Tahoe, the Least-Authority File System, v1.7.0

2010-07-09 Thread Zooko O'Whielacronx
Dear people of the cryptography mailing lists:

We just released Tahoe-LAFS v1.7. The major new feature is an SFTP
server. This means that (with enough installing software and tinkering
with your operating system configuration) you can have a
normal-looking mount point backed by a Tahoe-LAFS grid.

Google is sponsoring us through Google Summer of Code. The next
release after this one will include the resulting improvements.

One of those improvements is the One Hundred Year Cryptography
project, with student Yu Xue and mentor Jack Lloyd. I'll post to these
lists about the progress they make.

Regards,

Zooko

ANNOUNCING Tahoe, the Least-Authority File System, v1.7.0

The Tahoe-LAFS team is pleased to announce the immediate
availability of version 1.7.0 of Tahoe-LAFS, an extremely
reliable distributed storage system.

Tahoe-LAFS is the first distributed storage system which offers
provider-independent security—meaning that not even the
operator of your storage server can read or alter your data
without your consent. Here is the one-page explanation of its
unique security and fault-tolerance properties:

http://tahoe-lafs.org/source/tahoe/trunk/docs/about.html

Tahoe-LAFS v1.7.0 is the successor to v1.6.1, which was released
February 27, 2010 [1].

v1.7.0 is a major new release with new features and bugfixes. It
adds a fully functional SFTP interface, support for non-ASCII character
encodings, and a new upload algorithm which guarantees that each file
is spread over multiple servers for fault-tolerance. See the NEWS file
[2] for details.


WHAT IS IT GOOD FOR?

With Tahoe-LAFS, you distribute your filesystem across multiple
servers, and even if some of the servers are compromised by
an attacker, the entire filesystem continues to work
correctly, and continues to preserve your privacy and
security. You can easily share specific files and directories
with other people.

In addition to the core storage system itself, volunteers have
built other projects on top of Tahoe-LAFS and have integrated
Tahoe-LAFS with existing systems.

These include frontends for Windows, Macintosh, JavaScript,
iPhone, and Android, and plugins for Hadoop, bzr, mercurial,
duplicity, TiddlyWiki, and more. See the Related Projects page
on the wiki [3].

We believe that strong cryptography, Free and Open Source
Software, erasure coding, and principled engineering practices
make Tahoe-LAFS safer than RAID, removable drive, tape,
on-line backup or cloud storage systems.

This software is developed under test-driven development, and
there are no known bugs or security flaws which would
compromise confidentiality or data integrity under recommended
use. (For all currently known issues please see the
known_issues.txt file [4].)


COMPATIBILITY

This release is fully compatible with the version 1 series of
Tahoe-LAFS. Clients from this release can write files and
directories in the format used by clients of all versions back
to v1.0 (which was released March 25, 2008). Clients from this
release can read files and directories produced by clients of
all versions since v1.0. Servers from this release can serve
clients of all versions back to v1.0 and clients from this
release can use servers of all versions back to v1.0.

This is the ninth release in the version 1 series. This series
of Tahoe-LAFS will be actively supported and maintained for
the foreseeable future, and future versions of Tahoe-LAFS will
retain the ability to read and write files compatible with
Tahoe-LAFS v1.


LICENCE

You may use this package under the GNU General Public License,
version 2 or, at your option, any later version. See the file
COPYING.GPL [5] for the terms of the GNU General Public
License, version 2.

You may use this package under the Transitive Grace Period
Public Licence, version 1 or, at your option, any later
version. (The Transitive Grace Period Public Licence has
requirements similar to the GPL except that it allows you to
wait for up to twelve months after you redistribute a derived
work before releasing the source code of your derived work.)
See the file COPYING.TGPPL.html [6] for the terms of the
Transitive Grace Period Public Licence, version 1.

(You may choose to use this package under the terms of either
licence, at your option.)


INSTALLATION

Tahoe-LAFS works on Linux, Mac OS X, Windows, Cygwin, Solaris,
*BSD, and probably most other systems. Start with
docs/quickstart.html [7].


HACKING AND COMMUNITY

Please join us on the mailing list [8]. Patches are gratefully
accepted -- the RoadMap page [9] shows the next improvements
that we plan to make and CREDITS [10] lists the names of people
who've contributed to the project. The Dev page [11] contains
resources for hackers.


SPONSORSHIP

Tahoe-LAFS was originally developed by Allmydata, Inc., a
provider of commercial backup services. After discontinuing
funding of Tahoe-LAFS R&D in early 2009, they have continued
to provide servers, bandwidth, and small personal gifts as
tokens of appreciation.

Re: 1280-Bit RSA

2010-07-11 Thread Zooko O'Whielacronx
Dan:

You didn't mention the option of switching to elliptic curves. A
256-bit elliptic curve is probably stronger than 2048-bit RSA [1]
while also being more efficient in every way except for CPU cost for
verifying signatures or encrypting [2].

I like the Brainpool curves, which come with a better demonstration
that they were generated without any possible back door than do the
NIST curves [3].

Regards,

Zooko

[1] http://www.keylength.com/
[2] http://bench.cr.yp.to/results-sign.html
[3] 
http://www.ecc-brainpool.org/download/draft-lochter-pkix-brainpool-ecc-00.txt



ANNOUNCING Tahoe, the Least-Authority File System, v1.7.1

2010-07-19 Thread Zooko O'Whielacronx
 [12].


ACKNOWLEDGEMENTS

This is the fifth release of Tahoe-LAFS to be created solely
as a labor of love by volunteers. Thank you very much to the
team of hackers in the public interest who make Tahoe-LAFS
possible.

David-Sarah Hopwood and Zooko Wilcox-O'Hearn
on behalf of the Tahoe-LAFS team

July 18, 2010
Rainhill, Merseyside, UK and Boulder, Colorado, USA


[1] http://tahoe-lafs.org/trac/tahoe/browser/relnotes.txt?rev=4514
[2] http://tahoe-lafs.org/trac/tahoe/browser/NEWS?rev=4577
[3] http://tahoe-lafs.org/trac/tahoe/wiki/RelatedProjects
[4] http://tahoe-lafs.org/trac/tahoe/browser/docs/known_issues.txt
[5] http://tahoe-lafs.org/trac/tahoe/browser/COPYING.GPL
[6] http://tahoe-lafs.org/source/tahoe/trunk/COPYING.TGPPL.html
[7] http://tahoe-lafs.org/source/tahoe/trunk/docs/quickstart.html
[8] http://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev
[9] http://tahoe-lafs.org/trac/tahoe/roadmap
[10] http://tahoe-lafs.org/trac/tahoe/browser/CREDITS?rev=4567
[11] http://tahoe-lafs.org/trac/tahoe/wiki/Dev
[12] http://tahoe-lafs.org/hacktahoelafs/



ANNOUNCING Tahoe, the Least-Authority File System, v1.8.0

2010-09-27 Thread Zooko O'Whielacronx
 much to the
team of hackers in the public interest who make Tahoe-LAFS
possible.

David-Sarah Hopwood and Zooko Wilcox-O'Hearn
on behalf of the Tahoe-LAFS team

September 23, 2010
Rainhill, Merseyside, UK and Boulder, Colorado, USA


[1] http://tahoe-lafs.org/trac/tahoe/browser/relnotes.txt?rev=4579
[2] http://tahoe-lafs.org/trac/tahoe/browser/NEWS?rev=4732
[3] http://tahoe-lafs.org/trac/tahoe/wiki/RelatedProjects
[4] http://tahoe-lafs.org/trac/tahoe/browser/docs/known_issues.txt
[5] http://tahoe-lafs.org/trac/tahoe/browser/COPYING.GPL
[6] http://tahoe-lafs.org/source/tahoe/trunk/COPYING.TGPPL.html
[7] http://tahoe-lafs.org/source/tahoe/trunk/docs/quickstart.html
[8] http://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev
[9] http://tahoe-lafs.org/trac/tahoe/roadmap
[10] http://tahoe-lafs.org/trac/tahoe/browser/CREDITS?rev=4591
[11] http://tahoe-lafs.org/trac/tahoe/wiki/Dev
[12] http://tahoe-lafs.org/hacktahoelafs/



Tahoe-LAFS developers' statement on backdoors

2010-10-06 Thread Zooko O'Whielacronx
http://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/docs/backdoors.txt

Statement on Backdoors

October 5, 2010

The New York Times has recently reported that the current U.S.
administration is proposing a bill that would apparently, if passed,
require communication systems to facilitate government wiretapping and
access to encrypted data:

 http://www.nytimes.com/2010/09/27/us/27wiretap.html (login required;
username/password pairs available at
http://www.bugmenot.com/view/nytimes.com).

Commentary by the Electronic Frontier Foundation
(https://www.eff.org/deeplinks/2010/09/government-seeks ), Peter
Suderman / Reason
(http://reason.com/blog/2010/09/27/obama-administration-frustrate ),
Julian Sanchez / Cato Institute
(http://www.cato-at-liberty.org/designing-an-insecure-internet/ ).

The core Tahoe developers promise never to change Tahoe-LAFS to
facilitate government access to data stored or transmitted by it. Even
if it were desirable to facilitate such access—which it is not—we
believe it would not be technically feasible to do so without severely
compromising Tahoe-LAFS' security against other attackers. There have
been many examples in which backdoors intended for use by government
have introduced vulnerabilities exploitable by other parties (a
notable example being the Greek cellphone eavesdropping scandal in
2004/5). RFCs 1984 and 2804 elaborate on the security case against
such backdoors.

Note that since Tahoe-LAFS is open-source software, forks by people
other than the current core developers are possible. In that event, we
would try to persuade any such forks to adopt a similar policy.

The following Tahoe-LAFS developers agree with this statement:

David-Sarah Hopwood
Zooko Wilcox-O'Hearn
Brian Warner
Kevan Carstensen
Frédéric Marti
Jack Lloyd
François Deppierraz
Yu Xue
Marc Tooley



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-11 Thread Zooko O'Whielacronx
I like the ideas, John.

The idea, and the protocol you sketched out, are a little reminiscent
of ZRTP ¹ and of tcpcrypt ². I think you can go one step further,
however, and make it *really* strong, which is to offer the higher
or outer layer a way to hook into the crypto from your inner layer.

This could be by the inner layer exporting a crypto value which the
outer layer enforces an authorization or authenticity requirement on,
as is done in ZRTP if the a=zrtp-hash is delivered through an
integrity-protected outer layer, or in tcpcrypt if the Session ID is
verified by the outer layer.

I think this is a case where a separation of concerns between layers
with a simple interface between them can have great payoff. The
lower/inner layer enforces confidentiality (encryption),
integrity, hopefully forward-secrecy, etc., and the outer layer
decides on policy: authorization, naming (which is often but not
necessarily used for authorization), etc. The interface between them
can be a simple cryptographic interface, for example the way it is
done in the two examples above.

I think the way that SSL combined transport layer security,
authorization, and identification was a terrible idea. I (and others)
have been saying all along that it was a bad idea, and I hope that the
related security disasters during the last two years have started
persuading more people to rethink it, too. I guess the designers of
SSL were simply following the lead of the original inventors of public
key cryptography, who delegated certain critical unsolved problems to
an underspecified Trusted Third Party. What a colossal, historic
mistake.

The foolscap project ³ by Brian Warner demonstrates that it is
possible to retrofit a nice abstraction layer onto SSL. The way that
it does this is that each server automatically creates a self-signed
certificate, the secure hash of that certificate is embedded into the
identifier pointing at that server, and the client requires the
server's public key match the certificate matching that hash. The fact
that this is a useful thing to do, and an inconvenient and rare thing to
do with SSL, should give security architects food for thought.
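The pattern is easy to sketch. In this toy Python version the
`pb://` identifier format is only loosely modeled on foolscap's (the
real protocol and encoding differ); the point is that the identifier
itself pins the hash of the server's self-signed certificate:

```python
import base64
import hashlib

def cert_fingerprint(cert_der: bytes) -> str:
    # Base32-encoded SHA-256 of the certificate, lowercased, no padding.
    digest = hashlib.sha256(cert_der).digest()
    return base64.b32encode(digest).decode().lower().rstrip("=")

def make_identifier(cert_der: bytes, host: str, port: int) -> str:
    # Hypothetical identifier: the cert hash is embedded in the URL.
    return f"pb://{cert_fingerprint(cert_der)}@{host}:{port}/service"

def check_cert(identifier: str, presented_cert_der: bytes) -> bool:
    # The client accepts the TLS connection only if the presented
    # certificate hashes to the fingerprint pinned in the identifier.
    expected = identifier.split("//")[1].split("@")[0]
    return expected == cert_fingerprint(presented_cert_der)

cert = b"...self-signed certificate in DER form..."  # placeholder bytes
furl = make_identifier(cert, "example.org", 12345)
assert check_cert(furl, cert)
assert not check_cert(furl, b"some other certificate")
```

No certificate authority is consulted at all: whoever can present the
matching private key and certificate *is* the intended server.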

So I have a few suggestions for you:

1. Go, go, go! The path your thoughts are taking seems fruitful. Just
design a really good inner layer of crypto, without worrying (for
now) about the vexing and subtle problems of authorization,
authentication, naming, Man-In-The-Middle-Attack and so on. For now.

2. Okay, but leave yourself an out, by defining a nice simple
cryptographic hook by which someone else who *has* solved those vexing
problems could extend the protection that they've gained to users of
your protocol.

3. Maybe study ZRTP and tcpcrypt for comparison. Don't try to study
foolscap, even though it is a very interesting practical approach,
because there doesn't exist documentation of the protocol at the right
level for you to learn from.

Regards,

Zooko

https://LeastAuthority.com ← verifiably end-to-end-encrypted storage

P.S. Another example that you and I should probably study is cjdns ⁴.
Despite its name, it is *not* a DNS-like thing. It is a
transport-layer thing. I know less about cjdns so I didn't cite it as
a good example above.

¹ https://en.wikipedia.org/wiki/ZRTP
² http://tcpcrypt.org/
³ http://foolscap.lothar.com/docs/using-foolscap.html
⁴ http://cjdns.info/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: Warning! New cryptographic modes!

2009-05-22 Thread Zooko Wilcox-O'Hearn
For what it is worth, in the Tahoe-LAFS project [1] we simply use CTR  
mode and a unique key for each file.  Details: [2]
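Python's standard library has no AES, so the following sketch shows
only the *structure* of CTR mode, with SHA-256 standing in for the
block cipher (a real implementation would use AES). Because each Tahoe
file gets its own key, the counter can simply start at zero without
risk of keystream reuse:

```python
import hashlib

def toy_ctr(key: bytes, data: bytes) -> bytes:
    # CTR structure: derive a keystream block from (key, counter) and
    # XOR it with the data.  SHA-256 is a stand-in block function here.
    out = bytearray()
    for offset in range(0, len(data), 32):
        counter = (offset // 32).to_bytes(16, "big")
        keystream = hashlib.sha256(key + counter).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ k for b, k in zip(chunk, keystream))
    return bytes(out)

key = hashlib.sha256(b"unique key for this one file").digest()
ct = toy_ctr(key, b"file contents go here")
# CTR is its own inverse: encrypting the ciphertext decrypts it.
assert toy_ctr(key, ct) == b"file contents go here"
```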


Tahoe-LAFS itself doesn't do any deltas, compression, etc., but there  
are two projects layered atop Tahoe to add such features -- a plugin  
for duplicity [3] and a new project named GridBackup [4].


Those upper layers can treat the Tahoe-LAFS as a secure store of  
whole files and therefore don't have to think about details like  
cipher modes of operation, nor do they even have to think very hard  
about key management, thanks to Tahoe-LAFS's convenient capability- 
based access control scheme.


Regards,

Zooko

[1] http://allmydata.org
[2] http://allmydata.org/trac/tahoe/browser/docs/architecture.txt
[3] http://duplicity.nongnu.org
[4] http://podcast.utos.org/index.php?id=52



Re: 112-bit prime ECDLP solved

2009-07-19 Thread Zooko Wilcox-O'Hearn
By the way, we've recently been planning our next crypto-capabilities  
design for the TahoeLAFS secure distributed filesystem.  This  
involves deciding whether a 192-bit elliptic curve public key is  
strong enough, as well as subtler and more unusual issues involving  
embedding keys directly into filehandles or URLS, multiple-targets  
attacks, and a novel public key scheme that I invented (and that  
therefore I think we shouldn't use):


http://allmydata.org/pipermail/tahoe-dev/2009-July/002314.html

Your comments would be appreciated!  I've added  
ta...@hyperelliptic.org and jam...@echeque.com to the list of  
addresses that can post to tahoe-dev without being subscribed.


Regards,

Zooko



why hyperelliptic curves?

2009-07-19 Thread Zooko Wilcox-O'Hearn
Oh, and by the way the way that TahoeLAFS uses public key  
cryptography highlights some of the weaknesses of current public key  
techniques and some of the strengths of possible future techniques  
such as hyperelliptic curves.  (I know that Tanja Lange has done a  
lot of work on those.)


TahoeLAFS generates a unique public-private key pair for each mutable  
file and each directory.  (Immutable files don't use public key  
cryptography at all -- they are secured solely with a stream cipher  
and secure hashes.)  The file handle or capability to a mutable  
file or directory contains the actual public key (if it is a read- 
only capability) or the actual private key (if it is a read-write  
capability).  Therefore some of our most important measures of  
performance are public key size and keypair generation time.   
Unfortunately, we blundered into using one of the worst possible  
public key signature algorithms for such requirements: RSA!  Our  
current project is replacing RSA with ECDSA.  TahoeLAFS v2.0 will  
support ECDSA-based capabilities (in addition to RSA-based ones for  
backward compatibility).


TahoeLAFS also requires more than two levels of privilege.  With  
traditional public/private keys there are exactly two levels: you  
either know the private key or you don't.  We need to have an  
intermediate level of privilege -- someone who doesn't know the  
private key but who does know something that not everyone knows.   
(Everyone knows the public key.)  We use these three levels of  
privilege to create read-write capabilities, read-only capabilities  
and verify capabilities.  (A verify capability gives the ability to  
check the integrity of the ciphertext; everyone has that ability,  
because everyone knows the public key.)  If this doesn't make sense to you  
then see if my longer explanation in lafs.pdf makes any more sense.
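A toy sketch of the idea (this is NOT Tahoe-LAFS's actual derivation; the labels and inputs are made up for illustration): each weaker capability is derived from the stronger one by a one-way hash, so privilege flows only downward.

```python
import hashlib

def derive_caps(private_key_bytes: bytes):
    """Toy illustration of three privilege levels via one-way hashing:
    each weaker capability is computable from the stronger one, but not
    vice versa (assuming SHA-256 is one-way)."""
    write_cap = private_key_bytes                                # full privilege
    read_cap = hashlib.sha256(b"read:" + write_cap).digest()     # intermediate
    verify_cap = hashlib.sha256(b"verify:" + read_cap).digest()  # public
    return write_cap, read_cap, verify_cap

w, r, v = derive_caps(b"example private key material")
# Holding w lets you compute r and v; holding only r still yields v,
# but recovering w from r would require inverting SHA-256.
assert hashlib.sha256(b"read:" + w).digest() == r
assert hashlib.sha256(b"verify:" + r).digest() == v
```

The real scheme has to tie these values to the public/private key pair, which is what makes it harder than this sketch suggests.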


Anyway, if it is true that hyperelliptic curves have a security level  
commensurate with the number of bits in the public key, then  
hyperelliptic curves with semi-private keys would be the ideal public  
key crypto signature scheme for TahoeLAFS.  Unfortunately, semi- 
private keys aren't proven secure nor properly peer-reviewed, and  
hyperelliptic curves aren't well implemented or widely appreciated.   
Hopefully someday TahoeLAFS v3.0 will support semi-private- 
hyperelliptic-curve-based capabilities (in addition to RSA and ECDSA  
for backward compatibility).


Regards,

Zooko Wilcox-O'Hearn

P.S.  Oh, I told a lie in the interests of brevity when I said that  
file handles contain actual public keys or actual private keys.  RSA  
keys are way too big for that.  So instead we go through interesting  
contortions to make a "surrogate value" which can be used to check  
the correctness of the RSA key (i.e. the surrogate value is derived  
from the RSA key by secure hashing) as well as can be used to control  
access to the RSA key (the RSA key is encrypted with a stream cipher  
using the surrogate value as the symmetric encryption key).  The  
surrogate value therefore offers the same integrity and access  
control properties as the RSA key itself (when the user also has  
access to the encrypted RSA key itself), but it is sufficiently short  
to embed directly into the file handles a.k.a. capabilities.  This  
too is explained more fully in lafs.pdf.
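A rough sketch of the surrogate-value idea, with a SHA-256 counter-mode keystream standing in for the real stream cipher (the derivation details here are made up for illustration; see lafs.pdf for the actual construction):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # SHA-256 in counter mode as a stand-in stream cipher (illustration only)
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def make_surrogate(rsa_key_bytes: bytes):
    # The surrogate is a hash of the key, so it can verify the key's
    # correctness; it also serves as the symmetric key that encrypts it.
    surrogate = hashlib.sha256(rsa_key_bytes).digest()
    mask = keystream(surrogate, len(rsa_key_bytes))
    encrypted_key = bytes(a ^ b for a, b in zip(rsa_key_bytes, mask))
    return surrogate, encrypted_key

def recover_key(surrogate: bytes, encrypted_key: bytes) -> bytes:
    mask = keystream(surrogate, len(encrypted_key))
    key = bytes(a ^ b for a, b in zip(encrypted_key, mask))
    assert hashlib.sha256(key).digest() == surrogate  # integrity check
    return key
```

So whoever holds the short surrogate (plus access to the encrypted key blob) gets exactly the integrity and access-control properties of the full RSA key.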


[1] http://allmydata.org/~zooko/lafs.pdf
[2] http://allmydata.org/trac/tahoe/ticket/217#comment:50



Re: 112-bit prime ECDLP solved

2009-07-20 Thread Zooko Wilcox-O'Hearn

On Sunday, 2009-07-19, at 13:24, Paul Hoffman wrote:


At 7:54 AM -0600 7/18/09, Zooko Wilcox-O'Hearn wrote:
This involves deciding whether a 192-bit elliptic curve public key  
is strong enough...


Why not just go with 256-bit EC (128-bit symmetric strength)? Is  
the 8 bytes per signature the issue, or the extra compute time?


Those are two good guesses, but no.  The main concern is the size of  
the public key.  This is why (if I understand correctly),  
hyperelliptic curves might eventually offer public key signatures  
which are twice as good for the purposes of TahoeLAFS as elliptic  
curves.  (By which I mean, the keys are half as big.)  I discussed  
this topic a bit in a subsequent message to the cryptography mailing  
list entitled "why hyperelliptic curves?".


Actually, the computation time matters, too.  Our measurements on an  
ARM 266 MHz embedded system showed a significant penalty for 256-bit  
ECDSA vs. 192-bit:


http://allmydata.org/pipermail/tahoe-dev/2009-June/002083.html
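For concreteness, the back-of-envelope encoding sizes (compressed points and raw (r, s) pairs assumed, ignoring ASN.1 and other encoding overhead):

```python
# Rough ECDSA encoding sizes for a b-bit curve: a compressed public key is
# one field element plus a sign byte; a signature is the pair (r, s) of
# two field elements.
def ecdsa_sizes(curve_bits: int):
    field_bytes = curve_bits // 8
    pubkey = field_bytes + 1        # compressed point
    signature = 2 * field_bytes     # (r, s)
    return pubkey, signature

for bits in (192, 256):
    pub, sig = ecdsa_sizes(bits)
    print(f"{bits}-bit curve: {pub}-byte public key, {sig}-byte signature")
```

So moving from 192 to 256 bits costs 8 bytes of public key and 16 bytes of signature; the public key size is what dominates our capability lengths.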

Regards,

Zooko



cleversafe says: 3 Reasons Why Encryption is Overrated

2009-07-24 Thread Zooko Wilcox-O'Hearn

[cross-posted to tahoe-...@allmydata.org and cryptogra...@metzdowd.com]

Disclosure:  Cleversafe is to some degree a competitor of my Tahoe- 
LAFS project.  On the other hand, I tend to feel positive towards  
them because they open-source much of their work.  Our Related  
Projects page has included a link to cleversafe for years now, I  
briefly collaborated with some of them on a paper about erasure  
coding last year, and I even spoke briefly with them about the idea  
of becoming an employee of their company this year.  I am tempted to  
ignore this idea that they are pushing about encryption being  
overrated, because they are wrong and it is embarrassing.  But I've  
decided not to ignore it, because people who publicly spread this  
kind of misinformation need to be publicly contradicted, lest they  
confuse others.


Cleversafe has posted a series of blog entries entitled "3 Reasons  
Why Encryption is Overrated".


http://dev.cleversafe.org/weblog/?p=63 # 3 Reasons Why Encryption is  
Overrated
http://dev.cleversafe.org/weblog/?p=95 # Response Part 1: Future  
Processing Power
http://dev.cleversafe.org/weblog/?p=111 # Response Part 2:  
Complexities of Key Management
http://dev.cleversafe.org/weblog/?p=178 # Response Part 3: Disclosure  
Laws


It begins like this:


When it comes to storage and security, discussions traditionally  
center on encryption.  The reason encryption – or the use of a  
complex algorithm to encode information – is accepted as a best  
practice rests on the premise that while it’s possible to crack  
encrypted information, most malicious hackers don’t have access to  
the amount of computer processing power they would need to decrypt  
information.


But not so fast.  Let’s take a look at three reasons why encryption  
is overrated.



Ugh.

The first claim -- that today's encryption is vulnerable to tomorrow's  
processing power -- is a common goof, which is easy to make by  
conflating historical failures of cryptosystems due to having too  
small a crypto value with failures due to weak algorithms.   
Examples of the former are DES, which failed because its 56-bit key  
was small enough to fall to brute force, and the bizarre 40-bit  
security policies of the U.S. Federal Government in the '90s.  An  
example of the latter is SHA1, whose hash output size is *not* small  
enough to brute-force, but which is insecure because, as it turns  
out, the SHA1 algorithm allows the generation of colliding inputs  
much more quickly than a brute-force search would.
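The distinction is easy to make concrete with brute-force arithmetic (the trial rate below is a purely hypothetical assumption, not a measurement):

```python
# Brute-force cost arithmetic separating "crypto value too small" (DES)
# from "algorithm broken" (SHA1).  The rate is an illustrative assumption.
RATE = 10**12                      # hypothetical: a trillion key trials/sec
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_search(bits: int, rate: int = RATE) -> float:
    """Years to exhaust a `bits`-bit keyspace at `rate` trials per second."""
    return 2**bits / rate / SECONDS_PER_YEAR

print(f"56-bit keyspace:  {years_to_search(56):.2e} years")   # less than a day
print(f"128-bit keyspace: {years_to_search(128):.2e} years")  # astronomical
```

No foreseeable growth in processing power closes that gap; the failures that matter for modern primitives come from cryptanalytic shortcuts, not faster search.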


Oh boy, I see that in the discussion following the article "Future  
Processing Power", the author writes:



I don’t think symmetric ciphers such as AES-256 are under any threat  
of being at risk to brute force attacks any time this century.



What?  Then why is he spreading this Fear, Uncertainty, and Doubt?   
Oh and then it gets *really* interesting: it turns out that  
cleversafe uses AES-256 in an All-or-Nothing Transform as part of  
their Information Dispersal algorithm.  Okay, I would like to  
understand better the cryptographic effects of that (and in  
particular, whether this means that the cleversafe architecture is  
just as susceptible to AES-256 failing as an encryption scheme such  
as is used in the Tahoe-LAFS architecture).


But, it is time for me to stop reading about cryptography and get  
ready to go to work.  :-)


Regards

Zooko
---
Tahoe, the Least-Authority Filesystem -- http://allmydata.org
store your data: $10/month -- http://allmydata.com/?tracking=zsig
I am available for work -- http://zooko.com/résumé.html


Re: cleversafe says: 3 Reasons Why Encryption is Overrated

2009-07-31 Thread Zooko Wilcox-O'Hearn

Folks:

Over on the Tahoe-LAFS mailing list Brian Warner gave a typically  
thoughtful, thorough, and precise analysis of cleversafe access  
control as contrasted with Tahoe-LAFS access control.  Brian  
attempted to cross-post it to this list, but it bounced since he is  
not a subscriber.


http://allmydata.org/pipermail/tahoe-dev/2009-July/002482.html

Jason Resch of cleversafe has also been participating in the  
discussion on that list.


Regards,

Zooko



Re: Fast MAC algorithms?

2009-08-02 Thread Zooko Wilcox-O'Hearn
I recommend Poly1305 by DJB or VMAC by Ted Krovetz and Wei Dai.  Both  
are much faster than HMAC and have security proven in terms of an  
underlying block cipher.


VMAC is implemented in the nice Crypto++ library by Wei Dai, Poly1305  
is implemented by DJB and is also in the new nacl library by DJB.


http://cryptopp.com/benchmarks-amd64.html

Says that VMAC(AES)-64 takes 0.6 cycles per byte (although watch out  
for that 3971 cycles to set up key and IV), compared to HMAC-SHA1  
taking 11.2 cycles per byte (after 1218 cycles to set up key and IV).
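If you want a quick local baseline, something like the following measures HMAC-SHA1 throughput on your own machine (Poly1305 and VMAC aren't in the Python standard library, so this sketch covers only the HMAC side for comparison against the Crypto++ numbers):

```python
import hashlib
import hmac
import time

def hmac_sha1_throughput(msg_len: int = 2**20, iters: int = 20) -> float:
    """Crude MB/s measurement of HMAC-SHA1 over `iters` messages of
    `msg_len` bytes.  Only a rough baseline: a real comparison would
    benchmark Poly1305 and VMAC on the same hardware."""
    key = b"k" * 20
    msg = b"\x00" * msg_len
    start = time.perf_counter()
    for _ in range(iters):
        hmac.new(key, msg, hashlib.sha1).digest()
    elapsed = time.perf_counter() - start
    return iters * msg_len / elapsed / 1e6

print(f"HMAC-SHA1: ~{hmac_sha1_throughput():.0f} MB/s")
```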


If you do any measurement comparing Poly1305 to VMAC, please report  
your measurement, at least to me privately if not to the list.  I can  
use that sort of feedback to contribute improvements to the Crypto++  
library.  Thanks!


Regards,

Zooko Wilcox-O'Hearn
---
Tahoe, the Least-Authority Filesystem -- http://allmydata.org
store your data: $10/month -- http://allmydata.com/?tracking=zsig
I am available for work -- http://zooko.com/résumé.html



ANNOUNCING Tahoe, the Lofty-Atmospheric Filesystem, v1.5

2009-08-02 Thread Zooko Wilcox-O'Hearn

Dear people of Perry's cryptography mailing list:

Please check out the new release of Tahoe-LAFS.  We claim that it is  
the first cloud storage technology which offers real security.  If  
you can find a weakness in the cryptographic structure (or any  
security hole whatsoever), then you will be added to the Hall Of Fame  
at http://hacktahoe.org .  :-)


Regards,

Zooko

---
The Tahoe-LAFS team is pleased to announce the immediate availability of
version 1.5 of Tahoe, the Lofty Atmospheric File System.

Tahoe-LAFS is the first cloud storage technology which offers security
and privacy in the sense that the cloud storage service provider itself
can't read or alter your data. Here is the one-page explanation of
its unique security and fault-tolerance properties:

http://allmydata.org/source/tahoe/trunk/docs/about.html

This release is the successor to v1.4.1, which was released April 13,
2009 [1]. This is a major new release, improving the user interface and
performance and fixing a few bugs, and adding ports to OpenBSD, NetBSD,
ArchLinux, NixOS, and embedded systems built on ARM CPUs. See the NEWS
file [2] for more information.

In addition to the functionality of Tahoe-LAFS itself, a crop of related
projects have sprung up to extend it and to integrate it into operating
systems and applications.  These include frontends for Windows,
Macintosh, JavaScript, and iPhone, and plugins for duplicity, bzr,
Hadoop, and TiddlyWiki, and more. See the Related Projects page on the
wiki [3].


COMPATIBILITY

Version 1.5 is fully compatible with the version 1 series of
Tahoe-LAFS. Files written by v1.5 clients can be read by clients of all
versions back to v1.0. v1.5 clients can read files produced by clients
of all versions since v1.0.  v1.5 servers can serve clients of all
versions back to v1.0 and v1.5 clients can use servers of all versions
back to v1.0.

This is the sixth release in the version 1 series. The version 1 series
of Tahoe-LAFS will be actively supported and maintained for the
foreseeable future, and future versions of Tahoe-LAFS will retain the
ability to read and write files compatible with Tahoe-LAFS v1.

The version 1 series of Tahoe-LAFS is the basis of the consumer backup
product from Allmydata, Inc. -- http://allmydata.com .


WHAT IS IT GOOD FOR?

With Tahoe-LAFS, you can distribute your filesystem across a set of
servers, such that if some of them fail or even turn out to be
malicious, the entire filesystem continues to be available. You can
share your files with other users, using a simple and flexible access
control scheme.

We believe that the combination of erasure coding, strong encryption,
Free/Open Source Software and careful engineering make Tahoe-LAFS safer
than RAID, removable drive, tape, on-line backup or other Cloud storage
systems.

This software comes with extensive tests, and there are no known
security flaws which would compromise confidentiality or data integrity
in typical use.  (For all currently known issues please see the
known_issues.txt file [4].)


LICENCE

You may use this package under the GNU General Public License, version 2
or, at your option, any later version.  See the file COPYING.GPL [5]
for the terms of the GNU General Public License, version 2.

You may use this package under the Transitive Grace Period Public
Licence, version 1 or, at your option, any later version.  (The
Transitive Grace Period Public Licence has requirements similar to the
GPL except that it allows you to wait for up to twelve months after you
redistribute a derived work before releasing the source code of your
derived work.) See the file COPYING.TGPPL.html [6] for the terms of
the Transitive Grace Period Public Licence, version 1.

(You may choose to use this package under the terms of either licence,
at your option.)


INSTALLATION

Tahoe-LAFS works on Linux, Mac OS X, Windows, Cygwin, Solaris, *BSD, and
probably most other systems.  Start with docs/install.html [7].


HACKING AND COMMUNITY

Please join us on the mailing list [8].  Patches are gratefully accepted
-- the RoadMap page [9] shows the next improvements that we plan to make
and CREDITS [10] lists the names of people who've contributed to the
project.  The Dev page [11] contains resources for hackers.


SPONSORSHIP

Tahoe-LAFS was originally developed thanks to the sponsorship of
Allmydata, Inc. [12], a provider of commercial backup services.
Allmydata, Inc. created the Tahoe-LAFS project and contributed hardware,
software, ideas, bug reports, suggestions, demands, and money (employing
several Tahoe-LAFS hackers and instructing them to spend part of their
work time on this Free Software project).  Also they awarded customized
t-shirts to hackers who found security flaws in Tahoe-LAFS (see
http://hacktahoe.org ). After discontinuing funding of Tahoe-LAFS R&D in
early 2009, Allmydata, Inc. has continued to provide servers, co-lo
space and bandwidth to the open source project. Thank you to Allmydata,
Inc. for their generous support.

Re: cleversafe says: 3 Reasons Why Encryption is Overrated

2009-08-05 Thread Zooko Wilcox-O'Hearn

[cross-posted to tahoe-...@allmydata.org and cryptogra...@metzdowd.com]

Folks:

It doesn't look like I'm going to get time to write a long post about  
this bundle of issues, comparing Cleversafe with Tahoe-LAFS (both use  
erasure coding and encryption, and the encryption and key-management  
part differs), and arguing against the ill-advised Fear, Uncertainty,  
and Doubt that the Cleversafe folks have posted.  So, I'm going to  
try to throw out a few short pieces which hopefully each make sense.


First, the most important issue in all of this is the one that my  
programming partner Brian Warner already thoroughly addressed in [1]  
(see also the reply by Jason Resch [2]).  That is the issue of access  
control, which is intertwined with the issues of key management.  The  
other issues are cryptographic details which are important to get  
right, but the access control and key management issues are the ones  
that directly impact every user and that make or break the security  
and usefulness of the system.


Second, the Cleversafe documents seem to indicate that the security  
of their system does not rely on encryption, but it does.  The data  
in Cleversafe is encrypted with AES-256 before being erasure-coded  
and each share stored on a different server (exactly the same as in  
Tahoe-LAFS).  If AES-256 is crackable, then a storage server can  
learn information about the file (exactly as in Tahoe-LAFS).  The  
difference is that Cleversafe also stores the decryption key on the  
storage servers, encoded in such a way that  any K of the storage  
servers must cooperate to recover it.  In contrast, Tahoe-LAFS  
manages the decryption key separately.  This added step of including  
a secret-shared copy of the decryption key on the storage servers  
does not make the data less vulnerable to weaknesses in AES-256, as  
their documents claim.  (If anything, it makes it more vulnerable,  
but probably it has no effect and it is just as vulnerable to  
weaknesses in AES-256 as Tahoe-LAFS is.)


Third, I don't understand why Cleversafe documents claim that public  
key cryptosystems whose security is based on math are more likely  
to fall to future advances in cryptanalysis.  I think most  
cryptographers have the opposite belief -- that encryption based on  
bit-twiddling such as block ciphers or stream ciphers is much more  
likely to fall to future cryptanalysis.  Certainly the history of  
modern cryptography seems to fit with this -- of the original crop of  
public key cryptosystems founded on a math problem, some are still  
regarded as secure today (RSA, DH, McEliece), but there has been a  
long succession of symmetric crypto primitives based on bit twiddling  
which have then turned out to be insecure.  (Including, ominously  
enough, AES-256, which was regarded as a gold standard until a few  
months ago.)


Fourth, it seems like the same access control/key management model  
that Cleversafe currently offers could be achieved by encrypting the  
data with a random AES key and then using secret sharing to split the  
key and store one share of the key with each server.  I *think* that  
this would have the same cryptographic properties as the current  
Cleversafe approach of using an All-Or-Nothing-Transform followed by  
erasure coding.  Both would qualify as "computational secret sharing"  
schemes as opposed to "information-theoretic secret sharing"  
schemes.  I would be curious if there are any significant differences  
between these two constructions.
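A minimal sketch of the key-splitting half of that construction, using Shamir's k-of-n scheme over a small prime field (toy parameters, not Cleversafe's actual encoding; the bulk data would separately be encrypted under the shared secret):

```python
import secrets

P = 2**127 - 1   # a Mersenne prime; the field for this toy Shamir scheme

def shamir_split(secret: int, k: int, n: int):
    """Split `secret` into n shares, any k of which recover it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def shamir_recover(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

# A random 127-bit secret standing in for the data-encryption key: only
# the key is secret-shared across the storage servers.
key = secrets.randbelow(P)
shares = shamir_split(key, k=3, n=5)
assert shamir_recover(shares[:3]) == key   # any 3 of the 5 shares suffice
assert shamir_recover(shares[2:]) == key
```

The key-splitting step here is information-theoretically secure, but the combined scheme is only computationally secure, because the data itself is protected by the cipher.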


I don't think there is any basis to the claims that Cleversafe makes  
that their erasure-coding (Information Dispersal)-based system is  
fundamentally safer, e.g. these claims from [3]: "a malicious party  
cannot recreate data from a slice, or two, or three, no matter what  
the advances in processing power. ... Maybe encryption alone is  
'good enough' in some cases now - but Dispersal is 'good always' and  
represents the future."


Fifth, as I've already mentioned, the emphasis on cryptography being  
defeated due to advances in processing power e.g. reference to  
Moore's Law is confused.  Advances in processing power would not be  
sufficient to crack modern cryptosystems and in many cases would not  
be necessary either.


Okay I think that's it.  I hope these notes are not so terse as to be  
confusing or inflammatory.


Regards,

Zooko Wilcox-O'Hearn

[1] http://allmydata.org/pipermail/tahoe-dev/2009-July/002482.html
[2] http://allmydata.org/pipermail/tahoe-dev/2009-August/002514.html
[3] http://dev.cleversafe.org/weblog/?p=63



Re: cleversafe says: 3 Reasons Why Encryption is Overrated

2009-08-09 Thread Zooko Wilcox-O'Hearn

[dropping tahoe-dev from Cc:]

On Thursday, 2009-08-06, at 2:52, Ben Laurie wrote:


Zooko Wilcox-O'Hearn wrote:
I don't think there is any basis to the claims that Cleversafe  
makes that their erasure-coding (Information Dispersal)-based  
system is fundamentally safer

...
Surely this is fundamental to threshold secret sharing - until you  
reach the threshold, you have not reduced the cost of an attack?


I'm sorry, I don't understand your sentence.  Cleversafe isn't using  
threshold secret sharing -- it is using All-Or-Nothing-Transform  
(built out of AES-256) followed by Reed-Solomon erasure-coding.  The  
resulting combination is a computationally-secure (not information- 
theoretically-secure) secret-sharing scheme.  The Cleversafe  
documentation doesn't use these terms and is not precise about this,  
but it seems to claim that their scheme has security that is somehow  
better than the mere computational security that encryption typically  
offers.


Oh wait, now I understand your sentence.  "You" in your sentence is  
the attacker.  Yes, an information-theoretically-secure secret- 
sharing scheme does have that property.  Cleversafe's scheme hasn't.


Regards,

Zooko



Re: cleversafe says: 3 Reasons Why Encryption is Overrated

2009-08-09 Thread Zooko Wilcox-O'Hearn

[dropped tahoe-dev from Cc:]

On Thursday, 2009-08-06, at 17:08, james hughes wrote:

Until you reach the threshold, you do not have the information to  
attack. It becomes information theoretic secure.


This is true for information-theoretically secure secret sharing, but  
not true for Cleversafe's technique of composing an All-Or-Nothing- 
Transform with Reed-Solomon erasure coding.


CleverSafe can not provide any security guarantees unless these  
questions can be answered. Without answers, CleverSafe is neither  
Clever nor Safe.


Hey, let's be nice.  Cleversafe has implemented a storage system  
which integrates encryption in the attempt to make it safer.  They  
GPL at least some of their work [*], and they publish their ideas and  
engage in discussion about them.  These are all good things.  My  
remaining disagreements with them are like this:


1.  (The important one.)  I don't think the access control policy of  
"whoever can access at least K of the N volumes of data" is the  
access control policy that I want.  For one thing, it immediately  
leads to the questions that James Hughes was asking, about who is  
authorized to access what servers.  For another thing, I would really  
like my access control policy to be fine-grained, flexible, and  
dynamic.  So for example, I'd like to be able to give you access to  
three of my files but not all my other files, and I'd like you to  
then be able to give your friend access to two of those files but not  
the third.  See Brian Warner's and Jason Resch's discussion of these  
issues: [1, 2].


2.  Cleversafe seems to think that their scheme gives better-than- 
computational security, i.e. that it guarantees security even if  
AES-256 is crackable.  This is wrong, but it is an easy mistake to  
make!  Both Ben Laurie and James Hughes have jumped to the conclusion  
(in this thread) that the Cleversafe K-out-of-N encoding has the same  
information-theoretic security that secret-sharing K-out-of-N  
encoding has.


3.  Cleversafe should really tone down the Fear, Uncertainty, and Doubt  
about today's encryption being mincemeat for tomorrow's  
cryptanalysts.  It might turn out to be true, but if so it will be  
due to cryptanalytic innovations more than due to Moore's Law.  And  
it might not turn out like that -- perhaps AES-256 will remain safe  
for centuries.  Also, Cleversafe's product is not more secure than  
any other product against this threat.


It is hard to explain to non-cryptographers how much they can rely on  
the security of cryptographic schemes.  It's very complicated, and  
most schemes deployed have failed due to flaws in the surrounding  
system, engineering errors or key management (i.e. access control)  
problems.  Nobody knows what cryptanalytic techniques will be  
invented in the future.  My opinion is that relying on well- 
engineered strong encryption to protect your data is at least as safe  
as alternatives such as keeping the data on your home computer or on  
your corporate server.  The Cleversafe FUD doesn't help people  
understand the issues better.


Regards,

Zooko

[1] http://allmydata.org/pipermail/tahoe-dev/2009-July/002482.html
[2] http://allmydata.org/pipermail/tahoe-dev/2009-August/002514.html

[*] Somebody stated on a mailing list somewhere that Cleversafe has  
applied for patents.  Therefore, if you want to use their work under  
the terms of the GPL, you should also be aware that if their patents  
are granted then some of what you do may be subject to the patents.   
Of course, this is always true of any software (the techniques might  
be patented), but I thought it was worth mentioning since in this  
case the company authoring the software is also the company applying  
for patents.




Re: [tahoe-dev] cleversafe says: 3 Reasons Why Encryption is Overrated

2009-08-11 Thread Zooko Wilcox-O'Hearn
This conversation has bifurcated, since I replied and removed tahoe- 
dev from the Cc: line, sending just to the cryptography list, and  
David-Sarah Hopwood has replied and removed cryptography, leaving  
just the tahoe-dev list.


Here is the root of the thread on the cryptography mailing list archive:

http://www.mail-archive.com/cryptography@metzdowd.com/msg10680.html

Here it is on the tahoe-dev mailing list archive.  Note that  
threading is screwed up in our mailing list archive.  :-(


http://allmydata.org/pipermail/tahoe-dev/2009-August/subject.html#start

Regards,

Zooko



Re: [tahoe-dev] cleversafe says: 3 Reasons Why Encryption is Overrated

2009-08-11 Thread Zooko Wilcox-O'Hearn

On Monday, 2009-08-10, at 13:47, Zooko Wilcox-O'Hearn wrote:


This conversation has bifurcated,


Oh, and while I don't mind if people want to talk about this on the  
tahoe-dev list, it doesn't have that much to do with tahoe-lafs  
anymore, now that we're done comparing Tahoe-LAFS to Cleversafe and  
are just arguing about the cryptographic design of Cleversafe.  ;-)   
So, it seems quite topical for the cryptography list and only  
tangentially topical for the tahoe-dev list.  I've also been enjoying  
the subthread about the physical limits of computation that have  
spawned off on the cryptography mailing list.  Ooh, were you guys  
considering only classical computers and not quantum computers when  
you estimated that either 2^128, 2^200 or 2^400 was the physical  
limit of possible computation?  :-)


Regards,

Zooko



strong claims about encryption safety Re: [tahoe-dev] cleversafe says: 3 Reasons Why Encryption is Overrated

2009-08-12 Thread Zooko Wilcox-O'Hearn
[removing Cc: tahoe-dev as this subthread is not about Tahoe-LAFS.   
Of course, the subscribers to tahoe-dev would probably be interested  
in this subthread, but that just goes to show that they ought to  
subscribe to cryptogra...@metzdowd.com.]


On Monday, 2009-08-10, at 11:56, Jason Resch wrote:

I don't think there is any basis to the claims that Cleversafe  
makes that their erasure-coding (Information Dispersal)-based  
system is fundamentally safer, e.g. these claims from [3]: a  
malicious party cannot recreate data from a slice, or two, or  
three, no matter what the advances in processing power. ...  
Maybe encryption alone is 'good enough' in some cases now  - but  
Dispersal is 'good always' and represents the future.


It is fundamentally safer in that even if the transformation key  
were brute forced, the attacker only gains data from the slice,  
which in general will have 1/threshold the data.


Okay, so the Cleversafe method of erasure-coding ciphertext and  
storing the slices on different servers is safer in exactly the  
same way that encrypting and then giving an attacker only a part of  
the ciphertext is safer.  That is: having less ciphertext might  
hinder cryptanalysis a little, and also even if the attacker totally  
wins and is able to decrypt the ciphertext, at least he'll only get  
part of the plaintext that way.  On the other hand I might consider  
it scant comfort if I were told that the good news is that the  
attacker was able to read only the first 1/3 of each of your  
files.  :-)


But the Cleversafe method of appending the masked key to the last  
slice makes it less safe, because having the masked key might help a  
cryptanalyst quite a lot.
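For readers unfamiliar with AONTs, here is a toy package transform in the spirit of Rivest's (a SHA-256 keystream stands in for AES-CTR; this is not Cleversafe's actual construction). Note how the masked key travels with the ciphertext, which is exactly the design choice at issue here:

```python
import hashlib
import secrets

def _stream(key: bytes, n: int) -> bytes:
    # SHA-256 in counter mode as a stand-in stream cipher (illustration only)
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def aont_package(message: bytes) -> bytes:
    """Toy all-or-nothing transform: mask the message under a fresh random
    key, then append that key XORed with a hash of the masked message.
    Every output byte is needed before anything can be unmasked."""
    key = secrets.token_bytes(32)
    masked = bytes(a ^ b for a, b in zip(message, _stream(key, len(message))))
    digest = hashlib.sha256(masked).digest()
    masked_key = bytes(a ^ b for a, b in zip(key, digest))
    return masked + masked_key

def aont_unpackage(package: bytes) -> bytes:
    masked, masked_key = package[:-32], package[-32:]
    digest = hashlib.sha256(masked).digest()
    key = bytes(a ^ b for a, b in zip(masked_key, digest))
    return bytes(a ^ b for a, b in zip(masked, _stream(key, len(masked))))
```

The all-or-nothing property only holds against an honest reassembly, though: the security of the masking still rests entirely on the underlying cipher, which is the point being argued above.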


In any case, the claims that are made on the Cleversafe web site are  
wrong and misleading: "a malicious party cannot recreate data from a  
slice, or two, or three, no matter what the advances in processing  
power" [1].  It is easy for customers to believe this claim, because  
an honest party who is following the normal protocol is limited in  
this way and because information-theoretically-secure secret-sharing  
schemes have this property.  I kind of suspect that the Cleversafe  
folks got confused at some point by the similarities between their  
AONT+erasure-coding scheme and a secret-sharing scheme.


In any case, the statement quoted above is not true, and not only  
that isolated statement, but also the entire thrust of the  
"encryption isn't safe but Cleversafe's algorithm is safer" argument  
[2].  Just to pick out another of the numerous examples of misleading  
and unjustified claims along these lines, here is another: "Given  
that the level of security provided by the AONT can be set  
arbitrarily high (there is no limit to the length of key it uses for  
the transformation), information theoretic security is not necessary  
as one can simply use a key so long that it could not be cracked  
before the stars burn out." [3].


On the other hand Cleversafe's arguments about key management being  
hard and about there being a trade-off between confidentiality and  
availability are spot on: [3].  Although I don't think that their  
strategy for addressing the key management issues is the best  
strategy, at least their description of the problem are correct.   
Also, if you ignore the ill-justified claims about security on that  
page, their explanation of the benefits of their approach is  
correct.  (Sorry if this comes off as smug -- I'm trying to be fair.)


(I'm not even going to address their third point [4] -- at least not  
until we take this conversation to the law mailing list! :-))


Okay, I think I've made my opinion about these issues fairly clear  
now, so I'll try to refrain from following-up to this subthread --  
the "strong claims about encryption safety" subthread -- unless there  
are some interesting new technical details that I haven't thought  
of.  By the way, when googling in the attempt to learn more  
information about the Cleversafe algorithm, I happened to see that  
Cleversafe is mentioned in this paper by Bellare and Rogaway: "Robust  
Computational Secret Sharing and a Unified Account of Classical  
Secret-Sharing Goals" [5].  I haven't read that paper yet, but given  
the authors I would assume it is an excellent starting point for a  
modern study of the cryptographic issues.  :-)


I still do intend to follow-up on the subthread which I call "So how  
do *you* do key management, then?", which I consider to be the most  
important issue for practical security of systems like these.


Regards,

Zooko, writing e-mail on his lunch break

[1] http://dev.cleversafe.org/weblog/?p=63
[2] http://dev.cleversafe.org/weblog/?p=95
[3] http://dev.cleversafe.org/weblog/?p=111
[4] http://dev.cleversafe.org/weblog/?p=178
[5] http://www.cs.ucdavis.edu/~rogaway/papers/rcss.html

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com

Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git

2009-08-19 Thread Zooko Wilcox-O'Hearn
Okay, in today's installment I'll reply to my friend Kris Nuttycombe,  
who read yesterday's installment and then asked how the storage  
service provider could provide access to the files without being able  
to see their file handles (and thus decrypt them).


I replied that the handle could be stored in another file on the  
server, and therefore encrypted so that the server couldn't see it.   
You could imagine taking a bunch of these handles -- capabilities to  
read an immutable file -- putting them into a new file, and  
uploading it to the Tahoe-LAFS grid.  Uploading it would encrypt it and  
give you a capability to that new file.  The storage service provider  
wouldn't be able to read the contents of that file, so it wouldn't be  
able to read the files that it references.  This forms a  
Cryptographic Hash Function Directed Acyclic Graph structure, which  
should be familiar to many readers as the underlying structure in git  
[*].  Git uses this same technique of combining identification and  
integrity-checking into one handle.
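The hash-DAG idea can be sketched in a few lines of Python.  This is a toy: the cap format, the in-memory store, and the function names are invented for illustration, and encryption is omitted (real Tahoe-LAFS caps also carry a decryption key).

```python
import hashlib

store = {}  # stand-in for the storage servers

def cap_for(content: bytes) -> str:
    # A toy "capability": the hex SHA-256 of the content serves as
    # both identifier and integrity check.
    return hashlib.sha256(content).hexdigest()

def upload(content: bytes) -> str:
    cap = cap_for(content)
    store[cap] = content
    return cap

def fetch_and_verify(cap: str) -> bytes:
    content = store[cap]
    assert cap_for(content) == cap, "integrity check failed"
    return content

# Leaf files.
leaf1 = upload(b"hello")
leaf2 = upload(b"world")

# A "directory" is just a file whose contents are the caps of its
# children; uploading it yields one cap to the whole subtree,
# exactly as with git's tree objects.
directory = "\n".join([leaf1, leaf2]).encode()
root_cap = upload(directory)

children = fetch_and_verify(root_cap).decode().split("\n")
assert fetch_and_verify(children[0]) == b"hello"
```

Any change to a leaf changes its cap, which changes the directory's contents, which changes the root cap: integrity of the whole tree hangs off a single handle.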


From this perspective, Tahoe-LAFS can be seen as being like git,  
except that it also uses the handle for encryption in addition to  
integrity-checking and identification.


(There are many other differences.  For starters, git has a  
high-performance compression scheme and a decentralized revision  
control tool built on top.  Tahoe-LAFS has erasure coding and a  
distributed key-value store for a backend.)


Okay, the bus is arriving at work.

Oh, so then Kris asked "But what about the root of that tree?".  The  
answer is that the capability to the root of that tree is not stored  
on the servers.  It is held by the client, and never transmitted to  
the storage servers.  It turns out that storage servers don't need  
the capability to a file in order to serve up its ciphertext.   
(Technically, they *do* need an identifier, and ideally they would  
also have the integrity-checking part so that they could perform  
integrity checks on the file contents, in addition to clients  
performing that check for themselves.  So the capability  
gets split into its component parts during the download protocol:  
the client sends the identification and integrity-checking bits  
to the server, but not the decryption key, and receives the ciphertext  
in reply.)
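That split can be sketched as follows.  The toy XOR "encryption", field names, and storage-index derivation are invented for illustration; real Tahoe-LAFS uses erasure-coded shares and its own cap formats.  The point is only which parts of the cap cross the wire.

```python
import hashlib, secrets

def make_cap(plaintext: bytes, key: bytes):
    # Toy stream cipher: XOR with a keystream hashed from the key.
    stream = hashlib.sha256(key).digest() * (len(plaintext) // 32 + 1)
    ciphertext = bytes(a ^ b for a, b in zip(plaintext, stream))
    verify_hash = hashlib.sha256(ciphertext).hexdigest()  # integrity part
    storage_index = verify_hash[:16]                      # identification part
    return {"storage_index": storage_index, "verify": verify_hash, "key": key}, ciphertext

server_shelf = {}

def server_serve(storage_index, verify_hash):
    ciphertext = server_shelf[storage_index]
    # The server can integrity-check the ciphertext without any key.
    assert hashlib.sha256(ciphertext).hexdigest() == verify_hash
    return ciphertext

def client_download(cap):
    # Only identification and integrity bits go over the wire;
    # the decryption key never leaves the client.
    ciphertext = server_serve(cap["storage_index"], cap["verify"])
    assert hashlib.sha256(ciphertext).hexdigest() == cap["verify"]
    stream = hashlib.sha256(cap["key"]).digest() * (len(ciphertext) // 32 + 1)
    return bytes(a ^ b for a, b in zip(ciphertext, stream))

cap, ct = make_cap(b"secret data", secrets.token_bytes(32))
server_shelf[cap["storage_index"]] = ct
assert client_download(cap) == b"secret data"
```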


Therefore the next layer up, whether another program or a human user,  
needs to manage this single capability to the root of the tree.  Here  
the abstraction-piercing problem of availability versus  
confidentiality remains in force, and different programs and  
different human users have different ways to manage their caps.  I  
personally keep mine in the bookmarks of my web browser.  This is  
risky -- they could (probably) be stolen by malicious JavaScript, or I  
might accidentally leak them in an HTTP Referer header.  But it is  
very convenient.  For the files in question I value that convenience  
more than an extra degree of safety.  I know of other people who keep  
their Tahoe-LAFS caps more securely: on Unix filesystems, on  
encrypted USB keys, etc.


Regards,

Zooko

[*] Linus Torvalds got the idea of a Cryptographic Hash Function  
Directed Acyclic Graph structure from an earlier distributed revision  
control tool named Monotone.  He didn't go out of his way to give  
credit to Monotone, and many people mistakenly think that he invented  
the idea.




Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git

2009-08-19 Thread Zooko Wilcox-O'Hearn

On Wednesday,2009-08-19, at 10:05 , Jack Lloyd wrote:


On Wed, Aug 19, 2009 at 09:28:45AM -0600, Zooko Wilcox-O'Hearn wrote:

[*] Linus Torvalds got the idea of a Cryptographic Hash Function  
Directed Acyclic Graph structure from an earlier distributed  
revision control tool named Monotone.


OT trivia: The idea actually predates either Monotone or git;  
OpenCM (http://opencm.org/docs.html) was using a similar technique  
for VCS access control a year or two prior to Monotone's first  
release.


Note that I didn't say Monotone invented it.  :-)  Graydon Hoare of  
Monotone got the idea from a friend of his who, as far as we know,  
came up with it independently.  I personally got it from Eric Hughes  
who came up with it independently.  I think OpenCM got it from the  
Xanadu project who came up with it independently.  :-)


Regards,

Zooko



a crypto puzzle about digital signatures and future compatibility

2009-08-26 Thread Zooko Wilcox-O'Hearn

Folks:

My brother Nathan Wilcox asked me in private mail about protocol  
versioning issues.  (He was inspired by this thread on  
cryptography@metzdowd.com [1, 2, 3]).  After rambling for a while  
about my theories and experiences with such things, I remembered this  
vexing future-compatibility issue that I still don't know how to  
solve:


Here is a puzzle for you (I don't know the answer).

Would it be a reasonable safety measure to deploy a Tahoe-LAFS v1.6,  
which used SHA-2 but didn't know how to use SHA-3 (because it hasn't  
been defined yet), and then later deploy a Tahoe-LAFS v1.7, which did  
know how to use SHA-3, and have v1.7 writers produce new files which  
v1.6 readers can integrity-check (using SHA-2) and also v1.7 readers  
can integrity-check (using SHA-3)?


So far this seems like an obvious win, but then you have to ask: what  
if, after we've deployed v1.7, someone posts a perl script to  
sci.crypt which produces second-pre-images for SHA-2 (but not  
SHA-3)?  Then writers who are using Tahoe-LAFS v1.7 really want to be  
able to *stop* producing files which v1.6 readers will trust based on  
SHA-2, right?  And also, even if that doesn't happen and SHA-2 is  
still believed to be reliable, what if some sneaky v1.7 user  
hacks his v1.7 software to make two different files, sign one of them  
with SHA-2 and the other with SHA-3, and then put both hashes into a  
single immutable file cap and give it to a v1.6 reader, asking him to  
inspect the file and then pass it on to his trusted, v1.7-using  
partner?


Hm...

This at least suggests that the v1.7 readers need to check *all*  
hashes that are offered and raise an alarm if some verify and others  
don't.  Is that good enough?
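The "check every hash, alarm on a partial match" rule can be sketched like this.  The cap format (a dict of algorithm names to digests) and the function names are hypothetical, not Tahoe-LAFS's actual format.

```python
import hashlib

def make_cap(content: bytes, algorithms=("sha256", "sha512")):
    # Hypothetical cap carrying one digest per supported algorithm.
    return {alg: hashlib.new(alg, content).hexdigest() for alg in algorithms}

def verify(content: bytes, cap: dict) -> bool:
    # A v1.7-style reader checks *every* hash it recognizes.  A partial
    # match is evidence of mischief, not of transmission noise, so it
    # raises an alarm rather than quietly accepting or rejecting.
    results = {alg: hashlib.new(alg, content).hexdigest() == digest
               for alg, digest in cap.items()
               if alg in hashlib.algorithms_available}
    if all(results.values()):
        return True
    if any(results.values()):
        raise RuntimeError("partial hash match -- possible agility attack: %r"
                           % results)
    return False

cap = make_cap(b"some file contents")
assert verify(b"some file contents", cap)
```

A sneaky writer who pairs a SHA-2 digest of one file with a SHA-3 digest of another produces exactly the partial-match case that raises the alarm.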


:-/

Regards,

Zooko

[1] http://www.mail-archive.com/cryptography@metzdowd.com/msg10791.html
[2] http://www.mail-archive.com/cryptography@metzdowd.com/msg10807.html
[3] http://www.mail-archive.com/cryptography@metzdowd.com/msg10805.html



Re: [tahoe-dev] a crypto puzzle about digital signatures and future compatibility

2009-09-04 Thread Zooko Wilcox-O'Hearn

On Thursday,2009-08-27, at 19:14 , James A. Donald wrote:


Zooko Wilcox-O'Hearn wrote:
Right, and if we add algorithm agility then this attack is  
possible  even if both SHA-2 and SHA-3 are perfectly secure!
Consider this variation of the scenario: Alice generates a  
filecap  and gives it to Bob.  Bob uses it to fetch a file, reads  
the file and  sends the filecap to Carol along with a note saying  
that he approves  this file.  Carol uses the filecap to fetch the  
file.  The Bob-and- Carol team loses if she gets a different file  
than the one he got.

...
So the leading bits of the capability have to be an algorithm  
identifier.  If Bob's tool does not recognize the algorithm, it  
fails, and he has to upgrade to a tool that recognizes more  
algorithms.


If the protocol allows multiple hash types, then the hash has to  
start with a number that identifies the algorithm.  Yet we want  
that number to consist of very, very few bits.


Jim, I'm not sure you understood the specific problem I meant -- I'm  
concerned (for starters) with the problems that arise if we support  
more than one secure hash algorithm *even* when none of the supported  
secure hash algorithms ever becomes crackable!


So much so that I currently intend to avoid having a notion of  
algorithm agility inside the current Tahoe-LAFS code base, and  
instead plan for an algorithm upgrade to require a code base upgrade  
and a separate, syntactically distinct, type of file capability.


This is almost precisely the example problem I discuss in  
http://jim.com/security/prefix_free_number_encoding.html


Hey, that's interesting.  I'll study your prefix-free encoding format  
some time.


Regards,

Zooko



so how do *you* manage your keys, then? part 3

2009-09-04 Thread Zooko Wilcox-O'Hearn

So How Do You Manage Your Keys Then, part 3 of 5

In part one of this series [1] I described how Tahoe-LAFS combines  
decryption, integrity-checking, identification, and access into one  
bitstring, called an immutable file read-cap ("cap" is short for  
"capability").  In part two [2] I described how users can build tree-  
like structures of files which contain caps pointing to other files,  
and how the cap pointing to the root of such a structure can reside  
on a different computer than the ciphertext.  (This is necessary if  
you want someone to store the ciphertext for you without giving  
them the ability to read the file contents.)


In this installment, consider the question of whether you can give  
someone a cap (which acts as a file handle) and then change the  
contents of the file that the cap points to, while preserving their  
ability to read with the original cap.


This would be impossible with the immutable file read-caps that we  
have been using so far, because each immutable file read cap uses a  
secure hash function to identify and integrity-check exactly one  
file's contents -- one unique byte pattern.  Any change to the file  
contents will cause the immutable file read-cap to no longer match.   
This can be a desirable property if what you want is a permanent  
identifier of one specific, immutable file.  With this property  
nobody -- not even the person who wrote the file in the first place  
-- is able to cause anyone else's read-caps to point to any file  
contents other than the original file contents.


But sometimes you want a different property, namely that an  
authorized writer *can* change the file contents and readers will be  
able to read the new file contents without first having to acquire a  
new file handle.


To accomplish this requires public key cryptography,  
specifically digital signatures.  Using digital signatures,  
Tahoe-LAFS implements a second kind of capability, in addition to the  
immutable-file capability, called a mutable file  
capability.  Whenever you create a new mutable file, you get *two*  
caps to it: a write-cap and a read-cap.  (Actually, you can always  
derive the read-cap from the write-cap, so for API simplicity you get  
just the write-cap to your newly created mutable file.)


Possession of the read-cap to the mutable file gives you two things:  
it gives you the symmetric encryption key with which you decrypt the  
file contents, and it gives you the public key with which you check a  
digital signature in order to be sure that the file contents were  
written by an authorized writer.  The decryption and signature  
verification both happen automatically whenever you read data from  
that file handle (it downloads the digital signature which is stored  
with the ciphertext).


Possession of the write-cap gives you two things: the symmetric key  
with which you encrypt the file contents, and the private key with  
which you sign them.  Both are done automatically whenever you  
write data to that file handle.
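The one-way derivation chain from write-cap to read-cap to encryption key can be sketched as follows.  The labels and the hash-based derivation are invented for illustration; the real Tahoe-LAFS derivation also involves the signing keypair, which is omitted here.

```python
import hashlib, os

def sha256(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

# Creating a mutable file yields only a write-cap; the rest is derived.
write_cap = os.urandom(32)

# The read-cap is a one-way derivation from the write-cap, so a writer
# can always hand out read access, but a reader cannot climb back up
# to gain write access.
read_cap = sha256(b"read-cap:", write_cap)

# The symmetric encryption key is in turn derived from the read-cap,
# so anyone holding the read-cap can decrypt.
enc_key = sha256(b"enc-key:", read_cap)

assert sha256(b"read-cap:", write_cap) == read_cap  # deterministic
```

The asymmetry of the chain is the whole point: each level of authority is derivable from the one above it, never the reverse.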


The important thing about this scheme is that what we crypto geeks  
call "key management" is almost completely invisible to the users.   
As far as the users can tell, there aren't any keys here!  The only  
objects in sight are the file handles, which they already use all the  
time.


All users need to know is that a write-cap grants write authority  
(only to that one file), and the read-cap grants read authority.   
They can conveniently delegate some of their read- or write-  
authority to another user, simply by giving that user a copy of that  
cap, without delegating their other authorities. They can bundle  
multiple caps (of any kind) together into a file and then use the  
capability to that file as a handle to that bundle of authorities.


At least, this is the theory that the object-capability community  
taught me, and I'm pleased to see that -- so far -- it has worked out  
in practice.


Programmers and end users appear to have no difficulty understanding  
the access control consequences of this scheme and then using the  
scheme appropriately to achieve their desired ends.


Installment 4 of this series will be about Tahoe-LAFS directories  
(those are the most convenient way to bundle together multiple caps  
-- put them all into a directory and then use the cap which points to  
that directory).  Installment 5 will be about future work and new  
crypto ideas.


Regards,

Zooko

[1] http://allmydata.org/pipermail/tahoe-dev/2009-August/002637.html  
# installment 1: immutable file caps
[2] http://allmydata.org/pipermail/tahoe-dev/2009-August/002656.html  
# installment 2: tree-like structure (like encrypted git)




Re: so how do *you* manage your keys, then? part 3

2009-09-08 Thread Zooko Wilcox-O'Hearn
[added Cc: tahoe-...@allmydata.org, and I added ke...@guarana.org on  
the whitelist so his posts will go through to tahoe-dev even if he  
isn't subscribed]



On Tuesday,2009-09-08, at 5:54 , Kevin Easton wrote:

Possession of the read-cap to the mutable file gives you two  
things: it gives you the symmetric encryption key with which you  
decrypt the file contents, and it gives you the public key with  
which you check a digital signature in order to be sure that the  
file contents were written by an authorized writer.


How do you prevent someone possessing the read-cap for a mutable  
file from rolling the file back to an earlier version that they  
have seen, without the consent of the write-cap possessor(s)?


You don't even need a read-cap to perform a rollback attack -- if  
you can control the ciphertext that the reader gets, then you can  
give them a copy of an older ciphertext, even if you yourself are  
incapable of decrypting it.  This is a difficult attack to defend  
against.  In the current version of Tahoe-LAFS we already have one  
interesting defense: the reader communicates with many  
different servers, and if *any* of the servers is honest and  
up-to-date and informs the reader about the existence of a newer  
version, then the reader knows that the older version that he can  
read is not the latest.  Readers in Tahoe-LAFS always download shares  
of the file from multiple servers, and all of the servers used have  
to agree on the version number.  Therefore, to perform a rollback  
attack you need to control at least that many servers, and you also  
have to control, or deny access to, any other servers which the  
reader would query and which would inform him about the newer version  
number.  See section 5 of [1].


Does that answer your question about rollback?

It would be interesting to build stronger defenses against the  
rollback attack.  For starters, if the same reader reads the same  
file multiple times and gets new contents each time, he might as well  
remember the version number so that he will detect whether the file  
rolled back during his inspection of it.  Also, it would be  
interesting if a file handle to a mutable file included the version  
number that the mutable file was at when the file handle was  
created.  Building on that, it would be really cool if, in a future  
version of Tahoe-LAFS, we could make it so that you can take a cheap  
snapshot of the current contents and then give someone a file handle  
which securely identifies *both* "the most recent version that you  
can find of this file" and *also* "the specific (immutable) version  
of this file that existed when I created this file handle".
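The "remember the version number" defense is just a monotonicity check on the client side.  A minimal sketch (class and method names invented):

```python
class MutableFileReader:
    """Toy reader that remembers the highest version number it has
    seen and flags any later read that returns an older version."""

    def __init__(self):
        self.highest_seen = -1

    def check(self, version: int) -> None:
        if version < self.highest_seen:
            raise RuntimeError(
                "rollback detected: got version %d after seeing %d"
                % (version, self.highest_seen))
        self.highest_seen = max(self.highest_seen, version)

reader = MutableFileReader()
reader.check(3)
reader.check(7)
try:
    reader.check(5)  # an attacker replays an older ciphertext
    detected = False
except RuntimeError:
    detected = True
assert detected
```

This only detects rollback relative to versions the same reader has already seen; it does nothing against a reader who is served stale data from the start, which is why the multi-server defense above still matters.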



Also, am I correct in assuming that once write-caps have been  
distributed, they cannot be revoked, and a new file handle must be  
created?



Currently, yes.  An improvement that I would like to make in the next  
version of Tahoe-LAFS is to allow the holder of a write-cap to revoke  
it.  While some kinds of revocation are tantamount to DRM (Digital  
Restrictions Management) and seem to be sufficiently difficult that  
we're not even going to try to implement them, the specific kind of  
revocation that you asked about seems to be quite doable.  Also, it  
happens to be the kind of revocation that I have already wanted for  
my own personal use of Tahoe-LAFS (to host my blog).  :-)


Here is a letter about that which explains why I needed this and how  
I envision it working: [2]



Stronger defenses against rollback attack, and revocation of write- 
caps -- these are only a few of the many possible extensions to the  
Tahoe-LAFS secure storage design.  We have a rich library of such  
designs documented on our issue tracker and our wiki.  We are  
currently having a detailed design discussion on the tahoe-dev list  
to which several cryptographers are contributing [e.g. 3, 4].  The  
primary goal for the next version of Tahoe-LAFS caps is to reduce the  
size of the caps and improve performance, but we're also cataloguing  
new features such as these to see if we can work them in.  Here is  
the wiki page where we're keeping our notes: [5].


If any smart cryptographer or hacker reading this wants to create  
secure, decentralized storage, please join us!  We could use the  
help!  :-)


Regards,

Zooko

[1] http://allmydata.org/~zooko/lafs.pdf
[2] http://allmydata.org/pipermail/tahoe-dev/2009-June/001995.html
[3] http://allmydata.org/pipermail/tahoe-dev/2009-July/002345.html
[4] http://allmydata.org/pipermail/tahoe-dev/2009-September/002808.html
[5] http://allmydata.org/trac/tahoe/wiki/NewCapDesign



Re: RNG using AES CTR as encryption algorithm

2009-09-09 Thread Zooko Wilcox-O'Hearn
And while you are at it, please implement these test vectors and  
report to Niels Ferguson:


http://blogs.msdn.com/si_team/archive/2006/05/19/aes-test-vectors.aspx

Regards,

Zooko



Re: how to encrypt and integrity-check with only one key

2009-09-14 Thread Zooko Wilcox-O'Hearn

following-up to my own post:

On Monday,2009-09-14, at 10:22 , Zooko Wilcox-O'Hearn wrote:

David-Sarah Hopwood suggested the improvement that the integrity-check  
value V could be computed as an integrity check (i.e. a  
secure hash) on K1_enc in addition to the file contents.


Oops, that's impossible.  What David-Sarah Hopwood actually said was  
that this would be nice if it were possible, but since it isn't,  
people should pass around the tuple (V, K1_enc) whenever they want  
to verify the integrity of the ciphertext.


http://allmydata.org/pipermail/tahoe-dev/2009-September/002798.html

Regards,

Zooko




Re: [tahoe-dev] Bringing Tahoe ideas to HTTP

2009-09-16 Thread Zooko Wilcox-O'Hearn

On Wednesday,2009-09-16, at 14:44 , Ivan Krstić wrote:

Yes, and I'd be happy to opine on that as soon as someone told me  
what those important problems are.


The message that you quoted from Brian Warner, which ended with him  
wondering aloud what new applications could be enabled by such  
features, began with him mentioning a specific use case that he cares  
about and sees how to improve: authentication of Mozilla plugins.   
Brian has an admirable habit of documenting the use cases that  
motivate his engineering decisions.  I think in this case he omitted  
some explanation of why he finds the current solution unsatisfactory,  
perhaps because he assumed the audience already shared his view.  (I  
think he mentioned something in his letter like "the well-known  
failures of the SSL/CA approach to this problem".)


Regards,

Zooko


deterministic random numbers in crypto protocols -- Re: Possibly questionable security decisions in DNS root management

2009-11-01 Thread Zooko Wilcox-O'Hearn

On 2009 Oct 19, at 9:15 , Jack Lloyd wrote:


On Sat, Oct 17, 2009 at 02:23:25AM -0700, John Gilmore wrote:


DSA was (designed to be) full of covert channels.


one can make DSA deterministic by choosing the k values to be HMAC- 
SHA256(key, H(m))


I've noticed people tinkering with (EC) DSA by constraining that  
number k.  For example, Wei Dai's Crypto++ library generates k by  
hashing in the message itself as well as a timestamp into an RNG:


http://allmydata.org/trac/cryptopp/browser/c5/pubkey.h?rev=324#L1036

Wei Dai's motivation for this is to deal with the case of a  
rollback of the random number generator, which has always been  
possible and nowadays seems increasingly likely because of the rise  
of virtualization.  See also Scott Yilek:  
http://eprint.iacr.org/2009/474 , which appears to be a formal  
argument that this technique is secure (though I suspect that Scott  
Yilek and Wei Dai are unaware of one another's work).  Yilek's work  
is motivated by virtual machines, but one should note that the same  
issues have bedeviled normal old physical machines for years.


Since the Dai/Yilek approach also uses an RNG it is still a covert  
channel, but one could easily remove the RNG part and just use the  
hash-of-the-message part.
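The hash-of-the-message variant mentioned above -- k = HMAC-SHA256(key, H(m)) with the RNG removed -- can be sketched in a few lines.  The group order below is a stand-in, not a real DSA parameter, and the reduction mod q is simplified.

```python
import hashlib, hmac

def deterministic_k(signing_key: bytes, message: bytes, q: int) -> int:
    # k = HMAC-SHA256(key, H(m)), reduced into the group order.
    # No clock, no RNG: same key and message always yield the same k.
    h = hashlib.sha256(message).digest()
    k = int.from_bytes(hmac.new(signing_key, h, hashlib.sha256).digest(), "big")
    return k % q or 1  # avoid the forbidden value k = 0

q = (1 << 160) - 47  # stand-in group order for illustration only
k1 = deterministic_k(b"secret key", b"message", q)
k2 = deterministic_k(b"secret key", b"message", q)
assert k1 == k2                                       # deterministic
assert k1 != deterministic_k(b"secret key", b"other", q)
```

(Constructions in this spirit were later standardized, e.g. deterministic DSA/ECDSA in RFC 6979, which uses a more careful HMAC-DRBG-style reduction.)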


I'm beginning to think that *in general*, when I see a random number  
required for a crypto protocol, I want either to generate it  
deterministically from other data which is already present, or to  
have it explicitly provided by the higher-layer protocol.  In other  
words, I want to constrain the crypto protocol implementation by  
forbidding it to read the clock or a globally-available RNG, thus  
making that layer deterministic.


This facilitates testing, which would help to detect implementation  
flaws like the OpenSSL/Debian fiasco.  It also avoids covert channels  
and can avoid relying on an RNG for security.  If the random numbers  
are generated fully deterministically then it can also provide  
engineering advantages because of convergence of the output -- two  
computations of the same protocol with the same inputs yield the  
same output.


Now, Yilek's paper argues for the security of generating the needed  
random number by hashing together *both* an input random number (e.g.  
from the system RNG) *and* the message.  This is exactly the  
technique that Wei Dai has implemented.  I'm not sure how hard it  
would be to write a similar argument for the security of my proposed  
technique of generating the needed random number by hashing just the  
message.  (Here's a crack at it: Yilek proves that the Dai technique  
is secure even when the system RNG fails and gives you the same  
number more than once, right?  So then let's hardcode the system RNG  
to always give you the random number 4.  QED :-))


Okay, theoretical proofs aside, the engineering question facing me  
is: "What's more likely -- RNG failure, or novel cryptanalysis that  
exploits the fact that the random number isn't truly random but is  
instead generated, e.g. by a KDF from other secrets?"  No contest!   
The former is common in practice and the latter is probably  
impossible.


Minimizing the risk of the latter is one reason why I am so  
interested in KDF's nowadays, such as the recently proposed HKDF:  
http://webee.technion.ac.il/~hugo/kdf/kdf.pdf .


On Tuesday,2009-10-20, at 15:45 , Greg Rose wrote:

Ah, but this doesn't solve the problem; a compliant implementation  
would be deterministic and free of covert channels, but you can't  
reveal enough information to convince someone *else* that the  
implementation is compliant (short of using zero-knowledge proofs,  
let's not go there). So a hardware nubbin could still leak  
information.


Good point!  But can't the one who verifies the signature also verify  
that the k was generated according to the prescribed technique?


Regards,

Zooko

P.S.  If you read this letter all the way to the end then please let  
me know.  I try to make them short, but sometimes I think they are  
too long and make too many assumptions about what the reader already  
knows.  Did this message make sense?




Re: Truncating SHA2 hashes vs shortening a MAC for ZFS Crypto

2009-11-02 Thread Zooko Wilcox-O'Hearn

Dear Darren J Moffat:

I don't understand why you need a MAC when you already have the hash  
of the ciphertext.  Does it have something to do with the fact that  
the checksum is non-cryptographic by default  
(http://docs.sun.com/app/docs/doc/819-5461/ftyue?a=view ), and is  
that still true?  Your original design document [1] said you needed  
a way to force the checksum to be SHA-256 if encryption was turned  
on.  But back then you were planning to support non-authenticating  
modes like CBC.  I guess once you dropped the non-authenticating  
modes you could relax that requirement to force the checksum to be  
secure.


Too bad, though!  Not only are you now tight on space in part because  
you have two integrity values where one ought to do, but also a  
secure hash of the ciphertext is actually stronger than a MAC!  A  
secure hash of the ciphertext tells whether the ciphertext is right  
(assuming the hash function is secure and implemented correctly).   
Given that the ciphertext is right, then the plaintext is right  
(given that the encryption is implemented correctly and you use the  
right decryption key).  A MAC on the plaintext tells you only that  
the plaintext was chosen by someone who knew the key.  See what I  
mean?  A MAC can't be used to give someone the ability to read some  
data while withholding from them the ability to alter that data.  A  
secure hash can.
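The asymmetry can be seen in a few lines: verifying a hash needs no secret at all, while verifying a MAC requires the very key that also suffices to forge one.  (Illustrative values only.)

```python
import hashlib, hmac

ciphertext = b"...encrypted bytes..."
mac_key = b"shared secret"

# A MAC: anyone who can *verify* the tag holds the key, and the key is
# also all it takes to *forge* a tag over different data.
tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
forged = hmac.new(mac_key, b"tampered data", hashlib.sha256).digest()

# A secure hash of the ciphertext: a reader can verify integrity with
# no secret, and therefore gains no forging ability from being able
# to verify.
check = hashlib.sha256(ciphertext).hexdigest()
assert hashlib.sha256(ciphertext).hexdigest() == check  # keyless check
```

That is exactly why a hash can delegate read access while withholding write access, and a MAC cannot.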


One of the founding ideas of the whole design of ZFS was end-to-end  
integrity checking.  It does that successfully now, for the case of  
accidents, using large checksums.  If the checksum is secure then it  
also does it for the case of malice.  In contrast a MAC doesn't do  
end-to-end integrity checking.  For example, if you've previously  
allowed someone to read a filesystem (i.e., you've given them access  
to the key), but you never gave them permission to write to it, and  
they are able to exploit the issues that you mention at the beginning  
of [1], such as "untrusted path to SAN", then the MAC can't stop them  
from altering the file, nor can the non-secure checksum, but a secure  
hash can (provided that they can't overwrite all the way up the  
Merkle tree of the whole pool and any copies of the Merkle tree root  
hash).


Likewise, a secure hash can be relied on as a dedupe tag *even* if  
someone with malicious intent may have slipped data into the pool.   
An insecure hash or a MAC tag can't -- a malicious actor could submit  
data which would cause a collision in an insecure hash or a MAC tag,  
causing tag-based dedupe to mistakenly unify two different blocks.


So, since you're tight on space, it would be really nice if you could  
tell your users to use a secure hash for the checksum and then  
allocate more space to the secure hash value and less space to the  
now-unnecessary MAC tag.  :-)


Anyway, if this is the checksum which is used for dedupe, then  
remember the so-called birthday paradox -- some people may be  
uncomfortable with the prospect of not being able to safely dedupe  
their 2^64-block storage pool if the hash is only 128 bits, for  
example.  :-)  Maybe you could include the MAC tag in the dedupe  
comparison.
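The birthday arithmetic behind that concern: with n items and a b-bit tag, the collision probability is roughly n^2 / 2^(b+1).  Working in base-2 logarithms:

```python
def birthday_collision_exponent(n_items_log2: int, hash_bits: int) -> int:
    # log2 of the approximate collision probability n^2 / 2^(b+1).
    return 2 * n_items_log2 - (hash_bits + 1)

# A 2^64-block pool deduped on a 128-bit tag: probability ~ 2^-1,
# i.e. roughly a coin flip that two distinct blocks share a tag.
assert birthday_collision_exponent(64, 128) == -1

# The same pool with a 256-bit tag: ~ 2^-129, negligible.
assert birthday_collision_exponent(64, 256) == -129
```

So at 128 bits a full-size pool is genuinely at risk of a dedupe collision even without an attacker; 256 bits leaves an enormous margin.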


Also, the IVs for GCM don't need to be random, they need only to be  
unique.  Can you use a block number and birth number or other such  
guaranteed-unique data instead of storing an IV?  (Apropos recent  
discussion on the cryptography list [2].)
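A sketch of building a 96-bit GCM IV from guaranteed-unique block metadata instead of storing a random one.  The field widths and the choice of (block number, birth transaction group) are invented for illustration; what matters is only that the pair never repeats for a given key.

```python
import struct

def deterministic_iv(block_number: int, birth_txg: int) -> bytes:
    # 96-bit GCM IV: 8 bytes of block number plus the low 4 bytes of
    # the birth transaction group.  GCM IVs must be unique per key,
    # but they need not be random or secret, so no IV storage needed.
    return struct.pack(">QI", block_number, birth_txg & 0xFFFFFFFF)

iv_a = deterministic_iv(7, 1000)
iv_b = deterministic_iv(8, 1000)
assert len(iv_a) == 12
assert iv_a != iv_b  # distinct blocks, distinct IVs
```

Since ZFS is copy-on-write, a rewrite of the same logical block lands at a new birth transaction group, which is what would keep the pair unique across rewrites.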


Regards,

Zooko

[1] http://hub.opensolaris.org/bin/download/Project+zfs%2Dcrypto/ 
files/zfs%2Dcrypto%2Ddesign.pdf

[2] http://www.mail-archive.com/cryptography@metzdowd.com/msg11020.html
---
Your cloud storage provider does not need access to your data.
Tahoe-LAFS -- http://allmydata.org



hedging our bets -- in case SHA-256 turns out to be insecure

2009-11-08 Thread Zooko Wilcox-O'Hearn
 of  
inputs that collide in H2 (see [5]).


Now, the reason that a combiner like this one is not published in the  
theoretical crypto literature is that it obviously could fail if the  
outer hash function H1 fails.  For example, even if H2 is  
collision-resistant, if H1 turns out to be susceptible to collisions,  
then theoretically speaking C[H1, H2] might be susceptible to  
collisions.  However, in real life C[H1, H2] would most likely still  
be collision-resistant!


All practical attacks on real hash functions so far (if I understand  
correctly) are multi-block attacks, in which the attacker is able to  
feed a sufficiently long and unconstrained input to the hash  
function that the effects of the later parts of his input can  
manipulate the state generated by the earlier parts.  My combiner C  
uses H1 in its outer invocation on a single-block-sized input, which  
means no such multi-block attacks are possible on the outer  
invocation.  In addition, the inputs that the attacker gets to feed  
to the outer invocation of H1 are highly constrained.  Basically, he  
would already have to be very good at manipulating the inner  
invocations of H1 and H2 in ways that he isn't supposed to before he  
could even begin to manipulate the outer invocation of H1.


A measure of the practical security of a combiner like this one is:  
"How safe would it be if it were instantiated using broken practical  
hash functions such as MD5 and SHA-1?"  It appears to me (from an  
admittedly cursory analysis) that there is no realistic way to find  
collisions in C[MD5, SHA-1] even though there are realistic ways to  
find collisions in both MD5 and SHA-1.  Of course, I'm not proposing  
to use C[MD5, SHA-1]!  I'm proposing to use C[SHA-256, _] where _ is  
some other hash function which is believed to be strong.  The example  
of instantiating C with MD5 and SHA-1 just goes to show that C is a  
hash function which is stronger than either of its two underlying  
hash functions.
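The combiner's exact definition is truncated in this archive, so the following is a hedged reconstruction from the surrounding description: the outer invocation of H1 sees only the short, fixed-size concatenation of the two inner digests, so long unconstrained attacker input never reaches it directly.  H1 = SHA-256; SHA-512 here is merely a stand-in for "some other hash believed strong".

```python
import hashlib

def C(x: bytes) -> bytes:
    # Plausible reconstruction of the combiner C[H1, H2], under the
    # assumptions stated above: the attacker can only influence the
    # outer H1 input through the digests of the inner invocations.
    inner1 = hashlib.sha256(x).digest()   # H1 on the full input
    inner2 = hashlib.sha512(x).digest()   # H2 (stand-in) on the full input
    return hashlib.sha256(inner1 + inner2).digest()  # outer H1

assert len(C(b"abc")) == 32
assert C(b"abc") != C(b"abd")
```

To collide C an attacker would need the same pair of inner digests for two distinct inputs, or a collision in the outer H1 over its highly constrained 96-byte inputs.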


The other desirable security properties such as second-preimage  
resistance and pre-image resistance seem to follow the same pattern  
as collision-resistance -- C[H1, H2] seems to be much stronger than  
H1 or H2 alone.


Regards,

Zooko

[1] http://extendedsubset.com/Renegotiating_TLS.pdf
[2] http://allmydata.org/trac/tahoe/wiki/NewCaps/WhatCouldGoWrong
[3] http://bench.cr.yp.to/results-hash.html#arm-apollo
[4] Krzysztof Pietrzak: Non-Trivial Black-Box Combiners for Collision-Resistant Hash-Functions don't Exist
[5] Jonathan J. Hoch, Adi Shamir: On the Strength of the Concatenated Hash Combiner when All the Hash Functions are Weak
[6] Marc Fischlin, Anja Lehmann, Krzysztof Pietrzak: Robust Multi-Property Combiners for Hash Functions Revisited
[7] http://webee.technion.ac.il/~hugo/rhash/rhash.pdf

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Truncating SHA2 hashes vs shortening a MAC for ZFS Crypto

2009-11-09 Thread Zooko Wilcox-O'Hearn

On Wednesday, 2009-11-04, at 7:04, Darren J Moffat wrote:

The SHA-256 is unkeyed, so there would be nothing to stop an  
attacker who can write to the disks but doesn't know the key from  
modifying the on-disk ciphertext and all the SHA-256 hashes up to  
the top of the Merkle tree, to the uberblock.  That would create a  
valid ZFS pool, but the data would have been tampered with.  I  
don't see that as an acceptable risk.


I see.  It is interesting that you and I have different intuitions  
about this.  My intuition is that it is easier to make sure that the  
Merkle tree root hash hasn't been changed without authorization than  
to make sure that an unauthorized person hasn't learned a secret.  
Is your intuition the opposite?  I suppose in different situations  
either one could be true.


Now I better appreciate why you want to use both a secure hash and a  
MAC, and I understand the appeal of Nico Williams's proposal to MAC  
just the root of the tree rather than every node.  That would save  
space in all the non-root nodes but would retain the property that  
you have to both know the secret *and* be able to write to the root  
hash in order to change the filesystem.
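A minimal sketch of that idea, under assumed conventions (this is not ZFS's actual on-disk format): every tree node carries an unkeyed SHA-256 hash, and a single HMAC tag covers only the root. Any change to a data block changes the root, and forging the new root's tag requires the key.

```python
import hashlib
import hmac

def merkle_root(leaves):
    """Compute the root of a binary Merkle tree over data blocks,
    using unkeyed SHA-256 with distinct leaf/interior prefixes."""
    level = [hashlib.sha256(b"\x00" + leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:          # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [hashlib.sha256(b"\x01" + level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

def root_tag(key: bytes, root: bytes) -> bytes:
    """MAC only the root: an attacker who can rewrite the disk but
    doesn't know the key can recompute every unkeyed node hash, yet
    cannot produce a valid tag for the altered root."""
    return hmac.new(key, root, hashlib.sha256).digest()
```

Verification recomputes the root from the blocks and compares tags with `hmac.compare_digest`; only the root pays the per-node space cost of a MAC.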


So if I don't truncate the SHA-256, how big does my MAC need to be,  
given that every ZFS block has its own IV?


I don't know the answer to this question.  I have a hard time  
judging whether the minimum safe size of the MAC is zero (i.e. you  
don't need it at all), or 128 bits (i.e. you rely on the MAC and you  
want 128-bit crypto strength), or something in between.


Regards,

Zooko



[Cryptography] LeastAuthority.com announces PRISM-proof storage service

2013-08-12 Thread Zooko Wilcox-OHearn
Dear people of cryptogra...@metzdowd.net:

For obvious reasons, now is the time to push forward on strong
encryption for everyone. Here is our first attempt from
https://LeastAuthority.com . (We have more projects in the works!)

One of our goals is to spread the idea of *verifiable* end-to-end
encryption. It is possible. It isn't easy, but we just might make it!

We welcome criticism, suggestions, and requests from you all.

Regards,

Zooko Wilcox-O'Hearn

Founder, CEO, and Customer Support Rep
https://LeastAuthority.com
Freedom matters.

---




 LeastAuthority.com Announces A PRISM-Proof Storage Service


Wednesday, July 31, 2013

`LeastAuthority.com`_ today announced “Simple Secure Storage Service
(S4)”, a backup service that encrypts your files to protect them from
the prying eyes of spies and criminals.

.. _LeastAuthority.com: https://LeastAuthority.com

“People deserve privacy and security in the digital data that make up
our daily lives,” said the company's founder and CEO, Zooko
Wilcox-O'Hearn. “As an individual or a business, you shouldn't have to
give up control over your data in order to get the benefits of cloud
storage.”

verifiable end-to-end security
------------------------------

The Simple Secure Storage Service offers *verifiable* end-to-end security.

It offers “end-to-end security” because all of the customer's data is
encrypted locally — on the customer's own personal computer — before
it is uploaded to the cloud. During its stay in the cloud, it cannot
be decrypted by LeastAuthority.com, nor by anyone else, without the
decryption key which is held only by the customer.

S4 offers “*verifiable* end-to-end security” because all of the source
code that makes up the Simple Secure Storage Service is published for
everyone to see. Not only is the source code publicly visible, but it
also comes with Free (Libre) and Open Source rights granted to the
public, allowing anyone to inspect the source code, experiment on it,
alter it, and even distribute their own version of it or sell
commercial services based on it.

Wilcox-O'Hearn says “If you rely on closed-source, proprietary
software, then you're just taking the vendor's word for it that it
actually provides the end-to-end security that they claim. As the
PRISM scandal shows, that claim is sometimes a lie.”

The web site of LeastAuthority.com proudly states “We can never see
your data, and you can always see our code.”

trusted by experts
------------------

The Simple Secure Storage Service is built on a technology named
“Least-Authority File System (LAFS)”. LAFS has been studied and used
by computer scientists, hackers, Free and Open Source software
developers, activists, the U.S. Defense Advanced Research Projects
Agency, and the U.S. National Security Agency.

The design has been published in a peer-reviewed scientific workshop:
*Wilcox-O'Hearn, Zooko, and Brian Warner. “Tahoe: the least-authority
filesystem.” Proceedings of the 4th ACM international workshop on
Storage security and survivability. ACM, 2008.*
http://eprint.iacr.org/2012/524.pdf

It has been cited in more than 50 scientific research papers, and has
received plaudits from the U.S. Comprehensive National Cybersecurity
Initiative, which stated: “Systems like Least-Authority File System
are making these methods immediately usable for securely and availably
storing files at rest; we propose that the methods be further
reviewed, written up, and strongly evangelized as best practices in
both government and industry.”

Dr. Richard Stallman, President of the Free Software Foundation
(https://fsf.org/) said “Free/Libre software is software that the
users control. If you use only free/libre software, you control your
local computing — but using the Internet raises other issues of
freedom and privacy, which many network services don't respect. The
Simple Secure Storage Service (S4) is an example of a network service
that does respect your freedom and privacy.”

Jacob Appelbaum, Tor project developer (https://www.torproject.org/)
and WikiLeaks volunteer (http://wikileaks.org/), said “LAFS's design
acknowledges the importance of verifiable end-to-end security through
cryptography, Free/Libre release of software and transparent
peer-reviewed system design.”

The LAFS software is already packaged in several widely-used operating
systems such as Debian GNU/Linux and Ubuntu.

https://LeastAuthority.com




[Cryptography] Open Letter to Phil Zimmermann and Jon Callas of Silent Circle, On The Closure of the “Silent Mail” Service

2013-08-21 Thread Zooko Wilcox-OHearn
 and the vulnerability that it imposes on users is the
first step. People will listen to you about this, now. Let's start
talking about it and we can start finding solutions.

Also, warn your users. Don't tell them the untruth that it is
impossible for you to eavesdrop on their communications even if you
try (as your company seems to be on the borderline of doing in public
statements like these: [ `¹`_, `²`_]).

.. _¹: 
http://www.forbes.com/sites/parmyolson/2013/07/15/corporate-customers-flock-to-anti-snooping-app-silent-circle/
.. _²: 
http://techcrunch.com/2013/08/08/silent-circle-preemptively-shuts-down-encrypted-email-service-to-prevent-nsa-spying/

We're trying an approach to this problem, here at
https://LeastAuthority.com, of “*verifiable* end-to-end security”. For
our service, all of the software is Free and Open Source, and it is
distributed through channels which are out of our direct control, such
as Debian and Ubuntu. Of course this approach is not perfectly secure
— it doesn't guarantee that a state-level actor cannot backdoor our
customers. But it does guarantee that *we* cannot backdoor our
customers.

This currently imposes inconvenience on our customers, and I'm not
saying it is the perfect solution, but it shows that there is more
than one way to go at this problem.

Thank you for your attention to these important matters, and for your
leadership in speaking out about them.

(By the way, https://LeastAuthority.com is not a competitor to Silent
Circle. We don't offer voice, text, video, or email services, as
Silent Circle does/did. What we offer is simply secure offsite
*backup*, and a secure cloud storage API that people use to build
other services.)

Regards,

Zooko Wilcox-O'Hearn

Founder, CEO, and Customer Support Rep
https://LeastAuthority.com
Freedom matters.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] What is the state of patents on elliptic curve cryptography?

2013-08-21 Thread Zooko Wilcox-OHearn
Here's a nice resource: RFC 6090!

https://tools.ietf.org/html/rfc6090

Also relevant:

http://cr.yp.to/ecdh/patents.html

I'd be keen to see a list of potentially-relevant patents which have
expired or are due to expire within the next 5 years.

Regards,

Zooko Wilcox-O'Hearn

Founder, CEO, and Customer Support Rep
https://LeastAuthority.com
Freedom matters.