Re: [cryptography] MalwareBytes

2016-06-24 Thread Seth David Schoen
Ron Garret writes:

> The whole idea of an expiration date (rather than an issue date)
> on a certificate is a sort of a scam by the CAs to coerce people
> into renewing (and hence paying for) their certificates on a regular
> schedule. I think some CAs don’t even enforce the use of a new key
> when a cert is renewed, which defeats the whole purpose.

Certificate expiry is useful if there isn't a way to check whether
a certificate has been revoked, or if some relying parties don't check
in practice, or if the revocation channel is unreliable.

It's also useful if certificate issuers think information in a certificate
may become inaccurate over time, but can't or don't continually check
whether the information has gone stale.

It's also useful, as you mentioned, if there's an ongoing risk of an
undiscovered private key compromise over time.  In that case the private
key should be changed periodically.

Finally, certificate issuees rarely actively revoke certificates when
they're no longer relevant.  If certificates didn't expire, there
would be an enormous pool of obsolete and disused certificates that
were still valid and could still potentially have their private key
out there somewhere (maybe in a backup or on a decommissioned server).

This also has operational consequences for CAs both in terms of OCSP and
CRLs: if the CA couldn't rely on expiry, it would have to keep signing
all unrevoked certificates for OCSP freshness and keep including all
revoked certificates in the CRL; both the lists of revoked and unrevoked
certificates could grow without bound, taxing CA resources and the
resources of CRL users.  For example, VeriSign might still have an
ongoing requirement to publish fresh data about certificates from 1995.

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107
_______________________________________________
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] MalwareBytes

2016-06-24 Thread Seth David Schoen
John R. Levine writes:

> > But all of this is rather a moot point nowadays.  Now that letsencrypt is
> > live, there is no reason to pay for a cert any more.
> 
> Try getting a let's encrypt cert for your mail server.  Or getting an EV
> cert.

EV certs are definitely not available from Let's Encrypt, but you can
get a certificate for your mail server by using the DNS challenge type,
which just requires you to place a specified record into your DNS zone.
While the Certbot client doesn't support this mechanism, several other
Let's Encrypt clients, such as acme.sh, do.
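
For concreteness, the DNS-01 challenge boils down to publishing a TXT
record under the `_acme-challenge` label.  A sketch, with a hypothetical
domain and a placeholder for the token the client computes:

```
_acme-challenge.mail.example.com. 300 IN TXT "<key-authorization-digest>"
```

Once the CA observes that record, it will issue for mail.example.com even
though the mail server never answers an HTTP challenge.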

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [cryptography] Unbreakable crypto?

2015-03-21 Thread Seth David Schoen
Lee writes:

> On 3/21/15, Jeffrey Goldberg jeff...@goldmark.org wrote:
> > [Apologies for quoting badly]
> >
> > No!  A thousand times no.
> >
> > (1) the file isn't secret
>
> But the fact that I'm using it as my one-time pad is.  Why isn't that
> good enough?

If an attacker has access to the same web sites and databases that you
do, the attacker could just try all of the files in them.  There
shouldn't be more than 2⁵⁰ publicly-accessible files out there, right?

Or the attacker might try each file in a copy of the Internet Archive,
say.  One of them is actually going to be that ISO file!

There are other conceptual problems that are much worse than this
practical problem.  The biggest conceptual problem is that, for the
one-time pad setting that has been mathematically proven secure, the
shortest description of the pad you are using should normally be the pad
itself.  If there is a shorter description, the attacker first of all has
a smaller work factor (a smaller set of things to try); but, what's
conceptually _much much much_ worse, the attacker has some probability of
being able to usefully distinguish one possible decryption as more likely
than others.

In the ideal one-time pad, there is no basis on which an attacker (even
an attacker with completely unbounded resources) can usefully say that
one proposed decryption is more likely to be right than another, at
least compared to the attacker's prior beliefs about what the plaintext
was likely to be.  But if there is a meaningfully shorter description of
what you used as the pad, then an attacker with vast resources who
correctly guesses what that was will know that it's likely to be right,
which is a kind of success that the attacker couldn't have achieved with
a truly random pad.

*True random pad*: Attacker doesn't know whether pad k₁ is actually more
likely than pad k₂, if (c ⊕ k₁) and (c ⊕ k₂) both appear to be equally
plausible plaintexts.

*Choosing a meaningful file but keeping secret which one you used*: An
attacker who tries your file f₁ as the pad notices that both (c ⊕ f₁)
and f₁ itself appear meaningful, so it's more likely that f₁ is
correct compared to some other f₂ which is not meaningful.
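
The first case can be seen concretely: with a uniformly random pad, every
same-length plaintext is exactly as consistent with the ciphertext as
every other.  A minimal Python sketch (the messages are made up):

```python
import os

def xor(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"ATTACK AT DAWN"
pad = os.urandom(len(plaintext))          # true random pad
ciphertext = xor(plaintext, pad)

# For ANY candidate plaintext of the same length there exists a pad that
# "decrypts" the ciphertext to it, so the ciphertext alone cannot make
# one candidate look more likely than another.
candidate = b"RETREAT AT SIX"
k2 = xor(ciphertext, candidate)
assert xor(ciphertext, k2) == candidate
```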


This is the great thing about the classical one-time pad: the attacker
_literally doesn't know when the attack was successful_, a fact which
has nothing to do with how powerful the attacker is (how many keys the
attacker is able to try guessing).  In the classical one-time pad with
a true random pad, even an attacker who can try _every single_ pad
literally doesn't have any evidence which could reveal which one was
right, or which could reveal any new fact or property about the
plaintext.  Relaxing the randomness assumption, in turn, undermines this
conclusion because the attacker can now have some conceivable indication
about being on the right track (even if that's merely statistical),
which could simply never happen with a true random pad.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [cryptography] Complete repository of known playing card ciphers

2014-09-10 Thread Seth David Schoen
Natanael writes:

> Will you attempt to model human shuffling too and see how it affects
> analysis? Is there maybe any existing work on that to reuse? I'd like to
> know what the minimum requirement would be for a human to achieve a secure
> shuffle for these ciphers (in case any of these ciphers would actually be
> secure enough given a proper shuffle).

The most famous is probably Bayer and Diaconis (1992):

http://statweb.stanford.edu/~cgates/PERSI/papers/bayer92.pdf

A classic NYT report on this research (prior to its formal publication):

http://www.nytimes.com/1990/01/09/science/in-shuffling-cards-7-is-winning-number.html
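
For intuition, here is a rough simulation of the Gilbert-Shannon-Reeds
riffle model that Bayer and Diaconis analyze -- a sketch of the shuffle
model only, not of their mixing-time analysis:

```python
import random

def riffle(deck):
    """One GSR riffle: binomial cut, then size-proportional interleave."""
    n = len(deck)
    cut = sum(random.random() < 0.5 for _ in range(n))  # Binomial(n, 1/2)
    left, right = deck[:cut], deck[cut:]
    out = []
    while left or right:
        # Drop the next card from each half with probability
        # proportional to that half's remaining size.
        if random.random() < len(left) / (len(left) + len(right)):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    return out

deck = list(range(52))
for _ in range(7):   # Bayer-Diaconis: ~7 riffles mix a 52-card deck well
    deck = riffle(deck)
assert sorted(deck) == list(range(52))   # still a permutation
```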

-- 
Seth David Schoen sch...@loyalty.org  |  No haiku patents
 http://www.loyalty.org/~schoen/|  means I've no incentive to
  FD9A6AA28193A9F03D4BF4ADC11B36DC9C7DD150  |-- Don Marti


Re: [cryptography] Weak random data XOR good enough random data = better random data?

2014-07-28 Thread Seth David Schoen
Lodewijk andré de la porte writes:

> I don't see how it could reduce the randomness to XOR with patterned data.
> If someone knows better of this, let me know. If I'm correct that also
> means it should be okay to reuse the few KB's should they ever run out (in
> this system), at worst it no longer improves the randomness. I don't expect
> that to ever happen, and I'd prefer requesting new KB's, but it's still
> interesting.

DJB describes a more complicated scenario in which an active attacker
manipulates one source of entropy in order to reduce the unpredictability
of the overall output.

http://blog.cr.yp.to/20140205-entropy.html

I guess the other bad case is where both sources are systematically
correlated in some way (that doesn't change their overall statistics
individually, and that an attacker wouldn't otherwise have been able
to notice).  It's hard to see a path to that in this case.  But you
could certainly construct an artificial scenario where it's true.
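
For the benign case, XORing even a fully predictable source into a good
one neither helps nor hurts, so long as the sources are independent.  A
small sketch:

```python
import os

def combine(a, b):
    """XOR two independent entropy sources byte-by-byte."""
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

good = os.urandom(32)   # trusted, uniform source
weak = bytes(32)        # worst case: completely predictable (all zeros)

out = combine(good, weak)
# If the sources are independent and at least one is uniform, the XOR
# is uniform; the predictable source adds nothing but removes nothing.
assert out == good      # XOR with zeros is the identity
```

The trouble DJB describes starts exactly when the independence assumption
fails, because an attacker influences one source after seeing the other.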

DJB also announced a randomness-generation mailing list in that post;
I'm not sure what level of participation it's gotten, but that might
be another good place to bring up this topic.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [cryptography] Fatal flaw in Taiwanese smart card RNG

2013-09-16 Thread Seth David Schoen
Krisztián Pintér writes:

> no. you can't test a rng by looking at the output. only the algorithm
> and the actual code can be analyzed and reviewed. it is because it
> is extremely easy to create a crappy rng that fools the smartest
> analytical tool on the planet. it is not that easy to fool an attacker
> that reverse engineers your system.

Well, there's a distinction between RNGs that have been maliciously
designed and RNGs that are just extremely poor (or that are inadequately
seeded without their designers or users realizing it).

It sounds like such extremely poor RNGs are getting used in the wild
quite a bit, and these problems might well be detected by more
systematic and widespread use of these researchers' techniques.  It's
true that a maliciously designed RNG would not be detected this way.
The researchers do emphasize that

  An absence of common divisors is also not an indication of security.
  There are many potential vulnerabilities resulting from bad randomness;
  it is important to thoroughly test every component of a random-number
  generator, not merely to look for certain types of extreme failures.
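
The "common divisors" in question are prime factors accidentally shared
between RSA moduli from a bad RNG, and they fall out of a plain GCD.  A
toy illustration with tiny made-up numbers (real moduli are hundreds of
digits, and the researchers use batch-GCD to compare millions of keys):

```python
from math import gcd

# Two "RSA moduli" that share the prime p because the RNG repeated it.
p, q1, q2 = 1000003, 1000033, 1000037
n1, n2 = p * q1, p * q2

shared = gcd(n1, n2)
assert shared == p                  # the common divisor factors both keys
assert n1 // shared == q1 and n2 // shared == q2
```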

-- 
Seth David Schoen sch...@loyalty.org  |  No haiku patents
 http://www.loyalty.org/~schoen/|  means I've no incentive to
  FD9A6AA28193A9F03D4BF4ADC11B36DC9C7DD150  |-- Don Marti


Re: [cryptography] not a Paypal phish using EV certificate

2013-08-13 Thread Seth David Schoen
James A. Donald writes:

> Although websites often use huge numbers of huge cookies, one can
> easily optimize one's cookie use.  I can see no reason why anyone
> would ever need more than a single 96 bit cookie that is a random
> number.

They might want to make the content and purpose of the cookie
transparent to the user, and perhaps even reassure the user that
the cookie can't easily be used as a unique identifier for the
user's browser.

On the flip side, there are also some mechanisms to store
authenticated, encrypted session state in its entirety on the
client in order to _avoid_ storing it in a database on the
server.
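
A minimal sketch of that second idea, using an HMAC so the server can
trust state it handed to the client (the key and field names are
hypothetical, and a real system would also encrypt and expire the state):

```python
import base64
import hashlib
import hmac

SECRET = b"hypothetical-server-side-key"   # never sent to the client

def sign_cookie(state):
    """Authenticated (but not encrypted) client-held session state."""
    tag = hmac.new(SECRET, state, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(state + tag).decode()

def verify_cookie(value):
    raw = base64.urlsafe_b64decode(value)
    state, tag = raw[:-32], raw[-32:]
    expected = hmac.new(SECRET, state, hashlib.sha256).digest()
    return state if hmac.compare_digest(tag, expected) else None

cookie = sign_cookie(b"user=42;cart=3")
assert verify_cookie(cookie) == b"user=42;cart=3"

# Any tampering with the state invalidates the MAC.
raw = bytearray(base64.urlsafe_b64decode(cookie))
raw[0] ^= 1
assert verify_cookie(base64.urlsafe_b64encode(bytes(raw)).decode()) is None
```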

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [cryptography] PINS and [Short] Passwords

2012-04-04 Thread Seth David Schoen
Jeffrey Walton writes:

> What is the state of the art for mobile password cracking on iOS and Android?

I'm not sure if you're thinking primarily of the operating-system level
passwords or third-party crypto apps.

Dmitry Sklyarov (the same Dmitry Sklyarov) gave an interesting talk at
BlackHat Europe a few weeks ago where he described having cracked the
majority of PIN-oriented mobile crypto apps he examined -- a total of
around 20 apps.

https://media.blackhat.com/bh-eu-12/Belenko/bh-eu-12-Belenko-Password_Encryption-WP.pdf
https://media.blackhat.com/bh-eu-12/Belenko/bh-eu-12-Belenko-Password_Encryption-Slides.pdf

One common problem was incorrect use of crypto so that a candidate
PIN could be verified at very low computational cost.  (Sometimes,
there was a somewhat expensive decryption step but an independent
cheap, fast MAC.)  Another problem was that PINs are so low entropy
(most mobile devices today do not have physical keyboard, making it
harder for users to utilize motor learning to remember complex
passwords [so] on average, the complexity of a password that has to
be entered routinely on a mobile device will be lower).

A third problem, if I understood correctly, was that even if key
derivation like PBKDF2 is used correctly, mobile devices are so
much slower than the systems used for brute force attacks that the
attacker can still plausibly do an exhaustive search even with the
delays introduced by key derivation.  Crypto app developers,
unwilling to make the users wait a long time to access their data,
may use a relatively low key derivation work factor.

The highest PBKDF2 work factor of any app Sklyarov and Belenko
examined was 4000.  Sklyarov and Belenko still estimated that
passwords for this application could be verified on a GPU at about
160,000 passwords per second, from which they concluded that a
10-digit numeric PIN could be recovered on a single GPU in one
day.  And that was the _best_ mobile crypto app they examined.
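
To make that arithmetic concrete, here is a sketch of why 4000 PBKDF2
iterations don't save a short PIN -- the PIN, salt, and parameters are
made up, and a GPU attacker would be orders of magnitude faster than
this single-core loop:

```python
import hashlib
import os

salt = os.urandom(16)
ITERATIONS = 4000        # the highest work factor Sklyarov and Belenko saw

def derive(pin):
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, ITERATIONS)

secret_pin = "0417"      # hypothetical 4-digit PIN
target = derive(secret_pin)

# Exhaustive search over all 10**4 candidates finishes in seconds,
# key derivation delay and all.
recovered = next(p for p in (f"{i:04d}" for i in range(10_000))
                 if derive(p) == target)
assert recovered == secret_pin
```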

However, they concluded that built-in crypto protections in mobile
operating systems may be stronger for various reasons, and maybe
that's more of what you were interested in.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
454 Shotwell Street, San Francisco, CA  94110   +1 415 436 9333 x107


Re: [cryptography] [info] The NSA Is Building the Country’s Biggest Spy Center (Watch What You Say)

2012-03-25 Thread Seth David Schoen
ianG writes:

> On 26/03/12 07:43 AM, Jon Callas wrote:
> >
> > This is precisely the point I've made: the budget way to break crypto is to
> > buy a zero-day. And if you're going to build a huge computer center, you'd
> > be better off building fuzzers than key crackers.
>
> point of understanding - what do you mean by fuzzers?

Automatically trying to make software incur faults with large amounts of
randomized (potentially invalid) input.

https://en.wikipedia.org/wiki/Fuzz_testing

If you get an observable fault you can repeat the process under a
debugger and try to understand why it occurred and whether it is an
exploitable bug.  Here's a pretty detailed overview:

https://www.blackhat.com/presentations/bh-usa-07/Amini_and_Portnoy/Whitepaper/bh-usa-07-amini_and_portnoy-WP.pdf

When it was first invented, fuzzing basically just consisted of feeding
random bytes to software, but now it can include sophisticated
understanding of the kinds of data that a program expects to see, with
some model of the internal state of the program.  I believe there are
also fuzzers that examine code coverage, so they can give feedback to the
tester about whether there are parts of the program that the fuzzer isn't
exercising.
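
In miniature, the original random-bytes style of fuzzing looks something
like this (the buggy length-prefixed parser is invented for illustration):

```python
import random

def parse_record(data):
    """Toy parser with a bug: trusts the length byte without bounds checks."""
    n = data[0]                  # IndexError on empty input
    body = data[1:1 + n]
    return body[0], body[-1]     # IndexError when the length byte lies

random.seed(0)
faults = 0
for _ in range(2000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(8)))
    try:
        parse_record(blob)
    except IndexError:           # an observable fault worth triaging
        faults += 1
assert faults > 0                # random inputs find the bug quickly
```

A real fuzzer would then minimize each faulting input and replay it under
a debugger to decide whether the fault is exploitable.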

-- 
Seth David Schoen sch...@loyalty.org  |  No haiku patents
 http://www.loyalty.org/~schoen/|  means I've no incentive to
  FD9A6AA28193A9F03D4BF4ADC11B36DC9C7DD150  |-- Don Marti


Re: [cryptography] Auditable CAs

2011-11-28 Thread Seth David Schoen
Ben Laurie writes:

> > How will the opt-out mechanism work so that it is not degraded by users
> > clicking through a warning?
>
> Don't quite understand the question: if you have opted out you
> shouldn't get a warning, surely?

I think that question was about unilateral client-side opt-out (users
ignoring security warnings) rather than the organized deployment of a
non-public CA.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
454 Shotwell Street, San Francisco, CA  94110   +1 415 436 9333 x107


Re: [cryptography] Let's go back to the beginning on this

2011-09-14 Thread Seth David Schoen
Arshad Noor writes:

> I'm not sure I understand why it would be helpful to know all (or any)
> intermediate CA ahead of time.  If you trust the self-signed Root CA,
> then, by definition, you've decided to trust everything that CA (and
> subordinate CA) issues, with the exception of revoked certificates.
>
> Can you please elaborate?  Thanks.

Of course, intermediate CAs are sometimes created for purely
operational reasons that may be quite prudent.  But delegating
root CA-like power to more distinct organizations creates risk.

Without external double-checks, the integrity of the CA system is as
strong as its weakest link, so every new CA is an additional
independent source of risk.  When CAs delegate to intermediates,
those intermediates can add new kinds of risk:

* they could be in different jurisdictions, so there's new risk that
  the legal systems in those jurisdictions could try to compel them
  to misissue*;

* they could be run by different people who could be persuaded to
  misissue in new ways;

* they could use different software or hardware or operating systems
  that could have different vulnerabilities;

* they could use different crypto primitives when issuing legitimate
  certificates that could have different vulnerabilities.

Whether or not the new CA does a worse job overall than the old CA, it
still creates new risk -- by CA proliferation!  (In fact, there are
already some cases showing that intermediate CAs _aren't_ always as
cautious or competent in practice as the roots that delegated to them.)

More fundamentally, as Peter Biddle points out, trust isn't
transitive.  Suppose we think that a particular CA is super-awesome
at verifying that someone owns a domain and issuing hard-to-forge
certificates attesting to this fact, while resisting compromises
and coercion.  That doesn't necessarily mean that it's also a good
judge of whether another organization is also a good CA.

Even giving the PKIX status quo the benefit of the doubt, the root
CA decisions are supposed to be made by neutral parties following a
careful process that includes input from professional auditors.  When
CAs get in the habit of delegating their power, that process is at
risk of being bypassed and in any case starts to happen much less
transparently.  There are plenty of cases in the real world where
someone is trusted with the power to take an action, but not
automatically trusted with the power to delegate that power to others
without external oversight.  And that makes sense, because trust isn't
transitive.


* see https://www.eff.org/files/countries-with-CAs.txt

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
454 Shotwell Street, San Francisco, CA  94110   +1 415 436 9333 x107


Re: [cryptography] Let's go back to the beginning on this

2011-09-13 Thread Seth David Schoen
Andy Steingruebl writes:

> They used to be quite common, but other than 1 or 2 sites I visit
> regularly that I know have self-signed certs, I *never* run into cert
> warnings anymore.  BTW, I'm excluding mixed content warnings from
> this for the moment because they are a different but related issue.

I see it about once per week, but not in the course of my own browsing --
in the course of following up on HTTPS Everywhere bug reports where sites
used to have a valid cert (perhaps on an HTTPS site that they didn't
actively promote) and then stopped.  An example from yesterday was

https://www.senate.gov/

which had a valid cert a while ago and then recently stopped.  (Their
HTTPS support was reported to us as working on June 29; according to
Perspectives, the most recent change apparently happened on September 9.)

HTTPS Everywhere makes users encounter this situation more than they
otherwise might.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
454 Shotwell Street, San Francisco, CA  94110   +1 415 436 9333 x107


Re: [cryptography] Let's go back to the beginning on this

2011-09-13 Thread Seth David Schoen
Randall Webmail writes:

> > From: Seth David Schoen sch...@eff.org
> > To: Crypto discussion list cryptography@randombit.net
> > Sent: Tuesday, September 13, 2011 2:31:59 PM
> > Subject: Re: [cryptography] Let's go back to the beginning on this
> >
> > HTTPS Everywhere makes users encounter this situation more than they
> > otherwise might.
>
> A week or three ago, I got cert warnings - from gmail's page.  (Yes, I'm
> using HTTPS Everywhere).

When _that_ happens, please tell Google and EFF.  I'm sure both
organizations would be fascinated.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
454 Shotwell Street, San Francisco, CA  94110   +1 415 436 9333 x107


Re: [cryptography] Let's go back to the beginning on this

2011-09-13 Thread Seth David Schoen
Ralph Holz writes:

> Yes, with the second operation offline and validating against the NSS
> root store. I don't have a MS one at the moment, it would be interesting
> (how do you extract that from Win? The EFF guys should know)

You might look at https://www.eff.org/files/ssl-observatory-code-r1.tar_.bz2
in the microsoft_CAs directory.

You can also look at

https://social.technet.microsoft.com/wiki/contents/articles/microsoft-root-certificate-program.aspx

which used to provide a PDF, but apparently now links to

https://social.technet.microsoft.com/wiki/contents/articles/2592.aspx

instead (not updated to reflect DigiNotar's removal).

One issue is that Microsoft has a protocol for MSIE to ask Microsoft
interactively whether to trust a new CA.  That means that the list of
trusted CAs is not actually stored on an MSIE end-user's machine and
can't be displayed in full inside of MSIE.  Instead, when a new CA is
encountered, MSIE will query Microsoft and ask whether that CA should
be trusted.  Personally, I find this indeterminism and delegation
concerning (since there's no way for users to review CAs ahead of
time, or see whether a particular CA will or won't be trusted ahead
of time).  On the other hand, a similar phenomenon occurs in other
browsers with regard to intermediate CAs, because there's no way to
get a list of intermediate CAs before they are encountered in the wild,
and definitely no way to get an exhaustive list of all of the
intermediate CAs that would be trusted.  In fact, in some sense no
one in the entire world is in possession of that list. :-(

Peter Eckersley has produced a list of intermediates which you can
see visualized in

https://www.eff.org/files/colour_map_of_CAs.pdf

but of course that list derives from a scan from a particular point
in time (and not using SNI); there is no guarantee that there aren't
other intermediate CAs which simply weren't encountered that way
(or even intermediate CAs whose existence is kept a secret and
that are only used in a limited way by particular attackers under
particular circumstances).

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
454 Shotwell Street, San Francisco, CA  94110   +1 415 436 9333 x107


Re: [cryptography] *.google.com certificate issued by DigiNotar

2011-09-02 Thread Seth David Schoen
Marsh Ray writes:

> Why would they need to?
>
> What's the difference between a private key in the wild and a pwned
> CA that, even months after a breakin and audit, doesn't revoke or
> even know what it signed?
>
> (This is a serious question)

The pwned CA leaves evidence that other people can potentially discover
or collect.  It also means that an individual user who knows what
public-key cryptography is can potentially do something to determine
whether an alleged key is valid.

-- 
Seth David Schoen sch...@loyalty.org  |  No haiku patents
 http://www.loyalty.org/~schoen/|  means I've no incentive to
  FD9A6AA28193A9F03D4BF4ADC11B36DC9C7DD150  |-- Don Marti


Re: [cryptography] Bitcoin observation

2011-07-08 Thread Seth David Schoen
lodewijk andré de la porte writes:

> I honestly don't see how. A transaction has an origin, which is verified to
> have the coins, and a destination, which is a public key that must have a
> private key. AFAIK every public key has a computable private key
> counterpart.
>
> But please correct me.

The reason Taral says that you could create transactions that
transfer money into black holes that are unrecoverable is that
the destination of a transaction is actually not a public key
but rather a _script_ in a scripting language.  To claim the coins
later, a recipient creates a new transaction that satisfies the
conditions of the script associated with the previous transaction.

The normal case is that each script has the effect of transferring
coins to a public key.

https://en.bitcoin.it/wiki/Script#Standard_Transaction_to_Bitcoin_address
https://en.bitcoin.it/wiki/Transactions

However, this documentation emphasizes that the script can be valid
even if it has a different effect.

  For example, it's possible to create an output that can be claimed
  by anyone without any authorization. It's also possible to require
  that an input be signed by ten different keys, or be redeemable
  with a password instead of a key.

You could equally easily write a script that cannot be executed
successfully, for instance because the condition for claiming its
output would be that 0=1.  (An even simpler case is that the script
just returns false immediately, via OP_RETURN.)  It seems like the
standard Bitcoin client should accept transactions _containing_
such a script as valid (i.e., the script reflects the intention of
the previous owner of some coins and successfully claims the output
of the previous transaction involving those coins), even though the
standard client would not be willing to generate those transactions.
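
A toy evaluator conveys the idea -- this is a deliberately simplified
model with invented opcode handling, not Bitcoin's actual encoding or
semantics:

```python
# Hypothetical opcode names; real Bitcoin Script has many more.
OP_TRUE, OP_RETURN, OP_EQUAL = "OP_TRUE", "OP_RETURN", "OP_EQUAL"

def run_script(script):
    """Evaluate a script; the output is claimable iff the stack ends truthy."""
    stack = []
    for op in script:
        if op == OP_RETURN:
            return False              # provably unspendable, immediately
        elif op == OP_EQUAL:
            a, b = stack.pop(), stack.pop()
            stack.append(a == b)
        elif op == OP_TRUE:
            stack.append(True)
        else:
            stack.append(op)          # treat anything else as pushed data
    return bool(stack) and bool(stack[-1])

assert run_script([OP_TRUE]) is True          # anyone may claim the output
assert run_script([OP_RETURN]) is False       # coins sent to a black hole
assert run_script([0, 1, OP_EQUAL]) is False  # the "0 = 1" condition
```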

You could think of a transaction as like a machine-readable will,
and the Bitcoin network as like a probate court that happens to
be implemented by a P2P software network.  Prospective heirs can
show up and prove that they have a legitimate claim to some of the
inheritance.  However, since the will is being interpreted by
software, the testator can (rather than the common case of naming
specific heirs) actually implement wacky conditions that are the
stuff of fiction in our world.  For example, "BTC 1700 to the
first person to claim it, provided that 1+1=3."

Unlike a human-operated probate court, the Bitcoin network has no
notion of intestacy or escheat.  If it becomes clear that some coins
can never ever be claimed, well, wasn't that testator odd to do that?

-- 
Seth David Schoen sch...@loyalty.org  |  No haiku patents
 http://www.loyalty.org/~schoen/|  means I've no incentive to
  FD9A6AA28193A9F03D4BF4ADC11B36DC9C7DD150  |-- Don Marti