[cryptography] Reply to Zooko (in Markdown)

2013-08-17 Thread Jon Callas

Also at http://silentcircle.wordpress.com/2013/08/17/reply-to-zooko/


# Reply to Zooko

(My friend and colleague, [Zooko 
Wilcox-O'Hearn](https://leastauthority.com/blog/author/zooko-wilcox-ohearn.html)
 wrote an open letter to me and Phil [on his blog at 
LeastAuthority.com](https://leastauthority.com/blog/open_letter_silent_circle.html).
 Despite this appearing on Silent Circle's blog, I am speaking mostly for 
myself, only slightly for Silent Circle, and not at all for Phil.)

Zooko,

Thank you for writing and your kind words. Thank you even more for being a 
customer. We're a startup and without customers, we'll be out of business. I 
think that everyone who believes in privacy should support with their 
pocketbook every privacy-friendly service they can afford to. It means a lot to 
me that you're voting with your pocketbook for my service.

Congratulations on your new release of [LeastAuthority's 
S4](https://leastauthority.com) and 
[Tahoe-LAFS](https://tahoe-lafs.org/trac/tahoe-lafs). Just as you are a fan of 
my work, I am an admirer of your work on Tahoe-LAFS and consider it one of the 
best security innovations on the planet.

I understand your concerns, and share them. One of the highest priority tasks 
that we're working on is to get our source releases better organized so that 
they can effectively be built from [what we have on 
GitHub](https://github.com/SilentCircle/). It's suboptimal now. Getting the 
source releases is harder than one might think. We're a startup and are pulled 
in many directions. We're overworked and understaffed. Even in the old days at 
PGP, producing effective source releases took years of effort to get down pat. 
It often took us four to six weeks to get the sources out even when delivering 
one or two releases per year.

The world of app development makes this harder. We're trying to streamline our 
processes so that we can get a release out about every six weeks. We're not 
there, either.

However, even when source code is an automated part of our software
releases, I'm afraid you're going to be disappointed by how verifiable they
are.

It's very hard, even with controlled releases, to get an exact byte-for-byte 
recompile of an app. Some compilers make this impossible because they randomize 
the branch prediction and other parts of code generation. Even when the 
compiler isn't making it literally impossible, without an exact copy of the 
exact tool chain with the same linkers, libraries, and system, the code won't 
be byte-for-byte the same. Worst of all, smart development shops use the 
*oldest* possible tool chain, not the newest one because tool sets are designed 
for forwards-compatibility (apps built with old tools run on the newest OS) 
rather than backwards-compatibility (apps built with the new tools run on older 
OSes). Code reliability almost requires using tool chains that are 
trailing-edge.
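
To make that concrete, the check a verifier wants to run is trivial; passing it is what's hard. A minimal sketch (Python, hypothetical paths):

```python
# Sketch: is a build reproducible? Compare digests of two independent
# builds of the same source. Paths are hypothetical.
import hashlib

def digest(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

a = digest("build-ours/app.bin")     # vendor build
b = digest("build-theirs/app.bin")   # independent rebuild
print("byte-for-byte identical" if a == b
      else "builds differ:\n  %s\n  %s" % (a, b))
```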

The problems run even deeper than the raw practicality. Twenty-nine years ago 
this month, in the August 1984 issue of Communications of the ACM (Vol. 27, 
No. 8) Ken Thompson's famous Turing Award lecture, Reflections on Trusting 
Trust was published. You can find a facsimile of the magazine article at 
https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf and a 
text-searchable copy on Thompson's own site, 
http://cm.bell-labs.com/who/ken/trust.html.

For those unfamiliar with the Turing Award, it is the most prestigious award a 
computer scientist can win, sometimes called the Nobel Prize of computing. 
The site for the award is at http://amturing.acm.org.

In Thompson's lecture, he describes a hack that he and Dennis Ritchie did in a 
version of UNIX in which they created a backdoor to UNIX login that allowed 
them to get access to any UNIX system. They also created a self-replicating 
program that would compile their backdoor into new versions of UNIX portably. 
Quite possibly, their hack existed in the wild until UNIX was recoded from the 
ground up with BSD and GCC.

In his summation, Thompson says:

> The moral is obvious. You can't trust code that you did not totally
> create yourself. (Especially code from companies that employ people
> like me.) No amount of source-level verification or scrutiny will
> protect you from using untrusted code. In demonstrating the
> possibility of this kind of attack, I picked on the C compiler. I
> could have picked on any program-handling program such as an
> assembler, a loader, or even hardware microcode. As the level of
> program gets lower, these bugs will be harder and harder to detect.
> A well installed microcode bug will be almost impossible to detect.

Thompson's words reach out across three decades of computer science, and yet 
they echo Descartes from three centuries prior to Thompson. In Descartes's 1641 
Meditations, he proposes the thought experiment of an evil demon who 
deceives us by simulating the entire world that we perceive.

Re: [cryptography] Reply to Zooko (in Markdown)

2013-08-17 Thread Bryan Bishop
On Sat, Aug 17, 2013 at 1:04 AM, Jon Callas j...@callas.org wrote:

 It's very hard, even with controlled releases, to get an exact
 byte-for-byte recompile of an app. Some compilers make this impossible
 because they randomize the branch prediction and other parts of code
 generation. Even when the compiler isn't making it literally impossible,
 without an exact copy of the exact tool chain with the same linkers,
 libraries, and system, the code won't be byte-for-byte the same. Worst of
 all, smart development shops use the *oldest* possible tool chain, not the
 newest one because tool sets are designed for forwards-compatibility (apps
 built with old tools run on the newest OS) rather than
 backwards-compatibility (apps built with the new tools run on older OSes).
 Code reliability almost requires using tool chains that are trailing-edge.


Would providing (signed) build vm images solve the problem of distributing
your toolchain?

- Bryan
http://heybryan.org/
1 512-203-0507


Re: [cryptography] LeastAuthority.com announces PRISM-proof storage service

2013-08-17 Thread ianG

On 16/08/13 22:11 PM, zooko wrote:

On Tue, Aug 13, 2013 at 03:16:33PM -0500, Nico Williams wrote:


Nothing really gets anyone past the enormous supply of zero-day vulns in their 
complete stacks.  In the end I assume there's no technological PRISM 
workarounds.


I agree that compromise of the client is relevant. My current belief is that
nobody is doing this on a mass scale, pwning entire populations at once, and
that if they do, we will find out about it.

My goal with the S4 product is not primarily to help people who are being
targeted by their enemies, but to increase the cost of indiscriminately
surveilling entire populations.

Now maybe it was a mistake to label it as PRISM-Proof in our press release
and media interviews! I said that because to me PRISM means mass surveillance
of innocents. Perhaps to other people it doesn't mean that. Oops!



My understanding of PRISM is that it is a voluntary and secret arrangement 
between the supplier and the collector (NSA) to provide direct access to 
all information.


By 'voluntary' I mean that the supplier hands over the access, it isn't 
taken in an espionage or hacker sense, or leaked by an insider.  I 
include in this various techniques of court-inspired voluntarianism as 
suggested by recent FISA theories [0].


I suspect it is fair to say that something is PRISM-proof if:

  a) the system lacks the capability to provide access,

  b) the operator lacks the capacity to enter into the voluntary
     arrangement, or

  c) the operator lacks the capacity to keep the arrangement (b) secret.

The principle here seems to be that if the information is encrypted on 
the server side without the keys being held or accessible by the 
supplier, then (a) is met [1].


Encryption-sans-keys is an approach that is championed by Tahoe-LAFS and 
Silent Circle.  Therefore I think it is reasonable in a marketing sense 
to claim it is PRISM-proof, as long as that claim is explained in more 
detail for those who wish to research.
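
As a toy illustration of the encryption-sans-keys idea (a sketch assuming the pyca/cryptography package; Tahoe-LAFS itself adds capability URLs, integrity checking, and erasure coding on top): the key never leaves the client, so the supplier can only ever hand over ciphertext.

```python
# Sketch: client-side encryption before upload. The storage supplier
# never sees the key, so criterion (a) above is met for the file body.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # generated and kept on the client only
box = Fernet(key)

ciphertext = box.encrypt(b"private file contents")
# ... upload `ciphertext` to the storage service ...

# Only the key holder can recover the plaintext:
assert box.decrypt(ciphertext) == b"private file contents"
```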


In this context, one must market one's product, and one must use simple 
labels to achieve this.  Otherwise the product doesn't get out there, 
and nobody benefits.




iang


[0] E.g., the Lavabit supplier can be considered to have not volunteered 
the info, and Google can be considered to have not volunteered to the 
Chinese government.
[1]  In contrast, an operator that is offshore would meet (b), and an 
operator that was some sort of open-source distributed org, where everyone 
saw where the traffic headed, would meet (c).








Re: [cryptography] open letter to Phil Zimmermann and Jon Callas of Silent Circle, re: Silent Mail shutdown

2013-08-17 Thread ianG

On 17/08/13 00:46 AM, Zooko Wilcox-O'Hearn wrote:

... This was
demonstrated in the Hushmail case in which the U.S. DEA asked Hushmail
(a Canadian company) to turn over the plaintext of the email of one of
its customers. Hushmail complied, shipping a set of CDs to the DEA
containing the customer's messages.

The President of Hushmail `emphasized`_ in interviews with journalists
at the time that Hushmail would be able to comply with such orders
regardless of whether the customer used Hushmail's “client-to-server”
(SSL) encryption or its “end-to-end” (Java applet) encryption.

.. _emphasized: http://www.wired.com/threatlevel/2007/11/hushmail-to-war/

Phil had been Chief Cryptographer of Hushmail years earlier, and was
still a member of the Advisory Board of Hushmail at the time of that
case. He commented about the case at that time, and he also `stated`_,
correctly, that the Hushmail model of *unverified* end-to-end
encryption was vulnerable to government coercion. That's the same
model that Silent Circle uses today.

.. _stated: http://www.wired.com/threatlevel/2007/11/pgp-creator-def/






As I was involved in Hushmail at the very early stages, I suppose I can 
add some words here.


This was always known as the weakness of the model.  The operator could 
simply replace the applet that was downloaded in every instance with one 
that had other more nefarious capabilities.  There were thoughts and 
discussions about how to avoid that, but a simple, mass market solution 
was never found to my knowledge [0] which rendered the discussions moot.


I don't think the company ever sought to hide that vulnerability.

Also, that vulnerability was rather esoteric as it required quite 
serious levels of cooperation.  So the bar was still high.


There were two reasons why this was a reasonable risk to accept.

1) There was a far greater danger that most cypherpunks ignored -- the 
capability to hack or subpoena your counterparty's emails was far more 
of a danger to the individual than any concerted 
Hushmail-government applet replacement.  This is why I sometimes say 
that the threat is always on the node: to a good order of 
approximation, most threats and most risks are concentrated on the node, 
and classical CIA (confidentiality, integrity, availability) provides far 
less than one thinks in the aggregate if that threat is ignored.


2) The service did provide something that no other service did:  easy 
access to a good crypto email service.  Its utility far exceeded that 
of the only serious contender, PGP.  So it got encryption out to the 
masses.  And those masses could then appreciate and learn ... and some 
did use both Hushmail and PGP at the same time.





iang




[0] Also, it's fair to say that applets themselves held early promise 
that was never really capitalised on (possibly because of the 
browser/language wars at the time).  If applets had developed, and if 
attention had been paid by browser vendors to the real security risks faced 
by users, then we might have made some headway.




Re: [cryptography] open letter to Phil Zimmermann and Jon Callas of Silent Circle, re: Silent Mail shutdown

2013-08-17 Thread ianG

On 17/08/13 00:46 AM, Zooko Wilcox-O'Hearn wrote:



We're trying an approach to this problem, here at LeastAuthority.com,
of “*verifiable* end-to-end security”. For our data backup and storage
service, all of the software is Free and Open Source, and it is
distributed through channels which are out of our direct control, such
as Debian and Ubuntu. Of course this approach is not perfectly secure
— it doesn't guarantee that a state-level actor cannot backdoor our
customers. But it does guarantee that *we* cannot backdoor our
customers.



Other than the open source solution [0], how does one do it?  The 
example of Skype and its self-immolated reputation for security is 
instructive.


In order to gain early credibility for its closed source solution, it 
commissioned an audit of the tech.  This audit gave it a good passing 
grade, and specifically indicated that there were no known weaknesses, 
and the claims were good.  The aggressive cryptographic community was 
duly impressed.


However, an audit is a point-in-time review.  That means it is only true 
for that period of review.  Auditors will specifically state that you 
cannot rely on this review for a prediction of the future.  The audit 
must be repeated at some sort of regular interval to stop the company 
changing its mind.  The audit process must be a commitment to 
continuation, so as to control that possibility.


In contrast, the public widely believes that an audit is a prediction of 
the future (and the audit _profession_ does nothing to dissuade that 
view).  So Skype left that audit sitting there, and decided itself never 
to repeat that audit [1].  Fast forward nearly a decade, and the house 
of cards came tumbling down:  first the Heise discovery (as confirmed by 
Adam Back here) and then the PRISM claims [2].





So back to Silent Circle.  One known way to achieve some control over 
their closed source replacement vulnerability is to let an auditor into 
their inner circle, so to speak.


But if they wish to do this, they should not repeat the Skype mistake, 
especially as this is the known and routine product life cycle of a 
cryptographic tool: first gain the trust of the cypherpunks, and promise 
them the world.  Then, when sale time comes, gain the trust of the NSA, 
and the promise of future business.







iang



[0]  Remember that PGP Inc also tried the open source way.  In the long 
run, it didn't help.  If you compare on brutal measures, Skype succeeded 
with closed source, PGP Inc failed with open source.  Of course it is 
more complicated than that, but the end-delivery of security is 
something that can be measured and can be relied upon.


[1]  Nor to ever mention it, as rumour has it.  As time went on, the 
audit became more and more of an embarrassment...


[2]  Rumour/hearsay confirms:  Skype put the bad stuff in after the eBay 
sale, and before the Microsoft sale (who for their sins were happy 
either way).  Up until around that time, the various European agencies 
were lividly trying to gain access, and agitating in the press.  We know 
they got attack kits, and they also went quiet around the same time: 
It's been a long time since a western TLA has complained about Skype -- 
go figure.



Re: [cryptography] urandom vs random

2013-08-17 Thread ianG

On 17/08/13 10:57 AM, Peter Gutmann wrote:

Nico Williams n...@cryptonector.com writes:


It might be useful to think of what a good API would be.


The problem isn't the API, it's the fact that you've got two mutually
exclusive requirements, the security geeks want the (P)RNG to block until
enough entropy is available, everyone else wants execution to continue without
being blocked.  In other words a failure of security is preferred to a failure
of functionality.  Until you resolve that conflict, no API (re)design is going
to help you.



(not answering the posts specifically but) the rule of thumb I've always 
used is this:


If you don't care so much about security then use the tools that are 
provided, and suffer an occasional glitch.  Don't worry too much about 
the glitches coz your business already told you, you don't care too much 
about the security / randomness.  All those cypherpunkian arguments can 
go to hell, you've got customers to care for.


OTOH, if you care a lot, then you have to write your own.  The design is 
now very well established: many sources -> mixer/pool -> deterministic 
PRNG.  It's really not that hard; this is an intern-level project, folks.
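
For concreteness, a toy version of that pipeline (illustrative sketch only, not a vetted RNG):

```python
# Toy sketch of the classic design: many sources -> hash mixer/pool ->
# deterministic PRNG. Illustrative only; a real design also handles
# reseeding, entropy estimation, and fork safety.
import hashlib, os, time

class ToyRNG:
    def __init__(self):
        self.pool = hashlib.sha256()   # the mixer/pool

    def add_source(self, data: bytes):
        self.pool.update(data)         # mixing extra sources never hurts

    def read(self, n: int) -> bytes:
        # Deterministic PRNG keyed by the pool state (hash-counter mode).
        seed = self.pool.digest()
        out, ctr = b"", 0
        while len(out) < n:
            out += hashlib.sha256(seed + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return out[:n]

rng = ToyRNG()
rng.add_source(os.urandom(32))                  # the system pool
rng.add_source(str(time.time_ns()).encode())    # an extra timing source
print(rng.read(16).hex())
```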


As a result, if you care enough to argue about random v. urandom then you 
have already put yourself in the second camp, and your problem is solved: 
just use urandom and collect some other sources yourself.  You no longer 
need to care.




iang


Re: [cryptography] urandom vs random

2013-08-17 Thread Ben Laurie
On 17 August 2013 06:01, ianG i...@iang.org wrote:

 On 17/08/13 10:57 AM, Peter Gutmann wrote:

 Nico Williams n...@cryptonector.com writes:

  It might be useful to think of what a good API would be.


 The problem isn't the API, it's the fact that you've got two mutually
 exclusive requirements, the security geeks want the (P)RNG to block until
 enough entropy is available, everyone else wants execution to continue
 without
 being blocked.  In other words a failure of security is preferred to a
 failure
 of functionality.  Until you resolve that conflict, no API (re)design is
 going
 to help you.



 (not answering the posts specifically but) the rule of thumb I've always
 used is this:

 If you don't care so much about security then use the tools that are
 provided, and suffer an occasional glitch.  Don't worry too much about the
 glitches coz your business already told you, you don't care too much about
 the security / randomness.  All those cypherpunkian arguments can go to
 hell, you've got customers to care for.

 OTOH, if you care a lot, then you have to write your own.  The design is
 now very well established: many sources -> mixer/pool -> deterministic
 PRNG.  It's really not that hard, this is an intern level project, folks.

 As a result, if you care enough to argue about random v. urandom then you
 already put yourself in the second camp, and your problem is solved. Just
 use urandom and collect some other sources yourself.  You no longer care.


That's terrible advice. Implementing your own crypto of any sort leads
to complete fail, as we see repeatedly.

Also, if there are other sources, why are they not being fed in to the
system PRNG?





 iang



Re: [cryptography] urandom vs random

2013-08-17 Thread ianG

On 17/08/13 14:46 PM, Ben Laurie wrote:




On 17 August 2013 06:01, ianG i...@iang.org wrote:

On 17/08/13 10:57 AM, Peter Gutmann wrote:

Nico Williams n...@cryptonector.com writes:

It might be useful to think of what a good API would be.


The problem isn't the API, it's the fact that you've got two
mutually
exclusive requirements, the security geeks want the (P)RNG to
block until
enough entropy is available, everyone else wants execution to
continue without
being blocked.  In other words a failure of security is
preferred to a failure
of functionality.  Until you resolve that conflict, no API
(re)design is going
to help you.



(not answering the posts specifically but) the rule of thumb I've
always used is this:

If you don't care so much about security then use the tools that are
provided, and suffer an occasional glitch.  Don't worry too much
about the glitches coz your business already told you, you don't
care too much about the security / randomness.  All those
cypherpunkian arguments can go to hell, you've got customers to care
for.

OTOH, if you care a lot, then you have to write your own.  The
design is now very well established: many sources -> mixer/pool ->
deterministic PRNG.  It's really not that hard; this is an intern
level project, folks.

As a result, if you care enough to argue about random v. urandom then
you already put yourself in the second camp, and your problem is
solved. Just use urandom and collect some other sources yourself.
  You no longer care.


That's terrible advice. Implementing your own crypto of any sort
leads to complete fail, as we see repeatedly.



:)  Perhaps the distinction is that, if you care, when you repeatedly 
fail then you can repeatedly fix it.  OTOH, if you're using external 
crypto, you're up the creek without a paddle.




Also, if there are other sources, why are they not being fed in to the
system PRNG?



I agree in principle, but reality slaps us around a bit:

Linux and BSD can't agree on the basic definitions of urandom and 
random.  Some don't agree whether Intel's RNG is safe or not for Linux 
purposes.  Zooko & Jon don't agree whether open source is a sufficient / 
necessary proof.


And, as you say, FIPS don't agree with anyone:

 Amusing story: FIPS 140 requires self-tests on the PRNG. There was a
 bug in FIPS OpenSSL once where the self-test mode got stuck on and so
 no entropy was fed into the PRNG.

 Also, back when I was doing FIPS 140 they made me remove some of the
 entropy feeds into the PRNG - particularly ones that protect against
 pool duplication over forks.





iang



Re: [cryptography] urandom vs random

2013-08-17 Thread Ben Laurie
On 17 August 2013 08:05, ianG i...@iang.org wrote:

 On 17/08/13 14:46 PM, Ben Laurie wrote:




 On 17 August 2013 06:01, ianG i...@iang.org wrote:

 On 17/08/13 10:57 AM, Peter Gutmann wrote:

 Nico Williams n...@cryptonector.com writes:

 It might be useful to think of what a good API would be.


 The problem isn't the API, it's the fact that you've got two
 mutually
 exclusive requirements, the security geeks want the (P)RNG to
 block until
 enough entropy is available, everyone else wants execution to
 continue without
 being blocked.  In other words a failure of security is
 preferred to a failure
 of functionality.  Until you resolve that conflict, no API
 (re)design is going
 to help you.



 (not answering the posts specifically but) the rule of thumb I've
 always used is this:

 If you don't care so much about security then use the tools that are
 provided, and suffer an occasional glitch.  Don't worry too much
 about the glitches coz your business already told you, you don't
 care too much about the security / randomness.  All those
 cypherpunkian arguments can go to hell, you've got customers to care
 for.

 OTOH, if you care a lot, then you have to write your own.  The
 design is now very well established: many sources -> mixer/pool ->
 deterministic PRNG.  It's really not that hard; this is an intern
 level project, folks.

 As a result, if you care enough to argue about random v. urandom then
 you already put yourself in the second camp, and your problem is
 solved. Just use urandom and collect some other sources yourself.
   You no longer care.


 That's terrible advice. Implementing your own crypto of any sort
 leads to complete fail, as we see repeatedly.



 :)  Perhaps the distinction is that, if you care, when you repeatedly
 fail then you can repeatedly fix it.  OTOH, if you're using external
 crypto, you're up the creek without a paddle.


What external crypto can you not fix? Windows? Then don't use Windows.
You can fix any crypto in Linux or FreeBSD.





  Also, if there are other sources, why are they not being fed in to the
 system PRNG?



 I agree in principle, but reality slaps us around a bit:

 Linux and BSD can't agree on the basic definitions of urandom and random.


So what? BSD's definition is superior. Linux should fix their RNG. Or these
people who you think should implement their own should. Or they could just
switch to BSD.


  Some don't agree whether Intel's RNG is safe or not for Linux purposes.


All entropy feeds are safe.


 Zooko & Jon don't agree whether open source is a sufficient / necessary
 proof.


Yet they're both selling it.



 And, as you say, FIPS don't agree with anyone:


Again: so what?



  Amusing story: FIPS 140 requires self-tests on the PRNG. There was a
  bug in FIPS OpenSSL once where the self-test mode got stuck on and so
  no entropy was fed into the PRNG.
 
  Also, back when I was doing FIPS 140 they made me remove some of the
  entropy feeds into the PRNG - particularly ones that protect against
  pool duplication over forks.





 iang




Re: [cryptography] open letter to Phil Zimmermann and Jon Callas of Silent Circle, re: Silent Mail shutdown

2013-08-17 Thread Benjamin Kreuter
On Sat, 17 Aug 2013 12:30:40 +0300
ianG i...@iang.org wrote:

 This was always known as the weakness of the model.  The operator
 could simply replace the applet that was downloaded in every instance
 with one that had other more nefarious capabilities.  There were
 thoughts and discussions about how to avoid that, but a simple, mass
 market solution was never found to my knowledge [0] which rendered
 the discussions moot.
 
 I don't think the company ever sought to hide that vulnerability.
 
 Also, that vulnerability was rather esoteric as it required quite 
 serious levels of cooperation.  So the bar was still high.

I am not sure I see how serious levels of cooperation would be
required.  Adding a backdoor to the Java applet that forwards a
passphrase or secret key to Hushmail does not sound terribly hard to
do (it sounds like less than 10 lines of code).  It sounds like
something that would almost certainly be done if the company ever
decided to build a lawful interception system.

-- Ben



-- 
Benjamin R Kreuter
UVA Computer Science
brk...@virginia.edu
KK4FJZ

--

If large numbers of people are interested in freedom of speech, there
will be freedom of speech, even if the law forbids it; if public
opinion is sluggish, inconvenient minorities will be persecuted, even
if laws exist to protect them. - George Orwell




Re: [cryptography] urandom vs random

2013-08-17 Thread Jeffrey Walton
On Sat, Aug 17, 2013 at 7:46 AM, Ben Laurie b...@links.org wrote:

 ...

 Also, if there are other sources, why are they not being fed in to the
 system PRNG?
Linux 3.x kernels decided to stop using IRQ interrupts as an entropy source
(removal of the IRQF_SAMPLE_RANDOM flag, without an alternative way to
gather that entropy).

[PATCH 17/17] random: final removal of IRQF_SAMPLE_RANDOM,
http://lkml.indiana.edu/hypermail/linux/kernel/1207.2/01043.html.

Jeff


[cryptography] urandom vs random

2013-08-17 Thread Sandy Harris
shawn wilson ag4ve...@gmail.com wrote:

 I thought that decent crypto programs (openssh, openssl, tls suites)
 should read from random so they stay secure and don't start generating
 /insecure/ data when entropy runs low.

(Talking about Linux, the only system where I know the details)

urandom uses cryptographically strong mixing (SHA-1) and has
enormous state, so it should be secure barring pathological
cases like the router vendors whose version of Linux failed to
initialise things properly or an enemy who already has root on
your system so he/she can look at kernel internals. (and that
enemy has much better targets to go after).

Papers like Yarrow, with respected authors, argue convincingly
that systems with far smaller state can be secure.

 The only way I could see this
 as being a smart thing to do is if these programs also looked at how
 much entropy the kernel had and stopped when it got ~50 or so. Is this
 the way things are done when these programs use urandom or what?

That would make no sense since the interface provides another
way to get the effect. If you really need guaranteed entropy,
for example to generate a long-term key, then use /dev/random.
The driver then checks the entropy and blocks (makes your
program wait) if there is not enough.
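
In code, the distinction on Linux is just which device you open (sketch):

```python
# Sketch: the two Linux interfaces described above.
with open("/dev/urandom", "rb") as f:
    session_key = f.read(32)    # never blocks; fine for session keys

with open("/dev/random", "rb") as f:
    longterm_seed = f.read(32)  # may block until the kernel has entropy
```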


Re: [cryptography] urandom vs random

2013-08-17 Thread Sandy Harris
On Fri, Aug 16, 2013 at 11:07 AM, Aaron Toponce aaron.topo...@gmail.com wrote:


 The /dev/urandom device in the Linux kernel uses the Yarrow pseudo random
 number generator when the entropy pool has been exhausted.

No, it doesn't, or at least did not last time I looked at the code, a few
months ago. There are similarities, but also large differences.

 It turns out, getting good, high quality, true random, and chaotic data
 into your kernel isn't really at all that difficult. All you need to do is
 rely on quantum chaos, which is really the only true source for random, as
 much as random can get. Some things people have done:

 * Tuned their radio to atmospheric noise, and fed it into their kernel
   through their sound card.
 * Created reverse-biased PN junctions, timing electron jumps.
 * Timing radioactive decay using Americium-241, common in everyday
   household smoke detectors.
 * Opening up the CCD on a web camera fully in a completely dark box.
 * Thermal noise from resistors.
 * Clock drift from quartz-based clocks and power fluctuations.

My program to deal with this (which needs more analysis before it
should be entirely trusted) and a paper which discusses it and
several alternatives are at:
ftp://ftp.cs.sjtu.edu.cn:990/sandy/maxwell/
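
Whatever the physical source, on Linux the gathered samples can at least be
mixed into the kernel pool by writing to the device (a sketch; writing mixes
the bytes in but does not credit the entropy estimate, which requires the
root-only RNDADDENTROPY ioctl):

```python
# Sketch: mix externally gathered noise (radio, CCD, decay timings, ...)
# into the Linux kernel pool. Writing to the device mixes the data in
# but does NOT raise the entropy estimate; crediting entropy requires
# the root-only RNDADDENTROPY ioctl. "samples.bin" is hypothetical.
noise = open("samples.bin", "rb").read()

with open("/dev/random", "wb") as dev:   # may need write permission
    dev.write(noise)
```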

 At any event, using /dev/urandom is perfectly secure, as the Yarrow
 algorithm has proven itself over time to withstand practical attacks. So,
 let's dispel the myth that using /dev/urandom is insecure. :)

Yes.


Re: [cryptography] urandom vs random

2013-08-17 Thread Ben Laurie
On 17 August 2013 10:09, Jeffrey Walton noloa...@gmail.com wrote:

 On Sat, Aug 17, 2013 at 7:46 AM, Ben Laurie b...@links.org wrote:
 
  ...
 
  Also, if there are other sources, why are they not being fed in to the
  system PRNG?
 Linux 3.x kernels decided to stop using IRQ interrupts (removal of the
 IRQF_SAMPLE_RANDOM flag, without an alternative to gather entropy).

 [PATCH 17/17] random: final removal of IRQF_SAMPLE_RANDOM,
 http://lkml.indiana.edu/hypermail/linux/kernel/1207.2/01043.html.


I haven't studied the Linux PRNG, but my casual understanding is it does
not deal well with useless input. This is obviously a defect.


 Jeff



Re: [cryptography] open letter to Phil Zimmermann and Jon Callas of Silent Circle, re: Silent Mail shutdown

2013-08-17 Thread Jon Callas

On Aug 17, 2013, at 2:41 AM, ianG i...@iang.org wrote:

 So back to Silent Circle.  One known way to achieve some control over their 
 closed source replacement vulnerability is to let an auditor into their inner 
 circle, so to speak.

One correction of fact:

Our source is not closed source. It's up on GitHub under a non-commercial 
BSD-variant license, which I know isn't OSI, but anyone who wants to build, 
use, and even distribute their verified version is free to do so.

Secondly, we have auditors in the mix. We are customers of Leviathan Security 
and their virtual security officer program. They do regular code audits, 
network audits, and are helping us create a software development lifecycle.

Jon




Re: [cryptography] urandom vs random

2013-08-17 Thread yersinia
On Sat, Aug 17, 2013 at 6:39 PM, Sandy Harris sandyinch...@gmail.comwrote:

 shawn wilson ag4ve...@gmail.com wrote:

  I thought that decent crypto programs (openssh, openssl, tls suites)
  should read from random so they stay secure and don't start generating
  /insecure/ data when entropy runs low.

 (Talking about Linux, the only system where I know the details)

 urandom uses cryptographically strong mixing (SHA-1) and has
 enormous state, so it should be secure barring pathological
 cases like the router vendors whose version of Linux failed to
 initialise things properly or an enemy who already has root on
 your system so he/she can look at kernel internals. (and that
 enemy has much better targets to go after).


IMHO relevant to this discussion is this article on LWN
(https://lwn.net/Articles/525459/), which describes the developments of
recent years in random number generation in the Linux kernel. In particular,
it discusses the talk Do not Play Dice With Random Numbers by H. Peter
Anvin at LinuxCon Europe 2012
(https://events.linuxfoundation.org/images/stories/pdf/lceu2012_anvin.pdf).

Randomness is a subtle property. To illustrate this, Peter displayed a
photograph of three icosahedral dice that he'd thrown at home, saying: here,
if you need a random number, you can use 846. Why doesn't this work, he
asked. First of all, a random number is only random once. In addition, it is
only random until we know what it is. These are not the same thing. Peter
noted that it is possible to misuse a random number by reusing it, and this
can lead to breaches in security protocols.

Best Regards

Best Regards


Re: [cryptography] open letter to Phil Zimmermann and Jon Callas of Silent Circle, re: Silent Mail shutdown

2013-08-17 Thread ianG

On 17/08/13 20:08 PM, Jon Callas wrote:

On Aug 17, 2013, at 2:41 AM, ianG i...@iang.org wrote:


So back to Silent Circle.  One known way to achieve some control over their 
closed source replacement vulnerability is to let an auditor into their inner 
circle, so to speak.


One correction of fact:

Our source is not closed source. It's up on GitHub under a non-commercial 
BSD-variant license, which I know isn't OSI, but anyone who wants to build, 
use, and even distribute their verified version is free to do so.



Apologies, ack -- I noticed that in your post.

(And I think for crypto/security products, the BSD-licence variant is 
more important for getting it out there than any OSI grumbles.)



Secondly, we have auditors in the mix. We are customers of Leviathan Security and their 
virtual security officer program. They do regular code audits, network 
audits, and are helping us create a software development lifecycle.



Ah ok.  Will they be writing an audit report?  Something that will give 
us trust that more people are sticking their name to it?






iang





Re: [cryptography] Reply to Zooko (in Markdown)

2013-08-17 Thread Jon Callas

On Aug 17, 2013, at 12:49 AM, Bryan Bishop kanz...@gmail.com wrote:

 On Sat, Aug 17, 2013 at 1:04 AM, Jon Callas j...@callas.org wrote:
 It's very hard, even with controlled releases, to get an exact byte-for-byte 
 recompile of an app. Some compilers make this impossible because they 
 randomize the branch prediction and other parts of code generation. Even when 
 the compiler isn't making it literally impossible, without an exact copy of 
 the exact tool chain with the same linkers, libraries, and system, the code 
 won't be byte-for-byte the same. Worst of all, smart development shops use 
 the *oldest* possible tool chain, not the newest one because tool sets are 
 designed for forwards-compatibility (apps built with old tools run on the 
 newest OS) rather than backwards-compatibility (apps built with the new tools 
 run on older OSes). Code reliability almost requires using tool chains that 
 are trailing-edge.
 
 Would providing (signed) build vm images solve the problem of distributing 
 your toolchain?

Maybe. The obvious counterexample is a compiler that doesn't deterministically 
generate code, but there's lots and lots of hair in there, including potential 
problems in distributing the tool chain itself: copyrighted tools, 
libraries, etc.
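
For what it's worth, the consuming side of Bryan's suggestion is easy to
sketch (hypothetical file names; the check proves the image is the one the
vendor signed, not that the toolchain inside it is honest):

```python
# Sketch: verify a vendor-signed build VM image before building with it.
# File names are hypothetical; assumes gpg is installed with the
# vendor's public key imported. Proves provenance, not toolchain honesty.
import hashlib, subprocess

IMAGE, SIG = "build-vm.img", "build-vm.img.sig"

subprocess.run(["gpg", "--verify", SIG, IMAGE], check=True)

# Pin the exact image used, so later builds can be tied back to it.
h = hashlib.sha256()
with open(IMAGE, "rb") as f:
    for chunk in iter(lambda: f.read(65536), b""):
        h.update(chunk)
print("image verified, sha256 =", h.hexdigest())
```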

But let's not rathole on that, and get to brass tacks.

I *cannot* provide an argument of security that can be verified on its own. 
This is Gödel's second incompleteness theorem. A set of statements S cannot be 
proved consistent on its own. (Yes, that's a minor handwave.)

All is not lost, however. We can say, "Meh, good enough," and the problem is 
solved. Someone else can construct a *verifier* that is some set of policies 
(I'm using the word policy but it could be a program) that verifies the 
software. However, the verifier can only be verified by a set of policies that 
are constructed to verify it. The only escape is to decide, at some point, 
"meh, good enough."

I brought Ken Thompson into it because he actually constructed a rootkit that 
would evade detection and described it in his Turing Award lecture. It's not 
*just* philosophy and theoretical computer science. Thompson flat-out says 
that at some point you have to trust the people who wrote the software, because 
if they want to hide things in the code, they can.

I hope I don't sound like a broken record, but a smart attacker isn't going to 
attack there, anyway. A smart attacker doesn't break crypto, or suborn 
releases. They do traffic analysis and make custom malware. Really. Go look at 
what Snowden is telling us. That is precisely what all the bad guys are doing. 
Verification is important, but that's not where the attacks come from (ignoring 
the notable exceptions, of course).

One of my tasks is to get better source releases out there. However, I also 
have to prioritize it with other tasks, including actual software improvements. 
We're working on a release that will tie together some new anti-surveillance 
code along with a better source release. We're testing the new source release 
process with some people not in our organization, as well. It will get better; 
it *is* getting better.

Jon





Re: [cryptography] open letter to Phil Zimmermann and Jon Callas of Silent Circle, re: Silent Mail shutdown

2013-08-17 Thread Jon Callas

On Aug 17, 2013, at 10:41 AM, ianG i...@iang.org wrote:

 Apologies, ack -- I noticed that in your post.
 
 (And I think for crypto/security products, the BSD-licence variant is more 
 important for getting it out there than any OSI grumbles.)

Thanks. I agree with your comments in other parts of those notes that I removed 
about issues with open versus closed source. I often wish I didn't believe in 
open source, because the people doing closed source get much less flak than we 
do.

 Ah ok.  Will they be writing an audit report?  Something that will give us 
 trust that more people are sticking their name to it?

I get regular audit reports, and have since last fall. :-)

I haven't been putting them out because it felt like argument from authority: 
"Hey, don't audit this yourself, trust these guys!"

Moreover, those reports are guidance we have from an independent party on what 
to do next. I want those to be raw and unvarnished. If they're going to get 
varnished, I lose guidance and I also lose speed. A report that's made for the 
public is definitionally sanitized. I don't want to encourage sanitizing.

It's a hard problem. I understand what you want, but my goal is to provide a 
good service, not a good report.

Jon




Re: [cryptography] open letter to Phil Zimmermann and Jon Callas of Silent Circle, re: Silent Mail shutdown

2013-08-17 Thread Fabio Pietrosanti (naif)
On 8/17/13 7:08 PM, Jon Callas wrote:
 On Aug 17, 2013, at 2:41 AM, ianG i...@iang.org wrote:

  So back to Silent Circle.  One known way to achieve some control
over their closed source replacement vulnerability is to let an auditor
into their inner circle, so to speak.

 One correction of fact:

 Our source is not closed source. It's up on GitHub under a
non-commercial BSD-variant license, which I know isn't OSI, but anyone
who wants to build, use, and even distribute their verified version is
free to do so.

It would be important to have a semi-automatic alignment of the
GitHub source code base with each SilentCircle application release.
Right now the GitHub code is 6 months old.

This would allow inspection of the code before upgrading, further
improving transparency.

-- 
Fabio Pietrosanti (naif)
HERMES - Center for Transparency and Digital Human Rights
http://logioshermes.org - http://globaleaks.org - http://tor2web.org



Re: [cryptography] Reply to Zooko (in Markdown)

2013-08-17 Thread James A. Donald

On 2013-08-17 4:04 PM, Jon Callas wrote:

The problems run even deeper than the raw practicality. Twenty-nine years ago this month, in the August 1984 
issue of Communications of the ACM (Vol. 27, No. 8) Ken Thompson's famous Turing Award lecture, 
Reflections on Trusting Trust was published. You can find a facsimile of the magazine article at 
https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf and a text-searchable copy on 
Thompson's own site, http://cm.bell-labs.com/who/ken/trust.html.


An attack such as that described by Ken Thompson is extremely brittle, 
narrowly targeted, and subject to rapid bitrot.  It would only be used 
to target universally used and infrequently changing code - operating 
system code.  It is therefore irrelevant for applications.



Further, the attack is defeated, and potentially detected, by 
cross-compilation, which happens all the time during operating system 
development.



Re: [cryptography] urandom vs random

2013-08-17 Thread James A. Donald

On 2013-08-17 5:57 PM, Peter Gutmann wrote:

Nico Williams n...@cryptonector.com writes:


It might be useful to think of what a good API would be.

The problem isn't the API, it's the fact that you've got two mutually
exclusive requirements, the security geeks want the (P)RNG to block until
enough entropy is available, everyone else wants execution to continue without
being blocked.  In other words a failure of security is preferred to a failure
of functionality.  Until you resolve that conflict, no API (re)design is going
to help you.


The security geeks are the only people who want to use these.  If on 
some systems urandom is fixed to not block at startup, one cannot use it 
portably.





Re: [cryptography] Reply to Zooko (in Markdown)

2013-08-17 Thread Nico Williams
On Sat, Aug 17, 2013 at 12:50 PM, Jon Callas j...@callas.org wrote:
 On Aug 17, 2013, at 12:49 AM, Bryan Bishop kanz...@gmail.com wrote:
  Would providing (signed) build vm images solve the problem of distributing
  your toolchain?

A more interesting approach would be to use a variety of independently
sourced disassemblers to compare builds and check that object code
differences from one build to the next can be accounted for by
corresponding changes to the source code or build systems.  This is
not really tractable when you change compilers or their settings, but
at least you can get a pretty good idea as you develop of what object
code is being produced.  This is terribly time-consuming, but you can
automate the comparison process and archive results for post-mortems
as a deterrent.  You'd have to do this on multiple machines handled by
different people, and so on...

It's not too farfetched, see http://illumos.org/man/1onbld/wsdiff
(Solaris release engineering used to use this tool, and I imagine that
they still do).
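
A crude sketch of that comparison loop (assumes GNU objdump is installed;
wsdiff is far more careful about relocation, timestamp, and path noise):

```python
# Crude sketch: compare two builds at the object-code level with a
# disassembler. Build paths are hypothetical.
import subprocess

def disasm(path):
    out = subprocess.run(["objdump", "-d", "--no-show-raw-insn", path],
                         capture_output=True, text=True, check=True).stdout
    # Drop the address column so pure layout shifts don't count as diffs.
    return [line.split("\t", 1)[-1] for line in out.splitlines()]

old, new = disasm("build-old/app"), disasm("build-new/app")
changed = sum(1 for a, b in zip(old, new) if a != b)
print(f"{changed} differing instruction lines, "
      f"{abs(len(old) - len(new))} added/removed")
```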

 I *cannot* provide an argument of security that can be verified on its own.
 This is Gödel's second incompleteness theorem. A set of statements S cannot
 be proved consistent on its own. (Yes, that's a minor handwave.)

No one can.  We're in luck w.r.t. the Thompson attack: it needs care
and feeding, as it will rot if not kept up to date.  Any effort to
make it clever enough to keep up with a changing code base is likely
to lead to the attack being revealed.  Any effort to maintain it risks
detection too.  Any effort to use it risks detection.  And today a
Thompson attack would have to hide from a multiplicity of disassemblers
(possibly run on uncompromised systems), decompilers, and, of course,
tracing and debugging tools that may work at layers that the generated
exploit cannot do anything about (e.g., DTrace) without the bugged
compiler having been used to build pretty much all of those tools.
That is, I wouldn't worry too much about the Thompson attack.

 All is not lost, however. We can say, "Meh, good enough," and the problem is
 solved. Someone else can construct a *verifier* that is some set of policies
 (I'm using the word policy but it could be a program) that verifies the
 software. However, the verifier can only be verified by a set of policies
 that are constructed to verify it. The only escape is to decide, at some
 point, "meh, good enough."

Yes, it's turtles all the way down.  You stop worrying about far
enough turtles because you have no choice (and hopefully they are too
far to really affect your world).

 I hope I don't sound like a broken record, but a smart attacker isn't going
 to attack there, anyway. A smart attacker doesn't break crypto, or suborn
 releases. They do traffic analysis and make custom malware. Really. Go look
 at what Snowden is telling us. That is precisely what all the bad guys are
 doing. Verification is important, but that's not where the attacks come from
 (ignoring the notable exceptions, of course).

Indeed, the vulnerabilities from the plethora of bugs we
unintentionally create overwhelm (or should, in any reasonable
analysis) any concerns about turtles below the one immediately holding
up the Earth.

Nico
--


Re: [cryptography] Reply to Zooko (in Markdown)

2013-08-17 Thread Jeffrey Walton
On Sat, Aug 17, 2013 at 3:49 AM, Bryan Bishop kanz...@gmail.com wrote:
 On Sat, Aug 17, 2013 at 1:04 AM, Jon Callas j...@callas.org wrote:

 It's very hard, even with controlled releases, to get an exact
 byte-for-byte recompile of an app. Some compilers make this impossible
 because they randomize the branch prediction and other parts of code
 generation. Even when the compiler isn't making it literally impossible,
 without an exact copy of the exact tool chain with the same linkers,
 libraries, and system, the code won't be byte-for-byte the same. Worst of
 all, smart development shops use the *oldest* possible tool chain, not the
 newest one because tool sets are designed for forwards-compatibility (apps
 built with old tools run on the newest OS) rather than
 backwards-compatibility (apps built with the new tools run on older OSes).
 Code reliability almost requires using tool chains that are trailing-edge.


 Would providing (signed) build vm images solve the problem of distributing
 your toolchain?
You might try Fully Countering Trusting Trust through Diverse
Double-Compiling, http://www.dwheeler.com/trusting-trust/
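
In outline, Wheeler's check goes like this (a sketch with hypothetical
compiler names, not his exact tooling): if a second, independently sourced
compiler regenerates the suspect compiler's output from the same source, the
suspect binary corresponds to its source.

```python
# Sketch of Diverse Double-Compiling (Wheeler). Compiler names and
# paths are hypothetical placeholders.
import filecmp, subprocess

def build(compiler, source, output):
    subprocess.run([compiler, source, "-o", output], check=True)

build("trusted-cc", "cc-source.c", "stage1-cc")   # trusted compiler builds the source
build("./stage1-cc", "cc-source.c", "stage2-cc")  # stage1 rebuilds the same source
build("suspect-cc", "cc-source.c", "self-cc")     # suspect compiler builds it too

# With a deterministic compiler, stage2 should equal the suspect's own
# output; a mismatch is evidence of a Thompson-style compiler trap.
print("match" if filecmp.cmp("stage2-cc", "self-cc", shallow=False)
      else "MISMATCH")
```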


Re: [cryptography] open letter to Phil Zimmermann and Jon Callas of Silent Circle, re: Silent Mail shutdown

2013-08-17 Thread Peter Maxwell
On 17 August 2013 19:23, Jon Callas j...@callas.org wrote:


 On Aug 17, 2013, at 10:41 AM, ianG i...@iang.org wrote:

  Apologies, ack -- I noticed that in your post.
 
  (And I think for crypto/security products, the BSD-licence variant is
 more important for getting it out there than any OSI grumbles.)

 Thanks. I agree with your comments in other parts of those notes that I
 removed about issues with open versus closed source. I often wish I didn't
 believe in open source, because the people doing closed source get much
 less flak than we do.


I'm not sure that's true (that closed-source gets less flak).  From the
user's point of view if security issues arise in a closed-source product
then there are two possible explanations: either the vendor made a mistake
or they did it deliberately; with no way to distinguish, it can be much
more damaging to a company's reputation.  This can be demonstrated by
example: can we have a show of hands for anyone who would trust Skype to
handle anything important/sensitive?

An open-source product on the other hand - in theory at least - is more
amenable to people determining whether a problem was a mistake or
deliberate... or at least the user can make an informed choice based on the
evidence.  From a personal point of view, I don't tend to run software I
cannot look at the source for; granted that is in part due to being able to
fix problems more easily but there have been instances where I've chosen
not to use software because I've seen the state of the source and thought
"nae danger am I running that on an internet-facing interface".

So, long-story-short, I think your choice was the preferable one and any
flak you might be getting is more likely to work in your favour in the long
term, as long as you keep doing as you have done by continuing to address
those concerns.  There are complicating factors with software like
SilentCircle as I don't trust the underlying OS or firmware of any
currently available mobile device - and I trust even less any potential
recipient's device - but that's a whole other discussion, and a far more
difficult problem.




  Ah ok.  Will they be writing an audit report?  Something that will give
 us trust that more people are sticking their name to it?

 I get regular audit reports, and have since last fall. :-)

 I haven't been putting them out because it felt like argument from
 authority: "Hey, don't audit this yourself, trust these guys!"

 Moreover, those reports are guidance we have from an independent party on
 what to do next. I want those to be raw and unvarnished. If they're going
 to get varnished, I lose guidance and I also lose speed. A report that's
 made for the public is definitionally sanitized. I don't want to encourage
 sanitizing.

 It's a hard problem. I understand what you want, but my goal is to provide
 a good service, not a good report.


I personally wouldn't expect publication of internal audits.  What might
assuage peoples' concerns though is being able to verify the package they
are running has definitely been compiled from the source code that is
publicly available: people have checked the source for SilentCircle's
products - and from what I can tell, independently - so if we assume we
trust the source there needs to be a chain of trust to ensure the binary
that's being executed has not been altered (I don't expect you ever would
but it's a nice feature to be able to prove it).

The corollary to this, for the ultra-paranoid, is that the provision of a
hash/signature would probably be better done by a third party, i.e. if
Zooko is intimating that in the current model SilentCircle could distribute
a back-doored package, then there is no improvement unless the trust is
shared with an independent third party... preferably someone not subject to
US jurisdiction.
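
One concrete shape for that shared trust (a sketch; file names hypothetical):
publish a digest of each release signed independently by the vendor and by an
out-of-jurisdiction third party, and have clients require both signatures.

```python
# Sketch: require signatures from two independent parties over the same
# package before trusting it. File names are hypothetical; assumes gpg
# with both public keys imported.
import subprocess

PKG = "silentphone.apk"   # hypothetical package name

for signer, sig in [("vendor", PKG + ".vendor.sig"),
                    ("auditor", PKG + ".auditor.sig")]:
    subprocess.run(["gpg", "--verify", sig, PKG], check=True)
    print(signer, "signature OK")
```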


Re: [cryptography] urandom vs random

2013-08-17 Thread James A. Donald

On 2013-08-17 10:12 PM, Ben Laurie wrote:


What external crypto can you not fix? Windows? Then don't use 
Windows. You can fix any crypto in Linux or FreeBSD.


No you cannot.



So what? BSD's definition is superior. Linux should fix their RNG. Or 
these people who you think should implement their own should. Or they 
could just switch to BSD.


That it does not implicitly admits that you, Ben Laurie, cannot fix Linux.

We want all implementations of /dev/random and /dev/urandom to behave 
the same, and to behave correctly on all machines.  We don't have 
that.


Hence the need for each implementer to reinvent the wheel.


Re: [cryptography] Reply to Zooko (in Markdown)

2013-08-17 Thread Jon Callas


On Aug 17, 2013, at 11:00 AM, Ali-Reza Anghaie a...@packetknife.com wrote:

 On Sat, Aug 17, 2013 at 1:50 PM, Jon Callas j...@callas.org wrote:
 I hope I don't sound like a broken record, but a smart attacker isn't going
 to attack there, anyway. A smart attacker doesn't break crypto, or suborn
 releases. They do traffic analysis and make custom malware. Really. Go look
 at what Snowden is telling us. That is precisely what all the bad guys are
 doing. Verification is important, but that's not where the attacks come from
 (ignoring the notable exceptions, of course).
 
 Part of the problem is that most people can't even wrap their heads
 around what a State or non-State Tier 1 Actor would even look like.
 They bully, kill leaders, deny resources, .. heck, they kill ~users~
 to dissuade use of a given tool.
 
 Then on the flip side we think about design and architectural
 aspects that don't even ever get the chance to be used against ~any~
 adversary because we force too much philosophy down into a hole that
 may have just one device, maybe just an iPhone - and limited to
 connectivity to even use it.
 
 I've called this the problem of Western Sensibilities where we seem
 to forget the economics and geopolitics of the rest of the world.
 
 Before getting heads wrapped around all these poles that are pretty
 exclusive to the haves - go out to truly hostile territory and live
 like a have not and try to build up the OPSEC routine you want,
 complete with FOSS only and full audits, and work from the field that
 way. It's non-trivial to say the least - even if you've done it a
 hundred times from a hundred different American and European venues.

I've had the privilege on several occasions to talk to people who really do 
this stuff. A couple of things really stuck with me:

* Don't patronize us. We know what we're doing, we know what we're up 
against. The guy who told me this had his brother murdered horribly. His 
tradecraft was basic and elegant.

* Simple, usable countermeasures are best because they have to be used by the 
sort of person who decided yesterday that they're not going to take it any 
more. They're newly-minted heroes who a threat to themselves and others if they 
screw up what they're doing. We asked them what they'd like most and the answer 
was SSL on websites. This was after Diginotar and we'd been talking about 
advanced threats, so we were a bit taken aback. They explained that the biggest 
problems are people putting stuff on websites as well as mistakes like making 
calendar entries for times and places of meetings. 

That put a fine point on the admonition not to patronize them. Heck, the 
adversaries don't have to crack anything sophisticated when they can just sniff 
CalDAV.

Jon




Re: [cryptography] open letter to Phil Zimmermann and Jon Callas of Silent Circle, re: Silent Mail shutdown

2013-08-17 Thread dan

On the somewhat tangential-to-cryptography topic of open versus
closed source, may I suggest that the metrics that address the
question are the classic ones that define availability: mean time
between failure (MTBF) and mean time to repair (MTTR).  As you know,
you get 100% availability by driving MTBF to infinity or MTTR to
zero.  At this point in history with the array of installed base and
vested interests that we have, I'd suggest that further investment
in driving MTBF to infinity is a poorer spend than investing in
driving MTTR to zero.  On that proposition, open source wins:
while it is true that closed source is better out of the box on
average, open source has a brisker repair time.  Or so it seems to
this observer.
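
In symbols, the identity dan is invoking is A = MTBF / (MTBF + MTTR), so
A -> 1 either as MTBF -> infinity or as MTTR -> 0; a toy comparison
(made-up numbers):

```python
# Toy comparison of the two spends, using A = MTBF / (MTBF + MTTR).
# The hours are made up for illustration.
def availability(mtbf, mttr):
    return mtbf / (mtbf + mttr)

print(availability(1000, 10))   # baseline:           ~0.9901
print(availability(2000, 10))   # doubled MTBF:       ~0.9950
print(availability(1000, 1))    # 10x faster repair:  ~0.9990
```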

--dan



Re: [cryptography] urandom vs random

2013-08-17 Thread Peter Gutmann
yersinia yersinia.spi...@gmail.com writes:

To illustrate this, Peter displayed a photograph of three icosahedral dice
that he'd thrown at home, saying: here, if you need a random number, you can
use 846.

And there's the problem: he used a D20, so there's a bias in the results.  If 
he'd used a D16 like this one, 
http://farm5.staticflickr.com/4056/4423369473_45e6fee61f_z.jpg, then there 
wouldn't be a problem.  I keep 8D16 on my desk for just this purpose, two 
rolls and I've got an AES key.
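
For the record, the arithmetic (each D16 face is one hex digit, i.e. 4 bits,
so a roll of 8D16 gives 32 bits; a 128-bit AES key strictly takes four such
rolls):

```python
# Dice-to-key arithmetic: 8 sixteen-sided dice give 8 hex digits
# (32 bits) per roll, so four rolls make an AES-128 key. The example
# rolls below are made up.
rolls = [[14, 2, 7, 11, 0, 5, 9, 12],
         [3, 15, 1, 8, 6, 10, 4, 13],
         [7, 7, 2, 0, 11, 5, 14, 1],
         [9, 12, 3, 6, 15, 8, 2, 10]]

key_hex = "".join("%x" % d for roll in rolls for d in roll)
assert len(key_hex) * 4 == 128
print("AES-128 key:", key_hex)
```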

Peter.