Re: [cryptography] no-keyring public

2013-08-24 Thread William Yager

On Aug 24, 2013, at 11:30 AM, Krisztián Pintér pinte...@gmail.com wrote:

 we can do that. how about this? stretch the password with some KDF, derive a 
 seed for a PRNG, and use the PRNG to create the key pair. if the algorithm 
 is fixed, it will end up with the same keypair every time. voila, no-keyring 
 password-only public key cryptography.
 
 do you see any downsides to that, besides the obvious ones that follow from 
 the no-keyring requirement? (slow, weak password.)

You mean like a Bitcoin brain wallet? 

And yes, the downside is that they're very susceptible to brute-force attacks. 
I suppose this is even more of a problem for Bitcoin wallets than for other 
signature schemes, since funded addresses are publicly visible and an attacker 
can test password guesses offline against the entire blockchain.
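
For concreteness, here is a minimal sketch of the scheme quoted above (KDF 
stretch, then a deterministic keypair from the seed). It assumes Python with 
hashlib's scrypt and the pyca/cryptography package; the choice of scrypt, the 
parameters, and Ed25519 are my own illustrative assumptions, not anything the 
original post specified.

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def keypair_from_password(password: bytes, salt: bytes) -> Ed25519PrivateKey:
        # Stretch the password with a memory-hard KDF to slow down guessing.
        # Note: a true no-keyring scheme needs a fixed or public salt, which
        # is exactly why brute force is the main worry.
        seed = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
        # Ed25519 keys are derived deterministically from a 32-byte seed, so
        # the same password always reproduces the same keypair.
        return Ed25519PrivateKey.from_private_bytes(seed)

    # Same input, same keypair, no keyring stored anywhere:
    key = keypair_from_password(b"correct horse battery staple", b"fixed-public-salt")
    pub = key.public_key()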

Will





Re: [cryptography] urandom vs random

2013-08-19 Thread William Yager
On Aug 19, 2013, at 7:46 PM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:

 You can get them for as little as $50 in the form of USB-key media players
 running Android.  Or if you really insist on doing the whole thing yourself,
 get something like an EA-XPR-003 ($29 in single-unit quantities from Digikey,
 http://www.digikey.com/product-detail/en/EA-XPR-003/EA-XPR-003-ND/2410099) and
 solder on a zener diode and a few I2C environmental sensors for
 noise/unpredictability generation.
 Peter.

If you're interested in building something like this, you may want to start 
with this simple project I posted on GitHub a while back: 
https://github.com/wyager/TeensyRNG

It's a simple but (I think) pretty secure hardware PRNG that takes 
environmental noise and securely mixes it into an internal entropy pool. It 
does a few nice things like input debiasing, cryptographic mixing, etc. With a 
few small changes you could slap it on pretty much any microcontroller or SoC 
and get a pretty decent entropy stick. I used the $19 Teensy, and it generates 
about 100 bytes/sec of what is probably pretty good pseudorandom data. No 
guarantees, of course; I probably made some fatal mistake that would render it 
useless in certain contexts, but like I said, it's a place to start.
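
(For anyone curious about the general idea rather than the Teensy specifics, 
here is a rough Python sketch of the debias-and-mix approach. It is an 
illustration under my own assumptions, not the actual TeensyRNG firmware, and 
SHA-256 stands in for whatever mixing function you prefer.)

    import hashlib

    def von_neumann_debias(bits):
        # Input debiasing: read raw sensor bits in pairs and keep the first
        # bit of each unequal pair. Removes constant bias, assuming the raw
        # bits are independent of each other.
        return [bits[i] for i in range(0, len(bits) - 1, 2) if bits[i] != bits[i + 1]]

    class EntropyPool:
        def __init__(self):
            self.state = bytes(32)

        def mix(self, noise: bytes) -> None:
            # Cryptographic mixing: fold new noise into the pool with a hash
            # so that earlier inputs are never exposed directly.
            self.state = hashlib.sha256(self.state + noise).digest()

        def read(self, n: int) -> bytes:
            # Expand the pool through a one-way function so callers get
            # output bytes without learning the pool state itself.
            out = b""
            counter = 0
            while len(out) < n:
                out += hashlib.sha256(self.state + counter.to_bytes(4, "big")).digest()
                counter += 1
            return out[:n]

In the real device the mix() input would come from the environmental sensors, 
and read() would only be called once enough debiased bits have been collected.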

Will




Re: [cryptography] [liberationtech] Heml.is - The Beautiful Secure Messenger

2013-07-13 Thread William Yager
It's nice that you can be so cavalier about this, but if your system's RNG is 
fundamentally broken, it doesn't really matter so much whether your other stuff 
is well-programmed or not. At least if my web browser is remotely exploitable, 
it doesn't break my disk encryption software, GPG, SSH, every other web browser 
I'm using, and pretty much every crypto appliance on my machine.

I'd rather have a rickety shed built on solid ground than a castle built on 
quicksand.

On Jul 12, 2013, at 11:32 PM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:

 William Yager will.ya...@gmail.com writes:
 
 no cryptographer ever got hurt by being too paranoid, and not trusting your
 hardware is a great place to start.
 
 And while you're lying awake at night worrying whether the Men in Black have
 backdoored the CPU in your laptop, you're missing the fact that the software
 that's using the random numbers has 36 different buffer overflows, of which 27
 are remote-exploitable, and the crypto uses an RSA exponent of 1 and AES-CTR
 with a fixed IV.
 
 Peter.
 



Re: [cryptography] [liberationtech] Heml.is - The Beautiful Secure Messenger

2013-07-12 Thread William Yager
There are plenty of ways to design an apparently random number generator so
that you can predict the output (exactly or approximately) without causing
any obvious flaws in the pseudorandom output stream. Even the smallest bias
can significantly reduce security. This could be a critical failure, and we
have no way to determine whether or not it is happening.
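
As a toy illustration of the first point (hypothetical Python, not a claim 
about how any real hardware behaves): a generator that simply emits an AES-CTR 
keystream under a key the designer keeps for themselves will look statistically 
perfect, yet the designer can replay its entire output.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    DESIGNER_KEY = os.urandom(16)  # known only to whoever planted the backdoor

    class BackdooredRNG:
        def __init__(self):
            self.counter = 0

        def random_bytes(self, n: int) -> bytes:
            # The output passes statistical randomness tests, but anyone who
            # holds DESIGNER_KEY can regenerate every byte ever emitted.
            nonce = self.counter.to_bytes(16, "big")
            self.counter += 1
            keystream = Cipher(algorithms.AES(DESIGNER_KEY), modes.CTR(nonce)).encryptor()
            return keystream.update(bytes(n))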

As for preventing potential security holes and making the backdoor
deniable, that takes a little more thinking.

And as for legal issues, there are any number of hand-wavy blame-shifting
schemes that Intel, and whoever else would want to backdoor their RNG, could use.

I contest the idea that we should ignore the fact that Intel's RNG could be
backdoored. Just because other problems exist doesn't mean we should ignore
this one. I agree that perhaps worrying about this constitutes being too
paranoid, but no cryptographer ever got hurt by being too paranoid, and
not trusting your hardware is a great place to start.

On Fri, Jul 12, 2013 at 7:20 PM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:

 Nico Williams n...@cryptonector.com writes:

 I'd like to understand what attacks NSA and friends could mount, with Intel's
 witting or unwitting cooperation, particularly what attacks that *wouldn't*
 put civilian (and military!) infrastructure at risk should details of a
 backdoor leak to the public, or *worse*, be stolen by an antagonist.

 Right.  How exactly would you backdoor an RNG so (a) it could be effectively
 used by the NSA when they needed it (e.g. to recover Tor keys), (b) not affect
 the security of massive amounts of infrastructure, and (c) be so totally
 undetectable that there'd be no risk of it causing a s**tstorm that makes the
 $0.5B FDIV bug seem like small change (not to mention the legal issues, since
 this one would have been inserted deliberately, so we're probably talking
 bet-the-company amounts of liability there).

 I'm *not* saying that my wishing is an argument for trusting Intel's RNG --
 I'm sincerely trying to understand what attacks could conceivably be mounted
 through a suitably modified RDRAND with low systemic risk.

 Being careful is one thing, being needlessly paranoid is quite another.  There
 are vast numbers of issues that crypto/security software needs to worry about
 before getting down to "has Intel backdoored their RNG".

 Peter.


Re: [cryptography] Looking for earlier proof: no secure channel without previous secure channel

2013-06-07 Thread William Yager
Precisely. You have no way of knowing anything about the alleged identity 
behind a key without having some form of interaction through a secure channel 
(like real-world interaction). 

On Jun 7, 2013, at 3:53 PM, Florian Weimer f...@deneb.enyo.de wrote:

 Practically speaking, this is true.  Maybe I'm a bit naïve, but I
 expect it's difficult to model such global semantic concerns
 accurately, including the fact that you cannot actually use public
 keys as identities because in reality, no one wants to talk to a key.






Re: [cryptography] Looking for earlier proof: no secure channel without previous secure channel

2013-06-07 Thread William Yager
We're starting to tread into very philosophical territory. I'd argue that
users on the Silk Road (sellers especially) are, in fact, authenticated
over very informal separate secure channels.

One secure channel is that of the Silk Road website itself. A key's presence
on the website lends some credence to the idea that its owner is who they
claim to be. This, it could be argued, is a very fuzzy form of authentication.

A second secure channel is the review system on the Silk Road. People
reviewing the salesmen, perhaps using crypto to authenticate their reviews,
represent a *very* informal kind of certification system/web of trust.
This is another authentication channel.

It's important to realize that an identity is more than just a name or an
ID number. Most authentication systems only care about authenticating names
and ID numbers, but other authentication systems (like the informal one
used on the Silk Road) are about authenticating the part of someone's
alleged identity that says "I sell drugs and I won't screw you over
somehow." It's not always "Alice wants to talk to Bob." Sometimes it's "A
legitimate drug purchaser wants to talk to a legitimate drug vendor." The
names don't matter, so they don't have to be authenticated over a secure
channel.

So I don't think it's accurate to say that people want to talk to the key.
They actually want to talk to a specific thing behind the key. However,
instead of wanting to talk to an identity that has the property of being
named "John Doe" or what have you, like we usually do in crypto, they want
to talk to an identity that has the property of being "a drug salesman."


On Fri, Jun 7, 2013 at 9:02 PM, James A. Donald jam...@echeque.com wrote:

 On 2013-06-08 6:53 AM, Florian Weimer wrote:

 you cannot actually use public
 keys as identities because in reality, no one wants to talk to a key.


 Again, Silk Road is a counterexample.  That is a key that people do want
 to talk to.

