Re: [cryptography] no-keyring public

2013-08-24 Thread William Yager

On Aug 24, 2013, at 11:30 AM, Krisztián Pintér  wrote:

> we can do that. how about this? stretch the password with some KDF, derive a 
> seed to a PRNG, and use the PRNG to create the key pair. if the algorithm
> is fixed, it will end up with the same keypair every time. voila, no-keyring 
> password-only public key cryptography.
> 
> do you see any downsides to that, besides the obvious ones that follow from 
> the no-keyring requirement? (slow, weak password.)

You mean like a Bitcoin brain wallet? 

And yes, the downside is that they're very susceptible to brute-force attacks. 
I suppose this is even more the case with Bitcoin brain wallets than with other 
signature schemes, since an attacker can grind candidate passwords offline and 
check the derived addresses against the public blockchain.

Will





Re: [cryptography] urandom vs random

2013-08-19 Thread William Yager
On Aug 19, 2013, at 7:46 PM, Peter Gutmann  wrote:

> You can get them for as little as $50 in the form of USB-key media players
> running Android.  Or if you really insist on doing the whole thing yourself,
> get something like an EA-XPR-003 ($29 in single-unit quantities from Digikey,
> http://www.digikey.com/product-detail/en/EA-XPR-003/EA-XPR-003-ND/2410099) and
> solder on a zener diode and a few I2C environmental sensors for
> noise/unpredictability generation.
> Peter.

If someone is interested in building something like this, you may want to start 
with this simple project I posted on Github a while back. 
https://github.com/wyager/TeensyRNG

It's a simple but (I think) reasonably secure hardware PRNG that takes 
environmental noise and securely mixes it into an internal entropy pool. It 
does a few nice things like input debiasing and cryptographic mixing. With a 
few small changes you could slap it on pretty much any microcontroller or SoC 
and get a decent entropy stick. I used the $19 Teensy, and it generates about 
100 bytes/sec of what is probably good pseudorandom data. No guarantees, of 
course. I probably made some fatal mistake that would render it useless in 
certain contexts, but like I said, it's a place to start.
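
If anyone wants to see the general shape of those two tricks, here is a rough
sketch in Python rather than the C that actually runs on the Teensy; read_noise_bit()
is a hypothetical stand-in for whatever noisy sensor you sample, and none of this is
the TeensyRNG code itself.

# Sketch: von Neumann debiasing of a raw noise source, plus hash-based
# mixing into a small entropy pool. Illustration only.
import hashlib

def read_noise_bit() -> int:
    """Hypothetical: return one raw, possibly biased bit from a noise source."""
    raise NotImplementedError

def debiased_bits(n: int) -> list:
    """Von Neumann debiasing: keep the first bit of each unequal pair."""
    out = []
    while len(out) < n:
        a, b = read_noise_bit(), read_noise_bit()
        if a != b:          # (0,1) -> 0, (1,0) -> 1; equal pairs are discarded
            out.append(a)
    return out

class EntropyPool:
    def __init__(self):
        self.pool = b"\x00" * 32

    def stir(self, bits) -> None:
        # Cryptographic mixing: fold new (debiased) bits into the pool hash.
        self.pool = hashlib.sha256(self.pool + bytes(bits)).digest()

    def read(self, nbytes: int) -> bytes:
        out, counter = b"", 0
        while len(out) < nbytes:
            out += hashlib.sha256(self.pool + counter.to_bytes(4, "big")).digest()
            counter += 1
        # Ratchet the pool so earlier outputs can't be reconstructed later.
        self.pool = hashlib.sha256(self.pool + b"ratchet").digest()
        return out[:nbytes]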

Will




Re: [cryptography] [liberationtech] Heml.is - "The Beautiful & Secure Messenger"

2013-07-12 Thread William Yager
It's nice that you can be so cavalier about this, but if your system's RNG is 
fundamentally broken, it doesn't matter much how well-written the rest of your 
software is. At least if my web browser is remotely exploitable, it doesn't 
break my disk encryption software, GPG, SSH, every other web browser I'm using, 
and pretty much every other cryptographic application on my machine.

I'd rather have a rickety shed built on solid ground than a castle built on 
quicksand.

On Jul 12, 2013, at 11:32 PM, Peter Gutmann  wrote:

> William Yager  writes:
> 
>> no cryptographer ever got hurt by being too paranoid, and not trusting your
>> hardware is a great place to start.
> 
> And while you're lying awake at night worrying whether the Men in Black have
> backdoored the CPU in your laptop, you're missing the fact that the software
> that's using the random numbers has 36 different buffer overflows, of which 27
> are remote-exploitable, and the crypto uses an RSA exponent of 1 and AES-CTR
> with a fixed IV.
> 
> Peter.
> 
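
As an aside, the "AES-CTR with a fixed IV" failure Peter mentions is easy to
demonstrate concretely. A minimal Python sketch (using the cryptography package,
with made-up messages of my own) shows that reusing the counter block turns the
cipher into a reused one-time pad:

# Demo: AES-CTR with a fixed IV reuses its keystream, so XORing two
# ciphertexts reveals the XOR of the two plaintexts.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
fixed_iv = b"\x00" * 16          # the bug: the same counter block every time

def encrypt(msg: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CTR(fixed_iv)).encryptor()
    return enc.update(msg) + enc.finalize()

p1, p2 = b"attack at dawn!!", b"retreat at noon!"
c1, c2 = encrypt(p1), encrypt(p2)

# An eavesdropper who never sees the key still learns p1 XOR p2.
leak = bytes(x ^ y for x, y in zip(c1, c2))
assert leak == bytes(x ^ y for x, y in zip(p1, p2))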



Re: [cryptography] [liberationtech] Heml.is - "The Beautiful & Secure Messenger"

2013-07-12 Thread William Yager
There are plenty of ways to design an apparently random number generator so
that you can predict the output (exactly or approximately) without causing
any obvious flaws in the pseudorandom output stream. Even the smallest bias
can significantly reduce security. This could be a critical failure, and we
have no way to determine whether or not it is happening.
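
To make the first point concrete, here is a toy illustration in Python (using the
cryptography package; entirely hypothetical, not a claim about any real hardware): a
"random" generator that is just AES-CTR under a key known to whoever built it. Its
output looks statistically perfect, yet the builder can reproduce every byte.

# Toy "backdoored" RNG: indistinguishable from random to everyone except
# the party who knows BUILDER_KEY. Illustration only.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BUILDER_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")  # known to the builder

class BackdooredRNG:
    def __init__(self, device_id: bytes):
        # The per-device nonce is something the builder can recompute.
        nonce = device_id.ljust(16, b"\x00")[:16]
        self._enc = Cipher(algorithms.AES(BUILDER_KEY), modes.CTR(nonce)).encryptor()

    def read(self, n: int) -> bytes:
        # AES-CTR keystream: passes black-box statistical tests, but is
        # fully predictable given BUILDER_KEY and the device ID.
        return self._enc.update(b"\x00" * n)

rng = BackdooredRNG(b"serial-0001")
print(rng.read(32).hex())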

As for preventing potential security holes and making the backdoor
deniable, that takes a little more thinking.

And for legal issues, there are any number of hand-wavy blame-shifting
schemes that Intel and whoever would want to backdoor their RNG could use.

I contest the idea that we should ignore the fact that Intel's RNG could be
backdoored. Just because other problems exist doesn't mean we should ignore
this one. I agree that perhaps worrying about this constitutes being "too
paranoid", but no cryptographer ever got hurt by being too paranoid, and
not trusting your hardware is a great place to start.

On Fri, Jul 12, 2013 at 7:20 PM, Peter Gutmann wrote:

> Nico Williams  writes:
>
> >I'd like to understand what attacks NSA and friends could mount, with Intel's
> >witting or unwitting cooperation, particularly what attacks that *wouldn't*
> >put civilian (and military!) infrastructure at risk should details of a
> >backdoor leak to the public, or *worse*, be stolen by an antagonist.
>
> Right.  How exactly would you backdoor an RNG so (a) it could be effectively
> used by the NSA when they needed it (e.g. to recover Tor keys), (b) not affect
> the security of massive amounts of infrastructure, and (c) be so totally
> undetectable that there'd be no risk of it causing a s**tstorm that makes the
> $0.5B FDIV bug seem like small change (not to mention the legal issues, since
> this one would have been inserted deliberately, so we're probably talking
> bet-the-company amounts of liability there).
>
> >I'm *not* saying that my wishing is an argument for trusting Intel's RNG --
> >I'm sincerely trying to understand what attacks could conceivably be mounted
> >through a suitably modified RDRAND with low systemic risk.
>
> Being careful is one thing, being needlessly paranoid is quite another.  There
> are vast numbers of issues that crypto/security software needs to worry about
> before getting down to "has Intel backdoored their RNG".
>
> Peter.


Re: [cryptography] Looking for earlier proof: no secure channel without previous secure channel

2013-06-07 Thread William Yager
We're starting to tread into very philosophical territory. I'd argue that
users on the Silk Road (sellers especially) are, in fact, authenticated
over very informal separate secure channels.

One "secure channel" is that of the Silk Road website itself. By being on
the website, it lends some credence to the idea that the owner of the key
you are communicating with is who they claim to be. This, it could be
argued, is a very fuzzy form of authentication.

A second "secure channel" is the review system on the Silk Road. People
reviewing the salesmen, perhaps using crypto to authenticate their reviews,
represents a *very* informal kind of certification system/web of trust.
This is another authentication channel.

It's important to realize that an identity is more than just a name or an
ID number. Most authentication systems only care about authenticating names
and ID numbers, but other authentication systems (like the informal one
used on the Silk Road) are about authenticating the part of someone's
alleged identity that says "I sell drugs" and "I won't screw you over
somehow". It's not always "Alice wants to talk to Bob." Sometimes it's "A
legitimate drug purchaser wants to talk to a legitimate drug vendor." The
names don't matter, so they don't have to be authenticated over a secure
channel.

So I don't think it's accurate to say that people want to talk to the key.
They actually want to talk to a specific thing behind the key. However,
instead of wanting to talk to an identity that has the property of being
named "John Doe" or what have you, like we usually do in crypto, they want
to talk to an identity that has the property of being a drug salesman.


On Fri, Jun 7, 2013 at 9:02 PM, James A. Donald  wrote:

> On 2013-06-08 6:53 AM, Florian Weimer wrote:
>
>> you cannot actually use public
>> keys as identities because in reality, no one wants to talk to a key.
>>
>
> Again, Silk road is a counter example.  That is a key that people do want
> to talk to.
>
>
>


Re: [cryptography] Looking for earlier proof: no secure channel without previous secure channel

2013-06-07 Thread William Yager
Precisely. You have no way of knowing anything about the alleged identity 
behind a key without having some form of interaction through a secure channel 
(like real-world interaction). 

On Jun 7, 2013, at 3:53 PM, Florian Weimer  wrote:

> Practically speaking, this is true.  Maybe I'm a bit naïve, but I
> expect it's difficult to model such global semantic concerns
> accurately, including the fact that you cannot actually use public
> keys as identities because in reality, no one wants to talk to a key.






Re: [cryptography] skype backdoor confirmation

2013-05-16 Thread william yager
> You do have to wonder if Apple backdoored their IM client,

I am a little curious about Apple's iMessage encryption system. From the
bits and pieces I've picked up across the net, it sounds like Apple holds a
keyring containing the public keys of all your iMessage-using devices. When
someone wants to send you an iMessage, they download the keyring and
encrypt the message for all of those public keys. When you add a new device
to your account, Apple adds its public key to the keyring, all future
messages are encrypted for that device as well, and all your devices show
an alert that a new device is on the account.

If that's correct, I'm curious how, when I add a new device to my iMessage
account, all my old IMs show up in the chat history on the *new* device. It
would appear that someone who possesses a cleartext copy of the messages is
re-encrypting them with the new device's public key.
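
To be concrete about the model I'm describing, here is a rough sketch of the
fan-out idea in Python (the device names and the RSA-OAEP choice are my own
illustration, not Apple's actual protocol): one copy of the message is encrypted
for each device public key on the account.

# Sketch of per-device fan-out encryption. Illustration only; not iMessage.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for the account "keyring": one keypair per registered device.
devices = {name: rsa.generate_private_key(public_exponent=65537, key_size=2048)
           for name in ("phone", "laptop")}
keyring = {name: key.public_key() for name, key in devices.items()}

def send(message: bytes, keyring: dict) -> dict:
    """Encrypt the message once per registered device public key."""
    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    return {name: pub.encrypt(message, oaep) for name, pub in keyring.items()}

ciphertexts = send(b"hello from the sender", keyring)

# A device added *after* this point has no ciphertext addressed to it, which
# is exactly why old history appearing on a new device is suspicious.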

Will


On Thu, May 16, 2013 at 2:52 PM, Adam Back  wrote:

> So when I saw this article
> http://www.h-online.com/security/news/item/Skype-with-care-Microsoft-is-reading-everything-you-write-1862870.html
>
> I was disappointed that the rumoured Skype backdoor is claimed to be real,
> and that they have evidence.  The method by which they confirmed it is kind
> of odd - not only is Skype eavesdropping, but it's doing HEAD requests on
> SSL sites that have URLs pasted into the Skype chat!
>
> Now I've worked with a few of the German security outfits before, though not
> Heise, and they are usually top-notch, so if they say it's confirmed, you are
> generally advised to believe them.  And the date on the article is a couple
> of days old, but I tried it anyway.  I set up a non-indexed, /dev/urandom-
> generated long filename, and saved it as PHP with a meta-refresh to a known
> malware site in case that's a trigger, plus a passive HTML page with no
> refresh and no args.  I passed a username/password via ?user=foo&password=bar
> to the PHP one and sent the links over Skype to Ian Grigg, who I saw was
> online, with strict instructions not to click.
>
> To my surprise I see these two entries in the Apache SSL log:
>
> 65.52.100.214 - - [16/May/2013:13:14:03 -0400] "HEAD /CuArhuk2veg1owOtiTofAryib7CajVisBeb8.html HTTP/1.1" 200 -
> 65.52.100.214 - - [16/May/2013:14:08:52 -0400] "HEAD /CuArhuk2veg1owOtiTofAyarrUg5blettOlyurc7.php?user=foo&pass=yeahright HTTP/1.1" 200 -
>
> I was using Skype on Ubuntu; Ian on the other end was using Mac OS X.  It
> took about 45 minutes until the hit came, so they must be batched.  (The gap
> between the two requests is because I did some work on the web server, as the
> SSL cert was expired and I didn't want that to prevent it working, nor
> anything more script-like with CGI arguments as in the article.)
>
>
> Now, are they just hoovering up the Skype IMs via the new Microsoft central
> server architecture, having backdoored the Skype client to no longer have
> end-to-end encryption (and feeding them through Echelon or whatever), or is
> it the client that is reading your IMs and sending selected things to the
> mothership?
>
> Btw, their HEAD request was completely ineffective per the weak excuse
> Microsoft offered in the article: as noted at top, my PHP contained a
> meta-refresh, which the HEAD won't see as it's in the HTML body.  (Yes, I
> confirmed via my own localhost HTTP GET, as web dev environments are
> automatic in various ways.)
>
>
> So there is adium4skype, which allows you to use OTR with your Skype
> contacts, using Skype as the transport.  Or one might be more inclined to
> drop Skype in protest.
>
> I think the spooks have been watching too much "Person of Interest" if they
> think such things are cricket.  How far does this go?  Do people need to
> worry about Microsoft IIS web servers with SSL, or Exchange servers?
>
> You do have to wonder if Apple backdoored their IM client, below the OTR, or
> Silent Circle, or the OS - I mean, how far does this go?  Jon Callas said not
> Apple, that wouldn't be cool, and Apple aims for coolness for users; maybe he
> should dig a little more.  It seems to be getting to the point where you
> can't trust anything without compiling it from source and having a good PGP
> WoT network with developers.  A distro binary possibly isn't enough in such
> an environment.
>
> Adam
>
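
If anyone wants to repeat Adam's experiment, the canary setup is easy to sketch.
The snippet below (Python; the port, host name, and token length are arbitrary
choices of mine, and it speaks plain HTTP for brevity where Adam used an HTTPS
server) just generates an unguessable URL and logs whoever fetches it:

# Sketch: generate an unguessable "canary" URL, paste it into the chat under
# test, and log every request that hits it. Any hit means something other
# than the intended human fetched the link.
import http.server
import secrets

PORT = 8080  # arbitrary
token = secrets.token_urlsafe(32)
print(f"Paste this into the chat: http://your-host:{PORT}/{token}")

class CanaryHandler(http.server.BaseHTTPRequestHandler):
    def _log_hit(self):
        print(f"HIT from {self.client_address[0]}: {self.command} {self.path}")
        self.send_response(200)
        self.end_headers()

    do_GET = do_HEAD = _log_hit

http.server.HTTPServer(("", PORT), CanaryHandler).serve_forever()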