Re: [cryptography] Math corrections [was: Let's go back to the beginning on this]

2011-09-18 Thread Ian G

On 18/09/11 2:59 PM, Arshad Noor wrote:

On 09/17/2011 09:14 PM, Chris Palmer wrote:


Thus, having more signers or longer certificate chains does not reduce
the probability of failure; it gives attackers more chances to score a
hit with (our agreed-upon hypothetical) 0.01 probability. After just
100 chances, an attacker is all but certain to score a hit.


Agreed. But, that is just a consequence of the numbers involved.


You guys have a very funny way of saying probability equals 100% but 
hey, ... as long as we get there in the end, who am I to argue :)
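The quoted hypothetical is easy to check directly. A quick sketch, assuming 100 independent attempts each with the agreed-upon 0.01 success probability:

```python
# Probability that at least one of n independent attempts succeeds,
# each with per-attempt success probability p (the hypothetical 0.01).
def p_at_least_one_hit(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

# 100 chances at p = 0.01: "all but certain" works out to roughly 63%,
# and the figure only approaches 100% as the chances keep growing.
print(round(p_at_least_one_hit(0.01, 100), 3))   # ≈ 0.634
print(round(p_at_least_one_hit(0.01, 1000), 3))  # ≈ 1.0
```

So strictly it's about 63% after 100 chances, not 100% -- but the direction of the argument is the same: more signers means more chances for the attacker.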



The real problem, however, is not the number of signers or the length
of the cert-chain; its the quality of the certificate manufacturing
process.


Which is a direct consequence of the fact that the vendors unwound the 
K6 mistake of PKI (my words), and hid the signature chain (your words).


Hence the commonly cited race to the bottom.

So, causes and effects.

The real question is, how to reverse the race to the bottom?  What tweak 
do we have in mind?




iang
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Math corrections [was: Let's go back to the beginning on this]

2011-09-18 Thread Ian G

On 18/09/11 1:54 PM, Arshad Noor wrote:


When one connects to a web-site, one does not trust all 500 CA's in
one's browser simultaneously; one only trusts the CA's in that specific
cert-chain. The probability of any specific CA from your trust-store
being compromised does not change just because the number of CA's in the
trust-store increase (unless the rate of failure incidents across
all CA's do go up).



Right, but the user doesn't care about any specific CA.  She cares about 
the system of all CAs.  My words segued from an individual CA to the 
system of CAs ... perhaps a bit too briefly.


And, the attacker has the luxury of choosing the CA, apparently :)



iang


Re: [cryptography] The Government and Trusted Third Party

2011-09-18 Thread Ian G

On 18/09/11 7:55 PM, M.R. wrote:

On 18/09/11 09:12, Jeffrey Walton wrote:


If you can secure the system from the government...

I can't possibly be the only one here that takes the
following to be axiomatic:

+++
A communication security system, which depends on a corporate
entity playing a role of a ~trusted-third-party~, can not be
made secure against a government in whose jurisdiction that
trusted-third-party operates.
+++



Right.


On the other hand, a perfectly adequate low-level retail
transaction security system can best be achieved by using a
trusted-third-party, SSL-like system.



That's a marketing claim.  Best ignored in any scientific discussion.


It follows then that we are not looking at replacing the SSL
system with something better, but at keeping the current
SSL - perhaps with some incremental improvements - for the
retail transactions,



Actually, I'd say the above conclusion follows from normal inertia 
considerations.  We can't wholesale replace SSL because there are too 
many links and lumps and levels and locales involved.


So the question is, how to tweak the current application to deal with 
the mismatch between design and use?




and designing a new system, from the
ground up, based on some a priori, contemporary and well
documented threat model. This new system should address
those applications which have spilled outside of the
(implied?) threat model on which the SSL design was based.
That new threat model must not fail to explicitly state just
who the attackers are and what capabilities and
motivations must be considered.



This would be a classical text book approach, but is unrealistic.

In the real world, figure out who is going to do this.  The who will 
dramatically drive the process.  Including the list of attackers, their 
capabilities, etc.  E.g., if Google does it, we get one result;  if 
China University School of CS do it, another result.


iang


Re: [cryptography] Math corrections

2011-09-18 Thread Ian G

On 19/09/11 3:50 AM, Arshad Noor wrote:

On 09/17/2011 10:37 PM, Marsh Ray wrote:


It really is the fact that there are hundreds of links in the chain and
that the failure of any single weak link results in the failure of the
system as a whole.


I'm afraid we will remain in disagreement on this. I do not view the
failure of a single CA as a failure of PKI, no more than I see the
crash of a single airplane as an indictment of air-travel.



His point is that the failure of a single CA is the failure of the 
entire browsing PKI.  Not PKI in concept, but all secure browsing, being 
one of the PKIs.


One single CA failure means the failure of the system.  That's the point.


Are there weaknesses in PKI? Undoubtedly! But, there are failures
in every ecosystem. The intelligent response to certificate
manufacturing and distribution weaknesses is to improve the quality
of the ecosystem - not throw the baby out with the bath-water.



Right -- how to fix the race to the bottom?



iang


Re: [cryptography] The Government and Trusted Third Party

2011-09-18 Thread Ian G

On 19/09/11 6:53 AM, James A. Donald wrote:

On 2011-09-18 7:55 PM, M.R. wrote:

It follows then that we are not looking at replacing the SSL
system with something better, but at keeping the current
SSL - perhaps with some incremental improvements - for the
retail transactions,


These days, most retail transactions have a sign in.

Sign ins are phisher food.

SSL fails to protect sign ins.



Hence, frequent suggestions to uptick the usage of client certificates, 
SRP, and SSL itself.




iang


Re: [cryptography] Math corrections

2011-09-18 Thread Ian G

On 19/09/11 7:11 AM, Marsh Ray wrote:


Now that the cat's out of the bag about PKI in general and there's an
Iranian guy issuing to himself certs for www.*.gov seemingly at will,


Hmmm... did he do that?

That would seem to get the message across to the PKI proponents far 
better than logic or explanation...  Maybe we need to ask some 
rhetorical questions of some flack?



I think the current PKI system will not escape the black hole at this
point; it crossed the event horizon sometime earlier this year.


Predictions of demise, and all that :)

iang


Re: [cryptography] Math corrections

2011-09-18 Thread Ian G

Hi Joe,

On 19/09/11 5:30 AM, Joe St Sauver wrote:

Ian asked:

#Right -- how to fix the race to the bottom?

Wasn't that supposed to be part of the Extended Validation solution?


In a way, it was.  More particularly it was the fix to certificate 
manufacturing.  The obvious fix to low quality was to create high quality.


Of course, it didn't work out that way.  DigiNotar was an EV, as were 
most of the others that were hacked.  What EV did then was to create two 
products, both with their individual race to the bottom.


So there is an underlying cause that they didn't address.


If it has failed at that, and I could see arguments either way, the
other natural solution is probably government regulation.


Which would come up with approximately the EV solution proposed.  It 
always does.  And, independent assessments of before and after 
government intervention generally show that the situation isn't any 
better for the original motivation, but it is more expensive.  And we 
know who to complain to.  So noise increases.


The fundamental flaw with government intervention is this:  they don't 
know any better.  So they ask the incumbents what to do.  The incumbents 
tell them how they can help them to make money.  So the government puts 
in a design that helps the incumbents to make money.


(In econ theory this is called barriers to entry.  Typically, the 
incumbents all agree on something that (a) raises prices together and 
(b) makes it hard for small nimble competitors to cherry pick.)




It likely
wouldn't be pretty, but imagine:

-- governmental accreditation of CAs (instead of, or in addition to,
browser vendor/CAB reviews)


QC has that, which is DigiNotar's regime.


-- governmental minimum price points for regulated products (thereby
eliminating the race to the bottom, or competition on pricing in
general)


price controls lol...


-- potentially government required insurance bonds, protecting the
public against negligence or malfeasance


EV has that.  If you know anything about the insurance market, it makes 
for hilarious reading as it gave Verisign a free pass, and forced all 
the others to pay for it.


(However, the trick to understanding it is this:  it is structured such 
that there will be no payout.)



-- governmental audits/reviews of CA compliance


QC.


-- pressure on third parties to make sure that PCI-DSS and similar
regulations mandate use of government approved CAs, only


?  Did that help?


Of course, this may be one of those Be careful what you wish for
scenarios, eh?


Yeah.  None of that will help any.  But it will certainly raise costs. 
So you'll get agreement from the large players.





iang


Re: [cryptography] The Government and Trusted Third Party

2011-09-18 Thread Ian G

Hi James,

On 19/09/11 1:39 PM, James A. Donald wrote:

On 19/09/11 6:53 AM, James A. Donald wrote:

These days, most retail transactions have a sign in.

Sign ins are phisher food.

SSL fails to protect sign ins.


On 2011-09-19 1:12 PM, Ian G wrote:

Hence, frequent suggestions to uptick the usage of client certificates,
SRP, and SSL itself.


Client certificates and SSL seem unlikely to protect sign in.



The point about SSL is two-fold:  using SSL solves a slew of other 
problems to do with cookies and hacking and so forth, as Peter points 
out.  I suppose we need a list :)


The second point is that as more and more people use SSL, there is more 
and more pressure on the vendors to address the UI.  Which leads into...



The chairman of the board cannot handle a client certificate. He
outsources that to someone in IT whose name he does not know. Not very
secure.


The problem with client certs is that they are mostly saddled with a 
horrible UI.  If the UI was slick, it would work.


The experiments we've conducted over at CAcert indicate that when it is 
up and going, and the user base is forced by one means or another to 
migrate, a properly written client-cert login procedure is far nicer and 
more secure than a password system.


However, this requires solving the chicken-and-egg problem.  We did 
that at CAcert by some serendipitous decisions.  How to do it in your 
org would be something else.


http://wiki.cacert.org/Technology/KnowledgeBase/ClientCerts/theOldNewThing



All of which are suggestions that there are low-hanging fruit.  Tweaks 
to make the system work without redesigning.



iang


Re: [cryptography] Let's go back to the beginning on this

2011-09-16 Thread Ian G

On 17/09/11 2:33 AM, Ben Laurie wrote:


A sufficiently low upper bound is convincing enough :-)



This is all the example seeks to show:  There is a low upper bound.

We really don't care whether it is 1% or 30%, or +/- 2% or finger in the 
air... as long as it is too low to be credible.


We just want to know whether there is a scaling issue such that at some 
largish number of CAs, we lose most of our trust or reliance or 
whatever word we're using today.


As long as each of the calculation methods head in that direction, we've 
found it.


As the CA business grows, the number only gets worse.  So 
we have to change the system.  QED.
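The scaling claim can be made concrete with a one-liner, using the 99%-per-CA figure from elsewhere in the thread (a sketch; per-CA independence assumed):

```python
# How far residual reliance falls as the number of CAs that must ALL
# be sound grows, assuming each CA is independently 99% trustworthy.
for n in (1, 10, 100, 600):
    print(n, round(0.99 ** n, 3))  # 0.99, ≈0.904, ≈0.366, ≈0.002
```

At 100 CAs most of the reliance is already gone; by 600 sub-CAs it is effectively nil.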


iang


Re: [cryptography] The consequences of DigiNotar's failure

2011-09-16 Thread Ian G

On 17/09/11 3:07 AM, M.R. wrote:

On 16/09/11 09:16, Jeffrey Walton wrote:

The problem is that people will probably die
due to DigiNotar's failure.


I am not the one to defend DigiNotar, but I would not make such a
dramatic assumption.

No one actively working against a government that is known to engage
in extra-legal killings will trust SSL secured e-mail to protect him
or her from the government surveillance.


IMNSHO, maybe 1% of technically savvy users will hold any view that there is a 
flaw with SSL-secured e-mail.  And technically savvy users are about 
1% of the general population.  So I'd expect around 0.01% of the population 
to have this clue.



In this particular case, if
the most often repeated hypothesis of who did it and why is correct,
it was probably done for some bottom net-fishing and will likely result
with a whole bunch of little people with secret files that will make
them second-class citizens for a long, long time, ineligible for
government jobs and similar. (For instance, I'd expect them to end up
on some oriental no-fly list).


Would you be willing to bet your life on that?

iang


Re: [cryptography] Let's go back to the beginning on this

2011-09-15 Thread Ian G
On 15/09/2011, at 15:40, Kevin W. Wall kevin.w.w...@gmail.com wrote:

  Trust is not binary.

Right. Or, in modelling terms, trust isn't absolute.

AES might be 99.99% reliable, which is approximately 100% for any million 
or so events [1].

Trust in a CA might be more like 99%.

Now, if we have a 1% untrustworthy rating for a CA, what happens when we have 
100 CAs?

Well, untrust is additive (at least). We are required to trust all the CAs. So we 
have a 100% untrustworthy rating for any system of 100 CAs or more.

The empirical numbers bear this out: of the 60 or so CAs and 600 sub-CAs, around 4 
were breached by that one attacker.
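Plugging those rough empirical figures into the same model (a sketch; the 60 + 600 and 4-breach numbers are the ones quoted above, and independence is assumed):

```python
# Rough empirical per-CA breach rate from the figures quoted above:
# ~4 breaches across ~660 CAs and sub-CAs (assumed independent).
cas = 60 + 600
breached = 4
p = breached / cas            # per-CA breach rate, ~0.6%

# Probability that a trust store relying on all of them contains at
# least one breached CA -- the weakest-link view of the system.
p_system = 1.0 - (1.0 - p) ** cas
print(f"per-CA rate {p:.4f}, system exposure {p_system:.3f}")  # ≈ 0.98
```

Even a sub-1% per-CA rate leaves the system as a whole almost certain to contain a breached member.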

So, what to do? When the entire system is untrustworthy, at some modelled level?

Do we try harder, Sarbanes-Oxley style?

Or, stop using the word trust?

Or?



Iang



[1] the reason for mentioning AES is that crypto world typically deals with 
absolutes, binaries. And this thinking pervades PKI, where architects model 
trust as a binary. Big mistake...


Re: [cryptography] Let's go back to the beginning on this

2011-09-15 Thread Ian G


On 16/09/2011, at 1:22, Andy Steingruebl a...@steingruebl.com wrote:

 On Wed, Sep 14, 2011 at 7:34 PM, Arshad Noor arshad.n...@strongauth.com 
 wrote:
 
 However, an RP must assess this risk before trusting a self-signed
 Root CA's certificate.  If you believe there is uncertainty, then
 don't trust the Root CA.  Delete their certificate from your browser
 and other applications, effectively removing all risk from that CA
 and its subordinates from your computer.  Or, choose not to do
 significant business over the internet when you see their certificate
 on a site - you always have the choice.
 
 1. You don't really always have a choice.  Many devices such as
 smartphones don't allow you to edit the trust-store.

It's far worse: the user has no choice, more or less, for all browsers.

This is deliberate policy by the participants. Vendors have organized 
(atrophied?) the security user interface to obscure any capability for average 
users to assess the roots, and have declined any opportunity to pass new 
reliance responsibilities to users.

CAs have obfuscated the policies and contracts so that users cannot figure it 
out. This also is industry practice. Technical players have also played their 
part in denying clear and simple structures.

End result is that in secure browsing, the user cannot assess. Period. Vendors 
have long recognized this failure in classical PKI thinking, and have taken on 
the role for their users: policies, audits, reviews.

In secure browsing, the vendor is the Relying Party, by proxy, on behalf of all 
users. They don't accept that in public statements, but the pattern of facts is 
undeniable. Policy, review, UI, tech, it's all there.


Iang


Re: [cryptography] Let's go back to the beginning on this

2011-09-13 Thread Ian G


On 13/09/2011, at 23:57, Jeffrey Walton noloa...@gmail.com wrote:

 On Mon, Sep 12, 2011 at 5:48 PM, James A. Donald jam...@echeque.com wrote:
--
 On 2011-09-11 4:09 PM, Jon Callas wrote:
 The bottom line is that there are places that continuity
 works well -- phone calls are actually a good one. There
 are places it doesn't. The SSL problem that Lucky has
 talked about so well is a place where it doesn't. Amazon
 can't use continuity. It is both inconvenient and insecure.
 
 Most people who login to Amazon have a long existing relationship: Hence key
 continuity and SRP would work well.
 I can't help but feel that Thomas Wu's SRP (or other PAKEs) would have
 helped the folks in Iran. A process which only requires two parties
 (Google and the individual) had three parties, one of whom failed
 spectacularly.

It's possibly worth remembering that in 1994, PKI assumptions looked better.

There were no natural authorities or TTPs on the net. The closest we got was 
Netscape, yahoo, network solutions and Postel.

For various reasons, nobody saw these players in the way that we now see the 
players. Now we have search engines, amazon, eBay, Microsoft, apple, 
competitive registries, wikipedia, cacert, eff, Mozilla,  ... And that's before 
we get to Facebook and hundreds of social networks.

The map has changed, it's chock full of natural parties of trust.



Iang


Re: [cryptography] After the dust settles -- what happens next? (v. Long)

2011-09-12 Thread Ian G
The problem with shifts of faith is that if there is really a groundswell 
against, we're as likely to miss it. People who leave generally do exactly 
that, and don't bother talking about it.

That said ..

 Some of us observe a third, more likely approach: nothing significant 
 happens due to this event.

This is a good point. The null option exists. And, given the history, it 
demands serious consideration.

Having taken on the devil's commission as advocate, I'll play it out :)

 Do you have any evidence that improving crypto is being talked about by those 
 affected in Iran? I haven't seen it yet.

I'm not sure I'd want to hear Iranians talking about improving their comsec. 
Not a good sign. Same for the Chinese ...

 Look what you just wrote. Those [dutch business/government] folks aren't 
 looking for us to fix PKIX: they are looking for different CAs. That's not a 
 collapse of faith, just a desire for a quick fix.

Now, yes. But, Blind Freddy can see they have zero choice in the matter. The 
question is, are they willing sheep, or are they future foxy converts to an 
alternate...

Look at it this way. Last year, risk analysis didn't include the scenario: your 
CA just collapsed, all your certs are rejected, and all your portals are 
in chaos.

Next year, they do.


 Could be, but neither you nor I work at Google so that's pure speculation.

Just FTR, entire post was speculation. Because...


 (There are likely some Googlers on this list who can speak authoritatively on 
 whether their management are scared as hell or even noticing.)

Googlers are unlikely to do so. Google has a firm rule about not discussing 
business outside the company.

 ? I have seen zero in the serious business press (Forbes, BusWeek, etc.)

Serious? Business? Press? Is there any such thing?

It's been a long time since I've seen any general press do more than copy 
soundbites from their favorite mouthpieces or recycle each other's stories.
...

 The governments and government contractors  ...

On this I agree with Paul. Governments will be the slowest of the slow, the 
most compliant of the compliant. Even if they wanted to, they won't budge until 
a private sector solution is unstoppable. And even then, I doubt they'll talk 
about it openly, for compliance reasons.

 Many of the people who you and I *want* to be concerned are not as 
 concerned as you say.

Sure. We tried to get people concerned for over a decade. It didn't work. This 
ain't gonna change that.

It doesn't work like that. The buying public probably is as equally concerned 
about famine in Africa or global warming or dolphin sandwiches. In each case, 
they'll ask, what can I do about it?

The answer is, today, nothing.

On the build or sell side, anyone making money doesn't want to change.  I 
speculate that might change, because for the first time, we have a builder, who 
has all the interests in-house, who's looking at losing money.

 The full damage is not even out yet. This thing is just getting started.
 
 If there is more significant damage in the future, of course people will talk 
 about it more. But that's just guessing about the future.

The point is more, is this it?  Can we say this was an isolated incident, like 
the RapidSSL thing? Or the debian thing?

Or, is there more rot under the paintwork?  That's the question that isn't 
being answered.

Try this thought experiment. Someone important phones you up and asks, is this 
it? Do we have all the bad news? Give me faith!

How to answer?

 Faith is built on certainty. Up until now, even detractors had to admit that 
PKI was certain to carry on exactly as is; certs, SSL, browsing, etc.


 ... influential people. ..

As above, if I was influential, I'd keep my mouth shut. If I wanted to shift, 
I'd know that it's easier to do it quietly.

As it is, I'm not, so I speculate openly :)

Iang


Re: [cryptography] PKI - and the threat model is ...?

2011-09-12 Thread Ian G


On 13/09/2011, at 0:15, M.R. makro...@gmail.com wrote:

 In these long and extensive discussions about fixing PKI there
 seems to be a fair degree of agreement that one of the reasons
 for the current difficulties is the fact that there was no precisely
 defined threat model, documented and agreed upon ~before~ the
 SSL system was designed and deployed.

There is a pretty good effort to do exactly that, here:

http://www.iang.org/ssl/rescorla_1.html



After reading that, you might try my critique:

http://iang.org/ssl/wytm.html

I believe Eric's attempt to be a good historical attempt to document it. As he 
says himself, he wasn't there, and worked from other sources. I've never heard 
anyone dispute his account.

 It appears to me that it is consequently surprising that again,
 in these discussions for instance, there is little or nothing
 offered to remedy that; i.e., to define the threat model
 completely independent of what the response to it might or
 might not be.

Close. I would say that the issue above is more that the incumbents refuse to 
be drawn on which threat model they are using today. That's because each of the 
models can be shown to have such grave flaws as to send responsible architects 
back to the drawing board.

Eg., You will have seen discussions this week on exactly whether the system 
protects credit cards, or introduction, or something else?

So, we enter a game, which is primarily about claiming X, showing !X, then 
claiming, but if Y followed by !Y, and then, no, but X.

One day after 2037, we'll get to the point that everyone who was alive in 1994 
agrees that the threat model for SSL was bungled. In another net-century, we 
might also have overcome the drawbacks of those times, which are that 
approximately everyone knows how to ask "what's your threat model?" but 
approximately no-one knows how to develop a good one.


Iang



Re: [cryptography] PKI - and the threat model is ...?

2011-09-12 Thread Ian G


On 13/09/2011, at 5:12, Marsh Ray ma...@extendedsubset.com wrote:

 It never was, and yet, it is asked to do that routinely today.
 
 This is where threat modeling falls flat.
 
 The more generally useful a communications facility that you develop, the 
 less knowledge and control the engineer has about the conditions under which 
 it will be used.
 
 SSL/TLS is very general and very useful. We can place very little restriction 
 on how it is deployed.

To be fair, I think this part has been done very well by the designers.  I get 
the feeling that the original designers really didn't understand the 
business and architecture side, so they backed off and decided to secure a 
low-level toolbox as best as possible. In this case, TCP.

I guess we've all been there, I know I have.

If they had then said "SSL (inc. PKI) secures TCP against a broad range of 
attacks", then all would have been consistent.

But, as soon as we get to business, these claims lose foundation. Can it be 
used to secure credit cards? Websites? Love-chat? Dissident planning?

The answer is ... Dunno!  Seriously, we have no clue.

But we still get app-level architects taking the Pareto-secure result of SSL 
and applying it to their business:

 It will be used wherever it works and feels secure. More and more 
 firewalls seem to be proxying port 80 and passing port 443. So it will 
 continue to be used a lot.
 
 Few app layer protocol designers will say this really wasn't part of the 
 SSL/TLS threat model, we should use something else. Most will say this is 
 readily available and is used by critical infrastructure and transactions of 
 far greater value than ours.
 
 It needs to be as secure as possible, but I freely admit that I don't know 
 what that means.

It's a good start :) I once tried to answer that with the concept of 
Pareto-secure, but I'm not sure the concept is self-referential as yet.

Iang


Re: [cryptography] Diginotar Lessons Learned (long)

2011-09-11 Thread Ian G


On 11/09/2011, at 10:02, James A. Donald jam...@echeque.com wrote:

 On 2011-09-11 9:10 AM, Andy Steingruebl wrote:
 1. Phishing isn't the only problem right?

Malware + breaches might be the other two biggies.

Note that the malware/pc takeover market was probably financed by profits from 
phishing. Breaches seemed to rise in parallel. OK, I've got no evidence for 
that, it's just speculation.

 2. To some degree this is a game where we have to guess their next
 step, and make that harder too.
 
 If we were doing something about their first step, then it would be necessary 
 to guess their next step.

What James said. The history of threats developing to risks to 
institutionalized loss streams (cf CC) is one of ignoring the signs while 
looking elsewhere. Phishing in its mass (post-AOL) form was first tried approx 
10 years ago against an FI. (for topical interest it was a 9/11 subject.)

It failed ... But by 2003 the early experimenters had got it right and were 
looking at a bright future.

We knew all that, and, institutionally speaking, ignored it.

The history of Internet threat analysis is equally poor.  SSL got the threat 
wrong because it predicted it - MITM. SSH got the threat right because it 
followed the losses and designed its model to beat the attackers.

Similar stories for IPSec and S/mime. Guesswork failed completely, response 
worked better, where it could.

Part of the problem is that we inherited the military threat concepts, CIA and 
all that. Another problem is that our successes aren't rewarded, theirs are. 
Hence, the net attacker gets smarter in swarm form, while we get dumber. They 
have the feedback loop, so they OODA us at a ratio of around 10:1.

Damn, there I go again, too many words. Iang


[cryptography] After the dust settles -- what happens next? (v. Long)

2011-09-11 Thread Ian G
Lucky & Peter said:
 
 Moreover, I noticed that some posts list one or more desirable properties 
 and requirements together with a proposed solution.
 
 That's the nice thing about PKI, there's more than enough fail to go around.


So, what happens now?  As we all observe, there are two approaches to dealing 
with the collapse of faith of the PKI system: incremental fixes, and complete 
rewrite. Anything that looks like a rewrite isn't going to happen, so 
incremental it is.

Perhaps a more important question is:

.. Who is going to do all this?

I think we have enough information on the table to figure it out. And that will 
answer the what/where/how.

Let's raise the bar: Firstly, only those who can deploy will succeed. Secondly, 
you have to play to win, and finally, you have to win to keep playing!

After a lot of elimination, the winner is: google!

With CA pinning and vendor-revocation.  Adam described how pinning worked, and 
correctly called it a hack. But that was just a CS observation, or apology.

This is almost entirely a governance project, and the rules of governance 
apply.  At that, the project stands tall: Risk management clearly suggests 
there is a list of high value certs to be pinned, and the architecture clearly 
identifies the client browser where this sort of processing should go on. The 
rest is implementation details; and the same could be said for its fast vendor 
revocation.

Any vendor could have done this? Right? I'd say not.

Google has one more notable advantage: it is the only player with all interests 
aligned.

This business is about interests and consequently, the solution-space has 
always been led by whosoever can form the dominant group of interested parties. 
Include one group, exclude another.

So what makes google different? All of the at-risk players can be brought into 
the same room. (This has probably never happened before.)

Like all vendors, Google is a relying party, acting as proxy for its end-users. 
However, google is also a subscriber, and therefore its users are really its 
users. With feeling, and with profits, directly.

A threat to a gmail user is a threat to the advertising, which is a threat to 
the revenue stream. It doesn't get more direct nor primary than that.

In contrast, Microsoft's revenue stream is more dispersed (windows & word) so 
developers there tend to go for grand sweeps like InfoCard and .Net. Similar 
things can be said about other vendors.

For google, and for user security, all the important parties are in the same 
room.  So, google developers can hack for these interests. In ways that 
directly protect the parties needing direct protection. In ways that others 
cannot.

So what happens now? In the near term, google will refine its revocation 
mechanism for known-bad certs.  In the fullness of time, we will have several 
distinct certificate revocation mechanisms. Call them:

Class A: authority revocation at vendor.
Class B: bad/fraud Cert, at vendor.
Class C: contract revocation by CA.

Then there is pinning. This was hacked in as a fixed list of high-profile 
sites. That's just a start. Google will expand this in its natural direction. 
What this is depends on what works for google, alone. Which means, what 
protects gmail, etc, and keeps those ads served, those buyers and sellers 
meeting.
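The pinned-list mechanism can be sketched in a few lines. Everything below is illustrative: the hostname, the key material, and the matching rule are invented, not Chrome's actual preload format.

```python
# Sketch: vendor-shipped certificate pinning for high-value sites.
# Hostnames and pins here are hypothetical, not any browser's real list.
import hashlib

# Shipped with the browser: hostname -> set of acceptable public-key hashes.
PINS = {
    "mail.example.com": {hashlib.sha256(b"example-ca-key").hexdigest()},
}

def check_pin(hostname, chain_pubkeys):
    """Accept a chain only if one of its public keys matches a pin."""
    pinned = PINS.get(hostname)
    if pinned is None:
        return True  # unpinned hosts fall back to ordinary PKI checks
    seen = {hashlib.sha256(k).hexdigest() for k in chain_pubkeys}
    return bool(seen & pinned)
```

A mis-issued cert from any other CA then fails closed for the pinned host, which is exactly the DigiNotar scenario.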

Which is the next clue. Lucky Green stated that an original goal of secure 
browsing (or the SSL system in his words) was to introduce the consumer 
reliably to a merchant site, where that first person didn't already know the 
site (second person). Hence the third party.

But in this case, google is already the third person, because it also serves 
the ad. It knows the merchant. So the next thing that is going to happen is 
google will serve up the ad reliably. Which means, one click, and we go 
HTTPS-everytime.

Straight in, shopping, happy faces. The cert will be pre-verified, and chrome 
will know it.

The rest will be implementation details. As this is a crypto list, technical 
questions will flood in like a tsunami ... But, step back from the bits & bytes, 
the crypto, the links and handovers and policies.

Think back to the goal: google wants to introduce its users to its advertisers. 
Through its browser. On its website.

Google will solve this. I could add, and I'd put good money on it! But I 
don't need to; the lion's share of google's revenues - advertising - are on it 
already.

Good luck guys!



Iang
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] PKI fixes that don't fix PKI (part III)

2011-09-10 Thread Ian G
Arrgghh apologies. I fell asleep over my iPhone and my finger slid over the 
Send button.



On 10/09/2011, at 8:46, Ian G i...@iang.org wrote:

 
 
 On 09/09/2011, at 9:11, Lucky Green shamr...@cypherpunks.to wrote:
 
 o What do I mean by the SSL system?
 
 I've taken to using TLS for the protocol, SSL in the wider context including 
 PKI/certs, and secure browsing for the headline or flagship application. 





Re: [cryptography] PKI fixes that don't fix PKI (part III)

2011-09-10 Thread Ian G
Hi Steve,

On 11/09/2011, at 1:07, Steven Bellovin s...@cs.columbia.edu wrote:

 Sorry, that doesn't work. Afaik, there is practically zero evidence of 
 Internet interception of credit cards. 
 
 This makes no sense whatsoever.

(the point here is that the original statement said we had limited Internet 
eavesdropping fraud to less than the level of card-present fraud; it is a 
loaded statement, it somehow implies mission accomplished when the reality 
isn't so clear.)

 Credit card numbers are *universally*
 encrypted; of course there's no interception of them.

I'm afraid that's not really true in the absolute sense. There are a lot of 
small merchants that take credit cards over http and email.  And phone...

 Sure, it's easier to harvest in bulk by hacking a web site, or by
 seeding self-propagating malware that logs keystrokes.  But if
 eavesdropping works -- and it has in enough other cases -- it would have
 been used.

MITMing has been tried using stolen certs, often enough, but has seemed to have 
been not worth the trouble, as against downgrade to http. Fwiw.

Eavesdropping has been attempted at cafes and other wireless places. I've never 
seen any hard numbers, but given the amount of wireless, it seems as this also 
hasn't shown itself sufficiently economic. So maybe it is an acceptable risk?

  The *only* reason it isn't used against credit card numbers
 has been SSL.

That isn't a scientifically valid statement. For a start, we never ran the 
experiment, so we don't know if there was ever a risk. We assume it from the 
telnet experience.

Secondly, the context was different.  I.e., the solution to proven password 
eavesdropping was SSH, which does not use certs. The solution to anticipated 
credit card MITMing was SSL-with-certs.  4 points of difference.

Thirdly, there's ample evidence to suggest more than one reason why it's less 
economic. Attackers don't choose your threat model, they choose their own risk 
model.




What went wrong last month was the certs part. As Lucky Green intimated, 
assumptions proved to be less robust than the cryptographers anticipated.

We have certs, we have to live with them. The question now is how to fix it up 
so we can continue. Assumptions will be the thing that blocks us. E.g. All CAs 
are equal.

Iang


Re: [cryptography] GlobalSign temporarily ceases issuance of all certificates

2011-09-08 Thread Ian G


On 08/09/2011, at 11:31, Lucky Green shamr...@cypherpunks.to wrote:

 The SSL/public CA model did an admirable job in that regard and Taher
 ElGamal and Paul Kocher deserve full credit for this accomplishment.

As long as we can document that original model, I'm inclined to agree.


 SSL's design goals explicitly excluded protection against national
 government security and law enforcement entities. Indeed, SSL original
 design contains a wide selection of features exclusively geared towards
 facilitating interception by governmental entities. RC4-40 being one
 such feature.

Reverse engineering the design strongly suggests this requirement. What we lack 
is evidence.

 With 40-bit crypto as the designated burst plate, there was no sound
 engineering reason to fortify the rest of the plumbing to withstand the
 pressures generated by national government level adversaries.

Is there any documentation that bears this out? Any testimony?

It would be useful to have, as the meta-CAs have struggled to publicly 
document requirements here, and thus created unnecessary wheel-spinning ... e.g. 
the CNNIC affair.


Iang


Re: [cryptography] PKI fixes that don't fix PKI (part II)

2011-09-08 Thread Ian G
Hi, Lucky, good to see some perspective!

On 08/09/2011, at 8:52, Lucky Green shamr...@cypherpunks.to wrote:
 o Changes to OCSP
 The
 problem was that the top three CA vendors at the time, RSA Security,
 VeriSign, and Netscape didn't have a comprehensive database of
 certificates issued by their software and were only able to generate
 blacklist-based CRLs. During the IETF process, OCSP was therefore
 redesigned as a bug-compatible front end to be fed by those CRLs.

Influence on institutional lines, or design on security lines?

Now, there is some merit in this, in that turning OCSP into an oracle of the 
certificate database has privacy and security consequences. But, read on...

 
 But that's the best the majority of CA vendor products architecturally
 could provide at the time, which caused the IETF process to arrive at
 the rough consensus that became known as OCSP. The consequences of that 
 decision are hounding us to this day. OCSP needs a redesign.

In this conclusion, I disagree, or at least wish to propose another conclusion 
& implied question.

IMO, it is revocation that needs a redesign. Not OCSP, and here's a small hint 
of evidence:

 Quoting myself here from those days: learning in 80 ms that the
 certificate was good as of a week ago and to not hope for fresher
 information for another week seems of limited, if any, utility to us or
 our customers.


(order rearranged ;)

 o Static lists of trusted CAs

(I think I've noted elsewhere, this is revocation, but at a higher layer. 
Whatever we decide there, applies here. And, read on...)


 o Gobal CA

Yeah, but the train has already left that station.

In the beginning, vendors ran a list of roots. CAs applied and were added, no 
problem. It was just a list, right?

Over time, this migrated to a fully fledged governance operation with policies, 
reviews, contracts, liabilities, bureaucracies, delays, costs and recently, 
*revocation* .

In short, vendors are the new meta-CAs. They just haven't agreed to that as 
yet. However, IMO, this situation is embedded and developing, unstoppable. The 
train will reach that station soon enough.

 Also known as meta-CA, CA-CA, single trusted root, and the turtle on
 which all other turtles stand,

Yes, we lack an agreed term.

(with a nod to Peter's sense of humor, I suspect the Europeans and Latin 
Americans will find that CACA smells ... )

 Until the next episode of PKI fixes that don't fix PKI,

Thanks!


 --Lucky Green


Iang


Re: [cryptography] Diginotar Lessons Learned (long)

2011-09-07 Thread Ian G

On 7/09/11 7:34 AM, Fredrik Henbjork wrote:


Here's another gem related to the subject. In 2003 CAcert wished to have
their root certificate added to Mozilla's browser, and in the resulting
discussion in Bugzilla, Mozilla cryptodeveloper Nelson Bolyard had the
following to say:

I have no opinion about the worthyness of the particular CA being
proposed in this bug.  I don't know who it is yet.  But my question would be:

Does webtrust attest to this CA?



That was a clear NO at that time; a WebTrust audit was never done.  And 
they had the same problem that Thawte had in those days: one guy with 
everything in his head, and his sock drawer.


But they got better!  We can estimate progress from the DRC audit [1] 
which is like a quality superset of WebTrust.  From 2003 to 2008 they didn't 
meet that standard.  By around 2009 they were in reasonable shape [2]. 
Some pluses, some minuses of course [3].




I think that should be one of the criteria.

PKI is about TRUST.  All root CAs that are trusted for (say) SSL service
are trusted EQUALLY for that service.  If we let a single CA into mozilla's
list of trusted CAs, and they do something that betrays the publics' trust,
then there is a VERY REAL RISK that the public will lose ALL FAITH in
the security (the lock icon) in mozilla and its derivatives.


Yes.  This is a double edged sword.  The idea is to make it seamless for 
users, who don't understand.  This is good, hard to argue with.  It 
works, sort of.  But there are consequences.


On the one hand, all CAs fight to do the minimum they can;  once in, 
they can sell certs at lowest cost because their certs look just like 
anyone elses.  No differentiation or discrimination is possible 
(marketing terms) because any increase in quality is not perceived by 
the market.


Hence, the well-known race-to-the-bottom, which is a big factor in 
DigiNotar.


On the other hand, persistent frustration on the part of the 
regulator/vendor/user community has led to higher and greater and more 
expensive barriers.  The CAs have not fought this because it works for 
them.  So, this has led to entrenchment of the industry, and a deadlock 
where all attention goes to more and more audit, and it has become even 
more unlikely that architectural change is possible.



We don't want that to happen.  If that happens,  mozilla's PKI becomes
nothing more than a joke.   If you want to see mozilla's PKI continue to
be taken seriously, you will oppose allowing un attested CAs into
mozilla's list of trusted root CAs.

https://bugzilla.mozilla.org/show_bug.cgi?id=215243



Given the recent CA stories, I can't help but smile at that comment...


Yes.  I hope Nelson can see now that the seeds of his fears were sown by 
the very limitations he described, not by any particular CA or rejection 
of same.


As James [4] pointed out, "Shades of Sarbanes-Oxley."  This is the same 
thing that happened in the finance industry when Enron collapsed.  Enron 
led directly to Sarbanes-Oxley, which all claimed would stop the 
problem.  Never again!


But it didn't, and in fact, it provided a blanket of complexity for the 
next problem.  Global financial crisis ... was in big part enabled by 
the inability for banks and finance companies to assess their risks ... 
and this was in part because governance was forced into compliance-more 
by Sarbanes-Oxley.  Which also provided them the plausibility and 
liability cover.


We did what you told us to do, look, here's our auditor's invoice!

So what do we do now?  We review the framework and double the work. 
Claim it'll never happen again.  And wait for the next blow-up.




Instead, we could reduce the overall barriers, and erect smarter 
barriers.  Push more work across to the users, push more failures onto 
the users.  Open up the market;  allow CAs to brand themselves, and 
allow them to increase their quality to the point where it works for 
their users ... differentiation ... rather than force them to decrease 
the quality to compete penny-wise for a commodity product.


Problem with this standard marketing & econ view is that ... all the 
institutions are composed of geeks.  It's Greek.  Mumble jumble.  No, 
this market is different, we can make it perfect, we got crypto!  Got a 
patch?  Go speak to PKIX...




iang




[1] Disclosure:  I was the auditor that started and terminated the DRC 
review.  DRC is a rewrite (simply put) of WebTrust that was much better, 
much more mid 00s instead of mid 90s.  More aligned to the strength of 
EV, approximately.


[2] But I terminated them because they couldn't meet DRC within the 
resources ... they probably could have met WebTrust, again given the 
resources.


The problem that CAcert faces now is that they've (arguably) caught up 
to WebTrust, but now they face:  WebTrust + Baseline Requirements + 
Extended Validation (possibly) + vendor review.  Each of those is a 
serious audit, with serious costs.


So the bar in terms of quality and quantity has been 

Re: [cryptography] GlobalSign temporarily ceases issuance of all certificates

2011-09-07 Thread Ian G

On 8/09/11 5:34 AM, Fredrik Henbjork wrote:

http://www.globalsign.com/company/press/090611-security-response.html

This whole mess just gets better and better...



As a responsible CA, we have decided to temporarily cease issuance of 
all Certificates until the investigation is complete. 


GlobalSign has officially announced the appointment of Fox-IT to assist 
with investigations into the claimed breach. Fox-IT is the Dutch 
cybersecurity experts hired to investigate the compromise of the Dutch 
CA DigiNotar and therefore already have a wealth of current knowledge 
and experience of the hacker.




Hmmm ... I'm not sure I'd suspend issuance without some evidence.  If 
it was me.  I might put all issuances under manual control and checking 
... but evidence would be highly respected in this case.


iang


Re: [cryptography] GlobalSign temporarily ceases issuance of all certificates

2011-09-07 Thread Ian G

On 8/09/11 6:02 AM, I wrote:

Hmmm ... I'm not sure I'd suspend issuance without some evidence.


On 8/09/11 6:13 AM, Franck Leroy wrote, coz he checked the source!:

 http://pastebin.com/GkKUhu35

 extract:

 Third: You only heards Comodo (successfully issued 9 certs for me -
 thanks by the way-), DigiNotar (successfully generated 500+ code
 signing and SSL certs for me -thanks again-), StartCOM (got connection
 to HSM, was generating for twitter, google, etc. CEO was lucky enough,
 but I have ALL emails, database backups, customer data which I'll
 publish all via cryptome in near future), GlobalSign (I have access to
 their entire server, got DB backups, their linux / tar gzipped and
 downloaded, I even have private key of their OWN globalsign.com
 domain, hahahaa)



Snap!  OK, I'm convinced.  Suspend :)

Given the DigiNotar experience, this is a dead cert.

Reason being, DigiNotar has established a liability pattern, and the CA 
mentioned above has figured out that it might apply to them.


Under normal corporate governance escalation procedures, "might" becomes 
"are you willing to bet your job?"  Risk modelling goes out the window :)



 BUT YOU HAVE TO HEAR SO MUCH MORE! SO MUCH MORE!
 At least 3 more, AT LEAST! Wait and see, just wait a little bit like I
 said in Comodo case.

 This is very disturbing...

Somebody on one of these lists made the point that the attacker wasn't 
acting like what was expected in the designed threat model.


It is not a new observation that the original threat modelling had flaws 
you could drive a truck through :)  To add to that, the original 
security requirement was to protect Credit cards.  Only.  Which have a 
known value range, a loss model, an insurance model, institutions 
already at arms to protect Robbie Relier.


So, when people started using SSL for other purposes (email, banking, 
but not porn) ... what happens?


Well, the value changed (up or down?) ... and the insurance disappeared.

Perhaps this is just another one of those aphorism moments?

There is only one security model, and it are belong to me.


 Franck.
 ___
 dev-security-policy mailing list
 dev-security-pol...@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-security-policy



iang


Re: [cryptography] [SSL Observatory] Diginotar broken arrow as a tour-de-force of PKI fail

2011-09-05 Thread Ian G

On 5/09/11 7:23 PM, Gervase Markham wrote:


The thing which makes the entire system as weak as its weakest link is
the lack of CA pinning.



Just a question of understanding:  how is the CA pinning information 
delivered to the browser?


(For those who don't know, I also had to look it up too :)  CA pinning 
is where a particular CA is the only one permitted to issue certs for a 
website.  I think, it's a very new feature, in some browsers only?)
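Two delivery routes seem plausible, sketched below with invented formats (neither is any browser's actual mechanism): pins compiled into the browser and shipped with its updates, or pins the site itself advertises over an earlier, presumed-good connection.

```python
# Two hypothetical delivery routes for CA-pinning data; the dictionary
# layout and header syntax are invented for illustration only.

# Route 1: preloaded -- compiled into the browser, updated with it.
PRELOADED_PINS = {"mail.example.com": {"pin-aaa", "pin-bbb"}}

def learn_pins(store, host, header_value):
    """Route 2: trust-on-first-use -- the site advertises its own pins
    in a header seen over a prior, presumed-good TLS connection."""
    pins = {p.strip() for p in header_value.split(";") if p.strip()}
    store.setdefault(host, set()).update(pins)
    return store[host]
```

The preload route is what shipped first in practice; the self-advertised route trades first-visit exposure for scalability beyond a hardcoded list.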



   An HSM or smart card that does anything the PC that it's attached to tells
   it to is only slightly more secure than simply storing the key directly on
   the PC.  You need to do more to secure a high-value signing process than
   sprinkling smart card/HSM pixie dust around and declaring victory.


This is true, but I'm not sure it's particularly relevant.


Well, what's relevant is whether the security processes are doing the 
job.  Evidence over the last year says no.  Why?


What Peter's saying is that there are signs that the processes are 
weaker than they appear.  One clue is when they go for expensive 
solutions rather than smart solutions, and declare it done.



(Who claims
that HSMs are magic pixie dust?)


CABForum, in BR 15.6: CA must use an HSM, approximately.

Monkey-see-monkey-do.  Which, amusingly, contradicts most of the rest of 
section 15 :)



Lack of breach disclosure requirements for CAs means that they'll cover
problems up if they can get away with it:


Do you think that remains true?


We don't know.  There is no full disclosure mechanism, so we don't know 
what is disclosed and what not.  and even when the full disclosure 
mechanism is in place, we'll need 20 or so events to gain confidence in it.


Recall SB1386?  It actually didn't do anything until 2 years had passed. 
 Then someone panicked.  And attitudes shifted...



Comodo didn't cover their problems up,


Have they released the full report of the issue?  Has Mozilla?

Or do we just know the headline, and what people have dug up against 
their best wishes?


You saw the chat on mozilla list, another CA declined to report, dressed 
up by lots of buts, ifs, maybes, not-us's and other rantings.


Non-disclosure is certainly in place.


and are still in business. DigiNotar covered theirs up, and are not.
Covering up is a massive business gamble, because if anyone finds the
certs in the wild (as happened here), you are toast. Particularly given
that browsers are deploying more technologies like pinning which makes
this sort of attack easier to find, it would be a brave CA who covered a
breach up after the lesson we had last week. You'd have to be pretty
darn confident any misissued certs didn't get obtained by the attackers
- and if they didn't get out, is there actually a problem?



What is of current concern is that CAs may now be disclosing to the 
vendors.  And calling that disclosure.


This is of concern for several reasons:  firstly, it likely puts the 
vendors in a very difficult position, even to the point of corrupting 
them.  Secondly, it creates a liability-shifting mechanism:  the broken 
CA can now point to this as its industry-standard disclosure mechanism 
(regardless of utility and user damages) which reduces its own 
liability, without a commensurate payment; and the vendor now has to 
take on the risk of suits.  Thirdly, it's being done in an ad hoc, 
knee-jerk fashion, again in secret, and there is no particular faith that the 
parties involved will be able to keep their interests off the table.


(For Mozilla alone, private disclosure goes against their principles.)

I'm not denying that disclosure to vendors may help.  But I have no 
faith in the risk managers at the other side to analyse that risk.


If you feel that they can do a good job, post their risk analysis.

Right, I thought so, they haven't done one.  All vendors are in breach 
of BR.  Doesn't augur well, does it :)



   there's nothing protecting the user.  Even the most trivial checks by
   browsers would have caught the fake Google wildcard cert that started all
   this.


What sort of trivial checks are you suggesting?


Perhaps CA pinning!  But in the browser :)



   Diginotar both passed audits in order to get on the browser gravy train and
   then passed a second level of auditing after the compromise was discovered.
   The auditors somehow missed that fact that the Diginotar site showed a two-
   year history of compromise by multiple hacking groups, something that a
   bunch of random commentators on blogs had no problem finding.


I think there are definitely searching questions to ask of DigiNotar's
auditors.


:)  And any other CA audited by that organisation.  And any CA audited 
to that standard ...


And ... wait, all of them!  Oops!

Short story -- you won't be able to blame the auditor for this.

Sure, you can embarrass them a lot!  But, it's pretty obvious on one 
reading of webtrust that it's a farce.  It's also pretty obvious reading 
BR that an audit would not have picked this 

Re: [cryptography] Smart card with external pinpad

2011-08-20 Thread Ian G

On 21/08/11 6:21 AM, Simon Josefsson wrote:

Thierry Moreau writes:


If there were devices meeting the stated goal (commercially available
with a reasonable cost structure), they would be a very useful
security solution element for high security contexts. The user
guidance would be: never enter the PIN anywhere else than on one of
these devices. Gone the phishing threat!


Not so fast -- that prevent the phisher from getting the PIN, but what
the phisher usually wants is to perform some private key operation using
your smartcard without you noticing.


Yes.  A problem with smart cards is that they typically aren't secure by 
themselves, they typically require a secure interface device.


(Unless we're talking about some of the more advanced digital cash 
designs, but they have the advantage of a simplified security goal.)



All smartcard readers with PIN entry pads that I have used has had the
property that once you have entered the PIN, the host (which normally is
untrusted and can have a trojan running) will be able to perform
unlimited number of private key operations using your smartcard.


It all depends what you mean by the host.  Typically, the reader is 
part of the hard security boundary, and it exports some safe high-level 
API.  In rollouts, the reader is also a heavily branded item that the 
customer is supposed to learn, so as to avoid sticking the card into any 
old slot.


Where you've got some pass-through reader connected to a PC, all bets 
are off!  That's a breach of the security model.  Or a development kit. 
 Or a bankers' liability shifting model :P



So the trojan have to wait for someone to enter their PIN to do a normal
transaction, and then the trojan can ask the smartcard to do whatever it
wants.  Bingo.

I'm surprised there aren't smartcard readers with a button to authorize
every private key operation.  At least I haven't seen any.  It is still
not perfect (the trojan can race the legitimate application and perform
its operation first) but it is an improvement.


There are.  They're called cellphones.  Problem is, until recently they 
weren't hackable so easily.  Apple then Google fixed that, so maybe 
we'll see more use in the future.
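Simon's missing per-operation button could be modelled roughly as follows; this is a toy, not any real card-reader API, and the PIN check stands in for whatever the card actually does:

```python
class PinPadReader:
    """Toy reader: the pad, not the host PC, gates every key operation."""

    def __init__(self, pin):
        self._pin = pin
        self._unlocked = False

    def enter_pin(self, pin):
        # The PIN is entered on the pad itself and never seen by the host.
        self._unlocked = (pin == self._pin)

    def sign(self, data, button_pressed):
        # A trojan on the host cannot supply the physical button press,
        # so each signature needs fresh user consent at the reader.
        if not (self._unlocked and button_pressed):
            raise PermissionError("operation not authorized on the pad")
        return b"sig:" + data
```

The race Simon mentions remains (the trojan can try to win the button press for its own transaction), but the silent unlimited-use window after PIN entry is closed.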




iang


Re: [cryptography] bitcoin scalability to high transaction rates

2011-07-20 Thread Ian G

On 20/07/11 9:08 PM, Eugen Leitl wrote:

On Wed, Jul 20, 2011 at 11:56:06AM +0200, Alfonso De Gregorio wrote:


I'd better rephrase it in: expectation to have money backed by
bitcoins exhibiting all the desirable properties of a perfect
currency (ie, stable money) are greatly exaggerated.


The question is not whether it's perfect, but whether it's good enough.


The question is whether it is even close.  It's pretty clear it can 
never be stable enough to be a currency.  Pretty much all currencies 
lean on some form of stability;  BitCoin does not, and suggests when 
it's big enough, supply v. demand will stabilise it...


Only gold/silver has ever pulled off that trick, and emulating gold is 
not what you'd call a winning strategy.  Actually there's a name for it: 
 alchemy.  BitCoin is cryptographic alchemy.




BTC is basically a global version of http://en.wikipedia.org/wiki/Local_currency
or http://en.wikipedia.org/wiki/Alternative_currency and hence
isn't something completely new.



Sure, and those things have rules too.  Local currency is local; 
BitCoin is not.  The difference is that in local currencies we can rely 
on the trust and reputation networks to stop people stealing.  In 
BitCoin, we can't.  In local currencies, when the currency moves outside 
the very tight trust circle where everyone knows each other, they fail, 
because someone moves into the currency who has no reputation to lose.


(Alternative currency is just a term used by the regulated currency 
people, it doesn't really tell us anything.)



It would be intesting to see whether BTC's successors
could improve the scheme, by allowing a (subexponential)
growth, built-in devaluation to encourage circulation and
discourage hoarding (this would be probably hard to
do), and so on.


Not really.  It's problem isn't its mathematics or its release rate, but 
that it has no ground to stand on.  Which is to say, if people want to 
bid it to the sky, they can.  If people want to dump it to the bottom of 
the ocean, they can too...


With a currency that is backed on something stable, the stable commodity 
forms an anchor around which value gyrates.  So, it is worth holding even if 
the price drops too low, because you can always use it for its stable 
thing.  E.g., in the US of A, the American people are quite happy to hold 
$$$ because they can pay their taxes with it.  They really don't care 
that much what the exchange rate is doing, up or down.  This anchor 
means USD is a good currency.


Possibly what people don't realise is that it is very easy to corner a 
market.  However, the fundamental value of the unit (the commodity) will 
stabilise and punish the speculator who corners the market.  With 
BitCoin there is no underlying anchor to punish the person cornering the 
market, so the games will be excessive, and volatility will be too high 
to be current.




iang

PS: having said all that negative stuff, I quite like BitCoin.  If it 
got the econ right, we'd be having different conversations :)



Re: [cryptography] OTR and deniability

2011-07-14 Thread Ian G

On 14/07/11 12:37 PM, Ai Weiwei wrote:

Hello list,

Recently, Wired published material on their website which are claimed to be 
logs of instant message conversations between Bradley Manning and Adrian Lamo 
in that infamous case. [1] I have only casually skimmed them, but did notice 
the following two lines:

 (12:24:15 PM) bradass87 has not been authenticated yet. You should 
authenticate this buddy.
 (12:24:15 PM) Unverified conversation with bradass87 started.

I'm sure most of you will be familiar; this is evidence that a technology known 
as Off-the-Record Messaging (OTR) [2] was used in the course of these alleged 
conversations.

I apologize if this is off topic or seems trivial, but I think a public discussion of the 
merits (or lack thereof) of these alleged logs from a technical perspective 
would be interesting.


I believe it is germane to anyone designing crypto protocols to 
understand how they actually impact in user-land.  This particular one 
is a running sore for me because of its outrageous claim of deniability.



The exact implications of the technology may not be very well known beyond this 
list. I have carbon copied this message to the defense in the case accordingly.

If I understand correctly, OTR provides deniability, which means that these alleged 
logs cannot be proven authentic.


The *claim made by OTR is to provide technological deniability* as 
opposed to any non-technological status.  Its non-technical deniability 
is zilch.


Unfortunately, outside the technology, it is trivial to prove the logs 
as authentic.  This is confusing for the technologists as they are 
trying to create a perfect security product, and they believe that 
technology rules.  What they've failed to realise is that real life 
provides some trivial bypasses, and in this situation, they may very 
well be creating more harm -- by sucking people into a false sense of 
security.


Design of security systems is tough, it is essential to include the 
human elements in the protocol, elsewise we end up with elegant but 
useless features.  Sometimes we enter into danger, as is seen with OTR 
or BitCoin, where a technological elegance causes people to lose their 
common sense and grasp of reality.




In fact, the OTR software is distributed with program code which makes falsifying such 
logs trivial. Is this correct?


Dunno.  Could be.  Evidence of a false sense of security, to me.
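Whatever ships in the OTR distribution, the underlying point is independent of it: a plaintext chat log carries no cryptographic binding, so fabricating a line byte-identical in form to the quoted ones takes almost nothing (the helper below is mine, mirroring the format of the lines quoted above):

```python
from datetime import datetime

def log_line(when, handle, text):
    """Emit a line in the same plain format as the quoted pidgin-style
    logs; nothing distinguishes a genuine line from a fabricated one."""
    return "(%s) %s: %s" % (when.strftime("%I:%M:%S %p"), handle, text)
```

This is the sense in which technological deniability is real but non-technical deniability is zilch: the logs were never self-authenticating in the first place.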


What do you think?  


On the specific legal case:  well, nothing we see in open press will 
really be reliable.  You're looking at the USG going for broke against a 
couple of lonely mixed up people who USG mistakenly let near a TS site. 
 It will be a total mess.  Mincemeat, fubar, throw away the key.  The 
case will see all sorts of mud thrown up, with both sides trying their 
darndest to muddy the waters.


From the external pov, there will be no clarity.  Nothing really to say 
or think, except, ... don't make that mistake?  Relying on crypto 
blahblah promises like OTR or PGP when you're about to release a 
wikileaks treasure trove doesn't sound like rational thinking to me.


iang


Re: [cryptography] preventing protocol failings

2011-07-13 Thread Ian G

On 13/07/11 9:25 AM, Marsh Ray wrote:

On 07/12/2011 04:24 PM, Zooko O'Whielacronx wrote:

On Tue, Jul 12, 2011 at 11:10 AM, Hill, Bradbh...@paypal-inc.com
wrote:


I have found that when H3 meets deployment and use, the reality
too often becomes: Something's gotta give. We haven't yet found
a way to hide enough of the complexity of security to make it
free, and this inevitably causes conflicts with goals like
adoption.


This is an excellent objection. I think this shows that most crypto
systems have bad usability in their key management (SSL, PGP). People
don't use such systems if they can help it, and when they do they
often use them wrong.


But the entire purpose of securing a system is to deny access to the
protected resource.


And that's why it doesn't work;  we end up denying access to the 
protected resource.


Security is just another function of business; it's not special.  The 
purpose of security is to improve the profitability of the resource. 
Protecting it is one tool to serve security & profits, and 
re-engineering it so it doesn't need any protection is another tool... 
There are many such tools :)




In the case of systems susceptible to potential
phishing attacks, we even require that the user themselves be the one to
decline access to the system!

Everyone here knows about the inherent security-functionality tradeoff.
I think it's such a law of nature that any control must present at least
some cost to the legitimate user in order to provide any effective
security. However, we can sometimes greatly optimize this tradeoff and
provide the best tools for admins to manage the system's point on it.



Not at all.  I view this as hubris from those struggling to make 
security work from a technical pov, from within the box.  Once you start 
to learn the business and the human interactions, you are looking 
outside your techie box.  From the business, you discover many 
interesting things that allow you to transfer the info needed to make 
the security look free.


A couple of examples:  Skype works because people transfer their 
introductions first over other channels, "hey, my handle is bobbob", and 
then secondly over the packet network.  It works because it uses the 
humans to do what they do naturally.


2nd.  When I built a secure payment system, I was able to construct a 
complete end-to-end public infrastructure without central points of 
trust (like with CAs).  And I was able to do it completely.  The reason 
is that the start of the conversation was always a. from person to 
person, and b. concerning a financial instrument.  So the financial 
instrument was turned into a contract with embedded crypto keys.  Alice 
hands Bob the contract, and his software then bootstraps to fully 
secured comms.
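A minimal sketch of the idea (not Ian's actual system; the field names and helper functions are illustrative): the instrument itself carries a fingerprint of the issuer's public key, so whoever receives the contract can verify the server it later connects to, with no CA in the loop.

```python
# Hedged sketch: a financial instrument that embeds the server's
# public-key fingerprint, so the recipient's software can bootstrap
# secured comms directly from the contract.  All names are illustrative.
import hashlib
import json

def issue_contract(terms: str, server_pubkey: bytes) -> str:
    """Issuer binds the contract terms to its key fingerprint."""
    contract = {
        "terms": terms,
        "server_key_fp": hashlib.sha256(server_pubkey).hexdigest(),
    }
    return json.dumps(contract)

def verify_server(contract_json: str, presented_pubkey: bytes) -> bool:
    """Recipient checks the key the server presents against the contract."""
    contract = json.loads(contract_json)
    return hashlib.sha256(presented_pubkey).hexdigest() == contract["server_key_fp"]

c = issue_contract("1 unit of gold", b"server-public-key-bytes")
print(verify_server(c, b"server-public-key-bytes"))  # True
print(verify_server(c, b"attacker-key"))             # False
```

The point of the design is that the contract travels person-to-person first, so the key distribution problem is solved by the same channel that introduces the parties.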




Hoping to find security for free somewhere is akin to looking for free
energy. The search may be greatly educational or produce very useful
related discoveries, but at the end of the day the laws of
thermodynamics are likely to remain satisfied.



:)


Those looking for no-cost or extremely low-cost security either don't
place a high value on the protected resource or, given the options they
have imagined them, that they may profit more by the system being in the
less secure state. Sometimes they haven't factored all the options into
their cost-benefit analysis. Sometimes it never occurs to them that the
cost of a security failure can be much much greater than the nominal
value of the thing being protected (ask Sony).


No, it's much simpler than that:  denying someone security because they 
don't push the right buttons is still denying them security.  The 
summed benefit of internet security protocols typically goes up with the 
number of users, not with the reduction of flaws.  The techie view has 
it backwards.


...

So even if you're a web site just selling advertising and your users'
personal information, security is a feature that attracts and retains
users, specifically those who value their _own_ stuff. (Hint hint: this
is the kind with money to spend with your advertisers.) Smart people
value their own time most of all and would find it a major pain to have
to put everything back in order after some kind of compromise.


This is a curiosity to me;  has anyone actually figured out how to find 
a marketplace full of security-conscious users?  Was there ever such a 
product where vendors successfully relied upon the users' good security 
sense?



...

I hope there was a coherent point in all of that somewhere :-) I know
I'm preaching to the choir but Brad seemed to be asking for arguments of
this sort.




:)


iang


Re: [cryptography] preventing protocol failings

2011-07-13 Thread Ian G

On 13/07/11 3:10 AM, Hill, Brad wrote:

Re: H3, There is one mode and it is secure

I have found that when H3 meets deployment and use, the reality too often becomes: 
Something's gotta give.  We haven't yet found a way to hide enough of the 
complexity of security to make it free, and this inevitably causes conflicts with goals 
like adoption.

An alternate or possibly just auxiliary hypothesis I've been promoting on how 
to respond to these pressures is:

Build two protocols and incentivize.

That is:

Recognize in advance that users will demand an insecure mode and give it to 
them.


I've heard of users demanding easy modes, but never demanding insecure 
modes :)



Make it a totally different protocol, not an option, mode or negotiation of 
the secure protocol.
Encourage appropriate self-sorting between the secure and insecure 
protocols.

Making two completely different protocols means that neither has to pay the 
complexity cost of the other mode, (avoiding e.g. the state explosion Zooko 
described with ZRTP) eliminates or greatly reduces introduced attack classes 
around negotiation and downgrade, and makes the story around managing and 
eventually deprecating legacy clients simpler.

The self-sorting is the tricky bit.  Google Checkout and SXIP are good examples 
of this.   Google Checkout allowed both signed and unsigned shopping carts.  
Unsigned shopping carts were dead-easy to implement, but had a higher fee 
structure than the signed carts.  This meant that it was easy to join the 
ecosystem as a prototyper, hobbyist or small and unsophisticated business.  But 
it also meant that as soon as your transaction volume got large enough, it was 
worthwhile to move to the secure version.   SXIP built the incentive between 
protocols by having additional features / attributes that were only available 
to users of the secure protocol.


I would never have done that.  I would have had signed shopping carts, 
period.  I would have just set the fee structure on whether I recognise 
the signer of the shopping cart, or not.


(I'm not saying it is wrong, just that there is an easy way to get the 
same benefit without having two modes...)



The other advantage of building two protocols is that if/when the insecure 
protocol actually becomes a target of attack, the secure version is ready to 
go, deployed, proven, ready for load, with libraries, sample code, the works 
needed for a smooth transition.

This is a bit like Ian's "Build one to throw away", except that I'd say, build 
them both at the same time, and maybe you won't need to throw away the insecure one.


I know it sounds good, but has it ever worked?  Has any vendor ever been 
successfully attacked through a weak demo system, and then rolled out a 
new one *which happened to be prepared in time for this eventuality* ?


iang


Re: [cryptography] ssh-keys only and EKE for web too (Re: preventing protocol failings)

2011-07-13 Thread Ian G

On 14/07/11 4:33 AM, Jeffrey Walton wrote:

On Wed, Jul 13, 2011 at 2:17 PM, James A. Donald <jam...@echeque.com>  wrote:

On 2011-07-13 9:10 PM, Peter Gutmann wrote:


As for Microsoft,



Microsoft have a big interest in bypassing the status quo, and they've 
tried several times.  But each time it isn't for the benefit of the 
users, more for their own benefit, in that they've tried to rebuild the 
security infrastructure with themselves in control.  (recall .net, 
InfoCard, Brands' patents, etc.)  Nothing wrong with that, they have to 
pay for it somehow.


This has proven ... a harder nut to crack than they envisaged.  But at 
least they are trying, my hat goes off to them!




Opera, etc who knows?  (If you work on, or have worked
on,
any of these browsers, I'd like to hear more about why it hasn't been
considered).  I think it'll be a combination of two factors:

1. Everyone knows that passwords are insecure so it's not worth trying to
do
anything with them.

2. If you add failsafe mutual authentication via EKE to browsers, CAs
become
entirely redundant.


Indeed, if EKE is implemented in the most straightforward way, any page or
data that can only be accessed while logged in, is securely encrypted even
if accessed over http.

Free browsers are supported by CAs.


Well, not financially, more like the policy side is impacted by the CAs, 
which are coordinated in a confidential industry body called CABForum. 
This body communicates internally to Mozilla (being a member) and via 
private comment by CAs to the CA desk.


Against that are a small and noisy but also uncoordinated group of user 
representatives.  As we're punching against an organised, paid opponent 
that can't be seen, we don't get very far.


They (Mozilla, other vendors and the CAs) are in the process of raising 
the standards yet again for CAs, on the back of various claimed breaches 
of certs and rising angst against all security problems.  Because they 
have laid out their architecture, and because it makes money, they 
aren't about to change it.  But they are bedding it in.


The chances of them approving or agreeing to EKE are next to nil.


EKE enabled browsers would only be
supported by people needing secure logins, which form a less concentrated
interest, therefore an interest less capable of providing public goods.

I believe Mozilla is [in]directly supported by Google. Mozilla has
made so much money, they nearly lost their tax exempt status:
http://tech.slashdot.org/story/08/11/20/1327240/IRS-Looking-at-GoogleMozilla-Relationship.

I was also talking with a fellow who told me NSS is owned by Red Hat.
While NSS is open source, the validated module is proprietary. I don't
use NSS (and have no need to interop with the library), so I never
looked into the relationship.



Possibly, I haven't heard that.  The problem with Mozilla security 
coding is more this:  most (all?) of the programmers who work in that 
area are employees of the big software providers.  And they all have 
a vested interest in working for the status quo; all are opposed to any 
change.


(Not because they are bad or good, but because that's what they are paid 
to do.)


(It doesn't help to offer help either;  they have their ways of 
rejecting any asymmetric help.)


iang


Re: [cryptography] preventing protocol failings

2011-07-12 Thread Ian G

On 13/07/11 8:36 AM, Andy Steingruebl wrote:

On Tue, Jul 12, 2011 at 2:24 PM, Zooko O'Whielacronx <zo...@zooko.com>  wrote:


When systems come with good usability properties in the key management
(SSH, and I modestly suggest ZRTP and Tahoe-LAFS) then we don't see
this pattern. People are willing to use secure tools that have a good
usable interface. Compare HTTPS-vs-HTTP to SSH-vs-telnet (this
observation is also due to Ian Grigg).


I reject the SSH key management example though.


The SSH-vs-telnet example was back in the mid-90s where there were two 
alternatives:  secure telnet and this new-fangled thing called SSH.


What's instructive is this:  secure telnet told the user to do 
everything correctly, and was too much trouble.  SSH on the other hand 
got up and going with as little trouble as it could think of at the 
time.  Basically it used the TOFU model, and that worked.
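The TOFU (trust-on-first-use) model SSH adopted is simple enough to sketch in a few lines, in the spirit of SSH's known_hosts file (the names and in-memory storage here are illustrative, not OpenSSH's implementation):

```python
# Minimal trust-on-first-use (TOFU) sketch: accept and pin a key the
# first time a host is seen, then insist on a match ever after.
known_hosts = {}  # hostname -> pinned key fingerprint

def tofu_check(host: str, fingerprint: str) -> str:
    if host not in known_hosts:
        known_hosts[host] = fingerprint       # first use: trust and pin
        return "accepted (first use)"
    if known_hosts[host] == fingerprint:
        return "accepted (known key)"
    return "REJECTED (key changed!)"          # possible MITM; warn loudly

print(tofu_check("example.org", "ab:cd"))  # accepted (first use)
print(tofu_check("example.org", "ab:cd"))  # accepted (known key)
print(tofu_check("example.org", "ff:00"))  # REJECTED (key changed!)
```

The user does nothing on first contact, which is exactly why it beat secure telnet's up-front ceremony: all the cost is deferred to the rare key-change event.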


The outstanding factoid is that SSH so whipped the secure telnet product 
that these days it's written out of history.


(Granted, SSH wasn't really thinking about the large scale admin issues 
that came later.)


iang


Re: [cryptography] Bitcoin observation

2011-07-05 Thread Ian G

On 5/07/11 4:44 PM, Jon Callas wrote:

Did you know that if a Bitcoin is destroyed, then the value of all the other 
Bitcoins goes up slightly? That's incredible. It's amazing and leads to some 
emergent properties.


This assumes fixed value.  As there is no definition of the value in 
BitCoin, it's hard to sustain that assumption :)


In practice, there will be a number of effects.  If you potlatch your 
own coins, at the margin, others go up in value a little.  And you paid 
a lot for that, so you lose.


If you destroy others' coins, a little value goes up for all, sure.  But 
also currency being money, it loses its store of value characteristic, 
and rapidly loses value as people get out.  Demand goes down faster than 
supply, trading price plummets.



If you have a bunch of Bitcoins and you want to increase your worth, you can do 
this by one of three ways:

(1) Create more Bitcoins.
(2) Buy up more Bitcoins, with the end state of that strategy being that you've 
cornered the market.


If you buy all of them ... you also stopped the market :)


(3) Destroy other people's Bitcoins. The end state of that is also that you've 
cornered the market.


Except, reputation effects will cause a run, dumping, loss of value.


I also observe that if the player succeeds at either strategy (2) or (3), then 
Bitcoins are no longer a decentralized currency. They're a centralized 
currency. (And presumably, that player wins the Bitcoin Game.)


Um.  If a player succeeds in isolating all the money to self, it's no 
longer money :)



I'll go further and note that if a self-stable oligarchy manages to buy or 
destroy all the other  Bitcoins, they win as a group, too. With enough value in 
the Bitcoin universe, and properly motivated players, that could easily happen.

I wonder myself when it is more efficient to destroy a Bitcoin than buy or 
create one? Let's call the value of the energy to create one C. We'll call the 
value to buy one B. There must be some constant H where H*C or H*B makes it as 
efficient to destroy one as to buy or create one. I suppose there are really two 
separate constants, H_c and H_b.

Nonetheless, I call this H because it's the Highlander Constant. You know -- 
there can only be one! If H is large enough, then you have unrestricted 
economic war that leads to a stable end where a single player or an oligarchy 
holds all the bitcoins.


Ah... there is only one BitCoin, and it is current!

At which point the film ends, and the script writers scratch their heads 
for the sequel :)



So if we consider a universe of N total coins and a total market value of V, 
and a players purse size of P coins, what's the value of H? I think it's an 
interesting question.

I have some other related things to muse over as well, like what it means to 
destroy a bitcoin. If you silently destroy one, the value of the remaining 
coins increases passively through deflation.


The value of the remaining coins might go up because of the unit of 
exchange characteristic being in demand, assuming that you don't 
actually hoard it otherwise.


Privately destroyed coins that were otherwise privately hoarded won't 
affect the value.  This is the Fort Knox radiation problem.  Is the gold 
in Fort Knox?  Is it radiated and unusable?  We don't know ... so what's 
its value?  We don't know.  So its value to us is zero.



But if you publicly destroy one, you could see an immediate uptick. ...


Yes, I guess an immediate public destruction is new info to the price, 
so the uptick will be expected.  As long as it is at the margin this 
will work.



Also, does public destruction actually hurt the market by making people tend to 
not want to put money into Bitcoins? Might this form some sort of negative 
feedback on the value of H, by cheapening Bitcoins as a whole? But is there a 
double-negative feedback through the fact that if people want to sell coins 
cheaply, the big players just buy them cheap and run the market back up that 
way?


It certainly confuses people's sense of what the value is.  Each trade 
(as opposed to a posted price) will reveal information about the value 
at that point.  These value points are ... valuable to the market, to 
excuse the pun.


However, other events are less informative.  Destruction, seizure, 
expiry, loss, theft all result in unclear information.




The end of all this musing, though, is that I believe that a decentralized 
coinage that has the property that destroying a coin has value *inevitably* 
leads to centralization through the Highlander Constant.


Yes, but centralisation is a self-limiting property, because a 
centralised currency isn't a currency.  It has to be current, which is 
to say, it has to be available to a large number of people in order to 
settle current debts.


iang


Re: [cryptography] preventing protocol failings

2011-07-05 Thread Ian G

On 5/07/11 3:59 PM, Jon Callas wrote:


There are plenty of people who agree with you that options are bad. I'm not one 
of them. Yeah, yeah, sure, it's always easy to make too many options. But just 
because you can have too many options that doesn't mean that zero is the right 
answer. That's just puritanism, the belief that if you just make a few absolute 
rules, everything will be alright forever. I'm smiling as I say this -- 
puritanism: just say no.


I find it ironic to be on the side of the puritans, but I think it's not 
inappropriate.


The 90s were the times of an excess of another religious crowd -- the 
hedonists.  In those times, more modes was more better.  The noble drive 
to secure the Internet intersected with the jihadic expression of code 
as freedom, the net as the new world, crypto as numbers, government as 
the enemy, and as much as possible of all of them.  Right now!  Today!


Hell, I was even part of it.  I thought it was so cool I coded up extra 
algorithms for Cryptix, just for fun, and lobbied to get extra 
identifiers stuffed into OpenPGP.




But what was the benefit?  Let's just take one example, the 
oft-forgotten client certificate.


Does anyone make much use of client certificate mode in SSL?  No, 
probably not.  They work [0], but nobody uses them, much.  And, it turns 
out that there is a good reason why nobody uses this fairly workable 
product:  because you don't have to.  Because it is optional.  As client 
certificates are optional, sites can't rely on the client certs being 
available.  So they fall back to that which they can insist on, which is 
passwords.  Which humans can be told to invent, and they will, without 
any audible grumbling.


So, options mean unavailability.  Which means it can't be used.

Yet, there's no *security* reason for them being optional.  Client certs 
could be mandatory, just like server certs.  There is no *business 
benefit* for users in client certs being optional (and by this I mean 
client-side and server-side).




That's just one mode.  It turns out there is another mode -- HTTP.  This 
mode is turned on far more than it should be, resulting in a failure of 
user discrimination.  Hence, phishing.


Now, we may poo-poo the whole phishing thing, but consider that phishing 
is a bypass on SSL's authentication properties for online banking, etc. 
 At whatever layer we found it.  Phishing is the breach that exploits 
HTTP mode in browsing.


And consider that phishing, alongside server-breaching, financed the 
current wave of crime, step by step, to our current government 
cybercrime social disaster.


It's a lot to lay at the feet of a little mode like optional HTTP in 
secure browsing, but the bone points squarely at it.


If you've followed the history of real use and real breach, modes can be 
shown to cause failure.  OTOH, if we look at famous systems with few 
modes, we see less failure.  Skype has only one mode.  And it is secure. 
 SSH has very few modes.  And what modes it has -- password login for 
example -- caused a wave of SSH password snaffling until sysadms learned 
to turn off password mode!


In contrast:  SSL again.  Some packet bugs fixed in SSL v3.  MD5 
deprecation, much anticipated by a squillion cipher suites but target 
missed completely!  Re-negotiation - a mode to re-negotiate modes!  And 
finally the TLS/SNI bug.  Ug.




I claim that we've got causality and we've got correlation.  Which gives 
us the general hypothesis:


   there is only one mode, and it is secure.


I think that crypto people are scared of options because options are hard to 
get right, but one doesn't get away from options by not having them. The only 
thing that happens is that when one's system fails, someone builds a completely 
new one and writes papers about how stupid we were at thinking our system would 
not need an upgrade. Options are hard, but you only get paid to solve hard 
problems.



What's left is arguing about the exceptions.  In H6.6 [6], I argued that:

   Knowing the Hypotheses is a given, that's the job of a
   protocol engineer. That which separates out engineering
   from art is knowing when to breach a hypothesis.

Another way of putting it is, do you think you know as much as Jon or 
Peter or the designers at Skype or Tatu Ylönen?  Probably not, but I for 
one am not going to criticise you if you've got the balls for trying, 
and you *know the risks*.




iang



[0] An alternate view on why  how client certs work:
http://wiki.cacert.org/Technology/KnowledgeBase/ClientCerts/theOldNewThing

[6]http://iang.org/ssl/h6_its_your_job_do_it.html#6.6
Hmm, perhaps that should be numbered H6.6.6 ?


Re: [cryptography] Oddity in common bcrypt implementation

2011-06-29 Thread Ian G

On 28/06/11 1:01 PM, Paul Hoffman wrote:

And this discussion of ASCII and internationalization has what to do with 
cryptography,


I personally think this list is about users of crypto, rather than 
cryptographers-creators in particular.  The former are mostly computer 
scientists who think in block-algorithm form, the latter are more the 
mathematicians.


As a crypto-plumber (computer science user of crypto) I think it is 
impossible to divorce crypto from all the other security techniques. 
All the way up the stack.


Or, talking about non-crypto security techniques like passwords is 
punishment for mucking up the general deployment of better crypto 
techniques.



asks the person on the list who is probably most capable of arguing about it 
but won't? [1]

--Paul Hoffman

[1] RFC 3536, and others



iang


Re: [cryptography] Oddity in common bcrypt implementation

2011-06-28 Thread Ian G

On 28/06/11 11:25 AM, Nico Williams wrote:

On Tue, Jun 28, 2011 at 9:56 AM, Marsh Ray <ma...@extendedsubset.com>  wrote:



Consequently, we can hardly blame users for not using special characters in
their passwords.


The most immediate problem for many users w.r.t. non-ASCII in
passwords is not the likelihood of interop problems but the
heterogeneity of input methods and input method selection in login
screens, password input fields in apps and browsers, and so on, as
well as the fact that they can't see the password they are typing to
confirm that the input method is working correctly.


This particular security idea came from terminal laboratories in the 
1970s and 1980s where annoying folk would look over your shoulder to 
read your password as you typed it.


The assumption of people looking over your shoulder is well past its 
use-by date.  These days we work with laptops, etc, which all work to a 
more private setting.  Even Internet Cafes have their privacy shields 
between booths.


There are still some lesser circumstances where this is an issue (using 
your laptop in a crowded place or typing a PIN onto a reader/ATM). 
Indeed in the latter case, the threat is a camera that picks up the keys 
as they are typed.


But for the most part, we should be deprecating the practice at its 
mandated level and exploring optional or open methods.  Like:



Oddly enough
mobiles are ahead of other systems here in that they show the user the
*last/current* character of any passwords they are entering.



iang


Re: [cryptography] this house believes that user's control over the root list is a placebo

2011-06-27 Thread Ian G

On 26/06/11 1:26 PM, Marsh Ray wrote:

On 06/25/2011 03:48 PM, Ian G wrote:

On 21/06/11 4:15 PM, Marsh Ray wrote:


This was about the CNNIC situation,


Ah, the "I'm not in control of my own root list" threat scenario.

See, the thing there is that CNNIC has a dirty reputation.


That's part of it. But there are some deeper issues.

Deeper issue A applies equally if you *are* the government of China.
Would it make sense for you to trust root CAs controlled by other
governments? Of course, this might seem a more academic question if you
are in China since your OS is likely MS Windows made in the US anyway.


Yes, exactly.  For everyone who's paranoid about CNNIC, there are 10 who 
are scared of some other government.  It's not about one CA, it's about 
all of those scenarios;  there are many many people outside the USA that 
feel the same about the USA government.


If we pander to those who are scared of CNNIC, that means all the 
USA-based CAs are next.


A better thing to do is work our risk analysis (which is shortly to be 
mandated for CAs, but not for anyone else...).


For what we want browsers to do, is it reasonable that governments 
somewhere somehow can MITM us?


Probably:  for online banking or credit cards (what SSL was intended to 
deal with) it is reasonable.  For freedom fighting / terrorism, it's 
probably not reasonable.  But, are we really saying that we want to 
provide a system for those latter people?  What costs are we willing to 
take on board?  Are we going to kill it for the former group?


That's a rabbit hole, are you sure you want to go down it?



Deeper issue B is a simple engineering failure calculation. Even if you
only trust reliable CAs that will protect your security 99 years out of
100 (probably a generous estimate of CA quality), then with 100 such
roots you can expect to be pwned 63% of the time.
(1 - 0.99^100) = 0.63


Well, ug!  Those numbers assume that a CA breaches us for the entire 
year, and it breaches everyone for that year, and we all lose big time 
from that breach.


It seems unreasonable to assume such apocalyptic results, especially 
given the rather singular data points we have (a handful of breaches, 
and zero damaged customers or users).


More likely, we will see breaches at a level of 0.1% to 1% per year, and 
those breaches will affect around 0.1% to 0.1% of the users, and 
around 0.1% to 0.1% of the RPs.


That's an acceptable risk.
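Both estimates are easy to check numerically (a sketch assuming independent per-CA failures, which is itself a simplification; the function name is mine):

```python
def p_any_failure(p_single: float, n_cas: int) -> float:
    """Probability that at least one of n independent CAs fails
    in a year, given a per-CA failure probability p_single."""
    return 1.0 - (1.0 - p_single) ** n_cas

# Marsh's pessimistic figure: 100 roots, each 99% reliable per year.
print(round(p_any_failure(0.01, 100), 2))   # 0.63

# A per-CA incident rate of 0.1%/year across the same 100 roots
# gives roughly a 10% chance of some incident somewhere.
print(round(p_any_failure(0.001, 100), 2))
```

The arithmetic itself isn't in dispute; the argument above is about whether "one CA fails" really translates into "everyone is breached for a year".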


But CNNIC passed the test to get into the root lists.


That tells me it was a bad test.


Many might agree with you.  When I did the test, the result was 
positive, but still didn't pass ... I'm not sure I can confirm whether 
it was a good test or a bad test on one data point, but I can tell you 
it is an expensive test :)



Which do you want? A CA gets into a root list because it is nice and
pretty and bribes its way in? This was the old way, pre 1995. Or
there is an objective test that all CAs have an equivalent hurdle in
passing? This was the post 1995 way.


There's no dichotomy here. Cash payments can make a fantastically
objective test.


:)  So CNNIC is in either way.


There's no easy answer to this. Really, the question being asked is
wrong.


Yeah.


The question really should be something like do we need a
centralised root list?


Well something is going to get shipped with the browser, even if it's
something small and just used to bootstrap the more general system.


Right.  The Microsoft dynamic population makes a lot of sense, from an 
engineering perspective.  Especially if you're aware of how hard Mozilla 
has found it to police this issue.  Indeed, the fixed root list of 
Mozilla looks very 1970s-ish.



How about these questions:
When is a centralized root list necessary and when can it be avoided?


Vendors typically reject all variations of the centralised root list 
model at the centralised distribution level.  This is an article of faith.


Where there is some room to experiment is with plugins to browsers.


How can the quality of root CAs be improved?


Not easily.  There are several barriers:

Disclosures.  We need a lot more of the right disclosures before we can 
move to improve the quality of the CAs, as only once the entire model is 
on the table in documented form can focus be achieved.  The CAs control 
what they want to disclose via CABForum.  So you will only see the right 
disclosures come slowly, if at all.  There are a batch of new 
disclosures coming through in a document called Basic Requirements.


Reputation.  The vendors hold the line that reputation of CAs is not to 
be used in a formalised sense to allow the CAs to compete.  This is 
basically a failure of marketing on the part of the vendors.  For their 
credit, the CAs have grumbled about this for a long time.  EV goes some 
way towards branding the CAs, but it mucked it up by exchanging the 
branding for a hill of beans called EVG.  So it ended up confirming the 
race to the bottom.

Re: [cryptography] this house believes that user's control over the root list is a placebo

2011-06-26 Thread Ian G

On 26/06/11 5:50 AM, Ralph Holz wrote:

Hi,


Any model that offers a security feature to a trivially tiny minority,
to the expense of the dominant majority, is daft.  The logical
conclusion of 1.5 decades worth of experience with centralised root
lists is that we, in the aggregate, may as well trust Microsoft and the
other root vendors' root list entirely.

Or: find another model.  Change the assumptions.  Re-do the security
engineering.


You have avoided the wording "find a better model" - intentionally so?


:)  It's very hard to word proposals that go against the belief of many, 
without being inflammatory.  If it is too inflammatory, nobody reads it. 
Even if it is right.  We lose another few years...



Because such work would only be meaningful if we could show we have
achieved an improvement by doing it.


Yeah.  So we have a choice:  improve the overall result of the current 
model, or try another model.


The point of the subject line is that certain options are fantasy.  In 
the current model, we're rather stuck with a global solution.


So, fixing it for CNNIC is ... changing the model.


Which brings us to the next point: how do we measure improvement? What
we would need - and don't have, and likely won't have for another long
while - are numbers that are statistically meaningful.


Right, indeed.  The blind leading the blind :)


On moz.dev.sec.policy, the proposal is out that CAs need to publicly
disclose security incidents and breaches.


Yes, but they (we) haven't established why or what yet.


This could actually be a good
step forward. If the numbers show that incidents are far more frequent
than generally assumed, this would get us away from the low frequency,
high impact scenario that we all currently seem to assume, and which is
so hard to analyse. If the numbers show that incidents are very rare -
fine, too. Then the current model is maybe not too bad (apart from the
fact that one foul apple will still spoil everything, and government
interference will still likely remain undetected).


Except, we've known that the number of security patches released by 
Microsoft tells us ... nothing.  We need more than numbers and 
research to justify a disclosure.



The problem is that CAs object to disclosure on the simple grounds that
public disclosure hurts them. Even Startcom, otherwise aiming to present
a clean vest, has not yet disclosed what happened on June 15th this year.


Yes, it's hilarious isn't it :)


Mozilla seems to take the stance that incidents should, at most, be
disclosed to Mozilla, not the general public. While understandable from
Moz's point of view


Mozo are doing it because it makes them feel more in control.  They are 
not as yet able to fully explain what the benefit is.  Nor what the 
costs are.



- you don't want to hurt the CAs too badly if you
are a vendor - it still means researchers won't get the numbers they
need. And the circle closes - no numbers, no facts, no improvements,
other than those subjectively perceived.



OK.  So we need to show why researchers can benefit us with those numbers :)

(IMHO, the point is nothing to do with researchers.  It's all to do with 
reputation.  It's the only tool we have.  So disclosure as a blunt 
weapon might work.)




iang
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] this house believes that user's control over the root list is a placebo

2011-06-25 Thread Ian G

On 21/06/11 4:15 PM, Marsh Ray wrote:

On 06/21/2011 12:18 PM, Ian G wrote:

On 18/06/11 8:16 PM, Marsh Ray wrote:

On 06/18/2011 03:08 PM, slinky wrote:



 But we know there are still hundreds of
trusted root CAs, many from governments, that will silently install
themselves into Windows at the request of any website. Some of these
even have code signing capabilities.


Hmmm... I'm currently working on a risk analysis of this sort of thing.
Can you say more about this threat scenario?


I did a blog post about it a while back: http://extendedsubset.com/?p=33

This was about the CNNIC situation,


Ah, the "I'm not in control of my own root list" threat scenario.

See, the thing there is that CNNIC has a dirty reputation.  But CNNIC 
passed the test to get into the root lists.


Which do you want?  A CA gets into a root list because it is nice and 
pretty and bribes its way in?  This was the old way, pre-1995.  Or there 
is an objective test, so that all CAs have an equivalent hurdle to pass? 
 This was the post-1995 way.


There's no easy answer to this.  Really, the question being asked is 
wrong.  The question really should be something like "do we need a 
centralised root list?"




since then we've seen Tunisia MITM
its citizens and they have a national CA as well.


Yup.


Basically, MS Windows has a list of Trusted Root CAs. But the list
displayed there is actually just a subset of the CAs that are
effectively trusted. When you browse to a site with a CA not in this
list, Windows can contact Microsoft and on-the-fly add that cert to your
trusted root store. Innovative, huh?



This is the geek's realisation that they cannot control their list of 
trusted CAs.  Their judgement is undermined, as MS Windows' root list 
has gone the next step to dynamic control, which means that the users' 
ability to verify the root is undermined a bit more by not having an 
ability to stop the future dynamic enhancements.


In practice, if we assume a centralised root list, this is probably the 
better result.


It works quite simply:  1 billion users don't check the root list, at 
all.  They rely entirely on the ueber-CA to generate a good root list. 
A tiny fraction of that number (under 1 million, or 0.1%) know about 
something called a "root list", something perversely called "trust 
bits", and the ability to fiddle those bits.  They do that, and imagine 
that they have achieved some higher level of security.  But, this 
technique has difficulty establishing itself as anything more than a placebo.


Any model that offers a security feature to a trivially tiny minority, 
at the expense of the dominant majority, is daft.  The logical 
conclusion of 1.5 decades' worth of experience with centralised root 
lists is that we, in the aggregate, may as well trust Microsoft and the 
other root vendors' root lists entirely.


Or: find another model.  Change the assumptions.  Re-do the security 
engineering.


iang


Re: [cryptography] Is Bitcoin legal?

2011-06-16 Thread Ian G

On 16/06/11 12:34 AM, John Levine wrote:

Bitcoins aren't securities, because they don't act like securities.


Right.  Or more particularly, he asked:

... I can’t help wondering why
Bitcoins aren’t unregistered securities.

And the answer is that the registrar of securities defines what the 
securities are, and the SEC's definition is a long way away from BitCoin.


Uh-oh?  Maybe someone will be hearing from the SEC?

No, that's not how the SEC works.  The SEC is a responding organisation. 
 They only deal when there is a complaint.  Even when there is a 
complaint, they will try their best to ignore it.  Cf. Madoff, which 
was dealing in securities.


If there is an interest, it will come from UST, via USSS.



There's no promise to pay, no nominal value, and you don't have a
claim on some part of something else.


Nod. A security is a contract where something is secured by one party to 
the benefit of another.  We're missing the three components here, so 
where do we start?  Who's going to complain and what's their complaint?


Rock on...

iang


Re: [cryptography] Crypto-economics metadiscussion

2011-06-14 Thread Ian G

On 14/06/11 2:31 AM, Marsh Ray wrote:


I ain't no self-appointed moderator of this list and I do find the
subject of economics terribly interesting, but maybe it would make sense
to willfully confine the scope of our discussion of Bitcoin and other
virtual currencies to the crypto side of it.


Crypto people spend all their lives learning theoretical crypto in 
groups like this.  Then they go and apply their theoretical crypto out 
in the real world, and it bombs.  Or worse:


  http://forum.bitcoin.org/index.php?topic=16457.0

In contrast, economists spend all their lives learning theoretical econ 
in other places.  Then they go and apply their theoretical econ in the 
real world, and it bombs.  (Cue links to the IMF, WB, etc.)


Everything that the econ people say is true, but they ain't gonna build 
it.  Everything that the crypto people say is true, but people ain't 
gonna use it.


How might there be a place where the knowledge can pass back and forth? 
 Back in the halcyon days of DigiCash, Zooko and I used to run an 
informal thing called the Weber Economics Club.  We digital cash people 
would gather every Friday night in a cafe called the Weber, and there 
we'd spend about an hour or two talking through some particular 
economics concept.  And especially how it applied to our world of 
digital cash.  We were very aware that economics was key to our designs, 
then.


I'm not saying this group can do that.  But, to the extent that the 
ecogniscenti can influence the crypto people, something of value might 
come out.  To the extent that the cryptoplumbers can build something of 
economic stability, some good might come out.


On the other hand, talking just pure theory is fun too :)

iang


PS: I agree that talk about the housing crisis belongs elsewhere.


Re: [cryptography] Crypto-economics metadiscussion

2011-06-14 Thread Ian G

On 15/06/11 12:47 AM, Ian G wrote:

Or worse:

http://forum.bitcoin.org/index.php?topic=16457.0


That link is down, no surprise.  From my cached copy, I wrote it up on 
the blog:


http://financialcryptography.com/mt/archives/001327.html

Far too much from me, signing out... iang.


[cryptography] Is BitCoin a triple entry system?

2011-06-13 Thread Ian G

On 13/06/11 12:56 PM, James A. Donald wrote:

On 2011-06-12 8:57 AM, Ian G wrote:

I wrote a paper about John Levine's observation of low knowledge, way
back in 2000, called Financial Cryptography in 7 Layers. The sort of
unstated thesis of this paper was that in order to understand this area
you had to become very multi-discipline, you had to understand up to 7
general areas. And that made it very hard, because most of the digital
cash startups lacked some of the disciplines.


One of the layers you mention is accounting.


Yes, so back to crypto, or at least financial cryptography.

The accounting layer in a money system implemented in financial 
cryptography is responsible for reliably [1] holding and reporting the 
numbers for every transaction and producing an overall balance sheet of 
an issue.


It is in this that BitCoin may have its greatest impact -- it may have 
shown the first successful widescale test of triple entry [2].


Triple entry is a simple idea, albeit revolutionary to accounting.  A 
triple entry transaction is a 3 party one, in which Alice pays Bob and 
Ivan intermediates.  Each holds the transaction, making for triple copies.


To make a transaction, Alice signs over a payment instruction to Bob 
with her public-key-based signature [3].  Ivan the issuer then packages 
the payment request into a receipt, and that receipt becomes the 
transaction.


This transaction is digitally signed by multiple parties, including at 
least one independent party [4].  It then becomes powerful evidence of 
the transaction [5].


The final receipt *is the entry*.  Then, the *collection of signed 
receipts* becomes the accounts, in accounting terms.  That collection 
replaces one's system of double entry bookkeeping, because the single 
digitally signed receipt is better evidence than the two entries that 
make up the transaction, and the collection of signed receipts is a 
better record than the entire chart of accounts [6].


A slight diversion to classical bookkeeping, as replacing double entry 
bookkeeping is a revolutionary idea.  Double entry has been the bedrock 
of corporate accounting for around 700 years, since documentation by a 
Venetian Friar named Luca Pacioli.  The reason is important, very 
important, and may resonate with cryptographers, so let's digress to there.


Double entry achieves the remarkable trick of separating out mishaps 
from frauds.  The problem with single entry (what people do when making 
lists of numbers and adding them up) is that the person can leave off a 
number, and no-one is the wiser [7].  We can't show the person as either 
a bad bookkeeper or as a fraudulent bookkeeper.  This Achilles heel of 
primitive accounting meant that bookkeeping limited the business to 
the size at which it could maintain honest bookkeepers.
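
To make the mishaps-versus-frauds point concrete, here is a toy sketch (my own, not from the original post) of how double entry's trial balance exposes a dropped entry, where a single-entry list of numbers would hide it.  The account names are invented for the example:

```python
# Toy double-entry ledger: every transaction posts equal debits and
# credits, so a dropped entry breaks the trial balance instead of
# vanishing silently, as it would in single entry.

ledger = []

def post(debit_acct, credit_acct, amount):
    ledger.append((debit_acct, amount))      # debit recorded as positive
    ledger.append((credit_acct, -amount))    # credit recorded as negative

post("cash", "sales", 100)
post("inventory", "cash", 40)

def trial_balance(entries):
    return sum(amount for _, amount in entries)

assert trial_balance(ledger) == 0            # books balance

# A sloppy (or fraudulent) bookkeeper drops one side of a transaction...
tampered = ledger[:-1]
# ...and the trial balance no longer sums to zero: the omission shows.
print(trial_balance(tampered))               # 40
```

The balance check cannot say *which* of mishap or fraud occurred, only that something is wrong, which is exactly the separation the text describes.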


Where honest bookkeepers meant family members.  All others, typically, 
stole the boss's money.  (Family members did too, but at least for the 
good of the family.)  So until the 1300s and 1400s, almost all businesses 
were either crown-owned, in which case the monarch lopped off the head 
of any doubtful bookkeeper, *or* were family businesses.


The widespread adoption of double-entry through the Italian trading 
ports led to the growth of business beyond the limits of family.  Double 
entry was therefore the keystone of the enterprise; it was what created 
the explosion of trading power of the city states in now-Italy [8].


Back to triple entry.  The digitally signed receipt dominates the two 
entries of double entry because it is exportable, independently 
verifiable, and far easier for computers to work with.  Double entry 
requires a single site to verify presence and preserve resilience; the 
signed receipt does not.


There is only one area where a signed receipt falls short of complete 
evidence, and that is when a digital piece of evidence can be lost.  For 
this reason, all three of Alice, Bob and Ivan keep hold of a copy.  All 
three combined have the incentive to preserve it;  the three will police 
each other.
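
The Alice/Bob/Ivan receipt flow above can be sketched in a few lines of Python (my own toy illustration, not from the original post).  HMAC stands in for the public-key signatures of [3], purely to keep the sketch stdlib-only; the structure of the flow is what matters:

```python
import hmac, hashlib, json, os

# Per-party secret keys; HMAC stands in for public-key signatures here
keys = {name: os.urandom(32) for name in ("alice", "ivan")}

def sign(party, payload):
    return hmac.new(keys[party], payload, hashlib.sha256).hexdigest()

# 1. Alice signs over a payment instruction to Bob
instruction = json.dumps({"from": "alice", "to": "bob", "amount": 10},
                         sort_keys=True)
alice_sig = sign("alice", instruction.encode())

# 2. Ivan the issuer packages the request into a receipt -- the receipt
#    becomes the transaction, and the single signed record *is* the entry
receipt = json.dumps({"instruction": instruction, "alice_sig": alice_sig},
                     sort_keys=True)
ivan_sig = sign("ivan", receipt.encode())

# 3. All three parties hold the same signed record; they police each other
copies = {holder: (receipt, ivan_sig) for holder in ("alice", "bob", "ivan")}
assert len(set(copies.values())) == 1
print("triple copies held by:", sorted(copies))
```

Any holder can later present their copy; a tampered copy would fail verification against Ivan's signature, and a "lost" copy is covered by the other two.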


Back to BitCoin.  BitCoin achieves the issuer part by creating a 
distributed and published database over clients that conspire to record 
the transactions reliably.  The idea of publishing the repository to 
make it honest was initially explored in Todd Boyle's netledger design.


We each independently converged on the concept of triple entry.  I 
believe that is because it is the optimal way to make digital value work 
on the net;  even when Nakamoto set such hard requirements as no 
centralised issuer, he still seems to have ended up at the same point: 
Alice, Bob and something I'll call Ivan-Borg holding single, replicated 
copies of the cryptographically sealed transaction.


With that foundation, we can trade.




Recall that in 2005
November, it became widely known that toxic assets were toxic.


In 2005, the SEC looked at my triple entry implementation, and


 From late in 2005 to late in 2007

Re: [cryptography] Digital cash in the news...

2011-06-13 Thread Ian G

On 13/06/11 5:54 PM, Adam Back wrote:

Bitcoin is not a pyramid scheme, and doesn't have to have the collapse and
late-joiner losers. If bitcoin does not lose favor - i.e. the user base grows
and then maintains size of user base in the long term, then no one loses.


Um, Adam, that's the very definition of a pyramid scheme :)

No-one need lose as long as the size of the user base grows, long term!

So everyone is incentivised to bring in new victims^H^H^H^H^H^H users :P

That's why they're illegal, typically.


I think in the current phase the deflation (currency increasing in value)
helps increase interest and number of users.


Um, yeah, whatever.  Look, whatever you do, don't tell any of your 
friends or family to invest in it.



Say that in the next phase bitcoin stops rapid expansion and reaches some
stable number of users, the deflationary period stops, and the remaining
users use it for transactions only (not speculation). I don't see the losers
in that scenario.


No, but the scenario is incomplete:  Those speculating on an increase in 
value will realise it has reached stability.  So they'll sell.  Which 
will cause a reduction in value.  Which will cause a run, as those that 
didn't understand the mechanics of a pyramid scheme get their rude lesson.



However. Unless the laws of financial conservation have been repealed
by the design, those who follow have to invest a lot and come out with
less...




iang


Re: [cryptography] Digital cash in the news...

2011-06-12 Thread Ian G

On 12/06/11 4:21 PM, Peter Gutmann wrote:

Am I the only one who thinks it's not coincidence that the (supposed) major
use of bitcoin is by people buying hallucinogenic substances?



The best way to think of this is from the marketing concepts of product 
diffusion or product life cycle.


http://www.quickmba.com/marketing/product/diffusion/

The challenge for the new product is to migrate from left to center to 
right of that graph in the above link.  In doing so, the newer groups come 
to dominate the earlier groups, and the earlier groups typically fall away.


Recall the video story?  The innovators got in very early, and bought 
Betamax because it was better quality.  But they got stuck when the 
market was captured by the VHS system.  So lesson #1 is that early 
groups are risking punishment.  Same story for DVD.


Also, the backroom story for video was that porn films were the big 
market that lifted the revenues of the distribution chains and made it 
worthwhile.  These products/people/chains kept the industry alive while 
it built up steam for the mainstream.  Lesson #2 -- you need these 
strange uncomfortable groups to get to where you want to get.


Later on, as more mainstream comes into play, these strange 
uncomfortable groups can be eased out.  Or they go somewhere else, or we 
change our minds about them.  We also write them out of history...


So, as far as recreational pharma product is concerned, this is typical 
of these things (if that is what it is).  E.g., SSL certificates' early 
revenue was also porn, Paypal had some dodgy customers, and for e-gold, 
it was ponzis / games that pushed the business into the black.


The challenge is what to do next, how to grow up.  This is going to be 
practically impossible for BitCoin because it has no guiding hand like 
e.g., Paypal had.  It's only got the invisible hand, which suits the 
innovators fine ... but it also means it hasn't got much of a chance of 
going mainstream.




iang


Re: [cryptography] Preserve us from poorly described/implemented crypto

2011-06-07 Thread Ian G

On 6/06/11 11:57 AM, David G. Koontz wrote:

On 5/06/11 6:26 PM, Peter Gutmann wrote:


That's the thing, you have to consider the threat model: If anyone's really
that desperately interested in watching your tweets about what your cat's
doing as you type them then there are far easier attack channels than going
through the crypto.




It's a consumer-grade keyboard, not military-crypto hardware, chances are
it'll use something like AES in CTR mode with an all-zero IV on startup, so
all you need to do is force a disassociate, it'll reuse the keystream, and you
can recover everything with an XOR.



There are other ways to deny effectiveness. If the fixed keys are generated
from things knowable during Bluetooth device negotiation the security would
be illusory.  If that security were dependent on an external security factor
but otherwise based on knowable elements you'd have key escrow.

It's hard to imagine as Peter said there'd be any great interest in
cryptanalytic attacks on keyboard communications.  You could counter the
threat by using your laptop's built-in keyboard. It sounds like a marketing
gimmick, and could be considered a mild form of snake oil - the threat
hasn't been defined, nor the effectiveness of the countermeasure proven.  A
tick box item to show sincerity without demonstrating dedication.



Maybe it is intended just as a slight hurdle to stop the kid brother 
listening in to big sister's sex chat with her b/f.  Or office level 
snooping.


As such, it's welcome.  It means that anyone who does succeed has gone 
to special efforts to do this ... which leaves some tracks.


There are the military / national security guys.  And then there are the 
rest of us.  For the rest of society, some simple opportunistic fix is 
often all that is needed to knock out 99.9% of the opportunistic 
attacks.  As practically all of our threats are opportunistic, this is 
pretty much the top priority for society at large.
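
For the curious, the keystream-reuse scenario Peter quotes above can be sketched in a few lines (my own toy, not from the original post).  The keystream is simulated with random bytes; the XOR arithmetic is identical for real AES-CTR output under a fixed key and all-zero IV:

```python
import os

# Fixed key + zero IV on every (re)association => the same keystream twice
keystream = os.urandom(32)

def ctr_encrypt(plaintext: bytes) -> bytes:
    # CTR mode is just plaintext XOR keystream
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

session1 = ctr_encrypt(b"user types: hello world, cat!")
session2 = ctr_encrypt(b"user types: secret password!!")

# Attacker XORs the two ciphertexts: the shared keystream cancels out
xored = bytes(a ^ b for a, b in zip(session1, session2))

# With known (or guessed) plaintext from session 1, session 2 falls out
recovered = bytes(x ^ p for x, p in zip(xored, b"user types: hello world, cat!"))
print(recovered)  # b'user types: secret password!!'
```

No cryptanalysis of AES is involved; the cipher is bypassed entirely, which is rather the point of Shamir's law quoted elsewhere in this archive.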


iang


Re: [cryptography] encrypted storage, but any integrity protection?

2011-01-16 Thread Ian G

On 14/01/11 5:40 AM, travis+ml-rbcryptogra...@subspacefield.org wrote:

So does anyone know off the top of their head whether dm-crypt or
TrueCrypt (or other encrypted storage things) promise data integrity
in any way, shape or form?

I'm assuming they're just encrypting, but figured I'd ask before
digging into source and design docs.

It's important to understand the guarantees of the tools.



Others have answered, but fwiw here is my wet-blanket comment:

This is an example of bottom-up thinking, and the unfortunate tendency 
to consider cryptology as the answer to any & all needs.


If instead we look at the issue top-down, a different picture emerges.

A user wants her data to be secure and resilient.  Available to her, and 
to those she designates, all the time, and not to anyone else, any of 
the time.


A proper design exercise would then realise that the 363kg gorilla in 
the room is that the data is unreliably stored under many circumstances 
that aren't within the grasp of cryptography.  The canonical thing is 
the failure of the hard drive.


This leads us to backups.  As an integral part of any discussion about 
any data.  If we follow this along its natural (top-down) path we 
discover the worst aspect of backups is that they aren't available when 
needed.  For hundreds of reasons.  We can see an attempt at an answer to 
this in the popularity of resilient drives (mirroring, RAID, etc).


(If we follow the unnatural path, and again think of a cryptographic 
solution, we discover that what is "privacy" for an online drive is 
*not* "privacy" for a set of backups.  So we end up with *two* 
cryptographic solutions being required, not one.)


Back to the natural top-down path.  The uncertainty of backups leads us 
towards distributed m-of-n network drive arrangements, at either a 
service level or an application level.  Then, once that basic 
requirement is made, adding privacy features to the cloud layout of 
the drive becomes much more tenable within a holistic design approach.


So for example, the Tahoe system would epitomise this form of complete 
architectural thinking leading towards meeting the user's entire needs.


http://tahoe-lafs.org/source/tahoe/trunk/docs/quickstart.html
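
As a toy sketch of the m-of-n idea (my own, and not Tahoe's actual mechanism, which uses erasure coding for bulk data), Shamir secret sharing shows the principle: split across n holders, any m of them suffice to reconstruct, and fewer reveal nothing:

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, comfortably larger than the secret

def split(secret: int, m: int, n: int):
    # Random polynomial of degree m-1 with constant term = secret;
    # shares are points (x, poly(x)) for x = 1..n
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(m - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the polynomial at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

secret = 123456789
shares = split(secret, m=3, n=5)        # any 3 of 5 holders can rebuild
assert reconstruct(shares[:3]) == secret
assert reconstruct(shares[-3:]) == secret
print("recovered:", reconstruct(random.sample(shares, 3)))
```

The resilience and the privacy come from the same split, which is why the "holistic design" the text argues for falls out so naturally here.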

Or, to put it another way, in brutal terms, putting some sort of 
cryptography into a single-drive approach is likely to solve only a 
small part of the user's problem.  So small that, likely, the added 
complexity won't pay for itself.  Such drives will always be the 
toy-thing of geeks.  Or worse, the added complexity might make the 
user's overall problem worse.  If so, the further likely result will be 
that encrypted drives will not make their way into the mass market, 
because users will lose when they try and use them.


fwiw :)

iang


Re: [cryptography] wanted: recommendations for best papers in cryptology

2011-01-08 Thread Ian G
The following is written from a user perspective, not a cryptography 
perspective :)


On 8/01/11 1:03 PM, travis+ml-rbcryptogra...@subspacefield.org wrote:

Hey all,

I'm attempting to create an extensive archive of papers on -graphy and
-analysis, locally stored and broken down by category/hierarchy,
according to my own personal taxonomy.  Maybe one day I'll try to
figure out how to annotate their metadata in some way, possibly a
bibtex-to-filename-to-hyperlink mapping, and web apps to ease data
entry.


The most important things I keep quoting (because they are typically the 
things that cause the house of cards to collapse...) are these:


Kerckhoffs' 6 laws, especially the 6th, which is dynamite to most 
security systems.

Adi Shamir's law, that crypto is typically bypassed.


I know that taxonomies are doomed with such large collections of
unique data, but the web and citeseer and Google Scholar just isn't
doing the job for me, for a variety of reasons that should be obvious
to anyone who has done extensive self-study in a field like this.

I was wondering if anyone had suggestions on conference proceedings,
individual papers, and authors that are worthy of inclusion.  Quality
is far more important than quantity - the web already provides the
latter.


For me,

* Specifications for DES and SHA (or similar), because of how they 
created the black box approach.

* CBC, and similar modes for turning one algorithm into another.
* Selecting Cryptographic Key Sizes, Arjen K. Lenstra and Eric R. 
Verheul, PKC2000: p. 446-465, 01/2000, because of how they showed how to 
connect black boxes and solve one particular issue, that of numerology.
* NIST's newer approach to PRNGs, because of their (perceived) 
success in boxing the RNG field (see disclaimer below).


Each of these represents a major step forward in black box design, which 
fundamentally meets the needs of computer scientists to package and 
interface with simple sets of requirements.


(In a side comment to cryptographers who feel roughly treated by the 
throwaway attitude to their fine academic achievements, I point to Adi 
Shamir's other law, which says only the simplest ideas get adopted. 
Which means that unless it meets the needs of the computer scientist 
for simplicity and clarity, it won't get adopted.  Sorry.)



Particularly, I've found cryptanalysis to be spottier in coverage.


I gather this is really a course of study, not a paper or collection of 
papers.  It's also highly specialised, of limited usefulness outside the 
strict field of cryptography.



I recall Schneier had an interesting self-study course in block
cipher cryptanalysis:

http://www.schneier.com/paper-self-study.pdf


Good start; I think several colleges have published their courses?


Is there anything else out there like this?

Also, here are three books I wish I had.  Do they exist, or will I
have to compile them over the next decade or two?

0) Cryptographic Protocol Design

Something like this:
http://www.subspacefield.org/security/security_concepts/index.html#tth_sEc28.6
However, I think it could be made into an entire book, and covered in far
more detail and less like a cookbook, but still accessible to security
engineers, as opposed to discrete math postgrads.



I personally think you'll be looking in the wrong place.

The problem (IMNSHO) with the cryptography world view is that they think 
that cryptographic protocol design is an art of cryptography.  It isn't, 
it's an art of computer science.


This art is best described as protocol design augmented with a little 
cryptography.  In this way, we avoid the bottom-up disease that 
typically infects the crypto-dreamers' attempts to solve every perceived 
issue with another crypto trick, and add in issues that don't exist 
because they enable a new crypto trick to be used...


And, protocol design is firmly a part of computer science.  When you get 
up into higher layer designs, it becomes architecture ... and eventually 
migrates to being business.  So even computer scientists don't have the 
lock on it :P


On a tangent, my experience is coalesced here:
http://iang.org/ssl/hn_hypotheses_in_secure_protocol_design.html


1) Cryptography: A Study in Failure.

Show cryptosystems and how they were broken or semi-broken, over the
years.  That _is_ how we learn, right?


Definitely.  This is actually embedded in cryptographic pedagogy.  Lore 
has it that you should spend a decade or so on cryptanalysis of existing 
algorithms before attempting to design your own algorithm.  This used to 
be embedded in computer science when the hacker was a journeyman whose 
task it was to break systems in order to understand how to build them 
better.  Unfortunately, we've lost that culturally these days.



I'm thinking of knapsack, Kerb, e=3 SSL keys, hash length extension,
PKCS#7 padding oracle, and so on.

Note that the system doesn't have to have been designed according to
best practices at the time to be instructive; 

Re: [cryptography] Fwd: [gsc] Fwd: OpenBSD IPSEC backdoor(s)

2010-12-17 Thread Ian G

(resend, with right sender this time)

On 17/12/10 3:30 PM, Peter Gutmann wrote:


To put it more succinctly, and to paraphrase Richelieu, give me six lines of
code written by the hand of the most honest of coders and I'll find something
in there to backdoor.



This is the sort of extraordinary claim which I like.

So, how to explore this claim and turn it into some form of 
scientifically validated proposition?


Perhaps we should run a competition?

   Come one, come all!  Bring your KR!

   Submit the most subtle backdoor into open source crypto thingumyjob.

   Win fame, fortune, and a free holiday in a disputed part of Cuba ...

   Judged by a panel of extremely crotchety and skeptical cryptoplumbers

   (aka, assembled herein).

Fancy?

iang


Re: [cryptography] current digital cash / anonymous payment projects?

2010-12-01 Thread Ian G

On 1/12/10 6:12 AM, travis+ml-rbcryptogra...@subspacefield.org wrote:

Can anyone give me a good rundown of the current anonymous payment
systems, technologies and/or algorithms?



OK, there are some issues here.  There is technology, algorithms, 
patents, techniques, protocols, applications, services, business models 
... all lumped into one general term without care.


Anonymous payment systems are a bit of a contradiction, internally. 
What you're probably talking about is untraceable payment systems, which 
typically use Chaum or Brands or Wagner algorithms (there are a handful 
of other variants).  In this model, the coin is stripped of its 
identifying information as it transfers from Ivan to Alice to Bob.  When 
Bob deposits the coin to Ivan (issuer) for credit to his account, or for 
rollover to new coins, the chain of traceability is broken.
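
A toy sketch of the Chaum model may help (my own illustration, not from the original post; tiny numbers, no padding, wholly insecure -- a real system uses full-size RSA with proper padding).  Ivan signs a blinded coin without ever seeing it, which is what strips the identifying information:

```python
import random
from math import gcd

# Ivan the issuer's toy RSA key
p, q = 10007, 10009
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

coin = 123456                       # the coin value Alice wants signed

# Alice picks a blinding factor r coprime to n
r = random.randrange(2, n)
while gcd(r, n) != 1:
    r = random.randrange(2, n)

# Alice blinds the coin and sends only the blinded value to Ivan
blinded = (coin * pow(r, e, n)) % n

# Ivan signs without ever seeing `coin`
blind_sig = pow(blinded, d, n)

# Alice unblinds: (coin^d * r) * r^-1 = coin^d mod n
sig = (blind_sig * pow(r, -1, n)) % n

# Anyone can verify the coin against Ivan's public key (n, e), but Ivan
# cannot link the deposited coin back to the blinded value he signed
assert pow(sig, e, n) == coin
print("verifies:", pow(sig, e, n) == coin)
```

When Bob later deposits `(coin, sig)`, Ivan can verify his own signature but has no record connecting it to Alice's withdrawal, which is the broken chain of traceability described above.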


Then, there is another variation called nymous payment systems.  This 
model is typically done with a client-server public-private key 
arrangement, where the client registers the public key, and signs 
requests (including payments) which are sent to the server.  The privacy 
trick with this one is that the issuer doesn't need to know who holds 
the private key;  so while everything is traceable, it's also nymous.


Now, both of the above have privacy foibles and weaknesses, and both can 
be combined.  Discussing that is ... too much text.


Another variant is the continual slice & combine model.  Somewhat 
echoing the last note below, it all depends what you want to use it for.




It's just an idea at this point, but I'd be interested in hearing
about any commercial offerings by companies offering such systems.



It's a tough market because typically there are regulatory, business and 
governance traps that will knock out most players sooner or later.



Being a cryptonerd, I'd also be interested in hearing about the
technology generally - I've read Applied Crypto 2e, but wondered about
the state of the art.  Is there a good locus of such information?



Not really, but one thing is:  if you build it bottom-up, from the 
crypto, you'll have trouble :)  Instead, look to the business, and go 
bottom down.




iang


Re: [cryptography] current digital cash / anonymous payment projects?

2010-12-01 Thread Ian G

On 2/12/10 1:36 AM, Rayservers wrote:

Not really, but one thing is:  if you build it bottom-up, from the crypto,
you'll have trouble :)  Instead, look to the business, and go bottom down.


You mean top down... :)


Oh, snap!  Yes, exactly.

iang


Which is exactly going on here:
http://www.global-settlement.org/

And when you start at the top, you have to start at the very top.
See Public Notice.

So yes, there is a locus of information, it is referenced in the signature. And
no, admiralty lawyers cannot help you. They don't know the law of the land.



lol.


Re: [cryptography] AES side channel attack using a weakness in the Linux scheduler

2010-11-26 Thread Ian G

On 25/11/10 3:26 AM, Jack Lloyd wrote:


What are people's thoughts on these kinds of local cache attacks, in
terms of actual systems security? While obviously very powerful, I
tend to think that once you have a focused attacker in an unprivileged
account on your machine, you have bigger problems than losing your AES
keys (maybe Midori or Coyotos or L4 will fix this someday).



Yes.

I would call this a medium security architecture, no more.  Anything 
that allows an attacker that close to a machine can't be considered to 
be hi-sec.  Another giveaway for med-sec is using a random selection of 
letters for your security model...


So if you've decided that you're only doing a medium security system, 
then it's probable that you have not done a full analysis, and 
can easily accept the esoteric risk of a cache attack.




iang

PS: Didn't one of the authors of Rijndael write a tongue-in-cheek paper 
revealing a timing attack on AES?



[cryptography] not trusted

2010-11-22 Thread Ian G

On 21/11/10 11:19 PM, Peter Gutmann wrote:

Ian Gi...@iang.org  writes:


It sucks so badly, I decided in future that the only moral and ethical way
one could use the words "encryption" or "security" or the like in any
conversation was if the following were the case:

 there is only one mode, and it is secure.


Something similar was done by the CORBA folks: they banned the use of the word
"trusted" unless it was accompanied by an explanation in the form of "by whom"
and "for what".



Ha, yes we do that at CAcert - the word "trust" is not used, and not to 
be used.


We do assurance and reliance and verification and things like that, but 
that word is more or less banned.


There are a couple of terms that still include it (TTP and WoT).  I've 
mused on whether we can change those, but I can't see an easy way.




Having their magic pixie dust taken away like this reportedly caused severe
problems for some of the people involved...



It causes occasional ructions when people come in looking for the 
product they wanted to buy.  "You're a CA, sell me some trust!"  But 
after it is explained to them that this is impossible, and we do 
something different but similar, they are generally happy.  Not all 
people, but most people.




iang


Re: [cryptography] philosophical question about strengths and attacks at impossible levels

2010-11-21 Thread Ian G

On 21/11/10 8:37 AM, Marsh Ray wrote:

On 11/19/2010 05:39 PM, Ian G wrote:



I don't think this qualifies as a bait-and-switch scenario because the
originally-advertised functionality (the bait) is still part of the
package.



:)


Bait-and-switch would be more like a salesperson saying "No, I'm sorry,
we just ran out of the low-priced RSA certificates we advertised in the
Sunday paper. But I have a fresh shipment of ECC EV certificates that
only cost X times more..." Especially if the store had no intention of
stocking enough of the advertised item to cover the anticipated demand.

The best term for this that I can think of is plain old exaggeration,
but I don't feel like that really captures the idea. It's more that the
claims are extended beyond their original domain, to the point where
they may no longer apply.

Perhaps there's not a word for this because it's simply taken for
granted in marketing. E.g., "this bottled liquid is proven to prevent
dehydration" is extended to imply "this particular bottled liquid will
associate you in some way with others, like these happy and popular
off-duty lifeguards playing beach volleyball."



Yeah.  So, we are in the grey area of marketing.  The line between one 
thing and another is not fixed.  Maybe there is another term, or maybe not.


Terms and laws are just lines drawn on sand, and can be avoided or 
bypassed or shifted to suit the intention.  By the marketing guy, or by 
the attacker.


You go to the store and ask for the product that makes you like 
lifeguards, and they send you home with bottled water.  You're right, 
the product that they originally advertised doesn't exist, and this 
means that they couldn't have been breaking the law.


So not only have they baited your mind with one concept, and switched 
you to a purchase of a product, they've got you on their side, arguing 
their product is fairly marketed!


Maybe the best bait and switch;  by defining the term in law, the 
marketing profession allowed themselves lots and lots of protection, 
lots and lots of grey area, and a chance to look like good corporate 
citizens :)


While we're on marketing and other magic, I highly recommend doing some 
serious units on marketing at b-school or marketing school or somewhere. 
 It's a real eye-opener.  Most folk with engineering background have no 
idea, and typically make as many huge blunders about it as marketing 
folk make about tech.


A bit like internet tech and patent law ;)



iang


Re: [cryptography] patents and stuff (Re: NSA's position in the dominance stakes)

2010-11-20 Thread Ian G

On 21/11/10 2:45 AM, John Levine wrote:

By the way, what does all this semi-informed ranting about patents
have to do with cryptography?



NSA's dominance in security engineering?
  = example of DES-era crypto dominance
  = ECC push today means?
  = patents complication
  = war of words!

The takeaway is that things like patent laws can have a dramatic 
influence on (cryptographic) engineering & business.


Oh, and I should learn to ask my question another way ...



iang


Re: [cryptography] philosophical question about strengths and attacks at impossible levels

2010-11-19 Thread Ian G

On 20/11/10 2:10 PM, James A. Donald wrote:

Ian G wrote:

On this I would demure. We do have a good metric: losses. Risk
management starts from the business, and then moves on to how losses are
affecting that business, which informs our threat model.

We now have a substantial, measurable history of the results of open use
of cryptography. We can now substantially and safely predict the result
of any of the familiar cryptographic components in widespread use,
within the bounds of risk management.

The result of 15-20 years is that nobody has ever lost money because of
a cryptographic failure, to a high degree of reliability.


How about all the money lost because Wifi security does not work?


Yeah, good point...

I would say protocols like that are outside open crypto.  Wasn't wifi 
security put together by closed industry cartels?  IMHO, they've been 
repeatedly shown to have not done a good job.


(Having said that, yes, it is an arguable boundary, open crypto versus 
other stuff.  Perhaps the point is to say that the job is done properly? 
 But that is circular and won't support my claim.)




If the administrator selects encryption for the wifi network, follows
good practices with passwords, and yet attackers get in, is that not an
a cryptographic failure?


It sucks.  It sucks so badly, I decided in future that the only moral 
and ethical way one could use the words "encryption" or "security" or 
the like in any conversation was if the following were the case:


there is only one mode, and it is secure.

What you describe is a non-secure system.  A wifi that can be configured 
to not use encryption?  That's funny, did they pay for that? :D
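The "only one mode, and it is secure" rule translates directly into API design: the interface simply offers no knob for turning protection off. A toy sketch in Python, assuming nothing from the thread (the class name and the HMAC-only construction are illustrative; a real channel would add encryption from a vetted library - the point here is the API shape, not the primitives):

```python
import hashlib
import hmac

class SealedChannel:
    """A channel whose only mode authenticates every message.

    Toy illustration: HMAC-SHA256 for integrity only.  Note what is
    absent: no cipher=None, no security=off flag for an administrator
    to misconfigure.  There is only one mode.
    """

    def __init__(self, key: bytes):
        self._key = key

    def seal(self, msg: bytes) -> bytes:
        # Prepend a 32-byte authentication tag to the message.
        tag = hmac.new(self._key, msg, hashlib.sha256).digest()
        return tag + msg

    def open(self, sealed: bytes) -> bytes:
        # Reject anything whose tag does not verify; there is no
        # "accept unauthenticated" path.
        tag, msg = sealed[:32], sealed[32:]
        expect = hmac.new(self._key, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            raise ValueError("message rejected: bad tag")
        return msg
```

Contrast with the wifi case above: the failure mode there is precisely that the insecure configuration exists at all.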



A common, perhaps the most common, attack on corporations is to get
inside the corporate network through wifi, then mount an sql injection
attack on the corporate database, then steal the corporate database.
This often causes extremely large monetary losses.


Right, that's now beginning to emerge.  I don't know if there are any 
reliable statistics or measurements on how much money is lost because of 
WiFi security, but if we were to attribute the Gonzalez case entirely to 
the poor quality of wifi security, then we're in the money.


http://financialcryptography.com/mt/archives/001294.html
http://www.nytimes.com/2010/11/14/magazine/14Hacker-t.html?_r=2&pagewanted=all

Unless the wifi was configurable, that is ... in which case, well, 
that's silly.




iang