Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-07 Thread Jaap-Henk Hoepman
 
 Public-key cryptography is less well-understood than symmetric-key 
 cryptography. It is also tetchier than symmetric-key crypto, and if you pay 
 attention to us talking about issues with nonces, counters, IVs, chaining 
 modes, and all that, you see that saying that it's tetchier than that is a 
 warning indeed.

You have the same issues with nonces, counters, etc. with symmetric crypto so I 
don't see how that makes it preferable over public key crypto.

Jaap-Henk
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-07 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Sep 6, 2013, at 11:05 PM, Jaap-Henk Hoepman j...@cs.ru.nl wrote:

 
 Public-key cryptography is less well-understood than symmetric-key 
 cryptography. It is also tetchier than symmetric-key crypto, and if you pay 
 attention to us talking about issues with nonces, counters, IVs, chaining 
 modes, and all that, you see that saying that it's tetchier than that is a 
 warning indeed.
 
 You have the same issues with nonces, counters, etc. with symmetric crypto so 
 I don't see how that makes it preferable over public key crypto.

Point taken.

Bruce made a quip, and I offered an explanation about why that quip might make 
sense. 

I have also, in debate with Jerry, opined that public-key cryptography is a 
powerful thing that can't be replaced with symmetric-key cryptography. That's 
something that I firmly believe. At its most fundamental, public-key crypto 
allows one to encrypt something to someone whom one does not have a prior 
security relationship with. That is powerful beyond words.

If you want to be an investigative reporter and want to say, "If you need to 
talk to me privately, use K" -- you can't do it with symmetric crypto; you have 
to use public-key. If you are a software developer and want to say, "If you 
find a bug in my system and want to tell me, use K" -- you can't do it with 
symmetric crypto.

Heck, if you want to securely leave a voicemail for someone you've never 
talked to, you need public-key crypto.
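
To make that concrete, here is a minimal sketch of "publish K, let strangers 
encrypt to it", assuming the Python "cryptography" package and RSA-OAEP; all 
names are illustrative, not any particular tool's:

    # The reporter generates a keypair once and publishes the public half
    # anywhere at all (web page, newspaper, keyserver).
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    reporter_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    published_pem = reporter_key.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo)

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # A source who has never spoken to the reporter encrypts to the published key.
    K = serialization.load_pem_public_key(published_pem)
    ciphertext = K.encrypt(b"meet me at the usual place", oaep)

    # Only the holder of the private key can read it.
    assert reporter_key.decrypt(ciphertext, oaep) == b"meet me at the usual place"

No symmetric-only scheme gives you that first step without some prior shared 
secret.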

That doesn't make Bruce's quip wrong; it just makes it part of the whole story.

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFSKsy0sTedWZOD3gYRAm9wAJ9k8cASoXlfYOK/d0jrMtXQ8N/XegCg3ikv
miKwWy0D+O8JGF+6hh1Y3oU=
=msNM
-END PGP SIGNATURE-
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-07 Thread Samuel Weiler

On Thu, 5 Sep 2013, Phillip Hallam-Baker wrote:

* Allowing deployment of DNSSEC to be blocked in 2002(sic) by 
blocking a technical change that made it possible to deploy in 
.com.


As an opponent of DNSSEC opt-in back in the day, I think this is a 
poor example of NSA influence in the standards process.


I do not challenge PHB's theory that the NSA has plants in the 
IETF to discourage moves to strong crypto, particularly given John 
Gilmore's recent message on IPSEC, but I doubt that the NSA had any 
real influence on the DNSSEC opt-in debacle of 2003.


First, DNSSEC does not provide confidentiality.  Given that, it's not 
clear to me why the NSA would try to stop or slow its deployment.


Second, as I look at the people who opposed opt-in and the IETF 
working group chairs who made the decision to kill it, I don't see 
likely NSA stooges.  The list of opponents during working group last 
call was so short [1] (as compiled by PHB, back in the day) that I 
thought the working group chairs got the consensus call wrong.  The 
DNSEXT chairs were Randy Bush and Olafur Gudmundsson.  In previous 
years, Olafur had worked for TIS Labs, which had taken plenty of DoD 
money over the years.  Even so, I do not suspect he was influenced by 
the NSA.  Randy has taken money from DHS in more recent years, but I'm 
even more convinced he was not an NSA stooge.  (Randy was the chair 
issuing the opt-in last call and writing the summary.)


Third, many of the opt-in opponents in 2003 seemed to be pretty 
convinced that the lowered security guarantees and extra complexity of 
opt-in were nothing more than a subsidy for Verisign, which could just 
as well throw more money at the problem of signing its large zones. 
One might plausibly argue that Verisign's push for opt-in (and its 
later push for NSEC3) was itself a stalling tactic.  One might even go 
further and say that Verisign initiated such stalling at the behest of 
the NSA.  I would not make that argument, but it is at least as 
plausible as an argument that the opt-in opponents or WG chairs were 
NSA stooges.


Lastly, the US DoD was funding some amount of work on DNSSEC at the 
time (i.e., my own participation).  During that timeframe, significant 
progress was being made on the deployability of DNSSEC, and I think 
the DoD funding helped.  Depending on your whims, you could either 
credit DoD for helping or blame them for not providing even more 
funding, which might have made for faster progress.


So, again, while PHB's general theory might have merit, I think the 
DNSSEC opt-in example is not on point.


Disclosures: I was deeply involved in the IETF's DNSEXT working group 
during this time, and my funding came from non-NSA bits of DoD.  I am 
not aware of any NSA influence in my funding, and I felt no NSA 
pressure in the work I was doing.  I was a vocal opponent of opt-in, 
but in the end I chose to step aside and let it advance.[2]


-- Samuel Weiler


[1] http://marc.info/?l=namedroppers&m=105145468327451&w=2

[2] http://marc.info/?l=namedroppers&m=104874927417175&w=2

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] XORing plaintext with ciphertext

2013-09-07 Thread Dave Horsfall
Got a question that's been bothering me for a while, but it's likely 
purely academic.

Take the plaintext and the ciphertext, and XOR them together.  Does the 
result reveal anything about the key or the plaintext?

-- Dave
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-07 Thread Gregory Perry
As an opponent of DNSSEC opt-in back in the day, I think this is a
poor example of NSA influence in the standards process.

I do not challenge PHB's theory that the NSA has plants in the
IETF to discourage moves to strong crypto, particularly given John
Gilmore's recent message on IPSEC, but I doubt that the NSA had any
real influence on the DNSSEC opt-in debacle of 2003.

First, DNSSEC does not provide confidentiality.  Given that, it's not
clear to me why the NSA would try to stop or slow its deployment.

Insecure DNS deployments are probably in the top five attack vectors
for remotely compromising internal network topologies, even those
sporting split DNS configurations.  As you were "...deeply involved in the
IETF's DNSEXT working group", then I presume you know this.

For example, DNS cache poisoning attacks, local ARP cache spoofing
attacks to redirect DNS queries and responses, redirection of operating
system update and patching services that map to fully qualified domain
names such as windowsupdate.microsoft.com, etc.

Correct me if I am wrong, but in my humble opinion the original intent
of the DNSSEC framework was to provide for cryptographic authenticity
of the Domain Name Service, not for confidentiality (although that
would have been a bonus).

Lastly, the US DoD was funding some amount of work on DNSSEC at
the time (i.e., my own participation).  During that timeframe,
significant progress was being made on the deployability of DNSSEC,
and I think the DoD funding helped.  Depending on your whims, you
could either credit DoD for helping or blame them for not providing
even more funding, which might have made for faster progress.

There are many different camps within the DoD.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-07 Thread ianG

On 7/09/13 01:51 AM, Peter Gutmann wrote:

ianG i...@iang.org writes:


And, controlling processes is just what the NSA does.

https://svn.cacert.org/CAcert/CAcert_Inc/Board/oss/oss_sabotage.html


How does '(a) Organizations and Conferences' differ from SOP for these sorts
of things?



In principle, it doesn't -- which is why SOPs are a saboteur's tools of 
preference.  They are used against you, as less experienced people can't 
see the acts behind them [1].


The point is one of degree.  SOPs are there to resolve real disputes. 
They can also be used to cause disputes, and to turn any innocent thing 
into a fight.  So do that, and keep doing that!  Pretty soon the org 
becomes a farce.


In contrast, strong leadership (the chair) knows when to put the lid on 
such trivialities and move on.  So, part of the overall strategy is to 
neutralise the strong chair [2].  As John just reported:


  *  NSA employees participated throughout, and occupied leadership roles
     in the committee and among the editors of the documents

Slam dunk.  If the NSA had wanted it, they would have designed it 
themselves.  The only rational conclusion to draw from their presence is 
that they were there to sabotage it [3].




iang




[0]   SOPs are standard operating procedures.
[1]   This is the flaw in "don't attribute to malice what can be 
explained by incompetence."  Explaining by incompetence does not 
eliminate malice-inspired incompetence.  Remember, we are all 
inoculated against malice, so we prefer to see benign causes.
[2]  This is not to say that committees are ill-intentioned or people 
are bad, but that it only takes a few with malicious intent and 
expertise to bring the whole game to a halt.  Cartels such as IETF WGs 
are fundamentally and inescapably fragile.
[3]  As a sort of summer-flu-shot, I present that document to each new 
board as their SOPs.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-07 Thread ianG

On 7/09/13 03:58 AM, Jon Callas wrote:


Could an encryption algorithm be explicitly designed to have properties like this?  I 
don't know of any, but it seems possible.  I've long suspected that NSA might want this 
kind of property for some of its own systems:  In some cases, it completely controls key 
generation and distribution, so can make sure the system as fielded only uses 
good keys.  If the algorithm leaks without the key generation tricks leaking, 
it's not just useless to whoever grabs onto it - it's positively hazardous.  The gun that 
always blows up when the bad guy tries to shoot it


We know as a mathematical theorem that a block cipher with a back door *is* a 
public-key system. It is a very, very, very valuable thing, and suggests other 
mathematical secrets about hitherto unknown ways to make fast, secure public 
key systems.



I'm not as yet seeing that a block cipher with a backdoor is a public 
key system, but I really like the mental picture this is trying to create.


In order to encrypt to that system, one needs one of the two keys (either 
will do).  If everyone has it (either one), the system is ruined.


A public key system is an artifice where one can distribute the public 
key and not have to worry about the system being ruined; it's still 
perfectly usable.  Whereas with a symmetric system with two keys, either 
key being distributed ruins the system.


One could argue that the adversary would prefer the cleaner, more 
complete semantics of the public key system -- maybe that is what the 
theorem assumes?  But if I was the NSA I'd be happy with the compromise. 
 I'm good at keeping *my key secret* at least.




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] People should turn on PFS in TLS

2013-09-07 Thread ianG

On 6/09/13 21:11 PM, Perry E. Metzger wrote:

On Fri, 6 Sep 2013 18:56:51 +0100 Ben Laurie b...@links.org wrote:

The problem is that there's nothing good [in the way of ciphers]
left for TLS < 1.2.


So, let's say in public that the browser vendors have no excuse left
for not going to 1.2.

I hate to be a conspiracy nutter, but it is that kind of week. Anyone
at a browser vendor resisting the move to 1.2 should be viewed with
deep suspicion.

(Heck, if they're not on the government's payroll, then shame on them
for retarding progress for free. They should at least be charging. And
yes, I'm aware many of the people resisting are probably doing so
without realizing they're harming internet security, but we can no
longer presume that is the motive.)

Chrome handles 1.2, there is no longer any real excuse for the others
not to do the same.



The sentiment I agree with.  But the record of such transitions is not good.

E.g., back in September 2009, Ray & Dispensa discovered a serious bug 
with renegotiation in SSL.  According to SSL Pulse, it took until around 
April of this year [0] before 80% of the SSL hosts were upgraded to 
cover the bug.


Which gives us an OODA response loop of around 3-4 years.

And, that was the best it got -- the SSL community actually cared about 
that bug.  It gets far worse in stuff that they consider not to be a 
bug, such as HTTPS Everywhere, TLS/SNI, MD5, browser security fixes for 
phishing, HTTP-better-than-self-signed, HTTPS starting up with its own 
self-signed cert, etc, etc.




iang


[0] it depends on how you measure the 80% mark, though.
PS: More here on OODA loops
http://financialcryptography.com/mt/archives/001210.html
http://financialcryptography.com/mt/archives/001444.html





___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-07 Thread ianG

On 7/09/13 09:05 AM, Jaap-Henk Hoepman wrote:


Public-key cryptography is less well-understood than symmetric-key 
cryptography. It is also tetchier than symmetric-key crypto, and if you pay 
attention to us talking about issues with nonces, counters, IVs, chaining 
modes, and all that, you see that saying that it's tetchier than that is a 
warning indeed.


You have the same issues with nonces, counters, etc. with symmetric crypto so I 
don't see how that makes it preferable over public key crypto.




It's a big picture thing.  At the end of the day, symmetric crypto is 
something that good software engineers can master, and relatively well, 
in a black box sense.  Public key crypto not so easily; that requires 
real learning.  I for one am terrified of it.


Therefore, what Bruce is saying is that the architecture should 
recognise this disparity, and try to reduce the part played by public 
key crypto.  Wherever & whenever you can get part of the design over to 
symmetric crypto, do it.  Wherever & whenever you can use the natural 
business relationships to reduce the need for public key crypto, do that 
too!




iang

ps; http://iang.org/ssl/h2_divide_and_conquer.html#h2.4
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-07 Thread Jaap-Henk Hoepman

 I have also, in debate with Jerry, opined that public-key cryptography is a 
 powerful thing that can't be replaced with symmetric-key cryptography. That's 
 something that I firmly believe. At its most fundamental, public-key crypto 
 allows one to encrypt something to someone whom one does not have a prior 
 security relationship with. That is powerful beyond words.

I share that belief. Hence my desire to fully understand Bruce's remark.

Strictly speaking you need some kind of security relationship: you need to be 
sure the public key belongs to the intended recipient (and is under his sole 
control). So public key crypto allows you to bootstrap from some authentic 
piece of information (public key belongs to X) to a confidential communication 
channel (with X).
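
A toy sketch of that bootstrap, assuming the Python "cryptography" package 
(function and variable names are hypothetical): the only authentic datum is a 
fingerprint of X's key obtained out of band, and checking it is what turns "a 
key claiming to be X's" into "X's key" before any encryption happens.

    import hashlib, hmac
    from cryptography.hazmat.primitives import serialization

    def fingerprint(pem_bytes: bytes) -> str:
        """SHA-256 over the DER SubjectPublicKeyInfo of a PEM-encoded public key."""
        key = serialization.load_pem_public_key(pem_bytes)
        der = key.public_bytes(serialization.Encoding.DER,
                               serialization.PublicFormat.SubjectPublicKeyInfo)
        return hashlib.sha256(der).hexdigest()

    def key_is_authentic(received_pem: bytes, out_of_band_fpr: str) -> bool:
        # Only if this check passes may the key be used to build the
        # confidential channel (encrypt to it, or verify its signatures).
        return hmac.compare_digest(fingerprint(received_pem), out_of_band_fpr)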

Jaap-Henk
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] XORing plaintext with ciphertext

2013-09-07 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Sep 7, 2013, at 12:14 AM, Dave Horsfall d...@horsfall.org wrote:

 Got a question that's been bothering me for a while, but it's likely 
 purely academic.
 
 Take the plaintext and the ciphertext, and XOR them together.  Does the 
 result reveal anything about the key or the plaintext?

It better not. That would be a break of amazing simplicity that transcends 
broken. 

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFSKuANsTedWZOD3gYRAhHiAJsGJ43vKlGRY1p9moFvyY0GZV8ePgCfa4R0
oCWJ6kNVs+qlnwcpfhU/bNA=
=Ub19
-END PGP SIGNATURE-
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Bruce Schneier has gotten seriously spooked

2013-09-07 Thread Brian Gladman
On 07/09/2013 01:48, Chris Palmer wrote:
 Q: Could the NSA be intercepting downloads of open-source encryption 
 software and silently replacing these with their own versions?
 
 Why would they perform the attack only for encryption software? They
 could compromise people's laptops by spiking any popular app.

Because NSA and GCHQ are much more interested in attacking communications
in transit rather than attacking endpoints.

Endpoint attacks cost more to undertake, only give access to a limited
amount of data and involve much greater risks that their attack will
either be discovered or their means of attack will leave evidence of
what they have done and how they have done it.  The internal bureaucratic
costs of gaining approval for (adversarial) endpoint attacks also make
it a more costly process than the use of network based interception.

There is significant use of open source encryption software in end to
end encryption solutions, in file archivers, in wifi and network
routers, and in protecting the communications used to manage and control
such components when at remote locations.  The open source software is
provided in source code form and is compiled from source in a huge
number of applications, and this means that the ability to covertly
substitute broken source code could provide access to a huge amount of
traffic without the risks involved in endpoint attacks.

I stress that I am NOT suggesting that this has happened (or is
happening), simply that it has attractions from an NSA/GCHQ viewpoint.
Fortunately, I think it is a difficult attack to mount covertly (that
is, without the acquiescence of the author(s) of the software in question).

On the more general debate here, in my view, 'security for the masses'
through the deployment of encryption is a 'pipe dream' that isn't going
to happen.  Functionality (and the complexity that comes with it) is the
enemy of security and it is very clear that the public places a much
higher value on functionality than it does on security (or privacy).

Every time a new device comes onto the market, it starts with limited
functionality and some hope of decent security but rapidly evolves to be
a high functionality product in which the prospect of decent security
declines rapidly to zero.  Raspberry Pis look interesting _now_ but I
would be willing to bet that they won't buck the trend of increasing
functionality and declining security simply because this is what the
majority in even this limited user community will want.

To buck this trend we need an effort like the Raspberry Pi effort but
one driven by our community with a strong commitment to simplicity and
deliberately limited functionality in both hardware and software.

   Brian Gladman

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-07 Thread Eugen Leitl
On Fri, Sep 06, 2013 at 09:19:07PM -0400, Derrell Piper wrote:
 ...and to add to all that, how about the fact that IPsec was dropped as a 
 'must implement' from IPv6 sometime after 2002?

Apropos IPsec, I've tried searching for any BTNS (opportunistic encryption mode 
for
IPsec) implementations, and even the authors of the RFC are not aware of any.

Obviously, having a working OE BTNS implementation in Linux/*BSD would be a very
valuable thing, as an added, transparent protection layer against passive 
attacks.

There are many IPsec old hands here, it is probably just a few man-days worth
of work. It should be even possible to raise some funding for such a project.

Any takers?


signature.asc
Description: Digital signature
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-07 Thread ianG

On 7/09/13 10:15 AM, Gregory Perry wrote:


Correct me if I am wrong, but in my humble opinion the original intent
of the DNSSEC framework was to provide for cryptographic authenticity
of the Domain Name Service, not for confidentiality (although that
would have been a bonus).



If so, then the domain owner can deliver a public key with authenticity 
using the DNS.  This strikes a deathblow to the CA industry.  This 
threat is enough for CAs to spend a significant amount of money slowing 
down its development [0].


How much more obvious does it get [1] ?

iang



[0] If one is a finance geek, one can even calculate how much money the 
opponents are willing to spend.
[1] As an aside, NSA/DoD have invested significant capital in the PKI as 
well.  Sufficient that they will be well aligned with the CA mission, 
and sufficient that they will approve of any effort to keep the CAs in 
business.  But this part is far less obvious.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] XORing plaintext with ciphertext

2013-09-07 Thread Dave Horsfall
Thanks for the response; that's what I thought, but thought I'd better 
ask (I'm still new at this crypto game).

-- Dave
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] [liberationtech] Random number generation being influenced - rumors

2013-09-07 Thread Eugen Leitl
- Forwarded message from Andy Isaacson a...@hexapodia.org -

Date: Fri, 6 Sep 2013 22:24:00 -0700
From: Andy Isaacson a...@hexapodia.org
To: liberationtech liberationt...@lists.stanford.edu
Subject: Re: [liberationtech] Random number generation being influenced - rumors
User-Agent: Mutt/1.5.20 (2009-06-14)
Reply-To: liberationtech liberationt...@lists.stanford.edu

On Sat, Sep 07, 2013 at 12:51:19AM +0300, Maxim Kammerer wrote:
 On Fri, Sep 6, 2013 at 10:34 PM, Andy Isaacson a...@hexapodia.org wrote:
  This is not to say that RdRand is completely unusable.  Putting RdRand
  entropy into a software pool implementation like /dev/urandom (or
  preferably, a higher-assurance multipool design like Fortuna) is a cheap
  way to prevent a putative backdoor from compromising your system state.
 
 Nearly nothing from what you wrote is relevant to RDRAND, which is not
 a pure HWRNG, but implements CTR_DRBG with AES (unclear whether
 128/192/256) from NIST SP 800-90A [1,2].

That's the claimed design, yes.  I see no particular reason to believe
that the hardware in my server implements the design.  I can't even test
that the AES whitening does what it is documented to do, because Intel
refused to provide access to the prewhitened input.

Providing accessible test points (software interfaces to the innards
of the implementation, with documentation of expected behavior between
the components) would be the absolute minimum to provide believable
assurance of the absence of a backdoor.  Better would be documents from
Intel of how the chip is designed at the mask level, and a third party
mill-and-microphotograph of a retail chip showing that the shipped
implementation matches the design.

Intel will never go for that, of course, since their chip masks are
their jealously guarded IP.  Since they can't provide evidence of a lack
of a backdoor, any reasonably cautious user should avoid depending on
Intel's implementation.

-andy
-- 
Liberationtech is a public list whose archives are searchable on Google. 
Violations of list guidelines will get you moderated: 
https://mailman.stanford.edu/mailman/listinfo/liberationtech. Unsubscribe, 
change to digest, or change password by emailing moderator at 
compa...@stanford.edu.

- End forwarded message -
-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
__
ICBM: 48.07100, 11.36820 http://ativel.com http://postbiota.org
AC894EC5: 38A5 5F46 A4FF 59B8 336B  47EE F46E 3489 AC89 4EC5
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] XORing plaintext with ciphertext

2013-09-07 Thread Jerry Leichter
On Sep 7, 2013, at 4:13 AM, Jon Callas wrote:
 Take the plaintext and the ciphertext, and XOR them together.  Does the 
 result reveal anything about the key or the painttext?
 
 It better not. That would be a break of amazing simplicity that transcends 
 broken. 
The question is much more subtle than that, getting deep into how to define 
the security of a cipher.

Consider a very simplified and limited, but standard, way you'd want to state a 
security result:  A Turing machine with an oracle for computing the encryption 
of any input with any key, when given as input the cyphertext and allowed to 
run for time T polynomial in the size of the key, has no more than a 
probability P less than (something depending on the key size) of guessing any 
given bit of the plaintext.  (OK, I fudged on how you want to state the 
probability - writing this stuff in English rather than mathematical symbols 
rapidly becomes unworkable.)  The fundamental piece of that statement is the 
"given as input..." part:  If the input contains the key itself, then obviously 
the machine has no problem at all producing the plaintext!  Similarly, of 
course, if the input contains the plaintext, the machine has an even easier 
time of it.

You can, and people long ago did, strengthen the requirements.  They allow for 
probabilistic machines as an obvious first step.  Beyond that, you want 
semantic security:  Not only should the attacking machine be unable to get 
an advantage on any particular bit of plaintext; it shouldn't be able to get an 
advantage on, say, the XOR of the first two bits.  Ultimately, you want to say 
that given any boolean function F, the machine's a posteriori probability of 
guessing F(cleartext) should be identical (within some bounds) to its a priori 
probability of guessing F(cleartext).  Since it's hard to get a handle on the 
prior probability, another way to say pretty much the same thing is that the 
probability of a correct guess for F(cleartext) is the same whether the machine 
is given the ciphertext, or a random sequence of bits.  If you push this a bit 
further, you get definitions related to indistinguishability:  The machine is 
simply expected to say "the input is the result of applying the cipher to some 
plaintext" or "the input is random"; it shouldn't even be able to get an 
advantage on *that* simple question.

This sounds like a very strong security property (and it is) - but it says 
*nothing at all* about the OP's question!  It can't, because the machine *can't 
compute the XOR of the plaintext and the ciphertext*.  If we *give* it that 
information ... we've just given it the plaintext!

I can't, in fact, think of any way to model the OP's question.  The closest I 
can come is:  If E(K,P) defines a strong cipher (with respect to any of the 
variety of definitions out there), does E'(K,P) = E(K,P) XOR P *also* define a 
strong cipher?  One would think the answer is yes, just on general principles: 
To someone who doesn't know K and P, E(K,P) is indistinguishable from random 
noise, so E'(K,P) should be the same.  And yet there remains the problem that 
it's not a value that can be computed without knowing P, so it doesn't fit into 
the usual definitional/proof frameworks.  Can anyone point to a proof?
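
For whatever it's worth, the object in question is easy to play with, even if 
that settles nothing.  A sketch, using one block of AES-128/ECB as E and the 
Python "cryptography" package (purely illustrative):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def E(K, P):                        # one-block AES standing in for the cipher
        return Cipher(algorithms.AES(K), modes.ECB()).encryptor().update(P)

    def E_prime(K, P):                  # E'(K,P) = E(K,P) XOR P
        return bytes(c ^ p for c, p in zip(E(K, P), P))

    K = os.urandom(16)
    P = b"attack at dawn!!"             # exactly one 16-byte block
    C, C_prime = E(K, P), E_prime(K, P)

    # XORing plaintext with ciphertext maps each system onto the other: for E
    # it yields exactly C', and for E' it yields exactly C.  The two outputs
    # differ by a value (P) the attacker cannot compute, which is precisely why
    # the question slips through the usual definitional frameworks.
    assert bytes(a ^ b for a, b in zip(C, P)) == C_prime
    assert bytes(a ^ b for a, b in zip(C_prime, P)) == C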

The reason I'm not willing to write this off as obvious is an actual failure 
in a very different circumstance.  There was work done at DEC SRC many years 
ago on a system that used a fingerprint function to uniquely identify modules.  
The fingerprints were long enough to avoid the birthday paradox, and were 
computed based on the result of a long series of coin tosses whose results were 
baked into the code.  There was a proof that the fingerprint looked random.  
And yet, fairly soon after the system went into production, collisions started 
to appear.  They were eventually tracked down to a "merge fingerprints" 
operation, which took the fingerprints of two modules and produced a 
fingerprint of the pair by some simple technique like concatenating the inputs 
and fingerprinting that.  Unfortunately, that operation *violated the 
assumptions of the theorem*.  The theorem said that the outputs of the 
fingerprint operation would look random *if chosen without knowledge of the 
coin tosses*.  But the inputs were outputs of the same algorithm, hence had 
knowledge of the coin tosses.  (And ... I just found the reference to this.  
See ftp://ftp.dec.com/pub/dec/SRC/research-reports/SRC-113.pdf, documentation 
of the Fingerprint interface, page 42.)

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-07 Thread Jeffrey I. Schiller
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Sat, Sep 07, 2013 at 10:57:07AM +0300, ianG wrote:
 It's a big picture thing.  At the end of the day, symmetric crypto
 is something that good software engineers can master, and relatively
 well, in a black box sense.  Public key crypto not so easily, that
 requires real learning.  I for one am terrified of it.

Don’t be. There is no magic there. From what I can tell, there are two
different issues with public key.

1. Weaknesses in the math.
2. Fragility in use.

The NSA (or other national actors) may well have found a mathematical
weakness in any of the public key ciphers (frankly they may have found
a weakness in symmetric ciphers as well). Frankly, we just don’t know
here. Do we trust RSA more than Diffie-Hellman or any of the Elliptic
Curve techniques? Who knows. We can make our keys bigger and hope for
the best.

As for fragility. Generating random numbers is *hard*, particularly on
a day to day basis. When you generate a keypair with GPG/PGP it
prompts you to type in random keystrokes and move the mouse etc., all
in an attempt to gather as much entropy as possible. This is a pain,
but it makes sense for long-lived keys. People would not put up with
this if you had to do this for each session key. Fragile public key
systems (such as Elgamal and all of the variants of DSA) require
randomness at signature time. The consequence for failure is
catastrophic. Most systems need session keys, but the consequence for
failure in session key generation is the compromise of the
message. The consequence for failure in signature generation in a
fragile public key system is compromise of the long term key!

I wrote about this in NDSS 1991; I cannot find an on-line reference
to it, though.

Then if you are a software developer, you have the harder problem of
not being able to control the environment your software will run on,
particularly as it applies to the availability of entropy.
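
To make the cost of that kind of entropy failure concrete, here is a toy, 
standard-library-only sketch (Python 3.8+) of the classic nonce-reuse break 
against a DSA-style signature.  The parameters are deliberately tiny and every 
number is illustrative, not a real deployment:

    # DSA toy: p, q, g with g of order q mod p; x is the long-term private key.
    p, q, g = 23, 11, 2
    x = 7

    def sign(h, k):
        """Sign hash value h (mod q) with per-signature nonce k."""
        r = pow(g, k, p) % q
        s = (pow(k, -1, q) * (h + x * r)) % q
        return r, s

    k = 5                       # the same nonce used twice -- the fatal mistake
    h1, h2 = 4, 6               # hashes of two different messages
    r1, s1 = sign(h1, k)
    r2, s2 = sign(h2, k)
    assert r1 == r2             # identical r values betray the reuse

    # An observer holding only the two (h, r, s) triples recovers k, then x:
    k_rec = ((h1 - h2) * pow(s1 - s2, -1, q)) % q
    x_rec = ((s1 * k_rec - h1) * pow(r1, -1, q)) % q
    assert (k_rec, x_rec) == (k, x)     # the long-term key is gone

The same arithmetic, over a much larger q, is what has sunk fielded 
(EC)DSA systems whose nonce generation was weak.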

So my advice.

Use RSA, choose a key as long as your paranoia. Like all systems, you
will need entropy to generate keys, but you won’t need entropy to use
it for encryption or for signatures.

- -Jeff

___
Jeffrey I. Schiller
Information Services and Technology
Massachusetts Institute of Technology
77 Massachusetts Avenue  Room E17-110A, 32-392
Cambridge, MA 02139-4307
617.910.0259 - Voice
j...@mit.edu
http://jis.qyv.name
___

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (GNU/Linux)

iD8DBQFSKzKi8CBzV/QUlSsRAhoSAJ98g7NreJwIK+aYODM1zDsVsreMCQCcD2R9
vnvmNc4Uo45+ckUFQafuE4U=
=x9bK
-END PGP SIGNATURE-
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

[Cryptography] Protecting Private Keys

2013-09-07 Thread Jeffrey I. Schiller
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

While we worry about symmetric vs. public key ciphers, we should not
forget the risk of compromise of our long-term keys. How are they
protected?

One of the most obvious ways to compromise a cryptographic system is
to get the keys. This is a particular risk in TLS/SSL when PFS is not
used. Consider a large scale site (read: Google, Facebook, etc.) that
uses SSL. The private keys of the relevant certificates need to be
literally on hundreds if not thousands of systems. Chances are they
are not encrypted on those systems so those systems can auto-restart
without human intervention. Those systems also break
periodically. What happens to the broken pieces, say a broken hard
drive?

If one of these private keys is compromised, all pre-recorded traffic
can now be decrypted, as long as PFS was not used (and as we know, it
is rarely used).

Encrypted email is also at great risk because we have no PFS in any of
these systems. Our private keys tend to last a long time (just look at
the age of my private key!).

If I was the NSA, I would be scavenging broken hardware from
“interesting” venues and purchasing computers for sale in interesting
locations. I would be particularly interested in stolen computers, as
they have likely not been wiped.

The bottom line here is that the NSA has upped the game (and probably
did so quite a while ago, but we are just learning about it now). This
means that commercial organizations that truly want to protect their
customers from the NSA, and other national actors whom I am sure are
just as skilled and probably more brazen, need to up their game, by a
lot!

- -Jeff

P.S. I am very careful about which devices my private key touches and
what happens to it when I am through with it.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (GNU/Linux)

iD8DBQFSKzZE8CBzV/QUlSsRAqTsAJ4xJymTj04zCGF7v9OaZ4vJC3WoMgCfU1Qd
960tkxkWdrzz4ymCksyaKog=
=0JHf
-END PGP SIGNATURE-
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-07 Thread Jerry Leichter
On Sep 7, 2013, at 12:31 AM, Jon Callas wrote:
 I'm sorry, but this is just nonsense.  You're starting with informal, rough 
 definitions and claiming a mathematical theorem.
 
 Actually, I'm doing the opposite. I'm starting with a theorem and arguing 
 informally from there
Actually, if you look at the papers cited, *they* are themselves informal.  The 
fundamental thing they are lacking is a definition of what would constitute a 
master key.  Let's see if we can formalize this a bit:

We're given a block cipher E(K,P) and a corresponding decryption algorithm 
D(K,C).  The system has a master key M such that D(M,E(K,P)) == P.  This is 
what a master key does in a traditional lock-and-key system, so unless we see 
some other definition, it's what we have to start with.  Is there such a 
system?  Sure, trivially.  Given any block cipher E'/D', I simply define E(K,P) 
= E'(M,K) || E'(K,P).  (I can optimize the extra length by leaking one randomly 
chosen bit of E'(M,K) per block.  It won't take long for the whole key to be 
transmitted.)  OK, where's the public key system?
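
That trivial construction is easy to see in the concrete.  A sketch, with 
AES-128/ECB standing in for E' and the Python "cryptography" package (names 
illustrative; the per-user key is one AES block long so it fits in a single 
ciphertext block):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def ecb(key):
        return Cipher(algorithms.AES(key), modes.ECB())

    M = os.urandom(16)            # the designer's master key
    K = os.urandom(16)            # the user's key (16 bytes = one AES block)
    P = b"sixteen byte msg"       # a single plaintext block

    # E(K,P) = E'(M,K) || E'(K,P): the first block smuggles K out under M.
    C = ecb(M).encryptor().update(K) + ecb(K).encryptor().update(P)

    # Anyone holding M recovers K from the first block, then reads the rest.
    K_rec = ecb(M).decryptor().update(C[:16])
    P_rec = ecb(K_rec).decryptor().update(C[16:])
    assert (K_rec, P_rec) == (K, P)

Nothing about this smells like a public-key system; it is just key escrow 
baked into the ciphertext format.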

So maybe there isn't *one* master key, but let's go to the extreme and say 
there is one unique master per user key, based on some secret information S.  
That is:  Given K, there is a function F(S,K) which produces a *different* key 
K', with the property that D(K,C) == D(K',C).  Or maybe, as in public key 
systems, you start with S and some random bits and produce a matched pair K and 
K'.  But how is this a master key system?  If I wasn't present at the birth 
of the K that produced the cyphertext I have in hand ... to get K' now, I need 
K (first form) or S and the random bits (second form), which also gives me K 
directly.  So what have I gained?

I can construct a system of the first form trivially:  Just use an n-bit key 
but ignore the first bit completely.  There are now two keys, one with a 
leading 0, one with a leading 1.  Constructing a system of the second form 
shouldn't be hard, though I haven't done it.  In either case, it's 
uninteresting - my master key is as hard to get at as the original key.

I'm not sure exactly where to go next.  Let's try to modify some constraints.  
Eliminate directly hiding the key in the output by requiring that E(K,.) be a 
bijection.  There can't possibly be a single master key M, since if there were, 
what could D(M,E(K,0...0)) be?  It must be 0...0 for any possible K, so 
E(K,0...0) must be constant - and in fact E must be constant in the key.  Not very 
interesting.  In fact, a counting argument shows that there must be as many M's 
as there are K's.  It looks as we're back to the two-fold mapping on keys 
situation.  But as before ... how could this work?

In fact, it *could* work.  Suppose I use a modified form of E() which ignores 
all but the first 40 bits of K - but I don't know that E is doing this.  I can 
use any (say, 128-bit) key I like, and to someone not in on the secret, a brute 
force attack is impossible.  But someone who knows the secret simply sets all 
but the first 40 bits to 0 and has an easy attack.

*Modified forms (which hid what was happening to some degree) of such things 
were actually done in the days of export controls!*  IBM patented and sold such 
a thing under the name CDMF 
(http://domino.research.ibm.com/tchjr/journalindex.nsf/600cc5649e2871db852568150060213c/a453914c765e690085256bfa0067f9f4!OpenDocument).
  I worked on adding cryptography to a product back in those days, and we had 
to come up with a way to be able to export our stuff.  I talked to IBM about 
licensing CDMF, but they wanted an absurd amount of money.  (What you were 
actually paying for wasn't the algorithm so much as that NSA had already 
approved products using it for export.)  We didn't want to pay, and I designed 
my own algorithm to do the same thing.  It was a silly problem to have to 
solve, but I was rather proud of the solution - I could probably find my spec 
if anyone cares.  It was also never implemented, first because this was right 
around the time the crypto export controls got loosened; and second because we 
ended up deciding we didn't need crypto anyway.  We came back and did it very 
differently much later.  My two fun memories from the experience:  (a) Receiving 
a FAX from NSA - I still have it somewhere; (b) being told at one point that we 
might need someone with crypto clearance to work on this stuff with NSA, and one 
of my co-workers chiming in with "Well, I used to have it.  Unfortunately it was 
from the KGB."

Anyway ... yes, I can implement such a thing - but there's still no public key 
system here.

So ... would *you* like to take a stab at pinning down a definition relative to 
which the theorem you rely on makes sense?

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] In the face of cooperative end-points, PFS doesn't help

2013-09-07 Thread Bill Stewart

At 06:49 PM 9/6/2013, Marcus D. Leech wrote:
It seems to me that while PFS is an excellent back-stop against NSA 
having/deriving a website RSA key, it does *nothing* to prevent the kind of
  cooperative endpoint scenario that I've seen discussed in other 
forums, prompted by the latest revelations about what NSA has been up to.
But if your fave website (gmail, your bank, etc) is disclosing the 
session-key(s) to the NSA,


Depends a lot on how cooperative they are.  It's much easier to get a 
subpoena/secret-order/etc. for business records that a company 
keeps, which may include the long-term key, than to get one for 
transient session keys that their software doesn't keep.  Doesn't 
mean they can't do it, but it's probably much easier to get an order 
to produce plaintext, especially for a company like a bank or email 
service where the plaintext is something they would be keeping, at 
least briefly, as a business record anyway.


Do we now strongly suspect that NSA have a flotilla of TWIRL (or 
similar) machines, so that active cooperation of websites isn't 
strictly necessary to derive their (weaker) RSA secret keys?


Unlikely - the economics are still strongly against that.  Keeping a 
fleet of key cracking machines to grab long-term private keys from 
high-value targets might make sense, but each long-term key gets used 
to protect thousands or millions of transient session keys.  If they 
have 1024-bit RSA crackers at all, unless there's been a radical 
breakthrough in factoring, they're still not fast.


I've always preferred RSA-signed Diffie-Hellman to encrypted 
session-key transfer when it's practical.  The long-term keys only 
get used for signatures, so if they're compromised they can only be 
used to impersonate the endpoints, not to read previous sessions, and 
under less-than-NSA versions of due process, it's a lot easier to 
argue in court against a police agency that wants to impersonate you 
than one that wants a copy of a transaction.
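
A sketch of that pattern, assuming the Python "cryptography" package, with 
X25519 standing in for classic Diffie-Hellman purely for brevity (all names 
illustrative, not any particular protocol):

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding, rsa, x25519
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    identity_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Server: a fresh ephemeral key per session, signed by the long-term key.
    eph = x25519.X25519PrivateKey.generate()
    eph_pub = eph.public_key().public_bytes(serialization.Encoding.Raw,
                                            serialization.PublicFormat.Raw)
    sig = identity_key.sign(eph_pub, pss, hashes.SHA256())

    # Client: verify the signature (raises on failure), then run the exchange
    # with its own ephemeral key.
    identity_key.public_key().verify(sig, eph_pub, pss, hashes.SHA256())
    client_eph = x25519.X25519PrivateKey.generate()
    shared = client_eph.exchange(x25519.X25519PublicKey.from_public_bytes(eph_pub))

    # The session key comes from the ephemeral exchange; the RSA key never
    # encrypts anything, so its later compromise cannot unlock recorded traffic.
    session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"toy handshake").derive(shared)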


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Suite B after today's news

2013-09-07 Thread Ralph Holz
Hi,

On 09/07/2013 12:50 AM, Peter Gutmann wrote:

 But for right now, what options do we have that are actually implemented
 somewhere? Take SSL. CBC mode has come under pressure for SSL (CRIME, BEAST,
 etc.), and I don't see any move towards TLS > 1.0.
 
 http://tools.ietf.org/html/draft-gutmann-tls-encrypt-then-mac-02 fixes all of
 these, I just can't get any traction on it from the TLS WG chairs.  Maybe

Exactly, precious little movement on that front. Sadly.

BTW, I do not really agree with your argument it should be done via TLS
extension. I think faster progress could be made by simply introducing
new allowed cipher suites and letting the servers advertise them and
client accept them - this possibly means bypassing IETF entirely. Or, to
keep them in, do it in TLS 1.3. But do it fast, before people start
using TLS 1.2.

I don't really see the explosion of cipher suite sets you give as a
motivation - e.g. in SSH, where really no-one seems to use the
standards, we have a total of 144 or so cipher suites found in our
scans. Yet the thing works, because clients will just ignore the weird
ones. It should be possible in SSL, too, unless openssl/gnutls/nss barfs
at an unexpected suite name - but I don't think so.

Ralph

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-07 Thread Naif M. Otaibi
It boils down to this: symmetric crypto is much faster than asymmetric
crypto. Asymmetric crypto should only be used to exchange symmetric keys
and for signing.
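
That is the standard hybrid pattern.  A short sketch, assuming the Python 
"cryptography" package (illustrative names, no particular protocol):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Sender: one slow asymmetric operation wraps a fresh symmetric key;
    # the fast symmetric cipher handles the bulk data.
    data = b"a large message ... " * 1000
    session_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    bulk_ct = AESGCM(session_key).encrypt(nonce, data, None)
    wrapped = recipient.public_key().encrypt(session_key, oaep)

    # Recipient: unwrap once with RSA, then decrypt the bulk symmetrically.
    key = recipient.decrypt(wrapped, oaep)
    assert AESGCM(key).decrypt(nonce, bulk_ct, None) == data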


On Sat, Sep 7, 2013 at 11:10 AM, Jaap-Henk Hoepman j...@cs.ru.nl wrote:


  I have also, in debate with Jerry, opined that public-key cryptography
 is a powerful thing that can't be replaced with symmetric-key cryptography.
 That's something that I firmly believe. At its most fundamental, public-key
 crypto allows one to encrypt something to someone whom one does not have a
 prior security relationship with. That is powerful beyond words.

 I share that belief. Hence my desire to fully understand Bruce's remark.

 Strictly speaking you need some kind of security relationship: you need to
 be sure the public key belongs to the intended recipient (and is under his
 sole control). So public key crypto allows you to bootstrap from some
 authentic piece of information (public key belongs to X) to a confidential
 communication channel (with X).

 Jaap-Henk
 ___
 The cryptography mailing list
 cryptography@metzdowd.com
 http://www.metzdowd.com/mailman/listinfo/cryptography

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Bruce Schneier has gotten seriously spooked

2013-09-07 Thread Ray Dillinger

On 09/06/2013 01:25 PM, Jerry Leichter wrote:

A response he wrote as part of a discussion at 
http://www.schneier.com/blog/archives/2013/09/the_nsa_is_brea.html:

Q: Could the NSA be intercepting downloads of open-source encryption software and 
silently replacing these with their own versions?

A: (Schneier) Yes, I believe so.
 -- Jerry



Here is another interesting comment, on the same discussion.

https://www.schneier.com/blog/archives/2013/09/the_nsa_is_brea.html#c1675929

Schneier states of discrete logs over ECC: "I no longer trust the constants.
I believe the NSA has manipulated them through their relationships with 
industry."

Is he referring to the standard set of ECC curves in use?  Is it possible
to select ECC curves specifically so that there's a backdoor in cryptography
based on those curves?

I know that hardly anybody using ECC bothers to find their own curve; they
tend to use the standard ones because finding their own involves counting all
the integral points and would be sort of compute expensive, in addition to
being involved and possibly error prone if there's a flaw in the implementation.

But are the standard ECC curves really secure? Schneier sounds like he's got
some innovative math in his next paper if he thinks he can show that they
aren't.

Bear



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Bruce Schneier has gotten seriously spooked

2013-09-07 Thread Dan McDonald

On Sep 7, 2013, at 2:36 PM, Ray Dillinger wrote:
SNIP!
 
 Schneier states of discrete logs over ECC: "I no longer trust the constants.
 I believe the NSA has manipulated them through their relationships with 
 industry."
 
 Is he referring to the standard set of ECC curves in use?  Is it possible
 to select ECC curves specifically so that there's a backdoor in cryptography
 based on those curves?

That very statement prompted me to start the Suite B thread a couple of days 
ago.

What concerns me most about ECC is that your choices seem to be the IEEE 
Standard curves (which have NSA input, IIRC), or ones that will bring down the 
wrath of Certicom (Slogan:  "We're RSA Inc. for the 21st Century!").

I've said this repeatedly over the past year, but if whoever ends up buying 
Certicom-owner Blackberry would set them free, it would help humanity (at the 
cost of the patent revenues, alas).

Dan

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] [tor-talk] NIST approved crypto in Tor?

2013-09-07 Thread Eugen Leitl
- Forwarded message from Nick Mathewson ni...@alum.mit.edu -

Date: Sat, 7 Sep 2013 13:02:04 -0400
From: Nick Mathewson ni...@alum.mit.edu
To: tor-t...@lists.torproject.org tor-t...@lists.torproject.org
Subject: Re: [tor-talk] NIST approved crypto in Tor?
Reply-To: tor-t...@lists.torproject.org

On Sat, Sep 7, 2013 at 5:25 AM, Sebastian G. bastik.tor
bastik@googlemail.com wrote:
 Hi,

 Tor switches over to ECC, which is a reasonable step.

 I'm unable to find the blog post (or maybe it was an official comment on
 the blog) [With DDG and StartPage] where someone said that if the NIST
 (I guess) is not lying ECC is safe.

 Is the ECC used by Tor in some way certified by NIST?

The TLS ECDH groups P-256 and P-224 are NIST-certified.  For circuit
extension, we use Dan Bernstein's non-NIST-certified curve25519 group.

 Are other parts of Tor certified by NIST?

NIST has certified tons of stuff, including AES and SHA1 and SHA256
and SHA3.  If you're jumping ship from NIST, you need to jump ship
from those as well.


Of all the NIST stuff above, my suspicion is not that they are
cryptographically broken, but that they are deliberately hard to
implement correctly: see
  * http://cr.yp.to/talks/2013.05.31/slides-dan+tanja-20130531-4x3.pdf
(on the P groups)
and
  * http://cr.yp.to/antiforgery/cachetiming-20050414.pdf (on AES)

Also, we're not using DSA, but DSA (as recommended by NIST) fits into
this pattern: DSA (as recommended by NIST) requires a strong random
number generator to be used when signing, and fails terribly in a way
that exposes the private key if the random number generator is the
least bit weak or predictable. (see
https://en.wikipedia.org/wiki/Digital_Signature_Algorithm#Sensitivity)

To me, this suggests a trend of certifying strong cryptographic
algorithms while at the same time ensuring that most implementations
will be of poor quality.  That's just speculation, though.

(And I'm probably falling prey to the fallacy where you assume that
whatever results somebody gets are the ones they wanted.)



Of course, the "deliberately" in "deliberately hard to implement
correctly" is almost impossible to prove.  Is it nearly impossible to
write a fast side-channel-free AES implementation in C because
of a nefarious conspiracy, or simply because cryptographers in 2000
didn't appreciate how multiplication in GF(2^8) wasn't as
software-friendly a primitive?  (Looking at the other AES finalists, I
see a bunch of other hard-to-do-right-in-fast-software stuff like
GF(2^8) multiplication and table-based s-boxes.)   Are the ECC P
groups shaped that way for nefarious reasons, or simply because the
standards committee didn't have an adequate appreciation of the
software issues?
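
For a feel of what "side-channel-free" costs here, this is roughly the branch- 
and table-free GF(2^8) multiply an implementation has to fall back on once 
lookup tables (whose addresses leak through the cache) are ruled out -- 
sketched in Python for readability, though in practice it is written in C or 
assembler:

    def gf256_mul(a: int, b: int) -> int:
        """Multiply in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1 (0x11b)."""
        r = 0
        for _ in range(8):
            r ^= -(b & 1) & a                           # conditionally add 'a', no branch
            b >>= 1
            carry = a >> 7
            a = ((a << 1) ^ (-carry & 0x11b)) & 0xff    # xtime, reduced without a branch
        return r

    assert gf256_mul(0x53, 0xCA) == 0x01    # 0x53 and 0xCA are inverses in AES's field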

And it's not like NIST standards are the only ones that have problems.
 TLS is an IETF standard, but TLS implementations today have three
basic kinds of ciphersuite: a fraught-with-peril CBC-based
pad-MAC-then-encrypt kind where somebody finds a new active attack
every year or so; a stream-cipher-based kind where the only supported
stream cipher is the ridiculously bad RC4, and an authenticated
encryption kind where the AEAD mode uses GCM, which is also hard
to do in a side-channel-free way in software.

Conspiracy, or saboteurs in the (international) TLS working group, or
international bureaucratic inertia? Who can say?

And let's not mention X.509.  Let's just not, okay?  X.509 is
byzantine in a way that would make any reasonable implementor's head
spin, *and* the X.509 CA infrastructure is without a doubt one of the
very worst things in web security today.  And it's an international
standard.


[...]
 I understand that ECC used for Tor is different from what the essay is
 about.

 However the NSA may have found something it can exploit in ECC and made NIST
 (maybe unknowingly) standardize the curve (or whatever) that is most
 vulnerable, or recommend a weak one, or too-short keys.

 Does Tor use stuff certified or recommended by NIST?

Yes; see above.  Also, there were once NIST recommendations for using
TLS; I have no idea whether we're following them or not.  (There are
NIST recommendations for nearly )

 If so would it be reasonable to move to international standards
 (whatsoever) without the involvement of NIST and NSA 'consultation'?
 (Completely unrelated to what might be going on, just as defense-in-depth.)

I'm not sure that there *are* international-standards recommendations
for ECC groups or for ciphers that diverge from NIST's.  The IETF is
an international body, after all, and TLS standards have been happily
recommending SHA1, SHA256, AES, DSA, and the P groups for ages.  (See
also notes above about the not-much-betterness of international
stuff.)

With any luck, smart cryptographers will start to push non-NIST curves
and ciphers into prominence.  I've got some hopes for the EU here;
ECRYPT and ECRYPT II produced some exceptionally worthwhile results; I
hope that whoever makes funding decisions funds a nice 

Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-07 Thread Gregory Perry
If so, then the domain owner can deliver a public key with authenticity
using the DNS.  This strikes a deathblow to the CA industry.  This
threat is enough for CAs to spend a significant amount of money slowing
down its development [0].

How much more obvious does it get [1] ?

The PKI industry has been a sham since day one, and several root certs
have been compromised by the proverbial bad guys over the years (for
example, the Flame malware incident used to sign emergency Windows
Update packages which mysteriously only affected users in Iran and the
Middle East, or the Diginotar debacle, or the Tunisian Ammar MITM
attacks etc).  This of course is assuming that the FBI doesn't already
have access to all of the root CAs so that on domestic soil they can
sign updates and perform silent MITM interception of SSL and
IPSEC-encrypted traffic using transparent inline layer-2 bridging
devices that are at every major Internet peering point and interconnect,
because that would be crazy talk.

However, some form of authenticity and integrity is better than zero,
which is what the majority of the current DNS system offers, and it is
point and click trivial to perform MITM attacks with unauthenticated
DNS, especially on local area network segments which are rarely
protected with more than the Windows firewall.

Even without a centralized PKI, stateless port 53 UDP DNS could benefit
from some type of cryptographic security, but as with any standard
seemingly related to privacy or confidentiality we are left with this
DNSSEC quagmire of meetings and proposed meetings to talk about the next
meeting to discuss how the committee will propose the next request for
comment, ad nauseam.

Bitcoin for example doesn't need hundreds of private companies with
elaborate PKI documentation authentication services which are in reality
just mental placebos for Joe Consumer when he updates his monthly
Brazzers subscription, and it's doing just fine as the runner up for the
next global world monetary standard.

So with that said, I would still place my wager on the FBI being the
source of these various privacy enhancing service delays and not some
secret cabal of PKI execs that are engaging in standards committee
subterfuge.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Bruce Schneier has gotten seriously spooked

2013-09-07 Thread Gregory Perry
On 09/07/2013 02:53 PM, Ray Dillinger wrote:

Is he referring to the standard set of ECC curves in use?  Is it possible
to select ECC curves specifically so that there's a backdoor in cryptography
based on those curves?

I know that hardly anybody using ECC bothers to find their own curve; they
tend to use the standard ones because finding their own involves counting all
the integral points and would be sort of compute expensive, in addition to
being involved and possibly error prone if there's a flaw in the 
implementation.

Take a trip down memory lane and research the historical roots of the Data 
Encryption Standard, especially the pre-DES Lucifer standard with IBM.  Some 
hints would be the last minute reduction to 56-bit, as well as the replacement 
S-Boxes that were mandated for use by IBM before Lucifer became the DES.

And then if you were in the Beltway region back in '98, you might also remember 
the entire federal government freaking out about EFF's Deep Crack, which almost 
overnight caused 56-bit DES to be deprecated in favor of 3DES.  But then there 
were the complaints about the computational expensiveness of 3DES, so our 
superheroes at NIST jumped in with the Advanced Encryption Standard contest and 
here we are again.

In the '90s there were a few papers written about optimal DES S-Box 
calculation; they disappeared from publication.  There was also a fellow who 
released a software application used for alternate DES S-Box generation, which 
got yanked as well.  I am not suggesting black helicopters or extrajudicial 
renditions, just that these were once on the Internet and then a few weeks later 
they were not online anymore, anywhere.

An oldie but goodie in this category of discussion is SANS' S-Box 
Modifications and Their Effect in DES-like Encryption Systems, Joe Gargiulo, 
July 25, 2002.


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-07 Thread David Mercer
On Sat, Sep 7, 2013 at 2:19 AM, ianG i...@iang.org wrote:

 On 7/09/13 10:15 AM, Gregory Perry wrote:

  Correct me if I am wrong, but in my humble opinion the original intent
 of the DNSSEC framework was to provide for cryptographic authenticity
 of the Domain Name Service, not for confidentiality (although that
 would have been a bonus).



 If so, then the domain owner can deliver a public key with authenticity
 using the DNS.  This strikes a deathblow to the CA industry.  This threat
 is enough for CAs to spend a significant amount of money slowing down its
 development [0].

 How much more obvious does it get [1] ?

 iang


I proposed essentially this idea around 10 years ago on the capabilities
list: delivering public keys for any arbitrary purpose using custom TXT
records and some hackish things that are/were sub-optimal, since DNSSEC was
more of a pipedream then than it is now. I only went so far as to kick
around design ideas on and off-list back then under the tag-line of
objectdns (as in being able to locate and connect to any arbitrary object
via a public-key crypto connection) and to register the domain
objectdns.com. Things stalled out there due to my lack of copious free time.
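
A rough sketch of the TXT-record half of that idea (not the objectdns design
itself, just an illustration): it assumes the third-party dnspython package,
and the record name and the "pubkey=" convention are invented here rather
than taken from any standard.

import base64
import dns.resolver  # third-party dnspython package (assumed available)

def fetch_txt_pubkey(name):
    """Fetch a base64-encoded public key published in a TXT record."""
    answers = dns.resolver.resolve(name, "TXT")  # dnspython 2.x API
    for rdata in answers:
        # TXT rdata may be split into several <=255-byte strings; rejoin them.
        txt = b"".join(rdata.strings).decode("ascii", errors="replace")
        if txt.startswith("pubkey="):
            return base64.b64decode(txt[len("pubkey="):])
    return None

key = fetch_txt_pubkey("_objectkey.example.com")  # hypothetical record name
print("got key" if key else "no key record found")

Without DNSSEC validation on that lookup the record is, of course, exactly
as spoofable as the rest of unauthenticated DNS, which is the point of the
thread.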

David Mercer - http://dmercer.tumblr.com
IM:  AIM: MathHippy Yahoo/MSN: n0tmusic
Facebook/Twitter/Google+/Linkedin: radix42
FAX: +1-801-877-4351 - BlackBerry PIN: 332004F7
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Washington Post: Google racing to encrypt links between data centers

2013-09-07 Thread Thor Lancelot Simon
On Fri, Sep 06, 2013 at 07:53:42PM -0400, Marcus D. Leech wrote:

 One wonders why they weren't already using link encryption systems?

One wonders whether, if what we read around here lately is any guide,
they still believe they can get link encryption systems that are
robust against the only adversary likely to be attacking their North
American links?

Thor
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] [cryptography] Random number generation influenced, HW RNG

2013-09-07 Thread Eugen Leitl
- Forwarded message from Thor Lancelot Simon t...@panix.com -

Date: Sat, 7 Sep 2013 15:36:33 -0400
From: Thor Lancelot Simon t...@panix.com
To: Eugen Leitl eu...@leitl.org
Cc: cryptogra...@randombit.net
Subject: Re: [cryptography] Random number generation influenced, HW RNG
User-Agent: Mutt/1.5.20 (2009-06-14)

On Sat, Sep 07, 2013 at 09:05:33PM +0200, Eugen Leitl wrote:
 
 This pretty much rules out CPU-integral RNGs. It has to be
 a third-party add-on (USB or PCIe), and it has to be open hardware.

I think you take this more than a little too far.  I see CPU-integral
RNGs as a very valuable source to be mixed with other sources in a
software pool of entropy.  Why should we reject them, unless we think
the mixing functions themselves are useless?

The lesson here seems to me to be that we should be far more
assiduous in seeking out additional sources of entropy and in always
ensuring software RNGs mix input from multiple such sources into
all output.  We should abandon sacred cows like the notion of
information-theoretic randomness (which we don't actually know how
to measure, but in pursuit of which we hamstring our software RNGs
by arranging that they refuse to produce any output unless, by some
questionable criterion, there is enough of it) and pursue engineering
goals we can actually achieve, like mixing enough other-source input,
of whatever quality, with the output of fast generators we can no longer
trust, so that the adversary must actually attack the mixing function
rather than iteratively guessing the few state bits he does not already know.
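
A minimal sketch of that mixing approach follows, assuming SHA-512 as the
mixing function; rdrand_bytes() is a hypothetical stand-in for a CPU RNG
binding, and os.urandom plus timing jitter stand in for the other sources.

import hashlib, os, time

class EntropyPool:
    """Toy entropy pool: every source is folded through a hash, so an
    adversary who controls one source must still attack the mixer."""
    def __init__(self):
        self._state = hashlib.sha512(b"pool-init").digest()

    def mix(self, data: bytes) -> None:
        # Folding in new input never reduces the unpredictability
        # already present in the state.
        self._state = hashlib.sha512(self._state + data).digest()

    def read(self, n: int) -> bytes:
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha512(self._state
                                  + counter.to_bytes(8, "big") + b"out").digest()
            counter += 1
        self.mix(b"ratchet")  # step the state so output can't be run backwards
        return out[:n]

def rdrand_bytes(n: int) -> bytes:
    # Hypothetical stand-in for a real RdRand binding; treated as one
    # input among several, never the sole source.
    return os.urandom(n)

pool = EntropyPool()
pool.mix(rdrand_bytes(32))                      # CPU-integral RNG
pool.mix(os.urandom(32))                        # OS entropy
pool.mix(str(time.perf_counter_ns()).encode())  # timing jitter, low quality but harmless
print(pool.read(32).hex())

Even if the CPU-integral source is wholly adversarial, its input cannot make
the pool output more predictable than the other inputs already made it.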

Secondarily -- and sadly! -- we must now be very suspicious of devices
that integrate random number generation and encryption.  Can we even
trust raw hardware RNG output for the generation of IVs?  I would argue
not, because the same device's AES engine could be leaking key bits into
our explicit IVs, etc, and we couldn't ever know.  Devices that offload
packet processing in its entirety (SSL accelerators, IPsec accelerators,
etc.) have even more opportunity to do this sort of thing.  Hardware
crypto offload may still be very useful -- random number generation perhaps
in particular -- but we will have to apply it with extreme care, and with
a deliberate eye towards eliminating covert channels put in place by
people at least as smart as we are, and with far more time and experience
thinking about the problem from the offensive point of view.

Finally, we have to accept that the game might just be over, period.  So
you use a pure software RNG, mixing in RdRand output or not as you may
prefer.  How hard do you think it is to identify the data structures used
by that RNG if you can execute code on a coprocessor with access to host
RAM?  Almost every modern server has such a coprocessor built in (its
management processor) and you won't find the source code to its firmware
floating around.  Intel even puts this functionality directly on its
CPUs (Intel AMT).  Rather than beating up on the guy who put a lovely
RNG instruction into every processor we're likely to use any time soon,
it seems to me we ought to be beating up on ourselves for ignoring far
simpler and more obvious risks like this one for well over a decade.

Seriously, show of hands: who here has ever really put his or her foot
down and insisted that a product they were purchasing _omit_ such
functionality?  Not merely chosen not to pay for it, but refused to buy
server X or mainboard Y simply on the basis that management processor
functionality was onboard?  Now, compare to the number of people
complaining about
backdoored RNGs here and elsewhere on the Internet.  Go figure.

To me the interesting question, but one to which I don't expect to ever
know the answer, is whether the adversary -- having, we can assume,
identified high value devices to systematically compromise, and lower value
devices to defer for later or simply ignore entirely -- went at those
devices sniper-style, or shotgun-style.  Were a few key opportunities for
tampering identified, and one or two attempted against each targeted
device?  Or were a wide variety of avenues explored, and every single one
that seemed relevant attempted everywhere, or at least against certain
particularly high value devices?  If we knew that, we might in a way know,
when we did finally see concrete evidence of a particular kind of
tampering, how long to keep looking for more.

But we aren't going to know that, no matter how much we might want to.
Attacks on crypto hardware, attacks on management processors, attacks
on supervisory or trusted execution modes seldom exercised in normal
system operation, attacks on flash modules holding boot code, so that
under the right circumstances they replace page P with evil page P',
attacks on elements of IC vendors' standard cell libraries (DMA engines
would seem promising); assume the adversaries are smart, and good at their
jobs, and the sky would seem to be the limit.

The sky will fall, of course, when various 

Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-07 Thread Bill Stewart



On 7/09/13 09:05 AM, Jaap-Henk Hoepman wrote:
Public-key cryptography is less well-understood than symmetric-key 
cryptography. It is also tetchier than symmetric-key crypto, and 
if you pay attention to us talking about issues with nonces, 
counters, IVs, chaining modes, and all that, you see that saying 
that it's tetchier than that is a warning indeed.


You have the same issues with nonces, counters, etc. with symmetric 
crypto so I don't see how that makes it preferable over public key crypto.


At 12:57 AM 9/7/2013, ianG wrote:
It's a big picture thing.  At the end of the day, symmetric crypto 
is something that good software engineers can master, and relatively 
well, in a black box sense.  Public key crypto not so easily, that 
requires real learning.  I for one am terrified of it.


Public-key crypto requires learning math, and math is hard (or at 
least ECC math is hard, and even prime-number-group math has some 
interesting tricks in it.)
Symmetric-key crypto is easy in a black-box sense, because most
algorithms come with rules that say "you need to do this and not do
that," yet the original PPTP did half a dozen things wrong with RC4
even though the only rule is "never use the same state twice".
But if you want to look inside the black box, most of what's there is 
a lot of bit-twiddling, maybe in a Feistel network, and while you can 
follow the bits around and see what changes, there can still be 
surprises like the discovery of differential cryptanalysis.
Public-key crypto lets you use math to do the analysis, but [vast 
over-simplification] symmetric-key mostly lets you play around and 
decide if it's messy enough that you can't follow the bits.
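
As a tiny worked example of the "never use the same state twice" rule (a
sketch only: the keystream is SHA-256 in counter mode standing in for RC4,
since the failure mode is the same for any stream cipher):

import hashlib

def keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"the same state twice"
p1 = b"attack the convoy at dawn"
p2 = b"retreat to the east ridge"

c1 = xor(p1, keystream(key, len(p1)))
c2 = xor(p2, keystream(key, len(p2)))  # same keystream reused -- the mistake

# The attacker never sees the key, yet c1 XOR c2 == p1 XOR p2, which
# classical cribbing can often unravel into both plaintexts.
assert xor(c1, c2) == xor(p1, p2)
print(xor(c1, c2).hex())

That is essentially the trap the original PPTP fell into, several times over.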


But there are other traps that affect people with either kind of 
system.  Once PGP got past the Bass-o-matic stage, the biggest 
security problems were mostly things like variable-precision numbers 
that were trying so hard to save bits that you could trick the 
program into interpreting them differently and accepting bogus 
information.  Fortunately we'd never have problems like that today 
(yes, ASN.1 BER/DER, I'm looking at you), and nobody ever forgets 
to check array bounds (harder in modern languages than in C or 
Fortran, but still quite possible), or fails to validate input before 
using it (SQL injections), etc.





___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] In the face of cooperative end-points, PFS doesn't help

2013-09-07 Thread Tony Arcieri
On Fri, Sep 6, 2013 at 6:49 PM, Marcus D. Leech mle...@ripnet.com wrote:

 It seems to me that while PFS is an excellent back-stop against NSA
 having/deriving a website RSA key


Well, it helps against passive eavesdropping. However, if the NSA has a web
site's private TLS key, they can still MitM the traffic, even with PFS.

Likewise, even with perfect forward secrecy, they can collect and store all
your traffic now, and when they get a large quantum computer in the next
10-20 years, decrypt it then.

PFS is far from perfect

-- 
Tony Arcieri
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] XORing plaintext with ciphertext

2013-09-07 Thread Florian Weimer
* Dave Horsfall:

 Take the plaintext and the ciphertext, and XOR them together.  Does the 
 result reveal anything about the key or the plaintext?

Yes, their length.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Protecting Private Keys

2013-09-07 Thread Phillip Hallam-Baker
On Sat, Sep 7, 2013 at 10:20 AM, Jeffrey I. Schiller j...@mit.edu wrote:


 If I was the NSA, I would be scavenging broken hardware from
 “interesting” venues and purchasing computers for sale in interesting
 locations. I would be particularly interested in stolen computers, as
 they have likely not been wiped.


+1

And this is why I have been so peeved at the chorus of attacks against
trustworthy computing.

All I have ever really wanted from Trustworthy computing is to be sure that
my private keys can't be copied off a server.


And private keys should never be in more than one place unless they are
either an offline Certificate Signing Key for a PKI system or a decryption
key for stored data.

-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Protecting Private Keys

2013-09-07 Thread Jim Popovitch
On Sat, Sep 7, 2013 at 10:20 AM, Jeffrey I. Schiller j...@mit.edu wrote:
 One of the most obvious ways to compromise a cryptographic system is
 to get the keys. This is a particular risk in TLS/SSL when PFS is not
 used. Consider a large scale site (read: Google, Facebook, etc.) that
 uses SSL. The private keys of the relevant certificates needs to be
 literally on hundreds if not thousands of systems.

$5k USD to any one of the thousands of admins with access

-Jim P.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-07 Thread Anne Lynn Wheeler

On 09/07/13 05:19, ianG wrote:

If so, then the domain owner can deliver a public key with authenticity using 
the DNS.
This strikes a deathblow to the CA industry.  This threat is enough for CAs to 
spend a significant amount
of money slowing down its development [0].


unfortunately, as far as SSL domain name certificates go ... the domain name
infrastructure is the authoritative agency for domain name ownership ... the
SSL domain name certification agencies have to rely on the domain name
infrastructure to validate true ownership for SSL domain name applications.
As I've repeatedly referenced ... this puts the CAs in a catch-22 ... they
need improved integrity of the domain name infrastructure (against attacks
on domain name ownership records followed by issuance of a valid SSL
certificate) ... which comes with lots of DNSSEC ... but that also
eliminates much of the need for SSL domain certificates.

as per the prior reference about the original work on SSL for electronic
commerce ... at least for the financial industry, I've repeatedly shown that
digital certificates were redundant and superfluous. I also showed that, at
the time, the addition of digital certificates increased the payload size by
two orders of magnitude (besides being redundant and superfluous). That
apparently motivated the compressed digital certificate financial standard
effort ... trying to reduce digital certificates so that the payload bloat
was only ten times (instead of a hundred times) ... in large part by
eliminating all information that the processing institution already had. I
demonstrated that the processing institution would have all the information
and therefore digital certificates could be reduced to zero bytes ... so
instead of eliminating redundant and superfluous digital certificates ... it
was possible to mandate that zero-byte certificates be appended to every
transaction (it would be possible to digitally sign a payment transaction
for authentication ... and rely on the individual's financial institution to
have registered the person's public key ... w/o having to increase the size
of every payment transaction in the world by 100 times just to transmit a
redundant and superfluous appended digital certificate).

I like the interchange at a panel discussion in an early-'90s ACM SIGMOD
ballroom open session: somebody in the audience asked what all this X.5xx
stuff was about, and one of the panelists said it was a bunch of networking
engineers trying to reinvent 1960s database technology.

there was some amount of participation by the information assurance
directorate in financial industry standards meetings. at various times there
were references to rifts between IA and SIGINT ... but for all I know that
may be kabuki theater. I was fairly vocal that any backdoors could put the
financial industry at risk of bad guys discovering the vulnerabilities ...
and wanted KISS applied as much as possible (and backdoors forbidden).

there are other agendas in much of this. at the start of the century there
were several safe internet payment products pitched to major merchants
(accounting for 70% of internet transactions) which got high acceptance.
Merchants have been indoctrinated for decades that a large part of the
interchange fee is proportional to the associated fraud rate ... and the
merchants were expecting an order-of-magnitude reduction in their fees (with
the safe products). Then came the cognitive dissonance when the banks told
the merchants that, rather than a major reduction in interchange fees with
the safe payment products ... there would effectively be a surcharge added
to the highest fee that they were already paying (and all the safe efforts
collapsed).

Part of the issue was that the bottom line for large issuing banks was
40%-60% from these fees, and an order-of-magnitude reduction in those fees
would be a big hit to their bottom line (the size of the fees being in part
justified by fraud rates). The safe products went a long way toward
eliminating most fraud and commoditizing the payment transaction business
... which would also lower the bar for entry by competition.

--
virtualization experience starting Jan1968, online at home since Mar1970
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Bruce Schneier has gotten seriously spooked

2013-09-07 Thread Chris Palmer
On Sat, Sep 7, 2013 at 1:33 AM, Brian Gladman b...@gladman.plus.com wrote:

 Why would they perform the attack only for encryption software? They
 could compromise people's laptops by spiking any popular app.

 Because NSA and GCHQ are much more interested in attacking communications
 in transit rather than attacking endpoints.

So they spike a popular download (security-related apps are less
likely to be popular) with a tiny malware add-on that scans every file
that it can read to see if it's an encryption key, cookie, password
db, whatever — any credential-like thing. The malware uploads any hits
to the mothership, then exits (possibly cleaning up after itself).
Trivial to do, golden results.

But really, why not leave a little CC pinger behind? Might as well;
you never know when it will be useful.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Washington Post: Google racing to encrypt links between data centers

2013-09-07 Thread Tony Arcieri
On Fri, Sep 6, 2013 at 4:53 PM, Marcus D. Leech mle...@ripnet.com wrote:

 One wonders why they weren't already using link encryption systems?


Probably line rate and the cost of encrypting every single fiber link.
There are few vendors who sell line-rate encryption for 10 Gbps+.

-- 
Tony Arcieri
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-07 Thread Gregory Perry
On 09/07/2013 04:20 PM, Phillip Hallam-Baker wrote:

Before you make silly accusations go read the VeriSign Certificate Practices 
Statement and then work out how many people it takes to gain access to one of 
the roots.

The Key Ceremonies are all videotaped from start to finish and the auditors 
have reviewed at least some of the ceremonies. So while it is not beyond the 
realms of possibility that such a large number of people were suborned, I think 
it drastically unlikely.

Add to which Jim Bidzos is not exactly known for being well disposed to the NSA 
or key escrow.


Hacking CAs is a poor approach because it is a very visible attack. Certificate 
Transparency is merely automating and generalizing controls that already exist.

But we can certainly add them to S/MIME, why not.

VeriSign is one single certificate authority.  There are many, many more
certificate authorities spread across the world, and unless you can
guarantee an air-gapped network with tightly constrained physical security
controls and a secret videotaped bohemian ceremony such as the one you
reference above at each and every one of those CAs, then maybe it's not such
a silly accusation to think that root CA keys are routinely distributed to
multinational secret services to perform MITM session decryption on any form
of communication that derives its security from the CA PKI.

To wit:  "...Mozilla maintains a list of at least 57 trusted root CAs,
though multiple commercial CAs or their resellers may share the same trusted
root." [http://en.wikipedia.org/wiki/Certificate_authority]

Another relevant read:  
http://www.quora.com/SSL-Certificates/How-many-intermediate-Certificate-Authorities-are-there#

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-07 Thread Phillip Hallam-Baker
On Sat, Sep 7, 2013 at 5:19 AM, ianG i...@iang.org wrote:

 On 7/09/13 10:15 AM, Gregory Perry wrote:

  Correct me if I am wrong, but in my humble opinion the original intent
 of the DNSSEC framework was to provide for cryptographic authenticity
 of the Domain Name Service, not for confidentiality (although that
 would have been a bonus).



 If so, then the domain owner can deliver a public key with authenticity
 using the DNS.  This strikes a deathblow to the CA industry.  This threat
 is enough for CAs to spend a significant amount of money slowing down its
 development [0].

 How much more obvious does it get [1] ?


Good theory, only the CA industry tried very hard to deploy and was
prevented from doing so because Randy Bush abused his position as DNSEXT
chair to prevent modification of the spec to meet the deployment
requirements in .com.

DNSSEC would have deployed in 2003 with the DNS ATLAS upgrade had the IETF
followed the clear consensus of the DNSEXT working group and approved the
OPT-IN proposal. The code was written and ready to deploy.

I told the IESG and the IAB that the VeriSign position was no bluff and
that if OPT-IN did not get approved there would be no deployment in .com. A
business is not going to spend $100 million on deployment of a feature that
has no proven market demand when the same job can be done for $5 million
with only minor changes.


CAs do not make their money in the ways you imagine. If there were any
business case for DNSSEC, I would have no problem at all finding people
willing to pay $50-100 to have a CA run their DNSSEC for them, because that
is going to be a lot cheaper than finding a geek with the skills needed to
do the configuration, let alone do the work.

One reason that PGP has not spread very far is that there is no group that
has a commercial interest in marketing it.

At the moment revenues from S/MIME are insignificant for all the CAs.
Comodo gives away S/MIME certs for free. Its just not worth enough to try
to charge for right now.

If we can get people using secure email or DNSSEC on a large scale then CAs
will figure out how to make money from it. But right now nobody is making a
profit from either.


-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Washington Post: Google racing to encrypt links between data centers

2013-09-07 Thread Eugen Leitl
On Sat, Sep 07, 2013 at 01:53:13PM -0700, Tony Arcieri wrote:
 On Fri, Sep 6, 2013 at 4:53 PM, Marcus D. Leech mle...@ripnet.com wrote:
 
  One wonders why they weren't already using link encryption systems?
 
 
 Probably line rate and the cost of encrypting every single fiber link.
 There are few vendors who sell line rate encryption for 10Gbps+

NANOG and DENOG had a discussion about this, and in general nobody
believes the products you can buy, especially the export versions,
have no backdoor.

Doing it in software is only feasible at network edge, not core.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Washington Post: Google racing to encrypt links between data centers

2013-09-07 Thread Eugen Leitl
On Sat, Sep 07, 2013 at 04:41:04PM -0400, Richard Outerbridge wrote:

 Surely not Canada? After all, we're one of the five eyes! ;)

Six. Sweden (FRA) is part of it. http://www.heise.de/tp/blogs/8/154917
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-07 Thread Gregory Perry
On 09/07/2013 05:03 PM, Phillip Hallam-Baker wrote:

Good theory, only the CA industry tried very hard to deploy and was
prevented from doing so because Randy Bush abused his position as DNSEXT
chair to prevent modification of the spec to meet the deployment
requirements in .com.

DNSSEC would have deployed in 2003 with the DNS ATLAS upgrade had the IETF 
followed the clear consensus of the DNSEXT working group and approved the 
OPT-IN proposal. The code was written and ready to deploy.

I told the IESG and the IAB that the VeriSign position was no bluff and
that if OPT-IN did not get approved there would be no deployment in .com. A
business is not going to spend $100 million on deployment of a feature that
has no proven market demand when the same job can be done for $5 million
with only minor changes.

And this is exactly why there is no real security on the Internet.  Because the 
IETF and standards committees and working groups are all in reality political 
fiefdoms and technological monopolies aimed at lining the pockets of a select 
few companies deemed worthy of authenticating user documentation for purposes 
of establishing online credibility.

There is no reason for any of this, and I would once again cite to Bitcoin as 
an example of how an entire secure online currency standard can be created and 
maintained in a decentralized fashion without the need for complex hierarchies 
of quasi-political commercial interests.

Encrypting SMTP is trivial; it's all about having a standard to make it
happen.  Encrypting IPv6 was initially a mandatory part of the spec, but
then it somehow became discretionary.  The nuts and bolts of strong crypto
have been around for decades, but the IETF and related standards powers that
be are more interested in creating a global police state than in
guaranteeing some semblance of confidentiality and privacy for Internet
users.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-07 Thread Derrell Piper
On Sep 6, 2013, at 11:51 PM, Marcus D. Leech mle...@ripnet.com wrote:

 The other thing that I find to be a dirty little secret in PK systems is 
 revocation.  OCSP makes things, in some ways, better than CRLs, but I still
  find them to be a kind of swept under the rug problem when people are 
 waxing enthusiastic about PK systems.

Well, there are other saddles, as it were.  SPKI/SDSI both offer a path forward 
without needing a trusted CA...


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-07 Thread Tony Arcieri
On Sat, Sep 7, 2013 at 1:01 PM, Ray Dillinger b...@sonic.net wrote:

 And IIRC, pretty much every asymmetric ciphersuite (including all public-
 key crypto) is vulnerable to some transformation of Shor's algorithm that
 is in fact practical to implement on such a machine.


Lattice-based (NTRU) or code-based (McEliece/McBits) public key systems are
still considered post-quantum algorithms. There are no presently known
quantum algorithms that work against these sorts of systems.

See http://pqcrypto.org/

-- 
Tony Arcieri
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] In the face of cooperative end-points, PFS doesn't help

2013-09-07 Thread james hughes

On Sep 7, 2013, at 1:50 PM, Peter Fairbrother zenadsl6...@zen.co.uk wrote:

 On 07/09/13 02:49, Marcus D. Leech wrote:
 It seems to me that while PFS is an excellent back-stop against NSA
 having/deriving a website RSA key, it does *nothing* to prevent the kind of
   cooperative endpoint scenario that I've seen discussed in other
 forums, prompted by the latest revelations about what NSA has been up to.
 
 True.
 
 But does it matter much? A cooperative endpoint can give plaintext no matter 
 what encryption is used, not just session keys.

+1. 

No cryptography offers protection against cooperative endpoints, because
they have all the plaintext. One can argue that subpoenas are just as
effective as cooperative endpoints. The reductio ad absurdum argument is
that PFS is not good enough in the face of subpoenas. I don't think
cooperative endpoints are a relevant point.

Passive monitoring and accumulation of cyphertext is a good SIGINT strategy. 
Read about the VENONA project. 
http://en.wikipedia.org/wiki/Venona_project
 Most decipherable messages were transmitted and intercepted between 1942 and 
 1945. […] These messages were slowly and gradually decrypted beginning in 
 1946 and continuing […] through 1980,

Clearly, the traffic was accumulated during a period when there was no
known attack.

While a reused OTP is not the fault here, PFS makes recovering information
via future key recovery harder, since a single key being recovered, by
whatever means, does not make old traffic more vulnerable.

This is not a new idea. The separation of key exchange from authentication
allows this. A router I did the cryptography for (first produced by Network
Systems Corporation in 1994) was very careful not to allow any old (i.e.
recorded) traffic to be vulnerable even if one or both endpoints were stolen
and all the key material extracted. The router used DH (both sides
ephemeral) for the key exchange and RSA for authentication and integrity.
This work actually predates IPSEC and is still being used.

http://www.blueridge.com/index.php/products/borderguard/borderguard-overview
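
For readers who want the shape of that split in code, here is a compressed
sketch (ephemeral DH for the session key, the long-term RSA key used only to
sign the ephemeral values), assuming a recent pyca/cryptography package; it
illustrates the idea and is not the BorderGuard's actual wire protocol.

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import dh, rsa, padding
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Parameter generation is slow; real systems use fixed, published groups.
params = dh.generate_parameters(generator=2, key_size=2048)

# Long-term authentication keys (normally pre-distributed, never used to
# encrypt or to derive traffic keys).
alice_rsa = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_rsa = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def make_signed_ephemeral(rsa_key):
    eph = params.generate_private_key()
    pub_bytes = eph.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo)
    sig = rsa_key.sign(
        pub_bytes,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256())
    return eph, pub_bytes, sig

def verify_and_derive(my_eph, peer_pub_bytes, peer_sig, peer_rsa_pub):
    # Authentication: the long-term key only ever signs ephemeral values.
    peer_rsa_pub.verify(
        peer_sig, peer_pub_bytes,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256())
    shared = my_eph.exchange(serialization.load_pem_public_key(peer_pub_bytes))
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"session key").derive(shared)

a_eph, a_pub, a_sig = make_signed_ephemeral(alice_rsa)
b_eph, b_pub, b_sig = make_signed_ephemeral(bob_rsa)

k_a = verify_and_derive(a_eph, b_pub, b_sig, bob_rsa.public_key())
k_b = verify_and_derive(b_eph, a_pub, a_sig, alice_rsa.public_key())
assert k_a == k_b  # both ends derive the same session key

The property being illustrated is exactly the one described above: later
theft of a long-term RSA key does not expose previously recorded sessions,
because the traffic keys came from ephemeral DH.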

I gather from the list that there have been, or are, arguments that doing
two public-key operations is too much. Is it really?

PFS may not be a panacea but does help.


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

[Cryptography] Does NSA break in to endpoints (was Re: Bruce Schneier has gotten seriously spooked)

2013-09-07 Thread Perry E. Metzger
On Sat, 07 Sep 2013 09:33:28 +0100
Brian Gladman b...@gladman.plus.com wrote:

 On 07/09/2013 01:48, Chris Palmer wrote:
  Q: Could the NSA be intercepting downloads of open-source
  encryption software and silently replacing these with their own
  versions?
  
  Why would they perform the attack only for encryption software? They
  could compromise people's laptops by spiking any popular app.
 
 Because NSA and GCHQ are much more interested in attacking
  communications in transit rather than attacking endpoints.

Except, one implication of recent revelations is that stealing keys
from endpoints has been a major activity of NSA in the last decade.

I'm not going to claim that altering patches and software during
download has been a major attack vector they've used for that -- I have
no evidence for the contention whatsoever and besides, endpoints seem
to be fairly vulnerable without such games -- but clearly attacking
selected endpoints is now an NSA pastime.

Perry
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] New task for the NSA

2013-09-07 Thread Jerry Leichter
The NY Times has done a couple of reports over the last couple of months about 
the incomprehensibility of hospital bills, even to those within the industry - 
and the refusal of hospitals to discuss their charge rates, claiming that what 
they will bill you for a treatment is proprietary.

Clearly, it's time to sic the NSA on the medical care system's billing offices. 
 Let them do something really useful for a change!

-- Jerry :-)

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-07 Thread Jeffrey I. Schiller
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Sat, Sep 07, 2013 at 09:14:47PM +, Gregory Perry wrote:
 And this is exactly why there is no real security on the Internet.
 Because the IETF and standards committees and working groups are all
 in reality political fiefdoms and technological monopolies aimed at
 lining the pockets of a select few companies deemed worthy of
 authenticating user documentation for purposes of establishing
 online credibility.
 ...
 Encrypting IPv6 was initially a mandatory part of the spec,
 but then it somehow became discretionary.  The nuts and bolts of
 strong crypto have been around for decades, but the IETF and related
 standards powers that be are more interested in creating a global
 police state than in guaranteeing some semblance of confidentiality and
 privacy for Internet users.

I’m sorry, but I cannot let this go unchallenged. I was there, I saw
it. For those who don’t know, I was the IESG Security Area Director
from 1994 - 2003. (by myself until 1998 after which we had two co-AD’s
in the Security Area). During this timeframe we formed the TLS working
group, the PGP working group and IPv6 became a Draft Standard. Scott
Bradner and I decided that security should be mandatory in IPv6, in
the hope that we could drive more adoption.

The IETF was (and probably still is) a bunch of hard-working
individuals who strive to create useful technology for the
Internet. In particular, IETF contributors are in theory individual
contributors and not representatives of their employers. Of course
this is the theory and practice is a bit “noisier,” but the bulk of the
participants I worked with were honest, hard-working individuals.

Security fails on the Internet for three important reasons that have
nothing to do with the IETF or the technology per se (except for point
3).

 1.  There is little market for “the good stuff”. When people see that
 they have to provide a password to log in, they figure they are
 safe... In general the consuming public cannot tell the
 difference between “good stuff” and snake oil. So when presented
 with a $100 “good” solution or a $10 bunch of snake oil, guess
 what gets bought.

 2.  Security is *hard*, it is a negative deliverable. You do not know
 when you have it, you only know when you have lost it (via
 compromise). It is therefore hard to show return on investment
 with security. It is hard to assign a value to something not
 happening.

 2a. Most people don’t really care until they have been personally
 bitten. A lot of people only purchase a burglar alarm after they
 have been burglarized. Although people are more security aware
 today, that is a relatively recent development.

 3.  As engineers we have totally and completely failed to deliver
 products that people can use. I point out e-mail encryption as a
 key example. With today’s solutions you need to understand PK and
 PKI at some level in order to use it. That is like requiring a
 driver to understand the internal combustion engine before they
 can drive their car. The real world doesn’t work that way.

No government conspiracy required. We have seen the enemy and it is...

-Jeff

___
Jeffrey I. Schiller
Information Services and Technology
Massachusetts Institute of Technology
77 Massachusetts Avenue  Room E17-110A, 32-392
Cambridge, MA 02139-4307
617.910.0259 - Voice
j...@mit.edu
http://jis.qyv.name
___
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (GNU/Linux)

iD8DBQFSK7xM8CBzV/QUlSsRApyUAKCB6GpP/hUHxtOQNGjSB5FDZS8hFACfVec6
pPw4Xvukq3OqPEkmVZKl0c8=
=9/UP
-END PGP SIGNATURE-
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Bruce Schneier has gotten seriously spooked

2013-09-07 Thread Gregory Perry
On 09/07/2013 07:32 PM, Brian Gladman wrote:
 I don't have experience of how the FBI operates so my comments were
 directed specifically at NSA/GCHQ interests.  I am doubtful that very
 large organisations change their direction of travel very quickly so I
 see the huge investments being made in data centres, in the tapping of
 key commmunications cables and core network routers and 'above our
 heads', as evidence that this approach still works well for NSA and
 GCHQ.  And I certainly don't think that volume is a problem yet since
 they have been able to invest heavily to develop the techniques that
 they use to see through lightweight protection and to pull out 'needles
 from haystacks'.

 Of course, you might well be right about the future direction they will
 have to travel because increasing volume in combination with better end
 to end protection must be a nightmare scenario for them.  But I don't
 see this move happening all that soon because a surprisingly large
 amount of the data in which they have an interest crosses our networks
 with very little protection.  And it seems even that which is protected
 has been kept open to their eyes by one means or another.

   Brian

As a perennial optimist I would hope that global surveillance efforts
were focused solely on core communication peering and network access
points.  Unfortunately, the realist (and technologist) in me says otherwise.

It is not possible to view or intercept local area network
communications from a core network router.  For example, if I wanted to
catch some U.S. senator fornicating with his neighbor's wife for
purposes of blackmail fodder, then access to a core network router
wouldn't do me much good. 

However, if I had access to that senator's premises router by way of a
lawful intercept backdoor, then perhaps I could for example observe
that senator and his mistress' comings and goings by capturing a 720p
video feed from the Xbox camera in his living room.  Or by remotely
enabling the speaker phone microphone on a Cisco VoIP device.  Or maybe
I could enable the microphone and video camera on a LAN-connected laptop
to listen in on ambient conversations and to observe a live video feed
from the room where the laptop is sleeping.

Etc, etc.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] ElGamal, DSA randomness (was Re: Why prefer symmetric crypto over public key crypto?)

2013-09-07 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Sep 7, 2013, at 5:09 PM, Perry E. Metzger pe...@piermont.com wrote:

 Note that such systems should at this point be using deterministic
 methods (hashes of text + other data) to create the needed nonces. I
 believe several such methods have been published and are considered
 good, but are not well standardized. Certainly this eliminates a *very*
 important source of fragility in such systems and should be universally
 implemented.
 
 References to such methods are solicited -- I'm operating without my
 usual machine at the moment while its hard drive restores from backup.

For as long as PGP has done DSA, it protected the signature nonce by hashing it 
with the DSA private key. These days, we'd do an HMAC, most likely.

There's an RFC 6979 on Deterministic DSA now, as well. Phil Z, David 
Kravitz, and I started on something equivalent and then stopped when we saw 
what Thomas Pornin was doing. It's good stuff.

https://datatracker.ietf.org/doc/rfc6979/
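
A bare-bones illustration of the underlying idea (the per-signature nonce
derived from the private key and the message via HMAC, so a broken or
backdoored RNG can't leak the key through repeated or biased nonces). This
shows only the idea; the actual RFC 6979 procedure uses an HMAC_DRBG-style
loop with careful bit-length handling, and the group order and key below are
stand-in values.

import hashlib
import hmac

def deterministic_nonce(private_key: int, message: bytes, q: int) -> int:
    """Derive a per-signature nonce k in [1, q-1] from the key and message."""
    h1 = hashlib.sha256(message).digest()
    x = private_key.to_bytes((q.bit_length() + 7) // 8, "big")
    k, counter = 0, 0
    while not 0 < k < q:
        tag = hmac.new(x, h1 + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        k = int.from_bytes(tag, "big") % q
        counter += 1
    return k

q = (1 << 224) - 63  # illustrative modulus only; use the real subgroup order q
x = int.from_bytes(b"a long-term private key", "big")

print(hex(deterministic_nonce(x, b"message one", q)))
print(hex(deterministic_nonce(x, b"message one", q)))  # identical: same key+message
print(hex(deterministic_nonce(x, b"message two", q)))  # unrelated nonce

The same key and message always give the same nonce, and no signature ever
depends on runtime randomness, which removes the RNG from the attack surface
for the signature operation.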

Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFSK8FpsTedWZOD3gYRAs2DAKCA8Di/fH9ZYvAb4y5Byb2bN6MudQCgkXZO
80uY0/A7zZ3CBe6C0/1ALfU=
=eqWE
-END PGP SIGNATURE-
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] ADMIN: Volume, top posting, trimming, SUBJECT LINES

2013-09-07 Thread Perry E. Metzger
1) Volume has gotten understandably high the last few days given the
current news. I'd like people to please consider if their posting
conveys interesting information before sending.

2) Please adjust the Subject lines of your messages if your posting
deviates from the original Subject. This makes it much easier for
people to determine what they want to skip.

3) Again, please don't top post. Again, please trim the message you are
replying to.

Perry
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-07 Thread Perry E. Metzger
On Sat, 07 Sep 2013 13:01:53 -0700
Ray Dillinger b...@sonic.net wrote:
 I think we can no longer rule out the possibility that some attacker
 somewhere (it's easy to point a finger at the NSA but it could be
 just as likely pointed at GCHQ or the IDF or Interpol) may have
 secretly developed a functional quantum computer with a qbus wide
 enough to handle key sizes in actual use.

In the same sense that we can no longer rule out the possibility that,
given modern synthetic biology techniques, someone has already come up
with a way to create pigs with wings. I see the possibility of the
quantum computer as slightly smaller, however.

 And IIRC, pretty much every asymmetric ciphersuite (including all
 public- key crypto) is vulnerable to some transformation of Shor's
 algorithm that is in fact practical to implement on such a machine.

To my knowledge, there is no ECC analog of Shor's algorithm.

Perry
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-07 Thread Perry E. Metzger
On Sat, 7 Sep 2013 13:06:14 -0700
Tony Arcieri basc...@gmail.com wrote:
 In order to beat quantum computers, we need to use public key systems
 with no (known) quantum attacks, such as lattice-based (NTRU) or
 code-based (McEliece/McBits) algorithms. ECC and RSA will no longer
 be useful.

I'm unaware of an ECC equivalent of the Shor algorithm. Could you
enlighten me on that?

Perry
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] Replacing CAs (was Re: Why prefer symmetric crypto over public key crypto?)

2013-09-07 Thread Perry E. Metzger
On Sat, 7 Sep 2013 17:46:39 -0400
Derrell Piper d...@electric-loft.org wrote:

 On Sep 6, 2013, at 11:51 PM, Marcus D. Leech mle...@ripnet.com
 wrote:
 
  The other thing that I find to be a dirty little secret in PK
  systems is revocation.  OCSP makes things, in some ways, better
  than CRLs, but I still find them to be a kind of swept under the
  rug problem when people are waxing enthusiastic about PK systems.
 
 Well, there are other saddles, as it were.  SPKI/SDSI both offer a
 path forward without needing a trusted CA...

I think that in general one doesn't need CAs much. I will point out,
again, a message I sent to the list recently in which I propose that
simple demonstration of long-term use and association may be
sufficient for ordinary purposes:

http://www.metzdowd.com/pipermail/cryptography/2013-August/016870.html
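
One way to read that proposal mechanically is simple key continuity:
remember the key fingerprint first seen for a peer, and complain loudly if
it ever changes. A minimal sketch follows, with the JSON store and names
invented for illustration.

import hashlib
import json
import os

STORE = os.path.expanduser("~/.known_peer_keys.json")

def _load():
    try:
        with open(STORE) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def check_key(peer_id: str, public_key_bytes: bytes) -> str:
    """Return 'new', 'match', or 'CHANGED' for the key a peer presents."""
    fingerprint = hashlib.sha256(public_key_bytes).hexdigest()
    known = _load()
    if peer_id not in known:
        known[peer_id] = fingerprint  # trust on first use
        with open(STORE, "w") as f:
            json.dump(known, f, indent=2)
        return "new"
    return "match" if known[peer_id] == fingerprint else "CHANGED"

# 'new' on first contact, 'match' on every later connection, and 'CHANGED'
# (which should scream at the user) if the presented key ever differs.
print(check_key("mail.example.net", b"---example public key bytes---"))

It demonstrates continuity of association over time rather than third-party
attestation, which is what the referenced message argues is sufficient for
ordinary purposes.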
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-07 Thread Perry E. Metzger
On Sat, 7 Sep 2013 20:43:39 -0400 I wrote:
 To my knowledge, there is no ECC analog of Shor's algorithm.

...and it appears I was completely wrong on that.

See, for example: http://arxiv.org/abs/quantph/0301141

Senility gets the best of us.

Perry
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography