[Cryptography] djb's McBits (with Tung Chou and Peter Schwabe)

2013-09-16 Thread ianG

On 15/09/13 07:17 AM, Tony Arcieri wrote:

... djb is
working on McBits.


McBits: fast constant-time code-based cryptography

Abstract.
This paper presents extremely fast algorithms for code-based
public-key cryptography, including full protection against timing 
attacks. For example, at a 2^128 security level, this paper achieves a 
reciprocal decryption throughput of just 60493 cycles (plus cipher cost 
etc.) on a single Ivy Bridge core. These algorithms rely on an additive 
FFT for fast root computation, a transposed additive FFT for fast 
syndrome computation, and a sorting network to avoid cache-timing attacks.




CHES 2013 was in late August, already.  Was anyone there?  Any comments 
on McBits?


(Skimming the paper reveals one gotcha -- huge keys, 64k for 2^80 
security.  I'm guessing that FFT == fast Fourier transform.)


iang




Slides:
http://cr.yp.to/talks/2013.06.12/slides-djb-20130612-a4.pdf
Paper:
http://binary.cr.yp.to/mcbits-20130616.pdf
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] real random numbers

2013-09-16 Thread Joachim Strömbergson
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Aloha!

John Denker wrote:
 On 09/15/2013 03:49 AM, Kent Borg wrote:
 
 When Bruce Schneier last put his hand to designing an RNG he 
 concluded that estimating entropy is doomed. I don't think he
 would object to some coarse order-of-magnitude confirmation that
 there is entropy coming in, but I think trying to meter entropy-in
 against entropy-out will either leave you starved or fooled.
 
 That's just completely backwards.  In the world I live in, people get
 fooled because they /didn't/ do the analysis, not because they did.
 
 I very much doubt that Bruce concluded that accounting is doomed. 
 If he did, it would mark a dramatic step backwards from his work on
 the commendable and influential Yarrow PRNG: J. Kelsey, B. Schneier,
 and N. Ferguson (1999) http://www.schneier.com/paper-yarrow.pdf

What Kent is probably referring to is the Fortuna RNG, which is the
successor to Yarrow. One difference between Yarrow and Fortuna is that
Fortuna lacks the entropy estimator.

As Schneier and Ferguson state in chapter 10.3 of Practical Cryptography
(where Fortuna is described in good detail) [1]:

"Fortuna solves the problem of having to define entropy estimators by
getting rid of them."

[1] https://www.schneier.com/book-practical.html
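
For anyone who has not read the book, a minimal sketch of the idea (my
own simplification in Python, not code from the book): incoming events
are spread round-robin over 32 pools, and pool i is folded into the key
only on every 2^i-th reseed, so no per-source entropy estimate is ever
needed.

import hashlib

NUM_POOLS = 32

class FortunaSketch:
    def __init__(self):
        self.pools = [hashlib.sha256() for _ in range(NUM_POOLS)]
        self.key = b"\x00" * 32
        self.reseed_count = 0
        self.next_pool = 0

    def add_event(self, data: bytes):
        # Round-robin distribution; no estimate of how random 'data' is.
        self.pools[self.next_pool].update(data)
        self.next_pool = (self.next_pool + 1) % NUM_POOLS

    def reseed(self):
        # Pool i contributes only when 2^i divides the reseed count, so
        # higher pools accumulate more entropy between uses.
        self.reseed_count += 1
        seed = b""
        for i in range(NUM_POOLS):
            if self.reseed_count % (2 ** i) == 0:
                seed += self.pools[i].digest()
                self.pools[i] = hashlib.sha256()
        self.key = hashlib.sha256(self.key + seed).digest()

The real design also gates reseeds on the size of pool 0 and a minimum
time between reseeds, and generates output with a block cipher in
counter mode.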

- -- 
With kind regards, Yours

Joachim Strömbergson - Always in harmonic oscillation.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG/MacGPG2 v2.0.18 (Darwin)
Comment: GPGTools - http://gpgtools.org
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAlI29rMACgkQZoPr8HT30QEqRwCfb4+6/K6AtK04cvtFU4KCVGwB
VA8AoKWhC8lOsru/xIkac71My0jIzjI9
=fx8M
-----END PGP SIGNATURE-----
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] A lot to learn from Business Records FISA NSA Review

2013-09-16 Thread Perry E. Metzger
On Sat, 14 Sep 2013 20:37:07 -0700 John Gilmore g...@toad.com wrote:
[A very interesting message, and I'm going to reply to just one tiny
detail in it...]
 We in the outside world *invented* all of NSA's infrastructure.
 They buy it from us, and are just users like most computer
 users.  (Yes, they have programmers and they write code, but their
 code seems mostly applications, not lower level OS improvements or
 protocols.  I'm not talking about the parts of NSA that find
 security holes in other peoples' infrastructure, nor the malware
 writers.)

Well, we do know they created things like the (not very usable)
SELinux MAC (Mandatory Access Control) system, so clearly they do
some hacking on security infrastructure.

(I will not argue with the larger point though.)

Perry
-- 
Perry E. Metzger    pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA equivalent key length/strength

2013-09-16 Thread Tero Kivinen
ianG writes:
 On 14/09/13 18:53 PM, Peter Fairbrother wrote:
  But, I wonder, where do these longer equivalent figures come from?
 
 http://keylength.com/ (is a better repository to answer your question.)

I assume that web site only takes account of time; it does not base
its calculations on the cost of doing the cracking, which would also
include the space needed to do the actual calculations.

An old paper from the year 2000 which also takes space into
account,

http://www.emc.com/emc-plus/rsa-labs/historical/a-cost-based-security-analysis-key-lengths.htm

says that to crack a 1620-bit RSA key you need 10^10 years, with
158,000 machines each having 1.2*10^14 bytes (120 TB) of memory (a
year-2000 estimate costing $10 trillion).

The cost of that amount of memory today would still be quite high: at
$3-$10 per GB, the price would be hundreds of thousands to over a
million dollars per machine.
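
A back-of-the-envelope check of those figures (the per-GB prices are
assumptions, not quotes):

# Memory cost per machine at the paper's 1.2*10^14 bytes (120 TB).
bytes_per_machine = 1.2e14
gb_per_machine = bytes_per_machine / 1e9        # 120,000 GB
for price_per_gb in (3, 10):
    cost = gb_per_machine * price_per_gb
    print(f"${price_per_gb}/GB -> ${cost:,.0f} per machine")
# $3/GB -> $360,000 per machine
# $10/GB -> $1,200,000 per machine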

Most key-size calculations on the net only take into account the time
needed, not the space, and thus assume that memory is free. For
symmetric crypto cracking that is true, as you do not need that much
memory; for public keys it is not true for some of the algorithms.
-- 
kivi...@iki.fi
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] Broken RNG Generating Taiwanese Citizen Digital Certificates

2013-09-16 Thread Kent Borg
Broken-RNG time again: in looking at 2.2 million certificates, researchers 
found reused primes in 103 of them.



News story: 
http://arstechnica.com/security/2013/09/fatal-crypto-flaw-in-some-government-certified-smartcards-makes-forgery-a-snap/


Original paper: http://smartfacts.cr.yp.to/smartfacts-20130916.pdf
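
For anyone who has not seen the attack before, a toy illustration in
Python (tiny primes for readability; the researchers ran this pairwise,
in batch, across all the collected moduli):

from math import gcd

# Two moduli that accidentally share the prime p, as in the broken
# smartcard RNG.
p, q1, q2 = 1000003, 1000033, 1000037
n1, n2 = p * q1, p * q2

shared = gcd(n1, n2)        # recovers p without factoring anything
assert shared == p
print(n1 // shared, n2 // shared)   # the other factors: both keys broken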


-kb

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] The paranoid approach to crypto-plumbing

2013-09-16 Thread Bill Frantz
After Rijndael was selected as AES, someone suggested the really 
paranoid should super encrypt with all 5 finalists in the 
competition. Five-level super encryption is probably overkill, 
but two or three levels can offer some real advantages. So 
consider simple combinations of techniques which are at least as 
secure as the better of them.


Unguessable (aka random) numbers:

  Several generators, each reseeded on its own schedule, combined
  with XOR will be as good as the best of them.
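
  As a sketch of what I mean (Python; the two sources here are
  stand-ins, real generators would be seeded and reseeded
  independently):

import os, secrets

def combined_random(n: int) -> bytes:
    # XOR of two generator outputs: unpredictable as long as at least
    # one input is unpredictable and the two are independent.
    a = os.urandom(n)             # generator 1
    b = secrets.token_bytes(n)    # generator 2 (stand-in only)
    return bytes(x ^ y for x, y in zip(a, b))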


Symmetric encryption:

  Two algorithms give security equal to the best of them. Three
  protect against meet-in-the-middle attacks. Performing the
  multiple encryption at the block level allows block cyphers to
  be combined with stream cyphers. RC4 may have problems, but
  adding it to the mix isn't very expensive.
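
  A sketch of that block-level layering (Python, assuming the
  'cryptography' package; key and nonce handling is illustrative
  only): AES in CTR mode layered under the ChaCha20 stream cypher,
  with independent keys.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

k_aes, k_chacha = os.urandom(32), os.urandom(32)
n_aes, n_chacha = os.urandom(16), os.urandom(16)

def cascade_encrypt(plaintext: bytes) -> bytes:
    inner = Cipher(algorithms.AES(k_aes), modes.CTR(n_aes)).encryptor()
    outer = Cipher(algorithms.ChaCha20(k_chacha, n_chacha),
                   mode=None).encryptor()
    return outer.update(inner.update(plaintext))

def cascade_decrypt(ciphertext: bytes) -> bytes:
    outer = Cipher(algorithms.ChaCha20(k_chacha, n_chacha),
                   mode=None).decryptor()
    inner = Cipher(algorithms.AES(k_aes), modes.CTR(n_aes)).decryptor()
    return inner.update(outer.update(ciphertext))

assert cascade_decrypt(cascade_encrypt(b"attack at dawn")) == b"attack at dawn"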


Key agreement:

  For forward security, using both discrete log and elliptic
  curve Diffie-Hellman modes combined with XOR to calculate
  keying material is as good as the better of them. Encrypting a
  session key with one public key algorithm and then encrypting
  the result with another algorithm has the same advantage for
  the normal mode of TLS key agreement if you don't want
  forward security (which I very much want).
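
  A sketch of the combined agreement (Python, assuming the
  'cryptography' package; parameter sizes are illustrative):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dh
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def hkdf32(secret: bytes, label: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=label).derive(secret)

# Discrete-log Diffie-Hellman leg.
params = dh.generate_parameters(generator=2, key_size=2048)
a_ff, b_ff = params.generate_private_key(), params.generate_private_key()
ff_secret = a_ff.exchange(b_ff.public_key())

# Elliptic-curve leg (X25519).
a_ec, b_ec = X25519PrivateKey.generate(), X25519PrivateKey.generate()
ec_secret = a_ec.exchange(b_ec.public_key())

# XOR of the two independently derived keys: as good as the better leg.
key = bytes(x ^ y for x, y in zip(hkdf32(ff_secret, b"ff"),
                                  hkdf32(ec_secret, b"ec")))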


MACs:

  Two MACs are better than one. :-)

All this has costs, some of them significant, but those costs 
should be weighed against the security risks. Introducing a new 
algorithm with interesting theoretical security properties is a 
lot safer if the data is also protected with a well-examined 
algorithm which does not have those properties.


Cheers - Bill (who has finally caught up with the list)

---
Bill Frantz        | Re: Computer reliability, performance, and security:
408-356-8506       | The guy who *is* wearing a parachute is *not* the
www.pwpconsult.com | first to reach the ground.  - Terence Kelly

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] End to end

2013-09-16 Thread Phillip Hallam-Baker
Just writing document two in the PRISM-Proof series. I probably have to
change the name before November. Thinking about 'Privacy Protected' which
has the same initials.


People talk about end-to-end without saying what the ends are. In most
cases at least one end is a person or an organization, not a machine. So
when we look at the security of the whole system, people-security issues,
like the fact that they forget private-key passphrases and lose machines,
matter.

Which ends you are talking about depends on what the context is. If we are
talking about message formats then the ends are machines. If we are talking
about trust then the ends are people and organizations.

End to end has a lot of costs. Deploying certificates to end users is
expensive in an enterprise and often unnecessary. If people are sending
email through the corporate email system then in many cases the corporation
has a need/right to see what they are sending/receiving.

So one conclusion about S/MIME and PGP is that they should support
domain-level confidentiality, not just account-level.

Another conclusion is that end-to-end security is orthogonal to transport.
In particular there are good use cases for the following configuration:

Mail sent from al...@example.com to b...@example.net

* DKIM signature on message from example.com as outbound MTA 'From'.

* S/MIME Signature on message from example.com with embedded logotype
information.

* TLS (Transport Layer Security) with forward secrecy to the example.net
mail server, using DNSSEC and DANE to authenticate the IP address and
certificate.

* S/MIME encryption under example.net EV certificate

* S/MIME encryption under b...@example.net personal certificate.

[Hold onto flames about key validation and web of trust for the time being.
Accepting the fact that S/MIME has won the message format deployment battle
does not mean we are obliged to use the S/MIME PKI unmodified or require
use of CA validated certificates.]


Looking at the Certificate Transparency work, I see a big problem with
getting the transparency to be 'end-to-end', particularly with Google's
insistence on no side channels and ultra-low latency.

To me the important thing about transparency is that it is possible for
anyone to audit the key signing process from publicly available
information. Doing the audit at the relying party end prior to every
reliance seems a lower priority.

In particular, there are some types of audit that I don't think it is
feasible to do in the endpoint. The validity of a CT audit is only as good
as your newest notary timestamp value. It is really hard to guarantee that
the endpoint is not being spoofed by a PRISM capable adversary without
going to techniques like quorate checking which I think are completely
practical in a specialized tracker but impractical to do in an iPhone or
any other device likely to spend much time turned off or otherwise
disconnected from the network.



-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

[Cryptography] Apple and Certificate Pinning

2013-09-16 Thread Perry E. Metzger
I've not been able to figure out if Apple is using certificate
pinning for its applications (including its update systems) that seem
to use PKI. Does anyone know?

-- 
Perry E. Metzger    pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] The paranoid approach to crypto-plumbing

2013-09-16 Thread Bill Frantz

On 9/16/13 at 12:36 PM, leich...@lrw.com (Jerry Leichter) wrote:


On Sep 16, 2013, at 12:44 PM, Bill Frantz fra...@pwpconsult.com wrote:

After Rijndael was selected as AES, someone suggested the really paranoid 
should super encrypt with
all 5 finalists in the competition. Five-level super encryption 
is probably overkill, but two or three levels can offer some 
real advantages. So consider simple combinations of techniques 
which are at least as secure as the better of them.

This is trickier than it looks.

Joux's paper Multicollisions in iterated hash functions 
http://www.iacr.org/archive/crypto2004/31520306/multicollisions.ps
shows that finding ... r-tuples of messages that all hash to 
the same value is not much harder than finding ... pairs of 
messages.  This has some surprising implications.  In 
particular, Joux uses it to show that, if F(X) and G(X) are 
cryptographic hash functions, then H(X) = F(X) || G(X) (|| is 
concatenation) is about as hard as the harder of F and G - but 
no harder.


That's not to say that it's not possible to combine multiple 
instances of cryptographic primitives in a way that 
significantly increases security.  But, as many people found 
when they tried to find a way to use DES as a primitive to 
construct an encryption function with a wider key or with a 
bigger block size, it's not easy - and certainly not if you 
want to get reasonable performance.


This kind of result is why us crypto plumbers should always 
consult real cryptographers. :-)


I am not so much trying to make the construction better than the 
algorithms being used (the way 3DES is much more secure than 
single DES, and significantly extended the useful life of DES), 
but to make a construction that is at least as good as the best 
algorithm being used.


The idea is that when serious problems are discovered with one 
algorithm, you don't have to scramble to replace the entire 
crypto suite. The other algorithm will cover your tail while you 
make an orderly upgrade to your system.


Obviously you want to choose algorithms which are likely to have 
different failure modes, which is why I suggest that RC4 (or an 
extension thereof) might still be useful. The added safety also 
allows you to experiment with less-examined algorithms.


Cheers - Bill

---
Bill Frantz        |The nice thing about standards| Periwinkle
(408)356-8506      |is there are so many to choose| 16345 Englewood Ave
www.pwpconsult.com |from.   - Andrew Tanenbaum    | Los Gatos, CA 95032


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] AES [was NSA and cryptanalysis]

2013-09-16 Thread Tim Newsham
 What I think we are worried about here are very widespread
 automated attacks, and they're passive (data is collected and
 then attacks are run offline). All that constrains what attacks
 make sense in this context.

John Kelsey discusses several attacks that might fit this
profile, but one he did not consider was:

- A backdoor that leaks cryptographic secrets

Consider, for example, applications using an Intel chip with
hardware assist for AES. You're feeding your AES keys
directly into the CPU. Any attacker controlling the CPU has
direct access and doesn't have to do any fancy pattern matching
to discover the keys. Now if that CPU had a way to export
some or all of the bits through some channel that would also
be passively observable, the attacker could pull off an offline
passive attack.

What about RNG output? What if some bits were redundantly
encoded in some of the RNG output bits, which were then
used directly for TCP initial sequence numbers?

Such a backdoor would be feasible.
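
A toy sketch of such a channel in Python (all names and parameters
are mine, purely to show the shape of the attack):

import hmac, hashlib, secrets

backdoor_secret = b"known only to the attacker"
victim_key = secrets.token_bytes(16)     # the AES key being exfiltrated

def mask(counter: int) -> int:
    # Keyed pad; without backdoor_secret the output looks random.
    d = hmac.new(backdoor_secret, counter.to_bytes(8, "big"),
                 hashlib.sha256).digest()
    return int.from_bytes(d[:4], "big")

def backdoored_isn(counter: int) -> int:
    # One key byte per connection, hidden under the keyed pad.
    return mask(counter) ^ victim_key[counter % len(victim_key)]

# A passive attacker who knows the secret recovers the key from 16 ISNs.
recovered = bytes((backdoored_isn(i) ^ mask(i)) & 0xFF for i in range(16))
assert recovered == victim_key

To anyone without the backdoor secret the sequence numbers look
random, which is what would make such a channel so hard to detect.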

-- 
Tim Newsham | www.thenewsh.com/~newsham | @newshtwit | thenewsh.blogspot.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] End to end

2013-09-16 Thread Phillip Hallam-Baker
On Mon, Sep 16, 2013 at 3:14 PM, Ben Laurie b...@links.org wrote:


 On 16 September 2013 18:49, Phillip Hallam-Baker hal...@gmail.com wrote:

 To me the important thing about transparency is that it is possible for
 anyone to audit the key signing process from publicly available
 information. Doing the audit at the relying party end prior to every
 reliance seems a lower priority.


 This is a fair point, and we could certainly add on to CT a capability to
 post-check the presence of a pre-CT certificate in a log.


Yeah, not trying to attack you or anything. Just trying to work out exactly
what the security guarantees provided are.



 In particular, there are some types of audit that I don't think it is
 feasible to do in the endpoint. The validity of a CT audit is only as good
 as your newest notary timestamp value. It is really hard to guarantee that
 the endpoint is not being spoofed by a PRISM capable adversary without
 going to techniques like quorate checking which I think are completely
 practical in a specialized tracker but impractical to do in an iPhone or
 any other device likely to spend much time turned off or otherwise
 disconnected from the network.


 I think the important point is that even infrequently connected devices
 can _eventually_ reveal the subterfuge.


I doubt it is necessary to go very far to deter PRISM-type surveillance,
if that continues very long at all. The knives are out for Alexander,
hence the story about his Enterprise-bridge operations room.

Now the Russians...


Do we need to be able to detect PRISM-type surveillance in the
infrequently connected device, or is it sufficient to be able to detect
it somewhere?

One way to get a good timestamp into a phone might be to use a QR code.
This is, I think, as large as would be needed:

[image: Inline image 1]
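
Something like this, as a hypothetical sketch (Python, assuming the
third-party 'qrcode' package and an Ed25519 notary key; all names are
mine):

import base64, json, time
import qrcode
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

notary_key = Ed25519PrivateKey.generate()

# A signed timestamp token the phone can verify offline against the
# notary's public key.
payload = json.dumps({"t": int(time.time()),
                      "notary": "tier1-example"}).encode()
sig = notary_key.sign(payload)
token = (base64.b64encode(payload).decode() + "." +
         base64.b64encode(sig).decode())

qrcode.make(token).save("timestamp-qr.png")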



-- 
Website: http://hallambaker.com/
[attachment: qr-256.png]
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Radioactive random numbers

2013-09-16 Thread Dave Horsfall
On Fri, 13 Sep 2013, Eugen Leitl wrote:

  Given that there is One True Source of randomness to wit radioactive 
 
 What makes you think that e.g. breakdown in a reverse-biased
 Zener diode is any less true random? Or thermal noise in a
 crappy CMOS circuit?

It was a throw-away line; sigh...  The capitals should've been a hint.

And yes, I know about crappy CMOS circuits; I've unintentionally built
enough of them :-)

 In fact, 
 http://en.wikipedia.org/wiki/Hardware_random_number_generator#Physical_phenomena_with_quantum-random_properties
 lists a lot of potential sources, some with a higher
 rate and more private than others.

Thanks.

-- Dave, who must stop being subtle
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] End to end

2013-09-16 Thread Ben Laurie
On 16 September 2013 18:49, Phillip Hallam-Baker hal...@gmail.com wrote:

 To me the important thing about transparency is that it is possible for
 anyone to audit the key signing process from publicly available
 information. Doing the audit at the relying party end prior to every
 reliance seems a lower priority.


This is a fair point, and we could certainly add on to CT a capability to
post-check the presence of a pre-CT certificate in a log.


 In particular, there are some types of audit that I don't think it is
 feasible to do in the endpoint. The validity of a CT audit is only as good
 as your newest notary timestamp value. It is really hard to guarantee that
 the endpoint is not being spoofed by a PRISM capable adversary without
 going to techniques like quorate checking which I think are completely
 practical in a specialized tracker but impractical to do in an iPhone or
 any other device likely to spend much time turned off or otherwise
 disconnected from the network.


I think the important point is that even infrequently connected devices can
_eventually_ reveal the subterfuge.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] MITM source patching [was Schneier got spooked]

2013-09-16 Thread Phillip Hallam-Baker
On Mon, Sep 16, 2013 at 2:48 PM, zooko zo...@zooko.com wrote:

 On Sun, Sep 08, 2013 at 08:28:27AM -0400, Phillip Hallam-Baker wrote:
 
  I think we need a different approach to source code management. Get rid
 of
  user authentication completely, passwords and SSH are both a fragile
  approach. Instead every code update to the repository should be signed
 and
  recorded in an append only log and the log should be public and enable
 any
  party to audit the set of updates at any time.
 
  This would be 'Code Transparency'.

 This is a very good idea, and eminently doable. See also Ben Laurie's blog
 post:

 http://www.links.org/?p=1262

  Problem is we would need to modify GIT to implement.

 No, simply publish the git commits (hashes) in a replicated, append-only
 log.


Well, people bandwidth is always a problem.

But what I want is not just the ability to sign; I want a mechanism
to support verification and checking of the log, etc.
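
The shape I have in mind, as a minimal sketch (Python; names are mine):
each entry commits to a commit hash plus the previous entry, so anyone
replaying the log can detect a retroactive edit.

import hashlib, json, time

class AppendOnlyLog:
    def __init__(self):
        self.entries = []

    def append(self, commit_hash: str) -> dict:
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        body = {"commit": commit_hash, "prev": prev, "ts": int(time.time())}
        body["entry_hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        # Replay the chain; any tampered entry breaks every later hash.
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("commit", "prev", "ts")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["entry_hash"] != digest:
                return False
            prev = e["entry_hash"]
        return True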



 So what's the next step? We just need the replicated, append-only log.


Where I am headed is to first divide up the space for PRISM-PROOF email
between parts that are solved and only need good execution (message
formats, mail integration, etc) and parts that are or may be regarded as
research (key distribution, key signing, PKI).

Once that is done I am going to be building myself a very lightweight
development testbed built on an SMTP/SUBMIT + IMAP proxy.

But hopefully other people will see that there is general value to such a
scheme and work on:

[1] Enabling MUAs to make use of research built on the testbed.

[2] Enabling legacy PKI to make use of the testbed.

[3] Research schemes


Different people have different skills and different interests. My interest
is on the research side but other folk just want to write code to a clear
spec. Anyone going for [3] has to understand at the outset that whatever
they do is almost certain to end up being blended with other work before a
final standard is arrived at. We cannot afford another PGP/SMIME debacle.

On the research side, I am looking at something like Certificate
Transparency but with a two layer notary scheme. Instead of the basic
infrastructure unit being a CA, the basic infrastructure unit is a Tier 2
append only log. To get people to trust your key you get it signed by a
trust provider. Anyone can be a trust provider but not every trust provider
is trusted by everyone. A CA is merely a trust provider that issues policy
and practices statements and is subject to third party audit.


The Tier 2 notaries get their logs timestamped by at least one Tier 1
notary and the Tier 1 notaries cross notarize.

So plugging code signing projects into a Tier 2 notary would make a lot of
sense.

We could also look at getting SourceForge and GitHub to provide support,
maybe.


-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] The paranoid approach to crypto-plumbing

2013-09-16 Thread Tony Arcieri
On Mon, Sep 16, 2013 at 9:44 AM, Bill Frantz fra...@pwpconsult.com wrote:

 After Rijndael was selected as AES, someone suggested the really paranoid
 should super encrypt with all 5 finalists in the competition. Five-level
 super encryption is probably overkill, but two or three levels can offer
 some real advantages.


I wish there were a term for this sort of design in encryption systems
beyond just "defense in depth". AFAICT there is no such term.

How about the Failsafe Principle? ;)

-- 
Tony Arcieri
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] The paranoid approach to crypto-plumbing

2013-09-16 Thread Jerry Leichter
On Sep 16, 2013, at 6:20 PM, Bill Frantz wrote:
 Joux's paper Multicollisions in iterated hash functions 
 http://www.iacr.org/archive/crypto2004/31520306/multicollisions.ps
 shows that finding ... r-tuples of messages that all hash to the same value 
 is not much harder than finding ... pairs of messages.  This has some 
 surprising implications.  In particular, Joux uses it to show that, if F(X) 
 and G(X) are cryptographic hash functions, then H(X) = F(X) || G(X) (|| is 
 concatenation) is about as hard as the harder of F and G - but no harder.
 This kind of result is why us crypto plumbers should always consult real 
 cryptographers. :-)
Yes, this is the kind of thing that makes crypto fun.

The feeling these days among those who do such work is that unless you're going 
to use a specialized combined encryption and authentication mode, you might as 
well use counter mode (with, of course, required authentication).  For the 
encryption part, counter mode with multiple ciphers and independent keys has 
the nice property that it's trivially as strong as the strongest of the 
constituents.  (Proof:  If all the ciphers except one are cracked, the attacker 
is left with a known-plaintext attack against the remaining one.  The need for 
independent keys is clear since if I use two copies of the same cipher with the 
same key, I end up sending plaintext!  You'd need some strong independence 
statements about the ciphers in the set if you want to reuse keys.  Deriving 
them from a common key with a one-way hash function is probably safe in 
practice, though you'd now need some strong statements about the hash function 
to get any theoretical result.  Why rely on such things when you don't 
need to?)

It's not immediately clear to me what the right procedure for multiple 
authentication is.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] The paranoid approach to crypto-plumbing

2013-09-16 Thread Watson Ladd
On Mon, Sep 16, 2013 at 4:02 PM, Jerry Leichter leich...@lrw.com wrote:
 On Sep 16, 2013, at 6:20 PM, Bill Frantz wrote:
 Joux's paper Multicollisions in iterated hash functions
http://www.iacr.org/archive/crypto2004/31520306/multicollisions.ps
 shows that finding ... r-tuples of messages that all hash to the same
value is not much harder than finding ... pairs of messages.  This has
some surprising implications.  In particular, Joux uses it to show that, if
F(X) and G(X) are cryptographic hash functions, then H(X) = F(X) || G(X)
(|| is concatenation) is about as hard as the harder of F and G - but no
harder.
 This kind of result is why us crypto plumbers should always consult real
cryptographers. :-)
 Yes, this is the kind of thing that makes crypto fun.

 The feeling these days among those who do such work is that unless you're
going to use a specialized combined encryption and authentication mode, you
might as well use counter mode (with, of course, required authentication).
 For the encryption part, counter mode with multiple ciphers and
independent keys has the nice property that it's trivially as strong as the
strongest of the constituents.  (Proof:  If all the ciphers except one are
cracked, the attacker is left with a known-plaintext attack against the
remaining one.  The need for independent keys is clear since if I use two
copies of the same cipher with the same key, I end up sending plaintext!
 You'd need some strong independence statements about the ciphers in the
set if you want to reuse keys.  Deriving them from a common key with a
one-way hash function is probably safe in practice, though you'd now need
some strong statements about the hash function to get any theoretical
result.  Why rely on such things when you don't need to?)

 It's not immediately clear to me what the right procedure for multiple
authentication is.
 -- Jerry
The right procedure would be to use a universal hash function together with
counter-mode encryption. This has provable security related to the
difficulty of finding linear approximations to the encryption function.
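
Concretely, a simplified sketch of that construction (Python, assuming
the 'cryptography' package; this mirrors the ChaCha20/Poly1305 layout of
RFC 8439, minus the padding and length encoding):

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms
from cryptography.hazmat.primitives.poly1305 import Poly1305

key = os.urandom(32)
nonce = os.urandom(16)    # this API takes a 16-byte ChaCha20 nonce

def encrypt_then_mac(plaintext: bytes):
    enc = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()
    # Keystream block 0 becomes the one-time Poly1305 key; the
    # ciphertext then starts at block 1.
    otk = enc.update(b"\x00" * 64)[:32]
    ct = enc.update(plaintext)
    tag = Poly1305.generate_tag(otk, ct)
    return ct, tag

The security of the tag then rests on the universal-hash property of
Poly1305 plus the pseudorandomness of the one-time key, which is the
reduction described above.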

But I personally don't think this is much use. We have ciphers that have
stood up to lots of analysis. The real problems have been in modes of
operation, key negotiation, and deployment.
Sincerely,
Watson Ladd

 ___
 The cryptography mailing list
 cryptography@metzdowd.com
 http://www.metzdowd.com/mailman/listinfo/cryptography



-- 
Those who would give up Essential Liberty to purchase a little Temporary
Safety deserve neither  Liberty nor Safety.
-- Benjamin Franklin
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] The paranoid approach to crypto-plumbing

2013-09-16 Thread Bill Frantz

On 9/16/13 at 4:02 PM, leich...@lrw.com (Jerry Leichter) wrote:

The feeling these days among those who do such work is that 
unless you're going to use a specialized combined encryption 
and authentication mode, you might as well use counter mode 
(with, of course, required authentication).  For the encryption 
part, counter mode with multiple ciphers and independent keys 
has the nice property that it's trivially as strong as the 
strongest of the constituents.  (Proof:  If all the ciphers 
except one are cracked, the attacker is left with a 
known-plaintext attack against the remaining one.


Let me apply the ideas to the E communication protocol 
http://www.erights.org/elib/distrib/vattp/index.html. The code 
is available on the ERights site http://www.erights.org/.


Cutting out the details about how IP addresses are resolved, the 
initiator sends a series of messages negotiating the details of 
the connection and uses Diffie-Hellman for session key 
agreement.  --  Change the protocol to use both discrete log and 
elliptic curve versions of Diffie-Hellman, and use the results 
of both of them to generate the session key. I would love to 
have a key agreement algorithm other than Diffie-Hellman to use 
for one of the two algorithms to get a further separation of 
failure modes.


Authentication is achieved by signing the entire exchange with 
DSA.  --  Change the protocol to sign the exchange with both RSA 
and DSA and send and check both signatures.


In all cases, use algorithm bit lengths acceptable by modern standards.

The current data exchange encryption uses SHA1 in HMAC mode and 
3DES in CBC mode with MAC then encrypt. The only saving grace is 
that the first block of each message is the HMAC, which will 
make the known-plaintext attacks on the protocol harder. -- I 
would replace this protocol with one that encrypts twice and 
MACs twice. Using one of the modes which encrypt and MAC in one 
operation as the inner layer is very tempting with a different 
cypher in counter mode and a HMAC as the outer layer.
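
A sketch of that layering (Python, assuming the 'cryptography' 
package; key and nonce handling is illustrative only):

import os, hmac, hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

k_inner = AESGCM.generate_key(bit_length=256)
k_outer, k_mac = os.urandom(32), os.urandom(32)

def protect(plaintext: bytes) -> bytes:
    # Inner layer: AES-GCM, encrypt and MAC in one operation.
    n_inner = os.urandom(12)
    inner = n_inner + AESGCM(k_inner).encrypt(n_inner, plaintext, None)
    # Outer layer: a different cypher (ChaCha20) plus an HMAC.
    n_outer = os.urandom(16)
    outer = Cipher(algorithms.ChaCha20(k_outer, n_outer),
                   mode=None).encryptor()
    ct = outer.update(inner)
    tag = hmac.new(k_mac, n_outer + ct, hashlib.sha256).digest()
    return n_outer + ct + tag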



The need for independent keys is clear since if I use two 
copies of the same cipher with the same key, I end up sending 
plaintext!  You'd need some strong independence statements 
about the ciphers in the set if you want to reuse keys.  
Deriving them from a common key with a one-way hash function is 
probably safe in practice, though you'd now need some strong 
statements about the hash function to get any theoretical 
result.  Why rely on such things when you don't need to?)


I'm not sure you can avoid that one-way hash function in 
practice. Either it will be distilling randomness in your RNG or 
it will be stretching the pre-master secret in your key/IV/etc 
generation. You could use several and XOR the results if you can 
prove that their outputs are always different.




It's not immediately clear to me what the right procedure for multiple 
authentication is.


The above proposal uses two different digital signature 
algorithms, sends both, and checks both. I think it meets the 
"no worse than the best of the two" test.


Cheers - Bill

---
Bill Frantz        | We used to quip that 'password' is the most common
408-356-8506       | password. Now it's 'password1.' Who said users haven't
www.pwpconsult.com | learned anything about security? -- Bruce Schneier

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography