Re: [cryptography] Fault attacks on Bitcoin's secp256k1

2014-06-30 Thread Billy Brumley
I think they are mixing attacks. Checking input/output points has to
do with faults that occur while you're computing a scalar
multiplication, or with protocols where an attacker can send you a
point that isn't actually on the curve you're expecting. So it's an
invalid-curve attack or a fault attack, depending on the scenario.
(OpenSSL checks the input point, BTW.)
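Concretely, the input-point check is cheap: for secp256k1 it is one field
equation. A minimal sketch of the idea in Python (not any library's actual
code, just an illustration):

```python
# Sketch of the "check the input point" countermeasure for secp256k1,
# whose equation is y^2 = x^3 + 7 over F_p. Rejecting off-curve inputs
# blocks invalid-curve attacks; re-checking the *output* of a scalar
# multiplication additionally catches some computation faults.
P = 2**256 - 2**32 - 977  # the secp256k1 field prime

def on_curve(point):
    x, y = point
    return (y * y - (x * x * x + 7)) % P == 0

# The secp256k1 generator should pass; a corrupted point should fail.
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
assert on_curve((Gx, Gy))
assert not on_curve((Gx, Gy + 1))
```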

Then they start talking about (I believe) the Barenghi et al. paper
"Fault Attack to the Elliptic Curve Digital Signature Algorithm with
Multiple Bit Faults", which really has to do with faults in the second
half of an (EC)DSA signature. If you want to know what kind of faults
they need, read all about it in Sec. 3. I haven't fully read the paper,
but I'm guessing verifying the signature before you release it is the
no-brainer countermeasure. There are surely more clever ways to
prevent it.
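To make the verify-before-release countermeasure concrete, here is a toy
ECDSA over secp256k1 (my own sketch: affine arithmetic, not constant-time,
fixed k chosen only for reproducibility; never use any of this in
production). A simulated single-bit fault in s is caught before the
signature leaves the signer:

```python
# Toy ECDSA over secp256k1 demonstrating "verify before release".
import hashlib

# secp256k1 domain parameters
p  = 2**256 - 2**32 - 977
n  = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def inv(x, m):
    return pow(x, -1, m)

def add(P1, Q1):
    # Affine point addition; None is the point at infinity.
    if P1 is None: return Q1
    if Q1 is None: return P1
    (x1, y1), (x2, y2) = P1, Q1
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P1 == Q1:
        lam = (3 * x1 * x1) * inv(2 * y1, p) % p
    else:
        lam = (y2 - y1) * inv(x2 - x1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P1):
    # Double-and-add -- NOT side-channel safe (see the thread).
    R = None
    while k:
        if k & 1:
            R = add(R, P1)
        P1 = add(P1, P1)
        k >>= 1
    return R

def H(msg):
    return int.from_bytes(hashlib.sha256(msg).digest(), 'big') % n

def sign(d, msg, k):
    r = mul(k, (Gx, Gy))[0] % n
    s = inv(k, n) * (H(msg) + d * r) % n
    return (r, s)

def verify(Qpub, msg, sig):
    r, s = sig
    if not (0 < r < n and 0 < s < n):
        return False
    w = inv(s, n)
    X = add(mul(H(msg) * w % n, (Gx, Gy)), mul(r * w % n, Qpub))
    return X is not None and X[0] % n == r

d = 0xC0FFEE                   # toy private key
Q = mul(d, (Gx, Gy))
msg = b"hello"
sig = sign(d, msg, k=12345)    # fixed k for reproducibility only
assert verify(Q, msg, sig)     # good signature passes

faulty = (sig[0], sig[1] ^ 1)  # simulate a single-bit fault in s
assert not verify(Q, msg, faulty)  # countermeasure: refuse to release it
```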

Which cryptosystems, and beyond that which protocols, you can attack,
and how you carry out the attack, depend very much on the nature of
the fault/defect and the details of the protocol.

Shameless self promotion: https://eprint.iacr.org/2011/633

BBB

On Sun, Jun 29, 2014 at 1:25 PM, Ondrej Mikle ondrej.mi...@nic.cz wrote:
 Could anyone give an example of what flaws a secp256k1 implementation needs to have in order to succumb to the fault attack described in this tweet:
 https://twitter.com/pbarreto/status/392415079934615552 ?

 It mentions that an implementation is susceptible unless it checks everything, but doesn't go into details.

 I don't understand the fault attacks much, but IIRC it requires a raw point that is not on the curve to enter an incorrectly written algorithm. I don't see where the problematic raw point comes into play.

 Regards,
   Ondrej
 ___
 cryptography mailing list
 cryptography@randombit.net
 http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] ChaCha/Salsa blockcounter endianness

2014-01-26 Thread Billy Brumley
I think the fact that, in the reference code, input[12] and input[13]
are contiguous is throwing you off. The spec really just talks about
bytes:

http://cr.yp.to/snuffle/spec.pdf

- Sec. 10: "Here i is the unique 8-byte sequence ..."
- Then see what that looks like in Sec. 9 (e.g., Example 2)
- Then Sec. 8, and finally Sec. 7 for how the bytes get mapped to 32-bit ints

So my read is that how you implement that 64-bit counter is up to
you--as long as you respect the interface and feed the bytes in the
order it expects.
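A small sketch of that reading (hypothetical helper, Python): treat the
counter as the spec's 8-byte little-endian sequence first, and only then
split it into the two u32 state words, so host endianness never enters
the picture:

```python
import struct

def counter_words(counter):
    """Map a 64-bit ChaCha/Salsa block counter to state words 12 and 13.

    The spec is byte-oriented: the counter is a unique 8-byte sequence,
    and bytes become 32-bit words little-endian (Sec. 7 of the spec).
    """
    raw = struct.pack('<Q', counter)      # the 8-byte sequence
    lo, hi = struct.unpack('<II', raw)    # two little-endian u32s
    return lo, hi                         # (input[12], input[13])

assert counter_words(1) == (1, 0)
assert counter_words(2**32) == (0, 1)     # carry lands in input[13]
```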

BBB

On Mon, Jan 27, 2014 at 2:20 AM, ianG i...@iang.org wrote:
 Has anyone implemented Salsa/ChaCha hereabouts?

 I'm looking at the blockcounter and I have a doubt... It is an 8-byte block, and as the reference code works in u32s, it converts it as two 4-byte quantities to two 4-byte ints (u32s) in a platform-independent fashion (controlling each for endianness).

 As it is working in little-endian mode, it then does the increment of
 the two numbers manually with the first u32 [12] being the low-order.
 Unfortunately, this means they are hard-coded in little endian mode:

 x->input[12] = PLUSONE(x->input[12]);
 if (!x->input[12]) {
   x->input[13] = PLUSONE(x->input[13]);
   /* stopping at 2^70 bytes per nonce is user's responsibility */
 }

 This is maybe sorta correct if that is how it is defined;  the problem
 is that it punts the question of what the actual ordering should be if
 we wanted to use longs.  As the reference code sets the blockcounter to
 zero, and doesn't offer the choice of restarting down the stream at some
 long value, it doesn't matter what the user thinks because there is no
 setting of it.

 I'm doing Java/network order/bigendian and I'm restarting at random
 places determined in longs ... :( so I can't punt it.  If I take a long,
 and convert it to byte[8], will I be compatible with anyone else?

 To make matters worse, none of the test vectors will pick this issue up, because they use the raw byte[8] of all zeros as the blockcounter, so they will happily increment internally in little-endian order and compare nicely.

 (DJB's cunning test vector starts at long value -1 ... but again that is symmetrical like zero (0xFFFFFFFFFFFFFFFF), and +1 for the next block gives zero.  Doh!)

 There appear to be two options:

 1.  fix the ordering so that conversions to u64s are like the u32s, and defined in a platform-compatible fashion.
 2.  stick with the two u32s laid out in little-endian format, regardless, if that's what everyone has already sort of done.

 Any comments?

 iang


Re: [cryptography] Curve25519 OID

2013-10-08 Thread Billy Brumley
 I would appreciate expansion on all these horror scenarios.

 Most of the desirable characteristics of curve25519 are things that make it different from the NIST curves. For example, Montgomery coordinates protect you against point-compression patents: since you don't calculate y, you cannot violate someone's patent for calculating y--not to mention that point compression, Montgomery-coordinate style, has prior art going a long way back.

 Further, we should automatically distrust everything touched by NIST, because 
 we cannot invest the time, energy and thought to check out everything they 
 have touched.

Here is a non-exhaustive list of features curve25519 as a function
(and sometimes even as a curve) gives you.

1. Avoiding point compression. (You mentioned this already.)
2. Protection against invalid-curve attacks.
3. Protection against small-subgroup attacks.
4. Easy private key generation. (32 random bytes with a few bits
masked, vs. a random integer from 1 to n-1, where n is the generator
order.)
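Item 4 is the familiar "clamping" step. A sketch of the standard
curve25519 bit-twiddling (not tied to any particular library):

```python
import os

def curve25519_clamp(k32):
    # Standard curve25519 scalar clamping: clear the low 3 bits (forces
    # a multiple of the cofactor 8, neutralizing small-subgroup points),
    # clear bit 255 and set bit 254 (fixes the scalar's top bit, which
    # helps constant-time Montgomery ladders).
    k = bytearray(k32)
    k[0] &= 248
    k[31] &= 127
    k[31] |= 64
    return bytes(k)

sk = curve25519_clamp(os.urandom(32))
assert sk[0] % 8 == 0            # low 3 bits cleared
assert sk[31] & 0x80 == 0        # bit 255 cleared
assert sk[31] & 0x40 == 0x40     # bit 254 set
```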

In reality, most libraries (e.g., OpenSSL) already handle 1-4 above in
one way or another, because they're set up to handle generic
(Weierstrass-form) curves.

The horror scenarios in my head are more about integration. Here are
some examples.

Scenario #1
Weierstrass form of curve25519 gets standardized. Its parameters get
dumped as a named curve into a library (e.g., OpenSSL). The
implementation handles it as a generic curve. Its scalar
multiplication routine is like that for any other curve--some
windowed-NAF or sliding-window method with execution time dependent on
secrets--i.e., not side-channel secure.

Scenario #2
Weierstrass form of curve25519 gets standardized. A secure
implementation also gets integrated into a library (e.g., scrape some
code from NaCl to OpenSSL). So the implementation of the curve
arithmetic is great. Then it gets used for digital signatures like
ECDSA, and the mod-n arithmetic has execution time dependent on secrets.

So it depends on where your concerns fall--cooked curves, or poor
implementations (that put secrets at risk)? Mine are of the latter
variety--we have proof that they exist, and I've personally carried
out such attacks many times in the past.

My 2c: Standardize the Weierstrass form of curve25519. Handle
integration aspects as needed. There's some evidence to support that
this is acceptable. (E.g., E. Käsper's side-channel-secure
implementation of P-224 got picked up by OpenSSL, and no one is
barking about the other moving parts yet.)

BBB


Re: [cryptography] Curve25519 OID

2013-10-07 Thread Billy Brumley
People seem to be mixing up curve25519 as a function and curve25519 as
a ... well, curve (I prefer the latter).

The form Samuel gives is compatible with many standards. And of course
it can be used for digital signatures. Implementations can choose to
transform to and from the Montgomery form and benefit from all the
implementation slickness.

I suspect Dan wouldn't like this because, viewing curve25519 as just a
curve in standards-compatible form, there are so many ways an
implementation could violate all that curve25519 as a function brings
to the table. I can expand on these horror scenarios, or you can just
use your imagination.

BBB


On Sun, Oct 6, 2013 at 3:13 PM, Samuel Neves sne...@dei.uc.pt wrote:

 On 06-10-2013 18:45, CodesInChaos wrote:
  There are many details that are not clear to me. Typical Curve25519
  usage deviates from typical NIST curve usage in several ways:
 
  1. montgomery form, not weierstrass (conversion probably possible,
  never looked into details)

 This is always possible. For curve25519, we have:

 y^2 = x^3 - 102314837768112 x + 398341948620716521344
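Those coefficients can be sanity-checked mechanically. One standard change
of variables from the Montgomery form y^2 = x^3 + A*x^2 + x (A = 486662,
B = 1) is u = 36x + 12A, v = 216y, which yields integer Weierstrass
coefficients a = 432(3 - A^2), b = 1728(2A^3 - 9A). (The particular 36/216
scaling is my choice to clear the denominators from the usual x + A/3
shift.) A quick Python check:

```python
# Verify the quoted Weierstrass coefficients for curve25519's Montgomery
# form y^2 = x^3 + A*x^2 + x with A = 486662, B = 1. Substituting
# u = 36x + 12A, v = 216y gives v^2 = u^3 + a*u + b; the identity even
# holds over the integers, not just mod p.
A = 486662
a = 432 * (3 - A * A)
b = 1728 * (2 * A**3 - 9 * A)
assert a == -102314837768112          # matches the quoted linear term
assert b == 398341948620716521344     # matches the quoted constant term

# Polynomial identity: (36x+12A)^3 + a(36x+12A) + b == 216^2 (x^3+Ax^2+x)
x = 123456789                         # arbitrary; holds for all x
u = 36 * x + 12 * A
assert u**3 + a * u + b == 216**2 * (x**3 + A * x * x + x)
```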



Re: [cryptography] cryptanalysis of 923-bit ECC?

2012-06-22 Thread Billy Brumley
I don't understand the last few posts here. In the paper linked to by
Samuel Neves:

http://eprint.iacr.org/2012/042

Table 3, towards the top. (I read that as 2^53 steps.)

So to me, the recent result is "we verified computationally that our
analysis is correct."

Maybe my brain is too simple.

BBB

On Fri, Jun 22, 2012 at 10:54 AM, Jon Callas j...@callas.org wrote:


 On Jun 22, 2012, at 2:01 AM, James A. Donald wrote:

 On 2012-06-22 6:21 PM, James A. Donald wrote:
 Is this merely a case where 923 bits is equivalent to ~60 bits symmetric?

 As I, not an authority, understand this result, this result is not "oops, pairing-based cryptography is broken".

 It is "oops, pairing-based cryptography requires elliptic curves over a slightly larger field than elliptic curve cryptography does".

 Indeed. So kudos to the Fujitsu guys, and we make the curves bigger. Even 77 bits is really too small for serious work.

 Does anyone know what the ratio is for equivalences, either before or after?

 The usual rule of thumb is 2x bits for symmetric-security equivalence on hashes and normal ECC, with integer public keys mapping 1024 to 80 symmetric, 2048 to 112, and 3072 to 128.

 What creates the 953 -> 153 relation? Then of course there's the obvious 153 halved, but do we know at all how we'd compensate for the new result?

        Jon

