[cryptography] How to optimize modular inversion w.r.t. a fixed large prime?
In elliptic curve calculations there are lots of modular inversions, and the prime is a fixed large number, say 256 bits. I wonder how I can optimize this operation; right now it takes a lot of time. Can anyone point me to something? ___ cryptography mailing list cryptography@randombit.net http://lists.randombit.net/mailman/listinfo/cryptography
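Two standard answers come up for a fixed prime modulus: Fermat's little theorem (a^-1 = a^(p-2) mod p, a fixed exponentiation instead of the extended Euclidean algorithm) and Montgomery's batch-inversion trick, which amortizes many inversions into one. A minimal Python sketch of both, using Curve25519's field prime as a stand-in example since the poster didn't name his curve:

```python
# 1) Fermat's little theorem: replaces extended-Euclid with one
#    fixed-length exponentiation ladder modulo the fixed prime.
# 2) Montgomery's batch trick: inverts n elements with a single
#    inversion plus about 3(n-1) multiplications.

P = 2**255 - 19  # example fixed ~256-bit prime (Curve25519's field)

def inv_fermat(a: int, p: int = P) -> int:
    """Inverse of a mod p, for prime p and a not divisible by p."""
    return pow(a, p - 2, p)

def batch_inverse(xs: list, p: int = P) -> list:
    """Montgomery's trick: one modular inversion for the whole list."""
    n = len(xs)
    prefix = [1] * (n + 1)          # prefix[i] = xs[0]*...*xs[i-1] mod p
    for i, x in enumerate(xs):
        prefix[i + 1] = prefix[i] * x % p
    inv = inv_fermat(prefix[n], p)  # the only expensive inversion
    out = [0] * n
    for i in range(n - 1, -1, -1):
        out[i] = prefix[i] * inv % p    # = xs[i]^-1 mod p
        inv = inv * xs[i] % p           # now inv = prefix[i]^-1
    return out
```

In practice EC libraries also sidestep most inversions entirely by working in projective (or Jacobian) coordinates, deferring a single inversion to the end of each scalar multiplication.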
Re: [cryptography] [liberationtech] New Anonymity Network for Short Messages
On 11/06/13 20:06, Eugen Leitl wrote: Use a timing-independent array comparison (http://rdist.root.org/2010/01/07/timing-independent-array-comparison/). It's an easy fix. I've made the same mistake before, which is why I always look for it now. The page says: Usually it's not, but if these were passwords instead of cryptographic values, it would be better to hash them with PBKDF2 (http://en.wikipedia.org/wiki/PBKDF2) or bcrypt (http://www.mindrot.org/projects/py-bcrypt/) instead of working with them directly. If you are indeed comparing passwords through their hashed/bcrypt'ed/PBKDF2'ed representations, you would now leak info about whether or not those representations match. You have essentially shifted the problem to their hashes. I don't believe this is enough. If users have simple passwords, this theoretically allows someone to brute-force passwords offline once attackers know the hashed/bcrypt'ed/PBKDF2'ed representation (leaked through the side channel mentioned above, e.g. timing). Yes, it is better than a plaintext password, but not bulletproof. Let H be the representation of the password using an (iterative) hash; then wouldn't it be better to compare H(N, The_pwd) and H(N, attempt_pwd), where N is picked randomly each time the comparison is performed? This way, every time you compare passwords, the H representation changes, and you cannot do an offline brute-force search. BTW, scrypt is also better than bcrypt/PBKDF2 against password cracking.
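Both ideas in this exchange fit in a few lines of Python. A minimal sketch (in real code, prefer the standard library's hmac.compare_digest for the first; the second is the randomized-key comparison proposed above, using HMAC-SHA-256 as the H):

```python
import hashlib
import hmac
import os

def constant_time_equal(a: bytes, b: bytes) -> bool:
    """Examine every byte instead of returning at the first mismatch,
    so run time doesn't reveal where the inputs first differ."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

def randomized_equal(stored: bytes, attempt: bytes) -> bool:
    """Compare H(N, stored) with H(N, attempt) under a key N picked
    fresh per comparison: whatever a leaky comparison reveals about
    these values is useless on the next call."""
    n = os.urandom(32)
    return hmac.new(n, stored, hashlib.sha256).digest() == \
           hmac.new(n, attempt, hashlib.sha256).digest()
```

Note the length check in constant_time_equal still leaks length; for fixed-size digests that is harmless.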
Re: [cryptography] [liberationtech] New Anonymity Network for Short Messages
On 12/06/13 08:36, James A. Donald wrote: Difficult to avoid something like that while retaining parallelizability. Galois/Counter Mode (GCM) is parallelizable and provides authenticated encryption.
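GCM parallelizes because its confidentiality layer is counter mode: every keystream block is a function of (key, nonce, counter) alone, with no chaining between blocks. A toy demonstration of that property, using HMAC-SHA-256 as a stand-in for the AES block function (an assumption for illustration only, not how GCM is specified):

```python
import hashlib
import hmac
from concurrent.futures import ThreadPoolExecutor

def keystream_block(key: bytes, nonce: bytes, counter: int) -> bytes:
    # Each block depends only on (key, nonce, counter), so blocks can
    # be computed in any order or concurrently -- the property GCM
    # inherits from its CTR encryption layer.
    return hmac.new(key, nonce + counter.to_bytes(4, "big"),
                    hashlib.sha256).digest()

key, nonce = b"\x01" * 32, b"\x02" * 12
serial = [keystream_block(key, nonce, i) for i in range(16)]
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(lambda i: keystream_block(key, nonce, i),
                             range(16)))
assert serial == parallel  # same keystream regardless of evaluation order
```

GCM's GHASH authenticator is likewise parallelizable (it is a polynomial evaluation over GF(2^128)), which is why the mode as a whole pipelines well in hardware.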
Re: [cryptography] [ipv6hackers] opportunistic encryption in IPv6
The process of randomly generating and calculating a public key for every brute-force attempt will slow the process considerably. However, for further key stretching, perhaps many iterations of SHA-* et al. are not the best option. Since web servers may be processing thousands of new connections per second, thousands of iterations of SHA and co. per connection may be prohibitively time-intensive for servers to implement. At the same time, attackers with GPUs/FPGAs/ASICs will have an advantage of several orders of magnitude. Perhaps in this case it would be wise to leverage a universally slow algorithm like scrypt. It's not more difficult to implement than SHA et al., but it's slower to brute-force with dedicated crypto hardware. On Jun 12, 2013, at 5:21, Eugen Leitl eu...@leitl.org wrote: - Forwarded message from Jim Small jim.sm...@cdw.com - Date: Wed, 12 Jun 2013 03:31:10 + From: Jim Small jim.sm...@cdw.com To: IPv6 Hackers Mailing List ipv6hack...@lists.si6networks.com Subject: Re: [ipv6hackers] opportunistic encryption in IPv6 Reply-To: IPv6 Hackers Mailing List ipv6hack...@lists.si6networks.com Here's an interesting question more relevant to the list and the paper though - are IPv6 CGAs useful? It seems like SeND is dead. But does anyone on the list think that CGAs could provide a useful competitive advantage for IPv6 over IPv4? Are these a useful building block? I believe CGAs solve the PKI problem entirely. If using CGAs, one does not need any PKI or CA certificate at all. True, as long as you don't need authentication. But I have to concede, the whole point of OE is just to encrypt the traffic. Each node having a CGA can present a self-signed certificate. The certificate is used only to extract the public key (PK), modifier, collision counter and any extension fields. The extracted information can be used to verify that the host address is a valid CGA for the given public key. The next step is symmetric key negotiation.
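The scrypt suggestion above is easy to try: Python's standard library exposes it directly as hashlib.scrypt (CPython 3.6+, built against a suitable OpenSSL). A minimal sketch with commonly cited interactive-login parameters (the specific n/r/p values here are illustrative, not a recommendation from this thread):

```python
import hashlib
import os

def stretch(password: bytes, salt: bytes) -> bytes:
    # n (CPU/memory cost), r (block size) and p (parallelism) set the
    # work factor; scrypt's memory-hardness (here ~16 MiB) is what
    # blunts the GPU/FPGA/ASIC advantage relative to iterated SHA-2.
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1,
                          dklen=32)

salt = os.urandom(16)
tag = stretch(b"correct horse battery staple", salt)
assert tag == stretch(b"correct horse battery staple", salt)
assert tag != stretch(b"wrong guess", salt)
```

The per-connection cost argument cuts both ways, of course: the same parameters that slow an attacker slow the server, so they have to be tuned to the connection rate actually expected.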
If the key negotiation messages are encrypted with the specified public key, then only the node holding the corresponding private key can decrypt them. This step ensures that MITM is not possible (unless you are using a CGA that was not generated from your own public/private key pair). If you use your own public/private keys, then you can no longer easily choose your address. If using CGA+IPsec, the IKE daemon can do the key negotiation part when given an authenticated public key. In SeND, PKI is used only to protect against rogue routers: only certificates signed by the CA should be able to send router advertisements. TL;DR: for address authentication (protection against MITM) when using CGAs, no PKI is needed. Per RFC 3972, CGAs are not certified. I read the RFC as saying that, assuming a strong hash and a secure private key, once someone uses a CGA, someone else can't hijack or impersonate that address. So they are great for unauthenticated encryption. CGAs are the holy grail for opportunistic encryption. A node can immediately start using opportunistic encryption by generating a self-signed certificate and a CGA. One thing I wonder about is that a 64-bit hash is pretty small - I wonder if that is sufficiently complex to provide security for the coming decade+? When generating a CGA you can choose a security level, which allows you to slow down brute-force attacks (searches for modifiers which would generate a specific CGA address). The security level is encoded in the first three bits of the address. Because of that, CGAs with lower security do not overlap with stronger CGAs. True, but I wonder how well this fares against modern massively parallel GPU crackers. SHA-1 is a weak hash. It would be nice to see an update using SHA-2/SHA-3 and to mandate longer key lengths - say >= 2048 bits. Otherwise, doesn't it seem like we're going down the WEP path again? Still - it's a great point, CGAs do seem well suited for OE if you can live with the limitations. Is there anything that currently supports this?
I'm wondering how much IPv6 market value this has... --Jim - End forwarded message -
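The CGA mechanics discussed above can be sketched compactly. This is a heavily simplified illustration of the RFC 3972 idea, not a conformant implementation (the real spec uses DER-encoded keys, a collision-count retry loop, and bit-level Hash2 checks; SHA-1 appears here because that is what the RFC mandates):

```python
import hashlib
import os

def generate_cga_iid(pubkey: bytes, prefix: bytes, sec: int):
    """Search for a modifier meeting the Sec condition, then derive a
    64-bit interface identifier bound to the public key."""
    while True:
        modifier = os.urandom(16)
        # Hash2 condition: 16*Sec leading zero bits (checked bytewise
        # here for brevity). Each Sec increment makes generation -- and
        # brute-force impersonation -- about 2^16 times harder.
        hash2 = hashlib.sha1(modifier + b"\x00" * 9 + pubkey).digest()
        if hash2[:2 * sec] == b"\x00" * (2 * sec):
            break
    hash1 = hashlib.sha1(modifier + prefix + b"\x00" + pubkey).digest()
    # Encode Sec in the top three bits of the interface identifier, so
    # low-Sec and high-Sec addresses cannot collide.
    iid = bytes([(sec << 5) | (hash1[0] & 0x1F)]) + hash1[1:8]
    return modifier, iid

def verify_cga_iid(pubkey, prefix, modifier, iid, sec) -> bool:
    """Anyone can recompute the hashes and check the binding between
    address and public key -- no CA or PKI involved."""
    hash2 = hashlib.sha1(modifier + b"\x00" * 9 + pubkey).digest()
    if hash2[:2 * sec] != b"\x00" * (2 * sec):
        return False
    hash1 = hashlib.sha1(modifier + prefix + b"\x00" + pubkey).digest()
    return iid == bytes([(sec << 5) | (hash1[0] & 0x1F)]) + hash1[1:8]
```

The 64-bit-hash worry in the thread is visible here: only 59 hash bits actually survive into the identifier, so Sec's extra work factor is doing real lifting against brute-force impersonation.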
Re: [cryptography] [ipv6hackers] opportunistic encryption in IPv6
On Wed, Jun 12, 2013 at 05:59:38PM +0200, Eugen Leitl wrote: Here, I just don't understand the logic. To me, encrypting without authenticating buys you absolutely nothing, except to burn CPU cycles and contribute to global warming. In the *vast* majority of networking technology we use, modifying data in transit is just as easy as passively reading data in transit, within a constant factor. (That is, in a big-O sense, these are the same difficulty.) So what? Being able to detect if you are being attacked, even if most people don't bother, is a huge step forward over having no way of knowing at all. -- 'peter'[:-1]@petertodd.org 002c90d9b4f79320cf4b85fef8165be49be8ebcc29be25d353db
[cryptography] keyserver
so what's the go-to keyserver to look people up these days? i've tried http://keyserver.rayservers.com/, but it times out upon searching; so does http://pgp.mit.edu/. am i missing some obvious places? -- Noon Silk
Re: [cryptography] keyserver
On Thu, Jun 13, 2013 at 10:53 AM, Jeremy Stanley fu...@yuggoth.org wrote: On 2013-06-13 10:49:25 +1000 (+1000), Noon Silk wrote: so what's the go-to keyserver to look people up these days? [...] I use hkps://hkps.pool.sks-keyservers.net quite happily. I generally point people to this extremely well-written article: https://we.riseup.net/debian/openpgp-best-practices Excellent, thank you - just what I wanted! -- Noon Silk
[cryptography] CTR mode limit cycle length
Not to detract from the important discussion of how best to use AES CTR mode, but I have a more basic question... I can certainly understand why the discussion of CTR mode is considered to be boring. I assume that anyone can easily verify that testing trillions of different 128-bit counter values, even in incremental sequence, produces radically different xor masks, given a reasonable IV. But what's the probability of 2 xor masks colliding? Is this just assumed to be random, i.e. compatible with a birthday attack? Has anyone done anything like a limit median iteration count before repetition (LMICBR) test or a scintillating entropy test? (These are described in detail on my blogs.) The former test, which could actually be performed in useful fashion on a 128-bit space using existing computer power, would likely throw up warning signs if the cycle were too short. The latter test would potentially shrink the upper-bound complexity estimate for differential (i.e. interblock) cryptanalysis. So if, let's say, 2 in every 100 xor masks collide, then I need only store 100 encrypted blocks in order to have a good chance of finding a matching pair (or n-tuple) of xor masks, thereby facilitating statistical cracking methods. Obviously 100 is too small. So what is the actual number, for a given counter width? Personally, I'd prefer to rely on the predictable limit cycles of Karacell 3 (but then, I'm biased). But I'm quite open to a demonstration or whitepaper showing that CTR limit cycles are also predictable and usefully long. Or maybe I've just misunderstood how CTR works. Anyone?
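The collision question above is exactly the birthday bound. If the xor masks were ideal random 128-bit values, the chance that any two of n masks coincide is about n(n-1)/2^129. A quick sketch of the numbers:

```python
from math import expm1

def birthday_collision_prob(n: int, bits: int = 128) -> float:
    """P(any two of n ideal random `bits`-bit values coincide),
    via the standard approximation 1 - exp(-n(n-1)/2^(bits+1))."""
    return -expm1(-n * (n - 1) / 2 / 2 ** bits)

# Even a petabyte of 16-byte blocks (2^46 masks) is nowhere near a
# likely collision among random 128-bit values:
assert birthday_collision_prob(2 ** 46) < 1e-9
# A ~50% collision chance needs on the order of 2^64 masks:
assert 0.3 < birthday_collision_prob(2 ** 64) < 0.5
```

Under a single key, CTR actually does better than random: AES is a permutation, so distinct counter blocks give distinct masks with certainty, and the "cycle" is the full 2^128-block counter space. The real constraint is never reusing a key/IV/counter combination.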
Re: [cryptography] CTR mode limit cycle length
On 2013-06-13 12:31 PM, Russell Leidich wrote: Not to detract from the important discussion of how best to use AES CTR mode, but I have a more basic question... I can certainly understand why the discussion of CTR mode is considered to be boring. I assume that anyone can easily verify that testing trillions of different 128-bit counter values, even in incremental sequence, produces radically different xor masks, given a reasonable IV. But what's the probability of 2 xor masks colliding? Is this just assumed to be random, i.e. compatible with a birthday attack? If it was not random, there would be equivalent attacks on all other modes. I am seeing a lot of people imagining all sorts of problems with CTR happening under certain circumstances when, given those circumstances, there would be equivalent problems with all other modes. This is the bicycle-shed effect: a committee has to discuss a ten-million-dollar auditorium and a five-hundred-dollar bicycle shed. The auditorium goes through in three minutes, because no one understands the potential problems with the auditorium, whereas the bicycle shed bogs down the committee for three months. For example, someone pointed out that CTR is problematic because you don't necessarily have access to true randomness or non-repeating pseudo-randomness. Well, guess what? Every other mode needs randomness also. Every other mode needs authentication also. Has anyone done anything like a limit median iteration count before repetition (LMICBR) test or scintillating entropy test? (These are described in detail on my blogs.) The former test, which could actually be performed in useful fashion on a 128-bit space using existing computer power, would likely throw up warning signs if the cycle were too short. The latter test would potentially shrink the upper-bound complexity estimate for differential (i.e. interblock) cryptanalysis.
So if, let's say, 2 in every 100 xor masks collide, then I need only store 100 encrypted blocks in order to have a good chance of finding a matching pair (or n-tuple) of xor masks, thereby facilitating statistical cracking methods. Obviously 100 is too small. So what is the actual number, for a given counter width? Personally, I'd prefer to rely on the predictable limit cycles of Karacell 3 (but then, I'm biased). But I'm quite open to a demonstration or whitepaper showing that CTR limit cycles are also predictable and usefully long. Or maybe I've just misunderstood how CTR works. Anyone?