Re: [Cryptography] Sha3

2013-10-07 Thread Peter Fairbrother

On 05/10/13 00:09, Dan Kaminsky wrote:

Because not being fast enough means you don't ship.  You don't ship, you
didn't secure anything.

Performance will in fact trump security.  This is the empirical reality.
  There's some budget for performance loss. But we have lots and lots of
slow functions. Fast is the game.


That may once have been mostly true, but no longer - now it's mostly false.

In almost every case nowadays the speed at which a device computes a 
SHA-3 hash doesn't matter at all. Devices are either way fast enough, or 
they can't use SHA-3 at all, whether or not it is made 50% faster.




(Now, whether my theory holds that we stuck with MD5 over SHA1 because
variable field lengths are harder to parse in C -- that's an open
question to say the least.)


:)

-- Peter Fairbrother
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Sha3

2013-10-07 Thread Peter Fairbrother

On 05/10/13 20:00, John Kelsey wrote:

http://keccak.noekeon.org/yes_this_is_keccak.html



Seems the Keccak people take the position that Keccak is actually a way 
of creating hash functions, rather than a specific hash function - the 
created functions may be ridiculously strong, or far too weak.


It also seems NIST think a competition is a way of creating a hash 
function - rather than a way of competitively choosing one.



I didn't follow the competition, but I don't actually see anybody being 
right here. NIST is probably just being incompetent, not malicious, but 
their detractors have a point too.


The problem is that the competition was, or should have been, for a 
single [1] hash function, not for a way of creating hash functions - and 
in my opinion only a single actual hash function based on Keccak should 
have been allowed to enter.


I think that's what actually happened, and an actual function was 
entered. The Keccak people changed it a little between rounds, as is 
allowed, but by the final round the entries should all have been fixed 
in stone.


With that in mind, there is no way the hash which won the competition 
should be changed by NIST.


If NIST do start changing things - whatever the motive  - the benefits 
of openness and fairness of the competition are lost, as is the analysis 
done on the entries.


If NIST do start changing things, then nobody can say "SHA-3 was chosen 
by an open and fair competition".


And if that didn't happen, if a specific and well-defined hash was not 
entered, the competition was not open in the first place.




Now in the new SHA-4 competition TBA soon, an actual specific hash 
function based on Keccak may well be the winner - but then what is 
adopted will be what was actually entered.


The work done (for free!) by analysts during the competition will not be 
wasted on a changed specification.




[1] it should have been for a _single_ hash function, not two or three 
functions with different parameters. I know the two-security-level model 
is popular with the NSA and the like, probably for historical "export" 
reasons, but it really doesn't make any sense for the consumer.


It is possible to make cryptography which we think is resistant to all 
possible/likely attacks. That is what the consumer wants and needs: one 
cryptography which he can trust in, resistant against both his baby 
sister and the NSA.


We can do that. In most cases that sort of cryptography doesn't take 
even measurable resources.



The sole and minimal benefit of having two functions (from a single 
family) - cheaper computation for low-power devices; there are no other 
real benefits - is lost in the roar of the costs.


There is a case for having two or more systems - monocultures are 
brittle against failures, and as the Irish Potato Famine showed, a 
single failure can be catastrophic - but two systems in the same family 
do not give the best protection against that.


The disadvantages of having two or more hash functions? For a start, 
people don't know what they are getting. They don't know how secure it 
will be - are you going to tell users whether they are using HASH_lite 
rather than HASH_strong every time? And expect them to understand that?


Second, most devices have to have different software for each function - 
and they have to be able to accept data and operations for more than one 
function as well, which opens up potential security holes.


I could go on, but I hope you get the point already.

-- Peter Fairbrother


[Cryptography] AES-256- More NIST-y? paranoia

2013-10-01 Thread Peter Fairbrother
AES, the latest-and-greatest block cipher, comes in two main forms - 
AES-128 and AES-256.


AES-256 is supposed to have a brute-force work factor of 2^256 - but we 
find that in fact it actually has a very similar work factor to that of 
AES-128, due to bad subkey scheduling.


Thing is, that bad subkey scheduling was introduced by NIST ... after 
Rijndael, which won the open block cipher competition with what seems to 
be all-the-way good scheduling, was transformed into AES by NIST.



So, why did NIST change the subkey scheduling?

I don't know.

Inquiring minds ...



NIST have previously changed cipher specs under NSA guidance, most 
famously for DES, with apparently good intentions then - but with NSA 
and its two-faced mission, we always have to look at capabilities, not 
intentions.



-- Peter Fairbrother


[and why doesn't AES-256 have 256-bit blocks???]



Re: [Cryptography] TLS2

2013-10-01 Thread Peter Fairbrother

On 01/10/13 08:54, ianG wrote:

On 1/10/13 02:01 AM, Tony Arcieri wrote:

On Mon, Sep 30, 2013 at 1:02 AM, Adam Back <a...@cypherspace.org> wrote:

If we're going to do that I vote no ASN.1, and no X.509.  Just BNF
format
like the base SSL protocol; encrypt and then MAC only, no
non-forward secret
ciphersuites, no baked in key length limits.  I think I'd also vote
for a
lot less modes and ciphers.  And probably non-NIST curves while
we're at it.


Sounds like you want CurveCP?

http://curvecp.org/




Yes, EXACTLY that.  Proposals like CurveCP.



I have said this first part before:

Dan Boneh was talking at this year's RSA cryptographers track about 
putting some sort of quantum-computer-resistant PK into browsers - maybe 
something like that should go into TLS2 as well?



We need to get the browser makers - Apple, Google, Microsoft, Mozilla - 
and the webservers - Apache, Microsoft, nginx - together and get them to 
agree "we must all implement this" before writing the RFC.


Also, the banks and the CAs should have an input. But not a say.



More rules:

IP-free, open source code,

no libraries (*all* functions internal to each suite)

a compiler which gives repeatable binary hashes so you can verify binary 
against source.



Note to Microsoft - open source does not always mean free. But in this 
case it must be free.




Maximum of four crypto suites.

Each suite has fixed algorithms, protocols, key and group sizes etc. 
Give them girls' names, not silly and incomplete crypto names - "This 
connection is protected by Alice".



Ability to add new suites as a secure browser upgrade from the browser 
supplier (new suites must be signed by the working group?). Signed new 
suites must then be available immediately on all platforms, both browser 
and webserver.




Separate authentication and sessionkeysetup keys mandatory.

Maybe use existing X.509? but always for authentication only, never 
sessionkeysetup.





No client authentication. None. Zero.

That's too hard for an individual to manage - remembering passwords or 
whatever, yes, global authentication, no. That does not belong in TLS.


I specifically include this because the banks want it, now, in order to 
shift liability to their customers.


And as to passwords being near end-of-life? Rubbish. Keep the password 
database secure, give the user a username and only three password 
attempts, and all their GPUs and ASIC farms are worth nothing.
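A minimal sketch of that throttling argument (names and storage are illustrative, in-memory only; a real server would persist state and store salted password hashes, not passwords):

```python
# Sketch of the server-side throttling argument above: with a lockout
# after three failed attempts, an offline GPU/ASIC farm is irrelevant,
# because the attacker only ever gets three online guesses per account.
# Hypothetical minimal in-memory store, not a production design.

MAX_ATTEMPTS = 3

class ThrottledLogin:
    def __init__(self):
        self.passwords = {}   # username -> password (a real server stores hashes)
        self.failures = {}    # username -> consecutive failed attempts

    def register(self, username, password):
        self.passwords[username] = password
        self.failures[username] = 0

    def login(self, username, password):
        if self.failures.get(username, 0) >= MAX_ATTEMPTS:
            return "locked"            # requires out-of-band reset
        if self.passwords.get(username) == password:
            self.failures[username] = 0
            return "ok"
        self.failures[username] = self.failures.get(username, 0) + 1
        return "failed"

server = ThrottledLogin()
server.register("alice", "correct horse")
print(server.login("alice", "guess1"))          # failed
print(server.login("alice", "guess2"))          # failed
print(server.login("alice", "guess3"))          # failed
print(server.login("alice", "correct horse"))   # locked - even the right password
```

The point being: with only three online guesses allowed, offline cracking hardware never comes into play unless the password database itself leaks - which is why the database must be kept secure.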





-- Peter Fairbrother


Re: [Cryptography] RSA equivalent key length/strength

2013-10-01 Thread Peter Fairbrother

On 01/10/13 08:49, Kristian Gjøsteen wrote:

On 1 Oct 2013, at 02:00, "James A. Donald" wrote:


On 2013-10-01 08:24, John Kelsey wrote:

Maybe you should check your code first?  A couple of NIST people verified that the 
curves were generated by the described process when the questions about the 
curves first came out.


And a non NIST person verified that the curves were not generated by the 
described process after the scandal broke.


Checking the verification code may be a good idea.

I just checked that the verification process described in Appendix 5 in the 
document RECOMMENDED ELLIPTIC CURVES FOR FEDERAL GOVERNMENT USE, July 1999 
(http://csrc.nist.gov/groups/ST/toolkit/documents/dss/NISTReCur.pdf) accepts 
the NIST prime field curves listed in that document. Trivial python script 
follows.

I am certainly not the first non-US non-government person to check.

There is solid evidence that the US government does bad things. This isn't it.


Agreed (though did you also check whether the supposed verification 
process actually matches the supposed generation process?).


Also agreed, NSA could not have reverse-engineered the parts of the 
generating process from the "random" source to the curve's b component, 
i.e. they could not have started with a chosen b component and then 
generated the "random" source.




However they could easily have cherry-picked a result for b from trying 
several squillion source numbers. There is no real reason not to use 
something like the digits of pi as the source - which they did not do.


Also, the method by which the generators (and thus the actual groups in 
use, not the curves) were chosen is unclear.



Even assuming NSA tried their hardest to undermine the curve selection 
process, there is some doubt as to whether these two actual and easily 
verifiable failings in a supposedly "open" generation process are enough 
to make the final groups selected useful for NSA's nefarious purposes.


But there is a definite lack of clarity there.


-- Peter Fairbrother


Re: [Cryptography] RSA equivalent key length/strength

2013-09-30 Thread Peter Fairbrother

On 26/09/13 07:52, ianG wrote:

On 26/09/13 02:24 AM, Peter Fairbrother wrote:

On 25/09/13 17:17, ianG wrote:

On 24/09/13 19:23 PM, Kelly John Rose wrote:


I have always approached that no encryption is better than bad
encryption, otherwise the end user will feel more secure than they
should and is more likely to share information or data they should not
be on that line.



The trap of a false sense of security is far outweighed by the benefit
of a "good enough" security delivered to more people.


Given that mostly security works (or it should), what's really important 
is where that security fails - and "good enough" security can drive out 
excellent security.


We can easily have excellent security in TLS (mk 2?) - the crypto part 
of TLS can be unbreakable, code to follow (hah!) - but 1024-bit DHE 
isn't, say, unbreakable for 10 years, far less for a lifetime.



We are only talking about security against an NSA-level opponent here. 
Is that significant?


Eg, Tor isn't robust against NSA-level opponents. Is OTR?


We're talking multiple orders of magnitude here.  The math that counts
is:

Security = Users * Protection.


No. No. No. Please, no? No. Nonononononono.

It's Sum over i of P_i * I_i, where P_i is the protection provided to
information i, and I_i is the importance of keeping information i
protected.



I'm sorry, I don't deal in omniscience.  Typically we as suppliers of
some security product have only the faintest idea what our users are up
to.  (Some consider this a good thing, it's a privacy quirk.)



No, and you don't know how important your opponent thinks the 
information is either, and therefore what resources he might be willing 
or able to spend to get access to it - but we can make some crypto which 
(we think) is unbreakable.


No matter who or what resources, unbreakable. You can rely on the math.

And it doesn't usually cost any more than we are willing to pay - heck, 
the price is usually lost in the noise.


Zero crypto (theory) failures.

Ok, real-world systems won't ever meet that standard - but please don't 
hobble them with failure before they start trying.



With that assumption, the various i's you list become some sort of
average


Do you mean the I_i's?

Ah, average. Which average might that be? Hmmm, independent 
distributions of two variables - are you going to average them, then 
multiply the averages?


That approximation doesn't actually work very well, mathematically 
speaking - as I'm sure you know.



This is why the security model that is provided is typically
one-size-fits-all, and the most successful products are typically the
ones with zero configuration and the best fit for the widest market.


I totally agree with zero configuration - and best fit - but you are 
missing the main point.


Would 1024-bit DHE give a reasonable expectation of say, ten years 
unbreakable by NSA?


If not, and Manning or Snowden wanted to use TLS, they would likely be 
busted.


Incidentally, would OTR pass that test?



-- Peter Fairbrother

(sorry for the sloppy late reply)

(I'm talking about TLS2, not a BCP - but the BCP is significant)
(how's the noggin? how's Waterlooville?? can I come visit sometime?)


Re: [Cryptography] forward-secrecy >=2048-bit in legacy browser/servers? (Re: RSA equivalent key length/strength)

2013-09-26 Thread Peter Fairbrother

On 25/09/13 13:25, Adam Back wrote:

On Wed, Sep 25, 2013 at 11:59:50PM +1200, Peter Gutmann wrote:

Something that can "sign a new RSA-2048 sub-certificate" is called a
CA.  For
a browser, it'll have to be a trusted CA.  What I was asking you to
explain is
how the browsers are going to deal with over half a billion (source:
Netcraft
web server survey) new CAs in the ecosystem when "websites sign a new
RSA-2048
sub-certificate".


This is all ugly stuff, and probably < 3072 bit RSA/DH keys should be
deprecated in any new standard, but for the legacy work-around scenario to
try to improve things while that is happening:

Is there a possibility with the RSA-RSA ciphersuite to have a certified RSA
signing key, but that key is used to sign an RSA key negotiation?

At least that was how the export ciphersuites worked (1024+ bit RSA auth,
512-bit export-grade key negotiation).  And that could even be weakly
forward secret in that the 512-bit RSA key could be per session.  I imagine
that ciphersuite is widely disabled at this point.

But wasn't there also a step-up certificate that allowed stronger keys if
the right certificate bits were set (for approved export use like banking)?
Would setting that bit in all certificates allow some legacy
servers/browsers to get forward secrecy via large, temporary,
key-negotiation-only RSA keys?
(You have to wonder if the 1024-bit max DH standard and code limits was a
bit of earlier sabotage in itself.)


A couple of points: all the big CAs will give you a new certificate with 
a new key for free (but revocation is your baby) - and while it isn't 
something they do now, couldn't they issue, say, two years' worth of 
one-day certs for perhaps a little more than the price of a two-year cert?




In the UK we have a law called RIPA, part of which allows Plod to demand 
keys. They can demand keys used for encryption and for key setup - but 
they can't demand keys used only for authentication. I don't think they 
routinely demand keys from TLS/SSL webservers.


The point is that in an ordinary TLS session the RSA key is used for 
both secrecy and authentication - in any future TLS these functions 
should be split.




Also, Dan Boneh was talking at this year's RSA cryptographers track about 
putting some sort of quantum-computer-resistant PK into browsers - maybe 
something like that should go into TLS2 as well?


You need to get the browser makers - Apple, Google, Microsoft, Mozilla - 
and the webservers - Apache, Microsoft, nginx - together and get them to 
agree "we must all implement this" before writing the RFC.



-- Peter Fairbrother





Re: [Cryptography] RSA equivalent key length/strength

2013-09-26 Thread Peter Fairbrother

On 25/09/13 17:17, ianG wrote:

On 24/09/13 19:23 PM, Kelly John Rose wrote:


I have always approached that no encryption is better than bad
encryption, otherwise the end user will feel more secure than they
should and is more likely to share information or data they should not
be on that line.



The trap of a false sense of security is far outweighed by the benefit
of a "good enough" security delivered to more people.

We're talking multiple orders of magnitude here.  The math that counts is:

Security = Users * Protection.


No. No. No. Please, no? No. Nonononononono.

It's Sum over i of P_i * I_i, where P_i is the protection provided to 
information i, and I_i is the importance of keeping information i 
protected.


Actually it's more complex than that, as the importance isn't a linear 
variable, and information isn't either - but there's a start.


Increasing the number of i's by increasing users may have little effect 
on the overall security, if protecting the information they transmit 
isn't particularly valuable.
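A toy calculation (with numbers invented purely for illustration) of why the two metrics diverge:

```python
# Toy illustration (made-up numbers) of the point above: the product
# metric Users * Protection rewards adding many low-value users, while
# the sum of P_i * I_i barely moves if their traffic doesn't matter.

def product_metric(users, protection):
    return users * protection

def sum_metric(items):
    # items: list of (protection P_i, importance I_i) pairs
    return sum(p * i for p, i in items)

# Scenario A: 3 banking users, well protected, high-importance traffic.
a = [(0.9, 100)] * 3
# Scenario B: the same 3 users plus a million gas-bill readers.
b = a + [(0.9, 0.001)] * 1_000_000

print(product_metric(3, 0.9), product_metric(1_000_003, 0.9))
print(sum_metric(a), sum_metric(b))
# The product metric grows by a factor of ~330,000; the sum metric
# grows only ~4x, because the added information was nearly worthless
# to protect.
```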



And saying that something is secure - which is what people who are not 
cryptographers think you are doing when you recommend that something - 
tends to increase I_i, the importance of the information to be protected.


And if the new system isn't secure against expensive attacks, then 
overall security may be lessened by its introduction - even if Users 
are increased.





I have about 30 internet passwords, only three of which are in any way 
important to me - those are the banking ones. I use a simple password 
for all the rest, because I don't much care if they are compromised.


But I use the same TLS for all these sites.

Now if that TLS is broken as far as likely attacks against the banks go, 
I care. I don't much care if it's secure against attacks against the 
other sites like my electricity and gas bills.


I might use TLS a lot more for non-banking sites, but I don't really 
require it to be secure for those. I do require it to be secure for banking.



And I'm sure that some people would like TLS to be secure against the 
NSA for, oh, let's say 10 years. Which 1024-bit DHE will not provide.






If you really want to recommend 1024-bit DHE, then call a spade a spade 
- for a start, it's EKS, ephemeral key setup. It doesn't offer much in 
the way of forward secrecy, and it offers nothing at all in the way of 
perfect forward secrecy.


It's a political stunt to perhaps make trawling attacks by NSA more 
expensive (in cases where the website has given NSA the master keys [*]) 
- but it may make targeted attacks by NSA cheaper and easier.


And in ten years NSA *will* be able to read all your 1024-bit DHE 
traffic, which it is storing right now against the day.




[*] does anyone else think it odd that the benefit of introducing 
1024-bit DHE, as opposed to 2048-bit RSA, is only active when the 
webserver has given or will give NSA the keys? Just why is this being 
considered for recommendation?


Yes, stunt.

-- Peter Fairbrother




iang







Re: [Cryptography] RSA equivalent key length/strength

2013-09-24 Thread Peter Fairbrother

On 23/09/13 09:47, Peter Gutmann wrote:

Patrick Pelletier  writes:


I'm inclined to agree with you, but you might be interested/horrified in the
"1024 bits is enough for anyone" debate currently unfolding on the TLS list:


That's rather misrepresenting the situation.  It's a debate between two
groups, the security practitioners, "we'd like a PFS solution as soon as we
can, and given currently-deployed infrastructure DH-1024 seems to be the best
bet", and the theoreticians, "only a theoretically perfect solution is
acceptable, even if it takes us forever to get it".

(You can guess from that which side I'm on).


Lessee - a "forward secrecy solution" which either doesn't work now or 
won't work soon - so that it probably won't protect traffic made now for 
its useful lifetime - versus - well, who said anything about 
theoretically perfect?


To hell with perfect. I won't even use the word when describing forward 
secrecy (unless it's an OTP).


If you just want a down-and-dirty 2048-bit FS solution which will work 
today, why not just have the websites sign a new RSA-2048 
sub-certificate every day? Or every few hours? And delete the secret 
key, of course.


Forward secrecy doesn't have to be per-session.


Though frankly, I don't think ubiquitous 1024-bit FS without deployment 
of some software/RFC/standard is possible, and if so that deployment 
should also include a 2048-bit solution as well. And maybe 3072-bit and 
4096-bit solutions too.


And please please please don't call them all the same thing - because 
they aren't.




But, the immediate question before the court of TLS now is - "do we 
recommend a 1024-bit FS solution?"


And I for one cannot say that you should. In fact I would be horrified 
if you did.



-- Peter Fairbrother


Re: [Cryptography] RSA equivalent key length/strength

2013-09-14 Thread Peter Fairbrother

On 14/09/13 17:14, Perry E. Metzger wrote:

On Sat, 14 Sep 2013 16:53:38 +0100, Peter Fairbrother wrote:

NIST also give the "traditional" recommendations, 80 -> 1024 and 112
-> 2048, plus 128 -> 3072, 192 -> 7680, 256 -> 15360.

[...]

But, I wonder, where do these longer equivalent figures come from?

I don't know, I'm just asking - and I chose Wikipedia because that's
the general "wisdom".

[...]

[ Personally, I recommend 1,536 bit RSA keys and DH primes for
security to 2030, 2,048 if 1,536 is unavailable, 4,096 bits if
paranoid/high value; and not using RSA at all for longer term
security. I don't know whether someone will build that sort of
quantum computer one day, but they might. ]


On what basis do you select your numbers? Have you done
calculations on the time it takes to factor numbers using modern
algorithms to produce them?


Yes, some - but I don't believe that's enough. Historically, it would 
not have been (and wasn't) - it doesn't take account of algorithm 
development.


I actually based the 1,536-bit figure on the old RSA factoring 
challenges, and how long it took to break them.


We are publicly at 768 bits now, and that's very expensive 
(http://eprint.iacr.org/2010/006.pdf) - and, over the last twenty years, 
the rate of public advance has been about 256 bits per decade.


So at that rate 1,536 bits would become possible but very expensive in 
2043, and would still be impossible in 2030.
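That arithmetic, as a back-of-envelope script (assuming 768 bits is public-and-expensive now, in 2013, and a constant 256 bits per decade - a crude linear extrapolation, nothing more):

```python
# Crude linear extrapolation of public factoring records:
# 768-bit RSA is (expensively) factorable now (2013), and the public
# state of the art has advanced roughly 256 bits per decade.

RECORD_BITS, THIS_YEAR = 768, 2013
BITS_PER_DECADE = 256

def year_reachable(modulus_bits):
    """Year an RSA modulus of this size becomes expensively factorable,
    if the historical public rate of progress simply continues."""
    return THIS_YEAR + 10 * (modulus_bits - RECORD_BITS) / BITS_PER_DECADE

for bits in (1024, 1536, 2048):
    print(bits, "->", round(year_reachable(bits)))
# 1536 bits lands around 2043 at this rate - i.e. still out of public
# reach in 2030, quantum computers and algorithmic surprises aside.
```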



If 1,024 is possible but very expensive for NSA now, and 256 bits per 
decade is right, then 1,536 may just be on the verge of edging into 
possibility in 2030 - but I think progress is going to slow (unless they 
develop quantum computers).


We have already found many of the "easy-to-find" advances in theory.



-- Peter Fairbrother


[Cryptography] RSA equivalent key length/strength

2013-09-14 Thread Peter Fairbrother
Recommendations are given herein as: symmetric_key_length -> 
recommended_equivalent_RSA_key_length, in bits.


Looking at Wikipedia,  I see:

"As of 2003 RSA Security claims that 1024-bit RSA keys are equivalent in 
strength to 80-bit symmetric keys, 2048-bit RSA keys to 112-bit 
symmetric keys and 3072-bit RSA keys to 128-bit symmetric keys. RSA 
claims that 1024-bit keys are likely to become crackable some time 
between 2006 and 2010 and that 2048-bit keys are sufficient until 2030. 
An RSA key length of 3072 bits should be used if security is required 
beyond 2030.[6]"


http://www.emc.com/emc-plus/rsa-labs/standards-initiatives/key-size.htm

That page doesn't give any actual recommendations or long-term dates 
from RSA now. It gives the "traditional recommendations" 80 -> 1024 and 
112 -> 2048, and a 2000 Lenstra/Verheul minimum commercial 
recommendation for 2010 of 78 -> 1369.



"NIST key management guidelines further suggest that 15360-bit RSA keys 
are equivalent in strength to 256-bit symmetric keys.[7]"


http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57_part1_rev3_general.pdf

NIST also give the "traditional" recommendations, 80 -> 1024 and 112 -> 
2048, plus 128 -> 3072, 192 -> 7680, 256 -> 15360.
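For what it's worth, the longer equivalences are usually derived from the asymptotic cost of the general number field sieve. A rough sketch of that derivation (my own, ignoring the constant factors - which is why it overshoots the NIST table by ten bits or so):

```python
import math

# The general number field sieve factors an n-bit modulus N in roughly
#   L(N) = exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3))
# operations. Taking log2 of that gives an approximate symmetric-key
# equivalent. Asymptotic only, constants ignored - hence the spread
# around the NIST table above.

def gnfs_symmetric_equivalent(modulus_bits):
    ln_n = modulus_bits * math.log(2)
    exponent = ((64 / 9) ** (1 / 3)
                * ln_n ** (1 / 3)
                * math.log(ln_n) ** (2 / 3))
    return exponent / math.log(2)   # convert natural-log exponent to bits

for bits in (1024, 2048, 3072, 7680, 15360):
    print(bits, "->", round(gnfs_symmetric_equivalent(bits)))
# Comes out within ~10-15 bits of NIST's 80/112/128/192/256 figures.
```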




I get that 1024 bits is about on the edge, about equivalent to 80 bits 
or a little less, and may be crackable either now or sometime soon.


But, I wonder, where do these longer equivalent figures come from?

I don't know, I'm just asking - and I chose Wikipedia because that's the 
general "wisdom".


Is this an area where NSA have "shaped the worldwide cryptography 
marketplace to make it more tractable to advanced cryptanalytic 
capabilities being developed by NSA/CSS", by perhaps greatly 
exaggerating the equivalent lengths?


And by emphasising the difficulty of using longer keys?

As I said, I do not know. I merely raise the possibility.


[ Personally, I recommend 1,536 bit RSA keys and DH primes for security 
to 2030, 2,048 if 1,536 is unavailable, 4,096 bits if paranoid/high 
value; and not using RSA at all for longer term security. I don't know 
whether someone will build that sort of quantum computer one day, but 
they might. ]



-- Peter Fairbrother


Re: [Cryptography] Squaring Zooko's triangle

2013-09-11 Thread Peter Fairbrother

On 11/09/13 12:23, Paul Crowley wrote:

 From the title it sounds like you're talking about my 2007 proposal:

http://www.lshift.net/blog/2007/11/10/squaring-zookos-triangle
http://www.lshift.net/blog/2007/11/21/squaring-zookos-triangle-part-two

This uses key stretching to increase the work of generating a colliding
identifier from 2^64 to 2^88 steps.



That part is similar, though I go from 80 bits (actually 79.3 bits) to 
100 bits; and a GPG key fingerprint is similar too, though my mashes 
are shorter than either, in order to make them easy to input.


There is another difference: mashes are easy to write and input without 
error - the mash alphabet only has 31 characters, A-Z plus 0-9, but with 
0=O, 1=I=J=L, 2=Z and 5=S. If one of those is misread as another in its 
subset it doesn't matter when the mash is input. Capitalisation is also 
irrelevant.
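A minimal sketch of that confusable-character folding (function and table names are mine, not from the actual design):

```python
# Before looking up a mash, fold case and map the stated look-alikes
# (0=O, 1=I=J=L, 2=Z, 5=S) onto one canonical character each, so a
# misread "O" for "0" etc. still resolves to the same mash.
# Names are illustrative, not from the original scheme.

CONFUSABLE = str.maketrans({"O": "0", "I": "1", "J": "1", "L": "1",
                            "Z": "2", "S": "5"})

def canonicalize_mash(text):
    return text.upper().replace("-", "").translate(CONFUSABLE)

# All of these user inputs resolve to the same canonical mash:
print(canonicalize_mash("m-NN4H-JS7Y-OTRH"))
print(canonicalize_mash("M-NN4H-J57Y-0TRH"))
print(canonicalize_mash("m-nn4h-js7y-otrh"))
```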





However the main, big, huge difference is that a mash isn't just a hash 
of a public key - in fact as far as Alice, who doesn't understand public 
keys, is concerned:


It's just a secure VOIP number.

Maybe she needs an app to use the number on her iphone or googlephone. 
And another app to use it on her laptop or desktop - but the mash is 
your secure VOIP number.


Or it's a secure email address.

Or it's both.

Alice need not ever see the "real" VOIP IP address, or the real email 
address - and unless she's a cryptographer and hacker she simply won't 
be able to contact you without using strong authenticated end-to-end 
encryption - if the only address she has for you is your mash.





Contrast this with your proposal, or a PGP finger print. In order to use 
one of these, Alice has to have an email address or telephone number to 
begin with. She also has to find the key and compare it with the hash, 
in order to use it securely - but she can use the email address or 
telephone number without ever thinking about downloading or checking the 
public key.


That's just not possible if all you give out is mashes.



It's looking at the mash as an address, not as a public key or an 
adjunct to a public key service - which is why I think it's kind-of 
turning Zooko's Triangle on its head (I had never heard of ZT before :( 
- but I know Zooko though, hi Zooko!).


Or maybe not, looking at the web I see ZT in several slightly different 
forms.


But it probably is turning the OP's problem - the napkin scribble - on 
its head. You don't write your email and fingerprint on the napkin - 
just the mash.




-- Peter Fairbrother



Re: [Cryptography] Thoughts about keys

2013-09-10 Thread Peter Fairbrother

On 10/09/13 10:00, Guido Witmond wrote:

Hi Peter,

We really have different designs. I'll comment inline.

On 09/09/13 19:12, Peter Fairbrother wrote:

On 09/09/13 13:08, Guido Witmond wrote:



I like to look at it the other way round, retrieving the correct
name for a key.

You don't give someone your name,



sorry, that should read "You don't give someone your address or 
telephone number". mea culpa. You can give them your name.



you give them an 80-bit key
fingerprint. It looks something like m-NN4H-JS7Y-OTRH-GIRN. The m-
is common to all, it just says this is one of that sort of hash.

There is only one to remember, your own.


If I read it correctly, each participant has one *single identity*?



Yes - except of course you can have as many identities as you want. You 
create them yourself after all.


The only assurance given by the scheme is that if a person gave you a 
hash which he generated himself, and you match it with a string and that 
string matches what you know about the person (eg their name or photo), 
then no-one else can have MTM'd it.


(maybe the server returns two or three matches, as after a while there 
will be random birthday collisions. That's why you should check the 
string matches what you know about the person. But an attacker can't 
find a hash which matches a particular pre-chosen person by trying, it 
would take 2^100 work)
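The birthday arithmetic behind that remark, roughly:

```python
# Rough birthday-bound check of the remark above: with an 80-bit hash,
# random collisions become likely once the server holds around
# sqrt(2^80) = 2^40 entries; with far fewer users they are rare.

def expected_collisions(entries, hash_bits):
    # Expected number of colliding pairs: C(n, 2) / 2^bits
    return entries * (entries - 1) / 2 / 2 ** hash_bits

print(expected_collisions(2 ** 20, 80))   # ~a million users: negligible
print(expected_collisions(2 ** 40, 80))   # ~a trillion users: ~0.5, collisions start
```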


You can have one for business, one for pretty girls, one for ugly girls 
- you just have to remember them all (except maybe the one for ugly 
girls). Or you can write them down. Or put them on your business card.





The point is that for practical purposes the hash *is* your telephone 
number, and/or your email, and/or your facebook page - we just need to 
get everyone else to install the software to do the lookup, checking, 
translation etc automagically and behind the scenes in their telephones, 
browsers, email clients etc.


(this was originally designed only for use in a single semi-secure comms 
program suite - but I don't see why it couldn't be more widely used)




[...]

As you and I have never met, I can't validate your photo, nor half
your claimed penis size. ;-)

How do I know it's not a Man in the Middle using your picture?


See above. It would take on average 2^79 operations each of which would 
require 2^20 work to find a matching hash, starting with a picture. Or 
even just starting with a name, or whatever.
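A sketch of how the stretching multiplies the attacker's work (parameters illustrative, not the actual scheme's):

```python
import hashlib

# Sketch of the work-factor argument above: if computing a fingerprint
# requires 2^20 chained hash iterations, then each preimage trial costs
# 2^20 hashes, so a 2^79-trial search costs ~2^99 hash operations total.
# STRETCH and the truncation length are illustrative parameters.

STRETCH = 2 ** 20          # iterations per fingerprint computation
FPRINT_BITS = 80           # truncated fingerprint length

def stretched_fingerprint(public_key_bytes):
    h = hashlib.sha256(public_key_bytes).digest()
    for _ in range(STRETCH):
        h = hashlib.sha256(h).digest()
    return h[:FPRINT_BITS // 8]      # 80-bit (10-byte) fingerprint

fp = stretched_fingerprint(b"example public key")
print(fp.hex(), len(fp) * 8, "bits")
print("total search work ~= 2^%d hashes" % (FPRINT_BITS - 1 + 20))
```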



-- Peter Fairbrother


Re: [Cryptography] Squaring Zooko's triangle

2013-09-10 Thread Peter Fairbrother

On 10/09/13 05:38, James A. Donald wrote:

On 2013-09-10 3:12 AM, Peter Fairbrother wrote:

I like to look at it the other way round, retrieving the correct name
for a key.

You don't give someone your name, you give them an 80-bit key
fingerprint. It looks something like m-NN4H-JS7Y-OTRH-GIRN. The m- is
common to all, it just says this is one of that sort of hash.


1.  And they run away screaming.


Sorry, I misspoke: you can of course give them your name, just not your 
telephone number or email address. You give them the hash instead of those.



2.  It only takes 2^50 trials to come up with a valid fingerprint that
agrees with your fingerprint except at four non chosen places.



And that will help an attacker how?

To use a hash to contact you Bob has to ask the semi-trusted server to 
find the hash and then return your matching input string - if he gets it 
wrong even in one place the server will return a different hash, or no 
hash at all.


Bob can't use a hash which doesn't match exactly.

Sound too restrictive? But Bob can't use a telephone number or email 
address which is wrong in one place, never mind four, either.




I was even thinking of using a 60-bit hash fingerprint (with a whole lot 
of extra work added, to make finding a matching tailored preimage about 
2^100 or so total work), so a hash would look like s-NN4H-JS7Y-OTRH but 
I haven't convinced myself that that would work yet.


Mind you, I haven't ruled it out either. There is a flood attack, but it 
can be defeated by people paying a dollar to the server when they input 
a hash.



-- Peter Fairbrother



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] What TLS ciphersuites are still OK?

2013-09-10 Thread Peter Fairbrother

On 10/09/13 14:03, Ben Laurie wrote:

On 10 September 2013 03:59, james hughes <hugh...@mac.com> wrote:

[...]

TLS_DHE_RSA_WITH_AES_128_GCM_SHA256

I retract my previous "+1" for this ciphersuite. This is hard coded
1024 DHE and 1024bit RSA.


It is not hard coded to 1024 bit RSA. I have seen claims that some
platforms hard code DHE to 1024 bits, but I have not investigated these
claims. If true, something should probably be done.



Yes - hard code them all to 1024-bit. Then dump 
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 in the bin where it belongs.



Then replace it with a suite such as 
TLS_DHE2048_WITH_RSA2048_WITH_AES_128_GCM_SHA256.


Would a non-cryptographer know what 
TLS_DHE2048_WITH_RSA2048_WITH_AES_128_GCM_SHA256 meant? No. So for 
heaven's sake call it Ben's_suite or something, with a nice logo or 
icon, not TLS_DHE2048_WITH_RSA2048_WITH_AES_128_GCM_SHA256.



They won't know what Ben's_suite means either, but they may trust you 
(or perhaps not, if you are still working for Google ...)





The problem with TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 is that you don't 
know what you are getting.



[ The other problem is of course that the main browsers don't make it 
easy to find out which suite is actually in use ... :( ]



Hmmm, can a certificate have several keylengths to choose from? And, if 
the suite allows it, can a certificate have an RSA key for 
authentication and a different RSA key for session key setup (cf RIPA)?


-- Peter Fairbrother

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] [cryptography] SSH uses secp256/384r1 which has the same parameters as what's in SEC2 which are the same the parameters as specified in SP800-90 for Dual EC DRBG!

2013-09-09 Thread Peter Fairbrother

On 09/09/13 23:03, Perry E. Metzger wrote:


On Mon, 9 Sep 2013, Daniel wrote:
[...] They are widely used curves and thus a good way to reduce
conspiracy theories that they were chosen in some malicious way to
subvert DRBG.



Er, don't we currently have documents from the New York Times and the
Guardian that say that in fact they *did* subvert them?

Yes, a week ago this was paranoia, but now we have confirmation, so
it is no longer paranoia.


I did not see that, and as far as I can tell there is no actual 
confirmation.



Also, the known possible subversion of DRBG did not involve curve 
selection, but selection of a point to be used in DRBG. I think Kristian 
G has posted about that.





As to elliptic curves, there are only two of significance, in terms of 
being widely used:  they are NIST P-256 and NIST P-384.


NIST P-224 is also occasionally used.

These are the same curves as the secp256/384r1 curves, and the same 
curves as almost any other 256-bit or 384-bit curves you might want to 
mention - eg the FIPS 186-3 curves, and so on.


These are all the same curves.

They all began in 1999 as the curves in the (NIST) RECOMMENDED ELLIPTIC 
CURVES FOR FEDERAL GOVERNMENT USE


csrc.nist.gov/groups/ST/toolkit/documents/dss/NISTReCur.pdf


The way they were selected is supposed to be pseudo-random based on 
SHA-1, though it's actually not quite like that (or not even close).


Full details, or at least all of the publicly available details about 
the curve selection process, are in the link, but as I wrote earlier:



"Take FIPS P-256 as an example. The only seed which has been published 
is s=  c49d3608 86e70493 6a6678e1 139d26b7 819f7e90 (the string they 
hashed and mashed in the process of deriving c).


I don't think they could reverse the perhaps rather overly-complicated 
hashing/mashing process, but they could certainly cherry-pick the s 
until they found one which gave a c which they could use.


c not being one of the usual parameters for an elliptic curve, I should 
explain that it was then used as c = a^3/b^2 mod p.


However the choice of p, r, a and G was not seeded, and the methods by 
which those were chosen are opaque.


I don't really know enough about ECC to say whether a perhaps 
cherry-picked c = a^3/b^2 mod p is enough to ensure that the resulting 
curve is secure against chosen curve attacks - but it does seem to me 
that there is a whole lot of wiggle room between a cherry-picked c and 
the final curve."



-- Peter Fairbrother
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] A Likely Story!

2013-09-09 Thread Peter Fairbrother

On 09/09/13 12:53, Alexander Klimov wrote:

On Sun, 8 Sep 2013, Peter Fairbrother wrote:


You can use any one of trillions of different elliptic curves,which should be
chosen partly at random and partly so they are the right size and so on; but
you can also start with some randomly-chosen numbers then work out a curve
from those numbers. and you can use those random numbers to break the session
key setup.


Can you elaborate on how knowing the seed for curve generation can be
used to break the encryption? (BTW, the seeds for randomly generated
curves are actually published.)




Move along please, there is nothing to see here.

This is just a wild and disturbing story. It may upset you to read it, 
so please stop reading now.


You may have read a bit about the story in the papers or on the internet or 
elsewhere, but it isn't actually true. Government Agencies do not try to 
break the internet's encryption, as used by Banks and Doctors and 
Commerce and Government Departments and even Government Agencies 
themselves - that wouldn't be sensible.


Besides which, there is no such agency as the NSA.


But ..

Take FIPS P-256 as an example. The only seed which has been published is 
s=  c49d3608 86e70493 6a6678e1 139d26b7 819f7e90 (the string they hashed 
and mashed in the process of deriving c).


I don't think they could reverse the perhaps rather overly-complicated 
hashing/mashing process, but they could certainly cherry-pick the s 
until they found one which gave a c which they could use.


c not being one of the usual parameters for an elliptic curve, I should 
explain that it was then used as c = a^3/b^2 mod p.


However the choice of p, r, a and G was not seeded, and the methods by 
which those were chosen are opaque.



I don't really know enough about ECC to say whether a perhaps 
cherry-picked c = a^3/b^2 mod p is enough to ensure that the resulting 
curve is 
secure against chosen curve attacks - but it does seem to me that there 
is a whole lot of legroom between a cherry-picked c and the final curve.




And as I said, it's only a story. We don't know much about what the NSA 
knows about chosen curve attacks, although we do know that they are 
possible. Don't go believing it, it will just upset you.


They wouldn't do that.


-- Peter Fairbrother

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Thoughts about keys

2013-09-09 Thread Peter Fairbrother

On 09/09/13 13:08, Guido Witmond wrote:

Hi Perry,

I just came across your message [0] on retrieving the correct key for a
name. I believe that's called Squaring Zooko's Triangle.

I've come up with my ideas and protocol to address this need.
I call it eccentric-authentication. [1,2]

With Regards, Guido.



0: http://www.metzdowd.com/pipermail/cryptography/2013-August/016870.html

1:
http://eccentric-authentication.org/blog/2013/08/31/the-holy-grail-of-cryptography.html

2:
http://eccentric-authentication.org/eccentric-authentication/global_unique_secure.html


I like to look at it the other way round, retrieving the correct name 
for a key.


You don't give someone your name, you give them an 80-bit key 
fingerprint. It looks something like m-NN4H-JS7Y-OTRH-GIRN. The m- is 
common to all, it just says this is one of that sort of hash.


There is only one to remember, your own.

That somebody uses the fingerprint in a semi-trusted (eg trusted not to 
give your email to spammers, but not trusted as far as giving the 
correct key goes) reverse lookup table, which is published and shared, 
and for which you write the entry and calculate the fingerprint by a 
long process to make say 20 bits more work.


Your entry would have your name, key, address, company, email address, 
twitter tag, facebook page, telephone number, photo, religious 
affiliation, claimed penis size, today's signed ephemeral DH or ECDHE 
keypart, and so on - whatever you want to put in it.


He then checks that you are someone he thinks you are, eg from the 
photo, checks the fingerprint, and if he wants to contact you he has 
already got your public key.


He cannot contact you without also getting your public key first - 
because you haven't given him your email address, just the hash.



[ That's what's planned for m-o-o-t (a CD-based live OS plus for 
secure-ish comms) anyway. As well, in m-o-o-t you can't contact anyone 
without checking the fingerprint, and you can't contact him in 
unencrypted form at all. Also the lookup uses a PIR system to avoid 
traffic analysis by lookup. It isn't available just now, so don't ask. ]



-- Peter Fairbrother
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] A Likely Story!

2013-09-08 Thread Peter Fairbrother
ons.



Or maybe they didn't. It's just a story, after all. The cryptography, 
while incomplete, is correct, and it may all seem plausible - but of 
course it isn't true.




-- Peter Fairbrother
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] In the face of "cooperative" end-points, PFS doesn't help

2013-09-07 Thread Peter Fairbrother

On 07/09/13 02:49, Marcus D. Leech wrote:

It seems to me that while PFS is an excellent back-stop against NSA
having/deriving a website RSA key, it does *nothing* to prevent the kind of
   "cooperative endpoint" scenario that I've seen discussed in other
forums, prompted by the latest revelations about what NSA has been up to.


True.

But does it matter much? A cooperative endpoint can give plaintext no 
matter what encryption is used, not just session keys.


Okay, that might be a little harder to do in bulk - but perhaps not that 
much harder, depending on circumstances.



But if your fave website (gmail, your bank, etc) is disclosing the
session-key(s) to the NSA, or has deliberately-weakened session-key
negotiation in
   some way, then PFS doesn't help you.

I agree that if the scenario is "NSA has a database of RSA keys of
'popular sites'" then PFS helps tremendously.  But if the scenario goes
deeper
   into the "cooperative endpoint" territory, then waving the PFS flag
is perhaps like playing the violin on the deck of the Titantic.

Do we now strongly suspect that NSA have a flotilla of TWIRL (or
similar) machines, so that active cooperation of websites isn't strictly
necessary
   to derive their (weaker) RSA secret keys?



Maybe. Or maybe they have broken (the NIST curves for) ECDHE. Or maybe 
it's something else.


Whatever, I don't think they would be asking for $5.2 billion plus (for 
comparison, BULLRUN has an annual budget of $280 million) to spend on 
developing "advanced cryptanalytic capabilities" for which it is useful 
to "shape the worldwide cryptography marketplace to make it more 
tractable to" unless it was against some sort of key establishment 
mechanism in SSL/TLS.


I can't think of any other target which is worth that much money. Okay, 
maybe I'm ignoring the "never underestimate what the enemy is willing to 
spend" rule here, but..


Breaking a cipher like AES, 3DES or RC4 wouldn't give them nearly as 
much access to plaintext as breaking a KEM - they would have to break 
each ciphertext individually, whereas they would only need to break a 
KEM once.


And most of their interception is passive, they just listen - you 
generally need at least one plaintext/ciphertext pair to break a cipher 
and find a session key, and most often they don't have the plaintext, 
just the ciphertext.


You just need the right math (and/or maybe some input into curve 
choices) to break a PK KEM, and find *all* the session keys it is used for.



(the $5.2 billion figure is from a NSA request for additional 
congressional funding for "exciting new cryptanalytic capabilities" made 
a few years ago, and leaked by a congressman)



-- Peter Fairbrother
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] People should turn on PFS in TLS (was Re: Fwd: NYTimes.com: N.S.A. Foils Much Internet Encryption)

2013-09-06 Thread Peter Fairbrother

On 06/09/13 15:36, Perry E. Metzger wrote:

One solution, preventing passive attacks, is for major browsers
and websites to switch to using PFS ciphersuites (i.e. those
based on ephemeral Diffie-Hellmann key exchange).


It occurred to me yesterday that this seems like something all major
service providers should be doing. I'm sure that some voices will say
additional delay harms user experience. Such voices should be
ruthlessly ignored.


Any additional delay will be short - after all, if forward secrecy by 
ephemeral key setup (I hate the term PFS, there is nothing perfect about 
it) is not used then you have to use something else - usually RSA - 
instead.


For a desktop, laptop, or even a decent mobile the difference is not 
noticeable in practice if the server is fast enough.




However, while the case for forward secrecy is easy to make, 
implementing it may be a little dangerous - if NSA have broken ECDH then

using it only gives them plaintext they maybe didn't have before.




Personally, operating on the assumption that NSA have not made a crypto 
break is something I'm not prepared to do. I just don't know what that 
break is. I think it's most likely RSA/DH or ECC, but could easily be 
wrong.


I don't really care if the "break" is non-existent, irrelevant or 
disinformation - beefing up today's crypto is only hard in terms of 
getting people to choose a new updated crypto, and then getting people 
to implement it. This happens every so often anyway.



One point which has been mentioned, but perhaps not emphasised enough - 
if NSA have a secret backdoor into the main NIST ECC curves, then even 
if the fact of the backdoor was exposed - the method is pretty well 
known - without the secret constants no-one _else_ could break ECC.


So NSA could advocate the widespread use of ECC while still fulfilling 
their mission of protecting US gubbmint communications from enemies 
foreign and domestic. Just not from themselves.



Looking at timing, the FIPS 186-3 curves were introduced in July 2009 - 
the first hints that NSA had made a cryptanalytic break came in early to 
mid 2010.



I'm still leaning towards RSA, but ...


-- Peter Fairbrother
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"

2013-09-05 Thread Peter Fairbrother
BULLRUN seems to be just an overarching name for several wide programs 
to obtain plaintext of passively encrypted internet communications by 
many different methods.


While there seem to be many non-cryptographic attacks included in the 
BULLRUN program, of particular interest is the cryptographic attack 
mentioned in the Snowden papers and also hinted at in earlier US 
congressional manouverings for NSA funding.


The most obvious target of attack is some widespread implementation of 
SSL/TLS, and while it might just be an attack against a reduced 
keyspace, eg password-guessing or RNG compromise, I wonder whether NSA 
have actually made a big cryptographic break against some cipher, and if 
so, against what?


Candidate ciphers are:

3DES
RC4
AES

and key establishment mechanisms:

RSA
DH
ECDH


I don't think a break in another cipher or KEM would be widespread 
enough to matter much. Assuming NSA (or possibly GCHQ) have made a big 
break:


I don't think it's against 3DES or RC4, though the latter is used a lot 
more than people imagine.


AES? Maybe, but a break in AES would be a very big deal. I don't know 
whether hiding that would be politically acceptable.


RSA? Well, maybe indeed. Break even a few dozen RSA keys per month, and 
you get a goodly proportion of all internet encrypted traffic. It's just 
another advance on factorisation.


If you can break RSA you can probably break DH as well.

ECDH? Again quite possible, especially against the curves in use - but 
perhaps a more widespread break against ECDH is possible as well. The 
math says that it can be done starting with a given curve (though we 
don't know how to do it), and you only need to do the hard part once per 
curve.





My money? RSA.


But even so, double encrypting with two different ciphers (and using two 
different KEMs) seems a lot more respectable now.


-- Peter Fairbrother
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: Question regarding common modulus on elliptic curve cryptosystems

2010-03-23 Thread Peter Fairbrother

Sergio Lerner wrote:


I looking for a public-key cryptosystem that allows commutation of the 
operations of encription/decryption for different users keys

( Ek(Es(m)) =  Es(Ek(m)) ).


Diffie-Hellman combined with Pohlig-Hellman can do what you describe.

It's a variation on www.zenadsl6186.zen.co.uk/ICUR.pdf using DH (as a 
public key system rather than as a key agreement system) rather than El 
Gamal. If it's not obvious how to implement it ask offlist.



But I don't think that's what you need. PK is not the same thing as 
signatures. It's not "a commutative signing primitive".


That's not something which I've come across before, but maybe I could 
work one out ... I gather you need for the verifier to be unable to tell 
the order in which the signatures were applied? Can't think offhand of 
any other reason why you'd need one.


But I need to know exactly what you need.



Need coffee, it's raining .. dilemma, wet or unwoken?



-- Peter Fairbrother

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: UK RIPA Pt 3

2007-07-05 Thread Peter Fairbrother

Peter Fairbrother wrote:
The UK Home Office have just announced that they intend to bring the 
provisions of Pt 3 of the Regulation of Investigatory Powers Act 2000 
into force on 1st October. This is the law that enables Policemen to 
demand keys to encrypted material, on pain of imprisonment, and without 
judicial approval of these demands.


There is one last Parliamentary process to go through, the approval of a 
code of practice, but as far as I know there has never been a case of 
one of these failing to pass - though a related one was withdrawn a few 
years ago. We will try to prevent it happening, the chances of success 
are against us but it is not impossible.



You are not required to keep keys indefinitely, or give up a key you 
don't have, but the rules regarding the assumption that you know a key 
at least partially reverse the normal burden of proof.



I forgot to mention that Pt.3 also includes coercive demands for access 
keys - so for instance if Mr Bill Gates came to the UK, and if there was 
some existing question about Microsoft's behaviour in some perhaps 
current EU legal matter, Mr Gates could be required to give up the keys 
to the Microsoft internal US servers. Or go to jail.



Though I'd quite like to see that :), I don't think it would be entirely 
appropriate ...



-- Peter Fairbrother

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


UK RIPA Pt 3

2007-07-05 Thread Peter Fairbrother
The UK Home Office have just announced that they intend to bring the 
provisions of Pt 3 of the Regulation of Investigatory Powers Act 2000 
into force on 1st October. This is the law that enables Policemen to 
demand keys to encrypted material, on pain of imprisonment, and without 
judicial approval of these demands.


There is one last Parliamentary process to go through, the approval of a 
code of practice, but as far as I know there has never been a case of 
one of these failing to pass - though a related one was withdrawn a few 
years ago. We will try to prevent it happening, the chances of success 
are against us but it is not impossible.



You are not required to keep keys indefinitely, or give up a key you 
don't have, but the rules regarding the assumption that you know a key 
at least partially reverse the normal burden of proof.




m-o-o-t will be there on the day. m-o-o-t is a freeware live CD 
containing OS and applications, including an ephemerally keyed messaging 
service, and a steganographic file system.


If anyone knows of any other technologies to defeat this coercive attack 
I would be glad to hear of them, and perhaps include them in m-o-o-t.



-- Peter Fairbrother
www.m-o-o-t.org

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Windows guru requested - Securing Windows

2006-06-07 Thread Peter Fairbrother
Today the UK Home Office announced the public consultation on the Code of
Practice of Part 3 of RIPA. This is the first stage of the process by which
it can be brought into force. Part III of RIPA is the
"policeman-say-gimme-all-your-keys-or-go-to-jail-(and-don't-tell-anybody)"
law passed 6 years ago but not yet brought into force.

With the advent of GAK in the UK looking more and more likely, I am ramping
up the m-o-o-t project ( http://www.m-o-o-t.org , but the website is
woefully out-of-date and the final form of the project may be rather
different to that described therein), which has been dormant for some time.

m-o-o-t's goal is to provide even the dumbest luser with the tools to avoid
and evade demands for keys in such a way that it is very hard for them to
mess up or do anything insecurely.

This will be either for free or at very low cost (might do something with a
USB stick which the user would have to buy, but not from us - software will
be free).

In a preliminary search for alternatives, I am seeking an answer to this
question. I know very little about Windows beyond that many people use it
and that source is not available, so be gentle with me please.




In an attempt to partially secure Windows for temporary use, ie when it's
being temporarily used in "secure mode", and to prevent data being stored in
software keylogger, temp and swap files, would something like the following
be possible?

Boot from CD, create a memory FS, union mount it to the main windows fat-32
FS, with the fat-32 fs mounted read-only, boot Windows? That way any changes
to the files would be wiped out when the power was switched off, and the
fat-32 fs would remain untouched.

Mount a steganographic FS read/write on eg a USB key (or a different
partition) on / with a hard-to-guess name. Secret files should be saved to
this fs.



Thanks,

-- 
Peter Fairbrother

[EMAIL PROTECTED]
http://www.m-o-o-t.org
Moderated mailing list: [EMAIL PROTECTED]
Notification of release: blank email to [EMAIL PROTECTED]


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: thoughts on one time pads

2006-01-31 Thread Peter Fairbrother
Peter Gutmann wrote:

> Jonathan Thornburg <[EMAIL PROTECTED]> writes:
> 
>> Melting the CD should work... but in practice that takes a specialized "oven"
>> (I seriously doubt my home oven gets hot enough), and is likely to produce
>> toxic fumes, and leave behind a sticky mess (stuck to the surface of the
>> specialized oven).
> 
> For no adequately explored reason I've tried various ways of physically
> destroying CDs:

Does a microwave oven do anything? I've been reading too much Tom Clancy ...

It does get rid of the stuff on the top, leaving a surface that a bit of
sanding would make irretrievable, and some flakes that could be burned
maybe?



Another possibility might be to n-of-n [1] split the data up so you need to
have a whole disk rotation's worth in order to reconstruct any of it - that
might well make assured destruction a lot easier.
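The n-of-n split in [1] is just XOR with random masks: generate n-1 uniformly random shares, make the last share the XOR of the secret with all of them, and then destroying any single share destroys the whole. A sketch:

```python
import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_nn(secret: bytes, n: int) -> list[bytes]:
    """n-of-n split: n-1 random shares plus secret XOR all of them.
    Losing (or destroying) any one share destroys the secret."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def join_nn(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

pad = os.urandom(32)                # a block of one-time-pad material
shares = split_nn(pad, 4)
assert join_nn(shares) == pad       # all four shares recover the pad
assert join_nn(shares[:3]) != pad   # any three look like random noise
```

Interleave the shares around a disk rotation and a single hammer blow through one track destroys every bit it covered, for all the shares at once.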

The repeatedly applied hammer would probably work well then, I doubt it's
that hard to destroy ~2^100 bits with a few blows to one track.

but the hot fiery furnace in the basement is probably still the best. :)







It used to be a fashion to have key signing parties when crypto people
gathered - and at several ones over the last few years I have seen CD's of
OTP data swapped instead. And DVD's are about the same price as CDs now.

I'm talking about the kind of careful people who get the message and do the
xor themselves, probably in shell script. No "applications".

They can easily change to using symmetric keys to save OTP material (using
some of the otp for the symmetric key) when large files are sent - "Here's
the porneo.mpg of Hillary Clinton [2], encrypted in AES with this key:
xxx..."



Often doubly encrypted, typically using both Blowfish and AES with different
keys, in case one of those ciphers has been covertly broken.

Hey, why not? It costs nothing.


-- 
Peter Fairbrother



[1] the crypto variety of m-of-n splitting, but where m=n so you need all of
the pieces to reconstruct any of the whole - not the RAID variety of m-of-n
splitting, where you only need as much data as the original data.

[2] Anne Widdecombe?


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: solving the wrong problem

2005-08-08 Thread Peter Fairbrother
Peter Gutmann wrote:

> Peter Fairbrother <[EMAIL PROTECTED]> writes:
>> Perry E. Metzger wrote:
>>> Frequently, scientists who know nothing about security come up with
>>> ingenious ways to solve non-existent problems. Take this, for example:
>>> 
> http://www.sciam.com/article.cfm?chanID=sa003&articleID=00049DB6-ED96-12E7-AD9683414B7F
>>> 
>>> Basically, some clever folks have found a way to "fingerprint" the
>>> fiber pattern in a particular piece of paper so that they know they
>>> have a particular piece of paper on hand.
>> 
>> Didn't the people who did US/USSR nuclear arms verification do something
>> very similar, except the characterised surface was sparkles in plastic
>> painted on the missile rather than paper?
> 
> Yes.  The intent was that forging the fingerprint on a warhead should cost as
> much or more than the warhead itself.

Talking of solving the wrong problem, that's a pretty bad metric - forging
should cost the damage an extra warhead would do, rather than the cost of an
extra warhead. That's got to be in the trillions, rather than a few hundred
thousand for another warhead.


-- 
Peter Fairbrother


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: solving the wrong problem

2005-08-07 Thread Peter Fairbrother
Perry E. Metzger wrote:

> 
> Frequently, scientists who know nothing about security come up with
> ingenious ways to solve non-existent problems. Take this, for example:
> 
> http://www.sciam.com/article.cfm?chanID=sa003&articleID=00049DB6-ED96-12E7-AD9683414B7F
> 
> Basically, some clever folks have found a way to "fingerprint" the
> fiber pattern in a particular piece of paper so that they know they
> have a particular piece of paper on hand.


Didn't the people who did US/USSR nuclear arms verification do something
very similar, except the characterised surface was sparkles in plastic
painted on the missile rather than paper?



-- 
Peter Fairbrother


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: EMV

2005-07-11 Thread Peter Fairbrother
Florian Weimer wrote:

> * David Alexander Molnar:
> 
>> Actually, smart cards are here today. My local movie theatre in Berkeley,
>> California is participating in a trial for "MasterCard PayPass." There is
>> a little antenna at the window; apparently you can just wave your card at
>> the antena to pay for tickets. I haven't observed anyone using it in
>> person, but the infrastructure is there right now.
> 
> If you are interested in useful RFID applications, just visit
> Singapore. 8-) They use RFID tickets on the subway (MRT) and on
> busses, and you don't have to worry about buying the right ticket
> because the system charges you the correct amount.  However, there's
> one thing that makes me nervous: if you know the card number (which is
> printed on the cards), you can go to a web page, enter it, and obtain
> the last 20 rides during the last 3 days, without any further
> authentication.  

London Underground have a contactless system too, but it isn't used much. As
I remember it had a similar problem, but they may have changed that.

You take out your wallet with the card in and wave it over a palm-sized
yellow blob on the turnstile, but you don't have to open your wallet to
withdraw a token. 

Muggers and pickpockets keep a close eye out to see how fat your wallet is
and where you keep it ...


-- 
Peter Fairbrother


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Why Blockbuster looks at your ID.

2005-07-09 Thread Peter Fairbrother
Jerrold Leichter wrote:

> There have been a couple of articles in RISKS recently about the fairly recent
> use of a two-factor system for bank cards in England.  There are already
> significant hacks -

yes ...

> and the banks managed to get the law changed so that, with
> this "guaranteed to be secure" new system, the liability is pushed back onto
> the customer.

 I'm not too sure what you mean.

 In the UK the merchant is not usually liable for card-present fraud.

 There has been / is about to be a change to the liability of the merchant,
usually to the effect that if a fraud is successful because the merchant
hasn't installed PIN equipment then they will be liable. A few banks are
making merchants liable for all fraud if PIN equipment has not been
installed.

EMV said the change would begin on 1st Jan, but the banks haven't all
implemented it yet. Many did so on 1st July.

The change occurs in the contract between the acquiring banks and the
merchants, not the law; the legality of the change is questionable, but as
it is basically just a way to encourage retailers to install PIN equipment
it has not been challenged afaik.

There is no change in the merchant's liability if he has installed Chip n'
PIN equipment - the tales circulating of all merchants becoming liable for
all frauds are simply not true.





 There will also be a change in the way fraud claims are dealt with, to the
almost certain disadvantage of the cardholder, as there is no physical
signature to contest and at least in the first instance the issuers
determine the "facts".


 However I am not aware of any changes to the law.


 There was a very recent Banking Ombudsman case where the cardholder had
been grossly negligent about her PIN security, but her liability was still
limited to £50 (which is a statutory limit and applies to credit cards, but
not to debit cards - although it is in practice applied to them too).
Usually the £50 limit is not charged by the issuing bank.





 However the customer eventually pays for fraud anyway, in the form of
higher prices, so the issuer - merchant liability split is not of immediate
relevance to the customer. It should be tilted firmly against the banks IMO
though, as they are responsible for the system, not the merchants, who have
no say, as EMV + AmEx is an effective monopoly.



 BTW, one of my banks recently sent me a leaflet which said Chip n' PIN was
going to be introduced worldwide. Anyone know more about that?


-- 
Peter Fairbrother




Re: Why Blockbuster looks at your ID.

2005-07-09 Thread Peter Fairbrother
Perry E. Metzger wrote:
 
> A system in which the credit card was replaced by a small, calculator
> style token with a smartcard style connector could effectively
> eliminate most of the in person and over the net fraud we experience,
> and thus get rid of large costs in the system and get rid of the need
> for every Tom, Dick and Harry to see your drivers license when you
> make a purchase. It would both improve personal privacy and help the
> economy by massively reducing transaction costs.

I agree that it might well reduce costs and fraud - but how will it improve
privacy? Your name is already on the card ... and the issuer will still have
a list of your transactions.

Not having to show ID may save annoyance, but it doesn't significantly
improve privacy.



-- 
Peter Fairbrother




Re: massive data theft at MasterCard processor

2005-06-20 Thread Peter Fairbrother
Steven M. Bellovin wrote:

> Designing a system that deflects this sort of attack is challenging.
> The right answer is smart cards that can digitally sign transactions


No, it isn't! A handwritten signature is far better: it gives post-facto
evidence about who authorised the transaction. It is hard to fake a
signature so well that later analysis can't detect the forgery, and few
people would bother to do it that well anyway - while it is easy enough
to enter a PIN with "digital reproducibility".


Also there are several attacks on Chip n' PIN as deployed here in the UK,
starting with the fake reader attacks - for instance, a fake reader says you
are authorising a payment for $6.99 while in fact the card and PIN are being
used to authorise a transaction for $10,000 across the street. The attacks get
quite complex: there's the double-dip, where the $6.99 transaction is also
made, and the delayed double dip, where a reader belonging to a crook makes
the $10,000 transaction several days later (the crook has to skip town with
the money in this attack - so far. Except of course he never existed in the
first place, and maybe ...).

Then there's probably a Bank-wide attack, where an expensive attack on one
card can break all the cards used by one bank - ouch! because the Banks
haven't actually issued cards that digitally sign the transaction (and it
would make little difference to many of the fake reader attacks if they
had), but just reuse one key or a key with an offset or XOR on the card to
generate a keyed hash of the transaction for authorisation.

There are some more classes of attacks too. It's a bit early to say about
many of them, but it looks like there are a goodly number of going-to-be
successful attacks.

This might not matter that much except to the banks, but the liability for
what appears to be a PIN-authorised transaction is being foisted off on the
cardholder, who has little recourse to prove that he didn't make the
transaction when one of these attacks is made.

I don't have any Chip n' PIN cards, and I don't want any either. I'm
sticking with signatures.


-- 
Peter Fairbrother




aid worker stego

2005-03-29 Thread Peter Fairbrother
I've been asked to advise an aid worker about stego. Potential major
government attacker.

I don't think there is much danger of severe torture, but I don't think
"innocent-until-proven-guilty" applies either, and suspicion should be
minimised or avoided.



I thought about recommending BestCrypt or DriveCrypt, as they are supposedly
general-purpose encryption programs which can also do encrypted containers
which are undetectable, but I don't know if that's actually so, or which to
choose.

If he's using Windows, will they clean up the temp and swap files?



An alternative is a stego program to hide data in eg images.  I don't know
which are the better ones now available, can anyone advise?

The other point is that the stego program itself will be visible on disk. Is
there a small stego program that you could eg hide in an image and somehow
bootstrap from something totally innocuous?




Any other ideas?


-- 
Peter Fairbrother




Re: Article on passwords in Wired News

2004-06-07 Thread Peter Fairbrother
Peter Gutmann wrote:

>> An article on passwords and password safety, including this neat bit:
>> 
>> For additional security, she then pulls out a card that has 50
>> scratch-off codes. Jubran uses the codes, one by one, each time she
>> logs on or performs a transaction. Her bank, Nordea PLC, automatically
>> sends a new card when she's about to run out.
>> 
>> http://www.wired.com/news/infostructure/0,1377,63670,00.html
> 
> One-time passwords (TANs) was another thing I covered in the "Why isn't the
> Internet secure yet, dammit!" talk I mentioned here a few days ago.  From
> talking to assorted (non-European) banks, I haven't been able to find any that
> are planning to introduce these in the foreseeable future.  I've also been
> unable to get any credible explanation as to why not, as far as I can tell
> it's "We're not hurting enough yet".  Maybe it's just a cultural thing,
> certainly among European banks it seems to be a normal part of allowing
> customers online access to banking facilities.

My (European) bank uses "memorable information", an alphanumeric string
provided by me, and they ask for three randomly chosen characters when
authenticating online. There is also a fixed password.

Not terribly secure, or terribly one-time, but it would defeat a simple
keylogger or shoulder surfing attack, for instance. It doesn't give me the
warm fuzzies, but it does mean I would use a dodgy terminal at least once if
I was stuck in the badlands (and then change passwords etc.).


-- 
Peter Fairbrother



Re: safety of Pohlig-Hellman with a common modulus?

2003-12-07 Thread Peter Fairbrother
David Wagner wrote:

> Peter Fairbrother  wrote:

>> Not usually.  In general index calculus attacks don't work on P-H, [...]
> 
> Sure they do.  If I have a known plaintext pair (M,C), where
> C = M^k (mod p), then with two discrete log computations I can
> compute k, since k = dlog_g(C)/dlog_g(M) (mod p-1).  This works for
> any generator g, so I can do the precomputation for any g I like.

Duuuh. I _knew_ that. I've even proposed changing p from time to time to
limit the take from an IC attack. Dumb of me.

Too much beer, no coffee, got a brainstorm and couldn't see the wood for the
trees... Sorry.
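The known-plaintext attack described above can be sketched with toy parameters. Everything here is an assumed, illustrative setup (a tiny safe prime and a brute-force discrete log, viable only at toy size), not a practical attack implementation:

```python
# Given a known plaintext pair (M, C) with C = M^k mod p, recover the
# secret Pohlig-Hellman exponent k from two discrete logs:
#   k = dlog_g(C) * dlog_g(M)^-1 mod (p-1)
p = 467                 # toy safe prime: 467 = 2*233 + 1
g = 2                   # a generator of Z_p* (2 is a non-residue mod 467)

def dlog(base, target, p):
    """Brute-force discrete log: smallest e with base^e = target mod p."""
    x = 1
    for e in range(p - 1):
        if x == target:
            return e
        x = (x * base) % p
    raise ValueError("no discrete log found")

k = 157                 # the secret exponent, coprime to p-1
M = pow(g, 5, p)        # known plaintext, chosen so dlog_g(M) is
                        # invertible mod p-1 (in general, work in subgroups)
C = pow(M, k, p)        # known ciphertext

a = dlog(g, C, p)       # dlog_g(C)
b = dlog(g, M, p)       # dlog_g(M)
k_rec = (a * pow(b, -1, p - 1)) % (p - 1)

assert k_rec == k and pow(M, k_rec, p) == C
```

With a realistic modulus the discrete logs come from index calculus rather than brute force, which is exactly why the precomputed factor base for a shared p pays off across all its users.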


-- 
Peter Fairbrother



Re: safety of Pohlig-Hellman with a common modulus?

2003-12-06 Thread Peter Fairbrother
David Wagner wrote:

> Steve Bellovin  wrote:
>> Is it safe to use Pohlig-Hellman encryption with a common modulus?
>> That is, I want various parties to have their own exponents, but share
>> the same prime modulus.  In my application, a chosen plaintext attack
>> will be possible.  (I know that RSA with common modulus is not safe.)
> 
> Yes, I believe so.  The security of Pohlig-Hellman rests on the difficulty
> of the discrete log problem.

Nope. In P-H there is no g. A ciphertext is M^k mod p. An attacker won't
know k, and usually won't know M, but see below. I don't know what the
problem is called, but it isn't DLP. Anyone?

> Knowing the discrete log of g^y doesn't help
> me learn the discrete log of g^x (assuming x,y are picked independently).
> This is not like RSA, where using a common modulus allows devastating
> attacks.
> 
> There is a small caveat, but it is pretty minor.  There are some
> precomputation attacks one can do which depend only on the prime p; after
> a long precomputation, one can compute discrete logs mod p fairly quickly.
> The more people who use the same modulus, the more attractive such a
> precomputation effort will be.  So the only reason (that I know of)
> for using different modulii with Pohlig-Hellman is to avoid putting all
> your eggs in one basket.

Not usually.  In general index calculus attacks don't work on P-H, except in
chosen plaintext attacks (where the chosen plaintext sort-of substitutes for
g).

When using P-H I usually pre-encrypt data in any old symmetric cipher with a
random IV and any old key, to avoid known plaintext attacks. There are
several other attacks to be aware of, including some nasty
adaptive-chosen-plaintext and chosen-ciphertext attacks.

-- 
Peter Fairbrother



Re: safety of Pohlig-Hellman with a common modulus?

2003-12-06 Thread Peter Fairbrother
I wrote:

Steve Bellovin wrote:

> Is it safe to use Pohlig-Hellman encryption with a common modulus?
> That is, I want various parties to have their own exponents, but share
> the same prime modulus.  In my application, a chosen plaintext attack
> will be possible.  (I know that RSA with common modulus is not safe.)
> 
> --Steve Bellovin, http://www.research.att.com/~smb

As far as I can tell it's safe - the main danger is that if an
attacker does the work to calculate the factor base for an index calculus
attack, the factor base is useful for attacking all ciphertext which uses
the modulus. It's fairly easy to find an individual discrete log with a
factor base, so such an attacker would get a bigger return on investment.



Sorry, the above is complete nonsense, and only applies in a few situations.

There are some chosen plaintext attacks, and especially adaptive chosen
plaintext attacks, but they apply whether or not the modulus is shared.

But P-H with a shared modulus is pretty much as safe as with different
moduli, afaict.



Re: safety of Pohlig-Hellman with a common modulus?

2003-12-06 Thread Peter Fairbrother
Steve Bellovin wrote:

> Is it safe to use Pohlig-Hellman encryption with a common modulus?
> That is, I want various parties to have their own exponents, but share
> the same prime modulus.  In my application, a chosen plaintext attack
> will be possible.  (I know that RSA with common modulus is not safe.)
> 
> --Steve Bellovin, http://www.research.att.com/~smb

As far as I can tell it's safe - the main danger is that if an
attacker does the work to calculate the factor base for an index calculus
attack, the factor base is useful for attacking all ciphertext which uses
the modulus. It's fairly easy to find an individual discrete log with a
factor base, so such an attacker would get a bigger return on investment.

Two simple ways around that - use longer prime moduli, or change the modulus
from time to time.


The attack on RSA has no equivalent here. That attack involves using a key
pair to find phi(n) or a divisor of it, but in Pohlig-Hellman the value of
phi(p) is not secret (= p-1).

I am presently using Pohlig-Hellman in a construction for universal
re-encryption, taking advantage of its key-multiplicative property. See
http://www.zenadsl6186.zen.co.uk/ICURpaper3.pdf or
http://www.zenadsl6186.zen.co.uk/ICURpaper3.ps for the messy math details,
my application is in the online observable SFS for m-o-o-t.



Pohlig-Hellman is still however very slow compared to modern symmetric
ciphers, and in most cases where P-H is used a group cipher with the
required key properties would be more efficient.

No such cipher exists, but I am thinking of building one. I have already got
a few indications of interest, if anyone else wants to contribute please let
me know. I'm not committed to doing it, but if enough people want it..



Re: Searching for uncopyable key made of sparkles in plastic

2003-12-04 Thread Peter Fairbrother
R. A. Hettinga wrote:

> 
> --- begin forwarded text
> 
> 
> Status:  U
> Date: Tue, 2 Sep 2003 14:45:43 -0400
> To: [EMAIL PROTECTED]
> From: Peter Wayner <[EMAIL PROTECTED]>
> Subject: Searching for uncopyable key made of sparkles in plastic
> Sender: [EMAIL PROTECTED]
> 
> Several months ago, I read about someone who was making a key that
> was difficult if not "impossible" to copy. They mixed sparkly things
> into a plastic resin and let them set. A camera would take a picture
> of the object and pass the location of the sparkly parts through a
> hash function to produce the numerical key represented by this hunk
> of plastic. That numerical value would unlock documents.
> 
> This was thought to be very difficult to copy because the sparkly
> items were arranged at random. Arranging all of the sparkly parts in
> the right sequence and position was thought to be beyond the limits
> of precision for humans.
> 
> Can anyone give me a reference to this paper/project?
> 
> 
> Thanks!
> 
> -Peter

(catching up on old posts)

Not a ref as such, more a bit of trivia.

A similar system was used to verify SALT. Russian ICBMs etc. had sparkles
glued to them, and from time to time US people would test them to see if
they were the same missiles.

I don't know what the Russians did to the US missiles, but I think it was
the same.


-- Peter Fairbrother

I hear that the emperor of china
used to wear iron shoes with ease



lockable trapdoor one-way function

2003-11-26 Thread Peter Fairbrother
Does anyone know of a trapdoor one-way function whose trapdoor can be locked
after use?

It can be done with secure hardware and/or distributed trust, just delete
the trapdoor key, and prove (somehow?) you've deleted it.

It looks hard to do in "trust-the-math-only" mode...


-- 
Peter Fairbrother



Re: Are there...one-way encryption algorithms

2003-11-26 Thread Peter Fairbrother
Bodo Moeller wrote:

> The Pohlig-Hellman cipher is the modular scheme that you describe, but
> observe there is a connection to the protocol above: that protocol
> works only if encryption and decryption has a certain commutativity
> property (decrypting  B(A(M))  with key  A   must leave  B(M),  not
> just some  A^-1(B(A(M)))  that might look entirely different), and
> the Pohlig-Hellman cipher has this property.

A useful property for all sorts of things. I'm using P-H to improve Golle et
al's universal encryption methods,
http://www.zenadsl6186.zen.co.uk/ICURpaper3.pdf but it's a pity that
Pohlig-Hellman is still slow, and that there isn't a faster algorithm with
similar properties.

There's lots of potential uses for one of those :)



-- 
Peter Fairbrother



Re: A-B-a-b encryption

2003-11-19 Thread Peter Fairbrother
martin f krafft wrote:

> it came up lately in a discussion, and I couldn't put a name to it:
> a means to use symmetric crypto without exchanging keys:
> 
> - Alice encrypts M with key A and sends it to Bob
> - Bob encrypts A(M) with key B and sends it to Alice
> - Alice decrypts B(A(M)) with key A, leaving B(M), sends it to Bob
> - Bob decrypts B(M) with key B leaving him with M.
> 
> Are there algorithms for this already? What's the scheme called?
> I searched Schneier (non-extensively) but couldn't find a reference.
> 
> Thanks,

The protocol is called the Shamir three-pass protocol. It needs a
commutative cipher.

Probably the only cipher that it can be securely used with is called the
Pohlig-Hellman cipher, a simple exponentiating cipher over Zp.

Whether it's a symmetric cipher is a matter of precise definition, though
despite the encryption and decryption keys being different I would consider
it such. A better term might be a secret-key cipher. It's quite easy to find
the decryption key d from the encryption key e:

d*e = 1 mod (p-1)

C = M^e mod p
M = C^d mod p


p should be a "safe" (= 2q+1, q prime) prime, and all keys used should be
odd and !=q. There is an ECC variant. There are lots of things to watch out
for in implementations.
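The full three-pass exchange over the Pohlig-Hellman cipher can be sketched as follows. All values are assumed toy parameters; a real deployment needs a large safe prime and the key restrictions just mentioned:

```python
# Shamir three-pass protocol using the Pohlig-Hellman cipher
# C = M^e mod p, which commutes: B(A(M)) = A(B(M)).
p = 467                      # toy safe prime: 467 = 2*233 + 1

def keypair(e, p):
    """Return (e, d) with d*e = 1 mod (p-1)."""
    return e, pow(e, -1, p - 1)

eA, dA = keypair(5, p)       # Alice's secret exponent pair
eB, dB = keypair(11, p)      # Bob's secret exponent pair

M = 42                       # the message, 1 < M < p

t1 = pow(M, eA, p)           # pass 1, Alice -> Bob:   A(M)
t2 = pow(t1, eB, p)          # pass 2, Bob -> Alice:   B(A(M))
t3 = pow(t2, dA, p)          # pass 3, Alice -> Bob:   B(M), by commutativity
out = pow(t3, dB, p)         # Bob decrypts with dB:   M

assert out == M
```

Pass 3 works because exponents compose mod p-1: stripping Alice's layer from B(A(M)) leaves exactly B(M), which an arbitrary pair of ciphers would not guarantee.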



I'm trying to develop (or find? anyone?) a secure symmetric cipher which is
a group, where if you know A and B you can find a key C that decrypts
B(A(M)), but that's a different story.


-- 
Peter Fairbrother



Re: quantum hype

2003-10-03 Thread Peter Fairbrother
[EMAIL PROTECTED] wrote:

>> From: [EMAIL PROTECTED]
>> [mailto:[EMAIL PROTECTED] Behalf Of Dave Howe
>> 
>> Peter Fairbrother may well be in possession of a break for the QC hard
>> problem - his last post stated there was a way to "clone" photons with
>> high accuracy in retention of their polarization
>> [SNIP]
>> 
> Not a break at all. The physical limit for cloning is 5/6ths of the bits will
> clone true. Alice need only send 6 bits for every one bit desired to assure
> Eve has zero information. For a 256-bit key negotiation, Alice sends 1536 bits
> and hashes it down to 256 bits for the key.

I've just discovered that that won't work. Eve can get sufficient
information to make any classical error correction or entropy distillation
techniques unusable.

See:  http://www.gap-optique.unige.ch/Publications/Pdf/9611041.pdf


You have to use QPA instead, which has far too many theoretical assumptions
for my trust.

-- 
Peter Fairbrother



Re: Can Eve repeat?

2003-09-30 Thread Peter Fairbrother
Dan Riley wrote:

> I'm not an expert on this stuff, but I'm interested enough to chase
> a few references...
> 
> Ivan Krstic <[EMAIL PROTECTED]> writes:
>> The idea that observing modifies state is something to be approached with
>> caution. Read-only does make sense in quantum world; implementations of
>> early theoretical work by Elitzur and Vaidman achieved roughly 50% success
>> on interaction-free measurements.
> 
> Careful there--EV interaction-free measurements do *not* read the
> internal state of the system measured.  The trick to the EV IFM is
> that it determines the location (or existence) of a system without
> interacting with the internal state of that system; a corollary is
> that it derives no information about the internal state.

Caveat - I'm slightly drunk, and in a quitting-nicotine state...

IFM's can detect classical measurements, without changing the classical
measurement (eg the presence of an obstacle in a path). They may change the
quantum state of the classical measurement, but that doesn't matter as far
as changing its classical aspect goes.


What I was referring to was using IFM and quantum Zeno techniques to detect
_which_ quanta were incorrectly cloned by a UQCM. That would increase the
Shannon information available to Eve, and allow her to either stop
transmission of the incorrectly cloned quanta, or, and this is an unknown
step, to reconstruct the incident quantum and try again to clone it.

The simple apparatus is just a UQCM in one limb of a Mach-Zender
interferometer. That's enough to increase Eve's Shannon information. The QZ
apparatus I can't describe, but I can well imagine one that would allow
near-perfect detection of incorrect clones.


> "The meaning of the EV IFM is that if an object changes its internal
> state [...] due to the radiation, then the method allows detection
> of the location of the object without any change in its internal
> state.
> [...]
> We should mention that the interaction-free measurements do not
> have vanishing interaction Hamiltonian. [...] the IFM can change
> very significantly the quantum state of the observed object and we
> still name it interaction free."
> Lev Vaidman, "Are Interaction-free Measurements Interaction
> Free?", http://arxiv.org/abs/quant-ph/0006077

But this follows your quote:

"On the other hand the method do allow performing some non-demolition
measurements. It might be momentum-exchange free and energy-exchange free."

You are claiming too much, and ignoring what the paper actually says. Bad
boy! Spank! It might be changing-polarisation-free too. I'm pretty sure it
can be.


> Intercepting QC is all about determining the internal state (e.g.
> photon polarization), and AFAIK that requires becoming entangled with
> the state of the particle.  EV IFM doesn't appear to provide a way
> around this.

Not entirely. IFM techniques can also eg compare the incident photon with
the same photon in a "has-been-cloned" state. The theory is slightly
different, which was why I said "sort of" in my earlier post on the subject.

> and later...
>> On Fri, 26 Sep 2003 09:10:05 -0400, Greg Troxel <[EMAIL PROTECTED]> wrote:
>>> The current canoncial
>>> paper on how to calculate the number of bits that must be hashed away
>>> due to detected eavesdropping and the inferred amount of undetected
>>> eavesdropping is "Defense frontier analysis of quantum cryptographic
>>> systems" by Slutsky et al:
>>> 
>>> http://topaz.ucsd.edu/papers/defense.pdf
>> 
>> Up-front disclaimer: I haven't had time to study this paper with the
>> level of attention it likely deserves, so I apologize if the following
>> contains incorrect logic. However, from glancing over it, it appears
>> the assumptions on which the entire paper rests are undermined by work
>> such as that of Elitzur and Vaidman (see the article I linked
>> previously). Specifically, note the following:
> [...]
>> If we do away with the idea that there are no interaction-free
>> measurements (which was, at least to me, convincingly shown by the
>> Quantum seeing in the dark article), this paper becomes considerably
>> less useful; the first claim's validity is completely nullified (no
>> longer does interference with particles necessarily introduce
>> transmission errors),
> 
> If Eve can measure the state of a particle without altering its state
> at all, 100% of the time, then QC is dead--the defense function
> becomes infinite.  But AFAICT the EV IFM techniques do not provide
> this ability.

Not as you describe. But the earlier "it would defy Einsteinian causality"
(without a detailed explanation of how) argument has been shown to be wrong,
and there is now nothing I know of that will prevent EV IFM-like techniques
working to detect which quanta are incorrectly cloned. Or recreating the
original photon (though there might be limits on that).


>> while the effect on the second statement is
>> evil: employing the proposed key distillation techniques, the user
>> might be given a (very

Re: quantum hype

2003-09-28 Thread Peter Fairbrother
I promised some links about the 5/6 cloning figure. You've had a few
experimental ones, here are some theory ones.


Cloning machines:
http://www.fi.muni.cz/usr/buzek/mypapers/96pra1844.pdf

Theoretically optimal cloning machines:
http://www.gap-optique.unige.ch/Publications/Pdf/PRL02153.pdf

1/6 disturbance is theoretically optimal, both as a QC interception strategy
and "it's an optimal cloning machine":
http://www.gap-optique.unige.ch/Publications/Pdf/PRA04238.pdf

A different approach to the 1/6 figure (2/3 cloned correctly, the 1/3
imperfectly cloned still has a 50% chance of being right):
http://arxiv.org/PS_cache/quant-ph/pdf/0012/0012121.pdf


That lot is pretty much undisputed...

...except for the "optimal" part; and that's a sideways argument anyway -
the math and physics theory are right as far as they go, just that they
didn't consider everything.

It may be possible to clone better than those "optimal" solutions,
especially in the classic QC case, or get more information like which
photons were cloned correctly, and perhaps to as near perfection as you
like, but that is in dispute. Actually it's a pretty friendly dispute,
people mostly say "I don't know"*. I'll post some more links on that later.


*unless someone mentions non-linear transformations. Which is a different
dispute really.
-- 
Peter Fairbrother



Re: Can Eve repeat?

2003-09-26 Thread Peter Fairbrother
Ivan Krstic wrote:

> On 24 Sep 2003 08:34:57 -0400, Greg Troxel <[EMAIL PROTECTED]> wrote:
> [snip]
>> In Quantum Cryptography, Eve is allowed to not only observe, but also
>> transmit (in the quantum world observing modifies state, so the notion
>> of read only doesn't make sense).  Also, Eve is typically accorded
>> unlimited computational power.
> [snip]
> 
> The idea that observing modifies state is something to be approached with
> caution. Read-only does make sense in quantum world; implementations of
> early theoretical work by Elitzur and Vaidman achieved roughly 50% success
> on interaction-free measurements. Later work, relying on the quantum Zeno
> effect, raised the success rate significantly: "Preliminary results from
> new experiments at Los Alamos National Laboratory have demonstrated that
> up to 70 percent of measurements could be interaction-free. We soon hope
> to increase that figure to 85 percent."
> 
> The quote comes from a article by Kwiat, Weinfurter and Zeilinger
> published in SciAm, November 1996 -- if they were getting success rates
> like these back then, I wonder what the current status is.
> 
> The article is well worth a read. There's a copy online at:
> http://www.fortunecity.com/emachines/e11/86/seedark.html
> 
> Best regards,
> Ivan Krstic

Thanks for the interesting link.

That's pretty much what I was talking about when I said that it may be
possible to clone an arbitrarily large proportion of photons - and that
Quantum Cryptography may not actually be secure.

For instance, you can clone a "virtual" photon and do an interaction-free
measurement comparing the now-cloned photon and the photon in its uncloned
state. If they don't match, the photon was incorrectly cloned. You may only
be able to correctly clone 5/6 of the photons, but that way you know which
photons were correctly cloned.

It may also be possible to clone an arbitrarily large proportion of photons,
ie approaching all of them, by measuring the incorrectly cloned photons and
their clones or transforming to get the original photon back, then trying to
clone again. Other methods are perhaps possible too.


The "no-cloning rule" says that no unitary transform will take two quantum
waveforms, one unknown, and generate two wavefoms with the same state as the
unknown waveform. It's probably true (anent some non-linear transform), but
it _doesn't_ say that there isn't another way to clone quanta, perhaps using
three waveforms, or "virtual" waveforms, or generating a new quantum from
interaction-free measurement of the original quantum waveform.

IMO far too much reliance has been placed on it, or perhaps people just
misunderstood what it says.


It reminds me a bit of the "CDs are better than vinyl" dispute - the CD
guys said the Nyquist theorem showed that CDs reproduce music beyond the
range of human hearing, but it doesn't; it just shows that sub-Nyquist
sampling rates have extra problems. And it only applies to steady states,
not music.
And so on.

And the "no-cloning rule" _doesn't_ say it's impossible to clone. Perhaps it
should be called something else.


-- 
Peter Fairbrother



Re: quantum hype

2003-09-22 Thread Peter Fairbrother
Matt Crawford wrote:

>> BTW, you can decrease the wavelength of a photon by bouncing it off
>> moving
>> mirrors.
> 
> Sure.  To double the energy (halve the wavelength), move the mirror at
> 70% of the speed of light.  And since you don't know exactly when the
> photon is coming, keep it moving at that speed ...
> 
 
I never suggested it was very practical, but:

Trap it in a cavity between two parallel mirrors, and shrink the cavity. It
doesn't matter (within reason) how fast you shrink it, just how much.

:)


-- 
Peter Fairbrother



Re: quantum hype

2003-09-22 Thread Peter Fairbrother
[EMAIL PROTECTED] wrote:

>> From: [EMAIL PROTECTED]
>> [mailto:[EMAIL PROTECTED] Behalf Of Dave Howe
>> 
>> Peter Fairbrother may well be in possession of a break for the QC hard
>> problem - his last post stated there was a way to "clone" photons with
>> high accuracy in retention of their polarization
>> [SNIP]
>> 
> Not a break at all. The physical limit for cloning is 5/6ths of the bits will
> clone true. Alice need only send 6 bits for every one bit desired to assure
> Eve has zero information. For a 256-bit key negotiation, Alice sends 1536 bits
> and hashes it down to 256 bits for the key.

Agreed. It's not a break, though it does make it harder. Many people think
the no-cloning theorem says you can't clone photons at all. Most COTS QC
gear only "works" under that false assumption.
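The arithmetic in the quoted 6-to-1 scheme is simple enough to sketch. SHA-256 as the compressing hash is an assumption on my part; the thread doesn't name one, and the later discussion questions whether classical hashing suffices at all:

```python
import hashlib
import secrets

# Alice sends 6 raw key bits per final key bit: 1536 raw bits for a
# 256-bit key, then both sides hash the agreed raw bits down.
raw = secrets.token_bytes(1536 // 8)     # 1536 raw bits from the channel
key = hashlib.sha256(raw).digest()       # compressed 256-bit session key

assert len(raw) * 8 == 6 * 256 and len(key) * 8 == 256
```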

Then there's the noise/error rates - in practice it's very hard to get > 60%
single photon detection rates, even under the most favourable conditions,
and low error rates are hard to get too.

I tend to the opinion, without sufficient justification and knowledge to
make it more than an opinion, that most COTS QC products are probably secure
today in practice, but claims for theoretical security are overblown.




There may be yet another problem which I should mention. First, I'd like to
state that I'm not a quantum mechanic, and I find the math and theory quite
hard, so don't rely too much on this.

I'm not certain that the 5/6 figure is a universal physical limit. It may
just be an artifact of the particular unitary transform used in that
specific cloning process.

It _may_ be possible for the cloner to get some information about which
photons were cloned incorrectly. This is tricky, and I don't know if it's
right - it involves non-interactive measurement of virtual states, kind of.

Another possibility is to imperfectly clone the photon more than once.

The no-cloning theorem per se doesn't disallow these, it only disallows
perfect cloning, but other physics might.

QC's unbreakability isn't based on a "hard problem", it's based on the
physical impossibility of perfect cloning. But exactly what that
impossibility means in practice, I wouldn't like to say. You can't clone
every photon. Can you only clone 5/6 of photons? Or 99.9...% of them? It
may be the latter.




BTW, you can decrease the wavelength of a photon by bouncing it off moving
mirrors.


-- 
Peter Fairbrother



Re: quantum hype

2003-09-21 Thread Peter Fairbrother
Peter Fairbrother wrote:
 
> If the channel is authentic then a MitM is hard - but not impossible. The
> "no-cloning" theorem is all very well, but physics actually allows imperfect
> cloning of up to 5/6 of the photons while retaining polarisation, and this
> should be allowed for as well as the noise calculations. I don't know of any
> existing OTS equipment that does that.
> 
> A lasing medium can in theory clone photons with up to 5/6 of them retaining
> enough polarisation data to use as above, though in practice the noise is
> usually high.
> 
> There is also another less noisy cloning technique which has recently been
> done in laboratories, though it doubles the photon's wavelength, which would
> be noticeable, and I can't see offhand how in practice to halve the wavelength
> again without losing polarisation (except perhaps using changing
> gravitational fields and the like); but there is no theory that says that
> that can't be done.

Had two requests for links (and some scepticism) about this already. Try:

http://www.photonics.com/spectra/research/XQ/ASP/preaid.44/QX/read.htm

for an article and some refs (though I'm not even sure if the paper referred
to is the one I'm thinking of, the one with wavelength doubling; I thought it
was published earlier this year).

I'll try and post some better links later.


-- 
Peter Fairbrother



Re: quantum hype

2003-09-21 Thread Peter Fairbrother
There are lots of types of QC. I'll just mention two.

In "classic" QC Alice generates polarised photons at randomly chosen either
"+" or "x" polarisations. Bob measures the received photons using a randomly
chosen polarisation, and tells Alice whether the measurement polarisation he
chose was "+" or "x", on a authenticated but non-secret channel. Alice
replies with a list of correct choices, and the shared secret is calculated
according as to whether the "+" polarisations are horizontal or vertical,
similar for the "slant" polarisations.
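The generate-measure-sift procedure just described can be sketched as a toy simulation (illustrative assumptions on my part: a noiseless channel, no eavesdropper, and an ideal single-photon source; all names are made up):

```python
import secrets

# Toy simulation of the sifting step (assumption: noiseless channel, no Eve).
n = 32
alice_bits  = [secrets.randbelow(2) for _ in range(n)]
alice_bases = [secrets.choice("+x") for _ in range(n)]   # "+" or "x" polarisation
bob_bases   = [secrets.choice("+x") for _ in range(n)]

# Bob's measurement: the correct basis reproduces Alice's bit exactly,
# the wrong basis yields a uniformly random result.
bob_bits = [a if ab == bb else secrets.randbelow(2)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Alice announces which of Bob's basis choices were correct;
# both keep only those positions as the shared secret.
sifted_alice = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
sifted_bob   = [b for b, ab, bb in zip(bob_bits,  alice_bases, bob_bases) if ab == bb]
assert sifted_alice == sifted_bob
```

On average half of Bob's basis choices match Alice's, so the sifted key is about half the length of the raw photon string.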


If the channel is authentic then a MitM is hard - but not impossible. The
"no-cloning" theorem is all very well, but physics actually allows imperfect
cloning of up to 5/6 of the photons while retaining polarisation, and this
should be allowed for as well as the noise calculations. I don't know of any
existing OTS equipment that does that.

A lasing medium can in theory clone photons with up to 5/6 of them retaining
enough polarisation data to use as above, though in practice the noise is
usually high.

There is also another less noisy cloning technique which has recently been
done in laboratories, though it doubles the photon's wavelength, which would
be noticeable, and I can't see offhand how in practice to halve the wavelength
again without losing polarisation (except perhaps using changing
gravitational fields and the like); but there is no theory that says that
that can't be done.



In another type of QC Alice and Bob agree on the measurement angles (any
angles, not just multiples of 45 deg) they will use, and Alice generates a
pair of entangled photons, sending one to Bob. Both measure the individual
photons at that angle, and the shared secret is generated according to
whether the photons pass the filter.
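A toy model of this entangled-pair variant (an illustrative sketch of mine, not real quantum mechanics: it only encodes the textbook prediction that outcomes at measurement angles t1 and t2 agree with probability cos^2(t1 - t2), so a shared secret angle gives perfectly correlated results):

```python
import math
import random

# Model one entangled pair measured at angles t1 and t2 (radians).
# Textbook prediction: the two outcomes agree with probability cos^2(t1 - t2).
def measure_pair(t1, t2, rng):
    a = rng.randrange(2)                              # Alice's pass/block outcome
    agree = rng.random() < math.cos(t1 - t2) ** 2     # do Bob's and Alice's agree?
    return a, a if agree else 1 - a

rng = random.Random(0)
theta = 0.6  # the secret agreed angle - any angle, not just multiples of 45 deg
pairs = [measure_pair(theta, theta, rng) for _ in range(100)]

# Same angle on both sides => cos^2(0) = 1, so every pair agrees.
assert all(a == b for a, b in pairs)
```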

If the agreed-on measurement angles are kept secret, and noise bounds etc
are obeyed, then a MitM is hard as before except the theoretical maximum
ratio of "clonable" photons is lower - but it isn't much use, except as an
"otp key multiplier".



There are a zillion variations on these themes, and other types of QC. For
instance Alice can send Bob data rather than generating a random shared
secret, and without a separate channel, if she generates the quantum string
using a preshared secret. Mallory can get 1/2 of the bits, but AONTs
(all-or-nothing transforms) can defend against that, and if properly
implemented no MitM is possible.

And so on.

-- 
Peter Fairbrother



Re: pubkeys for p and g

2003-06-27 Thread Peter Fairbrother
martin f krafft wrote:

> also sprach Peter Fairbrother <[EMAIL PROTECTED]> [2003.06.27.1903 +0200]:
>> Can you give me a ref to where they say that? I'd like to know
>> exactly what they are claiming.
> 
> this will have to wait a couple of days.
> 
>> Perhaps they are encrypting the DH secrets with RSA keys to provide some
>> recipient authentication?
> 
> nope.
> 
>> Or perhaps they are using DH instead of RSA for their public keys?
> 
> nope.

Hmmm.

It's not exactly DH, but if you used the e of an RSA key as g, and the N as
p, that would actually work. It's only one RSA key though.
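Mechanically that looks like ordinary DH with g = e and p = N; a sketch with deliberately tiny toy numbers (illustrative only - nothing here implies the exchange has DH's usual security properties):

```python
# Toy RSA key (assumption: illustrative sizes; a real modulus is ~2048 bits).
N = 3233          # 61 * 53, plays the role of the DH modulus "p" (composite!)
e = 17            # RSA public exponent, plays the role of the generator "g"
a, b = 123, 456   # Alice's and Bob's DH secrets

A = pow(e, a, N)  # Alice's public value, e^a mod N
B = pow(e, b, N)  # Bob's public value,   e^b mod N

# Both sides arrive at e^(a*b) mod N, so the exchange "works" mechanically.
assert pow(B, a, N) == pow(A, b, N)
```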


-- 
Peter Fairbrother




Re: pubkeys for p and g

2003-06-27 Thread Peter Fairbrother
martin f krafft wrote:


> My point was that some commercial vendors (Check Point and others)
> claim, that if two partners want to perform a DH key exchange, they
> may use their two public keys for g and p. This, in effect, would
> mean that g and p were not globally known, but that the public keys
> are used in their place.

Can you give me a ref to where they say that? I'd like to know exactly what
they are claiming. 

Perhaps they are encrypting the DH secrets with RSA keys to provide some
recipient authentication?

Or perhaps they are using DH instead of RSA for their public keys?

> Thus every communication party would have a key pair, aA and bB,
> where the capital letter is the public key. Then, the following
> happens:
> 
> let g = A and p = B
> let A' = g^a mod p and B' = g^b mod p
> = A^a mod B= A^b mod B
> 
> and off you go, doing DH with g = A, p = B, and the keypairs aA' and
> bB' on either side.

(I assume a and b are the usual DH secrets)

> This would, in my opinion, only be possible if:
> 
> - there would be a rule to decide which public key is p and which
> is g.
> - all public keys (RSA in this case) are primes.
> - all public keys are good generators mod p.

You mean "all public keys are good generators mod all public keys"

This won't work: for instance, the N's in RSA keys can't be prime. The e's
can be, but there is then no way that I can think of to ensure that one e is
a generator of a sufficiently large subgroup modulo another e, which is
unknown at key generation.

It might be possible to use some algorithm to find a suitable g, but that
doesn't conform to your/their stipulation.




-- 
Peter Fairbrother


