Re: Haskell crypto

2005-11-30 Thread Alexander Klimov
On Sat, 19 Nov 2005, Ian G wrote:

 Someone mailed me with this question, anyone know
 anything about Haskell?

It is a *purely* functional programming language.
http://www.haskell.org/aboutHaskell.html

  Original Message 

 I just recently stepped into open source cryptography directly, rather
 than just as a user.  I'm writing a SHA-2 library completely in
 Haskell, which I recently got a thing for in a bad way.  Seems to me
 that nearly all of the message digest implementations out there are
 written in C/C++, or maybe Java or in hw as an ASIC, but I can't find
 any in a purely functional programming language, let alone in one that
 can have properties of programs proved.

TTBOMK the main reason why people write low-level crypto in something
other than C is ease of integration (e.g., there is a Lisp
SHA-1 implementation in the Emacs distribution): IMO it is pointless to
write SHA in a language that ``can have properties of programs
proved,'' because test vectors are good enough, and there is no real
assurance that when you write the specification in a machine-readable
form you do not make the same mistake as in your code.

BTW, there is low-level crypto in Haskell as well:
http://web.comlab.ox.ac.uk/oucl/work/ian.lynagh/sha1/haskell-sha1-0.1.0/

-- 
Regards,
ASK

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: ISAKMP flaws?

2005-11-30 Thread bear


On Sat, 19 Nov 2005, Peter Gutmann wrote:

> - The remaining user base replaced it with on-demand access to network
>   engineers who come in and set up their hardware and/or software for them and
>   hand-carry the keys from one endpoint to the other.
>
>   I guess that's one key management model that the designers never
>   anticipated... I wonder what a good name for this would be, something better
>   than the obvious "sneakernet keying"?

Actually this is a good thing.  Separation of the key distribution channel
from the flow of traffic encrypted under those keys.  Making key distribution
require human attention/intervention.  This is treating key distribution
seriously, and possibly for the first time in the modern incarnation of the
industry.

Bear



Re: ISAKMP flaws?

2005-11-30 Thread Peter Gutmann
bear [EMAIL PROTECTED] writes:
> On Sat, 19 Nov 2005, Peter Gutmann wrote:
> > - The remaining user base replaced it with on-demand access to network
> >   engineers who come in and set up their hardware and/or software for them and
> >   hand-carry the keys from one endpoint to the other.
> >
> >   I guess that's one key management model that the designers never
> >   anticipated... I wonder what a good name for this would be, something better
> >   than the obvious "sneakernet keying"?
>
> Actually this is a good thing.

Unless you're the one paying someone $200/hour for it.

> Separation of the key distribution channel from the flow of traffic encrypted
> under those keys.  Making key distribution require human
> attention/intervention.

Somehow I suspect that this (making it so unworkable that you have to hand-
carry configuration data from A to B) wasn't the intention of the IKE
designers :-).  It's not just the keying data though, it's all configuration
information.  One networking guy spent some time over dinner recently
describing how, when he has to set up an IPsec tunnel where the endpoints
aren't using completely identical hardware, he uses a hacked version of
OpenSWAN with extra diagnostics enabled to see what side A is sending in the
IKE handshake, then configures side B to match what A wants.  Once that's
done, he calls A and has a password/key read out over the phone to set up for
B.

Peter.



Re: Haskell crypto

2005-11-30 Thread Nathan Loofbourrow

Haskell is a strongly typed functional language with type inference,
much like ML; its key difference from ML is that it is purely functional,
allowing it to use lazy evaluation.

I'm not sure how that illuminates the original message, except to note
that I agree that coding in Haskell is quite fun and addictive.

In addition to traditional crypto conferences, he might also consider
submitting to POPL or a SIGPLAN venue if the implementation has features
unique to Haskell.

http://en.wikipedia.org/wiki/Haskell_programming_language

nathan


Ian G wrote:

Someone mailed me with this question, anyone know
anything about Haskell?

 Original Message 

I just recently stepped into open source cryptography directly, rather
than just as a user.  I'm writing a SHA-2 library completely in
Haskell, which I recently got a thing for in a bad way.  Seems to me
that nearly all of the message digest implementations out there are
written in C/C++, or maybe Java or in hw as an ASIC, but I can't find
any in a purely functional programming language, let alone in one that
can have properties of programs proved.  Haskell can, and also has a
very good optimizing compiler.  I'm not sure where to submit for
publication when I'm done and have it all written up, though!



Re: ISAKMP flaws?

2005-11-30 Thread Peter Gutmann
Tero Kivinen [EMAIL PROTECTED] writes:

> If I understood correctly, the tools they used now did generate specific
> hand-crafted packets having all kinds of weird error cases. When testing with
> crypto protocols the problem is that you also need to do the actual crypto,
> key exchange, etc. to be able to test things after the first packet.

The two that I'm aware of (the X.509 cert data generator that found ASN.1
parser faults and the SSH hello-packet generator) both just created vaguely
correct-looking PDUs that contained garbage data, so that a simple firewall
check would reject 99% of the packets before they even got to the real
processing.  The SSH generator only sent the first packet, so it never got
past the first step of the SSH handshake.  I'm not sure what the ISAKMP data
generator did.

Peter.




Anon_Terminology_v0.24

2005-11-30 Thread R. A. Hettinga

--- begin forwarded text


 Delivered-To: [EMAIL PROTECTED]
 Date: Mon, 21 Nov 2005 12:14:40 +0100
 From: Andreas Pfitzmann [EMAIL PROTECTED]
 To: undisclosed-recipients: ;
 Subject: Anon_Terminology_v0.24
 Sender: [EMAIL PROTECTED]

 Hi all,

 Marit and myself are happy to announce

Anonymity, Unlinkability, Unobservability,
Pseudonymity, and Identity Management -
A Consolidated Proposal for Terminology
(Version v0.24   Nov. 21, 2005)

 for download at

http://dud.inf.tu-dresden.de/Anon_Terminology.shtml

 We incorporated: clarification of whether organizations are subjects
 or entities; the suggestion of the concept of linkability brokers by
 Thomas Kriegelstein; and a clarification on civil identity proposed by Neil
 Mitchison.

 But most importantly: The terminology made it to another language.

Stefanos Gritzalis, Christos Kalloniatis:
Translation of essential terms to Greek

 Many thanx to both of them, together with our kind request to
 translate two newly introduced terms.

 Translations to further languages are welcome.

 Enjoy - and we are happy to receive your feedback.

 Marit and Andreas

 --
 Andreas Pfitzmann

 Dresden University of Technology
 Department of Computer Science
 Institute for System Architecture
 01062 Dresden, Germany
 Phone (mobile)    +49 170 443 87 94
       (office)    +49 351 463 38277
       (secretary) +49 351 463 38247
 Fax               +49 351 463 38255
 http://dud.inf.tu-dresden.de
 e-mail [EMAIL PROTECTED]



 ___
 NymIP-res-group mailing list
 [EMAIL PROTECTED]
 http://www.nymip.org/mailman/listinfo/nymip-res-group

--- end forwarded text


-- 
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'



RE: Fermat's primality test vs. Miller-Rabin

2005-11-30 Thread Anton Stiglic


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Joseph Ashwood
Sent: November 18, 2005 3:18 AM
To: cryptography@metzdowd.com
Subject: Re: Fermat's primality test vs. Miller-Rabin

> Look at table 4.3 of the Handbook of Applied Cryptography: for t = 1
> (one iteration) and for a 500-bit candidate, we have probability
> p(X | Y_1) <= 2^-56, which is better than what you concluded.
> (X represents the event that the candidate n is composite, Y_t the
> event that Miller-Rabin(n, t) declares n to be prime.)
>
> The results in tables 4.3 and 4.4 of HAC are for randomly (uniformly)
> chosen candidates, and I think you need to do a basic sieving (I don't
> remember if that is necessary, but I think it is).  The result is due
> to the fact that under these conditions, the strong pseudoprime test
> does in fact much better than 1/4 probability of error (the value of
> P(Y_t | X) is very low); this result is due to Damgard, Landrock and
> Pomerance, based on earlier work of Erdos and Pomerance.

> I think much of the problem is the way the number is being applied. Given
> a stream of random numbers that have passed a single round of MR, you will
> find that very close to 50% of them are not prime; this does not mean that
> it passes 50% of the numbers (the 2^-80 probability given above is of this
> type).

Do you do an initial sieving to get rid of the more obvious composites?  I'm
guessing you don't, since you seem to have a result contradictory to what has
been proven by Damgard, Landrock and Pomerance.  If you look at table 4.3 of
HAC (which comes from the Damgard et al. paper), it says that if your
candidates come from a uniform random distribution, then for a 500-bit
candidate, the probability that a candidate n is composite when one round of
Miller-Rabin said it was prime is <= (1/2)^56.  You are finding that the
probability is about 1/2; that seems very wrong (unless you are not doing the
sieving, which is very important).  Am I misunderstanding something?


> In fact it appears that integers fall on a continuum of difficulty
> for MR, where some numbers will always fail (easy composites), and other
> numbers will always pass (primes). The problem comes when trying to denote
> which type of probability you are discussing.

Well, I think I explained it pretty clearly.  I can try to reiterate.  Let X
represent the event that a candidate n is composite, and let Y_t denote the
event that Miller-Rabin(n,t) declares n to be prime, where Miller-Rabin(n,t)
means you apply t iterations of Miller-Rabin on n.
Now the basic theorem that we all know is that P(Y_t | X) <= (1/4)^t (this
is a problem in one of Koblitz's basic textbooks on cryptography, for example).
But this is not the probability that we are interested in; we are (at least
I am) more interested in P(X | Y_t).  In other words, what is the
probability that n is in fact composite when Miller-Rabin(n, t) declared n
to be prime?  Do we agree that this is the probability that we are
interested in?
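To see why the worst-case bound alone says little about P(X | Y_t), one can plug it into Bayes' rule. A back-of-the-envelope sketch in Python (the prime-density prior is an illustrative approximation, not the Damgard et al. average-case analysis):

```python
from math import log

# X: candidate n is composite; Y: one MR round declares n "prime".
# Bayes: P(X|Y) = P(Y|X)P(X) / (P(Y|X)P(X) + P(Y|prime)P(prime))

k = 500                           # candidate bit length
p_prime = 2 / (k * log(2))        # approx. density of primes among odd k-bit numbers
p_comp = 1 - p_prime              # prior P(X)

p_pass_comp = 0.25                # worst-case bound: P(Y|X) <= 1/4
p_pass_prime = 1.0                # a true prime always passes

p_comp_given_pass = (p_pass_comp * p_comp) / (
    p_pass_comp * p_comp + p_pass_prime * p_prime)

print(round(p_comp_given_pass, 3))  # about 0.977 -- nearly vacuous as a bound
```

The table 4.3 figure of 2^-56 is only possible because, for uniformly random candidates, the average P(Y|X) is astronomically smaller than the worst-case 1/4.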


> What are the odds that a
> random 512-bit composite will be detected as composite by MR in one round?
> I don't think anyone has dependably answered that question, but the answer
> is very different from 1-(probability that MR-* says it's a prime)^-k. Any
> discussion needs to be more accurately phrased.

You are looking for P( Comp Y_t | X), where Comp Z is the complementary
event of Z. In our case, Comp Y_t is the event that Miller-Rabin(n,t) proves
n to be composite. Is that what you are looking for?


> For example, my phrasing is that in the tests that I performed, 50% (+/-
> experimental noise) of those numbers that passed a single round of MR also
> passed 128 rounds, leading me to conclude that 50% of the numbers that
> passed a single round of MR are in fact prime. Since each number that
> passed a single round was subjected to 127 additional rounds, a number of
> additional statistics can be drawn, in particular that of those that failed
> at least one round, none failed fewer than 40 rounds, and that few passed
> fewer than 40 rounds. Due to the fact that this was only iterated 65536
> times, there is still substantial experimental error available. These pieces
> of information combined indicate that for 512 bits it is necessary to have
> 80 rounds of MR to verify a prime.
 
I don't understand what you are trying to point out.  If you choose your
candidates uniformly at random and do the sieving before applying the
Miller-Rabin tests, then for a 512-bit number it is sufficient to apply 5
rounds to get a probability of error lower than (1/2)^80.

You should take a look at Damgard et al.'s paper; they did a very good
analysis.

--Anton
  





Re: timing attack countermeasures (nonrandom but unpredictable delays)

2005-11-30 Thread Travis H.
Good points all.

I was implicitly assuming that d(k, x) is related to the timing of
f(k,x) -- tailored to the algorithm(s) used, and that the attacker
cannot control k.  Actually the idea was to have k merely provide a
unique function d_k(x) for each host.

 The only way to avoid this is to make d(k,x) somehow related to
 f(k,x).  That's the idea behind things like having software or
 hardware go through both the 0 and 1 case for each bit processed in an
 exponent.  In that case, we get d(k,x) being fast when f(k,x) is slow,
>  and vice versa, and we close the timing channel.
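The both-branches idea described above is the principle behind the Montgomery ladder for exponentiation: every exponent bit costs exactly one multiply and one square, whichever value it has. A minimal Python sketch (illustrative only -- Python gives no real constant-time guarantees; the point is the balanced operation count):

```python
def ladder_pow(base, exp, mod, bits):
    """Montgomery ladder: each exponent bit triggers the same pair of
    operations (one multiply, one square), so the operation sequence is
    independent of the bit values."""
    r0, r1 = 1, base % mod        # invariant: r1 == r0 * base (mod mod)
    for i in reversed(range(bits)):
        if (exp >> i) & 1:
            r0, r1 = (r0 * r1) % mod, (r1 * r1) % mod
        else:
            r1, r0 = (r0 * r1) % mod, (r0 * r0) % mod
    return r0

# agrees with the naive computation:
assert ladder_pow(7, 1234, 10007, 16) == pow(7, 1234, 10007)
```

(A hardware or C implementation would additionally replace the `if` with a constant-time conditional swap.)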

Interestingly, I read a book that says there's no reason that a
computer which performs only reversible operations needs to dissipate
heat.  Basically, destroying information requires generating heat, but
the actual computation does not.  I can't quite place my finger on it, but
something in my head says this is related to doing operations on both
inputs and their complements.  Or, more accurately, it involves having
as many output bits as input bits.  I wonder if there is any more
significant relationship.  Wouldn't it be neat if the same
countermeasure could prevent both timing and power-consumption
side-channel attacks?
--
http://www.lightconsulting.com/~travis/  --
"We already have enough fast, insecure systems." -- Schneier & Ferguson
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B



Re: timing attack countermeasures (nonrandom but unpredictable de lays)

2005-11-30 Thread Travis H.
> Why do you need to separate f from f+d?  The attack is based on a timing
> variation that is a function of k and x, that's all.  Think of it this way:
> Your implementation with the new d(k,x) added in is indistinguishable, in
> externally visible behavior, from a *different* implementation f'(k,x)
> which has the undesired property:  That the time is a function of the
> inputs.

Suppose that the total computation time was equal to a one way
function of the inputs k and x.  How does he go about obtaining k?

It is not enough that it is a function, it must be a function that can
leak k given x and f(k,x) with an efficiency greater than a
brute-force of the input space of k (because, presumably, f and the
output are known to an attacker, so he could simply search for k that
gives the correct value(s)).

In reality, the time it takes to compute the crypto function is just
another output to the attacker, and should have the same properties
that any other output has with respect to the inputs one wishes to
keep secret.  It does not have to be constant.
--
http://www.lightconsulting.com/~travis/  --
"We already have enough fast, insecure systems." -- Schneier & Ferguson
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B



Re: Fermat's primality test vs. Miller-Rabin

2005-11-30 Thread Joseph Ashwood
- Original Message - 
From: Anton Stiglic [EMAIL PROTECTED]

Subject: RE: Fermat's primality test vs. Miller-Rabin



-Original Message-
From: [Joseph Ashwood]
Subject: Re: Fermat's primality test vs. Miller-Rabin

> > I think much of the problem is the way the number is being applied. Given
> > a stream of random numbers that have passed a single round of MR, you will
> > find that very close to 50% of them are not prime; this does not mean that
> > it passes 50% of the numbers (the 2^-80 probability given above is of this
> > type).
>
> Do you do an initial sieving to get rid of the more obvious composites?


No, I did not. Since this was specifically to test the effectiveness of MR, I
determined that it would be better to test purely based on MR, and not use
any sieving. The actual algorithm was:



16384 times
{
    question = random 512-bit number
    // this is not the most efficient, but it should remove bias, making this just MR
    while (question does not pass a single round of MR)
        question = random 512-bit number

    127 times
    {
        perform an MR round
        log MR round result
    }
}

Then I performed analysis based on the log generated. I will gladly disclose 
the source code to anyone who asks (it's in Java).
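For readers without the Java source, the core of such an experiment can be sketched in Python (an illustrative reconstruction of the pseudocode above, with smaller parameters so it runs quickly; `mr_round` is one textbook Miller-Rabin round with a random base):

```python
import random

def mr_round(n, rng=random):
    """One Miller-Rabin round with a random base; True means 'no witness found'."""
    if n < 4 or n % 2 == 0:
        return n in (2, 3)
    d, s = n - 1, 0
    while d % 2 == 0:           # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    a = rng.randrange(2, n - 1)
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = (x * x) % n
        if x == n - 1:
            return True
    return False

# Scaled-down version of the logged experiment: collect numbers passing one
# round, then subject each survivor to additional rounds and tally survivors.
survivors = 0
for _ in range(200):
    q = random.randrange(2**31, 2**32) | 1
    while not mr_round(q):
        q = random.randrange(2**31, 2**32) | 1
    if all(mr_round(q) for _ in range(20)):
        survivors += 1
```

The 512-bit size, 16384 trials, and 127 extra rounds of the original can be substituted directly; only the run time changes.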



> I'm guessing you don't, since you seem to have a result contradictory to
> what has been proven by Damgard, Landrock and Pomerance.  If you look at
> table 4.3 of HAC (which comes from the Damgard et al. paper), it says that
> if your candidates come from a uniform random distribution, then for a
> 500-bit candidate, the probability that a candidate n is composite when
> one round of Miller-Rabin said it was prime is <= (1/2)^56.  You are
> finding that the probability is about 1/2; that seems very wrong (unless
> you are not doing the sieving, which is very important).  Am I
> misunderstanding something?


No, you're not. The sieving is important from a speed standpoint, in that the
odds improve substantially based on it; however it is not, strictly
speaking, necessary: MR will return a valid result either way.



> In fact it appears that integers fall on a continuum of difficulty
> for MR, where some numbers will always fail (easy composites), and other
> numbers will always pass (primes). The problem comes when trying to denote
> which type of probability you are discussing.


> Well, I think I explained it pretty clearly.  I can try to reiterate.  Let X
> represent the event that a candidate n is composite, and let Y_t denote the
> event that Miller-Rabin(n,t) declares n to be prime, where Miller-Rabin(n,t)
> means you apply t iterations of Miller-Rabin on n.
> Now the basic theorem that we all know is that P(Y_t | X) <= (1/4)^t (this
> is a problem in one of Koblitz's basic textbooks on cryptography, for
> example).
> But this is not the probability that we are interested in; we are (at least
> I am) more interested in P(X | Y_t).  In other words, what is the
> probability that n is in fact composite when Miller-Rabin(n, t) declared n
> to be prime?  Do we agree that this is the probability that we are
> interested in?


If we are discussing that aspect, then yes we can agree to it. That is the 
probability I gave, at exactly a single round (i.e. no sieving involved), 
approaching 1/2 (my sample was too small to narrow it beyond about 2 
significant digits). I know this result is different from the standard 
number, but the experiment was performed, and the results are what I 
reported. This is where the additional question below becomes important 
(since it gives how quickly the odds of being incorrect will fall).




> What are the odds that a
> random 512-bit composite will be detected as composite by MR in one round?
> I don't think anyone has dependably answered that question, but the answer
> is very different from 1-(probability that MR-* says it's a prime)^-k. Any
> discussion needs to be more accurately phrased.


> You are looking for P( Comp Y_t | X), where Comp Z is the complementary
> event of Z. In our case, Comp Y_t is the event that Miller-Rabin(n,t)
> proves n to be composite. Is that what you are looking for?


Actually I'm not; the probability is a subtly different one, and the key
difference is in Y. Instead it is: given a random composite RC, what is
P(MR(RC, k) | Comp X)?  This appears to me to be a complex probability based
on the size of the composite. But this is the core probability that governs
the probability of composites remaining in the set of numbers that pass MR-k.
Fortunately, while it is a core probability, it is not necessary for MR's
main usefulness. Performing log_2(N)/4 rounds of MR appears to be a solid
upper bound on the requirements, and as this is the probability given by
Koblitz, and the most common assumption on usage, it is a functional
substitute.



> For example, my phrasing is that in the tests that I performed 50% (+/-
> experimental noise) of those numbers that passed a single round of MR also
> passed 128 rounds, leading me to conclude that 50% of the numbers that
> passed a single round of MR are in

Re: timing attack countermeasures (nonrandom but unpredictable de lays)

2005-11-30 Thread leichter_jerrold
| > Why do you need to separate f from f+d?  The attack is based on a timing
| > variation that is a function of k and x, that's all.  Think of it this way:
| > Your implementation with the new d(k,x) added in is indistinguishable, in
| > externally visible behavior, from a *different* implementation f'(k,x)
| > which has the undesired property:  That the time is a function of the
| > inputs.
|
| Suppose that the total computation time was equal to a one way
| function of the inputs k and x.  How does he go about obtaining k?
Why would it matter?  None of the attacks depend on inverting f in any
analytical sense.  They depend on making observations.  The assumption is not
that f is invertible, it's that it's continuous in some rough sense.
 
| It is not enough that it is a function, it must be a function that can
| leak k given x and f(k,x) with an efficiency greater than a
| brute-force of the input space of k (because, presumably, f and the
| output are known to an attacker, so he could simply search for k that
| gives the correct value(s)).
Well, yes ... but the point is to characterize such functions in some useful
way other than they don't leak.  I suppose if d(k,x) were to be computed
as D(SHA1(k | x)) for some function D, timing information would be lost
(assuming that your computation of SHA1 didn't leak!); but that's a very
expensive way to do things:  SHA1 isn't all that much cheaper to compute than
an actual encryption.
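As a concrete sketch of that construction (hypothetical parameters; SHA-1 as named above, with D simply taking the first digest bytes as a microsecond count):

```python
import hashlib
import time

def hashed_delay(k: bytes, x: bytes, max_us: int = 500) -> int:
    """Delay d(k,x) = D(SHA1(k|x)): deterministic per input pair, so an
    attacker averaging repeated identical queries learns nothing new, yet
    the delay is unpredictable without knowing k."""
    digest = hashlib.sha1(k + b"|" + x).digest()
    us = int.from_bytes(digest[:4], "big") % max_us
    time.sleep(us / 1_000_000)
    return us
```

Note the caveat stands: the total time f(k,x)+d(k,x) is still input-dependent; the claim in the post is only that the added delay itself, if D behaves like a random function of the inputs, carries no exploitable structure.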

| In reality, the time it takes to compute the crypto function is just
| another output to the attacker, and should have the same properties
| that any other output has with respect to the inputs one wishes to
| keep secret.  It does not have to be constant.
Agreed.  The problem is to (a) characterize those properties; (b) attain them
at acceptable cost.
-- Jerry




Session Key Negotiation

2005-11-30 Thread Will Morton

I am designing a transport-layer encryption protocol, and obviously wish
to use as much existing knowledge as possible, in particular TLS, which
AFAICT seems to be the state of the art.

In TLS/SSL, the client and the server negotiate a 'master secret' value
which is passed through a PRNG and used to create session keys.

My question is: why does this secret need to be negotiated?  Why can one
side or another (preference for client) not just pick a secret key and
use that?

I guess that one reason would be to give both sides some degree of
confidence in the security of the key.  Is this true, and if so, is it
the only reason?
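That is indeed the main reason in TLS: both sides inject fresh randomness into the derivation, so each peer gets fresh session keys (and replay protection) even if the other side's random generator is poor or an old secret is replayed. A simplified sketch of the shape (not the actual TLS PRF, which is defined in the TLS specification; HMAC-SHA256 here is just for illustration):

```python
import hashlib
import hmac
import os

def session_keys(master_secret: bytes, client_random: bytes,
                 server_random: bytes) -> bytes:
    """Both nonces enter the derivation, so neither side alone determines
    the session keys: a replayed secret still yields new keys as long as
    one side's nonce is fresh."""
    seed = b"key expansion" + client_random + server_random
    return hmac.new(master_secret, seed, hashlib.sha256).digest()

k1 = session_keys(b"pre-master", os.urandom(32), os.urandom(32))
```

If the client alone picked the key, a compromised or lazy client RNG would fix the key for both sides, and a recorded handshake could be replayed to the server verbatim.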

Many thanks, and apologies if this has been asked before...

Will



Re: ISAKMP flaws?

2005-11-30 Thread Bill Stewart

At 06:56 PM 11/18/2005, William Allen Simpson wrote:

| tromped around the office singing, "Every bit is sacred / Every bit
| is great / When a bit is wasted / Phil gets quite irate."



| Consider this to be one of the prime things to correct. Personally,
| I think that numbers should never (well, hardly ever) be smaller
| than 32 bits.
(Jon Callas, 1997-08-08)

Ah yes, a couple of years after Photuris.  And wasn't Jon the _author_
of the PGP variable-length integer specification?  Hoist with his own petard?


No, it was still Phil's old heavily-used petard,
worked over by various other people from PGP 3.0 and PGP Inc.
Jon was going for backwards compatibility in the OpenPGP specs.
He may have cleaned up the specs a bit,
and fixed some of the security holes from VL-integer exploits,
but unfortunately OpenPGP retained almost all the old ugliness.

I was always grumpy about the impossibility of doing stealth easily
in the native PGP formats and the fact that the OpenPGP code
fossilized it.  For political reasons I'd have also liked
PGP to have had an optional very simple format so you could
fit it into one page of Perl or equivalent to go with the
RSA in 4 lines of Perl or lisp.





Web Browser Developers Work Together on Security

2005-11-30 Thread Jason Holt


http://dot.kde.org/1132619164/

 Core KDE developer George Staikos recently hosted a meeting of the security 
developers from the leading web browsers. The aim was to come up with future 
plans to combat the security risks posed by phishing, ageing encryption 
ciphers and inconsistent SSL Certificate practise. Read on for George's report 
of the plans that will become part of KDE 4's Konqueror and future versions of 
other web browsers.


In the past few years the Internet has seen a rapid growth in phishing 
attacks. There have been many attempts to mitigate these types of attack, but 
they rarely get at the root of the problem: fundamental flaws in Internet 
architecture and browser technology. Throughout this year I had the fortunate 
opportunity to participate in discussions with members of the Internet 
Explorer, Mozilla/FireFox, and Opera development teams with the goal of 
understanding and addressing some of these issues in a co-operative manner.


Our initial and primary focus is, and continues to be, addressing issues in 
PKI as implemented in our web browsers. This involves finding a way to make 
the information presented to the user more meaningful, easier to recognise, 
easier to understand, and perhaps most importantly, finding a way to make a 
distinction for high-impact sites (banks, payment services, auction sites, 
etc) while retaining the accessibility of SSL and identity for smaller 
organisations.


In Toronto on Thursday November 17, on behalf of KDE and sponsored by my 
company Staikos Computing Services, I hosted a meeting of some of these 
developers. We shared the work we had done in recent months and discussed our 
approaches and strengths and weaknesses. It was a great experience, and the 
response seems to be that we all left feeling confident in our direction 
moving forward. There was strong support for the ideas proposed and I think 
we'll see many of them released in production browsers in the near future. I 
think we were pleasantly surprised to see elements of our own designs in each 
other's software, and it goes to show how powerful our co-operation can be.


The first topic and the easiest to agree upon is the weakening state of 
current crypto standards. With the availability of bot nets and massively 
distributed computing, current encryption standards are showing their age. 
Prompted by Opera, we are moving towards the removal of SSLv2 from our 
browsers. IE will disable SSLv2 in version 7 and it has been completely 
removed in the KDE 4 source tree already.


KDE will furthermore look to remove 40 and 56 bit ciphers, and we will 
continually work toward preferring and enforcing stronger ciphers as testing 
shows that site compatibility is not adversely affected. In addition, we will 
encourage CAs to move toward 2048-bit or stronger keys for all new roots.


These stronger cryptography rules help to protect users from malicious 
cracking attempts. From a non-technical perspective, we will aim to promote, 
encourage, and eventually enforce much stricter procedures for certificate 
signing authorities. Presently all CAs are considered equal in the user agent 
interface, irrespective of their credentials and practices. That is to say, 
they all simply get a padlock display when their issued certificate is 
validated. We believe that with a definition of a new strongly verified 
certificate with a special OID to distinguish it, we can give users a more 
prominent indicator of authentic high-profile sites, in contrast to the 
phishing sites that are becoming so prevalent today. This would be implemented 
with a significant and prominent user-interface indicator in addition to the 
present padlock. No existing certificates would see changes in the browser.


To explain what this will look like, I need to take a step back and explain 
the history of the Konqueror security UI. It was initially modeled after 
Netscape 4, displaying a closed golden padlock in the toolbar when an SSL 
session was initiated and the certificate verification process passed. The 
toolbar is an awful place for this, but consistency is extremely important, 
and during the original development phase of KDE 2.0, this was the only easy 
way to implement what we needed. Eventually we added a mechanism to add icons 
to the status bar and made the status bar a permanent fixture in browser 
windows, preventing malicious sites from spoofing the browser chrome and 
making the security icon more obvious to the user. In the past year a padlock 
and yellow highlight were added to the location bar as an additional 
indication. This was primarily based on FireFox and Opera.


I was initially resistant to the idea of using colour to indicate security - 
especially the colour yellow! However, the ideas we discussed have been 
implemented by Microsoft in their IE7 address bar, and when I saw it in 
action I was sold. I think we should implement Konqueror the same way for 
KDE4. It involves the following steps:



Web Browser Developers Work Together on Security

2005-11-30 Thread Aram Perez
Core KDE developer George Staikos recently hosted a meeting of the  
security developers from the leading web browsers. The aim was to  
come up with future plans to combat the security risks posed by  
phishing, ageing encryption ciphers and inconsistent SSL Certificate  
practise. Read on for George's report of the plans that will become  
part of KDE 4's Konqueror and future versions of other web browsers...


http://dot.kde.org/1132619164/



[Clips] Cyberterror 'overhyped,' security guru says

2005-11-30 Thread R. A. Hettinga

--- begin forwarded text


 Delivered-To: [EMAIL PROTECTED]
 Date: Thu, 24 Nov 2005 14:08:41 -0500
 To: Philodox Clips List [EMAIL PROTECTED]
 From: R. A. Hettinga [EMAIL PROTECTED]
 Subject: [Clips] Cyberterror 'overhyped,' security guru says
 Reply-To: [EMAIL PROTECTED]
 Sender: [EMAIL PROTECTED]

 http://news.com.com/2102-7348_3-5968997.html?tag=st.util.print

 CNET News

  Cyberterror 'overhyped,' security guru says

  By Tom Espiner

  Story last modified Wed Nov 23 07:41:00 PST 2005


 Fears of cyberterror could actually hurt IT security, a threats expert
asserts.

  Bruce Schneier, who has written several books on security and is the
 founder of Counterpane Internet Security, told ZDNet UK that officials
 claiming terrorists pose a serious danger to computer networks are guilty
 of directing attention away from the threat faced from criminals.

  "I think that the terrorist threat is overhyped, and the criminal threat
 is underhyped," Schneier said Tuesday. "I hear people talk about the risks
 to critical infrastructure from cyberterrorism, but the risks come
 primarily from criminals. It's just that criminals at the moment aren't as
 'sexy' as terrorists."

  Schneier was speaking after the SANS Institute released its latest
 security report at an event in London. During this event, Roger Cummings,
 director of the U.K. National Infrastructure Security Coordination Center,
 said that foreign governments are the primary threat to the U.K.'s critical
 infrastructure.

  "Foreign states are probing the (critical infrastructure) for
 information," Cummings said. The U.K.'s (critical infrastructure) is made
 up of financial institutions; key transport, telecom and energy networks;
 and government organizations.

  Schneier, though, is concerned that governments are focusing too much on
 cyberterrorism, which is diverting badly needed resources from fighting
 cybercrime.

 "We should not ignore criminals, and I think we're underspending on crime.
 If you look at ID theft and extortion--it still goes on. Criminals are
 after money," Schneier said.

  Cummings also said that hackers are already being employed by both
 organized criminals and government bodies, in what he termed the malicious
 marketplace.

  Schneier agrees this is an issue.

  "There is definitely a marketplace for vulnerabilities, exploits and old
 computers. It's a bad development, but there are definitely conduits
 between hackers and criminals," Schneier said.


 --
 -
 R. A. Hettinga mailto: [EMAIL PROTECTED]
 The Internet Bearer Underwriting Corporation http://www.ibuc.com/
 44 Farquhar Street, Boston, MA 02131 USA
 ... however it may deserve respect for its usefulness and antiquity,
 [predicting the end of the world] has not been found agreeable to
 experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'
 ___
 Clips mailing list
 [EMAIL PROTECTED]
 http://www.philodox.com/mailman/listinfo/clips

--- end forwarded text



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Broken SSL domain name trust model

2005-11-30 Thread Anne Lynn Wheeler
so this is (another in a long series of) posts about the SSL domain name
trust model
http://www.garlic.com/~lynn/2005t.html#34

basically, there was supposed to be a binding between the URL the user
typed in, the domain name in the URL, the domain name in the digital
certificate, the public key in the digital certificate, and something
that certification authorities do. this has gotten terribly obfuscated
and loses much of its security value because users rarely deal directly
in actual URLs anymore (so the whole rest of the trust chain becomes
significantly devalued).

the contrast is the PGP model where there is still a direct relationship
between the certification the user does to load a public key in their
trusted public key repository, the displayed FROM email address and the
looking up a public key using the displayed FROM email address.

the issue isn't so much the PGP trust model vis-a-vis the PKI trust
model ... it is the obfuscation of the PKI trust model for URL domain
names because of the obfuscation of URLs.

so one way to restore some meaning in a digital signature trust model is
to marry some form of browser bookmarks with a PGP-style trusted public key
repository. these trusted bookmarks contain some identifier, a url,
and a public key. the user has had to do something specific regarding the
initial binding between the identifier, the url and the public key. such
trusted bookmarks might then work as follows:

1) user clicks on the bookmark, and a pseudo SSL/TLS session is initiated
immediately by transmitting a random session key encrypted with the
registered public key. this process might possibly be able to take
advantage of any registered public keys that might be available from
security enhancements to the domain name infrastructure

2) user clicks on something in the web page (icon, thumbnail, text,
etc). this is used to select a bookmark entry ... and then proceeds as
in #1 above (rather than used directly in conjunction with a URL and
certificate that may be supplied by an attacker).
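the secure-bookmark scheme in #1 and #2 could look something like the
following minimal sketch in Python. the class names, the fingerprint choice
(SHA-256 of the raw public key bytes), and the API are all my own assumptions
for illustration, not part of the original proposal:

```python
import hashlib
from dataclasses import dataclass


@dataclass(frozen=True)
class TrustedBookmark:
    identifier: str       # what the user sees and clicks, e.g. "My Bank"
    url: str              # the URL the user validated once, up front
    key_fingerprint: str  # hex SHA-256 of the site's public key


class BookmarkStore:
    def __init__(self):
        self._by_id = {}

    def register(self, identifier, url, public_key: bytes):
        # The one-time act of binding identifier + URL + public key is
        # where the user's trust decision happens (the PGP-style step).
        fp = hashlib.sha256(public_key).hexdigest()
        self._by_id[identifier] = TrustedBookmark(identifier, url, fp)

    def verify(self, identifier, presented_key: bytes) -> TrustedBookmark:
        # On a later visit, the presented key must match the stored
        # binding; the URL/certificate the page supplies is never trusted.
        bm = self._by_id[identifier]
        if hashlib.sha256(presented_key).hexdigest() != bm.key_fingerprint:
            raise ValueError("presented key does not match trusted bookmark")
        return bm  # caller connects to bm.url, keying the session with bm's key
```

on a click, the browser would look up the bookmark entry and key the session
with the registered public key, rather than trusting whatever URL or
certificate an attacker-controlled page happened to supply.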

there are other proposals that try and collapse the obfuscation between
what users see on webpages and the actual security processes (trying to
provide a more meaningful direct binding between what the user sees/does
and any authentication mechanism) ... but most of them try and invent
brand new authentication technologies for the process.

digital signatures and public keys are perfectly valid authentication
technologies  but unfortunately have gotten terribly bound up in the
certification authority business processes. the issue here is to take
perfectly valid digital signature authentication process ... and create
a much more meaningful trust binding for the end-user (not limited to
solely the existing certification authority and digital certificate
business models).

the issue in #2 is that the original electronic commerce trust process
was that the URL initially provided by the user (typed or other means)
started the trust process and avoided spoofed e-commerce websites. one
of the problems has been that SSL has so much overhead that e-commerce
sites started reserving it just for the payment operation.
As a result, users didn't actually encounter SSL until they hit the
checkout/pay button. Unfortunately, if you were already at a spoofed
site, the checkout/pay button would have the attacker providing the
actual URL, the domain name in that URL, and the SSL domain name
certificate.

so the challenge is to drastically reduce the obfuscation in the
existing process ... either by providing a direct mechanism under the
user's control for getting to secure websites or by doing something that
revalidates things once a user is at a supposedly secure website.

the issue is that if users start doing any pre-validation step and
storing the results ... possibly something like secure bookmarks ...
then it becomes fairly straight-forward to store any related digital
certificates along with the bookmark entry. if that happens, then it
becomes obvious that the only thing really needed is the binding the
user has done between the public key in the digital certificate and the
bookmark entry. at that point, it starts to also become clear that such
digital certificates aren't providing a lot of value (being made
redundant and superfluous by the trust verification that the user has
done regarding the various pieces of data in the entry).

in effect, the PKI model is based on the premise that it is a substitute
for when the relying party isn't able to perform any trust
validation/operations themselves (i.e. the letters of
credit/introduction model from the sailing ship days). when the relying
parties have to go to any of their own trust operations, then there is
less reliance and less value in the trust operations performed by
certification authorities.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Encryption using password-derived keys

2005-11-30 Thread Jack Lloyd

The basic scenario I'm looking at is encrypting some data using a
password-derived key (using PBKDF2 with sane salt sizes and iteration
counts). I am not sure if what I'm doing is sound practice or just pointless
overengineering and wanted to get a sanity check.

My inclination is to use the PBKDF2 output as a key encryption key, rather than
using it to directly key the cipher (with the key used for the cipher itself
being created by a good PRNG). For some reason the idea of using it directly
makes me nervous, but not in a way I can articulate, leading me to suspect I'm
worried over nothing.

So, assuming using it as a KEK makes sense: At first I thought to use XOR to
combine the two keys, but realized that could lead to related key attacks (by
just flipping bits in the field containing the encrypted key). That is probably
not a problem with good algorithms, but, then again, why take the chance; so I
was thinking instead using NIST's AES-wrap (or perhaps a less weirdly designed
variant of it that uses HMAC for integrity checking and AES in CBC mode for
confidentiality).
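For what it's worth, the KEK idea can be sketched with nothing but standard
library primitives. This is a toy encrypt-then-MAC wrap, with HMAC-SHA256
standing in for both the keystream and the integrity tag (the Python stdlib
has no AES), and illustrative salt/iteration choices; it is not NIST AES-wrap
and not a vetted construction:

```python
import hashlib
import hmac
import os


def wrap_key(password: bytes, data_key: bytes):
    """Wrap a 32-byte random data key under a password-derived KEK."""
    assert len(data_key) == 32
    salt = os.urandom(16)
    # PBKDF2 output split into separate encryption and MAC keys.
    kek = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000, dklen=64)
    enc_key, mac_key = kek[:32], kek[32:]
    # One 32-byte block of HMAC keystream; fresh salt => fresh keystream.
    keystream = hmac.new(enc_key, b"wrap-v1", hashlib.sha256).digest()
    wrapped = bytes(a ^ b for a, b in zip(data_key, keystream))
    tag = hmac.new(mac_key, wrapped, hashlib.sha256).digest()
    return salt, wrapped, tag


def unwrap_key(password: bytes, salt: bytes, wrapped: bytes, tag: bytes):
    kek = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000, dklen=64)
    enc_key, mac_key = kek[:32], kek[32:]
    # Verify integrity before decrypting -- this is what closes off the
    # bit-flipping concern with a bare XOR of the two keys.
    expect = hmac.new(mac_key, wrapped, hashlib.sha256).digest()
    if not hmac.compare_digest(expect, tag):
        raise ValueError("wrong password or tampered wrap")
    keystream = hmac.new(enc_key, b"wrap-v1", hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(wrapped, keystream))
```

Checking the tag before unwrapping means a flipped bit in the stored wrapped
key is rejected outright instead of yielding a related key.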

Am I thinking about this far harder than I should?

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Call for papers -- IS-TSPQ 2006

2005-11-30 Thread Ed Gerck

==

  CALL FOR PAPERS


 First International Workshop on
  Interoperability Solutions to Trust, Security, Policies and QoS
 for Enhanced Enterprise Systems
  (IS-TSPQ 2006)

  In the frame of

Second International Conference on
 Interoperability for Enterprise Software and Applications
 (I-ESA)

 Bordeaux, France
 March 21st, 2006

 http://istspq2006.cs.helsinki.fi/

==

  SCOPE:

With the increasing demands from the networked economy and government,
interoperability has become a strategic factor for enterprise software
and applications. In the context of collaboration between enterprises
and their business services, several interoperability issues stem from
non-functional aspects (NFA). The non-functional aspects are introduced
to provide separation of concerns between the main functions of enterprise
software and the supporting themes that cause modification of the main
functional behaviour. Traditionally, this is applied to supporting
technology that addresses, for example, quality of service, security and
dependability, but it may also involve business value, business policies,
and trust.

The IS-TSPQ 2006 workshop objective is to explore architectures, models,
systems, and utilization for non-functional aspects, especially addressing
the new requirements on interoperability. Space is given for building
understanding of the non-functional aspects themselves and improve the
shared understanding of future solutions for non-functional aspect
interoperability.

The IS-TSPQ 2006 workshop is hosted by the Second International Conference
on Interoperability of Enterprise Software and Applications (I-ESA)
organized by the INTEROP NoE. The workshop aims to bring together
researchers and practitioners.


  Topics:

In keeping with the focus on interoperability and non-functional aspects,
the IS-TSPQ 2006 workshop especially encourages original unpublished papers
addressing the following areas:

- modelling of enterprises and their collaboration;
- interoperability architectures and models;
- negotiation mechanisms and representations of agreements that support
  interoperability;
- challenges from the strategic business needs;
- alignment of business needs and computing support; and
- linking the above to trusted, dependable infrastructure solutions.

General papers on these topics will be welcome, but it would be particularly
valuable for papers to relate to the target domains of:

- Trust and Trust Models, Reputation, and Privacy on data integration
  and inter-enterprise computing;
- eContracting, contract knowledge management, business commitment
  monitoring and fulfilment, and the ontologies of contracts;
- Non-Functional Aspects, Quality of Service (QoS), Quality Attributes;
- Information Security, Performance, Reliability and Availability;
- Digital Rights and Policy Management, Compliance, regulatory
  environments, corporate governance, and Policy Frameworks; and
- Business Value, Business processes, Risk Management and Asset
  Management.


  SUBMISSION GUIDELINES:

Submissions must be no longer than 12 pages in length and should follow
the guidelines given at
http://www.hermes-science.com/word/eng-guidelines.doc.
Authors are requested to submit their manuscripts electronically in PDF
format using the paper submission tool available at the workshop web page.

The workshop proceedings will be published after the conference (and will be
sent by post to the registered participants). Papers will be included in the
proceedings only if they are presented by one of the authors at the workshop.
The final, camera-ready papers are accepted by the publisher as Word files
only.


  GENERAL INFORMATION:

For more information please visit the web site at:

 http://istspq2006.cs.helsinki.fi/

 http://www.i-esa.org/


  IMPORTANT DATES:

 Papers due: January 5, 2006
 Acceptance: February 1, 2006
 Papers for participant proceedings: February 23, 2006
 Workshop : March 21, 2006
 Final papers due: April 10, 2006


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Matt Blaze finds flaws in FBI wiretap equipment

2005-11-30 Thread Perry E. Metzger

New York Times article:

   Security Flaw Allows Wiretaps to Be Evaded, Study Finds

   By JOHN SCHWARTZ and JOHN MARKOFF
   Published: November 30, 2005

   The technology used for decades by law enforcement agents to wiretap
   telephones has a security flaw that allows the person being wiretapped
   to stop the recorder remotely, according to research by computer
   security experts who studied the system. It is also possible to
   falsify the numbers dialed, they said.

   Someone being wiretapped can easily employ these devastating
   countermeasures with off-the-shelf equipment, said the lead
   researcher, Matt Blaze, an associate professor of computer and
   information science at the University of Pennsylvania.

http://www.nytimes.com/2005/11/30/national/30tap.html

original paper at:

http://www.crypto.com/papers/wiretapping/   

-- 
Perry E. Metzger[EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


ADMIN: microsoft.com subscribers may be unsubscribed soon

2005-11-30 Thread Perry E. Metzger

About half the messages to cryptography are being rejected by
microsoft.com's anti-spam content filter, with messages like this:

[EMAIL PROTECTED]: host maila.microsoft.com[131.107.3.125] said: 550 5.7.1
Your e-mail was rejected by an anti-spam content filter on gateway
(131.107.3.125).Reasons for rejection may be: obscene language, graphics,
or spam-like characteristics. Removing these may let the e-mail through the
filter. (in reply to end of DATA command)

I've been manually preventing the Microsoft addresses from being
unsubscribed from the list for excess bounces but I'm going to stop
doing that shortly -- it is too much work. Sorry.

I would forward examples of the messages that are bouncing to the
folks at MS but unfortunately, it is impossible to do so for obvious
reasons.

-- 
Perry E. Metzger[EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Session Key Negotiation

2005-11-30 Thread Eric Rescorla
Will Morton [EMAIL PROTECTED] writes:
 I am designing a transport-layer encryption protocol, and obviously wish
 to use as much existing knowledge as possible, in particular TLS, which
 AFAICT seems to be the state of the art.

 In TLS/SSL, the client and the server negotiate a 'master secret' value
 which is passed through a PRNG and used to create session keys.

May I ask why you don't just use TLS?


 My question is: why does this secret need to be negotiated?  Why can one
 side or another (preference for client) not just pick a secret key and
 use that?

Well, in TLS in RSA mode, the client picks the secret value (technical
term: PreMaster Secret), but both sides contribute randomness to ensure
that the Master Secret is unique. This is a clean way to
ensure key uniqueness and prevent replay attacks.
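To make the "both sides contribute randomness" point concrete, here is how
TLS 1.2 (RFC 5246, which postdates this thread) derives the Master Secret
from the PreMaster Secret and both parties' random values; a sketch of the
PRF, not a full implementation:

```python
import hashlib
import hmac
import os


def tls12_prf(secret: bytes, label: bytes, seed: bytes, length: int) -> bytes:
    """P_SHA256 expansion from RFC 5246, section 5."""
    seed = label + seed
    out, a = b"", seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()            # A(i)
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()  # HMAC(secret, A(i) + seed)
    return out[:length]


# The client picks the PreMaster Secret; both sides contribute nonces.
pre_master_secret = os.urandom(48)
client_random = os.urandom(32)
server_random = os.urandom(32)

master_secret = tls12_prf(pre_master_secret, b"master secret",
                          client_random + server_random, 48)
```

Even if an attacker replays the client's encrypted PreMaster Secret, a fresh
server_random yields a different Master Secret, which is the replay
protection described above.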

In DH mode, of course, both sides contribute shares, but that's
just how DH works.

-Ekr

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


`Identified by` CA technique of TrustBar adopted by IE, other browsers...

2005-11-30 Thread Amir Herzberg
IE 7 implements some of TrustBar and FF improvements to security 
indicators. Specifically, they now color-code the location bar and 
added, in SSL/TLS pages, the name of the site and the `Identified by` 
name of CA - like TrustBar.


They do not yet implement some of our other mechanisms, including the 
petnaming (allowing users to select their own name or logo which will be 
automatically displayed on entering a specific site), and the `random 
training exercise attacks`. OTOH, at least regarding the last 
mechanism, we definitely agree it is not yet ready for prime time (and 
hope soon to provide a better version of it).


Some relevant links:

http://blogs.msdn.com/ie/archive/2005/11/21/495507.aspx - IE developer 
describing the improved security UI, with some screen shots


http://dot.kde.org/1132619164/ - KDE developer describes a meeting of 
developers of four major browsers (IE, FF, Opera, KDE) where they agreed 
to adopt these ideas


http://AmirHerzberg.com/TrustBar - my page for info and downloads of 
TrustBar... TrustBar is a public domain, open source project.

--
Best regards,

Amir Herzberg

Associate Professor
Department of Computer Science
Bar Ilan University
http://AmirHerzberg.com
Try TrustBar - improved browser security UI: 
http://AmirHerzberg.com/TrustBar
Visit my Hall Of Shame of Unprotected Login pages: 
http://AmirHerzberg.com/shame


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]