Re: "splints" for broken hash functions

2004-09-06 Thread bear


On Wed, 1 Sep 2004, David Wagner wrote:

> Hal Finney writes:
>> [John Denker proposes:] the Bi are the input blocks:
>>   (IV) -> B1 -> B2 -> B3 -> ... -> Bk -> H1
>>   (IV) -> B2 -> B3 -> ... -> Bk -> B1 -> H2
>> then we combine H1 and H2 nonlinearly.
>
> This does not add any strength against Joux's attack.  One can find
> collisions for this in 80*2^80 time with Joux's attack.
>
> First, generate 2^80 collisions for the top line.  Find B1,B1* that
> produce a collision, i.e., C(IV,B1)=C(IV,B1*)=V2.  Then, find B2,B2*
> that produce a collision, i.e., C(V2,B2)=C(V2,B2*)=V3.  Continue to
> find B3,B3*, ..., Bk,Bk*.  Note that we can combine these in any way
> we like (e.g., B1, B2*, B3*, B4, ..., Bk) to get 2^80 different messages
> that all produce the same output in the top line (same H1).
>
> Next, look at the bottom line.  For each of the 2^80 ways to combine
> the above blocks, compute what output you get in the bottom line.
> By the birthday paradox, you will find some pair that produces a
> collision in the bottom line (same H2).  But that pair also produces
> a collision in the top line (since all pairs collide in the top line),
> so you have a collision for the whole hash (same H1,H2).

The birthday paradox does not apply in this case because H1 is fixed.
The above construction is in fact secure against the Joux attack as
stated.  2^80 work will find, on average, one collision.
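For concreteness, here is a small Python sketch of the multicollision bookkeeping Wagner describes, with toy parameters and a stand-in 16-bit compression function in place of anything real: k single-block collisions against the top line yield 2^k messages that all share the same H1. Whether the birthday step then applies to the bottom line is exactly the point in dispute above; the sketch only illustrates the combinatorics.

import hashlib
from itertools import product

def C(chaining: bytes, block: bytes) -> bytes:
    # Stand-in compression function: truncated SHA-256 of chaining||block,
    # giving a 16-bit toy chaining value so collisions are cheap to find.
    return hashlib.sha256(chaining + block).digest()[:2]

def find_block_collision(chaining: bytes):
    """Brute-force two distinct one-block messages that collide under C."""
    seen = {}
    i = 0
    while True:
        block = i.to_bytes(4, "big")
        out = C(chaining, block)
        if out in seen:
            return seen[out], block, out
        seen[out] = block
        i += 1

IV = b"\x00\x00"
k = 8                       # stands in for k = 80 in the real attack
pairs, chaining = [], IV
for _ in range(k):
    b, b_star, chaining = find_block_collision(chaining)
    pairs.append((b, b_star))

# Any choice of one block from each pair gives the same top-line chaining
# value, so we get 2^k messages that all collide on H1.
messages = [b"".join(choice) for choice in product(*pairs)]
print(len(messages), "messages, all with the same top-line hash")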

Bear



Re: "splints" for broken hash functions

2004-09-06 Thread Bill Stewart

How about this simpler construction?
  (IV1) -> B1 -> B2 -> B3 -> ... -> Bk -> H1
  (IV2) -> B1 -> B2 -> B3 -> ... -> Bk -> H2
This approach and the "cache Block 1 until the end" approach
are both special-case versions of "maintain more state" approaches.
This special case maintains 2*(size of hash output) bits of state.
The "cache Block 1" case maintains
(size of hash output) + (size of Block 1) bits of state,
but doesn't change the (size of Block 1) bits between cycles.
(Also, if you're going to do that, could you maintain
(hash(Block1)) bits between cycles instead of the raw bits?)
They both have some obvious simplicity to them,
but I'm not convinced that simplicity actually helps,
compared to other ways of getting more state.
Perhaps the effective state of the 2-IV version is
twice the size of the basic hash, perhaps it's less.
My intuition is that more mixing might be better,
and probably isn't worse, but I could easily be wrong.
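As a rough Python illustration of the 2-IV idea: hashlib gives no way to set SHA-1's IV directly, so this sketch fakes two IVs by prepending a distinct fixed one-block prefix to each line and concatenating the outputs. The prefixes and the function name are made up for illustration; this is a sketch of the extra-state idea, not a vetted construction.

import hashlib

PREFIX1 = b"\x00" * 64   # one 512-bit block per line, standing in for IV1
PREFIX2 = b"\xff" * 64   # and IV2; each fixes a distinct chaining value

def two_line_hash(message: bytes) -> bytes:
    h1 = hashlib.sha1(PREFIX1 + message).digest()
    h2 = hashlib.sha1(PREFIX2 + message).digest()
    return h1 + h2          # 2 * 160 bits of output/state

print(two_line_hash(b"example message").hex())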




Re: Compression theory reference?

2004-09-06 Thread Bill Stewart
It's a sad situation when you have to get a non-technical
judge to resolve academic conflicts like this,
but it's your head that you're banging against the wall, not mine.
If you want to appeal to authority, there's the FAQ,
which of course requires explaining the Usenet FAQ traditions;
perhaps you can find Lempel, Ziv, or Welch?
In reality, you could show an algorithm for which any input
gets at most _one_ bit longer, rather than arbitrarily longer.
And of course most of the compression algorithms work because
real data almost always has structure which reduces the entropy.
My information theory books from grad school have
long vanished into some closet, and were written
just about the time LZ came out so they mainly discuss
Huffman coding in the discrete-message sections,
but you should be able to find a modern book on the topic.
Matt Crawford's inductive argument is very strong -
it gives you a constructive proof for any integer N:
starting at 1 and working your way up,
it shows that if there's a lossless coding that doesn't make
any message of length up to N any longer, then it doesn't make
any of them any shorter either,
so it's not a compression method, just a permutation.
The "You could compress any message down to 1 bit"
argument is a great throwaway line, but it isn't rigorously valid.
(And if it were, you might as well compress down to zero bits while you're 
at it.)
The problem is that for most lossless compression algorithms,
some strings will get shorter (maybe even much shorter),
but some will stay the same length,
so even if you had a hypothetical "never gets longer"
compression algorithm, you can't guarantee that your
starting message would be one that got shorter as opposed to staying the same,
so you can't say that all messages would compress to zero.

If your judge doesn't like inductions that count up,
or your academic opponents insist on examining methods that count down,
Bear's argument gets you most of the way there,
by emphasizing the 1-1 mapping aspect, but you could easily get tangled.
(To do this properly, you need to do n and 2**n, but I'll use 10 for 
concreteness.)

There are 1024 10-bit messages, and only 512 9-bit messages,
so something obviously happened to the >=512 that didn't compress to 9 bits.
Maybe 512 of them didn't compress further and stayed as 10-bit;
almost certainly some of them became 8 bits or shorter.
At least one message can't get shorter at all, because even counting
the empty string there are only
(2**10 - 1) = 2**9 + 2**8 + ... + 2**1 + 2**0
strings shorter than 10 bits, which is fewer than the 1024 you started with.
So if you want to recurse downwards through repeated compression,
you need to be sure your mapping keeps track of the ones that
didn't compress the first time (maybe they'll compress the second time?),
the ones that compressed by one bit,
and the ones that compressed by more than one bit,
and avoid wandering around in a maze of twisty little passages.
So working your way up is probably cleaner.
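A few lines of Python make the count-up bookkeeping concrete; this is just the counting argument, not any particular compressor.

def count_strings(max_len):
    """Number of bit strings with length <= max_len (empty string included)."""
    return sum(2 ** n for n in range(max_len + 1))

n = 10
inputs = 2 ** n                          # 1024 ten-bit messages
shorter_outputs = count_strings(n - 1)   # 1 + 2 + 4 + ... + 512 = 1023

assert shorter_outputs < inputs
print(f"{inputs} inputs, only {shorter_outputs} shorter outputs:")
print("at least", inputs - shorter_outputs, "ten-bit message(s) cannot shrink.")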
At 11:30 AM 9/1/2004, bear wrote:
> Actually you don't need to take it all the way to the
> reductio ad absurdum here.  There are 1024 10-bit
> messages.  There are 512 9-bit messages.  You need
> to point out that since a one-to-one mapping between
> sets of different cardinality cannot exist, it is not
> possible to create something that will compress every
> ten-bit message by one bit.
>
> Or, slightly more formally, assume that a function C
> can reversibly compress any ten-bit message by one bit.
> Since there are 1024 distinct ten-bit messages, there
> must be at least 1024 distinct nine-bit messages which,
> when the reversal is applied, result in these 1024
> messages.  There are exactly 512 distinct nine-bit
> messages.  Therefore 512 >= 1024, which is absurd.
>
> Bear

Bill Stewart  [EMAIL PROTECTED] 



Re: Compression theory reference?

2004-09-06 Thread John Denker
Matt Crawford wrote:
> Plus a string of log(N) bits telling you how many times to apply the
> decompression function!
> Uh-oh, now it goes over the judge's head ...
Hadmut Danisch wrote:
> The problem is that if you ask for a string of log(N) bits, then
> someone else could take this as a proof that this actually works,
> because a string of log(N) bits is obviously shorter than the
> message of N bits, thus the compression scheme is working. Hooray!
That misses the point of the construction that was the subject of
Matt's remark.  The point was (and remains) that the compressed
output (whether it be 1 bit, or 1+log(N) bits, or 1+log^*(N) bits)
is ridiculous because it is manifestly undecodeable.  It is far, far
too good to be true.
The only question is whether the construction is simple enough
for the judge to understand.
There is no question whether the construction is a valid _reductio
ad absurdum_.
  While we are on the subject, I recommend the clean and elegant
  argument submitted by Victor Duchovni (08/31/2004 03:50 PM) and
  also in more detail by Matt Crawford (08/31/2004 06:04 PM).  It
  uses mathematical induction rather than proof-by-contradiction.
  It is a less showy argument, but probably it is understandable
  by a wider audience.  It proves a less-spectacular point, but it
  is quite sufficient to show that the we-can-compress-anything
  claim is false.  (Although with either approach, at least *some*
  mathematical sophistication is required.  Neither PbC nor MI will
  give you any traction with ten-year-olds.)
  So it appears we have many different ways of approaching things:
   1) The pigeon-hole argument.  (This disproves the claim that all
N-bit strings are compressible ... even if the claim is restricted
to a particular fixed N.)
   2) The mathematical induction argument.  (Requires the claimed
algorithm to be non-expansive for a range of N.)
   3) The proof-by-contradiction.  (Requires the claimed algorithm
to be compressive -- not just non-expansive -- for a range of N.)
   4) Readily-demonstrable failure of *any* particular claimed example,
including Lempel-Ziv and all the rest.
   *) Others?
  Harumph.  That really ought to be enough.  Indeed *one* disproof
  should have been enough.
> The problem is that the number of iterations is not in the order of
> N, but in the order of 2^N, so it takes log2(around 2^N) = around N bits to
> store the number of iterations.
I don't see why the number of iterations should be exponential in
the length (N) of the input string.  A compression function is
supposed to decrease N.  It is not supposed to decrement the
number represented by some N-bit numeral -- after all, the string
might not represent a number at all.
Also I repeat that there exist special cases (e.g. inputs of
known fixed length) for which no extra bits need be represented,
as I explained previously.
> The recursion converts a message of
> N bits recursively into a message of one or zero bits (to your
> taste), *and* a number which takes around N bits to be stored.
> Nothing is won. But prove that.
I disagree, for the reasons given above.
In the worst case, you need log^*(N) extra bits, not N bits.  In
special cases, you don't need any extra bits at all.  The win
is very substantial.  The win is extreme.
> This recursion game is far more complicated than it appears to be.
Maybe.  But there's no need to make it more complicated than it
really is.
> Note also that storing a number takes in reality more than log(N)
> bits. Why? Because you don't know N in advance. We don't have any
> limit for the message length.
For general N, that's true.
> So your counting register needs,
> theoretically, infinitely many bits.
Maybe.  For many practical purposes, the required number of bits
is considerably less than infinity.
> When you're finished you know
> how many bits your number took. But storing your number needs an
> end symbol or a tristate bit (0, 1, void). That's a common mistake.
We agree that there are many common mistakes.  We agree that
it is a mistake to have undelimited strings.  But it is also a
mistake to think that you need to reserve a special symbol to
mark the end of the string.  Yes, that is one option, but from a
data-compression point of view it is an inefficient option.
Anybody who is interested in this stuff reeeally ought to read
Chaitin's work.  He makes a big fuss about the existence of
self-delimiting strings and self-delimiting programs.  There
are many examples of such:
 -- The codewords of many commonly-used compression algorithms
  are self-delimiting.  This is related to the property of being
  instantaneously decodable.
 -- As Chaitin points out, you can set up a dialect of Lisp such
  that Lisp programs are self-delimiting.
 -- If you want to be able to represent M, where M is *any* N-bit
  number, you need more than log(M) bits (i.e. more than N bits).
  That's because you also need to specify how many bits are used to
  represent M; that count is about log(M), and writing it down adds
  another log(log(M)) bits.
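As one concrete example of a self-delimiting code (my illustration, not one of Chaitin's), Elias gamma coding prefixes a number's binary form with a unary length field, so the decoder knows where each codeword ends without any terminator symbol, at a cost of roughly 2*log2(M) bits; fancier variants such as Elias delta push the overhead down toward the log(log(M)) mentioned above.

def elias_gamma_encode(m: int) -> str:
    """Encode a positive integer as a self-delimiting bit string."""
    assert m >= 1
    binary = bin(m)[2:]               # log2(m)+1 bits, leading bit is 1
    return "0" * (len(binary) - 1) + binary

def elias_gamma_decode(bits: str):
    """Decode one codeword from the front of 'bits'; return (value, rest)."""
    zeros = 0
    while bits[zeros] == "0":
        zeros += 1
    value = int(bits[zeros:2 * zeros + 1], 2)
    return value, bits[2 * zeros + 1:]

# Two codewords concatenated with no separator still decode unambiguously.
stream = elias_gamma_encode(13) + elias_gamma_encode(5)
first, rest = elias_gamma_decode(stream)
second, _ = elias_gamma_decode(rest)
assert (first, second) == (13, 5)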

[wearables] CFP: Workshop on Pervasive Computing and Communication Security (fwd from [EMAIL PROTECTED])

2004-09-06 Thread Eugen Leitl
From: Bob Mayo [EMAIL PROTECTED]
Subject: [wearables] CFP: Workshop on Pervasive Computing and Communication Security
To: [EMAIL PROTECTED]
Date: Thu, 2 Sep 2004 16:36:15 -0700 (PDT)
Reply-To: [EMAIL PROTECTED]




CALL FOR PAPERS

  PerSec 2005

 Second IEEE International Workshop on Pervasive Computing and
 Communication Security

   Held in conjunction with IEEE PerCom 2005

8 March 2005, Kauai island, Hawaii, USA

  http://www.cl.cam.ac.uk/persec-2005/


Research in pervasive computing continues to gain momentum. The importance of
security and privacy in a pervasive computing environment cannot be
overstated. PerSec 2005 will bring together the world's experts on this
topic and provide an international forum to stimulate and disseminate original
research ideas and results in this field.

Contributions are solicited in all aspects of security and privacy in pervasive
computing, including:

Models for access control, authentication and privacy management.

Incorporation of contextual information into security and privacy models, and
mechanisms.

Management of tradeoffs between security, usability, performance, power
consumption and other attributes.

Architectures and engineering approaches to fit security and privacy features
into mobile and wearable devices.

Biometric methods for pervasive computing.

Descriptions of pilot programs, case studies, applications, and experiments
integrating security into pervasive computing.

Auditing and forensic information management in pervasive settings.

Protocols for trust management in networks for pervasive computing.

Incorporation of security into communication protocols, computing architectures
and user interface designs for pervasive computing.

Impact of security and privacy in relation to the social, legal, educational
and economic implications of pervasive computing.



INSTRUCTIONS FOR AUTHORS
========================


Papers must be sent to persec-2005 at cl.cam.ac.uk as file attachments in Adobe
PDF format.

Papers must have authors' affiliation and contact information on the first
page.

Papers must be unpublished and not being considered elsewhere for publication.
In particular, papers submitted to PerSec must not be concurrently submitted to
PerCom in identical or modified form.

Papers must be formatted in strict accordance with the IEEE Computer Society
author guidelines published at
ftp://pubftp.computer.org/Press/Outgoing/proceedings/INSTRUCT.HTM. For your
convenience, templates are available at
ftp://pubftp.computer.org/Press/Outgoing/proceedings/. LaTeX is recommended.

Papers are limited to 5 pages in IEEE 8.5x11 conference
format. Excessively long papers will be returned without review.

Papers selected for presentation will be published in the Workshop Proceedings
of PerCom 2005 by IEEE Press.


IMPORTANT DATES
===

Paper submission: 13 September 2004

Acceptance Notification: 15 November 2004

Camera-ready manuscripts: 29 November 2004

PerSec Workshop: 8 March 2005 (first day of PerCom, which runs until the 
12th)


PROGRAM CO-CHAIRS
=

 * Frank Stajano, University of Cambridge, UK
 * Roshan Thomas, McAfee Research, USA


SECRETARY
=

 * Boris Dragovic, University of Cambridge, UK

Contact email (goes to co-chairs and secretary): persec-2005 at 
cl.cam.ac.uk


STEERING COMMITTEE CHAIR
========================


 * Ravi Sandhu, George Mason University, USA


PROGRAM COMMITTEE
=

 * Tuomas Aura, Microsoft Research, UK
 * Mark Corner, UMass, USA
 * Srini Devadas, MIT, USA
 * Boris Dragovic, University of Cambridge, UK
 * Naranker Dulay, Imperial College, UK
 * Kris Gaj, George Mason University, USA
 * Robert Grimm, NYU, USA
 * Dieter Hutter, DFKI, Germany
 * Ari Juels, RSA Laboratories, USA
 * Tim Kindberg, HP Labs Bristol, UK
 * Cetin Kaya Koc, Oregon State University, USA
 * Marc Langheinrich, ETH Zurich, Switzerland
 * Mark Lomas, BIICL, UK
 * Robert N. Mayo, HP Labs Palo Alto, USA
 * Refik Molva, Eurecom, France
 * Kai Rannenberg, University of Frankfurt, Germany
 * Stephen Weis, MIT

--

-- 
Eugen* Leitl  http://leitl.org
______________________________________________________________
ICBM: 48.07078, 11.61144    http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
http://moleculardevices.org http://nanomachines.net




Re: Approximate hashes

2004-09-06 Thread Len Sassaman
On Wed, 1 Sep 2004, Marcel Popescu wrote:

 Hence my question: is there some approximate hash function (which I could
 use instead of SHA-1) which can verify that a text hashes very close to a
 value? So that if I change, say, tabs into spaces, I won't get exactly the
 same value, but I would get a "good enough" one?

Hi Marcel,

You may wish to look at Cmeclax's nilsimsa. It has been used to detect
slightly-modified message floods in anonymous remailer systems, and was
also used in SpamAssassin at some point.

http://lexx.shinn.net/cmeclax/nilsimsa.html
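If the only variations you expect really are things like tabs-versus-spaces, a cruder alternative is to canonicalize the text and then take an exact hash; nilsimsa is the better tool when the edits are less predictable. A minimal sketch:

import hashlib

def canonical_sha1(text: str) -> str:
    # Collapse all runs of whitespace so tab/space/newline differences vanish.
    canonical = " ".join(text.split())
    return hashlib.sha1(canonical.encode("utf-8")).hexdigest()

assert canonical_sha1("foo\tbar\nbaz") == canonical_sha1("foo bar  baz")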





Re: Implementation choices in light of recent attacks?

2004-09-06 Thread John Kelsey
From: bear [EMAIL PROTECTED]
Sent: Sep 1, 2004 2:43 PM
To: Jim McCoy [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: Implementation choices in light of recent attacks?

> On Wed, 1 Sep 2004, Jim McCoy wrote:
>
>> After digesting the various bits of information and speculation on the
>> recent breaks and partial attacks on various popular hash functions I
>> am wondering if anyone has suggestions for implementation choices for
>> someone needing a (hopefully) strong hash today, but who needs to keep
>> the hash output size in the 128-192 bit range.  A cursory examination
>> of Tiger seems to indicate that it uses a different methodology than
>> the MDx and SHAx lines; does this mean that it does not suffer from the
>> recent hash attacks?  Would a SHA256 that has its output chopped be
>> sufficient?
>> Any suggestions would be appreciated.
>
> I believe that SHA256 with its output cut to 128 bits will be
> effective.  The simplest construction is to just throw away
> half the bits.

Yes, but it does depend a little on what you're trying to defend against, right?  I 
mean, if you're worried about not having a strong hash function with a 128-bit output 
anymore, then it seems like you should always be able to truncate a stronger hash.  

If I had a way to force the low 128 bits of SHA1 to collide much faster than 2^{64},
while randomizing the remaining 32 bits, I could use it to find full SHA1 collisions
faster than 2^{80} work.  This doesn't work if my trick for getting collisions in 128
bits requires that the remaining 32 bits not collide, however.  (This can happen if
you have a truncated differential through the whole hash function whose output value
is (x,0,0,0,0), i.e., it forces a nonzero difference into the first word of output.  I
believe this came up in Biham and Chen's SHA-0 near-collisions, requiring running the
differential across two or more compression functions.  The same basic problem came up
in Biham and Shamir's N-Hash results, many years ago.  So you can't get a simple
reduction proof here, but maybe someone better at proofs can do a more complicated
one.)

If you're worried about cryptanalysis of existing MD5-like functions, then there's 
probably some benefit to looking at alternative designs that look radically different, 
like Whirlpool or Tiger.  But I'm not sure how much analysis either has seen, so I'd 
be reluctant to feel like I really understood their security yet.  It's clear from 
recent events that "designed by smart people" isn't enough by itself to give you lots
of confidence in hash functions, any more than in block ciphers.  

Finally, if you want to use truncation on hashes, make sure it's never possible to get 
the two sides confused about which hash is to be used.  There are a lot of places, 
such as KDFs, where you can get some really nasty attacks if you can get Alice to use 
SHA256 and Bob to use SHA256 with the output truncated to 224 bits.  (Yes, this is the 
reason SHA224 has a different starting IV than SHA256.) 
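A minimal sketch of both points: truncating SHA256 is just dropping trailing bytes, and mixing a length label into the input is one illustrative (non-standard, made up here) way to keep the full and truncated variants from being confused; SHA224's distinct starting IV is the standardized way to get that separation.

import hashlib

def truncated_sha256(data: bytes, out_bits: int = 128) -> bytes:
    # Hypothetical domain-separation tag, purely for illustration.
    label = b"SHA-256/%d:" % out_bits
    digest = hashlib.sha256(label + data).digest()
    return digest[: out_bits // 8]      # keep only the leading out_bits bits

print(truncated_sha256(b"some message").hex())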

>    Bear

--John Kelsey



PGP Identity Management: Secure Authentication and Authorization over the Internet

2004-09-06 Thread R. A. Hettinga
http://www.pgp.com/resources/ctocorner/identitymgmt.html


Click for illustrations, etc...

Cheers,
RAH



PGP Identity Management:
 Secure Authentication and Authorization over the Internet

 By Vinnie Moscaritolo,
 PGP Cryptographic Engineer

3 September 2004
Abstract
Access to computer services has conventionally been managed by means of
secret passwords and centralized authentication databases. This method
dates back to early timeshare systems. Now that applications have shifted
to the Internet, it has become clear that the use of passwords is not
scalable or secure enough for this medium. As an alternative, this paper
discusses ways to implement federated identity management using strong
cryptography and the same PGP key infrastructure that is widely deployed on
the Internet today.

Beyond Passwords
The inherent security weaknesses and management complexities of
password-based authentication and centralized authorization databases make
such systems inadequate for the real-world requirements of today's public
networks. However, by applying the same proven cryptographic technology
used today for securing email, we can construct a robust authentication
system with the following goals in mind:
* Provide a single sign-on experience in which users only need to
remember one password, yet make it less vulnerable to cracking (hacking)
attempts.
* Employ strong user authentication, extendable to multi-factor
methods such as tokens or smart cards. The only copy of the authenticating
(secret) material is in the possession of the user.
* Design such a system so it does not depend on any trusted
servers and so that the compromise of any server does not affect the
security of other servers or users.
* Build on existing and well-accepted infrastructures that scale
to fit a very large base of users and servers.
* Enable users to sign on to the networks of more than one
enterprise securely to conduct transactions.

Authentication with Cryptographic Signatures
Email communications via the Internet face a security challenge similar to
network user authentication. Messages traveling through public networks
can be eavesdropped on or counterfeited without much effort. Yet we have been
able to successfully mitigate these risks by using public key cryptography
to digitally sign and authenticate email messages.

With public key cryptography, each user generates a pair of mathematically
related cryptographic keys. These keys are created in such a way that it is
computationally infeasible to derive the private key from the public key.
One of the keys is made publicly available to anyone who wishes to
communicate with that user. The other key is kept private and never
revealed to anyone else.
This private key can be further secured by either placing it in a hardware
token, encrypting it to a passphrase, or sometimes both. The holder uses
the private key to digitally sign data. This digital signature can later be
checked with the matching public key to ensure the data has not been
tampered with and that it originated from the holder of the private key.

 Because the holder of the private key is the only entity that can create a
digital signature that verifies with the corresponding public key, there is
a strong association between a user's identity and the ability to sign with
that private key. Thus, a digital signature is strong testimony to the
authenticity of the sender.

Cryptographic Challenge-Response
Because the public key functions as a user's identity in cyberspace, we
can apply digital signatures to strongly authenticate users of network
services. One way to do this is to challenge the user to sign a randomly
generated message from the server. The server then verifies the identity
of the user with the public key. This process is illustrated below.

1.  The user initiates network service access.
2.  The server looks up the user's public key in its authentication
database. The server then generates a random challenge string and sends the
challenge to the client.
3.  The client digitally signs the challenge string and returns the
cryptographic signature to the server. The client also sends a
counter-challenge string, which is used to verify the server's authenticity.
4.  The server then checks the client's signature, and if successful,
grants access. It also signs and returns the client's counter-challenge.
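A minimal sketch of steps 2-4, using Ed25519 signatures from the Python 'cryptography' package (the algorithm choice and names are illustrative; the article is algorithm-agnostic, and a real deployment would also handle the counter-challenge, sessions, and replay protection):

import os
from cryptography.hazmat.primitives.asymmetric import ed25519

# Enrollment: the user's public key is stored in the server's database.
client_key = ed25519.Ed25519PrivateKey.generate()
server_db = {"alice": client_key.public_key()}

# Step 2: the server generates a random challenge for the claimed user.
challenge = os.urandom(32)

# Step 3: the client signs the challenge with its private key.
signature = client_key.sign(challenge)

# Step 4: the server verifies the signature against the stored public key;
# verify() raises InvalidSignature on failure, so reaching print() means success.
server_db["alice"].verify(signature, challenge)
print("challenge-response authentication succeeded")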

 The use of such cryptographic user authentication offers a number of
advantages over password-based systems. For example, if we employ the same
key used to sign email, user authentication becomes as strong as the
applied cryptographic digital signature 

Re: Kerberos Design

2004-09-06 Thread Joseph Ashwood
> I'm currently looking into implementing a single sign-on solution for
> distributed services.
Be brave, there are more convolutions and trappings there than almost anywhere
else.
> Since I'm already using OpenSSL for various SSL/x.509 related things,
> I'm most astonished by the almost total absence of public key
> cryptography in Kerberos, and I haven't been able to find out why this
> design choice was made - performance reasons, given that at its
> inception public key operation cost was probably much more prohibitive?
Actually the primary reason I've heard had more to do with the licensing
costs (at the time they were not free) than with anything else. You will
however find PKI extensions to Kerberos; I don't remember the RFC off-hand.
> - Is there a good web/book/whatever resource regarding the design
>   of Kerberos? Amazon offers the O'Reilly book, which, from the
>   abstract, seems to take the cryptographic design of Kerberos as
>   a given and concentrates on its usage, and another one that also
>   doesn't seem to give much detail on the issue. Something in the
>   direction of EKR's SSL/TLS book would be very much appreciated.

From my understanding, Kerberos was originally thrown together at MIT, then
it was broken, and patched, and broken and patched, until it was relatively
recently qualified to be implemented in Windows, so you're not likely to
find much in the way of well thought-out arguments governing the little
details. In fact many of the decisions seem to be based on "My pet project
is . . . ."
> - Is Kerberos a sane choice to adapt for such solutions today?
>   Is there anything more recent that I should be aware of?
Kerberos is a very sane choice; it may not be the cleanest design ever, but
it has withstood a great deal of analysis. Actually, I was a member of a
group that was working on a replacement for Kerberos because of its age and
potential issues in the future, but we fell into substantial disarray, and
eventually it collapsed. Given this, I can confidently say that it is
unlikely that you will find something in the Kerberos vein that is newer.
   Joe

Trust Laboratories
Changing Software Development
http://www.trustlaboratories.com 



Which book for a newbie to cryptography?

2004-09-06 Thread Foo-O-Matic
Hi, first, I'm new to this list and to cryptography. :)
I've read the first lesson from this set of 24 crypto lessons:
http://www.und.nodak.edu/org/crypto/crypto/lanaki.crypt.class/lessons/
and found it really interesting. I want to start learning cryptography
from a book, and I have access to these 3 books from the library in
the college near me:

- Handbook of applied cryptography / Alfred J. Menezes, Paul C. van
Oorschot, Scott A. Vanstone / 1996
- Cryptography : theory and practice 2nd ed. / Douglas R. Stinson / 2002

What I want to know is: which one is recommended for someone who is
new to cryptography?

Thanks in advance,
Foo-o-Matic



Re: Kerberos Design

2004-09-06 Thread Rich Salz
> I've been trying to study Kerberos' design history in the recent past
> and have failed to come up with a good resource that explains why things
> are built the way they are.
http://web.mit.edu/kerberos/www/dialogue.html
/r$


Spam Spotlight on Reputation

2004-09-06 Thread R. A. Hettinga
http://www.eweek.com/print_article/0,1761,a=134748,00.asp

EWeek

 Spam Spotlight on Reputation


Spam Spotlight on Reputation

September 6, 2004
 By   Dennis Callaghan



As enterprises continue to register Sender Policy Framework (SPF) records,
hoping to thwart spam and phishing attacks, spammers are upping the ante in
the war on spam and registering their own SPF records.

E-mail security company MX Logic Inc. will report this week that 10 percent
of all spam includes such SPF records, which are used to authenticate IP
addresses of e-mail senders and stop spammers from forging return e-mail
addresses. As a result, enterprises will need to increase their reliance on
a form of white-listing called reputation analysis as a chief method of
blocking spam.

E-mail security appliance developer CipherTrust Inc., of Alpharetta, Ga.,
also last week released a study indicating that spammers are supporting SPF
faster than legitimate e-mail senders, with 38 percent more spam messages
registering SPF records than legitimate e-mail.

The embrace of SPF by spammers means enterprises' adoption of the framework
alone will not stop spam, which developers of the framework have long
maintained.

Enter reputation analysis. With the technology, authenticated spammers
whose messages get through content filters would have reputation scores
assigned to them based on the messages they send. Only senders with
established reputations would be allowed to send mail to a user's in-box.
Many anti-spam software developers already provide such automated
reputation analysis services. MX Logic announced last week support for such
services.

"There's no question SPF is being deployed by spammers," said Dave
Anderson, CEO of messaging technology developer Sendmail Inc., in
Emeryville, Calif.

"Companies have to stop making decisions about what to filter out and start
making decisions about what to filter in based on who sent it," Anderson
said.

The success of reputation lists in organizations will ultimately depend on
end users' reporting senders as spammers, Anderson said. "In the system
we're building, the end user has the ultimate control," he said.

Scott Chasin, chief technology officer of MX Logic, cautioned that
authentication combined with reputation analysis services still won't be
enough to stop spam. Chasin said anti-spam software vendors need to work
together to form a reputation clearinghouse of good sending IP addresses,
including those that have paid to be accredited as such.

"There is no central clearinghouse at this point to pull all the data that
anti-spam vendors have together," said Chasin in Denver. "We're moving
toward this central clearinghouse but have to get through authentication
first."

-- 
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'
