Cryptography-Digest Digest #191, Volume #12 Mon, 10 Jul 00 06:13:00 EDT
Contents:
SecurID crypto (was "one time passwords and RADIUS") (Vin McLellan)
----------------------------------------------------------------------------
From: Vin McLellan <[EMAIL PROTECTED]>
Subject: SecurID crypto (was "one time passwords and RADIUS")
Date: Mon, 10 Jul 2000 06:00:23 -0400
greuh <[EMAIL PROTECTED]> recently queried the sci.crypt newsgroup
about the reliability of the SecurID one-time password token. Against
his better judgment, Mr. G is considering using RSA Security's popular
hand-held authenticator (HHA) with a RADIUS server.
The combination of ACE/SecurID two-factor authentication and the RADIUS
(Remote Authentication Dial-In User Service) protocol is a popular one.
It recently became even more so, because the latest version of RSA's
ACE/Server (4.1) permits a single database to support both RADIUS and
SecurID user records.
>It's pretty hard to gather data about this system since its PRNG
>algorithm is patented, but I doubt it can be very powerful since
>it's implemented in a device not more complex than a pocket
>calculator (and it seems to use a clock input, although it is
>not sure).
The SecurID hash, created by John Brainard of RSA Labs in 1985, is not
patented, but it is an RSA trade secret -- largely because when
ACE/SecurID was first brought to market 14 years ago, the secrecy of any
embedded crypto in a commercial product was often a market requirement.
RSA continues to honor commitments made then to early users of
ACE/SecurID, but it has always publicly insisted that the secrecy of the
hash is merely a customer requirement, and that the publication of the
hash would in no way lessen the integrity of ACE authentication.
RSA <www.RSAsecurity.com> has sold some seven million SecurIDs. There
are now approximately 12,000 ACE/Server installations worldwide.
Interest in the SecurID hash, per se, has been cyclical.
Its integrity has rarely been called into question -- never, AFAIK, by
anyone who has studied it -- but it is getting a little long in the
tooth, and Moore's Law clearly indicates that any cryptosystem will
(sooner rather than later;-) require more than a 64-bit key. It is,
nevertheless, a measure of the confidence that RSA's major customers
place in Brainard's hash that it has remained unchanged for 14 years --
while RSA's authentication server (ACE/Server) and the ACE client/server
protocol have been steadily enhanced to meet new threats in new network
environs.
If you are a current or (serious) potential customer, RSA has always
been willing to give you -- or your designated crypto consultants --
access to the SecurID algorithm under an NDA. So, while Brainard's hash
has never been published, it has, over the years, been studied in depth
by a sizable and well-credentialed community of corporate and government
cryptographers.
So many prominent cryptographers have had occasion to evaluate the
SecurID hash over the past 15 years -- legally, under NDA; or,
illicitly, on gray market work benches -- that it was probably
inevitable that the algorithm would begin to be mentioned more frequently
in professional crypto circles. Good hashes are fairly rare, after all.
Beyond that, any unpublished cryptosystem which holds the market's trust
for so long -- even for so narrow and specific an application --
inevitably earns a place in the lore of the Craft.
This sci.crypt thread raises a number of awkward questions about the
integrity and security of the SecurID. No one else familiar with the
technology spoke up, so I thought I might address them as best I can.
(Fair warning: this is a little long. Please jump to another topic if
you are not obsessively curious about the subject;-) Please also note
that my judgment is not without bias; I've been a consultant to RSA, and
previously to Security Dynamics, since the late Paleocene epoch.
Published details about the SecurID hash are sparse.
RSA has said only that the SecurID hash consists of a sequence of 256
operations (each of which requires several processor instructions),
where each operation depends upon the results of the previous operation.
[This means, of course, that SecurID implementations -- unlike RC5 or
DES, e.g. -- cannot be made to run faster by performing multiple
operations concurrently, which makes a brute force attack, on a single
SecurID token with a 64-bit secret, very <ahem> time-consuming.]
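[To make the point about serial dependency concrete, here is a toy
sketch in Python -- emphatically not the Brainard hash, whose internals
remain unpublished; the mixing step is an arbitrary stand-in of my own --
showing a chain of 256 operations in which each round consumes the output
of the round before it:

    # Toy illustration of a serially dependent iteration. This is NOT the
    # Brainard hash; the mixing step below is an arbitrary stand-in. The
    # point is the data dependency: round i cannot start until round i-1
    # has finished, so the cost of testing one candidate key cannot be cut
    # by running its operations concurrently.

    def toy_chained_mix(time_value, secret, rounds=256):
        state = (time_value ^ secret) & 0xFFFFFFFFFFFFFFFF
        for i in range(rounds):
            # each "operation" stands in for the several processor
            # instructions mentioned above
            state = (((state * 6364136223846793005 + i) ^ (state >> 29))
                     & 0xFFFFFFFFFFFFFFFF)
        return state

Every candidate key an attacker tries still costs the full 256 dependent
rounds; the only parallelism available is across different guesses.]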
The issue of a secret cryptosystem was not always so contentious. In
'85, after a stint as IBM's historian, I was a journalist when I came
across Ken Weiss, the inventor of the SecurID. I wrote an aggressively
positive cover story about the technology for "Information Week." I
thought it was gonna take over the world.
Shortly thereafter (thinking I was one smart guy;-), Weiss invited me
to spend an afternoon with his planning team in what was then the
company's headquarters, the living room of his Boston townhouse, to
present the case for open publication of the SecurID hash.
I did a pretty good job, I think, but my arguments were not enough to
sway them. It was not yet clear, even to me, whether or not the NSA
would succeed in its campaign to crush or leash the nascent
private-sector cryptographic community. Meanwhile, several large banks
and a major financial services firm had expressed interest in buying the
tokens, and all had said they expected that the SecurID algorithm would
be kept secret.
Over the years, as ACE/SecurID rapidly seized and held a hungry lion's
share of the market for two-factor authentication (particularly in large
corporate enterprise environments), I've had to concede that -- on the
"business case," which is what dominates in the real world -- they were
right and I was wrong.
I realize that, today, secret (and proprietary;-) algorithms rub a lot
of people on this forum the wrong way -- but anyone with history in
this Craft can tell you that they were once quite common, and are not
(solely by virtue of being unpublished) inherently weak.
Conventional Wisdom to the contrary, the question of _who_ evaluates a
cryptosystem has always been much more important than how many people
get to study it;-)
Most Western and many Asian governments use ACE/SecurID; as do hundreds
of the most security-sensitive multinationals. In the US, all White
House staffers carry SecurIDs; all US Senators carry SecurIDs; and large
numbers of DoD, CIA, NSA staff carry SecurIDs. The Director of the CIA
carries a SecurID key fob on his key ring, as do thousands of executives
at leading firms in the US and overseas.
In the face of natural concern from new RSA customers, others
unfamiliar with the HHA market, and congenital skeptics, I suggest only
that there is clear evidence that several generations of
probably-competent cryptographers at places like the NSA, GCHQ, IBM, BT,
NTT, GE, and AT&T (to grab just a handful of the alphabet) have studied
the SecurID hash and found it more than sufficient for its function.
[As per Kerckhoffs' Principles, of course, standard practice in any
serious cryptanalytic review requires the evaluation team to presume
that every attacker will have full access to the Brainard hash -- and
each and every other ACE secret, except the token-specific secret key.
(The ACE client/server protocol uses another unpublished hash, F2,
designed by Ron Rivest, Brainard's mentor at MIT.)]
>This device is generating a 8-digit password that is valid for a
>period of 60 seconds.
Yup, the SecurID's PRN token-code rolls over every 30 or 60 seconds.
Each device -- a key-fob or a credit-card-sized token -- is wrapped in a
sealed cartridge with a lithium battery good for 2, 3, or 4 years; with
a pre-scheduled date and hour of death.
The SecurID takes a 24-bit digital representation of "Current Time,"
and a factory-installed token-specific (true random) 64-bit secret
key, and puts them through Brainard's one-way function to generate a
series of 6-8 digit (or alphanumeric) "token-codes" which are
continuously displayed on an LCD on the face of the token.
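[Purely to make the shape of that computation concrete -- the Brainard
hash itself is unpublished, and the SHA-1 stand-in below is my own
assumption for illustration -- a time-based token-code generator looks
roughly like this:

    import hashlib
    import struct
    import time

    def toy_tokencode(secret, now=None, interval=60, digits=6):
        """Hypothetical f(time, secret): hash the current time interval
        together with the token's secret and reduce the result to a short
        decimal token-code. NOT the actual SecurID algorithm."""
        if now is None:
            now = time.time()
        counter = int(now) // interval             # coarsened "Current Time"
        msg = struct.pack(">Q", counter) + secret  # time || secret
        digest = hashlib.sha1(msg).digest()        # stand-in one-way function
        code = int.from_bytes(digest[:8], "big") % (10 ** digits)
        return str(code).zfill(digits)

A verifying server that holds the same secret and a roughly synchronized
clock recomputes the code for the current interval and compares.]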
"Two-factor" authentication ("something you know," "something you
have") is fundamental to the ACE/SecurID design. The SecurID token-code,
together with a user-memorized PIN, are both submitted to an
authentication server, what RSA calls the ACE/Server, for two-factor
authentication of a user previously registered on that server.
(In many ACE/SecurID user environments, the initial link, or the whole
network, is separately encrypted. In others, a PINpad SecurID is used to
add the user's PIN to the SecurID token-code, the better to safeguard
the PIN and assure the integrity of the authentication service. Still
others choose to transmit the PIN and token-code in the clear. Different
levels of security for different threat environments.)
>I fear it's enough to allow a brute force attack to break the
>password (although this risk could be reduced with the use of
>the RADIUS protocol ?)
There have been, I suggest, a lot of egregious overstatements about the
vulnerability of a 64-bit secret key to a brute-force attack. Consider,
for a comparative estimate, Distributed.net's huge ongoing effort to
crack RSA's RC5-64 Challenge: <http://stats.distributed.net/rc5-64/>.
If cracking RC5-64 takes three years on 50,000 Pentiums, then brute
force attacks are not an immediate threat to the 64-bit secret used in
the SecurID hash.
[The name of the game in Security Operations (as opposed to academic
cryptography) is to raise a barrier high enough so that any fool will
choose another avenue of attack because it is apparent that the
alternatives are more likely to succeed, faster, or more economical. A
potential threat is not the same thing as a feasible or realistic
vulnerability.]
Rivest's RC5, please note, is also vastly more amenable to such a
brute-force attack than the SecurID hash would be. My amateur's guess is
that -- even with brilliant efforts to optimize the hash code,
comparable to that achieved in the efforts to crack DES -- a brute-force
attack on a single SecurID to obtain its 64-bit secret would require at
least an order of magnitude more time or MIPS than an RC5-64 attack.
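[The arithmetic behind that guess is easy to check on the back of an
envelope; the throughput figures below are derived only from the
distributed.net numbers cited above, plus my assumed 10x per-key penalty:

    # Back-of-the-envelope brute-force estimate. Throughput figures are
    # derived from the distributed.net effort cited above plus an assumed
    # 10x per-key slowdown; they are not measurements of any real code.

    SECONDS_PER_YEAR = 365 * 24 * 3600
    KEYSPACE = 2 ** 64                      # 64-bit token secret

    # Baseline: ~50,000 machines grinding for ~3 years to cover RC5-64.
    rc5_machine_seconds = 50000 * 3 * SECONDS_PER_YEAR
    rc5_keys_per_machine_sec = KEYSPACE / rc5_machine_seconds

    # Assume each trial of the serial 256-operation hash costs roughly ten
    # times as much per key as an optimized RC5-64 trial.
    securid_keys_per_machine_sec = rc5_keys_per_machine_sec / 10

    machines = 50000
    years = KEYSPACE / (securid_keys_per_machine_sec * machines
                        * SECONDS_PER_YEAR)
    print(f"implied RC5-64 rate: {rc5_keys_per_machine_sec:,.0f} keys/sec/machine")
    print(f"{machines:,} machines would need ~{years:.0f} years to sweep 2^64 keys")

Under those assumptions, the same 50,000 machines would need on the order
of 30 years to sweep the keyspace for a single token.]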
The ACE/Server also protects itself with a few fundamental rules (among
them constraints of the sort Mr. G is looking for; a rough sketch of the
logic follows the list):
No two SecurID tokens have the same true-random key.
No SecurID token-codes will be accepted out of sequence, nor will the
same token-code be accepted twice from the same token.
After three "bad" or mistyped token-codes, the ACE/Server will demand
two consecutive SecurID token-codes.
After three "bad" or mistyped PINs (default), the account is frozen.
After ten "bad" or mistyped token-codes, the ACE/Server will freeze the
user's account and alert the ACE Admin.
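[Here is that sketch, in Python; the counters, thresholds, and structure
are illustrative only, not the actual ACE/Server implementation:

    # Illustrative sketch of lockout rules like those listed above; the
    # names and thresholds are hypothetical, not the ACE/Server code.

    class TokenRecord:
        def __init__(self):
            self.last_counter = -1   # last accepted time-step; blocks replay
            self.bad_codes = 0
            self.bad_pins = 0
            self.frozen = False

    def check(rec, counter, code_ok, pin_ok):
        """Return True if this authentication attempt should be accepted."""
        if rec.frozen or counter <= rec.last_counter:
            return False      # frozen account, or replayed/out-of-sequence code
        if not pin_ok:
            rec.bad_pins += 1
            rec.frozen = rec.bad_pins >= 3   # default: freeze after 3 bad PINs
            return False
        if not code_ok:
            rec.bad_codes += 1
            # after 3 bad codes the real server demands two consecutive good
            # token-codes (that handshake is omitted from this sketch);
            # after 10 it freezes the account and alerts the admin
            rec.frozen = rec.bad_codes >= 10
            return False
        rec.last_counter = counter
        rec.bad_codes = rec.bad_pins = 0
        return True
]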
In the sci.crypt thread, Joseph Ashwood <[EMAIL PROTECTED]> responded to
Mr. G's query with his customary generosity.
(Elsewhere, Mr. Ashwood recently described the NTRU PKC as a piece of
"shit" which doesn't deserve the patent it just received -- a crisp
cryptographic evaluation which I suspect his professional and personal
friends may find opportunity to recall often in the years to come;-)
Here, Mr. Ashwood wrote:
>> Honestly I think the SecurID system needs some work. It's a
>> great idea, and a rather good implementation. But a massive
>> amount has been learned in the last decade about security,
>> and the SecurID cards don't take advantage of this.
I can't argue with his last line, but Mr. A judges the SecurID as
somehow, vaguely, deficient, solely by virtue of its longevity in the
marketplace. He never bothers to ask the real question: does the SecurID
meet the requirements and demands of the market efficiently, effectively,
and securely?
(We've learned a lot about metallurgy since we began to fabricate
titanium alloys -- but so what?)
It is, nonetheless, widely expected that a new SecurID -- probably with
a 128-bit secret key -- will be phased in beginning next year. RSA has
not made any public commitments, but rumors also suggest that the next
generation of SecurIDs may carry RSA's AES candidate, RC6.
Whatever the design, according to RSA, the SecurID crypto will be a
published algorithm. Two years ago, Brainard published what will be the
new second-generation (RSApkc-based) ACE protocol, and requested
critical review from RSA customers and the public.
>> Right
>> now they use a rather antiquated (although amazingly still
>> quite secret) algorithm with a high level view approximating
>> f(time, secret) where the secret is a card specific secret
>> used for ID. I suspect that the function f() is not large
>> enough to avoid a sophisticated attack, but the fact that it
>> has remained largely a secret, and if you look back a few
>> days you'll see that at least one person here has seen it
>> and found it to be sufficient,
Ummm. Methinks Mr. Ashwood hasn't bothered to find out much about the
"antiquated" crypto he is pontificating on. Even basic widely-published
stuff, like the fact that SecurID uses a hash rather than a symmetric
cipher, escapes him.
The only person I know who has studied the SecurID hash and spoken
about it publicly is Steve Bellovin of AT&T Research, who led the AT&T
evaluation team five years ago. Mr. Bellovin, and his review team of
AT&T cryptographers, felt the Brainard hash was appropriately
irreversible. AT&T and Lucent subsequently became big users of
ACE/SecurID.
>> I'd say that for medium
>> security situations it's a good solution.
Unhuh.
>> For reference I
>> consider human remembered secrets to be low security (even
>> using strong methods), methods like SecurID where the
>> administrator has no ability to update outside of
>> replacement to be moderate security,
"Two-factor authentication," which Mr. Ashwood doesn't mentioned, is
the classical definition for "strong authentication."
IMNSHO, it is subjective and downright eccentric to suggest that -- for
this class of user authentication devices -- some crucial distinction
between "moderate" and "high" security lies in giving an on-site
administrator the option to "update" or change a token's internal
secret.
There are pros and cons here, but I suspect that few security experts
would agree with Mr. A that a two-factor HHA which has to be programmed
at the local site, and can be repeatedly reprogrammed, has any big
advantage over a token with a factory-loaded secret, and a limited and
preset lifespan.
>> for high security it
>> takes a full hardware solution (smartcard with hand entered
>> PIN), preferrably with the added security of transience (see
>> PK-INIT for some idea). There are various solution that
>> don't fall neatly into the categories, but they can be
>> generally placed based on their security assumptions.
[RSA (predictably, I think;-) offers products up and down this line:
software versions of SecurID for PCs and Palm Pilots; the classic
SecurID token; the SecurID 1100 in a smartcard form-factor (which fits
Mr. A's "high-security" model above); and various RSA/Gemplus
smartcards. The smartcards all feature key and credential storage:
<http://www.rsasecurity.com/products/>]
Obviously, any "high security" network will also require link or
network encryption to protect the packets (and any authentication
service) from eavesdroppers and session hijackers -- but you don't need
a smartcard or Mr. A's "full hardware solution" (whatever that may be;-)
to set up network crypto.
Obviously, too, if a network jumps to PKI and PKC-based security
services, any admin will want to bump crypto keys and other PKI
credentials off vulnerable PCs, ASAP, and secure them in a smartcard or
some similar personal credential repository. (Personally, I like
Ashwood's idea of an ephemeral key, a la PK-INIT -- but I note that the
idea didn't seem to have legs in the commercial Kerberos market.)
I guess it is unavoidable that Mr. A's model of a security spectrum --
running from passwords, to HHA tokens, and up to a smartcard-based "full
hardware solution" -- is awkward and oversimplified, even though its
general structure is fairly conventional.
There is obviously a lot unspecified. No network admin, for instance,
would have difficulty imagining a scenario in which Mr. A's
all-smartcard environment would be an unmitigated security disaster;-)
Six or seven years ago -- after the SecurID and its competitors had
been judged fundamentally sound by the Infosec pros -- the competitive
market requirements for HHAs shifted to focus on the administrative and
cross-realm functionality of the authentication server. (Even today,
while the SecurID seems to maintain an ease-of-use advantage, the
relative security of the various HHA tokens is pretty much a wash in the
marketplace.)
A smartcard, of course, is even more obviously just a tail attached to
an elephant of unknown size and disposition. The smartcard's function
and integrity can be attacked from many different places (e.g., the
reader, the CA) in the larger infrastructure.
Mr. Ashwood is quite right to imply that the security continuum
encompasses both apples and apple orchards.
A SecurID, like any HHA, offers no more than user authentication.
Period. It cannot, for instance, protect or guarantee the integrity of
message traffic on a network.
A smartcard, OTOH, is typically a repository for crypto keys, PKI
certs, and other credentials. In conjunction with a broader
infrastructure investment, smartcards are used to enable digital
signatures and the whole array of RSApkc-based security services: not
only authentication, but full encryption, assurance of message
integrity, non-repudiation, etc.
(Proponents of PKI may not, however, have given much thought to what
will inevitably be lost as market requirements move from HHA-based
authentication to smartcard-enabled PKI. A SecurID -- with no circuit
connection between the token and the network -- offers an elegant
simplicity that many may regret losing.
(With a hand-held authenticator, it's very clear exactly what the
token's function is. Anyone can be utterly certain that a SecurID is
doing just what they want it to do, and nothing more. In a smartcard
environment, a similar assurance may be difficult to come by.)
Mr. G replied to Mr. Ashwood's critique of SecurID:
.> Thanks for this quick answer, you confirmed my personal
.> feeling about this device. I have been keeping looking for
.> docs in the scope of my problem until I found a paper about
.> Differential Power Analysis (http://www.cryptography.com/dpa/)
.> and I fear such a device could be vulnerable against this kind
.> of attack (at least it would probably help finding the
.> algorithm used in the card's chip, allowing a more serious
.> cryptanalysis later on). <snip>
DPA is a real and significant threat to all tokens and smartcards. I
suspect none are immune. It will be interesting to see the variety of
ways in which the next generation of these devices will counter DPA and
other forms of "side channel" attacks.
There are, IMNSHO, several alternative ways -- bribery, among the most
obvious -- by which an attacker could probably obtain the SecurID hash
far more quickly, cheaply, and easily than a DPA attack.
There are, after all, tens of thousands of copies of the Brainard hash
distributed all over the world in RSA software. Last month, the author
of an article on SecurID in 2600, the Hacker Quarterly, offered anyone
access to an illicit copy of ACE/Server.
(Mind you, in addition to all the cryptanalysts who have reviewed the
SecurID hash under NDA, I have always assumed that -- over the years --
hundreds of techies at various ACE/SecurID sites have teased the SecurID
hash out of RSA code, just to satisfy their own curiosity.
(I figure the SecurID hash has _not_ been anonymously published only
because those folks chose to honor the terms of their employer's RSA
license, their own employee contracts, and RSA's request that they
safeguard its trade secrets. That's an aspect of the professional IT
culture -- the traditional "hacker" culture, if you will -- that often
gets overlooked.)
If an attacker _does_ obtain the SecurID algorithm and the appropriate
timing data, however, DPA might well be the most serious threat to the
secrecy of a single token's 64-bit secret key (given an unreported
theft, sufficient time, and appropriate skills.)
OTOH, DPA is probably a more serious threat to smartcards, as a class.
A power-line sensor in a card reader can be a wholly invisible source of
the timing data needed for DPA attacks, possibly on many smartcards.
There is also the human element. With a SecurID, the most fundamental
line of defense is a conscientious token-holder who has been taught that
it is his responsibility to safeguard the physical security of his token
(and his PIN), and to promptly report the loss or theft of the SecurID
to network admins. (In many jobs, of course, an employee often can't
work, or even get in the building, without his SecurID.)
Also, unlike some smartcard applications -- where the smartcard may be
used to restrict or ration the user's access: to money, or to encrypted
cable channels, say -- with network access control, the SecurID user and
the network administrators usually have common interest. An individual
who carries a SecurID has little incentive to cooperate in any attempt
to disassemble a token, subvert its function, or hide its loss. He
already has all the access a token thief might seek.
Mr. Ashwood, in his reply, offered a brief outline for a
"sensible attack" on a SecurID:
/> It would be much more reasonable to record a long series of
/> outputs with their associated time, and perform a known-time
/> analysis, with knowledge of the algorithm. There is really no
/> telling if the algorithm is secure against a significant
/> attack [...]
In crypto, few things are impossible, but many things are
infeasible or impractical. No one can say that Mr. Ashwood may not
discover some new cryptanalytic weakness in the hash, but his idea for
an analytical attack is pretty basic. Many smart people have looked at
a SecurID's token-code output without discovering any exploitable
pattern. (And the scale of such a task, I've been told, would probably
entail collecting millions of token-codes from a given SecurID, for a
statistical attack with bleak prospects for success.)
A SecurID token is purposely designed so that it is very difficult to
"speed up" the rate at which it generates new token-codes. If it takes a
week to get a 300,000 token-codes, we might reasonably hope that the
SecurID user would report the loss or theft of his SecurID, and the
token would be made useless, long before the bad guys set to work.)
/> [...] a good counterexample would be if one were to take the
/> time concatenate the secret, run that through SHA-1, and take
/> the lower 8 bytes, most of us would agree that this would be
/> difficult to break, and that your best bet would be through a
/> birthday paradox exploiting attack.
Not bad. [Although if you are going to use SHA-1, you might consider a
truncated HMAC (RFC 2104), since it offers some nice provable security
properties.] Unfortunately, I haven't seen anyone manage SHA-1 in less
than 600 bits of RAM, which probably makes it impractical for a token
like the SecurID.
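[For what it is worth, a truncated HMAC-SHA1 token-code of the kind just
mentioned is only a few lines -- this is the generic RFC 2104
construction, shown purely as a point of comparison, not anything RSA
ships:

    import hashlib
    import hmac
    import struct

    def hmac_tokencode(secret, timestep, digits=8):
        """Generic truncated HMAC-SHA1 (RFC 2104) over a time-step counter.
        Shown only to illustrate the construction; it is not the SecurID
        algorithm."""
        tag = hmac.new(secret, struct.pack(">Q", timestep),
                       hashlib.sha1).digest()
        # keep only the low 8 bytes of the tag, then reduce to decimal digits
        return str(int.from_bytes(tag[-8:], "big") % (10 ** digits)).zfill(digits)

Truncation keeps the code short without handing an attacker the full MAC
output, and the verifier simply recomputes the same tag for the current
time-step.]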
Also, the Birthday Paradox (which helps us discover collisions, more
than one input which gets hashed down to the same token-code) is really
irrelevant here. Collisions are useless to someone attacking an
ACE/SecurID environment, since without possession of a token's secret
key, they are unpredictable.
So, unexpected bonus points for Mr. A: a hash in this application, even
his, is far stronger than he expects it to be;-)
/> However I doubt that the SecurID
/> card was designed with such a strong algorithm in mind, it
/> is probably an actual encryption algorithm, using the secret
/> as the key, and the time as the plaintext.
Not a symmetric cipher, a hash -- whose strength and rep will probably
survive Mr. Ashwood's doubts.
In North America, and in many nations in Europe, Asia, and the Middle
East, citizens and local corporations can usually ask their respective
national cryptographic authority for either a formal or informal
evaluation of the SecurID algorithm. YMMV. What you'll get will probably
be a declaration of sufficiency, similar to what the American NSA issued
years ago.
/> It's safe to
/> assume that the secret is no larger than 64-bits, making it
/> brute-forcable.
Humbug. I think the Distributed.net effort, noted above, is probably
the best answer to that.
/> There are probably attacks against it that
/> will be much more effective than brute force, but the
/> secrecy of the algorithm, combined with the security of
/> hardware makes it difficult to analyze.
A harsh, if vague, judgment -- particularly from someone who has so
demonstrably proven that he knows very little about the cryptosystem he
so glibly suggests is "probably" so vulnerable.
/> For reference I
/> would refer everyone to the analysis of the clipper chip
/> that was performed before the algorithm was made public
/> (limited to little more than disabling the LEAF), and the
/> analysis done afterwards, leading to significant analysis.
I'm at a bit of a loss to find myself challenging someone who claims to
know exactly what happened, and what didn't happen, in the NSA's
internal development and testing of Clipper: the "key-escrow" protocol
that the US spooks hoped to force all Americans to adopt.
Not to put too fine a point on it, I suspect that Mr. A's cartoonish
description of the NSA's pre-publication "analysis" of Clipper -- for
all its crisp certainty and detail -- is either a misstatement or a
figment of faulty memory or fertile imagination.
I readily concede, however, that it was probably only because the
Clipper design process was so politicized that the stage was set for the
glorious 1994 fiasco in which AT&T cryptographer Matt Blaze deftly
drilled a hole in the NSA's design. Blaze showed how to modify the LEAF
(Law Enforcement Access Field) to shut the lawmen and spooks out of
their own eavesdropping protocol.
Blaze's analysis of Clipper remains one of the triumphs of "open
review." It has, quite rightly, greatly enhanced the importance of open
review in establishing credibility for new cryptosystems and crypto
protocols. (I suspect it also had a lot to do with the extraordinary
openness with which NIST has been permitted to manage the ongoing AES
competition.)
OTOH, "who" reviews the code is still more important than how many
people review the code. There have also been numerous examples of "open
source" crypto, freely available to all, being widely used for years
before a serious cryptographer picked through it carefully and
discovered dangerous weaknesses.
If, as Mr. A and others have suggested, the pre-publication review of
the Clipper's EES protocol and its KEA/DSS/Skipjack crypto suite --
within the NSA and among a few tame academic cryptographers -- was
politicized and perfunctory, it compares poorly with the evaluation
process the SecurID has endured over the years.
None of the cryptographers who have reviewed the SecurID crypto before
their government or their employer adopted it had any reason to evaluate
by anything less than the most rigorous standard. Anyone who could crack
the SecurID was guaranteed, at least, a toast among their peers, and
probably headlines as well.
All things being equal, published cryptography (and for some, open
source) is more attractive to many than unpublished or proprietary
crypto for many reasons -- but not because it guarantees either
cryptographic quality or quality in crypto implementations.
Still, I don't mean to argue against open publication and unfettered
evaluations of crypto. I think that argument has been won, and the
market has echoed the pro-publication judgment of the professional
cryptographic community. For cryptosystems, I think that is a very good
thing -- although I suspect proprietary implementation code (buttressed
by 24/7 support, development efforts, and vendor stability) will remain
a fixture in the commercial market.
I suggest, however, that both the market and the crypto community have
made some rare exceptions and "grandfathered" cryptosystems which,
despite being unpublished, have built their own credibility as they have
been challenged, evaluated, and used over the years.
To judge by the market, the SecurID hash is one of these. (RSA probably
sold as many SecurIDs in the first half of this year as it did in the
first five years of the company's existence as Security Dynamics.) YMMV
-- but that's the great thing about an open commercial marketplace. Mr.
G can put his money wherever he wants.
I beg the indulgence of the newsgroup for the length of these comments.
These are interesting issues, and it was a quiet Sunday eve on the Net.
Criticism, questions, or other comments are always welcome.
Suerte,
_Vin
----
"Cryptography is like literacy in the Dark Ages. Infinitely potent, for
good and ill... yet basically an intellectual
construct, an idea, which by its nature will resist efforts to
restrict it to bureaucrats and others who deem only themselves worthy of
such Privilege."
_A Thinking Man's Creed for Crypto _vbm
* Vin McLellan + The Privacy Guild + <[EMAIL PROTECTED]>
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************