Cryptography-Digest Digest #844, Volume #13 Fri, 9 Mar 01 10:13:01 EST
Contents:
Re: Tempest Systems (Frank Gerlach)
Re: PKI and Non-repudiation practicalities (those who know me have no need of my
name)
Re: Tempest Systems (Frank Gerlach)
Re: Meaning of Kasumi ("Sam Simpson")
Re: NTRU - any opinions ("Sam Simpson")
Re: Encryption software (Runu Knips)
Re: Elgamal (Nicholas Hopper)
Re: The Foolish Dozen or so in This News Group (Richard Herring)
Re: What is the probability that an md5sum of a group of md5sums will be the same?
("Joseph Ashwood")
Re: how long can one Arcfour key be used?? ("Joseph Ashwood")
Re: encryption and information theory ("Joseph Ashwood")
Q: Authentication in dynamic workgroups ("Henrick Hellström")
Re: OverWrite freeware completely removes unwanted files from harddrive ("Joseph
Ashwood")
Re: NTRU - any opinions (DJohn37050)
Re: PKI and Non-repudiation practicalities (Anne & Lynn Wheeler)
Re: frequency "flattening" (Mok-Kong Shen)
Re: PKI and Non-repudiation practicalities (Anne & Lynn Wheeler)
Re: => FBI easily cracks encryption ...? (Damian Kneale)
Re: PKI and Non-repudiation practicalities (Anne & Lynn Wheeler)
----------------------------------------------------------------------------
From: Frank Gerlach <[EMAIL PROTECTED]>
Subject: Re: Tempest Systems
Date: Fri, 09 Mar 2001 12:12:26 +0100
Mok-Kong Shen wrote:
> Couldn't one create noise with some appropriate generators
> to defeat monitoring?
Very difficult, because the noise is superimposed additively, unlike a
stream cipher, which combines key and data with XOR. This means that you
can simply average out the noise if the signal is sent multiple times
(as with a lot of computer signals).
With a truckload of VCRs you can store the emanations of a PC and
process them offline.
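A quick simulation makes the averaging point concrete. The amplitudes below are made-up illustrative values, not measured Tempest data:

```python
import random
import statistics

random.seed(1)
SIGNAL = 1.0        # amplitude of the repeated emission
NOISE_SD = 10.0     # additive masking noise, much stronger than the signal

def observe(n):
    # Average n repetitions of signal plus fresh additive noise.
    return statistics.mean(SIGNAL + random.gauss(0, NOISE_SD) for _ in range(n))

one_shot = observe(1)        # swamped by noise
averaged = observe(10000)    # noise std. dev. shrinks by a factor of 100
print(one_shot, averaged)
```

A single observation is useless, but after 10000 repetitions the average sits within a fraction of the true signal amplitude.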
>
> M. K. Shen
------------------------------
From: [EMAIL PROTECTED] (those who know me have no need of my name)
Subject: Re: PKI and Non-repudiation practicalities
Date: Fri, 09 Mar 2001 11:19:01 -0000
<S40q6.51$[EMAIL PROTECTED]> divulged:
>Mark Currie wrote in message <3aa88818$0$[EMAIL PROTECTED]>...
>>OK, read your aadsover.htm and aadswp.htm. Sounds pretty good to me.
i like the idea as well. but there doesn't seem to be as much analysis of
the system from the consumer side as from the provider side.
>>One (possibly minor) problem that I can see is if your private key is
>>compromised/lost, in AADS you have to contact all institutions
>>yourself, in CADS you only have to contact the CA.
>In the PKI models, every digsig recipient has to check every cert for
>revocation. Which is simple and more efficient? AADS models wins, in
>my book.
more efficient for whom? if i have 8 institutions, say 2 banks and 6
credit cards, the compromise (loss) of my private secret means that: a) i
have to request a replacement (i assume that the aads model will create an
industry for direct to consumer supply); b) be fucked until the new secret
is delivered (since i cannot access anything without it); and c) contact
all 8 institutions myself to activate the token/secret once it arrives. i
can't imagine that this gets any better when you are outside your home
area/country.
perhaps a and c can be combined (the a company contacts all c's), so that
the replacement arrives "live", but you'll still be completely b'd until
then. (likewise in cads.)
at least that's the way it looks to me. surely i must have missed
something?
--
okay, have a sig then
------------------------------
From: Frank Gerlach <[EMAIL PROTECTED]>
Subject: Re: Tempest Systems
Date: Fri, 09 Mar 2001 12:28:00 +0100
Frank Gerlach wrote:
>
> Mok-Kong Shen wrote:
>
> > Couldn't one create noise with some appropriate generators
> > to defeat monitoring?
> Very difficult, because the noise is added additively, unlike a stream
> cipher, which uses a nonlinear function like XOR. This means that you
> can simple average out the noise, if the signal is sent multiple times
> (as with a lot of computer signals).
Maybe some electrical engineers in this group can give an estimate of how
many signal repetitions are necessary for a given signal-to-noise ratio.
Is this the Shannon formula?
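For averaging, the relevant relation is not Shannon's capacity formula but the simpler averaging gain: averaging N independent repetitions cuts the noise power by a factor of N, so N follows directly from the required SNR improvement. A back-of-the-envelope sketch:

```python
def repetitions_needed(snr_in_db, snr_target_db):
    # Averaging N independent repetitions raises the SNR by a factor
    # of N (noise power drops as 1/N), a gain of 10*log10(N) dB.
    return 10 ** ((snr_target_db - snr_in_db) / 10)

# e.g. lifting a signal buried 20 dB below the noise up to +10 dB:
n = repetitions_needed(-20, 10)
print(n)   # 1000.0 repetitions
```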
------------------------------
From: "Sam Simpson" <[EMAIL PROTECTED]>
Subject: Re: Meaning of Kasumi
Date: Fri, 9 Mar 2001 11:33:30 -0000
See the paper at:
http://www.niksula.cs.hut.fi/~jwallen/kasumi/kasumi.html
--
Regards,
Sam
http://www.scramdisk.clara.net/
Arturo <aquiranNO$[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
>
> KASUMI is the name of the encryption algorithm to be used in
> third-generation mobile phones. My question is, what does that word
> stand for? Does it have any meaning? TIA
------------------------------
From: "Sam Simpson" <[EMAIL PROTECTED]>
Subject: Re: NTRU - any opinions
Date: Fri, 9 Mar 2001 11:35:43 -0000
Have a look at the paper at:
http://www.tml.hut.fi/~pk/crypto/fast_pk_crypto.pdf
--
Regards,
Sam
http://www.scramdisk.clara.net/
"James Russell" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Does anyone here have any opinions on the viability of NTRU's public key
> algorithm?
>
> Thanks.
>
> James
> _________________________________________________________________
> Get your FREE download of MSN Explorer at http://explorer.msn.com
>
>
> --
> Posted from [206.156.202.110] by way of f220.law10.hotmail.com [64.4.15.220]
> via Mailgate.ORG Server - http://www.Mailgate.ORG
------------------------------
Date: Fri, 09 Mar 2001 12:56:44 +0100
From: Runu Knips <[EMAIL PROTECTED]>
Subject: Re: Encryption software
Steve Portly wrote:
> Tom St Denis wrote:
> > Um isn't pgpi opensource?
> >
> > Tom
>
> It takes a little longer to read the later versions of the pgpi source. Many
> programmers find the GPG source laid out better.
I doubt that pgpi is opensource according to its definition
at http://www.opensource.org. GPG is, however. Unfortunately
GPG is GPL, not LGPL, so it can't be a standard either.
------------------------------
From: Nicholas Hopper <[EMAIL PROTECTED]>
Subject: Re: Elgamal
Date: Fri, 9 Mar 2001 08:36:38 -0500
See:
Eiichiro Fujisaki and Tatsuaki Okamoto, "Secure Integration of Asymmetric
and Symmetric Encryption Schemes," CRYPTO '99, LNCS 1666, pp 537-554.
or
http://citeseer.nj.nec.com/okamoto00epoc.html
for what is apparently a newer version of this scheme; the authors give a
construction for converting a weakly secure probabilistic public-key
encryption function into one which is provably secure against adaptive
chosen ciphertext attacks in the random oracle model.
On 9 Mar 2001, Vipul Ved Prakash wrote:
> Are there provably secure schemes for encrypting and signing with Elgamal
> (analogous to Optimal Asymmetric Encryption and Probablistic Signature
> Scheme for RSA) ?
>
> best,
> vipul.
>
> --
>
> Vipul Ved Prakash, http://www.vipul.net/
> PGP Fingerprint d5f78d9fc694a45a00ae086062498922
>
>
>
------------------------------
From: [EMAIL PROTECTED] (Richard Herring)
Crossposted-To: alt.hacker
Subject: Re: The Foolish Dozen or so in This News Group
Date: 9 Mar 2001 13:34:48 GMT
Reply-To: [EMAIL PROTECTED]
In article <[EMAIL PROTECTED]>, Benjamin Goldberg
([EMAIL PROTECTED]) wrote:
> Eric Lee Green wrote:
> [snip]
> > Well, there's three layers to consider here: Filesystem, buffer cache,
> > and underlying hardware.
> Err, four. Szopa's using fopen, fwrite, fclose, so there's also a C
> library cache in addition to the other layers.
Perhaps it would clarify matters if someone with better
knowledge than I have could explain how something like PGPwipe
works?
--
Richard Herring | <[EMAIL PROTECTED]>
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: What is the probability that an md5sum of a group of md5sums will be the
same?
Date: Tue, 27 Feb 2001 12:19:55 -0800
Crossposted-To: sci.math
It's much more complex than just what the odds are. The odds are very near
to 2^-128. However, you have to remember that since this is a security
auditing tool, the attacker will do some work, which can rather quickly eat
away at that 128. For example, if the attacker chooses to make 2 attempts,
the odds immediately fall to 2^-127; if he performs 2^64 work, he has a
1/2 chance of success. 2^64 sounds like an enormous number, it is after all
a billion billion, but you are sitting at a computer which can easily do
2^40 work on a weekend (if you're sitting at a seriously out of date
computer), and it was demonstrated through the RSA DES III challenge that
2^56 work can be done for $250,000 in less than a day. Given that, 2^64
doesn't seem like all that much work. I would recommend that you instead
create a SHA1sum program (www.cryptopp.com for an implementation), or at
the very least add a secret value to the beginning (have the user enter it).
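A minimal sketch of the "add a secret value" suggestion, using HMAC-SHA1 from the Python standard library rather than the Crypto++ code mentioned above (the function name and secrets here are illustrative):

```python
import hashlib
import hmac
import io

def keyed_digest(stream, secret):
    # HMAC-SHA1 over the stream's contents; without the secret an
    # attacker cannot precompute a file that matches the digest.
    h = hmac.new(secret, digestmod=hashlib.sha1)
    for chunk in iter(lambda: stream.read(65536), b""):
        h.update(chunk)
    return h.hexdigest()

# Same data, different secrets -> unrelated digests.
d1 = keyed_digest(io.BytesIO(b"files to audit"), b"secret one")
d2 = keyed_digest(io.BytesIO(b"files to audit"), b"secret two")
print(d1, d2)
```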
Joe
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: how long can one Arcfour key be used??
Date: Tue, 27 Feb 2001 12:35:02 -0800
"Julian Morrison" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Ok. How about key length?
For most of the attacks the key length will have little effect, although a
longer key will be stronger.
> One of my intended algorithms will use throwaway
> from-scratch DH to setup a key, but creating DH primes for a full length
> 256 byte RC4 key would take several minutes a pop, way too slow. (I'm
> doing it this way so as to have "forward security" - once the transaction
> is over, there should be no way to decrypt it from wiretap records and a
> siezed machine.)
I think you're doing the DH wrong. You pick the set of parameters {G,P}
once, or at most occasionally; you pick x (and, by relation, G^x) at random.
This should be much faster than selecting a new prime, and since
you're using a 2048-bit prime it's not a major concern; just replace it
every year or ten and you'll be fine.
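A toy sketch of that flow, with deliberately undersized parameters (a real deployment would use a standard 2048-bit group, not the small prime below):

```python
import secrets

# The group {G, P} is fixed once; only the exponent is fresh per session.
# P is a 127-bit Mersenne prime -- far too small for real use.
P = 2**127 - 1
G = 3

def ephemeral_keypair():
    x = secrets.randbelow(P - 3) + 2    # fresh private exponent per session
    return x, pow(G, x, P)              # (x, G^x mod P)

ax, a_pub = ephemeral_keypair()
bx, b_pub = ephemeral_keypair()
shared_a = pow(b_pub, ax, P)            # one side's view of the secret
shared_b = pow(a_pub, bx, P)            # the other side's view
print(shared_a == shared_b)
```

Generating a keypair is just one modular exponentiation, so there is no per-session prime generation at all; discarding x afterwards gives the forward secrecy asked about.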
>
> For example, CipherSaber suggests a 62 byte key + IV; for how long could
> that be used?
The same amount of time as an 80-bit no-IV key, or a 256-byte all-key, or
whatever else; the known attacks on RC4 are not based on the size of the
key. The only consideration is whether the length of the key can resist
brute force.
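For reference, RC4 itself accepts any key length from 1 to 256 bytes, which is why the key size only matters for brute force. A straightforward transcription of the cipher:

```python
def rc4(key, data):
    # Key scheduling (KSA): key may be 1..256 bytes long.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Keystream generation (PRGA) XORed with the data.
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

key = bytes(range(62))           # e.g. a CipherSaber-style 62-byte key+IV
ct = rc4(key, b"attack at dawn")
print(ct.hex())
```

Since encryption is a keystream XOR, applying the same call again decrypts.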
Joe
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: encryption and information theory
Date: Tue, 27 Feb 2001 12:07:40 -0800
The entropy content remains approximately the same. There is an amount of
entropy that is added via the key. I'll leave out the proofs, but the final
entropy is Entropy(key)+Entropy(data); the placement of the entropy is
determined by the encryption algorithm.
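A small illustration of the observer's view: to someone without the key, the ciphertext's byte distribution looks flat even though the plaintext's does not. The toy counter-mode cipher below is a stand-in for "any decent cipher", not PGP's actual construction:

```python
import hashlib
import math
from collections import Counter

def bits_per_byte(data):
    # Empirical Shannon entropy of the byte distribution, in bits/byte.
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def toy_encrypt(key, data):
    # Hypothetical SHA-256 counter-mode keystream (illustration only).
    ks = b""
    ctr = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(d ^ k for d, k in zip(data, ks))

plain = b"to be or not to be, that is the question " * 200
ct = toy_encrypt(b"a key", plain)
print(bits_per_byte(plain), bits_per_byte(ct))   # ciphertext is near 8
```

The information content of the message hasn't grown; it has just been spread by the key so that the byte statistics no longer reveal it.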
Joe
"Andreas Moser" <see@http://www.ztop.freeserve.co.uk> wrote in message
news:97fvk8$j6f$[EMAIL PROTECTED]...
> A question regarding the information content (entropy) of
> encrypted messages:
> Does the encryption change the entropy, i.e. does the
> encrypted message still reflect the information content of
> the original message? Say the original message had an
> entropy of 1 kbit, then use, say, PGP encryption, does it
> increase?
>
> If the answer is yes, where does the additional information
> come from, and if the answer is no, isn't there a way to see
> through the encryption?
>
> Just curious...
> Andreas
------------------------------
From: "Henrick Hellström" <[EMAIL PROTECTED]>
Subject: Q: Authentication in dynamic workgroups
Date: Fri, 9 Mar 2001 14:58:10 +0100
Consider the following scenario:
Adam, Bart, Carol and David work for the same corporation. They travel a lot
and want to be able to have virtual group meetings, to share files etc.,
even when some or almost all of them are away from the home office.
Sometimes they are divided into smaller groups and need to communicate
securely within that group, disallowing any non-member to connect.
Consider the following alternatives. Which one is preferable? Are there any
other alternatives I haven't thought of?
1. Adam, Bart, Carol and David have individual passwords and connect to a
central server Eve. Eve uses Thomas Wu's SRP for authenticated login. Eve is
the only one who stores password verifiers, keys etc of any kind. Eve
manages the group configurations internally in application code, without any
additional cryptographic safeguards. All communication passes through Eve.
Hence, all transmitted information is at some point decrypted and stored as
plain text in the internal memory of Eve.
2. Adam, Bart, Carol and David share a common password within each work
group they are part of, one password per group. Any member of a work group
who is currently at the home office might start a server application at his
personal IP and configure it to allow connections using that specific group
password only. Thomas Wu's SRP is used for authenticated login. No password
verifiers, keys etc are stored anywhere. However, the member at the home
office might shut down his server application once all members know the
current IP of each other. At that point each member serves as both client
and server, and uses the addresses passed on from the stationary member to
locate each other.
3. A combination of (1) and (2). Each member has an individual password that
he uses to connect to Eve, but only to announce his present location and
connection status. The rest goes as in (2).
4. Same as 2, but in addition each member has a long term DH key pair, and
SRP is in some way extended into full AKE. At first login, the client uses
the group password as his private key, and the server (at the home office)
uses his long term DH key. The members who are away from the home office
only have to bring with them the long term public keys of the other members.
Once they know the IP of each other they use regular SRP to establish secure
connections.
To me it seems as if (1) is the standard solution, but I don't like the fact
that Eve knows all transmitted information. It does not fit the scenario.
I'd like to think that (2) is the best solution since it seems to be almost
as practical as (1), but would some kind of user error make it insecure? (3)
might be an alternative, but having the users keep track of too many
passwords is not desirable. (4) is the least practical solution, but I guess
it ought to be more secure than either of (1), (2) or (3). (Or isn't it?)
Does anyone have a specification of such an AKE protocol?
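For concreteness, a much-simplified sketch of the enrollment step that options (1) and (2) rely on: the server stores a salt and verifier, never the password itself. The modulus is a toy value and the hash layout is illustrative; a real deployment must follow Wu's SRP specification (RFC 2945) exactly:

```python
import hashlib
import secrets

N = 2**127 - 1          # toy modulus; real SRP uses a large safe prime
g = 3

def enroll(identity, password):
    salt = secrets.token_bytes(16)
    x = int.from_bytes(
        hashlib.sha256(salt + identity + b":" + password).digest(), "big") % N
    return salt, pow(g, x, N)   # server stores (salt, verifier v = g^x mod N)

salt, v = enroll(b"adam", b"group password")
print(salt.hex(), v)
```

In scenario (2) this verifier would exist only for the lifetime of the home-office server process, which is what makes the "no stored verifiers" claim plausible.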
--
Henrick Hellström [EMAIL PROTECTED]
StreamSec HB http://www.streamsec.com
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: OverWrite freeware completely removes unwanted files from harddrive
Date: Tue, 27 Feb 2001 12:58:09 -0800
Crossposted-To: alt.hacker
I have stayed away from the lack of any semblance of knowledge that is Szopa
for long enough.
What everyone is correctly telling you will cause problems for your program
is called caching. I refer you to the website of any hard drive manufacturer
to see that hard disks now have caches associated with them. The cache is
there to improve speed as much as possible. Because writes are a very
expensive operation, they are not performed if they can be avoided. As your
program overwrites the file, that data will first be placed in the cache,
where it will sit until the disk is idle or the cache is full. When you
write a second time to the same location, the cache is overwritten; the
cache may or may not reflect what has been physically written to the
disk. There are of course a multitude of different cache types; for some
cache types you are correct that the data will be immediately written to
disk, but this type is very uncommon because, as I said, disk writes are
expensive. There are two ways to deal with this: either wait until the disk
has been idle long enough to flush its caches to disk, or write more to the
drive than the cache can hold.
Additionally, some hard drive controllers maintain a cache. This can be a
much more severe problem for overwriters, because these caches are quite
often in excess of 100 MB, which is larger than almost any file you will
normally be overwriting. This has the same problem as before, and these
caches generally have a higher retention rate (due to their size) and will
also marshal the transactions to disk in a much more advanced way, meaning
that it will be even more difficult to determine what has been written to
disk and what has not. Your options here are more complicated than the
disk-level options; basically you have to outthink the caching mechanism,
and a massive number of fast writes simply is not enough. I'll leave it to
you to determine how you want to proceed on this.
The motherboard also becomes involved in this, marshalling the DMA actions
of the CPU and hard drive controller. This could very well cause a
reordering of the instructions as well, which will make the underlying
transactions much more difficult to manipulate. Last time I checked, this
could almost be used to generate true random numbers, so your guess is only
marginally worse than mine.
Moving up again, we get to the operating system. The operating system can
perform even more gyrations than the hard disk controller, having more
memory, more compute power, and a greater difference in latency between
writing to disk and writing to RAM. Because of this the operating system
will heavily optimize the write instructions, with a cache that not
uncommonly exceeds 200 MB. You can however reason with an operating system
and request to be flushed to disk (although Windows NT and 2000 don't obey
very well, they still insist on optimizing you), which will put your code
merely at the mercy of the other 3 systems. Alternatively you could simply
speak SCSI or IDE directly; that should get you down to only those 3
layers.
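On the "reason with the operating system" point, a minimal sketch of requesting a flush (the function name is illustrative; this drains the C-library buffer and the kernel's cache, but the controller and drive caches discussed above can still delay the physical write):

```python
import os
import tempfile

def overwrite_and_flush(path, pattern=b"\x00"):
    # Overwrite the file in place, then ask the OS to push the data out.
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(pattern * size)
        f.flush()                  # drain the C-library/interpreter buffer
        os.fsync(f.fileno())       # ask the kernel to write out its cache

# demo on a throwaway file
fd, tmp = tempfile.mkstemp()
os.write(fd, b"sensitive data")
os.close(fd)
overwrite_and_flush(tmp)
```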
So basically good luck, you have an uphill battle to get there.
Joe
"Anthony Stephen Szopa" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> "Trevor L. Jackson, III" wrote:
> >
> > Anthony Stephen Szopa wrote:
> >
> > > Michael Brown wrote:
> > > >
> > > > <SNIP>
> > > > > >
> > > > > > I checked the web pages, but I can't find any description for how the
> > > > > >>>>>(SNIP SNIP)
> >>>
> >>>
> > the OS might never write to the disk at all.
>
>
> I told you what the coded instructions are.
>
> You and others suggest that just maybe these coded instructions are
> somehow not being carried out.
>
> You are suggesting that maybe sometimes they are and sometimes they
> are not.
>
> Urban Legend or FUD.
>
> Either way, refer us to some research papers that clearly
> address / demonstrate this.
>
> This is no trivial matter.
------------------------------
From: [EMAIL PROTECTED] (DJohn37050)
Date: 09 Mar 2001 14:02:28 GMT
Subject: Re: NTRU - any opinions
So, ECC has a space advantage and perhaps NTRU has a speed advantage on a
Pentium, if you believe NTRU is strong. I notice that the NTRU sig method
presented at Crypto is nowhere to be found (anymore) on the NTRU website;
instead a new one from fall 2000 is being offered. What happened to the old
one, did someone break it? Do you think this inspires confidence?
Don Johnson
------------------------------
Subject: Re: PKI and Non-repudiation practicalities
Reply-To: Anne & Lynn Wheeler <[EMAIL PROTECTED]>
From: Anne & Lynn Wheeler <[EMAIL PROTECTED]>
Date: Fri, 09 Mar 2001 14:24:02 GMT
[EMAIL PROTECTED] (Mark Currie) writes:
> OK, read your aadsover.htm and aadswp.htm. Sounds pretty good to me.
>
> Questions:
>
> 1. Would an account holder have to have a separate PK set for each institution
> ?
>
> 2. Does the model allow the account holder to generate the PK set ?
>
> One (possibly minor) problem that I can see is if your private key is
> compromised/lost, in AADS you have to contact all institutions yourself, in
> CADS you only have to contact the CA.
>
> Mark
>
in general, these are business decisions and/or personal decisions.
in the credit card world ... if you have 15 cards in your wallet and
your wallet is lost/stolen ... then you have to contact all 15
institutions ... unless you have a 1-800 service that does it for you.
if you had one hardware token in your wallet registered with 15
different institutions, or 15 hardware tokens in your wallet ... the
lost/stolen risk is still that if your wallet is lost/stolen you have
to contact 15 different institutions ... or register with a 1-800
service that contacts all institutions.
In the CADS model, there is an implicit assumption that either CRLs
can actually scale up to world-wide proportions (a long way from being
proven; even very modest populations of a couple hundred thousand with
very few end-points seem to experience scaling problems) or that all
the end-points do something like OCSP (again a major scaling issue).
Part of the issue is that while there are millions of possible
end-points, any one specific person may have only registered with a
dozen or so such end-points (in the AADS model). Having the person
just register with the end-points they specifically are involved
with scales significantly better than assuming the absolute worst-case
scenario. Note that with the wallet being the lost/stolen risk mode,
whether there are one or two hardware tokens (PK sets) with multiple
registrations or one hardware token (PK set) per registration
... there are still multiple end-points that need to receive
notifications, i.e. the mode of registration (one-to-many or
one-to-one) is an individual/personal issue ... but multiple end-points
still have to be notified ... and this can be done with a 1-800
operation.
also note, regarding the CADS model, there have been a number of financial
and business institutions claiming that only "relying-party-only"
certification works, because of privacy and liability reasons. An
identity certificate represents a severe privacy issue. Third-party
certificates represent severe liability issues.
The "solution" is a relying-party-only "account"
certificate. However, it is relatively trivial to show that appending
a relying-party-only "account" certificate to a transaction is
superfluous and redundant. The process flow on a signed transaction
with an appended "account" certificate has to read the account record;
by definition an account record contains the original of the
information in the "account" certificate (even the registered public
key). If the transaction involves reading the account record
containing the original (and superset) of all information in the
relying-party-only account certificate, then sending a
relying-party-only account certificate on every transaction to the
entity that holds the original (and superset) of all information in
the relying-party-only account certificate is redundant and
superfluous.
i.e. a relying-party-only CADS model is an AADS model transmitting
redundant and superfluous certificates appended to every transaction.
random URLS
http://www.garlic.com/~lynn/2000.html#36
http://www.garlic.com/~lynn/2000b.html#92
http://www.garlic.com/~lynn/2001c.html#8
http://www.garlic.com/~lynn/ansiepay.htm#aadsnwi2
http://www.garlic.com/~lynn/2000e.html#40
http://www.garlic.com/~lynn/99.html#236
http://www.garlic.com/~lynn/99.html#240
http://www.garlic.com/~lynn/2000b.html#53
http://www.garlic.com/~lynn/2000e.html#47
http://www.garlic.com/~lynn/2000f.html#15
http://www.garlic.com/~lynn/2000f.html#24
http://www.garlic.com/~lynn/2001.html#67
http://www.garlic.com/~lynn/2001c.html#9
http://www.garlic.com/~lynn/aadsm4.htm#00
http://www.garlic.com/~lynn/aadsm4.htm#01
--
Anne & Lynn Wheeler | [EMAIL PROTECTED] - http://www.garlic.com/~lynn/
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: frequency "flattening"
Date: Fri, 09 Mar 2001 15:30:27 +0100
If you use a good cipher like AES, it probably isn't
worthwhile to do homophonic mapping in addition.
M. K. Shen
------------------------------
Subject: Re: PKI and Non-repudiation practicalities
Reply-To: Anne & Lynn Wheeler <[EMAIL PROTECTED]>
From: Anne & Lynn Wheeler <[EMAIL PROTECTED]>
Date: Fri, 09 Mar 2001 14:35:20 GMT
[EMAIL PROTECTED] (those who know me have no need of my name) writes:
> more efficient for whom? if i have 8 institutions, say 2 banks and 6
financial and business institutions are claiming that they have to go
to relying-party-only certificates because of privacy and liability
reasons.
a generalized identity certificate represents a severe privacy
issue. The solution is a domain/business-specific certificate carrying
something like just an account number ... so as not to unnecessarily
divulge privacy information (even a name). The EU is saying that
electronic payments at point-of-sale need to be as anonymous as cash. By
implication that means that payment cards need to remove even the name
from the card. A certificate infrastructure works somewhat with an
online environment and just an account number.
a generalized 3rd-party certificate represents a severe liability
issue. The solution is a domain/business-specific "relying-party-only"
certificate.
combine an "account" certificate and a "relying-party-only"
certificate and you have a CADS model that is the AADS model with
redundant and superfluous certificates appended to every transaction,
i.e. it is trivial to show that when a public key is registered with
an institution and a copy of the relying-party-only certificate is
returned (for privacy and liability reasons) while the original is kept
by the registering institution, then returning the copy of the
relying-party-only certificate to the institution appended to every
transaction is superfluous and redundant, because the institution
already has the original of the relying-party-only certificate.
it is redundant and superfluous to send a copy of the
relying-party-only certificate appended to every transaction to an
institution that has the original of the relying-party-only
certificate (i.e. the institution that was both the RA and the CA for
that specific relying-party-only certificate).
--
Anne & Lynn Wheeler | [EMAIL PROTECTED] - http://www.garlic.com/~lynn/
------------------------------
From: [EMAIL PROTECTED] (Damian Kneale)
Crossposted-To: alt.security.pgp,talk.politics.crypto
Subject: Re: => FBI easily cracks encryption ...?
Date: Fri, 09 Mar 2001 14:29:46 GMT
Once "Mxsmanic" <[EMAIL PROTECTED]> inscribed in stone:
>"Damian Kneale" <[EMAIL PROTECTED]> wrote in message
>news:[EMAIL PROTECTED]...
>
>> Once you have that, you do the key cracking or
>> recovery. Its hard, but not impossible. If it
>> weren't possible, virtually every western
>> government wouldn't have sigint agencies.
>
>There is far more to signal intelligence than just cracking codes. If
>cracking codes were all it included, then indeed no spook agencies would
>still have it, since cracking codes isn't a very success-prone endeavor
>these days.
I am sure I have a better idea of sigint requirements than most, and I
know the difficulties. However, it is relatively well known that
certain countries have intercept capability, and therefore it follows
that message traffic is not secure. Thus the only security you can
rely on is the difficulty of breaking the encryption on the links.
Admittedly intercepting message traffic is getting harder with the
increase in undersea fibre, but governments have the ability to
intercept any traffic they like leaving their own territory via the
exchanges, or satellite intercepts.
>> I'd bet you _all_ your lifetime earnings that
>> at least some government level codes are crackable.
>
>Since it is easy to use a very secure cryptosystem today, I doubt that
>any significantly weak ones are in use, at least in domains where
>organizations like the NSA have a say in their selection. While these
>secure cryptosystems are certainly crackable in a theoretical sense, in
>a real-world sense, they probably aren't.
Probably doesn't cut it when you are designing a government grade
encryption system. You have to know _exactly_ how hard it is to
crack, and the best way to do that is to practice on other people.
I suspect we'll just have to disagree on the relative security of real
world encryption systems.
>> Even a 5-10% success rate is a good success rate
>> in terms of giving a picture of ongoing activities.
>
>I suspect this is over-optimistic by many orders of magnitude.
More than likely. That is information I'm not likely to see exact
numbers on. And of course it depends on the country/system/individual
of interest. Simple statistics indicate that no system is 100% secure
if sufficient time and resources are devoted to breaking it.
>> If you truly believe code breaking isn't possible for
>> good codes, why resist the trend to legislate?
>
>Because if you are thrown into jail for just using a code, it deletes
>the utility of the code significantly.
Not even the US government has tried to enforce laws quite that
futile. Personally I know I have insufficient interest to attract a
national defence agency to have interest in me, and the police have
their own problems with individual rights legislation when attempting
to spy on me. I'm far more worried about things like online credit
card security, and refuse to use mine online. Even supposedly secure
systems and SSL links don't convince me. _There_ is the real utility
of good encryption - banking and monetary transfers.
Damian.
------------------------------
Subject: Re: PKI and Non-repudiation practicalities
Reply-To: Anne & Lynn Wheeler <[EMAIL PROTECTED]>
From: Anne & Lynn Wheeler <[EMAIL PROTECTED]>
Date: Fri, 09 Mar 2001 14:59:10 GMT
[EMAIL PROTECTED] (those who know me have no need of my name) writes:
> more efficient for whom? if i have 8 institutions, say 2 banks and 6
the other scenario from
http://www.garlic.com/~lynn/ansiepay.htm#aadsnwi2
is that for the CADS business solution to privacy and liability the
certificates are actually there .... but using some knowledge
about the actual business flow of the transactions it is possible to
compress the certificate size by eliminating fields that are already
in the possession of the relying party ... and specifically to show that
all fields in the certificate are already present at the relying party,
and therefore it is possible to deploy a very efficient CADS
infrastructure using zero-byte certificates.
--
Anne & Lynn Wheeler | [EMAIL PROTECTED] - http://www.garlic.com/~lynn/
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list by posting to sci.crypt.
End of Cryptography-Digest Digest
******************************