Cryptography-Digest Digest #56, Volume #13 Tue, 31 Oct 00 10:13:00 EST
Contents:
Re: RSA Multiprime (Francois Grieu)
Re: Newbie about Rijndael (Mike DeTuri)
Re: Psuedo-random number generator (Rob Warnock)
Re: Calculating the redudancy of english? (John Bailey)
Re: shared secret signing using a hash... (Tony L. Svanstrom)
Re: Q. to Ritter /PKCS cascade/Hybrid PKCS (JPeschel)
Rijndael Key Schedule (Trish Conway)
Re: Q. to Ritter /PKCS cascade/Hybrid PKCS (Mike Connell)
Re: RSA Multiprime (DJohn37050)
Re: 3-dimensional Playfair? (John Savard)
Re: BEST BIJECTIVE RIJNDAEL YET? (SCOTT19U.ZIP_GUY)
Re: DATA PADDING FOR ENCRYPTION (SCOTT19U.ZIP_GUY)
----------------------------------------------------------------------------
From: Francois Grieu <[EMAIL PROTECTED]>
Subject: Re: RSA Multiprime
Date: Tue, 31 Oct 2000 13:15:00 +0100
[EMAIL PROTECTED] (Scott Contini) wrote:
> The only thing more ridiculous than Compaq patenting this is
> RSA Security licensing the patent.
Agreed, if true. I have not seen the patent claims, and do not
know the details of the cross-licensing agreements between Compaq
and RSA Security. I hope scientists still have some influence at
RSA Security (Bob, are you listening?). I bet the net flow of cash
from Compaq to RSA Security will remain non-negative.
I do feel it would be ridiculous to apply in 1999-2000 for a patent
claiming asymmetric cryptography based on modular exponentiation
modulo a composite formed of 3 or more primes, with use of the
Chinese Remainder Theorem to perform the exponentiation. For
example, back in 1994, Solaic (a French Smart Card manufacturer,
now merged with Schlumberger) proposed the use of this technique
in a bid for the French "CPS" (a Smart Card for the health
practitioner, which can digitally sign). This was seen as a
solution to implement 768 bit secret-key RSA operation on the
ST16F48 IC (albeit with execution times around 30 seconds) to
circumvent the late availability of the ST16CF54 with
cryptoprocessor. Professor Jean-Jacques Quisquater actually proposed
the use of multiple primes, and I studied the implementation on an
8 bit processor. I still have the code fragments and memos in
electronic form.
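For concreteness, here is a toy sketch of the technique (mine, for
illustration only; Python 3.8+): decryption does one short
exponentiation per prime factor and recombines the results with the
CRT. The tiny primes are placeholders; real moduli use primes hundreds
of bits long.

  p, q, r = 1000003, 1000033, 1000037   # toy primes, far too small for real use
  n = p * q * r
  e = 65537
  d = pow(e, -1, (p - 1) * (q - 1) * (r - 1))   # private exponent

  def decrypt_crt(c):
      # one small exponentiation per prime factor (this is the speedup) ...
      residues = [(f, pow(c % f, d % (f - 1), f)) for f in (p, q, r)]
      # ... then recombine with the Chinese Remainder Theorem
      m = 0
      for f, mf in residues:
          nf = n // f
          m = (m + mf * nf * pow(nf, -1, f)) % n
      return m

  msg = 123456789
  assert decrypt_crt(pow(msg, e, n)) == msg

With k primes, each of the k exponentiations runs on operands roughly
1/k the size, so the total work drops roughly as 1/k^2 relative to a
single full-size exponentiation, which is where the sizable (but not
dramatic) speedup comes from.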
> While (multiple primes) gives some speedups, it opens up the
> (RSA) algorithm to possible new attacks: if a faster special
> purpose algorithm than the elliptic curve method is invented,
> then multi-prime RSA easily could become insecure.
Well, GNFS and even MPQS are faster than ECM for practical purposes,
and all three are equally efficient against two-prime and
multi-prime RSA. The product of 2 random 288 bit primes is just
as hard to factor as the product of 3 random 192 bit primes, and this
situation has not changed in the last 20 years. Yes, it is
conceivable this could change, but it is also conceivable, and
IMHO more likely, that some other breakthrough will be made on
factorisation or the discrete log problem over elliptic curves.
> (ECC enables) somewhat efficient operation on 8-bit processors
> without a coprocessor. If you're concerned about the speed of
> private key operations, my recommendation is to use ECC (for
> security concerns, use a randomly generated curve)
Do you have any firm reference for ECC on 8 bit processors without a
coprocessor? Certicom once proposed this on the 68HC05, but it
was apparently retracted. I do not know the reason, and still
wonder if attacks have been found (on the special field used, or
by power/timing attacks).
In summary, I think multiple primes is a useful idea, but not a
patentable one. It gives a sizable (though not dramatic) speed increase
for a constant modulus size, and allows increasing the modulus size
on hardware that supports only fixed-size modular exponentiation,
which does boost security against existing brute-force attacks.
Francois Grieu
[revised post]
------------------------------
From: [EMAIL PROTECTED] (Mike DeTuri)
Subject: Re: Newbie about Rijndael
Date: Tue, 31 Oct 2000 12:17:56 GMT
Some really good suggestions for adding and removing padding were
recently discussed in the thread titled "Rijndael file encryption
question." It's dated 10/24/00.
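For illustration, one common approach (a sketch only, not necessarily
what that thread settled on) is PKCS#5/7-style padding, where every pad
byte records the pad length, so removal is unambiguous. In Python:

  BLOCK = 16

  def pad(data):
      n = BLOCK - len(data) % BLOCK        # 1..16 pad bytes, always at least one
      return data + bytes([n]) * n

  def unpad(data):
      n = data[-1]
      if not 1 <= n <= BLOCK or data[-n:] != bytes([n]) * n:
          raise ValueError("bad padding")
      return data[:-n]

  assert unpad(pad(b"abc")) == b"abc"      # 3 bytes -> one full 16-byte block

A full extra block is added when the data is already a multiple of 16,
so the receiver never has to guess whether padding is present.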
Mike
On Tue, 31 Oct 2000 06:41:34 +0100, "mac" <[EMAIL PROTECTED]> wrote:
>Hello!
>
>I'm a newbie with Rijndael, block ciphers and cryptology in general. I've
>downloaded Mike Scott's C implementation from Rijndael's home site. I'm
>trying to figure out how it works and I have one question. When I want to
>encrypt a string of, say, three characters (bytes), what do I fill the
>rest of the array with (another 13 bytes)? I had problems when passing just a
>null-terminated string that is much shorter than 16 bytes to the
>encrypt/decrypt block functions. It works fine when I pass a 16-byte-long
>null-terminated array. I know this seems like a pretty dumb question to you,
>but I don't understand everything that's happening in the
>encryption/decryption functions and it's killing me.
>
>Thank you very much for any explanations, thoughts or code.
>
>
------------------------------
From: [EMAIL PROTECTED] (Rob Warnock)
Subject: Re: Psuedo-random number generator
Date: 31 Oct 2000 13:34:03 GMT
Terry Ritter <[EMAIL PROTECTED]> wrote:
+---------------
| [EMAIL PROTECTED] (Rob Warnock) wrote:
| >Actually, synchronization steps are probably the only place in
| >a computer you *can* get true quantum randomness. ;-} ;-}
| >... the probability of the
| >synchronizer "settling" to a definite state (a hard "1" or "0")
| >during any subsequent discrete interval is a true random process.
...
| I have never imagined that anyone would think of *using* metastable
| operation for something.
+---------------
Well, here's at least one reference [actually, several abstracts]:
<URL:http://www.imse.cnm.es/~barriga/absdigit.htm>
...
Bellido, M.J., Acosta, A.J., Valencia, M., Barriga, A. and
Huertas, J.L.: "Simple Binary Random Number Generator".
Electronics Letters, vol. 28, no. 7, pp. 617-618, Mar. 1992.
"A random number generator based upon forcing metastable operation
in a CMOS latch is presented. Sequences produced by this generator
have passed standard tests, exhibiting a reasonable random behaviour."
Bellido, M.J., Acosta, A., Valencia, M., Barriga, A. and
Huertas, J.L.: "A simple binary random number generator:
new approaches for CMOS VLSI". 35th Midwest Symp. on
Circuits and Systems, pp. 127-129, Washington, Aug. 1992.
"Random number generators (RNGs) based upon metastable operation in
a CMOS latch are presented. Some different techniques to force
metastable operation and detect the final state are also reported.
Many prototypes have been integrated and sequences produced by
these generators have passed standard tests, exhibiting a good
random behavior."
...
+---------------
| First, I am unaware of any parts which have specified and therefore
| predictable metastable operation.
+---------------
You must not have looked lately. While it is true that several decades
ago it was hard to get such information, for at least the last decade
manufacturers have been characterizing their synchronizer latches and
publishing the results -- especially for their "metastability-hardened"
designs. The standard parameters are usually "tau" and "T0" (and sometimes
"D0" or "T1"), where the failure rate for the latch failing to settle
within "t" after the clock is given as Pfail(t) = Fd*Fc*T0*exp(-(t-D0)/tau),
or expressed as an MTBF:
MTBF(t) = exp((t-D0)/tau) / (Fd * Fc * T0)
where:
Fd = average frequency of transitions on the data input
Fc = clock frequency
T0 = the "real" (as opposed to published!) difference between
the setup & hold time, that is, the window of vulnerability
for *entering* the metastable state
D0 = base constant or irreducible delay in the latch [sometimes
set to zero by artificially bloating up the value of "T0"]
tau = how much you have to increase the settling time "t" to
reduce the failure rate by a factor of 1/e, that is, how
long it will (probably) take to *exit* the metastable state,
once entered. [Tau is inversely related to the gain-bandwidth
product of the core latch, among other things.]
Look in the "synchronizer cell" descriptions in any modern ASIC library,
and you'll find the above specs (or something similar from which tau & T0
can be derived).
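To make the formula concrete, here is a quick calculation in Python.
The parameter values are made up by me (plausible, but from no
particular datasheet):

  import math

  tau = 200e-12   # settling-time constant, 200 ps (assumed)
  T0  = 50e-12    # effective vulnerability window, 50 ps (assumed)
  D0  = 0.0       # folded into T0, as some vendors do
  Fd  = 10e6      # average data-transition rate, 10 MHz (assumed)
  Fc  = 33e6      # clock frequency, 33 MHz

  def mtbf(t):
      # mean time between synchronizer failures, given t seconds of settling
      return math.exp((t - D0) / tau) / (Fd * Fc * T0)

  for t in (1e-9, 2e-9, 3e-9, 5e-9):
      print("t = %g s   MTBF = %.3g s" % (t, mtbf(t)))

Every extra nanosecond of settling time multiplies the MTBF by
exp(1 ns / tau), about 150x with these numbers, which is why "wait
longer" is the textbook fix.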
+---------------
| the characterization of particular parts from particular lots from
| particular manufacturers, who could change that operation at any time.
+---------------
That's true, but now that the problem is "out in the open" (where it
wasn't at one time -- manufacturers didn't want to admit that true
randomness existed!), they keep that info as up-to-date as any of
the rest of the specs.
+---------------
| Second, the probability of encountering metastability (other than a
| deliberate attempt) is very, very low.
+---------------
The probability is *NOT* "very, very low" if the clock and data rates
are very, very high!! See the above formula.
+---------------
| This is a very inefficient way to encounter quantum randomness.
+---------------
See what I said above about designing a feedback loop to *force*
the data edges into the "T0" band. My guess is that you could create
a metastable event on a high fraction of clock edges, at least 10% if
not more. With a 33 MHz clock (a *low* rate today), that's at least
3 million metastable events per second, each of which could produce *at least*
one random bit (perhaps even two or three). Efficient enough?
+---------------
| Third, as I recall, the length of the metastable period is something
| like a negative exponential:
+---------------
The exact formula is given above. But the length per se is not a
negative exponential. Rather, the "metastable exit" probability is
a *constant*, which means that the probability of exceeding some
specified length is a negative exponential function of the length.
[Yes, that's a quibble, but an important one if you're trying to
extract more than one bit of randomness per metastable event...]
+---------------
| very long metastable periods are very, very, very rare. Most metastability
| occurs in short transients and may not cause problems, and so may not be
| noticed at all by normal circuits.
+---------------
Not so. All you need is for the decision to be delayed long enough to
penetrate the critical setup/hold window of the following stage. If your
design is running close to the limits of propagation time for its clock
rate (that is, there is very little "timing margin" -- which is almost
always true for any economical design!), even the slightest additional
metastability-induced delay can cause the following logic to fail
catastrophically.
Consider the output of a synchronizer going into two other inputs, one
of them through an inverter. A static logic analysis will assume that
the following inputs *always* have opposite values at the next clock
edge, but due to metastability-induced delay one input might see a change
while the other (the one through the inverter) doesn't -- *WHAMMO!*
Suddenly you have two parts of your logic holding inconsistent states.
[This *is* a real problem. I've been bitten by it myself, long ago.]
Allowing enough time for reliable synchronization SLOWS THE LOGIC DOWN,
so ignorant or skeptical designers unfortunately sometimes try to "cheat
Mother Nature"... and fail.
+---------------
| >...but you can never reduce the incidence of synchronizer failure
| >to zero. [However, modern hardened synchronizers have an MTBF of
| >many centuries, so for all practical purposes, it's a solved problem.]
|
| Right. So why are we talking about it?
+---------------
Because you can just as easily "unsolve" it, and thereby create a
source of true random numbers!! Basically, you build a "synchronizer
failure detector", and use feedback from it to tune a variable delay
line (or equivalent) so that nearly every data edge falls right *in*
the critical window. [This is the same kind of circuit that those
"lab-bench testers" use.] The old Cheney & Molnar papers showed that
once the metastable state was entered there was a fairly long time
during which the the probability of exiting was fairly uniform (followed
by a rapid exponential tail). If you chop up the results from each
measurement into a bunch of "buckets", you should actually be able
to get *more* than one random bit per metastability event -- probably
megabits per second of truly random bits.
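In code, the "buckets" idea is just a quantizer over the measured exit
time. A toy sketch (the uniform model over the flat region, and the
software stand-in for the hardware measurement, are my assumptions):

  import random

  def measured_exit_time():
      # stand-in for the hardware timestamp; within the flat region the
      # exit probability per unit time is roughly constant, i.e. uniform
      return random.uniform(0.0, 1.0)

  def bits_from_event(k=2):
      # quantize one metastable event into 2**k equiprobable buckets,
      # yielding k raw bits per event
      bucket = min(int(measured_exit_time() * (1 << k)), (1 << k) - 1)
      return format(bucket, "0%db" % k)

  print("".join(bits_from_event() for _ in range(8)))   # 16 raw bits

The exponential tail distorts the last bucket slightly in practice, so
a real design would whiten the raw bits afterwards.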
+---------------
| >Anyway, the point is that by using a bunch of different frequency
| >sources and synchronizing data passing through multiple clock domains,
| >one can *deliberately* generate genuinely random data,
|
| At this level of criticality, it is likely that thermal-based noise
| plays a significant role in allowing or preventing metastability from
| occurring.
+---------------
Not quite. Thermal noise does not *prevent* the metastability from
occurring, though it certainly does have a strong role in helping
the latch *exit* the metastable state, once entered -- and that's
exactly what you exploit when creating a metastability-based RNG.
+---------------
| It would be far easier to just work with the noise.
+---------------
Maybe, maybe not. Normal analog "noise generators" are *very* sensitive
to coupling of power-supply transients into the noise source, and thus
contaminating the resulting random stream. While metastability-based RNGs
are certainly also somewhat susceptible to power-supply coupling, the
fact is that in a metastability-based RNG the "thermal noise" is in the
*time* domain, not the analog voltage domain (at least, not externally),
and is thus somewhat less bothered by power-supply fluctuations.
+---------------
| >- for one thing, you have to
| >make sure that the different oscillators aren't coupled, or you'll
| >get "locking" effects -- but it can be done without too much pain.
| >People who design & test synchronizers do it all the time...
|
| But we aren't talking about synchronizers in a test jig, surrounded by
| precision measuring equipment with repeated stimulation in the
| critical region. We are talking about ordinary synchronizations, in
| practice, in a conventional working system.
+---------------
I addressed that in my other posting, in which I apologized that I
might have been taken as suggesting the use of the synchronizers that
are already "lying around" in a clone PC. I was (and am) really
talking about synchronizers explicitly *designed* as RNGs, which just
happen to use metastability as the source of quantum randomness. Today,
that would have to be done as an add-on card.
However, if people feel it's important to have large numbers of true
random numbers available ubiquitously, and lobby the clone PC makers
hard enough, such a logic block *could* easily be stuck in
a corner of any commodity PC chip set. It would add roughly zero
(discounting the NRE of the IP) to the cost of the system.
-Rob
=====
Rob Warnock, 31-2-510 [EMAIL PROTECTED]
Network Engineering http://reality.sgi.com/rpw3/
Silicon Graphics, Inc. Phone: 650-933-1673
1600 Amphitheatre Pkwy. PP-ASEL-IA
Mountain View, CA 94043
------------------------------
From: [EMAIL PROTECTED] (John Bailey)
Subject: Re: Calculating the redudancy of english?
Date: Tue, 31 Oct 2000 13:35:26 GMT
On Tue, 31 Oct 2000 07:46:22 GMT, [EMAIL PROTECTED] wrote:
>In article <8tkosd$84d$[EMAIL PROTECTED]>,
> Simon Johnson <[EMAIL PROTECTED]> wrote:
>> How does one calculate the redundancy of English?
>>
>This is covered in detail by Shannon in a very early
>paper ("Prediction and Entropy of Printed
>English", 1951). The basic idea is very simple - get the
>smartest people you can find, have them guess an
>English text letter by letter, and see how many tries
>it takes to get each letter right on the average.
And extending this protocol: if, instead of humans, a very intelligent
BUT predictable computer is used, THEN use two computers. Analyze the
text with one at one end of the transmission line and transmit only
the YES/NO guessing results. At the other end, feed the matching
computer and program the same responses to its guesses.
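A toy version of this twin-predictor protocol in Python, with a static
order-0 frequency ranking standing in for the "very intelligent"
predictor (a real one would adapt to context; the ranking below is my
assumption):

  FREQ_ORDER = "etaoinshrdlucmfwypvbgkjqxz "   # 26 letters plus space

  def encode(text):
      # rank = how many guesses the shared model needs for each letter
      return [FREQ_ORDER.index(c) + 1 for c in text.lower()]

  def decode(ranks):
      # the matching model at the far end makes the same guesses
      return "".join(FREQ_ORDER[r - 1] for r in ranks)

  assert decode(encode("the entropy of printed english")) == \
         "the entropy of printed english"

Entropy-code the ranks (small ranks are overwhelmingly common) and you
have, in effect, a compressor whose model is the predictor.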
I thought this was from Hofstadter's Gödel, Escher, Bach, but in
a quick check, I could not find it there.
John
------------------------------
Subject: Re: shared secret signing using a hash...
From: [EMAIL PROTECTED] (Tony L. Svanstrom)
Date: Tue, 31 Oct 2000 14:04:13 GMT
Anders Thulin <[EMAIL PROTECTED]> wrote:
> "Tony L. Svanstrom" wrote:
>
> > $data = 'this is the string';
> > $signature = md5_base64 "[this is the secret] $data";
>
> For more on that particular topic, try RFC1828 and RFC2104.
Thank you, but do you know of any more "interesting" uses?
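For the archive: RFC 2104's HMAC construction is a one-liner with
Python's standard library. A sketch only (and not an endorsement of
MD5); unlike the bare secret-prefix hash quoted above, HMAC is not
subject to length-extension forgery.

  import base64, hashlib, hmac

  secret = b"this is the secret"
  data = b"this is the string"

  # HMAC-MD5 per RFC 2104
  sig = base64.b64encode(hmac.new(secret, data, hashlib.md5).digest())
  print(sig.decode())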
/Tony
PS What do you ProSoft people actually do? *curious*
--
/\___/\ Who would you like to read your messages today? /\___/\
\_@ @_/ Protect your privacy: <http://www.pgpi.com/> \_@ @_/
--oOO-(_)-OOo---------------------------------------------oOO-(_)-OOo--
on the verge of frenzy - i think my mask of sanity is about to slip
------------------------------------------------------------------------
\O/ \O/ ©99-00 <http://www.svanstrom.com/?ref=news> \O/ \O/
------------------------------
From: [EMAIL PROTECTED] (JPeschel)
Subject: Re: Q. to Ritter /PKCS cascade/Hybrid PKCS
Date: 31 Oct 2000 14:20:26 GMT
Mok-Kong Shen [EMAIL PROTECTED] writes:
>If you publish, then you'll get fame and become a guru
>or even be immortal (possible in France).
It's tough to become immortal by publishing your work.
I plan to become immortal by not dying. Only one
slight bug to work out.
Joe
__________________________________________
Joe Peschel
D.O.E. SysWorks
http://members.aol.com/jpeschel/index.htm
__________________________________________
------------------------------
From: [EMAIL PROTECTED] (Trish Conway)
Subject: Rijndael Key Schedule
Date: 31 Oct 2000 12:55:08 -0000
In the Rijndael key schedule, the first subkey is just a copy of the
user key (for a 128 bit user key). Could the following scenario be
interpreted as a weakness? Say the subkeys are generated in a black box
in hardware, and an unauthorised person breaks into the black box and
obtains the subkeys. They now have the user key, and can go to another
black box, input the user key, and impersonate a legitimate user
(supposing that the user key is distributed to a number of users all
using a central host).
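In code, the observation is embarrassingly direct (a sketch; the
function name is mine):

  def recover_user_key(round_keys):
      # For a 128-bit key (Nk = 4), the Rijndael key schedule copies the
      # 16-byte user key verbatim into round key 0: no cryptanalysis needed.
      return round_keys[0]

The usual answer is that the key schedule is not meant to resist an
attacker who can read the expanded key: for a 128 bit key the schedule
is invertible anyway, so *any* single round key lets an attacker run
the expansion backwards and recover the user key. The protection has to
come from tamper resistance, not from the schedule.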
------------------------------
From: Mike Connell <[EMAIL PROTECTED]>
Subject: Re: Q. to Ritter /PKCS cascade/Hybrid PKCS
Date: 31 Oct 2000 15:37:01 +0100
[EMAIL PROTECTED] (JPeschel) writes:
> Mok-Kong Shen [EMAIL PROTECTED] writes:
>
> >If you publish, then you'll get fame and become a guru
> >or even be immortal (possible in France).
>
> It's tough to become immortal by publishing your work.
> I plan to become immortal by not dying. Only one
> slight bug to work out.
>
How to stop the ghost of Woody Allen from haunting you? ;-)
best wishes,
Mike.
--
Mike Connell [EMAIL PROTECTED] +46 (0)31 772 8572
[EMAIL PROTECTED] http://www.flat222.org/mac/ icq: 61435756
------------------------------
From: [EMAIL PROTECTED] (DJohn37050)
Subject: Re: RSA Multiprime
Date: 31 Oct 2000 14:24:54 GMT
The original RSA patent that just expired mentioned the possibility of using
more than 2 primes. Draw your own conclusions about the Compaq multiprime
patent. Bob Silverman, RSA Labs, at the last ANSI X9F1 meeting said he thought
the Compaq patent would be declared invalid.
Don Johnson
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: 3-dimensional Playfair?
Date: Tue, 31 Oct 2000 14:35:34 GMT
On 31 Oct 2000 11:08:35 GMT, [EMAIL PROTECTED] (Juergen
Nieveler) wrote, in part:
>Since I'm just reading "The Codebreakers" (KAHN), and he mentioned that a
>Playfair with more than 2 dimensions would be harder to solve, I'm now
>looking for pointers to people who have actually tried this (just
>curiosity... I know it wouldn't be a really safe system).
>Has anybody ever stumbled about such a thing? Technically, it probably
>wouldn't be hard to do with a computer... just take a 3-dimensional array
>of alphabets instead of a 2-dimensional grid, and encrypting 3 letters at
>once instead of two.
>How much harder would it be to break such an algorithm?
Well, something close to a 3-dimensional Playfair does exist. But it
isn't quite the same, because it doesn't use the same kind of rules.
Encode each letter of a 27-letter alphabet by a combination of three
digits from 1 to 3, and then write the letters in by rows, and take
them out by columns.
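In Python, that fractionation (essentially Delastelle's trifid cipher,
up to the orientation of the write-in/read-out transposition; the
filler symbol '+' and the period of 5 are my choices) looks like this:

  ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ+"   # 27 symbols

  def to_digits(c):
      n = ALPHABET.index(c)
      return (n // 9, (n // 3) % 3, n % 3)   # three "digits", each 0..2

  def from_digits(a, b, c):
      return ALPHABET[9 * a + 3 * b + c]

  def trifid_encrypt(plaintext, period=5):
      out = []
      for i in range(0, len(plaintext), period):
          block = plaintext[i:i + period]
          rows = zip(*[to_digits(ch) for ch in block])   # each letter is a column
          flat = [d for row in rows for d in row]        # read the grid out by rows
          out += [from_digits(*flat[j:j + 3]) for j in range(0, len(flat), 3)]
      return "".join(out)

  def trifid_decrypt(ciphertext, period=5):
      out = []
      for i in range(0, len(ciphertext), period):
          block = ciphertext[i:i + period]
          flat = [d for ch in block for d in to_digits(ch)]
          k = len(block)
          cols = zip(flat[:k], flat[k:2 * k], flat[2 * k:])
          out += [from_digits(*c) for c in cols]
      return "".join(out)

  assert trifid_decrypt(trifid_encrypt("THREEDIMENSIONAL")) == "THREEDIMENSIONAL"

The keyed part (a mixed alphabet) is omitted here for brevity.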
Something more closely analogous to Playfair would be less secure than
that, since it wouldn't change which coordinate each digit applies to.
With only three digits per letter, and given the extra complexity, I
haven't heard of it actually being tried.
John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: BEST BIJECTIVE RIJNDAEL YET?
Date: 31 Oct 2000 14:39:52 GMT
[EMAIL PROTECTED] (Tim Tyler) wrote in <[EMAIL PROTECTED]>:
>SCOTT19U.ZIP_GUY <[EMAIL PROTECTED]> wrote:
>: [EMAIL PROTECTED] (Tim Tyler) wrote in <[EMAIL PROTECTED]>:
>:>SCOTT19U.ZIP_GUY <[EMAIL PROTECTED]> wrote:
>:>: [EMAIL PROTECTED] (John Savard) wrote:
>
>:>:>Here I am in difficulty in understanding the specifics, but I think
>:>:>I may be more in agreement with Mr. Tyler here. In fact, this is
>:>:>precisely the point on which I've found Mr. Scott's bijective scheme
>:>:>flawed when he converts a compressed file to one with a length that
>:>:>is an integer number of octets.
>:>
>:>: What you fail to understand is that it is not a flaw.
>:>
>:>I was under the impression that your original Huffman compression
>:>routine exhibited a bias in the final byte, when applied to sets of
>:>random inputs (e.g. usenet news messages).
>
>: Yes, I believe you did find a bias based on compressing usenet
>: messages. But that still does not take away from the fact that
>: any byte value could have been used as the last byte of compressed
>: text and that it would still correctly decompress and recompress.
>
>That's true. However John Savard seems to have believed that the
>ending bias was a weakness. Certainly, if you have a large number of
>files distributed under the same key, such a bias will enable attackers
>to reject keys, by seeing if the expected statistical bias is present in
>the last bits. I agree with him that this is a potential source of
>concern.
I don't think an attacker can use it to reject keys, since any
ending is still possible. However, if the attacker does notice a pattern,
because of the kind of message you normally compress and send, there
is an optimal way to search if one can tell in advance which keys
lead to which endings. But still, any ending is possible.
>
>: I think this is an artifact of the optimal endings. [...]
>
>I agree.
>
>: You could hide it by using my focused huffman compression, but it's
>: still there in one form or another.
>
>Yes.
>
>:>This meant that the last bits were more likely to be zeros than ones.
>:>
>:>I tested it to see if this was the case, and observed this effect.
>:>
>:>After some reflection, I believe the problem is specific to your
>:>ending method - I don't think other 1-1 methods are necessarily flawed
>:>in the same way.
>
>After some more reflection, I retract this comment. It was an error ;-|
>
>:>I regard this as a flaw - and it is a problem that John's scheme
>:>avoids.
>
>: John's methods are not 1-1, and he changes the distribution by adding
>: random numbers.
>
>He would probably argue that *if* the numbers are genuinely random,
>that won't help attackers. Essentially, I'm inclined to agree with him.
>
>: The bias comes in during the conversion of the finitely
>: odd files to one of some fixed structure, such as 8 bit groupings.
>: [...]
>
>I believe John's method applies to ordinary bitstreams - not finitely
>odd files (though you can convert the latter to the former by appending
>a "1" bit).
>
>: If one looks at finitely odd files, then in each case a file 1 bit
>: longer allows twice as many files for each bit. In this sense it's not
>: biased; however, if you look at the range of files that you're encrypting,
>: they don't increase this way, so there are many more with a trailing
>: bit of zero.
>
>Yes, this is the source of the problem.
This is a potential problem that I feel should be minimized as best
one can before the god of random is invoked. Maybe there is some hidden
theorem or rule of thumb that would say: the larger the final block
size granularity you try to encrypt to, the more bits you can expect
the key to be weakened by. I think that fitting to 8 bit granularity
should weaken it by at most 8 bits, and I suspect the effect would be
much smaller, if it is there at all.
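For reference, the kind of "append a 1 bit" conversion Tim mentioned
above looks like this in Python (a sketch; note it is a bijection
between arbitrary bit strings and the byte files containing at least
one 1 bit, so the all-zero files are unreachable, which is exactly the
flavor of end-of-file subtlety being argued about here):

  def pad_bits_to_bytes(bits):
      # append the terminating '1', then zero-fill to a byte boundary
      bits = bits + "1"
      bits += "0" * (-len(bits) % 8)
      return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

  def unpad_bytes_to_bits(data):
      bits = "".join(format(b, "08b") for b in data)
      # strip everything from the last '1' bit onwards; an all-zero
      # file has no '1' and is not a valid padded file
      return bits[:bits.rindex("1")]

  assert unpad_bytes_to_bits(pad_bits_to_bytes("10010")) == "10010"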
>
>: Suppose there is a bijective way to end the file that does not have
>: my bias, as you call it. Then there would still be a way to convert
>: that ending to mine, so that all an attacker would have to do is
>: transform it to one of my endings and look at the last bit there.
>
>A convincing argument why a bijective method will fail to completely
>eliminate the problem. John's solution to this problem was to avoid
>using a bijective method.
>
>: Check my focused huffman the way you did the other one and see if you
>: see the same bias.
>
>It's not so obviously there - but an attacker can still reject keys based
>on characteristics of sets of compressed files (though he is slowed
>down).
I am not saying that one cannot add randomness if one wishes, just as
one may want authentication. But you can add these either after or
before the compression-encryption phase.
As another example: compress a file where you allow the last few bytes
to be random. When you get to the end, invoke the god of random to get
a target byte. Check how the last few bytes can be modified so that the
target byte appears, or is at least approached as closely as possible.
Note this requires a little work, but since these are adaptive
compression methods you only have to play the game at the end. You may
even just choose from a set of random endings and see how close they
come to the current random ending you are trying to emulate, and store
the result so that the next time you encrypt a random ending you can
bias the way you pick from the set, so that it appears most random
given the current random target value and the endings actually used
before. There are many ways, but it is nice to have an easily isolated
compression-encryption package where the user can still check for
complete bijection.
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
Scott famous encryption website **now all allowed**
http://members.xoom.com/ecil/index.htm
Scott LATEST UPDATED source for scott*u.zip
http://radiusnet.net/crypto/ then look for
sub directory scott after pressing CRYPTO
Scott famous Compression Page
http://members.xoom.com/ecil/compress.htm
**NOTE EMAIL address is for SPAMERS***
I leave you with this final thought from President Bill Clinton:
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: DATA PADDING FOR ENCRYPTION
Date: 31 Oct 2000 14:53:29 GMT
[EMAIL PROTECTED] (Tim Tyler) wrote in <[EMAIL PROTECTED]>:
>It does make files shorter - though I believe not by very much.
>
But over many files it can add up to a lot.
>I suspect the files are "already" in this form. My only query was whether
>it was worth transforming the results to files of bytes, when the security
>benefit is zero.
I am not sure it's zero. For one thing, we always assume the attacker
knows the method of encryption, but I don't think we should advertise
it. If the NSA is monitoring one's encrypted traffic, why tell them you
are using IDEA or Rijndael? They may suspect, but since they likely
follow very blind, narrow procedures, if they don't observe certain
types of multiply groups they may not put much effort into trying to
break the code correctly. I suppose an example might be the Middle
East. From the news posts, I guess someone knew the Cole was going to
be attacked, and because the message was in Arabic it was given a low
weight. Why did the analyst quit his job? The US says he did not know
the target was the Cole, but knowing it's official policy to lie to the
American people, does anyone really believe he quit over a lesser
reason?
>
>I don't believe other folks generally perform such a stage (though this is
>probably partly because they don't know how to do it).
>
I suspect others may know how, but committees can be led
to use bad methods. Do you think the AES effort will allow for
a fully bijective byte-file-to-file version of padding
and chaining to occur? Of course not; it is too good an idea
for a committee that is heavily influenced by the NSA to allow
into a blessed standard.
>: Like I have said to others, this is not the end-all. But I feel his
>: implementations should be one of the main ways people combine
>: compression and encryption.
>
>It needs to be built into a program with a UI for most folk to use it.
I think Matt did just that.
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
Scott famous encryption website **now all allowed**
http://members.xoom.com/ecil/index.htm
Scott LATEST UPDATED source for scott*u.zip
http://radiusnet.net/crypto/ then look for
sub directory scott after pressing CRYPTO
Scott famous Compression Page
http://members.xoom.com/ecil/compress.htm
**NOTE EMAIL address is for SPAMERS***
I leave you with this final thought from President Bill Clinton:
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************