Cryptography-Digest Digest #339, Volume #12 Wed, 2 Aug 00 14:13:01 EDT
Contents:
Re: Sending Messages in Morse Code (wtshaw)
Re: counter as IV? (David A. Wagner)
Re: Proving continued possession of a file (Andru Luvisi)
Re: counter as IV? (stanislav shalunov)
determining valid data (ruebarb)
Re: Proving continued possession of a file (Andru Luvisi)
Re: Sending Messages in Morse Code (JimD)
Re: Small block ciphers (Mack)
Re: Hardware based PK crypto device - University project (Mike Rosing)
Re: counter as IV? (David A. Wagner)
Re: Plausible Word Generation via Trigram Statistics (James Pate Williams, Jr.)
Re: Elliptic Curves encryption (Mike Rosing)
Re: counter as IV? (David A. Wagner)
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (wtshaw)
Crossposted-To: sci.math
Subject: Re: Sending Messages in Morse Code
Date: Wed, 02 Aug 2000 10:15:25 -0600
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (John Savard) wrote:
>
> I'm assuming I have a bit sequence that is precisely uniformly
> distributed, and my goal is to find the optimal transformation of that
> sequence - within limits to the complexity of the transformation used
> - into Morse code.
>
....
> I do keep the original Morse code symbols. I note that if I introduced
> more symbols, even symbols from the original Morse code, such as those
> for digits, this would result in changes to the transformation.
>
> Normally, in constructing a Huffman code for the English alphabet, for
> example, one has probabilities for symbols, and then uses those
> probabilities to construct symbols. Supposing one already has a fixed
> set of symbols of different lengths: can one go in reverse, and
> determine how to use those symbols optimally?
>
I've done a study in this area. Your idea is fine. However, the scheme
does not allow for simple conversion from trits. The best bet is to
convert to base 26, then survey character frequencies and assign the
shortest Morse characters to the most frequent letters -- a substitution.
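That substitution step can be sketched in a few lines; the frequency order and the Morse table here are the standard international code and a common English-frequency approximation, not wtshaw's actual survey:

```python
# Sketch: assign the shortest Morse symbols to the most frequent letters.
# FREQ_ORDER is a rough published English frequency order (an assumption,
# not a measured survey); MORSE is the standard international code.

MORSE = {
    'A': '.-',   'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',
    'F': '..-.', 'G': '--.',  'H': '....', 'I': '..',   'J': '.---',
    'K': '-.-',  'L': '.-..', 'M': '--',   'N': '-.',   'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.',  'S': '...',  'T': '-',
    'U': '..-',  'V': '...-', 'W': '.--',  'X': '-..-', 'Y': '-.--',
    'Z': '--..',
}

FREQ_ORDER = "ETAOINSHRDLCUMWFGYPBVKJXQZ"  # most frequent first

def shortest_symbol_substitution():
    """Map each letter (most frequent first) to the shortest Morse symbol."""
    symbols = sorted(MORSE.values(), key=len)   # shortest symbols first
    return dict(zip(FREQ_ORDER, symbols))

sub = shortest_symbol_substitution()
# 'E' (most frequent) gets a one-element symbol; 'Z' gets a four-element one.
```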
Considering the possibilities, my base translation scheme makes the most
sense for getting a set of 26 from certain larger sets. Only a few of the
natural candidates are simple, and those are the most useful:
129 (Okinawa), 95 (NagaHills), 93 (Dabaka), 77 (Tonk), 58 (Oliva),
47 (Mafra), and 30 (Belfast).
The larger the base, the more characters can be involved in the plaintext.
The larger the source set, the longer the resultant translated text.
Padding needed to complete the last group may add further length. Spaces
are themselves encrypted characters.
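A generic version of the translation idea -- the named bases above are wtshaw's own schemes, so this sketch just re-encodes bytes into base-26 five-letter groups with naive padding -- illustrates why the translated text is longer than the source:

```python
# Each output letter carries log2(26) ~ 4.70 bits versus 8 bits per input
# byte, so the base-26 text is longer than the source, as the post says.
# The 'x' padding completing the last group is a simplification; a real
# scheme would pad unambiguously.
import string

def to_base26_groups(data: bytes, group=5, pad_char='x'):
    """Re-encode bytes (base 256) as base-26 letters in five-letter groups."""
    n = int.from_bytes(data, 'big')
    letters = []
    while n:
        n, r = divmod(n, 26)
        letters.append(string.ascii_lowercase[r])
    text = ''.join(reversed(letters)) or 'a'
    if len(text) % group:
        text += pad_char * (group - len(text) % group)
    return ' '.join(text[i:i + group] for i in range(0, len(text), group))
```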
Consider this sentence in these seven algorithms using default keys.
Okinawa: vhiwc vtgfv xnweu uezgh ftlow huvdr tkygh fvdvw eutld tlovz vwrjv
drson tvfvx xwhpu yvghf wdcvd vggtt kzsrf uujgh gtlgw dggeo srptg avdv
NagaHills: jlsdb mesdf hidpt exttb xsksl yjsrw jbqjy uougu slwbz gpkaw
wpmui qugex iydwj cqrhm wqtlj tkaey yxyqw qgndn cslnb srp
Dabaka: jjjmc xfagb ouddg jfcfb arzin onxyr bphjv ikhkc srvpq wtybi ybmsx
syirv ospvw sqrnk ljcdm ngyhl pjkyw duwio wgonh jutao txvxy
Tonk: nsvnt clzok thtld gtffq okgwo kgfok whptj ptldc tcbft cavok hknay
hrukr tldkt firtc mjpen kojil tujox dykve ettvd o
Oliva: nzuuo qmhbb ysgvk uxxyc ssvup nxxmf qoguh qbkow ysbpv ovwgi ruesl
qpixs uxydc qoezt oiyym vrbrr riszs fxasn
Mafra: ygrdc fklme ujiys xivnw mbgqv srfpc kpkwp ihxgh bwiat grjno iiqxk
ipnvd yomwv kpswb dknkh wpzqx mqga
Belfast: cxgnw dtiyj ynjpj duzhc gcpqi hexzp vwpry xidkv fskuw bepny iazqp
iekrh rnitd dxaye cibtq xvhpj ofnl
Base 26 may be obtained through slightly smaller and
multiplexed (compressed) bases as well.
--
Free Circus soon to appear in Philadelphia, complete with an
expectation of elephants, and noisy clowns in undignified
costumes performing sleight of logic, and, lots of balloons.
------------------------------
From: [EMAIL PROTECTED] (David A. Wagner)
Subject: Re: counter as IV?
Date: 2 Aug 2000 10:03:55 -0700
Douglas A. Gwyn <[EMAIL PROTECTED]> wrote:
> Feel free to modify the question to: What advantage is there
> in one-shot random IVs (with your choice of chaining mode)
> which have to be communicated in the clear, over just using the
> next value of a counter (also known to the enemy) for the IV?
> (Some ways of using an IV are not equivalent to producing a
> related key.) In other words, what does the "randomness" buy
> you, given that the IV is known to the enemy?
Well, that's a very different question, but I'll answer it, too.
If you use a counter as your IV (instead of using a truly random IV)
with, say, CBC mode, some information about the plaintext can be leaked.
To quote from an essay written in 1995 by Phil Rogaway:
``As an example, it is not true that CBC encryption
can use an arbitrary nonce initialization vector: it is essential
that the IV be unpredictable by the adversary. (To see this, suppose
the IV is a sequence number: 0, 1, 2, ... . Then a (first) encryption
of 0x0000000000000000 followed by an encryption of
0x0000000000000001 is recognizably distinct from a (first) encryption
of 0x0000000000000000 followed by an encryption of
0x0000000000000000. Clearly this violates the notion of a
secure encryption sketched [earlier in the essay].)''
http://www.cs.ucdavis.edu/~rogaway/papers/draft-rogaway-ipsec-comments-00.txt
If our plaintexts were truly random, this wouldn't be an issue. But
in practice, we know that plaintexts are often patterned, and sometimes
in a way which may lead -- with non-negligible probability -- to two
consecutive plaintexts that differ by 1 mod 2^64, and in this latter
case, information is leaked.
Using a truly random IV eliminates this vulnerability. That's why I
recommend using an unpredictable, random IV and not, e.g., a counter.
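Rogaway's observation can be demonstrated concretely. This toy sketch uses HMAC-SHA256 truncated to 8 bytes as a stand-in for the block cipher (not a real permutation, but deterministic and keyed, which is all the demonstration needs):

```python
# With counter IVs, the IV exactly cancels the difference between the two
# plaintext blocks, so the first ciphertext blocks collide -- leaking that
# the plaintexts differ by the counter step.
import hmac, hashlib

def E(key: bytes, block: bytes) -> bytes:
    """Toy stand-in for a 64-bit block cipher (NOT a real permutation)."""
    return hmac.new(key, block, hashlib.sha256).digest()[:8]

def cbc_first_block(key: bytes, iv: int, plaintext_block: int) -> bytes:
    """First block of CBC: E(K, IV XOR P1)."""
    return E(key, (iv ^ plaintext_block).to_bytes(8, 'big'))

key = b'secret key'
# Counter IVs: message 0 gets IV 0, message 1 gets IV 1.
c0 = cbc_first_block(key, 0, 0x0000000000000000)
c1 = cbc_first_block(key, 1, 0x0000000000000001)
# c0 == c1: the eavesdropper sees the collision.  With a random IV,
# equality would hold only by chance.
```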
Note that this is a different issue from how you use the IV in the
chaining mode. I also suggest, no matter how you generate your IV, that
you avoid applying the IV by XORing it against the key; but this suggestion
is for entirely different reasons. I hope I haven't introduced any
confusion between the two issues, which should remain separate.
------------------------------
From: Andru Luvisi <[EMAIL PROTECTED]>
Subject: Re: Proving continued possession of a file
Date: 02 Aug 2000 09:55:38 -0700
[EMAIL PROTECTED] (Mark Wooding) writes:
[snip]
> I suspect, then, that this is actually more
> effort than simply computing g^x mod n (especially since keen Bobs can
> do serious precomputation on the exponent if they feel like it). Or
> have I misunderstood something?
[snip]
I get it now. You aren't missing anything. I was under the highly
mistaken impression that in order to engage in your protocol Bob would
need to have the entire file in ram at once. I thought I had made
progress by allowing the file to be processed in ram.
Andru
--
Andru Luvisi, Programmer/Analyst
------------------------------
From: stanislav shalunov <[EMAIL PROTECTED]>
Subject: Re: counter as IV?
Date: 02 Aug 2000 13:15:27 -0400
"Douglas A. Gwyn" <[EMAIL PROTECTED]> writes:
> Simon Johnson wrote:
> > Yes, it's desirable, and therefore usually the case, to spend a long
> > time setting up the sub-keys for the rounds. The reason is that it
> > makes brute-force attacks more computationally difficult if the key
> > scheduling algorithm uses weird and slow functions.
>
> I have to disagree with that. If a brute-force attack is feasible
> at all, trying to protect against it by such means is futile.
I agree that spending a lot of time setting up key schedule offers no
inherent security advantage. But what are some of the block ciphers
for which your scheme (setting up new key schedule for each block)
wouldn't impose a lot of overhead?
Wouldn't this chaining mode require some new block ciphers designed
with it in mind?
--
"If my Eternal Life Device does not give immortality, then the entire
Bible is a joke." -- Alex Chiu
http://www.alexchiu.com/affiliates/clickthru.cgi?id=shalunov
------------------------------
Subject: determining valid data
From: ruebarb <[EMAIL PROTECTED]>
Date: Wed, 02 Aug 2000 10:12:21 -0700
I'm what you might call an interested beginner in cryptanalysis -
Neal Stephenson's book kind of pushed me over the edge, so to
speak, and I'm currently brushing up on my C coding skills to
mess around with a couple of algorithms.
I'm also working on a program that will decode a message written
using several alphabets. (My friend used 3 separate symbols for
each letter and rotated them in a random fashion.)
I'm working on what can be a brute force cracker for this puzzle,
and I'm trying to get a picture of what the best analysis for
this puzzle would be. I get the impression from Neal's book that
no one runs a spellcheck against the results of each combination,
but that you would run a frequency count against the possible
plaintext, and if it's close to what it should be, you would mark
it for review by a person.
I would think an easier solution would be to write a C program
that runs a spellcheck on the possible plaintext and discards it
if more than, oh, say, 10 percent of the words don't check out.
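The frequency-count approach described above can be sketched like this; the letter-frequency table is a rough published approximation for English, not taken from any specific reference:

```python
# Score candidate plaintexts against typical English letter frequencies
# with a chi-squared statistic; low-scoring candidates go to a human
# for review.
from collections import Counter

ENGLISH_FREQ = {  # approximate expected fraction of each letter
    'e': .127, 't': .091, 'a': .082, 'o': .075, 'i': .070, 'n': .067,
    's': .063, 'h': .061, 'r': .060, 'd': .043, 'l': .040, 'c': .028,
    'u': .028, 'm': .024, 'w': .024, 'f': .022, 'g': .020, 'y': .020,
    'p': .019, 'b': .015, 'v': .010, 'k': .008, 'j': .002, 'x': .002,
    'q': .001, 'z': .001,
}

def chi_squared_score(candidate: str) -> float:
    """Lower is more English-like."""
    letters = [c for c in candidate.lower() if c.isalpha()]
    n = len(letters)
    if n == 0:
        return float('inf')
    counts = Counter(letters)
    return sum((counts.get(ch, 0) - n * p) ** 2 / (n * p)
               for ch, p in ENGLISH_FREQ.items())

english = "the quick brown fox jumps over the lazy dog and then some"
noise   = "zzqj xxkq wvzj qqpx jzzv kqxw zjqz"
# English text scores far lower than gibberish dominated by rare letters.
```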
Is there anyone with possible web resources or source code that
can point me in the right direction? I'm using Applied
Cryptography by Bruce Schneier, and a couple of other things. If
so, feel free to email me at [EMAIL PROTECTED]
Now all I have to do is decide whether to do this in C (cross-platform
support) or VB (optimized for Windows but easy to compile and optimize
for Pentium IIIs).
------------------------------
From: Andru Luvisi <[EMAIL PROTECTED]>
Subject: Re: Proving continued possession of a file
Date: 02 Aug 2000 10:05:50 -0700
Andru Luvisi <[EMAIL PROTECTED]> writes:
[snip]
> I get it now. You aren't missing anything. I was under the highly
> mistaken impression that in order to engage in your protocol Bob would
> need to have the entire file in ram at once. I thought I had made
> progress by allowing the file to be processed in ram.
[snip]
Argh. I meant processed in blocks. Which could already be done with
your protocol, I just didn't see it.
Andru
--
Andru Luvisi, Programmer/Analyst
------------------------------
From: [EMAIL PROTECTED] (JimD)
Crossposted-To: sci.math
Subject: Re: Sending Messages in Morse Code
Reply-To: JimD
Date: Wed, 02 Aug 2000 16:19:17 GMT
On Tue, 01 Aug 2000 10:32:32 GMT, [EMAIL PROTECTED] (John
Savard) wrote:
>For all I know, the scheme I outline on
>
>http://home.ecn.ab.ca/~jsavard/crypto/mi060307.htm
>
>may be the subject of a patent somewhere...or the subject of a common
>homework problem in classes on information theory...or it might even
>have been considered once (but probably never used in practice) within
>the hallows of the NSA.
>
>It addresses what technique one must use to solve a cute class of
>problems in information theory.
>
>Supposing one wants to send a message, for historical reasons, in the
>form of letters from the standard 26-letter alphabet to be transmitted
>by Morse Code.
I recall a system called 'Atbash' which encrypts binary files
to five-character groups using A-Z and 1-0. You might like to
do a Web search for it.
I'm not qualified to comment on its security.
It's probably not very spectrum-efficient, as it's rather a
bulky system. Unsurprisingly.
--
__________________________
Jim Dunnett.
g4rga at thersgb.net
------------------------------
From: [EMAIL PROTECTED] (Mack)
Date: 02 Aug 2000 17:26:17 GMT
Subject: Re: Small block ciphers
>
>
>Mack wrote:
>
>> >> Let see an actual example.
>> >> This is rather long and uses ANF.
>> >> It is rather simple but it shows the principle.
>> >>
>> >> This can be expanded to any number of bits T.
>> >> B = bits of block size
>> >> K = bits of key size.
>> >> T = B + K
>> >>
>> >> The number of terms is equal to 2^T.
>> >> The number of equations is B
>> >> Obviously this is impractical for large T.
>> >
>> >Isn't it that one can have B equations where on the left side one
>> >has the known values of the output bits, while on the right side one
>> >has functions involving the K unknown values of the key bits and the
>> >B known values of the input bits? If yes, then we have B equations
>> >and K unknowns. Or I am missing something here?
>>
>> For the most part that is it. But the B equations have K unknowns
>> with lots of nonlinear terms. But at some point you have T unknowns.
>> Ie. before you have known values. Basically you have to develop
>> the representation in T unknowns.
>
>We are doing at the bit level, aren't we? If one substitutes the known
>bits into the T terms, how can one say that an expression involving
>the T terms has T unknowns instead of B unknowns? (Compare
>my example quoted below.)
>
Yes, it is a bit-level representation. The expression is developed before
we have known bits; once those bits are known, the expression can be
reduced. That is, without a known pair we develop the expression of the
cipher in ANF, as equations giving the output in terms of the input and
key, both unknown. So we have T unknowns at this point. Once we have a
known pair, we can reduce this to a set of equations in K unknowns by
substituting the known bits for the B variables.
Now it is conceivable to develop a set of equations in the K unknowns
by assuming a certain plaintext. This results in a chosen-plaintext
attack rather than a known-plaintext attack.
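A toy example of this reduction, with assumed parameters B = 2 block bits and K = 2 key bits (so T = 4); the cipher itself is invented purely for illustration:

```python
# Before a known pair, the cipher's ANF is in T = 4 unknowns; substituting
# a known plaintext leaves B = 2 equations in the K = 2 key bits, which we
# then solve by brute force.
from itertools import product

def toy_cipher(p0, p1, k0, k1):
    # ANF over GF(2):  c0 = p0 + k0 + p1*k1,   c1 = p1 + k1 + p0*k0
    return (p0 ^ k0 ^ (p1 & k1), p1 ^ k1 ^ (p0 & k0))

def solve_key(p, c):
    """Try all K-bit keys against one known (plaintext, ciphertext) pair."""
    return [k for k in product((0, 1), repeat=2)
            if toy_cipher(p[0], p[1], *k) == c]

true_key = (1, 0)
p = (1, 1)
c = toy_cipher(*p, *true_key)
# Note: one pair may leave several consistent keys -- the unicity issue
# discussed below in this thread.
```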
>> >> It also illustrates how a non-linear term can
>> >> be assumed to be a variable. It shows
>> >> how some pairs are better than others.
>> >
>> >The point is whether, on introducing a non-linear term as a
>> >variable, one takes away a previously existing (simple) variable,
>> >or one has the non-linear term as an 'additional' (new) variable.
>> >This is essential, since, if I don't err, your argument of infeasibility
>> >has to do with the count of number of variables with respect to
>> >the number of equations. To use an analogous case, the equation
>> >x^2+x+1=0 is normally considered to have only one variable.
>> >Do you want to regard x^2, which is non-linear, to be a variable
>> >and thus consider the equation to be of two variables?
>> >
>>
>> No it is still one variable. Although regarding it as two variables
>> may make solving the system of equations easier. For some systems
>> it may simply be impractical to solve the set of equations without
>> such reductions.
>>
>> The final rule is that if you have B equations in K unknowns
>> then it can only be solved if the B equations are nondegenerate
>> and consistent and B>=K.
>>
>> nondegenerate and consistent having the standard meaning of
>> linear equations.
>>
>
>Is your target argument 'impracticability' or 'impossibility'? Many
>non-linear equations in normal computations are very difficult to
>solve. This is only impractical not impossible. My current
>understanding is that you are seeking a set of equations that are
>difficult to solve. But then you run into certain troubles. For the
>'difficulty' is in the general case hard to quantify.
>
Ideally, for any cipher, we should be able to say it is impossible to
solve under a given set of criteria. But in general we can only say we
think it is impossible. DES supposedly has a unicity distance of about
2.5 blocks, meaning we need about 2.5 known plaintext blocks on average
to determine the key by brute force. But I have never seen a proof of
that, and I can't recall the reference, so I can't tell you how it was
arrived at. But since the key is only 56 bits and a block is 64, a
single known plaintext block should usually suffice to find the key.
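The 2.5 figure is at least consistent with a back-of-envelope Shannon unicity-distance estimate, assuming English redundancy of roughly 3.2 bits per character (the redundancy value is an assumption; other models give different numbers):

```python
# Shannon unicity distance: U = H(K) / D, where H(K) is the key entropy
# and D the per-character redundancy of the plaintext language.

H_K = 56.0   # DES key entropy in bits
D   = 3.2    # assumed redundancy of English, bits per character

U_chars  = H_K / D       # unicity distance in characters (~17.5)
U_blocks = U_chars / 8   # in DES blocks of 8 characters (~2.2)
# ...the same ballpark as the "about 2.5 blocks" figure quoted above.
```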
>M. K. Shen
>
>
>
Mack
Remove njunk123 from name to reply by e-mail
------------------------------
From: Mike Rosing <[EMAIL PROTECTED]>
Subject: Re: Hardware based PK crypto device - University project
Date: Wed, 02 Aug 2000 12:26:34 -0500
Paul Morris wrote:
> I am currently reading Electronic Engineering at Southampton University, UK,
> and as a major part of my degree course I must develop an electronics based
> project. Having read a little about encryption technology in general and
> having a general interest in the subject area I was hoping to base my
> project on some form of public key encryption system.
> My initial idea revolved around designing a unit capable of conducting a
> secure transaction with an authenticated partner. This unit would consist of
> a PIC16 micro computer for general control and possible data crunching or
> the use of an FPGA board which could provide more sophisticated and faster
> data manipulation. Having read a little into SSLv3 I was considering basing
> the handshaking process on this model. Is that reasonable?
Starting with a PIC and its serial interface is a great way to learn things.
You may want to move up in power from there once you understand the problems
and where you want to go. I wouldn't use SSLv3 to start with; that's too
complicated. You should start with your own very simple com-link setup
and make the crypto work. As you gain experience and confidence, you can
move to more complicated protocols. You'll also have a better feel for why
they are what they are.
> a, Could a modern (secure) algorithm be implemented using the equipment
> described above and just how difficult a job would you rate developing it
> onto the hardware.
Yes, it's not too hard. Depending on how much you want to learn vs how
much time you want to spend just implementing, it could take from 3 to 9
months. That's just to make something work! You'll have to spend more
time to make it into whatever will help you pass your exams.
> b, Could you suggest a suitable, secure algorithm for this type of
> implementation.
An elliptic curve algorithm over GF(2^n) would be secure and doable.
By choosing some very special curves you can save lots of processing
time, but understanding the math behind it takes longer.
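The field arithmetic underneath such an implementation can be sketched briefly. This uses the small field GF(2^8) with the polynomial x^8+x^4+x^3+x+1 (0x11b) so the results are easy to check; a real ECC system would use a much larger field such as GF(2^163):

```python
# Multiplication in GF(2^n): carry-less (XOR-based) multiply, then reduce
# modulo the field polynomial.  Polynomials are represented as integers,
# one bit per coefficient.

def gf2n_mul(a: int, b: int, mod: int = 0x11b, n: int = 8) -> int:
    """Multiply a and b in GF(2^n) with field polynomial `mod`."""
    result = 0
    for i in range(n):                    # carry-less multiply
        if (b >> i) & 1:
            result ^= a << i
    for i in range(2 * n - 2, n - 1, -1): # reduce high bits
        if (result >> i) & 1:
            result ^= mod << (i - n)
    return result

# Example: (x+1)(x^2+x+1) = x^3+1, i.e. 3 * 7 = 9 (no reduction needed).
```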
> c, Since the project is generally supposed to be of some academic or
> commercial value can you think of a specific requirement which could be met
> by a project of this nature or similar.
You could call it a smart card prototype. But you'll have to figure that
one out :-)
> d, Could you suggest a good book or two for a relative newbie (hopefully not
> too much mathematical proof) that I could buy through normal channels
> (amazon etc.)
I'm way too biased, of course, but you can find this one through Amazon:
http://www.manning.com/rosing
Patience, persistence, truth,
Dr. mike
------------------------------
From: [EMAIL PROTECTED] (David A. Wagner)
Subject: Re: counter as IV?
Date: 2 Aug 2000 10:30:37 -0700
Douglas A. Gwyn <[EMAIL PROTECTED]> wrote:
> "David A. Wagner" wrote:
> > Note that your proposal _can_ be modified to allow a proof of security.
> > Introduce a secure pseudo-random function F, and choose the secret key K
> > from the keyspace of F. Then,
> > set N := 0
> > for each plaintext block Bi:
> > let L_N := F(K, N)
> > encrypt Bi using key L_N in algorithm E
> > increment N
> > It is straightforward to prove this secure (so long as N never repeats).
>
> ? Why is a "related-key" attack not still possible?
> For a *known* F and N, surely there is a known relationship
> between F(K,N) and F(K,N+1), although it might not be
> expressible as simply as when F is merely the XOR function.
>
> Indeed, clearly the scheme is *not* secure in an absolute
> sense, since for normal plaintext the key can be recovered
> (by exhaustive search of the key space). I guess you meant
> that it cannot be cracked more easily than E used as ECB.
>
> It would be instructive to see the "straightforward" proof.
> My intuition is that F would have to be a genuinely random
> function to make the proof work.
Sure. I'll sketch the proof. I didn't mean it is secure in an absolute,
information-theoretically-secure sense; I meant that, if breaking E and
F is infeasible, then so is breaking my construction.
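For concreteness, the construction can be sketched with assumed stand-ins: HMAC-SHA256 as the PRF F, and a one-block XOR pad derived from L_N in place of the block cipher E. This illustrates the per-block-key idea rather than reproducing Wagner's exact proposal:

```python
# Per-block keys L_N = F(K, N), with N starting at 0 and never repeating.
import hmac, hashlib

def F(key: bytes, msg: bytes) -> bytes:
    """PRF stand-in: HMAC-SHA256 (assumed pseudorandom)."""
    return hmac.new(key, msg, hashlib.sha256).digest()

def encrypt_blocks(K: bytes, blocks):
    """Encrypt each block under its own key L_N (blocks of <= 32 bytes)."""
    out = []
    for n, b in enumerate(blocks):
        L_n = F(K, n.to_bytes(8, 'big'))   # fresh key for block N
        pad = F(L_n, b'block')[:len(b)]    # XOR-pad stand-in for E(L_N, .)
        out.append(bytes(x ^ y for x, y in zip(b, pad)))
    return out

def decrypt_blocks(K: bytes, blocks):
    return encrypt_blocks(K, blocks)  # the XOR pad is its own inverse
```

Equal plaintext blocks at different positions encrypt differently, since each position gets an independent-looking key.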
My proof-sketch asks you to consider a sequence of four encryption modes:
1. Ideal encryption: The ciphertext is independent of the plaintext,
and uniformly random.
2. Idealized-analogue of my construction: We replace E and F above
with a uniformly random permutation and uniformly random function.
3. Partially-idealized analogue: We replace just E with its
uniformly random idealization.
4. My original construction: We use the base primitives E and F as
I suggested in my posting.
We'll assume that E and F are secure, i.e., it is infeasible to
distinguish them from their idealization.
Then I hope you agree that
a. It is infeasible to distinguish cases #1 and #2 from merely their
input-output behavior (this observation requires some thought,
but it is just an information-theoretic statement; anyway, it is
elaborated in further detail below);
b. It is infeasible to distinguish cases #2 and #3
(almost a trivial statement: if we had a black box that could
distinguish #2 from #3, we could distinguish E from its idealization
by using our black box as a subroutine);
c. It is infeasible to distinguish cases #3 and #4
(similar trivial reason);
d. And therefore, it is infeasible to distinguish cases #1 from #4
(combination of arguments a.--c.).
But d. just says that the encryption mode I proposed is secure, under
the assumption that E and F are secure.
I'll elaborate on that in a bit more detail below. I won't provide
a fully formal proof, but it's pretty close to one.
First I need to define security a bit more clearly. My ideal model of
a block cipher is that, once you pick a random key, the resulting cipher
is uniformly distributed on the set of all permutations. Of course, in
practice we never reach this ideal model, so the definition of security
is that it is computationally infeasible to distinguish E(L,.) from a
uniformly-random permutation with non-negligible probability of success.
("Probability of success" really means "advantage over just tossing a
coin and guessing whether you're looking at an instance of E() or an
instance of a permutation chosen uniformly at random from the set of
all possibilities.")
Similarly, security for F means that it is infeasible to distinguish
F(K,.) from a function chosen uniformly at random from the set of all
functions (of the right size).
Finally, security for the whole construction means that, even if
we let the attacker choose the plaintext, it will be infeasible to
distinguish the resulting ciphertext from a truly random bitstream
(chosen independently of everything else).
These definitions are all very standard; see the provable security
literature for fully formal definitions and more details. I'm leaving
out formal rigor and notation that conveys no insight and that anyone
used to these proofs of security can fill in trivially.
Now the way the proofs work is by reduction, i.e., by showing that
"if you give me an algorithm A that breaks the construction, I can give
you either (1) an algorithm A' that breaks E, or (2) an algorithm A''
that breaks F." Moreover, we show that A'/A'' will have about the same
complexity as A. Consequently, if it is infeasible to break E or F,
then it will be infeasible to break my construction (assuming I can show
such a statement above).
The reduction proof usually works in two steps. First, you show an
information-theoretic analogue: When you replace all primitives (E and
F) by their idealized versions, the resulting analogue-construction
is secure. Then, you conclude that this means that, if the original
construction can be distinguished from its ideal version, then one of
the primitives (E or F) must be distinguishable from its ideal version,
which is what was to be proved. The second step is always easy; it's
the first step where all the work is done.
So, let's show the first step. Replace F(K,.) by a random function.
Then each L_N will be truly random, and moreover will be independent
of each other and of everything else. Replace each E(L_N,.) by a
random permutation. Since each L_N is independent, each E(L_N,.)
yields an independent permutation, each chosen uniformly at random from
the set of all permutations. Now, no matter what block B the adversary
chooses, if you apply a random permutation Pi to B, the result Pi(B) is
distributed uniformly at random from the set of all ciphertext blocks.
Moreover, if Pi_1,..,Pi_m are permutations chosen independently and
uniformly at random, then no matter what list of blocks B_1,..,B_m the
adversary chooses, the concatenation Pi_1(B_1),..,Pi_m(B_m) is also
distributed uniformly at random from the set of all bitstrings of the
appropriate length (since all the Pi_i's are independent of each other and
uniformly distributed). But this is exactly what we needed to prove to
show that the idealized-analogue of my construction is secure (in fact,
it is ideally secure).
Finally, the last step of the proof follows by straightforward
manipulation of the definitions, and if you think about it for a minute
or two, it should be pretty clear why.
------------------------------
From: [EMAIL PROTECTED] (James Pate Williams, Jr.)
Subject: Re: Plausible Word Generation via Trigram Statistics
Date: Wed, 02 Aug 2000 17:51:21 GMT
On 1 Aug 2000 22:40:51 -0700, [EMAIL PROTECTED] (Kurt Shoens)
wrote:
>I wrote such a program to generate random words from trigram statistics.
>I train the thing with a dictionary of words and it spits out random
>words for you. Here's an example of what they look like:
>
> election iniopsitism communiform savskillar yorrating destaffoot epidotion
> ratogenous easligory alystely arelaughhea asculpator antarenesis pomologic
>
>Most of the words it creates can be pronounced.
>
>For fun, I also trained it with the name of high tech companies from
>the NASDAQ stock exchange. There aren't as many examples to train from,
>so the generated statistics are thin. Examples of what it generates are:
>
> speedusa affinited isocortec rediacread adflexiinter invistargen
> sagensorb tradiohm sunriserv telekabel tripointer metrimagine genersal
>
>Consultants who make up successful names charge high prices. I wonder
>if they use such foolishness to generate ideas.
>
>Anyway, it's fun.
Is the source code public domain? If so, I would like a copy. What
sort of learning paradigm are you using?
==Pate Williams==
[EMAIL PROTECTED]
http://www.mindspring.com/~pate
------------------------------
From: Mike Rosing <[EMAIL PROTECTED]>
Subject: Re: Elliptic Curves encryption
Date: Wed, 02 Aug 2000 12:42:09 -0500
Doug Kuhlman wrote:
> Overusing a word is a common mathematical problem. I'd love solutions!
> I tried using postulates for my proofs.... Certainly, assuming your
> postulates are true is dangerous and that is, to some extent, what PKC
> does ('Course, both RSA and EC are even worse: they're not even shown to
> be equivalent to factoring or to solving the ECDLP). To my knowledge,
> only Blum-Blum-Shub (yes, I brought it up) is actually proven
> equivalent to anything (namely, factoring). Does that mean it's hard?
> I don't know. Certainly, the mathematics doesn't say so (CAN'T say
> so...even NP-complete problems use various postulates). I know I can't
> factor very large n and that the public crypto world can't either.
> That's good enough for me. If you want more, you're welcome to look,
> but I doubt you'll find it.
RSA may not be equivalent to factoring, but factoring will crack it.
ECC seems to me to be just the ECDLP, especially for a simple DH-like
transfer. A sub-exponential algorithm for the ECDLP would advance a lot
of mathematics.
Terry is correct to state that these things are not secure against an
unknown method of solution. It's balancing the present against the
future. If you want something secure *forever*, you can't use anything,
because Terry is right: we simply can't know what will be discovered
in the future.
To me, this is an impractical state. I'm willing to bet that any of
these algorithms is secure for the next few years. That's good enough.
I would very much like to see a scalable cipher. This too would be a
pretty major advance for the art. But until it comes along, I'll do
what everyone else does: use ciphers that *look* secure *now*.
Patience, persistence, truth,
Dr. mike
------------------------------
From: [EMAIL PROTECTED] (David A. Wagner)
Subject: Re: counter as IV?
Date: 2 Aug 2000 10:38:18 -0700
Douglas A. Gwyn <[EMAIL PROTECTED]> wrote:
> For a *known* F and N, surely there is a known relationship
> between F(K,N) and F(K,N+1), although it might not be
> expressible as simply as when F is merely the XOR function.
I think there is a miscommunication here. The XOR function is not a
pseudorandom function.
Please note that the phrase "pseudorandom function" is a term
of art. A fully formal definition may be found in Goldwasser and
Bellare's course notes on foundations of cryptography (please see
http://www-cse.ucsd.edu/users/mihir/papers/gb.html), but roughly
speaking, it refers to a family of functions with the property
that, once instantiated with a secret random key, distinguishing its
input-output behavior from that of a truly-random function is infeasible.
(By truly-random function, I mean one that has been chosen uniformly
at random from the set of all functions of a given length.)
There is also a similar definition for "pseudorandom permutation",
which seems to be the right formalization of a secure block cipher.
These are fundamental concepts in the literature on provable security.
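The two objects being compared can be made concrete. The truly-random function is lazily sampled as queries arrive; the PRF family uses HMAC-SHA256 as an assumed-pseudorandom stand-in. A distinguisher sees only the input-output behavior of one of the two oracles:

```python
import os, hmac, hashlib

class TrulyRandomFunction:
    """Samples an independent uniform 16-byte output for each new input,
    remembering past answers so repeated queries are consistent."""
    def __init__(self):
        self.table = {}
    def __call__(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = os.urandom(16)
        return self.table[x]

class PseudoRandomFunction:
    """F(K, .): deterministic once the secret key is fixed."""
    def __init__(self, key: bytes):
        self.key = key
    def __call__(self, x: bytes) -> bytes:
        return hmac.new(self.key, x, hashlib.sha256).digest()[:16]
```

F is pseudorandom exactly when no feasible algorithm, querying one oracle of this shape, can tell which of the two classes it is talking to with non-negligible advantage.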
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************