Cryptography-Digest Digest #176, Volume #11      Mon, 21 Feb 00 14:13:02 EST

Contents:
  Re: EOF in cipher??? ("Trevor Jackson, III")
  Re: Question about OTPs (Jerry Coffin)
  Re: Who is using ECC? ([EMAIL PROTECTED])
  Re: UK publishes 'impossible' decryption law (Jerry Coffin)
  Re: shorter key public algo? (Mike Rosing)
  Re: NIST publishes AES source code on web (David A Molnar)
  Re: EOF in cipher??? ("Trevor Jackson, III")
  Re: EOF in cipher??? ("Trevor Jackson, III")
  Re: Biggest keys needed (was Re: Does the NSA have ALL Possible PGP   ("Trevor Jackson, III")
  Re: efficiency of ecash schemes (Mike Rosing)
  Re: NIST publishes AES source code on web (Mok-Kong Shen)
  Re: NIST publishes AES source code on web (Mok-Kong Shen)
  Re: OAP-L3 Encryption Software - Complete Help Files at web site ("Trevor Jackson, III")
  Re: shorter key public algo? (Jerry Coffin)

----------------------------------------------------------------------------

Date: Mon, 21 Feb 2000 13:18:48 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: EOF in cipher???

Runu Knips wrote:

> "Douglas A. Gwyn" schrieb:
> >
> > Runu Knips wrote:
> > > Unfortunately, the above code will run on many systems without
> > > problems until fp is a binary file which happens to contain
> > > the code 0xff, which is equal to (signed char)-1 (on machines
> > > with 8 bits for a character).
> >
> > Wrong again!  When getc reads an FFh byte, it returns the
> > *int* value 0xFF, which is *never* equal to EOF.
>
> Which is equal to EOF if interpreted as a signed char, because:
>
> a) (unsigned char)(signed char)(-1) == 0xff
> b) (int)(signed char)0xff           == (int)(-1)
>
> Got it ?

Your statements are correct, but they are slightly wide of the point.
The critical point is to correctly detect end-of-data.  The original
post was from someone who was experiencing a false end-of-data due to
modal IO (text vs binary).

Your suggestion would lead to another kind of false end-of-data signal:
it would fail whenever the data contains 0xFF.  That byte and all data
following it would be ignored.

Good C advice on character I/O (on which I expect the experts'
complete approval):

1.  Read into an int.

The variable that receives the result of fgetc/getc/getchar must
distinguish among 2^CHAR_BIT+1 values.  So a char cannot act properly no
matter how much conditional code you provide around it.

2. Compare for equality against EOF.

The run-time libraries are designed so that a programmer does not have
to care what EOF "really" is.  While there is merit in understanding
what is going on under the covers, there is no excuse for coding that
understanding into programs.  The run-time library boundary hides
information that programs have no business caring about.


------------------------------

From: Jerry Coffin <[EMAIL PROTECTED]>
Subject: Re: Question about OTPs
Date: Mon, 21 Feb 2000 11:17:07 -0700

In article <[EMAIL PROTECTED]>, 
[EMAIL PROTECTED] says...

[ ... ]

> Actually, in my lab class I think I've stumbled across a very
> efficient way of generating a OTP. Take an oscilloscope, hook it to an
> A/D board on the computer, and have the oscilloscope record noise.
> Then, for all voltages >0 output a 1, and all voltages <0 record a 0
> (or the other way around).

Your oscilloscope isn't really adding much here: ultimately, you're 
simply using the noise source to produce randomness and using the 
'scope to sample it.  At the bottom of things, that's what most of us 
have been advocating all along.

The real question is "what is your source of noise?"  If (for example) 
you simply look at a trace from the 'scope with no input connected, 
you're not seeing particularly "good" (i.e. random) noise -- for 
example, where I live the noise I get on an open-probe scope is fairly 
predictable -- quite a strong component at 1.25 MHz (the carrier of a 
nearby AM radio station) another fairly strong component at 100 MHz, 
and so on.

In fact, if you have a really serious attacker and they know your 
location, they might easily set up a fairly directional transmitter so 
the "random" numbers are exactly what they decide to feed you...

-- 
    Later,
    Jerry.
 
The universe is a figment of its own imagination.

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Who is using ECC?
Date: Mon, 21 Feb 2000 18:13:19 GMT

In article <[EMAIL PROTECTED]>,
  JCA <[EMAIL PROTECTED]> wrote:
>
>     I'd be interested to learn who is using ECC. I am aware that it is
> far less well-known than RSA,
> and that not many certificate authorities issue ECC- (ECDSA) based
> certificates yet, these two
> factors working against its usage. However, I wonder if people working
> on smartcards, cell phones
> and similarly resource-constrained environments are already using it on
> an industrial scale.

I raised this point in my thread above; I would like the answer too. I
guess it's a bit early for ECC, but the momentum seems to be going that
way. Can you imagine using a digital cert with a 1K RSA key on a mobile
phone? It will take forever...

I hope some serious discussion is warranted here....



------------------------------

From: Jerry Coffin <[EMAIL PROTECTED]>
Crossposted-To: talk.politics.crypto
Subject: Re: UK publishes 'impossible' decryption law
Date: Mon, 21 Feb 2000 11:22:10 -0700

In article <[EMAIL PROTECTED]>, eric-no-spam-for-
[EMAIL PROTECTED] says...
> zapzing <[EMAIL PROTECTED]> writes:
> > I remember seeing in "Wired" magazine (forgot
> > the issue) that an upgrade of the TCP/IP
> > protocol is planned , and it is encrypted so that
> > what the hackers did will be impossible.
> 
> The DDoS attack will probably *always* be possible.
> Encrypting TCP connections (or IP) will not prevent it.

You're right -- in fact, I doubt any but the most ignorant politicos 
and such who've looked at it think anything being contemplated will 
really stop DoS attacks.  There's still some hope and help available 
from cryptography in general though: if every packet is signed, 
tracking down the originator of a packet becomes a lot easier...

-- 
    Later,
    Jerry.
 
The universe is a figment of its own imagination.

------------------------------

From: Mike Rosing <[EMAIL PROTECTED]>
Subject: Re: shorter key public algo?
Date: Mon, 21 Feb 2000 12:23:13 -0600

David Hopwood wrote:
> That sounds a bit optimistic. 100 bit ECC is very close to being broken
> in a public attack; 768 bit RSA isn't.

Yes, 100-bit ECC isn't very secure.  I'm surprised that it hasn't
already been broken in the Certicom challenge.  512-bit RSA has
recently been done publicly, so I suspect it will be a while before
768 is done.  I suppose 130 bits is closer to the same level of
cracking ability.  Note that 112-bit ECC is acceptable for US export
with just the "one time review", so it's obviously not very secure :-)

Patience, persistence, truth,
Dr. mike

------------------------------

From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: NIST publishes AES source code on web
Date: 21 Feb 2000 18:15:46 GMT

Paul Koning <[EMAIL PROTECTED]> wrote:
> That's one reason why I will never trust anything written
> by Dorothy Denning...

I thought her latest report showed that cryptography impedes law
enforcement in only a negligible fraction of cases? or did I mishear?


------------------------------

Date: Mon, 21 Feb 2000 13:39:08 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: EOF in cipher???

David Hopwood wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
>
> "Trevor Jackson, III" wrote:
> > "Douglas A. Gwyn" wrote:
> > > "Trevor Jackson, III" wrote:
> > > > Mok-Kong Shen wrote:
> > > > > I am ignorant of what the C standard specifies. Question: Does
> > > > > 'binary' require the file to be multiple of words or just any
> > > > > multiple of bytes will do? Thanks.
>
> > > > Neither.  The elements written to files are characters.  Sometimes
> > > > (usually) that means bytes.
> > >
> > > Wrong.  Bytes are written to binary files, characters to text files.
> >
> > Interesting.  Tell me, just how do they do that on a machine that doesn't
> > _have_ bytes?
> >
> > Don't go so far out of your way to "correct" me that you end up asserting
> > something completely stupid.
>
> *All* machines that support C and its file I/O libraries use bytes, in the
> sense that C programs running on that machine (and programs in other
> languages with similar file semantics) are guaranteed to be able to write
> sequences of 8-bit bytes - capable of representing integers from 0..255 -
> to a file in binary mode and read them back again.
>
> Note that this is true regardless of the size of char and unsigned char, or
> the sizes of integers that can be manipulated by the processor(s), or what
> character encoding(s) are supported by the C compiler and operating system.

But that is only true insofar as the compiler conforms to modern C
standards.  There was a time, not that long ago, when compilers did
_not_ promise 8-bit bytes.

The critical issue is this:  why do you care?  It is perfectly possible to write
programs _without_ the assumption of 8-bit bytes.  Since the assumption gains you
little there is little value in embedding it in code, and significant risk in
doing so.

One of the related topics was the relationship between sizeof(int) and
sizeof(char), with the attendant assumption that chars are always smaller than
ints.  Sophisticated users might assume <= rather than <.  But both assumptions
are unnecessary.  Here are two reasons why this kind of assumption is bad.
First, in Plauger, "The Standard C Library", 1992, pp34-35, discussing <ctype.h>,
he states:

"The vast majority of C implementations use exactly eight bits to represent a
character.  Hence a translation table must contain 257 elements.  An
implementation can, however, use more bits.  C has been implemented with nine,
ten, and even 32 bits used to represent character types."

The other reason is the ISO character taxonomy.  Its full expression requires
32-bit characters.  A fully international program might have to exceed the
Unicode requirements and implement full ISO conformance.  On a 16-bit
machine/compiler, that could mean characters wider than ints.  Consider an
international smartcard as an example: 32-bit ints might be unreasonable,
but 32-bit characters might be mandatory.

To the linguistically pure this combination would not be a conforming compiler.
But to the linguistically pure there are no conforming compilers, because there
are no bug-free compilers.  So the linguistically pure line between C and not-C
is uninteresting -- worthless.

OTOH, diddling the C run-time libraries to handle 32-bit chars and 16-bit ints is
certainly possible.  It might even be the right thing to do given budget and time
constraints.  Of course a better solution would be to use 32-bit ints in order to
preserve the assumption that sizeof(char)<=sizeof(int).  But it is not the _only_
solution.



------------------------------

Date: Mon, 21 Feb 2000 13:46:50 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: EOF in cipher???

David Hopwood wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
>
> Runu Knips wrote:
> > "Trevor Jackson, III" schrieb:
> > > "Douglas A. Gwyn" wrote:
> > > > "Trevor Jackson, III" wrote:
> > > > > Runu Knips wrote:
> > > > > > EOF works well, because EOF is defined to be -1, while all
> > > > > > characters are returned as nonnegative values.
>
> > > > > This is _completely_ off topic.  But the last statement is
> > > > > completely false.  The signedness of characters is implementation
> > > > > defined.  Thus on some systems characters are signed.
> > > >
> > > > No, first of all you mean char type, not characters.
> > > > Secondly, the standard I/O functions such as getc deal with
> > > > the data as unsigned char.  So long as sizeof(char) < sizeof(int),
> > > > which is practically always the case, EOF (-1) can never result
> > > > from inputting any data value, even when char is a signed type.
> > > >
> > > > Please stop giving bad C advice!
> > >
> > > You are out of line again.  The original statement was that "all
> > > characters are returned as nonnegative values".  That statement was
> > > false.
> >
> > That statement is true. It's exactly what the standard says.
>
> Yes (characters returned from getc, that is).
>
> > > And there are machines where EOF != -1 and some where sizeof(char) ==
> > > sizeof(int).  I've worked on them.
> >
> > That's both against the standard. ISO specifies that int must be at
> > least 16 bit (i.e. it says short must be 16 bit, and sizeof(int) >=
> > sizeof(short), which means int is also at least 16 bit). A character
> > value, however, is the smallest type which has at least 7 bit.
>
> No, this is wrong (apart from the second sentence).
> unsigned char must be able to represent at least the values 0..255,
> so it cannot be 7 bit. Also, the sizes of char and unsigned char could
> be >= 16 bit, and equal to that of int.
>
> However, on all machines, *it is only possible to rely on files being
> able to represent sequences of unsigned 8-bit values*. Therefore, the
> issue of whether putc(...) ever returns < 0 for a valid input character
> would not arise in any well-designed system, regardless of the sizes
> of int, char, and unsigned char, and however obscure or broken the C
> implementation.
>
> [Technically it might be possible to construct, perhaps using operating
> system tools, a file containing elements that will be read from getc in
> binary mode as values outside the range 0..255. However, since no sane
> design would generate such files deliberately, it doesn't really matter
> how this is interpreted, as long as the program doesn't do something
> insecure as a result. Treating the first such element as equivalent to
> end-of-file would be perfectly reasonable, as would taking the least
> significant 8 bits.]
>
> IOW, all of the following code snippets will work, regardless of the
> compiler (although the first is probably the clearest in terms of style):
>
> 1. int c;
>    while ((c = getc(in)) != EOF) ...
>
> 2. int c;
>    while ((c = getc(in)) >= 0) ...
>
> 3. int c;
>    while ((c = getc(in)) >= 0 && c <= 255) ...
>
> FWIW, the only other person who has consistently given sound advice and
> error-free information in this subthread is Doug Gwyn. It's easy to see
> how newbies could get into bad programming habits by listening to advice
> given on Usenet.

You are falling into the same linguistics-is-all trap that Gwyn dug for himself.

This thread is _not_ about the C language.  It is about good programming
practice, which is a proper superset of the C languages (plural) across the axis
of machine architecture and the axis of standards revisions.  It is easy to see
how newbies can get into bad engineering habits by listening to pedantic advice
given on Usenet.

Learning a language or knowing a language standard, no matter how perfectly,
does _not_ indicate anything about learning how to write software.  People who
give advice based solely upon linguistic issues should restrict themselves to
answering linguistic questions and leave the engineering questions to people who
are actually able to understand the issues involved.



------------------------------

Date: Mon, 21 Feb 2000 13:56:13 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Biggest keys needed (was Re: Does the NSA have ALL Possible PGP  

"David A. Wagner" wrote:

> In article <[EMAIL PROTECTED]>,
> Trevor Jackson, III <[EMAIL PROTECTED]> wrote:
> > Best I've heard for factoring is square root improvement [...]
>
> Well, probably the single most famous result in the field is Shor's
> algorithm that does factoring and discrete log in quantum-poly time.
> That's much more than square-root improvement.

Yes, that is dramatically better.  Does this represent a general
breakthrough or is it a narrowly specialized technique?



------------------------------

From: Mike Rosing <[EMAIL PROTECTED]>
Subject: Re: efficiency of ecash schemes
Date: Mon, 21 Feb 2000 12:43:39 -0600

ahbeng wrote:
> I would like to know, if I have two ecash schemes, how do I do a
> comparison of the efficiency in terms of storage, speed and memory
> usage?
> 
> I was thinking of doing this to calculate the storage efficiency:
> Select an appropriate key size and the appropriate variables size,
> count the number of bytes for each coin.
> 
> To calculate the speed, I have no idea at all. Do I use the O-notation?
> What happens if one scheme uses RSA and the other uses DLP or ECC? Do I
> have to do an actual implementation to find out which is more efficient?
> 
> Anybody can help?

I'd think you are better off measuring things directly (if possible).
There are simply too many variables to try to control for.  Does one
system use a built-in crypto coprocessor?  Even if it's not efficient,
it might still be faster!  The other thing to think about is
portability: can the software run on any platform?

A few design decisions can help make things much more efficient, but you
may have to trade off something else to get it.  One thing you might do
is start by comparing the schemes with *no* efficiency tricks at all,
then look at what choices there are and see how each one moves things
around as you try to get more efficient.

This is a really hard task, good luck!

Patience, persistence, truth,
Dr. mike

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: NIST publishes AES source code on web
Date: Mon, 21 Feb 2000 19:55:20 +0100

Brian Gladman wrote:
> 
> "Mok-Kong Shen" <[EMAIL PROTECTED]> wrote:
> > Brian Gladman wrote:
> > >
> > > The Wassenaar Arrangement (WA) is no more than an informal agreement and
> > > this means that the extent to which national laws implement its
> provisions
> > > is highly variable since there is no legal obligation on any of its
> > > participants to do anything.
> >
> > What is your definition of 'informal' where high government officials
> > from ministries signed for their respective countries? It is known
> > that the agreement has to be ratified. But is that the 'characteristic'
> > that led you to consider that the agreement is 'informal'? (Even
> > a peace treaty ending a war needs ratification, if I don't err.)
> >
> > >
> > > The US has generally gone way beyond what is required and recent changes
> > > have simply brought US regulations somewhat closer to its provisions.
> > >
> > > Many other countries do not implement any restrictions on commercial
> > > cryptographic products because the WA does not require this.  In
> practice
> > only a few countries now interpret the crypto controls in Wassenaar as
> having
> > > any impact on such products.  There are token restrictions on export to
> > > 'naughty countries' but everyone involved knows that these are of little
> > > practical value.
> >
> > Could you please cite the text of Wassenaar Agreement that exempts
> > 'commercial products' from its crypto restrictions?
> 
> See my paper listed at:
> 
>    http://www.brian.gladman.btinternet.co.uk/papers/index.html
> 
> Most people who comment on the WA don't seem to have read it.  I have and in
> detail.

First, you don't seem to have answered my point about 'informal'.

Second, your paper argued from the WA's statement of purpose, that it
'will not impede bona fide civil transactions', to establish your
claim that ANY software used for civil purposes is exempted from
control. Am I right? But there are two obvious problems here. First,
you have to determine what 'impede' means. What you consider to be an
'impediment' may not correspond to what the authorities mean. (Compare
the issue of what is bad for health. Is drinking alcohol bad for
health? The answer could differ, not only in different contexts but
because of the differing opinions of the people answering the
question.) Second, it is generally understood in any document that, if
a field covered by a general statement also contains clauses concerning
special cases or details, then in cases of conflict the general
statement doesn't apply. Now from the document entitled 'List of
Dual-Use Goods and Technologies and Munitions List', Category 5,
Part 1, 5.D.1, Software, and Category 5, Part 2, Note 3, isn't it very
clear that crypto software for symmetric algorithms with key length
greater than 56 bits is under control? If you think it is not, please
kindly show your arguments by quoting the relevant text (or exact
position) from that document together with your reasoning (the logic
establishing your claim). May I stress once again that your argument
based on 'impediment' alone is, in my conviction, not sufficient for
an exemption. If there are exemptions, then in any legal document such
exemptions, being exceptions, should be clearly stated as exemptions
and not left to the reader to reason out what is included within the
scope of the meaning of a word in a statement of general nature and
what is excluded. (In other words, the reader is not permitted to be
a 'philosopher'.)

Concerning your last two sentences above, I would like to say that I
too have read the WA in detail. Of course, I might have misunderstood
it and you might be right. It is the very purpose of the present
discussion to find out the truth about the crypto controls that are
really covered by the WA.


> 
> > Your sentence 'The US has generally gone way beyond what is required'
> > is indeed interesting. The US seems to do that in one AND also the
> > other direction. Previously it posed in its own territories
> > crypto restrictions much more severe than what the other countries
> > were ready to pose. Now that there are, after much lobbying, finally
> > crypto clauses of the Wassenaar Agreement (US was the main pushing
> > force in that!), the US, for reasons (in my view) not very apparent
> > (or convincing) to the outside world, suddenly wanted to relax the
> > restrictions and, according to an official document quoted in this
> > thread, even now permits strong crypto of 128 and 256 key bits
> > freely accessible from terrorist countries, which is certainly
> > contrary to the spirit of the Wassenaar Agreement.
> 
> Who said anything about 'spirit' - in crypto the name of the WA is mostly
> used for other purposes that have nothing to do with the aims of the WA
> itself.
> 
> The WA is not intended to be used to prevent bona fide commercial
> transactions and 99.9% of crypto sales fall in this category.

Covered above.

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: NIST publishes AES source code on web
Date: Mon, 21 Feb 2000 19:55:27 +0100

Douglas A. Gwyn wrote:
> 
> Mok-Kong Shen wrote:
> > ... in principle (theory) anything that limits/reduces the
> > capabilities of the bad guys is good.
> 
> That theory is bogus, although it has a surface appeal if one
> doesn't think about it.  It's easier to see what is wrong if
> you apply it to a non-emotional issue: Bad Guys presumably use
> pencils; we should regulate the use of pencils to hamper the
> Bad Guys.

The 'theory' does not say whether any specific means used to 
limit/reduce the capabilities of the bad guys is practical,
economically justifiable, etc. etc. It only means that, if
other things are equal, then the case in which the bad guys
are hampered in some way is better than the case in which
they are not. I suppose that is evident from context.

> 
> > The situation is somewhat different in the case of, say,
> > drug control. Here there are materials involved and the
> > authorities can exercise control and achieve certain real
> > efficiencies, even though it is apparent that one can never
> > 'absolutely' solve the problem of drugs before the police
> > recruits at least one tenth of the population to be its
> > officers.
> 
> The so-called "War on Drugs" has led to many abuses without
> significantly improving the drug situation.  The drug problem
> is a social and psychological problem, not something that can
> be solved by any amount of law enforcement.  The US should
> know better, from its previous dalliance with nationwide
> alcohol prohibition, but people don't learn from history and
> they seek easy solutions to problems that don't have easy
> solutions.
> 
> This seems to have strayed from relevance to cryptology,
> although there are still echos of a connection.

You seem to agree with me (at least broadly) on the issue of drugs. I
used that only as a 'contrast' to the issue of crypto control in my
argument about the latter, so this 'straying', if it is considered as
such, can be allowed in my view, as long as we don't start from here
to argue about the drug problem in detail, i.e. the justification of
different policies from diverse viewpoints, the different practices in
different countries, the mafias, etc. etc. (By the way, in internet
discussion groups one not seldom finds what should probably be called
'intended' straying, namely attempts to create new and almost (or even
entirely) unrelated topics in a thread posted by someone else, thus
diverting the readers of the thread to these new topics and thereby
choking off further discussion of the original topic. I hate such
practices, in particular because of the choking effect.)

M. K. Shen

------------------------------

Date: Mon, 21 Feb 2000 14:08:33 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Crossposted-To: talk.politics.crypto,alt.privacy
Subject: Re: OAP-L3 Encryption Software - Complete Help Files at web site

Chuck wrote:

> On Sun, 20 Feb 2000 22:35:44 GMT, [EMAIL PROTECTED] (Terry Ritter) wrote:
>
> >
> >On Sun, 20 Feb 2000 09:26:45 -0600, in
> ><[EMAIL PROTECTED]>, in sci.crypt Chuck
> ><[EMAIL PROTECTED]> wrote:
> >
> >>[...]
> >>In the privacy and encryption communities, an algorithm or
> >>implementation is considered guilty until proven innocent.
> >
> >But in cryptography there *can* *be* *no* "proven innocent."  About
> >the best we get is "not proven weak," and we just get that when people
> >run out of attack ideas, or when something must be chosen on schedule.
>
> I really didn't want to get into all that. <g>
>
> >>[...]
> >>Of all the algorithms subjected to rigorous analysis
> >>over the past 15 years, there are fewer than ten survivors that are
> >>trusted well enough by governments to be used in military and
> >>spy-vs-spy communications.
> >
> >That value sounds way, way low.
>
> There may be a lot of oddballs in small places, but from what I've
> read it appears that only a handful of algorithms make up the lion's
> share of military & intelligence encryption. Am I wrong? If so I'd
> like to know just for curiosity's sake what other algorithms besides
> the usual (IDEA, 3DES, possibly Blowfish) are in widespread use by the
> military and intelligence agencies around the world?

There are two thresholds involved.  Security is naturally the first, but
efficiency/performance is also important in cipher selection.  Concluding
that only ~10 ciphers are secure because only ~10 ciphers have been widely
deployed (a count which is also too low) is an error.  Conceptually,
making secure ciphers is not hard*.  Making secure ciphers that are also
efficient is hard.

*Illustration of the ease of making more secure ciphers: one can describe
a whole family of ciphers all based on DES.  3DES is the most efficient,
but the NDES variants (N = 2K+1) are all secure, and decreasingly
efficient.



------------------------------

From: Jerry Coffin <[EMAIL PROTECTED]>
Subject: Re: shorter key public algo?
Date: Mon, 21 Feb 2000 12:08:41 -0700

In article <[EMAIL PROTECTED]>, 
[EMAIL PROTECTED] says...

[ ... ]

> Yes, 100-bit ECC isn't very secure.  I'm surprised that it hasn't
> already been broken in the Certicom challenge. 

The Certicom challenge currently being attacked (at least the only 
current attack of which I'm aware) is at 108-bit ECC.  At the rate 
things are going right now, I'd expect to see it broken in about 
another month to six weeks.

-- 
    Later,
    Jerry.
 
The universe is a figment of its own imagination.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
