Cryptography-Digest Digest #922, Volume #8       Sun, 17 Jan 99 17:13:07 EST

Contents:
  Re: Twofish vs DES? (William Hugh Murray)
  Re: Practical True Random Number Generator (R. Knauer)
  Re: Trying to find simple, yet effective implementation of crypto... ("Ryan Phillips")
  Re: Trying to find simple, yet effective implementation of crypto... (wtshaw)
  Re: mars-algorythm ("Markus Hahn")
  Re: Trying to find simple, yet effective implementation of crypto... (Mr. Tines)
  Re: Too simple to be safe (David Wagner)
  Re: file size of encrypted vs. unencrypted data (Bill Unruh)
  Re: Rabin/Williams/Koyama .. and Flannery? (David A Molnar)
  Re: Question on current status of some block ciphers in AC2 (David Hamilton)
  Re: file size of encrypted vs. unencrypted data ([EMAIL PROTECTED])
  Java speed vs 'C' (was Re: New Twofish Source Code Available) (Mr. Tines)

----------------------------------------------------------------------------

From: William Hugh Murray <[EMAIL PROTECTED]>
Subject: Re: Twofish vs DES?
Date: Sun, 17 Jan 1999 14:20:46 -0500

Consider also that we have two decades' worth of research saying that the
cheapest known (cryptanalytic) attack against DES is an exhaustive
attack against the key, while Twofish has only been in existence for
months.  

Consider that the ratio of work for doing a DES encryption with the key
to that of recovering a message (by cryptanalysis) without benefit of
the key is 1 to 2^55.  That is what it was in 1975 and that is what it
is today.  

Consider also that FIPS 46-3, the standard to replace DES, was announced
last month, and it is Triple-DES.  

Consider that any one of us can write an encryption algorithm which we
cannot break.  The work is not in writing it but in knowing anything
about it.  For a long time to come we will know more about the DES than
we know about any other algorithm.

Consider that I can now do a Triple DES operation for 2.5 x 10^-4 of what
a DES operation cost when DES was announced.  

Consider that in some senses the assumptions that underlie DES are now
obsolete.  These include the assumption that both computing power and
storage are scarce.  Thus, the balance between complexity and key length
that was struck in 1975 cannot be the right choice for today except by
accident.

Consider that, while NSA is right when they say that a single standard
algorithm reduces the work of an attacker, most modern encryption
implementations support an arbitrary number of algorithms and leave the
choice to message or session-negotiation time.  

Consider that if 56 bit DES is the weak link in your security, you are
probably the most secure person in the world.  If encryption is the weak
link in your security, you are stupid.  

Consider that, for most of us, bribery is more efficient than
cryptanalysis; nation states prefer breaking fingers to breaking codes.

In a few hours I leave for the RSA conference.  At an earlier such
conference Arjen Lenstra said, "The algorithms (the context was DES and
RSA) are timeless; what matters now is how we use them."  At the same
conference Paul Kocher said, "Think dollars, not bits."  

In short, there are a lot of considerations and few shortcuts.

Matt Curtin wrote:
> 
> "TERACytE" <[EMAIL PROTECTED]> writes:
> 
> > Which is better?  Twofish or DES?
> 
> Consider that Twofish is being considered for AES, the standard to
> replace DES...
> 
> --
> Matt Curtin [EMAIL PROTECTED] http://www.interhack.net/people/cmcurtin/


------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Practical True Random Number Generator
Date: Wed, 13 Jan 1999 13:54:29 GMT
Reply-To: [EMAIL PROTECTED]

On Wed, 13 Jan 1999 14:06:13 +0100, Mok-Kong Shen
<[EMAIL PROTECTED]> wrote:

>Since I am not physicist, I have to pose stupid questions.

Why would you say that? It is very easy sometimes for
non-practitioners to ask brilliant questions that practitioners would
never think of because they are prejudiced. All forward movement in
science comes about when someone questions the status quo, and that is
usually done by people who have not snuggled down comfortably in the
middle of the herd.

>For the commonly employed radioactive materials, at the times they are
>usually employed, how much does this decay effect influence (on 
>average) the ratio of (two) measurements of successive intervals 
>demarcated by the time points where the detector registers the 
>particles in question?

You would have to do an analysis like the one presented earlier today
by one poster.

>Is this effect smaller than, equal to, or greater by orders of magnitude 
>than the effect of apparatus errors (for apparatus commonly employed 
>in such experiments) and the effect of errors in utilizing radio time 
>signals?

That obviously depends on the design and performance of the apparatus.

>If the first case applies and
>the decay effect is very much smaller, then we can well neglect
>the decay effect in the present context (i.e. the suggestion of
>KloroX of 12 Jan to modify the program code to remove the bias
>for two successive intervals would be unnecessary).

You need to provide a proof for that.

>Could you give 
>some approximate numerical values so that a layman may obtain some 
>(even if vague) idea of the accuracy issue?

There are other errors besides the reduction in the amount of the
radioisotope present over time. Whatever these might be, ranging from
detector latency to electronic drift in the timing circuits, the
technique under consideration removes bias in the output. It is
difficult to parameterize these systematic errors without knowing more
about the specific design of the device.

What is important is to recognize that such systematic errors do
exist, so it is necessary to do something about them regardless of
their significance in any particular design. Better safe than sorry,
especially when the cost for that safety is negligible.

BTW, toggling the output like that is similar to the XOR operation,
which is used to remove bias from bitstreams computationally.
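
To make that concrete, here is a minimal sketch in Java (my own
illustration, not code from any actual generator) of the classic von
Neumann pairing trick: look at non-overlapping pairs of raw bits, output
0 for 01, output 1 for 10, and discard 00 and 11.  If the raw bits are
independent, the output is exactly unbiased no matter how biased the
source; XORing successive bits, by contrast, only reduces the bias.

import java.util.Random;

public class Debias {
    // Von Neumann pairing: 01 -> 0, 10 -> 1, 00 and 11 are discarded.
    // Returns -1 when the pair is discarded.
    static int vonNeumann(int b1, int b2) {
        return (b1 == b2) ? -1 : b1;
    }

    public static void main(String[] args) {
        Random raw = new Random();
        long kept = 0, ones = 0;
        for (int i = 0; i < 1_000_000; i++) {
            int b1 = raw.nextDouble() < 0.7 ? 1 : 0;   // heavily biased source
            int b2 = raw.nextDouble() < 0.7 ? 1 : 0;
            int out = vonNeumann(b1, b2);
            if (out >= 0) { kept++; ones += out; }
        }
        // The fraction of ones comes out very close to 0.5 despite the 70% bias.
        System.out.println("kept " + kept + " bits, ones fraction = "
                           + (double) ones / kept);
    }
}

The price is throughput: with a 70/30 source only about 2*0.7*0.3 = 42%
of the pairs survive, i.e. roughly one output bit per five raw bits.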

Bob Knauer

"Since the politician never believes what he says, he is surprised
when others believe him."
--Charles De Gaulle


------------------------------

From: "Ryan Phillips" <[EMAIL PROTECTED]>
Subject: Re: Trying to find simple, yet effective implementation of crypto...
Date: Sun, 17 Jan 1999 10:41:40 -0800

Don't create a protocol that you want to keep secret.  It's not very
difficult to look at the packets and reverse engineer the structure.
I am still trying to create this type of chat program, and my idea was to
use RSA/DHH public/private keys to encrypt the message and sign it.  That
way, if the server or recipient checks the signature on the fly, both will
be able to see whether the message is from its true owner.

Does the server have to decrypt the message?  That seems to me an insecure
way to send a message.  How about having the server forward the message to
the recipient?  If the recipient checks the signature, then the message must
be valid.  If you decrypt at the server, you're asking for trouble.
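
Just to illustrate the shape of that (a sketch using present-day Java JCA
class names, which are my assumption here, not anything specified in this
thread): the sender signs, the relay forwards the bytes untouched, and only
the recipient needs to verify.

import java.security.*;

public class SignDemo {
    public static void main(String[] args) throws Exception {
        // End-to-end integrity: the client signs, the server merely forwards,
        // and only the recipient verifies the signature.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair sender = kpg.generateKeyPair();

        byte[] msg = "hello via untrusted relay".getBytes("UTF-8");

        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(sender.getPrivate());
        signer.update(msg);
        byte[] sig = signer.sign();

        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(sender.getPublic());
        verifier.update(msg);
        System.out.println("signature valid: " + verifier.verify(sig));
    }
}

Encryption of the body would go on top of this (encrypt to the recipient's
public key), so the relay never needs the plaintext at all.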

       - Signed
          Ryan Phillips

Tim Mavers wrote in message <77t2af$nmj$[EMAIL PROTECTED]>...
>I am trying to implement a system where my program can send messages to a
>server (also written by me) who then can reply (or send messages) to another
>client (my program as well).  The messages have to be encrypted.  The
>messages basically contain a meta-protocol that the programs use to
>communicate with each other (the server parses the messages and determines
>what to do).  Since I don't want anyone knowing my protocol (it's a very
>basic one, but if known by all, malicious acts can be done).
>
>Is there a basic crypto principle I can use for this?  I have read through
>the sci.crypt FAQ and the first couple chapters of Bruce Schneier's book
>(very eye-opening), but am still confused about what the best method to
>take.
>
> The server must decrypt the packets on-the-fly (very quickly) and then
>re-encrypt them to go to someone else.
>Each client program doesn't need to have its own "public" key as everything
>can be encrypted the same (is this a flaw?)
>
>Can anyone help?
>
>
>
>



------------------------------

From: [EMAIL PROTECTED] (wtshaw)
Subject: Re: Trying to find simple, yet effective implementation of crypto...
Date: Sun, 17 Jan 1999 12:53:10 -0600

In article <77t2af$nmj$[EMAIL PROTECTED]>, "Tim Mavers" <[EMAIL PROTECTED]> wrote:

> I am trying to implement a system where my program can send messages to a
> server (also written by me) who then can reply (or send messages) to another
> client (my program as well).  The messages have to be encrypted.  The
> messages basically contain a meta-protocol that the programs use to
> communicate with each other (the server parses the messages and determines
> what to do).  Since I don't want anyone knowing my protocol (it's a very
> basic one, but if known by all, malicious acts can be done).
> 
> Is there a basic crypto principle I can use for this?  I have read through
> the sci.crypt FAQ and the first couple chapters of Bruce Schneier's book
> (very eye-opening), but am still confused about what the best method to
> take.

There is often a tradeoff between speed and complexity; you are trying for
the best compromise.  The natural fit is a secret-keyed system in which
the computer handles both sides of the communication, each physical party
having a key.  The algorithm needs to be strong enough that having two
ciphertexts in different keys for the same plaintext is not much help. 
Keys could be set up via public-key, but you still have the same general
problem left: how to do the routine communications.   The cryptosystem
should be inductive: highly variable as to ciphertexts for the same
message using the same formal keys, so that someone working off of an
intercepted stream would have difficulty masquerading as a true user.
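
One standard way to get that variability is to randomize each encryption,
e.g. with a fresh IV per message, so the same plaintext under the same key
never produces the same ciphertext twice.  A small sketch (modern Java
classes and AES/CBC used purely as placeholders, not a recommendation from
this post):

import javax.crypto.*;
import javax.crypto.spec.IvParameterSpec;
import java.security.SecureRandom;

public class VariableCiphertext {
    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] plaintext = "the same message".getBytes("UTF-8");
        SecureRandom rng = new SecureRandom();

        for (int i = 0; i < 2; i++) {
            byte[] iv = new byte[16];              // fresh random IV per message
            rng.nextBytes(iv);
            Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
            c.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
            byte[] ct = c.doFinal(plaintext);
            // Same key, same plaintext, two different ciphertexts.
            System.out.println(java.util.Base64.getEncoder().encodeToString(ct));
        }
    }
}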
> 
>  The server must decrypt the packets on-the-fly (very quickly) and then
> re-encrypt them to go to someone else.
> Each client program doesn't need to have its own "public" key as everything
> can be encrypted the same (is this a flaw?)
> 
It would be best to use the same cryptosystem, but with different
keys for different links.  You present the essence of a common problem.
Solvable?  Yes, depending on what compromises you are willing to accept.
-- 
Crypto with attitude....

------------------------------

From: "Markus Hahn" <[EMAIL PROTECTED]>
Subject: Re: mars-algorythm
Date: Sun, 17 Jan 1999 21:55:23 +0100


Fredi Suter schrieb in Nachricht <77t4te$[EMAIL PROTECTED]>...
>does anyone have the "std_defs.h" - file, needed to compile the
>mars-algorythm?

A complete MARS algorithm implementation is included in the
UCDI driver package available @ http://come.to/hahn
The code was written by Walter Dvorak.

Markus

///////////////////////////////////
please remove the "NOTRASH_" prefix
to get my real e-mail address
///////////////////////////////////





------------------------------

From: Mr. Tines <[EMAIL PROTECTED]>
Subject: Re: Trying to find simple, yet effective implementation of crypto...
Date: 17 Jan 1999 19:58 +0000

###

On Sun, 17 Jan 1999 10:15:35 -0600, in <77t2af$nmj$[EMAIL PROTECTED]>
          "Tim Mavers" <[EMAIL PROTECTED]> wrote.....

> I am trying to implement a system where my program can send messages to a
> server (also written by me) who then can reply (or send messages) to another
> client (my program as well).  The messages have to be encrypted.  The
> messages basically contain a meta-protocol that the programs use to
> communicate with each other (the server parses the messages and determines
> what to do).  Since I don't want anyone knowing my protocol (it's a very
> basic one, but if known by all, malicious acts can be done).

How controlled is the deployment of the client code?  Does
your threat model encompass an attacker for whom it would be
worthwhile to reverse-engineer the protocol from your client
executable?  Or to simply extract the key and just read the
traffic?  The more determined your likely adversary, the
less likely simple solutions are to be robust enough.

Is this a commercial application or a hobby one?  Where in
the world is it to be deployed?  This would affect details
of your decision as IP issues and export controls cut in.


-- PGPfingerprint: BC01 5527 B493 7C9B  3C54 D1B7 248C 08BC --
 _______ {pegwit v8 public key =581cbf05be9899262ab4bb6a08470}
/_  __(_)__  ___ ___     {69c10bcfbca894a5bf8d208d001b829d4d0}
 / / / / _ \/ -_|_-<      www.geocities.com/SiliconValley/1394
/_/ /_/_//_/\[EMAIL PROTECTED]      PGP key on page

### end pegwit v8 signed text
07bf5cb60fa0a85b6b03123d910466b6bea32e7f0d73df512895fb61d797
deae48f122b86b000fb84b12cc9884ac6a354e34b2e366b47e83304ec0cf


------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: Too simple to be safe
Date: 17 Jan 1999 13:11:47 -0800

In article <77o4b0$nn8$[EMAIL PROTECTED]>, almis <[EMAIL PROTECTED]> wrote:
[ Key = a prime p, Keystream = the binary expansion of sqrt(p) ]

It's been proposed on sci.crypt before, and it's insecure.  
You can quickly recover p from a short stretch of known keystream
output using the technique of continued fractions.

See e.g.
  http://www.dejanews.com/getdoc.xp?AN=275974909
for an earlier discussion on the subject, and for a generalization
to algebraic numbers with small degree (they can also be broken,
this time by lattice reduction).
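
To get a feel for why the key leaks, here is a toy sketch of my own (Java,
BigInteger).  It assumes the easiest possible case - the known keystream is
the leading bits of sqrt(p), integer part included - in which squaring the
truncated value and rounding already recovers p from a little more than
(log2 p)/2 fractional bits.  Handling an arbitrary stretch of keystream is
where the continued-fraction / lattice machinery from the referenced post
comes in.

import java.math.BigInteger;

public class SqrtKeystream {
    public static void main(String[] args) {
        BigInteger p = BigInteger.probablePrime(64, new java.util.Random(1));
        int n = 40;                                  // fractional bits known

        // "Keystream": floor(sqrt(p) * 2^n), i.e. sqrt(p) truncated to n
        // fractional bits.  (BigInteger.sqrt() needs Java 9+.)
        BigInteger scaled = p.shiftLeft(2 * n).sqrt();

        // Attacker: square the approximation and round to the nearest integer.
        BigInteger recovered = scaled.multiply(scaled)
                                     .add(BigInteger.ONE.shiftLeft(2 * n - 1))
                                     .shiftRight(2 * n);
        System.out.println(recovered.equals(p));     // prints true
    }
}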

------------------------------

From: [EMAIL PROTECTED] (Bill Unruh)
Crossposted-To: comp.security.misc
Subject: Re: file size of encrypted vs. unencrypted data
Date: 17 Jan 1999 21:24:32 GMT

In <[EMAIL PROTECTED]> Grant Karlin <[EMAIL PROTECTED]> writes:

>I am trying to determine what the change in the required bandwidth is to
>send encrypted data (using standard SSL transactions) over a network
>versus sending the unencrypted data.

The sizes should be very much the same, except for very short files --
i.e. the overhead should only be a few bytes for negotiating the
encryption.  Encryption takes blocks of the file and produces equal-length
blocks (8 bytes for DES, 3DES, IDEA, ..., 1 byte for RC4).  So if the length
of the file is not a multiple of the block size, the file needs to be padded
up to a multiple.  The length of the original file needs to be sent, and the
encrypted symmetric key needs to be sent as well.  So maybe 100 bytes extra.
(I am sure someone knows the exact figure for SSL.)
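
As a back-of-the-envelope illustration (assuming PKCS#5-style padding,
where between 1 and blockSize bytes are always appended; SSL record and
handshake overhead come on top of this):

public class CiphertextSize {
    // PKCS#5-style padding always adds 1..blockSize bytes, so the padded
    // length is the plaintext length rounded up to the next full block.
    static long paddedLength(long plaintextLen, int blockSize) {
        return (plaintextLen / blockSize + 1) * blockSize;
    }

    public static void main(String[] args) {
        // 10 KB file, 8-byte block cipher (DES, 3DES, IDEA): 10248 bytes.
        System.out.println(paddedLength(10 * 1024, 8));
        // A stream cipher such as RC4 adds no padding at all, so the
        // ciphertext body stays exactly 10240 bytes.
    }
}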

------------------------------

From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: Rabin/Williams/Koyama .. and Flannery?
Date: 17 Jan 1999 21:06:38 GMT

[EMAIL PROTECTED] wrote:

> advantage of being simpler to understand. Is it because the RSA patent
> covers them, or is it for other reasons, that the Rabin or Williams
> methods - or the Koyama 2-dimensional method, which may be the most
> similar to this new method - are not widely used?

Mind if I throw out a conjecture or two? 

Systems "like RSA, but faster and slightly different" are not uncommon. 
There are a fair range of them, as far as I can tell. Some of them even
look like they would be well worth implementing. Unfortunately, like most
Really Cool Protocols found in cryptographic literature, they are

* known only to people who read cryptographic literature
and 
* compete with a huge installed base of software using RSA

leads to 

* very, very low visibility or mindshare among possible implementers

because this militates against their ever being implemented, 

* not so many people spend time trying to analyse them in practice

means

* fewer papers written on possible problems or implementation details,
        so stays known only to the happy few that caught the original
        paper

means

* Cool system surfaces, maybe has a reference implementation if it's
lucky, then dies because there's no way to pick it out from the rest. 
Plus one is never _quite_ sure if some amazing new attack or mistake
in proof will come along and make the system suddenly insecure. 

(yes, I know, provably secure means we can avoid that kind of thing. 
I need to spend a lot more time looking at the concept of provable
security in crypto and the examples I've heard of provably secure
systems before becoming quite convinced that one can use a system
fresh off a paper like this.

I'm being way too glib. and I mean no disrespect at _all_ to the
people who put such hard work into new RSA-like systems. I'm 
being pessimistic...and I wish more such systems were implemented
and tested.
)

Plus,

* you now have a system which is not compatible with RSA. See 
PGP 2.3a vs. 2.6.2 and 2.6.2 vs. 5.0 for fun examples of what 
happens when you don't preserve backwards compatibility. 
(although with 2.3a it was still RSA, just different formatting). 

Standards compliance matters, too. 

Now as I read it, Flannery's "Cayley-Purser" system speeds up something 
which produces ciphertext and plaintext compatible with a standard
modular exponentiation version of RSA. So it's a means of optimizing
RSA that can be simply dropped in. This is important, and if correct,
will probably soon be adopted by anyone willing to spend the time
coding it. This is very cool. 

I think the fact that this is getting buzz, vs. other methods of crypto,
just shows that we need

* more people reading Journal of Cryptology, CRYPTO proceedings,
EUROCRYPT, ACM SIGSAC proceedings, etc. etc. and keeping current
* more implementers for crypto
* less of a gap between academic crypto and the guy entering his
first CS class. or the average sci.crypt reader.  Like, RSA is mentioned
now in some CS1 type classes, and this is good. It's implemented in some
CS2 classes, and this is even better. 

But I'd like to see people coming out of the first year knowing that
crypto != block ciphers, knowing that places like counterpane.com,
Eli Biham's web page, and the CIS Group at MIT exist, and aware
that their local library may have some very interesting reading 
material. Not everyone will catch on, but exposure to more than
"hey, here's one cool algorithm, with no idea what happened next"
might be nice. You can do that in one lecture. 

Subscribing and reading CRYPTO-GRAM from Counterpane is a good idea,
too. I'm very happy with it (Thanks!) and the way it summarizes 
"the year in crypto" or "the week in crypto." 

OK, I'll stop ranting here.  Sorry.  Seriously, though, I conjecture
that visibility has a lot to do with the reason some very interesting
protocols have yet to be implemented.  Not just visibility, of course 
(digicash may be the counterexample).  So what can we do to change that?

-David Molnar

------------------------------

From: [EMAIL PROTECTED] (David Hamilton)
Subject: Re: Question on current status of some block ciphers in AC2
Date: Sun, 17 Jan 1999 21:44:33 GMT

=====BEGIN PGP SIGNED MESSAGE=====

[EMAIL PROTECTED] (Bruce Schneier) wrote:

>On Fri, 15 Jan 1999 19:28:32 GMT, [EMAIL PROTECTED] (David
>Hamilton) wrote:

>>I'm reading Applied Cryptography 2nd edition by Bruce Schneier. In chapters
>>13 and 14 he gives views on a number of block ciphers. I'm wondering if
>>anything 'ever became' of the following half a dozen:-
>>Madryga;

>Ick.  I didn't like it in the book.

You're right.

>>Redoc II;

>Broken.  And ick, besides.

Now, if I've read AC2 correctly, this is new. (If I haven't read AC2
correctly, apologies!) You say Redoc II has 10 rounds and that Biham and
Shamir successfully cryptanalysed 1 round but that the attack could not be 
extended to multiple rounds. You end by saying 'I know of no other
cryptanalysis'.  

>>Loki91 (LOKI-97 is an AES candidate now though);

>Double ick.  Didn't I talk about the LOKI-91 break in the book.

This isn't clear to me, Bruce. You say Knudsen found it 'secure against
differential cryptanalysis' but 'found a related key chosen plaintext attack
that reduces the complexity of a brute force search by almost a factor of 4.'
You go on to say that another attack on related keys will break Loki91 ...
but it's easy to make Loki91 resistant by avoiding the simple key schedule.  

(snip remainder)

Thanks to Joe Peschel and Sam Simpson for the other information - I haven't
followed up Joe's pointer yet ... and Joe's reference may give more info on
Loki/Redoc above.


David Hamilton.  Only I give the right to read what I write and PGP allows me
                           to make that choice. Use PGP now.
I have revoked 2048 bit RSA key ID 0x40F703B9. Please do not use. Do use:-
2048bit rsa ID=0xFA412179  Fp=08DE A9CB D8D8 B282 FA14 58F6 69CE D32D
4096bit dh ID=0xA07AEA5E Fp=28BA 9E4C CA47 09C3 7B8A CE14 36F3 3560 A07A EA5E
Both keys dated 1998/04/08 with sole UserID=<[EMAIL PROTECTED]>
=====BEGIN PGP SIGNATURE=====
Version: PGPfreeware 5.5.3i for non-commercial use <http://www.pgpi.com>
Comment: Signed with RSA 2048 bit key

iQEVAwUBNqJY4co1RmX6QSF5AQHgMwgAiNPTT79U4PSeVmSQKAa5OnwRsrJWK+T0
knHC04W7NXhaCUZoEOpsimWLs5oc3GrI2hgtwlVzgapgvbPN/kp/rB4COfLk3u7G
/CPaMzRQrDNOpIRQWlK99kLx7L8oc8l9ud9fCh5MMrOJQRPf63G5DBt94aW8z8+j
3O+MRKgxtbh1XOlSjxXv6GkmTW2t7sTwjHZ6rJRbc71ekYjDMOVgoX/2gdAkKnQg
k2zv7+I2L4Cx/BynUOZI1/bKZqRKHSgnt76+zs8Cz/QtkeLA5vOmVfn5B5E8ezxg
nqVH3HDENd3oUgUqMvdBBTtZFvvCtFUaKSFbve/eaidb1FGdImDq7A==
=4/Tc
=====END PGP SIGNATURE=====

------------------------------

From: [EMAIL PROTECTED]
Crossposted-To: comp.security.misc
Subject: Re: file size of encrypted vs. unencrypted data
Date: Sun, 17 Jan 1999 21:39:07 GMT

In article <[EMAIL PROTECTED]>,
  Grant Karlin <[EMAIL PROTECTED]> wrote:
> I am trying to determine what the change in the required bandwidth is to
> send encrypted data (using standard SSL transactions) over a network
> versus sending the unencrypted data.
>
> For example, if I have a 10kb file what is the size of the encrypted
> file sent when using 40-bit encryption?  128-bit encryption?  An
> explanation of how this is computed would be appreciated.
>
> Is the answer the same for both binary and text files?
>
> Thanks for your help.
> Grant
>

Many forms of encryption change the length of a file; scott19u, however,
will not change the length of your file if you so desire.  When it comes to
sending a file over the internet, though, you should also look at how long
it takes to send a compressed file.  Compression is sometimes done during
data transfers, and if the encryption is any good its output will not
compress, so an encrypted file will take the full, uncompressed transfer
time.  Do any compression you want before encryption.
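
A quick way to see why the order matters (my own sketch using
java.util.zip; random bytes stand in for good ciphertext here):

import java.security.SecureRandom;
import java.util.Arrays;
import java.util.zip.Deflater;

public class CompressThenEncrypt {
    static int deflatedSize(byte[] data) {
        Deflater d = new Deflater();
        d.setInput(data);
        d.finish();
        byte[] buf = new byte[data.length + 64];
        int total = 0;
        while (!d.finished()) {
            total += d.deflate(buf);
        }
        d.end();
        return total;
    }

    public static void main(String[] args) {
        byte[] text = new byte[100_000];
        Arrays.fill(text, (byte) 'a');            // highly redundant "plaintext"
        byte[] random = new byte[100_000];
        new SecureRandom().nextBytes(random);     // stands in for good ciphertext

        System.out.println("redundant data deflates to "
                           + deflatedSize(text) + " bytes");
        System.out.println("random-looking data deflates to "
                           + deflatedSize(random) + " bytes");
    }
}

The redundant input shrinks to a few hundred bytes, while the
random-looking input actually grows slightly - which is exactly what a
transfer-time compressor will see if you encrypt first.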

David A. Scott


http://cryptography.org/cgi-bin/crypto.cgi/Misc/scott19u.zip
http://members.xoom.com/ecil/index.htm


------------------------------

From: Mr. Tines <[EMAIL PROTECTED]>
Subject: Java speed vs 'C' (was Re: New Twofish Source Code Available)
Date: 17 Jan 1999 19:04 +0000

###

On Sat, 16 Jan 1999 20:05:39 GMT, in <[EMAIL PROTECTED]>
          Daniel James <[EMAIL PROTECTED]> wrote.....

> In article <[EMAIL PROTECTED]>, Mr. Tines wrote:
> > .... the IDEA code from PGP 2.6ui
> > compiled debug under BC++5.02 only ran 3 times faster than
> > a non JIT-ed Java version of the same code (using the test
> > harness there) in JDK 1.1.6.
> >
>
> Comparing debug C code with non-JIT-compiled Java bytecode is hardly a
> fair or meaningful exercise, is it? How does the optimized C compare
> with JIT compiled bytecode? I'd expect rather more of a difference.

It really depends on the cypher - here are some results I
bashed out this afternoon.  There was no attempt to do any
optimization on the Java code - in each case it's just a simple-minded
translation of the 'C', with a lot of (var&0xFFFF) masking
to emulate the unsigned 16-bit arithmetic in the IDEA code,
for example.
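
As a concrete example of that masking, here is the sort of thing I mean
(a sketch of IDEA's multiplication modulo 65537, not the actual PGP 2.6ui
code): the C version works on unsigned 16-bit words directly, while in
Java every operand lives in an int and has to be masked back down with
& 0xFFFF.

public class IdeaMul {
    // IDEA multiplication modulo 65537, where the 16-bit value 0
    // represents 2^16.  Every intermediate must be masked to 16 bits.
    static int mul(int a, int b) {
        a &= 0xFFFF;
        b &= 0xFFFF;
        if (a == 0) a = 0x10000;           // 0 stands for 2^16
        if (b == 0) b = 0x10000;
        long p = (long) a * b % 0x10001;   // multiply mod 65537
        return (int) (p & 0xFFFF);         // result 65536 maps back to 0
    }

    public static void main(String[] args) {
        System.out.println(mul(3, 4));     // 12
        System.out.println(mul(0, 2));     // (2^16 * 2) mod 65537 = 65535
    }
}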

This is on a 200MHz Evergreen Pentium-MMX emulator, using
Borland C++ 5.02 and Borland JBuilder2 with JDK 1.1.6, using
the Symantec JIT and the Borland accelerator.  This is
without the -O option on the compiler, which the JBuilder
environment doesn't provide a way of switching on.


'C' Optimized-compilation IDEA
 Encrypting 1024 KBytes (65536 blocks)...
1.927 seconds = 544431 bytes per second
 Encrypting 1024 KBytes (65536 blocks)...
1.992 seconds = 526657 bytes per second
 Encrypting 1024 KBytes (65536 blocks)...
1.988 seconds = 527718 bytes per second

Unoptimized Java with JIT and Borland AppAccelerator IDEA

Encrypting 1024 KBytes (65536 blocks)...
4.72 seconds = 216.9491525423729 kbytes per second
Encrypting 1024 KBytes (65536 blocks)...
4.88 seconds = 209.8360655737705 kbytes per second
Encrypting 1024 KBytes (65536 blocks)...
4.78 seconds = 214.22594142259413 kbytes per second

So Java is about 40% the speed of C for this cypher.
I also tried a couple of other algorithms to be
representative, just plugging in the different 128-bit
key, 64-bit block algorithms.  This is closer than I
was getting with debug vs. no JIT - so the JIT is
apparently gaining more than the move from debug to
optimized compilation did.

C/Blowfish

Encrypting 10240 KBytes (655360 blocks)...
3.847 seconds = 2726406 bytes per sec
Encrypting 10240 KBytes (655360 blocks)...
3.799 seconds = 2760863 bytes per sec
Encrypting 10240 KBytes (655360 blocks)...
3.851 seconds = 2723574 bytes per sec

Java

Encrypting 10240 KBytes (655360 blocks)...
43.77 seconds = 233.95019419693853 kbytes per second
Encrypting 10240 KBytes (655360 blocks)...
44.27 seconds = 231.30788344251184 kbytes per second
Encrypting 10240 KBytes (655360 blocks)...
43.39 seconds = 235.99907812860107 kbytes per second

Java loses badly here because I haven't done *anything*
to inline all the macros - they are simply coded as
private static final functions, which means that there
is plenty of room for improvement in the implementation.

C/TEA

Encrypting 10240 KBytes (655360 blocks)...
8.533 seconds = 1228992 bytes per second
Encrypting 10240 KBytes (655360 blocks)...
8.522 seconds = 1230578 bytes per second
Encrypting 10240 KBytes (655360 blocks)...
8.532 seconds = 1229136 bytes per second

Java

Encrypting 10240 KBytes (655360 blocks)...
22.85 seconds = 448.14004376367615 kbytes per second
Encrypting 10240 KBytes (655360 blocks)...
22.85 seconds = 448.14004376367615 kbytes per second
Encrypting 10240 KBytes (655360 blocks)...
22.85 seconds = 448.14004376367615 kbytes per second

With another cypher that doesn't have a load of macro
expansions, we get back to being no more than 3 times
as slow.




-- PGPfingerprint: BC01 5527 B493 7C9B  3C54 D1B7 248C 08BC --
 _______ {pegwit v8 public key =581cbf05be9899262ab4bb6a08470}
/_  __(_)__  ___ ___     {69c10bcfbca894a5bf8d208d001b829d4d0}
 / / / / _ \/ -_|_-<      www.geocities.com/SiliconValley/1394
/_/ /_/_//_/\[EMAIL PROTECTED]      PGP key on page

### end pegwit v8 signed text
4dadb5e7c79332d6c774c2b05e79af058e1ee58b591e8a15eee6969f9b5c
fd58c533e1aed2d658971f6ee0c7d86cd7012ddbb3b344963935730392d4


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
