Cryptography-Digest Digest #153, Volume #11      Fri, 18 Feb 00 16:13:01 EST

Contents:
  Re: EOF in cipher??? ([EMAIL PROTECTED])
  Re: VB & Crypto ([EMAIL PROTECTED])
  Re: UK publishes 'impossible' decryption law (Jim)
  Re: We Finns were smarter... Re: My background - Markku Juhani Saarelainen - few 
additional findings    ... (Jim)
  Re: Keys & Passwords. ([EMAIL PROTECTED])
  USENIX Annual Technical Conference, 2000 - Preliminary Program (Moun Chau)
  Re: VB & Crypto (lordcow77)
  Re: RSA Speed (long) (Erik)
  Re: NIST, AES at RSA conference (Terry Ritter)
  Re: NIST, AES at RSA conference (Terry Ritter)
  Re: Keys & Passwords. (Mok-Kong Shen)
  Re: UK publishes 'impossible' decryption law (Tim Tyler)
  Re: NSA Linux and the GPL ([EMAIL PROTECTED])
  Re: Processor speeds. (Mok-Kong Shen)
  Re: Processor speeds. ("John E. Kuslich")

----------------------------------------------------------------------------

From: [EMAIL PROTECTED]
Subject: Re: EOF in cipher???
Date: Fri, 18 Feb 2000 19:16:17 GMT

Well, in either case, I'm using C++, not C, and the advice to open the
file in binary mode worked for me.  But I did enjoy reading the thread
created by my simple question.

In article <[EMAIL PROTECTED]>,
  "Douglas A. Gwyn" <[EMAIL PROTECTED]> wrote:
> "Trevor Jackson, III" wrote:
> > Runu Knips wrote:
> > > EOF works well, because EOF is defined to be -1, while all
> > > characters are returned as nonnegative values.
> > This is _completely_ off topic.  But the last statement is
> > completely false.  The signedness of characters is implementation
> > defined.  Thus on some systems characters are signed.
>
> No, first of all you mean char type, not characters.
> Secondly, the standard I/O functions such as getc deal with
> the data as unsigned char.  So long as sizeof(char) < sizeof(int),
> which is practically always the case, EOF (-1) can never result
> from inputting any data value, even when char is a signed type.
>
> Please stop giving bad C advice!
>
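To make Gwyn's point concrete, here is a minimal C sketch (mine, not from the thread) of the correct idiom: store the result of getc() in an int, so a 0xFF byte reads as 255 and can never collide with EOF, even where plain char is signed.

```c
#include <stdio.h>

/* Count the bytes in a file opened in binary mode.  getc() returns
   each byte as an unsigned char converted to int (0..255), or the
   negative value EOF at end of file, so the result must be stored
   in an int, never in a char. */
long count_bytes(const char *path)
{
    FILE *fp = fopen(path, "rb");   /* binary mode, as above */
    if (fp == NULL)
        return -1L;

    long n = 0;
    int c;                          /* int, NOT char */
    while ((c = getc(fp)) != EOF)
        n++;

    fclose(fp);
    return n;
}
```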


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: VB & Crypto
Date: Fri, 18 Feb 2000 19:21:12 GMT

Well then you'll be happy to know that VB7 has "Option Strict".  And
what was wrong with RC4 (the original poster said it "generated too much
code")?  I wrote my own version in C++ and it wasn't that many lines at
all.  Oh well.
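For what it's worth, the core of RC4 really is tiny.  Here is a minimal C sketch, an unvetted illustration only (the struct and function names are mine):

```c
#include <stddef.h>

/* Minimal RC4: key schedule, then XOR the buffer with the keystream.
   Encryption and decryption are the same operation. */
typedef struct { unsigned char S[256]; int i, j; } rc4_state;

void rc4_init(rc4_state *st, const unsigned char *key, size_t keylen)
{
    int i, j = 0;
    for (i = 0; i < 256; i++)
        st->S[i] = (unsigned char)i;
    for (i = 0; i < 256; i++) {
        j = (j + st->S[i] + key[i % keylen]) & 0xFF;
        unsigned char t = st->S[i]; st->S[i] = st->S[j]; st->S[j] = t;
    }
    st->i = st->j = 0;
}

void rc4_crypt(rc4_state *st, unsigned char *buf, size_t len)
{
    size_t k;
    for (k = 0; k < len; k++) {
        st->i = (st->i + 1) & 0xFF;
        st->j = (st->j + st->S[st->i]) & 0xFF;
        unsigned char t = st->S[st->i];
        st->S[st->i] = st->S[st->j]; st->S[st->j] = t;
        buf[k] ^= st->S[(st->S[st->i] + st->S[st->j]) & 0xFF];
    }
}
```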

In article <[EMAIL PROTECTED]>,
  lordcow77 <[EMAIL PROTECTED]> wrote:
> VB is a poor prototyping language for anything other than GUIs.
> The lack of bit operations and unsigned integer support really
> hurts, and the weak typing of the language (i.e. Variants) allows
> logic and algorithm errors to creep in where they would normally
> be caught in a more strongly typed language.
>
> * Sent from RemarQ http://www.remarq.com The Internet's Discussion Network *
> The fastest and easiest way to search and participate in Usenet - Free!



------------------------------

From: [EMAIL PROTECTED] (Jim)
Crossposted-To: talk.politics.crypto
Subject: Re: UK publishes 'impossible' decryption law
Date: Fri, 18 Feb 2000 19:36:50 GMT
Reply-To: [EMAIL PROTECTED]

On Fri, 18 Feb 2000 15:51:20 -0000, "Scotty" <[EMAIL PROTECTED]> wrote:

>
>Gordon Walker wrote in message <[EMAIL PROTECTED]>...
>>On Fri, 18 Feb 2000 11:16:17 +0100, "ink" <[EMAIL PROTECTED]>
>>wrote:
>>
>>>>Any firearm can be used as a weapon. The US govt considers crypto to be
>>>>dangerous enough that it is classified as a "munition". What does that
>>>>tell you?
>>>
>>>Hardly any other government has done that. What does that tell you?
>>
>>Many western governments (UK, France etc) have restricted cryptography
>>in some way. I neither know nor care whether they implemented these
>>restrictions by such a classification.
>
>BTW France was very restricted, until a short while ago, when the whole law
>was reversed, so that now France is much freer than the UK.

How do you mean? There are no restrictions on the use of crypto in
the UK.

-- 
Jim,
nordland at lineone.net
amadeus at netcomuk.co.uk

------------------------------

From: [EMAIL PROTECTED] (Jim)
Subject: Re: We Finns were smarter... Re: My background - Markku Juhani Saarelainen - 
few additional findings    ...
Date: Fri, 18 Feb 2000 19:36:52 GMT
Reply-To: [EMAIL PROTECTED]

On Fri, 18 Feb 2000 08:51:58 GMT, "Lassi Hippeläinen"
<"lahippel$does-not-eat-canned-food"@ieee.org> wrote:

>By 1996 the USSR had been gone for three years! But if he was from
>there, it wouldn't be surprising that we didn't treat him nicely. The
>Soviets were never that popular over here. Especially not in the '80s.

...and very much more unpopular in the forties!

-- 
Jim,
nordland at lineone.net
amadeus at netcomuk.co.uk

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Keys & Passwords.
Date: Fri, 18 Feb 2000 19:25:56 GMT

This is a little off topic, but let's say I have a program that takes a
password and then converts that into a 256 byte array.  That array is in
turn what's used to cipher the data.

Does that mean the program is 2048-bit encryption (256 bytes = 2048
bits)?  If not, what determines the bit-level security (is it the
original key or what's actually used to cipher the data)?

Thanks
p.s. I ask because I wrote such a program (loosely based on RC4) and
someone asked me what the bit security was (40-bit, 56-bit, etc.) and I
don't know.



------------------------------

Crossposted-To: 
comp.windows.x,hannet.ml.linux.rutgers.linux-admin,ieee.admin,misc.security,muc.lists.www-security,comp.mail.sendmail,comp.os.linux,comp.os.linux.hardware,comp.os.linux.networking,comp.os.linux.x,comp.unix.bsd.freebsd.misc,comp.unix.programmer
From: [EMAIL PROTECTED] (Moun Chau)
Subject: USENIX Annual Technical Conference, 2000 - Preliminary Program
Date: Fri, 18 Feb 2000 19:41:11 GMT

2000 USENIX Annual Technical Conference
June 18-23, 2000
San Diego Marriott Hotel & Marina
San Diego, California, USA
http://www.usenix.org/events/usenix2000

The USENIX Annual Technical Conference is the gathering place for like
minds in the computer industry, a place to meet peers and experts and
share solutions to common problems. Join us in San Diego on June 18 -
23, 2000 as we celebrate our 25th Anniversary and pave the way for
future innovations.

TUTORIAL SESSIONS - MASTER COMPLEX TECHNOLOGIES
====================================================================
* Learn in-depth procedures from industry experts and professionals
* Select from the following topics:

UNIX Security Tools     Sendmail
Perl                    System and Network Performance Tuning
Windows NT Internals    Intrusion Detection
Solaris Systems         Linux Systems Administration
Advanced CGI            VPN Architecture and Implementation
Samba Servers NEW       Web Site Development and Maintenance

TECHNICAL SESSIONS
====================================================================
* Keynote Presentation by Bill Joy, Co-Founder of Sun Microsystems
* Closing Presentation by Thomas Dolby Robertson, Founder of Beatnik,
  Inc.
* Refereed paper presentations and invited talks include new work from
  Bill Cheswick, Rob Pike, and Margo Seltzer; the latest research results
  on operating systems and on tools and techniques for dealing with
  system infrastructure headaches; and a discussion of the Microsoft
  antitrust case by expert witness Edward Felten of Princeton University.
* The very popular Freenix track returns with topics on *BSD, Linux,
  X11-based graphical user interfaces, and the full range of freely
  redistributable software.
* BoFs and WiPs bring attendees together for informal reports on
  interesting new projects and on-going work. Fast paced and spontaneous,
  WiPs and BoFs discuss new ideas and novel solutions. See website for
  schedule and to reserve WiP slots.

=====================================================================
For detailed technical and tutorial programs and online registration:
http://www.usenix.org/events/usenix2000
=====================================================================
Sponsored by USENIX, the Advanced Computing Systems Association.






------------------------------

Subject: Re: VB & Crypto
From: lordcow77 <[EMAIL PROTECTED]>
Date: Fri, 18 Feb 2000 11:59:37 -0800

Does Option Strict prevent you from using undeclared variables
or prevent the use of variants?




------------------------------

From: Erik <[EMAIL PROTECTED]>
Subject: Re: RSA Speed (long)
Date: Fri, 18 Feb 2000 15:00:58 -0500

Doug Stell wrote:
> Since you asked, here is a worked example of RSA with a small modulus.
> It is done in the conventional manner, using Chinese Remainder Theorem
> and using Montgomery Multiplication. I did this years ago as an aid in
> teaching classes on RSA.
>
> <snip>

Thanks very much - that was very helpful.
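The worked example itself is snipped above, but for readers following along, here is an independent toy-modulus sketch of CRT decryption in C (the parameters p=61, q=53, e=17, d=2753 are my own illustration, not Doug's figures; real moduli need bignum arithmetic):

```c
/* Square-and-multiply modular exponentiation; all values fit in long
   for this toy modulus. */
long modpow(long b, long e, long m)
{
    long r = 1;
    b %= m;
    while (e > 0) {
        if (e & 1)
            r = (r * b) % m;
        b = (b * b) % m;
        e >>= 1;
    }
    return r;
}

/* p = 61, q = 53, n = 3233, e = 17, d = 2753 */
long rsa_encrypt(long m) { return modpow(m, 17, 3233); }

/* Decrypt via the Chinese Remainder Theorem: two half-size
   exponentiations instead of one full-size one. */
long rsa_decrypt_crt(long c)
{
    long dp   = 53;   /* d mod (p-1) = 2753 mod 60 */
    long dq   = 49;   /* d mod (q-1) = 2753 mod 52 */
    long qinv = 38;   /* q^-1 mod p:  53*38 = 2014 = 33*61 + 1 */
    long m1 = modpow(c, dp, 61);
    long m2 = modpow(c, dq, 53);
    long h  = (qinv * ((m1 - m2 + 61) % 61)) % 61;  /* Garner's step */
    return m2 + h * 53;
}
```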

Erik

------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: NIST, AES at RSA conference
Date: Fri, 18 Feb 2000 20:06:01 GMT


On 17 Feb 2000 13:26:38 -0800, in
<88hp2e$8n3$[EMAIL PROTECTED]>, in sci.crypt
[EMAIL PROTECTED] (David Wagner) wrote:

>In article <[EMAIL PROTECTED]>,
>John Savard <[EMAIL PROTECTED]> wrote:
>> You have it backwards. Terry Ritter is suggesting using a pool of
>> ciphers, and adding triple encryption to that as a safety precaution
>> to fix the weakest link problem, not the other way around.
>> 
>> Subject to certain precautions, triple encryption can be expected to
>> be as strong as the strongest of the three ciphers used (so the 3rd
>> weakest one in the pool is the worst case) and hoped to be
>> considerably stronger.
>
>Ok.  But increasing pool size does make the "weakest link" problem worse,
>whether or not you triple the ciphers, so it's not clear to me that tripling
>really helps all that much to address the concern.  (And, to me, 3rd weakest
>doesn't sound all that much better than weakest...)

I realize this is Usenet, where we can expect to have the same
conversation repeatedly, but we used to at least finish them and have
a 6 to 9 month delay before starting all over again with new people.
Now we just keep going around and around and around.  But much of the
background for this is available on my web site, including my
published guest column in IEEE Computer, "Cryptography: Is Staying
with the Herd Really Best?":

   http://www.io.com/~ritter/ARTS/R8INTW1.PDF

And then we have the extensive conversation from last year:

   http://www.io.com/~ritter/NEWS4/LIMCRYPT.HTM

And of course there have been many, many postings in the current
discussion, with virtually every point addressed many times.  These
postings are normally available on news archive sites, like RemarQ:

   http://www.remarq.com/

or Deja:

   http://www.deja.com/


OVERVIEW:  As I see it, there are several disturbing aspects to the
way ciphers are currently understood and used.  These problems affect
the security of the systems we build and use.  Addressing these
problems is not trivial, but does appear to be possible.  The intent
is to reduce our risk of exposure while still using ciphers which we
cannot know to be strong.  

THE MAJOR PROBLEM with current ciphering is that the cipher we use
could be broken already.  If so, our data will continue to be exposed
as long as we use that same cipher.  That situation is not going to
fix itself.  If we wish to minimize the consequences of a broken
cipher, our only alternative is to switch to a different cipher.  And
we must change ciphers without first getting a hint from our opponents
that we should do so.  

[At this point we usually get into a discussion in which the other
side hints that our ciphers *must* be secure because of how they are
designed or because they are looked at by academics.  But the only
strength we care about occurs with respect to opponents who we do not
know.  Those opponents have experience, capabilities and resources
which we also do not know.  We have no idea what they can break.]

CHANGING CIPHERS:  If we are going to change ciphers, we will need at
least several and perhaps many ciphers.  We also need to deal with the
perception that these "other" ciphers will be weaker than the one
cipher we would otherwise use.  I suggest that we can address *both*
of these problems by multi-ciphering in a stack of three ciphers with
independent keys. 

With a three-cipher "stack," if we have n ciphers, we could have
n-cubed different ciphering "stacks."  Moreover, we expect --
given some reasonable precautions to be defined -- that no "stack"
will be weaker than the strongest cipher it contains.  And we can even
allow the user to force the use of the original cipher, so the "stack"
will be no weaker than the original system, even if we change
cipherings to avoid the "already broken" problem.

Since individual ciphers in a "stack" do not expose both plaintext and
ciphertext simultaneously (even when the opponents have input to and
output from the "stack"), they cannot be attacked by "known plaintext"
or "defined plaintext" techniques.  Since these techniques are
normally the worst attack situations, even weak ciphers may be
stronger in a multi-ciphering "stack" than they would be used alone.
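As a deliberately toy illustration of the "stack" idea, the sketch below chains three independently keyed stream ciphers.  The component "ciphers" are trivial LCG-XOR stand-ins of my own, not real designs; any real cipher with its own key fits the same slot:

```c
#include <stddef.h>

/* A toy keyed stream cipher: XOR the buffer with a keystream from a
   linear congruential generator seeded by the key.  A stand-in only. */
static void toy_stream(unsigned char *buf, size_t len, unsigned long key)
{
    unsigned long s = key;
    size_t i;
    for (i = 0; i < len; i++) {
        s = s * 1103515245UL + 12345UL;       /* advance the keystream */
        buf[i] ^= (unsigned char)(s >> 16);
    }
}

/* Cipher with a three-deep "stack" under independent keys k1, k2, k3.
   With n candidate ciphers there are n*n*n possible stacks; for these
   XOR-stream stand-ins, applying the same stack again decrypts. */
void stack_crypt(unsigned char *buf, size_t len,
                 unsigned long k1, unsigned long k2, unsigned long k3)
{
    toy_stream(buf, len, k1);
    toy_stream(buf, len, k2);
    toy_stream(buf, len, k3);
}
```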

THE CURRENT CONCERN:  The point of using a "stack" of ciphers has
little to do with raw strength per se.  Each and every one of the
ciphers we use should have more than enough strength -- provided they
actually do what we imagine they should.  The point of the cipher
"stack" is to protect us from our own delusions about strength:
Strength is *not* about what *we* can't break, it is what our unknown
*opponents* can't break, and we don't know *what* they can do.  So if
one of the ciphers in the "stack" really is broken, we have two other
ciphers in place and functioning which must also be broken before our
data are exposed.  The intent is to protect against system failure
from attacks against particular ciphers.  

[At this point, the other side often claims that the probability of
cipher weakness is so low that we need not bother with all this
multi-ciphering stuff.  But, in reality, we cannot know when our
ciphers are broken, so we also cannot know the probability of such
breakage.  Claiming and re-claiming that weakness is "more likely" to
be in implementation or key management is just delusion:  If the
cipher is already broken, implementation problems are beside the
point.  Beyond fundamental physical limitations, we simply don't know
*what* our opponents can or cannot do.]

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: NIST, AES at RSA conference
Date: Fri, 18 Feb 2000 20:06:08 GMT


On Thu, 17 Feb 2000 15:23:09 GMT, in
<[EMAIL PROTECTED]>, in sci.crypt
[EMAIL PROTECTED] (John Savard) wrote:

>[EMAIL PROTECTED] (David Wagner) wrote, in part:
>
>>Ok.  But increasing pool size does make the "weakest link" problem worse,
>>whether or not you triple the ciphers, so it's not clear to me that tripling
>>really helps all that much to address the concern.  (And, to me, 3rd weakest
>>doesn't sound all that much better than weakest...)
>
>Which is why I made a suggestion that (it seemed to me) Terry Ritter
>did not consider was terribly useful: that one should have an option
>of constraining the cipher choice, so that at least one of the ciphers
>used must come from a "highly trusted" smaller pool (i.e., one's
>favorites among the AES candidates) while others come from a broader
>range - combining the benefits hoped for with reduced risks.

If I gave the impression that your suggestion to force a particular
cipher or one from a group of ciphers was not useful, I am sorry.
Indeed, I think something like that is necessary to address
perceptions of weakness when moving to a multiple cipher environment.

It seems to me that the other side has a peculiarly inconsistent
attitude about cipher validation:  Currently, the idea is that various
groups produce their best designs, and if nobody can find an attack
for a design, it becomes "trustable."  Yet this is exactly the same
process we see in newbie ciphers all the time, and we don't consider
*those* "trustable."  The difference between newbies and academics is
quantitative, not qualitative; a defective vetting process is still a
defective process.  

Note that this same process does make sense for *learning* about
cryptography, and to the extent that problems are found and fixed, the
resulting ciphers will be improved.  But it is a fallacy to extend
"improved" to "trusted," and that is what AES tries to do.  

Cipher strength is not about what our academics can do, but what our
opponents can do.  We cannot extrapolate lack of success by academics
to lack of success by the opponents.  We cannot estimate the
probability of weakness because we never know when the ciphers we use
are weak.  All we can do is minimize the possibilities of weakness and
limit the extent of loss.   

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Keys & Passwords.
Date: Fri, 18 Feb 2000 21:17:55 +0100

[EMAIL PROTECTED] wrote:
> 
> This is a little off topic, but let's say I have a program that takes a
> password and then converts that into a 256 byte array.  That array is in
> turn what's used to cipher the data.
> 
> Does that mean the program is 2048bit encryption (256B = 2048b)?  If not
> what determines the bit level security (is it the original key or what's
> actually used to cipher the data)?

If your password has n bits then (assuming that these are random)
you can't get more than n bits of security out of that. It is
commonly assumed that the algorithm you use to generate sequences
is known to the opponent. The opponent simply needs to brute
force the n bits in the worst case. If the n bits are not (ideally) 
random, then you have less security from the very beginning.

M. K. Shen

------------------------------

Crossposted-To: talk.politics.crypto
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: UK publishes 'impossible' decryption law
Reply-To: [EMAIL PROTECTED]
Date: Fri, 18 Feb 2000 20:13:18 GMT

In sci.crypt Jim <[EMAIL PROTECTED]> wrote:

:>BTW France was very restricted, until a short while ago, when the whole law
:>was reversed, so that now France is much freer than the UK.

: How do you mean? There are no restrictions on the use of crypto in
: the UK.

Read the thread.

If you use crypto in the UK and lose your key, the current bill will
make that a criminal act if the government asks you to decrypt.
-- 
__________
 |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]

Be good, do good.

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: NSA Linux and the GPL
Date: Fri, 18 Feb 2000 20:21:47 GMT

Just some other thoughts on the matter.

1. As far as I can tell, Linux is doing a much better job of being a
viable desktop alternative.  I don't see many desktop applications being
ported to an xBSD.

2. Commercial backing.  There are a lot of companies backing Linux
because of its popularity.

3. This may be true, it may not, but one consideration could be how
easy it is to install.  I don't know who would be using Linux there, but
they may want it to be fairly easy, and vendors like RedHat, Corel,
Mandrake, and I believe Caldera have all made it much easier.

I personally have never used an xBSD, but from what I've read they are
very mature and perform exceptionally well.  I am planning on trying out
at least one of them sometime in the future, mainly because I hear they
have excellent TCP/IP stacks.  Well, that's my 2 bits.

csybrandy

In article <[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (John Savard) wrote:
> On Thu, 17 Feb 2000 22:06:20 -0500, "Adam Durana"
> <[EMAIL PROTECTED]> wrote, in part:
>
> >The bigger question is why is the NSA wasting their time with Linux?  If I
> >were them I would work on something like OpenBSD, or maybe FreeBSD, since
> >OpenBSD is based in Canada.  I guess the NSA is just being trendy.
>
> It *is* true that BSD is generally considered a more secure operating
> system than Linux. But Linux has improved considerably since its early
> releases.
>
> However, the NSA might have wanted the security part to be written
> from scratch, so that any known flaws in BSD would not be a problem.
> Also, Linux is ahead of BSD in another area: it is more
> POSIX-compliant.
>
> Whatever else one makes of the news item, I think it is a feather in
> Linux' cap.
>
> John Savard (teneerf <-)
> http://www.ecn.ab.ca/~jsavard/index.html
>



------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Processor speeds.
Date: Fri, 18 Feb 2000 21:39:22 +0100

John wrote:
> 
> How many MIPS does a pentium 3 perform? How many does the
> fastest super computer perform?

A few bits of relevant information I happened to get from a
newspaper today:

Intel will bring out a 1 GHz Pentium III this year.  AMD, which
produces the 850 MHz Athlon, has demonstrated a 1.1 GHz version.
IBM will bring out 1.2 GHz chips.

Wine 2, a Japanese special-purpose computer for molecular
dynamics simulations, will in its final stage (2688 chips)
attain 50 teraflops.

M. K. Shen

------------------------------

From: "John E. Kuslich" <[EMAIL PROTECTED]>
Subject: Re: Processor speeds.
Date: Fri, 18 Feb 2000 14:05:14 -0700

http://www.haveland.com/povbench/

The above site has some interesting benchmarks for clusters.  This benchmark
may be of particular interest to cryptographers because the ray-tracing
algorithm requires very little interprocess communication, as do most
cryptographic applications (at least in the brute-force key-search case).
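Brute-force key search parallelizes this way because each node can take a disjoint slice of the keyspace and run independently, with no communication needed until one node finds the key.  A minimal C sketch of the split (a hypothetical helper of mine, not code from the site above):

```c
/* Split the keyspace [0, total) into contiguous slices, one per node.
   Node 'rank' (0-based) searches keys in [*first, *last).  Slices are
   disjoint and together cover the whole keyspace. */
void key_slice(unsigned long total, int nodes, int rank,
               unsigned long *first, unsigned long *last)
{
    unsigned long per   = total / nodes;
    unsigned long extra = total % nodes;
    /* spread the remainder over the first 'extra' nodes */
    *first = (unsigned long)rank * per
           + ((unsigned long)rank < extra ? (unsigned long)rank : extra);
    *last  = *first + per + ((unsigned long)rank < extra ? 1UL : 0UL);
}
```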

If you scroll down to the XCellerator entry you will find our 4-processor
Celeron array, which we use to crack Word and Excel version 8 files.  Please
note that the cost per POVMARK is very low.  I believe at the time we entered
we were the lowest, or very close to the lowest, in cost/performance.

An overclocked Celeron is an awesome machine for the money and when you use
Linux as an OS, connect them on cheap ethernet cards and use diskless
booting, you can do some megaprocessing for very few dollars.

Even at a 450 MHz clock rate, the 300 MHz Celeron is rock solid under
Linux.  We have yet to experience a system crash after almost a year in
operation.  I am absolutely amazed at the prices some firms are paying for
simulation or computing clusters from Sun, Compaq and other vendors.  We
have also used overclocked Celerons with Windows and have experienced the
same number of crashes we experience without overclocking :--)  Bill Gates
says Windows 2000 will be reliable...I wonder if he means we can count on it
crashing regularly!

Anyone with Linux and some hardware assembly skills can build a computer
cluster with amazing performance at one tenth the cost of competing
commercially available clusters.

JK  http://www.crak.com  Password Recovery Software



John <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> How many MIPS does a pentium 3 perform? How many does the
> fastest super computer perform?
>
>
>


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
