Cryptography-Digest Digest #972, Volume #9 Mon, 2 Aug 99 17:13:03 EDT
Contents:
Re: How Big is a Byte? (was: New Encryption Product!) (Brian Inglis)
Re: SSL vs TLS ([EMAIL PROTECTED])
Re: Help please (WWI/WWII ciphers) ([EMAIL PROTECTED])
Re: With all the talk about random... ("Kasper Pedersen")
Re: With all the talk about random... (Jim Dunnett)
Re: Cryptanalysis of R250 (Patty Broad)
PKCS#12 (Christopher Steel)
Re: Bad Test of Steve Reid's SHA1 ([EMAIL PROTECTED])
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (Brian Inglis)
Crossposted-To: alt.folklore.computers
Subject: Re: How Big is a Byte? (was: New Encryption Product!)
Date: Mon, 02 Aug 1999 19:21:31 GMT
Reply-To: [EMAIL PROTECTED]
On 30 Jul 1999 12:24:46 -0400, [EMAIL PROTECTED] (Patrick
Juola) wrote:
>In article <[EMAIL PROTECTED]>,
> <[EMAIL PROTECTED]> wrote:
>>On 30 Jul 1999, Patrick Juola wrote:
>>> In article <[EMAIL PROTECTED]>,
>>> <[EMAIL PROTECTED]> wrote:
>>> >On Thu, 29 Jul 1999, John Savard wrote:
>>> >> Zero may be my first counter state, but it is not the number
>>> >> associated with the first item or event. As there are times when
>>> >> reserving a storage location for element zero of an array makes sense,
>>> >> and other times when it does not, because of the purpose to which the
>>> >> array will be put, flexibility is useful...but, on the other hand,
>>> >> having to specify this every time is wasteful too. Starting with
>>> >> element zero at least allows both options with, at worst, a slight
>>> >> waste of storage, which is probably worth it to make one's program
>>> >> easy to understand and document, and avoid unnecessary offset
>>> >> arithmetic.
>>> >The problem with this is that array[0] is the first item.
>>> Only in C and its descendants; in Fortran, if I recall properly, array(1)
>>> is the first item. In Pascal, I can define an array to start and finish
>>> at any ordinal position -- and if I'm using a Real Language(tm) I can
>>> define arrays based on any durn-fool index scheme I want. (I just
>>> *love* the associative arrays available in /bin/awk.)
>>I'm sorry. I thought we were discussing off by one errors and the
>>differences between counting and addressing, not your bias against C. Yes,
>>I was assuming C but the C usage comes from the natural methods of most
>>assembly languages and is very logical when you think about it.
>Interesting statement; I've been programming in C professionally since
>quite possibly before you touched your first computer -- and here I'm
>accused, irrelevantly, of an anti-C bias.
>My feelings about C are beside the point; I'm simply stating that the
>array semantics in C are a matter purely of *convention* and aren't any
>more natural or intuitive -- and in fact, to a traditionally trained
>mathematician are *less* natural/intuitive -- than one-based indexing
>or any other method you like. C arrays are just thinly disguised pointers,
>which again aren't particularly natural or intuitive; they just happened
>to be convenient for Thompson and Ritchie. Whether I like C or not,
>I can certainly recognize that the zero-based indexing conflicts with
>standard mathematical practice and is difficult to present to students.
Mathematicians (and engineers!) should not be allowed to program,
teach or explain programming or algorithms, until they have been
deprogrammed from using symbols and learned to use meaningful
names to describe objects. I guess it must result from learning
with Basic and Fortran.
>You seem to be making the common mistake of believing that the
>structures in your first or preferred language are somehow less
>arbitrary than the structures in other languages. 'Tain't so,
>dude.
>>> >The reason this
>>> >is a problem is that in computer addressing the address refers to the next
>>> >item in memory (i.e. the item that starts at array[0]).
>>> No, the reason for this is that the semantics of C-style arrays are
>>> defined as syntactic sugar over a pointer access. Which is one of
>>> the most serious security/reliability holes in C/Unix.
Which you can fix with a small amount of extra syntactic sugar
(macros) to allow you to access and define vectors and arrays as
in Fortran (see /Numerical Recipes in C/) or Pascal (see many
academic texts on C).
>>Yes, C arrays come very nearly directly from (or at least have a lot in
>>common with) assembly language. And arrays are pointers in C. However, I
>>really think you're exaggerating when you try to claim that C's array
>>syntax is a security or a reliability hole in either C or Unix.
>I stand by my statement. Two words: finger bug.
And typically, not much more serious or difficult to find and fix
than any other typo in C.
>> The
>>improper use of C array syntax can cause said holes but then most C
>>programmers have gotten used to the "it starts at 0" concept.
>Of course, one can get used to anything in time -- why should one
>have to work that hard to master arcana?
See macros above!
>> Would you
>>also say it's a major flaw with assembly language?
>Yes -- although assembly language itself has so many other major flaws
>that it's hard to know where to begin cataloguing them. The main one,
>of course, being that assembly language is so damn hard to write
>*because* it follows the conventions of the machine instead of conventions
>useful for thinking about the problem at hand.
You just have to go into a little more detail when you write your
lowest level routines. You always have to translate from the
problem model to the program model. IMHO, mathematicians seem to
find this particularly difficult, perhaps expecting a simple
equivalence between forms of expression, compared to other
scientists, who are perhaps more familiar with using different
conceptual models depending on what your problem domain is and
what kind of results you want.
>If I needed to make a graph of financial data covering the years 1950-1999,
>it would make much more sense in the context of that problem to
>be able to use an array indexed from 1950 to 1999, inclusive, with
>compiler support to prevent out-of-bounds accesses. Which C notably
>fails to provide.
See macros above!
>>Should we all start
>>having our assembly language programs use pointers to arrays that start
>>one byte before the array just so we can say that item 1 is at
>>array + 1?
Mathematical programs expressed in Basic, based on Fortran code,
often just ignore the zeroth elements, and this was common even
in the seventies when memories were small, expensive and core!
>No, we should stop writing assembly language programs. No one does
>large projects in assembly any more for a very good reason. The
>compiler is usually a better assembly language programmer than you are
>and doesn't make careless mistakes in translation.
>>Now, if you want to discuss this rationally rather than show your bias
>>against C I'd be happy to.
>"If all you have is a hammer, the entire world looks like a nail."
>Again, I'm not especially biased against C -- but I've been using it
>enough to be well aware of its weaknesses, and array indexing is,
>sho 'nuff, one of the big ones.
Not if you use the whole language to express your problem.
>>PS Off by one errors occur in more languages than just C and kin.
>Yes. But typically many fewer of them.
I have worked in system and technical support at times, and
almost all development and production problems with programs in
all languages involve undisciplined array access. I have had
hundreds of support calls and visits from programmers about
program failures, to which I always responded with: "Are you
using arrays? Yes! Check your array access to ensure that you are
not over-/underrunning the bounds." Never heard back from any of
them. If it was not an array bound problem (~90%), it was a
division by zero (~9%). In many cases, these brainiacs had,
deliberately or by cloning, disabled bounds checking and
divide-by-zero exception reporting!
>The pointer-based semantics of C arrays encourages lots of
>errors : off-by-one errors, overrun screws, smashing the stack,
>unexpected aliasing, and so forth. Most of the major security holes
>in Unix -- or at least the publicized ones -- are related in some
>fashion or another to the array semantics. Array semantics, esp.
>w.r.t. multidimensional arrays, are probably the hardest part of C
>to teach to a student and the easiest part to screw up (and often the
>hardest bugs to find and fix).
And the easiest to avoid. See macros above. Add
pre-/postconditions with assert, explicit checks, or better
thinking about the code.
>The various bondage and domination languages -- Pascal, Ada, &c. --
>specifically implement arrays as *packaged* memory blocks, which
>allows greater flexibility (and hence greater intuitiveness) in
>calling as well as better ability to catch and trap errors. This,
>in my view, doesn't make up for the *rest* of the officious SE crap
>one has to put up with in order to program in Ada, but if I had the
>ability to pick and choose features in designing my own ideal
>language, I would definitely use Ada- or Pascal-style array semantics
>in preference to C. And general associative arrays in preference
>to either if I had good compiler support.
For C, read good library support! A lot of thought, a few good
library functions, glue functions, macros, declarations, and you
can easily have associative memory built on arrays, heaps, hash
tables, lists or trees.
Think about it right, and you can almost always use dynamically
allocated arrays in a multithreaded high transaction rate
environment without a hiccup for years.
> -kitten
Thanks. Take care, Brian Inglis Calgary, Alberta, Canada
--
[EMAIL PROTECTED] (Brian dot Inglis at SystematicSw dot ab dot ca)
use address above to reply
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: SSL vs TLS
Date: Mon, 02 Aug 1999 19:27:29 GMT
Here's the link to a RFC all about TLS.
ftp://ftp.isi.edu/in-notes/rfc2246.txt
Sorry it's not a hyperlink.
In article <[EMAIL PROTECTED]>,
Bjoern Sewing <[EMAIL PROTECTED]> wrote:
> Hi Gabriel
> >
> > TLS is a little different from SSL. The biggest change I see is in the
> > MAC computation; it uses a new, stronger version of HMAC (RFC 2104).
> >
> > http://www.consensus.com/ietf-tls/ietf-tls-home.html
> >
> > The file is <draft-ietf-tls-ssl-mods-00.txt>
>
> The draft only describes what changes should be made,
> but not the changes that have actually been made.
>
> Even so thanks
>
> Bjoern
>
> --
> ---------------------------------------------------------------
> Bjoern Sewing - University of Bielefeld (TF-NOV - N3.02)
>
> email: [EMAIL PROTECTED]
> ---------------------------------------------------------------
>
Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Help please (WWI/WWII ciphers)
Date: Mon, 02 Aug 1999 19:36:44 GMT
In article <KI0p3.1871$[EMAIL PROTECTED]>,
"Mike Blais" <[EMAIL PROTECTED]> wrote:
> If anyone can help me with this it would truly be appreciated; my
> girlfriend and I have spent far too much time on this and are ready to
> give up (this isn't really our type of stuff).
>
> We have a package (newsletter) that has a code hidden in it, and our only
> hint was that it was a code stolen from the Germans in the war (don't know
> which war). Unless the code is actually hidden in the text, this is what we
> believe is the code: 223,172,926 paragraph 2 section (b) and a
> page later
> 89,254,167 section (b) paragraph (iii)
> We don't know if the section/paragraph numbers mean anything but I figured
> they should be included.
> So what I have is this 223 172 926 89 254 167 and the answer key is like
> this
> ---- ------- ---------- --- ----
What it sounds like to me is that the numbers are perhaps some sort of
ID number for books or manuals, like an ISBN. That could be why
you have the section/paragraph information. Of course, I don't know
where you can get any old German books :-)
Anyway, if you don't mind me asking, why did you receive this mysterious
code?
> As far as I found on the web, the only number-for-letter cipher was the
> Zimmermann Telegram, but the key I found was in German. Also there are more
> answer spaces than numbers, and the only numbers in the rest of the package
> are telephone numbers (all real).
> Any tips on whether we're on the right track to a solution would be great.
>
> Mike Blais
>
> Kirsten Johnston
>
> [EMAIL PROTECTED]
>
> Port Coquitlam BC
>
>
------------------------------
From: "Kasper Pedersen" <[EMAIL PROTECTED]>
Subject: Re: With all the talk about random...
Date: Mon, 2 Aug 1999 21:23:33 +0200
<[EMAIL PROTECTED]> wrote in message
news:7nvksf$lin$[EMAIL PROTECTED]...
> If there is a bias stronger than 1/2 over an infinite-length sequence
> it is not random. You can't say something's random by looking at a
> finite sequence cuz you will always find biases or some pattern that
> may work for the section you have ...
>
> All hardware sources are unpredictable at best, but not truly random.
What I should have said was something like 'unpredictable bit generator with
entropy close to 1', but I think that would have confused more than
helped.
What would you call such a device? The letters 'RNG' would not be usable.
(you are right - once upon a time I heard something like that in statistics
class)
> The only difference between a PRNG and a TRNG is that PRNGs are finite
> in nature and thus can never be 'truly' random.
>
And even if we could find one, we couldn't measure it without error.
(side thought: Resistor noise is supposed to have unlimited bandwidth, but
there is a finite number of electrons..)
But what I dislike about PRNGs in this context is, of course, that they are
all deterministic. They leak the entropy of their seed, and that's it.
OTOH: A TRNG cannot be predictable in real life (?) - it would require
infinite 'internal state'.
...
> > When you have s, it's broken.
> > To turn it around, random() is as weak as it can possibly be.
> > To get that OTP, you need a real source of random numbers.
>
> In this case you could simply guess the 32-bit seed and check it
> against the message.
Certainly. But what I was trying to get across is that that particular PRNG
delivers its state slice-by-slice, and thus is especially easy. If one were
to search 2^32 blocks it would have taken an hour, maybe. Then he'd have
found a just-as-leaky 64-bit generator and thought he was safe.
> Some must note that what doesn't look random to the 'eye' might actually
> be random in nature (or unpredictable, or infinite ...). A million
> zeros in a row is just as random as 1010101110111000011010010011 etc...
I know. But I would become seriously nervous. After all, p(hardware failure
in the time it takes to get 1M samples) >>>> 1.01*10^-301030.
> Tom
------------------------------
From: [EMAIL PROTECTED] (Jim Dunnett)
Subject: Re: With all the talk about random...
Date: Mon, 02 Aug 1999 19:24:40 GMT
Reply-To: Jim Dunnett
On Mon, 02 Aug 1999 14:57:30 GMT,
[EMAIL PROTECTED] (John McDonald, Jr.)
wrote:
>On Fri, 30 Jul 1999 19:15:19 -0500, Frank Kienast
><[EMAIL PROTECTED]> wrote:
>
>>I guess I could now go and decrypt all those old files and re-encrypt
>>them with PGP or something, but given that information such as what I
>>thought about my boss ten years ago is probably nothing earth-shaking if
>>someone found it now, I will probably not bother.
>
> This statement illustrates something else that is often
>neglected in cryptography: the "who cares" factor. After all, if my
>encryption will store my information for thousands or millions of man
>years, then lets suppose that using a method such as distributed.net
>would crack it in 10.
>
>Most of the time, in 10 years, I could care less how "secret" that
>information was.... So at what point is encryption "good enough?"
Which is why, in the real world, tactical cryptosystems are
simpler to use and less secure than strategic ones.
It doesn't matter if the enemy penetrates your tactical system
next week ... it's already too late ...
--
Regards, Jim. | Findhorn Community:
amadeus%netcomuk.co.uk | Developing EcoVillage
dynastic%cwcom.net | of about 350 people:
|
PGP Key: pgpkeys.mit.edu:11371 | http://www.gala.org/findhorn/
------------------------------
From: Patty Broad <[EMAIL PROTECTED]>
Subject: Re: Cryptanalysis of R250
Date: Mon, 02 Aug 1999 15:30:40 -0500
SCOTT19U.ZIP_GUY wrote:
>
> It is not so much that you are using R250 or whatever. It is
> that if you use the same key twice, then if your enemy
> has one plaintext/ciphertext message pair he can
> easily decrypt all future messages by getting the string of bits
> used in the XOR. So all future messages are broken.
> See attacks on OTP ciphers where the same key is used more
> than once.
>
> David A. Scott
This cipher isn't going to be used in the real world. I'm using it as
an academic exercise in cryptanalysis. The exercise assumes limited
amounts of available ciphertext and that the sender was smart enough
not to use the same key(s) more than once (and that the interceptor has
no access to the algorithm itself -- only that it is an XOR stream
cipher).
It's the actual mathematics of this sort of cryptanalysis that I need.
I shall, however, look into attacks against OTP's as per your
suggestion. But if you happen to have pointers to any research regarding
the relative cryptographic "strength" of R250, please feel free to
share.
--R. Pelletier,
Sys Admin, House Galiagante
------------------------------
From: Christopher Steel <[EMAIL PROTECTED]>
Subject: PKCS#12
Date: Mon, 02 Aug 1999 16:43:59 -0400
Does PKCS#12 support DSA, and if so, how is it used for secure key
exchange?
-Chris
--
=================================================================================
_/_/_/ _/ _/ _/ _/ Christopher Steel
_/ _/ _/ _/_/ _/ Senior Java Architect - Sun Java Center D.C.
_/_/_/ _/ _/ _/ _/_/ Sun Professional Services
_/ _/ _/ _/ _// 7900 Westpark Drive
_/_/_/ _/_/_/ _/ _/ McLean, Va 22102
Phone: 703-208-5778
M I C R O S Y S T E M S Fax: 703-208-5836
Mobile: 703-798-6558
Email: mailto:[EMAIL PROTECTED]
==================================================================================
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Bad Test of Steve Reid's SHA1
Date: Mon, 02 Aug 1999 20:52:33 GMT
Thank you very much for a clear and interesting answer. Correct me
if I am wrong: it appears that I got the "right" answer when defining
LITTLE_ENDIAN because that is what my Intel processor requires. If
this is so, does Reid's SHA1 routine not produce the "network byte
order" you refer to on Intel platforms? The 14C20690
F653FB16 396E4F80 4C9DAFB4 6E513D4F I get for "abc" when I do not
define LITTLE_ENDIAN does not appear to be just the byte reversal of
A9993E36 4706816A BA3E2571 7850C26C 9CD0D89D. I would have expected
9DD8D09C 6CC25078 71253EBA 6A810647 363E99A9.
For now it looks as if I just need to define LITTLE_ENDIAN.
Thanks again,
Rob
In article <h13p3.8558$[EMAIL PROTECTED]>,
"Richard Parker" <[EMAIL PROTECTED]> wrote:
>
> > So far no-one has told me what LITTLE_ENDIAN does. Does it refer to
> > one of the alternate methods referred to as 7. and 8. on the NTIS site
> > (thanks to Jerry Coffin for that URL)? I would assume not since, as
> > D. L. Keever mentions, you do need to define LITTLE_ENDIAN to get the
> > right digest. Or does it refer to the "technical correction" that
> > generated SHA-1 (FIPS 180-1) in the first place? Something else? Just
> > curious.
>
> big-endian vs. little-endian
>
> Big-endian and little-endian are terms that describe the order in
> which a sequence is stored. Big-endian is an order in which the
> "big end" (most significant value in the sequence) is stored first.
> Little-endian is an order in which the "little end" (least
> significant value in the sequence) is stored first. Consider a
> sequence of bytes with increasing memory addresses ABCD, to be
> interpreted as a 32-bit word with numerical value N. In
> little-endian, the byte with the lowest memory address (A) is the
> least significant byte: N = (2^24)D + (2^16)C + (2^8)B + A.
> In big-endian, the byte with the lowest address (A) is the most
> significant byte: N = (2^24)A + (2^16)B + (2^8)C + D.
>
> IBM's 370 computers, most RISC processors, and Motorola
> microprocessors use the big-endian byte ordering. On the other
> hand, Intel processors and DEC Alphas are little-endian.
>
> Big-endian byte ordering is the byte ordering used in TCP/IP,
> the protocol suite that forms the basis of the Internet, therefore
> you sometimes see big-endian byte ordering called "network byte
> order."
>
> Note that within both big-endian and little-endian byte orders, the
> bits within each byte are big-endian. That is, there is no attempt
> to be big- or little-endian about the entire bit stream represented
> by a given number of stored bytes. While it is possible to be
> big-endian or little-endian about the bit order, CPUs are almost
> always designed for a big-endian bit order. In data transmission,
> however, it is possible to have either bit order.
>
> The names big-endian and little-endian are drawn from the feud
> between the two mythical islands Lilliput and Blefuscu in Jonathan
> Swift's novel "Gulliver's Travels." In the novel the two islands
> disagree over the correct end (big or little) at which to crack an
> egg.
>
> -Richard
>
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************