Cryptography-Digest Digest #360, Volume #13 Mon, 18 Dec 00 13:13:01 EST
Contents:
Re: hash function for public key digital signature : one way? ([EMAIL PROTECTED])
Re: Q: Result of an old thread? (Mok-Kong Shen)
Re: Why primes? (Mok-Kong Shen)
Re: Why primes? (Mok-Kong Shen)
Re: Nonlinearity question (Mok-Kong Shen)
Re: does CA need the proof of acceptance of key binding ? ([EMAIL PROTECTED])
Re: Mathematical concepts (Bo Lin)
Re: Why primes? (Francois Grieu)
Re: does CA need the proof of acceptance of key binding ? (Mok-Kong Shen)
Re: Visual Basic Source Code (Paul Schlyter)
Re: Visual Basic Source Code ("Jason Bock")
----------------------------------------------------------------------------
From: [EMAIL PROTECTED]
Subject: Re: hash function for public key digital signature : one way?
Date: Mon, 18 Dec 2000 13:56:33 GMT
In article <[EMAIL PROTECTED]>,
Benjamin Goldberg <[EMAIL PROTECTED]> wrote:
> Simon Johnson wrote:
> >
> > In article <91igpd$nlb$[EMAIL PROTECTED]>,
> > [EMAIL PROTECTED] wrote:
> > > Must the hash function for the public key digital signature be
> > > one-way?
My argument seems to be confusing. In this model, the digital
signature consists of the signed message digest together with the
plaintext of the message.
> > > I think a collision free hash function is adequate for the
> > > authentication in public key digital signature that includes the
> > > signed message digest and plaintext. No motivation to generate the
> > > plaintext from the "unsigned" digital signature or message digest
> > > since the plaintext is publicly available.
>
> Although there doesn't seem to be a need for preimage resistance for a
> public key signature, I don't know of any hash that is collision
> resistant but not preimage resistant.
For those who, like myself, are not clear about preimage resistance:
#Preimage resistance: for any pre-specified output y, it is
computationally infeasible to find an input x such that h(x) = y.
I assume preimage resistance is identical to one-wayness.
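To make the two definitions concrete, here is a deliberately insecure toy sketch in Python. The block-XOR "hash" is my own illustrative construction, not any real hash function; it is trivially neither preimage resistant nor collision resistant, so both properties fail at once.

```python
# Toy illustration (NOT a real cryptographic hash): XOR all 4-byte
# blocks of the padded message together.
def xor_hash(msg: bytes, block: int = 4) -> bytes:
    msg = msg + b"\x00" * (-len(msg) % block)  # zero-pad to a block boundary
    out = bytes(block)
    for i in range(0, len(msg), block):
        out = bytes(a ^ b for a, b in zip(out, msg[i:i + block]))
    return out

# Preimage attack: for any target y, the 4-byte message y itself
# hashes to y, so finding a preimage is trivial.
y = bytes.fromhex("deadbeef")
assert xor_hash(y) == y

# Second-preimage / collision attack: appending the same block twice
# cancels under XOR, leaving the hash unchanged.
m1 = b"pay Alice $100.."
m2 = m1 + b"junk" + b"junk"   # identical blocks cancel each other out
assert xor_hash(m1) == xor_hash(m2)
```

A real hash such as SHA-1 compresses arbitrary input to a fixed size, so it cannot be bijective; "reversible" in this thread can only mean that *some* preimage is easy to find, which the sketch above makes explicit.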
> > > May I conclude that the one-way property of the hash function in
> > > public key digital signature is only necessary for the hybrid
> > > protocol because it wants to preserve the secrecy of the plaintext?
>
> If you hash the plaintext for the signature, then encrypt the
> plaintext, and then append the signature, security is compromised if
> the hash is not preimage resistant.
This is the case of the hybrid protocol, which encrypts the plaintext
with a session key and which I think requires a collision-free and
one-way hash function.
> > No. If the hash is reversible, then you can easily create another
> > document that hashes to the same value, and therefore have a
> > situation where I can generate any document I want and attach your
> > signature to it. Collision resistance is different: if I _randomly_
> > pick two plaintexts, the chance of them taking the same hash value
> > should be low.
I guess you mean that the easy creation of another document that
hashes to the same value may be due to the availability of
1. the input plaintext to the reversible hash function, obtained by
reversing the hash;
2. a collision in the reversible hash due to the reversible property.
For case 1, the input plaintext is already available in the digital
signature, so the hash is not the culprit for the collision.
For case 2, let's assume that there is a reversible but collision-
free hash; is the conclusion still valid?
For case 2 again, how does one show that the reversible property
causes collisions in a hash function?
> Sorry, but you're wrong about what collision resistance is. For a
> collision-resistant hash, it is infeasible to construct two messages
> which collide. Picking messages at random is not the same.
#http://paris.cs.berkeley.edu/~perrig/projects/validation/node3.html
vincent
[EMAIL PROTECTED]
Sent via Deja.com
http://www.deja.com/
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Q: Result of an old thread?
Date: Mon, 18 Dec 2000 15:18:47 +0100
Benjamin Goldberg wrote:
>
> Mok-Kong Shen wrote:
> >
> > Quite a time ago someone posted a scheme of transmission of
> > message without using a shared secret key as follows (all
> > matrices are of the same size):
> >
> > The message is in a singular matrix S (e.g. one with a zero
> > column). Alice chooses an arbitrary non-singular matrix A
> > and sends AS to Bob. Bob chooses an arbitrary non-singular
> > matrix B and sends ASB to Alice. Alice multiplies it with
> > A^(-1) and sends SB to Bob, who can multiply it with B^(-1)
> > to obtain S.
> >
> > If my memory is correct, nobody has commented at that time
> > whether the scheme is secure or not (or how secure it is).
> > Does anyone know more about the issue or can say something
> > concrete about the security of the scheme? Thanks.
>
> Another interesting thing to point out is... if S is created by
> multiplying more than one matrix together, and one of those factors is
> singular, the result is also singular. With the scheme you described,
> the opponent can know that he has S because there's a column of 0s. If
> S is instead the product of matrices, it can be made so that there's no
> 0 column, and we can still be sure it's singular -- thus making it
> slightly harder for an opponent to test that his answer is correct.
>
> For instance, suppose each matrix is 8x8 with elements in GF(2^32).
> Start out with S0, which has the first column as all 0s, and the rest
> generated from a TRNG. Then pick a matrix R all of whose elements are
> also generated from the TRNG. Use S = S0 * R, and we can thus be
> certain that S is singular, and that it is unlikely to have an all-0s
> column.
>
> I'd also like to suggest a[nother] modified scheme:
> Alice uses an m x m matrix, Bob uses an n x n matrix, and S is m x n.
>
> If the security of this is not much less than the original scheme, then
> we can allow one end or the other to do less computation.
There is surely more than one method of obtaining a
singular matrix. If one proceeds more systematically and
stores the given information, say, in the first n-1 columns
of an n*n matrix, then one can let the last column be
any linear combination of the other columns (I used the
zero column only as an example in the original post). One
could also restrict the space allocated to the information
further, e.g. let it occupy only n-c arbitrary columns,
with the rest being linear combinations. I am not certain,
but I conjecture that it is also possible to allocate the
information everywhere except the main diagonal, with
the diagonal elements adjusted so as to render the matrix
singular. As to using matrices of different sizes, my
guess (no proof) is that employing the same size would
somehow render the scheme 'optimal'. On the other hand,
the scheme is, in my intuition, almost certainly not very
secure, though we haven't yet seen a neat proof of that
in this thread.
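For readers who want to experiment, here is a minimal sketch of the three-pass scheme over a small prime field. The field size P, the "last column = sum of the others" construction of the singular S, and all helper names are my own choices for illustration; the protocol steps are exactly those described above.

```python
# Three-pass matrix scheme over GF(P): Alice sends AS, Bob sends ASB,
# Alice sends SB, Bob recovers S. S is singular by construction.
import random

P = 101  # small prime field, for illustration only

def matmul(X, Y):
    n, k, m = len(X), len(Y), len(Y[0])
    return [[sum(X[i][t] * Y[t][j] for t in range(k)) % P
             for j in range(m)] for i in range(n)]

def inverse(M):
    """Gauss-Jordan inversion mod P; raises StopIteration if singular."""
    n = len(M)
    A = [row[:] + [int(i == j) for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col])
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], P - 2, P)          # Fermat inverse of pivot
        A[col] = [x * inv % P for x in A[col]]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(A[r][c] - f * A[col][c]) % P for c in range(2 * n)]
    return [row[n:] for row in A]

def random_invertible(n):
    while True:
        M = [[random.randrange(P) for _ in range(n)] for _ in range(n)]
        try:
            inverse(M)
            return M
        except StopIteration:
            pass  # singular by chance; draw again

n = 3
# Message matrix S, made singular: last column = sum of the others.
S = [[random.randrange(P) for _ in range(n - 1)] for _ in range(n)]
for row in S:
    row.append(sum(row) % P)

A = random_invertible(n)            # Alice's secret
B = random_invertible(n)            # Bob's secret
AS  = matmul(A, S)                  # pass 1: Alice -> Bob
ASB = matmul(AS, B)                 # pass 2: Bob -> Alice
SB  = matmul(inverse(A), ASB)       # pass 3: Alice -> Bob
assert matmul(SB, inverse(B)) == S  # Bob recovers the message
```

Note that an eavesdropper sees AS, ASB, and SB; the open question in the thread is whether these three products leak S, which this sketch does not settle.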
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Why primes?
Date: Mon, 18 Dec 2000 15:19:02 +0100
Jorgen Hedlund wrote:
>
> [The irritating little gnome is
> back with new silly questions. :)]
>
> I've understood it that in public/private
> key ciphering one uses quite big prime numbers.
>
> Well, why use primes? Why can't any two quite
> big numbers (QBN [tm]) be used?
>
> What is the relationship between the public
> key and the private? Is it that the public
> key is the product of the two QBNs? If so,
> what does the private key consist of?
> (the actual two QBNs?)
>
> If I could just understand this I might be
> able to use this in my STTSC algorithm
> (posted in this NG many days ago)..
For a composite number of a given size, the work to
find the factors is in general largest, when it is
the product of two primes of the same magnitude. The
larger the size of such numbers to be factored, the
larger the work. Hence one uses the product of two large
primes. For public key encryption, you should consult
the standard textbooks whose names are again and again
repeated in this group, e.g. Schneier's Applied
Cryptography and Menezes' Handbook of Applied Cryptography.
BTW, for a number of questions it is advisable to first
read the FAQ of the group.
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Why primes?
Date: Mon, 18 Dec 2000 15:22:55 +0100
Mok-Kong Shen wrote:
>
> For a composite number of a given size, the work to
> find the factors is in general largest, when it is
> the product of two primes of the same magnitude. The
[snip]
Sorry. Typo: Read 'same order of magnitude'.
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Nonlinearity question
Date: Mon, 18 Dec 2000 15:32:28 +0100
Benjamin Goldberg wrote:
>
[snip]
> Right, which is why I suggested it. As Mok wrote, the only problem is
> a practical one, though certainly not in the way he implied. Only an
> utter madman would try to implement a 128x128 sbox as a lookup table.
> Calculating the inverse modulo the polynomial would be done as a
> function, *not* a table. The practical problem is that of speed, not
> space. Inversion doesn't use that much memory, but it's not very fast,
> and it's possible that there might be timing attacks on it.
The issue is probably not clear-cut without actual
investigation. While the speed of doing GF computations
for a 128-bit S-box is lower, one might be able to reduce
the number of rounds needed, I guess. There is a trade-off.
But, unless there is a theoretical proof, the quality of
the S-boxes has to be verified through computation in
any case.
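To illustrate computing the inverse as a function rather than a table, here is a sketch in GF(2^8), the field used by AES (modulus x^8+x^4+x^3+x+1 = 0x11B); the same square-and-multiply idea scales to GF(2^128), where a table is impossible but the arithmetic is merely slower. The function names are my own.

```python
# Multiplicative inverse in GF(2^8) computed on the fly, no lookup table.
def gf_mul(a: int, b: int, mod: int = 0x11B) -> int:
    """Carry-less 'Russian peasant' multiplication with reduction."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:          # degree-8 overflow: reduce by the modulus
            a ^= mod
        b >>= 1
    return r

def gf_inv(a: int) -> int:
    """a^(2^8 - 2) = a^254 = a^-1 for a != 0, by Lagrange's theorem."""
    r, base, e = 1, a, 254
    while e:                   # square-and-multiply exponentiation
        if e & 1:
            r = gf_mul(r, base)
        base = gf_mul(base, base)
        e >>= 1
    return r

assert gf_mul(0x53, gf_inv(0x53)) == 1  # inverse times itself is 1
```

The timing concern raised above is visible here: the number of `gf_mul` calls is fixed, but a naive implementation's branch on `b & 1` is data-dependent, which is exactly the sort of thing timing attacks exploit.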
M. K. Shen
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: does CA need the proof of acceptance of key binding ?
Date: Mon, 18 Dec 2000 14:34:50 GMT
In article <[EMAIL PROTECTED]>,
Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] wrote:
> > sorry for the confusion: I assume the 'proof of acceptance for the
> > key binding' is the applicant's analog signature or analog
> > fingerprint on an agreement that specifies the binding of the
> > applicant's unique identity to a public key.
>
> I guess the issue is analogous to one's obtaining unique
> account numbers at one's bank, isn't it? Otherwise what
> significance would the function of CA have?
If you mean that the CA needs the 'proof of acceptance for the
key binding', does that mean every digital certificate must be applied
for offline, with the applicant turning up in person to be
authenticated?
However, Verisign offers a #Digital ID for Microsoft Outlook '98
without getting any 'proof of acceptance for the key binding' beyond
sending some information to the applicant's email. But we know that
email is insecure without encryption. An attacker may impersonate
someone by creating an email address claimed to be someone else's,
then applying for a digital ID for that email from Verisign.
It seems that the authentication role falls on the email registration
centre, which normally offers online email registration, again without
'proof of acceptance' of that email.
Does that mean a Verisign digital ID could be abused, or the binding
of the digital ID from Verisign could be denied?
#http://www.verisign.com/securemail/outlook98/outlook.html#1
Vincent
[EMAIL PROTECTED]
Sent via Deja.com
http://www.deja.com/
------------------------------
From: Bo Lin <[EMAIL PROTECTED]>
Subject: Re: Mathematical concepts
Date: Mon, 18 Dec 2000 15:16:30 +0000
Try the following book:
E. Kranakis, Primality and Cryptography, John Wiley & Sons, 1986
Bo Lin
Motorola Limited
Joris Vankerschaver wrote:
>
> Hello
>
> I'm terribly sorry if I'm asking a question that has been asked some
> times before, but I did quite some searching and I'm getting desperate.
>
> I'm reading "Applied Cryptography" now. It's a great book, but can
> anyone recommend a good maths book on cryptography? I've studied finite
> fields, rings and such things in uni so it doesn't really have to be that
> basic (I suppose).
>
> So what I'm looking for is a book that approaches cryptography from a
> more mathematical point of view, preferably with some of those
> prime-generating methods in detail....
>
> With kind regards,
> Joris
------------------------------
From: Francois Grieu <[EMAIL PROTECTED]>
Subject: Re: Why primes?
Date: Mon, 18 Dec 2000 17:06:28 +0100
[EMAIL PROTECTED] wrote:
> Well, why use primes? Why can't any two quite
> big numbers (QBN [tm]) be used?
In the RSA and Rabin-Williams cryptosystems, the difficulty of
deciphering, or of forging a digital signature, is no higher than
that of factoring the product of the two QBN used.
With random primes of about equal size, the factoring is
believed hard, and practice confirms this. With random QBN,
factoring the product is comparably easy, because most often
each of the two QBN has small factors which can be found
in a snap, reducing the difficulty of the problem quite a lot.
There are other, less fundamental reasons; in particular, the QBN
have to be factored once anyway in order to use them at all; and there
may be a somewhat less negligible number of easily deciphered messages
with QBN than with primes.
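A rough numerical sketch of the small-factors point (the trial-division bound and the 128-bit sizes are arbitrary illustrative choices; real moduli are far larger):

```python
# Products of two random odd numbers almost always have small prime
# factors that trial division strips off immediately; products of two
# large primes yield nothing.
import random

def small_factors(n, bound=10_000):
    """Return (prime factors below bound, remaining cofactor)."""
    found, d = [], 2
    while d < bound and n > 1:
        while n % d == 0:
            found.append(d)
            n //= d
        d += 1
    return found, n

random.seed(0)
hits = 0
for _ in range(10):
    a = random.getrandbits(128) | 1   # two random odd 128-bit "QBN"
    b = random.getrandbits(128) | 1
    if small_factors(a * b)[0]:
        hits += 1
# Nearly every trial finds small factors: a random odd number is
# divisible by 3 with probability 1/3, by 5 with probability 1/5, etc.
assert hits >= 5

# By contrast, a product of two large primes resists trial division.
p, q = 2**127 - 1, 2**89 - 1          # two Mersenne primes
assert small_factors(p * q)[0] == []
```

Stripping the small factors leaves a much smaller cofactor, which is why factoring a random-QBN product is so much easier than factoring a two-prime modulus of the same size.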
Compaq has recently applied for a patent on an RSA variant using
product of more than 2 primes, carefully selected.
The technique is valid, though the patent is not, according to
most reputable experts.
Francois Grieu
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: does CA need the proof of acceptance of key binding ?
Date: Mon, 18 Dec 2000 17:14:31 +0100
[EMAIL PROTECTED] wrote:
>
> Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> > I guess the issue is analogous to one's obtaining unique
> > account numbers at one's bank, isn't it? Otherwise what
> > significance would the function of CA have?
>
> If you mean that the CA needs the 'proof of acceptance for the
> key binding', does that mean every digital certificate must be applied
> for offline, with the applicant turning up in person to be
> authenticated?
>
> However, Verisign offers a #Digital ID for Microsoft Outlook '98
> without getting any 'proof of acceptance for the key binding' beyond
> sending some information to the applicant's email. But we know that
> email is insecure without encryption. An attacker may impersonate
> someone by creating an email address claimed to be someone else's,
> then applying for a digital ID for that email from Verisign.
>
> It seems that the authentication role falls on the email registration
> centre, which normally offers online email registration, again without
> 'proof of acceptance' of that email.
>
> Does that mean a Verisign digital ID could be abused, or the binding
> of the digital ID from Verisign could be denied?
>
> #http://www.verisign.com/securemail/outlook98/outlook.html#1
My poor knowledge does not allow me to answer your question
in concrete terms. But I believe that it is indispensable
for the CA to be able to firmly bind a key to the physical
person once, and with that to later verify communications from
him; for that there is apparently no other secure means
than the conventional ones of proving identity. Of course one
CA could rely on another CA, but at least one CA must do the
work. One problem that is in my view not resolved (resolvable)
is that the CA must be trusted, and that is a subjective matter.
M. K. Shen
------------------------------
From: [EMAIL PROTECTED] (Paul Schlyter)
Subject: Re: Visual Basic Source Code
Date: 18 Dec 2000 17:46:57 +0100
In article <[EMAIL PROTECTED]>,
Simon Best <[EMAIL PROTECTED]> wrote:
> Paul Schlyter wrote:
>
>> BASIC as a language was really a stripped-down FORTRAN, with line
>> numbers added for editing your program. However the original
>> Dartmouth BASIC wasn't merely a programming language, but also a
>> programming environment: it introduced the time-sharing terminal
>> which was a great improvement: the student of programming could now
>> get feedback in just a few seconds.
>
> BASIC 'introduced' the time sharing terminal? I take it you mean BASIC
> was the first language to take advantage of time sharing terminals by
> using an interactive programming environment?
Correct
> (It would be rather stupid of me to think you meant that bits of
> hardware were parts of a programming language!)
Are you in nitpick mode right now?
>> Before BASIC he would have to punch his card deck, hand it in for
>> processing, wait a few hours, and then read his printout, which many
>> times was very short because the compiler complained about some
>> trivial syntax error. Being able to correct these errors after a few
>> seconds instead of a few hours was a great improvement, don't you
>> think so?
>
> But that's not really something that BASIC brought about, just something
> that BASIC took advantage of.
This was the way BASIC was presented.
> It could have been some other development environment (we're talking
> about BASIC as a development environment here, it seems), but (as I don't
> know that it wasn't) I'll take it that BASIC was the first. However,
> that doesn't really have anything much at all to do with whether or
> not BASIC is a good language.
If BASIC had been presented as just a new language, with the same
environment as the earlier languages (i.e. a compiled language only,
where most users used punched cards to store their programs) it would
quickly have been forgotten. The goal of BASIC was to simplify the
entire process of creating and running simpler programs. This
required both a simpler programming language (BASIC instead of
FORTRAN or COBOL) and a simpler environment (the time-sharing
terminal instead of batch processing using punched cards). And one
must conclude that this concept has been extremely successful: the
batch processing environment is now almost completely gone, everyone
is using their personal terminals. And many (most?) programmers are
still, 36 years later, using some variety of BASIC.
>>> _if_ the teacher is a good programmer and a good programming teacher.
>>
>> ...and in particular if the environment provided feedback within
>> seconds instead of within hours.
>>
>> Also, the programs written by students in the 1960'ies were by today's
>> standard very simple programs. And for such simple programs, the
>> bad structure of BASIC didn't matter that much.
>
> But, being students, they were learning stuff to be used beyond their
> studies. They may not have needed well structured programs, local
> variable name scoping, etc, to do their assignments, but what about
> after graduation?
You must remember that Computer Science hardly existed back then but
was considered a branch of either electronic engineering (hardware
and system software) or math/science (application software). Few
people outside the natural sciences ever used computers at all.
No-one could foresee the explosive growth of Computer Science back
then, thus "what about after graduation" was beyond people's
imagination.
> Alas, the original Unix kernel was such a hideous mess of code
> that the programmers (who were once students) didn't even dare
> try debugging it!
Probably true -- that was the state of the art at that time. At least
they had the sources to the original Unix kernel -- I've heard that
for IBM's OS/360, only the binaries remained for many years while
that OS was still in use! Compared to that, UNIX was a good example!
>> If one would apply that principle to all teaching and not merely
>> teaching of programming, then one should show bad examples rather
>> than good examples to the pupils.
>
> No, that's not what I meant. Interpreting what I wrote in that
> 'either-or' way leads it to be obviously ridiculous (as you make so
> plainly clear).
>
>> In literature they should read badly-written stories;
>
> It wouldn't be bad for them to occasionally compare badly written
> stories with good ones.
It's very easy to get at badly written stories. The pupils will
do that all by themselves anyway -- no-one needs to help them.
> 'perpetuum mobile machines'? I think you mean 'perpetual motion
> machines'.
They are the same thing; "perpetuum mobile" is Latin, though.
>> However, BASIC ought to have been put peacefully to rest in
>> 1975 or so, when Pascal became readily available.
>
> No, I disagree. That's letting BASIC off too lightly. Parade BASIC
> around as an example of a bad language that discourages good
> programming. Make a mockery of it in lessons. Throw rotten vegetables
> at it. Expose it and shame it before the peoples of all nations!
And why only BASIC? Why not also parade FORTRAN and COBOL the same
way? Of course one should use early dialects of these languages,
e.g. FORTRAN I from 1957, which lacked facilities like subroutines
and provided arcane statements like IF ACCUMULATOR OVERFLOW ....
:-))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))
--
================================================================
Paul Schlyter, Swedish Amateur Astronomer's Society (SAAF)
Grev Turegatan 40, S-114 38 Stockholm, SWEDEN
e-mail: pausch at saaf dot se or paul.schlyter at ausys dot se
WWW: http://hotel04.ausys.se/pausch http://welcome.to/pausch
------------------------------
From: "Jason Bock" <[EMAIL PROTECTED]>
Subject: Re: Visual Basic Source Code
Date: Sun, 17 Dec 2000 23:40:29 -0600
Paul Schlyter <[EMAIL PROTECTED]> wrote in message
news:91jkva$9v7$[EMAIL PROTECTED]...
> In article <3a3d0b6b$0$90275$[EMAIL PROTECTED]>,
> Jason Bock <[EMAIL PROTECTED]> wrote:
> > There are situations where it's needed, and other times it's not.
> > That's the way I've seen it.
>
> There are situations where you THINK it's not needed -- but you don't
> really KNOW after some years have passed. I've been porting some code
> myself which obviously wasn't written with porting in mind.
Guess we have different career insights. But I did KNOW what the timelines
were for the projects I was on.
> > There are more pressures in software development than programming
> > in a "real programming language."
>
> Indeed very true -- the pressure to get the software out quickly and
> as cheaply as possible often makes developers overlook these issues.
> As a result the overall cost over the entire lifetime of the software
> may go up -- but that doesn't bother the initial project which got
> the first version of the software out the door, since that phase
> probably got somewhat cheaper.
And if that's what the project lead wants (i.e. the one with the money) then
that is what happens. I don't necessarily advocate this, but that is the
bottom line. You can either choose not to do what they want and not get the
project, or you can work with them. I personally opt for the latter. I
have my views on what software development should really be like, but I am
usually not in the position to fully steer a project. So if quicker is the
strongest requirement, I will use a language that achieves that goal.
> > Or, the project is a prototype,
>
> Even if the project is a prototype, its code may be useful later in
> the real implementation -- that is, if it is of good enough quality
> to be reused at all... If it never makes it into a real
> implementation, it becomes my case 1. above...
Although I try to code prototypes as best as I can, I always add the
qualifier that a prototype is just what it is. It may lead to different
ideas and the original one may be canned altogether. That has nothing to do
with the effort of the code underneath. I was just on a project where I
needed to prototype something on a Jornada unit. I bet none of that code
will ever see the light of day for that project, but I learned a lot and I
will use that code base sometime in the future.
> > or the project doesn't have the lifetime of years on end,
>
> That was my case 2. above....
>
> > etc., etc. Some projects don't live 20 years - sorry if you haven't
> > seen this.
>
> I've seen it all too much. I've seen projects which have lived less
> than 1 year -- nevertheless even the code from such projects could
> be reused in other, more long-lived, projects, if it's of good enough
> quality.
It's not just the quality that matters, though. You have to program with
reuse in mind. Again, if the underlying goal is time-to-market, reuse
usually falls by the wayside. But this is getting a bit away from the
argument of "is VB a 'real-programming language'?" (thank god ;) ).
> > You can think that. Change happens regardless of what I want to do
> > or not. I don't pretend to have such control over the industry as a
> > whole.
>
> Neither do I -- thus we both agree changes will happen. Now, since
> one knows changes will happen, why not try to plan for it a little?
> Of course we won't know exactly what the changes will be,
> nevertheless there are some things one can do. For instance try to
> write portable code as much as possible, since portable code is more
> likely to survive the next paradigm shift than non-portable code.
In that case, one may argue you should write it all in Java. Or some other
language that purports to be cross-platform. Problem is, there's SO much code
written in other languages in other systems that you end up with a hard
choice. Do you decide to rewrite it all in the current warm-n-fuzzy
language of the day, or do you try and interop? That's where I think we'll
eventually end up. It's simply too difficult to get everyone to speak the
same language and get everyone to use the same OS. You're always going to
need to interop.
Of course I recommend writing portable code when I can. Again, though, the
requirements of a project may not be geared towards such an endeavor.
> I think the only difference between us here is that you're
> advocating short-term planning while I am advocating somewhat
> more long-term planning, which initially may be somewhat more
> expensive but which will pay itself back in the long run.
Depends on the requirements ;).
> > Other industries are different. People buying a house won't tolerate
> > having their house broken down in three years. The computing industry
> > simply hasn't fought back enough to force stability. But then again,
> > I don't see that happening for a long time. The industry allows you
> > to change things at a very rapid rate, much faster than building a
> > house.
> >
> > Of course, building software does have a lot of analogies to building
> > a house. But I just don't see things slowing down for a while. I can
> > either fight to stay on one OS using one language for the rest of my
> > life,
>
> If you do, you'll have to do it as a hobby towards the end of your
> life, when the commercial lifetime of that OS has ended.
Well, I wouldn't do this. As I stated, things change too much. I really
don't program much in VB anymore, simply because I don't see a lot of future
with it in MS's .NET world (I personally use C#).
> > or I can decide to watch where things are going and make career changes
> > based on such observations.
>
> "If house built hoses the way programmers build software systems,
> the next woodpecker which came along would destroy civilisation...."
If house build hoses? Eh??
I would add this: if (good) developers and programmers were given the time
to implement systems with reuse in mind, solid test plans, etc., then we
would be in a better place. But if management is in power and demands the
new system be done in 2 weeks, there's not much wiggle room.
But I don't really see what this all has to do with VB being or not being a
"real programming language." I think it is, just as I think Eiffel, Perl,
C#, Java, COBOL, Ruby, C, etc., etc., etc., are all real programming
languages. They're all just at a different level of abstraction. VB was
designed from the start to be an easy way to create Windows programs. It
was never designed to allow one to create an OS with the tool. But calling
it a "toy," or not a "real programming language" shows that you haven't used
the tool enough. I've heard the outcry of "VB is a toy language" so much I
just laugh when I hear it. It can make some projects extremely easy and
cost-effective. It can also make others a living nightmare. Same with
other languages in other situations.
Jason
P.S. Personally, if I had the choice I'd have everyone program in Eiffel or
some derivative thereof. It's one of the cleanest languages I've ever seen.
But it's used so infrequently that I haven't made it my principal language
(I need to make money ;) ).
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list by posting to sci.crypt.
End of Cryptography-Digest Digest
******************************