Cryptography-Digest Digest #357, Volume #13      Mon, 18 Dec 00 00:13:01 EST

Contents:
  Re: Elliptical Curve Math Question (Scott Contini)
  Re: Protocol for computer go (Benjamin Goldberg)
  Re: Q: Result of an old thread? (Benjamin Goldberg)
  Re: hash function for public key digital signature : one way? (Benjamin Goldberg)
  Re: Use of multiplexing (Benjamin Goldberg)
  SHA 256 (George)
  Re: Visual Basic Source Code (Simon Best)
  Re: Visual Basic Source Code (Simon Best)
  Re: important programming languages (Simon Best)
  Re: SHA 256 (Simon Best)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Scott Contini)
Subject: Re: Elliptical Curve Math Question
Date: 18 Dec 2000 02:18:57 GMT

In article <JYc%5.43097$[EMAIL PROTECTED]>,
Mike Vaughn  <"vaughnmt"@@@home.com> wrote:
>I ran the first equation through an equation solver that I found on the net, and
>thought that it was a simplified version of the former. Obviously it is not, so
>I will forgo the latter and stay with the original y^2 = x^3 + A x + B. I am not
>using this for cryptology so I am not concerned with points and finite fields. I
>have my own need of it for my own little project. I would like however, to be
>able to solve for X once A, B and Y are known. I would greatly appreciate it if
>someone could help me out with this. My math skills are nowhere near as good as
>yours. This is why I posted here, because I respect the knowledge that shows
>itself here.
>
>Again,
>Many Thanks
>
>Mike Vaughn wrote:
>

Here is one way to solve it:

        25^2 = x^3 + 10*x + 20

        implies

        x^3 + 10*x - 605 = 0

So you need to find the roots of this polynomial.  In all cases there
will be either 1 or 3 real roots.  In this particular case, there is
only 1 real root.  There are formulas for solving degree 3 equations in
radicals if you want an exact solution.  Otherwise you can use some
computer algebra package (such as Magma) to get approximate solutions.
With Magma I computed the real root to be approx:
        x = 8.0638704603350585196836269977065645145

Here are the complex roots:

-4.031935230167529259841813498853282 + 7.666127125269855675259466070485778*i
-4.031935230167529259841813498853282 - 7.666127125269855675259466070485778*i

If you want to be able to find the roots yourself without using
a computer algebra package, you can do:

        1) Look up Cardano's formula to find out how to solve cubics
        in radicals.  For example, it is explained in Michael Artin's
        "Algebra" book.  Or,
        2) Use numerical methods to approximate a solution.  For example,
        a Newton iteration.  Any elementary numerical analysis book will
        explain how to do this.
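As a sketch of option 2, a Newton iteration for this particular cubic could look like the following Python fragment (the function name, starting point, and tolerances are arbitrary choices, not part of any package):

```python
def newton_cubic_root(a, b, y, x0=1.0, tol=1e-12, max_iter=200):
    """Solve y^2 = x^3 + a*x + b for x, i.e. find a real root of
    f(x) = x^3 + a*x + (b - y^2), by Newton's method."""
    c = b - y * y
    x = x0
    for _ in range(max_iter):
        f = x ** 3 + a * x + c        # f(x)
        fp = 3 * x ** 2 + a           # f'(x)
        if fp == 0:
            x += tol                  # step off a critical point
            continue
        x_next = x - f / fp
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# The example from above: 25^2 = x^3 + 10*x + 20.
x = newton_cubic_root(10, 20, 25)
print(x)  # ~8.0638704603...
```

Since f'(x) = 3x^2 + 10 is always positive here, f is strictly increasing and the iteration settles on the single real root.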

Scott



------------------------------

From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Protocol for computer go
Date: Mon, 18 Dec 2000 02:20:23 GMT

Francois Grieu wrote:
> 
> Benjamin Goldberg <[EMAIL PROTECTED]> wrote (ecrivait) :
> 
> > Instead of having the opponent's move generate an interrupt, require
> > that the program poll some particular function to see if the
> > opponent has moved.  In creating a log of the game, for the replay
> > version, we can say: poll_opponent() returns false exactly N times,
> > before the opponent moved.
> 
> > Since we don't ever look at our own computer's clock, only the
> > messages from the referee, we always do a deterministic number of
> > computations, which can thus be repeated by the replay computer.
> 
> There is a problem with this approach. As a proof of concept,
> assume the play and replay program are genuinely deterministic
> and running the same code on identical  brand CPU, except for
> clock speed, and a human changes the clock speed of the play
> machine while running. The program sends queries every say
> 1 million instructions. If the adversary has played after say
> 10 queries, the program plays defensively, else it plays
> offensively. A human looks at the game, and when it thinks the
> program has a good position, slows it down. It will start
> to defend (sensing the adversary is fast). The replay program
> will automatically be kept in sync by the log.

This is similar to the idea of using packet timing as a side channel.
With that idea, the program uses the lsb of the arrival time of the
packet to partially determine its strategy.  What you're suggesting is to
artificially make packets seem to speed up.  I don't think any protocol
*can* block an attack of this type.

> Besides, there is no way you'll prevent the play program from
> beeing non-deterministic and reading wall clock time.

Sure you can.  If the program is deterministic, it will play identically
on the replay machine, and if it is nondeterministic, it will play
differently.

poll_opponent, get_opponents_move, and send_move_to_opponent are in a
referee-supplied dll.  On the game machine, these perform network io,
and write protocol-specified information to a log file -- specifically, how
many calls to poll_opponent were made before get_opponents_move and what
move was made.  On the replay machine, poll_opponent returns false N
times, where N was read from the log, then returns the opponent's move
from the log, then compares the resultant move with the one stored in
the log.

There is no way for a human to send side band information to both the
game machine and the replay machine.
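A toy sketch of the replay side of that scheme (the class and method names here are illustrative inventions, not a real referee dll): poll_opponent returns false exactly N times as read from the log, then the logged move is handed over, and the deterministic engine's reply is checked against the log.

```python
class ReplayReferee:
    """Replays one game from log entries of the form
    (n_polls, opponents_move, our_logged_move)."""

    def __init__(self, log_entries):
        self.log = list(log_entries)
        self.polls_left = None

    def poll_opponent(self):
        # Return False exactly n_polls times, as recorded on the game
        # machine, then True.
        if self.polls_left is None:
            self.polls_left = self.log[0][0]
        if self.polls_left > 0:
            self.polls_left -= 1
            return False
        return True

    def get_opponents_move(self):
        return self.log[0][1]

    def check_move(self, move):
        # Compare the engine's reply with the move stored in the log.
        _, _, expected = self.log.pop(0)
        self.polls_left = None
        return move == expected

# Replaying one exchange recorded on the game machine:
referee = ReplayReferee([(3, "D4", "Q16")])
polls = 0
while not referee.poll_opponent():
    polls += 1          # the engine computes here, deterministically
assert polls == 3       # same N as on the game machine
opp = referee.get_opponents_move()
assert referee.check_move("Q16")   # engine reproduced the logged reply
```

A nondeterministic engine (one that read the wall clock, say) would fail the check_move comparison, which is the whole point of the replay.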

> In real
> life, the above cheating scenario would be implemented using
> a fixed-speed CPU, and changing the poll rate in software, say
> according to the caps-lock key.

I assume you're talking about speeding up the apparent packet rate
cheat, not the look-at-wall-clock cheat.  As I said, the
look-at-wall-clock cheat results in a game that can't be reproduced by
the replay computer.  And the packet rate cheat works on nearly ALL
protocols, so far as I can tell.

-- 
There are three methods for writing code in which no bug can be found:
1) Make the code so straightforward that there are obviously no bugs.
2) Make the code so complicated that there are no obvious bugs.
3) Insist that any apparent bugs were really intentional features.

------------------------------

From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Q: Result of an old thread?
Date: Mon, 18 Dec 2000 02:36:09 GMT

Walter Hofmann wrote:
> 
> On Sun, 17 Dec 2000 06:15:16 GMT, Benjamin Goldberg
> <[EMAIL PROTECTED]> wrote:
> >
> >I'm confused.  Please tell me how a limit is defined when a discrete
> >valued finite field is used.
> 
> I wasn't speaking about finite fields.
> 
> Surely the only reasonable (ie. Hausdorff) topology on a finite field
> is the discrete one, so a sequence converges iff it is eventually
> constant.

What kind of weirdo bases a network protocol on floating point values?

I think that it's been assumed by nearly all that the protocol is
working with integers in either GF(p) or GF(2^m).

-- 
There are three methods for writing code in which no bug can be found:
1) Make the code so straightforward that there are obviously no bugs.
2) Make the code so complicated that there are no obvious bugs.
3) Insist that any apparent bugs were really intentional features.

------------------------------

From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: hash function for public key digital signature : one way?
Date: Mon, 18 Dec 2000 02:53:31 GMT

Simon Johnson wrote:
> 
> In article <91igpd$nlb$[EMAIL PROTECTED]>,
>   [EMAIL PROTECTED] wrote:
> > hash function for the public key digital signature must be one way ?
> >
> > I think a collision free hash function is adequate for the
> > authentication in public key digital signature that includes the
> > signed message digest and plaintext. No motivation to generate the
> > plaintext from the "unsigned" digital signature or message digest
> > since the plaintext is publicly available.

Although there doesn't seem to be a need for preimage resistance for a
public key signature, I don't know of any hash that is collision
resistant but not preimage resistant.

> > May I conclude that one-way property of the hash function in public
> > key digital signature is only necessary for the hybrid protocol
> > because it wants to preserve the secrecy of the plaintext?

If you hash the plaintext for the signature, then encrypt the plaintext,
and then append the signature, security is compromised if the hash is
not preimage resistant.

> No.  If the hash is reversible, then you can create another document
> easily that hashes to the same value, and therefore have a situation
> where I can generate any document I want and attach your signature to
> it.  Collision resistance is different: if I _randomly_ pick two
> plaintexts the chance of them taking the same hash value should be low.

Sorry, but you're wrong about what collision resistance is.  For a
collision-resistant hash, it is infeasible to construct two messages which
collide.  Picking messages at random is not the same.
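A toy demonstration of the distinction (my own construction: SHA-1 truncated to 24 bits, deliberately weak so a collision is actually findable): a birthday search *constructs* a colliding pair in a few thousand tries, even though any two fixed messages picked at random almost never collide.

```python
import hashlib

def toy_hash(msg, bits=24):
    # SHA-1 truncated to `bits` bits -- weak on purpose, so a collision
    # can be found by brute search in roughly 2^(bits/2) tries.
    digest = hashlib.sha1(msg).digest()
    return int.from_bytes(digest[:4], "big") >> (32 - bits)

def birthday_collision(bits=24):
    seen = {}
    i = 0
    while True:
        msg = ("msg-%d" % i).encode()
        h = toy_hash(msg, bits)
        if h in seen:
            return seen[h], msg    # two distinct messages, same hash
        seen[h] = msg
        i += 1

m1, m2 = birthday_collision()
assert m1 != m2 and toy_hash(m1) == toy_hash(m2)
```

For a full-width hash the same search is hopeless, which is exactly what collision resistance asserts.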

-- 
There are three methods for writing code in which no bug can be found:
1) Make the code so straightforward that there are obviously no bugs.
2) Make the code so complicated that there are no obvious bugs.
3) Insist that any apparent bugs were really intentional features.

------------------------------

From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Use of multiplexing
Date: Mon, 18 Dec 2000 03:05:32 GMT

Mok-Kong Shen wrote:
> 
> Multiplexing is a rather old topic in CS. In the crypto
> context, it is also well-known (see spread spectrum, CDMA).
> I like to consider the issue presently, however, purely
> at the application level, i.e. whether a normal user, who
> does encryptions as usual, can profitably apply multiplexing
> techniques of his own.
> 
> In plenty of situations one does not send occasionally
> single messages. Between two distant offices of a
> commercial firm, for example, a substantial amount of
> communications, usually of different security levels,
> occur. It is thus feasible to multiplex the messages,
> i.e. intermingling them in some secret (either fixed or
> dynamically varying) way that is unknown to the opponent,
> thus rendering his analysis job difficult. There will
> certainly be some appreciable cost/work involved for
> the partners doing multiplexing, but the resulting
> difficulty caused to the opponent could under circumstances
> justify that, I believe. I wonder thus whether such
> multiplexing has not already been successfully employed in
> practice.
> 
> We note that multiplexing can be done at two different
> levels, (intermingling of plaintexts and of ciphertexts),
> and also at different granularity (word/byte/bit).
> 
> M. K. Shen
> -----------------------------
> http://home.t-online.de/home/mok-kong.shen

Indeed.  One way that I've thought of for multiplexing N messages at the
bit level is to first create N arithmetic coder contexts, all outputting
to a single bitstream.  Cycle through the N messages, taking one byte
from each, and output through the corresponding encoder.  The number of
bits that any one encoder will output, given some input byte, will vary
quite a bit.  Because of this, it should be difficult for an attacker to
separate messages without knowing which bits were output by which
encoder, and the encoder state.

Of course, there is a bit of a problem -- if underflow occurs, the
encoder won't output anything for that byte.  To compensate for
that, if some encoder i is in a state of underflow, we keep writing
bytes from message i until we are out of the state of underflow.  If we
want our N messages to be sent at approximately the same rate, then for
the next x cycles to this encoder, we *don't* write any bytes or bits.

If these messages have each undergone all but the last step of BWT
compression before being multiplexed, the compression is even better!

Also, unlike most encryption schemes proposed for using compression,
this one has an important advantage: the total size of the combined
message (output from the N arithmetic encoders) is exactly equal to the
total size of each one compressed separately.
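A much-simplified sketch of just the interleaving step (plain bytes standing in for the arithmetic-coder outputs, and equal-length messages assumed, so none of the underflow handling above is needed):

```python
def multiplex(messages):
    """Round-robin one byte from each of N equal-length messages into a
    single stream (a stand-in for the N arithmetic-coder outputs)."""
    assert len(set(len(m) for m in messages)) == 1, "equal lengths assumed"
    # zip(*messages) yields one tuple of bytes per position, in order.
    return bytes(b for chunk in zip(*messages) for b in chunk)

def demultiplex(stream, n):
    """Inverse operation, for a party who knows N and the granularity."""
    return [stream[i::n] for i in range(n)]

combined = multiplex([b"attack", b"atdawn", b"secret"])
assert demultiplex(combined, 3) == [b"attack", b"atdawn", b"secret"]
```

Note that a fixed byte granularity like this is trivially separable once N is guessed; it is the variable-length, state-dependent encoder output in the real scheme that makes the split hard for an attacker.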

-- 
There are three methods for writing code in which no bug can be found:
1) Make the code so straightforward that there are obviously no bugs.
2) Make the code so complicated that there are no obvious bugs.
3) Insist that any apparent bugs were really intentional features.

------------------------------

From: George <[EMAIL PROTECTED]>
Subject: SHA 256
Reply-To: [EMAIL PROTECTED]
Date: Sun, 17 Dec 2000 22:14:05 -0600

I got the correct SHA 256 hash value for the "abc" string, but when I try the 
56 byte long string, I have incorrect values for A and E on the first block 
when i is 15.  If I were to post my code here, could someone please try to 
help me figure out the problem?

-George
[EMAIL PROTECTED]


------------------------------

From: Simon Best <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: Visual Basic Source Code
Date: Mon, 18 Dec 2000 04:29:31 +0000

Paul Schlyter wrote:
> 
> In article <[EMAIL PROTECTED]>,
> Simon Best  <[EMAIL PROTECTED]> wrote:
[...]
> > Wasn't BASIC originally devised as a language to be used for teaching
> > programming?
> 
> Quite correct.
> 
> > It's a bad programming language, and hence a good tool for teaching
> > programming,
> 
> Well, you'll have to think back to 1964 to get the proper perspective.
> The generally available programming languages back then were FORTRAN,
> COBOL and Algol -- the latter was the most elegant, however there were
> hardly any Algol compilers available.
> 
> BASIC as a language was really a stripped-down FORTRAN, with line
> numbers added for editing your program.  However the original
> Dartmouth BASIC wasn't merely a programming language, but also a
> programming environment: it introduced the time-sharing terminal
> which was a great improvement: the student of programming could now
> get feedback in just a few seconds.

BASIC 'introduced' the time sharing terminal?  I take it you mean BASIC
was the first language to take advantage of time sharing terminals by
using an interactive programming environment?  (It would be rather
stupid of me to think you meant that bits of hardware were parts of a
programming language!)

> Before BASIC he would have to punch his card deck, hand it in for processing,
> wait a few hours, and then read his printout which many times was very short
> because the compiler complained over some trivial syntax error.  Being able
> to correct these errors after a few seconds instead of a few hours was
> a great improvement, don't you think so?

But that's not really something that BASIC brought about, just something
that BASIC took advantage of.  It could have been some other development
environment (we're talking about BASIC as a development environment
here, it seems), but (as I don't know that it wasn't) I'll take it that
BASIC was the first.  However, that doesn't really have anything much at
all to do with whether or not BASIC is a good language.

> > _if_ the teacher is a good programmer and a good programming teacher.
> 
> ...and in particular if the environment provided feedback within
> seconds instead of within hours.
> 
> Also, the programs written by students in the 1960'ies were by today's
> standard very simple programs.  And for such simple programs, the
> bad structure of BASIC didn't matter that much.

But, being students, they were learning stuff to be used beyond their
studies.  They may not have needed well structured programs, local
variable name scoping, etc, to do their assignments, but what about
after graduation?  Alas, the original Unix kernel was such a hideous
mess of code that the programmers (who were once students) didn't even
dare try debugging it!

> > Because BASIC makes it so easy to do things very badly, and get in a
> > hideous mess, learning good programming with it is something that has to
> > be consciously and explicitly done.  If that's how the teacher makes use
> > of BASIC in lessons, then the pupils benefit.  Pupils can be guided to
> > learn various valuable lessons on good and bad programming (they can be
> > led to find out the hard way very easily with BASIC, if necessary).
> > With some other language that imposes various constraints 'for the
> > programmer's own good' there isn't so much opportunity for that.
> 
> If one would apply that principle to all teaching and not merely
> teaching of programming, then one should show bad examples rather
> than good examples to the pupils.

No, that's not what I meant.  Interpreting what I wrote in that
'either-or' way leads it to be obviously ridiculous (as you make so
plainly clear).

> In literature they should read badly-written stories;

It wouldn't be bad for them to occasionally compare badly written
stories with good ones.  It could help them to:

1.  Appreciate well written stories.
2.  Learn the differences between well written and badly written
stories.
3.  Learn to avoid some of the mistakes that lead to badly written
stories.

> in math they should be presented to mostly erroneous computations, and those
> computations which happened to be correct should at least have been done in a
> clumsy way;

Which is of course ridiculous, which is supposedly your point.  Did you
really think that that was the kind of thing I meant?  Or is this just
more of the same silly willy waving I've seen so often in this group?

If students are shielded from analysis of bad ideas, common mistakes,
etc, will that really benefit them?  Should teachers not take the
opportunities that will arise anyway to help the students learn from
mistakes they make in assignments?  Students will do things badly
sometimes, so maybe they should instead do no assignments to avoid that?
Would it not be enlightening for teachers to show students why their
mistakes are mistakes?  How would a teacher teach students about mathematical
disproof without making use of things that are mathematically wrong?

I never, ever said that students should not be taught what good
programming is (what I wrote wouldn't make sense otherwise, as you've
made so obviously obvious).  I merely suggested that a 'bad' language
would be useful for demonstrating why software generally needs to be
well structured, and hence why many programming languages have support
(if not requirement in some special cases) for such structuring of
software.

> in foreign languages they should be presented to bad grammar and mis-used
> words; in physics they should be lectured by e.g. people trying to construct
> perpetuum mobile machines; in biology creationists should be teachers
> and in astronomy astrologers should lecture the pupils.

'perpetuum mobile machines'?  I think you mean 'perpetual motion
machines'.  One of my physics teachers used some old ideas for perpetual
motion machines to teach us a bit of stuff about thermodynamics.  He
wanted us to understand why the ideas wouldn't work, and to help us
understand what the laws of thermodynamics meant.  He didn't have to be
some crackpot to do this, either.

> Is that the way you'd want to run the school system?

Not the way you've misconstrued what I said!  Perhaps you were under the
erroneous impression that these things can only be 'either-or' things? 
That it is impossible for teachers to teach good stuff if they also give
examples of bad stuff to show why and how the bad stuff is bad?  Perhaps
you think it's bad to help students to discriminate between good and bad
stuff?  Of course you don't think such things!  So I'm not going to
claim that you do think such absurdities.

Or perhaps you think posters to this newsgroup should say everything
that they could say about their point in one single post?  Perhaps you
think that posters should explicate the obvious immediately?  Perhaps
you think posters should always preempt idiotic misinterpretations of
what they did (or didn't) write, just in case someone lambasts a
strawman in response?  Surely not!

> > So, I suggest that BASIC really belongs where it was originally
> > intended: in schools, but with good programming teachers.
> 
> The problem with BASIC isn't that it merely makes it easy to do
> things very badly (it's easy to do things badly in e.g. C/C++ too!),
> but that BASIC makes it difficult to do things in a good way.

Good stuff has to be done consciously and explicitly in BASIC, as it
doesn't provide that for the student.  Hence it's an opportunity for
students to gain greater appreciation and understanding of the good
stuff.  Then, if the teacher does a good job of this, the students may
have greater appreciation for good features in other languages. 
(Perhaps I need to explicitly state that I do not mean that no other
languages should also be used in such teaching?)

> However, BASIC ought to have been put peacefully to rest in
> 1975 or so, when Pascal became readily available.

No, I disagree.  That's letting BASIC off too lightly.  Parade BASIC
around as an example of a bad language that discourages good
programming.  Make a mockery of it in lessons.  Throw rotten vegetables
at it.  Expose it and shame it before the peoples of all nations!

Simon

-- 
_______________________________________________________________________________
Personal: [EMAIL PROTECTED]
Yellow Skies: [EMAIL PROTECTED] http://www.yellowskies.com
Everyone does their own signature to be different.  How does that work?

------------------------------

From: Simon Best <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: Visual Basic Source Code
Date: Mon, 18 Dec 2000 04:44:07 +0000

Jason Bock wrote:
> 
> Simon Best <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > Because BASIC makes it so easy to do things very badly, and get in a
> > hideous mess
> 
> I think this is a matter of opinion.  C and Perl can lead to some of the
> most twisted code I've ever seen.  Then again, I've seen elegant code with
> those languages as well.  Likewise with BASIC.  It's not the language, it's
> the programmer who uses the language that really makes the difference.
> 
> Jason

I find Perl nostalgic, 'cause it reminds me of beginning to learn
programming, with some kind of BASIC, on my ZX Spectrum (TS 2000?) years
ago!

Indeed, it's the programmer much more than the language that determines
whether the resulting code is good or bad.  BASIC, as I remember,
annoyingly didn't do any local variables and name scoping, and calling
subroutines with labels rather than line numbers seemed to have to be
done a bit kludgily.  But I also found those kinds of limitations so
enlightening with my little projects years ago.  I got to appreciate the
whole business of putting local variables in local scopes, etc, in a way
I'm not sure I would have done otherwise.  (I'm regarding MS VB and the
like as very much BASIC derivatives.)

Talking of subroutines, they seem to use an awful lot of them in Star
Trek.  Do they:

1.  Do it all in assembly or machine code?
2.  Use BASIC?
3.  Use Perl?  (Maybe Larry Wall was one of the few programmers to
survive the 21st century atomic wars...)

Simon

-- 
_______________________________________________________________________________
Personal: [EMAIL PROTECTED]
Yellow Skies: [EMAIL PROTECTED] http://www.yellowskies.com
Everyone does their own signature to be different.  How does that work?

------------------------------

From: Simon Best <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: important programming languages
Date: Mon, 18 Dec 2000 04:51:09 +0000

Tim Tyler wrote:
> 
> Paul Schlyter <[EMAIL PROTECTED]> wrote:
> 
> : I wonder if we'll ever see a "hardware compiler" which compiles e.g.
> : a C program into not assembly language but a hardware implementation
> : executing the operations specified by the program. [...]
> 
> Handel C.  A close C variant for direct hardware compilation.  Oxford
> Hardware Compilation Group.  Embedded Solutions Ltd.   Celoxia.
> [http://www.celoxica.com/]
> 
> In particular, see the document on:
> 
> http://www.celoxica.com/university/academic_papers.htm
> 
> ``Handel-C an effective method for designing FPGAs (and ASICs).''

And there are silicon compilers for VHDL, Verilog HDL (not the same),
and other hardware description languages (HDLs).

Simon

-- 
_______________________________________________________________________________
Personal: [EMAIL PROTECTED]
Yellow Skies: [EMAIL PROTECTED] http://www.yellowskies.com
Everyone does their own signature to be different.  How does that work?

------------------------------

From: Simon Best <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: SHA 256
Date: Mon, 18 Dec 2000 05:00:14 +0000

George wrote:
> 
> I got the correct SHA 256 hash value for the "abc" string, but when I try the
> 56 byte long string, I have incorrect values for A and E on the first block
> when i is 15.  If I were to post my code here, could someone please try to
> help me figure out the problem?
> 
> -George
> [EMAIL PROTECTED]

If you give details on what checking and analysis you've already done,
what possibilities you've already eliminated, etc, then that would also
be helpful to anyone who wishes to help debug your code.
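For cross-checking, the two standard SHA-256 test vectors (the "abc" string and the 56-byte string from the specification) can be reproduced with Python's hashlib, assuming a build that includes sha256:

```python
import hashlib

# FIPS 180-2 SHA-256 test vectors.
vectors = {
    b"abc":
        "ba7816bf8f01cfea414140de5dae2223"
        "b00361a396177a9cb410ff61f20015ad",
    b"abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq":
        "248d6a61d20638b8e5c026930c3e6039"
        "a33ce45964ff2167f6ecedd419db06c1",
}

for msg, expected in vectors.items():
    assert hashlib.sha256(msg).hexdigest() == expected
```

Comparing your intermediate A..H values for the 56-byte message against these final digests at least tells you whether the bug is in message scheduling or in the compression rounds.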

Simon

-- 
_______________________________________________________________________________
Personal: [EMAIL PROTECTED]
Yellow Skies: [EMAIL PROTECTED] http://www.yellowskies.com
Everyone does their own signature to be different.  How does that work?

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
