Cryptography-Digest Digest #427, Volume #11 Mon, 27 Mar 00 02:13:01 EST
Contents:
Re: Card shuffling (DMc)
Re: polynomial fitting to noisey data ("Leo Sgouros")
Re: OAP-L3: Answer me these? ("Trevor L. Jackson, III")
Re: NIST publishes AES3 papers ("Joseph Ashwood")
Re: Checking for a correct key ("Joseph Ashwood")
Re: Sunday People 26/3/2000: "FORGET YOUR PASSWORD... END UP IN JAIL" ("donoli")
Re: NIST publishes AES3 papers (Scott Contini)
Re: NIST publishes AES3 papers ("Joseph Ashwood")
----------------------------------------------------------------------------
From: DMc <[EMAIL PROTECTED]>
Subject: Re: Card shuffling
Date: Mon, 27 Mar 2000 02:43:40 GMT
On Sun, 26 Mar 2000 18:26:17 GMT, [EMAIL PROTECTED] (Scott Nelson)
wrote:
>Pardon, I should have been more explicit.
>1.) I think it's unreasonable to assume that you are starting with
>a deck in which no person knows its order.
>
If you have a card deck somewhere near you which is previously used,
use it without looking at any card face. Name the top card and set
it to the side without looking at its face. Do the same for each
succeeding card. Of course you record each name. Now compare your
record with the actual cards.
You will probably end up with 0, 1, 2, or 3 matches. That is good
evidence no person knew your card deck's order.
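The check described above amounts to counting the fixed points of a random permutation, and the expected number of fixed points is exactly 1 regardless of deck size. A quick simulation (illustrative only; deck size and trial count are my choices) bears out the "0, 1, 2, or 3 matches" figure:

```python
import random

def count_matches(deck_size=52, trials=20000, seed=1):
    """Blindly 'name' each card, then count position matches.

    For a uniformly random permutation the expected number of
    fixed points is exactly 1, independent of deck size.
    """
    rng = random.Random(seed)
    guess = list(range(deck_size))          # your recorded names
    totals = []
    for _ in range(trials):
        deck = guess[:]
        rng.shuffle(deck)                   # the actual, unknown order
        totals.append(sum(g == d for g, d in zip(guess, deck)))
    avg = sum(totals) / trials
    frac_0_to_3 = sum(t <= 3 for t in totals) / trials
    return avg, frac_0_to_3
```

With 20,000 trials the average number of matches lands very close to 1, and roughly 98% of trials show 3 or fewer matches (the count is approximately Poisson with mean 1).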
>
>In my experience, most expert bridge players can remember the play
>of every card in the previous hand. And once they are aware the
>order of the pickup is important, they can remember that too. Even
>non experts have little trouble remembering the lay of the
>important cards.
>
>2.) It's also unreasonable to demand that players know the position
>of the cards without seeing their hand first. In a real game, the
>cards are dealt, and you look at your hand before any actions based
>on the randomness of the deck are taken.
>
This can be incorporated easily. You should keep in mind these are
permutations being tested, not combinations. The expected values
will change slightly also.
>
>3.) It's possible that you are demanding too high a level of accuracy.
>
I take that as a compliment.
>
>In bridge, it often occurs that knowing if a king is to your left or
>right will let you know if the finesse will work. There's a 50%
>chance that you can get this right just by guessing. Increasing
>that to 75% is enough to be noticeable.
>
Genuine contract bridge experts have numerous ways to increase their
finessing chances, including some which increase their probability to
100%. All are lawful, and do not require knowing anything about a
prior deal.
>
>(Some sharps actually made use of this info in a tournament, and
>they were caught because the other players noticed that their
>bidding was too aggressive, but they always "lucked out")
>
This is not a violation of any contract bridge law I am aware of. It
sounds to me like a "story" someone related to you.
Actually, few deals are shuffled and dealt in tournaments. They are
made up at tables from computer listings. The table that makes them
up does not later play those deals for obvious reasons.
>
[EMAIL PROTECTED] previously wrote:
>>
>>I repeat my previous statement: One riffle, and one cut, returns a
>>bridge deck back to fair; no matter what the previous play and card
>>collection process. (By the way, this is where bridge experts begin
>>to lay claim to knowing something of the next deal if certain cards
>>are observed clumped together. Their claim is not objectively
>>supportable. In my experiments, I completely controlled for such a
>>possibility.)
>
>I too have run experiments with one shuffle and one cut, but I seem
>to have gotten very different results.
>
>In particular, I find the last cut almost irrelevant.
>
The last cut? Out of one cut? I am confused.
>
>If the Ace of Diamonds is on top of the King, it's going to be
>dealt to the right of the King, and no amount of normal cutting
>will change that. A three-way cut (the so-called "Scarne" cut)
>has only a tiny chance of changing it.
>
You apparently introduce more than a simple cut into the discussion.
I am only contemplating a single cut somewhere in the middle of a
deck after a riffle. No more, no less.
>
>If the Ace of Diamonds is on top of the King before the shuffle,
>after one riffle shuffle there's a better than 50% chance there
>is no more than one card between it and the King.
>
>(It's better if the shuffler is bad, but expert bridge players tend
>to be above average shufflers.)
>
>That means the odds are about twice as good that the King will not
>be to the right of the Ace. (About 5/6 instead of 2/3)
>
A player notes that his D AK, together, is picked up as part of the
13 cards just played, in order. The player then watches the deck
being riffled and cut smoothly.
You say that player can now calculate a more accurate probability
of where that D AK ends up in the next hand.
First off, you say there is more than a 50% chance of zero or one
card lying between the D AK. That means there is less than a 50%
chance of 2 or more cards separating the D AK in the next deal.
I will let you look at your new hand.
1 - You have the D AK; No advantage.
2 - You have the D A; More than 50% certain it is left of you or
opposite you. Less than 50% certain it is right of you.
3 - You have the D K; More than 50% certain it is right of you or
opposite you. Less than 50% certain it is left of you.
4 - You have neither. You are 100% certain you do not know where
the D AK are.
My conclusion: A lot of energy for very little possible advantage.
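The "0 or 1 card between" figure can be checked with a quick Monte Carlo sketch. This assumes the standard Gilbert-Shannon-Reeds model of a riffle (binomial cut, then drops proportional to packet sizes); the model choice and the tracked positions are my assumptions, not something from the thread:

```python
import random

def gsr_riffle(deck, rng):
    """One riffle under the Gilbert-Shannon-Reeds model."""
    n = len(deck)
    cut = sum(rng.random() < 0.5 for _ in range(n))   # Binomial(n, 1/2)
    left, right = deck[:cut], deck[cut:]
    out = []
    while left or right:
        # drop from a packet with probability proportional to its size
        if rng.random() < len(left) / (len(left) + len(right)):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    return out

def gap_after_riffle(trials=20000, seed=2):
    """Fraction of riffles leaving <= 1 card between two cards that
    start adjacent (positions 20 and 21 of a 52-card deck)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        shuffled = gsr_riffle(list(range(52)), rng)
        gap = abs(shuffled.index(20) - shuffled.index(21)) - 1
        hits += gap <= 1
    return hits / trials
```

Under this model the fraction comes out well above 1/2, consistent with the "better than 50% chance" claim quoted above; the subsequent single cut does not change the relative order except across the cut point.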
>
>Knowing this is not a big advantage, but when the players are
>otherwise evenly matched, even a tiny advantage can be enough.
>
You say tiny; I say ignorably infinitesimal.
[EMAIL PROTECTED]
------------------------------
From: "Leo Sgouros" <[EMAIL PROTECTED]>
Crossposted-To: sci.math,sci.physics,alt.religion.kibology
Subject: Re: polynomial fitting to noisey data
Date: Mon, 27 Mar 2000 03:44:55 GMT
"Phil Henshaw" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
>
>
> Bryan Reed wrote:
> > In article <8aqhun$5jv$[EMAIL PROTECTED]>,
> > Richard Herring <[EMAIL PROTECTED]> wrote:
> > >In article <[EMAIL PROTECTED]>, Colin E.
([EMAIL PROTECTED]) wrote:
> > >> Does anyone know of any references to similar work? or know of any
> > >> scheme that enables one to accurately determine the curvature of a
> > >> specimen from data with experimental noise? or does anyone know
> > >> how to reduce the edge-effect mentioned?
> > >
> > >Splines?
> >
> > On noisy data? Not a good idea. The spline will insist on hitting
> > every point, and you'll get all sorts of localized anomalies with
> > meaninglessly large slopes and curvatures. There are "smoothed
> > splines" that don't hit every point, but that's still not your best bet.
>
> right, to use splines you need to work from landmarks. The curve will be
> smooth but won't pick up the shapes in between the landmarks.
>
> > Try Bevington's "Data Reduction and Error Analysis for the Physical
> > Sciences." There are algorithms for polynomial fits.
>
> A better approach would be to try curvature scale space techniques,
> basically gaussian smoothing, to develop the shape organically. The
> marvelous property that results follows from the tendency of smoothing
> to erase inflection points, successively. What you end up with is a
> hierarchical description of shape, the ability to see which scales of
> shape are derived from noise, and which scales of shape are local and
> which are global. This is a pattern recognition technique developed
> for computer vision.
>
> To develop the scale space diagram, see
> [http://idt.net/~ph/pacss.htm, showing one of mine for a fossil record]
> or, more directly applicable to your task,
> [http://www.ee.surrey.ac.uk/Research/VSSP/imagedb/demo.html, a
> delightful real time demo for mapping the shape of a fish]. A lot of
> work has been done on this. Also see tutorial
> [http://www.cv.ruu.nl/Conferences/Tutorial.html]
>
> I think you can get anything you need from these. If you *must* end up
> with an equation, then determine the appropriate level of gaussian
> smoothing first. If your data is sparse then gaussian smoothing will
> suppress too much shape and you should use derivative reconstruction
> first. My recent paper on the subject is
> [http://idt.net/~ph/fdcs-ph99-1i.pdf (15k intro) or
> http://idt.net/~ph/fdcs-ph99-1.pdf (383k full paper), published in
> IJPRAI a couple months ago]
> Then you'll have the task of curve fitting a smooth shape.
>
> One possibility, which I haven't tried but should work, is to base a spline
> curve on the landmarks provided by the inflection points of the smoothed
> curve, and for successively better fit use the inflection points of
> derivatives of the smoothed curve as your landmarks. If need be, using
> derivative reconstruction will allow you to obtain several more levels
> of meaningful derivatives than you would expect.
> ...clip
>
> > Now, with polynomial fits, you do tend to have problems at the endpoints.
> if you're studying closed shapes this is not a problem with organic
> shape reconstructions, though it remains a problem with open shapes and
> linear sequences...
>
> --
> Phil Henshaw
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 167 W 87th St NY NY 10024
> tel: 212-579-2914
> explorations: http://idt.net/~ph
be a way to track memes backwards?
One should be able to do some neat decrypting of previous output by tracking
the memes
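The key property claimed in the quoted post, that Gaussian smoothing successively erases inflection points and never adds new shape, is easy to see numerically. A minimal sketch (my own toy signal, kernel truncation, and scale choices; illustrative only):

```python
import numpy as np

def gaussian_smooth(y, sigma):
    """Smooth with a truncated Gaussian kernel via plain convolution."""
    radius = int(4 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(y, kernel, mode="same")

def inflection_count(y):
    """Count sign changes of the discrete second derivative."""
    signs = np.sign(np.diff(y, 2))
    signs = signs[signs != 0]          # ignore exact zeros
    return int(np.sum(signs[1:] != signs[:-1]))

rng = np.random.default_rng(0)
x = np.linspace(0, 4 * np.pi, 400)
noisy = np.sin(x) + 0.3 * rng.standard_normal(x.size)

# Coarser smoothing should strip noise-scale inflections, leaving
# only the few that belong to the underlying sine:
counts = [inflection_count(gaussian_smooth(noisy, s)) for s in (1, 4, 16)]
```

Plotting `counts` against scale gives exactly the hierarchical description the post mentions: many inflections at fine scales (noise), a handful at coarse scales (global shape).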
------------------------------
Date: Sun, 26 Mar 2000 23:29:49 -0500
From: "Trevor L. Jackson, III" <[EMAIL PROTECTED]>
Crossposted-To: talk.politics.crypto
Subject: Re: OAP-L3: Answer me these?
Scott Fluhrer wrote:
> Jerry Coffin <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
>
> > This is where the BIG lie comes into the picture: you have some
> > garbage that you SAY covers the theory and the specification of
> > procedures and processes, but as has been pointed out repeatedly in
> > the past, what you've posted covers nothing of the sort; it contains
> > nothing more than hand-waving. Based on its content, there are two
> > possibilities: either you don't really know how your software works,
> > or else you're intentionally covering things up to prevent the rest
> > of the world from knowing how it works.
> Just to be technically accurate: there is a third possibility -- he doesn't
> know the level of detail at which an algorithm needs to be specified, and so
> he honestly thinks that the very rough outline on his web page is sufficient.
> Still, this does not reflect well on his capabilities as a cryptographer.
While this is a logical possibility I think we can rule it out in practice.
There are dozens of well-formed cipher definitions available for comparison. So
pleading ignorance with respect to the level of detail such a specification
needs is tantamount to pleading negligence. This is highlighted by the number of
observers who have dissed his documentation, and his disregard for those
criticisms.
I suspect he does not _want_ to know the level of detail appropriate to a cipher
spec.
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: NIST publishes AES3 papers
Date: Sun, 26 Mar 2000 20:06:15 -0000
With as much respect as I have for both of you, I still
disagree. To me triple-DES appears as little more than DES
with 3x the rounds. While that clearly gives a great image
of its security, it seems to me that it would have been
just as secure to create an expanded DES with 3x the rounds,
which would give a smaller key. This smaller key is to me of
the most importance, because if my speculation is correct that
the protection offered would be (effectively) identical, then
there is an attack on triple-DES that we are not yet aware of,
one that will reduce the security to that effective size. I have
little doubt that the balance of analysis is further swayed
towards DES than the 1/10th given, but by changing the
number of rounds, the makeup of the cipher itself has been
changed, and it requires analysis as a separate algorithm. This
was the same reason that was given for not requesting a
parameterized algorithm for AES. This is not to say that I
do not trust triple-DES; I have for some circumstances
recommended it.
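One structural point worth keeping in mind here: triple-DES is applied in EDE order (encrypt-decrypt-encrypt) precisely so that setting all three keys equal collapses it to single DES, for backward compatibility. A toy Feistel sketch (my own stand-in round function, not real DES) shows the collapse:

```python
import hashlib

def round_fn(half, subkey):
    """Toy round function standing in for DES's F (illustrative only)."""
    return int.from_bytes(
        hashlib.sha256(half.to_bytes(4, "big") + subkey).digest()[:4], "big")

def encrypt(key, block, rounds=16):
    """16-round Feistel encryption of a 64-bit block."""
    left, right = block >> 32, block & 0xFFFFFFFF
    for i in range(rounds):
        left, right = right, left ^ round_fn(right, key + bytes([i]))
    return (left << 32) | right

def decrypt(key, block, rounds=16):
    """Inverse: run the rounds backwards."""
    left, right = block >> 32, block & 0xFFFFFFFF
    for i in reversed(range(rounds)):
        left, right = right ^ round_fn(left, key + bytes([i])), left
    return (left << 32) | right

def triple_ede(k1, k2, k3, block):
    """EDE construction, as in triple-DES."""
    return encrypt(k3, decrypt(k2, encrypt(k1, block)))
```

With k1 = k2 = k3, the inner decrypt undoes the first encrypt and `triple_ede` reduces to a single encryption, which is why "DES with 3x the rounds" is not quite the same animal: EDE is three full, independently keyed passes, not one long schedule.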
Even looking at AES finalists, there is one that just
strikes me as hollow, and all of the analysis indicates that
it has the lowest security margin, so I would certainly
trust triple-DES over RC6. However, examining the other
algorithms, none of them strikes me as likely to suffer a
significant attack anytime soon.
This of course has little bearing on any professional
recommendation I would make, where because I don't expect
any surprises in the future of triple-DES, I would be quite
likely to recommend it. Because when it really comes down to
it, I will depend on the judgement of the masses of the
community above my own (at least until I can prove them),
and act in the interest of my employer.
Joe
> I trust triple-DES over any of the AES candidates.
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: Checking for a correct key
Date: Sun, 26 Mar 2000 20:41:31 -0000
3 seems to be the best from your list. OTOH it is commonly
considered secure (and almost certainly is) to store a hash
of the original file. Given a cryptographically strong hash
this does not weaken the security of the file (except that
it becomes possible to verify a key guess). Storing the hash
encrypted is of no use, so save yourself the processor time,
and simply store the hash.
Joe
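Joe's suggestion, storing a hash of the original file and verifying the key by re-hashing after decryption, can be sketched as follows. The cipher here is a deliberately toy XOR keystream (my stand-in, not a recommendation); only the hash-verification pattern is the point:

```python
import hashlib

def _keystream(key, n):
    """Toy counter-mode keystream (stand-in for a real cipher)."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt_file(password, salt, plaintext):
    """Encrypt; return (ciphertext, plaintext hash) to store together."""
    key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))
    return ct, hashlib.sha256(plaintext).digest()

def decrypt_file(password, salt, ct, stored_hash):
    """Return the plaintext if the password checks out, else None."""
    key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    pt = bytes(a ^ b for a, b in zip(ct, _keystream(key, len(ct))))
    return pt if hashlib.sha256(pt).digest() == stored_hash else None
```

As Joe notes, the stored hash can sit in the clear: given a strong hash it reveals nothing useful about the plaintext, beyond letting anyone verify a key guess (and letting identical files be recognized as identical).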
"Thomas Luzat" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Hi,
>
> I'd like to encrypt certain files with a user-supplied key. When
> decrypting them, I'd like to check whether they entered the correct
> one. I see multiple possibilities:
>
> 1. Store a checksum (encrypted?)
> 2. Store a string like "VALID" to test for
> 3. Store the hashed key (encrypted?)
>
> 2 seems to be not a good solution, but what about 1 and 3; are they
> secure? Should I store those values encrypted together with the rest
> of the file, too? Are there better methods?
>
>
> Thomas
------------------------------
From: "donoli" <[EMAIL PROTECTED]>
Crossposted-To:
uk.media.newspapers,uk.legal,alt.security.pgp,alt.privacy,uk.politics.parliament,uk.politics.crime,uk.politics.censorship
Subject: Re: Sunday People 26/3/2000: "FORGET YOUR PASSWORD... END UP IN JAIL"
Date: Mon, 27 Mar 2000 04:59:51 GMT
>
>Home Secretary Jack Straw aims to make it a criminal offence to refuse to
>tell police or secret services the way into your personal computer.
>
>And you could go down for two years, even if you've only forgotten the
>vital word.
>
That's the last straw.
donoli.
------------------------------
From: [EMAIL PROTECTED] (Scott Contini)
Subject: Re: NIST publishes AES3 papers
Date: 27 Mar 2000 05:14:20 GMT
In article <#JXaRQ6l$GA.244@cpmsnbbsa03>,
Joseph Ashwood <[EMAIL PROTECTED]> wrote:
>With as much respect as I have for both of you, I still
>disagree. To me triple-DES appears as little more than DES
>with 3x the rounds. While that clearly gives a great image
>of its security, it seems to me that it would have been
>just as secure to create an expanded DES with 3x the rounds,
>which would give a smaller key. This smaller key is to me of
>the most importance, because if my speculation that the
>protection offered would be (effectively) identical, there
>is an attack on triple-DES that we are not yet aware, that
>will reduce the security to that effective size. I have
>little doubt that the balance of analysis is further swayed
>towards DES than the 1/10th given, but by changing the
>number of rounds, the makeup of the cipher itself has been
>changed, and requires analysis as a separate algorithm. This
>was the same reason that was given for not requesting a
>parameterized algorithm for AES. This is not to say that I
>do not trust triple-DES, I have for some circumstances
>recommended it.
>
>Even looking at AES finalists, there is one that just
>strikes me as hollow, and all of the analysis indicates that
>it has the lowest security margin, so I would certainly
>trust triple-DES over RC6. However examining the other
>algorithms, none of them strikes me that there is a
>significant attack that will be uncovered anytime soon.
I disagree with your understanding of the security of RC6. Since
RC6 is based upon a cipher that has been well studied (RC5), this
allowed the designers to assess its security more accurately than
the other AES teams were able to. It is also true that Rijndael
is based upon a previous algorithm, but the three other candidates
are entirely new, so they had to allow a larger security margin.
There has been a lot of talk about minimum number of secure rounds
and similar things, but I think this information is misleading because
it is biased towards algorithms that are less well studied. I should
emphasize that this is a personal opinion - I am not speaking for
the designers of RC6. But I was involved in the security analysis, and
that paper is available on my web site:
http://www.maths.usyd.edu.au:8000/u/contini/
Scott
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: NIST publishes AES3 papers
Date: Sun, 26 Mar 2000 22:47:35 -0000
RC6 is apparently secure for now, but with even the tiniest
progress on an attack, RC6 will fall. If a new attack is
found, RC6 doesn't necessarily have the security margin to
deal with it, whereas any of the others have enough margin to
make it more likely that they will remain secure. All
currently known attacks have been applied to every AES
finalist, so RC6 has no demonstrable advantage in that
respect, except perhaps in that it doesn't require more than
the minimal amount of time to resist the attacks.
I do not see where RC6 is any more studied than any of the
others. It is quite simple to establish that changing the
smallest detail of a cipher can make it weaker; in the case
of RC5 to RC6, the enhancements were carefully chosen, but
it is a new cipher, and as such it needs to be considered as
a new cipher. Yes, it will inherit strength
from RC5, but all attacks need to be reevaluated, and new
lines of attack may have been created. For the final AES
decision, I hope that security margin against potential
future attacks is taken into account. We
all pretty much agree that we don't know everything about
cryptography yet, why should we trust a cipher that has a
security margin of virtually nothing?
Joe
> I disagree with your understanding of the security of RC6. Since
> RC6 is based upon a cipher that has been well studied (RC5), this
> allowed the designers to more accurately assess the security of it
> than other AES candidates were able to. It is also true that Rijndael
> is based upon a previous algorithm, but the three other candidates
> are entirely new, so they had to allow a larger security margin.
>
> There has been a lot of talk about minimum number of secure rounds
> and similar things, but I think this information is misleading because
> it is biased towards algorithms that are less well studied. I should
> emphasize that this is a personal opinion - I am not speaking for
> the designers of RC6. But I was involved in the security analysis, and
> that paper is available on my web site:
>
> http://www.maths.usyd.edu.au:8000/u/contini/
>
> Scott
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************