magnifying unpredictability and common subexpressions

2007-08-08 Thread travis+ml-cryptography
So I'm looking for a minimum cost transformation with _only_ the
following characteristic:

Given a set of m input bits X, produce a set of n output bits Y such
that knowledge of some subset of X and Y reveals minimal knowledge of
the remainder (of Y, if that makes it simpler, though of X as well
would be nice).

This is not unlike the all-or-nothing package transform proposed
by Rivest; the basic idea is that you have to know all of X in order
to know anything about Y - sort of.

My first iteration is as follows:

Let ^ represent xor.
Let p be the exclusive or of all bits of X (i.e. the parity).

Let Y = f(X), where f(x) = p ^ x

Basically, each bit of Y is the xor of every _other_ bit of X
(each x[i] was xored into p, so xoring it in again cancels it out).

However, my problem is that no matter how few bits of X you know, you
only have to guess one bit, p, in order to know the bits of Y that
correspond to the bits you know of X.
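The transform above can be sketched in a few lines of Python (the bit
values are illustrative; the function name is my own):

```python
from functools import reduce

def parity_transform(x):
    """x is a list of bits (0/1); returns y with y[i] = p ^ x[i]."""
    p = reduce(lambda a, b: a ^ b, x)  # parity of all bits of X
    return [p ^ xi for xi in x]

# Sanity check: each y[i] equals the xor of every *other* bit of x.
x = [1, 0, 1, 1, 0]
y = parity_transform(x)
for i in range(len(x)):
    others = reduce(lambda a, b: a ^ b, x[:i] + x[i+1:])
    assert y[i] == others
```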

Let me be concrete; suppose that I know only bit 0 of X and bit
0 of Y.  I can then compute p, and from that I can use any further
knowledge of bits of X to compute corresponding bits of Y.

Note that p = x[0] ^ y[0].
Then if I know x[1], I can reveal y[1] = p ^ x[1].
And so forth.
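The attack sketched above, in Python (again with illustrative bit
values; `parity_transform` is my own name for the construction):

```python
from functools import reduce

def parity_transform(x):
    p = reduce(lambda a, b: a ^ b, x)
    return [p ^ xi for xi in x]

x = [1, 0, 1, 1, 0]
y = parity_transform(x)

# The attacker knows only x[0] and y[0]:
p = x[0] ^ y[0]          # p falls out of a single known pair
# Every further known bit of X now reveals the matching bit of Y:
assert p ^ x[1] == y[1]
assert p ^ x[3] == y[3]
```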

Obviously I need something more complex.  The problem seems to be that
the equations for every bit y depend only on x and p; that is, if we
know x, we only have to guess p once for the whole array.

In a way, this reminds me of an idea I had earlier, whereby this
variable p is what I call a common subexpression, a key which unlocks
the equation, a sort of trapdoor.

Suppose I had written Y = f(X) as follows:

y[0] = x[1] ^ x[2] ^ x[3] ^ x[4]
y[1] = x[0] ^ x[2] ^ x[3] ^ x[4]
y[2] = x[0] ^ x[1] ^ x[3] ^ x[4]
...

A novice may not have realized that each bit of y depends _only_
on x ^ p, and that knowing or guessing p would reveal every bit of Y
for each known bit of X, without having to know any other bit of X.
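One can check mechanically that each of the explicitly written
equations collapses to x[i] ^ p (bit values illustrative):

```python
from functools import reduce

def xor_all(bits):
    return reduce(lambda a, b: a ^ b, bits, 0)

x = [0, 1, 1, 0, 1]
p = xor_all(x)

# The written-out equation for y[i], e.g. y[0] = x[1]^x[2]^x[3]^x[4],
# is always equal to the two-term form x[i] ^ p:
for i in range(len(x)):
    explicit = xor_all(x[:i] + x[i+1:])
    assert explicit == x[i] ^ p
```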

Now, what I sometimes wonder is whether the equations and tables in
things like DES or SHA-1 are not similar... a table allows for several
boolean logic representations, and depending on which you pick for
each output bit, it may be possible that a common subexpression like p
falls out of the equations, minimizing the number of unknowns, or
the amount of computing power necessary to brute-force it.

For a Feistel cipher like DES, one might pick different boolean logic
representations in different rounds, to minimize the complexity of
the equations you get at the end.

This is why I don't try to design ciphers.  I could possibly come up
with some complicated-looking tables and equations, but I'd have no
assurance that a simple common subexpression did not exist.

Do I need to resort to a conventional hash like SHA-1?  I am not
convinced that it is necessary, or that I'd have any more assurance
from SHA-1 than I'd have with a randomly-generated set of equations.

Does it have to be random?  Isn't there a regular structure I could
exploit here?  It seems like there should be!

Randomly-generated equations remind me a bit of the following AI Koan:

In the days when Sussman was a novice, Minsky once came to him as he
sat hacking at the PDP-6.

"What are you doing?" asked Minsky.

"I am training a randomly wired neural net to play Tic-Tac-Toe,"
Sussman replied.

"Why is the net wired randomly?" asked Minsky.

"I do not want it to have any preconceptions of how to play," Sussman
said.

Minsky then shut his eyes.

"Why do you close your eyes?" Sussman asked his teacher.

"So that the room will be empty."

At that moment, Sussman was enlightened.
-- 
URL:http://www.subspacefield.org/~travis/ -- dharma  advaita
For a good time on my UBE blacklist, email [EMAIL PROTECTED]




unintended consequences?

2007-08-08 Thread Steven M. Bellovin
I recently saw a news story about a new kind of fiber optic cable from
Corning -- it has a much smaller bending radius.  (See
http://money.cnn.com/magazines/fortune/fortune_archive/2007/08/06/100141306/index.htm?postversion=2007072303
and
http://www.corning.com/media_center/press_releases/2007/2007072301.aspx)
The problem is that when fiber is bent too sharply, the light escapes.
Of course, that's the rumored way that, umm, agencies tap fiber: they
bend it enough that some light escapes, but not too much.  That trick
won't work nearly as well with the new fiber, which is reportedly 100x
more bendable.  Does that mean that the new fiber is less tappable?



--Steve Bellovin, http://www.cs.columbia.edu/~smb

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: unintended consequences?

2007-08-08 Thread Ed Gerck
Steven M. Bellovin wrote:
 Does that mean that the new fiber is less tappable?

No change, notwithstanding anecdotal references on fiber bending
as used for tapping.

Tapping a fiber can be done without much notice by matching the
index of refraction outside the outer fiber layer, after abrasion
and etching to reach that layer. There is no need for bending,
which might not be physically possible (e.g., in a thick cable bundle),
would increase propagation losses beyond those caused by the tapped
signal power itself, and might create detectable backward-
propagating waves (BPWs are monitored to detect fiber breach).

Low-loss taps are essential. A tap must extract a portion of
the through-signal. This, however, should not have the effect of
significantly reducing the level of the remaining signal. For
example, if one-quarter of the incident signal is extracted, then
there is a 1.25 dB loss in the remaining through-signal, which
can easily be detected.
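The 1.25 dB figure can be checked directly: extracting one-quarter of
the power leaves three-quarters, and the loss in dB of the remaining
through-signal is -10 log10(0.75):

```python
import math

# 3/4 of the power continues past the tap when 1/4 is extracted
remaining_fraction = 0.75
loss_db = -10 * math.log10(remaining_fraction)
print(round(loss_db, 2))  # prints 1.25
```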

Cheers,
Ed Gerck
