Re: A secure Internet requires a secure network protocol

2007-07-25 Thread bear
 is controlled by any
process related to security, nor is keysigning halted or signed keys
revoked in the case of users who have perpetrated or permitted known
security abuses.

It should therefore be no surprise that SSL is nearly useless.

Bear




RE: How the Greek cellphone network was tapped.

2007-07-21 Thread bear


On Thu, 19 Jul 2007, Charles Jackson wrote:

An earlier post, talking about vulnerabilities and the lack of an
appropriate market response, said:

We're talking about phone calls -- did all of the well-publicized
cellular eavesdropping (Prince Charles, Newt Gingrich (then a major US
politician), and more) prompt a change?  Well, there are now US laws
against that sort of phone eavesdropping gear -- a big help

Halfway, I think.  ISTR there are laws against manufacture for sale,
sale, purchase, or most usage of such gear - but no laws against
manufacture without intent to sell, possession, or some exempted
types of use of such gear.

Basically, owning such devices is not a crime, nor is using them
provided the target has been duly notified that their call will be
or is being intercepted.  So you can build the gear, and you can demo
the gear you've built on a call made for purposes of demo-ing the
gear.

Consult a lawyer first, but I believe it may also be legal to monitor
calls made in a given location provided you first put up a sign that
says "all cell calls made on these premises will be monitored," etc.
But you can't legally buy or sell the equipment to do it.

 I think the most publicized cases of cellular interception,
 including the two mentioned above, were interceptions of analog
 calls.  Such interception was not too hard to do.  In some cases you
 could pick up one side of such calls on old American TV sets (sets
 that tuned above channel 69 on the UHF dial).

The technical requirement was for a TV with a UHF analog *tuner* as
opposed to a digital channel-selection dial.  The channels that the
cellular network used (still uses?  I don't know) were in between the
channels that were assigned whole numbers in TV tuning.  So you could
pick up some cell traffic if you tuned, for example, to UHF TV
channel 78.44.  But not if you tuned to channel 78 or channel 79.

Bear



Re: How the Greek cellphone network was tapped.

2007-07-21 Thread bear


On Sat, 21 Jul 2007, Steven M. Bellovin wrote:

Not as I read the statute (and of course I'm not a lawyer).  Have a
look at 18 USC 2512
(http://www4.law.cornell.edu/uscode/html/uscode18/usc_sec_18_2512000-.html)

   any person who intentionally ...

   manufactures, assembles, possesses, or sells any electronic,
   mechanical, or other device, knowing or having reason to know
   that the design of such device renders it primarily useful for the
   purpose of the surreptitious interception of wire, oral, or
   electronic communications, and that such device or any component
   thereof has been or will be sent through the mail or transported
   in interstate or foreign commerce;

   ...

So simple possession of a surreptitious interception device is illegal,
with exceptions for things like sale to law enforcement or
communications companies.

Hm.  Okay, we're looking at the same law, and I am not a lawyer
either; but I read "knowing or having reason to know ... that such
device or any component thereof has been or will be sent through the
mail or transported in interstate or foreign commerce" as a limiting
clause on what would otherwise be an unconstitutional law.

In the case of someone who manufactures and possesses such a device,
but never sends it or its components through the mail nor transports
it in interstate or foreign commerce, I don't think this law gets
broken.  Despite intimidation tactics that do their best to try to
spread the opposite impression, this is explicitly *not* forbidden by
this law.

And the statute on using such a device, IIRC, also has a limitation,
in that it bans using such devices *surreptitiously* - which I think
permits non-surreptitious use such as demonstrations.

Still, it's a case of two reasonably educated people being able to
look at the same statute and draw different conclusions: Sooner or
later it will have to be decided in a trial to see who can pay the
best lawyers^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H see which
interpretation of the statute best serves justice.

Bear



Re: Can you keep a secret? This encrypted drive can...

2006-11-12 Thread bear


On Mon, 6 Nov 2006, Derek Atkins wrote:

Quoting Leichter, Jerry [EMAIL PROTECTED]:

 Just wondering about this little piece.  How did we get to 256-bit
 AES as a requirement?  Just what threat out there justifies it?

 It's a management requirement.  The manager sees AES128 and AES256
 and thinks 256 must be better than 128 and therefore the edict comes
 down that AES256 must be used.  It's not a technical decision.  It's
 not a decision made by analyzing the threats.  It's made purely
 by assertion, but it's a decision that can't easily be refuted.

Yep.  When costs are equal (and in this case computing power is so
cheap as to make that approximately true) any competent manager will
always pick whichever method is superior in any respect.

The facts are that with AES128 or AES256, the cipher itself will *NOT*
be the weakest link in security, so the theoretical superiority of
AES256 doesn't matter much.

Anybody who is making a serious attack will have to do pretty much
exactly the same thing -- social engineering, rubberhose attack,
subpoena, password guess, protocol flaw exploit, Van Eck monitor
exploit, keyboard monitor, software backdoor exploit, DLL substitution
attack,  mem device exploit by a trojan running at the same time as
the encryption software, audio interferometry to determine keystroke
sequences, audio-frequency carrier wave interference from some metal
thing in the same office as the transmitter vibrating to the voice
that's being encrypted, etc.  There are a million different links
all weaker than the cipher itself.

Conversely, it harms nothing to have them pick the stronger cipher,
given that both ciphers are sufficiently strong that their strength
has nothing to do with the minimum effort required to attack their
application.

Bear



Re: thoughts on one time pads

2006-01-27 Thread bear


On Thu, 26 Jan 2006, Adam Fields wrote:

On Thu, Jan 26, 2006 at 06:09:52PM -0800, bear wrote:
[...]
 Of course, the obvious application for this OTP material,
 other than text messaging itself, is to use it for key
 distribution.

Perhaps I missed something, but my impression was that the original
post asked about how a CD full of random data could be used as a key
distribution mechanism.

You did not miss anything; I confirmed the OP's supposition
explicitly, and I agree with it in principle.

Bear



Re: thoughts on one time pads

2006-01-26 Thread bear


On Thu, 26 Jan 2006, Travis H. wrote:

 For example, you may have occasional physical meetings with a good
 friend, colleague, family member, or former co-worker.  Let's say
 you see them once every few years, maybe at a conference or a
 wedding or a funeral or some other occasion.  At such times, you
 could easily hand them a CD-ROM or USB flash drive full of key
 material.  Then, you could use that pad to encrypt messages to them
 until the next time you meet.  Let's say you send them ten 1kB
 messages per year.  Then a $1 CD-ROM would hold enough data for
 7 years of communication!  Heck, I could put the software on the
 image and make a dozen to keep with me, handing them out to new
 acquaintances as a sort of preemptive secure channel.

It's far easier and less error-prone to hand them a CD-ROM
full of symmetric keys indexed by date.

The problem is that most people will not take the care needed
to properly use a one-time pad.  For text communications like
this forum, they're great, and a (relatively) small amount of
keying material, as you suggest, will last for many years.

But modern applications are concerned with communicating *DATA*,
not original text; someone on the system is going to want to
send their buddy a 30-minute video of the professor explaining
a sticky point to the class, and where is your keying material
going then?  He wants to be ignorant of the details of the
cryptosystem; he just hits "secure send" and waits for magic
to happen.  Or if not a 30-minute video, then the last six
months of account records for the west coast division of the
company, or a nicely formatted document in a word processor
format that uses up a megabyte or two per page, or ...
whatever.  The OTP is nice for just plain text, but the more
bits a format consumes, the less useful it becomes.  And
fewer and fewer people even understand how much or how
little bandwidth something requires; they think in terms of human
bandwidth, the number of seconds or minutes of attention
required to read or listen to or watch something.

An OTP, as far as I'm concerned, makes a really good system,
but you have to respect its limits.  One of those limits is that
it suits only a low-bandwidth medium like text-only messages, and
in the modern world that qualifies as specialized.

Given a low-bandwidth medium, and indexing keying material
into daily chunks to prevent a system failure from resulting
in pad reuse, you get 600 MB on a CD-ROM.  Say you want a
century of secure communications: divide the disc into daily
chunks of about 8 kilobytes per direction -- each day you can
send 8 kilobytes and he can send 8 kilobytes.  (Note that a
DVD-ROM holds several times as much.)

That gives you a little over 100 years (read, all you're likely
to need, barring catastrophic medical advances,) of a very
secure low-bandwidth channel.
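
For concreteness, the budget works out as follows (a quick Scheme
calculation; the exact figures and the leap-day rounding are mine):

(define cd-bytes (* 600 1024 1024))         ; one CD-ROM of pad material
(define days (* 100 365))                   ; a century, ignoring leap days
(define per-day (quotient cd-bytes days))   ; => 17236 bytes per day
(define per-direction (quotient per-day 2)) ; => 8618, about 8 kilobytes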

Of course, the obvious application for this OTP material,
other than text messaging itself, is to use it for key
distribution.

Bear

Bruce acknowledges this by saying "[t]he exceptions to this are
generally in specialized situations where simple key management is a
solvable problem and the security requirement is timeshifting."  He
then dismisses it by saying "[o]ne-time pads are useless for all but
very specialized applications, primarily historical and non-computer."

Excuse me?  This would in fact be a _perfect_ way to distribute key
material for _other_ cryptosystems, such as PGP, SSH, IPSec, openvpn,
gaim-encryption etc. etc.  You see, he's right in that the key
distribution problem is the hardest problem for most computer
cryptosystems.  So the OTP system I described here is the perfect
complement for those systems; it gives them a huge tug on their
bootstraps, gets them running on their own power.

I'm not sure it is even limited to this use case.  For example, before
a ship sets out to sea, you could load it up with enough key material
to last a few millennia.  How much key material could a courier carry?
I bet it's a lot.  As they say, never underestimate the bandwidth of
a station wagon full of tapes.  And don't embassies have diplomatic
pouches that get taken to them and such?

So my questions to you are:

1) Do you agree with my assessment?  If so, why has every crypto
expert I've seen pooh-poohed the idea?

2) Assuming my use case, what kind of attacks should I worry about?
For example, he might leave the CD sitting around somewhere before
putting it in his computer.  If it sits around on CD, physical access
to it would compromise past and future communications.  If he copies
it to flash or magnetic media, then destroys the CD, we can
incrementally destroy the pad as it is used, but we have to worry
about data remanence.

3) How should one combine OTP with another conventional encryption
method, so that if the pad is copied, we still have conventional
cipher protection?  In this manner, one could use the same system for
different use cases; one could, for example, mail the pad, or leave it
with a third party for the recipient

Re: quantum chip built

2006-01-17 Thread bear


On Sat, 14 Jan 2006, Michael Cordover wrote:

 In order to factor a 1024
 bit modulus you'd need a 1024 bit QC.  Perhaps if there were some sudden
 breakthrough it'd be a danger in a decade - but this is the same as the
 risk of a sudden classical breakthrough: low.

This is not necessarily so.  In order to factor a 1024-bit
modulus using Shor's algorithm, you would indeed need a 1024-
qubit machine.  But we haven't seen what fruit may be borne by
algorithm research and hybrid machinery; it seems plausible
that a hybrid machine may be able to use, say, 16 qubits to
divide the work factor of factoring large numbers in general
by approx. 65536.

In general, I think that until QC is a mature field, cryptographers
and cryptologists ought to assume that some QC or hybrid algorithm
or machinery that may be discovered any minute now can
simultaneously exploit the strengths of both QC and classical
computation.  And that means, in general, that I'd want to *add*
the number of bits factorable by Shor's algorithm in the foreseeable
future to the number of bits factorable by classical brute-force
algorithms.

In fact, maybe we ought to be worried about synergistic effects
and multiplying the figures together, although I can't imagine
where such effects would come from.  Let us say simply that Quantum
Computing is far from mature, and at this moment we are only
beginning to understand it.  I remember all the mechanical engineers
who proved that no heavier-than-air flying machine could exist
back in the 19th century, back when knowledge of mechanics and
materials was less precise than now...  And these guys knew what
there was to know about it.  I'm chary of people proving that
no n-bit factoring machine can be built just because the way they
already know to build one (Shor's algorithm, which requires n qubits)
won't work.  Given that our knowledge of QC is nascent, our
ignorance of QC's practical limits is likely staggering, and
caution is to be advised.

Bear



Re: Proving the randomness of a random number generator?

2005-12-03 Thread bear


On Fri, 2 Dec 2005, Lee Parkes wrote:

 Hi,
 Apologies if this has been asked before.

 So, the question is, how can the randomness of a PRNG be proved within
 reasonable limits of time, processing availability and skill?


Randomness is a quality that, intrinsically, cannot be proven.  Period.
You can take an urn with a hundred numbered balls and pull them out one
at a time -- a truly random process -- and the sequence from one to a
hundred by ones is just as likely as every other sequence.  If it happens
to come up, even that doesn't prove that it wasn't a random process.

On a practical note, I would test the PRNG's output against pattern detectors.
Spectral-analysis software is quite good at detecting patterns in PRNG output.

Then there are the pattern detectors built into various file compression
tools.  If gzip or winzip or arc or arj or (etc) can find a pattern, it
will succeed in producing a shorter file than the original.
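
For a quick first pass you can also roll your own frequency test.
Here's a sketch in Scheme (a toy of mine, not a substitute for a
published test battery such as Marsaglia's Diehard or the NIST
statistical test suite): a chi-square of byte counts against the
uniform expectation.

(define (chi-square bytes)                ; bytes: a list of ints 0-255
  (let ((counts (make-vector 256 0)))
    (for-each (lambda (b)
                (vector-set! counts b (+ 1 (vector-ref counts b))))
              bytes)
    (let ((expected (/ (length bytes) 256)))
      (let loop ((i 0) (sum 0))
        (if (= i 256)
            sum
            (let ((d (- (vector-ref counts i) expected)))
              (loop (+ i 1) (+ sum (/ (* d d) expected)))))))))

For genuinely uniform bytes the result should hover near 255 (the
degrees of freedom); values wildly above that suggest a pattern.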

Before you do any of that, however, check the literature to see if it's
already been done.  If you're using a commercial or cryptological PRNG
that's been studied, read the papers of the people who studied it, and
the papers of people who studied competing products.  See if they found
anything usable or any useful property that you can use to support a
claim of randomness.  (note: it won't actually *be* randomness, for
the simple reason that that can't be proven.  But some systems have
proofs that someone who has access to the entire output of the PRNG so
far has no strategy better than random guessing for determining the
next and subsequent outputs, and that may be random enough for your
bosses.)


Bear




Re: ISAKMP flaws?

2005-11-30 Thread bear


On Sat, 19 Nov 2005, Peter Gutmann wrote:

- The remaining user base replaced it with on-demand access to network
  engineers who come in and set up their hardware and/or software for them and
  hand-carry the keys from one endpoint to the other.

  I guess that's one key management model that the designers never
  anticipated... I wonder what a good name for this would be, something better
  than the obvious sneakernet keying?

Actually, this is a good thing.  It separates the key distribution
channel from the flow of traffic encrypted under those keys, and it
makes key distribution require human attention/intervention.  This is
treating key distribution seriously, possibly for the first time in
the modern incarnation of the industry.

Bear



Re: Optimisation Considered Harmful

2005-06-25 Thread bear
 it completely wrong for several tries.)

Um.  Yup.

 Enter cryptographic algorithms.  On their face, these are simple
 mathematical transformations. ... The problem is that these side-
 channel attacks broaden the meaning of the program to something
 that has never been considered in previous work that I know of.

Certainly I haven't seen any discussion of these problems anywhere
along my four-foot bookshelf of compiler books.

 What can be done?  Well, the first thing that's clearly needed is a
 more complete specification of the semantics of cryptographic
 algorithms.  Just the input/output transformation - which is all we
 write down in most analyses - is insufficient.  We sometimes state
 things informally, almost in passing - as in the comments on AES
 that table accesses take constant time.  We certainly assume
 things like the time to add two numbers is independent of the
 number of carries - which is probably true on machines today, but
 may actually have been false at one time.

Interestingly, although the timing attack is obsolete, there is a
potential power attack here.  Since power is consumed when gates
switch from 0 to 1 or vice versa, if you know the machine is
performing an add instruction and you know how much power it's eating
at that precise instant, you can probably tell how many carries it
took, because the gates that hold the carry bits are normally 0 when
the add instruction starts and switch back to 0 before it's over.
I dunno if you can separate this from the other stuff going on at
the same time, and even when you got it, it would be combined with the
power for changing the destination operand to the answer - but you
might be able to eliminate a lot of possible keys that way.
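
To make that concrete, here's a toy model in Scheme (my sketch; it
assumes power draw tracks the number of carry bits generated during
a ripple-carry add, which is the leak described above):

(define (carry-count a b n)          ; carries in an n-bit add of a, b
  (let loop ((i 0) (carry 0) (total 0))
    (if (= i n)
        total
        (let* ((ai (modulo (quotient a (expt 2 i)) 2))
               (bi (modulo (quotient b (expt 2 i)) 2))
               (next (if (>= (+ ai bi carry) 2) 1 0)))
          (loop (+ i 1) next (+ total next))))))

;; (carry-count 255 1 8) => 8 : adding 1 to #xFF ripples all the way
;; through, so an attacker reading this count off the power trace
;; learns a function of the secret operands.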

 Without a way to write
 down what matters, we have no way to judge whether a particular
 compilation/hardware approach is safe.  There's some work on
 abstract program interpretation that might help here (though it's
 mainly aimed in other directions).

Right now there is no computer language (and no mathematical notation)
for writing this kind of stuff down.  Our compiler theory is based
mainly on math, our idea of program semantics almost wholly so, and
these issues are not now modeled or tracked in any formal semantics
we use.

Further, the chip architecture used by most desktop machines is
not equipped to treat such semantic requirements correctly, and
even if we developed notation and semantic models that took these
issues into account, it would be difficult to get them to run
correctly on desktop machines.

   - Use specialized hardware, designed not to leak side-channel
   information.

I think the easiest way to achieve such hardware would be to
make it *constantly* leak side-channel information based on
several chaotic processes - in the hopes that attackers couldn't
isolate which parts of the information were related to your
algorithm.

Bear



Re: Is 3DES Broken?

2005-02-02 Thread bear


On Mon, 31 Jan 2005, Steven M. Bellovin wrote:
snip re: 3des broken?

[Moderator's note: The quick answer is no. The person who claims
 otherwise is seriously misinformed. I'm sure others will chime
 in. --Perry]

I'll be happy to second Perry's comment -- I've seen no evidence
whatsoever to suggest that it's been broken.  But there are some
applications where it's a bad choice for cryptographic reasons.

When using CBC mode, one should not encrypt more than 2^32 64-bit
blocks under a given key.

I think you meant ECB mode?

Whichever it is, as you point out there are other and more secure
modes available for using 3DES if you have a fat pipe to encrypt.
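
For reference, the 2^32 figure is the standard birthday bound for a
64-bit block cipher in CBC mode: after q blocks under one key, the
chance that two ciphertext blocks collide -- and a collision leaks
the XOR of two plaintext blocks -- is roughly q^2/2^65.  A one-line
sanity check in Scheme (my arithmetic, not from the original post):

(define (cbc-collision-prob q)
  (exact->inexact (/ (* q q) (expt 2 65))))
;; (cbc-collision-prob (expt 2 32)) => 0.5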

Bear



Re: [anonsec] Re: potential new IETF WG on anonymous IPSec (fwd from [EMAIL PROTECTED]) (fwd from [EMAIL PROTECTED])

2004-09-11 Thread bear


On Fri, 10 Sep 2004, Eugen Leitl wrote:

From: Joe Touch [EMAIL PROTECTED]

To clarify, this is not really anonymous in the usual sense.

It does not authenticate the endpoint's identification, other than
"same place I had been talking to".


That's pseudonymity, not anonymity.


There's no difference between having no name and having a name you
cannot trust. I.e., I could travel under the name "anonymous" or "",
or under the name "A. Smith". If you don't know whether I am actually
A. Smith, the latter is identical to the former.

This is just plain not true.  When operating under a pseudonym,
you are making linkable acts - linkable to each other even if
not necessarily linkable to your own official identity.  Anonymous
actions or communications are those which cannot be linked to any
other no matter how hard someone tries.

We can expect the public to fail to grasp the distinction, but
on this list "anonymous" is a very strong claim.  Anonymity is
*HARD* to do, not something that results from failing to check
a credential.

Bear



Re: Spam Spotlight on Reputation

2004-09-07 Thread bear


On Tue, 7 Sep 2004, Hadmut Danisch wrote:

 The last 17 months of work in ASRG (Anti Spam Research
 Group, IRTF) and MARID (Mail authorization records in DNS, IETF) are
 an excellent example of how not to design security protocols.

 This was all about marketing, commercial interests, patent claims,
 giving interviews, spreading wrong information, undermining
 development, propaganda. It completely lacked proper protocol design,
 a precise specification of the attack to defend against, engineering
 of security mechanisms. It was a kind of religious war. And while
 people were busy with religious wars, spammers silently realized that
 this is not a real threat to spam.


For what it's worth, do you remember a device that was marketed on
American television called the Ronco Pocket Fisherman?  It was
a sort of folding fishing rod with a built-in tiny tacklebox,
and the idea was that here was a complete fishing rig that you
could toss into a suitcase and still have room for all your
clothes and stuff.

The fact is, as fishing gear, it was astonishingly bad.  But, as
the owner of a bait shop once explained to me after someone who
had come in with one tossed it in the trash and walked out with
a real fishing rod, "It's not made to catch fish.  It's made to
catch fishermen."

Similarly, the current generation of anti-spam technology isn't
made to catch spammers; it's made to catch ISP's and software
companies and get them to part with their money.  Alas, unlike
the Ronco Pocket Fisherman, there is no proven technology that
people can go back to after getting fed up with it not working.

It has been clear from the outset that all the solutions to spam
consisting of building a fence around the internet and keeping
the spammers out aren't going to work, any more than the old
anarchist-cypherpunk dream of building a fence around our
cryptographic networks and keeping the government out was going
to work.  The problem in both cases is that if the information
needed to join the network is available to members of your
intended in-group, it's also available to members of your
intended excluded group.

I have two patents in natural-language processing, and a fair amount
of experience engineering in the field.  But that's a fairly recondite
skill, and these days most folks are looking for engineers for much
more prosaic tasks like interfacing their middleware with their
databases.  In the last year, I have been unemployed.  I've
turned down two job offers, though -- from software companies
with bulk mail products, looking for natural-language guys
to build paraphrase engines to bypass spam filters or copy check
functions to estimate the likelihood of a particular message body
being filtered.  That's the level of commitment these guys are
showing.  They're actually willing to hire engineers at specialist
salaries to build new ways to bypass filters.

We should not be at all surprised, when we offer a way to
auto-whitelist email and therefore bypass filters at a lower
cost than hiring engineers, that spammers leap onto it at
a much higher rate than legit senders.

From a cryptographic perspective, there are a lot of systems out
there that are solving some trivialized version of the problem or
some not-very-crucial aspect of the problem.  There are a lot of
systems that have a threat model that's very peculiar, and which
can be solved, however meaninglessly, while their customers still
get lots of UCE.  Indeed, there are a lot of systems out there
that don't have any published threat model.  These are failures
of protocol design, though not necessarily failures of
marketability.  But to the extent that they allow bypassing
filters, the spammers are the biggest customers.

Bear





Re: ?splints for broken hash functions

2004-09-06 Thread bear


On Wed, 1 Sep 2004, David Wagner wrote:

Hal Finney writes:
[John Denker proposes:] the Bi are the input blocks:
  (IV) -> B1 -> B2 -> B3 -> ... -> Bk -> H1
  (IV) -> B2 -> B3 -> ... -> Bk -> B1 -> H2
then we combine H1 and H2 nonlinearly.

This does not add any strength against Joux's attack.  One can find
collisions for this in 80*2^80 time with Joux's attack.

First, generate 2^80 collisions for the top line.  Find B1,B1* that
produce a collision, i.e., C(IV,B1)=C(IV,B1*)=V2.  Then, find B2,B2*
that produce a collision, i.e., C(V2,B2)=C(V2,B2*)=V3.  Continue to
find B3,B3*, ..., Bk,Bk*.  Note that we can combine this in any way
we like (e.g., B1, B2*, B3*, B4, .., Bk) to get 2^80 different messages
that all produce the same output in the top line (same H1).

Next, look at the bottom line.  For each of the 2^80 ways to combine
the above blocks, compute what output you get in the bottom line.
By the birthday paradox, you will find some pair that produce a
collision in the bottom line (same H2).  But that pair also produces
a collision in the top line (since all pairs collide in the top line),
so you have a collision for the whole hash (same H1,H2).

The birthday paradox does not apply in this case because H1 is fixed.
The above construction is in fact secure against the Joux attack as
stated.  2^80 work will find, on average, one collision.

Bear



Re: Compression theory reference?

2004-09-01 Thread bear


On Wed, 1 Sep 2004, Hadmut Danisch wrote:


I have often heard that example and I used it myself several times.  I
do not use it anymore, because it is not that easy. There's a major
flaw in this example, and therefore it is not really a
counterexample. If the faculty found that flaw I'd be in a bad
position.

You could define some reversible bizarre function f that does exactly
that job, i.e. for a given message m you need to apply f again and
again and after some finite number of calculations you'll find that
f(...f(m)...) = x where x = 0 or 1

For example, loading the message into a Linear Feedback Shift
Register and iterating until it produces one of two predetermined
messages 0 or 1?

For a nontrivial message, the last star will burn out before you
get that many iterations.  Also, as you say, in order to decode
the message you have to know how many iterations you made - a
number which will, on the average, be the same length as the
message.

It hardly seems a worthwhile example.

They say LZW and MTF. I have already given an example for LZW.
They don't care. I've told them to take any random string taken
from /dev/random under Linux. They don't care. The German principle is
that a faculty is always right by definition.

That is inconsistent with the advancement of knowledge.  Any
university relying on such a principle has abandoned its duty.

Bear



Re: Compression theory reference?

2004-09-01 Thread bear


On Tue, 31 Aug 2004, John Denker wrote:

 I emphasize that I am only discussing messages of length N,
 where N is some pre-chosen number.  For concreteness, let's
 choose N=10.

 I repeat my assertion that _if_ XX can compress any string,
 shortening it by one bit, and _if_ you know that the original
 messages each have exactly 10 bits, _then_ any 10-bit message
 can be compressed down to a single bit.

Actually you don't need to take it all the way to the
reductio ad absurdum here.  There are 1024 10-bit
messages.  There are 512 9-bit messages.  You need
to point out that since a one-to-one mapping between
sets of different cardinality cannot exist, it is not
possible to create something that will compress every
ten-bit message by one bit.

Or, slightly more formally, assume that a function C
can reversibly compress any ten-bit message by one bit.
Since there are 1024 distinct ten-bit messages, there
must be at least 1024 distinct nine-bit messages which,
when the reversal is applied, result in these 1024
messages.  But there are exactly 512 distinct nine-bit
messages.  Therefore 512 >= 1024 -- a contradiction.
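
In fact the counting argument is constructive.  Hand any claimed
10-bit-to-9-bit compressor to the following Scheme procedure (my
sketch) and the pigeonhole principle guarantees it finds two
distinct messages with the same compressed form:

(define (find-collision compress)  ; compress: 10-bit int -> 9-bit int
  (let ((seen (make-vector 512 #f)))
    (let loop ((m 0))
      (let ((c (compress m)))
        (if (vector-ref seen c)
            (list (vector-ref seen c) m)   ; two inputs, same output
            (begin (vector-set! seen c m)
                   (loop (+ m 1))))))))

;; e.g. (find-collision (lambda (m) (modulo m 512))) => (0 512)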

Bear



A splint for broken hash functions

2004-08-31 Thread bear


Here's a handy way to build Joux-resistant hash
functions with existing tools.

This construction takes slightly more than twice as
much work and twice as much internal state as a
conventional hash function (embrace it guys, I don't
think there's a way around it).

   H2(H1) = H1( H1(M)  xor  H1( TT( M)))

 TT denotes some trivial transformation
 (I propose swapping high and low halves
 of the first block).  H1 is a conventional
 hash function and H2 is, I believe, fully
 resistant to the Joux attack.

Or, more precisely defined in the
Scheme programming language:


;; joux-resistance takes a conventional
;; hash function and returns a joux-resistant
;; hash function derived from it.

(define joux-resistance
  (lambda (H1)
    (lambda (message)
      ;; xor (bitwise xor of two digests) and TT (the trivial
      ;; transformation described above) are assumed primitives.
      (H1 (xor (H1 message)
               (H1 (TT message)))))))


(define JR-MD5 (joux-resistance MD5))
(define JR-SHA1 (joux-resistance SHA1))
;; ... etc...

This is based on John Denker's second proposal.
But this proposal uses a trivial transformation
to force divergence in internal states rather
than using a reordering of blocks.

Now the attacker has to find blocks that have
internal-state to internal-state collisions in
both H1(M) and H1(TT(M)) in order to use the
attack, which is the same work factor as finding
collisions in both mappings of start-to-end
states (Denker's reordering proposal) or finding
state-to-state collisions on an internal state
twice as long (Finney's wider internal path
proposal).

I think this is a refinement of Denker's
proposal because we don't have to store the
initial block while we're doing it and don't
have to make changes as deep into the existing
applications in order to implement it, and it is
more immediately usable than Finney's proposal
because we don't have to wait for the next
generation of hash functions to be written.

Bear





Re: A splint for broken hash functions

2004-08-31 Thread bear


On Sun, 29 Aug 2004, John Denker wrote:

bear wrote:

H2(H1) = H1( H1(M)  xor  H1( TT( M)))

I think that was intended to be something like

  H2(M)  = H1( H1(M)  xor  H1( TT( M)))
 ^

Actually, it was intended to take a hash function
as an argument and define a new hash function which
is Joux-resistant in terms of it.

Perhaps

JR(H1)(M) = H1( H1 (M) xor H1( TT( M)))

would have communicated the intent more clearly.

Anyway, functions on functions that return functions
kind of need lambda calculus for clear expression,
which is why there was Scheme pseudocode restating
the definition following this.

OK, it looks like we are in agreement on the general
outline of a reasonable method for splinting broken hash
functions.

However I think the given example of a  Trivial Transformation
is _too_ trivial.  There are too many common strings for which
swapping halves of the first block leaves the string unchanged.

Good point.  I don't want to mess with the IVs though,
because some hash functions may depend for their security
on the exact value of the IV.

 Also, I don't like the XOR function suggested above.  It
 is linear and totally lacking in one-wayness.

That's compensated, I think, by the outer application of H1.
But Hal Finney's response proposes using a 2n-bit to n-bit
hash function on the concatenation of the two H1 results,
which is definitely better.

 That means
 that an attacker has the option of making equal-and-opposite
 (or in the case of XOR, just equal :-) changes in the H1
 output and the H1prime output, leaving the result of the
 XOR unchanged.

It still requires finding a double collision, so the work
factor should be the same.

 Therefore I restate my preference for combining the H1 and
 H1 prime results nonlinearly, perhaps like this:
  Hnew(M)  = H1(M  (+)  H1prime(TT(M)))
 where (+) denotes the string-concatenation operator.

Okay, if your H1 and H1prime are nonidentical hash
functions, I don't think you're getting any added
security from the trivial transformation here; its
purpose in my construction was to force divergence
of the internal state between the two hash functions;
why is it included in yours?

In case anybody didn't notice, bear's idea of applying the TT
operator within a given block has some good properties.  In
particular it strengthens the hashing of short messages,
including one-block messages.

However IMHO using a truly trivial transformation is really
under-doing it.  Perhaps we should be thinking in terms of
Tasteful Transformations rather than Trivial Transformations.

:-)  You're right about that; a trivial transformation
makes it too easy to construct an attack by constructing
a block that reads the same under transformation as it
does in plaintext.  Instead of swapping the first and second
halves of the block, probably the first block of the message
should be replaced by its hash.

Responding to your issue about xor being too insecure by
taking Hal Finney's suggestion of using a wider hash function
to combine the outputs, and redefining TT for greater
strength, the construction becomes:

H2(M) = Hnew (H1 (M) + H1( TT(M)))

Where H2 is the Joux-resistant version of H1, which is our
desired objective.  TT now means replacing the first block of the
message by its hash value under H1, and + denotes concatenation.
However, this requires implementing an Hnew, which must take a
*single* large block of input (the size of two H1 outputs) and
produce a single H1-sized block of output.
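
In the same Scheme pseudocode style as my earlier post (H1, Hnew,
and concat are assumed primitives, and a message is taken to be a
list of blocks):

(define (joux-resistant H1 Hnew)
  (lambda (message)                    ; message: a list of blocks
    ;; TT: replace the first block of the message by its H1 hash
    (let ((tt-message (cons (H1 (list (car message)))
                            (cdr message))))
      (Hnew (concat (H1 message)
                    (H1 tt-message))))))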

Since we all seem to agree that an Hnew is needed to make the
suggestion concrete, what definition of Hnew do you prefer?

 Also I suggest the transformation should be applied to each
 and every block of the second stream, not just the first
 block.

Why?  Divergence in internal state ought to be complete after
the first block; constructing a message that would force the
internal states of the two hash functions to ever reconverge
would require the same work factor either way.  I don't think
a long-range permutation buys us anything.

It's cheap and it doesn't hurt. We can include it if it makes
us feel any more secure; but why should it make us feel any
more secure?

Bear



Re: More problems with hash functions

2004-08-26 Thread bear


On Wed, 25 Aug 2004, Hal Finney wrote:

Dan Carosone writes:
 My immediate (and not yet further considered) reaction to the
 description of Joux' method was that it might be defeated by something
 as simple as adding a block counter to the input each time.

Right after the talk, Scott Fluhrer (I think it was) spoke up with
several quick ideas for avoiding the attacks, some along these lines,
some involving overlapping blocks, and so on.  There was some rapid
back-and-forth between him and Joux which I could not follow, Joux
saying that these things could be dealt with, and Fluhrer finally seemed
to agree.  Nobody I spoke with afterwards had a simple fix.

It seems like, for every obvious thing you can do to get around
it, it can be rearranged and applied in a different way.  And the
less obvious things, which actually do work, are all CPU-expensive.

One interesting idea which I came up with and haven't seen a way
past yet is to XOR each block with a value computed from its
sequence number, then compute the hash function on the blocks in
a nonsequential order based on the plaintext of the blocks.

In concrete terms, you have a message of n blocks, p1 through pn.
You xor each block with a value computed by a nonlinear function
from its sequence number to get q1 through qn.  Now rearrange
q1 through qn by imposing a total ordering on p1 through pn: for
example if p4 sorted before p7, you put q4 in front of q7.
Finally, you compute the hash value on the blocks q1 through qn
in their new order.

Now the attack doesn't work; collisions against individual blocks
don't combine to produce a collision on the sequence because the
colliding values wouldn't have been fed into the hash function
in the same order as the actual blocks.
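
In the Scheme pseudocode style of my earlier posts, the construction
looks like this (H1 hashes a list of blocks; xor, the nonlinear
per-block function f, the block ordering block<, and sort are all
assumed primitives):

(define (reorder-hash H1 f blocks)         ; blocks: the list p1 .. pn
  (let* ((pairs                            ; ((p1 . q1) (p2 . q2) ...)
          (let loop ((i 1) (bs blocks) (acc '()))
            (if (null? bs)
                (reverse acc)
                (loop (+ i 1) (cdr bs)
                      (cons (cons (car bs) (xor (car bs) (f i)))
                            acc)))))
         ;; order the q's by the total ordering on the plaintext p's
         (sorted (sort pairs (lambda (a b) (block< (car a) (car b))))))
    (H1 (map cdr sorted))))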

Bear



Re: More problems with hash functions

2004-08-26 Thread bear


On Wed, 25 Aug 2004, Hal Finney wrote:

Bear writes:
 One interesting idea which I came up with and haven't seen a way
 past yet is to XOR each block with a value computed from its
 sequence number, then compute the hash function on the blocks in
 a nonsequential order based on the plaintext of the blocks.
snip

This is an interesting idea, but it does not fully prevent the Joux
attack.
 snip
The attacker could choose an ordering of the blocks ahead of time, and
find collisions which preserve that order.
 snip

Joux shows that finding a 2^k collision takes k times the work to
find a single block collision.  Among other things he shows how this
reduces the work to find a collision in the output of two independent
n-bit hashes from 2^n to n*2^(n/2).  Your approach effectively makes
this (n^3)*2^(n/2) which is an improvement, but still not attaining
the exponential security increase expected from ideal hash functions.

You are correct.  Thank you for your quick and clear thought
regarding the proposal.  Increasing the attacker's work by
a factor of n^2, where n is the number of blocks, preserves the
strength only where n^2 = (2^k)/2, where k is the hash size.

So, for a 160-bit hash, (2^k)/2 is 2^159, and that means the
proposed method preserves nominal strength only for messages
longer than 2^80 blocks.  Which in practical terms will not
happen; I don't think 2^80 bits of storage have yet been
manufactured by all the computer hardware makers on earth.

I guess I momentarily forgot that with a hash function the
attacker has access to the plaintext and can therefore tell
unambiguously the result of the permutation of the blocks.
I'll continue to think about it.  Maybe I'll come up with
something better.

Bear




RE: identification + Re: authentication and authorization

2004-07-10 Thread bear


On Thu, 8 Jul 2004, Anton Stiglic wrote:

The problem is not really authentication theft, it's identity theft, or if
you want to put it even more precisely, it's identity theft and
authenticating as the individual to whom the identity belongs.  But the
latter doesn't make for a good buzz-word :)

I have always thought that "credential fraud" would make a better
description than "identity theft".  The crime about which we are
concerned, literally, is the use of your credentials by someone
else in the commission of a fraud.

Theft would imply to me that he simply walked into the bank (or
wherever) and took the money (or whatever) at gunpoint, directing
them to charge your account; an image I find more than a little
preposterous.  There has to be some kind of fraud or subterfuge
for the proposed crime to even be credible.

Bear



Re: Security clampdown on the home PC banknote forgers

2004-06-08 Thread bear


On Tue, 8 Jun 2004, Axel H Horns wrote:

Hmm hmmm ... and what about Open Source graphics software like Gimp?

  http://www.gimp.org/

Will Gimp be banned because everybody can throw out the call to the
banknote detection routine?

Will the banknote detection software be made publicly available to the
Gimp developer team?

Questions over questions ...

Probably not; instead, the banknote detection stuff will probably be
pushed out to tamper-resistant hardware ROMs in the printers, where
it's *NOT* under the control of anything running on a general-purpose
computer.  Because, really, nothing prevents someone from building
their own electronic device from scratch and attaching it to the
printer. The logic has to be something you can't use the printer
without, and that means built into it.

This is actually a lot less annoying than something like Palladium,
where people want remote restriction on a general-purpose PC.  If
it's pushed out to the printing hardware, there's no need to restrict
the architecture of a general-purpose machine.

Of course, there is such a thing as money that really and truly
*can't* be counterfeited.  Elements such as gold, or other rare
commodities, for example, cannot be faked; something either is gold,
or it isn't.  Also, useful objects and consumables in general cannot
be faked; something either is useful, or it isn't.

Fiat currencies are based on artificially imposed rarity, and
increasingly people are able to overcome the artificial impositions.
Wouldn't it be a stitch if nations were forced to re-adopt the gold
standard (or adopt the chocolate standard) because all their bills
(and SmartCoins, and RFID tokens, and ...) could be counterfeited?

Bear



Re: The future of security

2004-05-30 Thread bear


On Sat, 29 May 2004, Russell Nelson wrote:

Eugen Leitl writes:
  If I'm a node in a web of trust (FOAF is a human), prestige will
  percolate through it completely. That way I can color a whole
  domain with a nonboolean trust hue, while a domain of fakers will
  have only very few connections (through compromises, or human
  mistakes), which will be rapidly sealed, once actually used to do
  something to lower their prestige (I signed the key of a spammer,
  please kill me now).

http://www.web-o-trust.org/

The trouble is that it requires human action, which is expensive and
becoming more expensive.

The bigger problem is that webs of trust don't work.
They're a fine idea, but the fact is that nobody keeps
track of the individual trust relationships or who signed
a key;  few people even bother to find out whether there's
a path of signers that leads from them to another person,
or whether the path has some reasonably small distance.

I have not yet seen an example of reputation favoring
one person over another in a web of trust model; it looks
like people can't be bothered to keep track of the trust
relationships or reputations within the web.

Bear



Re: The future of security

2004-05-28 Thread bear


On Fri, 28 May 2004, Anne  Lynn Wheeler wrote:

connecting systems that were designed for fundamentally safe and isolated
environment to wide-open anarchy hostile operation exposes all sorts of
problems. somewhat analogous to not actually needing a helmet for riding a
motorcycle ... or seat belts and airbags to drive a car.

Perspective on things...

Where I grew up, safety equipment inside your car (or on your head on
a motorcycle) was limited to that which prevented you from becoming
more of a hazard to *OTHER* drivers.  Motorcyclists didn't need
helmets, because helmets don't prevent crashes or change the
consequences of crashes for anyone who's not wearing them.  But they
did need eye protection, because eye protection reduced the
probability of crashes that could be dangerous to others.

I thought this was actually a well-considered system.  The law
required us to take whatever reasonable precautions we needed to
protect others from our actions, but it was entirely up to us whether
we attempted to protect ourselves from our own actions.

Now, in most states, law doesn't work this way any more -- protecting
people from each other has gotten fuzzed into the idea of protecting
the people (monolithic unit) from themselves (monolithic unit).

But I think there is some wisdom here that may apply to the spam
situation. Have partial solutions been getting rejected because we're
seeing that we can't protect users against their *own* stupidity?
What we actually need is systems to protect *other* users from their
stupidity.

Bear



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-29 Thread bear


On Tue, 23 Dec 2003, Seth David Schoen wrote:

When attestation is used, it likely will be passed in a service like
HTTP, but in a documented way (for example, using a protocol based on
XML-RPC).  There isn't really any security benefit obtained by hiding
the content of the attestation _from the party providing it_!

It's not the parties who are interested in security alone we're worried
about.  There is an advantage in profiling and market research, so I
expect anyone able to effectively subvert the protocols to attempt
to hide the content of remote attestation.

Bear



Re: Difference between TCPA-Hardware and other forms of trust

2003-12-22 Thread bear


On Sat, 20 Dec 2003, Ian Grigg wrote:

Bill Frantz wrote:

 [I always considered the biggest contribution from Mondex was the idea of
 deposit-only purses, which might reduce the incentive to rob late-night
 business.]

...

The first smart card money system in the Netherlands
was a service-station system for selling fuel to
truck drivers.  As security costs kept on rising,
due to constant hold-ups, the smart card system
was put in to create stations that had no money
on hand, so no need for guards or even tellers.

This absence of night time staff created a great
cost saving, and the programme was a big success.
Unfortunately, the early lessons were lost as time
went on, and attention switched from single-purpose
to multi-purpose applications.

This underscores an important point.  In security
applications limitations are often a feature rather
than a bug.  We are accustomed to making things better
by making them able to do more; but in some spaces
it's actually better to use a solution that can do
very little.

Much of the current security/cryptography angst can
be summed up as "small, limited, simple systems work,
but big, complex, general systems are very hard to
get right or have unintended drawbacks."  Often the
very generality of such systems is a barrier to their
wide adoption.

I would say that if you want to make any money in
cryptography and security (and make it honestly) you
should pick one business application, with one threat
model and one business model, and nail it.  Add no
features, nor even include any room in your design,
that don't directly address *that* problem.  When
you are able to present people with a solution to
one problem, which has no requirement of further
involvement than solving that one problem and introduces
no risks or interactions other than those flatly necessary
to solve that one problem, then they'll pay for it.

But when we start talking about multi-function cards,
it becomes a tradeoff where I can't get anything I want
without getting things I don't want or risking network
effects that will lead to markets dominated by business
models I don't want to deal with.  It makes the buy
decision complicated and fraught with risk.

Bear




Re: Difference between TCPA-Hardware and other forms of trust

2003-12-20 Thread bear


On Wed, 17 Dec 2003, Jerrold Leichter wrote:

Given this setup, a music company will sell you a program that you must
install with a given set of access rights.  The program itself will check
(a) that it wasn't modified; (b) that a trusted report indicates that it
has been given exactly the rights specified.  Among the things it will check
in the report is that no one has the right to change the rights!  And, of
course, the program won't grant generic rights to any music file - it will
specifically control what you can do with the files.  Copying will, of course,
not be one of those things.

I think that if the music company wants that much control
(which is, btw, in clear violation of the First Sale Doctrine),
then the only legal way for them to achieve it is to provide
a player specifically for the music which they own, in exactly
the same way that banks retain ownership of the credit cards
and smartcards we use.  As long as the player is not their
property, they can't do this.

The major reason I want a trusted kernel is that I
don't want to trust binaries provided by closed-source software
houses.  I want my trusted kernel to tell me exactly what
privileges they're asking for and I want to tell it exactly
what privileges it's allowed to provide them.  I want it to
be able to tell me exactly when every executable file appeared,
and as a result of running which other executable file (all
the way back to whichever command *I* gave that resulted in
its being there).  I want it to tell me exactly how the daemon
listening on any tcp port got installed and what privileges
it has.  I want my trusted kernel to keep tamper-proof logs;
in fact I'd go so far as to want to use a write-once medium
for logfiles just to make absolutely sure.

A trusted kernel should absolutely know when any program
is reading screen memory it didn't write, or keyboard
keystrokes that it then passes as input to another program,
and it should be possible for me to set up instant notification
for it to alert me when any program does so.

A trusted kernel should monitor outgoing network packets and
sound an alarm when any of them contains personal information
like PINs, passwords, keys, Social Security Number, Drivers
License, Credit Card numbers, Address, etc.  It should even
be possible to have a terminate-with-prejudice policy that
drops any such packets before sending and terminates and
uninstalls any unauthorized application that attempts to send
such packets.

I really don't care if anyone *else* trusts my system; as
far as I'm concerned, their secrets should not be on my
system in the first place, any more than my secrets should
be on theirs.  The fact is I'm building a system out of
pieces and parts from hundreds of sources and I don't know
all the sources; with an appropriate trusted kernel I
wouldn't have to extend nearly as much black box trust
to all the different places software comes from.


Yes, you can construct a system that *you* can trust, but no one else has
any reason to trust.  However, the capability to do that can be easily
leveraged to produce a system that *others* can trust as well.  There are
so many potential applications for the latter type of system that, as soon
as systems of the former type are fielded, the pressure to convert them to
the latter type will be overwhelming.

I do not think so.  People want to retain ownership of their
computer systems and personal information, and a system that
is made for *others* to trust would be used to take that
ownership and information.

 Ultimately, TCPA or no, you will be faced with a stark choice:  Join the
 broad trust community, or live in the woods.

No.  Lots of bands release music and encourage sharing, as promo
for their main revenue source (concert tours).  I see those bands
getting a leg up as their released music becomes popular while
music only available with onerous conditions languishes.  Lots of
other artists do graphic or animation work just for the chance to
be seen, and some of them are quite good.

You may consider it living in the woods to listen to stuff that
isn't the top 20; but I think lots of people will find that the
woods is a friendlier and more trustworthy place than a world
full of weasels who want to control their systems.

Bear



Re: yahoo to use public key technology for anti-spam

2003-12-07 Thread bear


On Sun, 7 Dec 2003, Anton Stiglic wrote:


- Original Message -
From: Carl Ellison [EMAIL PROTECTED]
To: 'Will Rodger' [EMAIL PROTECTED]; 'Steve Bellovin'
[EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Sunday, December 07, 2003 8:44 AM
Subject: RE: yahoo to use public key technology for anti-spam


 I, for one, hate the idea.  My From address should be [EMAIL PROTECTED]  That's
 my remailer where I receive all my incoming e-mail.  However, my outgoing
 SMTP server depends on which cable modem provider or hot spot I happen to
 be at the moment.  It would be that SMTP machine that signs my outgoing mail,
 not acm.org who never sees my outgoing mail.

But you should be sending mails via *your* SMTP server, and should be
connecting to that SMTP server using SSL and authentication.  Open relays
encourage spam.  People shouldn't be relaying mail via just any SMTP server.

This is generally how I work it.  I sit down at any hotspot and I
get network connectivity.  But all the hotspot is ever going to see
of my browsing, email, and anything else I like to keep private is
SSH packets to my home machine, or encrypted X packets running
between the X server on my laptop and X clients on my home machine.

A bit of lag is acceptable. Sending private mail via untrusted
SMTP servers is not.

Bear



Re: anonymous DH MITM

2003-10-05 Thread bear


On Sat, 4 Oct 2003, Benja Fallenstein wrote:
Does it work?

Assume A() is Alice's series, B() is Bob's, MA() is the one Mitch uses
with Alice, MB() the one Mitch uses with Bob.

- Mitch sends first half of cyphertext of MA(1000) (to Alice)
- Alice sends first half of cyphertext of her move + A(1000) (to Mitch)
- Mitch sends second half
- Alice sends second half

Mitch can now decrypt Alice's move.

- Bob sends first half of cyphertext of B(1000) (to Mitch)
- Mitch sends first half of cyphertext of Alice's move + MB(1000) (to Bob)
- Bob sends second half.
- Mitch sends second half.

Bob decides on his move.

- Bob sends first half of ciphertext of his move + B(999) (to Mitch)
- Mitch sends first half of ciphertext of MB(999) (to Bob)
- Bob sends second half.
- Mitch sends second half.

Mitch can now decrypt Bob's move...

Am I missing something?

Yes, although I hadn't immediately realized it would be necessary:
Timing information.  If you require 30-45 seconds between packets,
Mitch's game dies a rapid death.

T:0 - Mitch sends first half of cyphertext of MA(1000) (to Alice)
T:30 - Alice sends first half of cyphertext of her move + A(1000) (to Mitch)
T:60 - Mitch sends second half
T:90 - Alice sends second half

Mitch can now decrypt Alice's move.

T:60 - Bob sends first half of cyphertext of B(1000) (to Mitch)
T:90 - Mitch sends first half of cyphertext of Alice's move + MB(1000) (to Bob)
T:120 - Bob sends second half.
T:135 - Alice times out waiting for Bob's response because it's 45
seconds since her last packet. Mitch must commit to a move
ignorant of Bob's move by now, if he is to continue the game.
T:150 - Mitch sends second half of Alice's move to Bob

Bob decides on his move.

You could fiddle the intervals, within limits, or allow the players an
"I need more time to think" move, but if they're not allowed to use it
more than one time in three, then Mitch isn't going to be able to make
more than two moves.
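
As a sketch of the enforcement each client would run (using the
illustrative 30-45 second window from the timeline above):

    import time

    MIN_GAP, MAX_GAP = 30.0, 45.0   # illustrative window, in seconds

    class MoveTimer:
        """Abort the game if the peer's packets come too early or too late."""
        def __init__(self):
            self.last = time.monotonic()

        def packet_arrived(self):
            gap = time.monotonic() - self.last
            if gap < MIN_GAP or gap > MAX_GAP:
                raise TimeoutError("peer outside the 30-45s window; "
                                   "a relaying Mitch cannot keep up")
            self.last = time.monotonic()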


Bear



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: anonymous DH MITM

2003-10-04 Thread bear


On Fri, 3 Oct 2003, Benja Fallenstein wrote:

bear wrote:

 Why should this not be applicable to chess?  There's nothing to
 prevent the two contestants from making nonce transmissions twice a
 move when it's not their turn.

I.e., you would need a protocol extension to verify the nonces somehow--
if that's possible at all-- or are you just faster than me, and have
thought about a way to do that already?

Not faster per se, but I do happen to know the solution to that
problem.  :-)

Suppose Alice picks a nonce A(0).  Then for n = 1 to 1000
(presumably no chess game will last 1000 moves) she calculates A(n) =
hash(A(n-1)).  Note, this has to be a ONE-WAY hash function rather
than any kind of encryption that can be decrypted.  I'd suggest
seventeenth-power mod K where K is prime, but lots of good
irreversible hashing functions that aren't so expensive in CPU cycles
are around.

Bob also picks a nonce B(0) on his side, and produces the same kind
of sequence B(1)...B(1000) using the same hash function.

Now let the moves of the chess game be numbered from 1000 down to 0.
(i.e., the first move they play will be move 1000, the second will be
move 999, etc.)

When it's Bob's turn, he sends his move padded with B(n), and Alice
sends a random move padded with A(n).  When it's Alice's turn, she
sends her move padded with A(n) and Bob sends a random move padded
with B(n).

Bob can rapidly check to make sure that the A(n) received with each
message has the right relation to the A(n+1) he received with the
previous move, but there is no way he (or Mitch) can possibly predict
A(n-1) to know what he'll get in the next move.

Likewise Alice can rapidly check to make sure that the B(n) received
with each move has the right relation to the B(n+1) she received with
the previous move, but there is no way she (or Mitch) can predict
B(n-1) to know what she'll get with the next move.

The only change to the rules of chess this requires is that if they
ever exhaust the finite sequence of generated nonces, they have to
call that game a draw.  But a thousand moves, really, shouldn't be a
problem for chess, and if it is you can just make the sequence longer
and start a new game.
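
Here is the scheme in miniature, with SHA-256 standing in for the
one-way hash (any preimage-resistant hash will do):

    import hashlib

    def make_chain(seed: bytes, length: int = 1000) -> list[bytes]:
        """A(0) = seed; A(n) = hash(A(n-1)).  Revealed from A(1000) down."""
        chain = [seed]
        for _ in range(length):
            chain.append(hashlib.sha256(chain[-1]).digest())
        return chain

    def check_link(a_n: bytes, a_n_plus_1: bytes) -> bool:
        """Verify that the newly revealed A(n) hashes to last move's A(n+1)."""
        return hashlib.sha256(a_n).digest() == a_n_plus_1

    A = make_chain(b"alice's secret seed")    # the seed must stay secret
    assert check_link(A[999], A[1000])        # Bob's check on move 999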

Bear

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: anonymous DH MITM

2003-10-03 Thread bear


On Thu, 2 Oct 2003, Zooko O'Whielacronx wrote:


 Perhaps I spoke too soon?  It's not in Eurocrypt or Crypto 84 or 85,
 which are on my shelf.  Where was it published?


 R. L. Rivest and A. Shamir. How to expose an
 eavesdropper. Communications of the ACM, 27:393-395, April 1984.

Ah, I see.  It's an interesting application of a bit-commitment
scheme.

Hmmm.  The key to this is that synchronous communications have to
happen.  When it's your turn to move, you create a message that gives
the move, then pad it to some unsearchable length, encrypt, and send
half.  The MITM can't tell what the move is without seeing the second
half, so he either has to make something up and send half of that, or
just relay it unchanged.  The second half is sent by each player when
the first half has been received, and includes a checksum on the first
half that was actually received.
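
In outline, the mechanics look like this (a sketch only: the
encryption itself is elided, and it assumes a cipher for which half
the ciphertext reveals nothing about the plaintext, as the protocol
requires):

    import hashlib

    def split_halves(ciphertext: bytes):
        """First message: half the ciphertext; alone, it decrypts to nothing."""
        mid = len(ciphertext) // 2
        return ciphertext[:mid], ciphertext[mid:]

    def second_message(my_second_half: bytes, their_first_half: bytes) -> bytes:
        """Second message: the rest, plus a checksum binding what we received."""
        return my_second_half + hashlib.sha256(their_first_half).digest()

    def checksum_ok(second_msg: bytes, first_half_we_sent: bytes) -> bool:
        """The peer's checksum must match the first half we actually sent."""
        return second_msg[-32:] == hashlib.sha256(first_half_we_sent).digest()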

Mitch has the choice of playing his own game of bughouse against each
of the contestants, which just turns him into a third contestant, or
of allowing the first two contestants to complete their game without
interference.

Why should this not be applicable to chess?  There's nothing to
prevent the two contestants from making nonce transmissions twice a
move when it's not their turn.

Bear


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: anonymous DH MITM

2003-10-02 Thread bear


On Wed, 1 Oct 2003, Ian Grigg wrote:

M Taylor wrote:

 Stupid question I'm sure, but does TLS's anonymous DH protect against
 man-in-the-middle attacks? If so, how? I cannot figure out how it would,


Ah, there's the rub.  ADH does not protect against
MITM, as far as I am aware.

DH is an open protocol; it doesn't rely on an initial shared
secret or a Trusted Authority.

There is a simple proof that an open protocol between anonymous
parties is _always_ vulnerable to MITM.

Put simply, in an anonymous protocol, Alice has no way of knowing
whether she is communicating with Bob or Mallory, and Bob has no way
of knowing whether an incoming communication is from Mallory or from
Alice.  (That's what anonymous means.)  If there is no shared secret
and no Trent, then nothing prevents Mallory from being the MITM.

You can have anonymous protocols that aren't open be immune to MITM,
and you can have open protocols that aren't anonymous be immune to
MITM.  But you can't have both.

Bear

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Reliance on Microsoft called risk to U.S. security

2003-10-01 Thread bear


On Wed, 1 Oct 2003, Peter Gutmann wrote:

This doesn't really work.  Consider the simple case where you run Outlook with
'nobody' privs rather than the current user privs.  You need to be able to
send and receive mail, so a worm that mails itself to others won't be slowed
down much.  In addition everyone's sending you HTML-formatted mail, so you
need access to (in effect) MSIE via the various HTML controls.  Further, you
need Word and Excel and Powerpoint for all the attachments that people send
you.  They need access to various subsystems like ODBC and who knows what else
as an extension of the above.  As you follow these dependencies further and
further out, you eventually end up running what's more or less an MLS system
where you do normal work at one privilege level, read mail at another, and
browse the web at a third.  This was tried in the 1970s and 1980s and it
didn't work very well even if you were prepared to accept a (sizeable) loss of
functionality in exchange for having an MLS OS, and would be totally
unacceptable for someone today who expects to be able to click on anything in
sight and have it automatically processed by whatever app is assigned to it.

I think part of the point is that that expectation is a substantial
problem.

Data that moves between machines is inherently suspect; and if it can
originate at unknown machines (as in SMTP or NNTP), it should be
regarded as guilty until proven innocent.  There ought to be no way to
send live code through the mail.  Users simply cannot be expected to
have the ability to make an informed decision (as opposed to a habit)
about whether to run it, because its format does not give them enough
information to make an informed decision.

The distinction between live code and text is crucial.  While both are
just sequences of bytes, text has no semantics as far as the machine
is concerned.  Once you start sending something that has machine
semantics - something that contains instructions for the machine to
run, where running those instructions may cause the machine to do
something besides just display it - then you are dealing with live
code.  And live code is handy, but dangerous.

There is pressure to stick live code into any protocol that moves
text; SMTP sprouted 'clickable' attachments.  Java, JavaScript, and
now Flash seem to have gotten stuck into HTTP.  But I think that live
code really and truly needs a different set of protocols; and for
security's sake, there really need to be text-only protocols.  It
should be part of their charter and part of their purpose that they do
*NOT* under any circumstances deliver live code.

"Can be relied on to _only_ deliver text" is a valuable and important
piece of functionality, and a capability that has been cut out of too
many protocols with no replacement in sight.

Separating it by protocol would give people practical things that they
could do.  You could, for example, allow people to use a live-code
mail protocol inside a company firewall or allow a live-code browsing
protocol inside the firewall, while allowing only a text mail protocol
or a text browsing protocol to make connections from outside the
company.  We approximate this by trying to make smarter clients that
have different trust models for different domains, but that's always a
crapshoot; you then have to depend on a client, and if the client can
be misconfigured and/or executes live code it can't really be relied
on.  It would be better to have separate protocols; ideally, even
separate applications.
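
As a gatekeeping sketch, a text-only mail gateway at the firewall
could be as blunt as refusing any message part that isn't plain text.
The policy here is hypothetical, and real filtering would have to
handle encodings and nested messages:

    from email import message_from_bytes
    from email.policy import default

    ALLOWED = {"text/plain"}   # hypothetical policy: text only, nothing live

    def text_only(raw_message: bytes) -> bool:
        """Accept a message only if every leaf part is plain text."""
        msg = message_from_bytes(raw_message, policy=default)
        return all(part.get_content_type() in ALLOWED
                   for part in msg.walk()
                   if not part.is_multipart())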

One thing that I noticed in the responses to CyberInsecurity: The Cost of
Monopoly was that of the people who criticised it as recommending the wrong
solution, no two could agree on any alternative remedy.  This indicates just
how hard a problem this really is...

Indeed.  I think that there ought to be simpler, text-only protocols
for the use of people who don't need to send and receive live code, so
that they could be effectively protected from live code at the outset
unless they really need it.  Others, of course, disagree.

Bear

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Monoculture

2003-10-01 Thread bear


On Wed, 1 Oct 2003, John S. Denker wrote:

According to 'ps', an all-up ssh system is less
than 3 megabytes (sshd, ssh-agent, and the ssh
client).  At current memory prices, your clients
would save less than $1.50 per system even if
their custom software could reduce this bulk
to zero.

That's not the money they're trying to save.  The money they're trying
to save is spent on the salaries of the guys who have to understand
it.  Depending on what needs you have, that's anything from
familiarity with setting up the certs and authorizations and servers
and configuring the clients, to the ability to sit down and verify the
source line by line and routine by routine.  The price of computer
memory is a non sequitur here; people want something dead-simple so
that there won't be so much overhead in _human_ knowledge and
understanding required to operate it.

Crypto is not some game where nobody has to understand how it works;
key management and cert management are complex issues, and people have
to be hired to do them.  Code that has so much riding on it has to be
audited in lots of places, and people have to be hired to do that.
Every line of code costs money in an audit, even if somebody else
wrote it.

So, yeah, they'd rather see a lot of stuff hard-coded instead of
configurable; hard-coded is easier to verify, hard-coded has less
configuration to do, and hard-coded is cheaper to own.  We get so busy
trying to be all things to all people in computer science that we
often forget that what a lot of our clients really want is simplicity.

1) Well, they could just ignore the new release
and stick with the old version.  Or, if they think
the new features are desirable, then they ought
to compare the cost of re-stripping against the
cost of implementing the new desirable features
in the custom code.

And in a lot of places that's exactly what they do.  If the shop
requires a full code audit before taking any new software, going to
the new version can cost tens of millions of dollars over and above
the price.  And the bigger the new version's sourcecode is, the more
the audit is going to cost.

2) If you do a good job stripping the code, you
could ask the maintainers to put your #ifdefs into
the mainline version.  Then you have no maintenance
hassle at all.

You wouldn't.  But the people who have to slog through that tarball of
code for an audit get the jibblies when they see #ifdefs all over the
place, because it means they have to go through line by line and
routine by routine again and again and again with different
assumptions about what symbols are defined during compilation, before
they can certify it.

Bear

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Code breakers crack GSM cellphone encryption

2003-09-10 Thread bear


On Mon, 8 Sep 2003, Dave Emery wrote:

   Just to amplify this a bit, does anyone seriously think the
NSA's satellite and embassy based cellphone interception capability is
primarily targeted against - US - GSM calls ?   Or that they can
routinely get warrants to listen in using the wired tapping
infrastructure in say Russia or France or Iran ?

Of course the NSA's satellite and embassy based cellphone interception
capability isn't primarily targeted against - US - calls; that would
be illegal.  The snooping in the US is done by others and then handed
over to the NSA instead.  And of course the NSA does the same for
them.  This is what the UKUSA agreement is all about.

Bluntly, no matter who does the actual interception work, in the
modern world every intel agency's analytic and correlative resources
are targeted against everybody in the world.  To say that some
particular agency doesn't do intercepts in some particular country is
irrelevant; it's all just data.  Remember lawmakers learning that the
internet "treats censorship as damage and routes around it"?  Well,
we're looking at the same phenomenon here: the worldwide intel
community treats privacy laws and operational restrictions as damage
and routes around them.  It's exactly the same thing.

I'd be willing to bet most nations even get intel on their own
citizens that's gathered by actively hostile countries: an actively
hostile nation, let's say, snoops on American citizens.  Then they
share the intel product with someone they've got a treaty with, and
then that country shares it with somebody they've got a treaty with,
and they share it with the US.  It's all just routing.  Someone has
information somebody else wants, somebody else has money or intel to
swap for it.  It doesn't take a genius to figure out, it's just going
to happen.  Anything an intel service shares with anybody, it's
putting into the network, and it's going to get around to everybody.

Bear

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: blackmail / real world stego use

2003-08-27 Thread bear


On Sat, 23 Aug 2003, Barry Wels wrote:

Hi,

So far I have only found one English item in the news about this.

http://www.expatica.com/index.asp?pad=2,18,item_id=33655

So let me translate some of the Dutch information about this
interesting case:

snip - story of a Dutch man who used surfola.com, an American
 anonymizing service claiming "we will not give out your info
 to anyone ever", to browse a website containing stego data
 worth US$185K that he'd extorted a Dutch company into placing
 there.  FBI was apprised of the situation, had his email address
 within 24 hours - he was arrested by (Dutch?) police the
 instant he tried to touch the money.

It is interesting to speculate about whether the FBI served
surfola.com with a warrant.  If the anonymizing service is
transparent after the fact to the details recorded during
the FBI's ordinary daily monitoring of the internet, then we
live in interesting times indeed.

That would imply packet recording and correlation on a level
greater than we've ever considered to be in the arsenal of
cryptographic threats, and it would imply the emergence of forces
(inevitably including forces other than governments) with
eavesdropping capabilities that cannot be defeated except by
time-delayed packet relay through many hosts, with re-encryption
and re-decryption at each step of the way.

That is a model that does not permit realtime communication,
meaning that monitoring may be impossible to escape for
realtime activities such as web browsing.

Bear



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Announcing httpsy://, a YURL scheme

2003-07-16 Thread bear


On Wed, 16 Jul 2003 [EMAIL PROTECTED] wrote:

A YURL aware search engine may find multiple independent references to a
YURL, thus giving you parallel reporting channels, and increasing trust.

But any single search engine is itself a single reference, regardless
of how many times, and in how many contexts, it prints the reference
on the page.

IOW, if you go to Mallory's search engine, then no matter how many
references you find, they're all coming to you through the same
channel and you have to trust Mallory.

Bear


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Looking for an N -out-of-M split algorithm

2003-07-16 Thread bear


On Wed, 16 Jul 2003 [EMAIL PROTECTED] wrote:

Hi,

I remember reading (many years ago) a description on some web page somewhere
of an algorithm by which an arbitrary file F could be split into M pieces,
such that:
(1) given any N pieces, F can be reconstructed precisely, and
(2) given fewer than N pieces, it is impossible to determine even a single
bit of information about F.

Unfortunately, that was many years ago, and -- search as I might -- I
haven't been able to find it on web now.

Does anyone have any idea where I might learn about this algorithm - or
indeed any algorithm which does the job.

[Moderator's note: look for Shamir secret sharing -- the trick is just
turning the secret into a polynomial of degree N-1 so that with enough
points you determine the polynomial uniquely and with too few you
can't determine it. I'm pretty sure that Schneier and all of the other
standard references explain this trick. --Perry]

Perry has it exactly right, although he was pretty brief and gave no
examples.

Let's say you want to be able to reconstruct a message given any
two out of three splits of the message.  What you do is plot the
message as the y-intercept on a Cartesian graph, and draw a line
in some random direction (where random != straight up).

Now, the line you've drawn crosses the x=0 vertical line at the
message, and it crosses the x=5000 line at some other point whose y
coordinate is A, and it crosses the x=10000 line at some other point
whose y coordinate is B, and it crosses the x=15000 line at yet some
other point, whose y coordinate is C.

A, B, and C are your three secret splits.  Unless someone has at least
two of them, they have no information about the slope of the line and
they can't project it back to the (x=0, y=M) point to get the message.
If you want two out of four, or two out of six, or whatever else, it's
the same thing; two points establish a line, so you can just pick more
points along the line.

If you want a 3-out-of-N secret split, you need to use a curve that
requires three points to establish (such as a quadratic,
y = ax^2 + bx + c).  And so on...
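
Here is a toy sketch of the two-out-of-three case.  Shamir's actual
scheme does the arithmetic in a finite field rather than over the
reals; working mod a prime is what guarantees a single share carries
no information.  The message is encoded as an integer smaller than P:

    import secrets

    P = 2**127 - 1   # a Mersenne prime; all arithmetic is mod P

    def split_two_of_three(message: int):
        """Put the message at x=0, pick a random slope, hand out three points."""
        slope = secrets.randbelow(P)
        return [(x, (message + slope * x) % P) for x in (5000, 10000, 15000)]

    def recover(p1, p2):
        """Two points determine the line; project back to its y-intercept."""
        (x1, y1), (x2, y2) = p1, p2
        slope = (y2 - y1) * pow(x2 - x1, -1, P) % P
        return (y1 - slope * x1) % P

    a, b, c = split_two_of_three(42)
    assert recover(a, c) == 42   # any two shares suffice; one reveals nothing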

Bear



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Attacking networks using DHCP, DNS - probably kills DNSSEC

2003-07-01 Thread bear


On Tue, 1 Jul 2003, Peter Gutmann wrote:


 Given that their goal is zero-configuration networking, I can see
 that being required to provide a shared secret would mess things up
 a bit for them.  It'd be a bit like PKIX being asked to make
 ease-of-use a consideration in their work, or OpenPGP to take X.509
 compatibility into account.

I tend to agree...  I don't think zero-configuration networking has
any real possibility of creating safety zones beyond the immediate
physical machine.  After all, if you can plug it into any network and
it just works, you can plug it into an insecure or subverted network
and it'll just work.

At the very least you've got to have a file of keys.

Bear





-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Attacking networks using DHCP, DNS - probably kills DNSSEC NOT

2003-06-30 Thread bear


On Mon, 30 Jun 2003, Simon Josefsson wrote:

Bill Stewart [EMAIL PROTECTED] writes:

* Your laptop sees and uses the name yahoo.com.attackersdomain.com.
   You may be able to verify this using your DNSSEC root key, if the
   attackersdomain.com people have set up DNSSEC for their spoofed
   entries, but unless you are using bad software or judgment, you will
   not confuse this for the real yahoo.com.

 The DNS suffix business is designed so that your laptop tries
 to use yahoo.com.attackersdomain.com, either before yahoo.com
 or after unsuccessfully trying yahoo.com, depending on implementation.
 It may be bad judgement, but it's designed to support intranet sites
 for domains that want their web browsers and email to let you
 refer to "marketing" as opposed to marketing.webservers.example.com,
 and Netscape-derived browsers support it as well as IE.

It can be a useful feature, but it does not circumvent DNSSEC in any
way that I can see.  DNSSEC sees yahoo.com.attackersdomain.com and can
verify that the IP addresses for that host are the ones that the owner
of the y.c.a.c domain publishes, and that is what DNSSEC delivers.
The bad judgement I referred to was if your software, after DNSSEC
verification, confuses yahoo.com with yahoo.com.attackersdomain.com.

I think that the problem would be somewhat ameliorated if there
were a DNS cache on the laptop itself.  It would still use DNS
servers, but if it got a different IP number for the same hostname,
it should notify someone.

This can happen without an attack going on, if the legitimate
site's DNS record changes because the IP address of that
service actually changes - but sites like Yahoo, PayPal,
etc., have a lot of infrastructure and momentum behind them.
Those IP addresses don't change on a whim.  And those are the
major targets for a DNS spoof.
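
A toy sketch of that cache-and-notify idea (a real resolver cache
would persist its pins and cope with multi-address and round-robin
records):

    import socket

    pins = {}   # hostname -> last address we resolved it to

    def resolve_pinned(hostname: str) -> str:
        """Resolve normally, but complain if the answer differs from last time."""
        addr = socket.gethostbyname(hostname)
        previous = pins.get(hostname)
        if previous is not None and previous != addr:
            print(f"WARNING: {hostname} moved from {previous} to {addr}")
        pins[hostname] = addr
        return addr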

Bear



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Draft Edition of LibTomMath book

2003-06-25 Thread bear


On Wed, 25 Jun 2003, tom st denis wrote:

The Draft Edition of the LibTomMath book [book about how to implement
bignum math] is freely available on my site at

http://book.libtomcrypt.org

Keep in mind it is a draft and has not been edited yet.  However, if
you ever wanted to learn how to implement efficient [portable too]
bignum math routines you might want to give it a read.

Enjoy,
Tom

One thing that I've noticed for a long time is that there
are *VERY* few math libraries that don't leave whatever
numbers they're working with in memory when deallocating
(whether deallocating heap via free(), deallocating stack by
returning from a procedure call, or deallocating swap space
by getting paged back in off a disk).

And numbers that an application leaves lying around in
whatever working memory or media it's using can be
discovered and exploited by other programs - frequently
by unauthorized ones.

Windowing systems have the same kind of leakage, but you
can avoid using windowing systems with a crypto program;
there's no need to put sensitive information like keys
or passwords on the screen ever.  Admittedly, I'd like
to have a secure windowing system, but it seems unlikely.

But I think math is indispensable to crypto, and there
ought to be a mathematics library that is secure in this
sense - one that scrubs its buffers before releasing them.
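
Python cannot fix this for its own immutable bignums, but as an
illustration of the discipline such a library would need, here is a
sketch that wipes a mutable key buffer before release (it does nothing
about copies the runtime or swap may already hold):

    import ctypes

    def wipe(buf: bytearray) -> None:
        """Overwrite a secret in place before the buffer goes back to the heap."""
        ctypes.memset((ctypes.c_char * len(buf)).from_buffer(buf), 0, len(buf))

    key = bytearray(b"super secret key material")
    # ... use the key ...
    wipe(key)
    assert key == bytearray(len(key))   # all zeros now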

Bear


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: New vs Old (was Snake Oil)

2003-06-04 Thread bear


On Tue, 3 Jun 2003 [EMAIL PROTECTED] wrote:


I confess to being confused - though admittedly part of the blame for this
is my own ignorance.

I remember a time when PGP was a command line application. The only
algorithms it used were IDEA (symmetric), RSA (asymmetric) and MD5 (hash). I
came to trust these algorithms.

Now these once-'standard' algorithms are no longer encouraged. The new
versions of PGP seem to prefer CAST instead of IDEA, DH/DSS instead of RSA,
and SHA-1 instead of MD5.

So, could someone please tell me:

(1) What is the justification for using these new algorithms instead of
the old ones? (A cynic might suggest that, since the powers that be
couldn't break the old algorithms, they encouraged the use of new ones that
they could. This probably isn't true, but I'm sure you can understand why
someone might think that).

Well - Hans Dobbertin found collisions in MD5's compression function,
and while I haven't heard much more, that's a toehold that somebody
might be able to use to break it, and it makes MD5 vulnerable in some
applications.  SHA-1 is now considered better.

IDEA is still a good cipher as far as I know, but PGP has been driven
away from it in the US due to intellectual-property issues.  Rather than
continue with incompatible versions for use inside/outside the USA, they're
switching to CAST (although this is causing more, rather than less, version
incompatibilities).

RSA is still good, as far as I know, and has been in the public domain
worldwide since September 2000, when its US patent expired.  But it
had the same kind of IP issues
as IDEA until that point, and several versions of PGP had to be
produced that used a different asymmetric cipher for that reason.  I
don't know enough about DH/DSS specifically to comment further on its
relative security, but RSA has had several scares and people are
concerned that custom hardware (such as a million-qubit quantum
computing device or Bernstein's matrix hardware factoring device)
might cause insecurity in RSA _and_ be possible for someone to keep
secret.  And lots of people quit using RSA because they don't like
the big block of key that it requires.

(2) What actually _IS_ DH/DSS? (I don't mean what the initials stand
for, I mean what actually is the algorithm?)  I ask because I can understand
RSA, and implement it myself relatively straightforwardly, but I have not
been able to find an explanation, simple or otherwise, of what the DH/DSS
algorithm actually is, or of why it's hard to break.

(3) Ditto CAST and SHA-1.

For a good, complete description of SHA-1 and a few others, try

http://csrc.nist.gov/publications/fips/fips180-2/fips180-2.pdf

(warning: link may be outdated).

I don't have pointers to the other two offhand.

Bear


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: PGP Encryption Proves Powerful

2003-05-31 Thread bear

Aside from the whole governments-and-people-and-terrorists thing,
I will say that there was an event last year at my former employer's
that made us very glad we were using PGP.

An engineer's laptop got stolen, with the entire source tree of an
enterprise application that licensed for $25K a seat on it.  Fortunately,
since it was in an encrypted archive, we didn't need to worry too much.

I don't know how many incidents like this happen every year.  I don't
think governments care that much about the risk companies take when
they don't use crypto to protect their livelihoods.  They don't become
aware of crypto when it averts trouble.  They become aware of crypto
when it causes trouble.

Bear


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]