Bletchley Park restoration

2008-09-09 Thread Jerrold Leichter

[Moderator's note: I posted on this earlier, but I really do want to
see Bletchley Park maintained... :) --Perry]

IBM and PGP have donated $100,000 to help restore and maintain  
Bletchley Park as a museum.  This money is intended to get others  
involved - millions more will be needed.


Details:

http://news.bbc.co.uk/2/hi/technology/7604762.stm

-- Jerry




More man-in-the-middle'd SSL sessions on the way

2008-08-08 Thread Jerrold Leichter

From an article about WAN optimization appliances in Computerworld:

    In some markets, such as health and finance, [hiring] a managed
    provider [who will do the encryption outside your routers] isn't a
    good option for another reason: Because data is optimized in an
    unencrypted state, privacy and security concerns arise. But vendors
    such as Riverbed, Juniper Networks and Blue Coat Systems can serve as
    a trusted man in the middle for optimizing data encrypted with SSL,
    which is commonly used in applications with Web interfaces and other
    Internet traffic. They terminate the encrypted session, decrypt,
    optimize and then re-encrypt and forward the traffic. [Gartner's Joe]
    Skorupa said most vendors are developing this useful capability.

It may indeed be a useful capability - but widespread use will destroy  
what little is left of the SSL trust model.


-- Jerry




On the difficulty of detecting on-line fraud

2005-10-02 Thread Jerrold Leichter
Not cryptography, but ultimately what we talk about here often comes down to 
protection that actually works *for people*.


Also a good counter to arguments of the form "if only people were more
careful."

-- Jerry


From: [EMAIL PROTECTED]
Subject: User Interface Design Newsletter - September, 2005
Date: September 28, 2005 3:37:43 PM EDT


User Interface Design Update Newsletter - September, 2005

Each month HFI reviews the most useful developments in UI
research from major conferences and publications.

View in HTML - http://www.humanfactors.com/downloads/sep05.asp
__

In this issue:

Kath Straub, Ph.D., CUA, Chief Scientist, looks at recent research
on how people detect, and often miss, Web site fraud.

The Pragmatic Ergonomist, Dr. Eric Schaffer, gives practical advice.
__

PHISHING AND PHARMING AND PHRAUD, OH MY

The ability to recognize people who want to take advantage of you is core to
survival. Researchers studying the evolution of cognition suggest that we
begin to develop generic cheating detection algorithms through exposure to
the types of deception that occur day to day (Cosmides and Tooby, 1989; Cheng
and Holyoak, 1985; Vasek, 1986).  In a general way, we learn to suspect
deception and become cautious when there is a notable inconsistency between
what is happening and what we expected to happen.

Yet, consumers' ability to spot fraud on the Internet is still not very
good. This is because our ability to hone our generic cheater detectors
depends on our specific or mediating knowledge of the deception
environment. When you think about it, it's not hard to imagine why. Even savvy
users find it hard to keep up with the newest scam. Can you define Phishing?
How about Pharming?

Here are the Wikipedia definitions for these Internet deception methods:

  - Phishing: (also carding and spoofing) is a form of social engineering,
 characterized by attempts to fraudulently acquire sensitive information,
 such as passwords and credit card details, by masquerading as a
 trustworthy person or business in an apparently official electronic
 communication, such as an email or an instant message. The term phishing
 arises from the use of increasingly sophisticated lures to fish for
 users' financial information and passwords.

  - Pharming: is the exploitation of a vulnerability in the DNS server software
 that allows a hacker to acquire the Domain Name for a site, and to
 redirect that Web site's traffic to another Web site.

And there's more:

  - Page-jacking and mouse-trapping: are techniques used by scammers to divert
 Internet users from their intended Web destination (page-jacking) to the
 scammer's site, from which the user is unable to leave using the browser's
 back, forward or even close buttons (mouse-trapping).

And, with all the excitement about phishing and pharming, people forget about
just plain fraud.

It's not surprising that people have a hard time identifying Internet
deception. The specific cues you use to detect fraud in the rest of your life
don't really apply in cyberspace. In bricks-and-mortar transactions you
can see who you are dealing with. In cyberspace, grifters are harder to
spot... if they are even there at all.

THE AVERAGE VICTIM OF INTERNET FRAUD LOSES OVER $700 NOT COUNTING LOST TIME

The good news is that as consumers learn more about how the Internet works
they will, by extension, learn more about how Internet deception works. It
will become much harder to dupe them. Like magic, deception is usually not so
tricky if you know where to look. The challenge then, is to help consumers
learn where to look.

Organizations like Consumer WebWatch, the Internet arm of Consumers Union, have
published reports intended to guide consumers to correctly identify the
characteristics of a credible Internet site. One problem is that not enough
consumers read their reports. And of those that do read them, not enough
actually check the cues. Another problem is that those who practice Internet
fraud do seem to read the reports.

Researchers like Grazioli are taking a different route. Grazioli's work (and
his work with colleagues like Jarvenpaa) contrasts the differences between the
behavior of successful and unsuccessful deception detectors. Consumers good at
detecting deception on the Internet rely on assurance cues -- concrete
parameters of an organization or its business model that can be evaluated for
truthfulness (e.g., the phone number) or legal validity (e.g., a warranty). In
contrast, consumers who fail to notice deception tend to assign credibility
based on trust cues -- self-report marketing elements (e.g., customer
testimonials or product sales reports) which are difficult to verify, at best.

WHEN PEOPLE ARE LYING THEY TEND TO TOUCH THEIR FACES. WHAT DO WEB SITES DO?

Grazioli 

Re: Java: Helping the world build bigger idiots

2005-09-21 Thread Jerrold Leichter
|  One thing to consider is that an idiom like this solves an annoying problem.
|  Consider a linear search through an array:
|  
|  for (i = 0; i < lim; i++)
|  {   if (a[i] == target)
|  {   do something
|  break;
|  }
|  }
|  /*
|   * Did we get here because we matched or because we
|   * failed to match?
|   */
| 
| No, we got here because we didn't know basic C usage.  Come on
| people, please stop creating these fake illustrations.
Oh, come on.  This loop has a trivial increment and a trivial stopping 
condition, for the purpose of illustration.  But even here, it's not so 
simple.  Better style in general is to limit the loop variable's scope to the 
loop:

for (int i = 0; i < lim; i++)
{   }

Now i isn't accessible after the loop is done.  My personal style has always 
been to treat the loop variable as local to the loop, even when C didn't let
you declare it that way.  There are exceptions - especially cases where you 
traverse the same array in two consecutive loops, the second picking up where 
the first ended - but I've generally found that to be a better style.

| A real C programmer would have known that, if i == lim, there
| was no match.  This is so trivial it beggars belief that it
| needs to be pointed out in a forum like this.
| 
|  Personally, I sometimes use:
|  
|  for (i = 0; i < lim; i++)
|  {   if (a[i] == target)
|  goto found;
|  }
|  
|  This draws shock and horror from some code readers, but I don't care.  :-)
|  Note how much it looks like the exception-based code.
| 
| It only draws gasps from people who don't know C.  The goto that
| is famously considered harmful is not spelled goto in C, but
| rather longjmp; it's not used all that often and does need
| careful handling.  The C goto statement is purely a local goto
| and scares nobody who has grown up.
Nice to know we're all adults here :-)
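
Purely for illustration, here's a self-contained version of the two idioms
discussed above (the array contents are made-up test data; nothing here is
meant as production code):

#include <stdio.h>

int main(void)
{
    int a[] = { 3, 1, 4, 1, 5, 9 };             /* made-up test data */
    int lim = 6, target = 4, i;

    /* Idiom 1: let i survive the loop and test it afterwards. */
    for (i = 0; i < lim; i++)
        if (a[i] == target)
            break;
    if (i == lim)
        printf("no match\n");
    else
        printf("match at index %d\n", i);

    /* Idiom 2: jump straight to the handling code on a match;
       falling out of the loop means there was no match. */
    for (i = 0; i < lim; i++)
        if (a[i] == target)
            goto found;
    printf("no match\n");
    return 0;
found:
    printf("match at index %d\n", i);
    return 0;
}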

-- Jerry




Re: Clearing sensitive in-memory data in perl

2005-09-17 Thread Jerrold Leichter

[Moderator's note: forwarded on Jerry's behalf -- he's having mail problems.]

| So wouldn't the world be a better place if we could all agree on a
| single such library? Or at least, a single API. Like the STL is for C++.
| 
| 
| 
|  Yes, absolutely, but who is going to do it?
|
| One could argue it has already been done.
|
| There exists a widely available, freely available, well-implemented
| example of something just like the STL for C++.  It is called
| the STL for C++.
|
| a) Writing in C++ is easier than writing in C.
|
| b) Porting legacy C to C++ isn't rocket surgery.  It can be done
| incrementally.

And the STL is safer than C ... in what sense?  STL iterators are modeled on C
pointers - warts and all.  An STL iterator can point to random memory just as
easily as a C pointer.  An indexing operation into an STL vector or similar
data structure is subject to exactly as much range checking as a C indexing
operation.

Yes, there exist implementations of the STL that check such things - just as
there exist implementations of C that check such things.  None appear to be
widely used.  And, of course, while no standard *forbids* such checks, none
*require* them, so (a) availability is spotty; (b) compilers typically contain
no optimizations aimed at making such things efficient.

RISKS had a large number of messages on this topic back in 2002.  I wrote one
long commentary (in RISKS 21.85 - see http://catless.ncl.ac.uk/Risks/21.85 for
one on-line source) that I stand by to this day.  Very little has changed.
(Actually, there has been a *bit* of improvement:  Microsoft submitted a
proposal for a set of new string - and related - functions for standardization
in the C library.  They differ from the classic functions in always having an
explicit output buffer length.  Personally, I find the particular API typical
of Microsoft - butt-ugly and a pain to use - and I *think* that the standards
groups may be re-working them.  But one way or another, after all these years,
we may eventually have safe alternatives - once we work through the
standardization process and get the stuff out there.  I'd guess another 2 years
at least before you can safely assume that it'll be available.)
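
To be clear, the sketch below is *not* Microsoft's proposed API - it's just the
general shape of the idea (an explicit output-buffer length on every call),
written with nothing but standard C:

#include <stdio.h>
#include <string.h>

/* Classic interface: nothing tells the callee how big dst really is. */
void copy_classic(char *dst, const char *src)
{
    strcpy(dst, src);                   /* overflows if src is too long */
}

/* General shape of the safer interfaces: the destination size is an
   explicit parameter and the result is always NUL-terminated.
   (Standard snprintf is used here purely for illustration.) */
int copy_bounded(char *dst, size_t dstsize, const char *src)
{
    int n = snprintf(dst, dstsize, "%s", src);
    return (n >= 0 && (size_t)n < dstsize) ? 0 : -1;    /* -1 = truncated */
}

The point isn't these particular functions; it's that the buffer size travels
with every call.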

-- Jerry





Re: [Clips] Escaping Password Purgatory

2005-08-05 Thread Jerrold Leichter
|  Computer Hardware Software
|  Escaping Password Purgatory
|  David M. Ewalt,  08.03.05, 3:00 PM ET
| 
|  ... I think I have passwords for
|  over 47 different applications both internal and external that I access,
|  and I've acquired those IDs and passwords over several years, says Wayne
|  Grimes, manager of customer care operations for the U.S. Postal Service.
| 
| Try Site Password, 
| http://www.hpl.hp.com/personal/Alan_Karp/site_password/.  It takes a 
| good master password, and a site name, and hashes them together to produce 
| a site-specific password.
| 
Hmm.  I came up with the same idea a while back - though with a different 
constraint:  I think it's reasonable to trade off the one-wayness of the
hash for the ability to work out the password with pencil and paper when
necessary.  Various classic pencil-and-paper encryption systems can be bent
to this purpose.  Since the volume of data encrypted is very small and it's
hard for an attacker to get his hands on more than tiny samples - a given
web site only sees its own password - you don't need much strength to give a
reasonable degree of protection.
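
For what it's worth, here's a minimal sketch of the general
hash-the-master-password-with-the-site-name idea - not the actual Site Password
algorithm, and not the pencil-and-paper variant I described above.  It assumes
OpenSSL's one-shot SHA256() is available (link with -lcrypto):

#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

/* Derive a per-site password by hashing "master:site" and mapping part
   of the digest into a printable alphabet.  Illustration only. */
void site_password(const char *master, const char *site,
                   char *out, size_t outlen)
{
    static const char alpha[] =
        "abcdefghijkmnpqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ23456789";
    char buf[1024];
    unsigned char digest[SHA256_DIGEST_LENGTH];
    int n = snprintf(buf, sizeof buf, "%s:%s", master, site);

    if (n < 0 || (size_t)n >= sizeof buf) { out[0] = '\0'; return; }
    SHA256((const unsigned char *)buf, (size_t)n, digest);
    for (size_t i = 0; i + 1 < outlen && i < sizeof digest; i++)
        out[i] = alpha[digest[i] % (sizeof alpha - 1)];
    out[outlen - 1] = '\0';
}

int main(void)
{
    char pw[13];
    site_password("my master passphrase", "example.com", pw, sizeof pw);
    printf("%s\n", pw);                 /* same inputs -> same password */
    return 0;
}
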
-- Jerry



Re: Query about hash function capability

2005-08-05 Thread Jerrold Leichter
| Hi all,
| 
| My question relates to hash functions in general and not specifically
| cryptographic hashes. I was wondering if there exists a group of hash
| function(s) that will return an identical result for sequentially
| similar yet rotate/shift wise dissimilar input:
| 
| ie: input1 : abcdefg - h(abcdefg) = 123
| input2 : gabcdef - h(gabcdef) = 123
| input3 : fgabcde - h(fgabcde) = 123
| 
| Here a,b,c,d,e,f,g represent symbols (ie: groups of bits with equivalent
| group sizes etc...)
| 
| I know that one simple hash method would be to add the symbols
| together, but the results would also be equivalent if say the symbols
| were in any order, also collisions would occur with other totally
| dissimilar sequences that happen to have the same sum as the sequence.
| 
| Is there anything out there research/papers etc, or is this a meaningless
| avenue of enquiry?
| 
| 
| any help would be very much appreciated.
Rotate the input string until it has the smallest possible value among all 
possible rotations.  Possible rotations are those that you want to consider 
equivalent under the hash - if you want just ab and ba as ASCII strings to 
be equivalent, then allow only rotations in units of bytes.  If you also want 
0xc2c4 - the result of rotating that pair of bytes left by one bit - to be 
equivalent, include bit rotations in the possible rotations.  Finally, hash 
using any standard algorithm.  (What this is doing is partitioning the set of 
inputs into equivalence classes - where two inputs are in the same equivalence 
class if they are intended to hash to the same value - and then replacing the 
input string by a unique representative of the equivalence class.  You can 
define equivalence classes any way you like, as long as you can compute a 
unique representative.  For example, a hash that ignores upper/lower case
distinctions is trivially realized by replacing all letters in the input with 
their lower-case equivalents.  That just chooses the all-lower-case version as 
the representative of the set of case-equivalent versions of the string.)
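
For the byte-rotation case, a concrete sketch (the hash at the end is an
ordinary non-cryptographic string hash, djb2, used purely for illustration):

#include <stdio.h>
#include <string.h>

/* Index of the lexicographically smallest rotation of s, so that all
   rotations of the same string share a single representative. */
static size_t min_rotation(const unsigned char *s, size_t n)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        for (size_t k = 0; k < n; k++) {
            unsigned char a = s[(best + k) % n], b = s[(i + k) % n];
            if (a != b) { if (b < a) best = i; break; }
        }
    return best;
}

/* Hash the canonical rotation with an ordinary string hash. */
unsigned long rotation_invariant_hash(const unsigned char *s, size_t n)
{
    size_t start = min_rotation(s, n);
    unsigned long h = 5381;
    for (size_t k = 0; k < n; k++)
        h = h * 33 + s[(start + k) % n];
    return h;
}

int main(void)
{
    const char *in[] = { "abcdefg", "gabcdef", "fgabcde" };
    for (int i = 0; i < 3; i++)
        printf("%s -> %lu\n", in[i],
               rotation_invariant_hash((const unsigned char *)in[i],
                                       strlen(in[i])));
    return 0;
}

The same skeleton handles the bit-rotation case; only the canonicalization step
changes.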

-- Jerry




Re: ID theft -- so what?

2005-07-25 Thread Jerrold Leichter
| Jerrold Leichter wrote:
|  It's also clear that they don't expect customers to look closely at, or
|  question, their bills.  If they did, they'd make sure that meaningful merchant
|  names appeared on the bills, or at least were available if you called to ask
|  about a charge.
| 
| a company can operate under possibly 3(?) different names ... a dba
| name, a legal name, and some sort of general use name. the legal name
| that a merchant signs a contract with a merchant financial institution
| ... can be totally different from the dba name.
| 
| i once found a legal name on a statement that was something like
| minority women owned title ?-something, no. ??? company ... which was
| totally different from the name on their store-front. I assumed that
| their legal name was to help highlight their classification when bidding
| on gov. contracts. the no. ??? apparently was to differentiate them
| from other companies that had also included the same reference to their
| minority women owned status. I don't remember what the title
| ?-something was ... but it struck me as something where a minority
| female owned small company was given special treatment in gov. bids.
| 
| i don't think it is so much a short coming of the merchant/financial
| institution relationship ... it is just the general nature of the way
| the society operates.
The banks, operating through the clearing agents, could if they wished impose 
a requirement on the way names appear in billing statements, regardless of how 
the names appear on contracts.  Alternatively, they could at least require 
that an end-user-familiar name be made available in whatever database records 
all merchants, which the banks obviously have access to.

This is yet another situation where the banks are really the only ones who can 
fix something that hurts the customers and merchants.  In the past, I denied a 
charge to a company whose name I didn't recognize.  It turned out the charge 
was legitimate - it's just that the name and location listed for the company 
were completely unrelated to anything that appeared on their website. Once I 
denied the charge, the bank was done.  The vendor had to go through the 
expense of getting in touch with me, and then I had to write a letter to the 
bank authorizing the charge.  Meanwhile, the vendor was out the money.

-- Jerry




Re: ID theft -- so what?

2005-07-22 Thread Jerrold Leichter
| one of the business processes is that somebody calls their issuing
| bank and disputes a charge by a specific merchant on such  such a date.
| the issuing bank eventually provides notice to the merchant (giving the
| account number, date, and purchase details). the merchant then looks for
| a transaction (in their transaction log) for that account number on that
| date. In some cases, the merchant bank processor may provide an online
| service for merchants ... where the merchant processor keeps the online
| merchant transaction log on behalf of merchants for things like dispute
| resolution (this may include things like online library of digital
| images of the original receipts).
I just had that experience.  My bill contained one charge that made no sense
at all, and one for a local store whose name I don't know.  I called the
issuing bank.  They could provide essentially no useful information.  All they
could offer was to send me a copy of the original charge information - whatever
that is - which would take 4-6 weeks.  (How can *anything* take 4-6 weeks?  If
they had to get this from backup tapes out at Iron Mountain, OK - but for
charges made in the last month?)  I could tell that for one transaction, this
wasn't going to help at all:  The company was a computer supplier that works
mainly on-line or by phone.  (They may have a physical store in California.)
What original records could they have?

Anyhow ... I called the computer vendor, and they were able to search their
database on the CC account number.  It was, indeed, a fraudulent order -
shipped to a town 30 miles from me.

The other order - on the same day - remains a mystery.  It's at a record and
DVD store, but the only name they have is something like Record and DVD
Store.  It *could* be a DBA for the local Tower Records; it could be some
unrelated store where the same guy who ordered the computer stopped in for
a couple of DVD's.

It's clear that the CC companies really don't care very much.  They are happy
to issue me a new account and dump the charges back on the vendors.  If I
choose to do the research and find out where the charged merchandise ended
up ... they'll take the data from me, but probably not do anything with it.
It's not costing *them* anything.

It's also clear that they don't expect customers to look closely at, or
question, their bills.  If they did, they'd make sure that meaningful merchant
names appeared on the bills, or at least were available if you called to ask
about a charge.
-- Jerry







Re: ID theft -- so what?

2005-07-21 Thread Jerrold Leichter
| an analogy i've used recently with respect to userid/password paradigm,
| is that account numbers are being concurrently used for both the userid
| function (requiring security *integrity* but not security
| *confidentiality*) as well as the password function (requiring strong
| security *confidentiality*). as a result there are frequently
| diametrically opposing requirements where the multitude of userid-type
| business functions require access to the account number ... at the same
| time, the password-type functions require that the account number be
| kept strictly *confidential* and not be available at all
If this is all you need, then using a 1-way hash of the card number for
identification and the card number itself for security would give you much
of what you need.  There are databases out there which identify customers
by their CC numbers, not because they are willing to use the stored CC
number for charging, but just because it's a good unique ID.  If what were
stored were the 1-way hash, there would be nothing worth stealing in the
database.

This kind of thing is actually implemented, though people rarely think of it
in these terms.  You can see it on many printed charge slips these days:  Your
CC number no longer appears, being replaced by a one-way hash that produces 12
X's and the last 4 digits of your number.  Hardly cryptographically strong in
the usual sense, and not generally applicable, but for ID purposes - letting
you identify which of your cards you used when making the charge - this
particular one-way hash is perfectly good.  (It's also common on Web forms
that tell you which of your cards a charge will be applied to.)
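
Going back to the stronger version in the first paragraph, a minimal sketch of
keying a customer record on a hash of the card number instead of the number
itself.  It assumes OpenSSL's SHA256() (link with -lcrypto); in practice you
would want a salt or a keyed hash as well, since card numbers have very little
entropy:

#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

/* Turn a card number into a fixed-length hex string usable as a
   database key.  The stored value is useless for making charges. */
void card_id(const char *pan, char out[2 * SHA256_DIGEST_LENGTH + 1])
{
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256((const unsigned char *)pan, strlen(pan), digest);
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        sprintf(out + 2 * i, "%02x", digest[i]);
}

int main(void)
{
    char id[2 * SHA256_DIGEST_LENGTH + 1];
    card_id("4111111111111111", id);    /* well-known test card number */
    printf("customer key: %s\n", id);
    return 0;
}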

-- Jerry




Re: ID theft -- so what?

2005-07-15 Thread Jerrold Leichter
| Date: Wed, 13 Jul 2005 16:08:20 -0400
| From: John Denker [EMAIL PROTECTED]
| To: Perry E. Metzger [EMAIL PROTECTED]
| Cc: cryptography@metzdowd.com
| Subject: Re: ID theft -- so what?
| ...
| Scenario:  I'm shopping online.  Using browser window #1, I
| have found a merchant who sells what I want.   in the checkout
| phase.
| 
| Now, in browser window #2, I open a secure connection to my
| bank.  The bank and I authenticate each other.  (This is
| two-way authentication;  one possible method is SSL certificate
| on the bank's part, and a password or some such on my part.)
| 
| ...
| As a refinement, I could ask the bank to issue a number that
| was not only restricted to a single use, but also restricted
| as to dollar amount.  As a further refinement, it could be
| restricted to a particular merchant.
| 
| Everything to this point is upward-compatible with existing
| systems.  The merchant doesn't even need to know there's
| anything special about this card number;  the charge moves
| through existing processing channels just like any other.
| 
| As a further refinement, once the system is established,
| the merchant could provide, on the checkout page, a number
| that encodes in some standard format various details of
| the transaction:  merchant ID, date, dollar amount, and
| maybe even a description of the goods sold.  I cut-and-paste
| this code from the merchant to the bank, and let the bank
| site decode it.  If it looks correct, I click OK and then
| the bank generates a response that the merchant will
| accept in payment.  If necessary I can cut-and-paste this
| from the bank to the merchant ... but it would make more
| sense for the bank to transmit it directly to the merchant.
| This further increases security, and also saves me from
| having to enter a lot of billing detail
In effect, what you've done is propose a digital model of *checks* to
replace the digital model of *credit cards* that has been popular so far.
In the old days, paper checks were considered hard to forge and you were
supposed to keep yours from being stolen.  Your physical signature on the
check was considered hard to forge.  Checks were constructed in a way
that made alteration difficult, binding the details of the transaction
(the payee, the amount being paid) and the signature to the physical
instrument.

There was a time when checks were accepted pretty freely, though often with
some additional identification to show that you really were the person whose
name was on the check.

Credit cards and credit card transactions never bound these various features
of the transaction nearly as tightly.  Stepping back and looking at the
two systems, it seems that using the check model as the starting point for
a digital payment system may well be better than using credit cards, whose
security model was never really as well developed.  When I handed a check to
a vendor, I had (and to this day have) excellent assurance that he could
not change the amount, and (in the old days) reasonable assurance that he
could not create another check against my account.  (Given digital scanners
on the one hand, and the virtualization of the check infrastructure, from
the ability to initiate checking transactions entirely over the phone to
check truncation at the merchant on the other, this is long gone.  It would be
nice to recover it.)
-- Jerry



Re: payment system fraud, etc.

2005-07-11 Thread Jerrold Leichter
| Jerrold Leichter [EMAIL PROTECTED] writes:
|  In doing this calculation, be careful about the assumptions you make
|  about how effective the countermeasures will be.  The new systems
|  may be more secure, but people will eventually come up with ways to
|  break them.  The history of security measures is hardly encouraging.
| 
| I'm not sure I agree with that, and I'll tell you why.
| 
| Take the case of NAMPS cell phone fraud. At one time, phone cloning
| was a serious problem. The main issue was that people could simply
| listen in on call setup and get all the information they needed to do
| phone fraud. Once strong crypto was used to authenticate mobiles with
| the deployment of digital cellphone networks, mobile phone cloning
| fraud didn't just shift around, it almost completely vanished
It's very difficult to get a clean experiment on something like this.

There is no doubt that going from NAMPS to digital cellphone networks raised 
the cost of phone cloning or related methods for getting uncharged/mischarged 
service considerably.  However, at the same time, the cost of *legitimate* 
cellphone service fell dramatically.  When you can get 500 minutes of free 
calls to anywhere in the US for around $40/month (with various hours or calls 
to customers of the same carrier free on top of that), just how much does it 
pay to clone a phone?  Overseas calls probably provided some incentive for a 
while, but soon their prices dropped radically, pre-paid, cheap phone cards 
became widespread (and were probably stolen) - and more recently services like 
Skype have reduced the cost to zero.

The only remaining reason to clone a phone is to place untraceable calls - but
you can do as well by buying a pre-paid phone and the number of minutes of
airtime you need, paying cash, then tossing the phone.  (Using a clone phone
for this purpose was getting rather dangerous toward the end of the NAMPS era
anyway as the providers started rolling out equipment that recognized the
transmission signatures of individual phones.  Generally, this was aimed at
preventing clones from operating, but it could as well be used to recognize a
given clone regardless of the identification info it sent.)

A better history to look at might be satellite TV subscription services, which
took many generations of allegedly secure cryptography to get to wherever they
are today (which as far as I can tell is a non-zero but tolerably low rate of
fraud - the cost of entry to satellite TV subscription fraud these days is
very high).
-- Jerry




Re: Optimisation Considered Harmful

2005-06-25 Thread Jerrold Leichter
| Suppose you have something that is inadvertently an
| oracle - it encrypts stuff from many different users
| preparatory to sending it out over the internet, and
| makes no effort to strongly authenticate a user.
| 
| Have it encrypt stuff into a buffer, and on a timer
| event, send out the buffer
| The problem is with edges:
| 
| Suppose the timer goes off every 10ms.
| 
| You have an operation that takes either 5ms or 15ms, depending on
| whether a chosen bit of the key is 1 or 0.
| 
| Whether or not a given time slot is occupied with results will emit
| whether the bit was 1 or 0.
| 
| Now, suppose your timer goes off every 200ms.  No problem, right?
| 
| At time=190ms, you force an encryption.  If it's done by the time=200ms
| deadline, you know
thus bringing us back to an observation that goes back maybe 30 years:  Timing
channels are usually impractical to eliminate, but you can lower the bit rate
(or decrease the S/N ratio, which comes down to the same thing).  For the case
at hand, we've lowered the bit rate by a factor of 20.

If you're willing to change the protocol a bit, you can do better:  Never send
a block whose encryption crossed the boundary.  Instead, send a special "ignore
me" block.  (Obviously, the "ignore me" marker is *inside* the encrypted
envelope.)  Depending on the nature of the protocol - is the size of a response
invariant, or is the number of exchanges invariant? - you then continue either
by sending the real block, or by the sender returning an "ignore me" block
which you reply to with the encrypted block that crossed the boundary.  This
still doesn't completely remove the timing channel - there are differences in
the data stream depending on whether you cross the boundary - but they are
much subtler, so again you've decreased the bit rate.  (It's not clear how to
quantify in this case, though.)

-- Jerry



Re: Optimisation Considered Harmful

2005-06-23 Thread Jerrold Leichter
| A brief altercation this evening with CERT over the recent hyperthread caching
| issues has brought something that's been simmering at the back of my brain to
| the forefront.
| 
| The recent hyperthread/cache key recovery trick, followed by DJB's related
| (IMO) symmetric key recovery, and preceded by the RSA timing attacks (Boneh et
| al?) are all examples of attacks on the same thing: optimisation.
| 
| The problem is that if different paths through your code expose different
| availability of optimisation, then there's a timing attack available. I
| predict, for example, that someone will find a way to attack something using
| pipeline breaks on Pentium processors[1].
|
| How do we fix this? Well, it's easy to say: we make everything equally crap -
| we make sure that all operations are as slow as the slowest possible variant
| can be. However, on a modern processor, this is _very_ hard to do.
| 
| Could it be that for safe crypto we should be using Z80s?
This is an excellent point, as it reveals a deep and significant parallel
between cryptography and compiler/hardware optimization.

Consider what it means to create an optimizing compiler (or some kind of
optimization in hardware - the issues are the same, but I'll talk in terms of a
compiler for definiteness.)  The input is source code; the output is a bunch
of instructions.  A compiler's transformation is correct if it's
semantics-preserving:  The source has some meaning, and the object code correctly
represents that meaning.  There are an infinite number of possible object 
codes that preserve the input semantics.  Some are better than others with 
respect to some objective function, say size or speed.  An optimizing compiler 
simply chooses a better object code than some reference choice.

The big issue in compiler optimization - and even more so in some hardware
optimization - is defining exactly what semantics has to be preserved.  For
example, must computations be done even if the compiler can determine that
they cannot affect the output?  Can the rules of algebra be used to rearrange
expressions (possibly breaking carefully crafted error management strategies,
since floating point arithmetic doesn't actually obey the rules of algebra)?
Do writes to two variables written in order in the source code have to occur
in that order in the object code?  If two writes are issued in order to the
hardware, do they have to hit memory in that order?

An understanding of what semantics has to be preserved, and what semantics is
side-effect and can be tossed to gain performance, has gradually emerged in
the hardware and software communities.  There have been, and continue to be,
missteps along the way.  Some of the weaker memory consistency models might
have helped the hardware guys, but were just too hard for the software guys
to deal with.  Newbies to multi-threaded programming are caught by the
not-so-obvious memory access semantics present even at the language level in all
common programming languages.  (Java tried to pin this down exactly and got it
completely wrong for several tries.)

Enter cryptographic algorithms.  On their face, these are simple mathematical
transformations.  You have to really abuse the implementations (e.g., having
multiple side-effect-producing operations in one statement) to get into areas
where a programmer or hardware developer's warning bell would sound "watch the
semantics".  And, in fact, from the point of view of input/output
transformations, there really are no semantic issues.  The problem is that these
side-channel attacks broaden the meaning of the program to something that has
never been considered in previous work that I know of.  (The closest you are
likely to find is in complaints by real-time programmers that modern machines
give you no way to determine how long an instruction sequence will really
take:  You might take a page fault, or a cache miss, or who knows what along
the way, and in some real-time code, you have to *know* that that won't
happen.  Such code really is sometimes run only on machines like Z80's!)

What can be done?  Well, the first thing that's clearly needed is a more
complete specification of the semantics of cryptographic algorithms.  Just
the input/output transformation - which is all we write down in most analyses
- is insufficient.  We sometimes state things informally, almost in passing -
as in the comments on AES that "table accesses take constant time".  We
certainly assume things like "the time to add two numbers is independent of
the number of carries" - which is probably true on machines today, but may
actually have been false at one time.  Without a way to write down what
matters, we have no way to judge whether a particular compilation/hardware
approach is safe.  There's some work on abstract program interpretation that
might help here (though it's mainly aimed in other directions).
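
A small example of the kind of property that would have to be written down:
both functions below compute the same input/output result, so a sufficiently
aggressive optimizer is entitled to turn the second into the first - destroying
exactly the data-independent timing the second was written to provide.  (Sketch
only.)

#include <stddef.h>

/* Data-dependent timing: returns as soon as a difference is found. */
int eq_leaky(const unsigned char *a, const unsigned char *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i])
            return 0;
    return 1;
}

/* Data-independent timing: always touches every byte.  Nothing in the
   language's semantics says this timing property must be preserved. */
int eq_constant_time(const unsigned char *a, const unsigned char *b, size_t n)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (unsigned char)(a[i] ^ b[i]);
    return diff == 0;
}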

Ultimately, performance is likely to suffer.  Software and hardware
optimizations are

Re: AES cache timing attack

2005-06-22 Thread Jerrold Leichter
|  It's much harder to see how one could attack a session key in a properly
|  implemented system the same way.  You would have to inject a message into
|  the ongoing session.  However, if the protocol authenticates its messages,
|  you'll never get any response to an injected message.  At best, you might
|  be able to observe some kind of reaction to the injected message.  But
|  that's a channel that can be made very noisy, since it shouldn't occur
|  often.  (BTW, if you use encrypt-then-authenticate, you're completely
|  immune to this attack, since the implementation won't ever decrypt the
|  injected message.  Of course, there may be a timing attack against the
|  *authentication*!)
| 
| I agree with your comments about the injection, but I
| don't see why the attack doesn't work on the session
| passively.  Are you assuming that because it is a
| session, it's in some way not plausible to match the
| inbound packets with outbound packets?  I would
| have thought that was possible with things like keep
| alives and so forth.  The only drawback I can see is
| that there might not be enough data (hence desire to
| tickle things along with an injection).
That's basically it.  You need to pair up packets that have known (or at least
recognizable) plaintext - or, more accurately, input to the encryptor.  (In
CTR mode, what matters is the value of the counter, not the plaintext.)
Unless the protocol required some externally-visible response to a keep-alive,
it would provide you with no information - *nothing* would pair with the
keep-alive packet.  (Well, I suppose one can imagine an implementation that uses
absolute time on its system to a very high degree of precision to determine 
when to send a keep-alive, and then posit an attacker who can get access to 
the system time.  But given any reasonably typical keep-alive mechanism, 
this seems pretty unlikely.)

A low-level ack to a received message might work.  Any higher-level response - 
one that came from semantic processing of the actual data, not from the 
low-level bits - is likely to contain so much noise that pulling useful timing 
out will take more paired, guessable packets than an attacker will ever see
(yet another reason for periodic rekeying, of course).

This does point out some interesting interactions.  For example, a mode like
CBC is harder to attack because you need to know more actual plaintext to
determine the input to the encryptor.  On the other hand, a mode like CTR may
be particularly vulnerable since the input can be determined independently of
the actual plaintext.  Pre-whitening, even with a (secret) constant (for a
particular connection) - something that generally seems pointless - would
help.
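
For concreteness, a sketch of the pre-whitened-counter idea, using OpenSSL's
low-level AES interface (assumed available; link with -lcrypto - the key and
whitening constant here are just placeholders):

#include <stdio.h>
#include <stdint.h>
#include <openssl/aes.h>

/* One CTR keystream block with pre-whitening: a secret per-connection
   constant W is XORed into the counter block before it reaches the
   cipher, so an observer who can predict the counter value still does
   not know the cipher's actual input. */
static void ctr_block(const AES_KEY *key, const unsigned char W[16],
                      uint64_t counter, unsigned char out[16])
{
    unsigned char ctrblk[16] = { 0 };
    for (int i = 0; i < 8; i++)                 /* big-endian counter */
        ctrblk[15 - i] = (unsigned char)(counter >> (8 * i));
    for (int i = 0; i < 16; i++)
        ctrblk[i] ^= W[i];                      /* the pre-whitening step */
    AES_encrypt(ctrblk, out, key);
}

int main(void)
{
    static const unsigned char k[16] = "placeholder-key";  /* placeholder */
    static const unsigned char W[16] = "whitening-const";  /* placeholder */
    unsigned char ks[16];
    AES_KEY key;

    AES_set_encrypt_key(k, 128, &key);
    ctr_block(&key, W, 1, ks);
    for (int i = 0; i < 16; i++)
        printf("%02x", ks[i]);
    printf("\n");
    return 0;
}

Whether this is worth doing depends on the attack model; the point is only that
the whitening denies the attacker known inputs to the block cipher.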

-- Jerry




Re: AES cache timing attack

2005-06-21 Thread Jerrold Leichter
| Uhh, that wasn't really what I was after, that's pretty much textbook stuff,
| what I wanted was specifically advice on how to use block ciphers in a way
| that avoids possibilities for side-channel (and similar) attacks.  I have some
| initial notes that can be summarised as Don't let yourself be used as an
| oracle that I was planning to add after I've fleshed them out a bit.
It strikes me that block ciphers in *communications* get used in two
different contexts:

- With a long-lived key.  This is in protocols like Kerberos,
where the long-lived key is used as part of a mechanism
to produce a session key.  In most more recent protocols,
some combination of D-H or public key plays this role.

- With a session key.

Usage in the first of these may be subject to Bernstein's attack.  It's much
harder to see how one could attack a session key in a properly implemented 
system the same way.  You would have to inject a message into the ongoing 
session.  However, if the protocol authenticates its messages, you'll never 
get any response to an injected message.  At best, you might be able to 
observe some kind of reaction to the injected message.  But that's a channel 
that can be made very noisy, since it shouldn't occur often.  (BTW, if you use 
encrypt-then-authenticate, you're completely immune to this attack, since the 
implementation won't ever decrypt the injected message.  Of course, there may 
be a timing attack against the *authentication*!)

By their nature, the first class of uses don't usually require the ultimate in 
performance.  Since they receive and respond to small messages involving the 
encryption of a small amount of data, the encryption portion of what they
do is rarely the major time cost.

If this dichotomy holds up, it leads to a simple recommendation:  Use a
constant-time implementation in the first role; use the highest-performance 
implementation in the second role.
-- Jerry




Re: AmEx unprotected login site (was encrypted tapes, was Re: Papersabout Algorithm hiding ?)

2005-06-08 Thread Jerrold Leichter
| Perry makes a lot of good points, but then gives a wrong example re Amex site
| (see below). Amex is indeed one of the unprotected login sites (see my `I-NFL
| Hall of Shame`, http://AmirHerzberg.com/shame.html). However, Amex is one of
| the few companies that actually responded seriously to my warning on this
| matter. In fact, I think they are the _only_ company that responded seriously
| - but failed to fix their site... I had an interesting discussion with their
| security and web folks, and my conclusions are:
| 
| 1. These are serious people who understand technology and security
| reasonably well. They are aware of many attacks, including much more
| advanced spoofing attacks (that can foil even an expert user of a `regular`
| browser - by regular I mean without improved security indicators like
| provided by TrustBar).  Unfortunately, they use this awareness to justify to
| themselves the lack of protection (`why should I put a lock when some people
| know how to break it?`)
|
| 4. Ultimately, what we have here is simply the `usability beats security`
| rule...
If you look at their site now, they *claim* to have fixed it:  The login box 
has a little lock symbol on it.  Click on that, and you get a pop-up window 
discussing the security of the page.  It says that although the page itself 
isn't protected, your information is transmitted via a secure environment.

No clue as to what exactly they are doing, hence if it really is secure.

-- Jerry




Re: Papers about Algorithm hiding ?

2005-05-31 Thread Jerrold Leichter
| Hi,
| 
| you most probably have heard about the court case where the presence
| of encryption software on a computer was viewed as evidence of
| criminal intent.
| 
| http://www.lawlibrary.state.mn.us/archive/ctappub/0505/opa040381-0503.htm
| 
| http://news.com.com/Minnesota+court+takes+dim+view+of+encryption/2100-1030_3-5718978.html
| 
| 
| 
| Plenty of research has been done about information hiding.
| But this special court case requires algorithm hiding as a kind of
| response. Do you know where to look for papers about this subject?
There was a paper published on this a while back.  The question was posed as 
essentially:  Can one produce a one-way trapdoor compiler?  That is, just as 
an encryption algorithm takes plaintext and converts it to cryptotext, with 
the property that the inverse transformation is computationally intractable 
without the key, we want something that takes an algorithm and a key and 
produces a different but equivalent algorithm, such that asking some set of 
questions (like, perhaps, whether the output really *is* equivalent to the 
input) without the key is computationally intractable.  The result of the 
paper was that no such compiler can exist.
 
| What about designing an algorithm good for encryption which someone
| can not prove to be an encryption algorithm?
Can you prove *any* algorithm is an encryption algorithm?  If so, I think there
are some big prizes coming your way.

On a more general note:  This court case is being blown out of proportion. 
Screwdrivers, hammers, and a variety of other implements are burglar's 
tools.  If you are caught in certain circumstances carrying burglar's tools, 
not only will they be introduced into evidence against you, but the fact you 
have them will be a criminal violation in and of itself.  The way the law 
deals with all kinds of ambiguities like this is to look at intent:  If I 
carry a screwdriver to repair a broken door on my own house, it's not a 
burglar's tool.  If I carry it to break the lock on my neighbor's house, it 
is.  Determining intent is up to a jury (or judge or judge's panel, depending 
on the legal system and the defendant's choices).  It's outside the realm of
mathematics, proof in the mathematical sense, or much other than human 
judgement.  If an expert witness testifies something is an encryption 
algorithm, and the jury believes him more than the defense's expert witness 
who testifies it's a digital controller for a secret ice cream maker ... 
that's what it is.  If the jury further believes that encryption algorithm was 
used in the furtherance of a crime ... the defendant is in trouble.

-- Jerry




Re: Do You Need a Digital ID?

2005-03-25 Thread Jerrold Leichter
://www.garlic.com/~lynn/subpubkey.html#3factor
| 
| I do agree that (possibly because of the syntactic similarity) lots of people
| confuse digital signature and real human signatures. A human signature carries
| with it the connotation of understanding, agreement, approval, authorization,
| etc. A digital signature is simply the expression of the access and use of a
| private key ... and the definition/law of the public/private key business
| process is that the private key be consistently protected and kept private so
| that relying parties ... when verifying a particular digital signature ... can
| associate it with the authentication of a specific entity.
| 
| There are several deployed infrastructures of the application of the
| public/private key business process, where the digital signature generation is
| simply for authentication purposes ... there is a human that is responsible
| for activating the access and operation of the private key for the generation
| of the digital signature ... w/o a requirement that the human ever observes
| the contents of what the digital signature is being applied to.
| 
| As previously mentioned numerous times before ... there is a dual-use attack
| on public/private key infrastructures where there are procedures in place that
| require a human to observe, read, and understand any bits that are being
| digital signed. However, if the same private key that is used in real
| signature applications, is also ever used in authentication applications
| where the human doesn't observe and read the contents, then the attacker just
| supplies a valid document masquerading as authentication bits (which the human
| won't be reading and/or understanding).
| 
| note that non-repudiation is sometimes referenced with regard to some aspects
| of digital signatures being similar to human signatures (aka read, observe,
| understand, approve, authorize, agree). the eu finread definition tried to
| include some aspects of read, observe, understand, approve, authorize, agree
| ... misc. past finread postings:
| http://www.garlic.com/~lynn/subpubkey.html#finread
| 
| 
| Jerrold Leichter wrote:
|  This is a rather bizarre way of defining things.  Something you have is a
|  physical object.  On the one hand, any physical object can be copied to an
|  arbitrary degree of precision; on the other hand, no two physical objects
|  are
|  *identical*.  So a distinction based on whether a replacement is identical
|  to the original gets you nowhere.
|  
|  A digital signature is just a big number.  In principle, it can be
|  memorized,
|  thus becoming something you know.  As a *number*, I don't see how it can,
|  in
|  and of itself, *ever* be something you *have*.
|  
|  From a purely information point of view, there is not, and cannot be, any
|  difference among the different authentication modalities.  A house key can
|  be represented as a fairly short number (the key blank number and the
|  pinning). Even a very fancy and elaborate key - or any physical object -
|  can, in principle, be represented as a CAD file.  While something I am is
|  difficult to represent completely this way (at least today!), it doesn't
|  matter:  A something I am *authentication element* has to ultimately be
|  testable for veracity on the basis of information the tester has access to.
|  
|  The meaningful distinction here has to do with possible modes of attack,
|  constrained by the *physical* characteristics of the system.  An
|  authentication element is something you have if an attacker must gain
|  physical possession of it to be able to authenticate as you.  The
|  closeness and length of time the attacker must possess the element form
|  the fundamental measures of quality of such an element.  A house key is a
|  prototypical something you have.  Duplicating it requires the ability to
|  physically hold it.  (One can, of course, imagine taking very detailed
|  photographs from a distance, or using some other kind of remote sensing
|  technology.  While possible in principle, this would be a very expensive and
|  difficult attack in practice, and we generally ignore the possibility.)
|  Keys with restricted blanks are relatively difficult to duplicate even if
|  you have physical possession.  We generally assume that you can take a key
|  back, thus revoking access.  This is also a general property of any
|  something you have authentication element - and is truly present only to
|  some degree.  Still, one can meaningfully ask of such an element How many
|  copies are in existence?  Who has access to them?
|  
|  Conversely, something you know can, in principle, only be learned by you
|  revealing it.  Once revealed, a something you know element cannot be
|  revoked.  It can be copied easily, and determining who might know it is
|  usually impractical once there is any suspicion of compromise.
|  
|  A key card by itself is like a blank house key.  It becomes something you
|  have when it's encoded

Re: Secure Science issues preview of their upcoming block cipher

2005-03-25 Thread Jerrold Leichter
| Really?  How does one go about proving the security of a block cipher?
They don't claim that:

This cipher is ... provably just as secure as AES-128.

I can come up with a cipher provably just as secure as AES-128 very quickly

(Actually, based on the paper a while back on many alternative ways to
formulate AES - it had a catchy title something like How Many Ways Can You
Spell AES?, except that I can't find one like that now - one could even
come up with a formulation that is (a) probably as secure as AES-128; (b)
actually faster in hardware or simpler to implement or whatever...)

-- Jerry :-) 

| My understanding is that you, and others, perform attacks against it,
| and see how it holds up.  Many of the very best minds out there
| attacked AES, so for your new CS2 cipher to be provably just as
| secure as AES-128, all those people would have had to have spent as
| much time and energy as they did on AES.  That strikes me as unlikely,
| there's a lot more interest in hash functions today.
| 
| Adam
| 
| PS: I've added the cryptography mail list to this.  Some of the folks
| over there may be interested in your claims.
| 
| On Wed, Mar 23, 2005 at 05:00:25PM -0800, BugTraq wrote:
| | Secure Science is offering a preview of one of the 3 ciphers they will 
| | be publishing throughout the year. The CS2-128 cipher is a 128-bit block
| | cipher with a 128 bit key. This cipher is proposed as an alternative 
| | hardware-based cipher to AES, being that it is more efficient in 
| | hardware, simpler to implement, and provably just as secure as AES-128.
| | 
| | http://www.securescience.net/ciphers/csc2/
| | 
| | -- 
| | Best Regards,
| | Secure Science Corporation
| | [Have Phishers stolen your customers' logins? Find out with DIA]
| | https://slam.securescience.com/signup.cgi - it's free!
| | 
| 
| 



Re: Secure Science issues preview of their upcoming block cipher

2005-03-25 Thread Jerrold Leichter
| Jerrold Leichter writes:
| They don't claim that:
| 
|  This cipher is ... provably just as secure as AES-128.
| 
| I can come up with a cipher provably just as secure as AES-128 very quickly
| 
| Actually, I think Adam is totally right.
| 
| Have you looked at their scheme?
|   http://www.securescience.net/ciphers/csc2/
I was responding in jest to the text Adam actually quoted - and indeed was
referring to:

| The way to come up with a cipher provably as secure as AES-128 is to use
| AES-128 as part of your cipher 
[Remind self once more:  Ironic humor doesn't work in mail]

|-- but their scheme does not do anything
| like that.
| 
| I am very skeptical about claims that they have a mathematical proof that
| CS2-128 is as secure as AES-128.  I want to see the proof.
I didn't see that claim on their site, but then again I only glanced at it
quickly.  Unless they have some entirely new kind of reduction, I'm guessing
that what they are really claiming is that the same proofs of security that
are available for AES - against generalized differential attacks, for example -
are also available for CSC2.  *That* much is certainly possible.

-- Jerry



Re: Do You Need a Digital ID?

2005-03-21 Thread Jerrold Leichter
| if a re-issued a new token/card (to replace a lost/stolen token/card) is
| identical to the lost/stolen token/card ... then it is likely that there is no
| something you have authentication involved (even tho a token/card is
| involved in the process) ... and therefore the infrastructure is just single
| factor authentication.
| 
| at the basics, a digital signature is an indirect indication of something you
| have authentication  aka the existence of a digital signature implies
| that the originator accessed and utilized a private key in the generation of
| the digital signature. a digital signature by itself says nothing about the
| integrity of that something you have authentication ... since the digital
| signature doesn't carry any indication of the integrity measures used to
| secure and access the associated private key.
This is a rather bizarre way of defining things.  Something you have is a
physical object.  On the one hand, any physical object can be copied to an
arbitrary degree of precision; on the other hand, no two physical objects are
*identical*.  So a distinction based on whether a replacement is identical
to the original gets you nowhere.

A digital signature is just a big number.  In principle, it can be memorized,
thus becoming something you know.  As a *number*, I don't see how it can, in
and of itself, *ever* be something you *have*.

From a purely information point of view, there is not, and cannot be, any 
difference among the different authentication modalities.  A house key can be 
represented as a fairly short number (the key blank number and the pinning). 
Even a very fancy and elaborate key - or any physical object - can, in 
principle, be represented as a CAD file.  While something I am is difficult 
to represent completely this way (at least today!), it doesn't matter:  A 
something I am *authentication element* has to ultimately be testable for 
veracity on the basis of information the tester has access to.

The meaningful distinction here has to do with possible modes of attack, 
constrained by the *physical* characteristics of the system.  An 
authentication element is something you have if an attacker must gain 
physical possession of it to be able to authenticate as you.  The closeness 
and length of time the attacker must possess the element form the fundamental 
measures of quality of such an element.  A house key is a prototypical 
something you have.  Duplicating it requires the ability to physically hold 
it.  (One can, of course, imagine taking very detailed photographs from a 
distance, or using some other kind of remote sensing technology.  While 
possible in principle, this would be a very expensive and difficult attack in 
practice, and we generally ignore the possibility.)  Keys with restricted 
blanks are relatively difficult to duplicate even if you have physical 
possession.  We generally assume that you can take a key back, thus revoking 
access.  This is also a general property of any something you have 
authentication element - and is truly present only to some degree.  Still,
one can meaningfully ask of such an element How many copies are in existence?  
Who has access to them?

Conversely, something you know can, in principle, only be learned by you
revealing it.  Once revealed, a something you know element cannot be
revoked.  It can be copied easily, and determining who might know it is
usually impractical once there is any suspicion of compromise.

A key card by itself is like a blank house key.  It becomes something you
have when it's encoded with a password, a digital signature private key, or
some other secret that's, say, part of an interactive zero-knowledge proof
system.  The quality of the key card depends on how easy it is to extract
the information and produce another key card that can be used in its place.

Of course, quality is a *system* property.  A house key reveals its secret
when placed in a lock - any lock.  While I could easily enough build a lock that
would read off the pinning of any key inserted into it and send it to me on
the Internet, this doesn't at present appear to be a threat that needs to be
defended against.  We generally assume that locks are simple physical devices
that don't leak any information.  On the other hand, a key card by its very
nature sends information into a digital system, and protecting information
once it is in digital form is challenging.  If I could know to a sufficient
degree of certainty that my keycard would only be used in secure readers
which would send the information no further, there would be relatively little
difference between a key card with a simple password encoded on a magnetic
strip, and a house key.  Both would provide a something you have element.

A digital signature isn't an authentication element at all!  We incorrectly 
analogize it to a traditional signature, because inherent in the notion of the 
latter is a whole system embodying assumptions like (a) a signature 

Re: Can you help develop crypto anti-spoofing/phishing tool ?

2005-02-09 Thread Jerrold Leichter
| [1] This is also my solution to the famous trust paradox proposed by Ken
| Thompson in his Reflections on Trusting Trust. Trust is earned, not
| given. To trust Ken's code, I would first ask two or more programmers (who
| I choose) to code the same function and submit their codes to tests. If they
| provide the same answers for a series of inputs, including random inputs,
| I would have a qualification for trusting (or not) Ken's code. This works
| even without source code. Trust is not in the thing, it's how the thing works.
The code Thompson describes would produce equivalent outputs[1] for all but a
very small number of specially-selected strings.  If this test caused you to
change your level of trust in the code ... it did you a great disservice.
N-version programming - which is what you are proposing here - can increase
your level of trust against random errors[2], but it's of no use at all against
a deliberate attack.  (Recall the conversation here a couple of months ago
about how difficult - to the point of impossibility - it would be to use
external testing to determine if a crypto-chip had been spiked.)

[1]  Since Thompson was talking about a compiler, the mapping from inputs to
outputs is a partial function.  Just because two compilers produce different
sets of bytes doesn't mean either one is wrong.  Determining the equivalence
of two programs is a very hard problem; in fact, even *defining* the equiva-
lence seems intractable.  Suppose the only difference is that the spiked
compiler introduces enough data-dependent speed variation to leak information
through a timing channel?

[2] BTW, tests have shown that the random error model for bugs isn't a very
good one.  Certain kinds of code are just more likely to contain errors - and
the errors produced by completely independent programmers are often quite
strongly correlated.  So the simple-minded analysis that says that if one
program fails with probability p, then the chance that all of n different
versions fail is p^n, is way off.
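
To put the point about n-version testing in concrete terms, here is a minimal
Python sketch (the function and the trigger string are invented purely for
illustration).  The spiked version differs from the reference on exactly one
chosen input, so random differential testing will essentially never notice it:

    import os

    TRIGGER = b"login"                       # hypothetical attacker-chosen input

    def reference(x: bytes) -> bytes:        # stand-in for the agreed-upon function
        return x.upper()

    def spiked(x: bytes) -> bytes:           # misbehaves only on the trigger
        return b"BACKDOOR" if x == TRIGGER else x.upper()

    for _ in range(100_000):
        t = os.urandom(8)                    # random test inputs
        assert reference(t) == spiked(t)     # the comparison test never notices

    print("100,000 random inputs: the two versions look identical")
    print("but:", reference(TRIGGER), "vs", spiked(TRIGGER))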

-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Is 3DES Broken?

2005-02-07 Thread Jerrold Leichter
|  I think you meant ECB mode?
|  
|  No, I meant CBC -- there's a birthday paradox attack to watch out for.
|  
|  Yep.  In fact, there's a birthday paradox problem for all the standard
|  chaining modes at around 2^{n/2}.
|  
|  For CBC and CFB, this ends up leaking information about the XOR of a couple
|  plaintext blocks at a time; for OFB and counter mode, it ends up making the
|  keystream distinguishable from random.  Also, most of the security proofs
|  for block cipher constructions (like the secure CBC-MAC schemes) limit the
|  number of blocks to some constant factor times 2^{n/2}.
| 
| I'm surprised that no-one has said that ECB mode is unsafe at any speed.
Picking nits, but:  ECB mode is unsafe at any speed to encrypt an arbitrary 
data stream.  If the data stream is known to have certain properties - e.g., 
because it has undergone some kind of transform before being fed into ECB - 
then ECB is as good as any other mode.

After all, CBC is just ECB applied to a datastream transformed through a
particular unkeyed XOR operation.
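
To make that concrete, here is a minimal Python sketch (the 8-byte Feistel-style
toy_encrypt is a stand-in, not a real cipher, and every name is illustrative).
It checks that CBC output equals ECB run over the plaintext stream in which each
block has first been XORed with the previous ciphertext block - and, for good
measure, that plain ECB maps equal plaintext blocks to equal ciphertext blocks:

    import hashlib

    def toy_encrypt(key: bytes, block: bytes) -> bytes:
        """Toy 8-byte keyed permutation (4 Feistel rounds) - illustration only."""
        left, right = block[:4], block[4:]
        for rnd in range(4):
            f = hashlib.sha256(key + bytes([rnd]) + right).digest()[:4]
            left, right = right, bytes(a ^ b for a, b in zip(left, f))
        return left + right

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def ecb(key, blocks):
        return [toy_encrypt(key, b) for b in blocks]

    def cbc(key, iv, blocks):
        out, prev = [], iv
        for b in blocks:
            prev = toy_encrypt(key, xor(b, prev))
            out.append(prev)
        return out

    key, iv = b"demo key", bytes(8)
    pt = [b"AAAAAAAA", b"AAAAAAAA", b"BBBBBBBB"]

    ct = cbc(key, iv, pt)
    # Transform the plaintext stream: XOR each block with the previous
    # ciphertext block (IV for the first), then run plain ECB over it.
    prev, transformed = iv, []
    for p, c in zip(pt, ct):
        transformed.append(xor(p, prev))
        prev = c
    assert ecb(key, transformed) == ct          # CBC == ECB over the transform
    assert ecb(key, pt)[0] == ecb(key, pt)[1]   # ECB leaks repeated blocks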

There's a paper - by Ron Rivest and others? - that examines this whole issue,
and carefully separates the roles of the unkeyed and keyed transformations.
(I think this may be the paper where all-or-nothing transforms were 
introduced.)
-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Is 3DES Broken?

2005-02-07 Thread Jerrold Leichter
|   No, I meant CBC -- there's a birthday paradox attack to watch out for.
|  
|  
|  Yep.  In fact, there's a birthday paradox problem for all the standard
|  chaining modes at around 2^{n/2}.  
|  For CBC and CFB, this ends up leaking information about the XOR of a couple
|  plaintext blocks at a time; for OFB and counter mode, it ends up making the
|  keystream distinguishable from random.  Also, most of the security proofs
|  for block cipher constructions (like the secure CBC-MAC schemes) limit the
|  number of blocks to some constant factor times 2^{n/2}.
|   
| 
| It seems that the block size of an algorithm then
| is a severe limiting factor.  Is there anyway to
| expand the effective block size of an (old 8byte)
| algorithm, in a manner akin to the TDES trick,
| and get an updated 16byte composite that neuters
| the birthday trick?
Many people have tried to do this.  I know of no successes that are really
practical.  (I've played around with many obviously good ideas myself, and
have always managed to break them with a little more thought.  Everything 
that gives you the desired security ends up costing much more than twice
the cost of the underlying block algorithm for a double-size block.)

The block size appears to be a fairly basic and robust property of block
ciphers.  There's probably a theorem in there somewhere - probably one of
those that isn't hard to prove once you figure out exactly what it ought to
say!
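
For a sense of scale, a back-of-the-envelope calculation (plain Python, nothing
algorithm-specific) of where the 2^{n/2} birthday bound lands for 64-bit and
128-bit blocks - that is, roughly how much data can go under one key before the
standard chaining modes start to leak:

    for bits in (64, 128):
        blocks = 2 ** (bits // 2)          # birthday bound, in blocks
        data_bytes = blocks * (bits // 8)
        print(f"{bits}-bit blocks: ~2^{bits // 2} blocks "
              f"~= {data_bytes / 2**30:,.0f} GiB before the bound bites")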
-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: entropy depletion (was: SSL/TLS passive sniffing)

2005-01-07 Thread Jerrold Leichter
|  You're letting your intuition about usable randomness run roughshod
|  over the formal definition of entropy.  Taking bits out of the PRNG
|  *does* reduce its entropy.
| 
| By how much exactly? I'd say, _under the hypothesis that the one-way
| function can't be broken and other attacks fail_, exactly zero; in the
| real world, maybe a little more. But in
| /usr/src/linux/drivers/char/random.c I see that the extract_entropy()
| function, directly called by the exported kernel interface
| get_random_bytes(), states:
| 
|   if (r->entropy_count / 8 >= nbytes)
|       r->entropy_count -= nbytes*8;
|   else
|       r->entropy_count = 0;
| 
| ...which appears to assume that the pool's entropy (the upper bound of
| which is POOLBITS, defined equal to 4096) drops by a figure equal to the
| number of bits that are extracted (nbytes*8). This would only make sense
| if those bits weren't passed through one-way hashing.
The argument you are making is that because the one-way function isn't
reversible, generating values from the pool using it doesn't decrease its
computational entropy.  (Its mathematical entropy is certainly depleted,
since that doesn't involve computational difficulty.  But we'll grant that
that doesn't matter.)

The problem with this argument is that it gives you no information about the
unpredictablity of the random numbers generated.  Here's an algorithm based
on your argument:

Pool: bits[512]
initializePool()
{   Fill Pool with 512 random bits; }

getRandom() : bits[160]
{   return(SHA(bits));
}

By your argument, seeing the result of a call to getRandom() does not reduce
the effective entropy of the pool at all; it remains random.  We certainly
believe that applying SHA to a random collection of bits produces a random
value.  So, indeed, the result of getRandom() is ... random.  It's also
constant.

Granted, no one would implement a random number generator this way.  But
*why*?  What is it you have to change to make this correct?  Why?  Can you
prove it?  Just saying you have to change the pool after every call
won't work:

getRandom() : bits[160]
{   Rotate bits left by 1 bit;
return(SHA(bits));
}

This (seems to) generate 512 random values, then repeats.  Just what *is*
good enough?
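
For anyone who wants to see the point concretely, here is a runnable version of
the two toy generators above, with SHA-1 standing in for SHA and a 512-bit
pool (the names are mine, purely for illustration).  The first is "random" but
constant; the second repeats with period 512:

    import hashlib, os

    pool = int.from_bytes(os.urandom(64), "big")       # 512 random bits

    def get_random_constant() -> bytes:
        return hashlib.sha1(pool.to_bytes(64, "big")).digest()

    def get_random_rotating() -> bytes:
        global pool
        pool = ((pool << 1) | (pool >> 511)) & ((1 << 512) - 1)  # rotate left 1
        return hashlib.sha1(pool.to_bytes(64, "big")).digest()

    assert get_random_constant() == get_random_constant()   # always the same value
    outs = [get_random_rotating() for _ in range(1024)]
    assert outs[:512] == outs[512:]                          # repeats with period 512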
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: entropy depletion

2005-01-07 Thread Jerrold Leichter
|  
|   random number generator this way.   Just what *is*
|  good enough?
| 
| That's a good question.  I think there is a good answer.  It
| sheds light on the distinction of pseudorandomness versus
| entropy:
| 
|   A long string produced by a good PRNG is conditionally
|   compressible in the sense that we know there exists a shorter
|   representation, but at the same time we believe it to be
|   conditionally incompressible in the sense that the adversaries
|   have no feasible way of finding a shorter representation.
| 
| In contrast,
| 
|   A long string produced by a HESG is unconditionally, absolutely
|   incompressible. There does not exist a shorter representation.
|   There cannot possibly exist a shorter representation.
You're using Kolmogorov/Chaitin complexity here, but unfortunately that's a
bit hard to apply.  *Any* string has a shorter representation - if I get to
specify the model *after* you choose the string.  K/C complexity is robust
when you talk about sets of possible strings, because playing around with
the machine can only shorten a fixed number of them.  Turning that into a
notion useful for cryptography is a bit tougher - and in any case, if you
want to get at PRNG's, you need to get at K/C complexity with trapdoor
information:  The bit stream may *look* uncompressible, but given the
internal state, it is easily produced.

More generally:  We're talking about stretching entropy here.  There are
certainly theoretical results in that direction, the one usually mentioned
being the BBS bit generator, which takes k bits of entropy and gives you
p(k) (for some polynomial p) bits that are polynomially-indistinguishable
from random bits.  But you (a) need some significant work to set this up and
prove it; (b) BBS generators are slow.

A simpler approach that does work (in some sense) is:  Choose a random
starting value R, and a number k.  Compute SHA^i(R) for i from 0 to k.  Emit
these values *backwards*.  Then given the first k-1 outputs, an attacker
cannot determine the next one (under the standard hypotheses about SHA).

Unfortunately, this is useless as, say, a key generator:  If you send me
the k-1'st output for use as a key, I can't determine what your *next* key
will be - but I can trivially read your preceding k-2 sessions.
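
A minimal sketch of that construction, assuming SHA-1 stands in for SHA (the
function and variable names are mine): publish SHA^k(R) first, SHA^(k-1)(R)
next, and so on down to R.  Earlier outputs don't predict the next one, but any
later output lets you recompute every earlier one by re-hashing - which is
exactly why it fails as a session-key generator:

    import hashlib, os

    def sha(x: bytes) -> bytes:
        return hashlib.sha1(x).digest()

    def chain(seed: bytes, k: int):
        vals = [seed]
        for _ in range(k):
            vals.append(sha(vals[-1]))
        return list(reversed(vals))     # emit SHA^k(R), SHA^(k-1)(R), ..., R

    keys = chain(os.urandom(20), k=10)
    # Anyone holding output i can recover every earlier output by hashing:
    assert sha(keys[5]) == keys[4]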

The idea that revealing just the hash of random bits doesn't reduce the
effective entropy sounds great, but it's naive.  It's like the argument
that if H is a good hash function, then H(K || message) is a good MAC.  Not
quite.
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Microsoft Passport fades away

2004-10-23 Thread Jerrold Leichter
From Computerworld:

Microsoft Scales Back Passport Ambitions

Microsoft's decision to reposition its .Net Passport identification
system comes as Monster.com is dropping support for the authentication
service.

http://www.computerworld.com/newsletter/0,4902,96838,00.html?nlid=PM


-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Printers betray document secrets

2004-10-21 Thread Jerrold Leichter
| It turns out that their techniques aren't all that useful.
| Changing laser printer cartridges changes the results.
| You might find that two documents were printed
| by the same printer, but it doesn't give you the
| options for tracking it down that manual typewriters did.
Actually, they say they can identify the make and model - which is about all
you could do with a typewriter.  Going further, in either case, means tying a
particular piece of text to a particular writing instrument to which you have
gained access.

Changing printer cartridges will certainly work, but then again simply replac-
ing the typewriter will, too.  Any identification of physical objects can only
work as long as the physical object isn't replaced.  In practice, there's a
great deal of inertia in replacing physical objects, for cost, convenience, and
other reasons.  So such identifications may still be useful.

| And the differences don't identify a specific printer
| in a way that can be tracked, e.g. identifying a serial number
| that could be looked up from warranty records.
A bullet can't be tied to a gun's serial number, but that doesn't make it
useless to examine bullets.

| It's not clear that they work at all with inkjet printers,
| and changing ink cartridges is even more common than
| changing laser printer cartridges.
The technique is based on variations in dot pattern that ultimately come down
to small variations in mechanical parts, usually the gears that drive the
paper.  Laser printer cartridges are deliberately designed so that (just
about) all moving/wearing parts are part of the cartridge.  So most variations
in the results are necessarily tied to the cartridge.  That's not true for ink
jets. While the paper describing all this isn't yet available, from what is
published I don't think they are making any claims about inkjets, just laser
printers. However, they seem to believe the same general approach - look for
variations due to variations in manufacture that don't produce artifacts that
are visible to the naked eye, so don't need to be and hence are not controlled
- would work.  Whether the source of the variation would be in the ink
cartridge or in the fixed mechanicals, who can say at this point.

| If you're sloppy,
| you've probably got a bunch of partly-used cartridges around,
| so even if you want to print out a bunch of ransom notes
| or whatever, you don't even have to go to Kinko's
| to get them to be different.
|
| If printer makers want to build in watermarking to
| make everything they print traceable, the way many of them
| check for documents that look like money and don't print them,
| they could hide patterns that survive cartridge changes
| (would you notice a few inverted pixels on a 600x600dpi printout?)
Actually, this would probably be noticeable in certain pictures.  But slight
variations in pixel spacing - which is what these guys look for - is not
visible.  (In fact, the origin of this work seems to have been work in the
opposite direction:  Early laser printers had a problem with banding, due to
periodic variations in paper movement causing variations in pixel spacing.
The trick was to find out how much variation you could allow without visible
artifacts and then get to that level cheaply.  But there is still plenty of
variation left for appropriate software to find.)  You could probably play
games with pixel sizes, too.

| But even then, inkjet printers are dirt cheap;
| when they're on sale, they're essentially a free enclosure
| in a box of overpriced printer cartridges,
| so even of the printer wants to rat out the user and
| it's not easy to change the serial number PROM,
| you can just replace the printer.
One could say the same about most physical objects that end up being used
for identification.  You would think that fibers would be useless for
identification, for example - you can always throw out the clothing you were
wearing and buy a new tee shirt.  Still ... the real world has a great deal
of inertia.
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Approximate hashes

2004-09-01 Thread Jerrold Leichter
| nilsimsa
| Computes nilsimsa codes of messages and compares the codes and finds
| clusters of similar messages so as to trash spam.
|
| What's a nilsimsa code?
|
| A nilsimsa code is something like a hash, but unlike hashes, a small change
| in the message results in a small change in the nilsimsa code.
|
| http://lexx.shinn.net/cmeclax/nilsimsa.html
I had a look at the code (which isn't easy to follow).  This appears to be a
new application of Bloom filters.
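
To give a flavor of the idea - this is *not* the actual nilsimsa algorithm, just
a toy similarity digest along the same lines - hash overlapping trigrams into a
small array of buckets and keep one bit per bucket.  Similar messages share most
of their trigrams, so their digests differ in only a few bits:

    import hashlib

    def similarity_digest(text: bytes, buckets: int = 64) -> int:
        counts = [0] * buckets
        for i in range(len(text) - 2):
            h = hashlib.md5(text[i:i + 3]).digest()     # hash each trigram
            counts[h[0] % buckets] += 1
        mean = sum(counts) / buckets
        bits = 0
        for i, c in enumerate(counts):                  # one bit per bucket
            if c > mean:
                bits |= 1 << i
        return bits

    a = similarity_digest(b"cheap meds available now, click here for great deals")
    b = similarity_digest(b"cheap meds available NOW, click right here for great deals!")
    print(bin(a ^ b).count("1"), "differing bits out of 64")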
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: More problems with hash functions

2004-08-28 Thread Jerrold Leichter
| Bear writes:
|  One interesting idea which I came up with and haven't seen a way
|  past yet is to XOR each block with a value computed from its
|  sequence number, then compute the hash function on the blocks in
|  a nonsequential order based on the plaintext of the blocks
|  in their new order.
|
| This is an interesting idea, but it does not fully prevent the Joux
| attack.  This is seen most obviously in the two block case, where
| your method can at most increase the attacker's work by a factor of 2.
| Joux shows how to find a 4-collision using 2*2^(n/2) work, while it should
| take 2^(3n/4) work.  In the case of a 160 bit hash this is the difference
| between 2^81 and 2^120.  Doubling the work to find the 4-collision will
| not do much to address this discrepency.
|
| Even in the more general case, where Joux puts k blocks together to
| generate 2^k multicollisions, I can see a way to attack your method,
| with an increase in the work by only a factor of k^2
There's a more general problem with the algorithm:  It isn't on-line.  An
implementation requires either an unbounded internal state, or the ability to
re-read the input stream.  The first is obviously impossible, the second a
problem for many common uses of hash functions (in communications, as opposed
to storage).

However ... *any* on-line algorithm falls to a Joux-style attack.  An
algorithm with fixed internal memory that can only read its input linearly,
exactly once, can be modeled as an FSM.  A Joux-style attack then is:  Find
a pair of inputs M1 and M1' that, starting from the fixed initial state, both
leave the FSM in state S.  Now find a pair of inputs M2 and M2' that, starting
from S, leave the FSM in state S'.  Then for the cost of finding these two
collisions, we have *four* distinct collisions as before.  More generally,
by just continuing from state S' to S'' and so on, for the cost of k
single collision searches, we get 2^k collisions.
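
Here is a toy, runnable illustration of that argument, using a deliberately tiny
24-bit iterated hash (SHA-1 truncated to three bytes) so collisions can be found
by brute force in a fraction of a second.  Two single-block collision searches
yield four colliding two-block messages; k searches would yield 2^k:

    import hashlib, itertools

    def compress(state: bytes, block: bytes) -> bytes:
        return hashlib.sha1(state + block).digest()[:3]    # 24-bit toy state

    def find_collision(state: bytes):
        """Brute-force two distinct 8-byte blocks that collide from `state`."""
        seen = {}
        for n in itertools.count():
            block = n.to_bytes(8, "big")
            out = compress(state, block)
            if out in seen:
                return seen[out], block, out
            seen[out] = block

    iv = b"\x00\x00\x00"
    m1, m1p, s1 = find_collision(iv)     # first colliding pair of blocks
    m2, m2p, s2 = find_collision(s1)     # second pair, from the common state s1

    def toy_hash(blocks):                # iterate the toy compression function
        state = iv
        for b in blocks:
            state = compress(state, b)
        return state

    # Two collision searches, four colliding two-block messages:
    assert len({toy_hash([a, b]) for a in (m1, m1p) for b in (m2, m2p)}) == 1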

The nominal security of the FSM is its number of states; for k large enough,
we certainly get too many collisions compared to a random function.

What this shows is that our vague notion that a hash function should behave
like a random function can't work.  On-line-ness always kills that.  (In
fact, even given the function random access to its input doesn't help:  An FSM
can only specify a finite number of random places to look in the input.
If the FSM can only specify an absolute location, it can't work on arbitrarily
long inputs.  If it can specify a location relative to the current position,
you can re-write the machine as a linear one that gets repeated copies of
blocks of a fixed size.)

This is really just the prefix property writ large.  Define an on-line
function f:{0,1}* -> {0,1}^k as one with the property that there exist
infinitely many pairs of strings Si, Si' with the property that, for any S in
{0,1}*, f(Si||S) = f(Si'||S).  (In particular, this is true for S the empty
string, so f(Si) = f(Si').)  Then the most we can expect of an (on-line)
hash function is that it act like a function picked at random from the set of
on-line functions.  (This, of course, ignores all the other desirable
properties - we want to choose from a subset of the on-line functions.)

Getting a security parameter in there is a bit tricky.  An obvious first cut
is to say an on-line function has minimum length n if the shortest Si has
length n.

Actual hash functions don't follow this model exactly.  For example, MD5 needs
512+128+64 bits of internal storage* - the first for the input buffer, all of
whose bits are needed together, the second for the running hash state, the
third for the length.  So there are 2^704 states in the MD5 FSM, but the
output only has 2^128 values - only 128 bits of the state number are used.
That in and of itself isn't an issue - it doesn't hurt, in this particular
analysis, to throw bits away in the *final* result.  What does limit it is
that every 512 input bits, the FSM itself logically mods out 512 bits of the
state (the input buffer), and ends up in one of only 2^192 possible states
(and if we work with short strings, then only a small fraction of the 2^64
states of the length field are actually accessible).  Because of this, MD5 can
at best have been chosen from on-line functions of minimum length 128, rather
than those of a minimal length up to 704.  That's why keeping more internal
state *might* help - though you have to be careful, as we've seen, about how
you do it.
-- Jerry

* Actually, you need 3 more bits because the MD5 length is in octets, not
bits. This memory is hidden in any real implementation on byte-oriented
hardware.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: More problems with hash functions

2004-08-28 Thread Jerrold Leichter
|  However ... *any* on-line algorithm falls to a Joux-style attack.  An
|  algorithm with fixed internal memory that can only read its input linearly,
|  exactly once, can be modeled as an FSM.  A Joux-style attack then is:  Find
|  a pair of inputs M1 and M1' that, starting from the fixed initial state, both
|  leave the FSM in state S.  Now find a pair of inputs M2 and M2' that, starting
|  from S, leave the FSM in state S'.  Then for the cost of finding these two
|  collisions, we have *four* distinct collisions as before.  More generally,
|  by just continuing from state S' to S'' and so on, for the cost of k
|  single collision searches, we get 2^k collisions.
|
| That's an interesting point.  One counter example I can offer is my
| earlier suggestion to use a wide path internally between the stages.
| The analysis suggested that if the internal path was twice the width
| of the output of the hash, the Joux attack was avoided.  Basically if
| the hash output width is n, and the internal width is 2n, then it will
| take 2^n to find an internal collision.  And the overall hash strength
| is never more than 2^n against any attack, since you can exhaustively
| invert the function for that effort.  So nothing based on internal
| collisions can weaken a hash built around this principle.
|
| It's not a particularly appealing design strategy due to its apparent
| inefficiency.  But if you're right about the general vulnerability of
| hashes that support incremental hashing, as any useful hash surely must,
| then perhaps it is worth exploring.
It all depends on how you define an attack, and how you choose to define your
security.  I explored the outer edge:  Distinguishability from a random
function.  For a random function from {0,1}* -> {0,1}^k, we expect to have to do
2^k work (where the unit of work is an evaluation of the function) per
collision.  The collisions are all independent - if you've found N, you have
... N.  The next one you want still costs you 2^k work.  However ... no on-
line function can actually be this hard, because a Joux-style attack lets you
combine collisions and find them with much less than the expected work.
Because a Joux-style attack works exponentially well, the cost for a very
large number of collisions is arbitrarily small.  Note that this has nothing
to do with the number of internal states:  One can distinguish random on-line
functions (in the sense I defined) from random functions.  (My criterion of
infinitely many extendible collision pairs may be too generous; you may need
some kind of minimum density of extendible collision pairs.)

You can certainly de-rate such a function by discarding some of the bits
of the output and considering the range of the output to be {0,1}^l, l < k.
But I don't see how that helps:  In the limit, collisions become arbitrarily
cheap, and making collisions easier can only decrease the cost of finding
them.

If the question is how close to the ultimate limit you can get a function
*constructed in a particular way*, then, yes, passing more internal state may
help.  In fact, it's the only thing that can, if you want an on-line
algorithm - you have nothing else available to vary!  A full analysis in this
direction, though, should have *two* security parameters:  The current one,
the size of the output; and the amount of memory required.  After all, MD5
(for example) is only defined on inputs of up to 2^64 bytes.  If I allow 2^64
bytes of internal state, then the on-line qualifier becomes meaningless.

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: First quantum crypto bank transfer

2004-08-24 Thread Jerrold Leichter
|  ... the comments I've seen on this list and elsewhere have been much
|  broader, and amount to QM secure bit distribution is dumb, it solves
|  no problem we haven't already solved better with classical
|  techniques.
|
| Most of the comments on this list are more nuanced than that.
Perhaps we hear them differently.

| Examples of sensible comments include:
|   -- We have seen claims that QM solves the key distribution
|problem.  These claims are false.
I'm not sure what the key distribution problem would be or what solving it
would mean.  As we all know, the real problem with OTP systems is that you
have to distribute as much keying material, securely, as you have material to
protect.  So OTP pretty much comes down to leveraging a single secure channel
to produce another.  In all practical instances I know of, the two channels
are separated in time and space:  You leverage the security of your diplomatic
pouch today to get secure messages from a spy tomorrow.

QM key sharing lets you build an OTP with a shared transmission medium and an
arbitrarily small time separation.  This is new.  It gives you guarantees that
the bits sent have not been intercepted.  That's new. Certainly, it doesn't
solve MITM attacks, as mathematical abstractions. What it does is reduce
protection from MITM attacks to protection of physical assets.  All crypto
ultimately has to rest on that - if you can't protect your keys, nothing
works.  The nature of the system that must be protected, and the kind of
protection, are somewhat different than in traditional systems, but the
inherent problem is neither eliminated nor made inherently worse.

|   -- _Commercialization_ of QM bit-exchange is dumb, for now
|and for the forseeable future
Here, I'll pretty much agree with you.

|  Also, there is a world of difference between:
| 
|  1.  Showing something is possible in principle;
|  2.  Making it work on the lab bench;
|  3.  Making it into something that works in the real world.
| 
|  For QM key exchange, step 1 goes back maybe 10-15 years, and most
|  people thought it was a curiosity - that you could never maintain
|  coherence except in free space and over short distances.
|
| That's backwards.  Quantum crypto free in space is hard.
The thought experiments on this always involve simple pictures in free space.
I agree, actually *doing* anything in free space over macroscopic distances is
a non-starter.

| It's
| much easier to use a single-mode fiber, over distances such
| that there is little total attenuation (which can be a quite
| macroscopic distance, since the attenuation is a fraction of
| a db/km if you do it right).
|
|  Step 2 is a couple of years back, the first surprise being that you
|  could actually make things work through fiber, then through a couple
|  of Km of fiber coiled on a bench.
|
| Again, that diametrically misstates the physics.  Propagation
| through a couple km of fiber shouldn't have surprised anybody.
I think that's obvious now, but might not have been so obvious 20 years ago.
(For that matter, just how long have we had usable multi-km single-mode
fibers?)

|  BTW, if we look at QM *computation* in comparison, we've barely made
|  it through Step 1.  There are still plausible arguments that you
|  can't maintain coherence long enough to solve any interesting
|  problems.
|
| Within a year of the invention of quantum computation,
| people were working on quantum error correction.
Actually, they started off pointing out that error correction couldn't be
done in QM systems without unmixing the states, thus losing the essence of the
computation.  Well, it turned out that things are more subtle than that.

Don't take this as a criticism of those who said quantum error correction was
impossible!  This is all new, complex physics.  We're wrong before we're
right.

|  This
| is interesting work and has had spin-offs in the form
| of changing how people think about error correction even
| in non-quantum systems.  And it has had spin-offs
| applicable to quantum cryptography, i.e. showing how it
| is possible to survive a modest amount of attenuation.
|
|  Some of the papers I've seen solve the problem only in their titles:
|  They use a QM system, but they seem to only make classical bits
|  available for general use.
|
| Huh?  The world abounds in QM systems that produce classical
| results, including e.g. transistors, lasers, practically all of
| chemistry, etc. etc. etc.  Quantum computers produce classical
| results because that is what is desired.
You miss my point.  Papers have been published - there's not much point
dredging them up - whose title and abstract imply that they are providing a
way to store and manipulate qubits, but when you look at what they actually
end up providing, you can't *use* them as qubits, just classical bits.  (What
a surprise:  There are poor papers 

Re: On hash breaks, was Re: First quantum crypto bank transfer

2004-08-24 Thread Jerrold Leichter
|  Alternatively, how anyone can have absolute confidence in conventional
|  crypto
|  in a week when a surprise attack appears against a widely-fielded
|  primitive
|  like MD5 is beyond me.  Is our certainty about AES's security really any
|  better today than was our certainty about RIPEM - or even SHA-0 - was
|  three
|  weeks ago?
|  -- Jerry
|
| Actually for years the cryptography community has been saying retire MD5,
...because it's been seen as giving too short a hash, and because of a minor
weakness - widely described as certificational - in the compression function
that no one ever showed led to an attack.  (While the details of the current
attack aren't yet completely clear, the fact that it worked on so many
functions strongly indicates that the particular weakness in the MD5
compression function has nothing to do with it.)

The advice may have been prudent, but it doesn't rise to the level of a theory
for distinguishing good from bad hash functions.

| SHA-0 has been required to be replaced by SHA-1 for some time,
because the NSA said so.  It turns out they were ahead of public crypto by a
couple of years.  I will grant you that this is indirect evidence that NSA
has no attacks on AES, since this is now the second time that they've
strengthened a proposed primitive against which no publically-known attacks
existed.  It tells us little about how strong AES actually is - and absolutely
nothing about any other system out there, since NSA has no reason to comment
on those and every reason not to.

|   the RIPEM
| series is functionally-speaking unused
...but not because anyone thought there was a weakness.  MD5 happened to be
widely used, SHA-1 had standards pushing it; little room was left for another
hash.

|and represented the only real
| surprise. Except for RIPEM there were known to be reasons for this, MD5 was
| known to be flawed, SHA-0 was replaced because it was flawed (although
| knowledge of the nature of the flaw was hidden). Even with RIPEM (and SHA-1
| for the same reason) I have plans in place (and have had for some time) the
| move away from 160-bit hashes to larger ones, so the attack on RIPEM had
| little effect on me and my clients, even a full attack on SHA-1 would have
| little effect on the clients that actually listen (they all have backup
| plans that involve the rest of the SHA series and at the very least
| Whirlpool).
Moving to a larger hash function with no underlying theory isn't very far from
the million-bit key algorithms you see all over the place.  Bigger probably
can't be worse, but is it really better?

| So basically I encourage my clients to maintain good business practices
| which means that they don't need to have belief in the long term security of
| AES, or SHA-1, or RSA, or . This is just good business, and it is a
| process that evolved to deal with similar circumstances.
Real good business practice has to make judgements about possible risks and
trade them off against potential costs.  I quite agree that your advice is
sound.  But that doesn't change the facts:  Our theoretical bases for security
are much weaker than we sometimes let on.  We can still be surprised.

Suppose a year ago I offered the following bet:  At the next Crypto, all but
one of the widely-discussed hash functions will be shown to be fundamentally
flawed.  What odds would you have given me?  What odds would you have given me
on the following bet:  At the next Crypto, an attack against AES that is
substantially better than brute force will be published?  If the odds were
significantly different, how would you have justified the difference?

Let's update the question to today:  Replace widely-discussed hash functions
with SHA-1 and the related family.  Keep the AES bet intact.  But let's got
out 5 years.  Now what odds do you give me?  Why?

-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: More problems with hash functions

2004-08-24 Thread Jerrold Leichter
|  It strikes me that Joux's attack relies on *two* features of current
|  constructions:  The block-at-a-time structure, and the fact that the state
|  passed from block to block is the same size as the output state.  Suppose we
|  did ciphertext chaining:  For block i, the input to the compression function
|  is the compressed previous state and the xor of block i and block i-1.  Then
|  I can no longer mix-and-match pairs of collisions to find new ones.
|
| Here's how I understand what you are suggesting.  Presently the hash
| compression function takes two inputs: a state, and an input data block.
| For SHA-1 the state is 160 bits and the input data block is 512 bits.
| It outputs a state-size block of 160 bits, which gets fed into the
| next block:
|
| IV  --->  COMP   --->   COMP   --->   COMP   --->  Output
|            ^             ^             ^
|            |             |             |
|            |             |             |
|         Block 1       Block 2       Block 3
|
| I think you are proposing to increase the output of each block by 512
| bits in this case, so as to provide some data that can be xored into
| the input data block of the next stage, a la CBC:
|
| IV  --->  COMP   --->   COMP   --->   COMP   --->  Output
|            ^      \      ^      \      ^
|            |       \---> X       \---> X
|            |             |             |
|         Block 1       Block 2       Block 3
|
| By itself, adding an xor step doesn't do anything
Not true.

Joux's attack says:  Find single block messages M1 and M1' that collide on
the blank initial state.  Now find messages M2 amd M2' that collide with
the (common) final state from M1 and M1'.  Then you hav four 2-block
collisions for the cost of two:  M1|M2, M1'|M2, and so on.

But even a simple XOR breaks this.  M1 and M1' have the same hash H, but the
state being passed is now very different:  H,M1 in one case, H,M1' in the
other.  So M1|M2 and M1'|M2 are completely different:  Both started the second
step with the compression function in state H, but the first compressed
M1 XOR M2, and the second M1' XOR M2.

All I'm left with, unless there's some cleverer attack I'm missing, is:

- Find M1 and M1' that collide;
- Find M2 and M2' such that M1 XOR M2 collides with M1' XOR M2',
on the common initial state.

Then for the cost of finding two block-level collisions, I have a collision
between two-block messages (M1|M2 and M1'|M2').  But why bother?  It's always
been sufficient to find a collision on the *last* block, with an arbitrary
initial state.  (It would be nice to eliminate *this* generic weakness, but
it's not clear you can and still have an on-line algorithm.)
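
A toy check of that claim, reusing the 24-bit truncated-SHA-1 compression
function from the earlier sketch (purely illustrative).  Under plain chaining, a
single-block collision extends to a two-block collision for free; with the XOR
of block i and block i-1 fed into the compression function, the second stage
sees different inputs and the collision (almost surely) does not extend:

    import hashlib, itertools

    def compress(state: bytes, block: bytes) -> bytes:
        return hashlib.sha1(state + block).digest()[:3]    # 24-bit toy hash

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def find_collision(state: bytes):
        seen = {}
        for n in itertools.count():
            block = n.to_bytes(8, "big")
            out = compress(state, block)
            if out in seen:
                return seen[out], block
            seen[out] = block

    iv = b"\x00\x00\x00"
    m1, m1p = find_collision(iv)                  # single-block collision
    m2 = b"\x01" * 8                              # an arbitrary second block

    def plain_two_blocks(a, b):                   # ordinary chaining
        return compress(compress(iv, a), b)

    def xored_two_blocks(a, b):                   # block 2 XORed with block 1 first
        return compress(compress(iv, a), xor(b, a))

    assert plain_two_blocks(m1, m2) == plain_two_blocks(m1p, m2)    # Joux reuse works
    assert xored_two_blocks(m1, m2) != xored_two_blocks(m1p, m2)    # almost surely fails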

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: dual-use digital signature vulnerability

2004-07-21 Thread Jerrold Leichter
| the issue in the EU FINREAD scenario was that they needed a way to
| distinguish between (random) data that got signed ... that the key owner
| never read  and the case were the key owner was actually signing to
| indicate agreement, approval, and/or authorization. They specified a
| FINREAD terminal which supposed met the requirements that the key owner had
| to have read and to some extent understood  and approved as to the
| meaning of the contents of what was being signed.
|
| However, the FINREAD specification didn't define any mechanism that
| provided proof to the relying party that a FINREAD terminal was actually
| used as part of the signature process.
Fascinating.  They are trying to re-create the historical, mainly now lost,
role of a notary public.

Notary publics came into being at a time when many people were illiterate,
and only the wealthy had lawyers.  A notary public had two distinct roles:

- To ensure that a person signing a notarized document actually
understood what he was signing;
- To witness that the signer - who might well sign with just an X -
is the person whose name appears.

The first role is long lost.  Notary publics don't look at the material being
signed.  We lost our trust in a public official explaining contracts - the
assumption now is that everyone gets his own lawyer.  The second role
remained, even with universal literacy.  A notary public is supposed to check
for some form of good ID - or know the person involved, the traditional
means.

A traditional notary public, in modern terms, would be a tamper-resistant
device which would take as inputs (a) a piece of text; (b) a means for
signing (e.g., a hardware token).  It would first present the actual text
that is being signed to the party attempting to do the signing, in some
unambiguous form (e.g., no invisible fonts - it would provide you with a
high degree of assurance that you had actually seen every bit of what you
were signing).  The signing party would indicate assent to what was in the
text.  The notary might, or might not - depending on the means for signing -
then authenticate the signer further.  The notary would then pass the text to
the means for signing, and verify that what came back was the same text,
with an alleged signature attached in a form that could not modify the text.
(E.g., if the signature were an actual RSA signature of the text, it would
have to decrypt it using the signer's public key.  But if the signature were
a marked, separate signature on a hash, then there is no reason why the notary
has to be able to verify anything about the signature.)  Finally, the notary
would sign the signed message itself.
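
For what it's worth, the flow can be written out as a small sketch - every name
here is hypothetical, and the display, assent, and identity-checking steps are
stand-ins for the genuinely hard parts:

    from dataclasses import dataclass

    @dataclass
    class NotarizedRecord:
        text: bytes
        signer_signature: bytes
        notary_signature: bytes

    def notarize(text, display, get_assent, signer_token, verify_signer,
                 notary_sign) -> NotarizedRecord:
        display(text)                           # show every bit of the text
        if not get_assent():
            raise RuntimeError("signer did not assent")
        sig = signer_token(text)                # token signs exactly this text
        if not verify_signer(text, sig):        # signature must cover the same text
            raise RuntimeError("returned signature does not match the text shown")
        return NotarizedRecord(text, sig, notary_sign(text + sig))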

We tend not to look at protocols like this because we've become very
distrustful of any 3rd party.  But trusted 3rd parties have always been
central to most business transactions, and they can be very difficult to
replace effectively or efficiently.
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: dual-use digital signature vulnerability

2004-07-21 Thread Jerrold Leichter
| note that some of the online click-thru contracts have been making
| attempt to address this area; rather than simple i agree/disagree
| buttons ... they put little checkmarks at places in scrolled form  you
| have to at least scroll thru the document and click on one or more
| checkmarks  before doing the i agree button. a digital signature has
| somewhat higher integrity than simple clicking on the i agree button ...
| but wouldn't subsume the efforts to demonstrate that a person was required
| to make some effort to view document. Of course in various attack scenarios
| ... simple checkmark clicks could be forged. However, the issue being
| addressed isn't a forging attack ... it is person repudiating that they
| read the TCs before hitting the I agree button.
...which makes for an interesting example of thw way in which informal
understandings don't necessarily translate well when things are automated.

The law school professor of a friend of mine told a story about going to rent
an apartment.  The landlord was very surprised to watch him sign it with only
a glance - not only was this guy a law professor, but he had done a stint as a
Housing Court judge.  Aren't you going to read it before signing?  No -
it's not enforceable anyway.  (This is why there have been cases of landlords
who refused to rent to lawyers - a refusal that was upheld!)

If you are offered a pre-drafted contract on a take-it-or-leave it basis -
the technical name is an adhesion contract, I believe - and you really need
whatever is being contracted for, you generally *don't* want to read the
thing too closely

When you buy a house these days, at least some lawyers will have you initial
every page of the agreement.  Not that there is anything in there you want to
read too closely either.  (The standard terms for the purchase of a house in
Connecticut have you agree not to use or store gasoline on the property.  I
pointed out to my lawyer - who had actually been on the committee that last
reviewed the standard form - that as written this meant I couldn't drive my
car into the garage, or even the driveway.  His basic response was Don't
worry about it.)

The black-and-white of a written contract makes things appear much more
formal and well-defined than they actually are.  The real world rests on many
unwritten, even unspoken, assumptions and ways of doing business.  It's
just the way people are built.  When digital technologies only *seem* to match
existing mechanisms, all kinds of problems arise.  Despite such sayings as
You can't tell a book by its cover, we trust others based on appearances
all the time.  Twenty years ago, if a company had printed letterhead with a
nice logo, you'd trust them to be for real.  Every once in a while, a con
man could abuse this trust - but it was an expensive undertaking, and most
people weren't really likely to ever see such an attack.

Today, a letterhead or a nice business card mean nothing - even when they are
on paper, as opposed to being just bits.  It's really, really difficult to
come up with formal, mechanized equivalents of these informal, intuitive
mechanisms.
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: EZ Pass and the fast lane ....

2004-07-12 Thread Jerrold Leichter
|  another purpose -- preserving the privacy of drivers by using more
|  complicated protocols. However, as the benefit of such systems is to
|  people who are unlikely to have much voice in the construction of the
|  system, and who are also unlikely to be willing to pay more money to
|  gain privacy, I think the implementation of such tags is unlikely.
|
| I think it would be easier to provide drivers with a simpler method of
| turning off their transponder. Recently ordered FasTrak tokens come with a
| mylar bag for this purpose, which is too unwieldy. A switch, however,
| might be enough.
|
| This would not prevent an adversary from recording the IDs of cars that
| pass through toll gates. It would, however, prevent reading those IDs at
| other times.
EZpass actually went in the opposite direction.  When I got my EZpass a number
of years back, they provided such a bag, along with instructions on use. These
days, they no longer provide the bag, and indirectly they strongly discourage
you from using any such thing:  According to the rules, EZpasses must be
mounted on your windshield:  They provide a variant on Velcro strips, which
make the box a pain to remove while driving.  (For commercial vehicles,
there's an external, permanently-mounted version).  People used to just keep
the thing loose inside the car and wave it at the sensor, which apparently
caused too many misreads, leading to traffic backups.  Now, if they catch you
doing that, there's a substantial fine.

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: EZ Pass and the fast lane ....

2004-07-12 Thread Jerrold Leichter
| ...unless people are willing to go very hi-tech in their toll evasion
| maneuvers, implementing, say, thin see-through LCD screens placed over their
| license plates that turn opaque at a push of a button
A local TV station here in the NY area did a show about a lower-tech version
of the same thing:  A plastic cover for the plate that is supposed to cause
enough glare in a camera that the plate is unreadable when snapped by the
various automated speed traps and red-light-running traps out there.  These
things are apparently advertised in all the car magazines.  According to the
TV show, they vary in effectiveness, from quite effective for some kinds of
cameras in certain uses to pretty much ineffective.

A universal feature of all such devices is that they are illegal.  At least
around here (and I think in most if not all states), license plates may not be
covered *at all*.  If any kind of device emerged that was effective at
actually making plates unreadable, I can easily see municipalities make using
one into a parking violation - a quick source of revenue, at least until most
people figured out that it wasn't worth it to buy these things.

How long before license plates have transponders built into them?  After all,
it's long-established law that you can be required to place an identifier on
your car when it's on the public roads - why's there a difference between one
that responds at optical frequencies and one that responds at a couple of
gigahertz?  (For that matter, even if you want to stick to optical and you
can't get plate reading accurate enough, the technology for reading bar codes
from moving vehicles is well-developed - it's been used for years to identify
railroad cars, and many gated communities use them to open the gates for cars
owned by residents.)
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: EZ Pass and the fast lane ....

2004-07-10 Thread Jerrold Leichter
|  No mention is made of encryption or challenge response
|  authentication but I guess that may or may not be part of the design
|  (one would think it had better be, as picking off the ESN should be duck
|  soup with suitable gear if not encrypted).
|
|  From a business perspective, it makes no
| sense to spend any money on crypto for this
| application.  If it is free, sure use it,
| but if not, then worry about the 0.01% of
| users who fiddle the system later on.
|
| It would be relatively easy to catch someone
| doing this - just cross-correlate with other
| information (address of home and work) and
| then photograph the car at the on-ramp.
It would, in principle, be relatively easy to query these boxes yourself, or
listen in near a station.  You could quickly build up a database of valid
ID's, and could then build/sell a clone box, perhaps a tumbler box that
would rotate among valid ID's.

The actual money involved can be substantial - in the NY area, a cross-Hudson
-River commuter spends at least $5/day through EZ-pass, and you can now charge
things like parking at airports - $25/day or more.  So ... you'd think there
would be an active market in rigged EZ-pass boxes by now (as, for example,
there has been an active market for counterfeit monthly passes on the commuter
rail lines in the New York area.)  Curiously, if there is such a thing, it's
so far on a low enough scale that the press hasn't picked it up.

The basic protection mechanism involved is apparently quite simple:  Every
time you use EZ-pass, a photo of your license plate, and of the driver, is
taken.  The photos are kept for quite some time.  So cheaters can be tracked.

In addition, where there are high-value charges, there is usually a gate.  If
your EZ-pass is invalid, you're stuck in what is effectively a man-trap,
waiting for the cops on duty to check things out.  You'd better have a valid
EZ-pass to show them.  I don't know how much info they can get out of the
system, but it could easily tell them if, when they scan your good pass,
it shows a different ID from the one registered before.  (On the other hand,
high-speed readers - where there is no gate - are spreading.  Several were
recently installed at the Tappan-Zee Bridge, where the toll is $7.)

All in all, the system seems to depend on what I've heard described as the
bull in the china shop theory of security:  You can always buy more china,
but the bull is dead meat.
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Is finding security holes a good idea?

2004-06-15 Thread Jerrold Leichter
| Thor Lancelot Simon [EMAIL PROTECTED] writes:
|
|  On Mon, Jun 14, 2004 at 08:07:11AM -0700, Eric Rescorla wrote:
|  Roughly speaking:
|  If I as a White Hat find a bug and then don't tell anyone, there's no
|  reason to believe it will result in any intrusions.  The bug has to
| 
|  I don't believe that the premise above is valid.  To believe it, I think
|  I'd have to hold that there were no correlation between bugs I found and
|  bugs that others were likely to find; and a lot of experience tells me
|  very much the opposite.
|
| The extent to which bugs are independently rediscovered is certainly
| an open question which hasn't received enough study. However, the
| fact that relatively obvious and serious bugs seem to persist for
| long periods of time (years) in code bases without being found
| in the open literature, suggests that there's a fair amount of
| independence.
I don't find that argument at all convincing.  After all, these bugs *are*
being found!

It's clear that having access to the sources is not, in and of itself,
sufficient to make these bugs visible (else the developers of close-source
software would find them long before independent white- or black-hats).
Something else accounts for their surfacing.  I would guess that at least two
factors are involved:

- There's a level of similarity beyond, say, buffer overflow.
For example, once one buffer overflow in parsing a To:
field is found, everyone starts looking for bugs in To:
field parsing - and then at the closely-related code for
parsing other header fields.  We know from experience - look
at the attempts to gain super-high reliability through
N-version programming - that bugs cluster:  Certain kinds of
problems are harder to get right than others.  Thus, this
focused attention on an area where bugs have been found is
highly likely to find other bugs.

- New tools and techniques for finding bugs in code are developed,
bringing to light previously-hidden bugs.

To the extent these are true, a white-hat could reasonably argue that a
newly-found bug is unlikely to be rediscovered only if it is neither closely
related to previously-found bugs; nor found using a new technique.  But these
are exactly the cases in which you would *want* to publish!

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: voting

2004-04-09 Thread Jerrold Leichter
|   privacy wrote:
|   [good points about weaknesses in adversarial system deleted]
|
|  It's baffling that security experts today are clinging to the outmoded
|  and insecure paper voting systems of the past, where evidence of fraud,
|  error and incompetence is overwhelming.  Cryptographic voting protocols
|  have been in development for 20 years, and there are dozens of proposals
|  in the literature with various characteristics in terms of scalability,
|  security and privacy.  The votehere.net scheme uses advanced cryptographic
|  techniques including zero knowledge proofs and verifiable remixing,
|  the same method that might be used in next generation anonymous remailers.
| 
| Our anonymous corrospondent has not addressed the issues I raised in my
| initial post on the 7th:
|
| 1. The use of receipts which a voter takes from the voting place to 'verify'
| that their vote was correctly included in the total opens the way for voter
| coercion.
|
| 2. The proposed fix - a blizzard of decoy receipts - makes recounts based
| on the receipts impossible.
The VoteHere system is really quite clever, and you're attacking it for not
being the same as everything that went before.

Current systems - whether paper, machine, or whatever - provide no inherent
assurance that the vote you cast is the one that got counted.  Ballot boxes
can be lost, their contents can be replaced; machines can be rigged.  We
use procedural mechanisms to try to prevent such attacks.  It's impossible to
know how effective they are:  We have no real way to measure the effectiveness,
since there is no independent check on what they are controlling.  There are
regular allegations of all kinds of abuses, poll watchers or no.  And there
are plenty of suspect results.

| Answer this:
|
| 1. How does this system prevent voter coercion, while still allowing receipt
| based recounts?
a)  Receipts in the VoteHere system are *not* used for recounts.  No receipt
that a user takes away can possibly be used for that - the chances of you being
able to recover even half the receipts a day after the election are probably
about nil.  Receipts play exactly one role:  They allow a voter who wishes to
to confirm that his vote actually was tallied.

b)  We've raised prevention of voter coercion on some kind of pedestal.
The fact is, I doubt it plays much of a real role.  If someone wants to coerce
voters, they'll use the kind of goons who collect on gambling debts to do it.
The vast majority of people who they try to coerce will be too frightened to
even think about trying to fool them - and if they do try, will lie so
unconvincingly that they'll get beaten up anyway.  Political parties that want
to play games regularly bring busloads of people to polling places.  They
don't check how the people they bus in vote - they don't need to.  They know
who to pick.

However, if this really bothers you, a system like this lets you trade off
non-coercion and checkability:  When you enter the polling place, you draw a
random ball - say, using one of those machines they use for lotteries.  If the
ball is red, you get a receipt; if it's blue, the receipt is retained in a
sealed box (where it's useless to anyone except as some kind of cross-check of
number of votes cast, etc.)  No one but you gets to see the color of the ball.
Now, even if you are being coerced and get a red ball, you can simply discard
the receipt - the polling place should have a secure, private receptacle; or
maybe you can even push a button on the machine that says Pretend I got a
blue ball - and claim you got a blue ball.  The fraction of red and blue
balls is adjustable, depending on how you choose to value checkability vs.
non-coercion.

| Or do you have some mechanism by which I can
| personally verify every vote which went into the total, to make sure they
| are correct?
In VoteHere's system, you can't possibly verify that every vote that went into
the total was correctly handled.  You can verify that the votes *that the
system claims were recorded* are actually counted correctly.  And you can
verify that *your* vote was actually recorded as you cast it - something you
can't do today.  The point of the system is that any manipulation is likely to
hit someone who chooses to verify their vote, sooner or later - and it only
takes one such detected manipulation to start an inquiry.

Whether in practice people want this enough to take the trouble ... we'll have
to wait and see.

| 2. On what basis do you think the average voter should trust this system,
| seeing as it's based on mechanisms he or she cant personally verify?
On what basis should an average voter trust today's systems?  How many people
have any idea what safeguards are currently used?  How many have any personal
contact with the poll watchers on whom the system relies?  Could *you* verify,
in any meaningful sense, the proper handling of a vote you cast?  Could you
watch the machines/boxes/whatever being handled?  

Re: [Fwd: Re: Non-repudiation (was RE: The PAIN mnemonic)]

2004-01-09 Thread Jerrold Leichter
| Non-repudiation applied to digital signatures implies that the definition
| states that only one person possibly had possession of the private signing
| key and was conscious about the fact that it was used to sign something.
There is absolutely *no* cryptographic or mathematical content to this
definition!  It could as well apply to key locks, to signatures on paper,
or whatever.  It's in no way a property of a cryptographic system, or of
*any* system.  Nor, as written, is there even any possible set of evidence
that could be adduced to prove this:  After all, someone might, just by
chance, have guessed the private key.

Granted, there are significant issues with non-repudiation - so significant
that it probably isn't a very useful concept.  But there *is* some
cryptographic content behind it!  Otherwise, what are we to make, for example,
of the various evolving signature key schemes that detect stolen keys?

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: [Fwd: Re: Non-repudiation (was RE: The PAIN mnemonic)]

2004-01-07 Thread Jerrold Leichter
Now that we've trashed non-repudiation ... just how is it different from
authentication?  In both cases, there is a clear technical meaning (though as
with anything in mathematics, when you get right down to it, the details are
complex and may be important):  To produce an authenticator/non-repudiable
signature, you must have access to the secret.  There isn't, at this level,
even any difference between the requirements for the two.  Where we get into
trouble is in attempting to bind the real world to the mathematics.  In each
case, the receiver wants to be able to say:

 1. I can rely on the fact that X sent me this data, because it came
with a signature that could be calculated only by X.

What he *really* needs to say is:

 2. I can rely on the fact that X sent me this data, because it came
with a signature that could be calculated only by someone knowing X's
secret.

To go from 2 to 1, the receiver must also have:

 3. I can rely on the fact that only X knows X's secret.
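
To make statement 2 concrete, here is a toy sketch (Python; textbook RSA with
tiny primes and no padding, purely illustrative).  Both an ordinary
authenticator and a signature require knowing a secret to produce; the
practical difference is that with a shared secret the receiver holds the same
secret, while with a key pair the verifier needs only the public half:

    import hashlib, hmac

    msg = b"pay Y $100"

    # shared-secret authenticator: the receiver knows K too, and so could
    # have produced this tag himself
    K = b"secret shared by X and the receiver"
    tag = hmac.new(K, msg, hashlib.sha256).digest()

    # textbook RSA signature with toy primes (NOT secure): only someone
    # knowing d can produce sig; anyone holding (n, e) can check it
    p, q = 10007, 10009
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    sig = pow(h, d, n)
    assert pow(sig, e, n) == h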

In ordinary English usage, there is little difference between "I've
authenticated this message as coming from X" and "X can't deny that he wrote
this message."  We've learned that non-repudiation is a concept with relatively
little use in the legal system.  However, authentication (of a signature,
document, whatever) is quite common (even if for the usual kinds of objects
that need authentication, there is generally little to discuss).  If the
ultimate question is whether, as a legal matter, X is bound by some writing
or whatever, authentication gets at the same basic question (which is only
part, usually a small part, of the relevant legal issues).

The problems that we've been discussing here are clear from 2 and 3:

- Rely on is inherently outside of the cryptography or mathematics.
It's only meaningful to the extent that there is some recourse
(generally through agreements, but ultimately through the legal
system) if you rely on something that turns out not to be what
you thought it was.

- We identify X with an individual, but in fact X rarely knows
the secret personally, and never does the actual calculations -
some code running in some real physical machine does the work.

So in fact we can't even begin to get 3; at best, we have:

3'. I can rely on the fact that, if X has shared his secret with Y (where
Y is typically some equipment), then I can rely on X to be bound by
whatever Y does.

This is now so bizarre and removed from ordinary notions that it should be
clear why it's unlikely to be of much real-world use!

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2004-01-04 Thread Jerrold Leichter
| David Wagner writes:
|
|  To see why, let's go back to the beginning, and look at the threat
|  model.  If multiple people are doing shared development on a central
|  machine, that machine must have an owner -- let's call him Linus.  Now
|  ask yourself: Do those developers trust Linus?
| 
|  If the developers don't trust Linus, they're screwed.  It doesn't matter how
|  much attestation you throw at the problem, Linus can always violate their
|  security model.  As always, you've got to trust root (the system
|  administrator); nothing new here.
| 
|  Consequently, it seems to me we only need to consider a threat model
|  where the developers trust Linus.  (Linus need not be infallible, but the
|  developers should believe Linus won't intentionally try to violate their
|  security goals.)  In this case, owner-directed attestation suffices.
|  Do you see why?  Linus's machine will produce an attestation, signed
|  by Linus's key, of what software is running.  Since the developers trust
|  Linus, they can then verify this attestation.  Note that the developers
|  don't need to trust each other, but they do need to trust the owner/admin
|  of the shared box.  So, it seems to me we can get by without third-party
|  attestation.
|
| You could conceivably have a PC where the developers don't trust
| Linus, but instead trust the PC manufacturer.  The PC manufacturer
| could have made it extremely expensive for Linus to tamper with the PC
| in order to violate [the developers'] security model.  (It isn't
| logically impossible, it's just extremely expensive.  Perhaps it costs
| millions of dollars, or something.)
Precisely - though see below.

| There are computers like that today.  At least, there are devices that can
| run software, that are highly tamper-resistant, and that can do attestations.
Smart cards are intended to work this way, too.

| (Now there is an important question about what the cost to do a hardware
| attack against those devices would be.)  It seems to me to be a good thing
| that the ordinary PC is not such a device.  (Ryan Lackey, in a talk
| about security for colocated machines, described using devices like
| these for colocation where it's not appropriate or desirable to rely on
| the physical security of the colocated machine.  Of course, strictly
| speaking, all security always relies on physical security.)
This kind of thing goes *way* back.  In the '70's, there was a company - I
think the name was BASIC 4 - that sold a machine with two privileged levels.
The OS ran at level 1 (user code at unprivileged level 2, of course).  There
were some things - like, probably, accounting - that ran at level 0.  Even
with physical access to the machine, it was supposed to be difficult to do
anything to level 0 - unless you had a (physical) key to use in the lock
on the front panel.  The machine was intended as a replacement for the then-
prevalent time-sharing model:  An application developer would buy machines
from the manufacturer, load them with application environments, and sell
application services.  Users of the machines could use the applications with
fast local access, even do development - but could not modify the basic
configuration.  I know the company vanished well before networks got fast
enough, and PC's cheap enough, that the business model stopped making any
sense; but I know nothing of the details.

| I don't know how the key management works in these devices.  If the
| keys used to sign attestations are loaded by (or known to) the device
| owner, it wouldn't help with the case where the device owner is
| untrusted.  If the keys are loaded by the manufacturer, it might
| support a model where the owner is untrusted and the manufacturer is
| trusted.
There's no more reason that the manufacturer has to be trusted than that the
manufacturer of a safe has to be trusted (at least in the sense that neither
needs to know the keys/combination on any particular machine/safe).  If
machines like this are to be built, they should require some special physical
override to allow the keys to be configured.  A key lock is still good
technology for this purpose:  It's a very well-understood technology, and its
simplicity is a big advantage.  A combination lock might be easier to
integrate securely, for the same basic reason that combination locks became
the standard for bank vaults:  No need for an open passageway from the outside
to the inside.  (In the bank vault case, this passageway was a great way to
get nitroglycerin inside the locking mechanism.)  In either case, you could
(like a bank) use a form of secret sharing, so that only a trusted group of
people - with multiple keys, or multiple parts of the combination - could
access the key setup mode.  Given this, there is no reason why a machine fresh
from the manufacturer need have any embedded keys.
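
To make the secret-sharing point concrete, here is a minimal sketch of an
n-of-n split of a key-setup credential (Python; XOR shares, so every trustee
must contribute - a k-of-n scheme such as Shamir's would allow a quorum
instead):

    import os
    from functools import reduce

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def split(secret: bytes, parties: int) -> list:
        # any subset of fewer than all the shares is statistically
        # independent of the secret
        shares = [os.urandom(len(secret)) for _ in range(parties - 1)]
        shares.append(reduce(xor, shares, secret))
        return shares

    def combine(shares) -> bytes:
        return reduce(xor, shares)

    setup_key = os.urandom(16)          # the credential for key-setup mode
    shares = split(setup_key, 3)
    assert combine(shares) == setup_key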

Will machines like this be built?  Probably not, except for special purposes.
The TCPA machines will likely require you (and the people who want to 

Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-30 Thread Jerrold Leichter
| Rick Wash  wrote:
| There are many legitimate uses of remote attestation that I would like to
| see.  For example, as a sysadmin, I'd love to be able to verify that my
| servers are running the appropriate software before I trust them to access
| my files for me.  Remote attestation is a good technical way of doing that.
|
| This is a good example, because it brings out that there are really
| two different variants of remote attestation.  Up to now, I've been
| lumping them together, but I shouldn't have been.  In particular, I'm
| thinking of owner-directed remote attestation vs. third-party-directed
| remote attestation.  The difference is who wants to receive assurance of
| what software is running on a computer; the former mechanism allows to
| convince the owner of that computer, while the latter mechanism allows
| to convince third parties
|
| Finally, I'll come back to the topic you raised by noting that your
| example application is one that could be supported with owner-directed
| remote attestation.  You don't need third-party-directed remote
| attestation to support your desired use of remote attestation.  So, TCPA
| or Palladium could easily fall back to only owner-directed attestation
| (not third-party-attestation), and you'd still be able to verify the
| software running on your own servers without incurring new risks of DRM,
| software lock-in, or whatever
All of this is fine as long as there is a one-to-one association between
machines and owners of those machines.  Consider the example I gave
earlier:  A shared machine containing the standard distribution of the
trusted computing software.  All the members of the group that maintain the
software will want to have the machine attest, to them, that it is properly
configured and operating as intended.  We can call the group the owner of the
machine, and create a single key pair that all of them know.  But this is
brittle - shared secrets always are.  Any member of the group could then
modify the machine and, using his access to the private key, fake the "all
clear" indication.  Each participant should have his own key pair, since
attestation using a particular key pair only indicates security with respect
to those who don't know the private key of the pair - and a member of a
development team for the secure kernel *should* mistrust his fellow team
members!

So, again, there are simple instances where it will prove useful to be able
to maintain multiple sets of independent key pairs.

Now, in the shared distribution machine case, on one level team members should
be mutually suspicious, but on another they *do* consider themselves joint
owners of the machine - so it doesn't bother them that there are key pairs
to which they don't have access.  After all, those key pairs are assigned to
*other* owners of the machine!  But exactly the same mechanism could be used
to assign a key pair to Virgin Records - who we *don't* want to consider an
owner of the machine.

As long as, by owner, you mean a single person, or a group of people who
completely trust each other (with respect to the security problem we are trying
to solve); and as long as each machine has only one owner; then, yes, one
key pair will do.  But as soon as owner can encompass mutually suspicious
parties, you need to have multiple independent key pairs - and then how you
use them, and to whom you grant them, becomes a matter of choice and policy,
not technical possibility.

BTW, even with a single owner, multiple independent key pairs may be useful.
Suppose I have reason to suspect that my private key has been leaked.  What
can I do?  If there is only one key pair around, I have to rebuild my machine
from scratch.  But if I had the foresight to generate *two* key pairs, one of
which I use regularly - and the other of which I sealed away in a safe - then
I can go to the safe, get out my backup key pair, and re-certify my machine.
In fact, it would probably be prudent for me to generate a whole bunch of
such backup key pairs, just in case.

You're trying to make the argument that feature X (here, remote attestation for
multiple mutually-suspicious parties) has no significant uses.  Historically,
arguments like this are losers.  People come up with uses for all kinds of
surprising things.  In this case, it's not even very hard.

An argument that feature X has uses, but also imposes significant and non-
obvious costs, is another thing entirely.  Elucidating the costs is valuable.
But ultimately individuals will make their own analysis of the cost/benefit
ratio, and their calculations will be different from yours.  Carl Ellison, I
think, argued that TCPA will probably never have large penetration because the
dominant purchasing factor for consumers is always initial cost, and the
extra hardware will ensure that TCPA-capable machines will always be more
expensive.  Maybe he's right.

Even if he isn't, as long as people believe that they have control over the
costs associated with 

Re: [camram-spam] Re: Microsoft publicly announces Penny Black PoW postage project

2003-12-30 Thread Jerrold Leichter
(The use of memory speed leads to an interesting notion:  Functions that are
designed to be differentially expensive on different kinds of fielded hardware.
On a theoretical basis, of course, all hardware is interchangeable; but in
practice, something differentially expensive to calculate on an x86 will remain
expensive for many years to come.)
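
This is not the Penny Black design itself, but a sketch of the memory-bound
flavor (Python; sizes are far too small for real use):  The work is a long
chain of data-dependent lookups into a table too large for cache, so a faster
CPU with the same memory system gains relatively little.

    import hashlib

    TABLE_WORDS = 1 << 20          # 2^20 64-bit words (logically ~8 MB)

    def make_table(seed: bytes):
        table, counter = [], 0
        while len(table) < TABLE_WORDS:
            block = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
            for i in range(0, 32, 8):
                table.append(int.from_bytes(block[i:i + 8], "big"))
            counter += 1
        return table

    def puzzle(table, challenge: bytes, steps: int = 1 << 16) -> bytes:
        acc = int.from_bytes(hashlib.sha256(challenge).digest()[:8], "big")
        for _ in range(steps):
            acc ^= table[acc % TABLE_WORDS]                 # unpredictable lookup
            acc = (acc * 6364136223846793005 + 1) % (1 << 64)
        return hashlib.sha256(challenge + acc.to_bytes(8, "big")).digest()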

In fact, such things are probably pretty easy to do - as was determined during
arguments over the design of Java.  The original Java specs pinned down
floating point arithmetic exactly:  A conforming implementation was required
to use IEEE single- and double-precision arithmetic, and give answers
identical at the bit level to a reference implementation.  This is easy to do
on a SPARC.  It's extremely difficult to do on an x86, because x86 FP
arithmetic is done to a higher precision.  The hardware provides only one way
to round an intermediate result to true IEEE single or double precision:
Store to memory, then read back.  This imposes a huge cost.  No one could find
any significantly better way to get the bit-for-bit same results on an x86.
(The Java standards were ultimately loosened up.)

So one should be able to define a highly FP-intensive, highly numerically
unstable, calculation all of whose final bits were considered to be part of
the answer.  This would be extremely difficult to calculate rapidly on an
x86.
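
A toy illustration of why pinning down all the final bits also pins down the
rounding behaviour of the hardware (Python with numpy, used here only to get
explicit float32 vs. float64 arithmetic; the constants are arbitrary):

    import numpy as np

    def logistic(steps: int, dtype):
        # a chaotic iteration: rounding differences grow exponentially, so
        # the low-order bits of the result depend on the precision used
        # at every intermediate step
        x, r, one = dtype(0.4), dtype(3.9999), dtype(1.0)
        for _ in range(steps):
            x = r * x * (one - x)
        return float(x)

    print(logistic(100, np.float32))    # bears no resemblance to ...
    print(logistic(100, np.float64))    # ... the double-precision trajectory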

Conversely, one could define the answer - possibly to the same problem - as
that produced using the higher intermediate precision of the x86.  This would
be very hard to compute quickly on machines whose FP hardware doesn't provide
exactly the same length intermediate results as the x86.

One can probably find problems that are linked to other kinds of hardware. For
example, the IBM PowerPC chip doesn't have generic extended precision values,
but does have a fused multiply/add with extended intermediate values.

Some machines provide fast transfers between FP and integer registers; others
require you to go to memory.  Vector-like processing - often of a specialized,
limited sort intended for graphics - is available on some architectures and
not others.  Problems requiring more than 32 bits of address space will pick
out the 64-bit machines.  (Imagine requiring lookups in a table with 2^33
entries.  8 Gig of real memory isn't unreasonable today - a few thousand
dollars - and is becoming cheaper all the time.  But using it effectively on a
the 32-bit machines out there is very hard, typically requiring changes to
the memory mapping or segment registers and such, at a cost equivalent to
hundreds or even thousands of instructions.)

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Repudiating non-repudiation

2003-12-29 Thread Jerrold Leichter
Ian's message gave a summary that's in accord with my understanding of how
courts work.  Since lawyers learn by example - and the law grows by example -
here's a case that I think closely parallels the legal issues in repudiation
of digital signature cases.  The case, which if I remember right (from hearing
about it 20 years ago from a friend in law school) is known informally as the
"Green Giant Peas" case, and forms one of the bases of modern tort liability.

The beginning of the 20th century led to the first mass production, distri-
bution, and marketing of foods.  Before that, you bought peas.  Now, you
could buy a can of Green Giant Peas, sold by a large manufacturer who sold
through stores all over the place, and advertised for your business.

Someone bought a can of Green Giant Peas at a local store.  The can contained
metal shavings.  The purchaser was injured, and sued Green Giant.  One of the
defenses Green Giant raised was:  Just because it says Green Giant on the label
doesn't *prove* Green Giant actually packed the stuff!  The plaintiff must
first prove that these peas really were packed by Green Giant.  Such defenses
had worked in the past - there are many of the same general flavor, insisting
that no recovery should be possible unless plaintiff could reach a level of
proof that was inherently unreachable.  In this case, the courts finally
threw out this defense.  I can't find the actual case on line, but at
http://www.lawspirit.com/legalenglish/handbook/evid08.htm (a curious site -
it seems to be mainly in Chinese) the following text appears:

D. Self-authentication: A few types of documents are
self-authenticating, because they are so likely to be what they
seem, that no testimony or other evidence of their genuineness need be
produced. [474 - 475]

1. State provisions: Under most state statutes, the following
are self-authenticating: (1) deeds and other instruments that
are notarized; (2) certified copies of public records (e.g., a
certified copy of a death certificate); and (3) books of
statutes which appear to be printed by a government body
(e.g., a statute book appearing to be from a sister state or
foreign country).

2. Federal Rules: FRE 902 recognizes the above three classes,
and also adds: (1) all official publications (not just
statutes); (2) newspapers or periodicals; and (3) labels,
signs, or other inscriptions indicating ownership, control,
or origin (e.g., a can of peas bearing the label Green Giant
Co. is self-authenticating as having been produced by Green
Giant Co.).

Self-authenticating here seems very close in concept to what we are trying
to accomplish with digital signatures - and the Green Giant example shows how
the law grows to encompass new kinds of objects.  But it's also important to
look at how self-authentication is actually implemented.  Nothing here is
absolute.  What we have is a shift of the burden of proof.  In general, to
introduce a document as evidence, the introducer has to provide some
proof that the document is what it purports to be.  No such proof is
required for self-authenticating documents.  Instead, the burden shifts to
the opposing counsel to offer proof that the document is *not* what it
purports to be.  This is as far as the courts will ever be willing to go.

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: I don't know PAIN...

2003-12-29 Thread Jerrold Leichter
|  Note that there is no theoretical reason that it should be
|  possible to figure out the public key given the private key,
|  either, but it so happens that it is generally possible to
|  do so
| 
|  So what's this generally possible business about?
|
| Well, AFAIK its always possible, but I was hedging my bets :-) I can
| imagine a system where both public and private keys are generated from
| some other stuff which is then discarded.
That's true of RSA!  The public and private keys are indistinguishable - you
have a key *pair*, and designate one of the keys as public.  Computing either
key from the other is as hard as factoring the modulus.  (Proof:  Given both
keys in the pair, it's easy to factor.)
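
For the record, the standard argument, sketched in Python with toy primes:
since e*d - 1 is a multiple of the group order, a random element quickly
yields a nontrivial square root of 1 mod N, and hence a factor.

    import math, random

    def factor(n, e, d):
        k, t = e * d - 1, 0
        while k % 2 == 0:
            k //= 2
            t += 1
        while True:
            g = random.randrange(2, n - 1)
            y = pow(g, k, n)
            if y in (1, n - 1):
                continue                    # uninformative choice of g
            for _ in range(t):
                z = pow(y, 2, n)
                if z == 1:                  # y is a nontrivial sqrt of 1 mod n
                    p = math.gcd(y - 1, n)
                    return p, n // p
                if z == n - 1:
                    break                   # reached -1; try another g
                y = z

    p, q = 10007, 10009                     # toy primes, illustration only
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))
    print(factor(n, e, d))                  # recovers 10007 and 10009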

Merkle's knapsack systems (which didn't work out for other reasons) had the
property that the public key was computed directly from the private key.
(The private key had a special form, while the public key was supposed to
look like a random instance of the knapsack problem.)

Obviously, a system in which the private key could be computed easily from the
public key would not be useful for encryption; so we've covered all the
meaningful "is computable from" bases.
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: I don't know PAIN...

2003-12-29 Thread Jerrold Leichter
| On Dec 27, 2003, at 10:01 AM, Ben Laurie wrote:
|  Note that there is no theoretical reason that it should be possible
|  to figure out the public key given the private key, either, but it so
|  happens that it is generally possible to do so
|  So what's this generally possible business about?
| 
|  Well, AFAIK its always possible, but I was hedging my bets :-) I can
|  imagine a system where both public and private keys are generated from
|  some other stuff which is then discarded.
|
| Sure.  Imagine RSA where instead of a fixed public exponent (typically
| 2^16 + 1), you use a large random public exponent.  After computing the
| private exponent, you discard the two primes and all other intermediate
| information, keeping only the modulus and the two exponents.  Now it's
| very hard to compute either exponent from the other, but they do
| constitute a public/private key-pair.  The operations will be more
| expensive that in standard RSA where one party has a small exponent and
| the other party has an arithmetical shortcut, but still far less
| computation than cracking the other party's key.
This doesn't work for RSA because given a single private/public key pair, you
can factor.
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-23 Thread Jerrold Leichter
|  We've met the enemy, and he is us.  *Any* secure computing kernel
|  that can do
|  the kinds of things we want out of secure computing kernels, can also
|  do the
|  kinds of things we *don't* want out of secure computing kernels.
| 
|  I don't understand why you say that.  You can build perfectly good
|  secure computing kernels that don't contain any support for remote
|  attribution.  It's all about who has control, isn't it?
| 
| There is no control of your system with remote attestation. Remote
| attestation simply allows the distant end of a communication to
| determine if your configuration is acceptable for them to communicate
| with you.
|
| But you missed my main point.  Leichter claims that any secure kernel is
| inevitably going to come with all the alleged harms (DRM, lock-in, etc.).
| My main point is that this is simply not so.
|
| There are two very different pieces here: that of a secure kernel, and
| that of remote attestation.  They are separable.  TCPA and Palladium
| contain both pieces, but that's just an accident; one can easily imagine
| a Palladium-- that doesn't contain any support for remote attestation
| whatsoever.  Whatever you think of remote attestation, it is separable
| from the goal of a secure kernel.
|
| This means that we can have a secure kernel without all the harms.
| It's not hard to build a secure kernel that doesn't provide any form of
| remote attestation, and almost all of the alleged harms would go away if
| you remove remote attestation.  In short, you *can* have a secure kernel
| without having all the kinds of things we don't want.  Leichter's claim
| is wrong
The question is not whether you *could* build such a thing - I agree, it's
quite possible.  The question is whether it would make enough sense that it
would gain wide usage.  I claim not.

The issues have been discussed by others in this stream of messages, but
let's pull them together.  Suppose I wished to put together a secure system.
I choose my open-source software, perhaps relying on the word of others,
perhaps also checking it myself.  I choose a suitable hardware base.  I put
my system together, install my software - voila, a secure system.  At least,
it's secure at that moment in time.  How do I know, the next time I come to
use it, that it is *still* secure - that no one has slipped in and modified
the hardware, or found a bug and modified the software?

I can go for physical security.  I can keep the device with me all the time,
or lock it in a secure safe.  I can build it using tamper-resistant and
tamper-evident mechanisms.  If I go with the latter - *much* easier - I have
to actually check the thing before using it, or the tamper evidence does me
no good ... which acts as a lead-in to the more general issue.

Hardware protections are fine, and essential - but they can only go so far.
I really want a software self-check.  This is an idea that goes way back:
Just as the hardware needs to be both tamper-resistant and tamper-evident,
so for the software.  Secure design and implementation gives me tamper-
resistance.  The self-check gives me tamper evidence.  The system must be able
to prove to me that it is operating as it's supposed to.

OK, so how do I check the tamper-evidence?  For hardware, either I have to be
physically present - I can hold the box in my hand and see that no one has
broken the seals - or I need some kind of remote sensor.  The remote sensor
is a hazard:  Someone can attack *it*, at which point I lose my tamper-
evidence.

There's no way to directly check the software self-check features - I can't
directly see the contents of memory! - but I can arrange for a special highly-
secure path to the self-check code.  For a device I carry with me, this could
be as simple as a "self-check passed" LED controlled by dedicated hardware
accessible only to the self-check code.  But how about a device I may need
to access remotely?  It needs a kind of remote attestation - though a
strictly limited one, since it need only be able to attest proper operation
*to me*.  Still, you can see the slope we are on.
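
A minimal sketch of that limited, attest-to-me-only flavor (Python; the
"kernel image" and the provisioning step are stand-ins - the point is only
that the reply convinces someone who already shares a secret with the box,
and nobody else):

    import hashlib, hmac, os

    K = os.urandom(32)          # shared secret, installed while I held the box

    def device_attest(image: bytes, nonce: bytes):
        measurement = hashlib.sha256(image).digest()
        tag = hmac.new(K, nonce + measurement, hashlib.sha256).digest()
        return measurement, tag

    def owner_verify(expected: bytes, nonce: bytes, measurement: bytes, tag: bytes) -> bool:
        good = hmac.compare_digest(
            tag, hmac.new(K, nonce + measurement, hashlib.sha256).digest())
        return good and measurement == expected

    nonce = os.urandom(16)      # fresh challenge, so old replies can't be replayed
    m, t = device_attest(b"kernel image bytes", nonce)
    print(owner_verify(hashlib.sha256(b"kernel image bytes").digest(), nonce, m, t))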

The slope gets steeper.  *Some* machines are going to be shared.  Somewhere
out there is the CVS repository containing the secure kernel's code.  That
machine is updated by multiple developers - and I certainly want *it* to be
running my security kernel!  The developers should check that the machine is
configured properly before trusting it, so it should be able to give a
trustworthy indication of its own trustworthiness to multiple developers.
This *could* be based on a single secret shared among the machine and all
the developers - but would you really want it to be?  Wouldn't it be better
if each developer shared a unique secret with the machine?

You can, indeed, stop anywhere along this slope.  You can decide you really
don't need remote attestation, even for yourself - you'll carry the machine
with you, or only use it when you are physically in front of it.  Or you
can 

Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-15 Thread Jerrold Leichter
|  Which brings up the interesting question:  Just why are the reactions to
|  TCPA so strong?  Is it because MS - who no one wants to trust - is
|  involved?  Is it just the pervasiveness:  Not everyone has a smart card,
|  but if TCPA wins out, everyone will have this lump inside of their
|  machine.
|
| There are two differences between TCPA-hardware and a smart card.
|
| The first difference is obvious. You can plug in and later remove a smart
| card at your will, at the point of your choice. Thus, for homebanking with
| bank X, you may use a smart card, for homebaning with bank Y you
| disconnect the smart card for X and use another one, and before online
| gambling you make sure that none of your banking smart cards is connected
| to your PC. With TCPA, you have much less control over the kind of stuff
| you are using.
|
| This is quite an advantage of smart cards.
However, this advantage is there only because there are so few smart cards,
and so few smart card enabled applications, around.

Really secure mail *should* use its own smart card.  When I do banking, do
I have to remove my mail smart card?  Encryption of files on my PC should
be based on a smart card.  Do I have to pull that one out?  Does that mean
I can't look at my own records while I'm talking to my bank?  If I can only
have one smart card in my PC at a time, does that mean I can *never* cut and
paste between my own records and my on-line bank statement?  To access my
files and my employer's email system, do I have to trust a single
smart card to hold both sets of secrets?

I just don't see this whole direction of evolution as being viable.  Oh,
we'll pass through that stage - and we'll see products that let you connect
multiple smart cards at once, each guaranteed secure from the others.  But
that kind of add-on is unlikely to really *be* secure.

Ultimately, to be useful a trusted kernel has to be multi-purpose, for exactly
the same reason we want a general-purpose PC, not a whole bunch of fixed-
function appliances.  Whether this multi-purpose kernel will be inside the PC,
or a separate unit I can unplug and take with me, is a separate issue.  Given
the current model for PC's, a separate key is probably a better approach.
However, there are already experiments with "PC in my pocket" designs:  A
small box with the CPU, memory, and disk, which can be connected to a small
screen to replace a palmtop, or into a unit with a big screen, a keyboard,
etc., to become my desktop.  Since that small box would have all my data, it
might make sense for it to have the trusted kernel.  (Of course, I probably
want *some* part to be separate to render the box useless if stolen.)

| The second point is perhaps less obvious, but may be more important.
| Usually, *your* PC hard- and software is supposed to to protect *your*
| assets and satisfy *your* security requirements. The trusted hardware
| add-on in TCPA is supposed to protect an *outsider's* assets and satisfy
| the *outsider's* security needs -- from you.
|
| A TCPA-enhanced PC is thus the servant of two masters -- your servant
| and the outsider's. Since your hardware connects to the outsider directly,
| you can never be sure whether it works *against* you by giving the
| outsider more information about you than it should (from your point if
| view).
|
| There is nothing wrong with the idea of a trusted kernel, but trusted
| means that some entity is supposed to trust the kernel (what else?). If
| two entities, who do not completely trust each other, are supposed to both
| trust such a kernel, something very very fishy is going on.
Why?  If I'm going to use a time-shared machine, I have to trust that the
OS will keep me protected from other users of the machine.  All the other
users have the same demands.  The owner of the machine has similar demands.

The same goes for any shared resource.  A trusted kernel should provide some
isolation guarantees among contexts.  These guarantees should be independent
of the detailed nature of the contexts.  I think we understand pretty well
what the *form* of these guarantees should be.  We do have problems actually
implementing such guarantees in a trustworthy fashion, however.

Part of the issue with TCPA is that the providers of the kernel that we are
all supposed to trust blindly are also going to be among those who will use it
heavily.  Given who those producers are, that level of trust is unjustifiable.

However, suppose that TCPA (or something like it) were implemented entirely by
independent third parties, using open techniques, and that they managed to
produce both a set of definitions of isolation, and an implementation, that
were widely seen to correctly specify, embody, and enforce strict protection.
How many of the criticisms of TCPA would that mute?  Some:  Given open
standards, a Linux TCPA-based computing platform could be produced.
Microsoft's access to the trusted kernel would be exactly the same as
everyone else's; there would be no 

Re: Additional Proposed Hash Function (Forwarded)

2003-12-06 Thread Jerrold Leichter
|  NIST is proposing a change notice for FIPS 180-2, the Secure Hash Standard
|  that will specify an additional hash function, SHA-224, that is based on
|  SHA-256. The change notice is available at
|  http://csrc.nist.gov/publications/drafts.html. NIST requests comments for
|  the change notice by January 16, 2004. Comments should be addressed to
|  [EMAIL PROTECTED]
|
| Does anyone know what the story is behind this?  It seems to be the
| same sort of relationship that SHA-384 has to SHA-512 - that is, the
| same basic algorithm, the same amount of work to calculate it, but
| with different initial values, and some bits chopped off at the end.
| It all seems a lot of effort just to save 4 bytes in the final hash.
I'd guess that this is part of an effort to define hashes equivalent in
strength to various standardized ciphers.  Because of birthday attacks, in
some sense the security of an n-bit hash is comparable to that of an n/2-bit
cipher.  So SHA-256, -384, and -512 are intended to match the three supported
AES key sizes of 128, 192, and 256 bits.  SHA-224 then comes out to match
2-key 3-DES.
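
The arithmetic, spelled out (birthday bound: an n-bit hash gives roughly n/2
bits of collision resistance):

    for name, n in (("SHA-224", 224), ("SHA-256", 256),
                    ("SHA-384", 384), ("SHA-512", 512)):
        print(name, "~", n // 2, "bits of collision resistance")
    # 112 ~ 2-key 3-DES; 128, 192, 256 ~ the three AES key sizes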
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: safety of Pohlig-Hellman with a common modulus?

2003-12-06 Thread Jerrold Leichter
| Is it safe to use Pohlig-Hellman encryption with a common modulus?
| That is, I want various parties to have their own exponents, but share
| the same prime modulus.  In my application, a chosen plaintext attack
| will be possible.  (I know that RSA with common modulus is not safe.)
The question seems equivalent to:  Is P-H with a *known* modulus safe?  The
only difference, from the point of view of an attacker, is that with a shared
modulus, he gets to see a whole bunch of values X^e_i mod P, including (if he
is a member of the system) some where he knows X and e.  But with a known
modulus, he can easily generate as many of these as he likes.  The safety of
P-H had better depend on the difficulty of solving the discrete log problem
mod P, not on keeping P secret.  (Keeping P secret would require great care in
the protocol, in particular in how you respond to messages that are greater
than P.  Otherwise, an attacker can guess the rough size of P - based on the
maximum sizes of messages he observes - and then try to do a binary search
by sending messages of differing sizes.)
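
A sketch of the setting (Python; 2^127 - 1 is a convenient prime but far too
small for real use - it only shows the shape of the system):

    import math, random

    p = 2**127 - 1                      # the shared, public prime modulus

    def keypair():
        while True:
            e = random.randrange(3, p - 1)
            if math.gcd(e, p - 1) == 1:
                return e, pow(e, -1, p - 1)   # in P-H, *both* are kept secret

    e_a, d_a = keypair()                # Alice's exponents
    e_b, d_b = keypair()                # Bob's exponents, same modulus

    m = 123456789
    c = pow(m, e_a, p)
    assert pow(c, d_a, p) == m          # Alice decrypts her own traffic
    assert pow(c, d_b, p) != m          # Bob's key is useless for it (w.h.p.)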

The situation is different in RSA since there are *two* secrets:  The
factorization of N, and the correspondence between public and private keys.
These secrets are so closely related that revealing one reveals the other as
well.  We usually only consider the implication "factoring N lets you get the
private key from the public", but the other one is present as well.  Giving
someone a public/private key pair for a given N is *not* zero knowledge!

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: lockable trapdoor one-way function

2003-12-01 Thread Jerrold Leichter
| Does anyone know of a trapdoor one-way function whose trapdoor can be locked
| after use?
|
| It can be done with secure hardware and/or distributed trust, just delete
| the trapdoor key, and prove (somehow?) you've deleted it.
|
| It looks hard to do in trust-the-math-only mode...
You're going to have to be much more explicit about the details to have any
hope of getting at this.  If you take too naive an approach, even a simple
trapdoor function is impossible.  Given f(X), it's impractical to compute
X.  However, whoever computed f(X) can of course give you X!  So one sets
it up as something like:  If we have any probabilistic Turing machine T
(subject to some constraints on size and running time) which is given an input
tape containing f(X), then over all X and all runs of the machine, the
probability that it halts with the tape containing X is less than (some limit
not much bigger than the probability of just guessing X correctly).
This is the only way I know of to capture the notion of "T is only given f(X)".
But this whole approach can't possibly model the notion of "forgetting X",
since the only way you can *prove* you've forgotten something is to display
that *all* possible memory you have access to, or will have access to in the
future, is (say) zeroed.

You could certainly spread the trust out.  For example, one could imagine a
trap door computed using shared-secret techniques:  The trap door information
is shared among K participants.  The K participants jointly compute the trap
door function.  Even *after* the computation is complete, it still takes all
K participants to compute the function.  So if we now tell all the participants
to forget their portion of the secret, if you trust *any* of them to do so,
the trap door information is gone.
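
A sketch of that K-participant flavor, using a toy RSA trapdoor with the
secret exponent split additively (Python; tiny primes, no robustness, and it
assumes gcd(message, N) = 1 - only the structure matters here):

    import functools, random

    p, q = 10007, 10009                 # toy primes, illustration only
    n, e = p * q, 65537
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)                 # the trapdoor

    K = 3
    shares = [random.randrange(phi) for _ in range(K - 1)]
    shares.append((d - sum(shares)) % phi)      # shares sum to d modulo phi

    m = 424242
    c = pow(m, e, n)                    # anyone can apply the one-way function

    # inverting it needs a partial result from *every* participant; if any
    # one of them really deletes his share, the trapdoor is gone
    partials = [pow(c, s, n) for s in shares]
    assert functools.reduce(lambda a, b: a * b % n, partials) == m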

Even this has a subtlety:  What's to prevent a participant - or anyone else? -
from storing X,f(X) pairs?  So, on the one hand, you may need to state the
requirement in terms of being able to compute f(X) or its inverse on data that
had never been presented to the K participants.  Alternatively, you could
insist that the K participants individually never see X or f(X).  (But then
who *does* see them to distribute and finalize them?)

There's the seed of an interesting primitive here, but exactly what it is
seems difficult to pin down.  What application do you have in mind for such a
thing?
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Are there...

2003-11-18 Thread Jerrold Leichter
| As David Wagner points out, encryption with a public key (for which the
| private key has been discarded) would seem to work.
There's something seriously wrong here, however.  There are many close, but
not identical, definitions of a one-way hash.  While none of them explicitly
say so, many *uses* of one-way hashes assume that the output of a hash looks
like the output of a random function.  However, for (say) RSA, this fails
very badly, since RSA is multiplicative.  Suppose, for example, that we used
the well-known technique of generating working keys K1 and K2 from a master
key K by applying a one-way hash function H:  K1 = H(K || "K1"),
K2 = H(K || "K2").  With H=SHA or any reasonable one-way hash, revealing K1
provides no useful information about K or K2.  But now suppose instead of
concatenation we, for some reason, used multiplication:

K1 = H(3 * K)   K2 = H(5 * K)

For H as before, this is just as secure.  But for H = RSA with a known modulus
N and public key, this is completely insecure:  It's easy to compute 3' = the
inverse of 3 mod N (Euclidean algorithm), at which point H(3') * K1 * H(5) =
H(3') * H(K * 3) * H(5) = H(3' * 3 * K * 5) = H(K * 5) = K2.
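
The attack, spelled out with toy numbers (Python; "H" here is raw RSA with a
public modulus and exponent, which is exactly the problematic choice):

    import random

    p, q = 10007, 10009                 # toy primes, illustration only
    N, e = p * q, 65537
    def H(x): return pow(x, e, N)       # the "hash"

    K = random.randrange(2, N)          # the master secret
    K1, K2 = H(3 * K % N), H(5 * K % N)

    inv3 = pow(3, -1, N)                # Euclidean algorithm, no secrets needed
    assert H(inv3) * K1 % N * H(5) % N == K2    # K2 from K1, without K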

A useful one-way hash function shouldn't have any simple algebraic properties.
(The existing ones do have a prefix property due to their iterative nature,
and it would be nice to avoid that, though I don't know of any work in that
direction.  Note that this prefix property does mean that the K in H(K || "K1")
should probably be significantly less than the block size.  Personally, I favor
XOR'ing the distinguisher - "K1", here - into K, avoiding the whole potential
interaction with the prefix property.  Note of course that this assumes that
H does *not* have some bad interaction with XOR - as RSA does with simple
multiplication.)
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Cryptography as a component of security

2003-11-13 Thread Jerrold Leichter
| I listened to yet another talk on computer security, which
| incorporated security.  It got me to thinking two things:
|
|   o Pseudo-random implies pseudo security.
|
| If you're re-keying by running the old key through a pseudo-random
| function without adding any new entropy, then you're not re-keying at
| all.
That's not true.  Consider a stream cipher:  It stretches your original key
into a much longer stream.  By your argument, as soon as there is sufficient
generated stream to uniquely determine the key, no additional entropy is being
introduced, so all the rest is pseudo-security!  But in fact the unicity point
must be reached very quickly, or the generated stream would be too *indepen-
dent* of the key, and an attacker could guess most of the stream without even
knowing the key.

Or consider the following construct:  Suppose we have two encryption functions
E1 and E2, E1 is very fast, but breakable - there is an attack of concern to
us that becomes viable after an attacker has seen (or been able to partially
specify, if you want to think about chosen plaintext attacks) N plaintext/
ciphertext pairs.  E2 is slow, but we consider it to be secure against the
attacks of concern.  To encrypt the stream of plaintext P_i to ciphertext C_i,
do:

    K <- Master key
    Choose n < N
    for i = 0, 1, ...
        // Begin new segment
        SK_i <- E2(K, i)
        for j = 0 .. n-1
            C_{n*i + j} = E1(SK_i, P_{n*i + j})

Even if an attacker breaks into a number of segments and gets those SK_i's
(which, BTW, is an assumption about the form of the attack:  Some attacks
reveal the plaintext without revealing the key), what he's left with is a set
of plaintext/ciphertext pairs with which he now has to attack E2 - which is
assumed to be difficult.  (Even worse for the attacker, even if he could mount
a chosen ciphertext attack against E1, he has no control over the plaintexts
fed into E2.)

Note that we assumed E2 was actually very strong.  But even if we assume E2
has the same effective strength as E1 - it can be broken after seeing N
plaintext/ciphertext pairs - then the security guarantees (such as they are;
one would have to be more precise about the nature of the "attackable after N
pairs are seen" vulnerability) are pretty much the same until about N^2
plaintext/ciphertext pairs have been sent.
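
For concreteness, a sketch of the segmented construction above, with
hash-based stand-ins (Python; HMAC-SHA256 playing the role of the strong E2,
a hash counter stream playing the fast E1 - only the structure is the point,
not these particular primitives):

    import hashlib, hmac

    MASTER = b"\x00" * 32               # stand-in for the master key K
    SEGMENT = 4096                      # n: bytes per segment, chosen well below N

    def segment_key(i: int) -> bytes:
        # E2's role: derive SK_i from the master key and the segment number
        return hmac.new(MASTER, i.to_bytes(8, "big"), hashlib.sha256).digest()

    def keystream(sk: bytes, length: int) -> bytes:
        # E1's role: a fast per-segment stream
        out, ctr = b"", 0
        while len(out) < length:
            out += hashlib.sha256(sk + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return out[:length]

    def encrypt(plaintext: bytes) -> bytes:
        out = bytearray()
        for seg, start in enumerate(range(0, len(plaintext), SEGMENT)):
            chunk = plaintext[start:start + SEGMENT]
            ks = keystream(segment_key(seg), len(chunk))
            out += bytes(a ^ b for a, b in zip(chunk, ks))
        return bytes(out)               # decryption is the same XOR operation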
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Software protection scheme may boost new game sales

2003-10-13 Thread Jerrold Leichter
| I've not read the said article just yet, but from that direct quote as
| the copy degrades... I can already see the trouble with this scheme:
| their copy protection already fails them.  They allow copies to be made
| and rely on the fact that the CDR or whatever media, will eventually
| degrade, because their code looks like scratches...  Rggghtt.
You should read the article - the quote is misleading.  What they are doing is
writing some bad data at pre-defined points on the CD.  The program looks
for this and fails if it finds good data.

However ... I agree with your other points.  This idea is old, in many
different forms.  It's been broken repeatedly.  The one advantage they have
this time around is that CD readers - and, even more, DVD readers; there is
mention of applying the same trick to DVD's - are, compared to the floppy
readers of yesteryear, sealed boxes.  It's considerably harder to get at the
raw datastream and play games.  Of course, this cuts both ways - there are
limits to what the guys writing the protection code can do, too.

The real new idea here has nothing to do with how they *detect* a copy - it's
what they *do* when they detect it.  Rather than simply shut the game down,
they degrade it over time.  Guns slowly stop shooting straight, for example.
In the case of DVD's, the player works fine - but stops working right at some
peak point.  Just like the guy on the corner announcing "first hit's free",
they aim to suck you in, then have you running out to get a legit copy to
save your character's ass - or find out how The One really lives through
it all.  This will probably work with a good fraction of the population.

Actually, this is a clever play on the comment from music sharers that they
get a free copy of a song, then buy the CD if they like the stuff.  In effect,
what they are trying to do is make it easy to make teasers out of their
stuff.  There will be tons of people copying the stuff in an unsophisticated
way - and only a few who will *really* break it.  Most people will have no
quick way to tell whether they are getting a good or a bad copy.  And every
bad copy has a reasonable chance of actually producing a sale

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Simple SSL/TLS - Some Questions

2003-10-07 Thread Jerrold Leichter
| From: Jill Ramonsky [EMAIL PROTECTED]
|   From: Ian Grigg [mailto:[EMAIL PROTECTED]
|  
|   The only question I wasn't quite sure of
|   was whether, if I take your code, and modify it,
|   can I distribute a binary only version, and keep
|   the source changes proprietary?
|
| You can't distribute a binary only version of ANY crypto product,
| surely? No crypto product can EVER be trustworthy unless you can see the
| source code and verify that it has no back doors, and then compile it.
| Unless you give your users the power to inspect the source code, and
| /know/ that it is the source code (because they can actually compile it
| and run the resulting executable) then you could have put all sorts of
| back doors into it. You could have added password theft, key escrow, who
| knows what?
This sounds nice in principle, but if you follow that to where it goes, you are
left with GPL - not even LGPL.  There's no way for a library to protect itself
against code it is linked with.  (Where's TOPS-20 when you need it?)  Even
if you let me completely rebuild the crypto library you've shipped with your
product, why should I trust it?

| Don't get me wrong. I agree with you that crypto has enough barriers
| already, and I would like to produce something that is as freely
| distributable as possible. For the masses crypto is, I guess, an
| unwritten design goal. But allowing people to hide the crypto source
| from crypto users would allow the bad guys (you can define your own bad
| guys) to produce Trojan Horse crypto. Closed source crypto is to all
| intents worthless. (In my opinion). Please feel free to argue that I'm
| wrong.
This is not really a solvable problem.  Going full GPL will render your project
useless for just about any commercial use.  Anything less leaves even someone
with the necessary skills, desire, and time in a position where they can't
say anything meaningful about the system in which the code is embedded.

|   Q:  Does your employer  have any say or comment
|   on this project?  Might be wise to clear up the
|   posture, and either get it in writing, or make
|   the repository public from the git-go.  Many an
|   open source project has foundered when the boss
|   discovered that it works...
|
| It has absolutely nothing whatsoever to do with my employer. All my code
| will be written at home in my spare time, and uploaded to CVS or
| whatever also from home. It is true that I happen to be sending this
| email from work, but even that's in my own time. I don't see how they
| have any say. To be /really/ safe,  I'd be happy to always post to this
| list only from home, but right now I don't think it's a problem.
Check your employment contract carefully.  Many contracts these days basically
say "if you invented/developed/wrote it while you worked for us, it's ours."
(Sometimes there are provisions like "if it's related to what we sell", but
when you look more closely, they will define "what we sell" so broadly - and
then add the ability to change their minds later anyway - that it doesn't
change anything.)

Lawsuits about this sort of stuff get ugly and *very* expensive.  Just because
your current employer is reasonable doesn't mean it won't be acquired by
someone who isn't.  Doing a bit of up-front checking is only prudent.

BTW, someone remarked in another issue that you could put off your choice of
license from among a variety of possibilities until later.  Well ... yes and
no.  Once something goes out the door under less restrictive terms, you can't
add restrictions later.  Let a copy go out with the phrase "released to the
public domain" and it's no longer yours - it's everyone's.  Let it out under
a BSD license, and you can't apply GPL terms later.  (You can apply stricter
terms to new versions, but if you try that, the resulting confusion will kill
any possibilities of broad acceptance:  You'll end up with a competition
between your restrictive versions and evolved versions of less-restrictive
ones that other people do because they aren't willing, or can't accept your
later restrictions.)
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: how to defeat MITM using plain DH, Re: anonymous DH MITM

2003-10-05 Thread Jerrold Leichter
[Using multiple channels on the assumption that the MITM can't always get all
of them.]

This is starting to sound like some very old work - to which I don't have a
reference - on what was called the wiretap channel.  Basic idea:  Alice and
Bob wish to talk; Carol can listen in to everything, but her tap isn't
perfect, so she gets a BER that's slightly higher.  Alice and Bob can then
choose a code (in the information-theory sense, not the crypto sense) that is
fine-tuned to exactly match their BER - and also has the property that if you
have one more bit error than the code supports, you can't decode at all.
They get through, Carol gets nothing.

The same idea has been revived in creating CD's that work in audio players but
not PC's (which have CD drives that typically are not willing to tolerate as
high an error rate.)
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Protocol implementation errors

2003-10-05 Thread Jerrold Leichter
| This is the second significant problem I have seen in applications that use
| ASN.1 data formats.  (The first was in a widely deployed implementation of
| SNMP.)  Given that good, security conscience programmers have difficultly
| getting ASN.1 parsing right, we should favor protocols that use easier to
| parse data formats.
| 
| I think this leaves us with SSH.  Are there others?
|
| I would say the exact opposite: ASN.1 data, because of its TLV encoding, is
| self-describing (c.f. RPC with XDR), which means that it can be submitted to a
| static checker that will guarantee that the ASN.1 is well-formed.  In other
| words it's possible to employ a simple firewall for ASN.1 that isn't possible
| for many other formats (PGP, SSL, ssh, etc etc).  This is exactly what
| cryptlib does, I'd be extremely surprised if anything could get past that.
| Conversely, of all the PDU-parsing code I've written, the stuff that I worry
| about most is that which handles the ad-hoc (a byte here, a unit32 there, a
| string there, ...) formats of PGP, SSH, and SSL.  We've already seen half the
| SSH implementations in existence taken out by the SSH malformed-packet
| vulnerabilities, I can trivially crash programs like pgpdump (my standard PGP
| analysis tool) with malformed PGP packets (I've also crashed quite a number of
| SSH clients with malformed packets while fiddling with my SSH server code),
| and I'm just waiting for someone to do the same thing with SSL packets.  In
| terms of safe PDU formats, ASN.1 is the best one to work with in terms of
| spotting problems.
I think there's a bit more to it.

Properly implementing demarshalling code - which is what we are really talking
about here - is an art.  It requires an obsessive devotion to detailed
checking of *everything*.  It also requires a level and style of testing that
few people want to deal with.

Both of these are helped by a well-specified low-level syntax.  TLV encoding
lets you cross-check all sorts of stuff automatically, once, in low-level
calls.  Ad hoc protocols scatter the validation all over the place - and
some of it will inevitably be overlooked.

A couple of years back, the place I work decided to implement its own SNMP
library.  (SNMP is defined in ASN.1)  We'd been using the same free library
done at, I think, UC San Diego many years before, and were unhappy with many
aspects of it.  The guy we had do it had the right approach.  Not only did he
structure the code to carefully track and test all suspect data, but he also
wrote a test suite that checked:

- Many valid inputs.  Most people stop here:  It gets the
right results on valid data, who cares about the rest.
- Many expectable forms of invalid data.  A few people will write
a few of these tests.  Usually, they write test cases that
match the error paths they thought to include in their
code.  Fine, but not nearly enough.
- Many randomly-generated tests.  The trick here is to shape
the randomness:  Most completely random bit strings will
be rejected at very low levels in the code.  What you want
are strings that look valid enough to pass through multiple
layers of testing but contain random junk - still with
some bias, e.g., 1 too high or too low is a test you want
to bias toward.  Hardly anyone does this.
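
A sketch of that last kind of shaped test generation, against a made-up
TLV-ish record format (Python; feed_to_parser is a placeholder for whatever
parser is actually under test):

    import random

    def valid_record(payload: bytes) -> bytes:
        # toy TLV record: one tag byte, two-byte big-endian length, value
        return bytes([0x30]) + len(payload).to_bytes(2, "big") + payload

    def shaped_mutations(payload: bytes, count: int = 1000):
        # mutations that still look plausible enough to get past the early
        # checks: lengths off by one, truncations, junk inside the value
        assert payload            # needs a non-empty value to mutate
        for _ in range(count):
            rec = bytearray(valid_record(payload))
            choice = random.randrange(3)
            if choice == 0:
                length = int.from_bytes(rec[1:3], "big") + random.choice((-1, 1))
                rec[1:3] = (length % 65536).to_bytes(2, "big")
            elif choice == 1:
                rec = rec[:random.randrange(3, len(rec))]       # truncate
            else:
                rec[random.randrange(3, len(rec))] = random.randrange(256)
            yield bytes(rec)

    # for fuzzed in shaped_mutations(b"some valid body"):
    #     feed_to_parser(fuzzed)        # placeholder for the code under test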

The proof of the pudding came a couple of years later, when a group at OUSPG
(Oulu University Secure Programming Group) came up with a test that ripped
through just about everyone's SNMP suites, leading to a CERT advisory and a
grand panic.  Our stuff passed with no problems, and in fact when we looked at
OUSPG's test cases, we found that we had almost certainly run them all in some
form or another already, since they were either in our fixed test list or were
covered by the randomized testing.

OUSPG's efforts were actually directed toward something more on point to this
discussion, however:  Their interest is in protocol test generation, and their
test cases were generated automatically from the SNMP definitions.  This kind
of thing is only possible when you have a formal, machine-readable definition
of your protocol.  Using ASN.1 forces you to have that.  (Obviously, you can
do that without ASN.1, but all too often, the only definition of the protocol
is the code.  In that case, the only alternative is manual test generation -
but it can be so hard to do that it just doesn't get done.)

BTW, the OUSPG saga is another demonstration of the danger of a monoculture.
There are, it turns out, very few independent implementations of SNMP around.
Just about everyone buys SNMP support from SNMP Research, which I believe
sells a commercialized version of the old UCSD code.  Find a bug in that, and
you've found a bug in just about every router and switch on the planet.


Re: anonymous DH MITM

2003-10-04 Thread Jerrold Leichter
| From: Tim Dierks [EMAIL PROTECTED]
|
| I'm lost in a twisty page of MITM passages, all alike.
|
| My point was that in an anonymous protocol, for Alice to communicate with
| Mallet is equivalent to communicating with Bob, since the protocol is
| anonymous: there is no distinction. All the concept of MITM is intended to
| convey is that in an anonymous protocol, you don't know who you're talking
| to, period. Mallet having two conversations with Alice & Bob is equivalent
| to Mallet intermediating himself into a conversation between Alice & Bob.
|
| If you have some unintermediated channel to speak with a known someone
| once, you can exchange a value or values which will allow you to
| authenticate each other forevermore and detect any intermediations in the
| past. But the fundamental truth is that there's no way to bootstrap a
| secure communication between two authenticated parties if all direct &
| indirect communications between those parties may be intermediated. (Call
| this the 'brain in a jar' hypothesis.)
OK, let's set up two different scenarios:

1.  Non-anonymous communication.  Alice talks to Bob.  Alice knows
Bob is on the other end, Bob knows Alice is on the other
end.  They share some secret data; Alice wishes it to be
known only to her and Bob.  Mallet has a bug in Bob's home
and copies the data.

Can Alice or Bob detect that Mallet is there?  Clearly not if
Mallet never uses the data in a detectable way.  No matter how
many times Alice and Bob communicate, whether or not Mallet
continues to bug Bob, neither Alice nor Bob can ever learn of
Mallet's presence.

2.  Anonymous communication.  Alice and Bob have a conversation.
Mallet plays MITM.  Alice and Bob don't know who their
corresponding partner is, but they each tell the other
that they will not reveal the secrets they exchange, and
each believes the other - and indeed neither ever reveals
those secrets.  They wish to know if anyone else had a
chance to learn their secret.

On the face of it, there's no difference between these two
cases.  In each case, someone receives a copy of the secrets
exchanged between Alice and Bob, but doesn't *do* anything
with them that either Alice or Bob can see.

However, in this case, unlike 1, if Alice and Bob continue to
communicate - using private pseudonyms for each other to
make "continue to communicate" a meaningful phrase - then,
assuming Mallet cannot *always* interpose himself, they will
eventually discover that someone has played a MITM game on
them.

If, indeed, you have a full "brain in a jar" situation, and Mallet *always* manages to
interpose himself, then, yes, this situation is almost certainly undetectable.
I've learned not to make snap judgements on stuff like this - too many
clearly impossible things turn out not to be.  In fact, I find the
distinction between cases 1 and 2 quite surprising!

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: anonymous DH & MITM

2003-10-03 Thread Jerrold Leichter
| Date: Fri, 3 Oct 2003 10:14:42 -0400
| From: Anton Stiglic [EMAIL PROTECTED]
| To: Cryptography list [EMAIL PROTECTED],
|  Tim Dierks [EMAIL PROTECTED]
| Subject: Re: anonymous DH & MITM
|
|
| - Original Message -
| From: Tim Dierks [EMAIL PROTECTED]
|
| 
|  I think it's a tautology: there's no such thing as MITM if there's no such
|  thing as identity. You're talking to the person you're talking to, and
|  that's all you know.
|
| That seems to make sense
No; it's false.  If Alice and Bob can create a secure channel between them-
selves, it's reasonable to say that they are protected from MITM attacks if
they can be sure that no third party can read their messages.  That is:
If Alice and Bob are anonymous, they can't say *who* can read the messages
they are sending, but they might be able to say that, assuming that their
peer is following the protocol exactly (and in particular is not releasing the
shared secret) *exactly one other party* can read the message.

Note that if you have this, you can readily bootstrap pseudonymity:  Alice
and Bob simply use their secure channel to agree on a shared secret, or on
pseudonyms they will henceforth use between themselves.  If there were a
MITM, he could of course impersonate each to the other ever afterward.

The Interlock Protocol doesn't provide this - it prevents the MITM from
modifying the exchanged messages, but can't prevent him from reading them.
It's not clear whether this property can be achieved at all.  But it does make
sense as a security spec.
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: anonymous DH & MITM

2003-10-03 Thread Jerrold Leichter
| From: Anton Stiglic [EMAIL PROTECTED]
| From: Jerrold Leichter [EMAIL PROTECTED]
|  No; it's false.  If Alice and Bob can create a secure channel between
|  themselves, it's reasonable to say that they are protected from MITM
|  attacks if they can be sure that no third party can read their messages.
|
| How do they create the secure channel in the first place?  We are talking
| about MITM that takes place during the key agreement protocol.
I didn't say I had a protocol that would accomplish this - I said that the
notion of such a protocol was not inherently self-contradictory.

|  That is: If Alice and Bob are anonymous, they can't say *who* can read the
|  messages they are sending, but they might be able to say that, assuming
|  that their peer is following the protocol exactly (and in particular is
|  not releasing the shared secret) *exactly one other party* can read the
|  message.
|
| That's false.  Alice and Bob can follow the basic DH protocol, exactly, but
| Mallory is in the middle, and what you end up with is a shared key between
| Alice and Bob and Mallory.
There's nothing to be true or false:  It's a definition!  (And yes, DH does
not provide a system that meets the definition.)
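
For concreteness, a toy sketch in Python of exactly that failure - parameters
chosen only for illustration, not a real implementation: Mallory substitutes
his own public value in both directions and ends up holding a key with each
side.

    # Toy unauthenticated Diffie-Hellman with Mallory in the middle.
    import secrets

    p = 2**127 - 1   # a Mersenne prime; a toy parameter, never use for real DH
    g = 3

    def dh_keypair():
        x = secrets.randbelow(p - 2) + 1    # private exponent in [1, p-2]
        return x, pow(g, x, p)              # (private, public)

    a, A = dh_keypair()      # Alice
    b, B = dh_keypair()      # Bob
    m, M = dh_keypair()      # Mallory hands M to both sides in place of A and B

    k_alice = pow(M, a, p)   # what Alice computes, believing M came from Bob
    k_bob   = pow(M, b, p)   # what Bob computes, believing M came from Alice
    k_ma    = pow(A, m, p)   # Mallory's key with Alice
    k_mb    = pow(B, m, p)   # Mallory's key with Bob

    # Both sides followed the protocol exactly, yet the "exactly one other
    # party" on each leg is Mallory, and (almost surely) k_alice != k_bob.
    assert k_alice == k_ma and k_bob == k_mb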

| The property you are talking about, concerning the *exactly one other party*
| can read the message is related to the *key authentication* property,
| discussed in [1] (among other places), which enables you to construct
| authenticated key agreements.
The reference was missing; I'd be interested in seeing it.

| 
|  Note that if you have this, you can readily bootstrap pseudonymity:  Alice
|  and Bob simply use their secure channel to agree on a shared secret, or on
|  pseudonyms they will henceforth use between themselves.  If there were a
|  MITM, he could of course impersonate each to the other ever afterward.
|
| But how do they share the initial secret?
I have no idea!

| And with true anonymity you don't
| want linkability.  Pseudonymity is a different thing, with pseudonymity you
| have linkability.
If Alice and Bob wish to establish pseudonyms for future use, they can.  No
one says they have to.  On the other hand, linkability is a funny property.
If Alice and Bob each keep their secrets, and they each believe the other
party keeps their secrets, then if there is *anything* unique in their
conversations with each other that they keep around - like the sessions keys,
or the entire text of the conversation - they can use *that* to link future
conversations to past ones.  (No one without access to the secrets can do
that, of course.)  If you define anonymity as complete lack of linkability,
even to the participants, you're going to end up requiring all participants to
forget, not just their session keys, but everything they learned in their
conversations.  Perhaps there are situations where that's useful, but they
strike me as pretty rare.
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: anonymous DH & MITM

2003-10-03 Thread Jerrold Leichter
| Date: Fri, 03 Oct 2003 17:27:36 -0400
| From: Tim Dierks [EMAIL PROTECTED]
| To: Jerrold Leichter [EMAIL PROTECTED]
| Cc: Cryptography list [EMAIL PROTECTED]
| Subject: Re: anonymous DH & MITM
|
| At 03:28 PM 10/3/2003, Jerrold Leichter wrote:
| From: Tim Dierks [EMAIL PROTECTED]
| | No; it's false.  If Alice and Bob can create a secure channel between
| | themselves, it's reasonable to say that they are protected from MITM
| | attacks if they can be sure that no third party can read their messages.
| | That is: If Alice and Bob are anonymous, they can't say *who* can read
| | the messages they are sending, but they might be able to say that,
| | assuming that their peer is following the protocol exactly (and in
| | particular is not releasing the shared secret) *exactly one other party*
| | can read the message.
| |
| | They've got exactly that same assurance in a MITM situation:
| | unfortunately,
| | Mallet is the one other party who can read the message.
| But Mallet is violating a requirement:  He is himself passing along the
| information Alice and Bob send him to Bob and Alice.  No notion of secrecy
| can make any sense if one of the parties who legitimately *has* the secret
| chooses to pass it along to someone else!
|
| In an authenticated protocol, you can have a risk model which includes the
| idea that an authorized person will not choose to share a secret with an
| unauthorized person. However, in an anonymous protocol, you cannot have
| such a risk model, because there's no such distinction.
Why not?

| Are you saying that you're defining a protocol with rules of behavior which
| cannot be enforced (namely that Mallet is breaking the protocol by
| forwarding the data)?
They can't be *enforced*, but violations can (perhaps) be detected.

|   Previously, you said that you were defining the thing
| to be controlled as the shared secret, but now you're extending it to any
| and all content transmitted over the link.
The shared secret is the session key.  Assuming the encryption is sufficient,
the security of this shared secret implies the security of all data exchanged
on the link.

| Describing the format of
| communications between parties is in a protocol; what they do with those
| communications is in a risk model or trust model.

| As long as Mallet continues to interpose himself in *all* subsequent
| sessions between Alice and Bob, he can't be detected.  But suppose each of
| them keeps a hash value that reflects all the session keys they think they
| ever used in talking to each other.  Every time they start a session, they
| exchange hashes.  Whenever Mallet is present, he modifies the messages to
| show the hash values for the individual sessions that he held with each
| party seperately.  Should they ever happen to form a session *without*
| Mallet, however, the hashes will not agree, and Mallet will have been
| detected.  So the difference isn't just notional - it's something the
| participants can eventually find out about.
|
| No disagreement with this: if you can ever communicate over an
| unintermediated channel, you can detect previous or future intermediations.
Ever being able to communicate over an unintermediated channel can mean
two things:

1.  We can communicate over such a channel *and know we are doing so*.
In that case, we can safely exchange keys, check our previous
communications, etc.

2.  We sometimes communicate over such a channel, but we can't
recognize when it happens.

Case 2 is much weaker than case 1, but is sufficient to detect that Mallet
has been playing games.  In fact, even case 2 is stronger than needed:
Suppose that there are multiple Mallet_i playing MITM.  Any given connection
may go through any subset of the Mallets.  Any time a connection happens
not to go through a particular Mallet - either the one that usually talks
directly to Alice or the one that usually talks directly to Bob - the game is up.
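
For what it's worth, a minimal sketch in Python of the running-hash check
quoted above; the names and structure are mine, purely illustrative:

    # Each party folds every session key it believes it has shared with its
    # peer into a running hash; at the start of each new session the two
    # summaries are compared.
    import hashlib

    class Transcript:
        def __init__(self):
            self._h = hashlib.sha256(b"session-key-transcript")

        def add_session_key(self, key: bytes) -> None:
            self._h.update(key)

        def summary(self) -> bytes:
            return self._h.copy().digest()

    # If Mallet held separate sessions with each side, Alice's transcript
    # holds Alice-Mallet keys and Bob's holds Mallet-Bob keys.  The first
    # session that Mallet fails to intercept, the comparison exposes him.
    alice, bob = Transcript(), Transcript()
    alice.add_session_key(b"key-from-alice-mallet-session")
    bob.add_session_key(b"key-from-mallet-bob-session")
    assert alice.summary() != bob.summary()    # intermediation detected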

| There are easier ways to do it than maintaining a hash of all session keys:
| you can just insist that the other party have the same public key they had
| the first time you spoke, and investigate if that becomes untrue (for
| example, ssh's authentication model).
Sure.

| In fact, if we assume there is a well-known bulletin board somewhere, to
| which anyone can post but on which no one can modify or remove messages, we
| can use it as to force a conversation without Mallet.  Alice and Bob can:
| 
|   [...elided...]
| 
| If not, Mallet was at work.  (For this to work, the bulletin must have a
| verifiable identity - but it's not necessary for anyone to identify himself
| to the bulletin board.)
|
| This can be defeated by Mallet if he makes changes in his forwarding of
| communications (that either have no semantic effect or have whatever
| semantic effect he chooses to impose), but which causes the hashes to vary.
| He then posts statements re: his communications

Re: Reliance on Microsoft called risk to U.S. security

2003-10-02 Thread Jerrold Leichter
|  Can be relied on to _only_ deliver text is a valuable and important
|  piece of functionality, and a capability that has been cut out of too
|  many protocols with no replacement in sight.
While I agree with the sentiment, the text/code distinction doesn't capture
what's important.

Is HTML text or code?  Well ... it's actually neither.  There is a set of tags
in HTML that is perfectly safe.  Setting fonts and font colors, describing a
table ... all kinds of stuff like that isn't text, but it can't hurt you
either.  (Whether these are *useful* in mail is a whole other question.)
On the other hand, embedded references that reach out over the network are
inherently dangerous, since the very act of autonomously opening a connection
to a system of someone else's choosing is something I, personally, don't want
my mail reader doing, ever.

Text is code, too!  A plain ASCII message received by Pine is run through the
Pine Interpretation Engine - the Pine MUA under another name.  Some character
sequences cause entries to be made into a list of subject lines.  Others
may trigger automatic refiling.  Most cause bits to be written to a terminal
window.  Those bits in turn are really code, interpreted by my xterm (or
whatever), causing it do to all kinds of complicated things that ultimately
light some pixels on my screen.

The real distinction is:  Which of *my* resources are *you* able to manipulate
by sending me mail?  I'm willing to give you complete access to send characters
to the terminal in which pine runs ... well, almost.  There's a long history
of security holes *in terminals* - e.g., escape sequences that load the WRU
buffer, and then a control character that causes the WRU to send back
qyy|rm *\n!  You have to carry this analysis *all the way through all the
levels of interpretation*.  Way back, when I worked on designing terminals at
DEC, we were aware of the security issues and it was a design rule that there
was no way to send characters to the terminal and have exactly those read back
automatically.  (Usually, you couldn't read it back at all.  Otherwise, they
were wrapped up in a format that would be innocuous to all but very strange
programs - e.g., as a hex string.)  It was fun evaluating competitors'
terminals against that design rule - almost all of them failed.  (It was not
so much fun telling marketing that no, we would not implement that particular
feature even though the competition had it.)

Anyhow ... if you consider the entire *system*, then the rule is that I will
give you complete access to write a certain safe subset of ASCII characters,
or a safe subset of ASCII characters plus escape sequences.  That's why mail
clients have long filtered out certain characters.  (An MUA that makes direct
X calls has no reason to filter anything, since writing a text string directly
with X goes into an interpreter that can change *only* the bits on the
screen.)

A resource-based model gives you some basis for deciding which parts of HTML
are acceptable, and which are not.  Given such a model, it's perfectly
possible to write a safe HTML interpreter.  Of course, you lose some features -
that's life.  Life would also be much easier if you never had to lock your
door because no one would ever think of stealing.
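
As a rough sketch of what such a whitelist might look like - the tag list and
policy here are invented for illustration, not a vetted set - using Python's
standard HTMLParser:

    # Resource-based filtering: keep only tags that affect pixels on my
    # screen, drop every attribute (href, src, style can all reach other
    # resources), and let plain text through.
    from html.parser import HTMLParser

    SAFE_TAGS = {"b", "i", "u", "p", "br", "font", "table", "tr", "td", "th"}

    class SafeSubset(HTMLParser):
        def __init__(self):
            super().__init__()
            self.out = []

        def handle_starttag(self, tag, attrs):
            if tag in SAFE_TAGS:
                self.out.append("<%s>" % tag)   # attributes deliberately dropped

        def handle_endtag(self, tag):
            if tag in SAFE_TAGS:
                self.out.append("</%s>" % tag)

        def handle_data(self, data):
            self.out.append(data)

    def render_safe(html: str) -> str:
        p = SafeSubset()
        p.feed(html)
        return "".join(p.out)

    # The <script> and <img src=...> tags vanish; what remains is plain text
    # plus harmless markup.
    print(render_safe('<p>hi <img src="http://evil/x.gif"><script>run()</script></p>'))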

BTW, a resource model has to change with time.  The traditional view has been
that I grant anyone full write access to the end of my mail file - i.e., anyone
can send me as much mail as they like.  These days, I really *don't* want to
grant that access to spammers - and spam blocking techniques are all about how
to control the SMTP interpreter I present to the outside world to protect
my disk resources.
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]