Re: nettime Pirate Utopia, FEED, February 20, 2001

2001-09-20 Thread Adam Back

Also it's interesting to note that it appears from Niels Provos and
Peter Honeyman's paper that none of the currently available stego
encoding programs are secure.  They have broken them all (at least I
recognise the main stego programs available in their list of systems
their tools can attack), and it appears that all of the stego encoders
are naive attempts.

So either the FBI and NSA are unaware of and lagging behind Provos'
work, and the media reports (that seized images could have contained
stego content) are unsubstantiated hype designed to further
alternative agendas (nasty privacy-software-outlawing agendas, or
perhaps pure media-originated hype).

Or, they found existing stego software and evidence of its use on
seized equipment, or even some second-generation, non-publicly
available stego software on seized equipment.

I rather doubt this second possibility as we've also seen reports that
the perpetrators didn't even use crypto.

Adam


On Fri, Sep 21, 2001 at 08:27:00AM +1000, Grant Bayley wrote:
 
 It's a shame that Niels Provos, one of the main developers of open-source
 steganography software at the moment, wasn't able to detect a single piece
 of information hidden steganographically in a recent survey of two million
 images...  Sort of destroys the whole hype about the use of it by
 criminals.  Details on the paper below:






Re: nettime Pirate Utopia, FEED, February 20, 2001

2001-09-21 Thread Adam Back

My point was higher level.  These systems are either already broken or
fragile and very lightly peer reviewed.  There aren't many people
building and breaking them.

I did read the papers; my summary is the above, and from that I
surmise it would not be wise for a terrorist to use current generation
steganography systems.

Probably more likely would be the other poster's suggestion that they
would use pre-arranged, manually obscured meaning in innocuous email,
which if done with low enough bandwidth is probably pretty damn robust
and secure.

However, unlike the other poster, I don't consider this stego in the
sense of the news report being discussed -- they are talking up the
idea of banning anonymity and steganography software -- whereas in
reality the software is not being used, and doesn't make sense to use
given the current state of the art.  The lobbying by the signals
intelligence community is mischaracterizing the technical reality to
further their own special interest, which is easy to do as both the
public and the media are easy to manipulate, having even less
understanding of anonymity and steganography than they do of
confidentiality.

Adam

On Fri, Sep 21, 2001 at 03:10:05AM +0200, Nomen Nescio wrote:
 No, Provos' own system, Outguess, www.outguess.org, is secure in the
 latest version.  At least, he can't break it.  It remains to be seen
 whether anyone else can.  See the papers on that site.






Re: nettime Pirate Utopia, FEED, February 20, 2001

2001-09-22 Thread Adam Back

On Fri, Sep 21, 2001 at 06:19:43PM +0100, Adam Back wrote:
 My point was higher level.  These systems are either already broken or
 fragile and very lightly peer reviewed.  There aren't many people
 building and breaking them.

To elaborate on this slightly: there are inherent reasons why
steganography is harder than encryption.  The arms race of hiding data
in noise turns on which side (the hider vs the detector) has the
better understanding of the characteristics of the host signal.  The
problem is that the host signal is not something with a clear
definition; what is known of it is primarily empirical statistical
analysis.  Manipulating signals with noise in them to replace the
noise with the stegotext is not so hard, but knowing and modeling the
signal and the source noise is not a cleanly solvable problem.

There will be a never-ending stream of more refined and accurate
models of the signal itself, and of biases in the equipment that
collects the signal.  So there will always be a risk that the detector
gets the edge by modeling the bias marginally more accurately, or by
finding some new bias not modelled by the hider.
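
To make the hider's side concrete, here is a minimal sketch (in
Python, my own illustration, not any of the encoders Provos broke) of
the naive LSB approach most current tools take -- it overwrites the
host's noise bits with payload bits whose statistics differ from the
source noise, which is exactly what a detector with a better noise
model exploits:

def lsb_embed(samples: list[int], payload: bytes) -> list[int]:
    # Naive LSB steganography: overwrite the least significant bit of
    # each host sample (eg. 8-bit image or audio values) with one
    # payload bit.  The payload's bit statistics replace the source
    # noise statistics, which statistical detectors pick up.
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    assert len(bits) <= len(samples), "payload too large for host"
    out = samples[:]
    for n, bit in enumerate(bits):
        out[n] = (out[n] & ~1) | bit
    return out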

 Or, they found existing stego software and evidence of its use on
 seized equipment or even some second-generation, non-publicly
 available stego software on seized equipment.

There have subsequently been news reports claiming the terrorists had
non-publicly available stego software written by their own expert.
This still conflicts with numerous other reports, so it's not clear
what's going on.

But either way none of this would help the signals intelligence
special interest groups arguments to ban steganography, anonymity or
encryption as if anything it would be proof by example of the argument
that terrorists won't have difficulty obtaining software as they can
in the worst case write it from scratch.

Adam






Re: limits of watermarking (Re: First Steganographic Image in the Wild)

2001-10-19 Thread Adam Back

On Fri, Oct 19, 2001 at 10:24:55AM -0400, Roop Mukherjee wrote:
 The analogy was intended towards publicly known, provably strong means
 of copy protection.

But no such schemes exist, and as I was arguing earlier, I don't think
they will be found either because there are fundamental problems with
the framework before one even gets to implementation details.

 Most security measures these days would be foolish to choose
 otherwise. My impression of the DRM work that was being undertaken
 is that most of it is aiming towards open specifications that are
 provably secure. For instance the SDMI charter says, ...to develop
 open technology specifications that protect the playing, storing,
 and distributing of digital music  Measures like this would
 indeed raise the bar in much the same way as some other security
 measures like SSL did.

Well, Kerckhoffs' principle (strength lies only in the key, assuming
open specifications) is a very good thing, but I don't think that in
the case of copy protection schemes abiding by it would raise the bar
significantly.  It would tend to remove the stupid things like the
broken proprietary algorithms, simply because someone would look at
the specs and guffaw before they shipped.  But schemes meeting the
RIAA's and MPAA's objectives are not buildable, whether one uses good
crypto or broken proprietary crypto, and whether one publishes what
one designs or not.

For example Microsoft's DRM v2 was cracked recently [1], and if you
read the technical description, there is some sound crypto (SHA1, DES
(small keys, but sound), ECC key exchanges) in the design, as well as
one proprietary block cipher used to build a MAC; but the attacker
didn't even have to try to break the proprietary MAC, because the DRM
v2 system, and _all such schemes generically_, are systemically
flawed.

(In this case the attacker simply read the keys from memory, and in
fact, with far less effort than anticipated by the implementors,
simply side-stepped their not-that-thorough attempts at obfuscation.)

You can't hide things in the open in software on a PC.  You can't
even hide things in hardware if the attackers are determined.  And as
DeCSS shows, a few million Linux users and hackers count as a very
determined and incredibly technically able group of people.

Adam

[1] http://www.theregister.co.uk/content/4/22354.html






limits of watermarking (Re: First Steganographic Image in the Wild)

2001-10-16 Thread Adam Back

On Tue, Oct 16, 2001 at 11:30:05AM -0700, Greg Broiles wrote:
 Adam Back wrote:
 Stego isn't a horseman, and the press drumming up scare stories around
 stego is ludicrous.  We don't need any more stupid cryptography or
 internet related laws.  More stupid laws will not make anyone safer.
 
 I agree, but if Congress isn't careful (and they don't seem to be in a
 careful mood these days), they'll end up outlawing watermarking in
 digital content, which would do to the DRM (digital rights management)
 industry what they tried to do to security researchers with the DMCA.
 
 Perhaps the RIAA and SDMI folks will now come out in favor of
 steganography in order to save their businesses.
 
 Or maybe they'll be forced to rewrite their complicated protection schemes
 to enable stego escrow, so that federal agents can monitor the secrets
 hidden inside published content, to make sure there aren't any hidden
 messages in Anthrax albums.

So I presume your discussion of the applicability of stego techniques
to the detection of unauthorised copying refers to the framework where
content is personalised by having something identifying the purchaser
encoded in it at the time of delivery to the purchaser.

Steganography means hiding the existence of a message -- making it
hard to distinguish content without a stegotext from content with a
stegotext embedded in it.

Copymarks are about making it hard for the user to remove the message
without massively degrading the quality (*).  This means you want some
or all of the purchaser-identifying information to be hard to locate
-- because once it is located it can be removed.

But watermarks don't have to be invisible -- just hard to remove
without degrading the image quality.  This tends to mean spread
spectrum techniques, with unpublished parameters for where the signal
is stored, so that there is no publicly constructable discriminator,
and no black-box discriminator queryable either.

However this framework inherently violates Kerckhoffs' principle.
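
(A toy sketch of the spread spectrum idea -- my own illustration with
made-up parameters, in the spirit of published additive schemes, not
any deployed product.  The mark is a secret-keyed pseudorandom pattern
added to the samples; detection is correlation against that pattern,
so without the secret there is no discriminator:)

import random

def embed_mark(samples: list[float], secret: str, alpha: float = 2.0):
    # Add a secret-keyed pseudorandom +/-1 pattern, scaled by alpha,
    # to the host samples; alpha trades robustness against fidelity.
    rng = random.Random(secret)
    pattern = [rng.choice((-1.0, 1.0)) for _ in samples]
    return [s + alpha * c for s, c in zip(samples, pattern)], pattern

def detect_mark(samples: list[float], pattern: list[float],
                threshold: float = 1.0) -> bool:
    # Correlation is ~alpha on marked signals, ~0 on unmarked ones.
    corr = sum(s * c for s, c in zip(samples, pattern)) / len(samples)
    return corr > threshold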

Another framework is to have players which will only play content with
certified copy marks (they need not be visible -- they could be
encoded in a logo in the corner of the screen).  The copymark is a
signed hash of the content and the identity of the purchaser.
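
(A sketch of that certified copymark check -- my own illustration; the
choice of Ed25519 and the third-party Python cryptography package is
mine, nothing to do with any deployed player:)

import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_copymark(content: bytes, purchaser_id: str,
                  certifier: Ed25519PrivateKey) -> bytes:
    # The mark is a signature over hash(content) plus the purchaser's
    # identity; it need not be hidden, only checked by the player.
    return certifier.sign(hashlib.sha256(content).digest()
                          + purchaser_id.encode())

def player_accepts(content: bytes, purchaser_id: str, mark: bytes,
                   certifier_public) -> bool:
    # A compliant player recomputes the hash and refuses content
    # whose mark does not verify against the certifier's public key.
    try:
        certifier_public.verify(mark, hashlib.sha256(content).digest()
                                + purchaser_id.encode())
        return True
    except InvalidSignature:
        return False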

This could be relatively robust, except that usually there is also a
provision for non-certified content -- home movies etc -- and then the
copy mark can be removed, while keeping the content playable, by
converting it into the home movie format, which won't and can't be
certified.

Just to say that copymarks and steganography are related but different.

In my opinion copymarks are evil and doomed to fail technically.
There always needs to be playable non-certified content, and current
generation watermarks seem easy to remove.  Even if a really good job
of spread spectrum encoding were done, someone would reverse engineer
the players to extract the location parameters, and then those marks
too would be removable.  And in the end, even if someone did manage to
design a robust watermarking scheme respecting Kerckhoffs' principle,
the identity information is weakly authenticated and subject to
identity theft; or the content itself could be stolen, or plausibly
deniably claimed to have been stolen -- and this only has to happen
once for each work.

All with no comments on the US Congress being careful of course --
they are ham-fisted at the best of times, and they have degraded far
beyond their normal state.

Adam

(*) This in itself is pretty hard -- reportedly StirMark [1] (a small
random shearing image transform) gets rid of all evaluated watermarks.

[1] Fabien A.P. Petitcolas, Ross J. Anderson, Markus G. Kuhn: "Attacks
on copyright marking systems", Information Hiding, Second International
Workshop, IH'98.

http://www.cl.cam.ac.uk/~mgk25/stirmark.html






Re: limits of watermarking (Re: First Steganographic Image in the Wild)

2001-10-17 Thread Adam Back

Ben Laurie wrote:
 The other obvious weakness in such a scheme is that the player can
 be modified to ignore the result of the check - rather like
 defeating dongles, which have yet to exhibit any noticable
 resistance to crackers.

I think, though, that that weakness is more workable -- for example
playstations can be chipped to work from copies of CDs, but the
proportion of the market willing to make hardware modifications is
probably sufficiently low that the copying rate is not a significant
financial loss to the distributor (especially after adjusting for
people who wouldn't have bought the work anyway, which is the group
most likely to make the modification (students with low budgets
etc)).

Things which can be defeated by software or firmware upgrades alone
are far more fragile, and subject to changing user demographics, more
internet-aware and connected users, and the increasing scale of
file-sharing networks; whereas devices needing hardware modifications
have non-zero reproduction costs, and carry a risk of damaging
expensive equipment in the operation.

On Wed, Oct 17, 2001 at 10:23:03AM +0100, Ben Laurie wrote:
 Adam Back wrote:
  [...why copymarks don't work...]

 [...]
 It seems to me that putting the details of the purchaser in plaintext on
 the beginning of the file and making it illegal to remove it is as good
 a protection as you are ever going to get - but that would ruin a whole
 bunch of business plans, so I guess no expert is going to admit that.

It may be more to do with attempts to qualify under the legal
provisions of the DMCA: to construct something which (legally)
arguably qualifies as a system intended to prevent copying, so they
can sue people who bypass it.

Another argument I've heard for making dumb proprietary schemes is
that they need them to be proprietary so they can make onerous
conditions part of the licensing agreement, and sue anyone who makes
devices or software without licensing their broken technology from
them.  In effect, that it's utterly broken doesn't matter -- what
matters is that it's claimable as an original work under patent law.

 In short, the agenda, it seems to me, is the business plans of
 companies in the watermarking business.

That too is doubtless part of the problem.  IBM's cryptolopes lent
credibility, by brand recognition, to related technologically broken
efforts such as InterTrust and the other watermark-related
business-plan startups, digi-boxes and the like.  SDMI was another
broken attempt.

 No more, no less. I'm amazed the media moguls are willing to waste
 so much of their time and money on it.

It could be that the only thing keeping the InterTrust types in
business is the patentability and the DMCA-qualifying legal arguments
above.  Technologically they are all systemically broken.

There may be an element of technological naivete on the part of the
MPAA and RIAA too, though; perhaps the decision makers were genuinely
confused to start with, and the crypto-box outfits have incentives to
exaggerate the technological properties of their systems to their
customers, the RIAA, MPAA etc.

Adam






Re: PGP GPG compatibility

2002-01-21 Thread Adam Back

If you ask me, GPG has as much to answer for in the
non-interoperability problems, with its rejection of shipping IDEA in
the default GPG, as PRZ et al have for deciding not to ship RSA.

I tried arguing with PGP that if they wanted to phase out RSA use, the
best way would be to support it: then more people would have upgraded
to pgp5.x and started using new key types.  Instead people continued
to use PGP2.x in defense, as it was the only thing which reliably
interoperated.

It's understandable that PGP would have wanted to phase out RSA due to
the trouble RSADSI caused with licensing of the RSA patent, but still
the approach taken had predictably the opposite effect to that which
they hoped to achieve.

GPG on the other hand is simply wilfully damaging interoperability by
putting its anti-patent stance over the benefit of PGP users.  I know
there are modules to add IDEA support, but they're not shipped by
default, so most people don't use them.

It seems that the result of GPG's and PGP's intentionally induced
incompatibilities has been greatly reduced PGP use.  I used to use PGP
a lot; these days I use it a lot less -- most uses induce all kinds of
problems, to the extent that most people resort to using plaintext.

If the --pgp2 option implies that GPG will then ship with IDEA, and
that there is a way to request PGP2 compatibility, that is a good
step.

However, it should be possible to select that option automatically,
based on the public key parameters of the person you're sending to;
that was, if I recall, the reason for the introduction of the new
openPGP RSA format -- so that PGP2-generated RSA keys could be
distinguished from openPGP keys, and compatibility could be
maintained.

Adam

On Mon, Jan 21, 2002 at 09:35:24AM +0100, Werner Koch wrote:
 On 20 Jan 2002 21:46:35 -0500, Derek Atkins said:
 
  Question: How many users of PGP 2.x are still out there?  If people
  have upgraded to more recent versions, then it's not quite as bad.
  OTOH, I have successfully interoperated with PGP 2.6 fairly recently.
 
 Things would get much better if a PGP 2 version with support for CAST5
 would get more into use.  We can't officially support IDEA for patent
 reasons in GnuPG; the next release comes with a --pgp2 option to
 bundle all the options needed for pgp 2 compatibility, and furthermore
 you will get a warning if a message can't be encrypted in a PGP2
 compatible way.  
 
 There is a pgp 2 version by Disastry (http://disastry.dhs.org/pgp)
 which supports all OpenPGP-defined ciphers.






Re: Secure peripheral cards

2002-03-21 Thread Adam Back

On Thu, Mar 21, 2002 at 10:02:20AM -0500, R. A. Hettinga wrote:
 At 7:21 PM -0500 on 3/20/02, Roop Mukherjee wrote:
  I am searching for some citable references about secure peripheral cards.
  Contrary to what I had imagined when I had started searching, I found very
  little. I am looking to see what peripherals have cryptographic
  capabilities, and what their capabilities are.
 
  The Embassy (www.wave.com) thing seems like a single secure system in
  itself, which can run programs and do everything from secure boot to
  secure IO. So I imagine that all of this stuff will not be put in the
  peripherals. Also in the same vein, US patent 6,314,409 talks of a secure
  system, but in more abstract terms.
 
  Intel's audio players and SigmaTel's audio _decoders_ (which can be a
  complete device or a peripheral, according to the brochure) seem to claim
  Microsoft's DRM compatibility.
 
  I would appreciate some better references.
 
 I think you should talk to NCipher about this stuff.
 
 As far as I can tell, Nicko's hardware development people have the best
 handle on secure boxes to store keys in, cryptographic accelerator
 peripherals, and so on.

I'm not sure NCipher gear is the #1 for acceleration; I think they're
probably more focussed on, and used for, secure key management.  For
example they quote [1] that an nForce can do up to 400 new SSL
connections per second.  So that's CRT RSA, though it's not clear if
1024 bit or 512 bit (it does say up to).  openSSL on a PIII-633MHz
can do 265 512-bit CRT RSA operations per second, or 50 1024-bit CRT
RSA operations per second.  So whether it will even speed up current
entry-level systems depends on the correct interpretation of the
product sheet.
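
(For readers unfamiliar with the term: CRT RSA does the private-key
operation separately mod p and mod q and recombines, roughly a 4x
speedup over a straight exponentiation mod n.  A toy Python sketch
with textbook-sized numbers, purely illustrative -- real benchmarks
like the openSSL figures above use optimized bignum assembler:)

p, q = 61, 53                      # toy primes; real 1024 bit keys use 512 bit primes
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+ modular inverse)

def crt_sign(m: int, d: int, p: int, q: int) -> int:
    # Two half-size exponentiations instead of one full-size one:
    # that is the "CRT" in the CRT RSA throughput figures.
    dp, dq = d % (p - 1), d % (q - 1)
    q_inv = pow(q, -1, p)
    sp, sq = pow(m, dp, p), pow(m, dq, q)
    return sq + q * ((q_inv * (sp - sq)) % p)

sig = crt_sign(42, d, p, q)
assert pow(sig, e, n) == 42        # public-key verification undoes the signature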

And the economics of course depend on how expensive they are relative
to general purpose CPUs, plus the added complexity of using embedded
hardware and drivers and getting it to play with your web server.
General purpose CPUs are _really_ fast and cheap right now.

But for the application at hand -- secure key management -- perhaps an
NCipher card is OK; I haven't compared feature sets so can't really
comment.

Adam

[1] http://www.ncipher.com/products/rscs/datasheets/nFast.pdf




fast SSL accelerators (Re: Secure peripheral cards)

2002-03-23 Thread Adam Back

On Fri, Mar 22, 2002 at 03:39:01PM +1100, Greg Rose wrote:
 But don't forget that your pentium can't do anything *else* while it's 
 doing those RSAs... whereas the machine with the nForce can be actually 
 servicing the requests.

While that is true, the issue is the economics; depending on the
figures it may be cheaper and much simpler to buy a faster Pentium, or
better yet an even faster and better-value-for-money Athlon -- or even
a dual processor machine.

Cryptoapps seem to sell a 2000 key operations per second card, clearly
stated as 1024 bit (CRT) RSA, for $1400 [1].  That might be harder to
compete with using Athlons, as one of those PCI cards is around 13x
faster than the fastest x86 compatible processor you can buy right
now.

Of course this is now straying off the original discussion of secure
hardware, and focussing on the fastest, most economical way to do lots
of SSL connections per second rather than the most secure way to store
keys in hardware, so I changed the subject line.

Adam

[1] http://www.cryptoapps.com/




Re: RSA on general-purpose CPU's [was:RE: Secure peripheral cards]

2002-03-25 Thread Adam Back

On Sat, Mar 23, 2002 at 05:00:12PM -0800, Eric Young wrote:
 openSSL on a PIII-633Mhz can do 265 512 bit CRT RSA per 
 
 I don't know what the OpenSSL people did to the x86 ASM code, but
 SSLeay (the precursor to OpenSSL, over 3 years old) did/does 330
 512bit and 55 1024 bit RSAs a second on a 333mhz celeron (linux
 and/or win32).

Hmm -- probably the question is how whoever compiled that binary (not
me) managed to compile it without the bignum assembler.

Here's another binary; try this instead (also P3-633MHz):

OpenSSL 0.9.5a 1 Apr 2000
built on: Wed Aug 30 14:46:28 EDT 2000
options:bn(32,32) rc4(ptr,int) des(ptr,risc1,16,long) blowfish(idx) 
compiler: gcc -DTHREADS -D_REENTRANT -fPIC -ggdb -DNO_IDEA -DNO_MDC2 -DNO_RC5 -DNO_MD2 
-DL_ENDIAN -DTERMIO -O2 -march=i386 -mcpu=i686 -Wall -DSHA1_ASM -DMD5_ASM -DRMD160_ASM
                  sign    verify    sign/s verify/s
rsa  512 bits   0.0019s   0.0002s    523.8   5095.9

(It is -O2 and has debug)

So I think that's more in line with your figures, and I suppose it
makes the point even more strongly that current processors are
amazingly fast and cheap; plus Lucky's figures on the IA64 show the
trend continuing.

Adam




ciphersaber-2 human memorable test vectors

2002-03-28 Thread Adam Back

A while ago I wrote some code to search for human-readable test
vectors for Arnold Reinhold's ciphersaber-2
(http://ciphersaber.gurus.com).

Ciphersaber-2 is designed to be simple enough to be implemented from
memory, to avoid the risk of being caught with crypto software on your
computer in regimes which have outlawed encryption.  But the only
available test vectors are a big string of hex digits which the
implementor will probably find difficult to remember, and it is quite
easy to make mistakes when implementing ciphers -- the risk of
accidentally using an incorrectly implemented and weak variant of
ciphersaber-2 is significant.  It would therefore be useful, for the
stated purpose of ciphersaber-2, to have human-readable and memorable
test vectors.
 
The software for exploring for human readable test vector phrases and
information on using it is here:
 
http://www.cypherspace.org/adam/csvec/
 
I have not myself spent much time looking for satisfyingly witty,
topical and memorable phrases. I'll appoint Arnold as the judge and
let you the reader see what you can come up with using the
software. The winning test-vector phrases will go on that
page. Perhaps Arnold will make some honorary 2nd level Cipher
Knighthood and certificate for producing the coolest phrase which is
also a ciphersaber-2 test vector.
 
By way of example the following is a ciphertext plaintext pair:
 
% csvec -c2 db 3 4 3 5 3 spook.lines
selected 155 words of length [3-5]
k=AMME,iv=spy fraud : bce DES

which is interpreted as:

ciphertext = spy fraud DES
plaintext = bce
key = AMME

(the iv is prepended to the ciphertext)

and can be checked as: 

% echo -n spy fraud DES | cs2.pl -d -k=AMME
bce

and so on.
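
For concreteness, here is a minimal Python sketch of ciphersaber-2
decryption (my own illustration: RC4 with the key-setup loop repeated,
and the 10-byte IV prepended to the ciphertext).  The number of
key-setup rounds is a parameter agreed between correspondents; the 20
assumed here may differ from what cs2.pl uses:

def rc4_keystream(key: bytes, rounds: int):
    # Ciphersaber-2 key schedule: the standard RC4 key-setup loop is
    # repeated `rounds` times (rounds=1 gives ciphersaber-1 / plain RC4).
    s = list(range(256))
    j = 0
    for _ in range(rounds):
        for i in range(256):
            j = (j + s[i] + key[i % len(key)]) % 256
            s[i], s[j] = s[j], s[i]
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        yield s[(s[i] + s[j]) % 256]

def cs2_decrypt(ciphertext: bytes, key: bytes, rounds: int = 20) -> bytes:
    iv, body = ciphertext[:10], ciphertext[10:]   # 10-byte IV prepended
    ks = rc4_keystream(key + iv, rounds)
    return bytes(c ^ next(ks) for c in body)

# cs2_decrypt(b"spy fraud DES", b"AMME") should yield b"bce" if the
# rounds parameter matches the one the search tool used.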

Anton Stiglic and I were also thinking that there are other ciphers
for which you could make human-memorable test vectors.  For example
DES and other 64-bit block ciphers seem plausible, though the
searching would be slower: the plaintext would have to be 8
characters, and matches are rarer due to the rate of the English
language.

Anton had a number of ideas about how you could make test vectors for
other primitives like SHA, using a partial hash collision as the
validity test, as computing a full collision would be impossibly
expensive.

In general purely human-readable test vectors are not ideal as they
are 7-bit, and there have been cases of implementation errors related
to the high bit (for example one blowfish implementation had problems
with signed / unsigned chars), but it is kind of an interesting
thought experiment.

Also, if done by the cipher designer, he may choose the magic
constants for the cipher / hash function specifically to allow a
human-memorable test vector, though this would weaken the usual
kosherizing constructions -- nothing-up-my-sleeve constants such as
the digits of PI -- used to assure the reader that there is no
ulterior motive behind the constants.

I suppose the general approach of trying lots of human-readable
strings to find one with a desired property also calls into slight
doubt the use of text phrases as a kosherizing method (eg magic
constants derived by repeatedly SHA1-hashing some phrase) -- if in
fact the algorithm designers could try lots of phrases to arrive at a
subtly weakened set of magic numbers.

Adam
--
http://www.cypherspace.org/adam/




Re: ciphersaber-2 human memorable test vectors

2002-03-30 Thread Adam Back

On Sat, Mar 30, 2002 at 08:27:02AM -0800, Jeff Cours wrote:
 On Fri, 29 Mar 2002, Adam Back wrote:
 
  Any takers on ciphersaber-2 test vectors which are also topical
  and amusing english phrases?
 
 Is there a faster way to search the test vector space than brute
 force? Only certain output values from the PRNG will transform
 alphanumeric characters into other alphanumerics, so that's one way to
 constrain the search, but are there other, more effective ones?

The code on the web page makes that optimization.

http://www.cypherspace.org/adam/csvec/

Here's what it does: 

- from the word sets you feed it, equal-length word pairs are first
XORed and stored for fast lookup, with the lookup key being the XOR of
the word pair, and the value stored being a list of word pairs (quite
often you get multiple word pairs that XOR to the same value)

- brute force over human-readable keys and IVs meeting the constraints
given by the user

- first test if the keystream output is 7-bit clean (the XOR of two
7-bit-clean values is 7-bit clean)

- if so, look up successive word lengths, from the set of word lengths
the user requested, in the pre-computed word-pair database

I use Dan Bernstein's amazingly fast and compact CDB (Constant
DataBase) to store the XOR pairs in -- if you have enough RAM, or a
small word set, the lookups will be cached anyway, but the CPU-to-
lookup ratio is such that it's fast enough.  (I don't try to keep the
CPU busy while waiting for disk; the disk isn't exactly buzzing even
with fairly short plaintext / ciphertext words -- if you cared about
that small improvement you could start a few clients in parallel or
fix the code.)
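
As a rough illustration of the data structure (my own Python
re-rendering, not the actual csvec code -- names and details are
assumptions):

from collections import defaultdict
from itertools import combinations

def build_xor_table(words):
    # Index equal-length word pairs by their XOR, so a candidate
    # keystream chunk can be matched against every plaintext /
    # ciphertext word pair in one lookup (csvec keeps this table in
    # Bernstein's CDB; an in-memory dict plays that role here).
    table = defaultdict(list)
    for a, b in combinations(words, 2):
        if len(a) == len(b):
            x = bytes(c ^ d for c, d in zip(a.encode(), b.encode()))
            table[x].append((a, b))
    return table

def seven_bit_clean(chunk: bytes) -> bool:
    # The XOR of two 7-bit-clean bytes is 7-bit clean, so a keystream
    # chunk with any high bit set is rejected before the table lookup.
    return all(b < 0x80 for b in chunk)

# Search loop sketch: for each candidate human-readable key and IV,
# generate a keystream chunk; if seven_bit_clean(chunk) then
# table.get(chunk, []) lists the word pairs that chunk connects, and
# each hit is a plaintext / ciphertext word pair for that key.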

Those seemed like the obvious speedups; perhaps there are others.  But
the current approach may be fast enough.  The frequency with which it
finds words goes down as you request longer plaintext / ciphertext
words, due to the rate of English, but I presume the search then
becomes more CPU bound, as a higher proportion of RC4 PRNG outputs
will not be 7-bit clean and so will be rejected before getting to the
database lookup.

Adam
--
http://www.cypherspace.org/adam/




what is GPG's #1 objective: security or anti-patent stance (Re: on the state of PGP compatibility (2nd try))

2002-04-01 Thread Adam Back

Hi

I've trimmed the Cc line a bit as this is now focussing more on GPG,
and not adding anything new technically for the excluded set.

On Sun, Mar 31, 2002 at 06:08:14PM -0500, David Shaw wrote:
 The OpenPGP spec handles compatibility issues quite well.  
 The catch, of course, is that PGP 2.x isn't OpenPGP and PGP 5.x
 isn't OpenPGP.  PGP 6 and 7 aren't really OpenPGP either, but
 they're generally close enough for all practical purposes.

I don't see how this is a useful distinction.  They are self-evidently
not close enough for practical purposes, as evidenced by the
fragmented user base and the ongoing problems you experience if you
try using PGP.

Back in the days of pgp2.x I used to receive and send a fair
proportion of mail encrypted with pgp; these days it is a much lower
proportion, and a rather high proportion of those messages fail.  It's
not as if I'm using old software or failing to try what is reasonable
to get messages to work.  Even with my fairly complete collection of
PGP versions you saw the results.  Imagine how much worse it will be
between people who do not upgrade frequently or take such defensive
strategies.  So you say upgrade already.  However, as I think I have
demonstrated, I follow this strategy myself, and as you can see it
doesn't work either.

 PGP 7 or GnuPG with the IDEA plugin can handle messages from any
 version of PGP, OpenPGP or not.

I can't speak to PGP 7's behavior in this discussion as it is not
available for the operating system I primarily use (linux), as far as
I am aware.

 I doubt it's intentionally hidden, though it's certainly a pain to
 find.

I would characterise the situation as intentionally frustrating
attempts to use IDEA.  The whole point of the little exercise of
stripping out idea.c, making it a separate dynamically loadable
module, and tucking it away in a FAQ where you are pointed to lectures
about how software and algorithm patents are bad, is _specifically and
explicitly_ to discourage use of patented algorithms (and in this case
of the idea.c implementation) and to encourage people to lobby about
the patent madness.

Campaigning against patent madness is a good cause in itself, but not
when it gets in the way of privacy to the point that people are
sending messages in plaintext.  After all, what is GPG's primary
purpose: is it an email security software package focussing on
security, or a platform for promulgating political views?  I view the
exclusion of idea.c from GPG as basically a security bug of higher
severity than, for example, PGP2.x's manipulable fingerprint, or
pgp5.something's unsigned ADK packet bug (before it got fixed), or the
potential buffer overflow in ZLIB.  This bug is worse because it
reproducibly and frequently causes people to send mail in clear text.
The other bugs are by comparison less dangerous, yet they (the two
more recent ones) were fixed by NAI, GPG and the other PGP software
maintainers with rushed overnight hot fixes.

I would suggest this bug would be best fixed in GPG by:

a) including IDEA as a default option in GPG -- the ASCOM patent
license is really very liberal for non-commercial use; and

b) if that goes against the GNU philosophy to the extent that it is
worth causing hindrance to the hundreds of thousands of users who
practically speaking are _going_ to want it, they could at least make
it a configuration file option and ship it as other crypto libraries
such as openSSL do.  (I'm not sure how it hurts the anti-patent stance
to do this -- gnupg.org is after all _already_ distributing idea.c,
just separately.)

Adam




objectivity and factoring analysis

2002-04-20 Thread Adam Back

I'd just like to make a few comments about the apparently unnoticed or
unstated conflicts of interest and bias in the analysis surrounding
Bernstein's proposal.

The following is not intended to trample on anyone's ego -- but I
think it deserves saying.

- I'm not sure any of the respondents so far, except Bernstein, have
truly understood the math -- there are probably few who do, factoring
being such a narrow research area.

- Dan Bernstein stated that it is not easy to estimate the constants
involved to know whether the asymptotic result affects currently used
key sizes; he stated that the conclusion should be considered unknown
until experimental evidence is gained.

- Nicko van Someren -- the person credited with originally making the
exaggerated, or at least highly worst-case, interpretation at the FC02
panel -- has a conflict of interest: the hardware accelerator gear
that nCipher sells will be more markedly needed if people switch to
2048 bit or larger keys.  Nicko has made no public comments in the
resulting discussion.

- Ian Goldberg, also on the panel, quickly distanced himself from van
Someren's claim, as Lucky's earlier mail could have been read to imply
that Goldberg had also agreed with it.

- RSA's FAQ downplaying the result seems relatively balanced, though
they have an incentive to downplay the potential of Bernstein's
approach.  They have a history of producing biased FAQs: for example
the earlier ECC FAQ, where they compared ECC unfavorably to RSA.  That
FAQ was removed after they licensed tech from Certicom and included
ECC in BSAFE.

- Bob Silverman, former RSA factoring expert, observes on sci.crypt,
quote:

 At this point, there is noone left at RSA Labs who has the expertise
 or knowledge to judge Bernstein's work.

- Bruce Schneier's somewhat downplaying comments: as far as I know
Bruce isn't an expert on factoring, and he doesn't credit anyone who
is in his report.  Bruce's comments lately seem to have lost much of
their earlier objectivity -- many of his recent security newsletters
contain healthy doses of adverts for Counterpane's managed security
offering, and calls for lobbying and laws requiring companies to use
such products for insurance eligibility.

- Lucky, on the other hand, suggested a practical security engineering
approach: start to plan for the possibility of migrating to larger key
sizes.  Already one SSH implementation has added a configuration
option to select a minimum key size accepted by servers as a result.
This seems like a positive outcome.  Generally the suggestion to move
to 2048 bit keys seems like a good idea to me.  It is somewhat like
the MD5 to SHA1 migration: MD5 isn't broken for most applications, but
it is potentially tainted by a partial result.  Similarly I would
concur with Lucky that it's prudent security engineering to use 2048
bit keys in new systems.  Historically, for example, PGP has raised
the minimum key size listed for casual use from 512 to 768 to 1024
bits over the years.  The progression to 2048 is probably not a bad
idea given current entry-level computer speeds and the possibility of
Bernstein's approach yielding an improvement in factoring.

The mocking tone of recent posts about Lucky's call seems quite
misplaced given the checkered bias and questionable authority of the
conflicting claims quoted above.

Adam
--
http://www.cypherspace.org/adam/




Re: objectivity and factoring analysis

2002-04-22 Thread Adam Back

Nicko writes:
 [...] the Bernstein proposal [...] (among other things) it details
 the conceptual design of a machine for computing kernels in a large,
 sparse matrix.  The design talks of the number of functional units
 and the nature of the communication between these units.  What I set
 out to do was look at how complex those units are and how hard it
 would be to connect them and to place a cost on this.
 
 [...]
 
 OK, here are the numbers I used [...] we should be able to find a
 kernel in somewhere around 10^7 seconds, which is 16 weeks or 4
 months.

For people who aren't following as closely, it is worth a reminder
that this is an estimate of the building cost and running time of one
aspect of the computation in the overall proposal.  (You give a rough
running-time estimate which some might misunderstand.)

The _overall_ running time of the algorithm -- and whether it is even
any faster or more economical than existing algorithms -- remains
_unknown_, due to the asymptotic nature of the result, which the
experiment is intended to investigate.

If the hardware were built, it might for example turn out that the
asymptotic result only starts to offer speedups at far larger key
sizes; if that were the case, depending on the results, the approach
might turn out to offer no practical speed-ups for the foreseeable
future.

Or it might turn out that it does offer some incremental improvement,
and even that key sizes should be increased.

But right now no one knows.

Adam

btw. As disclaimed in the original post, no insult was intended --
merely more accurate information, given the somewhat wild speculations.




Re: Shortcut digital signature verification failure

2002-06-21 Thread Adam Back

Doesn't a standard digital signature plus hashcash / client puzzles
achieve this effect?

The hashcash could be used to make the client consume more CPU than
the server.  The hashcash collision wouldn't particularly have to be
related to the signature; the collision would just act as a
proof-of-work entitling the client to have the server verify the
accompanying signature.
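
A minimal sketch of the idea in Python (the framing and the SHA1
partial-preimage cost function are my own, not hashcash's actual stamp
format): the server spends one hash to check a stamp that cost the
client about 2^bits hashes to mint, before doing the expensive
signature verification.

import hashlib

def leading_zero_bits_ok(digest: bytes, bits: int) -> bool:
    return int.from_bytes(digest, "big") >> (len(digest) * 8 - bits) == 0

def mint(challenge: bytes, bits: int = 20) -> bytes:
    # Client: brute-force a counter until SHA1(challenge || counter)
    # has `bits` leading zero bits -- roughly 2^bits hashes of work.
    counter = 0
    while True:
        stamp = challenge + counter.to_bytes(8, "big")
        if leading_zero_bits_ok(hashlib.sha1(stamp).digest(), bits):
            return stamp
        counter += 1

def check(challenge: bytes, stamp: bytes, bits: int = 20) -> bool:
    # Server: one hash to check the stamp; only then verify the
    # accompanying digital signature.
    return (stamp.startswith(challenge)
            and leading_zero_bits_ok(hashlib.sha1(stamp).digest(), bits))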

Adam

On Thu, Jun 20, 2002 at 11:08:37PM -0700, Bill Frantz wrote:
 I have been thinking about how to limit denial of service attacks on a
 server which will have to verify signatures on certain transactions.  It
 seems that an attacker can just send random (or even not so random) data
 for the signature and force the server to perform extensive processing just
 to reject the transaction.
 
 If there is a digital signature algorithm which has the property that most
 invalid signatures can be detected with a small amount of processing, then
 I can force the attacker to start expending his CPU to present signatures
 which will cause my server to expend its CPU.  This might result in a
 better balance between the resources needed by the attacker and those
 needed by the server.




Re: Ross's TCPA paper

2002-06-26 Thread Adam Back

On Wed, Jun 26, 2002 at 10:01:00AM -0700, bear wrote:
 As I see it, we can get either privacy or DRM,
 but there is no way on Earth to get both.
 [...]

Hear, hear!  First post on this long thread that got it right.

Not sure what the rest of the usually clueful posters were thinking!

DRM systems are the enemy of privacy.  Think about it... strong DRM
requires legal enforcement, because DRM is not technically
enforceable: all bit streams can be re-encoded from one digital form
to another (CD to MP3, DVD to DivX), and encrypted content streams out
to the monitor / speakers can be subjected to scrutiny by hardware
hackers to get at the digital content, or A-D reconverted back to
digital in high fidelity.

So I agree with Bear, and reiterate the prediction I make periodically
that the ultimate conclusion of the direction of DRM laws being
pursued by the media cartels will be an attempt to get legislation
directly attacking privacy.

This is because strong privacy (cryptographically protected privacy)
allows people to exchange bit-strings with limited chance of being
identified.  As the arms race between the media cartels and the DRM
cohorts continues, file sharing will start to offer privacy as a form
of protection for end-users (eg. freenet has some privacy related
features, and several others involve encryption already).

Donald Eastlake wrote:

| There is little *technical* difference between your doctor's records
| being passed on to assorted insurance companies, your boss, and/or
| tabloid newspapers and the latest Disney movies being passed on from a
| country where it has been released to people/theaters in a country
| where it has not been released.

There is lots of technical difference.  When was the last time you saw
your doctor use cryptolopes, watermarks etc to remind himself of his
obligations of privacy?

The point is that with privacy there is an explicit or implied
agreement between the parties about the handling of information.  The
agreement can not be technically *enforced* to any stringent degree.

However privacy-policy-aware applications can help a company avoid
unintentionally breaching its own agreed policy.  Clearly if the
company is hostile they can, at absolute minimum, copy the information
down off the screen.  Information fidelity is hardly a criterion with
private information such as health care records, so watermarks, copy
protect marks and the rest of the DRM schtick are hardly likely to
help!

Privacy applications can be successful in helping companies avoid
accidental privacy policy breaches.  But DRM cannot succeed, because
it is inherently insecure: you give the data and the keys to millions
of people, some large proportion of whom are hostile to the controls
the keys are supposedly enforcing.  Given the volume of people, and
the lack of social stigma attached to wide-spread flouting of copy
protection restrictions, there is an ample supply of people to break
any scheme, hardware or software, that has been developed so far, or
is likely to be developed or constructible.

I think content providers can still make lots of money where the
convenience and/or enhanced fidelity of bought copies means that
people would rather buy than obtain content on the net.

But I don't think DRM is significantly helping them, and they are
wasting their money on it.  All current DRM systems aren't even a
speed bump on the way to unauthorised net re-distribution of content.

Where the media cartels are being somewhat effective -- and where
we're already starting to see evidence of the prediction I mentioned
above, about DRM leading to a clash with privacy -- is in the
criminalization of reverse engineering, with the Sklyarov case, Ed
Felten's case etc.  Already a number of interesting breaks of DRM
systems are being released anonymously.  As things heat up we may
start to see incentives for the users of file-sharing for unauthorised
re-distribution to also _use_ the software anonymously.

Really I think the copyright protections being exploited by the media
cartels need to be substantially modified, to reduce or remove the
existing protections, rather than further restrictions and powers
being awarded to the media cartels.

Adam




DRMs vs internet privacy (Re: Ross's TCPA paper)

2002-06-26 Thread Adam Back

On Wed, Jun 26, 2002 at 03:57:15PM -0400, C Wegrzyn wrote:
 If a DRM system is based on X.509, according to Brand I thought you could
 get anonymity in the transaction. Wouldn't this accomplish the same thing?

I don't mean that you would necessarily have to correlate your viewing
habits with your TrueName for DRM systems.  Though that is mostly
(exclusively?) the case for currently deployed (or at least
implemented with a view to attempted commercial deployment) copy-mark
(fingerprint) systems, there are a number of approaches which have
been suggested, or could be used, to provide viewing privacy.

Brands credentials are one example of a technology that allows
trap-door privacy (privacy until you reveal more copies than you are
allowed to -- eg more than once for ecash).  Conceivably this could be
used with a somewhat online system, or in combination with a
tamper-resistant observer chip in lieu of an online copy-protection
system, to limit someone to, for example, a fixed number of viewings.

Another is the public key fingerprinting (public key copy-marking)
schemes of Birgit Pfitzmann and others.  These address the issue of
proof, such that the user of the marked object and the verifier (eg a
court) assessing a claim of unauthorised copying can be assured that
the copy-marker did not frame the user.

Perhaps schemes which combine both aspects (viewer privacy and
avoidance of need to trust at face value claims of the copy-marker)
can be built and deployed.

(With the caveat that though they can be built, they are largely
irrelevant, as the marks will no doubt also be easily removable, and
anyway do not prevent the copying of the marked object under a real or
feigned claim of theft from the user whose identity is marked in the
object.)


But anyway, my predictions about the impending collision between
privacy and the DRM / copy protection legislation power-grabs stem
from the relationship of privacy to later redistribution -- the
observations that:

1) clearly copy protection doesn't and can't a-priori prevent copying
and conversion into non-DRM formats (eg into MP3, DIVX)

2) once 1) happens, the media cartels have an interest to track
general file trading on the internet;

3) _but_ strong encryption and cryptographically enforced privacy mean
that the media cartels will ultimately be unsuccessful in this
endeavour.

4) _therefore_ they will try to outlaw privacy, impose escrowed
identity, internet passports etc., and try to get cryptographically
assured privacy outlawed.  (Similar to the earlier escrow push on
encryption, this time for media cartel interests instead of signals
intelligence special interests; but the media cartels are also a
powerful adversary.)

Also I note a slip in my earlier post [quoting Bear's post]:

| First post on this long thread that got it right.

Ross Anderson's comments were also right on the money (as always).

Adam




p2p DoS resistance and network stability (Re: Thanks, Lucky, for helping to kill gnutella)

2002-08-10 Thread Adam Back

On Fri, Aug 09, 2002 at 08:25:40PM -0700, AARG!Anonymous wrote:
 Several people have objected to my point about the anti-TCPA efforts of
 Lucky and others causing harm to P2P applications like Gnutella.

The point that a number of people made is that what is said in the
article is not workable: clearly you can't ultimately exclude chosen
clients on open computers, due to reverse-engineering.

(With TCPA/Palladium remote attestation you probably could so exclude
competing clients, but this wasn't what was being talked about).

The client exclusion plan is also particularly unworkable for gnutella
because some of the clients are open-source, and the protocol (since
the original reverse engineering of the nullsoft client) is also now
open.

With closed-source implementations there is some obfuscation barrier
that can be maintained: Kazaa/Morpheus did succeed in frustrating
competing clients due to its closed protocols and unpublished
encryption algorithm.  At one point an open source group
reverse-engineered the encryption algorithm, and from there the
contained Kazaa protocols, and built an interoperable open-source
client, giFT http://gift.sourceforge.net; but then FastTrack promptly
changed the unpublished encryption algorithm to another one, and used
their remote code upgrade ability to upgrade all of the clients.

Now the open-source group could counter-strike if they felt
particularly motivated to.  For example they could (1)
reverse-engineer the new unpublished encryption algorithm, (2)
reverse-engineer the remote code upgrade, and then (3) do their own
forced upgrade to an open encryption algorithm and (4) disable further
forced upgrades.

(giFT, after the upgrade attack from FastTrack, instead decided to
implement their own open protocol, openFT, and compete.  It also
includes a general bridge between different file-sharing networks, in
a somewhat gaim-like way, if you are familiar with gaim.)

 [Freenet and Mojo melt-downs/failures...] Both of these are object
 lessons in the difficulties of successful P2P networking in the face
 of arbitrary client attacks.

I grant you that making simultaneously DoS resistant, scalable and
anonymous peer-to-peer networks is a Hard Problem.  Even removing the
anonymous part it's still a Hard Problem.

Note both Freenet and Mojo try to tackle the harder of those two
problems and have aspects of publisher and reader anonymity; that they
are doing less well than Kazaa, gnutella and others is partly because
they are more ambitious and tackling a harder problem.  Also the
anonymity aspect possibly makes abuse more likely -- ie the attacker
is provided, as part of the system, tools to obscure his own identity
in attacking the system.  DoSers of Kazaa or gnutella would likely be
more easily identified, which is some deterrence.

I also agree that the TCPA/Palladium attested closed world computing
model could likely more simply address some of these problems.

(Lucky slide critique in another post).

Adam
--
http://www.cypherspace.org/adam/




trade-offs of secure programming with Palladium (Re: Palladium: technical limits and implications)

2002-08-12 Thread Adam Back

I think you are making incorrect presumptions about how you would use
Palladium hardware to implement a secure DRM system.  If used as you
suggest it would indeed suffer the vulnerabilities you describe.

The difference between an insecure DRM application such as you
describe and a secure DRM application correctly using the hardware
security features is somewhat analogous to the current difference
between an application that relies on not being reverse engineered for
it's security vs one that encrypts data with a key derived from a user
password.

In a Palladium DRM application done right, everything which sees keys
and plaintext content would reside inside Trusted Agent space, inside
DRM enabled graphics cards which restrict access to video RAM, and
later inside DRM enabled monitors (with an encrypted digital signal to
the monitor) and DRM enabled soundcards (with encrypted content to the
speakers).  (The encrypted content to media-related output peripherals
is like HDCP, only done right, with non-broken crypto.)

Now all that will be left in application space, which you can reverse
engineer and hack on, will be UI elements and the application logic
that drives the trusted agent, remote attestation, content delivery
and hardware.  At no time will keys or content reside in space that
you can virtualize or debug.


In the short term it may be that some of these pieces will not be
fully implemented, so that content does pass through OS or application
space, or into non-DRM video cards and non-DRM monitors, but the above
is the end-goal as I understand it.

As you can see, the remaining limit is the remote exploitability of
the trusted agent code, but this is within the control of the DRM
vendor.  If he does a good job of making a simple software
architecture and avoiding potential for buffer overflows, he stands a
much better chance of having a secure DRM platform than if, as you
describe, exploited OS code or rogue driver code can subvert his
application.


There is also, I suppose, the possibility of pushing content
decryption onto the DRM video card, so the TOR does little apart from
channelling key exchange messages from the SCP to the video card, and
channelling remote attestation and key exchanges between the DRM
license server and the SCP.  The rest would be streaming encrypted
video formats, such as CSS VOB blocks (only with good crypto), from
the network or disk to the video card.


Similar kinds of arguments about the correct breakdown between
application logic and the placement of security-policy-enforcing code
in Trusted Agent space apply to general applications.  For example you
could imagine a file sharing application which hid the data the user's
machine was serving from the user.  If you did it correctly, this
would be secure to the extent of the hardware tamper resistance (and
the implementer's ability to keep the security-policy-enforcing code
line-count down and to audit it well).


At some level there has to be a trade-off between what you put in
trusted agent space and what becomes application code.  If you put the
whole application in trusted agent space, then while all its
application logic is fully protected, the danger is that you have
added too much code to reasonably audit, so people will be able to
gain access to that trusted agent via a buffer overflow.


So therein lies the crux of secure software design in the
Palladium-style secure application space: choosing a good breakdown
between security policy enforcement and application code.  There must
be a balance, and what makes sense and is appropriate depends on the
application, and on the limits of the ingenuity of the protocol
designer in coming up with clever designs that protect, to hardware
tamper-resistant levels, the application's desired policy enforcement,
while providing a workably small and practically auditable associated
trusted agent module.


So there are practical limits stemming from the reality that code
complexity is inversely proportional to auditability and security, but
the extra ring -1, remote attestation, sealing and integrity metrics
really do offer some security advantages over the current situation.

Adam

On Mon, Aug 12, 2002 at 03:28:15PM -0400, Tim Dierks wrote:
 At 07:30 PM 8/12/2002 +0100, Adam Back wrote:
 (Tim Dierks: read the earlier posts about ring -1 to find the answer
 to your question about feasibility in the case of Palladium; in the
 case of TCPA your conclusions are right I think).
 
 The addition of an additional security ring with a secured, protected 
 memory space does not, in my opinion, change the fact that such a ring 
 cannot accurately determine that a particular request is consistent with 
 any definable security policy. I do not think it is technologically 
 feasible for ring -1 to determine, upon receiving a request, that the 
 request was generated by trusted software operating in accordance with the 
 intent of whomever signed it.
 
 Specifically, let's presume that a Palladium-enabled application is being 
 used for DRM; a secure

Re: trade-offs of secure programming with Palladium (Re: Palladium: technical limits and implications)

2002-08-12 Thread Adam Back

At this point we largely agree: security is improved, but the limit
remains assuring the security of over-complex software.  To sum up:

The limit of what is securely buildable now becomes what is securely
auditable.  Before, without Palladium, the limit was the security of
the OS, so this makes a big difference.

Yes, some people may design over-complex trusted agents, with sloppy
APIs and so forth, but the nice thing about trusted agents is that
they are compartmentalized:

If the MPAA and Microsoft shoot themselves in the foot with a badly
designed, over-complex DRM trusted agent component for MS Media
Player, it has no bearing on my ability to implement a secure
file-sharing or secure e-cash system in a compartment with rigorously
analysed APIs and well audited code.  The leaky, compromised DRM app
can't compromise the security policies of my app.

Also, it's unclear from the limited information available, but it may
be that trusted agents, like other ring-0 code (eg the OS itself), can
delegate tasks to user-mode code running in trusted agent space, which
can't examine other user-level space nor the space of the trusted
agent which started it, and also can't be examined by the OS.

In this way, for example, remote exploits could be better contained by
sub-dividing the trusted agent code.  eg. the crypto could be done by
the trusted agent proper, and the mpeg decoding by a user-mode
component; compromise the mpeg decoder, and you just get plaintext,
not keys.  Various divisions could be envisaged.


Given that most current applications don't even get the simplest
applications of encryption right (storing the key and password in the
encrypted file, and checking if the password is right by string
comparison, is surprisingly common), the prospects are not good for
general applications.  However it becomes more feasible to build
secure applications in the environments where it matters, or where the
consumer cares sufficiently to pay for the difference in development
cost.

Of course all this assumes Microsoft manages to securely implement the
TOR and SCP interfaces.  And whether they manage to successfully use
trusted IO paths to prevent the OS and applications from tricking the
user into bypassing intended trusted agent functionality is another
interesting sub-problem.  CC EAL3 on the SCP is a good start, but they
have pressures to make the TOR and Trusted Agent APIs flexible, so
we'll see how that works out.

Adam
--
http://www.cypherspace.org/adam/

On Mon, Aug 12, 2002 at 04:32:05PM -0400, Tim Dierks wrote:
 At 09:07 PM 8/12/2002 +0100, Adam Back wrote:
 At some level there has to be a trade-off between what you put in
 trusted agent space and what becomes application code.  If you put the
 whole application in trusted agent space, while then all it's
 application logic is fully protected, the danger will be that you have
 added too much code to reasonably audit, so people will be able to
 gain access to that trusted agent via buffer overflow.

 I agree; I think the system as you describe it could work and would be
 secure, if correctly executed. However, I think it is infeasible to
 generally implement commercially viable software, especially in the
 consumer market, that will be secure under this model. Either the
 functionality will be too restricted to be accepted by the market, or there
 will be a set of software flaws that allow the system to be penetrated.

 The challenge is to put all of the functionality which has access to
 content inside of a secure perimeter, while keeping the perimeter secure
 from any data leakage or privilege escalation. [...]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



TCPA/Palladium user interst vs third party interest (Re: responding to claims about TCPA)

2002-08-14 Thread Adam Back

The remote attestation is the feature which is in the interests of
third parties.

I think if this feature were removed, the worst of the issues the
complaints center on would go away, because the remaining features
would be under the control of the user, and there would be no way for
third parties to discriminate against users who did not use them, or
who configured them in given ways.

The remaining features of note being sealing, and integrity metric
based security boot-strapping.

However remote attestation is clearly the feature that TCPA and
Microsoft place most value on (it being the main feature allowing DRM,
and allowing remote influence and control to be exerted on users'
configuration and software choices).

So the remote attestation feature is useful for _servers_ that want to
convince clients of their trustworthiness (that they won't look at
content, tamper with the algorithm, or with anonymity or privacy
properties etc).  So you could imagine that feature being a part of
server machines, but not part of client machines -- there already
exist some distinctions between client and server platforms -- for
example high-end Intel chips with larger caches etc, intended for the
server market by their pricing.  You could imagine the TCPA/Palladium
support being available at extra cost for this market.

But the remaining problem is that the remote attestation is kind of
dual-use (of utility to both user desktop machines and servers).  This
is because with peer-to-peer applications, user desktop machines are
also servers.

So the issue has become entangled.

It would be useful for individual liberties for remote-attestation
features to be widely deployed on desktop class machines to build
peer-to-peer systems and anonymity and privacy enhancing systems.

However the remote-attestation feature is also against the user's
interests, because its widespread deployment is the main DRM-enabling
feature and a general tool for remote-control discrimination against
user software and configuration choices.

I don't see any way to have the benefits without the negatives, unless
anyone has any bright ideas.  The remaining questions are:

- do the negatives outweigh the positives (losing the ability to
reverse-engineer and virtualize applications, and trading
software-hacking based BORA for hardware-hacking based ROCA);

- are there ways to make remote-attestation not useful for DRM,
eg. limited deployment, other;

- would the user-positive aspects of remote-attestation still be
largely available with only limited deployment -- eg could interesting
peer-to-peer and privacy systems be built with a mixture of
remote-attestation-capable and non-capable nodes.

Adam
--
http://www.cypherspace.org/adam/

On Sat, Aug 10, 2002 at 04:02:36AM -0700, John Gilmore wrote:
 One of the things I told them years ago was that they should draw
 clean lines between things that are designed to protect YOU, the
 computer owner, from third parties; versus things that are designed to
 protect THIRD PARTIES from you, the computer owner.  This is so
 consumers can accept the first category and reject the second, which,
 if well-informed, they will do.  If it's all a mishmash, then
 consumers will have to reject all of it, and Intel can't even improve
 the security of their machines FOR THE OWNER, because of their history
 of security projects that work against the buyer's interest, such as
 the Pentium serial number and HDCP.
 [...]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



TCPA not virtualizable during ownership change (Re: Overcoming the potential downside of TCPA)

2002-08-15 Thread Adam Back

Phew... the document is certainly tortuous, and has a large number of
similarly and confusingly named credentials, certificates and keys;
however, from what I can tell, this is what is going on:

Summary: I think the endorsement key and its hardware manufacturer's
certificate are generated at manufacture and are not allowed to be
changed.  Changing ownership only means (typically) deleting old
identities and creating new ones.

The longer version...

- endorsement key generation and certification - There is one
endorsement key per TPM which is created and certified during
manufacture.  The creation and certification process is 1) create
endorsement key pair, 2) export public key endorsement key, 3)
hardware manufacturer signs endorsement public key to create an
endorsement certificate (to certify that that endorsement public key
belongs to this TPM), 4) the certificate is stored in the TPM (for
later use in communications with the privacy CA.)

- ownership - Then there is the concept of ownership.  The spec says
the TPM MUST ship with no Owner installed.  The owner, when he wishes
to claim ownership, chooses an authentication token which is sent into
the TPM encrypted with the endorsement key.  (They give the example of
the authentication token being the hash of a password).  Physical
presence tests apply to claiming ownership (eg think BIOS POST with no
networking enabled, or physical pin on motherboard like BIOS flash
enable).  The authentication token and ownership can be changed.  The
TPM can be reset back to a state with no current owner.  BUT _at no
point_ does the TPM endorsement private key leave the TPM.  The
TPM_CreateEndorsementKeyPair function is allowed to be called once
(during manufacture) and is thereafter disabled.

- identity keys - Then there is the concept of identity keys.  The
current owner can create and delete identities, which can be anonymous
or pseudonymous.  Presumably the owner would delete all identity keys
before giving the TPM to a new owner.  The identity public key is
certified by the privacy CA.

- privacy ca - The privacy CA accepts identity key certification
requests which contain a) identity public key b) a proof of possession
(PoP) of identity private key (signature on challenge), c) the
hardware manufacturer's endorsement certificate containing the TPM's
endorsement public key.  The privacy CA checks whether the endorsement
certificate is signed by a hardware manufacturer it trusts.  The
privacy CA sends in response an identity certificate encrypted with
the TPM's endorsement public key.  The TPM decrypts the encrypted
identity certificate with the endorsement private key.

- remote attestation - The owner uses the identity keys in the remote
attestation functions.  Note that the identity private keys are also
generated on the TPM; the identity private key also never leaves the TPM.  The
identity private key is certified by the privacy CA as having been
requested by a certified endorsement key.
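
To make the above flow concrete, here is a rough sketch in Python
using the pyca/cryptography library.  The key sizes, padding choices,
challenge and raw-byte-string "certificates" are illustrative
assumptions of mine, not what the TCPA spec mandates:

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes, serialization

def pub_der(pub):
    # serialize a public key so it can be signed/certified
    return pub.public_bytes(serialization.Encoding.DER,
                            serialization.PublicFormat.SubjectPublicKeyInfo)

sig_pad = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
enc_pad = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                       algorithm=hashes.SHA256(), label=None)

# at manufacture: 1) create endorsement key pair, 2-3) manufacturer
# signs the endorsement public key to form the endorsement certificate
ek = rsa.generate_private_key(public_exponent=65537, key_size=3072)
mfr = rsa.generate_private_key(public_exponent=65537, key_size=2048)
endorsement_cert = mfr.sign(pub_der(ek.public_key()), sig_pad, hashes.SHA256())

# owner creates an identity key on the TPM; its private half never leaves
idk = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# request to the privacy CA: identity public key, proof of possession,
# and the endorsement certificate
challenge = b"privacy-ca-challenge"
pop = idk.sign(challenge, sig_pad, hashes.SHA256())

# privacy CA: check the endorsement cert is from a trusted manufacturer
# and the PoP verifies, then return an identity cert encrypted to the EK
mfr.public_key().verify(endorsement_cert, pub_der(ek.public_key()),
                        sig_pad, hashes.SHA256())
idk.public_key().verify(pop, challenge, sig_pad, hashes.SHA256())
pca = rsa.generate_private_key(public_exponent=65537, key_size=2048)
identity_cert = pca.sign(pub_der(idk.public_key()), sig_pad, hashes.SHA256())
# a 256-byte signature fits under 3072-bit RSA-OAEP (limit ~318 bytes)
wrapped_cert = ek.public_key().encrypt(identity_cert, enc_pad)

# only the TPM holding the endorsement private key can recover the cert
assert ek.decrypt(wrapped_cert, enc_pad) == identity_cert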


The privacy CA and remote attestation descriptions above imply
something else interesting: the privacy CA can collude with anyone to
create a virtualized environment.  (This
is because the TPM endorsement key is never directly used in remote
attestation for privacy reasons.)  All that is required to virtualize
a TPM is an attestation from the privacy CA in creating an identity
certificate.

So there are in fact three avenues for FBI et al to go about obtaining
covert access to the closed space formed by TCPA applications: 

(A) get one of the hardware manufacturers to sign an endorsement key
generated outside a TPM (or get the endorsement CA's private key), or

(B) get a widely used and accepted privacy CA to overlook its policy
of demanding a hardware manufacturer CA endorsed endorsement public
key and sign an identity public key created outside of a TPM (or get
the privacy CA's private key), or

(C) create their own privacy CA and persuade an internet server whose
users they wish to investigate to accept it, create themselves a
virtualized client using their own privacy CA, and look inside.


I think to combat problem (C), as a user of a service you'd want the
remote attestation of software state to auditably include its accepted
privacy CA database, so you can see whether there are any strange
privacy CAs on there.

I think you could set up and use your own privacy CA, but you can be
sure the RIAA/MPAA will never trust your CA.  A bit like self-signing
SSL site keys.  If you and your friends add your CA to their trusted
root CA database it'll work.  In this case however people have to
trust your home-brew privacy CA not to issue identity certificates
without having seen a valid hardware-endorsement key if they care
about preventing virtualization for the privacy or security of some
network application.

Also, they seem to take explicit steps to prevent you getting multiple
privacy CA certificates on the same identity key.  (I'm not sure why.)
It seems like a bad thing, as it forces you to trust just one CA, and
it prevents a web of trust, which 

Re: TCPA not virtualizable during ownership change (Re: Overcoming the potential downside of TCPA)

2002-08-15 Thread Adam Back

I think a number of the apparent conflicts go away if you carefully
track the endorsement key pair vs the endorsement certificate (the
signature on the endorsement key by the hw manufacturer).  For
example, it is the endorsement _certificate_ that could be inserted
after ownership has been established (not the endorsement key), so
that apparent conflict goes away.  (I originally thought this
particular one was a conflict also, until I noticed that.)  I see
anonymous found the same thing.

But anyway this extract from the CC PP makes the intention clear, and
an ST based on this PP is what a given TPM will be evaluated against:

http://niap.nist.gov/cc-scheme/PPentries/CCEVS-020016-PP-TPM1_9_4.pdf

p 20:
| The TSF shall restrict the ability to initialize or modify the TSF 
| data: Endorsement Key Pair [...] to the TPM manufacturer or designee.

(if only they could have managed to say that in the spec).

Adam
--
http://www.cypherspace.org/adam/

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



employment market for applied cryptographers?

2002-08-16 Thread Adam Back

On the employment situation... it seems that a lot of applied
cryptographers are currently unemployed (Tim Dierks, Joseph, a few
ex-colleagues, and friends who asked if I had any leads; the spate of
recent security-consultant .sigs; plus I heard that a straw poll of
attendees at the codecon conference earlier this year showed close to
50% out of work).

Are there any more definitive security industry stats?  Are applied
crypto people suffering higher rates of unemployment than general
application programmers?  (From my statistically too small sample of
acquaintances it might appear so.)

If this is so, why is it?

- you might think the physical security push following the political
instability worries after Sep 11th would be accompanied by a
corresponding information security push -- jittery companies improving
their disaster recovery and, to a lesser extent, their info sec
plans.

- governments are still harping on the info-war hype and national
information infrastructure protection, and US Information Security
Czar Clarke is making grandiose pronouncements about how industry
ought to do various things (things the USG spent the last 10 years
doing its best to frustrate industry from doing, with its dumb export
laws)

- even Microsoft has decided to make a play of cleaning up its
security act (you'd wonder if this is in fact cover for Palladium,
which I think is likely a big play for them in terms of future control
points and (anti-)competitive strategy -- as well as, obviously, a
play for the home entertainment system space with DRM)

However these reasons are perhaps more than cancelled by:

- the dot-com bubble (though I saw some news reports earlier saying
that while there is lots of churn among programmers in general,
long-term unemployment rates were not that elevated)

- perhaps security infrastructure and software upgrades are the first
things to be canned when cash runs short?  

- software-security-related contract employees laid off ahead of
full-timers?  Certainly contracting seems to be flat in general, and
crypto software contracts especially look few and far between.  At
least in the UK some security people are employed that way (I'm not
familiar with North America).

- PKI seems to have fizzled compared to earlier exaggerated
expectations; presumably lots of applied crypto jobs went as PKI
companies downsized.  (If you ask me, over-use of ASN.1 and adoption
of the broken, over-complex and ill-defined ITU standards X.500 and
X.509 delayed deployment schedules by an order of magnitude over what
was strictly necessary, contributed to interoperability problems, and
contributed significantly, I think, to the flop of PKI -- if it's that
hard because of the broken tech, people will just do something else.)

- custom crypto and security related software development is perhaps
weighted towards dot-coms that just crashed.

- probably the big one: lack of measurability of security --
developers with little to no crypto know-how are probably doing (and
bodging) most of the crypto development that gets done, certainly
contributing to the crappy state of crypto in software.  So probably a
failure to realise this issue, or perhaps just not caring, or a lack
of financial incentives to care on the part of software developers.
Microsoft is really good at this one.  The number of times they
re-used RC4 keys in different protocols is amazing!


Other explanations?  Statistics?  Sample-of-one stories?

Adam
--
yes, still employed in the software security industry; and in addition have
been doing crypto consulting since 97 (http://www.cypherspace.net/) if
you have any interesting applied crypto projects; reference
commissions paid.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Cryptographic privacy protection in TCPA

2002-08-18 Thread Adam Back

With Brands' digital credentials (or Chaum's credentials) another
approach is to make the endorsement key pair and certificate the
anonymous credential.  That way you can use the endorsement key and
certificate directly rather than having to obtain (blinded) identity
certificates from a privacy CA and trust the privacy CA not to issue
identity certificates without seeing a corresponding endorsement
credential.

However the idea with the identity certificates is that you can use
them once only and keep fetching new ones to get unlinkable anonymity,
or you can re-use them a bit to get pseudonymity, using a different
pseudonym for each service where you are otherwise linkable to that
service anyway.

With Brands' credentials, the smart card setting allows more compact
and computationally cheap control of the credential from within a
smart card, which you could apply to the TPM/SCP.  So you can fit more
(unnamed) pseudonym credentials on the TPM to start with.

You could perhaps more simply rely on Brands' lending-discouraging
feature (the ability to encode arbitrary values in the credential
private key) to prevent break-once-virtualize-anywhere.

For discarding pseudonyms, and when you want to use lots of pseudonyms
(one-use unlinkable) and so need to refresh the certificates, you
could use the refresh protocol, which allows you to exchange a
credential for a new one without trusting the privacy CA for your
privacy.

Unfortunately I think you are again forced to trust the privacy CA not
to create fresh virtualized credentials.  Perhaps there would be some
way to have the privacy CA be a different CA to the endorsement CA
and for the privacy CA to only be able to refresh existing credentials
issued by the endorsement CA, but not to create fresh ones.

Or perhaps some restriction could be placed on what the privacy CA
could do, of the form: if the privacy CA issued new certificates, it
would reveal its private key.

Also relevant is An Efficient System for Non-transferable Anonymous
Credentials with Optional Anonymity Revocation, Jan Camenisch and
Anna Lysyanskaya, Eurocrypt 01

http://eprint.iacr.org/2001/019/

These credentials allow the user to do unlinkable multi-show without
involving a CA.  They are somewhat less efficient than Chaum's or
Brands' credentials though.  But for this application this removes the
need to trust a CA, or even to have a CA: the endorsement key and
credential can be inserted by the manufacturer, used indefinitely many
times, and are not linkable.

 A secondary requirement is for some kind of revocation in the case
 of misuse.

As you point out unlinkable anonymity tends to complicate revocation.

I think Camenisch's optional anonymity revocation has similar
properties in allowing a designated entity to link credentials.

Another less TTP-based approach to unlinkable but revocable
credentials is Stubblebine, Syverson and Goldschlag's Unlinkable
Serial Transactions, ACM Trans. on Info. Systems, 1999:

http://www.stubblebine.com/99tissec-ust.pdf

(It's quite simple: you just have to present and relinquish a previous
pseudonym credential to get a new one; if the credential is due to be
revoked, you will not get a fresh credential.)


I think I would define away the problem of local breaks.  I mean the
end-user does own their own hardware, and if they do break it you
can't detect it anyway.  If it's anything like playstation mod-chips,
some proportion of the population would in fact do this -- maybe
1-5% or whatever.  I think it makes sense to just live with this, and
of course not make it illegal.  Credentials which are shared are
easier to revoke -- knowledge of the private keys typically will
render most schemes linkable and revocable.  This leaves only online
lending, which is anyway harder to prevent.

Adam

On Fri, Aug 16, 2002 at 03:56:09PM -0700, AARG!Anonymous wrote:
 Here are some more thoughts on how cryptography could be used to
 enhance user privacy in a system like TCPA.  Even if the TCPA group
 is not receptive to these proposals, it would be useful to have an
 understanding of the security issues.  And the same issues arise in
 many other kinds of systems which use certificates with some degree
 of anonymity, so the discussion is relevant even beyond TCPA.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: comparing RMAC to AES+CBC-MAC or XCBC (Re: Why is RMAC resistant to birthday attacks?)

2002-10-23 Thread Adam Back
The problem with this one-size-fits-all approach is that for most
applications, given the key size of AES, the extension forgery is
impractical.  It would be more flexible to specify RMAC as having an
optional salt, with the size determined by the implementer as
appropriate for their scenario.

So mostly there would be no salt, as the number of messages required
under the same key to stand a non-negligible chance of finding a
collision would be greater than could possibly be exchanged in the
lifetime of the MAC key.

For longer lived key scenarios, the size of the salt would be chosen
to address the problem.

See for example Rogaway's arguments about limited value of defending
against extension forgery attacks in XCBC:

No Added Resistance to Key-Search Attacks. While other CBC MAC
variants use additional keys to improve resistance to key-search
attacks, what is presented here does not. One can perform an
exhaustive key-search on the MAC presented just as efficiently as on
the underlying AES primitive. But this concern, quite appropriate for
DES, would seem to be moot for AES.

http://csrc.nist.gov/encryption/modes/workshop2/presentations/xcbc.pdf

So RMAC's salt should be _optional_ at all MAC output sizes (contrary
to the parameter sets given in the RMAC draft), and the choice of salt
size should be up to the developer -- for example sizes ranging from 0
to 128 bits in increments of 8 bits -- so they can match the defense
to what makes sense in the context where they are deploying it.

Adam

On Tue, Oct 22, 2002 at 04:07:53PM -0700, Sidney Markowitz wrote:
  The choice of parameter sets is a bit odd.
 
 I think that they are chosen to make the work factors for General
 Forgery and Extension Forgery attacks about the same in any one
 parameter set. It would not make sense to have a parameter set which
 was a lot weaker to one of the attacks than to the other. Look at
 Table 2 to see that is so.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: palladium presentation - anyone going?

2002-10-21 Thread Adam Back
On Sun, Oct 20, 2002 at 10:38:35PM -0400, Arnold G. Reinhold wrote:
 There may be a hole somewhere, but Microsoft is trying hard to get
 it right and Brian seemed quite competent.

It doesn't sound breakable in pure software for the user, so this
forces the user to use some hardware hacking.

They disclaimed explicitly in the talk announce that:

| Palladium is not designed to provide defenses against
| hardware-based attacks that originate from someone in control of the
| local machine.

However I was interested to know exactly how easy it would be to
defeat with simple hardware modifications or reconfiguration.

You might ask: if there is no intent for Palladium to be secure
against the local user, why design it so that the local user has to
resort to (simple) hardware attacks?  Could they not instead just make
these functions available with a user-present test, in the same way
that the TOR and SCP functions can be configured by the user (but not
by hostile software)?

For example, why not a local user-present function to lie about the
TOR hash, to allow debugging?

 Adam Back wrote:
 - isn't it quite weak as someone could send different information to
 the SCP and processor, thereby being able to forge remote attestation
 without having to tamper with the SCP; and hence being able to run
 different TOR, observe trusted agents etc.
 
 There is also a change to the PC memory management to support a 
 trusted bit for memory segments. Programs not in trusted mode can't 
 access trusted memory.

A trusted bit in the segment register doesn't make it particularly
hard to break if you have access to the hardware.

For example you could:

- replace your RAM with dual-ported video RAM (which can be read using
alternate equipment on the 2nd port).

- just keep RAM powered-up through a reboot so that you load a new TOR
which lets you read the RAM.

 Also there will be three additional x86 instructions (in microcode)
 to support secure boot of the trusted kernel and present a SHA1 hash
 of the kernel code in a read only register.  

But how will the SCP know that the hash it reads comes from the
processor (as opposed to being forged by the user)?  Is there any
authenticated communication between the processor and the SCP?

Adam
--
http://www.cypherspace.net/

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Why is RMAC resistant to birthday attacks?

2002-10-21 Thread Adam Back
I think they are presuming there will be no encryption, so Eve can
verify collisions by observing the MAC values.  Eve just records
messages and their MACs that Alice sends Bob.  They are also presuming
exceedingly long-lived MAC keys.  (If you changed keys, the collection
of messages would have to start over.)  The optional salt ensures that
K3 (the key used to do the final encryption of the CBC-MAC computed
using K1) is different even if the same MAC keys are used
indefinitely.  (K3 = K2 xor salt.)

Note also in A.3 they are talking about a full collision rather than
just an equal MAC.  If the MAC is truncated (m < b), then you can have
equal truncated MACs but different untruncated MACs.

So the full collision looks like:

MAC(x) = MAC(x')         (1)

and they then observe that for RMAC (and many other MACs), given (1):

MAC(x||y) = MAC(x'||y)   (2)

and (2) means that if an attacker can get MAC(x||y) he automatically
has MAC(x'||y) for all values of y he can induce Alice into MACing, as
they have the same full MACs (and truncated MACs).
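
As a toy illustration of the step from (1) to (2), here is a Python
sketch of raw CBC-MAC over full blocks (using the pyca/cryptography
library; with a known key we can construct a colliding pair directly).
This demonstrates the structural property, not RMAC itself:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)

def ecb(block):
    # one-block AES encryption, the CBC-MAC round function
    e = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return e.update(block) + e.finalize()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_mac(msg):
    # raw CBC-MAC, zero IV, message length a multiple of 16 bytes
    state = bytes(16)
    for i in range(0, len(msg), 16):
        state = ecb(xor(state, msg[i:i+16]))
    return state

a, b = os.urandom(16), os.urandom(16)
a2 = os.urandom(16)
b2 = xor(xor(ecb(a2), ecb(a)), b)          # forces equal internal CBC state
x, x2 = a + b, a2 + b2
assert cbc_mac(x) == cbc_mac(x2)           # (1): MAC(x) == MAC(x')
y = os.urandom(32)
assert cbc_mac(x + y) == cbc_mac(x2 + y)   # (2): collision survives suffix y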

This leads to the comment that:

(from A.3):
| Moreover, if a parameter set is chosen in which m < b, i.e., if
| CIPH_K3(O_n) is truncated to produce the MAC, then the discarded bits
| may be difficult for an unauthorized party to determine, so collisions
| may be difficult to detect.

which means that if the MAC is truncated it could surprisingly be
stronger (against this attack anyway), because the attacker can't
distinguish a truncated MAC collision from a full MAC collision, as he
only sees the truncated MACs.  Truncated MAC collisions are probably
still useful to the attacker: he can swap the messages and fool the
verifier.  But full MAC collisions allow the attacker -- presuming he
passively sees, or can actively persuade Alice to compute, MAC(x||y)
for multiple y values -- to re-use the work of finding the full MAC
collision, subject to that limitation.

Adam
--

[EMAIL PROTECTED] wrote:
 So Eve wants to convince Bob that a message really is from
 Alice. What does Eve do?  Does Eve somehow entice Alice to send
 ~sqrt(2^n) messages to Bob? How does the birthday attack come into
 play when the attacker cannot independently test potential
 collisions?

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



comparing RMAC to AES+CBC-MAC or XCBC (Re: Why is RMAC resistant to birthday attacks?)

2002-10-22 Thread Adam Back
But the salt doesn't increase the MAC length.  It just frustrates
attempts to collect message+MAC pairs to find a collision.  Think of
it like a salt on unix passwords.  You can still brute force one
particular target in the same time, but the salt reduces your scope
for pre-computation.

There is still probability 1/2^m of finding a collision given two
random messages, whether the salt has size 0 or 64.

Note that the salt is optional.  They list parameter sets I through V.
Parameter sets II through V are considered safe for general use.
Parameter set II has salt size 0.

Parameter set I is considered only safe for applications where only a
limited number of messages could be sent.  This is more a function of
the small MAC size (32 bits) I think than the fact that the salt size
is 0 for parameter set I.

I would have thought that unless the keys can essentially never change
(for example burnt into hardware) the salt option is of limited
practical use.

The choice of parameter sets is a bit odd.  For example there are no
0-size salts for MAC outputs over 64 bits, while there are for the
smaller MAC outputs -- and yet you would think the smaller MAC outputs
are more in need of the salt, as finding a collision is more
realistically achievable.  Collecting 2^64 messages (for parameter set
V) seems already quite theoretical for many applications, even without
adding a 128-bit salt; yet collecting 2^32 messages (parameter set II)
seems much more plausible, and there is no salt defined at that
parameter set.

Given the definition of the parameter sets, I suspect people will
interpret the standard as meaning they must use one of the listed
parameter sets and can't define their own; at least most
implementations will tend to do that.  Would it not be simpler to do
away with the salt and parameter sets, describe the collision problem,
and note that minimally K2 should be changed (however the application
may decide to arrange this) frequently enough to avoid a
non-negligible risk of collisions being obtainable by an attacker?
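
For a rough sense of the numbers behind these parameter sets, the
birthday bound can be computed directly.  A Python back-of-the-envelope
(it assumes collisions are sought on the full b-bit internal value):

import math

def msgs_for_collision(bits, prob=0.5):
    # birthday bound: n ~ sqrt(2 * 2^bits * ln(1/(1-prob)))
    return math.sqrt(2 * 2**bits * math.log(1 / (1 - prob)))

print(f"{msgs_for_collision(64):.2e}")   # ~5.1e09, ie ~2^32 for a 64-bit value
print(f"{msgs_for_collision(128):.2e}")  # ~2.2e19, ie ~2^64 for a 128-bit value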

If the salt is removed / ignored, RMAC is essentially the same as
CBC-MAC but just defined for use with AES (rather than just DES), so
providing more security due to larger block size (and key size).

The one difference, which is an incremental improvement over raw
CBC-MAC, is that the final CBC-MAC-like output is encrypted with a
2nd key, K3 (defined as K2 xor salt, with K2 an independent key).

However, for example, Rogaway and Black's XCBC is simpler, more
efficient (not requiring a key schedule for each salt change) and
equally deals with variable-length messages:

http://csrc.nist.gov/encryption/modes/proposedmodes/xcbc-mac/xcbc-mac-spec.pdf

The protection against collisions is of limited practical value, and I
think better left out of the standard.

Adam

On Tue, Oct 22, 2002 at 01:52:18PM -0700, Sidney Markowitz wrote:
 [EMAIL PROTECTED]
  I want to understand the assumptions (threat models) behind the
  work factor estimates. Does the above look right?
 
 I just realized something about the salt in the RMAC algorithm,
 although it may have been obvious to everyone else:

 RMAC is equivalent to a HMAC hash-based MAC algorithm, but using a
 block cipher. The paper states that it is for use instead of HMAC
 iin circumstances where for some reason it is easier to use a block
 cipher than a cryptographic hash.

 The security of HMAC against attacks based on collisions is measured
 as a function of the bit length of the hash. Using a block cipher in
 CBC mode makes it in effect a b bit hash, where b is the block
 length of the cipher. In many cases the block length of a cipher
 being 64 or 128 bits will be too small by itself. Hence the need to
 add r bits from the salt and the need to write up explicitly how
 RMAC handles collision based attacks and how the salt affects that.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: comparing RMAC to AES+CBC-MAC or XCBC (Re: Why is RMAC resistant to birthday attacks?)

2002-10-24 Thread Adam Back
On Thu, Oct 24, 2002 at 02:08:11AM -0700, Sidney Markowitz wrote:
 [...] XCBC should be inherently resistant to extension forgery
 attacks. The attack requires that the MAC have the property that
 MAC(x) == MAC(y) implies that MAC(x||z) == MAC(y||z). In the case of
 XCBC, because of the padding and the use of K2 and K3 that would
 only be true when x and y are the same length or both have lengths
 that are multiples of the cipher block size.

The pre-conditions you give are a little over-restrictive, but yes,
there are limitations due to the structure of XCBC.  However, provided
the pre-conditions are met -- and they don't seem that implausible --
the extension forgery attacks are possible, so I wouldn't say XCBC is
inherently resistant to extension forgery.

 I agree with your conclusion [...]
 
 In the case of RMAC, if the parameter sets were chosen to make the
 work factors comparable on the two attacks, I think it is making the
 mistake of comparing apples and oranges: In the exhaustive key
 search attack, the attackers captures one message and the work
 factor is multiplied times the time it takes to try a key on their
 own computers. In the extension forgery attack the work factor is
 multiplied by the time between captured messages. The latter is
 somewhat under the control of the person who is using RMAC. There is
 no reason to require that they have similar work factors if the
 scale is much different.

Yes.  Perhaps I/someone should submit my comment to them before the
deadline.  If the RMAC parameter sets were interpreted strictly they
would be quite inconvenient and inflexible for the protocol designer.

Adam
--
http://www.cypherspace.net/

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



palladium presentation - anyone going?

2002-10-17 Thread Adam Back
Would someone at MIT / in Boston area like to go to this and send a
report to the list?  Might help clear up some of the currently
unexplained aspects about Palladium, such as:

- why they think it couldn't be used to protect software copyright (as
the subject of Lucky's patent)

- are there plans to move SCP functions into the processor?  any
relation to Intel LaGrande

- isn't it quite weak as someone could send different information to
the SCP and processor, thereby being able to forge remote attestation
without having to tamper with the SCP; and hence being able to run
different TOR, observe trusted agents etc.

I notice at the bottom of the talk invite it says 

| Palladium is not designed to provide defenses against
| hardware-based attacks that originate from someone in control of the
| local machine.

but in this case how does it achieve BORA prevention?  Is it BORA
prevention _presuming_ the local user is not interested in
reconfiguring his own hardware?

Will it really make any significant difference to DRM enforcement
rates?  Wouldn't the subset of the file-sharing community who produce
DVD rips still produce Pd DRM rips, if the only protection is the
assumption that the user won't make simple hardware modifications?

Adam

 Original Message 
Subject: LCS/CIS Talk, OCT 18, TOMORROW
Date: Thu, 17 Oct 2002 12:49:01 -0400
From: Be Blackburn [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
CC: [EMAIL PROTECTED]


Open to the Public

Date: Friday, Oct 18, 2002 
Time: 10:30 a.m.- 12:00 noon 
Place:NOTE: NE43-518, 200 Tech Square 
Title:Palladium
Speaker:  Brian LaMacchia, Microsoft Corp.
Hosts:Ron Rivest and Hal Abelson

Abstract: 

This talk will present a technical overview of the Microsoft
Palladium Initiative.  The Palladium code name refers to a set of
hardware and software security features currently under development
for a future version of the Windows operating system.  Palladium
adds four categories of security services to today's PCs:

  a. Curtained memory. The ability to wall off and hide pages of main
memory so that each Palladium application can be assured that it is
not modified or observed by any other application or even the
operating system.

  b. Attestation. The ability for a piece of code to digitally sign
or otherwise attest to a piece of data and further assure the
signature recipient that the data was constructed by an unforgeable,
cryptographically identified software stack.

  c. Sealed storage. The ability to securely store information so
that a Palladium application or module can mandate that the
information be accessible only to itself or to a set of other trusted
components that can be identified in a cryptographically secure
manner.

  d. Secure input and output. A secure path from the keyboard and
mouse to Palladium applications, and a secure path from Palladium
applications to an identifiable region of the screen.

Together, these features provide a parallel execution environment to
the traditional kernel- and user-mode stacks.  The goal of
Palladium is to help protect software from software; that is, to
provide a set of features and services that a software application can
use to defend against malicious software also running on the machine
(viruses running in the main operating system, keyboard sniffers,
frame grabbers, etc).  Palladium is not designed to provide defenses
against hardware-based attacks that originate from someone in control
of the local machine.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Palladium -- trivially weak in hw but secure in software?? (Re: palladium presentation - anyone going?)

2002-10-22 Thread Adam Back
Remote attestation does indeed require Palladium to be secure against
the local user.  

However my point is that while they seem to have done a good job of
providing software security for the remote attestation function, it
seems at this point that the hardware security is laughable.

So they disclaim in the talk announce that Palladium is not intended
to be secure against hardware attacks:

| Palladium is not designed to provide defenses against
| hardware-based attacks that originate from someone in control of the
| local machine.

so one can't criticise the implementation of their threat model -- it
indeed isn't secure against hardware based attacks.

But I'm questioning the validity of the threat model as a realistic
and sensible balance of practical security defenses.

Providing almost no hardware defenses while going to extraordinary
efforts to provide top-notch software defenses doesn't make sense if
the machine owner is a threat.

The remote attestation function clearly is defined from the view that
the owner is a threat.

Without specifics and some knowledge of hardware hacking we can't
quantify, but I suspect that hacking it would be pretty easy.  Perhaps
no soldering, $50 equipment and simple instructions anyone could
follow.

more inline below...

On Mon, Oct 21, 2002 at 09:36:09PM -0400, Arnold G. Reinhold wrote:
 [about improving palladium hw security...] Memory expansion could be
 dealt with by finding a way to give Palladium preferred access to
 the first block of physical memory that is soldered on the mother
 board.

I think standard memory could be used.  I can think of simple
processor modifications that could fix this problem, with hardware
tamper-resistance assurance to the level of having to tamper with a
.13 micron processor.  The processor could for example be epoxied
inside a cartridge (like the cartridge-style processor + L2 cache
housings used by some Intel Pentium-class processors), though probably
having to tamper with a modern processor is plenty hard enough to
match software security, given software complexity issues.

Adam
--
http://www.cypherspace.net/

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: patent free(?) anonymous credential system pre-print

2002-10-30 Thread Adam Back
Some comments on this paper, comparing efficiency and functionality
with Camenisch, Chaum, and Brands.

On Tue, Oct 29, 2002 at 11:49:21PM +, Jason Holt wrote:
 http://eprint.iacr.org/2002/151/
 
 It mentions how to use the blinding technique Ben Laurie describes
 in his Lucre paper, which I don't think has been mentioned in the
 formal literature, and also describes what I call a non-interactive
 cut and choose protocol which is new AFAICT.  Thanks again!

- efficiency

The non-interactive cut and choose protocol results in quite big
messages in the issuing and showing protocols to attain good security.
The user who wishes to cheat must create n/2 false attributes and n/2
true attributes (the true attributes being the ones he will try to
convince the CA are encoded in all the attributes).  The user can in
an offline fashion keep trying different combinations of false and
true attributes until he finds one where the attributes selected for
disclosure during issuing are the n/2 true attributes.  Then in the
showing protocol he can show the n/2 false attributes.

But C(n,n/2) grows somewhat more slowly than 2^n, so the user has to
for example encode 132 blinded hashed attributes to provide assurance
of a 2^128 work factor to the CA (C(132,66) ~ 2^128).  Without looking
in detail at what must be sent, I presume each issuing message for a
single credential would be on the order of 10KB; similar for the
showing protocol.
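
A quick sanity check of that estimate in Python (math.comb requires
Python 3.8+):

import math
print(math.log2(math.comb(132, 66)))   # ~128.15, so C(132,66) ~ 2^128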

Computational efficiency is probably still better than Camenisch
credentials despite the number of attribute copies which must be
blinded and unblinded, but of course less efficient than Brands.

- functionality

The credentials have a relatively inefficient cut-and-choose based
issuing and showing protocol.  Brands has efficient issuing protocols
which support offline showing.  Chaum's basic offline credentials are
based on interactive cut-and-choose, but there is an efficient
variant [1].

As with Brands' and Chaum's certificates, if they are shown multiple
times they are linkable.  (Camenisch offers unlinkable multi-show, but
those credentials are quite inefficient.)

The credentials can be replayed (as there is no credential private
key, a trace of a credential show offers no defense against replay).
Brands credentials have a private key so they can defend against this.
(Chaum's credentials have the same problem).

The credentials unavoidably leave the verifier with a transferable
signed trace of the transaction.  Brands credentials offer a
zero-knowledge option where the verifier can not transfer any
information about what he was shown.

The credentials support selective disclosure of attributes, but only
in a restricted sense.  Attributes can be disclosed with AND
connectives.  However other connectives (OR, +, -, negation, and
formulae) are not directly possible.  Brands supports all of these.

The credentials do not support lending deterrence (there is no option
to have a secret associated with a credential that must necessarily be
revealed to lend the credential, as with Brands).

The credentials are not suitable for offline use because they offer no
possibility for a secret (such as user identity, account number etc)
to be revealed if the user shows the credential more times than
allowed.

Most of these shortfalls stem from analogous shortfalls in the Wagner
blinding method they are based on.  Of course (and this is the point
of the paper) the credentials do offer, over the base Wagner
credentials, a (restricted) form of selective disclosure which the
base credentials do not.

On citations:

 I've submitted a pre-print of my anonymous credential system to the
 IACR ePrint server.  Thanks to all of you who responded to the
 questions I posted here while working on it.  I'd love to hear
 feedback from any and all before I sumbit it for publication;
 particularly, I want to make sure I haven't forgotten to give proper
 attribution for any previous work.

Brands discusses the salted-hash form of selective disclosure in his
book [2]; you might want to cite that.  He includes a related earlier
reference also.  I too reinvented the same technique before becoming
aware of the Brands reference -- it seems like an obvious construction
for a limited hashing-based form of selective disclosure.
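
For concreteness, a minimal Python sketch of that salted-hash
construct (the attribute encodings are my own illustration, and the
CA's signature over the commitments is omitted):

import os, hashlib

attrs = [b"age=35", b"country=UK", b"member=acm"]
salts = [os.urandom(16) for _ in attrs]
commitments = [hashlib.sha256(s + a).digest() for s, a in zip(salts, attrs)]
# the CA would sign the list of commitments here (signature omitted)

# to selectively disclose attribute i, reveal (salt, value); the verifier
# recomputes the hash and checks it against the signed commitment
i = 1
assert hashlib.sha256(salts[i] + attrs[i]).digest() == commitments[i]
# undisclosed attributes stay hidden behind their salted hashes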

Adam
--
[1] Niels Ferguson, Single Term Off-Line Coins, eurocrypt 93.

[2] Stefan Brands, Rethinking Public Key Infrastructures and Digital
Certificates; Building in Privacy, MIT Press, Aug 2000

viz p27: Another attempt to protect privacy is for the CA to
digitally sign (salted) oneway hashes of attributes, instead of (the
concatenation of) the attributes themselves. When transacting or
communicating with a verifier, the certificate holder can selectively
disclose only those attributes needed.  Lamport [244] proposed this
hashing construct in the context of one-time signatures.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



deadbeef attack was choose low order RSA bits (Re: Key Pair Agreement?)

2003-01-21 Thread Adam Back
On Mon, Jan 20, 2003 at 09:08:31PM -0500, Radia Perlman wrote:
 [...] I was going to suggest something similar to what David Wagner
 suggested, but with Scott telling Alice the modulus size and the
 *high* order 64 bits (with the top bit constrained to be 1). I can
 see how Alice can easily generate two primes whose product will have
 that *high* order part, but it seems hard to generate an RSA modulus
 with a specific *low* order 64 bits.

One cheap way the low order 64 bits can be set is to set the low order
bits of p to the target bit string and the low order bits of q to
...0001 (63 0s and one 1 in binary, ie q = 1 mod 2^64), and then to
increase the stride of candidate values in the prime sieve to eg 2^64.

This was the method used in the deadbeef attack on pgp 2.x keyids.
pgp 2.x uses the least significant 64 bits of the modulus as a keyid,
and being able to easily generate keys with the same keyid can be a
nuisance for keyservers and databases, and could possibly confuse a
user who incorrectly thought keyids were unique.

(The intended unique id value is the pgp fingerprint, but in 2.x this
had problems, see:

http://216.239.53.100/search?q=cache:BcFAtn404nAC:cypherpunks.venona.com/date/1997/06/msg00523.html+dead+fingerprint+%22adam+back%22&hl=en&ie=UTF-8

).
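
A sketch of the trick in Python (illustrative assumptions on my part:
a toy Miller-Rabin test, 512-bit primes, and no attention to the other
properties an RSA prime should have):

import random

def probably_prime(n, rounds=40):
    # Miller-Rabin with trial division by small primes
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def prime_with_low64(low64, bits=512):
    # every candidate is = low64 (mod 2^64): sieve with stride 2^64
    p = (1 << (bits - 1)) | (random.getrandbits(bits - 65) << 64) | low64
    while not probably_prime(p):
        p += 1 << 64
    return p

target = 0xdeadbeefdeadbeef        # odd, so it can be a prime's low bits
p = prime_with_low64(target)
q = prime_with_low64(1)            # q = ...0001 (mod 2^64)
n = p * q
assert n % 2**64 == target         # keyid of n is 0xdeadbeefdeadbeef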

Adam

 As for Jack Lloyd's solution...I was also thinking
 of something based on Diffie-Hellman, and was going
 to suggest that Scott supply the prime p. I'd have
 had Scott generate p (as with PDM). If Alice also
 needs assurance that p isn't funny somehow, then she
 could specify the high order bits of p to Scott, or
 Scott could provide the seed to Alice from which he
 generated p. But that would force Alice to do a lot
 of work. I sort of like making it cheap for Alice,
 and making Scott, who is making Alice jump
 through hoops for no discernible reason, do a lot
 of work.
 
 Radia
 
 
 David Wagner [EMAIL PROTECTED] wrote:
 Jeroen C. van Gelderen wrote:
 Here is a scenario: Scott wants Alice to generate a key pair after 
 which he will receive Alice's public key. At the same time, Scott wants 
 to make sure that this key pair is newly generated (has not been used 
 before).
 
 You might be able to have Scott specify a 64-bit string, and then ask
 Alice to come up with a RSA public key that has this string as its low
 64 bits.  I believe it is straightforward to modify the RSA key generation
 algorithm to generate keypairs of the desired form.
 
 If you're worried about the security of allowing Scott to choose the
 low bits of Alice's public key, you could have Scott and Alice perform
 a joint coin-flipping protocol to select a random 64-bit string that
 neither can control, then proceed as before.
 
 I haven't worked out all the details, but something like this might
 be workable.
 
 In practice, you might also want to confirm that Alice knows her private
 key (i.e., has ability to decrypt messages encrypted under her public
 key).
 
 -
 The Cryptography Mailing List
 Unsubscribe by sending unsubscribe cryptography to
 [EMAIL PROTECTED]
 
 
 
 -
 The Cryptography Mailing List
 Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: deadbeef attack was choose low order RSA bits (Re: Key Pair Agreement?)

2003-01-23 Thread Adam Back
On Wed, Jan 22, 2003 at 03:18:34PM +1300, Peter Gutmann wrote:
 One cheap way the low order 64 bits can be set is to set the low order bits
 of p to the target bitset and the low order bits of q to ...1 (63 0s and
 one 1 in binary), and then to increase the stride of candidate values in the
 prime sieve to be eg 2^64.
 
 That way's trivially detectable by inspection of the private key
 [...].  More challenging though are ways of embedding a fixed
 pattern that isn't (easily) detectable, 

An alternate method which doesn't leave such an obvious pattern in the
private key would be to find a factorization of x, the target string,
other than ...0001 times x: use p' and q', equal-length factors of
x = p'.q'.  Or if there aren't any, use equal-length factorizations of
r||x, where r is some number of random bits.

Adam

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



password based key-wrap (Re: The Crypto Gardening Guide and Planting Tips)

2003-02-07 Thread Adam Back
Peter lists applied crypto problems in his Crypto Gardening Guide at:

http://www.cs.auckland.ac.nz/~pgut001/pubs/crypto_guide.txt

One of the problems from the Problems that Need Solving section is:

 * A key wrap function where the wrapping key is derived from a
 password.  The requirements for this are subtly different from a
 straight symmetric key wrap in that the threat model is rather
 different.  For example a symmetric key wrap may use HMAC to ensure
 non-malleability, but for password-based key wrap this makes a
 dictionary attack rather easier (throw passwords at the HMAC,
 sidestepping the encrypted key altogether).  There exists a (ad-hoc)
 design that has rather limited non-malleability in order to avoid
 potential dictionary attacks.

I may not be fully understanding the problem spec: you want to encrypt
(wrap) a randomly generated key (a per message session key for
example) with a key derived from a password.

What would be wrong with using PBKDF2 (from PKCS #5 / RFC 2898) as the
key derivation function to give you defense against dictionary attack?
(It allows a choice of the number of iterations to stretch the
password, and a salt to frustrate precomputation.)

Why do you care about non-malleability of the key-wrap function?

If you do want non-malleability of the key-wrap function, isn't
encrypt and MAC a standard way to do this?

Then you would need two keys, and I presume it would make sense to
derive them (using KDF2 from IEEE P1363a) from a start key:

sk = KDF2( password, salt, iterations )
ek = KDF( sk, specialization1 )
mk = KDF( sk, specialization2 )

and then use AES in CBC mode with a random IV, encrypting with ek,
with an appended HMAC keyed with mk.
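
As a concrete sketch of that ensemble in Python -- with PBKDF2
standing in for the password KDF, and an HMAC-based derivation
standing in for KDF2, both substitutions being assumptions of mine:

import os, hmac, hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def derive_keys(password, salt, iterations=100_000):
    # sk from the password, then domain-separated subkeys ek and mk
    sk = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    ek = hmac.new(sk, b"specialization1", hashlib.sha256).digest()
    mk = hmac.new(sk, b"specialization2", hashlib.sha256).digest()
    return ek, mk

def wrap(password, payload_key, salt):
    ek, mk = derive_keys(password, salt)
    iv = os.urandom(16)
    enc = Cipher(algorithms.AES(ek), modes.CBC(iv)).encryptor()
    ct = iv + enc.update(payload_key) + enc.finalize()  # 32-byte key: no padding
    tag = hmac.new(mk, ct, hashlib.sha256).digest()     # encrypt-then-MAC
    return salt + ct + tag

def unwrap(password, blob):
    salt, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    ek, mk = derive_keys(password, salt)
    if not hmac.compare_digest(tag, hmac.new(mk, ct, hashlib.sha256).digest()):
        raise ValueError("wrong password or tampered blob")
    dec = Cipher(algorithms.AES(ek), modes.CBC(ct[:16])).decryptor()
    return dec.update(ct[16:]) + dec.finalize()

blob = wrap(b"correct horse", os.urandom(32), os.urandom(16))
assert len(unwrap(b"correct horse", blob)) == 32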

That leaves the comment:

 but for password-based key wrap this makes a dictionary attack
 rather easier (throw passwords at the HMAC, sidestepping the
 encrypted key altogether).

but in this case the attacker could take his pick, with no significant
advantage to either method:

- brute force passwords to get sk, derive ek from sk, decrypt the
wrapped key, and use some knowledge about the plaintext encrypted with
the wrapped key to tell if the right password was chosen; or

- brute force passwords to get sk, derive mk from sk, and see if the
MAC is a valid MAC of the ciphertext (presuming encrypt-then-MAC)

Or is the problem that the above ensemble is ad hoc (though using
standardised constructs)?  Or just that the ensemble is ad hoc, and so
everyone will be forced to re-invent minor variations of it, with
varying degrees of security?

Adam

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: NSA being used to influence UN votes on Iraq

2003-03-05 Thread Adam Back
Why are US secret service eavesdropping and dirty tricks against UN
votes on Iraq newsworthy?

Because they are an attempt to pervert the political process, and to
sabotage the political representation of other UN member countries.

I'm sure it would take a little more than the delegations bothering to
protect their comms; there is plenty of room for physical bugs, black
bag jobs, political bribery, and even potentially individual
blackmail, whatever crypto the delegates may be using.

Adam

On Sun, Mar 02, 2003 at 01:49:53PM -0500, John Ioannidis wrote:
 Why is this even newsworthy?  It's the NSA's responsibility to provide
 sigint and comint.  Furthermore, if the delegates are not US citizens,
 and at least one end of the communication is outside the US, they are
 not even breaking any laws in doing so.
 
 If the delegations can't be bothered to protect their own
 communications, it's their tough luck if they get intercepted.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]