[Cryptography] please don't weaken pre-image resistance of SHA3 (Re: NIST about to weaken SHA3?)

2013-10-14 Thread Adam Back

On Tue, Oct 01, 2013 at 12:47:56PM -0400, John Kelsey wrote:

The actual technical question is whether an across the board 128 bit
security level is sufficient for a hash function with a 256 bit output. 
This weakens the proposed SHA3-256 relative to SHA256 in preimage
resistance, where SHA256 is expected to provide 256 bits of preimage
resistance.  If you think that 256 bit hash functions (which are normally
used to achieve a 128 bit security level) should guarantee 256 bits of
preimage resistance, then you should oppose the plan to reduce the
capacity to 256 bits.  


I think hash functions clearly should try to offer full (256-bit) preimage
security, not dumb it down to match 128-bit birthday collision resistance.

All other common hash functions have tried to offer full preimage
security, so varying an otherwise standard assumption will lead to
design confusion.  It will probably have bad interactions with many
existing KDF, MAC, merkle-tree designs, combined cipher+integrity
modes, and hashcash (partial preimage as used in bitcoin as a proof of
work): constructions that are designed in a generic way around a hash
as a building block, and that assume the hash has full-length
pre-image protection.  Maybe some of those generic designs survive
because they compose multiple iterations, eg HMAC, but why create the
work and risk of having to analyse them all, remove them from
implementations, or mark them as safe for all hashes except SHA3 as an
exception.

If MD5 had 64-bit preimage resistance, we'd be looking at preimages
right now being expensive but computable.  Bitcoin is pushing a 60-bit
hashcash-sha256 partial preimage every 10 minutes (1.7 petahash/sec
network hashrate).
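
For illustration, a minimal hashcash-style partial-preimage search in
Python (the challenge string and difficulty here are hypothetical, and
bitcoin's real proof of work double-hashes a structured block header,
so this is a sketch of the principle only):

    import hashlib
    from itertools import count

    def hashcash(challenge: bytes, bits: int) -> int:
        # Find a counter such that SHA-256(challenge || counter) has
        # `bits` leading zero bits -- ie a bits-bit partial preimage.
        target = 1 << (256 - bits)      # hash must fall below this bound
        for counter in count():
            h = hashlib.sha256(challenge + str(counter).encode()).digest()
            if int.from_bytes(h, "big") < target:
                return counter

    # A 20-bit partial preimage costs ~2^20 hashes (moments on one CPU);
    # the network figure above is ~2^60 of this work every 10 minutes.
    print(hashcash(b"example-challenge", 20))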

Now obviously 128 bits is another scale, but MD5 is old and broken, and
there may be partial weakenings along the way.  eg say the design aim
of 128 slips towards 80 (in another couple of decades of computing
progress).  Why design in a problem for the future when we KNOW, having
just spent a huge thread on this list discussing it, that it's very
hard to remove or upgrade algorithms in deployment.  Even MD5 is still
in the field.

Is there a clear work-around proposed for when you do need 256?  (Some
composition mode or parameter tweak as part of the spec?)  And
generally, where does one go to add one's vote to the protest against
weakening the 2nd-preimage property?

Adam


[Cryptography] was this FIPS 186-1 (first DSA) an attempted NSA backdoor?

2013-10-10 Thread Adam Back

Some may remember that Bleichenbacher found a random number generator
bias in the original DSA spec that could leak the key after some number
of signatures, depending on the circumstances.

It's described in this summary of DSA issues by Vaudenay, "Evaluation
Report on DSA":

http://www.ipa.go.jp/security/enc/CRYPTREC/fy15/doc/1002_reportDSA.pdf

Bleichenbacher's attack is described in section 5.


The conclusion is: "Bleichenbacher estimates that the attack would be
practical for a non-negligible fraction of qs, with a time complexity
of 2^63, a space complexity of 2^40, and a collection of 2^22
signatures.  We believe the attack can still be made more efficient."

NIST reacted by issuing special publication SP 800-xx to address it,
and I presume that was folded into FIPS 186-3.  Of course NIST is down
due to the USG political-level stupidity (why take the extra work to
switch off the web server on the way out, I don't know).

That means 186-1 and 186-2 were vulnerable.

An even older NSA sabotage spotted by Bleichenbacher?

Anyway it highlights the significant design fragility in DSA/ECDSA:
not just in the entropy of the secret key, but in the generation of
each and every k value.  That leads to the better (but
non-NIST-recommended) idea, adopted by various libraries and applied
crypto people, of using k=H(m,d) so that the signature is in fact
deterministic, and the same k value will only ever be used with the
same message (which is harmless, as that's just reissuing the
bitwise-identical signature).


What happens if a VM is rolled back, including the RNG, and it outputs
the same k value for a different network-dependent m value?  etc.  It's
just unnecessarily fragile in its NIST/NSA-mandated form.
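
To make the fragility concrete, here is a sketch (Python, toy numbers)
of the textbook failure that the deterministic-k construction prevents:
two DSA signatures sharing a k value leak both k and the private key x,
since s = k^-1 (h + x*r) mod q.  The values below are hypothetical,
generated with x=7 and k=5 in a toy subgroup of order q=11:

    def recover_key(q, r, h1, s1, h2, s2):
        k = (h1 - h2) * pow(s1 - s2, -1, q) % q  # k = (h1-h2)/(s1-s2) mod q
        x = (s1 * k - h1) * pow(r, -1, q) % q    # from s1 = k^-1 (h1 + x*r)
        return k, x

    q, r = 11, 9          # equal r values in both signatures betray equal k
    print(recover_key(q, r, 4, 9, 6, 5))   # -> (5, 7): recovers k and x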

Adam


[Cryptography] are ECDSA curves provably not cooked? (Re: RSA equivalent key length/strength)

2013-10-01 Thread Adam Back

On Mon, Sep 30, 2013 at 06:35:24PM -0400, John Kelsey wrote:

Having read the mail you linked to, it doesn't say the curves weren't
generated according to the claimed procedure.  Instead, it repeats Dan
Bernstein's comment that the seed looks random, and that this would have
allowed NSA to generate lots of curves till they found a bad one.


That is itself a problem: the curves are, in fact, not fully
verifiably fairly chosen.  Our current inability to design a plausible
mechanism by which this could have been done is not proof that it was
not done.  Also bear in mind that unlike the NSA, the crypto community
has focused more on good faith (how to make things secure) and less on
bad faith (how to make things trapdoor-insecure while providing
somewhat plausible evidence that no sabotage took place).  Ie we didn't
spend as much effort examining that problem.  Now that we have a reason
to examine it, maybe such methods can be found.  Kleptography is, for
the open community, a less explored field of study.


Conversely, it would have been easy to prove that the curve parameters
WERE fairly chosen.  Greg Maxwell described his surprise that the seed
was big and random-looking:


Considering the stated purpose I would have expected the seed to be
some small value like … “6F” and for all smaller values to fail the
test. Anything else would have suggested that they tested a large
number of values, and thus the parameters could embody any
undisclosed mathematical characteristic whos rareness is only
bounded by how many times they could run sha1 and test.

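The difference is easy to see in a sketch (Python; passes_test is a
hypothetical stand-in for the real curve-acceptance criteria, eg group
order primality and embedding degree checks):

    import hashlib

    def passes_test(params: bytes) -> bool:
        # Stand-in acceptance test; pretend ~1/256 of seeds pass.
        return params[0] == 0

    def first_acceptable_seed() -> int:
        # What Maxwell describes: count up from 0 and take the FIRST
        # seed whose derived parameters pass.  Verifiable: there was no
        # room to grind for a secretly weak curve.
        seed = 0
        while not passes_test(hashlib.sha1(seed.to_bytes(8, "big")).digest()):
            seed += 1
        return seed

    # With a big random-looking seed instead, the generator could have
    # tried astronomically many seeds and kept one whose parameters land
    # in a weak class known only to it -- and no one could tell.
    print(first_acceptable_seed())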

So the question is rather: why on earth, if they claim good faith, did
they not do that?  Another plausible explanation, which Greg also
mentions, is that perhaps it was more about protecting the then-secrecy
of the knowledge, eg of weak curves, and avoiding them without
admitting the rules for which curves they knew to be weak.


Clearly it's easier to weaken a system in a symmetric way that depends
only on analysis (ie when someone else figures out the class of weak
curves they gain the advantage also, and if it's public then everyone
suffers), vs a true trapdoor weakening, as in the EC DRBG fiasco.

So if that is their excuse - that due to an institutional mentality of
secrecy the only NSA input one can get is hardening with undisclosed
rationale - I think we'd sooner forgo their input and have fully open,
verifiable reasoning.  Eg maybe they could still prove good faith if
they chose to disclose their logic (which may now be public information
anyway) and the actual seed and the algorithm that rejected all
iterations below the used value.  However that depends on the real
algorithm: maybe there is no way to prove it, if the real seed was
itself random.

But I do think it is a very interesting and pressing research question as to
whether there are ways to plausibly deniably symmetrically weaken or even
trapdoor weaken DL curve parameters, when the seeds are allowed to look
random as the DSA FIPS 186-3 ones do.

Adam

Re: [Cryptography] are ECDSA curves provably not cooked? (Re: RSA equivalent key length/strength)

2013-10-01 Thread Adam Back

On Tue, Oct 01, 2013 at 08:47:49AM -0700, Tony Arcieri wrote:

  On Tue, Oct 1, 2013 at 3:08 AM, Adam Back [1]a...@cypherspace.org
  wrote:

But I do think it is a very interesting and pressing research question
as to whether there are ways to plausibly deniably symmetrically
weaken or even trapdoor weaken DL curve parameters, when the seeds are
allowed to look random as the DSA FIPS 186-3 ones do.



  See slide #28 in this djb deck:
  If e.g. the NSA knew of an entire class of weak curves, they could
  perform a brute force search with random looking seeds, continuing
  until the curve parameters, after the seed is run through SHA1, fall
  into the class that's known to be weak to them.


Right, but weak parameter arguments are very dangerous - the US
national infrastructure they're supposed to be protecting could be
weakened when someone else finds the weakness.  Algorithmic weaknesses
can't be hidden with confidence: how do they know the other countries'
defense research agencies aren't also sitting on the same weakness,
even before they found it?  That's a strong disincentive.  Though if
it's a well-defined partial weakening they might go with it - eg
historically they explicitly had a go, in public, at requiring the use
of eg differential workfactor cryptography, where some of the key bits
of Lotus Notes were encrypted to the NSA public key (which I have as a
reverse-engineering trophy here [1]).  Like for example they don't
really want foreign infrastructure to have more than 80 bits or
something close to the edge of strength, and they're willing to
tolerate that on US infrastructure also.  Somewhat plausible.

But the more interesting question I was referring to is a trapdoor
weakness with a weak proof of fairness (ie a fairness that looks like
the one in FIPS 186-3/ECDSA, where we don't know how much grinding, if
any, went into the magic seed values).  For illustration, though not
applicable to ECDSA and probably outright defective: eg can they start
with some large number of candidate G values where G=xH (ie knowing the
EC discrete log of some value H they pass off as a random, fairly
chosen point) and then do a birthday collision between the selection of
G values and different seed values to a PRNG, to find a G value for
which they have both a discrete log wrt H and a PRNG seed.  Bearing in
mind they may be willing to throw custom ASIC or FPGA supercomputer
hardware and a $1bil budget at the problem as a one-off cost.

Adam

[1] http://www.cypherspace.org/adam/hacks/lotus-nsa-key.html


Re: [Cryptography] TLS2

2013-09-30 Thread Adam Back

On Mon, Sep 30, 2013 at 11:49:49AM +0300, ianG wrote:

On 30/09/13 11:02 AM, Adam Back wrote:

no ASN.1, and no X.509 [...], encrypt and then MAC only, no non-forward
secret ciphersuites, no baked in key length limits [...] support
soft-hosting [...] Add TOFU for self-signed keys.


Personally, I'd do it over UDP (and swing for an IP allocation).  


I think lack of soft-hosting support in TLS was a mistake - it's
another reason not to turn on SSL (IPv4 addresses are scarce and can
only host one SSL domain per IP#; that means it costs more, or a small
hosting company can only host a limited number of domains, and so has
to charge more for SSL): and I don't see why including the domain in
the client hello is a cost worth avoiding.  There's an RFC for how to
retrofit soft-host support via client-hello into TLS but it's not
deployed AFAIK.
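
(That retrofit became the Server Name Indication (SNI) TLS extension.
For reference, a client sends it with Python's ssl module as below;
example.com is a placeholder:)

    import socket, ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as raw:
        # server_hostname puts the domain in the client-hello (SNI), so
        # a soft-hosting server can pick the certificate for this name.
        with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
            print(tls.version(), tls.getpeercert()["subject"])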

The other approach is to bump up security - ie start with HTTP, then
switch to TLS.  However that is generally a bad direction, as it
invites attacks on the unauthenticated destination being redirected to.
I know there is also another mechanism, to indicate via certification
that a domain should be TLS-only, but as a friend of mine was saying 10
years ago, it's past time to deprecate HTTP in favor of TLS.

Both client and server must have a PP key pair.  


Well clearly passwords are bad and near the end of their lifetime with
GPU advances, and even augmented password-authenticated key exchanges
like EKE have a (so far) unavoidable design requirement that the server
store something offline-grindable, which could be key-stretched, but
that's it.  PBKDF2 + current GPU or ASIC farms = game over for
passwords.


However, whether it's password-based or challenge-response-based, I
think we ought to address the phish problem, which is actually what
EKE was designed for in the first place (in 1992 (EKE) and 1993
(password-augmented EKE)).  Maybe as it's been 20 years we might
actually do it.  (Seems to be the general rule of thumb for must-use
crypto inventions that it takes 20 years until the security software
industry even tries.)  Of course patents only slow it down.  And
coincidentally the original AKE patent expired last month.  (And I
somehow doubt Lucent, the holder, got any licensing revenue worth
speaking about between 1993 and now.)

By pinning the EKE or AKE to the domain, I mean that there should be
no MITM that can repurpose a challenge from a phish at telecon.com to
telecom.com, because the browser enforces that in the EKE/AKE
challenge-response the domain connected to is combined in a
non-malleable way into the response.  (EKE/AKE are anyway immune to
offline grinding of the exchanged messages.)
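
A minimal sketch of that non-malleable domain binding (Python; the
shared secret is assumed to come out of the EKE/AKE exchange, and the
message layout is hypothetical):

    import hmac, hashlib

    def bound_response(shared_secret: bytes, domain: str,
                       challenge: bytes) -> bytes:
        # Fold the domain the browser actually connected to into the
        # challenge response, non-malleably, via HMAC.
        msg = domain.encode() + b"|" + challenge
        return hmac.new(shared_secret, msg, hashlib.sha256).digest()

    # A phisher at telecon.com relaying the exchange cannot repurpose
    # the response for telecom.com: the verifier binds its own domain.
    r1 = bound_response(b"secret-from-EKE", "telecom.com", b"nonce-1")
    r2 = bound_response(b"secret-from-EKE", "telecon.com", b"nonce-1")
    assert r1 != r2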


Clearly you want to tie that also back to the domain's TLS auth key,
otherwise you just invite DNS exploits, which are trivial via ARP
poisoning, DNS cache-poisoning, TCP/UDP session hijack etc, depending
on the network scenario.

And the browser vendors need, in the case of passwords/AKE, to include
a secure UI that cannot be indistinguishably pasted over by carefully
aligned javascript popups.

(The other defense, SecurID and its clones, can also help prop up
AKE/passwords.)


Both, used every time to start the session, both sides authenticating each
other at the key level.  Any question of certificates is kicked out to a
higher application layer with key-based identities established.


While certs are a complexity it would be nice to avoid, I think that
referencing something external and bloated can be a problem, as then,
like now, you pollute an otherwise clean standard (nice simple BNF
definition) with something monstrous like ASN.1 and X.500 naming via
X.509.  Maybe you could profile something like OpenPGP though (it has
its own crappy legacy: they're onto v5 key formats by now, and some of
the earlier versions have their own problems, eg fingerprint ambiguity
arising from ambiguous encoding, and other issues including too many
variants and extra mandatory/optional extensions).  Of course the issue
with rejecting formats below a certain level is that the WoT is shrunk,
and anyway the WoT is not that widely used outside of operational
security/crypto industry circles.  That second argument may push more
towards SSH-format keys, which are by comparison extremely simple, and
they have recently been talking about introducing simple certification
as I recall.

Adam


Re: [Cryptography] TLS2

2013-09-30 Thread Adam Back

If we're going to do that I vote no ASN.1, and no X.509.  Just a BNF
format like the base SSL protocol; encrypt-then-MAC only, no
non-forward-secret ciphersuites, no baked-in key length limits.  I
think I'd also vote for a lot fewer modes and ciphers.  And probably
non-NIST curves while we're at it.  And support soft-hosting by sending
the server domain in the client-hello.  Add TOFU for self-signed keys.
Maybe base it on PGP so you get web of trust, though it started to get
moderately complicated to even handle PGP certificates.

Adam

On Sun, Sep 29, 2013 at 10:51:26AM +0300, ianG wrote:

On 28/09/13 20:07 PM, Stephen Farrell wrote:


b) is TLS1.3 (hopefully) and maybe some extensions for earlier
   versions of TLS as well



SSL/TLS is a history of fiddling around at the edges.  If there is to 
be any hope, start again.  Remember, we know so much more now.  Call 
it TLS2 if you want.


Start with a completely radical set of requirements.  Then make it 
so. There are a dozen people here who could do it.


Why not do the requirements, then ask for competing proposals?  
Choose 1.  It worked for NIST, and committees didn't work for anyone.


A competition for TLS2 would bring out the best and leave the 
bureaurats fuming and powerless.




iang


[Cryptography] three crypto lists - why and which

2013-09-30 Thread Adam Back

I am not sure if everyone is aware that there is also an unmoderated
crypto list, because I see old familiar names posting on the moderated
crypto list that I do not see posting on the unmoderated list.  The
unmoderated list has been running continuously (new posts every day
with no gaps) since Mar 2010, with interesting, relatively low-noise,
and non-firehose volume.

http://lists.randombit.net/mailman/listinfo/cryptography

The actual reason for the creation of that list was that Perry's list
went through a hiatus when Perry stopped approving/forwarding posts, eg

http://www.mail-archive.com/cryptography@metzdowd.com/

originally Nov 2009 - Mar 2010 (I presume the Mar 2010 restart was
motivated by the creation of the randombit list starting in the same
month), but more recently a Sep 2010 to May 2013 gap (minus traffic in
Aug 2011).

http://www.metzdowd.com/pipermail/cryptography/

I have no desire to pry into Perry's personal circumstances as to why
this huge gap happened, and he should be thanked for the significant
moderation effort he has put in to create this low-noise environment,
but despite that it is bad for cryptography if people's means of
technical interaction spuriously stops.  Perry mentioned recently that
he now has backup moderators, OK so good.

There is now also the cypherpunks list, which has picked up, and covers
a wider mix of topics: censorship-resistant technology ideas, forays
into ideology etc.  Moderation is even lower than randombit but there
is no spam; noise is slightly higher but quite reasonable so far.  And
there is now a domain name that is not al-quaeda.net (seriously?  is
that even funny?): cpunks.org.

https://cpunks.org/pipermail/cypherpunks/ 


At least I enjoy it, and see some familiar names posting, last seen a
decade+ ago.

Anyway my reason for posting was threefold: a) make people aware of the
randombit crypto list, b) the rebooted cypherpunks list (*), and c) how
to use randombit (unmoderated) and metzdowd.


For my tastes, sometimes Perry will cut off a discussion that I thought
was just warming up because I wanted to get into the detail, so I tend
to prefer the unmoderated list.  But it's kind of a weird situation
because there are people I want views and comments from who are on the
metzdowd list and who, as far as I know, are not on the randombit list,
and there's no convenient way to migrate a conversation other than
everyone subscribing to both.  Cc to both perhaps works somewhat, and I
do that sometimes, though as a general principle it can be annoying
when people Cc too many lists.

Anyway thanks for your attention, back to the unmoderated (or moderated)
discussion!

Adam


[Cryptography] forward-secrecy >=2048-bit in legacy browser/servers? (Re: RSA equivalent key length/strength)

2013-09-25 Thread Adam Back

On Wed, Sep 25, 2013 at 11:59:50PM +1200, Peter Gutmann wrote:

Something that can sign a new RSA-2048 sub-certificate is called a CA.  For
a browser, it'll have to be a trusted CA.  What I was asking you to explain is
how the browsers are going to deal with over half a billion (source: Netcraft
web server survey) new CAs in the ecosystem when websites sign a new RSA-2048
sub-certificate.


This is all ugly stuff, and probably < 3072-bit RSA/DH keys should be
deprecated in any new standard, but for the legacy work-around
scenario, to try to improve things while that is happening:

Is there a possibility with the RSA-RSA ciphersuite to have a certified
RSA signing key, where that key is used to sign an RSA key negotiation?

At least that was how the export ciphersuites worked (1024+ bit RSA
auth, 512-bit export-grade key negotiation).  And that could even be
weakly forward-secret, in that the 512-bit RSA key could be
per-session.  I imagine that ciphersuite is widely disabled at this
point.

But wasn't there also a step-up certificate that allowed stronger keys
if the right certificate bits were set (for approved export uses like
banking)?  Would setting that bit in all certificates allow some legacy
servers/browsers to get forward secrecy via large, temporary,
key-negotiation-only RSA keys?


(You have to wonder if the 1024-bit max DH standard and code limits
was a bit of earlier sabotage in itself.)

Adam


Re: [Cryptography] prism proof email, namespaces, and anonymity

2013-09-15 Thread Adam Back

On Fri, Sep 13, 2013 at 04:55:05PM -0400, John Kelsey wrote:

The more I think about it, the more important it seems that any anonymous
email like communications system *not* include people who don't want to be
part of it, and have lots of defenses to prevent its anonymous
communications from becoming a nightmare for its participants.


Well you could certainly allow people to opt in to receiving anonymous
email: send them a notification mail saying an anonymous email is
waiting for them (with whatever warning that it could as easily be a
nastygram as the next thing).

People have to bear in mind that email itself is not authenticated -
SMTP forgeries still work - but there are still a large number of
newbies, some of whom have sufficiently thin skin to go ballistic when
they realize they received something anonymous, and have not
internalized the implications of digital free speech.


At ZKS we had a pseudonymous email system.  Users had to pay for nyms
(a pack of 5, paid per year) so they wouldn't throw them away on
nuisance pranks too lightly.  They could be blocked if credible abuse
complaints were received.

Another design permutation I was thinking could be rather interesting
is unobservable mail.  That is to say, the participants know who they
are talking to (signed, non-pseudonymous) but passive observers do not.
It seems to me that in that circumstance you have more design leverage
to increase the security margin using PIR-like tricks than you do with
pseudonymous/anonymous mail - if the contract is that the system
remains very secure so long as both parties to a communication channel
want it to remain that way.

There were also a few protocols to facilitate anonymous,
abuse-resistant email - the user gets some kind of anonymously
refreshable egress capability token.  If they abuse it they are not
identified, but lose the capability.  eg
http://www-users.cs.umn.edu/~hopper/faust-wpes.pdf

Finally there can be different types of costs for nyms and posts -
creating nyms or individual posts can cost real money (hard to retain
pseudonymity), bitcoin, or hashcash, as well as lost reputation if a
used nym is canceled.

Adam


Re: [Cryptography] RSA equivalent key length/strength

2013-09-14 Thread Adam Back

On Sat, Sep 14, 2013 at 12:56:02PM -0400, Perry E. Metzger wrote:

http://tools.ietf.org/html/rfc3766

  | requirement | Symmetric | RSA or DH    | DSA subgroup |
  | for attack  | key size  | modulus size | size         |
  +-------------+-----------+--------------+--------------+
  |     100     |    100    |     1926     |     186      |

if TWIRL like machines appear, we could presume an 11 bit reduction in
strength


100-11 = 89 bits.  Bitcoin is pushing 75 bits/year right now with GPUs
and 65nm ASICs (not sure of the balance).  Does that place a ~2000-bit
modulus around the safety margin of 56-bit DES when that was being
argued about (the previous generation of NSA key-strength sabotage)?
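
(Checking that figure against the hashrate mentioned in the SHA3
thread, ~1.7 petahash/sec: a year of network work is about 2^75
hashes:)

    from math import log2

    hashrate = 1.7e15                    # hashes/sec, ~2013 network rate
    print(round(log2(hashrate * 365.25 * 24 * 3600), 1))   # ~75.6 bits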

Anyone have some projections for the cost of a TWIRL to crack 2048-bit
RSA?  Projecting 2048 out to 2030 doesn't seem like a hugely
conservative estimate.  Bear in mind NSA would probably be willing to
drop $1b one-off to be able to crack public key crypto for the next
decade.  There have been cost, performance, power and density
improvements since TWIRL was proposed.  Maybe the single largest
employer of mathematicians can squeeze a few incremental optimizations
out of the TWIRL algorithm or implementation strategy.

Tin foil or not: maybe it's time for 3072 RSA/DH and 384/512 ECC?

Adam


Re: Against Rekeying

2010-03-25 Thread Adam Back
Seems people like bottom post around here.

On Tue, Mar 23, 2010 at 8:51 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
 On Tue, Mar 23, 2010 at 10:42:38AM -0500, Nicolas Williams wrote:
 On Tue, Mar 23, 2010 at 11:21:01AM -0400, Perry E. Metzger wrote:
  Ekr has an interesting blog post up on the question of whether protocol
  support for periodic rekeying is a good or a bad thing:
 
  http://www.educatedguesswork.org/2010/03/against_rekeying.html
 
  I'd be interested in hearing what people think on the topic. I'm a bit
  skeptical of his position, partially because I think we have too little
  experience with real world attacks on cryptographic protocols, but I'm
  fairly open-minded at this point.

 I fully agree with EKR on this: if you're using block ciphers with
 128-bit block sizes in suitable modes and with suitably strong key
 exchange, then there's really no need to ever (for a definition of
 ever relative to common connection lifetimes for whatever protocols
 you have in mind, such as months) re-key for cryptographic reasons.

 I forgot to mention that I was referring to session keys for on-the-wire
 protocols.  For data storage I think re-keying is easier to justify.

 Also, there is a strong argument for changing ephemeral session keys for
 long sessions, made by Charlie Kaufman on EKRs blog post: to limit
 disclosure of earlier ciphertexts resulting from future compromises.

 However, I think that argument can be answered by changing session keys
 without re-keying in the SSHv2 and TLS re-negotiation senses.  (Changing
 session keys in such a way would not be trivial, but it may well be
 simpler than the alternative.  I've only got, in my mind, a sketch of
 how it'd work.)

 Nico

In anon-ip (a Zero-Knowledge Systems internal project) and Cebolla [1]
we provided forward secrecy (aka backward security) using symmetric
re-keying (the key replaced by a hash of the previous key).  (Backward
and forward security as defined by Ross Anderson in [2].)
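
A minimal sketch of that symmetric re-keying ratchet (Python; the hash
label and key handling are illustrative, not the Cebolla wire format):

    import hashlib

    def ratchet(key: bytes) -> bytes:
        # Replace the session key with a one-way hash of itself; a
        # later compromise cannot be inverted to recover earlier keys,
        # so previously recorded traffic stays safe.
        return hashlib.sha256(b"ratchet" + key).digest()

    key = hashlib.sha256(b"initial key exchange output").digest()
    for epoch in range(3):
        print(epoch, key.hex()[:16])     # fingerprint of this epoch's key
        key = ratchet(key)               # discard the old key immediately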

But we did not try to do forward security in the sense of trying to
recover security in the event someone temporarily gained keys.  If
someone has compromised your system badly enough that they can read
keys, they can install a backdoor.

Another angle on this is timing attacks or iterative adaptive attacks
like Bleichenbacher's attack on SSL encryption padding.  If re-keying
happens before the attack can complete, perhaps the risk of a
successful, so-far-unnoticed adaptive or side-channel attack can be
reduced.  So maybe there is some use.

Simplicity of design can be good too.

Also patching SSL now that fixes are available might be an idea.  (In
my survey of bank sites most of them still have not patched and are
quite possibly practically vulnerable).

Adam

[1] http://www.cypherspace.org/cebolla/
[2] http://www.cypherspace.org/adam/nifs/refs/forwardsecure.pdf



Re: Against Rekeying

2010-03-23 Thread Adam Back
In anon-ip (a Zero-Knowledge Systems internal project) and Cebolla [1]
we provided forward secrecy (aka backward security) using symmetric
re-keying (the key replaced by a hash of the previous key).  (Backward
and forward security as defined by Ross Anderson in [2].)

But we did not try to do forward security in the sense of trying to
recover security in the event someone temporarily gained keys.  If
someone has compromised your system badly enough that they can read
keys, they can install a backdoor.

Another angle on this is timing attacks or iterative adaptive attacks
like Bleichenbacher's attack on SSL encryption padding.  If re-keying
happens before the attack can complete, perhaps the risk of a
successful, so-far-unnoticed adaptive or side-channel attack can be
reduced.  So maybe there is some use.

Simplicity of design can be good too.

Also patching SSL now that fixes are available might be an idea.  (In
my survey of bank sites most of them still have not patched and are
quite possibly practically vulnerable).

Adam

[1] http://www.cypherspace.org/cebolla/
[2] http://www.cypherspace.org/adam/nifs/refs/forwardsecure.pdf

On Tue, Mar 23, 2010 at 8:51 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
 On Tue, Mar 23, 2010 at 10:42:38AM -0500, Nicolas Williams wrote:
 On Tue, Mar 23, 2010 at 11:21:01AM -0400, Perry E. Metzger wrote:
  Ekr has an interesting blog post up on the question of whether protocol
  support for periodic rekeying is a good or a bad thing:
 
  http://www.educatedguesswork.org/2010/03/against_rekeying.html
 
  I'd be interested in hearing what people think on the topic. I'm a bit
  skeptical of his position, partially because I think we have too little
  experience with real world attacks on cryptographic protocols, but I'm
  fairly open-minded at this point.

 I fully agree with EKR on this: if you're using block ciphers with
 128-bit block sizes in suitable modes and with suitably strong key
 exchange, then there's really no need to ever (for a definition of
 ever relative to common connection lifetimes for whatever protocols
 you have in mind, such as months) re-key for cryptographic reasons.

 I forgot to mention that I was referring to session keys for on-the-wire
 protocols.  For data storage I think re-keying is easier to justify.

 Also, there is a strong argument for changing ephemeral session keys for
 long sessions, made by Charlie Kaufman on EKRs blog post: to limit
 disclosure of earlier ciphertexts resulting from future compromises.

 However, I think that argument can be answered by changing session keys
 without re-keying in the SSHv2 and TLS re-negotiation senses.  (Changing
 session keys in such a way would not be trivial, but it may well be
 simpler than the alternative.  I've only got, in my mind, a sketch of
 how it'd work.)

 Nico
 --





Re: Password hashing

2007-10-12 Thread Adam Back
I would have thought PBKDF2 would be the obvious, standardized (PKCS
#5 / RFC 2898) and designed-for-purpose method to derive a key from a
password.  PBKDF2 would typically be based on HMAC-SHA1.

It should be straightforward to use PBKDF2 with HMAC-SHA-256 instead
for larger key sizes, or to avoid SHA1 given the partial attacks on it.
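
(For reference, a minimal PBKDF2-HMAC-SHA-256 derivation using Python's
standard library; the iteration count is an illustrative work factor,
not a recommendation:)

    import hashlib, os

    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", b"the password", salt,
                              100_000, dklen=32)
    print(salt.hex(), key.hex())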

Adam

On Thu, Oct 11, 2007 at 10:19:18PM -0700, james hughes wrote:
 A proposal for a new password hashing based on SHA-256 or SHA-512 has  
 been proposed by RedHat but to my knowledge has not had any rigorous  
 analysis. The motivation for this is to replace MD-5 based password  
 hashing at banks where MD-5 is on the list of do not use algorithms.  
 I would prefer not to have the discussion MD-5 is good enough for  
 this algorithm since it is not an argument that the customers  
 requesting these changes are going to accept.



Re: open source digital cash packages

2007-09-18 Thread Adam Back
credlib provides Brands and Chaum credentials, both of which can be
used for ecash.

http://www.cypherspace.org/credlib/

Adam

On Mon, Sep 17, 2007 at 01:46:04PM -0400, Steven M. Bellovin wrote:
 Are there any open source digital cash packages available?  I need one
 as part of another research project.
 
 
   --Steve Bellovin, http://www.cs.columbia.edu/~smb
 



Re: remote-attestation is not required (Re: The bank fraud blame game)

2007-07-04 Thread Adam Back
I think you misread what I said about the BIOS-jumper-required install.

Ie this is not a one-click install from email.  It is something one
user in 10,000 would even install at all!  It would be more like
people who program and install custom BIOSes or something, people who
reverse-engineer security products.  The point is to allow audit of
running code by a few paranoid people to keep things honest.

The whole point of the separate program space is that it DOES NOT get
infested with viruses like Windows does.  The software running in it
will be very very simple, have minimal UI, minimal code etc.

Obviously there would be no software connection between anything
received in email and changing the software in the physical or virtual
software compartment.

Adam

On Tue, Jul 03, 2007 at 05:53:19PM -, John Levine wrote:
 I do not believe the mentioned conflict exists.  The aim of these
 calculator-like devices is to make sure that no malware, virus etc can
 create unauthorized transactions.  The user should still be able to
 debug, and inspect the software in the calculator-like device, or
 virtual software compartment, just that installation of software or
 upgrades into that area should be under direct explicit user control.
 (eg with BIOS jumper required to even make any software change!)
 
 In view of the number of people who look at an email message, click on
 an attached ZIP file, rekey a file password in the message, and then
 run the program in the file, thereby manually installing a virus, it's
 way too dangerous to let users install any code at all on a security
 device.



remote-attestation is not required (Re: The bank fraud blame game)

2007-07-03 Thread Adam Back
I do not believe the mentioned conflict exists.  The aim of these
calculator-like devices is to make sure that no malware, virus etc can
create unauthorized transactions.  The user should still be able to
debug, and inspect the software in the calculator-like device, or
virtual software compartment, just that installation of software or
upgrades into that area should be under direct explicit user control.
(eg with BIOS jumper required to even make any software change!)

The ring -1 and loss-of-control aspects of TPM are different: they
mean that you are not really root on your own machine anymore!  In the
sense that if you do load under a debugger, the remote party can tell
this and refuse to talk to you.

This remote attestation feature is simply not required for
user-centric, user-controlled security.

Adam

On Sun, Jul 01, 2007 at 11:09:16PM -0400, Leichter, Jerry wrote:
 | something like a palm pilot, with screen and input and a reasonably
 | trustworthy OS, along with (as you say) the appropriate UI investment.
 You do realize that you've just come down to what the TPM guys want to
 build?  (Of course, much of the driving force behind having TPM comes
 from a rather different industry.  We're all happy when TPM can be
 used to ensure that our banking transactions actually do what the bank
 says it will do for a particular set of instructions issued by us and
 no one else, not so happy when they ensure that our music transactions
 act the same way)
 
 Realistically, the only way these kinds of devices could catch on would
 be for them to be standardized.  No one would be willing to carry one
 for their bank, another for their stock broker, a third for their
 mortgage holder, a fourth for their credit card company, and so on.
 But once they *are* standardized, almost the same potential for
 undesireable uses appears as for TPM's.  What's to prevent the movie
 download service requiring that you present your Universal Safe Access
 Fob before they authorize you to watch a movie?  If the only significant
 differences between this USAF and TPM is that the latter is more
 convenient because more tightly tied to the machine, we might as well
 have the convenience.
 
 (This is why I find much of the discussion about TPM so surreal.  The
 issue isn't the basic technology, which one way or another, in some form,
 is going to get used.  It's how we limit the potential misuses)



Re: New digital bearer cash site launched

2007-02-24 Thread Adam Back
I read some of the docs and ecache appears to be based on HMAC
tickets, plus mixes.  The problem I see is that you have to trust the
mix.  Now the documentation does mention that they anticipate 3rd
party mixes, but still you have to trust those mixes also.

And as we know from mixmaster etc., there are attacks on mixes such as
flooding.

So it seems to me they would achieve much stronger anonymity using a
blinding-based ecash system such as Chaum's (patent expired) or
Brands'.

In this way the anonymity set would be all of the coins issued since
the coin-epoch start, rather than the mixes used.  And there would be
no trust concerns, as the blinding protocols don't require trust in
any servers (even the bank and merchant in collusion can't link a coin
with its withdrawer).
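
A minimal sketch of the Chaumian blinding being argued for (Python,
textbook toy RSA numbers, no padding or hashing; illustration only):

    p, q = 61, 53
    n, e, d = p * q, 17, 2753          # toy RSA key: e*d = 1 mod (p-1)(q-1)

    m = 42                             # the coin to be signed
    r = 99                             # blinding factor, coprime to n
    blinded = (m * pow(r, e, n)) % n   # user sends m * r^e to the bank
    s_blind = pow(blinded, d, n)       # bank signs blindly: m^d * r mod n
    s = (s_blind * pow(r, -1, n)) % n  # user divides out r, leaving m^d

    assert pow(s, e, n) == m           # a valid signature on m, yet the
                                       # bank never saw m: no mix to trust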

Adam

On Wed, Feb 21, 2007 at 09:28:03AM -0800, Steve Schear wrote:
 With the expiration of Chaum's key patents it was assumed that someone 
 would step up an try their hand at launching a DBC-based financial 
 service.  Some time has passed and I'm happy to announce that this has 
 finally happened.  Taking a cue from the lively Digital Gold Currencies, 
 eCache's first denomination is gold backed.  Unlike Digicash's instruments, 
 eCache is using a mixing technique, rather than blinding, to help preserve 
 unlinkability.  Its mint is located on a hidden server in TOR-land.  More 
 information at: https://ffij33ewbnoeqnup.onion.meshmx.com/doc.php
 
 Comments are invited about the technology and governance aspects that such 
 financial services invoke.



private credential/ecash thread on slashdot (Re: announce: credlib library with brands and chaum credentials)

2007-02-21 Thread Adam Back
Credentica's (Stefan Brands' ecash/credentials) U-Prove library and the
open source credlib library implementing the same are on slashdot:

http://yro.slashdot.org/yro/07/02/20/2158240.shtml

Maybe some list readers would like to inject some crypto knowledge
into the discussion.  

There is quite some underinformed speculation-as-critique on the
thread...  It's interesting to see people who probably understand SSL,
S/MIME and such, at least at a power-user if not programmer level, try
to make logical leaps about what must be wrong or limited about
unlinkable credential schemes.  It shows the challenges faced in
deploying this stuff.  Can't deploy what people don't understand!

Adam
--
http://www.cypherspace.org/credlib/

On Fri, Feb 16, 2007 at 11:14:39AM -0500, Adam Back wrote:
 Hi
 
 I implemented Chaumian and Brands credentials in a credential library
 (C code, using openSSL).  I implemented some of the pre-computation
 steps.  Have not made any attempt so far to benchmark it.  But thought
 I could take this opportunity to make it public.  I did not try to
 optimize so far.  One optimization opportunity at algorithm level, is
 you dont need witness indistinguishability on a single attribute
 credential, which saves some of the computations.
 
   http://www.cypherspace.org/credlib/
 
 Ben, if you have a partial implementation of Camenisch credentials,
 you could maybe do some comparisons of that against this C
 implementation.



announce: credlib library with brands and chaum credentials (Re: see also credentica announcement about U-prove)

2007-02-16 Thread Adam Back
Hi

I implemented Chaumian and Brands credentials in a credential library
(C code, using OpenSSL).  I implemented some of the pre-computation
steps.  I have not made any attempt so far to benchmark it, but thought
I could take this opportunity to make it public.  I did not try to
optimize so far.  One optimization opportunity at the algorithm level
is that you don't need witness indistinguishability on a
single-attribute credential, which saves some of the computations.

http://www.cypherspace.org/credlib/

Ben, if you have a partial implementation of Camenisch credentials,
you could maybe do some comparisons of that against this C
implementation.

(I previously shared a copy with a few list participants.)

The Brands credential paper I used as a reference (a simpler precis
than the thesis as a source):

A Technical Overview of Digital Credentials, Technical Report, February 2002.
http://www.cypherspace.org/credlib/brands-technical.pdf

could be useful as a quick reference for what modexp and modinv steps
would be involved in issuing, showing etc, for comparison with
Camenisch.

About flexibility and generality, I mean Brands has a huge list of
features: a very efficient observer setting with cheap operations
suitable for an 8-bit smartcard, limited multi-show (though linkable;
there is an online credential refresh phase if unlinkability is
desired), single show, the ability to show formulae, the ability to
show and combine formulae across credentials from different issuers,
etc.  And also proving negatives involving attributes, and a related
technique for blindly testing a blacklist of revoked credentials.  I am
a bit rusty about Camenisch, as it's been a few years, but from my
recollection it doesn't do most of these things.  Also with Brands, in
the ecash setting there is a neat technique for making offline
respendable coins with double-spend protection.  (I thought I
discovered it, but I asked Stefan, and it's a footnote in the thesis
book that I missed; it turns out it was the topic of someone's MSc
thesis.)

The credlib library so far does unlimited-show linkable credentials
(issuing, showing etc) for 0 or more attributes.

The U-Prove library does a lot more things, I think, but it's Java and
I'm more of a C person, though Java is interesting in some Java device
and J2EE server settings, and for app portability.  I guess I just
like C efficiency.

Adam

On Thu, Feb 15, 2007 at 06:24:11PM +, Ben Laurie wrote:
  I believe Brands credentials are considerably more computationally
  efficient and more general/flexible than Camenisch credentials.
 
 Not sure about more general. Brands does claim they are more efficient,
 though - however, Camenisch/Lysyanskya credentials have been improved
 since they were first thought of, and are also a lot faster if you don't
 insist on academic rigour. I have not yet put them side-by-side, but I
 do have a partial implementation of C/L credentials for OpenSSL and am
 planning a Brands implementation, too.
 
  (Re Hal's comment on the patent status of Camenisch credentials, as
  far as I know patents apply to both systems).
  
  Looks like you can obtain an evaluation copy of U-prove also.
  
  Adam
  
  On Sun, Feb 04, 2007 at 10:34:33AM -0800, Hal Finney wrote:
  John Gilmore forwards:
  http://news.com.com/IBM+donates+new+privacy+tool+to+open-source/2100-1029_3-6153625.html
 
  IBM donates new privacy tool to open-source
By  Joris Evers
Staff Writer, CNET News.com
Published: January 25, 2007, 9:00 PM PST
 
  IBM has developed software designed to let people keep personal  
  information secret when doing business online and donated it to the  
  Higgins open-source project.
 
The software, called Identity Mixer, was developed by IBM  
  researchers. The idea is that people provide encrypted digital  
  credentials issued by trusted parties like a bank or government agency  
  when transacting online, instead of sharing credit card or other  
  details in plain text, Anthony Nadalin, IBM's chief security architect,  
  said in an interview.
  ...
  I just wanted to note that the idemix software implements what we
  sometimes call Camenisch credentials.  This is a very advanced credential
  system based on zero knowledge and group signatures.  The basic idea is
  that you get a credential on one pseudonym and can show it on another
  pseudonym, unlinkably.  More advanced formulations also allow for
  credential revocation.  I don't know the specifics of what this software
  implements, and I'm also unclear about the patent status of some of the
  more sophisticated aspects, but I'm looking forward to being able to
  experiment with this technology.
 
  Hal Finney
 

see also credentica announcement about U-prove (Re: IBM donates new privacy tool to open-source)

2007-02-15 Thread Adam Back
Related to this announcement, credentica.com (Stefan Brands' company)
has released U-Prove, their toolkit & SDK for doing limited-show,
selective disclosure and other aspects of the Brands credentials.

http://www.credentica.com/uprove_sdk.html

(Also on Stefan's blog http://www.idcorner.org/?p=144).

I believe Brands credentials are considerably more computationally
efficient and more general/flexible than Camenisch credentials.

(Re Hal's comment on the patent status of Camenisch credentials, as
far as I know patents apply to both systems).

Looks like you can obtain an evaluation copy of U-prove also.

Adam

On Sun, Feb 04, 2007 at 10:34:33AM -0800, Hal Finney wrote:
 John Gilmore forwards:
  http://news.com.com/IBM+donates+new+privacy+tool+to+open-source/2100-1029_3-6153625.html
 
  IBM donates new privacy tool to open-source
By  Joris Evers
Staff Writer, CNET News.com
Published: January 25, 2007, 9:00 PM PST
 
  IBM has developed software designed to let people keep personal  
  information secret when doing business online and donated it to the  
  Higgins open-source project.
 
The software, called Identity Mixer, was developed by IBM  
  researchers. The idea is that people provide encrypted digital  
  credentials issued by trusted parties like a bank or government agency  
  when transacting online, instead of sharing credit card or other  
  details in plain text, Anthony Nadalin, IBM's chief security architect,  
  said in an interview.
  ...
 
 I just wanted to note that the idemix software implements what we
 sometimes call Camenisch credentials.  This is a very advanced credential
 system based on zero knowledge and group signatures.  The basic idea is
 that you get a credential on one pseudonym and can show it on another
 pseudonym, unlinkably.  More advanced formulations also allow for
 credential revocation.  I don't know the specifics of what this software
 implements, and I'm also unclear about the patent status of some of the
 more sophisticated aspects, but I'm looking forward to being able to
 experiment with this technology.
 
 Hal Finney
 



secure CRNGs and FIPS (Re: How important is FIPS 140-2 Level 1 cert?)

2006-12-26 Thread Adam Back
Anonymous wrote:
 [criticizing FIPS CRNGs]

You can make a secure CRNG that you can obtain FIPS 140 certification
on using the FIPS 186-2 appendix 3.1 generator (one of my clients got
FIPS 140 on an implementation of the FIPS 186-2 RNG that I implemented
for general key generation and such crypto use).

You should apply change notice 1, under the section on general purpose
random number generation, or you will be doing needless modulo-q
bignum operations for general RNG use (the default,
non-change-notice-modified RNG is otherwise hard-coded for DSA k value
generation and related things, 186-2 being the FIPS DSA standard doc).


Also, about continuously adding seeding: this is also provided in the
186-2 RNG via the XSEED parameter, which allows the system to add
extra entropy at any time.
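
A simplified sketch of the generator's structure (Python; the real G()
is the SHA-1 compression function with the state as chaining input --
plain SHA-1 stands in here, so this is a structural illustration, not
a conformant implementation):

    import hashlib

    B = 160                                   # XKEY width b in bits

    def fips186_2_rng(xkey: int, xseed: int = 0, blocks: int = 3):
        # FIPS 186-2 appendix 3.1 shape, general-purpose form per
        # change notice 1 (ie no mod-q reduction of the output).
        for _ in range(blocks):
            xval = (xkey + xseed) % (1 << B)  # XSEED folds in new entropy
            w = int.from_bytes(
                hashlib.sha1(xval.to_bytes(B // 8, "big")).digest(), "big")
            xkey = (1 + xkey + w) % (1 << B)  # one-way state update
            yield w.to_bytes(B // 8, "big")

    for block in fips186_2_rng(0x0123456789ABCDEF, xseed=0xCAFE):
        print(block.hex())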


About the criticisms of Common Criteria evaluation in general: I think
the reason people complain that it is a documentation exercise is
because pretty much all it does is ensure that the system does what it
says it does.  So basically you have to enumerate threats, state which
threats the system is designed to protect against, and which are out
of scope.

Then the rest of the documentation is just saying, in increasing
detail, that you have not made mistakes in the design and
specification, and to some extent the implementation.


So as someone else said in the thread, as a user you need to read the
security target document section on security objectives and
assumptions, and check if they protect against attacks that are
relevant to you.

Another aspect of security targets is protection profiles.  A
protection profile is basically a sort of set of requirements for
security targets for a given type of system.  So you might get eg a
protection profile for hard disk encryption.  The protection profile
will be standardized on, and so it makes things a bit easier for the
consumer, as it's less likely the protection profile will be massaged.
(I mean the consortium or standardization body creating the protection
profile will want some security quality bar.)

Adam



Re: TPM disk crypto

2006-10-12 Thread Adam Back
I was suspecting that, as DRM at least appears to be one of the main
motivators (alongside trojan/malware protection) for trustworthy
computing, you probably will not be able to put the TPM into debug
mode (ie manipulate code without affecting the hash attested while in
debug mode).  The ability to do so breaks DRM.

Also bear in mind the Vista model, where it has been described that
inserting an unsigned device driver into the kernel will disable some
media playback (for media requiring DRM).  And also the secure
(encrypted) path between trusted agent and video/audio card, and
between video/audio card and monitor/speakers.  The HDMI spec has
these features, and you can already buy HDMI cards and monitors
(though I don't know if they have the encryption features
implemented/enabled).

I think generally a full-user-control model will not be viewed as
compatible.  Ie there will be a direct conflict between the user's
ability to debug attested apps and DRM.

So then enters the possibility of debugging all apps except special
ones flagged as DRM, but if that technical ability is there, you won't
have to wait long for it to be used for all things: file formats
locked to editors, per-processor encrypted binaries, rented-by-the-hour
software you can't debug or inspect the memory space of, etc.

I think the current CPUs / memory managers do not have the ring -1 /
curtained memory features, but already a year or more ago Intel and
AMD were talking about these features.  So it's possible that, for
example, the extra hypervisor virtualization functionality in recent
processors ties in with those features, and is already delivered?
Anyone know?

The device driver signing thing is clearly bypassable without a TPM,
and we know TPMs are not widely available at present.  (All that is
required is to disable or nop out the driver signature verification in
the OS; or replace the CA or cert it is verified against with your own
and sign your own drivers).  How long until that OS binary patch is
made?

Adam

On Tue, Oct 10, 2006 at 12:56:07PM +0100, Brian Gladman wrote:
 I haven't been keeping up to date with this trusted computing stuff over
 the last two years but when I was last involved it was accepted that it
 was vital that the owner of a machine (not necessarily the user) should
 be able to do the sort of things you suggest and also be able to exert
 ultimate control over how a computing system presents itself to the
 outside world.
 
 Only in this way can we undermine the treacherous computing model of
 trusted machines with untrusted owners and replace it with a model in
 which trust in this machine requires trust in its owner on which real
 information security ultimately depends (I might add that even this
 model has serious potential problems when most machine owners do not
 understand security).
 
 Does anyone know the current state of affairs on this issue within the
 Trusted Computing Group (and the marketed products of its members)?

 Adam Back wrote:
  So the part about being able to detect viruses, trojans and attest
  them between client-server apps that the client and server have a
  mutual interest to secure is fine and good.
  
  The bad part is that the user is not given control to modify the hash
  and attest as if it were the original so that he can insert his own
  code, debug, modify etc.
  
  (All that is needed is a debug option in the BIOS to do this that only
  the user can change, via BIOS setup.)



Re: TPM disk crypto

2006-10-09 Thread Adam Back
So the part about being able to detect viruses, trojans and attest
them between client-server apps that the client and server have a
mutual interest to secure is fine and good.

The bad part is that the user is not given control to modify the hash
and attest as if it were the original so that he can insert his own
code, debug, modify etc.

(All that is needed is a debug option in the BIOS to do this that only
the user can change, via BIOS setup.)

Adam

On Mon, Oct 09, 2006 at 08:03:40PM +1000, James A. Donald wrote:
 Erik Tews wrote:
 What you do is, you trust your TPM and your BIOS that they never lie to
 you, because they are certified by the manufature of the system and the
 tpm. (This is why it is called trusted computing)
 
 So if you don't trust your hardware and your manufactor, trusted
 computing is absolutely worthless for you. But if you trust a
 manufactor, the manufactor trusts the tpms he has build and embedded in
 some systems, and you don't trust a user that he did not boot a modified
 version of your operating system, you can use these components to find
 out if the user is lieing.
 
 Well obviously I trust myself, and do not trust anyone else all that 
 much, so if I am the user, what good is trusted computing?
 
 One use is that I can know that my operating system has not changed 
 behind the scenes, perhaps by a rootkit, know that not only have I not 
 changed the operating system, but no one else has changed the operating 
 system.
 
 Further, I can know that a known program on a known operating system has 
 not been changed by a trojan.
 
 So if I have a login and banking client program, which communicates to 
 me over a trusted path, I can know that the client is the unchanged 
 client running on the unchanged operating system, and has not been 
 modified or intercepted by some trojan.
 
 Further, the bank can know this, and can just not let me login if there 
 is something funny about client program or the OS.



Re: IGE mode is broken (Re: IGE mode in OpenSSL)

2006-09-10 Thread Adam Back
On Sat, Sep 09, 2006 at 09:39:04PM +0100, Ben Laurie wrote:
  There is some more detail here:
  
  http://groups.google.ca/group/sci.crypt/browse_thread/thread/e1b9339bf9fb5060/62ced37bb9713a39?lnk=st
 
 Interesting. In fact, Gligor et al appear to have proposed IGE rather
 later than this date (November 2000).

Well, looking at the paper by Gligor in their mode submission to NIST
on IGE, it rather appears that our FREE-MAC was a re-invention of IGE!
Apparently, according to Gligor, IGE was proposed by Carl Campbell in
Feb 1977, about the same time as CBC mode was proposed.  Gligor et al
wrote the mode submission for IGE in Nov 2000.

 I may have misunderstood the IGE paper, but I believe it includes proofs
 for error propagation in biIGE. Obviously if you can prove that errors
 always propagate (with high probability, of course) then you can have
 authentication cheaply - in comparison to the already high cost of
 biIGE, that is.

I am not sure about the proofs in the IGE-spec paper, but the proofs
about IGE at least must be flawed somehow, because the sci.crypt post
shows a class of known-plaintext modifications that exhibits error
recovery.  I worked through it on paper at the time, and as far as I
can see it trivially breaks IGE/FREE-MAC.  No doubt there are other
variations, so there are lots of permutations you can do in
rearranging the ciphertext such that the integrity check still
passes.

Adam



IGE mode is broken (Re: IGE mode in OpenSSL)

2006-09-09 Thread Adam Back
Hi Ben, Travis

IGE, if this description summarized by Travis is correct, appears to
be a re-invention of the FREE-MAC mode Anton Stiglic and I proposed.
However the FREE-MAC mode (below described as IGE) was broken back in
Mar 2000 or maybe earlier by Gligor, Donescu and Iorga.  I recommend
you do not use it.  There are simple attacks which allow you to
manipulate ciphertext blocks with XOR of a few blocks and get error
recovery a few blocks later; and of course with FREE-MAC error
recovery means the MAC is broken, because the last block is
undisturbed.
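For concreteness, here is a toy sketch of the IGE recurrence (matching
Travis's summary quoted below): c_i = f_K(p_i xor c_{i-1}) xor p_{i-1}.
The block function is a stand-in (multiplication by an odd constant
mod 2^64, trivially invertible) purely so the sketch is self-contained
and runnable -- it is not a real cipher, and the key and IVs are
arbitrary assumptions:

  # Toy IGE mode sketch (Python).  f is NOT a real block cipher.
  M = 1 << 64

  def f(k, x):      return (x * k) % M            # invertible since k is odd
  def f_inv(k, y):  return (y * pow(k, -1, M)) % M

  def ige_encrypt(k, blocks, p_iv, c_iv):
      cs, p_prev, c_prev = [], p_iv, c_iv
      for p in blocks:
          c = f(k, p ^ c_prev) ^ p_prev           # c_i = f_K(p_i ^ c_{i-1}) ^ p_{i-1}
          cs.append(c)
          p_prev, c_prev = p, c
      return cs

  def ige_decrypt(k, blocks, p_iv, c_iv):
      ps, p_prev, c_prev = [], p_iv, c_iv
      for c in blocks:
          p = f_inv(k, c ^ p_prev) ^ c_prev       # p_i = f_K^-1(c_i ^ p_{i-1}) ^ c_{i-1}
          ps.append(p)
          p_prev, c_prev = p, c
      return ps

  k, pt = 0x9E3779B97F4A7C15, [1, 2, 3, 4]
  assert ige_decrypt(k, ige_encrypt(k, pt, 7, 11), 7, 11) == pt

The chaining on both p_{i-1} and c_{i-1} is what was supposed to make
errors propagate to the end; the sci.crypt post below shows how a
known-plaintext attacker can nonetheless resynchronize the chain.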

There is some more detail here:

http://groups.google.ca/group/sci.crypt/browse_thread/thread/e1b9339bf9fb5060/62ced37bb9713a39?lnk=st

Adam

On Mon, Sep 04, 2006 at 04:28:51PM -0500, Travis H. wrote:
 Nevermind the algorithm, I saw the second PDF.
 
 For the other readers, the algorithm in more
 standard variable names is:
 
 c_i = f_K(p_i xor c_(i-1)) xor p_(i-1)
 
 IV = p_(-1), c_(-1)
 
 I suppose the dependency on c_(i-1) and p_(i-1) is the part that
 prevents the attacker from predicting and controlling the garble.
 -- 
 If you're not part of the solution, you're part of the precipitate.
 Unix guru for rent or hire -- http://www.lightconsulting.com/~travis/
 GPG fingerprint: 9D3F 395A DAC5 5CCC 9066  151D 0A6B 4098 0C55 1484
 



Re: A security bug in PGP products?

2006-08-27 Thread Dr Adam Back
What they're saying is: if you change the password, then create some new
data in the encrypted folder, someone who knew the old password
can decrypt your new data.

Why?  Well, because when you change the password they don't change the
symmetric key used to encrypt the data.  The password is used to
create a KEK (key encryption key) and this in turn is used to encrypt
the folder key (which is used to do the actual data encryption).  Now,
in common with a lot of other systems, changing the password does not
entail re-encrypting the actual data.

(To do so would require waiting for it to re-encrypt.  There are
systems that do this, but it is a tradeoff, especially in a
single-user scenario.)
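A toy sketch of that layering, to make the consequence concrete.  The
PBKDF2 parameters and the XOR-based key wrap are illustrative
assumptions, not PGP's actual formats -- the point is only that the
folder key survives a password change:

  import hashlib, os

  def kek(password, salt):                          # password -> KEK
      return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

  def xor(a, b):
      return bytes(x ^ y for x, y in zip(a, b))

  salt = os.urandom(16)
  folder_key = os.urandom(32)                       # does the actual data encryption

  wrapped = xor(folder_key, kek("old password", salt))

  # Password change: only the wrapping is redone, the folder key is not.
  recovered = xor(wrapped, kek("old password", salt))
  wrapped = xor(recovered, kek("new password", salt))
  assert recovered == folder_key

  # So anyone who unwrapped folder_key while they knew the old password
  # can still decrypt data written after the change.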

Personally my preferred security property (in a multi-user storage
system where users can be added and removed) is that people who had
access can still decrypt the stuff they had access to, but can't
decrypt new data encrypted since then.  I think it's a good balance
because that person had the data anyway, and could remember it, have
backups of it etc.

Another thing that can be done is to utilize an online server, which
has an additional key such that it can't decrypt, but can hand the key
over on successful auth and can delete that key on request.  Obviously the
key would be combined in a one-way fashion so the server does not have
to be trusted other than to delete keys on request.

However the article also talks about forensics, and I think they may be
confusing something there, because most encrypted content is not
authenticated anyway (you can merrily switch around ciphertext blocks
without triggering any integrity warnings at the crypto level).  And
anyway if the forensic investigator has the password, he can change
anything! -- symmetric encryption keys known to others are not
signatures.

Adam

On Mon, Aug 21, 2006 at 03:36:16PM -0700, Max A. wrote:
 Hello!
 
 Could anybody familiar with PGP products look at the following page
 and explain in brief what it is about and what the consequences of
 the described bug are?
 
 http://www.safehack.com/Advisory/pgp/PGPcrack.html
 
 The text there looks to me rather obscure, with a lot of unrelated stuff.



Re: Hamiltonian path as protection against DOS.

2006-08-14 Thread Adam Back
On Mon, Aug 14, 2006 at 12:23:03PM +1000, mikeiscool wrote:
 But you're imagining an attack with a distributed botnet DDoS'ing you,
 correct? Couldn't they then also use their botnet to process the
 messages faster than normal? They already have the computing
 power. Just a minor addon to the bot client app.

If you're using a hashcash token which takes 20 seconds of your CPU,
it'll slow the spammer down even if the owned node has broadband.

(Think about 5k message size, multiple Bcc recipients etc; the spammer
on an owned botnet node can send many messages per second.  If hashcash
reduces the number of messages that can be sent by a factor of 100,
that's a good thing.)
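For reference, a minimal hashcash-style partial-preimage search; the
resource string and difficulty here are arbitrary, and real hashcash
stamps carry more fields (version, date, salt), so this is just the
core asymmetry -- minting costs ~2^bits hashes, checking costs one:

  import hashlib
  from itertools import count

  def mint(resource: bytes, bits: int) -> int:
      target = 1 << (160 - bits)                  # SHA-1 output is 160 bits
      for nonce in count():                       # ~2^bits attempts on average
          h = hashlib.sha1(resource + str(nonce).encode()).digest()
          if int.from_bytes(h, "big") < target:
              return nonce

  def check(resource: bytes, nonce: int, bits: int) -> bool:
      h = hashlib.sha1(resource + str(nonce).encode()).digest()
      return int.from_bytes(h, "big") < (1 << (160 - bits))

  stamp = mint(b"recipient@example.com", 20)      # ~a second or so of CPU
  assert check(b"recipient@example.com", stamp, 20)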

Whether it's enough of a slowdown is an open question -- but I think
it's difficult to imagine a security protocol that prevents spam when
the attacker owns some big proportion of nodes.

Adam

 Or if it is many requests from one or thousands of clients, can you
 not, per host, ask them to use a cached version? Per X timeout.
 
 Of course, you can't do this with SSL, though.



encrypted filesystem integrity threat-model (Re: Linux RNG paper)

2006-05-05 Thread Adam Back
I think an encrypted file system with built-in integrity is somewhat
interesting, however the threat model is a bit broken if you are going
to boot off a potentially tampered-with disk.

I mean the attacker doesn't have to tamper with the proposed
encrypted+MACed data, he just tampers with the boot sector/OS boot,
gets your password and modifies your data at will using the MAC keys.

I think you'd be better off building a boot USB key using DSL or some
other small distro, checksumming your encrypted data (and the
rest of the disk) at boot; and having a feature to store the
keyed checksum of the disk on shutdown someplace MACed such that the
USB key can verify it.  Then boot the real OS if that succeeds.
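A minimal sketch of that keyed-checksum check, assuming the USB key
holds the MAC key and the MAC stored at last shutdown (device path and
chunk size are illustrative):

  import hashlib, hmac

  def disk_mac(mac_key: bytes, device: str) -> bytes:
      h = hmac.new(mac_key, digestmod=hashlib.sha256)
      with open(device, "rb") as f:
          for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MB at a time
              h.update(chunk)
      return h.digest()

  def verify_at_boot(mac_key: bytes, device: str, stored: bytes) -> None:
      if not hmac.compare_digest(disk_mac(mac_key, device), stored):
          raise SystemExit("disk changed since shutdown -- refusing to boot")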

(Or better yet buy yourself one of those 32GB usb keys for $3,000 and
remove the hard disk, and just keep your encrypted disk on your
keyring :)


Of course an encrypted network filesystem has other problems... if
you're trying to actively use an encrypted filesystem backed by an
unsecured network file system then you're going to need MACs and
replay protection and other things normal encrypted file system modes
don't provide.

Adam

On Thu, May 04, 2006 at 01:44:48PM -0500, Travis H. wrote:
 On 5/4/06, markus reichelt [EMAIL PROTECTED] wrote:
 Agreed; but regarding unix systems, I know of no crypto
 implementation that does integrity checking. Not just de/encrypt the
 data, but verify that the encrypted data has not been tampered with.
 
 Are you sure?  There's an aes-cbc-essiv:sha256 cipher with dm-crypt.
 Are they using sha256 for something other than integrity?
 
 I guess perhaps the reason they don't do integrity checking is that it
 involves redundant data, so the encrypted volume would be smaller, or
 the block offsets don't line up, and perhaps that's trickier to handle
 than a 1:1 correspondence.



Re: Unforgeable Blinded Credentials

2006-04-19 Thread Adam Back
On Wed, Apr 19, 2006 at 11:53:18AM -0700, bear wrote:
 On Sat, 8 Apr 2006, Ben Laurie wrote:
 Adam Back wrote:
  My suggestion was to use a large denomination ecash coin to have
  anonymous disincentives :) ie you get fined, but you are not
  identified.
 
 The problem with that disincentive is that I need to sink the money for
 each certificate I have. Clearly this doesn't scale at all well.
 
 Um, if it's anonymous and unlinkable, how many certificates do you
 need?  I should think the answer would be one.

Agreed, it would be very nice if we could do this.  However all of the
practical schemes are show-linkable.

I looked at the paper that was referenced earlier in the thread about
the Chameleon [1] credentials which are an attempt to add unlinkable
multi-show to Brands credentials.

Aside from the fact that it uses a non-standard assumption -- that it
is hard to find e^v = a^x + c mod n (for RSA e,n) -- they offer no
proof of this assumption.  And apparently Camenisch's other assumption,
that it is hard to find e^v = a^x + 1, was broken... so that's not very
comforting to start.

Then they use an interactive ZKP in the show, which I think will
require say 80 rounds for reasonable security, each round involving
some non-trivial computation.

So it's not that practical compared to Chaum, Brands etc -- it's not
very efficient in time or communication required for the showing of
the chameleon certs.

Adam

[1] An Anonymous Credential System and a Privacy-Aware PKI by Pino
Persiano and Ivan Visconti

I put a copy online here temporarily:

http://www.cypherspace.org/adam/papers/chameleon.pdf



Re: Unforgeable Blinded Credentials

2006-04-08 Thread Adam Back
On Sat, Apr 08, 2006 at 07:53:37PM +0100, Ben Laurie wrote:
 Adam Back wrote:
  [about Brands credentials]
  I think the shows are linkable, but if you show more than the allowed
  number of times, all of the attributes are leaked, including the credential
  secret key and potentially some identifying information like your
  credit card number, your address etc.
 
 I could be wrong, but I'm pretty sure they're unlinkable - that's part
 of the point of Brands' certificates.

No, they are definitely mutually linkable (pseudonymous), tho obviously
not linkable to the real identity at the issuer.

 Christian Paquin wrote:
  In Brands' system, multiple uses of a n-show credential are not linkable
  to the issuing (i.e. they are untraceable), but they are indeed linkable
  if presented to the same party: the verifier will recognize the
  credential when re-used. This is useful for limited pseudonymous access
   to accounts or resources. If you want showing unlinkability, better get
  n one-show credentials (simpler and more efficient).
 
 That's only true if the credential contains any unblinded unique data,
 surely?

No.  It arises because the credential public key is necessarily shown
during a show.  (The credential public key is blinded during
credential issue so it's not linkable to issue.)  So you can link
across shows simply by comparing the credential public key.

It's hard to blind the public key also.  I thought that's what you were
talking about in a previous mail, where you were discussing what
could be done to make things unlinkable.  (Or maybe trying to find the
same property you thought Brands had, ie unlinkable multi-show, for
Chaum's credentials.)


Note with Brands credentials you can choose: unlimited show, 1-show or
n-show.  To do 1-show or n-show you make some formula for the initial
witness (IW) that is fair and verifiable by the verifier, so there are
only n allowed IWs; consequently if you reuse one, that leaks two shows
with the same IW, which allows the credential private key to be
recovered.  Ie it's just a trick to define a limited number of allowed
(and verifier-verified) IWs -- the IW is a sort of commitment by the
credential owner in the show protocol.

So there is something compact that the verifier can send
somewhere, and it can then collate them and notice when a credential
has been shown more than n times (presuming there are multiple
verifiers and you want to impose n shows across all of them).


 Adam Back wrote:
  Well the other kind of disincentive was a credit card number.  My
  suggestion was to use a large denomination ecash coin to have
  anonymous disincentives :) ie you get fined, but you are not
  identified.
 
 The problem with that disincentive is that I need to sink the money for
 each certificate I have. Clearly this doesn't scale at all well.

No, I mean put the same high-value ecash coin in all of your offline
limited-show credentials / offline ecash coins.

Eg say you can choose to hand over $100 and retain your anonymity even
in the event of double-spending offline ecash coins, or over-using
limited-show credentials.


I was curious about the Chameleon credentials as they claim to work
with Brands credentials; I wrote to one of the authors to see if I
could get an electronic copy, but no reply so far.


Note also, about your earlier comments on lending deterrence:
ultimately I think you can always do online lending.

Adam



Re: Unforgeable Blinded Credentials

2006-04-04 Thread Adam Back
On Tue, Apr 04, 2006 at 06:15:48AM +0100, Ben Laurie wrote:
  This illustrates a problem with multi-show credentials, that the holder
  could share his credential freely, and in some cases even publish it,
  and this would allow non-authorized parties to use it.  To avoid this,
  more complicated techniques are needed that provide for the ability
  to revoke a credential or blacklist a credential holder, even in an
  environment of unlinkability.  Camenisch and Lysyanskaya have done quite
  a bit of work along these lines, for example in
  http://www.zurich.ibm.com/%7Ejca/papers/camlys02b.pdf .
 
 So, for the record, has Brands.
 
 I agree that, in general, this is a problem with multi-show credentials
 (though I have to say that using a completely different system to
 illustrate it seems to me to be cheating somewhat).
 
 Brands actually has a neat solution to this where the credential is
 unlinkable for n shows, but on the (n+1)th show reveals some secret
 information (n is usually set to 1 but doesn't have to be). 

I think the shows are linkable, but if you show more than the allowed
number of times, all of the attributes are leaked, including the
credential secret key and potentially some identifying information like
your credit card number, your address etc.

The main use I think is to have 1-show, where if you show more than 1
time your identity is leaked -- for offline electronic cash with fraud
tracing.  But as you say the mechanism generalizes to multiple show.

 This obviously gives a disincentive against sharing if the secret
 information is well chosen (such as here's where to go to arrest
 the guy).

Well the other kind of disincentive was a credit card number.  My
suggestion was to use a large denomination ecash coin to have
anonymous disincentives :) ie you get fined, but you are not
identified.

Adam



Re: Unforgeable Blinded Credentials

2006-04-02 Thread Adam Back
On Sat, Apr 01, 2006 at 12:35:12PM +0100, Ben Laurie wrote:
 However, anyone I show this proof to can then masquerade as a silver
 member, using my signed nonce. So, it occurred to me that an easy
 way to prevent this is to create a private/public key pair and
 instead of the nonce use the hash of the public key. Then to prove
 my silver status I have to show that both the hash is signed by BA
 and that I possess the corresponding private key (by signing a
 nonce, say).  It seems to me quite obvious that someone must have
 thought of this before - the question is who? Is it IP free?

Well I thought of this a few years ago also.  However I suspect you'd
find the same idea earlier as a footnote in Stefan Brands' book.  (It's
amazing how much stuff is in there; I thought I had found something else
interesting -- offline transferable cash -- but it turns out that also
was a footnote, referring to someone's MSc thesis.)
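A minimal sketch of the show protocol Ben describes -- hash of the
public key signed by the issuer, plus proof of possession via a fresh
nonce.  Ed25519 and the pyca/cryptography package are my substitutions
for illustration (any signature scheme works):

  import hashlib, os
  from cryptography.hazmat.primitives import serialization
  from cryptography.hazmat.primitives.asymmetric import ed25519

  ba_key = ed25519.Ed25519PrivateKey.generate()         # issuer ("BA")
  member_key = ed25519.Ed25519PrivateKey.generate()     # member's key pair

  member_pub = member_key.public_key().public_bytes(
      encoding=serialization.Encoding.Raw,
      format=serialization.PublicFormat.Raw)

  # Issue: BA signs the hash of the member's public key.
  credential = ba_key.sign(hashlib.sha256(member_pub).digest())

  # Show: verifier checks BA's signature on H(pubkey), then challenges
  # the member to sign a fresh nonce, proving possession of the private key.
  nonce = os.urandom(32)
  proof = member_key.sign(nonce)

  ba_key.public_key().verify(credential, hashlib.sha256(member_pub).digest())
  ed25519.Ed25519PublicKey.from_public_bytes(member_pub).verify(proof, nonce)

As the quoted text goes on to note, this is linkable: the same public
key (or its hash) is shown every time.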

 Obviously this kind of credential could be quite useful in identity
 management. Note, though, that this scheme doesn’t give me
 unlinkability unless I only show each public/private key pair
 once. What I really need is a family of unlinkable public/private
 key pairs that I can somehow get signed with a single “family”
 signature (obviously this would need to be unlinkably transformed
 for each member of the key family).

This is harder; I thought about this a bit also.

I was thinking a way to do this would be to have a self-reblindable
signature.  Ie you can re-blind the certificate signature such that
the signature remains valid, but it is unlinkable.  I haven't so far
found a way to do this with any of the schemes.

So it would for example be related to the more recent publicly
re-encryptable Elgamal-based signatures.  (A third party can re-encrypt
the already-encrypted message without themselves being able to
decrypt the message.)


Brands also has a mechanism to simplify the use-each-cert-once method.
He can have the CA reissue you a new cert without having to go through
the attribute verification phase.  Ie you present an old cert and get
it reblinded, and the CA does not even (if I recall) see what attributes
you have.  So you just periodically get yourself another batch.
That mostly does what you want, just with some assistance from the CA.

Adam



Re: Your secrets are safe with quasar encryption

2006-03-30 Thread Adam Back
How many suitable quasars are there?  You'd be damn lucky if it's a
cryptographic-strength number.

Now you might think there are limits to how many signals you can
listen to and that would be some protection, however you can still
brute-force guess a signal, and the probability of guessing the right
key would be rather high compared to eg 2^-256 per guess with AES.

Also they offer the strange comment "The method does not require a
large radio antenna or that the communicating parties be located in
the same hemisphere, as radio signals can be broadcast over the
internet at high speed."  So if we are talking only about enough
signals such that they can be continuously monitored, or a trusted
server which monitors your subset for you... well then how do you
secure the stream?  (Ie if you send it over the internet AES-encrypted,
you might just as well AES-encrypt your data.)

Sounds more than a bit dubious overall.

Adam

On Wed, Mar 29, 2006 at 06:20:33PM -0800, Sean McGrath wrote:
 http://www.newscientisttech.com/article.ns?id=dn8913print=true
 
 Your secrets are safe with quasar encryption
 
 * 16:00 29 March 2006
 * NewScientist.com news service
 * Will Knight
 
 Intergalactic radio signals from quasars could emerge as an exotic but 
 effective new tool for securing terrestrial communications against 
 eavesdropping.



conservative choice: encrypt then MAC (Re: general defensive crypto coding principles)

2006-02-09 Thread Adam Back
Don't forget Bleichenbacher's error channel attack on SSL
implementations, which focussed on the MAC-then-encrypt design of
SSL... web servers gave different errors for malformed padding vs
plaintext MAC failure.  The lesson I drew from that is that the
conservative choice is encrypt-then-MAC.

I don't think encrypt-then-MAC presents a timing attack, because your
chances of getting past the MAC are 1/2^80 in the first place.

And obviously you include the whole ciphertext (including the IV!) in
the MAC.  In fact anything which affects the decryption process should
be in the MAC.
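A minimal encrypt-then-MAC sketch along those lines.  The keystream
cipher here is a toy (SHA-256 in counter mode) standing in for a real
stream/CTR cipher; the point is the order of operations: separate keys,
MAC over the IV plus the full ciphertext, verify before any decryption:

  import hashlib, hmac, os

  def keystream(key, iv, n):
      out, ctr = b"", 0
      while len(out) < n:
          out += hashlib.sha256(key + iv + ctr.to_bytes(8, "big")).digest()
          ctr += 1
      return out[:n]

  def seal(enc_key, mac_key, plaintext):
      iv = os.urandom(16)
      ct = bytes(p ^ k for p, k in zip(plaintext, keystream(enc_key, iv, len(plaintext))))
      tag = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()  # MAC covers the IV too
      return iv, ct, tag

  def open_(enc_key, mac_key, iv, ct, tag):
      expect = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()
      if not hmac.compare_digest(tag, expect):
          raise ValueError("MAC failure")          # reject before decrypting
      return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, iv, len(ct))))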

Adam

On Thu, Feb 09, 2006 at 05:01:05PM +1300, Peter Gutmann wrote:
 Jack Lloyd [EMAIL PROTECTED] writes:
 Bellare and Namprempre have a paper on this [worth reading IMO;
 http://www-cse.ucsd.edu/~mihir/papers/oem.html] which suggests that this
 method (which they term Encrypt-and-MAC) has problems in terms of information
 leakage. An obvious example occurs when using a deterministic authentication
 scheme like HMAC - an attacker can with high probability detect duplicate
 plaintexts by looking for identical tags. They also show that using a MAC on
 the ciphertext is a secure construction in a fairly broad set of cases.
 
 Here's a trivial way in which it can be weaker: Let's say you MAC the
 ciphertext and if it checks out OK you decrypt it and use it.  If you're using
 any mode other than ECB (which you'd better be doing) an attacker can
 arbitrarily modify the start of the message by fiddling with the IV.  CBC (by
 far the most widely-used mode) is particularly nasty because you can make the
 decrypted data anything you want, as the IV is xored directly into the
 plaintext.  So you can use encrypt-then-MAC, but you'd better be *very*
 careful how you apply it, and MAC at least some of the additional non-message-
 data components as well.
 
 Another problem with encrypt-then-MAC is that it creates a nice timing channel
 if you bail out on a MAC failure before doing the decryption step.  So while
 EtM may be theoretically better in some (somewhat artificial) cases, it's much
 more brittle in terms of implementation problems.  Since implementors are
 rarely expert cryptographers, I'd prefer the safer MtE rather than EtM for
 protocol designs.



Re: long-term GPG signing key

2006-01-11 Thread Adam Back
There are a number of differences in key management priorities between
(communication) signature and encryption keys.

For encryption keys:
- you want short-lived keys
- you should wipe the keys at the first opportunity
- for archiving you should re-encrypt with storage keys

- you can't detect or prove an encryption key is compromised, as the
attacker will just be decrypting documents

For signature keys:

- you want longer-lived keys (or two-tier keys: one for certifying
that is kept offline, and one for signing communications that is
online) - in fact many applications don't even want signatures, they
want authentication (convince the recipient of author and integrity,
but be non-transferable)

- with signature keys, if they are compromised and the compromised key
used, there is risk (to the attacker) that the recipient or others can
detect and prove this.

I do agree though that the relative value of encryption vs signature
depends on the application.

Adam

On Wed, Jan 11, 2006 at 09:04:07AM -0500, Perry E. Metzger wrote:
 
 Ian G [EMAIL PROTECTED] writes:
  Travis H. wrote:
  I'd like to make a long-term key for signing communication keys using
  GPG and I'm wondering what the current recommendation is for such.  I
  remember a problem with Elgamal signing keys and I'm under the
  impression that the 1024 bit strength provided by p in the DSA is not
  sufficiently strong when compared to my encryption keys, which are
  typically at least 4096-bit D/H, which I typically use for a year.
 
  1. Signing keys face a different set of
  non-crypto threats than to encryption
  keys.  In practice, the attack envelope
  is much smaller, less likely.
 
 I call bull.
 
 You have no idea what his usage pattern is like, and you have no idea
 what the consequences for him of a forged signature key might be. It
 is therefore unreasonable -- indeed, unprofessional -- to make such
 claims off the cuff.
 
 -- 
 Perry E. Metzger  [EMAIL PROTECTED]
 



Re: OpenSSL BIGNUM vs. GMP

2006-01-03 Thread Adam Back
On Tue, Jan 03, 2006 at 10:10:50PM +, Ben Laurie wrote:
 Jack Lloyd wrote:
  Some relevant and recent data: in some tests I ran this weekend
  [gmp faster than openssl]
  AFAIK blinding alone can protect against all (publicly known)
  timing attacks; am I wrong about this?
 
 Yes, you are - there's the cache attack, which requires the attacker to
 have an account on the same machine. I guess I shouldn't have called it
 constant time, since it's really constant memory access that defends
 against this.

Does OpenSSL defend against cache-related attacks?
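(For reference, a toy sketch of the blinding Jack refers to -- not
OpenSSL's actual code.  Before the private-key operation the input is
multiplied by r^e for random r, and r is stripped afterwards, so the
exponentiation timing is decorrelated from the attacker-chosen input.
Toy RSA numbers, for illustration only:

  import secrets
  from math import gcd

  p, q, e = 61, 53, 17                       # toy RSA (n = 3233)
  n = p * q
  d = pow(e, -1, (p - 1) * (q - 1))

  def blinded_decrypt(c):
      while True:
          r = secrets.randbelow(n - 2) + 2
          if gcd(r, n) == 1:
              break
      c_blind = (c * pow(r, e, n)) % n       # randomize the input
      m_blind = pow(c_blind, d, n)           # timing now depends on r as well
      return (m_blind * pow(r, -1, n)) % n   # strip the blinding factor

  m = 42
  assert blinded_decrypt(pow(m, e, n)) == m

Blinding randomizes the operand, which defeats timing attacks on the
exponentiation, but as Ben says it does nothing about key-dependent
memory access patterns, which is what cache attacks exploit.)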

Adam



Re: Defending users of unprotected login pages with TrustBar 0.4.9.93

2005-09-21 Thread Adam Back
I would think it would be safer to block the site, or provide a
warning dialog.  (This is what I was expecting when I started reading
the head post; I was a bit surprised at the interventionism of actually
going ahead and fixing the site -- maybe blocking or warning would be a
better default behavior.)


btw Regarding unadvertised SSL equivalents, I have noticed that if you
login to gmail, you get SSL for the login, but then http for the web
mailer.  However if you edit the URL after login to https, it appears
to work ok over SSL also.

Adam

On Mon, Sep 19, 2005 at 04:20:07PM -0700, John Gilmore wrote:
 Perhaps the idea of automatically redirecting people to alternative
 pages goes a bit too far:
 
  1. TrustBar will automatically download from our own server,
  periodically, a list of all of the unprotected login sites, including
  any alternate protected login pages we are aware of. By default,
  whenever a user accesses one of these unprotected pages, she will be
  automatically redirected to the alternate, protected login page.
 
 How convenient!  So if I could hack your server, I could get all
 TrustBar users' accesses -- to any predefined set of pages on the
 Internet -- to be redirected to scam pages.
 
 A redirect to an untrustworthy page is just as easy as a redirect to a
 trustworthy page.  The question is who you trust.
 
  BTW, TrustBar is an open-source project, so if some of you want to
  provide it to your customers, possibly customized (branded) etc., there
  is no licensing required.
 
 Also providing a handy platform for slightly modified versions, that will
 take their cues from a less trustworthy list of redirects.



e2e security by default (Re: e2e all the way)

2005-08-27 Thread Adam Back
OK summing up: I think e2e secure, and secure by default.

On Fri, Aug 26, 2005 at 04:17:32PM -0400, Steven M. Bellovin wrote:
 On the contrary -- I did say that I support and use e2e security.  I 
 simply said that user-to-server security solves a lot of many -- most? 
 -- people's security needs.

I think user-to-server security is not secure by what is in my view the
most important comms security metric -- e2e security.  So if one
engineers for user-to-server as the default, your system becomes not
secure by default.

Don't forget that people's security model can change radically without
warning.  End users typically give no prior thought to security
until something goes wrong, at which point they are screwed if the
default is secure comms to the UTP.

People complain at Microsoft, for example, if its software is not
secure by default.  The BSD OSes go to some pains to be secure by
default.

If you're saying yes, it _could_ be e2e secure if users jump through
x,y,z hoops (like run your own IM server!)... well, you know that even
power users, who do want security, may give up at the inconvenience of
that.

 I don't think it is that hard to do e2e security.  Skype does it.
 Fully transparently.
 
 Really?  You know that the public key you're talking to corresponds to 
 a private key held by the person to whom you're talking?  Or is there a 
 MITM at Skype which uses a per-user key of its own?

I draw the line for IM security:

- private keys generated on the client
- public keys maybe by default certified by a central entity
  - but the advanced user has the choice to use other certification,
including out of band, WoT etc
(the central-entity one can easily automate, which is what skype does)

Now a kind of active MITM with rogue-CA attack is possible with
collusion of the central CA (by issuing a second certificate for the
wire-tapping party); however the advanced user can detect this and come
away with evidence of it.  The fact that the advanced user retains
this ability I think adds value even for non-technical users; the CA
runs the risk of ruining its reputation, violating the CPS etc, and of
there being evidence.

btw I think there is significant additional value in _forcing_ the
attacker to sniff the traffic *and* do an active MITM with rogue-CA
attack.  With your by-default route through the UTP, the attacker has a
natural and convenient place to subpoena, OS-penetrate etc, and
undetectably snoop on traffic.

 Btw, I regularly use 3 different machines when talking to my Jabber
 server.  Is your client going to cache all 3 keys for me?  Will all
 of my correspondents know when to click yes and when not to?

I used key roaming in this scenario when I had this problem.  (Without
giving the central server the cleartext private key.)

 Here's the problem for a protocol designer.  You want to design a 
 protocol that will work as securely as possible, on a wide range of 
 platforms, over a reasonably long period of time.  What do you do?  If 
 you engineer only for e2e security, you end up in a serious human 
 factors trap (cue Why Johnny Can't Encrypt and Simson Garfinkel's 
 dissertation).  Instead, I recommend engineering for a hybrid scenario 
 -- add easy-to-use client-to-server security, which solves at least 75% 
 of most people's threats (I suspect it's really more like 90-95%), 
 while leaving room in the protocol for e2e security for people who need 
 it and/or can use it, especially as operating environments change.  
 This is precisely what Jabber has done.
 
 To sum it up in one sentence: design for the future *and* the present.

I disagree.  My metrics for secure IM protocol design are:

- private keys are generated on client machine
- private keys do not leave client machine in unencrypted form
- end2end security where possible
- immediate forward secrecy where possible

And you can do auth-key roaming in this.

(Note for IM security you are better off certifying auth keys and
using the auth keys to authenticate EDH; if the user forgets the
password etc., you can just issue new auth certs).

If you've looked at IM security, there are a number of other
interesting challenges also btw: joining and leaving security, and the
fact that the comms group can be > 2 endpoints.  Joining and leaving
security argue for backward-secure and forward-secure re-keying
respectively.

Adam



e2e all the way (Re: Another entry in the internet security hall of shame....)

2005-08-26 Thread Adam Back
On Fri, Aug 26, 2005 at 11:41:42AM -0400, Steven M. Bellovin wrote:
 In message [EMAIL PROTECTED], Adam Back writes:
 That's broken, just like the WAP GAP ... for security you want
 end2end security, not a secure channel to an UTP (untrusted third
 party)!
 
 
 What is security?  What are you trying to protect, and against whom?

Well I think security in IM, as in all comms security, means security
such that only my intended recipients can read the traffic.  (aka e2e
security).

I don't think the fact that you personally don't care about the
confidentiality of your IM messages should argue for not doing it.
Fair enough, you don't need it personally, but it is still the correct
security model.  Some people and businesses do need e2e security.  (It
wasn't quite clear -- you mention you advised jabber; if you advised
jabber to skip e2e security because it's too hard... bad call I'd
say.)

 Do I support e2e crypto?  Of course I do!  But the cost -- not the
 computational cost; the management cost -- is quite high; you need
 to get authentic public keys for all of your correspondents.  That's
 beyond the ability of most people.

I don't think it is that hard to do e2e security.  Skype does it.
Fully transparently.

Another option: I would prefer ssh style cached keys and warnings if
keys later change (opportunistic encryption) to a secure channel to
the UTP (MITM as part of the protocol!).
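A minimal ssh-style cache, for concreteness -- the file location and
fingerprint format here are arbitrary assumptions: accept and record a
key on first contact, warn loudly if it ever changes:

  import hashlib, json, os

  CACHE = os.path.expanduser("~/.im_known_keys.json")

  def check_key(peer: str, pubkey: bytes) -> None:
      fp = hashlib.sha256(pubkey).hexdigest()
      known = json.load(open(CACHE)) if os.path.exists(CACHE) else {}
      if peer not in known:
          known[peer] = fp                     # first use: trust and cache
          with open(CACHE, "w") as f:
              json.dump(known, f)
      elif known[peer] != fp:
          raise ValueError("WARNING: key for %s changed -- possible MITM" % peer)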

Ssh-style is definitely not hard.  I mean, nothing is easier, no doubt,
than slapping an SSL tunnel over the server-mediated IM protocol, but
if the security experts are arguing for the easy way out, what hope is
there?  I'm more used to having to argue with application
functionality focussed people that they need to do it properly -- not
with crypto people.


I do think we have a duty in the crypto community to be advocates for
truly secure systems.  We are building piecemeal the de facto privacy
landscape of the future as everything moves to the internet.  Take
another example... the dismal state of VOIP security.  I saw similar
arguments on the p2p-hackers list a few days ago about the security of
p2p voip: who cares about voice privacy etc.

Adam



Re: How many wrongs do you need to make a right?

2005-08-17 Thread Adam Back
Not to defend PKI, but what about delta-CRLs?

Maybe they were not available at the time of the Navy deployment?  But
they certainly mean that people can download just the changes since the
last update.

Steven writes:
 [alternatives] such as simply publishing the hash of revoked
 certificates,

Well presumably you mean a Merkle hash tree or something?  (A single
hash of all the revoked certs doesn't help you, as you don't know which
are revoked and so have insufficient data to feed into the hash
function to verify whether a given cert is on the list.)
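To make Steven's Bloom filter variant concrete, a toy version over
revoked-cert hashes; parameters are illustrative.  Note the error is
one-sided: a valid cert can occasionally look revoked (false positive),
but a revoked cert can never look valid:

  import hashlib

  class Bloom:
      def __init__(self, m_bits=1 << 20, k=7):
          self.m, self.k = m_bits, k
          self.bits = bytearray(m_bits // 8)

      def _positions(self, item: bytes):
          for i in range(self.k):                # k independent hash positions
              h = hashlib.sha256(bytes([i]) + item).digest()
              yield int.from_bytes(h[:8], "big") % self.m

      def add(self, item: bytes):
          for pos in self._positions(item):
              self.bits[pos // 8] |= 1 << (pos % 8)

      def __contains__(self, item: bytes):
          return all(self.bits[pos // 8] & (1 << (pos % 8))
                     for pos in self._positions(item))

  crl = Bloom()
  crl.add(hashlib.sha256(b"revoked cert DER").digest())
  assert hashlib.sha256(b"revoked cert DER").digest() in crl

And it leaks little directly: you can only test certs you already hold,
which addresses the privacy concern Florian raises.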

Adam

On Wed, Aug 17, 2005 at 08:40:19AM -0400, Steven M. Bellovin wrote:
 In message [EMAIL PROTECTED], Florian Weimer writes:
 
 
 Can't you strip the certificates which have expired from the CRL?  (I
 know that with OpenPGP, you can't, but that's a different story.)
 
 OTOH, I wouldn't be concerned by the file size, although it's
 certainly annoying.  I would be really worried that the contents of
 that CRL leaks sensitive information.  At least from a privacy point
 of view, this is a big, big problem, especially if you include some
 indication which allows you to judge the validity of old signatures.
 
 
 One can easily conceive of schemes that don't have such problems, such 
 as simply publishing the hash of revoked certificates, or using a Bloom 
 filter based on the hashes.
 
 Of course, that doesn't mean that was how it was done...



locking door when window is open? (Re: solving the wrong problem)

2005-08-08 Thread Adam Back
Single picket fence -- doesn't work without a lot of explaining.

The one I have usually heard is the obvious and intuitive
locking the door when the window is open.

(Ie fixating on the quality of the dead-bolt etc on the front door when
the window beside it is _open_!)
Adam

On Sat, Aug 06, 2005 at 04:27:51PM -0400, John Denker wrote:
 Perry E. Metzger wrote:
 
 We need a term for this sort of thing -- the steel tamper
 resistant lock added to the tissue paper door on the wrong vault
 entirely, at great expense, by a brilliant mind that does not
 understand the underlying threat model at all.
 
 Anyone have a good phrase in mind that has the right sort of flavor
 for describing this sort of thing?
 
 In a similar context, Whit Diffie once put up a nice
 graphic:  A cozy little home protected by a picket fence.
 The fence consisted of a single picket that was a mile
 high ... while the rest of the perimeter went totally
 unprotected.
 
 So, unless/until somebody comes up with a better metaphor,
 I'd vote for one-picket fence.



Re: mother's maiden names...

2005-07-16 Thread Adam Back
I think in the UK cheque signatures are not verified below £30,000
(about US $53,000).  I presume it is just economics ... the cost of the
infrastructure to verify vs the value of verifying, given the fraud rate.

Adam

On Fri, Jul 15, 2005 at 01:42:08PM +0100, Ben Laurie wrote:
 My bank doesn't even bother to move them around, as I discovered when I 
 had a chequebook stolen and cheques for large sums forged, and honoured.
 
 When I spoke to a person who had found the cheque in their store I asked 
 is it my signature? (yes, I am sufficiently absent-minded that I might 
 have written a large cheque and forgotten about it). Their response was 
 that they didn't know and had no way to find out. In the end they faxed 
 me a copy so I could check it myself.



Re: use KDF2 / IEEE1363a (Re: expanding a password into many keys)

2005-06-14 Thread Adam Back
I suppose I should also have noted that the master key going into KDF2
would be derived with PBKDF2 from a password, if this is a
password-derived set of keys, to get the extra features of a salt and
an iteration count to slow down brute force.

Adam

On Tue, Jun 14, 2005 at 04:21:39AM -0400, Adam Back wrote:
 The non-banking version of this is the KDF2 function in IEEE1363a.
 
 Same deal:  
 
 void KDF2( const void* Z, int, const void* P, int, void* K, int );
 
 Z = master-key, P = permuter, K = derived key
 
 each is variable sized.  (Sorry, I implemented the source for someone
 who has the copyright, or you could have that.)  It's very simple to
 implement however:
 
 key = SHA1( Z || 0 || P ) || SHA1( Z || 1 || P ) ...
 
 for as many bytes as you need.  So I would eg use P = AES and P =
 HMAC to derive two different keys.  Looks like KDF2 has the same
 problem John mentioned, so don't do that (don't let the attacker choose P).
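A runnable sketch of the expansion just described; note the counter
placement and its width are assumptions here (IEEE 1363a KDF2 proper
uses a 4-byte big-endian counter starting at 1), and P must be a fixed
label, never attacker-chosen, per the caveat above:

  import hashlib

  def kdf2_like(z: bytes, p: bytes, n_bytes: int) -> bytes:
      out, ctr = b"", 0
      while len(out) < n_bytes:                 # SHA1(Z || ctr || P) blocks
          out += hashlib.sha1(z + ctr.to_bytes(4, "big") + p).digest()
          ctr += 1
      return out[:n_bytes]

  master = b"master key material"               # e.g. the PBKDF2 output above
  enc_key = kdf2_like(master, b"AES", 32)       # one key per fixed label
  mac_key = kdf2_like(master, b"HMAC", 32)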



Re: Microsoft info-cards to use blind signatures?

2005-05-30 Thread Adam Back
Yes, but the other context, from the related group of blog postings, is
Kim Cameron's (Microsoft) laws of identity [1], in the context of which
this comment is made.

It is relatively hard to see how one could implement an identity
system meeting the stated laws without involving blind signatures of
some form...

Adam

[1] http://www.identityblog.com/stories/2005/05/13/TheLawsOfIdentity.html

On Sat, May 21, 2005 at 11:17:04AM -0700, David Wagner wrote:
 http://www.idcorner.org/index.php?p=88
 The Identity Corner
 Stephan Brands
 
 I am genuinely excited about this
 development, if it can be taken as an indication that Microsoft is getting
 serious about privacy by design for identity management. That is a big
 if, however: indeed, the same Microsoft researcher who came up with the
 patent (hello Dan!) was also responsible for Microsoft e-cash patent no.
 5,768,385 that was granted in 1998 but was never pursued.
 
 What a strange criticism of Microsoft!  Here is something to know about
 patents: many companies file patents all the time.  That doesn't mean
 they are committing to build a product around every patent they file.
 The fact that Microsoft hasn't pursued patent 5,768,385 tells you
 essentially nothing about what they are going to do with this patent.
 
 I wouldn't take patent filings as an indicator of intent or of future
 business strategy.



Re: and constrained subordinate CA costs?

2005-03-28 Thread Adam Back
On Fri, Mar 25, 2005 at 04:02:36PM -0600, Matt Crawford wrote:
 There's an X.509v3 NameConstraints extension (which the higher CA would 
 include in the lower CA's cert) but I have the impression that ends 
 system software does not widely support it.  And of course if you don't 
 flag it critical, it's not very effective.

Well, I would say downright dangerous -- if it's not flagged critical
and not understood, right?

The implication would be that an intended constrained subordinate CA
would be able to function as an unconstrained subordinate CA in the
eyes of many clients -- free ability to forge any domain in the global
SSL PKI.

Adam

On Fri, Mar 25, 2005 at 04:02:36PM -0600, Matt Crawford wrote:
 
 On Mar 25, 2005, at 11:55, Florian Weimer wrote:
 
 Does anyone have info on the cost of a sub-ordinate CA cert with a
 name-space constraint (limited to issuing certs on domains which are
 sub-domains of a domain of your choice... ie only valid to issue certs
 on sub-domains of foo.com)?
 
 Is there a technical option to enforce such a policy on subordinated
 CAs?
 



and constrained subordinate CA costs? (Re: SSL Cert prices ($10 to $1500, you choose!))

2005-03-25 Thread Adam Back
The URL John forwarded gives a survey of prices for regular certs and
subdomain wildcard certs/super certs (ie *.mydomain.com all considered
valid with respect to a single cert).

Does anyone have info on the cost of a sub-ordinate CA cert with a
name-space constraint (limited to issuing certs on domains which are
sub-domains of a domain of your choice... ie only valid to issue certs
on sub-domains of foo.com)?

Maybe the answer is a lot of money... CA operators probably view
users of this kind of tech as corporations with big infrastructure
to secure.  It sounds like http://www.thawte.com/spki/ offers this
kind of service.  However it sounds like it is web-based, so they have
really just bundled a more streamlined way to create lots of certs.

(Thawte spki means starter PKI (not simple pki)).

Adam

On Fri, Mar 04, 2005 at 03:53:51PM -0800, John Gilmore wrote:
 For the privilege of being able to communicate securely using SSL and a
 popular web browser, you can pay anything from $10 to $1500.  Clif
 Cox researched cert prices from various vendors:
 
   http://neo.opn.org/~clif/SSL_CA_Notes.html



pgp global directory bugged instructions

2004-12-22 Thread Adam Back
So PGP are now running a PGP key server which attempts to consolidate
the information from the existing key servers, but screens it by the
ability to receive email at the address.

So they send you an email with a link in it, and you go there and it
displays your key userid, keyid, fingerprint and email address.

Then it says:

| Please verify that the email address on this key, [EMAIL PROTECTED],
| is your email address, and is properly configured to send and
| receive PGP secured email.
|
| If the information is correct, click 'Accept'. By clicking 'Accept',
| your key will be published to the directory, where other PGP users
| will be able to retrieve it in order to encrypt messages to you and
| verify signed messages from you.
|
| If this information is incorrect, click 'Cancel'. By clicking
| 'Cancel', this key will not be published. You may then submit
| another key with the correct information.

So here's the problem: it does not mention anything about checking
that this is your fingerprint.  If it's not your fingerprint but it is
your email address, you could end up DoSing yourself, or at least
perpetuating an imposter key into the new, supposedly email-validated
keyserver db.

(For example on some key servers there are keys with my name and email
that are nothing to do with me -- they are pure forgeries.)

Suggest they add something to say, in red letters: check that the
fingerprint AND keyid match your key.

Adam



Re: The Pointlessness of the MD5 attacks

2004-12-15 Thread Adam Back
Well, the people doing the checking (a subset of the power users) may
say I checked the source and it has this checksum, and another user
may download that checksum and be subject to a MITM and not know it.

Or I could mail you the source and you would check it with the checksum
and compare the checksum to the web site.

Or someone could just go ahead and change the source without changing
the checksum or any of the changelog / CVS change notification stuff,
and people would not think there is a change to review.

Some of these scenarios will likely work some of the time against
users.

Adam

On Tue, Dec 14, 2004 at 11:21:13PM +, Ben Laurie wrote:
 Adam Back wrote:
 I thought the usual attack posited when one can find a collision on a
 source checksum is to make the desired change to source, then tinker
 with something less obvious and more malleable like lsbits of a UI
 image file until you find your collision on two input source packages.
 
 Quite so, but the desired change to source is either not visible, or 
 suspicious. If it's not visible, then just make it malicious. And if 
 it's suspicious then it shouldn't be run.



Re: The Pointlessness of the MD5 attacks

2004-12-15 Thread Adam Back
Is this the case?  Can't we instead start with code C and malicious C'
and try to find a collision H(C||B) == H(C'||B')?  After trying 2^64
B values we'll find such a collision by the birthday principle.

Now we can have people review and attest to the correctness of code C,
and then we can MITM and surreptitiously swap in C'.
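The birthday estimate is easy to demonstrate on a deliberately
truncated hash: with a 32-bit digest a collision shows up after roughly
2^16 trials, and 2^64 for a 128-bit hash like MD5 is the same
arithmetic.  (For simplicity this searches with a single prefix C
rather than the two-prefix C/C' variant:)

  import hashlib, os

  def h32(data: bytes) -> bytes:
      return hashlib.sha256(data).digest()[:4]   # truncate to 32 bits

  C = b"reviewed source tarball"
  seen, trials = {}, 0
  while True:
      trials += 1
      b = os.urandom(8)
      d = h32(C + b)
      if d in seen and seen[d] != b:             # birthday collision
          print("collision after %d trials" % trials)
          break
      seen[d] = b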

Adam

On Wed, Dec 15, 2004 at 08:44:03AM +, Ben Laurie wrote:
 Adam Back wrote:
 Well the people doing the checking (a subset of the power users) may
 say I checked the source and it has this checksum, and another user
 may download that checksum and be subject to MITM and not know it.

 You are missing the point - since the only way to make this trick work 
 is to include a very specific chunk of 64 bytes with a few bits flipped 
 (or not), the actual malicious code must be present anyway and triggered 
 by the flipped bits. So, all of these attacks rely on the code not being 
 inspected or being sufficiently cunning that inspection didn't help. 
 And, if that's the case, the attacks work without any MD5 trickery.



Re: The Pointlessness of the MD5 attacks

2004-12-14 Thread Adam Back
I thought the usual attack posited when one can find a collision on a
source checksum is to make the desired change to source, then tinker
with something less obvious and more malleable like lsbits of a UI
image file until you find your collision on two input source packages.

Adam

On Tue, Dec 14, 2004 at 10:17:28PM +, Ben Laurie wrote:
 But the only way I can see to exploit this would be to have code that
 did different things based on the contents of some bitmap. My contention
 is that if the code is open, then it will be obvious that it does
 something bad if a bit is tweaked, and so will be suspicious, even if
 the something bad is not triggered in the version seen.



Brands credential book online (pdf)

2004-10-05 Thread Adam Back
Stefan Brands' book on his credential / ecash technology is now
downloadable in pdf format from Credentica's web site:

http://www.credentica.com/the_mit_pressbook.php

(Previously it was only available in hardcopy, and only parts of the
content were described in academic papers.)

Also the credentica web site has gone live, lots of content.

Credentica is Stefan's company around digital credentials / ecash;
anonymity news watchers may have seen some discussion of the
Credentica startup earlier this year.

Adam



anonymous IP terminology (Re: [anonsec] Re: potential new IETF WG on anonymous IPSec (fwd from [EMAIL PROTECTED]))

2004-09-11 Thread Adam Back
Joe Touch [EMAIL PROTECTED] wrote:
 The point has nothing to do with anonymity;
 
 The last one, agreed. But the primary assumption is that we can avoid a 
 lot of infrastructure and impediment to deployment by treating an 
 ongoing conversation as a reason to trust an endpoint, rather than a 
 third-party identification. Although anonymous access is not the primary 
 goal, it is a feature of the solution.

Joe:

I respectfully request that you call this something other than
anonymous.  It is quite confusing and misleading.  

Some people have spent quite a bit of time and effort in fact working
on anonymous IP and anonymous/pseudonymous transports.

For example at ZKS we worked on an anonymous/pseudonymous IP product
(which means cryptographically hiding the source IP address from the
end-site).

There are some new open source anonymous IP projects.


Your proposal, which may indeed have some merit in simplifying key
management, has _nothing_ to do with anonymous IP.  Your overloading
of the established term will dilute the correct meaning.

Zooko provided the correct term and provided references:
opportunistic encryption.  It sounds to have similar objectives to
what John had called opportunistic encryption and tried to do with
freeSWAN.  Lower-level terms may be unauthenticated, as Hal
suggested, or non-certified key management (as with the SSH caching of
previously seen IP-to-key bindings, and warnings when they
change).

 Although anonymous access is not the primary goal, it is a feature
 of the solution.

The access is _not_ anonymous.  The originator's IP, ISP call traces,
phone access records will be all over it and associated audit logs.

The distinguishing feature of anonymous is that not only is your name
not associated with the connection but there is no PII (personally
identifiable information) associated with it or obtainable from logs
kept.

And to be clear also: anonymous means unlinkably anonymous across
multiple connections (which the SSH type of authentication would not
be); linkably anonymous means some observable linkage exists between
sessions which come from the same source (though no PII); and
pseudonymous means the same as linkably anonymous plus association with
a persistent pseudonym.

Again, there are actually cryptographic protocols for having anonymous
authentication: ZKPs, multi-show unlinkable credentials, and
refreshable (and so unlinkable) single-show credentials.

Adam



Re: anonymous IP terminology (Re: [anonsec] Re: potential new IETF WG on anonymous IPSec (fwd from [EMAIL PROTECTED]))

2004-09-11 Thread Adam Back
On Sat, Sep 11, 2004 at 11:38:00AM -0700, Joe Touch wrote:
 Although anonymous access is not the primary goal, it is a feature
 of the solution.
 
 The access is _not_ anonymous.  The originator's IP, ISP call traces,
 phone access records will be all over it and associated audit logs.
 
 And you cannot determine whether that IP address came from the authentic 
 owner of that address or is spoofed. I'll try to be more careful - 
 you're right, in that it's not anonymous access. It IS anonymous 
 security, though.

I think you are confusing a weak potential for a technical ambiguity
of identity under attack conditions with anonymity.  (The technical
ambiguity would likely disappear in most practical settings).

Anonymity implies positive steps to avoid linking with PII.  With
anonymity you want not just technical ambiguity, but genuinely
plausible deniability from an anonymity set -- preferably a large set
of users who could equally plausibly have established a given
connection, participated in an authentication protocol etc.

We don't after all call TCP anonymous, and your system is clearly
_less_ anonymous than TCP, as there are security mechanisms involved
with various keys and authentication protocols which will only reduce
ambiguity.

 The distinguishing feature of anonymous is that not only is your name
 not associated with the connection but there is no PII (personally
 identifiable information) associated with it or obtainable from logs
 kept.
 
 If I know the IP address you used, I still know NOTHING, FWIW. This is 
 no more distinguishable than the port number is in identifying something 
 behind a NAT.

Practically, knowing the IP address conveys a lot.  Many ISPs have
logs, some associated with DSL subscriber and phone records, for
billing, bandwidth caps, abuse complaints, spam cleanup etc etc.

The IP may be used for many different logged activities and some of
those activites may involve directly identified authentication.
People go to lengths to hide their IP precisely because it does
typically convey all too much.

 And to be clear also anonymous means unlinkable anonymous across
 multiple connections (which SSH type of authentication would not be)
 
 That might be more specifically per-connection anonymous, but the term 
 'anonymous' is too general for that usage. However, there's still 
 nothing associated across connections in ANONSEC, IMO.

 You cannot know whether two connections from 10.0.0.1 on two different 
 ports with two different cookies are from the same endpoint. The point 
 of ANONSEC is that you don't care.

If one wants this to be true in practice it has to propagate up the
stack.  (Not the problem of ANONSEC; a problem for the higher-level
app.)

But even at the authentication protocol level one has to be quite
careful.  There are many gotchas if you really do want it to be
unlinkable.  (eg. pseudo random sequences occur in many settings at
different protocol levels which are in fact quite linkable).  I'll
give you one high level example.  At ZKS we had software to remail
MIME mail to provide a pseudonymous email.  But one gotcha is that
mail clients include MIME boundary lines which are pseudo-random
(purely to avoid string collision).  If these random lines are
generated with a non-cryptographic RNG it is quite likely that so
called unlinkable mail would in fact be linkable because of this
higher level protocol.  (We cared about unlinkability even tho' I said
pseudonymous because the user had multiple pseudonyms which were
supposed to be unlinkable across).

I would say if your interest in fixing such pseudo-random sequences
is not present, you should not be calling this anonymous.

But if it is part of your threat model, then you may in fact be using
anonymous authentication and that would be interesting to me at least
to participate.

Adam



finding key pairs with colliding fingerprints (Re: How thorough are the hash breaks, anyway?)

2004-08-28 Thread Adam Back
You would have to either:

- search for candidate collisions amongst public keys you know the
private key for (a bit more expensive), or

- factorize the public key after you have found a collision.

The 2nd one isn't as hard as it sounds, because the public key would be
essentially random and have a non-negligible chance of having trivial
factors.  (Not a secure backdoor, but it would still create a pretty
good mess in DoS terms if such a key pair were published.)

The latter approach is what I used to create a sample dead-fingerprint
attack on PGP 2.x fingerprints.

http://cypherpunks.venona.com/date/1997/06/msg00523.html

(No need to find a hash collision, even though it's MD5, because it has
another bug: the serialization has multiple candidate inputs.)  So I
just searched through the candidate inputs for one I could factor in a
few minutes.
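A small demo of that serialization bug.  If the fingerprint is a hash
over the key material concatenated without length fields (PGP 2.x
hashed n and e raw), then every way of splitting the same byte string
into (n, e) yields the same fingerprint -- the byte values here are
arbitrary stand-ins:

  import hashlib

  material = bytes(range(1, 33))                 # stand-in for n || e

  for split in (24, 28, 30):                     # candidate (n, e) boundaries
      n, e = material[:split], material[split:]
      print(split, hashlib.md5(n + e).hexdigest())   # identical every time

So the attacker is free to pick, among the candidate (n, e) splits, one
where n happens to be easy to factor.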

Adam

On Fri, Aug 27, 2004 at 12:22:26AM +0100, Ian Grigg wrote:
 Correct me if I'm wrong ... but once finding
 a hash collision on a public key, you'd also
 need to find a matching private key, right?
 
 If someone finds a collision for microsoft's windows update cert (or a
 number of other possibilities), and the fan is well and truly buried
 in it.



Re: RPOW - Reusable Proofs of Work

2004-08-21 Thread Adam Back
It's like an online ecash system.  Each recipient sends the RPOW back
to the mint that issued it, to ask if it has been double-spent, before
accepting it as valid.  If it's valid (not double-spent) the RPOW
server sends back a new RPOW for the receiving server to reuse.

Very like Chaum's online ecash protocol, but with no blinding (for
patent reasons) and using hashcash as the way to buy coins.  The other
wrinkle is he can prove the mint cannot issue coins without
exchanging them for hashcash or previously issued coins (up to the
limits of the effectiveness of the IBM tamper-resistant processor
card, and of course up to the limits of your trust in IBM not to sign
hardware code-signing keys that are not generated on board one of
these cards).  This is the same remote attestation feature
used in Trustworthy Computing to opposite effect -- restricting
what users can do with their computers; Hal is instead using it to
have a verifiable server where the user can effectively audit and
check what code it is running.
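A toy sketch of that exchange-on-deposit bookkeeping (no blinding, no
signatures, no tamper-resistant hardware -- just the double-spend
check; names are illustrative):

  import os

  class Mint:
      def __init__(self):
          self.valid, self.spent = set(), set()

      def issue(self) -> bytes:                 # e.g. in exchange for hashcash
          token = os.urandom(16)
          self.valid.add(token)
          return token

      def exchange(self, token: bytes) -> bytes:
          if token not in self.valid:
              raise ValueError("double-spent or bogus token")
          self.valid.remove(token)              # old token dies...
          self.spent.add(token)
          return self.issue()                   # ...a fresh one replaces it

  mint = Mint()
  t1 = mint.issue()
  t2 = mint.exchange(t1)                        # ok
  # mint.exchange(t1) would now raise: t1 already spent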

Adam

On Fri, Aug 20, 2004 at 04:34:00PM -0500, Matt Crawford wrote:
 I'm wondering how applicable RPOW is.  Generally speaking, all
 the practical applications I can think of for a proof-of-work
 are defeated if proofs-of-work are storable, transferable, or
 reusable.
 
 I have some code to play online games with cryptographic protection, 
 cards and dice,
 and I am planning to modify it to let people make bets with RPOWs as
 the betting chips.
 
 If you think of POW as a possible SPAM mitigation, how does the first 
 receiving MTA assure the next MTA in line that a message was paid 
 for?  Certainly the mail relay doesn't want to do new work, but the 
 second MTA doesn't know that the first isn't a spambot.
 


Re: should you trust CAs? (Re: dual-use digital signature vulnerability)

2004-08-01 Thread Adam Back
On Wed, Jul 28, 2004 at 10:00:01PM -0700, Aram Perez wrote:
 As far as I know, there is nothing in any standard or good security
 practice that says you can't have multiple certificates for the same email
 address. If I'm willing to pay each time, Verisign will gladly issue me a
 certificate with my email, I can revoke it, and then pay for another
 certificate with the same email. I can repeat this until I'm bankrupt and
 Verisign will gladly accept my money.

Yes but if you compare this with the CA having the private key, you
are going to notice that you revoked and issued a new key; also the CA
will have your revocation log to use in their defense.

At minimum it is detectable by savvy users who may notice that eg the
fingerprint for the key they have doesn't match with what someone else
had thought was their key.

 I agree with Michael H. If you trust the CA to issue a cert, it's
 not that much more to trust them with generating the key pair.

It's a big deal to let the CA generate your key pair.  Key pairs should
be generated by the user.

Adam



should you trust CAs? (Re: dual-use digital signature vulnerability)

2004-07-28 Thread Adam Back
The difference is if the CA does not generate private keys, there
should be only one certificate per email address, so if two are
discovered in the wild the user has a transferable proof that the CA
is up-to-no-good.  Ie the difference is it is detectable and provable.

If the CA in normal operation generates and keeps (or claims to
delete) the user's private key, then CA misbehavior is _undetectable_.

Anyway if you take the WoT view, anyone who may have a conflict of
interest with the CA, or for whom the CA or its employees or CPS is of
dubious quality, or who may be a target of CA cooperation with law
enforcement, secret services etc, would be crazy to rely on a CA.  WoT
is the answer so that the trust maps directly to the real world trust.
(Outsourcing trust management seems like a dubious practice, which in
my view is for example why banks do their own security,
thank-you-very-much, and don't use 3rd party CA services).

In this view you use the CA as another link in the WoT but if you have
high security requirements you do not rely much on the CA link.

Adam

On Wed, Jul 28, 2004 at 11:15:16AM -0400, [EMAIL PROTECTED] wrote:
 I would like to point out that whether or not a CA actually has the
 private key is largely immaterial because it always _can_ have the
 private key - a CA can always create a certificate for Alice whether or
 not Alice provided a public key.



Re: Brands' private credentials

2004-05-25 Thread Adam Back
On Wed, Apr 28, 2004 at 07:54:50PM +, Jason Holt wrote:
 Last I heard, Brands started a company called Credentica, which
 seems to only have a placeholder page (although it does have an
 info@ address).
 
 I also heard that his credential system was never implemented, 

It was implemented at least twice: once by the ECAFE ESPRIT project years
ago, more recently by ZKS before they stopped licensing the patents.

 Anna Lysyanskaya and Jan Camenisch came up with a credential system
 that I hear is based on Brands'. Anna's dissertation is online and
 might give you some clues.  They might also have been working on an
 implementation.

I looked at the Camenisch protocol briefly a couple of years ago and
it is not based on Brands'.  It is less efficient computationally, and
more rounds of communication are required, if I recall.

But one feature that it does have that Brands doesn't have directly is
self-reblindability.  In their protocol it is the credential holder
who does the blinding, rather than the issuer and holder jointly, and
the holder can also re-blind to get a 2nd unlinkable show.  The way you do this
with Brands is to have the CA issue you a new credential in a
re-issuing protocol; Brands re-issuing protocol has the property that
you do not even have to reveal to the CA what attributes are in the
re-issued cert.

On the re-showable/re-blindable approach, as with Ernie Brickell's
re-showable credential proposal for Palladium, the converse side of
unlinkable re-showing is that there is no efficient way to revoke
credentials.  (If eg the private key is compromised, or the credential
owner violates some associated policy in the Palladium/DRM case).
(Caveat of course I think DRM is an unenforceable idea and the
Schelling point ought to be not to even pretend to do it in software
or hardware, rip-once copy-everywhere *always* wins).

 I came up with a much simpler system that has many similar
 properties to Brands', and even does some things that his doesn't.
 It's much less developed than the other systems, but we did write a
 Java implementation and published a paper at WPES last year about
 it.

Is this the same as described in http://eprint.iacr.org/2002/151/ with
interactive cut-and-choose and large credentials?  There was some
discussion of that protocol in:

http://archives.abditum.com/cypherpunks/C-punks20021028/0076.html 

Not read the new paper you cite yet.

 Note that most anonymous credential systems are encumbered by
 patents.  The implementation for my system is based on the
 Franklin/Boneh IBE which they recently patented, although there's
 another IBE system which may not be encumbered and which should also
 work as a basis for Hidden Credentials.

The problem with Yacobi's scheme (which is based on a composite
modulus variant of DH where you choose n=p.q such that p and q are
relatively smooth so you can do discrete log to setup the public key
for an identity) is that to get desirable security parameters for n
(eg 1024 bits) you have to expend huge amounts of resources per
identity public key.  So I would say it is not really practical.  It
is the only other semi-practical IBE scheme that I am aware of which
is why Boneh and Franklin's IBE based on the Weil pairing was considered
such a break through.

Adam



chaum's patent expiry? (Re: Brands' private credentials)

2004-05-25 Thread Adam Back
Oh yes, my other comment I forgot to mention was that if non-patent
status were a consideration, aside from Wagner's approach, another
approach for which the patent will presently expire is Chaum's
original approach combined with Niels Ferguson's single term offline
coins.  (Don't have citation handy but google will find you both).

Anyone have to hand the expiry date on Chaum's patent?  (Think it is
in patent section of AC for example; perhaps HAC also).

Having an expired patent might be a clearer route to non-patented
status than the putative "this is a blind MAC not a blind signature"
approach of Wagner's protocol.  But I obviously am not a patent
lawyer, and have avoided reading and participating in the writing of
patents.

Adam

On Sun, May 09, 2004 at 05:08:09AM -0400, Adam Back wrote:
 [...]
 I looked at the Camenisch protocol briefly a couple of years ago and
 it is not based on Brands'.  It is less efficient computationally, and
 more rounds of communication are required, if I recall.



Re: Brands' private credentials

2004-05-25 Thread Adam Back
[copied to cpunks as cryptography seems to have a multi-week lag these
days].

OK, now having read:

 http://isrl.cs.byu.edu/HiddenCredentials.html
 http://isrl.cs.byu.edu/pubs/wpes03.pdf

and seeing that it is a completely different proposal, essentially an
application of IBE and an extension of the idea that one has multiple
identities encoding attributes.  (The usual attribute this approach is
used for is time-period of receipt, eg month of receipt, so the sender
knows which key to encrypt with.)

On Wed, Apr 28, 2004 at 07:54:50PM +, Jason Holt wrote:
 properties to Brands', and even does some things that his doesn't.

so here is one major problem with using IBE: everyone in the system
has to trust the IBE server!

 I feel a little presumptuous mentioning it in the context of the
 other systems, which have a much more esteemed set of authors and
 are much more developed, but I'm also pretty confident in its
 simplicity.

One claim is that the system should hide sensitive attributes from
disclosure during a showing protocol.  So in the example given, an
AIDS patient could authenticate to an AIDS db server without revealing
to an outside observer whether he is an AIDS patient or an authorised
doctor.

However can't one achieve the same thing with encryption: eg an SSL
connection and conventional authentication?  

Outside of this, the usual approach to this is to authenticate the
server first, then authenticate the client so the client's privacy is
preserved.


Furthermore there seems to be no blinding at issue time.  So to
obtain a credential you would have to identify yourself to the CA /
IBE identity server, show paper credentials, typically involving True
Name credentials, and come away with a private key.  So it is proposed
in the paper the credential would be issued with a pseudonym.  However
the CA can maintain a mapping between True Name and pseudonym.

However whenever you show the credential, the event is traceable back
to you by collusion with the CA.

 Note that most anonymous credential systems are encumbered by
 patents.

I would not say your Hidden Credential system _is_ an anonymous
credential system.  There is no blinding in the system, period.  All is
gated via a trust-me CA that in this case happens to be an IBE
server, so providing the communication pattern advantages of an IBE
system.

What it enables is essentially an offline server assisted oblivious
encryption where you can send someone a message they can only decrypt
if they happen to have an attribute.  You could call this a credential
system of sorts, where the showing protocol is: the verifier sends you a
challenge, and the shower decrypts the challenge and sends the result
back.

In particular I don't see any way to implement an anonymous epayment
system using Hidden Credentials.  As I understand it, that is simply
not possible, as the system has no inherent cryptographic anonymity.

Adam



Re: Brands' private credentials

2004-05-25 Thread Adam Back
On Mon, May 10, 2004 at 02:42:04AM +, Jason Holt wrote:
  However can't one achieve the same thing with encryption: eg an SSL
  connection and conventional authentication?  
 
 How would you use SSL to prove fulfillment without revealing how?
 You could get the CA to issue you a patient or doctor SSL cert,

Well SSL was just to convince you that you were talking to the right
server (you have reached the AIDS db server).

After that I was presuming you use a signature to convince the server
that you are authorised.  Your comment however was that this would
necessarily leak to the server whether you were a doctor or an AIDS
patient.

However from what I understood of your paper, so does your scheme;
from section 5.1:

P = (P1 or P2) is encoded as HC_E(R,P) = {HC_E(R,P1), HC_E(R,P2)}

With Hidden Credentials, the messages are in the other direction: the
server would send something encrypted for your pseudonym with P1 =
AIDS patient, and P2 = Doctor attributes.  However the server could
mark the encrypted values by encoding different challenge response
values in each of them, right?

(Think you would need something like Bert-Jaap Koops' binding
cryptography, where you can verify externally to the encryption that the
contained encrypted value is the same to prevent that; or some other
proof that they are the same.)
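
A toy model of that OR encoding, with symmetric keys standing in for the
IBE attribute keys (illustrative only), showing that nothing forces the
two slots to contain the same value:

  import os
  from hashlib import sha256

  def E(key, msg):
      # toy stream cipher (XOR pad), so encrypt == decrypt
      pad = sha256(key + b"pad").digest()[:len(msg)]
      return bytes(a ^ b for a, b in zip(pad, msg))

  R = os.urandom(16)
  k_patient, k_doctor = os.urandom(16), os.urandom(16)

  honest = [E(k_patient, R), E(k_doctor, R)]                # HC_E(R, P1 or P2)
  marked = [E(k_patient, R), E(k_doctor, os.urandom(16))]   # marking attack

  assert E(k_doctor, honest[1]) == R   # either credential recovers R...
  assert E(k_doctor, marked[1]) != R   # ...but with only one key you can't
                                       # tell honest from marked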


Another approach to hiding membership is one of the techniques
proposed for non-transferable signatures, where you use the construct:

RSA-sig_A(x),RSA-sig_B(y) and verification is x xor y = hash(message).

Where the sender is proving he is one of A and B without revealing
which one.  (One of the values is an existential forgery, where you
choose a z value first, raise it to the power e, and claim z is a
signature on x= z^e mod n; then you use private key for B (or A) to
compute the real signature on the xor of that and the hash of the
message).  You can extend it to more than two potential signers if
desired.
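
A rough Python sketch of the 1-of-2 construction (toy 512-bit RSA
parameters via sympy; a sketch, not production code):

  import random
  from hashlib import sha256
  from sympy import randprime

  def keygen(bits=512):
      while True:
          p = randprime(2**(bits//2 - 1), 2**(bits//2))
          q = randprime(2**(bits//2 - 1), 2**(bits//2))
          phi = (p - 1) * (q - 1)
          if phi % 65537:                  # ensure e is invertible mod phi
              return p * q, 65537, pow(65537, -1, phi)

  def sign_1of2(m, nA, eA, nB, eB, dB):    # signer knows only B's key
      h = int.from_bytes(sha256(m).digest(), "big")
      while True:
          z = random.randrange(1, nA)      # existential forgery for A:
          x = pow(z, eA, nA)               # z is a valid "signature" on x
          y = x ^ h                        # force x xor y = hash(m)
          if y < nB:
              return z, pow(y, dB, nB)     # real signature with B's key

  def verify_1of2(m, sA, sB, nA, eA, nB, eB):
      h = int.from_bytes(sha256(m).digest(), "big")
      return pow(sA, eA, nA) ^ pow(sB, eB, nB) == h

  nA, eA, dA = keygen()
  nB, eB, dB = keygen()
  sA, sB = sign_1of2(b"msg", nA, eA, nB, eB, dB)
  assert verify_1of2(b"msg", sA, sB, nA, eA, nB, eB)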

  Outside of this, the usual approach to this is to authenticate the
  server first, then authenticate the client so the client's privacy is
  preserved.
 
 If you can trust the server to do so.  Firstly, hidden credentials limit what
 the server learns, so you don't *have* to trust the server as much.  But
 secondly, they also solve the problem which shifts to the server when it goes
 first: 

OK so the fact that the server is the AIDS db server is itself secret.
Probably a better example is a dissident's server or something where there
is some incentive to keep the identity of the server secret.  So you
want bi-directional anonymity.  It's true that the usual protocols can
not provide both at once; SSL provides neither, and the anonymous IP v2
protocol I designed at ZKS had client anonymity (don't reveal the
pseudonym until the server is authenticated, and yet you want to
authenticate the channel with the pseudonym).  This type of bi-directional anonymity pretty
much is going to need something like the attribute based encryption
model you're using.

However it would be nice/interesting if one could do that end-2-end
secure without needing to trust a CA server.

 My system lets folks:
 
 * access resources without the server even knowing whether they fulfill the
 policy

this one is a feature auth-based systems aren't likely to be able to
fulfil; you can say this because the server doesn't know if you're
able to decrypt or not

 So it's definitely in the realm of other privacy systems.  We could
 define a new term just to exclude my system from the others, but at
 this point I don't think naming confusion is any worse for my
 system; they all have lots of different nonorthogonal features.  

I think it would be fair to call it an anonymity system, just that the
trust model includes a trusted server.  There are lots of things
possible with a trusted server, even with symmetric crypto (KDCs).

Adam



blinding BF IBE CA assisted credential system (Re: chaum's patent expiry?)

2004-05-25 Thread Adam Back
On Mon, May 10, 2004 at 03:03:56AM +, Jason Holt wrote:
 [...] Actually, now that you mention Chaum, I'll have to look into
 blind signatures with the BF IBE (issuing is just a scalar*point
 multiply on a curve).  

I think you mean so that the CA/IBE server, even though he learns the
pseudonym's private key, does not learn the linkage between true name
and pseudonym.  (At any time during a show protocol, whether the
private key issuing protocol is blinded or not, the IBE server can
compute the pseudonym's private key.)

Seems like an incremental improvement yes.

 That could be a way to get CA anonymity for hidden credentials -
 just do vanilla cut and choose on blinded pseudonymous credential
 strings, then use a client/server protocol with perfect forward
 secrecy so he can't listen in.  

Note PFS does not make it end-2-end secure against an adversary who can
compute the correspondents' private keys, as it remains vulnerable to
MITM.  You could say it is invulnerable to a passive eavesdropper.  However you might have an
opening here for a new security model combining features of Hidden
Credentials with a kind of MITM resistance via anonymity.  What I mean
is HC allows 2 parties to communicate, and they know who they are
communicating with.  The CA colluding MITM however we'll say does not
apriori, so he has to brute force try all psuedonym, attribute
combinations until he gets the right one.  Well still not desirable
security margin, but some extra difficulty for the MITM.

Adam



more hiddencredentials comments (Re: Brands' private credentials)

2004-05-25 Thread Adam Back
On Mon, May 10, 2004 at 08:02:12PM +, Jason Holt wrote:
 Adam Back wrote:
  [...] However the server could mark the encrypted values by encoding
  different challenge response values in each of them, right?
 
 Yep, that'd be a problem in that case.  In the most recent (unpublished)  
 paper, I addressed that by using R as the key for a ciphertext+MAC on the
 actual message.  

OK that sounds like it should work.  Another approach that occurs to
me: you could just take the plaintext, and re-encrypt it for the other
attributes (which you don't have)?  It's usually not too challenging
to make stuff deterministic and retain security.  Eg. any nonces and
randomizing values can be taken from a PRNG seeded with a seed also
sent in the msg.  In particular that is much less constraining on the
crypto system than what Bert-Jaap Koops had to do to get binding
crypto to work with an ElGamal variant.
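
Eg. a sketch of the determinise-and-recompute idea with a toy seed-keyed
stream cipher (in the real scheme encryption needs only the public
attribute string, so the recipient genuinely can recompute the other
slot; the symmetric keys here are just a stand-in):

  import hmac, os
  from hashlib import sha256

  def det_encrypt(key, seed, msg):
      # all "randomness" is derived from the seed carried in the message,
      # so encryption is a deterministic function of (key, seed, msg)
      stream, ctr = b"", 0
      while len(stream) < len(msg):
          stream += hmac.new(key, seed + ctr.to_bytes(4, "big"), sha256).digest()
          ctr += 1
      return bytes(a ^ b for a, b in zip(stream, msg))

  k1, k2, seed = os.urandom(16), os.urandom(16), os.urandom(16)
  R = b"challenge value"
  c1, c2 = det_encrypt(k1, seed, R), det_encrypt(k2, seed, R)
  plain = det_encrypt(k1, seed, c1)           # XOR stream: dec == enc
  assert det_encrypt(k2, seed, plain) == c2   # recompute and compare slot 2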

 In either case, though, you can't just trust that the server
 encrypted against patient OR doctor unless you have both creds and
 can verify that they each recover the secret.

The above approach should fix that also right?

 (And you're right, the AIDS example is not very compelling.  The
 slides give a better one about FBI agents, but I'm still looking for
 other examples of super-sensitive transactions where HCs would fit)

dissident computing I think Ross Anderson calls it.  People trying to
operate pseudonymously and perhaps hiding the function of their
servers in a cover service.

 Hugo Krawczyk gave a great talk at Crypto about the going-first problem in
 IPSec, which is where I got the phrase.  He has a nice compromise in letting
 the user pick who goes first, but for some situations I think hidden
 credentials really would hit the spot.

Unless it's significantly less efficient, I'd say use it all the time.

  I think it would be fair to call it anonymity system, just that the
  trust model includes a trusted server.  There are lots of things
  possible with a trusted server, even with symmetric crypto (KDCs).
 
 Yeah, although I think most of them would require an on-line trusted
 server.  But that just makes all sorts of things way too easy to be
 interesting. :)

Yes.  But you could explore a public-key-based approach without IBE.
You may have to use IBE as a sub-protocol, but ideally I think you want
to avoid the IBE server being able to decrypt stuff.  Sacrificing the IBE
communication pattern wouldn't seem like a big deal.

Hmm, well IBE has a useful side-effect in pseudonymity systems: it
avoids the privacy problems in first obtaining the other party's key.
The other way to counteract that is to always include the pseudonym
public key with the pseudonym name
(which works for mailto: style URLs or whatever that are
electronically distributed, but not for offline distributed).

Btw one other positive side-effect of IBE is the server can't
impersonate by issuing another certificate in a pseudonym's name
because there is definitionally only one certificate.

I was thinking particularly if you super-encrypt with the pseudonym's
(standard CA) public key as well as the IBE public key you get the
best of both feature sets.

btw#2 You could probably come up with a way to prevent a standard (non
IBE) CA from issuing multiple certs.  eg. if he does that and someone
puts two certs together they learn the CA private key, a la Brands'
credential kind of offline double-spending protection.

Kind of a cryptographically enforced version of the policy enforced
uniqueness of serial numbers in X.509 certs.  And we change the policy
to one cert per pseudonym (kind of sudden death if you lose the
private key, but hey just don't do that; we'd have no other way to
authenticate you to get a new cert in the same pseudonym's name anyway,
so you may just as well back up your pseudonym private key).

Adam



Re: blinding BF IBE CA assisted credential system (Re: chaum's patent expiry?)

2004-05-25 Thread Adam Back
But if I understand, that is only half of the picture.  The recipient's
IBE CA will still be able to decrypt, tho the sender's IBE CA may not,
as he does not have the ability to compute pseudonym private keys for
the other IBE CA.

If you make it PFS, then that changes so that the recipient's IBE CA
has to mount an active MITM rather than passively eavesdrop.

An aside is that PKI for pseudonyms is an interesting question.  The
pseudonym can't exactly go and be certified by someone else as an
introducer without revealing generally identifying things about the
network of trust.  But ignore this, presuming that the identities were
not subject to MITM from day one, and could build up a kind of WoT
despite the lack of an out-of-band way to check info to base WoT
signatures on.  It would still be interesting to defend the pseudonym
against a MITM colluding with the IBE CA at some point after the
pseudonym has already transferred keys without insertion of a MITM.

This problem of addressing the who goes first problem for pseudonymous
communicants appears somewhat related to Public Key Steganography
where there is a similar scenario and threat model.  (Anderson and
Petitcolas, On The Limits of Steganography,
http://www.petitcolas.net/fabien/publications/jsac98-limsteg.pdf).
They also cite a Prisoners' problem which might be something you
could extend involving a warden who is eavesdropping and prisoners who
will be penalized if he can detect and identify communicants.

My earlier comment:

| Btw one other positive side-effect of IBE is the server can't
| impersonate by issuing another certificate in a pseudonym's name
| because there is definitionally only one certificate.

may not be that useful a distinction as the IBE CA server also gets
your private key, so he doesn't _need_ to generate a certificate
impersonating you as a conventional rogue CA would.

But if we could make the binding from pseudonym to the pseudonym's
non-IBE public key strictly first come first served, so that the IBE
CA's attempt to claim his later-released non-IBE public key is the
correct one would be detectable.  Either secure time-stamping or
extending the pseudonym name to include a fingerprint as a
self-authenticator would allow this.

Adam

On Mon, May 10, 2004 at 06:45:56PM +, Jason Holt wrote:
 Well, he can always generate private keys for any pseudonym, just as in cash
 systems where the bank can always issue bank notes.  Here's what I'm
 suggesting, where b is a blinding function and n1... are random nyms:
 [...]
 (Alice generates random nonce na)
 HC_E(na, Bob:agent, FBI)---
 
  (Bob generates random nb)
  ---HC_E(nb, Alice:member, NRA)
 
 Both generate session keys from Hash(na,nb).

 The FBI can *always* impersonate an agent, because, well, they're
 the CA and they can make up pseudonymous agents all day long. But in
 this protocol, I believe they wouldn't be able to be a MITM and/or
 just eavesdrop on Alice<->Bob.

 That's because Bob only wants to talk to NRA members, and the FBI can't
 impersonate that.
 
 Now, this is for an interactive session, rather than just sending a single
 request/response round like I discuss in the paper.  But even then, policies
 are always respected.  Just change na to request and nb to response.  
 Alice's policy is respected whether she talks to FBI-authorized-Bob or
 FBI-authorized-FBI, and the FBI doesn't get to read Bob's NRA-Alice-only
 response.
 
   -J



Re: 3. Proof-of-work analysis

2004-05-25 Thread Adam Back
Here's a forward of parts of an email I sent to Richard with comments on
his and Ben's paper (they sent me a pre-print off-list a couple of weeks ago):


One obvious comment is that the calculations do not take account of
the CAMRAM approach of charging for introductions only.  You mention
this in the final para of the conclusions as another possibility.


My presumption, tho I don't have hard stats to measure the effect, is
that much of email is to-and-fro between existing correspondents.  So if I
were only to incur the cost of creating a stamp at time of sending to
a new recipient, I could bear a higher cost without running into
limits.

However the levels of cost envisaged are aesthetically unpleasing; I'd
say 15 seconds is not very noticeable, 15 mins is noticeable, and 1.5
hrs is definitely noticeable.


Of course your other point that we don't know how spammers will adapt
is valid.  My presumption is that spam would continue apace, the best
you could hope for would be that it is more targeted, and that there
are financial incentives in place to make it worthwhile buying
demographics data.  (After all when you consider the cost of sending
junk paper mail is way higher, printing plus postage, and yet we still
receive plenty of that).

Also as you observe if the cost of spamming goes up, perhaps they'll
just charge more.  We don't know how elastic the demand curve is.
Profitability, success rates etc are one part of it.  There is an
interplay also: if quantity goes down, perhaps the success rate on the
remaining goes up.  Another theory is that a sizeable chunk of spam is
just a Ponzi scheme: the person paying does not make money, but a lot
of dummies keep paying for it anyway.




Another potential problem with proof-of-work on introductions only, is
that if the introduction is fully automated without recipient opt-in,
spammers could also benefit from this amortized cost.  So I would say
something like: the sender sent a proof-of-work, and the recipient took
some positive action, like replying or filing it other than as junk;
that should be the minimum to get white-listed.




On the ebiz web site problem, I think these guys present a problem for
the whole approach.  An ebiz site will want to send lots of mail to
apparent new recipients (no introductions-only saving); a popular ebiz
site may need to send lots of mail.


Well it is ebiz so perhaps they just pass the cost on to the consumer
and buy some more servers.




Another possibility is the user has to opt-in by pre-white-listing
them, however the integration to achieve this is currently missing and
would seem a difficult piece of automation to retrofit.




One of the distinguishing characteristics of a spammer is the
imbalance between mail sent and mail received.  Unfortunately I do not
see a convenient way to penalize people who fall into this category.




Also because of network effect concerns my current hashcash deployment
is to use it as a way to reduce false positives, rather than directly
requiring hashcash.  Well over time this could come to the same thing,
but it gives it a gentle start, so we'll see how long it is before the
1st genuine spam with hashcash attached.

CAMRAM's approach is distinct and is literally going straight for the
objective of bouncing mail without some kind of proof (hashcash or
reverse-Turing, or short term ability to reply to email
challenge-response).

Adam

Richard Clayton wrote:
 [...] Ben Laurie) and I have recently
 been doing some sums on proof-of-work / client puzzles / hashcash
 methods of imposing economic constraints upon the sending of spam...
 
 Ben wanted to know how big a proof was needed for a practical scheme
 he was considering -- and I told him it wasn't going to work. We then
 carefully worked through all the calculations, using the best data
 that we could obtain -- and we did indeed come to the conclusion that
 proof-of-work is not a viable proposal :(

 Paper:
 
  http://www.cl.cam.ac.uk/~rnc1/proofwork.pdf



Re: Reusable hashcash for spam prevention

2004-05-25 Thread Adam Back
FYI Richard amended the figures in the paper which makes things 10x
more favorable for hashcash in terms of being an economic defense
against spammers.

Richard wrote on asrg:
| we're grateful (albeit a little embarrassed) for the consideration
| given to one of our figures by Ted Wobber (MS Research) who has
| pointed out a tenfold error in our sum involving an 0.0023% response
| rate
| 
| we'll be revising our text accordingly -- since this will weaken our
| statements about the economic argument ... though it does _not_
| affect our analysis based on use of zombies.

Adam



Re: Microsoft publicly announces Penny Black PoW postage project

2003-12-28 Thread Adam Back
I did work at Microsoft for about a year after leaving ZKS, but I quit
a month or so ago (working for another startup again).

But for accuracy while I was at Microsoft I was not part of the
Microsoft research/academic team that worked on Penny Black, though I
did exchange a few emails related to that project and hashcash etc
with the researchers.

I thought the memory-bound approaches discussed on CAMRAM before were
along the lines of hash functions which chewed up an artificially large
code footprint as a way to impose the need for memory.

Arnold Reinhold's HEKS [1] (Hash Extended Key Stretcher) key stretching
algorithm is related also.  HEKS aims to make hardware attacks on key
stretching more costly: both by increasing the memory footprint
required to efficiently compute it, and by requiring operations that
are more expensive in silicon (32 bit multiplies, floating point is
another suggestion he makes).

The relationship to hashcash is you could simply use HEKS in place of
SHA1 to get the desired complexity and hence silicon cost increase.

The main design goal of this algorithm is to make massively parallel
key search machines as expensive as possible by requiring many
32-bit multiplies and large amounts of memory.
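
To make that concrete: hashcash's partial-preimage search is generic in
the hash, so the swap is a one-line change.  A Python sketch (heks() is
hypothetical):

  import hashlib
  from itertools import count

  def mint(resource, bits, hashfn=hashlib.sha1):
      # find a counter whose hash has `bits` leading zero bits
      for ctr in count():
          stamp = "%s:%d" % (resource, ctr)
          digest = hashfn(stamp.encode()).digest()
          if int.from_bytes(digest, "big") >> (8 * len(digest) - bits) == 0:
              return stamp

  stamp = mint("adam@cypherspace.org", 20)           # ~2^20 SHA1 trials
  # stamp = mint("adam@cypherspace.org", 20, heks)   # same search, pricier hash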

I think I also recall discussing with Peter Gutmann the idea of using
more complex hash functions (composed of existing hash functions for
security) to increase the cost of hardware attacks.


The innovation in the papers referred to by the Penny Black project
was the notion of building a cost function that was limited by memory
bandwidth rather than CPU speed.  In other words, unlike hashcash (which
is CPU bound and has minimal working memory or code footprint) or a
notional hashcash built on HEKS or another similar system (which is
supposed to take memory and generally expensive operations to build in
silicon), the two candidate memory-bound functions are designed to be
computationally cheap but require a lot of random-access memory
utilization in a way which frustrates time-space trade-offs (reducing
space consumption by using a faster CPU).  They then argue that this
is desirable because there is less discrepancy in memory latency
between high end systems and low end systems than there is discrepancy
in CPU power.

The 2nd memory-bound [3] paper (by Dwork, Goldberg and Naor) finds a
flaw in the first memory-bound function paper (by Abadi, Burrows,
Manasse, and Wobber) which admits a time-space trade-off, proposes an
improved memory-bound function and also in the conclusion suggests
that memory bound functions may be more vulnerable to hardware attack
than computationally bound functions.  Their argument on that latter
point is that the hardware attack is an economic attack and it may be
that memory-bound functions are more vulnerable to hardware attack
because, in their view, you could build cheaper hardware more
effectively: the most basic 8-bit CPU with a slow clock rate could
marshal enough fast memory to undercut the cost of general purpose
CPUs by a larger margin than a custom hardware-optimized
hashcash/computationally-bound function.

I'm not sure if their conclusion is right, but I'm not really
qualified -- it's a complex silicon optimization / hardware
acceleration type question.

Adam

[1] http://world.std.com/~reinhold/HEKSproposal.html

[2] Abadi, Burrows, Manasse and Wobber Moderately Hard, Memory-bound
Functions, Proceedings of the 10th Annual Network and Distributed
System Security Symposium, February 2003

http://research.microsoft.com/research/sv/PennyBlack/demo/memory-final-ndss.pdf

[3] Dwork, Goldberg, and Naor, On Memory-Bound Functions for Fighting
Spam, Proceedings of the 23rd Annual International Cryptology
Conference (CRYPTO 2003), August 2003.

http://research.microsoft.com/research/sv/PennyBlack/demo/lbdgn.pdf


On Fri, Dec 26, 2003 at 09:13:23AM -0800, Steve Schear wrote:
 http://news.bbc.co.uk/2/hi/technology/3324883.stm
 
 Adam Back is part of this team, I think.
 
 Similar approach to Camram/hashcash.  Memory-based approaches have been 
 discussed.  Why hasn't Camram explored them?
 
 steve



Re: Microsoft publicly announces Penny Black PoW postage project

2003-12-28 Thread Adam Back
Oh yes forgot one comment:

One down-side of memory-bound is that it is memory bound.  That is to
say it will be allocated some amount of memory, and this would be
chosen to be enough memory that a high end machine should not have
that much cache, so think multiple MB, maybe 8MB, 16MB or whatever.
(Not sure what the max L2 cache is on high end servers.)

And what the algorithm will do is make random accesses to that memory
as fast as it can.

So effectively it will play badly with other applications -- it will
tend to increase the likelihood of swapping, decrease memory available
for other applications etc.  You could think of the performance
implications as a bit like pulling 8MB of RAM, or whatever the chosen
value is.

hashcash / computationally bound functions on the other hand have a
tiny footprint and CPU consumption by hashcash can be throttled to
avoid noticeable impact on other applications.
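
A toy illustration of the contrast (illustrative only -- the published
constructions are more careful about frustrating time-space trade-offs):

  import os
  from hashlib import sha256

  TABLE = os.urandom(16 * 2**20)   # 16MB table: deliberately bigger than cache

  def mbound(challenge, steps=1 << 20):
      # cost dominated by cache-missing random reads, not by CPU work
      state = int.from_bytes(sha256(challenge).digest(), "big")
      acc = 0
      for _ in range(steps):
          acc ^= TABLE[state % len(TABLE)]
          state = (state * 6364136223846793005 + acc + 1) % (1 << 64)
      return sha256(challenge + state.to_bytes(8, "big")).digest()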

Adam

On Fri, Dec 26, 2003 at 09:37:18PM -0500, Adam Back wrote:
 [...]

Re: Protection against offline dictionary attack on static files

2003-11-13 Thread Adam Back
Yes this is a good idea, and some people have thought of it before also.

Look for the paper Secure Applications of Low-Entropy Keys, or
something like that, by Schneier, Wagner et al.  (on the Counterpane
Labs page I think).

Also the PBKDF2 function defined in PKCS#5, used to convert the
password into a key for unwrapping PKCS#12, uses the same idea.  The
general approach is called key-stretching.

The approach usually involves some form of iterative hashing so
similar to what you proposed.
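
Eg. with PBKDF2 directly; the iteration count is the knob, tuned so one
derivation takes about a second on your machine (a sketch with
illustrative parameters):

  import hashlib, os

  salt = os.urandom(16)
  key = hashlib.pbkdf2_hmac(
      "sha256",                  # PRF = HMAC-SHA256
      b"two dictionary words",   # the low-entropy passphrase
      salt,
      600000)                    # iterations: each guess costs this much work
  print(key.hex())               # 32-byte key to encrypt the file with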

Adam

On Thu, Oct 23, 2003 at 08:20:35AM +0100, Arcane Jill wrote:
 Hi,
 
 It's possible I may be reinventing the wheel here, so my apologies if 
 that's so, but it occurs to me that there's a defence against an offline 
 dictionary attack on an encrypted file. Here's what I mean: Say you have 
 a file, and you want to keep it secret. What do you do? Obviously you 
 either encrypt it directly, or you store it in an encrypted volume 
 (thereby encrypting it indirectly). Problem? Maybe an attacker can 
 somehow get hold of the encrypted file or volume ... maybe your laptop 
 gets stolen ... maybe other people have access to your machine. In 
 principle, you're protected by your passphrase, but if an attacker can 
 get hold of the file, they can try an offline dictionary attack to guess 
 your passphrase, so unless you're very good at inventing high entropy 
 passphrases /and remembering them without writing them down/, there may 
 still be a risk.
 
 Here's the defence:
 
 To encrypt a file:
Generate a random number R between 0 and M-1 (for some fixed M, a 
 power of 256)
Type in your passphrase P
Let S = R || P (where || stands for concatenation)
Let K = hash(S)
 K is now your encryption key. R is to be thrown away.
 
 To decrypt the same file:
Generate a random number r between 0 and M-1
Type in your passphrase P
 for (int i=r; ; i=(i+1)%M)
 {
 Let S = i || P
 Let K = hash(S)
 Try to decrypt using key K; stop when it succeeds
 }
 
 This places a computational burden on your PC at decrypt-time. The 
 larger you choose M, the more CPU time it will take to figure out K. So, 
 you choose M such that it takes your PC about one second to find K, then 
 your attacker will experience the same burden - but multiplied a 
 squillionfold (a squillion being the entropy of your passphrase). This 
 means that even if your passphrase consists of just two words from a 
 dictionary, /and/ your attacker suspects this, it will still take him or 
 her over a hundred and fifty years to decrypt (assuming your attacker 
 has a PC of equivalent power). Even if your attacker has a faster PC 
 than you, it will still be relatively easy to pick a 
 strong-yet-memorable passphrase, since better tech can only ease the 
 attacker's problem, not remove it. All of a sudden, weak passphrases 
 turn into strong ones, and strong passphrases turn into computationally 
 infeasible ones.
 
 Is this useful?
 Has anyone come up with it before? (Someone must have ... but I don't 
 recall seeing the technique used in applications)
 
 Jill
 
 


efficiency?? vs security with symmetric crypto? (Re: Tinc's response to Linux's answer to MS-PPTP)

2003-09-26 Thread Adam Back
What conceivable trade-offs could you have to make to get acceptable
performance out of a symmetric-crypto encrypted+authenticated tunnel?
All the ciphers you should be using run at like 50MB/sec on a 1GHz machine!!

If you look at eg cebolla (more anonymity than VPN, but it's a nested
forward-secret VPN related thing) it's even possible to do pretty
immediate forward secrecy every second or something at minimal CPU
cost.  (I'll read the writeup but that trade-off argument sounds very
wrong.)

Adam

On Fri, Sep 26, 2003 at 12:12:03PM +0200, Guus Sliepen wrote:
 Hello Peter Gutmann and others,
 
 Because of its appearance on this mailing list and the Slashdot posting
 about Linux's answer to MS-PPTP, and in the tinc users' interest, we
 have created a section about the current security issues in tinc, which
 currently contains a response to Peter Gutmann's writeup:
 
 http://tinc.nl.linux.org/security
 
 I want to emphasize for the cryptography community here that certain
 tradeoffs have been made between security and efficiency in tinc. So
 please read the response as why we think we need to do/used to do it
 this way instead of why we think tinc is still as secure as anything
 else. Comments are welcome. 
 
 -- 
 Met vriendelijke groet / with kind regards,
 Guus Sliepen [EMAIL PROTECTED]



why are CAs charging so much for certs anyway? (Re: End of the line for Ireland's dotcom star)

2003-09-24 Thread Adam Back
Saw this on theregister.co.uk: geotrust is undercutting veri$ign at
$159/cert vs $350/cert by verisign and $199 by Thawte (which as you
note is just a verisign brand at this point).

http://www.theregister.co.uk/content/67/33009.html

You'd have thought there would be plenty of scope for certs to be sold
for a couple of $ / year.  Eg. by one of the registrars bundling a
cert with your domain registration.  I mean if someone can provide DNS
service for $10 or less / year (and lower for some tlds) which
requires servers to answer queries etc., surely they can send a you a
few more bits (all they have to do is make sure they send the cert to
the person who they register the domain for).

From what I heard Mark Shuttleworth (of Thawte) got his cert in the
browser DBs for free just for the asking by being in the right place
at the right time.  So once you have that, charging over $100 for a few
seconds of CPU time to sign a cert is a license to print money.

With all the .com crashes you'd think the price of a root cert ought
to be pretty low by now.

Adam

On Wed, Sep 24, 2003 at 01:15:22PM -0400, Anton Stiglic wrote:
 
  Why is it that none of those 100-odd companies with keys in the browsers
  are doing anything with them?  Verisign has such a central role in
  the infrastructure, but any one of those other companies could compete.
  Why isn't anyone undercutting Verisign's prices?  Look what happened with
  Thawte when it adopted this strategy: Mark Shuttleworth got to visit Mir!
 
 And Thawte got bought by Verisign, so no more competition...
 Interestingly, last time I checked, it was cheaper to buy from Thawte than 
 it was from Verisign directly.
 
 --Anton
 


Re: why are CAs charging so much for certs anyway? (Re: End of the line for Ireland's dotcom star)

2003-09-24 Thread Adam Back
On Wed, Sep 24, 2003 at 05:40:38PM -0700, Ed Gerck wrote:
 Yes, there is a good reason for CAs to charge so much for certs.
 I hope this posting is able to set this clear once and for all.

 [zero risk, zero cost, zero liability, zero regulatory burden]

 9. Product Price: At Will.
 
  There is no reference in price for an array of 2 Kbytes. It can
  range from $5.00 to $500.00 or beyond. 

Uh?  The "why" argument you give is basically price gouging?

That was my point and why I said I don't see any reason cert prices
with reasonable competition couldn't fall to a few dollars/year.
(Ian: recurring billing is because they expire).

Adam



Re: traffix analysis

2003-08-28 Thread Adam Back
I agree with Anonymous's summary of the state of the art wrt
cryptographic anonymity of interactive communications.

Ulf Moeller, Anton Stiglic, and I give some more details on the
attacks Anonymous describes in this IH 2001 [1] paper:

http://www.cypherspace.org/adam/pubs/traffic.pdf

which explores this in the context of the ZKS Freedom Network and
PipeNet, presenting attacks on the Freedom Network, Onion Routing,
Crowds and PipeNet which affect privacy and availability.

Adam

[1] Adam Back, Ulf Moeller, and Anton Stiglic, Traffic Analysis
Attacks and Trade-Offs in Anonymity Providing Systems, IH 2001.

On Wed, Aug 27, 2003 at 09:17:05PM -0500, Anonymous wrote:
 This is not true, and in fact this result is one of the most important
 to have been obtained in the anonymity community in the past decade.  The
 impossibility of practical, strong, real-time anonymous communication has
 undoubtedly played a role in the lack of deployment of such systems.
 
 The attack consists of letting the attacker subvert (or become!) one of
 the communication endpoints.  This can be as simple as running a sting
 web site offering illegal material.
 
 Then the attacker arranges to insert delays into the message channels
 leading from subscribers into the crowd.  He looks for correlations
 between those delays and observed delays in the message traffic to his
 subverted endpoint.  This will allow him to determine which subscriber
 is communicating with that endpoint, regardless of how the crowd behaves.
 
 It will often be possible to also trace the communication channel back
 through the crowd, by inserting delays onto chosen links and observing
 which ones correlate with delays in the data observed at the endpoint.
 This way it is not necessary to monitor all subscribers to the crowd,
 but rather individual traffic flows can be traced.
 
 Wei Dai's PipeNet proposal aims to defeat this attack, but at the
 cost of running the entire crowd+subscriber network synchronously.
 The synchronous operation defeats traffic-delay attacks, but the problem
 is that any subscriber can shut the entire network down by simply delaying
 his packets.
 


Re: Session Fixation Vulnerability in Web Based Apps

2003-06-15 Thread Adam Back
I think he means higher level frameworks, web programming libraries,
toolkits, and web page builder stuff; not hooks into SSL sessions.
Not to say that a hook into an SSL session is not a good place to get
an application session identifier from -- it would be, presuming that
you can't trick a browser into adopting someone else's SSL session.

I wouldn't know one way or the other if these higher level frameworks
fall victim to the session adoption problem as I haven't used them;
but it seems plausible that there might exist some that do.  If this
were the case it would be quite bad as there would presumably be many
users of them who had relied on the security of the high-level
framework.  But I would be surprised if most or many of them did, for
similar reasons to the reason people are expressing doubt that many
hand-coded login pages would be affected: it seems generally like a
mistake that natural login and session-managing web programming idioms
would not lend themselves to.

Adam

On Sun, Jun 15, 2003 at 05:52:17PM -0400, Rich Salz wrote:
  The framework, however, generally provides insecure cookies.
 
 No I'm confused.  First you said it doesn't make things like the
 session-ID available, and I posted a URL to show otherwise.  Now you're
 saying it's available but insecure?
