Re: TPM disk crypto

2006-10-13 Thread cyphrpunk

On 10/10/06, Adam Back [EMAIL PROTECTED] wrote:

I think the current CPUs / memory managers do not have the ring -1 /
curtained memory features, but already a year ago or more Intel and
AMD were talking about these features.  So it's possible that, for
example, the hypervisor virtualization functionality in recent
processors ties in with those features, and is already delivered?  Anyone

Intel LaGrande Technology is supposed to ship soon and combines
virtualization with TPM integration so you can load what they call an
MVMM: a measured virtual machine monitor. Measured means the hash
goes securely to the TPM so it can attest to it, and third parties can
verify what VMM you are running. Then the security properties would
depend on what the VMM enforces. The MVMM runs in what you might call
ring -1, while the OS running in ring 0 has only virtualized access to
certain system resources like page tables.

One thing the MVMM could do is to measure and attest to OS properties.
Then if you patched the OS to bypass a signed-driver check, it might
not work right.

One question that was raised is how these systems can be robust
against OS upgrades and such. It would seem that ultimately this will
require attestation to be based on a signing key rather than the code
fingerprint. Rather than hashing the code it loads, the MVMM would
verify that the code is signed by a certain key, and hash the key,
sending that to the TPM. Then any code signed by the same key could
produce the same attestation and have access to the same sealed data.
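The distinction can be sketched in a few lines (a stdlib-only illustration: an HMAC stands in for a real public-key signature, and the key and function names here are hypothetical):

```python
import hashlib, hmac

# Hypothetical vendor key; HMAC stands in for a real signature scheme
# so the sketch stays stdlib-only.
SIGNING_KEY = b"vendor signing key"

def sign(code: bytes) -> bytes:
    return hmac.new(SIGNING_KEY, code, hashlib.sha256).digest()

def measure_code(code: bytes) -> bytes:
    # Hash-based attestation: the measurement changes with every revision.
    return hashlib.sha256(code).digest()

def measure_key(code: bytes, tag: bytes) -> bytes:
    # Key-based attestation: verify the signature, then measure the *key*,
    # so any code signed by the same key measures identically.
    if not hmac.compare_digest(sign(code), tag):
        raise ValueError("signature check failed")
    return hashlib.sha256(SIGNING_KEY).digest()
```

An upgraded OS image signed with the same key then produces an identical measurement, so sealed data survives the upgrade.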

The TCG infrastructure working group is supposed to standardize what
kinds of attestations will be used and what they will mean.


The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]

Re: TPM disk crypto

2006-10-13 Thread cyphrpunk

Here is a posting from the cypherpunks mailing list describing the
capabilities of Intel's new virtualization/TPM technology. Gets a bit
ranty but still good information.


-- Forwarded message --
From: Anonymous Remailer (austria) [EMAIL PROTECTED]
Date: Fri, 29 Sep 2006 03:25:57 +0200 (CEST)
Subject: Palladium is back. And this time, it's...

In the past few weeks new information has come out on the Trusted
Computing (TC) front which provides clues to where this powerful
and controversial technology may be heading.  Much of this has come
from Intel, which has revealed more information about their LaGrande
technology, now officially renamed Trusted Execution Technology.  A good
source of links is the Hack the Planet blog
- scroll down to the September 25 entry.

LaGrande was originally designed as the hardware support for Microsoft's
now-defunct Palladium, and its design reflects the differences between
Palladium and TCPA (now called TCG).  Both technologies relied on the TPM
chip to take
measurements of running software, report those measurements remotely via
trusted attestations, and lock encrypted data to those measurements so
that other software configurations could not decrypt it.  These are the
core capabilities which give TC its power.  But there were important
differences in the two approaches.

TCPA was focused on a measured boot process.  As the system boots,
each stage would measure (i.e. hash into the TPM) the next stage before
switching control to it.  At the end of this process the TPM's Platform
Configuration Registers would hold a fingerprint of the software
configuration that had booted.  With a TPM-aware OS the PCRs could be
further updated as each program launches to keep an up-to-date picture
of what is running.
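The PCR update rule is just a hash chain. A minimal software simulation (SHA-256 standing in for the TPM's internal hash; the stage names are illustrative):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # The TPM extend operation: new PCR value = H(old PCR || measurement)
    return hashlib.sha256(pcr + measurement).digest()

# Simulated measured boot: each stage measures the next before handing off.
pcr = bytes(32)  # PCRs start at all-zeros on platform reset
for stage in [b"bios image", b"bootloader image", b"kernel image"]:
    pcr = pcr_extend(pcr, hashlib.sha256(stage).digest())
# 'pcr' now fingerprints the exact boot sequence, including its order.
```

Because the old value is folded into each update, the final PCR depends on both what was measured and the order it was measured in.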

Palladium instead wanted to be able to switch to trusted mode in mid
stream, after booting; and wanted to continue to run the legacy OS while
new applications ran in the trusted area.  LaGrande Technology (LT,
now TET), in conjunction with new TPM capabilities offered in the 1.2
chips now available, would provide the support for this late launch
concept.  Palladium is now gone but Intel has continued to develop
LaGrande and has now released documentation on how it will work, at

Late launch starts with the OS or the BIOS executing one of the new
LT instructions.  This triggers a complex sequence of operations
whose purpose is to load, measure (i.e. hash into the TPM) and launch a
hypervisor, that is, a Virtual Machine Monitor (VMM).  The hypervisor can
then repackage the state of the launching OS as a Virtual Machine (VM)
and transfer control back to it.  The OS has now become transparently
virtualized and is running on top of the VMM.  The VMM can then launch
secure VMs which execute without being molested by the legacy OS.

Another enhancement of LT is that the chipset can be programmed to prevent
DMA access to specified memory areas.  This will close a loophole in
existing VMM systems, that VMs can program DMA devices to overwrite other
VMs' memory.  This protection is necessary for the TC goal of protected
execution environments.

Both VMWare and Xen are getting involved with this technology.  As the
blog entry above says, Intel donated code to Xen a few days ago to support
much of this functionality, so that Xen will be able to launch in this
way on TET machines.  Another link from the blog entry is an amazing
Intel presentation showing how excited the NSA is about this technology.
Within a couple of years they will be able to acquire Commercial Off
the Shelf (COTS) systems configured like this, that will allow running
multiple instances of OS's with different security classifications.
The slides show a system running two versions of Windows, one for Secret
and one for Top Secret data, appearing in separate windows on the screen.
Xen or VMWare with TET will be able to do this very soon if not already.

Here's Intel's description of how software might be configured to use
this capability, from their Trusted Execution Technology Architectural
Overview linked from the LaGrande page above:

Trusted Execution Technology provides a set of capabilities that can be
utilized in many different operating environments (Figure 2). One proposed
architecture provides a protection model similar to the following:

A standard partition that provides an execution environment that is
identical to today's IA-32 environment. In this environment, users will be
able to run applications and other software just as they do on today's
PC. The standard partition's obvious advantage is that it preserves
the value of the existing code base (i.e. existing software does not
need modification to run in the standard partition) and potential future
software that is less security conscious. Unfortunately, it also retains
the inherent vulnerabilities of today's environment.

A protected partition provides a 

Re: TPM disk crypto

2006-10-13 Thread cyphrpunk

On 10/13/06, Kuehn, Ulrich [EMAIL PROTECTED] wrote:

With reliably stopping the boot process I mean the following: Given that
stage i of the process is running, it takes the hash of the next stage,
compares that to an expected value. If they match, the current stage extends
the TPM register (when also running the TCG stuff), and executes the next
stage. If the computed and expected hashes do not match, the machine goes
into a predetermined halt state.

Predetermined means that the system administrator (on behalf of the system
owner) can determine the expected hash value.

You don't need the TPM for this. You could imagine a boot process
where each stage hashed the next stage, and refused to proceed if it
didn't match an expected value. One question though is how you prevent
malware from changing these expected values, even potentially
reflashing the BIOS.
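The check-then-extend logic described in the quoted text can be sketched without any TPM at all (a software PCR here, purely illustrative):

```python
import hashlib

def load_next_stage(image: bytes, expected: str, pcr: bytes) -> bytes:
    """Check-then-extend: halt on mismatch, otherwise extend a (software)
    PCR and return it before handing control to the next stage."""
    digest = hashlib.sha256(image).hexdigest()
    if digest != expected:
        # the predetermined halt state: refuse to proceed
        raise RuntimeError("stage hash mismatch - halting boot")
    return hashlib.sha256(pcr + bytes.fromhex(digest)).digest()
```

Of course, as noted above, without hardware protection nothing stops malware from patching the expected values themselves.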

A student project at Dartmouth a few years ago, the Enforcer, worked like this. It could also optionally
use a TPM but didn't have to. The project appears to be abandoned but
the supervising professor, Sean Smith, in his book Trusted Computing
Platforms says that new students are bringing it up to date, getting
it working with newer kernels including selinux support.

Here's the Enforcer description. Nice piece of work. Hopefully they'll
release an updated version now that TPMs are more common.

The Enforcer is a Linux Security Module designed to improve integrity
of a computer running Linux by ensuring no tampering of the file
system. It can interact with TCPA hardware to provide higher levels of
assurance for software and sensitive data.

It can check, as every file is opened, if the file has been changed,
and take an admin specified action when it detects tampering. The
actions can be any combination of log the error, deny access to the
file, panic the system, or several operations that work with the TPM.

The Enforcer can also work with the TPM to store the secret to an
encrypted loopback file system, and unmount this file system when a
tampered file is detected. The secret will not be accessible to mount
the loopback file system until the machine has been rebooted with
untampered files. This allows sensitive data to be protected from an
attacker who has tampered with the system.

The Enforcer can also bind specific files so that only specific
applications can access them (for example, only apache is allowed to
access apache's secret ssl key). This means that even if someone
compromises your system, the attacker will not be able to steal
critical files.

Finally, the Enforcer can make sure that no files added to
directories after its database is built are allowed to be accessed.

One thing they worked hard on in the design is the balance between
detecting malicious changes, and allowing necessary changes for
maintenance and upgrades. They identified different classes of
components that were updated seldom, occasionally or frequently, and
architected the system to provide an appropriate degree of checking
for each category. The academic paper is here:



Re: TPM disk crypto

2006-10-12 Thread cyphrpunk

On 10/10/06, Brian Gladman [EMAIL PROTECTED] wrote:

I haven't been keeping up to date with this trusted computing stuff over
the last two years but when I was last involved it was accepted that it
was vital that the owner of a machine (not necessarily the user) should
be able to do the sort of things you suggest and also be able to exert
ultimate control over how a computing system presents itself to the
outside world.

Only in this way can we undermine the treacherous computing model of
trusted machines with untrusted owners and replace it with a model in
which trust in this machine requires trust in its owner on which real
information security ultimately depends (I might add that even this
model has serious potential problems when most machine owners do not
understand security).

Does anyone know the current state of affairs on this issue within the
Trusted Computing Group (and the marketed products of its members)?

1. The issue is still moot at present. We are a long way from where
open, public, remote attestation will be possible. See this diagram from
the Trousers open-source TPM software stack project which shows which
pieces are still missing:

There is actually another important piece missing from that diagram,
namely operating system support. At present the infrastructure would
only allow attestation at the OS-boot level, i.e. you could prove what
OS you booted. It's a big step from there to proving that you are
running a safe application, unless the service would require you to
reboot your machine into their OS every time you want to run their
application.

2. Not an insider, but I haven't heard anything about serious efforts
to implement Owner Override or similar proposals. Instead, the
response seems to be to wait and hope all that fuss blows over.

3. What little evidence exists suggests that TCG is going in the
opposite direction. The 1.2 TPM is designed to work with Intel's
LaGrande Technology which will add improved process isolation and late
launch. This will make it possible to attest at the level of
individual applications, and provide protection against the local user
that a plain TPM system can't manage. 1.2 also adds a
cryptographically blinded attestation mode that gets rid of the ugly
privacy CA which acted as a TTP in 1.1, and which will make it
easier to move towards attestation.

4. Software remains the biggest question mark, and by software I mean
Microsoft. They have said nothing about attestation support in Vista.
Given the hostile response to Palladium I doubt there is much
enthusiasm about jumping back into that crocodile pit. It doesn't seem
to be stopping HD-DVD from moving forward, even though there is no
credible probability of an attestation feature appearing in the time
frame needed for these new video product introductions.

Without a driving market force to introduce attestation, and
tremendous social resistance, the status quo will probably prevail for
another couple of years. By that time LT will be available, TPMs will
be nearly universal but used only for improved local security, and
perhaps some tentative steps into attestation will appear. The initial
version might be targeted at corporate VPNs which will prevent mobile
employees from connecting unless their laptops attest as clean. This
would be an uncontroversial use of the technology except for its
possible implications as a first step towards wider use.

Whether we will eventually ever see the whole model, with attestation,
process isolation, sealed storage, and trusted i/o path all leading to
super-DRM, is very much an open question. So many barriers exist
between here and there that it seems unlikely that this will be seen
by anyone as the right solution to that problem, by then.



Re: [Clips] Feds mull regulation of quantum computers

2005-11-13 Thread cyphrpunk
  WASHINGTON--Quantum computers don't exist outside the laboratory. But the
  U.S. government appears to be exploring whether it should be illegal to
  ship them overseas.

  A federal advisory committee met Wednesday to hear an IBM presentation
  about just how advanced quantum computers have become--with an eye toward
  evaluating when the technology might be practical enough to merit
  government regulation.

Suppose that quantum computers work and the NSA has them. What steps
can or should they take to try to stop the propagation of this
technology? If they come out too openly with restrictions, it sends a
signal that there's something there, which could drive more research
into the technology by the NSA's adversaries, the opposite of the
desired outcome. If they leave things alone then progress may continue
towards this technology that the NSA wants to suppress.

Something like the present action isn't a bad compromise. Work towards
restrictions on technology exports, but in a studiously casual
fashion. There's nothing to see here, folks. We're just covering our
bases, in the outside chance that something comes out of this way down
the road. Meanwhile we'll just go ahead and stop exports of related
technologies. But we certainly don't think that quantum computers are
practical today, heavens no!



Re: Symmetric ciphers as hash functions

2005-11-07 Thread cyphrpunk
On 10/30/05, Arash Partow [EMAIL PROTECTED] wrote:
 How does one properly use a symmetric cipher as a cryptographic hash
 function? I seem to be going around in circles.

The usual method is to feed the data into the key slot of the
cipher, and to use a fixed IV in the plaintext slot. Then, XOR the
IV into the output ciphertext (the feed-forward step).

If the data is too big, break it up into pieces and chain these
constructions together. The output of one block becomes the input IV
of the next block.

To prevent length extension attacks, pad with an unambiguous final
suffix that includes the message length.

This is basically the Merkle/Damgard construction.
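As a concrete sketch, here is the construction above instantiated with XTEA as the block cipher (a toy stand-in chosen only because it fits in a few lines; a real design would use a vetted cipher and parameters). Each 16-byte message block goes into the key slot, the chaining value goes into the plaintext slot, and the chaining value is XORed back into the output:

```python
import struct

# XTEA as a toy block cipher (128-bit key, 64-bit block).
def xtea_encrypt(key16: bytes, block8: bytes, rounds: int = 32) -> bytes:
    M = 0xFFFFFFFF
    v0, v1 = struct.unpack(">2I", block8)
    k = struct.unpack(">4I", key16)
    s, delta = 0, 0x9E3779B9
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + k[s & 3]))) & M
        s = (s + delta) & M
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + k[(s >> 11) & 3]))) & M
    return struct.pack(">2I", v0, v1)

def bc_hash(message: bytes, iv: bytes = b"\x00" * 8) -> bytes:
    # Unambiguous padding: 0x80, zeros, then the 8-byte message length
    # (Merkle-Damgard strengthening, which blocks length-extension games).
    data = message + b"\x80"
    data += b"\x00" * (-(len(data) + 8) % 16)
    data += struct.pack(">Q", len(message))
    h = iv
    for i in range(0, len(data), 16):
        e = xtea_encrypt(data[i:i + 16], h)      # message block in the key slot
        h = bytes(a ^ b for a, b in zip(e, h))   # XOR chaining value back in
    return h
```

The length in the padding is what makes the final block unambiguous, as described above.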



Re: HTTPS mutual authentication alpha release - please test

2005-11-07 Thread cyphrpunk
On 10/31/05, Nick Owen [EMAIL PROTECTED] wrote:
 The system works this way: Each WiKID domain now can include a
'registered URL' field and a hash of that website's SSL certificate.  When
 a user wants to log onto a secure web site, they start the WiKID token
 and enter their PIN. The PIN is encrypted and sent to the WiKID server
 along with a one-time use AES key and the registered URL.  The server
 responds with a hash of the website's SSL certificate.  The token client
fetches the SSL certificate of the website and compares it to the hash.  If
 the hashes don't match, the user gets an error.  If they match, the user
 is presented with registered URL and the passcode.  On supported
 systems, the token client will launch the default browser to the
 registered URL.

What threat is this supposed to defend against? Is it phishing? I
don't see how it will help, if the bogus site has a valid certificate.

 Most one-time-password systems suffer from man-in-the-middle attacks
 primarily due to difficulties users have with validating SSL
 certificates. The goal of this release is to validate certificates for
 the end user, providing an SSH-esque security for web-enabled
 applications such as online banking.

What does it mean to validate a certificate? Aren't certs
self-validating, based on the key of the issuer? Again, what is this
protecting against?



Re: On the orthogonality of anonymity to current market demand

2005-11-07 Thread cyphrpunk
On 11/6/05, Travis H. [EMAIL PROTECTED] wrote:
 Personally, I'm less surprised by my own software (and, presumably,
 key-handling) than vendor software, most of the time.  I think TCPA is
 about control, and call me paranoid, but ultimate control isn't
 something I'm willing to concede to any vendor, or for that matter any
 other person.  I like knowing what my computer is doing, to the bit
 and byte level, or at least being able to find out.

I suggest that you're fooling yourself, or at least giving yourself a
false sense of security. Software today is so complex and large that
there is no way that you can be familiar with the vast bulk of what
you are running (and it's only going to get worse in the future). It
is an illusion that you have transparency into it. Water is
transparent but an ocean of it is opaque and holds many secrets.



Re: On Digital Cash-like Payment Systems

2005-11-07 Thread cyphrpunk
On 11/4/05, Travis H. [EMAIL PROTECTED] wrote:
 By my calculations, it looks like you could take a keypair n,e,d and
 some integer x and let e'=e^x and d'=d^x, and RSA would still work,
 albeit slowly.  Reminds me of blinding, to some extent, except we're
 working with key material and not plaintext/ciphertext.

Your point would be to make the encryption key very large?
Unfortunately, making it large enough to present any kind of challenge
to an attacker who is plucking files off a trojaned computer would
make it far too large to be used with this system.
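For what it's worth, the quoted arithmetic does check out: since ed ≡ 1 (mod lambda(n)), we also have (e^x)(d^x) = (ed)^x ≡ 1 (mod lambda(n)). A toy check with textbook-sized numbers (all parameters here are illustrative only):

```python
from math import gcd

# Toy parameters - real RSA needs large primes.
p, q = 61, 53
n = p * q
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # Carmichael lambda(n)
e = 17
d = pow(e, -1, lam)  # modular inverse (Python 3.8+)

x = 5
e_x, d_x = e ** x, d ** x  # the "inflated" exponent pair from the quote

m = 42
c = pow(m, e_x, n)
assert pow(c, d_x, n) == m  # still decrypts: (e^x)(d^x) = (ed)^x = 1 mod lam(n)
```

Which is exactly why the scheme buys nothing: the inflated exponents are equivalent to the originals, just slower.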

 Since I'm on the topic, does doing exponentiation in a finite field
 make taking discrete logarithms more difficult (I suspect so), and if
 so, by how much?

This doesn't make sense. The discrete log operation is the inverse of
exponentiation. Doing exponentiation is a prerequisite for even
considering discrete log operations. Hence it cannot make them more
difficult.

 Is there any similar property that could be used on e' and d' to make
 computing e and d more difficult?  Of course whatever algorithm is
 used, one would need to feed e' and d' to it en toto, but a really
 clever attacker might be able to take the xth root prior to
 exfiltrating them.

That's a new word to me. What is your goal here, to make something
that is even stronger than RSA? Or is it, as in the context of this
thread, to inflate keys, making them bigger so that an attacker can't
download them easily?

 Also, application of a random pad using something like XOR would be
 useful; could be done as a postprocessing stage independently of the
 main algorithm used to encrypt the data, or done as a preprocessing
 stage to the plaintext.  I prefer the latter as it makes breaking the
 superencryption much more difficult, and fixed headers in the
 ciphertext could give away some OTP material.  However, the
 preliminary encryption in something like gpg would suffer, so it would
 have the effect of making the ciphertext bigger.  Perhaps this is an
 advantage in your world.

That's not feasible in most cases. If you really have a OTP handy, why
are you bothering with RSA? Or are you planning to use it as a
two-time-pad? That generally doesn't work well. (The fact that you are
worried about giving away OTP material is not a good sign!)

 An alternate technique relies in specifying, say, 256 bits of key,
 then using a cryptographically strong PRNG to expand it to an
 arbitrary length, and storing that for use.  Pilfering it then takes
 more bandwidth, but it could be reconstructed based on the 256-bit
 seed alone, if one knew the details of the PRNG.  So the key could be
 compressed for transfer, if you know the secret seed.  Search for
 the seed would still be expensive, even if PRNG details are known.

So where do you store this 256 bit seed? You want to distract the
attacker with the smoke and mirrors of the big file for him to
download, hoping he will ignore this little file which is all he
really needs? I think we are assuming the attacker is smarter than
this, otherwise you could just use regular key files but give them
obscure names.
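For reference, the kind of seed expansion being proposed is easy to sketch (SHA-256 in counter mode here; the original post does not specify a PRNG, so this construction is an assumption):

```python
import hashlib

def expand_seed(seed: bytes, nbytes: int) -> bytes:
    # Deterministic counter-mode expansion: H(seed||0) || H(seed||1) || ...
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])
```

The determinism is the whole problem: anyone holding the 256-bit seed regenerates the entire "inflated" key, so the big file adds no security over the little one.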

 Alternately, in a message encrypted with gpg-like hybrid ciphering,
 one could apply a secret, implicit PRNG to the message key seed before
 using it as a symmetric key.  For example, you could take a 256-bit
 message key, run it through the PRNG, create 3x256 bits, then use
 triple-AES to encrypt the message.  In this case, the PRNG buys
 forgery resistance without the use of PK techniques.  The PRNG
 expander could not be attacked without breaking the PK encryption
 (which supports arbitrarily large keys) of the seed or the triple-AES
 symmetric encryption of the message.

What is forgery resistance in this context? A public key encryption
system, by definition, allows anyone to create new encrypted messages.

Your technique is complicated but it is not clear how much security it
adds. Fundamentally it is not too different from RSA + counter mode,
where CTR can be thought of as a PRNG expanding a seed. This doesn't
seem to have anything to do with the thread topic. Are you just
tossing off random ideas because you don't think ordinary hybrid RSA
encryption is good enough?

 You know, they specify maximum bandwidth of covert channels in bits
 per second, I wonder if you could use techniques like this to prove
 some interesting property vis-a-vis covert channel leakage.  It's
 remarkably difficult to get rid of covert channels, but if you inflate
 whatever you're trying to protect, and monitor flows over a certain
 size, then perhaps you can claim some kind of resilience against them.

I'm not sure conventional covert-channel analysis is going to be that
useful here, because the bandwidths we are looking at in this attack
model are so much greater (kilobytes to megabytes per second). But
broadly speaking, yes, this was Daniel Nagy's idea which started this
thread, that making the key files big enough would make it more likely
to catch someone in the act.

Re: [EMAIL PROTECTED]: Skype security evaluation]

2005-11-04 Thread cyphrpunk
On 10/31/05, Kuehn, Ulrich [EMAIL PROTECTED] wrote:
 There are results available on this issue: First, a paper by
 Boneh, Joux, and Nguyen Why Textbook ElGamal and RSA Encryption
 are Insecure, showing that you can essentially halve the number
 of bits in the message, i.e. in this case the symmetric key

Thanks for this pointer. In the case of Skype it would be consistent
with the security report if they are encrypting random 128 bit values
under each other's RSA keys, unpadded, and exchanging them, then
hashing the pair of 128 bit values together to generate their session
key.

The paper above shows an easy birthday attack on such encryptions.
Approximately 18% of 128 bit numbers can be expressed as a product of
two 64-bit numbers. For such keys, if the ciphertext is C, consider
all 2^64 values m1 and m2, and compare m1^e with C/m2^e. This can be
done in about 2^64 time and memory, and if the plaintext is in that
18%, it will be found as m1*m2.
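For illustration, here is the same attack scaled down to toy sizes (8-bit halves and a tiny modulus in place of 64-bit halves and a real key; every parameter here is made up):

```python
# Scaled-down meet-in-the-middle attack on unpadded ("textbook") RSA.
p, q = 1009, 1013          # toy primes - hopelessly small for real use
n, e = p * q, 65537

secret = 143 * 201         # a "session key" that splits into two small halves
c = pow(secret, e, n)      # textbook RSA: no padding

B = 256                    # per-half search bound (2^64 in the real attack)
table = {pow(m1, e, n): m1 for m1 in range(1, B)}   # m1^e -> m1

recovered = None
for m2 in range(1, B):
    # test whether C / m2^e mod n matches some tabulated m1^e
    t = (c * pow(pow(m2, e, n), -1, n)) % n
    if t in table:
        recovered = table[t] * m2
        break
```

The table costs B encryptions and the scan another B, so the work is roughly 2*B instead of B^2, which is the whole point of the attack.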

Based on these comments and others that have been made in this thread,
the Skype security analysis seems to have major flaws. We have a
reluctance in our community to criticize the work of our older
members, especially those like Berson who have warm personalities and
friendly smiles. But in this case the report leaves so much
unanswered, and focuses inappropriately on trivial details like
performance and test vectors, that overall it can only be called an
entirely unsatisfactory piece of work.



Re: HTTPS mutual authentication alpha release - please test

2005-11-04 Thread cyphrpunk
On 11/3/05, Nick Owen [EMAIL PROTECTED] wrote:
 The token client pulls down a hash of the certificate from the
 WiKID server. It pulls the certificate from the website and performs a
 hash on it.  It compares the two hashes and if they match, presents the
 user with the OTP and the message:
 This URL has been validated. It is now safe to proceed.

Let me see if I understand the attack this defends against. The user
wants to access the citibank web site. The phisher uses DNS
poisoning to redirect this request away from the actual citibank
machine to a machine he controls which puts up a bogus citibank page.
To deal with the SSL, the phisher has also managed to acquire a fake
citibank certificate from a trusted CA(!). He fooled or suborned the
CA into granting him a cert on citibank's domain even though the phisher
has no connections with citibank. He can now use this bogus cert to
fool the client when it sets up the SSL connection to the citibank site.

Is this it? This is what your service will defend against, by
remembering the hash of the true citibank certificate?

Has this attack ever been used, in the history of the net?



Re: HTTPS mutual authentication alpha release - please test

2005-11-04 Thread cyphrpunk
On 11/3/05, Nick Owen [EMAIL PROTECTED] wrote:
 cyphrpunk wrote:
  On 10/31/05, Nick Owen [EMAIL PROTECTED] wrote:
 The system works this way: Each WiKID domain now can include a
'registered URL' field and a hash of that website's SSL certificate.  When
 a user wants to log onto a secure web site, they start the WiKID token
 and enter their PIN. The PIN is encrypted and sent to the WiKID server
 along with a one-time use AES key and the registered URL.  The server
 responds with a hash of the website's SSL certificate.  The token client
fetches the SSL certificate of the website and compares it to the hash.  If
 the hashes don't match, the user gets an error.  If they match, the user
 is presented with registered URL and the passcode.  On supported
 systems, the token client will launch the default browser to the
 registered URL.
  What threat is this supposed to defend against? Is it phishing? I
  don't see how it will help, if the bogus site has a valid certificate.

 Yes, phishing.  The token client isn't checking to see if the cert is
 valid, it's only checking to see if it's the same as the one that is on
 the WiKID authentication server.  The cert doesn't have to be valid or
 have the root CA in the browser.

But this would only help in the case that an old URL is used and a new
certificate appears, right? That's what would be necessary to get a
match in your database, pull down an old certificate, and find that it
doesn't match the new certificate.

Phishers don't do this. They don't send people to legitimate URLs
while somehow contriving to substitute their own bogus certificates.
They send people to wrong URLs that may have perfectly valid
certificates issued for them. I don't see how your system defends
against what phishers actually do.



Re: [fc-discuss] Financial Cryptography Update: On Digital Cash-like Payment Systems

2005-10-31 Thread cyphrpunk
On 10/25/05, Travis H. [EMAIL PROTECTED] wrote:
 More on topic, I recently heard about a scam involving differential
 reversibility between two remote payment systems.  The fraudster sends
 you an email asking you to make a Western Union payment to a third
 party, and deposits the requested amount plus a bonus for you using
 paypal.  The victim makes the irreversible payment using Western
 Union, and later finds out the credit card used to make the paypal
 payment was stolen when paypal reverses the transaction, leaving the
 victim short.

This is why you can't buy ecash with your credit card. Too easy to
reverse the transaction, and by then the ecash has been blinded away.
If paypal can be reversed just as easily that won't work either.

This illustrates a general problem with these irreversible payment
schemes, it is very hard to simply acquire the currency. Any time you
go from a reversible payment system (as all the popular ones are) to
an irreversible one you have an impedance mismatch and the transfer
reflects rather than going through (so to speak).



Re: On Digital Cash-like Payment Systems

2005-10-31 Thread cyphrpunk
On 10/26/05, James A. Donald [EMAIL PROTECTED] wrote:
 How does one inflate a key?

Just make it bigger by adding redundancy and padding, before you
encrypt it and store it on your disk. That way the attacker who wants
to steal your keyring sees a 4 GB encrypted file which actually holds
about a kilobyte of meaningful data. Current trojans can steal files
and log passwords, but they're not smart enough to decrypt and
decompress before uploading. They'll take hours to snatch the keyfile
through the net, and maybe they'll get caught in the act.
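A minimal sketch of that inflation step (the 8-byte header format and sizes are invented for illustration; random filler keeps the file incompressible even before encryption):

```python
import secrets

def inflate_key(key_material: bytes, target_size: int) -> bytes:
    # 8-byte length header, the real key, then random filler to target_size.
    header = len(key_material).to_bytes(8, "big")
    filler = secrets.token_bytes(target_size - len(header) - len(key_material))
    return header + key_material + filler

def deflate_key(blob: bytes) -> bytes:
    # Recover the real key from an inflated blob.
    n = int.from_bytes(blob[:8], "big")
    return blob[8:8 + n]
```

The inflated blob would then be encrypted as a whole before being written to disk, and deflated only after decryption.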



Re: [fc-discuss] Financial Cryptography Update: On Digital Cash-like Payment Systems

2005-10-31 Thread cyphrpunk
One other point with regard to Daniel Nagy's paper at

A good way to organize papers like this is to first present the
desired properties of systems like yours (and optionally show that
other systems fail to meet one or more of these properties); then to
present your system; and finally to go back through and show how your
system meets each of the properties, perhaps better than any others.
This paper is lacking that last step. It would be helpful to see the
epoint system evaluated with regard to each of the listed properties.

In particular I have concerns about the finality and irreversibility
of payments, given that the issuer keeps track of each token as it
progresses through the system. Whenever one token is exchanged for a
new one, the issuer records and publishes the linkage between the new
token and the old one. This public record is what lets people know
that the issuer is not forging tokens at will, but it does let the
issuer, and possibly others, track payments as they flow through the
system. This could be grounds for reversibility in some cases,
although the details depend on how the system is implemented. It would
be good to see a critical analysis of how epoints would maintain
irreversibility, as part of the paper.



Re: [fc-discuss] Financial Cryptography Update: On Digital Cash-like Payment Systems

2005-10-31 Thread cyphrpunk
On 10/28/05, Daniel A. Nagy [EMAIL PROTECTED] wrote:
 Irreversibility of transactions hinges on two features of the proposed
 system: the fundamentally irreversible nature of publishing information in
 the public records and the fact that in order to invalidate a secret, one
 needs to know it; the issuer does not learn the secret at all in some
 implementations and only learns it when it is spent in others.

 In both cases, reversal is impossible, albeit for different reasons. Let's
 say, Alice made a payment to Bob, and Ivan wishes to reverse it with the
 possible cooperation of Alice, but definitely without Bob's help. Alice's
 secret is Da, Bob's secret is Db, the corresponding challenges are,
 respectively, Ca and Cb, and the S message containing the exchange request
 Da->Cb has already been published.

 In the first case, when the secret is not revealed, there is simply no way to
 express reversals. There is no S message with suitable semantics,
 making it impossible to invalidate Db if Bob refuses to reveal it.

The issuer can still invalidate it even though you have not explicitly
defined such an operation. If Alice paid Bob and then convinces the
issuer that Bob cheated her, the issuer could refuse to honor the Db
deposit or exchange operation. From the recipient's perspective, his
cash is at risk at least until he has spent it or exchanged it out of
the system.

The fact that your system doesn't define an "issuer invalidates cash"
operation doesn't mean it couldn't happen. Alice could get a court
order forcing the issuer to do this. The point is that reversal is
technically possible, and you can't define it away just by saying that
the issuer won't do that. If the issuer has the power to reverse
transactions, the system does not have full irreversibility, even
though the issuer hopes never to exercise his power.

 In the second case, Db is revealed when Bob tries to spend it, so Ivan can,
 in principle, steal (confiscate) it, instead of processing, but at that
 point Da has already been revealed to the public and Alice has no means to
 prove that she was in exclusive possession of Da before it became public.

That is an interesting possibility, but I can think of a way around
it. Alice could embed a secret within her secret. She could base part
of her secret on a hash of an even-more-secret value which she would
not reveal when spending/exchanging. Then if it came to where she had
to prove that she was the proper beneficiary of a reversed
transaction, she could reveal the inner secret to justify her claim.
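A sketch of that nested-secret idea (the hash derivation and the tag string are hypothetical, not part of the epoint specification):

```python
import hashlib, os

# Hypothetical construction: the spending secret Da is derived from an
# inner secret that is never revealed in normal spending, so Alice can
# later prove she generated Da even after Da itself becomes public.
inner = os.urandom(32)                              # the even-more-secret value
Da = hashlib.sha256(b"epoint|" + inner).digest()    # revealed when spent

def proves_ownership(claimed_inner: bytes, public_Da: bytes) -> bool:
    # only the holder of the inner secret can satisfy this check
    return hashlib.sha256(b"epoint|" + claimed_inner).digest() == public_Da

assert proves_ownership(inner, Da)
assert not proves_ownership(os.urandom(32), Da)
```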

 Now, one can extend the list of possible S messages to allow for reversals
 in the first scenario, but even in that case Ivan cannot hide the fact of
 reversal from the public after it happened and the fact that he is prepared
 to reverse payments even before he actually does so, because the users and
 auditors need to know the syntax and the semantics of the additional S
 messages in order to be able to use Ivan's services.

That's true, the public visibility of the system makes secret
reversals impossible. That's very good - one of the problems with
e-gold was that it was never clear when they were reversing and
freezing accounts. Visibility is a great feature. But it doesn't keep
reversals from happening, and it still leaves doubt about how final
transactions will be in this system.



Re: [EMAIL PROTECTED]: Skype security evaluation]

2005-10-25 Thread cyphrpunk
On 10/23/05, Travis H. [EMAIL PROTECTED] wrote:
 My understanding of the peer-to-peer key agreement protocol (hereafter
 p2pka) is based on section 3.3 and 3.4.2 and is something like this:

 A -> B: N_ab
 B -> A: N_ba
 B -> A: Sign{f(N_ab)}_a
 A -> B: Sign{f(N_ba)}_b
 A -> B: Sign{A, K_a}_SKYPE
 B -> A: Sign{B, K_b}_SKYPE
 A -> B: Sign{R_a}_a
 B -> A: Sign{R_b}_b

 Session key SK_AB = g(R_a, R_b)

But what you have shown here has no encryption, hence no secrecy.
Surely RSA encryption must be used somewhere along the line. The
report doesn't say anything about the details of how that is done. In
particular, although it mentions RSA signature padding it says nothing
about RSA encryption padding.

Is it possible that Skype doesn't use RSA encryption? Or if they do,
do they do it without using any padding, and is that safe?
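For what it's worth, here is a toy demonstration (made-up parameters, nothing to do with Skype's actual code) of why encryption padding matters: with textbook RSA and a short plaintext, anyone can recover the message by taking an integer e-th root, no private key required.

```python
def iroot(k: int, n: int) -> int:
    # integer k-th root by binary search
    lo, hi = 0, 1 << ((n.bit_length() // k) + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** k <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

e = 3
n = (2**127 - 1) * (2**89 - 1)       # toy modulus, NOT secure
m = int.from_bytes(b"key", "big")    # a short plaintext, e.g. a session key
c = pow(m, e, n)                     # textbook RSA: no padding

# Since m**e < n, the reduction mod n never happens, and the
# "ciphertext" is just m**e -- the attacker takes an e-th root.
recovered = iroot(e, c)
assert recovered == m
```

Proper randomized padding (e.g. OAEP) ensures the padded value is full-size and random, which defeats this and related attacks.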



Re: [fc-discuss] Financial Cryptography Update: On Digital Cash-like Payment Systems

2005-10-25 Thread cyphrpunk
On 10/22/05, Ian G [EMAIL PROTECTED] wrote:
 R. Hirschfeld wrote:
  This is not strictly correct.  The payer can reveal the blinding
  factor, making the payment traceable.  I believe Chaum deliberately
  chose for one-way untraceability (untraceable by the payee but not by
  the payer) in order to address concerns such as blackmailing,
  extortion, etc.  The protocol can be modified to make it fully
  untraceable, but that's not how it is designed.

 Huh - first I've heard of that, would be
 encouraging if that worked.  How does it
 handle an intermediary fall guy?   Say
 Bad Guy Bob extorts Alice, and organises
 the payoff to Freddy Fall Guy.  This would
 mean that Alice can strip her blinding
 factors and reveal that she paid to Freddy,
 but as Freddy is not to be found, he can't
 be encouraged to reveal his blinding factors
 so as to reveal that Bob bolted with the

Right, that is one of the kinds of modifications that Ray referred to.
If the mint allows (de-facto) anonymous exchanges then a blackmailer
can simply do an exchange of his ecash before spending it and he will
be home free. Another mod is for the blackmailer to supply the
proto-coin to be signed, in blinded form.

One property of Daniel Nagy's epoint system is that it creates chains
where each token that gets created is linked to the one it came from.
This could be sold as an anti-abuse feature, that blackmailers and
extortionists would have a harder time avoiding being caught. In
general it is an anti-laundering feature since you can't wash your
money clean, it always links back to when it was dirty.
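As a sketch of that traceability property (hypothetical token identifiers; the real system publishes signed linkage records rather than a simple table):

```python
def trace_back(log: dict, token: str) -> list:
    # log maps new_token -> old_token, one entry per published exchange;
    # walking it recovers a token's full ancestry back to original issue.
    chain = [token]
    while chain[-1] in log:
        chain.append(log[chain[-1]])
    return chain

log = {"t3": "t2", "t2": "t1"}   # illustrative token IDs
assert trace_back(log, "t3") == ["t3", "t2", "t1"]
```

Anyone with the public log can run this walk, which is exactly why the money "always links back to when it was dirty."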

U.S. law generally requires that stolen goods be returned to the
original owner without compensation to the current holder, even if
they had been purchased legitimately (from the thief or his agent) by
an innocent third party. Likewise a payment system with traceable
money might find itself subject to legal orders to reverse subsequent
transactions, confiscate value held by third parties and return the
ill-gotten gains to the victim of theft or fraud. Depending on the
full operational details of the system, Daniel Nagy's epoints might be
vulnerable to such legal actions.

Note that e-gold, which originally sold non-reversibility as a key
benefit of the system, found that this feature attracted Ponzi schemes
and fraudsters of all stripes, and eventually it was forced to reverse
transactions and freeze accounts. It's not clear that any payment
system which keeps information around to allow for potential
reversibility can avoid eventually succumbing to pressure to reverse
transactions. Only a Chaumian type system, whose technology makes
reversibility fundamentally impossible, is guaranteed to allow for
final clearing. And even then, it might just be that the operators
themselves will be targeted for liability since they have engineered a
system that makes it impossible to go after the fruits of criminal
activity.



Re: [fc-discuss] Financial Cryptography Update: On Digital Cash-like Payment Systems

2005-10-25 Thread cyphrpunk
On 10/24/05, Steve Schear [EMAIL PROTECTED] wrote:
 I don't think E-gold ever held out its system as non-reversible with proper
 court order.  All reverses I am aware happened either due to some technical
 problem with their system or an order from a court of competence in the
 matter at hand.

Back then there were cases where e-gold froze accounts
without waiting for court orders. I was involved with the discussion
on the e-gold mailing lists back then and it caused considerable hard
feeling among the users. E-gold was struggling to deal with the
onslaught of criminal activity (Ian Grigg described the prevailing
mood as one of 'angst') and they were thrown into a reactive mode.
Eventually I think they got their house in order and established
policies that were more reasonable.

 It's not clear at all that courts will find engineering a system for
 irreversibility is illegal or contributory if there was good justification
 for legal business purposes, which of course there are.

Yes, but unfortunately it is not clear at all that courts would find
the opposite, either. If a lawsuit names the currency issuer as a
defendant, which it almost certainly would, a judge might order the
issuer's finances frozen or impose other measures which would impair
its business survival while trying to sort out who is at fault. It
would take someone with real cojones to go forward with a business
venture of this type in such uncharted waters.



Re: [fc-discuss] Financial Cryptography Update: On Digital Cash-like Payment Systems

2005-10-25 Thread cyphrpunk
On 10/24/05, John Kelsey [EMAIL PROTECTED] wrote:
 More to the point, an irreversible payment system raises big practical
 problems in a world full of very hard-to-secure PCs running the
 relevant software.  One exploitable software bug, properly used, can
 steal an enormous amount of money in an irreversible way.  And if your
 goal is to sow chaos, you don't even need to put most of the stolen
 money in your own account--just randomly move it around in
 irreversible, untraceable ways, making sure that your accounts are
 among the ones that benefit from the random generosity of the attack.

To clarify one point, it is not necessary to have accounts in an
ecash system. Probably the simplest approach is a mint with
three basic functions: selling ecash for real money; exchanging ecash
for new ecash of equal value; and buying ecash for real money. All
ecash exchanges with the mint can be anonymous, and only when ecash is
exchanged for real money does that side of the transaction require a
bank account number or similar identifying information.
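A minimal sketch of such a mint (the class and method names are hypothetical; tokens here are opaque random strings rather than real blinded coins):

```python
import os

class Mint:
    """Sketch of the three mint operations described above.  Only
    sell/buy touch real-world identity; exchange is anonymous."""

    def __init__(self):
        self.outstanding = {}   # token -> value

    def _issue(self, value: int) -> str:
        token = os.urandom(16).hex()
        self.outstanding[token] = value
        return token

    def sell_ecash(self, bank_account: str, value: int) -> str:
        # identified side: real money comes in via bank_account
        return self._issue(value)

    def exchange(self, token: str) -> str:
        # anonymous: swap a valid token for a fresh one of equal value
        value = self.outstanding.pop(token)   # KeyError if invalid/spent
        return self._issue(value)

    def buy_ecash(self, token: str, bank_account: str) -> int:
        # identified side: redeem the token for real money
        return self.outstanding.pop(token)

mint = Mint()
t1 = mint.sell_ecash("alice-bank", 100)
t2 = mint.exchange(t1)                     # the anonymous middle step
assert mint.buy_ecash(t2, "bob-bank") == 100
```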

In such a system, the ecash resides not in accounts, but in digital
wallets which are held in files on end users' computers. The basic
attack scenario then is some kind of virus which hunts for such files
and sends the ecash to the perpetrator. If the ecash wallet is
protected, by a password or perhaps a token which must be inserted,
the virus can lie in wait and grab the ecash once the user opens the
wallet manually. Several kinds of malicious activity are possible,
from simply deleting the cash to broadcasting it in encrypted form,
for example over IRC. Perhaps it could even engage in the
quixotic action of redistributing some of the cash among the users,
but my guess is that pecuniary motivations would dominate and most
viruses will simply do their best to steal ecash. Without accounts per
se, and using a broadcast channel, there is little danger in receiving
or spending the stolen money.

Digital wallets will require real security in user PCs. Still, I don't
see why we don't already face this problem with online banking and
similar financial services. Couldn't a virus today steal people's
passwords and command their banks to transfer funds, just as easily as
the fraud described above? To the extent that this is not happening,
the threat against ecash may not happen either.

 The payment system operators will surely be sued for this, because
 they're the only ones who will be reachable.  They will go broke, and
 the users will be out their money, and nobody will be silly enough to
 make their mistake again.

They might be sued but they won't necessarily go broke. It depends on
how deep the pockets are suing them compared to their own, and most
especially it depends on whether they win or lose the lawsuit. As
Steve Schear noted, there is a reasonable argument that a payment
system issuer should not be held liable for the misdeeds of its
customers. Jurisdictional issues may be important as well. Clearly
anyone proposing to enter this business will have to accept the risk
and cost of defending against such lawsuits as part of the business



Re: [PracticalSecurity] Anonymity - great technology but hardly used

2005-10-25 Thread cyphrpunk

  I believe that for anonymity and pseudonymity technologies to survive
  they have to be applied to applications that require them by design,
  rather than to mass-market applications that can also do (cheaper)
  without. If anonymity mechanisms are deployed just to fulfill the
  wish of particular users then it may fail, because most users don't
  have that wish strong enough to pay for fulfilling it. An example for
  such an application (that requires anonymity by design) could be
  E-Voting, which, unfortunately, suffers from other difficulties. I am
  sure there are others, though.

The truth is exactly the opposite of what is suggested in this
article. The desire for anonymous communication is greater today than
ever, but the necessary technology does not exist.

For the first time there are tens or hundreds of millions of users who
have a strong need and desire for high volume anonymous
communications. These are file traders, exchanging images, music,
movies, TV shows and other forms of communication. The main threat to
this illegal but widely practiced activity is legal action by
copyright holders against individual traders. The only effective
protection against these threats is the barrier that could be provided
by anonymity. An effective, anonymous file sharing network would see
rapid adoption and would be the number one driver for widespread use
of anonymity.

But the technology isn't there. Providing real-time, high-volume,
anonymous communications is not possible at the present time. Anyone
who has experienced the pitiful performance of a Tor web browsing
session will be familiar with the iron self-control and patience
necessary to keep from throwing the computer out the window in
frustration. Yes, you can share files via Tor, at the expense of
reducing transfer rates by multiple orders of magnitude.

Not only are there efficiency problems; detailed analyses of the
security properties of real-time anonymous networks have repeatedly
shown that the degree of anonymity possible is very limited against a
determined attacker. Careful insertion of packet delays and monitoring
of corresponding network reactions allow an attacker to easily trace
an encrypted communication through the nodes of the network. Effective
real-time anonymity is almost a contradiction in terms.
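As a toy illustration of the delay-insertion attack (made-up numbers, not a real network trace): the attacker tags the entry link with a distinctive delay pattern and correlates it against candidate exit links.

```python
import random, statistics

def correlation(xs, ys):
    # Pearson correlation of two equal-length delay sequences
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

random.seed(1)
tag = [random.choice((0.0, 50.0)) for _ in range(200)]  # injected delays (ms)
jitter = lambda: random.gauss(0, 5)                     # network noise
tagged_exit = [d + jitter() for d in tag]               # victim's exit link
other_exit = [random.choice((0.0, 50.0)) + jitter() for _ in range(200)]

# The tagged stream stands out sharply against an unrelated one.
assert correlation(tag, tagged_exit) > 0.9
assert abs(correlation(tag, other_exit)) < 0.3
```

Low-latency designs like Tor cannot buffer and reorder traffic enough to wash out such patterns, which is the core of the "real-time anonymity is almost a contradiction in terms" point.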

Despite these difficulties, file trading is still the usage area with
the greatest potential for widespread adoption of anonymity. File
traders are fickle and will gravitate rapidly to a new system if it
offers significant benefits. If performance can be improved to at
least approximate the transfer rates of non-anonymous networks, while
allowing enough security to make the job of the content lawyers
harder, that could be enough to give this technology the edge it needs
to achieve widespread acceptance.



Re: [fc-discuss] Financial Cryptography Update: On Digital Cash-like Payment Systems

2005-10-21 Thread cyphrpunk
As far as the issue of receipts in Chaumian ecash, there have been a
couple of approaches discussed.

The simplest goes like this. If Alice will pay Bob, Bob supplies Alice
with a blinded proto-coin, along with a signed statement, I will
perform service X if Alice supplies me with a mint signature on this
value Y. Alice pays to get the blinded proto-coin Y signed by the
mint. Now she can give it to Bob and show the signature on Y in the
future to prove that she upheld her end.
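The flow above, sketched with textbook RSA blind signatures and toy parameters (insecure key sizes, illustration only; the blinding factor and values are arbitrary):

```python
import math

# toy mint RSA key (far too small for real use)
p, q = 104729, 1299709
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

proto_coin = 123456789        # Bob's proto-coin value Y (e.g. a hashed coin)

# Alice blinds Y before sending it to the mint
r = 987654321
while math.gcd(r, n) != 1:
    r += 1
blinded = (proto_coin * pow(r, e, n)) % n

mint_sig_blinded = pow(blinded, d, n)           # mint signs blindly
sig = (mint_sig_blinded * pow(r, -1, n)) % n    # Alice unblinds

# The unblinded value is a valid mint signature on Y: it is Alice's
# receipt, and also the signature Bob needs to form his coin.
assert pow(sig, e, n) == proto_coin
```

The mint never sees Y in the clear, yet the resulting signature verifies against Y, which is what makes it usable both as Bob's coin and as Alice's proof of payment.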

A slightly more complicated one starts again with Bob supplying Alice
with a blinded proto-coin, which Alice signs. Now she and Bob do a
simultaneous exchange of secrets protocol to exchange their two
signatures. This can be done for example using the commitment scheme
of Damgard from Eurocrypt 93. Bob gets the signature necessary to
create his coin, and Alice gets the signed receipt (or even better,
perhaps Bob's signature could even constitute the service Alice is

I would be very interested to hear about a practical application which
combines the need for non-reversibility (which requires a degree of
anonymity) with the need to be able to prove that payment was made
(which seems to imply access to a legal system to force performance,
an institution which generally will require identification).



Re: Hooking nym to wikipedia

2005-10-04 Thread cyphrpunk
On 10/3/05, Jason Holt [EMAIL PROTECTED] wrote:

 More thoughts regarding the tokens vs. certs decision, and also multi-use:

This is a good summary of the issues. With regard to turning client
certs on and off: from many years of experience with anonymous and
pseudonymous communication, the big usability problem is remembering
which mode you are in - whether you are identified or anonymous. This
relates to the technical problem of preventing data from one mode from
leaking over into the other.

The best solution is to use separate logins for the two modes. This
prevents any technical leakage such as cookies or certificates.
Separate desktop pictures and browser skins can be selected to provide
constant cues about the mode. Using this method it would not be
necessary to be asked on every certificate usage, so that problem with
certs would not arise.

(As far as the Chinese dissident using net cafes, if they are using
Tor at all it might be via a USB token like the one (formerly?)
available from The browser on the token can
be configured to hold the cert, making it portable.)

Network eavesdropping should not be a major issue for a pseudonym
server. Attackers would have little to gain for all their work. The
user is accessing the server via Tor so their anonymity is still
protected.

Any solution which waits for Wikimedia to make changes to their
software will probably be long in coming. When Jimmy Wales was asked
whether their software could allow logins for trusted users from
otherwise blocked IPs, he didn't have any idea. The technical people
are apparently in a separate part of the organization. Even if Jimmy
endorsed an idea for changing Wikipedia, he would have to sell it to
the technical guys, who would then have to implement and test it in
their Wiki code base, then it would have to be deployed in Wikipedia
(which is after all their flagship product and one which they would
want to be sure not to break).

Even once this happened, the problem is only solved for that one case
(possibly also for other users of the Wiki code base). What about
blogs or other web services that may decide to block Tor? It would be
better to have a solution which does not require customization of the
web service software. That approach tries to make the Tor tail wag the
Internet dog.

The alternative of running a pseudonym based web proxy that only lets
good users pass through will avoid the need to customize web
services on an individual basis, at the expense of requiring a
pseudonym quality administrator who cancels nyms that misbehave. For
forward secrecy, this service would expunge its records of which nyms
had been active, after a day or two (long enough to make sure no
complaints are going to come back).

As far as the Unlinkable Serial Transactions proposal, the gist of it
is to issue a new blinded token whenever one is used. That's a clever
idea but it is not adequate for this situation, because abuse
information is not available until after the fact. By the time a
complaint arises the miscreant will have long ago received his new
blinded token and the service will have no way to stop him from
continuing to use it.

I could envision a complicated system whereby someone could use a
token on Monday to access the net, then on Wednesday they would become
eligible to exchange that token for a new one, provided that it had
not been black-listed due to complaints in the interim. This adds
considerable complexity, including the need to supply people with
multiple initial tokens so that they could do multiple net accesses
while waiting for their tokens to be eligible for exchange; the risk
that exchange would often be followed immediately by use of the new
token, harming unlinkability; the difficulty in fully black-listing a
user who has multiple independent tokens, when each act of abuse
essentially just takes one of his tokens away from him. Overall this
would be too cumbersome and problematic to use for this purpose.

Providing forward secrecy by having the nym-based web proxy erase its
records every two days is certainly less secure than doing it by
cryptographic means, but at the same time it is more secure than
trusting every web service out there to take similar actions to
protect its clients. Until a clean and unencumbered technological
approach is available, this looks like a reasonable compromise.



Re: nym-0.2 released (fwd)

2005-10-02 Thread cyphrpunk
A few comments on the implementation details of

1. Limiting token requests by IP doesn't work in today's internet. Most
customers have dynamic IPs. Either they won't be able to get tokens,
because someone else has already gotten one using their temporary IP,
or they will be able to get multiple ones by rotating among available
IPs. It may seem that IP filtering is expedient for demo purposes, but
actually that is not true, as it prevents interested parties from
trying out your server more than once, such as to do experimental
hacking on the token-requesting code.

I suggest a proof-of-work system a la hashcash. You don't have to use
that directly; just require the token request to be accompanied by a
value whose SHA-1 hash starts with, say, 32 zero bits (and record
those values to avoid reuse).
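A sketch of such a hashcash-style check (the 8-byte counter encoding and the challenge string are arbitrary choices of mine; the demo uses 16 bits so it runs quickly, where 32 bits would cost ~2^32 hashes per request):

```python
import hashlib, itertools

def check_pow(challenge: bytes, counter: int, bits: int) -> bool:
    # valid iff SHA-1(challenge || counter) starts with `bits` zero bits
    h = hashlib.sha1(challenge + counter.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") >> (160 - bits) == 0

def make_pow(challenge: bytes, bits: int) -> int:
    # brute-force search for a valid counter (the client's work)
    for counter in itertools.count():
        if check_pow(challenge, counter, bits):
            return counter

c = make_pow(b"token-request", 16)   # ~2**16 hashes expected
assert check_pow(b"token-request", c, 16)
```

The server's verification is a single hash, so the cost asymmetry is entirely on the requester's side.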

2. The token reuse detection in signcert.cgi is flawed. Leading zeros
can be added to r which will cause it to miss the saved value in the
database, while still producing the same rbinary value and so allowing
a token to be reused arbitrarily many times.
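One way to sketch the fix (hypothetical code, not the actual signcert.cgi): key the spent-token database on the integer value of r rather than its string form, so padding with leading zeros cannot evade detection.

```python
spent = set()   # canonical integer values of already-used r

def try_spend(r_hex: str) -> bool:
    r = int(r_hex, 16)      # canonical form: leading zeros vanish
    if r in spent:
        return False        # replay detected
    spent.add(r)
    return True

assert try_spend("ab12")
assert not try_spend("00ab12")   # same value, padded with zeros
```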

3. signer.cgi attempts to test that the value being signed is > 2^512.
This test is ineffective because the client is blinding his values. He
can get a signature on, say, the value 2, and you can't stop him.

4. Your token construction, sign(sha1(r)), is weak. sha1(r) is only
160 bits which could allow a smooth-value attack. This involves
getting signatures on all the small primes up to some limit k, then
looking for an r such that sha1(r) factors over those small primes
(i.e. is k-smooth). For k = 2^14 this requires getting less than 2000
signatures on small primes, and then approximately one in 2^40 160-bit
values will be smooth. With a few thousand more signatures the work
value drops even lower.

A simple solution is to do slightly more complex padding. For example,
concatenate sha1(0||r) || sha1(1||r) || sha1(2||r) || ... until it is
the size of the modulus. Such values will have essentially zero
probability of being smooth and so the attack does not work.
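A sketch of that padding (assuming SHA-1 and a 1024-bit modulus for concreteness; the final truncation is my own choice to keep the padded value below the modulus):

```python
import hashlib

def fdh_pad(r: bytes, modulus_bits: int) -> int:
    # Concatenate SHA-1(i || r) blocks until the output fills the
    # modulus, then truncate to modulus_bits - 1 bits.
    out = b""
    i = 0
    while len(out) * 8 < modulus_bits:
        out += hashlib.sha1(bytes([i]) + r).digest()
        i += 1
    return int.from_bytes(out, "big") >> (len(out) * 8 - (modulus_bits - 1))

# a 1024-bit modulus needs 7 SHA-1 blocks (7 * 160 = 1120 bits)
v = fdh_pad(b"token", 1024)
assert v.bit_length() <= 1023
```

Because the padded value is a pseudorandom full-size integer, the chance of it factoring over small primes is negligible, which kills the smooth-value attack.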



Re: nym-0.2 released (fwd)

2005-10-02 Thread cyphrpunk
On 10/1/05, Jason Holt [EMAIL PROTECTED] wrote:
 The reason I have separate token and cert servers is that I want to end up
 with a client cert that can be used in unmodified browsers and servers.  The
 certs don't have to have personal information in them, but with indirection we
 cheaply get the ability to enforce some sort of structure on the certs. Plus,
 I spent as much time as it took me to write *both releases of nym* just trying
 to get ahold of the actual digest in an X.509 cert that needs to be signed by
 the CA (in order to have the token server sign that instead of a random
 token).  That would have eliminated the separate token/cert steps, but
 required a really hideous issuing process and produced signatures whose form
 the CA could have no control over.  (Clients could get signatures on IOUs,
 delegated CA certs, whatever.)

That makes sense, although it does add some complexity for the end
user, having to figure out how to get his certificate into his
browser. Adam Langley's suggestion to cut and paste the token into a
login field at the gateway proxy would be simpler for the user. The
proxy could then set the token in a browser cookie which would make it
available on every access.

 Actually, if all you want is complaint-free certifications, that's easy to put
 in the proxy; just make it serve up different identifiers each time and keep a
 table of which IDs map to which client certs.  Makes it harder for the
 wikipedia admins to see patterns of abuse, though.  They'd have to report each
 incident and let the proxy admin decide when the threshold is reached.

My suggestion was even simpler. The mere fact that a connection was
allowed through by the gateway proxy implicitly certifies that it is
complaint-free. There is no need for client identifiers. Rather, the
proxy would keep a table of which outgoing IPs at which times mapped
to which tokens. The proxy would handle a complaint by invalidating
the token that was used at the time the problem occurred. This is
simpler than your client identifier, provides more user privacy, and
should work out of the box with Wikipedia, which must use a similar
complaint resolution mechanism with ISPs that dynamically assign IPs
to users.
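A rough sketch of that bookkeeping (hypothetical class and method names; real session boundaries and complaint formats would differ):

```python
import datetime

class GatewayProxy:
    # The proxy logs (exit IP, time window) -> token and revokes
    # whichever token was in use when the reported abuse occurred.
    def __init__(self):
        self.log = []           # (ip, start, end, token)
        self.revoked = set()

    def record_session(self, ip, start, end, token):
        self.log.append((ip, start, end, token))

    def handle_complaint(self, ip, when):
        for (lip, start, end, token) in self.log:
            if lip == ip and start <= when <= end:
                self.revoked.add(token)

    def allow(self, token):
        return token not in self.revoked

t0 = datetime.datetime(2005, 10, 2, 12, 0)
proxy = GatewayProxy()
proxy.record_session("1.2.3.4", t0, t0 + datetime.timedelta(hours=1), "tokA")
proxy.handle_complaint("1.2.3.4", t0 + datetime.timedelta(minutes=30))
assert not proxy.allow("tokA")
assert proxy.allow("tokB")
```

Expiring the log after a day or two gives the forward secrecy discussed earlier, at the cost of losing the ability to act on late complaints.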



Re: nym-0.2 released (fwd)

2005-10-01 Thread cyphrpunk
On 9/30/05, Jason Holt [EMAIL PROTECTED] wrote:
 My proposal for using this to enable tor users to play at Wikipedia is as
 follows:

 1. Install a token server on a public IP.  The token server can optionally be
 provided Wikipedia's blocked-IP list and refuse to issue tokens to offending
 IPs.  Tor users use their real IP to obtain a blinded token.

 2. Install a CA as a hidden service.  Tor users use their unblinded tokens to
 obtain a client certificate, which they install in their browser.

 3. Install a wikipedia-gateway SSL web proxy (optionally also a hidden service)
 which checks client certs and communicates a client identifier to MediaWiki,
 which MediaWiki will use in place of the REMOTE_ADDR (client IP address) for
 connections from the proxy.  When a user misbehaves, Wikipedia admins block the
 client identifier just as they would have blocked an offending IP address.