Re: Keysigning @ CFP2003

2003-03-25 Thread bear


On Mon, 24 Mar 2003, Jeroen C. van Gelderen wrote:

> It's rather efficient if you want to sign a large number of keys of
> people you mostly do not know personally.


Right, but remember that knowing people personally was supposed
to be part of the point of vouching for their identity to others.

"I know this guy.  We spent a couple of years working on X together"
is different in kind from "I met this guy once in my life, and he
had a driver's license that said his name was Mike."

Bear



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Who's afraid of Mallory Wolf?

2003-03-25 Thread bear


On Tue, 25 Mar 2003, Ian Grigg wrote:

> On Monday 24 March 2003 19:26, bear wrote:
>
>> him running roughshod over the law.  He set up routing tables
>> to fool DNS into thinking his machine was the shortest distance
>> from the courthouse where she worked to her home ISP and
>> eavesdropped on her mail.  Sent a message to every fax machine
>> in town calling her a Jezebellian whore after getting the
>> skinny on the aftermath of an affair that she was discussing
>> with her husband.
>
> I love it!  Then, I'm wrong on that point, we
> do in fact have some aggressive MITMs
> occurring in some media over the net.
> Steve Bellovin pointed one out, this is
> another.
>
> Which gets us to the next stage of the
> analysis (what did they cost!).


Wait.  Time out.  Setting aside the increased monetary
cost of her reelection campaign in a fairly conservative
state capital, and setting aside the increased difficulty
of raising money for that campaign, the main costs here
are intangible.

On a professional level, she had reduced power in office
because of the scandal this clown created by publishing her
personal email, but the intangible costs go both directions
from there.

Toward the personal end of the spectrum, discussing the
aftermath of an affair with one's husband is sensitive and
personal, and making that whole thing public can't have done
either of them, or their marriage for that matter, any good.

In the public sphere, this is a case in which information
gained from an attack on email was being employed directly
for undeserved influence on government officials.  Being timed
to interfere with her reelection makes it a direct means of
removing political opponents from office,  and it has
probably had a chilling effect on other council members
in that benighted city who might otherwise have voted in ways
Phred didn't like.  What he did was nothing less than a
direct assault on the democratic process of government.

I don't think mere monetary costs are even germane to
something like this.  The costs, publicly and personally,
are of a different kind than money expresses.  And we're going
to continue to have this problem for as long as we continue to
use unencrypted SMTP for mail transport.

Bear





Re: Keysigning @ CFP2003

2003-03-25 Thread bear


On Tue, 25 Mar 2003, Matt Crawford wrote:

> Has anyone ever weighted a PGP key's certification value as a
> function of how many keys it's known to have certified?

An interesting idea: At one extreme you could view the whole
universe as having a finite amount of trust, where every
certification is a transfer of some trust from one person to
another. But then companies like VeriSign, after the first
thousand or so certs, would have nothing left to sell.

At the other, you could view VeriSign as providing a fairly
reliable indication, not necessarily of who X is, but certainly
of the fact that somebody was willing to spend thousands of
dollars to claim to be X, and the financial records are on file
if you absolutely need to figure out who that was; so they
create trust in a way that most keysigners don't.

Neither model is perfect, but the latter one seems to have more
appeal to people protecting financial transactions, and the
former to people who are more concerned about personal privacy.
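The finite-trust model can be sketched directly: treat each signer as having a fixed budget of trust that gets split over everything they certify. A toy illustration in Python (a hypothetical scoring scheme for Matt's question, not part of any real PGP implementation):

```python
# Toy sketch of the idea above: discount each certification by how
# many keys the signer is known to have certified, as if every
# signer had a fixed budget of trust to spread around.

def cert_weight(sigs_by_signer, signer, budget=1.0):
    """Weight of one certification from `signer`: a fixed trust
    budget split evenly over every key that signer certified."""
    n = len(sigs_by_signer[signer])
    return budget / n if n else 0.0

def key_trust(key, sigs_by_signer):
    """Sum the discounted weight of every certification on `key`."""
    return sum(cert_weight(sigs_by_signer, signer)
               for signer, signed in sigs_by_signer.items()
               if key in signed)

# A careful signer who certified 2 keys counts far more than a
# bulk signer (a VeriSign, in this model) who certified 1000:
sigs = {"careful": {"alice", "bob"},
        "bulk": {"alice"} | {"key%d" % i for i in range(999)}}
print(key_trust("alice", sigs))  # 0.5 + 0.001 = 0.501
```

In this scheme the first thousand certs really do use up nearly all of a bulk signer's salable trust, which is exactly the property the first model predicts.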

Bear




Re: Who's afraid of Mallory Wolf?

2003-03-25 Thread bear


On Tue, 25 Mar 2003, Anne & Lynn Wheeler wrote:

> the other scenario that has been raised before is that the browsers treat
> all certification authorities the same  aka if the signature on the
> certificate can be verified with any of the public keys in a browser's
> public key table ... it is trusted. in effect, possibly 20-40 different
> manufacturers of chubb vault locks  with a wide range of business
> process controls ... and all having the same possible backdoor.
> Furthermore, the consumer doesn't get to choose which chubb lock is being
> chosen.

Of course the consumer gets to make that choice.  I can go into my browser's
keyring and delete root certs that have ever been sold.  And I routinely
do.  A fair number of sites don't work for me anymore, but I'm okay with
that.

Bear




Re: Who's afraid of Mallory Wolf?

2003-03-25 Thread bear


On Tue, 25 Mar 2003, Ian Grigg wrote:

> On Tuesday 25 March 2003 12:07, bear wrote:
>
> But, luckily, there is a way to turn the above
> subjective morass of harm into an objective
> hard number:  civil suit.  Presumably, (you
> mentioned America, right?) this injured party
> filed a civil suit against the person and sought
> damages.

You honestly haven't heard of Fred Phelps?
He has thirteen children and nine of them are
lawyers.  Estimated costs to sue the guy are in
the hundreds of thousands of dollars.  Estimated
costs for him to defend are near zero.  Plus the
instant you file that civil suit you'll have his
zombies loudly picketing your home (that's right,
your private residence) 24/7 until you stop.


>> And we're going
>> to continue to have this problem for as long as we continue to
>> use unencrypted SMTP for mail transport.
>
> I would agree.  Which is why we are having
> this discussion - how can we get this poor
> victim's traffic onto some form of crypto so
> she doesn't get her life ripped apart by some
> dirtbag?

ISPs don't want to support encrypted links
because it raises their CPU costs.  And mail
clients generally aren't intelligently designed
to handle encrypted email, which the mail servers
could just pass through without decrypting and
re-encrypting.

I think a new protocol is needed.  The fact
that SMTP is unencrypted by default makes it
impossible for an encrypted email form to be
built on top of it.

Bear





Re: Face-Recognition Technology Improves

2003-03-24 Thread bear


On Sun, 16 Mar 2003, Eugen Leitl wrote:

> There's a world of difference between a line of people each slowly
> stepping through the gate past a sensor in roughly aligned orientation and
> a fixed-orientation no-zoom low-resolution camera looking at a group of
> freely behaving subjects at varying illumination.

The problem is that's exactly the sort of barrier that goes
away over time.  We face the inevitable advance of Moore's
Law.  The prices on those cameras are coming down, and the
prices of the media to store higher-res images (which play
a major part in how much camera people decide is worth the
money) are coming down even more rapidly.  Face recognition
was something that was beyond our computing abilities for a
long time, but the systems are here now and we have to
decide how to deal with them - not on the basis of what they
are capable of this month, but on the basis of what kind of
society they enable in coming decades.

Also, face recognition is not like cryptography; you can't
make your face sixteen bits longer and stave off advances
in computer hardware for another five years.  These systems
are here now, and they're getting better.  Varied lighting,
varied perspective, moving faces, pixel counts, etc -- these
are all things that make the problem harder, but none of them
is going to put it out of reach for more than six months or
a year.  Five years from now those will be no barrier at
all, and the systems they have five years from now will be
deployed according to the decisions we make about such systems
now.

Bear




Re: Face-Recognition Technology Improves

2003-03-24 Thread bear


On Sun, 16 Mar 2003, Bill Stewart wrote:

 But there are two sides to the problem - recording the images of the
 people you're looking for, and viewing the crowd to try to find
 matches.  You're right that airport security gates are probably a
 pretty good consistent place to view the crowd, but getting the
 target images is a different problem - some of the Usual Suspects
 may have police mugshots, but for most of them it's unlikely that
 you've gotten them to sit down while you take a whole-face geometry
 scan to get the fingerprint.

I'm reasonably certain that a 'whole-face geometry scan' is a
reasonable thing to expect to be able to extract from six or eight
security-gate images.  If you've been through the airport four or five
times in the last year, and they know whose boarding pass was
associated with each image, then they've probably got enough images of
your face to construct it without your cooperation.

And if they don't do it today, there's no barrier in place preventing
them from doing it tomorrow.  Five years from now, I bet the cameras
and systems will be good enough to make it a one-pass operation.  I'd
be surprised if they don't then scan routinely as people go through
the security booths in airports, and if you've been scanned before
they make sure it matches, and if you haven't you now have a scan on
file so they can make sure it matches next time.

Bear





Re: Who's afraid of Mallory Wolf?

2003-03-24 Thread bear


On Mon, 24 Mar 2003, Peter Clay wrote:

> On Sun, 23 Mar 2003, Ian Grigg wrote:
>
>> Consider this simple fact:  There has been no
>> MITM attack, in the lifetime of the Internet,
>> that has recorded or documented the acquisition
>> and fraudulent use of a credit card (CC).
>>
>> (Over any Internet medium.)

There have, however, been numerous MITM attacks for stealing
or eavesdropping on email.  A semi-famous case I'm thinking
of involves a rabid Baptist minister named Fred Phelps and
a Topeka city councilwoman who had the audacity to vote against
him running roughshod over the law.  He set up routing tables
to fool DNS into thinking his machine was the shortest distance
from the courthouse where she worked to her home ISP and
eavesdropped on her mail.  Sent a message to every fax machine
in town calling her a Jezebellian whore after getting the
skinny on the aftermath of an affair that she was discussing
with her husband.

And as for theft of credit card numbers, the lack of MITM
attacks directly on them is just a sign that other areas of
security around them are so loose no crooks have yet had to
go to that much trouble.  Weakest link, remember?  No need
to mount a MITM attack if you're able to just bribe the data
entry clerk.  Just because most companies' security is so
poor that it's not worth the crook's time and effort doesn't
mean we should throw anyone who takes security seriously
enough that a MITM vulnerability might be the weakest link
to the wolves.

> How do you view attacks based on tricking people into going to a site
> which claims to be affiliated with e.g. Ebay or Paypal, getting them to
> enter their login information as usual, and using that to steal money?

These, technically speaking, are impostures, not MITM attacks.  The
web makes it ridiculously easy.  You can use any linktext or graphic
to link to anywhere, and long cryptic URLs are sufficiently standard
practice that people don't actually look at them any more to notice a
few characters' difference.

On the occasions where people have actually spoofed DNS to route the
correct URL to the wrong server in order to get info on people's
accounts, that is a full-on MITM attack. And that definitely has
happened.  I'm surprised to hear someone claim that credit card
numbers haven't been stolen that way. I've been more concerned about
email than credit cards, so I don't know for sure, but if credit cards
haven't been stolen this way then the guys who want them are way
behind the guys who want to eavesdrop on email.

>> [2] AFAIR, Anonymous-Diffie-Hellman, or ADH, is
>> inside the SSL/TLS protocol, and would represent
>> a mighty fine encrypted browsing opportunity.
>> Write to your browser coder today and suggest
>> its immediate employment in the fight against
>> the terrorists with the flappy ears.
>
> Just out of interest, do you have an economic cost/benefit analysis
> for the widespread deployment of gratuitous encryption?

This is a simple consequence of the fact that the main market for SSL
encryption is financial transactions.  And no credit card issuer wants
fully anonymous transactions; it leaves them holding the bag if
anything goes wrong.  Anonymous transactions require a different
market, which has barely begun to make itself felt in a meaningful way
(read: by being willing to pay for it) to anyone who has pockets deep
enough to do the development.

Bear




Re: Diffie-Hellman 128 bit

2003-03-15 Thread bear


On Fri, 14 Mar 2003, NOP wrote:

> Nope, it uses 128 bit primes. I'm trying to compute the discrete logarithm
> and they are staying within a 128 bit GF(p) field. Sickening.
>
> Thnx.
>
> Lance


If they're using 128-bit primes, you don't really need to look for
breaks - just throw a CPU at it and you're done.
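For scale: even the generic baby-step giant-step method needs only about 2^64 group operations against a 128-bit prime field, and index-calculus attacks are far cheaper still. A minimal BSGS sketch (standard algorithm, demonstrated on a toy prime so it runs instantly):

```python
# Baby-step giant-step discrete log in GF(p).  Against a 128-bit
# prime this generic method already costs only ~sqrt(p) ~ 2^64
# operations; index-calculus methods do much better.  Toy prime here.
from math import isqrt

def dlog(g, h, p):
    """Return x with pow(g, x, p) == h, or None if no solution."""
    m = isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # g^j: "baby steps"
    step = pow(g, -m, p)                         # g^(-m): "giant step"
    gamma = h
    for i in range(m):
        if gamma in baby:                        # h * g^(-im) == g^j
            return i * m + baby[gamma]
        gamma = gamma * step % p
    return None

p, g, x = 1000003, 2, 123456
assert dlog(g, pow(g, x, p), p) == x
```

The table costs sqrt(p) space and the loop sqrt(p) time, which is why group sizes for DH are chosen so that sqrt(p) itself is out of reach - 2048-bit primes, not 128.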

Bear




Re: Encryption of data in smart cards

2003-03-14 Thread bear


On Wed, 12 Mar 2003, Krister Walfridsson wrote:

> On Tue, 11 Mar 2003, Werner Koch wrote:
>
>> If you want to encrypt the
>> data on the card, you also need to store the key on it. And well, if
>> you are able to read out the data, you are also able to read out the
>> key (more or less trivial for most mass market cards).
>
> This is not completely true -- I have seen some high-end cards that use
> the PIN code entered by the user as the encryption key.  And it is quite
> easy to do similar things on Java cards...

I've seen this too -- a little card that has its own 10-key pad so
you can enter your key directly to the card, and a little purge
button next to the zero so you can tell it to forget the key you
entered after each use. Also a red LED to tell you that it was
live with a key entered, and that you needed to purge it before
sticking it back in your wallet.  The guy would enter his PIN,
stick the card in the PCMCIA slot, and the machine would unlock.
Slick little device, actually.

Now can we get one that uses more than 5 digits for a key?

Bear






Re: Scientists question electronic voting

2003-03-06 Thread bear


On Wed, 5 Mar 2003, Bill Frantz wrote:

> The best counter to this problem is widely available systems to produce
> fake photos of the vote, so the vote buyer can't know whether the votes he
> sees in the photo are the real votes, or fake ones.

blink, blink.

you mean *MORE* widely available than photoshop/gimp/illustrator/etc?

Let's face it: if somebody can *see* their vote, they can record it.
And if someone can record it, then systems for counterfeiting such a
record already exist and are already widely dispersed.  If the
Republicans, Democrats, Greens, Libertarians, Natural Law Party, and
Communist Party all offer you a bottle of beer for a record of your
vote for them next year, there's no reason why you shouldn't go home
with a six-pack.

Bear




RE: Columbia crypto box

2003-02-09 Thread bear


On Sat, 8 Feb 2003, Lucky Green wrote:

> In July of 1997, only days after the Mars Pathfinder mission and its
> Sojourner Rover successfully landed on Mars, I innocently inquired on
> the Cypherpunks mailing list if any subscribers happened to know if and
> how NASA authenticates the command uplink to what at the time was
> arguably the coolest RC toy in the solar system.
>
> ...
>
> Apparently, my original inquiry had been copied and forwarded several
> times. By the time my inquiry had reached the office of the President,
> just as in a children's game of telephone, my question of "are they
> using any decent crypto" had turned into "hackers ready to take over
> Mars Rover."
>
> ...
>
> Needless to say and regardless of anyone's intent, such concern
> would be entirely unfounded if the uplink were securely authenticated.
>
> Which I believe represents an answer to my initial question as to
> whether the uplink is securely authenticated.

Actually, I don't think it does.  It's been my experience that the
decision-makers never even *KNOW* whether their systems are secure.
They've been sold snake-oil claims of security so many times, and,
inevitably, seen those systems compromised, that even when responsible
and knowledgeable engineers say a system is secure, they have to
regard it as just another claim of the same type that's been proven
false before.

So I can easily imagine them just not knowing whether the link was
secure, thinking that the NASA engineer's job of securing uplinks
might be no better than Microsoft's job of securing communications
or operating systems, because they've had it demonstrated time and
again that even when they hear words like secure, the system can be
compromised.

The fact is that the NASA engineer has a huge advantage; s/he's not
working for a marketing department that will toss security for
convenience, s/he's not working on something whose code has to be
copied a million times and distributed to people with debuggers all
over the world, s/he's not trying to hide information from people on
their own computer systems, and s/he's not complying with deals made
with various people that require backdoors and transparency to law
enforcement in every box.

So the NASA engineer's actually got a chance of making something
secure, where the Microsoft engineer didn't.  Microsoft has to claim
their junk is secure, but in their case it's just marketing gas.  But
all this is below the notice of the decision makers; they *LIVE* in a
world where marketing gas is indistinguishable from reality, because
they don't have the engineer's knowledge of the issues.

So having the decision makers get real nervous was likely to happen,
whether the link is secure or not.  There's no information there
except that the decision makers have finally realized they don't
really *know* whether the link is secure.  That's progress, of a sort.

> [Remind me to some time recount the tale of my discussing key management
> with the chief cryptographer for a battlefield communication system
> considerably younger than the shuttle fleet. Appalling does not begin to
> describe it].

Battlefield systems have been that way forever.  Battlefield
information only has to remain secure for a few seconds to a few
hours, and they exploit that to the max in making the systems flexible
and fast enough for actual use.  You want appalling?  In the Civil
War, they used monoalphabetic substitution as a trench code -- on
both sides.

Bear





Re: question about rsa encryption

2003-02-04 Thread bear



On Mon, 3 Feb 2003, Scott G. Kelly wrote:

> I have a question regarding RSA encryption - forgive me if this seems
> amateurish, but I'm still a beginner. I seem to recall reading
> somewhere that there is some issue with directly encrypting data with an
> RSA public key, perhaps some vulnerability, but I can't find any
> reference after a cursory look. Does anyone know of any issue with using
> RSA encryption to encrypt a symmetric key under the target's public key
> if the encrypted value is public (e.g. sent over a network)?

RSA is subject to blinding attacks and several other failure modes if
used without padding.  For details on what that means, read the
Cyclopedia Cryptologia article on RSA:

http://www.disappearing-inc.com/R/rsa.html
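The blinding attack is easy to demonstrate with textbook (unpadded) RSA, because unpadded RSA is multiplicative. A toy sketch with deliberately tiny parameters:

```python
# Why unpadded RSA fails: E(a)*E(b) mod n == E(a*b), so ciphertexts
# are malleable.  An attacker who can get a chosen ciphertext
# decrypted can "blind" a target ciphertext, have the disguised
# version decrypted, and divide the blinding factor back out.
p, q, e = 61, 53, 17                  # deliberately tiny demo primes
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent

def enc(m): return pow(m, e, n)       # textbook RSA, no padding
def dec(c): return pow(c, d, n)

# Multiplicative property of unpadded RSA:
assert enc(12) * enc(7) % n == enc(12 * 7)

# Blinding: disguise the target ciphertext with a random factor r.
target, r = enc(42), 5
blinded = target * enc(r) % n                 # looks unrelated to target
recovered = dec(blinded) * pow(r, -1, n) % n  # strip the blind
assert recovered == 42
```

Randomized padding (OAEP in PKCS#1 v2) breaks this algebraic structure, which is exactly why "encrypt a symmetric key under the target's public key" should always go through a padding scheme rather than raw exponentiation.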

Bear





Re: [IP] Master Key Copying Revealed (Matt Blaze of ATT Labs)

2003-01-27 Thread bear


On Mon, 27 Jan 2003, Faust wrote:

> Bribe a guard, go to bed with a person with access, etc.
> However, that is not the proper domain of a study of rights amplification.

I'm actually not sure of that.  I think that an organized
case-by-case study of social engineering breaches would
be valuable reading material for security consultants, HR
staff, employers, designers, and psychologists.  It's not
actually the study of cryptography, but it's a topic near
and dear to the heart of those who need security, just as
Matt's paper on locks is.

Bear





Re: EU Privacy Authorities Seek Changes in Microsoft 'Passport'

2003-01-27 Thread bear


The widespread acceptance of something as obviously a bad idea as
Passport really bothers me.  I could see a password manager program
to automate the process of password invalidation when you discovered
a compromise; but the idea of putting everything you do online on the
same password or credential is just...  stupid beyond belief.

Why are single-sign-on systems even legal to sell without warnings?
Why don't Microsoft and the other members of the Liberty Alliance have
to put a big warning label on them that says USE OF THIS PRODUCT WILL
DEGRADE YOUR SECURITY?  Because that's what we're looking at here:
drastically reduced security for very marginally enhanced convenience.

But what really gets me about this is that it's totally obvious that
that's what we're looking at, and people are buying this system
anyway.  That's hard to swallow, because even consumers ought not to
be that stupid.  But it's even worse than that, because people who
ought to know better (and people who *DO* know better, their own
ethics and customers' best interests be damned) are even *DEVELOPING*
for this system.  It just doesn't make any damn sense.

Bear






Re: [IP] Master Key Copying Revealed (Matt Blaze of ATT Labs)

2003-01-26 Thread bear


On Sat, 25 Jan 2003, Sampo Syreeni wrote:

> Sure. But trying those combinations out can be automated -- I don't think
> the kind of automatic lock pickers one sees in current action movies are
> *entirely* fictional.

There are several types of devices that can convince a keylock
to open.  One of them is a kind of spring-loaded bar, usually
on a handle.  The bar is inserted into the keyhole, and then the
spring is released and a weight whacks the bar fairly hard.
This transmits the shock to the pins resting on the bar, and
thence to the other side of the pins resting across the cut
from the shocked side.

The result is that the pins fly apart momentarily against the
retaining springs.  If your timing is good, you can turn the lock
immediately after the 'snap' of the spring slamming shut.  It
usually takes an experienced user no more than three or four
tries to get the timing right.

This is actually a very simple device to construct.  I ran
across it in a book on locks and mechanisms.  Some folks call
it an automatic lock picker, but it's really just a snap
mechanism.  I've never actually seen one in person, but I
can give you the name and publication date of the pamphlet I
saw it in if I can find it around here.

Bear





Re: Key Pair Agreement?

2003-01-21 Thread bear


On Mon, 20 Jan 2003, Jeroen C. van Gelderen wrote:

> Hi,
>
> Here is a scenario: Scott wants Alice to generate a key pair after
> which he will receive Alice's public key. At the same time, Scott wants
> to make sure that this key pair is newly generated (has not been used
> before).
>
> I do not know what the proper terminology is to discuss this. Assuming
> there is none, I will call the solution Key Pair Agreement.


I thought "Key Pair Agreement" already meant something, though.  In
Key Pair Agreement, Alice and Bob want to interact so that each
generates one-half of a key pair for an asymmetric encryption system.

The requirements are:

Alice does not know and cannot compute Kbob (Bob's key).

Bob does not know and cannot compute Kalice (Alice's key).

Each has enough information to assure that the keypair is novel.

Each has enough information to assure that the keypair does not
   contain a weak key if the encryption algorithm has weak
   keys.

Encrypt(Encrypt(P, Kbob), Kalice) = P

Encrypt(Encrypt(P, Kalice), Kbob) = P
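The last two equations hold in an RSA-style setting whenever the two half-keys are exponents that are multiplicative inverses mod phi(n). A toy illustration of just that algebraic property (tiny parameters; it deliberately ignores the hard part, which is how Alice and Bob each generate their half without learning the other's):

```python
# Illustration of the final two requirements only: if Kalice and
# Kbob are exponents with Kalice * Kbob == 1 (mod phi(n)), then
# applying both exponents in either order recovers P.  Generating
# the halves without either party learning both is NOT shown here.
p, q = 61, 53                     # tiny demo primes
n, phi = p * q, (p - 1) * (q - 1)
k_alice = 17                      # Alice's half-key (exponent)
k_bob = pow(k_alice, -1, phi)     # Bob's half-key: inverse exponent

def encrypt(m, k):
    """Modular exponentiation: the commutative 'Encrypt' above."""
    return pow(m, k, n)

P = 1234
assert encrypt(encrypt(P, k_bob), k_alice) == P
assert encrypt(encrypt(P, k_alice), k_bob) == P
```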

Bear





Re: DeCSS, crypto, law, and economics

2003-01-08 Thread bear


On Tue, 7 Jan 2003, alan wrote:


 Not to mention the two seasons of Futurama that are only available
 on Region 2 PAL DVDs.  (Or the other movies and TV shows not allowed
 by your corporate masters.)  They Live is another film only
 available from Region 2.  Maybe it tells too much about the movie
 industry...

This makes an interesting point.  Whatever the argument that market
segmenting may increase the ability to provide material in all
markets, the fact is that, given region coding, the producers of
this stuff *DON'T* provide the material in all markets.

If their argument, that the increased market size available with
region coding enables economies of scale, were actually the driving
force behind region coding, there should be no such thing as content
available in one region that is unavailable in another.

Thus their actions betray that they have a different motive.  The
public skepticism regarding the truth of their assertions about
their motivations therefore seems fairly solidly grounded in fact.

Bear

( who likes a fair amount of stuff that is only available
  coded for region 6 ).









Re: DeCSS, crypto, law, and economics

2003-01-08 Thread bear


On Wed, 8 Jan 2003, Pete Chown wrote:

> One last point is that governments serve the interests primarily of
> their own people.  So the job of Britain's government is to get me, and
> other Brits, the best possible deal on films within the UK.  This might
> mean balancing the interests of British consumers against British film
> producers.  It doesn't mean balancing British consumers against foreign
> film producers.  If no films were made in Britain, the government would
> logically insist on a completely free market that allowed parallel
> imports and circumvention measures.

Ah, but you're forgetting the whole globalization issue.

Governments aren't answering to their own people any more; they're all
striving to become a part of the new world order, where a Norwegian
can be brought to court for a supposed violation of American copyright
laws or where the Russian Dmitri Sklyarov can be jailed in the USA for
DOING HIS JOB IN RUSSIA.  We're moving forward into a glorious new
world where governments can impose laws upon their own people, not by
the fickle and divisive will of those governed, but rather in response
to international treaties and agreements with other nations promoting
global unity and harmony.

Cryptography is a part of that wonderful vision...  if the people of
different nations can be prevented from communicating effectively with
one another, or exercising their freedoms in ways that affect one
another, then effective opposition to global unity may be reduced, and
we can all become better servants and markets to our corporate
masters.

All power to the dromedariat!

Bear

PS.  If you happen to be mentally defective, you may not recognize
the foregoing as sarcasm.  Please take this into account when
composing your reply.





Re: [mnet-devel] Ditching crypto++ for pycrypto (fwd)

2002-12-08 Thread bear


On Tue, 3 Dec 2002, James A. Donald wrote:

> Anything that is good, gets ported a lot.  Anything that is
> ported a lot gets build/port problems.


Actually, I've found the reverse to be true.  Anything that gets
ported a lot eventually gets all the portability crap straightened
out so that porting it becomes just a matter of providing a few
definitions in a well-documented file.

If something still has porting problems, I'd say it hasn't been
ported enough.

Bear





Re: DBCs now issued by DMT

2002-12-08 Thread bear


On Thu, 5 Dec 2002, Peter Fairbrother wrote:

> OK, suppose we've got a bank that issues bearer money.
>
> Who owns the bank? It should be owned by bearer shares, of course.
>
> Can any clever person here devise such a protocol?

I thought about this problem for several months.

The problem I kept running into and had no way around is that if the
holders are truly anonymous, then there is no way for them to seek
redress for fraudulent issue or fraudulent transactions.  If the
banker goes broke, people want to be able to make a claim against the
banker's future earnings for whatever worthless currency they were
holding when it happened, and they cannot do that from a position of
anonymity.  People want a faithless banker punished, meaning jail time
or hard labor, not just burning a nym.

The sole method for any truly anonymous currency to acquire value is
for the banker to promise to redeem it for something that has
value. So the banker, if it's to have a prayer of acceptance, cannot
be anonymous.

And the minute the banker's not anonymous, the whole system is handed
on a platter to the civil authorities and banking laws and so on, and
then no part of the system can be reliably anonymous because the
entire infrastructure of our legal system requires identity.

Look at the possibilities for conflict resolution.  How can the
anonymous holder of an issued currency prove that he's the beneficiary
to the issuer's promise to redeem, without the banker's cooperation
and without compromising his/her anonymity?  And if s/he succeeded in
proving it, who could force an anonymous banker to pay up?  And if you
succeeded in making the banker pay up, how could the banker prove
without the cooperation of the payee that the payment was made and
made to the correct payee?

We use a long-accepted fiat currency, so we're not used to thinking
about the nitty-gritty details that money as an infrastructure
requires. It is hidden from us because our currency infrastructure has
not broken down in living memory.  We shifted from privately issued
currency to government-issued currency largely without destabilizing
the economy.  Then once people were accustomed to not thinking of a
promise to redeem as being the source of value, we went off the gold
standard.  Our economy hasn't broken yet, but you have to realize that
this situation is a little bizarre from the point of view of currency
issue.  We're not thinking anymore about the promise to redeem
currency for something of value, and the implications of failure to
honor that promise, because we live in a sheltered and mildly bizarre
moment in history where those things haven't been relevant for a long
time to the currency we use most.  But any new currency would have to
have a good solid solution for that issue.

The only way I found to decentralize the system, at all, was the model
where all the actors are pseudonymous rather than anonymous, each user
has the power to issue currency, and different issued currencies were
allowed to fluctuate in value against each other depending on the
degree of trust or value of the underlying redemption commodity.
Money becomes a protocol, and the exchange of commodities and labor
happens in raw form rather than as a simple sum - it's back to the
barter system.

I'd guess that all the Bank's finances should be available to anyone who
asks. That should include an accounting of all the money issued. And not
be reliant on one computer to keep the records.

An interesting idea, but it more or less prohibits offline
transactions involving a currency issue.  It also means the entire
market must be finite and closed.

Or the propounders wanting to: make a profit/control the bank?

I do not think that there are profits to be made as an issuer of
anonymous or hard-pseudonymous money.  That's one of the reasons I
advocate the "everyone is potentially a mint" model -- the expenses of
issue, and the cost of doing business uphill against trust until one's
issue is trusted, should be shared in something like equal proportions
by people who undertake it voluntarily.

Bear


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



RE: 'E-postmark' gives stamp of approval

2002-11-29 Thread bear


On Wed, 27 Nov 2002, Trei, Peter wrote:

The PO tried marketing this service about 6 years ago.
As far as I can see, this is almost identical to the last try.

It failed in the marketplace then, and I see no reason
whatsoever to think it will succeed now.


Hmmm.  Spam wasn't as big a problem six years ago.  And businesses
weren't looking at email as an avenue for legal and commercial
communication then, either.

I dislike Microsoft, but if this service is available without them,
and available for use by free software makers (yes, I know the
postmarks will still cost money, but the software to get them from
USPS doesn't have to be as proprietary or restricted as Microsoft is
undoubtedly making theirs) it could become very useful.  If it becomes
widespread, I might start discarding unread all email from parties
unknown to me that doesn't bear a postmark, in the same way that I now
discard unread all email from parties unknown to me that doesn't have
my email address on it or that comes from untraceable hosts.

And the reason of course would be no different than the reason I throw
away all paper mail from parties unknown to me that doesn't bear at
least first-class postage; If somebody I don't know didn't pay enough
to show he gives a shit about talking to *me*, as opposed to any
random pair of eyeballs, then he's a bulkmailer and not worth my time.

The vulnerability of SMTP is a known problem. SMTP traffic, by
default, is easy to subvert, easy to eavesdrop on, easy to forge, easy
to divert, and easy to obfuscate.  SMTP is a playground for spammers
and con artists, and sufficiently unreliable and subvertible that no
legally binding or important documents can be safely trusted to it.
Business really wants a reliable electronic communications medium for
legally binding content, and SMTP is a spectacular failure on that
front. We have needed a better standard email protocol for a long
time, but the only real entries in the race have been locked up by
licensing costs and interoperability issues and so we've been hanging
ugly bags on the side of SMTP without fixing its fundamental issues.
We need a better protocol, for authentication, message integrity,
privacy, portability, and lots of other reasons. This product failed
six years ago, but I think that the SMTP problem, both as an open
wound into which spammers have been rubbing salt and as an
impossibility for confidential or legal-process communication, hurts
worse now than it hurt six years ago.  It may catch this time.

Not that I consider the US Postal Service, or Microsoft, as players
likely to make anything *less* capable of being eavesdropped on or
subverted.  But authenticated senders, verifiable message integrity,
and reliable return-receipts for authenticated readers would be a step
forward, and I can't get any of them reliably with SMTP.

Sigh.  Ideally, I'd prefer the idea of a bond rather than a toll.  If
I could get email through some channel that guaranteed someone would
lose $1 *if* I designated their email as spam, I'd open every last
letter I got through that channel because I'd be confident that no
bulkmailer would *EVER* use it. I don't actually want corresponding
with me to cost money, I just don't want to be a free target for
bulkmailers.


Bear








Re: Did you *really* zeroize that key?

2002-11-08 Thread bear



I remember this issue from days when I wrote modem drivers.
I had a fight with a compiler vendor over the interpretation
of volatile.

They agreed with me that volatile meant that all *writes*
to the memory had to happen as directed; but had taken the
approach that *reads* of volatile memory could be optimized
away if the program didn't do anything with the values read.

This doesn't work with the UARTs that I was coding for at the
time, because on those chips, *reads* have side effects on
the state of the chip.  If a read of the status register
doesn't happen, then subsequent writes to the data buffer will
not trigger a new transmit.

The compiler vendor had not foreseen a situation in which
reads might have side effects, and so the compiler didn't
work for that task. I wound up using a different compiler.

Although the bastards never admitted to me that they were wrong,
I noted that in their next patch release, it was listed number
one in the list of critical bugfixes.

Bear
(who now notes that the company is no longer extant)






Re: patent free(?) anonymous credential system pre-print

2002-11-05 Thread bear


On Tue, 5 Nov 2002, Nomen Nescio wrote:

That's just one possibility.  The point is, your ideas are going nowhere
using your present strategy.  Either this technology won't be used at
all, or inferior but unrestricted implementations will be explored,
as in the recent work.  If you want things to happen differently, you
must change your strategy.

There is a possibility that you have neglected.  And, evidently,
so have most of the patent-filers.

Twenty years is not so long.  Patents expire.

It's not terribly helpful for someone to lock up an idea for twenty
years, but honestly it may be at least that long before the legal and
cultural infrastructure is ready to fully take advantage of it anyway.

You, like most engineers, are thinking of technical barriers only;
it's entirely reasonable to suppose that you could deploy the
technical stuff in two to five years and rake in money on your patents
for the next fifteen to eighteen.  That's a valid model with computer
hardware, because its value to business is intrinsic.  Bluntly, it
enables you to do things differently and derive value within your own
company regardless of what anyone else is doing.  But here we are
talking about something whose value is extrinsic; it affects the way
mutually suspicious parties interact.  For changes in that arena to
happen, they have to be supported by the legal system, by precedent,
by custom, by tradition, etc.  These are barriers that will take a
*hell* of a lot longer to overcome than the mere technical barriers.
The fights over liability alone will take that long, and until those
fights are settled we are not talking about something that a
profit-motivated business will risk anything valuable on.

I remember having exactly your reaction (plus issues about patenting
math and the USPTO being subject to coercion/collusion from the NSA
and influence-peddling and so on...) when the RSA patent issued - but
RSA is free now, and RSA Security has not made that much money on the
cipher itself.  And frankly, I don't think that having it be free much
earlier, given the infrastructure and implementation issues, would
really have made that much of a difference.  Note that there are
*still* a lot of important court decisions about asymmetric encryption
that haven't happened yet, and it was only profitable (due to
e-commerce) for the last couple years of the patent's run.

These patents are being filed in an industry and application which is
NOT part of how the world does business today.  They may or may not
turn out to be enabling items, but the world will have to learn to do
business in a different way before they become relevant.  That's not
going to happen in time for the dog-in-the-manger crowd to make any
money off the patents they're filing, so unless they can mobilize
*BILLIONS* of dollars for infrastructure replacement, education,
marketing, lobbying, court cases about legal validity for their
digital signatures and credentials, etc, etc, etc, there is no chance
of them withholding anything of value from the public domain.

It will take twenty years or more just for the *legal* system to
adjust to the point where a credential system or non-repudiation
property might possibly become useful to business.  Add another five
or ten years at least for acceptance and custom to grow up around it.
Another five or ten years for court cases and precedent and decisions
about liability to get settled so that it can become standard business
practice.  By that time the patents will be long gone.

Check history.  There is a long list of companies that made cipher
machines or invented ciphers, patented them, and went broke.  It isn't
a coincidence, nor a recent development.

Bear









Re: more snake oil? [WAS: New uncrackable(?) encryption technique]

2002-10-25 Thread bear


On Fri, 25 Oct 2002, Nicko van Someren wrote:

 [Moderator's note: so long as society continues to turn a blind eye to
 the harvesting of serpents for lipids, the international trade in
 snake oil will continue unabated. -- Perry]

I appreciate that as cryptographers we should be rightly skeptical of
anyone claiming to have a new, unbreakable encryption scheme.

That said, given the tone of the message from Multiplex Photonics
perhaps launching attacks on, or laughing at, those who are ill
informed about cryptography is not the best use of our energies.  If
this system is so eminently breakable then surely we should be applying
our skills to solve what scientists in another field currently believe
to be a hard problem, thereby advancing the sum of human knowledge,
rather than just sitting around sniggering at them.

   Nicko

The implication is that they have a hard problem in their
bioscience application, which they have recast as a cipher.

But in most cases, when someone - especially someone without a
crypto background - tries to transform a hard problem into
a cipher, the break on the cipher comes at some point in the
transformation, rather than on the hard problem itself.

I think it's not unlikely that the cipher can be broken and
not unlikely that the break will not help them at all with
their hard problem.

One thing that strikes me about it is that it doesn't seem to
be a practical cipher in any case.  It is too slow when implemented
in software to be competitive with known-good ciphers that we
have today, so it has little value as a cipher even if it does
turn out to be as unbreakable as the best we've got.  A good
cryptographer would spend time optimizing the snot out of it
and abstracting away operations that don't add security, in
order to make it fast enough to be competitive - after
which it might bear only a dim resemblance to the hard problem
that inspired it anyhow.

Offhand, I'd say that since it isn't a practical cipher to use
anyway, it's probably not a good use of time for professional
cryptographers to try to break.  On the other hand, if
there's a pro out there who wants to donate some specialized
mathematical expertise to biosciences, with or without compensation
from the benefactors at multiplex photonics, this may be a nice
way to do it.

On the gripping hand, since they've patented the method rather than
placing it in the public domain, you have to realize that it's a
donation to a single company rather than a donation to human
knowledge or biosciences in general.  There're plenty of worthwhile
things to work on that are truly public if you're feeling like
donating time and expertise on a charitable basis.

Bear



On Friday, Oct 25, 2002, at 07:25 Europe/London, Udhay Shankar N wrote:


 On Sun, 13 Oct 2002 15:45:58 +0100, in comp.security.misc Multiplex
 Photonics [EMAIL PROTECTED] wrote:

 We have developed a new encryption technique that we believe to be
 uncrackable - we have patented the method and intend to issue it as
 freeware for non-commercial use.
 Basically, we are a biotechnology company who have developed this
 system
 as an offshoot from our development of signal analysis systems.
 Until we are able to distribute this as a piece of software, we have
 produced technical documentation of the method and would be very keen
 to
 see if anyone would like to examine the method and/or develop an
 implementation themselves.


 We are not trying to sell anything to anyone - if someone were able to
 find a method of cracking this system, this would help us immeasurably
 in the development of our biotech product.
 The technical documentation is available for free download in the
 download section of our website at:
 http://www.multiplexphotonics.com - there is no advertising,
 registration or product that we are trying to sell you.
 Anyone interested can contact us by email at
 [EMAIL PROTECTED]
 
 We hope that this community will find this of interest
 may thanks
 Miles Kluth
 
 Multiplex Photonics Ltd.
 http://www.multiplexphotonics.com
 email: mailto:info;multiplexphotonics.co.uk

 --
 ((Udhay Shankar N)) ((udhay @ pobox.com)) ((www.digeratus.com))










Re: Why is RMAC resistant to birthday attacks?

2002-10-22 Thread bear


On Tue, 22 Oct 2002, Ed Gerck wrote:

Short answer:  Because the MAC tag is doubled in size.

Longer answer: The “birthday paradox” says that if the MAC tag has t bits,
only 2^(t/2) queries to the MAC oracle are likely needed in order to discover
two messages with the same tag, i.e., a “collision,” from which forgeries
could easily be constructed.



This is a point I don't think I quite get. Suppose that I have
a MAC oracle and I bounce 2^32 messages off of it.  With a
64-bit MAC, the odds are about even that two of those messages
will come back with the same MAC.

But why does that buy me the ability to easily make a forgery?

Does it mean I can then create a bogus message, which the oracle
has never seen, and generate a MAC that checks for it?  If so
how?

In protocol terms, let's say Alice is a digital notary.  Documents
come in, and Alice attests to their existence on a particular
date by adding a datestamp, affixing a keyed MAC, and sending
them back.

Now Bob sends Alice 2^32 messages (and Alice's key-management
software totally doesn't notice that the key has been worn to
a nub and prompt her to revoke it).  Reviewing his files, Bob
finds that he has a January 21 document and a September 30
document which have the same MAC.

What does Bob do now?  How does this get Bob the ability to
create something Alice didn't sign, but which has a valid MAC
from Alice's key?

Bear







Re: Microsoft marries RSA Security to Windows

2002-10-15 Thread bear



On Wed, 9 Oct 2002, Joseph Ashwood wrote:

Unfortunately, SecurID hasn't been that way for a while. RSA has offered
executables for various operating systems for some time now. I agree it
destroys what there was of the security, and reduces it to basically the
level of username/password, albeit at a higher price. But I'm sure
it was a move to improve their bottom line.

Good grief.

This is an old, old story by now, and it's starting to really
piss me off. It seems like every last attempt to implement
security of any kind in a commercial product gets compromised
for the sake of convenience/marketability, etc.

A system that is *actually* secure is inconvenient, or requires
mental effort to manage keys, or offline key storage, or won't
interact transparently with known insecure programs, or some
other basic fundamental constraint they're not willing to live
with -- so they take a component (RSA in this case) that could
have been used to build a secure system, use its presence as a
point to *claim* that that's what they're building, and build
something else.

It's irresponsible.  It makes *actual* security into a rare,
specialized, and arcane field.  It creates expectations that
you can do insecure things with secure software.  It gives
users a *FALSE* sense of security and deters them from getting
products that are actually secure.  It uses fraudulent (or, to
be very charitable, perhaps mistaken) claims of security to
compete unfairly with actual secure software which, of course,
has constraints on its operation.

I think somebody needs to start assigning security grades
based on the theory that it's the weakest link (PRNG with
state value out in the open) rather than the strongest (we
use whizbang patented strong encryption algorithm!) that
determines security. It's basically a matter of consumer
protection, and it's really something that security and crypto
people need to do within the industry.  It has to be within
the industry, because this is stuff that is well outside
a layman's ability to judge.

Bear









Re: What email encryption is actually in use?

2002-10-03 Thread bear



On Wed, 2 Oct 2002, Matthew Byng-Maddick wrote:

I have to say that much as it is a laudable goal to get widespread
encryption on the SMTP server network, I'm rapidly coming to the conclusion
that opportunistic encryption in this way doesn't really work. Consider
where one side believes that it will only accept certificates signed by a
particular CA (a perfectly plausible scenario in the case of SSL/TLS), and
I hand it a self-signed one - this is not communicable before the connection
starts up, and in-protocol, a failure to apply policy causes the connection
to be shut down (this is by no means the only one, consider one side that
only uses DES and the other that never uses it), leaving the connection in an
undefined state.

I consider that state perfectly well defined -- it is the
"no connection" state.  The only reason any protocol works
is that people prefer abiding by its rules, and the policies
each party sets up within it, to having no connection.

The essence of a protocol is to detect situations where
one party or the other prefers "no connection" over the
rules, and to ensure that such detection happens before any
confidential data is shared.  According to this rule, I
would say that the protocol you say is in an undefined
state has in fact functioned perfectly.  It detected a
rule that the other was not willing to abide by and dropped
the connection *before* risking any confidential data.
That's precisely what it was supposed to do.


The problem with this is obvious. You have to treat the failure as a
temporary failure and try again in a bit. Of course, we know that the
only way you're going to send this system mail is by sending it in plaintext,
because otherwise you won't adhere to policy, but also, given that it's an
automated service, there's no human to turn round and try something slightly
different, as there is in the case of the Web Browser or mail client talking
SSL.

But if you are willing to abide by the sending-plaintext
protocol in the first place, this is perfectly reasonable
too. Protocol termination for lack of willingness to trust
single-DES is no different than termination of protocol
for lack of willingness to send (or receive) plaintext.

Where our protocol design fails is in considering plaintext
to be something other than a particularly unreliable and
ineffective encryption algorithm.  Certainly nobody who's
willing to reject a connection for a self-signed certificate
should be willing to accept plaintext, because obviously
plaintext is not as secure as the minimum security they are
requiring.  But experience shows that people willing to
reject self-signed certs and poor ciphers always seem to
be willing to accept the even poorer cipher named plaintext.
This is completely irrational; either you need security or
you don't.

Bear





Re: Gaelic Code Talkers

2002-10-03 Thread bear



Neal Stephenson probably ran into a similar story; he inserts it
(in fictionalized, names-changed form) in his novel Cryptonomicon.
You can probably find some references to the historical precedences
by googling starting there.

[Moderator's Note: I think Stephenson's story didn't seem to involve
code talkers, and appeared to be entirely fictional... --Perry]

Bear


On Wed, 2 Oct 2002, Bill Frantz wrote:

While vacationing in Scotland this summer I had a conversation with a
gentleman who said that the British had used Scottish Gaelic speakers as
code talkers during World War II.  He added that they were not used in
the European theatre, as there were too many Irish Gaelic speakers who
sympathized with the Axis.

A quick glance at Kahn didn't turn up an information on these code talkers.
Has anyone else heard anything about it?

Cheers - Bill


-
Bill Frantz   | The principal effect of| Periwinkle -- Consulting
(408)356-8506 | DMCA/SDMI is to prevent| 16345 Englewood Ave.
[EMAIL PROTECTED] | fair use.  | Los Gatos, CA 95032, USA









Re: Sun donates elliptic curve code to OpenSSL?

2002-09-24 Thread bear



On Tue, 24 Sep 2002, Bodo Moeller wrote:

On Tue, Sep 24, 2002 at 01:29:29PM +0100, Ben Laurie wrote:
 Markus Friedl wrote:

 With this code OpenSSL is turning into a non-free project.

 As has been observed elsewhere, the patent stuff only applies if you
 make a similar promise to Sun. If you don't want to have Sun not sue you
 when you infringe, then don't promise not to sue them.

Here's a longer explanation.  The Sun code in OpenSSL 0.9.8-dev is
available under the OpenSSL license; additionally, you have the
*option* to accept the covenant:

 The ECC Code is licensed pursuant to the OpenSSL open source
 license provided below.

 In addition, Sun covenants to all licensees who provide a reciprocal
 covenant with respect to their own patents if any, not to sue under
 current and future patent claims necessarily infringed by the making,
 using, practicing, selling, offering for sale and/or otherwise
 disposing of the ECC Code as delivered hereunder (or portions thereof),
 provided that such covenant shall not apply: [...]

That's a defining relative clause.  If you are not willing to provide
a reciprocal covenant, this has nothing to do with you.  You just
can't use the stuff patented by Sun, but it's not compiled in by
default anyway for exactly this reason.

Read it again.  The first two words of the second sentence you
quoted are "In addition..."

As I understand it, this means the donated code is available under
the OpenSSL source license.  So you *can* use it, whether or not
it's patented by Sun.

*In addition* to that, *if* you have software patents and you
promise not to sue Sun over them because of an infringement you
find in the donated code, then Sun promises that it won't sue
you either.  Sun does not forbid people from using the donated
code on the basis of whether or not they make this promise.

Basically, they're offering something they didn't have to offer
in order to release it under the OpenSSL license; if they'd
simply released it under the OpenSSL license, you'd have fewer
options, not more.

Bear






Re: unforgeable optical tokens?

2002-09-22 Thread bear



On Sun, 22 Sep 2002, Hadmut Danisch wrote:

It's just a gadget of the type you can't make a similar one again,
and that's what it can be used for. Forget about networks and
challenge response in context of this token.

Security is far more than just the cryptographical standard methods.
There's security beyond cryptography. So don't have this limited
view.

Here's a potential application: consider it as a door key.  Every
time the user sticks it into the lock, the lock issues two challenges.
The first challenge is randomly selected; the lock just reads and
stores the result.  The second is for authentication: it issues the
same challenge it issued for the first challenge last time, reads
the result and compares it to the result it stored last time. If
it's a match, the lock opens.

This is not really applicable to remote authentication, because
in *remote* authentication, someone has to be *signalled* that
the authentication succeeded, whereupon the *signal* becomes just
another message that has to be protected using conventional crypto
and protocols.  But for *local* authentication, it's got some
good stuff going for it.

But consider the door lock application: There's no way for the
attacker (or the key-holder either) to know what challenge out
of zillions has been issued or what response out of zillions
has been stored. The door never had to send any of that
information over a network, so Eve can't get it and Mallory
can't replay or duplicate it; presumably it is stashed inside
tamper-resistant hardware somewhere in the lock.

Superficially, this resembles a smartcard key where the challenge
is a string and the response is the string encrypted according to
a key held on the smartcard.  But it's not subject to side channel
attacks like power measurement to extract its key for the encryption
operation the way smartcards are. And it is far more resistant to
duplication, even to an attacker who knows its internal structure
(key) and has the fab infrastructure. And it is many orders of
magnitude faster.  You shine lasers on it at particular angles
and at particular points on its surface for a challenge; its
response is at your sensors in a nanosecond or less.  No smartcard
is anywhere near that fast. And you can go swimming with it, which
you can't do with a smartcard; no need to ever have it out of your
possession, even when you're in the shower.

If you want to make whole computers that are tamper-resistant,
you could extend the door key metaphor to the computer itself;
with your key in it, it can read its hard drive and do computer-
like things.  Without your key in it, it's just a sealed lump
of metal and glass with some buttons on it. In an operating system
for such a machine, everything would be encrypted.  The boot sector
would be encrypted using the same protocol as the door key above,
with a different key for every bootup.

For the rest of the machine, instead of storing any encryption or
decryption keys anywhere, you'd store challenges for the token
and use its responses for the keys.  And every (say) tenth time
you touched something, you'd generate a new challenge, get a new
key from the token, and re-encrypt the plaintext with the new
key. That way even if a thief gets your machine, they can extract
zero information from it unless they get your keytoken too.

If your machine ever goes missing, and you still have the keytoken
in your posession, you have no security worries; likewise if the
keytoken ever goes missing, but you still have your machine.  It's
only if *both* of them go missing that you have a problem.

hmmm.  It becomes more rococo, but of course, it also makes it
easy to create a machine that can only be used with *all* of
two or more keytokens inserted; just the thing for mutually
suspicious parties to store confidential shared data on.

Anyway; it's nothing particularly great for remote authentication;
but it's *extremely* cool for local authentication.

Bear







Re: Cryptogram: Palladium Only for DRM

2002-09-18 Thread bear



On Wed, 18 Sep 2002, Peter wrote:

Hi Pete - I'm confused. Are you suggesting that I should enjoy these
freedoms on SW which I don't have legal rights to?

In emergencies, yes.  Remember the people trying to deal with
and organize the WTC rescue efforts, whose software kept rebelling
because of inappropriately-enforced license issues?  Care to even
estimate the liability for lives lost due to that?  You want to
create a system where they'd have *NO* way to override copyright
in a real emergency, *NO* way to save lives?  No. That's cut and
dried, because Copyright is never an emergency.  Copyright infraction
never costs lives.

I for one don't give a flaming shit whether someone has the
legal rights to equipment he has to use in an emergency to
save lives.  When putting automatic enforcement in place
means that lives will be lost, it is a Bad Idea. A company
that did it might (and IMO should) be held liable in court.

Furthermore, if you think that Pd will only be used for legal
purposes by the software vendors and manufacturers who control
it, I strongly suggest you revise your trust model  I have
seen no indication anywhere that these people are any more
trustworthy than those whose actions you decry. The only
difference is that the scale of abuses which can be perpetrated
by them is staggeringly large compared to the minor abuse of
someone copying a song or running a program out of license.

Bear





Re: Palladium and malware

2002-08-29 Thread bear



On 29 Aug 2002, Paul Crowley wrote:

I'm informed that malware authors often go to some lengths to prevent
their software from being disassembled.  Could they use Palladium for
this end?  Are there any ways in which the facilities that Palladium
and TCPA provide could be useful to a malware author who wants to
frustrate legitimate attempts to understand and defeat their software?

If it provides the protections that copy-protection groups want
(ie, it can be used to prevent keys in their software from being
read by other software) then yes, it can be used to prevent any
code from being read by any software.

Bear






Re: Palladium and buffer over runs

2002-08-29 Thread bear



On Thu, 29 Aug 2002, Frank Andrew Stevenson wrote:


What is there to prevent that one single undisclosed buffer overrun bug in
a component such as Internet Explorer won't shoot down the whole DRM
scheme of Palladium ? Presumably IE will be able to run while the machine
is in a trusted state, but if the IE can be subverted by injecting
compromising code through a buffer overrun, the security of DRM material
that is viewed in one window could be compromised through malicious code
that has been introduced through another browser window.

It's my understanding of Palladium that it can enforce a separate
data space for applications by creating a memory space which is
encrypted with a key known to only that application.

Given that, I think a cracker could subvert IE normally, but that
wouldn't result in any access to the protected space of any other
applications.  And as long as IE is actually separate from your
OS (if you're running it on your Mac, or under WINE from Linux,
for example), it shouldn't give him/her access to anything
inside the OS.

Bear





Re: Overcoming the potential downside of TCPA

2002-08-14 Thread bear



On Tue, 13 Aug 2002, Joseph Ashwood wrote:

However there is something that is very much worth noting, at least about
TCPA.

There is nothing stopping a virtualized version being created.

There is nothing that stops say VMWare from synthesizing a system view that
includes a virtual TCPA component. This makes it possible to (if desired)
remove all cryptographic protection.

...

The problem with this idea is that TCPA is useless.  For all the *useful*
things you are thinking of, you need TCPA plus an approved key.  The only
way you are going to get an approved key is inside a tamper-resistant chunk
of hardware.  If you should manage to extract the key, then yes, you'll be
able to create that CD.  But the idea is that you, the hardware owner, are
not authorized to extract the information contained in your own hardware.
I find the idea of owning something without having the legal right to
open it up and look inside legally dubious at best, but I'm no lawyer.

The idea is that you shouldn't get anywhere without hardware hacking. The
people doing this have decided hardware hacks are acceptable risks because
they only want to protect cheap data -- movies, songs, commercial software,
whatever.  They are sticking to stuff that's not expensive enough to justify
hardware hacks.

However, if this infrastructure does in fact become trusted and somebody
tries to use it to protect more valuable data, God help them.  They'll get
their asses handed to them on a platter.

Bear





Re: Challenge to TCPA/Palladium detractors

2002-08-12 Thread bear



On Thu, 8 Aug 2002, AARG!Anonymous wrote:

It's likely that only a limited number of compiler configurations would
be in common use, and signatures on the executables produced by each of
those could be provided.  Then all the app writer has to do is to tell
people, "get compiler version so-and-so and compile with that, and your
object will match the hash my app looks for."

I don't like the idea of a trusted compiler.  No matter who makes
it.  People should choose compilers based on the compiler's merits
and make optimization and configuration decisions when compiling
based on their particular hardware, not in order to match some other
machine's or other user's ideal of trustable code.  The minute a
compiler becomes a standard, for any reason, it becomes a target
for people to subvert.

People who are likely to be a source of malicious clients will also
hack hardware if the data is sufficiently valuable to warrant it.
We have already seen how a relatively simple and inexpensive hardware
hack can be used to defeat Palladium security, so while it may provide
suitable infrastructure if the attacker's motivation is just the price
of a movie ticket, it is not at all trustable as a structure if the
value of the data being protected rises above prices that justify
hardware hacking. Moreover, the same simple hardware hack defeats
every piece of Palladium-protected content or software, so the cost
of hardware hacking can be amortized over many breaks.

I think you are trying to solve in hardware, problems which are
properly protocol-design problems.  This looks like the easy way
out because protocol design is hard, but the fact is that if there
is data you really want to protect which is more valuable than movie
tickets, what you want is a protocol that ensures no one using the
data ever has sufficient information to reconstruct more of it
than their particular licit use of it requires.

Bear





Re: adding noise blob to data before signing

2002-08-12 Thread bear



On 10 Aug 2002, Eric Rescorla wrote:

It's generally a bad idea to sign RSA data directly. The RSA
primitive is actually quite fragile. At the very least you should
PKCS-1 pad the data.

-Ekr

This is true.  Cyclopedia Cryptologia has a short article detailing
some of the attacks against direct use of RSA.

http://www.disappearing-inc.com/R/rsa.html

is a good URL if you want to read it.
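One of the fragilities Ekr mentions is easy to see concretely: raw (unpadded) RSA signatures are multiplicative, which is exactly the kind of structure PKCS-1 padding destroys.  A toy sketch (the tiny textbook key is illustrative, not from the article above):

```python
# Toy textbook-RSA key (the classic 61/53/17 example; real keys are 2048+ bits).
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

def sign(m):
    return pow(m, d, n)          # raw RSA signature, no padding

def verify(m, s):
    return pow(s, e, n) == m % n

# Raw RSA is multiplicative: sig(m1)*sig(m2) mod n is a valid signature
# on m1*m2 mod n, so an attacker can combine known signatures into a
# forgery on a message that was never signed.
s1, s2 = sign(6), sign(7)
forged = (s1 * s2) % n
assert verify(42, forged)        # valid signature on 6*7 = 42
```

Padding schemes such as PKCS-1 break this homomorphism by fixing most of the bits of the value actually exponentiated.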

Ray






Skeleton Keys for Palladium Locks.

2002-08-02 Thread bear



It occurs to me that the Palladium architecture relies on control
of the data paths between the memory and CPU.

In order to spoof it and read Palladium-protected content, all I
have to do is provide another path in and out of memory.

Dual-ported memory has been manufactured for video and DSP systems
for decades, and is frequently faster than that used for main
memory.

It should be possible to construct a memory unit (skeleton key)
using dual-ported memory, which looks to the Palladium motherboard
exactly like an ordinary memory module.  The second memory port
would be hooked up to a simple hardware blitter -- a standard
video-system chip that scans through the bits on the memory chip
and writes them to another memory chip, or other device.

The skeleton key would have exactly two control inputs: The first
would cause the data in memory to be copied out of the
Palladium-controlled architecture.  The second would cause the
memory that had been copied out to be copied back in.  You
could hook them up to the two positions of a double-throw switch
on the front of the case if you liked, so they'd require no
software which could be detected by the Palladium motherboard.

Now, with appropriate selection of devices so that data stored
on the skeleton keys are persistent across boots, Palladium
control is circumvented. That requires a battery, but no big
deal; you can mount the battery with the switch.

The skeleton key module, if fabbed in bulk, would (wild guess
alert) probably cost about ten times what ordinary memory modules
cost. It is a simple device with a schematic that someone could
work out in an afternoon, far simpler than a PCI card. I could
have a working mask for you within a week. It could be fabbed
by a very small shop, using ordinary chips and PCB boards.

Now, I have chosen the name "skeleton key" advisedly.  Skeleton
keys are perfectly legal, necessary tools that every locksmith
must own in order to do business.  There is a legitimate market
for them, and if they were unavailable, nobody could afford the
risk of locking stuff up in a hard safe because they might not
be able to unlock it if they lose their key (if their hardware
fails or a drive crashes).  Similarly, in a world where the
locks look like the proposed Palladium architecture, every
locksmith is going to have to have some skeleton keys
in his or her toolbox, just in order to do legitimate business.


Bear










RE: building a true RNG

2002-07-31 Thread bear



On Tue, 30 Jul 2002, James A. Donald wrote:


Randomness is of course indefinable.  A random oracle is however
definable.

If SHA-1 is indistinguishable from a random oracle without prior
knowledge of the input, then we would like to prove that for an
attacker to make use of the loss of entropy that results from the
fact that it is not a random oracle, the attacker would need to
be able to distinguish SHA-1 from a random oracle without prior
knowledge of the input.


The above sentence is unsound.  You cannot take an assumption
("If SHA-1 is indistinguishable...") and then use its negation
to prove something else ("...then we would like to prove that
... the attacker would need to be able to distinguish..."),
unless you prove your something else for both the TRUE and FALSE
values of the assumption.

Euclid was all over that, and George Boole after him.

Now, for the TRUE value of your assumption, which I
think you may have meant:

IF (A) SHA-1 is indistinguishable from a random oracle without
   prior knowledge of the input,
   AND
   (B) We can prove that in order to succeed in an attack, an
   attacker would need to distinguish SHA-1 from a random
   Oracle without prior knowledge of the input,
   THEN
   (C) The attacker will not succeed in that attack.

But for the FALSE value of your assumption A, we get

   IF (B) AND (~C) THEN (~A)

And

   IF (B) AND (~A) THEN (C) OR (~C).



So if we take B as an axiom, then we did not prove
anything about A unless given ~C, and we did not prove anything
about C regardless of our assumptions about A.
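The propositional claims above can be checked mechanically by enumerating truth assignments, treating the premise as "(A and B) implies C":

```python
from itertools import product

# Enumerate all eight truth assignments for (A, B, C).
models = list(product([False, True], repeat=3))
premise = lambda A, B, C: (not (A and B)) or C   # (A and B) -> C

# Given B and ~C, every model satisfying the premise forces ~A
# (the modus tollens direction):
assert all(not A for A, B, C in models if premise(A, B, C) and B and not C)

# Given B and ~A, C is left undetermined -- both truth values survive:
assert {C for A, B, C in models if premise(A, B, C) and B and not A} == {False, True}
```

The second check is the point of the post: from B and the failure of A, the premise tells you nothing about C.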


Bear









Re: building a true RNG (was: Quantum Computing ...)

2002-07-23 Thread bear



On Mon, 22 Jul 2002, John S. Denker wrote:

David Honig wrote yet another nice note:

I'm not trying to be dense, but I'm totally not
understanding the distinction here.  The following
block diagram is excellent for focussing the discussion,
(thanks):

 Source -- Digitizer -- Simple hash -- Whitener (e.g., DES)

OK, we have DES as an example of a whitener.
-- Can somebody give me an example of a simple hash
that performs irreversible compression of the required
kind?

Depends on the data and how much entropy you suppose it
has, really.  An irreversible compression function that
I use when extracting entropy from text (for other
purposes) is to have a counter.  Each time you process
a character, you add the character code to the counter,
then multiply the counter by 2.4 rounding down.  This is
based on estimates of 1.33 bits of entropy per character
in English text, and requires an initialization vector
(in this case an initialization value) twice as long as
the character code to prevent you from taking too many
bits from the first few characters alone.
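A sketch of the counter-based compressor just described (the 64-bit counter width, the wraparound, and the IV value are assumptions; the post does not specify how the counter is bounded):

```python
# Counter-based "irreversible compression" of text entropy.
# Each character's code is added, then the counter is scaled by 2.4
# (floored), leaving room for ~1.26 bits of fresh entropy per character.
WIDTH = 64
IV = 0xA5A5A5A5A5A5A5A5   # arbitrary illustrative initialization value

def distill(text, iv=IV):
    counter = iv
    for ch in text:
        counter = (counter + ord(ch)) * 12 // 5   # add code, *2.4, round down
        counter %= 1 << WIDTH                     # wrap to fixed width
    return counter

# Crediting ~1.33 bits/char of English, roughly 48 characters are needed
# before the 64-bit counter can be treated as full of entropy.
sample = "the quick brown fox jumps over the lazy dog, ok?"
entropy_word = distill(sample)
```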

For something like a lava-lamp picture, your compression
function might be first converting it into a 4-color image,
editing out the constant parts (eg, the lamp base and edges),
compressing that using PNG format, and then taking some
similarly counter-based function of those bits. Using a
time series of pictures of the same lava-lamp, you'd have
to adjust for lower entropy per byte of processed PNG (by
using a lower factor), because it could be redundant with
other frames.

-- Isn't the anti-collision property required of even
the simplest hash?  Isn't that tantamount to a very
strong mixing property?  If there's strong mixing in
the simple hash function, why do we need more mixing
in the later whitening step?

You are talking, specifically, about cryptographic hash
functions.  The diagram specifies a simple hash function.
The distinction between cryptographic hashes and simple
hashes is, a simple hash is supposed to produce evenly
distributed output.  A cryptographic hash is supposed to
produce evenly distributed *and unpredictable* output.
A simple hash, plus a whitener, is about what you're
thinking of for a cryptographic hash function.

I assume "digestion" means the same as "distillation"?

Roughly.  People talk of digestion of a datastream, or
distillation of entropy, or irreversible compression,
etc.  It's roughly the same thing.

Gimme a break.  In particular, gimme an example of a crypto
algorithm that will fail if it is fed with a random-symbol
generator that has only 159.98 bits in a 160 bit word.

That's one bit per 8k. I guess it just depends on which
8k comes through and how much your opponent can make of
one bit.



 I see no point in whitening the output of such a
 distiller.

 So the adversary can't look back into your logic.  A 'distiller'
 which produces quality entropy (after digesting an arbitrary
 number of bits) needn't be as opaque as a crypto-secure hash is.

I'm still needing an example of a distiller that has
the weakness being alleged here.  In particular,
 -- either it wastes entropy (due to excessive hash collisions)
in which case it isn't a good distiller, and whitening it won't
improve things (won't recover the lost entropy), or
 -- it doesn't waste entropy, in which case the output has entropy
density of 159.98/160, in which case there is nothing to be gained
by so-called whitening or any other post-processing.

I think you may be right about that -- whitening protects you
from errors in an overly-simple distiller such as I described
above, but if you've got a really fine-tuned one, it doesn't
help much.


In particular, (proof by contradiction) consider the following
scenario:  suppose she captures 100 bits of output, and wants
to use it to make some predictions about the next 60 bits of
output.  She uses the 100 bits to see back into the
hypothetical simple-hash function, learn something about the
input thereof, and then pushes that forward again through the
simple-hash function to make the predictions.  But this scenario
violates the most basic requirements of the hash function, even
the simplest of simple-hash functions.

Again, it violates the requirements of a cryptographic hash
function, not a simple hash function.


Bear





Re: It's Time to Abandon Insecure Languages

2002-07-18 Thread bear



On 18 Jul 2002, Pete Chown wrote:

If you want totally type safe languages that use ahead of time
compilation, look at Eiffel, Sather, the Bigloo Scheme compiler, and so
on.  Also don't forget gcj, which does ahead of time compilation for
Java with the same type checking that you get in the managed
environment.

Agreed.  And I particularly like Scheme.  However, it's also not
hard to compile your C code with bounds checking turned on if you're
willing to sacrifice maybe a few things you shouldn't be using anyway,
so it's pretty inexcusable IMO to still be having buffer overflows.

Bear





Re: crypto/web impementation tradeoffs

2002-07-04 Thread bear



Without more knowledge of the parameters of the system
(especially the threat model), it's hard to say -- however,
this sounds like a case for the Diffie-Hellman key agreement
protocol.  Have the client and server each pick a random
number, and then use those numbers to generate a key
dynamically.
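A minimal sketch of that agreement (the small Mersenne prime and generator are illustrative only; a real deployment would use a standardized large group and, as noted, the exchange must be authenticated or a man in the middle can substitute his own values):

```python
import secrets

# Toy unauthenticated Diffie-Hellman key agreement.
p = 2**127 - 1   # a Mersenne prime, small enough to read; real groups are 2048+ bits
g = 3

a = secrets.randbelow(p - 2) + 1   # client's random secret
b = secrets.randbelow(p - 2) + 1   # server's random secret

A = pow(g, a, p)   # client -> server
B = pow(g, b, p)   # server -> client

client_key = pow(B, a, p)   # client computes g^(ab) mod p
server_key = pow(A, b, p)   # server computes g^(ab) mod p
assert client_key == server_key
```

The shared value would then be hashed down to a symmetric key for the applet's cipher.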

Bear


On Wed, 3 Jul 2002, John Saylor wrote:

Hi

I'm passing some data through a web client [applet-like] and am planning
on using some crypto to help ensure the data's integrity when the applet
sends it back to me after it has been processed.

The applet has the ability to encode data with several well known
symmetric ciphers.

The problem I'm having has to do with key management.

Is it better to have the key encoded in the binary, or to pass it a
plain text key as one of the parameters to the applet?

I know that the way most cryptosystems work is that the security is in
the key. But having a compiled-in key just seems like a time bomb that's
going to go off eventually. Is it better to have a variable key passed
in as data [i.e. not marked as key] or to have a static key that sits
there and waits to be found.

Thanks.

--
\js

'People who work sitting down get paid more than people who work standing up.'
  - Ogden Nash (1902-1971)







Re: Montgomery Multiplication

2002-07-02 Thread bear



On Tue, 2 Jul 2002, Damien O'Rourke wrote:

Hi,

I was just wondering if anyone knew where to get a good explanation of
Montgomery multiplication
for the non-mathematician?  I have a fair bit of maths but not what is
needed to understand his paper.

It's kind of an exotic technique, but it's one I happen to know...

Montgomery Multiplication is explained for the computer scientist
in Knuth, _The Art of Computer Programming_, Vol. 2: _Seminumerical Algorithms_.

Briefly: The chinese remainder theorem proves that for any set
A1...AN of integers with no common divisors, any integer X which is
less than their product can be represented as a series of remainders
X1...XN, where Xn is equal to X modulo An.

if you're using the same set of integers with no common divisors
A1...AN to represent two integers X and Y, you have a pair of series
of remainders X1...XN and Y1...YN.

Montgomery proved that Z, the product of X and Y, if it's in the
representable range, is Z1...ZN where Zn equals (Xn * Yn) mod An.

This means that, for integers A1..AN that are selected to be small
enough for your computer to multiply in a single instruction, you
can do multiplication of extended-precision integers less than their
product in linear time relative to the size of the set of integers
A; just multiply each pair of remainder terms modulo the modulus
for that term, and the result is the product.  As a technique,
it's a useful method when you know ahead of time what approximate
size your bigints are.

With positionally-notated integers, you need time N squared, or
at least N log N (I seem to remember that there was an N log N
technique, but I don't remember what it was and may be mistaken),
relative to the length of the integers themselves.

There are a few optimizations available so you can select a set A
that's just big enough to do the job, but they require some degree
of foreknowledge of the job that you often don't have when writing
libraries.

The major problem with Montgomery representation of integers is
that it makes division hard. If you can arrange your problem so
you don't need division, and you know the approximate size of
the bignums you'll be working with, it can speed things up
noticeably.

Bear






Re: Ross's TCPA paper

2002-06-29 Thread bear



On Mon, 24 Jun 2002, Anonymous wrote:

The important thing to note is this: you are no worse off than today!
You are already in the second state today: you run untrusted, and none
of the content companies will let you download their data.  But bootlegs
are widely available.

The problem is that the analog hole is how we debug stuff.
When our speakers don't sound right, we tap the signal, put
it on an oscilloscope so we can see what's wrong, correct
the drivers, and try again.  When our monitor can't make sense
of the video signal, it's different equipment but the same
idea.  When you encrypt all the connections to basic display
hardware, as proposed in Palladium, it means nobody can write
drivers or debug hardware without a million-dollar license.
And if you do fix a bug so your system works better, your
system's trusted computing system will be shut down.  Not
that that's any great loss.

Likewise, encrypted instruction streams mean you don't know
what the hell your CPU is doing.  You would have no way to
audit a program and make sure it wasn't stealing stuff from
you or sending your personal information to someone else.

Do we even need to recount how many abuses have been foisted
on citizens to harvest marketing data, and exposed after-the-
fact by some little-known hero who was looking at the assembly
code and went, "Hey, look what it's doing here.  Why is it
accessing the passwords/browser cache/registry/whatever?"

Do we want to recount how many times personal data has been
exported from customers' machines by adware that hoped not
to be noticed?  Or how popup ads get downloaded by software
that has nothing to do with what website people are actually
looking at?

I don't want to give vendors a tunnel in and out of my system
that I can't monitor.  I want to be able to shut it down and
nail it shut with a hardware switch.  I don't want to ever
run source code that people are so ashamed of that they don't
want me to be able to check and see what it does; I want to
nail that mode of my CPU off so that no software can turn it
on EVER.

I'll skip the digital movies if need be, but to me trusted
computing means that *I* can trust my computer, not that
someone else can.

Bear





RE: Ross's TCPA paper

2002-06-26 Thread bear



On Wed, 26 Jun 2002, Scott Guthery wrote:

Privacy abuse is first and foremost the failure
of a digital rights management system.  A broken
safe is not evidence that banks shouldn't use
safes.  It is only an argument that they shouldn't
use the safe than was broken.

I'm hard pressed to imagine what privacy without
DRM looks like.  Perhaps somebody can describe
a non-DRM privacy management system.  On the other
hand, I easily can imagine how I'd use DRM
technology to manage my privacy.

You are fundamentally confusing the problem of
privacy (controlling unpublished information and
not being compelled to publish it) with the
problem of DRM (attempting to control published
information and compelling others to refrain
from sharing it).  Privacy does not require
anyone to be compelled against their will to
do anything.  DRM does.

As I see it, we can get either privacy or DRM,
but there is no way on Earth to get both.
Privacy can happen only among citizens who are
free to manage their information and DRM can
happen only among subjects who may be compelled
to disclose or abandon information against
their will.

Privacy without DRM is when you don't need anyone's
permission to run any software on your computer.

Privacy without DRM is when you are absolutely free
to do anything you want with any bits in your
possession, but people can keep you from *getting*
bits private to them into your possession.

Privacy without DRM means being able to legally
keep stuff you don't want published to yourself,
even if that means using pseudonymous or anonymous
transactions for non-fraudulent purposes.

Privacy without DRM means being able to simply,
instantly, and arbitrarily change legal identities
to get out from under extant privacy infringements,
and not have the new identity easily linkable to
the old.

Privacy without DRM means people being able to
create keys for cryptosystems and use them in
complete confidence that no one else has a key
that will decrypt the communication -- this is
fundamental to keeping private information
private.

Privacy without DRM means no restrictions whatsoever
on usable crypto in the hands of citizens.  It may
be a crime to withhold any stored keys when under a
subpoena, but that subpoena should issue only when
there is probable cause to believe that you have
committed a crime or are withholding information
about one, and you should *ALWAYS* be notified of the
issue within 30 days.  It also means that keys which
are in your head rather than stored somewhere are
not subject to subpoena -- on Fifth Amendment grounds
(in the USA): if the record doesn't exist outside
your head, then you cannot be coerced to produce
it.

Privacy without DRM means being able to keep and
do whatever you want with the records your business
creates -- but not being able to force someone to
use their real name or linkable identity information
to do business with you if that person wants that
information to remain private.

Bear









Re: Shortcut digital signature verification failure

2002-06-21 Thread bear



It's already been thought of.  Check the literature on hashcash.

Basically, the idea is that the server presents a little puzzle
that requires linear computation on the client's side (the same
sort of construction Rivest used for his time-lock puzzles).  The
client has to present the solution of the puzzle with a valid request.

To extend the idea to signatures, all you really have to do is
program the server to create puzzles that will take at least as
much computation to solve as it requires to check the signature.
And of course it checks the solution to the puzzle (using a single
modular-power operation, which is relatively cheap) before it
checks the signature itself.
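A hash-based variant of that scheme, in the spirit of hashcash (the difficulty constant and the challenge encoding are assumptions; the modular-power puzzle described above would substitute a time-lock computation for the hash search):

```python
import hashlib
import itertools
import os

DIFFICULTY = 16   # leading zero bits; tune so solving costs >= one signature check

def make_puzzle():
    # Server: issue a fresh random challenge per request.
    return os.urandom(8).hex()

def solve(challenge, bits=DIFFICULTY):
    # Client: brute-force a counter whose hash has `bits` leading zero bits.
    for counter in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{counter}".encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - bits) == 0:
            return counter

def verify(challenge, counter, bits=DIFFICULTY):
    # Server: a single hash suffices to check the claimed solution.
    digest = hashlib.sha256(f"{challenge}:{counter}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - bits) == 0

challenge = make_puzzle()
assert verify(challenge, solve(challenge))
```

The asymmetry is the point: the client does ~2^DIFFICULTY hashes, the server does one, so bogus signature submissions cost the attacker far more than they cost the server to reject.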

Bear


On Thu, 20 Jun 2002, Bill Frantz wrote:

I have been thinking about how to limit denial of service attacks on a
server which will have to verify signatures on certain transactions.  It
seems that an attacker can just send random (or even not so random) data
for the signature and force the server to perform extensive processing just
to reject the transaction.

If there is a digital signature algorithm which has the property that most
invalid signatures can be detected with a small amount of processing, then
I can force the attacker to start expending his CPU to present signatures
which will cause my server to expend its CPU.  This might result in a
better balance between the resources needed by the attacker and those
needed by the server.

Cheers - Bill


-
Bill Frantz   | The principal effect of| Periwinkle -- Consulting
(408)356-8506 | DMCA/SDMI is to prevent| 16345 Englewood Ave.
[EMAIL PROTECTED] | fair use.  | Los Gatos, CA 95032, USA









Re: objectivity and factoring analysis

2002-05-13 Thread bear



On Fri, 26 Apr 2002, Anonymous wrote:


These estimates are very helpful.  Thanks for providing them.  It seems
that, based on the factor base size derived from Bernstein's asymptotic
estimates, the machine is not feasible and would take thousands of years
to solve a matrix.  If the 50 times smaller factor base can be used,
the machine is on the edge of feasibility but it appears that it would
still take years to factor a single value.

One thousand years = 10 doublings of Moore's law plus one year.
Call it 15-16 years?  Or maybe 20-21, since Moore's law seems to
have slowed lately?

Bear





Re: Schneier on Bernstein factoring machine

2002-04-16 Thread bear



On Tue, 16 Apr 2002, Anonymous wrote:

Bruce Schneier writes in the April 15, 2002, CRYPTO-GRAM,
http://www.counterpane.com/crypto-gram-0204.html:

 But there's no reason to panic, or to dump existing systems.  I don't think
 Bernstein's announcement has changed anything.  Businesses today could
 reasonably be content with their 1024-bit keys, and military institutions
 and those paranoid enough to fear from them should have upgraded years ago.

 To me, the big news in Lucky Green's announcement is not that he believes
 that Bernstein's research is sufficiently worrisome as to warrant revoking
 his 1024-bit keys; it's that, in 2002, he still has 1024-bit keys to revoke.

Does anyone else notice the contradiction in these two paragraphs?
First Bruce says that businesses can reasonably be content with 1024 bit
keys, then he appears shocked that Lucky Green still has a 1024 bit key?
Why is it so awful for Lucky to still have a key of this size, if 1024
bit keys are good enough to be reasonably content about?


Because Lucky Green is a well-known paranoid who has no business
requirement to put up with second-class crypto for the sake of
compatibility and can reasonably control other methods of accessing
his important stuff.  Conversely, your typical businessman has few
or no business secrets not known to at least half-a-dozen employees
and after trusting that many people, better crypto would add
essentially nothing to the businessman's security.

For a handy metaphor, you can think of a kilobit-keyed cipher as
a potentially weak link in Lucky's security (worth the attention)
and probably the strongest link in a typical businessman's security
(not worth the attention).

Bear






Re: authentication protocols

2002-03-29 Thread bear






On Mon, 25 Mar 2002, John Saylor wrote:

I'd like to find an authentication protocol that fits my needs:
1. 2 [automated] parties
2. no trusted 3rd party intermediary ['Trent' in _Applied_Crypto_]

Authentication relative to what?

All identity, and therefore all authentication, derives from
some kind of consensus idea of who a person is.  With no third
party, it is hard to check a consensus.

Usually authentication comes down to checking a credential. But
that implies some third party that issued the credential.

So, the pertinent question becomes, what is identity? For purposes
of your application, I mean -- no point to go off on philosophical
tangents.  Answer that, and maybe there'll be a protocol that you
can use.

Bear






Re: [CYBERIA] Open Letter to Jack Valenti and Michael Eisner

2002-03-06 Thread bear


[Moderator's note: No, I don't want to open up the floodgate, but this
has a genuinely new idea in it among some others -- the notion that
perhaps the good of the entertainment industry isn't as important as
general purpose computing. That said, this is far afield from
cryptography (I'm only interested here because of the technological
copy protection politics angle) and I'm not going to entertain
followups unless they're genuinely interesting. --Perry]

Perhaps the time has come.

Copyright was necessary in earlier times because so few people
had the time to think and produce new ideas -- novels and songs
were rare, valuable to society, cost a lot of time and effort
to publish and distribute, and the people who made good ones
had to be supported and protected.

But these days a talented hobbyist can make really great music,
do all the mixing digitally on his or her home system, and release,
and there are hundreds of thousands of talented hobbyists.  The
publishers and studios can add no value.  Graphic artists can work
at home now.  Pixels don't care a bit whether they're produced in
a studio.  Publishing houses have more good novels available than
they can ever publish, even not counting the professional novelists.
And it is now possible for a hobbyist writer or musician to publish
entirely on the net at very little cost to themselves - or if
someone mirrors the work, at no cost to themselves whatsoever.

So, a few random ideas to keep in mind the next time you hear
someone arguing that computers must be crippled:

Society no longer needs copyright.

Society *DOES* need general-purpose computers.

To the extent that copyright threatens general-purpose computing,
it is harmful.

With the Internet, we no longer need publishers and distributors.
Go to a site like MP3.com and see how visibly redundant they have
become.

Good musicians can play club dates and get a percentage of the door,
and sell signed disks they burn themselves to the people at the
concert.

Good authors can go on lecture tours, or get paid by bookstores for
promotional appearances.

Or, maybe, we can just leave it at "real artists have day jobs."

Bear





Re: Cringely Gives KnowNow Some Unbelievable Free Press... (fwd)

2002-03-01 Thread bear



On Wed, 27 Feb 2002, Lucky Green wrote:

Philip,
If we can at all fit it into the schedule, IFCA will attempt to offer a
colloquium on this topic at FC. Based on the countless calls inquiring about
this issue that I received just in the last few days, the customers of
financial cryptography are quite concerned about the Bernstein paper, albeit
the paper raises a number of open issues that still would need to be
investigated before one should assert that the sky is falling.

See you all at FC,

--Lucky, IFCA President

Hmmm.  According to Bernstein, "It's better and worse than
it first appeared."  On the one hand, the o(1) term may
be quite large and cancel much of the speedup for keys of
practical size, and even with reduced costs, that's still
a lot of single-purpose hardware to build for a practical
keysize.  On the other hand, RSA is not the only system
affected.  The technique may work on Elliptic Curve systems
as well. Which of these sides is better and which worse
is something that you will have to work out depending on
your own perspective.

Bear







RE: Cringely Gives KnowNow Some Unbelievable Free Press... (fwd)

2002-02-25 Thread bear

[Moderator's inquiry: Any third parties care to comment on this? --Perry]

On Thu, 21 Feb 2002, Phillip H. Zakas wrote:

 On Tue, 5 Feb 2002, Eugene Leitl wrote:

 But at Crypto last August, Dan Bernstein announced a new design for a
 machine dedicated to NFS using asymptotically fast algorithms and
 optimising memory, CPU power and amount of parallelism to minimize

 Bear Responds:
 I really want to read this paper; if we don't get to see the
 actual mathematics, claims like this look incredibly like
 someone is spreading FUD. Is it available anywhere?


The paper is located here: http://cr.yp.to/papers.html
I've not evaluated yet but I'm interested in hearing if he received his
grant to try it out.

Holy shit.  The math works.  Bernstein has found ways of
using additional hardware to eliminate redundancies and
inefficiencies which appear in any linear implementation of the
Number Field Sieve.  We just never noticed that they were
inefficiencies and redundancies because we kept thinking in
terms of linear implementations.  This is probably the biggest
news in crypto in the last decade.  I'm astonished that it
hasn't been louder.

Note that there have been rumors of an RSA cracker built by a
three-letter agency in custom silicon before this, but until
analyzing Bernstein's paper I had always dismissed them as
ridiculous paranoid fantasies.  Now it looks like such a device
is entirely feasible and, in fact, likely.

This work demonstrates a lack of security in a bunch of PGP keys.
All previous estimates of security level as a function of bit
length should be applied as though the bit length were one-third
of its actual length.  This means that effectively all PGP RSA
keys shorter than 2k bits are insecure, and the 2kbit keys are
not nearly as secure as we thought they were.

I remember there was one version of PGP that allowed RSA keys
longer than 2kbits - I don't remember what version it was right
now, but someone is sure to remind us now that I've said so. :-)
Anyway, probably very few people are using 4kbit or 8kbit PGP
RSA keys anyhow, due to lack of cross-version compatibility.

The "secure forever" level of difficulty that we used to believe
we got from 2kbit keys in RSA is apparently a property of 6kbit
keys and higher, barring further highly-unexpected discoveries.
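The one-third rule of thumb above can be applied mechanically.  A
minimal sketch (in Python, not from the original post; the one-third
figure is this post's estimate, not an established result):

```python
def effective_bits(nominal_bits: int) -> int:
    """Apply the post's rule of thumb: under the claimed speedup,
    an RSA modulus offers roughly the security of a key one-third
    its nominal length.  (The one-third figure is this post's
    estimate and was later disputed.)"""
    return nominal_bits // 3

# Under this rule, a 2048-bit key drops to ~682 effective bits,
# while a 6144-bit key retains ~2048 effective bits.
```

This is why the paragraph above picks 6 kbits as the new "secure
forever" threshold: it is the smallest common size whose one-third
is at least the old 2 kbit target.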

Recommendation to all implementors:  Future applications should
not offer to create RSA keys shorter than 2048 bits, and should
allow users to specify keys of up to *at least* 8 kbits in length.
Remember, backward compatibility is inappropriate where it
compromises security.

Recommendation to all crypto users: discontinue use of RSA keys
shorter than 2048 bits, NOW.  Issue a revocation if the software
you use allows it.  If the software you use is restricted to
RSA keys shorter than 2048 bits, get rid of it and find something
better.

I predict that Elliptic-Curve systems are about to become more
popular.

Bear






Re: biometrics

2002-02-05 Thread bear



On Tue, 29 Jan 2002, Bill Frantz wrote:

What would be really nice is to be able to have the same PIN/password for
everything.  With frequent use, forgetting it would be less of a problem,
as would the temptation to write it down.  However, such a system would
require that the PIN/password be kept secret from the verifier (including
possibly untrusted hardware/software used to enter it).


You could, I suppose, create an algorithm that takes as inputs
your single PIN/password and the name of the entity you're
dealing with, and produces a daily use PIN/password for you
to use with that entity.

It wouldn't help much in the daily use arena -- you'd still
have to carry all the daily use PINs around in your head -
but in the scenario where you forget one, it could be used to
recreate it, and it would be a bit more secure than carrying
around the sheet of paper where your 20 PINs are all written
down.
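A minimal sketch of such a derivation (hypothetical names and
parameters; the hash, separator, and PIN length are my choices, not
anything specified in the post):

```python
import hashlib

def derive_pin(master_secret: str, entity: str, digits: int = 6) -> str:
    """Derive a per-entity PIN from a single master secret by
    hashing the secret together with the entity's name, then
    reducing the digest to a short numeric PIN."""
    data = (master_secret + ":" + entity).encode()
    digest = hashlib.sha256(data).digest()
    value = int.from_bytes(digest, "big")
    return str(value % 10**digits).zfill(digits)
```

Note that this only helps with recovery and memorization, as the
paragraph above says: each derived PIN is still revealed to the
verifier (and any untrusted hardware) that you use it with, so a
compromised verifier learns that one PIN, though not the master
secret or the PINs for other entities.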

Bear

