Re: Proof of Work - atmospheric carbon

2009-01-30 Thread John Levine
 You know those crackpot ideas that keep showing up in snake oil crypto?
 Well, e-postage is snake oil antispam.

 While I think this statement may be true for PoW coinage, since for a
 botnet it grows on trees, it may not be completely true for money that
 traces back to the international monetary exchange system.

It's close enough to completely true.  Stealing postage via bots is
only one of multiple fatal problems.

I wrote this white paper in 2004; some of the details could stand a
little update but the conclusions are as clear as ever:

http://www.taugh.com/epostage.pdf

R's,
John



Re: Attack of the Wireless Worms

2009-01-30 Thread Jerry Leichter

On Jan 29, 2009, at 10:07 AM, Donald Eastlake wrote:


Recent research has shown that a new and disturbing form of computer
infection is readily spread: the epidemic copying of malicious code
among wireless routers without the participation of intervening
computers. Such an epidemic could easily strike cities, where the
ranges of wireless routers often overlap.

http://blogs.spectrum.ieee.org/tech_talk/2009/01/attack_of_the_wireless_worms.html 

It's worth reading both the original article that describes the  
simulation - cited in the blog entry as http://arxiv.org/abs/0706.3146  
- and the actual blog entry, which is much more reasonable.


The original article posits that, if you can get onto a wireless
network, you can load an update into the wireless router.  (They
should have said "access point", but ignore that; the confusion is now
so well established that it doesn't much matter.)  Given that
assumption, and the further assumption that not only could you do it
yourself but could write a virus that would do it for you, across a
wide variety of router models from multiple vendors, they use some
simulations to determine how long it would take to infect all the
routers in several well-wirelessed metropolitan areas.  The numbers
come out to a matter of hours to days.  Their only recommendation is
that everyone use WPA2 with a strong password.
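
For a feel for what such a simulation amounts to, here is a toy sketch
in Python.  It is not the model from the arXiv paper, and every
parameter in it is made up: routers are random points in a square
city, any two within radio range are neighbours, and an infected
router compromises each susceptible neighbour with some per-step
probability.

    # Toy epidemic sketch only -- NOT the model from arxiv.org/abs/0706.3146;
    # all parameters are invented for illustration.
    import random

    N, CITY_SIZE, RADIO_RANGE, P_INFECT = 2000, 10000.0, 100.0, 0.05

    routers = [(random.uniform(0, CITY_SIZE), random.uniform(0, CITY_SIZE))
               for _ in range(N)]

    # Two routers are neighbours if they are within radio range of each other.
    def neighbours(i):
        xi, yi = routers[i]
        return [j for j, (xj, yj) in enumerate(routers)
                if j != i and (xi - xj) ** 2 + (yi - yj) ** 2 <= RADIO_RANGE ** 2]

    adj = [neighbours(i) for i in range(N)]
    infected = {0}                                   # patient zero
    steps = 0

    while True:
        frontier = {j for i in infected for j in adj[i] if j not in infected}
        if not frontier:
            break
        # Each susceptible router next to an infected one falls with
        # probability P_INFECT per time step.
        infected |= {j for j in frontier if random.random() < P_INFECT}
        steps += 1

    print(f"{len(infected)}/{N} routers infected after {steps} time steps")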


Of course, I could equally well write a paper on the assumption that  
car computers could infect other car computers by modulating the  
headlights, and then calculate how long it would take a virus to  
spread through all the cars in a city.  Maybe we all need to cover the  
headlights of our cars for security.


Access to a wireless network is a long way from administrative access  
to the router for that network.  Granted, some devices have weak  
administrative passwords.  That's certainly a problem - but the right  
approach to fixing *that* problem is, well, to fix that problem: Use a  
strong password.  It's very rare that anyone needs admin access to  
their wireless routers.  There's no reason not to choose a complex  
password, write it on a sticker, and attach it to the router:  If  
someone has physical access to your router, your security is gone  
anyway.  The Spectrum article makes this point, and also points out  
that this would be a non-problem if vendors shipped routers with  
unique passwords pre-set on them.  (In fact, DSL routers - and  
probably cable routers - typically come that way.  They can also  
usually be set to permit admin access only from the home side, not  
the network side - as some wireless routers can be set to allow  
admin access only from their wired ports.)


There are many real problems around, but there are also many
pseudo-problems.  The pseudo-problems do let you publish neat papers
sometimes, but it's important not to take them *too* seriously.

-- Jerry



Re: full-disk subversion standards released

2009-01-30 Thread John Gilmore
If it comes from the Trusted Computing Group, you can pretty much
assume that it will make your computer *less* trustworthy.  Their idea
of a trusted computer is one that random unrelated third parties can
trust to subvert the will of the computer's owner.

John



Re: Obama's secure PDA

2009-01-30 Thread Ivan Krstić

Multiple responses inline:

On Jan 26, 2009, at 11:26 AM, Paul Hoffman wrote:
I too would like to hear more information on this, particularly the  
crypto that is known to be used on the Edge.



See sections 'Secure Speech Processing' and 'Interoperability' of
http://www.gdc4s.com/documents/GD-Sectera_Edge-w.pdf.  The standard
suites are used, as one would expect.


On Jan 26, 2009, at 4:56 PM, Jerry Leichter wrote:
The FAQ, indirectly, answers your previous question of why only  
Secret for email:  Data-at-rest is encrypted using AES, which is  
only approved for Secret, not Top Secret, data.


This isn't the case; AES is approved for Top Secret with 192- or
256-bit keys, per http://www.cnss.gov/Assets/pdf/cnssp_15_fs.pdf.


On Jan 26, 2009, at 9:26 PM, Steven M. Bellovin wrote:
Quite simply, voice offers one service -- voice.  Data offers many  
services, and hence many venues for data-driven attacks: email  
(which includes many MIME types) and probably clicking on URLs, web  
(which includes HTML, gif, jpeg, perhaps png, and almost certainly  
Javascript), and perhaps data files including pdf, Word, Powerpoint,  
and Excel.  Any one of those data formats is far more complex than  
even compressed voice; the union of them makes me surprised it can  
handle even Secret data... Note especially that HTML involves  
IFRAMEs and third-party images, which means inherent cross-domain  
issues.


I've thought about this, but I don't buy it. I'm a heavy user of  
wireless e-mail, but I use it as nothing more than an SMTP-addressable  
SMS service without a length limit. In other words, people can send me  
messages from a computer and not just from a mobile handset (true in  
the other direction, too), and I can read and write more than 160  
characters at a time.


I'd find mobile e-mail just as useful if it went through a proxy that  
stripped out _everything_ that's not plaintext. I open attachments on  
my phone about once in a blue moon, and wouldn't miss the ability if  
it were gone.
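
(For the curious, such a strip-to-plaintext filter is nearly trivial
to sketch.  The following illustration uses Python's standard email
library; it is not a description of any deployed proxy, and the header
list is just my guess at what one would want to keep.)

    # Rough sketch of a "keep only the plaintext" mail filter -- an
    # illustration of the proxy idea above, not production code.
    from email import message_from_bytes, policy
    from email.message import EmailMessage

    def strip_to_plaintext(raw: bytes) -> EmailMessage:
        original = message_from_bytes(raw, policy=policy.default)
        stripped = EmailMessage()
        # Carry over only the basic routing/display headers.
        for h in ("From", "To", "Cc", "Date", "Subject", "Message-ID"):
            if original[h]:
                stripped[h] = original[h]
        # Keep text/plain parts; drop HTML alternatives, images, attachments.
        texts = [part.get_content() for part in original.walk()
                 if part.get_content_type() == "text/plain"]
        stripped.set_content("\n\n".join(texts) or "[no plaintext content]")
        return stripped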


Cheers,

--
Ivan Krstić krs...@solarsail.hcs.harvard.edu | http://radian.org



Re: Attack of the Wireless Worms

2009-01-30 Thread Peter Gutmann
Donald Eastlake d3e...@gmail.com writes:

Recent research has shown that a new and disturbing form of computer
infection is readily spread: the epidemic copying of malicious code
among wireless routers without the participation of intervening
computers. Such an epidemic could easily strike cities, where the
ranges of wireless routers often overlap.

Does anyone know whether anything like this actually exists?  I've seen 
earlier work in this area that was either man-in-the-router proof-of-concept 
stuff or simulation (as this work appears to be), but I don't know of any 
in-the-wild mesh-network malware.

Peter.



Re: Proof of Work - atmospheric carbon

2009-01-30 Thread Thomas Coppi
On Wed, Jan 28, 2009 at 2:19 PM, John Levine jo...@iecc.com wrote:
 Indeed.  And don't forget that through the magic of botnets, the bad
 guys have vastly more compute power available than the good guys.

 Just out of curiosity, does anyone happen to know of any documented
examples of a botnet being used for something more interesting than
just sending spam or DDoS?

-- 
Thomas Coppi



Re: full-disk subversion standards released

2009-01-30 Thread Thor Lancelot Simon
On Thu, Jan 29, 2009 at 01:22:37PM -0800, John Gilmore wrote:

 If it comes from the Trusted Computing Group, you can pretty much
 assume that it will make your computer *less* trustworthy.  Their idea
 of a trusted computer is one that random unrelated third parties can
 trust to subvert the will of the computer's owner.

People have funny notions of ownership, don't they?

It's very clear to me that I don't own my desktop machine at my office;
my employer does.  But even if TCG were to punch out a useful, reasonable
standard (which I do not think they have done in any case so far), there
remains the policy problem of how to ensure that my desktop machine's
actual owner can enforce its ownership of that machine against me, while
the retailer who sold me my desktop machine at home -- which I do own --
or for that matter the U.S. Government, can't enforce _its_ ownership of
my own machine against me.  That's a real problem, and solutions to it
are useful.

Given such solutions, frameworks like what TCG is chartered to build are
in fact good and useful.  I don't think it's right to blame the tool (or
the implementation details of a particular instance of a particular kind
of tool) for the idiot carpenter.

Thor



Re: full-disk subversion standards released

2009-01-30 Thread Jonathan Thornburg
On Thu, 29 Jan 2009, John Gilmore wrote:
 If it comes from the Trusted Computing Group, you can pretty much
 assume that it will make your computer *less* trustworthy.  Their idea
 of a trusted computer is one that random unrelated third parties can
 trust to subvert the will of the computer's owner.

Indeed, the classic question is "I've just bought this new computer
which claims to have full-disk encryption.  Is there any practical
way I can assure myself that there are (likely) no backdoors in/around
the encryption?"

For open-source software encryption (be it swap-space, file-system,
and/or full-disk), the answer is yes:  I can assess the developers'
reputations, I can read the source code, and/or I can take note of
what other people say who've read the source code.

Alas, I can think of no practical way to get a yes answer to my
question if the encryption is done in hardware, disk-drive firmware,
or indeed anywhere except software that I fully control.

-- 
-- Jonathan Thornburg jth...@astro.indiana.edu
   Dept of Astronomy, Indiana University, Bloomington, Indiana, USA
   Washing one's hands of the conflict between the powerful and the
powerless means to side with the powerful, not to be neutral.
  -- quote by Freire / poster by Oxfam



UCE - a simpler approach using just digital signing?

2009-01-30 Thread Ray Dillinger
I have a disgustingly simple proposal.  It seems to me that one 
of the primary reasons why UCE-limiting systems fail is the 
astonishing complexity of having a trust infrastructure 
maintained by trusted third parties or shared by more than 
one user.  Indeed, "trusted third party" and "trust shared by 
multiple users" may be oxymorons here.  Trust, by nature, is 
not really knowable by any third party, is not the same for any 
set of more than one user, and, at least where UCE is concerned, 
experience shows that the people most willing to pay for it are 
usually the _least_ trustworthy parties.  

So why hasn't anybody tried a direct implementation of
user-managed digital signatures yet?

A key list maintained by individual recipients for themselves 
alone could be astonishingly simpler in practice, probably 
to the point of actually being practical.  

In fact, it is _necessary_ to eliminate third parties and 
shared infrastructure almost entirely in order to allow mail 
recipients to have the kind of fine-grained control that 
they actually need to address the problem by creating 
social and business feedback loops that promote good security.

As matters stand today, there is no protection from UCE. 
If I know there is a user account named 'fred' on the host
'example.com', then I have an email address and I can send 
all the UCE I want.  And poor fred has the same email address 
he gives everybody, so he gets spam from people who've gotten 
his address and he has no idea where they got it.  All his 
legitimate correspondents are using the same email address, 
so he can't abandon it without abandoning *all* of them, 
and he doesn't know which of them gave his address to the 
spammers.  What if email accounts weren't that simple?  

Consider the implications of a third field, or trust token, 
which works like a password to fred's mail box.  Your 
mailer's copy of fred's email address would look like 
fred#token@example.com, where "token" is a field that is 
your own personal password to fred's mailbox.  Your 
system would still send mail to fred@example.com, but 
it would include a Trust: header based on the token.

The simplest solution I can think of would be a direct 
application of digital signatures;  the trust token would 
be (used as) a cryptographic key, and the headers of any 
message would have to include a Trust field containing a 
digital signature (a keyed cryptographic hash of the message, 
generated by that key).  Messages to multiple recipients 
would need to contain one Trust field per recipient. 
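
(A minimal sketch of what generating such a Trust field might look
like, using HMAC-SHA256 as the keyed hash.  The choice of hash and the
example token value are mine, purely for illustration.)

    # Sketch: the per-sender trust token is the key for a keyed hash over
    # the message; the result goes into a Trust: header, one per recipient.
    # HMAC-SHA256 and the example token are illustrative choices only.
    import hmac, hashlib

    def trust_field(token: str, message_body: bytes) -> str:
        return hmac.new(token.encode(), message_body, hashlib.sha256).hexdigest()

    body = b"Dear Fred, your statement is ready.\n"
    token = "bank-7f3a"               # the token fred gave his bank (made up)
    print("Trust:", trust_field(token, body))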

Its use would follow simple rules:  

Each time Fred gives out his email address to a new sender, 
he creates a trust token for that sender.  They must use it 
when they send him mail.  So fred gives his bank a key when 
he gives them his email address.  If fred were willing to 
receive mail from strangers, he could publish a trust token 
on his webpage or on usenet or whatever - it would be painless 
to revoke it later, so why not?  If fred trusted someone to 
give out his email address, he could give that person multiple 
trust tokens to pass along to others.  Again, an error in 
judgement would be painless to revoke later.

Fred can revoke any trust token from his system at any time, 
and does so whenever he gets spam with a trust token he issued.  
In UI terms there'd be a button in his mail reader that works 
as "this message is spam, so revoke this trust token because 
now a spammer has it".  Other messages sent with the same 
trust token would disappear from his mailbox instantly. Fred 
might not push this button every time, but at least he'd know 
what spam he was getting due to (say) his published trust token
on his webpage or usenet, and what spam he was getting due to 
his relationship with a bank, and he'd have the option of 
turning any source of spam off instantly. 
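
(Again purely as a sketch, the recipient side could be as simple as
the following.  The token table layout and the spam-button hook are my
own illustration, not part of the proposal as written.)

    # Sketch of fred's side: a table of issued tokens, verification of the
    # Trust: field on arrival, and a "this is spam" button that revokes the
    # token the message was signed with.  Details are illustrative only.
    import hmac, hashlib

    tokens = {                      # token id -> (secret, who it was given to)
        "t-bank": ("bank-7f3a", "my bank"),
        "t-web":  ("web-9c21", "published on my web page"),
    }
    revoked = set()

    def verify(token_id: str, trust_field: str, body: bytes) -> bool:
        if token_id in revoked or token_id not in tokens:
            return False
        secret, _who = tokens[token_id]
        expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, trust_field)

    def report_spam(token_id: str) -> None:
        # "This message is spam, so revoke this trust token."
        revoked.add(token_id)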

In the short run the .aliases file on the mail host would need 
a line so it would know to deliver mail to fred#anything@example.com
to fred.  This is not because a legitimate email would ever include
the literal key, but for purposes of alerting fred's MUA to protocol
breaches, so it could do key management.  Fred's MUA could then 
be upgraded to use tokens without affecting other users on the
system.  In later MDAs that handle trust tokens directly, 
this forwarding would be automatic. 

Whenever Fred gets email sent by someone using a trust token, 
his system tells him which token - i.e., which sender he gave 
that trust token to.  So email sent to fred using the trust 
token he gave his bank will show up in his mailbox under a 
heading that says "this was sent by someone using the trust 
token you gave your bank".

Whenever fred gets email addressed to fred#token@example.com and that's 
still a legitimate token, his system revokes the token, sends him 
an automatic note that says which trust token was revoked, and 
bounces the email with a message that says, 
Your mailer is not using trust tokens.  Your mail has not been

Re: UCE - a simpler approach using just digital signing?

2009-01-30 Thread Jerry Leichter

On Jan 30, 2009, at 4:47 PM, Ray Dillinger wrote:

I have a disgustingly simple proposal.  [Basically, always include a  
cryptographic token when you send mail; always require it when you  
receive mail.]
There is little effective difference between this and whitelists.  If I
only accept mail from people on my whitelist, spammers can only send
me mail through three modes of failure:


    1.  They randomly pick a return address that happens to match
        someone on my whitelist.  I think we can agree that this is
        rare enough that it isn't worth worrying about.

    2.  A spammer somehow finds pairs of people S and R, where S sends
        to R, and fakes S as the sender for spam directed to R.  This
        would be a new mode of attack - spammers today just spurt out
        millions of messages based on very little information.  Sure,
        someone *could* start this kind of attack - but it's difficult
        to get the necessary information to mount it, and it seems
        unlikely that it would make economic sense to spammers, who can
        live with tiny response rates because they can so cheaply
        generate targets.

    3.  This is a variant of (2) that actually does occur today:  The
        spammer takes over S's machine and sends to the same people S
        sends to.  Viruses try to spread by this mechanism; they often
        succeed.  In principle, a spammer could write a virus that
        simply sent the (S,R) information from the infected machine,
        though I don't know that they've ever bothered.

    Either a type 3 attack, or a type 2 attack where the information
    comes from invading S's machine, can of course just as easily grab
    all the tokens on S's machine.  The solution proposed is that this
    will be noticed quickly, and the tokens will be marked as no longer
    valid.  But that's really no different from R simply removing S
    from his whitelist.

Really, cryptography is a non-issue here.  As long as S and R share
some information that R can use to filter messages - even S's address
will do - and there is no cheap way to get large amounts of (S,R)-pair
information, that information can be the key to a whitelist.  (Some
mailing lists do this:  E.g., if you want to post to RISKS, you're
asked to include the string "notsp" at the beginning or end of the
subject line.  This is public information, so a spammer could easily
do this *if he chose to specifically target the RISKS mailing list*;
but there's no way he can do this automatically on a mass scale.  An
individual could easily reach a similar agreement with anyone sending
him mail.)
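
(In code terms, that kind of shared-string filter is about as simple
as filtering gets.  The sketch below is mine; the addresses and the
agreed strings are made-up examples, not anyone's actual setup.)

    # A whitelist keyed on shared information -- here a known sender address
    # or an agreed subject-line string (RISKS-style "notsp").  All addresses
    # and strings are made-up examples.
    whitelist = {"alice@example.org", "bob@example.net"}
    shared_strings = {"notsp"}        # agreed out of band with correspondents

    def accept(sender: str, subject: str) -> bool:
        if sender.lower() in whitelist:
            return True
        return any(s in subject.lower() for s in shared_strings)

    print(accept("stranger@example.com", "Re: your paper (notsp)"))   # True
    print(accept("spammer@example.com", "Cheap pills"))               # False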


Of course, the downside is that you can now *only* receive mail from
those on your (logical) whitelist.  That's fine in some cases,
unacceptable in others.  You can semi-automatically grow your
whitelist using some kind of challenge/response.  For example, you
could send back the message with a note saying:  "You're not on my
whitelist; if you want to reach me, resend this message with 'xyzzy'
in the subject line."  Spammers don't bother to look for such messages
right now (though if you made this automatic enough, and enough people
adopted it, they would have a reason to!) so they won't be able to
sneak onto your whitelist that way.  However, many people writing to
you won't want to be bothered - and automated mailings that you *do*
want to receive and don't know the details of ahead of time (e.g.,
approval messages for mailing list requests you make) won't get
through either.

-- Jerry



Re: Proof of Work - atmospheric carbon

2009-01-30 Thread John Levine
Richard Clayton and I claim that PoW doesn't work:
http://www.cl.cam.ac.uk/~rnc1/proofwork.pdf

I bumped into Cynthia Dwork, who originally invented PoW, at a CEAS
meeting a couple of years ago, and she said she doesn't think it
works, either.
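
(For anyone new to the thread: the scheme under discussion is
essentially hashcash-style proof of work - the sender burns CPU
finding a stamp whose hash has some number of leading zero bits, and
the recipient verifies it with a single hash.  The toy below is only
an illustration of that idea, not the interoperable hashcash format,
and the recipient address in it is hypothetical.)

    # Toy hashcash-style stamp: minting costs about 2**PREFIX_BITS hashes on
    # average, checking costs one hash.  Illustration only.
    import hashlib
    from itertools import count

    PREFIX_BITS = 20                       # ~10**6 hashes to mint, on average

    def leading_zero_bits_ok(stamp: str) -> bool:
        digest = hashlib.sha256(stamp.encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - PREFIX_BITS) == 0

    def mint(resource: str) -> str:
        for counter in count():            # the expensive part (sender pays)
            stamp = f"{resource}:{counter}"
            if leading_zero_bits_ok(stamp):
                return stamp

    def check(stamp: str, resource: str) -> bool:
        return stamp.startswith(resource + ":") and leading_zero_bits_ok(stamp)

    stamp = mint("fred@example.com")       # hypothetical recipient address
    print(stamp, check(stamp, "fred@example.com"))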

R's,
John



Re: UCE - a simpler approach using just digital signing?

2009-01-30 Thread John Levine
Hi.  One of the hats I wear is the chair of the Anti-Spam Research
Group of the Internet Research Task Force, which is down the virtual
hall from the IETF.

You know how you all feel when someone shows up with his super duper
new unbreakable crypto scheme?  Well, that's kind of how I feel here.
Dealing with spam is surprisingly subtle, a lot of smart people have
been thinking about it for a long time, and most new ideas turn out
to be old ideas with well known flaws or limitations.

 Consider the implications of a third field, or trust token, which
 works like a password to fred's mail box.  Your mailer's copy of
 fred's email address would look like fred#to...@example.com where
 token was a field that was your own personal password to fred's
 mailbox.

It's not a bad idea.  Its best known implementation was done in 1996
by Robert Hall of AT&T Labs, who called it Zoemail.  You can learn all
about it in US Patent 5,930,479.

This is the wrong place to go into detail about its limitations,
although it should be self-evident that if it were effective, sometime
in the past 13 years we'd have started using it.

You're all welcome in the ASRG, which has a wiki at
http://wiki.asrg.sp.am with pointers to the mailing list and other
resources.  One of our slow moving projects is a taxonomy of anti-spam
techniques, both ones that work and ones that don't work.  If you'd
like to contribute, drop me a note and I'll give you a password so you
can edit it.

Regards,
John Levine, jo...@iecc.com, Primary Perpetrator of The Internet for Dummies,
Information Superhighwayman wanna-be, http://www.johnlevine.com, ex-Mayor
"More Wiener schnitzel, please," said Tom, revealingly.



Re: UCE - a simpler approach using just digital signing?

2009-01-30 Thread Taral
On Fri, Jan 30, 2009 at 1:47 PM, Ray Dillinger b...@sonic.net wrote:
 This is basic digital signatures; it would work.

What's your transition plan? How do you deal with stolen trust
tokens? (Think trojans/worms.)

Also see: http://craphound.com/spamsolutions.txt

-- 
Taral tar...@gmail.com
Please let me know if there's any further trouble I can give you.
-- Unknown



Re: full-disk subversion standards released

2009-01-30 Thread Taral
On Fri, Jan 30, 2009 at 1:41 PM, Jonathan Thornburg
jth...@astro.indiana.edu wrote:
 For open-source software encryption (be it swap-space, file-system,
 and/or full-disk), the answer is yes:  I can assess the developers'
 reputations, I can read the source code, and/or I can take note of
 what other people say who've read the source code.

Really? What about hardware backdoors? I'm thinking something like the
old /bin/login backdoor that had compiler support, but in hardware.

-- 
Taral tar...@gmail.com
Please let me know if there's any further trouble I can give you.
-- Unknown
