Halloween Hash Bash information

2005-10-31 Thread Perry E. Metzger

Bruce Schneier is liveblogging from the NIST Halloween Hash Bash:

http://www.schneier.com/blog/

(Credit: Steve Bellovin directed me to the web page.)

Perry
-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


HTTPS mutual authentication alpha release - please test

2005-10-31 Thread Nick Owen
Happy Halloween! In what we hope will be a Halloween tradition, we have a
new release available for testing. WiKID is pleased to announce the
alpha release of a major upgrade under the GPL featuring a cryptographic
method of mutual authentication for HTTPS:

WiKID-2.1: SOMETHING_WiKID_THIS_WAY_COMES

The token client is available at sourceforge:
http://prdownloads.sourceforge.net/wikid-twofactor/WiKID_Token_Client-2.1-prerelease.zip?download

The system works this way: Each WiKID domain now can include a
'registered URL' field and a hash of that website's SSL certificate.  When
a user wants to log onto a secure web site, they start the WiKID token
and enter their PIN. The PIN is encrypted and sent to the WiKID server
along with a one-time use AES key and the registered URL.  The server
responds with a hash of the website's SSL certificate.  The token client
fetches the SSL certificate of the website and compares it to the hash.  If
the hashes don't match, the user gets an error.  If they match, the user
is presented with the registered URL and the passcode.  On supported
systems, the token client will launch the default browser to the
registered URL.
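
For illustration, here is a minimal Python sketch of the comparison step
(the actual token client is Java; the SHA-1 digest and hex encoding of the
server-supplied hash are assumptions for this sketch):

    import hashlib
    import ssl

    def cert_matches(host, expected_sha1_hex, port=443):
        # Fetch the site's certificate and compare its digest against the
        # hash returned by the WiKID server for the registered URL.
        # (SHA-1 over the DER encoding is an assumption of this sketch.)
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        return hashlib.sha1(der).hexdigest() == expected_sha1_hex.lower()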

We are currently seeking testers for this early release.  You do not
need to set up a WiKID server to test. We have set up a WiKID server for
you.  Testers will need to download the latest J2SE WiKID token from
sourceforge.  Testing information can be found here:

https://sourceforge.net/forum/forum.php?thread_id=1376617&forum_id=484250

Most one-time-password systems suffer from man-in-the-middle attacks
primarily due to difficulties users have with validating SSL
certificates. The goal of this release is to validate certificates for
the end user, providing SSH-esque security for web-enabled
applications such as online banking.

Any feedback is much appreciated.

Sincerely,

Nick
-- 
Nick Owen
WiKID Systems, Inc.
404.962.8983 (desk)
404.542.9453 (cell)
http://www.wikidsystems.com
At last, two-factor authentication, without the hassle factor
Now open source: http://sourceforge.net/projects/wikid-twofactor/

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Symmetric ciphers as hash functions

2005-10-31 Thread James Muir
Tom Shrimpton (http://www.cs.pdx.edu/~teshrim/) does research in this 
area (i.e., using block ciphers to build hash functions).  See the papers 
on his web site; in particular:


Black-Box Analysis of the Block-Cipher-Based Hash-Function Constructions 
from PGV

John Black, Phillip Rogaway, and Thomas Shrimpton
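
As a concrete illustration of the constructions analyzed there, here is a
minimal Python sketch (using pycryptodome; my own illustration, not from the
paper) of one of the secure PGV modes, Davies-Meyer.  The message block is
used as the cipher key, and the feed-forward XOR is what defeats the simple
"decrypt with the known key" inversion described in the question below:

    from Crypto.Cipher import AES  # pycryptodome

    def davies_meyer(message, iv=b"\x00" * 16):
        # Toy Davies-Meyer: H_i = E_{M_i}(H_{i-1}) xor H_{i-1}.
        # Assumes the message is already padded to a multiple of 16 bytes;
        # a real design also needs Merkle-Damgard strengthening.
        h = iv
        for i in range(0, len(message), 16):
            block = message[i:i + 16]                  # used as the AES-128 key
            c = AES.new(block, AES.MODE_ECB).encrypt(h)
            h = bytes(x ^ y for x, y in zip(c, h))     # feed-forward
        return h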

-James

Arash Partow wrote:

Hi all,

How does one properly use a symmetric cipher as a cryptographic hash
function? I seem to be going around in circles.

Initially I thought you choose some known key and encrypt the data
with the key, using either the encrypted text or the internal state of
the cipher as the hash value. It turns out that all one needs to do to
break it is decrypt the hash value with the "known" key, and you get a
value which will produce the same hash value.

Reversing the situation (using the data as the key and a known
plaintext) makes a plaintext attack seem like a joy, etc.

Are there any papers/books/etc that explain the implementation/use of
symmetric ciphers (particularly AES) as cryptographic hash functions?

btw I know that hash functions and symmetric ciphers share the same
structural heritage (Feistel rounds, etc.), I just don't seem to be
making the usage link at this point in time... :D

Any help would be very much appreciated.



Kind regards


Arash Partow

Be one who knows what they don't know,
Instead of being one who knows not what they don't know,
Thinking they know everything about all things.
http://www.partow.net


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Some thoughts on high-assurance certificates

2005-10-31 Thread Anne & Lynn Wheeler
Peter Gutmann wrote:
> And therein lies the problem.  The companies providing the certificates are in
> the business of customer service, not of running FBI-style special background
> investigations that provide a high degree of assurance but cost $50K each and
> take six months to complete.  The same race to the bottom that's given us
> unencrypted banking site logons and $9.95 certificates is also going to hit
> "high-assurance" certificates, with companies improving customer service and
> cutting customer costs by eliminating the (to them and to the customer)
> pointless steps that only result in extra overhead and costs.  How long before
> users can get $9.95 pre-approved high-assurance certificates, and the race
> starts all over again?

when we were doing this stuff for the original payment gateway ...
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

we had to also go around and audit some number of these relatively (at
the time) brand new organizations called certification authorities ...
issuing these things called digital certificates.

we listed a large number of things that a high assurance business
service needed to achieve (aka explaining that the certification
authority business was mostly a business service operation). at the
time, several commented that they were starting to realize that ... it
wasn't a technically oriented local garage type operation ... but almost
totally administrative, bookkeeping, filing, service calls ... etc (and
from an operational standpoint nearly zero technical content). most of
them even raised the subject about being able to outsource their actual
operations.

the other point ... was that the actual design point for digital
certificates ... was the providing of certified information for offline
relying parties ... i.e. relying parties that had no means of directly
accessing their own copy of the certified information ... and/or it was
an offline environment and could not perform timely access to the
authoritative agency responsible for the certified information.

as the online infrastructure became more and more pervasive ... the
stale, static, digital certificates were becoming more & more redundant,
superfluous and useless. in that transition, there was some refocus by
certification authorities from the offline market segment of relying
parties (which was rapidly disappearing as the online internet became
more and more pervasive) to the no-value relying party market segment
... aka those operations where the relying party couldn't justify the cost
of having their own copy of the certified information AND couldn't cost
justify performing timely, online operations (directly contacting the
authoritative agency responsible for certified information). even this
no-value market segment began to rapidly shrink as the IT cost of
maintaining their own information rapidly declined and the telecom cost
of doing online transactions also rapidly declined.

while the attribute of "high-assurance" can be viewed as a good thing
... the issue of applying it to a paradigm that was designed for
supplying a solution for an offline environment becomes questionable in
a world that is rapidly becoming online, all-the-time.

it makes even less sense for those that have migrated to the no-value
market segment ... where the parties involved that can't cost justify
online solutions ... aren't likely to find that they can justify costs
associated with supporting a high-assurance business operation.

part of the issue here is the possible confusion of the business process
of certifying information and the digital certificate business operation
targeted at representing that certified information for relying parties
operating in an offline environment  and unable to perform timely
operations to directly access the information.

this can possibly be seen in some of the mid-90s operations that
attempted to draw a correlation between x.509 identification digital
certificates and drivers licenses ... where both were targeted as
needing sufficient information for relying parties to perform operations
... solely relying on information totally obtained from the document
(physical driver's license or x.509 identification digital certificate).
there was some migration away from using the driver's license as a
corollary for x.509 identification digital certificates ... as you found
the majority of the important driver's license relying operations
migrating to real-time, online transactions. a public official might use
the number on the driver's license purely as part of a real-time online
transaction ... retrieving all the actual information ... and not
needing to actually rely on the information contained in the driver's
license at all. it was only for the relatively no-value operations that
the information in the physical drivers license continued to have
meaning. any events involving real value were all quickly migrating to
online, real-time transactions.

---

Re: [EMAIL PROTECTED]: Skype security evaluation]

2005-10-31 Thread Kuehn, Ulrich
> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of cyphrpunk
> Sent: Friday, October 28, 2005 06:07
> To: [EMAIL PROTECTED]; cryptography@metzdowd.com
> Subject: Re: [EMAIL PROTECTED]: Skype security evaluation]
> 
> Wasn't there a rumor last year that Skype didn't do any 
> encryption padding, it just did a straight exponentiation of 
> the plaintext?
>
> Would that be safe, if as the report suggests, the data being 
> encrypted is 128 random bits (and assuming the encryption 
> exponent is considerably bigger than 3)? Seems like it's 
> probably OK. A bit risky perhaps to ride bareback like that 
> but I don't see anything inherently fatal.
> 
There are results available on this issue: First, a paper by 
Boneh, Joux, and Nguyen "Why Textbook ElGamal and RSA Encryption 
are Insecure", showing that you can essentially half the number 
of bits in the message, i.e. in this case the symmetric key 
transmitted. 
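
Roughly, that attack exploits the multiplicative structure of unpadded RSA:
if the random message m happens to split as m1*m2 with both factors below
2^(L/2) -- which the paper shows occurs for a significant fraction of random
keys -- an eavesdropper can recover it with about 2^(L/2) work instead of
2^L. A toy sketch of the idea (my own illustration, with absurdly small,
hypothetical parameters):

    # Toy meet-in-the-middle attack on textbook (unpadded) RSA.
    p, q = 10007, 10009
    n, e = p * q, 65537

    m = 199 * 211            # a 16-bit "session key" that happens to split
    c = pow(m, e, n)         # textbook RSA: no padding

    B = 1 << 8               # only search up to 2^(L/2)
    table = {pow(m2, e, n): m2 for m2 in range(1, B)}
    for m1 in range(1, B):
        t = (c * pow(pow(m1, e, n), -1, n)) % n   # c / m1^e mod n (Python 3.8+)
        if t in table:
            print("recovered m =", m1 * table[t])
            break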

Second, it turns out that the tricky part is the implementation 
of the decryption side, where the straightforward way -- ignoring 
the padding with 0s ("They are zeroes, aren't they?") -- gives you a 
system that might be attacked very efficiently in a chosen plaintext 
scenario, obtaining the symmetric key. See my paper "Side-Channel 
Attacks on Textbook RSA and ElGamal Encryption" at PKC2003 for 
details.

Hope this answers your question.

Ulrich


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: [EMAIL PROTECTED]: Skype security evaluation]

2005-10-31 Thread Whyte, William
A similar approach enabled Bleichenbacher's SSL attack on 
RSA with PKCS#1 padding. This sounds very dangerous to me.

William 

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of cyphrpunk
> Sent: Friday, October 28, 2005 5:07 AM
> To: [EMAIL PROTECTED]; cryptography@metzdowd.com
> Subject: Re: [EMAIL PROTECTED]: Skype security evaluation]
> 
> Wasn't there a rumor last year that Skype didn't do any encryption
> padding, it just did a straight exponentiation of the plaintext?
> 
> Would that be safe, if as the report suggests, the data being
> encrypted is 128 random bits (and assuming the encryption exponent is
> considerably bigger than 3)? Seems like it's probably OK. A bit risky
> perhaps to ride bareback like that but I don't see anything inherently
> fatal.
> 
> CP
> 
> -
> The Cryptography Mailing List
> Unsubscribe by sending "unsubscribe cryptography" to 
> [EMAIL PROTECTED]
> 
> 

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: packet traffic analysis

2005-10-31 Thread John Denker

In the context of:

>>If your plaintext consists primarily of small packets, you should set the MTU
>>of the transporter to be small.   This will cause fragmentation of the
>>large packets, which is the price you have to pay.  Conversely, if your
>>plaintext consists primarily of large packets, you should make the MTU large.
>>This means that a lot of bandwidth will be wasted on padding if/when there
>>are small packets (e.g. keystrokes, TCP acks, and voice cells) but that's
>>the price you have to pay to thwart traffic analysis.

Travis H. wrote:


> I'm not so sure.  If we're talking about thwarting traffic on the link
> level (real circuit) or on the virtual-circuit level, then you're
> adding, on average, a half-packet latency whenever you want to send a
> real packet.


I very much doubt it.  Where did that factor of "half" come from?


> I don't see any reason why it's necessary to pay these costs if you
> abandon the idea of generating only equal-length packets


Ah, but if you generate unequal-length packets then they are
vulnerable to length-analysis, which is a form of traffic analysis.
I've seen analysis systems that do exactly this.  So the question is,
are you trying to thwart traffic analysis, or not?

> I should point out that encrypting PRNG output may be pointless,


*is* pointless, as previously discussed.


> and
> perhaps one optimization is to stop encrypting when switching on the
> chaff.


A better solution would be to leave the encryption on and use constants
(not PRNG output) for the chaff, as previously discussed.


> Some minor details
> involving resynchronizing when the PRNG happens to


The notion of synchronized PRNGs is IMHO crazy -- complicated as well as
utterly unnecessary.


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Some thoughts on high-assurance certificates

2005-10-31 Thread Peter Gutmann
A number of CAs have started offering high-assurance certificates in an
attempt to... well, probably to make more money from them, given that the
bottom has pretty much fallen out of the market when you can get a standard
certificate for as little as $9.95.  The problem with these certificates is
that, apart from the fact that the distinction is meaningless to users (see
work by HCI people in this area), they also don't fit the standard CA business
processes.  CAs employ people whose job role, and job expertise, lie in
shifting as much product as possible as quickly as possible (as has already
been demonstrated in the race to the bottom for supplying standard
certificates), not in enforcing PKI theology on their clients.

There are only a very small number of people who understand the theology
behind certificates sufficiently to be able to explain the motivation behind
the various steps in the process of issuing them, and none of them are going
to be employed in doing certificate checking for CAs.  Instead, the task will
be managed by, and performed by, the same people who spam everything in the US
that has a pulse with pre-approved credit card applications, loans, and
similar items.

Here's a real-world example of this process in action.  A user approached a
large public CA for a high-assurance certificate and specifically requested
that his identity be checked thoroughly via his hard-to-forge paper documents.
The CA did the usual standard-assurance checking (whois lookup, email to the
whois contact address, caller ID check on the calling number, all easily
spoofed), and then announced that the user had been pre-approved for the high-
assurance certificate, *before* the user had supplied his authenticating
documents.  Made perfect sense, they'd done the equivalent of running a credit
check before pre-approving a credit card or loan or whatever. Their proactive
service and rapid attendance to the customer's needs put them ahead of the
competition...

... except that this isn't something like a standard credit-check business.
The user tried explaining this to the CA employees doing the checking, but
they just didn't understand what the problem was.  They'd done everything
right and provided outstanding service to the user hadn't they?

And therein lies the problem.  The companies providing the certificates are in
the business of customer service, not of running FBI-style special background
investigations that provide a high degree of assurance but cost $50K each and
take six months to complete.  The same race to the bottom that's given us
unencrypted banking site logons and $9.95 certificates is also going to hit
"high-assurance" certificates, with companies improving customer service and
cutting customer costs by eliminating the (to them and to the customer)
pointless steps that only result in extra overhead and costs.  How long before
users can get $9.95 pre-approved high-assurance certificates, and the race
starts all over again?

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


[Clips] Security 2.0: FBI Tries Again To Upgrade Technology

2005-10-31 Thread R.A. Hettinga

--- begin forwarded text


 Delivered-To: [EMAIL PROTECTED]
 Date: Mon, 31 Oct 2005 07:29:37 -0500
 To: Philodox Clips List <[EMAIL PROTECTED]>
 From: "R.A. Hettinga" <[EMAIL PROTECTED]>
 Subject: [Clips] Security 2.0: FBI Tries Again To Upgrade Technology
 Reply-To: [EMAIL PROTECTED]
 Sender: [EMAIL PROTECTED]

 

 The Wall Street Journal

  October 31, 2005

 Security 2.0:
  FBI Tries Again
  To Upgrade Technology
 By ANNE MARIE SQUEO
 Staff Reporter of THE WALL STREET JOURNAL
 October 31, 2005; Page B1

 As the fifth chief information officer in as many years at the Federal
 Bureau of Investigation, Zalmai Azmi faces a mystery: How to create a
 high-tech system for wide sharing of information inside the agency, yet at
 the same time stop the next Robert Hanssen.

 Mr. Hanssen is the rogue FBI agent who was sentenced to life in prison for
 selling secret information to the Russians. His mug shot -- with the words
 "spy, traitor, deceiver" slashed across it -- is plastered on the walls of
 a room at FBI headquarters where two dozen analysts try to track security
 breaches.

 Mr. Hanssen's arrest in February 2001, and his ability to use the agency's
 archaic system to gather the information he sold, led FBI officials to want
 to "secure everything" in their effort to modernize the bureau, Mr. Azmi
 says. But then, investigations after the Sept. 11 terrorist attacks showed
 that FBI agents had information about suspected terrorists that hadn't been
 shared with other law-enforcement agencies. So then "we said, 'Let's share
 everything,'" Mr. Azmi says.

 Since then, the FBI spent heavily to upgrade its case-management system,
 from one that resembled early versions of personal computers -- green type
 on a black computer screen, requiring a return to the main menu for each
 task -- to a system called Virtual Case File, which was supposed to use
 high-speed Internet connections and simple point-and-click features to sort
 and analyze data quickly.

 But after four years and $170 million, the dueling missions tanked the
 project. FBI Director Robert Mueller in April pulled the plug on the much
 ballyhooed technology amid mounting criticism from Congress and feedback
 from within the bureau that the new system wasn't a useful upgrade of the
 old, rudimentary system. As a result, the FBI continues to use older
 computer systems and paper documents remain the official record of the FBI
 for the foreseeable future.

 Highlighting the agency's problems is the recent indictment of an FBI
 analyst, Leandro Aragoncillo, who is accused of passing secret information
 to individuals in the Philippines. After getting a tip that Mr. Aragoncillo
 was seeking to talk to someone he shouldn't have needed to contact, the FBI
 used its computer-alert system to see what information the analyst had
 accessed since his hiring in 2004, a person familiar with the probe said.
 The system didn't pick up Mr. Aragoncillo's use of the FBI case-management
 system as unusual because he didn't seek "top secret" information and
 because he had security clearances to access the information involved, this
 person said.

 The situation underscores the difficulties in giving analysts and FBI
 agents access to a broad spectrum of information, as required by the 9/11
 Commission, while trying to ensure rogue employees aren't abusing the
 system. It's up to Mr. Azmi to do all this -- without repeating the
 mistakes of Virtual Case File.

 Much is at stake: FBI agents and analysts are frustrated by the lack of
 technology -- the FBI finished connecting its agents to the Internet only
 last year -- and Mr. Mueller's legacy depends on the success of this
 effort. The FBI director rarely appears at congressional hearings or news
 conferences without his chief information officer close by these days.

 An Afghan immigrant, the 43-year-old Mr. Azmi fled his native country in
 the early 1980s after the Soviet invasion. After a brief stint as a car
 mechanic in the U.S., he enlisted in the Marines in 1984 and spent seven
 years mainly overseas. A facility for languages -- he speaks five -- helped
 him win an assignment in the Marines working with radio communications and
 emerging computer technologies.

 When he returned to the U.S., he joined the U.S. Patent and Trademark
 Office as a project manager developing software and hardware solutions for
 patent examiners. He attended college and graduate school at night,
 obtaining a bachelor's degree in information systems from American
 University and a master's degree in the same field from George Washington
 University, both in Washington, D.C. Afterward, he got a job at the Justice
 Department in which he helped upgrade technology for U.S. attorneys across
 the country.

 That is where he was working when terrorists attacked Sept. 11, 2001. On
 Sept. 12, armed with two vans of equipment, Mr. Azmi and a team of
 engineers traveled from Washingt

Symmetric ciphers as hash functions

2005-10-31 Thread Arash Partow

Hi all,

How does one properly use a symmetric cipher as a cryptographic hash
function? I seem to be going around in circles.

Initially I thought you choose some known key and encrypt the data
with the key, using either the encrypted text or the internal state of
the cipher as the hash value. It turns out that all one needs to do to
break it is decrypt the hash value with the "known" key, and you get a
value which will produce the same hash value.

Reversing the situation (using the data as the key and a known
plaintext) makes a plaintext attack seem like a joy, etc.

Are there any papers/books/etc that explain the implementation/use of
symmetric ciphers (particularly AES) as cryptographic hash functions?

btw I know that hash functions and symmetric ciphers share the same
structural heritage (Feistel rounds, etc.), I just don't seem to be
making the usage link at this point in time... :D

Any help would be very much appreciated.



Kind regards


Arash Partow

Be one who knows what they don't know,
Instead of being one who knows not what they don't know,
Thinking they know everything about all things.
http://www.partow.net


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: [fc-discuss] Financial Cryptography Update: On Digital Cash-like Payment Systems

2005-10-31 Thread cyphrpunk
On 10/28/05, Daniel A. Nagy <[EMAIL PROTECTED]> wrote:
> Irreversibility of transactions hinges on two features of the proposed
> system: the fundamentally irreversible nature of publishing information in
> the public records and the fact that in order to invalidate a secret, one
> needs to know it; the issuer does not learn the secret at all in some
> implementations and only learns it when it is spent in others.
>
> In both cases, reversal is impossible, albeit for different reasons. Let's
> say, Alice made a payment to Bob, and Ivan wishes to reverse it with the
> possible cooperation of Alice, but definitely without Bob's help. Alice's
> secret is Da, Bob's secret is Db, the corresponding challenges are,
> respectively, Ca and Cb, and the S message containing the exchange request
> Da->Cb has already been published.
>
> In the first case, when the secret is not revealed, there is simply no way to
> express reversals. There is no S message with suitable semantics,
> making it impossible to invalidate Db if Bob refuses to reveal it.

The issuer can still invalidate it even though you have not explicitly
defined such an operation. If Alice paid Bob and then convinces the
issuer that Bob cheated her, the issuer could refuse to honor the Db
deposit or exchange operation. From the recipient's perspective, his
cash is at risk at least until he has spent it or exchanged it out of
the system.

The fact that you don't have an "issuer invalidates cash" operation in
your system doesn't mean it couldn't happen. Alice could get a court
order forcing the issuer to do this. The point is that reversal is
technically possible, and you can't define it away just by saying that
the issuer won't do that. If the issuer has the power to reverse
transactions, the system does not have full irreversibility, even
though the issuer hopes never to exercise his power.


> In the second case, Db is revealed when Bob tries to spend it, so Ivan can,
> in principle, steal (confiscate) it, instead of processing, but at that
> point Da has already been revealed to the public and Alice has no means to
> prove that she was in exclusive possession of Da before it became public
> information.

That is an interesting possibility, but I can think of a way around
it. Alice could embed a secret within her secret. She could base part
of her secret on a hash of an even-more-secret value which she would
not reveal when spending/exchanging. Then, if it came to the point where
she had to prove that she was the proper beneficiary of a reversed
transaction, she could reveal the inner secret to justify her claim.
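
A minimal sketch of that idea as a hash commitment (illustration only; the
names are hypothetical, not from the paper):

    import hashlib, os

    inner_secret = os.urandom(32)        # kept offline, never revealed when spending
    Da_component = hashlib.sha256(inner_secret).digest()  # folded into Alice's secret Da

    # Revealing inner_secret later proves prior possession of Da_component,
    # even after Da itself has become public information.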


> Now, one can extend the list of possible S messages to allow for reversals
> in the first scenario, but even in that case Ivan cannot hide the fact of
> reversal from the public after it happened and the fact that he is prepared
> to reverse payments even before he actually does so, because the users and
> auditors need to know the syntax and the semantics of the additional S
> messages in order to be able to use Ivan's services.

That's true, the public visibility of the system makes secret
reversals impossible. That's very good - one of the problems with
e-gold was that it was never clear when they were reversing and
freezing accounts. Visibility is a great feature. But it doesn't keep
reversals from happening, and it still leaves doubt about how final
transactions will be in this system.

CP

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: On Digital Cash-like Payment Systems

2005-10-31 Thread John Kelsey
>From: cyphrpunk <[EMAIL PROTECTED]>
>Sent: Oct 27, 2005 9:15 PM
>To: "James A. Donald" <[EMAIL PROTECTED]>
>Cc: cryptography@metzdowd.com, [EMAIL PROTECTED]
>Subject: Re: On Digital Cash-like Payment Systems

>On 10/26/05, James A. Donald <[EMAIL PROTECTED]> wrote:
>> How does one inflate a key?

>Just make it bigger by adding redundancy and padding, before you
>encrypt it and store it on your disk. That way the attacker who wants
>to steal your keyring sees a 4 GB encrypted file which actually holds
>about a kilobyte of meaningful data. Current trojans can steal files
>and log passwords, but they're not smart enough to decrypt and
>decompress before uploading. They'll take hours to snatch the keyfile
>through the net, and maybe they'll get caught in the act.

Note that there are crypto schemes that use huge keys, and it's
possible to produce simple variants of existing schemes that use
multiple keys.  That would mean that the whole 8GB string was
necessary to do whatever crypto thing you wanted to do.  A simple
example is to redefine CBC-mode encryption as

C[i] = E_K(C[i-1] xor P[i] xor S[C[i-1] mod 2^{29}])

where S is the huge shared string, and we're using AES.  Without
access to the shared string, you could neither encrypt nor decrypt.
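
A minimal Python sketch of that variant (using pycryptodome; interpreting
C[i-1] as a big-endian integer and indexing S byte-wise are my assumptions,
since the formula above doesn't pin those details down):

    from Crypto.Cipher import AES  # pycryptodome

    def big_key_cbc_encrypt(key, iv, shared, plaintext):
        # C[i] = E_K(C[i-1] xor P[i] xor S[C[i-1] mod 2^29])
        # Assumes len(shared) >= 2^29 + 16 and len(plaintext) % 16 == 0.
        aes = AES.new(key, AES.MODE_ECB)
        prev, out = iv, b""
        for i in range(0, len(plaintext), 16):
            offset = int.from_bytes(prev, "big") % (1 << 29)
            s_block = shared[offset:offset + 16]
            x = bytes(a ^ b ^ c for a, b, c in
                      zip(prev, plaintext[i:i + 16], s_block))
            prev = aes.encrypt(x)
            out += prev
        return out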

>CP

--John

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


[Clips] US spy agency's patents under security scrutiny

2005-10-31 Thread R.A. Hettinga

--- begin forwarded text


 Delivered-To: [EMAIL PROTECTED]
 Date: Sat, 29 Oct 2005 08:19:44 -0400
 To: Philodox Clips List <[EMAIL PROTECTED]>
 From: "R.A. Hettinga" <[EMAIL PROTECTED]>
 Subject: [Clips] US spy agency's patents under security scrutiny
 Reply-To: [EMAIL PROTECTED]
 Sender: [EMAIL PROTECTED]

 

 New Scientist

 US spy agency's patents under security scrutiny
 17:45 27 October 2005
 NewScientist.com news service
Paul Marks

 The hyper-secretive US National Security Agency - the government's
 eavesdropping arm - appears to be having its patent applications
 increasingly blocked by the Pentagon. And the grounds for this are for
 reasons of national security, reveals information obtained under a freedom
 of information request.

 Most Western governments can prevent the granting (and therefore
 publishing) of patents on inventions deemed to contain sensitive
 information of use to an enemy or terrorists. They do so by issuing a
 secrecy order barring publication and even discussion of certain inventions.

 Experts at the US Patent and Trademark Office perform an initial security
 screening of all patent applications and then army, air force and navy
 staff at the Pentagon's Defense Technology Security Administration (DTSA)
 makes the final decision on what is classified and what is not.

 Now figures obtained from the USPTO under a freedom of information request
 by the Federation of American Scientists show that the NSA had nine of its
 patent applications blocked in the financial year to March 2005 against
 five in 2004, and none in each of the three years up to 2003.

 Keeping secrets

 This creeping secrecy is all the more surprising because as the US
 government's eavesdropping and code-breaking arm - which is thought to
 harness some of the world's most powerful supercomputers to decode
 intercepted communications - the NSA will have detailed knowledge of what
 should be kept secret and what should not. So it is unlikely to file
 patents that give away secrets.

 Bruce Schneier, a cryptographer and computer security expert with
 Counterpane Internet Security in California, finds the development
 "fascinating".

 "It's surprising that the Pentagon is becoming more secretive than the NSA.
 While I am generally in favour of openness in all branches of government,
 the NSA has had decades of experience with secrecy at the highest levels,"
 Schneier told New Scientist. "The fact that the Pentagon is classifying
 things that the NSA believes should be public is an indication of how much
 secrecy has crept into government over the past few years."

 However, at another level, the Pentagon appears to be relaxing slightly: it
 seems to be loosening its post 9/11 grip on the ideas of private inventors,
 with the number having patents barred on the grounds of national security
 halving in the last year.

 In the financial year to 2004, DTSA imposed 61 secrecy orders on private
 inventors, a number that had been climbing inexorably since 9/11. But up to
 the end of financial 2005, only 32 inventors had "secrecy orders" imposed
 on their inventions.

 Overall, the figures obtained by the FAS reveal 106 new secrecy orders were
 imposed on US inventions to March 2005, while 76 others were rescinded. So
 there are now 4915 secrecy orders in effect - some of which have been in
 effect since the 1930s.
 Related Articles
Patents gagged in the name of national security
http://www.newscientist.com/article.ns?id=mg18725075.800
09 July 2005
Transforming US Intelligence edited by Jennifer E Sims and Burton Gerber
http://www.newscientist.com/article.ns?id=mg18725182.100
24 September 2005
Hand over your keys
http://www.newscientist.com/article.ns?id=mg16922735.200
13 January 2001
 Weblinks
Invention secrecy activity, Federation of American Scientists
http://www.fas.org/sgp/othergov/invention/stats.html
US Department of Defense
http://www.defenselink.mil/
US National Security Agency
http://www.nsa.gov/

 --
 -
 R. A. Hettinga 
 The Internet Bearer Underwriting Corporation 
 44 Farquhar Street, Boston, MA 02131 USA
 "... however it may deserve respect for its usefulness and antiquity,
 [predicting the end of the world] has not been found agreeable to
 experience." -- Edward Gibbon, 'Decline and Fall of the Roman Empire'
 ___
 Clips mailing list
 [EMAIL PROTECTED]
 http://www.philodox.com/mailman/listinfo/clips

--- end forwarded text


-- 
-
R. A. Hettinga 
The Internet Bearer Underwriting Corporation 
44 Farquhar Street, Boston, MA 02131 USA
"... howe

Re: packet traffic analysis

2005-10-31 Thread Travis H.
> I assume that the length is
> explicitly encoded in the legitimate packet.  Then the peer for the
> link ignores everything until the next "escape sequence" introducing a
> legitimate packet.

I should point out that encrypting PRNG output may be pointless, and
perhaps one optimization is to stop encrypting when switching on the
chaff.  The peer can then encrypt the escape sequence as it would
appear in the encrypted stream, and do a simple string match on that. 
In this manner the peer does not have to do any decryption until the
[encrypted] escape sequence re-appears.  Another benefit of this is to
limit the amount of material encrypted under the key to legitimate
traffic and the escape sequences prefixing them.  Some minor details
involving resynchronizing when the PRNG happens to produce the same
output as the expected encrypted escape sequence are left as an
exercise for the reader.
--
http://www.lightconsulting.com/~travis/  -><-
"We already have enough fast, insecure systems." -- Schneier & Ferguson
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: packet traffic analysis

2005-10-31 Thread Travis H.
Good catch on the encryption.  I feel silly for not thinking of it.

> If your plaintext consists primarily of small packets, you should set the MTU
> of the transporter to be small.   This will cause fragmentation of the
> large packets, which is the price you have to pay.  Conversely, if your
> plaintext consists primarily of large packets, you should make the MTU large.
> This means that a lot of bandwidth will be wasted on padding if/when there
> are small packets (e.g. keystrokes, TCP acks, and voice cells) but that's
> the price you have to pay to thwart traffic analysis.

I'm not so sure.  If we're talking about thwarting traffic on the link
level (real circuit) or on the virtual-circuit level, then you're
adding, on average, a half-packet latency whenever you want to send a
real packet.  And then there's the bandwidth tradeoff you mention,
which is probably of a larger concern (although bandwidth will
increase over time, whereas the speed of light will not).

I don't see any reason why it's necessary to pay these costs if you
abandon the idea of generating only equal-length packets and creating
all your chaff as packets.  Let's assume the link is encrypted as
before.  Then you merely introduce your legitimate packets with a
certain escape sequence, and pad between these packets with either
zeroes, or if you're more paranoid, some kind of PRNG.  In this way,
if the link is idle, you can stop generating chaff and start
generating packets at any time.  I assume that the length is
explicitly encoded in the legitimate packet.  Then the peer for the
link ignores everything until the next "escape sequence" introducing a
legitimate packet.
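
A rough sketch of that framing (the marker value and 16-bit length field are
arbitrary choices for illustration):

    import os
    import struct

    ESC = b"\x7e\x5a\xa5\x81\x3c\xc3\x01\xfe"   # arbitrary 8-byte escape marker

    def frame(packet):
        # escape sequence + explicit length + legitimate packet
        return ESC + struct.pack("!H", len(packet)) + packet

    def chaff(n):
        # filler between legitimate packets: zeroes, or PRNG output
        return os.urandom(n)

    # The receiver scans the (decrypted) stream for ESC, reads the length,
    # extracts the packet, and discards everything else.  Chaff can collide
    # with ESC by chance, which is why some resynchronization rule is needed.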

This is not a tiny hack, but avoids much of the overhead in your
technique.  It could easily be applied to something like openvpn,
which can operate over a TCP virtual circuit, or ppp.  It'd be a nice
optimization if you could avoid retransmits of segments that contained
only chaff, but that may or may not be possible to do without giving
up some TA resistance (esp. in the presence of an attacker who may
prevent transmission of segments).
--
http://www.lightconsulting.com/~travis/  -><-
"We already have enough fast, insecure systems." -- Schneier & Ferguson
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: [fc-discuss] Financial Cryptography Update: On Digital Cash-like Payment Systems

2005-10-31 Thread cyphrpunk
One other point with regard to Daniel Nagy's paper at
http://www.epointsystem.org/~nagydani/ICETE2005.pdf

A good way to organize papers like this is to first present the
desired properties of systems like yours (and optionally show that
other systems fail to meet one or more of these properties); then to
present your system; and finally to go back through and show how your
system meets each of the properties, perhaps better than any others.
This paper is lacking that last step. It would be helpful to see the
epoint system evaluated with regard to each of the listed properties.

In particular I have concerns about the finality and irreversibility
of payments, given that the issuer keeps track of each token as it
progresses through the system. Whenever one token is exchanged for a
new one, the issuer records and publishes the linkage between the new
token and the old one. This public record is what lets people know
that the issuer is not forging tokens at will, but it does let the
issuer, and possibly others, track payments as they flow through the
system. This could be grounds for reversibility in some cases,
although the details depend on how the system is implemented. It would
be good to see a critical analysis of how epoints would maintain
irreversibility, as part of the paper.

CP

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: On Digital Cash-like Payment Systems

2005-10-31 Thread cyphrpunk
On 10/26/05, James A. Donald <[EMAIL PROTECTED]> wrote:
> How does one inflate a key?

Just make it bigger by adding redundancy and padding, before you
encrypt it and store it on your disk. That way the attacker who wants
to steal your keyring sees a 4 GB encrypted file which actually holds
about a kilobyte of meaningful data. Current trojans can steal files
and log passwords, but they're not smart enough to decrypt and
decompress before uploading. They'll take hours to snatch the keyfile
through the net, and maybe they'll get caught in the act.

CP

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: [EMAIL PROTECTED]: Skype security evaluation]

2005-10-31 Thread cyphrpunk
Wasn't there a rumor last year that Skype didn't do any encryption
padding, it just did a straight exponentiation of the plaintext?

Would that be safe, if as the report suggests, the data being
encrypted is 128 random bits (and assuming the encryption exponent is
considerably bigger than 3)? Seems like it's probably OK. A bit risky
perhaps to ride bareback like that but I don't see anything inherently
fatal.

CP

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: [fc-discuss] Financial Cryptography Update: On Digital Cash-like Payment Systems

2005-10-31 Thread cyphrpunk
On 10/25/05, Travis H. <[EMAIL PROTECTED]> wrote:
> More on topic, I recently heard about a scam involving differential
> reversibility between two remote payment systems.  The fraudster sends
> you an email asking you to make a Western Union payment to a third
> party, and deposits the requested amount plus a bonus for you using
> paypal.  The victim makes the irreversible payment using Western
> Union, and later finds out the credit card used to make the paypal
> payment was stolen when paypal reverses the transaction, leaving the
> victim short.

This is why you can't buy ecash with your credit card. Too easy to
reverse the transaction, and by then the ecash has been blinded away.
If paypal can be reversed just as easily that won't work either.

This illustrates a general problem with these irreversible payment
schemes: it is very hard to simply acquire the currency. Any time you
go from a reversible payment system (as all the popular ones are) to
an irreversible one, you have an impedance mismatch and the transfer
reflects rather than going through (so to speak).

CP

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


NY Times reports: NSA falsified Gulf of Tonkin intercepts

2005-10-31 Thread Perry E. Metzger

http://www.nytimes.com/2005/10/31/politics/31war.html?ex=1288414800&en=e2f5e341687a2ed9&ei=5090&partner=rssuserland&emc=rss

   WASHINGTON, Oct. 28 - The National Security Agency has kept secret
   since 2001 a finding by an agency historian that during the Tonkin
   Gulf episode, which helped precipitate the Vietnam War,
   N.S.A. officers deliberately distorted critical intelligence to
   cover up their mistakes, two people familiar with the historian's
   work say.

   The historian's conclusion is the first serious accusation that
   communications intercepted by the N.S.A., the secretive
   eavesdropping and code-breaking agency, were falsified so that they
   made it look as if North Vietnam had attacked American destroyers
   on Aug. 4, 1964, two days after a previous clash.

-- 
Perry E. Metzger[EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Skype Patches Critical Flaws

2005-10-31 Thread Aram Perez

Skype Patches Critical Flaws

Skype users are being urged to upgrade to the latest version of the
Internet telephony client, due to a number of critical flaws in the
software that were disclosed by Skype's maker, Skype
Technologies SA.




-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


The Pentagon is blocking NSA patent applications...

2005-10-31 Thread Steven M. Bellovin
http://www.newscientist.com/article.ns?id=dn8223&feedId=online-news_rss091

--Steven M. Bellovin, http://www.cs.columbia.edu/~smb



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


[EMAIL PROTECTED]: Re: [p2p-hackers] P2P Authentication]

2005-10-31 Thread Eugen Leitl
- Forwarded message from Kerry Bonin <[EMAIL PROTECTED]> -

From: Kerry Bonin <[EMAIL PROTECTED]>
Date: Thu, 27 Oct 2005 06:52:57 -0700
To: [EMAIL PROTECTED], "Peer-to-peer development." <[EMAIL PROTECTED]>
Subject: Re: [p2p-hackers] P2P Authentication
User-Agent: Mozilla Thunderbird 1.0.6 (Windows/20050716)
Reply-To: "Peer-to-peer development." <[EMAIL PROTECTED]>

There are only two good ways to provide man-in-the-middle resistant 
authentication with key repudiation in a distributed system - using a 
completely trusted out of band channel to manage everything, or use a 
PKI.  I've used PKI for >100k node systems, it works great if you keep 
it simple and integrate your CRL mechanism - in a distributed system the 
pieces are all already there!  I think some people are put off by the 
size and complexity of the libraries involved, which doesn't have to be 
the case - I've got a complete RSA/DSA X.509 compliant cert based PKI 
(leveraging LibTomCrypt for crypto primitives) in about 2k lines of C++, 
<30k object code, works great (I'll open that source as LGPL when I 
deploy next year...)  The only hard part about integrating into a p2p 
network is securing the CA's, and that's more of a network security 
problem than a p2p problem...

Kerry

[EMAIL PROTECTED] wrote:

>>>And if they do, then why reinvent the wheel? Traditional public key
>>>signing works well for these cases.
>>> 
>>>
>...
> 
>
>> Traditional public key signing doesn't work well if you want to
>>eliminate the central authority / trusted third party.  If you like
>>keeping those around, then yes, absolutely, traditional PKI works
>>swimmingly.
>>   
>>
>
>Where is the evidence of this bit about "traditional PKI working"?  As far as
>I've observed, traditional PKI works barely for small, highly centralized,
>hierarchical organizations and not at all for anything else.  Am I missing some
>case studies of PKI actually working as intended?
>
>Regards,
>
>Zooko
>___
>p2p-hackers mailing list
>[EMAIL PROTECTED]
>http://zgp.org/mailman/listinfo/p2p-hackers
>___
>Here is a web page listing P2P Conferences:
>http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences
>
>
> 
>


___
p2p-hackers mailing list
[EMAIL PROTECTED]
http://zgp.org/mailman/listinfo/p2p-hackers
___
Here is a web page listing P2P Conferences:
http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences


- End forwarded message -
-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE




Re: [EMAIL PROTECTED]: Skype security evaluation]

2005-10-31 Thread Peter Gutmann
Jack Lloyd <[EMAIL PROTECTED]> writes:

>I just reread those sections and I still don't see anything about RSA
>encryption padding either. 3.2.2 just has some useless factoids about the RSA
>implementation (but neglects to mention important implementation points, like
>if blinding is used, or if signatures are verified before being released).
>3.2.3 describes the signature padding, but makes no mention of the encryption
>padding, or even that a padding method is used for encryption.

This would match my experience with homebrew VPN protocols when I looked at a
pile of OSS VPN implementations a year or so back.  Every single one of them
had flaws (some quite serious) not in getting the basic crypto right, but in
the way that the crypto was used.  I don't see any reason why Skype should
break this mould.

I can't understand why they didn't just use TLS for the handshake (maybe
YASSL) and IPsec sliding-window + ESP for the transport (there's a free
minimal implementation of this whose name escapes me for use by people who
want to avoid the IKE nightmare).  Established, proven protocols and
implementations are there for the taking, but instead they had to go out and
try and assemble something with their own three hands (sigh).

(Having said that, I don't consider it a big deal.  I've always treated Skype
as a neat way of doing VoIP rather than a super-secure encrypted comms link.
The security (for whatever it's worth) is just icing on the basic Skype
service - I'd use it with or without encryption.  The killer app is the cheap
phonecalls, not the crypto).

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: [PracticalSecurity] Anonymity - great technology but hardly used

2005-10-31 Thread Ben Laurie
Travis H. wrote:
> Part of the problem is using a packet-switched network; if we had
> circuit-based, then thwarting traffic analysis is easy; you just fill
> the link with random garbage when not transmitting packets.  I
> considered doing this with SLIP back before broadband (back when my
> friend was my ISP).  There are two problems with this; one, getting
> enough random data, and two, distinguishing the padding from the real
> data in a computationally efficient manner on the remote side without
> giving away anything to someone analyzing your traffic.  I guess both
> problems could be solved
> by using synchronized PRNGs on both ends to generate the chaff.  The
> two sides getting desynchronized would be problematic.  Please CC me
> with any ideas you might have on doing something like this, perhaps it
> will become useful again one day.

But this is trivial. Since the traffic is encrypted, you just have a bit
that says "this is garbage" or "this is traffic".

OTOH, this can leave you open to traffic marking attacks. George Danezis
and I wrote a paper on a protocol (Minx) designed to avoid marking
attacks by making all packets meaningful. You can find it here:
http://www.cl.cam.ac.uk/users/gd216/minx.pdf.

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: [PracticalSecurity] Anonymity - great technology but hardly used

2005-10-31 Thread Hagai Bar-El

Hello,

At 25/10/05 07:18, cyphrpunk wrote:

>  http://www.hbarel.com/Blog/entry0006.html
>
>  I believe that for anonymity and pseudonymity technologies to survive
>  they have to be applied to applications that require them by design,
>  rather than to mass-market applications that can also do (cheaper)
>  without. If anonymity mechanisms are deployed just to fulfill the
>  wish of particular users then it may fail, because most users don't
>  have that wish strong enough to pay for fulfilling it. An example for
>  such an application (that requires anonymity by design) could be
>  E-Voting, which, unfortunately, suffers from other difficulties. I am
>  sure there are others, though.

> The truth is exactly the opposite of what is suggested in this
> article. The desire for anonymous communication is greater today than
> ever, but the necessary technology does not exist.
> ...snip...
> For the first time there are tens or hundreds of millions of users who
> have a strong need and desire for high volume anonymous
> communications. These are file traders, exchanging images, music,
> movies, TV shows and other forms of communication. The main threat to
> this illegal but widely practiced activity is legal action by
> copyright holders against individual traders. The only effective
> protection against these threats is the barrier that could be provided
> by anonymity. An effective, anonymous file sharing network would see
> rapid adoption and would be the number one driver for widespread use
> of anonymity.
> But the technology isn't there. Providing real-time, high-volume,
> anonymous communications is not possible at the present time. Anyone
> who has experienced the pitiful performance of a Tor web browsing
> session will be familiar with the iron self-control and patience
> necessary to keep from throwing the computer out the window in
> frustration. Yes, you can share files via Tor, at the expense of
> reducing transfer rates by multiple orders of magnitude.
> ...snip...



I agree with what you say, especially regarding the frustration with 
TOR, but I am not sure it contradicts the message I tried to lay out 
in my post.


Secure browsing is one instance of anonymity applications, which, as 
I mentioned, is used. I completely agree that technology may not be 
mature enough for this other instance of anonymity applications, 
which is anonymous file sharing. My point was that there is a lot of 
anonymity-related technology that is not used, especially in the 
field of finance; I did not claim that there are technological 
solutions available for each and every anonymity problem out there. I 
apologize if this spirit was not communicated well.


It's not that we have everything - it's that we don't use most of 
what we do have, although we once spent a lot of effort designing it.


Regards,
Hagai.


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


packet traffic analysis

2005-10-31 Thread John Denker

Travis H. wrote:

> Part of the problem is using a packet-switched network; if we had
> circuit-based, then thwarting traffic analysis is easy; you just fill
> the link with random garbage when not transmitting packets.


OK so far ...


> There are two problems with this; one, getting
> enough random data, and two, distinguishing the padding from the real
> data in a computationally efficient manner on the remote side without
> giving away anything to someone analyzing your traffic.  I guess both
> problems could be solved
> by using synchronized PRNGs on both ends to generate the chaff.


This is a poor statement of the problem(s), followed by a "solution" that
is neither necessary nor sufficient.

1) Let's assume we are encrypting the messages.  If not, the adversary
can read the messages without bothering with traffic analysis, so the
whole discussion of traffic analysis is moot.

2) Let's assume enough randomness is available to permit encryption
of the traffic ... in particular, enough randomness is available
_steady-state_ (without stockpiling) to meet even the _peak_ demand.
This is readily achievable with available technology.

3) As a consequence of (1) and (2), we can perfectly well use _nonrandom_
chaff.  If the encryption (item 1) is working, the adversary cannot tell
constants from anything else.  If we use chaff so that the steady-state
traffic is indistinguishable from the peak traffic, then (item 2) we
have enough randomness available;  TA-thwarting doesn't require anything
more.

4) Let's consider -- temporarily -- the scenario where the encryption is
being done using IPsec.  This will serve to establish terminology and
expose some problems heretofore not mentioned.

4a) IPsec tunnel mode has "inner headers" that are more than sufficient
to distinguish chaff from other traffic.  (Addressing the chaff to UDP
port 9 will do nicely.)

4b) What is not so good is that IPsec is notorious for "leaking" information
about packet-length.  Trying to make chaff with a distribution of packet
sizes indistinguishable from your regular traffic is rarely feasible, so
we must consider other scenarios, somewhat like IPsec but with improved
TA-resistance.

5) Recall that IPsec tunnel mode can be approximately described as IPIP
encapsulation carried by IPsec transport mode.  If we abstract away the
details, we are left with a packet (called an "envelope") that looks like

---------------++++++++++++++++++++++++++
| outer header | inner header | payload |  [1]
---------------++++++++++++++++++++++++++

where the inner header and payload (together called the "contents" of
the envelope) are encrypted.  (The "+" signs are meant to be opaque
to prying eyes.) The same picture can be used to describe not just
IPsec tunnel mode (i.e. IPIP over IPsec transport) but also GRE over
IPsec transport, and even PPPoE over IPsec transport.

Note:  All the following statements apply *after* any necessary
fragmentation has taken place.

The problem is that the size of the envelope (as described by the length
field in the outer header) is conventionally chosen to be /just/ big
enough to hold the contents.  This problem is quite fixable ... we just
need constant-sized envelopes!  The resulting picture is:

---------------------------------------------------
| outer header | inner header | payload | padding |   [2]
---------------------------------------------------

where padding is conceptually different from chaff:  chaff means packets
inserted where there would have been no packet, while padding adjusts the
length of a packet that would have been sent anyway.

The padding is not considered part of the contents.  The decoding is
unambiguous, because the size of the contents is specified by the length
field in the inner header, which is unaffected by the padding.

This is a really, really tiny hack on top of existing protocols.
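
A minimal sketch of the padding step (TU is an example constant; the real
value depends on the link):

    TU = 1500   # constant envelope size (example value)

    def pad_contents(contents):
        # Pad the to-be-encrypted contents (inner header + payload) out to
        # the constant transmission unit.  The receiver ignores the padding
        # because the inner header's length field says where the payload ends.
        if len(contents) > TU:
            raise ValueError("fragment first: contents exceed the TU")
        return contents + b"\x00" * (TU - len(contents))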

If your plaintext consists primarily of small packets, you should set the MTU
of the transporter to be small.   This will cause fragmentation of the
large packets, which is the price you have to pay.  Conversely, if your
plaintext consists primarily of large packets, you should make the MTU large.
This means that a lot of bandwidth will be wasted on padding if/when there
are small packets (e.g. keystrokes, TCP acks, and voice cells) but that's
the price you have to pay to thwart traffic analysis.  (Sometimes you can
have two virtual circuits, one for big packets and one for small packets.
This degrades the max performance in both cases, but raises the minimum
performance in both cases.)
  Remark: FWIW, the MTU (max transmission unit) should just be called
  the TU in this case, because all transmissions have the same size now!

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: [PracticalSecurity] Anonymity - great technology but hardly used

2005-10-31 Thread Alexander Klimov
On Wed, 26 Oct 2005, Jörn Schmidt wrote:

> --- "Travis H." <[EMAIL PROTECTED]> wrote:
>
> [snip]
> > Another issue involves the ease of use when switching between a
> > [slower] anonymous service and a fast non-anonymous service.  I
> > have a tool called metaprox on my website (see URL in sig) that
> > allows you to choose what proxies you use on a domain-by-domain
> > basis.  Something like this is essential if you want to be
> > consistent about accessing certain sites only through an anonymous
> > proxy.  Short of that, perhaps a Firefox plug-in that allows you
> > to select proxies with a single click would be useful.
>
> You can already do the latter with SwitchProxy
> (http://www.roundtwo.com/product/switchproxy). Basically, it's a
> Firefox extension that saves you the trouble of going into the
> 'preferences' dialogue everytime you want to switch from one proxy
> to another (or go from using a proxy to not using one, that is).

In fact, it is possible to set it all up thru privoxy alone:

#  5. FORWARDING
#  =============
#
#  This feature allows routing of HTTP requests through a chain
#  of multiple proxies. It can be used to better protect privacy
#  and confidentiality when accessing specific domains by routing
#  requests to those domains through an anonymous public proxy (see
#  e.g. http://www.multiproxy.org/anon_list.htm) Or to use a caching
#  proxy to speed up browsing. Or chaining to a parent proxy may be
#  necessary because the machine that Privoxy runs on has no direct
#  Internet access.
#
#  Also specified here are SOCKS proxies. Privoxy supports the SOCKS
#  4 and SOCKS 4A protocols.

[...]

#  5.1. forward
#  ============
#
#  Specifies:
#
#  To which parent HTTP proxy specific requests should be routed.
#
#  Type of value:
#
#  target_pattern http_parent[:port]
#
#  where target_pattern is a URL pattern that specifies to which
#  requests (i.e. URLs) this forward rule shall apply. Use /
#  to denote "all URLs".  http_parent[:port] is the DNS name or
#  IP address of the parent HTTP proxy through which the requests
#  should be forwarded, optionally followed by its listening port
#  (default: 8080). Use a single dot (.) to denote "no forwarding".
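
For example, to send just one domain through tor (a hypothetical setup,
assuming tor's SOCKS listener on 127.0.0.1:9050; anything without a matching
forward rule goes direct):

  forward-socks4a   .search.yahoo.com   127.0.0.1:9050 .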

Btw, I guess everybody who installs tor with privoxy has to know about
this since he has to change this section.

The problem is that it is not clear how to protect against `malicious'
sites: if you separate fast and tor-enabled sites by the site's name,
e.g., tor for search.yahoo.com, and no proxy for everything else,
yahoo can trace you thru images served from .yimg.com; OTOH if you
change the proxy `with one click', first of all you can easily forget to
do it, but also a site can create a time-bomb -- a javascript (or just an
http/html refresh) which waits some time in the background (presumably
until you switch tor off) and then makes another request, which allows it
to find out your real IP.

-- 
Regards,
ASK

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: On the orthogonality of anonymity to current market demand

2005-10-31 Thread James A. Donald
--
John Kelsey
> What's with the heat-death nonsense?  Physical bearer
> instruments imply stout locks and vaults and alarm
> systems and armed guards and all the rest, all the way
> down to infrastructure like police forces and armies
> (private or public) to avoid having the biggest gang
> end up owning all the gold.  Electronic bearer
> instruments imply the same kinds of things, and the
> infrastructure for that isn't in place.  It's like
> telling people to store their net worth in their
> homes, in gold. That can work, but you probably can't
> leave the cheapest lock sold at Home Depot on your
> front door and stick the gold coins in the same drawer
> where you used to keep your checkbook.

Some of us get spyware more than others.

Further, genuinely secure systems are now becoming
available, notably Symbian.

While many people are rightly concerned that DRM will
ultimately mean that the big corporation, and thus the
state, has root access to their computers and the owner
does not, it also means that trojans, viruses, and
malware do not. DRM enables secure signing of
transactions, and secure storage of blinded valuable
secrets, since DRM binds the data to the software, and
provides a secure channel to the user.   So secrets
representing ID, and secrets representing value, can
only be manipulated by the software that is supposed to
be manipulating them. 

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 3CepcQ59MYKAZTizEycP1vkZBbexwbyiobaC/bXS
 44hfxMF4PBKXmc5uavnegOFFCMtNwDmpIMxLBcyI3


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]