Re: Proven Primes

2003-03-07 Thread Tim Dierks
At 10:04 AM 3/7/2003 +, Ben Laurie wrote:
Indeed. The commonly used one is ECPP which uses elliptic curves cunningly 
to not only prove primality, but to produce a certificate which can be 
quickly verified.

Probabilistic prime tests are just that - probable. ECPP actually proves it.
Does anyone, in practice, care about the distinction, if the probability 
that the prime test has failed can be proved to be far less than the chance 
that a hardware failure has caused a false positive ECPP test? To restate 
the question: all calculation methods have a certain possibility of 
failure, whether due to human or mechanical error, however minute that 
possibility may be. If I can use a probabilistic primality test to reduce 
the possibility of error due to algorithm failure to a point that it's well 
below the possibility of error due to hardware failure, what's the 
practical difference?
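To make the comparison concrete, here is a minimal sketch of the kind of probabilistic test under discussion (Miller-Rabin, which is not named in the post; the round count of 64 is an illustrative choice, not anything Tim specifies). For an odd composite n, each independent round wrongly reports "probably prime" with probability at most 1/4, so 64 rounds drive the false-positive probability below 2^-128, far under any plausible hardware error rate.

```python
import random

def miller_rabin(n, rounds=64):
    """Probabilistic primality test. Each round has false-positive
    probability <= 1/4 for composite n, so after `rounds` rounds the
    overall error probability is <= 4**-rounds (2**-128 for 64 rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is definitely composite
    return True  # no witness found: prime with overwhelming probability
```

Note that a "composite" answer is a proof (a witness was found); only the "prime" answer is probabilistic, which is exactly the gap an ECPP certificate closes.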

Thanks,
 - Tim


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Wiretap Act Does Not Cover Message 'in Storage' For Short Period (was Re: BNA's Internet Law News (ILN) - 2/27/03)

2003-03-05 Thread Tim Dierks
At 02:30 PM 3/5/2003 -0500, Steven M. Bellovin wrote:
From: Somebody

Technically, since their signal speed is slower than light, even
transmission lines act as storage devices.

Wire tapping is now legal.
The crucial difference, from a law enforcement perspective, is how hard
it is to get the requisite court order.  A stored message order is
relatively easy; a wiretap order is very hard.  Note that this
distinction is primarily statutory, not (as far as I know)
constitutional.
Furthermore, it's apparently not illegal for a non-governmental actor to 
retrieve stored information which they have access to, although it might be 
illegal for them to wiretap a communication even if they had access to the 
physical medium over which it travels.

I disagree with Somebody's claim; I don't think that claim would go 
anywhere in court, since a transmission clearly falls under the category of 
wire communication, and it's clear that transmission lines are the very 
entities the wiretap act has always been intended to protect, so Congress' 
intent is quite clear, regardless of any argument about storage.

 - Tim





Re: Wiretap Act Does Not Cover Message 'in Storage' For Short Period (was Re: BNA's Internet Law News (ILN) - 2/27/03)

2003-03-02 Thread Tim Dierks
At 01:39 PM 2/27/2003 -0500, R. A. Hettinga wrote:
At 9:01 AM -0500 on 2/27/03, BNA Highlights wrote:
 WIRETAP ACT DOES NOT COVER MESSAGE 'IN STORAGE' FOR SHORT
 PERIOD
 BNA's Electronic Commerce & Law Report reports that a
 federal court in Massachusetts has ruled that the federal
 Wiretap Act does not prohibit the improper acquisition of
 electronic communications that were in storage no matter
 how ephemeral that storage may be. The court relied on Konop
 v. Hawaiian Airlines Inc., which held that no Wiretap Act
 violation occurs when an electronic communication is
 accessed while in storage, even if the interception takes
 place during a nanosecond 'juncture' of storage along the
 path of transmission.  Case name is U.S. v. Councilman.
 Article at
 http://pubs.bna.com/ip/BNA/eip.nsf/is/a0a6m6y1k8
 For a free trial to source of this story, visit
 http://web.bna.com/products/ip/eplr.htm
This would seem to imply to me that the wiretap act does not apply to any 
normal telephone conversation which is carried at any point in its transit 
by an electronic switch, including all cell phone calls and nearly all 
wireline calls, since any such switch places the data of the ongoing call 
in storage for a tiny fraction of a second.

 - Tim





Re: Columbia crypto box

2003-02-08 Thread Tim Dierks
At 12:41 AM 2/8/2003 -0500, John S. Denker wrote:

As reported by AP:

| Among the most important [debris] they were seeking was
| a device that allows for the encryption of communication
| between the shuttle and NASA controllers. A NASA spokesman
| in Houston, John Ira Petty, said Friday that NASA feared
| the technology could be used to send bogus signals to the
| shuttle.

Apparently some folks skipped class the day Kerckhoffs'
Principle was covered.


Here are three valid reasons for NSA (who provides communication security 
to NASA) to keep crypto algorithms secret:

 1. If one has a sufficiently good level of analysis in-house that 
additional cryptographic analysis has reached the level of diminishing 
returns, then there's little additional value to be gained from the 
community input resulting from disclosure. In such a situation, even if a 
cipher is secure enough to meet its goals based solely on secrecy of the 
key, the marginal security of keeping the algorithm secret is of value.

 2. Keeping an algorithm secret prevents your opponents from using it. If 
you have better algorithms than your opponents, this is of value.

 3. Keeping an algorithm secret may provide protection to design concepts 
and constraints, which will help you keep secret methods of cryptanalysis 
with which you are familiar, but that your opponents have not yet 
discovered (e.g. differential cryptanalysis).

There may be more valid reasons for treating the device as secret; some 
categories that come to mind include protecting non-cryptographic 
information, such as the capabilities of the communication channel. Also, 
many systems on the shuttle are obsolete by modern standards, and it's 
possible that the communications security is similarly aged.

 - Tim Dierks





PKI cost of ownership studies?

2002-08-26 Thread Tim Dierks

I'm consulting for a company which would like to have a better 
understanding of the market's perception of PKI cost of ownership. 
Specifically, they hear a lot of numbers tossed around regarding people's 
perception of what it costs to set up a PKI, numbers in the $3 million and 
up area, and they'd like to better understand where those numbers come from.

I'm interested in getting a better understanding of where people may come 
up with these numbers and what they're composed of: software costs, 
integration services, building a secure vault, key ceremonies, CPS's, etc.?

Can anyone point me to available studies which might be influential in 
establishing perception of these costs?

Or better, is there anyone out there who's got a really good understanding 
of such market perception and who'd like to bill a few hours to answer some 
questions and brainstorm a little?

Thanks,
  - Tim Dierks
[EMAIL PROTECTED]





Re: Palladium: technical limits and implications

2002-08-12 Thread Tim Dierks

At 07:30 PM 8/12/2002 +0100, Adam Back wrote:
(Tim Dierks: read the earlier posts about ring -1 to find the answer
to your question about feasibility in the case of Palladium; in the
case of TCPA your conclusions are right I think).

The addition of an additional security ring with a secured, protected 
memory space does not, in my opinion, change the fact that such a ring 
cannot accurately determine that a particular request is consistent with 
any definable security policy. I do not think it is technologically 
feasible for ring -1 to determine, upon receiving a request, that the 
request was generated by trusted software operating in accordance with the 
intent of whomever signed it.

Specifically, let's presume that a Palladium-enabled application is being 
used for DRM; a secure & trusted application is asking its secure key 
manager to decrypt a content encryption key so it can access properly 
licensed code. The OS is valid & signed and the application is valid & 
signed. How can ring -1 distinguish a valid request from one which has been 
forged by rogue code which used a bug in the OS or any other trusted entity 
(the application, drivers, etc.)?
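The gap being argued can be sketched as follows (all names, hashes, and structures here are invented for illustration; this is not Palladium's actual interface). Ring -1 can verify that the code images it loaded were signed and unmodified, but a decrypt request carries no proof about the run-time integrity of the caller that issued it.

```python
from dataclasses import dataclass

@dataclass
class DecryptRequest:
    caller_id: str      # identity claimed by the signed application
    wrapped_key: bytes  # content-encryption key, sealed to the platform

# Measurements ring -1 recorded when it loaded each signed component
# (placeholder values).
TRUSTED_MEASUREMENTS = {"media_player": "hash-of-player-image"}

def ring_minus_1_approves(req: DecryptRequest, measurement: str) -> bool:
    # What ring -1 CAN check: the loaded image matched its signed measurement.
    if TRUSTED_MEASUREMENTS.get(req.caller_id) != measurement:
        return False
    # What it CANNOT check: whether this particular request was issued by
    # that code behaving as intended, or by rogue code that escalated
    # privileges through a bug in the (validly signed) OS or a driver.
    # From this vantage point, a forged request is indistinguishable
    # from a legitimate one.
    return True
```

The point of the sketch is that both checks above pass identically for a request forged via privilege escalation, which is the indistinguishability claim in the paragraph above.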

I think it's reasonable to presume that desktop operating systems which are 
under the control of end-users cannot be protected against privilege 
escalation attacks. All it takes is one sound card with a bug in a 
particular version of the driver to allow any attacker to go out and buy 
that card & install that driver and use the combination to execute code or 
access data beyond his privileges.

In the presence of successful privilege escalation attacks, an attacker can 
get access to any information which can be exposed to any privilege level 
he can escalate to. The attacker may not be able to access raw keys & other 
information directly managed by the TOR or the key manager, but those keys 
aren't really interesting anyway: all the interesting content & 
transactions will live in regular applications at lower security levels.

The only way I can see to prevent this is for the OS to never transfer 
control to any software which isn't signed, trusted and intact. The problem 
with this is that it's economically infeasible: it implies the death of 
small developers and open source, and that's a higher price than the market 
is willing to bear.

  - Tim

PS - I'm looking for a job in or near New York City. See my resume at 
http://www.dierks.org/tim/resume.html






Re: trade-offs of secure programming with Palladium (Re: Palladium: technical limits and implications)

2002-08-12 Thread Tim Dierks

At 09:07 PM 8/12/2002 +0100, Adam Back wrote:
At some level there has to be a trade-off between what you put in
trusted agent space and what becomes application code.  If you put the
whole application in trusted agent space, while then all it's
application logic is fully protected, the danger will be that you have
added too much code to reasonably audit, so people will be able to
gain access to that trusted agent via buffer overflow.

I agree; I think the system as you describe it could work and would be 
secure, if correctly executed. However, I think it is infeasible to 
generally implement commercially viable software, especially in the 
consumer market, that will be secure under this model. Either the 
functionality will be too restricted to be accepted by the market, or there 
will be a set of software flaws that allow the system to be penetrated.

The challenge is to put all of the functionality which has access to 
content inside of a secure perimeter, while keeping the perimeter secure 
from any data leakage or privilege escalation. The perimeter must be very 
secure and well-understood from a security standpoint; for example, it 
seems implausible to me that any substantial portion of the Win32 API could 
be used from within the perimeter; thus, all user interface aspects of the 
application must be run through a complete security analysis with the 
presumption that everything outside of the perimeter is compromised and 
cannot be trusted. This includes all APIs & data.

I think we all know how difficult it is, even for security professionals, 
to produce correct systems that enforce any non-trivial set of security 
permissions. This is true even when the items to be protected and the 
software functionality are very simple and straightforward (such as key 
management systems). I think it entirely implausible that software 
developed by multimedia software engineers, managing large quantities of 
data in a multi-operation, multi-vendor environment, will be able to 
deliver a secure environment.

This is even more true when the attacker (the consumer) has control over 
the hardware & software environment. If a security bug is found & patched, 
the end user has no direct incentive to upgrade their installation; in 
fact, the most concerning end users (e.g., pirates) have every incentive to 
seek out and maintain installations with security faults. While a content 
or transaction server could refuse to conduct transactions with a user who 
has not upgraded their software, such a requirement can only increase the 
friction of commerce, a price that vendors & consumers might be quite 
unwilling to pay.

I'm sure that the whole system is secure in theory, but I believe that it 
cannot be securely implemented in practice and that the implied constraints 
on use  usability will be unpalatable to consumers and vendors.

  - Tim

PS - I'm looking for a job in or near New York City. See my resume at 
http://www.dierks.org/tim/resume.html






Secure E-Mail ASP-type systems?

2002-07-17 Thread Tim Dierks

I'm looking to assemble a list of commercially available secure e-mail ASP 
solutions, with a particular focus on those vendors who make their solution 
available for sale for operation by the customer. (Think running your own 
Hushmail servers.)

I'm aware of Certified Mail, but there must be many alternatives out there.

If you know of such an ASP or vendor, please send me an e-mail and I'll 
summarize to the list.

Thanks,
  - Tim Dierks

