Re: 2048-bit RSA keys

2010-08-17 Thread James A. Donald

On 2010-08-17 3:46 PM, Jonathan Katz wrote:

Many on the list may already know this, but I haven't seen it mentioned
on this thread. The following paper (that will be presented at Crypto
tomorrow!) is most relevant to this discussion:
"Factorization of a 768-bit RSA modulus",
http://eprint.iacr.org/2010/006


Which tells us that ordinary hobbyist and academic efforts will not be 
able to factor a 1024-bit RSA modulus by brute force until around 2015 
or so - but dedicated hardware and so forth might be able to do it now.


What is the latest news on cracking ECC by brute force?

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Has there been a change in US banking regulations recently?

2010-08-17 Thread James A. Donald

On 2010-08-15 7:59 AM, Thor Lancelot Simon wrote:

Indeed.  The way forward would seem to be ECC, but show me a load balancer
or even a dedicated SSL offload device which supports ECC.


For sufficiently strong security, ECC beats factoring, but how strong is 
sufficiently strong?  Do you have any data?  At what point is ECC faster 
for the same security?




Re: 2048-bit RSA keys

2010-08-17 Thread Paul Wouters

On Tue, 17 Aug 2010, Steven Bellovin wrote:


They also suggest that a 3-4 year phase-out of 1024-bit moduli is the proper 
course.


Note that this is because they take into consideration that secrets have
to be unbreakable for decade(s), which is not the case for all uses of
RSA. For example in DNSSEC, a key can be rolled in a matter of hours
or days.

Paul



Re: 2048-bit RSA keys

2010-08-17 Thread Samuel Neves
 Forwarded at Andrew's request.

 Original Message 
Subject: Re: 2048-bit RSA keys
Date: Tue, 17 Aug 2010 19:11:55 -0500 (CDT)
From: Andrew Odlyzko 
To: Samuel Neves 
CC: cryptography@metzdowd.com


It is not unreasonable to consider the possibility of
algorithmic improvements.  But that does not guarantee
accuracy.

My 1995 paper "The future of integer factorization,"

http://www.dtc.umn.edu/~odlyzko/doc/future.of.factoring.pdf

published in CryptoBytes (The technical newsletter of RSA
Laboratories), vol. 1, no. 2, 1995, pp. 5-12, considered
the historical record of integer factorizations up to that
point, and took account both of the increasing computing
power and the progress in algorithms (which over the preceding
20 years contributed about as much as the growth in the
number of available cycles).  It then projected when
integers of various sizes might get factored, and even
the most conservative projection had 768-bit integers
getting factored by 2004 or so.

In retrospect, the latest 768-bit factorization shows that
over the preceding 15 years there has been practically no
progress in algorithms, and even the computing power that
is actually used for the experiments has fallen behind
projections.

Andrew

On Tue, 17 Aug 2010, Samuel Neves wrote:

> On 17-08-2010 21:42, Perry E. Metzger wrote:
>> On Tue, 17 Aug 2010 22:32:52 +0200 Simon Josefsson
>>  wrote:
>>> Bill Stewart  writes:
>>>
 Basically, 2048's safe with current hardware
 until we get some radical breakthrough
 like P==NP or useful quantum computers,
 and if we develop hardware radical enough to
 use a significant fraction of the solar output,
 we'll probably find it much easier to eavesdrop
 on the computers we're trying to attack than to
 crack the crypto.
>>> Another breakthrough in integer factoring could be sufficient for an
>>> attack on RSA-2048.  Given the number of increasingly efficient
>>> integer factorization algorithms that have been discovered
>>> throughout history, another breakthrough here seems more natural
>>> than unlikely to me.
>> A breakthrough could also render 10kbit keys broken, or might never
>> happen at all. A breakthrough could make short ECC keys vulnerable.
>> A breakthrough could make AES vulnerable. One can't operate on this
>> basis -- it makes it impossible to use anything other than one-time
>> pads.
>>
>
> A breakthrough is a rather strong term. But it's not unreasonable to
> believe that the number field sieve's complexity could be lowered on the
> near future by an *incremental* improvement --- it would only require
> lowering the complexity from L[1/3, ~1.92] to L[1/3, ~1.2] to make 2048
> bit factorization roughly as easy as 768 bits today.
>
> Coppersmith's variant of the number field sieve proposed a tradeoff that
> dramatically lowered the exponent, if one wanted to break many keys of
> roughly the same size. The idea was to fix m, the 'base' of the number
> field polynomial, and precompute many many pairs (a,b) such that a - bm
> was smooth. With this precomputation, the NFS runs in L[1/3, ~1.639],
> which is dramatically faster (and quite worth it for a large
> organization --- they're bound to want to break multiple keys).
>
> It is not unreasonable to think that a small(ish) improvement to the
> number field sieve could significantly lower the strength of current
> keys. It *looks* more likely to happen than a significant improvement on
> the speed of ECDLP breaking (I'll make no bets on AES, though).
>
> Best regards,
> Samuel Neves
>



Re: 2048-bit RSA keys

2010-08-17 Thread Steven Bellovin

On Aug 17, 2010, at 5:19 PM, Samuel Neves wrote:

> On 17-08-2010 21:42, Perry E. Metzger wrote:
>> On Tue, 17 Aug 2010 22:32:52 +0200 Simon Josefsson
>>  wrote:
>>> Bill Stewart  writes:
>>> 
 Basically, 2048's safe with current hardware
 until we get some radical breakthrough
 like P==NP or useful quantum computers,
 and if we develop hardware radical enough to
 use a significant fraction of the solar output,
 we'll probably find it much easier to eavesdrop
 on the computers we're trying to attack than to
 crack the crypto.
>>> Another breakthrough in integer factoring could be sufficient for an
>>> attack on RSA-2048.  Given the number of increasingly efficient
>>> integer factorization algorithms that have been discovered
>>> throughout history, another breakthrough here seems more natural
>>> than unlikely to me.
>> A breakthrough could also render 10kbit keys broken, or might never
>> happen at all. A breakthrough could make short ECC keys vulnerable.
>> A breakthrough could make AES vulnerable. One can't operate on this
>> basis -- it makes it impossible to use anything other than one-time
>> pads.
>> 
> 
> A breakthrough is a rather strong term. But it's not unreasonable to
> believe that the number field sieve's complexity could be lowered on the
> near future by an *incremental* improvement --- it would only require
> lowering the complexity from L[1/3, ~1.92] to L[1/3, ~1.2] to make 2048
> bit factorization roughly as easy as 768 bits today.

It's worth quoting from the paper at CRYPTO '10 on the factorization of a
768-bit number:

The new NFS record required the following effort. We spent half a year on
80 processors on polynomial selection. This was about 3% of the main task,
the sieving, which took almost two years on many hundreds of machines. On
a single core 2.2 GHz AMD Opteron processor with 2 GB RAM, sieving would
have taken about fifteen hundred years. We did about twice the sieving
strictly necessary, to make the most cumbersome step, the matrix step,
more manageable. Preparing the sieving data for the matrix step took a
couple of weeks on a few processors. The final step after the matrix step
took less than half a day of computing.

They conclude with

at this point factoring a 1024-bit RSA modulus looks more than five times
easier than a 768-bit RSA modulus looked back in 1999, when we achieved
the first public factorization of a 512-bit RSA modulus. Nevertheless, a
1024-bit RSA modulus is still about a thousand times harder to factor
than a 768-bit one. It may be possible to factor a 1024-bit RSA modulus
within the next decade by means of an academic effort on the same scale
as the effort presented here. Recent standards recommend phasing out such
moduli by the end of the year 2010 [28]. See also [21]. Another
conclusion from our work is that we can confidently say that if we
restrict ourselves to an open community, academic effort such as ours and
unless something dramatic happens in factoring, we will not be able to
factor a 1024-bit RSA modulus within the next five years [27]. After
that, all bets are off.

They also suggest that a 3-4 year phase-out of 1024-bit moduli is the proper 
course.


Re: About that "Mighty Fortress"... What's it look like?

2010-08-17 Thread David G. Koontz
On 18/08/10 3:46 AM, Peter Gutmann wrote:
> Alexander Klimov  writes:
> 
>> Each real-time check reveals your interest in the check. What about privacy
>> implications?
>
> (Have you ever seen a PKI or similar key-using design where anyone involved in
> speccing or deploying it genuinely cares about privacy implications?  Not only
> have I never seen one, I've even been to a talk at a conference where someone
> was criticised for wasting time on privacy concerns).


(You may have opened your question too wide).

Privacy against whom?  There were enough details revealed about the key
escrow LEAF in Clipper to see that the operation derived from over the air
transfer of keys in Type I applications.  The purpose was to keep a back
door private for use of the government.  The escrow mechanism was an
involution of PKI.

There were of course concerns as evinced in the hearing under the 105th
Congress on 'Privacy in the Digital Age: Encryption and Mandatory Access
Hearings', before the Subcommittee on the Constitution, Federalism, and
Property Rights, of the Committee on The Judiciary, United States Senate in
March 1998.  These concerns were on the rights of privacy for users.

Clipper failed primarily because there wasn't enough trust that access to
keys would be confined to escrow agents authorized by the Judiciary.  The
Federal government lost credibility through orchestrated actions by those
with conscience concerned over personal privacy and potential government
abuse.

Privacy suffers from a lack of legislation and is only taken seriously
when the threat is pervasive and the voters are up in arms.



Re: 2048-bit RSA keys

2010-08-17 Thread Samuel Neves
 On 17-08-2010 21:42, Perry E. Metzger wrote:
> On Tue, 17 Aug 2010 22:32:52 +0200 Simon Josefsson
>  wrote:
>> Bill Stewart  writes:
>>
>>> Basically, 2048's safe with current hardware
>>> until we get some radical breakthrough
>>> like P==NP or useful quantum computers,
>>> and if we develop hardware radical enough to
>>> use a significant fraction of the solar output,
>>> we'll probably find it much easier to eavesdrop
>>> on the computers we're trying to attack than to
>>> crack the crypto.
>> Another breakthrough in integer factoring could be sufficient for an
>> attack on RSA-2048.  Given the number of increasingly efficient
>> integer factorization algorithms that have been discovered
>> throughout history, another breakthrough here seems more natural
>> than unlikely to me.
> A breakthrough could also render 10kbit keys broken, or might never
> happen at all. A breakthrough could make short ECC keys vulnerable.
> A breakthrough could make AES vulnerable. One can't operate on this
> basis -- it makes it impossible to use anything other than one-time
> pads.
>

A breakthrough is a rather strong term. But it's not unreasonable to
believe that the number field sieve's complexity could be lowered in the
near future by an *incremental* improvement --- it would only require
lowering the complexity from L[1/3, ~1.92] to L[1/3, ~1.2] to make
2048-bit factorization roughly as easy as 768 bits is today.
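(A quick numeric sanity check on that claim -- a back-of-the-envelope
sketch only, since L-notation hides lower-order factors; the helper name
here is mine:)

```python
import math

def L(n_bits, c, alpha=1/3):
    """Heuristic L-notation cost: exp(c * (ln n)^alpha * (ln ln n)^(1 - alpha))."""
    ln_n = n_bits * math.log(2)   # ln(2^n_bits)
    return math.exp(c * ln_n ** alpha * math.log(ln_n) ** (1 - alpha))

# Cost of factoring a 768-bit modulus with the standard NFS constant ~1.92:
cost_768 = L(768, 1.923)

# Solve for the constant that would make a 2048-bit modulus equally hard:
ln_n = 2048 * math.log(2)
c_needed = math.log(cost_768) / (ln_n ** (1/3) * math.log(ln_n) ** (2/3))
print(round(c_needed, 2))         # ~1.26, in the ballpark of the ~1.2 quoted above
```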

Coppersmith's variant of the number field sieve proposed a tradeoff that
dramatically lowered the exponent, if one wanted to break many keys of
roughly the same size. The idea was to fix m, the 'base' of the number
field polynomial, and precompute many many pairs (a,b) such that a - bm
was smooth. With this precomputation, the NFS runs in L[1/3, ~1.639],
which is dramatically faster (and quite worth it for a large
organization --- they're bound to want to break multiple keys).

It is not unreasonable to think that a small(ish) improvement to the
number field sieve could significantly lower the strength of current
keys. It *looks* more likely to happen than a significant improvement on
the speed of ECDLP breaking (I'll make no bets on AES, though).

Best regards,
Samuel Neves



Re: Has there been a change in US banking regulations recently?

2010-08-17 Thread Steven Bellovin

On Aug 16, 2010, at 9:19 PM, John Gilmore wrote:

>> who's your enemy?  The NSA?  The SVR?  Or garden-variety cybercrooks?
> 
> "Enemy"?  We don't have to be the enemy for someone to crack our
> security.  We merely have to be in the way of something they want;
> or to be a convenient tool or foil in executing a strategy.
> 

John, as you yourself have said, "cryptography is a matter of economics".  
Other than a few academics, people don't factor large numbers for fun; rather, 
they want the plaintext or the ability to forge signatures.  Is factoring the 
best way to do that?  Your own numbers suggest that it is not.  You wrote 
"After they've built 50, which perhaps only take six months to crack a key, 
will YOUR key be one of the 100 keys that they crack this year?"  100 keys, 
perhaps multiplied by 10 for the number of countries that will share the 
effort, means 1000 keys/year.  How many *banks* have SSL keys?  If you want to 
attack one of those banks, which is *cheaper*, getting time on a rare factoring 
machine, or finding some other way in, such as hacking an endpoint?  For that 
matter, don't forget Morris' "three Bs: burglary, bribery, and blackmail".  
(Aside: I was once discussing TWIRL with someone who has ties to the classified 
community.  When I quoted solution speeds of the sort we're discussing, he 
chortled, saying that the political fight over whose solutions were more 
valuable would paralyze things.)

If the threat is factoring, there are cheaper defenses than going to 2048-bit 
keys.  For example, everyone under a given CA can issue themselves 
subcertificates.  For communication keys, use D-H; it's a separate solution 
effort for each session.  (Yes, it's cheaper if the modulus is held constant.)  
Cracking the signing key won't do any good, because of perfect forward secrecy.

You don't need long keys when they're used solely for short-lived 
authentication -- DNSSEC comes to mind.

Now -- all that said, I agree that 2048-bit keys are a safer choice.  However, 
defenders have to consider economics, too, and depending on what they're 
protecting it may not be a smart choice.


Re: A mighty fortress is our PKI, Part II

2010-08-17 Thread Jerry Leichter

On Aug 17, 2010, at 4:20 AM, Peter Gutmann wrote:
Your code-signing system should create a tamper-resistant audit trail [0]
of every signature applied and what it's applied to.

Peter.

[0] By this I don't mean the usual cryptographic Rube-Goldbergery, just log
    the details to a separate server with a two-phase commit protocol to
    minimise the chances of creation of phantom non-logged signatures.

...thus once again demonstrating how much of good cryptographic practice
is just good engineering/release management practice.


A number of years ago, in addition to being in charge of much of the
software development, I had the system management organization of the
small software maker I worked at reporting to me.  I formalized a
process that the (already well run) organization already had in place.
Any time *any* build of the software "left the building", even if just
for a demo, we marked that build as "locked".  We would never delete a
"locked" build without a careful determination that it was, in fact,
"dead":  no longer in use at any customer.  We also, within 24 hours,
did a special backup of the build onto a tape that went into permanent
off-site storage.

The one time I know of that we didn't follow these procedures (before
they were officially put into place), a very large customer, at their
insistence and after the sales guy who dealt with them swore they agreed
to delete the copy we gave them, got a snapshot of a build from a
developer's workstation "just to see how the new version would look".
Without telling us, the customer proceeded to roll this out at hundreds
of sites, resulting in years of support grief, since it was impossible
for us to determine exactly what went into the code they were running.

We were later acquired by a much larger company that claimed they would
"teach us how to do big-league software engineering".  Hah.  That
company was shipping software built on developer workstations as a
day-to-day practice - they were just beginning to develop mechanisms to
ensure that the stuff they shipped came through traceable, reproducible
builds.  Oh ... but their stuff was in Java, so was signed.  The signing
was tightly controlled at a central location.  Cue the classic joke
about using an armored car to deliver an envelope to someone living in a
cardboard box.

-- Jerry



Re: 2048-bit RSA keys

2010-08-17 Thread Perry E. Metzger
On Tue, 17 Aug 2010 22:32:52 +0200 Simon Josefsson
 wrote:
> Bill Stewart  writes:
> 
> > Basically, 2048's safe with current hardware
> > until we get some radical breakthrough
> > like P==NP or useful quantum computers,
> > and if we develop hardware radical enough to
> > use a significant fraction of the solar output,
> > we'll probably find it much easier to eavesdrop
> > on the computers we're trying to attack than to
> > crack the crypto.
> 
> Another breakthrough in integer factoring could be sufficient for an
> attack on RSA-2048.  Given the number of increasingly efficient
> integer factorization algorithms that have been discovered
> throughout history, another breakthrough here seems more natural
> than unlikely to me.

A breakthrough could also render 10kbit keys broken, or might never
happen at all. A breakthrough could make short ECC keys vulnerable.
A breakthrough could make AES vulnerable. One can't operate on this
basis -- it makes it impossible to use anything other than one-time
pads.

-- 
Perry E. Metzger    pe...@piermont.com



Re: 2048-bit RSA keys

2010-08-17 Thread Simon Josefsson
Bill Stewart  writes:

> Basically, 2048's safe with current hardware
> until we get some radical breakthrough
> like P==NP or useful quantum computers,
> and if we develop hardware radical enough to
> use a significant fraction of the solar output,
> we'll probably find it much easier to eavesdrop
> on the computers we're trying to attack than to
> crack the crypto.

Another breakthrough in integer factoring could be sufficient for an
attack on RSA-2048.  Given the number of increasingly efficient integer
factorization algorithms that have been discovered throughout history,
another breakthrough here seems more natural than unlikely to me.

/Simon



Re: 2048-bit RSA keys

2010-08-17 Thread mhey...@gmail.com
On Tue, Aug 17, 2010 at 1:46 AM, Joseph Ashwood  wrote:
>
> The storage required for 2048 is approximately 2^64 bytes...
>
And from the density (1TB per cubic inch) in US Patent Application
20090094406, that gives about 70,000 gallons of memory, or about 14 of
my father-in-law's average-sized backyard swimming pools.
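(The conversion checks out, under a couple of assumptions not stated
above: 1 TB = 2^40 bytes, 231 cubic inches per US gallon, and a roughly
5,000-gallon backyard pool:)

```python
cubic_inches = 2**64 / 2**40        # bytes required / bytes per cubic inch (1 TB/in^3)
gallons = cubic_inches / 231        # 231 cubic inches per US gallon
pools = gallons / 5000              # assumed ~5,000-gallon backyard pool
print(round(gallons), round(pools, 1))   # ~72629 gallons, ~14.5 pools
```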

Dive in, the bits are the perfect temperature.

-Michael Heyman



Re: Haystack

2010-08-17 Thread Steve Weis
I sent an email asking for technical information several months ago
and did not receive a response. The FAQ says "the Haystack client
connects to our servers which in turn talk to websites on behalf of
our users" and "from a user's point of view, Haystack appears to be a
normal HTTP proxy". There is no binary or source available for
download and the FAQ says "revealing the source code at this time
would only aide the authorities in blocking Haystack".

Based on those statements, I'm going to speculate that the client
connects to a static list of innocuous-looking proxies and that they
are relying on keeping those proxies secret. If those servers were
known to an authority, it would be trivial to block. I think that is
why they're making the unrealistic assumption that an authority will
not be able to reverse engineer or even monitor traffic from a client.

On Tue, Aug 17, 2010 at 12:57 AM, Jerry Leichter  wrote:
> The mainstream press is full of discussion for a new program, Haystack,
> developed by a guy named Austin Heap and sponsored by the Censorship Research
> Center as a new kind of secure proxy.  See
> http://www.haystacknetwork.com/faq/ for some information.
>
> As described, the program relies on some kind of steganography to hide
> encrypted connections inside of connections to "approved" sites.  It was
> specifically designed to help Iranian dissidents maintain connections in the
> face of active government efforts to locate and block proxies and Tor entry
> and exit nodes.
>
> A Google search reveals absolutely no technical information about exactly
> what Haystack does or how it does it.  The program is available on multiple
> platforms but is closed source - the FAQ linked to above discusses this,
> citing fears that making the source available would help censors.
>
> Anyone know anything more about what Haystack is actually doing?



Re: About that "Mighty Fortress"... What's it look like?

2010-08-17 Thread Perry E. Metzger
On Tue, 17 Aug 2010 15:04:00 +0300 Alexander Klimov
 wrote:
> On Sat, 31 Jul 2010, Perry E. Metzger wrote:
> > There is no rational reason at all that someone should "endorse" a
> > key when it is possible to simply do a real time check for
> > authorization. There is no reason to sign a key when you can just
> > check if the key is in a database.
> 
> Each real-time check reveals your interest in the check. What about
> privacy implications?

Well, OCSP and such already do online checks in real time, so there is
no difference there between my view of the world and what people claim
should be done for certificates.

The more interesting question is whether the crypto protocols people
can come up with ways of doing online checks for information about
keys that don't reveal information about what is being asked for. That
would help in both the certificate and non-certificate versions of
such checks.
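One simple blinding approach (my own illustrative sketch, not anything
proposed in this thread; all names here are hypothetical): have the
client reveal only a short hash prefix, so the server learns which
bucket of keys is being asked about rather than the exact key.

```python
import hashlib

# Server-side revocation set (hypothetical data).
REVOKED = {b"key-alice", b"key-mallory"}
revoked_hashes = {hashlib.sha256(k).hexdigest() for k in REVOKED}

def server_lookup(prefix):
    """Return every revoked-key hash sharing the 2-hex-digit prefix."""
    return [h for h in revoked_hashes if h.startswith(prefix)]

def client_check(key):
    h = hashlib.sha256(key).hexdigest()
    bucket = server_lookup(h[:2])   # query reveals only ~8 bits, not the key
    return h in bucket              # exact match is decided client-side
```

The server sees only a prefix shared by many possible keys, trading some
bandwidth for query privacy.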

Perry
-- 
Perry E. Metzger    pe...@piermont.com



Re: About that "Mighty Fortress"... What's it look like?

2010-08-17 Thread Peter Gutmann
Alexander Klimov  writes:

>Each real-time check reveals your interest in the check. What about privacy
>implications?

What about them?

(Have you ever seen a PKI or similar key-using design where anyone involved in
speccing or deploying it genuinely cares about privacy implications?  Not only
have I never seen one, I've even been to a talk at a conference where someone
was criticised for wasting time on privacy concerns).

In any case if it really is a concern, there are any number of ways of
blinding or masking what's going on.

Peter.



Re: About that "Mighty Fortress"... What's it look like?

2010-08-17 Thread Alexander Klimov
On Sat, 31 Jul 2010, Perry E. Metzger wrote:
> You are still following the same model that has failed over and over
> and over again. "Endorsing" keys is the same "we have no internet,
> so we rely on having big books to tell us whether a person's credit
> card was stolen" model.
>
> There is no rational reason at all that someone should "endorse" a
> key when it is possible to simply do a real time check for
> authorization. There is no reason to sign a key when you can just
> check if the key is in a database.

Each real-time check reveals your interest in the check. What about
privacy implications?

-- 
Regards,
ASK



Re: A mighty fortress is our PKI, Part II

2010-08-17 Thread Peter Gutmann
A quick followup note on this, I was reading Microsoft's code-signing best
practices document and one comment caught my eye:

  If code is signed automatically as part of a build process, it is highly
  recommended that any code that is submitted to that build process be
  strongly authenticated.

Given that Realtek produce huge numbers of products and therefore even larger
numbers of drivers for different environments (and JMicron probably not much
less so), it's likely that they use a highly automated driver-signing process
to deal with this.  Even if they use "strong authentication" as per the
guidelines, for example making the network share on the company-wide signing
server non-public (i.e. requiring Windows domain authentication), this means
that anyone who compromises any PC on the network can now use the code-signing
server as an oracle.  So there may have been no need to steal the key: just
compromise one developer PC and you can get your malware signed.  So perhaps a
corollary requirement might be:

  Your code-signing system should create a tamper-resistant audit trail [0] of
  every signature applied and what it's applied to.

Peter.

[0] By this I don't mean the usual cryptographic Rube-Goldbergery, just log
the details to a separate server with a two-phase commit protocol to 
minimise the chances of creation of phantom non-logged signatures.
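A minimal sketch of what such a log might look like (my illustration,
using hash chaining rather than the two-phase commit described above;
all names are hypothetical): each entry commits to the previous entry's
hash, so deleted or inserted records are detectable, and the entry is
recorded before the signature is released.

```python
import hashlib, json, time

class AuditLog:
    """Append-only log; each entry commits to the previous entry's hash,
    so removing or inserting a record breaks the chain."""
    def __init__(self):
        self.entries = []
        self.head = "0" * 64                      # genesis hash

    def record(self, artifact_digest, signer):
        entry = {"prev": self.head,
                 "artifact": artifact_digest,
                 "signer": signer,
                 "time": time.time()}
        self.head = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return self.head        # store/replicate this before signing

    def verify(self):
        h = "0" * 64
        for e in self.entries:
            if e["prev"] != h:
                return False
            h = hashlib.sha256(
                json.dumps(e, sort_keys=True).encode()).hexdigest()
        return h == self.head

log = AuditLog()
log.record("sha256-of-driver.sys", "build-server-1")
log.record("sha256-of-tool.exe", "build-server-1")
print(log.verify())             # True
log.entries.pop(0)              # a deleted record is detected
print(log.verify())             # False
```

Replicating the head hash to a second server gives much of the phantom-
signature protection without the full two-phase commit machinery.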



Haystack

2010-08-17 Thread Jerry Leichter
The mainstream press is full of discussion of a new program, Haystack,
developed by a guy named Austin Heap and sponsored by the Censorship
Research Center as a new kind of secure proxy.  See
http://www.haystacknetwork.com/faq/ for some information.


As described, the program relies on some kind of steganography to hide
encrypted connections inside of connections to "approved" sites.  It was
specifically designed to help Iranian dissidents maintain connections in
the face of active government efforts to locate and block proxies and
Tor entry and exit nodes.

A Google search reveals absolutely no technical information about
exactly what Haystack does or how it does it.  The program is available
on multiple platforms but is closed source - the FAQ linked to above
discusses this, citing fears that making the source available would help
censors.

Anyone know anything more about what Haystack is actually doing?

-- Jerry



Re: 2048-bit RSA keys

2010-08-17 Thread Joseph Ashwood
FAIR DISCLOSURE: I am the inventor of some of the technology quoted, 
specifically US Patent Application 20090094406. And just to plug myself even 
more, yes, the technology is for sale.


--
From: "Bill Stewart" 
Subject: Re: 2048-bit RSA keys


At 01:54 PM 8/16/2010, Perry E. Metzger wrote:

On Mon, 16 Aug 2010 12:42:41 -0700 Paul Hoffman
 wrote:
> At 11:35 AM +1000 8/16/10, Arash Partow wrote:
> >Just out of curiosity, assuming the optimal use of today's best of
> >breed factoring algorithms - will there be enough energy in our
> >solar system to factorize a 2048-bit RSA integer?
>
> We have no idea. The methods used to factor numbers continue to
> slowly get better,[...]

He asked about "today's best of breed algorithms", not future ones. In
that context, and assuming today's most energy efficient processors
rather than theoretical future processors, the question has a concrete
answer.


With today's best-of-breed algorithms and hardware designs,
there isn't enough money in the economy to build a machine
that comes close to making a scratch in the surface of
that kind of energy consumption, whether for factoring or
for simple destruction.


I'm not so convinced. Since we're discussing cost, it makes sense to look at 
the cost structure from http://www.rsa.com/rsalabs/node.asp?id=2088.


The storage required for 2048 is approximately 2^64 bytes; this is usually 
cited as the limitation. Considering technologies like US Patent Application 
20090094406 (mass quantities of Flash at better than DRAM speed), this is 
actually an achievable capacity with more speed than any current CPU can 
handle (2^64 storage could operate at up to millions of TB/sec). The cost is 
very significant: from http://www.dramexchange.com/#flash, the best price per 
capacity is 32Gbit Flash. This is 2^32 bytes, so 2^32 such chips are 
required, at a session average of $6.99 each; this is "only" 2^32 * $6.99, 
about $30 billion. Adding in the cost for the glue logic needed to build the 
20090094406 adds less than 10% to the cost, so it's still under $35 billion. 
It's worth noting that since we're talking about disk access protocols, the 
systems in place already handle addresses longer than 64 bits, so there are 
no redesign costs on the processors from this. So the cost resulting from 
the storage requirement for 2048-bit factoring is only about $35 billion.
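Spelling out that arithmetic (using only the figures quoted in the
paragraph above):

```python
bytes_needed = 2**64                  # storage estimate for 2048-bit factoring
bytes_per_chip = 2**32                # one 32 Gbit flash chip = 4 GiB
chips = bytes_needed // bytes_per_chip
flash_cost = chips * 6.99             # quoted session-average price per chip
total_cost = flash_cost * 1.10        # glue logic adds < 10%
print(f"{chips} chips, ${flash_cost/1e9:.1f}B flash, under ${total_cost/1e9:.0f}B total")
```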


If, as the page suggests, storage is still truly the dominant cost 
factor, 2048 is bordering on being within reach for high-value targets. 
Fortunately, this does not appear to be the case: storage has jumped ahead 
of computation.


The computation cost is not as clear to me; I didn't invent the technologies, 
so I'm not as intimately familiar. Computation costs are given by "A 
Cost-Based Security Analysis of Symmetric and Asymmetric Key Lengths" as 9 x 
10^15 times more complex than a 512-bit factoring, but that paper does not 
immediately appear to offer good cost estimates. A few quick searches found 
that RSA-155 took about 8400 MIPS-years. Wikipedia gives a number of 147,600 
MIPS for an Intel Core i7. Intel gives prices at $560 per CPU 
(http://www.intel.com/buy/desktop/boxed-processor/embedded.htm?sSKU=BX80601940). 
Assuming a full year is an acceptable time frame, the 2048 factoring would 
require 5.1*10^14 processors, costing, well, bluntly, a crapload, or about 
$285,600,000,000,000,000.


I'm sure in such volume the price for the CPUs could be brought down 
significantly, and other CPUs may be more cost efficient.


Considering that Google gives a number of $14.59 trillion for US GDP, the 
purchase would require nearly 20,000 years of US GDP.


So unless someone can bring the computation cost down significantly (very 
possible, since I used a very brute-force method), it seems unlikely that 
2048-bit numbers can be factored any time soon.


The most important part, though, is that the cost structure has changed 
significantly: a few years ago the dominant cost was storage, and that is 
no longer the case.
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: 2048-bit RSA keys

2010-08-17 Thread Jonathan Katz

On Sun, 15 Aug 2010, Paul Hoffman wrote:


At 9:34 AM -0700 8/15/10, Ray Dillinger wrote:

I'm under the impression that <2048 keys are now insecure mostly due
to advances in factoring algorithms that make the attack and the
encryption effort closer to, but by no means identical to, scaling
with the same function of key length.


You are under the wrong impression, unless you are reading vastly different 
crypto literature than the rest of us are. RSA-1024 *might* be possible to 
break in public at some point in the next decade, and RSA-2048 is a few orders 
of magnitude harder than that.


Many on the list may already know this, but I haven't seen it mentioned on 
this thread. The following paper (that will be presented at Crypto 
tomorrow!) is most relevant to this discussion:

  "Factorization of a 768-bit RSA modulus",
  http://eprint.iacr.org/2010/006

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Has there been a change in US banking regulations recently?

2010-08-17 Thread John Gilmore
> who's your enemy?  The NSA?  The SVR?  Or garden-variety cybercrooks?

"Enemy"?  We don't have to be the enemy for someone to crack our
security.  We merely have to be in the way of something they want;
or to be a convenient tool or foil in executing a strategy.

Given the prevalence of Chinese crypto researchers at the open crypto
conferences, I suspect that China is as much of a threat as the US's
National Security Agency, Russia's Sluzhba Vneshney Razvedki, India's
Research and Analysis Wing, Japan's Jōhōhonbu, Israel's
Mossad, or Brazil's Agência Brasileira de Inteligência.  A small
country with a good economy -- there are dozens more -- could also be
such a threat, if they focused on this area.  The big ones can crack
RSA keys AND do all the other things big countries do.

Many people on this list provide significant civilian or military
infrastructures depended on by millions.  When we know at least ten
nations are grasping at having the power to take down arbitrary
civilian infrastructures via cyberspace, we had better assume that
somebody among them can spend tens of millions of dollars *per year*
on key cracking.  And how much work is it, really, for us to use
longer keys?

Not all of us are in the US.  Those of us in the US perhaps have come
to a complacency about being a superpower - we haven't fought a war on
our own land, in which significant numbers of our own civilians died,
in what, a century?  The US government's idiotic response to 9/11 has
made more enemies around the world every year, while simultaneously
destroying the value of our currency.  The best time for a foreign
"enemy" to stop funding our $0.X trillion dollar a year debt would be
right after taking down much of our civilian infrastructure.  And
perhaps it might be hard for Washington to raise a billion dollars a
day in international bond sales, even from friendly countries, when
the international financial networks had been subtly or completely
compromised?  Hell, half the people in this country would starve two
days after their ATM cards stopped working.  The whole point of the
trillion dollar Bush and Obama bailouts (which were done by moving a
few bits in a federal funds transfer network somewhere) was to avoid
the specter of long lines around the block at bank branches, full of
angry people failing to turn bits in bank accounting databases into
paper or gold money.  Such a specter would be easy for a cracker to
create -- and then how much confidence will people have in either the
currency or the government?

What keys secure that funds transfer network?  Suppose an attacker
merely multiplied a random 10% of the transfers by 1000?  Somebody
wires you a thousand dollars, you have a 10% chance of it becoming a
million.  Wire a million, it might come through as a billion.  Then
you look at strategy: should they pay themselves back immediately for
the cost of cracking the keys, then be quiet?  Or should they just
make everyone a billionaire and make the entire currency worthless?
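The scenario above can be toy-modeled in a few lines. The 10% rate and the x1000 factor come from the paragraph; the transfer amounts, the count, and the random seed are purely illustrative:

```python
# Toy model of the attack sketched above: an attacker who can forge
# signatures multiplies a random 10% of wire transfers by 1000.
import random

random.seed(1)  # illustrative; any seed works
transfers = [1_000.00] * 100   # a hundred $1,000 wires

corrupted = [amt * 1000 if random.random() < 0.10 else amt
             for amt in transfers]

inflated = sum(1 for amt in corrupted if amt > 1_000.00)
print(f"{inflated} of {len(transfers)} transfers became $1,000,000")
```

On average about ten of the hundred wires come through a thousand times too large, which is the point of the scenario: even a small corruption rate destroys confidence in every transfer.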

Did you think Adi Shamir's work on TWINKLE and TWIRL was theoretical?
Israeli leadership is paranoid enough to regularly shoot their friends
as well as their enemies, and usually in advance, on the theory of
weakening them *before* they turn against Israel.  And Israel would
have a lot more geopolitical power in a world without superpowers.

Did you think nobody else was designing or building such things?
Thank Adi for publishing - but what he published might not have been
his very best design.  Why did this community wait until a DES
cracker cost only $250,000 to build before thinking, duuh, maybe we
should defend our infrastructure against DES crackers?  How many
countries had secret DES crackers before I built one publicly?
To this day, no country has admitted having one -- yet I have been
privately told that government experts were aware that the cost of
building one was in the $250K range.  Do you think they learned that
merely by twirling a pencil at their desk, in agencies with budgets
way over $100 million a year?

(A private industry expert also told me that they'd been hoping the
first public DES cracker would happen at least a year later than it
did, to give them more time to secure their networks, e.g. before
their bosses found out how vulnerable the previous design was.)

In 2003, Shamir's estimate was that TWIRL could factor a 1024-bit
number in a year at a cost of about $10M US dollars.  More recent
estimates are here:

  http://people.csail.mit.edu/tromer/cryptodev/

Either that page hasn't been updated since 2006-7 or there's been no
published research since then.  I encourage others to post more
surveys of the cost of cracking RSA keys using dedicated hardware.

A typical academic analysis, such as 1996's "Minimal Key Lengths
for Symmetric Ciphers to Provide Adequate Security" said things like:

  Because ASICs require a far greater engineering investment than
  FPGAs and must be fabricated in quantity before they are economic