Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-30 Thread Chris Palmer
Thor Lancelot Simon writes:

> a significant net loss of security, since the huge increase in computation
> required will delay or prevent the deployment of SSL everywhere.

That would only happen if we (as security experts) allowed web developers to
believe that the speed of RSA is the limiting factor for web application
performance.

That would only happen if we did not understand how web applications work.

Thankfully, we do understand how web applications work, and we therefore
advise our colleagues and clients in a way that takes the whole problem
space of web application security/performance/availability into account.

Sure, 2048 is overkill. But our most pressing problems are much bigger and
very different. The biggest security problem, usability, rarely involves any
math beyond rudimentary statistics...


-- 
http://noncombatant.org/



[tt] Random numbers created out of nothing

2010-09-30 Thread Eugen Leitl

Right from the snake-oil-security-dept.

----- Forwarded message from Arlind Boshnjaku <arlindboshnj...@yahoo.com> -----

From: Arlind Boshnjaku <arlindboshnj...@yahoo.com>
Date: Thu, 30 Sep 2010 14:48:44 +0200
To: transhumanist news <t...@postbiota.org>
Subject: [tt] Random numbers created out of nothing

http://www.newscientist.com/article/dn19520-random-numbers-created-out-of-nothing.html

Random numbers created out of nothing

12:36 30 September 2010 by Kate McAlpine

It's something from nothing. A random number generator that harnesses
the quantum fluctuations in empty space could soon sit inside your
computer.

A device that creates truly random numbers is vital for a number of
applications, including cryptography.

Algorithms can generate numbers that pass statistical tests for
randomness, but they're useless for secure cryptography if the
algorithm falls into the wrong hands. Other methods using entangled
ions to generate random numbers are more reliable, but tend to be
slower and more expensive.

Now Christian Gabriel's team at the Max Planck Institute for the
Science of Light in Erlangen, Germany, has built a prototype that
draws on a vacuum's random quantum fluctuations. These impart random
noise to laser beams in the device, which converts it into numbers.

"It's an easy method, and it's good value," says Gabriel.

The team sent a laser into a beam splitter, sheltered from external
light sources. Without influence from the vacuum, the two emerging
beams would have been identical. However, the lowest energy state of
the electromagnetic field carries just enough energy to interact with
the laser as it passes through the beam splitter. "The beams carry
this quantum noise," says Gabriel.

The exiting beams were captured in two detectors which turned the
light into electronic signals, and the signals were subtracted from
one another, leaving only the noise from the vacuum and electronics.
The team used a mathematical function to tease out the truly random
signal of the vacuum. Because they could calculate the total disorder
in the system and the portion which comes from the vacuum, they were
able to reduce the set of numbers so that the electronic contribution
was eliminated and only a fully random string remained.
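
A toy numerical sketch of that post-processing idea, in Python, with an
invented noise model and an assumed extraction ratio rather than the Erlangen
team's actual parameters: simulate two detector outputs that share common
classical noise, subtract them so the common part cancels, digitise the
difference, then condition the raw bits with a hash so that only the estimated
quantum fraction is kept.

import hashlib
import numpy as np

rng = np.random.default_rng()             # stands in for the physics
n_samples = 1_000_000

common = rng.normal(0.0, 1.0, n_samples)  # shared laser/technical noise
vac_a = rng.normal(0.0, 0.3, n_samples)   # independent vacuum noise, arm A
vac_b = rng.normal(0.0, 0.3, n_samples)   # independent vacuum noise, arm B

diff = (common + vac_a) - (common + vac_b)   # detector difference: common part cancels
raw_bits = (diff > 0).astype(np.uint8)       # crude 1-bit digitisation

# Hash-based conditioning: keep fewer output bits than raw input bits,
# reflecting the estimated fraction of the noise that is truly quantum.
extraction_ratio = 0.5                       # assumed, for illustration only
block = 512                                  # raw bits per hash input
out = bytearray()
for i in range(0, n_samples - block + 1, block):
    chunk = np.packbits(raw_bits[i:i + block]).tobytes()
    out += hashlib.sha256(chunk).digest()[:int(block * extraction_ratio) // 8]

print(len(out) * 8, "conditioned bits from", n_samples, "raw samples")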

Though reduced, the stream of bits comes at a speedy 6.5 million per
second. This is already in line with the speed of commercially
available quantum random number generators, say the researchers, but
they hope to achieve rates more than 30 times higher.

Collaborator Christoph Marquardt says the generator's optimised speed
will be "faster than anything you could buy or that is available in
other comparable systems nowadays".

The lab set-up costs about €1000, and the researchers estimate that
the cost could fall to about €100. As the device functions at room
temperature and could be made to fit in the palm of your hand, it may
one day be integrated into a desktop computer.

Antonio Acín of the Institute for Photonic Sciences in Barcelona,
Spain, points out that although the quantum noise of the vacuum is
tamper-proof, most users won't be able to verify the workings of their
random number generators – meaning they'll find it impossible to tell
whether they are receiving a unique random stream from the generator
or a pre-programmed, statistically random set from elsewhere.

Journal source: Nature Photonics, DOI: 10.1038/nphoton.2010.197

----- End forwarded message -----
-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-30 Thread Thor Lancelot Simon
On Wed, Sep 29, 2010 at 09:22:38PM -0700, Chris Palmer wrote:
> Thor Lancelot Simon writes:
>
> > a significant net loss of security, since the huge increase in computation
> > required will delay or prevent the deployment of SSL everywhere.
>
> That would only happen if we (as security experts) allowed web developers to
> believe that the speed of RSA is the limiting factor for web application
> performance.

At 1024 bits, it is not.  But you are looking at a factor of *9* increase
in computational cost when you go immediately to 2048 bits.  At that point,
the bottleneck for many applications shifts, particularly those which are
served by offload engines specifically to move the bottleneck so it's not
RSA in the first place.
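
For scale, the asymptotics line up with that measurement: the RSA private-key
operation costs roughly the cube of the modulus length, and (2048/1024)^3 = 8.
A quick way to check the ratio on a particular machine is a sketch like the
following Python, using the pyca/cryptography package as one arbitrary choice
of library; the absolute numbers are illustrative only.

import time

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def sign_rate(bits, seconds=2.0):
    # Count RSA private-key (signing) operations per second at this key size.
    key = rsa.generate_private_key(public_exponent=65537, key_size=bits)
    count = 0
    start = time.perf_counter()
    while time.perf_counter() - start < seconds:
        key.sign(b"benchmark", padding.PKCS1v15(), hashes.SHA256())
        count += 1
    return count / (time.perf_counter() - start)

r1024 = sign_rate(1024)
r2048 = sign_rate(2048)
print(f"1024-bit: {r1024:.0f} sign/s, 2048-bit: {r2048:.0f} sign/s, "
      f"slowdown factor: {r1024 / r2048:.1f}x")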

Also, consider devices such as deep-inspection firewalls or application
traffic managers which must by their nature offload SSL processing in
order to inspect and possibly modify data before application servers see 
it.  The inspection or modification function often does not parallelize
nearly as well as the web application logic itself, and so it is often
not practical to handle it in a distributed way and just add more CPU.

At present, these devices use the highest performance modular-math ASICs
available and can just about keep up with current web applications'
transaction rates.  Make the modular math an order of magnitude slower
and suddenly you will find you can't put these devices in front of some
applications at all.

This too will hinder the deployment of SSL everywhere, and handwaving
about how for some particular application, the bottleneck won't be at
the front-end server even if it is an order of magnitude slower for it
to do the RSA operation itself will not make that problem go away.

Thor



Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-30 Thread Marsh Ray

On 09/30/2010 10:41 AM, Thor Lancelot Simon wrote:

> On Wed, Sep 29, 2010 at 09:22:38PM -0700, Chris Palmer wrote:
> > Thor Lancelot Simon writes:
> >
> > > a significant net loss of security, since the huge increase in computation
> > > required will delay or prevent the deployment of SSL everywhere.
> >
> > That would only happen if we (as security experts) allowed web developers to
> > believe that the speed of RSA is the limiting factor for web application
> > performance.


+1.

Why are multi-core GHz server-oriented CPUs providing hardware 
acceleration for AES rather than RSA?


There may be reasons: AES side channels, patents, marketing, etc.

But if it really were such a big limitation you'd think it'd be a 
feature to sell server chips by now. Maybe in a sense it already is. 
What else are you going to do on that sixth core you stick behind the 
same shared main memory bus?



> At 1024 bits, it is not.  But you are looking at a factor of *9* increase
> in computational cost when you go immediately to 2048 bits.  At that point,
> the bottleneck for many applications shifts, particularly those which are
> served by offload engines specifically to move the bottleneck so it's not
> RSA in the first place.


I could be wrong, but I get the sense that there's not really a high 
proportion of sites which are:


A. currently running within an order of magnitude of maxing out server 
CPU utilization on 1024 bit RSA, and


B. using session resumption to its fullest (eliminates RSA when it can
be used; see the client-side sketch after this list), and


C. an upgrade to raw CPU power would represent a big problem for their 
budget.
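
To make item B concrete, here is a minimal client-side sketch of session
resumption using Python's standard ssl module; the host is a placeholder and
whether resumption actually happens depends on the server. The second
connection is handed the first connection's session object, so the abbreviated
handshake skips the server's RSA private-key operation.

import socket
import ssl

HOST = "example.com"                            # placeholder host
ctx = ssl.create_default_context()
ctx.maximum_version = ssl.TLSVersion.TLSv1_2    # session objects are simplest pre-TLS 1.3

def connect(session=None):
    # Open a TLS connection, report whether it was resumed, return its session.
    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST, session=session) as tls:
            print("resumed:", tls.session_reused)
            return tls.session

first = connect()               # full handshake: server performs the RSA operation
connect(session=first)          # abbreviated handshake: no server RSA operation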


OTOH, if it increased the latency and/or power consumption for 
battery-powered mobile client devices, that could be noticeable for a lot 
of people.



> Also, consider devices such as deep-inspection firewalls or application
> traffic managers which must by their nature offload SSL processing in
> order to inspect and possibly modify data before application servers see
> it.  The inspection or modification function often does not parallelize
> nearly as well as the web application logic itself, and so it is often
> not practical to handle it in a distributed way and just add more CPU.


The unwrapping of the SSL should parallelize just fine. I think the IT 
term for that is scalability. We should be so lucky that all our 
problems could be solved by throwing more silicon at them!


Well, if there are higher-layer inspection methods (say virus scanning) 
which don't parallelize, well, wouldn't they have the same issue without 
encryption?



> At present, these devices use the highest performance modular-math ASICs
> available and can just about keep up with current web applications'
> transaction rates.  Make the modular math an order of magnitude slower
> and suddenly you will find you can't put these devices in front of some
> applications at all.


Or the vendors get to sell a whole new generation of boxes again.


> This too will hinder the deployment of SSL everywhere,


It doesn't bother me in the least if deployment of dragnet-scale 
interception-friendly SSL is hindered. But you may be right that it has 
some kind of effect on overall adoption.



> and handwaving
> about how for some particular application, the bottleneck won't be at
> the front-end server even if it is an order of magnitude slower for it
> to do the RSA operation itself will not make that problem go away.


Most sites do run some particular application. For them, it's either a 
problem, an annoyance, or not noticeable at all. The question is what 
proportion of situations are going to be noticeably impacted.


I imagine increasing the per-handshake costs from, say, 40 core-ms to 
300 core-ms will have wildly varying effects depending on the system. It 
might not manifest as a linear increase of anything that people care to 
measure.
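
Back-of-envelope only, using those hypothetical figures: 40 core-ms is about
25 full handshakes per second per core, 300 core-ms is about 3, so a site
needs roughly nine times as many cores dedicated to handshakes for the same
new-session rate. In Python, with an assumed site-wide rate of 500 new
handshakes per second:

# 40 ms and 300 ms are the hypothetical per-handshake costs above;
# 500/s is an assumed site-wide rate of new full handshakes.
target_rate = 500.0
for cost_ms in (40.0, 300.0):
    per_core = 1000.0 / cost_ms             # full handshakes per second per core
    print(f"{cost_ms:5.0f} core-ms: {per_core:5.1f} hs/s per core, "
          f"{target_rate / per_core:5.1f} cores for {target_rate:.0f} hs/s")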


I agree, it does sound a bit hand-wavy though. :-)

- Marsh



Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-30 Thread Thor Lancelot Simon
On Thu, Sep 30, 2010 at 01:36:47PM -0400, Paul Wouters wrote:
[I wrote]:
> > Also, consider devices such as deep-inspection firewalls or application
> > traffic managers which must by their nature offload SSL processing in
> > order to inspect and possibly modify data
>
> You mean it will be harder for MITM attacks on SSL. Isn't that a good thing?
> :P

No, I don't mean that, because if the administrator of site _X_ decides
to do SSL processing on a front-end device instead of on the HTTP servers,
for whatever reason, that is simply not a MITM attack.

To characterize it as one is basically obfuscatory.

When I talk about SSL everywhere being an immediate opportunity, I mean
that, from my point of view, it looks like there's a growing realization
that _for current key sizes and server workloads_, for many high transaction
rate sites like Gmail, using SSL is basically free -- so you might as well,
and we all end up better off.

Multiplying the cost of the SSL session negotiation by a small factor will
change that for a few sites, but multiplying it by a factor somewhere from
8 to 11 (depending on different measurements posted here in previous
discussions) will change it for a lot more.

That's very unfortunate, from my point of view, because I believe it is
a much greater net good to have most or almost all HTTP traffic encrypted
than it is for individual websites to have keys that expire in 3 years,
but are resistant to factoring for 20 years.

The balance is just plain different for end keys and CA keys.  A
one-size-fits-all approach using the key length appropriate for the CA
will hinder universal deployment of SSL/TLS at the end sites.  That is
not a good thing.

Thor



Wrong Direction on Privacy - using NSLs to obtain communication transactional information

2010-09-30 Thread =JeffH

another facet of The Administration's "We Hear You" efforts...


Wrong Direction on Privacy
Susan Landau
2-Aug-2010

http://www.huffingtonpost.com/susan-landau/wrong-direction-on-privac_b_666915.html

The White House wants to make it easier for the FBI to get at your email and 
web browsing records; the plan is to make transactional information surrounding 
your Internet communications --- the to/from information and the times and 
dates of those communications --- subject to National Security Letters (NSLs), 
meaning the FBI could get these records without going through a judge.


NSLs were created in 1978 to give FBI investigators an easy way to obtain 
various business records, including the transactional information of phone 
records (not the content, which is subject to more stringent protections). The 
"easy" part of NSLs is that no courts are involved in issuing an NSL; the 
bureau does so itself. FBI guidelines require NSLs to be issued only on a 
written request of an FBI Special Agent in Charge (or other specially delegated 
senior FBI official), and there are four approval steps in the process.


Originally NSLs were to be used against foreign powers and people believed to 
be their agents. But proving someone was an agent of a foreign power was not 
all that easy, and NSLs were rarely used. That situation changed with the 
PATRIOT Act, which allowed NSLs to be used to gather information relevant to 
international terrorism cases. In an Orwellian touch, under the PATRIOT Act the 
bureau could require that the recipient of an NSL keep the order secret. NSL 
numbers shot up; between 2003 and 2006, the FBI issued 192,000 NSLs. Many were to 
phone companies. Why is clear: knowing who the bad guys are communicating with 
leads to untangling plots, often before law enforcement understands exactly 
what the plot might be. Such appears to be what happened, for example, in the 
case of Najibullah Zazi, who recently pled guilty to a plot to bomb the New 
York City subways.


In the initial aftermath of September 11th, telephone company workers were 
sharing offices with the FBI Communications Assistance Unit, and many times 
the required procedures went by the wayside. Instead of NSLs, the FBI began 
using "exigent letters" requesting immediate access to telephone records, 
with claims to the phone companies that the appropriate subpoenas were in 
process. Many times that wasn't true. Sometimes there wasn't even a paper trail 
for the requests; they were just issued verbally. Dates and other specifics 
were often missing from the requests, which meant law enforcement got many more 
months of data than it needed.


Why does this matter? It turns out that communications transactional 
information is remarkably revelatory. When NSLs were created in 1978, phones 
were fixed devices, and the information of who was calling whom provided a 
useful past history of behavior. The information is much richer with mobile 
devices; knowing who is calling whom, or whose cellphone is repeatedly located 
in the same cellphone sector as whose, provides invaluable information --- 
information that is simultaneously remarkably invasive. Transactional data 
reveals who spends time together, what an organization's structure is, what 
business or political deals might be occurring. ... [snip]



---
end
