What's the state of the art in digital signatures? Re: What's the state of the art in factorization?

2010-07-09 Thread Zooko O'Whielacronx
By the way, the general idea of One Hundred Year Security as far as
digital signatures go would be to combine digital signature
algorithms. Take one algorithm which is bog standard, such as ECDSA
over NIST secp256r1, and another which has strong security properties
and is very different from ECDSA. Signing means generating a
signature over the message with each algorithm in parallel; the
combined signature consists of both component signatures. Verifying
means checking both signatures and rejecting if either one is wrong.
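
To make the combiner concrete, here is a minimal sketch in Python,
assuming two component schemes wrapped in a common sign/verify
interface (the Scheme wrapper and names are illustrative, not from any
particular library):

    # Hypothetical AND-combiner for two signature schemes: sign with both,
    # verify both, reject if either fails.
    from dataclasses import dataclass
    from typing import Callable, Tuple

    @dataclass
    class Scheme:
        sign: Callable[[bytes], bytes]             # message -> signature
        verify: Callable[[bytes, bytes], bool]     # (message, signature) -> valid?

    def combined_sign(s1: Scheme, s2: Scheme, msg: bytes) -> Tuple[bytes, bytes]:
        # Run both algorithms in parallel over the same message.
        return (s1.sign(msg), s2.sign(msg))

    def combined_verify(s1: Scheme, s2: Scheme, msg: bytes,
                        sig: Tuple[bytes, bytes]) -> bool:
        # Accept only if *both* component signatures check out.
        return s1.verify(msg, sig[0]) and s2.verify(msg, sig[1])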

Since the digital signature algorithms that we've been discussing,
such as [1], are related to discrete log/Diffie-Hellman, and since an
efficient implementation would probably use elliptic curves, they are
not great candidates to pair with ECDSA in this combiner scheme.

Unfortunately I haven't stumbled on a digital signature scheme which
has good properties (efficiency, simplicity, ease of implementation),
is based on substantially different ideas, and isn't currently under
patent protection (which excludes NTRUSign).

Any ideas?

[1] http://eprint.iacr.org/2007/019

Regards,

Zooko



Re: What's the state of the art in factorization?

2010-07-09 Thread Jonathan Katz

On Thu, 22 Apr 2010, Zooko O'Whielacronx wrote:


There is some interesting work in public key cryptosystems that reduce
to a *random* instance of a specific problem.

Here is a very cool one:

http://eprint.iacr.org/2009/576


...


Unless I misunderstand, if you read someone's plaintext without having
the private key then you have proven that P=NP!


It is not known whether P \neq NP implies the existence of symmetric-key 
cryptography, much less public-key cryptography, and it is considered 
extremely unlikely that such an implication could be proven.


The paper you cite reduces security to a hard-on-average problem, whereas 
all that P \neq NP guarantees is hardness in the worst case.




random but not by chance

2010-07-09 Thread dan


http://jqi.umd.edu/news/215-random-numbers-but-not-by-chance.html

   Random Numbers -- But Not By Chance
   April 2010

   Researchers have devised a new kind of random number
   generator, for encrypted communications and other uses,
   that is cryptographically secure, inherently private
   and - most importantly - certified random by laws of
   physics.

article cut there as there are both a diagram and a video

--dan



Best practices for storing and using 3rd party passwords?

2010-07-09 Thread Adam Fields
I'm looking for a best practices guide (for a system architecture) or
case studies for how best to handle storing and using 3rd party
passwords.

Specifically, I'm interested in the case where a program or service
needs to store a password in such a way that it can be used (presented
to another service on behalf of the user), which precludes using a
hash or other obfuscated password. Obviously this is a security risk,
but I'm looking for ways to minimize that risk, and tips on how to
design a system that can use those passwords as it needs to but still
minimize the chances of passwords being compromised.

(I understand that storing passwords is not in itself a great idea,
but in practice it's still required to access some web services where
OAuth or the like is not yet supported.)
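
For concreteness, here is the shape of design I have in mind, as a
rough Python sketch (assuming the `cryptography` package's Fernet
recipe; the key handling is exactly the part I'd like best-practice
guidance on):

    # Rough sketch: passwords encrypted at rest, with the data key kept
    # outside the credential store (config, KMS, or HSM in practice).
    from cryptography.fernet import Fernet

    data_key = Fernet.generate_key()   # in reality: fetched at runtime from a
    f = Fernet(data_key)               # separate store, never saved with the DB

    def store_password(db: dict, user: str, password: str) -> None:
        db[user] = f.encrypt(password.encode())     # only ciphertext is stored

    def use_password(db: dict, user: str) -> str:
        # Decrypt just-in-time to present to the third-party service.
        return f.decrypt(db[user]).decode()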

Does anyone have a good reference for this?


-- 
- Adam
--
If you liked this email, you might also like:
HTML5 presentation in HTML5 
-- http://workstuff.tumblr.com/post/535889471
Cooking at home is different 
-- http://www.aquick.org/blog/2009/10/15/cooking-at-home-is-different/
Brooklyn Botanic Garden 
-- http://www.flickr.com/photos/fields/4520236537/
fields: @jacqui Get an ez-pay metrocard and never worry about refilling or 
los... 
-- http://twitter.com/fields/statuses/12888949847
--
** I design intricate-yet-elegant processes for user and machine problems.
** Custom development project broken? Contact me, I can help.
** Some of what I do: http://workstuff.tumblr.com/post/70505118/aboutworkstuff

[ http://www.adamfields.com/resume.html ].. Experience
[ http://www.morningside-analytics.com ] .. Latest Venture
[ http://www.confabb.com ]  Founder



Spy/Counterspy

2010-07-09 Thread Peter Gutmann (alt)
GPS tracking units that you can fit to your car to track where your kids are
taking it (or *cough* other purposes) have been around for a while now.  It's
interesting to see that recently the sorts of places that'll sell you card
skimmers and RFID cloners have started selling miniature GPS jammers that plug
into cigarette-lighter sockets on cars (general-purpose ones using internal
batteries have been around for a while).  In other words these are specifically
designed to stop cars from being tracked.

(Some of the more sophisticated trackers will fall back to 3G GSM-based
tracking via UMTS modems if they lose the GPS signal; it'll be interesting to
see how long it takes before the jammers are updated to deal with 3G signals
as well, hopefully while leaving 2G intact for phone calls.)

Peter.



A5/1 breaks with GPU and Rainbow Tables

2010-07-09 Thread Frank A. Stevenson
I hope the crosslinking is OK:

http://lists.lists.reflextor.com/pipermail/a51/2010-May/000605.html

Time-memory trade-off attacks against A5/1, the most commonly used
encryption in GSM, have been known for over a decade. But older
proposals were limited to period hardware: the tables would be a few
tens of gigabytes, and the precomputation effort was restricted to
100-1000 CPU years with PCs of the era. Consequently the plaintext
requirements were impractically high, typically several minutes of
conversation.

The A5/1 TMTO project couples rainbow tables with modern GPUs and cheap
terabyte disks or fast flash storage, and gains leverage from keyspace
compression, a side effect of warming up the LFSRs. Recently, results
have been announced in the form of keys recovered from test data,
together with a dramatic reduction in preprocessing and plaintext
requirements.

For instance, 20 days of computation on just one high-end graphics card
(ATI Radeon HD 5970) seems to yield a 4% chance of key recovery given a
single GSM frame (114 bits) of known plaintext. The tables will be
computed up to a size of 2TB in a matter of months, reducing the
plaintext requirements to just a handful of GSM frames.
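
For readers new to TMTO attacks, here is a toy sketch (Python, with a
stand-in hash rather than the real A5/1 keystream function) of the
chain idea behind rainbow tables: precompute long chains over the
function, store only the (start, end) pairs, and recompute chains at
attack time to invert the function:

    import hashlib

    SPACE = 1 << 24                      # toy 24-bit "key" space

    def f(x: int) -> int:
        # Stand-in one-way function mapping a key to an output.
        digest = hashlib.sha256(x.to_bytes(4, "big")).digest()
        return int.from_bytes(digest[:3], "big")

    def reduce_fn(y: int, step: int) -> int:
        # Step-dependent reduction (the "rainbow" part, limiting chain merges).
        return (y + step) % SPACE

    def chain_end(start: int, length: int = 256) -> int:
        x = start
        for step in range(length):
            x = reduce_fn(f(x), step)
        return x                         # only (start, end) is stored on disk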

I should stress that the project has not made an actual intercept
coupled with a break of a GSM call yet. But given how few GSM frames
will be needed, this could be expected in the near term.

Frank A. Stevenson





Attempts at finding a new TCP sequence generator for uIP

2010-07-09 Thread Joachim Strömbergson
Aloha!

uIP [1] is a very compact TCP/IP stack for small, network-connected
embedded devices. (The code size for uIP including TCP and ICMP on the
AVR processor is about 5 kBytes.)
Unfortunately, the TCP sequence number generator in uIP is a bit
simplistic - basically a monotonically increasing number. In order to
reduce the opportunities for TCP Spoofing (like this nice one [2]) we
are trying to implement a new TCP sequence number generator.

What we want to find is an algorithm that generates good (secure) TCP
sequence numbers but uses very few resources (on 8-bit computing devices).

We have done some preliminary investigations, have some rough ideas and
would really appreciate comments and suggestions from the enlightened
minds on this list.

As we see it, the two main problems to solve are:
(1) Find a secure PRNG algorithm that has as low implementation
complexity as possible.

(2) Add as few system/application requirements on entropy source and
persistent storage as possible.

Looking at TinyRNG [3] for example, it seems that a block cipher in CTR
mode (or OFB mode) should be sufficient. The question then is what block
cipher to use? The XTEA block cipher [4] is very compact, but would it
be a wise choice from a security perspective?
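
To make this concrete, here is a rough sketch of XTEA run in CTR mode
to produce 32-bit sequence numbers (Python for readability; the real
thing would of course be C on the 8-bit target). The block function
follows the published 32-round XTEA reference algorithm [4]; the CTR
wrapper and names are our own:

    M32 = 0xFFFFFFFF

    def xtea_encrypt(v0: int, v1: int, key: list) -> tuple:
        # Standard 32-round XTEA block encryption; key is four 32-bit words.
        s, delta = 0, 0x9E3779B9
        for _ in range(32):
            v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & M32
            s = (s + delta) & M32
            v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & M32
        return v0, v1

    class CtrSeqGen:
        def __init__(self, key: list, counter: int = 0):
            self.key, self.counter = key, counter   # counter must never repeat
        def next_seq(self) -> int:
            hi, lo = (self.counter >> 32) & M32, self.counter & M32
            out, _ = xtea_encrypt(hi, lo, self.key)
            self.counter += 1
            return out                              # unpredictable 32-bit seq no.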

But what to feed the PRNG with? Looking again at TinyRNG, it uses a
simplistic version of the entropy accumulator from the Fortuna PRNG [5],
but with fewer and smaller pools. The pools are processed using a
CBC-MAC built around the same block cipher as used in the PRNG.

The combined storage for the pools as well as the CBC-MAC state would
probably be acceptable for uIP. The question is whether the pool-feeding
operation as such adds operational requirements on uIP that make it
harder to integrate.

A simpler scheme could be to feed the PRNG (CTR mode) with entropy
used as part of the key and IV, that is, not use a pool mechanism at
all and leave it to the user application to provide entropy words when
performing a reseed. The key (and IV?) would also consist of a
counter that is monotonically increased.

The problem with this (we guess) is that in order to ensure that key+IV
is never reused, we need to keep at least part of the key or IV as a
counter that is stored in persistent memory and increased once (and
stored) every time a reseed (or boot) is performed. (How bad from a
security perspective would this be, compared to other TCP sequence
generators?)
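
A sketch of that reseed logic (Python stand-in; the counter file models
uIP's persistent memory, and all names here are ours):

    import os

    COUNTER_FILE = "reseed_counter.bin"   # stand-in for persistent storage

    def load_counter() -> int:
        if not os.path.exists(COUNTER_FILE):
            return 0
        with open(COUNTER_FILE, "rb") as fh:
            return int.from_bytes(fh.read(), "big")

    def reseed(entropy_words: bytes) -> tuple:
        # Application entropy becomes the key; the persistently stored
        # counter becomes the IV, so a key+IV pair never repeats, even
        # across reboots.
        counter = load_counter() + 1
        with open(COUNTER_FILE, "wb") as fh:
            fh.write(counter.to_bytes(8, "big"))  # persist *before* first use
        key = entropy_words[:16].ljust(16, b"\x00")
        return key, counter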

The current version of uIP places few (basically no) demands on the
system/application regarding physical resources (besides memory for code
and data) and does not use any persistent storage besides code memory.
It seems that any good sequence generator that is driven by physical
entropy and tries to avoid sequence repetition needs to place additional
demands on the system. No?

This is basically as far as we have taken this. More or less a bit of
Googling, reading and attempts at thinking. The ambition is not to
invent something new and unproven but to adapt existing tech and ideas
that seem to work, and to get them to work within the size, performance
and API constraints of uIP.


Any thoughts, comments, suggestions and pointers would be very greatly
appreciated.

Thank you!
Joachim Strömbergson


References
--
[1] A. Dunkels. uIP TCP/IP stack.
http://www.sics.se/~adam/uip/index.php/Main_Page

[2] R. Lawshae. Picking Electronic Locks Using TCP Sequence Prediction

http://www.defcon.org/images/defcon-17/dc-17-presentation/Ricky_Lawshae/defcon-17-ricky_lawshae-picking_electronic_locks-wp.pdf

[3] A. Francillon, C. Castelluccia. TinyRNG: A Cryptographic Random
Number Generator for Wireless Sensors Network Nodes
http://planete.inrialpes.fr/~ccastel/PAPERS/TinyRNG.pdf

[4] R. M. Needham, D. J. Wheeler. Tea extensions.
http://www.cix.co.uk/~klockstone/xtea.pdf

[5] Wikipedia. Fortuna PRNG.
http://en.wikipedia.org/wiki/Fortuna_%28PRNG%29

-- 
With kind regards, Yours

Joachim Strömbergson - Always in harmonic oscillation.

Kryptoblog - IT security in Swedish
http://www.strombergson.com/kryptoblog



Commercial quantum cryptography system broken

2010-07-09 Thread Steven Bellovin
http://www.technologyreview.com/blog/arxiv/25189/

Not at all to my surprise, they broke it by exploiting a difference between a 
theoretical system and a real-world implementation.

--Steve Bellovin, http://www.cs.columbia.edu/~smb







Re: Quantum Key Distribution: the bad idea that won't die...

2010-07-09 Thread Alexander Klimov
http://arxiv.org/abs/1005.2376

  Unconditional security proofs of various quantum key
  distribution (QKD) protocols are built on idealized
  assumptions. One key assumption is: the sender (Alice) can
  prepare the required quantum states without errors. However,
  such an assumption may be violated in a practical QKD system.
  In this paper, we experimentally demonstrate a technically
  feasible intercept-and-resend attack that exploits such
  a security loophole in a commercial plug & play QKD system.
  The resulting quantum bit error rate is 19.7%, which is below
  the proven secure bound of 20.0% for the BB84 protocol.

-- 
Regards,
ASK



Question w.r.t. AES-CBC IV

2010-07-09 Thread Ralph Holz
Dear all,

A colleague dropped in yesterday and confronted me with the following.

He wanted to scrape off some additional bits when using AES-CBC because
the messages in his concept are very short (a few hundred bits). So he
was thinking about a variant of AES-CBC, where he uses just 32 (random)
bits as a source for the IV. These are encrypted with AES and then used
as the actual IV to feed into the CBC. As a result, he does not need to
send a 128-bit IV to the receiver but just the 32 bits.

His argument was that AES basically is used as an expansion function for
the IV here, with the added benefit of encryption. On the whole, this
should not weaken AES-CBC. Although he was not sure if it actually would
strengthen it.
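
In code, his construction would look roughly like this (a sketch using
Python's `cryptography` package; the single-block ECB call just models
one raw AES block encryption):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)                 # the shared AES key

    def derive_iv(short_iv: bytes) -> bytes:
        # Expand a 32-bit random value into a 128-bit CBC IV by encrypting
        # it (zero-padded) under AES.
        assert len(short_iv) == 4        # only these 32 bits go on the wire
        block = short_iv + b"\x00" * 12  # pad to one AES block
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(block) + enc.finalize()   # full 128-bit IV for CBC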

While I am prepared to buy this argument (I am not a cryptographer...),
I still felt that the argument might not be complete. After all, 32 bits
don't provide much randomness, and I wasn't sure if this, overall, would
not lead to more structure in the ciphertext - which might in turn give
an attacker more clues with respect to the key.

Are there any opinions on this?

Regards,
Ralph



What is required for trust?

2010-07-09 Thread Sandy Harris
India recently forbade some Chinese companies from bidding on some
cell phone infrastructure projects, citing national security concerns:

http://www.chinatechnews.com/2010/05/25/12102-indias-bsnl-excludes-chinas-huawei-zte-from-gsm-bidding

Of course, the Chinese gov't and companies are by no means the only
ones one might worry about. AT&T and other US telcos have given
customer data to the NSA. What about fear of NSA trickery in Lucent
products? Or French intelligence in Alcatel? Or Israel or Taiwan or
whoever? In all cases, you can argue about how plausible such threats
are, but it seems clear they are not utterly implausible.

Nor are the companies the only threat. Cisco and many other firms have
factories in China; if you are worried about Huawei colluding with
government here to spy on or sabotage other nations, then you likely
have to worry about that government slipping a team into Cisco staff
to subvert those products. I don't think this threat is realistic, but
I could be wrong.

The main devices to worry about are big infrastructure pieces --
telephone switches, big routers and the like. However, those are by no
means the only potential targets. Small home routers and various
embedded systems are others.

So, if one is building some sort of hardware that people may be
reluctant to buy because of security concerns, what does it take to
reassure them? Obviously, this is going to vary with both the
application and the people involved, but can we say anything useful in
general?

Standard components help. If you use IPsec, or AES, or a commodity
processor, I can have some confidence in those parts, though I'll
still worry about other things. Use your own protocol or crypto
algorithm and I definitely won't trust it without publication and a
lot of analysis. Put big lumps of your own VLSI on the board and I'll
worry about what might be hidden in them.

Openness helps. Put an open source OS on the thing and give me the
application code in source for auditing. If you must use some VLSI or
FPGA parts, publish source for those.

Auditing helps. Intel got outsiders to audit their random number
generator. This is probably needed for some critical components, but
which?

All of those help, but are they enough? If not, what else is needed?
Or is this an impossible task?



A Fault Attack Construction Based On Rijmen's Chosen-Text Relations Attack

2010-07-09 Thread Alfonso De Gregorio
Last Thursday, Vincent Rijmen announced a clever new attack on AES
(and KASUMI) in a report posted to the Cryptology ePrint Archive:
"Practical-Titled Attack on AES-128 Using Chosen-Text Relations",
http://eprint.iacr.org/2010/337


I believe the related-subkey model is an interesting model to look at  
and, with this email, I would like to solicit comments from the  
community about chosen-text relations attacks and their implications.


For example, this model might be quite relevant when attacking
white-box implementations of the target encryption algorithm with an
embedded secret key, assuming the ability to tamper with at least 1 bit
of the round output (debugging...).


A Fault Attack
In order to further solicit comments, I would like to contribute a  
fault attack construction based on chosen-text relations attack.


First, it is worth noting how the zero-query attack provided by
chosen-text-relations-in-the-middle can be transformed into an attack
with a single query to both the encryption and decryption oracles. It
is possible to do so by resuming the interrupted encryption after
applying the specific difference delta to the state (i.e., no rollback
anymore) and querying the decryption oracle.


More specifically:
- halt the computer in the middle of execution of an encryption routine;
- apply the specific difference delta to the state;
- resume the encryption and output the ciphertext c*;
- query the decryption oracle with c* and retrieve the modified plaintext p*.


CFP: ACM CCS Workshop on Insider Threats

2010-07-09 Thread Wright, Matthew
CFP: ACM CCS Workshop on Insider Threats

Apologies for cross-posting.

-Matt

ACM Workshop on Insider Threats
http://www.csiir.ornl.gov/ccsw2010/

Call for Papers

ACM Workshop on Insider Threats
in conjunction with ACM CCS 2010
October 8, 2010

When equipped with insider knowledge, an attacker is a particular risk
to an organization: they may know the policies and security measures
of an organization and devise ways to subvert them. Such attackers can
have a variety of motives and triggers that cause them to act against
the organization's interests. Further, the mechanisms these attackers
can use can range from unsophisticated abuses of their own authority
to elaborate techniques to acquire unauthorized access. The duration
of the attacks may be short-term or longer-term. Finally, the goal of
these attacks can be simple exfiltration of information or even direct
sabotage. 

The Insider Threat has been identified as a hard, but important,
computer security problem. This workshop broadly calls for novel
research in the defense against insider threats. Relevant research may
leverage operating systems, communication networking, data mining,
social networking, or theoretical techniques to inform or create
systems capable of detecting malicious parties. Cross-disciplinary
work is encouraged but such work should contain a significant
technical computer security contribution. Research in non-traditional
systems, such as smart spaces, is encouraged as well as enterprise
systems. Finally, while we discourage exploits of limited scope, we
solicit generalized techniques that help an inside attacker evade
modern defensive techniques. 
  
Topics of interest include but are not limited to:
  - Novel data collection of threat indicators,
  - Detection of triggers and behavior modeling associated with
 insider threat development, 
  - Detection of malicious users acting within their own authority
 against organizational interests, 
  - Detection of unauthorized escalation of rights,
  - Covert exfiltration of data and approaches to thwart such techniques,
  - Automatic detection of high-value digital assets,
  - Techniques to minimize false positives in insider threat detection,
  - Advances in access control, data compartmentalization or
 administration of compartments, 
  - Detection techniques for resource constrained clients (limited
 processor, bandwidth, or battery capacity), 
  - Data and digital asset tracking, and
  - Techniques to provide near real-time forensics.

Important Dates
  - Paper Submission Due: June 28, 2010
  - Acceptance Notification: August 6, 2010
  - Camera-ready Due: August 16, 2010
  - Workshop: October 8, 2010

Paper Format

Submissions must be at most 8 pages in double-column ACM format (note:
pages must be numbered), excluding the bibliography and well-marked
appendices and at most 10 pages overall. Committee members are not
required to read appendices, so the paper should be intelligible
without them. Submissions are not required to be anonymized. Only PDF
files will be accepted. Submissions not meeting these guidelines risk
rejection without consideration of their merits. The authors of
accepted papers must guarantee that their paper will be presented at
the workshop. Accepted papers will be published by the ACM in a
conference proceedings. 

Paper Submission

All submissions are made through the Easy Chair Website:
http://www.easychair.org/conferences/?conf=wits20100

Program Chairs
Brent Lagesse, Oak Ridge National Laboratory
Craig Shue, Oak Ridge National Laboratory

Program Committee
Michel Barbeau, Carleton University
Elisa Bertino, Purdue University
Dawn Cappelli, CERT
Erik Ferragut, Oak Ridge National Laboratory
Deborah Frincke, Pacific Northwest National Laboratory
Minaxi Gupta, Indiana University
Markus Jakobsson, Palo Alto Research Center
Apu Kapadia, Indiana University
Marc Liberatore, University of Massachusetts
Donggang Liu, University of Texas Arlington
Gerome Miklau, University of Massachusetts
Sean Smith, Dartmouth College
Matthew Wright, University of Texas Arlington



A real case of malicious steganography in the wild?

2010-07-09 Thread Steven Bellovin
For years, there have been unverifiable statements in the press about assorted 
hostile parties using steganography.  There may now be a real incident -- or at 
least, the FBI has stated in court documents that it happened.

According to the Justice Department 
(http://www.justice.gov/opa/pr/2010/June/10-nsd-753.html), 11 Russian nationals 
have been operating as deep cover agents in the US for many years.  According 
to Complaint #2 (link at the bottom of the page), a steganographic program was 
used.  Other Internet-era techniques allegedly employed include ad hoc WiFi 
networks.  (To be sure, the FBI could be wrong.  In the past, I've seen them 
make mistakes about that, but they're *much* more experienced now.)

It will be interesting to see how this develops in court.

--Steve Bellovin, http://www.cs.columbia.edu/~smb







1280-Bit RSA

2010-07-09 Thread Dan Kaminsky
All,

   I've got a perfect vs. good question.

   NIST is pushing RSA-2048.  And I think we all agree that's probably a
good thing.

   However, performance on RSA-2048 is too low for a number of real world
uses.

   Assuming RSA-2048 is unavailable, is it worth taking the intermediate
step of using RSA-1280?  Or should we stick to RSA-1024?

--Dan



Intel to also add RNG

2010-07-09 Thread Eugen Leitl

http://www.technologyreview.com/printer_friendly_article.aspx?id=25670&channel=Briefings&section=Microprocessors

Tuesday, June 29, 2010

Nanoscale Random Number Circuit to Secure Future Chips

Intel unveils a circuit that can pump out truly random numbers at high speed.

By Tom Simonite

It might sound like the last thing you need in a precise piece of hardware,
but engineers at Intel are pretty pleased to have found a way to build a
circuit capable of random behavior into computer processors.

Generating randomness--an unpredictable stream of numbers--is much harder
than you might think. It's also crucial to creating the secure cryptographic
keys needed to keep data safe. Building a random-number-generating ability
into the Central Processing Unit (CPU) at a computer's heart is ideal, says
Ram Krishnamurthy, an engineer at Intel's Microprocessor Technology Labs, in
Hillsboro, OR. It should speed up any process that requires the generation of
an encrypted key, for example securing sensitive data on a hard drive, and
make it harder for an attacker to compromise that encryption.

Building circuitry capable of producing random numbers into a CPU has proved
difficult. "Today random numbers are either generated in software, or in the
chip set outside the microprocessor," explains Krishnamurthy, one of the
Intel researchers on the project.

Neither solution is ideal. Software produces only pseudo random numbers
(given enough computing power, patterns can be found within that randomness).

"If the random numbers are not truly random, for example, if they are biased
in some way, then an adversary has a better chance of guessing/determining
the value," explains mathematician Elaine Barker, at the National Institute
of Standards and Technology (NIST), in Gaithersburg, MD. "In the case of
cryptographic keys, if the adversary can determine the key without an
excessive amount of computing power, then he can breach the confidentiality
of that data."

Installing a source of random numbers outside of a computer's core
microprocessor provides another avenue of opportunity to attackers, says
Krishnamurthy. "You are vulnerable to side channel attacks," he explains;
"there are many ways by which the key can be directly read off of the bus, or
attacks that look at how the power supply varies and look for signatures that
indicate what the key looks like."

Building the circuit into the main processor shuts off that possibility, says
Krishnamurthy, although the barrier to doing that has been practicality. The
best-established methods of generating random numbers use analog circuits
that rely on thermal noise as a source of randomness, and those circuits are
not easily fabricated with the techniques used to make the digital circuits
of a microprocessor. Nor are they easily scaled down to the size of
components on modern chips.

Intel's new circuit has a fully-digital design, making it possible to
incorporate it into the microprocessor die. At the heart of the new design is
a cross-coupled inverter, a combination of two basic circuit components that
is essentially a memory capable of storing a single 1 or 0. This memory,
though, is designed to be unreliable; it can be tipped between its two
possible outputs by the influence of thermal noise from the surrounding
silicon. Since that thermal noise is random, the circuit's output should be,
too.

In reality, though, the influence of fluctuations in voltage and temperature
normal inside a chip could bias that output to be less than random, requiring
Krishnamurthy and colleagues to develop additional measures to counteract
their influence. Benchmarks for true randomness maintained by NIST were
used to confirm they had been successful. "We exceeded all of those
thresholds," he says. The speed at which the new circuit cranks out
numbers--2.4 billion a second, or 2.4Gbps--is also around 200 times faster
than anything before, Krishnamurthy adds.
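
The article does not say what Intel's conditioning measures are; a
classic example of such a debiasing step is the von Neumann corrector,
sketched here in Python:

    def von_neumann(raw_bits):
        # Consume raw bits in pairs; emit 0 for "01", 1 for "10", discard
        # "00" and "11". Output is unbiased if the pairs are independent,
        # at the cost of throwing away at least half the input bits.
        it = iter(raw_bits)
        for a, b in zip(it, it):
            if a != b:
                yield a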

Having built the circuit with a smallest feature size of 45 nanometers, he
and colleagues are now working toward proving it can be built using 32 and 22
nanometer manufacturing processes with minimal design tweaks.

Passing existing benchmarks of randomness, though, does not mean the new
circuit is perfect. Current techniques do not make it possible to be certain
that any source of randomness is truly random, says Barker. "We just don't
know enough to design tests that catch all the problems, and tests may not
always catch the point at which a noise source starts to go bad if the change
is subtle." Research by groups like that at NIST will generate smarter tests
that help industry engineers raise the bar further.



Chaocipher Revealed

2010-07-09 Thread I)ruid
CHAOCIPHER REVEALED: THE ALGORITHM
Moshe Rubin © 2 July 2010

ADDRESS: Rechov Shaulson 59/6, Jerusalem 95400 ISRAEL;
mos...@mountainvistasoft.com.

ABSTRACT: Chaocipher is a method of encryption invented by John F. Byrne
in 1918, who tried unsuccessfully to interest the US Signal Corp and
Navy in his system. In 1954, Byrne presented Chaocipher-encrypted
messages as a challenge in his autobiography “Silent Years”. Although
numerous students of cryptanalysis attempted to solve the challenge
messages over the years, none succeeded. Chaocipher has been a closely
guarded secret known only to a handful of persons. Following fruitful
negotiations with the Byrne family during the period 2009-2010, the
Chaocipher papers and materials have been donated to the National
Cryptologic Museum in Ft. Meade, MD. This paper presents the first full
disclosure and description of Byrne’s Chaocipher algorithm.

http://www.ciphermysteries.com/2010/07/03/the-chaocipher-revealed

-- 
I)ruid, C²ISSP
dr...@caughq.org
http://druid.caughq.org




Re: Spy/Counterspy

2010-07-09 Thread Pawel


Hi,

On Apr 27, 2010, at 5:38 AM, Peter Gutmann (alt) pgut001.reflec...@gmail.com 
 wrote:


GPS tracking units that you can fit to your car to track where your kids are
taking it (or *cough* other purposes) have been around for a while now.  It's
interesting to see that recently the sorts of places that'll sell you card
skimmers and RFID cloners have started selling miniature GPS jammers that plug
into cigarette-lighter sockets on cars (general-purpose ones using internal
batteries have been around for a while).  In other words these are specifically
designed to stop cars from being tracked.

(Some of the more sophisticated trackers will fall back to 3G GSM-based
tracking via UMTS modems if they lose the GPS signal; it'll be interesting to
see how long it takes before the jammers are updated to deal with 3G signals
as well, hopefully while leaving 2G intact for phone calls.)


Just wondering, why wouldn't GPS trackers use 2G to determine the
location?

And also, does a tracker even need a cell service subscription for location
determination, or is it enough to query the cell towers (through some
handshake protocols) to figure out the proximities and coordinates?




Peter.


Thanks,
  Pawel. 




Re: What's the state of the art in factorization?

2010-07-09 Thread Jonathan Katz

On Thu, 22 Apr 2010, Zooko O'Whielacronx wrote:


On Wed, Apr 21, 2010 at 5:29 PM, Samuel Neves sne...@dei.uc.pt wrote
(on the cryptography@metzdowd.com list):

[2] http://www.cs.umd.edu/~jkatz/papers/dh-sigs-full.pdf


As one of the authors of the above paper, I have an obvious interest in 
this thread. =)



Later I discovered this paper [2] which appears to be an improvement
on that one in terms of performance (see Table 1 in [2]) while still
having a tight reduction to the Computational Diffie-Hellman (CDH)
problem. Strangely, this paper [2] doesn't appear to have been
published anywhere except as an eprint on eprint.iacr.org. I wonder
why not. Is there something wrong with it?


While I don't know of any attack, the proof of security does not appear to 
be correct.


On the other hand, there is one published scheme that gives a slight 
improvement to our paper (it has fewer on-line computations): it is a 
paper by Chevallier-Mames in Crypto 2005 titled "An Efficient CDH-Based 
Signature Scheme with a Tight Security Reduction".




Re: [cryptography] What's the state of the art in factorization?

2010-07-09 Thread Paul Crowley

Jonathan Katz wrote:

[2] http://www.cs.umd.edu/~jkatz/papers/dh-sigs-full.pdf


On the other hand, there is one published scheme that gives a slight 
improvement to our paper (it has fewer on-line computations): it is a 
paper by Chevallier-Mames in Crypto 2005 titled "An Efficient CDH-Based 
Signature Scheme with a Tight Security Reduction".


My preferred signature scheme is the second, DDH-based one in the linked 
paper, since it produces shorter signatures - are there any proposals 
which improve on that?


Incidentally, the paper doesn't note this but that second scheme has a 
non-tight reduction to the discrete log problem in exactly the way that 
Schnorr does.

--
  __
\/ o\ Paul Crowley, p...@ciphergoth.org
/\__/ http://www.ciphergoth.org/



Re: [cryptography] What's the state of the art in factorization?

2010-07-09 Thread Zooko O'Whielacronx
On Fri, Apr 23, 2010 at 3:57 AM, Paul Crowley p...@ciphergoth.org wrote:

 My preferred signature scheme is the second, DDH-based one in the linked
 paper, since it produces shorter signatures - are there any proposals which
 improve on that?

http://eprint.iacr.org/2007/019

Has one. Caveat lector.

Regards,

Zooko



What's the state of the art in digital signatures? Re: What's the state of the art in factorization?

2010-07-09 Thread Zooko O'Whielacronx
On Thu, Apr 22, 2010 at 12:40 PM, Jonathan Katz jk...@cs.umd.edu wrote:
 On Thu, 22 Apr 2010, Zooko O'Whielacronx wrote:

 Unless I misunderstand, if you read someone's plaintext without having
 the private key then you have proven that P=NP!
…
 The paper you cite reduces security to a hard-on-average problem, whereas
 all that P \neq NP guarantees is hardness in the worst case.

I see. I did misunderstand. So although cracking the Lyubashevsky,
Palacio, Segev encryption scheme [1] doesn't mean that you've proven
P=NP, because NP is about worst-case rather than average-case, it
*does* mean that you've solved the subset sum problem for a random
instance. If you can do that for all keys that people use in real life
then you can solve the subset sum problem for almost all random
instances, which seems like it would still be a breakthrough in
complexity theory. If you can do it for only a few keys then this
means that the Lyubashevsky, Palacio, Segev scheme is susceptible to
weak keys.

Is that right?

Anyway, although this is not one, there do exist proposals for public
key crypto schemes where breaking the scheme implies solving a worst
case instance of a supposedly hard problem, right?

Here is a recent paper which surveys several of them (all
lattice-based) and estimates secure key sizes: [2].

None of the signature schemes mentioned therein appear to have the
sort of efficiency that we are used to. For example the ecdonaldp
(ECDSA) signature schemes measured on
http://bench.cr.yp.to/results-sign.html have key sizes on the order of
tens of bytes, where the most efficient digital signature algorithm
described in [2] has key sizes on the order of thousands of bytes.
(And that one is a one-time signature scheme!)

Okay, so I'm still searching for a signature algorithm which has the
following properties (or as many of them as I can get):

1. efficient (signing time, verification time, key generation time,
key size, signature size)

2. some kind of strong argument that it really is secure (the gold
standard would be reduction to a worst-case instance of an NP-complete
problem)

or, if we can't have (2) then at least we want (3) and (4):

3. rather different from ECDSA, so that a breakthrough is unlikely to
invalidate both ECDSA and this other scheme at once
and
4. not known to be vulnerable to quantum computers

and finally but importantly:

5. easy to understand and to implement

Suggestions welcome!

Regards,

Zooko Wilcox-O'Hearn

[1] http://eprint.iacr.org/2009/576
[2] http://eprint.iacr.org/2010/137



Re: What's the state of the art in digital signatures? Re: What's the state of the art in factorization?

2010-07-09 Thread Jonathan Katz

On Wed, 28 Apr 2010, Zooko O'Whielacronx wrote:


Anyway, although this is not one, there do exist proposals for public
key crypto schemes where breaking the scheme implies solving a worst
case instance of a supposedly hard problem, right?


Not to worst-case hardness of an NP-complete problem, no. Quite the 
contrary, there has been some body of work showing that a result of this 
sort is unlikely. (Though, as with all things related to complexity theory, 
where our state of knowledge is so limited, such a statement must be taken 
with a grain of salt. In any case, such a result is well beyond anything 
we can currently prove.)



2. some kind of strong argument that it really is secure (the gold
standard would be reduction to a worst-case instance of an NP-complete
problem)


See above.



Merkle Signature Scheme is the most secure signature scheme possible for general-purpose use

2010-07-09 Thread Zooko O'Whielacronx
Folks:

Regarding earlier discussion on these lists about the difficulty of
factoring and post-quantum cryptography and so on, you might be
interested in this note that I just posted to the tahoe-dev list:

100-year digital signatures

http://tahoe-lafs.org/pipermail/tahoe-dev/2010-June/004439.html

Here is an excerpt:


As David-Sarah [Hopwood] has pointed out, a Merkle Signature Scheme is at least
as secure as *any* other digital signature scheme, even in the
long-term—even if attackers have quantum computers and the knowledge
of how to solve math problems that we don't know how to solve today.

If you had some other digital signature scheme (even, for the sake of
argument, a post-quantum digital signature scheme with some sort of
beautiful reduction from some classic math problem), then you would
probably start wanting to digitally sign messages larger than the few
hundreds of bits that the digital signature algorithm natively
handles. Therefore, you would end up hashing your messages with a
secure hash function to generate message representatives short
enough to sign. Therefore, your system will actually depend on both
the security of the digital signature scheme *and* the security of a
hash function. With a Merkle Signature Scheme you rely on just the
security of a hash function, so there is one less thing that can go
wrong. That's why a Merkle Signature Scheme is at least as secure as
the best digital signature scheme that you can imagine. :-)
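
To illustrate how little such schemes rely on, here is a toy Lamport
one-time signature in Python (illustrative only, not what Tahoe-LAFS
would ship; its security rests solely on the hash function):

    import hashlib, os

    def H(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    # Keygen: two secret preimages per digest bit; pubkey is their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]

    def digest_bits(msg: bytes) -> list:
        d = int.from_bytes(H(msg), "big")         # the message representative
        return [(d >> i) & 1 for i in range(256)]

    def sign(msg: bytes) -> list:
        # Reveal one preimage per bit. One-time: never sign twice per key.
        return [sk[i][b] for i, b in enumerate(digest_bits(msg))]

    def verify(msg: bytes, sig: list) -> bool:
        return all(H(s) == pk[i][b]
                   for i, (s, b) in enumerate(zip(sig, digest_bits(msg))))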


In that note I go on to talk about more Tahoe-LAFS-specific
engineering considerations and expose my ignorance about exactly what
properties are required of the underlying secure hash functions.

Regards,

Zooko



ANNOUNCING Tahoe, the Least-Authority File System, v1.7.0

2010-07-09 Thread Zooko O'Whielacronx
Dear people of the cryptography mailing lists:

We just released Tahoe-LAFS v1.7. The major new feature is an SFTP
server. This means that (after installing enough software and tinkering
with your operating system configuration) you can have a
normal-looking mount point backed by a Tahoe-LAFS grid.

Google is sponsoring us through Google Summer of Code. The next
release after this one will include the resulting improvements.

One of those improvements is the One Hundred Year Cryptography
project, with student Yu Xue and mentor Jack Lloyd. I'll post to these
lists about the progress they make.

Regards,

Zooko

ANNOUNCING Tahoe, the Least-Authority File System, v1.7.0

The Tahoe-LAFS team is pleased to announce the immediate
availability of version 1.7.0 of Tahoe-LAFS, an extremely
reliable distributed storage system.

Tahoe-LAFS is the first distributed storage system which offers
provider-independent security—meaning that not even the
operator of your storage server can read or alter your data
without your consent. Here is the one-page explanation of its
unique security and fault-tolerance properties:

http://tahoe-lafs.org/source/tahoe/trunk/docs/about.html

Tahoe-LAFS v1.7.0 is the successor to v1.6.1, which was released
February 27, 2010 [1].

v1.7.0 is a major new release with new features and bugfixes. It
adds a fully functional SFTP interface, support for non-ASCII character
encodings, and a new upload algorithm which guarantees that each file
is spread over multiple servers for fault-tolerance. See the NEWS file
[2] for details.


WHAT IS IT GOOD FOR?

With Tahoe-LAFS, you distribute your filesystem across multiple
servers, and even if some of the servers are compromised by
an attacker, the entire filesystem continues to work
correctly, and continues to preserve your privacy and
security. You can easily share specific files and directories
with other people.

In addition to the core storage system itself, volunteers have
built other projects on top of Tahoe-LAFS and have integrated
Tahoe-LAFS with existing systems.

These include frontends for Windows, Macintosh, JavaScript,
iPhone, and Android, and plugins for Hadoop, bzr, mercurial,
duplicity, TiddlyWiki, and more. See the Related Projects page
on the wiki [3].

We believe that strong cryptography, Free and Open Source
Software, erasure coding, and principled engineering practices
make Tahoe-LAFS safer than RAID, removable drive, tape,
on-line backup or cloud storage systems.

This software is developed under test-driven development, and
there are no known bugs or security flaws which would
compromise confidentiality or data integrity under recommended
use. (For all currently known issues please see the
known_issues.txt file [4].)


COMPATIBILITY

This release is fully compatible with the version 1 series of
Tahoe-LAFS. Clients from this release can write files and
directories in the format used by clients of all versions back
to v1.0 (which was released March 25, 2008). Clients from this
release can read files and directories produced by clients of
all versions since v1.0. Servers from this release can serve
clients of all versions back to v1.0 and clients from this
release can use servers of all versions back to v1.0.

This is the ninth release in the version 1 series. This series
of Tahoe-LAFS will be actively supported and maintained for
the foreseeable future, and future versions of Tahoe-LAFS will
retain the ability to read and write files compatible with
Tahoe-LAFS v1.


LICENCE

You may use this package under the GNU General Public License,
version 2 or, at your option, any later version. See the file
COPYING.GPL [5] for the terms of the GNU General Public
License, version 2.

You may use this package under the Transitive Grace Period
Public Licence, version 1 or, at your option, any later
version. (The Transitive Grace Period Public Licence has
requirements similar to the GPL except that it allows you to
wait for up to twelve months after you redistribute a derived
work before releasing the source code of your derived work.)
See the file COPYING.TGPPL.html [6] for the terms of the
Transitive Grace Period Public Licence, version 1.

(You may choose to use this package under the terms of either
licence, at your option.)


INSTALLATION

Tahoe-LAFS works on Linux, Mac OS X, Windows, Cygwin, Solaris,
*BSD, and probably most other systems. Start with
docs/quickstart.html [7].


HACKING AND COMMUNITY

Please join us on the mailing list [8]. Patches are gratefully
accepted -- the RoadMap page [9] shows the next improvements
that we plan to make and CREDITS [10] lists the names of people
who've contributed to the project. The Dev page [11] contains
resources for hackers.


SPONSORSHIP

Tahoe-LAFS was originally developed by Allmydata, Inc., a
provider of commercial backup services. After discontinuing
funding of Tahoe-LAFS R&D in early 2009, they have continued
to provide servers, bandwidth, small personal gifts as tokens
of appreciation, and 

Merkle-Winternitz-HORS signature scheme for Tahoe-LAFS

2010-07-09 Thread David-Sarah Hopwood
[cc:d to cryptography list from the tahoe-dev list.
See http://allmydata.org/pipermail/tahoe-dev/2010-June/004439.html,
http://allmydata.org/pipermail/tahoe-dev/2010-June/004502.html and
http://allmydata.org/pipermail/tahoe-dev/2010-June/004496.html for context.]

Brian Warner wrote:
 On 6/23/10 7:18 AM, David-Sarah Hopwood wrote:

 Ah, but it will work for a multi-layer Merkle tree scheme, such as
 GMSS: if keys are generated deterministically from a seed, then the
 signatures certifying keys at upper layers are also deterministic, so
 there's no key-reuse problem for those.

 Right, exactly. The basic scheme would look something like this:

  * set K=1024
  * build a K-ary tree with 2^M=2^256 leaves. Each leaf corresponds to a
possible message.
  * define a deterministic keyed function from leaf number to keypair
  * define a deterministic keyed function from node number to keypair
  * publish the root pubkey as the readcap. Retain the secret key (used
as an input to the above deterministic functions) as the writecap.
  * to sign a given message:
* identify the DEPTH = log_K(2^256) ≈ 26ish tree nodes along the
  path from root to leaf
* generate the one privkey for the message leaf, use it to sign the
  message itself
* for all nodes on the path, from bottom to top:
[
 * generate all K pubkeys for the node's children, concatenate them
   into a string, treat that as a message
 * sign the message with the parent node's privkey
]

 As zooko pointed out, the leaf signature can be optimized away: since
 each message gets a unique pubkey, it suffices to reveal the
 preimage/privkey for that pubkey. This reduces the size of the leaf
 signature down to a single hash.

 Assuming a 256-bit Merkle-signature scheme (with a 0 and a 1
 preimage for each bit), it takes 512 hashes to build all the
 privkey/preimages, and then 512 hashes to build the pubkeys.
 Each signature requires computing and publishing (revealing) 256 preimage
 hashes.

Note that this is a Lamport-Diffie signature, not a Merkle one-time
signature. The Merkle one-time signature scheme (described in the second
paragraph under Signing a several bit message in [Merkle1987]) publishes
only preimage hashes corresponding to 1 bits, and uses a checksum.

 Generating the readcap is pretty easy: you build the K pubkeys for the
 top node (4*256 small hashes each), hash each one down (K large hashes
 of 512*32 bytes each), then hash those lists into a single value (one
 large hash of K*32 bytes). So 1M small hashes, 1024 hashes of 16KiB
each, and one hash of 32KiB.

 For K=1024 and M=256, the message signature consists of the leaf
 signature (one hash), and 26 nodes. Each node contains one signature
 (256 preimages), one pubkey (512 hashes), and 1023 hashes-of-pubkeys.

 So the overall message signature requires you publish about 46567 hashes,
 or 1.5MB.

The scheme that I'm currently considering has the following five
improvements over the one above:


1. For the signatures on non-leaf public keys, use the Winternitz one-time
   signature scheme. This was first publically described in [Merkle1987],
   but a clearer description is given in [BDS2008].

   The Winternitz scheme (unlike the Lamport-Diffie or Merkle schemes) has
   the property that the full public key can be derived from a signature.
   Therefore it's not necessary to explicitly include the pubkey that is
   being signed at each node, since it can be derived from the signature
   on the next-lower node. More precisely, the lower signature gives a
   claimed public key, which can be authenticated using the upper signature.

   Winternitz signatures also allow a trade-off between signature size and
   the number of hash computations needed to sign, depending on the base.

   (Typically the scheme is described with a base that is a power of 2,
   i.e. the message representative and checksum are expressed as base
   2^w numbers. Actually it works for an arbitrary base >= 2, and using
   a base that is not a power of two can sometimes save an extra few
   percent in signature cost for a given signing size.)

   The signing cost is linear in the base B, while the size of the
   signature is only divided by lg(B). Nevertheless, choices of B from
   4 up to about 32 are useful.

   In the example above, the 256 + 512 = 768 hashes for the signature and
   pubkey, are reduced to 133 hashes for base 4; 90 hashes for base 8; and
   67 hashes for base 16.
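
   As a toy illustration of the derive-the-pubkey-from-the-signature
   property (Python; base 16, and omitting the checksum chains that a
   real implementation must add to stop an attacker increasing digits):

       import hashlib, os

       B, DIGITS = 16, 4                    # base 16, 16-bit representative

       def chain(x: bytes, steps: int) -> bytes:
           for _ in range(steps):
               x = hashlib.sha256(x).digest()
           return x

       sk = [os.urandom(32) for _ in range(DIGITS)]
       pk = [chain(s, B - 1) for s in sk]   # endpoint of each hash chain

       def sign(digits: list) -> list:
           # Signature reveals chain position d_i for each digit.
           return [chain(sk[i], d) for i, d in enumerate(digits)]

       def pk_from_sig(digits: list, sig: list) -> list:
           # Walking the remaining B-1-d_i steps recovers the public key,
           # so no explicit pubkey needs to be published per node.
           return [chain(sig[i], B - 1 - d) for i, d in enumerate(digits)]

       digits = [3, 15, 0, 7]
       assert pk_from_sig(digits, sign(digits)) == pk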

   Note that the optimal choices of K are typically smaller than 1024, so
   the one-time signature/pubkey makes up a greater proportion of the
   published size for each layer than in the example above. For instance,
   if K = 16, B = 16, and M = 256, then the number of hashes published per
   layer drops to 67 + K-1 = 82. Without any of the other improvements
   below, at least 64 layers would be needed, so that would be 5248 hashes,
   or 164 KiB. (If Zooko's optimization is used for the leaf 

Re: Merkle-Winternitz-HORS signature scheme for Tahoe-LAFS [correction]

2010-07-09 Thread David-Sarah Hopwood
David-Sarah Hopwood wrote:
[snip]
 There could also be a concern that point 4 above is similar to
 on-line/off-line signatures as patented by Even, Goldreich and Micali
 (U.S. patent 5016274, filed in 1988; expires on 14 May 2011).

Ah, I calculated the expiration date incorrectly. It was filed before the
rules changed in June 1995, so it's the later of 20 years after filing
(8 November 2008) or 17 years after issue (14 May 2008). So it has already
expired.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: Question w.r.t. AES-CBC IV

2010-07-09 Thread Jonathan Katz
CTR mode seems a better choice here. Without getting too technical, 
security of CTR mode holds as long as the IVs used are fresh whereas 
security of CBC mode requires IVs to be random.


In either case, a problem with a short IV (no matter what you do) is the 
possibility of IVs repeating. If you are picking 32-bit IVs at random, you 
expect a repeat after only (roughly) 2^16 encryptions (which is not very 
many).
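
A quick back-of-the-envelope check of that bound (Python, exact
birthday computation):

    def collision_prob(n: int, space: int = 2 ** 32) -> float:
        # Probability of at least one repeated IV after n uniform draws.
        p_unique = 1.0
        for i in range(n):
            p_unique *= (space - i) / space
        return 1.0 - p_unique

    print(collision_prob(2 ** 16))   # ~0.39: a repeat is likely by 65536 msgs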


On Wed, 2 Jun 2010, Ralph Holz wrote:


Dear all,

A colleague dropped in yesterday and confronted me with the following.

He wanted to scrape off some additional bits when using AES-CBC because
the messages in his concept are very short (a few hundred bits). So he
was thinking about a variant of AES-CBC, where he uses just 32 (random)
bits as a source for the IV. These are encrypted with AES and then used
as the actual IV to feed into the CBC. As a result, he does not need to
send a 128-bit IV to the receiver but just the 32 bits.

His argument was that AES basically is used as an expansion function for
the IV here, with the added benefit of encryption. On the whole, this
should not weaken AES-CBC. Although he was not sure if it actually would
strengthen it.

While I am prepared to buy this argument (I am not a cryptographer...),
I still felt that the argument might not be complete. After all, 32 bits
don't provide much randomness, and I wasn't sure if this, overall, would
not lead to more structure in the ciphertext - which might in turn give
an attacker more clues with respect to the key.

Are there any opinions on this?

Regards,
Ralph






Re: Question w.r.t. AES-CBC IV

2010-07-09 Thread Greg Rose
Unfortunately I can't remember the author, but there was a paper
showing that an encrypted counter was secure to use as IVs for CBC
mode. So encrypting a shorter random IV should also be secure.


Greg.

On 2010 Jun 2, at 9:36 , Ralph Holz wrote:


Dear all,

A colleague dropped in yesterday and confronted me with the following.

He wanted to scrape off some additional bits when using AES-CBC because
the messages in his concept are very short (a few hundred bits). So he
was thinking about a variant of AES-CBC, where he uses just 32 (random)
bits as a source for the IV. These are encrypted with AES and then used
as the actual IV to feed into the CBC. As a result, he does not need to
send a 128-bit IV to the receiver but just the 32 bits.

His argument was that AES basically is used as an expansion function for
the IV here, with the added benefit of encryption. On the whole, this
should not weaken AES-CBC. Although he was not sure if it actually would
strengthen it.

While I am prepared to buy this argument (I am not a cryptographer...),
I still felt that the argument might not be complete. After all, 32 bits
don't provide much randomness, and I wasn't sure if this, overall, would
not lead to more structure in the ciphertext - which might in turn give
an attacker more clues with respect to the key.

Are there any opinions on this?

Regards,
Ralph





Re: A Fault Attack Construction Based On Rijmen's Chosen-Text Relations Attack

2010-07-09 Thread Jonathan Katz

On Mon, 14 Jun 2010, Alfonso De Gregorio wrote:

Last Thursday, Vincent Rijmen announced a clever new attack on AES (and
KASUMI) in a report posted to the Cryptology ePrint Archive: "Practical-Titled
Attack on AES-128 Using Chosen-Text Relations",
http://eprint.iacr.org/2010/337


Err...I read that paper by Rijmen as a bit of a joke. I think he was 
poking fun at some of these unrealistic attack models.




Re: [TIME_WARP] 1280-Bit RSA

2010-07-09 Thread Brandon Enright
On Thu, 1 Jul 2010 06:46:30 +0200
Dan Kaminsky d...@doxpara.com wrote:

 All,
 
I've got a perfect vs. good question.
 
NIST is pushing RSA-2048.  And I think we all agree that's
 probably a good thing.
 
However, performance on RSA-2048 is too low for a number of real
 world uses.
 
Assuming RSA-2048 is unavailable, is it worth taking the
 intermediate step of using RSA-1280?  Or should we stick to RSA-1024?
 
 --Dan
 

Dan,

I looked at the GNFS runtime and plugged a few numbers in.  It seems
RSA Security is using a more conservative constant of about 1.8 rather
than the suggested 1.92299...

See:
http://mathworld.wolfram.com/NumberFieldSieve.html

So using 1.8, a 1024-bit RSA key is roughly equivalent to an 81-bit
symmetric key.  Plugging in 1280 yields 89 bits.

I'm of the opinion that if you take action to improve security, you
should get more than 8 additional bits for your efforts.  For example,
1536 shouldn't be that much slower but gives 96 bits of security.

For posterity, here is a table using 1.8 for the GNFS constant:

RSA  Symmetric

256  43.7
512  59.8
768  71.6
1024 81.2
1280 89.5
1536 96.8
2048 109.4
3072 129.9
4096 146.5
8192 195.1
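
For anyone who wants to reproduce the table, a short Python version of
the estimate (same c = 1.8, and ignoring the o(1) term in the GNFS
running time L(n) = exp(c * (ln n)^(1/3) * (ln ln n)^(2/3))):

    import math

    def gnfs_bits(modulus_bits: int, c: float = 1.8) -> float:
        # log2 of the GNFS work factor, i.e. equivalent symmetric key size.
        ln_n = modulus_bits * math.log(2)
        work = c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
        return work / math.log(2)

    for b in (256, 512, 768, 1024, 1280, 1536, 2048, 3072, 4096, 8192):
        print(f"{b:5d}  {gnfs_bits(b):5.1f}")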

Brandon



Re: [TIME_WARP] 1280-Bit RSA

2010-07-09 Thread Dan Kaminsky
Dan,

 I looked at the GNFS runtime and plugged a few numbers in.  It seems
 RSA Security is using a more conservative constant of about 1.8 rather
 than the suggested 1.92299...

 See:
 http://mathworld.wolfram.com/NumberFieldSieve.html

 So using 1.8, a 1024-bit RSA key is roughly equivalent to an 81-bit
 symmetric key.  Plugging in 1280 yields 89 bits.

 I'm of the opinion that if you take action to improve security, you
 should get more than 8 additional bits for your efforts.  For example,
 1536 shouldn't be that much slower but gives 96 bits of security.


Here's the actual data, in transactions per second, that I'm getting
for a sample app:

512: 710.042382
1024: 187.187719
1280: 108.592265
1536: 73.314751
2048: 20.645645

2048 ain't happening.  The relative diff between 1280 and 1536 is
interesting though.
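
Out of curiosity, here's how those numbers stack up against the naive
k^3 cost model for RSA private-key operations (a rough Python check;
the model ignores CRT and fixed per-transaction overhead, which is
presumably why the measured slowdowns are gentler than predicted):

    # Naive k^3 scaling predicts the slowdown relative to 512-bit RSA;
    # the measured transactions/sec above fall off more gently.
    tps = {512: 710.04, 1024: 187.19, 1280: 108.59, 1536: 73.31, 2048: 20.65}
    for k, t in tps.items():
        print(k, "predicted %5.1fx  observed %5.1fx" % ((k / 512) ** 3, 710.04 / t))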



 For posterity, here is a table using 1.8 for the GNFS constant:

 RSA     Symmetric

 256      43.7
 512      59.8
 768      71.6
 1024     81.2
 1280     89.5
 1536     96.8
 2048    109.4
 3072    129.9
 4096    146.5
 8192    195.1


Do other cracking mechanisms have similar curves to GNFS (with different
constants)?

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Question w.r.t. AES-CBC IV

2010-07-09 Thread Steven Bellovin

On Jul 9, 2010, at 1:55 PM, Jonathan Katz wrote:

 CTR mode seems a better choice here. Without getting too technical, security 
 of CTR mode holds as long as the IVs used are fresh whereas security of CBC 
 mode requires IVs to be random.
 
 In either case, a problem with a short IV (no matter what you do) is the 
 possibility of IVs repeating. If you are picking 32-bit IVs at random, you 
 expect a repeat after only (roughly) 2^16 encryptions (which is not very 
 many).
 

Unless I misunderstand your point, I think that in the real world there's a 
very real difference in the insecurity of CBC vs CTR if the IV selection is 
faulty.  With CBC, there is semantic insecurity, in that one can tell whether 
two messages share a common prefix if the IV is the same.  Furthermore, if the 
IV is predictable to the adversary, under certain circumstances some plaintext 
can be recovered.

With CTR, however, there are devastating two-message attacks if the IVs 
are the same; all that's necessary is decent knowledge of some probable 
plaintext.
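
To make that concrete, here is the two-message attack as a short Python
sketch (using the pyca/cryptography package; the messages and the
deliberately reused IV are contrived, but the recovery step is the real
thing):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, iv = os.urandom(16), os.urandom(16)  # the IV is (wrongly) reused below

    def ctr_encrypt(plaintext):
        enc = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
        return enc.update(plaintext) + enc.finalize()

    c1 = ctr_encrypt(b"attack at dawn!!")  # victim message 1
    c2 = ctr_encrypt(b"meet me at noon.")  # victim message 2, same IV

    # XORing the two ciphertexts cancels the keystream; XORing in probable
    # plaintext for message 1 then recovers message 2 outright.
    guess = b"attack at dawn!!"
    print(bytes(a ^ b ^ g for a, b, g in zip(c1, c2, guess)))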


--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: [TIME_WARP] 1280-Bit RSA

2010-07-09 Thread Jonathan Thornburg
The following usenet posting from 1993 provides an interesting bit
(no pun intended) of history on RSA key sizes.  The key passage is the
last paragraph, asserting that 1024-bit keys should be ok (safe from
key-factoring attacks) for a few decades.  We're currently just
under 1.75 decades on from that message.  I think the take-home lesson
is that forecasting progress in factoring is hard, so it's useful to
add a safety margin...

   From: p...@ox.ac.uk (Paul C Leyland)
   Newsgroups: sci.crypt
   Subject: Re: Questions about RSA.
   Message-ID: pcl.93feb16102...@rhodium.ox.ac.uk
   Date: 16 Feb 93 10:26:11 GMT
   References: 1993feb10.183246.29...@ee.ubc.ca
   Distribution: sci.crypt, alt.security
   Organization: Oxford University Computing Services, 13 Banbury Rd Oxford OX2 6NN
   Lines: 59
   In-reply-to: jrusm...@ee.ubc.ca's message of 10 Feb 93 18:32:46 GMT
   
   In article 1993feb10.183246.29...@ee.ubc.ca jrusm...@ee.ubc.ca (RUSMISEL JASON DIRK) writes:
   
   ...
  1) The article suggests that the length of 'n', (where n is the product
  of two large primes p and q, and n is the modulus used with the
  encryption and decryption keys to decode and encode) be 200 digits. [page
  125, top right]  200 digits base 10 is about 664 binary digits.  Now, to
  the question.  The program PGP provides various levels of key length,
  384 bits, 512 bits and 1024 bits.   How were these numbers decided on? I
   
   The PGP values are round numbers (3/8, 1/2 and 1) in kilobits.
   They are (handwaving furiously here) about the right size for
   testing, routine use and archival security.  The RSA values are
   about the right size for routine use.
   
  realize that the state of computer technology at the time the RSA
  article was written was very different than it is now, but have there
  been significant advances in crypto breaking since 1978 that would
  make factoring such large numbers much easier? (Perhaps the Connection
  Machine.)  Really I'm interested in a discussion of the design
  decisions/tradeoffs here.
   
   Let me try to justify the armwaving a little.  First, we must
   realise that the larger the keys, the greater the run-time of the
   encryption and decryption process.  Purists may nit pick, but
   roughly speaking doubling the number of digits in the key
   increases the run time eightfold.  [Purists: I'm assuming an
   n-squared multiplication routine and the simple binary method of
   exponentiation].  So small moduli are good.
   
   So far as is known, the only way to find the RSA secret key
   (other than espionage, bribery, extortion, etc) is to factorize
   the modulus.  Here large moduli are good.  Roughly speaking
   *adding* a few bits to the length of the modulus raises the
   factorizing cost eightfold.  Compare this to the encryption,
   where *doubling* costs us that much.  The meaning of a few bits
   can also be argued over, but it is around 20 bits, depending on
   methods and the size of the number.
   
   Current state of the art described in the open literature
   factorizes 120-digit numbers in a time of order one year, using
   machines of order 1000mips performance.  Again, purists can post
   better numbers if they wish.  384 bits is 116 digits so the PGP
   test keys can, with sufficient effort, be cracked with today's
   technology.  512 bits is well beyond current technology (155
   digits) but *might* be accessible in a few years with better
   algorithms and faster machines and more of them working in
   parallel and ... (I'm handwaving again).  1024 bits, so far as is
   known, should be ok for a few decades.  I'd be happy to be proved
   wrong, as there's lots of other numbers I'd like to be able to
   factorize.
   
   Paul
   --
   Paul Leyland p...@oxford.ac.uk           | Hanging on in quiet desperation is
   Oxford University Computing Service      | the English way.
   13 Banbury Road, Oxford, OX2 6NN, UK     | The time is come, the song is over.
   Tel: +44-865-273200  Fax: +44-865-273275 | Thought I'd something more to say.
   Finger p...@black.ox.ac.uk for PGP key   |
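
(A quick sanity check on the eightfold figure quoted above - my
arithmetic, not Paul's: an n-squared multiplication routine plus simple
binary exponentiation makes an RSA private-key operation cost on the
order of k^3 in the bit length k, so doubling k costs 2^3 = 8x.)

    # k^3 cost model: doubling the modulus length quadruples the cost of
    # each multiplication and doubles the multiplication count -> 8x total.
    for k in (512, 1024, 2048):
        print(k, "bits:", (k // 512) ** 3, "x the 512-bit cost")  # 1, 8, 64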

-- 
-- Jonathan Thornburg [remove -animal to reply] 
jth...@astro.indiana-zebra.edu
   Dept of Astronomy, Indiana University, Bloomington, Indiana, USA
   Washing one's hands of the conflict between the powerful and the
powerless means to side with the powerful, not to be neutral.
  -- quote by Freire / poster by Oxfam

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Question w.r.t. AES-CBC IV

2010-07-09 Thread Jerry Leichter

On Jul 9, 2010, at 1:55 PM, Jonathan Katz wrote:
CTR mode seems a better choice here. Without getting too technical,  
security of CTR mode holds as long as the IVs used are fresh  
whereas security of CBC mode requires IVs to be random.


In either case, a problem with a short IV (no matter what you do) is  
the possibility of IVs repeating. If you are picking 32-bit IVs at  
random, you expect a repeat after only (roughly) 2^16 encryptions  
(which is not very many).
CTR mode is dangerous unless you're also doing message authentication,  
which will require an additional block or thereabouts - so if the goal  
is to minimize bit overhead, it's not really appropriate here (unless  
it's being included anyway).


The small number of encryptions before the IV repeats is the principal
problem I see.  But there are a number of ways around it, depending on
overall system details we don't have.  Two examples:


- If each pair of communicating endpoints can keep state information,
you don't even have to send the IV:  Agree once on a secret key, then
use CTR mode to generate an IV by encrypting the message sequence
number (which might or might not be included in the message itself);
see the sketch after these examples.

- Widen the 32-bit IV you send with other information.  If you
include the end-point ID's, then you only have to worry about possible  
clashes between IV's generated for the same pair of endpoints, or even  
for the pair in the same direction.  (That's not *quite* true, but  
it's close to true.)  If the two ends can agree on the date and time  
even roughly - say to the nearest minute - then you can mix that in as  
well.  Now you have to worry about a 50% chance of a clash if the same
pair starts up 2^16 connections within a minute - probably not likely
enough to worry about.
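
Minimal sketches of both approaches, plus a birthday-bound check of the
2^16 figure (Python with the pyca/cryptography package; every name,
padding choice, and field layout here is mine, just to fix ideas):

    import struct
    import time
    from math import exp
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def derive_iv(iv_key, seq):
        # First approach: send no IV at all.  Encrypting a never-repeating
        # message sequence number under a shared secret key (one block of
        # CTR keystream, in effect) gives both ends the same fresh IV.
        enc = Cipher(algorithms.AES(iv_key), modes.ECB()).encryptor()
        return enc.update(seq.to_bytes(16, "big")) + enc.finalize()

    def widen_iv(short_iv, src_id, dst_id):
        # Second approach: widen the 32-bit wire IV with endpoint IDs and
        # a minute-granularity timestamp, so clashes only matter for the
        # same pair, in the same direction, within the same minute.
        minute = int(time.time()) // 60
        return short_iv + struct.pack(">III", src_id, dst_id, minute & 0xFFFFFFFF)

    def repeat_probability(iv_bits, messages):
        # Birthday approximation: P(repeat) ~ 1 - exp(-q^2 / 2^(bits+1)).
        return 1.0 - exp(-(messages ** 2) / 2.0 ** (iv_bits + 1))

    print(repeat_probability(32, 2 ** 16))  # ~0.39: random 32-bit IVs repeat fast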


-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com