RE: New result in predicate encryption: disjunction support

2008-05-05 Thread Scott Guthery
[Moderator's Note: Top posting is discouraged. --Perry]


What I meant was that the cryptogram decrypted with a correct f(I)=1 key
yields the encrypted message "Meet you at Starbucks at noon"
whereas decryption with a wrong, f(I)=0, key yields "Let's go down to Taco
Bell at midnight".  Padding with 0's doesn't help.

Cheers, Scott 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jonathan Katz
Sent: Sunday, May 04, 2008 1:20 PM
To: cryptography@metzdowd.com
Subject: RE: New result in predicate encryption: disjunction support

On Sun, 4 May 2008, Scott Guthery wrote:

 One useful application of the Katz/Sahai/Waters work is a counter to 
 traffic analysis.  One can send the same message to everyone but 
 ensure that only a defined subset can read the message by proper key 
 management.  What is less clear is how to ensure that decryption with 
 the wrong key doesn't yield an understandable (and actionable) message.

This is actually pretty easy to do by, e.g., padding all valid messages with
sufficiently-many 0s. Decryption with an incorrect key will result in
something random that is unlikely to end with the requisite number of 0s
(and so will be discarded).
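Katz's redundancy check can be sketched in a few lines (a toy illustration; the 16-byte pad length is an arbitrary choice, not from the thread):

```python
PAD_LEN = 16  # arbitrary: 16 zero bytes => false-accept probability 2**-128

def pad(message: bytes) -> bytes:
    # Append the fixed redundancy before encryption.
    return message + b"\x00" * PAD_LEN

def check_and_strip(candidate: bytes):
    # Decryption with the wrong key yields (close to) uniform bytes,
    # which end in PAD_LEN zeros only with negligible probability.
    if len(candidate) >= PAD_LEN and candidate.endswith(b"\x00" * PAD_LEN):
        return candidate[:-PAD_LEN]
    return None  # discard: wrong key
```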
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-05 Thread Scott Guthery
 
 but also a proof that the source code one has is the source of the
implementation.

This is an unsolved problem for code in tamper-resistant devices.  There are
precious few procedures to, for example, determine that the CAC card that
was issued to Pfc. Sally Green this morning bears any relationship
whatsoever to the code that went through FIPS certification. (A hash of the
code is meaningless since the card will simply burp up the right answer.)  I
have seen one such procedure but I have never seen any such procedure
implemented in real cards.

And to Marcos' point, not only do certification labs not look for backdoors
but I once had an employee of such a lab tell me that even if they found one
they are not obliged to enter it in their report unless, of course, they
had been explicitly requested to test for the absence of backdoors.  In that
regard, I have never seen a security profile that contained a claim of no
backdoors.  And I guess you know who is paying big bucks for the
certification report. 

Smart cards from F.  TPMs from C.  Asleep at the wheel.

Cheers, Scott



Re: User interface, security, and simplicity

2008-05-05 Thread James A. Donald

Steven M. Bellovin wrote:
 IPsec operates at layer 3, where there are (generally)
 no user contexts.  This makes it difficult to bind
 IPsec credentials to a user, which means that it
 inherently can't be as simple to configure as ssh.

 Put another way, when you tell an sshd whom you wish
 to log in as, it consults that user's home directory
 and finds an authorized_keys file. How can IPsec -- or
 rather, any key management daemon for IPsec -- do
 that?  Per-user SPDs?  Is this packet for port 80 for
 user pat or user chris?

 I can envision ways around this (especially if we have
 an IP address per user of a system -- I've been
 writing about fine-grained IP address assignment for
 years), but they're inherently a lot more complex than
 ssh.
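The per-user lookup Bellovin describes can be sketched as follows (a simplified illustration; real sshd also checks file ownership, permissions, and key options):

```python
import os

def authorized_keys_for(username: str, home_root: str = "/home"):
    """Simplified sketch of sshd's per-user authorization lookup:
    because ssh is told whom you wish to log in as, it can consult that
    user's own ~/.ssh/authorized_keys.  IPsec, at layer 3, has no such
    user context on which to key the lookup."""
    path = os.path.join(home_root, username, ".ssh", "authorized_keys")
    try:
        with open(path) as f:
            return [ln.strip() for ln in f
                    if ln.strip() and not ln.startswith("#")]
    except FileNotFoundError:
        return []  # unknown user or no keys: nothing is authorized
```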

This is a particular case of the layer problem I have
been ranting about for years:  Private and authenticated
sessions at layer X do not in themselves correspond to
private and authenticated sessions at layer Y, and for
users to arrange their affairs so that layer X does
indeed secure layer Y generally requires users to stand
on their heads and stick their right big toe in their
left ear.



Re: User interface, security, and simplicity

2008-05-05 Thread Ed Gerck

Ian G wrote: (on Kerckhoffs's rules)

=
6. Finally, it is necessary, given the circumstances that command its 
application, that the system be easy to use, requiring neither mental 
strain nor the knowledge of a long series of rules to observe.

=
...
PS:  Although his 6th is arguably the most important


Yes. Usability should be the #1 property of a secure system.

Conventional security thinking says that usability and security are 
like a seesaw; if usability goes up, security must go down, and 
vice-versa. This apparent antinomy actually works as a synergy: with 
more usability in a secure system, security increases. With less 
usability in a secure system, security decreases. A secure system that 
is not usable will be left aside by users.


Cheers,
Ed Gerck



Re: User interface, security, and simplicity

2008-05-05 Thread James A. Donald

Thor Lancelot Simon wrote:

And, in fact, most VPN software of any type fails this test.  My concern
is that an excessive focus on "how hard is it to set this thing up?" can
seriously obscure the important second half of the question: "and if you
set it up in the easiest possible way, is it safe?"


If there is a wrong way to do it, the end user will do it wrong.  Expert 
cryptographers frequently fail to act correctly on their understanding 
of cryptography.  The end user has no chance - and the chances are still 
not all that good even if your end user is highly qualified cryptographer.


What users comprehend, and are used to, is that you set up an account 
with a username and password, and an admin blesses the account with 
appropriate privileges as a result of some out-of-band communication. 
The username and password have to be secured, invisibly to the user, 
against offline and phishing attacks, without requiring any thought or 
vigilance by the user - see my web page at 
http://jim.com/security/how_to_do_VPNs.html for attacks on the 
password model, and defenses against those attacks.


This comes naturally to humans, for humans have long relied on 
shibboleths for security against treachery by outsiders.  Thus the 
computer interface to our clever cryptographic algorithms must resemble 
as closely as possible the ancient human reliance on shibboleths for 
security.




Re: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-05 Thread Eric Rescorla
At Sun, 04 May 2008 20:14:42 -0400,
Perry E. Metzger wrote:
 
 
 Marcos el Ruptor [EMAIL PROTECTED] writes:
  All this open-source promotion is a huge waste of time. Us crackers
  know exactly how all the executables we care about (especially all
  the crypto and security related programs) work.
 
 With respect, no, you don't. If you did, then all the flaws in Windows
 would have been found at once, instead of trickling out over the
 course of decades as people slowly figure out new unintended
 behaviors. Anything sufficiently complicated to be interesting simply
 cannot be fully understood by inspection, end of story.

Without taking a position on the security of open source vs. closed
source (which strikes me as an open question), I agree with Perry
that deciding whether a given piece of software has back doors is
not really possible for a nontrivial piece of software. Note that
this is a very different problem from finding a single vulnerability
or answering specific (small) questions about the code [0].

-Ekr

[0] That said, I don't think that determining whether a nontrivial
piece of software has security vulnerabilities is difficult. The
answer is yes.



Re: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-05 Thread Florian Weimer
* Perry E. Metzger:

 Marcos el Ruptor [EMAIL PROTECTED] writes:

 Nonsense. Total nonsense. A half-decent reverse engineer does not
 need the source code and can easily determine the exact operation of
 all the security-related components from the compiled executables,
 extracted ROM/EPROM code or reversed FPGA/ASIC layout

 I'm glad to know that you have managed to disprove Rice's
 Theorem.

Call me a speciesist, but it's not clear if Rice's Theorem applies to
humans.

While Marcos' approach is somewhat off the mark (source-code
equivalent that works for me vs. conformance of potentially
malicious code to a harmless spec), keep in mind that object code
validation has been performed for safety-critical code for quite a
while.  The idea is that code for which some soundness property cannot
be shown simply fails validation.  It doesn't matter if the validator
is not clever enough, or if the code is actually bogus.

(And for most (all?) non-trivial software, source code acquisition
costs are way below validation costs, so public availability of
source code is indeed a red herring.)

-- 
Florian Weimer[EMAIL PROTECTED]
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99



Re: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-05 Thread Ben Laurie

Perry E. Metzger wrote:

Marcos el Ruptor [EMAIL PROTECTED] writes:

To be sure that implementation does not contain back-doors, one needs
not only some source code but also a proof that the source code one
has is the source of the implementation.

Nonsense. Total nonsense. A half-decent reverse engineer does not
need the source code and can easily determine the exact operation of
all the security-related components from the compiled executables,
extracted ROM/EPROM code or reversed FPGA/ASIC layout


I'm glad to know that you have managed to disprove Rice's
Theorem. Could you explain to us how you did it? I suspect there's an
ACM Turing Award awaiting you.

Being slightly less sarcastic for the moment, I'm sure that a good
reverse engineer can figure out approximately what a program does by
looking at the binaries and approximately what an ASIC does given
good equipment to get the layout. What you can't do, full stop, is
know that there are no unexpected security related behaviors in the
hardware or software. That's just not possible.


I think that's blatantly untrue. For example, if I look at an AND gate, 
I can be absolutely sure about its security properties.


Rice's theorem says you can't _always_ solve this problem. It says 
nothing about figuring out special cases.
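Ben's point can be made concrete: a finite, non-Turing-complete component can have its entire behavior, and hence any property of it, checked exhaustively (a trivial sketch):

```python
from itertools import product

def and_gate(a: int, b: int) -> int:
    # The component under scrutiny: a two-input AND gate.
    return a & b

def verify_and_gate() -> bool:
    # Only four input cases exist, so "no unexpected behavior" is
    # decidable here by brute force.  Rice's theorem, which concerns
    # semantic properties of general (Turing-complete) programs, says
    # nothing against verifying a finite special case like this.
    return all(and_gate(a, b) == (1 if a == 1 and b == 1 else 0)
               for a, b in product((0, 1), repeat=2))
```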


Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html   http://www.links.org/

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff



Re: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-05 Thread Perry E. Metzger

Ben Laurie [EMAIL PROTECTED] writes:
 I think that's blatantly untrue. For example, if I look at an AND
 gate, I can be absolutely sure about its security properties.

An AND gate isn't Turing Equivalent.

 Rice's theorem says you can't _always_ solve this problem. It says
 nothing about figuring out special cases.

Any modern processor is sufficiently larger than an AND gate that it
is no longer tractable. It isn't even possible to describe the
security properties one would need to (formally) prove.

Perry



Re: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-05 Thread Perry E. Metzger

Florian Weimer [EMAIL PROTECTED] writes:
 * Perry E. Metzger:

 Marcos el Ruptor [EMAIL PROTECTED] writes:

 Nonsense. Total nonsense. A half-decent reverse engineer does not
 need the source code and can easily determine the exact operation of
 all the security-related components from the compiled executables,
 extracted ROM/EPROM code or reversed FPGA/ASIC layout

 I'm glad to know that you have managed to disprove Rice's
 Theorem.

 Call me a speciesist, but it's not clear if Rice's Theorem applies to
 humans.

If it doesn't apply to humans, that implies that humans are somehow
able to do computations that Turing Machines can't. I am sufficiently
skeptical of that to say, flat out, I don't believe it. If anything,
Turing Machines are more capable -- humans are only equivalent to
(large) finite state machines.

 While Marcos' approach is somewhat off the mark (source-code
 equivalent that works for me vs. conformance of potentially
 malicious code to a harmless spec), keep in mind that object code
 validation has been performed for safety-critical code for quite a
 while.

Certainly. You can use formal methods to prove the properties of
certain specially created systems -- the systems have to be produced
specially so that the proofs are possible. What you can't do in
general is take an existing system and prove security properties after
the fact.

Perry



Re: [mm] OpenSparc -- the open source chip (except for the crypto parts)

2008-05-05 Thread Ben Laurie

Perry E. Metzger wrote:

Ben Laurie [EMAIL PROTECTED] writes:

I think that's blatantly untrue. For example, if I look at an AND
gate, I can be absolutely sure about its security properties.


An AND gate isn't Turing Equivalent.


Nor are most algorithms.


Rice's theorem says you can't _always_ solve this problem. It says
nothing about figuring out special cases.


Any modern processor is sufficiently larger than an AND gate that it
is no longer tractable. It isn't even possible to describe the
security properties one would need to (formally) prove.


I won't debate that, but its not a consequence of Rice's Theorem.

--
http://www.apache-ssl.org/ben.html   http://www.links.org/

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff



Comments on SP800-108

2008-05-05 Thread Jack Lloyd
Hi,

As a standard, this specification is a disaster. Just from a quick
read, I see the following:

However, alternative orders for the input data fields may be used for
a KDF.

with a length specified by the function, an algorithm, or a protocol
which uses T as an input.

In feedback mode, the output of the PRF is computed using the result
of the previous iteration and, optionally, using a counter as the
iteration variable(s).

With sufficient options, all implementations are non-interoperable. I
think you've managed to reach that point here. As an implementor, my
instinct is to stay well away from this entire mess and just use IEEE
1363's KDF2, which is:

  - simple enough that anyone can implement it easily and without
 interop difficulties, or requiring protocol negotiations (and
 then the implementor has to do the negotiation properly - which
 opens up new avenues for security holes)

  - secure enough that it doesn't matter (i.e., that the likelihood
 that a security flaw in the KDF is the critical problem is far
 lower than a security flaw elsewhere in the system)

My recommendation: choose something that will work for nearly
everyone, and mandate it directly. For instance, why make the counter
length configurable? In 99% of implementations, the thing that will
make sense is a 32-bit counter (to paraphrase the famous if apocryphal
Bill Gates quote, "4 gigabytes of keying material should be enough for
anybody"), but by refusing to mandate this behavior, you force every
implementor and application designer to choose something and then
negotiate on the off chance that some other length was chosen, or that
the other side is using variable length encodings - something which is
allowed by the spec, as best as I can tell, and which opens up some
pretty big (at least theoretical) holes.
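For reference, a minimal sketch of the kind of fixed-choice KDF Lloyd advocates, in the shape of IEEE 1363's KDF2: hash the secret with a 32-bit big-endian counter starting at 1 and concatenate blocks (the choice of SHA-256 here is illustrative, not mandated by the thread):

```python
import hashlib
import struct

def kdf2(shared_secret: bytes, out_len: int,
         hash_fn=hashlib.sha256) -> bytes:
    # Concatenate Hash(secret || counter) blocks for counter = 1, 2, ...
    # with the counter encoded as a fixed 32-bit big-endian integer --
    # nothing is configurable, so nothing needs to be negotiated.
    out = b""
    counter = 1
    while len(out) < out_len:
        out += hash_fn(shared_secret + struct.pack(">I", counter)).digest()
        counter += 1
    return out[:out_len]
```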

I have no comments about the actual security aspects of it; it looks
fine to my eye, but given the interoperability issues listed above I
don't plan on implementing any of these KDFs anyway, so I can't say I
much care whether they are actually secure or not. I would advise you
to remember that crypto does not exist in a vacuum, and should help,
not hinder, the overall security of a system.

Regards,
  Jack Lloyd



Re: New result in predicate encryption: disjunction support

2008-05-05 Thread Ariel Waissbein
[Moderator's note: Again, top posting is discouraged, and not editing
quoted material is also discouraged. --Perry]

Hi list,

Interesting. Great work! I had been looking for *generic* predicate
encryption for some time. Encryption over specific predicates is much
older. Malware (e.g., virus) and software protection schemes have been
using some sort of predicate encryption or trigger for over two
decades in order to obfuscate code. For example, an old virus used to
scan hard drives looking for BBS configuration files in a similar
manner and some software protection schemes have encrypted pieces of
code that are decrypted only if some integrity checks (predicates) over
other pieces of the program are passed.

Triggers/predicates are very promising. Yet, they are only useful in
certain applications, since eavesdropping one decryption is enough to
recover the keys and plaintext.

I co-authored a paper where we used this same concept in a software
protection application ([1]) and later we formalized this concept, which
we called secure triggers, in a paper eventually published at TISSEC
([2]). We were only able to construct triggers for very specific
predicate families, e.g.,
  - p(x)=1 iff x=I for some I in {0,1}^k
  - q(x,y,z,...)=1 iff x=I_1, y=I_2, z=I_3,...; and finally
  - r(x)=1 iff x_{j_1}=b_1,...,x_{j_k}=b_k for some b_1,...,b_k in {0,1}
and indexes j_1,...,j_k (|x|=k).
While these predicates do not cover arbitrary large possibilities, they
are implemented by efficient algorithms and require assuming only the
existence of IND-CPA secure symmetric ciphers. In [2] we came up with
more applications other than software protection ;)
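The first predicate family above (p(x)=1 iff x=I) can be sketched with nothing more than a hash function (a toy illustration only; the XOR keystream is for demonstration and is not the construction from [2]):

```python
import hashlib

def _stream(key: bytes, n: int) -> bytes:
    # Toy keystream for illustration only -- not a vetted cipher.
    out = b""
    i = 0
    while len(out) < n:
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        i += 1
    return out[:n]

def make_trigger(secret_input: bytes, payload: bytes):
    # Store only a tag of the secret input plus the encrypted payload;
    # neither value reveals secret_input or payload on its own.
    tag = hashlib.sha256(b"tag:" + secret_input).digest()
    key = hashlib.sha256(b"key:" + secret_input).digest()
    ct = bytes(a ^ b for a, b in zip(payload, _stream(key, len(payload))))
    return tag, ct

def fire_trigger(x: bytes, tag: bytes, ct: bytes):
    # Decrypt iff the candidate input satisfies p(x)=1, i.e. x = I.
    if hashlib.sha256(b"tag:" + x).digest() != tag:
        return None
    key = hashlib.sha256(b"key:" + x).digest()
    return bytes(a ^ b for a, b in zip(ct, _stream(key, len(ct))))
```

As the thread notes, one eavesdropped successful decryption reveals both the satisfying input and the payload, which is why such triggers suit only certain applications.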

[1] Diego Bendersky, Ariel Futoransky, Luciano Notarfrancesco, Carlos
Sarraute and Ariel Waissbein. Advanced Software Protection Now. Core
Security Technologies Tech report.
http://www.coresecurity.com/index.php5?module=ContentMod&action=item&id=491

[2] Ariel Futoransky, Emiliano Kargieman, Carlos Sarraute, Ariel
Waissbein. Foundations and applications for secure triggers. ACM TISSEC,
Vol 9(1) (February 2006).

Cheers,
Ariel

Ivan Krstić wrote:
 This is fairly interesting: AFAIK the first generalization of predicate
 encryption to support disjunctions. I find the result mostly interesting
 mathematically, since I expect we won't be seeing predicate encryption
 in widespread use anytime soon due to complexity and regulatory
 concerns. --IK
 
 
 
 Predicate Encryption Supporting Disjunctions, Polynomial Equations, and
 Inner Products
 Jonathan Katz and Amit Sahai and Brent Waters
 
 Preprint: http://eprint.iacr.org/2007/404
 
 Abstract: Predicate encryption is a new paradigm generalizing, among
 other things, identity-based encryption. In a predicate encryption
 scheme, secret keys correspond to predicates and ciphertexts are
 associated with attributes; the secret key SK_f corresponding to the
 predicate f can be used to decrypt a ciphertext associated with
 attribute I if and only if f(I)=1. Constructions of such schemes are
 currently known for relatively few classes of predicates.
 We construct such a scheme for predicates corresponding to the
 evaluation of inner products over N (for some large integer N). This, in
 turn, enables constructions in which predicates correspond to the
 evaluation of disjunctions, polynomials, CNF/DNF formulae, or threshold
 predicates (among others). Besides serving as what we feel is a
 significant step forward in the theory of predicate encryption, our
 results lead to a number of applications that are interesting in their
 own right.
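The disjunction-to-inner-product reduction the abstract mentions can be illustrated without any cryptography (a toy sketch; the modulus is a stand-in, and the real scheme hides both vectors inside keys and ciphertexts):

```python
# Predicate "x == a or x == b" as an inner-product-equals-zero test:
# p(X) = (X - a)(X - b) has coefficient vector (ab, -(a+b), 1), and
# <coeffs, (1, x, x^2)> = p(x), which is 0 mod N exactly when x is a root.
N = 2**31 - 1  # stand-in modulus; the scheme uses a large composite N

def key_vector(a: int, b: int):
    # Coefficients of p(X) = X^2 - (a+b)X + ab, low degree first.
    return (a * b % N, -(a + b) % N, 1)

def attr_vector(x: int):
    # Powers of the attribute: (1, x, x^2) mod N.
    return (1, x % N, x * x % N)

def predicate_holds(key, attr) -> bool:
    return sum(k * v for k, v in zip(key, attr)) % N == 0
```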
 
 -- 
 Ivan Krstić [EMAIL PROTECTED] | http://radian.org
 



Re: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-05 Thread Matt Blaze

Nonsense. Total nonsense. A half-decent reverse engineer does not
need the source code and can easily determine the exact operation of
all the security-related components from the compiled executables,
extracted ROM/EPROM code or reversed FPGA/ASIC layout


I'm glad to know that you have managed to disprove Rice's
Theorem. Could you explain to us how you did it? I suspect there's an
ACM Turing Award awaiting you.

Being slightly less sarcastic for the moment, I'm sure that a good
reverse engineer can figure out approximately what a program does by
looking at the binaries and approximately what an ASIC does given
good equipment to get the layout. What you can't do, full stop, is
know that there are no unexpected security related behaviors in the
hardware or software. That's just not possible.




In particular, while it's certainly true that an expert can often discover
unexpected security-related behavior by careful examination of source
(or object) code, the absence of such a discovery, no matter how expert
the examination, is no guarantee of anything, for general software and
hardware designs.

And on a slight tangent, this is why it was only with great reluctance
that I agreed to participate in the top-to-bottom voting system reviews
conducted last year by California and Ohio.  If flaws were found (as they
were), that would tell us that there were flaws.  But if no flaws had
been found, that would tell us nothing about whether any such flaws were
present.  It might just have been that we were bad at our job, that the
flaws were subtle, or that something prevented us from noticing them.  Or
maybe there really are no flaws.  There'd be no way to know for sure.

I ultimately decided to participate because I suspected that it was
likely, based on the immaturity of the software and the apparent lack of
security engineering in the design process for these systems, that we
would find vulnerabilities.  But what happens when those are fixed?
Should we then conclude that the system is now secure?  Or should we ask
another set of experts to take another look?

After some number of iterations of this cycle, the experts might stop
finding vulnerabilities.  What can we conclude at that point?

It's a difficult question, but the word guarantee almost certainly
does not belong in the answer (unless preceded by the word no).

-matt

