Re: [cryptography] True RNG: elementary particle noise sensed with surprisingly simple electronics

2016-09-15 Thread Thierry Moreau

Thank you for this quick feedback.

On 15/09/16 09:04 PM, d...@deadhat.com wrote:

Hi!

A true random number generation strategy is no better than its
trustworthiness. Here is a suggestion for a simple scheme which rests on
a common digital electronic design.


[...]

Unavoidable current noise source:
   - thermal noise
   - excess current noise caused by the above resistor material
construction
Noise sources to be reduced (as a matter of sampling approach coherency)
   - electrostatic ...
   - electromagnetic ...

Any thoughts?



Yes.

A) Can you build 100,000,000 and expect them all to work?


No. The stated goal is to provide some scheme that a few wise guys may 
trust. So, building 20 units and having you as a satisfied user would be 
a more realistic goal. Microsoft and Apple seem to be trusted by the crowd.



B) Can you expect those 100,000,000 resistors to behave in a
consistent manner, or will the supplier switch compounds on you while you
aren't looking? If you try to buy a paper-oil cap today, you'll get a
poly pretending to be paper-oil. I assume it's the same for obsolete
resistor compounds.


This raises the question of characterizing cheap material procured 
from mass-market channels. Obviously that is part of the detailed 
crafting process.


Realistically, one should be able to avoid the trouble here, e.g. by
buying a few rolls of 5,000 resistors from a few manufacturers.


C) What are the EM injection opportunities to measured noise? Can you
saturate the inputs?


Also part of the implementation details to watch. This small circuit may 
be located in a Faraday cage. Hopefully its internals will remain 
tamper-evident for a very paranoid user.


About input saturation, the expected result of experimentation (with 
analysis) is some confidence that current noise is the main source of 
data fluctuation (I do not state which statistic to apply here for "data 
fluctuation"), and then EM could hardly induce the relevant resistor 
currents without, e.g., a large coil at a short distance. Admittedly, 
this is not a definitive answer for a very paranoid user.


Do you have a scheme that is overall immune to EM injection 
opportunities? Is the complexity of such a scheme low enough that every 
external influence opportunity may be ruled out?



D) How are you planning to characterize the min entropy of the source? We
know the min entropy of well defined Gaussian noise, but what about shot,
1/f and all the other weird distributions?
   D_a) Can you distinguish that noise from system noise that might be
systematic rather than entropic?


Two aspects: entropy, and the inherently compound measurement of multiple 
(and little understood) noise sources ("system noise" might be 
rather vague for a physicist).


About compound measurement, careful crafting of the Wheatstone bridge 
(and its excitation voltage source) is expected to provide some 
assurance that current noise (thermal noise plus excess current noise 
from resistor material properties) is the foremost contributor to data 
fluctuations.


Min-entropy characterization: no definite plan. The raw 24-bit samples 
will be available for attempts at distribution characterization. I 
suspect, however, that a paranoid user will fear that after gigabytes of 
data are fed to the characterization process, the source might suddenly 
turn low-entropy once the data is switched to the cryptographic random 
secret generation process.
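A first-cut characterization could apply the most-common-value estimator (one of the estimators in NIST SP 800-90B) to the raw samples. A minimal Python sketch, using simulated data (not measurements from the proposed circuit):

```python
import collections
import math
import random

def min_entropy_per_sample(samples):
    """Most-common-value estimate: H_min = -log2(max_x P[X = x])."""
    counts = collections.Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)

# Simulated 24-bit ADC readings: a narrow band of output codes around
# mid-scale, roughly what one expects when only the noise floor moves
# the converter.  (Simulated data, not from the proposed circuit.)
random.seed(1)
samples = [0x800000 + random.randint(-40, 40) for _ in range(100_000)]

print(f"min-entropy estimate: {min_entropy_per_sample(samples):.2f} bits per sample")
```

Note that any such offline estimate cannot address the failure mode feared above (a source turning low-entropy after characterization); that calls for continuous health tests on the live source, not a one-time assessment.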



E) Do you have an extractor algorithm in mind that is proven to work at
the lower bound for the min entropy you expect from the source?


I might have ideas in this area of concern, but a "proven extractor 
algorithm" is something orthogonal to the source: a proven algorithm would 
carry its proof relative to a given abstract "min entropy" assumption.
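To illustrate how an extractor stands apart from the source, here is the classical von Neumann extractor, one of the simplest constructions with a proof of output unbiasedness. Its proof rests on an independence assumption that a physical source may not deliver, which is exactly the "given abstract concept" point:

```python
import random

def von_neumann(bits):
    """Von Neumann extractor: pair up input bits, emit 0 for the pair (0,1)
    and 1 for (1,0), discard (0,0) and (1,1).  The output is unbiased
    provided the input bits are independent with a constant bias."""
    out = []
    for b0, b1 in zip(bits[::2], bits[1::2]):
        if b0 != b1:
            out.append(b0)
    return out

# A heavily biased but independent stream still yields ~unbiased output.
random.seed(7)
biased = [1 if random.random() < 0.9 else 0 for _ in range(200_000)]
extracted = von_neumann(biased)
print(f"{len(extracted)} bits kept; fraction of ones = "
      f"{sum(extracted) / len(extracted):.3f}")
```

The heavy discard rate (most pairs are (1,1) for a 90%-biased input) also shows why practical designs prefer conditioning functions over simple pairwise extraction.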



F) Are you wanting computational prediction bounds at the output of the
extractor, or do you want H_inf(X) = 1?
   F_1) If you want the entropy answer, then you need to consider multiple
input extractors.
   F_2) Oh, and quantum-safe extractors are a thing now.


These questions, which I do not fully understand, would be orthogonal to 
the source.



G) Are any certifications required? In my experience P(Y) -> 1 as t ->
infinity. Projects that swore up and down that they weren't doing FIPS
would come back 2 years later, with a finished chip, and ask "Can this be
FIPS certified?", after a customer made their requirements clear.


This question need not be addressed now (P(Y) unknown at t=0!).


That's my usual list of questions. They may or may not apply to your
situation.


Thanks for sharing this.

- Thierry Moreau

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] True RNG: elementary particle noise sensed with surprisingly simple electronics

2016-09-15 Thread Thierry Moreau

Hi!

A true random number generation strategy is no better than its 
trustworthiness. Here is a suggestion for a simple scheme which rests on 
a common digital electronic design.


While helping an undergrad student with a weight scale project, I 
encountered an A-to-D conversion circuit datasheet where some 
fundamental noise was explicitly quantified.


After a little research, I learned that a foremost unavoidable noise 
source is resistor "current noise" (i.e. noise arising from an elementary 
physics phenomenon):



Thick-film resistors are made of a mixture of conductive particles 
(metallic grains) with a glassy binder and an organic fluid. This “ink” 
is printed on a ceramic substrate and heated in an oven. During this 
firing process the conductive particles within the glassy matrix are 
fused to the substrate and form the resistor.


[All types of resistors] have in common that the total noise can be 
divided into thermal noise and excess noise. Excess current noise is the 
bunching and releasing of electrons associated with current flow, e.g. 
due to fluctuating conductivity based on imperfect contacts within the 
resistive material. The amount of current-noise depends largely on the 
resistor technology employed.


[T]hick film resistors show large excess noise.


Source: Frank Seifert, "Resistor Current Noise Measurements," April 14, 2009

The classical weight scale design is based on a 24-bit A-to-D (analog 
to digital) conversion with the sensing circuit made of a Wheatstone 
bridge (a simple resistor network arrangement) that amplifies minute 
variations in individual resistor voltage caused by strain gauge 
deformation (a small directional stress on a strain gauge induces a 
change in resistance). The basic idea of turning this classical 
design into a true noise sensing application is this: replace the 
(minutely) variable resistor with a fixed resistor having a high noise level.
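For a sense of scale, the thermal (Johnson-Nyquist) noise across a resistor is v_rms = sqrt(4*k*T*R*Δf). A quick Python check with illustrative values (the 1 Mohm resistance and 10 Hz bandwidth are assumptions for illustration, not figures from the HX711 or ADS1232 datasheets):

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def johnson_noise_vrms(r_ohm, t_kelvin, bandwidth_hz):
    """RMS thermal noise voltage across a resistor: sqrt(4*k*T*R*df)."""
    return math.sqrt(4 * K_BOLTZMANN * t_kelvin * r_ohm * bandwidth_hz)

# Illustrative values: 1 Mohm bridge resistor at room temperature,
# 10 Hz effective measurement bandwidth (assumed, not datasheet values).
v = johnson_noise_vrms(r_ohm=1e6, t_kelvin=300, bandwidth_hz=10)
print(f"thermal noise: {v * 1e6:.3f} uV rms")
```

At 24-bit resolution over a few volts of full scale, one LSB is a fraction of a microvolt, so a noise floor of this order already toggles the low-order bits; thick-film excess current noise would add on top of it.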


The surprisingly simple electronics is illustrated by two A-to-D 
integrated circuits (Avia Semiconductor HX711 and Texas Instruments 
ADS1232) and the open hardware design for a weight scale microprocessor 
board (SparkFun OpenScale).


Obviously the devil is in the details, and some refinements are desirable 
since a) the noise sensing application is better served by a larger 
signal amplification, and b) confidence in the noise sampling 
approach is (presumably) raised if noise sources other than current 
noise are reduced with appropriate circuit design techniques. But none 
of this is rocket science (e.g. compared with other elementary-physics 
noise sampling such as so-called quantum noise generators).


Unavoidable current noise sources:
 - thermal noise
 - excess current noise caused by the resistor material construction above
Noise sources to be reduced (as a matter of sampling approach coherency):
 - electrostatic ...
 - electromagnetic ...

Any thoughts?

Regards,

- Thierry Moreau


[cryptography] Why TLS? Why not modern authenticated D-H exchange?

2016-09-06 Thread Thierry Moreau

Dear applied cryptographers ...

The STS (Station-to-Station) protocol evolved into Hugo Krawczyk's SIGMA 
(SIGn-and-MAc) variant, which is now found in IPsec IKE and HIP (Host 
Identity Protocol, IETF RFC 7401).


However, if one wants to consider this as an alternative to TLS, 
documentation sources are few, and either too academic or too overloaded 
with protocol details that detract from the security properties.


I faced this situation while looking for a basic authenticated key 
establishment protocol. STS was the very first secure protocol to 
which I was exposed, decades ago, but recently I could not recognize its 
features/properties in any TLS deployment profile. So I researched the 
impact of STS on modern protocols and recorded my findings in this document:


"The Classical Authenticated Diffie-Hellman Exchange Revisited (with the 
Bladderwort Protocol Feature Addition)"


http://www.connotech.com/pract_sec_authed_dh_xchng.html

Abstract:

When a secure data communications channel between two distant server 
systems must be established, TLS (Transport Layer Security) is the 
solution that comes first to the mind of IT security experts. Departing 
from this default common wisdom, we revisit the authenticated 
Diffie-Hellman exchange as a solution well rooted in the early ideas in 
the field of public key cryptography, refined by the dedication of 
theoreticians, and entrenched in a few (less conspicuous) Internet 
secure protocol standards, namely IPsec IKE and HIP. Under the name 
Bladderwort, we also propose a minor protocol addition for streamlined 
server operations where a long-term private signature key is better kept 
off-line during the operational phase of the secure communications channel.

===
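The sign-and-MAC idea behind the abstract can be sketched end to end. A toy Python sketch of a SIGMA-style exchange (the tiny DH group p=23 and textbook RSA signatures are illustration-only assumptions, nothing like real IKE/HIP parameters, and only one peer's authentication leg is shown):

```python
import hashlib
import hmac
import secrets

# Toy parameters (illustration only): tiny safe-prime DH group and
# textbook RSA; real deployments use >=2048-bit groups or ECC.
DH_P, DH_G = 23, 2
RSA_N, RSA_E, RSA_D = 3233, 17, 2753   # n = 61 * 53, e*d = 1 mod lcm(60, 52)

def rsa_sign(data):
    m = int.from_bytes(hashlib.sha256(data).digest(), "big") % RSA_N
    return pow(m, RSA_D, RSA_N)

def rsa_verify(data, sig):
    m = int.from_bytes(hashlib.sha256(data).digest(), "big") % RSA_N
    return pow(sig, RSA_E, RSA_N) == m

# 1. Each peer picks an ephemeral exponent and sends g^x mod p.
x_a = secrets.randbelow(DH_P - 2) + 1
x_b = secrets.randbelow(DH_P - 2) + 1
pub_a = pow(DH_G, x_a, DH_P)
pub_b = pow(DH_G, x_b, DH_P)

# 2. Both sides derive the shared secret and a MAC key from it.
shared_a = pow(pub_b, x_a, DH_P)
shared_b = pow(pub_a, x_b, DH_P)
mac_key = hashlib.sha256(b"mac" + str(shared_a).encode()).digest()

# 3. Peer A signs the exchanged ephemerals and MACs its identity: the
#    sign-and-MAC split is the SIGMA rationale (the signature binds the
#    exchange; the MAC binds the signer's identity to the session key).
transcript = f"{pub_a}|{pub_b}".encode()
sig = rsa_sign(transcript)
id_mac = hmac.new(mac_key, b"identity-A", hashlib.sha256).hexdigest()

# 4. Peer B checks both before accepting the channel.
ok = (shared_b == shared_a
      and rsa_verify(transcript, sig)
      and hmac.compare_digest(
          id_mac, hmac.new(mac_key, b"identity-A", hashlib.sha256).hexdigest()))
print(ok)
```

The mirror-image leg (B signing and MACing its own identity) completes mutual authentication; the off-line signature key idea in the abstract concerns where RSA_D lives, not this message flow.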

I guess the end result holds important lessons, as a straightforward 
solution path for a basic and recurring issue in IT security. Yet the 
difficult aspects of applied cryptography remain difficult, and the 
document is explicit about them.


Thus, why TLS?

- Thierry Moreau


Re: [cryptography] [Cryptography] Show Crypto: prototype USB HSM

2016-04-13 Thread Thierry Moreau

On 13/04/16 03:12 AM, Tony Arcieri wrote:

On Tue, Apr 12, 2016 at 7:26 PM, Ron Garret <r...@flownet.com
<mailto:r...@flownet.com>> wrote:

This HSM is much more general-purpose than a U2F token.


Well, that's true, but it's also hundreds of times bigger than a token
in the Yubikey "nano" form factor, which is actually convenient to keep
permanently in the USB slot of a laptop. Your physical design seems
pretty unwieldy for laptops (see also Yubico's keychain designs).

Yubikey "nano" factor tokens like the NEO-n have also supported more
general purposes than a U2F token (e.g. CCID interface, OpenPGP applets,
see also PIV)

I swear I'm not a paid shill for Yubico, but I'm a fan of small
display-free hardware tokens. While a token like what you've built might
provide Maximum Security under pessimistic threat models, its large size

 =

Who wants to be optimistic with respect to threat models in the current 
IT landscape?


Do you?

(I much liked what I glimpsed from the original post.)

- Thierry Moreau


makes it look rather inconvenient to me.

--
Tony Arcieri




Re: [cryptography] USG moves to vacate hearing tomorrow due to possible method to unlock iPhone

2016-03-21 Thread Thierry Moreau

On 21/03/16 10:48 PM, John Young wrote:

USG moves to vacate hearing tomorrow due to possible method to unlock
iPhone

https://cryptome.org/2016/03/usg-apple-191.pdf


If the USG preferred no ruling on the arguments presented, they would 
not behave differently.


If the FBI needed a good forensic tool created for them more than they 
need the data on this specific iPhone (as I initially guessed), the risk 
of a bad ruling for them would be a major step back in their creative 
procurement of forensic tools. Hence the USG would prefer no ruling.


Regards,

- Thierry Moreau





[cryptography] Diffie-Hellman after the Logjam paper versus IETF RFCs ...

2015-11-19 Thread Thierry Moreau


Hi!

The Logjam paper (https://weakdh.org/) makes three recommendations for 
Diffie-Hellman parameters: transition to ECC-DH, use larger (>=2048-bit) 
DH primes, and avoid fixed 1024-bit DH primes.


In reviewing the current standardized DH parameters, I came across two 
questions.


First some references with an historical perspective.

Oakley primes were introduced in RFC2409 section 6 (768 and 1024 bits). 
Larger primes were standardized in RFC3526 (confirmed widely used 1536 
bits plus 2048, 3072, 4096, 6144, and 8192 bits). The DH generator is 2.


Very recently 
(https://datatracker.ietf.org/doc/draft-ietf-tls-negotiated-ff-dhe/ 
appendix A) the Oakley prime number generation strategy was replayed, 
substituting the binary expansion of Euler's number e for the binary 
expansion of pi as an unbiased, trusted pseudo-random sequence. Note 
that the DH generator remains 2 in this new document.
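The retention of generator 2 can be checked mechanically: for a safe prime p = 2q + 1, any g with g != 1 and g^q = 1 (mod p) generates the order-q subgroup. A toy Python check (the tiny safe prime p = 23 is for illustration only, not a real parameter set):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def check_dh_params(p, g):
    """For a safe prime p = 2q + 1, confirm g generates the order-q subgroup."""
    q = (p - 1) // 2
    assert is_probable_prime(p) and is_probable_prime(q), "p must be a safe prime"
    return g % p != 1 and pow(g, q, p) == 1

# Toy safe prime: p = 23, q = 11; the generator 2 has order 11,
# since 2^11 = 2048 = 89*23 + 1, i.e. 2^11 = 1 (mod 23).
print(check_dh_params(23, 2))
```

For the real Oakley primes the same check applies with the RFC 3526 hex constants; 2 generates the order-q subgroup exactly when 2 is a quadratic residue mod p.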


In the meantime, two standardization actions took place.

The authors of an EAP variant, RFC6124 (section 7.1), found it useful to 
modify the Oakley standard parameters by changing the DH generator value 
from 2 to a small prime number specific to each DH prime 
(respectively 5, 31, 11, 5, and 5 for Oakley primes of 1024, 1536, 2048, 
3072, and 4096 bits).


Finally, RFC5114 seems to scoop NIST on its own ground, introducing DH 
parameter sets with a defined and reduced size "prime order subgroup" 
with a generator value as large as the DH prime. I wonder if this 
standardization action actually turned a test vector example (originally 
intended as an example of a random parameter generation) into a fixed DH 
parameter set of the type found problematic in the Logjam paper. Indeed, 
the RFC5114 text refers to the NIST CSRC page 
http://csrc.nist.gov/groups/ST/toolkit/examples.html from which one may 
come to the document 
http://csrc.nist.gov/groups/ST/toolkit/documents/Examples/KS_FFC_All.pdf 
which is over 100 pages of test data without textual explanations or 
author attribution.


Then the two questions:

Q.1 Is the generator value selection per RFC6124 a better alternative 
than the fixed generator value 2?


Q.2 Is there any benefit in the size reduction for the prime order 
subgroup standardized by RFC5114 (beyond complying with the NIST addiction 
to cryptographic parameters exactly fit to a given security parameter)?


Conclusion

The default answers are yes to Q.1 and no to Q.2. Therefore, ongoing 
standardization work is a dubious place for basic wisdom on using a 
cryptographic primitive. RFC6124 has it almost right (it should have 
omitted the 1024-bit prime size) but seems outside mainstream IETF work.


Apologies to IETF'ers for not making a contribution out of my opinion 
(you may use this message as you see fit).


Thanks in advance for comments!

- Thierry Moreau




[cryptography] Curious about FIDO Alliance authentication scheme

2015-09-23 Thread Thierry Moreau

Hi,

Here is a quick review of the FIDO Alliance authentication proposal [1]. 
After looking superficially at the specifications documentation [2], I 
came to the tentative summary below. I did not feel a need to delve into 
the companion documentation set [3].



Core cryptographic principles:

(A) The scheme uses public key crypto signatures (PK signatures) without 
security certificates, for client authentication, in client-server 
applications.


(B) Each server entity (relying party) maintains its own database of 
public keys to account identity relationships.


(C) The scheme documentation suggests a unique PK signature key pair for 
each triplet .


(D) Account registration is devoid of special provisions for client 
identity verification: the client device selects a PK signature key pair, 
signs a protocol-negotiation-derived, context-dependent data stream, and 
that's it.

Best practice security principles:

(E) The scheme documentation includes a taxonomy of mechanisms with 
which the client device may protect the activation of the device PK 
digital signature capability.


(F) In the account registration protocol exchanges, such client local 
mechanisms are negotiated.


(G) This arrangement is herein qualified as "best practice" because the 
server has no cryptographic integrity protection for client assertions 
in this account registration protocol exchange.


Scheme adoption strategy:

(H) The initial teaser is the appeal of an anti-phishing solution 
(alternative to password authentication).


(I) Levels the playing field for biometric/two-factor/tamper-processor 
authentication vendors.


(J) Not sure about browser support barrier to entry strategy.

Please use this summary with caution since it is very much a guesstimate.


Two questions:

1) any comment about the above summary ...

2) assuming the authentication scheme becomes widely deployed, what are 
the opportunities for the bad guys (those being creative, patient, and 
resourceful at attacking IT security schemes)? (Vulnerabilities in the 
client device are countless, dependent on local arrangements, and mostly 
well understood; it is the protocol vulnerabilities that would be 
relevant in view of the scheme's novelty.)


Thanks in advance for feedback.


- Thierry

[1] https://fidoalliance.org/

[2] 
http://fidoalliance.org/wp-content/uploads/2014/12/fido-uaf-v1.0-ps-20141208.zip 
-- FIDO Alliance Universal Authentication Framework Complete Specifications


[3] 
https://fidoalliance.org/specs/fido-u2f-v1.0-nfc-bt-amendment-20150514.zip 
-- FIDO Alliance Universal 2nd Factor (U2F) specs with Bluetooth and NFC 
transports



Re: [cryptography] NIST Workshop on Elliptic Curve Cryptography Standards

2015-05-11 Thread Thierry Moreau

On 05/12/15 00:16, ianG wrote:

On 11/05/2015 17:56 pm, Thierry Moreau wrote:

On 05/09/15 11:18, ianG wrote:

Workshop on Elliptic Curve Cryptography Standards
June 11-12, 2015



I doubt the foremost questions will be addressed:

To what extent does NSA influence motivate NIST in advancing the ECC
standards?



John Kelsey, chief of something or other at NIST, gave a pretty
comprehensive talk on the NSA issue for NIST at Real World Crypto in
January [0].  My take-away is that they are taking it seriously.


Thanks for the reminder. I did read one report by NIST on this subject 
and it was already surprising how self-critical NIST was. The above talk 
goes in the same encouraging direction.




 From memory, there wasn't anything directly spotted for the ECC stuff,
but there has been this rising tide of demand for new curves ... so
maybe now is the time.



Can independent academia members present hypothetical mathematical
advances (even breakthroughs) that the NSA could have made, or could
speculatively expect to make, in order to provide the US a
cryptanalysis advantage over the rest of the world (central to the NSA
mission)?



If you're saying, can the academics stumble across something that the
NSA had beforehand, well, of course.  But I'm not sure that's what you
mean.


Let me try to re-phrase what I meant.

I do not want to push any conspiracy theory without a deep understanding 
of the ECC fundamentals. But recalling that the NSA had prior knowledge of 
differential cryptanalysis (versus academia) and prior knowledge of RSA 
and D-H, are there any specific research directions in the ECC field in 
which the NSA could have advance knowledge that would induce them to 
push ECC deployment over factoring-based RSA?




[0]
http://www.realworldcrypto.com/rwc2015/program-2/RWC-2015-Kelsey-final.pdf?attredirects=0



- Thierry



Re: [cryptography] NIST Workshop on Elliptic Curve Cryptography Standards

2015-05-11 Thread Thierry Moreau

On 05/09/15 11:18, ianG wrote:

Workshop on Elliptic Curve Cryptography Standards
June 11-12, 2015

Agenda now available!

The National Institute of Standards and Technology (NIST) will host a
Workshop on Elliptic Curve Cryptography Standards at NIST headquarters
in Gaithersburg, MD on June 11-12, 2015.  The workshop will provide a
venue to engage the cryptographic community, including academia,
industry, and government users to discuss possible approaches to promote
the adoption of secure, interoperable and efficient elliptic curve
mechanisms.


I doubt the foremost questions will be addressed:

To what extent does NSA influence motivate NIST in advancing the ECC standards?

Can independent academia members present hypothetical mathematical 
advances (even breakthroughs) that the NSA could have made, or could 
speculatively expect to make, in order to provide the US a 
cryptanalysis advantage over the rest of the world (central to the NSA 
mission)?


To what extent is the table of key size equivalences (between 
factoring-based cryptosystems and ECC schemes) biased toward a faster 
adoption of ECC (e.g. it makes sense to move to ECC because the 
equivalent RSA key sizes are inconvenient)?


NIST has been unquestionably useful for the cryptographic community with 
the AES and SHA-3 competitions. The outcome of the former is a widely 
deployed improvement over prior symmetric encryption algorithms. The 
outcome of the latter appears less attractive for adoption decisions, 
but the very challenges of an efficient secure hash algorithm seem to 
be the root cause, not the NIST competition process.


With ECC, I have less confidence in NIST's ability to leverage the 
cryptographic community's contributions.


- Thierry Moreau


[cryptography] Entropy is forever ...

2015-04-17 Thread Thierry Moreau
 of the 
original message into the assessed data element. Accordingly, my answer 
should be made more precise by referring to an unbiased RSA key 
generation process (which should not be considered a reasonable 
assumption for the endorsement of lower ranges of entropy assessments).


To summarize, the entropy assessment is a characterization of the data 
source being used as a secret true random source. It also refers to the 
probability distribution of messages from the data source and to the 
quantitative measure of information contents derived from that 
probability distribution according to information theory. This 
mathematical formalism is difficult to apply to actual arrangements 
useful for cryptography, notably because the probability distribution is 
not reflected in any message. Information theory is silent about the 
secrecy requirement essential for cryptographic applications. Maybe 
there is confusion in assuming that entropy is lost when part of the 
random message is disclosed, while only (!) the data's suitability for 
cryptographic usage is being lost. In applying information theory to 
the solution of actual difficulties in applied cryptography, we should 
address secrecy requirements independently. The preservation of the 
probability distribution through random message transformations is an 
important lesson from the theory that might have been overlooked (at 
least as an explicit requirement).


A note about the genesis of the ideas put forward. In my efforts to 
design applied cryptography key management schemes without taking 
anything for granted, and paying attention to the lessons from academia 
and its theories, I came upon a situation very similar to the 
above problem statement. The 2000-bit random message from a 2000-bit 
entropy truly random source is a simplification of the actual situation, 
in which a first message transformation preserves the probability 
distribution of random dice shuffling. In the above problem statement, 
the PRNG seeding is another distribution-preserving transformation. The 
actual PRNG is based on the Blum-Blum-Shub x^2 mod N generator, which 
comes with two bits of entropy loss upon seeding. The above problem 
statement is thus concrete.
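The Blum-Blum-Shub generator mentioned above squares its state modulo N = p*q with p and q both congruent to 3 mod 4, emitting the low bit of each state. A toy Python sketch (the tiny modulus is an illustration-only assumption; real use requires RSA-sized Blum primes and a seed coprime to N):

```python
def bbs_bits(seed, p, q, count):
    """Blum-Blum-Shub: x_{i+1} = x_i^2 mod N, output lsb(x_i) each step."""
    assert p % 4 == 3 and q % 4 == 3, "p and q must be Blum primes"
    n = p * q
    x = seed * seed % n   # square once so the state is a quadratic residue
    out = []
    for _ in range(count):
        x = x * x % n
        out.append(x & 1)
    return out

# Tiny Blum primes for illustration (both are prime and = 3 mod 4).
bits = bbs_bits(seed=3, p=11003, q=10007, count=16)
print(bits)
```

The "two bits of entropy loss upon seeding" remark maps onto the initial squaring: each quadratic residue has four square roots mod N, so distinct seeds collapse four-to-one into the same starting state.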


Maybe the term entropy is used, more or less by consensus, with a 
definition departing from the information theory. Indeed, NIST documents 
covering the topic of secret random numbers for cryptography use 
conflicting definitions surrounding the notion of entropy.


Although my own answer to the stated problem puts into question the 
Linux entropy pool depletion on usage, I do not feel competent to make 
suggestions. For instance, my note hints that PRNG algorithm selection 
should be part of the operating system service definition for 
/dev/?random offered for cryptographic purposes, but I have only a vague 
idea of whether and how the open source community might move in this 
direction.


Entropy is forever ... until a data leak occurs.
A diamond is forever ... until burglars break in.

Regards,

- Thierry Moreau


[cryptography] The Evanescent Security Module, one step towards an Open Source HSM

2015-04-01 Thread Thierry Moreau

Hi,

here is this new document:

The Evanescent Security Module, Concepts and Linux Usage Strategies

http://www.connotech.com/doc_ei_secomd.html

(Not an April fool announcement despite the funny name for an HSM!)

Enjoy!

- Thierry Moreau


[cryptography] The Evanescent Security Module, one step towards an Open Source HSM

2015-04-01 Thread Thierry Moreau

Hi,

here is this new document:

The Evanescent Security Module, Concepts and Linux Usage Strategies

http://www.connotech.com/doc_ei_secmod.html (corrected URL)

(Not an April fool announcement despite the funny name for an HSM!)

Enjoy!

- Thierry Moreau


Re: [cryptography] Request - PKI/CA History Lesson

2014-04-29 Thread Thierry Moreau

On 2014-04-29 18:18, ianG wrote:

On 29/04/2014 19:02 pm, Greg wrote:


I'm looking for a date that I could point to and call the birth of
modern HTTPS/PKI.

There is the Loren M Kohnfelder thesis from May of 1978, but that's not
quite it because it wasn't actually available to anyone at the time.

Perhaps an event along the lines of first modern HTTPS implementation
in a public web browser was released, or something like that.

Any leads? Maybe something from Netscape's history?



Yes, 1994, when Netscape invented SSL v1, which had no MITM protection,
which was then considered a life-and-death issue by RSADSI ...
which just happened to have invested big in a thing called X.509.  And
the rest is history.

Some commentary here, which is opinion not evidence.

http://financialcryptography.com/mt/archives/000609.html



I guess the historical gap between Loren Kohnfelder's thesis and Netscape's 
SSL development has to be filled with due consideration of the OSI 
development, notably the Network Layer Security Protocol (NLSP).


Prior to the domination of IP protocols, the information highway was 
expected to be secured with the NLSP over an X.25 backbone.


The payment industry was investing in SET (Secure Electronic 
Transactions), and Netscape's SSL was first perceived as a childish 
attempt at a quick and (very) dirty short-term solution.


Even then, in my understanding, there would still be a gap between 
Kohnfelder's thesis and the NLSP development. I have some clues that the 
Digital Equipment DECnet protocols would fill this gap.


Don't look at Microsoft. By 1995, their only IT security commitment 
seemed to be for a facsimile security protocol (even devoid of public 
key crypto). (This should have been a prior art against Data Treasury 
cheque imaging patent battle, but that's another lllooonng story.)


In retrospect, the ASN.1-based X.509 security certificate was 
salvaged from the OSI effort thanks to Verisign's dedication to licensing 
their patents for some IETF protocols on easy terms.


Lotus Notes security is special because it evolved from an RSA 
technology license acquired prior to RSADSI, and it uses certificates 
without the ASN.1/X.509 paradigms.


Regards,

- Thierry Moreau
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Alleged NSA-GCHQ Attack on Jean-Jacques Quisquater

2014-02-02 Thread Thierry Moreau

John Young wrote:

Any further information on the alleged NSA-GCHQ attack on
Jean-Jacques Quisquater than these two reports?

http://cryptome.org/2014/02/nsa-gchq-quisquater.pdf



Basically the same story from a general-audience medium, in French, at 
http://www.lalibre.be/actu/cyber/le-genie-belge-du-cryptage-espionne-par-la-nsa-52ec94893570d7514c2e7bba 



C'est en enquêtant sur le piratage massif qui a affecté des clients de 
Belgacom, dévoilé l'année passée, que les policiers ont découvert qu'un 
logiciel malveillant avait été installé sur l'ordinateur de cet expert ...


Translation: It was while investigating the massive hack affecting 
Belgacom customers, revealed last year, that police discovered that 
malware had been installed on this expert's computer ...


(also with a few sentences cited from prof. Quisquater).


Apparently Quisquater would not have known about the
attack if not told by an insider.

Insider comsec disclosures may be finally getting legs,
not yet long, but more than NDA-official secrecy paralysis.

Any other cryptographer attacked (as if it would be known)?


--
- Thierry Moreau



Re: [cryptography] Techniques for protecting CA Root certificate Secret Key

2014-01-09 Thread Thierry Moreau

Peter Bowen wrote:

On Wed, Jan 8, 2014 at 11:54 PM, ianG i...@iang.org wrote:

On 9/01/14 02:49 AM, Paul F Fraser wrote:

Software and physical safe keeping of Root CA secret key are central to
security of a large set of issued certificates.
Are there any safe techniques for handling this problem taking into
account the need to not have the control in the hands of one person?
Any links or suggestions of how to handle this problem?

The easiest place to understand the formal approach would be to look at
Baseline Requirements, which Joe pointed to.  It's the latest in a series of
documents that has emphasised a certain direction.

(fwiw, the techniques described in BR are not safe, IMHO.  But they are
industry 'best practice' so you might have to choose between loving
acceptance and safety.)


Is there a better reference for safe or a place that has commentary on
the 'best practice' weaknesses?



The short answer is 'no'.

As a first comment, replace "CA Root certificate Secret Key" with "root 
CA private signature key [used to sign certificates]", because 1) you want 
to trust (or establish trustworthiness for) a CA *entity*, 2) you might 
wish to have some continuity if the CA entity replaces its signature key 
pair, and 3) a "secret key" might refer to some other type of key.


If you understand the fundamentals, you may see that the root DNSSEC 
signature key (handled by ICANN/IANA, see https://www.iana.org/dnssec ) 
indeed requires exactly the same type of protections.


Documents circulated prior to the launch of the DNSSEC service for the 
DNS root zone disclosed many of the design decisions that are now 
embedded in the details of KSK ceremonies. I get the feeling that 
ICANN employees are nowadays in a public-relations mood when 
questioned (more or less consciously, by people who may have been 
absent when the calls for comments were made).


I would suggest that the DNSSEC deployment at the root would make a good 
case study for IT security management, from a historical perspective. The 
primary source documents, and the conclusions of such a case study, could 
be helpful to you, but ...


... if you want to do it right (and since the resources -- money, 
personnel, organizational trustworthiness, immediate attention from a 
community of experts -- available to ICANN aren't available to you), you 
may need to revise your understanding of underlying principles (hint: 
don't start by reverse engineering the PKCS#12 specifications).


Or you may want to do it "best practice", and there you go.

Good luck

--
- Thierry Moreau



Re: [cryptography] Speaking of key management [was Re: Techniques for protecting CA Root certificate Secret]

2014-01-09 Thread Thierry Moreau

Joe St Sauver wrote:

Hi,

Those who are interested in key management may wish to note:

   Cryptographic Key Management Workshop 2014
   http://www.nist.gov/itl/csd/ct/ckm_workshop2014.cfm
   March 4-5, 2014, NIST, Gaithersburg MD

See also:

   SP 800-152
   DRAFT A Profile for U. S. Federal Cryptographic Key Management Systems (CKMS)
   http://csrc.nist.gov/publications/PubsDrafts.html#SP-800-152
   Released 7 Jan 2014, comments due by March 5, 2014



Don't forget to look at SP 800-130 in parallel.
A Framework for Designing Cryptographic Key Management Systems

Overall, an endless list of requirements that may be useful as a barrier 
to entry in the US Federal Government IT security market.


Not necessarily a bad checklist; I could not identify any specific 
innovation-suppressing element (beyond the ones already present in the 
NIST mandatory techniques) after a quick glimpse at a few document sections.


The crazy thing in Canada is that the NIST mandatory techniques are merely 
recommended in the Canadian federal government. So the official crypto 
policy is the NIST one, but departments and government agencies have no 
incentive to ever procure NIST-approved solutions: they have much more 
freedom when doing otherwise.


Have fun with key management challenges!

--
- Thierry Moreau



Re: [cryptography] Techniques for protecting CA Root certificate Secret Key

2014-01-09 Thread Thierry Moreau

Tony Arcieri wrote:
On Thu, Jan 9, 2014 at 7:51 AM, Thierry Moreau 
thierry.mor...@connotech.com wrote:


I would suggest that the DNSSEC deployment at the root would be a
good case study for IT security management, from an historic
perspective. The primary source documents, and the conclusion of
such case study, could be helpful to you but ...


I'd actually look at DNSSEC as something of an antipattern. They 
ostensibly seem to be using One Key To Rule Them All and a Shamir-like 
secret sharing scheme.


This makes less sense to me than a multisignature trust system / 
threshold signature system with n root keys and a threshold t such that 
we need t of n signatures in order for something to be considered signed.


While I'm sure they took great care to airgap and delete the DNSSEC root 
key from the computer it was generated on, that's an unnecessary risk 
that simply doesn't have to exist.


Furthermore a multisignature trust system makes it easy to rotate the 
root keys: if one is compromised you simply sign a new root key document 
with t of n signatures again, listing out the newly reissued public key.
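For illustration, the t-of-n acceptance rule described above could be sketched along these lines. This is a minimal sketch: HMAC tags stand in for real public-key signatures purely so the example is self-contained, and all names are hypothetical — a deployment would use a genuine signature scheme (or a true threshold scheme).

```python
import hashlib
import hmac

# HMAC tags stand in for real signatures -- illustration only.
def sign(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key: bytes, msg: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, msg), sig)

def accept(msg: bytes, sigs: dict, keys: dict, t: int) -> bool:
    """Accept msg iff at least t of the n holders produced a valid signature."""
    valid = sum(1 for holder, sig in sigs.items()
                if holder in keys and verify(keys[holder], msg, sig))
    return valid >= t

keys = {i: bytes([i]) * 32 for i in range(5)}        # n = 5 key holders
msg = b"new root key document"
sigs = {i: sign(keys[i], msg) for i in (0, 2, 4)}    # only 3 holders sign

print(accept(msg, sigs, keys, t=3))                  # True
print(accept(msg, sigs, keys, t=4))                  # False
```

The acceptance policy is the interesting part: no single key is ever a point of total compromise, and rotating out a compromised key is just another t-of-n signed document.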




I guess a multisignature trust system requires some algorithm support 
beyond RSA and ECC signature schemes pushed by NIST, and thus would have 
been rejected on the (questionable) basis of lack of support in the DNS 
software culture and the (political) basis of lack of NIST approval.


But yes! That is the type of suggestion/innovation that someone might 
look at while revisiting the fundamentals of root signature key management.


Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] Design Strategies for Defending against Backdoors

2013-11-19 Thread Thierry Moreau

ianG wrote:

On 18/11/13 20:58 PM, Thierry Moreau wrote:

ianG wrote:

On 18/11/13 10:27 AM, ianG wrote:

In the cryptogram sent over the weekend, Bruce Schneier talks about how
to design protocols to stop backdoors.  Comments?



To respond...


https://www.schneier.com/blog/archives/2013/10/defending_again_1.html

Design Strategies for Defending against Backdoors



...


 Encryption protocols should be designed so as not to leak any
random information. Nonces should be considered part of the key or
public predictable counters if possible. Again, the goal is to make it
harder to subtly leak key bits in this information.



Right, that I agree with.  Packets should be deterministically created
by the sender, and they should be verifiable by the recipient.



Then you lose the better theoretical foundations of probabilistic
signature schemes ...



If you're talking here about an authenticated request, that should be 
layered within an encryption packet IMHO, it should be the business 
content.




To clarify the original recommendation, is it correct to assume that the 
goal is to avoid subliminal channels through which key bits may be leaked?


If so, I don't see how a "business content" subliminal channel is a 
lesser concern than a "signature salt field" subliminal channel.
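To make the quoted recommendation concrete, here is a minimal sketch of deriving the per-message nonce deterministically from the secret key and the message, in the spirit of RFC 6979: the sender has no free random field through which key bits could be leaked, and a verifier who knows the key can recompute the nonce. The shared-key setting and the derivation label are assumptions for the sketch.

```python
import hashlib
import hmac

# Deterministic nonce: a keyed hash of the message, so it is fully
# reproducible (no covert random bits) yet unpredictable to outsiders.
def derive_nonce(key: bytes, message: bytes) -> bytes:
    return hmac.new(key, b"nonce" + message, hashlib.sha256).digest()

key = b"\x01" * 32
msg = b"wire payload"

n1 = derive_nonce(key, msg)
n2 = derive_nonce(key, msg)
print(n1 == n2)                             # True: no sender-chosen freedom
print(n1 != derive_nonce(key, b"other"))    # True: still message-dependent
```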


Defending against backdoors without inspection of implementation 
details appears (euphemistically speaking) challenging.



iang





--
- Thierry Moreau



Re: [cryptography] Design Strategies for Defending against Backdoors

2013-11-18 Thread Thierry Moreau

ianG wrote:

On 18/11/13 10:27 AM, ianG wrote:

In the cryptogram sent over the weekend, Bruce Schneier talks about how
to design protocols to stop backdoors.  Comments?



To respond...


https://www.schneier.com/blog/archives/2013/10/defending_again_1.html

Design Strategies for Defending against Backdoors



...


 Encryption protocols should be designed so as not to leak any
random information. Nonces should be considered part of the key or
public predictable counters if possible. Again, the goal is to make it
harder to subtly leak key bits in this information.



Right, that I agree with.  Packets should be deterministically created 
by the sender, and they should be verifiable by the recipient.




Then you lose the better theoretical foundations of probabilistic 
signature schemes ...




--
- Thierry Moreau



Re: [cryptography] Allergy for client certificates

2013-10-11 Thread Thierry Moreau

Guido Witmond wrote (in reference to eccentric authentication):


Another (not a killer)-feature (for users) is that they are in control
of the account. When they delete the private key, their account is
closed. No one else can come later and claim the account. Unless they
copied the private key beforehand.



Some reality check may turn this from a feature into a serious flaw: 
it is account continuity that matters, to server-side vendors and 
client-side customers alike.


Server: a very good customer account suddenly vanishes and pops up as a 
new account (which one?) among the 200 or so that made a first 
transaction during the following week. Even the vanishing event cannot 
be detected!


Client: I relied on the server to keep track of past purchase details, 
and for a crypto-?%# reason (do I care?) I lost them. Even worse, I 
can't create a new account with my real name (it says I'm already 
enrolled, while in fact the old account no longer works).


Solving this issue in your experiment is going to re-introduce much of 
the PKI complexity.


Sorry for asking tough questions, but maybe they would pop up sooner or 
later if this experiment goes forward.



--
- Thierry Moreau



Re: [cryptography] SSL session resumption defective (Re: What project would you finance? [WAS: Potential funding for crypto-related projects])

2013-07-04 Thread Thierry Moreau
Thanks to Nico for bringing the focus on DH as the central ingredient of 
PFS.


Nico Williams wrote:


But first we'd have to get users to use cipher suites with PFS.  We're
not really there.



Why?

Perfect forward secrecy (PFS) is an abstract security property defined 
because Diffie-Hellman (DH) -- whatever flavor of it -- provides it.


As a reminder, PFS prevents an adversary who gets a copy of a victim's 
system state at time T (including long-term private keys), and who 
thereafter *only* eavesdrops on the victim's protocol exchanges, from 
decrypting traffic sent after any session key renegotiation (hint: the 
DH exchange part of the renegotiation defeats the passive eavesdropper).
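The mechanics of that DH exchange can be sketched with toy finite-field Diffie-Hellman. The parameters below are toys for illustration only (real deployments use standardized groups or elliptic curves); the point is that the session secret depends only on ephemeral exponents that both sides delete.

```python
import secrets

# Toy ephemeral Diffie-Hellman run: a later copy of long-term keys tells
# the passive eavesdropper nothing about this session secret.
p = 2**127 - 1        # toy Mersenne prime -- illustration only
g = 3

a = secrets.randbelow(p - 2) + 1      # Alice's ephemeral exponent
b = secrets.randbelow(p - 2) + 1      # Bob's ephemeral exponent
A, B = pow(g, a, p), pow(g, b, p)     # public values seen on the wire

shared_alice = pow(B, a, p)           # g^(ab) mod p
shared_bob = pow(A, b, p)             # same value, computed by Bob
print(shared_alice == shared_bob)     # True

del a, b   # deleting the ephemerals is what makes the secrecy "forward"
```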


It's nice for us cryptographers to provide such protection. But its 
incremental security appears marginal.


So, is it really *needed* by the users given the state of client system 
insecurity?


I would rather get users to raise their awareness and self-defense 
against client system insecurity (seldom a cryptographer achievement).


--
- Thierry Moreau



Re: [cryptography] SSL session resumption defective (Re: What project would you finance? [WAS: Potential funding for crypto-related projects])

2013-07-04 Thread Thierry Moreau

Adam Back wrote:


Forward secrecy is an exceedingly important security property.  Without it an
attacker can store encrypted messages via passive eavesdropping, or court-order
any infrastructure that records messages (advertised or covert), and
then obtain the private key via burglary, subpoena, coercion or
software/hardware compromise.



With an exceedingly narrow field of application: those client systems 
that a) delete secret information (application data and keys) in a 
forensic-resistant way, and b) delete the application information 
systematically, so as to render ineffective any later burglary, subpoena, 
coercion, or software/hardware compromise. Under these circumstances, 
chances are that the selected crypto mechanism would indeed already 
embed DH.



The fact that the user couldn't decrypt the traffic even if he wanted to, as
he automatically no longer has the keys, is extremely valuable to the
overall security for casual, high-assurance and after-the-fact security (aka
subpoena).

In my view all non-forward-secret ciphersuits should be deprecated.

(The argument that other parts of the system are poorly secured, is not an
excuse; and anyway their failure modes are quite distinct).


In my opinion, when you consider the casual user's needs, those 
arguments are not a top priority.



Btw DH is not the only way to get forward secrecy; ephemeral (512-bit) RSA
keys were used as part of the now-defunct export ciphers, and there is the
less well known fact that you can extend forward secrecy using a
symmetric-key one-way function: set k' = H(k), delete k.



Not completely, by this counterexample: generate k, suffer an enemy 
copy of system state including k, let k' = H(k), delete k, use k' in 
dangerous confidence -- the adversary who holds k can compute H(k) just 
as well. I mean that the textbook PFS definition is not satisfied by 
k' = H(k).
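The counterexample fits in a few lines of code (a sketch, with SHA-256 standing in for the one-way function H):

```python
import hashlib

# The k' = H(k) ratchet only protects keys deleted *before* the compromise.
# An adversary who captured k earlier can simply replay the hash chain.
def ratchet(k: bytes) -> bytes:
    return hashlib.sha256(k).digest()

k = b"\x42" * 32                 # session key at time T
stolen = k                       # adversary copies system state at time T

k = ratchet(k)                   # victim ratchets twice, deleting old keys
k = ratchet(k)

attacker_view = ratchet(ratchet(stolen))
print(attacker_view == k)        # True: no forward secrecy against this adversary
```

What the ratchet does buy is the other direction: a compromise occurring after the deletions cannot recover the *earlier* keys.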



DH also provides forward security (backward secrecy?) -- it's all a misnomer,
but basically recovery of security if decryption keys are compromised but the
random number generator is still secure.  (And auth keys, presumably.)

The fact that forward secrecy is secure against passive adversaries even
with possession of authenticating signature keys also ups the level of
attack required to obtain plaintext.  A MITM is much harder to
achieve at large scale, and without detection, in the face of compromised
CAs and so on.  So that is another extremely valuable functionality provided
by DH.

Don't knock DH - it provides multiple significant security advantages over
long-lived keys.  All comms that are not necessarily store-and-forward should
be using it.



Indeed, having a DH component in session key establishment is the way 
to go. I am with you that DH forcing the attacker into a MITM arrangement 
is a useful line of defense.


I question the marginal benefit of upgrading from a deployed base where 
DH was omitted at the outset, under the PFS argument alone.


Regards,

- Thierry


Adam

On Thu, Jul 04, 2013 at 11:16:21AM -0400, Thierry Moreau wrote:
Thanks to Nico for bringing the focus on DH as the central ingredient 
of PFS.


Nico Williams wrote:


But first we'd have to get users to use cipher suites with PFS.  We're
not really there.



Why?

Perfect forward secrecy (PFS) is an abstract security property defined 
because Diffie-Hellman (DH) -- whatever flavor of it -- provides it.


As a reminder, PFS prevents an adversary who gets a copy of a victim's 
system state at time T (including long-term private keys), and who 
thereafter *only* eavesdrops on the victim's protocol exchanges, from 
decrypting traffic sent after any session key renegotiation (hint: the 
DH exchange part of the renegotiation defeats the passive eavesdropper).


It's nice for us cryptographers to provide such protection. But its 
incremental security appears marginal.


So, is it really *needed* by the users given the state of client 
system insecurity?


I would rather get users to raise their awareness and self-defense 
against client system insecurity (seldom a cryptographer achievement).


--
- Thierry Moreau







Re: [cryptography] Integrety checking GnuPG

2013-05-30 Thread Thierry Moreau

shawn wilson wrote:

I guess I should've said what my use case is:
I want a boot system that unlocks a partition where everything is
checked [...]
However, someone could replace
gpg with a version that logs to something.


OK, simply provide a Faraday cage to the user and instruct them to boot 
the device inside of it, hence ensuring a boot process without any RF 
connection to the exterior.


I'm only half joking: if you don't trust the hardware to provide a 
trustworthy boot from some read-only section of the device, then you 
have stated an impossible problem.


Also, you may be paranoid about a user device being replaced altogether 
without the victim noticing the replacement. Do you check that the 
serial number of your favorite gadget remains stable over time?


So in practice you must bear some residual risk when you tailor the 
boot process towards your goal. In the tailoring project, you might find 
that GPG is overkill when only hash/signature validation is required.
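As a sketch of that lighter-weight alternative: pin a known-good SHA-256 digest (it would live in the read-only boot stage you already have to trust) and refuse to proceed if the binary on disk no longer matches. File names and the provisioning step below are hypothetical stand-ins.

```python
import hashlib
import os
import tempfile

def file_digest(path: str) -> str:
    """SHA-256 of a file, streamed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for the binary to be verified at boot.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"pretend this is the gpg binary")
    path = f.name

pinned = file_digest(path)                 # recorded at provisioning time

ok_before = file_digest(path) == pinned    # boot may proceed

with open(path, "ab") as f:                # simulate tampering
    f.write(b"\x00backdoor")
ok_after = file_digest(path) == pinned     # now refuse to boot

os.remove(path)
print(ok_before, ok_after)                 # True False
```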



This is sort of a trusting trust question.


So you knew the answer already.


--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] Keyspace: client-side encryption for key/value stores

2013-03-25 Thread Thierry Moreau

danimoth wrote:

On 21/03/13 at 03:07am, Jeffrey Walton wrote:

Linux has not warmed up to the fact that userland needs help in
storing secrets from the OS.



http://standards.freedesktop.org/secret-service/

but maybe I have misunderstood your statement.



From Chapter 10, "What's not included in the API": "The service may 
choose to implement any method for locking secrets."


Back to the core difficulty!

Security by management exhaustion (consider the time we spend discussing this versus other things ...).

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] Keyspace: client-side encryption for key/value stores

2013-03-21 Thread Thierry Moreau

Peter Gutmann wrote:

Jeffrey Walton noloa...@gmail.com writes:


Android 4.0 and above also offer a Keychain (
http://developer.android.com/reference/android/security/KeyChain.html). If
using a lesser version, use a Keystore (
http://developer.android.com/reference/java/security/KeyStore.html).


What Android gives you is pretty rudimentary, it barely qualifies to use the
same designation as Apple's Keychain.


Linux has not warmed up to the fact that userland needs help in storing
secrets from the OS.


There's KWallet and Gnome Keyring, last time I looked KWallet was also pretty
primitive (about the level of Android's Keychain) and not being updated much,
but the Gnome Keyring seems to be actively updated.



I would say these things (I hesitate to qualify them as IT security 
mechanisms or schemes) address an impossible task, for which apparent 
success is possible only in a proprietary environment (just making the 
reverse engineering harder).


Client-side storage of long-term secrets can only be secured by 
dedicated client-side hardware. Your mileage may vary.


- Thierry




Re: [cryptography] side channel analysis on phones

2013-03-09 Thread Thierry Moreau

ianG wrote:


[...], I'm expecting 
there to be no special hardware access via Android/Java.


[...] normally the developer has no control over which phone the 
product will be used on. There isn't a lot of point in developing for 
some special hardware features.




Yet there is one such point: side-channel attack countermeasures.



Has anyone done any side channel analysis on phones?



Aren't side channel attacks hardware-specific, almost by definition?

If so, the original post postulates an impossible problem statement: 
hardware abstraction cannot assist countermeasures against 
hardware-specific threats.



[...] how to limit the
possibilities of attacking the keys from another app.



OK, now you insert O/S abstraction and O/S-specific threats.

Regards,


--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] Client TLS Certificates - why not?

2013-03-05 Thread Thierry Moreau

str...@riseup.net wrote:

Hi,

Can anyone enlighten me why client TLS certificates are used so rarely? It
used to be a hassle in the past, but now at least the major browsers offer
quite decent client cert support, and seeing how most people struggle with
passwords, I don't see why client certs could not be beneficial even to
ordinary users.



Hi,

If you ask the question, you may be unaware of the many implications 
explained by other contributions. I take a chance at dropping my 
analysis, which is oriented towards innovation in IT security operations.


First of all, there is an abuse of language in the term "client 
certificates": what protects the client is its public-private key pair 
(PPKP). So you may ask yourself: "Client PPKP, why not?"


Then you realize that the X.509 certificates come with the complexity of 
the CA operations, and relying parties (server operators now eating the 
same dog food that they served to their end-users).


With the first party certification paradigm, drop the CA operations 
altogether and let the service providers maintain their own trusted 
client PPKP (I mean the client public keys).


The evil is in the details. I found more evils in removing the CA than 
in bringing forward the new paradigms -- the X.509 mindset sits very 
deep in one's brain (not only in browser software, where it can be 
circumvented easily with auto-issued dummy X.509 security certificates).


Still, client PPKP usage along with the first-party certification 
paradigm is not for an "ordinary user" unable to mind the Ps and Qs 
of the RSA core operating principle (I postulated client PPKP usage, so 
I'm stuck with client PPKP usage). A realistic goal is to get the 
installation instructions from 60 pages down to 10-15 (OK, 25-30 if we 
have to undo the X.509 mindset).


Trust at the enrollment phase is obviously delicate and can not be fully 
automated. I'm working on that part.


There are closed PKI deployments using client PPKP in an X.509 
PKI-centric perspective. The cost per user is significant. The 
alternative I am hinting at (a: client PPKP usage; b: the first-party 
certification paradigm; c: the enrollment scheme) would be an 
intermediate-level client authentication approach.


So why not PKI client certificates for ordinary users? Because even 
client PPKP usage for ordinary users is hardly conceivable.



With CAcert, there is even an excellent infrastructure in place that could
allow people to generate signed pseudonymous client certificates. A
service provider could limit the amount of certificates allowed per user
(as validated by CAcert), maybe even the amount of points required etc.

That way, one could provide services without the requirement of
registration, and still effectively limit abuse?


That's the early dream of a global PKI. Nowadays, we know more.

Regards,


--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] Meet the groundbreaking new encryption app set to revolutionize privacy...

2013-02-07 Thread Thierry Moreau

ianG wrote:


[Hushmail design]  isn't
perfect but it was a whole lot better than futzing around with OpenPGP 
keys and manual decrypting.  And it was the latter 'risk' view that won, 
Hushmail filled that niche between the hard core pgp community, and the 
people who did business and needed an easy tool.


Don't be suspicious, be curious -- this is where security is at. 

Human rights reporters already put their life on the line.  Your mission 
is not to protect their life absolutely,


One design aspect seems missing from the high-level discussion: how do 
you define the security mechanism failure mode? You have basically two 
options: connect with an insecure protocol, or do not connect at all.


If it's a life-preserving application, this question should be addressed 
explicitly. A fail-safe system may fail either way, but stakeholders 
should know which way. Airplane pilots are trained according to the 
failure mode of each aircraft subsystem. E.g. if the two-way radio fails, 
the pilot may remain confident (from an indication in the cockpit) that 
the air traffic controller (ATC) still sees the aircraft identifier on 
radar (see the Wikipedia entry for transponder) during the emergency 
landing. Thus the decision to land at the major airport (instead of a 
secondary airport with less conflicting traffic but lower-grade 
facilities) is made based on the fail-safe property of the 
aircraft-to-ATC communications subsystem.


Regards,

--
- Thierry Moreau



Re: [cryptography] OAEP for RSA signatures?

2013-01-29 Thread Thierry Moreau

Peter Gutmann wrote:

Thierry Moreau thierry.mor...@connotech.com writes:


The Bleichenbacher attack adaptation to OAEP is non-existent today and would
be an even more significant academic result. I must assume that
Bleichenbacher would have published results in this direction if his research
would have given those.


Bleichenbacher didn't, but Manger did more than a decade ago:

  However, the design of RSAES-OAEP makes it highly likely that
  implementations will leak information between the decryption and integrity
  check operations making them susceptible to a chosen ciphertext attack that
  requires many orders of magnitude less effort than similar attacks against
  PKCS #1 v1.5 block type 2 padding. 
  
  -- A Chosen Ciphertext Attack on RSA Optimal Asymmetric Encryption Padding

 (OAEP) as Standardized in PKCS #1 v2.0



Thanks for the pointer. Indeed. In [1], Dan Boneh's article on SAEP 
(simplified OAEP) agrees as well:


During decryption invalid ciphertexts can be rejected in Steps 2 and 3 
as well as in Step 7. Manger [10] points out the importance of 
preventing an attacker from distinguishing between rejections at the 
various steps, say, using timing analysis. Implementors must ensure that 
the reason a ciphertext is rejected is hidden from the outside world. 
Indeed, our proof of security fails if this is not the case.


It's the "spot the oracle" lesson once again.

[1] Simplified OAEP for the RSA and Rabin functions, 
http://crypto.stanford.edu/~dabo/abstracts/saep.html
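The implementation discipline Manger and Boneh call for can be sketched as follows. This is only a sketch of the error-handling pattern: the three validation steps are toy placeholders, not real OAEP checks, and real code must also equalize timing, not just the reported error.

```python
# Uniform rejection: every validation step funnels into one
# undifferentiated error, so callers (and attackers) cannot tell
# which check failed.
class DecryptionError(Exception):
    """Single failure mode -- carries no step-specific information."""

def decrypt(ciphertext: bytes) -> bytes:
    checks = [
        len(ciphertext) == 32,          # stand-in for a length check
        ciphertext[:1] == b"\x00",      # stand-in for a format check
        sum(ciphertext) % 7 == 0,       # stand-in for an integrity check
    ]
    # Evaluate every check before deciding, then reject identically.
    if not all(checks):
        raise DecryptionError()
    return ciphertext[1:]
```

The design choice is that the distinction between rejection paths exists only inside the function; nothing observable by the outside world (error code, message, or ideally timing) distinguishes them.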


The original post was about digital signatures, where "spot the oracle" 
implies never letting a remote party control what the digital signature 
primitive will sign. In practice, session encryption uses a digital 
signature operation on a session key hash (or something similar). It is 
important that the local system played a role (without an insider agent 
playing tricks) in determining the session key value.


The TLS mode where the client selects a session key and encrypts it for 
the server is simply no good (I forgot the name for this mode -- easy to 
recognize as a bad thing upon encountering it again).


It is thus left as an exercise for a pure PK encryption implementer to 
appreciate the Bleichenbacher oracle threat versus the OAEP/SAEP oracle 
threat. They may not be identical.


That's life with public key cryptography since the Rabin-Williams 
theoretical foundation was established (its formal proof came with 
an early warning of the oracle pitfall). Nowadays the practical 
attack/defense front line often lies right where the oracle pitfall 
materializes.


Interesting times ...



--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] OAEP for RSA signatures?

2013-01-28 Thread Thierry Moreau

Peter Gutmann wrote:


the reason why Bleichenbacher attacked v1.5
rather than OAEP is because use of the latter is [...]
compared to v1.5, [...]


Please correct me if I'm wrong. My point is that highly significant 
academic contributions (among which I would put the Bleichenbacher attack) 
should not be misrepresented by authoritative contributors to this list.


The Bleichenbacher attack uses 1) characteristics of the PKCS #1 v1.5 
specs, according to which RSA is used in a hybrid cryptosystem, and 2) 
some oracle which tells the attacker whether a given ciphertext is 
well-formed or not.


A Bleichenbacher attack adaptation to OAEP is non-existent today and 
would be an even more significant academic result. I must assume that 
Bleichenbacher would have published results in this direction if his 
research had yielded them.


The oracle needed for a practical deployment of the Bleichenbacher 
attack may be a timing/side-channel vulnerability, but it may also be 
something like a too-detailed error code reported in the main channel of 
a protocol. So the minefields of pure timing/side-channel attacks and of 
Bleichenbacher-style oracles are distinct (though overlapping).


"Protect against side channel attacks" is one motto.

"Spot the oracle" is another one.

I find the latter important these days (that's an opinion, no need to 
correct me on this one!).


Use of OAEP is a way to avoid the Bleichenbacher attack's oracle 
vulnerability, i.e. to resist Bleichenbacher even if the oracle still remains.


Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] OAEP for RSA signatures?

2013-01-27 Thread Thierry Moreau

James Muir wrote:

PSS is similar to OAEP, but is for signatures.  If you have OAEP
implemented, then it wouldn't take you long to do PSS, which is
described in the PKCS-1v2.1 document.



This is the answer I suspected in reading the original post question.


Hacking OAEP into a signature scheme sounds a little dangerous.
However, I guess the idea would just be to hash your message
and encrypt the hash with the private exponent.  You want your
signature scheme to be existentially unforgeable.  If one could forge
one of these signatures, then I am not certain what that says about the
security of OAEP encryption (maybe nothing, since anyone can create
valid OAEP ciphertexts).



I guess the original poster did not expect to create a new/modified 
signature scheme, but just implement a recognized one.



Full-domain-hash RSA is quite easy to implement.  If you don't like PSS,
then you could look at it.



Oh great ... I thought I was the only one to have taken note of 
full-domain-hash.


My understanding is that full-domain-hash remains a useful academic 
contribution to the formal proofs of PK cryptosystems. It is also 
antagonistic to the push towards ECC signatures. Certainly NIST is not 
interested, since this organization pushes for ECC technology development 
and adoption. At least I did not see any reference to an FDH mode for 
the secure-hash competition candidates.


In practice, full-domain-hash has a serious performance penalty if one 
takes the implementation suggestion from [1]. You would do me a favor by 
providing references to practical implementation strategies.


[1] Mihir Bellare, Phillip Rogaway: The Exact Security of Digital 
Signatures - How to Sign with RSA and Rabin. EUROCRYPT 1996, pp. 399-416.

http://www.cs.ucdavis.edu/~rogaway/papers/exact.pdf
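To make the FDH idea concrete, here is a minimal sketch (my own toy
parameters -- two Mersenne primes standing in for a real RSA key -- and
a counter-based hash expansion in the spirit of [1], not a vetted
implementation):

```python
import hashlib

# Toy RSA key built from two Mersenne primes -- illustration only,
# far too small and structured for real use.
p, q = 2**127 - 1, 2**107 - 1
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))
k = (n.bit_length() + 7) // 8          # modulus length in bytes

def full_domain_hash(msg):
    """Expand SHA-256 with a counter (MGF1-style) to the full modulus
    width, then reduce mod n, so the hash ranges over (nearly) all of Z_n."""
    out, counter = b"", 0
    while len(out) < k:
        out += hashlib.sha256(msg + counter.to_bytes(4, "big")).digest()
        counter += 1
    return int.from_bytes(out[:k], "big") % n

def fdh_sign(msg):
    return pow(full_domain_hash(msg), d, n)

def fdh_verify(msg, sig):
    return pow(sig, e, n) == full_domain_hash(msg)
```

The performance penalty referred to above comes from hashing out to the
full modulus width (and, in [1]'s analysis, from the security bounds),
not from the RSA operation itself, which is unchanged.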


-James

On 13-01-26 10:00 AM, ianG wrote:

Apologies in advance ;) but a cryptography question:

I'm coding (or have coded) a digital signature class in RSA.  In my
research on how to frame the input to the RSA private key operation, I
was told words to effect just use OAEP and you're done and dusted.
Which was convenient as that was already available/coded.



Maybe the advice should be taken with caution:

1) from this advice alone you were not warned that OAEP encryption 
principles turn into RSA-PSS for signatures, and


2) implementation-wise, RSA-PSS turns the secret random source into a 
critical system component (which is incrementally inconvenient if no 
other crypto usage of secret random numbers exists in the operational 
digital signature system -- the full-domain-hash avenue does not carry 
this secret random source dependency).
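A minimal sketch of point 2 (stdlib Python; the encodings are
simplified stand-ins, not the actual EMSA-PSS and FDH constructions):
a PSS-style salted encoding consumes fresh secret random bytes for
every signature, while a deterministic FDH-style encoding does not.

```python
import hashlib
import secrets

def pss_style_encode(msg):
    """Salted, PSS-style message encoding (simplified sketch, not the
    PKCS #1 v2.1 EMSA-PSS construction): every signature consumes
    fresh secret random bytes from a critical random source."""
    salt = secrets.token_bytes(20)
    return salt, hashlib.sha256(msg + salt).digest()

def fdh_style_encode(msg):
    """Deterministic encoding: no random source needed at signing time."""
    return hashlib.sha256(msg).digest()

# Two PSS-style encodings of the same message differ (random salt) ...
assert pss_style_encode(b"m")[1] != pss_style_encode(b"m")[1]
# ... while the deterministic encoding is stable.
assert fdh_style_encode(b"m") == fdh_style_encode(b"m")
```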



However I haven't seen any other code doing this - it is mostly PKCS1,
etc, and RFC3447 doesn't enlighten in this direction.

Could OAEP be considered reasonable for signatures?  or is this a case
of totally inappropriate?  Or somewhere in between?



iang


Regards,

--
- Thierry Moreau




Re: [cryptography] phishing/password end-game (Re: Why anon-DH ...)

2013-01-17 Thread Thierry Moreau

d...@geer.org wrote:
  To clarify:  I think everyone and everything should be identified by 
  their public key,...


Would re-analyzing all this in a key-centric model rather than
a name-centric model offer any insight?  (key-centric meaning
that the key is the identity and Dan is an attribute of that
key; name-centric meaning that Dan is the identity and the key
is an attribute of that name)



Since you ask the question,

First, replace the client certificate by a client PPKP (public-private
key pair) and be ready for a significant training exercise. The
more the trainee knows about X.509, the greater the challenge for
the trainer.

Second, replace PKI (for the server relying party) by the first
party certification paradigm.

Third, an enrollment process becomes a necessity. Here is BASC
(from the initial project name, bootstrapping an authenticated
session configuration).

==
The BASC enrollment process requires the use of the client private
key and allows a remote service provider to acquire trust in the
end-user control of the PPKP according to the first party
certification paradigm. The BASC enrollment process must be
repeated for independent service providers, but the end-user can
(and should) use a single PPKP with multiple independent service
providers.

The overall picture of the BASC enrollment process can be
explained in the following 6 points. Points B and C make up a
ceremonious utility that is run as the first step of a BASC
enrollment instance, which encompasses points B-C-D-E.

A - PPKP Provisioning

The end-user gets a PPKP which can be used in browser
software and for ad hoc message signatures.

B - Contact the Service Provider Enrollment Server

This connection is required once within the ceremonious
utility, and then with a browser session using the client PPKP
for authentication (point D below).

C - Sign and Send a Proof Of Possession

In BASC, the proof of possession includes something trivial
and circumstantial (a one-time phrase and/or photograph which are
later reported back to the end-user to let her determine that she
connected to the correct service provider). The signed proof of
possession is sent to the service provider securely during the
ceremonious utility execution.

D - Online Enrollment Form

In a secure web browser session, the end-user fills in a form
with information relevant to enrollment purposes. The session
security rests on the HTTPS protocol using the client PPKP, plus
the server demonstration that it knows the end-user by displaying
her trivial and circumstantial data.

E - Enrollment Approval

Enrollment cannot be complete just based on the end-user
having fulfilled the above formalities. Some verifications by the
service provider are required. Even if those occur behind the
scenes, they are an intrinsic part of enrollment.

F - Routine PPKP Usage

Once enrolled, the end-user authenticates herself with the
PPKP when connecting to the service provider web servers for
applications secured by the HTTPS protocol. It is an application
design issue to have the server demonstrating its knowledge of the
end-user details before the end-user provides any sensitive input.
==
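A minimal sketch of point C (Python; the toy RSA key, message layout,
and function names are my own assumptions, not a BASC specification):
the client signs trivial, circumstantial data with the PPKP private
key, and the service provider verifies it against the public key.

```python
import hashlib
import json

# Toy RSA key (Mersenne primes) standing in for the end-user PPKP --
# illustration only.
p, q = 2**127 - 1, 2**107 - 1
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def _digest(proof):
    """Stable digest of the proof structure, reduced into Z_n."""
    data = json.dumps(proof, sort_keys=True).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign_proof_of_possession(one_time_phrase, photo):
    """Point C: sign the one-time phrase and photo digest; the server
    later replays them so the user recognizes the correct provider."""
    proof = {
        "phrase": one_time_phrase,
        "photo_sha256": hashlib.sha256(photo).hexdigest(),
    }
    return {"proof": proof, "signature": pow(_digest(proof), d, n)}

def server_verify(msg):
    """Service provider side: check the signature with the public key."""
    return pow(msg["signature"], e, n) == _digest(msg["proof"])
```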

Fourth, the bad bad boy versus the good bad boy dilemma is a
marginal example of a fundamental property: any security scheme
devoid of a bypass for special circumstances has little prospect
of deployment and use. (In technical audits of security systems,
follow the thread of exceptional procedures available to the
scheme management organization and you will quickly reveal that
the participants' security is ineffective in the first place
against the bad bad boys.) I don't have any answer beyond a
suggestion to deploy first for security-critical distributed
applications (those would typically not be browser-based).

Regards,

--
- Thierry Moreau



Re: [cryptography] phishing/password end-game (Re: Why anon-DH ...)

2013-01-17 Thread Thierry Moreau

James A. Donald wrote:

On 2013-01-18 1:17 AM, Thierry Moreau wrote:

First, replace the client certificate by a client PPKP (public-private
key pair) and be ready for a significant training exercise. The
more the trainee knows about X.509, the greater the challenge for
the trainer.


It has been decisively and repeatedly demonstrated that X.509 leads to a 
completely unusable client side interface.




This is a fact. That should be irrelevant ...


I assume that was your point.



The point above is about training users to handle a public-private key 
pair without reference to X.509 stuff (except as a required file 
format). Maybe you already know too much about X.509. Ignoring all of it 
may be difficult.


--
- Thierry Moreau



Re: [cryptography] yet another certificate MITM attack

2013-01-11 Thread Thierry Moreau

Jeffrey Walton wrote:


How do we teach developers to differentiate between the good
men-in-the-middle vs the bad man-in-the-middle?



According to another post by Peter, good ones would be based on 
anonymous D-H.




Perhaps they should be using the evil bit in the TCP/IP header to
indicate someone (or entity) is tampering with the secure channel?
https://tools.ietf.org/html/rfc3514.



That's an April 1st RFC!

Oh, maybe this whole thread is a bit ahead of the calendar.

More seriously, I agree that the questions raised by Jeffrey are 
relevant, and I support his main point. End-to-end security should make 
some sense, even today.


Regards,

--
- Thierry Moreau



Re: [cryptography] yet another certificate MITM attack

2013-01-11 Thread Thierry Moreau

John Kemp wrote:


[...] the _spirit_ of end-to-end semantics is violated here, I believe [...]


Personally, I am not a spiritual cryptography believer.

--
- Thierry Moreau


Re: [cryptography] Why anon-DH is less damaging than current browser PKI (a rant in five paragraphs)

2013-01-08 Thread Thierry Moreau

ianG wrote:

On 8/01/13 15:16 PM, Adam Back wrote:


[...] a story about how their bank is just totally 
hopeless.

[...]
So.  Totally hopeless.  A recipe for disaster.

Obviously we cannot fix this.  But what we can do is decide who is 
responsible, and decide how to make them carry that responsibility.


Hence the question.  Who is responsible for phishing?

Vendor?  CA?  User?  Bank?  SSL techies?



If it's about liability allocation, I'll leave others to comment.

If it's about what might be envisioned by each actor group, I have an 
observation about SSL techies. I guess I qualify as among the group, but 
my difficulty is to train other techies about the consequences of crypto 
scientific/academic results.


Two cases where SSL techies seem hopeless (to me) in applying academic 
results:


The MD5 brokenness got serious attention from the PKI community only 
when an actual collision was shown on a real certificate, no sooner 
(this particular work has little value as a scientific contribution 
besides its industrial impact). Even worse, the random-certificate-
serial-number short-term patch has become "best practice" and PKI 
techies now come up with fantasies about its rationales (see the discussion 
starting at 
http://www.ietf.org/mail-archive/web/pkix/current/msg32098.html ).


Dan Bernstein demonstrated (with the help of colleagues) that the DNSSEC 
NSEC3 mechanism, intended as a DNS zone-walking countermeasure, comes 
with an off-line dictionary attack vulnerability. DNSSEC techies just 
flamed the messenger (on other grounds), ignored the warning, and 
quietly left the vulnerability in oblivion. Professor Bernstein moved 
on to other issues.


For the record, DNS zone walking is a DNS privacy threat introduced by 
plain DNSSEC (e.g. the attacker quickly discovers 
s12e920be.atm-network.example.com because atm-network.example.com is 
DNSSEC-signed without NSEC3). The NSEC3 patch development delayed DNSSEC 
protocol completion by a few years. Professor Bernstein's presentation 
came after the DNSSEC RFCs were done.
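The off-line dictionary attack on NSEC3 can be sketched as follows
(Python; the RFC 5155 hash is SHA-1 over the wire-format name with a
salt and extra iterations -- the salt, iteration count, and candidate
names below are illustrative, not from a real zone):

```python
import hashlib

def dns_wire_name(name):
    """Canonical (lowercase) DNS wire format: length-prefixed labels,
    terminated by a zero byte."""
    out = b""
    for label in name.lower().rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def nsec3_hash(name, salt, iterations):
    """RFC 5155 hash: SHA-1 over the wire name with the salt appended,
    then re-hashed (with the salt) 'iterations' extra times."""
    h = hashlib.sha1(dns_wire_name(name) + salt).digest()
    for _ in range(iterations):
        h = hashlib.sha1(h + salt).digest()
    return h

def dictionary_attack(target, salt, iterations, guesses):
    """Off-line: the attacker hashes candidate names until one matches
    a hash harvested from the zone's NSEC3 chain -- no further queries
    to the name server are needed."""
    for guess in guesses:
        if nsec3_hash(guess, salt, iterations) == target:
            return guess
    return None
```

This is exactly the sense in which NSEC3 trades zone walking for an
off-line dictionary attack: the hashes in the NSEC3 chain are public,
so guessing is rate-limited only by the attacker's own hardware.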


So, when trying to promote an IT security innovation (e.g. if phishing 
could be reduced by some scheme that would protect the banks against 
their own incompetence), the typical expert in the audience is subject 
to this kind of short-sightedness about established practice.


So, I would welcome any strategy to make academic results and IT 
security innovation more palatable to IT experts. This is how I feel 
responsible for the hopeless phishing minefield!


Regards,

--
- Thierry Moreau



Re: [cryptography] key exchange patented :)

2012-11-22 Thread Thierry Moreau

ianG wrote:

giving thanks to wisdom of intellectual property:

http://www.forbes.com/sites/andygreenberg/2012/11/09/meet-the-texas-lawyer-suing-hundreds-of-companies-for-using-basic-web-encryption/ 



the infringement lies solely in the use of the SSL or TLS “handshake”...



I did not read the press article but I went directly to the patent document.

After very superficial study, I would describe this patent disclosure 
(US patent 5,412,730) as an encryption mode of operation in which a key 
schedule operation is required for every encrypted block, invented circa 
1988-1989. I wonder which protocol arrangement might practice the 
patented mode of operation.


I have no motivation for filling the perception gap between my above 
sketchy observation and the press article title.


Regards,

--
- Thierry Moreau



Re: [cryptography] Application Layer Encryption Protocols Tuned for Cellular?

2012-11-04 Thread Thierry Moreau

ianG wrote:

On 1/11/12 10:55 AM, Peter Gutmann wrote:

Jeffrey Walton noloa...@gmail.com writes:

Is anyone aware of of application layer encryption protocols with 
session

management tuned for use on cellular networks?

[...]


 From that description your problem isn't at the encryption-protocol 
level at

all, you need a reliable transport mechanism for cellular networks,


[...]

Also, crypto tends to solve things that apply broadly across the layers, 
so if a design incorporates crypto right from the start, and uses those 
results broadly, benefits ensue


I am quite a novice at the Host Identity Protocol (HIP); I encountered 
it while making a quick survey of Internet protocols with a perspective 
similar to that of the original post.


HIP addresses Host Identity *and* end-to-end security. The rendez-vous 
functionality (addressing the mobility use case) seems to be in the 
design right from the start.


HIP also appears as a lightweight IPsec, but certainly others can offer 
more wisdom in this respect.


Not a simple solution, but how could the original post requirements be 
adequately served by a simple solution?


Regards,



--
- Thierry Moreau



Re: [cryptography] Just how bad is OpenSSL ?

2012-10-30 Thread Thierry Moreau

Solar Designer wrote:

On Tue, Oct 30, 2012 at 11:29:17AM -0400, Thierry Moreau wrote:
Isn't memory-space cleanse() isolated from file system specifics except 
for the swap space?


Normally yes, but the swap space may be in a file (rather than a disk
partition), or the swap partition may be in a virtual machine, which may
reside in a file.


Is the SSD technology used for swap state in any of the OS distributions?


It depends on how the OS is installed.  Plenty of installs have swap on SSD.

Assuming that cleanse() has to deal only with the L1 CPU cache, L2 CPU 
cache, main memory, and swap space, I considered a periodic swap-space 
sanitation operation to be useful: add a new swap space partition, 
remove an existing one, sanitize the removed one (low-level, below the 
file system), put it back into the available set of partitions. I did 
not experiment with this in practice.


But that partition sanitation strategy ought to be part of an open 
HSM type of project.


What kind of HSM is that where you expect to need swap at all?  Just
disable swap, unless you're using an OS that can't live without swap.



I don't know. The intended HSM is Linux-based with a selected set of 
software components for its mission: server-side packages that would be 
on the closed HSM's host are candidates for the open HSM context.


Then it's just a matter of the shortest route to the finish: route a) 
secure the swap, or route b) monitor software components for maximum 
memory usage vs. physical memory, plus perform a memory-exhaustion 
fault analysis.





Alexander




--
- Thierry Moreau



Re: [cryptography] DKIM: Who cares?

2012-10-26 Thread Thierry Moreau

Peter Gutmann wrote:

John Levine jo...@iecc.com writes:


Is there some point to speculating ...?


Absolutely. ...



... so I'm
assuming there was some business-case issue ...
... a security mechanism was deployed on a large scale ...



Let me speculate a moment.

384-bit keys are much more efficient than 768+-bit keys (see the first 
version of the HIP specifications, which had a 384-bit DH prime for 
low-end environments).


The business case is to avoid upgrading the e-mail servers merely 
because you turn on DKIM (hitting a CPU horsepower limit).


Keep in mind that the RSA vs DSA spreads of CPU load between signer and 
verifier are reversed (RSA signature is more CPU-intensive, DSA 
verification is more CPU-intensive).
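A rough way to see the CPU-horsepower point for RSA (toy modulus built
from Mersenne primes -- an illustration of the exponent-size asymmetry,
not a DKIM benchmark): the full-size private exponent used for signing
costs far more modular multiplications than the small public exponent
used for verification.

```python
import time

# Toy RSA modulus from two Mersenne primes (illustration only).
p, q = 2**127 - 1, 2**107 - 1
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

msg = 123456789
sig = pow(msg, d, n)           # signing: full-size private exponent
assert pow(sig, e, n) == msg   # verification: small public exponent

def clock(exp, reps=200):
    """Total time for 'reps' modular exponentiations with exponent 'exp'."""
    t0 = time.perf_counter()
    for _ in range(reps):
        pow(msg, exp, n)
    return time.perf_counter() - t0

# The ~234-bit private exponent needs many more squarings/multiplies
# than the 17-bit public exponent, so signing dominates verification.
assert clock(d) > clock(e)
```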


Regards,

--
- Thierry Moreau



Re: [cryptography] Social engineering attacks on client certificates (Was ... crypto with a twist)

2012-10-14 Thread Thierry Moreau

Hi Ian!

Thanks for this thoughtful feedback.

Your first and explicit question (about application security requirement 
assumptions) deserves an answer. I respond to it (and a few more) and 
postpone replies to other feedback.


ianG wrote:

Hi Thierry,

On 14/10/12 01:21 AM, Thierry Moreau wrote:


When reviewing a security scheme design for a client organization, I had
to ask myself what a potential attacker would attempt if the system was
protecting million dollar transactions.



Yes.  We have to first figure out the business model.  Then extract from 
that a model of threats, and finally come up with a security model to 
mitigate the threats while advancing the business model.




In actual consulting assignments, I had to care for business model 
expansion: the operating division will get authorization from IT 
security staff with a very entry-level set of functionalities and quick 
and dirty client authentication techniques, and later expand the 
application with transactions having significant impacts.


If your business is dealing with million dollar transactions, can I ask 
if you are using browsers at all in that scenario?  If so, isn't there 
something wrong with this scenario?




Ah! Good question. Browsers are in every computing device, so it is very 
tempting to use one where a virus-immune device would be more 
appropriate. We live in the real world.


You already use a browser to configure network devices and to update the 
DNS records that set the connectivity to your million-dollar 
transaction application. (With DNSSEC, the DNS record management 
application is becoming more critical.)


The HTTPS session in these high impact applications should be very 
simple, basic HTML with little or no client-side processing (so that the 
service operator is confident about session integrity) and the user 
should be trained to expect a very stable user dialog. I keep in mind 
the retail payment PIN entry devices where the user is trained to input 
the PIN only on a terminal that has the look-and-feel of a certified 
banking device (this translate to application data input in the 
critical-app-in-the-browser, not to the private key usage at the outset 
of the HTTPS session).


Obviously, the client browser may accept fraudulent certificates if the 
list of root CAs is according to current practice. I guess the only cure 
to this is to use a custom-configured browser when using the 
critical-app-in-the-browser. See for instance Lightweight Portable 
Security http://www.spi.dod.mil/lipose.htm as an initiative in this 
direction (but don't trust *their* list of root CAs !!) (also, review 
their true entropy source ...) (this is based on open software, at least 
some of it GPL; I would like to have their kernel, OS, and bootable 
media scripts in source code -- where should I ask?).


So yes, browsers as a substitute for a dumb terminal are so 
cost-competitive that it is very difficult to avoid them.






If the user is given a genuine certificate containing privacy sensitive
subject name data, how do you expect him/her to react to the information
that the basic Internet protocol (TLS) exposes such data in the clear to
eavesdroppers? How can you expect him/her to protect the private key
once the certificate privacy lesson has been found bogus?



Why are you putting that detail into the certificate?


I am not, but isn't that the case for the PKI-based authentication 
schemes run by governments? Anyway, you and I are discussing the other 
scenario where the certificate is essentially devoid of 
privacy-sensitive data.






Given that I exported the certificate obtained from
https://www.ecca.wtmnd.nl/ and I used the openssl pkcs12 and openssl pkcs8
utilities to look under the hood of the RSA private key, at which
point in the enrollment process should I have been warned against these
steps (or equivalent actions suggested in a social engineering attack)?



No, never, please :)  You shouldn't even be able to do that.


Ah! The technological issue we face here is that there is no mechanism 
for preventing me from doing it, e.g. while following the instructions 
in the context of a social engineering attack.


Regards,


--
- Thierry Moreau



Re: [cryptography] Client certificate crypto with a twist

2012-10-10 Thread Thierry Moreau

Jon Callas wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Oct 10, 2012, at 6:52 AM, Jonathan Katz wrote:


Looking at this just from the point of view of client-server authentication, how is this 
any better than having the website generate a cryptographically strong 
password at sign-up time, and then having the client store it in the password 
cache of their browser?

Note that both solutions suffer from the same drawback: it becomes more 
difficult for a user to log on from different computers.


An excellent point, Jonathan.

I also wonder why there has to be any certification at all?

Right now, web sites store a user name and a representation of a password. 
(Note that a password, a hash of a password, etc. are all representations of 
that password.)

Why not store a representation of a *key* (a hash is a representation of a key) 
and then prove possession of the key? It doesn't need to be certified. I can 
store that key on as many computers as needed via a keychain or something like 
it.


The server then binds a public key to an account. I refer to this as 
first party certification (relying party maintains its own trust 
database and has no need to issue certificates). It suggests a user 
mental model where the PPKP (public-private key pair) becomes the 
authenticating data element. The public key certificate becomes irrelevant.
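A minimal sketch of first party certification (Python; toy RSA key and
a raw challenge-response -- the account layout and names are my own
assumptions, not a protocol): the server stores only a representation
(a hash) of the public key, bound to the account, and checks proof of
possession with no certificate involved.

```python
import hashlib
import secrets

# Toy RSA PPKP from Mersenne primes -- illustration only; the protocol
# shape, not the key size, is the point.
p, q = 2**127 - 1, 2**107 - 1
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

# Enrollment: the server's own trust database holds a hash of the
# public key -- no certificate, no third-party CA.
accounts = {"alice": hashlib.sha256(f"{n}:{e}".encode()).hexdigest()}

def login(user, pubkey, respond):
    """Check the presented public key against the stored hash, then
    challenge for proof of possession of the private key."""
    n_, e_ = pubkey
    if hashlib.sha256(f"{n_}:{e_}".encode()).hexdigest() != accounts[user]:
        return False
    nonce = secrets.randbelow(n_)
    # Raw (textbook) signature on the nonce -- a sketch, not a scheme.
    return pow(respond(nonce), e_, n_) == nonce

# Client side: answer the challenge with the private key.
assert login("alice", (n, e), lambda nonce: pow(nonce, d, n))
```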




Of course, one could have that key be part of a certificate for the times that 
that is necessary.


When needed, e.g. for TLS session negotiation, it can be either 
self-signed, or auto-issued with the AIXCM dummy CA 
(http://www.connotech.com/public-domain-aixcm-00.txt).


Self-signing (or self-issuing) on-the-fly leaves the X.509 details out 
of the key store.


Maybe a single (or a few) PPKP(s) would be easier to migrate from one 
device to the other (easier than a full key store synchronization).


A single PPKP solves the Yet Another Account concern raised by others, 
at the cost of privacy protection (maybe one can't have his cake and eat 
it -- within the TLS paradigm).


Tools to manage the single PPKP would preferably be independent of a 
specific browser. In applying the openssl utilities to this task for a 
proof of concept, one notices the many inconsistencies of PKCS#? and the 
endless X.509 details.


You may guess I am investigating these avenues. However, my primary 
focus is not the low-value authenticated web session use case. 
Accordingly, some of the observations above may be out-of-sync with the 
real world challenges.


- Thierry Moreau


Re: [cryptography] Key extraction from tokens (RSA SecurID, etc) via padding attacks on PKCS#1v1.5

2012-07-03 Thread Thierry Moreau

Noon Silk wrote:

From: 
http://blog.cryptographyengineering.com/2012/06/bad-couple-of-years-for-cryptographic.html

Here's the postage stamp version: due to a perfect storm of (subtle,
but not novel) cryptographic flaws, an attacker can extract sensitive
keys from several popular cryptographic token devices. This is
obviously not good, and it may have big implications for people who
depend on tokens for their day-to-day security. [...] The more
specific (and important) lesson for cryptographic implementers is: if
you're using PKCS#1v1.5 padding for RSA encryption, cut it out.
Really. This is the last warning you're going to get.

Direct link to the paper:
http://hal.inria.fr/docs/00/70/47/90/PDF/RR-7944.pdf - Efficient
Padding Oracle Attacks on Cryptographic Hardware by Bardou, Focardi,
Kawamoto, Simionato, Steel and Tsay



Thanks for this link.

The paper is self-explanatory, at least to someone who has followed the 
resistance of factoring-based public-key cryptography to CCAs (chosen-
ciphertext attacks) for a while.

Here is the main theoretical contribution: At the heart of our 
techniques is a small but significant theorem that allows not just 
multiplication (as in the [Bleichenbacher’s well-known attack] attack) 
but also division to be used to manipulate a PKCS#1 v1.5 ciphertext and 
learn about the plaintext.


The paper reports findings from extensive experiments with the attacks.

The paper is thus a very significant contribution.

Take care, my friends: if you see yourself as an applied 
cryptographer, spot the oracle.
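The malleability underlying the quoted theorem is easy to sketch with
textbook RSA (toy Mersenne-prime key; the paper's contribution is
exploiting this with *division* against PKCS #1 v1.5 padding checks,
and that padding-oracle machinery is omitted here):

```python
# Toy RSA key (Mersenne primes; illustration only).
p, q = 2**127 - 1, 2**107 - 1
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

m = 0x2BAD_BEEF
c = pow(m, e, n)              # ciphertext the attacker observed

s = 7                          # attacker-chosen blinding value
c_mul = c * pow(s, e, n) % n   # "multiplication": decrypts to m*s mod n
assert pow(c_mul, d, n) == m * s % n

s_inv = pow(s, -1, n)          # modular inverse of s
c_div = c * pow(s_inv, e, n) % n   # "division": decrypts to m/s mod n
assert pow(c_div, d, n) == m * s_inv % n
```

Given any yes/no padding oracle, such multiplied or divided ciphertexts
let the attacker steer the hidden plaintext through the oracle's
accepted range and narrow it down step by step.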


--
- Thierry Moreau



Re: [cryptography] Intel RNG

2012-06-21 Thread Thierry Moreau

James A. Donald wrote:

James A. Donald wrote:
   I see no valid case for on chip whitening. Whitening
   looks like a classic job for software. Why waste chip
   real estate on something that will only be used 0.001% of
   the time.

On 2012-06-22 6:53 AM, Michael Nelson wrote:
  I suppose that if the rng was shared between multiple
  processes, and if a malicious process could read the
  internal state, then it could predict what another process
  was going to be given in the near future.

To the extent that rng generates true randomness, it can only partially 
predict.  Assuming that each process collects sufficient true randomness 
for its purposes, not a problem.  That is the whole point and purpose of 
generating true randomness.




Just a few more random arguments in this discussion.

The NIST SP800-90 architecture, which is used in the Intel RNG, has

(A) a true random sampling process which provides less than full 
entropy, followed by


(B) an adaptation process, deterministic but not a NIST algorithm, 
called "conditioning", which provides well-quantified full-entropy bits 
(the designer has to demonstrate that the goal is reached given the 
available understanding of the random sampling process), and finally


(C) the DRBG (deterministic random bit generator) which is periodically 
seeded by the output of the conditioning algorithm.


(A) is truly random, (B) and (C) are deterministic.

If your enemy has access to the data used by either the conditioning 
algorithm or the DRBG, he can figure out their respective output.
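A toy sketch of stages (B) and (C) (stdlib Python; the conditioner and
generator here are simplified stand-ins, not Intel's AES-based
conditioning nor the SP800-90 algorithms): identical seeds, or a
leaked internal state, reproduce the entire "random" output.

```python
import hashlib
import hmac

def condition(raw_samples):
    """(B) Conditioning sketch: compress many partial-entropy samples
    into a full-entropy-sized seed (hash-based stand-in)."""
    return hashlib.sha256(raw_samples).digest()

class ToyDrbg:
    """(C) Deterministic bit generator in the HMAC-DRBG spirit --
    a simplified sketch, not the SP800-90 DRBG."""
    def __init__(self, seed):
        self.key, self.v = seed, b"\x01" * 32

    def generate(self, nbytes):
        out = b""
        while len(out) < nbytes:
            self.v = hmac.new(self.key, self.v, hashlib.sha256).digest()
            out += self.v
        return out[:nbytes]

# Stand-in for raw, biased samples from stage (A).
seed = condition(b"\x01\x00" * 500)
```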


Because the Intel RNG designers do not know which CPU request comes from 
a user versus an enemy, they provide a unique and independent 
output portion to each of them. One cannot guess what the other 
received. If the enemy can trace the user program with the CPU's 
debugging facilities, he might be in a position to eavesdrop on an 
output portion given to the user. Be careful.


But don't trust me about these explanations; I might be an enemy. At 
least Intel's designers don't trust me to audit their deterministic 
algorithm implementations within production parts. So they protect your 
secure applications, just in case my Trojan horse software is loaded 
when your application runs.


As a concluding remark, ... well why should I share a conclusion with 
potential enemies? You may as well (truly random) draw your own conclusion.


Regards,



--
- Thierry Moreau



[cryptography] Intel RNG, questions raised by the report

2012-06-19 Thread Thierry Moreau

Hi!

The interesting discussion induced me to look again at the actual report 
[1]. When I initially did, I came away with the impression that the RNG 
design is sound to the extent that a) any system based on sampling an 
unpredictable physical phenomenon has some intrinsic limitations, and b) 
you accept the NIST SP800-90 architecture. Furthermore, one has to read 
between the lines and form one's own opinion.


Here are the main questions that I had in my first (and again in my 
subsequent) quick reading:


Q1. Do you like a system where the deterministic algorithms 
(conditioning, DRBG) are exposed only at the pseudocode level?


Q2. If you want to build a production system with it, how do you define 
fail-safe? Stated otherwise, if the RNG signals its malfunction to the 
software, as a system integrator, how are you going to handle the 
negative customer perception that the best commercially available 
RNG simply turns off the customer production system?


Q3. Do you agree with the report authors when they write (end of section 
3.2.1) Also, while such failures can cause the design to behave briefly 
as a cryptographically-strong deterministic RNG, this should not result 
in any loss of security. ?


Q4. How do you get confidence that production parts are as good as the 
parts used in the report review?


None of these are specific to the Intel RNG being reviewed; any serious 
RNG arrangement based on sampling an unpredictable phenomenon might 
trigger the same set of questions.



Regards,

[1] Analysis of Intel's Ivy Bridge Digital Random Number Generator, 
prepared for Intel by Mike Hamburg, Paul Kocher, and Mark E. Marson, 
Cryptography Research, Inc., March 12, 2012. 
http://software.intel.com/en-us/articles/download-the-latest-bull-mountain-software-implementation-guide/


--
- Thierry Moreau



Re: [cryptography] can the German government read PGP and ssh traffic?

2012-06-05 Thread Thierry Moreau

Hi Peter,

Replying on the thinking process, not on the fundamentals at this time 
(we seem to agree on the characteristics of PKC versus the alternatives).


Peter Gutmann wrote:

Thierry Moreau thierry.mor...@connotech.com writes:


Unless automated SSH sessions are needed (which is a different problem
space), the SSH session is directly controlled by a user. Then, the private
key is stored encrypted on long term storage (swap space vulnerability
remaining, admittedly) and in *plaintext*form*only*momentarily* for SSH
handshake computations following a decryption password entered by the user. 


...except that a user study a few years back ("Inoculating SSH Against Address
Harvesting") found that two thirds of all SSH private keys were stored in
plaintext on disk.  You need to look at what actually happens in practice, not
what in theory should happen in an ideal world.



Agreeing with the survey findings: if we think towards a solution (or 
some form of improvement), we may focus our attention on the PKC 
characteristics benefiting the one third of PKC users who are not 
that bad at private-key protection.



In any case though you're completely missing the point of my argument (as did
the previous poster), which is that a scary number of people follow the
thinking that passwords are insecure, PKCs are secure, therefore anything
that uses PKCs is magically made secure even when it's quite obviously not
secure at all.  This is magical thinking, not any kind of reasoned assessment
of security.



Agreeing that this magical thinking is indeed operative (not only in IT 
security; e.g. a judge blindly accepting the conclusion of a forensic 
expert irrespective of arguments by the opposing party), the association 
you made with SSH (which is a neat PKC implementation devoid of PKI's 
endless complexity) is what triggered my reaction. Would you extend the 
association to PGP usage? Would you extend the association to Lotus 
Notes as another PKC user community ( 
http://en.wikipedia.org/wiki/Lotus_Notes#Security )?


The temptation to consider IT security a done deal exists with every 
mechanism, we should also agree on that.


Good IT security solutions based on PKC may exist despite the 
temptation. I further opine that SSH using PKC may be part of reasonably 
good IT security solutions, and the temptation will still exist.


Regards,


--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] can the German government read PGP and ssh traffic?

2012-05-28 Thread Thierry Moreau

Peter Gutmann wrote:

Werner Koch w...@gnupg.org writes:


Which is not a surprise given that many SSH users believe that ssh
automagically makes their root account safe and continue to use their lame
passwords instead of using PK-based authentication.


That has its own problems with magical thinking: Provided you use PK auth,
you're magically secure, even if the private key is stored in plaintext on ten
different Internet-connected multiuser machines.  I don't know how many times
I've been asked to change my line-noise password for PK auth, told the person
requesting the change that this would make them less secure because I need to
spread my private key across any number of not-very-secure machines, and
they've said that's OK because as long as it uses PKCs it's magically secure.

Peter.



Please Peter, a little rigor in the arguments would help.

Since the SSH servers need *only*your*public*key*, the ten 
different Internet-connected multi-user machines are not those SSH 
servers whose admins would have made the request to turn to client 
PK for SSH.


If you choose to roam among different machines (and ones as insecure as 
you wish, to support your argument), it's your decision as an SSH client 
user. With the low selling price of small single-user systems, you could 
also dedicate one as an SSH client console and make it a) intermittently 
connected to the Internet, b) single-user for all practical purposes, c) 
little vulnerable to Trojan horses, d) limited to the software you 
selected for the job, e) ...


Unless automated SSH sessions are needed (which is a different problem 
space), the SSH session is directly controlled by a user. Then, the 
private key is stored encrypted on long term storage (swap space 
vulnerability remaining, admittedly) and in 
*plaintext*form*only*momentarily* for SSH handshake computations 
following a decryption password entered by the user. If you have to fear 
keyboard grabbers, you fear them for line-noise passwords as well.


Maybe you want to argue that PK authentication is an HMI nightmare and 
comes with misleading security claims derived from an obscure theory of 
operation. Fine. But in the case of SSH authentication, the PK 
alternative allows security-minded remote system operators to enjoy a 
secure remote console.


I don't understand why you would choose to handle your encrypted SSH 
private key in a lousy way. But it seems inappropriate to assume that 
better ways are not feasible.


Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] DIAC: Directions in Authenticated Ciphers

2012-05-09 Thread Thierry Moreau

In a long message, Zooko Wilcox-O'Hearn wrote, in part:

the person who has the authority to sign the message
can *not* sign new messages

 it means that the data is immutable once transmitted, even to
someone who has all of the secrets that the original sender had.


This looks like a notarization use case of crypto, with the attempt to 
implement the notarization service without the help of a trusted 
[timestamp/historic evidence] third party.


Just my attempt to summarize a lengthy explanation ... no further comments.

Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] PKI in practice: is there a list of (widely deployed) client-certs-issuing CAs?

2012-04-27 Thread Thierry Moreau

Follow-up on my own post below ...

Thierry Moreau wrote:

A question for those who follow PKI usage trends.

Is there a list of CAs that issue X.509 end-user certificates?

Here is the rationale for the question:

If an end-user has a certificate, he (more or less consciously) controls 
a private key. Suppose one deploys a web server that cares *only* about 
end-user public keys, e.g. it keeps track of end-user reputation and 
that's it for trust management. Then any type of certificate is good 
enough (self-signed, auto-issued, issued by a regular 
client-cert-issuing CA).


This web server can have an immediate potential user base if it 
negotiates the TLS session with a long list of CA distinguished names 
(in the CertificateRequest message).


The management tools for the contemplated web server scheme would 
include an issuer DN extraction utility from end-user or CA certificates 
so that the list may be augmented based on casual observations. Also, 
the SSL debugging tools will report the contents of CertificateRequest 
messages from public servers supporting client certs.


Anyone went through such data collection before?

Thanks in advance.



I got a few off-list messages.

One pointed towards the TLS 1.1 provision for an empty list of 
client-certs-issuing CAs in the CertificateRequest message (in which case 
the client may use any certificate, which is the intended purpose). This 
is a protocol relaxation from TLS 1.0.


Another observation is that a major TLS *server* implementation 
truncates this list (from the operator-supplied configuration) to a size 
much smaller than the protocol limit. I don't know whether this reflects 
some limitation of browsers acting as TLS clients.


So, if I had a long list of distinguished names for 
client-certs-issuing CAs, I am not sure I could recommend using it as a 
default configuration item.


I guess it's preferable to focus on configuration management tools that 
ease the job of supporting a more specific server user base.


Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


[cryptography] PKI in practice: is there a list of (widely deployed) client-certs-issuing CAs?

2012-04-26 Thread Thierry Moreau

A question for those who follow PKI usage trends.

Is there a list of CAs that issue X.509 end-user certificates?

Here is the rationale for the question:

If an end-user has a certificate, he (more or less consciously) controls 
a private key. Suppose one deploys a web server that cares *only* about 
end-user public keys, e.g. it keeps track of end-user reputation and 
that's it for trust management. Then any type of certificate is good 
enough (self-signed, auto-issued, issued by a regular 
client-cert-issuing CA).


This web server can have an immediate potential user base if it 
negotiates the TLS session with a long list of CA distinguished names 
(in the CertificateRequest message).


The management tools for the contemplated web server scheme would 
include an issuer DN extraction utility from end-user or CA certificates 
so that the list may be augmented based on casual observations. Also, 
the SSL debugging tools will report the contents of CertificateRequest 
messages from public servers supporting client certs.


Anyone went through such data collection before?

Thanks in advance.

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] RSA Moduli (NetLock Minositett Kozjegyzoi Certificate)

2012-03-23 Thread Thierry Moreau

Please let me try to summarize.

I guess it is OK to infer from Adam's explanations and Peter's observation 
about homegrown CA software implementations used by some CAs that ...


The unusual public RSA exponent may well be an indication that the 
signature key pair was generated by a software implementation not 
following the desirable strategies commonly agreed upon (among the 
number-theoreticians who have surveyed the field).


At a modulus size of 2048 bits, I wouldn't lose sleep on this hypothesis.
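For concreteness, the constraint behind the retry logic discussed in this 
thread -- e must be invertible modulo carmichael(n) -- can be checked 
directly (the toy primes and helper name below are mine, for illustration 
only):

```python
import math

# Toy RSA parameters, far too small for real use.
p, q = 1009, 1013
lam = math.lcm(p - 1, q - 1)  # carmichael(n) for n = p*q

def exponent_ok(e: int) -> bool:
    # e is usable iff gcd(e, carmichael(n)) == 1,
    # i.e. d = e^-1 mod carmichael(n) exists.
    return math.gcd(e, lam) == 1

print(exponent_ok(65537))  # True: a prime e rarely shares a factor with lam
print(exponent_ok(3))      # False here: 3 divides p - 1 = 1008
```

A prime e such as 65537 fails this test only when it happens to divide 
p-1 or q-1, which is why prime exponents rarely force a retry during key 
generation.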

- Thierry

Adam Back wrote:

As to why conventionally e is a small low-hamming-weight prime, even though
it doesn't have to be, I suspect it arose because some RSA code used to
generate not strong primes, but random primes.

If you generate a random prime, then the factors of P=(p-1)/2, Q=(q-1)/2
will be random.  But they are quite likely to contain 3, somewhat likely
to contain 5, etc., with decreasing probability for larger potential prime
factors.  (And crucially for strength, it is unlikely a random prime will
be B-smooth for dangerously small B.)  Anyway, consider that you choose a
random pair of primes p and q, and a random or fixed non-prime small
low-hamming-weight e, say 2^15+1, which has factors 3x3x11x331; then you
will very often have to abort and try again with a new e or a new p and/or
q, because P or Q will be divisible by some of these small factors, and
then d will not be computable.

Consequently it'll be simpler and faster to pick a prime e: for a given
size, a prime e has the lowest probability of having a common factor with
carmichael(n).

If you have strong primes, which I think is more common at this point, e
could be any random odd (non-even) number, presumably with low hamming
weight.

Low hamming weight is a performance trick for modexp which involves more
multiply operations for higher hamming weight.

Adam

On Fri, Mar 23, 2012 at 03:05:48PM +0100, Adam Back wrote:
I presume it's implied (too much tongue-in-cheek stuff for my literal brain
to interpret), but a self-signed CA cert is a serious thing - that's
typically a sub-CA cert.  How that came to be signed with a bizarre though
legal e parameter is scary - what library, or who wrote the code, etc.

The usual reason to use primes of the form 2^n+1, co-prime to
carmichael(n), is low hamming weight.

Other than that, typically p, q are strong primes with P=(p-1)/2,
Q=(q-1)/2 also prime, so any odd (non-even) e is pretty much guaranteed to
work, as carm(n) = 2*P*Q where P = (p-1)/2, Q = (q-1)/2.  Or, if using
Lim-Lee primes, carm(n) is at least B-rough, meaning P=P1*P2*...*Pn where
|Pi| > B for all Pi.  And e would typically be smaller than B bits anyway
for performance.

(If e is not co-prime to carm(n) then d doesn't exist, as modinv(a,x)
requires gcd(a,x)==1, so it's not like it will be insecure, it just won't
work!)

e should also not be too small or other attacks kick in.

Dan Boneh has a good summary of RSA limitations:

http://www.ams.org/notices/199902/boneh.pdf

Adam

ps carm(n) = phi(n)/2 = (p-1)*(q-1)/2.

On Fri, Mar 23, 2012 at 06:51:51AM -0700, Jon Callas wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Mar 23, 2012, at 6:39 AM, Peter Gutmann wrote:


Jon Callas j...@callas.org writes:

On Mar 23, 2012, at 6:03 AM, Peter Gutmann wrote:

Jeffrey Walton noloa...@gmail.com writes:
Is there any benefit to using an exponent that factors? I always 
thought low

hamming weights and primality were the desired attributes for public
exponents. And I'm not sure about primality.


Seeing a CA put a key like this in a cert is a bit like walking 
down the
street and noticing someone coming towards you wearing their 
underpants on
their head, there's nothing inherently bad about this but you do 
tend to want

to cross the street to make sure that you avoid them.


But Peter, CAs don't *precisely* put keys into certs. CAs certify a 
key that

the key creator wants to have in their cert.


This is a self-signed cert from the CA, so the key creator was the CA.


So it's like issuing yourself an Artistic License card with a color 
printer and laminator. :-) Good for lots of laughs.


Jon


-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFPbIAAsTedWZOD3gYRAo4KAKDuG0OgEg81mxGUJDGlYp5OzLMI/gCgkRRq
/G3T3NLS/8k1L4njuxMJMd0=
=tHSy
-END PGP SIGNATURE-





Re: [cryptography] Certificate Transparency: working code

2012-03-01 Thread Thierry Moreau

Ben Laurie wrote:

http://www.links.org/?p=1226

Quite a few people have said to me that Certificate Transparency (CT) 
sounds like a good idea, but they’d like to see a proper spec.


Well, there’s been one of those for quite a while, you can find the 
latest version [...],
or for your viewing convenience, I just made an HTML version 
http://www.links.org/files/sunlight.html.




May I ask a (maybe stupid) question?

... audit proofs will be valid indefinitely ...

Then what remains of the scheme's reputation once Mallory has managed to 
inject a fraudulent certificate into whatever is being audited (it is 
called a log, but I understand it as a grow-only repository)?
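For context, an "audit proof" in the draft is a Merkle inclusion path. A 
minimal sketch of the verifier side, assuming the 0x00/0x01 leaf/node 
hash prefixes later standardized in RFC 6962 (the helper names are mine):

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # 0x00 prefix domain-separates leaves from interior nodes
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # 0x01 prefix marks an interior node
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(data: bytes, path, root: bytes) -> bool:
    """path: list of (sibling_hash, sibling_is_on_right), leaf to root."""
    cur = leaf_hash(data)
    for sibling, on_right in path:
        cur = node_hash(cur, sibling) if on_right else node_hash(sibling, cur)
    return cur == root
```

Such a proof indeed stays verifiable as long as the root it chains to is 
trusted, which is exactly why the question of the log operator's 
reputation after a fraudulent entry matters.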


Actually, my expectation would be to read an explanation of which 
security services are being offered, and which kind and level of 
assurance the CT server operating organization is expected to provide. 
What is the problem being addressed, to whom does the main benefit 
accrue, and from whom is involvement expected? Once I can see these, I may 
appreciate Apache and browser backward-compatibility features and the like.


Thanks for your patience with my scrutiny.


--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-22 Thread Thierry Moreau

While commenting about

http://www.cs.bris.ac.uk/Research/CryptographySecurity/knowledge.html

, Marsh Ray wrote:


It talks about entropy exclusively in terms of 'unpredictability', which
I think misses the essential point necessary for thinking about actual
systems: Entropy is a measure of uncertainty experienced by a specific
attacker.


I am curious that you seem to prefer the risk-analysis definition of 
entropy over the more general definition. I am rather confident that a 
proper application of the more general definition is more effective in 
providing security assurance: future attack vectors are deemed to be 
unexpected ones.


You are not alone in using this perspective. NIST documents on secret 
random data generation are very confusing about the definition they use. 
(I dropped out of their feedback requests on the last revision/round, 
where they split the contents into two documents and released only one.) 
NIST seems to refer to three definitions: one from information theory 
(min-entropy), one where every bit is unpredictable (full entropy -- you 
know how NIST loves cryptographic parameters of just the proper size), 
and the risk-analysis definition.
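The gap between the min-entropy and average-case (Shannon) notions can be 
made concrete with a small computation; the distributions below are 
hypothetical examples of mine:

```python
import math

def min_entropy(probs):
    # H_min = -log2(max p): hardness of the attacker's single best guess
    return -math.log2(max(probs))

def shannon_entropy(probs):
    # H = -sum p*log2(p): average unpredictability, always >= H_min
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [0.25] * 4
skewed = [0.5, 0.25, 0.125, 0.125]
print(min_entropy(uniform), shannon_entropy(uniform))  # 2.0 2.0
print(min_entropy(skewed), shannon_entropy(skewed))    # 1.0 1.75
```

For key generation the conservative (min-entropy) figure is the relevant 
one: the skewed source above averages 1.75 bits per sample, yet an 
attacker's best single guess succeeds half the time.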


Anyway, this whole thing about the RSA modulus GCD findings makes us 
question entropy from a renewed perspective (a reminder that future attack 
vectors are deemed to be unexpected ones).


Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-20 Thread Thierry Moreau

Ben Laurie wrote:


On Sun, Feb 19, 2012 at 05:57:37PM +, Ben Laurie wrote:

In any case, I think the design of urandom in Linux is flawed and
should be fixed.


In FreeBSD random (and hence urandom) blocks at startup, but never again.



Thanks for presenting the FreeBSD random source design in this neat summary.

I take this opportunity to review the Linux design decisions.

First, let me put aside the initial entropy assessment issue -- it's not 
solvable without delving into the details -- and let me assume the FreeBSD 
entropy collection is good, at the possible cost of slowing down the 
boot process.


Then, basically the FreeBSD design is initial seeding of a deterministic 
PRNG. If a) the PRNG design is cryptographically strong (a qualification 
which can be fairly reliable if done with academic scrutiny), and b) 
the PRNG state remains secret, THEN the secret random source is good 
throughout the system's operating life cycle. (I restrict the design to a 
simple PRNG because periodic merging of true random data into the PRNG 
state is something not studied in the algorithmic theory publications.)
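The seed-once model just described can be sketched as a hash-based 
generator; this is a minimal illustration of the concept, NOT a 
production DRBG, and the class name is mine:

```python
import hashlib

class SeedOnceGenerator:
    """Sketch of the FreeBSD-style model: block until one good seed is
    available, then expand it deterministically forever after."""

    def __init__(self, seed: bytes):
        if len(seed) < 32:
            raise ValueError("need at least 256 bits of seed entropy")
        self._state = hashlib.sha256(seed).digest()  # secret internal state
        self._counter = 0

    def read(self, n: int) -> bytes:
        # SHA-256 in counter mode over the secret state
        out = b""
        while len(out) < n:
            self._counter += 1
            block = self._state + self._counter.to_bytes(8, "big")
            out += hashlib.sha256(block).digest()
        return out[:n]
```

As long as the seed-derived state stays secret, the output stream is 
unpredictable to outsiders -- which is exactly the "secrecy of the PRNG 
state" requirement discussed above.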


The secrecy of the PRNG state is a requirement NO GREATER THAN the 
secrecy of any long-term secret (e.g. a MAC symmetric key or a digital 
signature private key) needed during the system operating life cycle. 
Even if there were a few cases where a security system requires a random 
source, but not a single long-term secret, an anecdotal case may not be 
the best model for a general-purpose OS design. By logical inference 
then, requiring continuous (or periodic) true random data collection is 
an over-design (i.e. engineering resources better put into greater 
assurance about secrecy protections), or a plain design flaw (remaining 
vulnerabilities in the secrecy attack vectors overlooked due to 
attention paid to true random data collection).


So, the freebsd design appears reasonable to me. Can it be brought into 
Linux? Is it a Linux design flaw to omit boot-time entropy assessment?


My answers are only as an option and no.

The design glitch is the blocking at boot time for entropy assessment 
(waiting until the entropy pool is filled to an adequate level).


By essence, true random data collection is antagonistic to a complex 
computer system. Generally, you want a computer system to behave 
predictably. Specifically, it would be sad if your next aircraft 
boarding ended in a crash because a bad pointer in the fly-by-wire 
software referred to a memory location holding a precise interrupt 
timing measurement instead of a fixed data value (RTCA DO-178B in a 
nutshell). In practice, almost every strategy for collecting true random 
data based on unpredictable facets of computer technology turns void 
with technological advances. Dedicated devices or audio ports cost 
money and/or create provisioning hindrances.


Thus, blocking at boot time for entropy assessment may not be 
acceptable as a default for Linux: it is hard to provide an upper limit 
on the blocking time, and it is certainly not perceived as useful by a 
large portion of system users/operators. The FreeBSD design for 
/dev/{,u}random appears fit for a more understanding user/operator base.


The mental model for the authentication key generation operation should 
reflect the fact that it requires the computer to roll dice very 
secretly for your protection, but the computer is very poor at this type 
of dice rolling -- it may thus take time and/or require you to type 
something on the keyboard/mouse/touchscreen until adequate dice-shaking 
simulation has been achieved.


If security experts are not prepared to face this fact -- true random 
data collection and the associated entropy assessment cannot be made 
intrinsic to a computer system -- we cannot justifiably expect OS 
suppliers to provide a magic fix, or software developers to take the 
liberty of solving an issue which is seldom stated.


In this perspective, the root cause of the RSA modulus GCD findings is 
the security experts' inability to recognize and follow up on the 
ever-present challenges of secret random data generation. As such, the 
Linux design is hardly at stake.


Just my view, enjoy!



--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-19 Thread Thierry Moreau

Ben Laurie wrote:

On Fri, Feb 17, 2012 at 8:39 PM, Thierry Moreau
thierry.mor...@connotech.com wrote:

Ben Laurie wrote:

On Fri, Feb 17, 2012 at 7:32 PM, Thierry Moreau
thierry.mor...@connotech.com wrote:

Isn't /dev/urandom BY DEFINITION of limited true entropy?


$ ls -l /dev/urandom
lrwxr-xr-x  1 root  wheel  6 Nov 20 18:49 /dev/urandom -> random


The above is the specific instance on your environment. Mine is different:
different kernel major/minor device numbers for /dev/urandom and
/dev/random.


So? Your claim was Isn't /dev/urandom BY DEFINITION of limited true
entropy? My response is: no.


I got the definition from

man 4 random

If your /dev/urandom never blocks the requesting task irrespective of the
random bytes usage, then maybe your /dev/random is not as secure as it might
be (unless you have a high-speed entropy source, but what is high speed
in this context?)


Oh, please. Once you have 256 bits of good entropy, that's all you need.



First, about the definition, from man 4 random:

quote
A  read  from  the  /dev/urandom device will not block waiting for more 
entropy.  As a result, if  there  is  not  sufficient  entropy  in  the 
entropy  pool,  the  returned  values are theoretically vulnerable to a 
cryptographic attack on the algorithms used by the  driver.   Knowledge 
of how to do this is not available in the current non-classified 
literature, but it is theoretically possible that such an attack may 
exist.  If this is a concern in your application, use /dev/random instead.

/quote

If the RSA modulus GCD findings are not a cryptographic attack, I don't 
know what is. (OK, it's not published as an attack on the *algorithm*, 
but please note that /dev/urandom cryptographic weakness may be 
at stake according to other comments in the current discussion.)
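The GCD finding itself needs no sophistication. With toy stand-in primes 
(hypothetical values of mine), a shared factor falls out of a single gcd:

```python
import math

# Two toy "RSA moduli" sharing a prime because two key generations
# consumed correlated low-entropy randomness. The primes here are the
# 10,000th, 100,000th and 1,000,000th primes -- far too small for real RSA.
p, q1, q2 = 104729, 1299709, 15485863
n1, n2 = p * q1, p * q2

shared = math.gcd(n1, n2)   # one cheap gcd, no factoring required
assert shared == p          # the common prime falls out immediately
assert n1 // shared == q1   # ...and both moduli are now fully factored
```

The published surveys did this pairwise (via a batch-GCD algorithm) over 
millions of collected public moduli; any pair sharing a prime is broken.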


Second, about the sufficiency of 256 bits of good entropy: the problem 
lies with good entropy. It is not testable by software, because entropy 
quality depends on the process by which truly random data is collected, 
and the software cannot assess its own environment (at least for the 
Linux kernel, which is meant to be adapted/customized/built for highly 
diversified environments).


Third, since good entropy turns out to reduce to someone's confidence in 
the true random data collection process, you may well have your own 
confidence.


In conclusion, I am personally concerned that some operational mishaps 
caused some RSA keys to be generated with /dev/urandom in environments 
where I depend on RSA security.


And yes, my concern is rooted in the /dev/urandom definition as quoted 
above.


If I am wrong in this logical inference (i.e. the RSA modulus GCD 
findings could be traced to other root cause than limited entropy of 
/dev/urandom), then I admittedly have to revise my understanding.


Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-17 Thread Thierry Moreau

D. J. Bernstein wrote:
[...]


There are of course more defenses that one can add to provide resilience
against more severe randomness deficiencies: one can start with more
random bits and hash them down to 256 bits; use repeated RDTSC calls as
auxiliary randomness input; etc. These details have essentially nothing
to do with the choice of cryptographic primitive, and the whole point of
/dev/urandom is to centralize these details and get them right rather
than having everybody reimplement them badly. It would be interesting to
understand how /dev/urandom failed for the repeated RSA primes---I'm
presuming here that /dev/urandom was in fact the main culprit.



Isn't /dev/urandom BY DEFINITION of limited true entropy? True entropy 
collection may take time (and is inescapably based on environmental 
assumptions) while /dev/urandom is defined as non-blocking. No matter 
the theoretical properties of the (deterministic) PRNG component of 
/dev/urandom, they cannot expand *true* entropy.


And this is so, no matter the amount of detail you delegate to reputed 
security software developers.


Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] Complying with GPL V3 (Tivoization)

2012-01-09 Thread Thierry Moreau

Jeffrey Walton wrote:

Hi All,

I was reading on CyanogenMod (a custom ROM project for Android) and
The story behind the mysterious CyanogenMod update
(http://lwn.net/Articles/448134/).

Interestingly, it seems some private keys were circulated to comply
with GPL V3, with some nasty side effects (could anything else be
expected?). Some interesting points were brought up, including how to
comply with GPL V3.

Is anyone aware of papers on integrity/signature schemes or protocols
tailored for GPL V3? Or does this reduce to (1) allow the
hardware/firmware to load additional [trusted] public keys; or (2)
provide the private key for the hardware?

Jeff



The high-level picture would be as follows:

[A] The GPL V3 philosophy excludes software locked to proprietary 
hardware (the anti-Tivoization provision).


[B] The custom ROM software version distributed under GPL V3 has to 
distribute its private signing key so that the software is not tied to 
proprietary hardware. Consequently, accepting the software license terms 
includes the implicit limitation of an explicitly-breached signature key.


[C] However, the GPL philosophy allows closed or proprietary 
modifications *within*an*organization*, so the IT department could use 
its own private key applicable to the internally distributed hardware. 
This may well be unworkable in practice because all software components 
might need the IT department's blessing/signature, but who has 
demonstrated that code signing is workable at all at the institutional level?


[D] The GPL V3 compliance would forbid any transfer of such 
gplv3-turned-proprietary ROM-based equipment outside of the organization 
(one would put back the original ROM version as part of IT equipment 
sanitization before disposal).


I guess multiple keys or other schemes can only be attempts to obfuscate 
the fact that one breaches either the software integrity mechanism or 
the relevant GPL rule: you may not re-distribute without allowing 
modifications.


Overall, [C] is perhaps the essential vision of trusted computing where 
some hardware comes bound to a central authority responsible for 
software integrity. I never understood why the central authority had to 
be the hardware vendor who also sells to influential governments.


Regards,

--
- Thierry Moreau



Re: [cryptography] airgaps in CAs

2012-01-08 Thread Thierry Moreau

Florian Weimer wrote:

* Eugen Leitl:


Is anyone aware of a CA that actually maintains its signing
secrets on secured, airgapped machines, with transfers batched and
done purely by sneakernet?


Does airgapping provide significant security benefits these days,
compared to its costs?

File systems are generally less robust than network stacks.  USB
auto-detection is somewhat difficult to control on COTS systems.  So
unless you build your own transfer mechanism, a single TCP port
exposes less code, and code which has received more scrutiny.



About the same scrutiny needed to ensure that only a single IP port is 
listening can be applied to preventing USB port leaks.


In practice, I had to configure a Linux kernel devoid of USB support as 
a first motivation, but it also provided assurance about potential IP 
vulnerabilities (I don't recall the details). Thereafter, the selection 
of software packages was driven by IP port restriction, but it also 
provided assurance about file system leaks.


I guess you cannot equate high security (whether it is labeled air gap 
or IP port restricted) with any segment of the COTS system market.


With respect to the costs, both air gap and IP port restricted imply 
higher operational costs: they require more direct physical contact with 
the physical object (at least if you insist on *a*single*TCP* port, in 
which case you don't get SSH).


Overall, air gap (and certified HSM) are public-relations security 
slogans. The real challenge in security encompasses key management and 
authentication/authorization management, but you seldom see these 
addressed in public records of secure operations (the ICANN DNSSEC root 
KSK management being the exception).


Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] Non-governmental exploitation of crypto flaws?

2011-11-30 Thread Thierry Moreau

Ilya Levin wrote:

On Tue, Nov 29, 2011 at 5:52 PM, Jon Callas j...@callas.org wrote:


But the other one is Drew Gross's observation. If you think like an attacker, 
then you're a fool to worry about the crypto.


While generally true, this is kind of an overstatement. I'd say that
if you think like an attacker then crypto must be the least of your
worries.  But you still must worry about it.

I've seen real-life systems that were broken because of crypto combined
with other things. Well, I broke a couple of these in the old days
(whitehat, legal stuff).

For example, the Internet banking service of the bank I would not name
here was compromised during a blind remote intrusion simulating
exercise because of successful known plaintext attack on DES. Short
DES keys together with key derivation quirks and access to ciphertext
made the attack very practical and very effective.



Indeed, single-length DES cracking for attacking electronic payment 
networks is the other instance (along with the TI software signature 
public key factorization) of a production crypto attack. Both are 
based on brute force against short key material.


It is not verifiable because a) the perpetrators needed no publicity to 
benefit, and b) the financial institutions were upgrading electronic 
payment gear to triple-DES (suddenly at a faster than usual pace which 
could raise suspicion, at least in my mind), and also preferred less 
publicity.


I had some form of confirmation (that the attack scenario occurred) from 
the way the success of the triple-DES upgrade project was described by a 
bank technology specialist who would have been aware of the incident(s).


- Thierry Moreau


Again, I'm not arguing with Drew Gross's observation. It is just a bit
extreme to say it like this.

Best regards,
Ilya

---
http://www.literatecode.com




Re: [cryptography] -currently available- crypto cards with onboard key storage

2011-10-28 Thread Thierry Moreau

Thor Lancelot Simon wrote:

On Thu, Oct 27, 2011 at 12:15:32PM +0300, Martin Paljak wrote:

You have not described your requirements (ops/sec, FIPS/CC etc) but if
the volume is low, you could take USB CryptoStick(s)
(crypto-stick.org), which is supported by GnuPG and what can do up to
4096 bit onboard keys, unfortunately only one signature/decryption
pair usable through GnuPG. Probably you can also stack them up and
populate with the same key for load sharing.


So this appears to be basically a smartcard and USB smartcard reader
built into the same frob.  I can probably find a way to put it within
the chassis of even a fairly compact rackmount server without fear it
will come loose and take the application offline.

Unfortunately, it also appears to be unbuyable.  I tried all three
sources listed on the crypto-stick.org website yesterday: two were
out of stock, while the third said something along the lines of
"low stock - order soon", walked me through the whole ordering process,
then said my order had been submitted -- without ever asking for
payment.

It's possible I might walk into my office next week and see two
crypto-sticks, provided free of charge, but I am not too optimistic
about that!

Is there a way to actually get these?



This sounds familiar to me: while the direct cost, per unit, of crypto 
gear would seem very low when compared with mass market devices with the 
same kind of electronics, crypto gear remains very difficult to procure 
without a massive contribution to engineering costs incurred by the 
supplier (for the crypto added value).


Ultimately, the crypto gear under discussion is merely a CPU plus a 
rudimentary memory subsystem and an interface to a host (it may have a 
separate keypad and/or a key injection port). The packaging matters for 
providing confidence that the secret/private keys remain onboard. 
Likewise, the API with the host is a can of worms about which you want to 
avoid discussion, again to provide this well-informed sense of 
assurance that information risks and controls are in balance.


This being said, there is indeed a practical security benefit in having 
computations directly involving secret/private keys done by a CPU 
unlikely to be infected by a Trojan. Security certification concerns 
aside, the architectural demands are no more elaborate than exactly 
that: a CPU unlikely to be infected by a Trojan. From there, you either 
pay for the certification gimmick, or you craft your own solution. This 
is the basis for an open source HSM ...


Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] server-signed client certs (Re: SSL is not broken by design)

2011-09-26 Thread Thierry Moreau

ianG wrote:

On 26/09/11 16:49 PM, Adam Back wrote:
What about introducing the concept of server signed client certs.  A 
server
could recognize its own server key pair signature on the client cert, 
even

though the server cert is not a proper CA cert.


Hmmm... interesting idea!



The term I used for this concept is "first party certification".

Typically, the server application will store the entire cert in its db 
as account info.  So analysing the signature as own-CA would save that 
database lookup.  This might make sense if we are relying on say Apache 
to do all the processing, because it is automatable, and Apache doesn't 
have much in the way of database processing capabilities.




Apache allows a PHP application to retrieve the whole X.509 certificate 
used by the client. In the proof-of-concept validation I made, I 
extracted the client's public RSA modulus and that was (almost) it for 
client authentication (SSL handled by the Apache module). By "almost," I 
mean the database management workload amounts to the mapping 
RSA_modulus==account.
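The mapping RSA_modulus==account can be sketched as follows (a 
hypothetical sketch in Python rather than the PHP actually used; the 
class name, the hashing of the modulus for a compact db key, and the 
sample values are all illustrative assumptions, not the deployed code):

```python
import hashlib

class FirstPartyAccounts:
    """Account registry keyed by the client's RSA public modulus.

    The certificate is only a container here: any certificate carrying
    the same modulus authenticates the same account, so no CA trust
    (and no own-CA signature check) is needed.
    """

    def __init__(self):
        self._by_modulus = {}

    @staticmethod
    def _key(modulus: int) -> str:
        # Hash the modulus so the database key has a fixed, compact size.
        n_bytes = modulus.to_bytes((modulus.bit_length() + 7) // 8, "big")
        return hashlib.sha256(n_bytes).hexdigest()

    def enroll(self, modulus: int, account: str) -> None:
        self._by_modulus[self._key(modulus)] = account

    def lookup(self, modulus: int):
        # Returns the account name, or None for an unknown key pair.
        return self._by_modulus.get(self._key(modulus))

# Usage: the web server's TLS layer extracts the modulus from the client
# certificate; a small integer stands in for a real modulus here.
reg = FirstPartyAccounts()
reg.enroll(0xC0FFEE, "alice")
print(reg.lookup(0xC0FFEE))  # alice
print(reg.lookup(0xBEEF))    # None
```

The lookup replaces both the CA signature verification and the 
store-the-whole-cert database design mentioned above.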


Thus the client does not even need a certificate signed by an own-CA; 
any certificate would do (a certificate is used only because mainstream 
SSL does not support bare public keys).


Fully appreciating the potential of these ideas requires going back to 
the essential properties of digital signatures, and getting rid of the 
PKI trust model almost completely.


(Details available on request ... but no spare time to provide them in 
the foreseeable future.)


Although typically, I would suggest that Apache should be put in 
pass-thru mode and let the application do *all* the login processing.  
By this I mean, analyse the crypto results from the certs, and put it 
all in the PHP variables.  Never ever drop the connections on its own 
decisions, in my experience, letting Apache make security decisions like 
that always results in lousy user performance, which reduces overall 
security (user does it another way).


(It's an interesting idea ... )



Yes, but ...

you need to switch clients to using (a more or less degenerate form of) 
client certificates, which historically has failed,


and maybe more subtly, you start giving end-users something for their 
own security which is not branded by the service operator: ultimately it 
is the protection of the client private key that counts, and once the 
user's mental model is well understood, there is no barrier to the 
client moving to another service operator without any loss in security.


Don't get me wrong, I would like to see this usage of client key pairs. 
The above reflects my understanding of the objections raised when this 
concept has been put forward.



Then the password request
on the client goes to the browser/os key store.  So long as you had CA
pinning that would help the phishing situation.


Yes.


Yes.

Of course there is still the UI problem of somehow having the user 
detect the difference between the key store password dialog and a fake 
dialog put up by a hostile web page. There are some things that can 
help, like user-customized dialogs where the hostile site can't know 
the customization.


Right.



Hey, don't forget that the enemy needs the 
encrypted-client-private-key-file. The hostile web page needs a Trojan 
to get it. This raises the bar.


--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


iang



Adam

On Mon, Sep 26, 2011 at 07:52:20AM +1000, ianG wrote:

On 25/09/11 10:09 AM, James A. Donald wrote:

On 2011-09-25 4:30 AM, Ben Laurie wrote:

I'm just saying I think its hard to detect when a password is being
asked for as part of the risk assessment.


http and https do not know there are such things as logons.  Logons 
need to be built into the protocol, rather than added on top.  Your 
browser should know you are logged on.


When using client certs, it works: the browser, the server and https 
do know if you're logged on [0].


The problem with HTTPS login is that it was sacrificed at the altar 
of some unworkable commercial dream, thus forcing developers to rely 
on passwords.  Any client cert is better than the current best saved-
password situation, because the technical security of a public key 
pair always exceeds a password [1].  But while vendors will slave to 
make saving passwords easier (so as to cope with the explosion of 
sites & contexts) ... they won't work to make client certs better.


All of this (again) aligns well with key continuity / pinning / and 
various other buzzwords.  But, really, you have to try it. There's no 
point in talking about it.




iang


[0]  Where "logged in" means "is using an appropriate client cert."  
This involves an amount of code in the application to figure out, but 
it seems about the same amount of code as doing the login the other 
way, via passwords and so forth.  There are some additional 
complications such as new certs, but this is just coding and matching 
on the names.


[1

Re: [cryptography] Let's go back to the beginning on this

2011-09-12 Thread Thierry Moreau
In summary, Jon Callas wrote, about the challenges of ascertaining 
identities:


The ones who make you an authority are the community, 
and they do it because you act like one.




This is just one of three models of identity assessment, prior to any 
technological component:


one's reputation in a community,

one's track record of past interactions with the relying party (e.g. 
account payment history), and


one's participation in a formal ceremony (e.g. applying for a passport).

The PGP vs PKI analysis puts emphasis on the first one, mainly because 
the PKI proponents have not been very explicit about their identity 
assertion model. But the other two models are operating here and there 
in the IT security landscape.




--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] RDRAND and Is it possible to protect against malicious hw accelerators?

2011-06-20 Thread Thierry Moreau

Peter Gutmann wrote:

Marsh Ray ma...@extendedsubset.com writes:


So the Intel DRNG has observable shared internal state and is shared among 
multiple cores.


The rule for security there is that if an attacker can get physical access to 
the same CPU as you, you're toast via any number of side-channel attacks 
anyway.  So the solution is don't do that, then.  I don't really see this 
issue as a problem.


I guess reversing the trend towards virtualization and cloud computing 
is difficult.


Then the question would be whether to trust the CPU or the 
virtualization O/S as a trusted source of randomness. In either case you 
end up being (HW or SW) version-dependent.


If a processor manufacturer gets the RNG right, they might get a product 
differentiation advantage.


The more generic challenge can be described with the following question:

Can any software process hosted in a virtualization environment be 
provided with a) a secret random source, b) a place to store long-term 
secrets, and c) some mechanism for external assessment of software 
integrity?


Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] True Random Source, Thoughts about a Global System Perspective

2011-01-26 Thread Thierry Moreau

Peter Gutmann wrote:

Thierry Moreau thierry.mor...@connotech.com writes:

As a derived engineering strategy, wouldn't it be better to design a system 
where the long-term secrets are kept in a secure co-processor, 


Yes, of course, but that's asking the wrong question, what you need to ask is:

  As a product manufacturing strategy, should we put money into designing a 
  system where the long-term secrets are kept in a secure co-processor,


and the answer to that is almost always no.  Heck, even if you phrase it 
as "should we use the TrustZone capabilities that are *already built 
into the chip*" or "I'd love to use the integrated crypto, I'll do it at 
no cost as a design exercise", the answer has been no.  The extra stuff 
costs, not just in BOM and NRE terms but in terms of future 
compatibility, support, custom functionality, ... 



The above citation is truncated. Let me re-phrase the original question:

Between
  1) a host plus a secure co-processor, and
  2) a host plus some H/W for a true random source
(with their life cycle costs as indicated above), wouldn't it be more 
efficient (for overall system security) to procure 1) first?


By the way, yes, the market for these things seems tiny.

So, back to the drawing board: once you have the secure co-processor, 
you have the luxury of running a large-state PRNG within a secure 
processor boundary, and you have less dependency on a high speed true 
random source.


Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] True Random Source, Thoughts about a Global System Perspective

2011-01-26 Thread Thierry Moreau

Peter Gutmann wrote:


Oh, and just to throw a spanner in the works: I've never seen any standards 
document or whatever that discusses what to do when you don't have enough 
entropy available.  There are all sorts of Rube-Goldberg entropy-estimation 
methods, but what do you do when your entropy-estimation says there's not 
enough available?


Well, you have three choices:

1) block the application processes until some more entropy is available,
2) rely on a 
(hopefully-cryptographically-strong-devoid-of-implementation-flaws bla 
bla bla) PRNG seeded by limited entropy, or
3) fail-safe the system's functions (as if the crypto services were 
mission-critical) and expect the end-user to have a hot backup system in 
another part of the continent for continuity of operations.


Plus the usual

4) the head-in-the-sand attitude: pretend you do 2) but forget about the 
parenthesis.


Hint: Halting, i.e. preventing things from continuing isn't 
an option.




Unless you hinted towards 4), this reminds me of someone who wanted to 
have SHA-512 in real-time video capture with a high-resolution camera, a 
budget under $300, and forensic-type certification of video recordings.


Here is a rationale for 2)

unpredictable phenomenon
   |
   V
digitalization
   |
   V
conditioning
   |
   +--- one-time-pad encryption
   |
   +- application deterministic processing
   |
   +--- PRNG --- application deterministic processing

Since you are deemed to be critically dependent on (long term) secret 
protection in the application deterministic processing, you may as well 
apply secret protection mechanisms to the PRNG state, and enjoy the 
peace of mind (modulo above bla bla bla) provided by a good PRNG design.
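A minimal sketch of the conditioning-plus-PRNG branch of the diagram 
above (assumptions: SHA-512 as the conditioner and a toy hash-chain PRNG 
for illustration; a real design would use a vetted DRBG construction):

```python
import hashlib

def condition(raw_samples: bytes) -> bytes:
    # Conditioning: compress biased/correlated digitized samples
    # into a fixed-size seed.
    return hashlib.sha512(raw_samples).digest()

class HashPRNG:
    """Toy hash-chain PRNG.  Per the argument above, its state deserves
    the same protection mechanisms as any long-term application secret."""

    def __init__(self, seed: bytes):
        self._state = seed

    def next_bytes(self, n: int) -> bytes:
        out = b""
        counter = 0
        while len(out) < n:
            block = hashlib.sha512(
                self._state + counter.to_bytes(8, "big")
            ).digest()
            out += block
            counter += 1
        # Ratchet the state forward so past outputs cannot be
        # reconstructed from a later state compromise.
        self._state = hashlib.sha512(b"ratchet" + self._state).digest()
        return out[:n]

seed = condition(b"digitized noise samples go here")
prng = HashPRNG(seed)
a = prng.next_bytes(32)
b = prng.next_bytes(32)
print(len(a), a != b)  # 32 True
```

The one-time-pad branch of the diagram would bypass the PRNG and consume 
the conditioner output directly.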


--
- Thierry Moreau
