Re: What if you had a very good entropy source, but only practical at crypto engine installation time?

2010-10-06 Thread Thierry Moreau

Dear all:

The PUDEC (Practical Use of Dice for Entropy Collection) scheme has 
advanced. The new web page is at http://pudec.connotech.com


The main technical advance in this release is the documentation of 
(deterministic) algorithmic support ( 
http://pudec.connotech.com/pudec_algo.html ). This development effort 
uses a structured process as if it targeted FIPS 140-2 level 4 
certification, hence the release of documentation before reference 
source code.


Plus the PUDEC dice sets are now offered for sale.

If you are part of an open source project (GPL) for a cryptographic key 
management server or an open source HSM, and you see a useful feature 
in a self-evident entropy source, don't hesitate to contact me (I would 
consider an open source contribution if such projects have a reasonable 
chance of critical-mass adoption).


Enjoy!

Thierry Moreau wrote:


See http://www.connotech.com/doc_pudec_descr.html .

(OK, it's also practical whenever the server needs servicing by trusted 
personnel.)


Then, you care about the deterministic PRNG properties, the secrecy of 
its current state, and the prevention of PRNG output replays from an 
out-of-date saved state.


And bingo, you solved the random secret generation issue satisfactorily!

Regards,




--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691



Re: Certificate-stealing Trojan

2010-09-29 Thread Thierry Moreau

Marsh Ray wrote:

On 09/27/2010 08:26 PM, Rose, Greg wrote:


On 2010 Sep 24, at 12:47 , Steven Bellovin wrote:


Per
http://news.softpedia.com/news/New-Trojan-Steals-Digital-Certificates-157442.shtml 


there's a new Trojan out there that looks for and steals Cert_*.p12
files -- certificates with private keys.  Since the private keys
are password-protected, it thoughtfully installs a keystroke logger
as well.


Ah, the irony of a trojan stealing something that, because of lack of
PKI, is essentially useless anyway...


While I agree with the sentiment on PKI, we should accept this evidence 
for what it is:


There exists at least one malware author who, as of recently, did not 
have a trusted root CA key.


Additionally, the Stuxnet trojan is using driver-signing certs pilfered 
from the legitimate parties the old-fashioned way. This suggests that 
even professional teams with probable state backing either lack that 
card or are saving it to play in the next round.


Is it possible that the current PKI isn't always the weakest link in the 
chain? Is it too valuable of a cake to ever eat? Or does it just leave 
too many footprints behind?




Don't forget that the described trojan looks for an actual *client* 
private key and certificates. This puts Mallory in a position to 
impersonate the victim comprehensively, including non-crypto validity 
checks (e.g. confidence gained from a log of recent activity using this 
certificate).


Then the question is which PKIs actually deploy client certificates.


- Marsh





--
- Thierry Moreau

CONNOTECH Experts-conseils inc.



Re: questions about RNGs and FIPS 140

2010-09-07 Thread Thierry Moreau

Ben Laurie wrote:

On 27/08/2010 19:38, Joshua Hill wrote:

The fact is that all of the approved deterministic RNGs have places that
you are expected to use to seed the generator.  The text of the standard
explicitly states that you can use non-approved non-deterministic RNGs
to seed your approved deterministic RNG.


This is nice.


It's an even better situation if you look at the modern deterministic RNGs
described in NIST SP800-90. (You'll want to use these, anyway.  They are
better designs and last I heard, NIST was planning on retiring the other
approved deterministic RNGs.) Every design in SP800-90 requires that your
initial seed is appropriately large and unpredictable, and the designs all
allow (indeed, require!) periodic reseeding in similarly reasonable ways.


Given that we seem to have agreed that "unpredictable" is kinda hard,
I'm amused that SP800-90 requires it. If it is a requirement then I
wonder why NIST didn't specify how to generate and validate such a seed?



Well, I find SP800-90 Annex C (Entropy and Entropy Sources) quite clear 
about the requirements. If nothing is approved, we may guess it's 
because no unpredictable phenomenon has been shown (convincingly) to be 
compliant.


In terms of solution documentation requirements, I see four stages:
1) unpredictable phenomenon,
2) sensor technology,
3) digitalization,
4) conditioning.

I separate 2 and 3 while NIST seems to merge them. I see them as 
separate because the sensor technology is seldom developed with the 
entropy collection application in mind (the unpredictable phenomenon is 
not engineered: it just exists). The digitalization refers to the 
algorithmic processing that takes raw A-to-D (analog-to-digital) data 
and gives some discrete measurement of the unpredictable phenomenon. 
This measurement is basically a convenient intermediate representation 
using a physical characteristic that is better understood, for analysis 
purposes, than the raw A-to-D data.


The digitalization algorithm may be the same as for pre-existing uses of 
the sensor technology, in which case an after-the-fact certification is 
challenging.


NIST seems to favor very well defined algorithms for affixing the 
NIST-approved mark. Then, the digitalization algorithm for a given 
(unpredictable phenomenon, sensor technology) pair may be challenging.


I released (a few days ago) a specification document for digitalization 
and conditioning algorithms for PUDEC, Practical Use of Dice for Entropy 
Collection, see http://www.connotech.com/doc_pudec_algo.html
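
For a feel of dice digitalization, here is a generic unbiased-extraction 
sketch; it is NOT the PUDEC-specified algorithm (see the linked document 
for that), just an illustration of turning base-6 outcomes into bits:

    # Generic illustration only -- NOT the PUDEC-specified algorithm.
    # Two fair dice give an outcome uniform in 0..35; rejecting 32..35
    # leaves a value uniform in 0..31, i.e. 5 unbiased bits per kept pair.
    def dice_to_bits(rolls):
        """rolls: iterable of die faces 1..6; yields unbiased bits."""
        it = iter(rolls)
        for a, b in zip(it, it):          # consume the rolls two at a time
            v = (a - 1) * 6 + (b - 1)     # uniform in 0..35
            if v < 32:                    # reject 4 outcomes out of 36
                for k in range(4, -1, -1):
                    yield (v >> k) & 1

    bits = list(dice_to_bits([3, 5, 1, 1, 6, 2, 4, 4]))  # 20 bits here

The rejection step is what makes the output exactly unbiased; a 
conditioning stage over the accumulated bits would follow in a complete 
design.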


Incidentally, another difficulty is that confidence in the entropy 
collection function is difficult to support with boot-time / run-time 
testing. IIRC, the statistical testing at boot time had to be dropped 
from the FIPS 140 requirements because false failures (intrinsic to 
statistical testing) were not manageable in an operational context.
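
A back-of-the-envelope illustration of that operational problem, with 
every figure below hypothetical:

    # Even a healthy generator fails a statistical test with probability
    # alpha, so a deployed fleet sees routine false alarms. All figures
    # are hypothetical.
    alpha = 1e-4            # per-test false-positive rate
    boots_per_year = 365    # one reboot per unit per day
    fleet = 100_000         # deployed units
    print(alpha * boots_per_year * fleet)  # -> 3650.0 spurious failures/year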


Obviously, there are other considerations to NIST approval because it 
would become a procurement specification for the US Federal government.





Cheers,

Ben.




Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691



Re: Is determinism a good idea? WAS: questions about RNGs and FIPS 140

2010-08-26 Thread Thierry Moreau

travis+ml-cryptogra...@subspacefield.org wrote:

Hey all,


I also wanted to double-check these answers before I included them:


3) Is determinism a good idea?
See Debian OpenSSL fiasco.  I have heard Nevada gaming commission
regulations require non-determinism for obvious reasons.


Do those sound right?



I guess the more productive question is "Since determinism requires a 
PRNG algorithm of some sort, which PRNG properties are needed in a given 
usage context?"


In all cases, the PRNG relies on a true random source for seeding.


You refer to IT security clients (SSL fiasco), IT security servers 
(virtualization), and lottery/gaming systems. In IT security nowadays, 
large PRNG periods and crypto-strength PRNG algorithms are the norm. As I 
understand the state of the art in the lottery/gaming industry (incl. 
standards), it is an accepted practice to use a short-period (by IT 
security standards) PRNG combined with a form of continuous entropy 
collection: background exercise of the PRNG.
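
To make "background exercise" concrete, here is a minimal sketch; the 
class, the pacing, and the stand-in PRNG are illustrative, not taken 
from any gaming standard:

    # Illustrative sketch: the PRNG is stepped continuously between game
    # requests, so the unpredictable *timing* of each request selects
    # which PRNG state gets sampled. Not from any gaming standard.
    import threading, time, random

    class BackgroundExercisedPRNG:
        def __init__(self, seed):
            self._rng = random.Random(seed)  # stand-in short-period PRNG
            self._lock = threading.Lock()
            threading.Thread(target=self._spin, daemon=True).start()

        def _spin(self):
            while True:                      # discard outputs continuously
                with self._lock:
                    self._rng.random()
                time.sleep(0.001)            # pacing is illustrative

        def draw(self, sides=49):
            with self._lock:                 # arrival time picks the state
                return self._rng.randrange(1, sides + 1)

    prng = BackgroundExercisedPRNG(seed=20100826)
    print(prng.draw())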


I think the SSL fiasco root cause analysis would remind us of criteria 
that are nowadays well addressed in the IT security sector (assuming 
minimal peer review of the design and implementation).



In a security analysis, you watch for data leaks, either in the source 
of truly unpredictable events or in the present/past PRNG state of the 
deterministic components of your design. If you already need data leak 
protection for private or secret keys, your system design may already 
have the required protections for the PRNG state (except that the PRNG 
state is both long-term -- like a long-term private key or long-term 
symmetric authentication key -- and updated in normal system 
operations -- like session keys).



So, there is no simple answer. I guess every design facing actual 
operational demands relies on some determinism, because a sudden surge in 
secret random data usage is hard to fulfill otherwise.



Forgive me for mentioning PUDEC (Practical Use of Dice for Entropy 
Collection) again; it mates well with a server system design using PRNG 
determinism after installation (or periodic operator-assisted 
maintenance). This project is still active. See 
http://www.connotech.com/doc_pudec_descr.html . You may see this as a 
bias in my opinions, but I don't see any benefit in misrepresenting 
relevant facts and analyses.



Regards,


--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691



Re: questions about RNGs and FIPS 140

2010-08-26 Thread Thierry Moreau

Nicolas Williams wrote:

On Thu, Aug 26, 2010 at 06:25:55AM -0400, Jerry Leichter wrote:

On Aug 25, 2010, at 4:37 PM,
travis+ml-cryptogra...@subspacefield.org wrote:

I also wanted to double-check these answers before I included them:

1) Is Linux /dev/{u,}random FIPS 140 certified?
No, because FIPS 140-2 does not allow TRNGs (what they call
"non-deterministic").  I couldn't tell if FIPS 140-1 allowed it, but
FIPS 140-2 supersedes FIPS 140-1.  I assume they don't allow
non-determinism because it makes the system harder to test/certify,
not because it's less secure.

No one has figured out a way to certify, or even really describe in
a way that could be certified, a non-deterministic generator.


Would it be possible to combine a FIPS 140-2 PRNG with a TRNG such that
testing and certification could be feasible?

I'm thinking of a system where a deterministic (seeded) RNG and
non-deterministic RNG are used to generate a seed for a deterministic
RNG, which is then used for the remainder of the system's operation until
next boot or next re-seed.  That is, the seed for the run-time PRNG
would be a safe combination (say, XOR) of the outputs of a FIPS 140-2
PRNG and a non-certifiable TRNG.

factory_prng = new PRNG(factory_seed, sequence_number, datetime);
trng = new TRNG(device_path);
runtime_prng = new PRNG(factory_prng.gen(seed_size) ^ trng.gen(seed_size), 0, 0);

One could then test and certify the deterministic RNG and show that the
non-deterministic RNG cannot destroy the security of the system (thus
the non-deterministic RNG would not require testing, much less
certification).

To me it seems obvious that the TRNG in the above scheme cannot
negatively affect the security of the system (given a sufficiently large
seed anyways).

Nico


Such implementations may be *certified*, but this mode of CSPRNG seeding 
is unlikely to get *NIST*approved*. Cryptographic systems are 
*certified* with by-the-seat-of-the-pants CSPRNG seeding strategies (I 
guess), since crypto systems *are* being certified.


The tough part is to describe something with some hope of acquiring the 
*NIST*approved* status at some point. The above proposal merely shifts 
the difficulty to the TRNG. Practical Use of Dice for Entropy Collection 
is unique because the unpredictable process (shuffling dice) has clear 
and convincing statistical properties.
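
To make the quoted construction concrete, here is a minimal runnable 
sketch; the SHA-256 chaining stands in for an approved deterministic RNG 
(it is NOT an SP800-90 DRBG), and all names and sizes are illustrative:

    # Minimal sketch: seed a run-time PRNG from the XOR of a certified
    # deterministic RNG and an uncertified TRNG. TinyHashPRNG is a
    # stand-in, NOT an approved DRBG; names and sizes are illustrative.
    import hashlib, os

    SEED_SIZE = 48  # bytes; real seed lengths depend on the mechanism

    class TinyHashPRNG:
        def __init__(self, seed: bytes):
            self.state = hashlib.sha256(seed).digest()
        def gen(self, n: int) -> bytes:
            out = b""
            while len(out) < n:
                self.state = hashlib.sha256(self.state).digest()
                out += self.state
            return out[:n]

    factory_prng = TinyHashPRNG(b"factory-seed|serial-42|2010-08-26")
    trng_bytes = os.urandom(SEED_SIZE)    # stand-in for the raw TRNG
    combined = bytes(a ^ b for a, b in
                     zip(factory_prng.gen(SEED_SIZE), trng_bytes))
    runtime_prng = TinyHashPRNG(combined)  # used until next boot/re-seed

As the quoted argument says, even a fully broken TRNG leaves the 
combined seed no weaker than the certified PRNG's output alone.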


- Thierry Moreau



Re: Fw: [IP] Malware kills 154

2010-08-23 Thread Thierry Moreau

Peter Gutmann wrote:

Perry E. Metzger pe...@piermont.com forwards:


Authorities investigating the 2008 crash of Spanair flight 5022
have discovered a central computer system used to monitor technical
problems in the aircraft was infected with malware

http://www.msnbc.msn.com/id/38790670/ns/technology_and_science-security/?gt1=43001


Sigh, yet another attempt to use the "dog ate my homework" of computer
problems; if their fly-by-wire was Windows XP then they had bigger things to
worry about than malware.



FYI, avionics firmware/software is subject to RTCA DO-178B certification, 
and fly-by-wire will inevitably require a level A certification, which 
is quite demanding (I mean *QUITE*DEMANDING*) for software development 
process certification. There is no chance that an XP-based 
application/system would ever meet even the lower certification levels 
(except for the lowest one, which corresponds to passenger entertainment 
systems).


Commercial avionics certification looks like the most demanding among 
industrial sectors requiring software certification (public 
transportation, high energy incl. nuclear, medical devices, government 
IT security in some countries, electronic payments, lottery and casino 
systems).


--
- Thierry Moreau



Re: Is this the first ever practically-deployed use of a threshold scheme?

2010-08-04 Thread Thierry Moreau

Tanja Lange wrote:
There is more than the UI at stake here, i.e. the basic functionality of 
the scheme. Say you distribute shares in a 4-out-of-7 scheme (ABCDEFG) 
and share A is published on the web. How do you recover from the 
remaining 3-out-of-6 scheme into a 4-out-of-6 scheme without having a 
key ceremony? In an ad-hoc multi-party scheme, you request 4 of the 
remaining compliant parties to destroy the key material allowing them to 
participate in a group with the traitor A, but no other key material. No 
system UI, but admittedly a coordination nightmare!




If the system is built to allow resharing then this is no problem. 


Resharing from a t-out-of-n scheme to an r-out-of-m scheme works as
follows: if the secret s is shared using the (otherwise random)
polynomial f of degree t-1, then a share consists of (i, f(i)). To
reshare, at least t of the original shareholders issue shares of their
f(i) in an r-out-of-m manner, i.e. shareholder i takes a polynomial g_i
of degree r-1 with g_i(0) = f(i) and computes m sub-shares (i, j, g_i(j)).
When these are distributed to the new users, the new users should end up
with matching j's. The old shares (i, f(i)) are deleted. Each of the m
new users now has t sub-shares (i_1, j, g_{i_1}(j)), (i_2, j, g_{i_2}(j)),
..., (i_t, j, g_{i_t}(j)). This information can be combined into a single
share (j, G(j)) of s by using the Lagrange coefficients of the first
scheme.
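
Here is a minimal sketch of this procedure over a prime field, for 
readers who want to trace the arithmetic; the prime, the parameters, and 
the function names are illustrative, and the zero-knowledge and 
channel-security aspects are omitted:

    # Minimal sketch of Shamir resharing over GF(P); illustrative only
    # (no ZK proofs, no authenticated channels).
    import random

    P = 2**127 - 1  # a Mersenne prime, large enough for a demo secret

    def make_shares(secret, t, n):
        """Split with a degree-(t-1) polynomial; any t shares recover it."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
                for i in range(1, n + 1)]

    def lagrange_zero(points):
        """Evaluate, at 0, the polynomial interpolating `points`."""
        total = 0
        for xi, yi in points:
            num, den = 1, 1
            for xj, _ in points:
                if xj != xi:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, -1, P)) % P
        return total

    def reshare(old_shares, t, r, m):
        """t old shareholders turn a t-out-of-n scheme into r-out-of-m."""
        subs = {i: make_shares(fi, r, m) for i, fi in old_shares[:t]}
        new_shares = []
        for j in range(1, m + 1):
            # combine the t sub-shares (i, g_i(j)) with the Lagrange
            # coefficients of the first scheme, evaluated at 0
            pts = [(i, dict(subs[i])[j]) for i in subs]
            new_shares.append((j, lagrange_zero(pts)))
        return new_shares

    s = 123456789
    old = make_shares(s, 4, 7)    # the 4-out-of-7 example above
    new = reshare(old, 4, 4, 6)   # reshared into 4-out-of-6
    assert lagrange_zero(new[:4]) == s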


All of this can be decorated with zero-knowledge proofs to prove
correctness of the shares, etc. Note that there is no interaction among
the t shareholders and everything can be done remotely.

In the scenario where share A is published, it's enough to have t-1
users help in the resharing, since every new user can use the public
information. On the other hand, that's a mess to program, so it's more
reasonable to ask t of the remaining shareholders to help. Doesn't sound
like a coordination nightmare to me.

For all this in a more general setting see e.g. "Redistributing Secret
Shares to New Access Structures and Its Applications" by Yvo Desmedt
and Sushil Jajodia (1997),
	http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.45.3353


Does this answer the question?


Yes, or at least it gives a good sense that these issues have been dealt 
with in the cryptographic literature. It seems to fulfill the 
operational requirements (obviously, when a good-faith participant 
receives new shares from a remote party, a trust relationship is needed, 
but that is a given irrespective of the underlying crypto).


Thanks a lot for your answer!

Regards,

--
- Thierry Moreau




Tanja






Re: Is this the first ever practically-deployed use of a threshold scheme?

2010-08-03 Thread Thierry Moreau

Peter Gutmann wrote:


That's a good start, but it gets a bit more complicated than that in practice
because you've got multiple components, and a basic red light/green light
system doesn't really provide enough feedback on what's going on.  What you'd
need in practice is (at least) some sort of counter to indicate how many
shares are still outstanding to recreate the secret ("We still need two more
shares, I guess we'll have to call Bob in from Bratislava after all").  Also
the UI for recreating shares if one gets lost gets tricky, depending on how
much metadata you can assume if a share is lost (e.g. "We've lost share 5 of
7" vs. "We've lost one of the seven shares"), and suddenly you get a bit
beyond what the UI of an HSM is capable of dealing with.



There is more than the UI at stake here, i.e. the basic functionality of 
the scheme. Say you distribute shares in a 4-out-of-7 scheme (ABCDEFG) 
and share A is published on the web. How do you recover from the 
remaining 3-out-of-6 scheme into a 4-out-of-6 scheme without having a 
key ceremony? In an ad-hoc multi-party scheme, you request 4 of the 
remaining compliant parties to destroy the key material allowing them to 
participate in a group with the traitor A, but no other key material. No 
system UI, but admittedly a coordination nightmare!



--
- Thierry Moreau



With a two-share XOR it's much simpler: two red LEDs that turn green when the
share is added, and you're done.  One share is denoted 'A' and the other is
denoted 'B'; that should be enough for the shareholder to remember.

If you really wanted to be rigorous about this you could apply the same sort
of analysis that was used for weak/strong links and unique signal generators to
see where your possible failure points lie.  I'm not sure if anyone's ever
done this [0], or whether it's just "build in enough redundancy and we should
be OK".

Peter.

[0] OK, I can imagine scenarios where it's quite probably been done, but
anyone involved in the work is unlikely to be allowed to talk about it.
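
For contrast with the threshold machinery discussed above, the two-share 
XOR split is tiny; a minimal sketch:

    # Minimal sketch of a two-share XOR split: share_a is uniform random,
    # so either share alone is statistically independent of the secret.
    import secrets

    def split2(secret: bytes):
        share_a = secrets.token_bytes(len(secret))
        share_b = bytes(x ^ y for x, y in zip(secret, share_a))
        return share_a, share_b

    def join2(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    a, b = split2(b"master key material")
    assert join2(a, b) == b"master key material"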



Re: Is this the first ever practically-deployed use of a threshold scheme?

2010-08-02 Thread Thierry Moreau

Peter Gutmann wrote:

Thierry Moreau thierry.mor...@connotech.com writes:


With the next key generation for DNS root KSK signature key, ICANN may have
an opportunity to improve their procedure.


What they do will really depend on what their threat model is.  I suspect that
in this case their single biggest threat was lack of display of sufficient
due diligence, thus all the security calisthenics (remember the 1990s Clipper
key escrow procedures, which involved things like having keys generated on a
laptop in a vault with the laptop optionally being destroyed afterwards, just
another type of security theatre to reassure users).  Compare that with the
former mechanism for backing up the Thawte root key, which was to keep it on a
floppy disk in Mark Shuttleworth's sock drawer because no-one would ever look
for it there.  Another example of this is the transport of an 1894-S dime
(worth just under 2 million dollars) across the US, which was achieved by
having someone dress in somewhat grubby clothes and fly across the country in
cattle class with the slabbed coin in his pocket, because no-one would imagine
that some random passenger on a random flight would be carrying a ~$2M coin.
So as this becomes more and more routine I suspect the accompanying
calisthenics will become less impressive.

(What would you do with the DNSSEC root key if you had it?  There are many 
vastly easier attack vectors to exploit than trying to use it, and even if you 
did go to the effort of employing it, it'd be obvious what was going on as 
soon as you used it and your fake signed data started appearing, cf. the 
recent Realtek and JMicron key issues.  So the only real threat from its loss 
seems to be acute embarrassment for the people involved, thus the 
due-diligence exercise.)




I fully agree with the general ideas above, with one very tiny exception 
explained in the next paragraph. The DNSSEC root key ceremonies remain 
nonetheless an opportunity to review the practical implementation details.


The exception lies in a section of a paranoia scale where few 
organizations would position themselves. So let me explain it with an 
enemy of the USG, e.g. the DNS resolver support unit in a *.mil.cc 
organization. Once their user base relies on DNSSEC for traffic encryption 
keys, they become vulnerable to spoofed DNS data responses. I leave it 
as an exercise to write the protocol details of a hypothetical attack, 
given that Captain Pueblo in unito-223.naval.mil.cc routinely relies on 
a web site secured by DNSSEC to get instructions about where to sail his 
warship on June 23, 2035 (using the unrealistic assumption that 
Pueblo's validating resolver uses only the official DNS root trust anchor).


Regards,


Peter.




--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691



Re: Persisting /dev/random state across reboots

2010-07-29 Thread Thierry Moreau

Richard Salz wrote:
At shutdown, a process copies /dev/random to /var/random-seed, which is 
used on reboots.

Is this a good, bad, or "shrug, whatever" idea?
I suppose the idea is that all startup procs look the same?

tnx.


First, look at http://en.wikipedia.org/wiki/Urandom .

There is tremendous value in the Linux kernel technology, including 
extensive peer review from an IT security perspective.


If you think there are security requirements not met (e.g. assurance of 
entropy characteristics, assurance of implementation configuration 
sanity), then you should state your design goals. Only then do we get 
an understanding of "good", "bad", or, more relevantly, "improved".
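
For reference, the conventional pattern the question describes looks 
roughly like the sketch below; paths and sizes are illustrative, root 
privileges are assumed, and note that writing to /dev/urandom stirs the 
pool without crediting entropy:

    # Illustrative sketch of the seed-file pattern (paths/sizes arbitrary).
    import os

    SEED_FILE = "/var/lib/random-seed"
    SEED_SIZE = 512  # bytes

    def save_seed():
        # at shutdown: persist kernel PRNG output for the next boot
        with open("/dev/urandom", "rb") as src, open(SEED_FILE, "wb") as dst:
            dst.write(src.read(SEED_SIZE))
        os.chmod(SEED_FILE, 0o600)   # the seed must stay secret

    def restore_seed():
        # at boot: mix the saved seed back into the pool, then overwrite
        # the file at once so a crash cannot lead to seed reuse
        with open(SEED_FILE, "rb") as src, open("/dev/urandom", "wb") as dst:
            dst.write(src.read())
        save_seed()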


Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691



What if you had a very good entropy source, but only practical at crypto engine installation time?

2010-07-22 Thread Thierry Moreau


See http://www.connotech.com/doc_pudec_descr.html .

(OK, it's also practical whenever the server needs servicing by trusted 
personnel.)


Then, you care about the deterministic PRNG properties, the secrecy of 
its current state, and the prevention of PRNG output replays from an 
out-of-date saved state.
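
A minimal sketch of these three concerns together, using SHA-256 
chaining as an illustrative state-update function (this is not the PUDEC 
design); the monotonic counter must be recorded separately from the 
saved state:

    # Illustrative sketch (not the PUDEC design): deterministic output,
    # secret state, and replay prevention via a monotonic counter kept
    # separately from the saved state.
    import hashlib

    class CountedPRNG:
        def __init__(self, seed: bytes):
            self.state = hashlib.sha256(b"init" + seed).digest()
            self.counter = 0

        def next_block(self) -> bytes:
            out = hashlib.sha256(b"out" + self.state).digest()
            # one-way update: old states are not derivable from new ones
            self.state = hashlib.sha256(b"upd" + self.state).digest()
            self.counter += 1
            return out

        def save(self):
            # persist with the same care as a private key
            return self.counter, self.state

        @classmethod
        def restore(cls, saved, highest_counter_ever_seen):
            counter, state = saved
            if counter < highest_counter_ever_seen:
                raise ValueError("stale saved state: output replay risk")
            prng = cls.__new__(cls)
            prng.counter, prng.state = counter, state
            return prng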


And bingo, you solved the random secret generation issue satisfactorily!

Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691



Re: Root Zone DNSSEC Deployment Technical Status Update

2010-07-17 Thread Thierry Moreau

Dear Jakob:

Trying to reply specifically. The bigger picture would require extensive 
background explanations.


Jakob Schlyter wrote:

On 16 jul 2010, at 19.59, Thierry Moreau wrote:


With what was called DURZ (Deliberately Unvalidatable Root Zone), you, security 
experts, have been trained to accept signature validation failures as false 
alarms by experts from reputable institutions.


Thierry, do you know of anyone that configured the DURZ DNSKEY and accepted the 
signature validation failure resulting because of this? We had good 
(documented) reasons for deploying the DURZ as we did, the deployment was 
successful and it is now all water under the bridge. Adding FUD at this time 
does not help.



This is not the way I approach the DURZ strategy as implemented by the 
deployment team.


I am referring to a specific DNSSEC protocol provision, but I will first 
make an analogy.


You install a fire alarm system in your house (DNSSEC is an alarm system 
for bogus DNS data) but the UL certification officer didn't come yet to 
make the official approval (no trust anchor for a zone on which your 
e-banking relies). Then an alarm triggers in the night (the mob behind 
the e-banking phishers got the RRSIG wrong -- they have a learning curve 
too). You tell your relatives to stay in the house because the alarm 
system is not reliable. Oh no, you would rather play it safe! (but is 
that what your DNSSEC-aware banking application would do: avoid a 
service call to the e-banking center because you don't have a configured 
trust anchor?).


Here is the protocol provision: RFC 4035 section 5.1 allows validators to 
report "bogus" (the alarm signal) when encountering an unvalidatable RRSIG 
for a zone without a local basis for a trust anchor.


Incidentally, you say you [the design team] had good *documented* 
reasons for implementing DURZ *as*you*did*. Did you document why any of 
unknown/proprietary/foreign signature algorithm code(s) were not 
possible (this was an alternative)? This was my outstanding question.





Auditing details are not yet public.


Yes, they are - see http://data.iana.org/ksk-ceremony/. If there is anything 
missing, please let me know.



Thanks, great. The two key ceremony scripts are what I wanted to look at.




I am wondering specifically about the protections of the private key material between the first 
key ceremony and the second one. I didn't investigate these details since ICANN was in 
charge and promised full transparency. Moreover, my critiques were kind of counterproductive in 
face of the seemingly overwhelming confidence in advice from the Verisign experts. In the worst 
scenario, we would already have a KSK signature key to which a "suspected breach" 
qualification would be attached.


The key material was couriered between the Key Management Facilities by ICANN 
staff members. I'd be happy to make sure you get answers to any questions you 
may have regarding this handling.



OK. You seem to refer to the courier service between East Coast Facility 
(ECF) safe #1 (closed at ceremony 1 steps 199-202 and presumably opened 
for the courier service later on), carrying Tamper Evident Bags (TEB) 
sealed at steps 194-197 (see also 80-84), and deposited in West Coast 
Facility (WCF) safe #1 in advance of ceremony 2. At the WCF ceremony 2, 
the TEBs were retrieved from the safe at steps 35-38, and the TEB tamper 
evidence was verified at steps 73-76.


For the record, this key material exited the ECF HSM 
technology-intensive world at ceremony 1 step 60 and re-entered the WCF 
HSM #1 at ceremony 2 steps 77-78. (The key material also entered the ECF 
HSM #2 and the WCF HSM #2.)


I don't have a question. I will trust the DNSSEC root signatures. 
However, it seems obvious that formal dual-control rules should have 
been designed, e.g. a Trusted Courier Officer role with a 3-out-of-4 
(or 5) separation of duty. Without this, the key material was 
protected only by the tamper-evident protection in transit from the ECF 
to the WCF. This role would have been limited in time.


I don't want to discuss the effectiveness of tamper-evident envelopes, 
or the additional controls built around the core key material in the HSM 
technology. These are mainly obfuscating the core principles.





Is there an emergency KSK rollover strategy?


Yes, please read the DPS - https://www.iana.org/dnssec/icann-dps.txt.


jakob (member of the Root DNSSEC Design Team)

--
Jakob Schlyter
Kirei AB - http://www.kirei.se/




Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691



Re: Root Zone DNSSEC Deployment Technical Status Update

2010-07-17 Thread Thierry Moreau

Paul Hoffman wrote:

At 9:52 AM -0400 7/17/10, Thierry Moreau wrote:

Incidentally, you say you [the design team] had good *documented* reasons for 
implementing DURZ *as*you*did*. Did you document why any of 
unknown/proprietary/foreign signature algorithm code(s) were not possible (this 
was an alternative)? This was my outstanding question.


Thierry, can you say how using one of those alternatives would look different than the 
DURZ that they used? Should they all be marked as "unverified" in a compliant 
DNSSEC resolver?


Yes. E.g. if a zone is signed only by algorithm GOOSE_128, and your 
validating resolver does not know this algorithm, the DNS zone data 
remains "insecure" (this is what you mean by "unverified", I guess). 
That's in the DNSSEC protocol.

Regards,


--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691



Re: Fw: Root Zone DNSSEC Deployment Technical Status Update

2010-07-16 Thread Thierry Moreau

Perry E. Metzger wrote:

The root zone has been signed, and the root zone trust anchor has
been published.



That's a great achievement for the parties involved. It is also a 
significant step towards more trustworthy DNS data.



I have been following this with attention from the perspective of a 
"system-wide master key", i.e. a slightly different perspective than 
"trust anchor". The trust anchor may indeed be trusted by anyone. The 
system-wide master key is intended to be trustworthy to some broadest 
extent according to some (tacit) assessment.



Three outstanding issues on my plate:


A social engineering incident?

With what was called DURZ (Deliberately Unvalidatable Root Zone), you, 
security experts, have been trained to accept signature validation 
failures as false alarms by experts from reputable institutions. I spare 
you the details, since DURZ is now over (it may have spread to TLD 
managers though), but the formal protocol specification allows a 
compliant validator implementation to declare a signature failure with 
the DURZ as it was deployed. No specific rationale was given to me for 
the non-use of unknown/proprietary/foreign signature algorithm code(s) 
as a better interim deployment strategy.


Auditing details are not yet public.

I am wondering specifically about the protections of the private key 
material between the first key ceremony and the second one. I didn't 
investigate these details since ICANN was in charge and promised full 
transparency. Moreover, my critiques were kind of counterproductive in 
face of the seemingly overwhelming confidence in advice from the 
Verisign experts. In the worst scenario, we would already have a KSK 
signature key to which a "suspected breach" qualification would be attached.


Is there an emergency KSK rollover strategy?

Again, I spare you the details, but the way RFC 5011 is implemented, 
there is no automated emergency KSK rollover strategy (this would require 
a larger set of keys at the root, because a standby KSK would be needed).



Nothing above threatens the relevance, effectiveness, and benefits of 
the current deployment, unless you have a rational risk analysis that 
convinces you that national-security-grade key management is a 
necessity. My DNSSEC root signature key risk analysis does not conclude 
that national-security-grade key management is needed for the official 
DNS root zone.


But lessons may be learned from the perspective of a rigorous security 
analysis (if we had to do some system-wide key deployment with impacts 
similar to the global DNS integrity ...). The DNSSEC protocol definition 
and root deployment project has many facets in which it was venturing 
into virgin ground (e.g. the claimed transparency for the KSK management 
procedures by ICANN).



Nobody has ever done such a thing before, even less so in a production 
system with global impacts, so I give them a provisional A grade (not an 
A+) until the full auditing details are provided. But that's only me!



Regards,


--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691



Re: What's the state of the art in factorization?

2010-04-22 Thread Thierry Moreau

Victor Duchovni wrote:

On Tue, Apr 20, 2010 at 08:58:25PM -0400, Thierry Moreau wrote:

The DNS root may be qualified as a high valued zone, but I made the 
effort to put in writing some elements of a risk analysis (I have an 
aversion for this notion as I build *IT*controls* and the consultants are 
hired to cost-justify avoiding their deployments, basically -- but I needed 
a risk analysis as much as a chief financial officer needs an economic 
forecast in which he has no faith.) The overall conclusion is that the DNS 
root need not be signed with key sizes that would resist serious brute 
force attacks.


See http://www.intaglionic.org/doc_indep_root_sign_proj.html#TOC:C. 
(document annex C. Risk Analysis Elements for DNSSEC Support at the Root).


This conclusion is arrived at in a rather ad-hoc fashion. One can equally
easily reach opposite conclusions, since the majority of administrators
will not configure trust in static keys below the root, and in many
cases domains below the root will have longer keys, especially if the
root keys are not conservative.


Do you have a suggestion for a less ad-hoc fashion?


Sure, cracking the root will not be the easiest attack for most,
but it really does need to be infeasible, as opposed to just
difficult. Otherwise, the root is very much an attractive target
for a well funded adversary.


For which purpose(s) is the DNS root signature key an attractive target? 
Given these purposes, who are the potential adversaries (Dan Bernstein 
claims that they don't need to be well funded)? I am not really seeking 
an answer, but these questions are investigated (indeed in a rather 
ad-hoc fashion) in the above-referenced annex.



Even if in most cases it is easier to
social-engineer the domain registrar or deliver malware to the
desktop of the domain's system administrator.


Indeed. And maybe social-engineering the zone signature function falls 
in this category.

You may observe that the DNS root zone signature function is also 
subject to social-engineering attacks. This should be a basic concern for 
the DNS root key management procedures, independently for both the 
official DNS root signature and the Intaglio NIC alternative source.


By the way, the state of the art in factorization is just a portion of the 
story. What about formal proofs of equivalence between a public key 
primitive and the underlying hard problem? Don't forget that the USG had to 
swallow RSA (only because otherwise its very *definition* of public key 
cryptography would have remained out-of-sync with the rest) and is still 
interested in having us adopt ECDSA.


EC definitely has practical merit. Unfortunately the patent issues around
protocols using EC public keys are murky.

Neither RSA nor EC come with complexity proofs.



Correct. In this perspective, the Rabin-Williams cryptosystem is 
superior. But nowadays nobody seeks to make this advantage available in 
standardized protocols. This is a fascinating area, ...


Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691



Re: What's the state of the art in factorization?

2010-04-22 Thread Thierry Moreau

Jerry Leichter wrote:

On Apr 21, 2010, at 7:29 PM, Samuel Neves wrote:
EC definitely has practical merit. Unfortunately the patent issues around
protocols using EC public keys are murky.

Neither RSA nor EC come with complexity proofs.



While EC (by that I assume you mean ECDSA) does not have a formal
security proof, i.e. that it is as hard as the EC discrete log, it is much
closer to one than RSA is to factoring. In particular, Pointcheval and
Stern, and later Brown, come close to a formal proof for ECDSA [1].
It's perhaps worth pointing out again how little formal complexity 
proofs tell you about security.


Suppose we could show that RSA was as hard as factoring.  So?  Nothing 
is really known about how hard factoring is.  At worst, it's NP-complete 
(though that's actually considered unlikely).  But suppose *that* was in 
fact known to be the case.  What would it tell us about the difficulty 
of factoring any particular product of two primes?  Absolutely nothing.  
NP-completeness is a worst-case result.  In principle, it could be the 
case that factoring is "easy" for numbers less than a billion bits long 
- it just becomes much harder eventually.  (I put "easy" in quotes 
because it's usually taken to mean there's a poly-time algorithm, and 
that's a meaningless statement for a finite set of problems.  *Every* 
finite set of problems has an O(1) time solution - just make a lookup 
table.)


There are some concrete complexity results - the kind of stuff Rogaway 
does, for example - but the ones I've seen tend to be in the block 
cipher/cryptographic hash function spaces.  Does anyone know of 
similar kinds of results for systems like RSA?

-- Jerry


E.g. Koblitz, Neal and Menezes, Alfred, "Another Look at 'Provable 
Security'", Cryptology ePrint Archive, Report 2004/152, available at 
http://eprint.iacr.org/2004/152.pdf.



- Thierry Moreau



Re: What's the state of the art in factorization?

2010-04-22 Thread Thierry Moreau

Florian Weimer wrote:

* Thierry Moreau:


For which purpose(s) is the DNS root signature key an attractive
target?


You might be able to make it to CNN if your spin is really good.



Thanks for this feedback.

No, no, and no.

No, because I asked the question as a matter of security analysis 
methodology. My conclusion is that no purpose justifying an attack on 
the overall DNSSEC scheme particularly threatens the DNS root.


No, because while someone else's answer might be formulated based on 
non-rational anti-USG paranoia (leading to a nice media story), the 
pervasive USG influence in the DNSSEC key management has very different 
impacts, the foremost one being that the DNS root may actually be signed 
soon (hey, great!).


No, because I don't want to handle the trouble of high visibility in a 
field where the public relations are already mixing up things (e.g. .org 
is signed but a registrant can't have a secure delegation for a .org 
domain as of today).


Caveat: I stopped volunteering information about specific elements of 
official DNSSEC root key management which might be criticized. It is 
time for the DNS root signature project to move forward. Also, the 
Intaglio NIC project has no value unless the official DNS root holds 
secure delegations.


But even without this self-restraint, there would be no spin for a CNN 
story. Dedication to good cryptographic key management is squarely dull 
and boring for a typical person.


Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691



Re: What's the state of the art in factorization?

2010-04-20 Thread Thierry Moreau

Perry E. Metzger wrote:

I was alerted to some slides from a talk that Dan Bernstein gave a few
days ago at the University of Montreal on what tools will be needed to
factor 1024 bit numbers:

http://cr.yp.to/talks/2010.04.16/slides.pdf



I had the opportunity to listen to Prof. Dan Bernstein's talk last Friday 
morning. I was very glad to see him, as I respect his dedication to 
crypto maths, algorithm implementation, and very applied studies of 
computation complexity.


The slides are pretty much representative of his talk. New material 
starts on slide 17. If you are familiar with the contents of slides 1-16 
and elliptic curve methods (I am not), then you should appreciate the 
contents of slides 17 up to 45.


Slides 46 to 47 deal with the computation speedups available with 
graphics processors.


In the audience, there seemed to be some who followed the presentation 
more closely than I did, but Dan gave a great talk even for people like me.



It has been a couple of years since there has been serious discussion on
the list on this topic, and especially in the light of various technical
decisions being undertaken on the size of DNS signing keys for high
valued zones (like root), I was curious as to whether anyone had any
interesting comments on the state of the art in factorization.



According to my records, the state of the art is the reference

Joppe W. Bos, Marcelo E. Kaihara, Thorsten Kleinjung, Arjen K. Lenstra, 
and Peter L. Montgomery, "On the Security of 1024-bit RSA and 160-bit 
Elliptic Curve Cryptography", version 2, August 7, 2009, 18 pages 
(published on pages 43-60 in "Comments on the Transition Paper", 
available at 
http://csrc.nist.gov/groups/ST/key_mgmt/documents/Transition_comments_7242009.pdf, 
which was listed at http://csrc.nist.gov/groups/ST/key_mgmt/index.html).


plus this talk last Friday (and references). From these, you have to do 
your homework in guesswork about your actual enemy's power.



In the Intaglio NIC project white paper, my contribution towards the 
deployment of an alternate source for signed official DNS root data, I 
had to refer to the state of the art. See 
http://www.intaglionic.org/doc_indep_root_sign_proj.html#TOC:3.6 
(document section 3.6, "Early Project Decisions about Protection Level").


The DNS root may be qualified as a high valued zone, but I made the 
effort to put in writing some elements of a risk analysis (I have an 
aversion for this notion as I build *IT*controls* and the consultants 
are hired to cost-justify avoiding their deployments, basically -- but I 
needed a risk analysis as much as a chief financial officer needs an 
economic forecast in which he has no faith.) The overall conclusion is 
that the DNS root need not be signed with key sizes that would resist 
serious brute force attacks.


See http://www.intaglionic.org/doc_indep_root_sign_proj.html#TOC:C. 
(document annex C. Risk Analysis Elements for DNSSEC Support at the Root).



By the way, the state of the art in factorization is just a portion of the 
story. What about formal proofs of equivalence between a public key 
primitive and the underlying hard problem? Don't forget that the USG had 
to swallow RSA (only because otherwise its very *definition* of public 
key cryptography would have remained out-of-sync with the rest) and is 
still interested in having us adopt ECDSA.



So, yes, it's always good to ask questions. I usually complain that one 
seldom gets a simple answer for a simple question addressed to a 
specialist. I don't feel I provided a simple answer, but I don't claim 
to be a specialist.



Regards,

- Thierry Moreau


Perry




Re: Wikileaks video crypto.

2010-04-09 Thread Thierry Moreau

Perry E. Metzger wrote:

Earlier this week, Wikileaks released a video of an incident involving
an Apache helicopter which killed two Reuters reporters and a number of
bystanders in Iraq.

A number of the reports surrounding the release claim that the video was
"decrypted" by Wikileaks. Indeed, Wikileaks requested supercomputer
time via twitter and other means to decrypt a video, see:
http://twitter.com/wikileaks/status/7530875613

The video was apparently intentionally given to Wikileaks, so one can't
imagine that the releasing parties would have wanted it to be unreadable
by them (or that any reasonable modern cryptosystem would have been
crackable). What, then, does the decryption claim mean here? Does
anyone know?



As the adage goes, "Those who know don't speak; those who speak don't 
know." I am in the latter category.


I guess we can use the simplest explanation from the available clues.

(A) The video file was encrypted when it circulated within the victim 
organization (e.g. an encrypted .zip file attached to an e-mail). (Granted, 
"victim" of the breach is a euphemism when consideration is given to 
civilian deaths.)


(B.1) Someone not having the decryption key had a personal motivation 
for the leak.


(B.2) Or someone having the decryption key feared that a release in 
decrypted form would allow the source of the leak to be traced. Don't forget 
that many more people would have legitimate access to the ciphertext.


(C) Wikileaks analysts understood brute-force key cracking (and/or 
dictionary attacks on a password-derived encryption key) and deemed it 
useful in this case due to the significance of the video.


From these simple explanations, the lesson would be the irony of a 
situation where brute force attack success (respectively, dictionary 
attack success) can be attributed to the restrictions in cipher strength 
(respectively, impediments to sensible key management schemes) that 
government officials promoted for civilian-use crypto.


My 0.2 worth of wisdom (Friday afternoon special promotion!).

- Thierry Moreau



Re: Trusted timestamping

2009-10-05 Thread Thierry Moreau

Alex Pankratov wrote:
Does anyone know what the state of affairs is in this area?


This is probably slightly off-topic, but I can't think of
a better place to ask about this sort of thing.

I have spent a couple of days looking around the Internet,
and things appear to be .. erm .. hectic and disorganized.

There is, for example, timestamp.verisign.com, but there is 
no documentation or description of it whatsoever. Even the
website itself is broken. However, it is used by Microsoft's 
code signing tool, which embeds Verisign's timestamp into the 
Authenticode signature of signed executable files.


There is also a way to timestamp signed PDFs, but there 
appears to be nothing _trusted_ about the available Trusted 
Timestamping Authorities. Just a bunch of random companies
that call themselves that and provide no indication why
they should actually be *trusted*. No audit practices, not 
even a simple description of their backend setup. The same
goes for the companies providing timestamping services for 
arbitrary documents, either using online interfaces or
downloadable software.

There are also Digital Poststamps, which is a very strange
version of a timestamping service, because their providers
insist on NOT releasing the actual timestamp to the customer 
and then charging for each timestamp verification request.


I guess my main confusion at the moment is why large CAs of 
Verisign's size are not offering any standalone timestamping 
services.


Any thoughts or comments ?
  


I answer your question with two questions:

Trusted timestamping service is like a specialized form of 
non-repudiation service. You may wonder if there is any fielded usage of 
a genuine non-repudiation service, i.e. one extending to an arbitration 
function that would support evidence management in some litigation 
forum. Fraud prevention in payment systems is not based on a genuine 
non-repudiation scheme. Are you aware of the current state of genuine 
non-repudiation services?


Another approach to your question is that a timestamping service has to be 
sold before being fielded and used. Who is (are) the real 
beneficiary(ies) of a trusted timestamping service, and how do you sell 
the service to them so that it makes economic sense?


Regards,

- Thierry Moreau


Re (security fix): A Basic Rabin-Williams Digital Signature Specification

2009-08-19 Thread Thierry Moreau

Dear all:


A revised document has been posted at 
http://www.connotech.com/doc_rw_sign_basic-02.html, including a fix for 
an elementary security issue (and two other items, see document revision 
history).


I received some, but not much, feedback (positive) on the first version.

Regards,

- Thierry


On Jul 27, 2009, at 10:35 AM, Thierry Moreau wrote:


Title and abstract:

Scirpo, a Basic Rabin-Williams Digital Signature Specification

Public key digital signatures have been well studied since the early 
publications by academics three decades ago. On the deployment front, 
many standardization efforts brought digital signature techniques to 
the core of current computer networks. This document almost completely 
ignores such standards, and focuses on the theoretical foundations of 
the Rabin-Williams digital signature scheme; it merely describes a 
simple digital signature scheme including minimal interoperability 
provisions. While devoid of any advance to the art or science of 
applied cryptography, this document appears original in its 
formalization of a signature scheme's details with this minimalistic 
approach centered on theoretical foundations.






A Basic Rabin-Williams Digital Signature Specification

2009-07-27 Thread Thierry Moreau

Dear all:

As you know well, there is nothing quite new with the Rabin-Williams 
digital signature scheme.


I just formulated a basic specification of it, leaving aside to the 
greatest extent anything that would not be rooted in some theoretical 
foundation.


The result is at http://www.connotech.com/doc_rw_sign_basic-01.html

Title and abstract:

Scirpo, a Basic Rabin-Williams Digital Signature Specification

Public key digital signatures have been well studied since the early 
publications by academics three decades ago. On the deployment front, 
many standardization efforts brought digital signature techniques to 
the core of current computer networks. This document almost completely 
ignores such standards, and focuses on the theoretical foundations of 
the Rabin-Williams digital signature scheme; it merely describes a 
simple digital signature scheme including minimal interoperability 
provisions. While devoid of any advance to the art or science of 
applied cryptography, this document appears original in its 
formalization of a signature scheme's details with this minimalistic 
approach centered on theoretical foundations.



I already suspect that the usefulness of this document is limited, so if 
you do find some value in it, please let me know how the document can be 
improved for your purpose.
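
To give a concrete flavor of the tweaked-square-root mathematics, here 
is a toy sketch following the e*f*s^2 = H(m) (mod n) convention; the 
primes are absurdly small and the bare SHA-256 message representative is 
illustrative only (this is not the Scirpo encoding):

    # Toy Rabin-Williams sketch: e*f*s^2 = H(m) (mod n), e in {1,-1},
    # f in {1,2}. Toy-sized primes, bare SHA-256 representative --
    # illustrative only, NOT the Scirpo specification. Assumes
    # gcd(H(m), n) = 1, which is overwhelmingly likely.
    import hashlib

    p, q = 499, 983       # p = 3 (mod 8), q = 7 (mod 8)
    n = p * q

    def legendre(a, m):
        return pow(a % m, (m - 1) // 2, m)  # 1 iff a is a square mod m

    def sign(msg: bytes):
        h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
        sq_p, sq_q = legendre(h, p) == 1, legendre(h, q) == 1
        # exactly one tweak (e, f) makes h/(e*f) a square mod both primes:
        # -1 is a non-square mod p and mod q; 2 is a non-square mod p only
        if sq_p and sq_q:
            e, f = 1, 1
        elif not sq_p and not sq_q:
            e, f = -1, 1
        elif not sq_p:
            e, f = 1, 2
        else:
            e, f = -1, 2
        u = h * pow(e * f, -1, n) % n
        sp = pow(u, (p + 1) // 4, p)        # square roots: p, q = 3 (mod 4)
        sq = pow(u, (q + 1) // 4, q)
        s = (sp * q * pow(q, -1, p) + sq * p * pow(p, -1, q)) % n  # CRT
        return e, f, s

    def verify(msg: bytes, e, f, s):
        h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
        return (e * f * s * s - h) % n == 0

    e, f, s = sign(b"hello")
    assert verify(b"hello", e, f, s)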


If anyone has other comments, I would like to read them.


Regards,

- Thierry Moreau



Re: Has any public CA ever had their certificate revoked?

2009-05-05 Thread Thierry Moreau



d...@geer.org wrote:


No, [...]


Now that the main question is answered, there are sub-questions to be asked:

1. Has any public CA ever encountered a situation where a revocation 
would have been necessary?


1.1 Has any public CA ever had a disgruntled employee with too many 
privileges not revoked in a timely manner?


1.2 Has any public CA ever experienced a corporate reorganization where 
a backup HSM has been lost?


1.3 ...

2. Has any public CA ever suspected a situation where a revocation would 
have been necessary?


2.1 Has any public CA ever had an audit that identified mismanagement of 
a signature private key over some extended period of time?


2.2 ...

Regards,


--

- Thierry Moreau



Re: Has any public CA ever had their certificate revoked?

2009-05-05 Thread Thierry Moreau



Paul Hoffman wrote:

At 4:11 PM +1200 5/5/09, Peter Gutmann wrote:


Thierry Moreau thierry.mor...@connotech.com writes:



Now that the main question is answered, there are sub-questions to be asked:

1. Has any public CA ever encountered a situation where a revocation would
have been necessary?


Yes, several times, see e.g. the recent mozilla.org fiasco, as a result of
which nothing happened because it would have been politically inexpedient to
revoke the CA's cert.



Peter, you really need more detents on the knob for your hyperbole setting. "Nothing 
happened" is flat-out wrong: the CA fixed the problem and researched all related problems that 
it could find. Perhaps you meant "the CA was not punished": that would be 
correct in this case.

This leads to the question: if a CA in a trust anchor pile does something wrong (terribly wrong, in this 
case) and fixes it, should they be punished? If you say yes, you should be ready to answer 
who will benefit from the punishment and in what way the CA should be 
punished. (You don't have to answer these, of course: you 
can just mete out punishment because it makes you feel good and powerful. There is 
lots of history of that.)



Before the collapse of the .com market in the year 2000, there were 
grandiose views of global PKIs, even with support from digital signature 
laws.


Actually, it turned out that CA liability avoidance was the golden rule 
at the law and business model abstraction level. Bradford Biddle 
published a couple of articles on this topic, e.g. in the San Diego Law 
Review, Vol 34, No 3.


The main lesson (validated after the PKI re-birth post-2002) is that no 
entity will ever position itself as a commercially viable global CA 
unless totally devoid of liability towards relying parties.


Thus no punishment is conceivable beyond Peter's opinions (they are 
protected by freedom of speech, at least). That was predicted by the Brad 
Biddle analysis 12 years ago.


Regards,


--

- Thierry Moreau



Re: Who cares about side-channel attacks?

2008-10-30 Thread Thierry Moreau



Peter Gutmann wrote:


Ben Laurie [EMAIL PROTECTED] writes:


Peter Gutmann wrote:


Given the string of
attacks on crypto in embedded devices (XBox, iPhone, iOpener, Wii, some
not-yet-published ones on HDCP devices :-), etc) this is by far the most
at-risk category because there's a huge incentive to attack them, the result
affects tens/hundreds of millions of devices, and the attacks are immediately
and widely actively exploited (modchips/device unlocking/etc, an important
difference between this and academic proof-of-concept attacks), so this is the
one where I'd expect the vendors to care most.


But they've all been unlocked using easier attacks, surely?



The published ones seem to be the (relatively) easy ones, but the ones that
have been tried (and either not published or just had the easy outcome
published) have been pretty amazing.  This is another one of these things
where real figures are going to be near-impossible to come by, even harder
than my hypothetical public vendor survey of who uses SCA protection.  You'd
have to read about 20 blogs and a hundred mailing lists, many private, just to
keep up, but from various informal contacts with people working in this area
it seems you're not looking at anything like the conventional identify the
weakest point, then attack there approach.  Instead what's being done is more
like the Iraqi weapons program prior to 1991 where they were using every
imaginable type of approach, including ones like calutrons that had been
abandoned decades earlier by everyone else, for a first-past-the-post finish,
they'd try anything and everything and whatever got them there first would be
declared the winner.  It's the same with these attacks, whenever I've asked
have you tried X the answer is invariably yes, we have.

This style of attack is quite different from the usual academic model, it
neatly illustrates Bruce Schneier's comment that a defender has to defend
every single point along the line while an attacker only has to find a single
weakness.  In this case it seems to be literally true, and the weakness won't
necessarily be the actual weakest point but merely the point where an attacker
with sufficient skill and access to the right tools got in.  Look at the XBox
attacks for example, there's everything from security 101 lack of
checking/validation and 1980s MSDOS-era A20# issues through to Bunnie Huang's
FPGA-based homebrew logic analyser and use of timing attacks to recover device
keys (oh, and there's an example of a real-world side-channel attack for you),
there's no rhyme or reason to them, it's just hammer away at everything with
anything you've got and exploit the first bit that fails.



Now you seem to answer the question yourself: SCA protections apply to a 
single class of attacks, while there are many.


Going back to "who cares": having done certification consulting 
assignments for some devices with crypto, when there was no 
checklist-based standard to apply, good-practice security criteria (to 
be briefly documented in the report) would include the following questions:


(A) Is the secret key inside a device unit applicable to this single 
unit, or is it a system-wide, or domain-wide key?


That's a key management scheme question.

(B) Is the attack destructive? Which device unit features (especially 
"be in working order", but also "be absent of actual tampering evidence" 
or even "remain under the control of the legitimate user without 
interruptions longer than X") need to be impaired for a given class of 
attack to succeed? This question pertains to the secret key as in (A) 
and also to any public-key-to-be-integrity-protected which would prevent 
malicious code download.


That's a product design question.

(C) What are the incentives, if any, for the legitimate user to remain 
well-behaved in the human aspects of device protection? (E.g. a merchant 
has some incentive to maintain a payment authorization device in good 
working order.) This leads to the question of insider threats, so 
satisfactory answers in this area are seldom present.


This is an application design question.

This gives an idea of the analyses that drive security-related spending 
(in my limited experience). Clients (intend to) pay for protections that 
will prevent financial losses and major public relations impacts (and 
then cut operating budgets soon after the project gets its 
authorization!). The consultant's study must clearly link attackers' 
motivations to impacts and to countermeasures.


Refinements to the above analysis methodology call for the same creative 
mind that you assume on the part of the attackers. E.g. the usefulness 
of a device unit clone to the attacker should be considered for 
questions (B) and (C).


Does SCA protection enter the picture? Marginally at best.

Regards,


--

- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
Canada   H2M 2A1

Tel.: (514)385-5691
Fax:  (514)385-5900

web site: http

Re: combining entropy

2008-10-24 Thread Thierry Moreau



IanG wrote:


If I have N pools of entropy (all same size X) and I pool them
together with XOR, is that as good as it gets?

My assumptions are:

 * I trust no single source of Random Numbers.
 * I trust at least one source of all the sources.
 * no particular difficulty with lossy combination.



Do you really trust that no single source of entropy can have knowledge 
of the other sources' outputs, so that it could surreptitiously correlate 
its own?


I.e., you are also assuming that these sources are *independent*.
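
(Illustration only: the XOR pooling itself is a few lines. A minimal 
sketch, with illustrative names, assuming the pools were already 
collected elsewhere:)

/* xor_pool.c -- sketch of XOR-combining N entropy pools of equal size.
   The security claim rests entirely on the pools being independent and
   on at least one of them being unknown to the adversary. */
#include <stddef.h>

void xor_combine(unsigned char *out, const unsigned char *const *pools,
                 size_t npools, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        unsigned char b = 0;
        for (size_t j = 0; j < npools; j++)
            b ^= pools[j][i];           /* one byte from each pool */
        out[i] = b;
    }
}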


--

- Thierry Moreau

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: RSA modulus record

2008-09-17 Thread Thierry Moreau



Weger, B.M.M. de wrote:


Hi,

There's a new biggest known RSA modulus.
It is (in hexadecimal notation):

FF...(total of 9289166 F's)...FFDFF...(total of 1488985
F's)...FF800...(total of 9289165 0's)...001

It is guaranteed to be the product of two different large primes, 
and it has more than 80 million bits. Impressive security...




But it is trivially factored as (2^43112609-1) * (2^37156667-1)
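
(The factorization is obvious from the identity 
(2^a - 1)*(2^b - 1) = 2^(a+b) - 2^a - 2^b + 1, which produces exactly 
the F.../0... hexadecimal run pattern quoted above. A sanity check with 
small exponents standing in for the Mersenne ones -- illustrative only:)

/* mersenne_product.c -- check the identity behind the hex pattern. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    unsigned a = 24, b = 16;   /* tiny stand-ins for 43112609 and 37156667 */
    uint64_t lhs = ((1ULL << a) - 1) * ((1ULL << b) - 1);
    uint64_t rhs = (1ULL << (a + b)) - (1ULL << a) - (1ULL << b) + 1;
    printf("%llx\n%llx\n", (unsigned long long)lhs, (unsigned long long)rhs);
    return lhs == rhs ? 0 : 1;    /* both print fffeff0001 */
}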

Factorization-based moduli need to be drawn from a pool of numbers 
without special properties, so that their factorization is not 
facilitated by special-purpose algorithms. There is ample academic work 
aiming to refine "without special properties", and there is also ample 
(debated) academic work which assumes that "without special properties" is 
a reasonable assumption in practice.


The fun part is to reconcile theory and practice, e.g. a decade of RSA 
industrial application before retrofitting the probabilistic property in 
RSA, while probabilistic cryptosystems have been around in academic work 
almost since the early days of published work on PK crypto.


Regards,


- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
Canada   H2M 2A1

Tel.: (514)385-5691
Fax:  (514)385-5900

web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Let's be paranoid about CSS (cascaded style sheet) as an application data integrity attack vector!

2008-09-09 Thread Thierry Moreau

Dear security experts:

Suppose I want to use the HTML syntax and a plain web browser as a user 
interface for a secure application. By secure, I mean, among other 
things, that the application service provider is confident that the user 
sees the HTML contents without integrity vulnerabilities. Of course, 
https is the only allowed protocol for reaching this web page, and the 
only protocol present in any link from this page to a next one.



I am now concerned about the default and implicit style sheets that the 
web browser uses for HTML content rendering.



Here is a simple exploit which alters the ietf.org main page. Insert the 
following four lines


a[title="IETF Secretariat"]:before
{content: "Don't trust the "}
a[title="IETF Secretariat"]:after
{content: " for anything security-critical."}

to the file /usr/lib/firefox/res/html.css

then restart Mozilla Firefox and bingo, the ietf.org main page is 
surreptitiously changed. I.e. the link to "IETF Secretariat" is changed 
to "Don't trust the IETF Secretariat for anything security-critical."



OK, this requires root access because the Linux community is generally 
security-conscious. But you should see the general idea: paranoia leads 
me to think of an adversary who would threaten application integrity 
(such as the above) without leaving much trace of computer system 
penetration.



This attack vector is trivial given the HTML markup language 
philosophy - the above exploit merely alters default settings in a 
parameter specification language (CSS) having fine-grained 
expressive potential. CSS is about what the user sees when HTML 
content is displayed on a medium (typically a web browser).



Does anybody have any tips about how to mitigate this vulnerability, with 
minimal assumptions about the client web browser?



The basic idea would be to patch any default setting (that could alter 
the user display ...) in the CSS specifications with explicit parameter 
setting associated with the HTML contents. In the above case, the IETF 
can protect itself with the following HTML markup text near the 
beginning of the file:


<STYLE type="text/css">:before{content:""}:after{content:""}</STYLE>


The habit of storing CSS style information in various style sheet files 
separate from the HTML content is worrisome, as each stylesheet 
retrieval operation is a potential attack vector.



Thanks in advance. More specifically, with the hope that paranoia can 
sometimes be a productive state of mind, I remain paranoid-ly grateful 
for your answers.



--

- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
Canada   H2M 2A1

Tel.: (514)385-5691
Fax:  (514)385-5900

web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Decimal encryption

2008-08-27 Thread Thierry Moreau



Philipp Gühring wrote:


Hi,

I am searching for symmetric encryption algorithms for decimal strings.

Let's say we have various 40-digit decimal numbers:
2349823966232362361233845734628834823823
3250920019325023523623692235235728239462
0198230198519248209721383748374928601923

As far as I calculated, a decimal digit carries the equivalent of about 3.3219
bits, so with 40 digits we have about 132.877 bits.

Now I would like to encrypt those numbers in a way that the result is a
decimal number again (that's one of the basic rules of symmetric
encryption algorithms as far as I remember).

Since 132.877 bits is similar to 128-bit encryption (like e.g. AES),
I would like to use an algorithm with a strength somewhat comparable to AES.
But the problem is that I have 132.877 bits, not 128 bits. And I can't
cut it off or enhance it, since the result has to be a 40-digit decimal
number again.

Does anyone know an algorithm that has reasonable strength and is able
to operate on non-binary data? Preferably on any chosen number base?



The short answer is "no": nobody knows a secure algorithm that would 
work as a decimal stream cipher AND would not extend the message size 
with some form of key material reference data (or salt, or IV ...).


If you have room for such message-specific reference data, it should be 
easy to design a decimal stream cipher for short messages.
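
(To make the second paragraph concrete, here is a minimal sketch of 
such a decimal stream cipher. Everything here is illustrative: the 
keystream generator below is a toy LCG with NO security value, standing 
in for a strong PRF keyed with the secret key and the per-message IV; 
the rejection step keeps the digits unbiased.)

/* decimal_stream.c -- sketch: derive uniform decimal digits from a
   keystream by rejection sampling, then add them mod 10. */
#include <stdio.h>
#include <stddef.h>

typedef struct { unsigned s; } ks_t;

static void ks_init(ks_t *k, unsigned key, unsigned iv)
{
    k->s = key ^ (iv * 2654435761u);    /* toy keying, NOT secure */
}

static unsigned char ks_byte(ks_t *k)
{
    k->s = k->s * 1103515245u + 12345u; /* toy LCG, NOT secure */
    return (unsigned char)(k->s >> 16);
}

static unsigned char ks_digit(ks_t *k)
{
    unsigned char b;
    do { b = ks_byte(k); } while (b >= 250); /* reject 250..255: unbiased mod 10 */
    return b % 10;
}

static void crypt_digits(ks_t *ks, const char *in, char *out, int decrypt)
{
    size_t i;
    for (i = 0; in[i] != '\0'; i++) {
        int k = ks_digit(ks), d = in[i] - '0';
        int v = decrypt ? d + 10 - k : d + k;
        out[i] = (char)('0' + v % 10);
    }
    out[i] = '\0';
}

int main(void)
{
    const char *msg = "2349823966232362361233845734628834823823";
    char ct[41], pt[41];
    ks_t e, d;
    ks_init(&e, 0xC0FFEE, 42); crypt_digits(&e, msg, ct, 0);
    ks_init(&d, 0xC0FFEE, 42); crypt_digits(&d, ct, pt, 1);
    printf("%s\n%s\n", ct, pt);         /* pt matches msg */
    return 0;
}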



--

- Thierry Moreau

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: The PKC-only application security model ...

2008-07-24 Thread Thierry Moreau



Eric Rescorla wrote:


At Wed, 23 Jul 2008 17:32:02 -0500,
Thierry Moreau wrote:




Anne  Lynn Wheeler wrote about various flavors of certificateless 
public key operation in various standards, notably in the financial 
industry.


Thanks for reporting those.

No doubt that certificateless public key operation is neither new nor 
absent from today's scene.


The document I published on my web site today is focused on fielding 
certificateless public key operations with the TLS protocol, which does not 
support client public keys without certificates - hence the "meaningless 
security certificate". Nothing fancy in this technique, just a small 
contribution with the hope of facilitating the use of client-side PKC.



DTLS-SRTP 
(http://tools.ietf.org/html/draft-ietf-sip-dtls-srtp-framework-02,

http://tools.ietf.org/html/draft-ietf-avt-dtls-srtp)
uses a similar technique: certificates solely as a key 
carrier authenticated by an out-of-band exchange.




In draft-ietf-sip-dtls-srtp-framework, the detailed scheme uses 
self-signed certificates created by client end-entities themselves. The 
basic idea is identical. At the detailed level in my document, the 
client end-entity auto-issues a security certificate with a "breached" 
CA private key.


In the TLS Certificate request message, a list of CA distinguished names 
is provided to the end entity. Referring to a "breached" CA public key 
is an invitation to submit a "meaningless" end-entity certificate, making 
the detailed scheme more plain with respect to TLS options (i.e. an 
empty list in a certificate request message could be a not-so-well 
supported mode in TLS software implementations).


So, maybe the reference to the notion of self-signed EE certificates in 
draft-ietf-sip-dtls-srtp-framework could be replaced by "meaningless" EE 
certificates (or something else), which would include both self-signed 
and auto-issued certificates. In such a case, the RFC publication for my 
document would become more pressing.


A related discussion occurred on the IETF PKIX mailing list in June 2008 
under the subject "RFC 5280 Question".


Regards,


--

- Thierry Moreau

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: The PKC-only application security model ...

2008-07-24 Thread Thierry Moreau



Tom Scavo wrote:


On Wed, Jul 23, 2008 at 6:32 PM, Thierry Moreau
[EMAIL PROTECTED] wrote:


The document I published on my web site today is focused on fielding
certificateless public operations with the TLS protocol which does not
support client public keys without certificates - hence the meaningless
security certificate.



As such, your document is directly applicable to a proposed standard
that is now winding its way through the OASIS process:

http://wiki.oasis-open.org/security/SamlHoKWebSSOProfile

The proponents of this variant of SAML Web Browser SSO have no
interest in an online database of public keys, but your profile is
relevant nonetheless, for its interoperability aspects.


Thanks, I will look into this.


You mentioned earlier that this may become an IETF RFC.  Do I take
this to mean that your company holds no patent, copyright, trademark
or license rights that would prevent us from relying on your profile?


There is neither a patent nor a patent application for the matter 
contained in the referenced document.


--

- Thierry Moreau

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: The PKC-only application security model ...

2008-07-23 Thread Thierry Moreau



Anne  Lynn Wheeler wrote about various flavors of certificateless 
public key operation in various standards, notably in the financial 
industry.


Thanks for reporting those.

No doubt that certificateless public key operation is neither new nor 
absent from today's scene.


The document I published on my web site today is focused on fielding 
certificateless public key operations with the TLS protocol, which does not 
support client public keys without certificates - hence the "meaningless 
security certificate". Nothing fancy in this technique, just a small 
contribution with the hope of facilitating the use of client-side PKC.


- Thierry Moreau

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Why doesn't Sun release the crypto module of the OpenSPARC? Crypto export restrictions

2008-06-12 Thread Thierry Moreau



Richard Salz wrote:

I would expect hardware designs to be treated more like hardware than 
software.




That's an interesting observation, raising the issue of what is "speech" 
vs "hardware".


When I looked into this issue, I found the Common Criteria 
certification methodology as evidence that "speech" covers everything 
from the most high-level abstract design description to the most 
concrete representation of the hardware that you would look at, e.g. for 
security certification assurance that electronic gates are properly 
positioned by the Computer-Aided-Design tools. Hence, any information is 
speech, and if it's in the public domain, I would expect an export 
control exception to apply. Only the actual silicon, and 
non-human-readable dies for the silicon, would be hardware.


Otherwise, I see no legal basis to locate a cut-off point between 
"speech" and "hardware" in the process of design refinements leading to 
the actual processor.


Regards,

--

- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
Canada   H2M 2A1

Tel.: (514)385-5691
Fax:  (514)385-5900

web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: [Fwd: Secure Server e-Cert Developer e-Cert. Comerica TM Connect Web Bank]

2008-04-23 Thread Thierry Moreau



Arshad Noor wrote:



Fascinating!

This may be the first phishing e-mail I've seen that uses
a message related to digital certificates for attacking the
client; I am not a customer of Comerica.



I did notice this reference to certificates in the phishing blabla message.

I checked very quickly at comerica.com, they don't seem to use client PK 
pairs (nor certificates), merely the usual name/password authentication.


If the target financial institution was using client authentication, it 
would be interesting to see phishing scenario details, but that's not 
the case until shown otherwise.


I'm not impressed by the phisher blabla message.

--

- Thierry Moreau

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Fixing SSL (was Re: Dutch Transport Card Broken)

2008-02-21 Thread Thierry Moreau



Leichter, Jerry wrote:


While trying to find something else, I came across the following
reference:

Title:   Sender driven certification enrollment system
Document Type and Number:  United States Patent 6651166
Link to this page:  http://www.freepatentsonline.com/6651166.html
Abstract:
A sender driven certificate enrollment system and methods of its
use are provided, in which a sender controls the generation of a
digital certificate that is used to encrypt and send a document
to a recipient in a secure manner. The sender compares
previously stored recipient information to gathered information
from the recipient. If the information matches, the sender
transfers key generation software to the recipient, which
produces the digital certificate, comprising a public and
private key pair. The sender can then use the public key to
encrypt and send the document to the recipient, wherein the
recipient can use the matching private key to decrypt the
document.



Some feedback on the above security certificate issuance process.

At first, it seems neat. But then, looking at how it works in practice:

- the client receives an e-mail notification soliciting him to click on an
HTML link and then enroll for a security certificate,

- the client is solicited exactly like a phishing criminal would do, and

- a Java software utility downloaded from the web should not be allowed to
modify security-critical parameters on the local machine.


According to my records, this issuance process is nonetheless
representative of research directions for user enrollment, i.e. there
aren't too many other documented processes in this area.

Regards,


--

- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
Canada   H2M 2A1

Tel.: (514)385-5691
Fax:  (514)385-5900

web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Fixing SSL (was Re: Dutch Transport Card Broken)

2008-01-31 Thread Thierry Moreau



Philipp Gühring wrote:

Hi,



SSL key distribution and management is horribly broken,
with the result that everyone winds up using plaintext
when they should not.



Yes, sending client certificates in plaintext while claiming that SSL/TLS is 
secure doesn't work in a world of phishing and identity theft anymore.


We have the paradox situation that I have to tell people that they should use 
HTTPS with server-certificates and username+password inside the HTTPS 
session, because that's more secure than client certificates ...


Does anyone have an idea how we can fix this flaw within SSL/TLS within a 
reasonable timeframe, so that it can be implemented and shipped by the 
vendors in this century?


(I don't think that starting from scratch and replacing SSL makes much sense, 
since it's just one huge flaw ...)




If I recall correctly, SSL was designed chronologically after the ISO OSI 
Network-Layer Security Protocol (yes, the official WAN was actually X.25 
at one point) or Transport Layer Security Protocol, both in their 
connection-oriented flavor, which used ideas originating from DECnet 
designs (researchers Tardo and Alagappan; I once had a patent number 
in this thread of protocol engineering, but I lost it). Anyway, the key 
point in these visionary ideas is that the D-H exchange occurs *before* 
the exchange of security certificates. This provided the traffic-flow 
confidentiality that has become desirable to protect privacy these days.


So, you got your fix with OSI NLSP or TLSP, you just have to overcome 
the *power of the installed base*!


Regards,

--

- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
Canada   H2M 2A1

Tel.: (514)385-5691
Fax:  (514)385-5900

web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: More on in-memory zeroisation

2007-12-13 Thread Thierry Moreau



Leichter, Jerry wrote:


If the function is defined as I suggested - as a static or inline - you
can, indeed, take its address.  (In the case of an inline, this forces
the compiler to materialize a copy somewhere that it might not otherwise
have produced, but not to actually *use* that copy, except when you take
the address.)  You are allowed to invoke the function using the address
you just took.  However, what in that tells you that the compiler -
knowing exactly what code will be invoked - can't elide the call?


Case of static function definition: the standard says that standard 
library headers *declare* functions, not *define* them.


Case of inline: I don't know if inline definition falls in the standard 
definition of declaration.


Also, the standard refers to these identifiers as having external linkage. 
This language *might* not create a mandatory provision if there was a 
compelling reason to have a static or inline implementation, but I doubt 
the very infrequent use of (memset)(?,0,?) instead of memset(?,0,?) is a 
significant optimization opportunity. The compiler writer risks a 
non-compliance assessment in making such a stretched reading of the 
standard in the present instance, for no gain in any benchmark or 
production software speed measurement.


Obviously, a pointer to a function with external linkage must adhere 
to the definition of the pointer equality (==) operator.


Maybe a purposely stretched reading of the standard might let you make 
your point. I don't want to argue too theoretically. Peter and I just 
want to clear memory!
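
(For what it's worth, a minimal sketch of the workaround many 
implementations settle on -- not guaranteed by the standard either, but 
the volatile-qualified pointer keeps current compilers from proving 
which function is called, so the call cannot be elided:)

/* secure_zero.c -- zeroisation through a volatile function pointer. */
#include <string.h>

static void *(*const volatile memset_vp)(void *, int, size_t) = memset;

void secure_zero(void *p, size_t n)
{
    memset_vp(p, 0, n);   /* compiler must load the pointer and call it */
}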


Kind regards,


--

- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
Canada   H2M 2A1

Tel.: (514)385-5691
Fax:  (514)385-5900

web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: More on in-memory zeroisation

2007-12-13 Thread Thierry Moreau

/* testf.c */
#include <stdio.h>
#include <string.h>

typedef void *(*fpt_t)(void *, int, size_t);

void f(fpt_t arg)
{
    if (memset == arg)
        printf("Hello world!\n");
}

/* test.c */
#include <stdlib.h>
#include <string.h>

typedef void *(*fpt_t)(void *, int, size_t);

extern void f(fpt_t arg);

int main(int argc, char *argv[])
{
    f(memset);
    return EXIT_SUCCESS;
}

/*   I don't want to argue too theoretically.

- Thierry Moreau */

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: More on in-memory zeroisation

2007-12-13 Thread Thierry Moreau



Leichter, Jerry wrote:



On Wed, 12 Dec 2007, Thierry Moreau wrote:

| Date: Wed, 12 Dec 2007 16:24:43 -0500
| From: Thierry Moreau [EMAIL PROTECTED]
| To: Leichter, Jerry [EMAIL PROTECTED]
| Cc: Peter Gutmann [EMAIL PROTECTED], cryptography@metzdowd.com
| Subject: Re: More on in-memory zeroisation
| 
| /* testf.c */

| #include <stdio.h>
| #include <string.h>
| 
| typedef void *(*fpt_t)(void *, int, size_t);
| 
| void f(fpt_t arg)

| {
|   if (memset == arg)
|     printf("Hello world!\n");
| }
| 
| /* test.c */

| #include <stdlib.h>
| #include <string.h>
| 
| typedef void *(*fpt_t)(void *, int, size_t);
| 
| extern void f(fpt_t arg);
| 
| int main(int argc, char *argv[])

| {
|   f(memset);
|   return EXIT_SUCCESS;
| }
| 
| /*   I don't want to argue too theoretically.
| 
| - Thierry Moreau */

I'm not sure what you are trying to prove here.  Yes, I believe that
in most implementations, this will print Hello world\n.  Is it,
however, a strictly conforming program (I think that's the right
standardese) - i.e., are the results guaranteed to be the same on
all conforming implementations?  I think you'll find it difficult
to prove that.


If there is a consensus among conforming implementation developers that 
the above program is conforming, that's a good enough proof for me.


As a consequence of the alleged consensus above, my understanding of the C 
standard would prevail and (memset)(?,0,?) would refer to an external 
linkage function, which would guarantee (to the strength of the above 
consensus) resetting an arbitrary memory area for secret intermediate 
result protection.


Reading ANSI X3.159-1989, I believe there would be such a consensus, and 
I find it quite obvious. You may disagree, and I will argue no further.


Regards,

--

- Thierry Moreau

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: question re practical use of secret sharing

2007-06-22 Thread Thierry Moreau



Very interesting discussion.

I bring a different angle to the very topic of discussion ("practical 
use"). See below, after the quotes, with which I fully agree.


Peter Gutmann wrote:


D. K. Smetters [EMAIL PROTECTED] writes:



However, given the difficulty people have in managing keys in general,
building effective systems that allow them to manage key fragments is beyond
the range of most current commercial products.



I think that's the perfect summary of the problem with threshold schemes.
The processes they involve are simply too complex both to model mentally for
users and to build an interface to.  [...]

When we were mulling it over to see whether it was worth standardising, we
tried to come up with a general-purpose programming API for it.  [...]
working at the very geeky crypto toolkit API level (where you're allowed to be

So that lead to two questions:

1. Who would want to implement and use an ODBC-complexity-level API just to
   protect a key?

2. How are you going to fit a UI to that?  (This is the real killer, even if
   you come up with some API to do (1), there's probably no effectively usable
   way to do (2)).

At that de facto QED the discussion more or less ended.

Peter.



Here is a different perspective.

Forget about mathematics (who wants to go farther than 2 out of 3, 3 out 
of 4, and 2 out of 4).


Forget about an API: think first of a potential application area.

Forget about HCI (Human Computer Interface): focus on the 
control/liability allowed/implied by a key share and the administrative 
procedures.


Forget about processors and computers altogether!

If I didn't lose you yet, think of secret sharing for the *long-term 
cryptographic key material* for DNSSEC support at the root.


I.e. share the control over delegation of DNSSEC *routine* signature 
operations (to IANA staff in the foreseeable future) among secret 
sharing entities, say USG NTIA, an European entity, and a third one 
elsewhere.


Store the key shares on paper sheets of bar codes - the user interface 
is a safe box for the secure hardware, and a diplomatic briefcase for 
transport layer.


Actually, secret sharing implies significant procedural overhead for key 
management, and hence may find applications only for the "master keys" of 
some organizations.
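
(For the record, the mathematics being waved away above really is the 
easy part; a minimal sketch of n-of-n XOR splitting, illustrative only, 
with rand() standing in for a cryptographic RNG:)

/* xor_split.c -- n-of-n secret splitting: n-1 random shares, the last
   share is the secret XORed with all of them; all n shares are needed
   (XOR them together) to recover the secret. */
#include <stdlib.h>
#include <stddef.h>

void split(const unsigned char *secret, unsigned char **shares,
           size_t nshares, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        unsigned char acc = secret[i];
        for (size_t j = 0; j + 1 < nshares; j++) {
            shares[j][i] = (unsigned char)(rand() & 0xFF); /* NOT a crypto RNG */
            acc ^= shares[j][i];
        }
        shares[nshares - 1][i] = acc;
    }
}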


I did propose a scheme where the above principles are implicitly put 
forward, i.e. TAKREM for DNSSEC (root) trust anchor key rollover. The 
above long-term cryptographic key material is specified in the TAKREM 
documentation (perhaps other routine public key cryptoperiod 
management schemes might use the same principles for secret sharing).


From some of those who develop interoperability specifications (i.e. 
IETF participants) I was called a "patent troll". From those 
organizations that control the Internet, i.e. USG NTIA, Verisign, and 
ICANN, I seem to be nobody. Hence the proposal made little progress.


In summary, to answer the question of "practical use of secret sharing", I 
don't see it in my crystal ball. Nonetheless, control of the DNSSEC root 
signature key would be a good candidate application area for secret sharing.


Admittedly, the above change in perspective does not solve the 
difficulty people have in managing keys in general -- it merely shifts 
it from trusted system administrators to diplomats and like individuals. 
(A DHS sponsored study even ignored or downplayed mere split key storage 
for protecting the DNSSEC root private key.)


Regards,

--

- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
Canada   H2M 2A1

Tel.: (514)385-5691
Fax:  (514)385-5900

web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Exponent 3 damage spreads...

2006-09-11 Thread Thierry Moreau



Jostein Tveit wrote:


Ben Laurie [EMAIL PROTECTED] writes:



...thought this might interest people here.



Anyone got a test key with a real and a forged signature to test
other implementations than OpenSSL?



If I understand the attack mathematics correctly, the following 
algorithm should give you an alleged signature value that would be 
mistakenly accepted by a flawed RSA implementation. I didn't implement 
the algorithm, and I will not make suggestions as to a convenient 
big-number arithmetic tool to implement it.


Note: The algorithm output value is NOT A FORGED SIGNATURE, since a 
non-flawed RSA signature verification implementation will correctly 
reject it. Nonetheless, public exponent 3 should be deprecated in any 
use of RSA.


For the record, I am referring to
Hal Finney, "Bleichenbacher's RSA signature forgery based on 
implementation error", Wed, 30 Aug 2006

http://www.mail-archive.com/cryptography@metzdowd.com/msg06537.html

Input:

N, large public modulus (of unknown factorization)
h, hash value

Constant:

p: hex 01 FF 00 30 21 30 09 06 05 2B 0E 03 02 1A 05 00 04 14

A random binary source (e.g. large enough PRNG output)

Algorithm:

(A) find the largest value of r such that, with b=(p*2^160+h)*2^(8r), 
b+2^(8r)-1 < N


(B) select random a, 0 < a < N^2, then set c=a*N^2+b+2^(8r)-1

(C) using a simple binary search, find the d = integer cubic root of c

(D) if d^3 < a*N^2+b, go back to step (B) -- if this occurs with high 
probability, that's a failure of the approach proposed here; intuition 
suggests that the probability is either very close to zero, or very 
close to one


(E) set alleged signature s=d mod N (indeed, d < N, so s=d) and validate 
(merely as a software self-check) that (s^3 mod N) div 2^(8r) equals 
(p*2^160+h)


(F) output alleged signature s
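
(Step (C) is indeed simple; a minimal sketch of the integer cubic root 
by binary search, in 64-bit arithmetic for illustration only -- the 
actual attack needs multi-precision integers:)

/* icbrt.c -- largest d with d^3 <= c, for c < 2^63. */
#include <stdio.h>
#include <stdint.h>

static uint64_t icbrt(uint64_t c)
{
    uint64_t lo = 0, hi = (uint64_t)1 << 21;   /* (2^21)^3 = 2^63 > c */
    while (lo + 1 < hi) {
        uint64_t mid = lo + (hi - lo) / 2;
        if (mid * mid * mid <= c)
            lo = mid;        /* invariant: lo^3 <= c */
        else
            hi = mid;        /* invariant: hi^3 > c */
    }
    return lo;
}

int main(void)
{
    printf("%llu\n", (unsigned long long)icbrt(1000000));  /* prints 100 */
    return 0;
}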

Regards,

--

- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
Canada   H2M 2A1

Tel.: (514)385-5691
Fax:  (514)385-5900

web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: DNS/DNSSEC as an inbound mail signature public key distribution mechanism (was: signing all outbound email)

2006-09-08 Thread Thierry Moreau



Jon Callas wrote:



[... about DKIM ...] The signature travels  with the message and 
the signing key is in the network. As long as  you have both, you can 
verify the signatures.




"The signing key is in the network" -- indeed. The public signature key 
is stored in the DNS.


DKIM might be the first widely deployed application to use the DNS as 
the preferred means of distributing public keys.
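
(For readers who have not looked at DKIM yet: the verifier fetches the 
public key from a DNS TXT record. A hypothetical record -- the domain, 
selector and key material below are placeholders, and the base64 key is 
elided:

selector._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIGfMA0..."

Nothing in plain DNS authenticates that record, hence the DNSSEC point 
below.)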


*Authenticated* public key distribution would need an upgrade of the DNS 
with DNSSEC deployment.


Perhaps it is time for discussion groups like this one to take a look at 
DNSSEC (RFC4033 / RFC4034 / RFC4035) and review its security principles, 
trust model, deployment challenges, HMI (Human Machine Interaction) 
aspects, etc.


Look at 
http://www.circleid.com/posts/dnssec_deployment_and_dns_security_extensions/ 
or query your favorite web search engine with DNSSEC.


Good reading.

--

- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
Canada   H2M 2A1

Tel.: (514)385-5691
Fax:  (514)385-5900

web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Status of opportunistic encryption

2006-06-04 Thread Thierry Moreau



Thomas Harold wrote, in part:



I do suspect at some point that the lightweight nature of DNS will give 
way to a heavier, encrypted or signed protocol.  Economic factors will 
probably be the driving force (online banking).




E.g. RFC4033, RFC4034, RFC4035.

- Thierry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: what's wrong with HMAC?

2006-05-01 Thread Thierry Moreau



Travis H. wrote:


Ross Anderson once said cryptically,


HMAC has a long story attached to it - the triumph of the
theory community over common sense



He wouldn't expand on that any more... does anyone have an idea of
what he is referring to?


I suggest that you read the theory, make up your own mind, and share your 
opinion with us.


Perhaps Mr. Anderson read the theory, made up his own mind, and shared his 
opinion with whoever was listening to or reading the above citation.


I recall having read some theory and made up my own mind, and Mr. Anderson's 
citation above wouldn't be too far from my opinion at that time.


All theories are equal, but some theories are more equal than others ...

Have fun!

--

- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
Canada   H2M 2A1

Tel.: (514)385-5691
Fax:  (514)385-5900

web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: thoughts on one time pads

2006-01-26 Thread Thierry Moreau



Travis H. wrote:


In this article, Bruce Schneier argues against the practicality of a
one-time pad:

http://www.schneier.com/crypto-gram-0210.html#7

I take issue with some of the assumptions raised there.

[...] Then a $1
CD-ROM would hold enough data for 7 years of communication! [...]

So my questions to you are:

1) Do you agree with my assessment?  If so, why has every crypto
expert I've seen poo-pooed the idea?



You shift to the problem of filling CDs with pure random data. Which 
physical property do you want to sample and with which type of hardware 
do you expect to sample it and at which rate, and with which protection 
against eavesdropping during the sampling? At what cost? With what kind 
of design assurance that the pure random data is indeed pure and random?


Have fun.

--

- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
Canada   H2M 2A1

Tel.: (514)385-5691
Fax:  (514)385-5900

web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Another entry in the internet security hall of shame....

2005-09-10 Thread Thierry Moreau



Stephan Neuhaus wrote:


James A. Donald wrote:

[...]

That's because PSKs (as I have understood them) have storage and 
management issues that CA certificates don't have, [...] 
that the issue of how to exchange PSKs 
securely in the first place is left as an exercise for the reader (good 
luck!)


See http://www.connotech.com/sakem_index.htm.

Incidentally, TLS-PSK protocol standardization proposals have been around 
in the IETF for some time, and it is the mobile telephony development 
momentum that made them pass the standardization process (e.g. drafts by 
Nokia). In the mobile telephony world, the distribution of 
subscriber identity modules (i.e. integrated circuits with 
secret/private keying material) to subscribers is done physically.




[...] 
( [...] for the secure exchange 
of PSKs, which is IMHO unresolvable without changes to the business 
workflow). [...]
But the server side?  There are many more server applications than there 
are different Web browsers, and each one would have to be changed.  At 
the very least, they'd need an administrative interface to enter and 
delete PSKs.  That means that supporting PSKs is going to cost the 
businesses money (both to change their code and to change their 
workflow), money that they'd rather not spend on something that they 
probably perceive as the customer's (i.e., not their) problem, namely 
phishing.




The incremental operating cost can be reasonable only for organizations 
that already incur the *authorization* management overhead.




Fun,


Regards,

--

- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
Canada   H2M 2A1

Tel.: (514)385-5691
Fax:  (514)385-5900

web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Standardization and renewability

2005-08-04 Thread Thierry Moreau



Hagai Bar-El wrote:

[...]

Up till now I could come up with three approaches to solve this problem:

1. Limit renewability to keying.


	Then you should study "A Note About Trust Anchor Key Distribution", see 
http://www.connotech.com/takrem.pdf. It allows public keys to be 
distributed for use, if need be, at a later time in a different context.


2. Generalize the scheme (like the SPDC concept, or MPEG IPMP), more or 
less by making the standard part general, with non-standard profiles.
3. Standardize sets of key management methods at once, so to have spares 
for immediate switching.


[...]



--

- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
Canada   H2M 2A1

Tel.: (514)385-5691
Fax:  (514)385-5900

web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


A Note About Trust Anchor Key Distribution

2005-07-06 Thread Thierry Moreau

To all:

Here is a scheme for a central organization
distributing a trust anchor public key with a rollover
requirement. The suggested acronym for this scheme is
TAKREM, for Trust Anchor Key REnewal Method.

We use the notation #R[i]# for the #i#'th root public
key, with the private key counterpart #r[i]#.

The central organization establishes key pairs
#r[0],R[0]#, #r[1],R[1]#, #r[2],R[2]#, ...,
#r[n],R[n]#, allocating the pair #r[0],R[0]# as the
initial private/public trusted key pair, and reserving
each key pair #r[i],R[i]# for the cryptoperiod
starting with the #i#'th root key renewal, for
#1<=i<=n#.

A separate MASH (Modular Arithmetic Secure Hash)
instance #H[i]# is created for each #R[i]#. MASH is
defined in International standard document ISO/IEC
10118-4:1998, Information technology - Security
techniques - Hash-functions - Part 4: Hash-functions
using modular arithmetic.

That is, the central organization selects a large
composite modulus number #N[i]# used in the MASH round
function and a prime number #P[i]# used in the MASH
final reduction function.

Then, the central organization selects a random salt
field #s[i]#.

A hash computation gives a root key digest #D[i]# :
  #D[i]=H[i](s[i]|R[i]|N[i]|P[i])# .
The digest #D[i]# is like an advanced notice of future
trust anchor key #R[i]#.

The data tuple #r[i],R[i],N[i],P[i],s[i]# is set
aside in dead storage.

The trust anchor key initial distribution is
  #R[0], D[1], D[2], ..., D[n]# .

Security rationale: with data tuple
#r[i],R[i],N[i],P[i],s[i]# totally concealed until
the usage period for key pair #r[i],R[i]#, an
adversary is left with the digest #D[i]# from which it
is deemed impossible to mount a brute force attack.

A root key rollover is triggered by the following
message:
  #i,R[i],N[i],P[i],s[i]# .

Upon receipt of this message, the end-user system
is in a position to validate the root key digest
#D[i]#.
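
(A minimal sketch of that end-user validation step;
placeholder_hash() is a toy stand-in for the per-key
MASH instance #H[i]# and has no security value -- a real
implementation must use MASH as cited above:)

/* takrem_verify.c -- rebuild s[i]|R[i]|N[i]|P[i], hash it,
   and compare with the pre-distributed digest D[i]. */
#include <string.h>
#include <stddef.h>

#define DIGEST_LEN 32

static void placeholder_hash(const unsigned char *m, size_t len,
                             unsigned char out[DIGEST_LEN])
{
    memset(out, 0, DIGEST_LEN);
    for (size_t i = 0; i < len; i++)
        out[i % DIGEST_LEN] ^= m[i];  /* toy folding, NOT MASH */
}

int verify_rollover(const unsigned char *concat, size_t len,
                    const unsigned char expected_D[DIGEST_LEN])
{
    unsigned char d[DIGEST_LEN];
    placeholder_hash(concat, len, d);
    return memcmp(d, expected_D, DIGEST_LEN) == 0;  /* 1: accept R[i] */
}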

More details are provided in
http://www.connotech.com/takrem.pdf.

Regards,

--

- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
Canada   H2M 2A1

Tel.: (514)385-5691
Fax:  (514)385-5900

web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: [Clips] Storm Brews Over Encryption 'Safe Harbor' in Data Breach Bills

2005-06-03 Thread Thierry Moreau



Adam Shostack wrote:



No.  If I get your database with SQL injection, all conditions are
met, and I have your plaintext.  But, the data is in an encrypted
form, and you're saved.


I'm not familiar with SQL injection vulnerabilities. Perhaps the issue 
is misrepresentation by the SQL provider that the database is "encrypted" 
using proper algorithms and key management. I guess that if a database 
access application subject to SQL injection has cleartext access to the 
data, this data is either not appropriately encrypted or control of 
the encryption key escaped the legitimate user when the SQL injection 
became available to the adversary.


One issue with rulemaking/lawmaking is that the consequences of a rule are 
sometimes unexpected because words (e.g. "properly encrypted") are 
sometimes corrupted by diverted usage, e.g. the public relations aspects of 
e-commerce security. So, even if your statement was technically wrong, 
if *you* are convinced that a database vulnerable to the SQL injection 
tampering threat is nonetheless "encrypted", then a judge might be so 
convinced. Consequently, the lawmaking exercise must be more specific 
than above, e.g. using references to by-laws which define acceptable 
encryption technology and key management techniques ... which is no 
longer a simple solution.


Thanks for highlighting the limits of the original post, either on a 
technical basis or on issues of lawmaking strategy.


--

- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
Canada   H2M 2A1

Tel.: (514)385-5691
Fax:  (514)385-5900

web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: No Encryption for E-Passports

2005-03-07 Thread Thierry Moreau
See the following comments submitted to the Department of State

- Thierry Moreau
CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
Canada   H2M 2A1
Tel.: (514)385-5691
Fax:  (514)385-5900
web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]
===
                    Comments on the
     Department of State Public Notice 4993 (RIN 1400 AB93)
                        about
                  Electronic Passport

                    March 7, 2005

                  by Thierry Moreau

           CONNOTECH Experts-conseils inc.
             9130 Place de Montgolfier
           Montréal, Qc, Canada H2M 2A1

               Tel.: +1-514-385-5691
               Fax: +1-514-385-5900

            E-mail: [EMAIL PROTECTED]
          Internet: http://www.connotech.com


Introduction
We appreciate the opportunity to submit comments on the
electronic passport (e-passport) global project and proposed
regulation changes ([1]). Some of these comments have a
broader scope than the regulation change (this seems to be
invited by the Department of State by the public notice
discussion of e-passport encryption debate, i.e. [1] page
8306, center column, 2nd to 4th paragraphs). Our comments are
centered on the information security aspects of the e-
passport global project, notably the ICAO Public Key
Infrastructure (PKI) framework, i.e. [2].
The uniqueness of security requirements for the global
interoperability of e-passports has been recognized early in
the ICAO development process that brought the document [2] to
its current version. As a result, most of the traditional PKI
concepts have been omitted or simplified. We believe there are
merits in the scheme found in the document [2] for the e-
passport security, including the selection of un-encrypted e-
passport electronic chip data. The driving design criterion
has been operational hindsight rather than conservatism. We
are concerned that this hindsight is not always reflected in
the [1] public notice.
Our comments below are itemized, and they do not have
equal importance, significance, or relevance to the specific
regulatory change.
Unencrypted e-passports is a valid direction
We generally concur with the ICAO selection of
unencrypted e-passports. Encryption would mean a global key
management scheme to determine the circumstances in which an
e-passport would be unlocked by a reader. Such a key
management scheme would imply granting reading rights to some
organizations and denying such rights to others. Those
opposing the unencrypted e-passports would certainly be even
more suspicious of any workable key management scheme for
encrypted e-passports. We have yet to see any suggestion of a
key management scheme that might appear acceptable to a
security expert who claimed that unencrypted e-passports are
putting US citizens at risk. This explanation seems reflected
in the Department of State statement that "in order to be
globally interoperable, encryption would require a higher
level of technology and more complicated technical
coordination with other nations" ([1] page 8306, center
column, 2nd paragraph), although we would have liked the
Department of State to speak for itself (e.g. "Such technical
coordination includes notably the cryptographic key
management for electronic chip decryption keys.").
Doubtful representation of e-passport technology,
reader requirements and skimming threat
According to the document [2], "Everyone who has the
appropriate equipment is able to read the chip contents of
the MRTD, but only the parties that are provided with the
appropriate public key certificates and certificate
revocation lists will be able to verify the authenticity and
integrity of the chip contents." (Document section 2.4.4) So
we find misleading the [1] public notice statement that
eavesdropping requires "a reader furnished with the proper
public key" ([1] page 8306, center column, 4th paragraph). In
fact, reading of
electronic chips by international transportation operators
(e.g. airlines) is encouraged by the ICAO.
The e-passport proponents should not minimize the
significance of unauthorized e-passport reading threats.
Anti-skimming features are important to US travelers wishing
to protect their anonymity and privacy. The Department of
State should provide reliable information about their
effectiveness and their prudent use, since the momentary
disabling of anti-skimming mechanisms (e.g. the removal of a
metallic shield surrounding the electronic chip antenna)
materializes the e-passport bearer's authorization to read the
e-passport.
Doubtful representation of e-passport technology,
global skimming countermeasures
We are puzzled

Re: New directions for hash function designs (was: More problems with hash functions)

2004-08-25 Thread Thierry Moreau

Hal Finney wrote:
Another of the Crypto talks that was relevant to hash function security
was by Antoine Joux, discoverer of the SHA-0 collision that required
2^51 work.  Joux showed how most modern hash functions depart from the
ideal of a random function.
The problem is with the iterative nature of most hash functions, which
are structured like this (view with a fixed-width font):
IV --> COMP --> COMP --> COMP --> Output
        ^        ^        ^
        |        |        |
     Block 1  Block 2  Block 3
The idea is that there is a compression function COMP, which takes
two inputs: a state vector, which is the size of the eventual hash
output; and an input block.  The hash input is padded and divided
into fixed-size blocks, and they are run through the hash function
via the pattern above.
This pattern applies to pretty much all the hashes in common use,
including MDx, RIPEMDx, and SHA-x.
 

Just for the record, the Frogbit algorithm is a compression function with a 1-bit 
block size and variable-size state information.
(http://www.connotech.com/frogbit.htm)
The Helix cipher shares with the Frogbit algorithm the use of dual key stream usages 
between which the plaintext stream is inserted in the computations. However, the Helix 
authors do not recommend their construct for a hash function.
(http://www.schneier.com/paper-helix.pdf)
Perhaps ideas combined from these two approaches can help define other constructs for hash 
functions. Obviously, if the state information is to end up in a standard size at the end of the 
plaintext processing, the additional state information has to be folded, which means 
additional processing costs, or discarded.
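
(For concreteness, a minimal sketch of the iterated structure in Hal's 
diagram; comp() is a toy placeholder with no cryptographic strength, 
standing in for a real compression function:)

/* iterated_hash.c -- the iterated (Merkle-Damgard) chaining pattern. */
#include <stdint.h>
#include <string.h>

#define STATE_LEN 32
#define BLOCK_LEN 64

static void comp(uint8_t state[STATE_LEN], const uint8_t block[BLOCK_LEN])
{
    for (size_t i = 0; i < BLOCK_LEN; i++)
        state[i % STATE_LEN] ^= (uint8_t)(block[i] + i);  /* toy mixing only */
}

/* msg is assumed already padded to a whole number of blocks */
void iterated_hash(const uint8_t *msg, size_t nblocks, uint8_t out[STATE_LEN])
{
    uint8_t state[STATE_LEN] = {0};          /* the IV */
    for (size_t i = 0; i < nblocks; i++)
        comp(state, msg + i * BLOCK_LEN);    /* one COMP per block */
    memcpy(out, state, STATE_LEN);
}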

--
- Thierry Moreau
CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
Canada   H2M 2A1
Tel.: (514)385-5691
Fax:  (514)385-5900
web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Definitions of Security?

2004-04-14 Thread Thierry Moreau


[EMAIL PROTECTED] wrote:

Hi,

I'm looking for interesting and unusual definitions of the
term "security" (or "secure").
I'm fully aware that it is difficult or impossible to give
a precise, compact, and universal definition, and some
book authors explicitly say so. However, there are definitions
(or attempts to give them), and I'd be interested to compare
them. If you know of any definition that might be interesting
for any reason, please send me a link or citation. thanx
"a well-informed sense of assurance that information risks and controls 
are in balance"

From:
James M. Anderson, "Why We Need a New Definition of Information Security", 
Computers & Security, Vol. 22, No. 4, May 2003, pages 308-313.

--

- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
H2M 2A1
Tel.: (514)385-5691
Fax:  (514)385-5900
web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Inescapable public key property of secret key transport?

2003-12-09 Thread Thierry Moreau
Subject: Inescapable public key property of secret key transport?

In the massive effort to secure the open networks, personal
computers and software trustworthiness, I wonder how the
following two security properties for a secret key transport
scheme have been addressed.

For the purpose of this post, a secret key transport scheme is
defined as the server processing of a key establishment packet
(cryptogram) that brings to the *server* the knowledge of a
secret key value that is assumed to be shared with a legitimate
*entity* (that is, we focus on the server perspective).

The two security properties are

(1) Inescapable public key property

 An outcome of the secret key transport is the assurance
 gained by the server that the secret key can only be
 (initially) shared with an entity that used a given public
 key (the server public key) when applying the exact public
 key primitive in the secret key transport specification.
 Otherwise, the secret key can only be (initially) shared
 with an entity that knows the server private key.

(2) Inescapable secret key processing rules

 An outcome of the secret key transport is the assurance
 gained by the server that the secret key can only be
 (initially) shared with an entity that followed the secret
 key preprocessing rules in the secret key transport
 specification.

 Note: The qualification (initially) refers to the fact
   that the server can't assume that the remote
   entity will indeed preserve the secret key
   confidentiality over time.

Note that the naive assumption that the remote entity uses
trusted software is a convenient way to circumvent the security
properties.

These two properties are present in the PEKE cryptosystem
(Probabilistic Encryption Key Exchange). The references are
 Moreau, Thierry, "Probabilistic Encryption Key Exchange",
 Electronics Letters, Vol. 31, No. 25, 7th December 1995,
 pp. 2166-2168, or
 http://www.connotech.com/PEKEMAP.HTM.

In order to spare the reader the learning time associated with
the PEKE cryptosystem, here is a possible equivalent
implementation with the RSA primitive (caveat: this scheme is a
draft; I wish to learn whether and how equivalent precautions are
taken by fielded secret key transport implementations):

(A)  a legitimate entity selects a random secret S;

(B)  it sends the key establishment packet (cryptogram)
 C=Encr(S||Hash(S||PubK)) where

  Encr is the RSA encryption function with the public key
  PubK, and

  Hash is a secure hash function;

(C)  the key transported is K=Hash(S||PubK).

Accordingly, the server performs the following operations upon
receiving some C from an unknown entity:

 D=Decr(C);
 if D does not look like S'||Hash(S'||PubK), reject the key
 transport operation;
 else use K'=Hash(S'||PubK) as the transported key.

Note: This prevents the following attack where the key
      transport is simply based on Encr(K):

 The hacker modifies the legitimate entity's copy of PubK to
 his own, PubK' (with encryption function Encr' and decryption
 function Decr').

 The legitimate entity inadvertently sends Encr'(K) instead
 of Encr(K). The hacker intercepts Encr'(K), recovers
 K=Decr'(Encr'(K)), and sends Encr(K) to the server on behalf
 of the legitimate entity.

The inescapable secret key processing rules security property
intuitively refers to the fact that the step K=Hash(S||PubK)
prevents the remote entity from selecting a hidden pattern or
property in the transported key.
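
(A minimal sketch of the server-side acceptance test in the
RSA-based draft above. Names are illustrative; toy_hash() is a
placeholder for the secure hash function, and the RSA
decryption of C into D is elided:)

/* transport_check.c -- parse D as S'||T' and accept only if
   T' == Hash(S'||PubK); on success K = Hash(S'||PubK). */
#include <string.h>
#include <stddef.h>

#define HASH_LEN 32

static void toy_hash(const unsigned char *m, size_t len,
                     unsigned char out[HASH_LEN])
{
    memset(out, 0, HASH_LEN);
    for (size_t i = 0; i < len; i++)
        out[i % HASH_LEN] ^= m[i];  /* placeholder, NOT secure */
}

int accept_key_transport(const unsigned char *D, size_t dlen,
                         const unsigned char *pubk, size_t pklen,
                         unsigned char key_out[HASH_LEN])
{
    unsigned char buf[4096], t[HASH_LEN];
    if (dlen <= HASH_LEN)
        return 0;
    size_t slen = dlen - HASH_LEN;     /* D = S' || T' */
    if (slen + pklen > sizeof buf)
        return 0;
    memcpy(buf, D, slen);              /* S' */
    memcpy(buf + slen, pubk, pklen);   /* S' || PubK */
    toy_hash(buf, slen + pklen, t);
    if (memcmp(t, D + slen, HASH_LEN) != 0)
        return 0;                      /* reject the key transport */
    memcpy(key_out, t, HASH_LEN);      /* K = Hash(S'||PubK) */
    return 1;
}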

So, the question is: how are the two properties (inescapable
public key property and inescapable secret key processing
rules) addressed in existing key establishment protocols?

Thanks in advance!

--

- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
H2M 2A1

Tel.: (514)385-5691
Fax:  (514)385-5900

e-mail: [EMAIL PROTECTED]


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]