Cryptography-Digest Digest #128, Volume #11      Tue, 15 Feb 00 14:13:02 EST

Contents:
  Re: Basic Crypto Question 3 ([EMAIL PROTECTED])
  Re: Does the NSA have ALL Possible PGP keys? (wtshaw)
  Re: New standart for encryption software. (Albert P. Belle Isle)
  Re: UK publishes 'impossible' decryption law (Bill Unruh)
  Re: Basic Crypto Question 3 (John Savard)
  Re: Basic Crypto Question 3 (John Savard)
  Re: Guaranteed Public Key Exchanges (Darren New)
  Re: ECDSA added to DSS (Quisquater)
  Re: Basic Crypto Question 3 (John Savard)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Basic Crypto Question 3
Date: Tue, 15 Feb 2000 18:00:47 GMT

Could you comment on this?

What I am getting here is that mixing ciphers of different block sizes and
key lengths in a cascade is OK.  As Bruce mentioned in his thread, use
Twofish on a 128-bit block, then in parallel two 64-bit ciphers.

No one has said whether it is stronger or weaker (or offered no comment) to
mix ciphers with different parameters, versus keeping all the ciphers in
the cascade homogenised (same block size, key length, rounds, etc.).

In your experience, is cascading common in existing crypto systems?
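The arrangement described above (one 128-bit-block cipher, then two 64-bit-block ciphers in parallel) can be sketched with stand-in ciphers. Everything here is hypothetical; the "ciphers" are simple keyed XOR keystreams, not Twofish or any real cipher, and only illustrate the plumbing of a mixed-parameter cascade:

```python
# Hypothetical stand-in ciphers: keyed XOR keystreams derived from SHA-256.
# These are NOT real block ciphers; they only illustrate the plumbing of
# a cascade whose stages use different block sizes.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Hash-counter keystream, for illustration only.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def cipher_a(block: bytes, key: bytes) -> bytes:   # 128-bit-block stand-in
    return bytes(x ^ y for x, y in zip(block, keystream(key + b"A", 16)))

def cipher_b(block: bytes, key: bytes) -> bytes:   # 64-bit-block stand-in
    return bytes(x ^ y for x, y in zip(block, keystream(key + b"B", 8)))

def cascade(block: bytes, k1: bytes, k2: bytes, k3: bytes) -> bytes:
    mid = cipher_a(block, k1)                        # stage 1: one 128-bit cipher
    left, right = mid[:8], mid[8:]
    return cipher_b(left, k2) + cipher_b(right, k3)  # stage 2: two 64-bit ciphers

def decascade(ct: bytes, k1: bytes, k2: bytes, k3: bytes) -> bytes:
    # The XOR stand-ins are involutions, so applying them again inverts each stage.
    mid = cipher_b(ct[:8], k2) + cipher_b(ct[8:], k3)
    return cipher_a(mid, k1)

ct = cascade(b"sixteen byte msg", b"k1", b"k2", b"k3")
assert decascade(ct, b"k1", b"k2", b"k3") == b"sixteen byte msg"
```

Note the three independent keys; whether mixing parameters this way helps, hurts, or makes no difference is exactly the open question in the post.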

Thanks

Dave


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: [EMAIL PROTECTED] (wtshaw)
Crossposted-To: comp.security.pgp,misc.survivalism
Subject: Re: Does the NSA have ALL Possible PGP keys?
Date: Tue, 15 Feb 2000 11:23:11 -0600

In article <[EMAIL PROTECTED]>, "tiwolf"
<[EMAIL PROTECTED]> wrote:

> I don't care about prime numbers, what I do know is the extent to which
> governments want to be able to read the hearts of men that they control.
> Since science cant give them that ability yet they want to be able to read
> all email and computer information, as well as listen into all telephone
> calls. Whether they actually need to or not. They want the power because the
> power exist even if only in their own mind.
> 
Playing to the prejudice men held when it was gospel to preach that the
earth was the center of the universe, or rather the central level that
stretched out to the unknown in all flat directions, the many are
encouraged to place their crypto trust in deficient concepts, as
graciously encouraged by a government which figures that only limited
means can be derived by half-blind civilians from the information in AES.

Or, 

"Build yourself an altar and put on it the letters PGP, and we will work
to control it or destroy it, and thereby leave you hopeless."

Horsefeathers. Make sure that you know the agenda and work against those
that would subvert crypto. I am not saying anything good or bad about PGP
in this post, only that it is a football to be used in a bigger game, and
that it is not the only ball in play.
-- 
Let's all sit back and watch the inhabitants of the political zoo 
perform in three rings.  It's more exciting than soap operas.  Then 
vote out anyone who has been in long enough to abuse things.  

------------------------------

From: Albert P. Belle Isle <[EMAIL PROTECTED]>
Subject: Re: New standart for encryption software.
Date: Tue, 15 Feb 2000 13:30:58 -0500
Reply-To: [EMAIL PROTECTED]

On Mon, 14 Feb 2000 17:46:25 -0500, "Trevor Jackson, III"
<[EMAIL PROTECTED]> wrote:

>"Albert P. Belle Isle" wrote:
>
>> On Sun, 13 Feb 2000 23:34:55 -0500, "Trevor Jackson, III"
>> <[EMAIL PROTECTED]> wrote:
>>
>> >>
>> >> Although our source code is available for review under NDA, any
>> >> INFOSEC professional knows that spiking cryptosystem implementations
>> >> at the object code level is a much greater threat than "backdoors"
>> >> spelled-out in well-documented source code. Hence, the emphasis on
>> >> testing performance of the cryptosystem, rather than trusting pretty
>> >> source code listings.
>> >>
>> >> (Of course, that doesn't seem to inhibit the calls by sci.crypt
>> >> posters to "show me the source code." Any professional spiker would be
>> >> all too happy to get the resulting "seal of approval" <g>.)
>> >
>> >You have mixed (possibly confused) two distinct problems that haunt
>> >software offered by untrusted implementors.  First, and unquestionably
>> >foremost, is the threat of incompetence.  An implementor may design a weak
>> >cipher, or poorly implement a strong cipher, or perfectly implement a
>> >strong cipher but overlook a security weakness in some supporting aspect
>> >of the software.  Source code inspection -- peer review -- addresses these
>> >kinds of threats.
>> >
>>
>> Perhaps you do, but I know of no "black bag jobs" that involved
>> replacing source code. I also don't have access to all of MSFT's
>> source code but, again, perhaps you do.
>>
>> If you re-read the first line of the above quote from my original
>> posting, or the past four days worth of subsequent postings in which I
>> clearly restated our belief in the _necessity_ of source code review,
>> I fail to understand how my insistence on their not being _sufficient_
>> for INFOSEC against professional attackers could be construed as the
>> straw man you're attacking - i.e., discouragement of source code
>> reviews by qualified reviewers.
>
>While you have since made it clear that you consider source code review
>necessary to security, your post as quoted above does nothing but ridicule the
>concept of source code review.  I believe this explains why I, among a number
>of others, interpreted your position to be that source code review was
>unnecessary.

Mr. Jackson:

My scorn was (and is) specifically aimed at mindless braying for mere
access to source code as the be-all and end-all talisman for INFOSEC. 

Readers of this, and other INFOSEC-related newsgroups, many of whom
cannot understand such source code, are being told that the mere
existence of some posted source code on a web-site is a guarantee
against misplaced trust in "snake-oil" or mal-ware. 

Having had to pay more money over the years than I liked in order to
obtain real, professional code reviews, I find this to be drivel.

In particular, the positing of a basic QA procedure (that any
responsible software maker should follow as a matter of course) as if
the possibility that some third parties might take the time to do it
were a guarantee of security against _spiking_ is nonsense at best,
and a dangerous placebo for anyone facing professional attackers.

My discussion was about countering spiking - not about QA. 

Even if it were only about QA, even professional source code peer
review is no more an all-encompassing design assurance program for
software than netlist or VHDL peer review is for integrated circuits.
No manager reporting to me who thought so ever did so for very long.

When my original choice of wording was (apparently) interpreted as
ridiculing the _necessity_ of source code review (in spite of my
intent) I responded with follow-up postings over a period long enough
for anyone following the thread to "get the idea" before your posting.

Once more: I _still_ find qualified review of source code _necessary_
but _not_sufficient_ for INFOSEC against spiking attacks. I have
offered some trivial illustrations of counterexamples, which pale in
the light of tactics employed by really competent adversaries.

>
>>
>>
>> I certainly don't discourage the use of seat belts, but as I always
>> told my children, they won't protect you against all hazards. Can that
>> be somehow construed as my offering an inducement to ignore them?
>
>Certainly not.  But in the context of the messages to which you were replying
>it certainly connoted so.
>

Perhaps I'm just too untrusting, but it appears to me that one can do
so only by ignoring the several days worth of follow-up postings to
the thread before yours as an excuse to do battle with a strawman.

However, if I'm being overly harsh in that assessment, I don't wish to
turn such a misunderstanding into any strawman attacks of my own.

Can I hope that the target of my scorn is now apparent? Do you still
disagree with it?

>>
>> >The second kind of threat is that of a malicious vendor who purposefully
>> >implements a weakness or a back door.  This is a dramatically smaller
>> >threat.  And, BTW, one that source code review _does_ reduce, because it
>> >is quite hard to hide such a back door from an inspector able to recreate
>> >the binary.  Given the same tools the binaries should be close to
>> >indistinguishable.  And a debug script that works on one ought to produce
>> >the same log  when applied to the other.  So even patched binaries are not
>> >hard to uncover.
>> >
>>
>> With your carefully stated qualifying "givens," I'd agree that a
>> single, crudely-spiked executable file _could_ be caught out (if you
>> add the proviso that it be inspected on a trusted system with all
>> compilers, linkers, debuggers, report generators and other tools
>> included in the evaluation of what constitutes a TCB).
>
>That's reasonable.  You inspect on your system and I'll inspect on mine.  ;-)
>
>>
>>
>> However, large file sets, installed to multiple directories (running
>> as both applications and services), with the possibility of chained
>> aliasing between them, can present a more challenging "INsecurity
>> through obscurity," to coin a phrase.
>
>I suspect this problem is exaggerated.  Yes, testing a big app is hard.  But
>not because the needle is so carefully camouflaged, but because you've got
>acres of hay in which to hide it. (Soon Micros~1 will use acres to measure the
>amount of CD surface area they need to ship their software service packs).
>
>However, given that I build such a system piecemeal, and that QA testing tools
>are now able to reliably measure source code coverage, I do not find the
>problem to be unresolvable.
>

The "acres of hay" are precisely the point for professional attackers.
They present a large number of files that are a priori known to be
present on the target platform, and can be co-opted in the spiking.

I'm glad you "don't find the problem to be unresolvable." However,
since unsubstantiated assurances carry little weight in a supposed
discourse on mistrust of other people's code, perhaps you'd share some
details with me on exactly how you can be so sure of your ability to
counter (all? most? some?) variations on the themes of such attacks.

If you are building systems "piecemeal," is your method transferable
to others, and/or applicable to organizations with large numbers of
laptops to worry about? Could it perhaps be practically applied to the
(high-end) individually-owned laptop market?

If so, I'd be sincerely interested, with the possibility of backing up
that interest with dollars, if warranted. If your methodology and/or
tools are proprietary to you, I'd be happy to discuss them off-line.

Or, are you mainly concerned with servers in physically secured sites?
Having managed operations in SCIFs, I do realize that nothing can take
the place of strong, enforceable physical security. However, my
concern is for people with a very different problem. 

My particular concern is for the very large number of people who must
worry about INFOSEC on Windows PCs and laptops running commercial
applications with which they must maintain interoperability (and whose
file sets must hence be included as cooperative spiking code targets).

Many readers/lurkers on newsgroups such as this are seeking advice on
protecting data which they are forced to store on such platforms.
 
>>
>>
>> Spiking supposedly standard OS function libraries (MFC*.DLLs, for
>> instance), whose accompanying debug (.MAP) files are always "updated"
>> along with them, could give such statements about the ease of spiking
>> detection a rather embarrassing quality.
>
>True, but then you've changed the whole thesis under discussion.  Rather than
>spiking an application you'd be using the application install as a cover for
>inserting a trojan horse.  While certainly a threat, it bears no resemblance to
>you original claims re patching executables.
>

Surely you know there are unsportsmanlike people who find it quite
reasonable to spike any or all of the files in their distribution(s),
or use them to indirectly spike other files known to be on the target
platform, rather than restricting themselves to the set's main
EXE-file?

If they'd overwrite any of the DLLs in your debug environment would
that be "unfair," since those shouldn't "count" as executables?

How about Ken Thompson's old self-spiking-compiler paper, "Reflections on
Trusting Trust"?

From what I recall of it, I should think that counterexample to the
supposed sufficiency of application source code review for spiking
countermeasures would be good enough for theoretical arguments.

It's been more years than I care to admit since I had "a thesis under
discussion," either mine or my students'. For me, this is no exercise.

I'm concerned with professional attacks on the _weakest_ point of the
defenses for high-value targets on vulnerable platforms; specifically,
with those attacks that _can't_ be countered by source code reviews.

The examples I mentioned are trivial ones, not sophisticated ones, and
I offered them as (so far) unrefuted evidence for the insufficiency of
(regardless of the necessity for) source code reviews as _spiking_
_countermeasures_.

A laptop-full of geologic exploration data or of major contract bid
strategy and tactics, for example(s), can be worth a lot of different
attack measures, only one of which must succeed.

>> Patching and using KERNEL32.DLL's IsDebuggerPresent() function affords
>> some interesting possibilities on NT platforms, and there's always
>> that old favorite ReadProcessMemory().
>
>You can wax lyrical about what will happen when you get a hostile application
>installed on a system.  But you have not addressed the issue of getting it
>there, past a thorough source code review.
>

What? Don't tease an old man.

Surely you don't believe that source code reviews block "black bag
jobs" (or other Trojan insertion attacks) for what's euphemistically
called "non-cooperative collection" on Windows PCs and laptops?

We must operate in very different worlds. You're very fortunate if
yours is really sheltered from the kinds of "unfair" attacks that
worry me. Countering such attacks requires more than even real source
code review, let alone mere source code posting on a web-site.


Albert P. BELLE ISLE
Cerberus Systems, Inc.
================================================
ENCRYPTION SOFTWARE with
  Forensic Software Countermeasures
    http://www.CerberusSystems.com
================================================

------------------------------

From: [EMAIL PROTECTED] (Bill Unruh)
Crossposted-To: talk.politics.crypto
Subject: Re: UK publishes 'impossible' decryption law
Date: 15 Feb 2000 18:39:05 GMT

In <[EMAIL PROTECTED]> [EMAIL PROTECTED] (Dave 
Hazelwood) writes:

>Perhaps it is time for people who really understand the digital 
>era to take the reins of power?

>Send those funny old men in their wigs out to pasture once and for
>all??

This is not those funny men in wigs. This is the young turks who claim
to understand the digital era, and are elected by you to make the laws.
The funny men in wigs actually are a lot more conservative and cognisant
of what laws like this really mean to the protection of privacy, etc.
Most of the issues have been hashed out over hundreds of years. Those men
in wigs are the ones who do not think the world was invented in 1985, nor
that new laws need to be invented to control everything.

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Basic Crypto Question 3
Date: Tue, 15 Feb 2000 11:44:06 GMT

[EMAIL PROTECTED] (David Wagner) wrote, in part:

>The example is under
>a probable-plaintext attack model (you know the ciphertext, you
>know the distribution on the plaintext, nothing else).

>The example shows that cascades are trickier than you might expect,
>and that there is a definite need for precise, formal, rigorous
>reasoning here, lest we be swayed by misleading intuition.

The reference, I presume, is "Cascade Ciphers: The Importance of Being
First". I'll look it up: but I think I already see what I've omitted.

Given that the plaintexts are not uniform in probability, the first
cipher could produce ciphertexts which are nonuniform in a way that is
more useful in an attack on the second cipher.

For one cipher to be designed based on the identity of the next cipher
in the cascade is an interaction of sorts, and the need to compress
the plaintext (so as to remove easily exploitable characteristics,
such as the first bit in every byte being constant) is highlighted.
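The constant-first-bit redundancy mentioned above, and the effect of compressing before the first cipher in the cascade, are easy to demonstrate. A small sketch, with zlib standing in for whatever compressor is actually used:

```python
# The redundancy in question: printable ASCII never sets the top bit of a
# byte. Compressing before the first cipher (zlib here, purely as a
# stand-in) squeezes out such easily exploitable structure.
import zlib

msg = b"attack at dawn " * 4
assert all(b < 0x80 for b in msg)   # the first bit of every byte is constant

packed = zlib.compress(msg)
assert len(packed) < len(msg)       # repetitive plaintext shrinks considerably
```

Here compression happens once, before the cascade, and independently of the ciphers, which matches the condition discussed in the post.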

While it is important to find out all the possible pitfalls of cipher
cascades, I don't think that people ought to be scared away from using
them: these difficulties are largely theoretical, as long as one isn't
allowing an attacker to pick any part of the cascade one is using.
(Were that the case, all those dangers would, of course, be very
real.)

John Savard (jsavard<at>ecn<dot>ab<dot>ca)
http://www.ecn.ab.ca/~jsavard/crypto.htm

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Basic Crypto Question 3
Date: Tue, 15 Feb 2000 11:49:26 GMT

[EMAIL PROTECTED] (wtshaw) wrote, in part:

>but by limiting ciphertext absolutely, you
>excommunicate ciphers which might be stronger than your other choices.

The point was that this was a general condition that prevents a
cipher, if it is maliciously designed in a way I am not aware of, from
having the opportunity to conceal information that subverts the
security of the system. As long as the lengthened ciphertext can be
effectively decomposed into two parts, one that is equal in size to
the original message, and another part which is varied under the
control of a trusted source of random numbers, systems of
probabilistic encryption can be used.
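The decomposition described above can be sketched as a toy probabilistic scheme. The construction (a hash-derived pad keyed by a random value) is a hypothetical illustration of the two-part shape, not anything from the post and not a vetted scheme:

```python
# Hypothetical decomposition: ciphertext = (random part r, message-sized
# part body). The body is the message XORed with a pad derived from the
# key and r, so all ciphertext expansion beyond the message's size is
# varied under the control of a random-number source.
import hashlib, secrets

def encrypt(key: bytes, msg: bytes) -> tuple:
    assert len(msg) <= 32            # single hash-output pad, for brevity
    r = secrets.token_bytes(16)      # part controlled by the trusted RNG
    pad = hashlib.sha256(key + r).digest()[:len(msg)]
    body = bytes(m ^ p for m, p in zip(msg, pad))   # part equal in size to msg
    return r, body

def decrypt(key: bytes, r: bytes, body: bytes) -> bytes:
    pad = hashlib.sha256(key + r).digest()[:len(body)]
    return bytes(c ^ p for c, p in zip(body, pad))

r, body = encrypt(b"key", b"hello")
assert decrypt(b"key", r, body) == b"hello"
```

The point is only the shape: one part the size of the original message, one part driven purely by randomness, so there is no slack in which a malicious cipher could hide subliminal information.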

It is just a specific condition for a specific case, not an attempt to
say that ciphers which expand the plaintext are "bad". It's just that
it's easier to see, and simpler to prove, that certain funny things
don't happen if there's no room to do them in.

John Savard (jsavard<at>ecn<dot>ab<dot>ca)
http://www.ecn.ab.ca/~jsavard/crypto.htm

------------------------------

From: Darren New <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: Guaranteed Public Key Exchanges
Date: Tue, 15 Feb 2000 18:50:04 GMT

Ralph Hilton wrote:
> On Tue, 15 Feb 2000 02:15:33 GMT, Darren New <[EMAIL PROTECTED]> wrote:
> 
> >Or, to be even more succinct:
> >
> >If there are multiple people who can read mail to and send mail from the
> >same address (both your intended recipient and the MITM), and you have no
> >way to distinguish them, then no, you have no way of communicating with
> >only one of them.  Simple, yes? :-)
> 
> No. By using the DH key exchange from a publicly announced message one can
> get a return of several messages, each with part of a key. One creates a
> full key for each of the respondents. 

How do you know? If John and Mary can both read the email, and you can't
tell them apart, then it might be John answering all your mail, or it might
be Mary answering all your mail, or it might be one answering some and one
answering others. How do you know you have a full key for each of the
respondents?

> One uses each of the keys thus
> obtained as a one-time pad for a message detailing future security.

It wouldn't be a one-time pad, either.

> Thus one has separated out one's respondents and knows, given adequate key
> and encryption methodologies, that one is only communicating at any time
> to only one of the identities.

But even if...

> Having established the secure communication line to each one can establish
> by detailed interrogation who the actual original intended recipient is if
> one has sufficient data and mutual contacts.

Well, the original problem was you do *not* have any sufficient data or
mutual contacts. If you did, you'd just include the public key fingerprint
in the same presumedly-secure exchange that got you the email address. See?
Look...

"... and you have no way to distinguish them ..."

Hence, it doesn't matter how secure your communication is. You still can't
tell the good guys from the bad guys.

> Can you give a realistic scenario where one would not be able to
> differentiate?

Yeah. The original question.  Your boss comes up to you and says "Send a
copy of our trade secret stuff to Joe Hinkle. He's at [EMAIL PROTECTED]. Make
sure nobody else can see it, including [EMAIL PROTECTED]." 

I contend that if that's all the info you have, you can't do it.
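The point can be made concrete with a toy unauthenticated Diffie-Hellman exchange: the math completes with whoever answers the mail, and nothing in the exchange says who that was. Parameters and names here are illustrative only:

```python
# Toy unauthenticated Diffie-Hellman: the exchange succeeds with WHOEVER
# replies, so it cannot distinguish Joe from a MITM who can read the same
# mailbox. Tiny illustrative parameters; never use these for real.
import secrets

P = 2**61 - 1    # a Mersenne prime; fine for illustration only
G = 2

def dh_keypair():
    priv = secrets.randbelow(P - 3) + 2
    return priv, pow(G, priv, P)

alice_priv, alice_pub = dh_keypair()
mallory_priv, mallory_pub = dh_keypair()   # Mallory answers instead of Joe

# Alice receives a public value; nothing in it says whose it is.
alice_shared = pow(mallory_pub, alice_priv, P)
mallory_shared = pow(alice_pub, mallory_priv, P)
assert alice_shared == mallory_shared  # Alice's "secure" channel is with Mallory
```

The exchange is perfectly confidential, just confidential with the wrong party, which is exactly the "can't tell the good guys from the bad guys" problem above.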

-- 
Darren New / Senior MTS / Invisible Worlds Inc.
San Diego, CA, USA (PST).  Cryptokeys on demand.
There is no safety in disarming only the fearful.

------------------------------

From: Quisquater <[EMAIL PROTECTED]>
Subject: Re: ECDSA added to DSS
Date: Tue, 15 Feb 2000 20:12:09 +0100

DJohn37050 wrote:
> 
> ECDSA has been added to DSS.  See www.nist.gov.
> Don Johnson

Good lobbying! :-) (?)

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Basic Crypto Question 3
Date: Tue, 15 Feb 2000 11:59:33 GMT

[EMAIL PROTECTED] (David Wagner) wrote, in part:

>It gives an example of a pair of ciphers with no back channels,
>independent keys, no message expansion, etc., yet the cascade
>can be as weak as the weaker of the two.  The example is under
>a probable-plaintext attack model (you know the ciphertext, you
>know the distribution on the plaintext, nothing else).

As I noted, I suspect the example is based on the first cipher
reshaping redundancy in the original plaintext to put it in a form
more useful in attacking the second cipher.

In that case, as one of the conditions I noted was:

- any compression of plaintext takes place prior to the first cipher
in the cascade, and is independent of the ciphers

that condition does address, in embryonic form, the restriction that
is required to prevent such a possibility. But I need to expand on it
further:

- no cipher in the cascade is cognizant, or attempts to become
cognizant, of redundancy in its input, and especially no cipher, given
a likely form of redundant input, will generate output having some
sort of exploitable pattern.

Thus, not just compression, but all use or manipulation of redundancy,
must be excluded from the cascade, since redundancy in the plaintext
input essentially provides the opportunity for a _concealed_
"expansion" of the plaintext without adding any bits. I _was_ thinking
about that possibility, even if I didn't go quite far enough to fully
cover that base.

John Savard (jsavard<at>ecn<dot>ab<dot>ca)
http://www.ecn.ab.ca/~jsavard/crypto.htm

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
