Cryptography-Digest Digest #828, Volume #13       Wed, 7 Mar 01 12:13:01 EST

Contents:
  Re: One time authentication ("Henrick Hellström")
  Re: PKI and Non-repudiation practicalities (Anne & Lynn Wheeler)
  Re: NTRU - any opinions (Mehdi-Laurent Akkar)
  Re: => FBI easily cracks encryption ...? ("kroesjnov")
  Re: => FBI easily cracks encryption ...? ("kroesjnov")
  Re: Really big numbers in C (Rodrigo Nuno Bragança da Cunha)
  Re: => FBI easily cracks encryption ...? (Sundial Services)
  Buy a PDF edition of Applied Cryptography of Bruce SCHNEIER ("Latyr Jean-Luc FAYE")
  Encryption software (Curtis R. Williams)
  Re: PKI and Non-repudiation practicalities (Mark Currie)
  Re: Applied Cryptography - SCHNEIER (Samuel Paik)
  Re: NTRU - any opinions ("James Russell")
  Re: NTRU - any opinions (Mehdi-Laurent Akkar)
  Re: Buy a PDF edition of Applied Cryptography of Bruce SCHNEIER (Volker Hetzer)
  Re: PKI and Non-repudiation practicalities (Anne & Lynn Wheeler)
  OT: TV Licensing (Was: => FBI easily cracks encryption ...?) ("John Niven")
  Re: PKI and Non-repudiation practicalities (Anne & Lynn Wheeler)

----------------------------------------------------------------------------

From: "Henrick Hellstr�m" <[EMAIL PROTECTED]>
Subject: Re: One time authentication
Date: Wed, 7 Mar 2001 16:15:14 +0100

"Scott Fluhrer" <[EMAIL PROTECTED]> skrev i meddelandet
news:985drv$t11$[EMAIL PROTECTED]...
>
> Henrick Hellstr�m <[EMAIL PROTECTED]> wrote in message
> > PCFB-mode does that.
> Are you sure?  It would appear that a computationally unbounded adversary
> could bruteforce the block cipher key, and that would appear allow the
> possibility (given one authenticated message) of forging another.  And,
the
> computationally unbounded adversary model appears to be the one the OP is
> worrying about


Firstly:
If PCFB-m/n mode is used, then a computationally unbounded adversary with
access to all known plain text of a single message might derive a truly
random initial vector with a probability of at most 1/(n-m). In particular,
brute force known plain text attacks on PCFB mode do not work well in
retrospect - i.e. the attacker might find the internal state (key, vector)
corresponding to the known plain text and subsequent text, but _not_ any
prior internal vector state (given some assumptions about the plain text,
the block cipher, the m/n ratio, etc).

Furthermore, PCFB mode increases the work load and data requirements of a
brute force attack by a factor depending on the m/n ratio. This is because
n-m bits of the output from the underlying block cipher go straight into
the input of the block cipher at the next position, and are at most
implicitly revealed.

In this respect PCFB mode seems to be superior to e.g. PCBC mode. With PCBC
mode, if you know PT(i), CT(i), CT(i-1) and the key K, then you might derive
PT(i-1) = D_K(CT(i)) xor PT(i) xor CT(i-1). Known plain text explicitly
reveals the vector.
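
For concreteness, here is a toy illustration in C of that PCBC relation
(my sketch, not part of the original post; the "block cipher" is a
deliberately trivial 64-bit stand-in, not a real cipher):

#include <stdio.h>
#include <stdint.h>

/* Toy invertible "block cipher": NOT secure, purely for illustration. */
static uint64_t rotl(uint64_t x, int r) { return (x << r) | (x >> (64 - r)); }
static uint64_t rotr(uint64_t x, int r) { return (x >> r) | (x << (64 - r)); }
static uint64_t E(uint64_t k, uint64_t x) { return rotl(x ^ k, 13); }
static uint64_t D(uint64_t k, uint64_t y) { return rotr(y, 13) ^ k; }

int main(void)
{
    uint64_t K  = 0x0123456789abcdefULL;   /* block cipher key */
    uint64_t IV = 0xfeedfacecafebeefULL;   /* initial vector   */
    uint64_t pt[2] = { 0x1111111111111111ULL, 0x2222222222222222ULL };
    uint64_t ct[2];

    /* PCBC encryption: CT(i) = E_K(PT(i) xor PT(i-1) xor CT(i-1)). */
    ct[0] = E(K, pt[0] ^ IV);
    ct[1] = E(K, pt[1] ^ pt[0] ^ ct[0]);

    /* Knowing K, PT(i), CT(i) and CT(i-1) reveals the prior plain text:
       PT(i-1) = D_K(CT(i)) xor PT(i) xor CT(i-1).                      */
    uint64_t recovered = D(K, ct[1]) ^ pt[1] ^ ct[0];
    printf("PT(0) recovered: %s\n", recovered == pt[0] ? "yes" : "no");
    return 0;
}

The same algebra is what a key-guessing adversary exploits once the brute
force succeeds, which is the contrast with PCFB drawn above.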

Secondly:
If you are worrying about computationally unbounded adversaries, you could
use PCFB mode with an OTP as a key stream and change the key prior to each
block encryption. This would result in the desired authentication properties
_and_ resistance against brute force attacks. If this method is used and the
m/n ratio is sufficiently small, then the underlying "block cipher" does not
have to be that advanced - it would be sufficient that it was bijective and
that each of the m least significant bits of the output depended with an
equal weight on each bit of the input.

Thirdly:
If the underlying block diffusion method is (a) bijective and (b) each bit
of its output depends with an equal weight on each bit of its input, then
the probability that the error propagation will wear off at any given
position past the error plus one is equal to 2**(m-n). Consequently, the
probability that a signature authenticates the message drops with the
distance from the last signature, although by a nearly negligible factor.

--
Henrick Hellström  [EMAIL PROTECTED]
StreamSec HB  http://www.streamsec.com



------------------------------

Subject: Re: PKI and Non-repudiation practicalities
Reply-To: Anne & Lynn Wheeler <[EMAIL PROTECTED]>
From: Anne & Lynn Wheeler <[EMAIL PROTECTED]>
Date: Wed, 07 Mar 2001 15:19:17 GMT

Anne & Lynn Wheeler <[EMAIL PROTECTED]> writes:
> four person days per secret. For a million secrets using the rubber
> hose method, it then takes four million person days, or 10,959 person
> days. By comparison some of the shared-secret harvesting techniques

oops, that should be 10,959 person years with the rubber hose method
compared to possibly a couple person weeks to come up with approx. the
same potential fraud "yield".

sort of reminds me of the inverse of the story about why telephones
and automobiles were never going to take off in the market ... both
required the services of human operators. the solution in both cases
was to make each individual person their own operator (i.e. rather
than having to hire telephone operators and automobile drivers, each
person was responsible for their own). the person-hour projections for
the number of telephone operators and automobile drivers needed are at
least similar to the rubber-hose solution to the fraud yield/ROI
problem in a transition from a shared-secret infrastructure to a
secret infrastructure (with tokens and public keys).

-- 
Anne & Lynn Wheeler   | [EMAIL PROTECTED] -  http://www.garlic.com/~lynn/ 

------------------------------

From: Mehdi-Laurent Akkar <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: NTRU - any opinions
Date: Wed, 07 Mar 2001 15:32:21 GMT

> It is very new.
> Don Johnson

But it has been studied:

There is an interesting paper at Crypto 2000 by Joux & Jaulmes.

 MLA




------------------------------

From: "kroesjnov" <[EMAIL PROTECTED]>
Crossposted-To: alt.security.pgp,talk.politics.crypto
Subject: Re: => FBI easily cracks encryption ...?
Date: Wed, 7 Mar 2001 16:39:38 +0100

> Not at all.  Read the article carefully.  The hacker found source code
> for an operating system.  Big deal.  There is no danger in this at all
> that I can see.

You might want to know that if you have the source code, it is very easy to
write an exploit for the program/OS.
That is the danger here...

"Wisdom lies not in obtaining knowledge, but in using it in the right way"

kroesjnov
email: [EMAIL PROTECTED] (remove nov to reply)
UIN: 67346792
pgp fingerprint: 4251 4350 4242 7764 80DA  DB1C E2B2 850A DF15 4D85



------------------------------

From: "kroesjnov" <[EMAIL PROTECTED]>
Crossposted-To: alt.security.pgp,talk.politics.crypto
Subject: Re: => FBI easily cracks encryption ...?
Date: Wed, 7 Mar 2001 16:41:43 +0100

> You seem to see terrorists under every rock.  When was the last time
> they attacked your home?

They didn't.
But I don't like to wait for it to happen either... Then it will be too
late...

"Wisdom lies not in obtaining knowledge, but in using it in the right way"

kroesjnov
email: [EMAIL PROTECTED] (remove nov to reply)
UIN: 67346792
pgp fingerprint: 4251 4350 4242 7764 80DA  DB1C E2B2 850A DF15 4D85



------------------------------

Date: Wed, 07 Mar 2001 15:47:44 +0000
From: Rodrigo Nuno Bragança da Cunha <[EMAIL PROTECTED]>
Subject: Re: Really big numbers in C

Taylor Francis wrote:
> 
> Anyone know how to handle really big numbers in C?  I'm talking
> 1024-4096 bit numbers...my compiler only handles 64 bit (that I can
> tell...)
> 
> Thanks


http://www.gnu.org/software/gmp/gmp.html

It works with GCC. It might work with Visual C++ and others...
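
For what it's worth, here is a minimal sketch of what GMP usage looks
like for crypto-sized integers (my example, with arbitrary operand
values):

#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpz_t base, exp, mod, r;

    /* Operands of any size can be read from strings. */
    mpz_init_set_str(base, "123456789012345678901234567890", 10);
    mpz_init_set_str(exp, "65537", 10);
    mpz_init(mod);
    mpz_init(r);

    mpz_ui_pow_ui(mod, 2, 1024);    /* mod = 2^1024             */
    mpz_powm(r, base, exp, mod);    /* r = base^exp mod 2^1024  */

    gmp_printf("%Zd\n", r);         /* prints the full value    */

    mpz_clear(base); mpz_clear(exp); mpz_clear(mod); mpz_clear(r);
    return 0;
}

Compile with something like: gcc bignum.c -o bignum -lgmp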

------------------------------

Date: Wed, 07 Mar 2001 08:55:10 -0700
From: Sundial Services <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Crossposted-To: alt.security.pgp,talk.politics.crypto
Subject: Re: => FBI easily cracks encryption ...?

(You have to own a license to watch TV in Britain?  Fortunately I have a
simple solution for that, having not watched TV at all for years... by
choice.)

Most radio receivers work on a "heterodyne" principle, in which they
inject a locally generated signal, offset from the desired frequency,
into the signal coming in from the antenna.  The two waves then beat
together (much as two guitar strings do when you're trying to tune up
and haven't quite got it right yet...), and the receiver works off that
beat -- which is at a much lower frequency than the original carrier.

Receivers are rarely shielded, so these "het" frequencies can easily be
detected and they provide an immediate indication of what channel nearby
receivers are tuned to.  This is commonly used by radio
advertising-rating systems here in the States.

It could obviously be used to determine if you're listening and to what
channel ... but advertisers might be more interested in this data than
"the guv'mint."


>John Niven wrote:
> 
> > > what their citizens are watching on TV or listening to
> > > on radios. (Does England still do that?).
> > Can you post details about this?  I've always thought it was an urban
> > myth except under lab conditions.
> 
> Britain's TV Licensing Department used to claim that they had handheld
> detectors capable of not only detecting that you were watching TV, but also
> which channel.  I don't know anyone who's been caught by such a device,
> though I do know people who have been caught for other reasons.  The current
> advertising drive suggests that all they know, and need to know, is who
> doesn't own a TV license.
> 
> Sorry, hardly conclusive either way, but suggesting that earlier detection
> claims may have lacked foundation.
>

------------------------------

From: "Latyr Jean-Luc FAYE" <[EMAIL PROTECTED]>
Subject: Buy a PDF edition of Applied Cryptography of Bruce SCHNEIER
Date: Wed, 7 Mar 2001 15:54:25 -0000

Hi

It's the second post of this message, as I haven't seen the first one appear
on the NG.

I bought one printed copy of the book Applied Cryptography in a bookshop,
but I have to share it with four other people. So I think it would be
easier for us to have it in PDF and put it on our intranet.
Where can I buy the PDF version of the book?
Thanks in advance.
Latyr



--
Latyr Jean-Luc FAYE
http://faye.cjb.net



------------------------------

From: Curtis R. Williams <[EMAIL PROTECTED]>
Subject: Encryption software
Date: Wed, 07 Mar 2001 16:08:30 GMT

Has anyone in this group (or elsewhere on the net) evaluated commonly
available encryption software programs? I'm pretty good at spotting
the obvious phonies, but there are many programs that look reasonable.
Does anyone actually try to verify that algorithms are properly
implemented? 

------------------------------

Subject: Re: PKI and Non-repudiation practicalities
From: [EMAIL PROTECTED] (Mark Currie)
Date: 07 Mar 2001 16:14:54 GMT

The attack that you make on shared-key systems is not entirely fair, though.
Although it may be possible to crack the central repository of
shared-secrets/credit card numbers, PKI has a similar problem in that if you
compromise a CA, or worse, a root CA, you can create millions of new
certificates using existing identities that you can then masquerade as. The
way PKI solves this is to suggest that you place your root CA in a bunker
(possibly under a mountain!) and in fact have multiple instances scattered
around the world. This increases the cost of PKI. In an earlier thread you
mentioned the possible savings to be gained by having chip cards (shared
across institutions). This may outweigh the associated infrastructure costs,
but I don't think that PKI infrastructure costs are insignificant. Even if
you just focus on the CAs, hierarchical PKIs tend to create a central trust
point (the root CA) that millions of certs rely on. Typically a lot more
users rely on the central point than you would find in shared-secret
systems. This puts enormous pressure on the security of this entity. If the
root CA (plus copies) is attacked by an organised para-military group, the
whole trust chain collapses, because you can't be sure that the private key
wasn't compromised in the process. Preventing these types of attack is not
cheap.

Mark

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
>
>Benjamin Goldberg <[EMAIL PROTECTED]> writes:
>
>> Even if your authentication is via a token, which in turn is activated
>> via biometrics, there still is a secret.  It's the data in the token! 
>> If all tokens were identical (allowing for them needing different
>> biometrics to activate), they would be useless.  The token needs to
>> contain something to uniquely identify it electronically, and
>> authenticate that identity.  Identification is simple; give each token a
>> unique id.  Authentication, however, requires some sort of secret --
>> either a private key, or a shared secret.
>> 
>> The weakest link is still access to the secret.  An attacker merely
>> needs to get access to a token and open it up, and avoid the tamper
>> resistance, and he has the secret.  This is conceptually no different
>> from beating a password out of the user with a rubber hose.
>
>the difference is whether the secret is known by only one person or a
>lot of people (i.e. which also represents the semantic difference
>between "secret" and "shared-secret").
>
>in the "shared-secret" scenerios ... the "shared-secret" are
>registered someplace and are subject to harvesting ... aka effectively
>credit card numbers are treated as "shared-secrets" (witness all the
>stuff written about protecting master credit-card databases at
>merchant servers). Harvesting of master database files of
>shared-secrets is significantly simpler than defeating tamper-evident
>and/or beating somebody with rubber hose.
>
>eliminating shared-secrets was the point of the discussion ... and
>distinguishing shared-secret infrastructures vis-a-vis secret
>infrastructures, along with the difference in fraud ROI; aka a
>scenario where somebody can electronically steal 100,000 shared
>secrets in a couple of minutes ... vis-a-vis taking hours to steal one
>secret significantly changes the risk, exploit, and fraud
>characteristics of an infrastructure. It may be possible to deploy a
>"secret" infrastructure for approximately the same cost as a
>"shared-secret" infrastructure while the "secret" infrastructure
>reduces the fraud ROI by five to ten orders of magnitude (i.e. it takes
>a thousand times as much effort to obtain 1/100000 as much useable
>fraudulent material).
>
>> -- 
>> The difference between theory and practice is that in theory, theory and
>> practice are identical, but in practice, they are not.
>
>random ref
>http://www.garlic.com/~lynn/2000b.html#22
>
>the other part of the scenario ... is that financial and various other
>commercial infrastructures strongly push that, in shared-secret
>scenarios, the same shared secret can't be shared across multiple
>different organizations with multiple different objectives, i.e. an
>employer typically has strong regulations against "sharing" a
>corporate access password shared-secret with other organizations (i.e.
>using the same shared-secret password to access the corporate intranet
>as is used to access a personal ISP account and misc. random
>webservers around the world). 
>
>I would guess that a gov. agency would not be too pleased if a
>gov. agency employee specified their employee access (shared-secret)
>password ... as an access (shared-secret) password for some webserver
>registration site in some other country.
>
>However, it is possible to specify a public key in multiple places and
>employees of one organization couldn't use the harvesting of the
>public keys at that organization for penetration of other
>organizations.
>
>Furthermore, the rubber-hose approach is going to take quite a bit
>longer to obtain a million secrets and hardware tokens that correspond
>to the registered public keys (as compared to some of the
>shared-secret harvesting techniques). Let's say that the rubber-hose
>approach takes something like two days per secret ... planning, setup,
>capture, execution, etc. ... and involves a minimum of two people. That is
>four person days per secret. For a million secrets using the rubber
>hose method, it then takes four million person days, or 10,959 person
>days. By comparison some of the shared-secret harvesting techniques
>can be done in a couple person weeks for a million shared-secrets.
>
>-- 
>Anne & Lynn Wheeler   | [EMAIL PROTECTED] -  http://www.garlic.com/~lynn/ 
>


------------------------------

From: Samuel Paik <[EMAIL PROTECTED]>
Subject: Re: Applied Cryptography - SCHNEIER
Date: Wed, 07 Mar 2001 16:15:29 GMT

Latyr Jean-Luc FAYE wrote:
> I bought one printed copy of the book Applied Cryptography in a bookshop,
> but I have to share it with four other people. So I think it would be
> easier for us to have it in PDF and put it on our intranet.
> Where can I buy the PDF version of the book?

Dr. Dobbs has a CD-ROM, "DDJ Essential Books on Cryptography & Security"
which includes _Applied Cryptography_.  Start from here
http://www.ddj.com/store/
  -> "Dr. Dobb's CD-ROMs and books"
  -> "Essential Book Collections on CD-ROM"

------------------------------

From: [EMAIL PROTECTED] ("James Russell")
Subject: Re: NTRU - any opinions
Date: 7 Mar 2001 17:21:30 +0100

Thanks for the feedback and the link to that document.

So I guess basically it's a little too "young" to consider using at this 
point, and ECC would be a better bet for near-term use in a 
processor-constrained environment?

I appreciate the feedback.  We are not cryptographers and are just trying to 
sift through the propaganda.

------------------------------

From: Mehdi-Laurent Akkar <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: NTRU - any opinions
Date: Wed, 07 Mar 2001 16:27:29 GMT

> So I guess basically it's a little too "young" to consider using at this

The article I pointed out describes a chosen-ciphertext cryptanalysis of NTRU.

MLA


------------------------------

From: Volker Hetzer <[EMAIL PROTECTED]>
Subject: Re: Buy a PDF edition of Applied Cryptography of Bruce SCHNEIER
Date: Wed, 07 Mar 2001 17:38:16 +0100

Latyr Jean-Luc FAYE wrote:
> 
> Hi
> 
> It's the second post of this message, as I haven't seen the first one
> appear on the NG.
> 
> I bought one printed copy of the book Applied Cryptography in a bookshop,
> but I have to share it with four other people. So I think it would be
> easier for us to have it in PDF and put it on our intranet.
> Where can I buy the PDF version of the book?
If Bruce sells it that way and you find out how much an enterprise-wide
licence of the PDF file costs, could you please post the price here?

Greetings!
Volker
--
They laughed at Galileo.  They laughed at Copernicus.  They laughed at
Columbus. But remember, they also laughed at Bozo the Clown.

------------------------------

Subject: Re: PKI and Non-repudiation practicalities
Reply-To: Anne & Lynn Wheeler <[EMAIL PROTECTED]>
From: Anne & Lynn Wheeler <[EMAIL PROTECTED]>
Date: Wed, 07 Mar 2001 16:58:23 GMT

[EMAIL PROTECTED] (Mark Currie) writes:

> The attack that you make on shared-key systems is not entirely fair, though.
> Although it may be possible to crack the central repository of
> shared-secrets/credit card numbers, PKI has a similar problem in that if you
> compromise a CA, or worse, a root CA, you can create millions of new
> certificates using existing identities that you can then masquerade as. The
> way PKI solves this is to suggest that you place your root CA in a bunker
> (possibly under a mountain!) and in fact have multiple instances scattered
> around the world. This increases the cost of PKI. In an earlier thread you
> mentioned the possible savings to be gained by having chip cards (shared
> across institutions). This may outweigh the associated infrastructure costs,
> but I don't think that PKI infrastructure costs are insignificant. Even if
> you just focus on the CAs, hierarchical PKIs tend to create a central trust
> point (the root CA) that millions of certs rely on. Typically a lot more
> users rely on the central point than you would find in shared-secret
> systems. This puts enormous pressure on the security of this entity. If the
> root CA (plus copies) is attacked by an organised para-military group, the
> whole trust chain collapses, because you can't be sure that the private key
> wasn't compromised in the process. Preventing these types of attack is not
> cheap.
> 
> Mark

I'm not talking about PKI, CAs or the certification authority digital
signature model (the CADS model) ... I'm talking about the AADS (account
authority digital signature) model. It eliminates the systemic risks
inherent in the CADS model.

random refs to the AADS model can be found at:
http://www.garlic.com/~lynn

I take an infrastructure that currently registers shared-secrets and
instead register public keys. No business process costs ... just some
technology costs. 
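
A conceptual sketch in C of the difference (mine, not Wheeler's; the
verify_sig() routine is a hypothetical stub standing in for any real
public-key signature check):

#include <stdbool.h>
#include <stddef.h>

struct account {
    char id[32];
    unsigned char pubkey[64];   /* registered public key -- NOT a secret */
};

/* Hypothetical stub: a real system would call an actual public-key
   verification (RSA, DSA, ...) here.                                  */
static bool verify_sig(const unsigned char *pk, const unsigned char *msg,
                       size_t msg_len, const unsigned char *sig)
{
    (void)pk; (void)msg; (void)msg_len; (void)sig;
    return false;
}

/* With a registered public key, harvesting the account database yields
   only verification material; unlike a harvested shared-secret, it
   cannot be replayed to authenticate.                                 */
static bool authenticate(const struct account *acct,
                         const unsigned char *challenge, size_t len,
                         const unsigned char *sig)
{
    return verify_sig(acct->pubkey, challenge, len, sig);
}

int main(void)
{
    struct account acct = { "lynn", { 0 } };
    unsigned char challenge[16] = { 0 }, sig[64] = { 0 };
    return authenticate(&acct, challenge, sizeof challenge, sig) ? 0 : 1;
}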

Given that many back-end systems have some pretty stringent security
and audit requirements specifically targeted at preventing things like
insiders harvesting shared-secrets ... some of those procedures and
associated costs can be alleviated.

Also, in the ISP world ... a significant cost is the service calls
associated with handling a password compromise. This is further
aggravated by human factors issues, with people having to track &
remember a large number of different shared-secrets ... because of the
guidelines about not using the same shared-secrets in multiple
different domains.

i.e. the start of my comments on this thread was purely about the transition
of existing business processes (no new, changed &/or different
business processes, no reliance on 3rd parties and all the associated
new issues with regard to liability, vulnerabilities, and systemic
risk, etc) from a shared-secret paradigm to a public key/secret/token
paradigm ... and some deployment approaches that would result in lower
costs than the current shared-secret paradigm (for instance adding a
chip to an existing card being distributed might be able to save
having to distribute one or more subsequent cards ... resulting in
distributing hardware tokens actually costing the overall
infrastructure less than distributing magstripe cards).

random systemic risk refs from thread in sci.crypt in fall of 99
http://www.garlic.com/~lynn/99.html#156
http://www.garlic.com/~lynn/99.html#236
http://www.garlic.com/~lynn/99.html#240

random other system risk refs:
http://www.garlic.com/~lynn/98.html#41
http://www.garlic.com/~lynn/2000.html#36
http://www.garlic.com/~lynn/2001c.html#34


-- 
Anne & Lynn Wheeler   | [EMAIL PROTECTED] -  http://www.garlic.com/~lynn/ 

------------------------------

From: "John Niven" <[EMAIL PROTECTED]>
Crossposted-To: alt.security.pgp,talk.politics.crypto
Subject: OT: TV Licensing (Was: => FBI easily cracks encryption ...?)
Date: Wed, 7 Mar 2001 17:01:50 -0000

> (You have to own a license to watch TV in Britain?  Fortunately I have a
> simple solution for that, having not watched TV at all for years... by
> choice.)

Not quite right - you have to own a license to own a TV.  Subtle, but it
meant that the small colour set I had solely to use with my "micro-computer"
during the '80s required a license.  If you have no aerial (e.g. you use
your TV just for watching pre-recorded videos or DVDs) you're still required
to buy a license.  Licensing is on a per-household basis - retirement
communities, for example, require one license per room.

Sorry for the off-topic post, but it's a pet peeve of mine...!

John

--
John Niven
(Reply through newsgroup)


"Sundial Services" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> (You have to own a license to watch TV in Britain?  Fortunately I have a
> simple solution for that, having not watched TV at all for years... by
> choice.)




------------------------------

Subject: Re: PKI and Non-repudiation practicalities
Reply-To: Anne & Lynn Wheeler <[EMAIL PROTECTED]>
From: Anne & Lynn Wheeler <[EMAIL PROTECTED]>
Date: Wed, 07 Mar 2001 17:05:51 GMT

Anne & Lynn Wheeler <[EMAIL PROTECTED]> writes:
> oops, that should be 10,959 person years with the rubber hose method
> compared to possibly a couple person weeks to come up with approx. the
> same potential fraud "yield".

the other analysis is that a skilled rubber-hose person might be expecting
a minimum of $1000/day. At four person days per secret, that comes out
to $4k salary ... plus maybe another $1k or so in expenses; or on the
order of $5k cost per secret/token.

Let's say the person has a $5k credit limit on the account associated
with the use of the token ... that means that the fraudster can go out and
(at best) make $5k worth of fraudulent purchases. Say brand-new stuff
that they then have to fence at $.10 on the dollar ... yielding $500 for
an outlay of $5k.
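
the back-of-envelope arithmetic, as a sketch in C (my rendering of the
numbers above):

#include <stdio.h>

int main(void)
{
    double day_rate     = 1000.0;  /* skilled rubber-hose labor, $/day */
    double person_days  = 4.0;     /* two people x two days per secret */
    double expenses     = 1000.0;  /* per-secret overhead              */
    double credit_limit = 5000.0;  /* best-case purchases per token    */
    double fence_rate   = 0.10;    /* resale rate on fenced goods      */

    double cost  = day_rate * person_days + expenses;   /* $5000 */
    double yield = credit_limit * fence_rate;           /* $500  */

    printf("cost per secret: $%.0f, yield: $%.0f, net: $%.0f\n",
           cost, yield, yield - cost);
    return 0;
}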

The downside (for the criminals) is that the person may be able to report
the token lost/stolen/compromised prior to their being able to fully take
advantage of it. There is also the possibility that the person already has
some charges outstanding, so the available credit is less than the credit
limit.

-- 
Anne & Lynn Wheeler   | [EMAIL PROTECTED] -  http://www.garlic.com/~lynn/ 

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
