[EMAIL PROTECTED]: [fc-announce] Financial Cryptography 2007 Call for Papers]

2006-07-29 Thread R. Hirschfeld
From: Sven Dietrich <[EMAIL PROTECTED]>
Subject: [fc-announce] Financial Cryptography 2007 Call for Papers
To: [EMAIL PROTECTED]
Date: Fri, 28 Jul 2006 11:41:39 -0400 (EDT)

Dear Colleague,

please find below the first Call for Papers for FC'07.

Best regards,

Sven Dietrich
-- 
Dr. Sven Dietrich        CERT Research - Software Engineering Institute
[EMAIL PROTECTED]   4500 Fifth Ave, Pittsburgh, PA 15213, USA
Tel: +1-412-268-7711 Fax: +1-412-268-6989  PGPkeyID: 0x04185247
--
First Call for Papers

FC'07: Financial Cryptography and Data Security
http://fc07.ifca.ai/

Eleventh International Conference
February 12-15, 2007
Lowlands, Scarborough, Trinidad and Tobago

Submissions Due Date: October 9, 2006, 11:59pm, EDT (UTC-4)

Program Chair:  Sven Dietrich (Carnegie Mellon University)
General Chair:  Rafael Hirschfeld (Unipay)

In its eleventh year, Financial Cryptography and Data Security (FC'07)
is a well-established, major international forum for research, advanced
development, education, exploration, and debate regarding security in the
context of finance and commerce. We will continue last year's augmentation
of the conference title and expansion of our scope to cover all aspects of
securing transactions and systems. These aspects include a range of
technical areas such as: cryptography, payment systems, secure transaction
architectures, software systems and tools, fraud prevention, secure IT
infrastructure, and analysis methodologies. Our focus will also encompass
financial, legal, business, and policy aspects. Material on both theoretical
(fundamental) aspects of securing systems and on secure applications and
real-world deployments will be considered.

The conference goal is to bring together top cryptographers, data-security
specialists, and computer scientists with economists, bankers, implementers,
and policy makers. Intimate and colorful by tradition, the FC'07 program
will feature invited talks, academic presentations, technical
demonstrations, and panel discussions.

This conference is organized annually by the International Financial
Cryptography Association (IFCA).

Original papers, surveys, and presentations on all aspects of financial and
commerce security are invited. Submissions must have a strong and visible
bearing on financial and commerce security issues, but can be
interdisciplinary in nature and need not be exclusively concerned with
cryptography or security. Possible topics for submission to the various
sessions include, but are not limited to:

Anonymity and Privacy
Auctions
Audit and Auditability
Authentication and Identification, including Biometrics
Certification and Authorization
Commercial Cryptographic Applications
Commercial Transactions and Contracts
Digital Cash and Payment Systems
Digital Incentive and Loyalty Systems
Digital Rights Management
Financial Regulation and Reporting
Fraud Detection
Game Theoretic Approaches to Security
Identity Theft, Phishing and Social Engineering
Infrastructure Design
Legal and Regulatory Issues
Microfinance and Micropayments
Monitoring, Management and Operations
Reputation Systems
RFID-Based and Contactless Payment Systems
Risk Assessment and Management
Secure Banking and Financial Web Services
Securing Emerging Computational Paradigms
Security and Risk Perceptions and Judgments
Security Economics
Smart Cards and Secure Tokens
Trust Management
Trustability and Trustworthiness
Underground-Market Economics
Virtual Economies
Voting System Security

For those interested, last year's proceedings are available from Springer.

Submission Instructions

Submission Categories

FC'07 is inviting submissions in four categories: (1) research papers, (2)
systems and applications presentations, (3) panel sessions, and (4) surveys. For
all accepted submissions, at least one author must attend the conference and
present the work.

Research Papers

Research papers should describe novel scientific contributions to the field,
and they will be subject to rigorous peer review. Accepted submissions will
be included in the conference proceedings to be published in the
Springer-Verlag Lecture Notes in Computer Science (LNCS) series after the
conference, so the submissions must be formatted in the standard LNCS format
(15 page limit).

Systems and Application Presentations

Submissions in this category should describe novel or successful systems
with an emphasis on secure digital commerce applications. Presentations may
concern commercial systems, academic prototypes, or open-source projects for
any of the topics listed above. Where appropriate, software or hardware
demonstrations are encouraged as part of the presentations in these
sessions. Submissions in this category should consist of a short summary of
the work (1-6 pages in length) to be reviewed by the Program Committee,
along with a short biography of the presenters. Accepted submissions will be
presented at the conference (25 minute

Re: Recovering data from encrypted disks, broken CD's

2006-07-29 Thread Florian Weimer
* Steven M. Bellovin:

> I wonder how accurate this is.  It's certainly true that some drives have
> vendor passwords to unlock them.  It's hard to see how they could break
> through (good) software encryption,

A lot of software tends to create temporary files in random places.
If you don't encrypt the whole disk (including swap space and the
suspend-to-disk area), plaintext might be written to the disk and can
be recovered even though the actual cryptography is sound.  This
assumes that transparent decryption is used--the situation is worse if
you need to create a temporary plaintext copy on disk before you can
actually process the data.

(Now I only need to figure out why sequential disk I/O takes such a
significant hit when using dm-crypt. *sigh*)
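Florian's temporary-file point is easy to demonstrate. A short Python sketch (illustrative only; the prefix and the "confidential" payload are made up):

```python
import os
import tempfile

def scratch_copy(plaintext: bytes) -> str:
    """Write a scratch copy the way many applications do: into the
    system-wide default temp directory, wherever that happens to be."""
    fd, path = tempfile.mkstemp(prefix="editbuf-")
    with os.fdopen(fd, "wb") as f:
        f.write(plaintext)
    return path

path = scratch_copy(b"confidential draft")
# The copy lands under tempfile.gettempdir(), not next to the encrypted
# original; if that filesystem (or swap) is unencrypted, plaintext has
# leaked even though the cipher protecting the original is sound.
in_system_tmp = path.startswith(tempfile.gettempdir())
os.unlink(path)   # cleanup; real applications often forget this, too
```

The scratch copy lives outside whatever directory the user bothered to encrypt, which is exactly why whole-disk (plus swap) encryption is the safe configuration.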

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Noise sources: multi-oscillator vs. semiconductor noise?

2006-07-29 Thread ericm
On Sat, Jul 29, 2006 at 04:24:12PM -0400, Thor Lancelot Simon wrote:
> I cannot find any public, rigorous discussion of why such a design might
> be preferable to the semiconductor noise type of design -- but I have to
> assume the people designing the commercial sources have all converged on
> similar designs for _some_ reason.

All the commercial RNGs are parts of chips ("cores"), rather than
circuits made from discrete parts.  From what the hardware people I
have worked with have told me, the free-running oscillator design is
easier to get through chip-design software, which generally treats
analog circuits as errors.  It's also easier to simulate.


Eric




Noise sources: multi-oscillator vs. semiconductor noise?

2006-07-29 Thread Thor Lancelot Simon
I am working on a semi-experimental hardware RNG project.  I note that
the publicly available designs, e.g. http://willware.net:8080/hw-rng.html
or http://world.std.com/~reinhold/waynesrngcomp.gif or the others
listed at http://www.std.com/~reinhold/truenoise.html, all use semiconductor
junction noise as the source -- diode or transistor avalanche noise.

But all the modern commercial designs with which I'm familiar (which
include the Intel, the Hifn, one Motorola design, and some others that
are not publicly documented) are multiple-oscillator designs, in
which some number (usually 2 or 3) of undisciplined oscillators of close
design frequency drift against one another, and an ADC is used to sample
the resulting output waveform.
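A toy software model of such a multiple-oscillator source (the frequencies, the jitter figure, and the sampling scheme are all invented for illustration; a real design has genuine analog jitter and conditions/whitens the raw bits afterward):

```python
import random

def oscillator_bit(t, freq_hz, jitter_hz, rng):
    """State of a free-running square-wave oscillator at time t, with
    phase noise modeled crudely as a random frequency offset."""
    cycles = (freq_hz + rng.gauss(0.0, jitter_hz)) * t
    return int(cycles * 2) % 2

def sample_bits(n, seed=None):
    """A slow clock latches a fast, jittery oscillator; the accumulated
    drift between the two makes the latched bit hard to predict."""
    rng = random.Random(seed)
    fast_hz = 1.0e6          # fast free-running oscillator
    slow_period = 997.3e-6   # sampling period, deliberately not a
                             # clean multiple of the fast period
    return [oscillator_bit(i * slow_period, fast_hz, 25.0, rng)
            for i in range(n)]

raw = sample_bits(1000, seed=1)
```

The close-but-incommensurate frequencies are the point: even without jitter the sampled phase walks through the fast oscillator's cycle, and jitter makes the walk unpredictable.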

I cannot find any public, rigorous discussion of why such a design might
be preferable to the semiconductor noise type of design -- but I have to
assume the people designing the commercial sources have all converged on
similar designs for _some_ reason.

Can someone point me to a discussion of the advantages or disadvantages
of either design type in the literature?  I am not interested in the
theoretical advantages of other, costlier sources such as radioactive-decay
or "more direct" (than junction noise) quantum or thermal noise sources;
I just want to understand why all the public domain designs are of one
type, and all the commercial designs of the other.


-- 
  Thor Lancelot Simon[EMAIL PROTECTED]

  "We cannot usually in social life pursue a single value or a single moral
   aim, untroubled by the need to compromise with others."  - H.L.A. Hart



Re: Recovering data from encrypted disks, broken CD's

2006-07-29 Thread Steven M. Bellovin
On Fri, 28 Jul 2006 10:16:23 -0400, [EMAIL PROTECTED] wrote:

>
> Encryption can be broken
> I was surprised to learn that Ontrack regularly recovers encrypted data
> on systems where the user has lost the key. "There's only a couple of
> technologies where we would run into a roadblock [such as] some of the
> new laptops that have passwords that are tied to the media and to the
> BIOS," says Burmeister. That raises the question: if they can do it, who
> else can?
> 
> On encrypted systems that are more difficult to crack, OnTrack also has
> a secret weapon. "Certain situations involve getting permission to get
> help from the manufacturer," he says.
> 
I wonder how accurate this is.  It's certainly true that some drives have
vendor passwords to unlock them.  It's hard to see how they could break
through (good) software encryption, unless the software vendor -- probably
Microsoft -- has implemented some form of key escrow, which to my
knowledge they've adamantly opposed doing.  In fact, Microsoft just
withdrew an add-on feature to provide easy-to-use encrypted folders
because corporations didn't like the lack of key recovery.


--Steven M. Bellovin, http://www.cs.columbia.edu/~smb



Re: Crypto to defend chip IP: snake oil or good idea?

2006-07-29 Thread Anne & Lynn Wheeler
Thor Lancelot Simon wrote:
> As Perry said, chip fabs have plenty of diagnostic equipment that
> would extract an RSA private key every bit as easily as it would
> extract a private serial number, which means that the additional cost
> of 20-40 gates, plus IP licensing, plus... for a cryptographic engine
> is strictly wasted.  I am a happy Certicom customer but I certainly
> wouldn't buy _this_ product from them.

the fab has plenty of equipment ... at some point there needs to be a little
trust ... the fab could also create copy chips with back doors that
would enable attackers with the appropriate knowledge to extract all
private keys from all manufactured chips ... w/o even requiring
diagnostic equipment. there are audit processes that are designed to
preclude both the backdoor design scenario and the private key
extraction scenario.

my claim is that whether it is 20-40 gates or 20k-40k gates, both would
be equivalently trivial ... or at least impossible to tell apart ... if
you are talking about a 100-million-circuit chip.

my assertion is that there is incremental benefit to asymmetric key
operation over a straight static serial number. in the scenario where the
asymmetric key operation is being used as a countermeasure to copy chips
... there may even be incentive for the fab not to compromise its own
chips.

there are also some interesting processes in fabs around the
poweron/test situation to narrow the window for possible private
key extraction (after the key may be generated) ... unless you are
talking about physically invasive techniques that damage the chip
(negating the purpose of using the digital signature from the private
key as proof of a valid, undamaged, working chip).

my assertion is that the cost of the additional gates can be more than
offset by improving/eliminating other chip-processing-related steps
... resulting in a net economic benefit ... this is improved by
aggressive cost reduction of the additional gates ... so it might need
to save more than a dollar or two in other chip processes for a net
economic benefit (i.e. it may be possible to implement the asymmetric key
circuits for pennies)

you seem to be asserting that the complexity of asymmetric key circuits
would require savings on the order of possibly hundreds of dollars (per
chip) to show any net economic benefit.

somewhat related is that there is a lot of current chip activity where
they have an excess of circuits for which they are somewhat desperately
looking for applications. if they can front load some incremental
purpose that uses the excess circuits ... the design costs are front
loaded and then amortized across hundreds of millions of chips ...
effectively driving the actual circuit-related cost (for the incremental
feature) to zero. if it doesn't actually increase any post-fab per-chip
processing cost ... and can decrease post-fab per-chip processing
cost ... then it takes extremely little savings to show a net
economic infrastructure benefit.

in my scenario ... it takes relatively trivial copy chip countermeasure
incremental benefit to justify fabs adding the feature to their chips.



Re: Crypto to defend chip IP: snake oil or good idea?

2006-07-29 Thread Thor Lancelot Simon
On Fri, Jul 28, 2006 at 06:46:54PM -0600, Anne & Lynn Wheeler wrote:
> Thor Lancelot Simon wrote:
> >The simple, cost-effective solution, then, would seem to be to generate
> >"static serial numbers" like cipher keys -- with sufficient randomness
> >and length that their sequence cannot be predicted.  I still do not see
> >the advantage (except to Certicom, who would doubtless like to charge a
> >bunch of money for their "20-40k gate crypto code") of using asymmetric
> >cryptography in this application.
[...] 
> so is the issue really with asymmetric key cryptography technology done 
> in custom circuit design ... or is the issue with certicom??

The issue is with unnecessary complexity that yields (still, to my
eye) no demonstrable security benefit in the applications for which
the Certicom press release claimed it was intended.  As I said before,
I think the basic "chip generates key pair, public key signed during
manufacturing" solution is a very clever one -- but only to problems
which _also_ justify the cost of a very serious tamper-proofing effort
aimed at protecting the private key, and where it is a requirement of
the application that the original fab _itself_ not have that key as
part of the manufacturing process, e.g. where it will be used as a
master secret for persistent storage of other keys.  In other words,
for devices like IBM's cryptographic modules.  But for the purpose
Certicom claimed (and you seem also to be claiming) it's suited for:

As Perry said, chip fabs have plenty of diagnostic equipment that
would extract an RSA private key every bit as easily as it would
extract a private serial number, which means that the additional cost
of 20-40 gates, plus IP licensing, plus... for a cryptographic engine
is strictly wasted.  I am a happy Certicom customer but I certainly
wouldn't buy _this_ product from them.
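For concreteness, the "chip generates key pair, public key signed during manufacturing" flow at issue can be sketched with textbook RSA (tiny fixed primes, no padding, no real key generation -- a toy illustration only, nothing like IBM's actual implementation):

```python
import hashlib

def h(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

class ToyRSA:
    """Textbook RSA over tiny fixed primes -- illustration only."""
    def __init__(self, p, q, e=65537):
        self.n, self.e = p * q, e
        self._d = pow(e, -1, (p - 1) * (q - 1))   # private exponent
    def sign(self, data: bytes) -> int:
        return pow(h(data) % self.n, self._d, self.n)
    def verify(self, data: bytes, sig: int) -> bool:
        return pow(sig, self.e, self.n) == h(data) % self.n

# 1. The chip generates its key pair internally (fixed primes stand in
#    for on-chip generation; the private exponent never leaves the die).
chip = ToyRSA(7919, 7927)
public_key = str(chip.n).encode()      # only this is exported

# 2. The fab signs the exported public key with its manufacturing key...
fab = ToyRSA(10007, 10009)
endorsement = fab.sign(public_key)

# 3. ...and the signed key is injected back into the chip.  Later, an
#    SDK can check the fab's endorsement to decide it is talking to a
#    genuine part, and challenge the chip to sign with its private key.
assert fab.verify(public_key, endorsement)
```

The security of step 1 is exactly what the thread disputes: the flow only helps if the die actually keeps the private exponent out of reach of the fab's own diagnostic equipment.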

Thor



Re: Crypto to defend chip IP: snake oil or good idea?

2006-07-29 Thread Anne & Lynn Wheeler

Thor Lancelot Simon wrote:

The simple, cost-effective solution, then, would seem to be to generate
"static serial numbers" like cipher keys -- with sufficient randomness
and length that their sequence cannot be predicted.  I still do not see
the advantage (except to Certicom, who would doubtless like to charge a
bunch of money for their "20-40k gate crypto code") of using asymmetric
cryptography in this application.


which effectively gets you the same thing as the secure hash scenario for 
static account numbers ... the example immediately following the 
million static serial numbers in the same post:

http://www.garlic.com/~lynn/aadsm25.htm#4

which is a countermeasure to attackers taking advantage of a regular pattern.

however, if the static serial number is ever used for any purpose ... it 
then has to be exposed ... and since it is static ... it is then subject to 
skimming, eavesdropping, etc ... and then used in replay attacks,

i.e. previous post
http://www.garlic.com/~lynn/aadsm25.htm#4

the only way a static serial number is equivalent to a private key is if it is 
never exposed ... which effectively implies that it is never used,

i.e. previous post
http://www.garlic.com/~lynn/aadsm25.htm#4

for years the standard security response has been that the best security 
is to lock it away and never use it or provide access to it.


if it is ever used for any purpose ... then it can be exposed all over 
the place ... in a manner similar to static account numbers (even with the 
static secure hash) described in the same posting as the million account 
number scenario, i.e. previous post

http://www.garlic.com/~lynn/aadsm25.htm#4

so is the issue really with asymmetric key cryptography technology done 
in custom circuit design ... or is the issue with certicom??


btw, the 40k circuit core design that i referred to done in late 99 and 
early 2000 had no certicom content ... even the ecc was done w/o any 
certicom content.




Re: Crypto to defend chip IP: snake oil or good idea?

2006-07-29 Thread Thor Lancelot Simon
On Fri, Jul 28, 2006 at 03:52:55PM -0600, Anne & Lynn Wheeler wrote:
> Thor Lancelot Simon wrote:
> >I don't get it.  How is there "no increase in vulnerability and threat"
> >if a manufacturer of counterfeit / copy chips can simply read the already
> >generated private key out of a legitimate chip (because it's not protected
> >by a tamperproof module, and the "significant post-fab security handling"
> >has been eliminated) and make as many chips with that private key as he
> >may care to?
> >
> >Why should I believe it's any harder to steal the private key than to
> >steal a "static serial number"?
> 
> so for more drift ... given another example of issues with static
> data authentication operations is that static serial numbers are 
> normally not considered particularly secret ... and partially as a result 
> ... they tend to have a fairly regular pattern ... frequently even 
> sequential. there is high probability that having captured a single 
> static serial number ... you could possibly correctly guess another 
> million or so static serial numbers w/o a lot of additional effort. This 
> enables the possibly trivial initial effort to capture the first serial 
> number to be further amortized over an additional million static serial 
> numbers ... in effect, in the same effort it has taken to steal a single 
> static serial number ... a million static serial numbers have 
> effectively been stolen.

The simple, cost-effective solution, then, would seem to be to generate
"static serial numbers" like cipher keys -- with sufficient randomness
and length that their sequence cannot be predicted.  I still do not see
the advantage (except to Certicom, who would doubtless like to charge a
bunch of money for their "20-40k gate crypto code") of using asymmetric
cryptography in this application.
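Generating serial numbers "like cipher keys" is a one-liner with a CSPRNG. An illustrative sketch (the BIN-style example digits are made up):

```python
import secrets

def make_serial(nbytes=16):
    """An unpredictable 'static serial number': 128 random bits, so
    capturing one number reveals nothing about any other."""
    return secrets.token_hex(nbytes)

# The weak scheme: sequential serials, where one captured value lets an
# attacker enumerate a million neighbors for free.
sequential = [f"{base:012d}" for base in range(431007650000, 431007650005)]

# The proposed scheme: independent random values with no usable pattern.
randomized = [make_serial() for _ in range(5)]
```

This addresses the guessability amortization argument; it does nothing, of course, about eavesdropping and replay, which is the separate argument for asymmetric operation.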




Re: Crypto to defend chip IP: snake oil or good idea?

2006-07-29 Thread Anne & Lynn Wheeler

Thor Lancelot Simon wrote:

I don't get it.  How is there "no increase in vulnerability and threat"
if a manufacturer of counterfeit / copy chips can simply read the already
generated private key out of a legitimate chip (because it's not protected
by a tamperproof module, and the "significant post-fab security handling"
has been eliminated) and make as many chips with that private key as he
may care to?

Why should I believe it's any harder to steal the private key than to
steal a "static serial number"?


so maybe we can look at another kind of static serial number 
vulnerability ... besides the fact that it will nominally be directly 
accessible to general programming and/or transmitted (neither of which 
requires capture by physically intrusive methods).


so for more drift ... given another example of issues with static
data authentication operations is that static serial numbers are 
normally not considered particularly secret ... and partially as a result 
... they tend to have a fairly regular pattern ... frequently even 
sequential. there is high probability that having captured a single 
static serial number ... you could possibly correctly guess another 
million or so static serial numbers w/o a lot of additional effort. This 
enables the possibly trivial initial effort to capture the first serial 
number to be further amortized over an additional million static serial 
numbers ... in effect, in the same effort it has taken to steal a single 
static serial number ... a million static serial numbers have 
effectively been stolen.


So even if you have a scenario where the effort to steal a single static 
serial number is exactly the same as the effort to steal a private key 
(because the chips containing them will never divulge and/or export 
either ... which is actually a false assumption, but assume it to be 
true for argument's sake) ... then we can still claim that when the 
effort has been made to steal a single static serial number ... that 
effort can be amortized over a million static serial numbers ... 
while you are still stuck with only a single private key. the question 
then is whether "identical effort" divided by one is the same as 
"identical effort" divided by a million.


so we could look at it from an additional analogy
http://www.garlic.com/~lynn/aadsm25.htm#3 Crypto to defend chip IP: 
snake oil or good idea?


and yet another analogy/example similar to static serial numbers 
is account numbers. one of the static account number 
vulnerabilities in the 60s was their regular structure. attackers would 
use the regular structure of account numbers to conjure up bogus 
account numbers from which counterfeit magstripe payment cards were 
created. this was frequently successful for performing fraudulent 
transactions.


eventually the payment card industry came up with a sort of secure hash 
that was written to magstripe along with the account number. basically 
it was a bank/bin "secret" mashed with the account number. the 
association network collected a table of all the bank/bin "secrets" and 
could check the secure hash on an account transaction against the 
computed value for the account (for instance, if they were going to be 
doing standin authorization).
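A toy version of that "secret mashed with the account number" check value (an HMAC-based illustration only; the real card-verification algorithms differ in detail, and the BIN, secret, and account number here are made up):

```python
import hmac
import hashlib

# Per-bank ("BIN") secrets held by the issuer / association network.
BANK_SECRETS = {"431007": b"per-bin secret key"}

def check_value(account_number: str) -> str:
    """Short check value written to the magstripe alongside the
    account number, derived from the bank secret."""
    secret = BANK_SECRETS[account_number[:6]]
    mac = hmac.new(secret, account_number.encode(), hashlib.sha256)
    return mac.hexdigest()[:6]          # truncated, as on a magstripe

def authorize(account_number: str, presented_cv: str) -> bool:
    """The network recomputes the check value during (e.g. stand-in)
    authorization and compares."""
    return hmac.compare_digest(check_value(account_number), presented_cv)

acct = "4310071234567890"
cv = check_value(acct)
```

A conjured account number fails because the attacker cannot compute the check value without the bank secret -- but, as the next paragraphs note, simply *recording* a valid (number, check value) pair defeats it, because everything is still static.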


you then started to see bogus reading of the magstripe (static data) by 
attackers ... who then would use the recorded information to create a 
counterfeit replica.


in the mid-90s, there was chip&pin effort as countermeasure to the bogus 
reading of magstripes. there was nothing that could be swiped and read 
... so it prevented attackers from creating counterfeit cards from bogus 
reads.


the only problem was that sometime in the 80s, you started to also see 
attackers recording valid transactions ... they didn't actually need 
physical access to the card ... they just needed to be able to record 
valid transactions ... and since it was static data ... it could be 
readily used for replay attacks using counterfeit magstripe cards.


the chip&pin deployments in the late 90s thru recently would have the 
chip present a digital certificate as its authentication. It didn't do 
any actual public key operations ... it just presented the certificate. 
This was called static data authentication. The problem was that the 
technology used for skimming/recording valid (magstripe) transactions 
frequently worked equally well recording static data chip&pin 
transactions (the attackers didn't require any physical access ... they 
just skimmed/recorded valid transactions, in fact they could record tens 
of thousands of transactions enabling them to build tens of thousands of 
counterfeit cards).


Now since it was purely static data authentication, the attackers found 
that they could take a counterfeit chip and install the skimmed/recorded 
certificate ... and the chip would now pass as valid. In the late 90s, 
this got the label "yes cards" ... old "yes card" reference:

http://web.archive.org/web/2003041

Re: [IP] more on Can you be compelled to give a password?

2006-07-29 Thread Ed Gerck

List,

the Subject says it all. This might be of interest
here, for comments.


The answer is definitely NO, even for the naive user;
only the setup requires tech savvy. Several
examples are possible.

John Smith can set two passwords, one for normal use
and the other when in distress. The distress password
may simply announce that the data is expired or, more
creatively, also make the data unreadable.

John Smith can also set two passwords, one of them
unknown to him but known to a third party (whom
John S does not have to trust) that is subject to
a different jurisdiction and/or rules, or is in another
place. John Smith may comply with any demand to
disclose his password, but such a demand may not be
effective against the third party.

John Smith can have the data, encrypted with a key
controlled by his password, sitting on some Internet
server somewhere. John S never carries the data,
and anyone finding the data does not know to whom it
belongs.

John Smith can also use keys with short expiration
dates in order to circumvent, by delay tactics, any
demand to reveal his password: during the delay,
the key expires.

Of course, this is not really a safe haven for
criminals, because criminal activity is often detected
and evidenced by its "outside" effects, including
tracing.
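The first scheme (a normal password plus a distress password) can be sketched in a few lines. A toy illustration only; all passwords and strings here are made up:

```python
import hashlib
import secrets

def _digest(pw: str, salt: bytes) -> bytes:
    """Slow password hash via PBKDF2."""
    return hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, 100_000)

SALT = secrets.token_bytes(16)
NORMAL = _digest("correct horse", SALT)      # decrypts as usual
DISTRESS = _digest("battery staple", SALT)   # announces data expired

def open_vault(password: str) -> str:
    d = _digest(password, SALT)
    if secrets.compare_digest(d, NORMAL):
        return "decrypted contents"
    if secrets.compare_digest(d, DISTRESS):
        # A more aggressive variant would overwrite the real key
        # material here, making the data permanently unreadable.
        return "data expired"
    raise ValueError("bad password")
```

To an observer, the distress password's "data expired" result is indistinguishable from the data genuinely being gone.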

Cheers,
Ed Gerck



Re: Crypto to defend chip IP: snake oil or good idea?

2006-07-29 Thread Anne & Lynn Wheeler

Thor Lancelot Simon wrote:

So, you sign the public key the chip generated, and inject the _signed_
key back into the chip, then package and ship it.  This is how the SDK
for IBM's crypto processors determines that it is talking to the genuine
IBM product.  It is a good idea, and it also leaves the chip set up for
you with a preloaded master secret (its private key) for encrypting other
keys for reuse in insecure environments, which is really handy.

But do we really think that general-purpose CPUs or DSPs are going to
be packaged in the kind of enclosure IBM uses to protect the private keys
inside its cryptographic modules?


so one analogy to explore: somebody claims that pin/password 
authentication infrastructures have exactly the same vulnerabilities (no 
more and no less) as private key digital signature authentication ... that 
eavesdropping attacks on digital signatures represent exactly the same 
vulnerability as eavesdropping on pin/passwords.


to further explore this analogy ... the registration of a public key as 
part of a digital signature infrastructure would represent exactly the same 
vulnerability as pin/password registration ... i.e. anybody having 
access to the public key registration file could take the public key and 
perform a fraudulent authentication ... because, just as in the 
pin/password authentication paradigm, the same value would be used both for 
originating the authentication and for verifying the authentication.


for some additional assertions in this analogy ... that would imply that 
an attacker only needs to learn the public key in order to perform a 
successful attack and doesn't actually require access to the private key 
at all (assuming the assertion that a serialno/pin/password 
authentication paradigm has exactly the same vulnerabilities and threats 
as a public/private key digital signature authentication paradigm).



Re: Crypto to defend chip IP: snake oil or good idea?

2006-07-29 Thread Anne & Lynn Wheeler

Thor Lancelot Simon wrote:

I don't get it.  How is there "no increase in vulnerability and threat"
if a manufacturer of counterfeit / copy chips can simply read the already
generated private key out of a legitimate chip (because it's not protected
by a tamperproof module, and the "significant post-fab security handling"
has been eliminated) and make as many chips with that private key as he
may care to?

Why should I believe it's any harder to steal the private key than to
steal a "static serial number"?


there is no increase in vulnerability and threat over the existing situation, 
where an attacker can copy the serial number as it is being read out by 
normal functions. it's static data ... along the lines of a symmetric 
password ... where the same information that is used to establish the 
authentication is also used to validate the authentication.


the private key scenario doesn't export the private key as part of any 
normal function ... it is generated within the added circuit core, not 
available to processing outside of the added circuit core, and the only 
things that are normally exposed/exported outside the added circuit 
core are the public key and digital signatures.


so the added circuit core is an incremental cost in chip real estate 
for the 20k-40k extra circuits. the rest of the associated fab 
and post-fab processing can be reduced to effectively zero ... changing 
the paradigm from serial number, pin, password symmetric 
authentication to asymmetric authentication (for essentially 
no incremental cost).


so an attacker seeking to retrieve the private key can't do it by trivial 
eavesdropping or readily available processor functions ... instead the 
attacker has to resort to physically invasive techniques on the chip to 
obtain the private key. right away that eliminates all the remote, 
electronic attacks ... leaving only attacks that require physical 
possession of the object.
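The contrast can be shown as a small challenge-response sketch. (A keyed response stands in here for the chip's digital signature -- symmetric for brevity; the replay-resistance argument is the same. All names are illustrative.)

```python
import hmac
import hashlib
import secrets

CHIP_SECRET = secrets.token_bytes(32)   # never exported by the chip

def chip_respond(challenge: bytes) -> bytes:
    """What the chip computes internally over a verifier challenge."""
    return hmac.new(CHIP_SECRET, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes) -> bool:
    return hmac.compare_digest(response, chip_respond(challenge))

# An eavesdropper records one complete, valid transcript ...
c1 = secrets.token_bytes(16)            # fresh challenge, session 1
recorded = (c1, chip_respond(c1))

# ... but a later session uses a fresh challenge, so replaying the
# recorded response fails.
c2 = secrets.token_bytes(16)            # fresh challenge, session 2
replay_accepted = verify(c2, recorded[1])
```

With a static serial number the recorded value itself is the credential, so the same recording succeeds forever; here the recording is useless against any other session.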


so now the issue is countermeasures to physically invasive attacks 
requiring physical possession of each chip. in some of the scenarios 
... one approach is to have physically invasive countermeasures 
sufficient that the physical attack will take longer than the 
nominal interval to report the object lost/stolen (invalidating the use of 
the physical object).


another scenario, from parameterized risk management ... is to make the 
physical attack more expensive than the fraudulent benefit the 
attacker can expect.


the issue is that the serial number is static (and requires symmetric 
authentication ... the same value is used both for establishing 
authentication and for verifying authentication) ... and symmetric 
authentication mechanisms are vulnerable to a large number of 
attacks other than physically invasive attack on the chip itself 
(the argument is nearly identical to the justification for using digital 
signature authentication in lieu of static-data pin/password 
authentication, which is subject to all sorts of eavesdropping and replay 
attacks). the invasive attacks are things like peeling physical layers of 
the chip and using a scanning electron microscope ... i actually spent some 
time working at the los gatos vlsi lab (bldg. 29), which claims to have 
pioneered use of the scanning electron microscope for chip analysis ... not 
for chip attacks ... but as part of debugging initial chips.


so a physical vulnerability issue for something fips140-2 is whether 
there is constant power, so that a countermeasure to physically invasive 
attack can trigger zeroization. there is a cost and vulnerability trade-off 
in not having constant power and allowing a physical attack without a 
zeroization countermeasure. that is something that shows up as part of 
parameterized risk management.


this is also somewhat related to the security proportional to risk topic
... one such discussion:
http://www.garlic.com/~lynn/2001h.html#61

past posts involving this thread:
http://www.garlic.com/~lynn/aadsm24.htm#49 Crypto to defend chip IP: 
snake oil or good idea?
http://www.garlic.com/~lynn/aadsm24.htm#51 Crypto to defend chip IP: 
snake oil or good idea?
http://www.garlic.com/~lynn/aadsm24.htm#52 Crypto to defend chip IP: 
snake oil or good idea?
http://www.garlic.com/~lynn/aadsm24.htm#53 Case Study: Thunderbird's 
brittle security as proof of Iang's 3rd Hypothesis in secure design: 
there is only one mode, and it's secure
http://www.garlic.com/~lynn/aadsm25.htm#0 Crypto to defend chip IP: 
snake oil or good idea?
http://www.garlic.com/~lynn/aadsm25.htm#1 Crypto to defend chip IP: 
snake oil or good idea?

http://www.garlic.com/~lynn/2006n.html#57 The very first text editor

past posts discussing parameterized risk management issues:
http://www.garlic.com/~lynn/aadsmore.htm#bioinfo3 QC Bio-info leak?
http://www.garlic.com/~lynn/aadsmore.htm#biosigs biometrics and 
electronic signatures
http://www.garlic.com/~lynn/aepay3.htm#x959risk1 Risk Management in AA / 
draft X9.59

http://www.garlic.com/~lynn/aadsm3.htm#cstech3 cardtech

Re: Crypto to defend chip IP: snake oil or good idea?

2006-07-29 Thread Thor Lancelot Simon
On Thu, Jul 27, 2006 at 08:53:26PM -0600, Anne & Lynn Wheeler wrote:
> 
> If you treat it as a real security chip (the kind that goes into 
> smartcards and hardware token) ... it eliminates the significant 
> post-fab security handling (prior to finished delivery), in part to 
> assure that counterfeit / copy chips haven't been introduced into the 
> stream  with no increase in vulnerability and threat.

I don't get it.  How is there "no increase in vulnerability and threat"
if a manufacturer of counterfeit / copy chips can simply read the already
generated private key out of a legitimate chip (because it's not protected
by a tamperproof module, and the "significant post-fab security handling"
has been eliminated) and make as many chips with that private key as he
may care to?

Why should I believe it's any harder to steal the private key than to
steal a "static serial number"?

Thor

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Recovering data from encrypted disks, broken CD's

2006-07-29 Thread leichter_jerrold
From a Computerworld blog.
--Jerry


When encryption doesn't work

By Robert L. Mitchell on Wed, 07/26/2006 - 12:00pm

In my interview with Ontrack Data Recovery this week (see
Recovery specialists bring data back from the dead:

http://www.computerworld.com/action/article.do?command=printArticleBasic&articleId=112460),

quite a bit hit the cutting room floor, including these three nuggets by
Mike Burmeister, director of engineering for data recovery:

Encryption can be broken
I was surprised to learn that Ontrack regularly recovers encrypted data
on systems where the user has lost the key. "There's only a couple of
technologies where we would run into a roadblock [such as] some of the
new laptops that have passwords that are tied to the media and to the
BIOS," says Burmeister. That raises the question: if they can do it, who
else can?

On encrypted systems that are more difficult to crack, OnTrack also has
a secret weapon. "Certain situations involve getting permission to get
help from the manufacturer," he says.

Broken CDs still yield data
Ontrack can also reassemble and recover data from CD-ROM discs that have
been broken into pieces. If you're using CDs for backups of sensitive
data, it's probably best to shred them.

Tapes work. People fail
Among the tape problems Ontrack sees most often are those related to
human errors, such as accidentally erased or formatted tapes.

"Formatting the wrong tapes is the most common [problem] by far.  The
other one is they back up over a tape that has information on it.  The
general thing is they back up the wrong data. We'll get the tape in and
they'll say, 'The data I thought was on this tape is not on it.'"

While those failures can be attributed to confusion, another failure is
the result of just plain laziness. "People run these backup processes
and they're not simple anymore. They run these large, complex tape
libraries and they call that good enough. They don't actually go through
the process of verifying [the tape]," Burmeister says. The result:
disaster strikes twice: once when the primary storage goes down and
again when the restore fails.

For more on how the technical challenges of recovery have raised the
stakes and what you can do to protect your data, see the story above.

Filed under : Security | Software | Storage
Robert L. Mitchell's blog



James Earl wrote:

It's really too bad that ComputerWorld deems to edit these
explanations. Especially when you consider it's all ELECTRONIC paper.

Posted on Thu, 07/27/2006 - 4:12pm| reply

Security Skeptic wrote:

CDs (and DVDs) are very effective targets for recovery, because they
have massive error correction and the data is self-identifying because
of the embedded sector IDs. It's quite possible to recover a CD that has
been shredded, not just broken.

A few years ago, there was academic research describing automated
reassembly of shredded documents by scanning the bits and matching the
rough edges along the cuts. I'm sure that technology has improved,
too.

The moral of the story is that physical destruction is hard. Grinding to
powder and heating past the Curie point are pretty reliable, but short
of that, it's tough. You're better off encrypting, as long as the key
actually is secret.

Posted on Thu, 07/27/2006 - 4:44pm| reply

Security Skeptic wrote:

Computer BIOS passwords: easy to recover by resetting or other direct
access to CMOS. You can do this at home.

Disk drive media passwords: hard to recover, but possible by direct
access to flash memory on the drive. This is tough to do at home, but
probably a breeze for OnTrack.

Disk drive built-in hardware encryption (which as far as I know is only
a Seagate feature so far) should be essentially impossible to recover,
unless Seagate has built in a back door, has fumbled the implementation,
or the password is simple enough to guess. Same is true for software-
based full-disk encryption: it can be invulnerable in the absence of
errors. Use it properly, and you'll never have to worry about your data
if the computer is lost or stolen.

Posted on Thu, 07/27/2006 - 4:54pm| reply

Iain Wilkinson wrote:

Surely it's far more common to use the BIOS to prevent a hard drive
being mounted in another device than to encrypt it.

As one of the other commentators says, the BIOS is pretty easy to get
into if you know what you are doing. Basing an encryption system on this
would inherit all its weaknesses.

Posted on Fri, 07/28/2006 - 7:53am| reply

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Crypto to defend chip IP: snake oil or good idea?

2006-07-29 Thread Anne & Lynn Wheeler

Thor Lancelot Simon wrote:

So, you sign the public key the chip generated, and inject the _signed_
key back into the chip, then package and ship it.  This is how the SDK
for IBM's crypto processors determines that it is talking to the genuine
IBM product.  It is a good idea, and it also leaves the chip set up for
you with a preloaded master secret (its private key) for encrypting other
keys for reuse in insecure environments, which is really handy.

But do we really think that general-purpose CPUs or DSPs are going to
be packaged in the kind of enclosure IBM uses to protect the private keys
inside its cryptographic modules?


... long post warning :) ...

that is basically a certificate-based process  i.e. a recognized 
certification authority is signing the exported public key and injecting 
it back into the chip ... as a form of digital certificate.


this allows a relying party ... one that has a copy of the 
appropriate certification authority's public key ... to validate the 
device's digital certificate in an offline manner.
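
a minimal sketch of that offline, certificate-style check ... again 
textbook RSA with tiny numbers, purely illustrative and not secure, and 
all names (chip identifiers, key values) are hypothetical:

```python
# Sketch: the CA signs the chip's exported public key at the fab, the
# signed blob is injected back into the chip as a certificate, and any
# relying party holding the CA public key can verify it offline.
# Toy textbook RSA -- NOT secure, purely illustrative.
import hashlib

CA_N, CA_E, CA_D = 3233, 17, 2753      # toy CA key pair

def digest(data):
    # hash the chip public key down into the toy modulus range
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % CA_N

def ca_sign(chip_pubkey):
    # done once, at the fab, by the certification authority
    return pow(digest(chip_pubkey), CA_D, CA_N)

def verify_cert(chip_pubkey, cert):
    # relying party: offline check against CA public key (CA_N, CA_E)
    return pow(cert, CA_E, CA_N) == digest(chip_pubkey)

cert = ca_sign(b"pk-chip-0001")
print(verify_cert(b"pk-chip-0001", cert))               # True
print(verify_cert(b"pk-chip-0001", (cert + 1) % CA_N))  # False (tampered)
```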


the approach i described was not the offline pki-based scenario 
but the certificateless flavor ... the "relying party" accepts the 
public key and contacts the authoritative agency managing/hosting the 
fab's manifest. the authoritative agency then returns whether it is an 
original chip (rather than possibly a counterfeit / copy chip) and 
possibly also the integrity characteristics of the particular chip.
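
the online, certificateless flow might look something like this ... the 
manifest here is just an in-memory dict standing in for the authoritative 
agency's database, and all the names and values are made up for 
illustration:

```python
# Hypothetical sketch: relying party forwards the chip's public key to
# the agency hosting the fab's manifest; the agency answers whether the
# key belongs to an original chip, plus any recorded integrity
# characteristics. All identifiers are illustrative.

FAB_MANIFEST = {
    # public key -> integrity characteristics recorded at power-on/test
    "pk-chip-0001": {"original": True, "eval_level": "EAL5-high"},
    "pk-chip-0002": {"original": True, "eval_level": "EAL5-high"},
}

def manifest_lookup(public_key):
    """Authoritative agency: report original/counterfeit status and any
    recorded integrity characteristics for this chip."""
    entry = FAB_MANIFEST.get(public_key)
    if entry is None:
        return {"original": False}      # counterfeit / copy / unknown
    return dict(entry)

print(manifest_lookup("pk-chip-0001"))
# {'original': True, 'eval_level': 'EAL5-high'}
print(manifest_lookup("pk-bogus"))
# {'original': False}
```

note that the same lookup also gives the agency a place to flag a chip 
serial / public key once it is known to be compromised.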


in any case, can you say "parameterized risk management" :)

with respect to the "kind of enclosure IBM uses to protect the private 
keys inside the cryptographic modules" ... the integrity 
characteristics of any specific kind of chip are likely to be 
proportional to the vulnerabilities, threats, risks and purposes that 
the chip is used for. the high level of integrity for the ibm crypto 
unit's private key isn't directly related to the cost of the unit and/or 
whether it is a counterfeit unit ... it is much more related to the 
various anticipated uses to which the ibm crypto unit will be applied.


say a 10-50 cent security chip that has been evaluated at EAL5-high
... possibly costing even less ... see discussion here
http://www.garlic.com/~lynn/aadsm24.htm#28 DDA cards may address the UK 
chip&Pin woes


... the integrity and protection of the private key is likely going to 
be proportional to the purposes for which the chip will be used.


Part of the least expensive process ... is that other than the 20k-40k 
additional circuits ... the actual processing, processing steps, and 
processing time are done in such a way that there is absolutely no 
difference from what fabs are doing today ... the initial power-on/test 
to validate a working chip (before it has been sliced and diced from the 
wafer) is the same exact step taking the same exact amount of time. the 
exporting of the test fields indicating a valid working chip as part of 
power-on/test ... is not changed ... other than there are a few more 
bits that represent the exported public key. the storage and maintenance 
in the fab chip manifest is exactly the same.


There is no incremental cost and no incremental processing ... other 
than the chip real estate for additional 20k-40k circuits.


If you treat it as a real security chip (the kind that goes into 
smartcards and hardware token) ... it eliminates the significant 
post-fab security handling (prior to finished delivery), in part to 
assure that counterfeit / copy chips haven't been introduced into the 
stream  with no increase in vulnerability and threat.


So finally it comes down to later wanting to check whether you have a 
counterfeit / copy chip. The current scenario would be to read out the 
static-data serial number and have it looked up in the fab's chip 
manifest. however, static serial number data is vulnerable to things 
like skimming and replay attacks. so in the basic operation ... for 
effectively zero incremental cost ... you get dynamic-data serial 
number authentication for the lookup in the fab's chip manifest 
(as opposed to a simple static-data serial number).


For nearly all uses of such a basic chip configuration, the cost of 
attacking the private key (in such an eal5-high evaluated chip) is much 
more than any likely benefit ... and is bracketed by being able to flag 
the chip serial and public key in the fab's chip manifest.


As an attack purely for the purposes of selling 50 cent copy chips ... 
each chip attack is going to cost enormously more than expected fraud 
revenue.


So you have to be expecting something other than revenue from selling 
copy chips  to mount such an attack, you would have to be expecting 
to be able to make use of the private key for some significantly larger 
benefit than selling a copy chip.


If you are talking about an attack on the private key ... for purpose 
other than selling a copy chip ... then you are into security 
proportional to risk ... i.e. having a variety of