### Re: combining entropy

On Sat, 25 Oct 2008, John Denker wrote:

| On 10/25/2008 04:40 AM, IanG gave us some additional information.
|
| Even so, it appears there is still some uncertainty as to
| interpretation, i.e. some uncertainty as to the requirements
| and objectives.
|
| I hereby propose a new scenario.  It is detailed enough to
| be amenable to formal analysis.  The hope is that it will
| satisfy the requirements and objectives ... or at least
| promote a more precise discussion thereof.
|
| We start with a group comprising N members (machines or
| persons).  Each of them, on demand, puts out a 160 bit
| word, called a member word.  We wish to combine these
| to form a single word, the group word, also 160 bits
| in length.
This isn't enough.  Somehow, you have to state that the values emitted
on demand in any given round i (where a round consists of exactly one
demand on all N members and produces a single output result) cannot
receive any input from any other members.  Otherwise, if N=2 and member
0 produces true random values that member 1 can see before it responds
to the demand it received, then member 1 can cause the final result to
be anything it likes.

This is an attack that must be considered because you already want to
consider the case:

|  b) Some of [the members] are malicious.  Their outputs may appear
|   random, but are in fact predictable by our adversary.

Stating this requirement formally seems to be quite difficult.  You can
easily make it very strong - the members are to be modeled as
probabilistic TM's with no input.  Then, certainly, no one can see
anyone else's value, since they can't see *anything*.  But you really
want to say something along the lines of no malicious member can see
the value output by any non-malicious member, which gets you into
requiring an explicit failure model - which doesn't fit comfortably with
the underlying problem.

If the issue is how to make sure you get out at least all the randomness
that was there, where the only failures are that some of your sources
become predictable, the XOR is fine.  But once you allow for more
complicated failure/attack modes, it's really not clear what is going on
and what the model should be.
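The simple-failure case is worth making concrete. A toy sketch (Python, all values invented here): XOR-combining preserves full entropy when at least one member word is uniform and independent of the rest, but a malicious member that sees the others' words before answering can force any group word it likes.

```python
import secrets

def combine(member_words):
    """XOR-combine N member words into one group word."""
    group = 0
    for w in member_words:
        group ^= w
    return group

honest = secrets.randbits(160)      # a trusted, uniform member word
predictable = 0xDEADBEEF            # an adversary-known member word

# With independent members, the group word is uniform as long as
# any single member word is uniform.
group = combine([honest, predictable])

# The attack from the text: a malicious member that sees the honest
# member's word before answering can force any group word at all.
malicious = honest ^ predictable    # chosen after seeing the others
assert combine([honest, predictable, malicious]) == 0
```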
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



### Re: combining entropy

On Tue, 28 Oct 2008, John Denker wrote:

| Date: Tue, 28 Oct 2008 12:09:04 -0700
| From: John Denker [EMAIL PROTECTED]
| To: Leichter, Jerry [EMAIL PROTECTED],
| Cryptography cryptography@metzdowd.com
| Cc: IanG [EMAIL PROTECTED]
| Subject: Re: combining entropy
|
| On 10/28/2008 09:43 AM, Leichter, Jerry wrote:
|
|  | We start with a group comprising N members (machines or
|  | persons).  Each of them, on demand, puts out a 160 bit
|  | word, called a member word.  We wish to combine these
|  | to form a single word, the group word, also 160 bits
|  | in length.
|  This isn't enough.  Somehow, you have to state that the values emitted
|  on demand in any given round i (where a round consists of exactly one
|  demand on all N member and produces a single output result) cannot
|  receive any input from any other members.  Otherwise, if N=2 and member
|  0 produces true random values that member 1 can see before it responds
|  to the demand it received, then member 1 can cause the final result to
|  be anything it likes.
|
|
| Perhaps an example will make it clear where I am coming
| from.  Suppose I start with a deck of cards that has been
| randomly shuffled.  It can provide log2(52!) bits of
| entropy.  That's a little more than 225 bits.  Now suppose
| I have ten decks of cards all arranged alike.  You could
| set this up by shuffling one of them and then stacking
| the others to match ... or by some more symmetric process.
| In any case the result is symmetric w.r.t interchange of
| decks.  In this situation, I can choose any one of the
| decks and obtain 225 bits of entropy.  The funny thing
| is that if I choose N of the decks, I still get only 225
| bits of entropy, not N*225 bits.
| The original question spoke of trusted sources of
| entropy, and I answered accordingly.  To the extent
| that the sources are correlated, they were never eligible
| to be considered trusted sources of entropy.  To say
| the same thing the other way around, to the extent
| that each source can be trusted to provide a certain
| amount of entropy, it must be to that extent independent
| of the others.
Rest of example omitted.  I'm not sure of the point.  Yes, there are
plenty of ways for correlation to sneak in.
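The deck arithmetic, for what it's worth, is easy to check (a quick sketch):

```python
import math

# One randomly shuffled 52-card deck provides log2(52!) bits:
deck_entropy = math.log2(math.factorial(52))
print(round(deck_entropy, 1))   # about 225.6

# Ten decks stacked to match the first are a deterministic function
# of it, so choosing all ten still yields only log2(52!) bits,
# not 10 * log2(52!).
```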

As far as I can see, only the second piece I quoted is relevant, and it
essentially gets to the point:  The original problem isn't well posed.
It makes no sense *both* to say the sources are trusted *and* to say
that they may not deliver the expected entropy.  If I know the entropy of
all the sources, that inherently includes some notion of trust - call
it source trust:  I can trust them to have at least that much entropy.
I have to have that trust, because there is no way to measure the
(cryptographic) entropy.  (And don't say I can analyze how the source
is constructed, because then I'm left with the need to trust that what
I analyzed is actually still physically there - maybe an attacker has
replaced it!)

Given such sources it's easy to *state* what it would mean for them to
be independent:  Just that if I consider the source produced by
concatenating all the individual sources, its entropy is the sum of the
entropies of the constituents.  Of course, that's an entropy I can again
measure - at least in the limit - in the information theoretical sense,
but not in the cryptographic sense; another aspect of trust - call it
independence trust - has to enter here.
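As a toy illustration of this additivity criterion (two one-bit sources standing in for the 160-bit members):

```python
import math
from itertools import product

def shannon(dist):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Two independent fair bits: joint entropy is 1 + 1 = 2 bits.
indep = {(a, b): 0.25 for a, b in product((0, 1), repeat=2)}
assert abs(shannon(indep) - 2.0) < 1e-9

# Perfectly correlated bits (the ten-deck example in miniature):
# each marginal has 1 bit, but the joint still has only 1 bit.
corr = {(0, 0): 0.5, (1, 1): 0.5}
assert abs(shannon(corr) - 1.0) < 1e-9
```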

All that's fine, but how then are we supposed to construe a question
about what happens if some of the sources fail to deliver their rated
entropy?  That means that source trust must be discarded.  (Worse, as
the original problem is posed, I must discard source trust for *some
unknown subset of the sources*.)  But given that, why should I assume
that independence trust remains?

If the failure model is restricted to, say, physical failures of
sources implemented as well-isolated modules,
it might well be a reasonable thing to do.  In fact, this is essentially
the independent-failure model we use all the time in building reliable
physical systems.  Of course, as we know well, that model is completely
untenable when the concern is hostile attack, not random failure.  What
do you replace it with?

Consider the analogy with reliable distributed systems.  People have
basically only dealt with two models:

1.  The fail-stop model.  A failed module stops interacting.
2.  The Byzantine model.  Failed modules can do anything
including cooperating by exchanging arbitrary
information and doing infinite computation.

The Byzantine model is bizarre sounding, but it's just a way of expressing
a worst-case situation:  Maybe the failed modules act randomly but just by
bad luck they do the worst possible thing.

We're trying to define something different here.  Twenty-odd years ago,
Mike Fischer at Yale proposed some ideas in this direction (where
modules have access

### Re: once more, with feeling.

On Sun, 21 Sep 2008, Eric Rescorla wrote:
|   - Use TLS-PSK, which performs mutual auth of client and server
|   without ever communicating the password
|  Once upon a time, this would have been possible, I think.  Today,
|  though, the problem is the user entering their key in a box that is
|  (a) not remotely forgeable by a web site that isn't using the
|  browser's TLS-PSK mechanism; and (b) will *always* be recognized by
|  users, even dumb ones.  Today, sites want *pretty* login screens,
|  with *friendly* ways to recover your (or Palin's) password, and not
|  just generic grey boxes.  Then imagine the phishing page that
|  displays an artistic but purely imaginary login screen, with a
|
| This is precisely the issue.
|
| There are any number of cryptographic techniques that would allow
| clients and servers to authenticate to each other in a phishing
| resistant fashion, but they all depend on ensuring that the
| attacker cannot convince the user to type their password into some dialog
| that the attacker controls. That's the challenging technical
| issue, but it's UI, not cryptographic.
The situation today is (a) the decreasing usefulness of passwords -
those anyone has a chance of remembering are just too guessable in the
face of the kinds of massive intelligent brute force that's possible
today and (b) the inherently insecure password entry mechanisms that
we've trained people to use.  Perhaps the only solution is to attack
both problems at the same time:  Replace passwords with something
else, and use a different, more secure input mechanism at the same
time.

The problem is what that something else should be.  Keyfobs with
one-time passwords are a good solution from the pure security point
of view, but (a) people find them annoying; (b) when used with
existing input mechanisms, as they pretty much universally are, are
subject to MITM attacks.  The equivalent technology on a USB plugin
is much easier on the user in some circumstances, but is subject to
some bad semantic attacks, as discussed here previously.  Also, it's
not a great solution for mobile devices.

DoD/government uses smartcards, but that's probably not acceptable to
the broad population.  There's been some playing around with cellphones
playing the role of smartcard, but cellphones are not inherently secure
either.  There's also the related problem of scalability to multiple
providers:  I only need one DoD card, which might be acceptable, but if
every secure web site wants to give me their own, I have a problem.  Of
course, various federated identity standards are already battling it
out, but uptake seems limited.  Besides, that can only be one element of
the solution - if I use a traditional password to get to my federated
identity token, I've made the old problem much worse, not better.

Some laptops and keyboards and even encrypted USB memory sticks are
getting fingerprint scanners as standard hardware.  *If* these
actually work as advertised - not a good bet, based on history so
far - these could be an interesting input mechanism.  Since there
are no expectations today that the fingerprint data will be
available to any web site that asks, one could perhaps establish
a standard for controlling this in an appropriate way, with a
built-in, unforgeable display.  With microphones and, increasingly,
cameras as widely-available components, one might define a similar
special input mode around them and look to voice or face recognition.

Or maybe we could even leverage the increasing interest in special
outside-the-main-OS basic displays one sees on laptops.  (I'm sure it
just thrills Microsoft to see Dell putting a tiny Linux implementation
in each laptop.)

These are all just possibilities, and whether any of them (or some other
approach) actually gains broad acceptance is, of course, totally up in
the air.  Right now, while in the aggregate the problems with ID theft
are bad and getting worse, relatively few individuals feel the pain,
nor is there much in the way to offer them.  Until one or the other
of these changes - and most likely, both - the old password in some
window or another model will likely stick around.

-- Jerry




On Fri, 19 Sep 2008, Barney Wolff wrote:

| Date: Fri, 19 Sep 2008 01:54:42 -0400
| From: Barney Wolff [EMAIL PROTECTED]
| To: EMC IMAP [EMAIL PROTECTED]
| Cc: Cryptography cryptography@metzdowd.com
|
| On Wed, Sep 17, 2008 at 06:39:54PM -0400, EMC IMAP wrote:
|  Yet another web attack:
|
|  As I understand the attack, it's this:  Cookies can be marked Secure.
|  A Secure cookie can only be returned over an HTTPS session.  A cookie
|  not marked Secure can be returned over any session.  So:  If a site
|  puts security-sensitive data into a non-Secure cookie, an attacker who
|  can spoof DNS or otherwise grab sessions can send an HTTP page
|  allegedly from the site that set the cookie asking that it be returned
|  - and it will be.
|
| Why on earth would anyone put security-sensitive data in a cookie,
| unencrypted?  It's the server talking to itself or its siblings, after
| all, and it's vulnerable to attack on the client's machine.
a)  It depends on who you think it has to be secure against.  Typical
reasoning:  If it's effectively the *client's* information, why/from
whom do I need to protect it while it's on the *client's* machine?
After all, it can only be seen by the client and me.

b)  The way this attack is actually likely to be used is to steal a
"logged-in session".  If I have the cookie, and can MITM the stream
to the server, I can act within the "logged-in session".  I don't
need to be able to decrypt the cookie - the real client has no
need to (but in fact there isn't much point in encrypting, while at
rest, the nonce that identifies the "logged-in session".)

I put "logged-in session" in quotes in agreement with James Donald's
message on this subject.
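For concreteness, the Secure attribute is just a flag on the Set-Cookie header, and Python's stdlib cookie module shows its shape (the nonce value here is invented):

```python
from http.cookies import SimpleCookie

# A session nonce marked Secure is only returned over HTTPS; an
# unmarked cookie will be handed to a spoofed plain-HTTP page too.
c = SimpleCookie()
c["session"] = "nonce-1234"      # invented value
c["session"]["secure"] = True

print(c.output())   # Set-Cookie: session=nonce-1234; Secure
```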

-- Jerry




### Re: street prices for digital goods?

On Thu, 11 Sep 2008, Peter Gutmann wrote:
| ...I've been (very informally) tracking it for awhile, and for generic
| data (non-Platinum credit cards, PPal accounts, and so on) it's
| essentially too cheap to meter, you often have to buy the stuff
| in blocks (10, 20, 50 at a time) to make it worth the seller's while.
But this implies there is something very wrong with our current
assumptions about the value of this data.

If, as is commonly assumed, hackers today are in this as a business,
and are driven by profit, then the value of a credit card number is
determined exactly by the most money you can turn it into, by any
approach.  If I have a credit card number, I can turn it into money by
selling it, or by using it myself to buy and resell goods.

Now, there are costs involved with buying goods, receiving them,
and reselling them; and also there's some probability that the
credit card providers will notice my activity and block my
transactions.  (There's of course also the possibility that I
get caught and sent to jail!)  If the costs of doing this business
are fixed, I can drive them to zero by using enough credit cards,
and there are clearly plenty around - but see below.  So the only
significant issue is variable costs:  For every dollar I charge on
a card, I only get back some fraction of a dollar, based on my per-
transaction costs and the probability of my transaction getting
rejected.  This probability grows with the size of the transaction,
so the actual optimal strategy is complicated.
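The optimization above can be sketched with a toy model - every number here is invented for illustration, not taken from real fraud data:

```python
def expected_net(amount, fixed_cost=5.0):
    """Toy expected net return from one fraudulent charge.

    The blocking probability simply grows linearly with the
    transaction size; all parameters are invented.
    """
    p_block = min(1.0, amount / 2000.0)
    return amount * (1 - p_block) - fixed_cost

# The optimum is an interior point: big charges get blocked, tiny
# charges are eaten by fixed costs.
best = max(range(10, 2000, 10), key=expected_net)
print(best)   # 1000 in this toy model
```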

Still ... if you can *buy* a credit card number for a couple
of cents, its actual *value* can't be much higher.  Which
implies that something in the overall system makes it difficult
to monetize that card.  I'm not sure what all of them are, but
we can guess at some.  The card providers *must* be rather good
at blocking cards fairly quickly - at least when large amounts
of money are involved.  That is:  The probability of being
blocked must go up very rapidly with the size of the transaction,
forcing the optimal transaction size to be small.  If it's
small enough, then fixed costs per transaction become significant.
And something blocks the approach of doing many small transactions
against many cards - presumably because these have to be done
in the real world, which means you need many people going to many
vendors picking up all kinds of physical objects.

Whatever the causes ... if it's cheap to *buy* credit card
numbers, they must not really be worth all that much!

-- Jerry




### Re: security questions

|  | My theory is that no actual security people have ever been involved,
|  | that it's just another one of those stupid design practices that are
|  | perpetuated because nobody has ever complained or that's what
|  | everybody is doing.
|
|  Your theory is incorrect. There is considerable analysis on what
|
| Can you reference it please?  There has been some analysis on the
| entropy of passphrases as a password replacement, but it is not
| relevant.
RSA sells a product that is based on such research.  I don't have
references; perhaps someone else does.

I think the accurate statement here is:  There's been some research on
this matter, and there are some reasonable implementations out there;
but there are also plenty of me-too implementations that are quite
worthless.

In fact, I've personally never run into an implementation that I would
not consider worthless.  (Oddly, the list of questions that started
this discussion is one of the better ones I've seen.  Unfortunately,
what it demonstrates is that producing a useful implementation with
a decent amount of total entropy probably involves more setup time
than the average user will want to put up with.)

|  constitute good security questions based on the anticipated entropy
|  of the responses. This is why, for example, no good security
|  question has a yes/no answer (i.e., 1-bit). Aren't security
|  questions just an automation of what happens once you get a customer
|  service representative on the phone? In some regards they may be
|  more secure as they're less subject to social manipulation (i.e., if
|  I mention a few possible answers to a customer support person, I can
|  probably get them to confirm an answer for me).
| The difference is that when you are interfacing with a human, you have
| to go through a low-speed interface, namely, voice. In that respect, a
| security question, coupled with a challenge about recent transactions,
| makes for adequate security.  The on-line version of the security
| question is vulnerable to automated dictionary attacks.
Actually, this cuts both ways.  Automated interfaces generally require
exact matches; at most, they will be case-blind.  This is appropriate
and understood for passwords.  It is inappropriate for what people
perceive as natural-text questions and answers.  When I first started
running into such systems, when asked for where I was born, I would
answer New York - or maybe New York City, or maybe NY or NYC.
I should have thought about the consequences of providing a natural-
text answer to a natural-text question - but I didn't.  Sure enough,
when I actually needed to reset my password - I ended up getting locked
out of the system because there was no way I could remember, 6 months
later, what exact answer I'd given.

A human being is more forgiving.  This makes the system more vulnerable
to social engineering - but it makes it actually useable.  The
tradeoff here is very difficult to make.  By its nature, a secondary
access system will be rarely used.  People may, by dint of repetition,
learn to parrot back exact answers, even a random bunch of characters,
if they have to use them every day.  There's no way anything but a
fuzzy match on meaning will work for an answer people have to give
once every couple of months - human memory simply doesn't work that
way.
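A sketch of what a more forgiving automated matcher might look like - the alias table and rules here are hypothetical, not taken from any real system, and exact aliases are of course far short of true fuzzy matching on meaning:

```python
import re

# Hypothetical alias table; a real deployment would need a much
# richer normalization.
ALIASES = {"ny": "new york", "nyc": "new york",
           "new york city": "new york"}

def normalize(answer):
    """Case-fold, strip punctuation, collapse spaces, map aliases."""
    a = re.sub(r"[^\w\s]", "", answer).casefold().strip()
    a = re.sub(r"\s+", " ", a)
    return ALIASES.get(a, a)

def answers_match(stored, given):
    return normalize(stored) == normalize(given)

assert answers_match("New York", "N.Y.C.")
assert answers_match("new york city", "NY")
assert not answers_match("Boston", "NYC")
```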

I learned my lesson and never provide actual answers to these questions
any more.
-- Jerry




### RE: OpenID/Debian PRNG/DNS Cache poisoning advisory

On Fri, 8 Aug 2008, Dave Korn wrote:
|  Isn't this a good argument for blacklisting the keys on the client
|  side?
|
| Isn't that exactly what "Browsers must check CRLs" means in this
| context anyway?  What alternative client-side blacklisting mechanism
| do you suggest?
Since the list of bad keys is known and fairly short, one could
explicitly check for them in the browser code, without reference to
any external CRL.

Of course, the browser itself may not see the bad key - it may see a
certificate for something that *contains* a bad key.  So such a check
would not be complete.  Still, it couldn't hurt.

One could put similar checks everywhere that keys are used.  Think of it
as the modern version of code that checks for and rejects DES weak and
semi-weak keys.  The more code out there that does the check, the faster
bad keys will be driven out of use.
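In the spirit of the DES weak-key check, such a test could be as simple as a hard-coded set lookup. A sketch - the fingerprint scheme and the single entry here are placeholders (SHA-256 of the string "test"), not real Debian weak-key fingerprints:

```python
import hashlib

# Hypothetical blacklist; a real one would ship the known weak-key
# fingerprints.
BAD_KEY_FINGERPRINTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(key_bytes):
    return hashlib.sha256(key_bytes).hexdigest()

def reject_if_blacklisted(key_bytes):
    """Raise if the key is on the known-bad list; cheap enough to
    run everywhere a key is accepted."""
    if fingerprint(key_bytes) in BAD_KEY_FINGERPRINTS:
        raise ValueError("known-compromised key")

reject_if_blacklisted(b"some-unrelated-key")   # passes silently
```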

-- Jerry




### Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

|   Funnily enough I was just working on this -- and found that we'd
|   end up adding a couple megabytes to every browser.  #DEFINE
|   NONSTARTER.  I am curious about the feasibility of a large bloom
|   filter that fails back to online checking though.  This has side
|   effects but perhaps they can be made statistically very unlikely,
|   without blowing out the size of a browser.
|  Why do you say a couple of megabytes? 99% of the value would be
|  1024-bit RSA keys. There are ~32,000 such keys. If you devote an
|  80-bit hash to each one (which is easily large enough to give you a
|  vanishingly small false positive probability; you could probably get
|  away with 64 bits), that's 320KB.  Given that the smallest Firefox
|  [...]
You can get by with a lot less than 64 bits.  People see problems like
this and immediately think birthday paradox, but there is no birthday
paradox here:  You aren't looking for pairs in an ever-growing set,
you're looking for matches against a fixed set.  If you use 30-bit
hashes - giving you about a 120KB table - the chance that any given
key happens to hash to something in the table is one in a billion,
now and forever.  (Of course, if you use a given key repeatedly, and
it happens to be that 1 in a billion, it will hit every time.  So an
additional table of known good keys that happen to collide is worth
maintaining.  Even if you somehow built and maintained that table for
all the keys across all the systems in the world - how big would it
get, if only 1 in a billion keys world-wide got entered?)
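The scheme above - a table of truncated hashes plus a small whitelist for the rare good key that happens to collide - might look like this (key material invented for illustration):

```python
import hashlib

def h30(key_bytes):
    """Truncate a strong hash to 30 bits for the compact table."""
    return int.from_bytes(hashlib.sha256(key_bytes).digest()[:4],
                          "big") >> 2

# Hypothetical stand-ins for the ~32,000 weak keys.
bad_keys = [b"weak-key-%d" % i for i in range(32000)]
table = frozenset(h30(k) for k in bad_keys)

# Whitelist for any good key whose truncated hash collides.
known_good_collisions = set()

def suspect(key_bytes):
    return (h30(key_bytes) in table
            and key_bytes not in known_good_collisions)

assert all(suspect(k) for k in bad_keys)   # no false negatives
```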

| You could store {hash, seed} and check matches for false positives
| by generating a key with the corresponding seed and then checking for an
| exact match -- slow, but rare.  This way you could choose your false
| positive rate / table size comfort zone and vary the size of the hash
| accordingly.
Or just go off to one of a number of web sites that have a full table.
Many solutions are possible, when they only need to be invoked very,
very rarely.
-- Jerry

| Nico
| --




### Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

|  You can get by with a lot less than 64 bits.  People see problems
|  like this and immediately think birthday paradox, but there is no
|  birthday paradox here:  You aren't looking for pairs in an
|  ever-growing set, you're looking for matches against a fixed set.
|  If you use 30-bit hashes - giving you about a 120KB table - the
|  chance that any given key happens to hash to something in the table
|  is one in a billion, now and forever.  (Of course, if you use a
|  given key repeatedly, and it happens to be that 1 in a billion, it
|  will hit every time.  So an additional table of known good keys
|  that happen to collide is worth maintaining.  Even if you somehow
|  built and maintained that table for all the keys across all the
|  systems in the world - how big would it get, if only 1 in a billion
|  keys world-wide got entered?)
| I don't believe your math is correct here. Or rather, it would
| be correct if there was only one bad key.
|
| Remember, there are N bad keys and you're using a b-bit hash, which
| has 2^b distinct values. If you put N' entries in the hash table, the
| probability that a new key will have the same digest as one of them is
| N'/(2^b). If b is sufficiently large to make collisions rare, then
| N'=~N and we get N/(2^b).
|
| To be concrete, we have 2^15 distinct keys, so the probability of a
| false positive becomes (2^15)/(2^b) = 2^(15-b).  To get that probability
| below 1 in a billion, b-15 = 30, so you need about 45 bits. I chose 64
| because it seemed to me that a false positive probability of 2^{-48}
| or so was better.
You're right, of course - I considered 32,000 to be vanishingly small
compared to the number of hash values, but of course it isn't.  The
perils of looking at one number just as decimal and the other just in
exponential form.

In any case, I think it's clear that even for extremely conservative
false hit ratios, the table size is quite reasonable.  You wouldn't
want the table on your smart card or RFID chip, perhaps, but even
a low-end smartphone would have no problems.
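Spelling out the corrected arithmetic:

```python
# With N bad keys hashed to b bits, a fresh key collides with
# probability about N / 2**b.
N = 2**15                       # roughly 32,000 weak keys

def false_positive(b):
    return N / 2**b

assert false_positive(45) == 2**-30     # about 1 in a billion
assert false_positive(30) == 2**-15     # 1 in 32,768 -- too high

# And the table stays small even at 45 bits per entry:
print(N * 45 / 8 / 1024)    # 180.0 KB
```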

-- Jerry




### Re: security questions

On Thu, 7 Aug 2008, John Ioannidis wrote:
| Does anyone know how this security questions disease started, and
| why it is spreading the way it is?  If your company does this, can you
| find the people responsible and ask them what they were thinking?
|
| My theory is that no actual security people have ever been involved,
| and that it's just another one of those stupid design practices that
| are perpetuated because nobody has ever complained or that's what
| everybody is doing.
As best I can determine - based on external observation, not insider
information - the evolution went something like this:

- It used to be that when you needed to access an account by
phone, whoever you called just believed you were
who you said you were.

- Social engineering of such calls started to become a pain,
so something else was needed.  Call centers started to
ask a few simple questions:  mother's maiden name,
birthday, last four digits of SSN.  This was data that
was usually available anyway - SSN's have been used as
account id's for years, birthday and mother's maiden
name have been standard disambiguators among people with
similar names forever.

- In parallel, passwords started to infiltrate everyday life.
It's hard to recall that before ATM's became widely
used (mid to late '70's) there would really have been
no place the average consumer ever used a password.
Account numbers, sure - but they came pre-printed on
your statement or credit card and no one expected to
memorize them - and no one really thought of them as
secrets.  Passwords, though, get forgotten, and
someone has to be able to reset
them.  Of course, before resetting a password, you have
to validate that the person asking for the reset is who
he said he is.  The cheapest approach is to use the
validation system you already have:  Those simple
security questions about birthdays and mothers.

- Password resetting became a significant cost; paying people
to talk on the phone to some idiot customer who's managed
to forget his password for the 3rd time in a month is
expensive.  So password reset services moved on-line.
But now identity validation became more of an issue:  It
was always assumed (with little justification) that it
was hard to fool a customer service guy into believing
you were someone else.  But a Web page?  You need to
provide *something* that a machine can check.
Initially, the same information that the humans check
was used - but in plain text on the screen, that felt
weak.  So ... why not have the user provide answers to a
couple of security questions that the program can then
use to validate him before assigning him a new password?

- Fast forward to a couple of years ago.  Identity theft is
becoming big business.  Most of that is due to really
bad security practices - laptops with tens of thousands
of unencrypted account records left in coffee shops,
unencrypted WiFi used to transfer credit card info at
large stores - but that's too embarrassing to talk about.
Various agencies, government and otherwise, get into the
act and demand accountability and best practices.
One best practice that gets written into actual
regulation in the banking business is two-factor
authentication.  That spreads as a best practice -
and your best defense against legal and other
problems is to show you followed the industry's
established best practice.  So now everyone needs to
do two-factor authentication.

- Ah, but just what does two-factor authentication mean?
We in the security biz know, but apparently none of
that makes it into the regs.  So, some company - I'm
sure with sufficient research one could even figure
out who - decides that, for them, two-factor means
a password plus security questions.
Cheap, easy to implement - they probably already have
such a system in place for password resets.  People
are used to it and accept it; no training is needed.
And ... somehow *they convince the regulatory agency
involved that this satisfies the regs*.

- The rest is history.  Everyone must do security questions.

### Re: security questions

On Wed, 6 Aug 2008, Peter Saint-Andre wrote:
| Wells Fargo is requiring their online banking customers to provide
| answers to security questions such as these:
|
| ***
|
| What is name of the hospital in which your first child was born?
| What is your mother's birthday? (MMDD)
| What is the first name of your first roommate in college?
| What is the name of the first street you lived on as a child?
| What year did you start junior high/middle school? (YYYY)
| What is your oldest sibling's nickname?
| What is your dream occupation?
| What is your spouse's nickname?
| In what city was your father born?
| What is the name of the high school you attended?
| What is your best friend's first name?
| What is the name of the junior high/middle school you attended?
| What is the first name of your maternal grandfather (mother's father)?
| What is the name of your favorite childhood superhero?
| In what city did you meet your spouse?
| In what city did your parents meet?
| In what city did you attend high school?
| What is name of the hospital in which you were born?
| What is the last name of your favorite teacher?
| In what city was your maternal grandmother (mother's mother) born?
|
| ***
|
| It strikes me that the answers to many of these questions might be
| public information or subject to social engineering attacks...
These kinds of questions used to bother me.  Then I realized that
*I could lie*.  As long as *I* remember that I answer "What is your
mother's maiden name" with "xyzzy", the site and I can be happy.

Well ... happier, anyway.  The only way to remain sane if you take
this approach is to use the same answer at every site that asks
these security questions.  But that's not good, especially since
most of these sites appear to make the *actual value you specified*
available to their call centers.  This is nice if you can't remember
the exact capitalization you used, but it does, of course, leave more
information that you'd rather not have out there readily accessible.

For Web sites these days, I generate random strong passwords and keep
them on a keychain on my Mac.  Actually, the keychain gets synchronized
automatically across all my Macs using .mac/MobileMe (for all their
flaws).  When I do this, I enter random values that I don't even
record for the security questions.  Should something go wrong, I'm
going to end up on the phone with a rep anyway, and they will have
some other method for authenticating me (or, of course, a clever
social-engineering attacker).
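The generate-and-forget approach is trivial to sketch with Python's secrets module (the question list here is invented):

```python
import secrets

# Each "answer" is random and unguessable, kept in a password
# manager -- or, as in the text, recorded nowhere at all.
questions = ["mother's maiden name", "first pet", "city of birth"]
answers = {q: secrets.token_urlsafe(16) for q in questions}

assert len(set(answers.values())) == len(questions)
```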

The only alternative I've seen to this whole approach is sold by
RSA (owned by EMC; I have nothing to do with the product, but will
note my association with the companies) which authenticates based on
real-world data.  For example, you might be asked where you got
coffee this morning if your credit card shows such a charge.  This
approach is apparently quite effective if used correctly - though
it does feel pretty creepy.  (They were watching me buy coffee?)

-- Jerry




### Re: how bad is IPETEE?

For an interesting discussion of IPETEE, see:

www.educatedguesswork.org/moveabletype/archives/2008/07/ipetee.html

Brief summary:  This is an initial discussion - the results of a
drinking session - that got leaked as an actual proposal.  The
guys behind it are involved with The Pirate Bay.  The goal is
to use some form of opportunistic encryption to make as much
Internet traffic as possible encrypted as quickly as possible -
which puts all kinds of constraints on a solution, which in
turn also necessarily weakens the solution (e.g., without some
required configuration, there's no way you can avoid MITM
attacks) and forces odd compromises.

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



### Re: disks with hardware FDE

On Tue, 8 Jul 2008, Perry E. Metzger wrote:
|  Has anyone had any real-world experience with these yet? Are there
|  standards for how they get the keys from the BIOS or OS? (I'm
|  interested in how they deal with zeroization on sleep and such.)
|
|  Most manufacturer (will) implement the TCG Storage Specification:
|  https://www.trustedcomputinggroup.org/groups/storage/
|
|  Lastly, anyone have any idea of whether the manufacturers are doing
|  the encryption correctly or not?
|
|  I know that Seagate Secure does not use XTS mode, but something CBC
|  based.
|
| Where do they get their IVs from?
I have no idea what they actually *do*, but the obvious way to get an IV
is to use the encryption of the block number.  Guaranteed known to
whoever needs to decrypt the disk block, and unique for each disk block.
(Using the disk block number itself as the IV is actually reasonably
safe, too, though it seems a bit too structured - one can imagine files
which have a leading count or even a copy of the disk block number in
each disk block leading to an initial zero input to the encryption.)
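The first idea - deriving the IV by encrypting the block number under the
disk key - can be sketched as follows.  (A toy sketch only: HMAC-SHA256
truncated to the cipher block size stands in for a real block cipher such
as AES, since the point is just that the IV is a key-dependent, unique,
reproducible function of the block number.)

```python
import hashlib
import hmac

def sector_iv(disk_key: bytes, block_number: int) -> bytes:
    # "Encrypt" the block number under the disk key to get the IV.
    # HMAC-SHA256 truncated to 16 bytes stands in for the drive's
    # actual block cipher here.
    msg = block_number.to_bytes(8, "big")
    return hmac.new(disk_key, msg, hashlib.sha256).digest()[:16]

key = b"\x2a" * 32
# Unique per block, and reproducible by anyone holding the disk key:
assert sector_iv(key, 1000) != sector_iv(key, 1001)
assert sector_iv(key, 1000) == sector_iv(key, 1000)
```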

(I think one of Phil Rogaway's papers suggests this kind of approach for
a safe CBC API:  Given an existing CBC API that takes an IV as input,
instead build one that takes no explicit IV, but (a) maintains an
internal counter; (b) prepends the current counter value to the
supplied input and increments the counter; (c) supplies the underlying
API with an IV of 0.  The modified API can't be abused by accidentally
re-using an IV.)
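A minimal sketch of that counter-prefix construction, with the underlying
CBC routine left abstract (the `cbc_encrypt` parameter is a placeholder
for a real CBC implementation, not a library call):

```python
class CounterPrefixCBC:
    """Wraps an IV-taking CBC encryptor so callers never supply an IV.

    The construction described above: keep an internal counter,
    prepend it to the plaintext, and always hand the underlying
    CBC routine an all-zero IV.
    """

    BLOCK = 16

    def __init__(self, cbc_encrypt):
        # cbc_encrypt(iv: bytes, plaintext: bytes) -> bytes
        self._cbc_encrypt = cbc_encrypt
        self._counter = 0

    def encrypt(self, plaintext: bytes) -> bytes:
        prefix = self._counter.to_bytes(self.BLOCK, "big")
        self._counter += 1
        return self._cbc_encrypt(b"\x00" * self.BLOCK, prefix + plaintext)

# Even with an identity "cipher", repeated encryptions of the same
# plaintext differ, because the counter block can never repeat:
enc = CounterPrefixCBC(lambda iv, pt: pt)
assert enc.encrypt(b"hello") != enc.encrypt(b"hello")
```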

| In general, I feel like the only way to really verify that these
| things are being done correctly is to be able (in software) to read
| the ciphertext and verify that it is encrypted with the right key in
| the right mode. The small amount I've heard about the design leads me
| to worry that this is not actually possible.
Somehow we still haven't learned the lesson that the security can only
come from (a) published, vetted algorithms and modes; (b) a way to check
that the alleged algorithm is what the black box actually implements.

Of course, for all you know it implements the algorithm while hiding a
copy of the key away somewhere, just in case ...  But that's a whole
other problem.
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



### Re: Permanent Privacy - Are Snake Oil Patents a threat?

| ...Obviously patents could be improved by searching further across
| disciplines for prior art and by having more USPTO expertise.  We're
| also seeing a dumbing down of the 'Persons Having Ordinary Skill In
| the Art' as the number of practitioners expand rapidly.
Patent law and its interpretation - like all law - changes over time.
Through much of the early twentieth century, patent law was strongly
biased in favor of large companies.  A small inventor couldn't get any
effective quick relief against even obvious infringements - he had to
fight a long, drawn-out battle, at the end of which he probably didn't
end up with much anyway.  In reaction to such famous cases as the
much-infringed patent on FM radio, the law was changed and reinterpreted
in ways that gave the small inventor much more power.  Unfortunately,
patent trolls eventually made use of those same changes ...

The last couple of decades have seen a series of cases that effectively
gutted the entire notion of "obvious to persons having ordinary skill
in the art".  As often happens with trends like this, if you look back
at the early cases that started the trend, the results may seem
reasonable - but over time, the whole thing gets out of control.

The Supreme Court, in a decision last year (the name and details of
which escape me), pretty much said "This has gone too far."  Specifically,
they said that applying a technique that is well known in one area
to another area may well be obvious and not eligible for patent
protection.  The Supreme Court can only decide on cases brought before
it, but the feeling seems to be that they are signaling a readiness to
breathe new life into the non-obviousness requirement for a patent.
It'll be years before we see exactly how this all settles out.

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



### Securing the Network against Web-based Proxies


Ah, where the web is going.  8e6 Technologies sells a hardware box
that it claims does signature analysis to detect HTTP proxies and
blocks them.  It can also block HTTPS proxies that do not have a
"valid certificate" (whatever that means), as well as do such things
as block IM, force Google and Yahoo searches to be done in "safe"
mode, and so on.

They're marketing this to the education community (with the typical
horror stories of the problems your school district can run into
if students use proxies to get around your rules).

What I find most interesting, though, is that the company, based
in California, has an overseas presence in exactly two other
countries:  Taiwan and China.  One doesn't need much imagination
to see what market they are going after there ...

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



### Re: Strength in Complexity?

On Wed, 2 Jul 2008, Peter Gutmann wrote:

| Date: Wed, 02 Jul 2008 12:08:18 +1200
| From: Peter Gutmann [EMAIL PROTECTED]
| To: [EMAIL PROTECTED], [EMAIL PROTECTED]
| Cc: cryptography@metzdowd.com, [EMAIL PROTECTED]
| Subject: Re: Strength in Complexity?
|
| Perry E. Metzger [EMAIL PROTECTED] writes:
|
| No. In fact, it is about as far from the truth as I've ever seen.
| No real expert would choose to deliberately make a protocol more
| complicated.
|
| IPsec.  Anything to do with PKI.  XMLdsig.  Gimme a few minutes and
| I can provide a list as long as your arm.  Protocol designers *love*
| complexity.  The more complex and awkward they can make a protocol,
| the better it has to be.
The cynical among us might rephrase that as:  "The more complex and
awkward they can make a protocol, the better it will be at generating
future consulting work."  :-(

(I don't think that applies to your list, where the root causes have
more to do with design-by-committee and the consequent need to make
everyone happy.)
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



### Re: Ransomware

|  The key size would imply PKI; that being true, then the ransom may
|  be for a session key (specific per machine) rather than the master
|  key it is unwrapped with.
|
| Per the computerworld.com article:
|
|Kaspersky has the public key in hand -- it is included in the
|Trojan's code -- but not the associated private key necessary to
|unlock the encrypted files.
|
|
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9094818
|
| This would seem to imply they already verified the public key was
| constant in the trojan and didn't differ between machines (or that
| I'm giving Kaspersky's team too much credit with my assumptions).
Returning to the point of the earlier question - why doesn't someone
pay the ransom once and then use the key to decrypt everyone's files:
Assuming, as seems reasonable, that there is a session key created
per machine and then encrypted with the public key, what you'd get
for your ransom money is the decryption of that one session key.
Enough to decrypt your files, not useful on any other machine.
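The arithmetic of why paying once doesn't generalize can be seen in a toy
RSA sketch (tiny, insecure parameters, purely for illustration): each
victim's session key is wrapped under the public key shipped in the
trojan, and what a payment buys is one unwrapping, never the private
exponent itself.

```python
# Toy RSA with tiny parameters -- illustration only, not secure.
p, q = 61, 53
n = p * q                          # modulus, shipped in the trojan
e = 17                             # public exponent, also shipped
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent: blackmailer only

victim_a_key = 42                  # per-machine session keys (as ints < n)
victim_b_key = 1999

wrapped_a = pow(victim_a_key, e, n)   # stored on victim A's machine
wrapped_b = pow(victim_b_key, e, n)   # stored on victim B's machine

# Paying the ransom buys victim A this one value...
assert pow(wrapped_a, d, n) == victim_a_key
# ...which is useless for decrypting victim B's files.
assert victim_a_key != victim_b_key
```

(The three-argument `pow` with exponent `-1` computes a modular inverse,
available in Python 3.8 and later.)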

There's absolutely no reason the blackmailer should ever reveal the
actual private key to anyone (short of rubber-hose treatment of some
sort).
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



### RE: Ransomware

|   Why are we wasting time even considering trying to break the public key?
|
|   If this thing generates only a single session key (rather, a host key)
| per machine, then why is it not trivial to break?  The actual encryption
| algorithm used is RC4, so if they're using a constant key without a unique
| IV per file, it should be trivial to reconstruct the keystream by XORing any
| two large files that have been encrypted by the virus on the same machine.
This is the first time I've seen any mention of RC4.  *If* they are
using RC4, and *if* they are using it incorrectly - then yes, this
would certainly work.  Apparently earlier versions of the same malware
made even more elementary cryptographic mistakes, and the encryption
was easily broken.  But they learned enough to avoid those mistakes
this time around.  Even if they screwed up on cipher and cipher mode
this time - expect them to do better the next time.
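The attack described above is the classic two-time pad: XORing two
ciphertexts produced with the same keystream cancels the keystream
entirely.  A sketch (a hash-based stream stands in for RC4 here; the
principle is identical for any stream cipher reused without a per-file
key or IV):

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Stand-in stream cipher (SHA-256 in counter mode), not real RC4.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"one key, every file, no IV"
p1 = b"contents of the first encrypted file"
p2 = b"contents of the second encrypted one"
c1 = xor(p1, keystream(key, len(p1)))
c2 = xor(p2, keystream(key, len(p2)))

# The keystream cancels out; known plaintext in one file now reads
# the other file directly:
assert xor(c1, c2) == xor(p1, p2)
```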

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



### Re: A slight defect in the truncated HMAC code...

| SNMPv3 Authentication Bypass Vulnerability
|
|Original release date: June 10, 2008
|Last revised: --
|Source: US-CERT
|
| Systems Affected
|
|  * Multiple Implementations of SNMPv3
|
| Overview
|
|   A  vulnerability in the way implementations of SNMPv3 handle specially
|   crafted packets may allow authentication bypass.  [Based on shortened
|   authentication]
There's another ... issue with SNMPv3, this time with encryption keys.

The SNMPv3 standard defines a mechanism for converting an entered pass
phrase into an AES key.  The standard also specifies a(n appropriate)
minimum length for the pass phrase.

SNMP agents running in devices sold by a certain, err, very large vendor
do not enforce the minimum length, and it is in fact common to see
devices configured using short pass phrases.  Software that needs to
talk to such devices, and which *does* enforce the requirements of the
standard, will of course be unable to do so.  (Well, I suppose there is
almost certainly an equivalent pass phrase that's long enough, but
finding it is impractical if the key derivation function is any good.)  So
such software must necessarily ignore the security requirements of the
standard as well.
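For reference, the derivation the standard defines looks roughly like
this - a sketch in the spirit of RFC 3414's password-to-key algorithm
(details should be checked against the RFC): hash a megabyte of the
repeated passphrase, then "localize" the result with the agent's engine
ID.

```python
import hashlib

def password_to_key(password: bytes, engine_id: bytes) -> bytes:
    # Sketch of an RFC 3414-style derivation (details approximate):
    # hash 1 MB of the passphrase repeated end to end...
    stretched = (password * (2**20 // len(password) + 1))[:2**20]
    ku = hashlib.sha1(stretched).digest()
    # ...then bind ("localize") the key to this particular agent.
    return hashlib.sha1(ku + engine_id + ku).digest()

# The standard requires pass phrases of at least 8 characters; a
# compliant implementation would reject this before deriving anything.
passphrase = b"maplesyrup"
assert len(passphrase) >= 8
key = password_to_key(passphrase, bytes.fromhex("000000000000000002"))
assert len(key) == 20  # SHA-1 output; truncated/extended per cipher
```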

Not only is it hard to define technically correct solutions to security
problems ... it's damn difficult to get them fielded!

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



### Ransomware


Computerworld reports:

http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9094818

on a call from Kaspersky Labs for help breaking encryption used by some
ransomware:  Code that infects a system, uses a public key embedded in
the code to encrypt your files, then tells you you have to go to some
web site and pay for the decryption key.

Apparently earlier versions of this ransomware were broken because of a
faulty implementation of the encryption.  This one seems to get it
right.  It uses a 1024-bit RSA key.  Vesselin Bontchev, a long-time
antivirus developer at another company, claims that Kaspersky is just
looking for publicity:  The encryption in this case is done right and
there's no real hope of breaking it.

It appears the speculations have now become reality.

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



### Re: Ransomware



On Mon, 9 Jun 2008, John Ioannidis wrote:

| Date: Mon, 09 Jun 2008 15:08:03 -0400
| From: John Ioannidis [EMAIL PROTECTED]
| To: Leichter, Jerry [EMAIL PROTECTED]
| Cc: cryptography@metzdowd.com
| Subject: Re: Ransomware
|
| Leichter, Jerry wrote:
|  Computerworld reports:
|
|
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9094818

|
|
| This is no different than suffering a disk crash.  That's what backups are
| for.
|
| /ji
|
| PS: Oh, "backups," you say.
Bontchev's comment as well.

Of course, there is one way this can be much worse than a disk crash:  A
clever bit of malware can sit there silently and encrypt files you don't
seem to be using much.  By the time it makes its ransom demands, you
may find you have to go back days or even weeks in your backups to get
valuable data back.

Even worse, targeted malware could attack your backups.  If it encrypted
the data on the way to the backup device, it could survive silently for
months, by which time encrypting the live data and demanding the
ransom would be a very credible threat.  (Since many backup programs
already offer encryption, hooking it might just involve changing the
key.  It's always so nice when your opponent provides the mechanisms
needed to attack him ...)
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



### Re: the joy of enhanced certs

On Wed, 4 Jun 2008, Perry E. Metzger wrote:
| As some of you know, one can now buy Enhanced Security certificates,
| and Firefox and other browsers will show the URL box at the top with a
| special distinctive color when such a cert is in use.
|
| Many of us have long contended that such things are worthless and
| prove only that you can pay more money, not that you're somehow more
| trustworthy.
|
| An object lesson in this just fell in my lap -- I just got my first
| email from a spammer that links to a web site that uses such a cert,
| certified by a CA I've never heard of (Starfield Technologies, Inc.)
| Doubtless they sell discount Enhanced Security certs so you don't
| have to worry about paying more money either. I haven't checked the
| website for drive by malware, but I wouldn't be shocked if it was
| there.
|
| I'm thinking of starting a CA that sells super duper enhanced
| security certs, where we make the company being certified sign a
| document in which they promise that they're absolutely trustworthy.
| To be really sure, we'll make them fax said document in on genuine
This message, shortly after our discussion of trust, makes me think of
the applicability of an aspect of linguistic theory, namely speech acts.
Speech acts are expressions that go beyond simply communication to
actually produce real-world effects.  The classic example:  If I say
"John and Sarah are married", that's a bit of communication; I've passed
along to listeners my belief in the state of the world.  When a
minister, in the right circumstances, says "John and Sarah are married",
those words actually create the reality:  They *are* now married.

There are many more subtle examples.  A standard example is that of
a promise:  To be effective as a speech act, the promise must be
made in a way that makes it clear that the promiser is undertaking
some obligation, and the promiser must indeed take on that obligation.
There's a whole cultural context involved here in what is needed for
an obligation to exist and what it actually means to be obligated.
(Ultimately, the theory gets pushed to the point where it breaks;
but we don't have to go that far.)

In human-to-human communication, we naturally understand and apply the
distinction between speech acts and purely communicative speech.  It's
not that we can't be fooled - a person who speaks with authority is
often taken to have it, which may allow him to create speech acts he
should not be able to - but this is relatively rare.

When exchanging data with a machine, the line between communication and
speech acts gets very blurry.  (You can think of this as the blurry line
between data and program.)  When I go into a store and ask for
information, I see myself and the salesman as engaging in pure
communication.  There are definite, well-understood ways - socially and
even legally defined steps - that identify when I've crossed over into
speech acts and have, for example, taken on an obligation to pay for
something.  When, on the other hand, I look at a Web site, things are
not at all clear.  From my point of view, the data coming to my screen
is purely communication to me.  From the computer's point of view, the
HTML is all speech acts, causing the computer to take some actions.
My clicks are all speech acts to the server.  Problems arise when what
I see as pure communication is somehow transformed, without my consent
or even knowledge, into speech acts that implicate *me*, rather than my
computer.  This happens all too easily, exactly because the boundary
between me and my computer is so permeable, in a Web world.

Receiving an SSL cert, in the proper context (corresponds to the URL
I typed, signed by a trusted CA), is supposed to be a speech act to
me as a human being:  It's supposed to cause me to believe that I've
reached the site I meant to reach.  (My machine, of course, doesn't
care - it has no beliefs and has nothing at risk.)  The reason the model
is so appealing is that it maps to normal human discourse.  If my friend
tells me "I'll bring dinner", I don't cook something while waiting for
him to arrive.

Unfortunately, as we've discussed here many times, the analogy is
deeply, fundamentally flawed.  SSL certs don't really work like trusted
referrals from friends, and the very familiarity of the transactions is
what makes them so dangerous:  It makes it too easy for us to treat
something as a speech act when we really shouldn't.

Enhanced security certs simply follow the same line of reasoning.  They
will ultimately prove just as hazardous.

Going back to promises as speech acts:  When a politician promises to
improve the economy, we've all come to recognize that, although that's
in the *form* of a promise, it doesn't actually create any obligation.
Improving the economy isn't something anyone can actually do - even if
we could agree on what it means.  Such a promise is simply a way of
saying "I think the economy should be ..."

### Re: Protection of mail at rest

| There's an option 2b that might be even more practical: an S/MIME or
| PGP/MIME forwarder.  That is, have a trusted party receive your mail,
| but rather than forwarding it intact encrypt it and then forward it to
Excellent idea!  I like it.

Of course, it's another piece of a distributed solution that you need
to keep running.  It would make for an interesting third-party
service.  (On the surface, letting a third party run this for you
seems hazardous, but as always your stuff is exposed on the way
to the forwarder whatever you do ...)

A forwarder like this as a pre-packaged EC2 VM, perhaps?

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



### Protection of mail at rest


At one time, mail delivery was done to the end-user's system, and all
mail was stored there.  These days, most people find it convenient to
leave their mail on an IMAP server:  It can be accessed from anywhere,
it can be on a system kept under controlled conditions (unlike a
laptop), and so on.  In fact, most people these days - even the
technically savvy - not only leave their mail on an IMAP server,
but let some provider deal with the headaches of maintaining that
server.

So, most people's mail spends most of its life sitting on a disk owned,
managed, and controlled by some third party, whose responsibilities, not
to mention abilities, for keeping that stuff secure are unclear to say
the least.  On top of that, the legal protections for data held by a
third party are limited.

We have mechanisms for providing end-to-end encryption of messages.
Messages sent using, say, S/MIME are encrypted on the IMAP server
just as they are out on the net.  But this only helps for mail
exchanged between correspondents who both choose to use it.

Suppose I ask for a simpler thing:  That my mail, as stored in my
IMAP server, spends most of its life encrypted, inaccessible even
to whoever has access to the physical media on which the server
stores its mail.

Now, this is a funny goal.  If mail arrives unencrypted, anyone with
access to the data stream can copy it and do what they like.  It will
inevitably be buffered, even likely stored on a disk, in the raw,
unencrypted form.  We explicitly leave dealing with this out of the
equation - only end-to-end encryption can deal with it.

Here are two ways of implementing something in this direction:

1.  Client only.  The client, whenever it sees a new message,
    (a) retrieves it; (b) encrypts it with the user's key;
    (c) stores the encrypted version back on the server;
    (d) deletes the unencrypted version.  The client can
    put the encrypted messages in a different folder, or
    it can mark them with a header line.

2.  Server-assisted.  The client gives the server its public
    key.  When a message arrives at the server, the
    server (a) generates a session key; (b) encrypts
    the message using the session key; (c) encrypts
    the session key with the client's public key;
    (d) attaches the encrypted session key to the
    encrypted message; (e) stores the encrypted
    message.  The necessary work for the client is
    obvious.

In each case, one would probably choose some headers to encrypt
separately - e.g., the subject - so that one could more easily pull
them out without decrypting the whole message.

Obviously, approach 2 greatly decreases the time that messages may
hang around unencrypted; but approach 1 can be implemented without
any cooperation from the IMAP provider, which allows it to be rolled
out even for those who use the large providers without having Google
and Hotmail and Yahoo! buy into it.
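The server-assisted flow can be sketched as follows.  (A sketch only:
the `wrap`/`unwrap` callables are placeholders for a real public-key
operation such as RSA, and the symmetric cipher is a hash-based
stand-in.)

```python
import hashlib
import os

def stream_xor(key: bytes, data: bytes) -> bytes:
    # Stand-in symmetric cipher: SHA-256 counter-mode keystream.
    # XOR-based, so the same function both encrypts and decrypts.
    out = bytearray()
    for i, byte in enumerate(data):
        block = hashlib.sha256(key + (i // 32).to_bytes(4, "big")).digest()
        out.append(byte ^ block[i % 32])
    return bytes(out)

def server_store(message: bytes, wrap):
    # (a) generate session key; (b) encrypt message with it;
    # (c) wrap session key with the client's public key;
    # (d) attach wrapped key; (e) store the pair.
    session_key = os.urandom(16)
    return wrap(session_key), stream_xor(session_key, message)

# Placeholder "public-key" pair for the sketch: identity functions.
wrap = unwrap = lambda k: k

wrapped_key, ciphertext = server_store(b"Subject: hi\n\nhello", wrap)
plaintext = stream_xor(unwrap(wrapped_key), ciphertext)
assert plaintext == b"Subject: hi\n\nhello"
```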

Does anyone know of existing work in this area?
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



### FBI Worried as DoD Sold Counterfeit Networking Gear


Note the reference to recent results on spiking hardware.  (From some
IDG journal - I forget which.)

-- Jerry

-- Forwarded message --
FBI Worried as DoD Sold Counterfeit Networking Gear
Stephen Lawson and Robert McMillan, IDG News Service

Friday, May 09, 2008 5:10 PM PDT
The U.S. Federal Bureau of Investigation is taking the issue of
counterfeit Cisco equipment very seriously, according to a leaked FBI
presentation that underscores problems in the Cisco supply chain.

The presentation gives an overview of the FBI Cyber Division's effort to
crack down on counterfeit network hardware, the FBI said Friday in a
statement. It was never intended for broad distribution across the
Internet.

In late February the FBI broke up a counterfeit distribution network,
seizing an estimated US$3.5 million worth of components manufactured in China. This two-year FBI effort, called Operation Cisco Raider, involved 15 investigations run out of nine FBI field offices. According to the FBI presentation, the fake Cisco routers, switches and cards were sold to the U.S. Navy, the U.S. Marine Corps., the U.S. Air Force, the U.S. Federal Aviation Administration, and even the FBI itself. One slide refers to the problem as a critical infrastructure threat. The U.S. Department of Defense is taking the issue seriously. Since 2007, the Defense Advanced Research Projects Agency has funded a program called Trust in IC, which does research in this area. Last month, researcher Samuel King demonstrated how it was possible to alter a computer chip to give attackers virtually undetectable back-door access to a computer system. King, an assistant professor in the University of Illinois at Urbana- Champaign's computer science department, has argued that by tampering with equipment, spies could open up a back door to sensitive military systems. In an interview on Friday, he said the slides show that this is clearly something that has the FBI worried. The Department of Defense is concerned, too. In 2005 its Science Board cited concerns over just such an attack in a report. Cisco believes the counterfeiting is being done to make money. The company investigates and tests counterfeit equipment it finds and has never found a back door in any counterfeit hardware or software, said spokesman John Noh. Cisco is working with law enforcement agencies around the world on this issue. The company monitors its channel partners and will take action, including termination of a contract, if it finds a partner selling counterfeit equipment, he said. Cisco Brand Protection coordinates and collaborates with our sales organizations, including government sales, across the world, and it's a very tight integration. 
The best way for channel partners and customers to avoid counterfeit products is to buy only from authorized channel partners and distributors, Noh said. They have the right to demand written proof that a seller is authorized. The FBI doesn't seem satisfied with this advice, however. According to the presentation, Cisco's gold and silver partners have purchased counterfeit equipment and sold it to the government and defense contractors. Security researcher King believes that the government is better off focusing on detection rather than trying to secure the IT supply chain, because there are strong economic incentives to keep it open and flexible -- even if this means there may be security problems. There are so many good reasons for this global supply chain; I just think there's no way we can secure it. - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]  ### Re: How far is the NSA ahead of the public crypto community? An interesting datapoint I've always had on this question: Back in 1975 or so, a mathematician I knew (actually, he was a friend's PhD advisor) left academia to go work for the NSA. Obviously, he couldn't say anything at all about what he would be doing. The guy's specialty was algebraic geometry - a hot field at the time. This is the area of mathematics that studied eliptic curves many years before anyone realized they had any application to cryptography. In fact, it would be years before anyone on the outside could make any kind of guess about what in the world the NSA would want a specialist in algebraic geometry to do. At the time, it was one of the purest of the pure fields. The friend he used to advise bumped into this guy a few years later at a math conference. He asked him how it felt not to be able to publish openly. The response: When I was working at the university, there were maybe 30 specialists in the world who read and understood my papers. 
There aren't quite as many now, but they really appreciate what I do. -- Jerry - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]  ### Re: It seems being in an explosion isn't enough... On Thu, 8 May 2008, Perry E. Metzger wrote: | Quoting: | |It was one of the most iconic and heart-stopping movie images of |2003: the Columbia Space Shuttle ignited, burning and crashing to |earth in fragments. | |Now, amazingly, data from a hard drive recovered from the fragments |has been used to complete a physics experiment - CXV-2 - that took |place on the doomed Shuttle mission. | | http://blocksandfiles.com/article/5056 | | Now, this article isn't written from a security perspective, but I | think the implications are pretty obvious: quite a bit can happen to a | hard drive before the data is no longer readable. On the other hand ... from a report in Computerworld, we have: [Jon] Edwards [a senior clean room engineer at Kroll Ontrack, which did the recovery work] said the circuit board on the bottom of the drive was burned almost beyond recognition and that all of its components had fallen off. Every piece of plastic on the model ST9385AG hard drive melted, he noted, and all the electronic chips inside had burned and come loose. Edwards said the Seagate hard drive -- which was about eight years old in 2003 -- featured much greater fault tolerance and durability than current hard drives of similar capacity. Two other hard drives aboard the Columbia were so severely damaged that it was impossible to extract any usable data, he added. -- Jerry - The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]  ### Re: It seems being in an explosion isn't enough... On Fri, 9 May 2008, Ali, Saqib wrote: | Edwards said the Seagate hard drive -- which was | about eight years old in 2003 -- featured much | greater fault tolerance and durability than current | hard drives of similar capacity. 
| | I am not so sure about this statement. The newer drives are far more | ruggedized and superior in constuction. For e.g. the newer EE25 are | designed to operate @ | 1) Operating temperatures of ?30?C to 85?C | 2) Operating altitudes from ?1000 feet to 16,400 feet | 3) Operating vibration up to 2.0 Gs | 4) Long-duration (11 ms) shock capability of 150 Gs | | where as the older ST9385AG: | 1) Operating temperatures of 5? to 55?C (41? to 131?F) | 2) Operating altitudes from ?1,000 ft to 10,000 ft (?300 m to 3,000 m) | 3) Operating vibration up to 0.5 Gs | 4) shock capability of 100 Gs Well, he's the guy who actually recovers data from the things. I think the main issue here is that the older drives used much larger magnetic domains on the disk, inherently providing a great deal of physical redundancy, for those with the equipment to make use of it. Also, the encodings were much simpler and the controllers much less sophisticated. Today the controller/head/disk are effectively a single unit, tightly coupled by complex feedback loops. The controller writes data that it will be able to read, adjusting things based on what it actually reads back. I've been told - I can't verify this - that in practical terms today, if you lose the controller, the data is toast: Another nominally identical controller won't be able to read it. -- Jerry  ### Re: SSL and Malicious Hardware/Software On Mon, 28 Apr 2008, Ryan Phillips wrote: | Matt's blog post [1] gets to the heart of the matter of what we can | trust. | | I may have missed the discussion, but I ran across Netronome's 'SSL | Inspector' appliance [2] today and with the recent discussion on this | list regarding malicious hardware, I find this appliance appalling. It's not the first. Blue Coat, a company that's been building various Web optimization/filtering appliances for 12 years, does the same thing. I'm sure there are others. 
| Basically a corporation can inject a SSL Trusted CA key in the | keystore within their corporate operating system image and have this | device generate a new server certificate to every SSL enabled website, | signed by the Trusted CA, and handed to the client. The client does a | validation check and trusts the generated certificate, since the CA is | trusted. A very nice man-in-the-middle and would trick most casual | computer users. | | I'm guessing these bogus certificates can be forged to look like the | real thing, but only differ by the fingerprint and root CA that was | used to sign it. | | What are people's opinions on corporations using this tactic? I can't | think of a great way of alerting the user, but I would expect a pretty | reasonable level of privacy while using an SSL connection at work. I'm very uncomfortable with the whole business. Corporations will of course tell you it's their equipment and is there for business purposes, and you have no expectation of privacy while using it. I can understand the issues they face: Between various regulatory laws that impinge on the white-hot topic of data leakage and issues of workplace discrimination arising out of questionable sites, they are under a great deal of pressure to control what goes over their networks. But if monitoring everything is the stance they have to take, I would rather that they simply block encrypted connections entirely. As this stuff gets rolled out, there *will* be legal issues. On the one hand, the whole industry is telling you HTTPS to a secure web site - see that green bar in your browser? - is secure and private. On the other, your employer is doing a man-in-the-middle attack and, without your knowing it, reading your discussions with your doctor. Or maybe gaining access to your credit card accounts - and who knows who in the IT department might be able to sneak a peak. Careful companies will target these appliances at particular sites. 
They'll want to be able to prove that they aren't watching you order your medications on line, lest they run into ADA problems, for example.

It's going to be very interesting to see how this all plays out. We've got two major trends crashing headlong into each other. One is toward tighter and tighter control over what goes on on a company's machines and networks, some of it forced by regulation, some of it because we can. The other is the growing technological workarounds. If I don't like the rules on my company's network, I can buy over-the-air broadband service and use it from my desk. It's still too expensive for most people today, but the price will come down rapidly. Corporate IT will try to close up machines to make that harder and harder to do, but at the same time there's a growing push for IT to get out of the business of buying, financing, and maintaining rapidly depreciating laptops. Better to give employees a stipend and let them buy what they want - and carry the risks.

-- Jerry

| Regards,
| Ryan
|
| [1] http://www.crypto.com/blog/hardware_security/
| [2] http://www.netronome.com/web/guest/products/ssl_appliance

- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]

### Re: Designing and implementing malicious hardware

On Sat, 26 Apr 2008, Karsten Nohl wrote:

| Assuming that hardware backdoors can be built, the interesting
| question becomes how to defend against them. Even after a particular
| triggering string is identified, it is not clear whether software can
| be used to detect malicious programs. It almost appears as if the
| processor would need a hardware-based virus-scanner of sorts. This
| scanner could be simple, as it only has to match known signatures, but
| would need to have access to a large number of internal data structures
| while being developed by a completely separate team of designers.
I suspect the only heavy-weight defense is the same one we use against the Trusting Trust hook-in-the-compiler attack: Cross-compile on as many compilers from as many sources as you can, on the assumption that not all compilers contain the same hook. For hardware, this would mean running multiple chips in parallel, checking each other's states/outputs. Architectures like that have been built for reliability (e.g., Stratus), but generally they assume identical processors. Whether you can actually build such a thing with deliberately different processors is an open question. While in theory someone could introduce the same spike into Intel, AMD, and VIA chips, an attacker with that kind of capability is probably already reading your mind directly anyway.

Of course, you'd end up with a machine no faster than your slowest chip, and you'd have to worry about the correctness of the glue circuitry that compares the results. *Maybe* the NSA would build such things for very special uses. Whether it would be cheaper for them to just build their own chip fab isn't at all clear.

(One thing mentioned in the paper is that there are only 30 plants in the world that can build leading-edge chips today, and that it simply isn't practical any more to build your own. I think the important issue here is "leading edge". Yes, if you need the best performance, you have few choices. But a chip with 5-year-old technology is still very powerful - more than powerful enough for many uses. When it comes to obsolete technology, you may have more choices - and of course next year's 5-year-old technology will be even more powerful. Yes, 5 years from now, there will only be 30 or so plants with 2008 technology - but the stuff needed to build such a plant will be available used, or as cheap versions of newer stuff, so building your own will be much more practical.)
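The cross-checking idea can be sketched in miniature. This is my toy illustration, not anything from the paper: several independently written implementations of the same operation stand in for independently designed chips, and a result is accepted only if all of them agree.

```python
# Toy sketch of N-version cross-checking (my illustration, not from the
# paper): independently written implementations stand in for independently
# designed chips; a result is accepted only when all of them agree.
def add_v1(a, b):
    return a + b

def add_v2(a, b):
    return a - (-b)        # a deliberately different formulation

def add_v3(a, b):
    return sum((a, b))

def cross_checked(a, b, impls=(add_v1, add_v2, add_v3)):
    results = {f(a, b) for f in impls}
    if len(results) != 1:
        raise RuntimeError("implementations disagree - possible malicious unit")
    return results.pop()

print(cross_checked(2, 3))  # all three agree, so the result is accepted
```

In hardware the comparison would have to happen continuously, at the level of chip states or outputs, and it only helps against a hook that is absent from at least one of the designs.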
-- Jerry

### Re: Designing and implementing malicious hardware

On Mon, 28 Apr 2008, Ed Gerck wrote:

| Leichter, Jerry wrote:
| I suspect the only heavy-weight defense is the same one we use against
| the Trusting Trust hook-in-the-compiler attack: Cross-compile on
| as many compilers from as many sources as you can, on the assumption
| that not all compilers contain the same hook. ...
| Of course, you'd end up with a machine no faster than your slowest
| chip, and you'd have to worry about the correctness of the glue
| circuitry that compares the results.
|
| Each chip does not have to be 100% independent, and does not have to
| be used 100% of the time.
|
| Assuming a random selection of both outputs and chips for testing, and
| a finite set of possible outputs, it is possible to calculate what
| sampling ratio would provide an adequate confidence level -- a good
| guess is 5% sampling.

I'm not sure how you would construct a probability distribution that's useful for this purpose. Consider the form of one attack demonstrated in the paper: If a particular 64-bit value appears in a network packet, the code will jump to the immediately succeeding byte in the packet. Let's for the sake of argument assume that you will never, by chance, see this 64-bit value across all chip instances across the life of the chip. (If you don't think 64 bits is enough to ensure that, use 128 or 256 or whatever.) Absent an attack, you'll never see any deviation from the theoretical behavior. Once, during the lifetime of the system, an attack is mounted which, say, grabs a single AES key from memory and inserts it into the next outgoing network packet. That should take no more than a few tens of instructions. What's the probability of your catching that with any kind of sampling?
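To put a rough number on that (my model, not anything from the thread): suppose verification covers 5% of execution, grouped into contiguous sampled regions, and the malicious burst occupies a few tens of instruction slots at one random point in the system's lifetime. The burst is caught only if it overlaps a verified region.

```python
import random

# Rough Monte Carlo model (my assumptions, not from the thread): a lifetime
# of M instruction slots, one malicious burst of k consecutive slots at a
# random position, and verification of ~5% of slots grouped into contiguous
# regions. Detection requires the burst to overlap a verified region.
def detected(M=1_000_000, k=30, region=10_000, p=0.05, rng=random):
    n_regions = int(p * M / region)
    starts = [rng.randrange(M - region) for _ in range(n_regions)]
    burst = rng.randrange(M - k)
    # intervals [burst, burst+k) and [s, s+region) overlap?
    return any(s < burst + k and burst < s + region for s in starts)

random.seed(0)
rate = sum(detected() for _ in range(2000)) / 2000
# rate comes out near 0.05: the one critical event is missed ~95% of the time
```

Under these assumptions the detection probability is essentially just the sampling ratio, which is the point: a one-off burst is almost always missed.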
| This should not create a significant impact on average speed, as 95%
| of the time the untested samples would not have to wait for
| verification (from the slower chips).

I don't follow this. Suppose the system has been running for 1 second, and you decide to compare states. The slower system has only completed a tenth of the instructions completed by the faster. You now have to wait .9 seconds for the slower one to catch up before you have anything to compare. If you could quickly load the entire state of the faster system just before the instruction whose results you want to compare into the slower one, you would only have to wait one of the slower system's instruction times - but how would you do that? Even assuming a simple mapping between the full states of disparate systems, the state is *huge* - all of memory, all the registers, hidden information (cache entries, branch prediction buffers). Yes, only a small amount of it is relevant to the next instruction - but (a) how can you find it; (b) how can you find it *given that the actual execution of the next instruction may be arbitrarily different from what the system model claims*?

| One could also trust-certify
| each chip based on its positive, long term performance -- which could
| allow that chip to run with much less sampling, or none at all.

Long-term performance against a targeted attack means nothing.

| In general, this approach is based on the properties of trust when
| viewed in terms of Shannon's IT method, as explained in [*]. Trust is
| seen not as a subjective property, but as something that can be
| communicated and measured. One of the resulting rules is that trust
| cannot be communicated by self-assertions (i.e., asking the same chip)
| [**]. Trust can be positive (what we call trust), negative (distrust),
| and zero (atrust -- there is no trust value associated with the
| information, neither trust nor distrust). More in [*].
The papers look interesting and I'll have a look at them, but if you want to measure trust, you have to have something to start with. What we are dealing with here is the difference between a random fault and a targeted attack. It's quite true that long experience with a chip entitles you to trust that, given random data, it will most likely produce the right results. But no amount of testing can possibly lead to proper trust that there isn't a special value that will induce different behavior.

-- Jerry

| Cheers,
| Ed Gerck
|
| References:
| [*] www.nma.com/papers/it-trust-part1.pdf
| www.mcwg.org/mcg-mirror/trustdef.htm
|
| [**] Ken's paper title (op. cit.) is, thus, identified to be part of
| the very con game described in the paper.

### Re: Designing and implementing malicious hardware

On Thu, 24 Apr 2008, Jacob Appelbaum wrote:

| Perry E. Metzger wrote:
| A pretty scary paper from the Usenix LEET conference:
|
| http://www.usenix.org/event/leet08/tech/full_papers/king/king_html/
|
| The paper describes how, by adding a very small number of gates to a
| microprocessor design (small enough that it would be hard to notice
| them), you can create a machine that is almost impossible to defend
| against an attacker who possesses a bit of secret knowledge. I
| suggest reading it -- I won't do it justice with a small summary.
|
| It is about the most frightening thing I've seen in years -- I have
| no idea how one might defend against it.
|
| Silicon has no secrets.
|
| I spent last weekend in Seattle and Bunnie (of XBox hacking
| fame/Chumby) gave a workshop with Karsten Nohl (who recently cracked
| MiFare).
|
| In a matter of an hour, all of the students were able to take a
| selection of a chip (from an OK photograph) and walk through the
| transistor layout to describe the gate configuration.
| I was surprised
| (not being an EE person by training) at how easy it can be to
| understand production hardware. Debug pads, automated masking,
| etc. Karsten has written a set of MatLab extensions that he used to
| automatically describe the circuits of the mifare devices. Automation
| is key though, I think doing it by hand is the path of madness.

While analysis of the actual silicon will clearly have to be part of any solution, it's going to be much harder than that:

1. Critical circuitry will likely be tamper-resistant. Tamper-resistance techniques make it hard to see what's there, too. So, paradoxically, the very mechanisms used to protect circuitry against one attack make it more vulnerable to another. What this highlights, perhaps, is the need for transparent tamper-resistance techniques, which prevent tampering but don't interfere with inspection.

2. An experienced designer can readily understand circuitry that was designed normally. This is analogous to the ability of an experienced C programmer to understand what a normal, decently-designed C program is doing. Understanding what a poorly designed C program is doing is a whole other story - just look at the history of the Obfuscated C contests. At least in that case, an experienced analyst can raise the alarm that something weird is going on. But what about *deliberately deceptive* C code? Look up the Underhanded C Contest on Wikipedia. The 2007 contest was to write a program that implements a standard, reliable encryption algorithm, which some percentage of the time makes the data easy to decrypt (if you know how) - and which will look innocent to an analyst. There have been two earlier contests. I remember seeing another, similar contest in which the goal was to produce a vote-counting program that looked completely correct, but biased the results. The winner was amazingly good - I consider myself pretty good at analyzing code, but even knowing that this code had a hook in it, I missed it completely.
Worse, none of the code even set off my "why is it doing *that*?" detector.

3. This is another step in a long line of attacks that attack something by moving to a lower level of abstraction and using that to invalidate the assumptions that implementations at higher levels of abstraction use. There's a level below logic gates, the actual circuitry. A paper dating back to 1999 - "Analysis of Unconventional Evolved Electronics", CACM V42#4 (it doesn't seem to be available on-line) - reported on experiments using genetic algorithms to evolve an FPGA design to solve a simple problem (something like: generate a -.5V output if you see a 200Hz input, and a +1V output if you see a 2KHz input). The genetic algorithm ran at the design level, but fitness testing was done on actual, synthesized circuits. A human engineer given this problem would have used a counter chain of some sort. The evolved circuit had nothing that looked remotely like a counter chain. But it worked ... and the experimenters couldn't figure out exactly how. Probing the FPGA generally caused it to stop working. The design included unconnected gates - which, if removed, caused the circuit to stop working. Presumably, the circuit was relying on the analogue characteristics of the FPGA rather than its nominal digital behavior.

### Re: Declassified NSA publications

| Date: Thu, 24 Apr 2008 16:22:34 +
| From: Steven M. Bellovin [EMAIL PROTECTED]
| To: cryptography@metzdowd.com
| Subject: Declassified NSA publications
|
| http://www.nsa.gov/public/crypt_spectrum.cfm

Interesting stuff.
There's actually more there in some parallel directories - there's an overview page at http://www.nsa.gov/public/publi3.cfm

-- Jerry

### Re: no possible brute force Was: Cruising the stacks and finding stuff

On Wed, 23 Apr 2008, Alexander Klimov wrote:

| Date: Wed, 23 Apr 2008 12:53:56 +0300 (IDT)
| From: Alexander Klimov [EMAIL PROTECTED]
| To: Cryptography cryptography@metzdowd.com
| Subject: no possible brute force Was: Cruising the stacks and finding stuff
|
| On Tue, 22 Apr 2008, Leichter, Jerry wrote:
| Interestingly, if you add physics to the picture, you can convert
| no practical brute force attack into no possible brute force
| attack given known physics. Current physical theories all place a
| granularity on space and time: There is a smallest unit of space
| beyond which you can't subdivide things, and a smallest unit of
| time.
|
| I guess you are talking about Planck units, so let's make the
| calculations:
|
| Planck length is L = \sqrt{hbar G / c^3} ~ 1.6E-35 m,
|
| Planck time T = L/c ~ 5.4E-44 s,
|
| so a cubic-meter-computer can have
|
| N = (1/L)^3 ~ 2.4E104 elements
|
| and thus
|
| N/T ~ 4.5E147 operations per second
|
| that is approximately 2^{490} operations per second in a cubic meter
| of Planck computer. Given a year (3.15E7 s) on an Earth-size
| (1.08E21 m^3) Planck computer we can make approximately 2^{585}
| operations.

Except that this doesn't quite work. You can't actually have anywhere near that many distinct states in that volume of space. The physics does indeed get bizarre here: The maximum number of bits you can store in a given volume of space is determined by that space's *surface area*, not its volume. So you actually only get around 1E70 elements and 4.5E112 operations. :-)

| OK, it was amusing to do these calculations, but does it mean
| anything? I think it is more like garbage-in-garbage-out process.
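(For what it's worth, the quoted arithmetic, as arithmetic, does reproduce; a quick sketch using standard physical constants and my own rounding:)

```python
import math

# Reproducing the quoted back-of-the-envelope numbers. Standard physical
# constants; everything here is order-of-magnitude, so precision is loose.
hbar = 1.0545718e-34   # J*s
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s

L = math.sqrt(hbar * G / c**3)     # Planck length, ~1.6e-35 m
T = L / c                          # Planck time, ~5.4e-44 s
ops_per_m3_s = (1 / L)**3 / T      # ~4.5e147 ops/s per cubic meter

year_on_earth = ops_per_m3_s * 3.15e7 * 1.08e21   # one year, Earth volume

print(round(math.log2(ops_per_m3_s)))   # prints 490
print(round(math.log2(year_on_earth)))  # prints 585
```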
| | If I understand correctly, L and T are not non-divisible units of
| space-time, but rather boundaries for which we understand that our
| current theories do not work, because this scale requires unification
| of general relativity and quantum mechanics.

Kind of.

| Even more bizarre is to think about the above processing element as
| representing a single bit: according to quantum mechanics, states of a
| system are represented by unit vectors in associated Hilbert space of
| the system and each observable is represented by a linear operator
| acting on the state space. Even if we have only two qubits they are
| not described by two complex numbers, but rather by 4:
|
| a_0|00> + a_1|01> + a_2|10> + a_3|11>
|
| and thus description of the state of quantum registers requires
| exponential number of complex numbers a_i and each operation
| simultaneously process all those a_i. Since we cannot measure
| them all, it is an open question whether quantum computations
| provide exponential speed-up and all we know right now is that
| they give at least a quadratic one.

For simple search problems, where there are no shortcuts and you really can only do brute force, quadratic is the best you can do.

| By the way, if you have a computer with the processing power larger
| than that of all the cryptanalysts in the world, it makes sense to use
| your computer to think to find a better attack than a brute-force
| search :-)

Of course. But the mice are already running that computation.
| One place this shows up, as an example: It turns out that, given a
| volume of space, the configuration with the maximum entropy for that
| volume is exactly a black hole with that volume
|
| [OT] This is a strange thesis because all black hole solutions in
| general relativity can be completely characterized by only three
| externally observable parameters: mass, electric charge, and angular
| momentum (the no-hair theorem) and thus in some sense they have zero
| entropy (it is not necessarily a contradiction with something, because
| it takes an infinite time for matter to reach the event horizon).

You're looking at this backwards. Suppose I want to store a large amount of data in a given volume of space. To be specific, I have a unit diameter sphere in which to build my memory device. Initially, I fill it with magnetic domains, and see how many distinct bits I can store. That gets me some huge number. I replace that with racetrack-style memory, with information on the edges of the domains. Then with single electrons, with bits stored in multiple quantum states. Can I keep going indefinitely? The answer is no: There is in fact a limit, and it turns out to be the number of bits equivalent to the entropy of a black hole with a unit diameter sphere.

The entropy isn't given by the number of measurements I can make on my black hole - it's given by the number of possible precursor states that can produce the given black hole. In fact, if you try to shove that much information into your initial volume, you will inevitably produce a black hole - not a good idea if what you want is a storage device, since the data will be there in some sense, but will be inaccessible!

| and its entropy turns out to be the area of the black hole, in units
| of square Planck lengths

### Re: Cruising the stacks and finding stuff

| ...How bad is brute force here for AES? Say you have a chip that can do
| ten billion test keys a second -- far beyond what we can do now.
| Say
| you have a machine with 10,000 of them in it. That's 10^17 years worth
| of machine time, or about 7 million times the lifetime of the universe
| so far (about 13x10^9 years).
|
| Don't believe me? Just get out calc or bc and try
| ((2^128/10^14)/(60*60*24*365))
|
| I don't think anyone will be brute force cracking AES with 128 bit
| keys any time soon, and I doubt they will ever be brute forcing AES
| with 256 bit keys unless very new and unanticipated technologies
| arise.
|
| Now, it is entirely possible that someone will come up with a much
| smarter attack against AES than brute force. I'm just speaking of how
| bad brute force is. The fact that brute force is so bad is why people
| go for better attacks, and even the A5/1 attackers do not use brute
| force.

Interestingly, if you add physics to the picture, you can convert "no practical brute force attack" into "no possible brute force attack given known physics". Current physical theories all place a granularity on space and time: There is a smallest unit of space beyond which you can't subdivide things, and a smallest unit of time.

One place this shows up, as an example: It turns out that, given a volume of space, the configuration with the maximum entropy for that volume is exactly a black hole with that volume, and its entropy turns out to be the area of the black hole, in units of square Planck lengths. So, in effect, the smallest you can squeeze a bit is a Planck length by Planck length square. (Yes, even in 3-d space, the constraint is on an area - you'd think the entropy would depend on the volume, but in fact it doesn't, bizarre as that sounds.)

So suppose you wanted to build the ultimate computer to brute-force DES. Suppose you want your answer within 200 years. Since information can't propagate faster than light, anything further than 100 years from the point where you pose the question is irrelevant - it can't causally affect the result.
So your computer is (at most, we'll ignore the time it takes to get parts of that space into the computation) a 100-light-year diameter sphere that exists for 200 years. This is a bounded piece of space-time, and can hold a huge, but finite, number of bits which can flip at most a huge, but finite, number of times. If a computation requires more bit flips than that, it cannot, even in *physical* principle, be carried out.

I ran across a paper discussing this a couple of years back, in a different context. The authors were the ones who made the argument that we need to be wary of "in principle" arguments: What's possible in principle depends on what assumptions you make. Given an appropriate oracle, the halting problem is in principle easy to solve. The paper discussed something else, but I made some rough estimates (details long forgotten) of the in-principle limits on brute force attacks. As I recall, for a 100-year computation, a 128-bit key is just barely attackable; a 256-bit key is way out of the realm of possibility. Given all the hand-waving in my calculation, I didn't try to determine where in that range the cut-over occurs. Someone better than me at the physics should be able to compute much tighter bounds.

Even if I'm off by quite a bit, it's certain that the key lengths we are using today are already near fundamental physical limits. Brute force is simply not an interesting mode of attack against decently engineered modern systems. Of course, this says - and can say - absolutely nothing about the possibility of analytic or side-channel or any of a variety of other intelligent attacks.

-- Jerry

### 2factor

Anyone know anything about a company called 2factor (2factor.com)? They're pushing a system based on symmetric cryptography with, it appears, some kind of trusted authority. "Factor of 100 faster than SSL."
"More secure, because it authenticates every message." No real technical data I can find on the site, and I've never seen a site with so little information about who's involved. (Typically, you at least get a list of the top execs.) Some ex-spooks? Pure snake oil? Somewhere in between?

-- Jerry

### Re: [p2p-hackers] convergent encryption reconsidered

| They extended the confirmation-of-a-file attack into the
| learn-partial-information attack. In this new attack, the
| attacker learns some information from the file. This is done by
| trying possible values for unknown parts of a file and then
| checking whether the result matches the observed ciphertext.
|
| How is this conceptually different from classic dictionary attacks,
| and why does e.g. running the file through PBKDF2 and using the result
| for convergence not address your concern(s)?

How would that help? Both the ability of convergent encryption to eliminate duplicates, and this attack, depend on there being a deterministic algorithm that computes a key from the file contents. Sure, if you use a different salt for each file, the attack goes away - but so does the de-duplication. If you don't care about de-duplication, there are simpler, cheaper ways to choose a key.

-- Jerry

| --
| Ivan Krstić [EMAIL PROTECTED]
| http://radian.org

### Re: convergent encryption reconsidered

| ...Convergent encryption renders user files vulnerable to a
| confirmation-of-a-file attack. We already knew that. It also
| renders user files vulnerable to a learn-partial-information
| attack in subtle ways. We didn't think of this until now. My
| search of the literature suggests that nobody else did either.
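Both messages above turn on the same point: the key is a deterministic function of the content. A minimal sketch (my illustration only: SHA-256 as the content hash, with a hash-derived XOR keystream standing in for a real cipher) shows how de-duplication and the guessing attack are two faces of the same property:

```python
import hashlib

# Minimal sketch of convergent encryption (illustrative, not a real scheme:
# SHA-256 as the content hash, XOR keystream standing in for a real cipher).
# The key is derived from the plaintext itself, which is what enables
# de-duplication - and the guessing attack.
def convergent_encrypt(data: bytes) -> bytes:
    key = hashlib.sha256(data).digest()
    stream = hashlib.sha256(key + b"stream").digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

# identical plaintexts yield identical ciphertexts, so de-duplication works...
assert convergent_encrypt(b"same file") == convergent_encrypt(b"same file")

# ...which is exactly what lets an attacker confirm a guessed plaintext:
target = convergent_encrypt(b"salary: 100000")
assert any(convergent_encrypt(("salary: %d" % g).encode()) == target
           for g in range(90000, 110000, 5000))
```

Salting the key derivation per file breaks the guessing loop, but by the same token breaks the ciphertext equality that de-duplication relies on.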
The phrase "obvious in retrospect" applies here: The vulnerability is closely related to the power of probable-plaintext attacks against systems that are thought to be vulnerable only to known-plaintext attacks. The general principle that needs to be applied is: In any cryptographic setting, if knowing the plaintext is sufficient to get some information out of the system, then it will also be possible to get information out of the system by guessing plaintext - and one must assume that there will be cases where such guessing is easy enough.

-- Jerry

### Re: Firewire threat to FDE

| As if the latest research (which showed that RAM contents can be
| recovered after power-down) was not enough, it seems as if Firewire
| ports can form yet an easier attack vector into FDE-locked laptops.
|
| Windows hacked in seconds via Firewire
| http://www.techworld.com/security/news/index.cfm?RSSNewsID=11615
|
| The attack takes advantage of the fact that Firewire can
| directly read and write to a system's memory, adding extra speed
| to data transfer
|
| IIUC, the tool mentioned only bypasses the Win32 unlock screen, but
| given the free access to RAM, exploit code that digs out FDE keys is a
| matter of very little extra work.
|
| This is nothing new. The concept was presented a couple of years ago,
| but I haven't seen most FDE enthusiasts disable their Firewire ports
| yet.
|
| Unsurprisingly:
|
| Microsoft downplayed the problem, noting that the Firewire
| attack is just one of many that could be carried out if an
| attacker already has physical access to the system.
|
| The claims [...] are not software vulnerabilities,
| but reflect a hardware design industry issue that affects
| multiple operating systems, Bill Sisk, Microsoft's security
| response communications manager, told Techworld.
| | It is not *their* fault, but being a company that pretends to take
| users' security seriously, and being at a position that allows them to
| block this attack vector elegantly, I would have gone that extra
| half-mile rather than come up with excuses why not to fix it. All they
| need to do is make sure (through a user-controlled but default-on
| feature) that when the workstation is locked, new Firewire or PCMCIA
| devices cannot be introduced. That hard?

Just how would that help? As I understand it, Firewire and PCMCIA provide a way for a device to access memory directly. The OS doesn't have to do anything - in fact, it *can't* do anything. Once your attacker is on the bus with the ability to do read/write cycles to memory, it's a bit late to start worrying about whether you allow that device to be visible through the OS.

Note that disks have always had direct access to memory - DMA is the way to get acceptable performance. SATA ports - uncommon on portables, very common on servers - would be just as much of a threat. Same for SCSI on older machines. Normally, the CPU sets up DMA transfers - but it's up to the device to follow the rules and not speak until recognized. But there's no real enforcement. (Oh, if you start talking out of turn, you might hang the bus or crash the system if you collide with something - but that's very rare, and hardly an effective protective measure.)

The only possible protection here is at the hardware level: The external interface controller must be able to run in a mode which blocks externally-initiated memory transactions. Unfortunately, that may not be possible for some controllers. Sure, the rules for (say) SCSI might say that a target is only supposed to begin sending after a request from an initiator - but it would take a rather sophisticated state machine to make sure to match things up properly, especially on a multi-point bus.
-- Jerry

### Re: delegating SSL certificates

| So at the company I work for, most of the internal systems have
| expired SSL certs, or self-signed certs. Obviously this is bad.
|
| You only think this is bad because you believe CAs add some value.
|
| Presumably the value they add is that they keep browsers from popping
| up scary warning messages

Apple's Mail.app checks certs on SSL-based mail server connections. It has the good - but also bad - feature that it *always* asks for user approval if it gets a cert it doesn't like. One ISP I've used for years (BestWeb) uses an *expired* self-signed cert. The self-signed part I could get around - it's possible to add new CA's to Mail.app's list. But there's no way to get it to accept an expired cert automatically. So ... every time Mail.app starts up, it complains about the cert and asks me to approve it. This stalls Mail's startup, and it fails to pick up mail - from any server - until I tell it, OK, yes, go ahead.

The cert has now been expired for over 2 years. (You might well wonder why, if you're going to use a self-signed cert, you *ever* let it expire - much less cut one, like theirs, with a 1-year lifetime. Since all you're getting with a self-signed cert is continuity of identity, expiration has no positives, just negatives. Perhaps they were planning to go out of business in a year? :-) )

I've been in touch with BestWeb's support guys repeatedly. Either they just don't understand what I'm talking about, or I'll finally get someone to understand, he'll ask me for details on which cert is expired, I'll send them - and then nothing will happen. Clueless.

Just to add to the amusement, *some* of their services - Web mail, and through it tuning of their spam filters - are accessible *only* through HTTP, not HTTPS. These use the same credentials. Perhaps I should just go with the flow and use unencrypted connections.
(Or get over my inertia, stop trying to get them to fix things, and drop my connections to them at the next renewal.)

-- Jerry

### Re: RNG for Padding

| Hi,
|
| This may be out of the remit of the list, if so a pointer to a more
| appropriate forum would be welcome.
|
| In Applied Crypto, the use of padding for CBC encryption is suggested
| to be met by ending the data block with a 1 and then all 0s to the end
| of the block size.
|
| Is this not introducing a risk as you are essentially introducing a
| large amount of guessable plaintext into the ciphertext?
|
| Is it not wiser to use RNG data as the padding, and using some kind of
| embedded packet size header to tell the system what is padding?

It's a requirement of all modern cryptosystems that they be secure against known-plaintext attacks. This is for two reasons:

1. The state of the art being what it is, it's no harder to create a system with decent security guarantees (within the limits we have *any* such guarantees, of course) with security against known-plaintext attacks than without.

2. More important: History has shown that there's *always* known plaintext available. There are tons of situations where you know what is being sent because you actually have access to the same information from other channels (once *everything* is encrypted, much of what's encrypted isn't in and of itself secret!); other situations where you can force the plaintext to some value because, for example, you provided it; yet others where you don't know for sure, but can make good guesses. So the additional security is minor.

Note, BTW, that the 1-and-then-all-0s padding lets a legitimate receiver determine where the data ends; random padding doesn't. So you'd have to send the length elsewhere with random padding. That length would have a limited number of possible values - becoming easily guessable plaintext.
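At byte granularity, that padding rule is only a few lines; a sketch (my illustration: the "1 bit then 0s" rule realized as an 0x80 byte followed by 0x00 bytes, with an assumed 16-byte block size):

```python
# "1 then all 0s" padding at byte granularity: append 0x80, then 0x00 bytes
# up to the block boundary. Unpadding strips from the final 0x80, which is
# unambiguous because every pad byte after it is zero.
BLOCK = 16  # e.g. the AES block size

def pad(data: bytes) -> bytes:
    zeros = BLOCK - (len(data) % BLOCK) - 1
    return data + b"\x80" + b"\x00" * zeros

def unpad(padded: bytes) -> bytes:
    return padded[:padded.rindex(b"\x80")]

for msg in (b"", b"hello", b"exactly sixteen!"):
    assert len(pad(msg)) % BLOCK == 0
    assert unpad(pad(msg)) == msg
```

Note that the receiver recovers the message length from the padding itself, which is the property random padding gives up.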
-- Jerry

| Thanks for your suggestions,
|
| Mr Pink



### Re: cold boot attacks on disk encryption

| ...I imagine this will eventually have a big impact on the way
| organizations respond to stolen mobile device incidents.  With the
| current technology, if a laptop or mobile device is on when it's
| stolen, companies will need to assume that the data is gone,
| regardless of whether or not encryption products have been deployed.
|
| Anyone familiar with the laws in the arena?  Are there regulations
| which require reporting only if data on a stolen device is not
| encrypted?
I believe something like this has been written into law.  The reporting
laws are all state laws, so of course vary.  The Federal laws often
have safe harbor provisions for encrypted data.

Regardless of the law, the broad public perception is that encrypted
means safe.  After one too many embarrassments, corporations (and
governments) have learned that "Oh, yes, 150,000 credit card numbers
were stolen but there's no evidence anyone is using them" no longer
works as damage control; but "Oh, yes, 150,000 credit card numbers were
stolen but that's OK - they were encrypted" works fine.  (Note that
these announcements don't even bother to discuss what the encryption
mechanism might be - ROT13, anyone?)

Unfortunately, the technical nature of these results - combined with
the "We told you to encrypt everything to make it safe; now we tell you
encryption isn't safe" nature of the debate - is unlikely to produce
anything positive in the general public sphere.  People will probably
just shrug their shoulders, figure nothing can be done, and move on.
-- Jerry



### Re: cold boot attacks on disk encryption

| Their key recovery technique gets a lot of mileage from using the
| computed key schedule for each round of AES or DES to provide
| redundant copies of the bits of the key.  If the computer cleared
| the key schedule storage, while keeping the key itself when the
| system is in sleep mode, or when the screen-saver password mode
| kicks in, this attack would be less possible.
We've viewed screen locked and sleep mode (with forced screen lock on
wake) as equivalent to off.  Clearly that's no longer a tenable
position.  Sensitive data in memory must be cleared or encrypted, with
decryption requiring externally-entered information, whenever the
screen is locked or sleep mode is initiated.  This would actually make
those states *safer* than the off state, since at least you know your
software can gain control while entering them!

| If, in addition, the key was kept XORed with the secure hash of a
| large block of random memory, as suggested in their countermeasures
| section, their attacks would be considerably more difficult.
|
| These seem to be simple, low overhead countermeasures that provide
| value for machines like laptops in transit.
I suspect GPS chip sets will become a standard part of laptops in the
future.  One can imagine some interesting techniques based on them.
Even now, most laptops have motion sensors (used to safe the disks),
which could be used.  I seem to recall some (IBM?) research in which
you wore a ring with an RFID-like chip in it.  Move away from your
machine for more than some preset time and it locks.  I'm sure we'll
see many similar ideas come into use.
-- Jerry



### RE: Toshiba shows 2Mbps hardware RNG

| SAN FRANCISCO -- Toshiba Corp. has claimed a major breakthrough in
| the field of security technology:  It has devised the world's
| highest-performance physical random-number generator (RNG) circuit.
|
| The device generates random numbers at a data rate of 2.0 megabits
| a second, according to Toshiba in a paper presented at the
| International Solid-State Circuits Conference (ISSCC) here.
|
| I'm wondering if they've considered the possibility of EMI skewing
| the operation of the device, or other means of causing the device to
| generate less than completely random numbers.
I wonder if they considered the possibility that the device will be
destroyed by a static discharge?

It's one thing to criticize a design about which you know nothing on
the basis of a broad, little-known or brand new, attack.  But the fact
that EMI can skew devices has been known for years.  Hardware that may
need to work in (deliberately or otherwise) high-EMI environments has
to be appropriately designed and shielded (just as devices have for
years been protected against static discharge through multiple layers
of protection, from the chip design itself through ground straps for
people handling them).  I know nothing at all about Toshiba or its
designers.  Do you know something that makes you think they are so
incompetent that they are unaware of well-known issues that arise in
the design of the kinds of devices they work with?

| It is certainly an interesting device; I think this would find
| considerable use in communication infrastructure and high-bandwidth
| applications.  As someone else mentioned, generating a single,
| random, 128 bit seed is not too difficult with current technology,
| but it doesn't address the issue that often times you want more than
| just a single key.  One of the problems with the Linux random number
| generator is that it happens to be quite slow, especially if you
| need a lot of data.
|
| Some potential uses:
| 1.) Secure file erasure.
Why?  Writing hard random values over the previous data is neither
more nor less secure than writing zeroes, unless you descend to the
level of attacking the disk surface and making use of remanence
effects.  Once you do that ... it's still not clear that writing
random values is better or worse than writing all zeroes!  (As Peter
Gutmann showed years ago, there are *highly technology-specific sets
of patterns* that do a better job than all zeroes, or all ones, or
whatever.  There's little reason to believe that a random set of bits
is good for much of anything in this direction.)

If you're concerned about someone distinguishing between erased data
and real data ... if the real data is unencrypted, then the game is
over anyway.  If the real data is encrypted, you want the erased data
to look exactly as random as the encrypted real data.  That is, if you
believe that your AES-encrypted (say) data can be distinguished from
random bits without knowing the key, then if you fill the erased
blocks with *really* random bits, the distinguisher will tell you
exactly where the real data is!  Better to use exactly the same
encryption algorithm to generate your random erasure pattern.

BTW, even pretty average disks these days can write 50 MB/second, or
200 times the rate at which this device can generate random bits.

| 2.) OTP keygen for those _really_ high security applications.
OK.

| 3.) Faster symmetric keyset generation.  You know, when you need to
| build 32k keys...
OK, though given the computational overhead involved in generating
symmetric keys, it's hard to see the random number generation as the
throttling factor.

| 4.) Random seeding of communication packets.
If you're talking about inserting fillers to thwart traffic analysis,
the same argument as for erasing disk blocks applies:  Either you
believe your encrypted packets can't be distinguished from random, in
which case you don't need the generator; or you are afraid they *can*
be, in which case you'd better not use the generator!
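The erasure point above - generate the wipe pattern with the same kind
of keyed algorithm that encrypts the real data - can be sketched.
Python's standard library has no AES, so a keyed HMAC-SHA256 keystream
stands in here for whatever cipher the disk actually uses; all names
are illustrative, not anyone's real API:

```python
import hmac, hashlib

def wipe_pattern(key: bytes, sector: int, length: int) -> bytes:
    """Pseudorandom overwrite pattern for one sector, derived from a
    throwaway key.  Because the fill comes from a keyed primitive of
    the same flavor as the real ciphertext, erased sectors are exactly
    as (in)distinguishable from random as the data sectors are."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        msg = b"%d:%d" % (sector, counter)
        out.extend(hmac.new(key, msg, hashlib.sha256).digest())
        counter += 1
    return bytes(out[:length])
```

With a fresh throwaway key per wipe (then discarded), no distinguisher
can mark the erased regions without also marking the real data.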
| There used to be (maybe still) a TCP spoofing exploit that relied on
| the timing of packets; there are also various de-anonymization
| attacks based on clock skew.  With a chip like this, you could add a
| small, random number to the timestamp, or even packet delay, and
| effectively thwart such attacks.  Such systems need high-bandwidth,
| random number generators.
I don't buy it.  First off, the rates are pretty low - how many
packets per second do you send?  Second, the attacks involved are
probably impossible to counter using software, because the timing
resolutions are too small.  Maybe you can build random jitter into the
hardware itself - but that brings in all kinds of other issues.  (The
hardware is, of course, *already* introducing random jitter - that's
the basis of the attack.  Just adding more without getting rid of the
bias that enables the attacks is little help; at worst, it just
requires the attacker to take more samples to average away the added
noise.)



### Dilbert on security

Today's Dilbert -
http://www.unitedmedia.com/comics/dilbert/archive/images/dilbert23667240080211.gif
is right on point.
-- Jerry



### Re: Fixing SSL (was Re: Dutch Transport Card Broken)

| By the way, it seems like one thing that might help with client certs
| is if they were treated a bit like cookies.  Today, a website can set
| a cookie in your browser, and that cookie will be returned every time
| you later visit that website.  This all happens automatically.
| Imagine if a website could instruct your browser to transparently
| generate a public/private keypair for use with that website only and
| send the public key to that website.  Then, any time that the user
| returns to that website, the browser would automatically use that
| private key to authenticate itself.  For instance, boa.com might
| instruct my browser to create one private key for use with *.boa.com;
| later, citibank.com could instruct my browser to create a private key
| for use with *.citibank.com.  By associating the private key with a
| specific DNS domain (just as cookies are), this means that the
| privacy implications of client authentication would be comparable to
| the privacy implications of cookies.  Also, in this scheme, there
| wouldn't be any need to have your public key signed by a CA; the site
| only needs to know your public key (e.g., your browser could send
| self-signed certs), which eliminates the dependence upon the
| third-party CAs.  Any thoughts on this?
While trying to find something else, I came across the following
reference:

    Title:  Sender driven certification enrollment system
    Document Type and Number:  United States Patent 6651166
    Link to this page:  http://www.freepatentsonline.com/6651166.html

    Abstract:  A sender driven certificate enrollment system and
    methods of its use are provided, in which a sender controls the
    generation of a digital certificate that is used to encrypt and
    send a document to a recipient in a secure manner.  The sender
    compares previously stored recipient information to gathered
    information from the recipient.  If the information matches, the
    sender transfers key generation software to the recipient, which
    produces the digital certificate, comprising a public and private
    key pair.  The sender can then use the public key to encrypt and
    send the document to the recipient, wherein the recipient can use
    the matching private key to decrypt the document.

This was work done at Xerox.  I was trying to find a different report
at Xerox in response to Peter Gutmann's comment that certificates
aren't used because they are impractical/unusable.  PARC has done some
wonderful work on dealing with those problems.  See:

    http://www.parc.com/research/projects/usablesecurity/wireless.html

Not Internet scale, but in an enterprise, it should work.
-- Jerry



### Re: Gutmann Soundwave Therapy

| - Truncate the MAC to, say, 4 bytes.  Yes, a simple brute
|   force attack lets one forge so short a MAC - but
|   is such an attack practically mountable in real
|   time by attackers who concern you?
|
| In fact, 32-bit authentication tags are a feature of
| SRTP (RFC 3711).
Great minds run in the same ruts.  :-)

| - Even simpler, send only one MAC every second - i.e.,
|   every 50 packets, for the assumed parameters.
|   Yes, an attacker can insert a second's worth
|   of false audio - after which he's caught.  I
|   suppose one could come up with scenarios in
|   which that matters - but they are very specialized.
|   VOIP is for talking to human beings, and for
|   human beings in all but extraordinary circumstances
|   a second is a very short time.
|
| Not sending a MAC on every packet has difficult interactions with
| packet loss.  If you do the naive thing and every N packets send a
| MAC covering the previous N packets, then if you lose even one of
| those packets you can't verify the MAC.  But since some packet loss
| is normal, an attacker can cover their tracks simply by removing one
| out of every N packets.
*Blush*.  Talk about running in the same ruts.  I was specifically
talking about dealing with lossy datagram connections, but when I came
to making a suggestion, suggested one I'd previously considered for
non-lossy stream connections.  Streams are so much easier to reason
about - it's easy to get caught.  (It's also all too easy to forget
that no stream implementation really implements the abstract semantics
of a reliable stream - which is irrelevant in some cases, but very
significant in others.)

| Since (by definition) you don't have a copy of the packet you've
| lost, you need a MAC that survives that - and is still compact.
| This makes life rather more complicated.  I'm not up on the most
| recent lossy MACing literature, but I'm unaware of any
| computationally efficient technique which has a MAC of the same size
| with a similar security level.  (There's an inefficient technique of
| having the MAC cover all 2^50 combinations of packet loss, but
| that's both prohibitively expensive and loses you significant
| security.)
My suggestion for a quick fix:  There's some bound on the packet loss
rate beyond which your protocol will fail for other reasons.  If you
maintain separate MAC's for each k'th packet sent, and then deliver k
checksums periodically - with the collection of checksums itself
MAC'ed - a receiver should be able to check most of the checksums, and
can reset itself for the others (assuming you use a checksum with some
kind of prefix-extension property; you may have to send redundant
information to allow that, or allow the receiver to ask for more info
to recover).

Obviously, if you *really* use every k'th packet to define what is in
fact a substream, an attacker can arrange to knock out the substream
he has chosen to attack.  So you use your encryptor to permute the
substreams, so there's no way to tell from the outside which packet is
part of which substream.  Also, you want to make sure that a packet
containing checksums is externally indistinguishable from one
containing data.  Finally, the checksum packet inherently has higher -
and much longer-lived - semantic value, so you want to be able to
request that *it* be resent.  Presumably protocols that are willing to
survive data loss still have some mechanism for control information
and such that *must* be delivered, even if delayed.

Tons of hand-waving there; at the least, you have to adjust k and
perhaps other parameters to trade off security and overhead.  I'm
pretty sure something along these lines could be done, but it's
certainly not off-the-shelf.
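To make the hand-waving slightly more concrete, here is one possible
shape for the substream bookkeeping.  HMAC-SHA256 stands in for the
protocol's real MAC, a keyed PRF hides the packet-to-substream
assignment, and every name is an illustrative assumption rather than
part of the original proposal:

```python
import hmac, hashlib

K = 4  # substreams per reporting period (the "k" in the text)

def substream(perm_key: bytes, seq: int) -> int:
    """Keyed, externally unpredictable packet -> substream assignment."""
    tag = hmac.new(perm_key, seq.to_bytes(8, "big"), hashlib.sha256).digest()
    return tag[0] % K  # 256 % 4 == 0, so this is unbiased

def period_checksums(mac_key: bytes, perm_key: bytes, packets: list) -> bytes:
    """Build the periodic checksum packet: one running MAC per
    substream, with the collection itself MAC'ed as a unit so it
    can't be forged wholesale."""
    macs = [hmac.new(mac_key, b"substream-%d" % i, hashlib.sha256)
            for i in range(K)]
    for seq, pkt in enumerate(packets):
        macs[substream(perm_key, seq)].update(pkt)
    body = b"".join(m.digest() for m in macs)
    return body + hmac.new(mac_key, body, hashlib.sha256).digest()
```

A receiver that lost one packet can still verify the K-1 substream
MAC's that packet did not contribute to - which is the point of the
construction.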
-- Jerry



### Re: Gutmann Soundwave Therapy

| So, this issue has been addressed in the broadcast signature context
| where you do a two-stage hash-and-sign reduction (cf. [PG01]), but
| this only really works because hashes are a lot more efficient
| than signatures.  I don't see why it helps with MACs.
Thanks for the reference.

| Obviously, if you *really* use every k'th packet to define what is in
| fact a substream, an attacker can arrange to knock out the substream
| he has chosen to attack.  So you use your encryptor to permute the
| substreams, so there's no way to tell from the outside which packet
| is part of which substream.  Also, you want to make sure that a
| packet containing checksums is externally indistinguishable from one
| containing data.  Finally, the checksum packet inherently has higher
| - and much longer-lived - semantic value, so you want to be able to
| request that *it* be resent.  Presumably protocols that are willing
| to survive data loss still have some mechanism for control
| information and such that *must* be delivered, even if delayed.
|
| This basically doesn't work for VoIP, where latency is a real issue.
It lets the receiver make a choice:  Deliver the data immediately,
avoiding the latency at the cost of possibly releasing bogus data
(which we'll find out about, and report, later); or hold off on
releasing the data until you know it's good, at the cost of
introducing audible artifacts.

In non-latency-sensitive designs, the prudent approach is to never
allow data out of the cryptographic envelope until you've
authenticated it.  Here, you should probably be willing to do that, on
the assumption that the application layer - a human being - will know
how to react if you tell him authentication has failed, please
disregard what you heard in the last 10 seconds.
(If you record the data, the human being doesn't have to rely on
memory - you can tell him exactly where things went south.)

There are certainly situations where this isn't good enough - e.g., if
you're telling a fighter pilot to fire a missile, a fake command may
be impossible to countermand in time to avoid damage - but that's
pretty rare.
-- Jerry



### Re: Gutmann Soundwave Therapy

| All of this ignores a significant issue:  Are keying and encryption
| (and authentication) mechanisms really independent of each other?
| I'm not aware of much work in this direction.
|
| Is there much work to be done here?  If you view the keyex mechanism
| as a producer of an authenticated blob of shared secrecy and the
| post-keyex portions (data transfer or whatever you're doing) as a
| consumer of said blob, with a PRF as impedance-matcher (as is done by
| SSL/TLS, SSH, IPsec, ..., with varying degrees of aplomb, and in a
| more limited store-and-forward context PGP, S/MIME, ...), is there
| much more to consider?
I don't know.  Can you prove that your way of looking at it is valid?
After all, I can look at encryption as applying a PRF to a data
stream, and authentication as computing a keyed one-way function (or
something) - so is there anything to prove about whether I can choose
and combine them independently?  About whether Encrypt-then-MAC and
MAC-then-Encrypt are equivalent?  I should think by now that we've
learned how delicate our cryptographic primitives can be - and how
difficult it can be to compose them in a way that retains all their
individual guarantees.
-- Jerry



### Re: Gutmann Soundwave Therapy

Commenting on just one portion:

| 2. VoIP over DTLS
| As Perry indicated in another message, you can certainly run VoIP
| over DTLS, which removes the buffering and retransmit issues
| James is alluding to.  Similarly, you could run VoIP over IPsec
| (AH/ESP).  However, for performance reasons, this is not the favored
| approach inside IETF.
|
| The relevant issue here is packet size.  Say you're running a
| low bandwidth codec like G.729 at 8 kbps.  If you're operating at
| the commonly used 50 pps, then each packet is 160 bits == 20 bytes.
| The total overhead of the IP, UDP, and RTP headers is 40 bytes,
| so you're sending 60 byte packets.
|
| - If you use DTLS with AES in CBC mode, you have the 4 byte DTLS
|   header, plus a 16 byte IV, plus 10 bytes of MAC (in truncated MAC
|   mode), plus 2 bytes of padding to bring you up to the AES block
|   boundary:  DTLS adds 32 bytes of overhead, increasing packet
|   size by over 50%.  The IPsec situation is similar.
|
| - If you use CTR mode and use the RTP header to form the initial
|   CTR state, you can remove all the overhead but the MAC itself,
|   reducing the overhead down to 10 bytes with only 17% packet
|   expansion (this is how SRTP works)
If efficiency is your goal - and realistically it has to be *a* goal -
then you need to think about the semantics of what you're securing.
By the nature of VOIP, there's very little semantic content in any
given packet, and because VOIP by its nature is a real-time protocol,
that semantic content loses all value in a very short time.  Is it
really worth 17% overhead to provide this level of authentication for
data that isn't, in and of itself, so significant?

At least two alternative approaches suggest themselves:

- Truncate the MAC to, say, 4 bytes.  Yes, a simple brute force attack
  lets one forge so short a MAC - but is such an attack practically
  mountable in real time by attackers who concern you?

- Even simpler, send only one MAC every second - i.e., every 50
  packets, for the assumed parameters.  Yes, an attacker can insert a
  second's worth of false audio - after which he's caught.  I suppose
  one could come up with scenarios in which that matters - but they
  are very specialized.  VOIP is for talking to human beings, and for
  human beings in all but extraordinary circumstances a second is a
  very short time.  If you don't like 1 second, make this
  configurable.  Even dropping it to 1/10 second and sticking to DTLS
  (with a modification, of course) drops your overhead to 5% - and
  1/10 second isn't even enough time to insert a "no" into the
  stream.  For many purposes, a value of 10 seconds - which reduces
  the overhead to an insignificant level - is probably acceptable.

It's great to build generic encrypted tunnels that provide strong
security guarantees regardless of what you send through them - just as
it's great to provide generic stream protocols like TCP that don't
care what you use them for.  The whole point of this discussion has
been that, in some cases, the generic protocols aren't really what you
need:  They don't provide quite the guarantees you need, and they
impose overhead that may be unacceptable in some cases.

The same argument applies to cryptographic algorithms.  Yes, there is
a greater danger if cryptographic algorithms are misused:  Using TCP
where it's inappropriate *usually* just screws up your performance,
while an inappropriate cryptographic primitive may compromise your
security.  Of course, if you rely on TCP's reliability in an
inappropriate way, you can also get into serious trouble - but that's
more subtle and rare.  Then again, actually mounting real attacks
against some of the cryptographic weaknesses we sometimes worry about
is also pretty subtle and rare.

The NSA quote someone - Steve Bellovin? - has repeated comes to mind:
Amateurs talk about algorithms.  Professionals talk about economics.
Using DTLS for VOIP provides you with an extremely high level of
security, but costs you 50% packet overhead.  Is that worth it to you?
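The economics reduce to simple arithmetic on the quoted parameters
(20-byte G.729 payload plus 40 bytes of IP/UDP/RTP headers, 50 packets
per second).  A quick sketch, with HMAC-SHA1 merely standing in for
whatever MAC the protocol actually negotiates:

```python
import hmac, hashlib

PKT_BYTES = 60   # 20-byte G.729 payload + 40 bytes of IP/UDP/RTP headers
PPS = 50         # packets per second, as in the quoted message

def overhead(mac_bytes: int, packets_per_mac: int) -> float:
    """Fractional size increase when one MAC is amortized over N packets."""
    return mac_bytes / (PKT_BYTES * packets_per_mac)

def short_mac(key: bytes, packet: bytes, n: int = 4) -> bytes:
    """A MAC truncated to n bytes; an online forger needs ~2^(8n) tries."""
    return hmac.new(key, packet, hashlib.sha1).digest()[:n]

per_packet = overhead(10, 1)    # 10-byte MAC on every packet: ~16.7%
per_second = overhead(10, PPS)  # one 10-byte MAC per second: ~0.33%
```

A 10-byte per-packet MAC costs 10/60, about 17% - matching the SRTP
figure quoted - while one 10-byte MAC per second costs a third of a
percent.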
It really depends - and making an intelligent choice requires that
various alternatives along the cost/safety curve actually be
available.
-- Jerry



### VaultID

Anyone know anything about these guys?  (www.vaultid.com).  They are
trying to implement one-time credit card numbers on devices you take
with you - initially cell phones and PDA's, eventually in a credit
card form factor.  The general idea seems good, but their heavy
reliance on fingerprint recognition is troubling (though it may be
appropriate in their particular application).
-- Jerry



### Re: patent of the day

| http://www.google.com/patents?vid=USPAT6993661
|
| Gee, the inventor is Simson Garfinkel, who's written a bunch of books
| including Database Nation, published in 2000 by O'Reilly, about all
| the ways the public and private actors are spying on us.
|
| I wonder whether this was research to see how hard it was to
| get the PTO to grant an absurd patent.
Alternatively, it could be an attempt to preempt any other patents in
this area.  We'll have to see what Garfinkel does with the patent.

BTW, I don't see this as an example of an absurd patent.  There might
well be prior art, but the idea of erasing information by deliberately
discarding a key is certainly not completely obvious except in
retrospect.  If you look at any traditional crypto text, you won't
find anything of this sort - it wasn't the kind of thing people had
worried about until fairly recently.
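The erase-by-discarding-a-key idea is simple enough to sketch - it is
now often called "crypto-shredding."  Standard-library Python has no
block cipher, so a SHA-256 counter keystream stands in for real
encryption here, and the class and method names are invented purely
for illustration:

```python
import hashlib, secrets

class ShreddableBlob:
    """Store data only in encrypted form; forgetting the key erases it,
    even if every ciphertext copy (backups included) survives."""

    def __init__(self, data: bytes):
        self._key = secrets.token_bytes(32)
        self._ct = self._xor(self._key, data)

    @staticmethod
    def _xor(key: bytes, data: bytes) -> bytes:
        # Keystream from a keyed hash in counter mode (cipher stand-in).
        stream = b""
        counter = 0
        while len(stream) < len(data):
            stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(a ^ b for a, b in zip(data, stream))

    def read(self) -> bytes:
        if self._key is None:
            raise ValueError("shredded")
        return self._xor(self._key, self._ct)

    def shred(self) -> None:
        # In a real system: overwrite and destroy every copy of the key.
        self._key = None
```

Once `shred()` runs, every surviving copy of the ciphertext - including
backups you can't reach - is just noise, which is exactly the trick the
patent describes.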
-- Jerry



### Re: DRM for batteries

| Date: Fri, 04 Jan 2008 16:38:07 +1300
| From: Peter Gutmann [EMAIL PROTECTED]
| To: cryptography@metzdowd.com
| Subject: DRM for batteries
|
| http://www.intersil.com/cda/deviceinfo/0,1477,ISL6296,0.html
|
| At $1.40 each (at least in sub-1K quantities) you wonder whether it's
| costing them more to add the DRM (spread over all battery sales) than
| any marginal gain in preventing use of third-party batteries by a
| small subset of users.
For laptop batteries - which can cost $100 each - some might see it as
a win.  Of course, if you can eliminate the competition, you can also
raise prices.

The spec sheets have links to a PDF description of the algorithm, but
distribution of that is restricted - talk to your salesman.  Can
anyone make any sense of the following claim:

    Non-unique mapping of the secret key to an 8-Bit authentication
    code maximizes hacking difficulty due to need for exhaustive key
    search (superior to SHA-1).

The only thing I can come up with is the old idea that you compute
(say) a 32-bit keyed MAC but then only use the bottom 16 bits.  This
makes it more difficult for an attacker to use the MAC on some data to
determine the key - a single sample still leaves about 2^16 candidate
keys, so you need multiple samples to pin down a unique key.  This was
used in some old X.something-or-other bank hashing algorithm, which
predates functions that we believe to be one-way.

Overall, I find it hard to see how such a product can really make
sense, however.  If there's enough money to make it worth trying to
keep the clone makers out, there's enough money in it for the clone
makers to be willing to invest in determining the secret information.
Given the nature of the proposed solution, all batteries (or other
protected objects) have to have the same secret - break into one, and
you can make as many as you like, so any cost for breaking in is
amortized over however many of the things you can sell.  There's no
effective way to change the secret - even if you could somehow make a
patch to the devices involved, you couldn't change it in such a way
that it would refuse to use the batteries already in it (with the old
secret).

Meanwhile, at $1.40 a unit, you can't make anything really tamper
protected.  (Given some of the reverse engineering expertise already out
there, it's not clear how tamper protected you can really make
something these days at *any* cost.  But you certainly can't do it on
the cheap.)
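The truncated-MAC reading of the claim is easy to demonstrate: with an
8-bit code, any single observed challenge/response pair is consistent
with a large number of keys, so exhaustive search needs many samples
before the key is pinned down.  A toy illustration with a deliberately
tiny 16-bit keyspace, HMAC-SHA1 standing in for the chip's undisclosed
algorithm:

```python
import hmac, hashlib

def tag8(key: bytes, challenge: bytes) -> int:
    """8-bit authentication code: a keyed MAC truncated to one byte."""
    return hmac.new(key, challenge, hashlib.sha1).digest()[0]

def consistent_keys(challenge: bytes, observed: int) -> list:
    """Enumerate the toy 16-bit keyspace.  On average 2^16 / 2^8 = 256
    keys match any single observed tag, so one sample determines
    almost nothing about the key."""
    return [k for k in range(1 << 16)
            if tag8(k.to_bytes(2, "big"), challenge) == observed]
```

Each additional challenge/response sample cuts the surviving candidate
set by roughly another factor of 2^8, which is why the attacker needs
multiple samples rather than one.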

Still, I'm sure people will try - and life will become even more
annoying.
-- Jerry




### Re: Death of antivirus software imminent

Virtualization has become the magic pixie dust of the decade.

When IBM originally developed VMM technology, security was not a primary
goal.  People expected the OS to provide security, and at the time it
was believed that OS's would be able to solve the security problems.

As far as I know, the first real tie of VMM's to security was in a DEC
project to build a VMM for the VAX that would be secure at the Orange
Book A2 level.  The primary argument for this was:  Existing OS's are
way too complex to verify (and in any case A2 required verified design,
which is impossible to apply to an already-existing design).  A VMM can
be small and simple enough to have a verified design, and because it
runs under the OS and can mediate all access to the hardware, it can
serve as a Reference Monitor.  The thing was actually built and met its
requirements (actually, it far exceeded some, especially on the
performance end), but died when DEC killed the VAX in favor of the
Alpha.

Today's VMM's are hardly the same thing.  They are built for
performance, power, and manageability, not for security.  While
certainly smaller than full-blown Windows, say, they are hardly tiny
any more.
Further, a major requirement of the VAX VMM was isolation:  The
different VM's could communicate only through network protocols.  No
shared devices, no shared file systems.  Not the kind of thing that
would be practical for the typical uses of today's crop of VM's.

The claim that VMM's provide high level security is trading on the
reputation of work done (and published) years ago which has little if
anything to do with the software actually being run.  Yes, even as they
stand, today's VMM's probably do provide better security than some -
many? - OS's.  Using a VM as a resettable sandbox is a nice idea, where
you can use it.  (Of course, that means when you close down the sandbox,
you lose all your state.  Kind of hard to use when the whole point of
running an application like, say, an editor is to produce long-lived
state!  So you start making an exception here, an exception there
... and pretty soon the sand is spilled all over the floor and is in
everything.)

The distinction between a VMM and an OS is fuzzy anyway.  A VMM gives
you the illusion that you have a whole machine for yourself.  Go back
and read a description of a 1960's multi-user OS and you'll see the
very same language used.  If you want to argue that a small OS *can
be* made more secure than a huge OS, I'll agree.  But that's a size
distinction, not a VMM/OS distinction.
-- Jerry




### RE: Death of antivirus software imminent

| One virtualization approach that I have not seen mentioned on this
| thread is to run the virtual machine on a more secure OS than is used
| by the applications of interest.
|
| For example, one could run VMware on SELinux and use VMware to host
| Windows/Vista.  Thus, even if a virus subverts Windows it still has no
| more capabilities than any errant program in SELinux.  And, the virus
| author has to cope with the complications created by the dual
| operating systems.
It's not clear to me what threats this protects you against.  A Windows
virus would work within the Windows environment just as it always did.
If that's *your* working environment, it's just as contaminated as if
you were running Windows on bare metal.

Of course, if you're using the sandbox idea, you can throw out your
contaminated Windows environment periodically and start from fresh.
As always, you need to be in a position to throw *everything* out,
which can be rather painful.

A virus that could break through Windows, then through VMWare (with
or without SELinux), then actually do something in that environment
to establish itself more strongly, probably doesn't exist today - and
would be quite an interesting challenge.

| Me, I do just the opposite.  I browse the web with firefox running on
| SELinux (targeted policy) on VMware hosted on Windows XP.
That's a more reasonable approach.

| That would be secure if I didn't run as root half the time.
:-(
-- Jerry

| Chuck Jackson




### Re: crypto class design

| So... supposing I was going to design a crypto library for use within
| a financial organization, which mostly deals with credit card numbers
| and bank accounts, and wanted to create an API for use by developers,
| does anyone have any advice on it?
|
| It doesn't have to be terribly complete, but it does have to be
| relatively easy to use correctly (i.e. securely).
|
| I was thinking of something like this:
|
| class crypto_api
| {
| ...
| // developers call these based on what they're trying to do
| // these routines simply call crypto_logic layer
| Buffer encrypt_credit_card(Buffer credit_card_number, key_t key);
| Buffer encrypt_bank_account(Buffer bank_account, key_t key);
| Buffer encrypt_other(Buffer buffer, key_t key);
| ...
| };
|
| class crypto_logic
| {
| ...
| algo_default = ALGO_AES256CBC;
| // encrypt with a given algorithm
| Buffer encrypt(Buffer buffer, key_t key, algo_t aid = algo_default);
| // calls different routines in crypto_implementation layer depending
| // on algorithm used
| Buffer decrypt(Buffer buffer, key_t key);
| ...
| };
|
| class crypto_glue
| {
| ...
| // calls routines in libraries such as OpenSSL
| // mostly wrappers that translate between our data types and theirs
| Buffer aes256cbc_encrypt(...);
| Buffer aes256cbc_decrypt(...);
| ...
| };
|
| The general idea is that crypto_api provides domain-specific APIs that
| are easy for anyone to understand, that the logic layer allows for the
| selection of different algorithms, and the glue layer is basically a
| translation layer to underlying libraries.
|
| It is very important that the API remain stable, because the code
| base is large and owned by various groups.
|
| One thing that I'm wondering is how to indicate (e.g.) the overhead in
| terms of padding, or whatever, for various algorithms... or if it
| matters.  The old code had some really disturbing practices like
| assuming that the output buffer was 16 octets bigger, and stuff like
| that... scary.
|
| Intend to skim the OpenSSL design and Gutmann's Design of a
| Cryptographic Security Architecture for ideas.
|
Your Buffer class is a step up from using a void* - but you're not really
using data typing effectively.  Define classes to encapsulate encrypted
and cleartext data; carefully decide what transitions are allowed among
them; and define your API around that.  Note that transitions include
creation and, particularly, deletion - the destructor for cleartext
should zero the memory.

The above is a simplification.  There are probably more than two
categories of data.  A better classification might be:  Encrypted,
cleartext but sensitive, non-sensitive.  In a financial setting,
sensitive may have subdivisions based, for example, on who is allowed
access.  Should there be some special datatype for keys, which are about
the most sensitive thing in the system?  (It should probably be the case
that the common public API's provide no way to export a key, just a way
to apply it.  Key management should be a separate API that most
applications don't even use, so you can be sure they can't (without
cheating, which is of course always possible in C++) leak them.)

As much as possible, make the actual rules that apply to any piece of
data in the program (a) transparent to someone reading the code; (b)
enforceable by a compiler or, second best, the API implementation.  In
the public API, concentrate on the data and the rules that govern it.
Particular crypto algorithms and various related choices should be
hidden within the implementation.  Not only should the API be easy to
use correctly, it should be as hard as possible to use *in*correctly!
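
A sketch of what that type separation might look like - all names here
are illustrative, not a proposed API, and the volatile-pointer wipe is
one common (not Standard-guaranteed) zeroization idiom:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Distinct types for cleartext vs. encrypted data, so the compiler
// rejects accidental mixing of the two.
class Plaintext {
public:
    explicit Plaintext(std::vector<unsigned char> bytes)
        : data_(std::move(bytes)) {}
    ~Plaintext() { wipe(); }                  // destructor zeroes the memory
    Plaintext(const Plaintext&) = delete;     // no silent copies of secrets
    Plaintext& operator=(const Plaintext&) = delete;
    const std::vector<unsigned char>& bytes() const { return data_; }
    void wipe() {
        // Store through a volatile pointer so the writes are treated
        // as observable and not elided as dead.
        volatile unsigned char* p = data_.data();
        for (std::size_t i = 0; i < data_.size(); ++i) p[i] = 0;
    }
private:
    std::vector<unsigned char> data_;
};

// Encrypted data is inert: nothing sensitive to wipe, safe to copy.
class Ciphertext {
public:
    explicit Ciphertext(std::vector<unsigned char> bytes)
        : data_(std::move(bytes)) {}
    const std::vector<unsigned char>& bytes() const { return data_; }
private:
    std::vector<unsigned char> data_;
};
```

An encrypt_credit_card() would then take a Plaintext and return a
Ciphertext; there is simply no signature through which cleartext can
leak out by accident.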

-- Jerry




### RE: More on in-memory zeroisation

| I've been through the code.  As far as I can see, there's nothing in
| expand_builtin_memset_args that treats any value differently, so there
| can't be anything special about memset(x, 0, y).  Also as far as I can
| tell, gcc doesn't optimise out calls to memset, not even thoroughly
While good for existing crypto code, this is exactly the kind of thing
that's a problem.  We now have a well-distributed bit of folk knowledge
that memset(x,0,y) is treated specially by the compiler.  It isn't; this
"knowledge" is just repeated inaccurate rumor.  Fortunately, "not
treated specially" in this case defaults to a case that does what we
want - but it also means that if someone makes the "code has no effect"
analyzer smarter in some release of gcc, these memset()
calls that we're relying on may suddenly just disappear from the
generated code.  How long before anyone notices?  It's not as if the
change log will show "optimize away dead calls to memset" - it will
likely contain some obscure comment like "improve recognition that type
B subtrees can be collapsed in phase 3".

The only *safe* way to write code like this - absent explicit support
in the standard - is with explicit support in each particular compiler.
Even something like:

#pragma always_call memset

ugly as it is, would work.
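
Pending such support, the workaround most code falls back on - also not
guaranteed by the Standard, but effective with today's compilers - is
to do the stores through a volatile-qualified pointer (the function
name here is made up):

```cpp
#include <cstddef>

// Volatile-qualified stores count as observable behavior, so current
// optimizers will not remove them as dead - unlike a plain memset()
// on a variable about to go out of scope.
static void secure_zero(void *v, std::size_t n) {
    volatile unsigned char *p = static_cast<volatile unsigned char *>(v);
    while (n--) {
        *p++ = 0;
    }
}
```

It is slower than a library memset(), but for key-sized buffers that
rarely matters.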

-- Jerry




### Re: PlayStation 3 predicts next US president

|  The whole point of a notary is to bind a document to a person.  That
|  the person submitted two or more different documents at different
|  times is readily observable.  After all, the notary has the
|  document(s)!
|
| No, the notary does not have the documents *after* they are notarized,
| nor do they keep copies. Having been a notary I know this
| personally. When I stopped being a notary all I had to submit to the
| state was my seal and my record books.
|
| If I had to testify about a document I would only be attesting that
| the person who presented themselves adequately proved, under the
| prudent businessman's standard, that they were the person that they
| said they were and that I saw them sign the document in
| question. That's it. No copies at all. What would anyone have to
| testify about if a legal battle arose after the notary either died or
| stopped being a notary?
|
| Think for a minute about the burden on a notary if they had to have a
| copy of every document they notarized. What a juicy target they would
| make for thieves and industrial spies. No patent paperwork would be
| safe, no sales contract, no will, or other document. Just think how
| the safe and burglar alarm companies would thrive. Now ask yourself
| how much it costs to notarize a document. Would that pay for the
| copying and storage. I don't know what the current fees are in
| California but 20 years ago they were limited to $6.00 per person per
| document and an extra buck for each additional copy done at the same
| time. My average was about $14.00 per session. My insurance was
| $50/year. Nowhere near enough to cover my liability if I was to

This whole discussion has an air of unreality about it.  Historically,
notary publics date from an era when most people couldn't read or
write, and hardly anyone could afford a lawyer.  How does someone who
can't read a document and can perhaps only scrawl an X enter into a
contract?  In the old days, he took the written contract to a notary
public, who would read it to him, explain it, make sure he understood
it, then stamp his scrawled X.  The notary's stamp asserted exactly the
kind of thing that we discuss on this list as missing from digital
signatures:  That the particular person whose X was attached (and who
would be fully identified by the notary) understood and assented to
the contents of the contract.

Today, we assume that everyone can read, and where a contract is at all
complex, that everyone will have access to a lawyer.  (Of course, this
assumption is often invalid, but that's another story.)  The
requirement for a notary public's stamp is a faded vestige.  For
certain important documents, we still require that a notary sign off,
but what exactly that proves any more is rather vague.  Yes, in theory
it binds a signature to a particular person, with that signature being
on a particular document.  The latter is why the notary's stamp is a
physical stamp through the paper - hard(er) to fake.  Of course, most
of the time, the stamp is only applied to the last page of a multi-page
contract, so it proves only that that last page was in the notary's
hands - replacing the early pages is no big deal.  I think I've seen
notaries initial every page, but I've also seen notaries who don't.

In practice, whenever I've needed to have a document notarized, a quick
look at some basic ID is about all that was involved.  It's quite easy
to get fake ID past a notary.  Given the trivial fee paid to a notary -
I think the limit is $2 in Connecticut - asking the notary to actually
add much of value is clearly a non-starter.

The financial industry has actually created its own system - I forget
the name, some like a Gold Bond Certification - that it requires for
certain high-importance transactions (e.g., a document asserting you
own some stock for which you've lost the certificates).  I've never
actually needed to get this - it's appeared as a requirement for
some alternative kinds of transactions on forms I've had to fill out
over the years - so I don't know exactly how it works.  However, it's
completely independent of the traditional notary public system, and
is run through commercial banks.

Trying to justify anything based on the current role of notary
publics - at least as it exists in the US - is a waste of time.

-- Jerry




### Re: Flaws in OpenSSL FIPS Object Module

|  It is, of course, the height of irony that the bug was introduced in
|  the very process, and for the very purpose, of attaining FIPS
|  compliance!
|
| But also to be expected, because the feature in question is
| unnatural: the software needs a testable PRNG to pass the compliance
| tests, and this means adding code to the PRNG to make it more
| predictable under test conditions.
Agreed.  In fact, this fits with an observation I've made in many
contexts in the past:  Any time you introduce a new mode of operation,
you are potentially introducing a new failure mode corresponding to
it as well.  Thus, bulkhead doors on sidewalks are unlikely to open
under you because the only mode of operation they try to support has
the doors opening upward.  I would be very leery of stepping on such
a door if it could *ever* be opened downward.

| As the tests only test the predictable PRNG, it is easy to not notice
| failure to properly re-seed the non-test PRNG. One can't easily test
| failure to operate correctly under non-test conditions. And the
| additional complexity of the test harness makes such failure more
| likely.
|
| The interaction of the test harness with the software under study
| needs close scrutiny (thorough and likely multiple independent code
| reviews).
There's a famous story - perhaps apocryphal - from the time IBM
introduced some of the first disk packs.  They did great for a while,
but then started experiencing head crashes at a rate much higher than
had ever been seen in the development labs.  The labs, of course,
suspected production problems - but packs they brought in worked just
as well as the ones they'd worked with earlier.

Finally, someone sitting there, staring at one of the test packs and
at a crashed disk from a customer had a moment of insight.  There was
one difference between the two packs:  The labs pulled samples
directly off the production line.  Customers got packs that had gone
through QA.  The last thing QA did was put a "Passed" sticker on the
top disk of the pack.

So ... take a pack with a sticker and spin it up.  This puts G forces
on the sticker.  The glue under the sticker slowly begins to migrate.
Eventually, some of it goes flying off into the enclosure.  If it
gets under a head ... crash.

| Similar bugs are just as likely in closed-source software and are less
| likely to be discovered.
Actually, now that this failure mode has been demonstrated, it would
be a good idea to test for it.  It's harder to do with just binaries,
but possible - look at the recent analyses of the randomization in
Vista ASLR.
-- Jerry




### RE: More on in-memory zeroisation

|  Then the compiler can look at the implementation and prove that a
|  memset() to a dead variable can be elided
|
| One alternative is to create zero-ing functions that wrap memset()
| calls with extra instructions that examine some of the memory, log a
| message and exit the application if the memory is not zero. This has
| two benefits: 1) It guarantees the compiler will leave the memset() in
| place and 2) guarantees the memset() worked. It does incur a few extra
| instructions though.
|
| I guess it is possible that the compiler would somehow optimize the
| memset to only zero the elements subsequent code compares... Hmmm
| [Of course your application could be swapped out and just before the
| memset call writing your valuable secrets to the system swap file on
| disk... :-( ]
In practice, with an existing compiler you are not in a position to
change, these kinds of games are necessary.  If you're careful, you look
at the generated code to make sure it does what you expect.

But this is a very bad - and potentially very dangerous - approach.
You're relying on the stupidity of the compiler - and on the compiler
not becoming more intelligent over time.  Are you really prepared to
re-check the generated code every time the compiler is rev'ed?

There sometimes needs to be an explicit way to tell the compiler that
some operation *must* be done in some way, no matter what the compiler
thinks it knows.  There's ample precedent for this.  For example,
floating point arithmetic doesn't exactly follow the usual laws of
arithmetic (e.g., it's not associative, if you consider overflows), so
if you know what you are doing in constructing an FP algorithm, you have
to have a way to tell the compiler "Yes, I know you think you can
improve my code here, but just leave it alone, thank you very much."
And all programming languages that see numerical programming as within
their rubric provide standardized, documented ways to do just that.  C
has volatile so that you can tell the compiler that it may not elide
or move operations on a variable, even when those operations have no
effects visible in the C virtual machine.  (The qualifier was added
to support memory-mapped I/O, where there can be locations that look
like memory but have arbitrarily different semantics from normal
memory.)  And so on.

You can almost, but not quite, get the desired effect for memory
zeroization with volatile.  Something more is needed, and software
needs that something more to be pinned down explicitly.

-- Jerry




### Re: More on in-memory zeroisation

|  However, that doesn't say anything about whether f is actually
|  invoked at run time.  That comes under the acts as if rule:  If
|  the compiler can prove that the state of the C (notional) virtual
|  machine is the same whether f is actually invoked or not, it can
|  elide the call.  Nothing says that memset() can't actually be
|  defined in the appropriate header, as a static (or, in C99, inline)
|  function.
|
| The standard actually says ... it is permitted to take the address of
| a library function even if it is defined as a macro  The standard
| works for me as a source code author who needs an execution-aware
| memcpy function from time to time. Overworked GCC contributors should
| work to comply to the standard, not to address Peter, Thierry, and
| whoever's wildests dreams.
If the function is defined as I suggested - as a static or inline - you
can, indeed, takes its address.  (In the case of an inline, this forces
the compiler to materialize a copy somewhere that it might not otherwise
have produced, but not to actually *use* that copy, except when you take
the address.)  You are allowed to invoke the function using the address
you just took.  However, what in that tells you that the compiler -
knowing exactly what code will be invoked - can't elide the call?

By the way, you might wonder what happens if two different CU's take
the address of memset and we then compare them.  In this kind of
implementation, they will be unequal - but in fact nothing in the
Standard says they can't be!  A clever compiler could have all kinds
of reasons to produce multiple copies of the same function.  All you
can say is that if two function pointers are equal, they point to the
same function.  No converse form is provable within the Standard.

You might try something like:

typedef void *(*memset_ptr)(void *, int, size_t);

volatile memset_ptr p_memset = memset;

(I *think* I got that syntax right!)

Then you can invoke (*p_memset).  But if you do this in the same
compilation unit, a smart compiler that does value propagation could
determine that it knows where p_memset points, and that it knows what
the code there is, so it can go ahead and do its deeper analysis.

Using:

volatile memset_ptr p_memset = memset;

in one compilation unit and then:

extern volatile memset_ptr p_memset;

will keep you safe from single-CU optimizations, but nothing in the
Standard says that's all there are.  Linker-based optimizations
could have the additional information that nowhere in the program
can p_memset be changed, and further that p_memset is allocated to
regular memory, and in principle the calls could be elided at that
point.  Mind you, I would be astounded if any compiler/linker system
actually attempted such an optimization ... but that doesn't make
it illegal within the language of the Standard.
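
Put together, the scheme looks like this - with the definition
collapsed into one file purely for illustration (the whole point above
is that, in real use, it lives in its own compilation unit, and callers
see only the extern declaration); scrub_key is a made-up name:

```cpp
#include <cstddef>
#include <cstring>

typedef void *(*memset_ptr)(void *, int, std::size_t);

// In real use this definition goes in a separate compilation unit;
// everywhere else declares:  extern volatile memset_ptr p_memset;
volatile memset_ptr p_memset = std::memset;

// The volatile read of p_memset forces an indirect call the
// single-CU optimizer can't prove harmless and elide.
void scrub_key(unsigned char *key, std::size_t n) {
    (*p_memset)(key, 0, n);
}
```

As noted, a sufficiently aggressive linker-time optimizer could in
principle still see through this; no compiler I know of does.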

|  Then the compiler can look at the implementation and prove
|  that a memset() to a dead variable can be elided
|
| It can't prove much in the case of (memset)()
In principle (I'll grant you, probably not in practice), it can
prove quite a bit - and certainly enough to justify eliding the
call.
-- Jerry




### Re: More on in-memory zeroisation

|  If the function is defined as I suggested - as a static or inline -
|  you can, indeed, takes its address.  (In the case of an inline, this
|  forces the compiler to materialize a copy somewhere that it might
|  not otherwise have produced, but not to actually *use* that copy,
|  except when you take the address.)  You are allowed to invoke the
|  function using the address you just took.  However, what in that
|  tells you that the compiler - knowing exactly what code will be
|  invoked - can't elide the call?
|
| Case of static function definition: the standard says that standard
| library headers *declare* functions, not *define* them.
Where does it say it *can't* define them?  How could a Standard-conforming
program tell the difference?  If no Standard-conforming program can
tell the difference between two implementations, it makes no difference
what you, as an omniscient external observer, might know - they are either
both compatible with the Standard, or neither is.

| Case of inline: I don't know if inline definition falls in the
| standard definition of declaration.
It makes no difference.

| Also, the standard refers to these identifiers as "external
| linkage". This language *might* not create a mandatory provision if
| there was a compelling reason to have static or inline implementation,
| but I doubt the very infrequent use of (memset)(?,0,?) instead of
| memset(?,0,?) is a significant optimization opportunity. The compiler
| writer risks a non-compliance assessment in making such stretched
| reading of the standard in the present instance, for no gain in any
| benchmark or production software speed measurement.
|
| Obviously, a pointer to an external linkage scope function must adhere
| to the definition of pointer equality (==) operator.
What do you think the definition of pointer equality actually is?  Keep
in mind that you need to find the definition *in the Standard*.  The
*mathematical* definition is irrelevant.

| Maybe a purposedly stretched reading of the standard might let you
| make your point. I don't want to argue too theoretically. Peter and I
| just want to clear memory!
Look, I write practical programs all the time - mainly in C++ recently,
but the same principles apply.  My programs tend to be broadly portable
across different compilers and OS's.  I've been doing this for close to
30 years.  I stick to the published standards where possible, but there's
no way to avoid making assumptions that go beyond the standards in a few
cases:  Every standard I know of is incomplete, and no implementation I've
ever worked with is *really* 100% compliant.

It's one thing to point out a set of practical techniques for getting
certain kinds of things done.  It's another to make unsupportable arguments
that those practical techniques are guaranteed to work.  There's tons of
threaded code out there, for example.  Given the lack of any discussion of
threading in existing language standards, most of them skate on thin ice.
Some things are broadly agreed upon, and quality of implementation
requirements make it unlikely that a compiler will break them.  Other
things are widely believed by developers to have been agreed upon, but
have *not* really been agreed upon by providers.  Programs that rely on
these things - e.g., that C++ function-scope static initializers will be
run in a thread-safe way - will fail here and there, because in fact
compiler developers don't even try to support them.  Because of the ever-
growing importance of threaded programs, this situation is untenable, and
in fact the language groups are starting to grapple with how to
incorporate threading into the standards.

Security is a similar issue.  The fact is, secure programming
sometimes requires primitives that the standards simply don't provide.
Classic example:  For a long time, there was *no* safe way to use
sprintf(), since there was no a priori way of determining how long the
output string might be.  People had various hacks, but all of them could
be fooled, unless you pretty much re-implemented sprintf() yourself.
snprintf() fixed that.
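
The fix works because snprintf() both bounds the write and returns the
length the complete output would have required, so truncation is
detectable a priori - something sprintf() never allowed.  A small
sketch (format_id is a made-up name):

```cpp
#include <cstddef>
#include <cstdio>

// snprintf never writes past the buffer, and its return value is the
// length the full output would have needed - so truncation can be
// detected instead of silently corrupting memory.
bool format_id(char *buf, std::size_t size, int id) {
    int needed = std::snprintf(buf, size, "record-%d", id);
    return needed >= 0 && static_cast<std::size_t>(needed) < size;
}
```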

There is, today, no way to guarantee that memset() will be run, within the
confines of the standard.  This is a relatively minor oversight - C has
seen such issues as important since volatile was introduced well before
the language was standardized.  I expect we'll see some help on this in
a future version.  In the meanwhile, it would be nice if compiler
developers would agree on some extra-Standard mechanisms.  The gcc hack
could be a first step - but it should be written down, not just something
a few insiders know about.  Standards are supposed to grow by
standardizing proven practice, not by innovation.

The problem with unsupportable assumptions that some hack or another
provides a solution is that they block *actual* solutions.  By all means
use them where necessary - but push for better approaches.

-- Jerry

| Kind regards,
|
|


### Re: More on in-memory zeroisation



On Wed, 12 Dec 2007, Thierry Moreau wrote:

| Date: Wed, 12 Dec 2007 16:24:43 -0500
| From: Thierry Moreau [EMAIL PROTECTED]
| To: Leichter, Jerry [EMAIL PROTECTED]
| Cc: Peter Gutmann [EMAIL PROTECTED], cryptography@metzdowd.com
| Subject: Re: More on in-memory zeroisation
|
| /* testf.c */
| #include <stdio.h>
| #include <string.h>
|
| typedef void *(*fpt_t)(void *, int, size_t);
|
| void f(fpt_t arg)
| {
|   if (memset==arg)
|     printf("Hello world!\n");
| }
|
| /* test.c */
| #include <stdlib.h>
| #include <string.h>
|
| typedef void *(*fpt_t)(void *, int, size_t);
|
| extern void f(fpt_t arg);
|
| int main(int argc, char *argv[])
| {
|   f(memset);
|   return EXIT_SUCCESS;
| }
|
| /*   I don't want to argue too theoretically.
|
| - Thierry Moreau */
I'm not sure what you are trying to prove here.  Yes, I believe that
in most implementations, this will print "Hello world!\n".  Is it,
however, a strictly conforming program (I think that's the right
standardese) - i.e., are the results guaranteed to be the same on
all conforming implementations?  I think you'll find it difficult
to prove that.

BTW, it *might* not even be true in practice if you build your program
as multiple shared libraries!
-- Jerry




### Re: Intercepting Microsoft wireless keyboard communications

|  Exactly what makes this problem so difficult eludes me, although one
|  suspects that the savage profit margins on consumables like
|  keyboards and mice might have something to do with it.
|
| It's moderately complex if you're trying to conserve bandwidth (which
| translates to power) and preserve a datagram model.  The latter
| constraint generally rules out stream ciphers; the former rules out
| things like encrypting the keystroke plus seven random bytes with a
| 64-bit block cipher.  Power is also an issue if your cipher uses very
| much CPU time or custom hardware.
|
| I'm sure most readers of this list can propose *some* solution.  It's
| instructive, though, to consider everything that needs to go into a
| full system solution, including the ability to resynchronize cipher
| states and the need to avoid confusing naive users if the cat happened
| to fall asleep on the space bar while the CPU was turned off.
Somewhere - perhaps in the Computerworld article - someone mentions that
some devices use Bluetooth, and are therefore much more secure.

In practice, most Bluetooth devices don't even agree on a non-zero
key when pairing, so just using Bluetooth is no promise of anything.
Does anyone know how good Bluetooth security can potentially be -
and is it practically attainable in the low power/lost message
context that would be needed here?  How are some of the emerging
low-power protocols (e.g., ZigBee) dealing with this?

-- Jerry




### Re: More on in-memory zeroisation

|  There was a discussion on this list a year or two back about
|  problems in using memset() to zeroise in-memory data, specifically
|  the fact that optimising compilers would remove a memset() on
|  (apparently) dead data in the belief that it wasn't serving any
|  purpose.
|
| Then, s/memset(?,0,?)/(memset)(?,0,?)/ to get rid of compiler
| in-lining.
|
| Ref: ANSI X3.159-1989, section 4.1.6 (Use of C standard library
| functions)
I don't have the C89 spec handy (just the C99 spec, which is laid
out differently), but from what I recall, this construct guarantees
nothing of the sort.

Most standard library functions can be implemented as macros.  Using the
construct (f)(args) guarantees that you get the actual function f,
rather than the macro f.  However, that doesn't say anything about
whether f is actually invoked at run time.  That comes under the "acts
as if" rule:  If the compiler can prove that the state of the C
(notional) virtual machine is the same whether f is actually invoked or
not, it can elide the call.  Nothing says that memset() can't actually
be defined in the appropriate header, as a static (or, in C99, inline)
function.  Then the compiler can look at the implementation and prove
that a memset() to a dead variable can be elided.
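
To make the distinction concrete - the parentheses affect only macro
expansion and name lookup, not whether the call survives optimization
(clear_buf is a made-up name):

```cpp
#include <string.h>   // plain C header: memset may be a function-like macro

// Taking the address requires a real function to exist, and
// (memset)(...) suppresses any function-like macro named memset.
// Neither construct prevents the call itself from being elided
// under the "acts as if" rule.
void *(*fp)(void *, int, size_t) = memset;

void clear_buf(unsigned char *buf, size_t n) {
    (memset)(buf, 0, n);   // real function call, still subject to "as if"
}
```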

-- Jerry




### Re: Flaws in OpenSSL FIPS Object Module

| What does it say about the integrity of the FIPS program, and its CMTL
| evaluation process, when it is left to competitors to point out
| non-compliance of evaluated products -- proprietary or open source --
| to basic architectural requirements of the standard?
I was going to ask the same question.  My answer:  This proves yet again
how far we are from a serviceable ability to produce secure software.

Software that's been through the FIPS process has been vetted to the limits
of our current abilities under the constraints of being even vaguely
commercially viable.  OpenSSL is open source software that's been around
for a long time, examined by many, many people.  It had a very rough
journey through the FIPS process, so was presumably checked even more
than software that just breezes through.  Even so ... it had a security
bug.  It's hard to suggest something that could have been done differently
to guarantee that this couldn't happen.  Anyone who might argue - as I'm
sure they will - that this proves you should use commercial software
rather than OSS if you need security is speaking nonsense - that's not
at all what this incident is about.

It is, of course, the height of irony that the bug was introduced in the
very process, and for the very purpose, of attaining FIPS compliance!

-- Jerry




### State of the art in hardware reverse-engineering


Flylogic Engineering does some very interesting tampering with tamper-
resistant parts.  Most of those secure USB sticks you see around won't
last more than a couple of minutes with these guys.

See http://www.flylogic.net/blog
-- Jerry




### Government Smart Card Initiative


Little progress on government-wide smart card initiative, and little
surprise

November 14, 2007 (Computerworld) More than three years after a
presidential directive requiring federal government agencies to issue
new smart-card identity credentials to all employees and contractors,
progress on the mandate continues to be tediously slow.

Most agencies appear to have missed by a wide margin an October 27
deadline by which they were supposed to have completed background checks
and issued smart-ID credentials to all employees and contractors with 15
years or less of service.

The so-called Personal Identity Verification (PIV) cards are supposed to
be tamper-proof and to support biometric authentication features. PIV
cards control access to federal facilities and can be used across
agencies. Federal agencies are mandated to issue them to all employees
and contractors under Homeland
Security Presidential Directive-12 of August 2004. Under the multi-phase
initiative, agencies have until October 2008 to issue PIV cards to all
their employees and contractors.

Several government agencies contacted for this story did not respond to
request for information on their implementation status. But an
inspection of publicly posted information at IDmanagement.gov, a federal
identity management site, showed that a large number of government
agencies had barely begun issuing the cards just prior to the
October 27 deadline.

Well below the Mendoza line

For example, as of Sept. 1, the U.S. Department of Commerce had not
issued even one PIV credential, though it listed over 40,000 employees
as requiring it. As of October 19, the Social Security Administration
had issued cards to 300 of its 65,000 employees, and to 429 of its
approximately 20,000 contractors. On July 1, the U.S. Department of
Energy had issued the new cards to 5 out of its 13,500 employees, and
not a single one to its 98,000 or so contractors.

Doing slightly better was the Department of State, which had issued the
new ID credentials to 4450 of its 19,865 employees and to more than a
quarter of its 7000 contractors by Sept. 14. Similarly, the Department
of Labor had issued cards to 6450 of its 15,600 employees and about 400
of its 3000 contractors as of Sept. 1.

Though the numbers are a far cry from where the agencies were required
to be, they are not entirely unexpected. From the program's outset,
security analysts and government IT managers warned that agencies would
have a hard time meeting HSPD-12 implementation deadlines for a variety
of technological and logistical reasons.

"This is a classic example of politically established deadlines that
are not based on any reality at all. It is no more complicated than
that," said Franklin Reeder, an independent consultant and a former
chief of information policy at the U.S. Office of Management and Budget
(OMB).

As best as I can tell, HSPD-12 deadlines were set without any real
understanding of the enormity of what needed to be done or the costs
involved in doing so, said Reeder, who is also chairman for the Center
for Internet Security.

The National Institute of Standards and Technology (NIST), which was
originally entrusted with the task of coming up with the technical
specifications for HSPD-12, did a great job in delivering the standards
on schedule, Reeder said. Since then, agencies have been left with the
unenviable task of trying in an unreasonably short time frame to replace
their existing physical and logical access infrastructures with that
required for the PIV cards, Reeder said.

"It's one of those situations where the technology itself is not
complicated, but it does comprise many different pieces that have to be
carefully integrated," said Hord Tipton, a former CIO with the
U.S. Department of the Interior.  The task involves a lot of cooperation
between different groups within agencies that have traditionally not
worked with each other, such as human resources, physical security and
IT, he said, and sometimes it can also mean replacing ongoing agency
efforts with the standards mandated by HSPD-12. The biggest example of
this is the U.S. Department of Defense, which rolled out millions of its
own IDs, called Common Access Cards. Those were based on a different
standard, and the DoD is currently in the process of migrating their
system to the PIV standard.

Interoperability looms

In addition to the internal issues, agencies also need to make sure
their PIV card infrastructures are interoperable with those of other
government agencies, Tipton said. This raises a whole set of other
technology, standards, trust, control and political issues that agencies
need to navigate.

A shared service set up by the General Services Administration (GSA) to
help agencies enroll employees into the PIV program and issue the new
cards to them is also still in the process of ramping up, according to
Neville Pattison, vice president of business development and government
affairs at 

### People side-effects of increased security for on-line banking


Sometimes the side-effects are as significant as the direct effects.

-- Jerry

Story from BBC NEWS:
http://news.bbc.co.uk/go/pr/fr/-/2/hi/technology/7091206.stm

Fears over online banking checks
By Mark Ward
Technology Correspondent, BBC News website

Complicated security checks could be undermining confidence in online
banking, warn experts.  Security extras such as number fobs, card
readers and password checks might make consumers more wary of net bank
websites, they fear.  The warning comes as research shows how phishing
gangs are targeting attempts to grab customer login details.  But the UK
body overseeing net banking says figures show criminals are getting away
with less from online accounts.

Security check

In a bid to beat the bad guys, many banks have added extra security
checks to protect access to each customer's account.

Some, such as Lloyds, have trialled number-generating key fobs, and
Barclays is trialling chip-and-PIN card readers.  Others have tried
systems that check a customer's PC and then ask that person to select
the image they chose from a set they were shown previously.

But, said Garry Sidaway from ID and authentication firm Tricipher, all
these checks could be making consumers more nervous about using online
banking.

"The banks have to make this channel secure," he said, "but there is
crumbling confidence in it."

Andrew Moloney, financial services market director for RSA Security,
said banks were well aware that their efforts to shore up security
around online banking could have a downside.  "It registers as a
concern," he said; "there could be too much security and there's a
danger of over-selling a new technology.  This is not just about
experience and customer satisfaction."

The misgivings about beefed-up security around online banking come as
the UK government's Get Safe Online campaign issues a survey which shows
the risks people are taking with login details.  These lax practices
could prove costly as cyber fraudsters gradually shift their attention
to Europe following moves in the US to combat phishing.

In late 2005 the US Federal Financial Institutions Examination Council
(FFIEC) issued guidelines which forced banks to do more to protect
online accounts.  Phishing statistics show a rapid move by the
fraudsters to European banks and, said Mr Moloney, to smaller European
banks using less protection.  Lists of phishing targets gathered by
security companies show a huge shift away from big bank brands such as
Citibank and Bank of America to Sparkasse, VolksBank and many others.

A spokeswoman for the Association for Payment and Clearing Services
(Apacs), which oversees online banking, said its figures showed that
the message about safe banking was getting through.  Statistics released
in October indicated that online banking fraud (including phishing) for
the first six months of 2007 was down 67% from the previous year.
During the same time period the number of phishing attacks rose by 42%.

"The reason we are seeing that fall, despite the increase in phishing
attacks, is because consumers are becoming more aware of how to protect
themselves," said the spokeswoman.  But, she added, "what we are still
seeing happening is people falling foul of phishing attacks."  The
spokeswoman urged people to be careful with login details to bank
accounts and to exercise caution when using e-mail and the web.

Published: 2007/11/13 09:33:59 GMT

© BBC MMVII


### Re: Intelligent Redaction

| Xerox Unveils Technology That Blocks Access to Sensitive Data in
| Documents to Prevent Security Leaks
|
| The Innovation: The technology includes a detection software tool that
| uses content analysis and an intelligent user interface to easily
| protect sensitive information. It can encrypt only the sensitive
| sections or paragraphs of a document, a capability previously not
| available.
Actually, it looks as if Xerox has been doing a bunch of very
interesting work on the borderlines of security, privacy,
cryptography, and human factors.  I hadn't noticed it before.

Look, for example, at:

http://www.parc.com/research/projects/security/default.html

(Now, can anyone account for the bizarre very light gray pattern
of lines that appear behind the top half or so of this page?)

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



### Re: Quantum Cryptography to be used for Swiss elections

| Date: Sat, 13 Oct 2007 03:20:48 -0400
| From: Victor Duchovni [EMAIL PROTECTED]
| To: cryptography@metzdowd.com
| Subject: Re: Quantum Cryptography to be used for Swiss elections
|
| On Fri, Oct 12, 2007 at 11:04:15AM -0400, Leichter, Jerry wrote:
|
|  No comment from me on the appropriateness.  From Computerworld.
|
|
| Why so shy? ...
Only that we've been over this ground so many times before.

| There is real charm in the phrase "endowed with relevance and
| purpose."
|
| One might, by analogy with the 2nd law of thermodynamics, speculate
| that prior to some of the relevance and purpose of the election data
| rubbing off on the QC system, the QC system was the one more lacking
| in these desirable attributes.  If one wants to really go out on a
| limb, one might try to apply the first law also, and conclude that
| the election data has as a result less relevance and purpose.
|
| In our physical analogy, heat is replaced with
| trust/relevance/purpose.  One can transfer this heat from the
| election to a technology or from a technology to an election, always
| in the expected direction.
Ah, but this is a quantum system.  I think it's more a matter of
inducing correlations than a physical transfer.

Are trust, relevance, and purpose orthogonal variables?  That seems
unlikely.  So you need to trade off your ability to measure them.

Ah, there are some trustworthy photons.  Oops, we can trust them, but
we don't know if they are relevant.  Ah, there's a relevant photon ...

-- Jerry :-)

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



|  ...  What's wrong with starting
|  with input SALT || PASSWORD and iterating N times,
|
| Shouldn't it be USERID || SALT || PASSWORD to guarantee that if
| two users choose the same password they get different hashes?
| It looks to me like this would make dictionary attacks harder too.
As others have pointed out, a large enough random salt makes
precomputed dictionary attacks impractical.  But it's worth mentioning
another issue:  People's userids do change, and it's nice not to have
the hashed passwords break as a result.  (The breakage is pretty
counter-intuitive to users who change their names, and a disaster if a
large organization needs to do a mass renaming and somehow has to
coordinate a mass password update at the same time.)
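To make the tradeoff concrete, here is a minimal sketch of an iterated,
salted hash using only Python's standard library.  The function names
are illustrative, and `pbkdf2_hmac` stands in for the generic "iterate
N times over SALT || PASSWORD" construction discussed above; the salt,
not the userid, provides per-user uniqueness.

```python
import hashlib, hmac, os

def hash_password(password, salt=None, rounds=100_000):
    """Iterated, salted password hash (illustrative sketch).

    A random per-user salt (rather than the userid) keeps the stored
    hash valid when a userid changes, and a large enough salt makes
    precomputed dictionary (rainbow-table) attacks impractical.
    """
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def check_password(password, salt, expected, rounds=100_000):
    _, digest = hash_password(password, salt, rounds)
    # constant-time comparison avoids a timing side channel
    return hmac.compare_digest(digest, expected)
```

Note that two users who pick the same password still get different
hashes, because their random salts differ.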

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



### Quantum Cryptography to be used for Swiss elections


No comment from me on the appropriateness.  From Computerworld.

-- Jerry

Quantum cryptography to secure ballots in Swiss election

Ellen Messmer

October 11, 2007 (Network World) Swiss officials are using quantum
cryptography technology to protect voting ballots cast in the Geneva
region of Switzerland during parliamentary elections to be held Oct. 21,
marking the first time this type of advanced encryption will be used for
election protection purposes.

Still considered an area of advanced research, quantum cryptography uses
photons to carry encryption keys to secure communications over
fiber-optic lines and can automatically detect if anyone is trying to
eavesdrop on a communications stream. For the Swiss ballot-collection
process, the quantum cryptography system made by id Quantique will be
used to secure the link between the central ballot-counting station in
downtown Geneva and a government data center in the suburbs.

"We would like to provide optimal security conditions for the work of
counting the ballots," said Robert Hensler, the Geneva State Chancellor,
in a statement issued today.  "In this context, the value added by
quantum cryptography concerns not so much protection from outside
attempts to interfere as the ability to verify that the data have not
been corrupted in transit between entry and storage."

The use of quantum cryptography in the voting process will showcase
technology developed in Switzerland. The firm id Quantique, based in
Carouge, grew out of research done at the University of Geneva by
Professor Nicolas Gisin and his team back in the mid-1990s.

According to id Quantique's CEO Gregoire Ribordy, the firm's Cerberis
product, developed in collaboration with Australian company Senetas,
will be used for the point-to-point encryption of ballot information
sent over a telecommunications line from the central ballot-counting
station to the government data center.

Ribordy said the Swiss canton of Geneva -- there are 26 cantons
throughout all Switzerland -- has about 200,000 registered voters who
will either go to the polls on Oct. 21 and cast their vote, or vote by
mail.  "The votes cast by mail are all collected in the days before the
election and all brought to the central counting station on Oct. 21,"
Ribordy said.

"Once the election is closed -- at noon on Sunday, Oct. 21 -- the
sealed ballot boxes of all the polling stations are brought to the
central counting station, where they are opened and where the votes are
mixed with the mail votes.  Counting them is then manually done at the
central counting station.  People counting the votes at this central
station use computers to transfer the counts to the data center of the
canton of Geneva," Ribordy explained.

He said the quantum cryptography system is ready to be put into
action. Ribordy doesn't think the high-speed link has been encrypted by
any means in the past, but he added that the IT department of the Swiss
government is not sharing a lot of information on certain details for
security reasons.

The use of quantum cryptography in the Swiss election marks the start of
the SwissQuantum project managed by Professor Gisin, with support from
the National Center of Competence in Quantum Photonics Research in
Switzerland.

"Protection of the federal elections is of historical importance in the
sense that, after several years of development and experimentation, this
will be the first use of the 1GHz quantum encrypter, which is
transparent for the user, and an ordinary fiber-optic line to send data
endowed with relevance and purpose," said Professor Gisin in a prepared
statement.  "So this occasion marks quantum technology's real debut."

The SwissQuantum project aims to set up a pilot communications network
throughout Geneva. Supporters compare it with that of the first Internet
links in the United States in the 1970s. The Swiss are also expected to
showcase the quantum cryptography project during the ITU Telecom World
event being held in Geneva this week.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



### Re: Full Disk Encryption solutions selected for US Government use

| A slightly off-topic question:  if we accept that current processes
| (FIPS-140, CC, etc) are inadequate indicators of quality for OSS
| products, is there something that can be done about it?  Is there a
| reasonable criteria / process that can be built that is more suitable?
Well, if you believe a talk by Brian Snow of the NSA - see
http://www.acsac.org/2005/papers/Snow.pdf - our whole process has to
change to get assurance, from the beginnings of the design all the
way through the final product.

I suspect he's right - but I'm also pretty sure that the processes
involved will always be too expensive for most uses.  They'll even be
too expensive for the cases where you'd think they best apply - e.g.,
in protecting large financial transactions.  An analysis of the costs
vs. the risks will usually end up with the decision to spend less and
spread the risks around, whether through insurance or higher rates
or other means.

We keep being told that inspection after the fact will give us more
secure systems.  It never seems to work.  You'd think that the
experience of, say, the US auto industry - which was taught by the
Japanese that you have to build quality into your entire process, not
inspect *out* lack of quality at the end - would give us some hint
that after-the-fact inspection is not the way to go.

Given all that ... a FIPS 140-2 certification is actually a pretty
reasonable evaluation.  It can be because it's trying to deal with
a problem that can be constrained to a workable size.  You know what's
supposed to go in; you know what's supposed to come out.  (This
still works better for hardware than for software, though.)  Where
FIPS 140-2 breaks down is that ultimately all it can tell you is
that some constrained piece of the system works.  But it tells you
nothing, and *can* tell you nothing, about whether that piece is
being used in a proper, secure way.  (Again, this is somewhat easier
with hardware, because the system boundaries are much more sharply
defined - and because of the inflexibility of hardware, they are also
much smaller.)  Beyond this is Common Criteria, which can easily be
more about paperwork than anything real.

Until someone comes up with a new way to approach the problem, my
guess is that we'll see more stuff moved into hardware, with limited
security definitions above the hardware that we can have some faith
in - but as little of real value to be said above that as there is
today.
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



### RE: Trillian Secure IM

|  But, opportunistic cryptography is even more fun.  It is
|  very encouraging to see projects implement cryptography in
|  limited forms.  A system that uses a primitive form of
|  encryption is many orders of magnitude more secure than a
|  system that implements none.
|
| Primitive form - maybe, weak form - absolutely not. It
| is actually worse than having no security at all, because
| it tends to create an _illusion_ of protection.
This is an old argument.  I used to make it myself.  I even used to
believe it.  Unfortunately, it misses the essential truth:  The choice
is rarely between really strong cryptography and weak cryptography; it's
between weak cryptography and no cryptography at all.  What this
argument assumes is that people really *want* cryptography; that if you
give them nothing, they'll keep on asking for it; but if you give them
something weak, they'll stop asking and things will end there.  But in
point of fact hardly anyone knows enough to actually want cryptography.
Those who know enough will insist on the strong variety whether or not
the weak is available; while the rest will just continue with whatever
they have.

| Which is by the way exactly the case with SecureIM. How
| hard is it to brute-force 128-bit DH ? My guesstimate
| is it's an order of minutes or even seconds, depending
| on CPU resources.
It's much better to analyze this in terms of the cost to the attacker
and the defender.  If the defender assigns relatively low value to his
messages, an attack that costs the attacker more than that low value is
of no interest.  Add in the fact that an attacker may have to break
multiple message streams before he gets to one that's worth anything at
all.
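The point about multiple streams can be made concrete with a little
arithmetic.  This is an illustrative back-of-the-envelope model, not
anything from the original thread, and the dollar figures in the
comment are made up:

```python
def expected_attack_cost(cost_per_stream, fraction_valuable):
    """Expected cost for an attacker to find one valuable stream when
    only a given fraction of intercepted streams are worth anything:
    on average 1/fraction_valuable streams must be broken first."""
    return cost_per_stream / fraction_valuable

# If breaking one stream costs $0.01 of CPU time but only 1 stream in
# 10,000 carries anything of value, each useful hit costs about $100 -
# already more than many low-value messages are worth to the attacker.
```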

Even something that takes a fraction of a second to decrypt raises the
bar considerably for an attacker who just surfs all conversations,
scanning for something of interest.  It's easy to search for a huge
number of keywords - or even much more complex patterns - in parallel at
multi-megabyte/second speeds with fgrep-like (Aho-Corasick) algorithms.
A little bit of decryption tossed in there changes the calculations
completely.
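As a concrete illustration of the fgrep-like scan described above, here
is a compact Aho-Corasick sketch in Python (hand-rolled for
illustration; a production scanner would use an optimized library).  It
finds every occurrence of every keyword in a single pass over the text,
which is exactly why plaintext surveillance is so cheap - and why even
cheap decryption per stream disrupts it:

```python
from collections import deque

def build_automaton(patterns):
    """Build an Aho-Corasick automaton: a trie over the patterns plus
    failure links, so all patterns are matched in one pass."""
    goto = [{}]        # per-state transition maps
    fail = [0]         # failure links
    out = [set()]      # patterns ending at each state
    for pat in patterns:
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    q = deque(goto[0].values())   # depth-1 states keep fail = 0
    while q:
        r = q.popleft()
        for ch, s in goto[r].items():
            q.append(s)
            f = fail[r]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[s] = goto[f].get(ch, 0)
            out[s] |= out[fail[s]]   # inherit matches of the suffix state
    return goto, fail, out

def search(text, automaton):
    """Return (start_index, pattern) for every keyword occurrence."""
    goto, fail, out = automaton
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for pat in out[s]:
            hits.append((i - len(pat) + 1, pat))
    return hits
```

The scan cost is linear in the text length regardless of how many
keywords are loaded; adding even a fraction of a second of decryption
per conversation changes that economics completely.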

I'm not going to defend the design choices here because I have no idea
what the protocol constraints were, what the attack model was (or even
if anyone actually produced one), what the hardware base was assumed to
be at the time this was designed, etc.  Perhaps it's just dumb design;
perhaps this was the best they could do.  Could it be better?  Of
course.  Is it better to not put a front door on your house because
the only ones permitted for appearance's sake are wood and can be
broken easily?
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



### Retailers try to push data responsibilities back to banks


Retail group takes a swipe at PCI, puts card companies 'on notice'
Jaikumar Vijayan

October 04, 2007 (Computerworld) Simmering discontent within the retail
industry over the payment card industry (PCI) data security standards
erupted into the open this week with the National Retail Federation
(NRF) asking credit card companies to stop forcing retailers to store
payment card data.

In a tersely worded letter to the PCI Security Standards Council, which
oversees implementation of the standard, NRF CIO David Hogan asked
credit card companies to stop making retailers jump through hoops to
create an impenetrable fortress to protect card data. Instead,
retailers want to eliminate the incentive for hackers to break into
their systems in the first place.

"With this letter, we are officially putting the credit card industry on
notice," Hogan said in a statement.  The NRF, a trade association whose
membership includes most of the major retailers in the U.S., is the
national voice for about 1.4 million U.S. retail establishments.

In an interview with Computerworld this morning, Hogan said the letter
was provoked by a lot of frustration in the industry about PCI
guidelines and the deadlines associated with implementing them. If the
goal of PCI is to protect credit card data, the easiest and most
common-sense approach is to stop requiring merchants to store the data
in the first place, he said.

PCI is a data security standard mandated by Visa International Inc.,
MasterCard International Inc., American Express Co., Discover and the
Japan Credit Bureau. It requires companies to implement a set of
prescribed security controls for protecting cardholder data. Though the
requirements went into effect more than two years ago, a large number of
big retailers are still noncompliant because of a variety of issues that
include legacy system challenges, rules interpretation issues and
continuously evolving guidelines.

According to Hogan, credit card companies require retailers and others
accepting payment card transactions to store certain card data,
sometimes for up to 18 months, so that it can be retrieved in the event
of chargebacks and other disputes.

But rather than have thousands of retailers store the data, credit card
companies and their banks should do so, Hogan said. Retailers only need
an authorization code provided at the time of a sale to validate a
charge, and a receipt with truncated credit card information to handle
returns and refunds. If that were done, he said, most retailers probably
wouldn't store any cardholder data.
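The receipt truncation Hogan describes is simple to implement.  As an
illustrative sketch (the function name here is made up, and real
receipt formats are governed by card-brand rules):

```python
def truncate_pan(pan):
    """Mask a card number for a receipt, keeping only the last four
    digits - enough to match a return or refund to the right card,
    but useless to a thief who finds the slip."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    return "*" * (len(digits) - 4) + digits[-4:]
```

A merchant keeping only the authorization code and this truncated form
would hold nothing worth stealing.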

According to Hogan, under the current process, credit card companies and
their banks already have the information needed for retrieval purposes,
and it should be their responsibility to store and protect the data.
"It is a very fundamental shift.  But if you think about it, it is a
very common-sense approach."

PCI mandates are challenging retailers to build fortresses around credit
card data, he said.  "We build these higher walls and the hackers bring
in taller ladders, and this kind of keeps scaling up all the time."

Gartner Inc. analyst Avivah Litan said that the NRF letter makes a
sound argument.

"It's totally reasonable to tell the banking system and payment system
that 'we don't want to store this data anymore,'" she said.  "If they
aren't storing this data, many of these [PCI] requirements go away and
the scope of the compliance effort is much more restricted."

In an e-mailed comment, Bob Russo, general manager of the PCI Security
Standards Council, said the body received the NRF letter yesterday and
will respond after reviewing it further.  "However, it must be
recognized that the payment brands -- and not the Council -- operate
the systems underlying the payments process, as well as the compliance
programs.  Because of this, Mr. Hogan should be directing his concerns
to those individual brands."

Jon Hurst, president of the Retailers Association of Massachusetts,
backed the NRF's position. With all of the attention paid to PCI, what's
gone unnoticed is the fact that card companies themselves require
certain amounts of data to be stored because of disputed transactions,
he said. If not for that requirement, many retailers -- especially the
large ones -- would probably not keep data and therefore wouldn't be
pressed to secure it, he said.

Prat Moghe, founder and CTO of Tizor Systems Inc., a Maynard,
Mass.-based security firm, called the NRF's demand "political
posturing" and said it would do little to improve retail security
anytime soon.

"I think a lot of this is about moving culpability back to the credit
card companies and saying don't make this my problem alone," Moghe
said.  "They seem to have realized that going on the defense as an
industry doesn't help.  There is just more and more they have to do."
By speaking out aggressively at a time when retail industry information
security practices are under scrutiny by consumers and lawmakers, the
NRF is hoping to spread the liability for card data 

### Re: Linus: Security is people wanking around with their opinions

| I often say, Rub a pair of cryptographers together, and you'll
| get three opinions.  Ask three, you'll get six opinions.  :-)
|
| However, he's talking about security, which often isn't quantifiable!
From what I see in the arguments, it's more complicated than that.

On one side, we have SeLinux, produced with at least the aid of the NSA.
SeLinux embodies the accepted knowledge about how to do security
right.  This is a matter of engineering experience, not science.  The
fact is, very few things in this world are a matter of science.
Science can provide answers, but it can't choose the questions for you.
In the case of security, you first have to choose your model of what
needs to be secured, and against what kind of attacks.  There's no
possible science here - science can help you by telling you where the
limits are, what impact some choices have on others, but ultimately what
you consider important to protect, and what kinds of attacks you
consider plausible enough to be worth the costs of preventing, are
judgements that science cannot make.  The NSA has tons of experience
here, along all the relevant dimensions.  But the judgements they make,
while appropriate to their circumstances, may make little sense in other
circumstances.  I'm quite willing to grant that, in the sphere in which
NSA works, SeLinux is a great solution.  But few of us live there.

So ... on the other side, we have those who focus on the difficulty with
actually configuring and using an SeLinux system.  This is a dimension
that doesn't particularly concern NSA:  They have legal and operational
requirements that *must* be met, and the way to deal with the complexity
is to throw trained people and money at the problem.  But hardly anyone
else is in a position to take that approach.  So the net result is that
people end up not using SeLinux.  Seeing this, others come along with
simpler-to-use approaches.  They don't solve the problems SeLinux
solves, but they do solve *some* real problems - and they are claimed to
be much more likely to be adopted.  (Adoption rates, at least, *can* be
measured.  You can complain all you like about what people *should* be
doing, but ultimately what they *are* doing is something you have to
measure in the real world - scientifically! - not just think about.)

Now, the security absolutists say, "But you're getting people to adopt
something that doesn't *really* protect them."  Perhaps, though in the
words usually attributed to Voltaire, "the best is the enemy of the good."

We see the same kinds of arguments in cryptography.  There are the
absolutists, who brand as snake oil anything that doesn't pass every
known test anyone has ever published, that hasn't had every individual
component fully vetted by people they trust (and ultimately, they trust
no one, so it ends up that the only things they trust are things they created
themselves).  There are the true snake oil salesmen.  And there are
those who try to get something good enough out there:  Something that
will actually get used by more than a tiny fraction of the population
and will protect them against reasonable threats.  For myself, I long
ago decided that no data I have is so valuable that it needs to survive
an attack that costs more than, say, a few thousand dollars to pull off.
In fact, if we're talking about data that can't be identified up front -
e.g., if someone had to go through my encrypted files one at a time, not
knowing what was in them until they had decrypted them - the threshold
is dramatically lower.  I'd probably be happy if it cost more than $100
per file.  Even at those rates, there would be cheaper ways to get at my
stuff than attacking the cryptography.  Obviously, others will have
different thresholds.  But thinking about this kind of thing in monetary
terms does help you get away from the kind of nebulous "I want my stuff
secure from any possible attack by anyone" thinking.

So I don't trust WEP for anything, but I do trust WPA - but I use SSH
even over WPA links for many things.  It's cheap, it's as easy to use as
the alternatives - why not?  I have files encrypted with what by today's
standards are very weak algorithms.  If they get broken, I've judged
that my loss is trivial.  The old programs are quick and easy to use,
and I just haven't gotten 'round to re-encrypting with newer algorithms
that, on today's machines, are fast enough and easy enough to use.  I
tend to zero out files before deleting them, just because it's easy to
do and it can't hurt.  On the other hand, I don't go out of my way to
use some 7-pass or - Lord save us from those who can't even be bothered
to read Peter Gutmann's paper on this and understand what he actually
said - 35-pass erasure algorithm:  If I have to worry about an attacker
who is willing to use fancy data recovery hardware to look for remnant
magnetization, I've got other problems.  (BTW, it always amazes me that
no modern system has picked up an old, old idea from VMS:  You can set a
marker on a file that

### Goodbye analogue hole, hello digital hole

The movie studios live in fear of people stealing their product as it
all goes digital.  There's, of course, always the analogue hole, the
point where the data goes to the display.  The industry defined an
all-digital, all-licensed-hardware path through HDMI which blocks this
path.  As we know, Vista goes out of its way to keep all that stuff safe
from tampering.
But in this business, there's always someone who's defining different
hardware that cuts the other way.  At least one someone is the
DisplayLink alliance.  This is a group of vendors who are supporting a
protocol for connecting video displays to computers over various kinds
of generic connections.  At the moment, USB, Ethernet, and wireless are
on the list.

Products are beginning to appear - LG, for example, recently announced
the LG L206WU, a 1680x1050 display that connects over USB.  A virtual
graphics card drives the thing.  You can support up to 6 USB displays,
in addition to your existing displays.  The limiting factor seems to be
the CPU.  Intel is involved with DisplayLink, and is demoing 3D and HD
video on USB and Wireless USB displays, based on some integrated support
in the Intel graphics hardware, at an upcoming conference.  Vista's Aero
is supported.

The press release talks about watching movies.  There's a reference in
one article about the LG that says watching Blu-Ray disks probably won't
work well because so much of the CPU is used up decoding the disk.
Obviously, if this is indeed the limitation, it's a temporary one.

No mention anywhere of HDMI or any kind of deal with the movie industry.
If DisplayLink takes off - and with Intel on the producing side and at
least LG, Toshiba, and Kensington already announcing products, it's got
a good chance - HDMI is going to have a tough time gaining a place at
the table; and the Blu-Ray/HD-DVD producers are quickly going to find
themselves having to choose whether they are going to walk away from the
vast majority of the market.

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



### Re: OK, shall we savage another security solution?
| If you think about this in general terms, we're at the point where we
| can avoid having to trust the CPU, memory, disks, programs, OS, etc.,
| in the borrowed box, except to the degree that they give us access to
| the screen and keyboard.  (The problem of securing connections that
| go through a hostile intermediary we know how to solve.)  The keyboard
| problem is intractable, though it would certainly be a step forward
| if at least security information didn't go through there.  This could
| be done either by having a small data entry mechanism on the secure
| device itself, or by using some kind of challenge/response (an LCD
| on the device supplies a random value - not readable in any way by
| the connected machine - that you combine with your password before
| typing it in.)  Maybe HDMI will actually have some use in providing
| a secure path to the screen?  (Unlikely, unfortunately.)
|
| Would it not be possible to solve the keyboard problem by allowing a
| keyboard (e.g. USB) to be plugged directly into the device?
Perhaps.  Public systems usually don't have unpluggable keyboards.  If I
have to carry my own, I'm well on my way to just having my own portable
system (which may be the way things end up anyway).

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



### Re: OK, shall we savage another security solution?

| Anyone know anything about the Yoggie Pico (www.yoggie.com)?  It
| claims to do much more than the Ironkey, though the language is a bit
| less marketing-speak.  On the other hand, once I got through the
| marketing stuff to the technical discussions at Ironkey, I ended up
| with much more in the way of warm fuzzies than I do with Yoggie.
|
| Here's another secure USB flash drive:
| http://www.kingston.com/flash/DTSPdemo/eval.asp with minimal
| marketing-speak.
This is a representative of yet another class of secure USB devices:

- The Kingston encrypts data stored on it.
  (Note that you have to enter the decryption key from the system
  keyboard when you plug the thing in.  If your threat scenarios
  include usage in a compromised system, this is not the device for
  you.)

- The Ironkey does the same thing - though they don't emphasize that
  aspect of things; such devices are pretty common.  (There are a
  bunch of companies that have USB memory sticks with fingerprint
  sensors.  Who knows how easy they are to spoof - likely not very.)
  Ironkey's claim to fame is that it also acts as a key store that can
  be used with on-device programs like a browser, and to connect to a
  Tor network.  In this configuration - assuming it's implemented
  correctly - you can have a secure connection to a remote site even
  if you plug the USB into a compromised machine.

  (Of course, this doesn't solve the whole problem:  You have to use
  the machine for I/O.  The network traffic is secured between the
  remote endpoint and the program in the key, but the path from the
  key to the keyboard and screen is unsecured.  A sophisticated attack
  could sniff or modify the keyboard stream and replace the on-screen
  data.  We're probably talking about a highly targeted attack here to
  get any useful information that way.  Certainly possible, but a lot
  harder than simply sniffing the password used to unlock the
  on-device memory and/or copying all the contents once they've been
  unlocked.)

- The Yoggie is kind of a fancy firewall in a USB stick.  I don't
  think there's any user-writable memory in it - certainly not for
  files, probably not even for secure storage of passwords.

Historically, NSA has apparently never liked software implementations
of cryptography - they wanted protected hardware.  Such hardware has
been prohibitively expensive until quite recently.  These devices show
that the price of such hardware is no longer a problem:  We can build
very secure, very small pieces of hardware for not a lot of money.
What to *do* with those hardware capabilities is another question.
It's not easy to fit them safely into systems - and it's not clear
what problems they can solve in those systems.  Kingston and many
other similar devices are a great solution to a very real problem:
When my 2GB memory stick falls out of my pocket, have I just given
away 2GB of highly sensitive data to anyone who finds the thing?  They
are *not* any kind of solution to the "how can I access my data safely
on a possibly-compromised system?" problem.  The Ironkey guys have
attacked a broader problem, and while they haven't completely solved
it - it's not clear any solution exists! - they've provided a
capability that is potentially useful.  (They aren't unique - people
have built a bunch of devices that are basically outboard Linux boxes
that rely on a guest box to provide network connectivity, a keyboard,
and a screen.  But they have a commercially available low-cost
product.)

If you think about this in general terms, we're at the point where we
can avoid having to trust the CPU, memory, disks, programs, OS, etc.,
in the borrowed box, except to the degree that they give us access to
the screen and keyboard.  (The problem of securing connections that go
through a hostile intermediary we know how to solve.)  The keyboard
problem is intractable, though it would certainly be a step forward if
at least security information didn't go through there.  This could be
done either by having a small data entry mechanism on the secure
device itself, or by using some kind of challenge/response (an LCD on
the device supplies a random value - not readable in any way by the
connected machine - that you combine with your password before typing
it in.)  Maybe HDMI will actually have some use in providing a secure
path to the screen?  (Unlikely, unfortunately.)

-- Jerry



### OK, shall we savage another security solution?

Anyone know anything about the Yoggie Pico (www.yoggie.com)?
It claims to do much more than the Ironkey, though the language is a
bit less marketing-speak.  On the other hand, once I got through the
marketing stuff to the technical discussions at Ironkey, I ended up
with much more in the way of warm fuzzies than I do with Yoggie.

-- Jerry



### Re: Another Snake Oil Candidate

| The world's most secure USB Flash Drive: https://www.ironkey.com/demo.
What makes you call it snake oil?  At least the URL you point to says
very reasonable things:  It uses AES, not some home-brew encryption;
the keys are stored internally; the case is physically protected, and
has some kind of tampering sensor that wipes the stored keys when
attacked.  In fact, they make some of the same points:

    Your IronKey is literally packed with the latest and most secure
    encryption technologies, all enabled by the powerful onboard
    Cryptochip.  Rather than employing homegrown cryptographic
    algorithms that have not undergone rigorous cryptanalysis, IronKey
    follows industry best practices and uses only well-established and
    thoroughly tested cryptographic algorithms.  All of your data on
    the IronKey drive is encrypted in hardware using AES CBC-mode
    encryption.

    1. Encryption Keys
    2. Always-On Encryption
    3. Two-Factor Authentication

    Encryption Keys
    The encryption keys used to protect your data are generated in
    hardware by a FIPS 140-2 compliant True Random Number Generator on
    the IronKey Cryptochip.  This ensures maximum protection via the
    encryption ciphers.  The keys are generated in the Cryptochip when
    you initialize your IronKey, and they never leave the secure
    hardware to be placed in flash memory or on your computer.

    Always-On Encryption
    Because your IronKey implements data encryption in the hardware
    Cryptochip, all data written to your drive is always encrypted.
    There is no way to accidentally turn it off or for malware or
    criminals to disable it.
    Also, it runs many times faster than software encryption,
    especially when storing large files or using the on-board portable
    Firefox browser.

    Two-Factor Authentication
    Beyond simply protecting the privacy of your data on the IronKey
    flash drive, the IronKey Cryptochip incorporates advanced Public
    Key Cryptography ciphers that allow you to lock down your online
    IronKey account.  That way you must have your IronKey device, in
    addition to your password, to access your online account.  This
    highly complex process runs behind the scenes, giving you
    state-of-the-art protection from phishers, hackers and other
    online threats.

The management team lists some people who should know what they are
doing.  They have a FAQ which gives a fair amount of detail about what
they do.

I have nothing at all to do with this company - this is the first I've
heard of them - but it's hardly advancing the state of security if
even those who seem to be trying to do the right thing get tarred as
delivering snake oil.  If you know something beyond the
publicly-available information about the company, let's hear it.
Otherwise, you owe them an apology - whether they actually do live up
to their own web site or not.

-- Jerry



### Re: How the Greek cellphone network was tapped.

| Between encrypted VOIP over WIFI and eventually over broadband cell -
| keeping people from running voice over their broadband connections is
| a battle the telcos can't win in the long run - and just plain
| encrypted cell phone calls, I think in a couple of years anyone who
| wants secure phone connections will have them.
|
| I think you're looking at this a bit wrong.  I remember the same
| opinion as the above being expressed on the brew-a-stu list about
| fifteen years ago, and no doubt some other list will carry it in
| another fifteen years' time, with nothing else having changed.
| Anyone who wants secure voice connections (governments/military and a
| vanishingly small number of hardcore geeks) already have them, and
| have had them for years.  Everyone else just doesn't care, and
| probably never will.  This is why every single
| encrypted-phones-for-the-masses project has failed in the market.
| People don't see phone eavesdropping as a threat, and therefore any
| product that has a nonzero price difference or nonzero usability
| difference over an unencrypted one will fail.  This is why the only
| successful encrypted phone to date has been Skype, because the
| crypto comes for free.
|
| I once had a chat with someone who was responsible for indoctrinating
| the newbies that turn up in government after each election into
| things like phone security practices.  He told me that after a full
| day of drilling it into them (well, alongside a lot of other stuff
| from other departments) it sometimes took them as long as a week
| before they were back to loudly discussing sensitive information on
| a cellphone in the middle of a crowded restaurant.
|
| So in terms of secure voice communications, the military and geeks
| are already well served, and everyone else doesn't care.  Next,
| please.
I won't disagree with you here.  Most people don't perceive voice
monitoring as a threat to them - and if you're talking about
monitoring by many governments and by business intelligence snoopers,
they are perfectly correct.  (I say many governments because those
governments that actively monitor and control large portions of their
citizenry hardly make a secret of that fact, and citizens of those
countries just assume they might be overheard and act accordingly.
The citizens of, for lack of a better general phrase, the Western
democracies are quite right in their assessment that their governments
really don't care about what they are saying on the phone, unless they
are part of a very small subpopulation involved, whether legitimately
or otherwise, in politics or intelligence or a couple of other pretty
well understood areas.)

Selling protection against voice snooping to most people under current
circumstances is like selling flood insurance to people living in the
desert.  If you're an insurance hacker - like a security hacker - you
can point out that flash floods *can* happen, but if they are so rare
that no one is likely to be affected in their lifetime, your sales
pitch *should* fail.

What will change things is not the technology but the perception of a
threat.  Forty years ago, the perceived threat from airplane hijacking
was essentially non-existent, and no one would consider paying the
cost.  Today, we pay a very significant cost.  The threat is certainly
greater, but the *perceived* threat is orders of magnitude beyond even
that.  The moment the perceived threat from phone eavesdropping
exceeds some critical level, the market for solutions (good and, of
course, worthless) will materialize.

As you note, in the military and intelligence community, the real and
perceived threats have been there for years.  And the crypto hackers
will perceive a threat whether it exists or not.  I'd guess that the
next step will be in the business community.  All it will take is one
case where a deal is visibly lost because of "proven" eavesdropping
("proven" in quotes because it's unlikely that there will really be
any proof - just a *perception* of a smoking gun - and in fact it
could well be that the trigger case will really be someone covering
his ass over a loss for entirely different reasons) and all of a
sudden there will be a demand for strong crypto on every Blackberry
phone link.
Things have a way of spreading from there:  If the CEOs need this,
then maybe I need it, too.  If it is expensive or inconvenient, I may
feel the need, but I won't act on it.  But the CEOs will ensure that
it isn't inconvenient - they won't put up with anything that isn't
invisible to them - and technology will quickly drive down the cost.

-- Jerry



### Re: How the Greek cellphone network was tapped.

| Crypto has been an IP minefield for some years.  With the expiry of
| certain patents, and the availability of other unencumbered crypto
| primitives (eg. AES), we may see this change.  But John's other
| points are well made, and still valid.  Downloadable MP3 ring tones
| are a selling point.  E2E security isn't (although I've got to
| wonder about certain teenage demographics... :)
|
| It's also an open question whether network operators subject to
| interception requirements can legally offer built-in E2E encryption
| capabilities without backdoors.
It's going to be interesting to see the effect of the iPhone in this
area.  While nominally a closed system like all the handsets that
preceded it, in practice it's clear that people will find ways to load
their own code into the things.  (As of yesterday - less than two
weeks after the units shipped - people have already teased out how to
get to the debugging/code-patching interface and have extracted the
internal passwords.  The community doing this would make a fascinating
study in and of itself - an international group coordinating through
an open IM line, tossing around ideas.)  There's plenty of CPU power
available, and a fairly standard environment.  (In fact, recent
reports hint that the chip contains a hardware accelerator for Java.)
Between encrypted VOIP over WIFI and eventually over broadband cell -
keeping people from running voice over their broadband connections is
a battle the telcos can't win in the long run - and just plain
encrypted cell phone calls, I think in a couple of years anyone who
wants secure phone connections will have them.  There will be tons of
moaning about it from governments - not to mention the telcos, though
for them that will be a triviality compared to all the other things
they will lose control over - but no one is going to be able to put
this genie back in the bottle.

Also, right now, the technology to build a cell phone is still
specialized and capital-intensive.  But today's leading-edge chip and
manufacturing technology is tomorrow's commodity.  Ten, twenty years
from now, anyone will be able to put together the equivalent of
today's iPhone, just as anyone can go down to Fry's today and build
themselves what was a high-end PC a couple of years ago.  You can't
quite build your own laptop yet, but can that be far off?  A gray-box
cellphone might not compete with what you'll be able to buy from the
leading-edge guys of the day, but it will be easily capable of what's
needed to do secure calling.

So - who's going to write the first RFC for secure voice over cell,
thus circumventing the entire government/telco/PTT standards process?
We're not quite ready for it to take off, but we're getting close.

-- Jerry



### Historical one-way hash functions

So, you want to be able to prove in the future that you have some
piece of information today - without revealing that piece of
information.  We all know how to do that:  Widely publish today the
one-way hash of the information.

Well ... it turns out this idea is old.  Very old.
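In modern terms this is a hash commitment.  A minimal sketch in Python
(the helper names are mine; note that a real commitment over a short,
guessable phrase should also mix in a random salt, since otherwise an
adversary can simply hash candidate phrases until one matches):

```python
import hashlib

def commit(secret: str) -> str:
    # Publish this digest today; it reveals nothing practical about the secret.
    return hashlib.sha256(secret.encode()).hexdigest()

def verify(secret: str, published: str) -> bool:
    # Later, anyone can recompute the hash to check the revealed secret.
    return commit(secret) == published

digest = commit("ut tensio sic uis")        # publish this now
assert verify("ut tensio sic uis", digest)  # reveal the phrase later
assert not verify("some other claim", digest)
```

Without the salt, this is exactly as weak against a determined guesser
as the anagram trick described below.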
In the 17th century, scientists were very concerned about establishing
priority; but they also often wanted to delay publication so that they
could continue to work on the implications of their ideas without
giving anyone else the opportunity to do it.  Thus, in 1678, Robert
Hooke published an idea he had first developed in 1660.  Even then, he
only published the following:  ceiiinosssttuu.  Two years later, he
revealed that this was an anagram of the Latin phrase Ut tensio sic
uis - as the tension, so the power - what we today call Hooke's Law of
elastic deformation.  (This story appears in Henry Petroski's The
Evolution of Useful Things.)

-- Jerry



### What Banks Tell Online Customers About Their Security

From CIO magazine.  For the record, I, like the author, am a Bank of
America customer, but unlike her I've started using their on-line
services.  What got me to do it was descriptions of the increasing
vulnerability of traditional paper-based mechanisms:  If I pay a
credit card by mail, I leave in my mailbox an envelope with my credit
card account number, my address, a check with all the banking
information needed to transfer money - and probably a bunch of other
envelopes with similar information.  Yes, I could carry it to a post
box or even a post office, but the inconvenience is getting pretty
large at that point.  Meanwhile, the on-line services have some unique
security features of their own, like the ability to send me an email
notification when various conditions are met, like large transactions.

-- Jerry

From: www.cio.com
What Banks Tell Online Customers About Their Security
Sarah D. Scalet, CIO
May 29, 2007

By the end of 2006, U.S.
banks were supposed to have implemented strong authentication for
online banking - in other words, they needed to put something besides
a user name and password in between any old Internet user and all the
money in a customer's banking account.  The most obvious way to meet
the guidance, issued by the U.S. Federal Financial Institutions
Examination Council (FFIEC), would have been to issue one-time
password devices or set up another form of two-factor authentication.
But last summer, when I did a preliminary evaluation of security
offerings at the country's largest banks, I was pretty unimpressed.
(See "Two-Factor Too Scarce at Consumer Banks,"
http://www.cio.com/article/113750/.)

Since then, I've given up on getting a one-time-password device, and
have accepted the fact that banks are instead moving toward what might
diplomatically be called "creative authentication."  (See "Strong
Authentication: Success Factors,"
http://www.csoonline.com/read/110106/fea_strong_auth.html.)  Given
that man-in-the-middle attacks can circumvent two-factor
authentication, a combination of device authentication, additional
security questions and extra fraud controls doesn't seem like a bad
approach.

But, I wondered, almost six months past the FFIEC deadline, what are
banks telling customers about online security?  As the chief financial
officer of Chateau Scalet - and as a working mother about to have baby
No. 2 - I wanted to know if any of them could offer me enough
assurance that I would take the online banking plunge as a way to
simplify my life.  I decided it was time to update my research from
last year.  I called the call centers at each of the top three banks,
identified myself as a customer with a checking and savings account,
and told them I was interested in online banking but concerned about
security.  The point, yes, was to see what type of security each bank
had in place.
More than that, however, I wanted to see how well each bank was able
to communicate about security through its call center.  After all,
what good is good security if you can't explain it to your customers?
Here's what I learned.

Citibank

My first call was to Citibank.  I started with my standard question:
How can I be assured that my online banking transactions are secure
and private?  The call center rep said that Citibank uses 128-bit
encryption, which "verifies that you have a maximum level of
security."  End of answer.  Pause.  I asked what kinds of protections
Citibank had in place for making sure that it would really be me
logging onto my account.  "I'm sorry," he said, "but I don't
understand your question."

We had a language barrier, he and I.  The call-center rep, in India,
was not a native English speaker.  The call went poorly, and I have no
way of knowing whether this was because of our communications barrier
or simply because Citibank hadn't instructed him how to answer
questions about security.  I repeated my question a couple of times,
and he finally said, "Let me look into that, ma'am."  I waited on hold
more than a minute, and when he came back, he told me I could go
online and read all about online banking.  "All the information is
there, ma'am," he said politely.

I kept prodding.  I asked if Citibank offered tokens or did device
recognition of some sort, and he told me I could log on with a user
name and password.  At any computer where I punch in my user name and
password, I'll have full access to my account?  I asked.  "Yes,
ma'am, anyplace you have Internet access," he answered.  He finally
did say that in certain situations I would be asked extra security
questions, but he wouldn't or couldn't explain when that happened or
why.  I asked if it was unusual for him to field calls about security,
and he said yes.  I finally ended the call in frustration.

Chase

Next I called Chase.
This time I got a woman in Michigan, who at least didn't try to shunt
me off onto the Internet - well, at least not right away.  But she
seemed to interpret my every question about security



### Re: The bank fraud blame game

| | Given that all you need for this is a glorified pocket
| |   calculator, you could (in large enough quantities) probably get
| |   it made for $10, provided you shot anyone who tried to
| |   introduce product-deployment DoS mechanisms like smart cards and
| |   EMV into the picture.  Now all we need to do is figure out how
| |   to get there from here.
| |
| |  I'd suggest starting from the deployment, training, and help desk
| |  costs.  The technology is free, getting users to use it is not.  I
| |  helped several banks look at this stuff in the late 90s, when cost
| |  of a smartcard reader was on the order of $25, and deployment costs were
| |  estimated at $100, and help desk at$50/user/year.
| |
| | Of course, given the magnitude of costs of fraud, and where it may
| | be heading in the near term, the $50 a year may be well spent, | | especially if it could be cut to$25 with some UI investment. It is
| | all a question of whether you'd rather pay up front with the
| | security apparatus or after the fact in fraud costs...
|
| It may be, indeed.  You're going (as Lynn pointed out in another post)
| to be fighting an uphill battle against the last attempts.  I don't
| think smartcards (per se) are the answer.  What you really need is
| something like a palm pilot, with screen and input and a reasonably
| trustworthy OS, along with (as you say) the appropriate UI investment.
You do realize that you've just come down to what the TPM guys want to
build?  (Of course, much of the driving force behind having TPM comes
from a rather different industry.  We're all happy when TPM can be
used to ensure that our banking transactions actually do what the bank
says it will do for a particular set of instructions issued by us and
no one else, not so happy when they ensure that our music transactions
act the same way.)

Realistically, the only way these kinds of devices could catch on would
be for them to be standardized.  No one would be willing to carry one
for their bank, another for their stock broker, a third for their
mortgage holder, a fourth for their credit card company, and so on.
But once they *are* standardized, almost the same potential for
undesirable uses appears as for TPMs.  What's to prevent the movie
industry from demanding access to your Fob before they authorize you
to watch a movie?  If the only significant difference between this
USAF and a TPM is that the latter is more convenient because it is
more tightly tied to the machine, we might as well have the
convenience.

(This is why I find much of the discussion about TPM so surreal.  The
issue isn't the basic technology, which one way or another, in some form,
is going to get used.  It's how we limit the potential misuses.)

-- Jerry




### TPM, part 2


All your data belong to us.  From Computerworld.

-- Jerry

Trusted Computing Group turns attention to storage
Chris Mellor

June 24, 2007 (TechWorld.com) The Trusted Computing Group has announced
a draft specification for protecting sensitive data on hard drives,
flash drives, tape cartridges and optical disks.  These devices won't
release data unless the access request is validated by their own
on-drive security function.

David Hill, a principal in the Mesabi Group, said:  "The public media
blares the loss of confidential information on large numbers of
individuals on what seems a daily basis, and that is only the tip of
the data breach iceberg for not having trusted storage.  Trusted
storage will soon be seen as a necessity - not just a nice-to-have -
by all organizations."

The Trusted Computing Group (TCG) is a not-for-profit industry-standards
organization with the aim of enhancing the security of computers
operating in disparate platforms. Its draft, developed by more than 60
of the TCG's 2175 member companies, specifies an architecture which
defines how accessing devices could interact with storage devices to
prevent unwanted access.

Storage devices would interact with a trusted element in host systems,
generally a Trusted Platform Module (TPM), which is embedded into most
enterprise PCs. The trust and security functions from the specification
could be implemented by a combination of firmware and hardware on the
storage device. Platform-based applications can then utilize these
functions through a trusted command interface negotiated with the SCSI
and ATA standards committees.

Thus a server or PC application could issue access requests to a disk
drive and provide a key, random number or hash value. The drive hardware
and/or firmware checks that this is valid and then supplies the data,
decrypting it if necessary. Future versions of the SATA, SCSI and SAS
storage interfaces would be extended to support the commands and
parameters needed for such access validity checking.
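The article doesn't give the actual protocol, but the key/nonce/hash
exchange it describes can be illustrated with a toy HMAC-based check.
All names and details below are my own sketch, not the TCG
specification:

```python
import hashlib
import hmac
import os

class TrustedDrive:
    """Toy model of a storage device that validates access requests itself."""

    def __init__(self, shared_key: bytes, blocks: dict):
        self._key = shared_key
        self._blocks = blocks          # block number -> stored data

    def challenge(self) -> bytes:
        # Hand the host a fresh random number to bind to its next request.
        self._nonce = os.urandom(16)
        return self._nonce

    def read(self, block: int, proof: bytes) -> bytes:
        # "Firmware" checks the supplied hash value before releasing data.
        expected = hmac.new(self._key, self._nonce + block.to_bytes(4, "big"),
                            hashlib.sha256).digest()
        if not hmac.compare_digest(proof, expected):
            raise PermissionError("access request failed validation")
        return self._blocks[block]

# Host side: prove knowledge of the shared key for each access request.
key = os.urandom(32)
drive = TrustedDrive(key, {0: b"sensitive data"})
nonce = drive.challenge()
proof = hmac.new(key, nonce + (0).to_bytes(4, "big"), hashlib.sha256).digest()
assert drive.read(0, proof) == b"sensitive data"
```

A real implementation would live in drive firmware behind extended
SATA/SCSI commands and would also encrypt the stored blocks; the point
here is only the shape of the validity check.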

Mark Re, Seagate Research SVP, said:  "Putting trust and security
functions directly in the storage device is a novel idea, but that is
where the sensitive data resides.  Implementing open, standards-based
security solutions for storage devices will help ensure that system
interoperability and manageability are greatly improved, from the
individual laptop to the corporate data center."  Seagate already has
an encrypting drive.

Marcia Bencala, Hitachi GST's marketing and strategy VP, said:
"Hitachi's Travelstar mobile hard drives support bulk data encryption
today and we intend to incorporate the final Trusted Storage
Specification as a vital part of our future-generation products."

The TCG has formed a Key Management Services subgroup, to provide a
method to manage cryptographic keys.

Final TCG specifications will be published soon but companies could go
ahead and implement based on the draft spec.




### The bank fraud blame game


As always, banks look for ways to shift the risk of fraud to someone -
anyone - else.  The New Zealand banks have come up with some interesting
wrinkles on this process.  From Computerworld.

-- Jerry

NZ banks demand a peek at customer PCs in fraud cases
Stephen Bell

June 26, 2007 (Computerworld New Zealand) Banks in New Zealand are
demanding access to customers' computers in disputed-transaction
cases, to verify whether they have enough security protection.

Under the terms of a new banking Code of Practice, banks may request
access in the event of a disputed transaction to see if security
protection is in place and up to date.

The code, issued by the Bankers' Association last week after lengthy
drafting and consultation, now has a new section dealing with Internet
banking.

"Liability for any loss resulting from unauthorized Internet banking
transactions rests with the customer if they have used a computer or
device that does not have appropriate protective software and operating
system installed and up-to-date, [or] failed to take reasonable steps to
ensure that the protective systems, such as virus scanning, firewall,
antispyware, operating system and antispam software on [the] computer,
are up-to-date."

The code also states that the customer must allow the bank access to
their computer or device in order to verify that they have taken all
reasonable steps to protect their information in accordance with the
code.

"If you refuse our request for access then we may refuse your claim."

InternetNZ was still reviewing the new code last week, executive
director Keith Davidson told Computerworld.

In general terms, InternetNZ has been encouraging all Internet users to
be more security conscious, especially ... to use up-to-date virus
checkers, spyware deletion tools and a robust firewall, Davidson says.

The new code now places a clear obligation on users to comply with some
pragmatic security requirements, which does seem appropriate. If fraud
continues unabated, then undoubtedly banks would need to increase fees
to cover the costs of fraud, he says, so increasing security awareness
and compliance in advance is probably the better tactic for both banks
and their customers.

Bank customers who are unhappy with the new rules may choose to return
to dealing with tellers at the bank.  But it seems that electronic
particular Internet banking has become the convenient choice for
consumers, Davidson says.

The code also warns users that they could be liable for any loss if they
have chosen an obvious PIN or password, such as a consecutive sequence
of numbers, a birth date or a pet's name; disclosed a PIN or password to
a third party or kept a written or electronic record of it. Similar
warnings are already included in the section that deals with ATM and
Eftpos PINs, which was issued in 2002.
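Of the "obvious" choices listed, the repeated-digit and
consecutive-sequence cases are the ones that can be screened for
mechanically; a sketch (my own illustration - birth dates and pet
names would need user-specific data):

```python
def is_obvious_pin(pin: str) -> bool:
    # Flags all-one-digit PINs and ascending/descending runs like 1234 or 4321.
    if len(set(pin)) == 1:
        return True
    digits = [int(d) for d in pin]
    steps = {b - a for a, b in zip(digits, digits[1:])}
    return steps in ({1}, {-1})

assert is_obvious_pin("1234")
assert is_obvious_pin("9999")
assert not is_obvious_pin("2741")
```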

There is nothing in this clause allowing an electronic record to be held
in a password-protected cache -- a facility provided by some commercial
security applications.

For their part, the banks undertake to provide information on their
websites about appropriate tools and services for ensuring security, and
to tell customers where they can find this information when they sign up
for Internet banking.

One issue we have raised with the Bankers Association in the past is
that banks should not initiate email contact with their customers,
Davidson says.

The code allows banks to use unsolicited email among other media to
advise of changes in their arrangements with the customer, but Davidson
says they should only utilize their web-based mail systems.

It is hardly surprising that some people fall victim to phishing email
scams when banks use email as a normal method of communication, and
therefore email can be perceived as a valid communication by end users,
he says.




### Re: anti-RF window film

| http://www.sciam.com/article.cfm?articleid=6670BF9B-E7F2-99DF-3EAC1C6DC382972F
|
| A company is selling a window film that blocks most RF signals.  The
| obvious application is TEMPEST-shielding.  I'm skeptical that it will
| be very popular -- most sites won't want to give up Blackberry and
| cell phones...
Real life follows fiction?  There was a Law and Order episode a year or
two back in which a high-tech company used some alleged technology like
this - a fine mesh of wires over the windows.  (An important clue was
one of the detectives noticing that the mesh had been disturbed.
Someone had replaced the wires in a small region with black thread, then
hid a cell-phone repeater outside the window.  As I recall, the reason
for doing so was just your typical hacker "you try to stop me, I'll
get around you" trick.)

There were also reports not that long ago of a paint that provided
RF shielding.  On a more refined basis, there was some kind of
material suitable for walls that had embedded antennas.  You cut
them for a particular frequency range, and they provided very good
shielding in that range.

There is clearly a demand for this kind of thing.  New technologies
are making a hash of the old (sometimes not so old!) rules.  Two
examples:

- Unrestricted Internet access at work is long
gone in most places.  All kinds of concerns feed
into this; a big part is concern about liability when
employees access inappropriate sites.  This will all
seem a bit silly when the penetration of high-speed
wireless Internet access reaches reasonable levels.

- Insider trading rules have placed all kinds of interesting
constraints on communications at financial firms.  In
particular, every phone message in and out of
sensitive areas is recorded, as is all email.
But cell phones, text messaging, and so on bypass
all that.  I gather some firms are responding by
requiring that employees use only company-provided
cell phones.  (Whether those calls get recorded is
another question.)  How well they'll be able to
maintain such policies, as cell phones morph into
multi-function personal devices, is an open question.

With all this going on, the desire to just finesse the whole problem
by physically blocking signals is certainly only going to grow.

Interesting times.
-- Jerry




### Re: The bank fraud blame game

| Leichter, Jerry writes:
| -+---
|  | As always, banks look for ways to shift the risk of
|  | fraud to someone - anyone - else.  The New Zealand
|  | banks have come up with some interesting wrinkles on
|  | this process.
|  |
|
| This is *not* a power play by banks, the Trilateral Commission,
| or the Gnomes of Zurich.  It is the first echo of a financial
| thunderclap.  As, oddly, I said only yesterday, I think that
| big ticket Internet transactions have become inadvisable
| and will become more so.  I honestly think that the party
| could be over for e-commerce, with eBay Motors as its
| apogee
Actually, we don't really disagree with the rest of your message, and
I'm not claiming some kind of conspiracy.  This isn't really a power
play because the banks hold all the cards.  Perhaps we're reading
different parts of the message I forwarded.  Consider:

Liability for any loss resulting from unauthorized Internet
banking transactions rests with the customer if they have used
a computer or device that does not have appropriate protective
software and operating system installed and up-to-date, [or]
failed to take reasonable steps to ensure that the protective
systems, such as virus scanning, firewall, antispyware,
operating system and antispam software on [the] computer, are
up-to-date.
OK, I could live with that as stated.  But:

We may ask for access to your computer or device in order to
verify that you have taken
all reasonable steps to protect your computer or device and
safeguard your secure information in accordance with this code.

If you refuse our request for access then we may refuse your
claim.
The delay between when you were defrauded and when they request
access is unspecified.  Who knows what's happened in the meanwhile?
Perhaps, as a result of my experience, I stopped using on-line banking,
and then decided it wasn't worth keeping all the (obviously
ineffective) software up to date.  This is just too open-ended a
requirement.  All reasonable steps?  Just what *are* all reasonable
steps?  I think I know more than most people about how to keep systems
secure, but I'd be at a loss to make a list that could reasonably
be called all reasonable steps.  (Actually, my list would probably
include "don't use IE or Outlook".  Is that reasonable?)

Bank customers who are unhappy with the new rules may choose to
go back to dealing with tellers at the bank.  But it seems that electronic
banking and in particular Internet banking has become the
convenient choice for consumers, Davidson says.
On-line access is on its way to becoming a necessity.  EZ-Pass in New
York (electronic toll collection) now charges $2/month if you want them
to send you a printed statement - go for all on-line access, and it's free.
Hardly a necessity yet, but this is a harbinger.  (Meanwhile, the
percentage of EZ-Pass only lanes at toll plazas keeps rising.  You don't
*need* to use EZ-Pass, if you're willing to incur significant delays.)

The code also warns users that they could be liable for any loss
if they have chosen an obvious PIN or password, such as a
consecutive sequence of numbers, a birth date or a pet's name;
disclosed a PIN or password to a third party or kept a written
or electronic record of it. Similar warnings are already
included in the section that deals with ATM and PINs for Eftpos
that was issued in 2002.

There is nothing in this clause allowing an electronic record to
be held in a password-protected cache -- a facility provided by
some commercial security applications.
This is not just wrong, it's *dangerously* wrong.

The code allows banks to use unsolicited email among other media
to advise of changes in their arrangements with the customer,
but Davidson says they should only utilize their web-based mail
systems.

It is hardly surprising that some people fall victim to
phishing email scams when banks use email as a normal method of
communication, and therefore email can be perceived as a valid
communication by end users, he says.
As we've discussed here many times, banks' mail messages are incredibly
hazardous, and teach entirely the wrong things.

-- Jerry




### RE: Free Rootkit with Every New Intel Machine

| ...Apple is one vendor who I gather does include a TPM chip on their
| systems, I gather, but that wasn't useful for me.
Apple included TPM chips on their first round of Intel-based Macs.
Back in 2005, there were all sorts of stories floating around the net
about how Apple would use TPM to prevent OS X running on non-Apple
hardware.

In fact:

- Some Apple models contain a TPM module (the Infineon TPM1.2);
some (second generation) don't;

- No current Apple model contains an EFI (boot) driver for the
module;

- No current version of OS X contains a driver to access the
module for any purpose;

- Hence:  OS X doesn't rely on TPM to block execution on non-
Apple hardware.  In fact, there is an active hackers'
community that gets OS X to run on "hackintoshes" -
an announcement of OS X on a Sony Vaio made the
rounds just a couple of days ago.  Apparently the
only real difficulty is writing appropriate boot
and other low-level drivers.

Amit Singh, the author of the definitive reference on OS X internals,
has written and distributed an OS X driver for the TPM on those
machines that have it.  For all kinds of details, see his page at:

http://www.osxbook.com/book/bonus/chapter10/tpm/

-- Jerry




### Re: Quantum Cryptography

|  - Quantum Cryptography is fiction (strictly claims that it solves
|an applied problem are fiction, indisputably interesting Physics).
|
|  Well that is a broad (and maybe unfair) statement.
|
|  Quantum Key Distribution (QKD) solves an applied problem of secure key
|  distribution. It may not be able to ensure unconditional secrecy
|  during key exchange, but it can detect any eavesdropping. Once
|  eavesdropping is detected, the key can be discarded.
|
| Secure in what sense? Did I miss reading about the part of QKD that
| addresses MITM (just as plausible IMHO with fixed circuits as passive
| eavesdropping)?
|
| Once QKD is augmented with authentication to address MITM, the Q
| seems entirely irrelevant.
The unique thing the Q provides is the ability to detect eaves-
dropping.  I think a couple of weeks ago I forwarded a pointer to
a paper showing that there were some limits to this ability, but
even so, this is a unique feature that no combination of existing
classical techniques provides.  The current approach of the QKD
efforts is to assume that physical constraints are sufficient to
block MITM, while quantum constraints
block passive listening (which is assumed not to be preventable
using physical constraints).  It's the combination that gives you
security.
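The detection property can be illustrated with a toy BB84-style simulation.  This is a classical Monte Carlo sketch, not real quantum mechanics, and the function name is made up for illustration: an intercept-and-resend attacker who must guess measurement bases corrupts roughly a quarter of the sifted bits, which the legitimate parties can notice by comparing a sample.

```python
import random

def bb84_error_rate(n_photons, eavesdrop, rng):
    """Fraction of sifted bits on which Alice and Bob disagree."""
    errors = sifted = 0
    for _ in range(n_photons):
        bit = rng.randrange(2)          # Alice's bit
        basis_a = rng.randrange(2)      # Alice's encoding basis
        value, basis = bit, basis_a     # state of the photon in flight
        if eavesdrop:                   # intercept-and-resend attacker
            basis_e = rng.randrange(2)
            if basis_e != basis:
                value = rng.randrange(2)  # wrong basis randomizes the result
            basis = basis_e               # photon is resent in Eve's basis
        basis_b = rng.randrange(2)      # Bob's measurement basis
        result = value if basis_b == basis else rng.randrange(2)
        if basis_b == basis_a:          # sifting: keep matching-basis rounds
            sifted += 1
            errors += (result != bit)
    return errors / sifted

rng = random.Random(7)
print(bb84_error_rate(20000, False, rng))  # 0.0: no tap, no errors
print(bb84_error_rate(20000, True, rng))   # ~0.25: the tap shows up as errors
```

The 25% figure falls out of the arithmetic: Eve guesses the wrong basis half the time, and a wrong-basis resend gives Bob a random bit, wrong half the time.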

One can argue about the reasonableness of this model - particularly
about the ability of physical limitations to block MITM.  It does
move the center of the problem, however - and into a region (physical
protection) in which there is much more experience and perhaps
some better intuition.  Valid or not, it certainly is easier to
give people the warm fuzzies by talking about physical protection.

In the other direction, whether the ability to detect eavesdropping lets
you do anything interesting is, I think, an open question.  I wouldn't
dismiss it out of hand.  There's an old paper that posits a related
primitive, Verify Once Memory:  Present it with a set of bits, and it
answers either "Yes, that's the value stored in me" or "No, wrong value".
In either case, *the stored bits are irrevocably scrambled*.  (One
could, in principle, build such a thing with quantum bits, but beyond
the general suggestions in the original paper, no one has worked out how
to do this in detail.)  The paper uses this as a primitive to construct
unforgeable subway tokens:  Even if you buy a whole bunch of valid
tokens, and get hold of a whole bunch of used ones, you have no way
to construct a new one.  (One could probably go further - I don't
recall if the paper does - and have a "do the two of you match"
primitive, which would use quantum bits in both the token and the
token validator.  Then even if you had a token validator, you couldn't
create new tokens.  Obviously, in this case you don't want to scramble
the validator.)
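The interface can be sketched classically (class and method names are made up for illustration).  The catch, and the whole point of the quantum construction, is that here the destroy-on-query property is enforced only by the class, whereas quantum bits would make it physically unbypassable:

```python
import os
import secrets

class VerifyOnceMemory:
    """Classical *simulation* of the verify-once primitive: one
    equality query is answered, and the stored value is destroyed
    no matter how the query came out."""

    def __init__(self, secret: bytes):
        self._secret = secret

    def verify(self, candidate: bytes) -> bool:
        ok = secrets.compare_digest(self._secret, candidate)
        self._secret = os.urandom(len(self._secret))  # scramble either way
        return ok

token = VerifyOnceMemory(b"token-value-123")
print(token.verify(b"token-value-123"))  # True: first query matches
print(token.verify(b"token-value-123"))  # False: the value was scrambled
```

A subway token built this way can be checked once by the turnstile; even someone holding a pile of fresh and used tokens learns nothing that helps mint a new one.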
-- Jerry




### Re: Why self describing data formats:

| Many protocols use some form of self describing data format, for
| example ASN.1, XML, S expressions, and bencoding.
|
| Why?
|
| Presumably both ends of the conversation have negotiated what protocol
| version they are using (and if they have not, you have big problems)
| and when they receive data, they need to get the data they expect.  If
| they are looking for list of integer pairs, and they get a integer
| string pairs, then having them correctly identified as strings is not
| going to help much.
I suspect the main reason designers use self-describing formats is the
same reason Unix designers tend to go with all-ASCII formats:  It's
much easier to debug by eye.  Whether this is really of significance
at any technical level is debatable.  At the social level, it's very
important.  We're right into "worse is better" territory:  Self-
describing and, especially, ASCII-based protocols and formats are much
easier to hack with.  It's much easier to recover from errors in a
self-describing format; it's much easier to make reasonable inter-
pretations of incorrect data (for better or worse).  Network lore makes
this a virtue:  "Be conservative in what you send, liberal in what you
accept."  (The first part gets honored in the breach all too often, and
of course, the second is a horrible prescription for cryptography or
security in general.)  So software to use such protocols and formats
gets developed faster, spreads more widely, and eventually you have an
accepted standard that's too expensive to replace.
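The trade-off is easy to see side by side.  A minimal sketch (record contents made up for illustration) contrasting a fixed binary layout with a self-describing encoding:

```python
import json
import struct

record = {"user_id": 42, "name": "alice"}

# Fixed binary layout: compact and fast, but opaque -- both ends must
# agree on the exact byte layout, and any mismatch yields garbage.
packed = struct.pack("!I5s", record["user_id"], record["name"].encode())

# Self-describing: bulkier, but readable by eye and recoverable by a
# forgiving parser -- the property argued above to drive adoption.
described = json.dumps(record)

print(packed)     # b'\x00\x00\x00*alice'
print(described)  # {"user_id": 42, "name": "alice"}
```

A debugger staring at the first form sees four mystery bytes and a string; the second form documents itself, which is exactly why it wins socially even when it loses technically.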

The examples are rife.  HTML is a wonderful one:  It's a complex but
human-readable protocol that a large fraction (probably a majority) of
generators get wrong - so there's a history of HTML readers ignoring
errors and doing the best they can.  Again, this is a mixed bag - on
the one hand, the web would clearly have grown much more slowly without
it; on the other, the lack of standardization can cause, and has caused,
problems.  (IE6-only sites, raise your hands.)

Looked at objectively, it's hard to see why XML is even a reasonable
choice for many of its current uses.  (A markup language is supposed to
add semantic information over an existing body of data.  If most of
the content of a document is within the markup - true of probably the
majority of uses of XML today - something is very wrong.)  But it's
there, there are tons of ancillary programs, so ... the question that
gets asked is not "why use XML?" but "why *not* use XML?"  (Now, if I
could only learn to relax and stop tearing my hair every time I read
some XML paper in which they use "semantics" to mean what everyone
else uses "syntax" for...)
-- Jerry





Interesting-looking article on how users of P2P networks end up sharing
much more than they expected:  http://weis2007.econinfosec.org/papers/43.pdf

-- Jerry




| Interesting-looking article on how users of P2P networks end up sharing
| much more than they expected:  http://weis2007.econinfosec.org/papers/43.pdf
Earlier analysis by the USPTO:

-- Jerry




|  Just being able to generate traffic over the link isn't enough to
|  carry out this attack.
|
| Well, it depends on if you key per-flow or just once for the link.  If
| the latter, and you have the ability to create traffic over the link,
| and there's a 1-for-1 correspondence between plaintext and encrypted
| packets, then you have a problem.
I have no clue what this means.

| Scenarios include:
|
| Private wifi network, you are sending packets at a customer from
| unprivileged node on internet; you want known plaintext for the key
| used to secure the wifi traffic, or you want the contents of his
| connection.
|
| Target is VPN'ed into corporate headquarters, you are sending packets
| at them (or you send them email, they download it from their mail server)
Again, we're talking about a particular attack, which requires (a) not
known, but chosen plaintext; (b) more, *adaptive* chosen plaintext:  You
have to be in a position to choose the plaintext for the next block that
will be encrypted *after* you've seen the ciphertext of the previous
block.
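The adaptive step is where the attack lives.  A minimal sketch, with a keyed SHA-256 PRF standing in for the block cipher (only the encrypt direction is needed) and the flawed construction being CBC whose per-message IV is the last ciphertext block of the previous message; all names are hypothetical:

```python
import hashlib

BLOCK = 16

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class ChainedCBC:
    """CBC encryptor whose IV for each message is the last ciphertext
    block of the previous one -- the construction the attack targets."""

    def __init__(self, key, iv):
        self.key = key
        self.chain = iv  # next message's IV, visible to an eavesdropper

    def _enc_block(self, block):
        return hashlib.sha256(self.key + block).digest()[:BLOCK]

    def encrypt(self, plaintext):
        out = []
        for i in range(0, len(plaintext), BLOCK):
            self.chain = self._enc_block(xor(plaintext[i:i + BLOCK], self.chain))
            out.append(self.chain)
        return b"".join(out)

enc = ChainedCBC(b"session key", b"\x00" * BLOCK)

# Victim encrypts a low-entropy block; the attacker records the IV it
# was chained with and the resulting ciphertext.
iv_victim = enc.chain
c_victim = enc.encrypt(b"PIN=0000********")

# The attacker learns the *next* IV (the last ciphertext block) before
# choosing the next plaintext -- the adaptive part.  To test a guess g,
# inject g XOR iv_victim XOR iv_next: the IVs cancel, so it encrypts to
# the victim's ciphertext block exactly when g was the victim's plaintext.
guess = b"PIN=0000********"
iv_next = enc.chain
c_probe = enc.encrypt(xor(xor(guess, iv_victim), iv_next))
print(c_probe[:BLOCK] == c_victim[:BLOCK])  # True: guess confirmed
```

Note what the attacker needed: full control of one whole plaintext block, placed first in the next message, chosen after seeing the previous ciphertext.  That is exactly the combination the rest of this message argues is hard to arrange.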

Nothing like e-mail is going to work, since even supposing you could grab
the last packet on the link and get your mail message to be the next
thing to get sent over the link:  The first block of a mail message as
a server delivers it is never going to be completely under your control,
and in fact it's unlikely you can control any of it.  (If that's what
you meant by a "1-for-1 correspondence between plaintext and encrypted
packets", then I guess that mail doesn't come close.)

You can certainly come up with artificial scenarios where such an attack
would work.  For example, suppose you know that the victim is tailing
some file, and you can write to that file.  Then by appending to the
file you are inserting chosen plaintext into his datastream.  Maybe you
could come up with some analogous attack around an RSS feed, but I'm
skeptical - you have to be able to choose every byte of the first block
or the attack is impossible.  Further, suppose you can choose the first
block, but only some fraction of the time.  Well ... without some other
signal, you can't tell if the first block failed to match your target
because your target was wrong, or because you didn't manage to get your
block in.  So now you have to try repeatedly.  What was always a brute-
force search just got multiplied by some factor.

Again, it's not that this isn't a potentially significant attack.  It's
that the combination of special circumstances you need to pull it off -
a block known to have high value; a small enough set of possible values
for that block that you have hope of guessing it correctly before the
key is reused; enough pauses in the datastream to actually let you try
enough probes to get a significant probability of confirming a guess;
the ability to insert random packets into the plaintext repeatedly
without it being noticed - limits the plausible attack scenarios to a
rather small set.  One can worry about everything or one can try to
field real systems.
-- Jerry





|  |   Frankly, for SSH this isn't a very plausible attack, since
|  |   it's not clear how you could force chosen plaintext into an
|  |   SSH session between messages.  A later paper suggested that
|  |   SSL is more vulnerable: A browser plugin can insert data into
|  |   an SSL protected session, so might be able to cause
|  |   information to leak.
|  |
|  |  Hmm, what about IPSec?  Aren't most of the cipher suites used
|  |  there CBC mode?
|  |
|  | ESP does not chain blocks across packets.  One could produce an
|  | ESP implementation that did so, but there is really no good reason
|  | for that, and as has been widely discussed, an implementation
|  | SHOULD use a PRNG to generate the IV for each packet.
|  I hope it's a cryptographically secure PRNG.  The attack doesn't
|  require any particular IV, just one known to an attacker ahead of
|  time.
|
|  However, cryptographically secure RNG's are typically just as
|  expensive as doing a block encryption.  So why not just encrypt the
|  IV once with the session key before using it?  (This is the
|  equivalent of pre-pending a block of all 0's to each packet.)
|
| But if the key doesn't change between messages then this makes the IV
| of the second block constant and if any plaintext repeats in the first
| block of plaintext then you have a problem.
I guess my proposal was ambiguous.  You don't use the encryption of
the *initial* IV for each packet; you use the encryption of what you
would otherwise have used as the IV, i.e., the last ciphertext block
of the previous packet.  The IV of the packet after that is just as
variable as it ever was.  (If it were not, then CBC would be just
about useless:  The CBC encryption of, say, the second block of two
all 0 blocks would always be the same!)
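The proposal, sketched with a keyed SHA-256 PRF standing in for the block cipher (all names hypothetical): derive each packet's IV by encrypting the previous packet's last ciphertext block once, which is byte-for-byte the same as CBC-encrypting a leading all-zero block.

```python
import hashlib

BLOCK = 16

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def enc_block(key, block):
    # Toy keyed PRF standing in for the block cipher's encrypt operation.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_encrypt(key, iv, plaintext):
    chain, out = iv, []
    for i in range(0, len(plaintext), BLOCK):
        chain = enc_block(key, xor(plaintext[i:i + BLOCK], chain))
        out.append(chain)
    return b"".join(out)

key = b"session key"
prev_last = b"\xaa" * BLOCK   # last ciphertext block of prior packet (public)

# Flawed chaining would use prev_last directly as the IV -- predictable.
# The proposal: encrypt it once first, so the attacker can't compute it.
iv = enc_block(key, prev_last)
ct = cbc_encrypt(key, iv, b"sixteen byte msg")

# The parenthetical claim above: this equals CBC-encrypting the same
# packet with an all-zero block prepended, under the old chained IV.
with_zero = cbc_encrypt(key, prev_last, b"\x00" * BLOCK + b"sixteen byte msg")
assert with_zero == iv + ct
```

The zero block XORs away against the chained IV, so the first output block is exactly the encrypted IV; everything after that matches, which is why the two descriptions are interchangeable.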
-- Jerry




|   Frankly, for SSH this isn't a very plausible attack, since it's not
|   clear how you could force chosen plaintext into an SSH session between
|   messages.  A later paper suggested that SSL is more vulnerable:
|   A browser plugin can insert data into an SSL protected session, so
|   might be able to cause information to leak.
|
|  Hmm, what about IPSec?  Aren't most of the cipher suites used there
|  CBC mode?
|
| ESP does not chain blocks across packets.  One could produce an ESP
| implementation that did so, but there is really no good reason for
| that, and as has been widely discussed, an implementation SHOULD use
| a PRNG to generate the IV for each packet.
I hope it's a cryptographically secure PRNG.  The attack doesn't require
any particular IV, just one known to an attacker ahead of time.

However, cryptographically secure RNG's are typically just as expensive
as doing a block encryption.  So why not just encrypt the IV once with
the session key before using it?  (This is the equivalent of pre-pending
a block of all 0's to each packet.)
-- Jerry




|  Frankly, for SSH this isn't a very plausible attack, since it's not
|  clear how you could force chosen plaintext into an SSH session between
|  messages.  A later paper suggested that SSL is more vulnerable:
|  A browser plugin can insert data into an SSL protected session, so
|  might be able to cause information to leak.
|
| Hmm, what about IPSec?  Aren't most of the cipher suites used there
| CBC mode?  If it doesn't key each flow seperately, and the opponent
| has the ability to generate traffic over the link, which isn't
| unreasonable, then this would seem feasible.  And then there's openvpn,
| which uses SSL for the point-to-point link, thus probably vulnerable,
| more vulnerable than a browser.  I am also aware of SSL being used
| many places other than browsers and openvpn.
Just being able to generate traffic over the link isn't enough to
carry out this attack.  You have to be able to get the sender to
encrypt a chosen block for you as the first thing in a packet.  How
would you do that?  Suppose there was an "echo" command that would
cause the receiver to send back (within the encrypted channel) whatever
data you asked.  Well, how do you get an echo command inserted into
the encrypted, presumably authenticated, flow going back the other
way?

The browser SSL attack could work because plugin code runs *within* the
browser - which knows the key - and it can add material to the red
(plaintext) connection data.  How do you propose mounting the attack