Re: interesting paper on the economics of security

2007-08-23 Thread pgut001
Florian Weimer <[EMAIL PROTECTED]> writes:

>The tests I've seen are mostly worthless because they do not weigh their
>results based on the actual threats a typical user faces.

A topic very similar to this came up recently on the hcisec list.  My comments
there were:

We already have really, really good metrics for this.  It's called the
commercial malware industry (blatant ad: see my Defcon talk from last week for
examples of exploit sales and pricing models).  To find out how secure
something is, look at how much exploits for it are selling for on the black
market.  I've been thinking of doing a maverick paper for next year's MetriCon
about this [0]: for example, although OS X is a veritable smorgasbord of 0days,
the market value of these is close to zero because everyone's targeting
Windows instead.  A prime example of this is Safari: it was 0dayed within two
hours of the Windows version appearing, yet the same flaws had lain dormant in
the OS X version (presumably) for years because there's little to no
commercial interest in exploiting Macs.  So it could be argued that the best
real-world metric that we have for security comes from the attackers, not the
defenders.

(Incidentally, this powerful real-world metric is telling us that the
existing browser security model is indistinguishable from placebo :-).

Peter.

[0] This should not be construed as a promise of a paper appearing.  I'm not
sure whether I could get enough material to make an interesting paper.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: AMDs new instructions for parallelism and support för side-channel attacks?

2007-08-20 Thread pgut001
Joachim Strömbergson <[EMAIL PROTECTED]> writes:

>I just saw on EE Times that AMD will start to extend their x86 CPUs with
>instructions to support/help developers take advantage of the increasing
>(potential) parallelism in their processors. First out are two instructions
>that allow the developer to get info about instruction completion as well as
>cache misses.
>
>Considering the article by . about analysis of protection mechanism against
>cache based timing attacks for AES [1] one could assume that these
>instructions should be useful for writing side-channel resistant
>implementations

I think it's exactly the opposite: we're already having enough problems with
microarchitectural (MA) attacks without explicit diagnostic facilities built
into the CPU.  If you look at the AMD specs, these extra ring-3-accessible
facilities are only going to make things worse.  These attacks are essentially
impossible to defend against merely by modifying the victim code; the only
possible defences at the moment are:

1. "Don't do that then" (i.e. don't allow arbitrary untrusted code to run in
   parallel with your crypto ops).

2. With future hardware support, some mechanism for partitioning the CPU so
   that critical regions of code can run without leaving externally observable
   traces, ending with some sort of super-INVD/INVLPG instruction to clear all
   caches and buffers.  So the code would be something like:

enter_secure_region
[[[crypto code]]]
INV_everything
exit_secure_region

   Of course something like this would have to be accessible from ring 3,
   which makes it a built-in DoS mechanism.
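To make the concern concrete, here's a toy model of why exposing cache-miss
information to unprivileged code helps an attacker.  This is purely
illustrative: the "cache" is a Python set standing in for what a ring-3 miss
counter would reveal on real hardware, and all the names are invented for the
sketch, not taken from any real API:

```python
# Toy prime+probe attack on a table-based cipher (illustrative only).
CACHE_LINES = 16

def victim_lookup(key_byte, pt_byte, cache):
    # One AES-style table lookup; which cache line it touches depends
    # on the secret key byte XORed with the attacker-known plaintext.
    line = (key_byte ^ pt_byte) % CACHE_LINES
    cache.add(line)
    return line

def recover_candidates(key_byte):
    # Prime: the attacker starts each run with a known (empty) cache state.
    # Probe: after the victim runs, the miss information tells the attacker
    # which line the victim touched.  Each observation keeps only the key
    # bytes consistent with it.
    candidates = set(range(256))
    for pt in range(256):
        cache = set()
        victim_lookup(key_byte, pt, cache)
        touched = cache.pop()
        candidates = {k for k in candidates
                      if (k ^ pt) % CACHE_LINES == touched}
    return candidates

# With 16 cache lines the channel leaks the low 4 bits of the key byte:
# 16 candidates survive, and the true key is always among them.
survivors = recover_candidates(0x3A)
print(len(survivors), 0x3A in survivors)  # 16 True
```

In this toy setup each key byte loses log2(16) = 4 bits per recovered table
index; real attacks combine many such observations across lookups to narrow
down the rest.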

So "don't do that then" seems to be the only fix for this (not including the
usual blue-sky response of everyone having 
built into their system).

Peter.




Re: New article on root certificate problems with Windows

2007-07-21 Thread pgut001

Paul Hoffman <[EMAIL PROTECTED]> writes:

>At 2:45 AM +1200 7/20/07, [EMAIL PROTECTED] wrote:
>|From a security point of view, this is really bad.  From a usability point
>|of view, it's necessary.
>
>As you can see from my list of proposed solutions, I disagree.  I see no
>reason not to alert a user *who has removed a root* that you are about to
>put it back in.


It depends on what you mean by "user".  You're assuming that direct action by
the wetware behind the keyboard resulted in its removal.  However given how
obscure and well-hidden this capability is, it's more likely that a user agent
acting with the user's rights caused the problem.  So the message you end up
communicating to the user is:

  "Something you've never heard of before has changed a setting you've never
  heard of before that affects the operation of something you've never heard
  of before and probably wouldn't understand no matter how patiently we
  explain it".

(those things are, in order "some application or script", "the cert trust
setting", "certificates", and "PKI").

I guess we'd need word from MS on whether this is by design or by accident,
but I can well see that quietly unbreaking something that's broken for some
reason would be seen as desirable behaviour.

Peter.



Re: New article on root certificate problems with Windows

2007-07-19 Thread pgut001

Paul Hoffman <[EMAIL PROTECTED]> writes:

>I posted a new security research article at
>. While it is not directly related to
>crypto (although not so much of the traffic on this list is...), it does
>relate to some PKI topics that are favorites of this list.


The executive summary, so I've got something to reply to:

  In the default configuration for Windows XP with Service Pack 2 (SP2), if a
  user removes one of the trusted root certificates, and the certifier who
  issued that root certificate is trusted by Microsoft, Windows will silently
  add the root certificate back into the user's store and use the original
  trust settings.

While I don't agree with this behaviour, I can see why Microsoft would do
this, and I can't see them changing it at any time in the future.  It's the
same reason why they ignore key usage restrictions and allow (for example) an
encryption-only key to be used for signatures, and a thousand other breaches
of PKI etiquette: There'd be too many user complaints if they didn't.
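The behaviour itself is simple enough to sketch.  The following is a minimal
model of what the article describes, purely for illustration: this is not the
Windows CryptoAPI, and every name here is invented for the sketch:

```python
# Minimal model of the "silent re-add" behaviour (illustrative only).
VENDOR_TRUSTED = {"RootCA-A", "RootCA-B"}   # roots trusted by the vendor

class RootStore:
    def __init__(self):
        self.roots = set(VENDOR_TRUSTED)

    def user_remove(self, name):
        # The user's explicit action: remove a root from the store.
        self.roots.discard(name)

    def validate_chain(self, root_name):
        # The silent re-add: if a chain terminates in a vendor-trusted
        # root that the user removed, quietly restore it and trust it,
        # without alerting the user.
        if root_name in VENDOR_TRUSTED:
            self.roots.add(root_name)
        return root_name in self.roots

store = RootStore()
store.user_remove("RootCA-A")
print(store.validate_chain("RootCA-A"))  # True: the removal didn't stick
```

The design choice being modelled is that chain validation, not the user's
stored preference, is the final authority on vendor-trusted roots, which is
exactly why the removal never sticks.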

The people designing this stuff aren't the ones who have to man the tech
helpdesk when users find that things break because of some action that they
don't even understand (see e.g. the Xerox PARC study where a bunch of people
with PhDs in computer science, after following paint-by-numbers instructions
to install certs on their machines, had absolutely no idea what they'd just
done to their computers).

From a security point of view, this is really bad.  From a usability point of
view, it's necessary.  The solution is to let the HCI people into the design
process, something that's very rarely, if ever, done in the security 
field [0].


Peter.

[0] Before people jump up and down about this: Yes, HCISec has become a very
    active and productive field in the last few years.  Unfortunately, far
    too little of the work that's being done is making it into products.
    We have lots of data saying "X is unusable in practice" and "The best
    way to handle this is Y", but developers keep on pushing X and avoiding
    (or don't even know about) Y.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]