One of Brinworld's uglier moments, no rights for immies

2002-10-22 Thread Major Variola (ret)
So two illegals are going back because they were in a white van near a
pay phone.
They're fortunate; they only got the 12-gauge in the face and the asphalt
facial;
in a month it'll be a cruise missile first, forensics later.

Mr. Godsniper, call us back.  We couldn't trace^H^H^H^H^H hear you. 

The announcement came hours after Virginia
authorities took two men into custody after
surrounding a white van near a Richmond gas station.
However, sources said the two men weren't involved
in the attacks and would be deported to Latin
America for immigration violations.

"They were in the wrong place at the wrong time," a
senior law enforcement source in Washington said on
condition of anonymity.

http://story.news.yahoo.com/news?tmpl=story&u=/ap/20021021/ap_on_re_us/sniper_shootings_368

===
Moosehunting in Virginia, ayup.  Random primate hunting,
now a Steak House: clearly it's a PETA terrorist,
letting us graze the greener grass on the other side.

Homo sapiens: the other white meat.




Re: Auditing Source Code for Backdoors

2002-10-22 Thread Mike Rosing
On Wed, 31 Dec 1969, Bill Frantz wrote:

 I have been asked to audit some source code to see if the programmer
 inserted a backdoor.  (The code processes input from general users, and has
 access to the bits that control the privilege levels of those users, so
 backdoors are quite possible.)  The question I have is what obscure
 techniques should I be on the lookout for.  Besides the obvious /* Begin
 backdoor code */ of course.  :-)  The code is in ANSI C.

Look for exception processing.  Anywhere the code looks for a particular
value, something like == 0x3456352e.  That usually is a passcode into
a backdoor.  It only takes one line :-)
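One quick way to surface such one-line checks is a pattern scan over the source. This is only a rough sketch (the regex and function names are mine); it will flag legitimate constants too, and will miss any backdoor cleverer than a literal magic-number comparison:

```python
import re

# Flag equality tests against long "magic" hex constants in C source;
# a one-line backdoor can look like: if (key == 0x3456352e) ...
MAGIC_CMP = re.compile(r'[=!]=\s*0x[0-9a-fA-F]{6,}')

def suspicious_lines(c_source):
    """Yield (line_number, line) pairs worth a manual look."""
    for num, line in enumerate(c_source.splitlines(), start=1):
        if MAGIC_CMP.search(line):
            yield num, line.strip()

sample = '''int check(unsigned key) {
    if (key == 0x3456352e)   /* looks like a passcode */
        return 1;
    return 0;
}'''

for num, line in suspicious_lines(sample):
    print(num, line)
```

Treat every hit as a starting point for manual review, not a verdict; a real audit still has to read the exception paths themselves.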

Patience, persistence, truth,
Dr. mike




Re: Intel Security processor + a question

2002-10-22 Thread Major Variola (ret)
At 05:13 PM 10/21/02 -0400, Tyler Durden wrote:

So I guess the follow-on question is: Even if you can look at the code of a
RNG... how easy is it to determine if its output is usefully random, or are
there certain Diffie-approved RNGs that should always be there, and if not
something's up?

Start with something analog, where no one knows the initial state
perfectly, and the dynamics are dispersive (chaotic).  Digitize it.
You can use ping pong balls if you like.

1. Measure its entropy (e.g., see Shannon).  Xor values together
(xor doesn't generate change (variation), but preserves it).
Go to 1 until you find that your measurements have asymptoted.

You should then hash ('whiten') your distilled 1 bit/baud values,
to make it hard to go backwards through the deterministic iterative
distilling in the above recipe.

In practice, you may feed a hashing digest function directly with your
raw measurements and rely on the digest compressing the number of bits
in:out to assure 1 bit/baud (even without the hash-whitening).

However the output of such a hash function will be noise-like even with
very low entropy input, e.g., successive integers.  Ergo measuring after
hashing is pointless.
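The recipe above can be sketched in a few lines; this is a toy illustration, assuming byte-valued samples (the function names, the fold step, and the SHA-256 choice are mine, not prescribed):

```python
import hashlib
from collections import Counter
from math import log2

def shannon_entropy_bits_per_byte(data):
    """Empirical Shannon entropy of the byte distribution (max 8.0)."""
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in Counter(data).values())

def xor_fold(data):
    """Distill by xoring adjacent samples: xor preserves variation
    but never creates it, so per-byte entropy can only go up."""
    return bytes(data[i] ^ data[i + 1] for i in range(0, len(data) - 1, 2))

def whiten(data):
    """Hash the distilled samples so the deterministic distilling
    steps are hard to run backwards."""
    return hashlib.sha256(data).digest()

# Toy 'analog' samples; in practice, digitized noise from your source.
raw = bytes([37, 201, 14, 88]) * 64
print(round(shannon_entropy_bits_per_byte(raw), 2))  # 2.0: only four symbols
print(len(whiten(xor_fold(raw))))                    # 32 whitened bytes
```

Note the measurement happens on the raw and folded samples, never on the hash output, for exactly the reason above: the hash looks like noise regardless of input entropy.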

Discuss the results with your troop leader, and you will receive your
crypto merit badge in 4-6 weeks.




anonymous remailers

2002-10-22 Thread Shawn K. Quinn
If one has set up a new anonymous remailer, where is the best place to 
get the word out? Here or somewhere else?

-- 
Shawn K. Quinn




Re: Palladium -- trivially weak in hw but secure in software?? (Re: palladium presentation - anyone going?)

2002-10-22 Thread Tal Garfinkel
 Software-based attacks are redistributable.  Once I write a program
 that hacks a computer, I can give that program to anyone to use.  I
 can even give it to everyone, and then anyone could use it.  The
 expertise necessary can be abstracted away into a program even my
 mother could use.
 
 Hardware-based attacks cannot be redistributed.  If I figure out how
 to hack my system, I can post instructions on the web but it still
 requires technical competence on your end if you want to hack your
 system too.
 
 While this doesn't help a whole lot for a DRM goal (once you get the
 non-DRM version of the media data, you can redistribute it all you
 want).

I think this assumption may be incorrect. In order for content providers
to win the DRM fight it seems like they need to address two issues. 

First, put up a big enough barrier for most users that circumventing
access controls is infeasible, or simply not worth it.

Second, put up a big enough barrier for most users that gaining access to
copies of media with the access controls removed is either infeasible,
or simply not worth it.

I believe tamper-resistant hardware solves the first problem, even if,
as Adam conjectures, all that is required to access media protected by
Palladium is a $50 kit (which, remember, you can't obtain legally) and
some hardware hacking. This seems to rule out well over 99% of the
media-consuming public.

The problem of obstructing the distribution of media is really a different
topic. I think that solving this problem is easier than most folks
think.  Again, you don't have to totally stop P2P, or that kid in the
shopping mall selling copied CDs. All you have to do is put up big
enough technical and legal barriers that the general public would rather
just pay for the media.

While it may be the case that Palladium is not a serious barrier to
the average CS graduate student, Cypherpunk, or even the home user who
has a modicum of hardware clue, I don't think this will kill it as an
effective technology for supporting DRM, assuming that the software
cannot be broken.

--Tal




Palladium

2002-10-22 Thread Peter Clay
I've been trying to figure out whether the following attack will be
feasible in a Pd system, and what would have to be incorporated to
prevent it.

Alice runs trusted application T on her computer. This is some sort of
media application, which acts on encoded data streamed over the
internet. Mallory persuades Alice to stream data which causes a buffer
overrun in T. The malicious code, running with all of T's privileges:

- abducts choice valuable data protected by T (e.g. individual book keys
for ebooks)
- builds its own vault with its own key
- installs a modified version of T, V, in that vault with access to the
valuable data
- trashes T's vault

The viral application V is then in an interesting position. Alice has two
choices:

- nuke V and lose all her data (possibly including all backups, depending
on how backup of vaults works)
- allow V to act freely

I haven't seen enough detail yet to be able to flesh this out, but it does
highlight some areas of concern:

- how do users back up vaults?
- there really needs to be a master override to deal with misbehaving
trusted apps.

Pete
-- 
Peter Clay | Campaign for   _  _| .__
   | Digital   /  / | |
   | Rights!   \_ \_| |
   | http://uk.eurorights.org




Re: palladium presentation - anyone going?

2002-10-22 Thread Adam Back
On Sun, Oct 20, 2002 at 10:38:35PM -0400, Arnold G. Reinhold wrote:
 There may be a hole somewhere, but Microsoft is trying hard to get
 it right and Brian seemed quite competent.

It doesn't sound breakable in pure software for the user, so this
forces the user to use some hardware hacking.

They disclaimed explicitly in the talk announce that:

| Palladium is not designed to provide defenses against
| hardware-based attacks that originate from someone in control of the
| local machine.

However I was interested to know exactly how easy it would be to
defeat with simple hardware modifications or reconfiguration.

You might ask why, if there is no intent for Palladium to be secure
against the local user, they would design it so that the local
user has to use (simple) hardware attacks.  Could they not instead
just make these functions available with a user-present test, in the
same way that the TOR and SCP functions can be configured by the user
(but not by hostile software)?

For example, why not a local user-present function to lie about the TOR
hash, to allow debugging?

 Adam Back wrote:
 - isn't it quite weak as someone could send different information to
 the SCP and processor, thereby being able to forge remote attestation
 without having to tamper with the SCP; and hence being able to run
 different TOR, observe trusted agents etc.
 
 There is also a change to the PC memory management to support a 
 trusted bit for memory segments. Programs not in trusted mode can't 
 access trusted memory.

A trusted bit in the segment register doesn't make it particularly
hard to break if you have access to the hardware.

For example you could:

- replace your RAM with dual-ported video RAM (which can be read using
alternate equipment on the 2nd port).

- just keep RAM powered-up through a reboot so that you load a new TOR
which lets you read the RAM.

 Also there will be three additional x86 instructions (in microcode)
 to support secure boot of the trusted kernel and present a SHA1 hash
 of the kernel code in a read only register.  

But how will the SCP know that the hash it reads comes from the
processor (as opposed to being forged by the user)?  Is there any
authenticated communication between the processor and the SCP?
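One conceivable answer (purely my own sketch, not anything described for the actual Palladium design) would be a secret shared between the processor and the SCP at manufacture, used to MAC the reported hash so that a value forged on the bus gets rejected:

```python
import hashlib
import hmac
import os

# Hypothetical: a per-machine secret installed in both the CPU and the
# SCP at manufacture.  Nothing here is from the real Palladium design.
SHARED_KEY = os.urandom(32)

def cpu_report(kernel_image):
    """CPU side: compute the TOR hash and authenticate it."""
    digest = hashlib.sha1(kernel_image).digest()
    tag = hmac.new(SHARED_KEY, digest, hashlib.sha256).digest()
    return digest, tag

def scp_accept(digest, tag):
    """SCP side: accept the reported hash only if the MAC checks out."""
    expected = hmac.new(SHARED_KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

digest, tag = cpu_report(b"trusted kernel image")
print(scp_accept(digest, tag))        # True: genuine report
print(scp_accept(b"\x00" * 20, tag))  # False: forged hash rejected
```

Of course this only moves the problem to protecting the shared key inside the two chips, which is exactly the kind of hardware tamper resistance Palladium disclaims.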

Adam
--
http://www.cypherspace.net/




Re: palladium presentation - anyone going?

2002-10-22 Thread Arnold G. Reinhold
At 10:52 PM +0100 10/21/02, Adam Back wrote:

On Sun, Oct 20, 2002 at 10:38:35PM -0400, Arnold G. Reinhold wrote:

There may be a hole somewhere, but Microsoft is trying hard to get
it right and Brian seemed quite competent.


It doesn't sound breakable in pure software for the user, so this
forces the user to use some hardware hacking.

They disclaimed explicitly in the talk announce that:

| Palladium is not designed to provide defenses against
| hardware-based attacks that originate from someone in control of the
| local machine.

However I was interested to know exactly how easy it would be to
defeat with simple hardware modifications or reconfiguration.

You might ask why, if there is no intent for Palladium to be secure
against the local user, they would design it so that the local
user has to use (simple) hardware attacks.  Could they not instead
just make these functions available with a user-present test, in the
same way that the TOR and SCP functions can be configured by the user
(but not by hostile software)?


One of the services that Palladium offers, according to the talk 
announcement, is:

b. Attestation. The ability for a piece of code to digitally sign
or otherwise attest to a piece of data and further assure the
signature recipient that the data was constructed by an unforgeable,
cryptographically identified software stack.


It seems to me such a service requires that Palladium be secure 
against the local user. I think that is the main goal of the product.


For example, why not a local user-present function to lie about the TOR
hash, to allow debugging?


Adam Back wrote:
- isn't it quite weak as someone could send different information to
the SCP and processor, thereby being able to forge remote attestation
without having to tamper with the SCP; and hence being able to run
different TOR, observe trusted agents etc.

There is also a change to the PC memory management to support a
trusted bit for memory segments. Programs not in trusted mode can't
access trusted memory.


A trusted bit in the segment register doesn't make it particularly
hard to break if you have access to the hardware.

For example you could:

- replace your RAM with dual-ported video RAM (which can be read using
alternate equipment on the 2nd port).

- just keep RAM powered-up through a reboot so that you load a new TOR
which lets you read the RAM.


Brian mentioned that the system will not be secure against someone 
who can access the memory bus.  But I can see steps being taken in 
the future to make that mechanically difficult. The history of the 
Scanner laws is instructive. Originally one had the right to listen 
to any radio communication as long as you did not make use of the 
information  received. Then Congress banned the sale of scanners that 
can receive cell phone frequencies. Subsequently the laws were 
tightened to require scanners be designed so that their frequency 
range cannot be modified.  In practice this means the control chip 
must be potted in epoxy.  I can see similar steps being taken with 
Palladium PCs. Memory expansion could be dealt with by finding a way 
to give Palladium preferred access to the first block of physical 
memory that is soldered on the mother board.



Also there will be three additional x86 instructions (in microcode)
to support secure boot of the trusted kernel and present a SHA1 hash
of the kernel code in a read only register. 


But how will the SCP know that the hash it reads comes from the
processor (as opposed to being forged by the user)?  Is there any
authenticated communication between the processor and the SCP?


Brian also mentioned that there would be changes to the Southbridge 
LPC bus, which I gather is a local I/O bus in PCs.  The SCP will sit on 
that, and presumably the changes are to ensure that the SCP can only 
be accessed in secure mode.

At 12:27 AM +0100 10/22/02, Peter Clay wrote:
I've been trying to figure out whether the following attack will be
feasible in a Pd system, and what would have to be incorporated to
prevent it.

Alice runs trusted application T on her computer. This is some sort of
media application, which acts on encoded data streamed over the
internet. Mallory persuades Alice to stream data which causes a buffer
overrun in T. The malicious code, running with all of T's privileges:

- abducts choice valuable data protected by T (e.g. individual book keys
for ebooks)
- builds its own vault with its own key
- installs a modified version of T, V, in that vault with access to the
valuable data
- trashes T's vault

The viral application V is then in an interesting position. Alice has two
choices:

- nuke V and lose all her data (possibly including all backups, depending
on how backup of vaults works)
- allow V to act freely


There are two cases here. One is a buffer overflow in one of the 
trusted agents running in Palladium. Presumably an attack here will 
only be able to damage vaults associated with the product 

Palladium -- trivially weak in hw but secure in software?? (Re: palladium presentation - anyone going?)

2002-10-22 Thread Adam Back
Remote attestation does indeed require Palladium to be secure against
the local user.  

However my point is while they seem to have done a good job of
providing software security for the remote attestation function, it
seems at this point that hardware security is laughable.

So they disclaim in the talk announce that Palladium is not intended
to be secure against hardware attacks:

| Palladium is not designed to provide defenses against
| hardware-based attacks that originate from someone in control of the
| local machine.

so one can't criticise the implementation of their threat model -- it
indeed isn't secure against hardware based attacks.

But I'm questioning the validity of the threat model as a realistic
and sensible balance of practical security defenses.

Providing almost no hardware defenses while going to extraordinary
efforts to provide top-notch software defenses doesn't make sense if
the machine owner is a threat.

The remote attestation function clearly is defined from the view that
the owner is a threat.

Without specifics and some knowledge of hardware hacking we can't
quantify, but I suspect that hacking it would be pretty easy.  Perhaps
no soldering, $50 equipment and simple instructions anyone could
follow.

more inline below...

On Mon, Oct 21, 2002 at 09:36:09PM -0400, Arnold G. Reinhold wrote:
 [about improving palladium hw security...] Memory expansion could be
 dealt with by finding a way to give Palladium preferred access to
 the first block of physical memory that is soldered on the mother
 board.

I think standard memory could be used.  I can think of simple
processor modifications that could fix this problem, with hardware
tamper-resistance assurance at the level of having to tamper with a .13
micron processor.  The processor is something that could be epoxied
inside a cartridge, for example (as with the cartridge-style processor +
L2 cache housings used by some Intel Pentium-class processors),
though probably having to tamper with a modern processor is plenty
hard enough to match software security, given software complexity
issues.

Adam
--
http://www.cypherspace.net/




Re: Palladium -- trivially weak in hw but secure in software?? (Re: palladium presentation - anyone going?)

2002-10-22 Thread Rick Wash
On Tue, Oct 22, 2002 at 04:52:16PM +0100, Adam Back wrote:
 So they disclaim in the talk announce that Palladium is not intended
 to be secure against hardware attacks:
 
 | Palladium is not designed to provide defenses against
 | hardware-based attacks that originate from someone in control of the
 | local machine.
 
 so one can't criticise the implementation of their threat model -- it
 indeed isn't secure against hardware based attacks.
 
 But I'm questioning the validity of the threat model as a realistic
 and sensible balance of practical security defenses.
 
 Providing almost no hardware defenses while going to extraordinary
 efforts to provide top-notch software defenses doesn't make sense if
 the machine owner is a threat.

This depends.  I would say this is an interesting threat model.  It
makes the attacks non-redistributable.

Software-based attacks are redistributable.  Once I write a program
that hacks a computer, I can give that program to anyone to use.  I
can even give it to everyone, and then anyone could use it.  The
expertise necessary can be abstracted away into a program even my
mother could use.

Hardware-based attacks cannot be redistributed.  If I figure out how
to hack my system, I can post instructions on the web but it still
requires technical competence on your end if you want to hack your
system too.

While this doesn't help a whole lot for a DRM goal (once you get the
non-DRM version of the media data, you can redistribute it all you
want), it can be very useful for security.  It can help to eliminate
the 'script kiddie' style of attackers.

  Rick