Re: [patch 1/2] [RFC] Simple tamper-proof device filesystem.

2007-12-17 Thread David Wagner
David Wagner wrote:
> If the attacker gets full administrator-level access on your machine,
> there are a gazillion ways the attacker can prevent other admins from
> logging on. This patch can't prevent that.  It sounds like this patch
> is trying to solve a fundamentally unsolvable problem.

Tetsuo Handa wrote:
> Please be aware that I'm saying "if this filesystem is used with MAC".

I'm aware.  I'm sticking with my argument.

I doubt that we're likely to see a MAC system that is strict enough
to prevent an attacker with administrator access from locking out other
admins, and yet is loose enough to be useful in practice.  I think
the proposed patch is like sticking a thumb in the dike and is trying
to solve a problem that cannot be solved with any reasonable application
of effort.  I think if the attacker has gotten administrator level, then
we'll never be able to prevent the attacker from doing all sorts of bad
things we don't like, like locking out other admins.  Of course if we
have a proposed defense that only stops one particular attack pathway
but leaves dozens others open, it's always convenient to say that "the
other attack pathways aren't my problem, that's the MAC's business".
Sure, if we want to hypothesize the existence of a "magic fairy dust"
MAC system that somehow closes every other path via which admin-level
attackers could lock out other admins, except for this one pathway, then
this patch might make sense.  But I see no reason to expect ordinary
MAC systems to have that property.

Trying to put in place a defense that only prevents one particular attack
path, when there are a thousand other ways an attacker might achieve the
same ends, does not seem like a good way to go about securing your system.
For every one attack path that you shut down, the attacker can probably
think up a dozen new paths that you haven't shut down yet.  That isn't
a good basis for security.

Personally, I'd argue that we should learn a different lesson from
the attack you experienced.  The lesson is not "oh boy, we better shut
down this particular way that the attacker misused administrator-level
access".  I think a better lesson is "let's think about ways to reduce
the likelihood that attackers will get administrator-level access,
because once the attacker has administrator-level access, the attacker
can do a lot of harm".

>If MAC (such as SELinux, TOMOYO Linux) allows attackers to
>"mount other filesystem over this filesystem", this filesystem is no
>longer tamper-proof.
>But as long as MAC prevents attackers from mounting other filesystems
>over this filesystem, this filesystem can remain tamper-proof.

But the point is that it's not enough just to prevent attackers
from mounting other filesystems over this filesystem.  I can think
of all sorts of ways that an admin-level attacker might be able to
prevent other administrators from logging in.  If your defense strategy
involves trying to enumerate all of those possible ways and then shut
them down one by one, you're relying upon a defense strategy known as
"blacklisting".  Blacklisting has a terrible track record in the
security field, because it's too easy to overlook one pathway.
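To make the default-behavior difference concrete, here is a toy sketch (the
action names and rule sets are entirely hypothetical, invented for
illustration, and have nothing to do with the patch under discussion):

```python
# Hypothetical sketch: blacklisting vs. whitelisting an operation.
# All names and rules here are illustrative only.

BLACKLIST = {"mount_over_devfs", "delete_dev_nodes"}   # known-bad actions

def blacklist_allows(action):
    # Default-allow: anything not explicitly listed slips through.
    return action not in BLACKLIST

WHITELIST = {"read_dev_node", "open_tty"}              # known-good actions

def whitelist_allows(action):
    # Default-deny: a pathway the policy author never thought of is blocked.
    return action in WHITELIST

# A novel attack pathway the defender didn't anticipate:
novel = "bind_mount_tmpfs_over_dev"
print(blacklist_allows(novel))   # True  -- the blacklist misses it
print(whitelist_allows(novel))   # False -- the whitelist blocks it
```

The point is only that a blacklist fails open on the pathway nobody
enumerated, which is exactly the overlooked-pathway problem above.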
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [patch 1/2] [RFC] Simple tamper-proof device filesystem.

2007-12-16 Thread David Wagner
Tetsuo Handa writes:
>When I attended at Security Stadium 2003 as a defense side,
>I was using devfs for /dev directory. The files in /dev directory
>were deleted by attackers and the administrator was unable to log in.

If the attacker gets full administrator-level access on your machine,
there are a gazillion ways the attacker can prevent other admins from
logging on.  This patch can't prevent that.  It sounds like this patch
is trying to solve a fundamentally unsolvable problem.

A useful slogan: "Don't forbid what you cannot prevent."


Re: [AppArmor 39/45] AppArmor: Profile loading and manipulation, pathname matching

2007-06-24 Thread David Wagner
James Morris  wrote:
>A. Pathname labeling - applying access control to pathnames to objects, 
>rather than labeling the objects themselves.
>
>Think of this as, say, securing your house by putting a gate in the street 
>in front of the house, regardless of how many other possible paths there 
>are to the house via other streets, adjoining properties etc.
>
>Pathname labeling and mediation is simply mediating a well-known path to 
>the object.  In this analogy, object labeling would instead ensure that 
>all of the accessible doors, windows and other entrances of the house were 
>locked, so that someone trying to break in from the rear alley would 
>not get in simply by bypassing the front gate and opening any door.
>
>What you do with AppArmor, instead of addressing the problem, is just 
>redefine the environment along the lines of "set your house into a rock 
>wall so there is only one path to it".

Harrumph.  Those analogies sound good but aren't a very good guide.

Let's take a concrete example.  Consider the following fragment of a
policy for Mozilla:
    allow ~/.mozilla
    deny ~
Ignore the syntax; the goal is to allow Mozilla to access files under
~/.mozilla but nothing else under my home directory.  This is a perfectly
reasonable policy fragment to want to enforce.  And enforcing it in
the obvious way using pathname-based access control is not a ridiculous
thing to do.

Yes, in theory, there could always be crazy symlinks or hardlinks
from somewhere under ~/.mozilla to elsewhere in my home directory that
would cause this policy to behave in ways different from how I desired.
In theory.  But in practice this is "pretty good": good enough to be
useful in the real world.  In the real world I don't have any symlinks
like that under my ~/.mozilla directory, and I'm not really worried
about unconfined processes accidentally creating a symlink under there
against my wishes.  It'd be good enough for me.
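To spell out the theoretical symlink caveat, here is a small sketch (the
directory names are stand-ins, and the two checks are invented for
illustration; real pathname-based systems resolve paths in the kernel,
not like this):

```python
# Sketch: why a naive string-prefix pathname check can be bypassed by a
# symlink planted under the allowed directory.  Paths are hypothetical.
import os
import tempfile

home = tempfile.mkdtemp()                      # stand-in for $HOME
allowed = os.path.join(home, ".mozilla")
os.makedirs(allowed)
secret = os.path.join(home, "mail")            # outside the allowed tree
os.makedirs(secret)

# Someone plants a symlink inside the allowed subtree.
link = os.path.join(allowed, "sneaky")
os.symlink(secret, link)

def naive_check(path):
    # Looks only at the pathname string, not at what it resolves to.
    return path.startswith(allowed)

def resolved_check(path):
    # Resolve symlinks first, then compare against the allowed prefix.
    return os.path.realpath(path).startswith(os.path.realpath(allowed))

target = os.path.join(link, "inbox")
print(naive_check(target))      # True  -- the string prefix looks fine
print(resolved_check(target))   # False -- it really points at ~/mail/inbox
```

Which is exactly the point above: the hole exists in theory, but it only
bites if such a symlink actually appears under ~/.mozilla.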

Yes, pathname-based models have limitations and weaknesses, but don't
overplay them.  For some purposes they are a very simple way of specifying
a desired policy and they work well enough to be useful -- a darn sight
better than what we've got today.  If your goal is "ease of use" and
"better than what many users are using today", it's not an unreasonable
approach.  Time will tell whether it's the best solution, but it's not
obviously wrong.

And I think that's the criterion: if you want to argue that the very
idea of pathname-based access control is so bogus that no approach to
pathname-based security should be merged, then you should have to argue
that it is obviously wrong and obviously not useful to users.  I don't
think that burden has been met.

>B. Pathname access control as a general abstraction for OS security.

This strikes me as a strawman.  Pathname-based access control is an
abstraction for mediating the *filesystem*.  Who says it has to be the
way you mediate the network or IPC?

>To quote from:
>
>http://www.novell.com/linux/security/apparmor/
>
>  "AppArmor gives you network application security via mandatory access 
>   control for programs, protecting against the exploitation of software 
>   flaws and compromised systems. AppArmor includes everything you need to 
>   provide effective containment for programs (including those that run as 
>   root) to thwart attempted exploits and even zero-day attacks."
>
>This is not accurate in any sense of the term containment of mandatory 
>access control that I've previously encountered.

You bet.  The claim you quote is totally bogus.

Bad marketers, no biscuit for you.

>The fact that it doesn't work as expected does not arise simply from 
>missing features or being "different".  It arises from the design of the 
>system, which uses a pathname abstraction, where, even if we agree to 
>ignore issue (1) above, still does not work, because only filesystem 
>interactions are mediated.

Disagree.

>The "simple" policy that users can so effortlessly manipulate is simple 
>because it is wrong, and deliberately so.
>
>The design of the AppArmor is based on _appearing simple_, but at the 
>expense of completeness and thus correctness.

Based on my experience with Janus, my expectation is the policy isn't
going to get that much more complicated when they add mediation of
network and IPC access.  I suspect it will stay almost as simple.


Re: [AppArmor 39/45] AppArmor: Profile loading and manipulation, pathname matching

2007-06-24 Thread David Wagner
James Morris  wrote:
>The point is that the pathname model does not generalize, and that 
>AppArmor's inability to provide adequate coverage of the system is a 
>design issue arising from this.

I don't see it.  I don't see why you call this a design issue.  Isn't
this just a case where they haven't gotten around to implementing
network and IPC mediation yet?  How is that a design issue arising
from a pathname-based model?  For instance, one system I built (Janus)
provided complete mediation, including mediation of network and IPC,
yet it too used a pathname model for its policy file when describing
the policy for the filesystem.  That seems to contradict your statement.


Re: [AppArmor 39/45] AppArmor: Profile loading and manipulation, pathname matching

2007-06-24 Thread David Wagner
I've heard four arguments against merging AA.

Argument 1. SELinux does it better than AA.  (Or: SELinux dominates AA.
Or: SELinux can do everything that AA can.)

Argument 2. Object labeling (or: information flow control) is more secure
than pathname-based access control.

Argument 3. AA isn't complete until it mediates network and IPC.

Argument 4. AA doesn't have buy-in from VFS maintainers.

Let me comment on these one-by-one.

1. I think this is a bogus argument for rejecting AA.  As I remember it,
the whole point of LSM was to allow merging multiple solutions and let
them compete for users on their merit, not to force the Linux maintainers
to figure out which is the best solution.  Let a million flowers bloom.

2. This is argument #1 in a different guise and I find it about as weak.
Pathname-based access control has strengths and weaknesses.  I think
users and Linux distributions are in a better position to evaluate those
tradeoffs than L-K.  Competition is good.

3. This one I agree with.  If you want to sandbox network daemons that
you're concerned might get hacked, then you really want your sandbox to
mediate everything.  Right now the security provided by AA (if that's
what you are using it for) will be limited by its incomplete mediation,
since a knowledgeable motivated attacker who hacks your daemon may be
able to use network or IPC to break out of the sandbox, depending upon
the network and host configuration.  Filesystem mediation alone might be
better than nothing, I suppose, if you're worried about script kiddies,
but it's certainly not a basis for strong security claims.  The state of
the art in sandboxing obviously requires complete mediation, and we've
known that for a decade.

That said, I see filesystem mediation as the hard part.  It's the hardest
part to implement and get right.  It's the hardest part to configure
and the hardest part when it comes to designing usable policy languages.
And I suspect it's the hardest part to get merged into the Linux kernel.
And it often makes sense to start with the hard part.  If AA's approach
to mediating the filesystem is acceptable, I think AA is 2/3rds of the way
to a tool that could be very useful for providing strong security claims.

There's a policy decision here: Do maintainers refuse to merge AA until
it provides complete mediation?  That's a policy matter that's up to the
maintainers.  I have no opinion on it.  However if that is the reason
for rejecting AA it seems like it might be appropriate to come to some
decision now about whether AA's approach to filesystem mediation is
acceptable to Linux developers.  I don't think it would be reasonable
to tell AA developers to go spend a few months developing network and
IPC mediation and then after they do that, to reject the whole thing on
the basis that the approach to filesystem mediation is unacceptable.
That won't encourage development of new and innovative approaches to
security, which doesn't seem like a good thing to me.

4. Way over my head.  I'm not qualified to comment on this aspect.
I suspect this is the argument that ought to be getting the most serious
and thorough discussion, not the irrelevant SELinux-vs-AA faceoff.


Re: [AppArmor 39/45] AppArmor: Profile loading and manipulation, pathname matching

2007-06-24 Thread David Wagner
Stephen Smalley  wrote:
>On Fri, 2007-06-22 at 01:06 -0700, John Johansen wrote:
>> No the "incomplete" mediation does not flow from the design.  We have
>> deliberately focused on doing the necessary modifications for pathname
>> based mediation.  The IPC and network mediation are a wip.
>
>The fact that you have to go back to the drawing board for them is that
>you didn't get the abstraction right in the first place.

Calling this "going back to the drawing board" strikes me as an
unfair criticism, when the real situation is that in the future the AA
folks will need to extend their code to mediate network and IPC (not
throw all the current code away and start over from scratch, and not
replace big swaths of the current code).


Re: [AppArmor 39/45] AppArmor: Profile loading and manipulation, pathname matching

2007-06-24 Thread David Wagner
Stephen Smalley  wrote:
>That would certainly help, although one might quibble with the use of
>the word "confinement" at all wrt AppArmor (it has a long-established
>technical meaning that implies information flow control, and that goes
>beyond even complete mediation - it requires global and persistent
>protection of the data based on its properties, which requires stable
>and unambiguous identifiers).

1. Yes, that's the usage that has the greatest historical claim, but
"confinement" has also been used in the security community to refer to
limiting the overt side effects a process can have rather than controlling
information flow.  The term "confinement" is arguably ambiguous, but I
think there is a semi-established meaning that doesn't imply information
flow control.

2. This is a can of worms we probably don't want to open.  Keep in
mind that SELinux doesn't meet the definition of confinement in Lampson's
original paper, either, because it only restricts overt information flows.
SELinux doesn't prevent covert channels, even though Lampson's original
paper included them as part of the confinement problem.  Yet I don't
think it would be reasonable to criticize someone for describing SELinux
as a tool for "confinement".  I don't know of any practical solution
that solves the confinement problem as Lampson envisioned it.  I'd
recommend making decisions on the basis of whether the mechanisms are
useful, rather than whether they solve Lampson's notion of the
"confinement" problem.


Re: [AppArmor 39/45] AppArmor: Profile loading and manipulation, pathname matching

2007-06-24 Thread David Wagner
Stephen Smalley  wrote:
>On Thu, 2007-06-21 at 21:54 +0200, Lars Marowsky-Bree wrote:
>> And now, yes, I know AA doesn't mediate IPC or networking (yet), but
>> that's a missing feature, not broken by design.
>
>The incomplete mediation flows from the design, since the pathname-based
>mediation doesn't generalize to cover all objects unlike label- or
>attribute-based mediation.

I don't see anything in the AA design that would prevent extending
it to mediate network and IPC while remaining consistent with its design
so far.  Do you?  It seems to me the AA design is to mediate filesystem
access using pathname-based access control, and that says nothing about
how they mediate network access.

I have built sandboxing tools before, and my experience is that the
filesystem mediation is the hardest, gronkiest part.  In comparison,
mediating networking and IPC is considerably easier.  The policy
language for mediating access to the network can be pretty simple.
The same for IPC.  Obviously you shouldn't expect the policy language
for networking to use filenames, any more than you should expect the
policy languages for filesystems to use TCP/IP port numbers; that wouldn't
make any sense.
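For what it's worth, here is a toy illustration of just how small such a
network policy language can be.  The rule syntax is invented on the spot
for this sketch; it is not AA's, Janus's, or anyone else's actual syntax:

```python
# Toy network policy matcher with an invented rule syntax:
#   "<allow|deny> <proto|*> <operation|*> <port|*>", first match wins,
# and anything unmatched is denied by default.

def parse(policy_text):
    rules = []
    for line in policy_text.strip().splitlines():
        verb, proto, op, port = line.split()
        rules.append((verb, proto, op, port))
    return rules

def permits(rules, proto, op, port):
    for verb, r_proto, r_op, r_port in rules:
        if (r_proto in (proto, "*") and r_op in (op, "*")
                and r_port in (str(port), "*")):
            return verb == "allow"
    return False  # default deny

policy = parse("""
allow tcp connect 80
allow tcp connect 443
deny * * *
""")
print(permits(policy, "tcp", "connect", 443))   # True
print(permits(policy, "tcp", "listen", 443))    # False
```

A real mediation layer is of course more involved, but the policy
language itself needs nothing like the machinery a filesystem namespace
requires.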


>And the "use the natural abstraction for
>each object type" approach likewise doesn't yield any general model or
>anything that you can analyze systematically for data flow.

I don't see this as relevant to whether AA should be merged.
Fight that one in the marketplace for users, not L-K.

>> If I restrict my Mozilla to not access my on-disk mail folder, it can't
>> get there. (Barring bugs in programs which Mozilla is allowed to run
>> unconfined, sure.)
>
>Um, no.  It might not be able to directly open files via that path, but
>showing that it can never read or write your mail is a rather different
>matter.

"Showing that it can never read or write your mail" is not part of AA's
goals.  People whose goals differ from AA's can use a different tool.
No one is forcing you to use AA if it isn't useful to you.

I don't see this criticism as relevant to a merger decision.


Re: [AppArmor 39/45] AppArmor: Profile loading and manipulation, pathname matching

2007-06-24 Thread David Wagner
Stephen Smalley  wrote:
On Thu, 2007-06-21 at 21:54 +0200, Lars Marowsky-Bree wrote:
 And now, yes, I know AA doesn't mediate IPC or networking (yet), but
 that's a missing feature, not broken by design.

The incomplete mediation flows from the design, since the pathname-based
mediation doesn't generalize to cover all objects unlike label- or
attribute-based mediation.

I don't see anything in the AA design that would prevent an extending
it to mediate network and IPC while remaining consistent with its design
so far.  Do you?  It seems to me the AA design is to mediate filesystem
access using pathname-based access control, and that says nothing about
how they mediate network access.

I have built sandboxing tools before, and my experience is that the
filesystem mediation is the hardest, gronkiest part.  In comparison,
mediating networking and IPC is considerably easier.  The policy
language for mediating access to the network can be pretty simple.
The same for IPC.  Obviously you shouldn't expect the policy language
for networking to use filenames, any more than you should expect the
policy languages for filesystems to use TCP/IP port numbers; that wouldn't
make any sense.


And the use the natural abstraction for
each object type approach likewise doesn't yield any general model or
anything that you can analyze systematically for data flow.

I don't see this as relevant to whether AA should be merged.
Fight that one in the marketplace for users, not L-K.

 If I restrict my Mozilla to not access my on-disk mail folder, it can't
 get there. (Barring bugs in programs which Mozilla is allowed to run
 unconfined, sure.)

Um, no.  It might not be able to directly open files via that path, but
showing that it can never read or write your mail is a rather different
matter.

Showing that it can never read or write your mail is not part of AA's
goals.  People whose goals differ from AA's can use a different tool.
No one is forcing you to use AA if it isn't useful to you.

I don't see this criticism as relevant to a merger decision.
-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [AppArmor 39/45] AppArmor: Profile loading and manipulation, pathname matching

2007-06-24 Thread David Wagner
Stephen Smalley  wrote:
That would certainly help, although one might quibble with the use of
the word confinement at all wrt AppArmor (it has a long-established
technical meaning that implies information flow control, and that goes
beyond even complete mediation - it requires global and persistent
protection of the data based on its properties, which requires stable
and unambiguous identifiers).

1. Yes, that's the usage that has the greatest historical claim, but
confinement has also been used in the security community to refer to
limiting the overt side effects a process can have rather than controlling
information flow.  The term confinement is arguably ambiguous, but I
think there is a semi-established meaning that doesn't imply information
flow control.

2. This is a can of worms we probably don't want to open.  Keep in
mind that SELinux doesn't meet definition of confinement in Lampson's
original paper, either, because it only restricts overt information flows.
SELinux doesn't prevent covert channels, even though Lampson's original
paper included them as part of the confinement problem.  Yet I don't
think it would be reasonable to criticize someone for describing SELinux
as a tool for confinement.  I don't know of any practical solution
that solves the confinement problem as Lampson envisioned it.  I'd
recommend making decisions on the basis of whether the mechanisms are
useful, rather than whether they solve Lampson's notion of the
confinement problem.
-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [AppArmor 39/45] AppArmor: Profile loading and manipulation, pathname matching

2007-06-24 Thread David Wagner
Stephen Smalley  wrote:
On Fri, 2007-06-22 at 01:06 -0700, John Johansen wrote:
 No the incomplete mediation does not flow from the design.  We have
 deliberately focused on doing the necessary modifications for pathname
 based mediation.  The IPC and network mediation are a wip.

The fact that you have to go back to the drawing board for them is that
you didn't get the abstraction right in the first place.

Calling this going back to the drawing board board strikes me as an
unfair criticism, when the real situation is that in the future the AA
folks will need to extend their code to mediate network and IPC (not
throw all the current code away and start over from scratch, and not
replace big swaths of the current code).
-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [AppArmor 39/45] AppArmor: Profile loading and manipulation, pathname matching

2007-06-24 Thread David Wagner
I've heard four arguments against merging AA.

Argument 1. SELinux does it better than AA.  (Or: SELinux dominates AA.
Or: SELinux can do everything that AA can.)

Argument 2. Object labeling (or: information flow control) is more secure
than pathname-based access control.

Argument 3. AA isn't complete until it mediates network and IPC.

Argument 4. AA doesn't have buy-in from VFS maintainers.

Let me comment on these one-by-one.

1. I think this is a bogus argument for rejecting AA.  As I remember it,
the whole point of LSM was to allow merging multiple solutions and let
them compete for users on their merit, not to force the Linux maintainers
to figure out which is the best solution.  Let a million flowers bloom.

2. This is argument #1 in a different guise and I find it about as weak.
Pathname-based access control has strengths and weaknesses.  I think
users and Linux distributions are in a better position to evaluate those
tradeoffs than L-K.  Competition is good.

3. This one I agree with.  If you want to sandbox network daemons that
you're concerned might get hacked, then you really want your sandbox to
mediate everything.  Right now the security provided by AA (if that's
what you are using it for) will be limited by its incomplete mediation,
since a knowledgeable motivated attacker who hacks your daemon may be
able to use network or IPC to break out of the sandbox, depending upon
the network and host configuration.  Filesystem mediation alone might be
better than nothing, I suppose, if you're worried about script kiddies,
but it's certainly not a basis for strong security claims.  The state of
the art in sandboxing obviously requires complete mediation, and we've
known that for a decade.

That said, I see filesystem mediation as the hard part.  It's the hardest
part to implement and get right.  It's the hardest part to configure
and the hardest part when it comes to designing usable policy languages.
And I suspect it's the hardest part to get merged into the Linux kernel.
And it often makes sense to start with the hard part.  If AA's approach
to mediating the filesystem is acceptable, I think AA is 2/3rds of the way
to a tool that could be very useful for providing strong security claims.

There's a policy decision here: Do maintainers refuse to merge AA until
it provides complete mediation?  That's a policy matter that's up to the
maintainers.  I have no opinion on it.  However if that is the reason
for rejecting AA it seems like it might be appropriate to come to some
decision now about whether AA's approach to filesystem mediation is
acceptable to Linux developers.  I don't think it would be reasonable
to tell AA developers to go spend a few months developing network and
IPC mediation and then after they do that, to reject the whole thing on
the basis that the approach to filesystem mediation is unacceptable.
That won't encourage development of new and innovative approaches to
security, which doesn't seem like a good thing to me.

4. Way over my head.  I'm not qualified to comment on this aspect.
I suspect this is the argument that ought to be getting the most serious
and thorough discussion, not the irrelevant SELinux-vs-AA faceoff.
-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [AppArmor 39/45] AppArmor: Profile loading and manipulation, pathname matching

2007-06-24 Thread David Wagner
James Morris  wrote:
>The point is that the pathname model does not generalize, and that
>AppArmor's inability to provide adequate coverage of the system is a
>design issue arising from this.

I don't see it.  I don't see why you call this a design issue.  Isn't
this just a case where they haven't gotten around to implementing
network and IPC mediation yet?  How is that a design issue arising
from a pathname-based model?  For instance, one system I built (Janus)
provided complete mediation, including mediation of network and IPC,
yet it too used a pathname model for its policy file when describing
the policy for the filesystem.  That seems to contradict your statement.


Re: [AppArmor 39/45] AppArmor: Profile loading and manipulation, pathname matching

2007-06-24 Thread David Wagner
James Morris  wrote:
>A. Pathname labeling - applying access control to pathnames to objects,
>rather than labeling the objects themselves.
>
>Think of this as, say, securing your house by putting a gate in the street
>in front of the house, regardless of how many other possible paths there
>are to the house via other streets, adjoining properties etc.
>
>Pathname labeling and mediation is simply mediating a well-known path to
>the object.  In this analogy, object labeling would instead ensure that
>all of the accessible doors, windows and other entrances of the house were
>locked, so that someone trying to break in from the rear alley would
>not get in simply by bypassing the front gate and opening any door.
>
>What you do with AppArmor, instead of addressing the problem, is just
>redefine the environment along the lines of set your house into a rock
>wall so there is only one path to it.

Harrumph.  Those analogies sound good but aren't a very good guide.

Let's take a concrete example.  Consider the following fragment of a
policy for Mozilla:
allow ~/.mozilla
deny ~
Ignore the syntax; the goal is to allow Mozilla to access files under
~/.mozilla but nothing else under my home directory.  This is a perfectly
reasonable policy fragment to want to enforce.  And enforcing it in
the obvious way using pathname-based access control is not a ridiculous
thing to do.

Yes, in theory, there could always be crazy symlinks or hardlinks
from somewhere under ~/.mozilla to elsewhere in my home directory that
would cause this policy to behave in ways different from how I desired.
In theory.  But in practice this is pretty good: good enough to be
useful in the real world.  In the real world I don't have any symlinks
like that under my ~/.mozilla directory, and I'm not really worried
about unconfined processes accidentally creating a symlink under there
against my wishes.  It'd be good enough for me.
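The "in theory" caveat is easy to make concrete.  Below is a minimal
sketch (Python, with invented file names) of the gap between checking
the *requested pathname* and checking what the name actually resolves
to: a symlink placed under ~/.mozilla passes a naive pathname prefix
check even though its real target lies outside the allowed subtree.
This is an illustration of the failure mode, not a description of how
AppArmor itself resolves names.

```python
import os
import tempfile

# Hypothetical layout: a secret outside ~/.mozilla, and a symlink
# inside ~/.mozilla pointing at it.
home = os.path.realpath(tempfile.mkdtemp())
os.makedirs(os.path.join(home, ".mozilla"))
secret = os.path.join(home, "secret.txt")
with open(secret, "w") as f:
    f.write("key material")
link = os.path.join(home, ".mozilla", "escape")
os.symlink(secret, link)

allowed = os.path.join(home, ".mozilla") + os.sep

# A check on the pathname as requested says the access is confined:
print(link.startswith(allowed))                    # True
# Resolving the symlink shows the real target is outside the subtree:
print(os.path.realpath(link).startswith(allowed))  # False
```

As the surrounding text argues, whether this matters depends on whether
any process you care about can plant such a link in the first place.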

Yes, pathname-based models have limitations and weaknesses, but don't
overplay them.  For some purposes they are a very simple way of specifying
a desired policy and they work well enough to be useful -- a darn sight
better than what we've got today.  If your goal is ease of use and
something better than what many users are using today, it's not an
unreasonable approach.  Time will tell whether it's the best solution,
but it's not obviously wrong.

And I think that's the criterion: if you want to argue that the very
idea of pathname-based access control is so bogus that no pathname-based
approach should be merged, then you should have to argue that it is
obviously wrong and obviously not useful to users.  I don't think that
burden has been met.

>B. Pathname access control as a general abstraction for OS security.

This strikes me as a strawman.  Pathname-based access control is an
abstraction for mediating the *filesystem*.  Who says it has to be the
way you mediate the network or IPC?

>To quote from:
>
>http://www.novell.com/linux/security/apparmor/
>
>  AppArmor gives you network application security via mandatory access
>   control for programs, protecting against the exploitation of software
>   flaws and compromised systems. AppArmor includes everything you need to
>   provide effective containment for programs (including those that run as
>   root) to thwart attempted exploits and even zero-day attacks.
>
>This is not accurate in any sense of the term containment of mandatory
>access control that I've previously encountered.

You bet.  The claim you quote is totally bogus.

Bad marketers, no biscuit for you.

>The fact that it doesn't work as expected does not arise simply from
>missing features or being different.  It arises from the design of the
>system, which uses a pathname abstraction, where, even if we agree to
>ignore issue (1) above, still does not work, because only filesystem
>interactions are mediated.

Disagree.

>The simple policy that users can so effortlessly manipulate is simple
>because it is wrong, and deliberately so.
>
>The design of the AppArmor is based on _appearing simple_, but at the
>expense of completeness and thus correctness.

Based on my experience with Janus, my expectation is the policy isn't
going to get that much more complicated when they add mediation of
network and IPC access.  I suspect it will stay almost as simple.


Re: [AppArmor 01/41] Pass struct vfsmount to the inode_create LSM hook

2007-06-01 Thread David Wagner
[EMAIL PROTECTED] writes:
>Experience over on the Windows side of the fence indicates that "remote bad
>guys get some local user first" is a *MAJOR* part of the current real-world
>threat model - the vast majority of successful attacks on end-user boxes these
>days start off with either "Get user to (click on link|open attachment)" or
>"Subvert the path to a website (either by hacking the real site or hijacking
>the DNS) and deliver a drive-by fruiting when the user visits the page".

AppArmor isn't trying to defend everyday users from getting phished or
social engineered; it is trying to protect servers from getting rooted
because of security holes in their network daemons.  I find that a
laudable goal.  Sure, it doesn't solve every security problem in the
world, but so what?  A tool that could solve that one security problem
would still be a useful thing, even if it did nothing else.

I don't find the Windows stuff too relevant here.  As I understand it,
AppArmor isn't aimed at defending Windows desktop users; it is aimed at
defending Linux servers.  A pretty different environment, I'd say.

Ultimately, there are some things AppArmor may be good at, and there
are also sure to be some things it is bloody useless for.  My hammer
isn't very good for screwing in screws, but I still find it useful.
I confess I don't understand the kvetching about AppArmor's goals.
What are you expecting, some kind of silver bullet?

A question I'd find more interesting is whether AppArmor is able to
meet its stated goals, under a reasonable threat model, and with what
degree of assurance, and at what cost.  But I don't know whether that's
relevant for the linux-kernel mailing list.



Re: [AppArmor 01/41] Pass struct vfsmount to the inode_create LSM hook

2007-05-29 Thread David Wagner
[EMAIL PROTECTED] wrote:
> no, this won't help you much against local users, [...]

Pavel Machek  wrote:
>Hmm, I guess I'd love "it is useless on multiuser boxes" to become
>standard part of AA advertising.

That's not quite what david@ said.  As I understand it, AppArmor is not
focused on preventing attacks by local users against other local users;
that's not the main problem it is trying to solve.  Rather, its primary
purpose is to deal with attacks by remote bad guys against your network
servers.  That is a laudable goal.  Anything that helps reduce the impact
of remote exploits is bound to be useful, even if it doesn't do a darn
thing to stop local users from attacking each other.

This means that AppArmor could still be useful on multiuser boxes,
even if that utility is limited to defending (some) network daemons
against remote attack (or, more precisely, reducing the damage done by
a successful remote attack against a network daemon).



Re: AppArmor FAQ

2007-04-19 Thread David Wagner
Stephen Smalley  wrote:
>Integrity protection requires information flow control; you can't
>protect a high integrity process from being corrupted by a low integrity
>process if you don't control the flow of information.  Plenty of attacks
>take the form of a untrusted process injecting data that will ultimately
>be used by a more trusted process with a surprising side effect.

I don't agree with this blanket statement.  In a number of cases
of practical interest, useful integrity protection can be achieved
without full information flow control.  Suppose you have a malicious
("low integrity") process A, and a target ("high integrity") process B.
We want to prevent A from attacking B.  One way to do that is to ensure
that A has no overt channel it can use to attack process B, by severely
restricting A's ability to cause side effects on the rest of the world.
This is often sufficient to contain the damage that A can do.

Of course, if the intended functionality of the system requires A to
communicate data to B, and if you don't trust B's ability to handle
that data carefully enough, and if A is malicious, then you've got a
serious problem.

But in a number of cases (enough cases to be useful), you can provide
a useful level of security without needing information flow control and
without needing global, persistent labels.


Re: AppArmor FAQ

2007-04-19 Thread David Wagner
Crispin Cowan wrote:
> How is it that you think a buffer overflow in httpd could allow an
> attacker to break out of an AppArmor profile?

James Morris  wrote:
> [...] you can change the behavior of the application and then bypass 
> policy entirely by utilizing any mechanism other than direct filesystem 
> access: IPC, shared memory, Unix domain sockets, local IP networking, 
> remote networking etc.
[...]
> Just look at their code and their own description of AppArmor.

My gosh, you're right.  What the heck?  With all due respect to the
developers of AppArmor, I can't help thinking that that's pretty lame.
I think this raises substantial questions about the value of AppArmor.
What is the point of having a jail if it leaves gaping holes that
malicious code could use to escape?

And why isn't this documented clearly, with the implications fully
explained?

I would like to hear the AppArmor developers defend this design decision.
When we developed Janus, over 10 years ago, we defended against these
attack avenues and protected everything -- not just the filesystem.
Systrace does the same, as does Plash; so do Consh, MapBox, and Ostia,
to name a few other examples from the research world.  This is
standard stuff that is well-documented in the literature, and it seems to
me it is necessary before you can claim to have a useful jail.  What am
I missing?


P.S. I think the criticisms that "AppArmor is pathname-based" or
"AppArmor doesn't do everything SELinux does" or "AppArmor doesn't do
information flow control" are weak.  But the criticism that "AppArmor
leaves security holes that can be used to escape the jail" seems like
a serious criticism to me.  Perhaps a change of focus is in order.


Re: AppArmor FAQ

2007-04-19 Thread David Wagner
Stephen Smalley  wrote:
>Confinement in its traditional sense (e.g. the 1973 Lampson paper, ACM
>Vol 16 No 10) means information flow control, which you have agreed
>AppArmor does not and cannot provide.

Right, that's how I understand it, too.

However, I think some more caveats are in order.  In all honesty,
I don't think SELinux solves Lampson's problem, either.

It is useful to distinguish between "bit-confinement" (confining the
flow of information, a la Lampson) vs "authority-confinement" (confining
the flow of privileges and the ability of the untrusted app to cause
side effects on the rest of the system).

No Linux system provides bit-confinement, if the confined app is
malicious.  AppArmor does not provide bit-confinement.  Neither does
SELinux.  SELinux can stop some kinds of accidental leakage of secrets,
but it cannot prevent deliberate attempts to leak the secrets that are
known to malicious apps.  The reason is that, in every system under
consideration, it is easy for a malicious app to leak any secrets it might
have to the outside world by using covert channels (e.g., wall-banging).
In practical terms, Lampson's bit-confinement problem is just not
solvable.  Oh well, so it goes.
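Wall-banging is easiest to see in caricature.  The sketch below is a
toy, fully deterministic simulation (the function names and numbers are
invented for illustration): the confined sender never writes any data to
the receiver, but it modulates contention on a shared resource, and the
receiver decodes bits from the latency it observes on its own workload.
Real covert channels measure actual wall-clock time and need error
correction, but the principle is the same.

```python
# Toy simulation of a timing covert channel ("wall-banging").

def sender_load(bit):
    """Sender: heavy contention encodes a 1; staying mostly idle encodes a 0."""
    return 0.9 if bit else 0.1

def observed_latency(load, base=1.0):
    """Receiver: latency of a fixed workload grows with resource contention."""
    return base * (1.0 + load)

def transmit(bits, threshold=1.5):
    """Receiver decodes one bit per time slot by thresholding latency."""
    return [1 if observed_latency(sender_load(b)) > threshold else 0
            for b in bits]

secret = [1, 0, 1, 1, 0, 0, 1]
print(transmit(secret) == secret)   # True: the bits leak with no overt channel
```

The policy mediator never sees a read or a write, yet the secret crosses
the boundary anyway, which is why bit-confinement against a malicious
app is considered unsolvable in practice.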

A good jail needs to provide authority-confinement, but thankfully,
it doesn't need to provide bit-confinement.  I don't know enough about
AppArmor to know whether it is able to do a good job of providing
authority-confinement.  If it cannot, then it deserves criticism on
those grounds.

Often the pragmatic solution to the covert channel problem is to ensure
that untrusted apps are never given access to critical secrets in the
first place.  They can't leak something they don't know.  This solves the
confidentiality problem by avoiding any attempt to tackle the unsolvable
bit-confinement problem.

Note that the problem of building a good jail is a little different from
the information flow control problem.


Re: AppArmor FAQ

2007-04-19 Thread David Wagner
James Morris  wrote:
>On Wed, 18 Apr 2007, Crispin Cowan wrote:
>> How is it that you think a buffer overflow in httpd could allow an
>> attacker to break out of an AppArmor profile?
>
>Because you can change the behavior of the application and then bypass 
>policy entirely by utilizing any mechanism other than direct filesystem 
>access: IPC, shared memory, Unix domain sockets, local IP networking, 
>remote networking etc.

Any halfway decent jail will let you control access to all of those
things, thereby preventing an 0wned httpd from breaking out of the jail.
(For instance, Janus did.  So does Systrace.)

Are you saying AppArmor does not allow that kind of control?  Specifics
would be useful.

>Also worth noting here is that you have to consider any limited 
>environment as enforcing security policy, and thus its configuration 
>becomes an additional component of security policy.

I don't understand what you are saying.  Yes, the AppArmor policy
file is part of policy.  Is that what you mean?



Re: AppArmor FAQ

2007-04-17 Thread David Wagner
James Morris  wrote:
>This is not what the discussion is about.  It's about addressing the many 
>points in the FAQ posted here which are likely to cause misunderstandings, 
>and then subsequent responses of a similar nature.

Thank you.  Then I misunderstood, and I owe you an apology.  Thank you
for your patience and for correcting my mistaken impression.

For what it's worth, I agreed with most or all of the comments you made
in your original response to the FAQ posted here.  I thought they were
constructive.  What got me to ranting was an email from Karl MacMillan
that seemed focused more on debating the merits of AppArmor rather than
on improving the FAQ.


Re: AppArmor FAQ

2007-04-17 Thread David Wagner
James Morris  wrote:
>On Tue, 17 Apr 2007, David Wagner wrote:
>> Maybe you'd like to confine the PHP interpreter to limit what it can do.
>> That might be a good application for something like AppArmor.  You don't
>> need comprehensive information flow control for that kind of use, and
>> it would likely just get in the way.
>
>SELinux can do this, it's policy-flexible.  You can even simulate a 
>pathame-based policy language with a consequential loss of control:

I have no doubt that SELinux can do that, but that has about as much
relevance to my point as the price of tea in China does.  I can use a
screwdriver to drive a nail into my wall, too, if I really wanted to,
but that doesn't mean toolmakers should stop manufacturing hammers.

My point is that there are some tasks where it's plausible that AppArmor
might well be a better (easier-to-use) tool for the job.  I'm inclined
to suspect I might find it easier to use AppArmor for this kind of task
than SELinux, and I suspect I'm not the only one.  That doesn't mean
that AppArmor is somehow inherently superior to SELinux, or something
like that.

No one is claiming that AppArmor is "a better SELinux".  It solves
a somewhat different problem, and has a different set of tradeoffs.
It seems potentially useful.  That ought to be enough.  The world does
not revolve around SELinux.


Re: AppArmor FAQ

2007-04-17 Thread David Wagner
James Morris  wrote:
>I would challenge the claim that AppArmor offers any magic bullet for
>ease of use.

There are, of course, no magic bullets for ease of use.
I would not make such a strong claim.  I simply stated that it
is plausible that AppArmor might have some advantages in some
deployment environments.

The purpose of LSM was to enable multiple different approaches to
security, so that we don't have to fight over the One True Way to
do it.  There might not be one best way for all situations.

These systems probably have different tradeoffs.  Consequently, it seems
to me that arguing over whether SELinux is superior to AppArmor makes
about as much sense as arguing over whether emacs is superior to vim,
or whether Python is superior to Perl.  The answer is likely to be
"it depends".

It's to be expected that SELinux developers prefer their own system
over AppArmor, or that AppArmor developers prefer AppArmor to SELinux.
(Have you ever seen any new parent who thinks their own baby is ugly?)
SELinux developers are likely to have built a system that addresses
the problems that seem important to them; other systems might set
priorities differently.

I think in this case the best remedy is to let many flowers bloom,
and let the users decide for themselves.


Re: AppArmor FAQ

2007-04-17 Thread David Wagner
Karl MacMillan  wrote:
>My private ssh keys need to be protected regardless
>of the file name - it is the "bag of bits" that make it important not
>the name.

I think you picked a bad example.  That's a confidentiality policy.
AppArmor can't make any guarantees about confidentiality.  Neither can
SELinux.  If you give a malicious app the power to read your private ssh
key, it's game over, dude.  (Covert channels, wall banging, and all that.)
So don't do that.

>Similarly, you protect the integrity of the applications that
>need name resolution by ensuring that the data that they read is high
>integrity. You do that by controlling data not the file name used to
>access the data. That is James point - a comprehensive mechanism like
>SELinux allows you to comprehensively protect the integrity of data.

I think this argument just misses the point.  What you want isn't what
AppArmor does.  Fine.  Nobody is forcing you to use AppArmor.  But that
doesn't mean AppArmor is useless.  There may be people who want what
AppArmor has to provide.

It sounds like you want a comprehensive information flow control system.
That's not what AppArmor provides.  If I understand correctly, one
thing AppArmor does provide is a way to confine untrusted legacy
apps in a restricted jail.  That can be useful in some scenarios.
Consider, for instance, a web server where untrusted users can upload PHP
scripts, and you're concerned that those PHP scripts might be malicious.
Maybe you'd like to confine the PHP interpreter to limit what it can do.
That might be a good application for something like AppArmor.  You don't
need comprehensive information flow control for that kind of use, and
it would likely just get in the way.
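To make that scenario concrete, a confinement profile for the PHP
interpreter might look roughly like the following.  This is a
hypothetical sketch in AppArmor's profile language, not a tested
policy; the interpreter path and directory layout are assumptions.

```
# Hypothetical profile for a CGI PHP interpreter -- illustrative only.
/usr/bin/php-cgi {
  #include <abstractions/base>

  /etc/php/** r,
  /usr/lib/php/** mr,
  /var/www/uploads/** r,    # untrusted uploaded scripts: read-only
  /tmp/php_* rw,            # scratch files
  deny /etc/shadow r,       # redundant: anything not listed is denied
}
```

The point is that the whole policy is a handful of path rules an admin
can read at a glance; no labelling of the filesystem is required.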

Those who want an information flow control system will probably use
SELinux or something like it.  Those who want what AppArmor has to offer
might well use AppArmor.  They solve different problems and have different
tradeoffs.  There is room for more than one security tool in the world.


Re: AppArmor FAQ

2007-04-17 Thread David Wagner
Karl MacMillan  wrote:
>I don't think that the ease-of-use issue is clear cut. The hard part of
>understanding both SELinux policies and AppArmor profiles is
>understanding what access should be allowed. [...]
>Whether the access is allowed with the SELinux or
>AppArmor language seems like a small issue in comparison. Given that I
>think it is better to choose the solution that is complete and capable
>of meeting the most security concerns.

I have a different reaction.  Given that the ease-of-use vs.
completeness tradeoffs are not fully understood, I would think
it would make more sense to include both in the kernel.  Wasn't that
the whole point of the LSM interface, to let competing approaches
bloom and compete on their merits?

I have to say that I'm not convinced the difference in policy
languages is a small issue.  I find the SELinux policy language
and policy files more or less inscrutable.  In comparison, from
the AppArmor FAQ, I can imagine that I might be able to understand
enough to hack AppArmor policies after 5 minutes of reading a
man page.  Whether I'm likely to know what the policy ought to be
is indeed a tough question, but I can imagine that AppArmor might
be more usable than SELinux.  Even if SELinux is more "complete"
than AppArmor, I might still prefer ease of use and understandability
over completeness.

And I have to say that the ability to form a mental model of how
the system works and understand more or less what it is doing may
be useful.  I find debugging SELinux problems a bear: I often just
end up disabling entire SELinux policies, or turning off SELinux,
because I can't understand what it's doing.  In comparison, it's
plausible that it might be easier for sysadmins to understand what
AppArmor is doing, since they don't have to understand labelling and
hard-to-read policy files.  And the increase in understandability
might potentially outweigh the "completeness" issue, in some cases.
Ultimately, easier-to-use, easier-to-understand tools may improve
security overall, because they are more likely to be used, and to be
used correctly.

Bottom line: I think the comparison regarding ease of use is a
bit speculative at this point, but I think there is sufficient
reason for thinking that AppArmor might be a useful tool in some
deployment environments.

>I'd also argue that the typical interface presented to admins (which
>doesn't involve writing new policies) is easier for SELinux than with
>AppArmor. Most admins do fine with relabeling, changing booleans, and
>running audit2allow, which is all that is needed to solve the majority
>of SELinux issues.

Heh.  I had to chuckle at that one: it is pretty far removed from
my own personal experience.


Re: AppArmor FAQ

2007-04-17 Thread David Wagner
James Morris  wrote:
>On Tue, 17 Apr 2007, David Wagner wrote:
>> Maybe you'd like to confine the PHP interpreter to limit what it can do.
>> That might be a good application for something like AppArmor.  You don't
>> need comprehensive information flow control for that kind of use, and
>> it would likely just get in the way.
>
>SELinux can do this, it's policy-flexible.  You can even simulate a
>pathname-based policy language with a consequential loss of control:

I have no doubt that SELinux can do that, but that has about as much
relevance to my point as the price of tea in China does.  I can use a
screwdriver to drive a nail into my wall, too, if I really wanted to,
but that doesn't mean toolmakers should stop manufacturing hammers.

My point is that there are some tasks where it's plausible that AppArmor
might well be a better (easier-to-use) tool for the job.  I'm inclined
to suspect I might find it easier to use AppArmor for this kind of task
than SELinux, and I suspect I'm not the only one.  That doesn't mean
that AppArmor is somehow inherently superior to SELinux, or something
like that.

No one is claiming that AppArmor is a better SELinux.  It solves
a somewhat different problem, and has a different set of tradeoffs.
It seems potentially useful.  That ought to be enough.  The world does
not revolve around SELinux.


Re: AppArmor FAQ

2007-04-17 Thread David Wagner
James Morris  wrote:
>This is not what the discussion is about.  It's about addressing the many
>points in the FAQ posted here which are likely to cause misunderstandings,
>and then subsequent responses of a similar nature.

Then I misunderstood, and I owe you an apology.  Thank you
for your patience and for correcting my mistaken impression.

For what it's worth, I agreed with most or all of the comments you made
in your original response to the FAQ posted here.  I thought they were
constructive.  What got me to ranting was an email from Karl MacMillan
that seemed focused more on debating the merits of AppArmor rather than
on improving the FAQ.


Re: [AppArmor 00/41] AppArmor security module overview

2007-04-16 Thread David Wagner
Pavel Machek  wrote:
> David Wagner wrote:
>> There was no way to follow fork securely.
>
>Actually there is now. I did something similar called subterfugue and
>we solved this one.

Yes, I saw that.  I thought subterfugue was neat.  The way subterfugue
followed fork was a clever hack -- albeit too clever by half, in my opinion.
Dynamically re-writing the program on the fly to insert a trap after
the fork() call, right?  When the tracer has to do that kind of thing,
I find it hard to get confidence that it will be secure.  It seems
all too easy to imagine ways that the tracee might be able to escape
the tracing and break security.  There are all sorts of corner cases
to think about.  What if the program is executing from shared memory?
What if there are multiple threads running concurrently?  What if the
program is executing from a region of memory that a DMA is scheduled to
write to asynchronously?  Any of those cases could create a race condition
(TOCTTOU) where the trap after the fork() gets removed before the program
reaches that point of execution.

ptrace() seems like a fine answer for a debugger, but I think it's not
such a great answer for a security tool where you have to be dead-certain
there is no way to escape the sandbox.  When I'm relying upon something
for security, the last thing you want is to have to go through hairy
kludgy kludgy contortions to make up for flaws in the interface.
Complexity is the enemy of security.

I still think that ptrace() is not the best way to implement this kind
of security tool, and I think it's entirely understandable that they did
not use ptrace.  I do not think it is a fair criticism of AppArmor to say
"AppArmor should have used ptrace()".

>>  Handling of signals is a mess: ptrace overloads the
>> signal mechanism to deliver its events, [...]
>
>We got this solved in linux, I believe.

Out of curiosity, how was this solved?  It looked pretty fundamental
to me.

Thanks for your comments...


Re: [PATCH resend][CRYPTO]: RSA algorithm patch

2007-04-12 Thread David Wagner
Indan Zupancic wrote:
>On Thu, April 12, 2007 11:35, Satyam Sharma wrote:
>> 1. First, sorry, I don't think an RSA implementation not conforming to
>> PKCS #1 qualifies to be called RSA at all. That is definitely a *must*
>> -- why break strong crypto algorithms such as RSA by implementing them
>> in insecure ways?
>
>It's still RSA, that it's not enough to get a complete and secure crypto
>system doesn't mean it isn't RSA anymore. Maybe you're right and having
>RSA without the rest makes no sense.

Yes, Satyam Sharma is 100% correct.  Unpadded RSA makes no sense.  RSA is
not secure if you omit the padding.  If you have a good reason why RSA
needs to be in the kernel for security reasons, then the padding has to be
in the kernel, too.  Putting plain unpadded RSA in the kernel seems bogus.

I worry about the quality of this patch if it is using unpadded RSA.
This is pretty elementary stuff.  No one should be implementing their
own crypto code unless they have considerable competence and knowledge
of cryptography.  This elementary error leaves reason to be concerned
about whether the developer of this patch has the skills that are needed
to write this kind of code and get it right.

People often take it personally when I tell them that they are not
competent to write their own crypto code, but this is not a personal
attack.  It takes very specialized knowledge and considerable study
before one can write a crypto implementation from scratch and
have a good chance that the result will be secure.  People without
those skills shouldn't be writing their own crypto code, at least not
if security is important, because it's too easy to get something wrong.
(No, just reading Applied Cryptography is not good enough.)  My experience
is that code that contains elementary errors like this is also likely
to contain more subtle errors that are harder to spot.  In short, I'm
not getting warm fuzzies here.

And no, you can't just blithely push padding into user space and expect
that to make the security issues go away.  If you are putting the
RSA exponentiation in the kernel because you don't trust user space,
then you have to put the padding in the kernel, too, otherwise you're
vulnerable to attack from evil user space code.

It is also not true that padding schemes change all the time.  They're
fairly stable.  Pick a reasonable modern padding scheme and leave it.


Re: [AppArmor 00/41] AppArmor security module overview

2007-04-12 Thread David Wagner
Pavel Machek  wrote:
>You can do the same with ptrace. If that's not fast enough... improve
>ptrace?

I did my Master's thesis on a system called Janus that tried using ptrace
for this goal.  The bottom line is that ptrace sucks for this purpose.
It is a kludge.  It is not the right approach.  I do not know of any
satisfactory way to improve ptrace for this purpose; you have to throw
away ptrace and start over.

At the time I did the work, ptrace had all sorts of serious problems.
Here are some of them.  There was no way to follow fork securely.
There was no way to deny a single system call without killing the process
entirely.  Performance was poor, because ptrace context-switches on every
read() and write().  Handling of signals is a mess: ptrace overloads the
signal mechanism to deliver its events, which in retrospect was a lousy
design decision: it makes the tracer complex and error-prone and makes
it hard to maintain the transparency of tracing.  ptrace breaks wait(),
and consequently handling wait() and other signal-related system calls
transparently and securely is ugly at best.  Handling signals is probably
feasible but a total mess, and that's the last thing you want in the
security-critical part of your system.  In addition, ptrace operates
at the wrong level of abstraction and forces the user-level tracer to
maintain a lot of shadow state that must be kept in sync with state held
by the kernel.  That's an opportunity for security holes.  Also, ptrace
has no way to force the tracee to die if the tracer unexpectedly dies,
which is risky when using ptrace for security confinement.  I haven't
checked whether these problems are still present in the current
implementation of ptrace, but I'd guess that many probably still are,
because many are fundamental consequences of how ptrace works.

Before advocating ptrace for this purpose, I encourage you to study some
of the relevant literature.  Start with Chapter 4 of my Master's thesis.
  http://www.cs.berkeley.edu/~daw/papers/janus-masters.ps
Then, read Tal Garfinkel's paper on system call interposition.
  http://www.stanford.edu/~talg/papers/traps/abstract.html
Then, read about Ostia.
  http://www.stanford.edu/~talg/papers/NDSS04/abstract.html
I think these may change your mind about the suitability of ptrace for
this task.


Re: [PATCH] Undo some of the pseudo-security madness

2007-01-21 Thread David Wagner
Samium Gromoff  wrote:
>[...] directly setuid root the lisp system executable itself [...]

Like I said, that sounds like a bad idea to me.  Sounds like a recipe for
privilege escalation vulnerabilities.  Was the lisp system executable
really implemented to be secure even when you make it setuid root?
Setting the setuid-root bit on programs that didn't expect to be
setuid-root is generally not a very safe thing to do. [1]

The more I hear, the more unconvinced I am by this use case.

If you don't care about the security issues created by (mis)using the lisp
interpreter in this way, then like I suggested before, you can always
write a tiny setuid-root wrapper program that turns off address space
randomization and exec()s the lisp system executable, and leave the lisp
system executable non-setuid and don't touch the code in the Linux kernel.
That strikes me as a better solution: those who don't mind the security
risks can take all the risks they want, without forcing others to take
unwanted and unnecessary risks.

It's not that I'm wedded to address space randomization of setuid
programs, or that I think it would be a disaster if this patch were
accepted.  Local privilege escalation attacks aren't the end of the world;
in all honesty, they're pretty much irrelevant to many or most users.
It's just that the arguments I'm hearing advanced in support of this
change seem dubious, and the change does eliminate one of the defenses
against a certain (narrow) class of attacks.


[1] In comparison, suidperl was designed to be installed setuid-root,
and it takes special precautions to be safe in this usage.  (And even it
has had some security vulnerabilities, despite its best efforts, which
illustrates how tricky this business can be.)  Setting the setuid-root
bit on a large complex interpreter that wasn't designed to be setuid-root
seems like a pretty dubious proposition to me.


Re: [PATCH] Undo some of the pseudo-security madness

2007-01-21 Thread David Wagner
Samium Gromoff  wrote:
>the core of the problem are the cores which are customarily
>dumped by lisps during the environment generation (or modification) stage,
>and then mapped back, every time the environment is invoked.
>
>at the current step of evolution, those core files are not relocatable
>in certain natively compiling lisp systems.
>
>in an even smaller subset of them, these cores are placed after
>the shared libraries and the executable.
>
>which obviously breaks when the latter are placed unpredictably.
>(yes, i know, currently mmap_base() varies over a 1MB range, but who
>says it will last indefinitely -- probably one day these people
>from full-disclosure will prevail and it will become, like, 256MB ;-)
>
>so, what do you propose?

The obvious solution is: Don't make them setuid root.
Then this issue disappears.

If there is some strong reason why they need to be setuid root, then
you'll need to explain that reason and your requirements in more detail.
But, based on your explanation so far, I have serious doubts about
whether it is a good idea to make such core-dumps setuid root in the
first place.


Re: [PATCH] Undo some of the pseudo-security madness

2007-01-21 Thread David Wagner
Samium Gromoff  wrote:
[...] directly setuid root the lisp system executable itself [...]

Like I said, that sounds like a bad idea to me.  Sounds like a recipe for
privilege escalation vulnerabilities.  Was the lisp system executable
really implemented to be secure even when you make it setuid root?
Setting the setuid-root bit on programs that didn't expect to be
setuid-root is generally not a very safe thing to do. [1]

The more I hear, the more unconvinced I am by this use case.

If you don't care about the security issues created by (mis)using the lisp
interpreter in this way, then like I suggested before, you can always
write a tiny setuid-root wrapper program that turns off address space
randomization and exec()s the lisp system executable, and leave the lisp
system executable non-setuid and don't touch the code in the Linux kernel.
That strikes me as a better solution: those who don't mind the security
risks can take all the risks they want, without forcing others to take
unwanted and unnecessary risks.

It's not that I'm wedded to address space randomization of setuid
programs, or that I think it would be a disaster if this patch were
accepted.  Local privilege escalation attacks aren't the end of the world;
in all honesty, they're pretty much irrelevant to many or most users.
It's just that the arguments I'm hearing advanced in support of this
change seem dubious, and the change does eliminate one of the defenses
against a certain (narrow) class of attacks.


[1] In comparison, suidperl was designed to be installed setuid-root,
and it takes special precautions to be safe in this usage.  (And even it
has had some security vulnerabilities, despite its best efforts, which
illustrates how tricky this business can be.)  Setting the setuid-root
bit on a large complex interpreter that wasn't designed to be setuid-root
seems like a pretty dubious proposition to me.


Re: [PATCH] Undo some of the pseudo-security madness

2007-01-20 Thread David Wagner
Samium Gromoff  wrote:
>This patch removes the dropping of ADDR_NO_RANDOMIZE upon execution of setuid
>binaries.
>
>Why? The answer consists of two parts:
>
>Firstly, there are valid applications which need an unadulterated memory map.
>Some of those which do their memory management, like lisp systems (like SBCL).
>They try to achieve this by setting ADDR_NO_RANDOMIZE and reexecuting
>themselves.
>
>Secondly, there also are valid reasons to want those applications to be setuid
>root. Like poking hardware.

This has the unfortunate side-effect of making it easier for local
attackers to mount privilege escalation attacks against setuid binaries
-- even those setuid binaries that don't need unadulterated memory maps.

There's a cleaner solution to the problem case you mentioned.  Rather than
re-exec()ing itself, the application could be split into two executables:
the first is a tiny setuid-root wrapper which sets ADDR_NO_RANDOMIZE and
then executes the second program; the second is not setuid-anything and does
all the real work.  Such a decomposition is often better for security
for other reasons, too (such as the fact that the wrapper can drop all
unneeded privileges before exec()ing the second executable).

Why would you need an entire lisp system to be setuid root?  That sounds
like a really bad idea.  I fail to see why that is a relevant example.  
Perhaps the fact that such a lisp system breaks if you have security features
enabled should tell you something.

It may be possible to defeat address space randomization in some cases,
but that doesn't mean address space randomization is worthless.

It sounds like there is a tradeoff between security and backwards
compatibility.  I don't claim to know how to choose between those tradeoffs,
but I think one ought to at least be aware of the pros and cons on both
sides.


Re: Entropy Pool Contents

2006-11-28 Thread David Wagner
Continuing the tangent:

Henrique de Moraes Holschuh  wrote:
>On Mon, 27 Nov 2006, Ben Pfaff wrote:
>> [EMAIL PROTECTED] (David Wagner) writes:
>> > Well, if you want to talk about really high-value keys like the scenarios
>> > you mention, you probably shouldn't be using /dev/random, either; you
>> > should be using a hardware security module with a built-in FIPS certified
>> > hardware random number source.  
>> 
>> Is there such a thing?  [...]
>
>There used to exist a battery of tests for this, but a FIPS revision removed
>them. [...]

The point I was making in my email was not about the use of FIPS
randomness tests.  The FIPS randomness tests are not very important.
The point I was making was about the use of a hardware security module
to store really high-value keys.  If you have a really high-value key,
that key should never be stored on a Linux server: standard advice is
that it should be generated on a hardware security module (HSM) and never
leave the HSM.  If you are in charge of Verisign's root cert private key,
you should never let this private key escape onto any general-purpose
computer (including any Linux machine).  The reason for this advice is
that it's probably much harder to hack a HSM remotely than to hack a
general-purpose computer (such as a Linux machine).

Again, this is probably a tangent from anything related to Linux kernel
development.


Re: Entropy Pool Contents

2006-11-27 Thread David Wagner
Warning: tangent with little practical relevance follows:

Kyle Moffett  wrote:
>Actually, our current /dev/random implementation is secure even if  
>the cryptographic algorithms can be broken under traditional  
>circumstances.

Maybe.  But, I've never seen any careful analysis to support this or
characterize exactly what assumptions are needed for this to be true.
Some weakened version of your claim might be accurate, but at a minimum
you probably need to make some heuristic assumptions about the sources
of randomness and the distribution of values they generate, and you may
also need some assumptions that the SHA hash function isn't *totally*
broken.  If you make worst-case assumptions, I doubt that this claim
can be justified in any rigorous way.

(For instance, compressing random samples with the CRC process is a
heuristic that presumably works fine for most randomness sources, but
it cannot be theoretically justified: there exist sources for which it
is problematic.  Also, the entropy estimator is heuristic and will
overestimate the true amount of entropy available, for some sources.
Likewise, if you assume that the cryptographic hash function is totally
insecure, then it is plausible that carefully chosen malicious writes to
/dev/random might be able to reduce the total amount of entropy in the
pool -- at least, I don't see how to prove that this is impossible.)

Anyway, I suspect this is all pretty thoroughly irrelevant in practice.
It is very unlikely that the crypto schemes are the weakest link in the
security of a typical Linux system, so I'm just not terribly worried
about the scenario where the cryptography is completely broken.  It's
like talking about whether, hypothetically, /dev/random would still be
secure if pigs had wings.

>When generating long-term cryptographic private keys, however, you  
>*should* use /dev/random as it provides better guarantees about  
>theoretical randomness security than does /dev/urandom.  Such  
>guarantees are useful when the random data will be used as a  
>fundamental cornerstone of data security for a server or network  
>(think your root CA certificate or HTTPS certificate for your million- 
>dollar-per-year web store).

Well, if you want to talk about really high-value keys like the scenarios
you mention, you probably shouldn't be using /dev/random, either; you
should be using a hardware security module with a built-in FIPS certified
hardware random number source.  The risk of your server getting hacked
probably exceeds the risk of a PRNG failure.

I agree that there is a plausible argument that it's safer to use
/dev/random when generating, say, your long-term PGP private key.
I think that's a reasonable view.  Still, the difference in risk
level in practice is probably fairly minor.  The algorithms that use
that private key are probably going to rely upon the security of hash
functions and other crypto primitives, anyway.  So if you assume that
all modern crypto algorithms are secure, then /dev/urandom may be just
as good as /dev/random; whereas if you assume that all modern crypto
algorithms are broken, then it may not matter much what you do.  I can
see a reasonable argument for using /dev/random for those kinds of keys,
on general paranoia and defense-in-depth grounds, but you're shooting
at a somewhat narrow target.  You only benefit if the crypto algorithms
are broken just enough to make a difference between /dev/random and
/dev/urandom, but not broken enough to make PGP insecure no matter how
you pick your random numbers.  That's the narrow target.  There are
better things to spend your time worrying about.

Nothing you say is unreasonable; I'm just sharing a slightly different
perspective on it all.


Re: Entropy Pool Contents

2006-11-27 Thread David Wagner
Phillip Susi  wrote:
>David Wagner wrote:
>> Nope, I don't think so.  If they could, that would be a security hole,
>> but /dev/{,u}random was designed to try to make this impossible, assuming
>> the cryptographic algorithms are secure.
>> 
>> After all, some of the entropy sources come from untrusted sources and
>> could be manipulated by an external adversary who doesn't have any
>> account on your machine (root or non-root), so the scheme has to be
>> secure against introduction of maliciously chosen samples in any event.
>
>Assuming it works because it would be a bug if it didn't is a logical 
>fallacy.  Either the new entropy pool is guaranteed to be improved by 
>injecting data or it isn't.  If it is, then only root should be allowed 
>to inject data.  If it isn't, then the entropy estimate should increase 
>when the pool is stirred.

Sorry, but I disagree with just about everything you wrote in this
message.  I'm not committing any logical fallacies.  I'm not assuming
it works because it would be a bug if it didn't; I'm just trying to
help you understand the intuition.  I have looked at the algorithm
used by /dev/{,u}random, and I am satisfied that it is safe to feed in
entropy samples from malicious sources, as long as you don't bump up the
entropy counter when you do so.  Doing so can't do any harm, and cannot
reduce the entropy in the pool.  However, there is no guarantee that
it will increase the entropy.  If the adversary knows what bytes you
are feeding into the pool, then those bytes add no entropy from the
attacker's point of view, and the entropy estimate should not be increased.

Therefore:
  - It is safe to allow non-root users to inject data into the pool
by writing to /dev/random, as long as you don't bump up the entropy
estimate.  Doing so cannot decrease the amount of entropy in the
pool.
  - It is not a good idea to bump up the entropy estimate when non-root
users write to /dev/random.  If a malicious non-root user writes
the first one million digits of pi to /dev/random, then this hasn't
increased the uncertainty that this attacker has in the pool, so
you shouldn't increase the entropy estimate.
  - Whether you automatically bump up the entropy estimate when
root users write to /dev/random is a design choice where you could
reasonably go either way.  On the one hand, you might want to ensure
that root has to take some explicit action to allege that it is
providing a certain degree of entropy, and you might want to insist
that root tell /dev/random how much entropy it added (since root
knows best where the data came from and how much entropy it is likely
to contain).  On the other hand, you might want to make it easier
for shell scripts to add entropy that will count towards the overall
entropy estimate, without requiring them to go through weird
contortions to call various ioctl()s.  I can see arguments both
ways, but the current behavior seems reasonable and defensible.

Note that, in any event, the vast majority of applications should be
using /dev/urandom (not /dev/random!), so in an ideal world, most of
these issues should be pretty much irrelevant to the vast majority of
applications.  Sadly, in practice many applications wrongly use
/dev/random when they really should be using /dev/urandom, either out
of ignorance, or because of serious flaws in the /dev/random man page.


Re: Entropy Pool Contents

2006-11-27 Thread David Wagner
Phillip Susi  wrote:
>Why are non root users allowed write access in the first place?  Can't 
>they pollute the entropy pool and thus actually REDUCE the amount of good 
>entropy?

Nope, I don't think so.  If they could, that would be a security hole,
but /dev/{,u}random was designed to try to make this impossible, assuming
the cryptographic algorithms are secure.

After all, some of the entropy sources come from untrusted sources and
could be manipulated by an external adversary who doesn't have any
account on your machine (root or non-root), so the scheme has to be
secure against introduction of maliciously chosen samples in any event.


Re: capabilities patch (v 0.1)

2005-08-09 Thread David Wagner
David Madore  wrote:
>I intend to add a couple of capabilities which are normally available
>to all user processes, including capability to exec(), [...]

Once you have a mechanism that lets you prevent the untrusted program
from exec-ing a setuid/setgid program (such as your bounding set idea),
I don't see any added value in preventing the program from calling exec().

"Don't forbid what you can't prevent".  The program can always emulate
the effect of exec() in userspace (for non-setuid/setgid programs) --
doing so is tedious, but nothing prevents a malicious userspace program
from implementing such a thing, I think.

This is only a comment on forbidding exec(), not on anything else in
your proposal.


Re: understanding Linux capabilities brokenness

2005-08-08 Thread David Wagner
David Madore  wrote:
>This does not tell me, then, why CAP_SETPCAP was globally disabled by
>default, nor why passing of capabilities across execve() was entirely
>removed instead of being fixed.

I do not know of any good reason.  Perhaps the few folks who knew enough
to fix it properly didn't feel like bothering; it beats me.

Messing with capabilities is scary.  As far as I can tell, there never was
any coherent "design" to the semantics of POSIX capabilities in Linux.
It's had a little bit of a feeling of a muddle of accumulated gunk,
so unless you understand it really well, it's hard to know whether any
changes you make are safe.  This may have scared people away from fixing
it "the right way".  But if you're volunteering to do the analysis and
figure out how to fix it, I say, sounds good to me.

Then again, I'm an outsider.  Perhaps someone more involved in the
development and maintenance of capabilities knows something that I don't.

Re: Fortuna

2005-04-18 Thread David Wagner
Theodore Ts'o  wrote:
>For one, /dev/urandom and /dev/random don't use the same pool
>(anymore).  They used to, a long time ago, but certainly as of the
>writing of the paper this was no longer true.  This invalidates the
>entire last paragraph of Section 5.3.

Ok, you're right, this is a serious flaw, and one that I overlooked.
Thanks for elaborating.  (By the way, has anyone contacted the authors to let them
know about these two errors?  Should I?)

I see three remaining criticisms from their Section 5.3:
1) Due to the way the documentation describes /dev/random, many
   programmers will choose /dev/random by default.  This default
   seems inappropriate and unfortunate.
2) There is a widespread perception that /dev/urandom's security is
   unproven and /dev/random's is proven.  This perception is wrong.
   On a related topic, it is "not at all clear" that /dev/random provides
   information-theoretic security.
3) Other designs place less stress on the entropy estimator, and
   thus are more tolerant to failures of entropy estimation.  A failure
   in the entropy estimator seems more likely than a failure in the
   cryptographic algorithms.
These three criticisms look right to me.

Apart from the merits or demerits of Section 5.3, the rest of the paper
seemed to have some interesting ideas for how to simplify and possibly
improve the /dev/random generator, which might be worth considering at
some point.


Re: Fortuna

2005-04-18 Thread David Wagner
Matt Mackall  wrote:
>On Sat, Apr 16, 2005 at 01:08:47AM +0000, David Wagner wrote:
>> http://eprint.iacr.org/2005/029
>
>Unfortunately, this paper's analysis of /dev/random is so shallow that
>they don't even know what hash it's using. Almost all of section 5.3
>is wrong (and was when I read it initially).

Yes, that is a minor glitch, but I believe all their points remain
valid nonetheless.  My advice is to apply the appropriate s/MD5/SHA1/g
substitution, and re-read the paper to see what you can get out of it.

The problem is not that the paper is shallow; it is not.  The source
of the error is likely that this paper was written by theorists, not
implementors.  There are important things we can learn from them, and I
think it is worth reading their paper carefully to understand what they
have to offer.

I believe they raise substantial and deep questions in their Section 5.3.
I don't see why you say Section 5.3 is all wrong.  Can you elaborate?
Can you explain one or two of the substantial errors you see?


Re: [PATCH 3/7] procfs privacy: misc. entries

2005-04-18 Thread David Wagner
Lorenzo Hernández García-Hierro wrote:
>El lun, 18-04-2005 a las 15:05 -0400, Dave Jones escribió:
>> This is utterly absurd. You can find out anything thats in /proc/cpuinfo
>> by calling cpuid instructions yourself.
>> Please enlighten me as to what security gains we achieve
>> by not allowing users to see this ?
>
>It's more obscurity than anything else. At least that's what privacy
>means usually.

Well, that's not what the word "privacy" means to me.  It seems to me
there are plenty of "privacy" issues that are real and legitimate and
have nothing to do with obscurity.

I agree with Dave Jones.  Security through obscurity makes no sense.

Re: Fortuna

2005-04-16 Thread David Wagner
Jean-Luc Cooke  wrote:
>The part which suggests choosing an irreducible poly and a value "a" in the
>preprocessing stage ... last I checked the value for a and the poly need to
>be secret.  How do you generate poly and a, Catch-22?  Perhaps I'm missing
>something and someone can point it out.

I don't think you're missing anything.  What you say matches my
understanding as well.


Re: Fortuna

2005-04-16 Thread David Wagner
linux wrote:
>Thank you for pointing out the paper; Appendix A is particularly
>interesting.  And the [BST03] reference looks *really* nice!  I haven't
>finished it yet, but based on what I've read so far, I'd like to
>*strongly* recommend that any would-be /dev/random hackers read it
>carefully.  It can be found at
>http://www.wisdom.weizmann.ac.il/~tromer/papers/rng.pdf

Yeah, [BST03] seems worth reading.  It has a reasonable survey of some
previous work, and is well-written.

However, I'm pretty skeptical about [BST03] as a basis for a real-world
randomness generator.  It assumes that there are only 2^t possible
distributions for the source, and the set of possible distributions has
been fixed in advance (before the design of your randomness generator
is revealed).  Consequently, it fails to defend against adaptive attacks.

If the attacker can feed in maliciously chosen inputs (chosen after the
attacker learns which randomness extraction algorithm you are using),
then the BST03 scheme promises nothing.  For instance, if you feed in
timings of network packets, then even if you don't count them as providing
any entropy, the mere act of feeding them into your randomness generator
causes their theorems to be clearly inapplicable (since no matter what
value of t you pick, the adversary can arrange to get more than t bits
of freedom in the network packets he sends you).

So I'm not sure [BST03]'s theorems actually promise what you'd want.

On the other hand, if you want to take their constructions as providing
some intuition or ideas about how one might build a randomness generator,
while realizing that their theorems don't apply and there may be no
useful guarantees that can be proven about such an approach, I don't
have any objections to that view.

By the way, another example of work along these lines is
http://theory.lcs.mit.edu/~yevgen/ps/2-ext.ps
That paper is more technical and theoretically-oriented, so it might
be harder to read and less immediately useful.  It makes a strong
assumption (that you have two sources that are independent -- i.e.,
totally uncorrelated), but the construction at the heart of their paper
is pretty simple, which might be of interest.
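To make the flavor of that construction concrete, here's a toy sketch
(Python, purely illustrative).  I believe the classic example of this
kind of two-source extractor is the inner-product construction (due to
Chor and Goldreich, if I remember right); I'm not claiming this is the
exact scheme in that paper, and the "weak sources" here are made up:

```python
# Toy two-source extractor: the GF(2) inner product of one sample from
# each of two INDEPENDENT weak sources.  If the sources are independent
# and each has enough min-entropy, the output bit is close to uniform.

import random

def inner_product_bit(x: int, y: int) -> int:
    """One extracted bit: parity of the bitwise AND of the two samples."""
    return bin(x & y).count("1") & 1

rng = random.Random(1)

# Two weak 16-bit sources, each with only 8 bits of min-entropy:
# one has its high byte stuck at 0xAB, the other its low byte stuck at 0xCD.
xs = [(0xAB << 8) | rng.getrandbits(8) for _ in range(10_000)]
ys = [(rng.getrandbits(8) << 16 >> 8) | 0xCD for _ in range(10_000)]

bits = [inner_product_bit(x, y) for x, y in zip(xs, ys)]
frac_ones = sum(bits) / len(bits)
print(f"fraction of 1s: {frac_ones:.3f}")   # should land near 0.5
```

Note that the independence of the two sources is doing all the work
here; feed both samples from one correlated source and the guarantee
evaporates.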

>Happily, it *appears* to confirm the value of the LFSR-based input
>mixing function.  Although the suggested construction in section 4.1 is
>different, and I haven't seen if the proof can be extended.

Well, I don't know.  I don't think I agree with that interpretation.

Let me give a little background about 2-universal hashing.  There is a
basic result about use of 2-universal hash functions, which says that
if you choose the seed K truly at random, then you can use h_K(X) to
extract uniform random bits from a non-uniform source X.  (Indeed, you
can even reveal K without harming the randomness of h_K(X).)  The proof
of this fact is usually known as the Leftover Hashing Lemma.
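To illustrate what the Leftover Hashing Lemma buys you, here's a toy
sketch using the simplest textbook 2-universal family I know of
(Carter-Wegman: h_{a,b}(x) = ((a*x + b) mod p) mod m, p prime, a != 0)
rather than the LFSR family; the source and parameters are made up:

```python
# Extraction with a 2-universal hash: pick the seed (a, b) truly at
# random, independent of the source.  The Leftover Hashing Lemma says
# h_{a,b}(X) is close to uniform whenever X has enough min-entropy --
# even if the seed is later revealed to the attacker.

import random
from collections import Counter

P = (1 << 61) - 1   # a Mersenne prime, comfortably above the input space
M = 16              # extract log2(16) = 4 bits per sample

def hash_cw(a: int, b: int, x: int) -> int:
    return ((a * x + b) % P) % M

rng = random.Random(7)
a = rng.randrange(1, P)   # the seed: uniform, chosen before seeing X
b = rng.randrange(P)

# A badly non-uniform source: 32-bit samples whose low 16 bits are
# always zero, i.e. 16 bits of min-entropy hiding in 32-bit values.
samples = [rng.getrandbits(16) << 16 for _ in range(100_000)]

counts = Counter(hash_cw(a, b, x) for x in samples)
freqs = [counts[v] / len(samples) for v in range(M)]
print(max(freqs), min(freqs))   # each bucket should sit near 1/16
```

The crucial point for the discussion below: the seed (a, b) is random
and unknown to the attacker when the source data is chosen.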

One of the standard constructions of a 2-universal hash function is
as a LFSR-like scheme, where the seed K is used to select the feedback
polynomial.  But notice that it is critical that the feedback polynomial
be chosen uniformly at random, in a way that is unpredictable to the
attacker, and kept secret until you receive data from the source.

What /dev/random does is quite different from the idea of 2-universal
hashing developed in the theory literature and recounted in [BST03].
/dev/random fixes a single feedback polynomial in advance, and publishes
it for the world to see.  The theorems about 2-universal hashing promise
nothing about use of a LFSR with a fixed feedback polynomial.


Re: Fortuna

2005-04-16 Thread David Wagner
linux wrote:
>3) Fortuna's design doesn't actually *work*.  The authors' analysis
>   only works in the case that the entropy seeds are independent, but
>   forgot to state the assumption.  Some people reviewing the design
>   don't notice the omission.

Ok, now I understand your objection.  Yup, this is a real objection.
You are right to ask questions about whether this is a reasonable assumption.

I don't know whether /dev/random makes the same assumption.  I suspect that
its entropy estimator is making a similar assumption (not exactly the same
one), but I don't know for sure.

I also don't know whether this is a realistic assumption to make about
the physical sources we currently feed into /dev/random.  That would require
some analysis of the physics of those sources, and I don't have the skills
it would take to do that kind of analysis.

>   Again, suppose we have an entropy source that delivers one fresh
>   random bit each time it is sampled.
>
>   But suppose that rather than delivering a bare bit, it delivers the
>   running sum of the bits.  So adjacent samples are either the same or
>   differ by +1.  This seems to me an extremely plausible example.
>
>   Consider a Fortuna-like thing with two pools.  The first pool is seeded
>   with n, then the second with n+b0, then the first again with n+b0+b1.
>   n is the arbitrary starting count, while b0 and b1 are independent
>   random bits.
>
>   Assuming that an attacker can see the first pool, they can find n.
>   After the second step, their uncertainty about the second pool is 1
>   bit, the value of b0.
>
>   But the third step is interesting.  The attacker can see the value of
>   b0+b1.  If the sum is 0 or 2, the value of b0 is determined uniquely.
>   Only in the case that b0+b1 = 1 is there uncertainty.  So we have
>   only *half* a bit of uncertainty (one bit, half of the time) in the
>   second pool.
[..]
>   I probably just don't have enough mathematical background, but I don't
>   currently know how to bound this leakage.

Actually, this example scenario is not a problem.  I'll finish the
analysis for you.  Suppose that the adversary can observe the entire
evolution of the first pool (its initial value, and all updates to it).
Assume the adversary knows n.  In one round (i.e., a pair of updates),
the adversary learns the value of b0 + b1 (and nothing more!).  In the
next round, the adversary learns b0' + b1' -- and so on.

How many bits of uncertainty have been added to the second pool in
each round?  With probability 1/2, the uncertainty of the second pool
remains unchanged.  With probability 1/2, the uncertainty increases by
exactly 1 bit.  This means there are two classes of updates, and both
classes are equally likely.

Suppose we perform 200 rounds of updates.  Then we can expect about
100 of these updates to be of the second class.  If the updates were
split evenly (50/50) between these two classes, the adversary would
have 100 bits of uncertainty about the second pool.  In general, we
expect somewhere near 100 bits of uncertainty -- sometimes a bit more,
sometimes a bit less, but the chances that it is a lot less than 100
bits of uncertainty are exponentially small.

Therefore, except for an event that occurs with exponentially small
probability, the adversary will be left with many bits of uncertainty
about the second pool.  So this kind of source should not pose a serious
problem for Fortuna, or for any two-pool solution.
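If you want to check the counting argument numerically, here's a quick
toy simulation (Python, obviously not kernel code): each round the
adversary learns b0 + b1, and the second pool gains a full bit of
uncertainty exactly when that sum is 1.

```python
# Simulate the running-sum source: per round, the adversary (who sees
# the first pool) observes b0 + b1; the sum is ambiguous about b0 only
# when it equals 1, in which case the second pool gains one bit.

import random

def uncertain_bits(rounds: int, rng: random.Random) -> int:
    """Bits of uncertainty about the second pool after `rounds` rounds."""
    bits = 0
    for _ in range(rounds):
        b0, b1 = rng.getrandbits(1), rng.getrandbits(1)
        if b0 + b1 == 1:   # ambiguous observation: b0 could be either value
            bits += 1
    return bits

rng = random.Random(42)
trials = [uncertain_bits(200, rng) for _ in range(1000)]
print(min(trials), sum(trials) / len(trials), max(trials))
# expect the average near 100, and even the worst trial far above zero
```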


If you want a better example of where the two-pool scheme completely
falls apart, consider this: our source picks a random bit, uses this
same bit the next two times it is queried, and then picks a new bit.
Its sequence of outputs will look like (b0,b0,b1,b1,b2,b2,..,).  If
we alternate pools, then the first pool sees the sequence b0,b1,b2,..
and the second pool sees exactly the same sequence.  Consequently, an
adversary who can observe the entire evolution of the first pool can
deduce everything there is to know about the second pool.  This just
illustrates that these multiple-pool solutions make some assumptions
about the time-independence of their sources, or at least that the bits
going into one pool don't have too much correlation with the bits going
into the other pool.
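Here is that failure mode made concrete, as a two-line Python check:
a source that repeats each fresh bit twice, with samples alternating
between the two pools, hands both pools identical input streams.

```python
# A source emitting b0, b0, b1, b1, ... with samples alternated between
# two pools: both pools receive exactly the same sequence, so an
# adversary who sees the first pool knows everything about the second.

import random

rng = random.Random(0)
fresh = [rng.getrandbits(1) for _ in range(1000)]
stream = [b for b in fresh for _ in range(2)]   # b0, b0, b1, b1, ...

pool_a = stream[0::2]    # samples sent to the first pool
pool_b = stream[1::2]    # samples sent to the second pool
print(pool_a == pool_b)  # True: zero extra uncertainty in the second pool
```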


Re: Why system call need to copy the date from the userspace before using it

2005-04-16 Thread David Wagner
Hacksaw  wrote:
>What I would expect the kernel to do is this:
>
>system_call_data_prep (userdata, size){ [...]
>  for each page from userdata to userdata+size
>  {
>   if the page is swapped out, swap it in
>   if the page is not owned by the user process, return -ENOWAYMAN
>   otherwise, lock the page
>  }   [...]

One challenge that might make this issue a little tricky is that
you have to handle double-indirection, where the kernel copies in
a buffer that includes a pointer to some other buffer that you then
have to copy in.  I think this comes up in some of the ioctl() calls.
Because only the guts of the ioctl() implementation knows the format of
the data structure, only it knows what system_call_data_prep() calls
would be needed.  So, everywhere that currently does copy_from_user()
would have to do system_call_data_prep().  (It wouldn't be sufficient
to call system_call_data_prep() once in some standardized way at the
start of each system call, and leave it at that.)
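To illustrate the double-indirection point, here's a toy model (Python,
with made-up addresses and a hypothetical struct layout; real ioctls
are C, of course): one flat copy of the argument struct cannot bring in
the nested buffer, because only code that knows the layout can find the
embedded pointer and issue the second copy.

```python
# Toy model of double-indirection in ioctl arguments.  "User memory" is
# a dict of address -> bytes.  The argument struct {u32 len; u64 buf}
# contains a pointer to a second user buffer, so the generic first copy
# is not enough -- the layout-aware handler must do a second copy.

import struct

user_memory = {
    0x1000: struct.pack("<IQ", 5, 0x2000),  # {len = 5, buf = 0x2000}
    0x2000: b"hello world",
}

def copy_from_user(addr: int) -> bytes:
    """Stand-in for a single user-to-kernel copy of one buffer."""
    return user_memory[addr]

def ioctl_handler(arg_addr: int) -> bytes:
    raw = copy_from_user(arg_addr)            # first copy: the struct itself
    length, buf_ptr = struct.unpack("<IQ", raw)
    data = copy_from_user(buf_ptr)            # second copy: only possible
    return data[:length]                      # here, where layout is known

print(ioctl_handler(0x1000))   # b'hello'
```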


Re: Fortuna

2005-04-16 Thread David Wagner
linux wrote:
>David Wagner wrote:
>>linux wrote:
>>> First, a reminder that the design goal of /dev/random proper is
>>> information-theoretic security.  That is, it should be secure against
>>> an attacker with infinite computational power.
>
>> I am skeptical.
>> I have never seen any convincing evidence for this claim, [..]
>
>I'm not sure which claim you're skeptical of.  The claim that it's
>a design goal, or the claim that it achieves it?

Oops!  Gosh, I was unclear, wasn't I?  Sorry about that.
I meant the latter claim.

I certainly agree that information-theoretic security is a stated goal
of /dev/random.  I'm just not certain that it achieves this goal.
(Caveat: I don't think that failing to achieve this goal is problematic.)

>Whether the goal is *achieved* is a different issue.  random.c tries
>pretty hard, but makes some concessions to practicality, relying on
>computational security as a backup.  (But suggestions as to how to get
>closer to the goal are still very much appreciated!)

Ok.

>In particular, it is theoretically possible for an attacker to exploit
>knowledge of the state of the pool and the input mixing transform to
>feed in data that permutes the pool state to cluster in SHA1 collisions
>(thereby reducing output entropy), or to use the SHA1 feedback to induce
>state collisions (therby reducing pool entropy).  But that seems to bring
>whole new meaning to the word "computationally infeasible", requiring
>first preimage solutions over probability distributions.

Well, wait a second.  You have to make up your mind about whether you
are claiming information-theoretic security, or claiming computational
security.  If the former, then this is absolutely an admissible attack.
There is nothing whatsoever wrong with this attack, from an
information-theoretic point of view.  On the other hand, if we are talking
about computational security, then I totally agree that this is a
(computationally) infeasible attack.

>Also, the entropy estimation may be flawed, and is pretty crude, just
>heavily derated for safety.  And given recent developments in keyboard
>skiffing, and wireless keyboard deployment, I'm starting to think that
>the idea (taken from PGP) of using the keyboard and mouse as an entropy
>source is one whose time is past.
>
>Given current processor clock rates and the widespread availability of
>high-resolution timers, interrupt synchronization jitter seems like
>a much more fruitful source.  I think there are many bits of entropy
>in the lsbits of the RDTSC time of interrupts, even from the periodic
>timer interrupt!  Even derating that to 0.1 bit per sample, that's still
>a veritable flood of seed material.

Makes sense.


As for your question about what one could do to achieve
information-theoretic security, there is a bunch of theoretical work
in the CS theory world on this subject (look up, e.g., "extractors").
Here is my summary about what is possible:

1) If you don't know anything about your source, and you don't
start with any entropy, then information-theoretically secure randomness
extraction is impossible -- at least in principle.  You pick any
deterministic algorithm for randomness extraction, and I will show you
a source for which that algorithm fails.
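Here's point (1) made concrete, as a toy Python sketch (the choice of
extractor is arbitrary, that's the whole point): fix any deterministic
extractor, say the parity of the input, and the adversary can supply a
source with n-1 bits of min-entropy on which its output is constant.

```python
# Why seedless deterministic extraction fails: against the (public)
# parity extractor, the adversary picks the source uniform over all
# even-parity n-bit strings -- (n-1) bits of min-entropy, yet the
# extractor's output is the constant 0.

import random

def extractor(x: int) -> int:
    """Some fixed, public, deterministic extractor (here: parity)."""
    return bin(x).count("1") & 1

def adversarial_sample(n: int, rng: random.Random) -> int:
    """Uniform over even-parity n-bit strings: n-1 bits of min-entropy."""
    x = rng.getrandbits(n - 1)        # n-1 truly random bits...
    return (x << 1) | extractor(x)    # ...plus a bit forcing even parity

rng = random.Random(3)
outputs = {extractor(adversarial_sample(16, rng)) for _ in range(1000)}
print(outputs)   # {0}: the "extracted" bit is completely predictable
```

The same game works against any other fixed algorithm; the adversary
just concentrates the source on one preimage set of the extractor.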

2) If you start with a short seed of uniformly distributed perfect
randomness, and you have a lower bound on the amount of entropy provided
by your source, then you can extract random bits in a way that is
provably information-theoretically secure.  Note that you don't have
to know anything about the distribution of the source, other than that
its (min-)entropy is not too small.  The simplest construction uses a
2-universal hash function keyed by the seed (its security is established
by the Leftover Hashing Lemma), but there are other constructions,
including a class of schemes known as "extractors".  This approach
does require a short seed of perfect true randomness for every chunk
of output you want to generate, though.

3) If you make certain assumptions about the source, you can extract
entropy in a way that is provably information-theoretically secure,
without needing the short seed.  However, the assumptions required are
typically fairly strong: e.g., that your source is completely memoryless;
that you have multiple sources that are totally independent (i.e.,
uncorrelated in any way); or that your source has a certain structure
(e.g., k of the n bits are uniformly random and independent, and the
other n-k bits are fixed in advance).  People are actively working to
push the boundaries in this category.

I'm not sure whether any of the above will be practically relevant.
They may be too theoretical for real-world use.  But if you're interested,
I could try to give you more information about any of these categories.
-
To unsubscribe from this list: send t

Re: Fortuna

2005-04-16 Thread David Wagner
linux wrote:
David Wagner wrote:
linux wrote:
 First, a reminder that the design goal of /dev/random proper is
 information-theoretic security.  That is, it should be secure against
 an attacker with infinite computational power.

 I am skeptical.
 I have never seen any convincing evidence for this claim, [..]

I'm not sure which claim you're skeptical of.  The claim that it's
a design goal, or the claim that it achieves it?

Oops!  Gosh, I was unclear, wasn't it?  Sorry about that.
I meant the latter claim.

I certainly agree that information-theoretic security is a stated goal
of /dev/random.  I'm just not certain that it achieves this goal.
(Caveat: I don't think that failing to achieve this goal is problematic.)

Whether the goal is *achieved* is a different issue.  random.c tries
pretty hard, but makes some concessions to practicality, relying on
computational security as a backup.  (But suggestions as to how to get
closer to the goal are still very much appreciated!)

Ok.

In particular, it is theoretically possible for an attacker to exploit
knowledge of the state of the pool and the input mixing transform to
feed in data that permutes the pool state to cluster in SHA1 collisions
(thereby reducing output entropy), or to use the SHA1 feedback to induce
state collisions (therby reducing pool entropy).  But that seems to bring
whole new meaning to the word computationally infeasible, requiring
first preimage solutions over probability distributions.

Well, wait a second.  You have to make up your mind about whether you
are claiming information-theoretic security, or claiming computational
security.  If the former, then this is absolutely an admissible attack.
There is nothing whatsoever wrong with this attack, from an
information-theoretic point of view.  On the other hand, if we are talking
about computational security, then I totally agree that this is a
(computationally) infeasible attack.

Also, the entropy estimation may be flawed, and is pretty crude, just
heavily derated for safety.  And given recent developments in keyboard
skiffing, and wireless keyboard deployment, I'm starting to think that
the idea (taken from PGP) of using the keyboard and mouse as an entropy
source is one whose time is past.

Given current processor clock rates and the widespread availability of
high-resolution timers, interrupt synchronization jitter seems like
a much more fruitful source.  I think there are many bits of entropy
in the lsbits of the RDTSC time of interrupts, even from the periodic
timer interrupt!  Even derating that to 0.1 bit per sample, that's still
a veritable flood of seed material.

Makes sense.


As for your question about what one could do to achieve
information-theoretic security, there is a bunch of theoretical work
in the CS theory world on this subject (look up, e.g., extractors).
Here is my summary about what is possible:

1) If you don't know anything about your source, and you don't
start with any entropy, then information-theoretically secure randomness
extraction is impossible -- at least in principle.  You pick any
deterministic algorithm for randomness extraction, and I will show you
a source for which that algorithm fails.

2) If you start with a short seed of uniformly distributed perfect
randomness, and you have a lower bound on the amount of entropy provided
by your source, then you can extract random bits in a way that is
provably information-theoretically secure.  Note that you don't have
to know anything about the distribution of the source, other than that
its (min-)entropy is not too small.  The simplest construction uses a
2-universal hash function keyed by the seed (its security is established
by the Leftover Hashing Lemma), but there are other constructions,
including a class of schemes known as extractors.  This approach
does require a short seed of perfect true randomness for every chunk
of output you want to generate, though.

3) If you make certain assumptions about the source, you can extract
entropy in a way that is provably information-theoretically secure,
without needing the short seed.  However, the assumptions required are
typically fairly strong: e.g., that your source is completely memoryless;
that you have multiple sources that are totally independent (i.e.,
uncorrelated in any way); or that your source has a certain structure
(e.g., k of the n bits are uniformly random and independent, and the
other n-k bits are fixed in advance).  People are actively working to
push the boundaries in this category.

I'm not sure whether any of the above will be practically relevant.
They may be too theoretical for real-world use.  But if you're interested,
I could try to give you more information about any of these categories.
-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Why system call need to copy the date from the userspace before using it

2005-04-16 Thread David Wagner
Hacksaw  wrote:
What I would expect the kernel to do is this:

system_call_data_prep (userdata, size){ [...]
  for each page from userdata to userdata+size
  {
   if the page is swapped out, swap it in
   if the page is not owned by the user process, return -ENOWAYMAN
   otherwise, lock the page
  }   [...]

One challenge that might make this issue a little tricky is that
you have to handle double-indirection, where the kernel copies in
a buffer that includes a pointer to some other buffer that you then
have to copy in.  I think this comes up in some of the ioctl() calls.
Because only the guts of the ioctl() implementation knows the format of
the data structure, only it knows what system_call_data_prep() calls
would be needed.  So, everywhere that currently does copy_from_user()
would have to do system_call_data_prep().  (It wouldn't be sufficient
to call system_call_data_prep() once in some standardized way at the
start of each system call, and leave it at that.)
-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Fortuna

2005-04-16 Thread David Wagner
linux wrote:
3) Fortuna's design doesn't actually *work*.  The authors' analysis
   only works in the case that the entropy seeds are independent, but
   forgot to state the assumption.  Some people reviewing the design
   don't notice the omission.

Ok, now I understand your objection.  Yup, this is a real objection.
You are right to ask questions about whether this is a reasonable assumption.

I don't know whether /dev/random makes the same assumption.  I suspect that
its entropy estimator is making a similar assumption (not exactly the same
one), but I don't know for sure.

I also don't know whether this is a realistic assumption to make about
the physical sources we currently feed into /dev/random.  That would require
some analysis of the physics of those sources, and I don't have the skills
it would take to do that kind of analysis.

>    Again, suppose we have an entropy source that delivers one fresh
>    random bit each time it is sampled.
>
>    But suppose that rather than delivering a bare bit, it delivers the
>    running sum of the bits.  So adjacent samples are either the same or
>    differ by +1.  This seems to me an extremely plausible example.
>
>    Consider a Fortuna-like thing with two pools.  The first pool is seeded
>    with n, then the second with n+b0, then the first again with n+b0+b1.
>    n is the arbitrary starting count, while b0 and b1 are independent
>    random bits.
>
>    Assuming that an attacker can see the first pool, they can find n.
>    After the second step, their uncertainty about the second pool is 1
>    bit, the value of b0.
>
>    But the third step is interesting.  The attacker can see the value of
>    b0+b1.  If the sum is 0 or 2, the value of b0 is determined uniquely.
>    Only in the case that b0+b1 = 1 is there uncertainty.  So we have
>    only *half* a bit of uncertainty (one bit, half of the time) in the
>    second pool.
> [..]
>    I probably just don't have enough mathematical background, but I don't
>    currently know how to bound this leakage.

Actually, this example scenario is not a problem.  I'll finish the
analysis for you.  Suppose that the adversary can observe the entire
evolution of the first pool (its initial value, and all updates to it).
Assume the adversary knows n.  In one round (i.e., a pair of updates),
the adversary learns the value of b0 + b1 (and nothing more!).  In the
next round, the adversary learns b0' + b1' -- and so on.

How many bits of uncertainty have been added to the second pool in
each round?  With probability 1/2, the uncertainty of the second pool
remains unchanged.  With probability 1/2, the uncertainty increases by
exactly 1 bit.  This means there are two classes of updates, and both
classes are equally likely.

Suppose we perform 200 rounds of updates.  Then we can expect about
100 of these updates to be of the second class.  If the updates were
split evenly (50/50) between these two classes, the adversary would
have 100 bits of uncertainty about the second pool.  In general, we
expect somewhere near 100 bits of uncertainty -- sometimes a bit more,
sometimes a bit less, but the chances that it is a lot less than 100
bits of uncertainty are exponentially small.

Therefore, except for an event that occurs with exponentially small
probability, the adversary will be left with many bits of uncertainty
about the second pool.  So this kind of source should not pose a serious
problem for Fortuna, or for any two-pool solution.
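
The concentration claim is easy to check empirically.  In this sketch (the
tiny LCG and the acceptance bounds are my own illustrative choices, not part
of the analysis above), each round draws the pair (b0, b1); the attacker
watching the first pool learns b0+b1, so the round adds a bit of uncertainty
about the second pool exactly when b0+b1 = 1:

```c
#include <assert.h>
#include <stdint.h>

/* Tiny deterministic LCG so the experiment is reproducible. */
static uint32_t lcg_state = 1u;

static uint32_t lcg_bit(void)
{
    lcg_state = lcg_state * 1664525u + 1013904223u;
    return (lcg_state >> 16) & 1u;
}

/* Count how many of `rounds` update pairs leave the attacker uncertain:
 * the pair (b0, b1) hides one bit of the second pool iff b0 + b1 == 1. */
static int residual_uncertainty_bits(int rounds)
{
    int bits = 0;
    for (int i = 0; i < rounds; i++) {
        uint32_t b0 = lcg_bit();
        uint32_t b1 = lcg_bit();
        if (b0 + b1 == 1)
            bits++;
    }
    return bits;
}
```

Over 200 rounds the count lands near 100; landing far below that would
require an exponentially unlikely streak of b0 = b1 pairs.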


If you want a better example of where the two-pool scheme completely
falls apart, consider this: our source picks a random bit, uses this
same bit the next two times it is queried, and then picks a new bit.
Its sequence of outputs will look like (b0,b0,b1,b1,b2,b2,..,).  If
we alternate pools, then the first pool sees the sequence b0,b1,b2,..
and the second pool sees exactly the same sequence.  Consequently, an
adversary who can observe the entire evolution of the first pool can
deduce everything there is to know about the second pool.  This just
illustrates that these multiple-pool solutions make some assumptions
about the time-independence of their sources, or at least that the bits
going into one pool don't have too much correlation with the bits going
into the other pool.
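
That failure mode fits in a few lines (the pool size and the toy bit
pattern below are arbitrary):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

enum { NBITS = 64 };

/* The pathological source: each fresh bit is emitted twice in a row,
 * giving the output sequence b0,b0,b1,b1,b2,b2,...  (out has 2*NBITS) */
static void duplicating_source(const uint8_t *fresh, uint8_t *out)
{
    for (int i = 0; i < NBITS; i++) {
        out[2 * i]     = fresh[i];
        out[2 * i + 1] = fresh[i];
    }
}

/* Round-robin the samples into two pools, as a two-pool design would. */
static void split_pools(const uint8_t *out, uint8_t *pool1, uint8_t *pool2)
{
    for (int i = 0; i < NBITS; i++) {
        pool1[i] = out[2 * i];       /* even-numbered samples */
        pool2[i] = out[2 * i + 1];   /* odd-numbered samples  */
    }
}
```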


Re: Fortuna

2005-04-16 Thread David Wagner
Jean-Luc Cooke wrote:
> The part which suggests choosing an irreducible poly and a value a in the
> preprocessing stage ... last I checked the value for a and the poly need to
> be secret.  How do you generate poly and a, Catch-22?  Perhaps I'm missing
> something and someone can point it out.

I don't think you're missing anything.  What you say matches my
understanding as well.


Re: Fortuna

2005-04-15 Thread David Wagner
linux wrote:
>/dev/urandom depends on the strength of the crypto primitives.
>/dev/random does not.  All it needs is a good uniform hash.

That's not at all clear.  I'll go farther: I think it is unlikely
to be true.

If you want to think about cryptographic primitives being arbitrarily
broken, I think there will be scenarios where /dev/random is insecure.

As for what you mean by "good uniform hash", I think you'll need to
be a bit more precise.

>Do a bit of reading on the subject of "unicity distance".

Yes, I've read Shannon's original paper on the subject, as well
as many other treatments.

I stand by my comments above.


Re: Fortuna

2005-04-15 Thread David Wagner
Theodore Ts'o  wrote:
>With a properly set up set of init scripts, /dev/random is initialized
>with seed material for all but the initial boot [...]

I'm not so sure.  Someone posted on this mailing list several months
ago examples of code in the kernel that looks like it could run before
those init scripts are run, and that looks like it might well be using
/dev/*random before it has been seeded.

I never saw any response.

>It fundamentally assumes that crypto
>primitives are secure (when the recent break of MD4, MD5, and now SHA1
>should have been a warning that this is a Bad Idea (tm)),

It looks to me like the recent attacks on MD4, MD5, and SHA1 do not
endanger /dev/random.  Those attacks affect collision-resistance, but
it looks to me like the security of /dev/random relies on other properties
of the hash function (e.g., pseudorandomness, onewayness) which do not
seem to be threatened by these attacks.  But of course this is a complicated
business, and maybe I overlooked something about the way /dev/random uses
those hash functions.  Did I miss something?

As for which threat models are realistic, I consider it more likely
that my box will be hacked in a way that affects /dev/random than that
SHA1 will be broken in a way that affects /dev/random.

>In addition, Fortuna is profligate with entropy, [...]

Yup.


Re: Fortuna

2005-04-15 Thread David Wagner
Jean-Luc Cooke  wrote:
>Info-theoretic randomness is a strong desire of some/many users, [..]

I don't know.  Most of the time that I've seen users say they want
information-theoretic randomness, I've gotten the impression that those
users didn't really understand what information-theoretic randomness means,
and their applications usually didn't need information-theoretic randomness
in the first place.

As for those who reject computational security because of its
unproven nature, they should perhaps be warned that any conjectured
information-theoretic security of /dev/random is also unproven.

Personally, I feel the issue of information-theoretic security
is a distraction.  Given the widespread confusion about what
information-theoretic security means, I certainly sympathize with why
Jean-Luc Cooke left in the existing entropy estimation technique as a
way of side-stepping the whole argument.

Anyway, the bottom line is I don't consider "information-theoretic
arguments" as a very convincing reason to reject Cooke's proposal.


Re: Fortuna

2005-04-15 Thread David Wagner
>First, a reminder that the design goal of /dev/random proper is
>information-theoretic security.  That is, it should be secure against
>an attacker with infinite computational power.

I am skeptical.
I have never seen any convincing evidence for this claim,
and I suspect that there are cases in which /dev/random fails
to achieve this standard.

And it seems I am not the only one.  See, e.g., Section 5.3 of:
http://eprint.iacr.org/2005/029

Fortunately, it doesn't matter whether /dev/random provides
information-theoretic security.  I have reasonable confidence that
it provides computational security, and that is all that applications
need.


Re: Fortuna

2005-04-15 Thread David Wagner
Matt Mackall  wrote:
>While it may have some good properties, it lacks
>some that random.c has, particularly robustness in the face of failure
>of crypto primitives.

It's probably not a big deal, because I'm not worried about the
failure of standard crypto primitives, but--

Do you know of any analysis to back up the claim that /dev/random
will be robust in the failure of crypto primitives?  I have never
seen anyone try to do such an analysis, but maybe you know of something
that I don't.


Re: seccomp for 2.6.11-rc1-bk8

2005-02-25 Thread David Wagner
Andrea Arcangeli  wrote:
>On Sun, Jan 23, 2005 at 07:34:24AM +0000, David Wagner wrote:
>> [...Ostia...]  The jailed process inherits an open file
>> descriptor to its jailor, and is only allowed to call read(), write(),
>> sendmsg(), and recvmsg().  [...]
>
>Why to call sendmsg/recvmsg when you can call read/write anyway?

Because sendmsg() and recvmsg() allow passing of file descriptors,
and read() and write() do not.  For some uses of this kind of jail,
the ability to pass file descriptors to/from your master is a big deal.
It enables significant new uses of seccomp.  Right now, the only way a
master can get a fd to the jail is to inherit that fd across fork(),
but this isn't as flexible and it restricts the ability to pass fds
interactively.
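
For readers who haven't used it, the fd-passing mechanism in question is
SCM_RIGHTS ancillary data on a UNIX-domain socket; roughly (a minimal
sketch, error handling trimmed):

```c
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* Send one file descriptor over a UNIX-domain socket.  This is the part
 * that plain read()/write() cannot do: the descriptor rides in the
 * SCM_RIGHTS control message, not in the data bytes. */
static int send_fd(int sock, int fd)
{
    char byte = 0;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    char cbuf[CMSG_SPACE(sizeof(int))];
    memset(cbuf, 0, sizeof cbuf);
    struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                          .msg_control = cbuf, .msg_controllen = sizeof cbuf };
    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = SOL_SOCKET;
    cm->cmsg_type  = SCM_RIGHTS;
    cm->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cm), &fd, sizeof(int));
    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}

/* Receive a descriptor sent by send_fd(); returns the new fd or -1. */
static int recv_fd(int sock)
{
    char byte;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    char cbuf[CMSG_SPACE(sizeof(int))];
    struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                          .msg_control = cbuf, .msg_controllen = sizeof cbuf };
    if (recvmsg(sock, &msg, 0) != 1)
        return -1;
    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    if (cm == NULL || cm->cmsg_type != SCM_RIGHTS)
        return -1;
    int fd;
    memcpy(&fd, CMSG_DATA(cm), sizeof(int));
    return fd;
}
```

In a jail built this way, the master would sit on the other end of a
socketpair and hand descriptors back to the jailed process like this.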

Andrea, I understand that you don't have any use for sendmsg()/recvmsg()
in your Cpushare application.  I'm thinking about this from the point of
view of other potential users of seccomp.  I believe there are several
other applications which might benefit from seccomp, if only it were
to allow fd-passing.  If we're going to deploy this in the mainstream
kernel, maybe it makes sense to enable other uses as well.  And that's
why I suggested allowing sendmsg() and recvmsg().

It might be worth considering.

[Sorry for the very late reply; I've been occupied with other things
since your last reply.]


Re: [PATCH] BSD Secure Levels: claim block dev in file struct rather than inode struct, 2.6.11-rc2-mm1 (3/8)

2005-02-08 Thread David Wagner
>The attack is to hardlink some tempfile name to some file you want
>over-written.  This usually involves just a little bit of work, such as
>recognizing that a given root cronjob uses an unsafe predictable filename
>in /tmp (look at the Bugtraq or Full-Disclosure archives, there's plenty).
>Then you set a little program that sleep()s till a few seconds before
>the cronjob runs, does a getpid(), and then sprays hardlinks into the
>next 15 or 20 things that mktemp() will generate...

Got it.  Very good -- now I see.  Thanks for the explanation.
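
The quoted attack is easy to demonstrate harmlessly in a scratch directory
(file names invented; this assumes the current directory is writable and
both names live on one filesystem):

```c
#include <assert.h>
#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

/* Returns 1 if writing to the "predictable tempfile" name clobbered the
 * victim file through the hard link, 0 or -1 otherwise. */
static int demo_hardlink_clobber(void)
{
    /* the file the attacker wants overwritten */
    int fd = open("victim.txt", O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        return -1;
    if (write(fd, "secret", 6) != 6)
        return -1;
    close(fd);

    /* attacker step: hard-link the predictable temp name to the victim */
    unlink("predictable.tmp");
    if (link("victim.txt", "predictable.tmp") != 0)
        return -1;

    /* "root cronjob" step: blindly truncates and writes its tempfile */
    fd = open("predictable.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        return -1;
    if (write(fd, "clobbered", 9) != 9) {
        close(fd);
        return -1;
    }
    close(fd);

    /* both names are the same inode, so the victim now reads back changed */
    char buf[16] = { 0 };
    fd = open("victim.txt", O_RDONLY);
    ssize_t n = read(fd, buf, sizeof buf - 1);
    close(fd);
    unlink("victim.txt");
    unlink("predictable.tmp");
    return (n == 9 && buf[0] == 'c') ? 1 : 0;
}
```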


Re: [PATCH] BSD Secure Levels: claim block dev in file struct rather than inode struct, 2.6.11-rc2-mm1 (3/8)

2005-02-07 Thread David Wagner
>For those systems that have everything on one big partition, you can often
>do stuff like:
>
>ln /etc/passwd /tmp/filename_generated_by_mktemp
>
>and wait for /etc/passwd to get clobbered by a cron job run by root...

How would /etc/passwd get clobbered?  Are you thinking that a tmp
cleaner run by cron might delete /tmp/whatever (i.e., delete the hardlink
you created above)?  But deleting /tmp/whatever is safe; it doesn't affect
/etc/passwd.  I'm guessing I'm probably missing something.


Re: seccomp for 2.6.11-rc1-bk8

2005-01-22 Thread David Wagner
Chris Wright  wrote:
>* David Wagner ([EMAIL PROTECTED]) wrote:
>> There is a simple tweak to ptrace which fixes that: one could add an
>> API to specify a set of syscalls that ptrace should not trap on.  To get
>> seccomp-like semantics, the user program could specify {read,write}, but
>> if the user program ever wants to change its policy, it could change that
>> set.  Solaris /proc (which is what is used for tracing) has this feature.
>> I coded up such an extension to ptrace semantics a long time ago, and
>> it seemed to work fine for me, though of course I am not a ptrace expert.
>
>Hmm, yeah, that'd be nice.  That only leaves the issue of tracer dying
>(say from that crazy oom killer ;-).

Yes, another thing I implemented was a ptrace option which causes the
child to be slaughtered if the parent dies for any reason.  I could dig
up the code, but I don't recall it being very hard.  This was ages ago
(a 2.0.x kernel) and I have no idea what might have changed.  Also, I am
definitely not a guru on kernel internals, so it is always possible I
missed something.
But, at least on the surface this doesn't seem hard to implement.

A third thing I implemented was an option which would cause ptrace() to be
inherited across forks.  The way that strace does this (last I looked)
is an unreliable abomination: when it sees a request to call fork(), it
sets a breakpoint at the next instruction after the fork() by re-writing
the code of the parent, then when that breakpoint triggers it attaches to
the child, restores the parent's code, and lets them continue executing.
This is icky, and I have little confidence in its security to prevent
children from escaping a ptrace() jail, so I added a feature to ptrace()
that remedies the situation.

Anyway, back to the main topic: ptrace() vs seccomp.  I think one
plausible reason to prefer some mechanism that allows user level to
specify the allowed syscall set is that it might provide more flexibility.
What if 6 months from now we discover that we really should have enabled
one more syscall in seccomp to accommodate other applications?

At the same time, I truly empathize with Andrea's position that something
like seccomp ought to be a lot easier to verify correct than ptrace().
I think several people here are underestimating the importance of
clean design.  ptrace() is, frankly, a godawful mess, and I am skeptical
of the idea that you can take a godawful mess, audit it carefully, and
call it secure -- that seems unlikely to ever lead to the same level of
assurance that you can get with a much cleaner design.
cleaner design.  (This business of overloading as a means of sending
ptrace events to user level was in retrospect probably a bad design
decision, for instance.  See, e.g., Section 12 of my MS thesis for more.
http://www.cs.berkeley.edu/~daw/papers/janus-masters.ps)  Given this,
I can see real value in seccomp.

Perhaps there is a compromise position.  What if one started from seccomp,
but then extended it so the set of allowed syscalls can be specified by
user level?  This would push policy to user level, while retaining the
attractive simplicity and ease-of-audit properties of the seccomp design.
Does something like this make sense?

Let me give you some idea of new applications that might be enabled
by this kind of functionality.  One cool idea is a 'delegating
architecture' for jails.  The jailed process inherits an open file
descriptor to its jailor, and is only allowed to call read(), write(),
sendmsg(), and recvmsg().  If the jailed process wants to interact
with the outside world, it can send a request to its jailor to this
effect.  For instance, suppose the jailed process wants to create a
file called "/tmp/whatever", so it sends this request to the jailor.
The jailor can decide whether it wants this to be allowed.  If it is
to be allowed, the jailor can create this file and transfer a file
descriptor to the jailed process using sendmsg().  Note that this
mechanism allows the jailor to completely virtualize the system call
interface; for instance, the jailor could transparently instead create
"/tmp/jail17/whatever" and return a fd to it to the jailed process,
without the jailed process being any the wiser.  (For more on this,
see http://www.stanford.edu/~talg/papers/NDSS04/abstract.html and
http://www.cs.jhu.edu/~seaborn/plash/plash.html)

So this is one example of an application that is enabled by adding
recvmsg() to the set of allowed syscalls.  When it comes to the broader
question of seccomp vs ptrace(), I don't know what strategy makes most
sense for the Linux kernel, but I hope these ideas help give you some
idea of what might be possible and how these mechanisms could be used.


Re: seccomp for 2.6.11-rc1-bk8

2005-01-21 Thread David Wagner
Chris Wright  wrote:
>Only difference is in number of context switches, and number of running
>processes (and perhaps ease of determining policy for which syscalls
>are allowed).  Although it's not really seccomp, it's just restricted
>syscalls...

There is a simple tweak to ptrace which fixes that: one could add an
API to specify a set of syscalls that ptrace should not trap on.  To get
seccomp-like semantics, the user program could specify {read,write}, but
if the user program ever wants to change its policy, it could change that
set.  Solaris /proc (which is what is used for tracing) has this feature.
I coded up such an extension to ptrace semantics a long time ago, and
it seemed to work fine for me, though of course I am not a ptrace expert.
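
The proposed tweak amounts to a set-membership check at syscall entry; as
a bitmask sketch (the syscall numbers and names below are invented for
illustration, not the real Linux ABI, and the real change would live in
the ptrace syscall-entry path):

```c
#include <assert.h>
#include <stdint.h>

/* Toy syscall numbers for illustration only. */
enum { TOY_READ = 0, TOY_WRITE = 1, TOY_OPEN = 2, TOY_MAX = 64 };

typedef uint64_t syscall_set;   /* one bit per syscall number */

static syscall_set set_add(syscall_set s, int nr)
{
    return s | (1ull << nr);
}

/* The check ptrace would perform at syscall entry: trap unless the
 * tracer has put this syscall in the no-trap set.  Passing {read,write}
 * gives seccomp-like behavior; the tracer can widen the set later. */
static int should_trap(syscall_set no_trap, int nr)
{
    return nr < 0 || nr >= TOY_MAX || !(no_trap & (1ull << nr));
}
```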

I don't know whether ptrace + this tweak is a better idea than seccomp.
It is just another option out there that achieves similar functionality.


Re: [PATCH] more SAK stuff

2001-07-06 Thread David Wagner

>More interestingly, it changes the operation of SAK in two ways:
>(a) It does less, namely will not kill processes with uid 0.

I think this is bad for security.

(I assume you meant euid 0, not ruid 0.  Using the real uid
for access control decisions is a very odd thing to do.)



Re: [PATCH] User chroot

2001-06-27 Thread David Wagner

Jesse Pollard  wrote:
>2. Any penetration is limited to what the user can access.

Sure, but in practice, this is not a limit at all.

Once a malicious party gains access to any account on your
system (root or non-root), you might as well give up on all
but the most painstakingly careful configurations.  That's why
chroot is potentially valuable.



Re: [PATCH] User chroot

2001-06-26 Thread David Wagner

Mohammad A. Haque wrote:
>Why do this in the kernel when it's available in userspace?

Because the userspace implementations aren't equivalent.
In particular, it is not so easy for them to enforce the following
restriction:
  (*) If a non-root user requested the chroot, then setuid/setgid
  bits won't have any effect under the new root.
The proposed kernel patch respects (*), but I'm not aware of any
user-level application that ensures (*) is followed.

(Also, there is the small matter that the user-level implementations
are only usable by root, or are setuid root.  The latter is only a
minor difference, though, IMHO.)


