On Thu, Dec 22, 2011 at 2:20 PM, Mark Burgess <[email protected]> wrote:
> I like your analogy of the poisoned stream.  :-)  However, taking over a
> properly maintained signature is a much smaller vector than gaining access
> to the headwaters (especially if you have several people manning the
> pumps).  If the signing private key is password protected itself and kept
> off the server except for changes, then the attacker has no control over the
> clients that respect the key.  I would presume that this is why package
> repositories and software distribution/update systems have been so
> successful at keeping their rivers pure.  I am disappointed that you view it
> differently.

Hmm.

I suppose that another way to look at this is in analyzing how input
gets to the "central server."

It is common for the input to come as the output of an SCM system.

Thus, we have...

- Administrative Users
   who have rights to check updates into the SVN/Git/Mercurial/... repository

Once they generate and approve a release...

- Golden Server checks out the latest configuration, from that same repository

If the Golden Server is merely plucking from a stream of SCM output,
then attacks against the data may be reframed as attacks against the
SCM, and protection from attack may be reframed as putting protections
surrounding who is allowed to check changes into the SCM.
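One way to enforce that at the SCM boundary is a server-side hook that refuses unsigned commits. A minimal sketch, assuming a Git repository whose administrative users GPG-sign their commits (the paths and policy here are illustrative, not anything CFEngine or Puppet ships):

```shell
#!/bin/sh
# Hypothetical pre-receive hook on the central Git repository:
# reject any pushed ref whose commits lack a valid GPG signature
# from a key in the server's keyring.  Illustrative only.
while read oldrev newrev refname; do
    # For a brand-new branch, oldrev is all zeros; list from the root instead.
    case "$oldrev" in
        0000000000000000000000000000000000000000) range="$newrev" ;;
        *) range="$oldrev..$newrev" ;;
    esac
    for commit in $(git rev-list "$range"); do
        if ! git verify-commit "$commit" >/dev/null 2>&1; then
            echo "rejected: $commit is not signed by an authorized key" >&2
            exit 1
        fi
    done
done
```

With something like that in place, "who may poison the stream" reduces to "who holds a trusted signing key," which is an auditable question.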

That is quite a different thing from the alternative scenario, where
policy scripts are injected onto the "golden servers" in a manual
fashion.  In *that* case, you have no way of evaluating, a priori,
whether configuration changes were authorized or not; you need to
harden the "golden servers" to a massive degree.

In the "check out from SCM" case, the golden server can, in some
senses, be locked down further, as you could, in principle, lock out
the ability to login; the just does a periodic "svn update" or "git
pull".

Keeping things under control, with changes documented by virtue of the
SCM system, where you can always track down who made what change and
when, and where you ought, in principle, to be able to rebuild the
golden server in 30 seconds via "git clone", should be The Better
Answer for these sorts of processes.
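That 30-second rebuild is essentially a one-liner (repository URL and target path are illustrative):

```shell
# Rebuild the golden server's policy tree from scratch.  Everything of
# value lives in the SCM, so this is the whole recovery procedure.
git clone git://scm.example.com/policy.git /var/cfengine/masterfiles
```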

> The difference between Puppet and CFEngine is who owns the information on
> which decisions are made. In CFEngine, nodes are basically autonomous and
> each node controls its own information and its own decisions, based on its
> private view of the environment. The nodes can pull down data/information
> from a server (choose to drink from the poison), but they are under no
> obligation to trust it or use it. There is strong (crypto) authentication
> (ssh style) that maximizes the likelihood of authenticity, but ultimately if
> upstream is poison, a node cannot really tell.

That latter bit is the "poisonous" part...

CFEngine may make the nodes more autonomous, in principle, but if the
policy they feed on comes from a possibly-poisoned source, that
poisons the destinations too.

And if the upstream has been poisoned, the downstream can only "tell"
if the downstream portions have some independent intelligence.  In
effect, they would need to have local policy that *isn't* fed from
upstream.  And that implies you need to have some additional way to
get the local/downstream policy updated.  That seems like a recursive
problem to me, mostly amounting to giving up on centralizing policy
management.

> Absolutely. Every host has complete control of what it wants to use/reject.

But for that statement to be true, you need to have some separate
policy on the host to evaluate what upstream material it will
use/reject.  Theoretically plausible, but needing generous doses of
unobtanium, as far as I can see.
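The closest practical approximation of that "separate policy" I can see is a host-side check that is *not* itself fed from upstream: a public key installed on each node out-of-band, against which pulled policy is verified before being adopted. A sketch, with all file names and paths being illustrative rather than any actual CFEngine mechanism:

```shell
#!/bin/sh
# Verify a detached signature on the downloaded policy bundle against a
# public key that was installed on this host out-of-band.  If upstream
# has been poisoned but the attacker cannot forge the signature, the
# host simply keeps running its last-known-good policy.
set -e
if gpg --homedir /var/local/trust \
       --verify policy.tar.gz.sig policy.tar.gz; then
    tar -xzf policy.tar.gz -C /var/cfengine/inputs
else
    echo "policy signature invalid; keeping current policy" >&2
    exit 1
fi
```

But note the recursion: the verification script and the trusted key themselves have to be maintained by some channel other than the one being verified.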
_______________________________________________
Help-cfengine mailing list
[email protected]
https://cfengine.org/mailman/listinfo/help-cfengine
