>Hi Ryan, thanks for bringing this up!

Sorry for the delay here -- I was out, and the last message I saw was that
my post was in moderation...

>
>On Wed, Aug 23, 2017, at 03:31 PM, Ryan Barry wrote:
>>
>> /home
>> /opt
>> /var
>> /var/log
>> /var/log/audit
>
>As I understand it, the NIST-800 specification was designed for a "traditional"
>system managed via yum.  I imagine they did a default RHEL7 install and
>looked at hardening on top of that.

I'd imagine the same. That said, oVirt Node is also not managed by yum, and
the specific request there was still to have separate filesystems. The
theory is that a runaway process or an attacker who fills one of the
partitions can't lock out the entire system or block the logging of activity.

In that sense, I suspect that most of the guidelines still apply.
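
To make the concern concrete (the paths are just the ones from the spec
excerpt above): if everything below reports the same filesystem, whatever
fills one of those paths denies space to all of them.

```
# If all of these land on the same device, a single runaway writer can
# exhaust space for every one of them -- which is exactly what the
# guideline is trying to prevent:
df -h / /var /var/log /var/log/audit /home
```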

>
>The ostree model is in contrast *very* prescriptive about the system layout.
>It's different from how things worked before, but I think still compatible
>enough; and worth the change.

We did something similar in vintage oVirt Node, so I completely understand
-- we saw a large number of problems with some system utilities not playing
nice with bind mounts and redirection/symlinks on a ro root, but that's a
definite tangent.

>There's some more info here:
>https://ostree.readthedocs.io/en/latest/manual/adapting-existing/
>But this boils down to: every mount point you've listed above actually lives
>underneath /var.  Stated even more strongly, *all local state* lives under
>/var or /etc, and everything else appears immutable (though rpm-ostree can
>mutate it).

The risk the NIST guidelines address is still /var being filled, causing a
denial of service, and that definitely looks like it would still apply here.

While I'm aware that the primary use case of Atomic is containers, it's
entirely possible that a container with a volume mount or an extreme amount
of logging could fill /var and deny logging to other applications.
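
As a deliberately hostile sketch of what I mean (image name and paths are
placeholders): an anonymous VOLUME lands under /var/lib/docker by default,
so an unbounded writer inside a container competes directly with /var/log
for space unless /var/log is its own filesystem.

```
# Illustrative only -- fills the volume (and the filesystem backing
# /var/lib/docker) until ENOSPC:
docker run --rm -v /data fedora \
    sh -c 'dd if=/dev/zero of=/data/fill bs=1M || true'
```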

>
>>
>> (ideally with any 'persistent' data like the rpmdb relocated off of /var,
>
>Indeed:
>
>```
>$ rpm-ostree status
>State: idle
>Deployments:
>  atomicws:fedora/x86_64/workstation
>                   Version: 26.64 (2017-08-23 07:44:49)
>                BaseCommit: 73f5c6bfa0365f4170921b95ae420439f2f904564c7bdb12680dc1c8ddd47986
>$ ls -al /var/lib/rpm
>lrwxrwxrwx. 1 root root 19 Apr 18 15:29 /var/lib/rpm -> ../../usr/share/rpm
>```

This looks very familiar ;)

>
>>  with the contents of /var[/*] being the same across all ostree instances,
>> so logs are not lost if users need to roll back).
>
>Yep, that's also enforced.  ostree (and rpm-ostree) will never touch /var,
>whether upgrading or rolling back.  The only twist here is that we generate
>systemd tmpfiles.d entries - so for example `rpm-ostree install postgresql`
>will cause some directories to be generated on the *next* boot.
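
(For anyone who hasn't run into these: a tmpfiles.d snippet looks roughly
like the following -- the paths, mode, and ownership here are only
illustrative, not what rpm-ostree actually emits for postgresql.)

```
# tmpfiles.d(5) format: type  path  mode  user  group  age
d  /var/lib/pgsql  0700  postgres  postgres  -
d  /var/log/pgsql  0700  postgres  postgres  -
```
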
>
>Related to all of this, rpm-ostree for example will cleanly reject
>installing RPMs that do things like have content in /home:
>https://github.com/projectatomic/rpm-ostree/issues/233
>
>> In my testing, Atomic seems to only take ~3GB of the volume group when
>> installed, though I understand that the remainder of the volume group is
>> often used for Docker image storage. We performed a conversion to a
>> NIST-800 layout as part of an update on oVirt Node, but we were fortunate
>> enough to be using lvmthin, so we didn't need to worry too much about it,
>> but I'm not sure how this would be done on Atomic. I know that /var was
>> added recently, so some shuffling must be possible, but I haven't looked
>> into the details of how that was performed.
>
>Yep, it's ironic that it took so long for
>https://github.com/ostreedev/ostree/pull/859
>to land - it really helps complete the picture of having basically all
>local system state easily put onto separate storage, consistently backed
>up, etc.
>
>> Additionally, getting as close as possible to full STIG compliance would
>> be ideal. I see that atomic supports scanning containers running on Atomic
>> hosts, but I'm not sure what the actual status of the host itself is.
>
>I think we can do that more easily now that the above PR landed; however,
>it seems likely to me that the guidelines need to be updated to account
>for things like /home → /var/home.

I'd agree here -- the NIST guidelines themselves are pretty opaque, and are
only one part of STIG compliance, but I'll seek out someone who can
confirm/deny this.

oVirt Node uses a very similar deployment model:

Vintage oVirt Node laid down a LiveCD image on the filesystem, with all
persistent data in /config, bind-mounted across where necessary. The system
could not be modified in place (updates were written to a second partition
which was then booted from in an A/B fashion).

Current oVirt Node ships as a squashfs which is extracted onto a new thin
LVM LV, with essential data (/etc, mostly) copied across as needed -- all
system state goes into /var, which is kept the same across all images.
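
Very roughly, and with names invented purely for illustration (this is not
the actual tooling), the flow looks like:

```
# Sketch only: one new thin LV per image, with /var shared across images
lvcreate -V 10G -T onn/pool -n ovirt-node-new
mkfs.xfs /dev/onn/ovirt-node-new
mount /dev/onn/ovirt-node-new /mnt/new-image
unsquashfs -f -d /mnt/new-image ovirt-node-image.squashfs
rsync -a /etc/ /mnt/new-image/etc/    # carry configuration forward
# /var is a separate LV mounted by every image, so logs and other state
# survive a rollback to the previous image.
```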

I'm not sure that the NIST guidelines really applied to us either, but it's
easier to be compliant than to change the guidelines.


>
>Also unfortunately for RHEL7, systemd refusing to mount on symlinks
>is fairly problematic.  See: https://github.com/systemd/systemd/pull/6293
>But administrators can work around that by using /var/home as the mount
>point.
>
>Anyways, so I think the current F26AH's support for /var as a mount
>point covers most of this; the tricky part is probably updating the document
>to account for the ostree/AH differences?

Updating the document is certainly a solution, though I expect that
/var/log and /var/log/audit will need to be separate volumes, at least. I
would also not be surprised if /opt and /home needed to be separate as well,
so that docker containers with a VOLUME could not maliciously fill those and
simultaneously fill /var.
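
Concretely, I'd picture fstab entries along these lines (device names and
options are illustrative; keeping the real mount points under /var also
sidesteps the mount-on-symlink issue above):

```
/dev/atomicos/log    /var/log        xfs  defaults               0 0
/dev/atomicos/audit  /var/log/audit  xfs  defaults               0 0
/dev/atomicos/home   /var/home       xfs  defaults,nodev,nosuid  0 0
```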

Is this something we can help with?

Outside of that, "full" STIG compliance (minus bootloader locking and some
other finicky bits -- cloud-init can handle most of the edge cases for
users who want the whole shebang) is another target. I've seen some work on
openscap for Atomic, but I haven't tried a run yet to see what the status
is...
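
For reference, the kind of run I have in mind is the stock scap-security-guide
invocation (the profile id and datastream path are from memory and may well
differ on an Atomic host):

```
oscap xccdf eval \
    --profile xccdf_org.ssgproject.content_profile_stig-rhel7-disa \
    --results /tmp/stig-results.xml \
    --report  /tmp/stig-report.html \
    /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml
```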
