I don't know 5% of what you guys do when it comes to SCAP and the way the content is manipulated, but one thing stuck out to me.
> The fix scripts should not be written to check system state at the
> granularity targeted by OVAL checks. But they should still be doing
> basic error checking and error handling.
While I agree with this in principle, I think it may be somewhat more
complicated in practice. An OVAL check returns a binary result
(pass/fail). I would argue that the remediation content will have to do a more
granular check in some cases where the current content may not be so
straightforward. For example, PAM parses its configuration files in a
specific order, so you can't just stick the required line in there anywhere.
In my experience the check just looks for a regex match and reports pass/fail
from that. The remediation content will have to "understand" the proper layout
of the file and handle variances within it. This example is pretty
simple, but I hope it gets my point across.
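To make the PAM example concrete, here is a rough sketch of what I mean (the file path, module names, and policy line are all illustrative stand-ins, not real SSG content): the fix has to insert the line *before* pam_unix, because PAM evaluates the stack in order and a blind append would land too late.

```shell
#!/bin/bash
# Hypothetical sketch, not SSG content: insert a pam_pwquality
# requirement *before* the pam_unix password line, since PAM evaluates
# modules in stack order and a blind append would land too late.
pamfile="/tmp/system-auth.example"

# Sample stack, standing in for /etc/pam.d/system-auth.
cat > "$pamfile" <<'EOF'
password    requisite     pam_deny.so
password    sufficient    pam_unix.so sha512 shadow
EOF

required='password    requisite     pam_pwquality.so retry=3'

if grep -q 'pam_pwquality\.so' "$pamfile"; then
    echo "already configured"
else
    # Insert immediately before the first pam_unix password line,
    # preserving the order of everything else in the file.
    awk -v line="$required" '
        !done && /^password/ && /pam_unix\.so/ { print line; done=1 }
        { print }
    ' "$pamfile" > "$pamfile.new" && mv "$pamfile.new" "$pamfile"
fi
```

Even this toy version has to know where the line belongs, which is exactly the kind of "understanding" the pass/fail check never needed.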
I honestly didn't know that OpenSCAP could do remediation at all.
I'm in the process of understanding how that works and reading the
7670 document, but from the OpenSCAP Remediation page that Simon Lukasik so
graciously wrote up I get the idea of how it all ties together. That leads me
to a question about selectively remediating findings, though. From
that information alone it appears that you can either a) wholesale remediate
all failed findings right when they are found using the 'eval' parameter, or b)
do 'offline' remediation, which does the same thing but gives you a
chance to see what needs to be changed first. Is it possible to add an option
to exclude or include particular findings for remediation while still having
them checked? Off the top of my head I can't think of a good way
to do that without it being cumbersome. But I can say that in my years working
with security measures I have never been able to take
the 'recommended' solution and fit 100% of it to my system. There are always
outliers. For example, I wouldn't want it to disable IPv6 on my system, since
it is an operational requirement.
This may already exist somewhere and I just don't know what I am rattling on
about and if so I apologize. I also realize that this is a very early stage,
but I thought I would bring it up.
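For what it's worth, XCCDF 1.2 tailoring seems designed for exactly this kind of selective exclusion. The sketch below is illustrative only: all the IDs are made up, and I don't know offhand whether the openscap tooling wires this up, but the idea would be to deselect one rule (say, the IPv6 one) while the rest of the profile is still checked and remediated.

```shell
# Illustrative only -- every ID below is invented, and whether the
# scanner honors tailoring during remediation is an open question here.
cat > /tmp/tailoring.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<Tailoring xmlns="http://checklists.nist.gov/xccdf/1.2"
           id="xccdf_example_tailoring_local">
  <version time="2013-03-26T00:00:00">1</version>
  <Profile id="xccdf_example_profile_local"
           extends="xccdf_example_profile_stig">
    <!-- Keep everything from the parent profile except this rule. -->
    <select idref="xccdf_example_rule_disable_ipv6" selected="false"/>
  </Profile>
</Tailoring>
EOF
# Then, hypothetically, something along these lines:
# oscap xccdf eval --remediate --tailoring-file /tmp/tailoring.xml \
#     --profile xccdf_example_profile_local ssg-rhel6-xccdf.xml
```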
On a side note - I also agree bash is the way to go. Other content can come
later, but bash should be first. I don't even know how you would be able to
put something like puppet in here. That's (for the most part) managed at a
central server, and the client cannot/should not be attempting to apply any
manifests locally.
I'd love to help with this more as I begin to understand the inner workings of
how the content is created and managed. But if there is anything I can help
with in the short term let me know. I would love to see this succeed.
Thanks everyone,
Chad
-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of Shawn
Wells
Sent: Tuesday, March 26, 2013 12:42 AM
To: [email protected]
Subject: Re: Remediation Scripts
On 3/26/13 12:18 AM, Francisco Slavin wrote:
You rang? Or, y'know, whatever sound emails make. The internet is
obviating the need for the onomatopoeia.
I still envision the AOL "you've got mail!"
Anyway....
On Monday, March 25, 2013 10:15 PM, Shawn Wells wrote:
> I've been taking a few off-list questions around remediation lately,
> namely from interested parties asking "where do we start?" Wanted to
> move those conversations to on-list. Here's a few of the common
> questions and my thoughts to get us started.
>
I feel like I wade into this conversation from time to time and end up
repeating myself a bit. A couple past threads [1] for context [2] should keep
me from being too repetitive.
>
> (1) What language(s) should be used?
>
> IMO, bash. I'm leaning this way because it's included in *every* RHEL
> release, whereas puppet modules would require the installation of 3rd
> party software. I'd like to see as much done through 'native' tools as
> possible. There's certainly advantages to Perl (e.g., potential speed)
> however I don't think we want to assume Perl is installed on all RHEL
> machines.
>
Puppet's high-level language lets you statefully define system
configurations, which is nice considering that XCCDF ostensibly follows a very
similar modeling approach. The problem is government applicability: the last I
checked, Puppet lives in EPEL rather than being enterprise-supported, so for some
areas it's simply not an option. Additionally, some people targeting embedded
environments don't want to pull Puppet in.
Bash is everywhere and as such makes the most sense.
From a tools side it would make sense to support numerous 'fix'
mechanisms, and from a content side it may make sense to have fix content in
various formats. But for the first steps, bash seems the way to go.
Personally I'm with you regarding puppet. Additionally RH has publicly stated
plans to incorporate puppet into future (layered) products. However, as you
pointed out, we can reasonably *depend* on bash being installed on any given
RHEL box.
As for additional formats, the extensibility is already built into the SSG
build process (sample content for puppet exists). The challenge -- as you
also seemed to identify -- is what the first steps will be. I worry about
finding ourselves in a situation with fragmented remediation content... a few
bash fixes, a few puppet. I'd rather pick one to begin with (and bash seems
most logical).
>
> (2) Do we perform checking in the scripts?
>
> Defined further, should the scripts contain conditional checks to see
> if they should be run?
> IMO, no. That's what OVAL is for.
>
This conversation gets a bit muddied by the definition of 'checking'.
The fix scripts should not be written to check system state at the granularity
targeted by OVAL checks, but they should still do basic error checking
and error handling. As such they would need to report errors, and the tool
calling out to the fix scripts would have to act on them, so some common
dictionary of return code values may be useful. Using CEE has been brought up
in the past as well. Basically, whichever front-end calls out to SCAP
libraries for check content gets its fine-grained error reporting taken care of
by those libraries; but the fix side will be done ad hoc, and having
granular error reporting would still be a huge benefit. The quick-and-dirty
way would simply be to have 'success' and 'fail' RCs defined and to capture the
stderr and stdout of the script according to whether it passes or fails and
what degree of logging your tool is set to (debug/verbose/etc.). This last
approach is what SecState is currently doing.
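The quick-and-dirty approach could look something like this sketch. The return-code values and the run_fix wrapper are hypothetical, not an existing standard; the point is just that the caller captures stdout/stderr per script and translates a small, agreed-on set of RCs.

```shell
#!/bin/bash
# Sketch of a shared return-code dictionary for fix scripts; the values
# and the run_fix wrapper are hypothetical, not an existing standard.
RC_SUCCESS=0
RC_FAIL=1
RC_NOT_APPLICABLE=2

run_fix() {
    # Run one fix script, capture its stdout/stderr, translate the RC.
    local script="$1"; shift
    local out err rc
    out=$(mktemp); err=$(mktemp)
    bash "$script" "$@" >"$out" 2>"$err"
    rc=$?
    case "$rc" in
        "$RC_SUCCESS")        echo "fixed: $script" ;;
        "$RC_NOT_APPLICABLE") echo "skipped: $script" ;;
        *)                    echo "error ($rc): $script"
                              cat "$err" >&2 ;;
    esac
    rm -f "$out" "$err"
    return "$rc"
}

# Demo: a trivial fix script that always succeeds.
demo=$(mktemp)
printf 'exit 0\n' > "$demo"
result=$(run_fix "$demo")
rm -f "$demo"
```

Whatever the actual dictionary ends up being, the useful property is that every front-end interprets the same small set of codes the same way.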
I like the stderr/stdout approach. Simon Lukasik recently wrote a good posting
about how this output would be captured in the result XML:
http://isimluk.livejournal.com/3573.html
>
> (3) Where do we begin?
>
> - Name remediation scripts after corresponding XCCDF rule
> - Build process includes them into final ssg-rhel6-xccdf.xml
>
> Known challenge on passing XCCDF variables through to the scripts,
> however I wouldn't let this hold us up. Still *tons* of work to be
> done while this gets sorted.
>
I'm not sure what challenge you're referencing here. We've gotten
variable passing to work pretty simply [3].
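On the consuming end, a fix script that takes its policy value from an environment variable is straightforward. This is only a sketch of that pattern (the variable name is borrowed from the JSON example later in the thread; the file path stands in for /etc/login.defs):

```shell
#!/bin/bash
# Hedged sketch: a fix script consuming an XCCDF <Value> that the
# calling tool exported as an environment variable.
: "${login_defs_min_len:=14}"   # fall back to a default if unset

conf="/tmp/login.defs.example"
cat > "$conf" <<'EOF'
PASS_MAX_DAYS   99999
PASS_MIN_LEN    8
EOF

# Replace the existing setting with the policy value.
sed -i "s/^PASS_MIN_LEN.*/PASS_MIN_LEN    ${login_defs_min_len}/" "$conf"
```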
>
> There's a good bit of RHEL6 content in the Aqueduct project that (I
> believe) Tresys committed. Perhaps we could reuse those scripts?
>
The Tresys CLIP team did contribute a lot of scripts to Aqueduct which
were written specifically to go alongside SSG SCAP content [4]. These scripts
were mapped back directly to SSG rules [5]. They were written against an older
version of the SSG repo and as such likely need a bit of an update, but I think
they would be a very good starting place.
There are a lot of design concerns to keep in mind when approaching
remediation content and the balance between authorship time and effort must be
considered for each of those. Unfortunately I missed the registration for your
SCAP workshop this time around, but I would be happy to support a call to talk
through some of this.
I'll need to dig deeper into your final output, but can you help me understand
how your JSON gets transformed into valid XML?
e.g.:
<!-- We use JSON notation to articulate a bash script call -->
<fix system="urn:xccdf:fix:script:bash">
    {
        "script" : "/root/passreqs.sh",
        "environment-variables" : { "login_defs_min_len" : "<sub idref="pass-min-length-var" />" },
        "positional-args" : ["argument1", "argument2"]
    }
</fix>
I'm imagining we'd need to modify the makefile to parse all this, sucking in
$script and performing some kind of variable substitution. How do you guys
handle things?
Thank you
- Francisco
[1] Sep 2012 -
https://lists.fedorahosted.org/pipermail/scap-security-guide/2012-September/001516.html
[2] Feb 2013 -
https://lists.fedorahosted.org/pipermail/scap-security-guide/2013-February/002722.html
[3] SecState XCCDF variable passing -
https://fedorahosted.org/secstate/wiki/RemediationContentHowTo
[4] -
https://fedorahosted.org/aqueduct/browser/trunk/compliance/Bash/SSG
[5] -
https://fedorahosted.org/aqueduct/browser/trunk/compliance/Bash/SSG/tools/fix_mapper.py
