Trimmed opsawg since this seems most appropriate to opsec / sacm

https://datatracker.ietf.org/doc/draft-coffin-sacm-vuln-scenario/

got a significant change associated with the sacm use case alignment.

that seems like a significant effort borne out of this feedback.

joel

On 12/2/15 9:21 AM, Haynes, Dan wrote:
> Hi Josh,
> 
> Thanks for the feedback! 
> 
>  
> 
> Comments inline below.  Please let me know if I am misunderstanding
> anything.
> 
>  
> 
> Thanks,
> 
>  
> 
> Danny
> 
>  
> 
>  
> 
> *From:* sacm [mailto:[email protected]] *On Behalf Of *Stevens, Josh
> (Cyber Security)
> *Sent:* Thursday, November 19, 2015 10:37 PM
> *To:* Romascanu, Dan (Dan) <[email protected]>; [email protected];
> [email protected]
> *Cc:* [email protected]
> *Subject:* Re: [sacm] Feedback on the SACM Vulnerability Assessment Scenario
> 
>  
> 
> Hi Dan,
> 
>  
> 
> Agreed, this does help focus the work for the SACM Working Group.
> After reading draft-coffin-sacm-vuln-scenario, I had some initial
> feedback from an enterprise defense perspective that might be
> incorporated.  Here are my contributions:
> 
>  
> 
> 1.       In the abstract, a vulnerability report is referenced; however,
> it’s not clear whether an authenticated or unauthenticated vulnerability
> scan report (or both) is being referred to.  If the process can consume
> both, I would call that out.  If what’s being referred to is actually a
> recurring industry-standard vulnerability report from a vendor, saying
> so would also clear up the abstract ahead of the scope statement, where
> the vulnerability report is defined.
> 
>  
> 
> [danny]: the intent is that a vulnerability report is published by a
> vendor (e.g. a security advisory) in some un-prescribed format, from
> which the relevant data can be extracted in some un-prescribed way.
> That is, the vulnerability report is not necessarily an industry
> standard (at least not in the context of this document).  We can try to
> clarify this a bit.
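> 
> As a toy example of extraction "in some un-prescribed way": pulling CVE
> identifiers out of free-form advisory text.  The regex and helper name
> below are purely illustrative, not anything from the draft:
> 
>     import re
> 
>     # CVE IDs follow the pattern CVE-YYYY-NNNN (four or more digits).
>     CVE_RE = re.compile(r"CVE-\d{4}-\d{4,}")
> 
>     def cve_ids_from_advisory(text: str) -> set[str]:
>         """Pull CVE identifiers out of a free-form vendor advisory."""
>         return set(CVE_RE.findall(text))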
> 
>  
> 
> 2.       In the abstract, there isn’t a reference to the endpoint-based
> approach that is detailed in the next section.  Ideally, instead of “It
> begins with an enterprise ingesting a vulnerability report and ends at
> the point of identifying affected endpoints”, this could be better
> summarized as “Endpoint data is pushed to a central point for comparison
> with vulnerability criteria to enable posture assessment.”
> 
>  
> 
> [danny]: we can try to clarify this.            
> 
>  
> 
> 3.       Section 3.3: a few comments on page 7, paragraph 4, “The
> attributes could be manually entered into a CMDB by a human.  This would
> include any attributes that cannot be collected programmatically.”  I
> believe the intent here is to leave fields open for the user to define
> and leverage as needed; however, I'm not sure we want to advocate manual
> entry.  Ideally, we would use a common set of algorithms that can be
> *manually* adjusted to provide exception-based rules for the overall
> effort.  If we give a user the chance to mangle data sets, they probably
> will - I'm in favor of providing additional fields that can be
> interfaced with other security APIs, where automation is the default
> workflow choice.  Here are some alternatives to manual entry for these
> three categories (a rough sketch of (b) and (c) follows the list):
> 
> a.       Location - for a global organization, consider the DNS
> subdomain infrastructure; for a smaller organization, consider SIEM
> events that stamp the closest-proximity security infrastructure into an
> event (zone-based); other sources might be the ARP cache, DNS cache, etc.
> 
> b.      Role - compare with external maps to identify external-facing
> endpoints, fingerprint web server packages with a local agent, scrape
> the local process table to gauge TCP connections to the web server (is
> it really a web server?), etc.
> 
> c.       Criticality - analytics on logins, active sessions, user
> account counts, and NetFlow will provide a strong enterprise criticality
> score.
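> 
> For illustration only, a rough Python sketch of the kind of checks (b)
> and (c) describe.  The psutil calls are real, but the port list,
> weights, and normalization constants are invented and would need
> per-organization tuning:
> 
>     import psutil  # third-party; inspects local sockets
> 
>     WEB_PORTS = {80, 443, 8080, 8443}
> 
>     def looks_like_web_server():
>         """Role check (b): is anything listening on web ports, and are
>         there established TCP connections (is it *really* a web server)?"""
>         listening, established = False, 0
>         for conn in psutil.net_connections(kind="inet"):
>             if not conn.laddr:
>                 continue
>             if conn.status == psutil.CONN_LISTEN and conn.laddr.port in WEB_PORTS:
>                 listening = True
>             elif conn.status == psutil.CONN_ESTABLISHED and conn.laddr.port in WEB_PORTS:
>                 established += 1
>         return listening, established
> 
>     def criticality_score(logins_per_day, active_sessions, accounts, flow_bytes):
>         """Criticality heuristic (c): weighted blend of activity signals,
>         each capped at 1.0.  Weights are illustrative."""
>         return round(0.3 * min(logins_per_day / 100, 1.0)
>                    + 0.3 * min(active_sessions / 50, 1.0)
>                    + 0.2 * min(accounts / 500, 1.0)
>                    + 0.2 * min(flow_bytes / 1e9, 1.0), 2)  # 0.0 low .. 1.0 high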
> 
>  
> 
> [danny]: I don’t think allowing users to manually enter attributes into
> the CMDB necessarily means mangled data sets.  For example, an
> organization could define a schema that specifies what the information
> should look like.  While it wouldn’t likely be standardized, I think
> that is ok given that this type of information would most likely be
> organization-specific anyway.  What do others think?
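> 
> To make that concrete, a sketch of what such an organization-defined
> schema check might look like.  The field names, constraints, and the
> use of the jsonschema package are just my example, not anything SACM
> has defined:
> 
>     from jsonschema import ValidationError, validate  # third-party
> 
>     # Hypothetical schema for manually entered CMDB attributes.
>     CMDB_ATTRIBUTE_SCHEMA = {
>         "type": "object",
>         "properties": {
>             "location":    {"type": "string", "pattern": "^[A-Z]{2}-[A-Za-z0-9-]+$"},
>             "role":        {"type": "string", "enum": ["web", "db", "dns", "workstation", "other"]},
>             "criticality": {"type": "integer", "minimum": 1, "maximum": 5},
>         },
>         "required": ["location", "role", "criticality"],
>         "additionalProperties": False,
>     }
> 
>     def accept_manual_entry(record):
>         """Reject manual CMDB entries that don't match the schema, so
>         human input can't mangle the data set."""
>         try:
>             validate(instance=record, schema=CMDB_ATTRIBUTE_SCHEMA)
>             return True
>         except ValidationError as err:
>             print("rejected:", err.message)
>             return False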
> 
>  
> 
> With that said, I think this other information, and the algorithms to
> process it, would be useful as well.  Does anyone have any thoughts on
> this approach?
> 
>  
> 
> 4.       Section 5.1: if the authenticated and unauthenticated data
> sets do get merged or compared, a decision tree will have to be
> pre-established - does the authenticated/agent-based score override a
> more recent but unauthenticated scan finding?
> 
>  
> 
> [danny]:  I don’t think I know the answer at this time.  It depends on
> how the WG ends up defining the assessment logic.  Maybe it is something
> that can be configured in the evaluation guidance, or maybe
> authenticated data always takes precedence over unauthenticated data.
> With that said, authenticated/unauthenticated sounds like it would be a
> good piece of information to capture about an attribute in the
> information model (e.g. similar to how we want to capture whether an
> attribute can be used for designation/identification or whether it has
> privacy concerns).  Do others have thoughts on authenticated vs.
> unauthenticated attributes impacting assessments?
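> 
> As a strawman, a configurable precedence rule could look something like
> the sketch below.  None of these names come from the information model,
> and the staleness cutoff is invented; it would presumably come from the
> evaluation guidance:
> 
>     from dataclasses import dataclass
>     from datetime import datetime, timedelta
> 
>     @dataclass
>     class Finding:
>         value: str            # e.g. "vulnerable" / "not vulnerable"
>         authenticated: bool   # authenticated/agent-based collection?
>         collected_at: datetime
> 
>     def preferred(a: Finding, b: Finding,
>                   staleness: timedelta = timedelta(days=7)) -> Finding:
>         """Authenticated data wins unless it is more than `staleness`
>         older than an unauthenticated finding; otherwise take the most
>         recent of two findings of the same kind."""
>         if a.authenticated != b.authenticated:
>             auth, unauth = (a, b) if a.authenticated else (b, a)
>             if unauth.collected_at - auth.collected_at > staleness:
>                 return unauth
>             return auth
>         return max(a, b, key=lambda f: f.collected_at)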
> 
>  
> 
> 5.       For Appendix B: priority should include cyber intelligence and
> campaign-based vulnerability scores.  For example, in 2013, the CVEs
> leveraged by the “Red October” advanced cyber espionage campaign
> targeting diplomatic officials should have been prioritized well above
> the CVEs used in Conficker, etc.  How can this standard be directed or
> modified to accept industry-standard Indicators of Compromise (IOCs) and
> provide intel-driven posture assessments?
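> 
> For example, a priority function might boost a CVE's base score
> whenever it appears in an intel feed of active campaigns.  The feed
> contents, boost, and cap below are invented for illustration:
> 
>     # Illustrative feed of CVEs tied to active campaigns (examples only).
>     ACTIVE_CAMPAIGN_CVES = {"CVE-2012-0158", "CVE-2010-3333"}
> 
>     def intel_adjusted_priority(cve_id: str, cvss_base: float) -> float:
>         """Boost priority for campaign-linked CVEs; cap at the CVSS max."""
>         boost = 3.0 if cve_id in ACTIVE_CAMPAIGN_CVES else 0.0
>         return min(cvss_base + boost, 10.0)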
> 
>  
> 
> [danny]: agree, it probably makes sense to say something about cyber
> intelligence information in determining scores and priorities.  What do
> others think? 
> 
>  
> 
> When you say “standard” are you referring to the vulnerability
> assessment scenario draft?  SACM in general?  Or something else?
> 
>  
> 
> 6.       Other general comments:
> 
>  
> 
> o   Are mutex string acquisition, DLL fingerprinting, IOC processing,
> and auto-remediation out of scope for the current Working Group?  In
> addition to vulnerability assessment, these would all go nicely together
> as part of a standard endpoint spec for vendors to communicate through.
> I’ve seen email traffic from the group on several of these related
> topics.
> 
>  
> 
> [danny]: Remediation is currently out of scope for SACM.  Regarding
> mutex string acquisition, DLL fingerprinting, and IOC processing, I
> think it depends on what is required to collect the data.  If we can
> collect the data by looking at something on the endpoint (e.g. file
> hashes, registry keys, etc.), I suspect it would be in scope.  If we
> have to do more detailed analysis (e.g. dynamic malware analysis) that
> requires specialized tools and sandboxes, I suspect that would be out
> of scope.  With that said, the output of this more detailed analysis
> could produce information that would serve as input into creating
> guidance to assess endpoints for IOCs, at which point I think it would
> fall within the scope of SACM.
> 
>  
> 
> o   Section 3: the multiple references to CMDB make some potential
> assumptions about managed vs. unmanaged endpoints.
> 
> §  It’s possible that unmanaged endpoints won't be found in a CMDB; does
> this model account for those in any way?
> 
> §  An alternative is to consider continuous NetFlow analytics updating a
> repository / data lake.
> 
>  
> 
> [danny]: We didn’t really think of CMDB at that level.  We simply used
> CMDB to mean a place to store guidance, data, results, etc. - basically,
> anything that is input or output with respect to an assessment.  With
> that said, as SACM begins to define what it needs in a repository (and
> its data stores), these are definitely questions that need to be asked.
> 
>  
> 
> o   Is Rogue Device Detection under consideration as a data point or
> even an Endpoint Type?
> 
> o   Discovering neighboring endpoints
> 
> o   ARP as sensor data 
> 
> o   Once a rogue device has been detected, a detailed (or secondary)
> vulnerability assessment should begin automatically.
> 
>  
> 
> [danny]:  in previous discussions, there was mention of managed vs.
> unmanaged endpoints on the network in the context of BYOD.  Managed
> endpoints are those owned/controlled by the organization, whereas
> unmanaged endpoints are those neither owned nor controlled by the
> organization (e.g. a personal laptop).  Would your definition of a
> rogue endpoint align with an unmanaged endpoint?
> 
>  
> 
> Hope this helps.
> 
>  
> 
> Josh Stevens
> 
> Hewlett Packard Enterprise
> 
>  
> 
> *From:* sacm [mailto:[email protected]] *On Behalf Of *Romascanu,
> Dan (Dan)
> *Sent:* Thursday, November 19, 2015 7:51 AM
> *To:* [email protected]; [email protected]
> *Cc:* [email protected]
> *Subject:* [sacm] Feedback on the SACM Vulnerability Assessment Scenario
> 
>  
> 
> Hi,
> 
>  
> 
> I am reiterating a request that I made at IETF 94 in the OPSAWG meeting,
> and also sent to the mail lists of opsec and opsawg.  The SACM WG is
> considering a document,
> https://datatracker.ietf.org/doc/draft-coffin-sacm-vuln-scenario/, that
> describes the operational handling of vulnerability reports, which we
> believe is an important use case in the security assessment life cycle.
> We are requesting feedback from operators about the scenario described
> in this document – does it make sense?  Is it similar to what you do in
> operational real life?  Are you using similar or different methods for
> vulnerability assessment in your networks?  A quick reading and short
> feedback would be greatly appreciated.
> 
>  
> 
> Thanks and Regards,
> 
>  
> 
> Dan
> 
>  
> 
> 
> 
> 



_______________________________________________
OPSEC mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/opsec
