Hi Dan,

Agreed, this does help focus the work for the SACM Working Group. After reading 
draft-coffin-sacm-vuln-scenario, I have some initial feedback from an 
enterprise defense perspective that might be incorporated. Here are my 
contributions:



1.       In the abstract, a vulnerability report is referenced; however, it is 
not clear whether an authenticated or an unauthenticated vulnerability scan 
report (or both) is being referred to. If the model can consume both, I would 
call that out. If what is actually being referred to is a recurring 
industry-standard vulnerability report from a vendor, stating that would also 
clear up the abstract ahead of the scope statement, where the vulnerability 
report is defined.



2.       The abstract does not reference the endpoint-based approach that is 
detailed in the next section. Instead of "It begins with an enterprise 
ingesting a vulnerability report and ends at the point of identifying affected 
endpoints", this could be better summarized as "Endpoint data is pushed to a 
central point for comparison with vulnerability criteria to enable posture 
assessment."



3.       3.3 A few comments on page 7, paragraph 4: "The attributes could be 
manually entered into a CMDB by a human. This would include any attributes that 
cannot be collected programmatically." I believe the intent here is to leave 
fields open for the user to define and leverage as needed; however, I am not 
sure we want to advocate manual entry. Ideally, we would use a common set of 
algorithms that can be *manually* adjusted to provide exception-based rules for 
the overall effort. If we give a user the chance to mangle data sets, they 
probably will. I am in favor of providing additional fields that can be 
interfaced with other security APIs, where automation is the default workflow 
choice. Here are some alternatives to manual entry for these three categories 
(a rough criticality-scoring sketch follows the list):

a.       Location - for a global organization, consider the DNS subdomain 
infrastructure; for a smaller organization, consider SIEM events that stamp the 
closest-proximity security infrastructure into an event (zone based); other 
sources might be the ARP cache, DNS cache, etc.

b.      Role - compare with external maps to identify external-facing 
endpoints, fingerprint web server packages with a local agent, scrape the local 
process table to gauge TCP connections to the web server (is it really a web 
server?), etc.

c.       Criticality - analytics on logins, active sessions, user account 
counts, and netflow will provide a strong enterprise criticality score.
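
As a concrete illustration of item c, here is a minimal Python sketch of 
deriving a criticality score from activity telemetry instead of manual CMDB 
entry. The field names and weights are hypothetical assumptions on my part, not 
anything from the draft, and would need tuning per enterprise:

    # Minimal sketch: derive an endpoint criticality score from login,
    # session, account, and netflow telemetry. Field names and weights
    # are illustrative assumptions only.
    def criticality_score(endpoint):
        logins = endpoint.get("daily_logins", 0)         # from auth logs / SIEM
        sessions = endpoint.get("active_sessions", 0)    # from agent or netflow
        accounts = endpoint.get("distinct_accounts", 0)  # unique users observed
        peers = endpoint.get("netflow_peers", 0)         # distinct peers per day

        # Weighted, capped contributions, normalized to a 0-100 scale.
        score = (min(logins, 100) * 0.2
                 + min(sessions, 50) * 0.4
                 + min(accounts, 200) * 0.1
                 + min(peers, 500) * 0.04)
        return min(round(score), 100)

    # Example: criticality_score({"daily_logins": 40, "active_sessions": 25,
    #                             "distinct_accounts": 120, "netflow_peers": 300})
    # returns 42 under this illustrative weighting.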



4.       5.1 If the authenticated and unauthenticated data sets do get merged 
or compared, a decision tree will have to be pre-established - does the 
authenticated/agent-based score override a more recent but unauthenticated 
scan finding? (A possible precedence rule is sketched below.)
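
To make the question concrete, here is one possible precedence rule, sketched 
in Python; the staleness window and data shapes are my assumptions, not 
anything defined in the draft:

    from datetime import timedelta

    STALENESS_WINDOW = timedelta(days=7)  # illustrative threshold only

    def pick_authoritative(auth_finding, unauth_finding):
        # Each finding is assumed to be a dict with a 'timestamp' (datetime)
        # and a 'result'; None means no finding from that source.
        if auth_finding is None:
            return unauth_finding
        if unauth_finding is None:
            return auth_finding
        # Authenticated/agent-based data wins unless it has gone stale
        # relative to a newer unauthenticated observation.
        if unauth_finding["timestamp"] - auth_finding["timestamp"] > STALENESS_WINDOW:
            return unauth_finding
        return auth_finding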


5.       For Appendix B: Priority should include cyber intelligence and 
campaign-based vulnerability scores. For example, in 2013 the CVEs leveraged by 
the "Red October" advanced cyber espionage campaign targeting diplomatic 
officials should have been prioritized well above the CVEs used by Conficker, 
etc. How can this standard be directed or modified to accept industry-standard 
Indicators of Compromise (IOCs) and provide intel-driven posture assessments? 
(A rough weighting sketch follows.)
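
One way to express this, purely as an assumed sketch, is to weight a base CVSS 
score by a campaign/IOC intelligence feed so that actively exploited CVEs sort 
first. The CVE identifiers and weights below are placeholders, not real feed 
data:

    # Placeholder intel feed: CVE id -> campaign weight (hypothetical values).
    CAMPAIGN_WEIGHTS = {
        "CVE-0000-0001": 3.0,  # e.g. tied to an active espionage campaign
        "CVE-0000-0002": 1.2,  # e.g. old commodity-worm activity
    }

    def prioritized(findings):
        # findings: list of dicts with a 'cve' id and a base 'cvss' score.
        def rank(finding):
            boost = CAMPAIGN_WEIGHTS.get(finding["cve"], 1.0)
            return finding["cvss"] * boost
        return sorted(findings, key=rank, reverse=True)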



6.       Other general comments:



o   Are mutex string acquisition, DLL fingerprinting, IOC processing, and auto 
remediation out of scope for the current Working Group? In addition to 
vulnerability assessment, these would all go nicely together as part of a 
standard endpoint spec for vendors to communicate through. I've seen email 
traffic from the group on several of these related topics.



o   Section 3: The multiple references to a CMDB make some potential 
assumptions about managed vs. unmanaged endpoints:

-  It's possible that unmanaged endpoints won't be found in a CMDB; does this 
model account in any way for those?

-  An alternative is to consider continuous netflow analytics updating a 
repository / data lake (see the sketch after this list).

o   Is Rogue Device Detection under consideration as a data point or even an 
Endpoint Type?

o   Discovering neighboring endpoints

o   ARP as sensor data

o   Once a rogue device has been detected, a detailed (or secondary) 
vulnerability assessment should begin automatically, as sketched below.
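
A minimal sketch of that workflow, assuming ARP/netflow observations feeding a 
repository and a hypothetical scan queue (none of these names come from the 
draft):

    def handle_observed_device(mac, ip, known_macs, scan_queue):
        # known_macs: set of MAC addresses already in the CMDB / repository.
        # scan_queue: list of (ip, scan_type) requests consumed by a scanner.
        if mac not in known_macs:
            # Unmanaged / rogue endpoint seen in ARP or netflow sensor data:
            # record it and trigger a detailed secondary assessment.
            known_macs.add(mac)
            scan_queue.append((ip, "detailed_unauthenticated"))
        else:
            # Managed endpoint: the normal assessment cadence applies.
            scan_queue.append((ip, "scheduled_authenticated"))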

Hope this helps.

Josh Stevens
Hewlett Packard Enterprise

From: sacm [mailto:[email protected]] On Behalf Of Romascanu, Dan (Dan)
Sent: Thursday, November 19, 2015 7:51 AM
To: [email protected]; [email protected]
Cc: [email protected]
Subject: [sacm] Feedback on the SACM Vulnerability Assessment Scenario

Hi,

I am reiterating a request that I made at IETF 94 in the OPSAWG meeting, and 
also sent to the mail lists of opsec and opsawg. The SACM WG is considering a 
document https://datatracker.ietf.org/doc/draft-coffin-sacm-vuln-scenario/ that 
describes the operational practice of vulnerability reports, which we believe 
is an important use case in the security assessment life cycle. We are 
requesting feedback from operators about the scenario described in this 
document - does it make sense? Is it similar to what you do in operational real 
life? Are you using similar or different methods for vulnerability assessment 
in your networks? A quick reading and short feedback would be greatly 
appreciated.

Thanks and Regards,

Dan

