Hi -

On 11/18/2019 7:07 PM, Greg MacLellan wrote:
I just came across RFC 7860 while looking for updates to the SNMPv3
authentication models.

It's good to see updated hash algorithms; however, I'm confused by
several aspects of Section 9.3, Derivation of Keys from Passwords:

1. Why is the word "SHOULD" used for this specification? There is no
alternative given, and if two systems use a different procedure, they
will be unable to authenticate successfully. It seems like "MUST" would
be appropriate here.

The view that prevailed during the SNMPv* wars was that
we shouldn't be *mandating* key localization, despite the
obvious operational benefits of doing so.  The view was also
that even if key localization was a wonderful thing, that we
shouldn't be mandating the use of a specific algorithm.  The
end result was that RFC 3414 has appendices with really useful
examples that, as a matter of operational sanity, people have
treated as though they were actually required by the standard.

As a matter of *interoperability* (rather than operational sanity)
it doesn't matter whether a key localization algorithm was even
used to produce the keys used in an SNMPv3 deployment, so the
use of passwords is technically beyond the scope of the standard's
requirements.  Some of us (including me) would have liked to
nail it down much more firmly than that, but as a practical
matter the compromise text seems to have gotten the job done.

2. It specifies using the password-to-key algorithm from RFC 3414;
however, that algorithm seems to have no technical merit, and I
cannot find any basis for why it is a good idea. Using a predictable
1MB input to the hash *lowers* its security as compared to using the
input directly, since it drastically reduces the effective entropy. For
example, the passwords "abc", "abcabc", and "abcabcabc" are now
equivalent. In addition, there is a performance cost to doing this, but
without the full security benefits of an actual "slow hash" (such as
scrypt, bcrypt, or PBKDF2).

The more elaborate HMAC used with SNMPv3, chosen to overcome
the deficiencies of the MAC used in RFC 1352 and RFC 1446, appeared
in RFC 2264 and subsequent SNMPv3 documents.  My recollection
is that given the processing power and storage capability available
two decades ago, making the password-to-key computation expensive
was thought to add more to a defense against dictionary attacks
than the reduction in the theoretical size of the password space
would detract from it. Uri Blumenthal probably remembers the details
of his analysis.  The 8-character minimum length requirement for
passwords also undercuts the objection that the algorithm "collapses"
too many passwords.
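For readers following along, here is a minimal Python sketch of the
RFC 3414 Appendix A.2-style expansion being discussed (the function
name and parameters are mine). It also makes the "abc"/"abcabc"
collision from the question easy to see:

```python
import hashlib

MEGABYTE = 1024 * 1024  # RFC 3414 expands the password to exactly 1 MB


def password_to_key(password: bytes, hash_name: str = "md5") -> bytes:
    """Repeat the password to fill 1 MB, then hash once (RFC 3414, A.2 style)."""
    reps = MEGABYTE // len(password) + 1
    buf = (password * reps)[:MEGABYTE]  # truncate to exactly 1 MB
    return hashlib.new(hash_name, buf).digest()


# The collision the question points out: repetition makes these equal.
assert password_to_key(b"abc") == password_to_key(b"abcabc")
```

The cost here is hashing a full megabyte per guess, which was the
intended brake on dictionary attacks with the hardware of the time.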

3. The algorithm then says to combine `digest1 + snmpEngineID + digest1`
-- why do this instead of simply `digest1 + snmpEngineID`? Once again,
this incurs a performance cost but does nothing to increase the entropy
or resistance to brute-force attacks.

As I recall...
This was due to concerns about how MAC algorithms work.  Two decades
ago some had concerns that a MAC code might reveal too much about
the most "recent" bits fed to the algorithm.  This concern is heightened
considering some deployments where the snmpEngineID values might differ
from each other in predictable ways involving only a few bits.
Feeding the digest (computed entirely from private information) to the
algorithm both before and after the (effectively public) snmpEngineID
value reduced those fears.  Again, folks like
Uri Blumenthal can speak to the goriest details and might remember the
debates better than I do.
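A sketch of that sandwich construction in Python (the helper name is
mine; the `digest1 + snmpEngineID + digest1` layout is the one the
question describes):

```python
import hashlib


def localize_key(ku: bytes, snmp_engine_id: bytes,
                 hash_name: str = "md5") -> bytes:
    """Hash digest1 + snmpEngineID + digest1, so the private digest
    both precedes and follows the effectively public engine ID."""
    return hashlib.new(hash_name, ku + snmp_engine_id + ku).digest()
```

Because the private digest also covers the tail of the input, the
final hash state is never determined solely by the (possibly
predictable) engine ID bits.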

Randy

_______________________________________________
OPSAWG mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/opsawg
