What would really help is websites publishing a file that can be found in
an automated way (perhaps, like robots.txt, given a standard name at the
site root) defining which areas of the site require which type of login.
For example, it could say that forums.foobricks.ninja requires an OpenID,
and a browser could then, if the user wants it that way, log in
automatically using a preferred OpenID registered with the browser; if the
file says that demos.foobricks.ninja needs a SceneID
<https://id.scene.org/>, the browser could log in with that instead. This
would aid multiple-login schemes, since the user would no longer have to
treat each site as an unrelated login system. As usual, there are attacks
based on such a file that would have to be defended against.
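As a sketch of what such a file might look like (every element and
attribute name here is invented for illustration, not an existing
standard):

```xml
<!-- Hypothetical /login-methods.xml served at the site root -->
<login-methods>
  <area host="forums.foobricks.ninja">
    <method type="openid"/>
  </area>
  <area host="demos.foobricks.ninja">
    <method type="sceneid" provider="https://id.scene.org/"/>
  </area>
</login-methods>
```

A browser that fetched this could match the host it is visiting against
the listed areas and pick the user's preferred registered identity of the
required type.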
Part of this file could give the restrictions on the password (the fewer,
the better, for the most part) - perhaps as a regex. It is important that
this part be machine-readable: a password-keeper application could then
generate better random passwords, and could check, as often as it wants
and entirely offline, whether a user-chosen password would be valid. I
think XML would be ideal for such a file, but it could be in any standard
format.
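A minimal sketch of how a password keeper might use such a published
policy, assuming the restrictions arrive as a single regex (the POLICY
pattern below is a made-up example, not from any real site):

```python
import re
import secrets
import string

# Hypothetical policy regex as a site might publish it:
# 8-16 characters drawn from letters, digits, and a few symbols.
POLICY = r"[A-Za-z0-9!@#$%]{8,16}"


def is_valid(password: str, policy: str = POLICY) -> bool:
    """Check a candidate password against the published policy, offline."""
    return re.fullmatch(policy, password) is not None


def generate_password(length: int = 16, policy: str = POLICY) -> str:
    """Generate cryptographically random passwords until one satisfies
    the policy (naive but sufficient for simple character-class rules)."""
    alphabet = string.ascii_letters + string.digits + "!@#$%"
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        if is_valid(candidate, policy):
            return candidate
```

Generating directly from an arbitrary regex is harder in general; the
generate-and-check loop above is only workable when the policy is simple,
which is another argument for keeping the published restrictions minimal.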

Of course, for sites with poor security this will *help* rather than
hinder the attackers, but only because those sites were relying on
security through obscurity in the first place.

-Arlo James Barnes
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
