On 7/17/22 11:24 PM, J. Roeleveld wrote:
If I have 1 desktop and 1 laptop, that means 2 client machines. Add 5 servers/vms.

/Clients/ need (non-host) key pairs. Servers shouldn't need non-host key pairs. Servers should only need the clients' public keys on them.

That means 10 ssh-keys per person to manage and keep track of.

If you're using per-host-per-client key pairs, sure. If you're only using per-client key pairs and copying the public key to the server, no.

When a laptop gets replaced, I need to ensure the keys get removed from the authorized_keys files.

If the new key pair would be using the same algorithm and bit length and there is no reason to suspect compromise, then I see no reason to replace the key pair. I'd just copy the key pair from the old client to the new client and destroy it on the old client. This is especially true if the authorized_keys file has a from stanza on the public key.
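A from stanza pins where a public key may be used from, which is what makes reusing the key pair on the replacement machine low-risk. A hypothetical authorized_keys entry (host pattern and comment are made up for illustration):

```
from="laptop1.example.com,192.0.2.10" ssh-ed25519 AAAAC3NzaC1lZDI1... user@laptop1
```

With that in place, a stolen copy of the private key is useless unless the connection also originates from the listed hosts.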

Same goes for when the ssh-keys need refreshing, which, due to the sheer number of them, I never got round to.

I've not run into any situation where policy mandates that a key pair be replaced when there isn't any reason to suspect its compromise.

I actually have more than the amount mentioned above; the number of ssh-keys gets too large to manage without an automated tool to keep track of them and automate key rotation. I never got the time to create that tool and never found anything that would make it easier.

As I think about it, I'd probably leverage the comment stanza of the public key so that I could do an in place delete with sed and then append the new public key. E.g. have a comment that consists of the client's host name, some delimiter, and the date. That way it would be easy to remove any and all keys for the client in the future.
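That delete-then-append rotation can be sketched in a few lines of shell. This is a minimal sketch, assuming a hypothetical comment convention of "<hostname>--<date>" and using a throwaway file rather than a real authorized_keys:

```shell
# Sketch: rotate one client's keys in authorized_keys by matching the
# comment field. Assumes comments follow "<hostname>--<date>" (hypothetical).
AUTH=/tmp/authorized_keys.demo
cat > "$AUTH" <<'EOF'
ssh-ed25519 AAAAC3oldkey laptop1--2020-01-01
ssh-ed25519 AAAAC3otherkey desktop1--2021-06-15
EOF

CLIENT=laptop1
NEWKEY='ssh-ed25519 AAAAC3newkey laptop1--2022-07-18'

sed -i "/ ${CLIENT}--/d" "$AUTH"     # remove any and all keys for this client
printf '%s\n' "$NEWKEY" >> "$AUTH"   # append the replacement key
```

Because the hostname is in every comment, the sed delete catches every key that client ever had, not just the latest one.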

When hosts can get added and removed regularly for testing purposes, this requires a management tool.

It depends on how you configure things.

It seems as if it's possible to use the "%h" parameter when specifying the IdentityFile. So you could have a wild card stanza that would look for a file based on the host name.
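Per ssh_config(5), IdentityFile does accept tokens, %h (the remote host name) among them, so a wildcard stanza could look like this in ~/.ssh/config (the per-host file naming scheme is a hypothetical choice):

```
Host *
    IdentityFile ~/.ssh/id_ed25519_%h
```

A connection to server1 would then try ~/.ssh/id_ed25519_server1 without any per-host stanza being written.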

You could leave "root" without a valid password, making it impossible to "su -" into, and add a 2nd uid/gid 0 account with a valid password. I know of 1 organisation where a 2nd root account was added which could be used by the org's sys-admins for emergency access. (These were student-owned servers directly connected to the internet.)
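A second uid 0 account is just another /etc/passwd entry sharing uid/gid 0. A hypothetical fragment ("toor" is the traditional BSD name for such an account; "*" in the password field of root's shadow entry would lock it):

```
root:*:0:0:root:/root:/bin/sh
toor:x:0:0:emergency root:/root:/bin/sh
```

Here "su - root" fails because root has no valid password, while "su - toor" still yields a uid 0 shell for emergencies.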

I absolutely hate the idea of having multiple accounts using the same UID. I'd be far more likely to have a per host account with UID=0 / GID=0 and have the root account have a different UID / GID.

I'll need to try this at some point in the future.

I expect the "wheel" group to only be for changing into "root", that's what it's advertised as.

I've seen some binaries in the wheel group with 0550 permissions.
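That pattern restricts execution to the owner and the wheel group. Demonstrated here on a throwaway file, since a real system would use root:wheel on something under /usr/local/bin:

```shell
# Demo of group-restricted execute permission on a throwaway file.
# On a real system the owner/group would be root:wheel.
TOOL=/tmp/demo-admin-tool
: > "$TOOL"            # create an empty placeholder "binary"
chmod 0550 "$TOOL"     # owner and group may read/execute; others get nothing
stat -c %a "$TOOL"
```

Anyone outside the owning group then gets "Permission denied" before the binary's own checks even run.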

Still needs the clients to actually be running when the server runs the script. Or it needs to be added to a schedule and get triggered when the client becomes available. That would make the scheduler too complex.

Why can't the script that's running ssh simply start an agent, run ssh, then stop the agent? There's no coordination necessary.
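A minimal sketch of that wrapper, as a shell function: start a private agent, run the given command, kill the agent again. The key path and the remote command in the usage line are hypothetical:

```shell
# Sketch: give one command its own short-lived ssh-agent.
# No coordination with other jobs is needed; the agent dies with the job.
with_agent() {
    eval "$(ssh-agent -s)" >/dev/null                    # start a private agent
    ssh-add "$HOME/.ssh/backup_key" 2>/dev/null || true  # load the key, if present
    "$@"                                                 # run the wrapped command
    status=$?
    ssh-agent -k >/dev/null                              # stop the agent again
    return $status
}
```

Usage would be something like: with_agent ssh backup@server.example /usr/local/bin/run-backup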

I agree, but root-access is only needed for specific tasks, like updates. Most access is done using service-specific accounts. I only have 2 where users have shell-accounts.

Many people forget about problems on boot that require root's password.

I'd love to implement Kerberos, mostly for the SSO abilities, but I haven't found a simple-to-follow howto yet that can be easily adapted to an existing environment.

ACK



--
Grant. . . .
unix || die
