On 15/03/17 08:45, Sean Dague wrote:
On 03/13/2017 05:10 PM, Zane Bitter wrote:
<snip>
I'm not sure I agree. One can very simply inject needed credentials
into a running VM and have it interact with the cloud APIs.

Demo please!

Most Keystone backends are read-only; you can't even create a new user
account yourself. It's an admin-only API anyway. The only non-expiring
credential you even *have*, ignoring the difficulties of getting it to
the server, is your LDAP password. Would *you* put *your* LDAP password
on an internet-facing server? I would not.

So is one of the issues in supporting cloud-native flows that our user auth
system, which often needs to connect into traditional enterprise
systems, doesn't really account for that?

Yes, absolutely.

Keystone kinda sorta has a partial fix for this. Different domains can have different backends, so you can have one read-only domain for corporate user accounts backed by LDAP/Active Directory and another read/write domain backed by SQLAlchemy.

In fact, this is how Heat works around it: we require operators to create a DB-backed heat_stack_users domain, we create accounts in there, and then we give those accounts special permissions (not granted by their keystone roles) for the stacks they're associated with in Heat. It's messy, and other projects (like Kuryr) don't automatically get the benefit.
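
For illustration, here's a minimal sketch of that pattern using python-keystoneclient: an admin session creating an account in a writable, SQL-backed domain. The auth URL, credentials and names below are placeholders, and this isn't literally Heat's code, just the shape of it:

from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client

# Placeholder admin credentials; in practice these come from operator config.
auth = v3.Password(auth_url='https://keystone.example.com/v3',
                   username='admin', password='secret',
                   user_domain_name='Default',
                   project_name='admin', project_domain_name='Default')
keystone = client.Client(session=session.Session(auth=auth))

# The SQL-backed domain is writable, so we can create application accounts
# in it; the same call against the read-only LDAP-backed domain would fail.
stack_domain = keystone.domains.find(name='heat_stack_users')
agent = keystone.users.create(name='my-stack-agent',
                              domain=stack_domain,
                              password='generated-secret',
                              enabled=True)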

Nor does it help end users at the moment. There's no domain that's guaranteed to be set up for them to create user accounts in (certainly not one that's consistent across multiple OpenStack clouds), and even if there were, only admins can create user accounts on most clouds. (IIUC Rackspace is one notable exception to this, but we need stuff that's consistent across clouds.)

I definitely agree: if your cloud is using your LDAP password, which
gets you into your health insurance and direct deposit systems at your
employer, then sticking it into a cloud server is a no-go.

Thinking aloud, I wonder if user-provisionable sub-users would help
here. They would have all the same rights as the main user (except
modifying other sub-users), but would have a dedicated, user-provisioned
password. You can basically carve off the same thing from Google when
you have services that can't do the entire OAuth/two-factor path. Fastmail
rolled out something similar recently as well.

This sounds like a good idea, and could definitely be part of the solution. If you read the AWS getting-started guides, pretty much all of them have creating an IAM account to use with your project as Step #1, so that you basically never have to use the credentials of your master account, which is connected to your billing. (I suspect the reason is that most people seem to end up accidentally checking their AWS credentials into a public GitHub repo at some point. ;)

It's not a total solution, though. A user account that has all of your permissions can still do anything you can do, e.g. delete your whole application and all of its data. And your backups. In fact, it can do that in all of the projects you have a role in. (The latter is fixable by creating a user that has _no_ permissions instead, so you can then delegate your roles to it one at a time using a trust, or alternatively just by choosing which roles it inherits.)
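
To make the trust idea concrete, a rough sketch with python-keystoneclient follows; the auth URL, credentials, user/project IDs and role name are placeholders. You authenticate as yourself and delegate exactly one role on one project to the unprivileged application user, and nothing else:

from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client

# Authenticate as the delegating user (the trustor); values are placeholders.
auth = v3.Password(auth_url='https://keystone.example.com/v3',
                   username='alice', password='not-my-ldap-password',
                   user_domain_name='Default',
                   project_name='my-project', project_domain_name='Default')
sess = session.Session(auth=auth)
keystone = client.Client(session=sess)

# Delegate only the 'member' role on this one project to the low-privilege
# application user; the trustee gets nothing beyond what's listed here.
trust = keystone.trusts.create(trustor_user=sess.get_user_id(),
                               trustee_user='<app-user-id>',
                               project=sess.get_project_id(),
                               role_names=['member'],
                               impersonation=False)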

So ultimately we need to give cloud application developers total control over what the accounts they create can and cannot do, so that an application can, e.g., scale itself and heal itself but not delete itself.

cheers,
Zane.
