While your discourse is fascinating, it drifts well off the topic of security. :)
I'm trying to explore the evolution of AAA: from NIS to NIS+ to LDAP on
a corporate LAN, to branch offices over dedicated WAN, to branch offices
over the Internet, to multiple corporate data centers over the Internet,
to Cloud.

As a sysadmin I see a divide, two categories of IdM: the first is
application users, who are served by the many SAML/SSO/federation
protocols out there; the second is IT administrators, who work on
infrastructure and must rely solely on more traditional technologies
like LDAP, Kerberos, BSM, RBAC, etc. This can be on a small scale, such
as in the context of ISC, or on a large scale when outsourcing to
various cloud implementations. The point is that security standards
require that you know who is doing what, where, and when... but existing
methods don't seem nearly flexible enough to accommodate administrators
who have compliance requirements.

Per the subject of this thread I pulled out one aspect of the confusion,
that of home directories, because it's a more complicated problem to
solve than just replicating the directory. Do you have global NFS home
dirs which are Kerberos-protected? Or create per-site home dirs? Or just
VPN it all to look more traditional?

benr.

On 12/24/09 6:59 AM, Mike Gerdts wrote:
>> Now, I can use several methods to override or redirect, such as you
>> point out. But what is the best practice way to handle this size
>> deployment? Home directories are one such problem, but there are
>> plenty of others.
>>
>> This really starts to become an issue as you look toward IDM in cloud
>> deployments, where some of your servers are here and some there,
>> perhaps on different continents. I feel like I'm inventing an
>> architecture for the first time, but others have done this many
>> times before.
>
> When I think about it in an IDM context, I would be looking at an HR
> record to see where a person's office is and provision the home
> directory based on that. I am not sure I see it as an IDM task to
> handle continuous optimization of this.
>
> In a cloud environment, I would think (without having actually managed
> a cloud environment, but with managing a smallish HPC environment)
> that each workload deployed to the cloud would have a well-defined set
> of dependencies. Much like memory placement optimization, thread
> migration, etc. algorithms in an OS, it would make sense to locate the
> workload close to the storage. At such a time as there is not enough
> compute power to handle the workload, or other constraints (such as
> availability) are not being met, the workload should move to someplace
> that has more capacity. Presumably this means replicating or moving at
> least the active data set with the workload. This sounds like resource
> management, not identity management.
>
> Most likely the cloud isn't made up of interactive shell users.
> Interactive users are much less inclined to "put things where they
> belong". Where you do need to support interactive shell users, it
> would seem to make sense to isolate each to a site that will give them
> great performance. The workload that they distribute will presumably
> have well-defined data stores. As the data stores are defined, I would
> think that they would consist of a menu of characteristics. The menu
> is limited and you can't pick all. For example, "shared rw access",
> "globally distributed", and "less than 10 ms service time" don't go
> together. However, "replicated rw access" is compatible with global
> distribution and good service times.
> Without such rules in place, and applications designed to play within
> such rules, I have a really hard time seeing how a cloud can use
> global distribution while providing predictable levels of performance.
>
> But I suspect that you've already figured this out, or I am missing
> some fundamental information about how your environment works.
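
Two quick sketches below the quote, just to make sure I'm reading you
right. First, the HR-driven provisioning you describe. Something like
the following, where the site-to-server mapping and the record fields
are purely my own invention, not anything we actually run:

    # Sketch: derive a new user's home directory from an HR locality
    # attribute. SITE_NFS_SERVERS and the hr_record fields are made up
    # for illustration.
    SITE_NFS_SERVERS = {
        "denver":    "nfs01.denver.example.com",
        "frankfurt": "nfs01.frankfurt.example.com",
        "singapore": "nfs01.singapore.example.com",
    }

    def provision_home(hr_record):
        """Pick the (server, path) for a user's home directory by site."""
        site = hr_record["office"].lower()
        server = SITE_NFS_SERVERS.get(site)
        if server is None:
            raise ValueError("no NFS server defined for site %r" % site)
        # A real IdM flow would then set homeDirectory in the directory
        # and publish an auto_home entry pointing at server:path.
        return server, "/export/home/%s" % hr_record["uid"]

    print(provision_home({"uid": "benr", "office": "Denver"}))

Second, your menu of data-store characteristics. Encoding the "you
can't pick all" constraint as explicit conflict rules might look like
this (again just a sketch; the characteristic names are mine):

    # Sketch: validate a data-store definition against known-incompatible
    # combinations of characteristics.
    CONFLICTS = [
        # shared rw + global distribution + tight latency don't go together
        {"shared-rw", "globally-distributed", "service-time<10ms"},
    ]

    def validate(choices):
        """Raise if the selection contains a conflicting combination."""
        for bad in CONFLICTS:
            if bad <= choices:  # every member of a bad combo was picked
                raise ValueError("incompatible: %s" % ", ".join(sorted(bad)))
        return choices

    # Replicated rw is compatible with global distribution and good
    # service times...
    validate({"replicated-rw", "globally-distributed", "service-time<10ms"})
    # ...but shared rw is not:
    try:
        validate({"shared-rw", "globally-distributed", "service-time<10ms"})
    except ValueError as e:
        print("rejected:", e)

If the rules were captured explicitly like that, workload placement (and
the home-directory question along with it) starts to look mechanical
rather than architectural, which I think is your point.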