Re: [Openstack-operators] Passing entire disks to instances
Would an OpenStack Cinder volume meet your needs?

Tim

From: David Arroyo [mailto:dr...@aqwari.net]
Sent: 29 August 2015 17:17
To: openstack-operators@lists.openstack.org
Subject: [Openstack-operators] Passing entire disks to instances

Hello,

I would like to pass entire disks to my OpenStack instances. I have some IO-bound workloads and would like to avoid any overhead or contention by giving an instance access to N additional disks, such that it does not share those disks with other guests on the same compute node. These additional disks do not need to last longer than the instances themselves; they are "ephemeral" in that regard. There is no need for backup, no need for instance or data migration, and no need for running the instance on a separate compute node from the extra disks.

Effectively I want an "extra disks" property, as part of a flavor or image, that is handled like vCPUs without overcommit. I have done some research but have not yet found an obvious way to do this in OpenStack. Does anyone else have a similar use case, and how have you handled it?

To be more specific, we run some very large Cassandra clusters on physical hardware, with excellent performance. Cassandra is largely IO-bound, and we want to virtualize it without introducing unnecessary IO latency.

Cheers,
David

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
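A minimal sketch of the Cinder approach Tim suggests, assuming the operator has configured a backend that maps volumes onto dedicated devices; the volume and server names below are illustrative, and since the CLI calls require a live cloud the sketch only prints the commands it would run:

```shell
# Hypothetical names; a real deployment would substitute its own.
VOLUME="cassandra-data-1"
SERVER="cassandra-1"
SIZE_GB=200

# Build and print (rather than execute) the CLI calls, since they
# require a configured OpenStack cloud and credentials.
CMDS="openstack volume create --size ${SIZE_GB} ${VOLUME}
openstack server add volume ${SERVER} ${VOLUME}"
echo "$CMDS"
```

Inside the guest, the attached volume would then typically appear as an extra block device (e.g. /dev/vdb) that can be formatted and mounted for Cassandra's data directory.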
Re: [Openstack-operators] Security around enterprise credentials and OpenStack API
We create "service accounts" in AD for teams to use inside scripts or within services. We have a home-built solution to prevent credentials from being stored in clear text on the server and to allow password rotation of the service accounts without service interruption.

On 8/29/15, 9:49 AM, "Marc Heckmann" wrote:

>Hi all,
>
>I was going to post a similar question this evening, so I decided to just
>bounce on Mathieu's question. See below inline.
>
>> On Mar 31, 2015, at 8:35 PM, Matt Fischer wrote:
>>
>> Mathieu,
>>
>> We use LDAP (AD) with a fallback to MySQL. This allows us to store service
>> accounts (like nova) and "team accounts" for use in Jenkins/scripts etc. in
>> MySQL. We only do Identity via LDAP and we have a forked copy of this driver
>> (https://github.com/SUSE-Cloud/keystone-hybrid-backend) to do this. We don't
>> have any permissions to write into LDAP or move people into groups, so we
>> keep a copy of users locally for purposes of user-list operations. The only
>> interaction between OpenStack and LDAP for us is when that driver tries a
>> bind.
>>
>>> On Tue, Mar 31, 2015 at 6:06 PM, Mathieu Gagné wrote:
>>> Hi,
>>>
>>> Let's say I wish to use an existing enterprise LDAP service to manage my
>>> OpenStack users so I only have one place to manage users.
>>>
>>> How would you manage authentication and credentials from a security
>>> point of view? Do you tell your users to use their enterprise
>>> credentials or do you use another method/credentials?
>
>We too have integration with enterprise credentials through LDAP, but as you
>suggest, we certainly don't want users to use those credentials in scripts or
>store them on instances. Instead we have a custom Web portal where they can
>create separate Keystone credentials for their project/tenant, which are stored
>in Keystone's MySQL database. Our LDAP integration actually happens at a level
>above Keystone. We don't actually let users acquire Keystone tokens using
>their LDAP accounts.
>
>We're not really happy with this solution; it's a hack and we are looking to
>revamp it entirely. The problem is that I have never been able to find a clear
>answer on how to do this with Keystone.
>
>I'm actually quite partial to the way AWS IAM works, especially the instance
>"role" feature. Roles in AWS IAM are similar to trusts in Keystone, except that
>they are integrated into the instance metadata. It's pretty cool.
>
>Other than that, RBAC policies in OpenStack get us a good way towards IAM-like
>functionality. We just need a policy editor in Horizon.
>
>Anyway, the problem is around delegation of credentials which are used
>non-interactively. We need to limit what those users can do (through RBAC
>policy) but also somehow make the credentials ephemeral.
>
>If someone (a Keystone developer?) could point us in the right direction, that
>would be great.
>
>Thanks in advance.
>
>>> The reason is that (usually) enterprise credentials also give access to
>>> a whole lot of systems other than OpenStack itself. And it goes without
>>> saying that I'm not fond of the idea of storing my password in plain
>>> text to be used by some scripts I created.
>>>
>>> What's your opinion/suggestion? Do you guys have a second credential
>>> system solely used for OpenStack?
>>>
>>> --
>>> Mathieu
Re: [Openstack-operators] Security around enterprise credentials and OpenStack API
Sorry for the repost, it seems this mail was in the outbox of another machine that I hadn't turned on in a while. Please ignore.
[Openstack-operators] Passing entire disks to instances
Hello,

I would like to pass entire disks to my OpenStack instances. I have some IO-bound workloads and would like to avoid any overhead or contention by giving an instance access to N additional disks, such that it does not share those disks with other guests on the same compute node. These additional disks do not need to last longer than the instances themselves; they are "ephemeral" in that regard. There is no need for backup, no need for instance or data migration, and no need for running the instance on a separate compute node from the extra disks.

Effectively I want an "extra disks" property, as part of a flavor or image, that is handled like vCPUs without overcommit. I have done some research but have not yet found an obvious way to do this in OpenStack. Does anyone else have a similar use case, and how have you handled it?

To be more specific, we run some very large Cassandra clusters on physical hardware, with excellent performance. Cassandra is largely IO-bound, and we want to virtualize it without introducing unnecessary IO latency.

Cheers,
David
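The "handled like vCPUs without overcommit" idea boils down to simple per-host accounting: the host advertises N whole disks, each placement claims some of them, and a request is rejected once not enough remain. A toy sketch of that accounting (not real Nova scheduler code; the numbers are illustrative):

```shell
free_disks=4  # whole disks this hypothetical compute node still has unclaimed

# Try to claim $1 extra disks for an instance; print the outcome.
claim() {
  want=$1
  if [ "$want" -gt "$free_disks" ]; then
    echo "reject: want $want, only $free_disks free"
  else
    free_disks=$((free_disks - want))
    echo "claimed $want, $free_disks free"
  fi
}

claim 2   # fits: 4 disks free
claim 3   # rejected: only 2 disks remain, no overcommit allowed
```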
Re: [Openstack-operators] Stack with external vlan and intranet vlan
Many thanks, Antonio. I solved my misconfiguration error.

Ignazio

On 29 Aug 2015 11:24, "Antonio Messina" wrote:

> I have the same configuration on both network and compute nodes.
>
> Tenant networks are set to gre,vxlan, but as admin you can also create vlan
> networks if it's listed in type_drivers.
>
> .a.
>
> On 28 Aug 2015 10:06 AM, "Ignazio Cassano" wrote:
>
>> Hi Antonio, I tried the configuration you suggested in previous emails
>> without success.
>> I would be grateful if you could give me further information:
>>
>> 1) Did you modify the ml2_conf.ini on the compute node or also on the
>> neutron network node?
>> 2) Did you modify tenant_network_types? At this time we have configured
>> "tenant_network_types = gre", but we presume we must use "gre,vlan"?
>>
>> Regards
>>
>> 2015-07-25 12:48 GMT+02:00 Antonio Messina:
>>
>>> On Sat, Jul 25, 2015 at 12:38 PM, Ignazio Cassano wrote:
>>> > You are very kind, thank you.
>>> > I have only another doubt.
>>> > When in a normal scenario you create the external net, you also create
>>> > an Open vSwitch bridge (br-ex) on the network node and add the NIC
>>> > interface you have configured for internet access.
>>> > In our scenario we must have another interface on the intranet network:
>>> > must we create a bridge and add the intranet interface?
>>> > Must we modify any neutron configuration file to expose the new bridge?
>>>
>>> The standard configuration for vlan networks applies. The setup I've
>>> described does not use an external router, so you will not pass through
>>> the network node and will not use the br-ex bridge.
>>>
>>> I'm using ml2 with openvswitch, so the relevant options for ml2_conf.ini
>>> are:
>>>
>>> [ml2]
>>> type_drivers = gre,vlan,vxlan
>>> mechanism_drivers = openvswitch
>>>
>>> [ml2_type_vlan]
>>> network_vlan_ranges = vlannet:1:4000
>>>
>>> [ovs]
>>> bridge_mappings = vlannet:br-vlan
>>>
>>> br-vlan is an openvswitch bridge created on the compute node with:
>>>
>>> ovs-vsctl -- --may-exist add-br br-vlan
>>> ovs-vsctl -- --may-exist add-port br-vlan bond0
>>>
>>> in my case, bond0 is an interface on the compute node in "trunk" mode, so
>>> that packets are received with the vlan tag on the node.
>>>
>>> .a.
>>>
>>> --
>>> antonio.s.mess...@gmail.com
>>> antonio.mess...@uzh.ch +41 (0)44 635 42 22
>>> S3IT: Service and Support for Science IT http://www.s3it.uzh.ch/
>>> University of Zurich
>>> Winterthurerstrasse 190
>>> CH-8057 Zurich Switzerland
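To tie the quoted ml2 options to Neutron usage: with network_vlan_ranges = vlannet:1:4000, an admin can create a provider VLAN network on that physical network. A hedged sketch using the era's neutron CLI; the network name and VLAN id 100 are illustrative, and the command is printed rather than executed since it needs a live cloud:

```shell
# Hypothetical network name and segmentation id; "vlannet" comes from the
# quoted network_vlan_ranges setting.
NET="intranet"
VLAN_ID=100

CMD="neutron net-create ${NET} --provider:network_type vlan --provider:physical_network vlannet --provider:segmentation_id ${VLAN_ID}"
echo "$CMD"
```

A subnet would then be created on this network as usual, and instances attached to it receive traffic tagged with VLAN 100 via the br-vlan bridge mapping.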
Re: [Openstack-operators] Stack with external vlan and intranet vlan
I have the same configuration on both network and compute nodes.

Tenant networks are set to gre,vxlan, but as admin you can also create vlan networks if it's listed in type_drivers.

.a.

On 28 Aug 2015 10:06 AM, "Ignazio Cassano" wrote:

> Hi Antonio, I tried the configuration you suggested in previous emails
> without success.
> I would be grateful if you could give me further information:
>
> 1) Did you modify the ml2_conf.ini on the compute node or also on the
> neutron network node?
> 2) Did you modify tenant_network_types? At this time we have configured
> "tenant_network_types = gre", but we presume we must use "gre,vlan"?
>
> Regards
>
> 2015-07-25 12:48 GMT+02:00 Antonio Messina:
>
>> On Sat, Jul 25, 2015 at 12:38 PM, Ignazio Cassano wrote:
>> > You are very kind, thank you.
>> > I have only another doubt.
>> > When in a normal scenario you create the external net, you also create
>> > an Open vSwitch bridge (br-ex) on the network node and add the NIC
>> > interface you have configured for internet access.
>> > In our scenario we must have another interface on the intranet network:
>> > must we create a bridge and add the intranet interface?
>> > Must we modify any neutron configuration file to expose the new bridge?
>>
>> The standard configuration for vlan networks applies. The setup I've
>> described does not use an external router, so you will not pass through
>> the network node and will not use the br-ex bridge.
>>
>> I'm using ml2 with openvswitch, so the relevant options for ml2_conf.ini
>> are:
>>
>> [ml2]
>> type_drivers = gre,vlan,vxlan
>> mechanism_drivers = openvswitch
>>
>> [ml2_type_vlan]
>> network_vlan_ranges = vlannet:1:4000
>>
>> [ovs]
>> bridge_mappings = vlannet:br-vlan
>>
>> br-vlan is an openvswitch bridge created on the compute node with:
>>
>> ovs-vsctl -- --may-exist add-br br-vlan
>> ovs-vsctl -- --may-exist add-port br-vlan bond0
>>
>> in my case, bond0 is an interface on the compute node in "trunk" mode, so
>> that packets are received with the vlan tag on the node.
>>
>> .a.
>>
>> --
>> antonio.s.mess...@gmail.com
>> antonio.mess...@uzh.ch +41 (0)44 635 42 22
>> S3IT: Service and Support for Science IT http://www.s3it.uzh.ch/
>> University of Zurich
>> Winterthurerstrasse 190
>> CH-8057 Zurich Switzerland