Re: [Openstack-operators] Is the neutron port-security extension available for ML2 linux-bridge?
Thanks for the response James. The port-security extension was implemented for ML2 with OVS in Kilo, but I cannot seem to find any similar implementation for linux-bridge.

> It also works with LinuxBridge in Kilo. To gain this functionality, you'll need to upgrade the environment from Juno to Kilo.

Though I have never applied code patches to my environment before, I have to ask: is it possible to bring just enough of the Kilo ML2 changes into the current Juno install? If so, how could I go about that?

Charles

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
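For what it's worth, on Kilo the port-security extension is enabled through ML2's extension_drivers option, so a "backport" would mean carrying the extension-driver code itself into Juno, not just flipping a config flag. A sketch of the Kilo-side configuration (the type/mechanism driver values below are illustrative; only the extension_drivers line is the relevant part):

```shell
# Kilo ML2 configuration fragment enabling the port-security extension
# driver. Written to /tmp here for illustration; on a real node this
# lives in /etc/neutron/plugins/ml2/ml2_conf.ini.
cat > /tmp/ml2_conf.ini <<'EOF'
[ml2]
type_drivers = flat,vlan,vxlan
mechanism_drivers = linuxbridge
extension_drivers = port_security
EOF
grep extension_drivers /tmp/ml2_conf.ini
```

After changing this, neutron-server must be restarted for the extension driver to load.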
Re: [Openstack-operators] [openstack-dev] Which is the correct way to set ha queues in RabbitMQ
You're welcome :)

On Tue, Jul 28, 2015 at 1:15 PM, Alvise Dorigo alvise.dor...@pd.infn.it wrote: thank you very much Vishal. A.

On 28/07/2015 09:41, vishal yadav wrote: ha-all vs. HA. which one is correct ? That's the policy name; you can name it anything. Excerpt from 'man rabbitmqctl':

set_policy [-p vhostpath] {name} {pattern} {definition} [priority]
    Sets a policy.
    name - The name of the policy.
    pattern - The regular expression, which when matched against a given resource causes the policy to apply.
    definition - The definition of the policy, as a JSON term. In most shells you are very likely to need to quote this.
    priority - The priority of the policy as an integer, defaulting to 0. Higher numbers indicate greater precedence.

Regards, Vishal

On Tue, Jul 28, 2015 at 12:47 PM, Alvise Dorigo alvise.dor...@pd.infn.it wrote: Hi, I read these two documents:

http://docs.openstack.org/high-availability-guide/content/_configure_rabbitmq.html
https://www.rdoproject.org/RabbitMQ

To configure the queues in HA mode, the two docs suggest two slightly different commands. The first one says:

rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode":"all"}'

while the second one says:

rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode":"all"}'

ha-all vs. HA: which one is correct? thanks, Alvise

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
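One side note on the two commands quoted in this thread: the policy definition must be a valid JSON term, and the archive has stripped the inner quotes from both docs. A correctly quoted invocation (run once on any cluster node) would be `rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode":"all"}'`. The snippet below at least sanity-checks that the definition string parses as JSON:

```shell
# The pattern '^(?!amq\.).*' matches every queue except RabbitMQ's own
# amq.* queues. The definition must be valid JSON, so we can verify it
# locally before feeding it to rabbitmqctl.
definition='{"ha-mode":"all"}'
python3 -c "import json, sys; json.loads(sys.argv[1]); print('valid JSON')" "$definition"
```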
Re: [Openstack-operators] [puppet] module dependencies and different openstack versions
Hi, I have the same feedback as Robert: we use the openstack/puppet-[project] modules and they are quite independent. We have our own module that integrates those modules as we need, and we even deploy each service on different nodes, so we need them to be independent and we could achieve it. Kind regards, Cynthia Lopes do Sacramento

2015-07-28 9:43 GMT+02:00 Van Leeuwen, Robert rovanleeu...@ebay.com:

We currently use our own custom puppet modules to deploy openstack. I have been looking into the official openstack modules and have a few barriers to switching. We are looking at doing this a project at a time, but the modules have a lot of dependencies. E.g. they all depend on the keystone module and try to do things in keystone such as create users, service endpoints, etc. This is a pain as I don't want it to mess with keystone (for one, we don't support setting endpoints via an API) but also we don't want to move to the official keystone module at the same time. We have some custom keystone stuff which means we may never move to the official keystone puppet module. The neutron module pulls in the vswitch module, but we don't use vswitch and it doesn't seem to be a requirement of the module, so maybe it doesn't need to be in the metadata dependencies? It looks as if all the openstack puppet modules are designed to all be used at once? Does anyone else have these kinds of issues? It would be great if e.g. the neutron module would just manage neutron and not try to do things in nova, keystone, mysql, etc. The other issue we have is that we have different services in openstack running different versions. Currently we have Kilo, Juno and Icehouse versions of different bits in the same cloud. It seems as if the puppet modules are designed just to manage one openstack version? Are there any thoughts on making them support different versions at the same time? Does this work?

Hi, In my experience (I am setting up a new environment) the modules can be used "stand-alone".
It is the OpenStack module itself that comes with a combined server example. The separate modules (nova, glance, etc.) are very configurable and don't necessarily need to set up e.g. keystone. From the OpenStack module you can modify the profiles and it will not do the keystone stuff / database, etc. E.g. remove the ":nova::keystone::auth" part in the nova profile. We use r10k to select which versions to install, and it should be trivial to use Juno / Kilo stuff together (have not tested this myself). Regarding the vswitch module, I *guess* that it is regulated by the following: neutron/manifests/agents/ml2/ovs.pp: if $::neutron::params::ovs_agent_package. So unsetting that variable should not pull in the package. Cheers, Robert van Leeuwen
Re: [Openstack-operators] [openstack-dev] Which is the correct way to set ha queues in RabbitMQ
Hi Vishal, do you have an effective recipe to test RabbitMQ's HA? I have three instances of it; I've also configured nova, cinder and neutron with rabbit_ha_queues = true. Just restarting a rabbit instance seems not to be sufficient to test a real-case scenario, is it? Any advice? thanks, Alvise

On 28/07/2015 09:46, vishal yadav wrote: You're welcome :)

On Tue, Jul 28, 2015 at 1:15 PM, Alvise Dorigo alvise.dor...@pd.infn.it wrote: thank you very much Vishal. A.

On 28/07/2015 09:41, vishal yadav wrote: ha-all vs. HA. which one is correct ? That's the policy name; you can name it anything. Excerpt from 'man rabbitmqctl':

set_policy [-p vhostpath] {name} {pattern} {definition} [priority]
    Sets a policy.
    name - The name of the policy.
    pattern - The regular expression, which when matched against a given resource causes the policy to apply.
    definition - The definition of the policy, as a JSON term. In most shells you are very likely to need to quote this.
    priority - The priority of the policy as an integer, defaulting to 0. Higher numbers indicate greater precedence.

Regards, Vishal

On Tue, Jul 28, 2015 at 12:47 PM, Alvise Dorigo alvise.dor...@pd.infn.it wrote: Hi, I read these two documents:

http://docs.openstack.org/high-availability-guide/content/_configure_rabbitmq.html
https://www.rdoproject.org/RabbitMQ

To configure the queues in HA mode, the two docs suggest two slightly different commands. The first one says:

rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode":"all"}'

while the second one says:

rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode":"all"}'

ha-all vs. HA: which one is correct?
thanks, Alvise
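A hedged smoke-test recipe for the question above: mirrored queues should survive stopping the app on one node, and restarting an instance is indeed not much of a test on its own. The node name (rabbit@node2) is a placeholder; the commands are printed as a checklist here rather than executed, since they need a live cluster:

```shell
# Checklist for verifying mirrored queues on a live 3-node cluster.
# Adjust node names to your hosts; run from any cluster member.
cat <<'EOF' | tee /tmp/rabbit-ha-checklist.txt
# 1. Confirm the policy applies and queues report mirror pids:
rabbitmqctl list_policies
rabbitmqctl list_queues name policy synchronised_slave_pids
# 2. Stop one node's app (not the whole host) and re-check the mirrors:
rabbitmqctl -n rabbit@node2 stop_app
rabbitmqctl list_queues name policy synchronised_slave_pids
# 3. While that node is down, exercise the cloud (boot an instance,
#    attach a volume) to confirm RPC still flows, then bring it back:
rabbitmqctl -n rabbit@node2 start_app
EOF
```

`stop_app` leaves the Erlang VM running but removes the node from the cluster, which is a closer approximation of a broker failure than a plain service restart.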
Re: [Openstack-operators] [puppet] module dependencies and different openstack versions
We currently use our own custom puppet modules to deploy openstack. I have been looking into the official openstack modules and have a few barriers to switching. We are looking at doing this a project at a time, but the modules have a lot of dependencies. E.g. they all depend on the keystone module and try to do things in keystone such as create users, service endpoints, etc. This is a pain as I don't want it to mess with keystone (for one, we don't support setting endpoints via an API) but also we don't want to move to the official keystone module at the same time. We have some custom keystone stuff which means we may never move to the official keystone puppet module. The neutron module pulls in the vswitch module, but we don't use vswitch and it doesn't seem to be a requirement of the module, so maybe it doesn't need to be in the metadata dependencies? It looks as if all the openstack puppet modules are designed to all be used at once? Does anyone else have these kinds of issues? It would be great if e.g. the neutron module would just manage neutron and not try to do things in nova, keystone, mysql, etc. The other issue we have is that we have different services in openstack running different versions. Currently we have Kilo, Juno and Icehouse versions of different bits in the same cloud. It seems as if the puppet modules are designed just to manage one openstack version? Are there any thoughts on making them support different versions at the same time? Does this work?

Hi, In my experience (I am setting up a new environment) the modules can be used "stand-alone". It is the OpenStack module itself that comes with a combined server example. The separate modules (nova, glance, etc.) are very configurable and don't necessarily need to set up e.g. keystone. From the OpenStack module you can modify the profiles and it will not do the keystone stuff / database, etc. E.g. remove the ":nova::keystone::auth" part in the nova profile.
We use r10k to select which versions to install, and it should be trivial to use Juno / Kilo stuff together (have not tested this myself). Regarding the vswitch module, I *guess* that it is regulated by the following: neutron/manifests/agents/ml2/ovs.pp: if $::neutron::params::ovs_agent_package. So unsetting that variable should not pull in the package. Cheers, Robert van Leeuwen
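Robert's r10k approach can be sketched as a Puppetfile that pins each OpenStack module to a release branch independently (repo URLs and branch names below are illustrative). Because r10k installs exactly what the Puppetfile lists, metadata.json dependencies (e.g. neutron pulling in vswitch) are not resolved automatically, which is what makes mixing Juno and Kilo modules possible:

```shell
# Generate an illustrative r10k Puppetfile pinning modules to branches.
cat > /tmp/Puppetfile <<'EOF'
mod 'nova',
  :git => 'https://github.com/openstack/puppet-nova',
  :ref => 'stable/kilo'

mod 'glance',
  :git => 'https://github.com/openstack/puppet-glance',
  :ref => 'stable/juno'
EOF
grep -c '^mod' /tmp/Puppetfile   # prints 2
```

Running `r10k puppetfile install` against such a file checks out exactly those two modules at those refs and nothing else.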
[Openstack-operators] [nova] Quota per flavor, availability zone or a combination
A lot of us have had use cases where a cloud admin wanted to set: 1. Quota per flavor 2. Quota per availability zone 3. Quota per flavor_az (flavor and availability zone). All of these use cases require an update to the quota module. Currently, the quota module only tracks static resources like core, ram and disk, which are hardcoded in the quota module. All quota calculations during quota commit are made using static resources. If we need the ability to set quota per X, we need the ability to add a quota resource dynamically during quota-update. So, Vilobh Meshram, Josh Harlow and myself have been working on a spec to support dynamic quota resources in nova, which will satisfy all the use cases above. Here is the link to the spec: https://review.openstack.org/#/c/206160/ This spec talks about: 1. Providing the capability to create a dynamic quota resource 2. Updating metadata (flavor and AZ) for a dynamic quota resource 3. Incrementing/decrementing the dynamic quota resource value during instance creation and deletion. It would be great if we could get input from devs and operators on the spec. This is a problem that we have been trying hard to solve at Yahoo and we would love to get your feedback. The operators mailing list was not included in my email (sent yesterday) to the dev mailing list, so I am sending this email to the operators list as well. Thanks, Meghal
[Openstack-operators] [neutron][extra-dhcp-opt]How to use extra-dhcp-opt when the opt_name=static-route and opt_name=classless-static-route?
Hi all, when using extra-dhcp-opt, I find the function works well with opt_name=mtu and opt_name=router: the VM created will use the assigned MTU value or the assigned gateway. But when I create a port using --extra-dhcp-opt opt_name=static-route,opt_value=192.168.0.0/24 2.2.2.2 the VM won't use this route. opt_name=classless-static-route shows the same result. Do I use them in the wrong way? Any suggestions would be appreciated. Thank you.
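Two hedged checks that often explain this symptom (paths and the dhclient syntax are from memory, so verify against your distro): the option may reach dnsmasq just fine but be ignored by the guest, because many cloud images' DHCP clients do not request option 121 (classless static routes) by default. Printed as a checklist, with <network-id> as a placeholder for the network UUID:

```shell
# Diagnostic checklist for extra-dhcp-opt static routes.
cat <<'EOF' | tee /tmp/dhcp-route-checklist.txt
# 1. On the network node, confirm the option was handed to dnsmasq:
grep static-route /var/lib/neutron/dhcp/<network-id>/opts
# 2. In the guest, make sure the DHCP client actually requests
#    option 121; for ISC dhclient, /etc/dhcp/dhclient.conf needs:
#    also request classless-static-routes;
# 3. Then renew the lease and check the routing table:
dhclient -r eth0 && dhclient eth0 && ip route
EOF
```

If step 1 shows the option in dnsmasq's opts file but the route never appears in the guest, the client-side request (step 2) is the usual culprit.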
Re: [Openstack-operators] [openstack-dev] [glance] Removal of Catalog Index Service from Glance
On 27/07/15 19:24 +, Ian Cordasco wrote:

On 7/27/15, 11:29, Louis Taylor lo...@kragniz.eu wrote:

On Fri, Jul 17, 2015 at 07:50:55PM +0100, Louis Taylor wrote: Hi operators, In Kilo, we added the Catalog Index Service as an experimental API in Glance. It soon became apparent this would be better suited as a separate project, so it was split into the Searchlight project: https://wiki.openstack.org/wiki/Searchlight We've now started the process of removing the service from Glance for the Liberty release. Since the service originally had experimental status, we felt it would be okay to remove it without a cycle of deprecation. Is this something that would cause issues for any existing deployments? If you have any feelings about this one way or the other, feel free to share your thoughts on this mailing list or in the review to remove the code: https://review.openstack.org/#/c/197043/

Some time has passed and no one has complained about this, so I propose we go ahead and remove it in liberty. Cheers, Louis

+1 Commented on the review!

+2

-- @flaper87 Flavio Percoco
Re: [Openstack-operators] [openstack-dev] [glance] Removal of Catalog Index Service from Glance
yeah, that really didn't belong in glance at all.

On Tue, Jul 28, 2015 at 10:05 AM, Flavio Percoco fla...@redhat.com wrote:

On 27/07/15 19:24 +, Ian Cordasco wrote:

On 7/27/15, 11:29, Louis Taylor lo...@kragniz.eu wrote:

On Fri, Jul 17, 2015 at 07:50:55PM +0100, Louis Taylor wrote: Hi operators, In Kilo, we added the Catalog Index Service as an experimental API in Glance. It soon became apparent this would be better suited as a separate project, so it was split into the Searchlight project: https://wiki.openstack.org/wiki/Searchlight We've now started the process of removing the service from Glance for the Liberty release. Since the service originally had experimental status, we felt it would be okay to remove it without a cycle of deprecation. Is this something that would cause issues for any existing deployments? If you have any feelings about this one way or the other, feel free to share your thoughts on this mailing list or in the review to remove the code: https://review.openstack.org/#/c/197043/

Some time has passed and no one has complained about this, so I propose we go ahead and remove it in liberty. Cheers, Louis

+1 Commented on the review!

+2

-- @flaper87 Flavio Percoco
Re: [Openstack-operators] [puppet] module dependencies and different openstack versions
We use the OpenStack modules, but glue everything together with a monolithic composition module (our own). We do want to get to a place where we can upgrade/apply config/etc. to each OpenStack component separately, but haven't tackled it yet. I think it will be possible, but will take some work. I have heard of a few others who have been working toward the same thing, though I don't think there's really anything concrete in the upstream modules yet. WRT the dependencies, we use r10k with a manually populated Puppetfile, so we don't rely on the module metadata to determine which modules to pull in. That's one way to get exactly what you want rather than all the dependency sprawl. Mike

On 7/27/15, 5:10 PM, Sam Morrison sorri...@gmail.com wrote:

On 27 Jul 2015, at 11:25 pm, Emilien Macchi emil...@redhat.com wrote:

On 07/27/2015 02:32 AM, Sam Morrison wrote: We currently use our own custom puppet modules to deploy openstack. I have been looking into the official openstack modules and have a few barriers to switching. We are looking at doing this a project at a time, but the modules have a lot of dependencies. E.g. they all depend on the keystone module and try to do things in keystone such as create users, service endpoints, etc. This is a pain as I don’t want it to mess with keystone (for one, we don’t support setting endpoints via an API) but also we don’t want to move to the official keystone module at the same time. We have some custom keystone stuff which means we may never move to the official keystone puppet module.

Well, in that case it's going to be very hard for you to use the modules. Trying to give up forks and catch up to upstream is really expensive and challenging (Fuel is currently working on this). What I suggest is: 1/ have a look at the diff between your manifests and upstream ones. 2/ try to use upstream modules with the maximum number of classes, and put the rest in a custom module (or a manifest somewhere).
3/ submit patches if you think we're missing something in the modules.

The neutron module pulls in the vswitch module but we don’t use vswitch and it doesn’t seem to be a requirement of the module, so maybe it doesn’t need to be in the metadata dependencies?

AFAIK there is no conditional in metadata.json, so we need the module anyway. It should not cause any trouble for you, except if you have a custom 'vswitch' module.

Yeah, it would be nice if you could specify dependencies as well as recommendations, much like Debian packages do. We use librarian-puppet to manage all our modules and you can’t disable it installing all the dependencies. But that is another issue…

It looks as if all the openstack puppet modules are designed to all be used at once? Does anyone else have these kinds of issues? It would be great if e.g. the neutron module would just manage neutron and not try to do things in nova, keystone, mysql, etc.

We try to design our modules to work together because Puppet OpenStack is a single project composed of modules that are supposed to -together- deploy OpenStack.

All the puppet modules we use are very modular (hence the name); the openstack modules aren’t at this stage. Ideally each module would be self-contained, and then if people wanted to deploy “openstack” there could be an “openstack” module that would pull in all the individual project modules and make them work together. It’s the first tip for writing a module listed at https://docs.puppetlabs.com/puppet/latest/reference/modules_fundamentals.html#tips I guess I’m just wondering if other people are having the same issue I am? And if so, is there a way forward to make the puppet modules more modular, or do I just stick with my own modules?

In your case, I would just install the modules from source (git) and not try to pull them from Puppetforge.

The other issue we have is that we have different services in openstack running different versions.
Currently we have Kilo, Juno and Icehouse versions of different bits in the same cloud. It seems as if the puppet modules are designed just to manage one openstack version? Are there any thoughts on making them support different versions at the same time? Does this work?

1/ You're running Kilo, Juno and Icehouse in the same cloud? Wow. You're brave!

We are a large deployment spanning multiple data centres and 1000+ hosts, so upgrading in one big bang isn't an option. I don't think this is brave; it is the norm for people running large openstack clouds in production.

2/ Puppet modules do not hardcode OpenStack package versions. Our current master is targeting Liberty, but we have stable/kilo, stable/juno, etc. You can even disable the package dependency in most of the classes.

The packages aren't the issue; it's more the configs that get pushed out and so on. When config variables change location etc. with different versions, this becomes hard.

I'm not sure this is an issue here,
[Openstack-operators] RE : Can't launch docker instance, Unexpected vif_type=binding_failed.
openvswitch agent is running and the logs in compute2 are as follows:

1. OVS-cleanup.log

2015-06-20 12:52:19.976 1529 INFO neutron.agent.ovs_cleanup_util [-] OVS cleanup completed successfully
2015-06-23 15:48:43.401 1332 INFO neutron.common.config [-] Logging enabled!
2015-06-23 15:48:43.893 1332 INFO neutron.agent.ovs_cleanup_util [-] Cleaning br-int
2015-06-23 15:48:44.520 1332 INFO neutron.agent.ovs_cleanup_util [-] OVS cleanup completed successfully
2015-06-24 11:49:21.423 1770 INFO neutron.common.config [-] Logging enabled!
2015-06-24 11:49:22.123 1770 INFO neutron.agent.ovs_cleanup_util [-] Cleaning br-int
2015-06-24 11:49:22.628 1770 INFO neutron.agent.ovs_cleanup_util [-] OVS cleanup completed successfully
2015-06-25 00:21:55.634 1337 INFO neutron.common.config [-] Logging enabled!
2015-06-25 00:21:56.858 1337 INFO neutron.agent.ovs_cleanup_util [-] Cleaning br-int
2015-06-25 00:21:57.900 1337 INFO neutron.agent.ovs_cleanup_util [-] OVS cleanup completed successfully
2015-07-07 16:43:42.608 1457 INFO neutron.common.config [-] Logging enabled!
2015-07-07 16:43:43.399 1457 INFO neutron.agent.ovs_cleanup_util [-] Cleaning br-int
2015-07-07 16:43:43.792 1457 INFO neutron.agent.ovs_cleanup_util [-] OVS cleanup completed successfully
2015-07-08 15:04:31.954 1351 INFO neutron.common.config [-] Logging enabled!
2015-07-08 15:04:32.888 1351 INFO neutron.agent.ovs_cleanup_util [-] Cleaning br-int
2015-07-08 15:04:33.235 1351 INFO neutron.agent.ovs_cleanup_util [-] OVS cleanup completed successfully
2015-07-20 13:25:20.300 1550 INFO neutron.common.config [-] Logging enabled!
2015-07-20 13:25:22.665 1550 INFO neutron.agent.ovs_cleanup_util [-] Cleaning br-int
2015-07-20 13:25:22.770 1550 INFO neutron.agent.ovs_cleanup_util [-] OVS cleanup completed successfully

2. Openvswitch-agent.log

2015-07-28 13:23:29.151 4615 ERROR neutron.agent.linux.ovsdb_monitor [-] Error received from ovsdb monitor: 2015-07-28T11:23:29Z|1|fatal_signal|WARN|terminating with signal 15 (Terminated)
2015-07-28 13:23:29.190 4615 ERROR neutron.agent.linux.utils [-] Command: ['ps', '--ppid', '4764', '-o', 'pid='] Exit code: 1 Stdout: '' Stderr: ''
2015-07-28 13:23:29.835 4615 CRITICAL neutron [req-dbf6bc78-c2df-4454-9e19-5f09bf688ee9 None] AssertionError: Trying to re-send() an already-triggered event.
2015-07-28 13:23:29.835 4615 TRACE neutron Traceback (most recent call last):
2015-07-28 13:23:29.835 4615 TRACE neutron   File "/usr/bin/neutron-openvswitch-agent", line 10, in <module>
2015-07-28 13:23:29.835 4615 TRACE neutron     sys.exit(main())
2015-07-28 13:23:29.835 4615 TRACE neutron   File "/usr/lib/python2.7/dist-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1565, in main
2015-07-28 13:23:29.835 4615 TRACE neutron     agent.daemon_loop()
2015-07-28 13:23:29.835 4615 TRACE neutron   File "/usr/lib/python2.7/dist-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1485, in daemon_loop
2015-07-28 13:23:29.835 4615 TRACE neutron     self.rpc_loop(polling_manager=pm)
2015-07-28 13:23:29.835 4615 TRACE neutron   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2015-07-28 13:23:29.835 4615 TRACE neutron     self.gen.next()
2015-07-28 13:23:29.835 4615 TRACE neutron   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/polling.py", line 39, in get_polling_manager
2015-07-28 13:23:29.835 4615 TRACE neutron     pm.stop()
2015-07-28 13:23:29.835 4615 TRACE neutron   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/polling.py", line 106, in stop
2015-07-28 13:23:29.835 4615 TRACE neutron     self._monitor.stop()
2015-07-28 13:23:29.835 4615 TRACE neutron   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/async_process.py", line 89, in stop
2015-07-28 13:23:29.835 4615 TRACE neutron     self._kill()
2015-07-28 13:23:29.835 4615 TRACE neutron   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/ovsdb_monitor.py", line 99, in _kill
2015-07-28 13:23:29.835 4615 TRACE neutron     super(SimpleInterfaceMonitor, self)._kill(*args, **kwargs)
2015-07-28 13:23:29.835 4615 TRACE neutron   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/async_process.py", line 116, in _kill
2015-07-28 13:23:29.835 4615 TRACE neutron     self._kill_event.send()
2015-07-28 13:23:29.835 4615 TRACE neutron   File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 150, in send
2015-07-28 13:23:29.835 4615 TRACE neutron     assert self._result is NOT_USED, 'Trying to re-send() an already-triggered event.'
2015-07-28 13:23:29.835 4615 TRACE neutron AssertionError: Trying to re-send() an already-triggered event.
2015-07-28 13:23:29.835 4615 TRACE neutron
2015-07-28 13:23:32.197 6195 INFO neutron.common.config [-] Logging enabled!
2015-07-28 13:23:33.005 6195 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on controller:5672
2015-07-28 13:23:33.120 6195 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on
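For the vif_type=binding_failed question in the subject line: that error usually means neutron-server could not bind the port because the L2 agent on the target compute node was down (consistent with the agent crash in the log above) or its mechanism-driver/bridge configuration does not match the server's. Some hedged first checks, printed as a checklist since they need the live cloud (hostnames are examples):

```shell
# First-pass checks for vif_type=binding_failed on a compute node.
cat <<'EOF' | tee /tmp/binding-failed-checklist.txt
# 1. Is the OVS agent reported alive (":-)" in the output) on compute2?
neutron agent-list
# 2. Do the server and agents agree on mechanism drivers?
grep mechanism_drivers /etc/neutron/plugins/ml2/ml2_conf.ini
# 3. Does the integration bridge actually exist on compute2?
ovs-vsctl show
EOF
```

If the agent shows as dead, restarting it and re-checking `neutron agent-list` before retrying the instance boot is the quickest next step.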
[Openstack-operators] Cinder volume on lvm volume
Hi All: I have an LVM volume /dev/mapper/centos-images on an all-in-one installation (CentOS 7, OpenStack Icehouse). I want to configure it in Cinder to provide EBS-like block storage for instances. Can I do that? Is there any document on how to do the same? Any references will be highly appreciated. Thanks, Dev
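A hedged sketch for Icehouse: note that /dev/mapper/centos-images is a logical volume, while Cinder's LVM driver needs a dedicated *volume group* to carve volumes from. Assuming a VG named cinder-volumes has been created (e.g. with vgcreate on a spare partition), the relevant cinder.conf fragment would look like this (written to /tmp here for illustration):

```shell
# Icehouse-era cinder.conf fragment for the LVM iSCSI backend.
# On a real node this goes in /etc/cinder/cinder.conf, followed by a
# restart of the cinder-volume service.
cat > /tmp/cinder-lvm.conf <<'EOF'
[DEFAULT]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group = cinder-volumes
iscsi_helper = tgtadm
EOF
grep volume_group /tmp/cinder-lvm.conf
```

The iscsi_helper value (tgtadm vs. lioadm) depends on which iSCSI target is installed on CentOS 7, so treat it as a starting point rather than a drop-in config.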
Re: [Openstack-operators] [tags] Meeting this week
Hi all, I think it's probably a good idea to have a meeting in our scheduled slot, 1400 UTC on Thurs 30th July. I'll actually be in Beijing at the time, but I've planned to be there; if something goes wrong, though, it would be great if someone could run the meeting. I think a good discussion topic is what you'd like to do for the mid-cycle ops event, as we'll likely have a 90-minute in-person session. Regards, Tom

On 16/07/15 21:11, Tom Fifield wrote: OK, if there isn't soon an outpouring of support for this meeting, I think it's best cancelled :)

On 16/07/15 18:37, Maish Saidel-Keesing wrote: I would prefer to defer today's meeting

On 07/16/15 11:17, Tom Fifield wrote: Hi, According to the logs from last week, which are sadly in yet another directory: http://eavesdrop.openstack.org/meetings/_operator_tags/ , we do have a meeting this week, but the only agenda item (Jamespage markbaker - thoughts on packaging) didn't pan out since markbaker wasn't available. Is there interest in a meeting, and any proposed topics? ops:ha? Regards, Tom

On 16/07/15 16:10, Maish Saidel-Keesing wrote: Are we having a meeting today at 14:00 UTC?

On 06/29/15 07:39, Tom Fifield wrote: Hi, As noted last meeting, we didn't get even half way through our agenda, so we will meet this week as well. So, join us this Thursday Jul 2nd 1400 UTC in #openstack-meeting on freenode (http://www.timeanddate.com/worldclock/fixedtime.html?iso=20150702T1400) to kick off with agenda item #4: https://etherpad.openstack.org/p/ops-tags-June-2015 Previous meeting notes can be found at: http://eavesdrop.openstack.org/meetings/ops_tags/2015/ Regards, Tom