[openstack-dev] [Cinder] Confusion about the respective use cases for volume's admin_metadata, metadata and glance_image_metadata

2014-05-04 Thread Zhangleiqiang (Trump)
Hi, stackers:

I have some confusion about the respective use cases for a volume's 
admin_metadata, metadata and glance_image_metadata. 

I know glance_image_metadata comes from the image which the volume is created from, 
and it is immutable. Glance_image_metadata is used for many cases, such as 
billing, RAM requirements, etc. It also includes properties which can affect 
the usage pattern of the volume: for example, a volume with hw_scsi_mode=virtio-scsi 
is assumed to have the corresponding virtio-scsi driver installed, and when 
booting from it with the scsi bus type it will be attached as a device of a 
virtio-scsi controller, which gives higher performance.

However, a volume's contents change constantly, which may result in 
situations like the following:

1. If a volume was not created from an image, or was created from an image 
without the hw_scsi_mode property, but the virtio-scsi driver is later installed 
manually, there is no way to make the volume be used with a virtio-scsi 
controller when booting from it.

2. If a volume was created from an image with the hw_scsi_mode property, but 
the virtio-scsi driver is later uninstalled inside the instance, there is no 
way to make the volume not be used with a virtio-scsi controller when booting 
from it.

For the first situation, is it suitable to set corresponding metadata on the 
volume? Should we use metadata or admin_metadata? I notice that volumes have 
attach_mode and readonly admin_metadata and empty metadata after they are 
created, and I can't find the respective use cases for admin_metadata and 
metadata.
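
For reference, a quick sketch of where the three kinds of metadata show up from
the CLI today (assuming a recent python-cinderclient; hw_scsi_mode is just the
property from the example above, and whether Nova honors plain volume metadata
for bus selection is exactly the open question here):

    # user-editable volume metadata (the "metadata" field)
    cinder metadata <volume-id> set hw_scsi_mode=virtio-scsi

    # volume_image_metadata copied from Glance appears in the volume details
    cinder show <volume-id>

    # admin_metadata (attach_mode, readonly) is not exposed as ordinary
    # metadata; e.g. the readonly flag is toggled through its own admin call
    cinder readonly-mode-update <volume-id> True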

For the second situation, what is the better way to handle it?

Any advice?


--
zhangleiqiang (Trump)

Best Regards



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nova compute error

2014-05-04 Thread abhishek jain
Hi

I changed the ebtables version, i.e. to ebtables_2.0.9.2-2ubuntu3_powerpc.deb
from version 2.0.10-4, and the errors are now reduced to the single one below...


2014-05-03 11:26:42.052 12065 TRACE nova.openstack.common.rpc.amqp if
ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed',
dom=self)
2014-05-03 11:26:42.052 12065 TRACE nova.openstack.common.rpc.amqp
libvirtError: Error while building firewall: Some rules could not be
created for interface tap9dafa3dd-7a: Failure to execute command '$EBT -t
nat -A libvirt-J-tap9dafa3dd-7a  -j J-tap9dafa3dd-7a-mac' : 'The kernel
doesn't support a certain ebtables extension, consider recompiling your
kernel or insmod the extension.'.
2014-05-03 11:26:42.052 12065 TRACE nova.openstack.common.rpc.amqp
2014-05-03 11:26:42.052 12065 TRACE nova.openstack.common.rpc.amqp
2014-05-03 11:26:45.374 12065 DEBUG nova.openstack.common.rpc.amqp [-]
Making synchronous call on conductor ... multicall /opt/stack/nova/nova/o

I'm using Ubuntu 13.10 on the powerpc architecture.
Is there any other way of booting a VM without the dependency on
ebtables? Please let me know.
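
For what it's worth, a quick sketch of checking whether the running kernel
actually exposes the ebtables nat table that libvirt's rules target (commands
assume a stock Ubuntu-style kernel; module names may differ on a custom
powerpc build):

    # see which ebtables modules are currently loaded
    lsmod | grep ebt

    # load the nat table used by libvirt's per-interface chains
    sudo modprobe ebtable_nat

    # confirm the table can be listed
    sudo ebtables -t nat -L

If the module does not exist at all, the kernel was built without the relevant
CONFIG_BRIDGE_EBT_* options and would need to be rebuilt. If security groups
are handled entirely by Neutron, some deployments also avoid the nova-side
rules by setting firewall_driver = nova.virt.firewall.NoopFirewallDriver in
nova.conf, but whether that applies here depends on your setup.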

Thanks


On Sat, May 3, 2014 at 12:55 PM, abhishek jain ashujain9...@gmail.com wrote:

 Hi all

 I want to boot a VM from the controller node onto the compute node using
 devstack. All my services such as nova, q-agt, nova-compute, neutron, etc. are
 running properly both on the compute node as well as on the controller node. I'm
 also able to boot VMs on the controller node from the OpenStack dashboard.
 However, the issue comes when I'm booting a VM from the controller node onto
 the compute node.
 Following is the error in the nova-compute logs when I'm trying to boot a VM
 on the compute node from the controller node:


 2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp if
 ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed',
 dom=self)
 2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp
 libvirtError: Error while building firewall: Some rules could not be
 created for interface tap74d2ff08-7f: Failure to execute command '$EBT -t
 nat -A libvirt-J-tap74d2ff08-7f  -j J-tap74d2ff08-7f-mac' : 'Unable to
 update the kernel. Two possible causes:
 2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp 1.
 Multiple ebtables programs were executing simultaneously. The ebtables
 2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp
 userspace tool doesn't by default support multiple ebtables programs
 running
 2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp
 concurrently. The ebtables option --concurrent or a tool like flock can be
 2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp used
 to support concurrent scripts that update the ebtables kernel tables.
 2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp 2. The
 kernel doesn't support a certain ebtables extension, consider
 2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp
 recompiling your kernel or insmod the extension.
 2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp .'.
 2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp
 2014-04-30 05:10:22.452 17049 TRACE nova.openstack.common.rpc.amqp
 2014-04-30 05:10:29.066 17049 DEBUG nova.openstack.common.rpc.amqp [-]
 Making synchronous call on conductor ... multicall /opt/stack/nova/nova/o

 From the logs it appears that the command $EBT -t nat -A
 libvirt-J-tap74d2ff08-7f -j J-tap74d2ff08-7f-mac is not able to update the
 kernel with the ebtables rules. I have also enabled the ebtables modules (*)
 in my kernel.

 Please help me regarding this.
 Also, is there any other way of booting the VM without updating the rules
 in the kernel?

 Thanks
 Abhishek Jain


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-04 Thread Sean Dague
On 05/03/2014 03:53 PM, gustavo panizzo gfa wrote:
 On 05/02/2014 11:09 AM, Mark McClain wrote:

 To throw something out, what if we moved to using config-dir for optional 
 configs since it would still support plugin scoped configuration files.  

 Neutron Servers/Network Nodes
 /etc/neutron.d
  neutron.conf  (Common Options)
  server.d (all plugin/service config files )
  service.d (all service config files)


 Hypervisor Agents
 /etc/neutron
  neutron.conf
  agent.d (Individual agent config files)


 The invocations would then be static:

 neutron-server —config-file /etc/neutron/neutron.conf —config-dir 
 /etc/neutron/server.d

 Service Agents:
 neutron-l3-agent —config-file /etc/neutron/neutron.conf —config-dir 
 /etc/neutron/service.d

 Hypervisors (assuming the consolidated L2 is finished this cycle):
 neutron-l2-agent —config-file /etc/neutron/neutron.conf —config-dir 
 /etc/neutron/agent.d

 Thoughts?
 
  I like this idea; it makes it easy to use a configuration manager to set up
  neutron, and it also fits real life, where you sometimes need
  more than one l3 agent running on the same box.

Question (because I honestly don't know), when would you want more than
1 l3 agent running on the same box?

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-05-04 Thread Jay Lau
The topic is Scheduler hints for VM life cycle:
http://junodesignsummit.sched.org/event/77801877aa42b595f14ae8b020cd1999#.
Thanks.


2014-05-04 10:06 GMT+08:00 Qiming Teng teng...@linux.vnet.ibm.com:

 On Thu, May 01, 2014 at 08:49:11PM +0800, Jay Lau wrote:
  Jay Pipes and all, I'm planning to merge this topic to
 
  http://junodesignsummit.sched.org/event/77801877aa42b595f14ae8b020cd1999 after
   some discussion in this week's Gantt IRC meeting, hope it is OK.
 
  Thanks!

 The link above didn't work.  How about telling us the name of the topic?


  2014-05-01 19:56 GMT+08:00 Day, Phil philip@hp.com:
 

 In the original API there was a way to remove members from the
 group.
 This didn't make it into the code that was submitted.
   
Well, it didn't make it in because it was broken. If you add an
 instance
   to a
group after it's running, a migration may need to take place in
 order to
   keep
the semantics of the group. That means that for a while the policy
 will
   be
being violated, and if we can't migrate the instance somewhere to
   satisfy the
policy then we need to either drop it back out, or be in violation.
   Either some
additional states (such as being queued for inclusion in a group,
 etc)
   may be
required, or some additional footnotes on what it means to be in a
 group
might have to be made.
   
It was for the above reasons, IIRC, that we decided to leave that bit
   out since
the semantics and consequences clearly hadn't been fully thought-out.
Obviously they can be addressed, but I fear the result will be ...
 ugly.
   I think
there's a definite possibility that leaving out those dynamic
 functions
   will look
more desirable than an actual implementation.
   
    If we look at a server group as a general container of servers, that may
    have an attribute that expresses scheduling policy, then it doesn't seem too
    ugly to restrict the conditions under which an add is allowed to only those
    that don't break the (optional) policy. We wouldn't even have to go to the
    scheduler to work this out.
  


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Monitoring as a Service

2014-05-04 Thread Thomas Goirand
On 05/02/2014 05:17 AM, Alexandre Viau wrote:
 Hello Everyone!
 
 My name is Alexandre Viau from Savoir-Faire Linux.
 
 We have submitted a Monitoring as a Service blueprint and need feedback.
 
 Problem to solve: Ceilometer's purpose is to track and *measure/meter* usage 
 information collected from OpenStack components (originally for billing). 
 While Ceilometer is useful for the cloud operators and infrastructure 
 metering, it is not a *monitoring* solution for the tenants and their 
 services/applications running in the cloud because it does not allow for 
 service/application-level monitoring and it ignores detailed and precise 
 guest system metrics.
 
 Proposed solution: We would like to add Monitoring as a Service to Openstack
 
 Just like Rackspace's Cloud monitoring, the new monitoring service - let's 
 call it OpenStackMonitor for now - would let users/tenants keep track of 
 their resources on the cloud and receive instant notifications when they 
 require attention.
 
 This RESTful API would enable users to create multiple monitors with 
 predefined checks, such as PING, CPU usage, HTTPS and SMTP or custom checks 
 performed by a Monitoring Agent on the instance they want to monitor.
 
 Predefined checks such as CPU and disk usage could be polled from Ceilometer. 
 Other predefined checks would be performed by the new monitoring service 
 itself. Checks such as PING could be flagged to be performed from multiple 
 sites.
 
 Custom checks would be performed by an optional Monitoring Agent. Their 
 results would be polled by the monitoring service and stored in Ceilometer.
 
 If you wish to collaborate, feel free to contact me at 
 alexandre.v...@savoirfairelinux.com
 The blueprint is available here: 
 https://blueprints.launchpad.net/openstack-ci/+spec/monitoring-as-a-service
 
 Thanks!

I would prefer it if monitoring capabilities were added to Ceilometer rather
than adding yet another project to deal with.

What's the reason for not adding the feature to Ceilometer directly?

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Monitoring as a Service

2014-05-04 Thread John Griffith
On Sun, May 4, 2014 at 9:37 AM, Thomas Goirand z...@debian.org wrote:

 On 05/02/2014 05:17 AM, Alexandre Viau wrote:
  Hello Everyone!
 
  My name is Alexandre Viau from Savoir-Faire Linux.
 
  We have submitted a Monitoring as a Service blueprint and need feedback.
 
  Problem to solve: Ceilometer's purpose is to track and *measure/meter*
 usage information collected from OpenStack components (originally for
 billing). While Ceilometer is useful for the cloud operators and
 infrastructure metering, it is not a *monitoring* solution for the tenants
 and their services/applications running in the cloud because it does not
 allow for service/application-level monitoring and it ignores detailed and
 precise guest system metrics.
 
  Proposed solution: We would like to add Monitoring as a Service to
 Openstack
 
  Just like Rackspace's Cloud monitoring, the new monitoring service -
 let's call it OpenStackMonitor for now - would let users/tenants keep track
 of their resources on the cloud and receive instant notifications when
 they require attention.
 
  This RESTful API would enable users to create multiple monitors with
 predefined checks, such as PING, CPU usage, HTTPS and SMTP or custom checks
 performed by a Monitoring Agent on the instance they want to monitor.
 
  Predefined checks such as CPU and disk usage could be polled from
 Ceilometer. Other predefined checks would be performed by the new
 monitoring service itself. Checks such as PING could be flagged to be
 performed from multiple sites.
 
  Custom checks would be performed by an optional Monitoring Agent. Their
 results would be polled by the monitoring service and stored in Ceilometer.
 
  If you wish to collaborate, feel free to contact me at
 alexandre.v...@savoirfairelinux.com
  The blueprint is available here:
 https://blueprints.launchpad.net/openstack-ci/+spec/monitoring-as-a-service
 
  Thanks!

  I would prefer it if monitoring capabilities were added to Ceilometer rather
  than adding yet another project to deal with.

 What's the reason for not adding the feature to Ceilometer directly?

 Thomas


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I'd also be interested in the overlap between your proposal and
Ceilometer.  It seems at first thought that it would be better to introduce
the monitoring functionality into Ceilometer and make that project more
diverse, as opposed to yet another project.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-04 Thread Mark McClain


 On May 4, 2014, at 8:08, Sean Dague s...@dague.net wrote:
 
 Question (because I honestly don't know), when would you want more than
 1 l3 agent running on the same box?

For the legacy case where there are multiple external networks connected to a 
node on different bridges.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-04 Thread Armando M.
If the consensus is to unify all the config options into a single
configuration file, I'd suggest following what the Nova folks did with
[1], which I think is what Salvatore was also hinting at. This will also
help mitigate needless source code conflicts that would inevitably
arise when merging competing changes to the same file.

I personally do not like having a single file with gazillion options
(the same way I hate source files with gazillion LOC's but I digress
;), but I don't like a proliferation of config files either. So I
think what Mark suggested below makes sense.

Cheers,
Armando

[1] - 
https://github.com/openstack/nova/blob/master/etc/nova/README-nova.conf.txt

On 2 May 2014 07:09, Mark McClain mmccl...@yahoo-inc.com wrote:

 On May 2, 2014, at 7:39 AM, Sean Dague s...@dague.net wrote:

 Some non insignificant number of devstack changes related to neutron
 seem to be neutron plugins having to do all kinds of manipulation of
 extra config files. The grenade upgrade issue in neutron was because of
 some placement change on config files. Neutron seems to have *a ton* of
 config files and is extremely sensitive to their locations/naming, which
 also seems like it ends up in flux.

 We have grown in the number of configuration files and I do think some of the 
 design decisions made several years ago should probably be revisited.  One of 
 the drivers of multiple configuration files is the way that Neutron is 
 currently packaged [1][2].  We're packaged significantly differently than the 
 other projects, so the thinking in the early years was that each 
 plugin/service, since it was packaged separately, needed its own config file.  
 This causes problems because it often involves changing the init script 
 invocation when the plugin is changed, versus only changing the contents of the init 
 script.  I'd like to see Neutron changed to be a single package, similar to 
 the way Cinder is packaged, with the default config being ML2.


 Is there an overview somewhere to explain this design point?

 Sadly no.  It’s a historical convention that needs to be reconsidered.


 All the other services have a single config config file designation on
 startup, but neutron services seem to need a bunch of config files
 correct on the cli to function (see this process list from recent
 grenade run - http://paste.openstack.org/show/78430/ note you will have
 to horiz scroll for some of the neutron services).

 Mostly it would be good to understand this design point, and if it could
 be evolved back to the OpenStack norm of a single config file for the
 services.


 +1 to evolving into a more limited set of files.  The trick is how we 
 consolidate the agent, server, plugin and/or driver options, or maybe we don't 
 consolidate and use config-dir more.  In some cases, the files share a set of 
 common options and in other cases there are divergent options [3][4].   
 Outside of testing, the agents are not installed on the same system as the 
 server, so we need to ensure that the agent configuration files can stand 
 alone.

 To throw something out, what if we moved to using config-dir for optional 
 configs since it would still support plugin scoped configuration files.

 Neutron Servers/Network Nodes
 /etc/neutron.d
 neutron.conf  (Common Options)
 server.d (all plugin/service config files )
 service.d (all service config files)


 Hypervisor Agents
 /etc/neutron
 neutron.conf
 agent.d (Individual agent config files)


 The invocations would then be static:

 neutron-server —config-file /etc/neutron/neutron.conf —config-dir 
 /etc/neutron/server.d

 Service Agents:
 neutron-l3-agent —config-file /etc/neutron/neutron.conf —config-dir 
 /etc/neutron/service.d

 Hypervisors (assuming the consolidated L2 is finished this cycle):
 neutron-l2-agent —config-file /etc/neutron/neutron.conf —config-dir 
 /etc/neutron/agent.d

 Thoughts?

 mark

 [1] http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-7/
 [2] 
 http://packages.ubuntu.com/search?keywords=neutronsearchon=namessuite=trustysection=all
 [3] 
 https://git.openstack.org/cgit/openstack/neutron/tree/etc/neutron/plugins/nuage/nuage_plugin.ini#n2
 [4]https://git.openstack.org/cgit/openstack/neutron/tree/etc/neutron/plugins/bigswitch/restproxy.ini#n3
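
As a concrete illustration of the config-dir layout proposed above (the file
names below are invented, and oslo.config's --config-dir simply parses the
*.conf files it finds there in sorted order, so the split is entirely up to
the packager):

    /etc/neutron/neutron.conf            # common options
    /etc/neutron/server.d/
        ml2.conf                         # [ml2] type_drivers, mechanism_drivers, ...
        ml2_type_vxlan.conf              # [ml2_type_vxlan] vni_ranges, ...
    /etc/neutron/service.d/
        l3_agent.conf                    # [DEFAULT] interface_driver, ...
        dhcp_agent.conf

    neutron-server --config-file /etc/neutron/neutron.conf \
                   --config-dir /etc/neutron/server.d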
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-04 Thread John Dickinson
To add some color, Swift supports both single conf files and conf.d 
directory-based configs. See 
http://docs.openstack.org/developer/swift/deployment_guide.html#general-service-configuration.

The single config file pattern is quite useful for simpler configurations, 
but the directory-based ones become especially useful when looking at cluster 
configuration management tools--stuff that auto-generates and composes config 
settings (i.e. non-hand-curated configs). For example, the conf.d configs can 
support each middleware config or background daemon process in a separate file. 
Or server settings in one file and common logging settings in another.

(Also, to answer before it's asked [but I don't want to derail the current 
thread], I'd be happy to look at oslo config parsing if it supports the same 
functionality.)
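
As an illustration of the Swift pattern (directory and file names here are
invented; Swift merges the .conf files it finds in a *.conf.d directory, per
the deployment guide linked above):

    /etc/swift/proxy-server.conf.d/
        00_base.conf        # [DEFAULT] bind_port, workers, user
        10_logging.conf     # shared log_* settings
        20_pipeline.conf    # [pipeline:main] pipeline = ...
        30_tempauth.conf    # [filter:tempauth] one middleware section per file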

--John




On May 4, 2014, at 9:49 AM, Armando M. arma...@gmail.com wrote:

 If the consensus is to unify all the config options into a single
 configuration file, I'd suggest following what the Nova folks did with
 [1], which I think is what Salvatore was also hinting at. This will also
 help mitigate needless source code conflicts that would inevitably
 arise when merging competing changes to the same file.
 
 I personally do not like having a single file with gazillion options
 (the same way I hate source files with gazillion LOC's but I digress
 ;), but I don't like a proliferation of config files either. So I
 think what Mark suggested below makes sense.
 
 Cheers,
 Armando
 
 [1] - 
 https://github.com/openstack/nova/blob/master/etc/nova/README-nova.conf.txt
 
 On 2 May 2014 07:09, Mark McClain mmccl...@yahoo-inc.com wrote:
 
 On May 2, 2014, at 7:39 AM, Sean Dague s...@dague.net wrote:
 
 Some non insignificant number of devstack changes related to neutron
 seem to be neutron plugins having to do all kinds of manipulation of
 extra config files. The grenade upgrade issue in neutron was because of
 some placement change on config files. Neutron seems to have *a ton* of
 config files and is extremely sensitive to their locations/naming, which
 also seems like it ends up in flux.
 
 We have grown in the number of configuration files and I do think some of 
 the design decisions made several years ago should probably be revisited.  
 One of the drivers of multiple configuration files is the way that Neutron 
  is currently packaged [1][2].  We're packaged significantly differently than 
 the other projects so the thinking in the early years was that each 
 plugin/service since it was packaged separately needed its own config file.  
 This causes problems because often it involves changing the init script 
 invocation if the plugin is changed vs only changing the contents of the 
 init script.  I’d like to see Neutron changed to be a single package similar 
 to the way Cinder is packaged with the default config being ML2.
 
 
 Is there an overview somewhere to explain this design point?
 
 Sadly no.  It’s a historical convention that needs to be reconsidered.
 
 
 All the other services have a single config config file designation on
 startup, but neutron services seem to need a bunch of config files
 correct on the cli to function (see this process list from recent
 grenade run - http://paste.openstack.org/show/78430/ note you will have
 to horiz scroll for some of the neutron services).
 
 Mostly it would be good to understand this design point, and if it could
 be evolved back to the OpenStack norm of a single config file for the
 services.
 
 
 +1 to evolving into a more limited set of files.  The trick is how we 
 consolidate the agent, server, plugin and/or driver options or maybe we 
 don’t consolidate and use config-dir more.  In some cases, the files share a 
 set of common options and in other cases there are divergent options [3][4]. 
   Outside of testing the agents are not installed on the same system as the 
 server, so we need to ensure that the agent configuration files should stand 
 alone.
 
  To throw something out, what if we moved to using config-dir for optional 
 configs since it would still support plugin scoped configuration files.
 
 Neutron Servers/Network Nodes
 /etc/neutron.d
neutron.conf  (Common Options)
server.d (all plugin/service config files )
service.d (all service config files)
 
 
 Hypervisor Agents
 /etc/neutron
neutron.conf
agent.d (Individual agent config files)
 
 
 The invocations would then be static:
 
 neutron-server —config-file /etc/neutron/neutron.conf —config-dir 
 /etc/neutron/server.d
 
 Service Agents:
 neutron-l3-agent —config-file /etc/neutron/neutron.conf —config-dir 
 /etc/neutron/service.d
 
  Hypervisors (assuming the consolidated L2 is finished this cycle):
 neutron-l2-agent —config-file /etc/neutron/neutron.conf —config-dir 
 /etc/neutron/agent.d
 
 Thoughts?
 
 mark
 
 [1] http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-7/
 [2] 
 

Re: [openstack-dev] Monitoring as a Service

2014-05-04 Thread Denis Makogon
Hello to All.

I also +1 this idea. As I see it, the Telemetry program (according to
Launchpad) covers infrastructure metrics (networking,
etc.) as well as in-compute-instance metrics/monitoring.
So the best option, I guess, is to propose adding such a great feature to
Ceilometer. In-compute-instance monitoring would be a great value-add to
upstream Ceilometer.
For me, it's also a good chance to integrate well-known, production-ready
monitoring systems that have tons of specific plugins (like Nagios, etc.).

Best regards,
Denis Makogon

On Sunday, May 4, 2014, John Griffith wrote:




 On Sun, May 4, 2014 at 9:37 AM, Thomas Goirand 
 z...@debian.org
  wrote:

 On 05/02/2014 05:17 AM, Alexandre Viau wrote:
  Hello Everyone!
 
  My name is Alexandre Viau from Savoir-Faire Linux.
 
  We have submitted a Monitoring as a Service blueprint and need feedback.
 
  Problem to solve: Ceilometer's purpose is to track and *measure/meter*
 usage information collected from OpenStack components (originally for
 billing). While Ceilometer is useful for the cloud operators and
 infrastructure metering, it is not a *monitoring* solution for the tenants
 and their services/applications running in the cloud because it does not
 allow for service/application-level monitoring and it ignores detailed and
 precise guest system metrics.
 
  Proposed solution: We would like to add Monitoring as a Service to
 Openstack
 
  Just like Rackspace's Cloud monitoring, the new monitoring service -
 let's call it OpenStackMonitor for now - would let users/tenants keep track
 of their resources on the cloud and receive instant notifications when
 they require attention.
 
  This RESTful API would enable users to create multiple monitors with
 predefined checks, such as PING, CPU usage, HTTPS and SMTP or custom checks
 performed by a Monitoring Agent on the instance they want to monitor.
 
  Predefined checks such as CPU and disk usage could be polled from
 Ceilometer. Other predefined checks would be performed by the new
 monitoring service itself. Checks such as PING could be flagged to be
 performed from multiple sites.
 
  Custom checks would be performed by an optional Monitoring Agent. Their
 results would be polled by the monitoring service and stored in Ceilometer.
 
  If you wish to collaborate, feel free to contact me at
 alexandre.v...@savoirfairelinux.com
  The blueprint is available here:
 https://blueprints.launchpad.net/openstack-ci/+spec/monitoring-as-a-service
 
  Thanks!

 I would prefer it if monitoring capabilities were added to Ceilometer rather
 than adding yet another project to deal with.

 What's the reason for not adding the feature to Ceilometer directly?

 Thomas


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 I'd also be interested in the overlap between your proposal and
 Ceilometer.  It seems at first thought that it would be better to introduce
 the monitoring functionality into Ceilometer and make that project more
 diverse, as opposed to yet another project.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ML2] L2 population mechanism driver

2014-05-04 Thread Sławek Kapłoński
Hello,

Recently I wanted to try using the L2pop mechanism driver in ML2 (with openvswitch 
agents on the compute nodes). But every time I try to spawn an instance I get a 
"binding failed" error. After some searching in the code I found that the l2pop 
driver does not implement the bind_port method, and as it inherits directly from 
MechanismDriver, this method is in fact not implemented. 
Is this OK and should this mechanism driver be used in some other way, or is 
there a bug in this driver in that it is missing this method?

Best regards
Slawek Kaplonski
sla...@kaplonski.pl

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Monitoring as a Service

2014-05-04 Thread John Dickinson
One of the advantages of the program concept within OpenStack is that separate 
code projects with complementary goals can be managed under the same program 
without needing to be the same codebase. The most obvious example is the 
server and client projects that most programs have.

This may be something that can be used here, if it doesn't make sense to extend 
the ceilometer codebase itself.

--John





On May 4, 2014, at 12:30 PM, Denis Makogon dmako...@mirantis.com wrote:

 Hello to All.
 
 I also +1 this idea. As I see it, the Telemetry program (according to Launchpad) 
 covers infrastructure metrics (networking, etc.) as well as 
 in-compute-instance metrics/monitoring.
 So the best option, I guess, is to propose adding such a great feature to 
 Ceilometer. In-compute-instance monitoring would be a great value-add to 
 upstream Ceilometer.
 For me, it's also a good chance to integrate well-known, production-ready 
 monitoring systems that have tons of specific plugins (like Nagios, etc.).
 
 Best regards,
 Denis Makogon
 
 On Sunday, May 4, 2014, John Griffith wrote:
 
 
 
 On Sun, May 4, 2014 at 9:37 AM, Thomas Goirand z...@debian.org wrote:
 On 05/02/2014 05:17 AM, Alexandre Viau wrote:
  Hello Everyone!
 
  My name is Alexandre Viau from Savoir-Faire Linux.
 
  We have submitted a Monitoring as a Service blueprint and need feedback.
 
  Problem to solve: Ceilometer's purpose is to track and *measure/meter* 
  usage information collected from OpenStack components (originally for 
  billing). While Ceilometer is useful for the cloud operators and 
  infrastructure metering, it is not a *monitoring* solution for the tenants 
  and their services/applications running in the cloud because it does not 
  allow for service/application-level monitoring and it ignores detailed and 
  precise guest system metrics.
 
  Proposed solution: We would like to add Monitoring as a Service to Openstack
 
  Just like Rackspace's Cloud monitoring, the new monitoring service - let's 
  call it OpenStackMonitor for now - would let users/tenants keep track of 
  their resources on the cloud and receive instant notifications when they 
  require attention.
 
  This RESTful API would enable users to create multiple monitors with 
  predefined checks, such as PING, CPU usage, HTTPS and SMTP or custom checks 
  performed by a Monitoring Agent on the instance they want to monitor.
 
  Predefined checks such as CPU and disk usage could be polled from 
  Ceilometer. Other predefined checks would be performed by the new 
  monitoring service itself. Checks such as PING could be flagged to be 
  performed from multiple sites.
 
  Custom checks would be performed by an optional Monitoring Agent. Their 
  results would be polled by the monitoring service and stored in Ceilometer.
 
  If you wish to collaborate, feel free to contact me at 
  alexandre.v...@savoirfairelinux.com
  The blueprint is available here: 
  https://blueprints.launchpad.net/openstack-ci/+spec/monitoring-as-a-service
 
  Thanks!
 
 I would prefer it if monitoring capabilities were added to Ceilometer rather
 than adding yet another project to deal with.
 
 What's the reason for not adding the feature to Ceilometer directly?
 
 Thomas
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 I'd also be interested in the overlap between your proposal and Ceilometer.  
 It seems at first thought that it would be better to introduce the monitoring 
 functionality into Ceilometer and make that project more diverse, as opposed 
 to yet another project.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-04 Thread Mandeep Dhami
I second the conf.d model.

Regards,
Mandeep


On Sun, May 4, 2014 at 10:13 AM, John Dickinson m...@not.mn wrote:

 To add some color, Swift supports both single conf files and conf.d
 directory-based configs. See
 http://docs.openstack.org/developer/swift/deployment_guide.html#general-service-configuration
 .

 The single config file pattern is quite useful for simpler
 configurations, but the directory-based ones become especially useful when
 looking at cluster configuration management tools--stuff that
 auto-generates and composes config settings (ie non hand-curated configs).
 For example, the conf.d configs can support each middleware config or
 background daemon process in a separate file. Or server settings in one
 file and common logging settings in another.

 (Also, to answer before it's asked [but I don't want to derail the current
 thread], I'd be happy to look at oslo config parsing if it supports the
 same functionality.)

 --John




 On May 4, 2014, at 9:49 AM, Armando M. arma...@gmail.com wrote:

  If the consensus is to unify all the config options into a single
  configuration file, I'd suggest following what the Nova folks did with
  [1], which I think is what Salvatore was also hinted. This will also
  help mitigate needless source code conflicts that would inevitably
  arise when merging competing changes to the same file.
 
  I personally do not like having a single file with gazillion options
  (the same way I hate source files with gazillion LOC's but I digress
  ;), but I don't like a proliferation of config files either. So I
  think what Mark suggested below makes sense.
 
  Cheers,
  Armando
 
  [1] -
 https://github.com/openstack/nova/blob/master/etc/nova/README-nova.conf.txt
 
  On 2 May 2014 07:09, Mark McClain mmccl...@yahoo-inc.com wrote:
 
  On May 2, 2014, at 7:39 AM, Sean Dague s...@dague.net wrote:
 
  Some non insignificant number of devstack changes related to neutron
  seem to be neutron plugins having to do all kinds of manipulation of
  extra config files. The grenade upgrade issue in neutron was because of
  some placement change on config files. Neutron seems to have *a ton* of
  config files and is extremely sensitive to their locations/naming,
 which
  also seems like it ends up in flux.
 
  We have grown in the number of configuration files and I do think some
 of the design decisions made several years ago should probably be
 revisited.  One of the drivers of multiple configuration files is the way
 that Neutron is currently packaged [1][2].  We’re packaged significantly
  differently than the other projects, so the thinking in the early years was
 that each plugin/service since it was packaged separately needed its own
 config file.  This causes problems because often it involves changing the
 init script invocation if the plugin is changed vs only changing the
 contents of the init script.  I’d like to see Neutron changed to be a
 single package similar to the way Cinder is packaged with the default
 config being ML2.
 
 
  Is there an overview somewhere to explain this design point?
 
  Sadly no.  It’s a historical convention that needs to be reconsidered.
 
 
  All the other services have a single config config file designation on
  startup, but neutron services seem to need a bunch of config files
  correct on the cli to function (see this process list from recent
  grenade run - http://paste.openstack.org/show/78430/ note you will
 have
  to horiz scroll for some of the neutron services).
 
  Mostly it would be good to understand this design point, and if it
 could
  be evolved back to the OpenStack norm of a single config file for the
  services.
 
 
  +1 to evolving into a more limited set of files.  The trick is how we
 consolidate the agent, server, plugin and/or driver options or maybe we
 don’t consolidate and use config-dir more.  In some cases, the files share
 a set of common options and in other cases there are divergent options
 [3][4].   Outside of testing the agents are not installed on the same
 system as the server, so we need to ensure that the agent configuration
 files should stand alone.
 
  To throw something out, what if moved to using config-dir for optional
 configs since it would still support plugin scoped configuration files.
 
  Neutron Servers/Network Nodes
  /etc/neutron.d
 neutron.conf  (Common Options)
 server.d (all plugin/service config files )
 service.d (all service config files)
 
 
  Hypervisor Agents
  /etc/neutron
 neutron.conf
 agent.d (Individual agent config files)
 
 
  The invocations would then be static:
 
  neutron-server —config-file /etc/neutron/neutron.conf —config-dir
 /etc/neutron/server.d
 
  Service Agents:
  neutron-l3-agent —config-file /etc/neutron/neutron.conf —config-dir
 /etc/neutron/service.d
 
  Hypervisors (assuming the consolidates L2 is finished this cycle):
  neutron-l2-agent —config-file /etc/neutron/neutron.conf —config-dir
 /etc/neutron/agent.d
 
 

Re: [openstack-dev] [ML2] L2 population mechanism driver

2014-05-04 Thread Narasimhan, Vivekanandan
Hi Slawek,

I think the L2 pop driver needs to be used in conjunction with other mechanism 
drivers.

It only deals with proactively informing agents about which MAC addresses became
available/unavailable on cloud nodes and is not meant for binding/unbinding ports
on segments.

If you configure mechanism_drivers=openvswitch,l2population in your 
ml2_conf.ini and restart
your neutron-server, you'll notice that bind_port is handled by OVS mechanism 
driver
(via AgentMechanismDriverBase inside ml2/drivers/mech_agent.py).
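
For reference, a minimal sketch of that combination (section and option names as
in the Icehouse-era ML2/OVS agent; the tunnel type is just an example):

    # /etc/neutron/plugins/ml2/ml2_conf.ini (neutron-server side)
    [ml2]
    type_drivers = vxlan
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch,l2population

    # OVS agent side
    [agent]
    tunnel_types = vxlan
    l2_population = True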

--
Thanks,

Vivek


-Original Message-
From: Sławek Kapłoński [mailto:sla...@kaplonski.pl]
Sent: Sunday, May 04, 2014 12:32 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [ML2] L2 population mechanism driver

Hello,

Recently I wanted to try using the L2pop mechanism driver in ML2 (with openvswitch
agents on the compute nodes). But every time I try to spawn an instance I get a
"binding failed" error. After some searching in the code I found that the l2pop
driver does not implement the bind_port method, and as it inherits directly from
MechanismDriver, this method is in fact not implemented.
Is this OK and should this mechanism driver be used in some other way, or is
there a bug in this driver in that it is missing this method?

Best regards
Slawek Kaplonski
sla...@kaplonski.pl

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-04 Thread gustavo panizzo gfa
On 05/04/2014 01:22 PM, Mark McClain wrote:
 
 
 On May 4, 2014, at 8:08, Sean Dague s...@dague.net wrote:

 Question (because I honestly don't know), when would you want more than
 1 l3 agent running on the same box?
 
 For the legacy case where there are multiple external networks connected to a 
 node on different bridges.

Legacy since when? I'm still using it on Icehouse.

Please let us know if there is something better!

thanks!

-- 
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-04 Thread Kevin Benton
External networks can be handled just like regular networks by not
specifying the external bridge. They will then be tagged with the provider
network information just like any other tenant network.
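
A sketch of that setup for reference (option and CLI flag names as in
Icehouse-era Neutron; the network and physical-network names are examples):

    # l3_agent.ini: leave the external bridge empty so external networks
    # are wired like provider networks
    external_network_bridge =

    # create the external network with explicit provider attributes;
    # 'physext' must map to a bridge in the OVS agent's bridge_mappings
    neutron net-create ext-net --router:external=True \
        --provider:network_type flat --provider:physical_network physext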


On Sun, May 4, 2014 at 6:52 PM, gustavo panizzo gfa g...@zumbi.com.arwrote:

 On 05/04/2014 01:22 PM, Mark McClain wrote:
 
 
  On May 4, 2014, at 8:08, Sean Dague s...@dague.net wrote:
 
  Question (because I honestly don't know), when would you want more than
  1 l3 agent running on the same box?
 
  For the legacy case where there are multiple external networks connected
 to a node on different bridges.

 Legacy since when? I'm still using it on Icehouse.

 Please let us know if there is something better!

 thanks!

 --
 1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Question about addit log in nova-compute.log

2014-05-04 Thread Chen CH Ji

Hi
   I saw in my compute.log has following logs which looks to me
strange at first, Free resource is negative make me confused and I take a
look at the existing code
   looks to me the logic is correct and calculation doesn't
have problem ,but the output 'Free' is confusing

   Is this on purpose or might need to be enhanced?

2014-05-05 10:51:33.732 4992 AUDIT nova.compute.resource_tracker [-] Free
ram (MB): -1559
2014-05-05 10:51:33.732 4992 AUDIT nova.compute.resource_tracker [-] Free
disk (GB): 29
2014-05-05 10:51:33.732 4992 AUDIT nova.compute.resource_tracker [-] Free
VCPUS: -3
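
For context, a sketch of why the numbers can go negative: the resource tracker's
AUDIT lines report free capacity against the physical totals, while the
scheduler applies the overcommit ratios from nova.conf, so negative values
usually just mean the node is overcommitted (the ratio values below are the
common defaults of that era; check your own config):

    # nova.conf
    ram_allocation_ratio = 1.5    # scheduler allows RAM overcommit up to 1.5x
    cpu_allocation_ratio = 16.0   # and up to 16 vCPUs per physical core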
Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Do we have a way to add a non-nova managed host to nova managed?

2014-05-04 Thread Chen CH Ji

Hi
  Not sure if it's proper to ask this question here; maybe
someone has asked this question before and there was a discussion, so please
share the info with me if we have it.

  If we have a bunch of hosts that used to be managed on their own,
with some VMs and hypervisors already running on them,
 do we have a way to bring those VMs under the management of
Nova?


Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [cinder] Do we now require schema response validation in tempest clients?

2014-05-04 Thread Christopher Yeoh
On Thu, May 1, 2014 at 1:32 PM, Ken'ichi Ohmichi ken1ohmi...@gmail.com wrote:

 Hi David,

 2014-05-01 5:44 GMT+09:00 David Kranz dkr...@redhat.com:
  There have been a lot of patches that add the validation of response
 dicts.
  We need a policy on whether this is required or not. For example, this
 patch
 
  https://review.openstack.org/#/c/87438/5
 
  is for the equivalent of 'cinder service-list' and is basically a copy of
  the nova test, which now does the validation. So two questions:
 
  Is cinder going to do this kind of checking?
  If so, should new tests be required to do it on submission?

  I'm not sure whether someone will add validation similar to what we are adding
  to the Nova API tests to the Cinder API tests as well, but it would be nice for
  Cinder and Tempest. The validation can be applied to the other projects (Cinder,
  etc.) easily because the base framework is implemented in the common rest client
  of Tempest.

  When adding new tests like https://review.openstack.org/#/c/87438 , I don't
  have a strong opinion about including the validation as well. These schemas
  are sometimes large, and combining them in the same patch would make reviews
  difficult. In the current Nova API test implementations, we are separating them
  into different patches.


Separating the schema part into a separate dependent patch probably makes
sense when they are large, but I would like to see us ratchet up the requirement
to have schema validation for the Cinder API as well.

Chris





 Thanks
 Ken'ichi Ohmichi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Meeting Tuesday May 6th at 19:00 UTC

2014-05-04 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting on Tuesday May 6th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

And in case you missed the meeting last week (like I did, since I'm
slightly better than Clark at vacations!), the logs and minutes are
available here:

Minutes : 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-04-29-18.59.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-04-29-18.59.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-04-29-18.59.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev