Re: [openstack-dev] [oslo][messaging][zmq] Discussion on zmq driver design issues

2015-03-21 Thread Li Ma
On Tue, Mar 10, 2015 at 8:14 PM, ozamiatin ozamia...@mirantis.com wrote:
 Hi Li Ma,

 Thank you very much for your reply


 It's good to hear you have a live deployment with the zmq driver.
 Is there a big divergence between your production and upstream versions
 of the driver? Besides the [1] and [2] fixes for redis, we have [5] and [6],
 critical multi-backend issues for using the driver in real-world
 deployments.

Actually, there isn't a big divergence between our driver and the
upstream version. We didn't refactor it much; we just fixed all the
bugs we ran into and implemented a socket-reuse mechanism to greatly
improve its performance. For some of the known bugs, especially the
cinder multi-backend and neutron multi-worker ones, we patched cinder
and neutron to work around them.

I discussed the problems you mention above with our cinder developer
several times. Due to the current architecture, they are really
difficult to fix in the zeromq driver; however, they are very easy to
deal with in cinder. We have patches on hand, but the implementation is
a little tricky, so upstream may not accept it. :-( No worries, I'll
sort it out soon.

By the way, we are discussing fanout performance and message
persistence. I don't have code available yet, but I've got some ideas
on how to implement it.
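
For context, the usual zmq building block for fanout is a PUB/SUB pair; a
minimal pyzmq sketch (endpoint and topic names are illustrative, and this is
not the design under discussion):

    import time
    import zmq

    ctx = zmq.Context()

    # One publisher fans each message out to every matching subscriber.
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://*:5557")                 # illustrative endpoint

    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://localhost:5557")
    sub.setsockopt_string(zmq.SUBSCRIBE, "compute")  # topic prefix filter

    time.sleep(0.2)  # slow-joiner: let the subscription propagate before sending

    pub.send_string("compute host1 is up")
    print(sub.recv_string())                 # every subscriber gets its own copy

Persistence is the harder part: PUB/SUB drops messages for absent subscribers,
so any durability guarantee has to be layered on top.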


 The only functionality for large-scale deployment that is lacking in the
 current upstream codebase is socket pool scheduling (to provide
 lifecycle management, including recycling and reusing zeromq sockets). It
 was done several months ago and we are willing to contribute. I plan
 to propose a blueprint in the next release.

 Pool, recycle and reuse sounds good for performance.

Yes, actually our implementation is a little ugly and there are no unit
tests available. Right now, I'm trying to refactor it, and hopefully
I'll submit a spec soon.
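
For illustration only, a minimal pyzmq sketch of the pool/recycle/reuse idea
(class and parameter names are invented; this is not the downstream
implementation described above):

    import time
    import zmq

    class SocketPool(object):
        """Cache REQ sockets per endpoint; reuse live ones, recycle stale ones."""

        def __init__(self, context, ttl=30):
            self.context = context
            self.ttl = ttl       # seconds a cached socket may sit idle
            self._pool = {}      # endpoint -> (socket, last_used)

        def get(self, endpoint):
            sock, last_used = self._pool.get(endpoint, (None, 0))
            if sock is not None:
                if time.time() - last_used < self.ttl:
                    self._pool[endpoint] = (sock, time.time())
                    return sock                  # reuse a live socket
                sock.close(linger=0)             # recycle a stale one
            sock = self.context.socket(zmq.REQ)  # open a fresh socket
            sock.connect(endpoint)
            self._pool[endpoint] = (sock, time.time())
            return sock

The performance win comes from skipping the connect/teardown cost on every
call, which matches the speedup Li Ma describes.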

 We also need a refactoring of the driver to reduce redundant entities
 or reconsider them (like ZmqClient or InternalContext) and to reduce code
 duplication (as with topics).
 Some topics management is also needed.
 Clear code == fewer bugs == easier to understand == easier to contribute.
 We need a discussion (with a related spec and UML diagrams) about what the
 driver architecture should be (I'm working on that).

+1, couldn't agree with you more.

 3. ZeroMQ integration

 I've been working on the integration of ZeroMQ and DevStack for a
 while and actually it is working right now. I updated the deployment
 guide [3].

 That's true, it works! :)

 I think it is the time to bring a non-voting gate for ZeroMQ and we
 can make the functional tests work.

 You can trigger it with 'check experimental'. It is broken now.

I'll figure it out soon.

 5. ZeroMQ discussion

 Here I'd like to apologize about this driver. Due to limited spare time
 and my timezone, I'm not available for IRC or other meetings or
 discussions. But if it is possible, should we create a subgroup for
 ZeroMQ and schedule meetings for it? If we can schedule in advance or at
 a fixed date and time, I'm in.

 That's a great idea.
 +1 for a zmq subgroup and meetings.

I'll open another thread to discuss this topic.


 A subfolder is actually what I mean (a python package like '_drivers');
 it should stay in oslo.messaging. A separate package like
 oslo.messaging.zeromq is overkill.
 As Doug proposed, we can do it consistently with the AMQP driver.

I suggest you go for it right now. It is really important for further
development.
If I submit new code based on the current code structure, it will
greatly complicate this work in the future.

Best regards,
-- 
Li Ma (Nick)
Email: skywalker.n...@gmail.com



Re: [openstack-dev] [ceilometer] Pipeline for notifications does not seem to work

2015-03-21 Thread Igor Degtiarov
I am just curious: have you restarted the ceilometer services after
pipeline.yaml was changed?
Igor Degtiarov
Software Engineer
Mirantis Inc
www.mirantis.com


On Sat, Mar 21, 2015 at 1:21 PM, Tim Bell tim.b...@cern.ch wrote:
 No errors in the notification logs.

 Should this work with the default ceilometer.conf file or do I need to
 enable anything?

 I’ve also tried using arithmetic. When I have a meter like “cpu” for the
 source, this fires the expression evaluation without problems. However, I
 can’t find a good way of doing the appropriate calculations using the number
 of cores. A sample calculation is below:

 expr: $(cpu)*0.98+$(vcpus)*10.0

 I have tried $(cpu.resource_metdata.vcpus) and
 $(cpu.resource_metdata.cpu_number) also. Any suggestions on an alternative
 approach that could work?

 Any suggestions for the variable name to get at the number of cores when I’m
 evaluating an expression fired by the cpu time?

 Tim


 From: gordon chung [mailto:g...@live.ca]
 Sent: 20 March 2015 20:55
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [ceilometer] Pipeline for notifications does
 not seem to work

 i can confirm it works for me as well... are there any noticeable errors in
 the ceilometer-agent-notifications log? the snippet below looks sane to me
 though.

 cheers,
 gord

 From: idegtia...@mirantis.com
 Date: Fri, 20 Mar 2015 18:35:56 +0200
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [ceilometer] Pipeline for notifications does
 not seem to work

 Hi Tim

 I've checked your case on my devstack, and I've received the new hs06 meter
 in my meter list.

 So something is wrong with your local env.


 Cheers,
 Igor D.
 Igor Degtiarov
 Software Engineer
 Mirantis Inc
 www.mirantis.com


 On Fri, Mar 20, 2015 at 5:40 PM, Tim Bell tim.b...@cern.ch wrote:

  I’m running Juno with ceilometer and trying to produce a new meter which
  is based on vcpus * F (where F is a constant that is different for each
  hypervisor).

  When I create a VM, I get a new sample for vcpus.

  However, it does not appear to fire the transformer.

  The same approach using “cpu” works OK, but that one is polled on a
  regular interval rather than being a one-off notification when the VM is
  created.

  Any suggestions or alternative approaches for how to get a sample based
  on the number of cores scaled by a fixed constant?

  Tim

  In my pipeline.yaml sources:

  - name: vcpu_source
    interval: 180
    meters:
        - vcpus
    sinks:
        - hs06_sink

  In my transformers, I have:

  - name: hs06_sink
    transformers:
        - name: unit_conversion
          parameters:
              target:
                  name: hs06
                  unit: HS06
                  type: gauge
                  scale: 47.0
    publishers:
        - notifier://
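
For what it's worth, a hedged sketch of one possible answer to the core-count
question: ceilometer's arithmetic transformer can reference fields of a
sample's metadata (note the resource_metadata spelling), so an hs06 sink along
these lines might work (untested; the vcpus metadata field is an assumption):

    - name: hs06_sink
      transformers:
          - name: "arithmetic"
            parameters:
                target:
                    name: "hs06"
                    unit: "HS06"
                    type: "gauge"
                    expr: "$(cpu) * 0.98 + $(cpu.resource_metadata.vcpus) * 10.0"
      publishers:
          - notifier://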


Re: [openstack-dev] [nova] how to handle vendor-specific API microversions?

2015-03-21 Thread Monty Taylor
On 03/21/2015 01:21 AM, Chris Friesen wrote:
 Hi,
 
 I've recently been playing around a bit with API microversions and I
 noticed something that may be problematic.
 
 The way microversions are handled, there is a monotonically increasing
 MAX_API_VERSION value in nova/api/openstack/api_version_request.py. 
 When you want to make a change you bump the minor version number and
 it's yours. End-users can set the microversion number in the request to
 indicate what they support, and all will be well.
 
 The issue is that it doesn't allow for OpenStack providers to add their
 own private microversion(s) to the API.  They can't just bump the
 microversion internally because that will conflict with the next
 microversion bump upstream (which could cause problems when they upgrade).
 
 In terms of how to deal with this, it would be relatively simple to just
 bump the major microversion number at the beginning of each new
 release.  However, that would make it difficult to backport
 bugfixes/features that use new microversions since they might overlap
 with private microversions.
 
 I think a better solution might be to expand the existing microversion
 API to include a third digit which could be considered a private
 microversion, and provide a way to check the third digit separate from
 the other two.  That way providers would have a way to add custom
 features in a backwards-compatible way without worrying about colliding
 with upstream code.

I would vote that we not make this pleasant or easy for vendors who are
wanting to add a feature to the API. As a person who uses several clouds
daily, I can tell you that a vendor choosing to do that is VERY mean to
users, and provides absolutely no value to anyone, other than allowing
someone to make a divergent differentiated fork.

Just don't do it. Seriously. It makes life very difficult for people
trying to consume these things.

The API is not the place for divergence.
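
For reference, the upstream mechanism already lets a client opt in to new
behaviour per request; a minimal sketch (endpoint and token are placeholder
assumptions):

    import requests

    NOVA = "http://nova-api:8774/v2.1"  # illustrative endpoint
    TOKEN = "..."                       # assumed: obtained from keystone

    # The client pins the compute API microversion via a request header.
    resp = requests.get(
        NOVA + "/servers",
        headers={
            "X-Auth-Token": TOKEN,
            "X-OpenStack-Nova-API-Version": "2.3",
        },
    )
    print(resp.status_code)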




Re: [openstack-dev] Gerrit downtime on 2015-03-21

2015-03-21 Thread James E. Blair
cor...@inaugust.com (James E. Blair) writes:

 Hi,

 Gerrit will be unavailable for a few hours starting at 1500 UTC on
 Saturday, March 21.

Gerrit is up and running on the new server.  Actual downtime was about 1
hour from 1500 to 1600.

Please let us know either here or on Freenode in #openstack-infra if you
notice any problems.

-Jim



Re: [openstack-dev] [oslo][cinder][nova][neutron] going forward to oslo-config-generator ...

2015-03-21 Thread Gary Kotton
Hi,
One of the issues that we had in Nova was that when we moved to the oslo
libraries, configuration options supported by the libraries were no longer
present in the generated configuration file. Is this something that is
already supported or planned (sorry for being a little ignorant here)?
In neutron things may be a little more challenging, as there are many
different plugins, and with the decomposition there may be additional
challenges. The configuration binding is done via the external decomposed
code and not in the neutron code base, so it is not clear how that code
can be parsed to generate the sample configuration.
Thanks
Gary
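
For what it's worth, the new generator pulls library options in through
namespaces listed in a generator config file; a hedged sketch (output path and
namespace list are illustrative):

    # etc/oslo-config-generator/nova.conf (illustrative path)
    [DEFAULT]
    output_file = etc/nova/nova.conf.sample
    namespace = nova
    namespace = oslo.messaging
    namespace = oslo.db

Each namespace maps to an oslo.config.opts entry point published by the
project or library, which is how library options land in the generated sample.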

On 3/21/15, 12:01 AM, Jay S. Bryant jsbry...@electronicjungle.net
wrote:

All,

Let me start with the TLDR;

Cinder, Nova and Neutron have lots of configuration options that need to
be processed by oslo-config-generator to create the
project.conf.sample file.  There are a couple of different ways this
could be done.  I have one proposal out, which has raised concerns;
there is a second approach that could be taken, which I am proposing
below.  Please read on if you have a strong opinion on the precedent we
will try to set in Cinder.  :-)

We discussed in the oslo meeting a couple of weeks ago a plan for how
Cinder was going to blaze a trail to the new oslo-config-generator.  The
result of that discussion and work is here:  [1]  It needs some more
work but has the bare bones pieces there to move to using
oslo-config-generator.

With the change I have written extensive hacking checks that ensure that
any lists that are registered with register_opts() are included in the
base cinder/opts.py file that is then a single entry point that pulls
all of the options together to generate the cinder.conf.sample file.
This has raised concern, however, that whenever a developer adds a new
list of configuration options, they are going to have to know to go back
to cinder/opts.py and add their module and option list there.  The
hacking check should catch this before code is submitted, but we are
possibly setting ourselves up for cases where the patch will fail in the
gate because updates are not made in all the correct places and because
pep8 isn't run before the patch is pushed.

It is important to note that this will not happen every time a
configuration option is changed or added, as was the case with the old
check-uptodate.sh script.  It will only happen when a new list of
configuration options is added, which is a much less likely occurrence.
To avoid this
happening at all it was proposed by the Cinder team that we use the code
I wrote for the hacking checks to dynamically go through the files and
create cinder/opts.py whenever 'tox -egenconfig' is run.  Doing this
makes me uncomfortable as it is not consistent with anything else I am
familiar with in OpenStack and is not consistent with what other
projects are doing to handle this problem.  In discussion with Doug
Hellman, the approach also seemed to cause him concern.  So, I don't
believe that is the right solution.

An alternative that may be a better solution was proposed by Doug:

We could even further reduce the occurrence of such issues by moving the
list_opts() function down into each driver and have an entry point for
oslo.config.opts in setup.cfg for each of the drivers.  As with the
currently proposed solution, the developer doesn't have to edit a top
level file for a new configuration option.  This solution adds that the
developer doesn't have to edit a top level file to add a new
configuration item list to their driver.  With this approach the change
would happen in the driver's list_opts() function, rather than in
cinder/opts.py.  The only time that setup.cfg would need to be edited is
when a new package or a new driver is added.  This would
reduce some of the already minimal burden on the developer.  We,
however, would need to agree upon some method for aggregating together
the options lists on a per package (i.e. cinder.scheduler, cinder.api)
level.  This approach, however, also has the advantage of providing a
better indication in the sample config file of where the options are
coming from.  That is an improvement over what I have currently proposed.
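
A rough sketch of that approach (module, option, and entry-point names are
illustrative, not Cinder's actual ones):

    # cinder/volume/drivers/example_driver.py (hypothetical module)
    from oslo_config import cfg

    example_opts = [
        cfg.StrOpt('example_volume_group', default='cinder-volumes',
                   help='Name of the LVM volume group to use.'),
    ]

    cfg.CONF.register_opts(example_opts)

    def list_opts():
        # oslo-config-generator calls this via the entry point below;
        # the first tuple element names the config-file section.
        return [('DEFAULT', example_opts)]

    # And, once per driver, in setup.cfg:
    #
    #   [entry_points]
    #   oslo.config.opts =
    #       cinder.volume.example = cinder.volume.drivers.example_driver:list_opts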

Does Doug's proposal sound more agreeable to everyone?  It is important
to note that the fact that some manual intervention is required to
'plumb' in the new configuration options was done by design.  There is a
little more work required to make options available to
oslo-config-generator but the ability to use different namespaces,
different sample configs, etc were added with the new generator.  These
additional capabilities were requested by other projects.  So, moving to
this design does have the potential for more long-term gain.

Thanks for taking the time to consider this!

Jay

[1] https://review.openstack.org/#/c/165431/



Re: [openstack-dev] [oslo][cinder][nova][neutron] going forward to oslo-config-generator ...

2015-03-21 Thread Arkady_Kanevsky
Jay,
That sounds reasonable.
We will need to document, in a guide for driver developers, what to do when a
new option is added or deprecated in the conf file for a driver.
I expect that nothing extra will need to be done beyond what we do now when
new functionality is added to or deprecated from the scheduler/default driver
and percolates into drivers a release later.

I can also comment directly on the patch if it makes sense.
Thanks,
Arkady

-Original Message-
From: Jay S. Bryant [mailto:jsbry...@electronicjungle.net]
Sent: Friday, March 20, 2015 5:02 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [oslo][cinder][nova][neutron] going forward to 
oslo-config-generator ...

All,

Let me start with the TLDR;

Cinder, Nova and Neutron have lots of configuration options that need to be 
processed by oslo-config-generator to create the .conf.sample file. There are a 
couple of different ways this could be done. I have one proposal out, which has 
raised concerns; there is a second approach that could be taken, which I am
proposing below. Please read on if you have a strong opinion on the precedent 
we will try to set in Cinder. :-)

We discussed in the oslo meeting a couple of weeks ago a plan for how Cinder 
was going to blaze a trail to the new oslo-config-generator. The result of that 
discussion and work is here: [1] It needs some more work but has the bare bones 
pieces there to move to using oslo-config-generator.

With the change I have written extensive hacking checks that ensure that any 
lists that are registered with register_opts() are included in the base 
cinder/opts.py file that is then a single entry point that pulls all of the 
options together to generate the cinder.conf.sample file.
This has raised concern, however, that whenever a developer adds a new list of 
configuration options, they are going to have to know to go back to 
cinder/opts.py and add their module and option list there. The hacking check 
should catch this before code is submitted, but we are possibly setting 
ourselves up for cases where the patch will fail in the gate because updates 
are not made in all the correct places and because
pep8 isn't run before the patch is pushed.

It is important to note that this will not happen every time a configuration
option is changed or added, as was the case with the old check-uptodate.sh
script. It will only happen when a new list of configuration options is added,
which is a much less likely occurrence. To avoid this happening at all it was
proposed by the
Cinder team that we use the code I wrote for the hacking checks to dynamically 
go through the files and create cinder/opts.py whenever 'tox -egenconfig' is 
run. Doing this makes me uncomfortable as it is not consistent with anything 
else I am familiar with in OpenStack and is not consistent with what other 
projects are doing to handle this problem. In discussion with Doug Hellman, the 
approach also seemed to cause him concern. So, I don't believe that is the 
right solution.

An alternative that may be a better solution was proposed by Doug:

We could even further reduce the occurrence of such issues by moving the
list_opts() function down into each driver and have an entry point for 
oslo.config.opts in setup.cfg for each of the drivers. As with the currently 
proposed solution, the developer doesn't have to edit a top level file for a 
new configuration option. This solution goes further: the developer also doesn't
have to edit a top level file to add a new configuration item list to their driver.
With this approach the change would happen in the driver's list_opts() 
function, rather than in cinder/opts.py. The only time that setup.cfg would
need to be edited is when a new package or a new driver is added.
This would reduce some of the already minimal burden on the developer. We, 
however, would need to agree upon some method for aggregating together the 
options lists on a per package (i.e. cinder.scheduler, cinder.api) level. This 
approach, however, also has the advantage of providing a better indication in 
the sample config file of where the options are coming from. That is an 
improvement over what I have currently proposed.

Does Doug's proposal sound more agreeable to everyone? It is important to note 
that the fact that some manual intervention is required to 'plumb' in the new 
configuration options was done by design. There is a little more work required 
to make options available to oslo-config-generator but the ability to use 
different namespaces, different sample configs, etc were added with the new 
generator. These additional capabilities were requested by other projects. So, 
moving to this design does have the potential for more long-term gain.

Thanks for taking the time to consider this!

Jay

[1] https://review.openstack.org/#/c/165431/



Re: [openstack-dev] [ceilometer] Pipeline for notifications does not seem to work

2015-03-21 Thread Tim Bell
No errors in the notification logs.

Should this work with the default ceilometer.conf file or do I need to enable 
anything ?

I've also tried using arithmetic. When I have a meter like "cpu" for the
source, this fires the expression evaluation without problems. However, I can't
find a good way of doing the appropriate calculations using the number of
cores. A sample calculation is below:

expr: $(cpu)*0.98+$(vcpus)*10.0

I have tried $(cpu.resource_metdata.vcpus) and
$(cpu.resource_metdata.cpu_number) also. Any suggestions on an alternative
approach that could work?

Any suggestions for the variable name to get at the number of cores when I'm
evaluating an expression fired by the cpu time?

Tim

From: gordon chung [mailto:g...@live.ca]
Sent: 20 March 2015 20:55
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ceilometer] Pipeline for notifications does not 
seem to work

i can confirm it works for me as well... are there any noticeable errors in the 
ceilometer-agent-notifications log? the snippet below looks sane to me though.

cheers,
gord

 From: idegtia...@mirantis.com
 Date: Fri, 20 Mar 2015 18:35:56 +0200
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [ceilometer] Pipeline for notifications does not
 seem to work

 Hi Tim

 I've checked your case on my devstack, and I've received the new hs06 meter
 in my meter list.

 So something is wrong with your local env.


 Cheers,
 Igor D.
 Igor Degtiarov
 Software Engineer
 Mirantis Inc
 www.mirantis.com


 On Fri, Mar 20, 2015 at 5:40 PM, Tim Bell tim.b...@cern.ch wrote:

  I'm running Juno with ceilometer and trying to produce a new meter which is
  based on vcpus * F (where F is a constant that is different for each
  hypervisor).

  When I create a VM, I get a new sample for vcpus.

  However, it does not appear to fire the transformer.

  The same approach using "cpu" works OK, but that one is polled on a regular
  interval rather than being a one-off notification when the VM is created.

  Any suggestions or alternative approaches for how to get a sample based on
  the number of cores scaled by a fixed constant?

  Tim

  In my pipeline.yaml sources:

  - name: vcpu_source
    interval: 180
    meters:
        - vcpus
    sinks:
        - hs06_sink

  In my transformers, I have:

  - name: hs06_sink
    transformers:
        - name: unit_conversion
          parameters:
              target:
                  name: hs06
                  unit: HS06
                  type: gauge
                  scale: 47.0
    publishers:
        - notifier://


Re: [openstack-dev] [nova] how to handle vendor-specific API microversions?

2015-03-21 Thread Joe Gordon
On Sat, Mar 21, 2015 at 8:31 AM, Monty Taylor mord...@inaugust.com wrote:

 On 03/21/2015 01:21 AM, Chris Friesen wrote:
  Hi,
 
  I've recently been playing around a bit with API microversions and I
  noticed something that may be problematic.
 
  The way microversions are handled, there is a monotonically increasing
  MAX_API_VERSION value in nova/api/openstack/api_version_request.py.
  When you want to make a change you bump the minor version number and
  it's yours. End-users can set the microversion number in the request to
  indicate what they support, and all will be well.
 
  The issue is that it doesn't allow for OpenStack providers to add their
  own private microversion(s) to the API.  They can't just bump the
  microversion internally because that will conflict with the next
  microversion bump upstream (which could cause problems when they
 upgrade).
 
  In terms of how to deal with this, it would be relatively simple to just
  bump the major microversion number at the beginning of each new
  release.  However, that would make it difficult to backport
  bugfixes/features that use new microversions since they might overlap
  with private microversions.
 
  I think a better solution might be to expand the existing microversion
  API to include a third digit which could be considered a private
  microversion, and provide a way to check the third digit separate from
  the other two.  That way providers would have a way to add custom
  features in a backwards-compatible way without worrying about colliding
  with upstream code.

 I would vote that we not make this pleasant or easy for vendors who are
 wanting to add a feature to the API. As a person who uses several clouds
 daily, I can tell you that a vendor choosing to do that is VERY mean to
 users, and provides absolutely no value to anyone, other than allowing
 someone to make a divergent differentiated fork.

 Just don't do it. Seriously. It makes life very difficult for people
 trying to consume these things.

 The API is not the place for divergence.


In fact we have made vendorization of the API hard on purpose, see the
microversion spec for details: https://review.openstack.org/#/c/127127

To quote Jay Pipes from that review:

-1 for vendor flag

Recommend getting rid of the vendor: specification entirely. The point of
standardizing our APIs is to make them standard, not to allow
vendorization. API extensions were an idea designed (in part) to allow
vendorization. And we've seen how that works out.

Let's take a hard stand and say this is the OpenStack Compute API and be
done with it. If RAX or HP Cloud or Frobnozzle Cloud wants to have a
separate but different API, then they should call it something else,
because it's not the OpenStack Compute API.


