Re: [openstack-dev] [Glance] [all] Proposal for new Glance core reviewers

2014-11-26 Thread Flavio Percoco

On 25/11/14 20:16 +, Nikhil Komawar wrote:

Hi all,

Please consider this email as a nomination for Erno and Alex (CC) for adding
them to the list of Glance core reviewers. Over the last cycle, both of them
have been doing good work with reviews, participating in the project
discussions as well as taking initiatives to creatively improve the project.
Their insights into the project's internals and its future direction have been
valuable too.

Please let me know if anyone has concerns with this change. If none are
brought up, I will make this membership change official in about a week.

Thanks for your consideration and the hard work, Erno and Alex!


+2A

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Fuel Plugins, First look; Whats Next?

2014-11-26 Thread Andrew Woodward
On Tue, Nov 25, 2014 at 1:39 AM, Evgeniy L e...@mirantis.com wrote:


 On Tue, Nov 25, 2014 at 10:40 AM, Andrew Woodward xar...@gmail.com wrote:

 On Mon, Nov 24, 2014 at 4:40 AM, Evgeniy L e...@mirantis.com wrote:
  Hi Andrew,
 
  Comments inline.
  Also, could you please provide a link to the OpenStack upgrade feature?
  It's not clear why you need it as a plugin and how you are going
  to deliver this feature.
 
  On Sat, Nov 22, 2014 at 4:23 AM, Andrew Woodward xar...@gmail.com
  wrote:
 
  So as part of the pumphouse integration, I've started poking around
  the Plugin Arch implementation as an attempt to plug it into the fuel
  master.
 
  This would require that the plugin install a container, and some
  scripts into the master node.
 
  First look:
  I've looked over the fuel plugins spec [1] and see that the install
  script was removed from rev 15 to 16 (line 134). This creates problems
  due to the need of installing the container and scripts, so I've
  created a bug [2] for this so that we can allow for an install script
  to be executed prior to HCF for 6.0.
 
 
  Yes, it was removed, but nothing stops you from creating the install
  script and putting it in the tarball, you don't need any changes in the
  current implementation.

 how would it be executed? the plugin loading done by fuel-client
 doesn't cover this.


 Manually untar and run your script, as it was designed before we implemented
 a more user-friendly approach.

Needs some TLC, but here is a working patch

https://review.openstack.org/137301


 
  For the reasons why it was done this way, see the separate mailing thread
  [1].
 
  [1]
 
  http://lists.openstack.org/pipermail/openstack-dev/2014-October/049073.html
 
 
 
  Looking into the implementation of the install routine [3] to
  implement [2], I see that the fuelclient is extracting the tar blindly
  (more on that at #3) on the executor system that fuelclient is being
  executed from. Problems with this include 1) the fuelclient may not
  be root privileged (like in Mirantis OpenStack Express) 2) the
  fuelclient may not be running on the same system as nailgun 3) we are
  just calling .extractall on the tarball, this means that we haven't
  done any validation on the files coming out of the tarball. We need to
  validate that 3.a) the tarball was actually encoded with the right
  base path 3.b) that the tasks.yaml file is validated and all the noted
  scripts are found. Really, the install of the plugin should be handled
  by the nailgun side to help with 1,2.
 
 
  1. if you have a custom installation you have to provide custom permissions
     for the /var/www/nailgun/plugins directory
  2. you are absolutely right, see the thread above for why we decided to add
     this feature even if it was a wrong decision from an architecture point
     of view
  3. "haven't done any validation" - not exactly, validation is done at the
     plugin building stage, and we also have simple validation at the plugin
     installation stage on the Nailgun side (that the data are consistent from
     Nailgun's point of view). There are several reasons why it was done mainly
     on the fuel-plugin-builder side:
       a. the plugin is validated before it's installed (it dramatically
          simplifies development)
       b. you can also check that a plugin is valid without building it,
          using the 'fpb --check fuel_plugin_name' parameter
       c. faster delivery of fixes: if there is a bug in validation (we had
          several of them during the development of fuel-plugin-builder),
          we cannot just release a new version of Fuel, but we can do it
          with fuel-plugin-builder, we had

I've already found some validation bugs in fpb

https://bugs.launchpad.net/fuel/+bug/1396491
https://bugs.launchpad.net/fuel/+bug/1396495 (
https://review.openstack.org/137304 )
https://bugs.launchpad.net/fuel/+bug/1396499

  2 releases [1].
    For more complicated structures you will have bugs in validation
    for sure.
    d. if we decide to support validation on both sides, we will end up
       with a lot of bugs related to desynchronization of the validators
       between Nailgun and fuel-plugin-builder

 the main validation points that should be covered by nailgun are to verify
 that the paths are correct, i.e.
 * the tar ./folder == metadata.yaml['name']
 * tasks.yaml + metadata.yaml refer to valid paths for cmd,
 deployment_scripts_path, repository_path

 Right now there is no contract between the user building the plugin
 with fpb, vs adding all the files to a tarball. If fpb is supposed to
 be doing this, then there should be some form of signature that can be
 parsed to ensure that these items have been pre-validated and the
 package wasn't modified, or built by hand. Something that would be
 easy, and cheap would be something like 'cat metadata.yaml tasks.yaml |
 md5sum > md5sum' and validate this when we load the package. It also
 gives us a starting point for other signers.
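 A rough sketch of what such a nailgun-side check could look like (illustrative
 only; validate_plugin_tarball is a hypothetical helper, not existing fuelclient
 or nailgun code):

     import hashlib
     import tarfile

     import yaml

     def validate_plugin_tarball(path):
         with tarfile.open(path) as tar:
             names = tar.getnames()
             # the tarball's base folder must match metadata.yaml['name']
             meta_name = next(n for n in names if n.endswith('metadata.yaml'))
             meta = yaml.safe_load(tar.extractfile(meta_name).read())
             base = meta['name']
             for n in names:
                 if not n.lstrip('./').startswith(base):
                     raise ValueError('unexpected path in tarball: %s' % n)
             # cheap pre-validation 'signature': checksum of the two descriptors
             digest = hashlib.md5()
             for suffix in ('metadata.yaml', 'tasks.yaml'):
                 member = next(n for n in names if n.endswith(suffix))
                 digest.update(tar.extractfile(member).read())
             return digest.hexdigest()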


 Do we really want to cover 

Re: [openstack-dev] [pecan] [WSME] Different content-type in request and response

2014-11-26 Thread Renat Akhmerov
Hi,

I traced the WSME code and found a place [0] where it tries to get arguments 
from the request body based on the mimetype. So it looks like WSME supports only 
json, xml and “application/x-www-form-urlencoded”.

So my question is: Can we fix WSME to also support “text/plain” mimetype? I 
think the first snippet that Nikolay provided is valid from WSME standpoint.

Or, if we don't understand something in the WSME philosophy, then it'd be nice to hear 
some explanation from the WSME team. We would appreciate that.


Another issue that came up previously is that if we use WSME then we can't 
pass an arbitrary set of parameters in a URL query string; as I understand it, they 
should always correspond to the WSME resource structure. So, in fact, we can't have 
any dynamic parameters. In our particular use case this is very inconvenient. 
Hoping you could also provide some info about that: how it can be achieved, or 
whether we can just fix it.

If you need help with the contribution, please let us know.

Thanks

[0] https://github.com/stackforge/wsme/blob/master/wsme/rest/args.py#L215

Renat Akhmerov
@ Mirantis Inc.



 On 25 Nov 2014, at 23:06, Nikolay Makhotkin nmakhot...@mirantis.com wrote:
 
 Hi, folks! 
 
 I'm trying to create a controller that receives one HTTP content-type in the 
 request but responds with another content-type. I tried to use 
 pecan and WSME decorators for the controller's methods.
 
 I just want to receive text on the server and send a JSON-encoded string from 
 the server (the request has text/plain and the response application/json). 
 
 I tried: 
 
 class MyResource(resource.Resource):
     id = wtypes.text
     name = wtypes.text
 
 
 class MyResourcesController(rest.RestController):
     @wsexpose(MyResource, body=wtypes.text)
     def put(self, text):
         return MyResource(id='1', name=text)
 
 
 According to the WSME documentation 
 (http://wsme.readthedocs.org/en/latest/integrate.html#module-wsmeext.pecan), 
 the signature of the wsexpose method is as follows: 
 
   wsexpose(return_type, *arg_types, **options)
 
 Ok, I just set MyResource as return_type and body to text type. But it didn't 
 work as expected: 
 http://paste.openstack.org/show/138268/ 
 
 I looked at pecan documentation at 
 https://media.readthedocs.org/pdf/pecan/latest/pecan.pdf but I didn't find 
 anything that fits my case.
 
 Also, I tried: 
 
 class MyResource(resource.Resource):
     id = wtypes.text
     name = wtypes.text
 
 
 class MyResourcesController(rest.RestController):
     @expose('json')
     @expose(content_type='text/plain')
     def put(self):
         text = pecan.request.text
         return MyResource(id='1', name=text).to_dict()
 
 It worked only when the request and response have the same content-type 
 (application/json-application/json, text/plain-text/plain).
 
 I also tried a lot of combinations of parameters, but it still did not work.
 
 Does anyone know what the problem is?
 How can it be done using WSME and/or Pecan?
 
 Sorry if I've misunderstood something.
 -- 
 Best Regards,
 Nikolay
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon]

2014-11-26 Thread Tatiana Ovtchinnikova
+1 and +1

Cindy and Thai, thank you and welcome!

2014-11-25 2:09 GMT+03:00 David Lyle dkly...@gmail.com:

 I am pleased to nominate Thai Tran and Cindy Lu to horizon-core.

 Both Thai and Cindy have been contributing significant numbers of high
 quality reviews during Juno and Kilo cycles. They are consistently among
 the top non-core reviewers. They are also responsible for a significant
 number of patches to Horizon. Both have a strong understanding of the
 Horizon code base and the direction of the project.

 Horizon core team members please vote +1 or -1 to the nominations either
 in reply or by private communication. Voting will close on Friday unless I
 hear from everyone before that.

 Thanks,
 David


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Issues regarding Jenkins on Gerrit reviews

2014-11-26 Thread Abhishek Talwar/HYD/TCS
Hi All,

I am facing some issues with Jenkins on two of my reviews. Jenkins is failing 
either on gate-tempest-dsvm-neutron-src-python-neutronclient-icehouse or 
gate-tempest-dsvm-neutron-src-python-neutronclient-icehouse, but I do not see 
how any of my code changes could be making them fail.
Could you please look at the reviews and help me understand why they keep failing again 
and again?

Links for reviews:

1. https://review.openstack.org/#/c/133151/
2. https://review.openstack.org/#/c/99929/

Kindly provide some information regarding this.



Thanks and Regards
Abhishek Talwar


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-11-26 Thread Julien Danjou
On Fri, Oct 31 2014, Flavio Percoco wrote:

 Fully agree!

 The more I think about it, the more I'm convinced we should keep py26
 in oslo until EOL Juno. It'll take time, it may be painful but it'll
 be simpler to explain and more importantly it'll be simpler to do.

 Keeping this simple will also help us with welcoming more reviewers in
 our team. It's already complex enough to explain what oslo-inc is and
 why there are oslo libraries.

Ok, so now that I've started looking into this, it seems nobody added back
Python 2.6 jobs to the Oslo libraries, so they are not gated against it
and the door is open to breaking that support.

I'm gonna work on this.

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL] Zabbix in HA mode

2014-11-26 Thread Mike Scherbakov
For small installs we still have to consider the option of combining roles
and placing Zabbix on controllers. The Fuel disk allocation logic
should be smart and allocate a separate disk for it where possible.

On Wednesday, November 26, 2014, Stanislaw Bogatkin sbogat...@mirantis.com
wrote:

 Hi Bartosz,
 As for me - placing Zabbix on controller nodes is bad practice when large
 installations are being monitored, because it can slow down disk IO on a big DB.
 If that happens, controllers can become unresponsive to other services on
 the controllers. The Zabbix guys recommend using a separate machine for large
 installations.
 But Zabbix in HA on dedicated nodes (not on the controllers) is a very good
 idea to eliminate the monitoring SPOF.


 On Tue, Nov 25, 2014 at 8:07 PM, Mike Scherbakov mscherba...@mirantis.com wrote:

 Regarding the licensing, it should not be an issue because we provide all
 source code (if not as git repos, then as source RPMs/DEBs).

 On Tue, Nov 25, 2014 at 7:34 PM, Bartosz Kupidura bkupid...@mirantis.com wrote:

 Hello Vladimir,
 I agree. But in most cases, zabbix-server would be moved from failed
 node by pacemaker.
 Moreover, some clients don't want to "waste" 3 additional servers only for
 monitoring.

 As I said, this is only the first drop of Zabbix HA. Later we can allow users
 to deploy zabbix-server
 not only on controllers, but also on dedicated nodes.

 Best Regards,
 Bartosz Kupidura


  Message written by Vladimir Kuklin vkuk...@mirantis.com on 25 Nov 2014 at 15:47:
 
  Bartosz,
 
  It is obviously possible to install zabbix on the master nodes and put
 it under pacemaker control. But it seems very strange to me to monitor
 something with software located on the nodes that you are monitoring.
 
  On Tue, Nov 25, 2014 at 4:21 PM, Bartosz Kupidura bkupid...@mirantis.com wrote:
  Hello All,
 
  I'm working on a Zabbix implementation which includes HA support.
 
  Zabbix server should be deployed on all controllers in HA mode.
 
  Currently we have dedicated role 'zabbix-server', which does not
 support more
  than one zabbix-server. Instead of this we will move monitoring
 solution (zabbix),
  as an additional component.
 
  We will introduce additional role 'zabbix-monitoring', assigned to all
 servers with
 lowest priority in the serializer (run puppet after all other roles)
 when zabbix is
  enabled.
  'Zabbix-monitoring' role will be assigned automatically.
 
  When zabbix component is enabled, we will install zabbix-server on all
 controllers
  in active-backup mode (pacemaker+haproxy).
 
  In the next stage, we can allow users to deploy zabbix-server on a dedicated
 node OR
  on controllers for performance reasons.
  But for now we should force zabbix-server to be deployed on
 controllers.
 
  BP is in initial phase, but code is ready and working with Fuel 5.1.
  Now I'm checking if it works with master.
 
  Any comments are welcome!
 
  BP link: https://blueprints.launchpad.net/fuel/+spec/zabbix-ha
 
  Best Regards,
  Bartosz Kupidura
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  --
  Yours Faithfully,
  Vladimir Kuklin,
  Fuel Library Tech Lead,
  Mirantis, Inc.
  +7 (495) 640-49-04
  +7 (926) 702-39-68
  Skype kuklinvv
  45bk3, Vorontsovskaya Str.
  Moscow, Russia,
  www.mirantis.com
  www.mirantis.ru
  vkuk...@mirantis.com
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Mike Scherbakov
 #mihgen


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-26 Thread Stanislaw Bogatkin
Hi all,
As I understand it, we just need to monitor one node - the Fuel master. For
slave nodes we already have a solution - Zabbix.
So, in that case, why do we need some complicated stuff like Monasca? Let's use
something small, like monit or sensu.

On Mon, Nov 24, 2014 at 10:36 PM, Fox, Kevin M kevin@pnnl.gov wrote:

  One of the selling points of TripleO is to reuse as much as possible
 from the cloud, to make it easier to deploy. While Monasca may be more
 complicated, if it ends up being a component everyone learns, then it's not
 as bad as needing to learn two different monitoring technologies. You could
 say the same thing about Cobbler vs Ironic. The whole Ironic stack is much more
 complicated. But for an OpenStack admin, it's easier since a lot of existing
 knowledge applies. Just something to consider.

 Thanks,
 Kevin

 --
 From: Tomasz Napierala
 Sent: Monday, November 24, 2014 6:42:39 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Fuel] fuel master monitoring


  On 24 Nov 2014, at 11:09, Sergii Golovatiuk sgolovat...@mirantis.com
 wrote:
 
  Hi,
 
  monasca looks overcomplicated for the purposes we need. Also, it requires
 Kafka, which is a Java-based transport protocol.
  I am proposing Sensu. Its architecture is tiny and elegant. Also, it
 uses rabbitmq as transport, so we won't need to introduce a new protocol.

 Do we really need such complicated stuff? Sensu is a huge project, and its
 footprint is quite large. Monit can alert using scripts; can we use that
 instead of an API?

 Regards,
 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com







 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Proposal to add examples/usecase as part of new features / cli / functionality patches

2014-11-26 Thread Valeriy Ponomaryov
Hi Deepak,

Docs are already present in every project, for example in manila -
https://github.com/openstack/manila/tree/master/doc/source

They are used for the docs on http://docs.openstack.org/ , and everyone is able
to contribute to them.

See docs built on basis of files from manila repo:
http://docs.openstack.org/developer/manila/

For most of projects we have already useful resource:
http://docs.openstack.org/cli-reference/content/

In conclusion, I can say that it is more a question of organizing the creation
of such docs than of the possibility of creating them in general.

Regards,
Valeriy Ponomaryov

On Wed, Nov 26, 2014 at 8:01 AM, Deepak Shetty dpkshe...@gmail.com wrote:

 Hi stackers,
    I was having this thought which I believe applies to all projects of
 OpenStack (hence the All in the subject tag).

 My proposal is to have an examples or usecase folder in each project which
 has info on how to use the feature/enhancement (which was submitted as part
 of a gerrit patch).
 In short, a description with screen shots (CLI, not GUI) which should be
 submitted (optionally or mandatorily) along with the patch (like how testcases
 are now enforced).

 I would like to take an example to explain. Take this patch @
 https://review.openstack.org/#/c/127587/ which adds a default volume type
 in Manila

 Now it would have been good if we could have a .txt or .md file along with
 the patch that explains:

 1) What changes are needed in manila.conf to make this work

 2) How to use the cli with this change incorporated

 3) Some screen shots of actual usage (the author/submitter would have
 tested in devstack before sending the patch, so just copying those CLI screen
 shots wouldn't be too big of a deal)

 4) Any caution/caveats that one has to keep in mind while using this

 It can be argued that some of the above is satisfied via the commit msg and
 looking at test cases.
 But I personally feel that those still don't give a good visualization
 of how a feature patch works in reality.

 Adding such an example/usecase file along with the patch helps in multiple ways:

 1) It helps the reviewer get a good picture of how/which CLIs are affected
 and how this patch fits in the flow

 2) It helps the documenter get a good view of how this patch adds value, hence
 can document it better

 3) It may help the author or anyone else write a good detailed blog post
 using the examples/usecase as a reference

 4) Since this becomes part of the patch and hence git log, if the
 feature/cli/flow changes in the future, we can always refer to how the feature
 was designed and worked when it was first posted by looking at the example
 usecase

 5) It helps add a lot of clarity to the patch, since we know how the
 author tested it and someone can point out missing flows or issues (which
 otherwise have to be visualised)

 6) I feel this will help attract more reviewers to the patch, since now
 it's clearer what this patch affects, how it affects it and how flows are
 changing; even a novice reviewer can feel more comfortable and be confident
 to provide comments.

 Thoughts ?

 thanx,
 deepak


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova host-update gives error 'Virt driver does not implement host disabled status'

2014-11-26 Thread Vineet Menon
Hi Kevin,

Oh. Yes. That could be the problem.
Thanks for pointing that out.


Regards,

Vineet Menon


On 26 November 2014 at 02:02, Chen CH Ji jiche...@cn.ibm.com wrote:

 Are you using libvirt? It's not implemented there;
 I guess your bug is talking about other hypervisors?

 the message was printed here:

 http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/contrib/hosts.py#n236

 Best Regards!

 Kevin (Chen) Ji 纪 晨

 Engineer, zVM Development, CSTL
 Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
 Phone: +86-10-82454158
 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
 Beijing 100193, PRC


 From: Vineet Menon mvineetme...@gmail.com
 To: openstack-dev openstack-dev@lists.openstack.org
 Date: 11/26/2014 12:10 AM
 Subject: [openstack-dev] [nova] nova host-update gives error 'Virt driver
 does not implement host disabled status'
 --



 Hi,

 I'm trying to reproduce the bug
 https://bugs.launchpad.net/nova/+bug/1259535.
 While trying to issue the command, nova host-update --status disable
 machine1, an error is thrown saying,

ERROR (HTTPNotImplemented): Virt driver does not implement host
disabled status. (HTTP 501) (Request-ID:
req-1f58feda-93af-42e0-b7b6-bcdd095f7d8c)



 What is this error about?

 Regards,
 Vineet Menon
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [cinder] Timeout problems

2014-11-26 Thread Tobias Engelbert
Hi,
When testing high load scenarios, e.g. issuing 100 volume attachments, we are 
running into timeout problems between nova, cinder and the centralized storage 
backend.
Has anybody experienced similar problems?
/Tobi


https://blueprints.launchpad.net/nova/+spec/volume-status-polling
https://blueprints.launchpad.net/cinder/+spec/volume-status-polling
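A rough sketch of the client-side status polling idea behind those blueprints
(illustrative only, not the proposed implementation; the helper name is made up):

    import time

    def wait_for_volume_status(cinder, volume_id, wanted, timeout=300, interval=2):
        # poll the volume until it reaches the wanted status or we give up
        deadline = time.time() + timeout
        while time.time() < deadline:
            volume = cinder.volumes.get(volume_id)
            if volume.status == wanted:
                return volume
            if volume.status == 'error':
                raise RuntimeError('volume %s went to error state' % volume_id)
            time.sleep(interval)
        raise RuntimeError('timed out waiting for volume %s to become %s'
                           % (volume_id, wanted))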


From: Tobias Engelbert [mailto:tobias.engelb...@ericsson.com]
Sent: Monday, November 03, 2014 11:33 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [cinder][nova] Timeout problems

Hi,
Parallel volume operations lead to inconsistencies between the OpenStack 
database and the deployed view on a centralized storage backend.

When performing multiple volume operations on a centralized storage backend, 
timeouts can occur on the OpenStack side. These timeouts can be the RPC 
timeout or, e.g. in high availability scenarios, the HA proxy timeout.
When nova wants to attach a volume, it triggers the status change from 
available to attaching and sends initialize_connection via cinderclient to 
the cinder API over REST. The cinder API performs a synchronous CALL to cinder 
volume, and then the centralized storage backend is contacted via the driver. When 
a timeout now occurs, nova triggers the database to change the volume status 
from attaching to available. Meanwhile the centralized storage backend 
performs what was originally requested. Here we can have a mismatch between 
the database and the real view of the centralized storage backend.

I would be curious if this behavior is also seen by others and would like to 
discuss possible solutions.
/Tobi

See also
https://blueprints.launchpad.net/cinder/+spec/volume-status-polling



https://review.openstack.org/#/c/132225/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Order of machines to be terminated during scale down

2014-11-26 Thread Maish Saidel-Keesing
In which order are machines terminated during a scale down action in an
auto scaling group?

For example, instances 1 & 2 were deployed in a stack. Instances 3 & 4
were created as a result of load.

When the load is reduced and the instances are scaled back down, which
ones will be removed? And in which order?

From old to new (1 to 4) or new to old (4 to 1)?

Thanks

-- 
Maish Saidel-Keesing


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] [CustomResource] LifeCycle methods flow

2014-11-26 Thread Pradip Mukhopadhyay
Hello,



Any pointer (document and/or code pointer) related to how the different
overridden methods are getting called when a custom resource is getting
deployed in the heat stack?


Basically just tried to annotate the h-eng log on a simple,
very-first-attempt 'hello world' resource. Noticed the log is something
like:

2014-11-26 15:38:30.251 INFO heat.engine.plugins.helloworld [-]
[pradipm]:Inside handle_create
2014-11-26 15:38:30.257 INFO heat.engine.plugins.helloworld [-]
[pradipm]:Inside _set_param_values
2014-11-26 15:38:31.259 INFO heat.engine.plugins.helloworld [-]
[pradipm]:Inside check_create_complete
2014-11-26 15:38:44.227 INFO heat.engine.plugins.helloworld
[req-9979deb9-f911-4df4-bdf8-ecc3609f054b None demo] [pradipm]:Inside
HelloWorld ctor
2014-11-26 15:38:44.234 INFO heat.engine.plugins.helloworld
[req-9979deb9-f911-4df4-bdf8-ecc3609f054b None demo] [pradipm]:Inside
_resolve_attribute




The constructor (ctor) is getting called in the flow after the
create-resource. So I thought understanding the flow would help.



Thanks in advance,
Pradip
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL] Zabbix in HA mode

2014-11-26 Thread Dmitriy Shulyak
 Im working on Zabbix implementation which include HA support.

 Zabbix server should be deployed on all controllers in HA mode.

But will the zabbix-server role stay, and will the user be able to assign this role where
they want?
If so, there will be no limitations on the role allocation strategy that the user
can use for the cluster.



Currently we have dedicated role 'zabbix-server', which does not support
 more
 than one zabbix-server. Instead of this we will move monitoring solution
 (zabbix),
 as an additional component.

 We will introduce additional role 'zabbix-monitoring', assigned to all
 servers with
 lowest priority in serializer (run puppet after every other roles) when
 zabbix is
 enabled.
 'Zabbix-monitoring' role will be assigned automatically

It must not be done in the orchestrator (I guess you are talking about the serializer)
by some cluster attribute or another hack.
I thought about this kind of role placement during the granular deployment
design, and it can be done in the following way:

Zabbix-monitoring (I like zabbix-agent more) is assigned to all servers if
zabbix-server is added to the cluster,
and then the operator should be able to remove zabbix-monitoring from some
nodes. But more importantly, they will be able
to see the roles-to-nodes placement in a very explicit manner.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Order of machines to be terminated during scale down

2014-11-26 Thread Pavlo Shchelokovskyy
Maish,

by default they are deleted in the same order they were created, FIFO
style.

Best regards,
Pavlo Shchelokovskyy.

On Wed, Nov 26, 2014 at 12:24 PM, Maish Saidel-Keesing 
maishsk+openst...@maishsk.com wrote:

 In which order are machines terminated during a scale down action in an
 auto scaling group

 For example instance 1  2 were deployed in a stack. Instances 3  4
 were created as a result of load.

 When the load is reduced and the instances are scaled back down, which
 ones will be removed? And in which order?

 From old to new (1-4) or new to old (4 - 1) ?

 Thanks

 --
 Maish Saidel-Keesing


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Pavlo Shchelokovskyy
Software Engineer
Mirantis Inc
www.mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Change diagnostic snapshot compression algoritm

2014-11-26 Thread Mike Scherbakov
Can we put it as a work item for diagnostic snapshot improvements, so we
won't forget about this in 6.1?

On Tuesday, November 25, 2014, Dmitry Pyzhov dpyz...@mirantis.com wrote:

 Thank you all for your feedback. Request postponed to the next release. We
 will compare available solutions.

 On Mon, Nov 24, 2014 at 2:36 PM, Vladimir Kuklin vkuk...@mirantis.com wrote:

 Guys, there is already a pxz utility in the Ubuntu repos. Let's test it.

 On Mon, Nov 24, 2014 at 2:32 PM, Bartłomiej Piotrowski bpiotrow...@mirantis.com wrote:

 On 24 Nov 2014, at 12:25, Matthew Mosesohn mmoses...@mirantis.com wrote:
  I did this exercise over many iterations during Docker container
  packing and found that as long as the data is under 1gb, it's going to
  compress really well with xz. Over 1gb and lrzip looks more attractive
  (but only on high memory systems). In reality, we're looking at log
  footprints from OpenStack environments on the order of 500mb to 2gb.
 
  xz is very slow on single-core systems with 1.5gb of memory, but it's
  quite a bit faster if you run it on a more powerful system. I've found
  level 4 compression to be the best compromise that works well enough
  that it's still far better than gzip. If increasing compression time
  by 3-5x is too much for you guys, why not just go to bzip? You'll
  still improve compression but be able to cut back on time.
 
  Best Regards,
  Matthew Mosesohn

 The alpha release of xz supports multithreading via the -T (or --threads)
 parameter.
 We could also use pbzip2 instead of regular bzip to cut some time on
 multi-core
 systems.
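  For reference, a minimal sketch of driving xz at level 4 with multithreading
  from Python (assumes an xz binary new enough to accept -T; the helper name is
  made up):

      import subprocess

      def compress_snapshot(tar_path, level=4, threads=0):
          # level 4 was suggested above as a compromise; -T 0 lets xz pick
          # one worker per core on builds that support multithreading
          subprocess.check_call(['xz', '-%d' % level, '-T', str(threads), tar_path])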

 Regards,
 Bartłomiej Piotrowski
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com
 www.mirantis.ru
 vkuk...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Let's use additional prefixes in threads

2014-11-26 Thread Mike Scherbakov
Vladimir,
+1 on using additional prefixes.

Please do not use Orchestrator though, especially with /Astute. Astute
is not an orchestrator since we moved all orchestration logic to Nailgun,
and it happened a long time ago already. Let's call it a task executor
instead, or Nailgun's workers.
So, for Astute-related things, let's use [Astute].

On Monday, November 24, 2014, Jay Pipes jaypi...@gmail.com wrote:

 On 11/24/2014 12:04 PM, Vladimir Kuklin wrote:

 [Fuel][Library] for compatibility with other projects. Let's negotiate
 the list of prefixes and populate them on our wiki so that everyone can
 configure his filters.


 ++

 -jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [CustomResource] LifeCycle methods flow

2014-11-26 Thread Pavlo Shchelokovskyy
Pradip,

https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L473

Basically, it calls handle_create that might return some data, yields, and
then keeps calling check_create_complete with that data returned by
handle_create, yielding control in-between, until check_create_complete
returns True.
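A minimal sketch of that pattern (illustrative only; the two _backend_* helpers
are made up):

    from heat.engine import resource

    class ExampleResource(resource.Resource):

        def handle_create(self):
            # start the (possibly long-running) work; whatever is returned
            # here is handed back to check_create_complete on every poll
            return self._start_backend_work()

        def check_create_complete(self, token):
            # called repeatedly by the engine until it returns True
            return self._backend_work_done(token)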

Best regards,
Pavlo Shchelokovskyy.

On Wed, Nov 26, 2014 at 12:20 PM, Pradip Mukhopadhyay 
pradip.inte...@gmail.com wrote:

 Hello,



 Any pointer (document and/or code pointer) related to how the different
 overridden methods are getting called when a custom resource is getting
 deployed in the heat stack?


 Basically just tried to annotate the h-eng log on a simple,
 very-first-attempt 'hello world' resource. Noticed the log is something
 like:

 2014-11-26 15:38:30.251 INFO heat.engine.plugins.helloworld [-]
 [pradipm]:Inside handle_create
 2014-11-26 15:38:30.257 INFO heat.engine.plugins.helloworld [-]
 [pradipm]:Inside _set_param_values
 2014-11-26 15:38:31.259 INFO heat.engine.plugins.helloworld [-]
 [pradipm]:Inside check_create_complete
 2014-11-26 15:38:44.227 INFO heat.engine.plugins.helloworld
 [req-9979deb9-f911-4df4-bdf8-ecc3609f054b None demo] [pradipm]:Inside
 HelloWorld ctor
 2014-11-26 15:38:44.234 INFO heat.engine.plugins.helloworld
 [req-9979deb9-f911-4df4-bdf8-ecc3609f054b None demo] [pradipm]:Inside
 _resolve_attribute




 The constructor (ctor) is getting called in the flow after the
 create-resource. So though understanding the flow would help.



 Thanks in advance,
 Pradip


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Pavlo Shchelokovskyy
Software Engineer
Mirantis Inc
www.mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Proposal to add examples/usecase as part of new features / cli / functionality patches

2014-11-26 Thread Deepak Shetty
Hi Valeriy,
   I know about the docs, but this was a proposal to provide small docs which
are patch specific, as that helps reviewers and other doc writers.

I have many times seen people asking on IRC or the list how to test this
patch, or "I did this with your patch but it didn't work"; such iterations can
be reduced if we have small docs (in free-flowing text to begin with)
associated with each patch that can help people other than the author
understand what/how the patch adds functionality, which will improve the
overall review quality and reviewers in general.

thanx,
deepak
P.S. I took the Manila patch just as an example, nothing specific about it
:)


On Wed, Nov 26, 2014 at 3:40 PM, Valeriy Ponomaryov 
vponomar...@mirantis.com wrote:

 Hi Deepak,

 Docs are present in any project already, according to example with manila
 - https://github.com/openstack/manila/tree/master/doc/source

 It is used for docs on http://docs.openstack.org/ , also everyone if able
 to contribute to it.

 See docs built on basis of files from manila repo:
 http://docs.openstack.org/developer/manila/

 For most of projects we have already useful resource:
 http://docs.openstack.org/cli-reference/content/

 In conclusion I can say that it is question more to the organization of
 creation such docs than possibility to create it in general.

 Regards,
 Valeriy Ponomaryov

 On Wed, Nov 26, 2014 at 8:01 AM, Deepak Shetty dpkshe...@gmail.com
 wrote:

 Hi stackers,
I was having this thought which i believe applies to all projects of
 openstack (Hence All in the subject tag)

 My proposal is to have examples or usecase folder in each project which
 has info on how to use the feature/enhancement (which was submitted as part
 of a gerrit patch)
 In short, a description with screen shots (cli, not GUI) which should be
 submitted (optionally or mandatory) along with patch (liek how testcases
 are now enforced)

 I would like to take an example to explain. Take this patch @
 https://review.openstack.org/#/c/127587/ which adds a default volume
 type in Manila

 Now it would have been good if we could have a .txt or .md file alogn
 with the patch that explains :

 1) What changes are needed in manila.conf to make this work

 2) How to use the cli with this change incorporated

 3) Some screen shots of actual usage (Now the author/submitted would have
 tested in devstack before sending patch, so just copying those cli screen
 shots wouldn't be too big of a deal)

 4) Any caution/caveats that one has to keep in mind while using this

 It can be argued that some of the above is satisfied via commit msg and
 lookign at test cases.
 But i personally feel that those still doesn't give a good visualization
 of how a feature patch works in reality

 Adding such a example/usecase file along with patch helps in multiple
 ways:

 1) It helps the reviewer get a good picture of how/which clis are
 affected and how this patch fits in the flow

 2) It helps documentor get a good view of how this patch adds value,
 hence can document it better

 3) It may help the author or anyone else write a good detailed blog post
 using the examples/usecase as a reference

 4) Since this becomes part of the patch and hence git log, if the
 feature/cli/flow changes in future, we can always refer to how the feature
 was designed, worked when it was first posted by looking at the example
 usecase

 5) It helps add a lot of clarity to the patch, since we know how the
 author tested it and someone can point missing flows or issues (which
 otherwise now has to be visualised)

 6) I feel this will help attract more reviewers to the patch, since now
 its more clear what this patch affects, how it affects and how flows are
 changing, even a novice reviewer can feel more comfortable and be confident
 to provide comments.

 Thoughts ?

 thanx,
 deepak


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kind Regards
 Valeriy Ponomaryov
 www.mirantis.com
 vponomar...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-26 Thread Przemyslaw Kaminski

I agree, this was supposed to be small.

P.

On 11/26/2014 11:03 AM, Stanislaw Bogatkin wrote:

Hi all,
As I understand, we just need to monitoring one node - Fuel master. 
For slave nodes we already have a solution - zabbix.
So, in that case why we need some complicated stuff like monasca? 
Let's use something small, like monit or sensu.


On Mon, Nov 24, 2014 at 10:36 PM, Fox, Kevin M kevin@pnnl.gov wrote:


One of the selling points of tripleo is to reuse as much as
possible from the cloud, to make it easier to deploy. While
monasca may be more complicated, if it ends up being a component
everyone learns, then its not as bad as needing to learn two
different monitoring technologies. You could say the same thing
cobbler vs ironic. the whole Ironic stack is much more
complicated. But for an openstack admin, its easier since a lot of
existing knowlege applies. Just something to consider.

Thanks,
Kevin

From: Tomasz Napierala
Sent: Monday, November 24, 2014 6:42:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Fuel] fuel master monitoring


 On 24 Nov 2014, at 11:09, Sergii Golovatiuk
sgolovat...@mirantis.com wrote:

 Hi,

 monasca looks overcomplicated for the purposes we need. Also it
requires Kafka which is Java based transport protocol.
 I am proposing Sensu. It's architecture is tiny and elegant.
Also it uses rabbitmq as transport so we won't need to introduce
new protocol.

Do we really need such complicated stuff? Sensu is huge project,
and it's footprint is quite large. Monit can alert using scripts,
can we use it instead of API?

Regards,
-- 
Tomasz 'Zen' Napierala

Sr. OpenStack Engineer
tnapier...@mirantis.com







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-11-26 Thread Andreas Jaeger
On 11/26/2014 10:46 AM, Julien Danjou wrote:
 On Fri, Oct 31 2014, Flavio Percoco wrote:
 
 Fully agree!

 The more I think about it, the more I'm convinced we should keep py26
 in oslo until EOL Juno. It'll take time, it may be painful but it'll
 be simpler to explain and more importantly it'll be simpler to do.

 Keeping this simple will also help us with welcoming more reviewers in
 our team. It's already complex enough to explain what oslo-inc is and
 why there are oslo libraries.
 
 Ok, so now that I start looking into that, it seems nobody added back
 Python 2.6 jobs to the Oslo libraries, so they are not gated against it
 and the door is open to breakage support.
 
 I'm gonna work on this.

The libraries have 2.6 support enabled as discussed - but if indeed some
are missing, please send patches,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF:Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Finding old reviews for action

2014-11-26 Thread James Polley
At the mid-cycle, there was some discussion around using our weekly meeting
time to find 5 old reviews and assign people to shepherd those reviews -
either marking them as abandoned if there hasn't been any action, or
rounding up reviewers or people to help make changes if required to drive
it forward.

What we didn't nail down at the time was how to find the 5 reviews. Because
of the ambiguity, we spent some time a few weeks ago trying to decide on a
method. Full logs are at [1], but in short we settled on using the "Longest
waiting reviews (based on latest revision)" list from [2].

I'm mentioning this on the list so that (A) we can be consistent across
meetings in how we look for issues that need attention, (B) so that other
people can suggest other methods of finding the reviews that need
attention, if this isn't the best method, and (C) in case this set of
reviews is something we might want to surface elsewhere (sadly, I don't
believe it can be surfaced in a Gerrit dashboard, unless Gerrit can now
handle any sorting other than newest-first)


[1]
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-11-12-08.16.log.html
[2] http://russellbryant.net/openstack-stats/tripleo-openreviews.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] [CI] Cinder/Ceph CI setup

2014-11-26 Thread Giulio Fidente

hi there,

while working on the TripleO cinder-ha spec meant to provide HA for 
Cinder via Ceph [1], we wondered how to (if at all) test this in CI, so 
we're looking for some feedback


first of all, shall we make Cinder/Ceph the default for our (currently 
non-voting) HA job? (check-tripleo-ironic-overcloud-precise-ha)


The current implementation (under review) should permit deploying both the 
Ceph monitors and Ceph OSDs on either controllers or dedicated nodes, or 
splitting them up so that only OSDs are on dedicated nodes.


what would be the best scenario for CI?

* a single additional node hosting a Ceph OSD with the Ceph monitors 
deployed on all controllers (my preference is for this one)


* a single additional node hosting a Ceph OSD and a Ceph monitor

* no additional nodes, with controllers also serving as Ceph monitor and 
Ceph OSD


more scenarios? comments? Thanks for helping

1. https://blueprints.launchpad.net/tripleo/+spec/tripleo-kilo-cinder-ha
--
Giulio Fidente
GPG KEY: 08D733BA

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal new hacking rules

2014-11-26 Thread Nicolas Trangez
On Mon, 2014-11-24 at 13:19 -0500, Jay Pipes wrote:
 I think pointing out that the default failure 
 message for testtools.TestCase.assertEqual() uses the terms
 reference 
 (expected) and actual is a reason why reviewers *should* ask patch 
 submitters to use (expected, actual) ordering.

Is there any reason for this specific ordering? Not sure about others,
but I tend to write equality comparisons like this

if var == 1:

instead of

if 1 == var:

(although I've seen the latter in C code before).

This gives rise to

assert var == 1

or, moving into `unittest` domain

assertEqual(var, 1)

reading it as 'Assert `var` equals 1', which makes me wonder why the
`assertEqual` API is defined the other way around (unlike how I'd write
any other equality check).
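A small illustration with testtools (the exact failure wording may differ
between versions, but the first argument is treated as the expected/reference
value):

    import testtools

    class OrderingTest(testtools.TestCase):

        def test_expected_first(self):
            observed = 2  # pretend this came from the code under test
            # on failure the output labels 1 as the reference (expected)
            # value and 2 as the actual one
            self.assertEqual(1, observed)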

Nicolas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [CustomResource] LifeCycle methods flow

2014-11-26 Thread Pradip Mukhopadhyay
Thanks Pavlo.

One particular thing I did not comprehend is:

Suppose my resource code is something like:


class HelloWorld(resource.Resource):
    def __init__(self, controller, deserializer, serializer=None):
        LOG.info("[pradipm]:Inside HelloWorld ctor")
        resource.Resource.__init__(self, controller, deserializer,
                                   serializer)
        ## Re-setting the data value
        self._data_value = self.properties['value']

    properties_schema = {
        'value': properties.Schema(
            properties.Schema.STRING,
            _('foo')
        ),
    }

    attributes_schema = {
        'data': _('the data')
    }

    ...

    def _set_param_values(self):
        LOG.info("[pradipm]:Inside _set_param_values")
        self._data_value = self.properties['value']
        return

    def handle_create(self):
        LOG.info("[pradipm]:Inside handle_create")
        container_id = 1   ## some arbitrary id
        self.resource_id_set(container_id)
        self._set_param_values()
        return container_id



I am seeing the constructor is getting called *later on* compared to the
handle_create - check_create_complete etc.

Is that a *defined* behavior? Or is it purely *temporal* (such that in some
cases the ctor might be called early also).




Thanks,
Pradip





On Wed, Nov 26, 2014 at 4:05 PM, Pavlo Shchelokovskyy 
pshchelokovs...@mirantis.com wrote:

 Pradip,

 https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L473

 Basically, it calls handle_create that might return some data, yields, and
 than keeps calling check_create_complete with that data returned by
 handle_create, yielding control in-between, until check_create_complete
 returns True.

 Best regards,
 Pavlo Shchelokovskyy.

 On Wed, Nov 26, 2014 at 12:20 PM, Pradip Mukhopadhyay 
 pradip.inte...@gmail.com wrote:

 Hello,



 Any pointer (document and/or code pointer) related to how the different
 overridden methods are getting called when a custom resource is getting
 deployed in the heat stack?


 Basically just tried to annotate the h-eng log on a simple,
 very-first-attempt 'hello world' resource. Noticed the log is something
 like:

 2014-11-26 15:38:30.251 INFO heat.engine.plugins.helloworld [-]
 [pradipm]:Inside handle_create
 2014-11-26 15:38:30.257 INFO heat.engine.plugins.helloworld [-]
 [pradipm]:Inside _set_param_values
 2014-11-26 15:38:31.259 INFO heat.engine.plugins.helloworld [-]
 [pradipm]:Inside check_create_complete
 2014-11-26 15:38:44.227 INFO heat.engine.plugins.helloworld
 [req-9979deb9-f911-4df4-bdf8-ecc3609f054b None demo] [pradipm]:Inside
 HelloWorld ctor
 2014-11-26 15:38:44.234 INFO heat.engine.plugins.helloworld
 [req-9979deb9-f911-4df4-bdf8-ecc3609f054b None demo] [pradipm]:Inside
 _resolve_attribute




 The constructor (ctor) is getting called in the flow after the
 create-resource. So though understanding the flow would help.



 Thanks in advance,
 Pradip


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Pavlo Shchelokovskyy
 Software Engineer
 Mirantis Inc
 www.mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Separate code freeze for repos

2014-11-26 Thread Aleksandra Fedorova
Mike,

from the DevOps point of view it doesn't really matter when we do
branching. This is the process we need to perform anyway and this
partial branching doesn't change too much for us.
Although there might be several technical questions like:

 1) When do we create the /6.1 mirror?
 2) Should we create the fuel-main repo branch before the others, or should we
pass config.mk variables from the Jenkins side?

But it can be done one way or the other.

The primary concern here is not the build process and its
implementation, but the question of how we are going to test the early
patches.

Right now we have unit tests and general nightly tests which are
analyzed and managed by the QA team. The fact that we can create a set of
6.1 system test jobs earlier in the process and even run them daily
doesn't mean that there will be people to watch them and analyze their
results. If we do early 6.1 branching while the QA team is focused on the 6.0
release, who will be dealing with this additional workload?

And if those 6.1 nightly system tests aren't checked properly, we get
code merged to fuel-web for several weeks based on unit-tests only,
which is generally a bad idea, especially with the current state of the
fuel-web repository, with several projects in one.


On Mon, Nov 24, 2014 at 3:01 AM, Dmitry Borodaenko
dborodae...@mirantis.com wrote:
 1. We discussed splitting fuel-web, I think we should do that before
 relaxing code freeze rules for it.

 2. If there are high or critical priority bugs in a component during soft
 code freeze, all developers of that component should be writing, reviewing,
 or testing fixes for these bugs.

 3. If we do separate code freeze for current components, we should always
 start with fuel-main, so that we can switch repos from master to stable one
 at a time.

 On Nov 17, 2014 4:08 AM, Mike Scherbakov mscherba...@mirantis.com wrote:

 I believe that we need to do this, and agree with Vitaly.

  Basically, when we are getting a low amount of review requests, it's easy
  enough to do backports to the stable branch. So the criteria should be based on
  this, and I believe they can be even softer than what Vitaly suggests.

 I suggest the following:
 ___
  If no more than 3 new High / Critical priority bugs appeared in the past
  day, and no more than 10 High/Critical appeared over the past 3 days - then
  the stable branch can be created. ___

  HCF criteria remain the same. We will just have the stable branch earlier. It
  might be a bit of a headache for our DevOps team: it means that

  the 6.1 ISO should appear immediately after the first stable branch is created (we
  need the ISO and the whole set of tests working on master)
  the 6.0 ISO has to be built from master branches of some repos, but stable/6.0
  of others. Likely it means either switching to stable/6.0 in fuel-main and
  hacking config.mk, or something else.

 DevOps team, what do you think?


 On Fri, Nov 14, 2014 at 5:24 PM, Vitaly Kramskikh
 vkramsk...@mirantis.com wrote:

 There is a proposal to consider a repo as stable if there are no
 high/critical bugs and there were no such new bugs with this priority for
 the last 3 days. I'm ok with it.

 2014-11-14 17:16 GMT+03:00 Igor Kalnitsky ikalnit...@mirantis.com:

 Guys,

 The idea of separate unfreezing is cool itself, but we have to define
 some rules how to define that fuel-web is stable. I mean, in fuel-web
 we have different projects, so when Fuel UI is stable, the
 fuel_upgrade or Nailgun may be not.

 - Igor

 On Fri, Nov 14, 2014 at 3:52 PM, Vitaly Kramskikh
 vkramsk...@mirantis.com wrote:
  Evgeniy,
 
  That means that the stable branch can be created for some repos
  earlier. For
  example, fuel-web repo seems not to have critical issues for now and
  I'd
  like master branch of that repo to be opened for merging various stuff
  which
  shouldn't go to 6.0 and do not wait until all other repos stabilize.
 
  2014-11-14 16:42 GMT+03:00 Evgeniy L e...@mirantis.com:
 
  Hi,
 
   There was an idea to make a separate code freeze for repos
 
  Could you please clarify what do you mean?
 
  I think we should have a way to merge patches for the next
  release event if it's code freeze for the current.
 
  Thanks,
 
  On Tue, Nov 11, 2014 at 2:16 PM, Vitaly Kramskikh
  vkramsk...@mirantis.com wrote:
 
  Folks,
 
  There was an idea to make a separate code freeze for repos, but we
  decided not to do it. Do we plan to try it this time? It is really
  painful
  to maintain multi-level tree of dependent review requests and wait
  for a few
  weeks until we can merge new stuff in master.
 
  --
  Vitaly Kramskikh,
  Software Engineer,
  Mirantis, Inc.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Vitaly Kramskikh,
  Software 

[openstack-dev] [Fuel] Nailgun API log verbosity

2014-11-26 Thread Ivan Kliuk

Hi, all!

Recently I started working on the "nailgun-api log is too verbose" bug 
(https://bugs.launchpad.net/fuel/+bug/1393148) and collected your 
feedback about the PoC (https://review.openstack.org/#/c/137053/) as follows:
1) We cannot always delete or cut messages received from nailgun-agent, 
because they bear essential information which is used for troubleshooting
2) Increase the log rotation interval, e.g. to one week. This will decrease 
the log size without making it less verbose (a minimal logrotate sketch is 
below the list)

3) Log data on nodes
4) Implement debug mode feature (increase verbosity by clicking/checking 
option on web ui/cli)
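
For option 2, the change could be as small as a logrotate stanza along these
lines (the log path and retention values are assumptions, not the actual Fuel
master config):

    /var/log/nailgun/*.log {
        weekly
        rotate 4
        compress
        missingok
        notifempty
    }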


It would be great to reach consensus on which way we'll go.
Thank you!

--
Sincerely yours,
Ivan Kliuk

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugins improvement

2014-11-26 Thread Dmitry Ukov
Evgeniy,
Thanks a lot!
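
In case it helps, jsonschema already ships a helper for picking the most
relevant error out of a validation run; below is a minimal sketch of the kind
of friendlier message Evgeniy mentions (the schema here is made up for
illustration, not the real fuel_plugin_builder schema):

    import jsonschema
    from jsonschema.exceptions import best_match

    # Made-up schema, only to illustrate the friendlier error text.
    schema = {'type': 'object',
              'required': ['timeout'],
              'properties': {'timeout': {'type': 'integer'}}}
    task = {'puppet_manifest': 'install_keystone_ldap.pp'}

    error = best_match(jsonschema.Draft4Validator(schema).iter_errors(task))
    if error:
        # Prints e.g. "'timeout' is a required property" instead of
        # "... is not valid under any of the given schemas".
        print("%s (path: %s)" % (error.message, list(error.path)))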

On Mon, Nov 24, 2014 at 5:15 PM, Evgeniy L e...@mirantis.com wrote:

 Hi Dmitry,

 Our current validation implementation is based on jsonschema; we will
 figure out how to hack/configure it to provide a more human-readable message

 Thanks,

 On Mon, Nov 24, 2014 at 2:34 PM, Dmitry Ukov du...@mirantis.com wrote:

 That was my fault. I did not expect that the timeout parameter is a mandatory
 requirement for a task. Everything works perfectly fine.
 Thanks for the help.
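
 For anyone else hitting the same validation error, the entry that builds
 cleanly for me looks roughly like this (the timeout value is only an example):

    - role: ['controller']
      stage: post_deployment
      type: puppet
      parameters:
        puppet_manifest: install_keystone_ldap.pp
        puppet_modules: /etc/puppet/modules/
        timeout: 360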

 On Mon, Nov 24, 2014 at 3:05 PM, Tatyana Leontovich 
 tleontov...@mirantis.com wrote:

 Guys,
 task like
 - role: ['controller']
 stage: post_deployment
 type: puppet
 parameters:
 puppet_manifest: puppet/site.pp
 puppet_modules: puppet/modules/
 timeout: 360
 works fine for me, so I believe your task should look like

 cat tasks.yaml
 # This tasks will be applied on controller nodes,
 # here you can also specify several roles, for example
 # ['cinder', 'compute'] will be applied only on
 # cinder and compute nodes
 - role: ['controller']
   stage: post_deployment
   type: puppet
   parameters:
 puppet_manifest: install_keystone_ldap.pp
 puppet_modules: /etc/puppet/modules/

 And be sure that install_keystone_ldap.pp is the one that invokes the other manifests

 Best,
 Tatyana

 On Mon, Nov 24, 2014 at 12:49 PM, Dmitry Ukov du...@mirantis.com
 wrote:

 Unfortunately this does not work

 cat tasks.yaml
 # This tasks will be applied on controller nodes,
 # here you can also specify several roles, for example
 # ['cinder', 'compute'] will be applied only on
 # cinder and compute nodes
 - role: ['controller']
   stage: post_deployment
   type: puppet
   parameters:
 puppet_manifest: install_keystone_ldap.pp
 puppet_modules: puppet/:/etc/puppet/modules/


 fpb --build .
 /home/dukov/dev/.plugins_ldap/local/lib/python2.7/site-packages/pkg_resources.py:1045:
 UserWarning: /home/dukov/.python-eggs is writable by group/others and
 vulnerable to attack when used with get_resource_filename. Consider a more
 secure location (set with .set_extraction_path or the PYTHON_EGG_CACHE
 environment variable).
   warnings.warn(msg, UserWarning)
 2014-11-24 13:48:32 ERROR 15026 (cli) Wrong value format 0 -
 parameters, for file ./tasks.yaml, {'puppet_modules':
 'puppet/:/etc/puppet/modules/', 'puppet_manifest':
 'install_keystone_ldap.pp'} is not valid under any of the given schemas
 Traceback (most recent call last):
   File
 /home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/cli.py,
 line 90, in main
 perform_action(args)
   File
 /home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/cli.py,
 line 77, in perform_action
 actions.BuildPlugin(args.build).run()
   File
 /home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py,
 line 42, in run
 self.check()
   File
 /home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py,
 line 99, in check
 self._check_structure()
   File
 /home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py,
 line 111, in _check_structure
 ValidatorManager(self.plugin_path).get_validator().validate()
   File
 /home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/validator_v1.py,
 line 39, in validate
 self.check_schemas()
   File
 /home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/validator_v1.py,
 line 46, in check_schemas
 self.validate_file_by_schema(v1.TASKS_SCHEMA, self.tasks_path)
   File
 /home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/base.py,
 line 47, in validate_file_by_schema
 self.validate_schema(data, schema, path)
   File
 /home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/base.py,
 line 43, in validate_schema
 value_path, path, exc.message))
 ValidationError: Wrong value format 0 - parameters, for file
 ./tasks.yaml, {'puppet_modules': 'puppet/:/etc/puppet/modules/',
 'puppet_manifest': 'install_keystone_ldap.pp'} is not valid under any of
 the given schemas


 On Mon, Nov 24, 2014 at 2:34 PM, Aleksandr Didenko 
 adide...@mirantis.com wrote:

 Hi,

 according to [1] you should be able to use:

 puppet_modules: puppet/:/etc/puppet/modules/

 This is a valid YAML string parameter that should be parsed just fine.

 [1]
 https://github.com/stackforge/fuel-web/blob/master/tasklib/tasklib/actions/puppet.py#L61-L62

 Regards
 --
 Alex


 On Mon, Nov 24, 2014 at 12:07 PM, Dmitry Ukov du...@mirantis.com
 wrote:

 Hello All,
  The current implementation of plugins in Fuel unpacks the plugin tarball
  into /var/www/nailgun/plugins/.
  If we implement the deployment part of a plugin using puppet, there is a
  setting
 puppet_modules:

  This setting should specify the path to the modules folder. Since the main
  deployment part of the plugin is implemented as a Puppet module, the module
  path setting should be:

 puppet_modules: puppet/

 

Re: [openstack-dev] [Heat] Order of machines to be terminated during scale down

2014-11-26 Thread Jay Lau
The current behavior is not flexible for customers. I see that we have a
blueprint that wants to enhance this behavior.

https://blueprints.launchpad.net/heat/+spec/autoscaling-api-resources
https://wiki.openstack.org/wiki/Heat/AutoScaling

In Use Case section, we have the following:
==
Use Cases

   1. Users want to use AutoScale without using Heat templates.
   2. Users want to use AutoScale *with* Heat templates.
   3. Users want to scale arbitrary resources, not just instances.
   4. Users want their autoscaled resources to be associated with shared
   resources such as load balancers, cluster managers, configuration servers,
   and so on.
   5. TODO: Administrators or automated processes want to add or remove
   *specific* instances from a scaling group. (one node was compromised or had
   some critical error?)
   6. TODO: Users want to specify a general policy about which resources to
   delete when scaling down, either newest or oldest
   7. TODO: A hook needs to be provided to allow completion or cancelling
   of the auto scaling down of a resource. For example, a MongoDB shard may
   need draining to other nodes before it can be safely deleted. Or another
   example, replica's may need time to resync before another is deleted. The
   check would ensure the resync is done.
   8. *TODO: Another hook should be provided to allow selection of node to
   scale down. MongoDB example again, select the node with the least amount of
   data that will need to migrate to other hosts.*

===

Item 8 enables the customer to customize which instance to scale down.
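
For readers following along, the resources involved look roughly like this
today (a minimal sketch; the image and flavor values are only placeholders).
With such a template the group itself picks which member to remove on scale
down - as Pavlo notes below, oldest first - which is exactly the selection
step that item 8 wants to make pluggable:

    heat_template_version: 2013-05-23
    resources:
      group:
        type: OS::Heat::AutoScalingGroup
        properties:
          min_size: 1
          max_size: 4
          resource:
            type: OS::Nova::Server
            properties:
              image: some-image      # placeholder
              flavor: m1.small       # placeholder
      scale_down:
        type: OS::Heat::ScalingPolicy
        properties:
          auto_scaling_group_id: {get_resource: group}
          adjustment_type: change_in_capacity
          scaling_adjustment: -1
          cooldown: 60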

Thanks!

2014-11-26 18:30 GMT+08:00 Pavlo Shchelokovskyy 
pshchelokovs...@mirantis.com:

 Maish,

 by default they are deleted in the same order they were created, FIFO style.

 Best regards,
 Pavlo Shchelokovskyy.

 On Wed, Nov 26, 2014 at 12:24 PM, Maish Saidel-Keesing 
 maishsk+openst...@maishsk.com wrote:

 In which order are machines terminated during a scale down action in an
 auto scaling group

 For example instances 1 & 2 were deployed in a stack. Instances 3 & 4
 were created as a result of load.

 When the load is reduced and the instances are scaled back down, which
 ones will be removed? And in which order?

 From old to new (1 -> 4) or new to old (4 -> 1)?

 Thanks

 --
 Maish Saidel-Keesing


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Pavlo Shchelokovskyy
 Software Engineer
 Mirantis Inc
 www.mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Telco] [NFV] [Heat] Telco Orchestration

2014-11-26 Thread Mathieu Rohon
Hi,

On Wed, Nov 26, 2014 at 12:48 AM, Georgy Okrokvertskhov
gokrokvertsk...@mirantis.com wrote:
 Hi,

 In Murano we did a couple of projects related to networking orchestration. As NFV

Can you tell us more about those projects? Does it include
mutli-datacenter use cases?

 is quite a broad term, I can say that the Murano approach fits into it too. In
 our case we had a bunch of virtual appliances with specific networking
 capabilities and requirements. Some of these appliances had to work together
 to provide the required functionality. These virtual appliances were exposed
 as Murano applications with defined dependencies between apps, and operators
 were able to create different networking configurations with these apps,
 combining them according to their requirements/capabilities. Underlying
 workflows were responsible for binding these virtual appliances together.

Can you provide us a link to such a Murano application, how you define
dependencies between apps, and how you translate those dependencies into
networking configuration?

 I will be glad to participate in tomorrow meeting and answer any questions
 you have.

 Thanks
 Georgy

 On Tue, Nov 25, 2014 at 6:14 AM, Marc Koderer m...@koderer.com wrote:

 Hi Angus,

 Am 25.11.2014 um 12:48 schrieb Angus Salkeld asalk...@mirantis.com:

 On Tue, Nov 25, 2014 at 7:27 PM, Marc Koderer m...@koderer.comwrote:

 Hi all,

 as discussed during our summit sessions we would like to expand the scope
 of the Telco WG (aka OpenStack NFV group) and start working
 on the orchestration topic (ETSI MANO).

 Therefore we started with an etherpad [1] to collect ideas, use-cases and
 requirements.


 Hi Marc,

 You have quite a high acronym per sentence ratio going on that etherpad;)


 Haha, welcome to the telco world :)


 From Heat's perspective, we have a lot going on already, but we would love
 to support
 what you are doing.


 That’s exactly what we are planning. What we have is a long list of
 use-cases and
 requirements. We need to transform them into specs for the OpenStack
 projects.
 Many of those specs won't be NFV specific; for instance, a Telco cloud will
 be highly distributed. So what we need is multi-region Heat support (which is
 already a planned feature for Heat, as I learned today).


 You need to start getting specific about what you need and what the
 missing gaps are.
 I see you are already looking at higher layers (TOSCA); also check out
 Murano as well.


 Yep, I will check Murano. I never had a closer look at it.

 Regards
 Marc


 Regards
 -Angus


 Goal is to discuss this document and move it onto the Telco WG wiki [2]
 when
 it becomes stable.

 Feedback welcome ;)

 Regards
 Marc
 Deutsche Telekom

 [1] https://etherpad.openstack.org/p/telco_orchestration
 [2] https://wiki.openstack.org/wiki/TelcoWorkingGroup

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Georgy Okrokvertskhov
 Architect,
 OpenStack Platform Products,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] environment variables in local.conf post-config section

2014-11-26 Thread Andreas Scheuring
Hi together, 
is there a way to use Environment variables in the local.conf
post-config section?

On my system (stable/juno devstack), it is not the content of the variable but
the variable name itself that is being inserted into the config file.


So e.g. 
[[post-config|$NOVA_CONF]]
[DEFAULT]
vncserver_proxyclient_address=$HOST_IP_MGMT

results in nova.conf as:
vncserver_proxyclient_address = $HOST_IP_MGMT



Thanks

-- 
Andreas 
(irc: scheuran)




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-26 Thread Fox, Kevin M
So then in the end, there will be 3 monitoring systems to learn, configure, and 
debug? Monasca for cloud users, Zabbix for most of the physical systems, and 
Sensu or Monit to keep things small?

Seems very complicated.

If not just Monasca, why not the Zabbix that's already being deployed?

Thanks,
Kevin


From: Przemyslaw Kaminski
Sent: Wednesday, November 26, 2014 2:50:03 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Fuel] fuel master monitoring

I agree, this was supposed to be small.

P.

On 11/26/2014 11:03 AM, Stanislaw Bogatkin wrote:
Hi all,
As I understand, we just need to monitor one node - the Fuel master. For slave 
nodes we already have a solution - Zabbix.
So, in that case, why do we need some complicated stuff like Monasca? Let's use 
something small, like Monit or Sensu.

On Mon, Nov 24, 2014 at 10:36 PM, Fox, Kevin M 
kevin@pnnl.gov wrote:
One of the selling points of TripleO is to reuse as much as possible from the 
cloud, to make it easier to deploy. While Monasca may be more complicated, if 
it ends up being a component everyone learns, then it's not as bad as needing to 
learn two different monitoring technologies. You could say the same thing about 
Cobbler vs Ironic: the whole Ironic stack is much more complicated, but for an 
OpenStack admin it's easier since a lot of existing knowledge applies. Just 
something to consider.

Thanks,
Kevin


From: Tomasz Napierala
Sent: Monday, November 24, 2014 6:42:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Fuel] fuel master monitoring


 On 24 Nov 2014, at 11:09, Sergii Golovatiuk 
 sgolovat...@mirantis.com wrote:

 Hi,

 Monasca looks overcomplicated for the purposes we need. Also it requires 
 Kafka, which is a Java-based transport.
 I am proposing Sensu. Its architecture is tiny and elegant. Also it uses 
 RabbitMQ as transport, so we won't need to introduce a new protocol.

Do we really need such complicated stuff? Sensu is a huge project, and its 
footprint is quite large. Monit can alert using scripts; can we use it instead 
of an API?
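
For example, a stanza along these lines would cover disk space alerting through
a script - the script path and threshold are made up, just to illustrate:

    check filesystem rootfs with path /
      if space usage > 80% then exec "/usr/local/bin/notify-fuel-admin.sh"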

Regards,
--
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Do we need an IntrospectionInterface?

2014-11-26 Thread Dmitry Tantsur
Hi all!

As our state machine and discovery discussion proceeds, I'd like to ask
your opinion on whether we need an IntrospectionInterface
(DiscoveryInterface?). Current proposal [1] suggests adding a method for
initiating a discovery to the ManagementInterface. IMO it's not 100%
correct, because:
1. It's not management. We're not changing anything.
2. I'm aware that some folks want to use discoverd-based discovery [2] even
for DRAC and ILO (e.g. for vendor-specific additions that can't be
implemented OOB).
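
To make the alternative concrete, the separate interface could be as small as
this (an illustrative sketch only - the names are made up and this is not
Ironic's actual driver API):

    import abc

    import six

    @six.add_metaclass(abc.ABCMeta)
    class IntrospectionInterface(object):
        """Illustrative only; deliberately separate from ManagementInterface."""

        @abc.abstractmethod
        def inspect_hardware(self, task):
            """Discover properties of task.node and return them.

            Read-only by design: nothing on the node is changed.
            """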

Any ideas?

Dmitry.

[1] https://review.openstack.org/#/c/100951/
[2] https://review.openstack.org/#/c/135605/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] removing XML testing completely from Tempest

2014-11-26 Thread Sean Dague
On 11/25/2014 03:28 AM, Flavio Percoco wrote:
 On 24/11/14 08:56 -0500, Sean Dague wrote:
 Having XML payloads was never a universal part of OpenStack services.
 During the Icehouse release the TC declared that being an OpenStack
 service requires having a JSON REST API. Projects could do what they
 wanted beyond that. Lots of them deprecated and have been removing the
 XML cruft since then.

 Tempest is a tool to test the OpenStack API. OpenStack hasn't had an XML
 API for a long time.

 Given that current branchless Tempest only supports as far back as
 Icehouse anyway, after these changes were made, I'd like to propose that
 all the XML code in Tempest should be removed. If a project wants to
 support something else beyond a JSON API that's on that project to test
 and document on their own.

 We've definitively blocked adding new XML tests in Tempest anyway, but
 getting rid of the XML debt in the project will simplify it quite a bit,
 make it easier for contributors to join in, and seems consistent with
 the direction of OpenStack as a whole.
 
 Lets get rid of it, once and for all.
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

This was discussed last night in the cross project meeting, and there
was a pretty resounding *do it* -
http://eavesdrop.openstack.org/meetings/project/2014/project.2014-11-25-21.01.log.html

The patch stream to accomplish it is here -
https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:rmxml,n,z

+1s from non-core members that are in favor, especially potentially
affected PTLs, are appreciated.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Separate code freeze for repos

2014-11-26 Thread Mike Scherbakov
I envision it in the following way:

   1. stable/6.0 is created for fuel-main along with a stable branch for the
   other repo under consideration, like fuel-web
   2. In stable/6.0 of fuel-main, config.mk should be changed to refer to
   stable/6.0 for one of the repos, like fuel-web. The master branch is still
   used for all other repos
   3. All our jobs (Fuel CI, system tests, etc.) are forked, like we do at
   HCF. One set should point to stable/6.0, another to master of fuel-main.
   4. Mirror has to be created for 6.1 immediately too, as we are opening
   master for fuel-main
   5. System tests should be analyzed for regression by QA or Dev team.
   6. As other repos start satisfying criteria, we can simply change branch
   name in config.mk
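
Roughly speaking, the switch in item 6 is a one-line change per repo in
config.mk, along these lines (the variable names below are from memory and
only illustrative; they may not match the actual file):

    # stable/6.0 fuel-main: pin one repo to its stable branch, keep the rest on master
    NAILGUN_COMMIT?=stable/6.0
    FUELLIB_COMMIT?=master
    ASTUTE_COMMIT?=master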

 should we pass config.mk variables from Jenkins side
I do not think we should change job configs. Let's do changes only in
source code, otherwise we will end up with two places for configuration.

This is a pretty massive change, and it will require testing the
infrastructure for a possible fork in advance, and creating separate
checklists for it.

On Wed, Nov 26, 2014 at 3:31 PM, Aleksandra Fedorova afedor...@mirantis.com
 wrote:

 Mike,

 from DevOps point of view it doesn't really matter when we do
 branching. This is the process we need to perform anyway and this
 partial branching doesn't change too much for us.
 Although there might be several technical questions like:

  1) When we create /6.1 mirror?
  2) Should we create fuel-main repo branch before others or should we
 pass config.mk variables from Jenkins side?

 But it can be done one way or the other.

 The primary concern here is not the build process and its
 implementation, but the question how we are going to test the early
 patches.

 Right now we have unit tests and general nightly tests which are
 analyzed and managed by QA team. The fact that we can create set of
 6.1 system test jobs earlier in the process and even run them daily
 doesn't mean that there will be people to watch them and analyze their
 results. If we do early 6.1-branching while QA team is focused on 6.0
 release, who will be dealing with this additional workload?

 And if those 6.1 nightly system tests aren't checked properly, we get
 code merged to fuel-web for several weeks based on unit-tests only,
 which is generally a bad idea. Especially with current state of
 fuel-web repository with several projects in one.


 On Mon, Nov 24, 2014 at 3:01 AM, Dmitry Borodaenko
 dborodae...@mirantis.com wrote:
  1. We discussed splitting fuel-web, I think we should do that before
  relaxing code freeze rules for it.
 
  2. If there are high or critical priority bugs in a component during soft
  code freeze, all developers of that component should be writing,
 reviewing,
  or testing fixes for these bugs.
 
  3. If we do separate code freeze for current components, we should always
  start with fuel-main, so that we can switch repos from master to stable
 one
  at a time.
 
  On Nov 17, 2014 4:08 AM, Mike Scherbakov mscherba...@mirantis.com
 wrote:
 
  I believe that we need to do this, and agree with Vitaly.
 
  Basically, when we are getting low amount of review requests, it's easy
  enough to do backports to stable branch. So criteria should be based on
  this, and I believe it can be even more soft, than what Vitaly suggests.
 
  I suggest the following:
  ___
  If no more than 3 new High / Critical priority bugs appeared in the past
  day, and no more than 10 High/Critical appeared over the past 3 days - then
  the stable branch can be created. ___
 
  HCF criteria remain the same. We will just have stable branch earlier.
 It
  might be a bit of headache for our DevOps team: it means that
 
  6.1 ISO should appear immediately after first stable branch created (we
  need ISO and all set of tests working on master)
  6.0 ISO has to be built from master branches of some repos, but stable/6.0
  of others. Likely it means either switching to stable/6.0 in fuel-main and
  hacking config.mk, or something else.
 
  DevOps team, what do you think?
 
 
  On Fri, Nov 14, 2014 at 5:24 PM, Vitaly Kramskikh
  vkramsk...@mirantis.com wrote:
 
  There is a proposal to consider a repo as stable if there are no
  high/critical bugs and there were no such new bugs with this priority
 for
  the last 3 days. I'm ok with it.
 
  2014-11-14 17:16 GMT+03:00 Igor Kalnitsky ikalnit...@mirantis.com:
 
  Guys,
 
  The idea of separate unfreezing is cool itself, but we have to define
  some rules how to define that fuel-web is stable. I mean, in fuel-web
  we have different projects, so when Fuel UI is stable, the
  fuel_upgrade or Nailgun may be not.
 
  - Igor
 
  On Fri, Nov 14, 2014 at 3:52 PM, Vitaly Kramskikh
  vkramsk...@mirantis.com wrote:
   Evgeniy,
  
   That means that the stable branch can be created for some repos
   earlier. For
   example, fuel-web repo seems not to have critical issues for now and
   I'd
   like master branch of 

[openstack-dev] #Personal# Ref: L3 service integration with service framework

2014-11-26 Thread Priyanka Chopra
Hi Gary, All,


This is with reference to the blueprint 'L3 router Service Type Framework' and 
the corresponding development in the GitHub repo.

I noticed that the patch was abandoned due to inactivity. I wanted to know 
if there is a specific reason why the development was put on hold.

I am working on a use case to enable neutron calls (L2 and L3) from 
OpenStack to OpenDaylight neutron. However, presently ML2 forwards the L2 
calls to ODL neutron, but not the L3 calls (router and FIP). 
With this blueprint submission the L3 service framework (which includes the L3 
driver, agent and plugin) would be completed, and hence L3 calls from 
OpenStack could be redirected to any controller platform. Please let me know 
if anyone else is working on the same, or whether we can make the required 
enhancements and submit the code to enable such a use case.


Best Regards
Priyanka 
=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain 
confidential or privileged information. If you are 
not the intended recipient, any dissemination, use, 
review, distribution, printing or copying of the 
information contained in this e-mail message 
and/or attachments to it are strictly prohibited. If 
you have received this communication in error, 
please notify us by reply e-mail or telephone and 
immediately and permanently delete the message 
and any attachments. Thank you


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Order of machines to be terminated during scale down

2014-11-26 Thread Maish Saidel-Keesing

On 26/11/2014 14:50, Jay Lau wrote:
 The current behavior is not flexible for customers. I see that we have
 a blueprint that wants to enhance this behavior.

 https://blueprints.launchpad.net/heat/+spec/autoscaling-api-resources
 https://wiki.openstack.org/wiki/Heat/AutoScaling

 In Use Case section, we have the following:
 ==


   Use Cases

  1. Users want to use AutoScale without using Heat templates.
  2. Users want to use AutoScale *with* Heat templates.
  3. Users want to scale arbitrary resources, not just instances.
  4. Users want their autoscaled resources to be associated with shared
 resources such as load balancers, cluster managers, configuration
 servers, and so on.
  5. TODO: Administrators or automated processes want to add or remove
 *specific* instances from a scaling group. (one node was
 compromised or had some critical error?)
  6. TODO: Users want to specify a general policy about which resources
 to delete when scaling down, either newest or oldest
  7. TODO: A hook needs to be provided to allow completion or
 cancelling of the auto scaling down of a resource. For example, a
 MongoDB shard may need draining to other nodes before it can be
 safely deleted. Or another example, replica's may need time to
 resync before another is deleted. The check would ensure the
 resync is done.
  8. *TODO: Another hook should be provided to allow selection of node
 to scale down. MongoDB example again, select the node with the
 least amount of data that will need to migrate to other hosts.*

 ===

 Item 8 enables the customer to customize which instance to scale down.
Thanks Jay - I know that it is not available today.

What I would like to know is - what is the order that is used today?

Thanks
Maish

 Thanks!

 2014-11-26 18:30 GMT+08:00 Pavlo Shchelokovskyy
 pshchelokovs...@mirantis.com mailto:pshchelokovs...@mirantis.com:

 Maish,

 by default they are deleted in the same order they were created, FIFO style.

 Best regards,
 Pavlo Shchelokovskyy.

 On Wed, Nov 26, 2014 at 12:24 PM, Maish Saidel-Keesing
 maishsk+openst...@maishsk.com
 mailto:maishsk+openst...@maishsk.com wrote:

 In which order are machines terminated during a scale down
 action in an
 auto scaling group

 For example instances 1 & 2 were deployed in a stack. Instances 3 & 4
 were created as a result of load.

 When the load is reduced and the instances are scaled back
 down, which
 ones will be removed? And in which order?

 From old to new (1 -> 4) or new to old (4 -> 1)?

 Thanks

 --
 Maish Saidel-Keesing


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 -- 
 Pavlo Shchelokovskyy
 Software Engineer
 Mirantis Inc
 www.mirantis.com http://www.mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 -- 
 Thanks,

 Jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Maish Saidel-Keesing

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-11-26 Thread Julien Danjou
On Wed, Nov 26 2014, Andreas Jaeger wrote:

 The libraries have 2.6 support enabled as discussed - but if indeed some
 are missing, please send patches,

So to recap, it seems to me the plan is to keep all Oslo libs supporting
Python 2.6 so we don't have any transitive dependency problem with stable
in the future. In this regard patch
https://review.openstack.org/#/c/130444 seems to be in contradiction
with what has been decided, I wonder why it has been merged?

I pushed https://review.openstack.org/#/c/137321/ to bring it back.

Please people make up your mind. ;)

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-11-26 Thread Andreas Jaeger
On 11/26/2014 02:48 PM, Julien Danjou wrote:
 On Wed, Nov 26 2014, Andreas Jaeger wrote:
 
 The libraries have 2.6 support enabled as discussed - but if indeed some
 are missing, please send patches,
 
 So to recap, it seems to me the plan is to keep all Oslo lib with
 Python 2.6 so we don't have any transient dependency problem with stable
 in the future. In this regard patch
 https://review.openstack.org/#/c/130444 seems to be in contradiction
 with what has been decided, I wonder why it has been merged?

Check the dates when it was proposed. That was based on the initial
proposal; the discussion to use 2.6 everywhere only happened afterwards.

 I pushed https://review.openstack.org/#/c/137321/ to bring it back.
 
 Please people make up your mind. ;)

Thanks for the patch,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF:Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal new hacking rules

2014-11-26 Thread Jay Pipes

On 11/26/2014 06:20 AM, Nicolas Trangez wrote:

On Mon, 2014-11-24 at 13:19 -0500, Jay Pipes wrote:

I think pointing out that the default failure
message for testtools.TestCase.assertEqual() uses the terms
reference
(expected) and actual is a reason why reviewers *should* ask patch
submitters to use (expected, actual) ordering.


Is there any reason for this specific ordering? Not sure about others,
but I tend to write equality comparisons like this

 if var == 1:

instead of

 if 1 == var:

(although I've seen the latter in C code before).

This gives rise to

 assert var == 1

or, moving into `unittest` domain

 assertEqual(var, 1)

reading it as 'Assert `var` equals 1', which makes me wonder why the
`assertEqual` API is defined the other way around (unlike how I'd write
any other equality check).


It's not about an equality condition.

It's about the message that is produced by 
testtools.TestCase.assertEqual(), and the helpfulness of that message 
when the order of the arguments is reversed.


This is especially true with large dict comparisons. If you get a 
message like:


 reference: large_dict
 actual: large_dict

And the arguments are reversed, then you end up wasting time looking in 
the test code instead of the real code for the thing that is different.
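
A tiny illustration (the test and the dicts are made up):

    import testtools

    class ExampleTest(testtools.TestCase):
        def test_server_status(self):
            expected = {'status': 'ACTIVE'}
            observed = {'status': 'ERROR'}
            # testtools reports the first argument as "reference" and the
            # second as "actual"; swap them and the failure output points
            # you at the wrong side of the comparison.
            self.assertEqual(expected, observed)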


Anyway, like I said, it's not something that we can write a simple 
hacking check for, and therefore, it's not something that should have 
much time spent on. But I do recommend that reviewers bring it up, 
especially if the patch author has been inconsistent in their usage of 
(expected, actual) in multiple assertEqual() calls in their patch.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] environment variables in local.conf post-config section

2014-11-26 Thread jordan pittier
Hi,
It should work with your current local.conf. You may be facing this bug : 
https://bugs.launchpad.net/devstack/+bug/1386413
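
Once that fix is in, a local.conf along these lines should expand as expected
(the address is only an example), assuming HOST_IP_MGMT is defined in the
localrc section:

    [[local|localrc]]
    HOST_IP_MGMT=192.168.120.10

    [[post-config|$NOVA_CONF]]
    [DEFAULT]
    vncserver_proxyclient_address=$HOST_IP_MGMT

and nova.conf should then end up with
vncserver_proxyclient_address = 192.168.120.10 instead of the literal
variable name.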

Jordan

- Original Message -
From: Andreas Scheuring scheu...@linux.vnet.ibm.com
To: openstack-dev openstack-dev@lists.openstack.org
Sent: Wednesday, 26 November, 2014 2:04:57 PM
Subject: [openstack-dev] [devstack] environment variables in local.conf 
post-config section

Hi together, 
is there a way to use Environment variables in the local.conf
post-config section?

On my system (stable/juno devstack) not the content of the variable, but
the variable name itself is being inserted into the config file.


So e.g. 
[[post-config|$NOVA_CONF]]
[DEFAULT]
vncserver_proxyclient_address=$HOST_IP_MGMT

results in nova.conf as:
vncserver_proxyclient_address = $HOST_IP_MGMT



Thanks

-- 
Andreas 
(irc: scheuran)




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Order of machines to be terminated during scale down

2014-11-26 Thread Maish Saidel-Keesing
Thanks Pavlo.

Is there any reason why FIFO was chosen?

Maish
On 26/11/2014 12:30, Pavlo Shchelokovskyy wrote:
 Maish,

 by default they are deleted in the same order they were created, FIFO style.

 Best regards,
 Pavlo Shchelokovskyy.

 On Wed, Nov 26, 2014 at 12:24 PM, Maish Saidel-Keesing
 maishsk+openst...@maishsk.com mailto:maishsk+openst...@maishsk.com
 wrote:

 In which order are machines terminated during a scale down action
 in an
 auto scaling group

 For example instances 1 & 2 were deployed in a stack. Instances 3 & 4
 were created as a result of load.

 When the load is reduced and the instances are scaled back down, which
 ones will be removed? And in which order?

 From old to new (1 -> 4) or new to old (4 -> 1)?

 Thanks

 --
 Maish Saidel-Keesing


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 -- 
 Pavlo Shchelokovskyy
 Software Engineer
 Mirantis Inc
 www.mirantis.com http://www.mirantis.com


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Maish Saidel-Keesing

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal new hacking rules

2014-11-26 Thread Nicolas Trangez
On Wed, 2014-11-26 at 08:54 -0500, Jay Pipes wrote:
 On 11/26/2014 06:20 AM, Nicolas Trangez wrote:
  On Mon, 2014-11-24 at 13:19 -0500, Jay Pipes wrote:
  I think pointing out that the default failure
  message for testtools.TestCase.assertEqual() uses the terms
  reference
  (expected) and actual is a reason why reviewers *should* ask patch
  submitters to use (expected, actual) ordering.
 
  Is there any reason for this specific ordering? Not sure about others,
  but I tend to write equality comparisons like this
 
   if var == 1:
 
  instead of
 
   if 1 == var:
 
  (although I've seen the latter in C code before).
 
  This gives rise to
 
   assert var == 1
 
  or, moving into `unittest` domain
 
   assertEqual(var, 1)
 
  reading it as 'Assert `var` equals 1', which makes me wonder why the
  `assertEqual` API is defined the other way around (unlike how I'd write
  any other equality check).
 
 It's not about an equality condition.
 
 It's about the message that is produced by 
 testtools.TestCase.assertEqual(), and the helpfulness of that message 
 when the order of the arguments is reversed.

I'm aware of that. I was just wondering whether there's a particular
reason the ordering (and as a result of that the error message) was
chosen as it is.

I'd rather design the API as `assertEqual(value, expected)`, and let the
message indeed say 'Expected ..., but got ...' (and using the argument
values accordingly).

Nicolas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] How should libvirt pools work with distributed storage drivers?

2014-11-26 Thread Peter Penchev
Hi,

Some days ago, a bunch of Nova specs were approved for Kilo.  Among them
was https://blueprints.launchpad.net/nova/+spec/use-libvirt-storage-pools

Now, while I do recognize the wisdom of using storage pools, I do see a
couple of possible problems with this, especially in the light of my
upcoming spec proposal for using StorPool distributed storage for the VM
images.

My main concern is with the explicit specification that the libvirt pools
should be of the directory type, meaning that all the images should be
visible as files in a single directory.  Would it be possible to extend the
specification to allow other libvirt pool types, or to allow other ways of
pointing Nova at the filesystem path of the VM image?

Where this is coming from is that StorPool volumes (which we intend to
write a DiskImage subclass for) appear in the host filesystem as
/dev/storpool/volumename special files (block devices).  Thus, it would
be... interesting... to find ways to make them show up under a specific
directory (yes, we could do lots and lots of symlink magic, but we've been
down that road before and it doesn't necessarily lead to Good Things(tm)).
I see that the spec has several mentions of "yeah, we should special-case
Ceph/RBD here, since they do things in a different way" - well, StorPool
does things in a slightly different way, too :)

And yes, we do have work in progress to expose the StorPool cluster's
volumes as a libvirt pool, but this might take a bit of time to complete
and then it will most probably take much more time to get into the libvirt
upstream *and* into the downstream distributions, so IMHO "okay, let's use
different libvirt pool types" might not be entirely enough for us, although
it would be a possible workaround.

Of course, it's entirely possible that I have not completely understood the
proposed mechanism; I do see some RBD patches in the previous incarnations
of this blueprint, and if I read them right, it *might* be trivial to
subclass the new libvirt storage pool support thing and provide the
/dev/storpool/volumename paths to the upper layers.  If this is so, feel
free to let me know I've wasted your time in reading this e-mail, in strong
terms if necessary :)

G'luck,
Peter
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal new hacking rules

2014-11-26 Thread Louis Taylor
On Wed, Nov 26, 2014 at 08:54:35AM -0500, Jay Pipes wrote:
 It's not about an equality condition.
 
 It's about the message that is produced by testtools.TestCase.assertEqual(),
 and the helpfulness of that message when the order of the arguments is
 reversed.
 
 This is especially true with large dict comparisons. If you get a message
 like:
 
  reference: large_dict
  actual: large_dict
 
 And the arguments are reversed, then you end up wasting time looking in the
 test code instead of the real code for the thing that is different.
 
 Anyway, like I said, it's not something that we can write a simple hacking
 check for, and therefore, it's not something that should have much time
 spent on. But I do recommend that reviewers bring it up, especially if the
 patch author has been inconsistent in their usage of (expected, actual) in
 multiple assertEqual() calls in their patch.

I think Nicolas's question was what made testtools choose this ordering. As far
as I know, the python docs for unittest use the opposite ordering. I think
most people can see that the error messages involving 'reference' and 'actual'
are useful, but maybe not the fact that in order to achieve them using
testtools, you need to go against the norm for other testing frameworks.

(fwiw, I'm an advocate for using the ordering with the best error messages)


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal new hacking rules

2014-11-26 Thread Jay Pipes

On 11/26/2014 09:28 AM, Nicolas Trangez wrote:

On Wed, 2014-11-26 at 08:54 -0500, Jay Pipes wrote:

On 11/26/2014 06:20 AM, Nicolas Trangez wrote:

On Mon, 2014-11-24 at 13:19 -0500, Jay Pipes wrote:

I think pointing out that the default failure
message for testtools.TestCase.assertEqual() uses the terms
reference
(expected) and actual is a reason why reviewers *should* ask patch
submitters to use (expected, actual) ordering.


Is there any reason for this specific ordering? Not sure about others,
but I tend to write equality comparisons like this

  if var == 1:

instead of

  if 1 == var:

(although I've seen the latter in C code before).

This gives rise to

  assert var == 1

or, moving into `unittest` domain

  assertEqual(var, 1)

reading it as 'Assert `var` equals 1', which makes me wonder why the
`assertEqual` API is defined the other way around (unlike how I'd write
any other equality check).


It's not about an equality condition.

It's about the message that is produced by
testtools.TestCase.assertEqual(), and the helpfulness of that message
when the order of the arguments is reversed.


I'm aware of that. I was just wondering whether there's a particular
reason the ordering (and as a result of that the error message) was
chosen as it is.

I'd rather design the API as `assertEqual(value, expected)`, and let the
message indeed say 'Expected ..., but got ...' (and using the argument
values accordingly).


I think you'd have the same problem, no? People would still need to get 
the order of the arguments correct.


-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] environment variables in local.conf post-config section

2014-11-26 Thread Andreas Scheuring
That's it. Thanks!
-- 
Andreas 
(irc: scheuran)


On Wed, 2014-11-26 at 15:10 +0100, jordan pittier wrote:
 Hi,
 It should work with your current local.conf. You may be facing this bug : 
 https://bugs.launchpad.net/devstack/+bug/1386413
 
 Jordan
 
 - Original Message -
 From: Andreas Scheuring scheu...@linux.vnet.ibm.com
 To: openstack-dev openstack-dev@lists.openstack.org
 Sent: Wednesday, 26 November, 2014 2:04:57 PM
 Subject: [openstack-dev] [devstack] environment variables in local.conf   
 post-config section
 
 Hi together, 
 is there a way to use Environment variables in the local.conf
 post-config section?
 
 On my system (stable/juno devstack) not the content of the variable, but
 the variable name itself is being inserted into the config file.
 
 
 So e.g. 
 [[post-config|$NOVA_CONF]]
 [DEFAULT]
 vncserver_proxyclient_address=$HOST_IP_MGMT
 
 results in nova.conf as:
 vncserver_proxyclient_address = $HOST_IP_MGMT
 
 
 
 Thanks
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Splitting up the assignment component

2014-11-26 Thread David Chadwick
I tend to agree with Morgan. There are resources and there are users.
And there is something in the middle that says which users can access
which resources. It might be an ACL, a RBAC role, or a set of ABAC
attributes, or something else (such as a MAC policy). So to my mind this
middle bit, whilst being connected to both resources and users, is
separate from both of them. So we should not artificially put it with
just one of them.

FYI, the roles in RBAC are part of the policy specification. You define
the roles, their hierarchical relationships, then assign both users and
resources (privileges actually) to them. So roles could be part of the
policy specification, except that the policy is distributed, so in which
part of the distributed policy would you put it? Would it be in the
specification of roles to actions, or in the attribute mappings, or in
the user to attribute assignments?

regards

David

On 25/11/2014 16:42, Morgan Fainberg wrote:
 
 On Nov 25, 2014, at 4:25 AM, Henry Nash hen...@linux.vnet.ibm.com wrote:

 Hi

 As most of you know, we have approved a spec 
 (https://review.openstack.org/#/c/129397/) to split the assignments 
 component up into two pieces, and the code (divided up into a series of 
 patches) is currently in review (https://review.openstack.org/#/c/130954/). 
 While most aspects of the split appear to have agreement, there is one 
 aspect that has been questioned - and that is the whether roles' should be 
 in the resource component, as proposed?

 First, let's recap the goals here:

 1) The current assignment component is really what's left after we split 
 off users/groups into identity some releases ago.  Assignments is pretty 
 complicated and messy - and we need a better structure (as an example, just 
 doing the split allowed me to find 5 bugs in our current implementation - 
 and I wouldn't be surprised if there are more).  This is made more urgent by 
 the fact that we are about to land some big new changes in this area, e.g. 
 hierarchical projects and a re-implemntation (for performance) of 
 list_role_assignments.

 2) While Keystone may have started off as a service where we store all the 
  users, credentials & permissions needed to access other OpenStack services, 
  we more and more see Keystone as a wrapper for existing corporate 
  authentication and authorisation mechanisms - and its job is really to 
  provide a common mechanism and language for these to be consumed across 
  OpenStack services.  To do this well, we must make sure that the keystone 
 components are split along sensible lines...so that they can individually 
  wrap these corporate directories/services.  The classic case of this was our 
 previous split off of Identity...and this new proposal takes this a step 
 further.

 3) As more and more broad OpenStack powered clouds are created, we must 
  make sure that our Keystone implementation is as flexible as possible. We 
 already plan to support new abstractions for things like cloud providers 
 enabling resellers to do business within one OpenStack cloud (by providing 
 hierarchical multi-tenancy, domain-roles etc.). Our current assignments 
 model is a) slightly unusual in that all roles are global and every 
 assignment has actor-target-role, and b) cannot easily be substituted for 
 alternate assignment models (even for the whole of an OpenStack 
 installation, let alone on a domain by domain basis)

 The proposal for splitting the assignment component is trying to provide a 
 better basis for the above.  It separates the storing and CRUD operations of 
 domain/projects/roles into a resource component, while leaving the pure 
 assignment model in assignment.  The rationale for this is that the 
 resource component defines the entities that the rest of the OpenStack 
 services (and their policy engines) understand...while assignment is a pure 
 mapper between these entities. The details of these mappings are never 
 exposed outside of Keystone, except for the generation of contents of a 
 token.  This would allow new assignment models to be introduced that, as 
 long as they support the api to list what role_ids are mapped to project_id 
 X for user_id Y, then the rest of OpenStack would never know anything had 
 changed.

  So to (finally) get to the point of this post...where should the role 
 definitions live? The proposal is that these live in resource, because:

 a) They represent the definition of how Keystone and the other services 
 define permission - and this should be independent of whatever assignment 
 model we choose
  b) We may well choose (in the future) to morph what we currently mean by a 
  role...into what it really is, which is a capability.  Once we have 
  domain-specific roles (groups), which map to global roles, then we may well 
 end up, more often than not, with a role representing a single API 
 capability.  Roles might even be created simply by a service registering 
 its capabilities with Keystone.  Again, this 

Re: [openstack-dev] [QA][Tempest] Proposing Ghanshyam Mann for Tempest Core

2014-11-26 Thread Attila Fazekas
+1

- Original Message -
From: Marc Koderer m...@koderer.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Wednesday, November 26, 2014 7:58:06 AM
Subject: Re: [openstack-dev] [QA][Tempest] Proposing Ghanshyam Mann for Tempest 
Core

+1 

Am 22.11.2014 um 15:51 schrieb Andrea Frittoli  andrea.fritt...@gmail.com : 





+1 
On 21 Nov 2014 18:25, Ken1 Ohmichi  ken1ohmi...@gmail.com  wrote: 


+1 :-) 

Sent from my iPod 

On 2014/11/22, at 7:56, Christopher Yeoh  cbky...@gmail.com  wrote: 

 +1 
 
 Sent from my iPad 
 
 On 22 Nov 2014, at 4:56 am, Matthew Treinish  mtrein...@kortar.org  wrote: 
 
 
 Hi Everyone, 
 
 I'd like to propose we add Ghanshyam Mann (gmann) to the tempest core team. 
 Over 
 the past couple of cycles Ghanshyam has been actively engaged in the Tempest 
 community. Ghanshyam has had one of the highest review counts on Tempest for 
 the past cycle, and he has consistently been providing reviews that have 
 been 
 of consistently high quality that show insight into both the project 
 internals 
 and it's future direction. I feel that Ghanshyam will make an excellent 
 addition 
 to the core team. 
 
 As per the usual, if the current Tempest core team members would please vote 
 +1 
 or -1(veto) to the nomination when you get a chance. We'll keep the polls 
 open 
 for 5 days or until everyone has voted. 
 
 Thanks, 
 
 Matt Treinish 
 
 References: 
 
 https://review.openstack.org/#/q/reviewer:%22Ghanshyam+Mann+%253Cghanshyam.mann%2540nectechnologies.in%253E%22,n,z
  
 
 http://stackalytics.com/?user_id=ghanshyammannmetric=marks 
 
 ___ 
 OpenStack-dev mailing list 
 OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
 
 ___ 
 OpenStack-dev mailing list 
 OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 

___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Tempest] Proposing Ghanshyam Mann for Tempest Core

2014-11-26 Thread Matthew Treinish

So all of the current core team members have voted unanimously in favor of
adding Ghanshyam to the team.

Welcome to the team Ghanshyam.

-Matt Treinish


On Wed, Nov 26, 2014 at 09:57:10AM -0500, Attila Fazekas wrote:
 +1
 
 - Original Message -
 From: Marc Koderer m...@koderer.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, November 26, 2014 7:58:06 AM
 Subject: Re: [openstack-dev] [QA][Tempest] Proposing Ghanshyam Mann for   
 Tempest Core
 
 +1 
 
 Am 22.11.2014 um 15:51 schrieb Andrea Frittoli  andrea.fritt...@gmail.com : 
 
 
 
 
 
 +1 
 On 21 Nov 2014 18:25, Ken1 Ohmichi  ken1ohmi...@gmail.com  wrote: 
 
 
 +1 :-) 
 
 Sent from my iPod 
 
 On 2014/11/22, at 7:56, Christopher Yeoh  cbky...@gmail.com  wrote: 
 
  +1 
  
  Sent from my iPad 
  
  On 22 Nov 2014, at 4:56 am, Matthew Treinish  mtrein...@kortar.org  
  wrote: 
  
  
  Hi Everyone, 
  
  I'd like to propose we add Ghanshyam Mann (gmann) to the tempest core 
  team. Over 
  the past couple of cycles Ghanshyam has been actively engaged in the 
  Tempest 
  community. Ghanshyam has had one of the highest review counts on Tempest 
  for 
  the past cycle, and he has consistently been providing reviews that have 
  been 
  of consistently high quality that show insight into both the project 
  internals 
  and it's future direction. I feel that Ghanshyam will make an excellent 
  addition 
  to the core team. 
  
  As per the usual, if the current Tempest core team members would please 
  vote +1 
  or -1(veto) to the nomination when you get a chance. We'll keep the polls 
  open 
  for 5 days or until everyone has voted. 
  
  Thanks, 
  
  Matt Treinish 
  
  References: 
  
  https://review.openstack.org/#/q/reviewer:%22Ghanshyam+Mann+%253Cghanshyam.mann%2540nectechnologies.in%253E%22,n,z
   
  
  http://stackalytics.com/?user_id=ghanshyammannmetric=marks 
  
  ___ 
  OpenStack-dev mailing list 
  OpenStack-dev@lists.openstack.org 
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
  
  ___ 
  OpenStack-dev mailing list 
  OpenStack-dev@lists.openstack.org 
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
 
 ___ 
 OpenStack-dev mailing list 
 OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
 ___ 
 OpenStack-dev mailing list 
 OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


pgpUFXStLCFpG.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Devstack] Juno Backport for Bug Revert Single quote iniset argument in merge_config_file

2014-11-26 Thread Andreas Scheuring
Hi, 
do you think we could backport this fix to the devstack stable/juno
release? 
https://review.openstack.org/#/c/131334/

This bug prevents people from using the local.conf from icehouse when they
make use of variables for defining configuration values.



An example. The local.conf looks like this:
 [[post-config|$NOVA_CONF]]
 [DEFAULT]
 vncserver_proxyclient_address=$HOST_IP_MGMT
 

With stable/juno this results in nova.conf as:
 vncserver_proxyclient_address = $HOST_IP_MGMT


The above mentioned patch solves this problem.
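
For comparison, with the revert applied the variable is expanded when the
meta-config is merged, so (assuming, purely for illustration, that
HOST_IP_MGMT=192.168.122.10 is set in local.conf) nova.conf should end up with:

 vncserver_proxyclient_address = 192.168.122.10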

Do you think it's worthwhile backporting it?


-- 
Andreas 
(irc: scheuran)




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova host-update gives error 'Virt driver does not implement host disabled status'

2014-11-26 Thread Vladik Romanovsky


- Original Message -
 From: Vineet Menon mvineetme...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, 26 November, 2014 5:14:09 AM
 Subject: Re: [openstack-dev] [nova] nova host-update gives error 'Virt driver 
 does not implement host disabled
 status'
 
 Hi Kevin,
 
 Oh. Yes. That could be the problem.
 Thanks for pointing that out.
 
 
 Regards,
 
 Vineet Menon
 
 
 On 26 November 2014 at 02:02, Chen CH Ji  jiche...@cn.ibm.com  wrote:
 
 
 
 
 
 are you using libvirt? it's not implemented there;
 I guess the bug you mention is about other hypervisors?
 
 the message was printed here:
 http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/contrib/hosts.py#n236
 
 Best Regards!
 
 Kevin (Chen) Ji 纪 晨
 
 Engineer, zVM Development, CSTL
 Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
 Phone: +86-10-82454158
 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
 Beijing 100193, PRC
 
 Vineet Menon ---11/26/2014 12:10:39 AM---Hi, I'm trying to reproduce the bug
 https://bugs.launchpad.net/nova/+bug/1259535 .
 
 From: Vineet Menon  mvineetme...@gmail.com 
 To: openstack-dev  openstack-dev@lists.openstack.org 
 Date: 11/26/2014 12:10 AM
 Subject: [openstack-dev] [nova] nova host-update gives error 'Virt driver
 does not implement host disabled status'
 
 
Hi Vineet, 

There are two methods in the API for changing the service/host status.
nova host-update and nova service-update.

Currently, in order to disable the service one should use the nova 
service-update command,
which maps to service_update method in the manager class.
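
For example, the disable case from the original question would be something
along these lines (sketch only; check the exact client syntax for your release):

    nova service-disable machine1 nova-compute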

nova host-update maps to the set_host_enabled() method in the virt drivers, which 
is not implemented
in the libvirt driver.
Not sure what is the purpose of this method, but libvirt driver doesn't 
implement it.

For a short period of time, this method was implemented for the wrong reason, 
which was causing the bug in the title;
however, it was fixed with https://review.openstack.org/#/c/61016

Let me know if you have any questions.

Thanks,
Vladik



 
 
 Hi,
 
 I'm trying to reproduce the bug https://bugs.launchpad.net/nova/+bug/1259535
 .
 While trying to issue the command, nova host-update --status disable
 machine1, an error is thrown saying,
 
 
 ERROR (HTTPNotImplemented): Virt driver does not implement host disabled
 status. (HTTP 501) (Request-ID: req-1f58feda-93af-42e0-b7b6-bcdd095f7d8c)
 
 What is this error about?
 
 Regards,
 Vineet Menon


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Order of machines to be terminated during scale down

2014-11-26 Thread Zane Bitter

On 26/11/14 09:13, Maish Saidel-Keesing wrote:

Thanks Pavlo.

Is there any reason why FIFO was chosen?


I believe that this was the original termination policy on AWS, and that 
was the reason we chose it. It was used on AWS because if you deleted an 
instance that was just created you would be charged for a full hour, so 
it was cheaper on average to delete an older one. It appears from the 
docs[1] that they now do something more sophisticated by default (kill 
the one closest to the next billing hour, all else being equal) and also 
offer a bunch of different policies.


cheers,
Zane.

[1] 
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingBehavior.InstanceTermination.html



Maish
On 26/11/2014 12:30, Pavlo Shchelokovskyy wrote:

Maish,

by default they are deleted in the same order they were created,
FIFO style.

Best regards,
Pavlo Shchelokovskyy.

On Wed, Nov 26, 2014 at 12:24 PM, Maish Saidel-Keesing
maishsk+openst...@maishsk.com mailto:maishsk+openst...@maishsk.com
wrote:

In which order are machines terminated during a scale down action
in an
auto scaling group

For example, instances 1 & 2 were deployed in a stack. Instances 3 & 4
were created as a result of load.

When the load is reduced and the instances are scaled back down, which
ones will be removed? And in which order?

From old to new (1-4) or new to old (4 - 1) ?

Thanks

--
Maish Saidel-Keesing


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Pavlo Shchelokovskyy
Software Engineer
Mirantis Inc
www.mirantis.com http://www.mirantis.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Maish Saidel-Keesing



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] No Meeting this Week

2014-11-26 Thread Matthew Treinish

Hi Everyone,

I figured I'd send a quick announcement that we won't be having a meeting this
week. The next meeting will be next week, Dec. 4th at 17:00 UTC.

Thanks,

-Matt Treinish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [pecan] [WSME] Different content-type in request and response

2014-11-26 Thread Doug Hellmann

On Nov 26, 2014, at 3:49 AM, Renat Akhmerov rakhme...@mirantis.com wrote:

 Hi,
 
 I traced the WSME code and found a place [0] where it tries to get arguments 
 from the request body based on different mimetypes. So it looks like WSME supports 
 only json, xml and “application/x-www-form-urlencoded”.
 
 So my question is: Can we fix WSME to also support “text/plain” mimetype? I 
 think the first snippet that Nikolay provided is valid from WSME standpoint.

WSME is intended for building APIs with structured arguments. It seems like the 
case of wanting to use text/plain for a single input string argument just 
hasn’t come up before, so this may be a new feature.

How many different API calls do you have that will look like this? Would this 
be the only one in the API? Would it make sense to consistently use JSON, even 
though you only need a single string argument in this case?
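
For instance, a minimal sketch of what that might look like (untested, and
the wrapper type name here is made up for illustration):

    class TextBody(resource.Resource):
        text = wtypes.text

    class MyResourcesController(rest.RestController):
        @wsexpose(MyResource, body=TextBody)
        def put(self, data):
            return MyResource(id='1', name=data.text)

so the request stays application/json (e.g. {"text": "..."}) and the
response is the usual JSON-serialized MyResource.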

 
 Or if we don’t understand something in the WSME philosophy then it’d be nice to hear 
 some explanations from the WSME team. We would appreciate that.
 
 
 Another issue that previously came up is that if we use WSME then we 
 can’t pass an arbitrary set of parameters in a URL query string; as I understand it, 
 they should always correspond to the WSME resource structure. So, in fact, we 
 can’t have any dynamic parameters. In our particular use case it’s very 
 inconvenient. Hoping you could also provide some info about that: how it can 
 be achieved or whether we can just fix it.

Ceilometer uses an array of query arguments to allow an arbitrary number.

On the other hand, it sounds like perhaps your desired API may be easier to 
implement using some of the other tools being used, such as JSONSchema. Are you 
extending an existing API or building something completely new?

Doug

 
 If you need help with contribution let us know pls.
 
 Thanks
 
 [0] https://github.com/stackforge/wsme/blob/master/wsme/rest/args.py#L215
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 On 25 Nov 2014, at 23:06, Nikolay Makhotkin nmakhot...@mirantis.com wrote:
 
 Hi, folks! 
 
  I'm trying to create a controller that receives one HTTP content-type in the 
  request but returns another content-type in the response. I tried to use 
  pecan and wsme decorators for the controller's methods.
 
  I just want to receive text on the server and send a JSON-encoded string back 
  (the request is text/plain and the response application/json). 
 
 I tried: 
 
 class MyResource(resource.Resource):
 id = wtypes.text
 name = wtypes.text
 
 
 class MyResourcesController(rest.RestController):
 @wsexpose(MyResource, body=wtypes.text)
 def put(self, text):
 return MyResource(id='1', name=text)
 
 
 According to WSME documentation 
 (http://wsme.readthedocs.org/en/latest/integrate.html#module-wsmeext.pecan) 
 signature wsexpose method as following: 
 
   wsexpose(return_type, *arg_types, **options)
 
 Ok, I just set MyResource as return_type and body to text type. But it 
 didn't work as expected: 
 http://paste.openstack.org/show/138268/ 
 
 I looked at pecan documentation at 
 https://media.readthedocs.org/pdf/pecan/latest/pecan.pdf but I didn't find 
 anything that can fit to my case.
 
 Also, I tried: 
 
 class MyResource(resource.Resource):
 id = wtypes.text
 name = wtypes.text
 
 
 class MyResourcesController(rest.RestController):
 @expose('json')
  @expose(content_type='text/plain')
 def put(self):
 text = pecan.request.text
 return MyResource(id='1', name=text).to_dict()
 
  It worked only when the request and response have the same content-type 
  (application/json-application/json, text/plain-text/plain).
 
  I also tried a lot of combinations of parameters but it still didn't work.
 
 Does anyone know what the problem is?
 How it can be done using WSME and/or Pecan?
 
 Sorry if I misunderstand something.
 -- 
 Best Regards,
 Nikolay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-26 Thread Igor Kalnitsky
Folks,

Maybe I'm understanding some things wrong, but Zabbix is a different story.
We deploy Zabbix per cluster, so it doesn't monitor *all* slaves
or the master node. It monitors only one cluster.

Therefore I see no reason to choose Zabbix over monit. I mean, it
shouldn't be "we MUST use Zabbix because we use it for our clusters".

- Igor

P.S: Personally, I'd like to use either Monit or Sensu.

On Wed, Nov 26, 2014 at 3:58 PM, Jay Pipes jaypi...@gmail.com wrote:
 On 11/26/2014 08:18 AM, Fox, Kevin M wrote:

 So then in the end, there will be 3 monitoring systems to learn,
 configure, and debug? Monasca for cloud users, zabbix for most of the
 physical systems, and sensu or monit to be small?

 Seems very complicated.

 If not just monasca, why not the zabbix thats already being deployed?


 Yes, I had the same thoughts... why not just use zabbix since it's used
 already?

 Best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-26 Thread Przemyslaw Kaminski
We want to monitor Fuel master node while Zabbix is only on slave nodes 
and not on master. The monitoring service is supposed to be installed on 
Fuel master host (not inside a Docker container) and provide basic info 
about free disk space, etc.


P.

On 11/26/2014 02:58 PM, Jay Pipes wrote:

On 11/26/2014 08:18 AM, Fox, Kevin M wrote:

So then in the end, there will be 3 monitoring systems to learn,
configure, and debug? Monasca for cloud users, zabbix for most of the
physical systems, and sensu or monit to be small?

Seems very complicated.

If not just monasca, why not the zabbix thats already being deployed?


Yes, I had the same thoughts... why not just use zabbix since it's 
used already?


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] is there a way to simulate thousands or millions of compute nodes?

2014-11-26 Thread Gareth
Hi all,

Is there a way to simulate thousands or millions of compute nodes? Maybe we
could have many fake nova-compute services on one physical machine. That way
the other nova components would be under pressure from thousands of compute
services, and this could help us find more problems in large-scale
deployment (a fake one ;-) ).

I know there is a fake virt driver in nova, but that is not very realistic. Maybe
we need a fake driver that sleeps for 20s (which is close to a real boot time)
in its 'spawn' function.
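
Something like the following might be enough as a starting point (untested
sketch; it just wraps the existing fake driver, and 20s is the guess from
above):

    # hypothetical module, e.g. nova/virt/slow_fake.py
    import time

    from nova.virt import fake


    class SlowFakeDriver(fake.FakeDriver):
        """Fake driver that sleeps in spawn() to approximate boot time."""

        def spawn(self, *args, **kwargs):
            time.sleep(20)  # roughly the time a real instance takes to boot
            return super(SlowFakeDriver, self).spawn(*args, **kwargs)

Then point compute_driver at it on the fake compute hosts (the exact option
value depends on the release).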

-- 
Gareth

*Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball*
*OpenStack contributor, kun_huang@freenode*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Meeting cancelled this week

2014-11-26 Thread Ben Swartzlander
I forgot to bring the topic up last week, but this week we have a 
holiday in the US that conflicts with the weekly meeting, so I have 
cancelled it.


-Ben


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] #Personal# Ref: L3 service integration with service framework

2014-11-26 Thread Mathieu Rohon
Hi,

you can still add your own service plugin, as a mixin of
L3RouterPlugin (have a look at brocade's code).
AFAIU the service framework would manage the coexistence of several
implementations of a single service plugin.

This is currently not prioritized by neutron. This kind of work might
restart in the advanced_services project.

On Wed, Nov 26, 2014 at 2:28 PM, Priyanka Chopra
priyanka.cho...@tcs.com wrote:
 Hi Gary, All,


 This is with reference to blueprint - L3 router Service Type Framework and
 corresponding development at github repo.

 I noticed that the patch was abandoned due to inactivity. Wanted to know if
 there is a specific reason for which the development was put on hold?

 I am working on a Use-case to enable neutron calls (L2 and L3) from
 OpenStack to OpenDaylight neutron. However presently ML2 forwards the L2
 calls to ODL neutron, but not the L3 calls (router and FIP).
 With this blueprint submission the L3 Service framework (that includes L3
 driver, agent and plugin) will be completed and hence L3 calls from
 OpenStack can be redirected to any controller platform. Please suggest in
 case anyone else is working on the same or if we can do the enhancements
 required and submit the code to enable such a usecase.


 Best Regards
 Priyanka

 =-=-=
 Notice: The information contained in this e-mail
 message and/or attachments to it may contain
 confidential or privileged information. If you are
 not the intended recipient, any dissemination, use,
 review, distribution, printing or copying of the
 information contained in this e-mail message
 and/or attachments to it are strictly prohibited. If
 you have received this communication in error,
 please notify us by reply e-mail or telephone and
 immediately and permanently delete the message
 and any attachments. Thank you


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] suds-jurko, new in our global-requirements.txt: what is the point?!?

2014-11-26 Thread Thomas Goirand
Hi,

I tried to package suds-jurko. I was first happy to see that there was
some progress to make things work with Python 3. Unfortunately, the
reality is that suds-jurko has many issues with Python 3. For example,
it has many:

except Exception, e:

as well as many:

raise Exception, 'Duplicate key %s found' % k

This is clearly not Python3 code. I tried quickly to fix some of these
issues, but as I fixed a few, others appear.
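
For reference, the Python 3 compatible spellings would be:

    except Exception as e:
        ...

    raise Exception('Duplicate key %s found' % k)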

So I wonder, what is the point of using suds-jurko, which is half-baked,
and which will conflict with the suds package?

Cheers,

Thomas Goirand (zigo)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-26 Thread Jay Pipes

On 11/26/2014 10:22 AM, Przemyslaw Kaminski wrote:

We want to monitor Fuel master node while Zabbix is only on slave nodes
and not on master. The monitoring service is supposed to be installed on
Fuel master host (not inside a Docker container) and provide basic info
about free disk space, etc.


Why not use the same thing for monitoring the Fuel master host as we do 
for the docker containers/cluster?



P.

On 11/26/2014 02:58 PM, Jay Pipes wrote:

On 11/26/2014 08:18 AM, Fox, Kevin M wrote:

So then in the end, there will be 3 monitoring systems to learn,
configure, and debug? Monasca for cloud users, zabbix for most of the
physical systems, and sensu or monit to be small?

Seems very complicated.

If not just monasca, why not the zabbix thats already being deployed?


Yes, I had the same thoughts... why not just use zabbix since it's
used already?

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-26 Thread Sergii Golovatiuk
Hi,

I would do both to compare. monit and Sensu have own advantages.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Wed, Nov 26, 2014 at 4:22 PM, Przemyslaw Kaminski pkamin...@mirantis.com
 wrote:

 We want to monitor Fuel master node while Zabbix is only on slave nodes
 and not on master. The monitoring service is supposed to be installed on
 Fuel master host (not inside a Docker container) and provide basic info
 about free disk space, etc.

 P.


 On 11/26/2014 02:58 PM, Jay Pipes wrote:

 On 11/26/2014 08:18 AM, Fox, Kevin M wrote:

 So then in the end, there will be 3 monitoring systems to learn,
 configure, and debug? Monasca for cloud users, zabbix for most of the
 physical systems, and sensu or monit to be small?

 Seems very complicated.

 If not just monasca, why not the zabbix thats already being deployed?


 Yes, I had the same thoughts... why not just use zabbix since it's used
 already?

 Best,
 -jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Handling soft delete for instance rows in a new cells database

2014-11-26 Thread Andrew Laski


On 11/25/2014 11:54 AM, Solly Ross wrote:

I can't comment on other projects, but Nova definitely needs the soft
delete in the main nova database. Perhaps not for every table, but
there is definitely code in the code base which uses it right now.
Search for read_deleted=True if you're curious.

Just to save people a bit of time, it's actually `read_deleted='yes'`
or `read_deleted=yes` for many cases.

Just to give people a quick overview:

A cursory glance (no pun intended) seems to indicate that quite a few of
these are reading potentially deleted flavors.  For this case, it makes
sense to keep things in one table (as we do).

There are also quite a few that seem to be making sure deleted things
are properly cleaned up.  In this case, 'deleted' acts as a CLEANUP
state, so it makes just as much sense to keep the deleted rows in a
separate table.


For this case in particular, the concern is that operators might need
to find where an instance was running once it is deleted to be able to
diagnose issues reported by users. I think that's a valid use case of
this particular data.


This is a new database, so its our big chance to get this right. So,
ideas welcome...

Some initial proposals:

- we do what we do in the current nova database -- we have a deleted
column, and we set it to true when we delete the instance.

- we have shadow tables and we move delete rows to a shadow table.


Both approaches are viable, but as the soft-delete column is widespread, it
would be thorny for this new app to use some totally different scheme,
unless the notion is that all schemes should move to the audit table
 approach (which I wouldn’t mind, but it would be a big job). FTR, the
audit table approach is usually what I prefer for greenfield development,
if all that’s needed is forensic capabilities at the database inspection
level, and not as much active GUI-based “deleted” flags.   That is, if you
really don’t need to query the history tables very often except when
debugging an issue offline.  The reason its preferable is because those
rows are still “deleted” from your main table, and they don’t get in the
way of querying.   But if you need to refer to these history rows in
context of the application, that means you need to get them mapped in such
a way that they behave like the primary rows, which overall is a more
difficult approach than just using the soft delete column.

I think it does really come down here to how you intend to use the soft-delete
functionality in Cells.  If you just are using it to debug or audit, then I 
think
the right way to go would be either the audit table (potentially can store more
lifecycle data, but could end up taking up more space) or a separate shadow
table (takes up less space).

If you are going to use the soft delete for application functionality, I would
consider differentiating between deleted and we still have things left to
clean up, since this seems to be mixing two different requirements into one.


The case that spawned this discussion is one where deleted rows are not 
needed for application functionality.  So I'm going to update the 
proposed schema there to not include a 'deleted' column. Fortunately 
there's still some time before the question of how to handle deletes 
needs to be fully sorted out.



That said, I have a lot of plans to send improvements down the way of the
existing approach of “soft delete column” into projects, from the querying
POV, so that criteria to filter out soft delete can be done in a much more
robust fashion (see
https://bitbucket.org/zzzeek/sqlalchemy/issue/3225/query-heuristic-inspector-event).
But this is still more complex and less performant than if the rows are
just gone totally, off in a history table somewhere (again, provided you
really don’t need to look at those history rows in an application context,
otherwise it gets all complicated again).

Interesting. I hadn't seen consistency between the two databases as
trumping doing this less horribly, but it sounds like its more of a
thing that I thought.

Thanks,
Michael

--
Rackspace Australia




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Do we need an IntrospectionInterface?

2014-11-26 Thread Imre Farkas

On 11/26/2014 02:20 PM, Dmitry Tantsur wrote:

Hi all!

As our state machine and discovery discussion proceeds, I'd like to ask
your opinion on whether we need an IntrospectionInterface
(DiscoveryInterface?). Current proposal [1] suggests adding a method for
initiating a discovery to the ManagementInterface. IMO it's not 100%
correct, because:
1. It's not management. We're not changing anything.
2. I'm aware that some folks want to use discoverd-based discovery [2]
even for DRAC and ILO (e.g. for vendor-specific additions that can't be
implemented OOB).

Any ideas?

Dmitry.

[1] https://review.openstack.org/#/c/100951/
[2] https://review.openstack.org/#/c/135605/



Hi Dmitry,

I see the value in using the composability of our driver interfaces, so 
I vote for having a separate IntrospectionInterface. Otherwise we 
wouldn't allow users to use e.g. the DRAC driver with an in-band but more 
powerful hw discovery.
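
Something as small as this would probably be enough (an illustrative sketch
only, mirroring how the other driver interfaces are declared; the names are
made up):

    import abc

    import six


    @six.add_metaclass(abc.ABCMeta)
    class IntrospectionInterface(object):
        """Interface for triggering hardware introspection on a node."""

        @abc.abstractmethod
        def inspect_hardware(self, task):
            """Discover hardware properties and update the node."""

and drivers would then be free to plug in either an OOB (DRAC/iLO) or a
discoverd-based implementation.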


Imre


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-26 Thread Stanislaw Bogatkin
As for me - zabbix is overkill for one node. Zabbix Server + Agent +
Frontend + DB + HTTP server, and all of it for one node? Why not use
something that was developed for monitoring one node, doesn't have many
deps and works out of the box? Not necessarily Monit, but something similar.

On Wed, Nov 26, 2014 at 6:22 PM, Przemyslaw Kaminski pkamin...@mirantis.com
 wrote:

 We want to monitor Fuel master node while Zabbix is only on slave nodes
 and not on master. The monitoring service is supposed to be installed on
 Fuel master host (not inside a Docker container) and provide basic info
 about free disk space, etc.

 P.


 On 11/26/2014 02:58 PM, Jay Pipes wrote:

 On 11/26/2014 08:18 AM, Fox, Kevin M wrote:

 So then in the end, there will be 3 monitoring systems to learn,
 configure, and debug? Monasca for cloud users, zabbix for most of the
 physical systems, and sensu or monit to be small?

 Seems very complicated.

 If not just monasca, why not the zabbix thats already being deployed?


 Yes, I had the same thoughts... why not just use zabbix since it's used
 already?

 Best,
 -jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] suds-jurko, new in our global-requirements.txt: what is the point?!?

2014-11-26 Thread Donald Stufft

 On Nov 26, 2014, at 10:34 AM, Thomas Goirand z...@debian.org wrote:
 
 Hi,
 
 I tried to package suds-jurko. I was first happy to see that there was
 some progress to make things work with Python 3. Unfortunately, the
 reality is that suds-jurko has many issues with Python 3. For example,
 it has many:
 
 except Exception, e:
 
 as well as many:
 
 raise Exception, 'Duplicate key %s found' % k
 
 This is clearly not Python3 code. I tried quickly to fix some of these
 issues, but as I fixed a few, others appear.
 
 So I wonder, what is the point of using suds-jurko, which is half-baked,
 and which will conflict with the suds package?
 
It looks like it uses 2to3 to become Python 3 compatible.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal new hacking rules

2014-11-26 Thread Ben Nemec
On 11/26/2014 07:54 AM, Jay Pipes wrote:
 On 11/26/2014 06:20 AM, Nicolas Trangez wrote:
 On Mon, 2014-11-24 at 13:19 -0500, Jay Pipes wrote:
 I think pointing out that the default failure
 message for testtools.TestCase.assertEqual() uses the terms
 reference
 (expected) and actual is a reason why reviewers *should* ask patch
 submitters to use (expected, actual) ordering.

 Is there any reason for this specific ordering? Not sure about others,
 but I tend to write equality comparisons like this

  if var == 1:

 instead of

  if 1 == var:

 (although I've seen the latter in C code before).

 This gives rise to

  assert var == 1

 or, moving into `unittest` domain

  assertEqual(var, 1)

 reading it as 'Assert `var` equals 1', which makes me wonder why the
 `assertEqual` API is defined the other way around (unlike how I'd write
 any other equality check).
 
 It's not about an equality condition.
 
 It's about the message that is produced by 
 testtools.TestCase.assertEqual(), and the helpfulness of that message 
 when the order of the arguments is reversed.
 
 This is especially true with large dict comparisons. If you get a 
 message like:
 
   reference: large_dict
   actual: large_dict
 
 And the arguments are reversed, then you end up wasting time looking in 
 the test code instead of the real code for the thing that is different.

And my argument is that you're going to have to check the test code
anyway, because without some sort of automated enforcement you can never
be sure that the test author got it right.  I don't personally see
having to open a failing unit test as a huge burden either - I generally
need to do that to see what the unit test is calling anyway.
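
For anyone who hasn't hit this, the failure mode being discussed is roughly
the following (illustrative snippet only):

    expected = {'flavor': 'm1.large'}
    actual = code_under_test()            # hypothetical helper
    self.assertEqual(actual, expected)    # reversed: the report labels the
                                          # runtime value as the reference
    self.assertEqual(expected, actual)    # conventional (expected, actual)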

That said, I'm not personally bothered by this.  I learned early on not
to trust the expected, actual ordering so it makes no difference what
the failure message is to me.  I think we could save less experienced
developers some aggravation by not claiming something that may not be
true, but if people disagree I'm not inclined to spend a bunch of time
bikeshedding about it either. :-)

-Ben

 
 Anyway, like I said, it's not something that we can write a simple 
 hacking check for, and therefore, it's not something that should have 
 much time spent on. But I do recommend that reviewers bring it up, 
 especially if the patch author has been inconsistent in their usage of 
 (expected, actual) in multiple assertEqual() calls in their patch.
 
 Best,
 -jay
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-26 Thread Sergii Golovatiuk
Jay,

Fuel uses a watchdog service for containers to restart them in case of issues.
We have the same problem with containers when the disk is full.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Wed, Nov 26, 2014 at 4:39 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 11/26/2014 10:22 AM, Przemyslaw Kaminski wrote:

 We want to monitor Fuel master node while Zabbix is only on slave nodes
 and not on master. The monitoring service is supposed to be installed on
 Fuel master host (not inside a Docker container) and provide basic info
 about free disk space, etc.


 Why not use the same thing for monitoring the Fuel master host as we do
 for the docker containers/cluster?


  P.

 On 11/26/2014 02:58 PM, Jay Pipes wrote:

 On 11/26/2014 08:18 AM, Fox, Kevin M wrote:

 So then in the end, there will be 3 monitoring systems to learn,
 configure, and debug? Monasca for cloud users, zabbix for most of the
 physical systems, and sensu or monit to be small?

 Seems very complicated.

 If not just monasca, why not the zabbix thats already being deployed?


 Yes, I had the same thoughts... why not just use zabbix since it's
 used already?

 Best,
 -jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-26 Thread Sergii Golovatiuk
Monit is easy and is used to control the state of compute nodes. We can adopt
it for the master node.
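
For the free-disk-space check mentioned below, the config can be tiny; something
like this (illustrative only):

    check filesystem rootfs with path /
      if space usage > 80% then alert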

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Wed, Nov 26, 2014 at 4:46 PM, Stanislaw Bogatkin sbogat...@mirantis.com
wrote:

 As for me - zabbix is overkill for one node. Zabbix Server + Agent +
 Frontend + DB + HTTP server, and all of it for one node? Why not use
 something that was developed for monitoring one node, doesn't have many
 deps and work out of the box? Not necessarily Monit, but something similar.

 On Wed, Nov 26, 2014 at 6:22 PM, Przemyslaw Kaminski 
 pkamin...@mirantis.com wrote:

 We want to monitor Fuel master node while Zabbix is only on slave nodes
 and not on master. The monitoring service is supposed to be installed on
 Fuel master host (not inside a Docker container) and provide basic info
 about free disk space, etc.

 P.


 On 11/26/2014 02:58 PM, Jay Pipes wrote:

 On 11/26/2014 08:18 AM, Fox, Kevin M wrote:

 So then in the end, there will be 3 monitoring systems to learn,
 configure, and debug? Monasca for cloud users, zabbix for most of the
 physical systems, and sensu or monit to be small?

 Seems very complicated.

 If not just monasca, why not the zabbix thats already being deployed?


 Yes, I had the same thoughts... why not just use zabbix since it's used
 already?

 Best,
 -jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023)

2014-11-26 Thread Ben Nemec
On 11/25/2014 10:58 PM, Ian Wienand wrote:
 Hi,
 
 My change [1] to enable a consistent tracing mechanism for the many
 scripts diskimage-builder runs during its build seems to have hit a
 stalemate.
 
 I hope we can agree that the current situation is not good.  When
 trying to develop with diskimage-builder, I find myself constantly
 going and fiddling with set -x in various scripts, requiring me
 to re-run things needlessly as I try and trace what's happening.
 Conversely, some scripts set -x all the time and give output when you
 don't want it.
 
 Now nodepool is using d-i-b more, it would be even nicer to have
 consistency in the tracing so relevant info is captured in the image
 build logs.
 
 The crux of the issue seems to be some disagreement between reviewers
 over having a single "trace everything" flag or a more fine-grained
 approach, as currently implemented after it was asked for in reviews.
 
 I must be honest, I feel a bit silly calling out essentially a
 four-line patch here.

My objections are documented in the review, but basically boil down to
the fact that it's not a four line patch, it's a 500+ line patch that
does essentially the same thing as:

set +e
set -x
export SHELLOPTS

in disk-image-create.  You do lose set -e in disk-image-create itself on
debug runs because that's not something we can safely propagate,
although we could work around that by unsetting it before calling hooks.
 FWIW I've used this method locally and it worked fine.

The only drawback is it doesn't allow the granularity of an if block in
every script, but I don't personally see that as a particularly useful
feature anyway.  I would like to hear from someone who requested that
functionality as to what their use case is and how they would define the
different debug levels before we merge an intrusive patch that would
need to be added to every single new script in dib or tripleo going forward.

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Asia friendly IRC meeting time

2014-11-26 Thread Sergey Lukjanov
I think that 6 am for US west works much better than 3 am for Saratov.

So, I'm ok with keeping the current time and adding 1400 UTC.

   18:00UTC: Moscow (9pm)  China(2am)  US West(10am)/US East (1pm)
   14:00UTC: Moscow (5pm)  China(10pm)  US (W 6am / E 9am)

I think it's the best option to make all of us able to join.


On Wed, Nov 26, 2014 at 8:33 AM, Zhidong Yu zdyu2...@gmail.com wrote:

 If 6am works for people in US west, then I'm fine with Matt's suggestion
 (UTC14:00).

 Thanks, Zhidong

 On Tue, Nov 25, 2014 at 11:26 PM, Matthew Farrellee m...@redhat.com
 wrote:

 On 11/25/2014 02:37 AM, Zhidong Yu wrote:

  Current meeting time:
  18:00UTC: Moscow (9pm)China(2am) US West(10am)

 My proposal:
  18:00UTC: Moscow (9pm)China(2am) US West(10am)
  00:00UTC: Moscow (3am)China(8am) US West(4pm)


 fyi, a number of us are US East (US West + 3 hours), so...

 current meeting time:
  18:00UTC: Moscow (9pm)  China(2am)  US West(10am)/US East (1pm)

 and during daylight savings it's US West(11am)/US East(2pm)

 so the proposal is:
  18:00UTC: Moscow (9pm)  China(2am)  US (W 10am / E 1pm)
  00:00UTC: Moscow (3am)  China(8am)  US (W 4pm / E 7pm)

 given it's literally impossible to schedule a meeting during business
 hours across saratov, china and the us, that's a pretty reasonable
 proposal. my concern is that 00:00UTC may be thin on saratov & US
 participants.

 also consider alternating the existing schedule w/ something that's ~4
 hours earlier...
  14:00UTC: Moscow (5pm)  China(10pm)  US (W 6am / E 9am)

 best,


 matt


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-11-26 Thread Ben Nemec
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 11/26/2014 07:48 AM, Julien Danjou wrote:
 On Wed, Nov 26 2014, Andreas Jaeger wrote:
 
 The libraries have 2.6 support enabled as discussed - but if
 indeed some are missing, please send patches,
 
 So to recap, it seems to me the plan is to keep all Oslo libs with 
 Python 2.6 so we don't have any transitive dependency problem with
 stable in the future. In this regard patch 
 https://review.openstack.org/#/c/130444 seems to be in
 contradiction with what has been decided, I wonder why it has been
 merged?
 
 I pushed https://review.openstack.org/#/c/137321/ to bring it
 back.
 
 Please people make up your mind. ;)
 

Heh, we need a tooz lock to prevent us from changing our minds while
patches are in flight. ;-)

Thanks for taking care of this.

- -Ben
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQEcBAEBAgAGBQJUdgvJAAoJEDehGd0Fy7uqbwsH/20xVKHJ2TpVY7usl5DfDTAq
KiVJmqU1TLdyFOrYA0V7VS+zJFrOBep8ZeUeoEBpnruoQyFaniC8LtAKWUSZ8XPC
DlZxYMJKLjToFsQMEvw8wpkLf5eqPpBuAORndleABEuFpNeT6HTu3b2oBxaU+Mps
sy5UZ2mb2sNDJvcKJuOwvCNCzJ6a//4MQVeBekSRUZ/xCxpjoyWH04FmFFGbRZEJ
rZYiTXpV6wukPoRoa+cmaI0LywdbuAWO9vczExvA9FsnYxuxNIJgFcIJvFqnjkEU
5EkchUcXQciCX+hYNxTiSUqxoUddQ/W48EnPhRV7UqYBJc87VcrvOg7aJoAwUv0=
=FJqY
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal new hacking rules

2014-11-26 Thread Zane Bitter

On 26/11/14 09:33, Louis Taylor wrote:

On Wed, Nov 26, 2014 at 08:54:35AM -0500, Jay Pipes wrote:

It's not about an equality condition.

It's about the message that is produced by testtools.TestCase.assertEqual(),
and the helpfulness of that message when the order of the arguments is
reversed.

This is especially true with large dict comparisons. If you get a message
like:

  reference: large_dict
  actual: large_dict

And the arguments are reversed, then you end up wasting time looking in the
test code instead of the real code for the thing that is different.

Anyway, like I said, it's not something that we can write a simple hacking
check for, and therefore, it's not something that should have much time
spent on. But I do recommend that reviewers bring it up, especially if the
patch author has been inconsistent in their usage of (expected, actual) in
multiple assertEqual() calls in their patch.


I think Nicolas's question was what made testtools choose this ordering. As far
as I know, the python docs for unittest uses the opposite ordering. I think
most people can see that the error messages involving 'reference' and 'actual'
are useful, but maybe not the fact that in order to achieve them using
testtools, you need to go against the norm for other testing frameworks.


The python docs for unittest mostly use 'first' and 'second' as the 
parameter names, and unittest doesn't distinguish between expected and 
actual in the default error messages.


That said, some of the newer assertions like assertDictEqual do use 
expected and actual, _in that order_, the same as testtools does.


The bottom line is that there are exactly two ways to do it; the entire 
world has now chosen one way, and while I might otherwise have chosen 
differently for the same reasons as Nicolas, it would be absurd not to 
do it the same way.


That said, the entire discussion is moot because it can't be checked 
automatically.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rally] Rally scenario for network scale with VMs

2014-11-26 Thread Ajay Kalambur (akalambu)
Hi
Is there a Rally scenario in the works where we create N networks and associate 
N VMs with each network?
This would be a decent stress test of neutron.
Is there any such scale scenario in the works?
I see a scenario for N networks/subnet creation and a separate one for N VM 
bootups.
I am looking for an integration of these 2.
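
If it helps, I imagine the combined scenario would look roughly like this
(untested sketch; the helper names and signatures are assumptions and need to
be checked against the current Rally tree, so treat it only as an outline):

    from rally.benchmark.scenarios import base
    from rally.benchmark.scenarios.neutron import utils as neutron_utils
    from rally.benchmark.scenarios.nova import utils as nova_utils


    class NeutronNovaScale(nova_utils.NovaScenario,
                           neutron_utils.NeutronScenario):

        @base.scenario()
        def boot_servers_on_networks(self, image, flavor,
                                     networks=2, servers_per_network=2):
            for _ in range(networks):
                # helper signatures assumed; verify against the scenario
                # utils in your checkout
                net = self._create_network({})
                nic = [{"net-id": net["network"]["id"]}]
                for _ in range(servers_per_network):
                    self._boot_server(image, flavor, nics=nic)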



Ajay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Rally] Question on periodic task

2014-11-26 Thread Ajay Kalambur (akalambu)
Hi Boris
Looks like this would require changes in key portions of the Rally infra. I need some 
more time to get the hang of Rally by committing a few scenarios before I make 
infra changes.


Ajay


From: Boris Pavlovic bo...@pavlovic.memailto:bo...@pavlovic.me
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Friday, November 21, 2014 at 7:02 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Rally] Question on periodic task

Ajay,


We have this in our RoadMap:
https://github.com/stackforge/rally/blob/master/doc/feature_request/multi_scenarios_load_gen.rst

So, it's not yet supported out of box, but we really would like to have it in 
upstream.

Are you interested in work on this direction?


Best regards,
Boris Pavlovic


On Fri, Nov 21, 2014 at 8:22 AM, Ajay Kalambur (akalambu) 
akala...@cisco.commailto:akala...@cisco.com wrote:
Ok, the action I wanted to perform was for HA, i.e. execute a scenario like VM 
boot and, in parallel in a separate process, ssh and restart a controller node, 
for instance.
I thought a periodic task would be useful for that. I guess I need to look at 
some other way of performing this.
Ajay


From: Boris Pavlovic bpavlo...@mirantis.commailto:bpavlo...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Thursday, November 20, 2014 at 7:03 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Rally] Question on periodic task

Hi Ajay,


I am not sure why you are looking at that part at all;
everything in openstack/common/* is oslo-incubator code.
Actually that method is not used in Rally yet, except Rally as a Service part 
that doesn't work yet.

As a scenario developer I think you should be able to find everything here:
https://github.com/stackforge/rally/tree/master/rally/benchmark

So I really don't see a case where you need to pass something to a periodic task... 
It's not that kind of task.


Best regards,
Boris Pavlovic






On Fri, Nov 21, 2014 at 3:36 AM, Ajay Kalambur (akalambu) 
akala...@cisco.commailto:akala...@cisco.com wrote:
Hi
I have a question on
/rally/openstack/common/periodic_task.py

It looks like if I have a method decorated with @periodic_task, my method would 
get scheduled in a separate process every N seconds.
Now let us say we have a scenario and this periodic_task; how does it work when 
concurrency=2, for instance?

Is the periodic task also scheduled in 2 separate processes? I actually want only 
one periodic task process irrespective of the concurrency count in the scenario.
Also, as a scenario developer, how can I pass arguments into the periodic task?


Ajay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues regarding Jenkins on Gerrit reviews

2014-11-26 Thread Joshua Harlow
Looks like the same bug is affecting oslo libraries and clients (likely anyone 
with that similarly named icehouse job):


https://bugs.launchpad.net/tempest/+bug/1395368 (likely neutron's real 
issue); see the bug for a potential resolution review that seems to be going 
through the tubes.


ER query review @ https://review.openstack.org/#/c/136657/

Abhishek Talwar/HYD/TCS wrote:

Hi All,

I am facing some issues with Jenkins on two of my reviews. Jenkins is
failing either on
gate-tempest-dsvm-neutron-src-python-neutronclient-icehouse
(http://logs.openstack.org/29/99929/8/check/gate-tempest-dsvm-neutron-src-python-neutronclient-icehouse/d35234d)
or
gate-tempest-dsvm-neutron-src-python-neutronclient-icehouse
(http://logs.openstack.org/29/99929/8/check/gate-tempest-dsvm-neutron-src-python-neutronclient-icehouse/d35234d),
but I do not see any of my code changes making them fail.
So if you can look at the reviews, please guide me as to why they are failing
again and again.

Links for reviews:

1. https://review.openstack.org/#/c/133151/
2. https://review.openstack.org/#/c/99929/

Kindly provide some information regarding this.



Thanks and Regards
Abhishek Talwar

=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting Nov 27 1800 UTC

2014-11-26 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meetingiso=20141127T18

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-26 Thread Jay Pipes

On 11/26/2014 11:54 AM, Sergii Golovatiuk wrote:

Jay,

Fuel uses watchdog service for container to restart it in case of
issues. We have the same problem with containers when disk is full


I see. I guess I don't quite understand why Zabbix isn't just used for 
everything -- after all, the puppet manifests already exist for it and 
are used for monitoring other things apparently.


-jay


On Wed, Nov 26, 2014 at 4:39 PM, Jay Pipes jaypi...@gmail.com
mailto:jaypi...@gmail.com wrote:

On 11/26/2014 10:22 AM, Przemyslaw Kaminski wrote:

We want to monitor Fuel master node while Zabbix is only on
slave nodes
and not on master. The monitoring service is supposed to be
installed on
Fuel master host (not inside a Docker container) and provide
basic info
about free disk space, etc.


Why not use the same thing for monitoring the Fuel master host as we
do for the docker containers/cluster?


P.

On 11/26/2014 02:58 PM, Jay Pipes wrote:

On 11/26/2014 08:18 AM, Fox, Kevin M wrote:

So then in the end, there will be 3 monitoring systems
to learn,
configure, and debug? Monasca for cloud users, zabbix
for most of the
physical systems, and sensu or monit to be small?

Seems very complicated.

If not just monasca, why not the zabbix thats already
being deployed?


Yes, I had the same thoughts... why not just use zabbix
since it's
used already?

Best,
-jay




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] team meeting Nov 27 1800 UTC

2014-11-26 Thread Matthew Farrellee

On 11/26/2014 01:10 PM, Sergey Lukjanov wrote:

Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meetingiso=20141127T18

--
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.


fyi, it's the Thanksgiving holiday for folks in the US, so we'll be absent.

best,


matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] #Personal# Ref: L3 service integration with service framework

2014-11-26 Thread Kevin Benton
+1. In the ODL case you would just want a completely separate L3 plugin.

On Wed, Nov 26, 2014 at 7:29 AM, Mathieu Rohon mathieu.ro...@gmail.com
wrote:

 Hi,

 you can still add your own service plugin, as a mixin of
 L3RouterPlugin (have a look at brocade's code).
 AFAIU service framework would manage the coexistence several
 implementation of a single service plugin.

 This is currently not prioritized by neutron. This kind of work might
 restart in the advanced_services project.

 On Wed, Nov 26, 2014 at 2:28 PM, Priyanka Chopra
 priyanka.cho...@tcs.com wrote:
  Hi Gary, All,
 
 
  This is with reference to blueprint - L3 router Service Type Framework
 and
  corresponding development at github repo.
 
  I noticed that the patch was abandoned due to inactivity. Wanted to know
 if
  there is a specific reason for which the development was put on hold?
 
  I am working on a Use-case to enable neutron calls (L2 and L3) from
  OpenStack to OpenDaylight neutron. However presently ML2 forwards the L2
  calls to ODL neutron, but not the L3 calls (router and FIP).
  With this blueprint submission the L3 Service framework (that includes L3
  driver, agent and plugin) will be completed and hence L3 calls from
  OpenStack can be redirected to any controller platform. Please suggest in
  case anyone else is working on the same or if we can do the enhancements
  required and submit the code to enable such a usecase.
 
 
  Best Regards
  Priyanka
 
  =-=-=
  Notice: The information contained in this e-mail
  message and/or attachments to it may contain
  confidential or privileged information. If you are
  not the intended recipient, any dissemination, use,
  review, distribution, printing or copying of the
  information contained in this e-mail message
  and/or attachments to it are strictly prohibited. If
  you have received this communication in error,
  please notify us by reply e-mail or telephone and
  immediately and permanently delete the message
  and any attachments. Thank you
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Telco] [NFV] [Heat] Telco Orchestration

2014-11-26 Thread Georgy Okrokvertskhov
Hi Mathieu,

Can you tell us more about those projects? Do they include
multi-datacenter use cases?

Most of this work was done as custom projects for customers. I have to
ask them for permission to share details.
We do not support multi-datacenter placement officially, but this feature
was developed for one of the customers and is now under review upstream.
Here is a review link: https://review.openstack.org/#/c/125717/

Technically we can create multiple Heat stacks in different regions and
orchestrate Heat stack updates with proper resources through Murano
workflows.

Can you provide us a link to such a Murano Application, how you define
dependencies with apps, and how you translate those dependencies in
networking configuration?

As I said, this was custom work for customers, so they own these
packages. The best example we have publicly available is here:
https://github.com/sergmelikyan/murano-app-incubator/blob/f5-loadbalancer/io.murano.traffic.f5.NeutronLoadBalancer/Classes/F5NeutronLoadBalancer.yaml

As you see, we do not translate dependencies into network configs. Instead
of translating, we just provide a way to script the necessary steps and
expose them as workflows or functions which can be invoked by other
applications. In this example we have an F5 LB which adds Neutron resources
to the Heat template to create the VIP and pool members, and then extends the
F5 configuration by adding an F5-specific LB method and an iRule for redirect
by calling the BIG-IP API directly. Here is an application which has a
dependency on the LB:
https://github.com/sergmelikyan/murano-app-incubator/blob/f5-loadbalancer/io.murano.apps.SimpleWebApp/Classes/SimpleWebApp.yaml

The dependency is a generic contract:
loadBalancer: Contract: $.class(traffic:LoadBalancer)
Depending on which actual LoadBalancer implementation is selected by the user
during the configuration step (NeutronLB or F5NeutronLB), the result of
executing these lines
https://github.com/sergmelikyan/murano-app-incubator/blob/f5-loadbalancer/io.murano.apps.SimpleWebApp/Classes/SimpleWebApp.yaml#L42-L43
will be different.

Thanks
Georgy

On Wed, Nov 26, 2014 at 4:58 AM, Mathieu Rohon mathieu.ro...@gmail.com
wrote:

 Hi,

 On Wed, Nov 26, 2014 at 12:48 AM, Georgy Okrokvertskhov
 gokrokvertsk...@mirantis.com wrote:
  Hi,
 
  In Murano we did couple projects related to networking orchestration. As
 NFV

 Can you tell us more about those projects? Does it include
 mutli-datacenter use cases?

  is a quite broad term I can say that Murano approach fits into it too. In
  our case we had bunch of virtual appliances with specific networking
  capabilities and requirements. Some of these appliances had to work
 together
  to provide a required functionality. These virtual appliances were
 exposed
  as Murano applications with defined dependencies between apps and
 operators
  were able to create different networking configurations with these apps,
  combining them according to their requirements\capabilities. Underlying
  workflows were responsible for binding these virtual appliances together.

 Can you provide us a link to such a murano Application, how you define
 dependencies with apps, and how you translate those dependencies in
 networking configuration?

  I will be glad to participate in tomorrow meeting and answer any
 questions
  you have.
 
  Thanks
  Georgy
 
  On Tue, Nov 25, 2014 at 6:14 AM, Marc Koderer m...@koderer.com wrote:
 
  Hi Angus,
 
  Am 25.11.2014 um 12:48 schrieb Angus Salkeld asalk...@mirantis.com:
 
  On Tue, Nov 25, 2014 at 7:27 PM, Marc Koderer m...@koderer.comwrote:
 
  Hi all,
 
  as discussed during our summit sessions we would like to expand the
 scope
  of the Telco WG (aka OpenStack NFV group) and start working
  on the orchestration topic (ETSI MANO).
 
  Therefore we started with an etherpad [1] to collect ideas, use-cases
 and
  requirements.
 
 
  Hi Marc,
 
  You have quite a high acronym per sentence ratio going on that
 etherpad;)
 
 
  Haha, welcome to the telco world :)
 
 
  From Heat's perspective, we have a lot going on already, but we would
 love
  to support
  what you are doing.
 
 
  That’s exactly what we are planning. What we have is a long list of
  use-cases and
  requirements. We need to transform them into specs for the OpenStack
  projects.
  Many of those specs won’t be NFV specific, for instance a Telco cloud
 will
  be highly
  distributed. So what we need is a multi-region heat support (which is
  already a planned
  feature for Heat as I learned today).
 
 
  You need to start getting specific about what you need and what the
  missing gaps are.
  I see you are already looking at higher layers (TOSCA) also check out
  Murano as well.
 
 
  Yep, I will check Murano.. I never had a closer look to it.
 
  Regards
  Marc
 
 
  Regards
  -Angus
 
 
  Goal is to discuss this document and move it onto the Telco WG wiki [2]
  when
  it becomes stable.
 
  Feedback welcome ;)
 
  Regards
  Marc
  Deutsche Telekom
 
  [1] 

Re: [openstack-dev] [All] Proposal to add examples/usecase as part of new features / cli / functionality patches

2014-11-26 Thread Steve Gordon
- Original Message -
 From: Deepak Shetty dpkshe...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
 Hi stackers,
     I was having this thought which I believe applies to all projects of
  OpenStack (hence the All in the subject tag)
 
 My proposal is to have examples or usecase folder in each project which has
 info on how to use the feature/enhancement (which was submitted as part of
 a gerrit patch)
  In short, a description with screen shots (cli, not GUI) which should be
  submitted (optionally or mandatorily) along with the patch (like how testcases
  are now enforced)
 
 I would like to take an example to explain. Take this patch @
 https://review.openstack.org/#/c/127587/ which adds a default volume type
 in Manila
 
  Now it would have been good if we could have a .txt or .md file along with
  the patch that explains:
 
 1) What changes are needed in manila.conf to make this work
 
 2) How to use the cli with this change incorporated
 
  3) Some screen shots of actual usage (Now, the author/submitter would have
  tested in devstack before sending the patch, so just copying those cli screen
  shots wouldn't be too big of a deal)
 
 4) Any caution/caveats that one has to keep in mind while using this
 
  It can be argued that some of the above is satisfied via the commit msg and
  looking at test cases.
  But I personally feel that those still don't give a good visualization of
  how a feature patch works in reality.
 
 Adding such a example/usecase file along with patch helps in multiple ways:
 
 1) It helps the reviewer get a good picture of how/which clis are affected
 and how this patch fits in the flow
 
  2) It helps the documentor get a good view of how this patch adds value, hence
  they can document it better
 
 3) It may help the author or anyone else write a good detailed blog post
 using the examples/usecase as a reference
 
 4) Since this becomes part of the patch and hence git log, if the
 feature/cli/flow changes in future, we can always refer to how the feature
 was designed, worked when it was first posted by looking at the example
 usecase
 
 5) It helps add a lot of clarity to the patch, since we know how the author
 tested it and someone can point missing flows or issues (which otherwise
 now has to be visualised)
 
  6) I feel this will help attract more reviewers to the patch, since now it's
  clearer what this patch affects, how it affects it and how flows are
  changing; even a novice reviewer can feel more comfortable and be confident
  enough to provide comments.
 
 Thoughts ?

I would argue that for the projects that use *-specs repositories this is the 
type of detail we would like to see in the specifications associated with the 
features themselves rather than creating another separate mechanism. For the 
projects that don't use specs repositories (e.g. Manila) maybe this demand is 
an indication they should be considering them?

-Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cue] project update

2014-11-26 Thread Vipul Sabhaya
Hello,

Thanks to those who I met personally at the Summit for your feedback on the
project.

For those that don’t know what Cue is, we’re building a Message Broker
Provisioning service for OpenStack.  More info can be found here:
https://wiki.openstack.org/wiki/Cue

Since the summit, we’re working full-steam ahead on our v1 API.  We are
also now on Stackforge, and leveraging OpenStack CI and the gerrit review
process.

Come talk to us on #openstack-cue.

Useful Links:

V1 API - https://wiki.openstack.org/wiki/Cue/api
RTFD - http://cue.readthedocs.org/en/latest/
Code - https://github.com/stackforge/cue
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Splitting up the assignment component

2014-11-26 Thread Adam Young

On 11/26/2014 09:52 AM, David Chadwick wrote:

I tend to agree with Morgan. There are resources and there are users.
And there is something in the middle that says which users can access
which resources. It might be an ACL, a RBAC role, or a set of ABAC
attributes, or something else (such as a MAC policy). So to my mind this
middle bit, whilst being connected to both resources and users, is
separate from both of them. So we should not artificially put it with
just one of them.

FYI, the roles in RBAC are part of the policy specification. You define
the roles, their hierarchical relationships, then assign both users and
resources (privileges actually) to them. So roles could be part of the
policy specification, except that the policy is distributed, so in which
part of the distributed policy would you put it? Would it be in the
specification of roles to actions, or in the attribute mappings, or in
the user to attribute assignments?
I'd say that for this split we leave roles in assignment, but that when 
we get to the "store policy rules in a database" spec we move them to 
policy.


https://review.openstack.org/#/c/133814

as using hierarchical rules to generate policy implies that the whole 
thing is in a unified, coherent structure.


https://review.openstack.org/#/c/125704/7




regards

David

On 25/11/2014 16:42, Morgan Fainberg wrote:

On Nov 25, 2014, at 4:25 AM, Henry Nash hen...@linux.vnet.ibm.com wrote:

Hi

As most of you know, we have approved a spec (https://review.openstack.org/#/c/129397/) to 
split the assignments component up into two pieces, and the code (divided up into a series of 
patches) is currently in review (https://review.openstack.org/#/c/130954/). While most aspects 
of the split appear to have agreement, there is one aspect that has been questioned - and that 
is the whether roles' should be in the resource component, as proposed?

First, let's recap the goals here:

1) The current assignment component is really what's left after we split off users/groups into 
identity some releases ago.  Assignments is pretty complicated and messy - and we 
need a better structure (as an example, just doing the split allowed me to find 5 bugs in our current 
implementation - and I wouldn't be surprised if there are more).  This is made more urgent by the fact that 
we are about to land some big new changes in this area, e.g. hierarchical projects and a re-implementation 
(for performance) of list_role_assignments.

2) While Keystone may have started off as a service where we store all the users, 
credentials  permissions needed to access other OpenStack services, we more and 
more see Keystone as a wrapper for existing corporate authentication and authorisation 
mechanisms - and its job is really to provide a common mechanism and language for these to 
be consumed across OpenStack services.  To do this well, we must make sure that the keystone 
components are split along sensible lines...so that they can individually wrap these 
corporate directories/services.  The classic case of this was our previous split off of 
Identity...and this new proposal takes this a step further.

3) As more and more broad OpenStack powered clouds are created, we must make 
sure that our Keystone implementation is as flexible as possible. We already 
plan to support new abstractions for things like cloud providers enabling 
resellers to do business within one OpenStack cloud (by providing hierarchical 
multi-tenancy, domain-roles etc.). Our current assignments model is a) slightly 
unusual in that all roles are global and every assignment has 
actor-target-role, and b) cannot easily be substituted for alternate assignment 
models (even for the whole of an OpenStack installation, let alone on a domain 
by domain basis)

The proposal for splitting the assignment component is trying to provide a better basis for the above.  It 
separates the storing and CRUD operations of domain/projects/roles into a resource component, 
while leaving the pure assignment model in assignment.  The rationale for this is that the 
resource component defines the entities that the rest of the OpenStack services (and their policy engines) 
understand...while assignment is a pure mapper between these entities. The details of these mappings are 
never exposed outside of Keystone, except for the generation of contents of a token.  This would allow new 
assignment models to be introduced that, as long as they support the api to list what role_ids are 
mapped to project_id X for user_id Y, then the rest of OpenStack would never know anything had changed.

So to (finally) get the the point of this post...where should the role definitions live? 
The proposal is that these live in resource, because:

a) They represent the definition of how Keystone and the other services define 
permission - and this should be independent of whatever assignment model we 
choose
b) We may well choose (in the future) to morph what we currently mean as a 

Re: [openstack-dev] #Personal# Ref: L3 service integration with service framework

2014-11-26 Thread Kyle Mestery
There is already an out-of-tree L3 plugin, and as part of the plugin
decomposition work, I'm planning to use this as the base for the new
ODL driver in Kilo. Before you file specs and BPs, we should talk a
bit more.

Thanks,
Kyle

[1] https://github.com/dave-tucker/odl-neutron-drivers

On Wed, Nov 26, 2014 at 12:53 PM, Kevin Benton blak...@gmail.com wrote:
 +1. In the ODL case you would just want a completely separate L3 plugin.

 On Wed, Nov 26, 2014 at 7:29 AM, Mathieu Rohon mathieu.ro...@gmail.com
 wrote:

 Hi,

 you can still add your own service plugin, as a mixin of
 L3RouterPlugin (have a look at brocade's code).
  AFAIU the service framework would manage the coexistence of several
  implementations of a single service plugin.

 This is currently not prioritized by neutron. This kind of work might
 restart in the advanced_services project.

 On Wed, Nov 26, 2014 at 2:28 PM, Priyanka Chopra
 priyanka.cho...@tcs.com wrote:
  Hi Gary, All,
 
 
  This is with reference to blueprint - L3 router Service Type Framework
  and
  corresponding development at github repo.
 
  I noticed that the patch was abandoned due to inactivity. Wanted to know
  if
  there is a specific reason for which the development was put on hold?
 
  I am working on a Use-case to enable neutron calls (L2 and L3) from
  OpenStack to OpenDaylight neutron. However presently ML2 forwards the L2
  calls to ODL neutron, but not the L3 calls (router and FIP).
  With this blueprint submission the L3 Service framework (that includes
  L3
  driver, agent and plugin) will be completed and hence L3 calls from
  OpenStack can be redirected to any controller platform. Please suggest
  in
  case anyone else is working on the same or if we can do the enhancements
  required and submit the code to enable such a usecase.
 
 
  Best Regards
  Priyanka
 
  =-=-=
  Notice: The information contained in this e-mail
  message and/or attachments to it may contain
  confidential or privileged information. If you are
  not the intended recipient, any dissemination, use,
  review, distribution, printing or copying of the
  information contained in this e-mail message
  and/or attachments to it are strictly prohibited. If
  you have received this communication in error,
  please notify us by reply e-mail or telephone and
  immediately and permanently delete the message
  and any attachments. Thank you
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] scheduling a review day

2014-11-26 Thread Ben Nemec
We talked about this in the meeting this week, but just for the record I
should be able to make it too.

-Ben

On 11/21/2014 11:07 AM, Doug Hellmann wrote:
 We have a bit of a backlog in the Oslo review queue. Before we add a bunch of 
 new reviews for Kilo work, I’d like to see if we can clear some of the 
 existing reviews. One idea I had was setting aside a “review day”, where we 
 spend a work day on reviews together, coordinating and doing fast 
 turn-arounds via IRC. 
 
 I know most of the team works on projects other than Oslo, including 
 company-focused work, so I don’t think we want to try to go more than a day 
 and that we would need time to coordinate other schedules to allow the time. 
 How many people could/would participate in a review day like this on 4 
 December?
 
 Doug
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kite] oslo.messaging changes for message security

2014-11-26 Thread Ben Nemec
On 11/14/2014 08:38 AM, Doug Hellmann wrote:
 
 On Nov 13, 2014, at 8:47 PM, Jamie Lennox jamielen...@redhat.com wrote:
 
 Hi all,

 To implement kite we need the ability to sign and encrypt the message and the
 message data. This needs to happen at a very low level in the oslo.messaging
 stack. The existing message security review
 (https://review.openstack.org/#/c/109806/) isn't going to be sufficient. It
 allows us to sign/encrypt only the message data ignoring the information in 
 the
 context and not allowing us to sign the message as a whole. It would also
 intercept and sign notifications which is not something that kite can do.

 Mostly this is an issue of how the oslo.messaging library is constructed. The
 choice of how data is serialized for transmission (including things like how
 you arrange context and message data in the payload) is handled individually 
 by
 the driver layer rather than in a common higher location. All the drivers use
 the same helper functions for this and so it isn't a problem in practice.

 Essentially I need a stateful serializing/deserializing object (I need to 
 store
 keys and hold things like a connection to the kite server) that either 
 extends
 or replaces oslo.messaging._drivers.common.serialize_msg and deserialize_msg
 and their exception counterparts.

 There are a couple of ways I can see to do what I need:

 1. Kite becomes a more integral part of oslo.messaging and the marshalling 
 and
 verification code becomes part of the existing RPC path. This is how it was
 initially proposed, it does not provide a good story for future or 
 alternative
 implementations. Oslo.messaging would either have a dependency on kiteclient,
 implement its own ways of talking to the server, or have some hack that 
 imports
 kiteclient if available.

 2. Essentially I add a global object loaded from conf to the existing common
 RPC file. Pro: The existing drivers continue to work as today, Con: global
 state held by a library. However given the way oslo messaging works I'm not
 really sure how much of a problem this is. We typically load transport from a
 predefined location in the conf file and we're not really in a situation 
 where
 you might want to construct different transports with different parameters in
 the same project.

 3. I create a protocol object out of the RPC code that kite can subclass and
 the protocol can be chosen by CONF when the transport/driver is created. This
 still touches a lot of places as the protocol object would need to be passed 
 to
 all messages, consumers etc. It involves changing the interface of the 
 drivers
 to accept this new object and changes in each of the drivers to work with the
 new protocol object rather than the existing helpers.

 4. As the last option requires changing the driver interface anyway we try 
 and
 correct the driver interfaces completely. The driver send and receive 
 functions
 that currently accept a context and args parameters should only accept a
 generic object/string consisting of already marshalled data. The code that
 handles serializing and deserializing gets moved to a higher level and kite
 would be pluggable there with the current RPC being default.

 None of these options involve changing the public facing interfaces nor the
 messages emitted on the wire (when kite is not used).

 I've been playing a little with option 3 and I don't think it's worth it. 
 There
 is a lot of code change and additional object passing that I don't think
 improves the library in general.

 Before I go too far down the path with option 4 I'd like to hear the thoughts
 of people more familiar with the library.

 Is there a reason that the drivers currently handle marshalling rather than 
 the
 RPC layer?
 
 It may have been an artifact of the evolution of that code, but I seem to 
 remember at some point that one of the drivers had a limitation either in the 
 byte-values allowed or the number of bytes allowed in a message. They all 
 seem to be doing roughly the same thing to construct the messages now, 
 though, so I’m not sure if that’s really true.

I believe you're thinking of the issue related to this:
https://github.com/openstack/oslo.messaging/blob/master/oslo/messaging/_drivers/impl_qpid.py#L346

I know when I originally made that change concerns were raised about how
it would impact secure messaging, but at the time the conclusion was
that it would be fine.  I _think_ that should still be the case since it
only affects the wire format of Qpid messages.  Either Qpid directly
serializes the message dict, or it dumps it to json and serializes the
resulting string.  Either way the content of the message doesn't change.
 Changing the interface to pass in a serialized string would actually
simplify things.
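To make option 4 a bit more concrete, here is a very rough sketch of the shape
such a pluggable serialization layer could take (all names below are invented
for illustration; this is not the actual oslo.messaging or kite API):

    # Illustration only: class and method names are invented, not real
    # oslo.messaging interfaces.
    import json


    class PlainRpcSerializer(object):
        """Default behaviour: context + args packed into one opaque string."""

        def serialize(self, ctxt, message):
            return json.dumps({'context': ctxt, 'message': message})

        def deserialize(self, data):
            unpacked = json.loads(data)
            return unpacked['context'], unpacked['message']


    class KiteRpcSerializer(PlainRpcSerializer):
        """Kite variant: would sign/encrypt the whole serialized payload."""

        def __init__(self, kite_session):
            self._kite = kite_session  # hypothetical kite client session

        def serialize(self, ctxt, message):
            payload = super(KiteRpcSerializer, self).serialize(ctxt, message)
            return self._kite.protect(payload)    # hypothetical call

        def deserialize(self, data):
            payload = self._kite.unprotect(data)  # hypothetical call
            return super(KiteRpcSerializer, self).deserialize(payload)

    # The driver layer would then only ever see the opaque string, e.g.:
    #     driver.send(target, serializer.serialize(ctxt, message))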

That said, there were also backwards compatibility concerns with this
change, which is why we didn't just change it to always pass the json
dump.  We wanted new versions of the qpid driver to still be able to
talk to 

Re: [openstack-dev] [nova] How should libvirt pools work with distributed storage drivers?

2014-11-26 Thread Solly Ross
Hi!
 
 Some days ago, a bunch of Nova specs were approved for Kilo. Among them was
 https://blueprints.launchpad.net/nova/+spec/use-libvirt-storage-pools
 
 Now, while I do recognize the wisdom of using storage pools, I do see a
 couple of possible problems with this, especially in the light of my
 upcoming spec proposal for using StorPool distributed storage for the VM
 images.
 
 My main concern is with the explicit specification that the libvirt pools
 should be of the directory type, meaning that all the images should be
 visible as files in a single directory. Would it be possible to extend the
 specification to allow other libvirt pool types, or to allow other ways of
 pointing Nova at the filesystem path of the VM image?

The specification was never intended to restrict storage pools to being
file-based.  In fact, it was my intention that all different types of pools
be supported.  The specification dedicates several paragraphs to discussing
file-based pools, since transitioning from legacy file-based backends to
the storage pool backend requires a bit of work, while other backends, like
LVM, can simply be turned into a pool without any movement or renaming of the
underlying volumes.

In fact, LVM works excellently (it's one of the pool types I use frequently
in testing to make sure migration works regardless of source and destination
pool type).
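As a rough illustration of that point, this is just the plain libvirt Python
bindings poking at an LVM-backed pool (the pool name is made up; none of this
is Nova code):

    # Plain libvirt Python bindings; "nova-lvm" is a made-up pool name.
    import libvirt

    conn = libvirt.open('qemu:///system')

    # An existing LVM volume group can simply be exposed as a libvirt pool;
    # every logical volume then shows up as a storage volume in that pool.
    pool = conn.storagePoolLookupByName('nova-lvm')
    pool.refresh(0)

    for name in pool.listVolumes():
        vol = pool.storageVolLookupByName(name)
        # vol.path() is the /dev/<vg>/<lv> block device the guest would use,
        # so no renaming or data movement is needed for the transition.
        print(name, vol.path())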

 
 Where this is coming from is that StorPool volumes (which we intend to write
 a DiskImage subclass for) appear in the host filesystem as
 /dev/storpool/volumename special files (block devices). Thus, it would be...
 interesting... to find ways to make them show up under a specific directory
 (yes, we could do lots and lots of symlink magic, but we've been down that
 road before and it doesn't necessarily lead to Good Things(tm)). I see that
 the spec has several mentions of yeah, we should special-case Ceph/RBD
 here, since they do things in a different way- well, StorPool does things
 in a slightly different way, too :)

The reason that I wrote something about Ceph/RBD is that the Ceph storage driver
in libvirt is incomplete -- it doesn't yet have support for
virStorageVolCreateXMLFrom, so we need to work around that.

 
 And yes, we do have work in progress to expose the StorPool cluster's volumes
 as a libvirt pool, but this might take a bit of time to complete and then it
 will most probably take much more time to get into the libvirt upstream
 *and* into the downstream distributions, so IMHO okay, let's use different
 libvirt pool types might not be entirely enough for us, although it would
 be a possible workaround.

The intention was that new storage pool types should try to get themselves in
as new libvirt storage pool drivers, and then they should just work in Nova
(there is one line that needs to be modified, but other than that, you
should just be able to start using them).

 
 Of course, it's entirely possible that I have not completely understood the
 proposed mechanism; I do see some RBD patches in the previous incarnations
 of this blueprint, and if I read them right, it *might* be trivial to
 subclass the new libvirt storage pool support thing and provide the
 /dev/storpool/volumename paths to the upper layers. If this is so, feel free
 to let me know I've wasted your time in reading this e-mail, in strong terms
 if necessary :)

I dislike using strong terms ;-), but I do think you may have misread the 
spec.
If you are unclear, you can catch me next week on freenode as directxman12 and
we can discuss further (I'm out on PTO this week).

Best Regards,
Solly Ross

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] team meeting Nov 27 1800 UTC

2014-11-26 Thread Sergey Lukjanov
Thanks for the note, it sounds like we could cancel the meeting this week
because of it... Is anybody besides the Russian team folks planning to attend
the meeting this week?

On Wed, Nov 26, 2014 at 9:44 PM, Matthew Farrellee m...@redhat.com wrote:

 On 11/26/2014 01:10 PM, Sergey Lukjanov wrote:

 Hi folks,

 We'll be having the Sahara team meeting as usual in
 #openstack-meeting-alt channel.

 Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#
 Next_meetings

 http://www.timeanddate.com/worldclock/fixedtime.html?msg=
 Sahara+Meetingiso=20141127T18

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.


 fyi, it's the Thanksgiving holiday for folks in the US, so we'll be absent.

 best,


 matt

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Handling soft delete for instance rows in a new cells database

2014-11-26 Thread Jay Pipes

On 11/25/2014 09:34 PM, Mike Bayer wrote:

On Nov 25, 2014, at 8:15 PM, Ahmed RAHAL ara...@iweb.com wrote:

Hi,

Le 2014-11-24 17:20, Michael Still a écrit :

Heya,

This is a new database, so its our big chance to get this right. So,
ideas welcome...

Some initial proposals:

  - we do what we do in the current nova database -- we have a deleted
column, and we set it to true when we delete the instance.

  - we have shadow tables and we move delete rows to a shadow table.

  - something else super clever I haven't thought of.


Some random thoughts that came to mind ...

1/ as far as I remember, you rarely want to delete a row
- it's usually a heavy DB operation (well, was back then)
- it's destructive (but we may want that)
- it creates fragmentation (less of a problem depending on db engine)
- it can break foreign key relations if not done the proper way


deleting records with foreign key dependencies is a known quantity.  Those 
items are all related and being able to delete everything related is a 
well-solved problem, both via ON DELETE cascades as well as standard ORM 
features.


++


2/ updating a row to 'deleted=1'
- gives an opportunity to set a useful deletion time-stamp
I would even argue that setting the deleted_at field would suffice to declare a row 
'deleted' (as in 'not NULL'). I know, explicit is better than implicit …


the logic that’s used is that “deleted” is set to the primary key of the 
record, this is to allow UNIQUE constraints to be set up that serve on the 
non-deleted rows only (e.g. UNIQUE on “x” + “deleted” is possible when there 
are multiple “deleted” rows with “x”).


Indeed. Because people want to be able to name an instance one thing, 
delete it, and immediately name another instance the same thing. Ugh -- 
what an annoying use case, IMO. Better to just delete the row out of the 
database after archival/audit log of the operation.
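For anyone who hasn't seen the pattern Mike describes above, a minimal
SQLAlchemy sketch of it (table and column names are purely illustrative, not
the real Nova schema code):

    # Illustration of the soft-delete + unique-constraint pattern; names are
    # made up and this is not the actual Nova models/migrations code.
    from sqlalchemy import (Column, Integer, MetaData, String, Table,
                            UniqueConstraint)

    metadata = MetaData()

    instances = Table(
        'instances', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String(255), nullable=False),
        # 0 while the row is live; set to the row's id on soft delete.
        Column('deleted', Integer, nullable=False, default=0),
        # ('web-1', 0) may exist only once, but many soft-deleted rows named
        # 'web-1' are allowed since each carries a distinct deleted value.
        UniqueConstraint('name', 'deleted', name='uniq_instance_name_deleted'),
    )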



- the update operation is not destructive
- an admin/DBA can decide when and how to purge/archive rows

3/ moving the row at deletion
- you want to avoid additional steps to complete an operation, thus avoid 
creating a new record while deleting one
- even if you wrap things into a transaction, not being able to create a row 
somewhere can make your delete transaction fail
- if I were to archive all deleted rows, at scale I'd probably move them to 
another db server altogether


if you’re really “archiving”, I’d just dump out a log of what occurred to a 
textual log file, then you archive the files.  There’s no need for a pure 
“audit trail” to even be in the relational DB.


Precisely. Why is the RDBMS the thing that is used for archival/audit 
logging? Why not a NoSQL store or a centralized log facility? All that 
would be needed would be for us to standardize on the format of the 
archival record, standardize on the things to provide with the archival 
record (for instance system metadata, etc), and then write a simple 
module that would write an archival record to some backend data store.


Then we could rid ourselves of the awfulness of the shadow tables and 
all of the read_deleted=yes crap.


Best,
-jay


Now, I for one would keep the current mark-as-deleted model.

I however perfectly get the problem of massive churn with instance 
creation/deletion.


is there?   inserting and updating rows is a normal thing in relational DBs.



So, let's be crazy, why not have a config option 'on_delete=mark_delete', 
'on_delete=purge' or 'on_delete=archive' and let the admin choose ? (is that 
feasible ?)


I’m -1 on that.  The need for records to be soft-deleted or not, and if those 
soft-deletes need to be accessible in the application, should be decided up 
front.  Adding a multiplicity of options just makes the code that much more 
complicated and fragments its behaviors and test coverage.   The suggestion 
basically tries to avoid making a decision and I think more thought should be 
put into what is truly needed.



This would especially come handy if the admin decides the global cells database 
may not need to keep track of deleted instances, the cell-local nova database 
being the absolute reference for that.


why would an admin decide that this is, or is not, needed?   if the deleted 
data isn’t needed by the live app, it should just be dumped to an archive.  
admins can set how often that archive should be purged, but IMHO the “pipeline” 
of these records should be straight; there shouldn’t be junctions and switches 
that cause there to be multiple data paths.   It leads to too much complexity.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Handling soft delete for instance rows in a new cells database

2014-11-26 Thread Belmiro Moreira
Hi,
my experience is that soft delete is important to keep a record of deleted
instances and their characteristics.
In fact, in my organization we are obliged to keep these records for several
months.
However, it would be nice if, after a few months, we were able to purge the
DB with a nova tool.

In the particular case of this cells table, my major concern is that having
a deleted field may mean that the top and child databases need to be
synchronized. Looking at the current cells design, having duplicated
information in different databases is one of the main issues.

Belmiro


On Wed, Nov 26, 2014 at 4:40 PM, Andrew Laski andrew.la...@rackspace.com
wrote:


 On 11/25/2014 11:54 AM, Solly Ross wrote:

 I can't comment on other projects, but Nova definitely needs the soft
 delete in the main nova database. Perhaps not for every table, but
 there is definitely code in the code base which uses it right now.
 Search for read_deleted=True if you're curious.

 Just to save people a bit of time, it's actually `read_deleted='yes'`
 or `read_deleted=yes` for many cases.

 Just to give people a quick overview:

 A cursory glance (no pun intended) seems to indicate that quite a few of
 these are reading potentially deleted flavors.  For this case, it makes
 sense to keep things in one table (as we do).

 There are also quite a few that seem to be making sure deleted things
 are properly cleaned up.  In this case, 'deleted' acts as a CLEANUP
 state, so it makes just as much sense to keep the deleted rows in a
 separate table.

  For this case in particular, the concern is that operators might need
 to find where an instance was running once it is deleted to be able to
 diagnose issues reported by users. I think that's a valid use case of
 this particular data.

  This is a new database, so its our big chance to get this right. So,
 ideas welcome...

 Some initial proposals:

 - we do what we do in the current nova database -- we have a deleted
 column, and we set it to true when we delete the instance.

 - we have shadow tables and we move delete rows to a shadow table.


 Both approaches are viable, but as the soft-delete column is
 widespread, it
 would be thorny for this new app to use some totally different scheme,
 unless the notion is that all schemes should move to the audit table
 approach (which I wouldn’t mind, but it would be a big job).FTR, the
 audit table approach is usually what I prefer for greenfield
 development,
 if all that’s needed is forensic capabilities at the database inspection
 level, and not as much active GUI-based “deleted” flags.   That is, if
 you
 really don’t need to query the history tables very often except when
 debugging an issue offline.  The reason its preferable is because those
 rows are still “deleted” from your main table, and they don’t get in the
 way of querying.   But if you need to refer to these history rows in
 context of the application, that means you need to get them mapped in
 such
 a way that they behave like the primary rows, which overall is a more
 difficult approach than just using the soft delete column.

 I think it does really come down here to how you intend to use the
 soft-delete
 functionality in Cells.  If you just are using it to debug or audit, then
 I think
 the right way to go would be either the audit table (potentially can
 store more
 lifecycle data, but could end up taking up more space) or a separate
 shadow
 table (takes up less space).

 If you are going to use the soft delete for application functionality, I
 would
 consider differentiating between deleted and we still have things left
 to
 clean up, since this seems to be mixing two different requirements into
 one.


 The case that spawned this discussion is one where deleted rows are not
 needed for application functionality.  So I'm going to update the proposed
 schema there to not include a 'deleted' column. Fortunately there's still
 some time before the question of how to handle deletes needs to be fully
 sorted out.


  That said, I have a lot of plans to send improvements down the way of the
 existing approach of “soft delete column” into projects, from the
 querying
 POV, so that criteria to filter out soft delete can be done in a much
 more
 robust fashion (see
 https://bitbucket.org/zzzeek/sqlalchemy/issue/3225/query-
 heuristic-inspector-event).
 But this is still more complex and less performant than if the rows are
 just gone totally, off in a history table somewhere (again, provided you
 really don’t need to look at those history rows in an application
 context,
 otherwise it gets all complicated again).

 Interesting. I hadn't seen consistency between the two databases as
 trumping doing this less horribly, but it sounds like its more of a
 thing that I thought.

 Thanks,
 Michael

 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] Where should Schema files live?

2014-11-26 Thread Jay Pipes

On 11/20/2014 08:12 AM, Sandy Walsh wrote:

Hey y'all,

To avoid cross-posting, please inform your -infra / -operations buddies about 
this post.

We've just started thinking about where notification schema files should live 
and how they should be deployed. Kind of a tricky problem.  We could really use 
your input on this problem ...

The assumptions:
1. Schema files will be text files. They'll live in their own git repo 
(stackforge for now, ideally oslo eventually).
2. Unit tests will need access to these files for local dev
3. Gating tests will need access to these files for integration tests
4. Many different services are going to want to access these files during 
staging and production.
5. There are going to be many different versions of these files. There are 
going to be a lot of schema updates.

Some problems / options:
a. Unlike Python, there is no simple pip install for text files. No version 
control per se. Basically whatever we pull from the repo. The problem with a 
git clone is we need to tweak config files to point to a directory and that's a 
pain for gating tests and CD. Could we assume a symlink to some well-known 
location?
 a': I suppose we could make a python installer for them, but that's a pain 
for other language consumers.
b. In production, each openstack service could expose the schema files via 
their REST API, but that doesn't help gating tests or unit tests. Also, this 
means every service will need to support exposing schema files. Big 
coordination problem.
c. In production, We could add an endpoint to the Keystone Service Catalog to 
each schema file. This could come from a separate metadata-like service. Again, 
yet-another-service to deploy and make highly available.
d. Should we make separate distro packages? Install to a well known location 
all the time? This would work for local dev and integration testing and we 
could fall back on B and C for production distribution. Of course, this will 
likely require people to add a new distro repo. Is that a concern?

Personally, I'm leaning towards option D but I'm not sure what the implications 
are.

We're early in thinking about these problems, but would like to start the 
conversation now to get your opinions.

Look forward to your feedback.


OK, so the goal of this effort should be to have a single OpenStack 
standard for what the payload and structure of notification messages 
will look like. That means, to me at least, that these schema files should:


 a) Live in a single repo in the openstack/ code namespace 
(openstack/notification-schemas?)


 b) Be published to an openstack.org subdomain, served by some static 
web server for all the world to read and/or mirror


Let clients and servers that need to read and write these messages 
download the schemas as-needed.
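A trivial sketch of what that consumption might look like on the client side
(the base URL and cache location are hypothetical, of course):

    # Sketch only: the schema base URL and cache directory are hypothetical.
    import json
    import os
    import urllib2

    SCHEMA_BASE = 'http://schemas.openstack.org/notifications'  # hypothetical
    CACHE_DIR = '/var/cache/notification-schemas'


    def get_schema(name, version):
        """Fetch a schema once, cache it locally, and return it as a dict."""
        cached = os.path.join(CACHE_DIR, '%s-%s.json' % (name, version))
        if not os.path.exists(cached):
            if not os.path.isdir(CACHE_DIR):
                os.makedirs(CACHE_DIR)
            data = urllib2.urlopen(
                '%s/%s/%s.json' % (SCHEMA_BASE, version, name)).read()
            with open(cached, 'w') as f:
                f.write(data)
        with open(cached) as f:
            return json.load(f)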


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Handling soft delete for instance rows in a new cells database

2014-11-26 Thread Mike Bayer

 
 Precisely. Why is the RDBMS the thing that is used for archival/audit 
 logging? Why not a NoSQL store or a centralized log facility? All that would 
 be needed would be for us to standardize on the format of the archival 
 record, standardize on the things to provide with the archival record (for 
 instance system metadata, etc), and then write a simple module that would 
 write an archival record to some backend data store.
 
 Then we could rid ourselves of the awfulness of the shadow tables and all of 
 the read_deleted=yes crap.


+1000 - if we’re really looking to “do this right”, as the original message 
suggested, this would be “right”.  If you don’t need these rows in the app (and 
it would be very nice if you didn’t), dump it out to an archive file / 
non-relational datastore.   As mentioned elsewhere, this is entirely acceptable 
for organizations that are “obliged” to store records for auditing purposes.   
Nova even already has a dictionary format for everything set up with nova 
objects, so dumping these dictionaries out as JSON would be the way to go.
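A simplistic sketch of what that dump could look like, assuming you already
have the row as a plain dict (the helper name and archive path are illustrative
only, not an actual Nova interface):

    # Illustration only: the helper name and archive path are made up.
    import json
    import time


    def archive_record(table, record, path='/var/log/nova/archive.jsonl'):
        """Append one soft-deleted row, as a JSON line, to an archive file."""
        entry = {
            'table': table,
            'archived_at': time.time(),
            'record': record,  # e.g. the dict produced from a nova object
        }
        with open(path, 'a') as f:
            f.write(json.dumps(entry, default=str) + '\n')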





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Handling soft delete for instance rows in a new cells database

2014-11-26 Thread Andrew Laski


On 11/26/2014 03:39 PM, Belmiro Moreira wrote:

Hi,
my experience is that soft delete is important to keep record of 
deleted instances and its characteristics.
In fact in my organization we are obliged to keep these records for 
several months.
However, it would be nice that after few months we were able to purge 
the DB with a nova tool.


I think that any solution for this needs to keep the deleted data 
available in some form.  Is it important for you that the deleted data 
be in the same table as non deleted rows, or could they be moved into 
another table?  And would it matter if the format of the row changed 
during a move?





In the particular case of this cells table my major concern is that 
having a delete field maybe means that top and children databases 
need to be synchronized. Looking into the current cells design having 
duplicated information in different databases is one of the main issues.


Agreed.  I think this can be solved by ensuring that instance deletion 
is only about setting the deleted column in the cell instance table.  
The instance mapping being deleted makes no statement about whether or 
not the instance is deleted, though it would be a bug to delete it 
before the instance was deleted.




Belmiro


On Wed, Nov 26, 2014 at 4:40 PM, Andrew Laski 
andrew.la...@rackspace.com wrote:



On 11/25/2014 11:54 AM, Solly Ross wrote:

I can't comment on other projects, but Nova definitely
needs the soft
delete in the main nova database. Perhaps not for every
table, but
there is definitely code in the code base which uses it
right now.
Search for read_deleted=True if you're curious.

Just to save people a bit of time, it's actually
`read_deleted='yes'`
or `read_deleted=yes` for many cases.

Just to give people a quick overview:

A cursory glance (no pun intended) seems to indicate that
quite a few of
these are reading potentially deleted flavors.  For this case,
it makes
sense to keep things in one table (as we do).

There are also quite a few that seem to be making sure deleted
things
are properly cleaned up.  In this case, 'deleted' acts as a
CLEANUP
state, so it makes just as much sense to keep the deleted rows
in a
separate table.

For this case in particular, the concern is that operators
might need
to find where an instance was running once it is deleted
to be able to
diagnose issues reported by users. I think that's a valid
use case of
this particular data.

This is a new database, so its our big chance to
get this right. So,
ideas welcome...

Some initial proposals:

- we do what we do in the current nova database --
we have a deleted
column, and we set it to true when we delete the
instance.

- we have shadow tables and we move delete rows to
a shadow table.


Both approaches are viable, but as the soft-delete
column is widespread, it
would be thorny for this new app to use some totally
different scheme,
unless the notion is that all schemes should move to
the audit table
approach (which I wouldn’t mind, but it would be a big
job).FTR, the
audit table approach is usually what I prefer for
greenfield development,
if all that’s needed is forensic capabilities at the
database inspection
level, and not as much active GUI-based “deleted”
flags.   That is, if you
really don’t need to query the history tables very
often except when
debugging an issue offline.  The reason its preferable
is because those
rows are still “deleted” from your main table, and
they don’t get in the
way of querying.   But if you need to refer to these
history rows in
context of the application, that means you need to get
them mapped in such
a way that they behave like the primary rows, which
overall is a more
difficult approach than just using the soft delete column.

I think it does really come down here to how you intend to use
the soft-delete
functionality in Cells.  If you just are using it to debug or
audit, then I think
the right way to go would be either the audit table
(potentially can store more

[openstack-dev] [stable][sahara] Sahara is broken in stable/juno

2014-11-26 Thread Sergey Lukjanov
Hi,

Sahara is broken in stable/juno now by the new alembic release (unit tests).
The patch is already done and approved for master [0] and I've already
backported it to the stable/juno branch [1].

[0] https://review.openstack.org/#/c/137035/
[1] https://review.openstack.org/#/c/137469/

P.S. If anyone from stable team will approve it, it'll be great :)

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Handling soft delete for instance rows in a new cells database

2014-11-26 Thread Joshua Harlow

Mike Bayer wrote:

Precisely. Why is the RDBMS the thing that is used for archival/audit logging? 
Why not a NoSQL store or a centralized log facility? All that would be needed 
would be for us to standardize on the format of the archival record, 
standardize on the things to provide with the archival record (for instance 
system metadata, etc), and then write a simple module that would write an 
archival record to some backend data store.

Then we could rid ourselves of the awfulness of the shadow tables and all of 
the read_deleted=yes crap.



+1000 - if we’re really looking to “do this right”, as the original message 
suggested, this would be “right”.  If you don’t need these rows in the app (and 
it would be very nice if you didn’t), dump it out to an archive file / 
non-relational datastore.   As mentioned elsewhere, this is entirely acceptable 
for organizations that are “obliged” to store records for auditing purposes.   
Nova even already has a dictionary format for everything set up with nova 
objects, so dumping these dictionaries out as JSON would be the way to go.




+ 1001; dump it out to some data warehouse, put it into HDFS, do something 
else with long term storage IMHO; let's just avoid continuing to turn a 
database into a data warehouse, they are really not the same thing and 
don't have the same requirements, constraints ...


I've always been confused why some of the OpenStack tables tried to do 
both roles with a deleted=1|0 field. The part that has also been 
confusing to me is: has anyone actually tried switching a deleted=1 field 
back to deleted=0 without application logic to do this? If so, how did you 
manage to pull that off correctly without knowing the inner details of 
the application itself (how did you do this atomically so that the users 
*actively* running against the API would not start to receive weird 
responses and failures...)?






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Counting resources

2014-11-26 Thread Everett Toews
On Nov 20, 2014, at 4:06 PM, Eoghan Glynn 
egl...@redhat.com wrote:

How about allowing the caller to specify what level of detail
they require via the Accept header?

▶ GET /prefix/resource_name
 Accept: application/json; detail=concise

“The Accept request-header field can be used to specify certain media types 
which are acceptable for the response.” [1]

detail=concise is not a media type and looking at the grammar in the RFC it 
wouldn’t be valid. It’s not appropriate for the Accept header.

Everett

[1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.1

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable] Proposal to add Dave Walker back to stable-maint-core

2014-11-26 Thread Adam Gandelman
Hi All-

Daviey was an original member of the stable-maint team and one of the
driving forces behind the creation of the team and branches back in the
early days. He was removed from the team later on during a pruning of
inactive members. Recently, he has begun focusing on the stable branches
again and has been providing valuable reviews across both branches:

https://review.openstack.org/#/q/reviewer:%22Dave+Walker+%253Cemail%2540daviey.com%253E%22++branch:stable/icehouse,n,z

https://review.openstack.org/#/q/reviewer:%22Dave+Walker+%253Cemail%2540daviey.com%253E%22++branch:stable/juno,n,z

I think his understanding of policy, attention to detail and willingness to
question the appropriateness of proposed backports would make him a great
member of the team (again!).  Having worked with him in Ubuntu-land, I also
think he'd be a great candidate to help out with the release management
aspect of things (if he wanted to).

Cheers,
Adam
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Counting resources

2014-11-26 Thread Shaunak Kashyap
detail=concise is not a media type and looking at the grammar in the RFC it 
wouldn’t be valid.

I think the grammar would allow for application/json; detail=concise. See the 
last line in the definition of the media-range nonterminal in the grammar 
(copied below for convenience):

   Accept         = "Accept" ":"
                    #( media-range [ accept-params ] )

   media-range    = ( "*/*"
                    | ( type "/" "*" )
                    | ( type "/" subtype )
                    ) *( ";" parameter )
   accept-params  = ";" "q" "=" qvalue *( accept-extension )
   accept-extension = ";" token [ "=" ( token | quoted-string ) ]

The grammar does not define the parameter nonterminal but there is an example 
in the same section that seems to suggest what it could look like:

   Accept: text/*, text/html, text/html;level=1, */*
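And as a quick sanity check, the Python standard library splits such a
parameterized media type out without complaint (detail=concise being the
hypothetical parameter under discussion, not anything OpenStack defines):

    # Quick check with the Python standard library; "detail=concise" is the
    # hypothetical parameter under discussion.
    import cgi

    media_type, params = cgi.parse_header('application/json; detail=concise')
    print(media_type)  # application/json
    print(params)      # {'detail': 'concise'}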

Shaunak

On Nov 26, 2014, at 2:03 PM, Everett Toews 
everett.to...@rackspace.com wrote:

On Nov 20, 2014, at 4:06 PM, Eoghan Glynn 
egl...@redhat.com wrote:

How about allowing the caller to specify what level of detail
they require via the Accept header?

▶ GET /prefix/resource_name
 Accept: application/json; detail=concise

“The Accept request-header field can be used to specify certain media types 
which are acceptable for the response.” [1]

detail=concise is not a media type and looking at the grammar in the RFC it 
wouldn’t be valid. It’s not appropriate for the Accept header.

Everett

[1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.1

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

