[openstack-dev] [nova] Nova API meeting cancelled this week

2014-06-12 Thread Christopher Yeoh
Hi,

Given that some people can't make it tomorrow, and I don't think we have much
new to talk about anyway, I'm going to cancel the Nova API meeting this
week. Feel free to use the extra spare time to review some API-related
nova-specs we really want to get moving :-)

https://review.openstack.org/84695 (v2.1 on V3 API)
https://review.openstack.org/96139 (v2.1 microversions)
https://review.openstack.org/96139 (Tasks API)
https://review.openstack.org/96139 (Policy should be enforced at API
layer)



Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] AggregateMultiTenancyIsolation scheduler filter - bug, or new feature proposal?

2014-06-12 Thread Belmiro Moreira
Hi,
if you are interested in this filter see:
https://review.openstack.org/#/c/99476/

Belmiro

--
Belmiro Moreira
CERN
Email: belmiro.more...@cern.ch
IRC: belmoreira




On Tue, Jun 10, 2014 at 10:42 PM, Belmiro Moreira 
moreira.belmiro.email.li...@gmail.com wrote:

 Hi Jesse,

 it would be great to collaborate with you on this.



 No, I haven't submitted it to nova-specs yet.

 It would be good to discuss on IRC. My nick is belmoreira.



 Belmiro

 --

 Belmiro Moreira

 CERN

 Email: belmiro.more...@cern.ch

 IRC: belmoreira


 On Tue, Jun 10, 2014 at 9:19 AM, Jesse Pretorius 
 jesse.pretor...@gmail.com wrote:

 On 9 June 2014 15:18, Belmiro Moreira 
 moreira.belmiro.email.li...@gmail.com wrote:

 I would say that is a documentation bug for the
 “AggregateMultiTenancyIsolation” filter.


 Great, thanks. I've logged a bug for this:
 https://bugs.launchpad.net/openstack-manuals/+bug/1328400


 When this was implemented the objective was to schedule only instances
 from specific tenants for those aggregates but not make them exclusive.


 That’s why the work on
 https://blueprints.launchpad.net/nova/+spec/multi-tenancy-isolation-only-aggregates
 started but was left on hold because it was believed
 https://blueprints.launchpad.net/nova/+spec/whole-host-allocation had
 some similarities and eventually could solve the problem in a more generic
 way.


 However, the p-clouds implementation is marked as “slow progress” and I
 believe there is no active work on it at the moment.


 It is probably a good time to review the ProjectsToAggregateFilter filter
 again. The implementation and reviews are available at
 https://review.openstack.org/#/c/28635/


 Agreed. p-clouds is a much greater framework with much deeper and wider
 effects. The isolated aggregate which you submitted code for is exactly
 what we're looking for and actually what we're using in production today.

 I'm proposing that we put together the nova-spec for
 https://blueprints.launchpad.net/nova/+spec/multi-tenancy-isolation-only-aggregates,
 but as suggested in my earlier message I think a simpler approach would be
 to modify the existing filter to meet our needs by simply using an
 additional metadata tag to designate the aggregate as an exclusive one. In
 the blueprint you did indicate that you were going to put together a
 nova-spec for it, but I couldn't find one in the specs repository - either
 merged or WIP.
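The proposal above is concrete enough to sketch. In the snippet below, filter_tenant_id is the metadata key the existing AggregateMultiTenancyIsolation filter already uses; the 'exclusive' key, and the function shape itself, are hypothetical, purely to illustrate the suggested semantics:

```python
def _tenants(meta):
    # Parse the comma-separated filter_tenant_id metadata value.
    return [t for t in meta.get("filter_tenant_id", "").split(",") if t]

def host_passes(host_metadata, all_aggregate_metadata, tenant_id):
    """Sketch of the proposed filter logic.

    host_metadata: merged metadata of the aggregates this host belongs to.
    all_aggregate_metadata: metadata dicts of every aggregate, needed to
    tell whether the tenant is pinned to an exclusive aggregate elsewhere.
    """
    allowed = _tenants(host_metadata)
    # Existing behaviour: a host in a tenant-filtered aggregate only
    # accepts the tenants listed in filter_tenant_id.
    if allowed and tenant_id not in allowed:
        return False
    # Proposed extension: if the tenant appears in an aggregate tagged with
    # the (hypothetical) 'exclusive' key, reject hosts outside it.
    pinned = any(meta.get("exclusive") == "true" and tenant_id in _tenants(meta)
                 for meta in all_aggregate_metadata)
    return not pinned or tenant_id in allowed
```

The only change from the current filter is the second check, which is what makes an aggregate exclusive rather than merely reserved.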


 One of the problems raised was performance, given the
 number of DB queries required. However, this can be documented for people
 who intend to enable the filter.


 As suggested by Phil Day in https://review.openstack.org/#/c/28635/
 there is now a caching capability (landed in
 https://review.openstack.org/#/c/33720/) which reduces the number of DB
 calls.

 Can I suggest that we collaborate on the spec? Perhaps we can discuss
 this on IRC? My nick is odyssey4me and I'm in #openstack much of the
 typical working day and often in the evenings. My time zone is GMT+2.






Re: [openstack-dev] [Cinder] Mid-cycle meetup for Cinder devs

2014-06-12 Thread Avishay Traeger
I think you can create an easy survey with doodle.com.  You can fill in the
dates, and ask people to specify next to their names if their attendance
will be physical or virtual.


On Thu, Jun 12, 2014 at 12:16 AM, D'Angelo, Scott scott.dang...@hp.com
wrote:

  During the June 11 #openstack-cinder meeting we discussed a mid-cycle
 meetup. The agenda is to be determined.

 I have inquired and HP in Fort Collins, CO has room and network
 connectivity available. There were some dates that worked well for
 reserving a nice room:

 July 14,15,17,18, 21-25, 27-Aug 1

 But a room could be found regardless.

 Virtual connectivity would also be available.



 Some of the open questions are:

 Are developers interested in a mid-cycle meetup?

 What dates are Not Good (Blackout dates)?

 What dates are Good?

 Who might be able to be physically present in Ft. Collins, CO?

 Are there alternative locations to be considered?



 Someone had mentioned a Google Survey. Would someone like to create that?
 Which questions should be asked?







[openstack-dev] [Glance] Nominating Nikhil Komawar for Core

2014-06-12 Thread Mark Washenberger
Hi folks,

I'd like to nominate Nikhil Komawar to join glance-core. His code and
review contributions over the past years have been very helpful and he's
been taking on a very important role in advancing the glance tasks work.

If anyone has any concerns, please let me know. Otherwise I'll make the
membership change next week (which is code for, when someone reminds me to!)

Thanks!
markwash


[openstack-dev] dev source code with the help of packstack and another question, thank you very much.

2014-06-12 Thread bt...@163.com
Hi, everybody.
I use packstack to install OpenStack (CentOS 6.5), but I have a few questions:
1) The directory /var/lib/glance is not big enough to store the images. I
modified the config files /etc/glance/glance-api.conf and
/etc/glance/glance-cache.conf, changing filesystem_store_datadir. But when I
run packstack again to reinstall OpenStack:
packstack --answer-file=packstack-answers-20140606-140240.txt
after this operation, the value of filesystem_store_datadir is changed back to
the default (filesystem_store_datadir=/var/lib/glance/images/).
Is there any way to make the change permanent?
A similar thing happens to the nova instance store directory. Thank you very
much.
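A workaround (not a packstack fix) is to re-apply the option after each packstack run. The script below assumes the stock INI layout of glance-api.conf with the option in [DEFAULT], which matched the 2014-era file but should be checked against yours:

```python
import configparser  # Python 3; on a 2014-era CentOS box this was ConfigParser

def set_store_datadir(conf_path, datadir):
    # Re-apply filesystem_store_datadir after packstack has rewritten the
    # file back to its default (/var/lib/glance/images/).
    cfg = configparser.ConfigParser()
    cfg.read(conf_path)
    cfg.set("DEFAULT", "filesystem_store_datadir", datadir)
    with open(conf_path, "w") as f:
        cfg.write(f)
```

This only papers over the problem; the persistent fix would have to live in whatever packstack/puppet manifests regenerate the file.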

2) How can I modify the source code and reinstall OpenStack with the help of
packstack?
Thank you very much.
Best wishes to you.




bt...@163.com


Re: [openstack-dev] [Heat]Heat template parameters encryption

2014-06-12 Thread Clint Byrum
I tend to agree with you, Keith: securing Heat is Heat's problem, and
securing Nova is Nova's problem. And I too would expect that those with
admin access to Heat would not have admin access to Nova. That is why
we split these things up with APIs.

I still prefer that users encrypt secrets on the client side, and store
said secrets in Barbican, passing only a temporary handle into templates
for consumption.

But until we have that, just encrypting hidden parameters would be simple
to do and I wouldn't even mind it being on by default in devstack because
only a small percentage of parameters are hidden. My initial
reluctance to the plan was in encrypting everything, as that makes
verifying things a lot harder. But just encrypting the passwords.. I
think that's a decent plan.

A couple of ideas:

* Provide a utility to change the key (must update the entire database).
* Allow multiple decryption keys (to enable tool above to work
  slowly).
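The second idea can be sketched with a toy scheme. The cipher below is for illustration only (a real implementation should use a vetted library, and all names here are invented); the point is the decrypt_any loop, which is what lets a key-change tool re-encrypt the database slowly while reads keep working:

```python
import base64
import hashlib
import hmac
import os

def _keystream(key, nonce, length):
    # Derive a keystream by hashing key+nonce+counter. Toy construction,
    # shown only so the example is self-contained and runnable.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in
               zip(plaintext, _keystream(key, nonce, len(plaintext))))
    # Authenticate, so a wrong key is detected instead of yielding garbage.
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return base64.b64encode(nonce + ct + tag)

def decrypt(key, token):
    raw = base64.b64decode(token)
    nonce, ct, tag = raw[:16], raw[16:-32], raw[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("wrong key or corrupted ciphertext")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

def decrypt_any(keys, token):
    # Try the current key first, then older keys: a migration tool can
    # re-encrypt rows at leisure while reads keep succeeding.
    for key in keys:
        try:
            return decrypt(key, token)
        except ValueError:
            continue
    raise ValueError("no configured key decrypts this value")
```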

Excerpts from Keith Bray's message of 2014-06-11 22:29:13 -0700:
 
 On 6/11/14 2:43 AM, Steven Hardy sha...@redhat.com wrote:
 
 IMO, when a template author marks a parameter as hidden/secret, it seems
 incorrect to store that information in plain text.
 
 Well I'd still question why we're doing this, as my previous questions have
 not been answered:
 - AFAIK nova user-data is not encrypted, so surely you're just shifting the
   attack vector from one DB to another in nearly all cases?
 
 Having one system (e.g. Nova) not as secure as it could be isn't a reason
 to not secure another system as best we can. For every attack vector you
 close, you have another one to chase. I'm concerned that the merit of the
 feature is being debated, so let me see if I can address that:
 
 We want to use Heat to launch customer facing stacks.  In a UI, we would
 prompt customers for Template inputs, including for example: Desired
 Wordpress Admin Password, Desired MySQL password, etc. The UI then makes
 an API call to Heat to orchestrate instantiation of the stack.  With Heat
 as it is today, these customer specified credentials (as template
 parameters) would be stored in Heat's database in plain text. As a Heat
 Service Administrator, I do not need nor do I want the customer's
 Wordpress application password to be accessible to me.  The application
 belongs to the customer, not to the infrastructure provider.  Sure, I
 could blow the customer's entire instance away as the service provider.
 But, if I get fired or leave the company, I could no longer blow away
 their instance... If I leave the company, however, I could have taken a
 copy of the Heat DB with me, or had looked that info up in the Heat DB
 before my exit, and I could then externally attack the customer's
 Wordpress instance.  It makes no sense for us to store user specified
 creds unencrypted unless we are administering the customer's Wordpress
 instance for them, which we are not.  We are administering the
 infrastructure only.  I realize the encryption key could also be stolen,
 but in a production system the encryption key access gets locked down to
 a VERY small set of folks and not all the people that administer Heat
 (that's part of good security practices and makes auditing of a leaked
 encryption key much easier).
   
 - Is there any known way for heat to leak sensitive user data, other than
   a cloud operator with admin access to the DB stealing it?  Surely cloud
   operators can trivially access all your resources anyway, including
   instances and the nova DB/API so they have this data anyway.
 
 Encrypting the data in the DB also helps in case if a leak of arbitrary DB
 data does surface in Heat.  We are not aware of any issues with Heat today
 that could leak that data... But, we never know what vulnerabilities will
 be introduced or discovered in the future.
 
 
 At Rackspace, individual cloud operators can not trivially access all
 customer cloud resources.  When operating a large cloud at scale, service
 administrator's operations and capabilities are limited to the systems
 they work on.  While I could impersonate a user via Heat and do lot's of
 bad things across many of their resources, each of the other systems
 (Nova, Databases, Auth, etc.) audit who is doing what on behalf of
 which customer, so I can't do something malicious to a customer's Nova
 instance without the Auth System Administrators ensuring that HR knows I
 would be the person to blame.  Similarly, a Nova system administrator
 can't delete a customer's Heat stack without our Heat administrators
 knowing who is to blame.  We have checks and balances across our systems
 and purposefully segment our possible attack vectors.
 
 Leaving sensitive customer data unencrypted at rest provides many more
 options for that data to get in the wrong hands or be taken outside the
 company.  It is quick and easy to do a MySQL dump if the DB Linux system
 is compromised, which has nothing to do with Heat having a vulnerability.
 
 Our ask is to 

[openstack-dev] [Fuel] IRC weekly

2014-06-12 Thread Mike Scherbakov
Folks,
Though it is a holiday in Russia, where we have a large Fueler presence, I'm
still going to run the meeting at 16:00 UTC on the 12th (9am PST, 8pm MSK,
Thursday).
Folks in other locations, I need your presence.

Agenda: https://etherpad.openstack.org/p/fuel-weekly-meeting-agenda
Feel free to extend it in advance.

Thanks,
-- 
Mike Scherbakov
#mihgen


Re: [openstack-dev] [Fuel][Pacemaker][HA] Notifying clones of offline nodes

2014-06-12 Thread Vladimir Kuklin
Hi, Jay

Yep. Here is the link to the mail archive (I hoped they would hit the Reply
All button):
https://www.mail-archive.com/pacemaker@oss.clusterlabs.org/msg19896.html

Actually, this is something the Mirantis Linux Hardening team could do in
this release cycle, I hope, and push upstream.



On Thu, Jun 12, 2014 at 2:44 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 05/26/2014 11:16 AM, Vladimir Kuklin wrote:

 Hi all

 We are working on HA solutions for OpenStack(-related) services and
 figured out that sometimes we need clones to be notified if one of the
 cluster nodes running clone instances goes offline. E.g., we need this
 information to make RabbitMQ AMQP broker cluster to forget this node
 until it goes up again. This is easily achievable if we stop the
 instance on the node - then notification is sent to clone instances and
 everything is fine. But what can we do if node goes offline
 unexpectedly? Is there any way to notify other clones that the slave is
 dead and perform corresponding actions?

 One of the ways we figured out is to implement additional cluster
 monitoring action in resource OCF and purge dead nodes, but it looks a
  little bit overwhelming and inconvenient. Is there a chance we missed
  some attribute that would configure pacemaker to notify
  other clones on node offline/fence/cold_shutdown?


 Ping. Hi Vladimir, did you ever get any response about this? I'm also
 interested in the answers...

 Best,
 -jay






-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com http://www.mirantis.ru/
www.mirantis.ru
vkuk...@mirantis.com


Re: [openstack-dev] [Fuel] Using saltstack as orchestrator for fuel

2014-06-12 Thread Vladimir Kuklin
Guys, what we really need from an orchestration tool is the ability to
orchestrate a large number of tasks across the nodes, with all the complicated
dependencies, dynamic actions (e.g. what to do on failure and on success)
and parallel execution, including tasks that can have no additional
software installed somewhere deep in the user's infrastructure (e.g. we
need to send a RESTful request to vCenter). And this is the use case of our
pluggable architecture. I am wondering if saltstack can do this.
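The requirements in the paragraph above (many tasks, dependencies, on-failure behaviour) can be stated precisely. Here is a toy sequential scheduler, with invented names and no parallelism, just to pin down the skip-on-failure semantics any candidate tool would need:

```python
from collections import deque

def run_tasks(deps, execute):
    """deps: {task: set of prerequisite tasks}; execute(task) -> bool success.

    Runs tasks in dependency order (Kahn's algorithm); when a task fails,
    all of its transitive dependents are skipped instead of executed.
    Every task must appear as a key in deps; cyclic tasks are never run.
    """
    pending = {t: set(d) for t, d in deps.items()}
    dependents = {}
    for t, prereqs in deps.items():
        for p in prereqs:
            dependents.setdefault(p, set()).add(t)

    ready = deque(t for t, p in pending.items() if not p)
    failed, skipped, done = set(), set(), []
    while ready:
        task = ready.popleft()
        if set(deps[task]) & (failed | skipped):
            skipped.add(task)          # an upstream prerequisite failed
        elif execute(task):
            done.append(task)
        else:
            failed.add(task)
        for dep in dependents.get(task, ()):   # unlock dependents
            pending[dep].discard(task)
            if not pending[dep]:
                ready.append(dep)
    return done, failed, skipped
```

For example, with a → b → c and a → d, a failure in b runs a and d, fails b, and skips c.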


On Wed, Jun 11, 2014 at 9:08 PM, Sergii Golovatiuk sgolovat...@mirantis.com
 wrote:

 Hi,

 It would be nice to compare Ansible and Salt. They are both Python-based,
 and Ansible has a pull model as well. Personally, I am a big fan of
 Ansible because of its simplicity and the speed of playbook development.

 ~Sergii


 On Wed, Jun 11, 2014 at 1:21 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Well, I don't have a comparison chart; I can work on one based on the
 requirements I provided in the initial letter, but:
 - I like Ansible, but it is agentless, and it won't fit well into our current
 model of communication between nailgun and the orchestrator.
 - Cloudify is a Java-based application; even if it is pluggable with other
 language bindings, we would benefit from an application in Python.
 - Salt has been around for 3-4 years; simply compare the GitHub graphs, it is
 one of the most used and active projects in the Python community:

 https://github.com/stackforge/mistral/graphs/contributors
 https://github.com/saltstack/salt/graphs/contributors


 On Wed, Jun 11, 2014 at 1:04 PM, Sergii Golovatiuk 
 sgolovat...@mirantis.com wrote:

 Hi,

 There are many mature orchestration applications (Salt, Ansible,
 Cloudify, Mistral). Is there any comparison chart? It would be nice to
 compare them to understand their maturity levels. Thanks

 ~Sergii


 On Wed, Jun 11, 2014 at 12:48 PM, Dmitriy Shulyak dshul...@mirantis.com
  wrote:

 Actually, I am proposing Salt as an alternative. The main reason: Salt is
 a mature, feature-full orchestration solution that is well adopted, even by
 our internal teams.


 On Wed, Jun 11, 2014 at 12:37 PM, Evgeniy L e...@mirantis.com wrote:

 Hi,

 As far as I remember we wanted to replace Astute with Mistral [1]. Do
 we really want an intermediate step (I mean salt) to get there?

 [1] https://wiki.openstack.org/wiki/Mistral


 On Wed, Jun 11, 2014 at 10:38 AM, Dmitriy Shulyak 
 dshul...@mirantis.com wrote:

 Yes, in my opinion salt can completely replace
 astute/mcollective/rabbitmq.
 Listening and responding to the events generated by nailgun, or any other
 plugin, is not a problem.
 There is already a module for salt that adds the ability to
 execute puppet on minions (agents) [1]

 [1]
 http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.puppet.html


 On Tue, Jun 10, 2014 at 4:06 PM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

 Interesting stuff.
  Do you think that we can get rid of Astute at some point, replacing it
  entirely with Salt listening for the commands from Fuel?

  Can you please clarify: does the suggested approach imply that we
  can have both puppet and SaltStack? Even if you ever switch to anything
  different, it is important to provide a smooth and step-by-step way for
  it.



 On Mon, Jun 9, 2014 at 6:05 AM, Dmitriy Shulyak 
 dshul...@mirantis.com wrote:

 Hi folks,

  I know that some time ago saltstack was evaluated for use as the
  orchestrator in fuel, so I've prepared an initial specification that
  addresses the basic points of integration and the general requirements for
  an orchestrator.

  In my opinion saltstack fits our needs perfectly, and we can
  benefit from using a mature orchestrator that has its own community. I
  still don't have all the answers, but, anyway, I would like to ask all of
  you to start a review of the specification:


 https://docs.google.com/document/d/1uOHgxM9ZT_2IdcmWvgpEfCMoV8o0Fk7BoAlsGHEoIfs/edit?usp=sharing

  I will place it in the fuel-docs repo as soon as the specification is
  complete enough to start a POC, or if you think the spec should be placed
  there as is, I can do it now.

 Thank you





 --
 Mike Scherbakov
 #mihgen












Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-06-12 Thread Thierry Carrez
Morgan Fainberg wrote:
 I’ve been looking over the code for this and it turns out plain old SHA1
 is a bad idea.  We recently had a patch land in keystone client and
 keystone to let us configure the hashing algorithm used for token
 revocation list and the short-token ids. 
 
 I’ve updated my patch set to use ‘{OBSCURED}%(token)s’ instead of
 specifying a specific obscuring algorithm. This means that if we ever
 update the way we obscure the data in the future, we’re not lying about
 what was done in the log. The proposed approach can be found
 here: https://review.openstack.org/#/c/99432

Looks good!
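The quoted approach can be sketched as follows (the '{OBSCURED}' prefix is from the proposal; the helper name and the default algorithm here are assumptions, not keystone's actual setting):

```python
import hashlib

def obscure_token(token, algorithm="sha256"):
    # Log a generic '{OBSCURED}' prefix rather than naming a fixed
    # algorithm, so the hash can be swapped later without the old logs
    # lying about how the value was produced.
    digest = hashlib.new(algorithm, token.encode("utf-8")).hexdigest()
    return "{OBSCURED}%s" % digest
```

Because the digest is one-way, a leaked log still lets an operator correlate requests by token without recovering the token itself.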

-- 
Thierry Carrez (ttx)



[openstack-dev] [Nova] Nova list can't return anything

2014-06-12 Thread 严超
Hi, All:
I ran *nova --debug list* and got nothing returned. How can
that be?
RESP: [200] CaseInsensitiveDict({'date': 'Thu, 12 Jun 2014 08:52:19 GMT',
'content-length': '0', 'content-type': 'text/html; charset=UTF-8'})
RESP BODY:

*body***
None
*body***
DEBUG (shell:792) 'NoneType' object has no attribute '__getitem__'
Traceback (most recent call last):
  File /opt/stack/python-novaclient/novaclient/shell.py, line 789, in main
OpenStackComputeShell().main(argv)
  File /opt/stack/python-novaclient/novaclient/shell.py, line 724, in main
args.func(self.cs, args)
  File /opt/stack/python-novaclient/novaclient/v1_1/shell.py, line 1129,
in do_list
search_opts=search_opts)
  File /opt/stack/python-novaclient/novaclient/v1_1/servers.py, line 591,
in list
return self._list(/servers%s%s % (detail, query_string), servers)
  File /opt/stack/python-novaclient/novaclient/base.py, line 72, in _list
data = body[response_key]
TypeError: 'NoneType' object has no attribute '__getitem__'
ERROR (TypeError): 'NoneType' object has no attribute '__getitem__'
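The traceback comes from indexing a None body (data = body[response_key] in novaclient/base.py). A defensive guard, sketched here with an invented helper name, would turn it into a clearer error; an empty text/html 200 response like the one above usually means the catalog endpoint points at the wrong service:

```python
def extract_list(body, response_key):
    # Guard against a None/empty response body before indexing it -- the
    # unguarded body[response_key] is exactly what raises the TypeError
    # in the traceback above.
    if not body:
        raise RuntimeError(
            "Empty response from the compute API; check that the endpoint "
            "in the service catalog points at the Nova API (an HTTP 200 "
            "with content-type text/html and no body usually means the "
            "request hit the wrong service).")
    return body[response_key]
```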


*Best Regards!*


*Chao Yan--**My twitter:Andy Yan @yanchao727
https://twitter.com/yanchao727*


*My Weibo:http://weibo.com/herewearenow
http://weibo.com/herewearenow--*


Re: [openstack-dev] [TripleO] pacemaker management tools

2014-06-12 Thread Jan Provaznik

On 06/12/2014 12:17 AM, Gregory Haynes wrote:

The issue is that distributions supported in TripleO provide different
tools for managing Pacemaker. Ubuntu/Debian provides crmsh, Fedora/RHEL
provides pcs, OpenSuse provides both. I didn't find packages for all our
distros for any of the tools. Also if there is a third-party repo
providing packages for various distros, adding dependency on an
untrusted third-party repo might be a problem for some users.

Although it's a little bit annoying, I think we will end up
managing commands for both config tools. A resource-creation sample:

if $USE_PCS;then
pcs resource create ClusterIP IPaddr2 ip=192.168.122.120 cidr_netmask=32
else
crm configure primitive ClusterIP ocf:heartbeat:IPaddr2 params
ip=192.168.122.120 cidr_netmask=32 op monitor interval=30s
fi


This seems like a reasonable solution if we can ensure that we have CI
for both branches of the installation. This is a big issue with our
current mariadb/percona installation and it sounds like we're heading
down the same path here.


With mariadb/percona it's slightly different, as these are separate 
elements and each of them works on all distros. With mariadb I was 
waiting until the galera server was merged into Fedora (which it now is), 
so it's a good time to make it the default for the Fedora setup (a patch 
for this will be submitted soon).



If we can make USE_PCS directly dependent on the installed distro that
would be sufficient (CI for each distro would take care of the different
branches) but this gets a bit more complicated if you want to split crm
and pcs out into different elements (like in mariadb vs percona)...



Yes, usage of pcs is directly dependent on the distro. Pcs/crm wouldn't be 
split into separate elements; it's just a matter of calling 2 different 
commands in an os-refresh-config script depending on the distribution (so we 
are sure both are tested).
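Keying the pcs/crm choice off the installed distro could be as simple as parsing /etc/os-release (a sketch; the distro IDs and the presence of os-release on all supported images are assumptions):

```python
def use_pcs(os_release_text):
    # Parse the ID= and ID_LIKE= fields of /etc/os-release and pick pcs
    # on the Fedora/RHEL family, crmsh everywhere else.
    ids = ""
    for line in os_release_text.splitlines():
        if line.startswith(("ID=", "ID_LIKE=")):
            ids += line.split("=", 1)[1].strip().strip('"') + " "
    return any(d in ids.split() for d in ("fedora", "rhel", "centos"))
```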


Jan



Re: [openstack-dev] [Fuel][Pacemaker][HA] Notifying clones of offline nodes

2014-06-12 Thread Mike Scherbakov
 I hoped that they would hit Reply All button
They might hit it, but if someone is not subscribed to openstack-dev, then
the message won't be posted, as far as I know.
Likely, mine won't be posted to the pacemaker ML either, as I'm not subscribed
to that ML.


On Thu, Jun 12, 2014 at 1:23 AM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 Hi, Jay

 Yep. Here is the link to the mail archive (I hoped they would hit the Reply
 All button):
 https://www.mail-archive.com/pacemaker@oss.clusterlabs.org/msg19896.html

 Actually, this is something the Mirantis Linux Hardening team could do in
 this release cycle, I hope, and push upstream.



 On Thu, Jun 12, 2014 at 2:44 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 05/26/2014 11:16 AM, Vladimir Kuklin wrote:

 Hi all

 We are working on HA solutions for OpenStack(-related) services and
 figured out that sometimes we need clones to be notified if one of the
 cluster nodes running clone instances goes offline. E.g., we need this
 information to make RabbitMQ AMQP broker cluster to forget this node
 until it goes up again. This is easily achievable if we stop the
 instance on the node - then notification is sent to clone instances and
 everything is fine. But what can we do if node goes offline
 unexpectedly? Is there any way to notify other clones that the slave is
 dead and perform corresponding actions?

 One of the ways we figured out is to implement additional cluster
 monitoring action in resource OCF and purge dead nodes, but it looks a
  little bit overwhelming and inconvenient. Is there a chance we missed
  some attribute that would configure pacemaker to notify
  other clones on node offline/fence/cold_shutdown?


 Ping. Hi Vladimir, did you ever get any response about this? I'm also
 interested in the answers...

 Best,
 -jay






 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com





-- 
Mike Scherbakov
#mihgen


[openstack-dev] promote blueprint about deferred deletion for volumes

2014-06-12 Thread Yuzhou (C)

@John,

Thank you for your comments.

About the volume-delete-protect blueprint (https://review.openstack.org/#/c/97034/):
I think deferred deletion for volumes is valuable, and it seems to me that it
should be sufficient.

Firstly, in Cinder today, calling the volume-delete API means the volume is
deleted immediately. If the user specifies the wrong volume by mistake, the
data in the volume may be lost forever. To avoid this, we hope to add a
deferred deletion mechanism for volumes, so that for a certain amount of time
a volume can be restored after the user discovers a mistaken deletion. So I
think deferred deletion for volumes is valuable.

Moreover, there are deferred deletion implementations for instances in Nova
and images in Glance; I think it is a very common feature for protecting
important resources.

So I would like to promote this blueprint soon.
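The mechanism being proposed can be sketched as follows (names and the retention window are illustrative, not the blueprint's actual API):

```python
import datetime

RETENTION = datetime.timedelta(days=7)  # assumed; would be configurable

def soft_delete(volume, now):
    # Instead of deleting immediately, park the volume for later purging.
    volume["status"] = "pending_deletion"
    volume["deleted_at"] = now

def restore(volume):
    # Undo a mistaken delete while the volume is still inside the window.
    if volume.get("status") != "pending_deletion":
        raise ValueError("volume is not pending deletion")
    volume["status"] = "available"
    volume["deleted_at"] = None

def expired(volumes, now):
    # Periodic task: pick the volumes whose window has elapsed, for real
    # deletion by the backend.
    return [v for v in volumes
            if v.get("status") == "pending_deletion"
            and now - v["deleted_at"] >= RETENTION]
```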

@all,

I have submitted this blueprint: https://review.openstack.org/#/c/97034/. It
introduces some complexity, and different options still exist, so I would like
to get more feedback about this bp.

Thanks.

Zhou Yu



Re: [openstack-dev] [nova][ceilometer] FloatingIp pollster spamming n-api logs (bug 1328694)

2014-06-12 Thread liusheng

Matt, Eoghan, thanks

Firstly, sorry for the disruption. The direct cause of the bug is an issue in
the nova-network scenario; it was my mistake when committing patch
https://review.openstack.org/#/c/81429/ to fix bug 1262124.
I agree with Matt's view: to reduce the load on the Nova API, it would be
better to use notifications instead of polling for floating IPs or other
resources.

For bug 1328694, Matt's patch (which has been approved) reverts the change on
the Ceilometer side. On the Nova side the issue should also be fixed; I have
uploaded a patch for that: https://review.openstack.org/#/c/99251/.

I will try to get the patch merged ASAP.
For bug 1262124, I will also upload a patch to fix the doc impact.


Best Regards
Liu sheng


On 2014/6/12 3:22, Eoghan Glynn wrote:

Thanks for bringing this to the list Matt, comments inline ...


tl;dr: some pervasive changes were made to nova to enable polling in
ceilometer which broke some things and in my opinion shouldn't have been
merged as a bug fix but rather should have been a blueprint.

===

The detailed version:

I opened bug 1328694 [1] yesterday and found that came back to some
changes made in ceilometer for bug 1262124 [2].

Upon further inspection, the original ceilometer bug 1262124 made some
changes to the nova os-floating-ips API extension and the database API
[3], and changes to python-novaclient [4] to enable ceilometer to use
the new API changes (basically pass --all-tenants when listing floating
IPs).

The original nova change introduced bug 1328694 which spams the nova-api
logs due to the ceilometer change [5] which does the polling, and right
now in the gate ceilometer is polling every 15 seconds.

IIUC that polling cadence in the gate is in the process of being reverted
to the out-of-the-box default of 600s.


I pushed a revert in ceilometer to fix the spam bug and a separate patch
was pushed to nova to fix the problem in the network API.

Thank you for that. The revert is just now approved on the ceilometer side,
and is wending its merry way through the gate.


The bigger problem I see here is that these changes were all made under
the guise of a bug when I think this is actually a blueprint.  We have
changes to the nova API, changes to the nova database API, CLI changes,
potential performance impacts (ceilometer can be hitting the nova
database a lot when polling here), security impacts (ceilometer needs
admin access to the nova API to list floating IPs for all tenants),
documentation impacts (the API and CLI changes are not documented), etc.

So right now we're left with, in my mind, two questions:

1. Do we just fix the spam bug 1328694 and move on, or
2. Do we revert the nova API/CLI changes and require this goes through
the nova-spec blueprint review process, which should have happened in
the first place.

So just to repeat the points I made on the unlogged #os-nova IRC channel
earlier, for posterity here ...

Nova already exposed an all_tenants flag in multiple APIs (servers, volumes,
security-groups etc.) and these would have:

(a) generally pre-existed ceilometer's usage of the corresponding APIs

and:

(b) been tracked and proposed at the time via straightforward LP bugs,
as opposed to being considered blueprint material

So the manner of the addition of the all_tenants flag to the floating_ips
API looks like it just followed existing custom and practice.

Though that said, the blueprint process and in particular the nova-specs
aspect, has been tightened up since then.

My preference would be to fix the issue in the underlying API, but to use
this as a teachable moment ... i.e. to require more oversight (in the
form of a reviewed & approved BP spec) when such API changes are proposed
in the future.

Cheers,
Eoghan


Are there other concerns here?  If there are no major objections to the
code that's already merged, then #2 might be excessive but we'd still
need docs changes.

I've already put this on the nova meeting agenda for tomorrow.

[1] https://bugs.launchpad.net/ceilometer/+bug/1328694
[2] https://bugs.launchpad.net/nova/+bug/1262124
[3] https://review.openstack.org/#/c/81429/
[4] https://review.openstack.org/#/c/83660/
[5] https://review.openstack.org/#/c/83676/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




While there is precedent for --all-tenants with some of the other APIs,
I'm concerned about where this stops.  When ceilometer wants polling on
some other resources that the nova API exposes, will it need the same
thing?  Doing all of this polling for resources in all tenants in nova
puts an undue burden on the nova API and the database.

Yes, that's a fair point.

The only other 

Re: [openstack-dev] [oslo] versioning and releases

2014-06-12 Thread Thierry Carrez
Doug Hellmann wrote:
 On Tue, Jun 10, 2014 at 5:19 PM, Mark McLoughlin mar...@redhat.com wrote:
 On Tue, 2014-06-10 at 12:24 -0400, Doug Hellmann wrote:
 [...]
 Background:

 We have two types of oslo libraries. Libraries like oslo.config and
 oslo.messaging were created by extracting incubated code, updating the
 public API, and packaging it. Libraries like cliff and taskflow were
 created as standalone packages from the beginning, and later adopted
 by the oslo team to manage their development and maintenance.

 Incubated libraries have been released at the end of a release cycle,
 as with the rest of the integrated packages. Adopted libraries have
 historically been released as needed during their development. We
 would like to synchronize these so that all oslo libraries are
 officially released with the rest of the software created by OpenStack
 developers.

Could you outline the benefits of syncing with the integrated release ?

Personally I see a few drawbacks to this approach:

We dump the new version on consumers usually around RC time, which is
generally a bad time to push a new version of a dependency and detect
potential breakage. Consumers just seem to get the new version at the
worst possible time.

It also prevents from spreading the work all over the cycle. For example
it may have been more successful to have the oslo.messaging new release
by milestone-1 to make sure it's adopted by projects in milestone-2 or
milestone-3... rather than have it ready by milestone-3 and expect all
projects to use it by consuming alphas during the cycle.

Now if *all* projects were continuously consuming alpha versions, most
of those drawbacks would go away.

 [...]
 Patch Releases:

 Updates to existing library releases can be made from stable branches.
 Checking out stable/icehouse of oslo.config for example would allow a
 release 1.3.1. We don't have a formal policy about whether we will
 create patch releases, or whether applications are better off using
 the latest release of the library. Do we need one?

 I'm not sure we need one, but if we did I'd expect them to be aligned
 with stable releases.

 Right now, I think they'd just be as-needed - if there's enough
 backported to the stable branch to warrant a release, we just cut one.
 
 That's pretty much what I thought, too. We shouldn't need to worry
 about alphas for patch releases, since we won't add features.

Yes, I think we can be pretty flexible about it. But to come back to my
above remark... should it be stable/icehouse or stable/1.3 ?

-- 
Thierry Carrez (ttx)



[openstack-dev] Calling on Security Engineers / Developers / Architects - Time to share your toys

2014-06-12 Thread Clark, Robert Graham
All,

TL:DR; Lets work together and openly on security review and threat
analysis for OpenStack

I've discussed this for a while within the security group but now I'm
sharing more widely here on -dev. 

There are currently scores of security reviews taking place on OpenStack
architecture, projects and implementations. All the big players in
OpenStack are conducting their own security reviews, we are all finding
things that should be addressed in the community and I'm sure that we
are all missing things that others have found too.

There's very little commercial value in holding onto security review
data. I am appealing to the security people out there in the community
to come together and share expertise on Threat Modelling/Analysis in
OpenStack. There's already been some excellent path-finding here (
https://wiki.openstack.org/wiki/Security/Threat_Analysis ).

My long term aspiration is that Threat Analysis and Penetration Testing
eventually gets performed in the open, in a collaborative process
between several organisations, all finding issues, opening bugs and
submitting patches together. With each organisation performing internal
audits on their deltas for secret source / value added stuff. I believe
by doing this we can raise the bar on all of our collective security
efforts while decreasing the massive duplication of effort that's going
on right now.

The security group is having a mid-cycle sprint in July, we are looking
to cover a lot of ground (
https://etherpad.openstack.org/p/ossg-juno-meetup ) but one of the
primary topics we will be focussing on is the Threat Modelling process.
How it can be shaped and how it should move forward. I hope that some of
you can be there and if not, that we can get the sharing and
collaboration of security reviews onto the security agenda at your
respective organisations. 

Cheers
-Rob




Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-06-12 Thread Chmouel Boudjnah
On Wed, Jun 11, 2014 at 9:47 PM, Sean Dague s...@dague.net wrote:

 Actually swiftclient is one of the biggest offenders in the gate -

 http://logs.openstack.org/96/99396/1/check/check-tempest-dsvm-full/4501fc8/logs/screen-g-api.txt.gz#_2014-06-11_15_20_11_078



I'd be happy to fix that but that would make the --debug option ineffective,
right? Is it addressed in a different way in other clients?

Chmouel


[openstack-dev] [nova] fastest way to run individual tests ?

2014-06-12 Thread Daniel P. Berrange
When in the middle of developing code for nova I'll typically not wish to
the run the entire Nova test suite every time I have a bit of code to
verify. I'll just want to run the single test case that deals with the
code I'm hacking on.

I'm currently writing a 'test_hardware.py' test case for the NUMA work
I'm dealing with. I can run that using 'run_tests.sh' or 'tox' by just
passing the name of the test case. The test case in question takes a tiny
fraction of a second to run, but the tox command below wastes 32 seconds
faffing about before it runs the test itself, while run_tests.sh is not
much better, wasting 22 seconds.

   # time tox -e py27 tests.virt.test_hardware
   ...snip...
   real 0m32.923s
   user 0m22.083s
   sys  0m4.377s


   # time ./run_tests.sh tests.virt.test_hardware
   ...snip...
   real 0m22.075s
   user 0m14.282s
   sys  0m1.407s


This is a really severe time penalty to incur each time I want to run
this tiny test (which is very frequent during dev).

Does anyone have any tips on how to actually run individual tests in an
efficient manner, i.e. something that adds no more than a 1 second penalty
over & above the time to run the test itself? NB, assume that I've primed
the virtualenv with all prerequisite deps already.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] [sahara] canceling meeting Thu 12

2014-06-12 Thread Sergey Lukjanov
Hey folks,

today is the holiday in Russia, so, canceling the irc team meeting.

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



[openstack-dev] [rally] Proposal to reduce amount of conf options

2014-06-12 Thread Boris Pavlovic
Hi all,


At this moment in rally we have, for almost every benchmark scenario (or at
least service), a bunch of CONF options that are used to set up the poll
interval and the pre-poll pause. Here is the
section:
https://github.com/stackforge/rally/blob/master/etc/rally/rally.conf.sample#L142-L293


If the default values work for you then you are happy, but if your cloud is
faster or slower you'll probably need to change all these parameters
=> which is a painful operation.

I would like to propose having just one configuration option called
cloud_speed or something like that; all other conf options related to the
pre-poll pause and the poll intervals would be removed and just calculated
from cloud_speed. So it would be quite simple to set up all polling
intervals by updating only one parameter.

Btw, if we have only one argument, we can make some method that will
automatically adjust it before running the benchmark.

Thoughts?


Best regards,
Boris Pavlovic


[openstack-dev] Problematic gate-tempest-dsvm-virtual-ironic job

2014-06-12 Thread Sean Dague
Current gate-tempest-dsvm-virtual-ironic has only a 65% pass rate *in
the gate* over the last 48 hrs -
http://logstash.openstack.org/#eyJzZWFyY2giOiJidWlsZF9uYW1lOmdhdGUtdGVtcGVzdC1kc3ZtLXZpcnR1YWwtaXJvbmljIEFORCAobWVzc2FnZTpcIkZpbmlzaGVkOiBTVUNDRVNTXCIgT1IgbWVzc2FnZTpcIkZpbmlzaGVkOiBGQUlMVVJFXCIpIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDAyNTcyMjk4NTE4LCJtb2RlIjoic2NvcmUiLCJhbmFseXplX2ZpZWxkIjoiYnVpbGRfc3RhdHVzIn0=

This job is run on diskimage-builder and ironic jobs in the the gate
queue. Those jobs are now part of the integrated gate queue due to the
overlap with oslotest jobs.

This is *really* problematic, and too low to be voting. Anything < 90%
pass rate is really an issue.

It looks like these issues are actually structural with the job, because
unlike our other configurations which aggressively try to avoid network
interaction (which we've found is too unreliable), this job adds the
cloud archive repository on the fly, and pulls content from there.
That's never going to have a high success rate.

I'm proposing we turn this off - https://review.openstack.org/#/c/99630/

The ironic team needs to go back to the drawing board a little here and
work on getting all the packages and repositories they need pulled down
into nodepool so we can isolate from network effects before we can make
this job gating again.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-06-12 Thread Chmouel Boudjnah
On Thu, Jun 12, 2014 at 12:58 PM, Chmouel Boudjnah chmo...@enovance.com
wrote:


 On Wed, Jun 11, 2014 at 9:47 PM, Sean Dague s...@dague.net wrote:

 Actually swiftclient is one of the biggest offenders in the gate -

 http://logs.openstack.org/96/99396/1/check/check-tempest-dsvm-full/4501fc8/logs/screen-g-api.txt.gz#_2014-06-11_15_20_11_078



 I'd be happy to fix that but that would make the --debug option
 ineffective, right? Is it addressed in a different way in other clients?


Anyway I have sent a patch for swiftclient for this in :

https://review.openstack.org/#/c/99632/1

Personally I don't much like that SHA1 and I'd rather use the first
16 bytes of the token (like we did in the swift server)

Chmouel


Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-12 Thread Flavio Percoco

On 11/06/14 16:26 -0700, Devananda van der Veen wrote:

On Tue, Jun 10, 2014 at 1:23 AM, Flavio Percoco fla...@redhat.com wrote:

Against:

 • Makes it hard for users to create applications that work across multiple
   clouds, since critical functionality may or may not be available in a given
   deployment. (counter: how many users need cross-cloud compatibility? Can
   they degrade gracefully?)




The OpenStack Infra team does.


This is definitely unfortunate but I believe it's a fair trade-off. I
believe the same happens in other services that have support for
different drivers.


I disagree strongly on this point.

Interoperability is one of the cornerstones of OpenStack. We've had
panels about it at summits. Designing an API which is not
interoperable is not a fair tradeoff for performance - it's
destructive to the health of the project. Where other projects have
already done that, it's unfortunate, but let's not plan to make it
worse.

A lack of interoperability not only prevents users from migrating
between clouds or running against multiple clouds concurrently, it
hurts application developers who want to build on top of OpenStack
because their applications become tied to specific *implementations*
of OpenStack.



What I meant to say is that, based on a core set of functionalities,
all extra functionalities are part of the fair trade-off. It's up to
the cloud provider to choose what storage driver/features they want to
expose. Nonetheless, they'll all expose the same core set of
functionalities. I believe this is true also for other services, which
I'm not trying to use as an excuse but as a reference of what the
reality of non-opinionated services is. Marconi is opinionated w.r.t
the API and the core set of functionalities it wants to support.

You make really good points that I agree with. Thanks for sharing.

--
@flaper87
Flavio Percoco




Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-06-12 Thread Sean Dague
On 06/12/2014 07:42 AM, Chmouel Boudjnah wrote:
 On Thu, Jun 12, 2014 at 12:58 PM, Chmouel Boudjnah chmo...@enovance.com
 mailto:chmo...@enovance.com wrote:
 
 
 On Wed, Jun 11, 2014 at 9:47 PM, Sean Dague s...@dague.net
 mailto:s...@dague.net wrote:
 
 Actually swiftclient is one of the biggest offenders in the gate -
 
 http://logs.openstack.org/96/99396/1/check/check-tempest-dsvm-full/4501fc8/logs/screen-g-api.txt.gz#_2014-06-11_15_20_11_078
 
 
 
  I'd be happy to fix that but that would make the --debug option
  ineffective, right? Is it addressed in a different way in other clients?

The only thing it makes harder is you have to generate your own token to
run the curl command. The rest is there. Because everyone is running our
servers at debug levels, it means the clients are going to be running
debug level as well (yay python logging!), so this is something I don't
think people realized was a huge issue.

 Anyway I have sent a patch for swiftclient for this in :
 
 https://review.openstack.org/#/c/99632/1
 
 Personally I don't much like that SHA1 and I'd rather use the
 first 16 bytes of the token (like we did in the swift server)

Using a well known hash means you can verify it was the right thing if
you have access to the original data. Just taking the first 16 bytes
doesn't give you that, so I think the hash provides slightly more
debugability.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] Problematic gate-tempest-dsvm-virtual-ironic job

2014-06-12 Thread Thierry Carrez
Sean Dague wrote:
 Current gate-tempest-dsvm-virtual-ironic has only a 65% pass rate *in
 the gate* over the last 48 hrs -
 http://logstash.openstack.org/#eyJzZWFyY2giOiJidWlsZF9uYW1lOmdhdGUtdGVtcGVzdC1kc3ZtLXZpcnR1YWwtaXJvbmljIEFORCAobWVzc2FnZTpcIkZpbmlzaGVkOiBTVUNDRVNTXCIgT1IgbWVzc2FnZTpcIkZpbmlzaGVkOiBGQUlMVVJFXCIpIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDAyNTcyMjk4NTE4LCJtb2RlIjoic2NvcmUiLCJhbmFseXplX2ZpZWxkIjoiYnVpbGRfc3RhdHVzIn0=
 
 This job is run on diskimage-builder and ironic jobs in the the gate
 queue. Those jobs are now part of the integrated gate queue due to the
 overlap with oslotest jobs.
 
 This is *really* problematic, and too low to be voting. Anything < 90%
 pass rate is really an issue.
 
 It looks like these issues are actually structural with the job, because
 unlike our other configurations which aggressively try to avoid network
 interaction (which we've found is too unreliable), this job adds the
 cloud archive repository on the fly, and pulls content from there.
 That's never going to have a high success rate.
 
 I'm proposing we turn this off - https://review.openstack.org/#/c/99630/
 
 The ironic team needs to go back to the drawing board a little here and
 work on getting all the packages and repositories they need pulled down
 into nodepool so we can isolate from network effects before we can make
 this job gating again.

+1ed

-- 
Thierry Carrez (ttx)





[openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-12 Thread Sean Dague
We're definitely deep into capacity issues, so it's going to be time to
start making tougher decisions about things we decide aren't different
enough to bother testing on every commit.

Previously we've been testing Postgresql in the gate because it has a
stricter interpretation of SQL than MySQL. And when we didn't test
Postgresql it regressed. I know, I chased it for about 4 weeks in grizzly.

However Monty brought up a good point at Summit, that MySQL has a strict
mode. That should actually enforce the same strictness.

My proposal is that we land this change to devstack -
https://review.openstack.org/#/c/97442/ and backport it to past devstack
branches.

Then we drop the pg jobs, as the differences between the 2 configs
should then be very minimal. All the *actual* failures we've seen
between the 2 were completely about this strict SQL mode interpretation.
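For reference, the strict mode being discussed is a server-side setting along these lines (a sketch only; the exact mode devstack ends up setting is in the review above):

```ini
# my.cnf -- make MySQL reject invalid data (e.g. out-of-range or
# truncated values) with an error instead of silently coercing it
[mysqld]
sql_mode = TRADITIONAL
```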

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-12 Thread Julien Danjou
On Thu, Jun 12 2014, Sean Dague wrote:

 However Monty brought up a good point at Summit, that MySQL has a strict
 mode. That should actually enforce the same strictness.

I would vote -1 on that, simply because using PostgreSQL should be more
than just doing strict SQL.

For example, in Ceilometer and Gnocchi we have custom SQL types that are
implemented with different data types depending on the SQL engine that's
being used. PostgreSQL offers better and more optimized data types in
certain cases (timestamp or UUID, off the top of my head). Not gating
against PostgreSQL would potentially introduce bugs in that support for
us.

Oh sure, I can easily imagine that it's not the case currently in many
other OpenStack projects. But that IMHO would be a terrible move towards
leveling down the SQL usage in OpenStack, which is already pretty low
IMHO.

My 2c,
-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info




Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-06-12 Thread Chmouel Boudjnah
On Thu, Jun 12, 2014 at 1:59 PM, Sean Dague s...@dague.net wrote:

 The only thing it makes harder is you have to generate your own token to
 run the curl command. The rest is there.


Well, I would have imagined that the curl command debug output is there so
people can easily copy and paste and/or tweak the commands, but sure, it
would just make it a bit harder.


 Because everyone is running our
 servers at debug levels, it means the clients are going to be running
 debug level as well (yay python logging!), so this is something I don't
 think people realized was a huge issue.


so maybe the issue is that those curl commands show up in the server log
when they should only be output when running swift/nova/etc client --debug,
right?

Chmouel


Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-12 Thread Sean Dague
On 06/12/2014 08:15 AM, Julien Danjou wrote:
 On Thu, Jun 12 2014, Sean Dague wrote:
 
 However Monty brought up a good point at Summit, that MySQL has a strict
 mode. That should actually enforce the same strictness.
 
 I would vote -1 on that, simply because using PostgreSQL should be more
 than just doing strict SQL.
 
 For example, in Ceilometer and Gnocchi we have custom SQL type that are
 implemented with different data type depending on the SQL engine that's
 being used. PostgreSQL proposes better and more optimized data type in
 certain case (timestamp or UUID from the top of my head). Not gating
 against PostgreSQL would potentially introduce bugs in that support for
 us.
 
 Oh sure, I can easily imagine that it's not the case currently in many
 other OpenStack projects. But that IMHO would be a terrible move towards
 leveling down the SQL usage in OpenStack, which is already pretty low
 IMHO.

That's not catchable in unit or functional tests?

My experience is that it's *really* hard to tickle stuff like that from
Tempest in any meaningful way that's not catchable at lower levels.
Especially as we're going through SQLA for all this access in the first
place.

Keeping jobs alive based on the theory that they might one day be useful
is something we just don't have the liberty to do any more. We've not
seen an idle node in zuul in 2 days... and we're only at j-1. j-3 will
be at least +50% of this load.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-12 Thread Julien Danjou
On Thu, Jun 12 2014, Sean Dague wrote:

 That's not catchable in unit or functional tests?

Not in an accurate manner, no.

 Keeping jobs alive based on the theory that they might one day be useful
 is something we just don't have the liberty to do any more. We've not
 seen an idle node in zuul in 2 days... and we're only at j-1. j-3 will
 be at least +50% of this load.

Sure, I'm not saying we don't have a problem. I'm just saying it's not a
good solution to fix that problem IMHO.

-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info




Re: [openstack-dev] [Glance] Nominating Nikhil Komawar for Core

2014-06-12 Thread Alex Meade
+100 it's about time!


On Thu, Jun 12, 2014 at 3:26 AM, Mark Washenberger 
mark.washenber...@markwash.net wrote:

 Hi folks,

 I'd like to nominate Nikhil Komawar to join glance-core. His code and
 review contributions over the past years have been very helpful and he's
 been taking on a very important role in advancing the glance tasks work.

 If anyone has any concerns, please let me know. Otherwise I'll make the
 membership change next week (which is code for, when someone reminds me to!)

 Thanks!
 markwash





Re: [openstack-dev] [oslo] versioning and releases

2014-06-12 Thread Mark McLoughlin
On Thu, 2014-06-12 at 12:09 +0200, Thierry Carrez wrote:
 Doug Hellmann wrote:
  On Tue, Jun 10, 2014 at 5:19 PM, Mark McLoughlin mar...@redhat.com wrote:
  On Tue, 2014-06-10 at 12:24 -0400, Doug Hellmann wrote:
  [...]
  Background:
 
  We have two types of oslo libraries. Libraries like oslo.config and
  oslo.messaging were created by extracting incubated code, updating the
  public API, and packaging it. Libraries like cliff and taskflow were
  created as standalone packages from the beginning, and later adopted
  by the oslo team to manage their development and maintenance.
 
  Incubated libraries have been released at the end of a release cycle,
  as with the rest of the integrated packages. Adopted libraries have
  historically been released as needed during their development. We
  would like to synchronize these so that all oslo libraries are
  officially released with the rest of the software created by OpenStack
  developers.
 
 Could you outline the benefits of syncing with the integrated release ?

Sure!

http://lists.openstack.org/pipermail/openstack-dev/2012-November/003345.html

:)

 Personally I see a few drawbacks to this approach:
 
 We dump the new version on consumers usually around RC time, which is
 generally a bad time to push a new version of a dependency and detect
 potential breakage. Consumers just seem to get the new version at the
 worst possible time.
 
 It also prevents from spreading the work all over the cycle. For example
 it may have been more successful to have the oslo.messaging new release
 by milestone-1 to make sure it's adopted by projects in milestone-2 or
 milestone-3... rather than have it ready by milestone-3 and expect all
 projects to use it by consuming alphas during the cycle.
 
 Now if *all* projects were continuously consuming alpha versions, most
 of those drawbacks would go away.

Yes, that's the plan. Those issues are acknowledged and we're reasonably
confident the alpha versions plan will address them.

  [...]
  Patch Releases:
 
  Updates to existing library releases can be made from stable branches.
  Checking out stable/icehouse of oslo.config for example would allow a
  release 1.3.1. We don't have a formal policy about whether we will
  create patch releases, or whether applications are better off using
  the latest release of the library. Do we need one?
 
  I'm not sure we need one, but if we did I'd expect them to be aligned
  with stable releases.
 
  Right now, I think they'd just be as-needed - if there's enough
  backported to the stable branch to warrant a release, we just cut one.
  
  That's pretty much what I thought, too. We shouldn't need to worry
  about alphas for patch releases, since we won't add features.
 
 Yes, I think we can be pretty flexible about it. But to come back to my
 above remark... should it be stable/icehouse or stable/1.3 ?

It's a branch for bugfix releases of the icehouse version of the
library, so I think stable/icehouse makes sense.

Mark.




Re: [openstack-dev] [rally] Proposal to reduce amount of conf options

2014-06-12 Thread Sergey Skripnick


IMO it is good if default values are calculated from cloud_speed, and
there is also the ability to change every single option.



Hi all,

At this moment in rally we have, for almost every benchmark scenario (or
at least service), a bunch of CONF options that are used to set up the poll
interval and the pre-poll pause. Here is the section:
https://github.com/stackforge/rally/blob/master/etc/rally/rally.conf.sample#L142-L293


If the default values work for you then you are happy, but if your cloud is
faster or slower you'll probably need to change all these parameters
=> which is a painful operation.


I would like to propose having just one configuration option called
cloud_speed or something like that; all other conf options related to the
pre-poll pause and the poll intervals would be removed and just
calculated from cloud_speed. So it would be quite simple to set up all
polling intervals by updating only one parameter.



Btw, if we have only one argument, we can make some method that will
automatically adjust it before running the benchmark.
Thoughts?



Best regards,
Boris Pavlovic





--
Regards,
Sergey Skripnick



[openstack-dev] [Nova]{neutron] Mid cycle sprints

2014-06-12 Thread Gary Kotton
Hi,
There is the mid cycle sprint in July for Nova and Neutron. Anyone interested 
in maybe getting one together in Europe/Middle East around the same dates? If 
people are willing to come to this part of the world I am sure that we can 
organize a venue for a few days. Anyone interested? If we can get a quorum then 
I will be happy to try and arrange things.
Thanks
Gary




Re: [openstack-dev] [Glance] Nominating Nikhil Komawar for Core

2014-06-12 Thread Kuvaja, Erno
+1

From: Alex Meade [mailto:mr.alex.me...@gmail.com]
Sent: 12 June 2014 13:56
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Nominating Nikhil Komawar for Core

+100 it's about time!

On Thu, Jun 12, 2014 at 3:26 AM, Mark Washenberger 
mark.washenber...@markwash.net wrote:
Hi folks,

I'd like to nominate Nikhil Komawar to join glance-core. His code and review 
contributions over the past years have been very helpful and he's been taking 
on a very important role in advancing the glance tasks work.

If anyone has any concerns, please let me know. Otherwise I'll make the 
membership change next week (which is code for, when someone reminds me to!)

Thanks!
markwash




Re: [openstack-dev] [Nova]{neutron] Mid cycle sprints

2014-06-12 Thread Gary Kotton
This part of the world == Israel (it has been a long week :))

From: administrator gkot...@vmware.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Thursday, June 12, 2014 at 4:32 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Nova]{neutron] Mid cycle sprints

Hi,
There is the mid cycle sprint in July for Nova and Neutron. Anyone interested 
in maybe getting one together in Europe/Middle East around the same dates? If 
people are willing to come to this part of the world I am sure that we can 
organize a venue for a few days. Anyone interested. If we can get a quorum then 
I will be happy to try and arrange things.
Thanks
Gary




[openstack-dev] [Nova][Olso] Periodic task coalescing

2014-06-12 Thread Tom Cammann

Hello,

I'm addressing https://bugs.launchpad.net/oslo/+bug/1326020 which is
dealing with periodic tasks.

There is currently a code block that checks if a task is 0.2 seconds
away from being run and, if so, runs it now instead. Essentially
coalescing nearby tasks together.

From oslo-incubator/openstack/common/periodic_task.py:162

# If a periodic task is _nearly_ due, then we'll run it early
idle_for = min(idle_for, spacing)
if last_run is not None:
    delta = last_run + spacing - time.time()
    if delta > 0.2:
        idle_for = min(idle_for, delta)
        continue

However, the resolution of the config for the various periodic tasks is
one second, and I have been unable to find a task that has a
millisecond resolution. I intend to get rid of this coalescing in this
bug fix.

It fits in with this bug fix as I intend to make the tasks run on their
specific spacing boundaries, i.e. if spacing is 10 seconds, it will run
at 17:30:10, 17:30:20, etc.
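For what it's worth, the boundary alignment described above could be computed
along these lines (an illustrative sketch only, not the actual patch;
next_boundary_run is a made-up name):

```python
import time

def next_boundary_run(spacing, now=None):
    """Return the next run time aligned to the task's spacing boundary.

    With spacing=10 the task fires on wall-clock times that are whole
    multiples of 10 seconds since the epoch: 17:30:10, 17:30:20, etc.
    """
    if now is None:
        now = time.time()
    return (int(now) // spacing + 1) * spacing
```

So a task with a 10-second spacing asked at :13 past the minute would next
fire at :20, independent of when it last ran.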

Is there any reason to keep the coalescing of tasks?

Thanks,

Tom




Re: [openstack-dev] [neutron] Specs repository update and the way forward

2014-06-12 Thread Carlos Gonçalves
Thank you for the update, Kyle.

I was sceptical about this move at first but hopefully I was wrong. The specs 
repository indeed eases a lot of the work from a submitter and reviewer point 
of view.

Is there any web page where all approved blueprints are being published to? 
Jenkins builds such pages I’m looking for but they are linked to each patchset 
individually (e.g., 
http://docs-draft.openstack.org/77/92477/6/check/gate-neutron-specs-docs/f05cc1d/doc/build/html/).
 In addition, listing BPs currently under review and linking to their 
review.o.o pages could potentially draw more attention/awareness to what’s being 
proposed to Neutron (and other OpenStack projects).

Thanks,
Carlos Goncalves

On 11 Jun 2014, at 18:25, Kyle Mestery mest...@noironetworks.com wrote:

 tl;dr: The specs repository has been great to work with. As a
 reviewer, it makes reviews easier. As PTL, it makes tracking easier as
 well.
 
 Since Juno-1 is about to close, I wanted to give everyone an update on
 Neutron's usage of the specs repository. These are observations from
 using this since a few weeks before the Summit. I thought it would be
 good to share with the broader community to see if other projects
 using spec repositories had similar thoughts, and I also wanted to
 share this info for BP submitters and reviewers.
 
 Overall, the spec repository has been great as a tool to consolidate
 where new ideas are documented and made into something we can merge
 and move forward with. Using gerrit for this has been great. We've
 merged a good amount of specs [1], and the process of hooking these to
 Launchpad for milestone tracking has been straightforward. As the PTL
 of Neutron, I've found the specs repository helps me out immensely,
 the workflow is great.
 
 One of the things I've noticed is that sometimes it's hard to get
 submitters to respond to feedback on the specs repository. If you look
 at our current queue of open BPs [2], we have a lot which are waiting
 for feedback from submitters. I don't know how to address this issue,
 any feedback appreciated here.
 
 Secondly, with so many open BPs, it's unlikely that all of these will
 make Juno. With what we already have approved and being worked, a lot
 of these will likely slide to the K release. At some point in the
 next few weeks, I may start going through some and marking them as
 such.
 
 So, to summarize, I'm very happy with the workflow from the specs repository.
 
 Thanks for reading!
 Kyle
 
 [1] 
 https://review.openstack.org/#/q/status:merged+project:openstack/neutron-specs,n,z
 [2] 
 https://review.openstack.org/#/q/status:open+project:openstack/neutron-specs,n,z
 



Re: [openstack-dev] [Nova][QA] Disabling the v3 API tests in the gate

2014-06-12 Thread Matt Riedemann



On 6/12/2014 12:40 AM, Christopher Yeoh wrote:

On Thu, Jun 12, 2014 at 7:30 AM, Matthew Treinish mtrein...@kortar.org wrote:

Hi everyone,

As part of debugging all the bugs that have been plaguing the gate
the past
couple of weeks one of the things that came up is that we're still
running the
v3 API tests in the gate. AIUI at summit Nova decided that the v3
API test won't
exist as a separate major version. So I'm not sure there is much
value in
continuing to run the API tests.


So the v3 API won't exist as a separate major version, but I think it's
very important we keep up with the tempest tests so we don't regress.
Over time these v3 api features will either be ported to
v2.1microversions (the vast majority I expect) or dropped. At that
point they'll be moved to tempest testing v2.1microversions.

  But whatever we do we'll need to test against v2 (which we're stuck
with for a very long time) and v2.1microversions (rolling possible
backwards incompatible changes to the v2 api) for quite a while.


The main motivator for doing this is the total run time of tempest; the v3
tests
add ~7-10min of time to the gating jobs right now. [1] (which is
just a time
test, not how it'll be implemented) While this doesn't seem like much it
actually would make a big difference in our total throughput. Every
little bit
counts. There are probably some other less quantifiable benefits to
removing the
extra testing like for example slightly decreasing the load on nova
in an
already stressed environment like the gating nodes.

So I'd like to propose that we disable running the v3 API tests in
the gate. I
was thinking we would keep the tests around in tree for as long as
there was
a v3 API in any supported nova branch, but instead of running them
in the gate
just have a nightly bit-rot job on the tests and also add it to the
experimental
queue.


I'd really prefer we don't take this route, but its better than nothing.
Incidentally the v3 tempest api tests have in the past found race
conditions which did theoretically occur in the v2 api as well. Just the
different architecture exposed them a bit better.

Chris





I think it'd be OK to move them to the experimental queue and a periodic 
nightly job until the v2.1 stuff shakes out.  The v3 API is marked 
experimental right now so it seems fitting that it'd be running tests in 
the experimental queue until at least the spec is approved and 
microversioning starts happening in the code base.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [TripleO] Reviews - we need your help!

2014-06-12 Thread Macdonald-Wallace, Matthew
FWIW, I’ve tried to make a useful dashboard for this using Sean Dague’s 
gerrit-dash-creator [0].

Short URL is http://bit.ly/1l4DLFS long url is:

https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Ftripleo-incubator+OR+project%3Aopenstack%2Ftripleo-image-elements+OR+project%3Aopenstack%2Ftripleo-heat-templates+OR+project%3Aopenstack%2Ftripleo-specs+OR+project%3Aopenstack%2Fos-apply-config+OR+project%3Aopenstack%2Fos-collect-config+OR+project%3Aopenstack%2Fos-refresh-config+OR+project%3Aopenstack%2Fdiskimage-builder%29+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D-1+label%3AVerified%3E%3D1%252cjenkins+NOT+label%3ACode-Review%3E%3D-2%252cself+branch%3Amaster+status%3Aopentitle=TripleO+ReviewsYour+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=reviewer%3AselfPassed+Jenkins%2C+No+Negative+Feedback=NOT+label%3ACode-Review%3C%3D-1+limit%3A100Changes+with+no+code+review+in+the+last+48hrs=NOT+label%3ACode-Review%3C%3D2+age%3A48hChanges+with+no+code+review+in+the+last+5+days=NOT+label%3ACode-Review%3C%3D2+age%3A5dChanges+with+no+code+review+in+the+last+7+days=NOT+label%3ACode-Review%3C%3D2+age%3A7dSome+adjustment+required+%28-1+only%29=label%3ACode-Review%3D-1+NOT+label%3ACode-Review%3D-2+limit%3A100Dead+Specs+%28-2%29=label%3ACode-Review%3C%3D-2

I’ll add it to my fork and submit a PR if people think it useful.

Matt

[0] https://github.com/sdague/gerrit-dash-creator

From: James Polley [mailto:j...@jamezpolley.com]
Sent: 12 June 2014 06:08
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [TripleO] Reviews - we need your help!

During yesterday's IRC meeting, we realized that our review stats are starting 
to slip again.
Just after summit, our stats were starting to improve. In the 2014-05-20 
meeting, the TripleO "Stats since the last revision without -1 or -2" [1] 
looked like this:

1st quartile wait time: 1 days, 1 hours, 11 minutes

Median wait time: 6 days, 9 hours, 49 minutes

3rd quartile wait time: 13 days, 5 hours, 46 minutes

As of yesterdays meeting, we have:

1st quartile wait time: 4 days, 23 hours, 19 minutes

Median wait time: 7 days, 22 hours, 8 minutes

3rd quartile wait time: 13 days, 19 hours, 17 minutes

This really hurts our velocity, and is especially hard on people making their 
first commit, as it can take them almost a full work week before they even get 
their first feedback.
To get things moving, we need everyone to make a special effort to do a few 
reviews every day. It would be most helpful if you can look for older reviews 
without a -1 or -2 and help those reviews get over the line.
If you find reviews that are just waiting for a simple fix - typo or syntax 
fixes, simple code fixes, or a simple rebase - it would be even more helpful if 
you could take a few minutes to make those patches, rather than just leaving 
the review waiting for the attention of the original submitter.
Please keep in mind that these stats are based on all of our projects, not just 
tripleo-incubator. To save you heading to the wiki, here's a handy link that 
shows you all open code reviews in all our projects:

http://bit.ly/1hQco1N

If you'd prefer the long version:
https://review.openstack.org/#/q/status:open+%28project:openstack/tripleo-incubator+OR+project:openstack/tuskar+OR+project:openstack/tuskar-ui+OR+project:openstack-infra/tripleo-ci+OR+project:openstack/os-apply-config+OR+project:openstack/os-collect-config+OR+project:openstack/os-refresh-config+OR+project:openstack/os-cloud-config+OR+project:openstack/tripleo-image-elements+OR+project:openstack/tripleo-heat-templates+OR+project:openstack/diskimage-builder+OR+project:openstack/python-tuskarclient+OR+project:openstack/tripleo-specs%29,n,z

https://wiki.openstack.org/wiki/TripleO#Bug_Triage has links to reviews for 
individual projects as well as more information about how we triage bugs.

[1] http://www.nemebean.com/reviewstats/tripleo-open.html


Re: [openstack-dev] [neutron] Specs repository update and the way forward

2014-06-12 Thread Veiga, Anthony
+1 to this.  It would be great to read the compiled spec and have it be 
searchable/filtered.
-Anthony

Thank you for the update, Kyle.

I was sceptical about this move at first but hopefully I was wrong. The specs 
repository indeed eases a lot of the work from a submitter and reviewer point 
of view.

Is there any web page where all approved blueprints are being published to? 
Jenkins builds such pages I’m looking for but they are linked to each patchset 
individually (e.g., 
http://docs-draft.openstack.org/77/92477/6/check/gate-neutron-specs-docs/f05cc1d/doc/build/html/).
 In addition, listing BPs currently under review and linking to their 
review.o.o pages could potentially draw more attention/awareness to what’s being 
proposed to Neutron (and other OpenStack projects).

Thanks,
Carlos Goncalves

On 11 Jun 2014, at 18:25, Kyle Mestery 
mest...@noironetworks.com wrote:

tl;dr: The specs repository has been great to work with. As a
reviewer, it makes reviews easier. As PTL, it makes tracking easier as
well.

Since Juno-1 is about to close, I wanted to give everyone an update on
Neutron's usage of the specs repository. These are observations from
using this since a few weeks before the Summit. I thought it would be
good to share with the broader community to see if other projects
using spec repositories had similar thoughts, and I also wanted to
share this info for BP submitters and reviewers.

Overall, the spec repository has been great as a tool to consolidate
where new ideas are documented and made into something we can merge
and move forward with. Using gerrit for this has been great. We've
merged a good amount of specs [1], and the process of hooking these to
Launchpad for milestone tracking has been straightforward. As the PTL
of Neutron, I've found the specs repository helps me out immensely,
the workflow is great.

One of the things I've noticed is that sometimes it's hard to get
submitters to respond to feedback on the specs repository. If you look
at our current queue of open BPs [2], we have a lot which are waiting
for feedback from submitters. I don't know how to address this issue,
any feedback appreciated here.

Secondly, with so many open BPs, it's unlikely that all of these will
make Juno. With what we already have approved and being worked, a lot
of these will likely slide to the K release. At some point in the
next few weeks, I may start going through some and marking them as
such.

So, to summarize, I'm very happy with the workflow from the specs repository.

Thanks for reading!
Kyle

[1] 
https://review.openstack.org/#/q/status:merged+project:openstack/neutron-specs,n,z
[2] 
https://review.openstack.org/#/q/status:open+project:openstack/neutron-specs,n,z




[openstack-dev] [manila] Access Groups API and DB changes

2014-06-12 Thread Alex Meade
Hey Folks,

The blueprint for Access Groups can be found here:
https://blueprints.launchpad.net/manila/+spec/access-groups

If you have a chance, please look through the proposal for the API
resources and DB schema changes here:
https://etherpad.openstack.org/p/manila-access-groups-api-proposal

-Alex


[openstack-dev] [nova] Distributed locking

2014-06-12 Thread Matthew Booth
We have a need for a distributed lock in the VMware driver, which I
suspect isn't unique. Specifically it is possible for a VMware datastore
to be accessed via multiple nova nodes if it is shared between
clusters[1]. Unfortunately the vSphere API doesn't provide us with the
primitives to implement robust locking using the storage layer itself,
so we're looking elsewhere.

The closest we seem to have in Nova currently are service groups, which
currently have 3 implementations: DB, Zookeeper and Memcached. The
service group api currently provides simple membership, but for locking
we'd be looking for something more.

I think the api we'd be looking for would be something along the lines of:

Foo.lock(name, fence_info)
Foo.unlock(name)

Bar.fence(fence_info)

Note that fencing would be required in this case. We believe we can
fence by terminating the other Nova's vSphere session, but other options
might include killing a Nova process, or STONITH. These would be
implemented as fencing drivers.

Although I haven't worked through the detail, I believe lock and unlock
would be implementable in all 3 of the current service group drivers.
Fencing would be implemented separately.
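To make the proposal concrete, here is a toy single-process sketch of the
lock/fence split (all names hypothetical; a real driver would store the lock
and fence_info in the DB, Zookeeper or Memcached so every node sees them):

```python
import threading

class InMemoryLockDriver:
    """Toy stand-in for the proposed lock API, single-process only.

    fence_info records whatever a fencing driver would need to isolate
    a dead holder, e.g. a vSphere session id, or a process id for STONITH.
    """

    def __init__(self):
        self._guard = threading.Lock()
        self._held = {}  # lock name -> fence_info of the current holder

    def lock(self, name, fence_info):
        """Try to acquire; on failure the caller may fence the holder."""
        with self._guard:
            if name in self._held:
                return False
            self._held[name] = fence_info
            return True

    def unlock(self, name):
        """Release the lock, returning the recorded fence_info."""
        with self._guard:
            return self._held.pop(name, None)
```

A fencing driver would then expose a single fence(fence_info) call, e.g.
terminating the recorded vSphere session before the lock is forcibly
reassigned.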

My questions:

* Does this already exist, or does anybody have patches pending to do
something like this?
* Are there other users for this?
* Would service groups be an appropriate place, or a new distributed
locking class?
* How about if we just used zookeeper directly in the driver?

Matt

[1] Cluster ~= hypervisor
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490



[openstack-dev] Import errors in tests are not reported in python 2.7

2014-06-12 Thread Thomas Herve
Hi all,

I don't know if it's a know issue or not, but I noticed on one of my patch 
(https://review.openstack.org/#/c/99648/) that the 2.7 gate was passing whereas 
the 2.6 is failing because of import errors. It seems to be a problem related 
to the difference in the discover module, so presumably an issue in testtools. 
The problem appears locally when using tox, so it's fairly annoying not to be 
able to trust the test result.

Thanks,

-- 
Thomas



Re: [openstack-dev] [nova] Distributed locking

2014-06-12 Thread Julien Danjou
On Thu, Jun 12 2014, Matthew Booth wrote:

 We have a need for a distributed lock in the VMware driver, which I
 suspect isn't unique. Specifically it is possible for a VMware datastore
 to be accessed via multiple nova nodes if it is shared between
 clusters[1]. Unfortunately the vSphere API doesn't provide us with the
 primitives to implement robust locking using the storage layer itself,
 so we're looking elsewhere.

The tooz library has been created for this purpose:

  https://pypi.python.org/pypi/tooz

  https://git.openstack.org/cgit/stackforge/tooz/

 Although I haven't worked through the detail, I believe lock and unlock
 would be implementable in all 3 of the current service group drivers.
 Fencing would be implemented separately.

The plan is to leverage tooz to replace the Nova service group drivers,
as this is also usable in a lot of other OpenStack services.

-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info




Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-12 Thread Mike Bayer

On 6/12/14, 8:26 AM, Julien Danjou wrote:
 On Thu, Jun 12 2014, Sean Dague wrote:

 That's not catchable in unit or functional tests?
 Not in an accurate manner, no.

 Keeping jobs alive based on the theory that they might one day be useful
 is something we just don't have the liberty to do any more. We've not
 seen an idle node in zuul in 2 days... and we're only at j-1. j-3 will
 be at least +50% of this load.
 Sure, I'm not saying we don't have a problem. I'm just saying it's not a
 good solution to fix that problem IMHO.

Just my 2c without having a full understanding of all of OpenStack's CI
environment, Postgresql is definitely different enough that MySQL
strict mode could still allow issues to slip through quite easily, and
also as far as capacity issues, this might be longer term but I'm hoping
to get database-related tests to be lots faster if we can move to a
model that spends much less time creating databases and schemas.





Re: [openstack-dev] [nova] Distributed locking

2014-06-12 Thread Matthew Booth

On 12/06/14 15:35, Julien Danjou wrote:
 On Thu, Jun 12 2014, Matthew Booth wrote:
 
 We have a need for a distributed lock in the VMware driver, which
 I suspect isn't unique. Specifically it is possible for a VMware
 datastore to be accessed via multiple nova nodes if it is shared
 between clusters[1]. Unfortunately the vSphere API doesn't
 provide us with the primitives to implement robust locking using
 the storage layer itself, so we're looking elsewhere.
 
 The tooz library has been created for this purpose:
 
 https://pypi.python.org/pypi/tooz
 
 https://git.openstack.org/cgit/stackforge/tooz/
 
 Although I haven't worked through the detail, I believe lock and
 unlock would be implementable in all 3 of the current service
 group drivers. Fencing would be implemented separately.
 
 The plan is to leverage tooz to replace the Nova service group
 drivers, as this is also usable in a lot of others OpenStack
 services.

This looks interesting. It doesn't have hooks for fencing, though.

What's the status of tooz? Would you be interested in adding fencing
hooks?

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490



Re: [openstack-dev] [all] Gate still backed up - need assistance with nova-network logging enhancements

2014-06-12 Thread Matt Riedemann



On 6/10/2014 5:36 AM, Michael Still wrote:

https://review.openstack.org/99002 adds more logging to
nova/network/manager.py, but I think you're not going to love the
debug log level. Was this the sort of thing you were looking for
though?

Michael

On Mon, Jun 9, 2014 at 11:45 PM, Sean Dague s...@dague.net wrote:

Based on some back of envelope math the gate is basically processing 2
changes an hour, failing one of them. So if you want to know how long
the gate is, take the length / 2 in hours.

Right now we're doing a lot of revert roulette, trying to revert things
that we think landed about the time things went bad. I call this
roulette because in many cases the actual issue isn't well understood. A
key reason for this is:

*nova network is a blackhole*

There is no work unit logging in nova-network, and no attempted
verification that the commands it ran did a thing. Most of these
failures that we don't have good understanding of are the network not
working under nova-network.

So we could *really* use a volunteer or two to prioritize getting that
into nova-network. Without it we might manage to turn down the failure
rate by reverting things (or we might not) but we won't really know why,
and we'll likely be here again soon.

 -Sean

--
Sean Dague
http://dague.net









I mentioned this in the nova meeting today also, but the associated bug 
for the nova-network ssh timeout issue is bug 1298472 [1].


My latest theory on that one is if there could be a race/network leak in 
the ec2 third party tests in Tempest or something in the ec2 API in 
nova, because I saw this [2] showing up in the n-net logs.  My thinking 
is the tests or the API are not tearing down cleanly and eventually 
network resources are leaked and we start hitting those timeouts.  Just 
a theory at this point, but the ec2 3rd party tests do run concurrently 
with the scenario tests so things could be colliding at that point, but 
I haven't had time to dig into it, plus I have very little experience in 
those tests or the ec2 API in nova.


[1] https://bugs.launchpad.net/tempest/+bug/1298472
[2] http://goo.gl/6f1dfw

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Nova][QA] Disabling the v3 API tests in the gate

2014-06-12 Thread Dan Smith

I think it'd be OK to move them to the experimental queue and a periodic
nightly job until the v2.1 stuff shakes out.  The v3 API is marked
experimental right now so it seems fitting that it'd be running tests in
the experimental queue until at least the spec is approved and
microversioning starts happening in the code base.


I think this is reasonable. Continuing to run the full set of tests on 
every patch for something we never expect to see the light of day (in 
its current form) seems wasteful to me. Plus, we're going to 
(presumably) be ramping up tests on v2.1, which means to me that we'll 
need to clear out some capacity to make room for that.


--Dan



[openstack-dev] [nova] need core reviewers for https://review.openstack.org/#/c/81954/

2014-06-12 Thread Robert Li (baoli)
Hi,

The SR-IOV work depends on this fix. It has had +1’s for quite some time and 
needs core reviewers to review and approve it.

thanks,
Robert


Re: [openstack-dev] [nova] Distributed locking

2014-06-12 Thread Julien Danjou
On Thu, Jun 12 2014, Matthew Booth wrote:

 This looks interesting. It doesn't have hooks for fencing, though.

 What's the status of tooz? Would you be interested in adding fencing
 hooks?

It's maintained and developed; we plan to use it in Ceilometer and
other projects. Joshua also wants to use it for Taskflow.

We are blocked for now by https://review.openstack.org/#/c/93443/ and by
the lack of resources to complete that request, so help is
appreciated. :)

As for fencing hooks, it sounds like a good idea.

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */




[openstack-dev] Message level security plans.

2014-06-12 Thread Kelsey, Timothy Joh
Hello OpenStack folks,

First please allow me to introduce myself, my name is Tim Kelsey and I’m a 
security developer working at HP. I am very interested in projects like Kite 
and the work that’s being undertaken to introduce message level security into 
OpenStack and would love to help out on that front. In an effort to ascertain 
the current state of development it would be great to hear from the people who 
are involved in this and find out what's being worked on or planned in 
blueprints.

Many Thanks,

--
Tim Kelsey
Cloud Security Engineer
HP Helion



Re: [openstack-dev] [Nova][QA] Disabling the v3 API tests in the gate

2014-06-12 Thread Christopher Yeoh
On Fri, Jun 13, 2014 at 12:25 AM, Dan Smith d...@danplanet.com wrote:

 I think it'd be OK to move them to the experimental queue and a periodic
 nightly job until the v2.1 stuff shakes out.  The v3 API is marked
 experimental right now so it seems fitting that it'd be running tests in
 the experimental queue until at least the spec is approved and
 microversioning starts happening in the code base.


 I think this is reasonable. Continuing to run the full set of tests on
 every patch for something we never expect to see the light of day (in its
 current form) seems wasteful to me. Plus, we're going to (presumably) be
 ramping up tests on v2.1, which means to me that we'll need to clear out
 some capacity to make room for that.


That's true, though I was suggesting as v2.1microversions rolls out we drop
the test out of v3 and move it to v2.1microversions testing, so there's no
change in capacity required.

Matt - how much of the time overhead is scenario tests? That's something
that would have a lot less impact if moved to an experimental queue.
Although the v3 api as a whole won't be officially exposed, the api tests
test specific features fairly indepdently which are slated for
v2.1microversions on a case by case basis and I don't want to see those
regress. I guess my concern is how often the experimental queue results get
really looked at and how hard/quick it is to revert when lots of stuff
merges in a short period of time.

Chris


 --Dan





Re: [openstack-dev] [Nova][QA] Disabling the v3 API tests in the gate

2014-06-12 Thread Dan Smith

That's true, though I was suggesting as v2.1microversions rolls out we
drop the test out of v3 and move it to v2.1microversions testing, so
there's no change in capacity required.


Right now we run a full set over /v2 and a full set over /v3. Certainly 
as we introduce /v2.1 we'll need full coverage on that as well, before 
we start introducing any microversion-based changes, right? Dropping /v3 
should make room for /v2.1, and then go from there when adding new 
things to /v2.1 via microversions. When we are able to drop /v2 (or 
rather move /v2.1 to /v2) then we actually gain back the capacity.


--Dan



Re: [openstack-dev] Message level security plans.

2014-06-12 Thread Matt Riedemann



On 6/12/2014 10:08 AM, Kelsey, Timothy Joh wrote:

Hello OpenStack folks,

First please allow me to introduce myself, my name is Tim Kelsey and I’m a 
security developer working at HP. I am very interested in projects like Kite 
and the work that’s being undertaken to introduce message level security into 
OpenStack and would love to help out on that front. In an effort to ascertain 
the current state of development it would be great to hear from the people who 
are involved in this and find out what's being worked on or planned in 
blueprints.

Many Thanks,

--
Tim Kelsey
Cloud Security Engineer
HP Helion




Are you talking about log messages or RPC messages?  For log messages, 
there is a thread that started yesterday on masking auth tokens [1].


If RPC, I'm aware of at least one issue filed against Qpid [2] for 
allowing a way to tell Qpid not to log a message since it might contain 
sensitive information (like auth tokens).


Looks like there is also an older blueprint for trusted messaging here [3].

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-June/037345.html
[2] https://issues.apache.org/jira/browse/QPID-5772
[3] https://blueprints.launchpad.net/oslo.messaging/+spec/trusted-messaging
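On the log-message side, masking boils down to scrubbing known-sensitive keys before the line is emitted. A naive sketch (the key list and regex are illustrative only, similar in spirit to the mask_password helper that lives in oslo's strutils):

```python
import re

# Keys whose values should never appear in logs (illustrative list).
_SENSITIVE_KEYS = ("auth_token", "token", "admin_password")

_PATTERN = re.compile(
    r"(%s)(['\"]?\s*[:=]\s*['\"]?)([^'\",\s]+)" % "|".join(_SENSITIVE_KEYS),
    re.IGNORECASE)


def mask_sensitive(message):
    # Replace only the value part with a fixed marker, keeping the key
    # visible so the log line is still useful for debugging.
    return _PATTERN.sub(r"\1\2***", message)
```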

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-12 Thread Matt Riedemann



On 6/12/2014 9:38 AM, Mike Bayer wrote:


On 6/12/14, 8:26 AM, Julien Danjou wrote:

On Thu, Jun 12 2014, Sean Dague wrote:


 That's not catchable in unit or functional tests?

Not in an accurate manner, no.


Keeping jobs alive based on the theory that they might one day be useful
is something we just don't have the liberty to do any more. We've not
seen an idle node in zuul in 2 days... and we're only at j-1. j-3 will
be at least +50% of this load.

Sure, I'm not saying we don't have a problem. I'm just saying it's not a
good solution to fix that problem IMHO.


Just my 2c without having a full understanding of all of OpenStack's CI
environment, Postgresql is definitely different enough that MySQL
strict mode could still allow issues to slip through quite easily, and
also as far as capacity issues, this might be longer term but I'm hoping
to get database-related tests to be lots faster if we can move to a
model that spends much less time creating databases and schemas.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Is there some organization out there that uses PostgreSQL in production 
that could stand up 3rd party CI with it?


I know that at least for the DB2 support we're adding across the 
projects we're doing 3rd party CI for that. Granted it's a proprietary 
DB unlike PG but if we're talking about spending resources on testing 
for something that's not widely used, but there is a niche set of users 
that rely on it, we could/should move that to 3rd party CI.


I'd much rather see us spend our test resources on getting multi-node 
testing running in the gate so we can test migrations in Nova.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] IRC weekly

2014-06-12 Thread Mike Scherbakov
Reminder: meetings are run in #openstack-meeting-alt on freenode.


On Thu, Jun 12, 2014 at 1:14 AM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Folks,
 Though it is a holiday in Russia, where we have a large Fueler presence, I'm
 still going to run the meeting at 16.00 UTC on 12th (9am PST, 8pm MSK
 Thursday).
 Other locations, I need your presence.

 Agenda: https://etherpad.openstack.org/p/fuel-weekly-meeting-agenda
 Feel free to extend it in advance.

 Thanks,
 --
 Mike Scherbakov
 #mihgen




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] need core reviewers for https://review.openstack.org/#/c/81954/

2014-06-12 Thread Ben Nemec
Please don't send review requests to the list.  See the preferred
methods here:
http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html

Thanks.

-Ben

On 06/12/2014 10:07 AM, Robert Li (baoli) wrote:
 Hi,
 
 The SR-IOV work depends on this fix. It has got +1’s for quite some time, and 
 needs core reviewers to review and approve.
 
 thanks,
 Robert
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Mid-cycle meetup for Cinder devs

2014-06-12 Thread Anita Kuno
On 06/12/2014 02:29 AM, Avishay Traeger wrote:
 I think you can create an easy survey with doodle.com.  You can fill in the
 dates, and ask people to specify next to their names if their attendance
 will be physical or virtual.
 
 
 On Thu, Jun 12, 2014 at 12:16 AM, D'Angelo, Scott scott.dang...@hp.com
 wrote:
 
  During the June 11 #openstack-cinder meeting we discussed a mid-cycle
 meetup. The agenda is To be Determined.

 I have inquired and HP in Fort Collins, CO has room and network
 connectivity available. There were some dates that worked well for
 reserving a nice room:

 July 14,15,17,18, 21-25, 27-Aug 1

 But a room could be found regardless.

 Virtual connectivity would also be available.



 Some of the open questions are:

 Are developers interested in a mid-cycle meetup?

 What dates are Not Good (Blackout dates)?

 What dates are Good?

 Who might be able to be physically present in Ft Collins, CO?

 Are there alternative locations to be considered?



 Someone had mentioned a Google Survey. Would someone like to create that?
 Which questions should be asked?



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
Sorry I wasn't at the June 11th meeting. I had talked with John in
channel about dates and had hoped that something late August would be
considered (we had discussed after LinuxCon). I do want to attend. The
only time mentioned above where I don't have a direct conflict is July
21-25, though I would be coming straight from another meetup.

Thanks, I do hope the meetup happens, I have dancing to do!
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer] FloatingIp pollster spamming n-api logs (bug 1328694)

2014-06-12 Thread John Garbutt
On 11 June 2014 20:07, Joe Gordon joe.gord...@gmail.com wrote:
 On Wed, Jun 11, 2014 at 11:38 AM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:
 On 6/11/2014 10:01 AM, Eoghan Glynn wrote:
 Thanks for bringing this to the list Matt, comments inline ...

 tl;dr: some pervasive changes were made to nova to enable polling in
 ceilometer which broke some things and in my opinion shouldn't have been
 merged as a bug fix but rather should have been a blueprint.

 ===

 The detailed version:

 I opened bug 1328694 [1] yesterday and found that came back to some
 changes made in ceilometer for bug 1262124 [2].

 Upon further inspection, the original ceilometer bug 1262124 made some
 changes to the nova os-floating-ips API extension and the database API
 [3], and changes to python-novaclient [4] to enable ceilometer to use
 the new API changes (basically pass --all-tenants when listing floating
 IPs).

 The original nova change introduced bug 1328694 which spams the nova-api
 logs due to the ceilometer change [5] which does the polling, and right
 now in the gate ceilometer is polling every 15 seconds.


 IIUC that polling cadence in the gate is in the process of being reverted
 to the out-of-the-box default of 600s.

 I pushed a revert in ceilometer to fix the spam bug and a separate patch
 was pushed to nova to fix the problem in the network API.


 Thank you for that. The revert is just now approved on the ceilometer
 side,
 and is wending its merry way through the gate.

 The bigger problem I see here is that these changes were all made under
 the guise of a bug when I think this is actually a blueprint.  We have
 changes to the nova API, changes to the nova database API, CLI changes,
 potential performance impacts (ceilometer can be hitting the nova
 database a lot when polling here), security impacts (ceilometer needs
 admin access to the nova API to list floating IPs for all tenants),
 documentation impacts (the API and CLI changes are not documented), etc.

 So right now we're left with, in my mind, two questions:

 1. Do we just fix the spam bug 1328694 and move on, or
 2. Do we revert the nova API/CLI changes and require this goes through
 the nova-spec blueprint review process, which should have happened in
 the first place.


 So just to repeat the points I made on the unlogged #os-nova IRC channel
 earlier, for posterity here ...

 Nova already exposed an all_tenants flag in multiple APIs (servers,
 volumes,
 security-groups etc.) and these would have:

(a) generally pre-existed ceilometer's usage of the corresponding APIs

 and:

(b) been tracked and proposed at the time via straight-forward LP
 bugs,
as  opposed to being considered blueprint material

 So the manner of the addition of the all_tenants flag to the floating_ips
 API looks like it just followed existing custom & practice.

 Though that said, the blueprint process and in particular the nova-specs
 aspect, has been tightened up since then.

 My preference would be to fix the issue in the underlying API, but to use
 this as a teachable moment ... i.e. to require more oversight (in the
 form of a reviewed & approved BP spec) when such API changes are proposed
 in the future.

 Cheers,
 Eoghan

 Are there other concerns here?  If there are no major objections to the
 code that's already merged, then #2 might be excessive but we'd still
 need docs changes.

 I've already put this on the nova meeting agenda for tomorrow.

 [1] https://bugs.launchpad.net/ceilometer/+bug/1328694
 [2] https://bugs.launchpad.net/nova/+bug/1262124
 [3] https://review.openstack.org/#/c/81429/
 [4] https://review.openstack.org/#/c/83660/
 [5] https://review.openstack.org/#/c/83676/

 --

 Thanks,

 Matt Riedemann


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 While there is precedent for --all-tenants with some of the other APIs,
 I'm concerned about where this stops.  When ceilometer wants polling on some
 other resources that the nova API exposes, will it need the same thing?
 Doing all of this polling for resources in all tenants in nova puts an undue
 burden on the nova API and the database.

 Can we do something with notifications here instead?  That's where the
 nova-spec process would have probably caught this.

 ++ to notifications and not polling.

Yeah, I think we need to revert this, and go through the specs
process. It's been released in Juno-1 now, so this revert feels bad,
but perhaps it's the best of a bad situation?
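For illustration, the notification-driven alternative would have ceilometer consume events rather than poll the API. A minimal endpoint sketch in the style of an oslo.messaging notification listener (the event types and payload fields here are hypothetical, not nova's actual notification schema):

```python
class FloatingIpEndpoint(object):
    """Collects floating IP samples from notifications instead of polling."""

    def __init__(self):
        self.samples = []

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # Only handle the (hypothetical) floating IP lifecycle events;
        # everything else on the notification topic is ignored.
        if event_type not in ("floatingip.create.end",
                              "floatingip.delete.end"):
            return
        self.samples.append({
            "event": event_type,
            "address": payload.get("floating_ip_address"),
            "tenant": payload.get("tenant_id"),
        })
```

An endpoint like this would be registered with a notification listener, so nova emits one message per state change instead of ceilometer listing every tenant's floating IPs on every polling interval.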

Word of caution, we need to get notifications versioned correctly if
we want this as a more formal external API. I think Heat have
similar issues in this area, efficiently knowing about something
happening in 

Re: [openstack-dev] Message level security plans.

2014-06-12 Thread Kelsey, Timothy Joh
Thanks for the info Matt, I guess I should have been clearer about what I
was asking. I was indeed referring to the trusted RPC messaging proposal
you linked. I'm keen to find out what's happening with that and where I can
help.

-- 
Tim Kelsey
Cloud Security Engineer
HP Helion




On 12/06/2014 16:22, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:



On 6/12/2014 10:08 AM, Kelsey, Timothy Joh wrote:
 Hello OpenStack folks,

 First please allow me to introduce myself, my name is Tim Kelsey and
I'm a security developer working at HP. I am very interested in projects
like Kite and the work that's being undertaken to introduce message
level security into OpenStack and would love to help out on that front.
In an effort to ascertain the current state of development it would be
great to hear from the people who are involved in this and find out
what's being worked on or planned in blueprints.

 Many Thanks,

 --
 Tim Kelsey
 Cloud Security Engineer
 HP Helion

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Are you talking about log messages or RPC messages?  For log messages,
there is a thread that started yesterday on masking auth tokens [1].

If RPC, I'm aware of at least one issue filed against Qpid [2] for
allowing a way to tell Qpid not to log a message since it might contain
sensitive information (like auth tokens).

Looks like there is also an older blueprint for trusted messaging here
[3].

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2014-June/037345.html
[2] https://issues.apache.org/jira/browse/QPID-5772
[3] 
https://blueprints.launchpad.net/oslo.messaging/+spec/trusted-messaging

-- 

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-12 Thread Sean Dague
On 06/12/2014 10:38 AM, Mike Bayer wrote:
 
 On 6/12/14, 8:26 AM, Julien Danjou wrote:
 On Thu, Jun 12 2014, Sean Dague wrote:

  That's not catchable in unit or functional tests?
 Not in an accurate manner, no.

 Keeping jobs alive based on the theory that they might one day be useful
 is something we just don't have the liberty to do any more. We've not
 seen an idle node in zuul in 2 days... and we're only at j-1. j-3 will
 be at least +50% of this load.
 Sure, I'm not saying we don't have a problem. I'm just saying it's not a
 good solution to fix that problem IMHO.
 
 Just my 2c without having a full understanding of all of OpenStack's CI
 environment, Postgresql is definitely different enough that MySQL
 strict mode could still allow issues to slip through quite easily, and
 also as far as capacity issues, this might be longer term but I'm hoping
 to get database-related tests to be lots faster if we can move to a
 model that spends much less time creating databases and schemas.

This is what I mean by functional testing. If we were directly hitting a
real database on a set of in tree project tests, I think you could
discover issues like this. Neutron was headed down that path.

But if we're talking about a devstack / tempest run, it's not really
applicable.

If someone can point me to a case where we've actually found this kind
of bug with tempest / devstack, that would be great. I've just *never*
seen it. I was the one that did most of the fixing for pg support in
Nova, and have helped other projects as well, so I'm relatively familiar
with the kinds of fails we can discover. The ones that Julien pointed
really aren't likely to be exposed in our current system.

Which is why I think we're mostly just burning cycles on the existing
approach for no gain.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Olso] Periodic task coalescing

2014-06-12 Thread Matt Riedemann



On 6/12/2014 8:55 AM, Tom Cammann wrote:

Hello,

I'm addressing https://bugs.launchpad.net/oslo/+bug/1326020 which is
dealing with periodic tasks.

There is currently a code block that checks if a task is 0.2 seconds
away from being run and, if so, runs it now instead, essentially
coalescing nearby tasks together.

From oslo-incubator/openstack/common/periodic_task.py:162

# If a periodic task is _nearly_ due, then we'll run it early
idle_for = min(idle_for, spacing)
if last_run is not None:
    delta = last_run + spacing - time.time()
    if delta > 0.2:
        idle_for = min(idle_for, delta)
        continue

However the resolution in the config for various periodic tasks is by
the second, and I have been unable to find a task that has a
millisecond resolution. I intend to get rid of this coalescing in this
bug fix.

It fits in with this bug fix as I intend to make the tasks run on their
specific spacing boundaries, i.e. if spacing is 10 seconds, it will run
at 17:30:10, 17:30:20, etc.
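The boundary-aligned behaviour described above can be sketched as follows (a simplified illustration of the proposal, not the actual patch):

```python
import time


def idle_until_next_boundary(spacing, now=None):
    """Seconds to sleep so the task fires on a multiple of `spacing`.

    With spacing=10 the task runs at :00, :10, :20, ... regardless of
    how long the previous run took, so there is nothing to coalesce.
    """
    now = time.time() if now is None else now
    return spacing - (now % spacing)
```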

Is there any reason to keep the coalescing of tasks?

Thanks,

Tom


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Seems reasonable to remove this.  For historical context, it looks like 
this code was moved over to oslo-incubator from nova in early Havana 
[1]. Going back to grizzly-eol on nova, the periodic task code was in 
nova.manager. From what I can tell, the 0.2 check was added here [2]. 
There isn't really an explicit statement about why that was added in the 
commit message or the related bug though. Maybe it had something to do 
with the tests or the dynamic looping call that was added?  You could 
see if Michael (mikal) remembers.


[1] https://review.openstack.org/#/c/25885/
[2] https://review.openstack.org/#/c/18618/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tempest ipv6 tests

2014-06-12 Thread Anita Kuno
On 06/11/2014 12:11 PM, Ajay Kalambur (akalambu) wrote:
 Hi
 Are there any tests available for IPv6 in Tempest? Also, what's the road map 
 for the addition of these tests?
 
 Ajay
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
Hi Ajay:

Right now setting up ipv6 testing is in the design stage. The people who
would know the most (and probably welcome your contribution) are Sean
Collins (irc: sc68cal) Sean Dague (irc: sdague) and Matt Treinish (irc:
mtreinish). They can be found in #openstack-dev, #openstack-qa,
#openstack-neutron and #openstack-infra (as well as other channels) on
freenode network.

You might also like to read the logs from qa and neutron meetings as
well as participate in them:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
https://wiki.openstack.org/wiki/Network/Meetings
http://eavesdrop.openstack.org/meetings/

Thanks Ajay,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Gate still backed up - need assistance with nova-network logging enhancements

2014-06-12 Thread Davanum Srinivas
Hey Matt,

There is a connection pool in
https://github.com/boto/boto/blob/develop/boto/connection.py which
could be causing issues...

-- dims

On Thu, Jun 12, 2014 at 10:50 AM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:


 On 6/10/2014 5:36 AM, Michael Still wrote:

 https://review.openstack.org/99002 adds more logging to
 nova/network/manager.py, but I think you're not going to love the
 debug log level. Was this the sort of thing you were looking for
 though?

 Michael

 On Mon, Jun 9, 2014 at 11:45 PM, Sean Dague s...@dague.net wrote:

 Based on some back of envelope math the gate is basically processing 2
 changes an hour, failing one of them. So if you want to know how long
 the gate is, take the length / 2 in hours.

 Right now we're doing a lot of revert roulette, trying to revert things
 that we think landed about the time things went bad. I call this
 roulette because in many cases the actual issue isn't well understood. A
 key reason for this is:

 *nova network is a blackhole*

 There is no work unit logging in nova-network, and no attempted
 verification that the commands it ran did a thing. Most of these
 failures that we don't have good understanding of are the network not
 working under nova-network.

 So we could *really* use a volunteer or two to prioritize getting that
 into nova-network. Without it we might manage to turn down the failure
 rate by reverting things (or we might not) but we won't really know why,
 and we'll likely be here again soon.

  -Sean

 --
 Sean Dague
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 I mentioned this in the nova meeting today also but the associated bug for
 the nova-network ssh timeout issue is bug 1298472 [1].

 My latest theory on that one is if there could be a race/network leak in the
 ec2 third party tests in Tempest or something in the ec2 API in nova,
 because I saw this [2] showing up in the n-net logs.  My thinking is the
 tests or the API are not tearing down cleanly and eventually network
 resources are leaked and we start hitting those timeouts.  Just a theory at
 this point, but the ec2 3rd party tests do run concurrently with the
 scenario tests so things could be colliding at that point, but I haven't had
 time to dig into it, plus I have very little experience in those tests or
 the ec2 API in nova.

 [1] https://bugs.launchpad.net/tempest/+bug/1298472
 [2] http://goo.gl/6f1dfw

 --

 Thanks,

 Matt Riedemann


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-12 Thread Day, Phil


 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: 09 June 2014 19:03
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory
 allocation ratio out of scheduler
 
 On 06/09/2014 12:32 PM, Chris Friesen wrote:
  On 06/09/2014 07:59 AM, Jay Pipes wrote:
  On 06/06/2014 08:07 AM, Murray, Paul (HP Cloud) wrote:
  Forcing an instance to a specific host is very useful for the
  operator - it fulfills a valid use case for monitoring and testing
  purposes.
 
  Pray tell, what is that valid use case?
 
  I find it useful for setting up specific testcases when trying to
  validate thingsput *this* instance on *this* host, put *those*
  instances on *those* hosts, now pull the power plug on *this* host...etc.
 
 So, violating the main design tenet of cloud computing: thou shalt not care
 what physical machine your virtual machine lives on. :)
 
  I wouldn't expect the typical openstack end-user to need it though.
 
 Me either :)

But the full set of system capabilities isn't only about things that an 
end-user needs - there are also admin features we need to include.

Another use case for this is to place a probe instance on specific hosts to 
help monitor specific aspects of the system performance from a VM perspective.


 
 I will point out, though, that it is indeed possible to achieve the same use
 case using host aggregates that would not break the main design tenet of
 cloud computing... just make two host aggregates, one for each compute
 node involved in your testing, and then simply supply scheduler hints that
 would only match one aggregate or the other.
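A rough sketch of the aggregate-matching idea, modelled loosely on nova's AggregateInstanceExtraSpecsFilter (simplified — the real filter supports scoped keys and comparison operators):

```python
def host_passes(host_aggregate_metadata, flavor_extra_specs):
    """Admit a host only if its aggregate metadata satisfies the flavor.

    host_aggregate_metadata: dict of key -> set of values collected from
    the aggregates the host belongs to.
    flavor_extra_specs: dict of required key -> value from the flavor.
    """
    for key, required in flavor_extra_specs.items():
        values = host_aggregate_metadata.get(key)
        if not values or required not in values:
            return False
    return True
```

With two single-host aggregates tagged, say, testgroup=a and testgroup=b, and a flavor requiring each, this reproduces "put this instance on this host" without a force-host API.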
 

Even I wouldn't argue that aggregates are a great solution here ;-)   Not only 
does having single-node aggregates for every host you want to force seem a 
tad overkill, the logic for this admin feature includes bypassing the normal 
scheduler filters, 


 Best,
 -jay
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][QA] Disabling the v3 API tests in the gate

2014-06-12 Thread Matthew Treinish
On Fri, Jun 13, 2014 at 12:41:19AM +0930, Christopher Yeoh wrote:
 On Fri, Jun 13, 2014 at 12:25 AM, Dan Smith d...@danplanet.com wrote:
 
  I think it'd be OK to move them to the experimental queue and a periodic
  nightly job until the v2.1 stuff shakes out.  The v3 API is marked
  experimental right now so it seems fitting that it'd be running tests in
  the experimental queue until at least the spec is approved and
  microversioning starts happening in the code base.
 
 
  I think this is reasonable. Continuing to run the full set of tests on
  every patch for something we never expect to see the light of day (in its
  current form) seems wasteful to me. Plus, we're going to (presumably) be
  ramping up tests on v2.1, which means to me that we'll need to clear out
  some capacity to make room for that.
 
 
 That's true, though I was suggesting that as v2.1 microversions roll out we drop
 the test out of v3 and move it to v2.1 microversions testing, so there's no
 change in capacity required.

That's why I wasn't proposing that we rip the tests out of the tree. I'm just
trying to weigh the benefit of leaving them enabled on every run against
the increased load they cause in an arguably overworked gate.

 
 Matt - how much of the time overhead is scenario tests? That's something
 that would have a lot less impact if moved to an experimental queue.
 Although the v3 api as a whole won't be officially exposed, the api tests
 test specific features fairly independently which are slated for
 v2.1microversions on a case by case basis and I don't want to see those
 regress. I guess my concern is how often the experimental queue results get
 really looked at and how hard/quick it is to revert when lots of stuff
 merges in a short period of time)

The scenario tests tend to be the slower tests in tempest. I have to disagree
that removing them would have lower impact. The scenario tests provide the best
functional verification, which is part of the reason we always have failures in
the gate on them. While it would make the gate faster, the decrease in what we're
testing isn't worth it. Also, for reference I pulled the test run times that
were greater than 10sec out of a recent gate run:
http://paste.openstack.org/show/83827/

The experimental jobs aren't automatically run, they have to be manually
triggered by leaving a 'check experimental' comment. So for changes that we want
to test the v3 api on, a comment would have to be left. To prevent regression is why
we'd also have the nightly job, which I think is a better compromise for the v3
tests while we wait to migrate them to the v2.1 microversion tests.

Another, option is that we make the v3 job run only on the check queue and not
on the gate. But the benefits of that are slightly more limited, because we'd
still be holding up the check queue.

-Matt Treinish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nova list can't return anything

2014-06-12 Thread Ben Nemec
Please do not cross-post questions between openstack and openstack-dev.
 This doesn't sound development-related, so the appropriate place for
this discussion is the openstack list.

Thanks.

-Ben

On 06/12/2014 03:52 AM, 严超 wrote:
 Hi, All:
 I ran *nova --debug list* and got nothing returned; how can
 that be?
 RESP: [200] CaseInsensitiveDict({'date': 'Thu, 12 Jun 2014 08:52:19 GMT',
 'content-length': '0', 'content-type': 'text/html; charset=UTF-8'})
 RESP BODY:
 
 *body***
 None
 *body***
 DEBUG (shell:792) 'NoneType' object has no attribute '__getitem__'
 Traceback (most recent call last):
   File /opt/stack/python-novaclient/novaclient/shell.py, line 789, in main
 OpenStackComputeShell().main(argv)
   File /opt/stack/python-novaclient/novaclient/shell.py, line 724, in main
 args.func(self.cs, args)
   File /opt/stack/python-novaclient/novaclient/v1_1/shell.py, line 1129,
 in do_list
 search_opts=search_opts)
   File /opt/stack/python-novaclient/novaclient/v1_1/servers.py, line 591,
 in list
 return self._list(/servers%s%s % (detail, query_string), servers)
   File /opt/stack/python-novaclient/novaclient/base.py, line 72, in _list
 data = body[response_key]
 TypeError: 'NoneType' object has no attribute '__getitem__'
 ERROR (TypeError): 'NoneType' object has no attribute '__getitem__'
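The traceback shows base.py indexing a body that deserialized to None (the 200 response had content-length: 0). A defensive guard would look roughly like this (an illustrative sketch, not the actual novaclient fix):

```python
def extract_list(body, response_key):
    """Pull a list of resources out of a deserialized response body.

    An empty HTTP response deserializes to None; treat that as "no
    results" instead of raising TypeError on body[response_key].
    """
    if not body:
        return []
    return body.get(response_key, [])
```

Of course this only avoids the TypeError; a 200 text/html response with an empty body from the servers API likely points at a misconfigured endpoint rather than a client-side bug.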
 
 
 Best Regards!
 
 Chao Yan
 My twitter: Andy Yan @yanchao727 https://twitter.com/yanchao727
 My Weibo: http://weibo.com/herewearenow
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][QA] Disabling the v3 API tests in the gate

2014-06-12 Thread Matt Riedemann



On 6/12/2014 10:51 AM, Matthew Treinish wrote:

On Fri, Jun 13, 2014 at 12:41:19AM +0930, Christopher Yeoh wrote:

On Fri, Jun 13, 2014 at 12:25 AM, Dan Smith d...@danplanet.com wrote:


I think it'd be OK to move them to the experimental queue and a periodic

nightly job until the v2.1 stuff shakes out.  The v3 API is marked
experimental right now so it seems fitting that it'd be running tests in
the experimental queue until at least the spec is approved and
microversioning starts happening in the code base.



I think this is reasonable. Continuing to run the full set of tests on
every patch for something we never expect to see the light of day (in its
current form) seems wasteful to me. Plus, we're going to (presumably) be
ramping up tests on v2.1, which means to me that we'll need to clear out
some capacity to make room for that.



That's true, though I was suggesting that as v2.1 microversions roll out we drop
the test out of v3 and move it to v2.1 microversions testing, so there's no
change in capacity required.


That's why I wasn't proposing that we rip the tests out of the tree. I'm just
trying to weigh the benefit of leaving them enabled on every run against
the increased load they cause in an arguably overworked gate.



Matt - how much of the time overhead is scenario tests? That's something
that would have a lot less impact if moved to an experimental queue.
Although the v3 api as a whole won't be officially exposed, the api tests
test specific features fairly independently which are slated for
v2.1microversions on a case by case basis and I don't want to see those
regress. I guess my concern is how often the experimental queue results get
really looked at and how hard/quick it is to revert when lots of stuff
merges in a short period of time)


The scenario tests tend to be the slower tests in tempest. I have to disagree
that removing them would have lower impact. The scenario tests provide the best
functional verification, which is part of the reason we always have failures in
the gate on them. While it would make the gate faster, the decrease in what we're
testing isn't worth it. Also, for reference I pulled the test run times that
were greater than 10sec out of a recent gate run:
http://paste.openstack.org/show/83827/

The experimental jobs aren't automatically run, they have to be manually
triggered by leaving a 'check experimental' comment. So for changes that we want
to test the v3 api on, a comment would have to be left. To prevent regression is why
we'd also have the nightly job, which I think is a better compromise for the v3
tests while we wait to migrate them to the v2.1 microversion tests.

Another, option is that we make the v3 job run only on the check queue and not
on the gate. But the benefits of that are slightly more limited, because we'd
still be holding up the check queue.

-Matt Treinish



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Yeah, the scenario tests need to stay; that's how we've exposed the two 
big ssh bugs in the last couple of weeks, which are obvious issues at scale.


I still think experimental/periodic is the way to go, not a hybrid of 
check-on/gate-off.  If we want to explicitly test v3 API changes we can 
do that with 'recheck experimental'.  Granted, someone has to remember to 
run those, much like checking/rechecking 3rd party CI results.


One issue I've had with the nightly periodic job is finding out where 
the results are in an easy to consume format.  Is there something out 
there for that?  I'm thinking specifically of things we've turned off in 
the gate before like multi-backend volume tests and 
allow_tenant_isolation=False.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] versioning and releases

2014-06-12 Thread Thierry Carrez
Mark McLoughlin wrote:
 On Thu, 2014-06-12 at 12:09 +0200, Thierry Carrez wrote:
 Doug Hellmann wrote:
 On Tue, Jun 10, 2014 at 5:19 PM, Mark McLoughlin mar...@redhat.com wrote:
 On Tue, 2014-06-10 at 12:24 -0400, Doug Hellmann wrote:
 [...]
 Background:

 We have two types of oslo libraries. Libraries like oslo.config and
 oslo.messaging were created by extracting incubated code, updating the
 public API, and packaging it. Libraries like cliff and taskflow were
 created as standalone packages from the beginning, and later adopted
 by the oslo team to manage their development and maintenance.

 Incubated libraries have been released at the end of a release cycle,
 as with the rest of the integrated packages. Adopted libraries have
 historically been released as needed during their development. We
 would like to synchronize these so that all oslo libraries are
 officially released with the rest of the software created by OpenStack
 developers.

 Could you outline the benefits of syncing with the integrated release?
 
 Sure!
 
 http://lists.openstack.org/pipermail/openstack-dev/2012-November/003345.html
 
 :)

Heh :) I know why *you* prefer it synced. Was just curious to see if
Doug thought the same way :P

 Personally I see a few drawbacks to this approach:

 We dump the new version on consumers usually around RC time, which is
 generally a bad time to push a new version of a  dependency and detect
 potential breakage. Consumers just seem to get the new version at the
 worst possible time.

 It also prevents us from spreading the work all over the cycle. For example
 it may have been more successful to have the oslo.messaging new release
 by milestone-1 to make sure it's adopted by projects in milestone-2 or
 milestone-3... rather than have it ready by milestone-3 and expect all
 projects to use it by consuming alphas during the cycle.

 Now if *all* projects were continuously consuming alpha versions, most
 of those drawbacks would go away.
 
 Yes, that's the plan. Those issues are acknowledged and we're reasonably
 confident the alpha versions plan will address them.

I agree that if we release alphas often and most projects consume them
instead of jumping from stable release to stable release, we have all the
benefits without the drawbacks.
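The mechanics of continuously consuming alphas rest on pre-release version ordering: each alpha sorts after the previous one and before the final release, so a range pin tracks alphas during the cycle and the final release supersedes them. A small sketch (the version numbers here are made up for illustration, not real oslo releases):

```python
# Use the modern `packaging` module when present; fall back to setuptools'
# pkg_resources for older environments.
try:
    from packaging.version import Version as parse_version
except ImportError:
    from pkg_resources import parse_version

# Hypothetical release sequence: a requirement range such as
# "oslo.messaging>=1.4.0.0a1,<1.5" would pick up each alpha as it is
# published and settle on the final 1.4.0 once it is cut.
versions = ['1.4.0.0a1', '1.4.0.0a2', '1.4.0.0a3', '1.4.0']
parsed = [parse_version(v) for v in versions]

# The ordering is strictly increasing: alphas precede the final release.
assert parsed == sorted(parsed)
assert parsed[-1] == max(parsed)
```

This is only a demonstration of the version-ordering rule (PEP 440 semantics); the actual requirement pins projects would use depend on how the oslo release plan shakes out.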

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] dev source code with the help of packstack and another question, thank you very much.

2014-06-12 Thread Ben Nemec
On 06/12/2014 02:50 AM, bt...@163.com wrote:
 Hi,everybody.
  I use packstack to install openstack, but I have a few questions: 
 (Centos 6.5 os)
  1)  The directory /var/lib/glance is not big enough to store the images. 
 I modify the config files in the file /etc/glance/glance-api.conf  and  
 /etc/glance/glance-cache.conf ,modify the filesystem_store_datadir . But when 
 I run packstack again to reinstall openstack: 
 packstack --answer-file=packstack-answers-20140606-140240.txt 
 after this operation, the value of parameter filesystem_store_datadir is 
 changed to the default value(filesystem_store_datadir=/var/lib/glance/images/ 
 ) . Is there any method to change the value for ever? 
 and the similar thing happens to nova instance store directory. Thank you 
 very much.

This isn't a development question.  Please ask on either the regular
openstack (no -dev) list or the RDO mailing list.  Thanks.

 
 2) How can I modify the source code and reinstall openstack with the help of 
 packstack ? 
 Thank you very much.

Packstack isn't really intended as a development tool.  If you're
looking to make code changes you probably want to use devstack to
install: http://devstack.org/

 Best wishes to you.
 
 
 
 
 bt...@163.com
 
 
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Import errors in tests are not reported in python 2.7

2014-06-12 Thread Thomas Herve
 On 06/12/2014 10:32 AM, Thomas Herve wrote:
  Hi all,
  
  I don't know if it's a know issue or not, but I noticed on one of my patch
  (https://review.openstack.org/#/c/99648/) that the 2.7 gate was passing
  whereas the 2.6 is failing because of import errors. It seems to be a
  problem related to the difference in the discover module, so presumably an
  issue in testtools. The problem appears locally when using tox, so it's
  fairly annoying not to be able to trust the test result.
  
  Thanks,
  
 
 I'm actually really confused by the testr commands that are generated
 here -
 http://logs.openstack.org/48/99648/2/check/gate-heat-python27/aa8fead/console.html
 
 vs. what you see in a Nova unit test run.
 
 Is there some particular wrappers you have kicking off in the unit tests?

Hmm, not that I know of. Our tox.ini runs python setup.py testr --slowest
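One way to surface import errors locally, regardless of how the test runner reports them, is to walk the test package and import every module directly, so any broken import raises immediately. A sketch (using the stdlib json package as a stand-in; in a real tree you would pass the project's test package, e.g. heat.tests):

```python
import importlib
import pkgutil

def find_broken_imports(package):
    """Try to import every submodule of `package`; return (name, error) pairs."""
    broken = []
    for _, name, _ in pkgutil.walk_packages(package.__path__,
                                            package.__name__ + '.'):
        try:
            importlib.import_module(name)
        except Exception as exc:  # an import failure here is what discovery hid
            broken.append((name, exc))
    return broken

# Demonstrate on a stdlib package, which should import cleanly.
import json
assert find_broken_imports(json) == []
```

Running something like this before pushing would catch the discrepancy between the 2.6 and 2.7 discover behaviour without depending on the gate.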

--
Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][QA] Disabling the v3 API tests in the gate

2014-06-12 Thread Sean Dague
On 06/12/2014 12:02 PM, Matt Riedemann wrote:
 
 
 On 6/12/2014 10:51 AM, Matthew Treinish wrote:
 On Fri, Jun 13, 2014 at 12:41:19AM +0930, Christopher Yeoh wrote:
 On Fri, Jun 13, 2014 at 12:25 AM, Dan Smith d...@danplanet.com wrote:

 I think it'd be OK to move them to the experimental queue and a
 periodic
 nightly job until the v2.1 stuff shakes out.  The v3 API is marked
 experimental right now so it seems fitting that it'd be running
 tests in
 the experimental queue until at least the spec is approved and
 microversioning starts happening in the code base.


 I think this is reasonable. Continuing to run the full set of tests on
 every patch for something we never expect to see the light of day
 (in its
 current form) seems wasteful to me. Plus, we're going to
 (presumably) be
 ramping up tests on v2.1, which means to me that we'll need to clear
 out
 some capacity to make room for that.


 That's true, though I was suggesting as v2.1 microversions roll out 
 we drop
 the test out of v3 and move it to v2.1microversions testing, so
 there's no
 change in capacity required.

 That's why I wasn't proposing that we rip the tests out of the tree.
 I'm just
 trying to weigh the benefit of leaving them enabled on every run against
 the increased load they cause in an arguably overworked gate.


 Matt - how much of the time overhead is scenario tests? That's something
 that would have a lot less impact if moved to an experimental queue.
 Although the v3 api as a whole won't be officially exposed, the api
 tests
 test specific features fairly independently which are slated for
 v2.1microversions on a case by case basis and I don't want to see those
 regress. I guess my concern is how often the experimental queue
 results get
 really looked at and how hard/quick it is to revert when lots of stuff
 merges in a short period of time)

 The scenario tests tend to be the slower tests in tempest. I have to
 disagree
 that removing them would have lower impact. The scenario tests provide
 the best
 functional verification, which is part of the reason we always have
 failures in
 the gate on them. While it would make the gate faster the decrease in
 what we're
 testing isn't worth it. Also, for reference I pulled the test run
 times that
 were greater than 10sec out of a recent gate run:
 http://paste.openstack.org/show/83827/

 The experimental jobs aren't automatically run, they have to be manually
 triggered by leaving a 'check experimental' comment. So for changes
 that we want
 to test the v3 api on, a comment would have to be left. To prevent
 regression is why
 we'd also have the nightly job, which I think is a better compromise
 for the v3
 tests while we wait to migrate them to the v2.1 microversion tests.

 Another option is that we make the v3 job run only on the check queue
 and not
 on the gate. But the benefits of that are slightly more limited,
 because we'd
 still be holding up the check queue.

 -Matt Treinish




 
 Yeah the scenario tests need to stay, that's how we've exposed the two
 big ssh bugs in the last couple of weeks which are obvious issues at scale.
 
 I still think experimental/periodic is the way to go, not a hybrid of
 check-on/gate-off.  If we want to explicitly test v3 API changes we can
 do that with 'recheck experimental'.  Granted someone has to remember to
 run those, much like checking/rechecking 3rd party CI results.
 
 One issue I've had with the nightly periodic job is finding out where
 the results are in an easy to consume format.  Is there something out
 there for that?  I'm thinking specifically of things we've turned off in
 the gate before like multi-backend volume tests and
 allow_tenant_isolation=False.

It's getting emailed to the otherwise defunct openstack-qa list.
Subscribe there for nightlies.

Also agreed, the scenario tests find and prevent *tons* of real issues.
Those have to stay. There is a reason we use them in the smoke runs for
grenade, they are a very solid sniff test of real working.

I also think by policy we should probably pull v3 out of the main job,
as it's not a stable API. We've had issues in Tempest with people
landing tests, then trying to go and change the API. The biggest issue
in taking branchless tempest back to stable/havana was the Nova v3 API, as
it's actually quite different in havana than in icehouse.

We have a chicken / egg challenge in testing experimental APIs which
will need to get resolved, but for now I think turning off v3 is the
right approach.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][QA] Disabling the v3 API tests in the gate

2014-06-12 Thread Joe Gordon
On Jun 12, 2014 9:03 AM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:



 On 6/12/2014 10:51 AM, Matthew Treinish wrote:

 On Fri, Jun 13, 2014 at 12:41:19AM +0930, Christopher Yeoh wrote:

 On Fri, Jun 13, 2014 at 12:25 AM, Dan Smith d...@danplanet.com wrote:

 I think it'd be OK to move them to the experimental queue and a
periodic

 nightly job until the v2.1 stuff shakes out.  The v3 API is marked
 experimental right now so it seems fitting that it'd be running tests
in
 the experimental queue until at least the spec is approved and
 microversioning starts happening in the code base.


 I think this is reasonable. Continuing to run the full set of tests on
 every patch for something we never expect to see the light of day (in
its
 current form) seems wasteful to me. Plus, we're going to (presumably)
be
 ramping up tests on v2.1, which means to me that we'll need to clear
out
 some capacity to make room for that.


 That's true, though I was suggesting as v2.1 microversions roll out we
drop
 the test out of v3 and move it to v2.1microversions testing, so there's
no
 change in capacity required.


 That's why I wasn't proposing that we rip the tests out of the tree. I'm
just
 trying to weigh the benefit of leaving them enabled on every run against
 the increased load they cause in an arguably overworked gate.


 Matt - how much of the time overhead is scenario tests? That's something
 that would have a lot less impact if moved to an experimental queue.
 Although the v3 api as a whole won't be officially exposed, the api
tests
 test specific features fairly independently which are slated for
 v2.1microversions on a case by case basis and I don't want to see those
 regress. I guess my concern is how often the experimental queue results
get
 really looked at and how hard/quick it is to revert when lots of stuff
 merges in a short period of time)


 The scenario tests tend to be the slower tests in tempest. I have to
disagree
 that removing them would have lower impact. The scenario tests provide
the best
 functional verification, which is part of the reason we always have
failures in
 the gate on them. While it would make the gate faster the decrease in
 what we're
 testing isn't worth it. Also, for reference I pulled the test run times
that
 were greater than 10sec out of a recent gate run:
 http://paste.openstack.org/show/83827/

 The experimental jobs aren't automatically run, they have to be manually
 triggered by leaving a 'check experimental' comment. So for changes that
we want
 to test the v3 api on, a comment would have to be left. To prevent
regression is why
 we'd also have the nightly job, which I think is a better compromise for
the v3
 tests while we wait to migrate them to the v2.1 microversion tests.

 Another option is that we make the v3 job run only on the check queue
and not
 on the gate. But the benefits of that are slightly more limited, because
we'd
 still be holding up the check queue.

 -Matt Treinish





 Yeah the scenario tests need to stay, that's how we've exposed the two
big ssh bugs in the last couple of weeks which are obvious issues at scale.

 I still think experimental/periodic is the way to go, not a hybrid of
check-on/gate-off.  If we want to explicitly test v3 API changes we can do
that with 'recheck experimental'.  Granted someone has to remember to run
those, much like checking/rechecking 3rd party CI results.

++


 One issue I've had with the nightly periodic job is finding out where the
results are in an easy to consume format.  Is there something out there for
that?  I'm thinking specifically of things we've turned off in the gate
before like multi-backend volume tests and allow_tenant_isolation=False.

 --

 Thanks,

 Matt Riedemann



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Third-Party][Infra] Optimize third party resources

2014-06-12 Thread Jaume Devesa
Hello all,

I've just submitted a patch[1] to solve a bug in Neutron. All the
third-party plugins have voted +1, but Jenkins has refused the patch
because of the '.' dot at the end of the commit summary line.

I know that is my fault, because I should have run ./run_tests.sh -p after
modifying the commit message, but:

since the official Jenkins runs more gates, tests and checks than the rest
of the third-party Jenkins systems, wouldn't it be better to run third-party
jobs after the official Jenkins has verified the patch? I think that would
relax the load on the third-party infrastructures and maybe they could be
more responsive to the 'good' patches.


[1]: https://review.openstack.org/#/c/99679/2

-- 
Jaume Devesa
Software Engineer at Midokura
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
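For reference, the style check that rejected the patch above can be approximated as follows. This is only a sketch — the real check lives in the project's gate tooling, and the exact rules and limits here are assumptions:

```python
def check_commit_summary(summary):
    """Approximate the gate's commit-summary style checks (assumed rules)."""
    problems = []
    first_line = summary.splitlines()[0].rstrip()
    if first_line.endswith('.'):
        problems.append("summary line must not end with a period")
    if len(first_line) > 72:  # a common wrap limit; an assumption here
        problems.append("summary line longer than 72 characters")
    return problems

# The trailing '.' is what trips the gate; without it the summary passes.
assert check_commit_summary("Fix routing bug in plugin.") == \
    ["summary line must not end with a period"]
assert check_commit_summary("Fix routing bug in plugin") == []
```

Running ./run_tests.sh -p (which wraps checks like this) after every commit-message edit avoids the round trip through Jenkins.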


[openstack-dev] [marconi] Juno-1 development milestone available

2014-06-12 Thread Kurt Griffiths
Hi folks,

Marconi’s first Juno milestone release is now available. It includes
several bug fixes, plus adds support for caching frequent DB queries as
part of the team's focus on performance tuning during the Juno cycle. This
release also includes an important refactoring of our API tests that will
allow us to deliver version 1.1 for the Juno-2 milestone release.

You can download the juno-1 release and review the changes here:

https://launchpad.net/marconi/+milestone/juno-1

Thanks to everyone who contributed to this first milestone! It’s great to
see all the new contributors.

--
Kurt Griffiths (kgriffs)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-12 Thread Joe Gordon
On Jun 12, 2014 8:37 AM, Sean Dague s...@dague.net wrote:

 On 06/12/2014 10:38 AM, Mike Bayer wrote:
 
  On 6/12/14, 8:26 AM, Julien Danjou wrote:
  On Thu, Jun 12 2014, Sean Dague wrote:
 
  That's not catchable in unit or functional tests?
  Not in an accurate manner, no.
 
  Keeping jobs alive based on the theory that they might one day be
useful
  is something we just don't have the liberty to do any more. We've not
  seen an idle node in zuul in 2 days... and we're only at j-1. j-3 will
  be at least +50% of this load.
  Sure, I'm not saying we don't have a problem. I'm just saying it's not
a
  good solution to fix that problem IMHO.
 
  Just my 2c without having a full understanding of all of OpenStack's CI
  environment, Postgresql is definitely different enough that MySQL
  strict mode could still allow issues to slip through quite easily, and
  also as far as capacity issues, this might be longer term but I'm hoping
  to get database-related tests to be lots faster if we can move to a
  model that spends much less time creating databases and schemas.

 This is what I mean by functional testing. If we were directly hitting a
 real database on a set of in tree project tests, I think you could
 discover issues like this. Neutron was headed down that path.

 But if we're talking about a devstack / tempest run, it's not really
 applicable.

 If someone can point me to a case where we've actually found this kind
 of bug with tempest / devstack, that would be great. I've just *never*
 seen it. I was the one that did most of the fixing for pg support in
 Nova, and have helped other projects as well, so I'm relatively familiar
 with the kinds of fails we can discover. The ones that Julien pointed
 really aren't likely to be exposed in our current system.

 Which is why I think we're mostly just burning cycles on the existing
 approach for no gain.

Given all the points made above, I think dropping PostgreSQL is the right
choice; if only we had infinite cloud capacity, that would be another story.

What about converting one of our existing jobs (grenade partial ncpu, large
ops, regular grenade, tempest with nova network, etc.) into a PostgreSQL-only
job? We could get some level of PostgreSQL testing without any additional
jobs, although this is a tradeoff obviously.


 -Sean

 --
 Sean Dague
 http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-12 Thread Janczuk, Tomasz
What exactly is the core set of functionalities Marconi expects all
implementations to support? (I understand it is a subset of the HTTP APIs
Marconi exposes?)

On 6/12/14, 4:56 AM, Flavio Percoco fla...@redhat.com wrote:

On 11/06/14 16:26 -0700, Devananda van der Veen wrote:
On Tue, Jun 10, 2014 at 1:23 AM, Flavio Percoco fla...@redhat.com
wrote:
 Against:

   • Makes it hard for users to create applications that work across multiple
     clouds, since critical functionality may or may not be available in a given
     deployment. (counter: how many users need cross-cloud compatibility? Can
     they degrade gracefully?)


The OpenStack Infra team does.

 This is definitely unfortunate but I believe it's a fair trade-off. I
 believe the same happens in other services that have support for
 different drivers.

I disagree strongly on this point.

Interoperability is one of the cornerstones of OpenStack. We've had
panels about it at summits. Designing an API which is not
interoperable is not a fair tradeoff for performance - it's
destructive to the health of the project. Where other projects have
already done that, it's unfortunate, but let's not plan to make it
worse.

A lack of interoperability not only prevents users from migrating
between clouds or running against multiple clouds concurrently, it
hurts application developers who want to build on top of OpenStack
because their applications become tied to specific *implementations*
of OpenStack.


What I meant to say is that, based on a core set of functionalities,
all extra functionalities are part of the fair trade-off. It's up to
the cloud provider to choose what storage driver/features they want to
expose. Nonetheless, they'll all expose the same core set of
functionalities. I believe this is true also for other services, which
I'm not trying to use as an excuse but as a reference of what the
reality of non-opinionated services is. Marconi is opinionated w.r.t
the API and the core set of functionalities it wants to support.

You make really good points that I agree with. Thanks for sharing.

-- 
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Third-Party][Infra] Optimize third party resources

2014-06-12 Thread Anita Kuno
On 06/12/2014 12:20 PM, Jaume Devesa wrote:
 Hello all,
 
 I've just submitted a patch[1] to solve a bug in Neutron. All the
 third-party plugins has voted as +1 but Jenkins has refused the patch
 because of the '.' dot at the end of the commit summary line.
 
 I know that is my fault, because I should run the ./run_tests.sh -p after
 modify the commit message, but:
 
 since the official Jenkins runs more gates, tests and checks than the rest
 of third-party Jenkins, wouldn't be better to run third party jobs after
 official Jenkins has verified the patch? I think that would relax the load
 on the third party infrastructures and maybe they can be more reactive in
 the 'good' patches.
 
 
 [1]: https://review.openstack.org/#/c/99679/2
 
 
 
 
Well, then we would be in a situation where development ceases until the
third-party systems respond, which is not a situation we want to put ourselves
in. We are actually actively avoiding that situation.

The failure on the '.' at the end of the commit title is a style test,
not something the third party tests would run anyway.

Thanks,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-12 Thread Monty Taylor
On 06/12/2014 08:36 AM, Sean Dague wrote:
 On 06/12/2014 10:38 AM, Mike Bayer wrote:

 On 6/12/14, 8:26 AM, Julien Danjou wrote:
 On Thu, Jun 12 2014, Sean Dague wrote:

 That's not catchable in unit or functional tests?
 Not in an accurate manner, no.

 Keeping jobs alive based on the theory that they might one day be useful
 is something we just don't have the liberty to do any more. We've not
 seen an idle node in zuul in 2 days... and we're only at j-1. j-3 will
 be at least +50% of this load.
 Sure, I'm not saying we don't have a problem. I'm just saying it's not a
 good solution to fix that problem IMHO.

 Just my 2c without having a full understanding of all of OpenStack's CI
 environment, Postgresql is definitely different enough that MySQL
 strict mode could still allow issues to slip through quite easily, and
 also as far as capacity issues, this might be longer term but I'm hoping
 to get database-related tests to be lots faster if we can move to a
 model that spends much less time creating databases and schemas.
 
 This is what I mean by functional testing. If we were directly hitting a
 real database on a set of in tree project tests, I think you could
 discover issues like this. Neutron was headed down that path.

We have MySQL and PostGres available on all of the unittest nodes. So if
someone wrote a functional test to test for postgres specific issues
like that, and put the standard trap on it only run this if you find a
postgres database with an openstackci user - then we should be able to
catch all of the specific things like this without incurring the cost of
a double run.

So, in general, +1 from me.
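The "standard trap" described above could be sketched as a skip guard in the functional tests: the PostgreSQL-specific checks run only when a local database with the CI user answers. The connection details below are assumptions (mirroring the usual openstack_citest convention), not a statement of what the CI nodes actually provision:

```python
import unittest

try:
    import psycopg2  # assumption: functional tests would use psycopg2
except ImportError:
    psycopg2 = None

def postgres_available():
    """True only if a local PostgreSQL with the CI user/database answers."""
    if psycopg2 is None:
        return False
    try:
        conn = psycopg2.connect(host='localhost', user='openstack_citest',
                                password='openstack_citest',
                                database='openstack_citest')
        conn.close()
        return True
    except psycopg2.Error:
        return False

class PostgresSpecificTests(unittest.TestCase):
    @unittest.skipUnless(postgres_available(),
                         'no PostgreSQL CI database found')
    def test_postgres_only_behaviour(self):
        # A PostgreSQL-only assertion would go here, e.g. the stricter
        # type-comparison semantics that MySQL strict mode doesn't cover.
        pass
```

On nodes without the database the tests skip cleanly, so the suite can run everywhere without the cost of a second full devstack/tempest pass.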

 But if we're talking about a devstack / tempest run, it's not really
 applicable.
 
 If someone can point me to a case where we've actually found this kind
 of bug with tempest / devstack, that would be great. I've just *never*
 seen it. I was the one that did most of the fixing for pg support in
 Nova, and have helped other projects as well, so I'm relatively familiar
 with the kinds of fails we can discover. The ones that Julien pointed
 really aren't likely to be exposed in our current system.
 
 Which is why I think we're mostly just burning cycles on the existing
 approach for no gain.
 
   -Sean
 
 
 
 




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-06-12 Thread Sumit Naiksatam
Hi Carlos,

I noticed that the point you raised here had not been followed up. So
if I understand correctly, your concern is related to sharing common
configuration information between GP drivers, and ML2 mechanism
drivers (when used in the mapping)? If so, would a common
configuration file  shared between the two drivers help to address
this?

Thanks,
~Sumit.

On Tue, May 27, 2014 at 10:33 AM, Carlos Gonçalves m...@cgoncalves.pt wrote:
 Hi,

 On 27 May 2014, at 15:55, Mohammad Banikazemi m...@us.ibm.com wrote:

 GP like any other Neutron extension can have different implementations. Our
 idea has been to have the GP code organized similar to how ML2 and mechanism
 drivers are organized, with the possibility of having different drivers for
 realizing the GP API. One such driver (analogous to an ML2 mechanism driver
 I would say) is the mapping driver that was implemented for the PoC. I
 certainly do not see it as the only implementation. The mapping driver is
 just the driver we used for our PoC implementation in order to gain
 experience in developing such a driver. Hope this clarifies things a bit.


 The code organisation adopted to implement the PoC for the GP is indeed very
 similar to the one ML2 is using. There is one aspect I think GP will hit
 soon if it continues with its current code base, where multiple (policy)
 drivers will be available and, as Mohammad put it, are analogous to ML2 mech
 drivers but independent from ML2's. I'm unaware, however, whether the
 following problem has already been brought to discussion or not.

 From here I see the GP effort going, besides some code refactoring, I'd
 say expanding the supported policy drivers is the next goal. With that, ODL
 support might be next. Now, administrators enabling GP ODL support will have
 to configure ODL data twice (host, user, password) in case they're using ODL
 as an ML2 mech driver too, because policy drivers share no information with
 ML2 ones. This can become more troublesome if ML2 is configured to load
 multiple mech drivers.

 With that said, if it makes any sense, a different implementation should be
 considered. One that somehow allows mech drivers living in ML2 umbrella to
 be extended; BP [1] [2] may be a first step towards that end, I’m guessing.

 Thanks,
 Carlos Gonçalves

 [1]
 https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mechanismdriver-extensions
 [2] https://review.openstack.org/#/c/89208/




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-06-12 Thread Sumit Naiksatam
Hi Carlos,

I noticed that the point you raised here had not been followed up. So
if I understand correctly, your concern is related to sharing common
configuration information between GP drivers, and ML2 mechanism
drivers (when used in the mapping)? If so, would a common
configuration file  shared between the two drivers help to address
this?

Thanks,
~Sumit.

On Tue, May 27, 2014 at 10:33 AM, Carlos Gonçalves m...@cgoncalves.pt wrote:
 Hi,

 On 27 May 2014, at 15:55, Mohammad Banikazemi m...@us.ibm.com wrote:

 GP like any other Neutron extension can have different implementations. Our
 idea has been to have the GP code organized similar to how ML2 and mechanism
 drivers are organized, with the possibility of having different drivers for
 realizing the GP API. One such driver (analogous to an ML2 mechanism driver
 I would say) is the mapping driver that was implemented for the PoC. I
 certainly do not see it as the only implementation. The mapping driver is
 just the driver we used for our PoC implementation in order to gain
 experience in developing such a driver. Hope this clarifies things a bit.


 The code organisation adopted to implement the PoC for the GP is indeed very
 similar to the one ML2 is using. There is one aspect I think GP will hit
 soon if it continues to follow with its current code base where multiple
 (policy) drivers will be available, and as Mohammad putted it as being
 analogous to an ML2 mech driver, but are independent from ML2’s. I’m
 unaware, however, if the following problem has already been brought to
 discussion or not.

 From here I see the GP effort going, besides from some code refactoring, I'd
 say expanding the supported policy drivers is the next goal. With that ODL
 support might next. Now, administrators enabling GP ODL support will have to
 configure ODL data twice (host, user, password) in case they’re using ODL as
 a ML2 mech driver too, because policy drivers share no information between
 ML2 ones. This can become more troublesome if ML2 is configured to load
 multiple mech drivers.

 With that said, if it makes sense, a different implementation should be
 considered: one that somehow allows mech drivers living under the ML2
 umbrella to be extended; BPs [1] [2] may be a first step towards that end,
 I'm guessing.

 Thanks,
 Carlos Gonçalves

 [1]
 https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mechanismdriver-extensions
 [2] https://review.openstack.org/#/c/89208/


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [marconi] performance

2014-06-12 Thread Janczuk, Tomasz
Hello,

I was wondering if there is any update on the performance of Marconi? Are
results of any performance measurements available yet?

Thanks,
Tomasz Janczuk
@tjanczuk
HP

On 4/29/14, 1:01 PM, Janczuk, Tomasz tomasz.janc...@hp.com wrote:

Hi Flavio,

Thanks! I also added some comments to the performance test plan at
https://etherpad.openstack.org/p/marconi-benchmark-plans we talked about
yesterday. 

Thanks,
Tomasz

On 4/29/14, 2:52 AM, Flavio Percoco fla...@redhat.com wrote:

On 28/04/14 17:41 +, Janczuk, Tomasz wrote:
Hello,

Have any performance numbers been published for Marconi? I have asked
this question before
(http://lists.openstack.org/pipermail/openstack-dev/2014-March/031004.ht
m
l) but there were none at that time.



Hi Tomasz,

Some folks in the team are dedicated to working on this and producing
results asap. The details and results will be shared as soon as
possible.

Thanks a lot for your interest, I'll make sure you're in the loop as
soon as we have them.

-- 
@flaper87
Flavio Percoco






Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-12 Thread Tim Bell
 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: 12 June 2014 17:37
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Gate proposal - drop Postgresql configurations in
 the gate
 
...
 But if we're talking about a devstack / tempest run, it's not really 
 applicable.
 
 If someone can point me to a case where we've actually found this kind of bug
 with tempest / devstack, that would be great. I've just *never* seen it. I 
 was the
 one that did most of the fixing for pg support in Nova, and have helped other
 projects as well, so I'm relatively familiar with the kinds of fails we can 
 discover.
 The ones that Julien pointed really aren't likely to be exposed in our current
 system.
 
 Which is why I think we're mostly just burning cycles on the existing approach
 for no gain.
 

In some cases, we've dropped support for drivers in OpenStack since they were 
not tested in the gate, on the grounds that if it is not tested, it is probably 
broken.

From my understanding, this change proposes to drop Postgres testing from the 
default gate. Yet, there does not seem to be a proposal to drop Postgres 
support.

Are these two positions consistent ?

(Just seeking clarification, I fully understand the difficulties involved in 
multiple parallel testing at our scale)

Tim



Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-12 Thread Clint Byrum
Excerpts from Matt Riedemann's message of 2014-06-12 08:15:46 -0700:
 
 On 6/12/2014 9:38 AM, Mike Bayer wrote:
 
  On 6/12/14, 8:26 AM, Julien Danjou wrote:
  On Thu, Jun 12 2014, Sean Dague wrote:
 
  That's not cacthable in unit or functional tests?
  Not in an accurate manner, no.
 
  Keeping jobs alive based on the theory that they might one day be useful
  is something we just don't have the liberty to do any more. We've not
  seen an idle node in zuul in 2 days... and we're only at j-1. j-3 will
  be at least +50% of this load.
  Sure, I'm not saying we don't have a problem. I'm just saying it's not a
  good solution to fix that problem IMHO.
 
  Just my 2c without having a full understanding of all of OpenStack's CI
  environment, Postgresql is definitely different enough that MySQL
  strict mode could still allow issues to slip through quite easily, and
  also as far as capacity issues, this might be longer term but I'm hoping
  to get database-related tests to be lots faster if we can move to a
  model that spends much less time creating databases and schemas.
 
 
 
 
 
 Is there some organization out there that uses PostgreSQL in production 
 that could stand up 3rd party CI with it?
 
 I know that at least for the DB2 support we're adding across the 
 projects we're doing 3rd party CI for that. Granted it's a proprietary 
 DB unlike PG but if we're talking about spending resources on testing 
 for something that's not widely used, but there is a niche set of users 
 that rely on it, we could/should move that to 3rd party CI.
 
 I'd much rather see us spend our test resources on getting multi-node 
 testing running in the gate so we can test migrations in Nova.
 

I think this is really the answer. To paraphrase the wise and well
experienced engineer, Beyoncé:

If you like it then you shoulda put CI on it.

The project will succumb to a tragedy of the commons if it bends over
backwards for every deployment variation available. But 3rd parties who
care can always contribute resources and (if they play nice...) votes.

I think there are a tiny number of things that will cause corner case
bugs that could creep in, but as Sean says, we haven't actually seen
these.



Re: [openstack-dev] [nova][cinder][ceilometer][glance][all] Loading clients from a CONF object

2014-06-12 Thread Mathieu Gagné

On 2014-06-12, 7:18 AM, Sean Dague wrote:

On 06/11/2014 08:12 PM, Mathieu Gagné wrote:

On 2014-06-11, 7:52 PM, Sean Dague wrote:

I'm concerned about the [nova] section being (one day) overloaded with
options unrelated to the actual nova client configuration. Although my
concern could be wrong.


I feel like you need to put your opperator hat on when it comes to conf
files. Nova is the compute service. Talking to nova is doing nova
things. Nova_client has no real meaning, and it actually gets kind of
confusing what's a client in an openstack cloud.

Because neutron is a nova client, nova is a neutron / glance client /
cinder client, glance is a swift client.

So that subtlety makes sense to people that spend time reading code. But
from an Ops perspective, seems to just add a layer of confusion.

The config file should be a view that makes sense from configuring a
system by someone that's not reading the code. Not just a reflection of
the code structure dejour that parses it.

Which is why I think [nova] makes sense.



I agree with you, [nova] looks more suitable.
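
As a concrete illustration of that operator-facing view, a hypothetical nova.conf-style fragment (the option names here are made up for the example, not actual nova client options):

```ini
# Hypothetical fragment: options for talking to the compute service
# live under [nova], regardless of which client library the consuming
# service uses internally.
[nova]
auth_url = http://keystone.example.org:5000/v2.0
admin_username = ceilometer
admin_password = secret
timeout = 30
```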

--
Mathieu



Re: [openstack-dev] [Glance] Nominating Nikhil Komawar for Core

2014-06-12 Thread Brian Rosmaita
+1

From: Kuvaja, Erno [kuv...@hp.com]
Sent: Thursday, June 12, 2014 9:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Nominating Nikhil Komawar for Core

+1

From: Alex Meade [mailto:mr.alex.me...@gmail.com]
Sent: 12 June 2014 13:56
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Nominating Nikhil Komawar for Core

+100 it's about time!

On Thu, Jun 12, 2014 at 3:26 AM, Mark Washenberger 
mark.washenber...@markwash.net wrote:
Hi folks,

I'd like to nominate Nikhil Komawar to join glance-core. His code and review 
contributions over the past years have been very helpful and he's been taking 
on a very important role in advancing the glance tasks work.

If anyone has any concerns, please let me know. Otherwise I'll make the 
membership change next week (which is code for, when someone reminds me to!)

Thanks!
markwash




Re: [openstack-dev] [Glance] Nominating Nikhil Komawar for Core

2014-06-12 Thread Arnaud Legendre
+1 Good job Nikhil! 

- Original Message -

From: Brian Rosmaita brian.rosma...@rackspace.com 
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org 
Sent: Thursday, June 12, 2014 10:15:07 AM 
Subject: Re: [openstack-dev] [Glance] Nominating Nikhil Komawar for Core 

+1 

From: Kuvaja, Erno [kuv...@hp.com] 
Sent: Thursday, June 12, 2014 9:34 AM 
To: OpenStack Development Mailing List (not for usage questions) 
Subject: Re: [openstack-dev] [Glance] Nominating Nikhil Komawar for Core 




+1

From: Alex Meade [mailto:mr.alex.me...@gmail.com]
Sent: 12 June 2014 13:56
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Nominating Nikhil Komawar for Core

+100 it's about time!

On Thu, Jun 12, 2014 at 3:26 AM, Mark Washenberger
mark.washenber...@markwash.net wrote:

Hi folks,

I'd like to nominate Nikhil Komawar to join glance-core. His code and review
contributions over the past years have been very helpful and he's been taking
on a very important role in advancing the glance tasks work.

If anyone has any concerns, please let me know. Otherwise I'll make the
membership change next week (which is code for, when someone reminds me to!)

Thanks!
markwash











Re: [openstack-dev] NFV in OpenStack use cases and context

2014-06-12 Thread Alan Kavanagh
Hi Ramki

Really like the smart scheduler idea; we made a couple of blueprints that are
related to ensuring you have the right information to build a constraint-based
scheduler. I do, however, want to point out that this is not NFV-specific but
is useful for all applications and services, of which NFV is one.

/Alan

-Original Message-
From: ramki Krishnan [mailto:r...@brocade.com] 
Sent: June-10-14 6:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Chris Wright; Nicolas Lemieux; Norival Figueira
Subject: Re: [openstack-dev] NFV in OpenStack use cases and context

Hi Steve,

Forgot to mention, the Smart Scheduler (Solver Scheduler) enhancements for
NFV (use cases, constraints, etc.) are a good example of an NFV use-case
deep dive for OpenStack.

https://urldefense.proofpoint.com/v1/url?u=https://docs.google.com/document/d/1k60BQXOMkZS0SIxpFOppGgYp416uXcJVkAFep3Oeju8/edit%23heading%3Dh.wlbclagujw8ck=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0Ar=%2FZ35AkRhp2kCW4Q3MPeE%2BxY2bqaf%2FKm29ZfiqAKXxeo%3D%0Am=vTulCeloS8Hc59%2FeAOd32Ri4eqbNqVE%2FeMgNRzGZnz4%3D%0As=836991d6daab66b519de3b670db8af001144ddb20e636665b395597aa118538f

Thanks,
Ramki

-Original Message-
From: ramki Krishnan
Sent: Tuesday, June 10, 2014 3:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Chris Wright; Nicolas Lemieux; Norival Figueira
Subject: RE: NFV in OpenStack use cases and context

Hi Steve,

We have OpenStack gap analysis documents in ETSI NFV under member-only
access. I can work on getting a public version of the documents (at least a
draft) to fuel the kick-start.

Thanks,
Ramki

-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com]
Sent: Tuesday, June 10, 2014 12:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Chris Wright; Nicolas Lemieux
Subject: Re: [openstack-dev] [NFV] Re: NFV in OpenStack use cases and context

- Original Message -
 From: Steve Gordon sgor...@redhat.com
 To: Stephen Wong stephen.kf.w...@gmail.com
 
 - Original Message -
  From: Stephen Wong stephen.kf.w...@gmail.com
  To: ITAI MENDELSOHN (ITAI) itai.mendels...@alcatel-lucent.com,
  OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
  
  Hi,
  
  Perhaps I have missed it somewhere in the email thread? Where is 
  the use case = bp document we are supposed to do for this week? Has 
  it been created yet?
  
  Thanks,
  - Stephen
 
 Hi,
 
 Itai is referring to the ETSI NFV use cases document [1] and the 
 discussion is around how we distill those - or a subset of them - into 
 a more consumable format for an OpenStack audience on the Wiki. At 
 this point I think the best approach is to simply start entering one 
 of them (perhaps #5) into the Wiki and go from there. Ideally this 
 would form a basis for discussing the format etc.
 
 Thanks,
 
 Steve
 
 [1]
 http://www.etsi.org/deliver/etsi_gs/NFV/001_099/001/01.01.01_60/gs_NFV
 001v010101p.pdf

To try and kick start things I have created a table on the wiki [1] based on 
the *DRAFT* NFV Performance & Portability Best Practises document [2]. This 
really lists workload types rather than specific applications, although I've 
put in an examples column we can populate with them.

I find it a useful way to quickly break down some of the characteristics of NFV 
applications at a glance. What do people think of this as something to start 
with? Remember, it's a wiki! So anyone is welcome to either expand the table or 
start adding more concrete user stories (e.g. around ETSI NFV use case number 5 
that Itai and I have been referring to, or any other VNF for that matter) in 
this section (we may/want need to create a separate page but for now it seems 
OK to get started here).

Thanks,

Steve

[1] https://wiki.openstack.org/wiki/Meetings/NFV#Use_Cases
[2] 
http://docbox.etsi.org/ISG/NFV/Open/Latest_Drafts/NFV-PER001v009%20-%20NFV%20Performance%20%20Portability%20Best%20Practises.pdf





[openstack-dev] mysql/mysql-python license contamination into openstack?

2014-06-12 Thread Chris Friesen

Hi,

I'm looking for the community viewpoint on whether there is any chance 
of license contamination between mysql and nova.  I realize that lawyers 
would need to be involved for a proper ruling, but I'm curious about the 
view of the developers on the list.


Suppose someone creates a modified openstack and wishes to sell it to 
others.  They want to keep their changes private.  They also want to use 
the mysql database.


The concern is this:

nova is apache licensed
sqlalchemy is MIT licensed
mysql-python (aka mysqldb1) is GPLv2 licensed
mysql is GPLv2 licensed



The concern is that since nova/sqlalchemy/mysql-python are all 
essentially linked together, an argument could be made that the work as 
a whole is a derivative work of mysql-python, and thus all the source 
code must be made available to anyone using the binary.


Does this argument have any merit?

Has anyone tested any of the mysql DBAPIs with more permissive licenses?

Chris
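
One data point worth noting: PyMySQL is an MIT-licensed, pure-Python DBAPI that SQLAlchemy supports via the `mysql+pymysql` dialect, so for SQLAlchemy-based projects switching away from the GPLv2 MySQL-python is essentially a connection-URL change. A sketch with hypothetical credentials (not a legal opinion on the linking question itself):

```python
# Sketch: switching SQLAlchemy's MySQL DBAPI driver is a one-line
# change in the connection URL. Host/user/password are made up.
gpl_url = "mysql+mysqldb://nova:secret@dbhost/nova"   # MySQL-python (GPLv2)
mit_url = "mysql+pymysql://nova:secret@dbhost/nova"   # PyMySQL (MIT)

def swap_driver(url, driver="pymysql"):
    """Rewrite the DBAPI driver portion of a SQLAlchemy-style URL."""
    scheme, rest = url.split("://", 1)
    dialect = scheme.split("+", 1)[0]
    return "%s+%s://%s" % (dialect, driver, rest)

print(swap_driver(gpl_url))  # mysql+pymysql://nova:secret@dbhost/nova
```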



Re: [openstack-dev] Problematic gate-tempest-dsvm-virtual-ironic job

2014-06-12 Thread Adam Gandelman
I've opened https://bugs.launchpad.net/openstack-ci/+bug/1329430 to track
the progress of getting the ironic job in better shape.  Monty had a great
suggestion this morning about how cache_devstack.py can be updated to cache
the UCA stuff for any job that may need it.  I'm putting together a patch
to address that now.

Thanks,
Adam



On Thu, Jun 12, 2014 at 11:09 AM, Ben Nemec openst...@nemebean.com wrote:


 On 06/12/2014 06:40 AM, Sean Dague wrote:
  Current gate-tempest-dsvm-virtual-ironic has only a 65% pass rate
  *in the gate* over the last 48 hrs -
 
 http://logstash.openstack.org/#eyJzZWFyY2giOiJidWlsZF9uYW1lOmdhdGUtdGVtcGVzdC1kc3ZtLXZpcnR1YWwtaXJvbmljIEFORCAobWVzc2FnZTpcIkZpbmlzaGVkOiBTVUNDRVNTXCIgT1IgbWVzc2FnZTpcIkZpbmlzaGVkOiBGQUlMVVJFXCIpIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDAyNTcyMjk4NTE4LCJtb2RlIjoic2NvcmUiLCJhbmFseXplX2ZpZWxkIjoiYnVpbGRfc3RhdHVzIn0=
 
   This job is run on diskimage-builder and ironic jobs in the the
  gate queue. Those jobs are now part of the integrated gate queue
  due to the overlap with oslotest jobs.
 
  This is *really* problematic, and too low to be voting. Anything < 90%
  pass rate is really an issue.
 
  It looks like these issues are actually structural with the job,
  because unlike our other configurations which aggressively try to
  avoid network interaction (which we've found is too unreliable),
  this job adds the cloud archive repository on the fly, and pulls
  content from there. That's never going to have a high success
  rate.
 
  I'm proposing we turn this off -
  https://review.openstack.org/#/c/99630/
 
  The ironic team needs to go back to the drawing board a little here
  and work on getting all the packages and repositories they need
  pulled down into nodepool so we can isolate from network effects
  before we can make this job gating again.
 
  -Sean

 On a related note, that oslotest cross-testing job probably needs to
 be removed for the moment.  It's completely broken on our stable
 branches and at some point all of these cross-test jobs will be
 generated automatically, so the manually added ones will probably need
 to go anyway.

 I rechecked https://review.openstack.org/#/c/92910/ and it still looks
 ready to go.

 - -Ben





Re: [openstack-dev] [Neutron][Third-Party][Infra] Optimize third party resources

2014-06-12 Thread Joe Gordon
On Thu, Jun 12, 2014 at 12:25 PM, Joe Gordon joe.gord...@gmail.com wrote:


 On Jun 12, 2014 9:20 AM, Jaume Devesa devv...@gmail.com wrote:
 
  Hello all,
 
  I've just submitted a patch[1] to solve a bug in Neutron. All the
 third-party plugins has voted as +1 but Jenkins has refused the patch
 because of the '.' dot at the end of the commit summary line.
 
  I know that is my fault, because I should have run ./run_tests.sh -p
  after modifying the commit message, but:
 

 You can just turn this rule off in neutron's tox.ini file



https://review.openstack.org/99743
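
For reference, such a change is a hacking-check exclusion in tox.ini. A hypothetical fragment (the exact check code — H803 here — should be verified against the hacking docs, and any existing ignore list should be preserved, not replaced):

```ini
# Hypothetical tox.ini fragment: tell flake8/hacking to skip the
# "commit title must not end with a period" check.
[flake8]
ignore = H803
```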


  since the official Jenkins runs more gates, tests and checks than the
  rest of the third-party Jenkins systems, wouldn't it be better to run
  third-party jobs after the official Jenkins has verified the patch? I think
  that would relax the load on the third-party infrastructures and maybe they
  could be more reactive on the 'good' patches.
 
 
  ​[1]: https://review.openstack.org/#/c/99679/2​
 
  --
  Jaume Devesa
  Software Engineer at Midokura
 
 




Re: [openstack-dev] [TripleO] pacemaker management tools

2014-06-12 Thread Adam Gandelman
It's been a while since I've used these tools and I'm not 100% surprised
they've fragmented once again. :)  That said, does pcs support creating the
CIB configuration in bulk from a file? I know that crm shell would let you
dump the entire cluster config and restore from file.  Unless the CIB
format differs now, couldn't we just create the entire thing first and
use a single pcs or crm command to import it into the cluster, rather than
building each resource command-by-command?

Adam


On Wed, Jun 11, 2014 at 4:28 AM, Jan Provazník jprov...@redhat.com wrote:

 Hi,
 ceilometer-agent-central element was added recently into overcloud image.
 To be able scale out overcloud control nodes, we need HA for this central
 agent. Currently central agent can not scale out (until [1] is done). For
 now, the simplest way is add the central agent to Pacemaker, which is quite
 simple.

 The issue is that distributions supported in TripleO provide different
 tools for managing Pacemaker. Ubuntu/Debian provides crmsh, Fedora/RHEL
 provides pcs, OpenSuse provides both. I didn't find packages for all our
 distros for any of the tools. Also if there is a third-party repo providing
 packages for various distros, adding dependency on an untrusted third-party
 repo might be a problem for some users.

 Although it's a little bit annoying, I think we will end up managing
 commands for both config tools; a resource creation sample:

 if $USE_PCS; then
   pcs resource create ClusterIP IPaddr2 ip=192.168.0.120 cidr_netmask=32
 else
   crm configure primitive ClusterIP ocf:heartbeat:IPaddr2 params
 ip=192.168.122.120 cidr_netmask=32 op monitor interval=30s
 fi
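
To keep call sites tool-agnostic, the branch could live in one small helper rather than being repeated at every resource definition. A sketch in Python — the command strings are abbreviated and the option handling is hypothetical; real resource definitions differ between crm and pcs:

```python
# Sketch: centralize the crm-vs-pcs divergence in one helper so the
# rest of the configuration code stays tool-agnostic.
def vip_resource_cmd(use_pcs, name, ip, netmask=32, interval="30s"):
    """Build the command that creates a virtual-IP cluster resource."""
    if use_pcs:
        return ("pcs resource create %s IPaddr2 "
                "ip=%s cidr_netmask=%s" % (name, ip, netmask))
    return ("crm configure primitive %s ocf:heartbeat:IPaddr2 "
            "params ip=%s cidr_netmask=%s op monitor interval=%s"
            % (name, ip, netmask, interval))

print(vip_resource_cmd(True, "ClusterIP", "192.168.0.120"))
print(vip_resource_cmd(False, "ClusterIP", "192.168.0.120"))
```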

 There are not many places where pacemaker configuration would be required,
 so I think this is acceptable. Any other opinions?

 Jan


 [1] https://blueprints.launchpad.net/ceilometer/+spec/central-
 agent-improvement




Re: [openstack-dev] NFV in OpenStack use cases and context

2014-06-12 Thread ramki Krishnan
Yathi - many thanks for adding more NFV context.

Alan - many thanks for the interest and would be glad to have further 
discussions.

Thanks,
Ramki

-Original Message-
From: Yathiraj Udupi (yudupi) [mailto:yud...@cisco.com] 
Sent: Thursday, June 12, 2014 11:53 AM
To: Alan Kavanagh
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] NFV in OpenStack use cases and context

Hi Alan, 

Our Smart (Solver) Scheduler blueprint
(https://blueprints.launchpad.net/nova/+spec/solver-scheduler ) has been in the 
works in the Nova community since late 2013. We have demoed, at the Hong Kong 
summit as well as the Atlanta summit, use cases using this smart scheduler 
for better, optimized resource placement in complex constrained scenarios. 
To be clear, this work was started as a smarter way of doing scheduling, 
applicable in general and not limited to NFV. Currently we feel NFV is a 
killer app for driving this blueprint and the work ahead; however, it is 
applicable to all kinds of resource placement scenarios. 
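
For a rough feel of what constraint-based placement means, here is an illustrative greedy sketch — not the actual solver-scheduler code, which models placement as an optimization problem solved jointly rather than host-by-host:

```python
# Sketch: for each instance, keep only hosts satisfying every hard
# constraint, then pick the one minimizing a cost function.
def place(instances, hosts, constraints, cost):
    placement = {}
    for inst in instances:
        candidates = [h for h in hosts
                      if all(c(inst, h, placement) for c in constraints)]
        if not candidates:
            raise RuntimeError("no valid host for %s" % inst["name"])
        best = min(candidates, key=lambda h: cost(inst, h, placement))
        placement[inst["name"]] = best["name"]
        best["free_ram"] -= inst["ram"]   # account for the new instance
    return placement

hosts = [{"name": "h1", "free_ram": 4096}, {"name": "h2", "free_ram": 8192}]
instances = [{"name": "vm1", "ram": 2048}, {"name": "vm2", "ram": 6144}]
fits = lambda i, h, p: h["free_ram"] >= i["ram"]   # hard constraint
spread = lambda i, h, p: -h["free_ram"]            # prefer emptiest host
result = place(instances, hosts, [fits], spread)
print(result)  # {'vm1': 'h2', 'vm2': 'h2'}
```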

We will be very interested in finding out more about your blueprints that you 
are referring to here, and see how it can be integrated as part of our future 
roadmap. 

Thanks,
Yathi. 


On 6/12/14, 10:55 AM, Alan Kavanagh alan.kavan...@ericsson.com wrote:

Hi Ramki

Really like the smart scheduler idea, we made a couple of blueprints 
that are related to ensuring you have the right information to build a 
constrained based scheduler. I do however want to point out that this 
is not NFV specific but is useful for all applications and services of 
which NFV is one.

/Alan

-Original Message-
From: ramki Krishnan [mailto:r...@brocade.com]
Sent: June-10-14 6:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Chris Wright; Nicolas Lemieux; Norival Figueira
Subject: Re: [openstack-dev] NFV in OpenStack use cases and context

Hi Steve,

Forgot to mention, the Smart Scheduler (Solver Scheduler) enhancements 
for NFV: Use Cases, Constraints etc. is a good example of an NFV use 
case deep dive for OpenStack.

https://urldefense.proofpoint.com/v1/url?u=https://docs.google.com/docu
men 
t/d/1k60BQXOMkZS0SIxpFOppGgYp416uXcJVkAFep3Oeju8/edit%23heading%3Dh.wlb
cla 
gujw8ck=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0Ar=%2FZ35AkRhp2kCW4Q3MPeE%2Bx
Y2b 
qaf%2FKm29ZfiqAKXxeo%3D%0Am=vTulCeloS8Hc59%2FeAOd32Ri4eqbNqVE%2FeMgNRz
GZn
z4%3D%0As=836991d6daab66b519de3b670db8af001144ddb20e636665b395597aa118
538
f

Thanks,
Ramki

-Original Message-
From: ramki Krishnan
Sent: Tuesday, June 10, 2014 3:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Chris Wright; Nicolas Lemieux; Norival Figueira
Subject: RE: NFV in OpenStack use cases and context

Hi Steve,

We are have OpenStack gap analysis documents in ETSI NFV under member 
only access. I can work on getting public version of the documents (at 
least a draft) to fuel the kick start.

Thanks,
Ramki

-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com]
Sent: Tuesday, June 10, 2014 12:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Chris Wright; Nicolas Lemieux
Subject: Re: [openstack-dev] [NFV] Re: NFV in OpenStack use cases and 
context

- Original Message -
 From: Steve Gordon sgor...@redhat.com
 To: Stephen Wong stephen.kf.w...@gmail.com
 
 - Original Message -
  From: Stephen Wong stephen.kf.w...@gmail.com
  To: ITAI MENDELSOHN (ITAI) itai.mendels...@alcatel-lucent.com,
  OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  
  Hi,
  
  Perhaps I have missed it somewhere in the email thread? Where 
  is the use case = bp document we are supposed to do for this week? 
  Has it been created yet?
  
  Thanks,
  - Stephen
 
 Hi,
 
 Itai is referring to the ETSI NFV use cases document [1] and the 
 discussion is around how we distill those - or a subset of them - 
 into a more consumable format for an OpenStack audience on the Wiki. 
 At this point I think the best approach is to simply start entering 
 one of them (perhaps #5) into the Wiki and go from there. Ideally 
 this would form a basis for discussing the format etc.
 
 Thanks,
 
 Steve
 
 [1]
 http://www.etsi.org/deliver/etsi_gs/NFV/001_099/001/01.01.01_60/gs_NF
 V
 001v010101p.pdf

To try and kick start things I have created a table on the wiki [1] 
based on the *DRAFT* NFV Performance  Portability Best Practises document [2].
This really lists workload types rather than specific applications, 
although I've put in an examples column we can populate with them.

I find it a useful way to quickly break down some of the 
characteristics of NFV applications at a glance. What do people think 
of this as something to start with? Remember, it's a wiki! So anyone is 
welcome to either expand the table or start adding more concrete user stories 
(e.g.
around ETSI NFV use case number 5 that Itai and I have been referring 

Re: [openstack-dev] NFV in OpenStack use cases and context

2014-06-12 Thread ramki Krishnan
++ Yathi.

-Original Message-
From: ramki Krishnan [mailto:r...@brocade.com] 
Sent: Thursday, June 12, 2014 12:48 PM
To: OpenStack Development Mailing List (not for usage questions); Alan Kavanagh
Cc: Norival Figueira
Subject: Re: [openstack-dev] NFV in OpenStack use cases and context

Yathi - many thanks for adding more NFV context.

Alan - many thanks for the interest and would be glad to have further 
discussions.

Thanks,
Ramki

-Original Message-
From: Yathiraj Udupi (yudupi) [mailto:yud...@cisco.com]
Sent: Thursday, June 12, 2014 11:53 AM
To: Alan Kavanagh
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] NFV in OpenStack use cases and context

Hi Alan, 

Our Smart (Solver) Scheduler blueprint
(https://blueprints.launchpad.net/nova/+spec/solver-scheduler ) has been in the 
works in the Nova community since late 2013.  We have demoed at the Hong Kong 
summit, as well as the Atlanta summit,  use cases using this smart scheduler 
for better, optimized resource placement with complex constrained scenarios.  
So to let you know this work was started as a smart way of doing scheduling, 
applicable in general and not limited to NFV.  Currently we feel NFV is a 
killer app for driving this blueprint and work ahead, however is applicable for 
all kinds of resource placement scenarios. 

We will be very interested in finding out more about your blueprints that you 
are referring to here, and see how it can be integrated as part of our future 
roadmap. 

Thanks,
Yathi. 


On 6/12/14, 10:55 AM, Alan Kavanagh alan.kavan...@ericsson.com wrote:

Hi Ramki

Really like the smart scheduler idea, we made a couple of blueprints 
that are related to ensuring you have the right information to build a 
constrained based scheduler. I do however want to point out that this 
is not NFV specific but is useful for all applications and services of 
which NFV is one.

/Alan

-Original Message-
From: ramki Krishnan [mailto:r...@brocade.com]
Sent: June-10-14 6:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Chris Wright; Nicolas Lemieux; Norival Figueira
Subject: Re: [openstack-dev] NFV in OpenStack use cases and context

Hi Steve,

Forgot to mention, the Smart Scheduler (Solver Scheduler) enhancements 
for NFV: Use Cases, Constraints etc. is a good example of an NFV use 
case deep dive for OpenStack.

https://urldefense.proofpoint.com/v1/url?u=https://docs.google.com/docu
men
t/d/1k60BQXOMkZS0SIxpFOppGgYp416uXcJVkAFep3Oeju8/edit%23heading%3Dh.wlb
cla
gujw8ck=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0Ar=%2FZ35AkRhp2kCW4Q3MPeE%2Bx
Y2b
qaf%2FKm29ZfiqAKXxeo%3D%0Am=vTulCeloS8Hc59%2FeAOd32Ri4eqbNqVE%2FeMgNRz
GZn
z4%3D%0As=836991d6daab66b519de3b670db8af001144ddb20e636665b395597aa118
538
f

Thanks,
Ramki

-Original Message-
From: ramki Krishnan
Sent: Tuesday, June 10, 2014 3:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Chris Wright; Nicolas Lemieux; Norival Figueira
Subject: RE: NFV in OpenStack use cases and context

Hi Steve,

We are have OpenStack gap analysis documents in ETSI NFV under member 
only access. I can work on getting public version of the documents (at 
least a draft) to fuel the kick start.

Thanks,
Ramki

-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com]
Sent: Tuesday, June 10, 2014 12:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Chris Wright; Nicolas Lemieux
Subject: Re: [openstack-dev] [NFV] Re: NFV in OpenStack use cases and 
context

- Original Message -
 From: Steve Gordon sgor...@redhat.com
 To: Stephen Wong stephen.kf.w...@gmail.com
 
 - Original Message -
  From: Stephen Wong stephen.kf.w...@gmail.com
  To: ITAI MENDELSOHN (ITAI) itai.mendels...@alcatel-lucent.com,
  OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  
  Hi,
  
  Perhaps I have missed it somewhere in the email thread? Where 
  is the use case = bp document we are supposed to do for this week?
  Has it been created yet?
  
  Thanks,
  - Stephen
 
 Hi,
 
 Itai is referring to the ETSI NFV use cases document [1] and the 
 discussion is around how we distill those - or a subset of them - 
 into a more consumable format for an OpenStack audience on the Wiki.
 At this point I think the best approach is to simply start entering 
 one of them (perhaps #5) into the Wiki and go from there. Ideally 
 this would form a basis for discussing the format etc.
 
 Thanks,
 
 Steve
 
 [1]
 http://www.etsi.org/deliver/etsi_gs/NFV/001_099/001/01.01.01_60/gs_NF
 V
 001v010101p.pdf

To try and kick start things I have created a table on the wiki [1] 
based on the *DRAFT* NFV Performance  Portability Best Practises document [2].
This really lists workload types rather than specific applications, 
although I've put in an examples column we can populate with them.

I find it a useful way to quickly break down some of the 

Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-12 Thread Jay Pipes

On 06/12/2014 12:24 PM, Joe Gordon wrote:


On Jun 12, 2014 8:37 AM, Sean Dague s...@dague.net wrote:
 
  On 06/12/2014 10:38 AM, Mike Bayer wrote:
  
   On 6/12/14, 8:26 AM, Julien Danjou wrote:
   On Thu, Jun 12 2014, Sean Dague wrote:
  
   That's not catchable in unit or functional tests?
   Not in an accurate manner, no.
  
   Keeping jobs alive based on the theory that they might one day be
useful
   is something we just don't have the liberty to do any more. We've not
   seen an idle node in zuul in 2 days... and we're only at j-1. j-3
will
   be at least +50% of this load.
   Sure, I'm not saying we don't have a problem. I'm just saying it's
not a
   good solution to fix that problem IMHO.
  
   Just my 2c without having a full understanding of all of OpenStack's CI
   environment, Postgresql is definitely different enough that MySQL
   strict mode could still allow issues to slip through quite
easily, and
   also as far as capacity issues, this might be longer term but I'm
hoping
   to get database-related tests to be lots faster if we can move to a
   model that spends much less time creating databases and schemas.
 
  This is what I mean by functional testing. If we were directly hitting a
  real database on a set of in tree project tests, I think you could
  discover issues like this. Neutron was headed down that path.
 
  But if we're talking about a devstack / tempest run, it's not really
  applicable.
 
  If someone can point me to a case where we've actually found this kind
  of bug with tempest / devstack, that would be great. I've just *never*
  seen it. I was the one that did most of the fixing for pg support in
  Nova, and have helped other projects as well, so I'm relatively familiar
  with the kinds of fails we can discover. The ones that Julien pointed
  really aren't likely to be exposed in our current system.
 
  Which is why I think we're mostly just burning cycles on the existing
  approach for no gain.

Given all the points made above, I think dropping PostgreSQL is the
right choice; if only we had infinite cloud that would be another story.

What about converting one of our existing jobs (grenade partial ncpu,
large ops, regular grenade, tempest with nova network etc.) Into a
PostgreSQL only job? We could get some level of PostgreSQL testing
without any additional jobs, although this is a tradeoff, obviously.
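If we went down that route, the devstack side of converting one existing job to PostgreSQL-only is roughly a one-line localrc change (sketch only; this assumes devstack's use_database helper from lib/database still works the way it's documented):

```shell
# localrc fragment: run this job's devstack against PostgreSQL
# instead of the default MySQL backend.
use_database postgresql
```

The zuul/job-template plumbing to pick which job carries the flag would be a separate infra change.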


I was initially -1 on Sean's proposal. My reasoning echoed some of 
Julien's reasoning and all of Chris Friesen's rationale (and the bug 
report he mentioned was a perfect example of the types of things that 
would not, IMO, be caught by a MySQL strict mode configuration.)


That said, I recognize the resource capacity issues the gate is 
suffering from and I think Joe's suggestion above is actually a really 
good one.


Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] mysql/mysql-python license contamination into openstack?

2014-06-12 Thread Jay Pipes

On 06/12/2014 02:13 PM, Chris Friesen wrote:

Hi,

I'm looking for the community viewpoint on whether there is any chance
of license contamination between mysql and nova.  I realize that lawyers
would need to be involved for a proper ruling, but I'm curious about the
view of the developers on the list.

Suppose someone creates a modified openstack and wishes to sell it to
others.  They want to keep their changes private.  They also want to use
the mysql database.


IANAL and all that...but...

The problem is not in their closed product *using* MySQL. The problem is 
in the *distribution* of the product. If the product is distributed with 
MySQL packaged with the product (and thereby the product is distributing 
the MySQL source or binaries), that is not permitted by the GPL.


Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Message level security plans.

2014-06-12 Thread Matt Riedemann



On 6/12/2014 10:31 AM, Kelsey, Timothy Joh wrote:

Thanks for the info Matt, I guess I should have been clearer about what I
was asking. I was indeed referring to the trusted RPC messaging proposal
you linked. I'm keen to find out what's happening with that and where I can
help.



Looks like there was a short related thread in the dev list last month:

http://lists.openstack.org/pipermail/openstack-dev/2014-May/034392.html

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Meaning of 204 from DELETE apis

2014-06-12 Thread David Kranz
Tempest has a number of tests in various services for deleting objects 
that mostly return 204. Many, but not all, of these tests go on to check 
that the resource was actually deleted but do so in different ways. 
Sometimes they go into a timeout loop waiting for a GET on the object to 
fail. Sometimes they immediately call DELETE again or GET and assert 
that it fails. According to what I can see about the HTTP spec, 204 
should mean that the object was deleted. So is waiting for something to 
disappear unnecessary? Is immediate assertion wrong? Does this behavior 
vary service to service? We should be as consistent about this as 
possible but I am not sure what the expected behavior of all services 
actually is.
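If we do standardize on "poll GET until it fails" as the tolerant option, the shared helper might look like the sketch below (hypothetical names and a fake client for illustration; this is not Tempest's actual API):

```python
import time


class ResourceGoneError(Exception):
    """Stand-in for the client exception raised on HTTP 404."""


def wait_for_deletion(get_resource, timeout=60, interval=1):
    """Poll GET until the resource 404s, tolerating asynchronous deletes.

    Returns True once the resource is gone; raises TimeoutError if it is
    still present when the timeout expires.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            get_resource()
        except ResourceGoneError:
            return True
        time.sleep(interval)
    raise TimeoutError("resource still present after %ss" % timeout)


# Minimal usage: a fake client whose resource disappears after two polls.
calls = {"n": 0}

def fake_get():
    calls["n"] += 1
    if calls["n"] > 2:
        raise ResourceGoneError()

result = wait_for_deletion(fake_get, timeout=5, interval=0)  # True
```

Services that delete synchronously would just exit the loop on the first iteration, so a single helper covers both behaviors.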


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

