Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2015-01-05 Thread Punith S
Thanks, Ramy :)

I have set up the CI, but our dsvm-tempest-full job is failing due to some
failures when running tempest.
But how do we publish these failures to the sandbox project?
Also, my gearman service is not showing any worker threads:

root@cimaster:/# telnet 127.0.0.1 4730
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
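Once connected, Gearman's text protocol answers a `status` command with one line per registered function: name, queued jobs, running jobs, and available workers, terminated by a line containing a single dot. The manual telnet check above can be scripted; this is a sketch only, with the host and port taken from the session above:

```python
import socket

def parse_status(text):
    """Parse Gearman 'status' output: one FUNCTION, TOTAL, RUNNING, WORKERS
    row per line (tab-separated), terminated by a line containing '.'."""
    rows = []
    for line in text.splitlines():
        if line == ".":
            break
        name, total, running, workers = line.split("\t")
        rows.append((name, int(total), int(running), int(workers)))
    return rows

def gearman_status(host="127.0.0.1", port=4730, timeout=5):
    """Ask the Gearman server which functions are registered and how many
    workers can serve each; zero workers everywhere matches the symptom."""
    with socket.create_connection((host, port), timeout=timeout) as conn:
        conn.sendall(b"status\n")
        data = b""
        while not data.endswith(b".\n"):
            chunk = conn.recv(4096)
            if not chunk:
                break
            data += chunk
    return parse_status(data.decode())
```

If `gearman_status()` returns an empty list, or every row shows 0 available workers, then the Jenkins Gearman plugin has not registered any jobs with the server.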

thanks

On Sun, Jan 4, 2015 at 10:23 PM, Asselin, Ramy ramy.asse...@hp.com wrote:

  Did you try asking the friendly folks on IRC freenode #openstack-infra?



 You can also try:

 Rebooting.

 Deleting all the Jenkins jobs and reloading them.



 Ramy



 *From:* Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
 *Sent:* Friday, December 26, 2014 1:30 AM
 *To:* Punith S
 *Cc:* OpenStack Development Mailing List (not for usage questions);
 Asselin, Ramy

 *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help
 setting up CI



 @Asselin:

 Regarding the few items you can try: I tried everything, and the job still
 appears NOT_REGISTERED.

 I'll see next week if I can do a clean install on another Jenkins master.



 Thanks for your help,

 Eduard



 On Fri, Dec 26, 2014 at 11:23 AM, Punith S punit...@cloudbyte.com wrote:

  hello,



 I have set up the CI environment for our CloudByte storage, running a
 master Jenkins VM and a slave node VM running devstack.



 I have followed Jay's blog, using Asselin's scripts from GitHub.



 All I need to do is test our CloudByte cinder driver against the cinder
 tempest suite.



 Currently the noop-check-communication job is completing successfully on the
 openstack-dev/sandbox project,



 but the *dsvm-tempest-full* job is failing due to an error uploading the
 images to the glance service from swift.



 How can I hack the dsvm-tempest-full job so that it only installs my
 required services, like cinder, nova, horizon, etc.?



 Does modifying the devstack-gate scripts help?



 I'm attaching the links for the failure of the dsvm-tempest-full job:



 devstacklog.txt - http://paste.openstack.org/show/154779/

 devstack.txt.summary - http://paste.openstack.org/show/154780/



 thanks









 On Tue, Dec 23, 2014 at 8:46 PM, Asselin, Ramy ramy.asse...@hp.com
 wrote:

 You should use 14.04 for the slave. The limitation for using 12.04 is
 only for the master, since zuul’s apache configuration is WIP on 14.04 [1],
 and zuul does not run on the slave.

 Ramy

 [1] https://review.openstack.org/#/c/141518/

 *From:* Punith S [mailto:punit...@cloudbyte.com]
 *Sent:* Monday, December 22, 2014 11:37 PM
 *To:* OpenStack Development Mailing List (not for usage questions)

 *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help
 setting up CI



 Hi Asselin,



 I'm following your README at https://github.com/rasselin/os-ext-testing

 to set up our CloudByte CI on two Ubuntu 12.04 VMs (master and slave).



 So far the scripts and setup went fine, as described in the document.



 Now both master and slave are connected successfully, but in order
 to run the tempest integration tests against our proposed CloudByte cinder
 driver for Kilo, we need devstack installed on the slave (in my
 understanding).



 But when installing master devstack on 12.04, I'm getting permission issues
 executing ./stack.sh, since master devstack expects Ubuntu 14.04 or
 13.10. On the contrary, running install_slave.sh fails on 13.10
 with a "puppet modules not found" error.



  Is there a way to get this to work?



 thanks in advance



 On Mon, Dec 22, 2014 at 11:10 PM, Asselin, Ramy ramy.asse...@hp.com
 wrote:

  Eduard,



 A few items you can try:

 1. Double-check that the job is in Jenkins.
    a. If not, then that’s the issue.

 2. Check that the processes are running correctly:
    a. ps -ef | grep zuul
       Should show 2 zuul-server and 1 zuul-merger processes.
    b. ps -ef | grep jenkins
       Should show 1 /usr/bin/daemon --name=jenkins and 1 /usr/bin/java.

 3. In Jenkins: Manage Jenkins, Gearman Plugin Config, “Test Connection”.

 4. Stop Zuul and Jenkins, then start them again:
    a. service jenkins stop
    b. service zuul stop
    c. service zuul-merger stop
    d. service jenkins start
    e. service zuul start
    f. service zuul-merger start
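The expected process counts in step 2 can be checked with a short script instead of eyeballing ps output; a sketch (the patterns follow the checklist above, so adjust them to your install):

```python
import subprocess

def count_procs(pattern):
    """Count running processes whose 'ps -ef' line contains `pattern`."""
    out = subprocess.run(["ps", "-ef"], capture_output=True, text=True).stdout
    return sum(1 for line in out.splitlines() if pattern in line)

# Expected counts from the checklist: 2 zuul-server, 1 zuul-merger,
# 1 /usr/bin/daemon --name=jenkins, and 1 /usr/bin/java.
for pattern, expected in [("zuul-server", 2), ("zuul-merger", 1),
                          ("--name=jenkins", 1), ("/usr/bin/java", 1)]:
    print(pattern, count_procs(pattern), "expected", expected)
```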



 Otherwise, I suggest you ask in #openstack-infra irc channel.



 Ramy



 *From:* Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
 *Sent:* Sunday, December 21, 2014 11:01 PM


 *To:* Asselin, Ramy
 *Cc:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help
 setting up CI



 Thanks Ramy,



 Unfortunately I don't see dsvm-tempest-full in the status output.

 Any idea how I can get it registered?



 Thanks,

 Eduard



 On Fri, Dec 19, 2014 at 9:43 PM, Asselin, Ramy 

Re: [openstack-dev] [devstack] [IceHouse] Install prettytable>=0.7 to satisfy pip 6/PEP 440

2015-01-05 Thread Thierry Carrez
Yogesh Prasad wrote:
 I observe that this commit is present in master branch.
 
 commit 6ec66bb3d1354062ec70be972dba990e886084d5
 
 Install prettytable>=0.7 to satisfy pip 6/PEP 440
 ...
 
 However, I am facing issues due to PEP 440 in devstack's
 stable/icehouse branch. Is devstack icehouse still maintained? In other
 words, will these fixes get into the icehouse branch?

Yes, devstack's stable/icehouse branch is still maintained.

Looking at
https://git.openstack.org/cgit/openstack-dev/devstack/log/?h=stable/icehouse
it appears that the fix was already backported?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [storyboard] Nominating Yolanda Robla for StoryBoard Core

2015-01-05 Thread Thierry Carrez
Michael Krotscheck wrote:
 StoryBoard is the much anticipated successor to Launchpad, and is a
 component of the Infrastructure Program. The storyboard-core group is
 intended to be a superset of the infra-core group, with additional
 reviewers who specialize in the field.
 
 Yolanda has been working on StoryBoard ever since the Atlanta Summit,
 and has provided a diligent and cautious voice to our development
 effort. She has consistently provided feedback on our reviews, and is
 neither afraid of asking for clarification, nor of providing
 constructive criticism. In return, she has been nothing but gracious and
 responsive when improvements were suggested to her own submissions.
 
 Furthermore, Yolanda has been quite active in the infrastructure team as
 a whole, and provides valuable context for us in the greater realm of infra.
 
 Please respond within this thread with either supporting commentary, or
 concerns about her promotion. Since many western countries are currently
 celebrating holidays, the review period will remain open until January
 9th. If the consensus is positive, we will promote her then!

+1

Although Yolanda missed the brainwashing^Winitial objective alignment
sprint in Brussels, she has been catching up on code really fast and
always provided insightful reviews. Adding her to the core team should
give us the critical mass we need to unclog the change pipeline.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [TripleO] Bug squashing followup

2015-01-05 Thread Derek Higgins
See below. We got only 5 people signed up; is anybody else willing to join
the effort?

TL;DR: if you're willing to assess about 15 TripleO-related bugs to decide
whether they are still current, add your name to the etherpad.

thanks,
Derek

On 18/12/14 11:25, Derek Higgins wrote:
 While bug squashing yesterday, I went through quite a lot of bugs,
 closing around 40 that were already fixed or no longer relevant. I
 eventually ran out of time, but I'm pretty sure that if we split the
 task up between us we could weed out a lot more.
 
 What I'd like to do, as a one-off, is randomly split up all the bugs
 among a group of volunteers (hopefully a large number of people). Each
 person gets assigned X bugs and is then responsible just for deciding
 whether each is still a relevant bug (or finding somebody who can help
 decide) and closing it if necessary. Nothing needs to get fixed here; we
 just need to make sure people have an up-to-date list of relevant bugs.
 
 So who wants to volunteer? We probably need about 15+ people for this to
 be split into manageable chunks. If you're willing to help out, just add
 your name to this list:
 https://etherpad.openstack.org/p/tripleo-bug-weeding
 
 If we get enough people I'll follow up by splitting out the load and
 assigning to people.
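The random split described above is only a few lines of Python; a sketch, with bug IDs and volunteer names as placeholders:

```python
import random

def assign_bugs(bugs, volunteers, seed=None):
    """Shuffle the open bugs and deal them out round-robin, so each
    volunteer gets roughly len(bugs) / len(volunteers) bugs to assess."""
    rng = random.Random(seed)
    shuffled = list(bugs)
    rng.shuffle(shuffled)
    assignments = {name: [] for name in volunteers}
    for i, bug in enumerate(shuffled):
        assignments[volunteers[i % len(volunteers)]].append(bug)
    return assignments

# e.g. 225 bugs over 15 volunteers -> 15 bugs each
chunks = assign_bugs(range(225), ["volunteer-%d" % n for n in range(15)], seed=42)
```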
 
 The bug squashing day yesterday put a big dent in these, but it wasn't
 entirely focused on weeding out stale bugs: some people got caught up
 fixing individual bugs, and it wasn't helped by a temporary failure of
 our CI jobs (provoked by a pbr update; we were building pbr when we
 didn't need to be).
 
 thanks,
 Derek.
 


Re: [openstack-dev] [Keystone] Bug in federation

2015-01-05 Thread Marco Fargetta
Hi David,

in principle I agree with your comments. The current design mixes
different aspects up and is not manageable when the number of IdPs
gets bigger, as in the case where you must allow access to users in a
country-wide federation, especially compared to other tools supporting
identity federation.

Nevertheless, I think you have to consider the current implementation
as a fusion of the discovery protocol and the authentication. Users,
instead of being redirected to a Discovery Service providing the list
of IdPs, receive the list from keystone itself. Each endpoint URL is an
IdP the user can use to authenticate, and when one is selected the user
goes directly to the IdP, not to the DS. I am not saying this is good,
but it is acceptable from the user's point of view. There is no problem
mapping IdPs in the DS to endpoint URLs because it is done in advance.

By the way, if you change the approach and create a single URL for
authentication, then I cannot see the use of a list of trusted IdPs.
You should disable the unaccepted IdPs at a higher level, to avoid the
situation where a user authenticates with the IdP but cannot access
the service. You could work at the Apache and DS level to enable only
trusted IdPs. You would then need a better mapping in order to put your
logic there.

I think this is a significant change, and if there is agreement I think
it is possible to end up with a more flexible design.

Do you plan to propose a new spec?

Marco







On Fri, Jan 02, 2015 at 09:51:55PM +, David Chadwick wrote:
 Hi Marco
 
 I think the current design is wrong because it is mixing up access
 control with service endpoint location. The endpoint of a service should
 be independent of the access control rules determining who can contact
 the service. Any entity should be able to contact a service endpoint
 (subject to firewall rules of course, but this is out of scope of
 Keystone), and once connected, access control should then be enforced.
 Unfortunately the current design directly ties access control (which
 IdP) to the service endpoint by building the IDP name into the URL. This
 is fundamentally a bad design. Not only is it too limiting, but also it
 is mixing up different concerns, rather than separating them out, which
 is a good computer science principle.
 
 So, applying the separation of concerns principle to Keystone, the
 federated login endpoint should not be tied to any specific IdP. There
 are many practical reasons for this, such as:
 
 a) in the general case the users of an openstack service could be from
 multiple different organisations, and hence multiple different IdPs, but
 they may all need to access the same service and hence same endpoint,
 b) users who are authorised to access an openstack service might be
 authorised based on their identity attributes that are not IdP specific
 (e.g. email address), so they might have a choice of IDP to use
 c) federations are getting larger and larger, and interfederations are
 exploding the number of IdPs that users can use. The GEANT eduGAIN
 interfederation, for example, now has IdPs from about 20 countries, and
 each country can have over 100 IdPs. So we are talking about thousands
 of IdPs in a federation. It is conceivable that users from all of these
 might wish to access a given cloud service.
 
 Here is my proposal for how federation should be re-engineered
 
 1. The federation endpoint URL for Keystone can be anything intuitive
 and in keeping with existing guidelines, and should be IDP independent
 
 2. Apache will protect this endpoint with whatever federation
 protocol(s) it is able to. The Keystone administrator and Apache
 administrator will liaise out of band to determine the name of the
 endpoint and the federation protocol and IDPs that will be able to
 access it.
 
 3. Keystone will have its list of trusted IdPs as now.
 
 4. Keystone will have its mapping rules as now (although I still believe
 it would be better for mapping rules to be IDP independent, and to have
 lists of trusted attributes from trusted IDPs instead)
 
 5. Apache will return to Keystone two new parameters indicating the IdP
 and protocol that were used by the user in connecting to the endpoint.
 Apache knows what these are.
 
 6. Keystone will use these new parameters for access control and mapping
 rules. i.e. it will reject any users who are from untrusted IdPs, and it
 will determine the right mapping rule to use based on the values of the
 two new parameters. A simple table in Keystone will map the IdPs and
 protocols into the correct mapping rule to use.
 
 This is not a huge change to make, in fact it should be a rather simple
 re-engineering task.
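The "simple table" from step 6 could look something like this; a sketch only, with invented IdP, protocol, and rule names rather than Keystone's actual data model:

```python
# (idp, protocol) pairs as reported back by Apache, mapped to the
# mapping rule Keystone should apply. All names are illustrative.
MAPPING_RULES = {
    ("edugain-idp", "saml2"): "academic_mapping",
    ("corp-idp", "openid"): "corporate_mapping",
}

def select_mapping(idp, protocol):
    """Reject users from untrusted IdPs; otherwise pick the mapping rule."""
    try:
        return MAPPING_RULES[(idp, protocol)]
    except KeyError:
        raise PermissionError("untrusted IdP/protocol: %s/%s" % (idp, protocol))
```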
 
 regards
 
 David
 
 
 On 24/12/2014 17:50, Marco Fargetta wrote:
  
  On 24 Dec 2014, at 17:34, David Chadwick d.w.chadw...@kent.ac.uk wrote:
 
  If I understand the bug fix correctly, it is firmly tying the URL to the
  IDP to the mapping rule. But I think this is going in the wrong
  

Re: [openstack-dev] [Heat][API] about parameters in Create stack API

2015-01-05 Thread Steven Hardy
On Mon, Jan 05, 2015 at 06:20:47PM +0900, Hongseok Jeon wrote:
Hi,
When trying to understand Heat APIs, I am stuck by the first part, the
Create stack Heat API.
http://developer.openstack.org/api-ref-orchestration-v1.html#stacks
The Create stack API has request parameters and response parameters.
In my understanding, however, most parameters in the response part should
be in the request parameters so that they are conveyed with the HTTP
request message.
For example, param_name-n and param_value-n parameters should be
included in the request parameter part when a template has user-defined
parameters.

Yes, you're right, it's a documentation bug, thanks for pointing it out.

I've raised this bug so we can fix it:

https://bugs.launchpad.net/openstack-api-site/+bug/1407630

Thanks,

Steve



Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-05 Thread Dmitry Tantsur

On 01/05/2015 09:31 AM, Zhou, Zhenzan wrote:

Hi, Dmitry

I think this is a good project.
I got one question: what is the relationship with ironic-python-agent?
Thanks.

Hi!

No relationship right now, but I'm hoping to use IPA as the base for the
introspection ramdisk in the (near?) future.


BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Thursday, December 11, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Ironic] ironic-discoverd status update

Hi all!

As you know, I actively promote the ironic-discoverd project [1] as one of the
means to do hardware inspection for Ironic (see e.g. spec [2]), so I decided
it's worth giving some updates to the community from time to time. This email
is purely informative; you may safely skip it if you're not interested.

Background
==

The discoverd project (I usually skip the "ironic-" part when talking about it)
solves the problem of populating information about a node in the Ironic
database without the help of any vendor-specific tool. This information usually
includes Nova scheduling properties (CPU, RAM, disk size) and MACs for ports.

Introspection is done by booting a ramdisk on a node, collecting data there and
posting it back to the discoverd HTTP API. Thus discoverd actually consists of
2 components: the service [1] and the ramdisk [3]. The service handles 2 major
tasks:
* Processing data posted by the ramdisk, i.e. finding the node in the Ironic
database and updating node properties with new data.
* Managing iptables so that the default PXE environment for introspection does
not interfere with Neutron

The project was born from a series of patches to Ironic itself after we 
discovered that this change is going to be too intrusive. Discoverd was 
actively tested as part of Instack [4] and it's RPM is a part of Juno RDO. 
After the Paris summit, we agreed on bringing it closer to the Ironic upstream, 
and now discoverd is hosted on StackForge and tracks bugs on Launchpad.

Future
==

The basic feature of discoverd, supplying Ironic with the properties required
for scheduling, is pretty much finished as of the latest stable series, 0.2.

However, more features are planned for release 1.0.0 this January [5].
They go beyond the bare minimum of finding out CPU, RAM, disk size and NIC
MACs.

Plugability
~~~

An interesting feature of discoverd is support for plugins, which I prefer to
call hooks. It's possible to hook into the introspection data processing chain
in 2 places:
* Before any data processing. This opens the opportunity to adapt discoverd to
ramdisks that have a different data format. The only requirement is that the
ramdisk posts a JSON object.
* After a node is found in the Ironic database and ports are created for its
MACs, but before any actual data update. This gives an opportunity to alter
which properties discoverd is going to update.

Actually, even the default logic of updating Node.properties is contained in a
plugin - see SchedulerHook in ironic_discoverd/plugins/standard.py
[6]. This pluggability opens wide opportunities for integrating with 3rd-party
ramdisks and CMDBs (which, as we know, Ironic is not ;).
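As a rough illustration, a pre-processing hook of the kind described above might look like this. This is a sketch only: the real interface is whatever the plugins in ironic_discoverd/plugins/standard.py [6] implement, and the class and method names here are assumed:

```python
# Illustrative hook: adapt a third-party ramdisk's JSON payload before
# the standard processing runs. Names are assumed, not discoverd's API.
class RamdiskAdapterHook(object):
    """Rename vendor-specific keys to the fields discoverd expects."""

    RENAMES = {"ram_mb": "memory_mb", "nr_cpus": "cpus"}

    def before_processing(self, node_info):
        for old, new in self.RENAMES.items():
            if old in node_info:
                node_info[new] = node_info.pop(old)
        return node_info
```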

Enrolling
~

Some people have found it limiting that introspection requires power
credentials (IPMI user name and password) to be already set. The recent set of
patches [7] introduces the possibility to request a manual power-on of the
machine and update the IPMI credentials via the ramdisk to the expected values.
Note that support for this feature in the reference ramdisk [3] is not ready
yet. Also note that this scenario is only possible when using discoverd
directly via its API, not via the Ironic API as in [2].

Get Involved


Discoverd terribly lacks reviews. Our team is very small, and self-approving is
not a rare case. I'm even not against fast-tracking any existing Ironic core to
discoverd core after a couple of meaningful reviews :)

And of course patches are welcome, especially plugins for integration with
existing systems doing similar things, and CMDBs. Patches are accepted via the
usual Gerrit workflow. Ideas are accepted as Launchpad blueprints (we do not
follow the Gerrit spec process right now).

Finally, please comment on the Ironic spec [2], I'd like to know what you think.

References
==

[1] https://pypi.python.org/pypi/ironic-discoverd
[2] https://review.openstack.org/#/c/135605/
[3]
https://github.com/openstack/diskimage-builder/tree/master/elements/ironic-discoverd-ramdisk
[4] https://github.com/agroup/instack-undercloud/
[5] https://bugs.launchpad.net/ironic-discoverd/+milestone/1.0.0
[6]
https://github.com/stackforge/ironic-discoverd/blob/master/ironic_discoverd/plugins/standard.py
[7]
https://blueprints.launchpad.net/ironic-discoverd/+spec/setup-ipmi-credentials


Re: [openstack-dev] [horizon] static files handling, bower/

2015-01-05 Thread Radomir Dopieralski
On 05/01/15 00:35, Richard Jones wrote:
 On Mon Dec 22 2014 at 8:24:03 PM Radomir Dopieralski
 openst...@sheep.art.pl mailto:openst...@sheep.art.pl wrote:
 
 On 20/12/14 21:25, Richard Jones wrote:
  This is a good proposal, though I'm unclear on how the
  static_settings.py file is populated by a developer (as opposed to a
  packager, which you described).
 
 It's not, the developer version is included in the repository, and
 simply points to where Bower is configured to put the files.
 So just to be clear, as developers we:

 1. have a bower.json listing the bower component to use,
 2. use bower to fetch and install those to the bower_components
 directory at the top level of the Horizon repos checkout, and
 3. manually edit static_settings.py when we add a new bower component to
 bower.json so it knows the appropriate static files to load from that
 component.

 Is that correct?

 The above will increase the burden on those adding or upgrading bower
 components (they'll need to check the bower.json in the component for
 the appropriate static files to link in) but will make life easier for
 the re-packagers since they'll know which files they need to cater for
 in static_settings.py

Well, I expect you can tell Bower to put the files somewhere other than
the root directory of the project -- a directory like ``bower_files``
or something (that directory is also added to ``.gitignore`` so that you
don't commit it by mistake). Then only that directory needs to be added
to ``static_settings.py``. Of course, you still need to add all the
``script`` links in the appropriate places with the right URLs, but you
would have to do that anyway.

Let's look at an example. Suppose you need to add a new JavaScript library
called hipster.js. You add it to the ``bower.json`` file and run
Bower. Bower downloads the right files, does whatever it is that it
does to them, and puts them in ``bower_files/hipster-js``. Now you edit
Horizon's templates and add ``<script src="{{ STATIC_URL
}}/hipster-js/hipster.js">`` to ``_scripts.html``. That's it for you.
Since your ``static_settings.py`` file already has a line:

  ('', os.path.join(BASE_DIR, 'bower_files')),

in it, it will just work.
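To make that concrete, the whole developer-side ``static_settings.py`` could be as small as this sketch (``BASE_DIR`` stands in for wherever Horizon computes its base path; variable names are assumed):

```python
import os

# Stand-in for Horizon's real base-path computation.
BASE_DIR = os.getcwd()

# Each entry maps a URL prefix under STATIC_URL to a filesystem location.
# The empty prefix serves everything Bower installed; a packager can
# prepend more specific entries to override single files with
# distro-provided copies, e.g.:
#   ('hipster-js/hipster.js', '/usr/lib/js_libraries/bro_js/bro.js'),
STATICFILES = [
    ('', os.path.join(BASE_DIR, 'bower_files')),
]
```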

Now, suppose that a packager wants to package this for, say, Debian. And
suppose that Debian has hipster.js packaged, except it was called
bro.js before, and they left the old name for compatibility reasons.
He will look at the change history of the ``bower.json`` and
``_scripts.html`` files, take the ``static_settings.py`` file for his
distribution, and add a line:

  ('hipster-js/hipster.js', '/usr/lib/js_libraries/bro_js/bro.js'),


-- 
Radomir Dopieralski




Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-05 Thread Zhou, Zhenzan
Hi, Dmitry

I think this is a good project. 
I got one question: what is the relationship with ironic-python-agent? 
Thanks.

BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com] 
Sent: Thursday, December 11, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Ironic] ironic-discoverd status update

[original announcement snipped; it is quoted in full earlier in this digest]


Re: [openstack-dev] [horizon] static files handling, bower/

2015-01-05 Thread Richard Jones
On Mon Jan 05 2015 at 7:59:14 PM Radomir Dopieralski openst...@sheep.art.pl
wrote:

 On 05/01/15 00:35, Richard Jones wrote:
  On Mon Dec 22 2014 at 8:24:03 PM Radomir Dopieralski
  openst...@sheep.art.pl mailto:openst...@sheep.art.pl wrote:
 
  On 20/12/14 21:25, Richard Jones wrote:
   This is a good proposal, though I'm unclear on how the
   static_settings.py file is populated by a developer (as opposed to
 a
   packager, which you described).
 
  It's not, the developer version is included in the repository, and
  simply points to where Bower is configured to put the files.
  So just to be clear, as developers we:
 
  1. have a bower.json listing the bower component to use,
  2. use bower to fetch and install those to the bower_components
  directory at the top level of the Horizon repos checkout, and
  3. manually edit static_settings.py when we add a new bower component to
  bower.json so it knows the appropriate static files to load from that
  component.
 
  Is that correct?
 
  The above will increase the burden on those adding or upgrading bower
  components (they'll need to check the bower.json in the component for
  the appropriate static files to link in) but will make life easier for
  the re-packagers since they'll know which files they need to cater for
  in static_settings.py

 Well, I expect you can tell Bower to put the files somewhere else than
 in the root directory of the project -- a directory like ``bower_files``
 or something (that directory is also added to ``.gitignore`` so that you
 don't commit it by mistake). Then only that directory needs to be added
 to the ``static_settings.py``. Of course, you still need to make all the
 ``script`` links in appropriate places with the right URLs, but you
 would have to do that anyways.


Bower installs into a directory called bower_components in the current
directory, which is equivalent to your bower_files above.



 Let's look at an example. Suppose you need to a new JavaScript library
 called hipster.js. You add it to the ``bower.json`` file, and run
 Bower. Bower downloads the right files and does whatever it is that it
 does to them, and puts them in  ``bower_files/hipster-js``. Now you edit
 Horizon's templates and add ``script src={{ STATIC_URL
 }}/hipster-js/hipster.js`` to ``_scripts.html``. That's it for you.
 Since your ``static_settings.py`` file already has a line:

   ('', os.path.join(BASE_DIR, 'bower_files')),

 in it, it will just work.


Yep, except s/bower_files/bower_components :)


Now, suppose that a packager wants to package this for, say, Debian. And
 suppose that Debian has hipster.js packaged, except it was called
 bro.js before, and they left the old name for compatibility reasons.
 He will look at the change history to the ``bower.json`` and the
 ``_scripts.html`` files, take the ``static_settings.py`` file for his
 distribution, and add a line:

   ('hipster-js/hipster.js', '/usr/lib/js_libraries/bro_js/bro.js')


Ah! I had forgotten about that feature. Yep, all good :)


  Richard


[openstack-dev] [Heat][API] about parameters in Create stack API

2015-01-05 Thread Hongseok Jeon
Hi,

When trying to understand the Heat APIs, I am stuck on the first part, the
Create stack Heat API.

http://developer.openstack.org/api-ref-orchestration-v1.html#stacks

The Create stack API has request parameters and response parameters.
In my understanding, however, most parameters in the response part should
be in the request parameters, so that they are conveyed with an HTTP request
message.

For example, param_name-n and param_value-n parameters should be
included in the request parameter part when a template has user-defined
parameters.
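
For reference, a minimal create-stack request body might look like the sketch below. Field names follow the Orchestration API reference linked above, but the stack name, template URL, and parameter values are invented for illustration — user-defined template parameters travel in the request, not the response:

```python
import json

# Hypothetical create-stack request body (illustrative values only).
request_body = {
    "stack_name": "demo_stack",
    "template_url": "http://example.com/template.yaml",
    # User-defined template parameters go in the *request*:
    "parameters": {
        "flavor": "m1.small",
        "image_id": "cirros-0.3.2",
    },
    "timeout_mins": 60,
}

# This is what would be POSTed to /v1/{tenant_id}/stacks.
payload = json.dumps(request_body)
```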

BR,
Hongseok Jeon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt]How to customize cpu features in nova

2015-01-05 Thread Daniel P. Berrange
On Tue, Dec 23, 2014 at 05:27:20PM +0800, CloudBeyond wrote:
 Dear Developers,
 
 Sorry for interrupting if I sent this to the wrong mailing list, but I hit a
 problem running Solaris 10 on Icehouse OpenStack.
  I found it is necessary to disable the CPU feature x2apic so that the Solaris
 10 NIC works in KVM, as in the following snippet from libvirt.xml:
 
    <cpu mode='custom' match='exact'>
      <model>SandyBridge</model>
      <vendor>Intel</vendor>
      <feature policy='disable' name='x2apic'/>
    </cpu>
  
 Without the line
    <feature policy='disable' name='x2apic'/>
 the NIC in Solaris could not work well.
 
 And I tried to migrate the KVM libvirt XML to Nova. I found only two options
 to control the result.
 
 First I used the default setting cpu_mode = None in nova.conf; Solaris 10
 would keep rebooting before entering the desktop environment.
 
 And then I set cpu_mode = custom, cpu_model = SandyBridge. Solaris 10 could
 start up but the NIC did not work.
 
 I also set cpu_mode = host-model, cpu_model = None. Solaris 10 could work
 but the NIC did not.
 
 I read the code located
 in /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py. Is it
 possible to do some hacking to customize the CPU features?

Correct, the only way is to change the code.

Really a bug needs to be filed against QEMU to report the problem with
x2apic, because it is not something that should be allowed to break any
guest OS.
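
Until such a change lands, the XML a patched driver would need to emit could be generated along these lines. This is a hypothetical helper, not Nova's actual config classes — just a sketch of building the libvirt <cpu> element with a disabled feature:

```python
import xml.etree.ElementTree as ET

def build_cpu_element(model, vendor, disabled_features):
    """Build a libvirt <cpu> element disabling the given CPU features."""
    cpu = ET.Element("cpu", mode="custom", match="exact")
    ET.SubElement(cpu, "model").text = model
    ET.SubElement(cpu, "vendor").text = vendor
    for name in disabled_features:
        # <feature policy='disable' name='...'/>
        ET.SubElement(cpu, "feature", policy="disable", name=name)
    return ET.tostring(cpu, encoding="unicode")

# The configuration from the original report:
xml = build_cpu_element("SandyBridge", "Intel", ["x2apic"])
```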


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Why nova mounts FS for LXC container instead of libvirt?

2015-01-05 Thread Daniel P. Berrange
On Tue, Dec 30, 2014 at 05:18:19PM +0300, Dmitry Guryanov wrote:
 Hello,
 
 Libvirt can create loop or nbd device for LXC container and mount it by 
 itself, for instance, you can add something like this to xml config:
 
  <filesystem type='file'>
    <driver type='loop' format='raw'/>
    <source file='/fedora-20-raw'/>
    <target dir='/'/>
  </filesystem>
 
 But nova mounts filesystem for container by itself. Is this because rhel-6 
 doesn't support filesystems with type='file' or there are some other reasons?

The support for mounting using NBD in OpenStack pre-dated the support
for doing this in Libvirt. In fact, the reason I added this feature to
libvirt was precisely because OpenStack was doing this.

We haven't switched Nova over to use this new syntax yet though, because
that would imply a change to the min required libvirt version for LXC.
That said we should probably make such a change, because honestly no
one should be using LXC without using user namespaces, otherwise their
cloud is horribly insecure. This would imply making the min libvirt for
LXC much much newer than it is today.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] IRC logging

2015-01-05 Thread Cindy Pallares

Hi all,

I would like to re-open the discussion on IRC logging for the glance 
channel. It was discussed on a meeting back in November[1], but it 
didn't seem to have a lot of input from the community and it was not 
discussed in the mailing list. A lot of information is exchanged through 
the channel and it isn't accessible for people who occasionally come 
into our channel from other projects, new contributors, and people who 
don't want to be reached off-hours or don't have bouncers. Logging our 
channel would increase our community's transparency and make our 
development discussions publicly accessible to contributors in all 
time-zones and from other projects. It is very useful to look back on 
the logs for previous discussions, as well as to refer people to 
discussions or questions previously answered.



--Cindy

[1] 
http://eavesdrop.openstack.org/meetings/glance/2014/glance.2014-11-13-20.03.log.html


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] devstack plugins

2015-01-05 Thread Sean Dague
Just getting back from break, and looking to finish off the devstack
plugin support (especially as we have a bunch of projects that might
want to use this). The last pre-holiday patch is here:
https://review.openstack.org/#/c/142805  (the doc for how it works is in
the patch -
https://review.openstack.org/#/c/142805/3/doc/source/plugins.rst,cm for
people that want some background)

There are a few interesting comments on that patch, so maybe an ML
thread will help us get to a final approach.

1) install_plugins - currently is a one way process

Dean correctly points out that install_plugins is currently a one way
process. I actually wonder if we should change that fact and run a 'git
clean -f extras.d' before the install plugins under the principle of
least surprise. This would make removing the enable_plugin actually
remove it from the environment.

2) is_service_enabled for things that aren't OpenStack services?

Overloading ENABLED_SERVICES with things that aren't OpenStack services
is something I'd actually like to avoid. Because other parts of our tool
chain, like grenade, need some understanding of what's an openstack
service and what is not.

Maybe for things like ceph, glusterfs, opendaylight we need some other
array of features or something. Honestly I'd really like to get mysql
and rabbitmq out of the service list as well. It confuses things quite
a bit at times.
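
DevStack itself is shell, but the split being proposed can be modeled in a few lines of Python (all names here are hypothetical, not DevStack variables):

```python
# Hypothetical model of the split: OpenStack services and non-OpenStack
# backing features live in separate collections, so tools like grenade
# can reason about services without tripping over ceph or rabbitmq.
ENABLED_SERVICES = {"nova", "glance", "cinder", "horizon"}
ENABLED_FEATURES = {"ceph", "rabbitmq"}

def is_service_enabled(name):
    """True only for actual OpenStack services."""
    return name in ENABLED_SERVICES

def is_feature_enabled(name):
    """True for backing features like storage or messaging backends."""
    return name in ENABLED_FEATURES
```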

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Sahara] [Heat] validating properties of Sahara resources in Heat

2015-01-05 Thread Pavlo Shchelokovskyy
Hi all,

I would like to ask Sahara developers' opinion on two bugs raised against
Heat's resources - [1] and [2].
Below I am going to repeat some of my comments from those bugs and
associated Gerrit reviews [3] to have the conversation condensed here in ML.

In Heat's Sahara-specific resources we have such properties as
floating_ip_pool for OS::Sahara::NodeGroupTemplate [4]
and neutron_management_network for both OS::Sahara::ClusterTemplate [5] and
OS::Sahara::Cluster [6].
My questions are about when and under which conditions those properties are
required to successfully start a Sahara Cluster.

floating_ip_pool:

I was pointed that Sahara could be configured to use netns/proxy to access
the cluster VMs instead of floating IPs.

My questions are:
- Can that particular configuration setting (netns/proxy) be accessed via
saharaclient?
- What would be the result of providing floating_ip_pool when Sahara is
indeed configured with netns/proxy?
  Is it going to function normally, having just wasted several floating IPs
from quota?
- And more crucial, what would happen if Sahara is _not_ configured to use
netns/proxy and not provided with floating_ip_pool?
  Can that lead to cluster being created (at least VMs for it spawned) but
Sahara would not be able to access them for configuration?
  Would Sahara in that case kill the cluster/shutdown VMs or hang in some
cluster failed state?

neutron_management_network:
I understand the point that it is redundant to use it in both resources
(although we are stuck with deprecation period as those are part of Juno
release already).

Still, my questions are:
- would this property passed during creation of Cluster override the one
passed during creation of Cluster Template?
- what would happen if I set this property (pass it via saharaclient) when
Nova-network is in use?
- what if I _do not_ pass this property and Neutron has several networks
available?

The reason I'm asking is that in Heat we try to follow a fail-fast
approach, especially for billable resources,
to avoid situation when a (potentially huge) stack is being created and
breaks on last or second-to-last resource,
leaving user with many resources spawned (even if for a short time if the
stack rollback is enabled)
which might cost a hefty sum of money for nothing. That is why we are
trying to validate the template
as thoroughly as we can before starting to create any actual resources in
the cloud.

Thus I'm interested in finding the best possible (or least-worse)
cover-it-all strategy
for validating properties being set for these resources.

[1] https://bugs.launchpad.net/heat/+bug/1399469
[2] https://bugs.launchpad.net/heat/+bug/1402844
[3] https://review.openstack.org/#/c/141310
[4]
https://github.com/openstack/heat/blob/master/heat/engine/resources/sahara_templates.py#L136
[5]
https://github.com/openstack/heat/blob/master/heat/engine/resources/sahara_templates.py#L274
[6]
https://github.com/openstack/heat/blob/master/heat/engine/resources/sahara_cluster.py#L79

Best regards,

Pavlo Shchelokovskyy
Software Engineer
Mirantis Inc
www.mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Using DevStack for multi-node setup

2015-01-05 Thread Sean Dague
On 01/03/2015 04:41 PM, Danny Choi (dannchoi) wrote:
 Hi,
 
 I’m using DevStack to deploy OpenStack on a multi-node setup:
 Controller, Network, Compute as 3 separate nodes
 
 Since the Controller node is stacked first, during which the Network
 node is not yet ready, it fails to create the router instance and the
 public network.
 Both have to be created manually.
 
 Is this the expected behavior?  Is there a workaround to have DevStack
 create them?

The only way folks tend to run multinode devstack is Controller +
Compute nodes. And that sequence of creating an all-in-one controller,
plus additional compute nodes later, works.

Also, when running multi host (at least for nova network) we're
explicitly specifying host level direct networking, so there wouldn't be
a network node.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] devstack plugins

2015-01-05 Thread Kyle Mestery
On Mon, Jan 5, 2015 at 8:09 AM, Sean Dague s...@dague.net wrote:

 Just getting back from break, and looking to finish off the devstack
 plugin support (especially as we have a bunch of projects that might
 want to use this). The last pre-holiday patch is here:
 https://review.openstack.org/#/c/142805  (the doc for how it works is in
 the patch -
 https://review.openstack.org/#/c/142805/3/doc/source/plugins.rst,cm for
 people that want some background)

 I am very happy to see this work going in!


 There are a few interesting comments on that patch, so maybe an ML
 thread will help us get to a final approach.

 1) install_plugins - currently is a one way process

 Dean correctly points out that install_plugins is currently a one way
 process. I actually wonder if we should change that fact and run a 'git
 clean -f extras.d' before the install plugins under the principle of
 least surprise. This would make removing the enable_plugin actually
 remove it from the environment.

 2) is_service_enabled for things that aren't OpenStack services?

 Overloading ENABLED_SERVICES with things that aren't OpenStack services
 is something I'd actually like to avoid. Because other parts of our tool
 chain, like grenade, need some understanding of what's an openstack
 service and what is not.

 Maybe for things like ceph, glusterfs, opendaylight we need some other
 array of features or something. Honestly I'd really like to get mysql
 and rabbitmq out of the service list as well. It confusing things quite
 a bit at times.

 I agree with this, let's separate out the non-OpenStack things into another
array, especially if this makes things like grenade easier. But let's
definitely keep the separate array and logic similar for both as much as
we can.

Thanks,
Kyle


 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] [glance] Consistency in client side sorting

2015-01-05 Thread Jay Pipes

On 01/05/2015 10:13 AM, Steven Kaufer wrote:

The nova, cinder, and glance REST APIs support listing instances,
volumes, and images in a specific order.  In general, the REST API
supports something like:

   ?sort_key=key1&sort_dir=asc&sort_key=key2&sort_dir=desc

This sorts the results using 'key1' as the primary key (in ascending
order), 'key2' as the secondary key (in descending order), etc.

Note that this behavior is not consistent across the projects.  Nova
supports multiple sort keys and multiple sort directions, glance
supports multiple sort keys but a single direction, and cinder only
supports a single sort key and a single sort direction (approved kilo BP
to support multiple sort keys and directions is here:
https://blueprints.launchpad.net/cinder/+spec/cinder-pagination).

The purpose of this thread is to discuss how the sort information should
be inputted to the client.

In nova, (committed in kilo https://review.openstack.org/#/c/117591/)
the syntax is:  --sort key1:asc,key2:desc
In cinder, the syntax is:  --sort_key key1 --sort_dir desc
In glance, the proposed syntax (from
https://review.openstack.org/#/c/120777/) is: --sort-key key1 --sort-key
key2 --sort-dir desc

Note that the keys are different for cinder and glance (--sort_key vs.
--sort-key).  Also, client side sorting does not actually work in cinder
(fix under review at https://review.openstack.org/#/c/141964/).

Given that each of these 3 clients will be supporting client-side
sorting in kilo, it seems that we should get this implemented in a
consistent manner.  It seems that the 2 options are either:

   --sort-key key1 --sort-dir desc --sort-key key2 --sort-dir asc
   --sort key1:asc,key2:desc

Personally, I favor option 2 but IMO it is more important that these are
made consistent.

Thoughts on getting consistency across all 3 projects (and possibly others)?


Yeah, I personally like the second option as well, but agree that 
consistency is the key (pun intended) here.


I would say let's make a decision on the standard to go with (possibly 
via the API or SDK working groups?) and then move forward with support 
for that option in all three clients (and continue to support the old 
behaviour for 2 release cycles, with deprecation markings as appropriate).


Best,
-jay
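
A sketch of what parsing the second form might look like on the client side (the helper name and error handling are hypothetical, for illustration only):

```python
def parse_sort_spec(spec, default_dir="asc"):
    """Parse 'key1:asc,key2:desc' into parallel (keys, dirs) lists.

    A key without an explicit direction falls back to default_dir.
    """
    keys, dirs = [], []
    for item in spec.split(","):
        key, _, direction = item.partition(":")
        if direction and direction not in ("asc", "desc"):
            raise ValueError("invalid sort direction: %s" % direction)
        keys.append(key)
        dirs.append(direction or default_dir)
    return keys, dirs
```

The resulting lists map directly onto the repeated ``sort_key``/``sort_dir`` query parameters of the REST APIs.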


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] BPs in Launchpad for Kilo-2

2015-01-05 Thread Kyle Mestery
Happy New Year everyone!

I went through our LP BPs for Kilo-2 [1] this morning. For those which were
approved and do not have any code submitted, I've marked them as Blocked
for now. Once code arrives, we'll change the status. Given that Kilo-2 is a
month from today [2], I'm expecting some swift progress on many of these BPs
in the coming weeks.

Thanks, and if you have questions, please reply here, or find me on IRC in
#openstack-neutron.

Kyle

[1] https://launchpad.net/neutron/+milestone/kilo-2
[2] https://wiki.openstack.org/wiki/Kilo_Release_Schedule
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] IRC logging

2015-01-05 Thread Louis Taylor
On Mon, Jan 05, 2015 at 05:42:02AM -0600, Cindy Pallares wrote:
 I would like to re-open the discussion on IRC logging for the glance
 channel. It was discussed on a meeting back in November[1], but it didn't
 seem to have a lot of input from the community and it was not discussed in
 the mailing list. A lot of information is exchanged through the channel and
 it isn't accessible for people who occasionally come into our channel from
 other projects, new contributors, and people who don't want to be reached
 off-hours or don't have bouncers. Logging our channel would  increase our
 community's transparency and make our development discussions publicly
 accessible to contributors in all time-zones and from other projects. It is
 very useful to look back on the logs for previous discussions or as well as
 to refer people to discussions or questions previously answered.

As the person who brought this up previously, +1 on this. Last time it was
brought up there was no strong consensus.

Louis


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] IRC logging

2015-01-05 Thread Nikhil Komawar
Based on the feedback received, we would like to avoid logging on the project 
channel. My take from the discussion was that it gives many folks the feeling 
of an informal platform where they can express their ideas freely, in contrast 
to the meeting channels.

However, at the same time I would like to point out that using foul language in 
the open freenode channels is a bad practice. There are no admins monitoring 
our project channels; however, channels that are monitored have people kicked 
out for misbehavior. The point being, no logging means freedom of thought for 
creative purposes only; please do not take me any other way.

Thanks,
-Nikhil


From: Anita Kuno [ante...@anteaya.info]
Sent: Monday, January 05, 2015 10:42 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Glance] IRC logging

On 01/05/2015 06:42 AM, Cindy Pallares wrote:
 Hi all,

 I would like to re-open the discussion on IRC logging for the glance
 channel. It was discussed on a meeting back in November[1], but it
 didn't seem to have a lot of input from the community and it was not
 discussed in the mailing list. A lot of information is exchanged through
 the channel and it isn't accessible for people who occasionally come
 into our channel from other projects, new contributors, and people who
 don't want to be reached off-hours or don't have bouncers. Logging our
 channel would  increase our community's transparency and make our
 development discussions publicly accessible to contributors in all
 time-zones and from other projects. It is very useful to look back on
 the logs for previous discussions or as well as to refer people to
 discussions or questions previously answered.


 --Cindy

 [1]
 http://eavesdrop.openstack.org/meetings/glance/2014/glance.2014-11-13-20.03.log.html


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Hi Cindy:

You might want to consider offering a patch (you can use this one as an
example: https://review.openstack.org/#/c/138965/2) and anyone with a
strong perspective can express themselves with a vote and comment.

Thanks,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] kilo-2 status checkup at next week's meeting

2015-01-05 Thread Doug Hellmann
As mentioned in today’s meeting, we will be reviewing our kilo-2 progress at 
next week’s meeting (12 Jan 16:00 UTC #openstack-meeting-alt). Please be 
prepared to discuss the progress of your blueprints at that point. 

If you can’t make the meeting, please contact me via IRC (dhellmann) so we can 
discuss the blueprint separately.

I will default to rescheduling all blueprints to kilo-3 unless I hear from the 
owner by next week’s meeting.

Thanks,
Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] pip 6.0.6 breaking py2* jobs - bug 1407736

2015-01-05 Thread Doug Hellmann

On Jan 5, 2015, at 12:00 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

 There is a deprecation warning in pip 6.0.6 which is making the py26 (on 
 stable branches) and py27 jobs hit subunit log sizes of over 50 MB which 
 makes the job fail.
 
 A logstash query shows this started happening around 1/3 which is when pip 
 6.0.6 was released. In Nova alone there are nearly 18 million hits of the 
 deprecation warning.
 
  Should we temporarily block so that pip < 6.0.6?
 
 https://bugs.launchpad.net/nova/+bug/1407736

I think this is actually a change in pkg_resources (in the setuptools dist) 
[1], being triggered by stevedore using require=False to avoid checking 
dependencies when plugins are loaded.

Doug

[1] 
https://bitbucket.org/pypa/setuptools/commits/b1c7a311fb8e167d026126f557f849450b859502


 
 -- 
 
 Thanks,
 
 Matt Riedemann
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] pip 6.0.6 breaking py2* jobs - bug 1407736

2015-01-05 Thread Sean Dague
On 01/05/2015 12:00 PM, Matt Riedemann wrote:
 There is a deprecation warning in pip 6.0.6 which is making the py26 (on
 stable branches) and py27 jobs hit subunit log sizes of over 50 MB which
 makes the job fail.
 
 A logstash query shows this started happening around 1/3 which is when
 pip 6.0.6 was released. In Nova alone there are nearly 18 million hits
 of the deprecation warning.
 
  Should we temporarily block so that pip < 6.0.6?
 
 https://bugs.launchpad.net/nova/+bug/1407736
 

Upstream bug filed here - https://github.com/pypa/pip/issues/2326

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] pip 6.0.6 breaking py2* jobs - bug 1407736

2015-01-05 Thread Brant Knudson
I think this started happening with setuptools 10.2. We had a similar
problem in Keystone -- a change was merged that caused any use of
deprecated function to fail[1]. This caused all keystone-python27 jobs to
fail when 10.2 was used because of the now deprecated function used by
paste. I thought this would actually last for longer than a week before it
broke. To get keystone tests working again, a change was merged to only
fail for deprecation warnings from keystone[2].

An example way to fix this is to add a warnings filter to ignore
deprecations rather than logging them.

[1] https://review.openstack.org/#/c/143183/2/keystone/tests/core.py
[2] https://review.openstack.org/#/c/144810/2/keystone/tests/core.py
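
An illustrative sketch of that approach — the function and warning message are invented, and this mirrors the spirit of the keystone change rather than its exact code:

```python
import warnings

def call_deprecated_api():
    """Stand-in for a third-party call that emits DeprecationWarning."""
    warnings.warn("require=False is deprecated", DeprecationWarning)
    return 42

# Silence third-party deprecation chatter that would otherwise flood the
# subunit log; scoping it with catch_warnings keeps the filter from
# leaking into other tests.
with warnings.catch_warnings():
    warnings.simplefilter("ignore", DeprecationWarning)
    result = call_deprecated_api()
```

Our own deprecation warnings can still be escalated to errors with a more specific ``filterwarnings(..., module=...)`` entry checked ahead of the blanket ignore.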

- Brant



On Mon, Jan 5, 2015 at 11:00 AM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:

 There is a deprecation warning in pip 6.0.6 which is making the py26 (on
 stable branches) and py27 jobs hit subunit log sizes of over 50 MB which
 makes the job fail.

 A logstash query shows this started happening around 1/3 which is when pip
 6.0.6 was released. In Nova alone there are nearly 18 million hits of the
 deprecation warning.

  Should we temporarily block so that pip < 6.0.6?

 https://bugs.launchpad.net/nova/+bug/1407736

 --

 Thanks,

 Matt Riedemann


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] IRC logging

2015-01-05 Thread Cindy Pallares

I've made a patch, we can vote on it there.

https://review.openstack.org/#/c/145025/


On 01/05/2015 11:15 AM, Amrith Kumar wrote:

I think logging the channel is a benefit even if, as Nikhil points out, it is 
not an official meeting. Trove logs both the #openstack-trove channel and the 
meetings when they occur. I have also had some conversations with other ATC's 
on #openstack-oslo and #openstack-security and have found that the eavesdrop 
logs at http://eavesdrop.openstack.org/irclogs/ to be invaluable in either bug 
comments or code review comments.

The IRC channel is an integral part of communicating within the OpenStack 
community. The use of foul language and other inappropriate behavior should be 
monitored not by admins but by other members of the community and called out 
just as one would call out similar behavior in a non-virtual work environment. 
I submit to you that profanity and inappropriate conduct in an IRC channel 
constitutes a hostile work environment just as much as it does in a non-virtual 
environment.

Therefore I submit to you that there is no place for such behavior on an IRC 
channel irrespective of whether it is logged or not.

Thanks,

-amrith

| -Original Message-
| From: Morgan Fainberg [mailto:morgan.fainb...@gmail.com]
| Sent: Monday, January 05, 2015 11:58 AM
| To: OpenStack Development Mailing List (not for usage questions)
| Subject: Re: [openstack-dev] [Glance] IRC logging
|
|
|
|  On Jan 5, 2015, at 08:07, Nikhil Komawar nikhil.koma...@rackspace.com
| wrote:
| 
|  Based on the feedback received, we would like to avoid logging on the
| project channel. My take from the discussion was that it gives many a
| folks a feeling of informal platform to express their ideas freely in
| contrast to the meeting channels.
| 
|  However, at the same time I would like to point out that using foul
| language in the open freenode channels is a bad practice. There are no
| admins monitoring our project channels however, those channels that are
| monitored have people kicked out on misbehavior.  The point being, no
| logging means freedom of thought for only the creative purposes; please
| do not take me any other way.
| 
|  Thanks,
|  -Nikhil
| 
|
| I just want to point out that keystone has logging enabled for our channel
| and I do not see it as a hamper to creative discussion / open discussion.
| The logging is definitely of value. Also a lot of people will locally log
| a given irc channel, which largely nets the same result.
|
| It is still not an official meeting, and we have heated debates at times,
| the logging let's us check back on things discussed outside of the
| official meetings. I do admit it is used less frequently than the meeting
| logs.
|
| --Morgan
|
| Sent via mobile
|  
|  From: Anita Kuno [ante...@anteaya.info]
|  Sent: Monday, January 05, 2015 10:42 AM
|  To: openstack-dev@lists.openstack.org
|  Subject: Re: [openstack-dev] [Glance] IRC logging
| 
|  On 01/05/2015 06:42 AM, Cindy Pallares wrote:
|  Hi all,
| 
|  I would like to re-open the discussion on IRC logging for the glance
|  channel. It was discussed on a meeting back in November[1], but it
|  didn't seem to have a lot of input from the community and it was not
|  discussed in the mailing list. A lot of information is exchanged
|  through the channel and it isn't accessible for people who
|  occasionally come into our channel from other projects, new
|  contributors, and people who don't want to be reached off-hours or
|  don't have bouncers. Logging our channel would  increase our
|  community's transparency and make our development discussions
|  publicly accessible to contributors in all time-zones and from other
|  projects. It is very useful to look back on the logs for previous
|  discussions or as well as to refer people to discussions or questions
| previously answered.
| 
| 
|  --Cindy
| 
|  [1]
|  http://eavesdrop.openstack.org/meetings/glance/2014/glance.2014-11-13
|  -20.03.log.html
| 
| 
|  ___
|  OpenStack-dev mailing list
|  OpenStack-dev@lists.openstack.org
|  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
|  Hi Cindy:
| 
|  You might want to consider offering a patch (you can use this one as
|  an
|  example: https://review.openstack.org/#/c/138965/2) and anyone with a
|  strong perspective can express themselves with a vote and comment.
| 
|  Thanks,
|  Anita.
| 
|  ___
|  OpenStack-dev mailing list
|  OpenStack-dev@lists.openstack.org
|  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
| 
|  ___
|  OpenStack-dev mailing list
|  OpenStack-dev@lists.openstack.org
|  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
|
| ___
| OpenStack-dev mailing list
| OpenStack-dev@lists.openstack.org
| 

Re: [openstack-dev] [Glance] IRC logging

2015-01-05 Thread Morgan Fainberg


 On Jan 5, 2015, at 08:07, Nikhil Komawar nikhil.koma...@rackspace.com wrote:
 
 Based on the feedback received, we would like to avoid logging on the project 
 channel. My take from the discussion was that it gives many a folks a feeling 
 of informal platform to express their ideas freely in contrast to the meeting 
 channels.
 
 However, at the same time I would like to point out that using foul language 
 in the open freenode channels is a bad practice. There are no admins 
 monitoring our project channels however, those channels that are monitored 
 have people kicked out on misbehavior.  The point being, no logging means 
 freedom of thought for only the creative purposes; please do not take me any 
 other way.
 
 Thanks,
 -Nikhil
 

I just want to point out that keystone has logging enabled for our channel and 
I do not see it as a hamper to creative discussion / open discussion. The 
logging is definitely of value. Also a lot of people will locally log a given 
irc channel, which largely nets the same result. 

It is still not an official meeting, and we have heated debates at times, the 
logging lets us check back on things discussed outside of the official 
meetings. I do admit it is used less frequently than the meeting logs. 

--Morgan

Sent via mobile
 
 From: Anita Kuno [ante...@anteaya.info]
 Sent: Monday, January 05, 2015 10:42 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Glance] IRC logging
 
 On 01/05/2015 06:42 AM, Cindy Pallares wrote:
 Hi all,
 
 I would like to re-open the discussion on IRC logging for the glance
 channel. It was discussed on a meeting back in November[1], but it
 didn't seem to have a lot of input from the community and it was not
 discussed in the mailing list. A lot of information is exchanged through
 the channel and it isn't accessible for people who occasionally come
 into our channel from other projects, new contributors, and people who
 don't want to be reached off-hours or don't have bouncers. Logging our
 channel would  increase our community's transparency and make our
 development discussions publicly accessible to contributors in all
 time-zones and from other projects. It is very useful to look back on
 the logs for previous discussions or as well as to refer people to
 discussions or questions previously answered.
 
 
 --Cindy
 
 [1]
 http://eavesdrop.openstack.org/meetings/glance/2014/glance.2014-11-13-20.03.log.html
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 Hi Cindy:
 
 You might want to consider offering a patch (you can use this one as an
 example: https://review.openstack.org/#/c/138965/2) and anyone with a
 strong perspective can express themselves with a vote and comment.
 
 Thanks,
 Anita.
 


[openstack-dev] pip 6.0.6 breaking py2* jobs - bug 1407736

2015-01-05 Thread Matt Riedemann
There is a deprecation warning in pip 6.0.6 which is making the py26 (on 
stable branches) and py27 jobs hit subunit log sizes of over 50 MB, which 
makes the jobs fail.


A logstash query shows this started happening around 1/3 which is when 
pip 6.0.6 was released. In Nova alone there are nearly 18 million hits 
of the deprecation warning.


Should we temporarily block so that pip < 6.0.6?

https://bugs.launchpad.net/nova/+bug/1407736
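For reference, the shape of a test-side mute is just a warnings filter; a minimal sketch (illustrative only: the fix actually under discussion is pinning pip < 6.0.6 in the gate, and this filter is an assumption, not the patch attached to the bug):

```python
import warnings

def silence_deprecation_warnings():
    """Drop DeprecationWarning noise (e.g. from pip 6.0.6's vendored
    libraries) so it does not flood the subunit log."""
    warnings.filterwarnings("ignore", category=DeprecationWarning)

# Demonstration: with the filter installed, the warning is not recorded.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")        # start from a permissive state
    silence_deprecation_warnings()         # then ignore DeprecationWarning
    warnings.warn("pip internals moved", DeprecationWarning)
print(len(caught))  # 0
```

A blanket filter like this also hides legitimate deprecations, which is why the version pin is the safer short-term answer.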

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Glance] IRC logging

2015-01-05 Thread Nikhil Komawar
Appreciate the feedback, Morgan!

Cindy: I do not mind if you open up a Merge Prop for this change (link it on 
this mail loop) and ask for a poll on the gerrit review.

Thanks,
-Nikhil


From: Morgan Fainberg [morgan.fainb...@gmail.com]
Sent: Monday, January 05, 2015 11:57 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] IRC logging

 On Jan 5, 2015, at 08:07, Nikhil Komawar nikhil.koma...@rackspace.com wrote:

 Based on the feedback received, we would like to avoid logging on the project 
 channel. My take from the discussion was that it gives many folks a feeling 
 of an informal platform to express their ideas freely in contrast to the meeting 
 channels.

 However, at the same time I would like to point out that using foul language 
 in the open freenode channels is a bad practice. There are no admins 
 monitoring our project channels; however, those channels that are monitored 
 have people kicked out for misbehavior. The point being, no logging means 
 freedom of thought for only the creative purposes; please do not take me any 
 other way.

 Thanks,
 -Nikhil


I just want to point out that keystone has logging enabled for our channel and 
I do not see it as a hamper to creative discussion / open discussion. The 
logging is definitely of value. Also a lot of people will locally log a given 
irc channel, which largely nets the same result.

It is still not an official meeting, and we have heated debates at times; the 
logging lets us check back on things discussed outside of the official 
meetings. I do admit it is used less frequently than the meeting logs.

--Morgan

Sent via mobile
 
 From: Anita Kuno [ante...@anteaya.info]
 Sent: Monday, January 05, 2015 10:42 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Glance] IRC logging

 On 01/05/2015 06:42 AM, Cindy Pallares wrote:
 Hi all,

 I would like to re-open the discussion on IRC logging for the glance
 channel. It was discussed on a meeting back in November[1], but it
 didn't seem to have a lot of input from the community and it was not
 discussed in the mailing list. A lot of information is exchanged through
 the channel and it isn't accessible for people who occasionally come
 into our channel from other projects, new contributors, and people who
 don't want to be reached off-hours or don't have bouncers. Logging our
 channel would increase our community's transparency and make our
 development discussions publicly accessible to contributors in all
 time-zones and from other projects. It is very useful to look back on
 the logs for previous discussions, as well as to refer people to
 discussions or questions previously answered.


 --Cindy

 [1]
 http://eavesdrop.openstack.org/meetings/glance/2014/glance.2014-11-13-20.03.log.html


 Hi Cindy:

 You might want to consider offering a patch (you can use this one as an
 example: https://review.openstack.org/#/c/138965/2) and anyone with a
 strong perspective can express themselves with a vote and comment.

 Thanks,
 Anita.



Re: [openstack-dev] [Solum] Addition to solum core

2015-01-05 Thread Murali Allada
+1

From: Pierre Padrixe pierre.padr...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, December 30, 2014 at 5:58 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Solum] Addition to solum core

+1

2014-12-27 19:02 GMT+01:00 Devdatta Kulkarni 
devdatta.kulka...@rackspace.com:
+1


From: James Y. Li [yuel...@gmail.com]
Sent: Saturday, December 27, 2014 9:03 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Solum] Addition to solum core


+1!

-James Li

On Dec 27, 2014 2:02 AM, Adrian Otto 
adrian.o...@rackspace.com wrote:
Solum cores,

I propose the following addition to the solum-core group[1]:

+ Ed Cranford (ed--cranford)

Please reply to this email to indicate your votes.

Thanks,

Adrian Otto

[1] https://review.openstack.org/#/admin/groups/229,members Current Members



Re: [openstack-dev] [Glance] IRC logging

2015-01-05 Thread Amrith Kumar
I think logging the channel is a benefit even if, as Nikhil points out, it is 
not an official meeting. Trove logs both the #openstack-trove channel and the 
meetings when they occur. I have also had some conversations with other ATCs 
on #openstack-oslo and #openstack-security and have found the eavesdrop 
logs at http://eavesdrop.openstack.org/irclogs/ to be invaluable in either bug 
comments or code review comments.

The IRC channel is an integral part of communicating within the OpenStack 
community. The use of foul language and other inappropriate behavior should be 
monitored not by admins but by other members of the community and called out 
just as one would call out similar behavior in a non-virtual work environment. 
I submit to you that profanity and inappropriate conduct in an IRC channel 
constitutes a hostile work environment just as much as it does in a non-virtual 
environment.

Therefore I submit to you that there is no place for such behavior on an IRC 
channel irrespective of whether it is logged or not.

Thanks,

-amrith

| -Original Message-
| From: Morgan Fainberg [mailto:morgan.fainb...@gmail.com]
| Sent: Monday, January 05, 2015 11:58 AM
| To: OpenStack Development Mailing List (not for usage questions)
| Subject: Re: [openstack-dev] [Glance] IRC logging
| 
| 
| 
|  On Jan 5, 2015, at 08:07, Nikhil Komawar nikhil.koma...@rackspace.com
| wrote:
| 
|  Based on the feedback received, we would like to avoid logging on the
| project channel. My take from the discussion was that it gives many
| folks a feeling of an informal platform to express their ideas freely in
| contrast to the meeting channels.
| 
|  However, at the same time I would like to point out that using foul
| language in the open freenode channels is a bad practice. There are no
| admins monitoring our project channels; however, those channels that are
| monitored have people kicked out for misbehavior. The point being, no
| logging means freedom of thought for only the creative purposes; please
| do not take me any other way.
| 
|  Thanks,
|  -Nikhil
| 
| 
| I just want to point out that keystone has logging enabled for our channel
| and I do not see it as a hamper to creative discussion / open discussion.
| The logging is definitely of value. Also a lot of people will locally log
| a given irc channel, which largely nets the same result.
| 
| It is still not an official meeting, and we have heated debates at times;
| the logging lets us check back on things discussed outside of the
| official meetings. I do admit it is used less frequently than the meeting
| logs.
| 
| --Morgan
| 
| Sent via mobile
|  
|  From: Anita Kuno [ante...@anteaya.info]
|  Sent: Monday, January 05, 2015 10:42 AM
|  To: openstack-dev@lists.openstack.org
|  Subject: Re: [openstack-dev] [Glance] IRC logging
| 
|  On 01/05/2015 06:42 AM, Cindy Pallares wrote:
|  Hi all,
| 
|  I would like to re-open the discussion on IRC logging for the glance
|  channel. It was discussed on a meeting back in November[1], but it
|  didn't seem to have a lot of input from the community and it was not
|  discussed in the mailing list. A lot of information is exchanged
|  through the channel and it isn't accessible for people who
|  occasionally come into our channel from other projects, new
|  contributors, and people who don't want to be reached off-hours or
|  don't have bouncers. Logging our channel would increase our
|  community's transparency and make our development discussions
|  publicly accessible to contributors in all time-zones and from other
|  projects. It is very useful to look back on the logs for previous
|  discussions, as well as to refer people to discussions or questions
| previously answered.
| 
| 
|  --Cindy
| 
|  [1]
|  http://eavesdrop.openstack.org/meetings/glance/2014/glance.2014-11-13
|  -20.03.log.html
| 
| 
|  Hi Cindy:
| 
|  You might want to consider offering a patch (you can use this one as
|  an
|  example: https://review.openstack.org/#/c/138965/2) and anyone with a
|  strong perspective can express themselves with a vote and comment.
| 
|  Thanks,
|  Anita.
| 


Re: [openstack-dev] [all] Proper use of 'git review -R'

2015-01-05 Thread Carl Baldwin
On Tue, Dec 30, 2014 at 9:37 AM, Jeremy Stanley fu...@yuggoth.org wrote:
 On 2014-12-30 09:46:35 -0500 (-0500), David Kranz wrote:
 [...]
 Can someone explain when we should *not* use -R after doing 'git
 commit --amend'?
 [...]

 In the standard workflow this should never be necessary. The default
 behavior in git-review is to attempt a rebase and then undo it
 before submitting. If the rebase shows merge conflicts, the push
 will be averted and the user instructed to deal with those
 conflicts. Using -R will skip this check and allow you to push
 changes which can't merge due to conflicts.

tl;dr:  I suggest an enhancement to git review which will help us
avoid unintentionally uploading new patch sets when a change depends
on another change.

I've been thinking about this a bit since I had a discussion in the
infra room last month.  I have been using --no-rebase every time I run
git review and I've been telling others to do the same.  I even
proposed setting defaultrebase to 0 for the neutron project [1].  At
that time, I learned that this is expected to be the default for
current versions of git review.

I had a few experiences during the development of the DVR feature this
past summer that leave me believing that there is still a problem.  I
saw a few cases where multiple authors were working on dependent
patches and one author's rebase of an older dependency clobbered newer
changes.  This required me to step in and manually find and restore
the clobbered changes.  Things got better when I asked all of the
authors to always use --no-rebase and we manually managed necessary
rebases due to merge conflicts independently of other changes to the
patch sets.

I haven't had time to dig up all of the details about what happened.
I will try to find some time to do that soon.  However, I have an idea
of where the problem is...

The problem happens when a chain of dependencies is rebased together
to master.  This creates new versions of dependencies as well as the
top patch.  The new version of the dependency might actually be a
rebased version of an older patch set.  When this new version is
uploaded, it clobbers changes to the dependency.  I think this is
generally the wrong thing to do; especially when a patch set chain has
multiple authors.

This is not the way gerrit rebases when you use the rebase button in
the UI.  Gerrit will rebase a patch set to the latest patch set of the
change on which it depends. If there is no dependency, then it will
rebase to master.

I'm not sure if this is git review's fault or not.  I know in older
versions of git review it was at fault.  More recent incidents could
have been due to manually initiated rebases which were done
incorrectly.  However, I had the impression that git review would do
rebases in this way and our problems on DVR seemed to stop when I
trained the team to use --no-rebase.

*** I can suggest an enhancement to git review which will help out in
this situation.  The change is around how git review warns about
uploading multiple patch sets.  It doesn't seem to be smart enough to
tell when it will actually upload a new version of a dependency.  That
is, it warns even when the commit id of a dependency matches one that
is already in gerrit as if it were going to create a new patch set.
It is impossible to tell -- without manually checking out of band --
if it is *really* going to create a new patch set.  I doubt many
people (besides me) actually bother to go to gerrit to compare commit
ids to see what will really happen.

Git review should check the commit ids in gerrit.  It should not stop
to warn about commit ids that already exist in gerrit.  Then, it
should warn a bit *louder* about commit ids which are not in gerrit
because many people have become desensitized to the current warning.
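Stripped of the git plumbing, the proposed check is just set membership over commit ids; a minimal sketch (the function name is illustrative, and feeding it the output of something like `git ls-remote origin 'refs/changes/*'` is an assumption, not git-review's actual code):

```python
def classify_chain(local_chain, gerrit_shas):
    """Split a local dependency chain (oldest to newest commit ids) into
    commits Gerrit already knows (pushing them creates no new patch set)
    and commits that would upload a new patch set."""
    known = set(gerrit_shas)
    existing = [sha for sha in local_chain if sha in known]
    new = [sha for sha in local_chain if sha not in known]
    return existing, new

# A chain of three commits where only the newest is unknown to Gerrit:
existing, new = classify_chain(
    ["aaa1", "bbb2", "ccc3"], {"aaa1", "bbb2", "dead"})
print(existing)  # ['aaa1', 'bbb2'] -- silent: already in gerrit
print(new)       # ['ccc3'] -- warn loudly: this creates a new patch set
```

The point is that only the second list deserves a warning, and it deserves a louder one than today's blanket multi-commit prompt.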

Another habit that I have developed is to always download the latest
version of a patch set, work on it fairly quickly, and then upload it
again.  I don't keep a lot of WIP locally for extended periods of
time.  I never know when someone is going to depend on a patch of mine
and rebase it -- whether intentionally or not -- and upload the
rebased version.

I've dreamed about adding features to git/gerrit to manage patch set
iterations within a change, and dependent changes more formally so
that these problems can be more easily detected and handled by the
tools.  I think if git rebase could leave some sort of soft trail it
might help but I haven't thought through this completely.  I can see
problems with how to handle this in a distributed way.

Carl

[1] https://review.openstack.org/#/c/140863/



Re: [openstack-dev] [Fuel] fuel master monitoring

2015-01-05 Thread Andrew Woodward
There are two threads here that need to be unraveled from each other.

1. We need to prevent fuel from doing anything if the OS is out of
disk space. It leads to a very broken database that requires
a developer to reset it to a usable state.
From this point we need to
* develop a method for locking down the DB writes so that fuel becomes
RO until space is freed
* develop a method (or re-use existing) to notify the user that a
serious error state exists on the host. ( that could not be dismissed)
* we need some API that can lock / unlock the DB
* we need some monitor process that will trigger the lock/unlock

2. We need monitoring for the master node and fuel components in
general as discussed at length above.
Unless we intend to use this to also monitor the services on deployed
nodes (likely bad), what we use to do this is irrelevant to
getting this started. If we are intending to use this to also monitor
deployed nodes, (again bad for the fuel node to do) then we need to
standardize with what we monitor the cloud with (Zabbix currently) and
offer a single pane of glass. Federation in the monitoring becomes a
critical requirement here as having more than one pane of glass is an
operations nightmare.

Completing #1 is very important in the near term as I have had to
un-brick several deployments over it already. Also, in my mind these
are separate tasks.
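The monitor/lock loop for point 1 can be very small; a sketch (the `lock_db`/`unlock_db` callables stand in for the proposed, not-yet-existing lock API, and the 5% threshold is an arbitrary assumption):

```python
import shutil

def free_fraction(path="/"):
    """Fraction of the filesystem holding `path` that is still free."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total

def enforce_db_lock(lock_db, unlock_db, threshold=0.05, path="/"):
    """Put the database into read-only mode when free space on `path`
    drops below `threshold`; release the lock otherwise."""
    if free_fraction(path) < threshold:
        lock_db()
        return "locked"
    unlock_db()
    return "unlocked"
```

Run periodically (cron, or whichever monitor is chosen for point 2) so Fuel stops writing before the database corrupts itself, and pair it with the non-dismissable user notification described above.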

On Thu, Nov 27, 2014 at 1:19 AM, Simon Pasquier spasqu...@mirantis.com wrote:
 I've added another option to the Etherpad: collectd can do basic threshold
 monitoring and run any kind of scripts on alert notifications. The other
 advantage of collectd would be the RRD graphs for (almost) free.
 Of course since monit is already supported in Fuel, this is the fastest path
 to get something done.
 Simon

 On Thu, Nov 27, 2014 at 9:53 AM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Is it possible to send http requests from monit, e.g. for creating
 notifications?
 I scanned through the docs and found only alerts for sending mail;
 also, where will the token (username/pass) for monit be stored?

 Or maybe there is another plan? without any api interaction

 On Thu, Nov 27, 2014 at 9:39 AM, Przemyslaw Kaminski
 pkamin...@mirantis.com wrote:

 This I didn't know. It's true in fact, I checked the manifests. Though
 monit is not deployed yet because of lack of packages in Fuel ISO. Anyways,
 I think the argument about using yet another monitoring service is now
 rendered invalid.

 So +1 for monit? :)

 P.


 On 11/26/2014 05:55 PM, Sergii Golovatiuk wrote:

 Monit is easy and is used to control states of Compute nodes. We can
 adopt it for master node.

 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 On Wed, Nov 26, 2014 at 4:46 PM, Stanislaw Bogatkin
 sbogat...@mirantis.com wrote:

 As for me - zabbix is overkill for one node. Zabbix Server + Agent +
 Frontend + DB + HTTP server, and all of it for one node? Why not use
 something that was developed for monitoring one node, doesn't have many
 deps, and works out of the box? Not necessarily Monit, but something similar.

 On Wed, Nov 26, 2014 at 6:22 PM, Przemyslaw Kaminski
 pkamin...@mirantis.com wrote:

 We want to monitor Fuel master node while Zabbix is only on slave nodes
 and not on master. The monitoring service is supposed to be installed on
 Fuel master host (not inside a Docker container) and provide basic info
 about free disk space, etc.

 P.


 On 11/26/2014 02:58 PM, Jay Pipes wrote:

 On 11/26/2014 08:18 AM, Fox, Kevin M wrote:

 So then in the end, there will be 3 monitoring systems to learn,
 configure, and debug? Monasca for cloud users, zabbix for most of the
 physical systems, and sensu or monit to be small?

 Seems very complicated.

 If not just monasca, why not the zabbix thats already being deployed?


 Yes, I had the same thoughts... why not just use zabbix since it's
 used already?

 Best,
 -jay


Re: [openstack-dev] [Fuel] About deployment progress calculation

2015-01-05 Thread Andrew Woodward
Sorry for the necro-post but I think it's important to note that as we
get more progress with granular roles, we get specific tasks, and
time-out durations for not just plugin tasks, but all tasks. As
Evegniy noted, we should be able to use this as a calibration of the
ceiling of the progress bars. Without other data, we could assume that
1/2 the timeout is the expected run time of the task

I think we can and should use data from the stats collection to figure
the average, and possibly re-calibrate the timeouts for the tasks.
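The half-the-timeout heuristic is easy to state precisely; a sketch of the calibration (the dict field names are assumptions, not Nailgun's actual task schema):

```python
def estimate_progress(tasks, elapsed):
    """Progress fraction for a serial task chain, taking half of each
    task's timeout as its expected duration -- the fallback calibration
    when no fuel-stats history is available yet."""
    expected_total = sum(t["timeout"] / 2.0 for t in tasks)
    if expected_total <= 0:
        return 1.0
    return min(1.0, elapsed / expected_total)

tasks = [{"name": "deploy-controller", "timeout": 1200},
         {"name": "deploy-compute", "timeout": 600}]
print(estimate_progress(tasks, 450))  # 0.5 -- halfway through the estimate
```

Once stats collection produces real averages, the `timeout / 2` term can be replaced per task without changing the rest of the calculation.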

On Fri, Nov 28, 2014 at 5:21 AM, Evgeniy L e...@mirantis.com wrote:
 Hi Dmitry,

 I totally agree that the current approach won't work (and doesn't work
 well).

 I have several comments:

 Each task will provide estimated time

 1. Each task has a timeout; let's use it as an estimate. I don't think
 that we should ask to provide both of these fields; execution
 estimate depends on hardware, my suggestion is to keep it
 simple and solve the problem internally with information from
 timeout field.
 2. I would like to clarify the implementation a bit more: what is
 the time delta of the task? I think that the task executor
 (orchestrator/astute/mistral)
 shouldn't provide any information except status of the task,
 it should be simple interface, like task_uuid: 1, status: running
 and Nailgun on its side should do all of the magic with progress
 calculation.

 Thanks,

 On Tue, Oct 28, 2014 at 10:29 AM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Hello everyone,

 I want to raise concerns about progress bar, and its usability.
 In my opinion current approach has several downsides:
 1. No valuable information
 2. Very fragile, you need to change code in several places not to break it
 3. Will not work with plugable code

 Log parsing works under one basic assumption - that we are in control of
 all tasks, so we can use mappings to logs with a certain pattern.
 It won't work with a pluggable architecture, and I am talking not about
 fuel-plugins and the way it will be done in 6.0, but the whole idea of a
 pluggable architecture; I assume that internal features will be implemented
 as granular self-contained plugins, and it will be possible to accomplish
 this not only with puppet, but with any other tool that suits you.
 Asking the person who will provide a plugin (extension) to add mappings to
 logs feels like the weirdest thing ever.

 What can be done to improve usability of progress calculation?
 I see here several requirements:
 1. Provide valuable information
   - Correct representation of the time a task takes to run
   - What is going on on the target node at any point of the deployment?
 2. Plugin friendly, meaning that the approach we take should be flexible
 and extendable

 Implementation:
 In the near future deployment will be split into tasks; they will be
 big, not granular
 (like deploy controller, deploy compute), but this does not matter,
 because we can start to estimate them.
 Each task will provide estimated time.
 At first it will be manually set by the person who develops the plugin
 (tasks), but it can be improved so that this information is provided
 automatically (or semi-automatically) by the fuel-stats application.
 It will require orchestrator to report 2 simple entities:
 - time delta of the task
 - task identity
 UI will be able to show percents anyway, but additionally it will show
 what is running on target node.

 Of course it is not about 6.0, but please take a look, and let's try to
 agree on what is the right way to solve this task, because log parsing
 will not work with a data-driven orchestrator and a pluggable architecture.
 Thank you





-- 
Andrew
Mirantis
Ceph community



[openstack-dev] [Neutron][L3] Stop agent scheduling without stopping services

2015-01-05 Thread Itsuro ODA
Neutron experts,

I want to stop scheduling to a specific {dhcp|l3}_agent without
stopping router/dhcp services on it.
I expected setting admin_state_up of the agent to False to meet
this demand. But this operation stops all services on the agent
in actuality. (Is this behavior intended ? It seems there is no
document for agent API.)

I think admin_state_up of agents should affect only scheduling.
If it is accepted I will submit a bug report and make a fix.

Or should I propose a blueprint for adding a function to stop
scheduling to an agent without stopping the services on it?
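The proposed semantics boil down to one extra filter in the scheduler; a sketch of the intended behaviour (the dict keys are illustrative, not the actual agent schema):

```python
def schedulable_agents(agents):
    """Agents eligible for *new* router/network bindings: alive and
    administratively up. Existing bindings on an admin-down agent are
    left alone, so its routers/DHCP services keep running."""
    return [a for a in agents
            if a.get("alive") and a.get("admin_state_up", True)]

agents = [
    {"host": "net-1", "alive": True, "admin_state_up": True},
    {"host": "net-2", "alive": True, "admin_state_up": False},  # drained
    {"host": "net-3", "alive": False, "admin_state_up": True},  # dead
]
print([a["host"] for a in schedulable_agents(agents)])  # ['net-1']
```

Under this reading, setting admin_state_up to False drains an agent for maintenance without disrupting tenants, which is the behaviour asked for above.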

I'd like to hear neutron experts' suggestions.

Thanks.
Itsuro Oda
-- 
Itsuro ODA o...@valinux.co.jp




Re: [openstack-dev] [neutron] Need help getting DevStack setup working for VPN testing

2015-01-05 Thread Anita Kuno
On 01/02/2015 08:43 AM, Paul Michali (pcm) wrote:
 To summarize what I’m trying to do with option (A)…
 
 I want to test VPN in DevStack by setting up two private networks, two 
 routers, and a shared public network. The VMs created in the private networks 
 should be able to access the public network, but not the other private 
 network (e.g. VM on private-A subnet can ping public interface of router2 on 
 private-B subnet)
 
  [ASCII topology diagram garbled in the archive; only VM-a is recoverable]
 
 
 Do I need to create the second router and private network using a different 
 tenant?
 Do I need to setup security group rules to allow the access desired?
 What local.conf settings do I need for this setup (beyond what I have below)?
 
 I’ve been trying so many different combinations (using both single and two 
 devstack setups, trying provider net, using single/multiple tenants) and have 
 been getting a variety of different results, from unexpected ping results, to 
 VMs stuck in power state PAUSED, that I’m lost as to how to set this up. I 
 think I’m hung up on the security group rules and how to setup the bridges.
 
 What I’d like to do, is just focus on this option (A) - using a single 
 devstack with multiple routers, and see if that works. If not, I can focus on 
 option (B), using two devstacks/hosts.
 
 Since I’m pretty much out of ideas on how to fix this for now, I’m going to 
 try to see if I can get on a bare metal setup, which has worked in the past.
 
 Any ideas? I’d like to verify VPNaaS reference implementation with the new 
 repo changes. Been spending some time over the holiday vacation playing with 
 this, with no joy. :(
 
 
 PCM (Paul Michali)
 
 MAIL …..…. p...@cisco.com
 IRC ……..… pc_m (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83

Hi Paul:

It might be worth your while to add an agenda item to the infra meeting
agenda https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting

It might help you get a sense of what is necessary to fill the gaps
either in tech or knowledge.

Thanks,
Anita.
 
 
 
 
 On Dec 31, 2014, at 2:35 PM, Paul Michali (pcm) 
 p...@cisco.com wrote:
 
 Just more data…
 
 I keep consistently seeing that on private subnet, the VM can only access 
 router (as expected), but on privateB subnet, the VM can access the private 
 I/F of router1 on private subnet. From the router’s namespace, I cannot ping 
 the local VM (why not?). Oddly, I can ping router1’s private IP from router2 
 namespace!
 
 I tried these commands to create security group rules (are they wrong?):
 
 # There are two default groups created by DevStack
 group=`neutron security-group-list | grep default | cut -f 2 -d' ' | head -1`
 neutron security-group-rule-create --protocol ICMP $group
 neutron security-group-rule-create --protocol tcp --port-range-min 22 
 --port-range-max 22 $group
 group=`neutron security-group-list | grep default | cut -f 2 -d' ' | tail -1`
 neutron security-group-rule-create --protocol ICMP $group
 neutron security-group-rule-create --protocol tcp --port-range-min 22 
 --port-range-max 22 $group
 
 The only change that happens, when I do these commands, is that the VM in 
 privateB subnet can now ping the VM from private subnet, but not vice versa. 
 From router1 namespace, it can then access local VMs. From router2 namespace 
 it can access local VMs and VMs in private subnet (all access).
 
 It seems like I have some issue with security groups, and I need to square 
 that away, before I can test VPN out.
 
 Am I creating the security group rules correctly?
 My goal is that the private nets can access the public net, but not each 
 other (until VPN connection is established).
 
 Lastly, in this latest try, I set OVS_PHYSICAL_BRIDGE=br-ex. In earlier runs 
 w/o that, there were QVO interfaces, but no QVB or QBR interfaces at all. It 
 didn’t seem to change connectivity, however.
 
 Ideas?
 
 PCM (Paul Michali)
 
 MAIL …..…. p...@cisco.com
 IRC ……..… pc_m (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
 
 
 
 
 On Dec 31, 2014, at 10:33 AM, Paul Michali (pcm) 
 p...@cisco.com wrote:
 
 I’ve been playing a bit with trying to get VPNaaS working post-repo split, 
 and haven’t been successful. I’m trying it a few ways with DevStack, and I’m 
 not sure whether I have a config error, setup issue, or there is something 
 due to the split.
 
 In the past (and it’s been a few months since I verified VPN operation), I 
 used two bare metal machines and an external switch connecting them. With a 
 DevStack cloud running on each. That configuration is currently setup for a 
 vendor VPN solution, so I wanted to try different methods to test the 
 reference VPN implementation. I’ve got two ideas to do this:
 
 A) Run DevStack and create two routers with 

[openstack-dev] [heat][oslo] heat unit tests mocking private parts of oslo.messaging

2015-01-05 Thread Doug Hellmann
As part of updating oslo.messaging to move it out of the oslo namespace package 
I ran into some issues with heat. While debugging, I tried running the heat 
unit tests using the modified version of oslo.messaging and ran into test 
failures because the tests are mocking private parts of the library that are 
moving to have new names.

Mocking internal parts of Oslo libraries isn’t supported, and so I need someone 
from the heat team to work with me to fix the heat tests and possibly add 
missing fixtures to oslo.messaging to avoid breaking heat when we release the 
updated oslo.messaging. I tried raising attention on IRC in #heat but I think 
I’m in the wrong timezone compared to most of the heat devs.

Here’s an example of one of the failing tests:

==
FAIL: 
heat.tests.test_stack_lock.StackLockTest.test_failed_acquire_existing_lock_engine_alive
tags: worker-3
--
Traceback (most recent call last):
  File heat/tests/test_stack_lock.py, line 84, in 
test_failed_acquire_existing_lock_engine_alive
self.m.StubOutWithMock(messaging.rpc.client._CallContext, call)
AttributeError: 'module' object has no attribute '_CallContext'
Traceback (most recent call last):
_StringException: Empty attachments:
  pythonlogging:'alembic'
  pythonlogging:'cliff'
  pythonlogging:'heat-provision'
  pythonlogging:'heat_integrationtests'
  pythonlogging:'heatclient'
  pythonlogging:'iso8601'
  pythonlogging:'keystoneclient'
  pythonlogging:'migrate'
  pythonlogging:'neutronclient'
  pythonlogging:'novaclient'
  pythonlogging:'oslo'
  pythonlogging:'oslo_config'
  pythonlogging:'oslo_messaging'
  pythonlogging:'requests'
  pythonlogging:'routes'
  pythonlogging:'saharaclient'
  pythonlogging:'sqlalchemy'
  pythonlogging:'stevedore'
  pythonlogging:'swiftclient'
  pythonlogging:'troveclient'

pythonlogging:'': {{{WARNING [heat.engine.environment] Changing 
AWS::CloudWatch::Alarm from OS::Heat::CWLiteAlarm to OS::Heat::CWLiteAlarm}}}

Traceback (most recent call last):
  File heat/tests/test_stack_lock.py, line 84, in 
test_failed_acquire_existing_lock_engine_alive
self.m.StubOutWithMock(messaging.rpc.client._CallContext, call)
AttributeError: 'module' object has no attribute '_CallContext'

Traceback (most recent call last):
_StringException: Empty attachments:
  pythonlogging:'alembic'
  pythonlogging:'cliff'
  pythonlogging:'heat-provision'
  pythonlogging:'heat_integrationtests'
  pythonlogging:'heatclient'
  pythonlogging:'iso8601'
  pythonlogging:'keystoneclient'
  pythonlogging:'migrate'
  pythonlogging:'neutronclient'
  pythonlogging:'novaclient'
  pythonlogging:'oslo'
  pythonlogging:'oslo_config'
  pythonlogging:'oslo_messaging'
  pythonlogging:'requests'
  pythonlogging:'routes'
  pythonlogging:'saharaclient'
  pythonlogging:'sqlalchemy'
  pythonlogging:'stevedore'
  pythonlogging:'swiftclient'
  pythonlogging:'troveclient'

pythonlogging:'': {{{WARNING [heat.engine.environment] Changing 
AWS::CloudWatch::Alarm from OS::Heat::CWLiteAlarm to OS::Heat::CWLiteAlarm}}}

Traceback (most recent call last):
  File heat/tests/test_stack_lock.py, line 84, in 
test_failed_acquire_existing_lock_engine_alive
self.m.StubOutWithMock(messaging.rpc.client._CallContext, call)
AttributeError: 'module' object has no attribute '_CallContext'


That class _CallContext isn’t part of the public API for oslo.messaging, and so 
it is not being exposed through the redirect modules I’m creating for backwards 
compatibility. We need to look for a way to create a fixture to do whatever it 
is these tests are trying to do — I don’t understand the tests, which is why I 
need a heat developer to help out.
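To make the failure mode concrete, here is a small self-contained sketch. The Client class below is a stand-in, not the real oslo.messaging API: stubbing a private name like _CallContext couples the test to internals that can be renamed, while patching through the public entry point survives such moves.

```python
# Illustrative only: Client stands in for an oslo.messaging-style class;
# it is NOT the real library API.
from unittest import mock

class Client:
    """Pretend public API; internally it would use a private _CallContext."""
    def call(self, ctxt, method):
        raise RuntimeError('would hit the message bus')

# Instead of self.m.StubOutWithMock(messaging.rpc.client._CallContext, 'call'),
# patch the public method, which keeps working across internal renames:
with mock.patch.object(Client, 'call', return_value='alive') as mocked:
    result = Client().call({}, 'listening')

mocked.assert_called_once_with({}, 'listening')
print(result)  # -> alive
```

A library-provided fixture could wrap exactly this kind of public-surface patching so consumers never need the private names.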

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] how to delete a volume which is in error_deleting state

2015-01-05 Thread Asselin, Ramy
HI Eli,

In the UI, go to Admin -> Volumes and select the desired volume.


Then click the drop down for the volume and pick “Update Volume Status”.


You can then change the status to anything you like.


Ramy

From: Eli Qiao [mailto:ta...@linux.vnet.ibm.com]
Sent: Sunday, January 04, 2015 9:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] how to delete a volume which is in 
error_deleting state


On 2015-01-05 13:10, Asselin, Ramy wrote:
Before getting into the database, try the cinder reset-state command. It’s 
available in the horizon ui starting in Juno.
Otherwise use the command line [1].

Ramy
[1] http://docs.openstack.org/cli-reference/content/cinderclient_commands.html

Hi Punith, the command line is really helpful, thanks.
but I am not sure I found it in the horizon ui (latest upstream version); is there a 
button or menu to call cinder reset-state in the ui?
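As a sketch of the same reset from Python: python-cinderclient's volumes.reset_state(volume, state) is the call the cinder reset-state CLI wraps (admin only). The fake client below stands in for an authenticated cinderclient Client, so the wiring is illustrative rather than a real session.

```python
# Hedged sketch: resetting a stuck volume via python-cinderclient's
# reset_state() instead of editing the database directly. The _Fake*
# classes below are stand-ins for an authenticated cinder client.

def force_reset(client, volume_id, state='available'):
    """Ask cinder-api to overwrite a volume's status (admin-only call)."""
    client.volumes.reset_state(volume_id, state)
    return state

class _FakeVolumes:
    def __init__(self):
        self.calls = []
    def reset_state(self, volume, state):
        # The real VolumeManager would POST an os-reset_status action here.
        self.calls.append((volume, state))

class _FakeClient:
    def __init__(self):
        self.volumes = _FakeVolumes()

client = _FakeClient()
force_reset(client, '7039c683-2341-4dd7-a947-e35941245ec4')
print(client.volumes.calls[0][1])  # -> available
```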


From: Punith S [mailto:punit...@cloudbyte.com]
Sent: Sunday, January 04, 2015 9:02 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] how to delete a volume which is in 
error_deleting state

Hi Eli,

you have to log in to the MySQL cinder database and try deleting the required 
volume from the volumes table using its id.
if that fails due to foreign key constraints in the volume metadata table, delete 
the corresponding volume metadata rows first and then delete the required 
volume row.
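The deletion order above can be sketched with an in-memory sqlite database. This is a simplified stand-in for the real cinder schema (the actual tables have many more columns and use soft deletes), so treat it as an illustration of the child-rows-first rule only.

```python
# Simplified sketch of the FK-safe deletion order: volume_metadata rows
# first, then the volume row. Tables are stand-ins, not cinder's schema.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('PRAGMA foreign_keys = ON')
conn.execute('CREATE TABLE volumes (id TEXT PRIMARY KEY, status TEXT)')
conn.execute('CREATE TABLE volume_metadata ('
             '  id INTEGER PRIMARY KEY, volume_id TEXT,'
             '  FOREIGN KEY (volume_id) REFERENCES volumes (id))')
conn.execute("INSERT INTO volumes VALUES ('7039c683', 'error_deleting')")
conn.execute("INSERT INTO volume_metadata (volume_id) VALUES ('7039c683')")

def purge_volume(conn, volume_id):
    # Children first, then the parent row, so the FK constraint holds.
    conn.execute('DELETE FROM volume_metadata WHERE volume_id = ?',
                 (volume_id,))
    conn.execute('DELETE FROM volumes WHERE id = ?', (volume_id,))
    conn.commit()

purge_volume(conn, '7039c683')
remaining = conn.execute('SELECT COUNT(*) FROM volumes').fetchone()[0]
print(remaining)  # -> 0
```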

thanks

On Mon, Jan 5, 2015 at 7:22 AM, Eli Qiao ta...@linux.vnet.ibm.com wrote:

hi all,
how to delete a cinder volume which is in error_deleting status ?
I don't find force delete options in 'cinder delete',  then how we fix it if we 
got such situation ?
[tagett@stack-01 devstack]$ cinder list
+--------------------------------------+----------------+-------------+------+-------------+----------+--------------------------------------+
|                  ID                  |     Status     |     Name    | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+----------------+-------------+------+-------------+----------+--------------------------------------+
| 3e0acd0a-f28f-4fe3-b6e9-e65d5c40740b |     in-use     | with_cirros |  4   | lvmdriver-1 |   true   | 428f0235-be54-462f-8916-f32965d42e63 |
| 7039c683-2341-4dd7-a947-e35941245ec4 | error_deleting |     None    |  4   | lvmdriver-1 |  false   |                                      |
| d576773f-6865-4959-ba26-13602ed32e89 | error_deleting |     None    |  4   | lvmdriver-1 |  false   |                                      |
+--------------------------------------+----------------+-------------+------+-------------+----------+--------------------------------------+
[tagett@stack-01 devstack]$ cinder delete  7039c683-2341-4dd7-a947-e35941245ec4
Delete for volume 7039c683-2341-4dd7-a947-e35941245ec4 failed: Bad Request 
(HTTP 400) (Request-ID: req-e4d8cdd9-6ed5-4a7f-81de-7f38f2163d33)
ERROR: Unable to delete any of specified volumes.




--

Thanks,

Eli (Li Yong) Qiao




--
regards,

punith s
cloudbyte.com







--

Thanks,

Eli (Li Yong) Qiao


[openstack-dev] [stable-maint] acceptable changes for stable/icehouse

2015-01-05 Thread Jay S. Bryant

All,

We have been getting a number of non-security patches proposed to 
stable/icehouse for Cinder.  The cores were discussing what was ok to 
put into icehouse at this point in time and couldn't agree as to whether 
we were at the point of only accepting security changes.


Appreciate advice on this topic.

Jay



Re: [openstack-dev] [nova] review closure for nova blueprint review.openstack.org/#/c/140133/

2015-01-05 Thread Joe Cropper
Prefixing subject with [nova] so folks’ mail rules catch this.

- Joe

 On Jan 5, 2015, at 1:47 PM, Kenneth Burger burg...@us.ibm.com wrote:
 
 Hi, I am trying to get approval on this nova blueprint, 
 https://review.openstack.org/#/c/140133/.  There was a +2 from Michael 
 Still (twice in prior patches) and a +1 from Jay Bryant from a cinder 
 perspective. The only changes from the patches receiving the +2 were 
 related to a directory change of the spec location in the repository.
 
 Is it still possible to get approval for this blueprint?
 
 Thanks,
 Ken Burger
 



Re: [openstack-dev] [Fuel] Image based provisioning

2015-01-05 Thread Andrew Woodward
Here is a list of the issues I ran into using IBP before the 23rd. Issue 5
appears not to be merged yet and must be resolved prior to making IBP
the default, as you can't restart a provisioned node.

1. a full cobbler template is generated for the IBP node, if you
wanted to re-prov the node, you would have to erase the cobbler
profile, bootstrap and call the node provision api. If you forced it
back to netboot (which can be done with installer methods) it loads
the installer instead of the bootstrap image

2. We need to be careful when considering removing cobbler from fuel;
it's still being used in IBP to manage dnsmasq (dhcp leases for the
fuelweb_admin iface) and bootp/PXE loading profiles

3. After a time, all DNS names for nodes expire (ssh node-1 -> Could
not resolve hostname) even though they are still in cobbler (cobbler
system list)

4. fuel-agent log is not in logs UI

5. image based nodes won't set up network after first boot
https://bugs.launchpad.net/fuel/+bug/1398207

6. image-based nodes' network settings are basically impossible to read
unless you know everything about cloud-init


On Wed, Dec 17, 2014 at 3:08 AM, Vladimir Kozhukalov
vkozhuka...@mirantis.com wrote:
 In case of image-based provisioning we need either to update the image or run
 yum update/apt-get upgrade right after first boot (the second option partly
 devalues the advantages of the image-based scheme). Besides, we are planning to
 re-implement the image build script so as to be able to build images on a master
 node (but unfortunately 6.1 is not a realistic estimate for that).

 Vladimir Kozhukalov

 On Wed, Dec 17, 2014 at 5:03 AM, Mike Scherbakov mscherba...@mirantis.com
 wrote:

 Dmitry,
 as part of 6.1 roadmap, we are going to work on patching feature.
 There are two types of workflow to consider:
 - patch existing environment (already deployed nodes, aka target nodes)
 - ensure that new nodes, added to the existing and already patched envs,
 will install updated packages too.

 In case of an anaconda/preseed install, we can simply update the repo on the
 master node and run createrepo/etc. What do we do in case of an image? Will we
 need a separate repo alongside the main one, an updates repo, and do a
 post-provisioning yum update to fetch all patched packages?

 On Tue, Dec 16, 2014 at 11:09 PM, Andrey Danin ada...@mirantis.com
 wrote:

 Adding Mellanox team explicitly.

 Gil, Nurit, Aviram, can you confirm that you tested that feature? It can
 be enabled on every fresh ISO. You just need to enable the Experimental mode
 (please, see the documentation for instructions).

 On Tuesday, December 16, 2014, Dmitry Pyzhov dpyz...@mirantis.com
 wrote:

 Guys,

 we are about to enable image based provisioning in our master by
 default. I'm trying to figure out requirement for this change. As far as I
 know, it was not tested on scale lab. Is it true? Have we ever run full
 system tests cycle with this option?

 Do we have any other pre-requirements?



 --
 Andrey Danin
 ada...@mirantis.com
 skype: gcon.monolake





 --
 Mike Scherbakov
 #mihgen








-- 
Andrew
Mirantis
Ceph community



Re: [openstack-dev] [stable-maint] acceptable changes for stable/icehouse

2015-01-05 Thread John Griffith
On Mon, Jan 5, 2015 at 4:39 PM, Jay S. Bryant
jsbry...@electronicjungle.net wrote:
 All,

 We have been getting a number of non-security patches proposed to
 stable/icehouse for Cinder.  The cores were discussing what was ok to put
 into icehouse at this point in time and couldn't agree as to whether we were
 at the point of only accepting security changes.

 Appreciate advice on this topic.

 Jay


Hey Jay,

Thanks for raising this, my understanding was that we were in fact
security only mode now for Icehouse.  That being said I completely
forgot that it's been that long already and I let some reviews slip
myself just today in fact.  Thanks for pointing it out.

John



Re: [openstack-dev] pip 6.0.6 breaking py2* jobs - bug 1407736

2015-01-05 Thread Matt Riedemann



On 1/5/2015 2:16 PM, Doug Hellmann wrote:


On Jan 5, 2015, at 12:22 PM, Doug Hellmann d...@doughellmann.com wrote:



On Jan 5, 2015, at 12:00 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:


There is a deprecation warning in pip 6.0.6 which is making the py26 (on stable 
branches) and py27 jobs hit subunit log sizes of over 50 MB which makes the job 
fail.

A logstash query shows this started happening around 1/3 which is when pip 
6.0.6 was released. In Nova alone there are nearly 18 million hits of the 
deprecation warning.

Should we temporarily block so that pip < 6.0.6?

https://bugs.launchpad.net/nova/+bug/1407736


I think this is actually a change in pkg_resources (in the setuptools dist) 
[1], being triggered by stevedore using require=False to avoid checking 
dependencies when plugins are loaded.

Doug

[1] 
https://bitbucket.org/pypa/setuptools/commits/b1c7a311fb8e167d026126f557f849450b859502


After some discussion with Jason Coombs and dstufft, a version of setuptools 
with a split API to replace the deprecated option was released. I have a patch 
up to teach stevedore about the new methods[1].

Doug

[1] https://review.openstack.org/#/c/145042/1
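For reference, a minimal sketch of the split API Doug mentions (using json:loads as a throwaway entry point; this is an illustration, not stevedore's actual code): resolve() imports the target object without the dependency check that load() used to perform when called with require=False, while require() does the check separately.

```python
# Hedged sketch of the setuptools split API: EntryPoint.load(require=...)
# was deprecated; resolve() imports the object, require() checks deps.
import json
from pkg_resources import EntryPoint

# Parse a throwaway entry point definition (illustrative target).
ep = EntryPoint.parse('loads = json:loads')

# Import the target without checking requirements -- the replacement for
# the old ep.load(require=False) pattern that triggered the warnings.
target = ep.resolve()
print(target is json.loads)  # -> True
```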






--

Thanks,

Matt Riedemann











The stevedore patch was merged. Do we need a release of stevedore and a 
global-requirements update to then get the deprecation warnings fixed in 
nova (on master and stable/juno)?


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [heat][oslo] heat unit tests mocking private parts of oslo.messaging

2015-01-05 Thread Angus Salkeld
On Tue, Jan 6, 2015 at 9:03 AM, Doug Hellmann d...@doughellmann.com wrote:

 As part of updating oslo.messaging to move it out of the oslo namespace
 package I ran into some issues with heat. While debugging, I tried running
 the heat unit tests using the modified version of oslo.messaging and ran
 into test failures because the tests are mocking private parts of the
 library that are moving to have new names.

 Mocking internal parts of Oslo libraries isn’t supported, and so I need
 someone from the heat team to work with me to fix the heat tests and
 possibly add missing fixtures to oslo.messaging to avoid breaking heat when
 we release the updated oslo.messaging. I tried raising attention on IRC in
 #heat but I think I’m in the wrong timezone compared to most of the heat
 devs.


Hi Doug,

This should help you along: https://review.openstack.org/#/c/145094/

-Angus



 Here’s an example of one of the failing tests:

 ==
 FAIL:
 heat.tests.test_stack_lock.StackLockTest.test_failed_acquire_existing_lock_engine_alive
 tags: worker-3
 --
 Traceback (most recent call last):
   File heat/tests/test_stack_lock.py, line 84, in
 test_failed_acquire_existing_lock_engine_alive
 self.m.StubOutWithMock(messaging.rpc.client._CallContext, call)
 AttributeError: 'module' object has no attribute '_CallContext'
 Traceback (most recent call last):
 _StringException: Empty attachments:
   pythonlogging:'alembic'
   pythonlogging:'cliff'
   pythonlogging:'heat-provision'
   pythonlogging:'heat_integrationtests'
   pythonlogging:'heatclient'
   pythonlogging:'iso8601'
   pythonlogging:'keystoneclient'
   pythonlogging:'migrate'
   pythonlogging:'neutronclient'
   pythonlogging:'novaclient'
   pythonlogging:'oslo'
   pythonlogging:'oslo_config'
   pythonlogging:'oslo_messaging'
   pythonlogging:'requests'
   pythonlogging:'routes'
   pythonlogging:'saharaclient'
   pythonlogging:'sqlalchemy'
   pythonlogging:'stevedore'
   pythonlogging:'swiftclient'
   pythonlogging:'troveclient'

 pythonlogging:'': {{{WARNING [heat.engine.environment] Changing
 AWS::CloudWatch::Alarm from OS::Heat::CWLiteAlarm to
 OS::Heat::CWLiteAlarm}}}

 Traceback (most recent call last):
   File heat/tests/test_stack_lock.py, line 84, in
 test_failed_acquire_existing_lock_engine_alive
 self.m.StubOutWithMock(messaging.rpc.client._CallContext, call)
 AttributeError: 'module' object has no attribute '_CallContext'

 Traceback (most recent call last):
 _StringException: Empty attachments:
   pythonlogging:'alembic'
   pythonlogging:'cliff'
   pythonlogging:'heat-provision'
   pythonlogging:'heat_integrationtests'
   pythonlogging:'heatclient'
   pythonlogging:'iso8601'
   pythonlogging:'keystoneclient'
   pythonlogging:'migrate'
   pythonlogging:'neutronclient'
   pythonlogging:'novaclient'
   pythonlogging:'oslo'
   pythonlogging:'oslo_config'
   pythonlogging:'oslo_messaging'
   pythonlogging:'requests'
   pythonlogging:'routes'
   pythonlogging:'saharaclient'
   pythonlogging:'sqlalchemy'
   pythonlogging:'stevedore'
   pythonlogging:'swiftclient'
   pythonlogging:'troveclient'

 pythonlogging:'': {{{WARNING [heat.engine.environment] Changing
 AWS::CloudWatch::Alarm from OS::Heat::CWLiteAlarm to
 OS::Heat::CWLiteAlarm}}}

 Traceback (most recent call last):
   File heat/tests/test_stack_lock.py, line 84, in
 test_failed_acquire_existing_lock_engine_alive
 self.m.StubOutWithMock(messaging.rpc.client._CallContext, call)
  AttributeError: 'module' object has no attribute '_CallContext'


 That class _CallContext isn’t part of the public API for oslo.messaging,
 and so it is not being exposed through the redirect modules I’m creating
 for backwards compatibility. We need to look for a way to create a fixture
 to do whatever it is these tests are trying to do — I don’t understand the
 tests, which is why I need a heat developer to help out.

 Doug





Re: [openstack-dev] [nova] review closure for nova blueprint review.openstack.org/#/c/140133/

2015-01-05 Thread Matt Riedemann



On 1/5/2015 6:07 PM, Joe Cropper wrote:

Prefixing subject with [nova] so folks’ mail rules catch this.

- Joe


On Jan 5, 2015, at 1:47 PM, Kenneth Burger burg...@us.ibm.com wrote:

Hi, I am trying to get approval on this nova blueprint,
https://review.openstack.org/#/c/140133/.   There was a +2 from
Michael Still ( twice in prior patches ) and a +1 from Jay Bryant from
a cinder perspective.The only changes from the patches receiving
the +2  was related to a directory change of the spec location in the
repository.

Is it still possible to get approval for this blueprint?

Thanks,
Ken Burger








A single +2 does not an approval make, but I don't know if there were 
others lined up to approve this before the change and the 12/18 deadline.


We're at a point now where the nova-drivers team is going to get 
together this week (if not before the nova meeting on Thursday, during 
that meeting) to talk about an exception process plan for k-2.  Details 
on the outcome of that should be in the ML once determined.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [heat][oslo] heat unit tests mocking private parts of oslo.messaging

2015-01-05 Thread Doug Hellmann

 On Jan 5, 2015, at 6:54 PM, Angus Salkeld asalk...@mirantis.com wrote:
 
 On Tue, Jan 6, 2015 at 9:03 AM, Doug Hellmann d...@doughellmann.com wrote:
 As part of updating oslo.messaging to move it out of the oslo namespace 
 package I ran into some issues with heat. While debugging, I tried running 
 the heat unit tests using the modified version of oslo.messaging and ran into 
 test failures because the tests are mocking private parts of the library that 
 are moving to have new names.
 
 Mocking internal parts of Oslo libraries isn’t supported, and so I need 
 someone from the heat team to work with me to fix the heat tests and possibly 
 add missing fixtures to oslo.messaging to avoid breaking heat when we release 
 the updated oslo.messaging. I tried raising attention on IRC in #heat but I 
 think I’m in the wrong timezone compared to most of the heat devs.
 
 Hi Doug,
 
 This should help you along: https://review.openstack.org/#/c/145094/

That looks good, thanks!

Doug

 
 -Angus
  
 
 Here’s an example of one of the failing tests:
 
 ==
 FAIL: 
 heat.tests.test_stack_lock.StackLockTest.test_failed_acquire_existing_lock_engine_alive
 tags: worker-3
 --
 Traceback (most recent call last):
   File heat/tests/test_stack_lock.py, line 84, in 
 test_failed_acquire_existing_lock_engine_alive
 self.m.StubOutWithMock(messaging.rpc.client._CallContext, call)
 AttributeError: 'module' object has no attribute '_CallContext'
 Traceback (most recent call last):
 _StringException: Empty attachments:
   pythonlogging:'alembic'
   pythonlogging:'cliff'
   pythonlogging:'heat-provision'
   pythonlogging:'heat_integrationtests'
   pythonlogging:'heatclient'
   pythonlogging:'iso8601'
   pythonlogging:'keystoneclient'
   pythonlogging:'migrate'
   pythonlogging:'neutronclient'
   pythonlogging:'novaclient'
   pythonlogging:'oslo'
   pythonlogging:'oslo_config'
   pythonlogging:'oslo_messaging'
   pythonlogging:'requests'
   pythonlogging:'routes'
   pythonlogging:'saharaclient'
   pythonlogging:'sqlalchemy'
   pythonlogging:'stevedore'
   pythonlogging:'swiftclient'
   pythonlogging:'troveclient'
 
 pythonlogging:'': {{{WARNING [heat.engine.environment] Changing 
 AWS::CloudWatch::Alarm from OS::Heat::CWLiteAlarm to OS::Heat::CWLiteAlarm}}}
 
 Traceback (most recent call last):
   File heat/tests/test_stack_lock.py, line 84, in 
 test_failed_acquire_existing_lock_engine_alive
 self.m.StubOutWithMock(messaging.rpc.client._CallContext, call)
 AttributeError: 'module' object has no attribute '_CallContext'
 
 Traceback (most recent call last):
 _StringException: Empty attachments:
   pythonlogging:'alembic'
   pythonlogging:'cliff'
   pythonlogging:'heat-provision'
   pythonlogging:'heat_integrationtests'
   pythonlogging:'heatclient'
   pythonlogging:'iso8601'
   pythonlogging:'keystoneclient'
   pythonlogging:'migrate'
   pythonlogging:'neutronclient'
   pythonlogging:'novaclient'
   pythonlogging:'oslo'
   pythonlogging:'oslo_config'
   pythonlogging:'oslo_messaging'
   pythonlogging:'requests'
   pythonlogging:'routes'
   pythonlogging:'saharaclient'
   pythonlogging:'sqlalchemy'
   pythonlogging:'stevedore'
   pythonlogging:'swiftclient'
   pythonlogging:'troveclient'
 
 pythonlogging:'': {{{WARNING [heat.engine.environment] Changing 
 AWS::CloudWatch::Alarm from OS::Heat::CWLiteAlarm to OS::Heat::CWLiteAlarm}}}
 
 Traceback (most recent call last):
   File heat/tests/test_stack_lock.py, line 84, in 
 test_failed_acquire_existing_lock_engine_alive
 self.m.StubOutWithMock(messaging.rpc.client._CallContext, call)
 AttributeError: 'module' object has no attribute '_CallContext'
 
 
 That class _CallContext isn’t part of the public API for oslo.messaging, and 
 so it is not being exposed through the redirect modules I’m creating for 
 backwards compatibility. We need to look for a way to create a fixture to do 
 whatever it is these tests are trying to do — I don’t understand the tests, 
 which is why I need a heat developer to help out.
 
 Doug
 
 
 




Re: [openstack-dev] [horizon] static files handling, bower/

2015-01-05 Thread Tripp, Travis S
What Radomir proposes looks like it would greatly ease the process I am still 
going through to get the latest angular available to Horizon for current 
development.  At the time of writing this, I’m still trying to get the updated 
library through.  I hit a rather difficult workflow:


  1.  Packaged the latest into Xstatic-Angular-1.3.7
  2.  Submitted patch which deprecated the separate older 
xstatic-angular-cookies and xstatic-angular-mock packages
  3.  Reviewed and approved (after correcting an initial mis-repackaging)
  4.  Radomir released to PyPI

This was pretty easy; not much to complain about.

However, now, to get Horizon to use it, I have to get that into global 
requirements.  Since I’m deprecating old packages I got stuck in a sort of ugly 
dependency path.  I couldn’t remove the cookies and mock libraries from the 
global requirements patch that added the new 1.3.7 package because of horizon 
still referencing the deprecated packages.  And, when I did it anyway, the 
integration tests failed due to horizon being dependent on something not in 
global requirements.  So, now, as far as I can tell we have to jump through the 
following hoops:


  1.  Global requirements patch to add angular 1.3.7
 *   Verify check / recheck fun
 *   Reviewed and approved
 *   Gate check / recheck fun
  2.  Horizon patch to update to angular 1.3.7 and remove deprecated mock and 
cookies packages
 *   Verify check / recheck fun
 *   Reviewed and approved
 *   Gate check / recheck fun
  3.  Global requirements patch to remove deprecated mock and cookies
 *   Verify check / recheck fun
 *   Reviewed and approved
 *   Gate check / recheck fun

Don’t get me wrong, I really do think the gate is brilliant and am all for a 
review / approval process, but this does seem excessive for a UI library that 
should only be used by Horizon. Is there some other reason that this should 
have to go through global requirements?

Thanks,
Travis

From: Richard Jones r1chardj0...@gmail.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Monday, January 5, 2015 at 2:08 AM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [horizon] static files handling, bower/



On Mon Jan 05 2015 at 7:59:14 PM Radomir Dopieralski 
openst...@sheep.art.pl wrote:
On 05/01/15 00:35, Richard Jones wrote:
 On Mon Dec 22 2014 at 8:24:03 PM Radomir Dopieralski
 openst...@sheep.art.pl wrote:

 On 20/12/14 21:25, Richard Jones wrote:
  This is a good proposal, though I'm unclear on how the
  static_settings.py file is populated by a developer (as opposed to a
  packager, which you described).

 It's not, the developer version is included in the repository, and
 simply points to where Bower is configured to put the files.
 So just to be clear, as developers we:

 1. have a bower.json listing the bower component to use,
 2. use bower to fetch and install those to the bower_components
 directory at the top level of the Horizon repos checkout, and
 3. manually edit static_settings.py when we add a new bower component to
 bower.json so it knows the appropriate static files to load from that
 component.

 Is that correct?

 The above will increase the burden on those adding or upgrading bower
 components (they'll need to check the bower.json in the component for
 the appropriate static files to link in) but will make life easier for
 the re-packagers since they'll know which files they need to cater for
 in static_settings.py

Well, I expect you can tell Bower to put the files somewhere else than
in the root directory of the project -- a directory like ``bower_files``
or something (that directory is also added to ``.gitignore`` so that you
don't commit it by mistake). Then only that directory needs to be added
to the ``static_settings.py``. Of course, you still need to make all the
``<script>`` links in appropriate places with the right URLs, but you
would have to do that anyway.

Bower installs into a directory called bower_components in the current 
directory, which is equivalent to your bower_files above.


Let's look at an example. Suppose you need a new JavaScript library
called hipster.js. You add it to the ``bower.json`` file, and run
Bower. Bower downloads the right files and does whatever it is that it
does to them, and puts them in ``bower_files/hipster-js``. Now you edit
Horizon's templates and add ``<script src="{{ STATIC_URL
}}/hipster-js/hipster.js">`` to ``_scripts.html``. That's it for you.
Since your ``static_settings.py`` file already has a line:

  ('', os.path.join(BASE_DIR, 'bower_files')),

in it, it will just work.

Yep, except s/bower_files/bower_components :)
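As a sketch of that mapping (an assumption about the file's shape, not Horizon's actual static_settings.py), the developer-side version could boil down to a single staticfiles entry pointing at the Bower output directory:

```python
# Hypothetical developer-side static_settings.py sketch: map the
# bower_components directory into Django's staticfiles search path.
# Names here are illustrative, not Horizon's real module.
import os

# Stand-in for the usual module-relative base directory computation,
# e.g. os.path.dirname(os.path.abspath(__file__)).
BASE_DIR = os.getcwd()

STATICFILES_DIRS = [
    # ('url prefix', 'filesystem path') pairs consumed by Django's
    # staticfiles finders; '' serves everything under the directory.
    ('', os.path.join(BASE_DIR, 'bower_components')),
]
```

A packager would then replace this one file with entries pointing at system-installed library locations, without touching templates.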


Now, suppose that a 

Re: [openstack-dev] [horizon] static files handling, bower/

2015-01-05 Thread Richard Jones
I think the only outstanding question is how developers and non-packagers
populate the bower_components directory - that is, how is bower expected to
be available for them?

I think following the Storyboard approach is a good idea: isolate a
known-working node/bower environment local to horizon, managed by
tox, so to invoke bower you run a tox -e bower command. No worries about
system installation or compatibility, and it works in the gate.

Horizon installation (whenever a pip install would be invoked) would then
also have a tox -e bower install invocation.

Storyboard[1] uses a thing called nodeenv[2], which is installed through pip
/ requirements.txt to control the node environment. It then has bower
commands in tox.ini[3] (though I'd just have a single bower environment
to implement the tox command I suggest above).


 Richard

[1] https://wiki.openstack.org/wiki/StoryBoard
[2] https://pypi.python.org/pypi/nodeenv
[3]
https://git.openstack.org/cgit/openstack-infra/storyboard-webclient/tree/tox.ini


On Tue Jan 06 2015 at 11:42:17 AM Tripp, Travis S travis.tr...@hp.com
wrote:

 What Radomir proposes looks like it would greatly ease the process I am
 still going through to get the latest angular available to Horizon for
 current development.  At the time of writing this, I’m still trying to get
 the updated library through.  I hit a rather difficult workflow:


   1.  Packaged the latest into Xstatic-Angular-1.3.7
   2.  Submitted patch which deprecated the separate older
 xstatic-angular-cookies and xstatic-angular-mock packages
   3.  Reviewed and approved (after correcting an initial mis-repackaging)
   4.  Radomir released to Pypi

 This was pretty easy; not too much to complain about.

 However, now, to get Horizon to use it, I have to get that into global
 requirements.  Since I’m deprecating old packages I got stuck in a sort of
 ugly dependency path.  I couldn’t remove the cookies and mock libraries
 from the global requirements patch that added the new 1.3.7 package because
 of horizon still referencing the deprecated packages.  And, when I did it
 anyway, the integration tests failed due to horizon being dependent on
 something not in global requirements.  So, now, as far as I can tell we
 have to jump through the following hoops:


   1.  Global requirements patch to add angular 1.3.7
  *   Verify check / recheck fun
  *   Reviewed and approved
  *   Gate check / recheck fun
   2.  Horizon patch to update to angular 1.3.7 and remove deprecated mock
 and cookies packages
  *   Verify check / recheck fun
  *   Reviewed and approved
  *   Gate check / recheck fun
   3.  Global requirements patch to remove deprecated mock and cookies
  *   Verify check / recheck fun
  *   Reviewed and approved
  *   Gate check / recheck fun

 Don’t get me wrong, I really do think the gate is brilliant and am all for
 a review / approval process, but this does seem excessive for a UI library
 that should only be used by Horizon. Is there some other reason that this
 should have to go through global requirements?

 Thanks,
 Travis

 From: Richard Jones r1chardj0...@gmail.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Monday, January 5, 2015 at 2:08 AM
 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [horizon] static files handling, bower/



 On Mon Jan 05 2015 at 7:59:14 PM Radomir Dopieralski
 openst...@sheep.art.pl wrote:
 On 05/01/15 00:35, Richard Jones wrote:
  On Mon Dec 22 2014 at 8:24:03 PM Radomir Dopieralski
  openst...@sheep.art.pl wrote:
 
  On 20/12/14 21:25, Richard Jones wrote:
   This is a good proposal, though I'm unclear on how the
   static_settings.py file is populated by a developer (as opposed to
 a
   packager, which you described).
 
  It's not, the developer version is included in the repository, and
  simply points to where Bower is configured to put the files.
  So just to be clear, as developers we:
 
  1. have a bower.json listing the bower component to use,
  2. use bower to fetch and install those to the bower_components
  directory at the top level of the Horizon repos checkout, and
  3. manually edit static_settings.py when we add a new bower component to
  bower.json so it knows the appropriate static files to load from that
  component.
 
  Is that correct?
 
  The above will increase the burden on those adding or upgrading bower
  components (they'll need to check the bower.json in the component for
  the appropriate static files to link in) but will make life easier for
  the re-packagers since they'll know which files they need to cater for
  in static_settings.py

 Well, I expect you can tell Bower to put 

Re: [openstack-dev] Problem installing devstack

2015-01-05 Thread Abhishek Shrivastava
Hi Rajdeep,

Can you tell me the exact configuration (i.e., OS) you are using for
installing devstack?

On Tue, Jan 6, 2015 at 10:16 AM, Rajdeep Dua rajdeep@gmail.com wrote:

 Thanks for the response; in my case I am still getting the same error.

 On Tue, Jan 6, 2015 at 9:28 AM, Ken'ichi Ohmichi ken1ohmi...@gmail.com
 wrote:

 2015-01-06 12:40 GMT+09:00 Rajdeep Dua rajdeep@gmail.com:
  Getting this error while running stack.sh in devstack
 
  Could not find a version that satisfies the requirement
  SQLAlchemy=0.8.99,=0.9.99,=0.8.4,=0.9.7 (from
 keystone==2014.2.dev114)
  (from versions: 0.4.2p3, 0.4.7p1, 0.5.4p1, 0.5.4p2, 0.1.0, 0.1.1, 0.1.2,
  0.1.3, 0.1.4, 0.1.5, 0.1.6, 0.1.7, 0.2.0, 0.2.1, 0.2.2, 0.2.3, 0.2.4,
 0.2.5,
  0.2.6, 0.2.7, 0.2.8, 0.3.0, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.3.5, 0.3.6,
 0.3.7,
  0.3.8, 0.3.9, 0.3.10, 0.3.11, 0.4.0b1, 0.4.0b2, 0.4.0b3, 0.4.0b4,
 0.4.0b5,
  0.4.0b6, 0.4.0, 0.4.1, 0.4.2a0, 0.4.2b0, 0.4.2, 0.4.3, 0.4.4, 0.4.5,
 0.4.6,
  0.4.7, 0.4.8, 0.5.0b1, 0.5.0b2, 0.5.0b3, 0.5.0rc1, 0.5.0rc2, 0.5.0rc3,
  0.5.0rc4, 0.5.0, 0.5.1, 0.5.2, 0.5.3, 0.5.4, 0.5.5, 0.5.6, 0.5.7, 0.5.8,
  0.6b1, 0.6b2, 0.6b3, 0.6.0, 0.6.1, 0.6.2, 0.6.3, 0.6.4, 0.6.5, 0.6.6,
 0.6.7,
  0.6.8, 0.6.9, 0.7.0, 0.7.1, 0.7.2, 0.7.3, 0.7.4, 0.7.5, 0.7.6, 0.7.7,
 0.7.8,
  0.7.9, 0.7.10, 0.8.0b2, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.8.4, 0.8.5, 0.8.6,
  0.8.7, 0.9.0, 0.9.1, 0.9.2, 0.9.3, 0.9.4, 0.9.5, 0.9.6, 0.9.7, 0.9.8)
No distributions matching the version for
  SQLAlchemy=0.8.99,=0.9.99,=0.8.4,=0.9.7 (from
 keystone==2014.2.dev114)
 
  I saw a couple of bugs filed and patches going in.
 
  Please clarify if this is fixed and how to get the latest changes in.


 I faced a similar problem, and I could avoid it by removing the
 /usr/local/lib/python2.7/dist-packages directory and running ./stack.sh again.

 Hope that helps,
 Ken Ohmichi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







-- 
Thanks  Regards,
Abhishek


Re: [openstack-dev] Problem installing devstack

2015-01-05 Thread Dr. Jens Rosenboom

Am 06/01/15 um 04:40 schrieb Rajdeep Dua:

Getting this error while running stack.sh in devstack

Could not find a version that satisfies the requirement
SQLAlchemy=0.8.99,=0.9.99,=0.8.4,=0.9.7 (from keystone==2014.2.dev114)
(from versions: 0.4.2p3, 0.4.7p1, 0.5.4p1, 0.5.4p2, 0.1.0, 0.1.1, 0.1.2,
0.1.3, 0.1.4, 0.1.5, 0.1.6, 0.1.7, 0.2.0, 0.2.1, 0.2.2, 0.2.3, 0.2.4,
0.2.5, 0.2.6, 0.2.7, 0.2.8, 0.3.0, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.3.5,
0.3.6, 0.3.7, 0.3.8, 0.3.9, 0.3.10, 0.3.11, 0.4.0b1, 0.4.0b2, 0.4.0b3,
0.4.0b4, 0.4.0b5, 0.4.0b6, 0.4.0, 0.4.1, 0.4.2a0, 0.4.2b0, 0.4.2, 0.4.3,
0.4.4, 0.4.5, 0.4.6, 0.4.7, 0.4.8, 0.5.0b1, 0.5.0b2, 0.5.0b3, 0.5.0rc1,
0.5.0rc2, 0.5.0rc3, 0.5.0rc4, 0.5.0, 0.5.1, 0.5.2, 0.5.3, 0.5.4, 0.5.5,
0.5.6, 0.5.7, 0.5.8, 0.6b1, 0.6b2, 0.6b3, 0.6.0, 0.6.1, 0.6.2, 0.6.3,
0.6.4, 0.6.5, 0.6.6, 0.6.7, 0.6.8, 0.6.9, 0.7.0, 0.7.1, 0.7.2, 0.7.3,
0.7.4, 0.7.5, 0.7.6, 0.7.7, 0.7.8, 0.7.9, 0.7.10, 0.8.0b2, 0.8.0, 0.8.1,
0.8.2, 0.8.3, 0.8.4, 0.8.5, 0.8.6, 0.8.7, 0.9.0, 0.9.1, 0.9.2, 0.9.3,
0.9.4, 0.9.5, 0.9.6, 0.9.7, 0.9.8)
   No distributions matching the version for
SQLAlchemy=0.8.99,=0.9.99,=0.8.4,=0.9.7 (from keystone==2014.2.dev114)

I saw a couple of bugs filed and patches going in.

Please clarify if this is fixed and how to get the latest changes in.


I would assume that you are trying to reuse an older devstack 
installation, as this bug should have been fixed a couple of weeks ago.


However, running stack.sh by default does not update the source repos 
once they exist, so you can either add RECLONE=yes to your config or 
go manually through the repos in /opt/stack and update them.
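The "No distributions matching" message is just the visible symptom of an
empty intersection: every comma-separated clause in a requirement must be
satisfied by a single available version, and stale requirements pins can
make that intersection empty. A toy illustration of the mechanic (not pip's
real resolver; the versions and clauses are simplified examples, not
keystone's actual pins):

```python
def parse(v):
    # Toy dotted-numeric parser; real specifiers (pre-releases,
    # epochs, exclusions) are much richer than this.
    return tuple(int(x) for x in v.split("."))

def satisfies(version, clauses):
    # A version matches only if it satisfies every clause.
    ops = {">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b}
    return all(ops[op](parse(version), parse(bound)) for op, bound in clauses)

available = ["0.8.3", "0.8.4", "0.9.7", "0.9.8"]
clauses = [(">=", "0.8.4"), ("<=", "0.9.7")]

matches = [v for v in available if satisfies(v, clauses)]
print(matches)  # -> ['0.8.4', '0.9.7']

# A stale pin that excludes everything available reproduces the error:
print([v for v in available if satisfies(v, [(">=", "0.9.9")])])  # -> []
```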





[openstack-dev] Support for Amazon VPC APIs in OpenStack

2015-01-05 Thread Saju M
Hi,

I saw a blueprint which implements the Amazon VPC APIs, and its status is
Abandoned: https://review.openstack.org/#/c/40071/

Are there any plans to get it done in the Kilo release?
How can I change the Abandoned status?
Are there any dependencies?

Please let me know, so I can rebase it.

Regards
Saju Madhavan
+91 09535134654


[openstack-dev] [Ironic] Weekly subteam status report

2015-01-05 Thread Jim Rollenhagen
Hi all,

Following is the subteam report for Ironic. As usual, this is pulled
directly from the Ironic whiteboard[0] and formatted. I'm going to start
keeping this trimmed down to just the new stuff.

Drivers
  iRMC (naohirot)
[power driver] needs code review towards kilo-2:
  https://review.openstack.org/#/c/144901/
[virtual media deploy driver] updated the spec to the patch set 13
  for review: https://review.openstack.org/#/c/134865/
[management driver] updated the spec to the patch set 7 for review,
  and started implementation: https://review.openstack.org/#/c/136020/

  AMT (lintan)
Proposed a patch to support the workflow of deploy on AMT/vPro PC:
  https://review.openstack.org/#/c/135184/
AMT driver proposal now to use wsman instead of amttools

// jim

[0] https://etherpad.openstack.org/p/IronicWhiteBoard



Re: [openstack-dev] [all] UserWarning: Unknown distribution option: 'pbr'

2015-01-05 Thread Ian Wienand
On 11/27/2014 12:59 PM, Li Tianqing wrote:
 I write a module to extend openstack. When install by python
 setup.py develop, it always blame this

 /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown
 distribution option: 'pbr'
warnings.warn(msg)
...
 Processing dependencies for UNKNOWN==0.0.0
 Finished processing dependencies for UNKNOWN==0.0.0

 I do not know why the egg is UNKNOWN, and why the pbr option is
 unknown? i write the name in setup.cfg,

This is because pbr isn't installed.  You probably want to install the
python-pbr package.

I hit this problem today with tox and oslo.config.  tox creates an
sdist of the package on the local system, which because pbr isn't
installed system creates this odd UNKNOWN-0.0.0.zip file.

---
$ /usr/bin/python setup.py sdist --formats=zip
/usr/lib64/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution 
option: 'pbr'
  warnings.warn(msg)
running sdist
running egg_info
writing UNKNOWN.egg-info/PKG-INFO
writing top-level names to UNKNOWN.egg-info/top_level.txt
...
creating UNKNOWN-0.0.0


This isn't a failure, and tox is logging all that to a file, so it's
not at all clear that this is what has happened.
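The UNKNOWN name has a mechanical explanation: distutils only warns about
setup() keywords it does not recognize, and substitutes UNKNOWN for metadata
that was never set. A small guarded demonstration (distutils left the
standard library in Python 3.12, though an installed setuptools still
provides a shim):

```python
import warnings

try:
    from distutils.dist import Distribution  # stdlib < 3.12, or setuptools shim
except ImportError:
    Distribution = None  # no distutils available at all

if Distribution is not None:
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        # 'pbr' is not a keyword distutils knows about, so it only warns
        # instead of erroring out.
        dist = Distribution({"pbr": True})
    print(dist.get_name())  # -> UNKNOWN (no name metadata was set)
    print(any("pbr" in str(w.message) for w in caught))  # -> True
```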

Then tox tries to install this into the virtualenv *with* pbr, which
explodes in a fairly unhelpful manner:

---
$ tox -e pep8
GLOB sdist-make: /home/iwienand/programs/oslo.config/setup.py
pep8 inst-nodeps: 
/home/iwienand/programs/oslo.config/.tox/dist/UNKNOWN-0.0.0.zip
ERROR: invocation failed, logfile: 
/home/iwienand/programs/oslo.config/.tox/pep8/log/pep8-4.log
ERROR: actionid=pep8
msg=installpkg
Unpacking ./.tox/dist/UNKNOWN-0.0.0.zip
  Running setup.py (path:/tmp/pip-z9jGEr-build/setup.py) egg_info for package 
from file:///home/iwienand/programs/oslo.config/.tox/dist/UNKNOWN-0.0.0.zip
ERROR:root:Error parsing
Traceback (most recent call last):
  File "/home/iwienand/programs/oslo.config/.tox/pep8/lib/python2.7/site-packages/pbr/core.py", line 104, in pbr
    attrs = util.cfg_to_args(path)
  File "/home/iwienand/programs/oslo.config/.tox/pep8/lib/python2.7/site-packages/pbr/util.py", line 238, in cfg_to_args
    pbr.hooks.setup_hook(config)
  File "/home/iwienand/programs/oslo.config/.tox/pep8/lib/python2.7/site-packages/pbr/hooks/__init__.py", line 27, in setup_hook
    metadata_config.run()
  File "/home/iwienand/programs/oslo.config/.tox/pep8/lib/python2.7/site-packages/pbr/hooks/base.py", line 29, in run
    self.hook()
  File "/home/iwienand/programs/oslo.config/.tox/pep8/lib/python2.7/site-packages/pbr/hooks/metadata.py", line 28, in hook
    self.config['name'], self.config.get('version', None))
  File "/home/iwienand/programs/oslo.config/.tox/pep8/lib/python2.7/site-packages/pbr/packaging.py", line 554, in get_version
    raise Exception("Versioning for this project requires either an sdist"
Exception: Versioning for this project requires either an sdist tarball, or
access to an upstream git repository. Are you sure that git is installed?
---

I proposed [1] to oslo.config to basically avoid the sdist phase.
This seems to be what happens elsewhere.

I started writing a bug for the real issue, but it's not clear to me
where it belongs.  It seems like distutils should error on an unknown
distribution option.  But then setuptools seems to be ignoring the
setup_requires=['pbr'] line in the config.  But maybe tox should be
using pip to install rather than setup.py.

So if any setuptools/distribute/pip/pbr/tox people want to point me to
who should own the problem, happy to chase it up...

-i

[1] https://review.openstack.org/#/c/145119/



[openstack-dev] [gantt] Scheduler sub-group meeting agenda 1/6

2015-01-05 Thread Dugger, Donald D
Meeting on #openstack-meeting at 1500 UTC (8:00AM MST)


    1) Status on cleanup work - 
https://wiki.openstack.org/wiki/Gantt/kilo


--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786




Re: [openstack-dev] Problem installing devstack

2015-01-05 Thread Rajdeep Dua
Thanks for the response; in my case I am still getting the same error.

On Tue, Jan 6, 2015 at 9:28 AM, Ken'ichi Ohmichi ken1ohmi...@gmail.com
wrote:

 2015-01-06 12:40 GMT+09:00 Rajdeep Dua rajdeep@gmail.com:
  Getting this error while running stack.sh in devstack
 
  Could not find a version that satisfies the requirement
  SQLAlchemy=0.8.99,=0.9.99,=0.8.4,=0.9.7 (from
 keystone==2014.2.dev114)
  (from versions: 0.4.2p3, 0.4.7p1, 0.5.4p1, 0.5.4p2, 0.1.0, 0.1.1, 0.1.2,
  0.1.3, 0.1.4, 0.1.5, 0.1.6, 0.1.7, 0.2.0, 0.2.1, 0.2.2, 0.2.3, 0.2.4,
 0.2.5,
  0.2.6, 0.2.7, 0.2.8, 0.3.0, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.3.5, 0.3.6,
 0.3.7,
  0.3.8, 0.3.9, 0.3.10, 0.3.11, 0.4.0b1, 0.4.0b2, 0.4.0b3, 0.4.0b4,
 0.4.0b5,
  0.4.0b6, 0.4.0, 0.4.1, 0.4.2a0, 0.4.2b0, 0.4.2, 0.4.3, 0.4.4, 0.4.5,
 0.4.6,
  0.4.7, 0.4.8, 0.5.0b1, 0.5.0b2, 0.5.0b3, 0.5.0rc1, 0.5.0rc2, 0.5.0rc3,
  0.5.0rc4, 0.5.0, 0.5.1, 0.5.2, 0.5.3, 0.5.4, 0.5.5, 0.5.6, 0.5.7, 0.5.8,
  0.6b1, 0.6b2, 0.6b3, 0.6.0, 0.6.1, 0.6.2, 0.6.3, 0.6.4, 0.6.5, 0.6.6,
 0.6.7,
  0.6.8, 0.6.9, 0.7.0, 0.7.1, 0.7.2, 0.7.3, 0.7.4, 0.7.5, 0.7.6, 0.7.7,
 0.7.8,
  0.7.9, 0.7.10, 0.8.0b2, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.8.4, 0.8.5, 0.8.6,
  0.8.7, 0.9.0, 0.9.1, 0.9.2, 0.9.3, 0.9.4, 0.9.5, 0.9.6, 0.9.7, 0.9.8)
No distributions matching the version for
  SQLAlchemy=0.8.99,=0.9.99,=0.8.4,=0.9.7 (from
 keystone==2014.2.dev114)
 
  I saw a couple of bugs filed and patches going in.
 
  Please clarify if this is fixed and how to get the latest changes in.


  I faced a similar problem, and I could avoid it by removing the
  /usr/local/lib/python2.7/dist-packages directory and running ./stack.sh again.

 Hope that helps,
 Ken Ohmichi




Re: [openstack-dev] Problem installing devstack

2015-01-05 Thread Amit Das
Can you check the /opt/stack/requirements folder & check if that's updated
(git status, git log) after doing unstack & stack?

One of the *-requirements.txt files has an entry which makes all the projects
fail.

We had removed the /opt/stack/requirements project & then did an unstack &
stack.

Regards,
Amit
*CloudByte Inc.* http://www.cloudbyte.com/

On Tue, Jan 6, 2015 at 10:40 AM, Abhishek Shrivastava 
abhis...@cloudbyte.com wrote:

 Hi Rajdeep,

 Can you tell me the exact configuration (i.e., OS) you are using for
 installing devstack?

 On Tue, Jan 6, 2015 at 10:16 AM, Rajdeep Dua rajdeep@gmail.com
 wrote:

 Thanks for the response; in my case I am still getting the same error.

 On Tue, Jan 6, 2015 at 9:28 AM, Ken'ichi Ohmichi ken1ohmi...@gmail.com
 wrote:

 2015-01-06 12:40 GMT+09:00 Rajdeep Dua rajdeep@gmail.com:
  Getting this error while running stack.sh in devstack
 
  Could not find a version that satisfies the requirement
  SQLAlchemy=0.8.99,=0.9.99,=0.8.4,=0.9.7 (from
 keystone==2014.2.dev114)
  (from versions: 0.4.2p3, 0.4.7p1, 0.5.4p1, 0.5.4p2, 0.1.0, 0.1.1,
 0.1.2,
  0.1.3, 0.1.4, 0.1.5, 0.1.6, 0.1.7, 0.2.0, 0.2.1, 0.2.2, 0.2.3, 0.2.4,
 0.2.5,
  0.2.6, 0.2.7, 0.2.8, 0.3.0, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.3.5, 0.3.6,
 0.3.7,
  0.3.8, 0.3.9, 0.3.10, 0.3.11, 0.4.0b1, 0.4.0b2, 0.4.0b3, 0.4.0b4,
 0.4.0b5,
  0.4.0b6, 0.4.0, 0.4.1, 0.4.2a0, 0.4.2b0, 0.4.2, 0.4.3, 0.4.4, 0.4.5,
 0.4.6,
  0.4.7, 0.4.8, 0.5.0b1, 0.5.0b2, 0.5.0b3, 0.5.0rc1, 0.5.0rc2, 0.5.0rc3,
  0.5.0rc4, 0.5.0, 0.5.1, 0.5.2, 0.5.3, 0.5.4, 0.5.5, 0.5.6, 0.5.7,
 0.5.8,
  0.6b1, 0.6b2, 0.6b3, 0.6.0, 0.6.1, 0.6.2, 0.6.3, 0.6.4, 0.6.5, 0.6.6,
 0.6.7,
  0.6.8, 0.6.9, 0.7.0, 0.7.1, 0.7.2, 0.7.3, 0.7.4, 0.7.5, 0.7.6, 0.7.7,
 0.7.8,
  0.7.9, 0.7.10, 0.8.0b2, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.8.4, 0.8.5,
 0.8.6,
  0.8.7, 0.9.0, 0.9.1, 0.9.2, 0.9.3, 0.9.4, 0.9.5, 0.9.6, 0.9.7, 0.9.8)
No distributions matching the version for
  SQLAlchemy=0.8.99,=0.9.99,=0.8.4,=0.9.7 (from
 keystone==2014.2.dev114)
 
  I saw a couple of bugs filed and patches going in.
 
  Please clarify if this is fixed and how to get the latest changes in.


  I faced a similar problem, and I could avoid it by removing the
  /usr/local/lib/python2.7/dist-packages directory and running ./stack.sh again.

 Hope that helps,
 Ken Ohmichi








 --
 Thanks  Regards,
 Abhishek





[openstack-dev] Problem installing devstack

2015-01-05 Thread Rajdeep Dua
Getting this error while running stack.sh in devstack

Could not find a version that satisfies the requirement
SQLAlchemy=0.8.99,=0.9.99,=0.8.4,=0.9.7 (from keystone==2014.2.dev114)
(from versions: 0.4.2p3, 0.4.7p1, 0.5.4p1, 0.5.4p2, 0.1.0, 0.1.1, 0.1.2,
0.1.3, 0.1.4, 0.1.5, 0.1.6, 0.1.7, 0.2.0, 0.2.1, 0.2.2, 0.2.3, 0.2.4,
0.2.5, 0.2.6, 0.2.7, 0.2.8, 0.3.0, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.3.5,
0.3.6, 0.3.7, 0.3.8, 0.3.9, 0.3.10, 0.3.11, 0.4.0b1, 0.4.0b2, 0.4.0b3,
0.4.0b4, 0.4.0b5, 0.4.0b6, 0.4.0, 0.4.1, 0.4.2a0, 0.4.2b0, 0.4.2, 0.4.3,
0.4.4, 0.4.5, 0.4.6, 0.4.7, 0.4.8, 0.5.0b1, 0.5.0b2, 0.5.0b3, 0.5.0rc1,
0.5.0rc2, 0.5.0rc3, 0.5.0rc4, 0.5.0, 0.5.1, 0.5.2, 0.5.3, 0.5.4, 0.5.5,
0.5.6, 0.5.7, 0.5.8, 0.6b1, 0.6b2, 0.6b3, 0.6.0, 0.6.1, 0.6.2, 0.6.3,
0.6.4, 0.6.5, 0.6.6, 0.6.7, 0.6.8, 0.6.9, 0.7.0, 0.7.1, 0.7.2, 0.7.3,
0.7.4, 0.7.5, 0.7.6, 0.7.7, 0.7.8, 0.7.9, 0.7.10, 0.8.0b2, 0.8.0, 0.8.1,
0.8.2, 0.8.3, 0.8.4, 0.8.5, 0.8.6, 0.8.7, 0.9.0, 0.9.1, 0.9.2, 0.9.3,
0.9.4, 0.9.5, 0.9.6, 0.9.7, 0.9.8)
  No distributions matching the version for
SQLAlchemy=0.8.99,=0.9.99,=0.8.4,=0.9.7 (from keystone==2014.2.dev114)

I saw a couple of bugs filed and patches going in.

Please clarify if this is fixed and how to get the latest changes in.

Thanks
Rajdeep


Re: [openstack-dev] [Solum] Addition to solum core

2015-01-05 Thread Ravi Penta
+1

-Ravi.
- Original Message -
From: Murali Allada murali.all...@rackspace.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Monday, January 5, 2015 9:10:54 AM
Subject: Re: [openstack-dev] [Solum] Addition to solum core

+1 

From: Pierre Padrixe  pierre.padr...@gmail.com  
Reply-To: OpenStack Development Mailing List (not for usage questions)  
openstack-dev@lists.openstack.org  
Date: Tuesday, December 30, 2014 at 5:58 PM 
To: OpenStack Development Mailing List (not for usage questions)  
openstack-dev@lists.openstack.org  
Subject: Re: [openstack-dev] [Solum] Addition to solum core 

+1 

2014-12-27 19:02 GMT+01:00 Devdatta Kulkarni  devdatta.kulka...@rackspace.com 
 : 



+1 


From: James Y. Li [ yuel...@gmail.com ] 
Sent: Saturday, December 27, 2014 9:03 AM 
To: OpenStack Development Mailing List 
Subject: Re: [openstack-dev] [Solum] Addition to solum core 



+1! 

-James Li 
On Dec 27, 2014 2:02 AM, Adrian Otto  adrian.o...@rackspace.com  wrote: 


Solum cores, 

I propose the following addition to the solum-core group[1]: 

+ Ed Cranford (ed--cranford) 

Please reply to this email to indicate your votes. 

Thanks, 

Adrian Otto 

[1] https://review.openstack.org/#/admin/groups/229,members








Re: [openstack-dev] [Ironic] thoughts on the midcycle

2015-01-05 Thread Ruby Loo
On 29 December 2014 at 17:45, Devananda van der Veen 
devananda@gmail.com wrote:

 ...
 That being said, I'd also like to put forth this idea: if we had a second
 gathering (with the same focus on writing code) the following week (let's
 say, Feb 11 - 13) in the SF Bay area -- who would attend? Would we be able
 to get the other half of the core team together and get more work done?
 Is this a good idea?


I doubt that I'll be able to attend the SF one, so sorry, the other half
may not all be there, but I'm tiny so you won't miss me ;)

--ruby


Re: [openstack-dev] [all] Proper use of 'git review -R'

2015-01-05 Thread Carl Baldwin
On Tue, Dec 30, 2014 at 11:24 AM, Jeremy Stanley fu...@yuggoth.org wrote:
 On 2014-12-30 12:31:35 -0500 (-0500), David Kranz wrote:
 [...]
 1. This is really a UI issue, and one that is experienced by many.
 What is desired is an option to look at different revisions of the
 patch that show only what the author actually changed, unless
 there was a conflict.

 I'm not sure it's entirely a UI issue. It runs deeper. There simply
 isn't enough metadata in Git to separate intentional edits from
 edits made to solve merge conflicts. Using merge commits instead of
 rebases mostly solves this particular problem but at the expense of
 introducing all sorts of new ones. A rebase-oriented workflow makes
 it easier for merge conflicts to be resolved along the way, instead
 of potentially nullifying valuable review effort at the very end
 when it comes time to approve the change and it's no longer relevant
 to the target branch.

Jeremy is correct here.  I've dreamed about how to enhance git to
support this sort of thing more formally but it isn't an easy problem
and wouldn't help us in the short term anyway.

To overcome this, I hacked out a script [1] which rebases older patch
sets to the same parent as the most current patch set to help me
compare across rebases.  I've found it very handy in certain
situations.  I can see how conflicts were handled as well as what
other changes were made outside the scope of merge conflict
resolution.  I use it by downloading the latest patch set with git
review -d X and then I compare to a previous patch set (NN) by
supplying that patch set number on the command line.
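The idea behind the script can be reproduced with a throwaway repository:
re-parent the old patch set onto the same parent as the newest one, then
diff the two, so only the intentional edits remain. A self-contained sketch
(all file, branch, and commit names are made up; it assumes only that a git
binary is available):

```python
import os
import shutil
import subprocess
import tempfile

def git(*args, cwd):
    # Run a git command, returning its stdout; raises on failure.
    return subprocess.run(("git",) + args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

def write(repo, name, text):
    with open(os.path.join(repo, name), "w") as f:
        f.write(text)

diff = ""
if shutil.which("git"):
    repo = tempfile.mkdtemp()
    git("init", "-q", cwd=repo)
    git("config", "user.email", "dev@example.com", cwd=repo)
    git("config", "user.name", "dev", cwd=repo)

    write(repo, "f.txt", "base\n")
    git("add", ".", cwd=repo)
    git("commit", "-q", "-m", "base", cwd=repo)
    base = git("rev-parse", "HEAD", cwd=repo).strip()
    target = git("rev-parse", "--abbrev-ref", "HEAD", cwd=repo).strip()

    # Patch set 1: adds g.txt on top of the old parent.
    git("checkout", "-q", "-b", "ps1", cwd=repo)
    write(repo, "g.txt", "v1\n")
    git("add", ".", cwd=repo)
    git("commit", "-q", "-m", "patch set 1", cwd=repo)

    # Meanwhile the target branch moves on (this is the rebase noise).
    git("checkout", "-q", target, cwd=repo)
    write(repo, "f.txt", "base\nmore\n")
    git("add", ".", cwd=repo)
    git("commit", "-q", "-m", "unrelated work", cwd=repo)

    # Patch set 2: the author rebased and also edited g.txt.
    git("checkout", "-q", "-b", "ps2", target, cwd=repo)
    write(repo, "g.txt", "v2\n")
    git("add", ".", cwd=repo)
    git("commit", "-q", "-m", "patch set 2", cwd=repo)

    # Re-parent patch set 1 onto the same parent as patch set 2...
    git("checkout", "-q", "ps1", cwd=repo)
    git("rebase", "-q", "--onto", target, base, cwd=repo)

    # ...so the diff now shows only the intentional edit to g.txt.
    diff = git("diff", "HEAD", "ps2", cwd=repo)
    print("g.txt" in diff and "f.txt" not in diff)  # -> True
```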

I once had dreams of adding this capability to gerrit but I found the
gerrit development learning curve to be a bit steep for the time I
had.

 There is a potential work-around, though it currently involves some
 manual effort (not sure whether it would be sane to automate as a
 feature of git-review). When you notice your change conflicts and
 will need a rebase, first reset and stash your change, then reset
 --hard to the previous patchset already in Gerrit, then rebase that
 and push it (solving the merge conflicts if any), then pop your
 stashed edits (solving any subsequent merge conflicts) and finally
 push that as yet another patchset. This separates the rebase from
 your intentional modifications though at the cost of rather a lot of
 extra work.

 Alternatively you could push your edits with git review -R and
 _then_ follow up with another patchset rebasing on the target branch
 and resolving the merge conflicts. Possibly slightly easier?

I'm a strong proponent of splitting rebases (with merge conflict
resolution) from other manual changes.  This is a help to reviewers.
If someone tells me that a patch set is a pure rebase to resolve
conflicts then I can review it by repeating the rebase myself to see
if I get the same answer.

Both suggestions above are good ones.  Which one you use is a matter
of preference IMO.  I personally prefer the latter (push with -R and
then resolve conflicts) because it is easier on me.

 2. Using -R is dangerous unless you really know what you are
 doing. The doc string makes it sound like an innocuous way to help
 reviewers.

 Not necessarily dangerous, but it does allow you to push changes
 which are just going to flat fail all jobs because they can't be
 merged to the target branch to get tested.

I agree there is no danger.  As I've stated in my other post, I have
*always* used it for two years and have seen no danger.  I have come
to accept the failing jobs as a regular and welcome part of my work
flow.  If these failing jobs are taking a lot of resources then we
need some redesign in infrastructure to fail them more quickly and
cheaply so that resources can be spared from having to test patch sets
which are in conflict.

Carl

[1] http://paste.openstack.org/show/155614/



Re: [openstack-dev] pip 6.0.6 breaking py2* jobs - bug 1407736

2015-01-05 Thread Doug Hellmann

On Jan 5, 2015, at 12:22 PM, Doug Hellmann d...@doughellmann.com wrote:

 
 On Jan 5, 2015, at 12:00 PM, Matt Riedemann mrie...@linux.vnet.ibm.com 
 wrote:
 
 There is a deprecation warning in pip 6.0.6 which is making the py26 (on 
 stable branches) and py27 jobs hit subunit log sizes of over 50 MB which 
 makes the job fail.
 
 A logstash query shows this started happening around 1/3 which is when pip 
 6.0.6 was released. In Nova alone there are nearly 18 million hits of the 
 deprecation warning.
 
 Should we temporarily block so that pip  6.0.6?
 
 https://bugs.launchpad.net/nova/+bug/1407736
 
 I think this is actually a change in pkg_resources (in the setuptools dist) 
 [1], being triggered by stevedore using require=False to avoid checking 
 dependencies when plugins are loaded.
 
 Doug
 
 [1] 
 https://bitbucket.org/pypa/setuptools/commits/b1c7a311fb8e167d026126f557f849450b859502

After some discussion with Jason Coombs and dstufft, a version of setuptools 
with a split API to replace the deprecated option was released. I have a patch 
up to teach stevedore about the new methods[1].

Doug

[1] https://review.openstack.org/#/c/145042/1
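For context, the deprecated call and the split replacement look roughly
like this (a sketch: it assumes a setuptools new enough to provide
EntryPoint.resolve(), and json:loads is just a stand-in for a real plugin
target):

```python
try:
    import pkg_resources  # provided by setuptools
except ImportError:
    pkg_resources = None

if pkg_resources is not None:
    # A stand-in entry point; real plugins come from installed dists.
    ep = pkg_resources.EntryPoint.parse("parser = json:loads")

    # Old API: ep.load(require=False) imports without dependency
    # checking, and is what began emitting the deprecation warning.
    # Split API: resolve() imports the object; require() checks deps.
    if hasattr(ep, "resolve"):
        func = ep.resolve()
    else:
        func = ep.load(require=False)

    print(func("[1, 2, 3]"))  # -> [1, 2, 3]
```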

 
 
 
 -- 
 
 Thanks,
 
 Matt Riedemann
 
 
 
 




Re: [openstack-dev] [Keystone] Bug in federation

2015-01-05 Thread David Chadwick
Hi Marco

1. I agree that a discovery service is needed somewhere in the
federation architecture. But the discovery service should be independent
of the endpoint URL that is used to access OpenStack services via
Keystone. It is not a good design to mix up these two aspects, which
appears to have been done in the current design. So Keystone could still
offer a discovery service to end users, by returning details of the IDPs
that it trusts, and information about how to contact those IdPs. But all
the federated users should still be redirected back to the same endpoint
URL of Keystone. Then if discovery is offered by something other than
Keystone, Keystone can still validate federated user authentication.

2. Discovery is different from trust management. The discovery service
could be separate from Keystone (and Apache), but Keystone (or Apache)
would still need a way of specifying which IdPs are trusted, so as to
reject users authenticated by untrusted IdPs.

3. In ABFAB, discovery is part of the ABFAB protocol suite, since
usernames contain the domain name of the user (e.g. d.c...@kent.ac.uk)
which allows Radius to route the user to the correct IDP. Hence ABFAB
does not require Keystone to be the discovery service. But Keystone (or
Apache) still needs to perform trust management to reject users from
untrusted IdPs.

More comments below about the current design, which is clearly sub-optimal.


On 05/01/2015 10:33, Marco Fargetta wrote:
 Hi David,
 
 in principle I agree with your comments. The current design mixes
 different aspect up and it is not manageable when the number of IdPs
 get bigger, like in the case you should allow access from users in a
 country federation, especially compared to other tools supporting
 identity federation.
 
 Nevertheless, I think you have to consider the current implementation
 like a fusion between the discovery protocol and the
 authentication.

this is still mixing up aspects!

 Users, instead of being re-directed to a Discovery
 Service providing the list of IdPs, receive the list from keystone
 itself. 

The list of IdPs and the endpoint URL are two different concepts and
should not be mixed up together (which they currently are). You should
be able to tell the user the list of IdPs to choose from, and still have
a single endpoint URL. You could in our original federation
implementation :-)


 Each endpoint URL is an IdP the user can use to authenticate

Sorry it is not quite that. The endpoint is not an IDP, but rather it is
the endpoint that the user has to return to after being authenticated by
the IDP.

 and when selected the user should go directly to the IdP

the discovery service should tell the user how to get to this IDP, but
not where in Keystone to return to, since in general, the Disco service
should be usable by multiple service providers. Therefore the
information it provides should be independent of the service providers
(which in the current implementation it is not).

 and not into
 the DS. Of course I am not saying this is good but it is acceptable
 from the user point of view. There is not the problem to map IdPs in
 the DS with endpoint URL because it is made in advance.

This mapping is not needed in my opinion.

 
 By the way, if you change the approach and create a single URL for the
 authentication then I cannot see the use of a list of trusted
 IdPs.

As stated above, trust management is separate from, and different to,
discovery. Don't confuse the two.

 You should disable the non-accepted IdPs at a higher level so as to
 avoid the situation where a user authenticates to the IdP but cannot
 access the service.

In general you can never prevent this, since the endpoint is public
information (which is usually published).

 You may work at the Apache and DS level to enable only
 trusted IdPs. Then you need a better mapping in order to put your
 logic there.
 
 I think this is a significant change and if there is agreement I think
 it is possible to end with a more flexible design.

this is needed in my opinion.

 
 Do you plan to propose a new spec?

Not at the current time. There is little point in producing designs that
no-one is willing to implement :-) Once the core implementers accept
that the current design is poor and needs re-engineering, then I will be
happy to propose a new design.

regards

David

 
 Marco
 
 
 
 
 
 
 
 On Fri, Jan 02, 2015 at 09:51:55PM +, David Chadwick wrote:
 Hi Marco

 I think the current design is wrong because it is mixing up access
 control with service endpoint location. The endpoint of a service should
 be independent of the access control rules determining who can contact
 the service. Any entity should be able to contact a service endpoint
 (subject to firewall rules of course, but this is out of scope of
 Keystone), and once connected, access control should then be enforced.
 Unfortunately the current design directly ties access control (which
 IdP) to the service endpoint by building the IDP name into the URL. 

Re: [openstack-dev] Horizon switching to the normal .ini format

2015-01-05 Thread Matthew Farina
Switching to an ini format would likely be painful, if not impossible.

Horizon is built on Django, which is where the settings.py format comes
from. It's part of a Django app.

For more info see the django docs. The settings information is at
https://docs.djangoproject.com/en/1.6/topics/settings/
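One concrete reason the switch is hard: settings.py is arbitrary Python, and Horizon relies on nested structures (dicts, lists, dotted paths to callables) that have no direct .ini equivalent. A simplified, illustrative sketch using only the standard library (the setting values are made up; Horizon's real HORIZON_CONFIG is larger):

```python
import configparser  # 'ConfigParser' on the Python 2 of this era

# Django-style setting: arbitrary Python, nested structure.
HORIZON_CONFIG = {
    "dashboards": ["project", "admin"],
    "user_home": "openstack_dashboard.views.get_user_home",
}

# The closest flat .ini rendering: every value becomes a string, and the
# consumer must re-split lists and resolve dotted paths itself.
ini_text = """
[horizon_config]
dashboards = project,admin
user_home = openstack_dashboard.views.get_user_home
"""
cp = configparser.ConfigParser()
cp.read_string(ini_text)
dashboards = cp.get("horizon_config", "dashboards").split(",")
print(dashboards)  # ['project', 'admin']
```

Every such setting would need a hand-written conversion layer, which is where "painful" comes from.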

On Thu, Dec 25, 2014 at 1:25 PM, Timur Sufiev tsuf...@mirantis.com wrote:

 Thomas,

 I could only point you to the Radomir's patch
 https://review.openstack.org/#/c/100521/

 It's still a work in progress, so you may ask him for more details.

 On Thu, Dec 25, 2014 at 1:59 AM, Thomas Goirand z...@debian.org wrote:

 Hi,

 There has been talk of Horizon switching to the normal .ini format
 that all other projects have been using so far. It would really be
 awesome if this could happen, though I don't see the light at the end of
 the tunnel. Quite the opposite: settings.py is becoming more complicated
 every day.

 Is anyone at least working on the .ini switch idea? Or will we continue
 to see the Django-style settings.py forever? Are there any blockers?

 Cheers,

 Thomas Goirand (zigo)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Timur Sufiev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.config 1.6.0 released

2015-01-05 Thread Doug Hellmann
The Oslo team is pleased to announce the release of
oslo.config 1.6.0: Oslo Configuration API

The primary reason for this release is to move the code
out of the oslo namespace package as part of
https://blueprints.launchpad.net/oslo-incubator/+spec/drop-namespace-packages
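For consumers, the practical effect of the namespace move is an import-path change. A hedged sketch of a common compatibility pattern (whether you need the fallback depends on which oslo.config versions you must support):

```python
# Prefer the new non-namespaced package (oslo.config >= 1.6.0), falling
# back to the old namespace package for earlier releases.
try:
    from oslo_config import cfg           # 1.6.0 and later
except ImportError:
    try:
        from oslo.config import cfg       # pre-1.6.0 namespace package
    except ImportError:
        cfg = None                        # oslo.config not installed here

if cfg is not None:
    # Usage is unchanged; only the import path moved.
    opts = [cfg.StrOpt("bind_host", default="0.0.0.0")]
```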

For more details, please see the git log history below and
 http://launchpad.net/oslo/+milestone/1.6.0

Please report issues through launchpad:
 http://bugs.launchpad.net/oslo



Changes in openstack/oslo.config  1.5.0..1.6.0

f045681 Set the version string
6feb19b Stop sorting options on output
70c5b67 Move files out of the namespace package
063a5ef Workflow documentation is now in infra-manual
e447675 Fix wrong order of positional args in cli
5b5df64 add tests coverage for an oslo.messaging use case
b5f14ce Refactored help string generation

  diffstat (except docs and test files):

 .gitignore  |1 +
 CONTRIBUTING.rst|7 +-
 oslo/config/__init__.py |   28 +
 oslo/config/cfg.py  | 2437 +---
 oslo/config/cfgfilter.py|  307 +--
 oslo/config/fixture.py  |  107 +-
 oslo/config/generator.py|  293 +--
 oslo/config/iniparser.py|  116 +-
 oslo/config/types.py|  402 +---
 oslo_config/__init__.py |0
 oslo_config/cfg.py  | 2471 
 oslo_config/cfgfilter.py|  318 
 oslo_config/fixture.py  |  118 ++
 oslo_config/generator.py|  313 
 oslo_config/iniparser.py|  127 ++
 oslo_config/types.py|  413 
 setup.cfg   |3 +-
 tests/test_cfg.py   |  144 +-
 tests/test_cfgfilter.py |   36 -
 tests/test_generator.py |2 +-
 tests/test_warning.py   |   61 +
 tests/testmods/bar_foo_opt.py   |2 +-
 tests/testmods/baz_qux_opt.py   |2 +-
 tests/testmods/blaa_opt.py  |2 +-
 tests/testmods/fbaar_baa_opt.py |2 +-
 tests/testmods/fbar_foo_opt.py  |2 +-
 tests/testmods/fblaa_opt.py |2 +-
 44 files changed, 8955 insertions(+), 3763 deletions(-)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] how to delete a volume which is in error_deleting state

2015-01-05 Thread Erlon Cruz
This is usually related to misconfiguration of the backend driver. For
example, if you create a volume and then shut down the driver to change
some configuration, the backend driver can get confused while trying to
delete the volume, or may not even be able to locate the volume in the
storage array.
S

On Mon, Jan 5, 2015 at 3:35 AM, Eli Qiao ta...@linux.vnet.ibm.com wrote:


 在 2015年01月05日 13:02, Punith S 写道:

 Hi Eli,

  you have to log in to the MySQL cinder database and try deleting the
 required volume from the volumes table using its id.
 if it fails due to foreign key constraints in the volume metadata table, try
 deleting the corresponding volume metadata rows and then try to delete the
 required volume row.
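The database workaround described here can be sketched with a toy schema. This is a hedged illustration only: sqlite stands in for MySQL, the real cinder tables have many more columns, and editing a live database directly is a last resort -- take a backup first.

```python
import sqlite3

# Toy stand-in for the cinder database (sqlite here, MySQL in real life).
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE volumes (id TEXT PRIMARY KEY, status TEXT)")
conn.execute("""CREATE TABLE volume_metadata (
                    id INTEGER PRIMARY KEY,
                    volume_id TEXT REFERENCES volumes(id),
                    key TEXT, value TEXT)""")

vid = "7039c683-2341-4dd7-a947-e35941245ec4"  # stuck in error_deleting
conn.execute("INSERT INTO volumes VALUES (?, 'error_deleting')", (vid,))
conn.execute("INSERT INTO volume_metadata (volume_id, key, value) "
             "VALUES (?, 'readonly', 'False')", (vid,))

# Delete the metadata rows first to satisfy the foreign-key constraint,
# then remove the volume row itself.
conn.execute("DELETE FROM volume_metadata WHERE volume_id = ?", (vid,))
conn.execute("DELETE FROM volumes WHERE id = ?", (vid,))
print(conn.execute("SELECT COUNT(*) FROM volumes").fetchone()[0])  # 0
```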

   hi Punith, I followed your suggestion and it works. But is it reasonable
 that an error_deleting volume cannot be deleted even after it stays in that
 status for quite a long time?
 thanks.

  thanks

 On Mon, Jan 5, 2015 at 7:22 AM, Eli Qiao ta...@linux.vnet.ibm.com wrote:


 hi all,
 how do we delete a cinder volume which is in error_deleting status?
 I can't find a force-delete option in 'cinder delete', so how do we fix it
 if we get into such a situation?
 [tagett@stack-01 devstack]$ cinder list
 +--------------------------------------+----------------+-------------+------+-------------+----------+--------------------------------------+
 |                  ID                  |     Status     |     Name    | Size | Volume Type | Bootable |             Attached to              |
 +--------------------------------------+----------------+-------------+------+-------------+----------+--------------------------------------+
 | 3e0acd0a-f28f-4fe3-b6e9-e65d5c40740b |     in-use     | with_cirros |  4   | lvmdriver-1 |   true   | 428f0235-be54-462f-8916-f32965d42e63 |
 | 7039c683-2341-4dd7-a947-e35941245ec4 | error_deleting |     None    |  4   | lvmdriver-1 |  false   |                                      |
 | d576773f-6865-4959-ba26-13602ed32e89 | error_deleting |     None    |  4   | lvmdriver-1 |  false   |                                      |
 +--------------------------------------+----------------+-------------+------+-------------+----------+--------------------------------------+
 [tagett@stack-01 devstack]$ cinder delete
 7039c683-2341-4dd7-a947-e35941245ec4
 Delete for volume 7039c683-2341-4dd7-a947-e35941245ec4 failed: Bad
 Request (HTTP 400) (Request-ID: req-e4d8cdd9-6ed5-4a7f-81de-7f38f2163d33)
 ERROR: Unable to delete any of specified volumes.

 --
 Thanks,
 Eli (Li Yong) Qiao


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




  --
  regards,

  punith s
 cloudbyte.com


 ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 --
 Thanks,
 Eli (Li Yong) Qiao


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] devstack plugins

2015-01-05 Thread Dean Troyer
On Mon, Jan 5, 2015 at 8:09 AM, Sean Dague s...@dague.net wrote:

 1) install_plugins - currently is a one way process

 Dean correctly points out that install_plugins is currently a one way
 process. I actually wonder if we should change that fact and run a 'git
 clean -f extras.d' before the install plugins under the principle of
 least surprise. This would make removing the enable_plugin actually
 remove it from the environment.


Or we just don't copy things around and the problem doesn't even appear.
If the configuration of the plugin includes the path to the dispatch script
(what is currently in extras.d) and we run it in place, removing it doesn't
have any surprises in the first place.


 2) is_service_enabled for things that aren't OpenStack services?

 Overloading ENABLED_SERVICES with things that aren't OpenStack services
 is something I'd actually like to avoid. Because other parts of our tool
 chain, like grenade, need some understanding of what's an openstack
 service and what is not.

 Maybe for things like ceph, glusterfs, opendaylight we need some other
 array of features or something. Honestly I'd really like to get mysql
 and rabbitmq out of the service list as well. It confuses things quite
 a bit at times.


We do need to separate system services from OpenStack services, and this
might be a good time to find another word to use here for one of them:
'system service' vs 'openstack service'.

One of the things multi-node adds is the distinction between the cluster
using a service and a specific node needing it configured or started.

Does this need to be solved before the plugin work can be completed?

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] [glance] Consistency in client side sorting

2015-01-05 Thread Mike Perez
On 09:13 Mon 05 Jan , Steven Kaufer wrote:
 Given that each of these 3 clients will be supporting client-side sorting
 in kilo, it seems that we should get this implemented in a consistent
 manner.  It seems that the 2 options are either:
 
   --sort-key key1 --sort-dir desc --sort-key key2 --sort-dir asc
   --sort key1:asc,key2:desc
 
 Personally, I favor option 2 but IMO it is more important that these are
 made consistent.

I like option 2 better.

-- 
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Addition to solum core

2015-01-05 Thread Adrian Otto
Solum cores,

Thanks for your votes. Ed has been added to the solum-core group.

Cheers,

Adrian

On Dec 26, 2014, at 11:56 PM, Adrian Otto adrian.o...@rackspace.com wrote:

 Solum cores,
 
 I propose the following addition to the solum-core group[1]:
 
 + Ed Cranford (ed--cranford)
 
 Please reply to this email to indicate your votes.
 
 Thanks,
 
 Adrian Otto
 
 [1] https://review.openstack.org/#/admin/groups/229,members Current Members
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday January 6th at 19:00 UTC

2015-01-05 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is resuming our weekly
meetings on Tuesday January 6th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

And in case you missed it, meeting log and minutes from the last
meeting, held on December 23rd, are available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-23-19.03.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-23-19.03.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-23-19.03.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Proposed Changes to Magnum Core

2015-01-05 Thread Adrian Otto
Magnum Cores,

Thanks for your votes. Jay Lau has been added to the magnum-core group.

Regards,

Adrian

On Jan 2, 2015, at 3:59 PM, Adrian Otto adrian.o...@rackspace.com wrote:

 Magnum Cores,
 
 I propose the following addition to the Magnum Core group[1]:
 
 + Jay Lau (jay-lau-513)
 
 Please let me know your votes by replying to this message.
 
 Thanks,
 
 Adrian
 
 [1] https://review.openstack.org/#/admin/groups/473,members Current Members


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Addition to solum core

2015-01-05 Thread Ed Cranford
Thank you all very much.

On 1/5/15, 12:54 PM, Adrian Otto adrian.o...@rackspace.com wrote:

Solum cores,

Thanks for your votes. Ed has been added to the solum-core group.

Cheers,

Adrian

On Dec 26, 2014, at 11:56 PM, Adrian Otto adrian.o...@rackspace.com
wrote:

 Solum cores,
 
 I propose the following addition to the solum-core group[1]:
 
 + Ed Cranford (ed--cranford)
 
 Please reply to this email to indicate your votes.
 
 Thanks,
 
 Adrian Otto
 
 [1] https://review.openstack.org/#/admin/groups/229,members Current
Members
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] IRC logging

2015-01-05 Thread Nikhil Komawar
Thanks Cindy!

Glance cores, can you all please pitch in?

-Nikhil


From: Cindy Pallares [cpalla...@redhat.com]
Sent: Monday, January 05, 2015 12:28 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Glance] IRC logging

I've made a patch, we can vote on it there.

https://review.openstack.org/#/c/145025/


On 01/05/2015 11:15 AM, Amrith Kumar wrote:
 I think logging the channel is a benefit even if, as Nikhil points out, it is 
 not an official meeting. Trove logs both the #openstack-trove channel and the 
 meetings when they occur. I have also had some conversations with other ATC's 
 on #openstack-oslo and #openstack-security and have found that the eavesdrop 
 logs at http://eavesdrop.openstack.org/irclogs/ to be invaluable in either 
 bug comments or code review comments.

 The IRC channel is an integral part of communicating within the OpenStack 
 community. The use of foul language and other inappropriate behavior should 
 be monitored not by admins but by other members of the community and called 
 out just as one would call out similar behavior in a non-virtual work 
 environment. I submit to you that profanity and inappropriate conduct in an 
 IRC channel constitutes a hostile work environment just as much as it does in 
 a non-virtual environment.

 Therefore I submit to you that there is no place for such behavior on an IRC 
 channel irrespective of whether it is logged or not.

 Thanks,

 -amrith

 | -Original Message-
 | From: Morgan Fainberg [mailto:morgan.fainb...@gmail.com]
 | Sent: Monday, January 05, 2015 11:58 AM
 | To: OpenStack Development Mailing List (not for usage questions)
 | Subject: Re: [openstack-dev] [Glance] IRC logging
 |
 |
 |
 |  On Jan 5, 2015, at 08:07, Nikhil Komawar nikhil.koma...@rackspace.com
 | wrote:
 | 
 |  Based on the feedback received, we would like to avoid logging on the
 | project channel. My take from the discussion was that it gives many
 | folks a feeling of an informal platform to express their ideas freely in
 | contrast to the meeting channels.
 | 
 |  However, at the same time I would like to point out that using foul
 | language in the open freenode channels is a bad practice. There are no
 | admins monitoring our project channels however, those channels that are
 | monitored have people kicked out on misbehavior.  The point being, no
 | logging means freedom of thought only for creative purposes; please
 | do not take me any other way.
 | 
 |  Thanks,
 |  -Nikhil
 | 
 |
 | I just want to point out that keystone has logging enabled for our channel
 | and I do not see it as a hamper to creative discussion / open discussion.
 | The logging is definitely of value. Also a lot of people will locally log
 | a given irc channel, which largely nets the same result.
 |
 | It is still not an official meeting, and we have heated debates at times,
 | the logging lets us check back on things discussed outside of the
 | official meetings. I do admit it is used less frequently than the meeting
 | logs.
 |
 | --Morgan
 |
 | Sent via mobile
 |  
 |  From: Anita Kuno [ante...@anteaya.info]
 |  Sent: Monday, January 05, 2015 10:42 AM
 |  To: openstack-dev@lists.openstack.org
 |  Subject: Re: [openstack-dev] [Glance] IRC logging
 | 
 |  On 01/05/2015 06:42 AM, Cindy Pallares wrote:
 |  Hi all,
 | 
 |  I would like to re-open the discussion on IRC logging for the glance
 |  channel. It was discussed on a meeting back in November[1], but it
 |  didn't seem to have a lot of input from the community and it was not
 |  discussed in the mailing list. A lot of information is exchanged
 |  through the channel and it isn't accessible for people who
 |  occasionally come into our channel from other projects, new
 |  contributors, and people who don't want to be reached off-hours or
 |  don't have bouncers. Logging our channel would  increase our
 |  community's transparency and make our development discussions
 |  publicly accessible to contributors in all time-zones and from other
 |  projects. It is very useful to look back on the logs for previous
 |  discussions or as well as to refer people to discussions or questions
 | previously answered.
 | 
 | 
 |  --Cindy
 | 
 |  [1]
 |  http://eavesdrop.openstack.org/meetings/glance/2014/glance.2014-11-13
 |  -20.03.log.html
 | 
 | 
 |  ___
 |  OpenStack-dev mailing list
 |  OpenStack-dev@lists.openstack.org
 |  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 |  Hi Cindy:
 | 
 |  You might want to consider offering a patch (you can use this one as
 |  an
 |  example: https://review.openstack.org/#/c/138965/2) and anyone with a
 |  strong perspective can express themselves with a vote and comment.
 | 
 |  Thanks,
 |  Anita.
 | 
 |  ___
 |  OpenStack-dev mailing list
 |  OpenStack-dev@lists.openstack.org
 |  

Re: [openstack-dev] [Ironic] thoughts on the midcycle

2015-01-05 Thread Chris K
An SF meetup would be much easier for me to attend. I could make the
proposed dates of the 11th through the 13th. Do we know where this event
would be held yet?


-Chris

On Mon, Dec 29, 2014 at 2:45 PM, Devananda van der Veen 
devananda@gmail.com wrote:

 I'm sending the details of the midcycle in a separate email. Before you
 reply that you won't be able to make it, I'd like to share some thoughts /
 concerns.

 In the last few weeks, several people who I previously thought would
 attend told me that they can't. By my informal count, it looks like we will
 have at most 5 of our 10 core reviewers in attendance. I don't think we
 should cancel based on that, but it does mean that we need to set our
 expectations accordingly.

 Assuming that we will be lacking about half the core team, I think it will
  be more practical as a focused sprint, rather than a planning & design
  meeting. While that's a break from precedent, planning should be happening
  via the spec review process *anyway*. Also, we already have a larger
  backlog of specs and work than we had this time last cycle, but with the same
 size review team. Rather than adding to our backlog, I would like us to use
 this gathering to burn through some specs and land some code.

 That being said, I'd also like to put forth this idea: if we had a second
 gathering (with the same focus on writing code) the following week (let's
 say, Feb 11 - 13) in the SF Bay area -- who would attend? Would we be able
 to get the other half of the core team together and get more work done?
 Is this a good idea?

 OK. That's enough of my musing for now...

 Once again, if you will be attending the midcycle sprint in Grenoble the
 week of Feb 3rd, please sign up HERE
 https://www.eventbrite.com/e/openstack-ironic-kilo-midcycle-sprint-in-grenoble-tickets-15082886319
 .

 Regards,
 Devananda


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Participation count for mid-cycle meetup.

2015-01-05 Thread Nikhil Komawar
Hi all,

For all those of you planning to be at the Glance kilo-mid-cycle meetup and 
haven't added your name to the list, please do so here 
https://etherpad.openstack.org/p/kilo-glance-mid-cycle-meetup . We need the 
info for scheduling purposes.

Thanks,
-Nikhil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][vpnaas] VPNaaS Subteam meeting Tuesday 1500 UTC meeting-4

2015-01-05 Thread Paul Michali (pcm)
Since we took a break for two weeks and the meeting channel has changed, I
figured I'd send a reminder.

Please update the page with any agenda items you may have.

Regards,


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pc_m (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] devstack plugins

2015-01-05 Thread Sean Dague
On 01/05/2015 01:45 PM, Dean Troyer wrote:
 On Mon, Jan 5, 2015 at 8:09 AM, Sean Dague s...@dague.net
 mailto:s...@dague.net wrote:
 
 1) install_plugins - currently is a one way process
 
 Dean correctly points out that install_plugins is currently a one way
 process. I actually wonder if we should change that fact and run a 'git
 clean -f extras.d' before the install plugins under the principle of
 least surprise. This would make removing the enable_plugin actually
 remove it from the environment.
 
 
 Or we just don't copy things around and the problem doesn't even
 appear.  If the configuration of the plugin includes the path to the
 dispatch script (what is currently in extras.d) and we run it in place,
 removing it doesn't have any surprises in the first place.

The reason I avoided that was because of possible pathing issues,
especially as we source the files instead of running them in their own
address space (which means they don't have to have all the includes
locally). I'm somewhat concerned that any cd logic is going to leave us
in odd places (which I guess we could fix with a reset at the top level).

Maybe if we symlinked instead?

  
 
 2) is_service_enabled for things that aren't OpenStack services?
 
 Overloading ENABLED_SERVICES with things that aren't OpenStack services
 is something I'd actually like to avoid. Because other parts of our tool
 chain, like grenade, need some understanding of what's an openstack
 service and what is not.
 
 Maybe for things like ceph, glusterfs, opendaylight we need some other
 array of features or something. Honestly I'd really like to get mysql
  and rabbitmq out of the service list as well. It confuses things quite
 a bit at times.
 
 
 We do need to separate system services from OpenStack services, and this
 might be a good time to find another word to use here for one of them:
 'system service' vs 'openstack service'.
 
 One of the things multi-node adds is the distinction between the cluster
 using a service and a specific node needing it configured or started.
 
 Does this need to be solved before the plugin work can be completed?

Probably not. But I was trying to avoid further overloading the
is_service_enabled on these plugins by just running them if they exist.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] review closure for nova blueprint review.openstack.org/#/c/140133/

2015-01-05 Thread Kenneth Burger


Hi, I am trying to get approval on this nova blueprint,
https://review.openstack.org/#/c/140133/. There was a +2 from Michael
Still (twice, in prior patches) and a +1 from Jay Bryant from a cinder
perspective. The only change from the patches that received the +2 was
a directory change of the spec location in the repository.

Is it still possible to get approval for this blueprint?

Thanks,
Ken Burger
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] how to delete a volume which is in error_deleting state

2015-01-05 Thread Abel Lopez
Also important to note: you should check the 'provider_location' column for
the volumes in error_deleting, otherwise the space may still be allocated
on the backend while cinder thinks it's deleted.
I also like to update 'deleted_at' to NOW().
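Both points can be sketched with a toy schema. This is a hedged illustration: sqlite stands in for MySQL, and the real cinder volumes table differs, so verify column names against your installed release before touching a live database.

```python
import datetime
import sqlite3

# Toy stand-in for the cinder volumes table.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE volumes (
    id TEXT PRIMARY KEY, status TEXT, deleted INTEGER DEFAULT 0,
    deleted_at TEXT, provider_location TEXT)""")
vid = "d576773f-6865-4959-ba26-13602ed32e89"
conn.execute("INSERT INTO volumes (id, status, provider_location) "
             "VALUES (?, 'error_deleting', 'iscsi:backend-lun-42')", (vid,))

# 1) Check provider_location: if it is set, the space may still be
#    allocated on the backend and should be reclaimed there first.
loc = conn.execute("SELECT provider_location FROM volumes WHERE id = ?",
                   (vid,)).fetchone()[0]
print("backend location to clean up:", loc)

# 2) Soft-delete the row the way the API would, stamping deleted_at,
#    instead of removing the row outright.
now = datetime.datetime.utcnow().isoformat()
conn.execute("UPDATE volumes SET status = 'deleted', deleted = 1, "
             "deleted_at = ? WHERE id = ?", (now, vid))
```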

On Mon, Jan 5, 2015 at 10:41 AM, Erlon Cruz sombra...@gmail.com wrote:

 This is usually related to misconfiguration of the backend driver. For
 example, if you create a volume and then shut down the driver to change some
 configuration, the backend driver can get confused while trying to delete
 the volume, or may not even be able to locate the volume in the storage array.
 S

 On Mon, Jan 5, 2015 at 3:35 AM, Eli Qiao ta...@linux.vnet.ibm.com wrote:


 在 2015年01月05日 13:02, Punith S 写道:

 Hi Eli,

  you have to log in to the MySQL cinder database and try deleting the
 required volume from the volumes table using its id.
 if it fails due to foreign key constraints in the volume metadata table, try
 deleting the corresponding volume metadata rows and then try to delete the
 required volume row.

   hi Punith, I followed your suggestion and it works. But is it reasonable
 that an error_deleting volume cannot be deleted even after it stays in that
 status for quite a long time?
 thanks.

  thanks

 On Mon, Jan 5, 2015 at 7:22 AM, Eli Qiao ta...@linux.vnet.ibm.com
 wrote:


 hi all,
 how do we delete a cinder volume which is in error_deleting status?
 I can't find a force-delete option in 'cinder delete', so how do we fix
 it if we get into such a situation?
 [tagett@stack-01 devstack]$ cinder list
 +--------------------------------------+----------------+-------------+------+-------------+----------+--------------------------------------+
 |                  ID                  |     Status     |     Name    | Size | Volume Type | Bootable |             Attached to              |
 +--------------------------------------+----------------+-------------+------+-------------+----------+--------------------------------------+
 | 3e0acd0a-f28f-4fe3-b6e9-e65d5c40740b |     in-use     | with_cirros |  4   | lvmdriver-1 |   true   | 428f0235-be54-462f-8916-f32965d42e63 |
 | 7039c683-2341-4dd7-a947-e35941245ec4 | error_deleting |     None    |  4   | lvmdriver-1 |  false   |                                      |
 | d576773f-6865-4959-ba26-13602ed32e89 | error_deleting |     None    |  4   | lvmdriver-1 |  false   |                                      |
 +--------------------------------------+----------------+-------------+------+-------------+----------+--------------------------------------+
 [tagett@stack-01 devstack]$ cinder delete
 7039c683-2341-4dd7-a947-e35941245ec4
 Delete for volume 7039c683-2341-4dd7-a947-e35941245ec4 failed: Bad
 Request (HTTP 400) (Request-ID: req-e4d8cdd9-6ed5-4a7f-81de-7f38f2163d33)
 ERROR: Unable to delete any of specified volumes.

 --
 Thanks,
 Eli (Li Yong) Qiao


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




  --
  regards,

  punith s
 cloudbyte.com


 ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 --
 Thanks,
 Eli (Li Yong) Qiao


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] [nova] [glance] Consistency in client side sorting

2015-01-05 Thread Steven Kaufer


The nova, cinder, and glance REST APIs support listing instances, volumes,
and images in a specific order.  In general, the REST API supports
something like:

  ?sort_key=key1&sort_dir=asc&sort_key=key2&sort_dir=desc

This sorts the results using 'key1' as the primary key (in ascending
order), 'key2' as the secondary key (in descending order), etc.

Note that this behavior is not consistent across the projects.  Nova
supports multiple sort keys and multiple sort directions, glance supports
multiple sort keys but a single direction, and cinder only supports a
single sort key and a single sort direction (approved kilo BP to support
multiple sort keys and directions is here:
https://blueprints.launchpad.net/cinder/+spec/cinder-pagination).

The purpose of this thread is to discuss how the sort information should be
inputted to the client.

In nova, (committed in kilo https://review.openstack.org/#/c/117591/) the
syntax is:  --sort key1:asc,key2:desc
In cinder, the syntax is:  --sort_key key1 --sort_dir desc
In glance, the proposed syntax (from
https://review.openstack.org/#/c/120777/) is: --sort-key key1 --sort-key
key2 --sort-dir desc

Note that the keys are different for cinder and glance (--sort_key vs.
--sort-key).  Also, client side sorting does not actually work in cinder
(fix under review at https://review.openstack.org/#/c/141964/).

Given that each of these 3 clients will be supporting client-side sorting
in kilo, it seems that we should get this implemented in a consistent
manner.  It seems that the 2 options are either:

  --sort-key key1 --sort-dir desc --sort-key key2 --sort-dir asc
  --sort key1:asc,key2:desc

Personally, I favor option 2 but IMO it is more important that these are
made consistent.

Thoughts on getting consistency across all 3 projects (and possibly
others)?

Thanks,
Steven Kaufer
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proper use of 'git review -R'

2015-01-05 Thread Adam Young

On 12/30/2014 11:47 AM, Jeremy Stanley wrote:

On 2014-12-30 10:32:22 -0600 (-0600), Dolph Mathews wrote:

The default behavior, rebasing automatically, is the sane default
to avoid having developers run into unexpected merge conflicts on
new patch submissions.

Please show me an example of this in the wild. I suspect a lot of
reviewers are placing blame on this without due investigation.
I think you misread the intention of what Dolph posted, Jeremy. What he
is saying is that automatic rebasing ensures that a submitted patch
would not get a false negative on a rebase problem.  Or, to try to make
it even clearer, the default behaviour forces the submitter to deal with
rebase problems in their own sandbox.



But if git-review can check to see if a review already exists in
gerrit *before* doing the local rebase, I'd be in favor of it
skipping the rebase by default if the review already exists.
Require developers to rebase existing patches manually. (This is
my way of saying I can't think of a good answer to your question.)

It already requires contributors to take manual action--it will not
automatically rebase and then push that without additional steps.

What I would like to see is this:
1.  Rebase the last patch. (if possible)
2.  Submit the new patch

Now a reviewer can see the difference between what was actually 
submitted and the previous patch.


If step 1 fails (as it often does when using the git review option for 
diffing between two versions of a patch), just accept it and move on.





While we're on the topic, it's also worth noting that --no-rebase
becomes critically important when a patch in the review sequence
has already been approved, because the entire series will be
rebased, potentially pulling patches out of the gate, clearing the
Workflow+1 bit, and resetting the gate (probably unnecessarily). A
tweak to the default behavior would help avoid this scenario.

The only thing -R is going to accomplish is people uploading changes
which can never pass because they're merge-conflicting with the
target branch.
It makes it clearer what the diff is without complicating it with 
unrelated changes, which is what David wants to make happen. Ideally, 
any user who did a -R would immediately do a rebase as well, but that 
would complicate things for the reviewer.


This is a common problem, and not a trivial one to solve.



Re: [openstack-dev] [Glance] IRC logging

2015-01-05 Thread Anita Kuno
On 01/05/2015 06:42 AM, Cindy Pallares wrote:
 Hi all,
 
 I would like to re-open the discussion on IRC logging for the glance
 channel. It was discussed on a meeting back in November[1], but it
 didn't seem to have a lot of input from the community and it was not
 discussed in the mailing list. A lot of information is exchanged through
 the channel and it isn't accessible for people who occasionally come
 into our channel from other projects, new contributors, and people who
 don't want to be reached off-hours or don't have bouncers. Logging our
 channel would increase our community's transparency and make our
 development discussions publicly accessible to contributors in all
 time-zones and from other projects. It is very useful to look back on
 the logs of previous discussions, as well as to refer people to
 discussions or questions previously answered.
 
 
 --Cindy
 
 [1]
 http://eavesdrop.openstack.org/meetings/glance/2014/glance.2014-11-13-20.03.log.html
 
 
Hi Cindy:

You might want to consider offering a patch (you can use this one as an
example: https://review.openstack.org/#/c/138965/2) and anyone with a
strong perspective can express themselves with a vote and comment.

Thanks,
Anita.



Re: [openstack-dev] [oslo] dropping namespace packages

2015-01-05 Thread Doug Hellmann

 On Nov 12, 2014, at 3:32 PM, Doug Hellmann d...@doughellmann.com wrote:
 
 We rather quickly came to consensus at the summit that we should drop the use 
 of namespace packages in Oslo libraries [1]. As far as I could tell, everyone 
 was happy with my proposed approach [2] of moving the code from oslo.foo to 
 oslo_foo and then creating a backwards-compatibility shim in oslo.foo that 
 imports public symbols from oslo_foo. We also agreed that we would not rename 
 existing libraries, and we would continue to use the same naming convention 
 for new libraries. So the distribution and git repository both will be called 
 “oslo.foo” and the import statement would look like “from oslo_foo import 
 bar”.

We have made good progress on this work [1]. Because the release of 
oslo.concurrency before the holidays exposed some issues with our process, we 
have held off on releasing any more libraries until those problems were worked 
out. We believe we are now ready to start releasing the other libraries, and 
are going to start doing so this week.

I plan to release a new version of oslo.config later today, and then wait a few 
hours to see if there are any issues (if you start seeing odd behavior related 
to oslo.config, please report it in #openstack-oslo). When we are confident 
that the release is working, we will continue with the other libraries with 
similar pauses over the course of the rest of the week.

As each library is released, we will send release notes to this list, as usual. 
At that point the Oslo liaisons should start planning patches to change imports 
in their projects from “oslo.foo” to “oslo_foo”. The old imports should still 
work for now, but new features will not be added to the old namespace, so over 
time it will be necessary to make the changes anyway. We are likely to remove 
the old namespace package completely during the next release cycle, but that 
hasn't been decided.
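
For readers unfamiliar with the shim approach described above, here is a
self-contained sketch of the pattern. A tiny stand-in "oslo_foo" module is
fabricated in-process so the example runs on its own; "bar" is a
hypothetical symbol, and this is not the actual Oslo code:

```python
import sys
import types

# Stand-in for the relocated library; in reality this would be the
# real oslo_foo package installed on the system.
_mod = types.ModuleType("oslo_foo")
_mod.bar = lambda: "bar from oslo_foo"
_mod.__all__ = ["bar"]
sys.modules["oslo_foo"] = _mod

# The backwards-compatibility shim that oslo/foo/__init__.py would
# contain is essentially just a re-export of the public symbols:
from oslo_foo import bar  # noqa: F401

print(bar())  # old-style "from oslo.foo import bar" callers keep working
```

The shim keeps old imports functional while all new development happens in
the underscore package, matching the migration path outlined above.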

Doug

[1] 
https://blueprints.launchpad.net/oslo-incubator/+spec/drop-namespace-packages




Re: [openstack-dev] [FUEL] Zabbix in HA mode

2015-01-05 Thread Andrew Woodward
Rob,


 This is of limited value to my business due to the GPL license -- so my
 company's lawyers tell me.  I will be unable to take advantage of what
 looks to be a solid solution from what I can see of Zabbix.  Are there any
 risks to Fuel (open source contamination) from this approach?  I doubt it
 but I want to make sure you are considering this.


Zabbix is GPL 2.0; however, the impact of this license only arises when
developing against its source. Using a GPL program through its standard
interfaces does not pull in any of the license requirements that your legal
team may be upset with (AGPL is a different story). GPL programs are also
used throughout the base Linux operating system.

In some cases we may modify packages (including GPL ones); their source is
provided and can be found at [1]

In the scope of fuel, we configure Zabbix through standard interfaces with
our puppet manifests which are Apache 2.0 [2]

[1] https://review.fuel-infra.org/
[2]
https://github.com/stackforge/fuel-library/tree/master/deployment/puppet/zabbix
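
The active-backup (pacemaker+haproxy) deployment discussed in the quoted
message below could be sketched roughly as the following pacemaker (pcs)
configuration fragment. This is a hypothetical illustration, not Fuel's
actual manifests; the resource and VIP names (zabbix-server,
vip__management) are made up:

```shell
# Hypothetical active-backup setup: pacemaker runs exactly one
# zabbix-server instance and keeps it colocated with the management
# VIP that haproxy fronts.
pcs resource create zabbix-server systemd:zabbix-server \
    op monitor interval=30s
pcs constraint colocation add zabbix-server with vip__management INF
```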

On Tue, Nov 25, 2014 at 6:07 AM, Rob Basham rob...@us.ibm.com wrote:

 Rob Basham

 Cloud Systems Software Architecture
 971-344-1999


 Bartosz Kupidura bkupid...@mirantis.com wrote on 11/25/2014 05:21:59 AM:

  From: Bartosz Kupidura bkupid...@mirantis.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Date: 11/25/2014 05:26 AM
  Subject: [openstack-dev] [FUEL] Zabbix in HA mode
 
  Hello All,
 
  I'm working on a Zabbix implementation which includes HA support.
 
  Zabbix server should be deployed on all controllers in HA mode.
 
  Currently we have a dedicated role 'zabbix-server', which does not support
  more than one zabbix-server. Instead, we will move the monitoring
  solution (Zabbix) to an additional component.
 
  We will introduce an additional role, 'zabbix-monitoring', assigned to
  all servers with the lowest priority in the serializer (puppet runs
  after every other role) when Zabbix is enabled.
  The 'zabbix-monitoring' role will be assigned automatically.
 
  When the Zabbix component is enabled, we will install zabbix-server on
  all controllers in active-backup mode (pacemaker+haproxy).
 
  In a next stage, we can allow users to deploy zabbix-server on a dedicated
  node OR on controllers for performance reasons.
  But for now we should force zabbix-server to be deployed on controllers.
 
  The BP is in its initial phase, but the code is ready and working with
  Fuel 5.1. Now I'm checking if it works with master.
 
  Any comments are welcome!

 This is of limited value to my business due to the GPL license -- so my
 company's lawyers tell me.  I will be unable to take advantage of what
 looks to be a solid solution from what I can see of Zabbix.  Are there any
 risks to Fuel (open source contamination) from this approach?  I doubt it
 but I want to make sure you are considering this.

 
  BP link: https://blueprints.launchpad.net/fuel/+spec/zabbix-ha
 
  Best Regards,
  Bartosz Kupidura
 





-- 
Andrew
Mirantis
Ceph community


Re: [openstack-dev] [FUEL] Zabbix in HA mode

2015-01-05 Thread Andrew Woodward
On Tue, Nov 25, 2014 at 5:21 AM, Bartosz Kupidura
bkupid...@mirantis.com wrote:

 Hello All,

 I'm working on a Zabbix implementation which includes HA support.

 Zabbix server should be deployed on all controllers in HA mode.

This needs to be discouraged as much as putting mongo-db on the controllers.

 Currently we have a dedicated role 'zabbix-server', which does not support more
 than one zabbix-server. Instead, we will move the monitoring solution
 (Zabbix) to an additional component.

No, this must remain a separate role and cannot be forced onto the
controllers; the user should be discouraged from doing this. The
corosync code is quickly becoming granular enough to stand up a CRM
cluster elsewhere.


 We will introduce an additional role, 'zabbix-monitoring', assigned to all
 servers with the lowest priority in the serializer (puppet runs after every
 other role) when Zabbix is enabled.
 The 'zabbix-monitoring' role will be assigned automatically.

This seems a good way to handle it, but would it work for a plugin that
wants to be monitored (since those roles run afterwards)?

 When the Zabbix component is enabled, we will install zabbix-server on all
 controllers in active-backup mode (pacemaker+haproxy).

Again, this must not be forced onto the controllers; that is very bad.


Controllers:

While there are development use cases for deploying monitoring on combined
controllers, and it can make use of the already existing pacemaker
cluster, this is the wrong direction to point users. There are many
reasons this is bad: for one, monitoring can become quite loaded, and
as we've seen, secondary load on the controllers can collapse the
entire control plane. Secondly, running monitoring on the cluster may
also result in the monitoring going offline if the cluster does; from
my own experience, not being able to see your monitoring is nearly
worse than having everything down, and it costs precious moments
of downtime SLA.

HA Scaling:

Just like with controllers, our other HA components need to support a
scale of 1 to N. This is important because a cluster will need to scale:
as the operator moves from POC to production, they can deploy more
hardware. This also helps alleviate some of the "not enough nodes"
issues already mentioned in the thread.

-- 
Andrew
Mirantis
Ceph community
