Re: [openstack-dev] [qa][cinder][ceph] should Tempest tests the backend specific feature?

2017-05-02 Thread Jordan Pittier
On Tue, May 2, 2017 at 7:42 AM, Ghanshyam Mann 
wrote:

> In Cinder, there are many features/APIs which are backend specific and
> will return 405 or 501 if they are not implemented by a given backend [1].
> If such tests are implemented in Tempest, they will break gates
> where that backend's job is voting, like the ceph job in the glance_store gate.
>
> There have been many such cases recently where ceph jobs were broken by
> such tests, most recently for the force-delete backup feature [2];
> the force-delete tests are being reverted in [3]. To resolve such cases to
> some extent, Jon is going to add a white/black list of tests which can run
> on the ceph job [4], depending on which features ceph implements. But this
> does not resolve the issue completely, for several reasons:
> 1. External use of Tempest becomes difficult, since users need to know
> which tests to skip for which backend.
> 2. Tempest tests become too backend-specific.
>
> Now there are a few options to resolve this:
> 1. Tempest should not test APIs/features which are backend specific,
> as indicated by the api-ref [1].
>
So basically, if one of the 50 Cinder drivers doesn't support a feature, we
should never test that feature? What about the 49 other drivers? If a
feature exists and can be tested in the gate (with whatever default
config/driver is shipped), then I think we should test it.


> 2. Tempest tests can be disabled/skipped based on backend. - This is not
> a good idea, as it increases the number of config options and the overhead of setting them.
>
Using a regex and a blacklist, any 3rd-party CI can skip any test based on
its test ID, without introducing a config flag. See:
https://github.com/openstack-infra/project-config/blob/1cea31f402b6b047cde203c12184b5392c90/jenkins/jobs/devstack-gate.yaml#L1871
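The mechanism is nothing more than filtering the discovered test IDs against
a list of regexes before handing them to the runner. A minimal sketch of the
idea (the blacklist entries and test IDs below are made up; this is not the
actual devstack-gate code):

    import re

    # Hypothetical blacklist: one regex per line in a file, loaded into a list.
    BLACKLIST = [
        r'tempest\.api\.volume\.admin\..*test_force_delete_backup',
        r'tempest\.api\.volume\..*snapshot_manage',
    ]

    def filter_tests(test_ids, blacklist=BLACKLIST):
        """Return only the test IDs that match none of the blacklist regexes."""
        patterns = [re.compile(regex) for regex in blacklist]
        return [test_id for test_id in test_ids
                if not any(pattern.search(test_id) for pattern in patterns)]

    # A backend-specific job feeds the full discovered list through the filter
    # before running anything.
    discovered = [
        'tempest.api.volume.admin.test_backups.BackupsAdminTest.test_force_delete_backup',
        'tempest.api.volume.test_volumes_get.VolumesGetTest.test_volume_create_get_delete',
    ]
    print(filter_tests(discovered))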


> 3. Tempest tests can verify behavior with if/else conditions per
> backend. This is a bad idea and weakens the tests.
>
Yeah, that's bad.

>
> IMO option 1 is the better option. More feedback is welcome.


> ..1 https://developer.openstack.org/api-ref/block-storage/v3/?
> expanded=force-delete-a-backup-detail#force-delete-a-backup
> ..2 https://bugs.launchpad.net/glance/+bug/1687538
> ..3 https://review.openstack.org/#/c/461625/
> ..4 http://lists.openstack.org/pipermail/openstack-dev/2017-
> April/115229.html
>
> -gmann
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ptls] Apologies and update

2017-04-20 Thread Jordan Pittier
Hi Tony,
Thank you for this email and the good work you've been doing for OpenStack.

Take care,
Jordan

On Thu, Apr 20, 2017 at 8:56 AM, Tony Breeds 
wrote:

> Hi All,
> I'd belatedly like to apologise for my recent absence from my OpenStack
> duties.  As a few people know I needed to take a small break due to medical
> leave.
>
> Also, unrelated to that, my employment circumstances have changed recently,
> so while I'm
> still committed to the community and my roles, I'm going to be a little
> distracted for the, hopefully, near future.
>
> I hope to see many of you in Boston, which thanks to the Foundation I'm
> able to
> attend. \o/
>
> Yours Tony.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev] [Designate]

2017-04-17 Thread Jordan Pittier
Hi,

On Mon, Apr 17, 2017 at 7:32 PM, Carmine Annunziata <
carmine.annunziat...@gmail.com> wrote:

> Hi guys,
> I'm trying to integrate Designate into OpenStack with devstack. The standard
> procedure uses rejoin-stack.sh after stack.sh, but in the latest release
> it isn't there anymore. So what can I do instead?
>
You can "rejoin" your screen session using the "screen -r" command.

>
> Carmine
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][gate] tempest slow - where do we execute them in gate?

2017-04-17 Thread Jordan Pittier
On Mon, Apr 17, 2017 at 6:50 AM, Ihar Hrachyshka 
wrote:

> Hi all,
>
> so I tried to inject a failure in a tempest test and was surprised
> that no gate job failed because of that:
> https://review.openstack.org/#/c/457102/1
>
> It turned out that the test is not executed because we always ignore
> all 'slow' tagged test cases:
> http://logs.openstack.org/02/457102/1/check/gate-tempest-
> dsvm-neutron-full-ubuntu-xenial/89a08cc/console.html#_
> 2017-04-17_01_43_39_115768

Indeed, we don't run slow tests. Many network scenarios have not been run since
https://review.openstack.org/#/c/439698/


>
>
> Question: do we execute those tests anywhere in gate, and if so,
> where? (And if not, why, and how do we guarantee that they are not
> broken by new changes?)
>
We don"t run slow tests because the QA team think that they don't bring
enough value to be executed, every time and everywhere. The idea is that if
some specific slow tests are of some interest to some specific openstack
projects, those projects can change the config of their jobs to enable
these tests.
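For what it's worth, the slow/not-slow split is only a regex over the test
IDs and their attribute tags. A rough sketch of how the usual
negative-lookahead filter behaves (the test IDs are illustrative):

    import re

    # The negative lookahead rejects any test whose attribute list carries the
    # 'slow' tag; dropping the lookahead re-enables the slow tests.
    SELECT_NOT_SLOW = re.compile(r'(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario))')

    test_ids = [
        'tempest.scenario.test_network_v6.TestGettingAddress.test_multi_prefix_slaac[network,slow]',
        'tempest.api.compute.servers.test_create_server.ServersTestJSON.test_verify_created_server[smoke]',
    ]
    for test_id in test_ids:
        runs = bool(SELECT_NOT_SLOW.match(test_id))
        print('%s -> %s' % (test_id, 'run' if runs else 'skipped (slow)'))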



>
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][gate][all] dsvm gate stability and scenario tests

2017-03-17 Thread Jordan Pittier
On Fri, Mar 17, 2017 at 3:11 PM, Sean Dague <s...@dague.net> wrote:

> On 03/17/2017 09:24 AM, Jordan Pittier wrote:
> >
> >
> > On Fri, Mar 17, 2017 at 1:58 PM, Sean Dague <s...@dague.net
> > <mailto:s...@dague.net>> wrote:
> >
> > On 03/17/2017 08:27 AM, Jordan Pittier wrote:
> > > The patch that reduced the number of Tempest Scenarios we run in
> every
> > > job and also reduce the test run concurrency [0] was merged 13
> days ago.
> > > Since, the situation (i.e the high number of false negative job
> results)
> > > has not improved significantly. We need to keep looking
> collectively at
> > > this.
> >
> > While the situation hasn't completely cleared out -
> > http://tinyurl.com/mdmdxlk - since we've merged this we've not seen
> that
> > job go over 25% failure rate in the gate, which it was regularly
> > crossing in the prior 2 week period. That does feel like progress. In
> > spot checking we are also rarely failing in scenario tests now, but
> > the fails tend to end up inside heavy API tests running in parallel.
> >
> >
> > > There seems to be an agreement that we are hitting some memory
> limit.
> > > Several of our most frequent failures are memory related [1]. So we
> > > should either reduce our memory usage or ask for bigger VMs, with
> more
> > > than 8GB of RAM.
> > >
> > > There was/is several attempts to reduce our memory usage, by
> reducing
> > > the Mysql memory consumption ([2] but quickly reverted [3]),
> reducing
> > > the number of Apache workers ([4], [5]), more apache2 tuning [6].
> If you
> > > have any crazy idea to help in this regard, please help. This is
> high
> > > priority for the whole openstack project, because it's plaguing
> many
> > > projects.
> >
> > Interesting, I hadn't seen the revert. It is also curious that it was
> > largely limited to the neutron-api test job. It's also notable that
> the
> > sort buffers seem to have been set to the minimum allowed limit of
> mysql
> > -
> > https://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.
> html#sysvar_innodb_sort_buffer_size
> > <https://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.
> html#sysvar_innodb_sort_buffer_size>
> > - and is over an order of magnitude decrease from the existing
> default.
> >
> > I wonder about redoing the change with everything except it and
> seeing
> > how that impacts the neutron-api job.
> >
> > Yes, that would be great because mysql is by far our biggest memory
> > consumer so we should target this first.
>
> While it is the single biggest process, weighing in at 500 MB, the
> python services are really our biggest memory consumers. They are
> collectively far outweighing either mysql or rabbit, and are the reason
> that even with 8GB guests we're running out of memory. So we want to
> keep that under perspective.
>
Absolutely. I have https://review.openstack.org/#/c/446986/ in that vein.
And if someone wants to start the work of not running the various Swift
*auditor*, *updater*, *reaper* and *replicator* services when the Swift
replication factor is set to 1, that would also be a good memory saving.

>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][gate][all] dsvm gate stability and scenario tests

2017-03-17 Thread Jordan Pittier
On Fri, Mar 17, 2017 at 1:58 PM, Sean Dague <s...@dague.net> wrote:

> On 03/17/2017 08:27 AM, Jordan Pittier wrote:
> > The patch that reduced the number of Tempest Scenarios we run in every
> > job and also reduce the test run concurrency [0] was merged 13 days ago.
> > Since, the situation (i.e the high number of false negative job results)
> > has not improved significantly. We need to keep looking collectively at
> > this.
>
> While the situation hasn't completely cleared out -
> http://tinyurl.com/mdmdxlk - since we've merged this we've not seen that
> job go over 25% failure rate in the gate, which it was regularly
> crossing in the prior 2 week period. That does feel like progress. In
> spot checking we are also rarely failing in scenario tests now, but
> the fails tend to end up inside heavy API tests running in parallel.
>

> > There seems to be an agreement that we are hitting some memory limit.
> > Several of our most frequent failures are memory related [1]. So we
> > should either reduce our memory usage or ask for bigger VMs, with more
> > than 8GB of RAM.
> >
> > There was/is several attempts to reduce our memory usage, by reducing
> > the Mysql memory consumption ([2] but quickly reverted [3]), reducing
> > the number of Apache workers ([4], [5]), more apache2 tuning [6]. If you
> > have any crazy idea to help in this regard, please help. This is high
> > priority for the whole openstack project, because it's plaguing many
> > projects.
>
> Interesting, I hadn't seen the revert. It is also curious that it was
> largely limited to the neutron-api test job. It's also notable that the
> sort buffers seem to have been set to the minimum allowed limit of mysql
> -
> https://dev.mysql.com/doc/refman/5.6/en/innodb-
> parameters.html#sysvar_innodb_sort_buffer_size
> - and is over an order of magnitude decrease from the existing default.
>
> I wonder about redoing the change with everything except it and seeing
> how that impacts the neutron-api job.
>
Yes, that would be great because mysql is by far our biggest memory
consumer so we should target this first.

>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][gate][all] dsvm gate stability and scenario tests

2017-03-17 Thread Jordan Pittier
The patch that reduced the number of Tempest scenarios we run in every job
and also reduced the test run concurrency [0] was merged 13 days ago. Since
then, the situation (i.e. the high number of false-negative job results) has
not improved significantly. We need to keep looking collectively at this.

There seems to be an agreement that we are hitting some memory limit.
Several of our most frequent failures are memory related [1]. So we should
either reduce our memory usage or ask for bigger VMs, with more than 8GB of
RAM.

There were/are several attempts to reduce our memory usage: reducing the
MySQL memory consumption ([2], quickly reverted in [3]), reducing the
number of Apache workers ([4], [5]), and more apache2 tuning [6]. If you have
any crazy idea to help in this regard, please do. This is a high priority
for the whole OpenStack project, because it's plaguing many projects.

We have some tools to investigate memory consumption, like some regular
"dstat" output [7], a home-made memory tracker [8] and stackviz [9].

Best,
Jordan

[0]: https://review.openstack.org/#/c/439698/
[1]: http://status.openstack.org/elastic-recheck/gate.html
[2] : https://review.openstack.org/#/c/438668/
[3]: https://review.openstack.org/#/c/446196/
[4]: https://review.openstack.org/#/c/426264/
[5]: https://review.openstack.org/#/c/445910/
[6]: https://review.openstack.org/#/c/446741/
[7]:
http://logs.openstack.org/96/446196/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/b5c362f/logs/dstat-csv_log.txt.gz
[8]:
http://logs.openstack.org/96/446196/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/b5c362f/logs/screen-peakmem_tracker.txt.gz
[9] :
http://logs.openstack.org/41/446741/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/fa4d2e6/logs/stackviz/#/stdin/timeline

On Sat, Mar 4, 2017 at 4:19 PM, Andrea Frittoli 
wrote:

> Quick update on this, the change is now merged, so we now have a smaller
> number of scenario tests running serially after the api test run.
>
> We'll monitor gate stability for the next week or so and decide whether
> further actions are required.
>
> Please keep categorizing failures via elastic recheck as usual.
>
> thank you
>
> andrea
>
> On Fri, 3 Mar 2017, 8:02 a.m. Ghanshyam Mann, 
> wrote:
>
>> Thanks. +1. I added my list in the ethercalc.
>>
>> The left-out scenario tests can be run in periodic and experimental jobs. IMO
>> on both (periodic and experimental), to monitor their status periodically
>> as well as on a particular patch if we need to.
>>
>> -gmann
>>
>> On Fri, Mar 3, 2017 at 4:28 PM, Andrea Frittoli <
>> andrea.fritt...@gmail.com> wrote:
>>
>> Hello folks,
>>
>> we discussed a lot since the PTG about issues with gate stability; we
>> need a stable and reliable gate to ensure smooth progress in Pike.
>>
>> One of the issues that stands out is that most of the times during test
>> runs our test VMs are under heavy load.
>> This can be the common cause behind several failures we've seen in the
>> gate, so we agreed during the QA meeting yesterday [0] that we're going to
>> try reducing the load and see whether that improves stability.
>>
>> Next steps are:
>> - select a subset of scenario tests to be executed in the gate, based on
>> [1], and run them serially only
>> - the patch for this is [2] and we will approve this by the end of the day
>> - we will monitor stability for a week - if needed we may reduce
>> concurrency a bit on API tests as well, and identify "heavy" tests
>> candidate for removal / refactor
>> - the QA team won't approve any new test (scenario or heavy resource
>> consuming api) until gate stability is ensured
>>
>> Thanks for your patience and collaboration!
>>
>> Andrea
>>
>> ---
>> irc: andreaf
>>
>> [0] http://eavesdrop.openstack.org/meetings/qa/
>> 2017/qa.2017-03-02-17.00.txt
>> [1] https://ethercalc.openstack.org/nu56u2wrfb2b
>> [2] https://review.openstack.org/#/c/439698/
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [QA]Refactoring Scenarios manager.py

2017-03-01 Thread Jordan Pittier
On Wed, Mar 1, 2017 at 4:18 AM,  wrote:

> I think it is a good solution, I already put +1 :)
>
>
> And, as to the scenario testcases, shall we:
>
> 1) remove test steps/checks already covered in API tests
>
Duplicated test steps/checks are not good and should be removed. This is not
related to the scenario refactoring effort, so if you find
duplicated tests or test steps, please remove them.

> 2) remove sequence test cases (such as test_server_sequence_suspend_resume),
> otherwise scenarios will get fatter and fatter
>
There's no definitive answer to that. We should just remember what a
scenario should be: testing several OpenStack components, "real" world use
cases, "real" integration testing. Those should be our guidelines for
scenarios. We should not buy into "I put it into the scenarios directory
because the helper methods were here and convenient" or "because I saw an
already existing scenario that looks kind of the same".
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA]Refactoring Scenarios manager.py

2017-03-01 Thread Jordan Pittier
On Wed, Mar 1, 2017 at 3:57 AM, Ghanshyam Mann 
wrote:

> Doing gradual refactoring and fixing plugins from time to time needs a lot of
> waiting and syncing.
>
> That needs:
> 1. Plugins to switch away from the current method usage. Plugins to have some other
> function or the same copy-pasted code that the current scenario base class has.
> 2. The Tempest patch to wait for the plugin fix.
> 3. Plugins to switch back to the stable interface once Tempest
> provides it.
>
> This needs a lot of sync between Tempest and plugins, and we have to wait for
> the Tempest refactoring patch for a long time.
>
> To make it more efficient, how about this:
> 1. Keep the scenario manager copy in Tempest as it is, for plugins' usage
> only.
>
Given that the refactoring effort "started" a year ago, at the current
speed it may take 2 years to complete. In the meantime we will have
massive code duplication and a maintenance burden.

> 2. Start refactoring the scenario framework by adding more and more helper
> function under /common or lib.
>
Starting a "framework" (each time I see that word, I have a bad feeling)
from scratch without users and usage is very very difficult. How do we know
what we need in that framework and what will be actually used in tests ?

The effort was called scenario refactoring and I think that's what we
should do. We should not do "start from scratch scenarios" or "copy all the
code and see what happens".

There's no problem with plugins. We committed to having a stable interface
which is documented and agreed upon. It's clearly written here:
https://docs.openstack.org/developer/tempest/plugin.html#stable-tempest-apis-plugins-may-use
The rest is private, for Tempest internal use. If Tempest cores disagree
with that, then we should first of all write a spec and rewrite that
document.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA]Refactoring Scenarios manager.py

2017-02-25 Thread Jordan Pittier
Hi guys,
So I have a problem with these 2 patches, here [1] and here [2]. You are
basically blocking any attempt at refactoring manager.py. Refactoring
that file has been our number one priority for 2 cycles, and so far hardly
anyone has really stepped up to do the work, except me with these 2 patches.
Let me remind you that that file is a gigantic mess, and so are our network
scenarios.

The manager.py file in the scenarios directory has no stable interface, and
it was never "advertised" as such. That some plugins decided to use private
methods (such as _get_network_by_name) is unfortunate, but that should
not block us from moving forward.

So just to be clear, if we really want to refactor our scenarios (and we
must, in my opinion), things will break for projects that are importing
Tempest and using it outside of its stable interface. I am not interested
in being the good Samaritan for the whole OpenStack galaxy, I have enough
with the 6 core projects and the Tempest stable interface. So guys, if you
don't want to go forward with [1] and [2], be sure I'll never touch
those scenarios again. I am not upset, but we have to make clear decisions,
sometimes difficult ones.

[1] : https://review.openstack.org/#/c/436555/
[2] : https://review.openstack.org/#/c/438097/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA]Removal of test runner wrapper scripts in Tempest

2017-02-22 Thread Jordan Pittier
Hi,
I've proposed https://review.openstack.org/#/c/436983/ in Tempest which
says:

*Remove deprecated test runner wrappers (.sh files). This patch removes
run_tempest.sh, run_tests.sh, tools/pretty_tox.sh and
tools/pretty_tox_serial.sh. They have all been deprecated for between 7 and 9
months. As stated in the deprecation warnings, the way forward is with
os-testr, testr or stestr.*

If you are still using one of these scripts from the Tempest repo, please do
the switch and reply to this email so that we can give you some extra time.

Cheers,
Jordan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][all] HELP NEEDED: test failures blocking requirements ocata branch and opening of pike

2017-02-09 Thread Jordan Pittier
On Thu, Feb 9, 2017 at 2:16 PM, Doug Hellmann  wrote:
> Excerpts from Doug Hellmann's message of 2017-02-08 23:54:06 -0500:
>> The patch to update the XStatic package versions [1] is blocked by a
>> patch to remove nova-docker from the requirements project sync list [2],
>> which is in turn running into issues in the
>> gate-grenade-dsvm-neutron-multinode-ubuntu-xenial job [3].
>>
>> We need some folks who understand these tests and the related
>> projects (nova, cinder, and neutron based on a cursory review of
>> the failed tests) to help with debugging and fixes.
>>
>> Doug
>>
>> [1] https://review.openstack.org/#/c/429753
>> [2] https://review.openstack.org/#/c/431221
>> [3] 
>> http://logs.openstack.org/21/431221/1/gate/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/67dd8a4/logs/grenade.sh.txt.gz
>>
>
> These patches landed overnight, so we are ready to proceed. It's not
> clear if there was a fix involved or just the magic of recheck, so if
> you had a hand in helping please post details.
I've checked the failures and I lean toward 'the magic of recheck'.
>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Announcing my candidacy for PTL of the Pike cycle

2017-02-06 Thread Jordan Pittier
Hi gmann,
Thanks for your candidacy. Let me ask one question, if it's not too
late. What's the role of the QA team when it comes to API changes? I
have in mind the recent Glance change related to private vs shared
image status.

Someone in our community asked:
* "I need to get an official decision from the QA team on whether
[such a] patch is acceptable or not"
* "what's needed is an "official" response from the QA team concerning
the acceptability of the patch"

But we didn't provide such an answer. There could be a feeling that
the QA team is acting as a self-appointed activist judiciary.

Now we have another occurrence of a disagreement between the QA team
and a project team: https://bugs.launchpad.net/glance/+bug/1656183,
https://review.openstack.org/#/c/420038/,
https://review.openstack.org/#/c/425487/

I myself have no strong opinion on the matter, that's why I need a leader here :)

Note, there *is* a question in this email :D

Jordan

On Thu, Jan 19, 2017 at 12:05 PM, Ghanshyam Mann
 wrote:
> Hi All,
>
>
>
> First and foremost would like to wish you all a successful 2017 ahead and
> with this I'm announcing my PTL candidacy of the Quality Assurance team for
> the Pike release cycle.
>
>
>
> I am glad to work in OpenStack community and would like to thank all the
> contributors who supported me to explore new things which brings out my best
> for the community.
>
>
>
> Let me introduce myself, briefly. I have joined the OpenStack community
> development in 2014 during mid of Ice-House release. Currently, I'm
> contributing in QA projects and Nova as well as a core member in Tempest.
> Since Barcelona Summit, I volunteered  as mentor in the upstream training.
> It's always a great experience to introduce OpenStack upstream workflow to
> new contributors and encourage them.
>
>
>
> Following are my contribution activities:
>
> * Review:
> http://stackalytics.com/?release=all=marks_id=ghanshyammann
>
> * Commit:
> http://stackalytics.com/?release=all=commits_id=ghanshyammann
>
>
>
> I have worked on some key areas on QA like Interfaces migration to lib, JSON
> schema response validation(for compute), API Microversion testing framework
> in Tempest, Improve test coverage and Bug Triage etc.
>
>
>
> QA program has been immensely improved since it was introduced which
> increased upstream development quality as well as helping production Cloud
> for their testing and stability. We have a lot of ideas from many different
> contributors to keep improving the QA which is phenomenal and I truly
> appreciate.
>
>
>
> Moving forwards, following are my focus areas for Pike Cycle:
>
>
>
> * Help the other Projects' developments and Plugin Improvement:
>
> OpenStack projects consider the quality is important and QA team needs to
> provide useful testing framework for them. Projects who all needs to
> implement their tempest tests in plugin, focus will be to help plugin tests
> improvement and so projects quality. Lot of Tempest  interfaces are moving
> towards stable interfaces, existing plugin tests needs to be fixed multiple
> times. We are taking care of those and helping them to migrate smoothly. But
> there are still many  interfaces going to migrate to lib and further to be
> adopted on plugin side. I’d like to have some mechanism/automation to
> trigger plugins to know about change interfaces before it breaks them. Also
> help them to use the framework correctly. This helps the other non-core
> projects’ tests.
>
>
>
> * Improve QA projects for Production Cloud:
>
> This will be the main focus area. Having QA projects more useful for
> Production Cloud testing is/will be great achievement for QA team. This area
> has been improved a lot since last couple of cycles and still a lot to do.
> We have to improve Production scenario testing coverage and make all QA
> projects easy to configure and use. During Barcelona summit, 2 new projects
> are initiated which will definitely help to achieve this goal.
>
>   *RBAC Policy -  https://github.com/openstack/patrole
>
>   *HA testing  -  https://review.openstack.org/#/c/374667/
>
>   https://review.openstack.org/#/c/399618/
>
>   *Hoping for more in future
>
> There will be more focus on those projects and new ideas which will help
> production Cloud testing in more powerful way.
>
>
>
> * JSON Schema *response* validation for projects:
>
> JSON schema response validation for compute APIs has been very helpful to
> keep the APIs quality and compatibility. Currently many projects support
> microversion which provides a way to introduce the APIs changes in Backward
> compatible way. I'd like to concentrate on response schema validation for
> those projects also. This helps the OpenStack interoperability and the APIs
> compatibility.
>
>
>
> * Improve Documentation and UX:
>
> Documentation and UX are the key part for any software. There have been huge
> improvement in UX , documentation 

Re: [openstack-dev] [qa] PTL Candidacy for Pike

2017-02-06 Thread Jordan Pittier
Hi Andrea,
Thanks for your candidacy. Let me ask one question, if it's not too
late. What's the role of the QA team when it comes to API changes? I
have in mind the recent Glance change related to private vs shared
image status.

Someone in our community asked:
* "I need to get an official decision from the QA team on whether
[such a] patch is acceptable or not"
* "what's needed is an "official" response from the QA team concerning
the acceptability of the patch"

But we didn't provide such an answer. There could be a feeling that
the QA team is acting as a self-appointed activist judiciary.

Now we have another occurrence of a disagreement between the QA team
and a project team: https://bugs.launchpad.net/glance/+bug/1656183,
https://review.openstack.org/#/c/420038/,
https://review.openstack.org/#/c/425487/

I myself have no strong opinion on the matter, that's why I need a leader here :)

Note, there *is* a question in this email :D

Jordan

On Fri, Jan 20, 2017 at 1:56 AM, Andrea Frittoli
 wrote:
> Dear all,
>
> I’d like to announce my candidacy for PTL of the QA Program for the Pike
> cycle.
>
> I started working with OpenStack towards the end of 2011. Since 2014 I’ve
> been
> a core developer for Tempest.
> I’ve always aimed for Tempest to be able to run against any OpenStack cloud;
> a lot of my contributions to Tempest have been driven by that.
> I’ve worked on QA for the OpenStack community, for an OpenStack based public
> cloud as well as for an OpenStack distribution.
>
> I believe that quality engineers should develop innovative, high quality
> open-source tools and tests.
>
> The OpenStack community has built an amazing set of tools and services to
> handle quality engineering at such a large scale.
> The number of tests executed, the test infrastructure and amount of test
> data
> produced can still be difficult to handle.
> Complexity can inhibit new contributors as well as existing ones, not only
> for
> the QA program but for OpenStack in general as well.
>
> If elected, in the Pike cycle I would like to focus on two areas.
>
> - QA team support to the broader OpenStack community
> - Finish the work on Tempest stable interfaces for plugins and support
>   existing plugins in the transition
> - Keep an open channel with the broader community when setting
> priorities
>
> - Promote contribution to the QA program, by:
> - removing cruft from Tempest code
> - making it easier to know “what’s going on” when a test job fails
> - focus on tools that help triage and debug gate failures (OpenStack
>   Health, Stackviz)
> - leverage the huge amount of test data we produce every day to
>   automate as much as possible the failure triage and issue
> discovery
>   processes
>
> I hold the QA crew in great esteem, and I would be honoured to serve as the
> next PTL.
>
> Thank you
>
> Andrea Frittoli (andreaf)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api] POST /api-wg/news

2017-02-06 Thread Jordan Pittier
On Fri, Feb 3, 2017 at 7:51 PM, Ken'ichi Ohmichi 
wrote:

> 2017-02-03 9:56 GMT-08:00 Chris Dent :
> > On Thu, 2 Feb 2017, Ken'ichi Ohmichi wrote:
> >>>
> >>> In today's meeting [0] after briefly covering old business we spent
> >>> nearly
> >>> 50 minutes going round in circles discussing the complex interactions
> of
> >>> expectations of API stability, the need to fix bugs and the costs and
> >>> benefits of microversions. We didn't make a lot of progress on the
> >>> general
> >>> issues, but we did #agree that a glance issue [4] should be treated as
> a
> >>> code bug (not a documentation bug) that should be fixed. In some ways
> >>> this
> >>> position is not aligned with the ideal presented by stability
> guidelines
> >>> but
> >>> it is aligned with an original goal of the API-WG: consistency. It's
> >>> unclear
> >>> how to resolve this conflict, either in this specific instance or in
> the
> >>> guidelines that the API-WG creates. As stated in response to one of the
> >>> related reviews [5]: "If bugs like this don't get fixed properly in the
> >>> code, OpenStack risks going down the path of Internet Explorer and
> people
> >>> wind up writing client code to the bugs and that way lies madness."
> >>
> >>
> >> I am not sure the code change can avoid the madness.
> >
> >
> > Just for the record, I'm not the speaker of that quote. I included
> > it because I think it does a good job of representing the complexity
> > and confusion that we have going on or at least in inspiring
> > responses that help to do so.
> >
> > Which is a round about way of saying: Thank you very much for
> > responding.
>
> Haha, I see.
>
> >> If we change the success status code (200 ->204) without any version
> >> bumps, OpenStack clouds return different status codes on the same API
> >> operations.
> >> That will break OpenStack interoperability and clients' APPs need to
> >> be changed for accepting 204 also as success operation.
> >> That could make APPs code mudness.
> >> I also think this is basically code bug, but this is hard to fix
> >> because of big impact against the existing users.
> >
> >
> > There have been lots of different opinions and perspective on this
> > (in the reviews and elsewhere), all of which are pretty sensible but
> > as a mass are hard to reconcile. The below is reporting evidence, not
> > supporting a plan:
> >
> >   The API-WG is striving for OpenStack APIs to be consistent within
> >   themselves, with each other and with the HTTP RFCs. This particular
> >   issue is an example where none of those are satisfied.
> >
> > Yet it is true that if client code is specifically looking for a
> > 200 response this change, without a version signal, will break
> > that code.
> >
> >   But glance isn't set up with microversions or something like.
> >
> >   But isn't checking specifically for 200 as "success" unusual so
> >   this is unlikely to be as bad as changing a 4xx to some other
> >   4xx.
> >
> >   But correcting the docs so they indicate this one request out of
> >   several in a group breaks the 204 pattern set by the rest of
> >   the group and could easily be perceived as a typo and thus need
> >   to be annotated as "special".
> >
> > How do we reconcile that?
>
> This API has been implemented since 2 years ago, and it is easy to
> imagine many users are using the API.
> If changing success status code like this, we send a message "status
> code could be changed anytime" to users and users would recognize "the
> success status code is unstable and it is better to check status code
> range(20X) instead of a certain code(200, 201, etc) for long-term
> maintenance".
>

I think we should be pragmatic. There's a difference between changing one
return code from 200 to 204 (still in the 2XX range), exceptionally, for one
endpoint, and saying "we keep changing status codes, OpenStack is not
stable".

Microversions are great, but not all projects support them yet. In the
meantime, if a project properly documents a minor API change (like this one)
through a release note, that sounds reasonable. Sure, some user code will
break at upgrade, but let's be realistic: things get broken after a
major upgrade of any software; that's why operators/deployers test
upgrades beforehand.


>
> If the above is we expect/hope, why should we fix/change success code
> to ideal one?
> On the above scenario, we are expecting users should not check a certain
> code.
> So even if changing status code, users would not be affected by the change.
> Whom we are changing the status code for?
>

That's a good question. In that case we should do it "for us, the
developers". I prefer to work with a sane code base, a sane API and little
technical debt. But I also understand that users are paying us for
stability.


> That seems a dilemma.
>

I would say we should make compromises, not solve dilemmas. I can live in a
world where we sometimes 

Re: [openstack-dev] gate jobs - papercuts

2017-02-02 Thread Jordan Pittier
On Wed, Feb 1, 2017 at 6:57 PM, Davanum Srinivas  wrote:
> Thanks Melanie! Since the last report 4 hours ago, the
> ServersNegativeTestJSON failed 8 more times.
>
> Is the following one of the libvirt ones?
>
> http://logs.openstack.org/24/425924/2/gate/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/f1b9229/logs/testr_results.html.gz
> tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2
No, it's something else. Probably a random network glitch: the failure is
due to "13:19:01,294 12963 ERROR [paramiko.transport] Socket
exception: Connection reset by peer (104)" while executing an SSH
command on a fresh test VM.
>
> Thanks,
> Dims
>
>
> On Wed, Feb 1, 2017 at 12:29 PM, melanie witt  wrote:
>> On Wed, 1 Feb 2017 08:06:41 -0500, Davanum Srinivas wrote:
>>>
>>> Three more from this morning, at least the first one looks new:
>>>
>>>
>>> http://logs.openstack.org/04/426604/3/gate/gate-tempest-dsvm-neutron-full-ubuntu-xenial/708771d/logs/testr_results.html.gz
>>> tempest.api.compute.images.test_list_image_filters
>>>
>>>
>>> http://logs.openstack.org/99/420299/3/gate/gate-tempest-dsvm-neutron-dvr-ubuntu-xenial/6e8d208/logs/testr_results.html.gz
>>>
>>> http://logs.openstack.org/23/427223/2/gate/gate-tempest-dsvm-neutron-full-ubuntu-xenial/18e635a/logs/testr_results.html.gz
>>> tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON
>>
>>
>> The last two are https://launchpad.net/bugs/1660878 being worked on here
>> https://review.openstack.org/#/c/427775
>>
>> -melanie
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][Nova]Libvirt frequent crashes on Ubuntu Xenial

2017-02-01 Thread Jordan Pittier
Hi,
After investigating several gate failures, and thanks to logstash, I
found out that libvirtd often crashes badly on Ubuntu Xenial. I know
of 3 distinct crashes, with the following fingerprints in syslog.txt:

* libvirtd: malloc.c:3720: _int_malloc: Assertion `(unsigned long)
(size) >= (unsigned long) (nb)' failed.
13 hits in the last 7 days. Somewhat reported in
https://bugs.launchpad.net/tempest/+bug/1646779

* traps: libvirtd[14731] general protection ip:7f2249691c4e
sp:7ffd2c223d70 error:0 in libc-2.23.so[7f224961+1bf000]
92 hits in the last 7 days. Somewhat reported in
https://bugs.launchpad.net/tempest/+bug/1646779

* *** Error in `/usr/sbin/libvirtd': malloc(): memory corruption:
0x55d6a8375ca0 ***
100 hits in the last 7 days. Somewhat reported in
https://bugs.launchpad.net/nova/+bug/1643911

Now, depending on how quickly systemd detects these crashes and restarts
libvirtd, not all of these hits lead to a job failure.

My C skills are very limited, and this is too low-level for me to
track down. But there's definitely something wrong that should be looked
into here. If anyone has some free cycles, this would be interesting
to track down.

Cheers,
Jordan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How change a section name in oslo.config but keep compatibility with the old format

2017-01-12 Thread Jordan Pittier
Hi,

Look at this file
https://github.com/openstack/tempest/blob/master/tempest/config.py and
search for "deprecated_opts". I am not sure you can deprecate a whole
section, but you can set the old location (old section/old option
name) for each new config option.
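Something along these lines should do it (a sketch only; the option and
section names below are made up, not actual Cloudkitty config):

    from oslo_config import cfg

    # The option now lives in the [storage] section, but values set under the
    # old [output] section keep working and log a deprecation warning.
    storage_opts = [
        cfg.StrOpt('backend',
                   default='sqlalchemy',
                   deprecated_opts=[cfg.DeprecatedOpt('backend', group='output')],
                   help='Storage backend driver.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(storage_opts, group='storage')

If only the section name changes and the option keeps its name, the
deprecated_group argument of the Opt classes achieves the same thing in one go.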

On Thu, Jan 12, 2017 at 11:01 AM, Maxime Cottret
 wrote:
> Hi every one,
>
> I'm a new developer on the Cloudkitty project and I was wondering how I can
> deprecate a section name in oslo.config.
>
> I'd like to rename a section for consistency and deprecate the old name
> before removing its use in the next cycle.
>
> I've looked into the oslo.config documentation but couldn't find a way to do
> it.
>
> Thanks in advance for your help.
>
> Regards
>
> --
> Maxime Cottret - Consultant Cloud/DataOps @ OBJECTIF-LIBRE
>
> Mail : maxime.cott...@objectif-libre.com
> Tel : 05 82 95 65 36 (standard)
> Web : www.objectif-libre.com Twitter: @objectiflibre
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][Nova]Making gate-tempest-dsvm-full-devstack-plugin-ceph-ubuntu-xenial non voting

2017-01-10 Thread Jordan Pittier
Hi,
I don't know if you've noticed, but
the gate-tempest-dsvm-full-devstack-plugin-ceph-ubuntu-xenial job has a
high rate of false negatives. I've queried Gerrit and analysed all the
"Verified -2" messages left by Jenkins (i.e. gate failures) for the last 30
days (the script is here [1]).
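The gist of the script is roughly this (a simplified sketch, not the actual
code linked in [1]; the Gerrit query and the message parsing are
approximations):

    import json
    import requests

    GERRIT = 'https://review.openstack.org'
    JOB = 'gate-tempest-dsvm-full-devstack-plugin-ceph-ubuntu-xenial'

    def changes_with_gate_failures(project, limit=300):
        # Changes from the last 30 days that got a Verified=-2, with their messages.
        params = {'q': 'project:%s label:Verified=-2 -age:30d' % project,
                  'o': 'MESSAGES', 'n': limit}
        raw = requests.get(GERRIT + '/changes/', params=params).text
        # Gerrit prefixes its JSON responses with ")]}'" to prevent XSSI.
        return json.loads(raw.split('\n', 1)[1])

    failed = succeeded = 0
    for change in changes_with_gate_failures('openstack/nova'):
        for message in change.get('messages', []):
            text = message.get('message', '')
            if 'Verified-2' not in text or JOB not in text:
                continue
            for line in text.splitlines():
                if JOB in line and 'FAILURE' in line:
                    failed += 1
                elif JOB in line and 'SUCCESS' in line:
                    succeeded += 1
    print('%s: failed %d times, succeeded %d times' % (JOB, failed, succeeded))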

On project openstack/nova: For the last 58 times where
gate-tempest-dsvm-full-devstack-plugin-ceph-ubuntu-xenial ran AND jenkins
left a 'Verified -2' message, the job failed 48 times and succeeded 10
times.

On project openstack/tempest: For the last 25 times where
gate-tempest-dsvm-full-devstack-plugin-ceph-ubuntu-xenial ran AND jenkins
left a 'Verified -2' message, the job failed 14 times and succeeded 11
times.

In other words, when there's a gate failure,
gate-tempest-dsvm-full-devstack-plugin-ceph-ubuntu-xenial is the main
culprit, by a significant margin.

I am a Tempest core reviewer and this bugs me because it slows down the
development of a project I care about, for reasons that I don't really
understand. I am going to propose a change to make this job non-voting on
openstack/tempest.

Jordan

[1] :
https://github.com/JordanP/openstack-snippets/blob/master/analyse-gate-failures/analyse_gate_failures.py

-- 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Embracing new languages in OpenStack

2016-11-07 Thread Jordan Pittier
Again? Are we going to have that discussion every 3 months...?

On Mon, Nov 7, 2016 at 6:09 PM, Flavio Percoco  wrote:

> Greetings,
>
> I literally just posted a thing on my blog with some thoughts of what I'd
> expect
> any new language being proposed for OpenStack to cover before it can be
> accepted.
>
> The goal is to set the expectations right for what's needed for new
> languages to
> be accepted in the community. During the last evaluation of the Go
> proposal I
> raised some concerns but I also said I was not closed to discussing this
> again
> in the future. It's clear we've not documented expectations yet and this
> is a
> first step to get that going before a new proposal comes up and we start
> discussing this topic again.
>
> I don't think a blog post is the "way we should do this" but it was my way
> to
> dump what's in my brain before working on something official (which I'll do
> this/next week).
>
> I also don't think this list is perfect. It could either be too
> restrictive or
> too open but it's a first step. I won't paste the content of my post in
> this
> email but I'll provide a tl;dr and eventually come back with the actual
> reference document patch. I thought I'd drop this here in case people read
> my
> post and were confused about what's going on there.
>
> Ok, here's the TL;DR of what I believe we should know/do before accepting
> a new
> language into the community:
>
> - Define a way to share code/libraries for projects using the language
> - Work on a basic set of libraries for OpenStack base services
> - Define how the deliverables are distributed
> - Define how stable maintenance will work
> - Setup the CI pipelines for the new language
>
> The longer and more detailed version is here:
>
> https://blog.flaper87.com/embracing-other-languages-openstack.html
>
> Stay tuned,
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

-- 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tempest] Failing to run a test on a remote devstack server

2016-10-20 Thread Jordan Pittier
Hi Yossi,
I suggest you carefully read the source code of this test and all the
options in your tempest.conf file.

Jordan

On Thu, Oct 20, 2016 at 3:22 PM, Yossi Tamarov 
wrote:

> Hello everyone,
> When trying to run the next test: TestNetworkBasicOps.test_
> connectivity_between_vms_on_different_networks, from the devstack server
> itself, it passes, but when trying to run it from another server. i.e. by
> changing the *uri *parameter in the tempest.conf file in another server,
> the first step fails (
>
> I really need to run the test from a remote server.
> Does anyone have any idea of what the origin of the problem might be?
>
> thanks in advance,
> Joseph.
>
> Log prints:
>
> *The unsuccessful run's output:*
> *root@devstack-man17-d1:/opt/stack/tempest# ostestr --regex
> '(?!.*\[.*\bslow\b.*\])('tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_connectivity_between_vms_on_different_ne*
> *tworks')'*
> *running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \*
> *OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \*
> *OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-500} \*
> *OS_TEST_LOCK_PATH=${OS_TEST_LOCK_PATH:-${TMPDIR:-'/tmp'}} \*
> *${PYTHON:-python} -m subunit.run discover -t ${OS_TOP_LEVEL:-./}
> ${OS_TEST_PATH:-./tempest/test_discover} --list*
> *running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \*
> *OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \*
> *OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-500} \*
> *OS_TEST_LOCK_PATH=${OS_TEST_LOCK_PATH:-${TMPDIR:-'/tmp'}} \*
> *${PYTHON:-python} -m subunit.run discover -t ${OS_TOP_LEVEL:-./}
> ${OS_TEST_PATH:-./tempest/test_discover}  --load-list /tmp/tmpflGqv0*
> *{0}
> tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_connectivity_between_vms_on_different_networks
> [1.439351s] ... FAILED*
>
> *==*
> *Failed 1 tests - output below:*
> *==*
>
>
> *tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_connectivity_between_vms_on_different_networks[compute,id-1546850e-fbaa-42f5-8b5f-03d8a6a95f15,network]*
>
> **
>
> *Captured pythonlogging:*
> *~~~*
> *2016-10-20 16:18:42,698 12084 INFO
> [tempest.lib.common.rest_client] Request
> (TestNetworkBasicOps:test_connectivity_between_vms_on_different_networks):
> 200 POST http://10.0.43.14:500 *
> *0/v2.0/tokens*
> *2016-10-20 16:18:42,698 12084 DEBUG
>  [tempest.lib.common.rest_client] Request - Headers: {'Content-Type':
> 'application/json', 'Accept': 'application/json'}*
> *Body: *
> *Response - Headers: {'status': '200', 'content-length': '3033',
> 'content-location': 'http://10.0.43.14:5000/v2.0/tokens
> ', 'vary': 'X-Auth-Token', 'server':
> 'Apache/2.4.10 (Debian)',*
> * 'connection': 'close', 'date': 'Thu, 20 Oct 2016 13:18:39 GMT',
> 'content-type': 'application/json', 'x-openstack-request-id':
> 'req-860cca80-c240-4868-af33-6767c34bac7b'}*
> *Body: {"access": {"token": {"issued_at":
> "2016-10-20T13:18:40.00Z", "expires": "2016-10-20T14:18:40Z", "id":
> "bcc6ea6da5a54551b8d48b2726fbc4cf", "tenant": {"description": "t*
> *empest-TestNetworkBasicOps-2126651196 <2126651196>-desc", "enabled":
> true, "id": "f8cb09dd19094615befd6cc27553c5d6", "name":
> "tempest-TestNetworkBasicOps-2126651196 <2126651196>"}, "audit_ids":
> ["5nc6OIUNQMiN3HtwZ7x*
> *i2A"]}, "serviceCatalog": [{"endpoints": [{"adminURL":
> "http://10.0.43.14:8774/v2.1/f8cb09dd19094615befd6cc27553c5d6
> ", "region":
> "RegionOne", "internalURL": "http://10.0.43.14:8774/v2.1/f8c
> *
> *b09dd19094615befd6cc27553c5d6", "id": "1f706e5df45a435c86e04536f1337d7a",
> "publicURL": "http://10.0.43.14:8774/v2.1/f8cb09dd19094615befd6cc27553c5d6
> "}],
> "endpoints_links": [], "type": "comp*
> *ute", "name": "nova"}, {"endpoints": [{"adminURL":
> "http://10.0.43.14:9696/ ", "region": "RegionOne",
> "internalURL": "http://10.0.43.14:9696/ ", "id":
> "32476f6907cc48d49c5bc5fe30bacd2b", "pub*
> *licURL": "http://10.0.43.14:9696/ "}],
> "endpoints_links": [], "type": "network", "name": "neutron"}, {"endpoints":
> [{"adminURL": "http://10.0.43.14:8776/v2/f8cb09dd19094615befd6cc27553c5d6
> ",*
> * "region": "RegionOne", "internalURL":
> "http://10.0.43.14:8776/v2/f8cb09dd19094615befd6cc27553c5d6
> ", "id":
> "06fec1e446244e2d84c2fdced4fc51c7", "publicURL":
> "http://10.0.43.14:8776/v2/f8cb09 *
> 

[openstack-dev] [QA][Ironic]Removal of Ironic tests from Tempest

2016-10-12 Thread Jordan Pittier
Hi guys,
As you may know, we are pushing projects to use Tempest plugins and we are
only keeping "core" projects' tests in the Tempest tree.

There have been several attempts to remove Ironic tests from Tempest, so I
guess this doesn't come as a surprise (hopefully). I am starting to work on
the removal now; as it's super early in the dev cycle, now is a good time.
Expect a patch soon.

Thanks,
Jordan

-- 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Cinder]Removing Cinder v1 in Tempest

2016-10-11 Thread Jordan Pittier
On Tue, Oct 11, 2016 at 4:23 AM, Ken'ichi Ohmichi 
wrote:

> Thanks for pointing this up, Jordan
>
> Before removing the volume v1 API tests, it would be nice to make the v2 API
> the default for Tempest scenario tests.
> Now both v1 and v2 are set to True by default in the
> configuration [1], and the v1 API is used in scenarios like [2].
> So it is better to switch to using the v2 API by default.
>
https://review.openstack.org/#/c/385050/

2016-10-10 11:37 GMT-07:00 Matt Riedemann :

> >
> > So make it conditional in Tempest via a config option, disable volume v1
> > tests by default for the integrated gate, and then add a new job that
> runs
> > only on cinder changes (and maybe only in the experimental queue) that
> > enables volume v1 tests. You could run it on cinder in the check/gate and
> > skip the job from running unless something in the v1 API path is changed,
> > there are examples of that in project-config.
> >
> > Nova used to have the v2 API in tree and this was kind of the eventual
> path
> > to phasing out the Tempest testing on that code and got us to the point
> of
> > removing the v2 *code*.  The compute v2 API itself is still honored via
> the
> > v2.1 base microversion.
>
Yeah that could work. But I'd rather we completely remove the Cinder v1
part in Tempest, I am tired of the "if cinder_v1 is True ... elif cinder_v2
is True ...", not to mention the name vs display name thing. Anyway, that's
not my call.
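For reference, the config-gated approach Matt describes looks roughly like
this on the Tempest side (a sketch; the class name and _api_version attribute
are illustrative, only the volume_feature_enabled option group is assumed to
exist as-is):

    from tempest.api.volume import base
    from tempest import config

    CONF = config.CONF


    class VolumesV1ListTest(base.BaseVolumeTest):
        """Only runs when the job/deployment explicitly opts in to the v1 API."""

        _api_version = 1

        @classmethod
        def skip_checks(cls):
            super(VolumesV1ListTest, cls).skip_checks()
            if not CONF.volume_feature_enabled.api_v1:
                raise cls.skipException('Cinder v1 API tests are disabled')

With api_v1 defaulting to False, the integrated gate would skip these tests
and a Cinder-only job could re-enable them.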

I understand from Sean's and Duncan's replies that Cinder v1 is going to live
on for a while; I'll see how we can react to this in Tempest.


> >
> > --
> >
> > Thanks,
> >
> > Matt Riedemann
> >
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.op
> enstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][Cinder]Removing Cinder v1 in Tempest

2016-10-10 Thread Jordan Pittier
Hi,
I'd like to reduce the duration of a full Tempest run and I noticed that
Cinder tests take a good amount of time (cumulative time: 2149 sec vs 2256 sec
for Nova; source code at [0]).
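For the curious, the measurement behind those numbers boils down to something
like this (a rough sketch of the idea in [0], assuming a subunit v2 stream
such as the output of `testr last --subunit`):

    import sys
    from collections import defaultdict

    import subunit
    import testtools

    durations = defaultdict(float)

    def record(test):
        start, stop = test['timestamps']
        if start is None or stop is None:
            return
        # Group by the test module prefix, e.g. 'tempest.api.volume'.
        key = '.'.join(test['id'].split('.')[:3])
        durations[key] += (stop - start).total_seconds()

    # Parse the subunit v2 stream from stdin and feed each test to record().
    source = getattr(sys.stdin, 'buffer', sys.stdin)
    stream = subunit.ByteStreamToStreamResult(source, non_subunit_name='stdout')
    result = testtools.StreamToDict(record)
    result.startTestRun()
    stream.run(result)
    result.stopTestRun()

    for key, total in sorted(durations.items(), key=lambda kv: kv[1], reverse=True):
        print('%-40s %8.1f s' % (key, total))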

So I'd like to not run the Cinder v1 tests anymore, at least on the master
branches.

I remember that Cinder v1 is deprecated (it has been for, what, 2 years?).
Is the removal scheduled? I don't see/feel a lot of effort toward that
removal, but I may be missing something. Anyway, that's not really my
business, but it's not really fair to all the projects that run the "common
jobs" that Cinder "slows" everyone down.

What do you think?

[0] :
https://github.com/JordanP/openstack-snippets/blob/master/tempest-timing/tempest_timing.py

-- 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][infra][all] Measuring code coverage in integration tests

2016-09-27 Thread Jordan Pittier
Hi,

On Tue, Sep 27, 2016 at 11:43 AM, milanisko k  wrote:

> Dear Stackers,
> I'd like to gather some overview on the $Sub: is there some infrastructure
> in place to gather such stats? Are there any groups interested in it? Any
> plans to establish such infrastructure?
>
I am working on such a tool, with mixed results so far. Here's my approach,
taking Nova as an example:

1) Print all the routes known to nova (available as a python-routes object:
 nova.api.openstack.compute.APIRouterV21())
2) "Normalize" the Nova routes
3) Take the logs produced by Tempest during a tempest run (in
logs/tempest.txt.gz). Grep for what looks like a Nova URL (based on port
8774)
4) "Normalize" the tested-by-tempest Nova routes.
5) Compare the two sets of routes
6) 
7) Profit !!

So the hard part is obviously the normalization of the URLs. I am currently
using a ton of regexes :) That's not fun.
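
To give an idea, here is a very rough sketch of steps 2) to 5). The two
regexes below are just examples; the real list is much longer and messier:

    import re

    # Example normalization rules -- the real tool needs many more of these.
    PATTERNS = [
        # unhyphenated project ids, e.g. /v2.1/3f2a...cafe/servers
        (re.compile(r'/[0-9a-f]{32}(?=/|$)'), '/{project_id}'),
        # standard dashed UUIDs for servers, volumes, etc.
        (re.compile(r'/[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}(?=/|$)'), '/{uuid}'),
    ]

    def normalize(url):
        path = url.split('?', 1)[0]          # drop query strings
        for pattern, placeholder in PATTERNS:
            path = pattern.sub(placeholder, path)
        return path

    def untested_routes(known_routes, urls_seen_in_logs):
        known = {normalize(r) for r in known_routes}
        tested = {normalize(u) for u in urls_seen_in_logs}
        return known - tested                 # routes Tempest never hit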

I'll let you guys know if I have something to show.

I think there's real interest in the topic (it comes up every year or so),
but no definitive answer/tool yet.

Cheers,
Jordan



Re: [openstack-dev] [ptl] code churn and questionable changes

2016-09-22 Thread Jordan Pittier
On Thu, Sep 22, 2016 at 8:26 AM, Steven Dake (stdake) 
wrote:

> Folks,
>
> We want to be inviting to new contributors even if they are green.  New
> contributors reflect on OpenStack’s growth in a positive way.  The fact
> that a new-to-openstack contributor would make such and error doesn’t
> warrant such a negative response even if it a hassle for the various PTLs
> and core reviewer teams to deal with.  This is one of the many aspects of
> OpenStack projects a PTL is elected to manage (mentorship).  If mentorship
> isn’t in a leader’s personal mission, I’m not sure they should be leading
> anything.
>
Yeah, you are right. We should all be welcoming and friendly. But please
(re)read the complaints and issues that several reviewers raised in that
thread. Saying there's no problem and that core reviewers just have to deal with
it is not the solution.

I've encountered most of the situations Amrith mentioned in his email, I feel the
same way and sometimes I'm exhausted. I remember when I started contributing
to OpenStack, I never did that kind of useless patch. I think we have to
accept that some people could lack some common sense, or could be a bit too
new to Python programming or too new to open source. I don't feel bad about
ignoring these patches or auto -1/-2. I read somewhere that contributing to
open source is a privilege, not a right.



>
> Regards
> -steve
>
>
> On 9/21/16, 7:35 AM, "Boris Bobrov"  wrote:
>
> Hello,
>
> > in addition to this, please, PLEASE stop creating 'all project
> bugs'. i
> > don't want to get emails on updates to projects unrelated to the
> ones i
> > care about. also, it makes updating the bug impossible because it
> times
> > out. i'm too lazy to search ML but this has been raise before,
> please stop.
> >
> > let's all unite together and block these patches to bring an end to
> it. :)
>
> People who contribute to OpenStack long enough already know this.
> Usually new contributors do it. And we cannot reach out to them
> in this mailing list. There should be a way to limit this somewhere
> in Launchpad.
>
> > On 21/09/16 07:56 AM, Amrith Kumar wrote:
> >> Of late I've been seeing a lot of rather questionable changes that
> >> appear to be getting blasted out across multiple projects; changes
> that
> >> cause considerable code churn, and don't (IMHO) materially improve
> the
> >> quality of OpenStack.
> >>
> >> I’d love to provide a list of the changes that triggered this email
> but
> >> I know that this will result in a rat hole where we end up
> discussing
> >> the merits of the individual items on the list and lose sight of the
> >> bigger picture. That won’t help address the question I have below
> in any
> >> way, so I’m at a disadvantage of having to describe my issue in
> abstract
> >> terms.
> >>
> >>
> >>
> >> Here’s how I characterize these changes (changes that meet one or
> more
> >> of these criteria):
> >>
> >>
> >>
> >> -Contains little of no information in the commit message (often
> just
> >> a single line)
> >>
> >> -Makes some generic statement like “Do X not Y”, “Don’t use Z”,
> >> “Make ABC better” with no further supporting information
> >>
> >> -Fail (literally) every single CI job, clearly never tested by
> the
> >> developer
> >>
> >> -Gets blasted across many projects, literally tens with often
> the
> >> same kind of questionable (often wrong) change
> >>
> >> -Makes a stylistic python improvement that is not enforced by
> any
> >> check (causes a cottage industry of changes making the same
> correction
> >> every couple of months)
> >>
> >> -Reverses some previous python stylistic improvement with no
> clear
> >> reason (another cottage industry)
> >>
> >>
> >>
> >> I’ve tried to explain it to myself as enthusiasm, and a desire to
> >> contribute aggressively; I’ve lapsed into cynicism at times and
> tried to
> >> explain it as gaming the numbers system, but all that is merely
> >> rationalization and doesn’t help.
> >>
> >>
> >>
> >> Over time, the result generally is that these developers’ changes
> get
> >> ignored. And that’s not a good thing for the community as a whole.
> We
> >> want to be a welcoming community and one which values all
> contributions
> >> so I’m looking for some suggestions and guidance on how one can work
> >> with contributors to try and improve the quality of these changes,
> and
> >> help the contributor feel that their changes are valued by the
> project?
> >> Other more experienced PTL’s, ex-PTL’s, long time
> open-source-community
> >> folks, I’m seriously looking for suggestions and ideas.
> >>
> >>
> >>
> >> Any and all input is welcome, do other projects see this, how do you
> 

Re: [openstack-dev] [qa] resigning from Tempest core

2016-09-19 Thread Jordan Pittier
Hi Marc,
Thanks for your good work on Tempest all these years.

I know you are still contributing to other areas of OpenStack so that's
good for the project as a whole.

Cheers,
Jordan

On Mon, Sep 19, 2016 at 11:51 AM, Koderer, Marc  wrote:

> Hi folks,
>
> as already mentioned during the current code sprint:
> I am currently lacking time for reviews and code contributions.
>
> I think it better to step back and let other’s in that are more active.
>
> Thanks for all the support and it was really fun the past 3 years working
> with
> you!
>
> Regards
> Marc
>
>


Re: [openstack-dev] [OSC] Tenant Resource Cleanup

2016-09-07 Thread Jordan Pittier
On Wed, Sep 7, 2016 at 4:18 PM, Boris Bobrov  wrote:

> Hello,
>
> I wonder if it would be worth integrating ospurge into openstackclient.
>
> Are there any osc sessions planned at the summit?
>
>
Hi,
I am the current "PTL" of the openstack/ospurge project. The project is
still alive and we get some small contributions from time to time, which
proves that there's definitely a need for a project purger tool.

It would be great if there were an official/widely used tool to do that, and
maybe OSC is the best place. One piece of advice to whoever wants to have
another stab at it: make the thing modular from the start. (In ospurge we now
have a fat file of 900 LoC that's hard to maintain, and I regularly have to
"say no" to people trying to extend it to clean up resources of the latest
à-la-mode OpenStack service.)


> On 09/07/2016 04:05 PM, John Davidge wrote:
>
>> Hello,
>>
>> During the Mitaka cycle we merged a new feature into the
>> python-neutronclient called ’neutron purge’. This enables a simple CLI
>> command that deletes all of the neutron resources owned by a given
>> tenant. It’s documented in the networking guide[1].
>>
>> We did this in response to feedback from operators that they needed a
>> better way to remove orphaned resources after a tenant had been deleted.
>> So far this feature has been well received, and we already have a couple
>> of enhancement requests. Given that we’re moving to OSC I’m hesitant to
>> continue iterating on this in the neutron client, and so I’m reaching
>> out to propose that we look into making this a part of OSC.
>>
>> Earlier this week I was about to file a BP, when I noticed one covering
>> this subject was already filed last month[2]. I’ve spoken to Roman, who
>> says that they’ve been thinking about implementing this in nova, and
>> have come to the same conclusion that it would fit better in OSC.
>>
>> I would propose that we work together to establish how this command will
>> behave in OSC, and build a framework that implements the cleanup of a
>> small set of core resources. This should be achievable during the Ocata
>> cycle. After that, we can reach out to the wider community to encourage
>> a cross-project effort to incrementally support more projects/resources
>> over time.
>>
>> If you already have an etherpad for planning summit sessions then please
>> let me know, I’d love to get involved.
>>
>> Thanks,
>>
>> John
>>
>> [1] http://docs.openstack.org/mitaka/networking-guide/ops-resour
>> ce-purge.html
>> [2] https://blueprints.launchpad.net/python-openstackclient/+spe
>> c/tenant-data-scrub
>>


Re: [openstack-dev] [infra] Reliable way to filter CI in gerrit spam?

2016-08-31 Thread Jordan Pittier
On Wed, Aug 31, 2016 at 3:44 PM, Matthew Booth  wrote:
>
> Is there anything I missed? Or is it possible to unsubscribe from gerrit
> mail from bots? Or is there any other good way to achieve what I'm looking
> for which doesn't involve maintaining my own bot list? If not, would it be
> feasible to add something?
>

Most (all?) messages from CI have the lines:

"Patch Set X:
Build (succeeded|failed)."

Not super robust, but that's a start.
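
For instance, something like this procmail recipe should catch most of them
(untested, and the exact body format may vary between CIs):

    :0 B:
    * Patch Set [0-9]+:
    * Build (succeeded|failed)\.
    gerrit-bots/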



Re: [openstack-dev] [Nova] Support specified volume_type when boot instance, do we like it?

2016-08-29 Thread Jordan Pittier
On Mon, Aug 29, 2016 at 8:50 AM, Zhenyu Zheng 
wrote:

> Hi, all
>
> Currently we have customer demands about adding parameter "volume_type" to
> --block-device to provide the support of specified storage backend to boot
> instance. And I find one newly drafted Blueprint that aiming to address the
> same feature: https://blueprints.launchpad.net/nova/+spec/support
> -boot-instance-set-store-type ;
>
> As I know this is kind of "proxy" feature for cinder and we don't like it
> in general, but as the boot from volume functional was already there, so
> maybe it is OK to support another parameter?
>
> So, my question is that what are your opinions about this in general? Do
> you like it or it will not be able to got approved at all?
>
> Thanks,
>
> Kevin Zheng
>

Hi,
I think it's not a great idea. Not only for the reason you mention, but
also because the "nova boot" command is already way too complicated, with way
too many options. IMO we should only add support for new features, not
"features" we can have by other means, just for convenience.



Re: [openstack-dev] [tempest][cinder] Clone feature toggle not in clone tests

2016-08-26 Thread Jordan Pittier
On Thu, Aug 25, 2016 at 7:06 PM, Ben Swartzlander 
wrote:

> Originally the NFS driver did support snapshots, but it was implemented by
> just 'cp'ing the file containing the raw bits. This works fine (if
> inefficiently) for unattached volumes, but if you do this on an attached
> volume the snapshot won't be crash consistent at all.
>
> It was decided that we could do better for attached volumes by switching
> to qcow2 and relying on nova to perform the snapshots. Based on this, the
> bad snapshot implementation was removed.
>
> However, for a variety of reasons the nova-assisted snapshot
> implementation has remained unmerged for 2+ years and the NFS driver has
> been an exception to the rules for that whole time.
>
I am not sure I understand what you mean by "the nova-assisted snapshot
implementation has remained unmerged for 2+ years". It looks merged to me
[1], and several Cinder drivers depend on it as far as I know.

[1]:
http://developer.openstack.org/api-ref-compute-v2.1.html#os-assisted-volume-snapshots-v2.1



Re: [openstack-dev] [tempest][cinder] Clone feature toggle not in clone tests

2016-08-24 Thread Jordan Pittier
On Wed, Aug 24, 2016 at 6:06 PM, Slade Baumann <baum...@us.ibm.com> wrote:

> I am attempting to disable clone tests in tempest as they aren't
> functioning in NFS. But the tests test_volumes_clone.py and
> test_volumes_clone_negative.py don't have the "clone" feature
> toggle in them. I thought it obvious that if clone is disabled
> in tempest, the tests that simply clone should be disabled.
>
> So I put up a bug and fix for it, but have been talking with
> Jordan Pittier and he suggested I come to the mailing list to
> get this figured out.
>
> I'm not asking for reviews, unless you want to give them.
> I'm simply asking if this is the right way to go about this
> or if there is something else I need to do to get this into
> Tempest.
>
> Here are the bug and fix:
> https://bugs.launchpad.net/tempest/+bug/1615770
> https://review.openstack.org/#/c/358813/
>
> I would appreciate any suggestion or direction in this problem.
>
> For extra reference, the clone toggle flag was added here:
> https://bugs.launchpad.net/tempest/+bug/1488274
>
Hi,
Thanks for starting this thread. My point about this patch is that, since
"volume clone" is part of the core requirements [1] every Cinder driver must
support, I don't see a need for a feature flag. The feature flag already
exists, but that doesn't mean we should encourage its usage.

Now, if this really helps the NFS driver (although I don't know why we
couldn't support clone with NFS)... I don't have a strong opinion on this
patch.

I -1ed the patch for consistency: I agree that there should be a minimum
set of features expected from a Cinder driver.

[1]
http://docs.openstack.org/developer/cinder/devref/drivers.html#core-functionality

Cheers,
Jordan



Re: [openstack-dev] Why do we need a Requirements Team and PTL

2016-08-06 Thread Jordan Pittier
On Sat, Aug 6, 2016 at 6:16 PM, Anita Kuno  wrote:

> On 16-08-06 10:34 AM, Davanum Srinivas wrote:
>
>> Folks,
>>
>> Question asked by Julien here:
>> https://twitter.com/juldanjou/status/761897228596350976
>>
>> Answer:
>> There's a boat load of work that goes on in global requirements
>> process. Here's the list of things that we dropped on the new team
>> being formed:
>> https://etherpad.openstack.org/p/requirements-tasks
>>
>> Please feel free to look at the requirements repo, weekly chats etc to
>> get an idea.
>>
>> Also if if you disagree, please bring it up in a community forum so
>> you get better answers for your concerns.
>>
>> Thanks,
>> Dims
>>
>> I have to say that I am disappointed that if a community member felt like
> questioning a community action, that question did not occur in a community
> channel of communication prior to action being taken.
>
> The election workflow was posted to the mailing list a week prior to the
> election commencing. Questioning the purpose of an election while an
> election is taking place fosters distrust in the election process, which is
> feel is unfair to all those participating in the process with good intent.
>
> If you have questions about the purpose of an election please voice them
> at the appropriate time and venue so your concerns can be addressed.
>
> Thank you,
> Anita.


We should all relax. This is Twitter; who doesn't like a couple of
"retweets" and "likes" ? I am pretty sure most of us know the role of the
Requirements project.



Re: [openstack-dev] [Cinder] Pending removal of Scality volume driver

2016-07-28 Thread Jordan Pittier
Hi Sean,

Thanks for the heads up.

On Wed, Jul 27, 2016 at 11:13 PM, Sean McGinnis 
wrote:

> The Cinder policy for driver CI requires that all volume drivers
> have a CI reporting on any new patchset. CI's may have some down
> time, but if they do not report within a two week period they are
> considered out of compliance with our policy.
>
> This is a notification that the Scality OpenStack CI is out of compliance.
> It has not reported since April 12th, 2016.
>
Our CI is still running for every patchset, just that it doesn't report
back to Gerrit. I'll see what I can do about it.

>
> The patch for driver removal has been posted here:
>
> https://review.openstack.org/348032/

That link is about the Tegile driver, not ours.

>
>
> If this CI is not brought into compliance, the patch to remove the
> driver will be approved one week from now.
>
ACK.

>
> Thanks,
> Sean McGinnis (smcginnis)
>

Thanks,
Jordan



Re: [openstack-dev] [nova] test strategy for the serial console feature

2016-07-26 Thread Jordan Pittier
Hi Markus,
You don't really need a whole new job for this. Just turn that flag to True
on existing jobs.

30-40 seconds is acceptable, but I am surprised, considering a VM usually
boots in 5 sec or so. Any idea where that slowdown comes from ?
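
For a local devstack run, something like this in local.conf should be enough
to flip the flag (not tested on my side):

    [[post-config|$NOVA_CONF]]
    [serial_console]
    enabled = True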

On Tue, Jul 26, 2016 at 11:50 AM, Markus Zoeller <
mzoel...@linux.vnet.ibm.com> wrote:

> I'd like to discuss a testing strategy which ensures that the "serial
> console" feature in Nova doesn't break. Right now it's broken again [1].
> This happens every once in a while as we don't test it in our CI
> environment. I pushed [2] which should be the start of a change series
> which checks:
> * does the "get-serial-console" API return the expected result
> * is a connection to the instance via serial console possible
> * is the live-migration still possible
> * are the resources (ports) cleaned up correctly
>
> I can create a new testing job for that which enables the serial console
> in the nova config:
>
> [serial_console]
> enabled = True
>
> My concern is that I could burn unnecessary many testing nodes for that
> and my question is, are there are other ways which are less testing
> resource hungry?
>
> I also noticed that it takes up to 30 seconds (locally) until the
> instance is booted completely and accepts console input, which makes the
> test run for [3] a very long one.
>
> References:
> [1] https://bugs.launchpad.net/nova/+bug/1455252
> [2] https://review.openstack.org/#/c/346815/1
> [3] https://review.openstack.org/#/c/346911/1
>
> --
> Regards, Markus Zoeller (markus_z)
>
>


Re: [openstack-dev] [nova][cinder] Integration testing for Nova API os-assisted-volume-snapshots

2016-06-15 Thread Jordan Pittier
On Wed, Jun 15, 2016 at 6:21 PM, Matt Riedemann 
wrote:

> In the nova-api meeting today we were talking about the nova
> os-assisted-volume-snapshots API and whether or not it was a proxy API to
> cinder. We determined it's not, it performs an action on a server resource.
> But what a few of us didn't realize was it's an admin API in nova only for
> cinder RemoteFSSnapDrivers to call when creating a volume snapshot.
>
> From what I can tell, this is used in Cinder by the glusterfs, scality,
> quobyte and virtuozzo remotefs volume drivers.
>
> This is also a nova API that's only implemented for the libvirt compute
> driver - so gmann is going to document that in the nova api-ref for the API
> so it's at least communicated somewhere (it's not in the nova feature
> support matrix).
>
> The other thing I was wondering about was CI coverage. I looked through
> several cinder patches this morning looking for recent successful CI
> results for gluster/scality/quobyte but couldn't find any.
>
> Does someone have a link to a successful job run for one of those drivers?
> I'd like to see if they are testing volume snapshot and that it's properly
> calling the nova API and everything is working. Because this is also
> something that Nova could totally unknowingly break to that flow since we
> have no CI coverage for it (we don't have those cinder 3rd party CI jobs
> running against nova changes).
>
> --
>

Hi Matt,
I am in charge of the Scality CI. It used to report on changes in Cinder. A
change in devstack broke us a couple of months ago, so I had to turn off my
CI (because it was reporting false negatives) while developing a patch. The
patch took a long time to develop and merge, but it was finally merged:
https://review.openstack.org/#/c/310204/

But in the meantime, something else crept in, hidden by the first failure.
So the Scality CI is still broken, but it is my intention to find the
commit that broke it and come up with a patch.

The CI is still running on every change to Cinder, it's just that the result is
not reported back. Results are here
https://37.187.159.67:5443/job/cinder-sofs-validate/ (yeah, that's an IP
and with an invalid HTTPS certificate, I am not proud :p). This is the
build artifact (logs) for a job that ran a couple of hours ago:
https://37.187.159.67:5443/job/cinder-sofs-validate/17356/artifact/jenkins-logs/

So overall, I struggle to maintain that CI, because I always have to play
catch-up. That's fine, but what bothers me is that when I finally come up
with a patch (be it for Nova or Cinder), reviews take a long time because
basically not a lot of people care/know about os-assisted-volume-snapshots.

Cheers,
Jordan



Re: [openstack-dev] [qa] Tempest pre-provisioned credentials in the gate

2016-06-15 Thread Jordan Pittier
On Wed, Jun 15, 2016 at 2:00 AM, Andrea Frittoli 
wrote:

> Dear all,
>
> TL;DR: I'd like to propose to start running some of the existing dsvm
> check/gate jobs using Tempest pre-provisioned credentials.
>
> Full Text:
> Tempest provides tests with two mechanisms to acquire test credentials
> [0]: dynamic credentials and pre-provisioned ones.
>
> The current check and gate jobs only use the dynamic credentials provider.
>
> The pre-provisioned credentials provider has been introduced to support
> running test in parallel without the need of having access to admin
> credentials in tempest configuration file - which is a valid use case
> especially when testing public clouds or in general a deployment that is
> not own by who runs the test.
>
> As a small extra, since pre-provisioned credentials are re-used to run
> many tests during a CI test run, they give an opportunity to discover
> issues related to cleanup of test resources.
>
> Pre-provisioned credentials is currently used in periodic jobs [1][2] - as
> well as an experimental job defined for tempest changes. This means that
> even if we are careful, there is a good chance for it to be inadvertently
> broken by a change.
>
> Until recently the periodic job suffered a racy failure on object-storage
> tests. A recent refactor [3] of the tool that pre-proprovisioned the
> accounts has fixed the issue: the past 8 runs of the periodic jobs have not
> encountered that race anymore [4][5].
>
> Specifically I'd like to propose changing to start changing of the neutron
> jobs [6].
>
> Andrea Frittoli
>
> [0]
> http://docs.openstack.org/developer/tempest/configuration.html#credential-provider-mechanisms
> [1]
> http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n220
> [2]
> http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n253
> [3] https://review.openstack.org/#/c/317105/
> [4]
> http://status.openstack.org/openstack-health/#/job/periodic-tempest-dsvm-full-test-accounts-master
> [5]
> http://status.openstack.org/openstack-health/#/job/periodic-tempest-dsvm-neutron-full-test-accounts-master
> [6] https://review.openstack.org/329723
>
>
I like the idea. If the jobs with the preprovisioned credentials are as
stable as the ones running dynamic credentials, that's fine. Also, I like
that we don't introduce new jobs but instead change the configuration of
existing jobs: we already have a lot of jobs in Tempest.

In my previous job, we also used Tempest to validate our cloud deployment
and we didn't have the admin credentials, so yeah, preprovisioned credentials
are useful.
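
For reference, pre-provisioned credentials boil down to an accounts file fed
to Tempest, along these lines (values are obviously made up, see
etc/accounts.yaml.sample for the full format):

    # etc/accounts.yaml, pointed to by [auth] test_accounts_file
    - username: 'tempest-user-1'
      tenant_name: 'tempest-project-1'
      password: 'secretpass'
    - username: 'tempest-user-2'
      tenant_name: 'tempest-project-2'
      password: 'secretpass'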



Re: [openstack-dev] [oslo.config] Encrypt the sensitive options

2016-04-26 Thread Jordan Pittier
On Tue, Apr 26, 2016 at 3:32 PM, Daniel P. Berrange 
wrote:

> On Tue, Apr 26, 2016 at 08:19:23AM -0500, Doug Hellmann wrote:
> > Excerpts from Guangyu Suo's message of 2016-04-26 07:28:42 -0500:
> > > Hello, oslo team
> > >
> > > For now, some sensitive options like password or token are configured
> as
> > > plaintext, anyone who has the priviledge to read the configure file
> can get
> > > the real password, this may be a security problem that can't be
> > > unacceptable for some people.
>
It's not a security problem if your config files have the proper
permissions.
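i.e. the good old (paths and group are just an example):

    chown root:nova /etc/nova/nova.conf
    chmod 640 /etc/nova/nova.conf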


> > >
> > > So the first solution comes to my mind is to encrypt these options when
> > > configuring them and decrypt them when reading them in oslo.config.
> This is
> > > a bit like apache/openldap did, but the difference is these softwares
> do a
> > > salt hash to the password, this is a one-way encryption that can't be
> > > decrypted, these softwares can recognize the hashed value. But if we do
> > > this work in oslo.config, for example the admin_password in
> > > keystone_middleware section, we must feed the keystone with the
> plaintext
> > > password which will be hashed in keystone to compare with the stored
> hashed
> > > password, thus the encryped value in oslo.config must be decryped to
> > > plaintext. So we should encrypt these options using symmetrical or
> > > unsymmetrical method with a key, and put the key in a well secured
> place,
> > > and decrypt them using the same key when reading them.
>
The issue here is to find a "well secured place". We should not just move
the problem somewhere else.


> > >
> > > Of course, this feature should be default closed. Any ideas?
> >
> > Managing the encryption keys has always been the issue blocking
> > implementing this feature when it has come up in the past. We can't have
> > oslo.config rely on a separate OpenStack service for key management,
> > because presumably that service would want to use oslo.config and then
> > we have a dependency cycle.
> >
> > So, we need a design that lets us securely manage those encryption keys
> > before we consider adding encryption. If we solve that, it's then
> > probably simpler to encrypt an entire config file instead of worrying
> > about encrypting individual values (something like how ansible vault
> > works).
>
> IMHO encrypting oslo config files is addressing the wrong problem.
> Rather than having sensitive passwords stored in the main config
> files, we should have them stored completely separately by a secure
> password manager of some kind. The config file would then merely
> contain the name or uuid of an entry in the password manager. The
> service (eg nova-compute) would then query that password manager
> to get the actual sensitive password data it requires. At this point
> oslo.config does not need to know/care about encryption of its data
> as there's no longer sensitive data stored.
>
This looks complicated. I like text files that I can quickly view and edit,
if I am authorized to (through good old plain Linux permissions).


>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>


Re: [openstack-dev] [Designate] Tempest Tests - in-repo or separate

2016-04-04 Thread Jordan Pittier
On Mon, Apr 4, 2016 at 5:01 PM, Hayes, Graham  wrote:

> As we have started to move to a tempest plugin for our functional test
> suite, we have 2 choices about where it lives.
>
> 1 - In repo (as we have [0] currently)
> 2 - In a repo of its own (something like openstack/designate-tempest)
>
> There are several advantages to a separate repo:
>
> * It will force us to make API changes compatable
> * * This could cause us to be slower at merging changes [1]
> * It allows us to be branchless (like tempest is)
> * It can be its own installable package, and a (much) shorter list
>of requirements.
>
>
I am not a Designate contributor, but speaking as a Tempest contributor: we
recommend using a separate repo. See
http://docs.openstack.org/developer/tempest/plugin.html#standalone-plugin-vs-in-repo-plugin
for more details.

> If everyone is OK with a separate repo, I will go ahead and start the
> creation process.
>
> Thanks
>
> - Graham
>
>
> 0 - https://review.openstack.org/283511
> 1 -
>
> http://docs.openstack.org/developer/tempest/HACKING.html#branchless-tempest-considerations
>


Re: [openstack-dev] [Nova] No rejoin-stack.sh script in my setup

2016-03-31 Thread Jordan Pittier
Hi,
rejoin-stack.sh was removed 14 days ago by
https://review.openstack.org/#/c/291453/

You should use the "screen" command now (e.g. screen -R).

On Thu, Mar 31, 2016 at 1:06 PM, Ouadï Belmokhtar <
ouadi.belmokh...@gmail.com> wrote:

> Hi everyone,
>
> Could you give any help to my question here, please.
>
>
> http://stackoverflow.com/questions/36268822/no-rejoin-stack-sh-script-in-my-setup
>
> I'm blocked with the same problem since 10 days. Any help is considered.
>
> Regards,
>
> --
> Ouadï Belmokhtar
> *Permanent Professor at EMSI-Rabat*
> *PhD Candidate in **Computer & Information Science*
> *Mohammadia School of Engineering*
> *Mohammed V University, Rabat, Morocco*
> *ouadibelmokh...@research.emi.ac.ma *
> *Mobile: (+212) 668829641*
>


Re: [openstack-dev] [tempest] Cirros image

2016-03-29 Thread Jordan Pittier
On Tue, Mar 29, 2016 at 9:46 AM, Eran Kuris <eku...@redhat.com> wrote:

>
>
> - Original Message -----
> > From: "Jordan Pittier" <jordan.pitt...@scality.com>
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> > Cc: "Eran Kuris" <eku...@redhat.com>
> > Sent: Tuesday, March 29, 2016 10:34:55 AM
> > Subject: Re: [openstack-dev] [tempest] Cirros image
> >
> > Hi,
> >
> > On Tue, Mar 29, 2016 at 9:17 AM, Eyal Dannon <edan...@redhat.com> wrote:
> >
> > > Hi,
> > >
> > > I'm writing a tempest scenario test, the test will check the SCTP
> protocol,
> > > the ncat utility in the current image of cirros does not support the
> > > "--sctp" flag,
> > > so we opened a bug : https://bugs.launchpad.net/cirros/+bug/1540359
> > >
> > > Is there any change I'll get a commit on my code while using another
> > > image(RHEL for example?)
> > >
> > That's unlikely. Tempest should be self testing, which means we can't
> > accept tests that are not run continuously in our CI system. As our CI
> uses
> > CirrOS image all tests must pass with cirros. And we need a very
> > lightweight OS, otherwise tests will run in a much longer time.
> >
> > Jordan
>
>
> Any chance to upgrade "cirros" image with new nc version that support sctp
> (Any chance to upgrade "cirros" image with new nc version ? ) ?
>

If a new version of Cirros is shipped with a new nc version, then we could
use this newer version of CirrOS. But I don't think we should build and
maintain a custom version of CirrOS ourselves.  I think your best chance
here is to wait for https://bugs.launchpad.net/cirros/+bug/1540359 to be
resolved.


Re: [openstack-dev] [tempest] Cirros image

2016-03-29 Thread Jordan Pittier
Hi,

On Tue, Mar 29, 2016 at 9:17 AM, Eyal Dannon  wrote:

> Hi,
>
> I'm writing a tempest scenario test, the test will check the SCTP protocol,
> the ncat utility in the current image of cirros does not support the
> "--sctp" flag,
> so we opened a bug : https://bugs.launchpad.net/cirros/+bug/1540359
>
> Is there any change I'll get a commit on my code while using another
> image(RHEL for example?)
>
That's unlikely. Tempest should be self-testing, which means we can't
accept tests that are not run continuously in our CI system. As our CI uses
the CirrOS image, all tests must pass with CirrOS. And we need a very
lightweight OS, otherwise tests would take much longer to run.

Jordan


Re: [openstack-dev] [cinder][openstackclient] Required name option for volumes, snapshots and backups

2016-03-27 Thread Jordan Pittier
I am going to play the devil's advocate here, but why can't
python-openstackclient have its own opinion on the matter ? This CLI seems
to be for humans, and humans love names/labels/tags and find UUIDs hard to
remember. Advanced users who want anonymous volumes can always hit the API
directly with curl or whatever SDK.
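
Something along these lines, for instance (endpoint and project id are
placeholders):

    TOKEN=$(openstack token issue -f value -c id)
    curl -s -X POST "http://<cinder-endpoint>:8776/v2/$PROJECT_ID/volumes" \
         -H "X-Auth-Token: $TOKEN" \
         -H "Content-Type: application/json" \
         -d '{"volume": {"size": 1}}'    # no "name" key at all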

On Sun, Mar 27, 2016 at 4:44 PM, Duncan Thomas 
wrote:

> I think it is worth fixing the client to actually match the API, yes. The
> client seems to be determined not to actually match the API in lots of
> ways, e.g. https://bugs.launchpad.net/python-openstackclient/+bug/1561666
>
> On 24 March 2016 at 19:08, Ivan Kolodyazhny  wrote:
>
>> Hi team,
>>
>> From the Cinder point of view, both volumes, snapshots and backups APIs
>> do not require name param. But python-openstackclient requires name param
>> for these entities.
>>
>> I'm going to fix this inconsistency with patch [1]. Unfortunately, it's a
>> bit more than changing required params to not required. We have to change
>> CLI signatures. E.g. for create a volume: from [2].
>>
>> Is it acceptable? What is the right way to do such changes for OpenStack
>> Client?
>>
>>
>> [1] https://review.openstack.org/#/c/294146/
>> [2] http://paste.openstack.org/show/491771/
>> [3] http://paste.openstack.org/show/491772/
>>
>> Regards,
>> Ivan Kolodyazhny,
>> http://blog.e0ne.info/
>>
>
>
> --
> --
> Duncan Thomas
>


Re: [openstack-dev] [QA][all] Propose to remove negative tests from Tempest

2016-03-19 Thread Jordan Pittier
Hi,

On Thu, Mar 17, 2016 at 2:20 AM, Ken'ichi Ohmichi 
wrote:

> Hi
>
> I have one proposal[1] related to negative tests in Tempest, and
> hoping opinions before doing that.
>
> Now Tempest contains negative tests and sometimes patches are being
> posted for adding more negative tests, but I'd like to propose
> removing them from Tempest instead.
>
> Negative tests verify surfaces of REST APIs for each component without
> any integrations between components. That doesn't seem integration
> tests which are scope of Tempest.
>
Tempest is not only about integration tests. I mean, we have hundreds of
tests that are not integration tests.


> In addition, we need to spend the test operating time on different
> component's gate if adding negative tests into Tempest. For example,
> we are operating negative tests of Keystone and more
> components on the gate of Nova. That is meaningless, so we need to
> avoid more negative tests into Tempest now.
>
You have a good point here. But this problem (running tests for project X
on project Y's gate) should be addressed more generally, not only for
negative tests.


>
> If wanting to add negative tests, it is a nice option to implement
> these tests on each component repo with Tempest plugin interface. We
> can avoid operating negative tests on different component gates and
> each component team can decide what negative tests are valuable on the
> gate.
>
> In long term, all negative tests will be migrated into each component
> repo with Tempest plugin interface. We will be able to operate
> valuable negative tests only on each gate.
>
> Any thoughts?
>

I am not sure we should remove negative tests from Tempest. Agreed that we
should reject most new negative tests, but some negative tests do test useful
things IMO. Also, I ran all the negative tests today: "Ran: 452 tests in
144. sec." They only account for 2 minutes and 20 sec in the gate. That's
very little, so removing them won't buy us a lot. And the code for negative
tests is quite contained, not a big maintenance burden.
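
If you want to reproduce that number, something like this should do it (most
negative tests have "negative" in their name):

    # from a configured Tempest directory
    testr run --parallel negative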

Jordan


[openstack-dev] [QA][Tempest]Run only multinode tests in multinode jobs

2016-02-16 Thread Jordan Pittier
Hi list,
I understand we need to limit the number of tests and jobs that are run for
each Tempest patch, because our resources are not unlimited.

In Tempest, we have 5 multinode experimental jobs:

experimental-tempest-dsvm-multinode-full-dibtest
gate-tempest-dsvm-multinode-full
gate-tempest-dsvm-multinode-live-migration
gate-tempest-dsvm-neutron-multinode-full
gate-tempest-dsvm-neutron-dvr-multinode-full

These jobs largely overlap with the non-multinode jobs. What about tagging
(with a Python decorator) each test that really requires multiple nodes, and
only running those tests as part of the multinode jobs ?
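
A very rough sketch of what I have in mind (decorator name and location are
made up for illustration):

    from testtools import testcase as testtools_testcase

    def multinode(f):
        """Tag a test that only makes sense with more than one node."""
        return testtools_testcase.attr('multinode')(f)

    # in a test module, base.* as in the existing compute tests
    class LiveBlockMigrationTest(base.BaseV2ComputeAdminTest):

        @multinode
        def test_live_block_migration(self):
            ...

The multinode jobs would then select on a regex like '\[.*\bmultinode\b.*\]'
(and the single-node jobs could exclude it), the same way we already exclude
tests tagged "slow" in tox.ini.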


Re: [openstack-dev] [Cinder] Cinder Mitaka Midcycle Summary

2016-02-02 Thread Jordan Pittier
Thanks a lot for this summary. I enjoyed reading it.

Jordan

On Tue, Feb 2, 2016 at 10:14 PM, Sean McGinnis 
wrote:

> War and Peace
> or
> Notes from the Cinder Mitaka Midcycle Sprint
> January 26-29
>
> Etherpads from discussions:
> * https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-1
> * https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-2
> * https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-3
> * https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-4
>
> *Topics Covered*
> 
> In no particular order...
>
> Disable Old Volume Types
> 
> There was a request from an end user to have a mechanism to disable
> a volume type as part of a workflow for progressing from a beta to
> production state.
>
> Of what was known of the request, there was some confusion as to
> whether the desired use case couldn't be met with the existing
> functionality. It was decided nothing would be done for this until
> more input is receieved explaining what is needed and why it cannot
> be done as it is today.
>
> User Provided Encryption Keys for Volume Encryption
> ===
> The question was raised as to whether we want to allow user specified
> keys. Google has something today where this key can be passed in
> headers.
>
> Some concern with doing this, both from a security and amount of work
> perspective. It was ultimately agreed this was a better fit for a
> cross project discussion.
>
> Adding a Common cinder.conf Setting for Suppressing SSL Warnings
> 
> Log files get a TON of warnings when using a driver that uses the
> requests library internally for communication and you do not have
> a signed valid certificate. Some drivers have gotten around this
> by implementing their own settings for disabling these warnings.
>
> The question was raised that although not all drivers use requests,
> and therefore are not affected by this, should we still have a common
> config setting to disable these warnings for those drivers that do use
> it.
>
> Different approaches to disabling this will be explored. As long as
> it is clear what the option does, we were not opposed to this.
>
> Nested Quotas
> =
> The current nested quota enforcement is badly broken. There are many
> scenarios the just do not work as expected. There is also some
> confusion around how nested quotas should work. Things like setting
> -1 for a child quota do not work as expected and things are not
> properly enforced during volume creation.
>
> Glance has also started to look at implementing nested quota support
> based on Cinder's implementation, so we don't want to cause broken
> implementation in Cinder to be propogated to other projects.
>
> Ryan McNair is working with folks on other projects to make find
> a better solution and to work through our current issues. This will
> be an ongoing effort for now.
>
> The Future of CLI for python-cinderclient
> =
> A cross project spec has been approved to work toward removing
> individual project CLIs to center on the one common osc CLI. We
> discussed the feasibility of deprecating the cinder CLI in favor
> of focusing all CLI work on osc.
>
> There is also concern about delays getting new functionality
> deployed. First we need to make server side API changes, then get
> them added to the client library, then get them added to osc.
>
> There is not feature parity between the cinder and osc CLI's at the
> moment for cinder functionality. This needs to be addressed first
> before we can consider removing or deprecating anything in the cinder
> client CLI. Once we have the same level of functionality with both,
> we can then decide at what point to only add new CLI commands to osc
> and start deprecating the cinder CLI.
>
> Ivan and Ryan will look in to how to implement osc plugins.
>
> We will also look in to using cliff and other osc patterns to see if
> we can bring the existing cinder client implementation closer to the
> osc implementation to make the switch over smoother.
>
> API Microversions
> =
> Scott gave an update on the microversion work.
>
> Cinder patch: https://review.openstack.org/#/c/224910
> cinderclient patch: https://review.openstack.org/#/c/248163/
> spec: https://review.openstack.org/#/c/223803/
> Test cases: https://github.com/scottdangelo/TestCinderAPImicroversions
>
> Ben brought up the need to have a new unique URL endpoint for
> this to get around some backward compatibility problems. This new URL
> would be made v3 even though it will initially be the same as v2.
>
> We would like to get this in soon so it has some runtime. There were
> a lot of work items identified though that should get done before we
> land. Scott is going to continue working through these issues.
>
> Async User Reporting
> 

Re: [openstack-dev] [all] towards a keystone v3 only devstack

2016-02-02 Thread Jordan Pittier
On Tue, Feb 2, 2016 at 2:09 PM, gordon chung  wrote:

> yeah... the revert broke us across all telemetry projects since we fixed
> plugins to adapt to v3. i'm very much for adapting to v3 since it's
> lingered around for years. i think given the time lapsed, if it breaks
> them, tough.
>

 This is not how our community works.


Re: [openstack-dev] [keystone][security] New BP for anti brute force in keystone

2016-01-13 Thread Jordan Pittier
Hi,
Can't you just do some rate limiting at your webserver level ?
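For example, with an nginx (or haproxy) in front of Keystone, something like
this limits authentication attempts per client IP (numbers, port and the
'keystone_backend' upstream are just placeholders):

    # nginx.conf, inside the http {} block
    limit_req_zone $binary_remote_addr zone=keystone_auth:10m rate=10r/m;

    server {
        listen 5000;
        location /v3/auth/tokens {
            limit_req zone=keystone_auth burst=5 nodelay;
            proxy_pass http://keystone_backend;
        }
        location / {
            proxy_pass http://keystone_backend;
        }
    }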

On Tue, Jan 12, 2016 at 3:55 PM, McPeak, Travis 
wrote:

> One issue to be aware of is the use of this as a Denial of Service
> vector.  Basically an attacker can use this to lock out key accounts
> by continuously sending invalid passwords.
>
> Doing this might have unexpected and undesirable results,
> particularly in automated tasks.
>
> I think this feature has some definite uses, but we should definitely
> think through use and abuse cases, and probably allow a list of
> accounts that this should not be active for.
>
>
> -Travis
>
> On 1/12/16, 3:11 AM, "openstack-dev-requ...@lists.openstack.org" <
> openstack-dev-requ...@lists.openstack.org> wrote:
>
> >I have registered a new bp for keystone with the capability of anti brute
> force
> >
> >
> >Problem Description:
> >the attacks of account are increasing in the cloud
> >the attacker steals the account information by guessing the password in
> brute force.
> >therefore, the ability of account in anti brute force is necessary.
> >
> >proposed Change:
> >1. add two configure properties for keystone: threshold for times of
> password error consecutively, time of locked when password error number
> reaches the threshold.
> >2. add two properties of user information in times of password
> consecutive errors, and last password error time. when the password of an
> account error consecutively reaches threshold, the account will be locked
> with a few time.
> >3. locked account will unlock automatically when locked status time out
> >4. the APIs of keystone which use user_name and password for
> authentication, the message of response will add an error description when
> the account is locked
>


Re: [openstack-dev] [stable] kilo and liberty is blocked on bug 1532048

2016-01-08 Thread Jordan Pittier
On Fri, Jan 8, 2016 at 6:34 AM, Ian Cordasco 
wrote:

> -Original Message-
> From: Matt Riedemann 
> Reply: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> Date: January 7, 2016 at 19:11:10
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> Subject:  [openstack-dev] [stable] kilo and liberty is blocked on bug
> 1532048
>
> > django_compressor 2.0 was released today which drops support for
> > django<1.8 which is what stable/kilo g-r has django capped at, so
> > horizon fails to install in kilo which breaks kilo jobs and grenade jobs
> > in liberty.
> >
> > The cap in stable/kilo g-r is here:
> >
> > https://review.openstack.org/#/c/265025/
> >
> > I'll babysit the reqs sync to horizon in kilo tonight which should then
> > free things up again.
>
> Thanks for babysitting those Matt!
>
Yes, thanks a lot !
It's bad when you wake up in the morning and you see that your CI is broken
but it's really good to see a fix has already been submitted.

>
> --
> Ian Cordasco


Re: [openstack-dev] [Horizon][stable] Horizon kilo gate fails due to testrepository dependency

2016-01-05 Thread Jordan Pittier
On Tue, Jan 5, 2016 at 9:16 AM, Matthias Runge  wrote:

> On Tue, Jan 05, 2016 at 12:26:24PM +1300, Robert Collins wrote:
> > On 5 January 2016 at 12:04, Robert Collins 
> wrote:
> > ...
> > > Indeed -
> https://bitbucket.org/pypa/setuptools/commits/fb35fcade302fa828d34e6aff952ec2398f2c877?at=get_command_list
> > > - the failing bit AFAICT is indeed new code :/.
> >
> >
> > Ok, so I've paged this all in. Here's whats up, and some thoughts on
> fixing it.
> >
> > Old pbr does indeed have a bug where 'setup.py test' will error with
> > that unguarded import of what isn't meant to be a dependency.
> >
> > The reason this started failing is that a bugfix to setuptools - so
> > that the existing pbr code that wraps commands can wrap commands only
> > added by setuptools plugins like 'wheel' was merged and included in a
> > setuptools release.
> >
> > This causes the pbr testr command to be loaded, which fails in old pbr.
> >
> > The right answer is a back port of the import guard to pbr < 1.0.0 and
> > a point release - 0.11.1.
>

There's no 0.11 branch in PBR to which we could cherry-pick
https://git.openstack.org/cgit/openstack-dev/pbr/commit/?id=946cf80b750f3735a5d3b0c2173f4eaa7fad4a81
There's only a 0.10 branch.


Re: [openstack-dev] [oslo][keystone] Move oslo.policy from oslo to keystone

2015-12-16 Thread Jordan Pittier
Hi,
I am sure oslo.policy would do fine under Keystone's governance. But I am
not sure I understand what's wrong with having oslo.policy under the oslo
program ?

Jordan

On Wed, Dec 16, 2015 at 6:13 PM, Brant Knudson  wrote:

>
> I'd like to propose moving oslo.policy from the oslo program to the
> keystone program. Keystone developers know what's going on with oslo.policy
> and I think are more interested in what's going on with it so that reviews
> will get proper vetting, and it's not like oslo doesn't have enough going
> on with all the other repos. Keystone core has equivalent stringent
> development policy that we already enforce with keystoneclient and
> keystoneauth, so oslo.policy isn't going to be losing any stability.
>
> If there aren't any objections, let's go ahead with this. I heard this
> requires a change to a governance repo, and gerrit permission changes to
> make keystone-core core, and updates in oslo.policy to change some docs or
> links. Any oslo.policy specs that are currently proposed
>
> - Brant
>
>


Re: [openstack-dev] [all] tox 2.3.0 broke tempest jobs

2015-12-14 Thread Jordan Pittier
Tox 2.3.1 was released on pypi a few minutes ago, and it fixes this issue.
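If your nodes already picked up 2.3.0, a plain upgrade should be enough:

    pip install -U 'tox>=2.3.1'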

Jordan

On Mon, Dec 14, 2015 at 12:55 AM, Robert Collins 
wrote:

> On 13 December 2015 at 03:20, Yuriy Taraday  wrote:
> > Tempest jobs in all our projects seem to become broken after tox 2.3.0
> > release yesterday. It's a regression in tox itself:
> > https://bitbucket.org/hpk42/tox/issues/294
> >
> > I suggest us to add tox to upper-constraints to avoid this breakage for
> now
> > and in the future: https://review.openstack.org/256947
> >
> > Note that we install tox in gate with no regard to global-requirements,
> so
> > only upper-constraints can save us from tox releases.
>
> Ah, friday releases. Gotta love them... on my saturday :(.
>
> So - tl;dr AIUI:
>
>  - the principle behind gating changes to tooling applies to tox as well
>  - existing implementation of jobs in the gate precludes applying
> upper-constraints systematically as a way to gate these changes
>  - the breakage we experienced was due to already known-bad system images
>
> Assuming that thats correct, my suggestion would be that we either
> make tox pip installed during jobs (across the board), so that we can
> in fact control it with upper-constraints, or we work on functional
> tests of new images before they go-live
>
> -Rob
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>


Re: [openstack-dev] [cinder] Dependencies of snapshots on volumes

2015-12-09 Thread Jordan Pittier
Hi,
FWIW, I completely agree with what John said. All of it.

Please don't do that.

Jordan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][tempest][defcore] Process to improve test coverage in tempest

2015-12-08 Thread Jordan Pittier
Hi Flavio,

On Tue, Dec 8, 2015 at 9:52 PM, Flavio Percoco  wrote:
>
>
> Oh, I meant occasionally. Whenever a missing test for an API is found,
> it'd be easy enough for the implementer to show up at the meeting and
> bring it up.

From my experience as a Tempest reviewer, I'd say that most newly added
tests are *not* submitted by regular Tempest contributors. I assume
(wrongly?) that it's mostly people from the actual projects (e.g. Glance)
who are interested in adding new Tempest tests for a recently implemented
feature. Put differently, I don't think it's the role of the Tempest core
team/community to add new tests. We mostly provide a framework and guidance
these days.

But, reading this thread, I don't know what to suggest. As a Tempest
reviewer I won't start a new ML thread or send a message to a PTL each time
I see a new test being added... I assume the patch author knows what he is
doing; I can't keep up with what's going on in each and every project.
Also, a test can be quickly removed if it is later deemed not so useful.

Jordan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][FFE] Disabling HA for RPC queues in RabbitMQ

2015-12-02 Thread Jordan Pittier
On Wed, Dec 2, 2015 at 1:05 PM, Dmitry Mescheryakov <
dmescherya...@mirantis.com> wrote:

>
>
> My point is simple - let's increase our architecture scalability by 2-3
> times by _maybe_ causing more errors for users during failover. The
> failover time itself should not get worse (to be tested by me) and errors
> should be correctly handled by services anyway.
>

Scalability is great, but what about correctness ?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Tempest] Use tempest-config for tempest-cli-improvements

2015-11-27 Thread Jordan Pittier
Hi,
I think this script is valuable to some users: Rally and Red Hat expressed
their needs, they seem clear.

This tool is far from bulletproof, and if used blindly or in case of bugs,
Tempest could be misconfigured. So, we could have this tool inside the
Tempest repository (in tools/) but not use it at all for the Gate.

I am not sure I fully understand the resistance to this: if we don't use
this config generator for the gate, what's the risk?

Jordan

On Fri, Nov 27, 2015 at 8:05 AM, Ken'ichi Ohmichi 
wrote:

> 2015-11-27 15:40 GMT+09:00 Daniel Mellado :
> > I still do think that even if there are some issues addressed to the
> > feature, such as skipping tests in the gate, the feature itself it's
> still
> > good -we just won't use it for the gates-
> > Instead it'd be used as a wrapper for a user who would be interested on
> > trying it against a real/reals clouds.
> >
> > Ken, do you really think a tempest user should know all tempest options?
> > As you pointed out there are quite a few of them and even if they should
> at
> > least know their environment, this script would set a minimum acceptable
> > default. Do you think PTL and Pre-PTL concerns that we spoke of would
> still
> > apply to that scenario?
>
> If Tempest users run part of tests of Tempest, they need to know the
> options which are used with these tests only.
> For example, current Tempest contains ironic API tests and the
> corresponding options.
> If users don't want to run these tests because the cloud don't support
> ironic API, they don't need to know/setup these options.
> I feel users need to know necessary options which are used on tests
> they want, because they need to investigate the reason if facing a
> problem during Tempest tests.
>
> Now Tempest options contain their default values, but you need a
> script for changing them from the default.
> Don't these default values work for your cloud at all?
> If so, these values should be changed to better.
>
> Thanks
> Ken Ohmichi
>
> ---
>
> > Andrey, Yaroslav. Would you like to revisit the blueprint to adapt it to
> > tempest-cli improvements? What do you think about this, Masayuki?
> >
> > Thanks for all your feedback! ;)
> >
> > On 27/11/15 at 00:15, Andrey Kurilin wrote:
> >
> > Sorry for wrong numbers. The bug-fix for issue with counters is merged.
> > Correct numbers(latest result from rally's gate[1]):
> >  - total number of executed tests: 1689
> >  - success: 1155
> >  - skipped: 534 (neutron,heat,sahara,ceilometer are disabled. [2] should
> > enable them)
> >  - failed: 0
> >
> > [1] -
> >
> http://logs.openstack.org/27/246627/11/gate/gate-rally-dsvm-verify-full/800bad0/rally-verify/7_verify_results_--html.html.gz
> > [2] - https://review.openstack.org/#/c/250540/
> >
> > On Thu, Nov 26, 2015 at 3:23 PM, Yaroslav Lobankov <
> yloban...@mirantis.com>
> > wrote:
> >>
> >> Hello everyone,
> >>
> >> Yes, I am working on this now. We have some success already, but there
> is
> >> a lot of work to do. Of course, some things don't work ideally. For
> example,
> >> in [2] from the previous letter we have not 24 skipped tests, actually
> much
> >> more. So we have a bug somewhere :)
> >>
> >> Regards,
> >> Yaroslav Lobankov.
> >>
> >> On Thu, Nov 26, 2015 at 3:59 PM, Andrey Kurilin 
> >> wrote:
> >>>
> >>> Hi!
> >>> Boris P. and I tried to push a spec[1] for automation tempest config
> >>> generator, but we did not succeed to merge it. Imo, qa-team doesn't
> want to
> >>> have such tool:(
> >>>
> >>> >However, there is a big concern:
> >>> >If the script contain a bug and creates the configuration which makes
> >>> >most tests skipped, we cannot do enough tests on the gate.
> >>> >Tempest contains 1432 tests and difficult to detect which tests are
> >>> >skipped as unexpected.
> >>>
> >>> Yaroslav Lobankov is working on improvement for tempest config
> generator
> >>> in Rally. Last time when we launch full tempest run[2], we got 1154
> success
> >>> tests and only 24 skipped. Also, there is a patch, which adds x-fail
> >>> mechanism(it based on subunit-filter): you can transmit a file with
> test
> >>> names + reasons and rally will modify results.
> >>>
> >>> [1] - https://review.openstack.org/#/c/94473/
> >>>
> >>> [2] -
> >>>
> http://logs.openstack.org/49/242849/8/check/gate-rally-dsvm-verify/e91992e/rally-verify/7_verify_results_--html.html.gz
> >>>
> >>> On Thu, Nov 26, 2015 at 1:52 PM, Ken'ichi Ohmichi <
> ken1ohmi...@gmail.com>
> >>> wrote:
> 
>  Hi Daniel,
> 
>  Thanks for pointing this up.
> 
>  2015-11-25 1:40 GMT+09:00 Daniel Mellado  >:
>  > Hi All,
>  >
>  > As you might already know, within Red Hat's tempest fork, we do have
>  > one
>  > tempest configuration script which was built in the past by David
>  > Kranz [1]
>  > and that's been actively used in our CI system. Regarding this
> topic,
> 

Re: [openstack-dev] [devstack]Question about using Devstack

2015-11-23 Thread Jordan Pittier
Hi

On Mon, Nov 23, 2015 at 11:58 AM, Young Yang  wrote:

> Hi,
> I'm using devstack to deploy stable/Kilo in my Xenserver.
> I successfully deploy devstack. But I found that every time I restart it,
> devstack always run ./stack.sh to clear all my data and resintall all the
> components.
> So here comes  the questions.
>
> 1) Can I stop devstack from reinstalling after rebooting and just use the
> openstack installed successfully last time.
> I've tried  replacing the stack.sh with another blank shell script to stop
> it running. Then  It didn't reinstall the services after rebooting.
> However, some services didn't start successfully.
>
No. As far as I know, if you reboot your server you need to relaunch
./stack.sh, because (among other reasons) your IP address, which some
services bind to, could have changed.

>
> 2) I found that devstack will exit if it is unable to connect the Internet
> when rebooting.
> Is there any way I can reboot devstack successfully without connection to
> the Internet after I've install it successfully with connection to the
> Internet.
>
After a successful install, you can start devstack again with the "offline
mode": OFFLINE=yes ./stack.sh

>
> Thanks in advance !  :)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FKs in the DB

2015-11-23 Thread Jordan Pittier
Hi,

> However, data
> integrity /can/ be preserved in other ways than RDBMS constraints, while
> persistence layer performance caps that of the whole system

Is the DB the limiting factor of OpenStack performance? Do we have hard
evidence of this? We need numbers before acting, otherwise it will be an
endless discussion.

When I look at the number of race conditions we had/have in OpenStack, it
seems scary to remove the FKs in the DB. FKs look like a "guardian" to me, and
we should aim at enforcing more consistency/integrity, not the contrary.

Also, this is an open source project with contributors of different
skills and experience levels (beginners, part-time contributors, etc.). Maybe this
is something to also consider.

Jordan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack]Question about using Devstack

2015-11-23 Thread Jordan Pittier
On Mon, Nov 23, 2015 at 1:29 PM, Young Yang  wrote:

> Really thanks for responsing so rapidly!!
>
> @ozamiatin
> I forget to mention that I've run rejoin-stack.sh and  manually started
> apache.
> However,  something is not still properly configured.  Such as  lvm volume
> group is not created.
>
> @jordan.pittier
> Really thanks for you  OFFLINE advice. It make some thing offline now. But
> it fails finally.
> It gives such error.
> I tried both OFFLINE=True and OFFLINE=yes in my local rc.
>
> venv create: /opt/stack/tempest/.tox/venv
>> venv installdeps: -r/opt/stack/tempest/requirements.txt,
>> -r/opt/stack/tempest/test-requirements.txt
>> ERROR: invocation failed (exit code 1), logfile:
>> /opt/stack/tempest/.tox/venv/log/venv-1.log
>> ERROR: actionid: venv
>> msg: getenv
>> cmdargs: [local('/opt/stack/tempest/.tox/venv/bin/pip'), 'install', '-U',
>> '-r/opt/stack/tempest/requirements.txt',
>> '-r/opt/stack/tempest/test-requirements.txt']
>> env: {'UPSTART_EVENTS': 'stopped', 'LOGNAME': 'stack', 'USER': 'stack',
>> 'os_RELEASE': '14.04', 'OS_REGION_NAME': 'RegionOne',
>> .. NOTICE I ignore some error here
>> 
>>  Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after
>> connection broken by 'ProtocolError('Connection aborted.', gaierror(-2,
>> 'Name or service not known'))': /simple/pbr/
>>   Could not find a version that satisfies the requirement pbr>=1.6 (from
>> -r /opt/stack/tempest/requirements.txt (line 1)) (from versions: )
>> No matching distribution found for pbr>=1.6 (from -r
>> /opt/stack/tempest/requirements.txt (line 1))
>> ERROR: could not install deps [-r/opt/stack/tempest/requirements.txt,
>> -r/opt/stack/tempest/test-requirements.txt]; v =
>> InvocationError('/opt/stack/tempest/.tox/venv/bin/pip install -U
>> -r/opt/stack/tempest/requirements.txt
>> -r/opt/stack/tempest/test-requirements.txt (see
>> /opt/stack/tempest/.tox/venv/log/venv-1.log)', 1)
>> ___ summary
>> 
>> ERROR:   venv: could not install deps
>> [-r/opt/stack/tempest/requirements.txt,
>> -r/opt/stack/tempest/test-requirements.txt]; v =
>> InvocationError('/opt/stack/tempest/.tox/venv/bin/pip install -U
>> -r/opt/stack/tempest/requirements.txt
>> -r/opt/stack/tempest/test-requirements.txt (see
>> /opt/stack/tempest/.tox/venv/log/venv-1.log)', 1)
>> Error on exit
>> World dumping... see /opt/stack/logs/worlddump-2015-11-23-121956.txt for
>> details
>
>
I think this could be a bug in devstack. In lib/tempest we call 'tox
-revenv -- verify-tempest-config', which forces the virtual env to be recreated
and thus requires an internet connection. I am not sure what the best way to
deal with it is right now, but you can try to comment out this particular line
or, if you don't need Tempest, add "disable_service tempest" to your
local.conf.
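For example, something like this should avoid the whole problem if Tempest is
not needed (a sketch, to be merged into your existing local.conf):

    [[local|localrc]]
    OFFLINE=True
    disable_service tempest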

>
>
> @Bob Ball
> Thanks your mirantis offline advice, I'll try it later :)
>
> @yatinkumbhare
> thanks, I'll try it latter.
>
> @jordan.pittier  @Bob Ball
> If I can ensure my IP address , localrc and everything else are not
> changed, is there any way I can achieve my goal?
>
>
>
>
> On Mon, Nov 23, 2015 at 7:07 PM, Oleksii Zamiatin 
> wrote:
>
>>
>>
>> On Mon, Nov 23, 2015 at 12:58 PM, Young Yang  wrote:
>>
>>> Hi,
>>> I'm using devstack to deploy stable/Kilo in my Xenserver.
>>> I successfully deploy devstack. But I found that every time I restart
>>> it, devstack always run ./stack.sh to clear all my data and resintall all
>>> the components.
>>> So here comes  the questions.
>>>
>>> 1) Can I stop devstack from reinstalling after rebooting and just use
>>> the openstack installed successfully last time.
>>> I've tried  replacing the stack.sh with another blank shell script to
>>> stop it running. Then  It didn't reinstall the services after rebooting.
>>> However, some services didn't start successfully.
>>>
>>
>> try rejoin-stack.sh - it is in the same folder as unstack.sh, stack.sh
>>
>>
>>>
>>> 2) I found that devstack will exit if it is unable to connect the
>>> Internet when rebooting.
>>> Is there any way I can reboot devstack successfully without connection
>>> to the Internet after I've install it successfully with connection to the
>>> Internet.
>>>
>>> Thanks in advance !  :)
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> 

[openstack-dev] [Tempest]Non service clients migration to tempest-lib

2015-10-28 Thread Jordan Pittier
Hi guys,
As discussed this morning, here is an etherpad to track the progress in the
migration of files other than the service clients from Tempest to
tempest-lib:
https://etherpad.openstack.org/p/tempest-lib-non-service-clients-migration

A couple of work items already have an assignee, thanks :) If I forgot
something or you want to add a bit more detail, feel free!

I am not sure I fully understand the scope of the "migrate Test base class
fixtures" item: which files/classes are we talking about? Andrea, could you
please check/edit the etherpad on this?

Jordan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Rant] About blank rechecks

2015-10-22 Thread Jordan Pittier
On Thu, Oct 22, 2015 at 9:58 AM, Thomas Herve  wrote:

> Hi all,
>
> You've seen me complain about people doing blank rechecks in Gerrit on
> IRC, and it seems it had little to no effect. So here I am trying to spread
> the word here. I'll try to stay calm.
>
> I'm seeing way too many rechecks on heat patches. It's not epidemic, but
> it's still enough to make me sad.
>
> First, it makes me sad as a developer. I don't know if it's just me, but
> one of the reason I code is curiosity, and debugging a gate failure is a
> great way to learn, pierce through the layers, and improve the situation.
>
> It then makes me sad as a team member. By doing a recheck you're basically
> implying that you don't care about the failure, and surely someone will
> care at some point. Except, the information will be lost, and we may have
> 100 builds before that happen again, when a release already happened, and
> we have to backport it. Working early means working less.
>
> And finally, it makes me sad for the infra team. Doing a recheck is
> disrespecting all the work they're doing to create a reliable environment
> to run our tests. Sure, sometimes the environment is the reason the failure
> happens, but then it's even more important to give feedback about it. They
> provide a great deal of logs, we can use logstach to find patterns, the
> least we can do is trying. We're also using resources that other projects
> could be using. As much as we'd like to believe it, the cloud doesn't have
> free infinite resources.
>
> Recently, I've seen many cases where rechecks were made whereas:
> 1) The heat branch was broken. Generally for some external reason (a
> dependency updated), doing a recheck is a pure waste of resources until
> that failure is fixed. Most of the time, we say something on IRC when it's
> the case. We also try to open a bug, so looking at launchpad can show
> something.
> 2) THE PATCH WAS ACTUALLY BROKEN. And there I'm not sad anymore I'm
> particularly angry. It basically means that you didn't look at all at the
> build results, and just mindlessly typed rechecks hoping that some fairy
> will fix your broken code. Frankly, that makes want to go on a -2 rampage.
> ESPECIALLY where a core is doing it.
>
> To close, I'll try to provide a solution. I know we all have our agenda,
> debugging gate failures takes some time that you may not have, and
> obviously "my patch is fine it's not my fault" (who cares, that's what
> being in a team means). Still, I'd like everyone to look at the test
> failures, look if the patch is not the problem, and if not open a bug [1]
> mentioning the test name, pasting the traceback in the description, and
> linking the build result. Then do recheck bug #xyz. That's it. It shouldn't
> take you more than 3 minutes, and at least we didn't lose the information.
>
> Thanks for reading that far and sorry for the length,
>
>
> [1] https://bugs.launchpad.net/heat/+filebug
>
> --
> Thomas
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Hi Thomas,
I hear you and I have the exact same feeling on the projects I am involved
in.


Testing and our CI are the root of OpenStack's product quality; they deserve
care, attention, and involvement from everybody.


I take the opportunity of this email to put a link to our logstash here
http://logstash.openstack.org/ and to our elastic-recheck project here
http://status.openstack.org/elastic-recheck/ (I only learnt about these
tools a few months ago and I think they bring a lot of value).
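To give an idea of what an elastic-recheck "fingerprint" looks like: roughly,
it is a small YAML file in the queries/ directory of the elastic-recheck repo,
named after the Launchpad bug, holding a logstash query (the bug number and
the error message below are made up, just for illustration):

    # queries/1234567.yaml
    query: >-
      message:"some distinctive error string from the failed test" AND
      tags:"console"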

Cheers,
Jordan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] Try to introduce RFC mechanism to CI.

2015-10-09 Thread Jordan Pittier
Hi,
On Fri, Oct 9, 2015 at 11:00 AM, Tang Chen  wrote:

> Hi,
>
> CI systems will run tests for each patch once it is submitted or modified.
> But most CI systems occupy a lot of resource, and take a long time to
> run tests (1 or 2 hours for one patch).
>
> I think, not all the patches submitted need to be tested. Even those
> patches
> with an approved BP and spec may be reworked for 20+ versions. So I think
> CI should support a RFC (Require For Comments) mechanism for developers
> to submit and review the code detail and rework. When the patches are
> fully ready, I mean all reviewers have agreed on the implementation detail,
> then CI will test the patches.

So humans would do the hard review work, only to eventually find out that the patch
breaks the world?


> For a 20+ version patch-set, maybe 3 or 4 rounds
> of tests are enough. Just test the last 3 or 4 versions.
>
How do you know, when a new patchset arrives, whether it's part of the last 3 or
4 versions?

>
> This can significantly reduce CI overload.
>
> This workflow appears in many other OSS communities, such as Linux kernel,
> qemu and libvirt. Testers won't test patches with a [RFC] tag in the
> commit message.
> So I want to enable CI to support a similar mechanism.
>
> I'm not sure if it is a good idea. Please help to review the following BP.
>
> https://blueprints.launchpad.net/openstack-ci/+spec/ci-rfc-mechanism
>
> Thanks.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I am running a 3rd party CI for Cinder. The amount of time needed to set up, operate
and watch over the CI results costs way more than the 1 or 2 servers it
takes to run the jobs. So, I don't want to be a party pooper here, but in my
opinion I am not sure it's worth the effort.

Note: I don"t know about nova or neutron.

Jordan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Liberty RC1 availability in Debian

2015-09-30 Thread Jordan Pittier
On Wed, Sep 30, 2015 at 1:58 PM, Thomas Goirand  wrote:

> Hi everyone!
>
> 1/ Announcement
> ===
>
> I'm pleased to announce, in advance of the final Liberty release, that
> Liberty RC1 not only has been fully uploaded to Debian Experimental, but
> also that the Tempest CI (which I maintain and is a package only CI, no
> deployment tooling involved), shows that it's also fully installable and
> working. There's still some failures, but these are, I am guessing, not
> due to problems in the packaging, but rather some Tempest setup problems
> which I intend to address.
>
> If you want to try out Liberty RC1 in Debian, you can either try it
> using Debian Sid + Experimental (recommended), or use the Jessie
> backport repository built out of Mirantis Jenkins server. Repositories
> are listed at this address:
>
> http://liberty-jessie.pkgs.mirantis.com/
>
> 2/ Quick note about Liberty Debian repositories
> ===
>
> During Debconf 15, someone reported that the fact the Jessie backports
> are on a Mirantis address is disturbing.
>
> Note that, while the above really is a non-Debian (ie: non official
> private) repository, it only contains unmodified source packages, only
> just rebuilt for Debian Stable. Please don't be afraid by the tainted
> "mirantis.com" domain name, I could have as well set a debian.net
> address (which has been on my todo list for a long time). But it is
> still Debian only packages. Everything there is strait out of Debian
> repositories, nothing added, modified or removed.
>
> I believe that Liberty release in Sid, is currently working very well,
> but I haven't tested it as much as the Jessie backport.
>
> Started with the Kilo release, I have been uploading packages to the
> official Debian backports repositories. I will do so as well for the
> Liberty release, after the final release is out, and after Liberty is
> fully migrated to Debian Testing (the rule for stable-backports is that
> packages *must* be available in Testing *first*, in order to provide an
> upgrade path). So I do expect Liberty to be available from
> jessie-backports maybe a few weeks *after* the final Liberty release.
> Before that, use the unofficial Debian repositories.
>
> 3/ Horizon dependencies still in NEW queue
> ==
>
> It is also worth noting that Horizon hasn't been fully FTP master
> approved, and that some packages are still remaining in the NEW queue.
> This isn't the first release with such an issue with Horizon. I hope
> that 1/ FTP masters will approve the remaining packages son 2/ for
> Mitaka, the Horizon team will care about freezing external dependencies
> (ie: new Javascript objects) earlier in the development cycle. I am
> hereby proposing that the Horizon 3rd party dependency freeze happens
> not later than Mitaka b2, so that we don't experience it again for the
> next release. Note that this problem affects both Debian and Ubuntu, as
> Ubuntu syncs dependencies from Debian.
>
> 5/ New packages in this release
> ===
>
> You may have noticed that the below packages are now part of Debian:
> - Manila
> - Aodh
> - ironic-inspector
> - Zaqar (this one is still in the FTP masters NEW queue...)
>
> I have also packaged a few more, but there are still blockers:
> - Congress (antlr version is too low in Debian)
> - Mistral
>
> 6/ Roadmap for Liberty final release
> 
>
> Next on my roadmap for the final release of Liberty, is finishing to
> upgrade the remaining components to the latest version tested in the
> gate. It has been done for most OpenStack deliverables, but about a
> dozen are still in the lowest version supported by our global-requirements.
>
> There's also some remaining work:
> - more Neutron drivers
> - Gnocchi
> - Address the remaining Tempest failures, and widen the scope of tests
> (add Sahara, Heat, Swift and others to the tested projects using the
> Debian package CI)
>
> I of course welcome everyone to test Liberty RC1 before the final
> release, and report bugs on the Debian bug tracker if needed.
>
> Also note that the Debian packaging CI is fully free software, and part
> of Debian as well (you can look into the openstack-meta-packages package
> in git.debian.org, and in openstack-pkg-tools). Contributions in this
> field are also welcome.
>
> 7/ Thanks to Canonical & every OpenStack upstream projects
> ==
>
> I'd like to point out that, even though I did the majority of the work
> myself, for this release, there was a way more collaboration with
> Canonical on the dependency chain. Indeed, for this Liberty release,
> Canonical decided to upload every dependency to Debian first, and then
> only sync from it. So a big thanks to the Canonical server team for
> doing community work with me together. I just hope we could push this
> even further, 

Re: [openstack-dev] [Devstack][Sahara][Cinder] BlockDeviceDriver support in Devstack

2015-09-30 Thread Jordan Pittier
Hi Sean,
Because the recommended way is now to write devstack plugins.
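For anyone landing on this thread later, the plugin mechanism boils down to
roughly this (a sketch; the plugin name and repo URL are placeholders):

    # in the user's local.conf:
    [[local|localrc]]
    enable_plugin block-device-driver https://git.openstack.org/openstack/some-repo master

    # in the driver repo, devstack/plugin.sh is sourced by stack.sh with a
    # mode and a phase, for example:
    if [[ "$1" == "stack" && "$2" == "install" ]]; then
        echo "install the driver bits here"
    fi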

Jordan

On Wed, Sep 30, 2015 at 3:29 AM, Sean Collins  wrote:

> This review was recently abandoned. Can you provide insight as to why?
>
> On September 17, 2015, at 2:30 PM, "Sean M. Collins" 
> wrote:
>
> You need to remove your Workflow-1.
>
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev] Devstack broken - third party CI broken

2015-09-09 Thread Jordan Pittier
Also, as I believe your CI is for Cinder, I recommend that you disable all
unneeded services (look at how DEVSTACK_LOCAL_CONFIG is used in
devstack-gate to add the proper disable_service lines).
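Something along these lines in the job definition, for instance (a sketch;
which services you can actually drop depends on what your Cinder job needs):

    export DEVSTACK_LOCAL_CONFIG="disable_service horizon"
    export DEVSTACK_LOCAL_CONFIG+=$'\n'"disable_service heat h-api h-api-cfn h-api-cw h-eng"
    export DEVSTACK_LOCAL_CONFIG+=$'\n'"disable_service ceilometer-acentral ceilometer-acompute ceilometer-api"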

On Wed, Sep 9, 2015 at 11:00 AM, Chris Dent  wrote:

> On Wed, 9 Sep 2015, Chris Dent wrote:
>
> On Wed, 9 Sep 2015, Chris Dent wrote:
>>
>> I'll push up a couple of reviews to fix this, either on the
>>> ceilometer or devstack side and we can choose which one we prefer.
>>>
>>
>> Here's the devstack fix: https://review.openstack.org/#/c/221634/
>>
>
> This is breaking ceilometer in the gate too, not just third party
> CI.
>
>
> --
> Chris Dent tw:@anticdent freenode:cdent
> https://tank.peermore.com/tanks/cdent
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] upcoming glanceclient release

2015-08-28 Thread Jordan Pittier
On Fri, Aug 28, 2015 at 5:12 PM, stuart.mcla...@hp.com wrote:



 I've compiled a list of backwards incompatabilities where the new client
 will impact (in some cases break) existing scripts:

 https://wiki.openstack.org/wiki/Glance-v2-v1-client-compatability


 Awesome!


 To be honest there's a little more red there than I'd like.

 Of the 72 commands I tried, the new client failed to even parse the input
 in 36 cases.

 Yep, I am not involved in Glance development but as a user this looks bad.
And I didn't know Glance v2 lost that many features (is-public,
all-tenants, list by name)...

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][third-party][ci] Announcing CI Watch - Third-party CI monitoring tool

2015-08-25 Thread Jordan Pittier
Hi,
On Tue, Aug 25, 2015 at 2:43 AM, Anita Kuno ante...@anteaya.info wrote:

 On 08/24/2015 07:59 PM, Skyler Berg wrote:
  Hi all,
 
  I am pleased to announce CI Watch [1], a CI monitoring tool developed at
  Tintri. For each OpenStack project with third-party CI's, CI Watch shows
  the status of all CI systems for all recent patch sets on a single
  dashboard.

That's great ! I like it a lot. I will watch this from time to time for
sure.


  Any feedback would be appreciated. We plan to open source this project
  soon and welcome contributions from anyone interested. For the moment,
  any bugs, concerns, or ideas can be sent to openstack-...@tintri.com.
 

Some suggestions:
a) For a given 3rd party CI, compute a % of patches it commented on (to see
how available the CI is)
b) For a given 3rd party CI, compute how often (in % maybe) it disagrees
with Jenkins (to see how reliable the CI is, assuming Jenkins/The gate is
reliable, *cough* :p)

Jordan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] Cannot get compute endpoint when running nodepool.

2015-08-21 Thread Jordan Pittier
Hi,
Please have a look at
http://lists.openstack.org/pipermail/openstack-dev/2015-August/072556.html

Jordan

On Fri, Aug 21, 2015 at 4:12 AM, Tang Chen tangc...@cn.fujitsu.com wrote:

 Hi, all,



 I got following error message while running nodepoold with nodepoold -d  $
 DAEMON_ARGS



 

 2015-08-21 20:18:00,336 ERROR nodepool.NodePool: Exception cleaning up
 leaked nodes

 Traceback (most recent call last):

   File /home/nodepool/nodepool/nodepool.py, line 2399, in periodicCleanup

 self.cleanupLeakedInstances()

   File /home/nodepool/nodepool/nodepool.py, line 2410, in
 cleanupLeakedInstances

 servers = manager.listServers()

   File /home/nodepool/nodepool/provider_manager.py, line 570, in
 listServers

 self._servers = self.submitTask(ListServersTask())

   File /home/nodepool/nodepool/task_manager.py, line 119, in submitTask

 return task.wait()

   File /home/nodepool/nodepool/task_manager.py, line 57, in run

 self.done(self.main(client))

   File /home/nodepool/nodepool/provider_manager.py, line 136, in main

 servers = client.nova_client.servers.list()

   File /usr/local/lib/python2.7/dist-packages/shade/__init__.py, line
 318, in nova_client

 self.get_session_endpoint('compute')

   File /usr/local/lib/python2.7/dist-packages/shade/__init__.py, line
 811, in get_session_endpoint

 Error getting %s endpoint: %s % (service_key, str(e)))

 OpenStackCloudException: Error getting compute endpoint: Project ID not
 found: admin (Disable debug mode to suppress these details.) (HTTP 401)
 (Request-ID: req-fb986bff-3cad-48e1-9da9-915ac9ef5927)

 ---



 And in my case, the output info with cli listed as follows:

 $ openstack service list

 | ID   | Name | Type   |

 +--+--++

 | 213a7ba8f0564523a3d2769f77621fde | nova | compute|



 $ openstack project list

 +--++

 | ID   | Name   |

 +--++

 | 0a765fdfa79a438aae56202bdd5824e2 | admin  |



 $ keystone endpoint-list


 +--+---+-+-+-+--+

 |id|   region  |
   publicurl|   internalurl
 | adminurl|
 service_id|


 +--+---+-+-+-+--+

 | d89b009e81f04a17a26fd07ffbf83efb | regionOne |
 http://controller:8774/v2/%(tenant_id)s |
 http://controller:8774/v2/%(tenant_id)s |
 http://controller:8774/v2/%(tenant_id)s |
 213a7ba8f0564523a3d2769f77621fde |


 +--+---+-+-+-+--+





 Have you ever seen this error? And could you give me any advice to solve
 it?
 Thanks in advance.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][tempest] kwargs of service clients for POST/PUT methods

2015-07-31 Thread Jordan Pittier
Hi,
So after I took a look at Ken'ichi's recently proposed changes, I think this
is the right approach. The kwargs approach has the benefit of being
generic: if a consumer (say Nova) of the client class wants to add a
new parameter to one of its APIs, it can do so without needing to update
the client class. This makes the client class more lightweight and should
ease its adoption.

But I'd like to hear what other Tempest developers think about this (the
topic was mentioned in yesterday's QA meeting).

Jordan

On Fri, Jul 10, 2015 at 9:02 AM, Ken'ichi Ohmichi ken1ohmi...@gmail.com
wrote:

 Hi Anne,

 2015-07-09 12:22 GMT+09:00 Anne Gentle annegen...@justwriteclick.com:
  On Wed, Jul 8, 2015 at 9:48 PM, GHANSHYAM MANN ghanshyamm...@gmail.com
  wrote:
  On Thu, Jul 9, 2015 at 9:39 AM, Ken'ichi Ohmichi ken1ohmi...@gmail.com
 
  wrote:
   2015-07-08 16:42 GMT+09:00 Ken'ichi Ohmichi ken1ohmi...@gmail.com:
   2015-07-08 14:07 GMT+09:00 GHANSHYAM MANN ghanshyamm...@gmail.com:
   On Wed, Jul 8, 2015 at 12:27 PM, Ken'ichi Ohmichi
   ken1ohmi...@gmail.com wrote:
  
   By defining all parameters on each method like update_quota_set(),
 it
   is easy to know what parameters are available from caller/programer
   viewpoint.
  
   I think this can be achieved with former approach also by defining
 all
   expected param in doc string properly.
  
   You are exactly right.
   But current service clients contain 187 methods *only for Nova* and
   most methods don't contain enough doc string.
   So my previous hope which was implied was we could avoid writing doc
   string with later approach.
  
   I am thinking it is very difficult to maintain doc string of REST APIs
   in tempest-lib because APIs continue changing.
   So instead of doing it, how about putting the link of official API
   document[1] in tempest-lib and concentrating on maintaining official
   API document?
   OpenStack APIs are huge now and It doesn't seem smart to maintain
   these docs at different places.
  
 
  ++, this will be great. Even API links can be provided in both class
  doc string as well as each method doc string (link to specific API).
  This will improve API ref docs quality and maintainability.
 
 
  Agreed, though I also want to point you to this doc specification. We
 hope
  it will help with the maintenance of the API docs.
 
  https://review.openstack.org/#/c/177934/
 
  I also want Tempest maintainers to start thinking about how a diff
  comparison can help with reviews of any changes to the API itself. We
 have a
  proof of concept and need to do additional work to ensure it works for
  multiple OpenStack APIs.

 Thanks for your feedback,
 That will be a big step for improving the API docs, I also like to
 join for working together.

 Thanks
 Ken Ohmichi

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tempest]No way to skip S3 related tests

2015-07-31 Thread Jordan Pittier
Hi,
With the "minimize the default services" commit [1] that happened in April,
nova-objectstore is not run by default. This means that, by default,
Devstack doesn't provide any S3-compatible API (because swift3 is not
enabled by default, of course).

Now, I don't see any config flag or mechanism in Tempest to skip S3-related
tests. So, out of the box, we can't have a fully green Tempest run.

Note that there is a Tempest config flag compute_feature_enabled.ec2_api.
And there's also a mechanism implemented in 2012 by afazekas (see [2]) that
tried to skip S3 tests if an HTTP connection to 'boto.s3_url' failed with
a NetworkError, but that mechanism doesn't work anymore: the tests are not
properly skipped.

I'd like your opinion on the correct way to fix this:
1) Either introduce an object_storage_feature_enabled.s3_api flag in Tempest
and skip S3 tests if the value is False. This requires an additional
patch to devstack to properly set the value of the
object_storage_feature_enabled.s3_api flag (see the sketch after this list).

2) Or, try to fix the mechanism in tempest/thirdparty/boto/test.py that
auto-magically skips the S3 tests on NetworkError.
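For option 1, the devstack side could be as small as something like this in
lib/tempest (a sketch; the s3_api option is the one proposed above and does
not exist yet):

    if is_service_enabled swift3 || is_service_enabled n-obj; then
        iniset $TEMPEST_CONFIG object-storage-feature-enabled s3_api True
    else
        iniset $TEMPEST_CONFIG object-storage-feature-enabled s3_api False
    fi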

What do you think ?

Jordan


[1]
https://github.com/openstack-dev/devstack/commit/279cfe75198c723519f1fb361b2bff3c641c6cef
[2]
https://github.com/openstack/tempest/commit/a23f500725df8d5ae83f69eb4da5e47736fbb647#diff-ea760d854610bfed1ae3daa4ac242f74R133
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] snapshot and cloning for NFS backend

2015-07-28 Thread Jordan Pittier
Hi

The patch [4] has been abandoned, but it's not clear why. I too think that
having a full-fledged NFS driver would be great!

[4] https://review.openstack.org/#/c/149037/

Jordan

On Tue, Jul 28, 2015 at 10:00 AM, Kekane, Abhishek 
abhishek.kek...@nttdata.com wrote:

  Hi Devs,



 There is an NFS backend driver for cinder, which supports only limited
 volume handling features. Specifically, snapshot and cloning features
 are missing.



 Eric Harney has proposed a feature of NFS driver snapshot [1][2][3],
 which was approved on Dec 2014 but not implemented yet.



 [1] blueprint https://blueprints.launchpad.net/cinder/+spec/nfs-snapshots

 [2] cinder-specs https://review.openstack.org/#/c/133074/  - merged for
 Kilo but moved to Liberty

 [3] implementation https://review.openstack.org/#/c/147186/  - WIP



 As of now [4] nova patch is a blocker for this feature.

 I have tested this feature by applying [4] nova patch and it is working as
 per expectation.



 [4] https://review.openstack.org/#/c/149037/



 IMO this is a very useful feature and I want to know communities
 opinion/direction about getting it merged during liberty time-frame.



 Thanks & Regards,



 Abhishek Kekane

 __
 Disclaimer: This email and any attachments are sent in strictest confidence
 for the sole use of the addressee and may contain legally privileged,
 confidential, and proprietary data. If you are not the intended recipient,
 please advise the sender by replying promptly to this email and then delete
 and destroy this email and any attachments without any further use, copying
 or forwarding.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tempest] Can we add testcase for Unshelving volume backed instance?

2015-07-20 Thread Jordan Pittier
On Mon, Jul 20, 2015 at 2:41 PM, Deore, Pranali11 
pranali11.de...@nttdata.com wrote:

  Hi,



 Unshelving a volume backed instance was not working before merging the
 patch [1].



 In case of volume backed instance, snapshot is not taken when an instance
 is shelved,

 so shelve_image_id key is not set to the instance system metadata.

 If shelve_image_id is None, UnshelveException was raised which was
 incorrect.



 In current tempest suit there is no such testcase for unshleving a volume
 backed

 instance. Can I add that testcase?



 [1] https://review.openstack.org/#/c/144582/


Hi,
Just for my information, what is the use case ? If a VM is booted from a
volume, it's already persistent, so why don't you just delete the
instance, knowing you could later on boot again from this volume ?

In any case, you could add this test into the test_shelve_instance scenario.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] openstack install error with stable/kilo

2015-07-06 Thread Jordan Pittier
Hi,
FYI, I have been getting the same error since Wednesday or Thursday last week,
with devstack master. I haven't had the chance to spend time on it yet.

Jordan

On Mon, Jul 6, 2015 at 4:32 PM, Danny Choi (dannchoi) dannc...@cisco.com
wrote:

  Hi,

  I’m trying to run devstack install of stable/kilo on Ubuntu 14.04 and
 getting the following error.

  Any suggestion on how to resolve it?

  2015-07-06 13:25:08.670 | + recreate_database glance

 2015-07-06 13:25:08.670 | + local db=glance

 2015-07-06 13:25:08.670 | + recreate_database_mysql glance

 2015-07-06 13:25:08.670 | + local db=glance

 2015-07-06 13:25:08.670 | + mysql -uroot -pmysql -h127.0.0.1 -e 'DROP
 DATABASE IF EXISTS glance;'

 2015-07-06 13:25:08.678 | + mysql -uroot -pmysql -h127.0.0.1 -e 'CREATE
 DATABASE glance CHARACTER SET utf8;'

 2015-07-06 13:25:08.684 | + /usr/local/bin/glance-manage db_sync

 2015-07-06 13:25:09.067 | Traceback (most recent call last):

 2015-07-06 13:25:09.067 |   File /usr/local/bin/glance-manage, line 6,
 in module

 2015-07-06 13:25:09.067 | from glance.cmd.manage import main

 2015-07-06 13:25:09.067 |   File /opt/stack/glance/glance/cmd/manage.py,
 line 47, in module

 2015-07-06 13:25:09.067 | from glance.common import config

 2015-07-06 13:25:09.067 |   File
 /opt/stack/glance/glance/common/config.py, line 31, in module

 2015-07-06 13:25:09.067 | from paste import deploy

 2015-07-06 13:25:09.067 | ImportError: cannot import name deploy

  Thanks,
 Danny

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Use devstack/master to install older releases

2015-07-01 Thread Jordan Pittier
Hi,

On Wed, Jul 1, 2015 at 12:35 AM, Dean Troyer dtro...@gmail.com wrote:

 On Tue, Jun 30, 2015 at 7:04 AM, Emmanuel Cazenave cont...@emcaz.fr
 wrote:

 My first approach was to use devstack/icehouse to install swift/icehouse,
 devstack/juno for swift/juno, etc


 This is the only approach that is sane...


 I am now trying to use devstack/master in every cases because I need this
 : https://review.openstack.org/#/c/115307/ which allow not to install
 nova+glance which I don't need at all, and whose installation takes a
 really long time.


 We would probably consider a backport of that to DevStack Juno, but
 Icehouse is effectively EOL and we will be removing that branch soon (days
 or weeks, not months).

I've proposed https://review.openstack.org/#/c/197464/ to backport that
patch to Juno.



 Is my use case of installing older releases with devstack/master not
 supported ?


 No.  Even on master you may have issues trying to run a early cycle
 project with a late-cycle DevStack.  DevStack evolves to meet the needs of
 the projects as they develop.

 That Tempest fix is pretty straightforward, with only the one code block
 move not being a skip this if project is not enabled check.  With
 Icehouse EOL, there will be no additional updates so maintaining that
 backport in a local private branch should not be a big deal.  You will need
 that anyway to keep using icehouse after we remove that branch from the
 DevStack repo.

 dt

 --

 Dean Troyer
 dtro...@gmail.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Tempest] Proposing Jordan Pittier for Tempest Core

2015-06-29 Thread Jordan Pittier
Thanks a lot !
I just want to say that I am happy about this and I look forward to
continuing to work on Tempest with you all.

Cheers,
Jordan

On Mon, Jun 29, 2015 at 3:59 PM, Matthew Treinish mtrein...@kortar.org
wrote:

 On Mon, Jun 22, 2015 at 04:23:30PM -0400, Matthew Treinish wrote:
 
 
  Hi Everyone,
 
  I'd like to propose we add Jordan Pittier (jordanP) to the tempest core
 team.
  Jordan has been a steady contributor and reviewer on tempest over the
 past few
  cycles and he's been actively engaged in the Tempest community. Jordan
 has had
  one of the higher review counts on Tempest for the past cycle, and he has
  consistently been providing reviews that show insight into both the
 project
  internals and it's future direction. I feel that Jordan will make an
 excellent
  addition to the core team.
 
  As per the usual, if the current Tempest core team members would please
 vote +1
  or -1(veto) to the nomination when you get a chance. We'll keep the
 polls open
  for 5 days or until everyone has voted.
 

 So, after  5 days and it's been all positive feedback. Welcome to the team
 Jordan.

 -Matt Treinish

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Cinder] [Tempest] Regarding deleting snapshot when instance is OFF

2015-06-17 Thread Jordan Pittier
On Tue, Jun 16, 2015 at 3:33 PM, Jordan Pittier jordan.pitt...@scality.com
wrote:

 On Thu, Apr 9, 2015 at 6:10 PM, Eric Blake ebl...@redhat.com wrote:

 On 04/08/2015 11:22 PM, Deepak Shetty wrote:
  + [Cinder] and [Tempest] in the $subject since this affects them too
 
  On Thu, Apr 9, 2015 at 4:22 AM, Eric Blake ebl...@redhat.com wrote:
 
  On 04/08/2015 12:01 PM, Deepak Shetty wrote:
 
  Questions:
 
  1) Is this a valid scenario being tested ? Some say yes, I am not
 sure,
  since the test makes sure that instance is OFF before snap is deleted
 and
  this doesn't work for fs-backed drivers as they use hyp assisted snap
  which
  needs domain to be active.
 
  Logically, it should be possible to delete snapshots when a domain is
  off (qemu-img can do it, but libvirt has not yet been taught how to
  manage it, in part because qemu-img is not as friendly as qemu in
 having
  a re-connectible Unix socket monitor for tracking long-running
 progress).
 
 
  Is there a bug/feature already opened for this ?

 Libvirt has this bug: https://bugzilla.redhat.com/show_bug.cgi?id=987719
 which tracks generic ability of libvirt to delete snapshots; ideally,
 the code to manage snapshots will work for both online and persistent
 offline guests, but it may result in splitting the work into multiple
 bugs.


 I can't access this bug report, it seems private, I need to authenticate.


  I didn't understand much
  on what you
  mean by re-connectible unix socket :)... are you hinting that qemu-img
  doesn't have
  ability to attach to a qemu / VM process for long time over unix socket
 ?

 For online guest control, libvirt normally creates a Unix socket, then
 starts qemu with its -qmp monitor pointing to that socket.  That way, if
 libvirtd goes away and then restarts, it can reconnect as a client to
 the existing socket file, and qemu never has to know that the person on
 the other end changed.  With that QMP monitor, libvirt can query qemu's
 current state at will, get event notifications when long-running jobs
 have finished, and issue commands to terminate long-running jobs early,
 even if it is a different libvirtd issuing a later command than the one
 that started the command.

 qemu-img, on the other hand, only has the -p option or SIGUSR1 signal
 for outputting progress to stderr on a long-running operation (not the
 most machine-parseable), but is not otherwise controllable.  It does not
 have a management connection through a Unix socket.  I guess in thinking
 about it a bit more, a Unix socket is not essential; as long as the old
 libvirtd starts qemu-img in a manner that tracks its pid and collects
 stderr reliably, then restarting libvirtd can send SIGUSR1 to the pid
 and track the changes to stderr to estimate how far along things are.

 Also, the idea has been proposed that qemu-img is not necessary; libvirt
 could use qemu -M none to create a dummy machine with no CPUs and JUST
 disk images, and then use the qemu QMP monitor as usual to perform block
 operations on those disks by reusing the code it already has working for
 online guests.  But even this approach needs coding into libvirt.

 --
 Eric Blake   eblake redhat com+1-919-301-3266
 Libvirt virtualization library http://libvirt.org


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 Hi,
 I'd like to progress on this issue, so I will spend some time on it.

 Let's recap. The issue is deleting a Cinder snapshot that was created
 during an Nova Instance snapshot (booted from a cinder volume) doesn't work
 when the original Nova Instance is stopped. This bug only arises when a
 Cinder driver uses the feature called QEMU Assisted
 Snapshots/live-snapshot. (currently only GlusterFS, but soon generic NFS
 when https://blueprints.launchpad.net/cinder/+spec/nfs-snapshots gets in).

 This issue is triggered by the Tempest scenario
 test_volume_boot_pattern. This scenario:
 [does some stuff]
 1) Creates a cinder volume from an Cirros Image
 2) Boot a Nova Instance on the volume
 3) Make a snapshot of this instance (which creates a cinder snapshot
 because the instance was booted from a volume), using the feature QEMU
 Assisted Snapshots
 [do some other stuff]
 4) stop the instance created in step 2 then delete the snapshot created in
 step 3.

 The deletion of snapshot created in step 3 fails because Nova wants
 libvirt to do a blockRebase (see
 https://github.com/openstack/nova/blob/68f6f080b2cddd3d4e97dc25a98e0c84c4979b8a/nova/virt/libvirt/driver.py#L1920
 )

 For reference, there's a bug targeting Cinder for this :
 https://bugs.launchpad.net/cinder/+bug/1444806

 What I'd like to do, but I am asking your advice first is:
 Just before doing the call to virt_dom.blockRebase(), check if the domain
 is running

Re: [openstack-dev] [Nova] [Cinder] [Tempest] Regarding deleting snapshot when instance is OFF

2015-06-16 Thread Jordan Pittier
On Thu, Apr 9, 2015 at 6:10 PM, Eric Blake ebl...@redhat.com wrote:

 On 04/08/2015 11:22 PM, Deepak Shetty wrote:
  + [Cinder] and [Tempest] in the $subject since this affects them too
 
  On Thu, Apr 9, 2015 at 4:22 AM, Eric Blake ebl...@redhat.com wrote:
 
  On 04/08/2015 12:01 PM, Deepak Shetty wrote:
 
  Questions:
 
  1) Is this a valid scenario being tested ? Some say yes, I am not sure,
  since the test makes sure that instance is OFF before snap is deleted
 and
  this doesn't work for fs-backed drivers as they use hyp assisted snap
  which
  needs domain to be active.
 
  Logically, it should be possible to delete snapshots when a domain is
  off (qemu-img can do it, but libvirt has not yet been taught how to
  manage it, in part because qemu-img is not as friendly as qemu in having
  a re-connectible Unix socket monitor for tracking long-running
 progress).
 
 
  Is there a bug/feature already opened for this ?

 Libvirt has this bug: https://bugzilla.redhat.com/show_bug.cgi?id=987719
 which tracks generic ability of libvirt to delete snapshots; ideally,
 the code to manage snapshots will work for both online and persistent
 offline guests, but it may result in splitting the work into multiple bugs.


I can't access this bug report, it seems private, I need to authenticate.


  I didn't understand much
  on what you
  mean by re-connectible unix socket :)... are you hinting that qemu-img
  doesn't have
  ability to attach to a qemu / VM process for long time over unix socket ?

 For online guest control, libvirt normally creates a Unix socket, then
 starts qemu with its -qmp monitor pointing to that socket.  That way, if
 libvirtd goes away and then restarts, it can reconnect as a client to
 the existing socket file, and qemu never has to know that the person on
 the other end changed.  With that QMP monitor, libvirt can query qemu's
 current state at will, get event notifications when long-running jobs
 have finished, and issue commands to terminate long-running jobs early,
 even if it is a different libvirtd issuing a later command than the one
 that started the command.

 qemu-img, on the other hand, only has the -p option or SIGUSR1 signal
 for outputting progress to stderr on a long-running operation (not the
 most machine-parseable), but is not otherwise controllable.  It does not
 have a management connection through a Unix socket.  I guess in thinking
 about it a bit more, a Unix socket is not essential; as long as the old
 libvirtd starts qemu-img in a manner that tracks its pid and collects
 stderr reliably, then restarting libvirtd can send SIGUSR1 to the pid
 and track the changes to stderr to estimate how far along things are.

 Also, the idea has been proposed that qemu-img is not necessary; libvirt
 could use qemu -M none to create a dummy machine with no CPUs and JUST
 disk images, and then use the qemu QMP monitor as usual to perform block
 operations on those disks by reusing the code it already has working for
 online guests.  But even this approach needs coding into libvirt.

 --
 Eric Blake   eblake redhat com+1-919-301-3266
 Libvirt virtualization library http://libvirt.org


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hi,
I'd like to progress on this issue, so I will spend some time on it.

Let's recap. The issue is that deleting a Cinder snapshot that was created
during a Nova instance snapshot (of an instance booted from a Cinder volume)
doesn't work when the original Nova instance is stopped. This bug only arises
when a Cinder driver uses the feature called QEMU Assisted
Snapshots/live-snapshot (currently only GlusterFS, but soon generic NFS
when https://blueprints.launchpad.net/cinder/+spec/nfs-snapshots gets in).

This issue is triggered by the Tempest scenario test_volume_boot_pattern.
This scenario:
[does some stuff]
1) Creates a Cinder volume from a Cirros image
2) Boots a Nova instance from that volume
3) Makes a snapshot of this instance (which creates a Cinder snapshot
because the instance was booted from a volume), using the QEMU Assisted
Snapshots feature
[does some other stuff]
4) Stops the instance created in step 2, then deletes the snapshot created in
step 3.

The deletion of the snapshot created in step 3 fails because Nova asks libvirt
to do a blockRebase (see
https://github.com/openstack/nova/blob/68f6f080b2cddd3d4e97dc25a98e0c84c4979b8a/nova/virt/libvirt/driver.py#L1920
).

For reference, there's a bug targeting Cinder for this :
https://bugs.launchpad.net/cinder/+bug/1444806

What I'd like to do (but I am asking for your advice first) is: just before
making the call to virt_dom.blockRebase(), check whether the domain is
running, and if it is not, call qemu-img rebase -b $rebase_base $rebase_disk
instead (this idea was brought up by Eric Blake in the previous reply).
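
To make this concrete, here is a rough sketch of the kind of check I have in
mind around the blockRebase() call linked above (just an illustration: the
exact qemu-img invocation and the error handling would need more care):

    # Sketch only: "domain" is the libvirt virDomain object, rebase_base and
    # rebase_disk are the same values Nova already computes today.
    import subprocess

    import libvirt


    def rebase_volume(domain, rebase_disk, rebase_base):
        state, _reason = domain.state()
        if state == libvirt.VIR_DOMAIN_RUNNING:
            # current behaviour: live block rebase through libvirt/qemu
            domain.blockRebase(rebase_disk, rebase_base, 0, 0)
        else:
            # domain is shut off: do an offline rebase with qemu-img instead
            # (the idea Eric Blake suggested earlier in this thread)
            subprocess.check_call(
                ['qemu-img', 'rebase', '-b', rebase_base, rebase_disk])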


Re: [openstack-dev] [oslo] Adopt mox3

2015-06-11 Thread Jordan Pittier
Hi,
Shouldn't we move to use mock instead ? If mox3 is supported and active,
why would we recommend to use mock ?

Jordan

On Wed, Jun 10, 2015 at 10:17 PM, Davanum Srinivas dava...@gmail.com
wrote:

 Oslo folks, everyone,

 mox3 needs to be maintained since some of our projects use it and we
 have it in our global requirements.

 Here's the proposal from Doug - https://review.openstack.org/#/c/190330/

 Any objections? Please chime in here or on the review.

 thanks,
 dims

 --
 Davanum Srinivas :: https://twitter.com/dims

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] cinder support matrix, chap support?

2015-04-27 Thread Jordan Pittier
Hi,
Isn't CHAP iscsi-specific ?

Jordan

On Mon, Apr 27, 2015 at 2:41 PM, Dave Walker em...@daviey.com wrote:

 Hi,

 Recently I have been curious as to which Cinder drivers support
 authentication. It seems that only a subset do.  I wondered, is this
 something that would be useful on the CinderSupportMatrix wiki page?

 Thanks

 --
 Kind Regards,
 Dave Walker

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Openstack live migration using devstack

2015-04-17 Thread Jordan Pittier
Hi
Double check that sql_connection in the [database] section of cinder.conf
is not empty.

Jordan

On Fri, Apr 17, 2015 at 7:24 PM, Erlon Cruz sombra...@gmail.com wrote:

 Had the same error, but with cinder. Did you find find out something about
 this error?
 
 2015-04-17 14:12:31.957 TRACE cinder Traceback (most recent call last):
 2015-04-17 14:12:31.957 TRACE cinder   File
 /usr/local/bin/cinder-volume, line 10, in module
 2015-04-17 14:12:31.957 TRACE cinder sys.exit(main())
 2015-04-17 14:12:31.957 TRACE cinder   File
 /opt/stack/cinder/cinder/cmd/volume.py, line 72, in main
 2015-04-17 14:12:31.957 TRACE cinder binary='cinder-volume')
 2015-04-17 14:12:31.957 TRACE cinder   File
 /opt/stack/cinder/cinder/service.py, line 249, in create
 2015-04-17 14:12:31.957 TRACE cinder service_name=service_name)
 2015-04-17 14:12:31.957 TRACE cinder   File
 /opt/stack/cinder/cinder/service.py, line 129, in __init__
 2015-04-17 14:12:31.957 TRACE cinder *args, **kwargs)
 2015-04-17 14:12:31.957 TRACE cinder   File
 /opt/stack/cinder/cinder/volume/manager.py, line 195, in __init__
 2015-04-17 14:12:31.957 TRACE cinder *args, **kwargs)
 2015-04-17 14:12:31.957 TRACE cinder   File
 /opt/stack/cinder/cinder/manager.py, line 130, in __init__
 2015-04-17 14:12:31.957 TRACE cinder super(SchedulerDependentManager,
 self).__init__(host, db_driver)
 2015-04-17 14:12:31.957 TRACE cinder   File
 /opt/stack/cinder/cinder/manager.py, line 80, in __init__
 2015-04-17 14:12:31.957 TRACE cinder super(Manager,
 self).__init__(db_driver)
 2015-04-17 14:12:31.957 TRACE cinder   File
 /opt/stack/cinder/cinder/db/base.py, line 42, in __init__
 2015-04-17 14:12:31.957 TRACE cinder self.db.dispose_engine()
 2015-04-17 14:12:31.957 TRACE cinder   File
 /opt/stack/cinder/cinder/db/api.py, line 80, in dispose_engine
 2015-04-17 14:12:31.957 TRACE cinder if 'sqlite' not in
 IMPL.get_engine().name:
 2015-04-17 14:12:31.957 TRACE cinder   File
 /opt/stack/cinder/cinder/db/sqlalchemy/api.py, line 85, in get_engine
 2015-04-17 14:12:31.957 TRACE cinder facade = _create_facade_lazily()
 2015-04-17 14:12:31.957 TRACE cinder   File
 /opt/stack/cinder/cinder/db/sqlalchemy/api.py, line 72, in
 _create_facade_lazily
 2015-04-17 14:12:31.957 TRACE cinder **dict(CONF.database.iteritems())
 2015-04-17 14:12:31.957 TRACE cinder   File
 /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/session.py,
 line 796, in __init__
 2015-04-17 14:12:31.957 TRACE cinder **engine_kwargs)
 2015-04-17 14:12:31.957 TRACE cinder   File
 /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/session.py,
 line 376, in create_engine
 2015-04-17 14:12:31.957 TRACE cinder url =
 sqlalchemy.engine.url.make_url(sql_connection)
 2015-04-17 14:12:31.957 TRACE cinder   File
 /usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/url.py, line
 176, in make_url
 2015-04-17 14:12:31.957 TRACE cinder return
 _parse_rfc1738_args(name_or_url)
 2015-04-17 14:12:31.957 TRACE cinder   File
 /usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/url.py, line
 225, in _parse_rfc1738_args
 2015-04-17 14:12:31.957 TRACE cinder Could not parse rfc1738 URL from
 string '%s' % name)
 2015-04-17 14:12:31.957 TRACE cinder ArgumentError: Could not parse
 rfc1738 URL from string ''
 2015-04-17 14:12:31.957 TRACE cinder
 c-vol failed to start


 On Mon, Mar 10, 2014 at 12:44 PM, abhishek jain ashujain9...@gmail.com
 wrote:

 Hi all

 I have created one openstack using one controller node and one compute
 node both installed using devstack.I'm running one instance on controller
 node and want to migrate it to over compute node.
 I'm using following link for this.


 http://docs.openstack.org/grizzly/openstack-compute/admin/content/live-migration-usage.html

 http://docs.openstack.org/grizzly/openstack-compute/admin/content/configuring-migrations.html

 The output of  nova-manage vm list  on the compute node is as follows..

 10 16:01:49.502 DEBUG nova.openstack.common.lockutils
 [req-019d2337-143e-4157-9d6c-3c1f2207f63b None None] Semaphore / lock
 released __get_backend inner
 /opt/stack/nova/nova/openstack/common/lockutils.py:252
 Command failed, please check log for more info
 2014-03-10 16:01:49.507 CRITICAL nova
 [req-019d2337-143e-4157-9d6c-3c1f2207f63b None None] Could not parse
 rfc1738 URL from string ''
 2014-03-10 16:01:49.507 9609 TRACE nova Traceback (most recent call
 last):
 2014-03-10 16:01:49.507 9609 TRACE nova   File /usr/bin/nova-manage,
 line 10, in module
 2014-03-10 16:01:49.507 9609 TRACE nova sys.exit(main())
 2014-03-10 16:01:49.507 9609 TRACE nova   File
 /opt/stack/nova/nova/cmd/manage.py, line 1378, in main
 2014-03-10 16:01:49.507 9609 TRACE nova ret = fn(*fn_args,
 **fn_kwargs)
 2014-03-10 16:01:49.507 9609 TRACE nova   File
 /opt/stack/nova/nova/cmd/manage.py, line 658, in list
 2014-03-10 16:01:49.507 9609 TRACE nova context.get_admin_context(),
 host)
 

Re: [openstack-dev] [cinder]Driver broken

2015-03-25 Thread Jordan Pittier
Hey,

From
http://packages.cloudfounders.com/ci_logs/47/85847/47/check/openvstorage-cinder-functionality/b073fb9/console.html
:
Running ` python setup.py testr --testr-args='--subunit --concurrency 1
 test_openvstorage'`

==
Totals
==
Ran: 10 tests in 2. sec.
 - Passed: 10
 - Skipped: 0
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 0.4477 sec.


So his CI did run :)

Jordan


On Wed, Mar 25, 2015 at 4:31 PM, Asselin, Ramy ramy.asse...@hp.com wrote:

  Hi Eduard,



 Your third party CI reported success on that patch. The tempest volume
 tests include attach and detach operations. Seems your CI is not running them?


 http://packages.cloudfounders.com/ci_logs/47/85847/47/check/openvstorage-cinder-functionality/b073fb9/console.html



 *CloudFounders OpenvStorage CI check*

 *Mar 10, 2015 9:14 AM*

 openvstorage-cinder-functionality
 http://packages.cloudfounders.com/ci_logs/47/85847/47/check/openvstorage-cinder-functionality/b073fb9

 SUCCESS in 37m 16s



 Ramy



 *From:* Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
 *Sent:* Wednesday, March 25, 2015 8:05 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] [cinder]Driver broken



 Hi,



 Just reported an issue: https://bugs.launchpad.net/cinder/+bug/1436367

 Seems to be related to https://review.openstack.org/#/c/85847/ which
 introduced another parameter to be passed to the driver, but our driver
 didn't get updated so detach_volume fails for us.



 How can we get this fixed asap?



 Thanks,

 Eduard



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] CI report formatting (citrix / hyperv / vmware )

2015-03-25 Thread Jordan Pittier
On Wed, Mar 25, 2015 at 2:44 PM, Salvatore Orlando sorla...@nicira.com
wrote:



 On 25 March 2015 at 14:21, Sean Dague s...@dague.net wrote:

 On 03/25/2015 09:03 AM, Gary Kotton wrote:
 
  From: Jordan Pittier jordan.pitt...@scality.com
  mailto:jordan.pitt...@scality.com
  Reply-To: OpenStack List openstack-dev@lists.openstack.org
  mailto:openstack-dev@lists.openstack.org
  Date: Wednesday, March 25, 2015 at 1:47 PM
  To: OpenStack List openstack-dev@lists.openstack.org
  mailto:openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [nova] CI report formatting (citrix /
  hyperv / vmware )
 
  Hi
  On Wed, Mar 25, 2015 at 12:39 PM, Sean Dague s...@dague.net
  mailto:s...@dague.net wrote:
 
  Currently Citrix, HyperV, and VMWare CI systems reporting on Nova
  patches have a different formatting than the standard that Jenkins
 and
  other systems are using:
 
  * test-name-no-spaces http://link.to/result
  
 https://urldefense.proofpoint.com/v2/url?u=http-3A__link.to_resultd=AwMFaQc=Sqcl0Ez6M0X8aeM67LKIiDJAXVeAw-YihVMNtXt-uEsr=VlZxHpZBmzzkWT5jqz9JYBk8YTeq9N3-diTlNj4GyNcm=EtHYlIXfK33bYsGf2k8XbFtgWlkcm_VdZCrFHTLEdiEs=5SS-txUrD3o8KS3QIaCL3XMBbeCYK5CjmzmuxDda7Oce=
 
  : [SUCCESS|FAILURE] some
  comment about the test
 
  I don't want to talk for Citrix, HyperV or VMWare but the standard
  only work if you use Zuul in your CI. I am using a setup based on a
  Jenkins plugin called gerrit-trigger and there's no way to format the
  message the way it's expected...


 FWIW I help maintain one the VMware CIs (the one voting on neutron and
 network-related patches for devstack and tempest).
 We use gerrit-trigger too (mostly out of lazyness, no other real reason),
 but we're able to format the message posted back to gerrit.
 For posting back votes we use the gerrit review command to post the
 message in the standard format.

Ok. I managed to find a way. It's possible. For future reference, on the
job configuration there's a field called "URL to post". The correct value
is literally "* $JOB_NAME $BUILD_URL". Sorry for the noise, guys. I can't
find myself an excuse not to report results in the expected format anymore.

 I think the same process is adopted also by the CI voting on nova.
 However, the job result string is not being posted. I will double check
 with the respective owners.

 Salvatore


 
 
  This means these systems don't show up in the CI rollup block -
  http://dl.dropbox.com/u/6514884/screenshot_158.png
  
 https://urldefense.proofpoint.com/v2/url?u=http-3A__dl.dropbox.com_u_6514884_screenshot-5F158.pngd=AwMFaQc=Sqcl0Ez6M0X8aeM67LKIiDJAXVeAw-YihVMNtXt-uEsr=VlZxHpZBmzzkWT5jqz9JYBk8YTeq9N3-diTlNj4GyNcm=EtHYlIXfK33bYsGf2k8XbFtgWlkcm_VdZCrFHTLEdiEs=nIjYwznh1_8aVBSz-XVEJrpNaMsDfqyekOQ2IhiHTo8e=
 
 
 
  Current the Vmware CI will vote +1 iff the patch has passed on the CI.
  We can investigate adding this to the CI rollup block.
 
 
 
  I'd really like that to change. The CI rollup block has been
 extremely
  useful in getting the test results of a patch above the fold, and
 the
  ability to dig into them clearly. I feel like if any CI system isn't
  reporting in standard format that's parsible by that, we should
 probably
  turn it off.
 
 
  I do not think that we should turn this off. They have value. It would
  be nice if things were all of the same format, which I guess that this
  is the intension of the mail. Lets all try and make an effort to work
  towards this goal.

 Right, honestly, I don't want these turned off, I want them reporting in
 a more standard format. But I do think if they don't report in a
 standard format it will cause problems and add to them being ignored.

 -Sean

 --
 Sean Dague
 http://dague.net


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] CI report formatting (citrix / hyperv / vmware )

2015-03-25 Thread Jordan Pittier
Hi
On Wed, Mar 25, 2015 at 12:39 PM, Sean Dague s...@dague.net wrote:

 Currently Citrix, HyperV, and VMWare CI systems reporting on Nova
 patches have a different formatting than the standard that Jenkins and
 other systems are using:

 * test-name-no-spaces http://link.to/result : [SUCCESS|FAILURE] some
 comment about the test

I don't want to talk for Citrix, HyperV or VMWare, but the standard only
works if you use Zuul in your CI. I am using a setup based on a Jenkins
plugin called gerrit-trigger, and there's no way to format the message the
way it's expected...


 This means these systems don't show up in the CI rollup block -
 http://dl.dropbox.com/u/6514884/screenshot_158.png

 I'd really like that to change. The CI rollup block has been extremely
 useful in getting the test results of a patch above the fold, and the
 ability to dig into them clearly. I feel like if any CI system isn't
 reporting in standard format that's parsible by that, we should probably
 turn it off.

 What's fair warning to get these systems postings into the standard
 format? It realistically should be a very quick change by them, but will
 help quite a lot in reviewing code.

 -Sean

 --
 Sean Dague
 http://dague.net


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [third-party]Properly format the status message with gerrit trigger

2015-03-20 Thread Jordan Pittier
Hi guys,
I am in charge of a third party CI (for Cinder). My setup is based on
Jenkins + the gerrit-trigger plugin. As you may know, it's hard to customize
the message in the "Gerrit Verified Commands" config. In particular, it's
not possible to add blank/empty lines, and you need blank lines if you
have several builds for the same trigger.

As you know, the infra team wants the 3rd party CI to respect this format :
http://ci.openstack.org/third_party.html#posting-result-to-gerrit

Currently the gerrit-trigger plugin reports in this format:

http://link.to/result : [SUCCESS|FAILURE]

As you can see, the test-name-no-spaces part is missing...

I don't want to fork gerrit-trigger (the code to change is here [1]) just
for that, and I know other people have faced the same issue. For some
obscure reason I don't want to install/use Zuul.

So would it make sense to slightly change the regex [2] so that
gerrit-trigger is also supported out of the box? That would make my life
easier :)
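
To illustrate what I mean by "slightly change the regex" (a hypothetical
pattern written in Python as an illustration, not the actual content of
gerrit.pp), something along these lines would accept both styles:

    import re

    # Hypothetical, illustrative pattern only: it accepts both the
    # documented format and what gerrit-trigger currently produces.
    ci_result = re.compile(
        r'^\s*\*?\s*'                      # optional leading "* "
        r'(?:(?P<name>\S+)\s+)?'           # optional test name
        r'(?P<url>https?://\S+)'           # link to the result
        r'\s*:\s*(?P<status>SUCCESS|FAILURE)')

    for line in ('* my-ci-job http://ci.example.com/42 : SUCCESS',
                 'http://ci.example.com/42 : FAILURE'):
        print(ci_result.match(line).groupdict())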

Thanks,
Jordan

[1]
https://github.com/jenkinsci/gerrit-trigger-plugin/blob/master/src/main/java/com/sonyericsson/hudson/plugins/gerrit/trigger/gerritnotifier/ParameterExpander.java#L531
[2]
https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/gerrit.pp#n164
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.policy] guraduation status

2015-03-02 Thread Jordan Pittier
Hi,
FYI there might be something related to what you plan :
https://github.com/stackforge/swiftpolicy/
https://review.openstack.org/#/c/89568/

The project is abandoned but the initial goal was to have the code somehow
proposed to be merged into OpenStack Swift. Feel free to have a look and
continue this effort.

Jordan

On Mon, Mar 2, 2015 at 11:01 AM, Osanai, Hisashi 
osanai.hisa...@jp.fujitsu.com wrote:

 oslo.policy folks,

 I'm thinking about realization of policy-based access control in swift
 using oslo.policy [1] so I would like to know oslo.policy's status for
 graduation.

 [1]
 https://github.com/openstack/oslo-specs/blob/master/specs/kilo/graduate-policy.rst

 Thanks in advance,
 Hisashi Osanai


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Module_six_moves_urllib_parse error

2015-02-25 Thread Jordan Pittier
Hi
You probably have an old python-six version installed system-wide.

Jordan

On Wed, Feb 25, 2015 at 11:56 AM, Manickam, Kanagaraj 
kanagaraj.manic...@hp.com wrote:

  Hi,



 I see the below error in my devstack and is raised from the package ‘six’



 AttributeError: 'Module_six_moves_urllib_parse' object has no attribute
 'SplitResult'



 Currently my devstack setup is having six 1.9.0 version. Could anyone help
 here to fix the issue? Thanks.



 Regards

 Kanagaraj M

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] LOG.debug()

2015-02-18 Thread Jordan Pittier
Hi,
Also, make sure you have :
debug = True
verbose = True
in the [DEFAULT] section of your nova.conf
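
For what it's worth, the filter just uses a standard module-level logger, so
once debug is enabled the messages end up in the log file of whichever
service runs that code (ram_filter.py runs in the scheduler, so look at the
nova-scheduler log rather than nova-compute). A tiny standalone illustration
of the same pattern, outside Nova:

    import logging

    LOG = logging.getLogger(__name__)

    # Nova wires the handlers up from nova.conf (debug, log_dir/log_file, ...);
    # basicConfig() is used here only to show where the message ends up.
    logging.basicConfig(filename='ram_filter_demo.log', level=logging.DEBUG)
    LOG.debug("host has %(free)d MB free RAM, %(req)d MB requested",
              {'free': 2048, 'req': 512})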

Jordan

On Wed, Feb 18, 2015 at 3:56 PM, Gary Kotton gkot...@vmware.com wrote:

  Hi,
 Please see
 http://docs.openstack.org/openstack-ops/content/logging_monitoring.html
 If you have installed from packages this may be in
 /var/log/nova/nova-compute.log
 Thanks
 Gary

   From: Vedsar Kushwaha vedsarkushw...@gmail.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Wednesday, February 18, 2015 at 4:52 PM
 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: [openstack-dev] LOG.debug()

   Hello World,

  I'm new to openstack and python too :).

 In the file:

 https://github.com/openstack/nova/blob/master/nova/scheduler/filters/ram_filter.py
 https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_openstack_nova_blob_master_nova_scheduler_filters_ram-5Ffilter.pyd=AwMFAwc=Sqcl0Ez6M0X8aeM67LKIiDJAXVeAw-YihVMNtXt-uEsr=VlZxHpZBmzzkWT5jqz9JYBk8YTeq9N3-diTlNj4GyNcm=DJEz4nE1qVf12YKxUwG0LN0yBMtl3warl36Oqq4Vqpos=ONUtXkRDFMM8mfVHff8Fc3zFe9AdJohiOZBUe7P6P3Ie=

  where does the LOG.debug() is storing the information?



 --
   Vedsar Kushwaha
 M.Tech-Computational Science
 Indian Institute of Science

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance]'Add' capability to the HTTP store

2015-02-17 Thread Jordan Pittier
Jay, Flavio, thanks for this interesting discussion. I get your points and
they really make sense to me.

I'll go for a specific driver that inherits from the HTTP store for
the read path and implements the write path.
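
For the record, the rough shape I have in mind is something like this (a
sketch only, with simplified method signatures; the class name is just a
placeholder):

    # Sketch of the shape I have in mind; the real glance_store driver
    # interface has a few more arguments and configuration plumbing.
    from glance_store._drivers import http


    class WritableHttpStore(http.Store):
        """Reuse the existing HTTP store for reads, add a write path."""

        def add(self, image_id, image_file, image_size, context=None):
            # PUT the image data somewhere under the configured HTTP server
            # and return the resulting location.
            raise NotImplementedError

        def delete(self, location, context=None):
            # issue an HTTP DELETE on the stored object
            raise NotImplementedError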

Jordan

On Tue, Feb 17, 2015 at 12:52 PM, Flavio Percoco fla...@redhat.com wrote:

 On 13/02/15 17:01 +0100, Jordan Pittier wrote:

 Humm this doesn't have to be complicated, for a start.


 Sorry for my late reply

  - Figuring out the http method the server expects (POST/PUT)

 Yeah, I agree. Theres no definitive answer to this but I think PUT makes
 sense
 here. I googled 'post vs put' and I found that the idempotent and who
 is in
 charge of the actual resource location choice (the client vs the server),
 favors PUT.


 Right but that's not what the remote server may be expecting. One of
 the problems about the HTTP store is that there's not real API
 besides what the HTTP protocol allows you to do. That is to say that a
 remote server may accept POST/PUT and in order to keep the
 implementation non-opinionated, you'd need to have a way for these
 things to be specified.


  - Adding support for at least few HTTP auth methods

 Why should the write path be more secured/more flexible than the read
 path ?
 If I take a look at the current HTTP store, only basic auth is supported
 (ie
 http://user:pass@server1/myLinuxImage). I suggest the write path (ie the
 add()
 method) should support the same auth mecanism. The cloud admin could also
 add
 some firewall rules to make sure the HTTP backend server can only be
 accessed
 by the Glance-api servers.


 I didn't say the read path was correct :P

 That said, I agree that we should keep both paths consistent.

  - Having a sufixed URL where we're sure glance will have proper
 permissions to upload data.

 That's up the the cloud admin/operator to make it work. The HTTP
 glance_store
 could have 2 config flags :
 a) http_server, a string with the scheme (http vs https) and the
 hostname of
 the HTTP server, ie 'http://server1'
 b) path_prefix. A string that will prefix the path part of the image
 URL.
 This config flag could be left empty/is optional.


 Yes, it was probably not clear from my previous email that these were
 not ands but things that would need to be brought up.


  Handling HTTP responses from the server

 That's of course to be discussed. But, IMO, this should be as simple as
 if
 response.code is 200 or 202 then OKAY else raise GlanceStoreException. I
 am
 not sure any other glance store is more granular than this.


 Again, this assumes too much from the server. So, either we impose
 some kind of rules as to how Glance expects the HTTP server to behave
 or we try to be bullet proof API-wise.

  How can we handle quota?

 I am new to glance_store, is there a notion of quotas in glance stores ? I
 though Glance (API) was handling this. What kind of quotas are we talking
 about
 here ?


 Glance handles quotas. The problem is that when the data is sent to
 the remote store, glance looses some control on it. A user may upload
 some data, the HTTP push could fail and we may try to delete the data
 without any proof that it will be correctly deleted.

 Also, without auth, we will have to force the user to send all image
 data through glance. The reason is that we don't know whether the HTTP
 store has support for HEAD to report the image size when using
 `--location`.

 Sorry if all the above sounds confusing. The problem with the HTTP
 store is that we have basically no control over it and that is
 worrisome from a security and implementation perspective.

 Flavio


  Frankly, it shouldn't add that much code. I feel we can make it clean if
 we
 leverage the different Python modules (httplib etc.)

 Regards,
 Jordan


 On Fri, Feb 13, 2015 at 4:20 PM, Flavio Percoco fla...@redhat.com
 wrote:

On 13/02/15 16:01 +0100, Jordan Pittier wrote:

What is the difference between just calling the Glance API to
upload an image,

versus adding add() functionality to the HTTP image store?
You mean using glance image-create --location http://server1/
myLinuxImage [..]
 ? If so, I guess adding the add() functionality will save the
 user
from
having to find the right POST curl/wget command to properly upload
 his
image.


I believe it's more complex than this. Having an `add` method for the
HTTP store implies:

- Figuring out the http method the server expects (POST/PUT)
- Adding support for at least few HTTP auth methods
- Having a sufixed URL where we're sure glance will have proper
 permissions to upload data.
- Handling HTTP responses from the server w.r.t the status of the data
 upload. For example: What happens if the remote http server runs out
 of space? What's the response status going to be like? How can we
 make glance agnostic to these discrepancies across HTTP servers so
 that it's consistent in its

Re: [openstack-dev] [glance]'Add' capability to the HTTP store

2015-02-16 Thread Jordan Pittier
So, I don't understand what allowing the HTTP backend to support add()
gives the user of Glance.
It doesn't give anything to the user.

glance_store is all about different backends, such as the VMWare datastore
or the Sheepdog data store. Having several backends/drivers allows the
cloud operator/administrator to choose among several options when he
deploys and operates his cloud. Currently the HTTP store lacks an 'add'
method so it can't be used as a default store. But the cloud provider may
have an existing storage solution/infrastructure that has an HTTP gateway
and that understands basic PUT/GET/DELETE operations.  So having a full
blown HTTP store makes sense, imo, because it gives more deployment options.

Is that clearer ? What do you think ?
Jordan

On Fri, Feb 13, 2015 at 7:28 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 02/13/2015 11:55 AM, Jordan Pittier wrote:

 Jay, I am afraid I didn't understand your point.

 Could you rephrase/elaborate on What is the difference between just
 calling the Glance API to upload an image, versus adding add() please ?
 Currently, you can't call the Glance API to upload an image if the
 default_store is the HTTP store.


 No, you upload the image to a Glance server that has a backing data store
 like filesystem or swift. But the process of doing that (i.e. calling
 `glance image upload`) is the same as what you are describing -- it's all
 just POST'ing some data via HTTP through the Glance API endpoint.

 So, I don't understand what allowing the HTTP backend to support add()
 gives the user of Glance.

 Best,
 -jay


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance]'Add' capability to the HTTP store

2015-02-13 Thread Jordan Pittier
What is the difference between just calling the Glance API to upload an
image, versus adding add() functionality to the HTTP image store?
You mean using glance image-create --location http://server1/myLinuxImage
[..] ? If so, I guess adding the add() functionality will save the user
from having to find the right POST curl/wget command to properly upload his
image.

On Fri, Feb 13, 2015 at 3:55 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 02/13/2015 09:47 AM, Jordan Pittier wrote:

 Hi list,

 I would like to add the 'add' capability to the HTTP glance store.

 Let's say I (as an operator or cloud admin) provide an HTTP server where
 (authenticated/trusted) users/clients can make the following HTTP request
 :

 POST http://server1/myLinuxImage HTTP/1.1
 Host: server1
 Content-Length: 25600
 Content-Type: application/octet-stream

 mybinarydata[..]

 Then the HTTP server will store the binary data, somewhere (for instance
 locally), some how (for instance in a plain file), so that the data is
 later on accessible by a simple GET http://server1/myLinuxImage

 In that case, this HTTP server could easily be a full fleshed Glance
 store.

 Questions :
 1) Has this been already discussed/proposed ? If so, could someone give
 me a pointer to this work ?
 2) Can I start working on this ? (the 2 main work items are : 'add an
 add method to glance_store._drivers.http.__Store' and 'add a delete
 method to glance_store._drivers.http.__Store (HTTP DELETE method)'


 What is the difference between just calling the Glance API to upload an
 image, versus adding add() functionality to the HTTP image store?

 Best,
 -jay

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance]'Add' capability to the HTTP store

2015-02-13 Thread Jordan Pittier
Hi list,

I would like to add the 'add' capability to the HTTP glance store.

Let's say I (as an operator or cloud admin) provide an HTTP server where
(authenticated/trusted) users/clients can make the following HTTP request :

POST http://server1/myLinuxImage HTTP/1.1
Host: server1
Content-Length: 25600
Content-Type: application/octet-stream

mybinarydata[..]

Then the HTTP server will store the binary data somewhere (for instance
locally), somehow (for instance in a plain file), so that the data is
later on accessible by a simple GET http://server1/myLinuxImage

In that case, this HTTP server could easily be a full fleshed Glance store.

Questions :
1) Has this been already discussed/proposed ? If so, could someone give me
a pointer to this work ?
2) Can I start working on this ? (the 2 main work items are : 'add an add
method to glance_store._drivers.http.Store' and 'add a delete method to
glance_store._drivers.http.Store (HTTP DELETE method)'

What do you think ?
Thanks,
Jordan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance]'Add' capability to the HTTP store

2015-02-13 Thread Jordan Pittier
Humm this doesn't have to be complicated, for a start.

- Figuring out the http method the server expects (POST/PUT)
Yeah, I agree. There's no definitive answer to this, but I think PUT makes
sense here. I googled 'post vs put' and found that idempotency, and the
question of who is in charge of choosing the actual resource location (the
client vs the server), favor PUT.

- Adding support for at least few HTTP auth methods
Why should the write path be more secure/more flexible than the read
path? If I take a look at the current HTTP store, only basic auth is
supported (i.e. http://user:pass@server1/myLinuxImage). I suggest the write
path (i.e. the add() method) should support the same auth mechanism. The cloud
admin could also add some firewall rules to make sure the HTTP backend
server can only be accessed by the Glance-api servers.

- Having a sufixed URL where we're sure glance will have proper
 permissions to upload data.
That's up to the cloud admin/operator to make it work. The HTTP
glance_store could have 2 config flags:
a) http_server, a string with the scheme (http vs https) and the hostname
of the HTTP server, i.e. 'http://server1'
b) path_prefix, a string that will prefix the path part of the image
URL. This config flag could be left empty/is optional.

Handling HTTP responses from the server
That's of course to be discussed. But, IMO, this should be as simple as if
response.code is 200 or 202 then OKAY else raise GlanceStoreException. I
am not sure any other glance store is more granular than this.

How can we handle quota?
I am new to glance_store; is there a notion of quotas in glance stores? I
thought Glance (API) was handling this. What kind of quotas are we talking
about here?

Frankly, it shouldn't add that much code. I feel we can make it clean if we
leverage the different Python modules (httplib etc.)
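
To give an idea of how small it could be, here is a very rough sketch of the
write path (using python-requests for readability; the real driver would plug
into the glance_store option machinery and follow its exact add() contract,
and http_server/path_prefix are the two config flags proposed above):

    import hashlib

    import requests


    def http_add(http_server, path_prefix, image_id, image_file, auth=None):
        """Sketch: PUT the image data, return (location, size, checksum)."""
        # e.g. http://server1/images/<image_id> when path_prefix = 'images'
        url = '/'.join(part.strip('/')
                       for part in (http_server, path_prefix, image_id) if part)
        data = image_file.read()  # a real driver would stream in chunks
        checksum = hashlib.md5(data).hexdigest()
        resp = requests.put(url, data=data, auth=auth,
                            headers={'Content-Type': 'application/octet-stream'})
        if resp.status_code not in (200, 201, 202, 204):
            raise RuntimeError('upload failed: HTTP %d' % resp.status_code)
        return url, len(data), checksum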

Regards,
Jordan


On Fri, Feb 13, 2015 at 4:20 PM, Flavio Percoco fla...@redhat.com wrote:

 On 13/02/15 16:01 +0100, Jordan Pittier wrote:

 What is the difference between just calling the Glance API to upload an
 image,

 versus adding add() functionality to the HTTP image store?
 You mean using glance image-create --location http://server1/
 myLinuxImage [..]
  ? If so, I guess adding the add() functionality will save the user from
 having to find the right POST curl/wget command to properly upload his
 image.


 I believe it's more complex than this. Having an `add` method for the
 HTTP store implies:

 - Figuring out the http method the server expects (POST/PUT)
 - Adding support for at least few HTTP auth methods
 - Having a sufixed URL where we're sure glance will have proper
  permissions to upload data.
 - Handling HTTP responses from the server w.r.t the status of the data
  upload. For example: What happens if the remote http server runs out
  of space? What's the response status going to be like? How can we
  make glance agnostic to these discrepancies across HTTP servers so
  that it's consistent in its responses to glance users?
 - How can we handle quota?

 I'm not fully opposed, although it sounds like not worth it code-wise,
 maintenance-wise and performance-wise. The user will have to run just
 1 command but at the cost of all of the above.

 Do the points listed above make sense to you?

 Cheers,
 Flavio



 On Fri, Feb 13, 2015 at 3:55 PM, Jay Pipes jaypi...@gmail.com wrote:

On 02/13/2015 09:47 AM, Jordan Pittier wrote:
  Hi list,

I would like to add the 'add' capability to the HTTP glance store.

Let's say I (as an operator or cloud admin) provide an HTTP server
where
(authenticated/trusted) users/clients can make the following HTTP
request :

POST http://server1/myLinuxImage HTTP/1.1
Host: server1
Content-Length: 25600
Content-Type: application/octet-stream

mybinarydata[..]

Then the HTTP server will store the binary data, somewhere (for
instance
locally), some how (for instance in a plain file), so that the
 data is
later on accessible by a simple GET http://server1/myLinuxImage

In that case, this HTTP server could easily be a full fleshed
 Glance
store.

Questions :
1) Has this been already discussed/proposed ? If so, could someone
 give
me a pointer to this work ?
2) Can I start working on this ? (the 2 main work items are : 'add
 an
add method to glance_store._drivers.http.__Store' and 'add a
 delete
method to glance_store._drivers.http.__Store (HTTP DELETE method)'


What is the difference between just calling the Glance API to upload an
image, versus adding add() functionality to the HTTP image store?

Best,
-jay


 __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
http://lists.openstack.org

Re: [openstack-dev] [glance]'Add' capability to the HTTP store

2015-02-13 Thread Jordan Pittier
Jay, I am afraid I didn't understand your point.

Could you rephrase/elaborate on "What is the difference between just
calling the Glance API to upload an image, versus adding add()", please?
Currently, you can't call the Glance API to upload an image if the
default_store is the HTTP store.

On Fri, Feb 13, 2015 at 5:17 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 02/13/2015 10:01 AM, Jordan Pittier wrote:

  What is the difference between just calling the Glance API to upload
 an image, versus adding add() functionality to the HTTP image store?
 You mean using glance image-create --location
 http://server1/myLinuxImage [..] ? If so, I guess adding the add()
 functionality will save the user from having to find the right POST
 curl/wget command to properly upload his image.


 How so?

 If the user is already using Glance, they can use either the Glance REST
 API or the glanceclient tools.

 -jay

  On Fri, Feb 13, 2015 at 3:55 PM, Jay Pipes jaypi...@gmail.com
 mailto:jaypi...@gmail.com wrote:

 On 02/13/2015 09:47 AM, Jordan Pittier wrote:

 Hi list,

 I would like to add the 'add' capability to the HTTP glance store.

 Let's say I (as an operator or cloud admin) provide an HTTP
 server where
 (authenticated/trusted) users/clients can make the following
 HTTP request :

 POST http://server1/myLinuxImage HTTP/1.1
 Host: server1
 Content-Length: 25600
 Content-Type: application/octet-stream

 mybinarydata[..]

 Then the HTTP server will store the binary data, somewhere (for
 instance
 locally), some how (for instance in a plain file), so that the
 data is
 later on accessible by a simple GET http://server1/myLinuxImage

 In that case, this HTTP server could easily be a full fleshed
 Glance store.

 Questions :
 1) Has this been already discussed/proposed ? If so, could
 someone give
 me a pointer to this work ?
 2) Can I start working on this ? (the 2 main work items are :
 'add an
 add method to glance_store._drivers.http.Store' and 'add a
 delete
 method to glance_store._drivers.http.Store (HTTP DELETE
 method)'


 What is the difference between just calling the Glance API to upload
 an image, versus adding add() functionality to the HTTP image store?

 Best,
 -jay

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 OpenStack-dev-request@lists.__openstack.org?subject:__unsubscribe
 http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 
 http://lists.openstack.org/__cgi-bin/mailman/listinfo/__openstack-dev
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [Ceilometer] Real world experience with Ceilometer deployments - Feedback requested

2015-02-12 Thread Jordan Pittier
Hi,
My experience with Ceilometer is that MongoDB is/was a major bottleneck.
You need sharding + servers with a lot of RAM. You need to set a TTL on your
samples, and only save in the DB the metrics that really matter to you.
MongoDB v3 should also help.
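
For example (a sketch; the exact option name varies across releases,
time_to_live vs metering_time_to_live in the [database] section, so double
check with your version):

    [database]
    # keep samples for 7 days instead of forever
    time_to_live = 604800

and in pipeline.yaml, narrow the meters: list of each source down to the
handful of meters you actually bill or alarm on, instead of the default "*".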

Regarding RabbitMQ pressure, I think this blueprint helps a lot
https://blueprints.launchpad.net/ceilometer/+spec/multiple-rabbitmq

And also, you should make your own tests because there has been a lot of
FUD around Ceilometer.

Jordan

On Thu, Feb 12, 2015 at 6:23 PM, Diego Parrilla Santamaría 
diego.parrilla.santama...@gmail.com wrote:

 Hi Mash,

 we dropped Ceilometer as the core tool to gather metrics for our rating
 and billing system. I must admit it has improved, but I think it's broken
 by design: a metering and monitoring system is not the same thing.

 We have built a component that directly listens from rabbit notification
 tools (a-la-Stacktach). This tool stores the all events in a database (but
 anything could work, it's just a logging system) and then we process these
 events and store them in a datamart style database every hour. The rating
 and billing system reads this database and process it every hour too. We
 decided to implement this pipeline processing of data because we knew in
 advance that processing such an amount of data was a challenge.

 I think Ceilometer should be used just to trigger alarms for heat for
 example, and something else should be used for rating and billing.

 Cheers
 Diego



  --
 Diego Parrilla
 http://www.stackops.com/*CEO*
 *www.stackops.com http://www.stackops.com/ | *
 diego.parri...@stackops.com | +34 91 005-2164 | skype:diegoparrilla



 On Wed, Feb 11, 2015 at 8:37 PM, Maish Saidel-Keesing mais...@maishsk.com
  wrote:

 Is Ceilometer ready for prime time?

 I would be interested in hearing from people who have deployed OpenStack
 clouds with Ceilometer, and their experience. Some of the topics I am
 looking for feedback on are:

 - Database Size
 - MongoDB management, Sharding, replica sets etc.
 - Replication strategies
 - Database backup/restore
 - Overall useability
 - Gripes, pains and problems (things to look out for)
 - Possible replacements for Ceilometer that you have used instead


 If you are willing to share - I am sure it will be beneficial to the
 whole community.

 Thanks in Advance


 With best regards,


 Maish Saidel-Keesing
 Platform Architect
 Cisco




 ___
 OpenStack-operators mailing list
 openstack-operat...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] volume / host coupling

2015-01-08 Thread Jordan Pittier
Arne,

I imagine this has an
impact on things using the services table, such as “cinder-manage” (how
does your “cinder-manage service list” output look like? :-)
It has indeed. I have 3 cinder-volume services, but only one line of output
in “cinder-manage service list”. But it's a minor inconvenience to me.

Duncan,
There are races, e.g. do snapshot and delete at the same time, backup and
delete at the same time, etc. The race windows are pretty tight on ceph but
they are there. It is worse on some other backends
Okay, never ran into those, yet ! I cross fingers :p

Thanks, and sorry if I hijacked this thread a little.
Jordan

On Thu, Jan 8, 2015 at 5:30 PM, Arne Wiebalck arne.wieba...@cern.ch wrote:

  Hi Jordan,

  As Duncan pointed out there may be issues if you have multiple backends
 and indistinguishable nodes (which you could  probably avoid by separating
 the hosts per backend and use different “host” flags for each set).

  But also if you have only one backend: the “host flag will enter the
 ‘services'
 table and render the host column more or less useless. I imagine this has
 an
 impact on things using the services table, such as “cinder-manage” (how
 does your “cinder-manage service list” output look like? :-), and it may
 make it
 harder to tell if the individual services are doing OK, or to control them.

  I haven’t run Cinder with identical “host” flags in production, but I
 imagine
 there may be other areas which are not happy about indistinguishable hosts.

  Arne


  On 08 Jan 2015, at 16:50, Jordan Pittier jordan.pitt...@scality.com
 wrote:

  Hi,
 Some people apparently use the ‘host’ option in cinder.conf to make the
 hosts indistinguishable, but this creates problems in other places.
 I use shared storage mounted on several cinder-volume nodes, with host
 flag set the same everywhere. Never ran into problems so far. Could you
 elaborate on this creates problems in other places please ?

  Thanks !
 Jordan

 On Thu, Jan 8, 2015 at 3:40 PM, Arne Wiebalck arne.wieba...@cern.ch
 wrote:

  Hmm. Not sure how widespread installations with multiple Ceph backends
 are where the
 Cinder hosts have access to only one of the backends (which is what you
 assume, right?)
 But, yes, if the volume type names are also the same (is that also needed
 for this to be a
 problem?), this will be an issue ...

  So, how about providing the information the scheduler does not have by
 introducing an
 additional tag to identify ‘equivalent’ backends, similar to the way some
 people already
 use the ‘host’ option?

  Thanks!
   Arne


  On 08 Jan 2015, at 15:11, Duncan Thomas duncan.tho...@gmail.com wrote:

  The problem is that the scheduler doesn't currently have enough info to
 know which backends are 'equivalent' and which aren't. e.g. If you have 2
 ceph clusters as cinder backends, they are indistinguishable from each
 other.

 On 8 January 2015 at 12:14, Arne Wiebalck arne.wieba...@cern.ch wrote:

 Hi,

 The fact that volume requests (in particular deletions) are coupled with
 certain Cinder hosts is not ideal from an operational perspective:
 if the node has meanwhile disappeared, e.g. retired, the deletion gets
 stuck and can only be unblocked by changing the database. Some
 people apparently use the ‘host’ option in cinder.conf to make the hosts
 indistinguishable, but this creates problems in other places.

 From what I see, even for backends that would support it (such as Ceph),
 Cinder currently does not provide means to ensure that any of
 the hosts capable of performing a volume operation would be assigned the
 request in case the original/desired one is no more available,
 right?

 If that is correct, how about changing the scheduling of delete
 operation to use the same logic as create operations, that is pick any of
 the
 available hosts, rather than the one which created a volume in the first
 place (for backends where that is possible, of course)?

 Thanks!
  Arne

 —
 Arne Wiebalck
 CERN IT
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Duncan Thomas
  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev

Re: [openstack-dev] [Cinder] volume / host coupling

2015-01-08 Thread Jordan Pittier
Hi,
Some people apparently use the ‘host’ option in cinder.conf to make the
hosts indistinguishable, but this creates problems in other places.
I use shared storage mounted on several cinder-volume nodes, with the host
flag set to the same value everywhere. I have never run into problems so far.
Could you elaborate on “this creates problems in other places”, please?
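
Concretely, that just means something like this in cinder.conf on every node
sharing the storage (the value is arbitrary, it only has to be identical
everywhere):

    [DEFAULT]
    host = cinder-cluster-1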

Thanks !
Jordan

On Thu, Jan 8, 2015 at 3:40 PM, Arne Wiebalck arne.wieba...@cern.ch wrote:

  Hmm. Not sure how widespread installations with multiple Ceph backends
 are where the
 Cinder hosts have access to only one of the backends (which is what you
 assume, right?)
 But, yes, if the volume type names are also the same (is that also needed
 for this to be a
 problem?), this will be an issue ...

  So, how about providing the information the scheduler does not have by
 introducing an
 additional tag to identify ‘equivalent’ backends, similar to the way some
 people already
 use the ‘host’ option?

  Thanks!
  Arne


  On 08 Jan 2015, at 15:11, Duncan Thomas duncan.tho...@gmail.com wrote:

  The problem is that the scheduler doesn't currently have enough info to
 know which backends are 'equivalent' and which aren't. e.g. If you have 2
 ceph clusters as cinder backends, they are indistinguishable from each
 other.

 On 8 January 2015 at 12:14, Arne Wiebalck arne.wieba...@cern.ch wrote:

 Hi,

 The fact that volume requests (in particular deletions) are coupled with
 certain Cinder hosts is not ideal from an operational perspective:
 if the node has meanwhile disappeared, e.g. retired, the deletion gets
 stuck and can only be unblocked by changing the database. Some
 people apparently use the ‘host’ option in cinder.conf to make the hosts
 indistinguishable, but this creates problems in other places.

 From what I see, even for backends that would support it (such as Ceph),
 Cinder currently does not provide means to ensure that any of
 the hosts capable of performing a volume operation would be assigned the
 request in case the original/desired one is no more available,
 right?

 If that is correct, how about changing the scheduling of delete operation
 to use the same logic as create operations, that is pick any of the
 available hosts, rather than the one which created a volume in the first
 place (for backends where that is possible, of course)?

 Thanks!
  Arne

 —
 Arne Wiebalck
 CERN IT
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Duncan Thomas
  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Progress of the port to Python 3

2015-01-06 Thread Jordan Pittier
Hi Victor.

Thanks for your effort. Did you read the ML thread “[nova]nova not work
with eventlet 0.16.0”? It looks important.

Jordan

On Tue, Jan 6, 2015 at 5:51 PM, victor stinner victor.stin...@enovance.com
wrote:

 Hi,

 I'm still working on porting OpenStack to Python 3. I'm fixing Python 3
 issues in eventlet and I'm trying to replace eventlet with trollius in
 OpenStack. Two different and complementary ways to port OpenStack to Python
 3.


 I fixed some eventlet issues related to Python 3 and monkey-patching. My
 changes are part of eventlet 0.16 which was released the 2014-12-30. I
 tried this version with Oslo Messaging: with another fix in eventlet
 (threading.Condition) and a change in the zmq driver, only 3 tests are now
 failing (which is a great success for me).

 See my Oslo Messaging change for more information:
 https://review.openstack.org/#/c/145241/

 It looks like eventlet 0.16 works much better with Python 3 and
 monkey-patching. You should try it on your own project! Tell me if you need
 help to fix issues.

 --

 About asyncio, I renamed my aiogreen project to aioeventlet (to make
 it more explicit that it is specific to eventlet). With the release 0.4,
 the API is now considered as stable.
 http://aioeventlet.readthedocs.org/

 Mehdi Abaakouk voted +2 on my Oslo Messaging patches to support trollius
 coroutines using aioeventlet project, but it looks that he's alone to
 review Oslo Messaging patches :-/ Anyway to approve my changes?

 * Add an optional executor callback to dispatcher
   https://review.openstack.org/136652
 * Add a new aioeventlet executor
   https://review.openstack.org/136653

 By the way, my review to add aioeventlet dependency also waits for a
 review:

 * Add aioeventlet dependency
   https://review.openstack.org/138750

 Victor Stinner

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] HTTPS for spice console

2014-12-18 Thread Jordan Pittier
Hi,
You'll need a recent version of spice-html5, because this commit
(http://cgit.freedesktop.org/spice/spice-html5/commit/?id=293d405e15a4499219fe81e830862cc2b1518e3e)
is quite recent.

Jordan

On Wed, Dec 17, 2014 at 11:29 PM, Akshik DBK aks...@outlook.com wrote:

 Are there any recommended approach to configure spice console proxy on a
 secure [https], could not find proper documentation for the same.

 can someone point me to the rigt direction

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] environment variables in local.conf post-config section

2014-11-26 Thread jordan pittier
Hi,
It should work with your current local.conf. You may be facing this bug : 
https://bugs.launchpad.net/devstack/+bug/1386413

Jordan

- Original Message -
From: Andreas Scheuring scheu...@linux.vnet.ibm.com
To: openstack-dev openstack-dev@lists.openstack.org
Sent: Wednesday, 26 November, 2014 2:04:57 PM
Subject: [openstack-dev] [devstack] environment variables in local.conf 
post-config section

Hi together, 
is there a way to use Environment variables in the local.conf
post-config section?

On my system (stable/juno devstack) not the content of the variable, but
the variable name itself is being inserted into the config file.


So e.g. 
[[post-config|$NOVA_CONF]]
[DEFAULT]
vncserver_proxyclient_address=$HOST_IP_MGMT

results in nova.conf as:
vncserver_proxyclient_address = $HOST_IP_MGMT



Thanks

-- 
Andreas 
(irc: scheuran)




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift]Questions on concurrent operations on the same object

2014-10-31 Thread jordan pittier
Hi guys,

We are currently benchmarking our Scality object server backend for Swift. We 
basically created a new DiskFile class that is used in a new ObjectController 
that inherits from the native server.ObjectController. It's pretty similar to 
how Ceph can be used as a backend for Swift objects. Our DiskFile is used to
make HTTP requests to the Scality Ring, which supports GET/PUT/DELETE on
objects.

Scality implementation is here : 
https://github.com/scality/ScalitySproxydSwift/blob/master/swift/obj/scality_sproxyd_diskfile.py

We are using ssbench to benchmark, and when the concurrency is high, we see
seemingly interleaved operations on the same object. For example, our DiskFile
will be asked to DELETE an object while the object is currently being PUT by
another client. The Scality Ring doesn't support multiple writers on the same
object, so a lot of ssbench operations fail with an HTTP response '423 - Object
is locked'.

We dove into the ssbench code and saw that it should not do interleaved
operations. By adding some logging in our DiskFile class, we kind of guessed
that the Object server doesn't wait for the put() method of the DiskFileWriter
to finish before returning HTTP 200 to the Swift proxy. Is this explanation
correct? Our put() method in the DiskFileWriter can take some time to complete,
which would explain why the PUT on the object is still being finalized when a
DELETE arrives.

Some questions:
1) Is it possible that the put() method of the DiskFileWriter is somehow
non-blocking? (Or that the result of put() is not awaited?) If not, how could
ssbench think that an object has been completely PUT and that it is allowed to
delete it?
2) If someone could explain to me in a few words (or more :)) how Swift deals
with multiple writers on the same object, that would be very much appreciated.
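
As context for question 1: on our side we would like put() to be fully
synchronous, i.e. the data is committed on the Ring before it returns. One
workaround we could do on our side would be to retry while the object is
locked, roughly like this (an illustration only, using python-requests, not
the actual Swift DiskFile API):

    import time

    import requests


    def put_object_blocking(url, data, retries=5, backoff=0.5):
        """Synchronous PUT that retries while the object is locked (423)."""
        for attempt in range(retries):
            resp = requests.put(url, data=data)
            if resp.status_code != 423:
                resp.raise_for_status()
                return resp
            # another writer still holds the object: wait a bit and retry
            time.sleep(backoff * (attempt + 1))
        raise RuntimeError('object still locked after %d attempts' % retries)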

Thanks a lot,
Jordan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

