Re: [openstack-dev] [Murano] 'NoMatchingFunctionException: No function "#operator_." matches supplied arguments' error when adding an application to an environment

2015-11-26 Thread Vahid S Hashemian
Thanks Stan for the pointer.

I removed the line that referred to the 'name' property and now my 
application is added to the environment without any errors.
However, what I see in ui.yaml still doesn't look like YAML.

I'm attaching samples again.


Even for HOT packages the content is not YAML.

Regards,
--Vahid



ui-csar.yaml
Description: Binary data


ui-hot2.yaml
Description: Binary data


Re: [openstack-dev] how do we get the migration status details info from nova

2015-11-26 Thread 金运通
I agree we should have both (notification/API).

Watcher will consume the notifications, not the API; the API would be helpful
for operators.

The notification way should be addressed in a separate BP.

BR,
YunTongJin

2015-11-26 22:37 GMT+08:00 少合冯 :

> Agree.  Why not both?  And we will use created_at to work out how long the
> migration has been running.
>
>
> Paul, thank you very much for the suggestion.
>
> BR.
> Shaohe Feng
>
>
>
> 2015-11-26 19:10 GMT+08:00 Paul Carlton :
>
>> On 26/11/15 10:48, 少合冯 wrote:
>>
>> Now we agree that getting more migration status details is useful.
>>
>> But how do we get them?
>> By REST API or notification?
>>
>>
>> If by API, is the "time_elapsed" field needed?
>> There is already a "created_at" field.
>> But IMO that is based on the clock of the conductor server,
>> while the time_elapsed can be obtained from libvirt, i.e. from the hypervisor.
>> Usually there are NTP servers in the cloud, so we can derive the
>> time_elapsed from "created_at",
>> but I am not sure about the case where the clocks of the hypervisor
>> and the conductor server host are out of sync.
>>
>> Why not both?  Just update the _monitor_live_migration method in the libvirt
>> driver (and any similar functions in other drivers, if they exist) so it
>> updates the migration object and also sends notification events.  These don't
>> have to be at 5 second intervals, although I think that is about right for
>> the migration object update.  Notification messages could be sent once every
>> 30 seconds or so.
>>
>> Operators can monitor the progress via the API, and orchestration utilities
>> can consume the notification messages (and/or use the API).
>> This will enable them to identify migration operations that are not making
>> good progress and take action to address the issue.
>>
>> The created_at and updated_at fields of the migration object should be
>> sufficient to allow the caller to work out how long the migration has been
>> running for (or how long it took in the case of a completed migration).
>>
>> The notification payload can include the created_at field or not.  I'd say
>> not.
>> There will be a notification message generated when a migration starts,
>> so subsequent progress messages don't need it; if the consumer wants
>> the complete picture they can call the API.
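
For illustration, here is a rough sketch of the shape this could take in the
libvirt driver's monitoring loop (names, fields and payloads are hypothetical,
not nova's actual code):

    # Illustrative sketch only: one loop persists progress on the Migration
    # object (polled via the REST API) and emits occasional notifications.
    import time

    from nova import rpc

    MIGRATION_UPDATE_INTERVAL = 5   # seconds, as suggested above
    NOTIFY_INTERVAL = 30            # seconds

    def _monitor_live_migration(context, instance, migration, guest):
        notifier = rpc.get_notifier(service='compute')
        last_notified = 0.0
        while migration.status == 'running':
            # hypothetical accessor wrapping libvirt's job stats
            info = guest.get_job_info()
            # progress fields as proposed in this thread, not necessarily
            # present on today's Migration object
            migration.memory_total = info.memory_total
            migration.memory_processed = info.memory_processed
            migration.memory_remaining = info.memory_remaining
            migration.save()
            now = time.time()
            if now - last_notified >= NOTIFY_INTERVAL:
                notifier.info(context,
                              'compute.instance.live_migration.progress',
                              {'instance_uuid': instance.uuid,
                               'memory_total': info.memory_total,
                               'memory_remaining': info.memory_remaining})
                last_notified = now
            time.sleep(MIGRATION_UPDATE_INTERVAL)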
>>
>>
>> --
>> Paul Carlton
>> Software Engineer
>> Cloud Services
>> Hewlett Packard
>> BUK03:T242
>> Longdown Avenue
>> Stoke Gifford
>> Bristol BS34 8QZ
>>
>> Mobile:+44 (0)7768 994283
>> Email:mailto:paul.carlt...@hpe.com 
>> Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 
>> 1HN Registered No: 690597 England.
>> The contents of this message and any attachments to it are confidential and 
>> may be legally privileged. If you have received this message in error, you 
>> should delete it from your system immediately and advise the sender. To any 
>> recipient of this message within HP, unless otherwise stated you should 
>> consider this message and attachments as "HP CONFIDENTIAL".
>>
>>
>>
>


Re: [openstack-dev] [QA][Tempest] Use tempest-config for tempest-cli-improvements

2015-11-26 Thread Andrey Kurilin
Sorry for the wrong numbers. The bug fix for the issue with counters is merged.
Correct numbers (latest result from rally's gate [1]):
 - total number of executed tests: 1689
 - success: 1155
 - skipped: 534 (neutron, heat, sahara, ceilometer are disabled; [2] should
enable them)
 - failed: 0

[1] -
http://logs.openstack.org/27/246627/11/gate/gate-rally-dsvm-verify-full/800bad0/rally-verify/7_verify_results_--html.html.gz
[2] - https://review.openstack.org/#/c/250540/

On Thu, Nov 26, 2015 at 3:23 PM, Yaroslav Lobankov 
wrote:

> Hello everyone,
>
> Yes, I am working on this now. We have some success already, but there is
> a lot of work to do. Of course, some things don't work ideally. For
> example, in [2] from the previous email we have not 24 skipped tests but
> actually many more. So we have a bug somewhere :)
>
> Regards,
> Yaroslav Lobankov.
>
> On Thu, Nov 26, 2015 at 3:59 PM, Andrey Kurilin 
> wrote:
>
>> Hi!
>> Boris P. and I tried to push a spec [1] for an automated tempest config
>> generator, but we did not succeed in merging it. IMO, the QA team doesn't want
>> to have such a tool :(
>>
>> >However, there is a big concern:
>> >If the script contains a bug and creates a configuration which makes
>> >most tests skipped, we cannot do enough testing on the gate.
>> >Tempest contains 1432 tests and it is difficult to detect which tests are
>> >skipped unexpectedly.
>>
>> Yaroslav Lobankov is working on improvements for the tempest config generator
>> in Rally. The last time we launched a full tempest run [2], we got 1154
>> successful tests and only 24 skipped. Also, there is a patch which adds an
>> x-fail mechanism (based on subunit-filter): you can pass a file with test
>> names + reasons and rally will modify the results.
>>
>> [1] - https://review.openstack.org/#/c/94473/
>>
>> [2] -
>> http://logs.openstack.org/49/242849/8/check/gate-rally-dsvm-verify/e91992e/rally-verify/7_verify_results_--html.html.gz
>>
>> On Thu, Nov 26, 2015 at 1:52 PM, Ken'ichi Ohmichi 
>> wrote:
>>
>>> Hi Daniel,
>>>
>>> Thanks for bringing this up.
>>>
>>> 2015-11-25 1:40 GMT+09:00 Daniel Mellado :
>>> > Hi All,
>>> >
>>> > As you might already know, within Red Hat's tempest fork, we do have
>>> one
>>> > tempest configuration script which was built in the past by David
>>> Kranz [1]
>>> > and that's been actively used in our CI system. Regarding this topic,
>>> I'm
>>> > aware that quite some effort has been done in the past [2] and I would
>>> like
>>> > to complete the implementation of this blueprint/spec.
>>> >
>>> > My plan would be to have this script under the /tempest/cmd or
>>> > /tempest/tools folder from tempest so it can be used to configure not
>>> the
>>> > tempest gate but any cloud we'd like to run tempest against.
>>> >
>>> > Adding the configuration script was discussed briefly at the Mitaka
>>> summit
>>> > in the QA Priorities meeting [3]. I propose we use the existing
>>> etherpad to
>>> > continue the discussion around and tracking of implementing "tempest
>>> > config-create" using the downstream config script as a starting point.
>>> [4]
>>> >
>>> > If you have any questions, comments or opinion, please let me know.
>>>
>>> This topic has come up several times, and I also felt this kind of
>>> tool would be very useful for Tempest users, because Tempest contains 296
>>> options ($ grep cfg * -R | grep Opt | wc -l) now and it is difficult to
>>> set the configuration up.
>>> However, there is a big concern:
>>> If the script contains a bug and creates a configuration which makes
>>> most tests skipped, we cannot do enough testing on the gate.
>>> Tempest contains 1432 tests and it is difficult to detect which tests are
>>> skipped unexpectedly.
>>> Actually we faced unexpectedly skipped tests on the gate before due to
>>> some bug, and the problem has since been fixed.
>>> But I can imagine this kind of problem happening after implementing this
>>> kind of script.
>>>
>>> So now I feel Tempest users need to know what cloud they want to
>>> test with Tempest, and need to know what tests run with Tempest.
>>> Basically, testers need to know what test targets/items they are testing.
>>>
>>> Thanks
>>> Ken Ohmichi
>>>
>>> ---
>>>
>>> > ---
>>> > [1]
>>> >
>>> https://github.com/redhat-openstack/tempest/blob/master/tools/config_tempest.py
>>> > [2]
>>> https://blueprints.launchpad.net/tempest/+spec/tempest-config-generator
>>> > [3] https://etherpad.openstack.org/p/mitaka-qa-priorities
>>> > [4] https://etherpad.openstack.org/p/tempest-cli-improvements
>>> >
>>> >
>>> https://github.com/openstack/qa-specs/blob/master/specs/tempest/tempest-cli-improvements.rst
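
To make the "unexpected skips" concern above concrete, Tempest tests skip based
on configuration roughly like this (a sketch from memory; the exact base class
and option names may differ by release):

    # Sketch of Tempest's config-driven skip pattern (approximate).
    from tempest import config
    from tempest import test

    CONF = config.CONF

    class BaremetalNodesTest(test.BaseTestCase):
        @classmethod
        def skip_checks(cls):
            super(BaremetalNodesTest, cls).skip_checks()
            if not CONF.service_available.ironic:
                # A config generator that wrongly writes "ironic = False"
                # silently turns this whole class into skips, not failures.
                raise cls.skipException("Ironic is not available")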
>>> >
>>> >

Re: [openstack-dev] [puppet] [release] Puppet OpenStack 7.0.0 Liberty (_independent)

2015-11-26 Thread Andrew Woodward
Fantastic to hear, good work guys.

On Thu, Nov 26, 2015, 3:01 PM David Moreau Simard  wrote:

> Awesome!
>
> Congratulations to everyone involved in the release of the most
> popular deployment method [1] :)
> The new acceptance and integration tests make this rock solid, too!
>
> [1]:
> http://superuser.openstack.org/articles/openstack-users-share-how-their-deployments-stack-up
>
>
> David Moreau Simard
> Senior Software Engineer | Openstack RDO
>
> dmsimard = [irc, github, twitter]
>
>
> On Thu, Nov 26, 2015 at 9:12 AM, Emilien Macchi 
> wrote:
> > Puppet OpenStack community is very proud to announce the release of 22
> > modules:
> >
> > puppet-aodh 7.0.0
> > puppet-ceilometer 7.0.0
> > puppet-cinder 7.0.0
> > puppet-designate 7.0.0
> > puppet-glance 7.0.0
> > puppet-gnocchi 7.0.0
> > puppet-heat 7.0.0
> > puppet-horizon 7.0.0
> > puppet-ironic 7.0.0
> > puppet-keystone 7.0.0
> > puppet-manila 7.0.0
> > puppet-murano 7.0.0
> > puppet-neutron 7.0.0
> > puppet-nova 7.0.0
> > puppet-openstacklib 7.0.0
> > puppet-openstack_extras 7.0.0
> > puppet-sahara 7.0.0
> > puppet-swift 7.0.0
> > puppet-tempest 7.0.0
> > puppet-trove 7.0.0
> > puppet-tuskar 7.0.0
> > puppet-vswitch 3.0.0
> >
> > For more details about the release, you can visit:
> > https://wiki.openstack.org/wiki/Puppet/releases
> > https://forge.puppetlabs.com/openstack
> >
> > Here are some interesting numbers [1]:
> >
> > Contributors during Kilo cycle: 91
> > Contributors during Liberty cycle: 108
> >
> > Commits during Kilo cycle: 730
> > Commits during Liberty cycle: 1201
> >
> > LOC during Kilo cycle: 67104
> > LOC during Liberty cycle: 93448
> >
> > [1] Sources: http://stackalytics.openstack.org
> >
> > Thank you to the Puppet OpenStack community for making it happen.
> > Also big kudos to other teams, especially OpenStack Infra, Tempest and
> > Packaging folks, who never hesitate to help us.
> > --
> > Emilien Macchi
> >
> >
> >
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community


Re: [openstack-dev] [puppet] [release] Puppet OpenStack 7.0.0 Liberty (_independent)

2015-11-26 Thread David Moreau Simard
Awesome!

Congratulations to everyone involved in the release of the most
popular deployment method [1] :)
The new acceptance and integration tests make this rock solid, too!

[1]: 
http://superuser.openstack.org/articles/openstack-users-share-how-their-deployments-stack-up


David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]


On Thu, Nov 26, 2015 at 9:12 AM, Emilien Macchi  wrote:
> Puppet OpenStack community is very proud to announce the release of 22
> modules:
>
> puppet-aodh 7.0.0
> puppet-ceilometer 7.0.0
> puppet-cinder 7.0.0
> puppet-designate 7.0.0
> puppet-glance 7.0.0
> puppet-gnocchi 7.0.0
> puppet-heat 7.0.0
> puppet-horizon 7.0.0
> puppet-ironic 7.0.0
> puppet-keystone 7.0.0
> puppet-manila 7.0.0
> puppet-murano 7.0.0
> puppet-neutron 7.0.0
> puppet-nova 7.0.0
> puppet-openstacklib 7.0.0
> puppet-openstack_extras 7.0.0
> puppet-sahara 7.0.0
> puppet-swift 7.0.0
> puppet-tempest 7.0.0
> puppet-trove 7.0.0
> puppet-tuskar 7.0.0
> puppet-vswitch 3.0.0
>
> For more details about the release, you can visit:
> https://wiki.openstack.org/wiki/Puppet/releases
> https://forge.puppetlabs.com/openstack
>
> Here are some interesting numbers [1]:
>
> Contributors during Kilo cycle: 91
> Contributors during Liberty cycle: 108
>
> Commits during Kilo cycle: 730
> Commits during Liberty cycle: 1201
>
> LOC during Kilo cycle: 67104
> LOC during Liberty cycle: 93448
>
> [1] Sources: http://stackalytics.openstack.org
>
> Thank you to the Puppet OpenStack community for making it happen.
> Also big kudos to other teams, especially OpenStack Infra, Tempest and
> Packaging folks, who never hesitate to help us.
> --
> Emilien Macchi
>
>


Re: [openstack-dev] [magnum] Using docker container to run COE daemons

2015-11-26 Thread Jay Lau
One of the benefits of running daemons in docker containers is that the
cluster can be upgraded more easily. Take mesos as an example: if I can make
mesos run in a container, then when updating the mesos slave with some hot
fixes, I can upgrade the mesos slave to a new version in a gray (rolling)
upgrade, i.e. A/B testing, etc.

On Fri, Nov 27, 2015 at 12:01 AM, Hongbin Lu  wrote:

> Jay,
>
>
>
> Agree and disagree. Containerizing some COE daemons will facilitate
> version upgrades and maintenance. However, I don't think it is correct to
> blindly containerize everything unless there is an investigation performed
> to understand the benefits and costs of doing that. Quoting Egor, the
> common practice in k8s is to containerize everything except the kubelet,
> because it seems it is just too hard to containerize everything. In the case
> of mesos, I am not sure if it is a good idea to move everything to containers,
> given the fact that it is relatively easy to manage and upgrade debian
> packages on Ubuntu. However, in the new CoreOS mesos bay [1], mesos daemons
> will run in containers.
>
>
>
> In summary, I think the correct strategy is to selectively containerize
> some COE daemons, but we don’t have to containerize **all** COE daemons.
>
>
>
> [1] https://blueprints.launchpad.net/magnum/+spec/mesos-bay-with-coreos
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Jay Lau [mailto:jay.lau@gmail.com]
> *Sent:* November-26-15 2:06 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum] Using docker container to run COE
> daemons
>
>
>
Thanks Kai Qiang, I filed a bp for the mesos bay here
> https://blueprints.launchpad.net/magnum/+spec/mesos-in-container
>
>
>
> On Thu, Nov 26, 2015 at 8:11 AM, Kai Qiang Wu  wrote:
>
> Hi Jay,
>
> For the Kubernetes COE container work, I think @Hua Wang is doing that.
>
> For the swarm COE, swarm already has the master and agent running in
> containers.
>
> For mesos, there is no container work yet. Maybe someone has already
> drafted a bp on it? Not quite sure.
>
>
>
> Thanks
>
> Best Wishes,
>
> 
> Kai Qiang Wu (吴开强 Kennan)
> IBM China System and Technology Lab, Beijing
>
> E-mail: wk...@cn.ibm.com
> Tel: 86-10-82451647
> Address: Building 28(Ring Building), ZhongGuanCun Software Park,
> No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193
>
> 
> Follow your heart. You are miracle!
>
>
> From: Jay Lau 
> To: OpenStack Development Mailing List 
> Date: 26/11/2015 07:15 am
> Subject: [openstack-dev] [magnum] Using docker container to run COE
> daemons
> --
>
>
>
>
> Hi,
>
> It is becoming more and more popular to use docker containers to run
> applications, so what about leveraging this in Magnum?
>
> What I want to do is put all COE daemons in docker
> containers, because Kubernetes, Mesos and Swarm now support running in
> docker containers and there are already some existing docker
> images/dockerfiles which we can leverage.
>
> So what about updating all COE templates to use docker containers to run COE
> daemons and maintaining some dockerfiles for the different COEs in Magnum? This
> can reduce the maintenance effort for COEs: if there is a new version and we
> want to upgrade, just updating the dockerfile is enough. Comments?
>
> --
> Thanks,
>
> Jay Lau (Guangya Liu)
>
>
>
>
>
>
>
> --
>
> Thanks,
>
> Jay Lau (Guangya Liu)
>


-- 
Thanks,

Jay Lau (Guangya Liu)

Re: [openstack-dev] [oslo] fake driver doesn't work with multi topics

2015-11-26 Thread Masahito MUROI

Thanks Flavio. I'll report it in launchpad.

Best regards,
Masahito

On 2015/11/27 3:00, Flavio Percoco wrote:

On 26/11/15 10:40 +0900, Masahito MUROI wrote:

Hi oslo.messaging folks,

We are trying to use oslo.messaging's fake driver [1] for our testing.
However, the driver doesn't seem to work with multiple topics. Is this
behavior expected, or a bug?


mmh, I'd say it's not. It's very likely this fake driver was not
updated to support that. Any chance you can file a bug for it?

Thanks,
Flavio



Best regards,
Masahito


--
室井 雅仁(Masahito MUROI)
Software Innovation Center, NTT
Tel: +81-422-59-4539,FAX: +81-422-59-2699











--
室井 雅仁(Masahito MUROI)
Software Innovation Center, NTT
Tel: +81-422-59-4539,FAX: +81-422-59-2699
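
For anyone wanting to reproduce this, a minimal two-topic exercise of the fake
driver could look like the following (an untested sketch, with the
oslo.messaging API written from memory):

    # Untested reproducer sketch for the fake driver with two topics.
    import threading

    from oslo_config import cfg
    import oslo_messaging as messaging

    conf = cfg.ConfigOpts()
    transport = messaging.get_transport(conf, url='fake://')

    class Endpoint(object):
        def echo(self, ctxt, msg):
            return msg

    # One RPC server per topic; the fake driver keeps everything in-process.
    for topic in ('topic-a', 'topic-b'):
        target = messaging.Target(topic=topic, server='server-1')
        server = messaging.get_rpc_server(transport, target, [Endpoint()])
        # the default blocking executor runs in the calling thread
        threading.Thread(target=server.start).start()

    for topic in ('topic-a', 'topic-b'):
        client = messaging.RPCClient(transport, messaging.Target(topic=topic))
        # If multiple topics are broken, one of these calls should misbehave.
        print(client.call({}, 'echo', msg=topic))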





Re: [openstack-dev] [QA][Tempest] Use tempest-config for tempest-cli-improvements

2015-11-26 Thread Ken'ichi Ohmichi
2015-11-27 15:40 GMT+09:00 Daniel Mellado :
> I still think that even if there are some issues with the
> feature, such as skipping tests in the gate, the feature itself is still
> good (we just won't use it for the gates).
> Instead it'd be used as a wrapper for a user who would be interested in
> trying it against a real cloud (or clouds).
>
> Ken, do you really think a tempest user should know all tempest options?
> As you pointed out there are quite a few of them, and even if users should at
> least know their environment, this script would set a minimum acceptable
> default. Do you think the PTL and previous-PTL concerns that we spoke of would
> still apply to that scenario?

If Tempest users run only part of Tempest's tests, they only need to know the
options which are used by those tests.
For example, current Tempest contains ironic API tests and the
corresponding options.
If users don't want to run these tests because the cloud doesn't support
the ironic API, they don't need to know/set up these options.
I feel users need to know the options which are used by the tests
they want to run, because they will need to investigate the cause if they face
a problem during Tempest tests.

Tempest options now come with default values, but you need a
script to change them from the defaults.
Don't these default values work for your cloud at all?
If so, these defaults should be changed for the better.

Thanks
Ken Ohmichi

---
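
To make the defaults point concrete: for most clouds the required changes boil
down to a handful of tempest.conf entries like the following (illustrative
values only, option names as I remember them):

    [service_available]
    ironic = False             # skip ironic API tests on clouds without ironic

    [compute]
    image_ref = <image uuid>   # cloud-specific; no meaningful default exists
    flavor_ref = 42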

> Andrey, Yaroslav. Would you like to revisit the blueprint to adapt it to
> tempest-cli improvements? What do you think about this, Masayuki?
>
> Thanks for all your feedback! ;)
>
On 27/11/15 at 00:15, Andrey Kurilin wrote:
>
> Sorry for the wrong numbers. The bug fix for the issue with counters is merged.
> Correct numbers (latest result from rally's gate [1]):
>  - total number of executed tests: 1689
>  - success: 1155
>  - skipped: 534 (neutron, heat, sahara, ceilometer are disabled; [2] should
> enable them)
>  - failed: 0
>
> [1] -
> http://logs.openstack.org/27/246627/11/gate/gate-rally-dsvm-verify-full/800bad0/rally-verify/7_verify_results_--html.html.gz
> [2] - https://review.openstack.org/#/c/250540/
>
> On Thu, Nov 26, 2015 at 3:23 PM, Yaroslav Lobankov 
> wrote:
>>
>> Hello everyone,
>>
>> Yes, I am working on this now. We have some success already, but there is
>> a lot of work to do. Of course, some things don't work ideally. For example,
>> in [2] from the previous email we have not 24 skipped tests but actually many
>> more. So we have a bug somewhere :)
>>
>> Regards,
>> Yaroslav Lobankov.
>>
>> On Thu, Nov 26, 2015 at 3:59 PM, Andrey Kurilin 
>> wrote:
>>>
>>> Hi!
>>> Boris P. and I tried to push a spec [1] for an automated tempest config
>>> generator, but we did not succeed in merging it. IMO, the QA team doesn't
>>> want to have such a tool :(
>>>
>>> >However, there is a big concern:
>>> >If the script contains a bug and creates a configuration which makes
>>> >most tests skipped, we cannot do enough testing on the gate.
>>> >Tempest contains 1432 tests and it is difficult to detect which tests are
>>> >skipped unexpectedly.
>>>
>>> Yaroslav Lobankov is working on improvements for the tempest config generator
>>> in Rally. The last time we launched a full tempest run [2], we got 1154
>>> successful tests and only 24 skipped. Also, there is a patch which adds an
>>> x-fail mechanism (based on subunit-filter): you can pass a file with test
>>> names + reasons and rally will modify the results.
>>>
>>> [1] - https://review.openstack.org/#/c/94473/
>>>
>>> [2] -
>>> http://logs.openstack.org/49/242849/8/check/gate-rally-dsvm-verify/e91992e/rally-verify/7_verify_results_--html.html.gz
>>>
>>> On Thu, Nov 26, 2015 at 1:52 PM, Ken'ichi Ohmichi 
>>> wrote:

 Hi Daniel,

 Thanks for bringing this up.

 2015-11-25 1:40 GMT+09:00 Daniel Mellado :
 > Hi All,
 >
 > As you might already know, within Red Hat's tempest fork, we do have
 > one
 > tempest configuration script which was built in the past by David
 > Kranz [1]
 > and that's been actively used in our CI system. Regarding this topic,
 > I'm
 > aware that quite some effort has been done in the past [2] and I would
 > like
 > to complete the implementation of this blueprint/spec.
 >
 > My plan would be to have this script under the /tempest/cmd or
 > /tempest/tools folder from tempest so it can be used to configure not
 > the
 > tempest gate but any cloud we'd like to run tempest against.
 >
 > Adding the configuration script was discussed briefly at the Mitaka
 > summit
 > in the QA Priorities meeting [3]. I propose we use the existing
 > etherpad to
 > continue the discussion around and tracking of implementing "tempest
 > config-create" using the downstream config script as a starting point.
 > [4]
 >
 > If you have any questions, 

Re: [openstack-dev] [QA][Tempest] Use tempest-config for tempest-cli-improvements

2015-11-26 Thread Daniel Mellado
I still think that even if there are some issues with the
feature, such as skipping tests in the gate, the feature itself is
still good (we just won't use it for the gates).
Instead it'd be used as a wrapper for a user who would be interested in
trying it against a real cloud (or clouds).

Ken, do you really think a tempest user should know all tempest options?
As you pointed out there are quite a few of them, and even if users should
at least know their environment, this script would set a minimum
acceptable default. Do you think the PTL and previous-PTL concerns that we
spoke of would still apply to that scenario?

Andrey, Yaroslav. Would you like to revisit the blueprint to adapt it to
tempest-cli improvements? What do you think about this, Masayuki?

Thanks for all your feedback! ;)

On 27/11/15 at 00:15, Andrey Kurilin wrote:
> Sorry for the wrong numbers. The bug fix for the issue with counters is merged.
> Correct numbers (latest result from rally's gate [1]):
>  - total number of executed tests: 1689
>  - success: 1155
>  - skipped: 534 (neutron, heat, sahara, ceilometer are disabled; [2]
> should enable them)
>  - failed: 0
>
> [1] -
> http://logs.openstack.org/27/246627/11/gate/gate-rally-dsvm-verify-full/800bad0/rally-verify/7_verify_results_--html.html.gz
> [2] - https://review.openstack.org/#/c/250540/
>
> On Thu, Nov 26, 2015 at 3:23 PM, Yaroslav Lobankov
> > wrote:
>
> Hello everyone,
>
> Yes, I am working on this now. We have some success already, but
> there is a lot of work to do. Of course, some things don't work
> ideally. For example, in [2] from the previous email we have not
> 24 skipped tests but actually many more. So we have a bug somewhere :)
>
> Regards,
> Yaroslav Lobankov.  
>
> On Thu, Nov 26, 2015 at 3:59 PM, Andrey Kurilin
> > wrote:
>
> Hi!
> Boris P. and I tried to push a spec [1] for an automated tempest
> config generator, but we did not succeed in merging it. IMO,
> the QA team doesn't want to have such a tool :(
>
> >However, there is a big concern:
> >If the script contains a bug and creates a configuration
> which makes
> >most tests skipped, we cannot do enough testing on the gate.
> >Tempest contains 1432 tests and it is difficult to detect which
> tests are
> >skipped unexpectedly.
>
> Yaroslav Lobankov is working on improvements for the tempest config
> generator in Rally. The last time we launched a full tempest
> run [2], we got 1154 successful tests and only 24 skipped. Also,
> there is a patch which adds an x-fail mechanism (based on
> subunit-filter): you can pass a file with test names +
> reasons and rally will modify the results.
>
> [1] - https://review.openstack.org/#/c/94473/
>
> [2] -
> 
> http://logs.openstack.org/49/242849/8/check/gate-rally-dsvm-verify/e91992e/rally-verify/7_verify_results_--html.html.gz
>
> On Thu, Nov 26, 2015 at 1:52 PM, Ken'ichi Ohmichi
> > wrote:
>
> Hi Daniel,
>
> Thanks for bringing this up.
>
> 2015-11-25 1:40 GMT+09:00 Daniel Mellado
>  >:
> > Hi All,
> >
> > As you might already know, within Red Hat's tempest
> fork, we do have one
> > tempest configuration script which was built in the past
> by David Kranz [1]
> > and that's been actively used in our CI system.
> Regarding this topic, I'm
> > aware that quite some effort has been done in the past
> [2] and I would like
> > to complete the implementation of this blueprint/spec.
> >
> > My plan would be to have this script under the
> /tempest/cmd or
> > /tempest/tools folder from tempest so it can be used to
> configure not the
> > tempest gate but any cloud we'd like to run tempest against.
> >
> > Adding the configuration script was discussed briefly at
> the Mitaka summit
> > in the QA Priorities meeting [3]. I propose we use the
> existing etherpad to
> > continue the discussion around and tracking of
> implementing "tempest
> > config-create" using the downstream config script as a
> starting point. [4]
> >
> > If you have any questions, comments or opinion, please
> let me know.
>
> This topic has come up several times, and I also feel
> this kind of
> tool would be very useful for Tempest users, because Tempest
> contains 296
>  

Re: [openstack-dev] [magnum]storage for docker-bootstrap

2015-11-26 Thread 王华
Hi Hongbin,

The docker in the master node stores data in /dev/mapper/atomicos-docker--data
and metadata in /dev/mapper/atomicos-docker--meta.
/dev/mapper/atomicos-docker--data
and /dev/mapper/atomicos-docker--meta are logical volumes. The docker in the
minion node stores data in the cinder volume, but
/dev/mapper/atomicos-docker--data
and /dev/mapper/atomicos-docker--meta are not used. If we want to leverage a
Cinder volume for docker in the master, should we drop
/dev/mapper/atomicos-docker--data
and /dev/mapper/atomicos-docker--meta? I think it is not necessary to
allocate a Cinder volume. It is enough to allocate two logical volumes for
docker, because only etcd, flannel and k8s run in that docker daemon, which
does not need a large amount of storage.

Best regards,
Wanghua

On Thu, Nov 26, 2015 at 12:40 AM, Hongbin Lu  wrote:

> Here is a bit more context.
>
>
>
> Currently, in the k8s and swarm bays, some required binaries (i.e. etcd and
> flannel) are built into the image and run on the host. We are exploring the
> possibility of containerizing some of these system components. The rationales
> are (i) it is infeasible to build custom packages into an atomic image and
> (ii) it is infeasible to upgrade an individual component. For example, if
> there is a bug in the current version of flannel and we know the bug was fixed
> in the next version, we need to upgrade flannel by building a new image,
> which is a tedious process.
>
>
>
> To containerize flannel, we need a second docker daemon, called
> docker-bootstrap [1]. In this setup, pods are running on the main docker
> daemon, and flannel and etcd are running on the second docker daemon. The
> reason is that flannel needs to manage the network of the main docker
> daemon, so it needs to run on a separate daemon.
>
>
>
> Daneyon, I think it requires separate storage because it needs to run a
> separate docker daemon (unless there is a way to make two docker daemons
> share the same storage).
>
>
>
> Wanghua, is it possible to leverage a Cinder volume for that? Leveraging
> external storage is always preferred [2].
>
>
>
> [1]
> http://kubernetes.io/v1.1/docs/getting-started-guides/docker-multinode.html#bootstrap-docker
>
> [2] http://www.projectatomic.io/docs/docker-storage-recommendation/
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Daneyon Hansen (danehans) [mailto:daneh...@cisco.com]
> *Sent:* November-25-15 11:10 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum]storage for docker-bootstrap
>
>
>
>
>
>
>
> *From: *王华 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Wednesday, November 25, 2015 at 5:00 AM
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Subject: *[openstack-dev] [magnum]storage for docker-bootstrap
>
>
>
> Hi all,
>
>
>
> I am working on containerizing etcd and flannel. But I met a problem. As
> described in [1], we need a docker-bootstrap. Docker and docker-bootstrap
> cannot use the same storage, so we need some disk space for it.
>
>
>
> I reviewed [1] and I do not see where the bootstrap docker instance
> requires separate storage.
>
>
>
> The docker in the master node stores data in /dev/mapper/atomicos-docker--data
> and metadata in /dev/mapper/atomicos-docker--meta. The disk space left is
> too small for docker-bootstrap. Even if the root_gb of the instance flavor
> is 20G, only 8G can be used in our image. I want to make it bigger. One way
> is to add the disk space left on vda as vda3 into the atomicos vg after the
> instance starts, and then allocate two logical volumes for docker-bootstrap.
> Another way is to allocate two logical volumes for docker-bootstrap when we
> create the image. The second way has an advantage: it doesn't have to make
> a filesystem when the instance is created, which is time consuming.
>
>
>
> What is your opinion?
>
>
>
> Best Regards
>
> Wanghua
>
>
>
> [1]
> http://kubernetes.io/v1.1/docs/getting-started-guides/docker-multinode/master.html
>
>


Re: [openstack-dev] [murano] Is it possible to use Murano RESTful API to create and deploy an application

2015-11-26 Thread Anastasia Kuznetsova
Hi Tony,

We have already implemented commands which allow a user to configure and
deploy environments via the CLI.
Please take a look at this article:
http://docs.openstack.org/developer/murano/draft/enduser-guide/deploying_using_cli.html

On Thu, Nov 26, 2015 at 9:18 AM, WANG, Ming Hao (Tony T) <
tony.a.w...@alcatel-lucent.com> wrote:

> Dear Murano developers and testers,
>
>
>
> I want to use the Murano RESTful API to create and deploy an application.
>
> Based on my current understanding, I want to use the muranoclient CLI as
> follows:
>
> 1.   “environment-create” to create the Murano environment;
>
> 2.   “environment-session-create” to create a session for the
> environment;
>
> 3.   “environment-apps-create” to create an application in the session.
>
> This command hasn’t been implemented yet, thus I implemented it by studying
> “do_environment_apps_edit” and sending a POST request to the “services” object.
>
>
>
> Could you please help check if my approach is right?
>
>
>
> If it is right, I am facing the following issue:
>
> When an environment includes several applications, I need to generate a
> uuid for each application, and use the uuid to let one application
> reference another application.
>
> It is a little strange to have the user provide this kind of information, and
> I wonder if I’m using Murano in the wrong way and Murano isn’t designed for
> this.
>
>
>
> Could you please help check? Is Murano designed to expose a
> RESTful API to do all the work (including application creation/deployment)
> that a user can do from the UI?
>
>
>
> Please advise,
>
> Thanks,
>
> Tony
>
>
>
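
For reference, the sequence described above maps onto the REST API roughly as
follows (a sketch based on my reading of the murano v1 API; paths, payloads and
the example application type are illustrative and may differ by release):

    # Rough sketch of the environment/session/services flow.
    import uuid

    import requests

    MURANO = 'http://murano-api:8082/v1'
    HEADERS = {'X-Auth-Token': '<token>'}

    # 1. environment-create
    env = requests.post(MURANO + '/environments',
                        json={'name': 'my-env'}, headers=HEADERS).json()

    # 2. environment-session-create
    session = requests.post(MURANO + '/environments/%s/configure' % env['id'],
                            headers=HEADERS).json()

    # 3. add an application to the session; the client-generated id in the
    # "?" section is the uuid other applications use to reference this one
    app = {'?': {'id': str(uuid.uuid4()),
                 'type': 'io.murano.apps.apache.ApacheHttpServer'},
           'name': 'MyApache'}
    requests.post(MURANO + '/environments/%s/services' % env['id'],
                  json=app,
                  headers=dict(HEADERS,
                               **{'X-Configuration-Session': session['id']}))

    # 4. deploy the session
    requests.post(MURANO + '/environments/%s/sessions/%s/deploy'
                  % (env['id'], session['id']), headers=HEADERS)

The client-side uuid in the "?" section is exactly the identifier Tony mentions
having to generate for cross-application references.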
>


-- 
Best regards,
Anastasia Kuznetsova


Re: [openstack-dev] [nova]New Quota Subteam on Nova

2015-11-26 Thread Markus Zoeller
Raildo Mascena  wrote on 11/20/2015 05:13:18 PM:

> From: Raildo Mascena 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 11/20/2015 05:15 PM
> Subject: [openstack-dev] [nova]New Quota Subteam on Nova
> 
> Hi guys
> 
> [...]
>
> So I was thinking of creating a subteam on Nova to speed up the code 
> review of the nested quota implementation and to discuss this re-design 
> of quotas. Does anyone have interest in being part of this subteam, or 
suggestions?
> 
> Cheers,
> 
> Raildo

Do you see a chance that the subteam would also look at the existing 
bugs [1] in the quotas area? Most of them are pretty old (>= 1 year)
and there might be a chance that, while you dig through the code, 
you come to the conclusion that some of them are not valid anymore or
are already solved. That would be really helpful from a bug management
perspective.

[1] Launchpad nova bugs; tag "quotas"; status is not in progress: 
http://bit.ly/1Pbr8YL

Regards, Markus Zoeller (markus_z)





Re: [openstack-dev] [tripleo] Location of TripleO REST API

2015-11-26 Thread Dmitry Tantsur

On 11/25/2015 10:43 PM, Ben Nemec wrote:

On 11/23/2015 06:50 AM, Dmitry Tantsur wrote:

On 11/17/2015 04:31 PM, Tzu-Mainn Chen wrote:






 On 10 November 2015 at 15:08, Tzu-Mainn Chen > wrote:

 Hi all,

 At the last IRC meeting it was agreed that the new TripleO REST API
 should forgo the Tuskar name, and simply be called... the TripleO
 API.  There's one more point of discussion: where should the API
 live?  There are two possibilities:

 a) Put it in tripleo-common, where the business logic lives.  If we
 do this, it would make sense to rename tripleo-common to simply
 tripleo.


 +1 - I think this makes most sense if we are not going to support
 the tripleo repo as a library.


Okay, this seems to be the consensus, which is great.

The leftover question is how to package the renamed repo.  'tripleo' is
already intuitively in use by tripleo-incubator.
In IRC, bnemec and trown suggested splitting the renamed repo into two
packages - 'python-tripleo' and 'tripleo-api',
which seems sensible to me.


-1, that would be inconsistent with what other projects are doing. I
guess tripleo-incubator will die soon, and I think only tripleo devs
have any intuition about it. For me tripleo == instack-undercloud.


This was only referring to rpm packaging, and it is how we currently
package most of the other projects.  The repo itself would stay as one
thing, but would be split into python-tripleo and openstack-tripleo-api
rpms.

I don't massively care about package names, but given that there is no
(for example) openstack-nova package and openstack-tripleo is already in
use by a completely different project, I think it's reasonable to move
ahead with the split packages named this way.


Got it, sorry for the confusion.







What do others think?


Mainn


 b) Put it in its own repo, tripleo-api


 The first option made a lot of sense to people on IRC, as the
 proposed
 API is a very thin layer that's bound closely to the code in
 tripleo-
 common.  The major objection is that renaming is not trivial;
 however
 it was mentioned that renaming might not be *too* bad... as long as
 it's done sooner rather than later.

 What do people think?


 Thanks,
 Tzu-Mainn Chen

 


Re: [openstack-dev] Encouraging first-time contributors through bug tags/reviews

2015-11-26 Thread Thierry Carrez
Doug Hellmann wrote:
> Excerpts from Shamail's message of 2015-11-26 02:07:55 +0500:
>>
>>> On Nov 26, 2015, at 1:42 AM, Doug Hellmann  wrote:
>>>
>>> OK, reserving bugs for new contributors does reduce the number of
>>> people contending for them, but it doesn't eliminate the need to
>>> figure out if someone else is already working on a bug before you
>>> start. Encouraging folks to assign bugs to themselves when they start
>>> work is probably the best way to solve that.
>> +1, I think most do a good job at this.
>>
>> Where do you think is the appropriate place to formally ask for a new tag 
>> and/or reservations?  
> 
> This list is a good place to ask for a tag like that. It's also a good
> topic for the cross-project meetings.

Launchpad "tags" are per-project, so ideally you would find a pilot
project (or a few pilot projects) ready to play with an
"I-added-instructions-for-first-timers-to-follow" type tag. If those are
successful, we could then encourage every other project to adopt it too...

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with Unexpected vif_type=binding_failed

2015-11-26 Thread Praveen MANKARA RADHAKRISHNAN
Hi,

One thing I observed is that there are no packets coming to the dpdk interface
(data network).
I have verified it with tcpdump using a mirror interface,
and if I assign IP addresses to the data network bridges and ping between
them, that is also not working.
Could this be a possible cause for the nova exception? (NovaException:
Unexpected vif_type=binding_failed)

Thanks
Praveen

On Tue, Nov 24, 2015 at 3:28 PM, Praveen MANKARA RADHAKRISHNAN <
praveen.mank...@6wind.com> wrote:

> Hi Sean,
>
> Thanks for the reply.
>
> Please find the logs attached.
> ovs-dpdk is correctly running in compute.
>
> Thanks
> Praveen
>
> On Tue, Nov 24, 2015 at 3:04 PM, Mooney, Sean K 
> wrote:
>
>> Hi, would you be able to attach the
>>
>> n-cpu log from the compute node and the
>>
>> n-sch and q-svc logs from the controller, so we can see if there is a stack
>> trace relating to the
>>
>> vm boot.
>>
>>
>>
>> Also can you confirm ovs-dpdk is running correctly on the compute node by
>> running
>>
>> sudo service ovs-dpdk status
>>
>>
>>
>> the neutron and networking-ovs-dpdk commits are from their respective
>> stable/kilo branches so they should be compatible
>>
>> provided no breaking changes have been merged to either branch.
>>
>>
>>
>> regards
>>
>> sean.
>>
>>
>>
>> *From:* Praveen MANKARA RADHAKRISHNAN [mailto:praveen.mank...@6wind.com]
>> *Sent:* Tuesday, November 24, 2015 1:39 PM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails
>> with Unexpected vif_type=binding_failed
>>
>>
>>
>> Hi Przemek,
>>
>>
>>
>> Thanks For the response,
>>
>>
>>
>> Here are the commit ids for Neutron and networking-ovs-dpdk
>>
>>
>>
>> [stack@localhost neutron]$ git log --format="%H" -n 1
>>
>> 026bfc6421da796075f71a9ad4378674f619193d
>>
>> [stack@localhost neutron]$ cd ..
>>
>> [stack@localhost ~]$ cd networking-ovs-dpdk/
>>
>> [stack@localhost networking-ovs-dpdk]$  git log --format="%H" -n 1
>>
>> 90dd03a76a7e30cf76ecc657f23be8371b1181d2
>>
>>
>>
>> The Neutron agents are up and running in compute node.
>>
>>
>>
>> Thanks
>>
>> Praveen
>>
>>
>>
>>
>>
>> On Tue, Nov 24, 2015 at 12:57 PM, Czesnowicz, Przemyslaw <
>> przemyslaw.czesnow...@intel.com> wrote:
>>
>> Hi Praveen,
>>
>>
>>
>> There have been some changes recently to networking-ovs-dpdk; it no longer
>> hosts a mech driver, as the openvswitch mech driver in Neutron supports
>> vhost-user ports.
>>
>> I guess something went wrong and the version of Neutron is not matching
>> networking-ovs-dpdk. Can you post the commit ids of Neutron and
>> networking-ovs-dpdk?
>>
>>
>>
>> The other possibility is that the Neutron agent is not running/died on
>> the compute node.
>>
>> Check with:
>>
>> neutron agent-list
>>
>>
>>
>> Przemek
>>
>>
>>
>> *From:* Praveen MANKARA RADHAKRISHNAN [mailto:praveen.mank...@6wind.com]
>> *Sent:* Tuesday, November 24, 2015 12:18 PM
>> *To:* openstack-dev@lists.openstack.org
>> *Subject:* [openstack-dev] [networking-ovs-dpdk] VM creation fails with
>> Unexpected vif_type=binding_failed
>>
>>
>>
>> Hi,
>>
>>
>>
>> I am trying to set up an OpenStack (kilo) installation using ovs-dpdk
>> through a devstack installation.
>>
>>
>>
>> I have followed the "
>> https://github.com/openstack/networking-ovs-dpdk/blob/master/doc/source/getstarted.rst
>> " documentation.
>>
>>
>>
>> I used the same versions as in the documentation (fedora21, with the right
>> kernel).
>>
>>
>>
>> My openstack installation is successful in both controller and compute.
>>
>> I have used example local.conf given in the documentation.
>>
>> But if i try to spawn the VM. I am having the following error.
>>
>>
>>
>> "NovaException: Unexpected vif_type=binding_failed"
>>
>>
>>
>> It would be really helpful if you could point out how to debug and fix this
>> error.
>>
>>
>>
>> Thanks
>>
>> Praveen
>>
>>
>>
>>


Re: [openstack-dev] [murano] Is it possible to use Murano RESTful API to create and deploy an application

2015-11-26 Thread WANG, Ming Hao (Tony T)
Hi Anastasia,

Thanks for the information.
“environment-apps-edit” can be used as “environment-apps-create”; I got it. ☺

Thanks,
Tony
From: Anastasia Kuznetsova [mailto:akuznets...@mirantis.com]
Sent: Thursday, November 26, 2015 4:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [murano] Is it possible to use Murano RESTful API 
to create and deploy an application

Hi Tony,

We have already implemented commands which allow a user to configure and deploy 
environments via the CLI.
Please take a look at this article: 
http://docs.openstack.org/developer/murano/draft/enduser-guide/deploying_using_cli.html

On Thu, Nov 26, 2015 at 9:18 AM, WANG, Ming Hao (Tony T) 
> wrote:
Dear Murano developers and testers,

I want to use the Murano RESTful API to create and deploy an application.
Based on my current understanding, I want to use the muranoclient CLI as follows:

1.   “environment-create” to create the Murano environment;

2.   “environment-session-create” to create a session for the environment;

3.   “environment-apps-create” to create an application in the session.

This command hasn’t been implemented yet, thus I implemented it by studying 
“do_environment_apps_edit” and sending a POST request to the “services” object.

Could you please help check if my approach is right?

If it is right, I am facing the following issue:
When an environment includes several applications, I need to generate a uuid 
for each application, and use the uuid to let one application reference 
another application.
It is a little strange to have the user provide this kind of information, and I 
wonder if I’m using Murano in the wrong way and Murano isn’t designed for this.

Could you please help check? Is Murano designed to be able to expose a RESTful 
API to do all the work (including application creation/deployment) that a user 
can do from the UI?

Please advise,
Thanks,
Tony





--
Best regards,
Anastasia Kuznetsova


Re: [openstack-dev] New [puppet] module for Magnum project

2015-11-26 Thread Emilien Macchi


On 11/25/2015 05:08 PM, Ricardo Rocha wrote:
> Hi.
> 
> We've started implementing a similar module here; I just pushed it to:
> https://github.com/cernops/puppet-magnum
> 
> It already does a working magnum-api/conductor, and we'll add
> configuration for additional conf options this week - to allow
> alternate heat templates for the bays.
> 
> I've done some work on puppet-ceph before and I'm happy to
> start pushing patches to openstack/puppet-magnum. Is there already
> something going on? I couldn't find anything in:
> https://review.openstack.org/#/q/status:open+project:openstack/puppet-magnum,n,z

Like we said on IRC, yes we have a community module, and you're highly
welcome to contribute to it.

Thanks!

> Cheers,
>   Ricardo
> 
> On Thu, Oct 29, 2015 at 4:58 PM, Potter, Nathaniel
>  wrote:
>> Hi Adrian,
>>
>>
>>
>> Basically it would fall under the same umbrella as all of the other
>> puppet-openstack projects, which use puppet automation to configure as well
>> as manage various OpenStack projects. An example of a mature one is here for
>> the Cinder project: https://github.com/openstack/puppet-cinder. Right now
>> there are about 35-40 such puppet modules for different projects in
>> OpenStack, so one example of people who might make use of this project are
>> people who have already used the existing puppet modules to set up their
>> cloud and wish to incorporate Magnum into their cloud using the same tool.
>>
>>
>>
>> Thanks,
>>
>> Nate
>>
>>
>>
>> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
>> Sent: Thursday, October 29, 2015 10:10 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] New [puppet] module for Magnum project
>>
>>
>>
>> Nate,
>>
>>
>>
>> On Oct 29, 2015, at 11:26 PM, Potter, Nathaniel 
>> wrote:
>>
>>
>>
>> Hi everyone,
>>
>>
>>
>> I’m interested in starting up a puppet module that will handle the Magnum
>> containers project. Would this be something the community might want?
>> Thanks!
>>
>>
>>
>> Best,
>>
>> Nate Potter
>>
>>
>>
>> Can you elaborate a bit more about your concept? Who would use this? What
>> function would it provide? My guess is that you are suggesting a puppet
>> config for adding the Magnum service to an OpenStack cloud. Is that what you
>> meant? If so, could you share a reference to an existing one that we could
>> see as an example of what you had in mind?
>>
>>
>>
>> Thanks,
>>
>>
>>
>> Adrian
>>
>>
>>
>>
>>
> 
> 

-- 
Emilien Macchi





Re: [openstack-dev] [Fuel] CentOS-7 Transition Plan

2015-11-26 Thread Roman Vyalov
Hi all,
Some of those change requests may be merged shortly (today). They are
compatible with CentOS 6.
List of change requests ready to merge (compatible with CentOS 6):
Fuel-library

   - https://review.openstack.org/#/c/247066/
   - https://review.openstack.org/#/c/248781/
   - https://review.openstack.org/#/c/247727/

Fuel-nailgun-agent

   - https://review.openstack.org/#/c/244810/

Fuel-web

   - https://review.openstack.org/#/c/248206/
   - https://review.openstack.org/#/c/246531/
   - https://review.openstack.org/#/c/246535/

Python-fuelclient

   - https://review.openstack.org/#/c/231935/

Fuel-ostf

   - https://review.openstack.org/#/c/248096/

Fuel-menu

   - https://review.openstack.org/#/c/246888/


List of all change requests related to CentOS 7 support:
https://etherpad.openstack.org/p/fuel_on_centos7

On Tue, Nov 24, 2015 at 4:37 PM, Oleg Gelbukh  wrote:

> That's good to know, thank you, Vladimir, Dmitry.
>
> --
> Best regards,
> Oleg Gelbukh
>
> On Tue, Nov 24, 2015 at 3:10 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> In fact, we (Dmitry and I) are on the same page about how to merge these two
>> features (CentOS 7 and Docker removal). We agreed that Dmitry's feature is
>> much more complicated and of higher priority. So, CentOS 7 should be merged
>> first and then I'll rebase my patches (mostly supervisor -> systemd).
>>
>>
>>
>>
>>
>> Vladimir Kozhukalov
>>
>> On Tue, Nov 24, 2015 at 1:57 AM, Igor Kalnitsky 
>> wrote:
>>
>>> Hey Dmitry,
>>>
>>> Thank you for your effort. I believe it's a huge step forward that
>>> opens up a number of possibilities.
>>>
>>> > Every container runs systemd as PID 1 process instead of
>>> > supervisord or application / daemon.
>>>
>>> Taking into account that we're going to drop Docker containers, I
>>> think it was an unnecessary complication of your work.
>>>
>>> Please sync-up with Vladimir Kozhukalov, he's working on getting rid
>>> of containers.
>>>
>>> > Every service inside a container is a systemd unit. Container build
>>> > procedure was modified, scripts setup.sh and start.sh were introduced
>>> > to be running during building and configuring phases respectively.
>>>
>>> Ditto. :)
>>>
>>> Thanks,
>>> Igor
>>>
>>> P.S: I wrote the mail and forgot to press the "send" button. It looks like
>>> Oleg has already pointed out what I wanted to.
>>>
>>> On Mon, Nov 23, 2015 at 2:37 PM, Oleg Gelbukh 
>>> wrote:
>>> > Please, take into account the plan to drop the containerization of Fuel
>>> > services:
>>> >
>>> > https://review.openstack.org/#/c/248814/
>>> >
>>> > --
>>> > Best regards,
>>> > Oleg Gelbukh
>>> >
>>> > On Tue, Nov 24, 2015 at 12:25 AM, Dmitry Teselkin <
>>> dtesel...@mirantis.com>
>>> > wrote:
>>> >>
>>> >> Hello,
>>> >>
>>> >> We've been working for some time on bringing CentOS-7 to the master node,
>>> >> and now is the time to share and discuss the transition plan.
>>> >>
>>> >> First of all, what have been changed:
>>> >> * Master node itself runs on CentOS-7. Since all the containers share
>>> >>   the same repo as the master node, they all have been migrated to CentOS-7
>>> >>   too. Every container runs systemd as PID 1 process instead of
>>> >>   supervisord or application / daemon.
>>> >> * Every service inside a container is a systemd unit. The container build
>>> >>   procedure was modified; scripts setup.sh and start.sh were introduced
>>> >>   to run during the building and configuring phases respectively.
>>> >>   The main reason for this was the fact that many puppet manifests use
>>> >>   service management commands that require a running systemd daemon. This
>>> >>   also allowed us to simplify Dockerfiles by moving all actions to the
>>> >>   setup.sh file.
>>> >> * We managed to find some bugs in various parts that were fixed too.
>>> >> * The bootstrap image is also CentOS-7 based. It was updated accordingly:
>>> >>   some services were converted to systemd units and fixes to
>>> >>   support the new network naming schema were made.
>>> >> * ISO build procedure was updated to reflect changes in CentOS-7
>>> >>   distribution and to support changes in docker build procedure.
>>> >> * Many applications were updated (puppet, docker, openstack
>>> >>   components).
>>> >> * Docker containers were moved to an LVM volume to improve performance
>>> >>   and to get rid of annoying warning messages during master node deployment.
>>> >>   bootstrap_admin_node.sh script was updated to fix some deployment
>>> >>   issues (e.g. dracut behavior when there are multiple network
>>> >>   interfaces available) and simplified by removing outdated
>>> >>   functionality. It was also converted to a "run once" logon script
>>> >>   instead of being run as a service, primarily because of the way it's
>>> >>   used.
>>> >>
>>> >> As you can see, a lot of changes were made. Some of them might
>>> >> be merged into current master if surrounded by conditionals to be
>>> >> 

[openstack-dev] [neutron][fwaas]

2015-11-26 Thread Oguz Yarimtepe

Hi,

I am trying to fork the vArmour FWaaS driver and couldn't find how and when the
https://github.com/openstack/neutron-fwaas/blob/master/neutron_fwaas/services/firewall/agents/varmour/varmour_router.py#L276
function is called. I put pdb traces in, but starting neutron-l3-agent never
falls into a debug state. Is there any vArmour developer here who can help me?
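
For what it's worth, a log-based trace might work better than pdb here,
since the agent runs as a daemon without a controlling terminal. A minimal
sketch, with the class/method names left hypothetical:

import functools
import logging

LOG = logging.getLogger(__name__)

def log_call(func):
    # log every invocation so the agent log shows whether the
    # method is ever reached
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        LOG.warning('%s called, args=%s kwargs=%s',
                    func.__name__, args, kwargs)
        return func(*args, **kwargs)
    return wrapper

# applied at import time to the suspect method, e.g.:
# SomeVarmourClass.method = log_call(SomeVarmourClass.method)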


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade

2015-11-26 Thread Korzeniewski, Artur
I have submitted a patch for the DVR Grenade multinode job:
https://review.openstack.org/#/c/250215

Without a DVR upgrade we won't be able to tell if the L3 upgrade is working.
What is left to be done is DVR support in Grenade.
DVR has a multinode job, but I do not see DVR in Grenade - creation of a DVR
router should be done in the Grenade scripts.

Regards,
Artur  

-Original Message-
From: Sean M. Collins [mailto:s...@coreitpro.com] 
Sent: Wednesday, November 25, 2015 9:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial 
upgrade

On Wed, Nov 25, 2015 at 02:31:26PM EST, Armando M. wrote:
> So we fail before even attempting an upgrade?

Yeah I think so, I think we were failing at step 3 of Artur's list, creating 
the resources.

> It looks like we're testing 7.0.1.dev114, shouldn't we test from 7.0.0?

I think Grenade checks out stable/liberty - so that's probably a version string 
generated from the tip of stable/liberty 

> I am really confused, I should probably stop asking questions and do 
> some homework :)

No please keep asking - I think we're all learning things here. I'm certainly 
no expert.

--
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] how do we get the migration status details info from nova

2015-11-26 Thread 少合冯
Now we agree that getting more migration status details is useful.

But how do we get them?
Via the REST API or notifications?

If via the API, is the "time_elapsed" field needed?

There is already a "created_at" field, but IMO it is based on the time of
the conductor server. The time_elapsed can be obtained from libvirt, i.e.
from the hypervisor. Usually there are NTP servers in the cloud, so we can
derive the time_elapsed from "created_at", but I am not sure about the case
where the clocks of the hypervisor and the conductor server host are out of
sync.
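
For illustration, a minimal sketch of deriving the elapsed time client-side
from the migration record alone, assuming the usual created_at/updated_at
UTC timestamps:

from datetime import datetime

def elapsed_seconds(migration):
    # created_at is stamped when the migration record is created;
    # updated_at changes on every subsequent progress update
    start = migration['created_at']
    end = migration.get('updated_at') or datetime.utcnow()
    return (end - start).total_seconds()

With NTP keeping the conductor and hypervisor clocks in sync this should be
a good approximation; with skewed clocks it drifts, which is the concern
above.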
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Neutron] Testing of Ironic/Neutron integration on devstack

2015-11-26 Thread Vasyl Saienko
Hello Kevin,

I've added some pictures that illustrate how it works with a HW switch and
with VMs on devstack.


On Wed, Nov 25, 2015 at 10:53 PM, Kevin Benton  wrote:

> This is cool. I didn't know you were working on an OVS driver for testing
> in CI as well. :)
>
> Does this work by getting the port wired into OVS so the agent recognizes
> it like a regular port so it can be put into VXLAN/VLAN or whatever the
> node is configured with? From what I can tell it looks like it's on a
> completely different bridge so they wouldn't have connectivity to the rest
> of the network.
>
Driver works with VLAN at the moment; I don't see any reason why it
wouldn't work with VXLAN.
Ironic VMs are created on devstack by [0]. They are not registered in
Nova/Neutron, so neutron-ovs-agent doesn't know anything about them.
In single-node devstack you can't launch regular nova VM instances, since
compute_driver=ironic doesn't allow this. They would have connectivity to
the rest of the network via 'br-int'.

> I have some POC code[1] for 'baremetal' support directly in the OVS agent
> so ports get treated just like VM ports. However, it requires upstream
> changes so if yours accomplishes the same thing without any upstream
> changes, that will be the best way to go.
>
>
In a real setup, Neutron will plug the baremetal server into a specific
network via an ML2 driver. We should keep the testing model as close as
possible to the real Ironic use-case scenario. That is why we should have
an ML2 driver that allows us to interact with OVS.


> Perhaps we can merge your approach (config via ssh) with mine (getting the
> 'baremetal' ports wired up for real connectivity) so we don't need upstream
> changes.
>
> 1. https://review.openstack.org/#/c/249265/
>
> Cheers,
> Kevin Benton
>
> On Wed, Nov 25, 2015 at 7:27 AM, Vasyl Saienko 
> wrote:
>
>> Hello Community,
>>
>> As you know Ironic/Neutron integration is planned in Mitaka. And at the
>> moment we don't have any CI that will test it. Unfortunately we can't test
>> Ironic/Neutron integration on HW as we don't have it.
>> So probably the best way is to develop ML2 driver that will work with OVS.
>>
>> At the moment we have a PoC [1] of ML2 driver that works with Cisco and
>> OVS on linux.
>> Also we have some patches to devstack that allows to try Ironic/Neutron
>> integration on VM and real HW. And quick guide how to test it locally [0]
>>
>> https://review.openstack.org/#/c/247513/
>> https://review.openstack.org/#/c/248048/
>> https://review.openstack.org/#/c/249717/
>> https://review.openstack.org/#/c/248074/
>>
>> I'm interested in Neutron/Ironic integration. It would be great if we
>> have it in Mitaka.
>> I'm asking Community to check [0] and [1] and share your thoughts.
>>
>>  Also I would like to request a repo on openstack.org for [1]
>>
>>
>> [0]
>> https://github.com/jumpojoy/ironic-neutron/blob/master/devstack/examples/ironic-neutron-vm.md
>> [1] https://github.com/jumpojoy/generic_switch
>>
>> --
>> Sincerely
>> Vasyl Saienko
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
[0]
https://github.com/openstack-dev/devstack/blob/master/tools/ironic/scripts/create-node
[1] https://review.openstack.org/#/c/249717

--
Sincerely
Vasyl Saienko
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Post-release bump after 2014.2.4?

2015-11-26 Thread Alan Pevec
> I've confirmed that the juno side of kilo grenade is not blowing up [1], but
> I'm not sure why it's not blowing up. Trying to figure that out.

It would blow up if something were merged after the 2014.2.4 tag, which
won't happen before the version in setup.cfg is either bumped or removed,
so we're safe :)

Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how do we get the migration status details info from nova

2015-11-26 Thread Paul Carlton

On 26/11/15 10:48, 少合冯 wrote:
Now we agree that getting more migration status details is useful.

But how do we get them?
Via the REST API or notifications?

If via the API, is the "time_elapsed" field needed?
There is already a "created_at" field, but IMO it is based on the time of
the conductor server. The time_elapsed can be obtained from libvirt, i.e.
from the hypervisor. Usually there are NTP servers in the cloud, so we can
derive the time_elapsed from "created_at", but I am not sure about the case
where the clocks of the hypervisor and the conductor server host are out
of sync.

Why not both?  Just update the _monitor_live_migration method in the libvirt
driver (and any similar functions in other drivers if they exist) so it
updates the migration object and also sends notification events.  These
don't have to be at 5 second intervals, although I think that is about right
for the migration object update.  Notification messages could be once every
30 seconds or so.

Operators can monitor the progress via the API, and orchestration utilities
can consume the notification messages (and/or use the API).
This will enable them to identify migration operations that are not making
 good progress and take actions to address the issue.

The created_at and updated_at fields of the migration object should be
sufficient to allow the caller to work out how long the migration has been
running for (or how long it took in the case of a completed migration).

Notification payload can include the created_at field or not.  I'd say not.
There will be a notification message generated when a migration starts
so subsequent progress messages don't need it, if the consumer wants
the complete picture they can call the API.
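
Roughly, the loop could look like this (a sketch only; the notifier wiring
and helper names are illustrative, not the actual driver code):

import time

DB_UPDATE_INTERVAL = 5   # seconds between migration object updates
NOTIFY_EVERY = 6         # i.e. one notification per ~30 seconds

def _monitor_live_migration(self, context, migration, guest):
    tick = 0
    while not self._migration_finished(guest):   # hypothetical helper
        info = guest.get_job_info()              # libvirt job statistics
        migration.memory_remaining = info.memory_remaining
        migration.disk_remaining = info.disk_remaining
        migration.save()                         # refreshes updated_at
        if tick % NOTIFY_EVERY == 0:
            self.notifier.info(
                context, 'compute.live_migration.progress',
                {'migration_id': migration.id,
                 'memory_remaining': info.memory_remaining})
        tick += 1
        time.sleep(DB_UPDATE_INTERVAL)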


--
Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Mobile:+44 (0)7768 994283
Email:mailto:paul.carlt...@hpe.com
Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England.
The contents of this message and any attachments to it are confidential and may be 
legally privileged. If you have received this message in error, you should delete it from 
your system immediately and advise the sender. To any recipient of this message within 
HP, unless otherwise stated you should consider this message and attachments as "HP 
CONFIDENTIAL".

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with Unexpected vif_type=binding_failed

2015-11-26 Thread Mooney, Sean K
When you say dpdk interface, do you mean that the dpdk physical interface is
not receiving any packets, or a vhost-user interface?

Can you provide the output of ovs-vsctl show
and sudo /opt/stack/DPDK-v2.1.0/tools/dpdk_nic_bind.py --status?

You should see an output similar to this.
Network devices using DPDK-compatible driver

:02:00.0 'Ethernet Controller XL710 for 40GbE QSFP+' drv=igb_uio unused=i40e

Network devices using kernel driver
===
:02:00.1 'Ethernet Controller XL710 for 40GbE QSFP+' if=ens785f1 drv=i40e 
unused=igb_uio
:02:00.2 'Ethernet Controller XL710 for 40GbE QSFP+' if=ens785f2 drv=i40e 
unused=igb_uio
:02:00.3 'Ethernet Controller XL710 for 40GbE QSFP+' if=ens785f3 drv=i40e 
unused=igb_uio
:06:00.0 'I350 Gigabit Network Connection' if=enp6s0f0 drv=igb 
unused=igb_uio
:06:00.1 'I350 Gigabit Network Connection' if=enp6s0f1 drv=igb 
unused=igb_uio

Other network devices
=



From: Praveen MANKARA RADHAKRISHNAN [mailto:praveen.mank...@6wind.com]
Sent: Thursday, November 26, 2015 9:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with 
Unexpected vif_type=binding_failed

Hi,

One thing I observed is that there are no packets coming to the dpdk
interface (data network).
I have verified it with tcpdump using a mirror interface, and if I assign
IP addresses to the data network bridges and ping between them, that also
does not work.
Could this be a possible cause for the nova exception? (NovaException:
Unexpected vif_type=binding_failed)

Thanks
Praveen

On Tue, Nov 24, 2015 at 3:28 PM, Praveen MANKARA RADHAKRISHNAN 
> wrote:
Hi Sean,

Thanks for the reply.

Please find the logs attached.
ovs-dpdk is correctly running in compute.

Thanks
Praveen

On Tue, Nov 24, 2015 at 3:04 PM, Mooney, Sean K 
> wrote:
Hi, would you be able to attach the
n-cpu log from the compute node and the
n-sch and q-svc logs for the controller so we can see if there is a stack trace
relating to the
vm boot?

Also can you confirm ovs-dpdk is running correctly on the compute node by 
running
sudo service ovs-dpdk status

the neutron and networking-ovs-dpdk commits are from their respective 
stable/kilo branches so they should be compatible
provided no breaking changes have been merged to either branch.

regards
sean.

From: Praveen MANKARA RADHAKRISHNAN 
[mailto:praveen.mank...@6wind.com]
Sent: Tuesday, November 24, 2015 1:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with 
Unexpected vif_type=binding_failed

Hi Przemek,

Thanks For the response,

Here are the commit ids for Neutron and networking-ovs-dpdk

[stack@localhost neutron]$ git log --format="%H" -n 1
026bfc6421da796075f71a9ad4378674f619193d
[stack@localhost neutron]$ cd ..
[stack@localhost ~]$ cd networking-ovs-dpdk/
[stack@localhost networking-ovs-dpdk]$  git log --format="%H" -n 1
90dd03a76a7e30cf76ecc657f23be8371b1181d2

The Neutron agents are up and running in compute node.

Thanks
Praveen


On Tue, Nov 24, 2015 at 12:57 PM, Czesnowicz, Przemyslaw 
> wrote:
Hi Praveen,

There’s been some changes recently to networking-ovs-dpdk, it no longer host’s 
a mech driver as the openviswitch mech driver in Neutron supports vhost-user 
ports.
I guess something went wrong and the version of Neutron is not matching 
networking-ovs-dpdk. Can you post commit ids of Neutron and networking-ovs-dpdk.

The other possibility is that the Neutron agent is not running/died on the 
compute node.
Check with:
neutron agent-list

Przemek

From: Praveen MANKARA RADHAKRISHNAN 
[mailto:praveen.mank...@6wind.com]
Sent: Tuesday, November 24, 2015 12:18 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [networking-ovs-dpdk] VM creation fails with 
Unexpected vif_type=binding_failed

Hi,

I am trying to set up an OpenStack (kilo) installation using ovs-dpdk via a
devstack installation.

I have followed the " 
https://github.com/openstack/networking-ovs-dpdk/blob/master/doc/source/getstarted.rst
 " documentation.

I used the same versions as in the documentation (fedora21, with the right kernel).

My OpenStack installation is successful on both the controller and the compute node.
I have used the example local.conf given in the documentation.
But if I try to spawn a VM, I get the following error.

"NovaException: Unexpected vif_type=binding_failed"

It would be really helpful if you could point out how to debug and fix this error.

Thanks
Praveen



Re: [openstack-dev] Encouraging first-time contributors through bug tags/reviews

2015-11-26 Thread Rossella Sblendido



On 11/26/2015 10:40 AM, Thierry Carrez wrote:

Doug Hellmann wrote:

Excerpts from Shamail's message of 2015-11-26 02:07:55 +0500:



On Nov 26, 2015, at 1:42 AM, Doug Hellmann  wrote:

OK, reserving bugs for new contributors does reduce the number of
people contending for them, but it doesn't eliminate the need to
figure out if someone else is already working on a bug before you
start. Encouraging folks to assign bugs to themselves when they start
work is probably the best way to solve that.

+1, I think most do a good job at this.

Where do you think is the appropriate place to formally ask for a new tag 
and/or reservations?


This list is a good place to ask for a tag like that. It's also a good
topic for the cross-project meetings.


Launchpad "tags" are per-project, so ideally you would find a pilot
project (or a few pilot projects) ready to play with a
"I-added-instructions-for-first-timers-to-follow" type tag. If those are
successful, we could then encourage every other project to adopt it too...



I really like this idea. I was contacted many times by people who wanted
to start contributing and had trouble finding a bug to fix.
low-hanging-fruit bugs are not always so easy to understand for newbies,
even if they might be straightforward for experienced people. Another
issue is that sometimes trivial bugs are fixed by experienced people,
which is not optimal. Somebody spends time filing a bug, editing the
description and tagging it low-hanging-fruit, hoping that it will be
taken by a newbie (there's no way to reserve a bug for newbies right
now). Then it's taken by an experienced contributor :/ The reporter
could have fixed it easily in the first place, without spending time
adding a detailed description to it.


I'd like to help. Neutron could be one of the pilot projects. I will
mention that in the next Neutron team meeting :)


Rossella

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Encouraging first-time contributors through bug tags/reviews

2015-11-26 Thread Shamail


> On Nov 26, 2015, at 4:19 PM, Rossella Sblendido  wrote:
> 
> I'd like to help. Neutron could be one of the pilot projects. I will mention 
> that in the next Neutron team meeting :)

Thank you!  If one, or more, projects pilot this concept... We could share the 
results at a future cross-project meeting and decide where to go from there.

Regards,
Shamail 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Neutron] Testing of Ironic/Neutron integration on devstack

2015-11-26 Thread Vasyl Saienko
Hi Sukhdev,

I didn't have a chance to be present at the previous meeting due to personal
reasons, but I will be at the next one.
It is important to keep CI testing as close as possible to the real Ironic
use-case scenario.

At the moment we don't have any test case in Tempest that covers
ironic/neutron integration.
I think now is a good time to discuss it. My vision of an ironic/neutron
test case is as follows:

1. Set up Devstack with 3 ironic nodes
2. In project *demo*:

   - create a network 10.0.100.0/24
   - boot vm on it with fixed ip 10.0.100.10
   - boot vm2 on it with fixed ip 10.0.100.11

3. In project *alt_demo*:

   - create a network 10.0.100.0/24 with the same prefix as in project *demo*
   - boot alt_vm on it with fixed ip 10.0.100.20

4. Wait for all three instances to become active

5. Check that we *can't* ping *demo*: vm from *alt_demo*: alt_vm

6. Check that we *can* reach vm2 from vm in project *demo*

7. Make sure that there are no packets with the MAC of *alt_demo*: alt_vm on
*demo*: vm (can use tcpdump)
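
A very rough tempest-style sketch of steps 2-6 above (the class and helper
methods are hypothetical stand-ins for tempest's real plumbing):

class IronicTenantIsolationTest(base.BaseIronicScenarioTest):

    def test_tenant_networks_are_isolated(self):
        # two projects define the same CIDR on purpose (steps 2-3)
        demo_net = self.create_network('10.0.100.0/24', project='demo')
        alt_net = self.create_network('10.0.100.0/24', project='alt_demo')
        vm = self.boot_node(demo_net, fixed_ip='10.0.100.10')
        vm2 = self.boot_node(demo_net, fixed_ip='10.0.100.11')
        alt_vm = self.boot_node(alt_net, fixed_ip='10.0.100.20')
        self.wait_active(vm, vm2, alt_vm)                # step 4
        self.assertFalse(self.can_ping(alt_vm, vm))      # step 5
        self.assertTrue(self.can_ping(vm, vm2))          # step 6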
--
Sincerely
Vasyl Saienko

On Wed, Nov 25, 2015 at 11:06 PM, Sukhdev Kapur 
wrote:

> Hi Vasyl,
>
> This is great. Kevin and I were working on a similar thing. I just
> finished testing his patch and gave a +1.
> This is missing (and needed) functionality for getting the
> Ironic/Neutron integration completed.
>
> As Kevin suggests, it will be best if we can combine these approaches and
> come up with the best solution.
>
> If you are available, please join us in our next weekly meeting at 8AM
> (pacific time) at #openstack-meeting-4.
> I am sure team will be excited to know about this solution and this will
> give an opportunity to make sure we cover all angles of this testing.
>
> Thanks
> -Sukhdev
>
>
> On Wed, Nov 25, 2015 at 7:27 AM, Vasyl Saienko 
> wrote:
>
>> Hello Community,
>>
>> As you know Ironic/Neutron integration is planned in Mitaka. And at the
>> moment we don't have any CI that will test it. Unfortunately we can't test
>> Ironic/Neutron integration on HW as we don't have it.
>> So probably the best way is to develop ML2 driver that will work with OVS.
>>
>> At the moment we have a PoC [1] of ML2 driver that works with Cisco and
>> OVS on linux.
>> Also we have some patches to devstack that allows to try Ironic/Neutron
>> integration on VM and real HW. And quick guide how to test it locally [0]
>>
>> https://review.openstack.org/#/c/247513/
>> https://review.openstack.org/#/c/248048/
>> https://review.openstack.org/#/c/249717/
>> https://review.openstack.org/#/c/248074/
>>
>> I'm interested in Neutron/Ironic integration. It would be great if we
>> have it in Mitaka.
>> I'm asking Community to check [0] and [1] and share your thoughts.
>>
>>  Also I would like to request a repo on openstack.org for [1]
>>
>>
>> [0]
>> https://github.com/jumpojoy/ironic-neutron/blob/master/devstack/examples/ironic-neutron-vm.md
>> [1] https://github.com/jumpojoy/generic_switch
>>
>> --
>> Sincerely
>> Vasyl Saienko
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][ironic] do we really need websockify with numpy speedups?

2015-11-26 Thread Pavlo Shchelokovskyy
Hi all,

I was long puzzled why devstack is installing numpy. Being a fantastic
package itself, it has the drawback of taking about 4 minutes to compile
its C extensions when installing on our gates (e.g. [0]). I finally took
time to research and here is what I've found:

it is used only by the websockify package (installed, AFAIK, by ironic and nova
only), and there it is used to speed up the HyBi protocol. Although the
code itself has a path to work without numpy installed [1], the setup.py of
websockify declares numpy as a hard dependency [2].

My question is do we really need those speedups? Do we test any feature
requiring fast HyBi support on gates? Not installing numpy would shave 4
minutes off any gate job that is installing Nova or Ironic, which seems
like a good deal to me.

If we decide to save this time, I have prepared a pull request for
websockify that moves the numpy requirement to "extras" [3]. As a
consequence numpy will not be installed by default as a dependency, but it
remains possible to install it with e.g. "pip install websockify[fastHyBi]",
and package builders can still specify numpy as a hard dependency for the
websockify package in their package specs.
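
Conceptually the setup.py change looks like this (see [3] for the actual
diff):

from setuptools import setup

setup(
    name='websockify',
    # numpy is no longer a hard dependency...
    install_requires=[],
    # ...but stays available as an optional extra:
    #   pip install websockify[fastHyBi]
    extras_require={'fastHyBi': ['numpy']},
)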

What do you think?

[0]
http://logs.openstack.org/82/236982/6/check/gate-tempest-dsvm-ironic-agent_ssh/1141960/logs/devstacklog.txt.gz#_2015-11-11_19_51_40_784
[1]
https://github.com/kanaka/websockify/blob/master/websockify/websocket.py#L143
[2] https://github.com/kanaka/websockify/blob/master/setup.py#L37
[3]
https://github.com/pshchelo/websockify/commit/0b1655e73ea13b4fba9c6fb4122adb1435d5ce1a

Best regards,
-- 
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Tempest] Use tempest-config for tempest-cli-improvements

2015-11-26 Thread Ken'ichi Ohmichi
Hi Daniel,

Thanks for bringing this up.

2015-11-25 1:40 GMT+09:00 Daniel Mellado :
> Hi All,
>
> As you might already know, within Red Hat's tempest fork, we do have one
> tempest configuration script which was built in the past by David Kranz [1]
> and that's been actively used in our CI system. Regarding this topic, I'm
> aware that quite some effort has been done in the past [2] and I would like
> to complete the implementation of this blueprint/spec.
>
> My plan would be to have this script under the /tempest/cmd or
> /tempest/tools folder from tempest so it can be used to configure not the
> tempest gate but any cloud we'd like to run tempest against.
>
> Adding the configuration script was discussed briefly at the Mitaka summit
> in the QA Priorities meting [3]. I propose we use the existing etherpad to
> continue the discussion around and tracking of implementing "tempest
> config-create" using the downstream config script as a starting point. [4]
>
> If you have any questions, comments or opinion, please let me know.

This topic has come up several times, and I also feel this kind of
tool would be very useful for Tempest users, because Tempest now
contains 296 options ($ grep cfg * -R | grep Opt | wc -l) and it is
difficult to set the configuration up.
However, there is a big concern:
if the script contains a bug and creates a configuration which makes
most tests skip, we cannot do enough testing on the gate.
Tempest contains 1432 tests, and it is difficult to detect which tests
are skipped unexpectedly.
We actually faced unexpectedly skipped tests on the gate before due to
a bug; that problem has since been fixed.
But I can imagine this kind of problem happening again after such a
script is implemented.

So now I am feeling that Tempest users need to know what cloud they
want to test with Tempest, and what tests run with Tempest. Testers
basically need to know what test targets/items they are testing.
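
One possible safety net for the gate would be comparing the skip count in
the subunit results against a known baseline; an untested sketch:

import subunit
import testtools

def count_statuses(path):
    counts = {}

    def on_test(test):
        counts[test['status']] = counts.get(test['status'], 0) + 1

    with open(path, 'rb') as f:
        stream = subunit.ByteStreamToStreamResult(f)
        result = testtools.StreamToDict(on_test)
        result.startTestRun()
        stream.run(result)
        result.stopTestRun()
    return counts

A job could then fail if counts.get('skip', 0) exceeds the expected
baseline, flagging configurations that silently disable tests.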

Thanks
Ken Ohmichi

---

> ---
> [1]
> https://github.com/redhat-openstack/tempest/blob/master/tools/config_tempest.py
> [2] https://blueprints.launchpad.net/tempest/+spec/tempest-config-generator
> [3] https://etherpad.openstack.org/p/mitaka-qa-priorities
> [4] https://etherpad.openstack.org/p/tempest-cli-improvements
>
> https://github.com/openstack/qa-specs/blob/master/specs/tempest/tempest-cli-improvements.rst
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Meeting Thursday November 26th at 9:00 UTC

2015-11-26 Thread Ken'ichi Ohmichi
Hi,

At today's QA meeting the log bot seemed to be down,
so I'd like to send a mail summarizing the log and the situation instead:

#action gmann will ask mtreinish and johnthetubaguy for reviewing the
microversion test patches
 - We already discussed this on IRC after the meeting, and it seems to be
making nice progress.
 - Related to: 
https://review.openstack.org/#/q/status:open+branch:master+topic:bp/api-microversions-testing-support,n,z

#action dmellado gmann ylobankov oomichi will solve the separation of
tenants_clients together
 - Doing
 - Related to: https://review.openstack.org/#/c/248170/
#action jordanP will review https://review.openstack.org/#/c/247554 again
 - Done, nice
#action oomichi (at least) will read dmellado's mail related to the config
tool and reply
 - Done on 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/080398.html
#action oomichi review https://review.openstack.org/#/c/225575
 - Will do

Thanks
Ken Ohmichi

---

2015-11-26 15:14 GMT+09:00 Ken'ichi Ohmichi :
> Hi everyone,
>
> Please reminder that the weekly OpenStack QA team IRC meeting will be
> Thursday, November 26th at 9:00 UTC in the #openstack-meeting channel.
>
> The agenda for the meeting can be found here:
>
> https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Proposed_Agenda_for_November_26th_2015_.280900_UTC.29
>
> Anyone is welcome to add an item to the agenda.
>
> To help people figure out what time 9:00 UTC is in other timezones the next
> meeting will be at:
>
> 03:00 EDT
> 18:00 JST
> 18:30 ACST
> 11:00 CEST
> 04:00 CDT
> 02:00 PDT
>
> Thanks

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How were we going to remove soft delete again?

2015-11-26 Thread John Garbutt
On 24 November 2015 at 16:36, Matt Riedemann  wrote:
> I know in Vancouver we talked about no more soft delete and in Mitaka lxsli
> decoupled the nova models from the SoftDeleteMixin [1].
>
> From what I remember, the idea is to not add the deleted column to new
> tables, to not expose soft deleted resources in the REST API in new ways,
> and to eventually drop the deleted column from the models.
>
> I bring up the REST API because I was tinkering with the idea of allowing
> non-admins to list/show their (soft) deleted instances [2]. Doing that,
> however, would expose more of the REST API to deleted resources which makes
> it harder to remove from the data model.
>
> My question is, how were we thinking we were going to remove the deleted
> column from the data model in a backward compatible way? A new microversion
> in the REST API isn't going to magically work if we drop the column in the
> data model, since anything before that microversion should still work - like
> listing deleted instances for the admin.
>
> Am I forgetting something? There were a lot of ideas going around the room
> during the session in Vancouver and I'd like to sort out the eventual
> long-term plan so we can document it in the devref about policies so that
> when ideas like [2] come up we can point to the policy and say 'no we aren't
> going to do that and here's why'.
>
> [1]
> http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/no-more-soft-delete.html
> [2]
> https://blueprints.launchpad.net/nova/+spec/non-admin-list-deleted-instances

From my memory, step 1 is to ensure we don't keep adding soft delete
by default/accident, which is where the explicit mix-in should help.
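
With the decoupled models that means soft delete becomes an explicit
opt-in per table; a sketch using the oslo.db mix-in:

from oslo_db.sqlalchemy import models
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

BASE = declarative_base()

class LegacyThing(BASE, models.ModelBase, models.SoftDeleteMixin):
    # existing table: keeps the deleted/deleted_at columns
    __tablename__ = 'legacy_things'
    id = Column(Integer, primary_key=True)

class NewThing(BASE, models.ModelBase):
    # new table: no soft delete columns at all
    __tablename__ = 'new_things'
    id = Column(Integer, primary_key=True)
    name = Column(String(255))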

Step 2 is removing the existing soft deletes. Now, we can add a new
microversion to remove the concept of requesting deleted things, but
as you point out, that doesn't help the older microversions.

What we could do is raise 403 errors when users request deleted things
in older versions of the API. I don't like that breaking API change, but
I also don't like the idea of keeping soft delete in the database for
ever. It's a case of picking the better of two bad outcomes. I am not
sure we have reached consensus on the preferred approach yet.

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How were we going to remove soft delete again?

2015-11-26 Thread John Garbutt
On 26 November 2015 at 12:10, John Garbutt  wrote:
> On 24 November 2015 at 16:36, Matt Riedemann  
> wrote:
>> I know in Vancouver we talked about no more soft delete and in Mitaka lxsli
>> decoupled the nova models from the SoftDeleteMixin [1].
>>
>> From what I remember, the idea is to not add the deleted column to new
>> tables, to not expose soft deleted resources in the REST API in new ways,
>> and to eventually drop the deleted column from the models.
>>
>> I bring up the REST API because I was tinkering with the idea of allowing
>> non-admins to list/show their (soft) deleted instances [2]. Doing that,
>> however, would expose more of the REST API to deleted resources which makes
>> it harder to remove from the data model.
>>
>> My question is, how were we thinking we were going to remove the deleted
>> column from the data model in a backward compatible way? A new microversion
>> in the REST API isn't going to magically work if we drop the column in the
>> data model, since anything before that microversion should still work - like
>> listing deleted instances for the admin.
>>
>> Am I forgetting something? There were a lot of ideas going around the room
>> during the session in Vancouver and I'd like to sort out the eventual
>> long-term plan so we can document it in the devref about policies so that
>> when ideas like [2] come up we can point to the policy and say 'no we aren't
>> going to do that and here's why'.
>>
>> [1]
>> http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/no-more-soft-delete.html
>> [2]
>> https://blueprints.launchpad.net/nova/+spec/non-admin-list-deleted-instances
>
> From my memory, step 1 is to ensure we don't keep adding soft delete
> by default/accident, which is where the explicit mix-in should help.
>
> Step 2 is removing the existing soft deletes. Now, we can add a new
> microversion to remove the concept of requesting deleted things, but
> as you point out, that doesn't help the older microversions.
>
> What we could do is raise 403 errors when users request deleted things
> in older versions of the API. I don't like that breaking API change, but
> I also don't like the idea of keeping soft delete in the database for
> ever. It's a case of picking the better of two bad outcomes. I am not
> sure we have reached consensus on the preferred approach yet.

I just realised my text is ambiguous...

There is a difference between soft-deleted instances and soft delete in the DB.

If the instance could still be restored, and is not yet deleted, it
makes sense that policy could allow a non-admin to see those. But
that's a non-db-deleted instance in the SOFT_DELETED state.

I am still leaning towards killing the APIs that allow you to read
DB soft-deleted data. Although, in some ways that's because the API
changes based on the DB retention policy of the deployer, which seems
very odd.

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with Unexpected vif_type=binding_failed

2015-11-26 Thread Praveen MANKARA RADHAKRISHNAN
Hi Sean,

It's the dpdk physical interface.

The dpdk interface is correctly added to the bridge.

Please find the ovs-vsctl output below.

controller
--
[stack@localhost devstack]$ sudo ovs-vsctl show
c74e8e55-8f4e-4e49-9547-4319803a85f9
Bridge "br-ens5f1"
Port "br-ens5f1"
Interface "br-ens5f1"
type: internal
Port "phy-br-ens5f1"
Interface "phy-br-ens5f1"
type: patch
options: {peer="int-br-ens5f1"}
Port "dpdk0"
Interface "dpdk0"
type: dpdk
Bridge br-int
fail_mode: secure
Port "qr-9283d6fb-43"
tag: 4095
Interface "qr-9283d6fb-43"
type: internal
Port "tap66998ddd-73"
tag: 4095
Interface "tap66998ddd-73"
type: internal
Port br-int
Interface br-int
type: internal
Port "int-br-ens5f1"
Interface "int-br-ens5f1"
type: patch
options: {peer="phy-br-ens5f1"}
Bridge br-ex
Port "qg-736e604f-66"
Interface "qg-736e604f-66"
type: internal
Port br-ex
Interface br-ex
type: internal

compute
--
[stack@compute-mwc devstack]$ sudo ovs-vsctl show
02f8fe2c-6d8a-426f-b9f0-8d228fab28e6
    Bridge br-int
        fail_mode: secure
        Port "int-br-ens4f1"
            Interface "int-br-ens4f1"
                type: patch
                options: {peer="phy-br-ens4f1"}
        Port br-int
            Interface br-int
                type: internal
    Bridge "br-ens4f1"
        Port "br-ens4f1"
            Interface "br-ens4f1"
                type: internal
        Port "phy-br-ens4f1"
            Interface "phy-br-ens4f1"
                type: patch
                options: {peer="int-br-ens4f1"}
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk

And one thing I noticed is that the ovs-dpdk agent for the compute node is
missing from the neutron agent-list output.

[stack@localhost devstack]$ neutron agent-list
+--------------------------------------+----------------+-----------------------+-------+----------------+---------------------------+
| id                                   | agent_type     | host                  | alive | admin_state_up | binary                    |
+--------------------------------------+----------------+-----------------------+-------+----------------+---------------------------+
| 588b79af-6bec-4a5f-ac8b-1668cff948db | DHCP agent     | localhost.localdomain | :-)   | True           | neutron-dhcp-agent        |
| 794acfdd-5e73-42b8-873b-682021689aad | DPDK OVS Agent | localhost.localdomain | :-)   | True           | neutron-openvswitch-agent |
| 7b570743-7042-463c-af90-6c74adbec2e2 | Metadata agent | localhost.localdomain | :-)   | True           | neutron-metadata-agent    |
| ffc2f41d-33fa-4a8b-8a59-60cda81de6e4 | L3 agent       | localhost.localdomain | :-)   | True           | neutron-l3-agent          |
+--------------------------------------+----------------+-----------------------+-------+----------------+---------------------------+

[stack@localhost devstack]$ nova service-list
+----+----------------+-----------------------+----------+---------+-------+------------------------+-----------------+
| Id | Binary         | Host                  | Zone     | Status  | State | Updated_at             | Disabled Reason |
+----+----------------+-----------------------+----------+---------+-------+------------------------+-----------------+
| 1  | nova-conductor | localhost.localdomain | internal | enabled | up    | 2015-11-26T05:17:30.00 | -               |
| 3  | nova-cert      | localhost.localdomain | internal | enabled | up    | 2015-11-26T05:17:33.00 | -               |
| 4  | nova-scheduler | localhost.localdomain | internal | enabled | up    | 2015-11-26T05:17:36.00 | -               |
| 5  | nova-compute   | compute-mwc           | nova     | enabled | up    | 2015-11-26T05:17:37.00 | -               |
+----+----------------+-----------------------+----------+---------+-------+------------------------+-----------------+

Thanks
Praveen

On Thu, Nov 26, 2015 at 12:18 PM, Mooney, Sean K 
wrote:

> Am when you say dpdk interface do you mean dpdk physical interface is not
> reciving any packets or a vhost-user interface.
>
>
>
> Can you provide the output of ovs-vsctl show.
>
> And sudo /opt/stack/DPDK-v2.1.0/tools/dpdk_nic_bind.py --status
>
>
>
> You should see an output similar to this.
>
> Network devices using DPDK-compatible driver
>
> 
>
> :02:00.0 'Ethernet Controller XL710 for 40GbE QSFP+' drv=igb_uio
> unused=i40e
>
>
>
> Network devices using kernel driver
>
> ===
>
> :02:00.1 'Ethernet Controller XL710 for 40GbE QSFP+' if=ens785f1
> drv=i40e 

Re: [openstack-dev] [openstack-announce] [release][stable][keystone][ironic] keystonemiddleware release 1.5.3 (kilo)

2015-11-26 Thread Dmitry Tantsur
I suspect it could break ironic stable/kilo in the same way the 2.0.0 
release did. Still investigating; checking whether 
https://review.openstack.org/#/c/250341/ will also fix it. An example of a 
failing patch: https://review.openstack.org/#/c/248365/


On 11/23/2015 08:54 PM, d...@doughellmann.com wrote:

We are pumped to announce the release of:

keystonemiddleware 1.5.3: Middleware for OpenStack Identity

This release is part of the kilo stable release series.

With source available at:

 http://git.openstack.org/cgit/openstack/keystonemiddleware

With package available at:

 https://pypi.python.org/pypi/keystonemiddleware

For more details, please see the git log history below and:

 http://launchpad.net/keystonemiddleware/+milestone/1.5.3

Please report issues through launchpad:

 http://bugs.launchpad.net/keystonemiddleware

Notable changes


will now require python-requests<2.8.0

Changes in keystonemiddleware 1.5.2..1.5.3
--

d56d96c Updated from global requirements
9aafe8d Updated from global requirements
cc746dc Add an explicit test failure condition when auth_token is missing
5b1e18f Fix list_opts test to not check all deps
217cd3d Updated from global requirements
518e9c3 Ensure cache keys are a known/fixed length
033c151 Updated from global requirements

Diffstat (except docs and test files)
-

keystonemiddleware/auth_token/_cache.py   | 19 ++-
requirements.txt  | 19 ++-
setup.py  |  1 -
test-requirements-py3.txt | 18 +-
test-requirements.txt | 18 +-
7 files changed, 69 insertions(+), 37 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index e3288a1..23308cd 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7,9 +7,9 @@ iso8601>=0.1.9
-oslo.config>=1.9.3,<1.10.0  # Apache-2.0
-oslo.context>=0.2.0,<0.3.0 # Apache-2.0
-oslo.i18n>=1.5.0,<1.6.0  # Apache-2.0
-oslo.serialization>=1.4.0,<1.5.0   # Apache-2.0
-oslo.utils>=1.4.0,<1.5.0   # Apache-2.0
-pbr>=0.6,!=0.7,<1.0
-pycadf>=0.8.0,<0.9.0
-python-keystoneclient>=1.1.0,<1.4.0
-requests>=2.2.0,!=2.4.0
+oslo.config<1.10.0,>=1.9.3 # Apache-2.0
+oslo.context<0.3.0,>=0.2.0 # Apache-2.0
+oslo.i18n<1.6.0,>=1.5.0 # Apache-2.0
+oslo.serialization<1.5.0,>=1.4.0 # Apache-2.0
+oslo.utils!=1.4.1,<1.5.0,>=1.4.0 # Apache-2.0
+pbr!=0.7,<1.0,>=0.6
+pycadf<0.9.0,>=0.8.0
+python-keystoneclient<1.4.0,>=1.2.0
+requests!=2.4.0,<2.8.0,>=2.2.0
@@ -16,0 +17 @@ six>=1.9.0
+stevedore<1.4.0,>=1.3.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 11d9e17..5ab5eb0 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -5 +5 @@
-hacking>=0.10.0,<0.11
+hacking<0.11,>=0.10.0
@@ -9,2 +9,2 @@ discover
-fixtures>=0.3.14
-mock>=1.0
+fixtures<1.3.0,>=0.3.14
+mock<1.1.0,>=1.0
@@ -12,5 +12,5 @@ pycrypto>=2.6
-oslosphinx>=2.5.0,<2.6.0 # Apache-2.0
-oslotest>=1.5.1,<1.6.0  # Apache-2.0
-oslo.messaging>=1.8.0,<1.9.0  # Apache-2.0
-requests-mock>=0.6.0  # Apache-2.0
-sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
+oslosphinx<2.6.0,>=2.5.0 # Apache-2.0
+oslotest<1.6.0,>=1.5.1 # Apache-2.0
+oslo.messaging<1.9.0,>=1.8.0 # Apache-2.0
+requests-mock>=0.6.0 # Apache-2.0
+sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
@@ -19 +19 @@ testresources>=0.2.4
-testtools>=0.9.36,!=1.2.0
+testtools!=1.2.0,>=0.9.36



___
OpenStack-announce mailing list
openstack-annou...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-announce




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TripleO client answers file.

2015-11-26 Thread Lennart Regebro
We are proposing to have an "answers file" for the tripleoclient so
that you don't have to have a long line of

   openstack overcloud deploy --templates /home/stack/mytemplates
--environment superduper.yaml --environment loadsofstuff.yaml
--environment custom.yaml

But instead can just do

   openstack overcloud deploy --answers-file answers.yaml

And in that file have:

  templates: /home/stack/mytemplates
  environments:
    - superduper.yaml
    - loadsofstuff.yaml
    - custom.yaml

This way you won't mess up the command line or confuse multiple subtle
command-line variations from Bash history and deploy the wrong one
(yeah, we did that on a deployment).
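
On the client side this is little more than reading the YAML and mapping it
onto the usual arguments; a sketch only, not the actual patch:

import yaml

def apply_answers_file(path, parsed_args):
    with open(path) as f:
        answers = yaml.safe_load(f)
    # explicit command-line flags should still win over the answers file
    if parsed_args.templates is None:
        parsed_args.templates = answers.get('templates')
    if not parsed_args.environment_files:
        parsed_args.environment_files = answers.get('environments', [])
    return parsed_args

(The attribute names above are illustrative; see the review below for the
real implementation.)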

A change request exists: https://review.openstack.org/#/c/249222/

Feedback?

//Lennart

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ironic] do we really need websockify with numpy speedups?

2015-11-26 Thread Pavlo Shchelokovskyy
Hi again,

I've gone ahead and created a proper pull request to websockify [0];
comment there if you think we need it :)

I also realized that there is another option, which is to include
python-numpy in files/debs/ironic and files/debs/nova (strangely, it is
already present in rpms/ for the nova, noVNC and spice services).
This would install a pre-compiled version from the distro repos, which
should also speed things up.

Any comments welcome.

[0] https://github.com/kanaka/websockify/pull/212

Best regards,

On Thu, Nov 26, 2015 at 1:44 PM Pavlo Shchelokovskyy <
pshchelokovs...@mirantis.com> wrote:

> Hi all,
>
> I was long puzzled why devstack is installing numpy. Being a fantastic
> package itself, it has the drawback of taking about 4 minutes to compile
> its C extensions when installing on our gates (e.g. [0]). I finally took
> time to research and here is what I've found:
>
> it is used only by websockify package (installed by AFAIK ironic and nova
> only), and there it is used to speed up the HyBi protocol. Although the
> code itself has a path to work without numpy installed [1], the setup.py of
> websockify declares numpy as a hard dependency [2].
>
> My question is do we really need those speedups? Do we test any feature
> requiring fast HyBi support on gates? Not installing numpy would shave 4
> minutes off any gate job that is installing Nova or Ironic, which seems
> like a good deal to me.
>
> If we decide to save this time, I have prepared a pull request for
> websockify that moves numpy requirement to "extras" [3]. As a consequence
> numpy will not be installed by default as dependency, but still possible to
> install with e.g. "pip install websockify[fastHyBi]", and package builders
> can also specify numpy as hard dependency for websockify package in package
> specs.
>
> What do you think?
>
> [0]
> http://logs.openstack.org/82/236982/6/check/gate-tempest-dsvm-ironic-agent_ssh/1141960/logs/devstacklog.txt.gz#_2015-11-11_19_51_40_784
> [1]
> https://github.com/kanaka/websockify/blob/master/websockify/websocket.py#L143
> [2] https://github.com/kanaka/websockify/blob/master/setup.py#L37
> [3]
> https://github.com/pshchelo/websockify/commit/0b1655e73ea13b4fba9c6fb4122adb1435d5ce1a
>
> Best regards,
> --
> Dr. Pavlo Shchelokovskyy
> Senior Software Engineer
> Mirantis Inc
> www.mirantis.com
>
-- 
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TripleO client answers file.

2015-11-26 Thread Qasim Sarfraz
+1. That would be really helpful.

What about passing other deployment parameters via answers.yaml? For
example, compute-flavor, control-flavor, etc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Tempest] Use tempest-config for tempest-cli-improvements

2015-11-26 Thread Andrey Kurilin
Hi!
Boris P. and I tried to push a spec[1] for an automated tempest config
generator, but we did not succeed in merging it. IMO, the QA team doesn't
want to have such a tool :(

>However, there is a big concern:
>If the script contain a bug and creates the configuration which makes
>most tests skipped, we cannot do enough tests on the gate.
>Tempest contains 1432 tests and difficult to detect which tests are
>skipped as unexpected.

Yaroslav Lobankov is working on improving the tempest config generator in
Rally. The last time we launched a full tempest run[2], we got 1154
successful tests and only 24 skipped. Also, there is a patch which adds an
x-fail mechanism (based on subunit-filter): you can pass in a file with
test names + reasons and rally will modify the results accordingly.

[1] - https://review.openstack.org/#/c/94473/

[2] -
http://logs.openstack.org/49/242849/8/check/gate-rally-dsvm-verify/e91992e/rally-verify/7_verify_results_--html.html.gz
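
The x-fail pass itself is conceptually tiny; roughly this, in illustrative
form:

def apply_xfail(results, xfail_list):
    # results: {test_id: status}; xfail_list: {test_id: reason}
    for test_id, reason in xfail_list.items():
        if results.get(test_id) == 'fail':
            results[test_id] = 'xfail: %s' % reason
    return results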

On Thu, Nov 26, 2015 at 1:52 PM, Ken'ichi Ohmichi 
wrote:

> Hi Daniel,
>
> Thanks for pointing this up.
>
> 2015-11-25 1:40 GMT+09:00 Daniel Mellado :
> > Hi All,
> >
> > As you might already know, within Red Hat's tempest fork, we do have one
> > tempest configuration script which was built in the past by David Kranz
> [1]
> > and that's been actively used in our CI system. Regarding this topic, I'm
> > aware that quite some effort has been done in the past [2] and I would
> like
> > to complete the implementation of this blueprint/spec.
> >
> > My plan would be to have this script under the /tempest/cmd or
> > /tempest/tools folder from tempest so it can be used to configure not the
> > tempest gate but any cloud we'd like to run tempest against.
> >
> > Adding the configuration script was discussed briefly at the Mitaka
> summit
> > in the QA Priorities meting [3]. I propose we use the existing etherpad
> to
> > continue the discussion around and tracking of implementing "tempest
> > config-create" using the downstream config script as a starting point.
> [4]
> >
> > If you have any questions, comments or opinion, please let me know.
>
> This topic have happened several times, and I also felt this kind of
> tool was very useful for Tempest users, because Tempest contains 296
> options($ grep cfg * -R | grep Opt | wc -l) now and it is difficult to
> set the configuration up.
> However, there is a big concern:
> If the script contain a bug and creates the configuration which makes
> most tests skipped, we cannot do enough tests on the gate.
> Tempest contains 1432 tests and difficult to detect which tests are
> skipped as unexpected.
> Actually we faced unexpected skipped tests on the gate before due to
> some bug, then the problem has been fixed.
> But I can imagine this kind of problem happens after implementing this
> kind of script.
>
> So now I am feeling Tempest users need to know what cloud they want to
> test with Tempest, and need to know what tests run with Tempest.
> Testers need to know what test target/items they are testing basically.
>
> Thanks
> Ken Ohmichi
>
> ---
>
> > ---
> > [1]
> >
> https://github.com/redhat-openstack/tempest/blob/master/tools/config_tempest.py
> > [2]
> https://blueprints.launchpad.net/tempest/+spec/tempest-config-generator
> > [3] https://etherpad.openstack.org/p/mitaka-qa-priorities
> > [4] https://etherpad.openstack.org/p/tempest-cli-improvements
> >
> >
> https://github.com/openstack/qa-specs/blob/master/specs/tempest/tempest-cli-improvements.rst
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] 'NoMatchingFunctionException: No function "#operator_." matches supplied arguments' error when adding an application to an environment

2015-11-26 Thread Stan Lagun
Vahid,

I see what the problem is.

You are generating UI form that looks like this:

  Forms:
-  group0:
fields: []
  Application:
name: $.group0.name
?:
  type: "io.murano.apps.generated.CsarHelloWorld"

  Version: 2.2


The problem is in "name: $.group0.name". Previously you removed the "name"
field from the form because it is provided automatically by the 2.2 form
format. However, you did not remove the expression that was referring to it.

So just drop this expression from the form (and from the form generation code).
If you need the name at deployment time, you can always obtain it using the
name($this) YAQL expression, as here:
https://github.com/openstack/murano/blob/master/contrib/plugins/cloudify_plugin/cloudify_applications_library/Classes/CloudifyApplication.yaml#L46

Hope this will help

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis



On Thu, Nov 26, 2015 at 3:23 AM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Stan,
>
> Thanks for looking into those files.
>
> They are what were generated on my machine.
>
> I tried one more time, and this time used a HOT package (something that
> comes ootb) and noticed the ui.yaml file generated was still in the same
> format as the previous ones.
> I am attaching the full folder structure created for this HOT package
> under /tmp/muranodashboard-cache/apps/ along with the hot package itself
> that I imported.
>
>
>
> I'd appreciate it if you could take a look and let me know if you see
> something wrong. Thanks.
>
> Regards,
> --Vahid
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] When to use parameters vs parameter_defaults

2015-11-26 Thread Jiří Stránský




My personal preference is to say:

1. Any templates which are included in the default environment (e.g
overcloud-resource-registry-puppet.yaml), must expose their parameters
via overcloud-without-mergepy.yaml

2. Any templates which are included in the default environment, but via a
"noop" implementation *may* expose their parameters provided they are
common and not implementation/vendor specific.

3. Any templates exposing vendor specific interfaces (e.g at least anything
related to the OS::TripleO::*ExtraConfig* interfaces) must not expose any
parameters via the top level template.

How does this sound?


Pardon the longer e-mail please, but I think this topic is very far 
reaching and impactful on the future of TripleO, perhaps even strategic, 
and I'd like to present some food for thought.



I think as we progress towards more composable/customizable overcloud, 
using parameter_defaults will become a necessity in more and more places 
in the templates.


Nowadays we can get away with hierarchical passing of some parameters 
from the top-level template downwards because we can make very strong 
assumptions about how the overcloud is structured, and what each piece 
of the overcloud takes as its parameters. Even though we support 
customization via the resource registry, it's still mostly just 
switching between alternate implementations of the same thing, not 
strong composability.



I would imagine that going forward, TripleO would receive feature 
requests to add custom node types into the deployment, be it e.g. 
separating neutron network node functionality out of controller node 
onto its own hardware, or adding custom 3rd party node types into the 
overcloud, which need to integrate with the rest of the overcloud tightly.


When such scenario is considered, even the most code-static parameters 
like node-type-specific ExtraConfig, or a nova flavor to use for a node 
type, suddenly become dynamic on the code level (think 
parameter_defaults), simply because we can't predict upfront what node 
types we'll have.



I think a parallel with how Puppet evolved can be observed here. It used 
to be that Puppet classes included in deployments formed a sort-of 
hierarchy and got their parameters fed in a top-down cascade. This 
carried limitations on composability of machine configuration manifests 
(collisions when using the same class from multiple places, huge number 
of parameters in the higher-level manifests). Hiera was introduced to 
solve the problem, and nowadays top-level Puppet manifests contain a lot 
of include statements, and the parameter values are mostly read from 
external hiera data files, and hiera values transcend through the class 
hierarchy freely. This hinders easy discoverability of "what settings 
can i tune within this machine's configuration", but judging by the 
adoption of the approach, the benefits probably outweigh the drawbacks. 
In Puppet's case, at least :)


It seems TripleO is hitting similar composability and sanity limits with 
the top-down approach, and the number of parameters which can only be 
fed via parameter_defaults is increasing. (The disadvantage of 
parameter_defaults is that, unlike hiera, we currently have no clear 
namespacing rules, which means a higher chance of conflict. Perhaps the 
unit tests suggested in another subthread would be a good start, maybe 
we could even think about how to do proper namespacing.)
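
Such a test could be as simple as walking the templates and flagging 
parameter names that appear in more than one of them; a sketch:

import collections
import os
import yaml

def find_parameter_collisions(template_dir):
    owners = collections.defaultdict(list)
    for root, _dirs, files in os.walk(template_dir):
        for fname in files:
            if not fname.endswith('.yaml'):
                continue
            path = os.path.join(root, fname)
            with open(path) as f:
                params = (yaml.safe_load(f) or {}).get('parameters') or {}
            for name in params:
                owners[name].append(path)
    # names defined in several templates are collision candidates once
    # parameter_defaults applies them globally
    return dict((n, p) for n, p in owners.items() if len(p) > 1)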



Does what I described seem somewhat accurate? Should we maybe buy into 
the concept of "composable templates, externally fed 
hierarchy-transcending parameters" for the long term?


Thanks for reading this far :)


Jirka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [telemetry] upcoming Mitaka-1 release

2015-11-26 Thread gord chung

hi,

just a quick note, i'll be tagging the Mitaka-1 release early next week 
for Ceilometer and Aodh. if you want to get something in for M-1, now is 
the time. please let us know so we can track it.


Gnocchi will not have a release -- it will continue to be released 
independently as features are added.


cheers,

--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Tempest] Use tempest-config for tempest-cli-improvements

2015-11-26 Thread Yaroslav Lobankov
Hello everyone,

Yes, I am working on this now. We have some success already, but there is a
lot of work to do. Of course, some things don't work ideally. For example,
in [2] from the previous message we actually have many more than 24 skipped
tests. So we have a bug somewhere :)

Regards,
Yaroslav Lobankov.

On Thu, Nov 26, 2015 at 3:59 PM, Andrey Kurilin 
wrote:

> Hi!
> Hi!
> Boris P. and I tried to push a spec [1] for an automated Tempest config
> generator, but we did not succeed in merging it. IMO, the QA team doesn't
> want to have such a tool :(
>
> >However, there is a big concern:
> >If the script contains a bug and creates a configuration which makes
> >most tests skipped, we cannot do enough testing on the gate.
> >Tempest contains 1432 tests and it is difficult to detect which tests
> >are skipped unexpectedly.
>
> Yaroslav Lobankov is working on improvements to the Tempest config generator
> in Rally. The last time we launched a full Tempest run [2], we got 1154
> successful tests and only 24 skipped. Also, there is a patch which adds an
> x-fail mechanism (based on subunit-filter): you can pass in a file with test
> names + reasons and Rally will modify the results.
>
> [1] - https://review.openstack.org/#/c/94473/
>
> [2] -
> http://logs.openstack.org/49/242849/8/check/gate-rally-dsvm-verify/e91992e/rally-verify/7_verify_results_--html.html.gz
>
> On Thu, Nov 26, 2015 at 1:52 PM, Ken'ichi Ohmichi 
> wrote:
>
>> Hi Daniel,
>>
>> Thanks for pointing this up.
>>
>> 2015-11-25 1:40 GMT+09:00 Daniel Mellado :
>> > Hi All,
>> >
>> > As you might already know, within Red Hat's tempest fork, we do have one
>> > tempest configuration script which was built in the past by David Kranz
>> [1]
>> > and that's been actively used in our CI system. Regarding this topic,
>> I'm
>> > aware that quite some effort has been done in the past [2] and I would
>> like
>> > to complete the implementation of this blueprint/spec.
>> >
>> > My plan would be to have this script under the /tempest/cmd or
>> > /tempest/tools folder from tempest so it can be used to configure not
>> the
>> > tempest gate but any cloud we'd like to run tempest against.
>> >
>> > Adding the configuration script was discussed briefly at the Mitaka
>> summit
>> > in the QA Priorities meting [3]. I propose we use the existing etherpad
>> to
>> > continue the discussion around and tracking of implementing "tempest
>> > config-create" using the downstream config script as a starting point.
>> [4]
>> >
>> > If you have any questions, comments or opinion, please let me know.
>>
>> This topic has come up several times, and I also felt this kind of
>> tool was very useful for Tempest users, because Tempest contains 296
>> options ($ grep cfg * -R | grep Opt | wc -l) now and it is difficult to
>> set the configuration up.
>> However, there is a big concern:
>> If the script contains a bug and creates a configuration which makes
>> most tests skipped, we cannot do enough testing on the gate.
>> Tempest contains 1432 tests and it is difficult to detect which tests
>> are skipped unexpectedly.
>> Actually we faced unexpectedly skipped tests on the gate before due to
>> a bug, and the problem was then fixed.
>> But I can imagine this kind of problem happening again after implementing
>> this kind of script.
>>
>> So now I feel Tempest users need to know what cloud they want to
>> test with Tempest, and need to know what tests run with Tempest.
>> Basically, testers need to know what test targets/items they are testing.
>>
>> Thanks
>> Ken Ohmichi
>>
>> ---
>>
>> > ---
>> > [1]
>> >
>> https://github.com/redhat-openstack/tempest/blob/master/tools/config_tempest.py
>> > [2]
>> https://blueprints.launchpad.net/tempest/+spec/tempest-config-generator
>> > [3] https://etherpad.openstack.org/p/mitaka-qa-priorities
>> > [4] https://etherpad.openstack.org/p/tempest-cli-improvements
>> >
>> >
>> https://github.com/openstack/qa-specs/blob/master/specs/tempest/tempest-cli-improvements.rst
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Best regards,
> Andrey Kurilin.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

Re: [openstack-dev] [stable] Stable team PTL nominations are open

2015-11-26 Thread Kuvaja, Erno
> -Original Message-
> From: Thierry Carrez [mailto:thie...@openstack.org]
> Sent: Monday, November 23, 2015 5:11 PM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [stable] Stable team PTL nominations are open
> 
> Hi everyone,
> 
> We discussed setting up a standalone stable maintenance team and as part
> of this effort we'll be organizing PTL elections over the coming weeks.
> 
> We held a preliminary meeting today to converge on team scope and
> election mechanisms. The stable team mission is to:
> 
> * Define and enforce the common stable branch policy
> * Educate and accompany projects as they use stable branches
> * Keep CI working on stable branches
> * Mentoring/growing the stable maintenance team
> * Create and improve stable tooling/automation
> 
> Anyone who successfully contributed a stable branch backport over the last
> year (on any active stable branch) is considered a stable contributor and can
> vote in the Stable PTL election.
> 
> If you're interested, please reply to this thread with your self-nomination
> (and your platform) in the coming week. Deadline for self-nomination is
> 23:59 UTC on Monday, Nov 30. Elections will then be held if needed the week
> after.
> 
> Thanks!
> 
> --
> Thierry Carrez (ttx)
> 

Hi all,

As indicated in [0] I'd like to put myself up for the task.

In my first reply [0] to the conversation about spinning the Stable Maint team 
off on its own, I said that I would like the team to be liberally inclusive, and 
I'm happy to see that happening by counting all backporters as part of this team.

What I think should be the first list of priorities for the PTL of this team:

* Activate people working on the stable branches. I've had a few conversations 
with engineers in different companies saying that they are doing the stable 
work downstream and it would make sense for them to do it upstream instead/as 
well. We need to find ways to enable and encourage these people to do the work 
in our stable branches to keep them healthy and up to date.

* With that comes gating. We get no benefit from stable branches if we cannot 
run tests on them and merge those backports. This is currently done by a 
handful of people and it's no easy task to ramp up new folks on that work. We 
need to identify and encourage the people who have the correct mindset for it 
to step up and share the workload of those few. Short term that will need even 
more effort from the current group doing the work, and we need to ensure we do 
not overload them.

* Coordination between the project stable maintenance teams. We should not all 
be reinventing the wheel. I don't mean that we should recentralize stable 
maintenance out of the project-specific teams, but we need to establish active 
communication to share best practices, issues seen, etc.

* Stable Branch Policy [1]. The current revision rather discourages bringing 
anything that is not absolutely needed to stable branches. I think we need to 
find wording that encourages backporting bug fixes while still making sure that 
the reviewers understand what is sustainable and appropriate to merge.

* Ramping up new projects to the stable mindset. Via the big tent we have lots 
of new projects coming in; some of them would like to have their own stable 
branches but might not have the experience and/or knowledge to do it right. 

* Recognition for the people doing the stable work. We have lots of statistics 
for reviews and commits, all the way to e-mails to the mailing list, but we do 
not have anything showing interested parties how they or their interests are 
doing on the stable side. While in an ideal world statistics wouldn't be the 
driving factor for one's contributions, in the real world that is way too often 
the case. 

* Driving the stable related project tagging reformation.


My background and motivations to run for the position:

* Before OpenStack, my recent work history is in enterprise support, consulting 
and training. I have firsthand experience of what the enterprise expectations 
and challenges are. And that's our audience for the stable branches.

* Member of HPE Public Cloud engineering. We do run old code.

* Member of HPE Helion OpenStack engineering. We package and distribute stable 
releases.

* Glance Stable Liaison for the past year [2]. Freezer Stable Liaison, bringing 
a new team up to speed with stable branching in OpenStack. 

* Part of the Glance Release team for the past cycle, driving 
python-glanceclient and glance_store releases.

* I do have the time commitment from my management to work on improving 
upstream stable branches and processes.

I'm not part of stable-maint-core, nor do I belong to the group of gate fixers 
mentioned earlier. I do believe that I can enable that group to work at its 
best, and limit the overhead of the other areas on that priority list towards 
them; I do believe that I can improve the communication between the project 
teams and activate people to care more about their stable branches; and I do 
know that 

Re: [openstack-dev] how do we get the migration status details info from nova

2015-11-26 Thread Balázs Gibizer
> -Original Message-
> From: Paul Carlton [mailto:paul.carlt...@hpe.com]
> Sent: November 26, 2015 12:11
> On 26/11/15 10:48, 少合冯 wrote:
> 
> 
>   Now, we are agree on getting more migration status details
> info are useful.
> 
>   But How do we get them?
>   By REST API or Notification?
> 
> 
>   IF by API, does the  "time_elapsed" is needed?
> 
>   For there is a "created_at" field.
> 
>   But IMO, it is base on the time of the conductor server?
>   The time_elapsed can get from libvirt, which from the
> hypervisor.
>   Usually, there are ntp-server in the cloud. and we can get the
> time_elapsed by "created_at".
>   but not sure there will be the case:
>   the time of hypervisor and conductor server host are out of
> sync?
> 
> Why not both. Just update the _monitor_live_migration method in the
> libvirt driver (and any similar functions in other drivers if they exist)
> so it updates the migration object and also sends notification events.
> These don't have to be at 5 second intervals, although I think that is
> about right for the migration object update. Notification messages could
> be once every 30 seconds or so.
> 
> Operators can monitor the progress via the API and use orchestration
> utilities to consume the notification messages (and/or use the API).
> This will enable them to identify migration operations that are not making
> good progress and take actions to address the issue.
> 
> The created_at and updated_at fields of the migration object should be
> sufficient to allow the caller to work out how long the migration has been
> running for (or how long it took in the case of a completed migration).
> 
> The notification payload can include the created_at field or not. I'd say
> not. There will be a notification message generated when a migration
> starts, so subsequent progress messages don't need it; if the consumer
> wants the complete picture they can call the API.


As a side note if you are planning to add a new notification please consider 
aligning with the ongoing effort to make the notification payloads versioned. 
[1]
Cheers,
Gibi

[1] https://blueprints.launchpad.net/nova/+spec/versioned-notification-api 
> 
> 
> 
> --
> Paul Carlton
> Software Engineer
> Cloud Services
> Hewlett Packard
> BUK03:T242
> Longdown Avenue
> Stoke Gifford
> Bristol BS34 8QZ
> 
> Mobile:+44 (0)7768 994283
> Email:mailto:paul.carlt...@hpe.com
> Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12
> 1HN Registered No: 690597 England.
> The contents of this message and any attachments to it are confidential and
> may be legally privileged. If you have received this message in error, you
> should delete it from your system immediately and advise the sender. To any
> recipient of this message within HP, unless otherwise stated you should
> consider this message and attachments as "HP CONFIDENTIAL".

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TripleO client answers file.

2015-11-26 Thread Steven Hardy
On Thu, Nov 26, 2015 at 01:37:16PM +0100, Lennart Regebro wrote:
> We are proposing to have an "answers file" for the tripleoclient so
> that you don't have to have a ling line of
> 
>openstack overcloud deploy --templates /home/stack/mytemplates
> --environment superduper.yaml --environment loadsofstuff.yaml
> --envirnoment custom.yaml
> 
> But instead can just do
> 
>opennstack overcloud deploy --answers-file answers.yaml
> 
> And in that file have:
> 
>   templates: /home/stack/mytemplates
>   environments:
> - superduper.yaml
> - loadsofstuff.yaml
> - custom.yaml

I like the idea of this, provided we keep the scope limited to what is not
already possible via the heat environment files.

So, for example in the reply from Qasim Sarfraz there is mention of passing
other deployment parameters, and I would prefer we did not do that, because
it duplicates functionality that already exists in the heat environment
(I'll reply separately to explain that further).

I do have a couple of questions:

1. How will this integrate with the proposal to add an optional environment
directory? See https://review.openstack.org/#/c/245172/

2. How will this integrate with non "deploy" actions, such as image
building (e.g both the current interface and the new yaml definition
proposed in https://review.openstack.org/#/c/235569/)

It's probably fine to say it's only scoped to the deploy command initially,
but I wanted to at least consider if a broader answer-file format could be
adopted which could potentially support all overcloud * commands.

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Neutron] Testing of Ironic/Neutron integration on devstack

2015-11-26 Thread Yuriy Yekovenko
Hi All,

Just a quick note that I've created a Tempest bug [0] to verify Ironic
multitenancy; I am going to work on it.
Please let me know if you have any comments/suggestions regarding the test
scenario described there.

[0] https://bugs.launchpad.net/tempest/+bug/1520230

Best regards,
Yuriy Yekovenko
Senior QA Engineer
Mirantis Inc

On Thu, Nov 26, 2015 at 1:39 PM, Vasyl Saienko 
wrote:

> Hi Sukhdev,
>
> I didn't have a chance to be present at the previous meeting due to personal
> reasons, but I will be at the next meeting.
> It is important to keep CI testing as close as possible to real Ironic
> use-case scenarios.
>
> At the moment we don't have any test case that covers ironic/neutron
> integration in Tempest.
> I think it is a good time to discuss it. My vision of an ironic/neutron
> test case is as follows (a rough CLI sketch of the setup follows step 7):
>
> 1. Setup Devstack with 3 ironic nodes
> 2. In project: *demo *
>
>- create a network 10.0.100.0/24
>- boot vm on it with fixed ip 10.0.100.10
>- boot vm2 on it with fixed ip 10.0.100.11
>
> 3. In project: *alt_demo*
>
>- create network 10.0.100.0/24 with same prefix as in project *demo *
>- boot vm on it with fixed ip 10.0.100.20
>
> 4. Wait for both instances become active
>
> 5. Check that we *can't* ping *demo: vm* from *alt_demo vm*
>
> 6. Check that we *can* access vm2 from vm in project demo
>
> 7. Make sure that there are no packets with the MAC of *alt_demo vm* on
> *demo: vm* (can use tcpdump)
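
For illustration, steps 2 and 3 above might translate into kilo-era CLI calls
roughly like the following sketch; the image, flavor and network IDs are
placeholders:

    # project demo: network with two instances on fixed IPs
    neutron net-create demo-net
    neutron subnet-create demo-net 10.0.100.0/24 --name demo-subnet
    nova boot --image $IMAGE --flavor $FLAVOR \
        --nic net-id=$DEMO_NET_ID,v4-fixed-ip=10.0.100.10 vm
    nova boot --image $IMAGE --flavor $FLAVOR \
        --nic net-id=$DEMO_NET_ID,v4-fixed-ip=10.0.100.11 vm2

    # project alt_demo: same 10.0.100.0/24 prefix on its own network
    neutron net-create alt-demo-net
    neutron subnet-create alt-demo-net 10.0.100.0/24 --name alt-demo-subnet
    nova boot --image $IMAGE --flavor $FLAVOR \
        --nic net-id=$ALT_DEMO_NET_ID,v4-fixed-ip=10.0.100.20 vm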
> --
> Sincerely
> Vasyl Saienko
>
> On Wed, Nov 25, 2015 at 11:06 PM, Sukhdev Kapur 
> wrote:
>
>> Hi Vasyl,
>>
>> This is great. Kevin and I were working on a similar thing. I just
>> finished testing his patch and gave a +1.
>> This is a missing (and needed) functionality for getting the
>> Ironic/Neutron integration completed.
>>
>> As Kevin suggests, it will be best if we can combine these approaches and
>> come up with the best solution.
>>
>> If you are available, please join us in our next weekly meeting at 8AM
>> (pacific time) at #openstack-meeting-4.
>> I am sure team will be excited to know about this solution and this will
>> give an opportunity to make sure we cover all angles of this testing.
>>
>> Thanks
>> -Sukhdev
>>
>>
>> On Wed, Nov 25, 2015 at 7:27 AM, Vasyl Saienko 
>> wrote:
>>
>>> Hello Community,
>>>
>>> As you know, Ironic/Neutron integration is planned for Mitaka, and at the
>>> moment we don't have any CI that will test it. Unfortunately we can't test
>>> Ironic/Neutron integration on HW as we don't have it.
>>> So probably the best way is to develop an ML2 driver that will work with
>>> OVS.
>>>
>>> At the moment we have a PoC [1] of an ML2 driver that works with Cisco and
>>> OVS on Linux.
>>> We also have some patches to devstack that allow trying Ironic/Neutron
>>> integration on VMs and real HW, and a quick guide on how to test it locally [0].
>>>
>>> https://review.openstack.org/#/c/247513/
>>> https://review.openstack.org/#/c/248048/
>>> https://review.openstack.org/#/c/249717/
>>> https://review.openstack.org/#/c/248074/
>>>
>>> I'm interested in Neutron/Ironic integration. It would be great if we
>>> had it in Mitaka.
>>> I'm asking Community to check [0] and [1] and share your thoughts.
>>>
>>>  Also I would like to request a repo on openstack.org for [1]
>>>
>>>
>>> [0]
>>> https://github.com/jumpojoy/ironic-neutron/blob/master/devstack/examples/ironic-neutron-vm.md
>>> [1] https://github.com/jumpojoy/generic_switch
>>>
>>> --
>>> Sincerely
>>> Vasyl Saienko
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TripleO client answers file.

2015-11-26 Thread Steven Hardy
On Thu, Nov 26, 2015 at 06:01:31PM +0500, Qasim Sarfraz wrote:
>+1. That would be really helpful. 
>What about passing other deployment parameters via answers.yaml ?  For
>example, compute-flavor, control-flavor etc

So, I think the main reason to avoid this, is that long term it would be
best to deprecate/remove all those hard-coded parameter options (like
control-flavor etc).

The reason for saying this is --control-flavor is mapped inside
tripleoclient to a hard-coded parameter name (OvercloudControlFlavor):

https://github.com/openstack/python-tripleoclient/blob/master/tripleoclient/v1/overcloud_deploy.py#L111

This is both fragile (coupling between the tripleo-heat-templates and
python-tripleoclient), and inflexible (any new options have to be added to
tripleoclient if we want a consistent interface).

With the benefit of hindsight, IMHO, it was a mistake to expose these
explicit parameter options in the CLI, instead we should be defining the
parameters directly in environment files.

For example:

openstack overcloud deploy --templates -e my_flavors.yaml

Where my_flavors.yaml looks like:

parameters:
  OvercloudControlFlavor: overcloud-special
  OvercloudComputeFlavor: overcloud-special

This still works with the answer file interface Lennart is proposing, e.g
you just add my_flavors to the environments list, but it avoids hard-coded
coupling between tripleoclient and tripleo-heat-templates, which should
make maintenance easier in the long term, and allow better flexibility for
operators who wish to customize the templates with alternative parameters.

Does that sound reasonable?

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with Unexpected vif_type=binding_failed

2015-11-26 Thread Andreas Scheuring
Praveen, 
there are many errors in your q-svc log.
It says:

InvalidInput: Invalid input for operation: (u'Tunnel IP %(ip)s in use
with host %(host)s', {'ip': u'10.81.1.150', 'host':
u'localhost.localdomain'}).\n"]


Did you maybe specify duplicate IPs in your controller and compute
nodes' neutron tunnel config?

Or did you change the hostname after installation?

Or maybe the code has trouble with duplicated host names?

-- 
Andreas
(IRC: scheuran)



On Di, 2015-11-24 at 15:28 +0100, Praveen MANKARA RADHAKRISHNAN wrote:
> Hi Sean, 
> 
> Thanks for the reply. 
> 
> Please find the logs attached. 
> ovs-dpdk is correctly running in compute.
> 
> Thanks
> Praveen 
> 
> On Tue, Nov 24, 2015 at 3:04 PM, Mooney, Sean K wrote:
> > Hi, would you be able to attach the n-cpu log from the compute node and
> > the n-sch and q-svc logs for the controller so we can see if there is a
> > stack trace relating to the vm boot.
> > 
> > Also can you confirm ovs-dpdk is running correctly on the compute node
> > by running
> > sudo service ovs-dpdk status
> > 
> > The neutron and networking-ovs-dpdk commits are from their respective
> > stable/kilo branches so they should be compatible provided no breaking
> > changes have been merged to either branch.
> > 
> > regards
> > sean.
> > 
> > From: Praveen MANKARA RADHAKRISHNAN [mailto:praveen.mank...@6wind.com]
> > Sent: Tuesday, November 24, 2015 1:39 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails
> > with Unexpected vif_type=binding_failed
> > 
> > Hi Przemek,
> > 
> > Thanks for the response, 
> > 
> > Here are the commit ids for Neutron and networking-ovs-dpdk:
> > 
> > [stack@localhost neutron]$ git log --format="%H" -n 1
> > 026bfc6421da796075f71a9ad4378674f619193d
> > [stack@localhost neutron]$ cd ..
> > [stack@localhost ~]$ cd networking-ovs-dpdk/
> > [stack@localhost networking-ovs-dpdk]$ git log --format="%H" -n 1
> > 90dd03a76a7e30cf76ecc657f23be8371b1181d2
> > 
> > The Neutron agents are up and running on the compute node. 
> > 
> > Thanks 
> > Praveen
> > 
> > On Tue, Nov 24, 2015 at 12:57 PM, Czesnowicz, Przemyslaw wrote:
> > > Hi Praveen,
> > > 
> > > There have been some changes recently to networking-ovs-dpdk; it no
> > > longer hosts a mech driver, as the openvswitch mech driver in Neutron
> > > supports vhost-user ports.
> > > I guess something went wrong and the version of Neutron is not
> > > matching networking-ovs-dpdk. Can you post the commit ids of Neutron
> > > and networking-ovs-dpdk?
> > > 
> > > The other possibility is that the Neutron agent is not running/died
> > > on the compute node.
> > > Check with:
> > > neutron agent-list
> > > 
> > > Przemek
> > > 
> > > From: Praveen MANKARA RADHAKRISHNAN [mailto:praveen.mank...@6wind.com]
> > > Sent: Tuesday, November 24, 2015 12:18 PM
> > > To: openstack-dev@lists.openstack.org
> > > Subject: [openstack-dev] [networking-ovs-dpdk] VM creation fails with
> > > Unexpected vif_type=binding_failed
> > > 
> > > Hi,
> > > 
> > > I am trying to set up an OpenStack (kilo) installation using ovs-dpdk
> > > through a devstack installation. 
> > > 
> > > I have followed the "

Re: [openstack-dev] [nova][ironic] do we really need websockify with numpy speedups?

2015-11-26 Thread Roman Podoliaka
Hi Pavlo,

Can we just use a wheel package for numpy instead?

Thanks,
Roman

On Thu, Nov 26, 2015 at 3:00 PM, Pavlo Shchelokovskyy
 wrote:
> Hi again,
>
> I've gone ahead and created a proper pull request to websockify [0]; comment
> there if you think we need it :)
>
> I also realized that there is another option, which is to include
> python-numpy in files/debs/ironic and files/debs/nova (strangely it is
> already present in rpms/ for nova, noVNC and spice services).
> This should install a pre-compiled version from distro repos, and should
> also speed things up.
>
> Any comments welcome.
>
> [0] https://github.com/kanaka/websockify/pull/212
>
> Best regards,
>
> On Thu, Nov 26, 2015 at 1:44 PM Pavlo Shchelokovskyy
>  wrote:
>>
>> Hi all,
>>
>> I was long puzzled why devstack is installing numpy. Being a fantastic
>> package itself, it has the drawback of taking about 4 minutes to compile its
>> C extensions when installing on our gates (e.g. [0]). I finally took time to
>> research and here is what I've found:
>>
>> it is used only by the websockify package (installed, AFAIK, by ironic and
>> nova only), and there it is used to speed up the HyBi protocol. Although the
>> code itself has a path to work without numpy installed [1], the setup.py of
>> websockify declares numpy as a hard dependency [2].
>>
>> My question is do we really need those speedups? Do we test any feature
>> requiring fast HyBi support on gates? Not installing numpy would shave 4
>> minutes off any gate job that is installing Nova or Ironic, which seems like
>> a good deal to me.
>>
>> If we decide to save this time, I have prepared a pull request for
>> websockify that moves the numpy requirement to "extras" [3]. As a
>> consequence numpy will not be installed by default as a dependency, but it
>> is still possible to install it with e.g. "pip install websockify[fastHyBi]",
>> and package builders can also specify numpy as a hard dependency for the
>> websockify package in their package specs.
>>
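
For reference, the extras change in [3] would look roughly like the following
in websockify's setup.py; this is only a sketch, and the package list and the
name of the extra are taken from the mail above, not from the actual file:

    from setuptools import setup

    setup(
        name='websockify',
        packages=['websockify'],
        # numpy is no longer a mandatory dependency; the pure-Python
        # HyBi code path is used when it is absent
        install_requires=[],
        extras_require={
            # opt-in speedups: pip install websockify[fastHyBi]
            'fastHyBi': ['numpy'],
        },
    )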
>> What do you think?
>>
>> [0]
>> http://logs.openstack.org/82/236982/6/check/gate-tempest-dsvm-ironic-agent_ssh/1141960/logs/devstacklog.txt.gz#_2015-11-11_19_51_40_784
>> [1]
>> https://github.com/kanaka/websockify/blob/master/websockify/websocket.py#L143
>> [2] https://github.com/kanaka/websockify/blob/master/setup.py#L37
>> [3]
>> https://github.com/pshchelo/websockify/commit/0b1655e73ea13b4fba9c6fb4122adb1435d5ce1a
>>
>> Best regards,
>> --
>> Dr. Pavlo Shchelokovskyy
>> Senior Software Engineer
>> Mirantis Inc
>> www.mirantis.com
>
> --
> Dr. Pavlo Shchelokovskyy
> Senior Software Engineer
> Mirantis Inc
> www.mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TripleO client answers file.

2015-11-26 Thread Giulio Fidente

On 11/26/2015 02:34 PM, Steven Hardy wrote:

On Thu, Nov 26, 2015 at 01:37:16PM +0100, Lennart Regebro wrote:

We are proposing to have an "answers file" for the tripleoclient so
that you don't have to have a long line of

openstack overcloud deploy --templates /home/stack/mytemplates
--environment superduper.yaml --environment loadsofstuff.yaml
--environment custom.yaml

But instead can just do

openstack overcloud deploy --answers-file answers.yaml

And in that file have:

   templates: /home/stack/mytemplates
   environments:
 - superduper.yaml
 - loadsofstuff.yaml
 - custom.yaml


I like the idea of this, provided we keep the scope limited to what is not
already possible via the heat environment files.

So, for example in the reply from Qasim Sarfraz there is mention of passing
other deployment parameters, and I would prefer we did not do that, because
it duplicates functionality that already exists in the heat environment
(I'll reply separately to explain that further).

I do have a couple of questions:

1. How will this integrate with the proposal to add an optional environment
directory? See https://review.openstack.org/#/c/245172/

2. How will this integrate with non "deploy" actions, such as image
building (e.g both the current interface and the new yaml definition
proposed in https://review.openstack.org/#/c/235569/)

It's probably fine to say it's only scoped to the deploy command initially,
but I wanted to at least consider if a broader answer-file format could be
adopted which could potentially support all overcloud * commands.


Deploy and update actually, as per the change request, given you'll 
probably want those to remain the same on update.


I haven't checked the submission in detail, but I have a few comments:

1. What is the benefit of having the templates location specified in the 
answers file as well? How about keeping the templates path out of the yaml?


2. I'd also rename the answers file to something like an 
environments-file, given it's not an answers file but more a list of env 
files.


3. In which order are the env files appended? It is important that the order 
is respected and known to the user in advance.


4. How does this behave if one is passing some env with -e as well? It 
looks like we should append the -e files to the list of environments 
gathered from the environments-file.


5. Make sure not to mangle the file paths; these can be absolute or 
relative to the templates location.

--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TripleO client answers file.

2015-11-26 Thread Qasim Sarfraz
Thanks, Steve. Makes sense to avoid passing deployment parameters with the
answers file interface Lennart proposed.

On Thu, Nov 26, 2015 at 6:42 PM, Steven Hardy  wrote:

> On Thu, Nov 26, 2015 at 06:01:31PM +0500, Qasim Sarfraz wrote:
> >+1. That would be really helpful.
> >What about passing other deployment parameters via answers.yaml? For
> >example, compute-flavor, control-flavor etc
>
> So, I think the main reason to avoid this, is that long term it would be
> best to deprecate/remove all those hard-coded parameter options (like
> control-flavor etc).
>
> The reason for saying this is --control-flavor is mapped inside
> tripleoclient to a hard-coded parameter name (OvercloudControlFlavor):
>
>
> https://github.com/openstack/python-tripleoclient/blob/master/tripleoclient/v1/overcloud_deploy.py#L111
>
> This is both fragile (coupling between the tripleo-heat-templates and
> python-tripleoclient), and inflexible (any new options have to be added to
> tripleoclient if we want a consistent interface).
>
> With the benefit of hindsight, IMHO, it was a mistake to expose these
> explicit parameter options in the CLI, instead we should be defining the
> parameters directly in environment files.
>
> For example:
>
> openstack overcloud deploy --templates -e my_flavors.yaml
>
> Where my_flavors.yaml looks like:
>
> parameters:
>   OvercloudControlFlavor: overcloud-special
>   OvercloudComputeFlavor: overcloud-special
>
> This still works with the answer file interface Lennart is proposing, e.g
> you just add my_flavors to the environments list, but it avoids hard-coded
> coupling between tripleoclient and tripleo-heat-templates, which should
> make maintenance easier in the long term, and allow better flexibility for
> operators who wish to customize the templates with alternative parameters.
>
> Does that sound reasonable?
>
> Thanks,
>
> Steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Qasim Sarfraz
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] [release] Puppet OpenStack 7.0.0 Liberty (_independent)

2015-11-26 Thread Emilien Macchi
The Puppet OpenStack community is very proud to announce the release of 22
modules:

puppet-aodh 7.0.0
puppet-ceilometer 7.0.0
puppet-cinder 7.0.0
puppet-designate 7.0.0
puppet-glance 7.0.0
puppet-gnocchi 7.0.0
puppet-heat 7.0.0
puppet-horizon 7.0.0
puppet-ironic 7.0.0
puppet-keystone 7.0.0
puppet-manila 7.0.0
puppet-murano 7.0.0
puppet-neutron 7.0.0
puppet-nova 7.0.0
puppet-openstacklib 7.0.0
puppet-openstack_extras 7.0.0
puppet-sahara 7.0.0
puppet-swift 7.0.0
puppet-tempest 7.0.0
puppet-trove 7.0.0
puppet-tuskar 7.0.0
puppet-vswitch 3.0.0

For more details about the release, you can visit:
https://wiki.openstack.org/wiki/Puppet/releases
https://forge.puppetlabs.com/openstack

Here are some interesting numbers [1]:

Contributors during Kilo cycle: 91
Contributors during Liberty cycle: 108

Commits during Kilo cycle: 730
Commits during Liberty cycle: 1201

LOC during Kilo cycle: 67104
LOC during Liberty cycle: 93448

[1] Sources: http://stackalytics.openstack.org

Thank you to the Puppet OpenStack community for making it happen.
Also big kudos to other teams, especially the OpenStack Infra, Tempest and
Packaging folks, who never hesitate to help us.
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] When to use parameters vs parameter_defaults

2015-11-26 Thread Jiří Stránský

On 26.11.2015 14:12, Jiří Stránský wrote:




My personal preference is to say:

1. Any templates which are included in the default environment (e.g
overcloud-resource-registry-puppet.yaml), must expose their parameters
via overcloud-without-mergepy.yaml

2. Any templates which are included in the default environment, but via a
"noop" implementation *may* expose their parameters provided they are
common and not implementation/vendor specific.

3. Any templates exposing vendor-specific interfaces (e.g. at least anything
related to the OS::TripleO::*ExtraConfig* interfaces) must not expose any
parameters via the top-level template.

How does this sound?


Pardon the longer e-mail please, but I think this topic is very far-reaching
and impactful on the future of TripleO, perhaps even strategic,
and I'd like to present some food for thought.


I think as we progress towards a more composable/customizable overcloud,
using parameter_defaults will become a necessity in more and more places
in the templates.

Nowadays we can get away with hierarchical passing of some parameters
from the top-level template downwards because we can make very strong
assumptions about how the overcloud is structured, and what each piece
of the overcloud takes as its parameters. Even though we support
customization via the resource registry, it's still mostly just
switching between alternate implementations of the same thing, not
strong composability.


I would imagine that going forward, TripleO would receive feature
requests to add custom node types into the deployment, be it e.g.
separating neutron network node functionality out of controller node
onto its own hardware, or adding custom 3rd party node types into the
overcloud, which need to integrate with the rest of the overcloud tightly.

When such scenario is considered, even the most code-static parameters
like node-type-specific ExtraConfig, or a nova flavor to use for a node
type, suddenly become dynamic on the code level (think
parameter_defaults), simply because we can't predict upfront what node
types we'll have.


I think a parallel with how Puppet evolved can be observed here. It used
to be that Puppet classes included in deployments formed a sort-of
hierarchy and got their parameters fed in a top-down cascade. This
carried limitations on composability of machine configuration manifests
(collisions when using the same class from multiple places, huge number
of parameters in the higher-level manifests). Hiera was introduced to
solve the problem, and nowadays top-level Puppet manifests contain a lot
of include statements, and the parameter values are mostly read from
external hiera data files, and hiera values transcend through the class
hierarchy freely. This hinders easy discoverability of "what settings
can I tune within this machine's configuration", but judging by the
adoption of the approach, the benefits probably outweigh the drawbacks.
In Puppet's case, at least :)

It seems TripleO is hitting similar composability and sanity limits with
the top-down approach, and the number of parameters which can only be
fed via parameter_defaults is increasing. (The disadvantage of
parameter_defaults is that, unlike hiera, we currently have no clear
namespacing rules, which means a higher chance of conflict. Perhaps the
unit tests suggested in another subthread would be a good start, maybe
we could even think about how to do proper namespacing.)


Does what I described seem somewhat accurate? Should we maybe buy into
the concept of "composable templates, externally fed
hierarchy-transcending parameters" for the long term?


I now realize I might have used too generic or Puppetish terms in the 
explanation, perhaps drowning the gist of the message a bit :) What I'm 
suggesting is: let's consider going with parameter_defaults wherever we 
can, for the sake of composability, and figure out what is the best way 
to prevent parameter name collisions.




Thanks for reading this far :)


Jirka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] RFC: profile matching

2015-11-26 Thread Dmitry Tantsur

On 11/09/2015 03:51 PM, Dmitry Tantsur wrote:

Hi folks!

I spent some time thinking about bringing profile matching back in, so
I'd like to get your comments on the following near-future plan.

First, the scope of the problem. What we do is essentially a kind of
capability discovery. We'll help the nova scheduler do the right
thing by assigning a capability like "suits for compute", "suits for
controller", etc. The most obvious path is to use inspector to assign
capabilities like "profile=1" and then filter nodes by them.

Special care, however, is needed when some of the nodes match 2 or
more profiles. E.g. if we have all 4 nodes matching "compute" and then
only 1 matching "controller", nova can select this one node for the
"compute" flavor, and then complain that it does not have enough hosts
for "controller".

We also want to conduct a sanity check before even calling
heat/nova, to avoid cryptic "no valid host found" errors.

(1) Inspector part

During the liberty cycle we've landed a whole bunch of API's to
inspector that allow us to define rules on introspection data. The plan
is to have rules saying, for example:

  rule 1: if memory_mb >= 8192, add capability "compute_profile=1"
  rule 2: if local_gb >= 100, add capability "controller_profile=1"

Note that these rules are defined via inspector API using a JSON-based
DSL [1].
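
A concrete rule payload following that DSL might look roughly like the sketch
below; this is illustrative only, and the exact condition field syntax should
be checked against the inspector documentation in [1]:

    {
        "description": "nodes with >= 8 GiB RAM suit the compute profile",
        "conditions": [
            {"field": "memory_mb", "op": "ge", "value": 8192}
        ],
        "actions": [
            {"action": "set-capability", "name": "compute_profile",
             "value": "1"}
        ]
    }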

As you see, one node can receive 0, 1 or many such capabilities. So we
need the next step to make a final decision, based on how many nodes we
need of every profile.

(2) Modifications of `overcloud deploy` command: assigning profiles

New argument --assign-profiles will be added. If it's provided,
tripleoclient will fetch all ironic nodes, and try to ensure that we
have enough nodes with all profiles.

Nodes with an existing "profile:xxx" capability are left as they are. For
nodes without a profile it will look at the "xxx_profile" capabilities
discovered in the previous step. One of the possible profiles will be
chosen and assigned to the "profile" capability. The assignment stops as
soon as we have as many nodes of a flavor as the user requested.


I've put up a prototype patch for this work item: 
https://review.openstack.org/#/c/250405/




(3) Modifications of `overcloud deploy` command: validation

To avoid 'no valid host found' errors from nova, the deploy command will
fetch all flavors involved and look at the "profile" capabilities. If
they are set for any flavors, it will check if we have enough ironic
nodes with a given "profile:xxx" capability. This check will happen
after profile assignment, if --assign-profiles is used.


Looks like this is already implemented, so the patch above is the only 
thing we actually need.




Please let me know what you think.

[1] https://github.com/openstack/ironic-inspector#introspection-rules

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how do we get the migration status details info from nova

2015-11-26 Thread 少合冯
Useful information.
Thank you Gibi.

BR
Shaohe Feng..

2015-11-26 21:29 GMT+08:00 Balázs Gibizer :

> > -Original Message-
> > From: Paul Carlton [mailto:paul.carlt...@hpe.com]
> > Sent: November 26, 2015 12:11
> > On 26/11/15 10:48, 少合冯 wrote:
> >
> >
> >   Now, we are agree on getting more migration status details
> > info are useful.
> >
> >   But How do we get them?
> >   By REST API or Notification?
> >
> >
> >   IF by API, does the  "time_elapsed" is needed?
> >
> >   For there is a "created_at" field.
> >
> >   But IMO, it is base on the time of the conductor server?
> >   The time_elapsed can get from libvirt, which from the
> > hypervisor.
> >   Usually, there are ntp-server in the cloud. and we can get the
> > time_elapsed by "created_at".
> >   but not sure there will be the case:
> >   the time of hypervisor and conductor server host are out of
> > sync?
> >
> > Why not both. Just update the _monitor_live_migration method in the
> > libvirt driver (and any similar functions in other drivers if they
> > exist) so it updates the migration object and also sends notification
> > events. These don't have to be at 5 second intervals, although I think
> > that is about right for the migration object update. Notification
> > messages could be once every 30 seconds or so.
> >
> > Operators can monitor the progress via the API and orchestration
> utilities  to
> > consume the notification messages (and/or use API).
> > This will enable them to identify migration operations that are not
> making
> > good progress and take actions to address the issue.
> >
> > The created_at and updated_at fields of the migration object should be
> > sufficient to allow the caller to work out how long the migration has
> been
> > running for (or how long it took in the case of a completed migration).
> >
> > Notification payload can include the created_at field or not.  I'd say
> not.
> > There will be a notification message generated when a migration starts so
> > subsequent progress messages don't need it, if the consumer wants the
> > complete picture they can call the API.
>
>
> As a side note if you are planning to add a new notification please
> consider
> aligning with the ongoing effort to make the notification payloads
> versioned. [1]
> Cheers,
> Gibi
>
> [1] https://blueprints.launchpad.net/nova/+spec/versioned-notification-api
> >
> >
> >
> > --
> > Paul Carlton
> > Software Engineer
> > Cloud Services
> > Hewlett Packard
> > BUK03:T242
> > Longdown Avenue
> > Stoke Gifford
> > Bristol BS34 8QZ
> >
> > Mobile:+44 (0)7768 994283
> > Email:mailto:paul.carlt...@hpe.com
> > Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks
> RG12
> > 1HN Registered No: 690597 England.
> > The contents of this message and any attachments to it are confidential
> and
> > may be legally privileged. If you have received this message in error,
> you
> > should delete it from your system immediately and advise the sender. To
> any
> > recipient of this message within HP, unless otherwise stated you should
> > consider this message and attachments as "HP CONFIDENTIAL".
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how do we get the migration status details info from nova

2015-11-26 Thread 少合冯
Agreed. Why not both? And we will use created_at to work out how long the
migration has been running.


Paul, thank you very much for the suggestion.

BR.
Shaohe Feng



2015-11-26 19:10 GMT+08:00 Paul Carlton :

> On 26/11/15 10:48, 少合冯 wrote:
>
> Now, we are agree on getting more migration status details info are
> useful.
>
> But How do we get them?
> By REST API or Notification?
>
>
> IF by API, does the  "time_elapsed" is needed?
> For there is a "created_at" field.
> But IMO, it is base on the time of the conductor server?
> The time_elapsed can get from libvirt, which from the hypervisor.
> Usually, there are ntp-server in the cloud. and we can get the
> time_elapsed by "created_at".
> but not sure there will be the case:
> the time of hypervisor and conductor server host are out of sync?
>
> Why not both. Just update the _monitor_live_migration method in the
> libvirt driver (and any similar functions in other drivers if they exist)
> so it updates the migration object and also sends notification events.
> These don't have to be at 5 second intervals, although I think that is
> about right for the migration object update. Notification messages could
> be once every 30 seconds or so.
>
> Operators can monitor the progress via the API and orchestration utilities
>  to consume the notification messages (and/or use API).
> This will enable them to identify migration operations that are not making
>  good progress and take actions to address the issue.
>
> The created_at and updated_at fields of the migration object should be
> sufficient to allow the caller to work out how long the migration has been
> running for (or how long it took in the case of a completed migration).
>
> Notification payload can include the created_at field or not.  I'd say not.
> There will be a notification message generated when a migration starts
> so subsequent progress messages don't need it, if the consumer wants
> the complete picture they can call the API.
>
>
> --
> Paul Carlton
> Software Engineer
> Cloud Services
> Hewlett Packard
> BUK03:T242
> Longdown Avenue
> Stoke Gifford
> Bristol BS34 8QZ
>
> Mobile:+44 (0)7768 994283
> Email:mailto:paul.carlt...@hpe.com 
> Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 
> 1HN Registered No: 690597 England.
> The contents of this message and any attachments to it are confidential and 
> may be legally privileged. If you have received this message in error, you 
> should delete it from your system immediately and advise the sender. To any 
> recipient of this message within HP, unless otherwise stated you should 
> consider this message and attachments as "HP CONFIDENTIAL".
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] More and more circular build dependencies: what can we do to stop this?

2015-11-26 Thread Thomas Goirand
Hi,

As a package maintainer, I'm seeing more and more circular
build-dependencies. The latest of them is between oslotest and oslo.config
in Mitaka.

Some have also been added between unittest2, linecache2 and traceback2,
which are now really broadly used.

The only way I can work around this type of issue is to temporarily
disable the unit tests (or allow them to fail), build both packages, and
then revert the unit test tweaks. That's both annoying and frustrating to do.

What can we do so that it doesn't constantly happen again and again?
It's a huge pain for downstream package maintainers and distros.

Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Oslo][TaskFlow] Non-DAG Flows, Timers and External Events

2015-11-26 Thread milanisko k
Hello list,

I've been busy investigating how to best introduce HA to Ironic Inspector
and I got referred to TaskFlow.

With Ironic Inspector, besides handling other issues, each Node being
introspected undergoes a couple of steps with side effects.
This can be described using a finite automaton [1].
However, there are these constraints (sketched in code below):
- The user can Cancel the Introspection in the Waiting state
- The Automaton advances from Waiting to Error if it reaches a Timeout
- Introspection can be restarted from either the Error or Finished state
(which would induce a cyclic graph, not a DAG)
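
A rough sketch of that cyclic transition system, expressed for instance with
the automaton library; the state and event names are taken from the
constraints above and are illustrative only, not the actual inspector code:

    from automaton import machines

    m = machines.FiniteMachine()
    for state in ('starting', 'waiting', 'finished', 'error'):
        m.add_state(state)
    m.default_start_state = 'starting'

    m.add_transition('starting', 'waiting', 'enrolled')
    m.add_transition('waiting', 'finished', 'data_received')
    m.add_transition('waiting', 'error', 'cancel')    # user cancels
    m.add_transition('waiting', 'error', 'timeout')   # timeout reached
    # restarting makes the graph cyclic, i.e. not a DAG:
    m.add_transition('error', 'starting', 'restart')
    m.add_transition('finished', 'starting', 'restart')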

Is there some best practice for how to address those?

Thanks,
milan


[1] Inspection State Transition System, Line #64,
http://www.fpaste.org/294451/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-announce] [release][stable][keystone][ironic] keystonemiddleware release 1.5.3 (kilo)

2015-11-26 Thread Morgan Fainberg
Here is the first pass at a fixture to handle what ceilometer is
doing with the hacked-up memcache interface. This will be something
supported by keystonemiddleware, so it should not break randomly in the
future: https://review.openstack.org/#/c/249794/

On Thu, Nov 26, 2015 at 7:27 AM, Dmitry Tantsur  wrote:

> I suspect it could break ironic stable/kilo in the same way as 2.0.0
> release. Still investigating, checking if
> https://review.openstack.org/#/c/250341/ will also fix it. Example of
> failing patch: https://review.openstack.org/#/c/248365/
>
> On 11/23/2015 08:54 PM, d...@doughellmann.com wrote:
>
>> We are pumped to announce the release of:
>>
>> keystonemiddleware 1.5.3: Middleware for OpenStack Identity
>>
>> This release is part of the kilo stable release series.
>>
>> With source available at:
>>
>>  http://git.openstack.org/cgit/openstack/keystonemiddleware
>>
>> With package available at:
>>
>>  https://pypi.python.org/pypi/keystonemiddleware
>>
>> For more details, please see the git log history below and:
>>
>>  http://launchpad.net/keystonemiddleware/+milestone/1.5.3
>>
>> Please report issues through launchpad:
>>
>>  http://bugs.launchpad.net/keystonemiddleware
>>
>> Notable changes
>> 
>>
>> will now require python-requests<2.8.0
>>
>> Changes in keystonemiddleware 1.5.2..1.5.3
>> --
>>
>> d56d96c Updated from global requirements
>> 9aafe8d Updated from global requirements
>> cc746dc Add an explicit test failure condition when auth_token is missing
>> 5b1e18f Fix list_opts test to not check all deps
>> 217cd3d Updated from global requirements
>> 518e9c3 Ensure cache keys are a known/fixed length
>> 033c151 Updated from global requirements
>>
>> Diffstat (except docs and test files)
>> -
>>
>> keystonemiddleware/auth_token/_cache.py   | 19
>> ++-
>> requirements.txt  | 19
>> ++-
>> setup.py  |  1 -
>> test-requirements-py3.txt | 18
>> +-
>> test-requirements.txt | 18
>> +-
>> 7 files changed, 69 insertions(+), 37 deletions(-)
>>
>>
>> Requirements updates
>> 
>>
>> diff --git a/requirements.txt b/requirements.txt
>> index e3288a1..23308cd 100644
>> --- a/requirements.txt
>> +++ b/requirements.txt
>> @@ -7,9 +7,9 @@ iso8601>=0.1.9
>> -oslo.config>=1.9.3,<1.10.0  # Apache-2.0
>> -oslo.context>=0.2.0,<0.3.0 # Apache-2.0
>> -oslo.i18n>=1.5.0,<1.6.0  # Apache-2.0
>> -oslo.serialization>=1.4.0,<1.5.0   # Apache-2.0
>> -oslo.utils>=1.4.0,<1.5.0   # Apache-2.0
>> -pbr>=0.6,!=0.7,<1.0
>> -pycadf>=0.8.0,<0.9.0
>> -python-keystoneclient>=1.1.0,<1.4.0
>> -requests>=2.2.0,!=2.4.0
>> +oslo.config<1.10.0,>=1.9.3 # Apache-2.0
>> +oslo.context<0.3.0,>=0.2.0 # Apache-2.0
>> +oslo.i18n<1.6.0,>=1.5.0 # Apache-2.0
>> +oslo.serialization<1.5.0,>=1.4.0 # Apache-2.0
>> +oslo.utils!=1.4.1,<1.5.0,>=1.4.0 # Apache-2.0
>> +pbr!=0.7,<1.0,>=0.6
>> +pycadf<0.9.0,>=0.8.0
>> +python-keystoneclient<1.4.0,>=1.2.0
>> +requests!=2.4.0,<2.8.0,>=2.2.0
>> @@ -16,0 +17 @@ six>=1.9.0
>> +stevedore<1.4.0,>=1.3.0 # Apache-2.0
>> diff --git a/test-requirements.txt b/test-requirements.txt
>> index 11d9e17..5ab5eb0 100644
>> --- a/test-requirements.txt
>> +++ b/test-requirements.txt
>> @@ -5 +5 @@
>> -hacking>=0.10.0,<0.11
>> +hacking<0.11,>=0.10.0
>> @@ -9,2 +9,2 @@ discover
>> -fixtures>=0.3.14
>> -mock>=1.0
>> +fixtures<1.3.0,>=0.3.14
>> +mock<1.1.0,>=1.0
>> @@ -12,5 +12,5 @@ pycrypto>=2.6
>> -oslosphinx>=2.5.0,<2.6.0 # Apache-2.0
>> -oslotest>=1.5.1,<1.6.0  # Apache-2.0
>> -oslo.messaging>=1.8.0,<1.9.0  # Apache-2.0
>> -requests-mock>=0.6.0  # Apache-2.0
>> -sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
>> +oslosphinx<2.6.0,>=2.5.0 # Apache-2.0
>> +oslotest<1.6.0,>=1.5.1 # Apache-2.0
>> +oslo.messaging<1.9.0,>=1.8.0 # Apache-2.0
>> +requests-mock>=0.6.0 # Apache-2.0
>> +sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
>> @@ -19 +19 @@ testresources>=0.2.4
>> -testtools>=0.9.36,!=1.2.0
>> +testtools!=1.2.0,>=0.9.36
>>
>>
>>
>> ___
>> OpenStack-announce mailing list
>> openstack-annou...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-announce
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [gnocchi][rating] Issues regarding gnocchi support in CloudKitty

2015-11-26 Thread Stéphane Albert
Hi Julien,

You'll find attached to this mail two dump files.

gnocchi_resource.txt is an example of resource requests and responses
from gnocchi.

gnocchi_measure.txt is an example of a timeframe request.
Metric (vcpus) measure
==
Data stored by ceilometer with default configuration (devstack).

GET 
http://10.8.8.168:8041/v1/metric/9c26bbea-6041-4067-9384-f6aa9b4ce120/measures
← 200 application/json 1.71kB 295ms
Host: 10.8.8.168:8041
Connection:   keep-alive
X-Auth-Token: 90cf2d940e464ae0aef733d5f124aa43
Accept-Encoding:  gzip, deflate
Accept:   application/json, */*
User-Agent:   keystoneauth1
No request content
Date:Thu, 26 Nov 2015 14:58:59 GMT
Server:  Apache/2.4.7 (Ubuntu)
content-length:  1752
Keep-Alive:  timeout=5, max=100
Connection:  Keep-Alive
Content-Type:application/json; charset=UTF-8
JSON
[
[
"2015-11-23T00:00:00+00:00",
86400.0,
1.0
],
[
"2015-11-24T00:00:00+00:00",
86400.0,
1.0
],
[
"2015-11-25T00:00:00+00:00",
86400.0,
1.0
],
[
"2015-11-26T00:00:00+00:00",
86400.0,
1.0
],
[
"2015-11-25T15:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-25T16:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-25T17:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-25T18:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-25T19:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-25T20:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-25T21:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-25T22:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-25T23:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-26T00:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-26T01:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-26T02:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-26T03:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-26T04:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-26T05:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-26T06:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-26T07:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-26T08:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-26T09:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-26T10:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-26T11:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-26T12:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-26T13:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-26T14:00:00+00:00",
3600.0,
1.0
],
[
"2015-11-26T03:00:00+00:00",
300.0,
1.0
],
[
"2015-11-26T04:00:00+00:00",
300.0,
1.0
],
[
"2015-11-26T05:00:00+00:00",
300.0,
1.0
],
[
"2015-11-26T06:00:00+00:00",
300.0,
1.0
],
[
"2015-11-26T07:00:00+00:00",
300.0,
1.0
],
[
"2015-11-26T08:00:00+00:00",
300.0,
1.0
],
[
"2015-11-26T09:00:00+00:00",
300.0,
1.0
],
[
"2015-11-26T10:00:00+00:00",
300.0,
1.0
],
[
"2015-11-26T11:00:00+00:00",
300.0,
1.0
],
[
"2015-11-26T12:00:00+00:00",
300.0,
1.0
],
[
"2015-11-26T13:00:00+00:00",
300.0,
1.0
],
[
"2015-11-26T14:00:00+00:00",
300.0,
1.0
]
]

Request for a 1h timeframe
==

GET
http://10.8.8.168:8041/v1/metric/9c26bbea-6041-4067-9384-f6aa9b4ce120/measures?start=2015-11-23T10%3A00%3A0.0%2B00%3A00&stop=2015-11-23T11%3A00%3A0.0%2B00%3A00&aggregation=max
← 200 application/json 2B 64ms
Host: 10.8.8.168:8041
Connection:   keep-alive
X-Auth-Token: 948f7cf696d94c41908e819237112876
Accept-Encoding:  gzip, deflate
Accept:   application/json, */*
User-Agent:   python-keystoneclient
No request content
Date:Thu, 26 Nov 2015 14:17:39 GMT
Server:  Apache/2.4.7 (Ubuntu)
content-length:  2
Keep-Alive:  timeout=5, max=86
Connection:  Keep-Alive
Content-Type:application/json; charset=UTF-8
JSON

[]

We don't get any data, but there is data with a bigger granularity. We don't
have a way to know that but request the archive policy and parse it.
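
For reference, the client-side fallback we end up doing amounts to something
like this (a rough, untested sketch; the URL and token are from the dumps
above):

    import requests
    from datetime import datetime, timedelta

    def parse(ts):
        # timestamps in the dumps are ISO8601 with a +00:00 offset
        return datetime.strptime(ts[:19], '%Y-%m-%dT%H:%M:%S')

    # fetch every aggregate for the metric, as in the first dump
    measures = requests.get(
        'http://10.8.8.168:8041/v1/metric/'
        '9c26bbea-6041-4067-9384-f6aa9b4ce120/measures',
        headers={'X-Auth-Token': '90cf2d940e464ae0aef733d5f124aa43'},
        params={'aggregation': 'max'}).json()

    start = datetime(2015, 11, 23, 10, 0)
    stop = datetime(2015, 11, 23, 11, 0)

    # keep points whose bucket [ts, ts + granularity) overlaps the
    # frame, so a 1h frame still matches the 86400s (daily) aggregate
    in_frame = [(ts, gran, val) for ts, gran, val in measures
                if parse(ts) < stop
                and parse(ts) + timedelta(seconds=gran) > start]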

Re: [openstack-dev] [ironic] Releases and things

2015-11-26 Thread Ruby Loo
On 25 November 2015 at 18:02, Jim Rollenhagen wrote:

> Hi all,
>
> We're approaching OpenStack's M-1 milestone, and as we have lots of good
> stuff in the master branch, and no Mitaka release yet, I'd like to make
> a release next Thursday, December 3.
>
> First, I've caught us up (best I can tell) on missing release notes
> since our last release. Please do review them:
> https://review.openstack.org/#/c/250029/
>
> Second, please make sure when writing and reviewing code, that we are
> adding release notes for anything significant, including important bug
> fixes. See the patch above for examples on things that could be
> candidates for the release notes. Basically, if you think it's something
> a deployer or operator might care about, we should have a note for it.
>
> How to make a release note:
> http://docs.openstack.org/developer/reno/usage.html
>
>
Jim, thanks for putting together the release notes! It isn't crystal clear
to me what ought to be mentioned in release notes, but I'll use your
release notes as a guide :)

This is a heads up to folks that if you have submitted a patch that
warrants mention in the release notes, you ought to update the patch to
include a note. Otherwise, (sorry,) it will be -1'd.
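
For those who haven't written one before, a note is just a small YAML file
created by reno; a minimal sketch (the slug and note text here are made up):

    $ tox -e venv -- reno new fix-my-important-bug

    # releasenotes/notes/fix-my-important-bug-<random>.yaml
    ---
    fixes:
      - Fixed a (hypothetical) race when powering off nodes during
        cleaning.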



> Last, I'd love if cores could help test the master branch and try to
> dislodge any issues there, and also try to find any existing bug reports
> that feel like they should definitely be fixed before the release.
>
>
I think this also means that we shouldn't land any patches this coming
week that might be risky or part of an incomplete feature.


> After going through the commit log to build the release notes patch, I
> think we've done a lot of great work since the 4.2 release. Thank you
> all for that. Let's keep pushing hard on our priority list and have an
> amazing rest of the cycle! :D
>

Hear, hear!

--ruby
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [inspector] Releases and things

2015-11-26 Thread Dmitry Tantsur
FYI the same thing applies to both inspector and (very soon) 
inspector-client.


On 11/26/2015 04:30 PM, Ruby Loo wrote:

On 25 November 2015 at 18:02, Jim Rollenhagen wrote:

Hi all,

We're approaching OpenStack's M-1 milestone, and as we have lots of good
stuff in the master branch, and no Mitaka release yet, I'd like to make
a release next Thursday, December 3.

First, I've caught us up (best I can tell) on missing release notes
since our last release. Please do review them:
https://review.openstack.org/#/c/250029/

Second, please make sure when writing and reviewing code, that we are
adding release notes for anything significant, including important bug
fixes. See the patch above for examples on things that could be
candidates for the release notes. Basically, if you think it's something
a deployer or operator might care about, we should have a note for it.

How to make a release note:
http://docs.openstack.org/developer/reno/usage.html


Jim, thanks for putting together the release notes! It isn't crystal
clear to me what ought to be mentioned in release notes, but I'll use
your release notes as a guide :)

This is a heads up to folks that if you have submitted a patch that
warrants mention in the release notes, you ought to update the patch to
include a note. Otherwise, (sorry,) it will be -1'd.

Last, I'd love if cores could help test the master branch and try to
dislodge any issues there, and also try to find any existing bug reports
that feel like they should definitely be fixed before the release.


I think this also means that we shouldn't land any patches this coming
week that might be risky or part of an incomplete feature.

After going through the commit log to build the release notes patch, I
think we've done a lot of great work since the 4.2 release. Thank you
all for that. Let's keep pushing hard on our priority list and have an
amazing rest of the cycle! :D


Hear, hear!

--ruby


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [RFC] how to enable xbzrle compress for live migration

2015-11-26 Thread 少合冯
Hi all,
We want to support xbzrle compress for live migration.

Now there are 3 options,
1. add the enable flag in nova.conf.
such as a dedicated 'live_migration_compression=on|off" parameter in
nova.conf.
And nova simply enable it.
seems not good.
2.  add a parameters in live migration API.

A new array compress will be added as optional, the json-schema as below::

  {
'type': 'object',
'properties': {
  'os-migrateLive': {
'type': 'object',
'properties': {
  'block_migration': parameter_types.boolean,
  'disk_over_commit': parameter_types.boolean,
  'compress': {
'type': 'array',
'items': ["xbzrle"],
  },
  'host': host
},
'additionalProperties': False,
  },
},
'required': ['os-migrateLive'],
'additionalProperties': False,
  }


3.  dynamically choose when to activate xbzrle compress for live migration.
 This is the best.
 xbzrle really wants to be used if the network is not able to keep up
with the dirtying rate of the guest RAM.
 But how do I check whether the coming migration fits this situation?


REF:
https://review.openstack.org/#/c/248465/
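
To make option 2 concrete, the request against the existing action would look
something like this (host and values are only examples):

    POST /v2.1/servers/{server_id}/action

    {
        "os-migrateLive": {
            "host": "compute-02",
            "block_migration": false,
            "disk_over_commit": false,
            "compress": ["xbzrle"]
        }
    }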


BR
Shaohe Feng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Using docker container to run COE daemons

2015-11-26 Thread Hongbin Lu
Jay,

Agree and disagree. Containerizing some COE daemons will facilitate version
upgrades and maintenance. However, I don’t think it is correct to blindly
containerize everything unless an investigation is performed to understand the
benefits and costs of doing that. Quoting Egor, the common practice in k8s is
to containerize everything except the kubelet, because it seems it is just too
hard to containerize everything. In the case of mesos, I am not sure if it is a
good idea to move everything to containers, given the fact that it is
relatively easy to manage and upgrade debian packages on Ubuntu. However, in
the new CoreOS mesos bay [1], mesos daemons will run in containers.

In summary, I think the correct strategy is to selectively containerize some 
COE daemons, but we don’t have to containerize *all* COE daemons.

[1] https://blueprints.launchpad.net/magnum/+spec/mesos-bay-with-coreos

Best regards,
Hongbin

From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: November-26-15 2:06 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Using docker container to run COE daemons

Thanks Kai Qing, I filed a bp for mesos bay here 
https://blueprints.launchpad.net/magnum/+spec/mesos-in-container

On Thu, Nov 26, 2015 at 8:11 AM, Kai Qiang Wu wrote:

Hi Jay,

For the Kubernetes COE container ways, I think @Hua Wang is doing that.

For the swarm COE, swarm already has the master and agent running in containers.

For mesos, there is no container work yet. Maybe someone has already drafted a
bp on it? Not quite sure.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Jay Lau
To: OpenStack Development Mailing List
Date: 26/11/2015 07:15 am
Subject: [openstack-dev] [magnum] Using docker container to run COE daemons





Hi,

It is becoming more and more popular to use docker containers to run
applications, so what about leveraging this in Magnum?

What I want to do is put all COE daemons in docker containers, because now
Kubernetes, Mesos and Swarm support running in docker containers and there are
already some existing docker images/dockerfiles which we can leverage.

So what about updating all COE templates to use docker containers to run COE
daemons and maintaining some dockerfiles for different COEs in Magnum? This can
reduce the maintenance effort for COEs: if there is a new version and we want
to upgrade, updating the dockerfile is enough. Comments?

--
Thanks,
Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Thanks,
Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi][rating] Issues regarding gnocchi support in CloudKitty

2015-11-26 Thread Julien Danjou
On Thu, Nov 26 2015, Stéphane Albert wrote:

Here is a first reply about resource handling.

> Resource data
> =
> Duplicated resource information from ceilometer, only revision timestamps
> changing.

So I guess that's a bug in Gnocchi; we should not create a new revision if
there is no change. Would you mind opening a bug?

> Search for active instances during a timeframe
> ==

[…]

> Here the revision is outside of the requested timeframe.

I don't get that comment. You didn't ask for any specific revision, you
asked for start/end timestamps. So that looks correct to me. You get
the instances that were active between 10:33 and 17:33.

> Same request with a filter on the revision
> ==

[…]

> Empty response because the filter is not matching with the latest resource
> revision.

Yes, that's normal too.

Revisions are about resource modifications.
You don't need to search based on revision if you want to retrieve
active resources during a timeframe. Just started_at/ended_at.

> Workaround
> ==
> Search for every resource of type 'instances' active during the timeframe. The
> generic request is just to reduce the amount of data transferred as it's
> useless.

I don't see what you are working around. You get _exactly_ the same
result that you got with "Search for active instances during a
timeframe" so what's the problem in the first place?

> Request the correct revision from the resource_id we found before.

I don't understand what you call a "correct revision".

If what you want is the list of active resource during a timeframe and
their revision within that timeframe, you can just do:

POST http://10.8.8.168:8041/v1/search/resource/instance?history=true
{
"and": [
{
"or": [
{
"=": {
"ended_at": null
}
},
{
">=": {
"ended_at": "2015-11-23T10:33:26.388112+00:00"
}
}
]
},
{
"or": [
{
"=": {
"ended_at": null
}
},
{
"<=": {
"ended_at": "2015-11-23T17:33:26.388112+00:00"
}
}
]
},
{
"<=": {
"started_at": "2015-11-23T17:33:26.388112+00:00"
}
},
{
"<=": {
"revision_start": "2015-11-23T17:33:26.388112+00:00"
}
}
]
}


-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi][rating] Issues regarding gnocchi support in CloudKitty

2015-11-26 Thread Julien Danjou
On Thu, Nov 26 2015, Stéphane Albert wrote:

Now about measures.

> We don't get any data, but there is data with a bigger granularity. We don't
> have a way to know that but request the archive policy and parse it.

Oh I see. I think we can consider that as being a bug in the API; it
should be easy to fix, so feel free to open one. ;)

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] More and more circular build dependencies: what can we do to stop this?

2015-11-26 Thread Thierry Carrez
Thomas Goirand wrote:
> What can we do so that it doesn't constantly happen again and again?
> It's a huge pain for downstream package maintainers and distros.

The only way to avoid it constantly happening again and again is to test
against it in check/gate.

"If it's not tested, it's broken"

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi][rating] Issues regarding gnocchi support in CloudKitty

2015-11-26 Thread Stéphane Albert
On Thu, Nov 26, 2015 at 05:04:02PM +0100, Julien Danjou wrote:
> On Thu, Nov 26 2015, Stéphane Albert wrote:
> 
> Here is a first reply about resource handling.
> 
> > Resource data
> > =
> > Duplicated resource information from ceilometer, only revision timestamps
> > changing.
> 
> So I guess that's a bug in Gnocchi; we should not create a new revision if
> there is no change. Would you mind opening a bug?
I'll open a bug with a full history dump; sometimes it's due to the host
changing (flapping) from name to hash, which might be due to ceilometer.
But most of the time there is no change.
> 
> > Search for active instances during a timeframe
> > ==
> 
> […]
> 
> > Here the revision is outside of the requested timeframe.
> 
> I don't get that comment. You didn't ask for any specific revision, you
> asked for start/end timestamps. So that looks correct to me. You get
> the instances that were active between 10:33 and 17:33.
I didn't ask for a specific revision because I can't ;)
I get the list of instances active from 10:33 to 17:33, but with the
latest metadata, which can be 2 or 3 days away from the requested timeframe.
When you are doing rating based on metadata, that's problematic.
> 
> > Same request with a filter on the revision
> > ==
> 
> […]
> 
> > Empty response because the filter is not matching with the latest resource
> > revision.
> 
> Yes, that's normal too.
> 
> Revisions are about resource modifications.
> You don't need to search based on revision if you want to retrieve
> active resources during a timeframe. Just started_at/ended_at.
See above.

> 
> > Workaround
> > ==
> > Search for every resource of type 'instances' active during the timeframe. 
> > The
> > generic request is just to reduce the amount of data transferred as it's
> > useless.
> 
> I don't see what you are working around. You get _exactly_ the same
> result that you got with "Search for active instances during a
> timeframe" so what's the problem in the first place?
I do this to get the ids of active instances, so I can then query gnocchi
for the correct revision, filtering on the id.
> 
> > Request the correct revision from the resource_id we found before.
> 
> I don't understand what you call a "correct revision".
The correct revision is the revision matching the timeframe (10:33 to
17:33), so I can get the metadata that applied to the instance in
this timeframe.
> 
> If what you want is the list of active resource during a timeframe and
> their revision within that timeframe, you can just do:
> 
> POST http://10.8.8.168:8041/v1/search/resource/instance?history=true
> {
> "and": [
> {
> "or": [
> {
> "=": {
> "ended_at": null
> }
> },
> {
> ">=": {
> "ended_at": "2015-11-23T10:33:26.388112+00:00"
> }
> }
> ]
> },
> {
> "or": [
> {
> "=": {
> "ended_at": null
> }
> },
> {
> "<=": {
> "ended_at": "2015-11-23T17:33:26.388112+00:00"
> }
> }
> ]
> },
> {
> "<=": {
> "started_at": "2015-11-23T17:33:26.388112+00:00"
> }
> },
> {
> "<=": {
> "revision_start": "2015-11-23T17:33:26.388112+00:00"
> }
> }
> ]
> }
If there is more than one revision, then I'll get multiple revisions for
a resource. This implies that you request all the revisions and then
filter them client side. It's not the most efficient way to proceed, as
it could be done directly in the DB query.
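
To make it concrete, the client-side filtering we end up doing is roughly
(a sketch; resource dicts as returned by the history query above):

    def pick_revision(revisions, frame_end):
        # keep only revisions that already existed at the end of the
        # rating timeframe, then take the most recent one; ISO8601
        # strings with the same offset compare correctly as strings
        candidates = [r for r in revisions
                      if r['revision_start'] <= frame_end]
        return max(candidates, key=lambda r: r['revision_start'])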

Thanks for taking time to answer Julien.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] More and more circular build dependencies: what can we do to stop this?

2015-11-26 Thread Robert Collins
On 27 November 2015 at 03:50, Thomas Goirand  wrote:
> Hi,
>
> As a package maintainer, I'm seeing more and more circular
> build-dependencies. The latest of them is between oslotest and oslo.config
> in Mitaka.
>
> There's been some added between unittest2, linecache2 and traceback2
> too, which are now really broadly used.
>
> The only way I can work around this type of issue is to temporarily
> disable the unit tests (or allow them to fail), build both packages, and
> revert the unit tests tweaks. That's both annoying and frustrating to do.
>
> What can we do so that it doesn't constantly happen again and again?
> It's a huge pain for downstream package maintainers and distros.
>
> Cheers,
>
> Thomas Goirand (zigo)

Firstly, as Thierry says, we're not well equipped to stop things
happening without tests; it's the nature of a multi-thousand-developer
structure.

Secondly, the cases you cite are not circular build dependencies: they
are circular test dependencies, which are not the same thing.

I realise that the Debian and RPM tooling around this has historically
been weak, but it's improving -
https://wiki.debian.org/DebianBootstrap#Circular_dependencies.2Fstaged_builds
- covers the current state of the art, and should, AIUI, entirely
address your needs: you do one build that is just a pure build with no
tests-during-build-time, then when the build phase of everything is
covered, a second stage 'normal' build that includes tests.
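
e.g., assuming the packages annotate their test-only Build-Depends with
<!nocheck>, the two stages look roughly like this (sketch):

    # stage 1: bootstrap build, skipping the test suite (and thus the
    # circular test dependency)
    DEB_BUILD_PROFILES=nocheck DEB_BUILD_OPTIONS=nocheck \
        dpkg-buildpackage -us -uc

    # stage 2: once both packages exist, rebuild normally with the
    # tests enabled
    dpkg-buildpackage -us -uc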

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi][rating] Issues regarding gnocchi support in CloudKitty

2015-11-26 Thread Julien Danjou
On Thu, Nov 26 2015, Stéphane Albert wrote:

> I'll open a bug with a full history dump; sometimes it's due to the host
> changing (flapping) from name to hash, which might be due to
> ceilometer.

This would be a Ceilometer (or above) bug indeed. I think I saw
something recently. It's possible the host differs between polling and
notifications; feel free to also open a bug on the Ceilometer side.

>> > Search for active instances during a timeframe
>> > ==
>> 
>> […]
>> 
>> > Here the revision is outside of the requested timeframe.
>> 
>> I don't get that comment. You didn't ask for any specific revision, you
>> asked for start/end timestamps. So that looks correct to me. You get
>> the instances that were active between 10:33 and 17:33.
> I didn't ask for a specific revision because I can't ;)

Why can't you? You can request revisions based on their timeframes.
Or is it because the revision date is based on the upload timestamp
rather than the actual date of the revision?

This is something we could fix in the API and Ceilometer I imagine,
giving the ability to provide a timestamp for the change.

> If there is more than one revision, then I'll get multiple revisions for
> a resource. This imply that you request all the revisions and then
> filter them client side. It's not the most efficient way to proceed as
> it could be directly done in the DB query.

But but… there might be multiple revisions within a timeframe. If you
want to rate an instance that was up between 10:33 and 17:33, and that
resource has been e.g. resized 3 times at 12:00, 13:00 and 15:00, you'll
get 3 revisions from Gnocchi. Isn't that what you want?

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]New Quota Subteam on Nova

2015-11-26 Thread Raildo Mascena
Hi Markus,

I totally agree with you, we'll add some effort to close bugs related to
quotas, but there are some bugs, like "Quotas can be exceeded by making
highly parallel requests", that
will not be easy to fix with the current quota design.

For now, I'll add the link to the quota-related bugs on the etherpad and
we can start taking a look at them.

Cheers,

Raildo

On Thu, Nov 26, 2015 at 5:45 AM Markus Zoeller  wrote:

> Raildo Mascena  wrote on 11/20/2015 05:13:18 PM:
>
> > From: Raildo Mascena 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Date: 11/20/2015 05:15 PM
> > Subject: [openstack-dev] [nova]New Quota Subteam on Nova
> >
> > Hi guys
> >
> > [...]
> >
> > So was I thinking on create a subteam on Nova to speed up the code
> > review in the nested quota implementation and discuss this re-design
> > of quotas. Someone have interest on be part of this subteam or
> suggestions?
> >
> > Cheers,
> >
> > Raildo
>
> Do you see a chance that the subteam would also look at the existing
> bugs [1] in the quotas area? Most of them are pretty old (>= 1 year)
> and there might be a chance that, while you dig through the code,
> you come to the conclusion that some of them are not valid anymore or
> are already solved. That would be really helpful from a bug management
> perspective.
>
> [1] Launchpad nova bugs; tag "quotas"; status is not in progress:
> http://bit.ly/1Pbr8YL
>
> Regards, Markus Zoeller (markus_z)
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi][rating] Issues regarding gnocchi support in CloudKitty

2015-11-26 Thread Stéphane Albert
On Thu, Nov 26, 2015 at 05:31:38PM +0100, Julien Danjou wrote:
> On Thu, Nov 26 2015, Stéphane Albert wrote:
> 
> > I'll open a bug with a full history dump; sometimes it's due to the host
> > changing (flapping) from name to hash, which might be due to
> > ceilometer.
> 
> This would be a Ceilometer (or above) bug indeed. I think I saw
> something recently. It's possible the host differs between polling and
> notifications; feel free to also open a bug on the Ceilometer side.
I'll see if there's a bug declared in ceilometer's bug tracker and if
not report a new one.
> 
> >> > Search for active instances during a timeframe
> >> > ==
> >> 
> >> […]
> >> 
> >> > Here the revision is outside of the requested timeframe.
> >> 
> >> I don't get that comment. You didn't ask for any specific revision, you
> >> asked for start/end timestamps. So that looks correct to me. You get
> >> the instances that were active between 10:33 and 17:33.
> > I didn't ask for a specific revision because I can't ;)
> 
> Why can't you? You can request revisions based on their timeframes.
By "I can't" I meant I can't without asking for history. I just want to
have the same result as the standard query (the latest revision) but
with a maximum revision timestamp.
> Or is it because the revision date is based on the upload timestamp
> rather than the actual date of the revision?
> 
> This is something we could fix in the API and Ceilometer I imagine,
> giving the ability to provide a timestamp for the change.
> 
> > If there is more than one revision, then I'll get multiple revisions for
> > a resource. This implies that you request all the revisions and then
> > filter them client side. It's not the most efficient way to proceed as
> > it could be directly done in the DB query.
> 
> But but… there might be multiple revisions within a timeframe. If you
> want to rate an instance that was up between 10:33 and 17:33, and that
> resource has been e.g. resized 3 times at 12:00, 13:00 and 15:00, you'll
> get 3 revisions from Gnocchi. Isn't that what you want?
Yes and no... In CloudKitty the collection period is our epsilon. So we
won't process multiple values for it. That's the way CloudKitty is
working at the moment; it might change in the future. We get the latest
metas and most of the time take the max value from the aggregated measures
(vcpus, mem, disk, etc). It's just that at the moment ceilometer/gnocchi
is returning way too many revisions for a timeframe (and they don't have
relevant data most of the time). If we query all active instances for a
tenant we can quickly get a huge load of data.

We'll have support for ceilometer events soon, so we'll start doing diffs
on gnocchi resources to detect changes and applying rates to these. But at
the moment it's out of our scope.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with Unexpected vif_type=binding_failed

2015-11-26 Thread Mooney, Sean K
Openstack uses the hostname as a primary key in many of the projects.
Nova and neutron both do this.
If you had two nodes with the same host name then it would cause undefined 
behavior. 

Based on the error Andreas highlighted, are you currently trying to configure
ovs-dpdk with vxlan/gre?

I also noticed that the getting started guide you linked to earlier was for the
master branch (mitaka), but you mentioned you were deploying kilo.
The local.conf settings will be different in each case.





-Original Message-
From: Andreas Scheuring [mailto:scheu...@linux.vnet.ibm.com] 
Sent: Thursday, November 26, 2015 1:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with 
Unexpected vif_type=binding_failed

Praveen,
there are many errors in your q-svc log.
It says:

InvalidInput: Invalid input for operation: (u'Tunnel IP %(ip)s in use with host 
%(host)s', {'ip': u'10.81.1.150', 'host':
u'localhost.localdomain'}).\n"]


Did you maybe specify duplicate IPs in your controllers' and compute nodes'
neutron tunnel config?

Or did you change the hostname after installation?

Or maybe the code has trouble with duplicated host names?

--
Andreas
(IRC: scheuran)



On Di, 2015-11-24 at 15:28 +0100, Praveen MANKARA RADHAKRISHNAN wrote:
> Hi Sean, 
> 
> 
> Thanks for the reply. 
> 
> 
> Please find the logs attached. 
> ovs-dpdk is correctly running in compute.
> 
> 
> Thanks
> Praveen 
> 
> On Tue, Nov 24, 2015 at 3:04 PM, Mooney, Sean K
>  wrote:
> Hi would you be able to attach the
> 
> n-cpu log form the computenode  and  the
> 
> n-sch and q-svc logs for the controller so we can see if there
> is a stack trace relating to the
> 
> vm boot.
> 
>  
> 
> Also can you confirm ovs-dpdk is running correctly on the
> compute node by running 
> 
> sudo service ovs-dpdk status
> 
>  
> 
> the neutron and networking-ovs-dpdk commits are from their
> respective stable/kilo branches so they should be compatible
> 
> provided no breaking changes have been merged to either
> branch.
> 
>  
> 
> regards
> 
> sean.
> 
>  
> 
> From: Praveen MANKARA RADHAKRISHNAN
> [mailto:praveen.mank...@6wind.com] 
> Sent: Tuesday, November 24, 2015 1:39 PM
> To: OpenStack Development Mailing List (not for usage
> questions)
> Subject: Re: [openstack-dev] [networking-ovs-dpdk] VM creation
> fails with Unexpected vif_type=binding_failed
> 
>  
> 
> Hi Przemek,
> 
>  
> 
> 
> Thanks For the response, 
> 
> 
>  
> 
> 
> Here are the commit ids for Neutron and networking-ovs-dpdk 
> 
> 
>  
> 
> 
> [stack@localhost neutron]$ git log --format="%H" -n 1
> 
> 
> 026bfc6421da796075f71a9ad4378674f619193d
> 
> 
> [stack@localhost neutron]$ cd ..
> 
> 
> [stack@localhost ~]$ cd networking-ovs-dpdk/
> 
> 
> [stack@localhost networking-ovs-dpdk]$  git log --format="%H"
> -n 1
> 
> 
> 90dd03a76a7e30cf76ecc657f23be8371b1181d2
> 
> 
>  
> 
> 
> The Neutron agents are up and running in compute node. 
> 
> 
>  
> 
> 
> Thanks 
> 
> 
> Praveen
> 
> 
>  
> 
> 
>  
> 
> On Tue, Nov 24, 2015 at 12:57 PM, Czesnowicz, Przemyslaw
>  wrote:
> 
> Hi Praveen,
> 
>  
> 
> There’s been some changes recently to
> networking-ovs-dpdk, it no longer host’s a mech driver
> as the openviswitch mech driver in Neutron supports
> vhost-user ports.
> 
> I guess something went wrong and the version of
> Neutron is not matching networking-ovs-dpdk. Can you
> post commit ids of Neutron and networking-ovs-dpdk.
> 
>  
> 
> The other possibility is that the Neutron agent is not
> running/died on the compute node.
> 
> Check with:
> 
> neutron agent-list
> 
>  
> 
> Przemek
> 
>  
> 
> From: 

[openstack-dev] [neutron][tap-as-a-service] Tap-as-a-service API

2015-11-26 Thread Fawad Khaliq
Folks,

Any plan to revive this [1] so we can discuss and finalize the use cases
and APIs?

[1] https://review.openstack.org/#/c/96149/

Thanks,
Fawad Khaliq
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [RFC] how to enable xbzrle compress for live migration

2015-11-26 Thread John Garbutt
On 26 November 2015 at 15:55, 少合冯  wrote:
> Hi all,
> We want to support xbzrle compress for live migration.
>
> Now there are 3 options,
> 1. add the enable flag in nova.conf.
> such as a dedicated 'live_migration_compression=on|off" parameter in
> nova.conf.
> And nova simply enable it.
> seems not good.
> 2.  add a parameters in live migration API.
>
> A new array compress will be added as optional, the json-schema as below::
>
>   {
> 'type': 'object',
> 'properties': {
>   'os-migrateLive': {
> 'type': 'object',
> 'properties': {
>   'block_migration': parameter_types.boolean,
>   'disk_over_commit': parameter_types.boolean,
>   'compress': {
> 'type': 'array',
> 'items': ["xbzrle"],
>   },
>   'host': host
> },
> 'additionalProperties': False,
>   },
> },
> 'required': ['os-migrateLive'],
> 'additionalProperties': False,
>   }
>
>
> 3.  dynamically choose when to activate xbzrle compress for live migration.
>  This is the best.
>  xbzrle really wants to be used if the network is not able to keep up
> with the dirtying rate of the guest RAM.
>  But how do I check whether the coming migration fits this situation?
>
>
> REF:
> https://review.openstack.org/#/c/248465/

I have added my comments in the review.

I really don't want to have a REST API that is as specific as this, if
at all possible.
Nova aims to create a consistent API abstraction across all Nova clouds.

Suggested alternatives in the spec review.

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [group-based-policy] Meeting today

2015-11-26 Thread Duarte Cardoso, Igor
Hi GBP team,

Is the meeting today not going to happen due to US Thanksgiving?

Best regards,

Igor Duarte Cardoso
-
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]New Quota Subteam on Nova

2015-11-26 Thread John Garbutt
On 26 November 2015 at 16:46, Raildo Mascena  wrote:
> Hi Markus,
>
> I totally agree with you, we'll add some effort to close bugs related to
> quotas, but there are some bugs, like "Quotas can be exceeded by making highly
> parallel requests", that will not be easy to fix with the current quota design.
>
> For now, I'll add the link to the quota-related bugs on the etherpad and
> we can start taking a look at them.

A suggestion from the past that I like is creating a nova functional
test that stress-tests the quota code.

Hopefully that will be able to help reproduce the error.
That should help prove whether any proposed fix actually works.
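
Something along these lines (just a sketch; the client fixture and quota
value are hypothetical, the calls only mirror the novaclient shape):

    import concurrent.futures

    def boot_server(client):
        # each call races the others for the same quota headroom
        return client.servers.create(name='quota-stress',
                                     image='cirros', flavor='m1.tiny')

    def test_parallel_boots_do_not_exceed_quota(client, instances_quota):
        with concurrent.futures.ThreadPoolExecutor(max_workers=50) as ex:
            futures = [ex.submit(boot_server, client)
                       for _ in range(2 * instances_quota)]
        successes = [f for f in futures if f.exception() is None]
        # with a correct quota implementation the number of successful
        # boots can never exceed the project's instances quota
        assert len(successes) <= instances_quota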

Thanks,
John

> On Thu, Nov 26, 2015 at 5:45 AM Markus Zoeller  wrote:
>>
>> Raildo Mascena  wrote on 11/20/2015 05:13:18 PM:
>>
>> > From: Raildo Mascena 
>> > To: "OpenStack Development Mailing List (not for usage questions)"
>> > 
>> > Date: 11/20/2015 05:15 PM
>> > Subject: [openstack-dev] [nova]New Quota Subteam on Nova
>> >
>> > Hi guys
>> >
>> > [...]
>> >
>> > So was I thinking on create a subteam on Nova to speed up the code
>> > review in the nested quota implementation and discuss this re-design
>> > of quotas. Someone have interest on be part of this subteam or
>> suggestions?
>> >
>> > Cheers,
>> >
>> > Raildo
>>
>> Do you see a chance that the subteam would also look at the existing
>> bugs [1] in the quotas area? Most of them are pretty old (>= 1 year)
>> and there might be a chance that, while you dig through the code,
>> you come to the conclusion that some of them are not valid anymore or
>> are already solved. That would be really helpful from a bug management
>> perspective.
>>
>> [1] Launchpad nova bugs; tag "quotas"; status is not in progress:
>> http://bit.ly/1Pbr8YL
>>
>> Regards, Markus Zoeller (markus_z)
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [RFC] how to enable xbzrle compress for live migration

2015-11-26 Thread Daniel P. Berrange
On Thu, Nov 26, 2015 at 11:55:31PM +0800, 少合冯 wrote:
> Hi all,
> We want to support xbzrle compress for live migration.
> 
> Now there are 3 options,
> 1. add the enable flag in nova.conf.
> such as a dedicated 'live_migration_compression=on|off" parameter in
> nova.conf.
> And nova simply enable it.
> seems not good.

Just having a live_migration_compression=on|off parameter that
unconditionally turns it on for all VMs is not really a solution
on its own, as it leaves out the problem of compression cache
memory size, which is at the root of the design problem.

Without a sensible choice of cache size, the compression is
either useless (too small and it won't get a useful number of cache
hits and so won't save any data transfer bandwidth) or it is
hugely wasteful of resources (too large and you're just sucking
host RAM for no benefit). The QEMU migration code maintainers'
guideline is that the cache size should be approximately
equal to the guest RAM working set. IOW for a 4 GB guest
you potentially need a 4 GB cache for migration, so we're
doubling the memory usage of a guest, without the scheduler
being any the wiser, which will inevitably cause the host
to die from out-of-memory at some point.


> 2.  add a parameters in live migration API.
> 
> A new array compress will be added as optional, the json-schema as below::
> 
>   {
> 'type': 'object',
> 'properties': {
>   'os-migrateLive': {
> 'type': 'object',
> 'properties': {
>   'block_migration': parameter_types.boolean,
>   'disk_over_commit': parameter_types.boolean,
>   'compress': {
> 'type': 'array',
> 'items': ["xbzrle"],
>   },
>   'host': host
> },
> 'additionalProperties': False,
>   },
> },
> 'required': ['os-migrateLive'],
> 'additionalProperties': False,
>   }

I really don't think we want to expose this kind of hypervisor-
specific detail in the live migration API of Nova. It just leaks
too many low level details. It still leaves the problem of deciding
the compression cache size unsolved, and likewise the problem of the
scheduler knowing about the memory usage for this cache in order to
avoid OOM.

> 3.  dynamically choose when to activate xbzrle compress for live migration.
>  This is the best.
>  xbzrle really wants to be used if the network is not able to keep up
> with the dirtying rate of the guest RAM.
>  But how do I check whether the coming migration fits this situation?

FWIW, if we decide we want compression support in Nova, I think that
having the Nova libvirt driver dynamically decide when to use it is
the only viable approach. Unfortunately the way the QEMU support
is implemented makes it very hard to use, as QEMU forces you to decide
to use it upfront, at a time when you don't have any useful information
on which to make the decision :-(  To be useful IMHO, we really need
the ability to turn on compression on the fly for an existing active
migration process. ie, we'd start migration off and let it run and
only enable compression if we encounter problems with completion.
Sadly we can't do this with QEMU as it stands today :-(

Oh and of course we still need to address the issue of RAM usage and
communicating that need with the scheduler in order to avoid OOM
scenarios due to large compression cache.

I tend to feel that the QEMU compression code is currently broken by
design and needs rework in QEMU before it can be practically used in
an autonomous fashion :-(

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [RFC] how to enable xbzrle compress for live migration

2015-11-26 Thread John Garbutt
On 26 November 2015 at 17:39, Daniel P. Berrange  wrote:
> On Thu, Nov 26, 2015 at 11:55:31PM +0800, 少合冯 wrote:
>> Hi all,
>> We want to support xbzrle compress for live migration.
>>
>> Now there are 3 options,
>> 1. add the enable flag in nova.conf.
>> such as a dedicated 'live_migration_compression=on|off" parameter in
>> nova.conf.
>> And nova simply enable it.
>> seems not good.
>
> Just having a live_migration_compression=on|off parameter that
> unconditionally turns it on for all VMs is not really a solution
> on its own, as it leaves out the problem of compression cache
> memory size, which is at the root of the design problem.
>
> Without a sensible choice of cache size, the compression is
> either useless (too small and it won't get a useful number of cache
> hits and so won't save any data transfer bandwidth) or it is
> hugely wasteful of resources (too large and you're just sucking
> host RAM for no benefit). The QEMU migration code maintainers'
> guideline is that the cache size should be approximately
> equal to the guest RAM working set. IOW for a 4 GB guest
> you potentially need a 4 GB cache for migration, so we're
> doubling the memory usage of a guest, without the scheduler
> being any the wiser, which will inevitably cause the host
> to die from out-of-memory at some point.
>
>
>> 2.  add a parameters in live migration API.
>>
>> A new array compress will be added as optional, the json-schema as below::
>>
>>   {
>> 'type': 'object',
>> 'properties': {
>>   'os-migrateLive': {
>> 'type': 'object',
>> 'properties': {
>>   'block_migration': parameter_types.boolean,
>>   'disk_over_commit': parameter_types.boolean,
>>   'compress': {
>> 'type': 'array',
>> 'items': ["xbzrle"],
>>   },
>>   'host': host
>> },
>> 'additionalProperties': False,
>>   },
>> },
>> 'required': ['os-migrateLive'],
>> 'additionalProperties': False,
>>   }
>
> I really don't think we want to expose this kind of hypervisor-
> specific detail in the live migration API of Nova. It just leaks
> too many low level details. It still leaves the problem of deciding
> the compression cache size unsolved, and likewise the problem of the
> scheduler knowing about the memory usage for this cache in order to
> avoid OOM.

+1

>> 3.  dynamically choose when to activate xbzrle compress for live migration.
>>  This is the best.
>>  xbzrle really wants to be used if the network is not able to keep up
>> with the dirtying rate of the guest RAM.
>>  But how do I check whether the coming migration fits this situation?
>
> FWIW, if we decide we want compression support in Nova, I think that
> having the Nova libvirt driver dynamically decide when to use it is
> the only viable approach. Unfortunately the way the QEMU support
> is implemented makes it very hard to use, as QEMU forces you to decide
> to use it upfront, at a time when you don't have any useful information
> on which to make the decision :-(  To be useful IMHO, we really need
> the ability to turn on compression on the fly for an existing active
> migration process. ie, we'd start migration off and let it run and
> only enable compression if we encounter problems with completion.
> Sadly we can't do this with QEMU as it stands today :-(
>
> Oh and of course we still need to address the issue of RAM usage and
> communicating that need with the scheduler in order to avoid OOM
> scenarios due to large compression cache.
>
> I tend to feel that the QEMU compression code is currently broken by
> design and needs rework in QEMU before it can be practically used in
> an autonomous fashion :-(

Honestly, most of the conversation seems to be leading that way.

It does seem a nice alternative to throttling the VM performance when
trying to get the memory transfer to complete. But as you say, it seems
we can't use it that way right now.

johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [RFC] how to enable xbzrle compress for live migration

2015-11-26 Thread Paul Carlton

On 26/11/15 15:55, 少合冯 wrote:

Hi all,
We want to support xbzrle compress for live migration.

Now there are 3 options,
1. add the enable flag in nova.conf.
such as a dedicated 'live_migration_compression=on|off" parameter
in nova.conf.

And nova simply enable it.
seems not good.
2.  add a parameters in live migration API.

A new array compress will be added as optional, the json-schema as below::

  {
'type': 'object',
'properties': {
  'os-migrateLive': {
'type': 'object',
'properties': {
  'block_migration': parameter_types.boolean,
  'disk_over_commit': parameter_types.boolean,
  'compress': {
'type': 'array',
'items': ["xbzrle"],
  },
  'host': host
},
'additionalProperties': False,
  },
},
'required': ['os-migrateLive'],
'additionalProperties': False,
  }


3. dynamically choose when to activate xbzrle compress for live migration.
   This is the best.
 xbzrle really wants to be used if the network is not able to keep up 
with the dirtying rate of the guest RAM.

   But how do I check whether the coming migration fits this situation?


REF:
https://review.openstack.org/#/c/248465/


BR
Shaohe Feng


Feels to me that this is too implementation dependent to be exposed in the
API.  I've seen elsewhere discussion about removing the block migrate and
disk over commit parameters, the idea being it would figure out the type of
migration for itself, and over commit is not needed anymore.

Seems to me the prevailing view is that we should get live migration to
figure out the best setting for itself where possible.  There was discussion
of being able to have a default policy setting that will allow the operator
to define the balance between speed of migration and impact on the instance.
This could be a global default for the cloud with overriding defaults per
aggregate, image, tenant and instance, as well as the ability to vary the
setting during the migration operation.

Seems to me that items like compression should be set in configuration files
based on what works best given the cloud operator's environment?

--
Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Mobile:+44 (0)7768 994283
Email:mailto:paul.carlt...@hpe.com
Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England.
The contents of this message and any attachments to it are confidential and may be 
legally privileged. If you have received this message in error, you should delete it from 
your system immediately and advise the sender. To any recipient of this message within 
HP, unless otherwise stated you should consider this message and attachments as "HP 
CONFIDENTIAL".

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [RFC] how to enable xbzrle compress for live migration

2015-11-26 Thread Daniel P. Berrange
On Thu, Nov 26, 2015 at 05:49:50PM +, Paul Carlton wrote:
> Seems to me the prevailing view is that we should get live migration to
> figure out the best setting for
> itself where possible.  There was discussion of being able to have a default
> policy setting that will allow
> the operator to define balance between speed of migration and impact on the
> instance.  This could be
> a global default for the cloud with overriding defaults per aggregate,
> image, tenant and instance as
> well as the ability to vary the setting during the migration operation.
> 
> Seems to me that items like compression should be set in configuration files
> based on what works best
> given the cloud operator's environment?

Merely turning on use of compression is the "easy" bit - there needs to be
a way to deal with compression cache size allocation, which needs to have
some smarts in Nova, as there's no usable "one size fits all" value for
the compression cache size. If we did want to hardcode a compression cache
size, you'd have to set it as a scaling factor against the guest RAM
size. This is going to be very heavy on memory usage, so there needs to be
careful
design work to solve the problem of migration compression triggering host
OOM scenarios, particularly since we can have multiple concurrent
migrations.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] fake driver doesn't work with multi topics

2015-11-26 Thread Flavio Percoco

On 26/11/15 10:40 +0900, Masahito MUROI wrote:

Hi oslo.messaging folks,

We are trying to use oslo.messaging's fake driver [1] for our testing.
However, the driver doesn't seem to work with multiple topics. Is this
behavior expected or a bug?


mmh, I'd say it's not. It's very likely this fake driver was not
updated to support that. Any chance you can file a bug for it?
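
A minimal reproducer attached to the bug would help too; an untested
sketch, following the RPC server example from the docs, just with two
topics on one fake:// transport:

    import eventlet
    eventlet.monkey_patch()

    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_transport(cfg.CONF, url='fake://')

    class Endpoint(object):
        def ping(self, ctxt, arg):
            return arg

    servers = []
    for topic in ('topic-a', 'topic-b'):
        target = oslo_messaging.Target(topic=topic, server='server-1')
        server = oslo_messaging.get_rpc_server(transport, target,
                                               [Endpoint()],
                                               executor='eventlet')
        server.start()
        servers.append(server)

    for topic in ('topic-a', 'topic-b'):
        client = oslo_messaging.RPCClient(
            transport, oslo_messaging.Target(topic=topic))
        # calls on the second topic are where multi-topic setups
        # reportedly misbehave with the fake driver
        print(client.call({}, 'ping', arg=topic))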

Thanks,
Flavio



best regard,
Masahito


--
室井 雅仁(Masahito MUROI)
Software Innovation Center, NTT
Tel: +81-422-59-4539,FAX: +81-422-59-2699



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [RFC] how to enable xbzrle compress for live migration

2015-11-26 Thread Daniel P. Berrange
On Thu, Nov 26, 2015 at 05:39:04PM +, Daniel P. Berrange wrote:
> On Thu, Nov 26, 2015 at 11:55:31PM +0800, 少合冯 wrote:
> > 3.  dynamically choose when to activate xbzrle compress for live migration.
> >  This is the best.
> >  xbzrle really wants to be used if the network is not able to keep up
> > with the dirtying rate of the guest RAM.
> >  But how do I check whether the coming migration fits this situation?
> 
> FWIW, if we decide we want compression support in Nova, I think that
> having the Nova libvirt driver dynamically decide when to use it is
> the only viable approach. Unfortunately the way the QEMU support
> is implemented makes it very hard to use, as QEMU forces you to decide
> to use it upfront, at a time when you don't have any useful information
> on which to make the decision :-(  To be useful IMHO, we really need
> the ability to turn on compression on the fly for an existing active
> migration process. ie, we'd start migration off and let it run and
> only enable compression if we encounter problems with completion.
> Sadly we can't do this with QEMU as it stands today :-(
> 
> Oh and of course we still need to address the issue of RAM usage and
> communicating that need with the scheduler in order to avoid OOM
> scenarios due to large compression cache.
> 
> I tend to feel that the QEMU compression code is currently broken by
> design and needs rework in QEMU before it can be practically used in
> an autonomous fashion :-(

Actually thinking about it, there's not really any significant
difference between Option 1 and Option 3. In both cases we want
a nova.conf setting live_migration_compression=on|off to control
whether we want to *permit* use  of compression.

The only real difference between 1 & 3 is whether migration has
compression enabled always, or whether we turn it on part way
though migration.

So although option 3 is our desired approach (which we can't
actually implement due to QEMU limitations), option 1 could
be made fairly similar if we start off with a very small
compression cache size which would have the effect of more or
less disabling compression initially.

We already have logic in the code for dynamically increasing
the max downtime value, which we could mirror here

eg something like

 live_migration_compression=on|off

  - Whether to enable use of compression

 live_migration_compression_cache_ratio=0.8

  - The maximum size of the compression cache relative to
the guest RAM size. Must be less than 1.0

 live_migration_compression_cache_steps=10

  - The number of steps to take to get from initial cache
size to the maximum cache size

 live_migration_compression_cache_delay=75

  - The time delay in seconds between increases in cache
size


In the same way that we do with migration downtime, instead of
increasing cache size linearly, we'd increase it in ever larger
steps until we hit the maximum. So we'd start off fairly small,
a few MB, and, monitoring the cache hit rates, we'd increase it
periodically.  If the number of steps configured and time delay
between steps are reasonably large, that would have the effect
that most migrations would have a fairly small cache and would
complete without needing much compression overhead.
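
As an illustrative sketch of the kind of schedule I mean (the names
mirror the proposed options above; the numbers are arbitrary):

    def cache_size_schedule(guest_ram_mb, ratio=0.8, steps=10, delay=75):
        # yield (seconds_since_start, cache_size_mb) pairs; start tiny
        # and roughly double each step, so most migrations complete
        # before the cache (and its RAM cost) grows large
        max_cache = guest_ram_mb * ratio
        for i in range(steps + 1):
            yield i * delay, int(max_cache / (2 ** (steps - i)))

    # e.g. a 4 GB guest starts at ~3 MB and only reaches the full
    # 3276 MB (the 0.8 cap) after 750 seconds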

Doing this though, we still need a solution to the host OOM scenario
problem. We can't simply check free RAM at the start of migration and
see if there's enough to spare for the compression cache, as the scheduler
can spawn a new guest on the compute host at any time, pushing us into
OOM. We really need some way to indicate that there is a (potentially
very large) extra RAM overhead for the guest during migration.

ie if live_migration_compression_cache_ratio is 0.8 and we have a
4 GB guest, we need to make sure the scheduler knows that we are
potentially going to be using 7.2 GB of memory during migration
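
ie roughly:

    # worst-case RAM claim the scheduler would need to account for
    migration_footprint = guest_ram * (1 + cache_ratio)
    # 4 GB guest, ratio 0.8  ->  4 * 1.8 = 7.2 GB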

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [group-based-policy] Meeting today

2015-11-26 Thread Sumit Naiksatam
Hi Igor, yes, no meeting today. We discussed it in last week’s IRC meeting.
Happy Thanksgiving! ;-)

Best,
~Sumit.

On Thu, Nov 26, 2015 at 9:31 AM, Duarte Cardoso, Igor
 wrote:
> Hi GBP team,
>
>
>
> Is the meeting today not going to happen due to US Thanksgiving?
>
>
>
> Best regards,
>
>
>
> Igor Duarte Cardoso
>
> -
>
> Intel Research and Development Ireland Limited
>
> Registered in Ireland
>
> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
>
> Registered Number: 308263
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev