Re: [openstack-dev] [nova][scheduler] Anyone relying on the host_subset_size config option?

2017-05-26 Thread Rui Chen
Besides eliminating race conditions, we use host_subset_size in a special
case: we have hardware of different capacities in one deployment.
Imagine a simple case: two compute hosts (48G vs. 16G free RAM), with only
the RAM weigher enabled for nova-scheduler. If we launch 10 instances
(a 1G RAM flavor) one by one, all 10 instances will be launched on the
48G compute host, which isn't what we want. host_subset_size helps
distribute the load to random available hosts in that situation.
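The effect described above can be sketched as a tiny weigher-plus-subset
selection (a simplified illustration; the function and host names are made
up, and this is not nova's actual scheduler code):

```python
import random

def pick_host(weighed_hosts, host_subset_size=1):
    # Rank hosts by weight (e.g. free RAM), keep the top N, and choose
    # one of those at random. N=1 reproduces the deterministic pick that
    # funnels every request to the same "biggest" host.
    ranked = sorted(weighed_hosts, key=lambda h: h[1], reverse=True)
    subset = ranked[:max(1, host_subset_size)]
    return random.choice(subset)[0]

hosts = [("host-48g", 48.0), ("host-16g", 16.0)]
# With the default subset size of 1, the heaviest host always wins:
assert pick_host(hosts, host_subset_size=1) == "host-48g"
# With subset size 2, either host may be chosen:
assert pick_host(hosts, host_subset_size=2) in {"host-48g", "host-16g"}
```

With host_subset_size equal to the number of hosts (as in the TripleO case
below), the weights stop mattering and placement becomes purely random.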

Thank you for sending the mail to the operators list; it lets us get more
feedback before making any changes.

2017-05-27 4:46 GMT+08:00 Ben Nemec :

>
>
> On 05/26/2017 12:17 PM, Edward Leafe wrote:
>
>> [resending to include the operators list]
>>
>> The host_subset_size configuration option was added to the scheduler to
>> help eliminate race conditions when two requests for a similar VM would be
>> processed close together, since the scheduler’s algorithm would select the
>> same host in both cases, leading to a race and a likely failure to build
>> for the second request. By randomly choosing from the top N hosts, the
>> likelihood of a race would be reduced, leading to fewer failed builds.
>>
>> Current changes in the scheduling process now have the scheduler claiming
>> the resources as soon as it selects a host. So in the case above with 2
>> similar requests close together, the first request will claim successfully,
>> but the second will fail *while still in the scheduler*. Upon failing the
>> claim, the scheduler will simply pick the next host in its weighed list
>> until it finds one that it can claim the resources from. So the
>> host_subset_size configuration option is no longer needed.
>>
>> However, we have heard that some operators are relying on this option to
>> help spread instances across their hosts, rather than using the RAM
>> weigher. My question is: will removing this randomness from the scheduling
>> process hurt any operators out there? Or can we safely remove that logic?
>>
>
> We used host_subset_size to schedule randomly in one of the TripleO CI
> clouds.  Essentially we had a heterogeneous set of hardware where the
> numerically larger (more RAM, more disk, equal CPU cores) systems were
> significantly slower.  This caused them to be preferred by the scheduler
> with a normal filter configuration, which is obviously not what we wanted.
> I'm not sure if there's a smarter way to handle it, but setting
> host_subset_size to the number of compute nodes and disabling basically all
> of the weighers allowed us to equally distribute load so at least the slow
> nodes weren't preferred.
>
> That said, we're migrating away from that frankencloud so I certainly
> wouldn't block any scheduler improvements on it.  I'm mostly chiming in to
> describe a possible use case.  And please feel free to point out if there's
> a better way to do this. :-)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [nova] [neutron] What the behavior of AddFixedIp API should be?

2017-03-28 Thread Rui Chen
Thank you Matt, the background information is important. It seems nobody
really knows how the add-fixed-ip API works, and there is no concrete use
case for it. The neutron port-update API now also supports setting multiple
fixed IPs on a port, and the fixed-IP update is synced to the nova side
automatically (I verified this on my latest devstack). In the multi-NIC
case, updating the fixed IP of a specific port is easier for me to
understand than the nova add-fixed-ip API.

So if anyone knows the original API design, or has used the nova add/remove
fixed-ip API and would like to share their use cases, it would help us
understand how the API works and when we should use it. Then we can update
the api-ref with exact usage to avoid confusing users. Feel free to reply,
thank you.

2017-03-27 23:36 GMT+08:00 Matt Riedemann <mriede...@gmail.com>:

> On 3/27/2017 7:23 AM, Rui Chen wrote:
>
>> Hi:
>>
>> A question about nova AddFixedIp API, nova api-ref[1] describe the
>> API as "Adds a fixed IP address to a server instance, which associates
>> that address with the server.", the argument of API is network id, so if
>> there are two or more subnets in a network, which one is lucky to
>> associate ip address to the instance? and the API behavior is always
>> consistent? I'm not sure.
>> The latest code[2] get all of the instance's ports and subnets of
>> the specified network, then loop them, but it return when the first
>> update_port success, so the API behavior depends on the order of subnet
>> and port list that return by neutron API. I have no idea about what
>> scenario we should use the API in, and the original design, anyone know
>> that?
>>
>> [1]: https://developer.openstack.org/api-ref/compute/#add-associa
>> te-fixed-ip-addfixedip-action
>> [2]: https://github.com/openstack/nova/blob/master/nova/network/n
>> eutronv2/api.py#L1366
>>
>>
> I wondered about this API implementation myself awhile ago, see this bug
> report for details:
>
> https://bugs.launchpad.net/nova/+bug/1430512
>
> There was a related change for this from garyk:
>
> https://review.openstack.org/#/c/163864/
>
> But that was abandoned.
>
> I'm honestly not really sure what the direction is here. From what I
> remember when I reported that bug, this was basically a feature-parity
> implementation in the compute API for the multinic API with nova-network.
> However, I'm not sure it's very usable. There is a Tempest test for this
> API, but I think all it does is attach an interface and make sure that does
> not blow up, it does not try to use the interface to ssh into the guest,
> for example.
>
> --
>
> Thanks,
>
> Matt
>


[openstack-dev] [nova] [neutron] What the behavior of AddFixedIp API should be?

2017-03-27 Thread Rui Chen
Hi:

    A question about the nova AddFixedIp API. The nova api-ref[1] describes
the API as "Adds a fixed IP address to a server instance, which associates
that address with the server." The argument to the API is a network id, so
if there are two or more subnets in a network, which one gets to associate
an IP address with the instance? And is the API behavior always consistent?
I'm not sure.
    The latest code[2] gets all of the instance's ports and the subnets of
the specified network, then loops over them, but it returns as soon as the
first update_port call succeeds, so the API behavior depends on the order
of the subnet and port lists returned by the neutron API. I have no idea
what scenario we should use this API in, or what the original design was.
Does anyone know?

[1]:
https://developer.openstack.org/api-ref/compute/#add-associate-fixed-ip-addfixedip-action
[2]:
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1366
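The order dependence described above can be sketched like this (an
illustrative stand-in, not the real nova neutronv2 code; update_port here
is a hypothetical callable):

```python
def add_fixed_ip(ports, subnets, update_port):
    # Loop over the instance's ports and the network's subnets, returning
    # on the first successful update_port call, so whichever pair neutron
    # happens to list first wins.
    for port in ports:
        for subnet in subnets:
            if update_port(port, subnet):
                return port, subnet
    raise RuntimeError("unable to add a fixed IP on any port")

# With a stub that always succeeds, the result is simply the first pair,
# i.e. entirely determined by neutron's list ordering:
result = add_fixed_ip(["port-a", "port-b"], ["subnet-1", "subnet-2"],
                      lambda p, s: True)
assert result == ("port-a", "subnet-1")
```

Reordering either list changes the outcome, which is exactly why the API
behavior looks inconsistent to callers.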


Re: [openstack-dev] [mogan] Nominating liusheng for Mogan core

2017-03-20 Thread Rui Chen
+1

Liusheng is a responsible reviewer and maintains good review quality in
Mogan.

Thank you for working hard on Mogan, Liusheng.

2017-03-20 16:19 GMT+08:00 Zhenguo Niu :

> Hi team,
>
> I would like to nominate liusheng to Mogan core. Liusheng has been a
> significant code contributor since the project's creation providing high
> quality reviews.
>
> Please feel free to respond in public or private your support or any
> concerns.
>
>
> Thanks,
> Zhenguo
>


Re: [openstack-dev] [nova] Feedback for upcoming user survey questionnaire

2016-12-27 Thread Rui Chen
I have one question for users:

- Have you needed to add customized features on top of the upstream Nova
code to meet your special needs, or does Nova work for you out of the box?

Thanks.

2016-12-27 7:18 GMT+08:00 Jay Pipes :

> On 12/26/2016 06:08 PM, Matt Riedemann wrote:
>
>> We have the opportunity to again [1] ask a question in the upcoming user
>> survey which will be conducted in February. We can ask one question and
>> have it directed to either *users* of Nova, people *testing* nova, or
>> people *interested* in using/adopting nova. Given the existing adoption
>> of Nova in OpenStack deployments (98% as of October 2016) I think that
>> sliding scale really only makes sense to direct a question at existing
>> users of the project. It's also suggested that for projects with over
>> 50% adoption to make the question quantitative rather than qualitative.
>>
>> We have until January 9th to submit a question. If you have any
>> quantitative questions about Nova to users, please reply to this thread
>> before then.
>>
>> Personally I tend to be interested in feedback on recent development, so
>> I'd like to ask questions about cells v2 or the placement API, i.e. they
>> were optional in Newton but how many deployments that have upgraded to
>> Newton are deploying those features (maybe also noting they will be
>> required to upgrade to Ocata)? However, the other side of me knows that
>> most major production deployments are also lagging behind by a few
>> releases, and may only now be upgrading, or planning to upgrade, to
>> Mitaka since we've recently end-of-life'd the Liberty release. So asking
>> questions about cells v2 or the placement service is probably premature.
>> It might be better to ask about microversion adoption, i.e. if you're
>> monitoring API request traffic to your cloud, what % of compute API
>> requests are using a microversion > 2.1.
>>
>
> My vote would be to ask the following question:
>
> Have you considered using (or already chosen) an alternative to OpenStack
> Nova for launching your software workloads? If you have, please list one to
> three reasons why you chose this alternative.
>
> Thanks,
> -jay
>
>


Re: [openstack-dev] [Congress] Nominating Masahito for core

2016-02-17 Thread Rui Chen
+1

Congratulations!

2016-02-18 2:14 GMT+08:00 Masahito MUROI :

> Thank you folks. I'm glad to be a part of this team and community, and
> appreciate all supports from you.
>
> On 2016/02/17 12:10, Anusha Ramineni wrote:
>
>> +1
>>
>> Best Regards,
>> Anusha
>>
>> On 17 February 2016 at 00:59, Peter Balland > > wrote:
>>
>> +1
>>
>> From: Tim Hinrichs >
>> Reply-To: "OpenStack Development Mailing List (not for usage
>> questions)" > >
>> Date: Tuesday, February 16, 2016 at 11:15 AM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> > >
>> Subject: [openstack-dev] [Congress] Nominating Masahito for core
>>
>> Hi all,
>>
>> I'm writing to nominate Masahito Muroi for the Congress core
>> team.  He's been a consistent contributor for the entirety of
>> Liberty and Mitaka, both in terms of code contributions and
>> reviews.  In addition to volunteering for bug fixes and
>> blueprints, he initiated and carried out the design and
>> implementation of a new class of datasource driver that allows
>> external datasources to push data into Congress.  He has also
>> been instrumental in migrating Congress to its new distributed
>> architecture.
>>
>> Tim
>>
>>
>>
>>
>>
>
> --
> 室井 雅仁(Masahito MUROI)
> Software Innovation Center, NTT
> Tel: +81-422-59-4539
>
>
>
>


Re: [openstack-dev] [nova] Feature suggestion - API for creating VM without powering it up

2016-01-27 Thread Rui Chen
It looks like we can use user_data and cloud-init to do this.

Add the following content to user_data.txt and launch the instance like
this: nova boot --user-data user_data.txt ...; the instance will shut down
after its first boot finishes. (The file must begin with the #cloud-config
header line for cloud-init to parse it.)

#cloud-config
power_state:
 mode: poweroff
 message: Bye Bye

You can find more details in the cloud-init documentation[1].

[1]:
https://github.com/number5/cloud-init/blob/master/doc/examples/cloud-config.txt
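A minimal sketch of preparing such a user_data file (the boot command is
shown only as a comment; the image and flavor names there are placeholders):

```python
# Write a cloud-config that powers the guest off after its first boot.
# The "#cloud-config" header line is required for cloud-init to treat the
# file as cloud-config YAML.
user_data = """#cloud-config
power_state:
  mode: poweroff
  message: Bye Bye
"""

with open("user_data.txt", "w") as f:
    f.write(user_data)

# Then boot with it attached, e.g. (placeholder image/flavor names):
#   nova boot --image cirros --flavor m1.tiny --user-data user_data.txt my-vm
```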

2016-01-22 3:32 GMT+08:00 Fox, Kevin M :

> The nova instance user spec has a use case.
> https://review.openstack.org/#/c/93/
>
> Thanks,
> Kevin
> 
> From: Matt Riedemann [mrie...@linux.vnet.ibm.com]
> Sent: Thursday, January 21, 2016 7:32 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Feature suggestion - API for creating
> VM without powering it up
>
> On 1/20/2016 10:57 AM, Shoham Peller wrote:
> > Hi,
> >
> > I would like to suggest a feature in nova to allow creating a VM,
> > without powering it up.
> >
> > If the user will be able to create a stopped VM, it will allow for
> > better flexibility and user automation.
> >
> > I can personally say such a feature would greatly improve comfortability
> > of my work with nova - currently we shutdown each vm manually as we're
> > creating it.
> > What do you think?
> >
> > Regards,
> > Shoham Peller
> >
> >
> >
>
> What is your use case?
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>


Re: [openstack-dev] [congress] Symantec's security group management policies

2015-10-08 Thread Rui Chen
It's a very good example showing how to draft customized cloud policies in
an OpenStack deployment, thank you Su :-)

Some comments of mine have been added to the Google doc.


2015-10-09 4:23 GMT+08:00 Su Zhang :

> Hello,
>
> I've implemented a set of security group management policies and already
> put them into our usecase doc.
> Let me know if you guys have any comments. My policies is called "Security
> Group Management "
> You can find the use case doc at:
> https://docs.google.com/document/d/1ExDmT06vDZjzOPePYBqojMRfXodvsk0R8nRkX-zrkSw/edit#heading=h.6z1ggtfrzg3n
>
> Thanks,
>
> --
> Su Zhang
> Senior Software Engineer
> Symantec Corporation
>


Re: [openstack-dev] Re: [Congress] Tokyo sessions

2015-10-07 Thread Rui Chen
As I remember, there were 4 topics: OPNFV, Congress gating, distributed
arch, and Monasca.

Some details are in the IRC meeting log:
http://eavesdrop.openstack.org/meetings/congressteammeeting/2015/congressteammeeting.2015-10-01-00.01.log.html

2015-10-08 9:48 GMT+08:00 zhangyali (D) :

> Hi Tim,
>
>
>
> Thanks for sharing the meeting information. But does the meeting have an
> agenda of topics scheduled? I think it's better to know what we are going
> to talk about. Thanks so much!
>
>
>
> Yali
>
>
>
> *From:* Tim Hinrichs [mailto:t...@styra.com]
> *Sent:* October 2, 2015 2:52
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [Congress] Tokyo sessions
>
>
>
> Hi all,
>
>
>
> We just got a tentative assignment for our meeting times in Tokyo.  Our 3
> meetings are scheduled back-to-back-to-back on Wed afternoon from
> 2:00-4:30p.  I don't think there's much chance of getting the meetings
> moved, but does anyone have a hard conflict?
>
>
>
> Here's our schedule for Wed:
>
>
>
> Wed 11:15-12:45 HOL
>
> Wed 2:00-2:40 Working meeting
>
> Wed 2:50-3:30 Working meeting
>
> Wed 3:40-4:20 Working meeting
>
>
>
> Tim
>


Re: [openstack-dev] [Congress] PTL candidacy

2015-09-16 Thread Rui Chen
+1

Tim is an excellent and passionate leader. Go ahead, Congress :-)


2015-09-17 4:09 GMT+08:00 :

> +1 and looking forward to see you in Tokyo.
>
>
>
> Thanks,
>
> Ramki
>
>
>
> *From:* Tim Hinrichs [mailto:t...@styra.com]
> *Sent:* Tuesday, September 15, 2015 1:23 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [Congress] PTL candidacy
>
>
>
> Hi all,
>
>
>
> I’m writing to announce my candidacy for Congress PTL for the Mitaka
> cycle.  I’m excited at the prospect of continuing the development of our
> community, our code base, and our integrations with other projects.
>
>
>
> This past cycle has been exciting in that we saw several new, consistent
> contributors, who actively pushed code, submitted reviews, wrote specs, and
> participated in the mid-cycle meet-up.  Additionally, our integration with
> the rest of the OpenStack ecosystem improved with our move to running
> tempest tests in the gate instead of manually or with our own CI.  The code
> base matured as well, as we rounded out some of the features we added near
> the end of the Kilo cycle.  We also began making the most significant
> architectural change in the project’s history, in an effort meet our
> high-availability and API throughput targets.
>
>
>
> I’m looking forward to the Mitaka cycle.  My highest priority for the code
> base is completing the architectural changes that we began in Liberty.
> These changes are undoubtedly the right way forward for production use
> cases, but it is equally important that we make Congress easy to use and
> understand for both new developers and new end users.  I also plan to
> further our integration with the OpenStack ecosystem by better utilizing
> the plugin architectures that are available (e.g. devstack and tempest).  I
> will also work to begin (or continue) dialogues with other projects that
> might benefit from consuming Congress.  Finally I’m excited to continue
> working with our newest project members, helping them toward becoming core
> contributors.
>
>
>
> See you all in Tokyo!
>
> Tim
>
>
>


Re: [openstack-dev] [Congress] bugs for liberty release

2015-09-07 Thread Rui Chen
I've started to fix https://bugs.launchpad.net/congress/+bug/1492329;
if I have enough time, I can take another one or two bugs.

2015-09-06 8:13 GMT+08:00 Zhou, Zhenzan :

> I have taken two, thanks.
>
> https://bugs.launchpad.net/congress/+bug/1492308
>
> https://bugs.launchpad.net/congress/+bug/1492354
>
>
>
> BR
>
> Zhou Zhenzan
>
> *From:* Tim Hinrichs [mailto:t...@styra.com]
> *Sent:* Friday, September 4, 2015 23:40
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [Congress] bugs for liberty release
>
>
>
> Hi all,
>
>
>
> I've found a few bugs that we could/should fix by the liberty release.  I
> tagged them with "liberty-rc".  If we could all pitch in, that'd be great.
> Let me know which ones you'd like to work on so I can assign them to you in
> launchpad.
>
>
>
> https://bugs.launchpad.net/congress/+bugs/?field.tag=liberty-rc
>
>
>
> Thanks,
>
> Tim
>


[openstack-dev] [nova] Hope Server Count API can land in Mitaka

2015-08-27 Thread Rui Chen
Hi folks:

    When we use paginated queries to retrieve instances, we can't get the
total count of instances from the current list-servers API. The count of
the query result is important for operators. Think about a case: the
operators want to know how many 'error' instances there are in the current
deployment, in order to make a plan to handle these instances according to
the total count. If the query page limit is 100, they have no idea of the
count of 'error' instances when they view the first page; how many
instances are in the subsequent pages, 101 or 1000?

    I found this blueprint, Server Count API [1]; it looks like it can
solve my problem, so I would like to see it land in the Mitaka release.
But the spec [2] has not been updated since May. Is somebody still working
on this? I can help push this feature if needed.


[1]: https://blueprints.launchpad.net/nova/+spec/server-count-api
[2]: https://review.openstack.org/#/c/134279/
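Until such an API exists, operators have to tally the count client-side by
walking the pages, roughly like this (list_servers here is a hypothetical
client callable taking limit/marker arguments, mimicking the compute API's
pagination):

```python
def count_by_status(list_servers, status, limit=100):
    # Page through the server list with limit/marker and count matches
    # ourselves; the cost grows with the total number of servers, which
    # is exactly the pain point a server-count API would remove.
    total, marker = 0, None
    while True:
        page = list_servers(limit=limit, marker=marker)
        if not page:
            return total
        total += sum(1 for s in page if s["status"] == status)
        marker = page[-1]["id"]

# Stub backend with 250 servers, every third one in ERROR state:
servers = [{"id": str(i), "status": "ERROR" if i % 3 == 0 else "ACTIVE"}
           for i in range(250)]

def list_servers(limit, marker):
    start = 0 if marker is None else next(
        i for i, s in enumerate(servers) if s["id"] == marker) + 1
    return servers[start:start + limit]

assert count_by_status(list_servers, "ERROR") == 84
```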


Best Regards.


Re: [openstack-dev] [Nova] 2nd prc hackathon event finised, need your help to review patch

2015-08-24 Thread Rui Chen
We had reviewed these patches each other, and fixed some minor issues by
following others' suggestion.

Please feel free to add your comments in these patches, welcome~~

Best Regards.

2015-08-21 18:34 GMT+08:00 Qiao, Liyong liyong.q...@intel.com:

 Hi folks



 We just finished 2nd prc hackathon this Friday.

 For nova project, we finially have 31 patch/bug submitted/updated, we
 finally get out a

 etherpad link to track all bugs/patches, can you kindly help to review
 these patches on link



 https://etherpad.openstack.org/p/hackathon2_nova_list



 BR, Eli(Li Yong)Qiao





Re: [openstack-dev] [Congress] Confused syntax error when inserting rule.

2015-08-16 Thread Rui Chen
Thanks for your clarification. I think the root cause is the hidden
variables generated by the policy engine.

*error(id) :- cinder:volumes(id=id), not avail_cinder_vol(id)*
*avail_cinder_vol(id) :- cinder:volumes(id=id, status="available")*

It's a good idea; it keeps the rules simple and readable.

It points out a best practice: the negative and positive literals of the
same table shouldn't appear in one rule.

Thank you very much.
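The safety restriction behind the error can be sketched as a simple set
check (an illustration of the rule only, not Congress's implementation;
the _x_* names mimic the engine-generated hidden variables):

```python
def unsafe_negative_vars(positive_vars, negative_vars):
    # A Datalog rule is safe only if every variable in a negated literal
    # also appears in some positive literal of the body; any leftover
    # variables make the rule unsafe.
    return set(negative_vars) - set(positive_vars)

# "not cinder:volumes(id, _x_1_1, _x_1_2, ...)" reuses only "id" from the
# positive literal, so its hidden variables are unsafe:
assert unsafe_negative_vars({"id", "_x_0_1"}, {"id", "_x_1_1", "_x_1_2"}) \
       == {"_x_1_1", "_x_1_2"}
# The two-rule version negates avail_cinder_vol(id), whose only variable
# "id" is bound positively, so it is safe:
assert unsafe_negative_vars({"id"}, {"id"}) == set()
```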

2015-08-14 21:38 GMT+08:00 Tim Hinrichs t...@styra.com:

 Hi Rui,

 The problem with the following rule is that there are a bunch of hidden
 variables in the not cinder:volumes(...) literal.  The error message
 shows the hidden variables.  The syntax restriction is that every variable
 in a negative literal must appear in a positive literal in the body.  Those
 hidden variables fail to satisfy that restriction, hence the error.

  error(id) :- cinder:volumes(id=id), not cinder:volumes(id=id,
 status=available)

 The reason the other rule worked is that you made the hidden variables
 equivalent to the ones in the positive literal, e.g. x_0_1 shows up in both
 the positive and negative literals.

 error(x) :- cinder:volumes(x, _x_0_1, _x_0_2, _x_0_3, _x_0_4, _x_0_5,
 _x_0_6, _x_0_7, _x_0_8),not cinder:volumes(x, _x_0_1, _x_0_2,
 \available\,_x_0_4, _x_0_5, _x_0_6, _x_0_7, _x_0_8)

 But in the auto-generated one, the variables in the two literals are
 different e.g. _x_0_1 and _x_1_1

 error(id) :- cinder:volumes(id, _x_0_1, _x_0_2, _x_0_3, _x_0_4, _x_0_5,
 _x_0_6, _x_0_7, _x_0_8), not cinder:volumes(id, _x_1_1, _x_1_2,
 available, _x_1_4, _x_1_5, _x_1_6, _x_1_7, _x_1_8).  Unsafe lits: not
 cinder:volumes(id, _x_1_1, _x_1_2, available, _x_1_4, _x_1_5, _x_1_6,
 _x_1_7, _x_1_8)

 Probably the solution you want is to write 2 rules:

 error(id) :- cinder:volumes(id=id), not avail_cinder_vol(id)
 avail_cinder_vol(id) :- cinder:volumes(id=id, status=available)

 Tim

 On Thu, Aug 13, 2015 at 8:07 PM Rui Chen chenrui.m...@gmail.com wrote:

 Sorry, send the same mail again, please comments at here, the other mail
 lack title.

 2015-08-14 11:03 GMT+08:00 Rui Chen chenrui.m...@gmail.com:

 Hi folks:

 I face a problem when I insert a rule into Congress. I want to find
 out all of the volumes that are not available status, so I draft a rule
 like this:

 error(id) :- cinder:volumes(id=id), not cinder:volumes(id=id,
 status=available)

 But when I create the rule, a error is raised:

 (openstack) congress policy rule create chenrui_p error(id) :-
 cinder:volumes(id=id),not cinder:volumes(id=id, status=\available\)
 ERROR: openstack Syntax error for rule::Errors: Could not reorder rule
 error(id) :- cinder:volumes(id, _x_0_1, _x_0_2, _x_0_3, _x_0_4, _x_0_5,
 _x_0_6, _x_0_7, _x_0_8), not cinder:volumes(id, _x_1_1, _x_1_2,
 available, _x_1_4, _x_1_5, _x_1_6, _x_1_7, _x_1_8).  Unsafe lits: not
 cinder:volumes(id, _x_1_1, _x_1_2, available, _x_1_4, _x_1_5, _x_1_6,
 _x_1_7, _x_1_8) (vars set(['_x_1_2', '_x_1_1', '_x_1_6', '_x_1_7',
 '_x_1_4', '_x_1_5', '_x_1_8'])) (HTTP 400) (Request-ID:
 req-1f4432d6-f869-472b-aa7d-4cf78dd96fa1)

 I check the Congress policy docs [1], looks like that the rule don't
 break any syntax restrictions.

 If I modify the rule like this, it works:

 (openstack) congress policy rule create chenrui_p error(x) :-
 cinder:volumes(x, _x_0_1, _x_0_2, _x_0_3, _x_0_4, _x_0_5, _x_0_6, _x_0_7,
 _x_0_8),not cinder:volumes(x, _x_0_1, _x_0_2, \available\,_x_0_4, _x_0_5,
 _x_0_6, _x_0_7, _x_0_8)

 +-++
 | Field   | Value
|

 +-++
 | comment | None
   |
 | id  | ad121e09-ba0a-45d6-bd18-487d975d5bf5
   |
 | name| None
   |
 | rule| error(x) :-
|
 | | cinder:volumes(x, _x_0_1, _x_0_2, _x_0_3, _x_0_4,
 _x_0_5, _x_0_6, _x_0_7, _x_0_8), |
 | | not cinder:volumes(x, _x_0_1, _x_0_2, available,
 _x_0_4, _x_0_5, _x_0_6, _x_0_7, _x_0_8) |

 +-++

 I'm not sure this is a bug or I miss something from docs, so I need
 some feedback from mail list.
 Feel free to discuss about it.


 [1]:
 http://congress.readthedocs.org/en/latest/policy.html#datalog-syntax-restrictions


 Best Regards.



[openstack-dev] [Congress] Confused syntax error when inserting rule.

2015-08-13 Thread Rui Chen
Hi folks:

I face a problem when I insert a rule into Congress. I want to find out
all of the volumes that are not in "available" status, so I drafted a rule
like this:

error(id) :- cinder:volumes(id=id), not cinder:volumes(id=id,
status="available")

But when I create the rule, an error is raised:

(openstack) congress policy rule create chenrui_p "error(id) :-
cinder:volumes(id=id), not cinder:volumes(id=id, status=\"available\")"
ERROR: openstack Syntax error for rule::Errors: Could not reorder rule
error(id) :- cinder:volumes(id, _x_0_1, _x_0_2, _x_0_3, _x_0_4, _x_0_5,
_x_0_6, _x_0_7, _x_0_8), not cinder:volumes(id, _x_1_1, _x_1_2,
"available", _x_1_4, _x_1_5, _x_1_6, _x_1_7, _x_1_8).  Unsafe lits: not
cinder:volumes(id, _x_1_1, _x_1_2, "available", _x_1_4, _x_1_5, _x_1_6,
_x_1_7, _x_1_8) (vars set(['_x_1_2', '_x_1_1', '_x_1_6', '_x_1_7',
'_x_1_4', '_x_1_5', '_x_1_8'])) (HTTP 400) (Request-ID:
req-1f4432d6-f869-472b-aa7d-4cf78dd96fa1)

I checked the Congress policy docs [1]; it looks like the rule doesn't
break any syntax restrictions.

If I modify the rule like this, it works:

(openstack) congress policy rule create chenrui_p "error(x) :-
cinder:volumes(x, _x_0_1, _x_0_2, _x_0_3, _x_0_4, _x_0_5, _x_0_6, _x_0_7,
_x_0_8), not cinder:volumes(x, _x_0_1, _x_0_2, \"available\", _x_0_4,
_x_0_5, _x_0_6, _x_0_7, _x_0_8)"
+---------+--------------------------------------------------------------------------------------------+
| Field   | Value                                                                                      |
+---------+--------------------------------------------------------------------------------------------+
| comment | None                                                                                       |
| id      | ad121e09-ba0a-45d6-bd18-487d975d5bf5                                                       |
| name    | None                                                                                       |
| rule    | error(x) :-                                                                                |
|         | cinder:volumes(x, _x_0_1, _x_0_2, _x_0_3, _x_0_4, _x_0_5, _x_0_6, _x_0_7, _x_0_8),         |
|         | not cinder:volumes(x, _x_0_1, _x_0_2, "available", _x_0_4, _x_0_5, _x_0_6, _x_0_7, _x_0_8) |
+---------+--------------------------------------------------------------------------------------------+

I'm not sure whether this is a bug or I'm missing something from the docs,
so I'd like some feedback from the mailing list.
Feel free to discuss it.


[1]:
http://congress.readthedocs.org/en/latest/policy.html#datalog-syntax-restrictions


Best Regards.


Re: [openstack-dev] [devstack] Restart openstack service

2015-08-13 Thread Rui Chen
I use *screen* in devstack: Ctrl+C kills a service, then I restart it in
the console.

Please try the following command in your devstack environment, and read
some docs:

*screen -r stack*

http://www.ibm.com/developerworks/cn/linux/l-cn-screen/



2015-08-14 11:20 GMT+08:00 Guo, Ruijing ruijing@intel.com:

 It would be very useful to restart openstack services in devstack so that
 we don’t need to unstack and stack again.

 How much effort would it take to support restarting openstack? Is anyone
 interested in that?



 Thanks,

 -Ruijing



Re: [openstack-dev] [Congress] Confused syntax error when inserting rule.

2015-08-13 Thread Rui Chen
Sorry for sending the same mail twice; please comment on this thread, since
the other mail lacks a title.

2015-08-14 11:03 GMT+08:00 Rui Chen chenrui.m...@gmail.com:

 Hi folks:

 I ran into a problem when inserting a rule into Congress. I want to find
 all of the volumes that are not in 'available' status, so I drafted a rule
 like this:

 error(id) :- cinder:volumes(id=id), not cinder:volumes(id=id,
 status=available)

 But when I create the rule, an error is raised:

 (openstack) congress policy rule create chenrui_p error(id) :-
 cinder:volumes(id=id), not cinder:volumes(id=id, status=\"available\")
 ERROR: openstack Syntax error for rule::Errors: Could not reorder rule
 error(id) :- cinder:volumes(id, _x_0_1, _x_0_2, _x_0_3, _x_0_4, _x_0_5,
 _x_0_6, _x_0_7, _x_0_8), not cinder:volumes(id, _x_1_1, _x_1_2,
 available, _x_1_4, _x_1_5, _x_1_6, _x_1_7, _x_1_8).  Unsafe lits: not
 cinder:volumes(id, _x_1_1, _x_1_2, available, _x_1_4, _x_1_5, _x_1_6,
 _x_1_7, _x_1_8) (vars set(['_x_1_2', '_x_1_1', '_x_1_6', '_x_1_7',
 '_x_1_4', '_x_1_5', '_x_1_8'])) (HTTP 400) (Request-ID:
 req-1f4432d6-f869-472b-aa7d-4cf78dd96fa1)

 I checked the Congress policy docs [1]; it looks like the rule doesn't
 break any of the syntax restrictions.

 If I modify the rule like this, it works:

 (openstack) congress policy rule create chenrui_p error(x) :-
 cinder:volumes(x, _x_0_1, _x_0_2, _x_0_3, _x_0_4, _x_0_5, _x_0_6, _x_0_7,
 _x_0_8), not cinder:volumes(x, _x_0_1, _x_0_2, \"available\", _x_0_4, _x_0_5,
 _x_0_6, _x_0_7, _x_0_8)

 +---------+--------------------------------------------------------------------------------------------+
 | Field   | Value                                                                                      |
 +---------+--------------------------------------------------------------------------------------------+
 | comment | None                                                                                       |
 | id      | ad121e09-ba0a-45d6-bd18-487d975d5bf5                                                       |
 | name    | None                                                                                       |
 | rule    | error(x) :-                                                                                |
 |         | cinder:volumes(x, _x_0_1, _x_0_2, _x_0_3, _x_0_4, _x_0_5, _x_0_6, _x_0_7, _x_0_8),         |
 |         | not cinder:volumes(x, _x_0_1, _x_0_2, "available", _x_0_4, _x_0_5, _x_0_6, _x_0_7, _x_0_8) |
 +---------+--------------------------------------------------------------------------------------------+

 I'm not sure whether this is a bug or I'm missing something in the docs,
 so I'd like some feedback from the mailing list.
 Feel free to discuss it.


 [1]:
 http://congress.readthedocs.org/en/latest/policy.html#datalog-syntax-restrictions


 Best Regards.





Re: [openstack-dev] [Congress] meeting time change

2015-08-02 Thread Rui Chen
Converted to Asian time zones, the new time is easy for us to remember :)

For CST (UTC+8:00):
Thursday 08:00 AM

For JST (UTC+9:00):
Thursday 09:00 AM

http://www.timeanddate.com/worldclock/fixedtime.html?iso=20150806T00&p1=1440


2015-08-01 1:20 GMT+08:00 Tim Hinrichs t...@styra.com:

 Peter pointed out that no one uses #openstack-meeting-2.  So we'll go with
 #openstack-meeting.  Here are the updated meeting details.

 Room: #openstack-meeting
 Time: Wednesday 5p Pacific = Thursday midnight UTC

 There's a change out for review that will update the meeting website once
 it merges.
 http://eavesdrop.openstack.org/#Congress_Team_Meeting
 https://review.openstack.org/#/c/207981/

 Tim

 On Fri, Jul 31, 2015 at 9:24 AM Tim Hinrichs t...@styra.com wrote:

 Hi all,

 We managed to find a day/time where all the active contributors can
 attend (without being up too early/late).  The room, day, and time have all
 changed.

 Room: #openstack-meeting-2
 Time: Wednesday 5p Pacific = Thursday midnight UTC

 Next week we begin with this new schedule.

 And don't forget that next week Thu/Fri is our Mid-cycle sprint.  Hope to
 see you there!

 Tim




Re: [openstack-dev] [Congress] How to start a replica ?

2015-07-27 Thread Rui Chen
According to the error message, it looks like there are not enough MySQL
connections available for the HA Congress server to launch.

Can you double-check your MySQL '*max_connections*' option in my.cnf and
show the active connections in the mysql console, like this:

*mysql> SHOW FULL PROCESSLIST;*

more details:

https://dev.mysql.com/doc/refman/5.5/en/too-many-connections.html


2015-07-28 6:46 GMT+08:00 Tim Hinrichs t...@styra.com:

 Could you show us the contents of /tmp/congress.conf?

 Tim


 On Mon, Jul 27, 2015 at 3:09 PM Wong, Hong hong.w...@hp.com wrote:

  Hi Tim and Alex,



 I see congress recently added the HA functionality, and I was looking at
 the tempest test code to understand how to start a replica.  I created a
 new congress.conf file with the different bind_port and set the
 datasource_sync_period value to 5.  However, I got the errors below when
 I try to bring up the replica:



 to start the replica: cd /opt/stack/congress && python
 /usr/local/bin/congress-server --config-file /tmp/congress.conf & echo $!
 >/tmp/congress.pid; fg || echo "congress failed to start" | tee
 /tmp/congress.failure



 2015-07-27 14:56:33.592 TRACE congress.service Traceback (most recent
 call last):

 2015-07-27 14:56:33.592 TRACE congress.service   File
 /opt/stack/congress/congress/service.py, line 32, in wrapper

 2015-07-27 14:56:33.592 TRACE congress.service return f(*args, **kw)

 2015-07-27 14:56:33.592 TRACE congress.service   File
 /opt/stack/congress/congress/service.py, line 50, in congress_app_factory

 2015-07-27 14:56:33.592 TRACE congress.service cage =
 harness.create(root_path, data_path)

 2015-07-27 14:56:33.592 TRACE congress.service   File
 /opt/stack/congress/congress/harness.py, line 151, in create

 2015-07-27 14:56:33.592 TRACE congress.service for policy in
 db_policy_rules.get_policies():

 2015-07-27 14:56:33.592 TRACE congress.service   File
 /opt/stack/congress/congress/db/db_policy_rules.py, line 84, in
 get_policies

 2015-07-27 14:56:33.592 TRACE congress.service session = session or
 db.get_session()

 2015-07-27 14:56:33.592 TRACE congress.service   File
 /opt/stack/congress/congress/db/api.py, line 40, in get_session

 2015-07-27 14:56:33.592 TRACE congress.service facade =
 _create_facade_lazily()

 2015-07-27 14:56:33.592 TRACE congress.service   File
 /opt/stack/congress/congress/db/api.py, line 27, in _create_facade_lazily

 2015-07-27 14:56:33.592 TRACE congress.service _FACADE =
 session.EngineFacade.from_config(cfg.CONF, sqlite_fk=True)

 2015-07-27 14:56:33.592 TRACE congress.service   File
 /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py,
 lin

 .

 2015-07-27 14:56:33.592 TRACE congress.service   File
 /usr/local/lib/python2.7/dist-packages/pymysql/err.py, line 112, in
 _check_mysql_exception

 2015-07-27 14:56:33.592 TRACE congress.service raise
 errorclass(errno, errorvalue)

 2015-07-27 14:56:33.592 TRACE congress.service OperationalError:
 (pymysql.err.OperationalError) (1040, u'Too many connections')



 I got the same error when running the tempest test as well.  Any idea ?



 Thanks,

 Hong





Re: [openstack-dev] [Congress] New IRC meeting time

2015-07-16 Thread Rui Chen
Wonderful! I won't need to stay up late.

Thanks, everybody.

2015-07-15 10:28 GMT+08:00 Masahito MUROI muroi.masah...@lab.ntt.co.jp:

 I'm happy to see that.

 btw, is the day on Tuesday?

 best regard,
 masa

 On 2015/07/15 9:52, Zhou, Zhenzan wrote:

 Glad to see this change.
 Thanks for the supporting for developers in Asia☺

 BR
 Zhou Zhenzan

 From: Tim Hinrichs [mailto:t...@styra.com]
 Sent: Wednesday, July 15, 2015 02:14
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Congress] New IRC meeting time

 To better accommodate the active contributors, we're moving our IRC
 meeting to

 2300 UTC = 4p Pacific = 7p Eastern
 #openstack-meeting-3

 Hope to see you there!
 Tim






 --
 室井 雅仁(Masahito MUROI)
 Software Innovation Center, NTT
 Tel: +81-422-59-4539,FAX: +81-422-59-2699




Re: [openstack-dev] [Nova] Approval dates for non-priority specs

2015-06-27 Thread Rui Chen
Thank you, Alex; that's helpful for me :)

2015-06-27 13:51 GMT+08:00 Alex Xu sou...@gmail.com:

 Hi, Rui, Abhishek,

 There is an email that answers your question:
 http://lists.openstack.org/pipermail/openstack-dev/2015-June/068079.html

 Thanks
 Alex

 2015-06-27 11:33 GMT+08:00 Rui Chen chenrui.m...@gmail.com:

 I have the same question about this.

 My spec and blueprint:

 https://review.openstack.org/#/c/169638/

 https://blueprints.launchpad.net/nova/+spec/selecting-subnet-when-creating-vm



 2015-06-26 17:56 GMT+08:00 Kekane, Abhishek abhishek.kek...@nttdata.com
 :

  Hi Nova Devs,



 I have submitted a nova spec [1] for improving unshelve api performance.



 It's not listed under the Liberty priorities; in the spec I have set the
 project priority to None, and in the Launchpad blueprint [2] the milestone
 target is also None.

 As per the Nova Liberty schedule, June 23-25, 2015 were the dates for the
 Nova spec freeze for L, and July 28-30, 2015 (liberty-2) is the
 non-priority feature freeze.

 I have raised review requests in a couple of Nova meetings as well as on
 IRC whenever I got a chance for discussion, but failed to get any
 constructive feedback.

 I would like to know the last date for approving non-priority specs for
 Liberty.

 If it has already passed, can I raise a “Spec freeze exception” for the
 same?





 Thank you in advance.



 Abhishek Kekane



 [1] https://review.openstack.org/#/c/135387/

 [2]
 https://blueprints.launchpad.net/nova/+spec/improve-unshelve-performance



 __
 Disclaimer: This email and any attachments are sent in strictest
 confidence
 for the sole use of the addressee and may contain legally privileged,
 confidential, and proprietary data. If you are not the intended
 recipient,
 please advise the sender by replying promptly to this email and then
 delete
 and destroy this email and any attachments without any further use,
 copying
 or forwarding.




Re: [openstack-dev] [Nova] Approval dates for non-priority specs

2015-06-26 Thread Rui Chen
I have the same question about this.

My spec and blueprint:

https://review.openstack.org/#/c/169638/
https://blueprints.launchpad.net/nova/+spec/selecting-subnet-when-creating-vm



2015-06-26 17:56 GMT+08:00 Kekane, Abhishek abhishek.kek...@nttdata.com:

  Hi Nova Devs,



 I have submitted a nova spec [1] for improving unshelve api performance.



 It's not listed under the Liberty priorities; in the spec I have set the
 project priority to None, and in the Launchpad blueprint [2] the milestone
 target is also None.

 As per the Nova Liberty schedule, June 23-25, 2015 were the dates for the
 Nova spec freeze for L, and July 28-30, 2015 (liberty-2) is the
 non-priority feature freeze.

 I have raised review requests in a couple of Nova meetings as well as on
 IRC whenever I got a chance for discussion, but failed to get any
 constructive feedback.

 I would like to know the last date for approving non-priority specs for
 Liberty.

 If it has already passed, can I raise a “Spec freeze exception” for the
 same?





 Thank you in advance.



 Abhishek Kekane



 [1] https://review.openstack.org/#/c/135387/

 [2]
 https://blueprints.launchpad.net/nova/+spec/improve-unshelve-performance





[openstack-dev] [nova] Should we add instance action event to live migration?

2015-06-03 Thread Rui Chen
Hi all:

We record an instance action, with action events, for most instance
operations, except live migration. In the current master code, when we do a
live migration the instance action is recorded, but the action event for
live migration is lost. I'm not sure whether it's a bug or intended
behavior, so I want to get more feedback on the mailing list.

I found the patch https://review.openstack.org/#/c/95440/

It adds the live migration action, but no event. That looks weird.

I think there are two improvements we could make:

[1]: add the live migration event, keeping consistency with other instance
operations.

[2]: remove the live migration action, in order to make the operation
transparent to end users, as Andrew says in the patch comments.

Which way do you prefer? Please let me know, thanks.


[openstack-dev] [nova] Is Live-migration not supported in CONF.libvirt.images_type=lvm case?

2015-05-08 Thread Rui Chen
Hi all:

I see that bug [1], "block/live migration doesn't work with LVM as
libvirt storage", is marked as 'Fix released', but I don't think this issue
is really solved. I checked the live-migration code and can't find any
logic for handling LVM disks. Please correct me if I'm wrong.

In the bug [1] comments, the only related merged patch is
https://review.openstack.org/#/c/73387/ ; it covers the 'resize/migrate'
code path, not live migration, and I don't think bug [1] is a duplicate of
bug [2]: they are different use cases, live migration vs. migration.

So should we reopen this bug and add some documentation describing that
live migration is not supported by the current code?

[1]: https://bugs.launchpad.net/nova/+bug/1282643
[2]: https://bugs.launchpad.net/nova/+bug/1270305


Re: [openstack-dev] [nova] upgrade_levels in nova upgrade

2015-05-07 Thread Rui Chen
Assuming my understanding is correct, two things make you sad in the
upgrade process:

1. you must reconfigure 'upgrade_levels' in the config file during
post-upgrade cleanup.
2. you must restart the service in order to make the 'upgrade_levels'
option take effect.

I think configuration management tools (e.g. Chef, Puppet) can solve #1.
We can change the 'upgrade_levels' option in the config file after
upgrading and sync it to all the hosts conveniently.

#2 is more complex; fortunately there is some work trying to solve it,
[1] [2]. If all the OpenStack services supported SIGHUP, we would just need
to send a SIGHUP to make the services reload their config files.

Correct me if I'm wrong, thanks.


[1]: https://blueprints.launchpad.net/glance/+spec/sighup-conf-reload
[2]: https://bugs.launchpad.net/oslo-incubator/+bug/1276694
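A minimal sketch of the SIGHUP idea in #2, using only the Python standard library; the config dict and 'upgrade_level' key are hypothetical stand-ins for re-reading nova.conf, not nova's actual reload mechanism:

```python
import signal

# Hypothetical in-memory config; a real service would re-read its
# config file (e.g. nova.conf) inside the handler instead.
config = {"upgrade_level": "icehouse"}

def reload_config(signum, frame):
    # Re-read configuration on SIGHUP instead of restarting the service;
    # here we simply remove the RPC version pin as an illustration.
    config["upgrade_level"] = None

# Register the handler; POSIX-only, since SIGHUP doesn't exist on Windows.
signal.signal(signal.SIGHUP, reload_config)
```

With this in place, `kill -HUP <pid>` (or `os.kill(pid, signal.SIGHUP)`) triggers the reload without a restart, which is the behavior the referenced blueprint and bug aim to provide across services.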



2015-05-07 16:09 GMT+08:00 Guo, Ruijing ruijing@intel.com:

  Hi, All,



 In existing design, we need to reconfig nova.conf and restart nova
 service during post-upgrade cleanup

 As https://www.rdoproject.org/Upgrading_RDO_To_Icehouse:



 I propose to send RPC message to remove RPC API version pin.





 1.   Stop services  (same with existing)

 2.   Upgrade packages (same with existing)

 3.   Upgrade DB schema (same with existing)

 4.   Start service with upgrade  (add upgrade parameter so that nova
 will use old version of RPC API. We may add more parameter for other
 purpose including query upgrade progress)

 5.   Send RPC message to remove RPC API version pin. (we don’t need
 to reconfig nova.conf and restart nova service)



 What do you think?



 Thanks,

 -Ruijing







Re: [openstack-dev] [nova] Port Nova to Python 3

2015-04-24 Thread Rui Chen
Maybe we can add a non-voting python3 Jenkins job to help us find potential
issues.

2015-04-24 16:34 GMT+08:00 Victor Stinner vstin...@redhat.com:

 Hi,

 Porting OpenStack applications during the Liberty Cycle was discussed last
 days in the thread [oslo] eventlet 0.17.3 is now fully Python 3
 compatible.

 I wrote a spec to port Nova to Python 3:

https://review.openstack.org/#/c/176868/

 I mentioned the 2 other Python 3 specs for Neutron and Heat.

 You can reply to this email, or comment the review, if you want to discuss
 Python 3 in Nova, or if you have any question related to Python 3.

 See also the Python 3 wiki page:

https://wiki.openstack.org/wiki/Python3

 Thanks,
 Victor



[openstack-dev] [nova] Need more suggestion about nova-scheduler patch/147048

2015-04-23 Thread Rui Chen
Hi all:

I'm working on the patch https://review.openstack.org/#/c/147048/ for
bug/1408859

Description of the bug:
When nova-scheduler can't select enough hosts for multiple instances being
created, a NoValidHost exception is raised, but some hosts have already had
resources consumed by instances in the _schedule loop; the resources of
those consumed hosts are not reverted, which results in resource leakage in
nova-scheduler.

But now I'm in a dilemma, because the reviewers have different opinions
about the implementation of the patch, so I need more feedback.

See patch set 36.
The simple way is to set the 'updated' attribute of each consumed host to
None, and make use of the 'update_from_compute_node' logic to refresh the
local HostState on the next scheduling request.

http://git.openstack.org/cgit/openstack/nova/tree/nova/scheduler/host_manager.py#n185

But as John says, it would break the CachingScheduler within a refresh
cycle: the resources in the HostState would not be fixed until the next
periodic_tasks run.

On the other side, Alex and Sylvain think this should be acceptable,
because the CachingScheduler uses out-of-date HostStates from its cache by
design, and the usage limitation is described in the class notes.
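A rough sketch of the patch-set-36 approach, assuming a much-simplified HostState; the 'updated' attribute and the role of 'update_from_compute_node' follow the mail, but everything else here is invented for illustration and is not nova's actual scheduler code:

```python
class HostState:
    """Greatly simplified stand-in for nova.scheduler.host_manager.HostState."""
    def __init__(self, host, free_ram_mb):
        self.host = host
        self.free_ram_mb = free_ram_mb
        self.updated = "2015-04-23T00:00:00"  # last refresh timestamp

    def consume_from_instance(self, ram_mb):
        # Deduct resources when an instance is placed on this host.
        self.free_ram_mb -= ram_mb

def revert_consumed_hosts(consumed):
    # Setting 'updated' to None forces update_from_compute_node() to
    # overwrite the stale local state on the next scheduling request.
    for host_state in consumed:
        host_state.updated = None

hosts = [HostState("node1", 2048), HostState("node2", 1024)]
hosts[0].consume_from_instance(512)
# NoValidHost raised mid-loop -> roll back only what was consumed so far:
revert_consumed_hosts(hosts[:1])
assert hosts[0].updated is None and hosts[1].updated is not None
```

This also illustrates John's objection: nothing is corrected immediately, so a scheduler that deliberately serves from a cache (the CachingScheduler) keeps the stale numbers until its next refresh cycle.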

I really hope this patch can be merged ASAP; it has spent several months in
review :(

Please let me know your opinion. Feel free to discuss it, thanks.

Best Regards.


Re: [openstack-dev] [nova] [libvirt] The risk of hanging when shutdown instance.

2015-03-28 Thread Rui Chen
Thank you for reply, Chris.


2015-03-27 23:15 GMT+08:00 Chris Friesen chris.frie...@windriver.com:

 On 03/26/2015 07:44 PM, Rui Chen wrote:

 Yes, you are right, but we found our instance hang at first
 dom.shutdown() call,
 if the dom.shutdown() don't return, there is no chance to execute
 dom.destroy(),
 right?


 Correct.  The code is written assuming dom.shutdown() cannot block
 indefinitely.

 The libvirt docs at https://libvirt.org/html/libvirt-libvirt-domain.html#
 virDomainShutdown say ...this command returns as soon as the shutdown
 request is issued rather than blocking until the guest is no longer
 running.

 If dom.shutdown() blocks indefinitely, then that's a libvirt bug.


 Chris




Re: [openstack-dev] [nova] [libvirt] The risk of hanging when shutdown instance.

2015-03-26 Thread Rui Chen
Yes, you are right, but we found our instance hanging at the first
dom.shutdown() call. If dom.shutdown() doesn't return, there is no chance
to execute dom.destroy(), right?

2015-03-26 23:20 GMT+08:00 Chris Friesen chris.frie...@windriver.com:

 On 03/25/2015 10:15 PM, Rui Chen wrote:

 Hi all:

  I found a discuss about the libvirt shutdown API maybe hang when
 shutdown
 instance in libvirt community,
 https://www.redhat.com/archives/libvir-list/2015-March/msg01121.html

  I'm not sure that whether there are some risks when we shutdown
 instance in
 nova.

  Three questions:
  1. Whether acpi is the default shutdown mode in QEMU/KVM hypervisor
 when we
 shutdown instance using libvirt?
  2. Whether acpi is asynchronous mode, and qemu-agent is synchronous
 mode
 when we shutdown instance?
  3. If the hypervisor use qemu-agent as default shutdown mode, how we
 can
 deal the hanging issue?



 When shutting down an instance if there is a timeout (controlled by config
 file or system metadata) the code will first attempt a clean shutdown via
 dom.shutdown().  If that doesn't terminate the instance by the time the
 timeout expires, then we'll call virt_dom.destroy().

 Chris



[openstack-dev] [nova] [libvirt] The risk of hanging when shutdown instance.

2015-03-25 Thread Rui Chen
Hi all:

I found a discussion in the libvirt community about the shutdown API
possibly hanging when shutting down an instance:
https://www.redhat.com/archives/libvir-list/2015-March/msg01121.html

I'm not sure that whether there are some risks when we shutdown
instance in nova.

Three questions:
1. Is ACPI the default shutdown mode in the QEMU/KVM hypervisor when we
shut down an instance using libvirt?
2. Is ACPI asynchronous, and the qemu-agent synchronous, when we shut down
an instance?
3. If the hypervisor uses the qemu-agent as the default shutdown mode, how
can we deal with the hanging issue?

Best Regards.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] How to deal the rpc timeout between compute and conductor?

2015-03-23 Thread Rui Chen
Hi all:

I deploy my OpenStack with the VMware driver; one nova-compute connects to
a VMware deployment with about 3000 VMs in it. I use MySQL. The
InstanceList.get_by_host method raises an RPC timeout error when
ComputeManager.init_host() and the _sync_power_states periodic task
execute.
In the nova VMware driver, one nova-compute host currently maps to the
whole VMware deployment, which may contain several clusters. When
InstanceList.get_by_host executes in ComputeManager, nova-compute makes an
RPC call to nova-conductor, and nova-conductor fetches all the instances
in the whole VMware deployment at once; in my case, that's 3000 instances.
The long-running SQL query may lead to the RPC timeout from nova-compute
to nova-conductor. We only face this issue with the VMware driver.

https://bugs.launchpad.net/nova/+bug/1420662
https://review.openstack.org/#/c/155676/

In the patch I split the large RPC request into multiple small RPC requests
using a pagination mechanism in order to fix this issue, but sahid thinks
it looks like a hack and that we need a real pattern to handle this
problem.
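
The pagination idea can be illustrated with a generic sketch (illustrative
names, not the actual nova/oslo API from the patch): instead of one query
returning 3000 rows, the caller requests fixed-size pages, using the last
row of each page as the marker for the next request.

```python
def fetch_all_paged(fetch_page, page_size=200):
    """Collect all records by repeatedly fetching bounded pages.

    fetch_page(limit, marker) returns up to `limit` records that come
    after `marker` (None means start from the beginning).
    """
    results = []
    marker = None
    while True:
        page = fetch_page(limit=page_size, marker=marker)
        results.extend(page)
        if len(page) < page_size:
            return results  # short page: no more records
        marker = page[-1]
```

Each individual call then stays small enough that neither the SQL query
nor the RPC response should approach the timeout.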

If you have other better idea, please let me know.
Feel free to discuss it. Thanks.

Best Regards
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-05 Thread Rui Chen
Thank you very much for the in-depth discussion about this topic, @Nikola
and @Sylvain.

I agree that we should solve the technical debt first, and then make the
scheduler better.

Best Regards.

2015-03-05 21:12 GMT+08:00 Sylvain Bauza sba...@redhat.com:


 Le 05/03/2015 13:00, Nikola Đipanov a écrit :

  On 03/04/2015 09:23 AM, Sylvain Bauza wrote:

 Le 04/03/2015 04:51, Rui Chen a écrit :

 Hi all,

 I want to make it easy to launch a bunch of scheduler processes on a
 host; multiple scheduler workers will make use of the host's multiple
 processors and enhance the performance of nova-scheduler.

 I had registered a blueprint and committed a patch to implement it.
 https://blueprints.launchpad.net/nova/+spec/scheduler-multiple-workers-support

 This patch has been applied in our performance environment and passed
 some test cases, like concurrently booting multiple instances; so far we
 didn't find any inconsistency issues.

 IMO, nova-scheduler should be easy to scale out horizontally, and
 multiple workers should be supported as an out-of-the-box feature.

 Please feel free to discuss this feature, thanks.


 As I said when reviewing your patch, I think the problem is not just
 making sure that the scheduler is thread-safe, it's more about how the
 Scheduler is accounting resources and providing a retry if the
 consumed resources are higher than what's available.

 Here, the main problem is that two workers can actually consume two
 distinct resources on the same HostState object. In that case, the
 HostState object is decremented by the number of taken resources (modulo
 what a non-integer resource means...) for both, but nowhere in that
 section does it check whether that overruns the resource usage. As I
 said, it's not just about decorating a semaphore, it's more about
 rethinking how the Scheduler is managing its resources.

 That's why I'm -1 on your patch until [1] gets merged. Once this BP is
 implemented, we will have a set of classes for managing heterogeneous
 types of resources and consuming them, so it would be quite easy to
 provide a check against them in the consume_from_instance() method.

  I feel that the above explanation does not give the full picture in
 addition to being factually incorrect in several places. I have come to
 realize that the current behaviour of the scheduler is subtle enough
 that just reading the code is not enough to understand all the edge
 cases that can come up. The evidence being that it trips up even people
 that have spent significant time working on the code.

 It is also important to consider the design choices in terms of
 tradeoffs that they were trying to make.

 So here are some facts about the way Nova does scheduling of instances
 to compute hosts, considering the amount of resources requested by the
 flavor (we will try to put the facts into a bigger picture later):

 * Scheduler receives a request to choose hosts for one or more instances.
 * Upon every request (_not_ for every instance as there may be several
 instances in a request) the scheduler learns the state of the resources
 on all compute nodes from the central DB. This state may be inaccurate
 (meaning out of date).
 * Compute resources are updated by each compute host periodically. This
 is done by updating the row in the DB.
 * The wall-clock time difference between the scheduler deciding to
 schedule an instance, and the resource consumption being reflected in
 the data the scheduler learns from the DB can be arbitrarily long (due
 to load on the compute nodes and latency of message arrival).
 * To cope with the above, there is a concept of retrying the request
 that fails on a certain compute node due to the scheduling decision
 being made with data stale at the moment of build, by default we will
 retry 3 times before giving up.
 * When running multiple instances, decisions are made in a loop, and
 internal in-memory view of the resources gets updated (the widely
 misunderstood consume_from_instance method is used for this), so as to
 keep subsequent decisions as accurate as possible. As was described
 above, this is all thrown away once the request is finished.
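
 A toy model of that in-request accounting (illustrative names, not nova's
 actual classes): the scheduler decrements an in-memory copy of host state
 after each pick, so later instances in the same request see the earlier
 decisions, and the whole adjusted view is discarded when the request
 finishes.

```python
class HostState:
    """In-memory view of one compute node's free resources."""

    def __init__(self, name, free_ram_mb):
        self.name = name
        self.free_ram_mb = free_ram_mb

    def consume_from_instance(self, ram_mb):
        # bookkeeping for this request only: no claim, no overrun check
        self.free_ram_mb -= ram_mb


def schedule_request(hosts, num_instances, ram_mb):
    chosen = []
    for _ in range(num_instances):
        best = max(hosts, key=lambda h: h.free_ram_mb)  # RAM weigher
        best.consume_from_instance(ram_mb)
        chosen.append(best.name)
    return chosen  # the adjusted HostState objects are then thrown away
```

 Without the consume step, every instance in the request would land on the
 same initially-best host; a second scheduler process, holding its own
 copy of the host states, sees none of these decrements.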

 Now that we understand the above, we can start to consider what changes
 when we introduce several concurrent scheduler processes.

 Several cases come to mind:
 * Concurrent requests will no longer be serialized on reading the state
 of all hosts (due to how eventlet interacts with mysql driver).
 * In the presence of a single request for a large number of instances
 there is going to be a drift in accuracy of the decisions made by other
 schedulers, as they will not have accounted for any of the instances
 until those actually get claimed on their respective hosts.

 All of the above limitations will likely not pose a problem under normal
 load and usage and can cause issues to start appearing when nodes are
 close to full or when there is heavy load. Also, this changes drastically
 based on how we actually choose

Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-05 Thread Rui Chen
My BP's aim is launching multiple nova-scheduler processes on a host, like
nova-conductor does.

If we run multiple nova-scheduler services on separate hosts, that will
work; will forking multiple nova-scheduler child processes on a host work
too? Different child processes have different HostState objects in their
own memory; the only difference from the HA case is that all scheduler
processes are launched on one host.

I'm sorry to take up some time, I just want to clarify it.


2015-03-05 17:12 GMT+08:00 Sylvain Bauza sba...@redhat.com:


 Le 05/03/2015 08:54, Rui Chen a écrit :

 We will face the same issue in the multiple nova-scheduler process case,
 like Sylvain says, right?

  Two processes/workers can actually consume two distinct resources on the
 same HostState.


 No. The problem I mentioned was related to having multiple threads
 accessing the same object in memory.
 By running multiple schedulers on different hosts and listening to the
 same RPC topic, it would work - with some caveats about race conditions
 too, but that's unrelated to your proposal -

 If you want to run multiple nova-scheduler services, then just fire them
 up on separate machines (that's HA, eh) and that will work.

 -Sylvain





 2015-03-05 13:26 GMT+08:00 Alex Xu sou...@gmail.com:

 Rui, you still can run multiple nova-scheduler process now.


 2015-03-05 10:55 GMT+08:00 Rui Chen chenrui.m...@gmail.com:

 Looks like it's a complicated problem, and nova-scheduler can't
 scale out horizontally in active/active mode.

  Maybe we should illustrate the problem in the HA docs.


 http://docs.openstack.org/high-availability-guide/content/_schedulers.html

 Thanks for everybody's attention.

 2015-03-05 5:38 GMT+08:00 Mike Bayer mba...@redhat.com:



 Attila Fazekas afaze...@redhat.com wrote:

  Hi,
 
  I wonder what is the planned future of the scheduling.
 
   The scheduler does a lot of high-field-count queries,
   which are CPU-expensive when you are using sqlalchemy-orm.
   Has anyone tried to switch those operations to sqlalchemy-core?

 An upcoming feature in SQLAlchemy 1.0 will remove the vast majority of
 CPU
 overhead from the query side of SQLAlchemy ORM by caching all the work
 done
 up until the SQL is emitted, including all the function overhead of
 building
 up the Query object, producing a core select() object internally from
 the
 Query, working out a large part of the object fetch strategies, and
 finally
 the string compilation of the select() into a string as well as
 organizing
 the typing information for result columns. With a query that is
 constructed
 using the “Baked” feature, all of these steps are cached in memory and
 held
 persistently; the same query can then be re-used at which point all of
 these
 steps are skipped. The system produces the cache key based on the
 in-place
 construction of the Query using lambdas so no major changes to code
 structure are needed; just the way the Query modifications are performed
 needs to be preceded with “lambda q:”, essentially.

 With this approach, the traditional session.query(Model) approach can go
 from start to SQL being emitted with an order of magnitude less function
 calls. On the fetch side, fetching individual columns instead of full
 entities has always been an option with ORM and is about the same speed
 as a
 Core fetch of rows. So using ORM with minimal changes to existing ORM
 code
 you can get performance even better than you’d get using Core directly,
 since caching of the string compilation is also added.

 On the persist side, the new bulk insert / update features provide a
 bridge
 from ORM-mapped objects to bulk inserts/updates without any unit of work
 sorting going on. ORM mapped objects are still more expensive to use in
 that
 instantiation and state change is still more expensive, but bulk
 insert/update accepts dictionaries as well, which again is competitive
 with
 a straight Core insert.

 Both of these features are completed in the master branch, the “baked
 query”
 feature just needs documentation, and I’m basically two or three tickets
 away from beta releases of 1.0. The “Baked” feature itself lives as an
 extension and if we really wanted, I could backport it into oslo.db as
 well
 so that it works against 0.9.

 So I’d ask that folks please hold off on any kind of migration from ORM
 to
 Core for performance reasons. I’ve spent the past several months adding
 features directly to SQLAlchemy that allow an ORM-based app to have
 routes
 to operations that perform just as fast as that of Core without a
 rewrite of
 code.

   The scheduler does a lot of things in the application, like filtering,
   which could be done on the DB level more efficiently. Why is it not
   done on the DB side?
 
  There are use cases when the scheduler would need to know even more
 data,
  Is there a plan for keeping `everything` in all schedulers process
 memory up-to-date ?
  (Maybe zookeeper)
 
  The opposite way would be to move most operation into the DB side,
  since the DB

Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-04 Thread Rui Chen
Looks like it's a complicated problem, and nova-scheduler can't scale out
horizontally in active/active mode.

Maybe we should illustrate the problem in the HA docs.

http://docs.openstack.org/high-availability-guide/content/_schedulers.html

Thanks for everybody's attention.

2015-03-05 5:38 GMT+08:00 Mike Bayer mba...@redhat.com:



 Attila Fazekas afaze...@redhat.com wrote:

  Hi,
 
  I wonder what is the planned future of the scheduling.
 
   The scheduler does a lot of high-field-count queries,
   which are CPU-expensive when you are using sqlalchemy-orm.
   Has anyone tried to switch those operations to sqlalchemy-core?

 An upcoming feature in SQLAlchemy 1.0 will remove the vast majority of CPU
 overhead from the query side of SQLAlchemy ORM by caching all the work done
 up until the SQL is emitted, including all the function overhead of
 building
 up the Query object, producing a core select() object internally from the
 Query, working out a large part of the object fetch strategies, and finally
 the string compilation of the select() into a string as well as organizing
 the typing information for result columns. With a query that is constructed
 using the “Baked” feature, all of these steps are cached in memory and held
 persistently; the same query can then be re-used at which point all of
 these
 steps are skipped. The system produces the cache key based on the in-place
 construction of the Query using lambdas so no major changes to code
 structure are needed; just the way the Query modifications are performed
 needs to be preceded with “lambda q:”, essentially.

 With this approach, the traditional session.query(Model) approach can go
 from start to SQL being emitted with an order of magnitude less function
 calls. On the fetch side, fetching individual columns instead of full
 entities has always been an option with ORM and is about the same speed as
 a
 Core fetch of rows. So using ORM with minimal changes to existing ORM code
 you can get performance even better than you’d get using Core directly,
 since caching of the string compilation is also added.

 On the persist side, the new bulk insert / update features provide a bridge
 from ORM-mapped objects to bulk inserts/updates without any unit of work
 sorting going on. ORM mapped objects are still more expensive to use in
 that
 instantiation and state change is still more expensive, but bulk
 insert/update accepts dictionaries as well, which again is competitive with
 a straight Core insert.

 Both of these features are completed in the master branch, the “baked
 query”
 feature just needs documentation, and I’m basically two or three tickets
 away from beta releases of 1.0. The “Baked” feature itself lives as an
 extension and if we really wanted, I could backport it into oslo.db as well
 so that it works against 0.9.

 So I’d ask that folks please hold off on any kind of migration from ORM to
 Core for performance reasons. I’ve spent the past several months adding
 features directly to SQLAlchemy that allow an ORM-based app to have routes
 to operations that perform just as fast as that of Core without a rewrite
 of
 code.

   The scheduler does a lot of things in the application, like filtering,
   which could be done on the DB level more efficiently. Why is it not
   done on the DB side?
 
  There are use cases when the scheduler would need to know even more data,
  Is there a plan for keeping `everything` in all schedulers process
 memory up-to-date ?
  (Maybe zookeeper)
 
  The opposite way would be to move most operation into the DB side,
  since the DB already knows everything.
  (stored procedures ?)
 
  Best Regards,
  Attila
 
 
  - Original Message -
  From: Rui Chen chenrui.m...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
  Sent: Wednesday, March 4, 2015 4:51:07 AM
  Subject: [openstack-dev] [nova] blueprint about multiple workers
 supported   in nova-scheduler
 
  Hi all,
 
  I want to make it easy to launch a bunch of scheduler processes on a
 host,
  multiple scheduler workers will make use of multiple processors of host
 and
  enhance the performance of nova-scheduler.
 
  I had registered a blueprint and commit a patch to implement it.
 
 https://blueprints.launchpad.net/nova/+spec/scheduler-multiple-workers-support
 
   This patch has been applied in our performance environment and passed
   some test cases, like concurrently booting multiple instances; so far we
   didn't find any inconsistency issues.
 
   IMO, nova-scheduler should be easy to scale out horizontally, and
   multiple workers should be supported as an out-of-the-box feature.
 
  Please feel free to discuss this feature, thanks.
 
  Best Regards
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin

Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-04 Thread Rui Chen
We will face the same issue in the multiple nova-scheduler process case,
like Sylvain says, right?

Two processes/workers can actually consume two distinct resources on the
same HostState.




2015-03-05 13:26 GMT+08:00 Alex Xu sou...@gmail.com:

 Rui, you still can run multiple nova-scheduler process now.


 2015-03-05 10:55 GMT+08:00 Rui Chen chenrui.m...@gmail.com:

 Looks like it's a complicated problem, and nova-scheduler can't scale out
 horizontally in active/active mode.

 Maybe we should illustrate the problem in the HA docs.

 http://docs.openstack.org/high-availability-guide/content/_schedulers.html

 Thanks for everybody's attention.

 2015-03-05 5:38 GMT+08:00 Mike Bayer mba...@redhat.com:



 Attila Fazekas afaze...@redhat.com wrote:

  Hi,
 
  I wonder what is the planned future of the scheduling.
 
   The scheduler does a lot of high-field-count queries,
   which are CPU-expensive when you are using sqlalchemy-orm.
   Has anyone tried to switch those operations to sqlalchemy-core?

 An upcoming feature in SQLAlchemy 1.0 will remove the vast majority of
 CPU
 overhead from the query side of SQLAlchemy ORM by caching all the work
 done
 up until the SQL is emitted, including all the function overhead of
 building
 up the Query object, producing a core select() object internally from the
 Query, working out a large part of the object fetch strategies, and
 finally
 the string compilation of the select() into a string as well as
 organizing
 the typing information for result columns. With a query that is
 constructed
 using the “Baked” feature, all of these steps are cached in memory and
 held
 persistently; the same query can then be re-used at which point all of
 these
 steps are skipped. The system produces the cache key based on the
 in-place
 construction of the Query using lambdas so no major changes to code
 structure are needed; just the way the Query modifications are performed
 needs to be preceded with “lambda q:”, essentially.

 With this approach, the traditional session.query(Model) approach can go
 from start to SQL being emitted with an order of magnitude less function
 calls. On the fetch side, fetching individual columns instead of full
 entities has always been an option with ORM and is about the same speed
 as a
 Core fetch of rows. So using ORM with minimal changes to existing ORM
 code
 you can get performance even better than you’d get using Core directly,
 since caching of the string compilation is also added.

 On the persist side, the new bulk insert / update features provide a
 bridge
 from ORM-mapped objects to bulk inserts/updates without any unit of work
 sorting going on. ORM mapped objects are still more expensive to use in
 that
 instantiation and state change is still more expensive, but bulk
 insert/update accepts dictionaries as well, which again is competitive
 with
 a straight Core insert.

 Both of these features are completed in the master branch, the “baked
 query”
 feature just needs documentation, and I’m basically two or three tickets
 away from beta releases of 1.0. The “Baked” feature itself lives as an
 extension and if we really wanted, I could backport it into oslo.db as
 well
 so that it works against 0.9.

 So I’d ask that folks please hold off on any kind of migration from ORM
 to
 Core for performance reasons. I’ve spent the past several months adding
 features directly to SQLAlchemy that allow an ORM-based app to have
 routes
 to operations that perform just as fast as that of Core without a
 rewrite of
 code.

   The scheduler does a lot of things in the application, like filtering,
   which could be done on the DB level more efficiently. Why is it not
   done on the DB side?
 
  There are use cases when the scheduler would need to know even more
 data,
  Is there a plan for keeping `everything` in all schedulers process
 memory up-to-date ?
  (Maybe zookeeper)
 
  The opposite way would be to move most operation into the DB side,
  since the DB already knows everything.
  (stored procedures ?)
 
  Best Regards,
  Attila
 
 
  - Original Message -
  From: Rui Chen chenrui.m...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
  Sent: Wednesday, March 4, 2015 4:51:07 AM
  Subject: [openstack-dev] [nova] blueprint about multiple workers
 supported   in nova-scheduler
 
  Hi all,
 
  I want to make it easy to launch a bunch of scheduler processes on a
 host,
  multiple scheduler workers will make use of multiple processors of
 host and
  enhance the performance of nova-scheduler.
 
  I had registered a blueprint and commit a patch to implement it.
 
 https://blueprints.launchpad.net/nova/+spec/scheduler-multiple-workers-support
 
   This patch has been applied in our performance environment and passed
   some test cases, like concurrently booting multiple instances; so far
   we didn't find any inconsistency issues.
 
   IMO, nova-scheduler should be easy to scale out horizontally, and
  multiple

[openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-03 Thread Rui Chen
Hi all,

I want to make it easy to launch a bunch of scheduler processes on a host;
multiple scheduler workers will make use of the host's multiple processors
and enhance the performance of nova-scheduler.

I had registered a blueprint and committed a patch to implement it.
https://blueprints.launchpad.net/nova/+spec/scheduler-multiple-workers-support

This patch has been applied in our performance environment and passed some
test cases, like concurrently booting multiple instances; so far we didn't
find any inconsistency issues.

IMO, nova-scheduler should be easy to scale out horizontally, and multiple
workers should be supported as an out-of-the-box feature.

Please feel free to discuss this feature, thanks.

Best Regards
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about boot-from-volume instance and flavor

2015-03-03 Thread Rui Chen
Thank you for the reply, @Jay.

+1 for
There should not be any magic setting for root_gb that needs to be
interpreted both by the user and the Nova code base.

I will try to restart patch 136284 in another way, e.g. via the instance
object.

Best Regards

2015-03-04 4:45 GMT+08:00 Jay Pipes jaypi...@gmail.com:

 On 03/03/2015 01:10 AM, Rui Chen wrote:

 Hi all,

  When we boot an instance from a volume, we find some ambiguous
 description about the flavor root_gb in the operations guide,
 http://docs.openstack.org/openstack-ops/content/flavors.html

 /Virtual root disk size in gigabytes. This is an ephemeral disk the base
 image is copied into. You don't use it when you boot from a persistent
 volume./
 /The 0 size is a special case that uses the native base image size as
 the size of the ephemeral root volume./

 'You don't use it (root_gb) when you boot from a persistent volume.'
 Does that mean we need to set root_gb to 0 or not? I don't know.


 Hi Rui, I agree the documentation -- and frankly, the code in Nova -- is
 confusing around this area.

  But I find that the root_gb will be added into the local_gb_used of the
 compute_node, so it will impact subsequent scheduling. Think about a
 use case: the local_gb of a compute_node is 10; booting instances from
 volume with a root_gb=5 flavor, I can only boot 2 boot-from-volume
 instances on the compute_node, although these instances don't use the
 local disk of the compute_node.

 I found a patch that tries to fix this issue,
 https://review.openstack.org/#/c/136284/

 I want to know which solution is better for you:

 Solution #1: boot the instance from volume with a root_gb=0 flavor.
 Solution #2: add some special logic in order to correct the disk usage,
 like patch #136284


 Solution #2 is a better idea, IMO. There should not be any magic setting
 for root_gb that needs to be interpreted both by the user and the Nova code
 base.

 The issue with the 136284 patch is that it is trying to address the
 problem in the wrong place, IMHO.

 Best,
 -jay

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Question about boot-from-volume instance and flavor

2015-03-03 Thread Rui Chen
Hi all,

When we boot an instance from a volume, we find some ambiguous description
about the flavor root_gb in the operations guide,
http://docs.openstack.org/openstack-ops/content/flavors.html

*Virtual root disk size in gigabytes. This is an ephemeral disk the base
image is copied into. You don't use it when you boot from a persistent
volume.*
*The 0 size is a special case that uses the native base image size as the
size of the ephemeral root volume.*

'You don't use it (root_gb) when you boot from a persistent volume.'
Does that mean we need to set root_gb to 0 or not? I don't know.

But I find that the root_gb will be added into the local_gb_used of the
compute_node, so it will impact subsequent scheduling. Think about a use
case: the local_gb of a compute_node is 10; booting instances from volume
with a root_gb=5 flavor, I can only boot 2 boot-from-volume instances on
the compute_node, although these instances don't use the local disk of the
compute_node.

I found a patch that tries to fix this issue,
https://review.openstack.org/#/c/136284/

I want to know which solution is better for you:

Solution #1: boot the instance from volume with a root_gb=0 flavor.
Solution #2: add some special logic in order to correct the disk usage,
like patch #136284
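
The accounting problem is easy to show numerically (plain arithmetic,
mirroring the example above):

```python
local_gb = 10          # free local disk reported by the compute node
flavor_root_gb = 5     # flavor used for boot-from-volume instances

# the scheduler charges root_gb against local disk even though a
# volume-backed instance never touches it
max_bfv_instances = local_gb // flavor_root_gb
print(max_bfv_instances)  # 2, despite unlimited volume-backed capacity
```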
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Priority resizing instance on same host

2015-02-12 Thread Rui Chen
@Manickam, thank you for the information :)

+1 for the use case
-1 for the approach in patch https://review.openstack.org/#/c/117116/

I think we should try filtering the current host first and automatically
fall back to selecting a host in nova-scheduler if the current host is not
suitable.

2015-02-12 16:17 GMT+08:00 Manickam, Kanagaraj kanagaraj.manic...@hp.com:

  Hi,



 There is a patch on resize https://review.openstack.org/#/c/117116/

 To address the resize,  there are some suggestions and please refer the
 review comments on this patch.



 Regards

 Kanagaraj M



 *From:* Jesse Pretorius [mailto:jesse.pretor...@gmail.com]
 *Sent:* Thursday, February 12, 2015 1:25 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [nova] Priority resizing instance on same
 host



 On Thursday, February 12, 2015, Rui Chen chenrui.m...@gmail.com wrote:

  Currently, resizing an instance causes a migration from the host the
 instance runs on to another host, but maybe the current host is suitable
 for the new flavor. Migrating will lead to copying the image between
 hosts if there is no shared storage, which wastes time.

 I think that preferring to resize the instance on the current host may be
 better if the host is suitable.

 The logic like this:



 if CONF.allow_resize_to_same_host:

 filter current host

 if suitable:

resize on current host

 else:

select a host

resize on the host



  I don't know whether there has been any discussion about this question.
 Please let me know what you think. If the idea is no problem, maybe I can
 register a blueprint to implement it.



 But the nova.conf flag for that already exists?



 What I would suggest, however, is that some logic is put in to determine
 whether the disk size remains the same while the cpu/ram size is changing -
 if so, then resize the instance on the host without the disk snapshot and
 copy.
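
 Combining the pseudocode above with Jesse's refinement, the decision could
 be sketched as follows (illustrative names, not nova's actual code; the
 schedule callback stands in for the normal scheduler path):

```python
def pick_resize_host(conf, current_host, old_flavor, new_flavor, schedule):
    """Prefer resizing in place when allowed and the host still fits."""
    disk_unchanged = old_flavor["root_gb"] == new_flavor["root_gb"]
    if (conf.allow_resize_to_same_host
            and disk_unchanged
            and current_host.passes_filters(new_flavor)):
        return current_host      # resize in place: no snapshot/copy
    return schedule(new_flavor)  # fall back to normal scheduling
```

 The disk_unchanged check captures Jesse's point: only skip the snapshot
 and copy when the disk size stays the same while cpu/ram change.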



 --

 Jesse Pretorius
 mobile: +44 7586 906045
 email: jesse.pretor...@gmail.com
 skype: jesse.pretorius



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about force_host skip filters

2015-02-12 Thread Rui Chen
 filters should be applied to the list of hosts that are in ‘force_hosts’.

Yes, @Gray, that's my point.

Operators can live-migrate an instance to a specified host and skip the
filters; it's appropriate and important, I agree with you.

But when we boot an instance, we always want to launch the instance
successfully or get a clear failure reason. If the filters are applied to
the forced host, the operator may find out that he is doing something
wrong earlier. For example, he couldn't boot a PCI instance on a forced
host that doesn't own a PCI device.

And I don't think 'force_hosts' is an operator-only action; the default
value is 'is_admin:True' in policy.json, but in some cases the value may
be changed so that regular users can boot instances on a specified host.

2015-02-12 17:44 GMT+08:00 Sylvain Bauza sba...@redhat.com:


 Le 12/02/2015 10:05, Rui Chen a écrit :

 Hi:

  If we boot an instance with 'force_hosts', the forced host will skip all
 filters; it looks like intentional logic, but I don't know the reason.

 I'm not sure that the skipping logic is appropriate. I think we should
 remove the skipping logic, and 'force_hosts' should work with the
 scheduler, testing whether the forced host is appropriate ASAP. Skipping
 the filters and postponing the boot failure to nova-compute is not
 advisable.

  On the other side, more and more options have been added into the
 flavor, like NUMA, CPU pinning, PCI and so on, so forcing a suitable host
 is more and more difficult.


 Any action done by the operator is always more important than what the
 Scheduler could decide. So, in an emergency situation where the operator
 wants to force a migration to a host, we need to accept it and do it,
 even if it doesn't match what the Scheduler would decide (and could
 violate any policy).

 That's a *force* action, so please let the operator decide.

 -Sylvain



  Best Regards.









Re: [openstack-dev] [nova] Question about force_host skip filters

2015-02-12 Thread Rui Chen
Append blueprint link:
https://blueprints.launchpad.net/nova/+spec/verifiable-force-hosts

2015-02-13 10:48 GMT+08:00 Rui Chen chenrui.m...@gmail.com:

 I agree with you @Chris.
 A '--force' flag is a good idea; it keeps backward compatibility and
 flexibility. We could select whether the filters are applied for
 force_hosts. I will register a blueprint to track the feature.

 The 'force_hosts' feature is so old that I don't know how many users
 have used it. As @Jay says, removing it would settle this once and for
 all, but I'm not sure this is a suitable occasion.

 2015-02-12 23:10 GMT+08:00 Chris Friesen chris.frie...@windriver.com:

 On 02/12/2015 03:44 AM, Sylvain Bauza wrote:

  Any action done by the operator is always more important than what the
 scheduler could decide. So, in an emergency situation, if the operator
 wants to force a migration to a host, we need to accept it and do it,
 even if it doesn't match what the scheduler would decide (and could
 violate any policy).

 That's a *force* action, so please let the operator decide.


 Are we suggesting that the operator would/should only ever specify a
 specific host if the situation is an emergency?

 If not, then perhaps it would make sense to have it go through the
 scheduler filters even if a host is specified.  We could then have a
 --force flag that would proceed anyways even if the filters don't match.

 There are some cases (provider networks or PCI passthrough for example)
 where it really makes no sense to try and run an instance on a compute node
 that wouldn't pass the scheduler filters.  Maybe it would make the most
 sense to specify a list of which filters to override while still using the
 others.

 Chris


 





Re: [openstack-dev] [nova] Question about force_host skip filters

2015-02-12 Thread Rui Chen
I agree with you @Chris.
A '--force' flag is a good idea; it keeps backward compatibility and
flexibility. We could select whether the filters are applied for
force_hosts. I will register a blueprint to track the feature.

The 'force_hosts' feature is so old that I don't know how many users have
used it. As @Jay says, removing it would settle this once and for all, but
I'm not sure this is a suitable occasion.

2015-02-12 23:10 GMT+08:00 Chris Friesen chris.frie...@windriver.com:

 On 02/12/2015 03:44 AM, Sylvain Bauza wrote:

  Any action done by the operator is always more important than what the
 scheduler could decide. So, in an emergency situation, if the operator
 wants to force a migration to a host, we need to accept it and do it,
 even if it doesn't match what the scheduler would decide (and could
 violate any policy).

 That's a *force* action, so please let the operator decide.


 Are we suggesting that the operator would/should only ever specify a
 specific host if the situation is an emergency?

 If not, then perhaps it would make sense to have it go through the
 scheduler filters even if a host is specified.  We could then have a
 --force flag that would proceed anyways even if the filters don't match.

 There are some cases (provider networks or PCI passthrough for example)
 where it really makes no sense to try and run an instance on a compute node
 that wouldn't pass the scheduler filters.  Maybe it would make the most
 sense to specify a list of which filters to override while still using the
 others.

 Chris





Re: [openstack-dev] [nova] Priority resizing instance on same host

2015-02-12 Thread Rui Chen
Yes, CONF.allow_resize_to_same_host exists, but it only means that the
current host has a chance of being selected by nova-scheduler; the final
chosen host may not be the current host. In that case the instance will be
migrated from the current host to the chosen host, and the image will be
copied there even if the disk size remains the same.

2015-02-12 15:55 GMT+08:00 Jesse Pretorius jesse.pretor...@gmail.com:

 On Thursday, February 12, 2015, Rui Chen chenrui.m...@gmail.com wrote:

 Currently, resizing an instance migrates it from the host it runs on to
 another host, but the current host may be suitable for the new flavor.
 Migration copies the image between hosts if there is no shared storage,
 which wastes time.
 I think prioritizing a resize on the current host may be better when the
 host is suitable. The logic would be like this:

 if CONF.allow_resize_to_same_host:
     filter the current host
     if suitable:
         resize on the current host
     else:
         select a host
         resize on that host

 I don't know whether there has been any discussion about this question.
 Please let me know what you think. If the idea sounds fine, maybe I can
 register a blueprint to implement it.


 But the nova.conf flag for that already exists?

 What I would suggest, however, is that some logic is put in to determine
 whether the disk size remains the same while the cpu/ram size is changing -
 if so, then resize the instance on the host without the disk snapshot and
 copy.


 --
 Jesse Pretorius
 mobile: +44 7586 906045
 email: jesse.pretor...@gmail.com
 skype: jesse.pretorius






Re: [openstack-dev] [nova] Priority resizing instance on same host

2015-02-12 Thread Rui Chen
Yes, @Lingxian, I agree with you.

The resize can only succeed if the host passes the scheduler filters;
'force_hosts' is not enough, IMO.

2015-02-12 16:41 GMT+08:00 Lingxian Kong anlin.k...@gmail.com:

 Hi, Rui,

 I think resizing a VM to the same host, if the host can pass the
 scheduler filters, makes sense to me.

 2015-02-12 15:01 GMT+08:00 Rui Chen chenrui.m...@gmail.com:
  Hi:
 
  Currently, resizing an instance migrates it from the host it runs on to
  another host, but the current host may be suitable for the new flavor.
  Migration copies the image between hosts if there is no shared storage,
  which wastes time.
  I think prioritizing a resize on the current host may be better when
  the host is suitable. The logic would be like this:
 
  if CONF.allow_resize_to_same_host:
      filter the current host
      if suitable:
          resize on the current host
      else:
          select a host
          resize on that host
 
  I don't know whether there has been any discussion about this question.
  Please let me know what you think. If the idea sounds fine, maybe I can
  register a blueprint to implement it.
 
  Best Regards.
 
 
 



 --
 Regards!
 ---
 Lingxian Kong




[openstack-dev] [nova] Question about force_host skip filters

2015-02-12 Thread Rui Chen
Hi:

   If we boot an instance with 'force_hosts', the forced host skips all
filters. It looks like intentional logic, but I don't know the reason.

   I'm not sure the skipping logic is appropriate. I think we should remove
it: 'force_hosts' should work with the scheduler, testing whether the forced
host is suitable as early as possible. Skipping filters and postponing the
boot failure to nova-compute is not advisable.

On the other hand, more and more options have been added to flavors, like
NUMA, CPU pinning, PCI and so on, so forcing a suitable host is more and
more difficult.

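A rough sketch of what "force_hosts working with the scheduler" could look like, with an explicit override in the spirit of the '--force' flag discussed elsewhere in the thread (all names here are illustrative, not Nova's actual scheduler API):

```python
# Illustrative sketch only -- not Nova's real scheduler code. Hosts are
# plain dicts and filters are predicates; force_hosts narrows the
# candidate list, but the filters still run unless force=True.
def select_hosts(hosts, filters, force_hosts=None, force=False):
    if force_hosts:
        hosts = [h for h in hosts if h['name'] in force_hosts]
        if force:
            return hosts  # operator override: skip the filters entirely
    return [h for h in hosts if all(f(h) for f in filters)]

has_pci = lambda host: host['pci_devices'] > 0
hosts = [{'name': 'node1', 'pci_devices': 0},
         {'name': 'node2', 'pci_devices': 2}]

# Forcing a host without PCI devices now fails in the scheduler,
# instead of failing later on nova-compute.
assert select_hosts(hosts, [has_pci], force_hosts=['node1']) == []
```

With force=True the operator keeps the old skip-everything behavior, so backward compatibility is preserved.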

Best Regards.


[openstack-dev] [nova] Priority resizing instance on same host

2015-02-11 Thread Rui Chen
Hi:

Currently, resizing an instance migrates it from the host it runs on to
another host, but the current host may be suitable for the new flavor.
Migration copies the image between hosts if there is no shared storage,
which wastes time.
I think prioritizing a resize on the current host may be better when the
host is suitable. The logic would be like this:

if CONF.allow_resize_to_same_host:
    filter the current host
    if suitable:
        resize on the current host
    else:
        select a host
        resize on that host

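The pseudocode above could be fleshed out roughly as follows (the host representation and helper names are made up for illustration, not Nova's actual code):

```python
def pick_resize_host(current, candidates, filters, allow_same_host):
    """Prefer resizing in place when the current host passes the filters,
    avoiding an image copy; otherwise fall back to normal selection."""
    def passes(host):
        return all(f(host) for f in filters)

    if allow_same_host and passes(current):
        return current  # no migration, no image copy
    return next((h for h in candidates if h is not current and passes(h)),
                None)  # None would become NoValidHost

fits = lambda h: h['free_ram_mb'] >= 4096   # stand-in for real filters
c1 = {'name': 'c1', 'free_ram_mb': 8192}
c2 = {'name': 'c2', 'free_ram_mb': 2048}
c3 = {'name': 'c3', 'free_ram_mb': 16384}
assert pick_resize_host(c1, [c1, c2, c3], [fits], True) is c1
assert pick_resize_host(c1, [c1, c2, c3], [fits], False) is c3
```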
I don't know whether there has been any discussion about this question.
Please let me know what you think. If the idea sounds fine, maybe I can
register a blueprint to implement it.

Best Regards.


Re: [openstack-dev] [nova] Volunteer for BP 'Improve Nova KVM IO support'

2014-12-19 Thread Rui Chen
Thanks, @Sahid. I will help to review this patch :)

2014-12-19 16:01 GMT+08:00 Sahid Orentino Ferdjaoui 
sahid.ferdja...@redhat.com:

 On Fri, Dec 19, 2014 at 11:36:03AM +0800, Rui Chen wrote:
  Hi,
 
  Is Anybody still working on this nova BP 'Improve Nova KVM IO support'?
  https://blueprints.launchpad.net/nova/+spec/improve-nova-kvm-io-support

 This feature is already in review; since it only adds an option to
 libvirt, I guess we can consider not requiring a spec, but I may be
 wrong.

 https://review.openstack.org/#/c/117442/

 s.

  I am willing to complete the nova-spec and implement this feature in
  Kilo or a subsequent release.
 
  Feel free to assign this BP to me, thanks:)
 
  Best Regards.






[openstack-dev] [nova] Volunteer for BP 'Improve Nova KVM IO support'

2014-12-18 Thread Rui Chen
Hi,

Is anybody still working on this nova BP 'Improve Nova KVM IO support'?
https://blueprints.launchpad.net/nova/+spec/improve-nova-kvm-io-support

I am willing to complete the nova-spec and implement this feature in Kilo
or a subsequent release.

Feel free to assign this BP to me, thanks :)

Best Regards.


Re: [openstack-dev] [keystone] Stepping down as PTL

2014-09-22 Thread Rui Chen
Thanks for your fantastic leadership!!

2014-09-23 10:54 GMT+08:00 Adam Young ayo...@redhat.com:

  On 09/22/2014 10:47 AM, Dolph Mathews wrote:

  Dearest stackers and [key]stoners,

  With the PTL candidacies officially open for Kilo, I'm going to take the
 opportunity to announce that I won't be running again for the position.

  I thoroughly enjoyed my time as PTL during Havana, Icehouse and Juno.
 There was a perceived increase in stability [citation needed], which was
 one of my foremost goals. We primarily achieved that by improving the
 communication between developers which allowed developers to share their
 intent early and often (by way of API designs and specs). As a result, we
 had a lot more collaboration and a great working knowledge in the community
 when it came time for bug fixes. I also think we raised the bar for user
 experience, especially by way of reasonable defaults, strong documentation,
 and effective error messages. I'm consistently told that we have the best
 out-of-the-box experience of any OpenStack service. Well done!

  I'll still be involved in OpenStack, and I'm super confident in our
 incredibly strong core team of reviewers on Keystone. I thoroughly enjoy
 helping other developers be as productive as possible, and intend to
 continue doing exactly that.

  OpenStack owes you more than most people realize.  I personally owe you a
 huge debt of gratitude.

 Don't you dare pull a Joe Heck and disappear on us now.  I'd be the old
 man on the Keystone project if you do, and everyone knows that I lie and
 make things up on the spot.  I'd have to invent reasons for half the things
 we do.


 See you in Paris.



  Keep hacking responsibly,

  -Dolph









Re: [openstack-dev] 答复: [OpenStack-dev][Nova] Can we add one configuration item for cache-using in libvirt/hypervisor?

2014-02-24 Thread Rui Chen
I think a domain attribute is more appropriate than a node-level nova.conf
config; we need to consider cross-host tasks like migrate and live-migrate :)


2014-02-24 10:45 GMT+08:00 zhangyu (AI) zhangy...@huawei.com:

  Sure, hard-coding seems weird…



 However, a global configuration here dominates all domains. It might be a
 little too strong in cases in which we want to apply various configurations
 to different domains.



 Could we add any new attributes in the info for creating a domain for
 this? Or any other suggestion?



 Thanks!



 *From:* wu jiang [mailto:win...@gmail.com]
 *Sent:* 2014-02-24 10:31
 *To:* OpenStack Development Mailing List
 *Subject:* [openstack-dev] [OpenStack-dev][Nova] Can we add one configuration
 item for cache-using in libvirt/hypervisor?



 Hi all,



 Recently, I came across a scenario that requires disabling the cache on a
 Linux hypervisor.



 But some of the code in libvirt/driver.py (including suspend/snapshot) is
 hard-coded.

 For example:

 ---

 def suspend(self, instance):
     """Suspend the specified instance."""
     dom = self._lookup_by_name(instance['name'])
     self._detach_pci_devices(dom,
         pci_manager.get_instance_pci_devs(instance))
     dom.managedSave(0)



 So, can we add a configuration item in nova.conf, like
 *DOMAIN_SAVE_BYPASS_CACHE*, to let the operator handle it?

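As a sketch, such an option could map onto libvirt's save/restore flags like this (the nova.conf option name and the wiring are hypothetical; the flag value comes from libvirt's virDomainSaveRestoreFlags enum):

```python
# With python-libvirt installed you would use
# libvirt.VIR_DOMAIN_SAVE_BYPASS_CACHE rather than this literal.
VIR_DOMAIN_SAVE_BYPASS_CACHE = 1

def managed_save_flags(bypass_cache):
    """Map a hypothetical nova.conf boolean to dom.managedSave() flags."""
    return VIR_DOMAIN_SAVE_BYPASS_CACHE if bypass_cache else 0

# In suspend(), hypothetically:
#   dom.managedSave(managed_save_flags(CONF.save_bypass_cache))
assert managed_save_flags(True) == 1
assert managed_save_flags(False) == 0
```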


 That would improve the flexibility of Nova.





 Thanks

 wingwj





Re: [openstack-dev] [nova][EC2] attach and detach volume response status

2014-02-16 Thread Rui Chen
Thanks for confirming it. I added some comments in the bug :)


2014-02-15 20:23 GMT+08:00 Rushi Agrawal rushi@gmail.com:

 I remember seeing the same while attaching -- return value is 'detached'.
 So I can confirm this is a bug.

 I couldn't locate a bug report for it, so I created one:
 https://bugs.launchpad.net/nova/+bug/1280572

 Please mark it as a dup if you already have a bug report.

 Regards,
 Rushi Agrawal
 Ph: (+91) 99 4518 4519


 On Sat, Feb 15, 2014 at 11:56 AM, wu jiang win...@gmail.com wrote:

 Hi,

 I checked the AttachVolume in AWS EC2:

 http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-AttachVolume.html

 The status returned is 'attaching':

  <AttachVolumeResponse xmlns="http://ec2.amazonaws.com/doc/2013-10-15/">
    <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
    <volumeId>vol-1a2b3c4d</volumeId>
    <instanceId>i-1a2b3c4d</instanceId>
    <device>/dev/sdh</device>
    <status>attaching</status>
    <attachTime>YYYY-MM-DDTHH:MM:SS.000Z</attachTime>
  </AttachVolumeResponse>


  So I think it's a bug, IMO. Thanks~


 wingwj


  On Sat, Feb 15, 2014 at 11:35 AM, Rui Chen chenrui.m...@gmail.com wrote:

 Hi Stackers;

  I used the Nova EC2 interface to attach a volume. The attach succeeds,
  but the volume status is 'detached' in the response message.

  # euca-attach-volume -i i-000d -d /dev/vdb vol-0001
  ATTACHMENT  vol-0001i-000d  detached

  This confuses me; I think the status should be 'attaching' or 'in-use'.

  I find that the attach and detach volume interfaces return
  volume['attach_status'], while the describe volume interface returns
  volume['status'].

  Is it a bug, or is there some other consideration I'm not aware of?

 Thanks











[openstack-dev] [nova][EC2] attach and detach volume response status

2014-02-14 Thread Rui Chen
Hi Stackers;

I used the Nova EC2 interface to attach a volume. The attach succeeds, but
the volume status is 'detached' in the response message.

# euca-attach-volume -i i-000d -d /dev/vdb vol-0001
ATTACHMENT  vol-0001i-000d  detached

This confuses me; I think the status should be 'attaching' or 'in-use'.

I find that the attach and detach volume interfaces return
volume['attach_status'], while the describe volume interface returns
volume['status'].

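A hedged sketch of a fix, translating the volume's transient 'status' into the EC2-style attachment status instead of returning the persistent attach_status field (the helper name and mapping are assumptions, not Nova's actual EC2 code):

```python
def ec2_attachment_status(volume):
    """Translate a Nova volume dict into an EC2 attachment status."""
    status = volume.get('status')
    if status == 'attaching':
        return 'attaching'
    if status == 'in-use':
        return 'attached'
    # Fall back to the persistent field once the attachment has settled.
    return volume.get('attach_status', 'detached')

assert ec2_attachment_status({'status': 'attaching',
                              'attach_status': 'detached'}) == 'attaching'
assert ec2_attachment_status({'status': 'in-use',
                              'attach_status': 'attached'}) == 'attached'
```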
Is it a bug, or is there some other consideration I'm not aware of?

Thanks


[openstack-dev] [nova] Checking before delete flavor?

2014-01-26 Thread Rui Chen
Hi Stackers:

Some instance operations and flavors are closely connected, for example,
resize. If I delete the flavor while resizing an instance, the instance will
go into an error state. Like this:

1. run instance with flavor A
2. resize instance from flavor A to flavor B
3. delete flavor A
4. resize-revert instance
5. instance goes into the error state

Which of the following ways do we think is better? Or is there another way?

1. List instances filtered by flavor A, verify that no instance is
associated with flavor A, then delete flavor A
2. Delete flavor A; if an instance goes into the error state, reset the
instance state to active

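Option 1 could be sketched as a pre-delete check (the data model here is deliberately simplified and hypothetical; a real check would also cover the old flavor of an in-progress resize, since resize-revert still needs it):

```python
def safe_delete_flavor(flavor_id, flavors, instances):
    """Refuse to delete a flavor that any instance still references,
    including the old flavor of an in-progress resize (revert target)."""
    in_use = [i for i in instances
              if flavor_id in (i.get('flavor_id'), i.get('old_flavor_id'))]
    if in_use:
        raise ValueError('flavor %r is still used by %d instance(s)'
                         % (flavor_id, len(in_use)))
    del flavors[flavor_id]

flavors = {'A': {}, 'B': {}}
resizing = [{'flavor_id': 'B', 'old_flavor_id': 'A'}]  # mid-resize A -> B
try:
    safe_delete_flavor('A', flavors, resizing)  # would break resize-revert
except ValueError:
    pass
safe_delete_flavor('A', flavors, [])            # safe once nothing uses it
assert 'A' not in flavors
```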
What is the general approach here?