Re: [openstack-dev] [nova] about notification in nova

2018-12-02 Thread Zhenyu Zheng
Hi,

Are you using versioned notifications? If you are, you should get an
``action_initiator_user`` and an ``action_initiator_project`` in the
payload indicating who initiated the action; we have had them since
I649d8a27baa8840bc1bb567fef027c749c663432.
If you are not using versioned notifications, switching to them is
recommended.
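
For illustration, a trimmed sketch of what such a versioned payload can
look like (the structure follows nova's InstanceActionPayload as I recall
it, but every value here is made up, so treat it as an assumption):

    # Hypothetical, trimmed versioned-notification payload; only the
    # initiator fields matter for this question.
    payload = {
        "nova_object.name": "InstanceActionPayload",
        "nova_object.namespace": "nova",
        "nova_object.data": {
            "uuid": "some-instance-uuid",
            "user_id": "user1",                  # who created the instance
            "project_id": "project1",
            "action_initiator_user": "user2",    # who asked for the delete
            "action_initiator_project": "project2",
        },
    }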

Thanks

On Mon, Dec 3, 2018 at 10:06 AM Rambo  wrote:

> Hi, all:
>  I have a question about the notification in nova, that is the
> actual  operator is different from the operator was record  in panko. Such
> as the delete action, we create the VM as user1,  and we delete the VM as
> user2, but the operator is user1 who delete the VM in panko event, not the
> actual operator user2.
>  Can you tell me more about this?Thank you very much.
>
>
>
>
>
>
>
>
> Best Regards
> Rambo
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [nova] Volunteer needed to write reshaper FFU hook

2018-10-29 Thread Zhenyu Zheng
I would like to help, since I have now finished all my downstream work,
but I may need to take some time to understand all the background
information.
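
To check my understanding of the flow Matt describes below, here is a rough
sketch of what the per-compute FFU entry point would do. The method names
follow my reading of the ResourceTracker and report-client code, so treat
names and signatures as assumptions, not the final script:

    # Rough sketch of a per-compute reshape, mirroring what the RT does
    # on startup [6]; illustrative only.
    from nova import exception

    def reshape_for_host(ctxt, compute_node, reportclient, driver):
        nodename = compute_node.hypervisor_hostname
        # Load the current provider tree for this node from placement.
        tree = reportclient.get_provider_tree_and_ensure_root(
            ctxt, compute_node.uuid, name=nodename)
        allocations = None
        try:
            driver.update_provider_tree(tree, nodename)
        except exception.ReshapeNeeded:
            # Inventories must move between providers, so fetch the
            # allocations and let the driver move them too.
            allocations = reportclient.get_allocations_for_provider_tree(
                ctxt, nodename)
            driver.update_provider_tree(tree, nodename,
                                        allocations=allocations)
        # Persist inventories (and moved allocations, if any) atomically.
        reportclient.update_from_provider_tree(ctxt, tree,
                                               allocations=allocations)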

On Mon, Oct 29, 2018 at 11:25 PM Matt Riedemann  wrote:

> Given the outstanding results of my recruiting job last week [1] I have
> been tasked with recruiting one of our glorious and most talented
> contributors to work on the fast-forward-upgrade script changes needed
> for the reshape-provider-tree blueprint.
>
> The work item is nicely detailed in the spec [2]. A few things to keep
> in mind:
>
> 1. There are currently no virt drivers which run the reshape routine.
> However, patches are up for review for libvirt [3] and xen [4]. There
> are also functional tests which exercise the ResourceTracker code with a
> faked out virt driver interface to test reshaping [5].
>
> 2. The FFU entry point will mimic the reshape routine that will happen
> on nova-compute service startup in the ResourceTracker [6].
>
> 3. The FFU script will need to run per-compute service rather than
> globally (or per cell) since it actually needs to call the virt driver's
> update_provider_tree() interface which might need to inspect the
> hardware (like for GPUs).
>
> Given there is already a model to follow from the ResourceTracker this
> should not be too hard, the work will likely mostly be writing tests.
>
> What do you get if you volunteer? The usual: fame, fortune, the respect
> of your peers, etc.
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2018-October/136075.html
> [2]
>
> https://specs.openstack.org/openstack/nova-specs/specs/stein/approved/reshape-provider-tree.html#offline-upgrade-script
> [3] https://review.openstack.org/#/c/599208/
> [4] https://review.openstack.org/#/c/521041/
> [5]
>
> https://github.com/openstack/nova/blob/a0eacbf7f/nova/tests/functional/test_servers.py#L1839
> [6]
>
> https://github.com/openstack/nova/blob/a0eacbf7f/nova/compute/resource_tracker.py#L917-L940
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Searchlight] Reaching out to the Searchlight core members for Stein

2018-08-19 Thread Zhenyu Zheng
Hi, thanks for stepping up. I would like to continue working on Searchlight.

On Sat, Aug 18, 2018 at 12:16 AM Trinh Nguyen  wrote:

> Dear Searchlight team,
>
> As you may know, the Searchlight project has missed several milestones,
> especially in the Rocky cycle. The TC already has a plan to remove
> Searchlight from governance [1], but I have volunteered to take it over [2].
> Due to the unresponsiveness on IRC and Launchpad, I am sending this email
> to reach out to all the Searchlight core members to discuss our plan for
> Stein as well as re-organize the team. Hopefully this effort will work out
> and may bring Searchlight back to life.
>
> If anyone on the core team sees this email, please reply.
>
> My IRC is dangtrinhnt.
>
> [1] https://review.openstack.org/#/c/588644/
> [2] https://review.openstack.org/#/c/590601/
>
> Best regards,
>
> *Trinh Nguyen *| Founder & Chief Architect
>
> 
>
> *E:* dangtrin...@gmail.com | *W:* *www.edlab.xyz *
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] A multi-cell instance-list performance test

2018-08-16 Thread Zhenyu Zheng
Hi,

Thanks a lot for the reply. For your question #2, we tested two kinds of
deployments: 1. a single DB holding all 10 cells (plus cell0), on the same
server as the API; 2. five of the cell DBs moved to another machine on the
same rack to test whether it matters, and it turns out there is no big
difference.

For question #3, we ran a test with limit = 1000 and 10 cells:
as the graphs show, the CPU load from the API process and the MySQL queries
is high in the first 3 seconds, but from the 4th second on only the API
process occupies the CPU, and the memory consumption is low compared to the
CPU consumption. This was tested with the patch fix posted in the previous
mail.

[images: CPU and memory usage graphs from the test run]
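
That matches Dan's point #3 below about our serialized Python-side
processing. A toy sketch (not nova code; names and numbers are contrived)
of why the post-gather work pegs a single core even with eventlet:

    # Green threads overlap the I/O waits, but the CPU-bound work of
    # turning rows into objects effectively runs on one core.
    import time
    import eventlet
    eventlet.monkey_patch()

    def fetch_and_convert(cell):
        eventlet.sleep(1)  # simulated DB round-trip; these overlap
        # CPU-bound "object conversion"; this part serializes
        return [dict(cell=cell, idx=i) for i in range(200000)]

    pool = eventlet.GreenPool()
    start = time.time()
    rows = [r for chunk in pool.imap(fetch_and_convert, range(10))
            for r in chunk]
    print('took %.1fs' % (time.time() - start))
    # Expect ~1s of overlapped "I/O" plus the full serial conversion
    # cost: one API core at 100% while memory stays modest.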

BR,

Kevin

On Fri, Aug 17, 2018 at 2:45 AM Dan Smith  wrote:

> >  yes, the DB queries were serial; after some investigation, it seems
> that we are unable to perform eventlet.monkey_patch in uWSGI mode, so
> >  Yikun made this fix:
> >
> >  https://review.openstack.org/#/c/592285/
>
> Cool, good catch :)
>
> >
> >  After making this change, we test again, and we got this kind of data:
> >
> >                        total     collect   sort     view
> >  before monkey_patch   13.5745   11.7012   1.1511   0.5966
> >  after monkey_patch    12.8367   10.5471   1.5642   0.6041
> >
> >  The performance improved a little, and from the log we can see:
>
> Since these all took ~1s when done in series, but now take ~10s in
> parallel, I think you must be hitting some performance bottleneck in
> either case, which is why the overall time barely changes. Some ideas:
>
> 1. In the real world, I think you really need to have 10x database
>servers or at least a DB server with plenty of cores loading from a
>very fast (or separate) disk in order to really ensure you're getting
>full parallelism of the DB work. However, because these queries all
>took ~1s in your serialized case, I expect this is not your problem.
>
> 2. What does the network look like between the api machine and the DB?
>
> 3. What do the memory and CPU usage of the api process look like while
>this is happening?
>
> Related to #3, even though we issue the requests to the DB in parallel,
> we still process the result of those calls in series in a single python
> thread on the API. That means all the work of reading the data from the
> socket, constructing the SQLA objects, turning those into nova objects,
> etc, all happens serially. It could be that the DB query is really a
> small part of the overall time and our serialized python handling of the
> result is the slow part. If you see the api process pegging a single
> core at 100% for ten seconds, I think that's likely what is happening.
>
> >  so, now the queries are in parallel, but the whole thing still seems
> >  serial.
>
> In your table, you show the time for "1 cell, 1000 instances" as ~3s and
> "10 cells, 1000 instances" as 10s. The problem with comparing those
> directly is that in the latter, you're actually pulling 10,000 records
> over the network, into memory, processing them, and then just returning
> the first 1000 from the sort. A closer comparison would be the "10
> cells, 100 instances" with "1 cell, 1000 instances". In both of those
> cases, you pull 1000 instances total from the db, into memory, and
> return 1000 from the sort. In that case, the multi-cell situation is
> faster (~2.3s vs. ~3.1s). You could also compare the "10 cells, 1000
> instances" case to "1 cell, 10,000 instances" just to confirm at the
> larger scale that it's better or at least the same.
>
> We _have_ to pull $limit instances from each cell, in case (according to
> the sort key) the first $limit instances are all in one cell. We _could_
> try to batch the results from each cell to avoid loading so many that we
> don't need, but we punted this as an optimization to be done later. I'm
> not sure it's really worth the complexity at this point, but it's
> something we could investigate.
>
> --Dan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][nova] about update flavor

2018-08-16 Thread Zhenyu Zheng
I mean this https://review.openstack.org/#/c/491442/ and the related ML

http://lists.openstack.org/pipermail/openstack-dev/2017-August/120540.html


On Thu, Aug 16, 2018 at 4:30 PM Rambo  wrote:

> Sorry, I don't understand what has been removed. The docs about updating
> the flavor's CPU? Otherwise, why don't we consider adding a function to
> update a flavor's CPUs?
>
>
> -- Original --
> *From:* "Zhenyu Zheng";
> *Date:* Thursday, August 16, 2018, 3:56 PM
> *To:* "OpenStack Development Mailing List";
> *Subject:* Re: [openstack-dev] [docs][nova] about update flavor
>
> We only allow updating flavor descriptions (added in microversion 2.55) in
> Nova; what Horizon did was just delete the old flavor and create a new
> one, and I think that was removed last year.
>
> On Thu, Aug 16, 2018 at 3:19 PM Rambo  wrote:
>
>> Hi, all
>>
>>   I find it documented that we can update the flavor name, VCPUs,
>> RAM, root disk, ephemeral disk and so on at docs.openstack.org [1], but in
>> fact only the flavor properties can be changed. Is the document wrong? Can
>> you tell me more about this? Thank you very much.
>>
>> [1]https://docs.openstack.org/horizon/latest/admin/manage-flavors.html
>>
>>
>>
>>
>>
>>
>>
>>
>> Best Regards
>> Rambo
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][nova] about update flavor

2018-08-16 Thread Zhenyu Zheng
We only allow updating flavor descriptions (added in microversion 2.55) in
Nova; what Horizon did was just delete the old flavor and create a new
one, and I think that was removed last year.

On Thu, Aug 16, 2018 at 3:19 PM Rambo  wrote:

> Hi, all
>
>   I find it documented that we can update the flavor name, VCPUs,
> RAM, root disk, ephemeral disk and so on at docs.openstack.org [1], but in
> fact only the flavor properties can be changed. Is the document wrong? Can
> you tell me more about this? Thank you very much.
>
> [1]https://docs.openstack.org/horizon/latest/admin/manage-flavors.html
>
>
>
>
>
>
>
>
> Best Regards
> Rambo
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] A multi-cell instance-list performance test

2018-08-16 Thread Zhenyu Zheng
Yes, the DB queries were serial; after some investigation, it seems we are
unable to perform eventlet.monkey_patch in uWSGI mode, so Yikun made
this fix:

https://review.openstack.org/#/c/592285/
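
The gist of the fix, as a hedged sketch (the real change is in the review
above; this just illustrates forcing the monkey-patch to run before any
other nova imports in the uWSGI entry point):

    # Sketch only: uWSGI does not go through nova's console-script entry
    # points, so patch eventlet first thing in the WSGI module.
    import eventlet
    eventlet.monkey_patch()

    from nova.api.openstack import wsgi_app  # noqa: E402

    application = wsgi_app.init_application('osapi_compute')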


After making this change, we tested again, and we got this kind of data:


                      total     collect   sort     view
before monkey_patch   13.5745   11.7012   1.1511   0.5966
after monkey_patch    12.8367   10.5471   1.5642   0.6041

The performance improved a little, and from the log we can see:

Aug 16 02:14:46.383081 begin detail api

Aug 16 02:14:46.406766 begin cell gather begin

Aug 16 02:14:46.419346 db begin, nova_cell0

Aug 16 02:14:46.425065 db begin, nova_cell1

Aug 16 02:14:46.430151 db begin, nova_cell2

Aug 16 02:14:46.435012 db begin, nova_cell3

Aug 16 02:14:46.440634 db begin, nova_cell4

Aug 16 02:14:46.446191 db begin, nova_cell5

Aug 16 02:14:46.450749 db begin, nova_cell6

Aug 16 02:14:46.455461 db begin, nova_cell7

Aug 16 02:14:46.459959 db begin, nova_cell8

Aug 16 02:14:46.466066 db begin, nova_cell9

Aug 16 02:14:46.470550 db begin, ova_cell10

Aug 16 02:14:46.731882 db end, nova_cell0: 0.311906099319

Aug 16 02:14:52.667791 db end, nova_cell5: 6.22100400925

Aug 16 02:14:54.065655 db end, nova_cell1: 7.63998198509

Aug 16 02:14:54.939856 db end, nova_cell3: 8.50425100327

Aug 16 02:14:55.309017 db end, nova_cell6: 8.85762405396

Aug 16 02:14:55.309623 db end, nova_cell8: 8.84928393364

Aug 16 02:14:55.310240 db end, nova_cell2: 8.87976694107

Aug 16 02:14:56.057487 db end, ova_cell10: 9.58636116982

Aug 16 02:14:56.058001 db end, nova_cell4: 9.61698698997

Aug 16 02:14:56.058547 db end, nova_cell9: 9.59216403961

Aug 16 02:14:56.954209 db end, nova_cell7: 10.4981210232

Aug 16 02:14:56.954665 end cell gather end: 10.5480799675

Aug 16 02:14:56.955010 begin heaq.merge

Aug 16 02:14:58.527040 end heaq.merge: 1.57150006294


So, now the queries are in parallel, but the whole thing still seems serial.
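
To make the pattern concrete, here is a toy model of the scatter/merge flow
(not nova's actual code; ``cell_dbs`` is a hypothetical list of per-cell DB
handles): the queries overlap in green threads, but the socket reads, object
construction, and the merge all run in one Python thread afterwards.

    import heapq
    import eventlet
    eventlet.monkey_patch()

    def query_cell(args):
        idx, cell_db = args
        # I/O waits yield to other green threads, so queries overlap.
        rows = cell_db.list_instances(limit=1000, order_by='created_at')
        # (sort key, tie-breaker, row) so heapq.merge never compares rows
        return [(row['created_at'], idx, row) for row in rows]

    pool = eventlet.GreenPool()
    per_cell = list(pool.imap(query_cell, enumerate(cell_dbs)))

    # Serial from here on: the merge plus any per-row processing.
    top = [row for _, _, row in heapq.merge(*per_cell)][:1000]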


We tried adjusting database configs like max_thread_pool, use_tpool,
etc., and we also tried using a separate DB for some of the cells, but the
results show no big difference.


So, the above is what we have now; feel free to ping us if you have
any questions or suggestions.


BR,


Zhenyu Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]API update week 5-11

2018-07-15 Thread Zhenyu Zheng
Thank you very much for the review and updates during the weekends.

On Sat, Jul 14, 2018 at 4:05 AM Matt Riedemann  wrote:

> On 7/11/2018 9:03 PM, Zhenyu Zheng wrote:
> > 2. Abort live migration in queued state:
> > -
> https://blueprints.launchpad.net/nova/+spec/abort-live-migration-in-queued-status
> > -
> https://review.openstack.org/#/q/topic:bp/abort-live-migration-in-queued-status+(status:open+OR+status:merged)
> > <
> https://review.openstack.org/#/q/topic:bp/abort-live-migration-in-queued-status+%28status:open+OR+status:merged%29
> >
> > - Weekly Progress: Review is ongoing and it is in the nova runway this
> > week. In the API office hour, we discussed doing the compute
> > service version checks on the compute.api.py <http://compute.api.py/> side
> > rather than on the rpc side. Dan has a point about doing it on the rpc
> > side, where the migration status can be changed to running. We decided
> > to discuss it further on the patch.
> >
> >
> > In my own defence, Dan's point seems to be that the actual rpc
> > version pin could be set lower than the can_send_version even when
> > the service version is new enough, so he thinks doing it in rpc is
> better.
>
> That series is all rebased now and I'm +2 up the stack until the API
> change, where I'm just +1 since I wrote the compute service version
> checking part, but I think this series is ready for wider review.
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]API update week 5-11

2018-07-11 Thread Zhenyu Zheng
>
> 2. Abort live migration in queued state:
> -
> https://blueprints.launchpad.net/nova/+spec/abort-live-migration-in-queued-status
>
> -
> https://review.openstack.org/#/q/topic:bp/abort-live-migration-in-queued-status+(status:open+OR+status:merged)
>
> - Weekly Progress: Review is ongoing and it is in the nova runway this week.
> In the API office hour, we discussed doing the compute service version
> checks on the compute.api.py side rather than on the rpc side. Dan has a
> point about doing it on the rpc side, where the migration status can be
> changed to running. We decided to discuss it further on the patch.


In my own defence, Dan's point seems to be that the actual rpc version
pin could be set lower than the can_send_version even when the
service version is new enough, so he thinks doing it in rpc is better.

On Thu, Jul 12, 2018 at 9:15 AM Ghanshyam Mann 
wrote:

> Hi All,
>
> Please find the Nova API highlights of this week.
>
> Weekly Office Hour:
> ===
> We had more attendees in this week's office hour.
>
> What we discussed this week:
> - Discussion on API-related BPs. Discussion points are embedded inline with
> the BP weekly progress in the next section.
> - Triaged 1 new bug, and Alex reviewed one in-progress bug
>
> Planned Features :
> ==
> Below are the API-related features for the Rocky cycle. The Nova API subteam
> will start reviewing those to give regular feedback. If anything is missing,
> feel free to add it to the etherpad:
> https://etherpad.openstack.org/p/rocky-nova-priorities-tracking
>
> 1. Servers Ips non-unique network names :
> -
> https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names
> - Spec Merged
> -
> https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged)
> - Weekly Progress: Spec is merged. I am in contact with the author about the
> code update (sent an email last night). If there is no response by this week,
> I will push the code update for this BP.
>
> 2. Abort live migration in queued state:
> -
> https://blueprints.launchpad.net/nova/+spec/abort-live-migration-in-queued-status
> -
> https://review.openstack.org/#/q/topic:bp/abort-live-migration-in-queued-status+(status:open+OR+status:merged)
> - Weekly Progress: Review is ongoing and it is in the nova runway this week.
> In the API office hour, we discussed doing the compute service version
> checks on the compute.api.py side rather than on the rpc side. Dan has a
> point about doing it on the rpc side, where the migration status can be
> changed to running. We decided to discuss it further on the patch.
>
> 3. Complex anti-affinity policies:
> -
> https://blueprints.launchpad.net/nova/+spec/complex-anti-affinity-policies
> -
> https://review.openstack.org/#/q/topic:bp/complex-anti-affinity-policies+(status:open+OR+status:merged)
> - Weekly Progress: Good review progress. In the API office hour, we discussed
> 2 points:
>1. whether the request also needs to have a flat format like the response.
> IMO we need the flat format in both request and response. Yikun needs more
> opinions on that.
>
>2. naming the fields policy_*, as we are moving these new fields to a
> flat format. I would like policy_* for a clear understanding of the
> attributes by their names. This is not concluded yet,
>  and alex will give feedback on the patch.
>Discussion is on the patch for consensus on naming things.
>
> 4. Volume multiattach enhancements:
> -
> https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements
> -
> https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged)
> - Weekly Progress: mriedem mentioned in last week's status mail that he will
> continue working on this.
>
> 5. API Extensions merge work
> - https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-rocky
> -
> https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/api-extensions-merge-rocky
> - Weekly Progress: I did not get a chance to push more patches for this. I
> will target it before the next office hour.
>
> 6. Handling a down cell
>  - https://blueprints.launchpad.net/nova/+spec/handling-down-cell
>  - The spec mriedem mentioned in the previous week's ML is merged:
> https://review.openstack.org/#/c/557369/
>
> Bugs:
> 
> Triaged 1 new bug, and Alex reviewed one in-progress bug. I did not do my
> homework of reviewing in-progress patches (I will make up for that
> next week).
>
> This week Bug Progress:
> Critical: 0->0
> High importance: 2->3
> By Status:
> New:  1->0
> Confirmed/Triage: 30-> 31
> In-progress: 36->36
> Incomplete: 4->4
> =
> Total: 70->71
>
> NOTE: there might be some bugs which are not tagged as 'api' or 'api-ref';
> those are not in the above list. Tag such bugs so that we can keep an eye
> on them.
>
> Ref: https://etherpad.openstack.org/p/nova-api-weekly-bug-report
>
> -gmann
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [nova] Continuously growing request_specs table

2018-07-02 Thread Zhenyu Zheng
Thanks, I may have missed that one.

On Mon, Jul 2, 2018 at 10:29 PM Matt Riedemann  wrote:

> On 7/2/2018 2:47 AM, Zhenyu Zheng wrote:
> > It seems that the current request_specs records do not get removed even
> > when the related instance is gone, which leads to a continuously growing
> > request_specs table. How is that so?
> >
> > Is it because the delete process could fail and we would have to recover
> > the request_spec if we had deleted it?
> >
> > How about adding a nova-manage CLI command for operators to clean up
> > outdated request_specs records from the table by comparing the request
> > specs with the existence of the related instance?
>
> Already fixed in Rocky:
>
> https://review.openstack.org/#/c/515034/
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Continuously growing request_specs table

2018-07-02 Thread Zhenyu Zheng
Hi,

It seems that the current request_specs records do not get removed even
when the related instance is gone, which leads to a continuously growing
request_specs table. How is that so?

Is it because the delete process could fail and we would have to recover the
request_spec if we had deleted it?

How about adding a nova-manage CLI command for operators to clean up
outdated request_specs records from the table by comparing the request
specs with the existence of the related instance?
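
A hedged sketch of what such a cleanup could look like; the table and column
names follow the nova_api schema as I understand it, and the connection URL
is made up, so treat this as pseudocode rather than a ready-made command:

    # Pseudocode: purge request_specs whose instance mapping is gone.
    from sqlalchemy import create_engine, text

    engine = create_engine('mysql+pymysql://nova:secret@dbhost/nova_api')
    with engine.begin() as conn:
        conn.execute(text(
            "DELETE FROM request_specs "
            "WHERE instance_uuid NOT IN "
            "  (SELECT instance_uuid FROM instance_mappings)"))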

BR,

Kevin Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova API meeting schedule

2018-06-11 Thread Zhenyu Zheng
Glad to hear that the API meeting is happening again, I would also love to
join.

On Mon, Jun 11, 2018 at 10:49 AM Ghanshyam  wrote:

> Hi All,
>
> As you might know, we used to have the Nova API subteam meeting on
> Wednesdays [1], but we did not continue that this year due to the
> unavailability of members.
>
> As per a discussion with melanie, we would like to continue the API meeting
> either in the meeting channel (openstack-meeting-4) or as an office hour in
> the Nova channel. We have 2 options for that:
>
> 1. If there are members from USA/Europe TZs who would like to join the API
> meeting regularly, then we will continue the meeting in the meeting channel
> at a more suitable time, considering the Asia TZs as well. I will initiate
> a doodle vote to select a time suitable for all interested members.
>
> 2. If there are no members from USA/Europe TZs, then Alex and I will
> conduct the API meeting as an office hour in the Nova channel during our
> daytime (somewhere between UTC+1 and UTC+9). There is not much activity on
> the Nova channel during our TZ, so it will be OK to use the Nova channel.
> In this case, we will release the currently occupied meeting channel.
>
> Please let us know who would like to join the API meeting so that we can
> plan accordingly.
>
> [1] https://wiki.openstack.org/wiki/Meetings/NovaAPI
>
> -Nova API Subteam
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Rocky PTG summary - cells

2018-03-15 Thread Zhenyu Zheng
Thanks for the reply; both solutions look reasonable.

On Thu, Mar 15, 2018 at 10:29 AM, melanie witt <melwi...@gmail.com> wrote:

> On Thu, 15 Mar 2018 09:54:59 +0800, Zhenyu Zheng wrote:
>
>> Thanks for the recap, got one question for the "block creation":
>>
>> * An attempt to create an instance should be blocked if the project
>> has instances in a "down" cell (the instance_mappings table has a
>> "project_id" column) because we cannot count instances in "down"
>> cells for the quota check.
>>
>>
>> Since users are not aware of any cell information, and the cells are
>> mostly randomly selected, there is a high probability that users'
>> (projects') instances are spread evenly across cells. The proposed
>> behavior could easily leave a lot of users unable to create instances
>> because one of the cells is down; isn't that too harsh?
>>
>
> To be honest, I share your concern. I had planned to change quota checks
> to use placement instead of reading cell databases ASAP but hit a snag
> where we won't be able to count instances from placement because we can't
> determine the "type" of an allocation. Allocations can be instances, or
> network-related resources, or volume-related resources, etc. Adding the
> concept of an allocation "type" in placement has been a controversial
> discussion so far.
>
> BUT ... we also said we would add a column like "queued_for_delete" to the
> instance_mappings table. If we do that, we could count instances from the
> instance_mappings table in the API database and count cores/ram from
> placement and no longer rely on reading cell databases for quota checks.
> Although, there is one more wrinkle: instance_mappings has a project_id
> column but does not have a user_id column, so we wouldn't be able to get a
> count by project + user needed for the quota check against user quota. So,
> if people would not be opposed, we could also add a "user_id" column to
> instance_mappings to handle that case.
>
> I would prefer not to block instance creations because of "down" cells, so
> maybe there is some possibility to avoid it if we can get
> "queued_for_delete" and "user_id" columns added to the instance_mappings
> table.
>
> -melanie
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Rocky PTG summary - cells

2018-03-14 Thread Zhenyu Zheng
Thanks for the recap, got one question for the "block creation":

  * An attempt to create an instance should be blocked if the project has
> instances in a "down" cell (the instance_mappings table has a "project_id"
> column) because we cannot count instances in "down" cells for the quota
> check.


Since users are not aware of any cell information, and the cells are mostly
randomly selected, there is a high probability that users' (projects')
instances are spread evenly across cells. The proposed behavior could easily
leave a lot of users unable to create instances because one of the cells is
down; isn't that too harsh?

BR,

Kevin Zheng


On Thu, Mar 15, 2018 at 2:26 AM, Chris Dent  wrote:

> On Wed, 14 Mar 2018, melanie witt wrote:
>
> I’ve created a summary etherpad [0] for the nova cells session from the
>> PTG and included a plain text export of it on this email.
>>
>
> Nice summary. Apparently I wasn't there or paying attention when
> something was decided:
>
>  * An attempt to delete an instance in a "down" cell should result in a
>> 500 or 503 error.
>>
>
> Depending on how we look at it, this doesn't really align with what
> 500 or 503 are supposed to be used for. They are supposed to indicate
> that the web server is broken in some fashion: 500 being an
> unexpected and uncaught exception in the web server, 503 that the
> web server is either overloaded or down for maintenance.
>
> So, you could argue that 409 is the right thing here (as seems to
> always happen when we discuss these things). You send a DELETE to
> kill the instance, but the current state of the instance is "on a
> cell that can't be reached" which is in "conflict" with the state
> required to do a DELETE.
>
> If a 5xx is really necessary, for whatever reason, then 503 is a
> better choice than 500 because it at least signals that the broken
> thing is sort of "over there somewhere" rather than the web server
> having an error (which is what 500 is supposed to mean).
>
> --
> Chris Dent   ٩◔̯◔۶   https://anticdent.org/
> freenode: cdent tw: @anticdent
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Should we get auth from context for Neutron endpoint?

2018-02-06 Thread Zhenyu Zheng
Hi Nova,

While doing some tests with my newly deployed devstack env today, it turned
out that the default devstack deployment cannot clean up networks after the
retry attempts are exceeded. This is because, in a deployment with a
super-conductor and cell-conductors, the retry and cleanup logic lives in
the cell-conductor [1], and by default devstack doesn't put the Neutron
endpoint info in nova_cell1.conf. And as the Neutron endpoint is also not
included in the context [2], we can't find the Neutron endpoint when we try
to clean up the network [3].

The solution is simple though: either add the Neutron endpoint info to
nova_cell1.conf in devstack, or change the Nova code to support getting auth
from the context. I think the latter is better, as in a real deployment
there could be many cells, and doing this avoids having to configure it
everywhere.

Is there any particular reason that Neutron is not included in [2]?

Suggestions on how this should be fixed?
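
For reference, the filter at [2] only keeps volume-type entries from the
service catalog. A hedged sketch of the kind of change I mean, as a fragment
of the RequestContext constructor in nova/context.py (illustrative only, not
a tested patch):

    # Sketch: extend the kept service types with 'network', so cell
    # conductors could find Neutron without extra endpoint config.
    if service_catalog:
        # Only include required parts of service_catalog
        self.service_catalog = [
            s for s in service_catalog
            if s.get('type') in ('volume', 'volumev2', 'volumev3',
                                 'network')]
    else:
        self.service_catalog = []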

I also registered a devstack bug to fix it in devstack [4].

[1]
https://github.com/openstack/nova/blob/bccf26c93a973d000e4339843ce9256814286d10/nova/conductor/manager.py#L604
[2]
https://github.com/openstack/nova/blob/9519601401ee116a9197fe3b5d571495a96912e9/nova/context.py#L121
[3] https://bugs.launchpad.net/nova/+bug/1747600
[4] https://bugs.launchpad.net/devstack/+bug/1747598

BR,

Kevin Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] PTL Election Season

2018-01-23 Thread Zhenyu Zheng
Thanks for all the help in these cycles :)

On Tue, Jan 23, 2018 at 5:34 PM, Yikun Jiang  wrote:

> Matt, Thanks for your all works.
> As a beginner in Nova upstream, I really appreciate your patient
> review and warm help.  : )
>
> Regards,
> Yikun
> 
> Jiang Yikun(Kero)
> Mail: yikunk...@gmail.com
>
> 2018-01-23 7:09 GMT+08:00 Matt Riedemann :
>
>> On 1/15/2018 11:04 AM, Kendall Nelson wrote:
>>
>>> Election details: https://governance.openstack.org/election/
>>>
>>> Please read the stipulations and timelines for candidates and electorate
>>> contained in this governance documentation.
>>>
>>> Be aware, in the PTL elections if the program only has one candidate,
>>> that candidate is acclaimed and there will be no poll. There will only be a
>>> poll if there is more than one candidate stepping forward for a program's
>>> PTL position.
>>>
>>> There will be further announcements posted to the mailing list as action
>>> is required from the electorate or candidates. This email is for
>>> information purposes only.
>>>
>>> If you have any questions which you feel affect others please reply to
>>> this email thread.
>>>
>>>
>> To anyone that cares, I don't plan on running for Nova PTL again for the
>> Rocky release. Queens was my fourth tour and it's definitely time for
>> someone else to get the opportunity to lead here. I don't plan on going
>> anywhere and I'll be here to help with any transition needed assuming
>> someone else (or a couple of people hopefully) will run in the election.
>> It's been a great experience and I thank everyone that has had to put up
>> with me and my obsessive paperwork and process disorder in the meantime.
>>
>> --
>>
>> Thanks,
>>
>> Matt
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata

2018-01-16 Thread Zhenyu Zheng
Thanks for the info. So it seems we are not going to implement aggregate
overcommit ratios in placement, at least in the near future?

On Wed, Jan 17, 2018 at 9:19 AM, Zhenyu Zheng <zhengzhenyul...@gmail.com>
wrote:

> Thanks for the info. So it seems we are not going to implement aggregate
> overcommit ratios in placement, at least in the near future?
>
> On Wed, Jan 17, 2018 at 5:24 AM, melanie witt <melwi...@gmail.com> wrote:
>
>> Hello Stackers,
>>
>> This is a heads up to any of you using the AggregateCoreFilter,
>> AggregateRamFilter, and/or AggregateDiskFilter in the filter scheduler.
>> These filters have effectively allowed operators to set overcommit ratios
>> per aggregate rather than per compute node in <= Newton.
>>
>> Beginning in Ocata, there is a behavior change where aggregate-based
>> overcommit ratios will no longer be honored during scheduling. Instead,
>> overcommit values must be set on a per compute node basis in nova.conf.
>>
>> Details: as of Ocata, instead of considering all compute nodes at the
>> start of scheduler filtering, an optimization has been added to query
>> resource capacity from placement and prune the compute node list with the
>> result *before* any filters are applied. Placement tracks resource capacity
>> and usage and does *not* track aggregate metadata [1]. Because of this,
>> placement cannot consider aggregate-based overcommit and will exclude
>> compute nodes that do not have capacity based on per compute node
>> overcommit.
>>
>> How to prepare: if you have been relying on per aggregate overcommit,
>> during your upgrade to Ocata, you must change to using per compute node
>> overcommit ratios in order for your scheduling behavior to stay consistent.
>> Otherwise, you may notice increased NoValidHost scheduling failures as the
>> aggregate-based overcommit is no longer being considered. You can safely
>> remove the AggregateCoreFilter, AggregateRamFilter, and AggregateDiskFilter
>> from your enabled_filters and you do not need to replace them with any
>> other core/ram/disk filters. The placement query takes care of the
>> core/ram/disk filtering instead, so CoreFilter, RamFilter, and DiskFilter
>> are redundant.
>>
>> Thanks,
>> -melanie
>>
>> [1] Placement has been a new slate for resource management and prior to
>> placement, there were conflicts between the different methods for setting
>> overcommit ratios that were never addressed, such as, "which value to take
>> if a compute node has overcommit set AND the aggregate has it set? Which
>> takes precedence?" And, "if a compute node is in more than one aggregate,
>> which overcommit value should be taken?" So, the ambiguities were not
>> something that was desirable to bring forward into placement.
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata

2018-01-16 Thread Zhenyu Zheng
Thanks for the info. So it seems we are not going to implement aggregate
overcommit ratios in placement, at least in the near future?
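
For anyone following melanie's guidance below: the per-compute-node
replacement is just the allocation-ratio options in each compute's
nova.conf. A sketch with illustrative values:

    [DEFAULT]
    cpu_allocation_ratio = 16.0
    ram_allocation_ratio = 1.5
    disk_allocation_ratio = 1.0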

On Wed, Jan 17, 2018 at 5:24 AM, melanie witt  wrote:

> Hello Stackers,
>
> This is a heads up to any of you using the AggregateCoreFilter,
> AggregateRamFilter, and/or AggregateDiskFilter in the filter scheduler.
> These filters have effectively allowed operators to set overcommit ratios
> per aggregate rather than per compute node in <= Newton.
>
> Beginning in Ocata, there is a behavior change where aggregate-based
> overcommit ratios will no longer be honored during scheduling. Instead,
> overcommit values must be set on a per compute node basis in nova.conf.
>
> Details: as of Ocata, instead of considering all compute nodes at the
> start of scheduler filtering, an optimization has been added to query
> resource capacity from placement and prune the compute node list with the
> result *before* any filters are applied. Placement tracks resource capacity
> and usage and does *not* track aggregate metadata [1]. Because of this,
> placement cannot consider aggregate-based overcommit and will exclude
> compute nodes that do not have capacity based on per compute node
> overcommit.
>
> How to prepare: if you have been relying on per aggregate overcommit,
> during your upgrade to Ocata, you must change to using per compute node
> overcommit ratios in order for your scheduling behavior to stay consistent.
> Otherwise, you may notice increased NoValidHost scheduling failures as the
> aggregate-based overcommit is no longer being considered. You can safely
> remove the AggregateCoreFilter, AggregateRamFilter, and AggregateDiskFilter
> from your enabled_filters and you do not need to replace them with any
> other core/ram/disk filters. The placement query takes care of the
> core/ram/disk filtering instead, so CoreFilter, RamFilter, and DiskFilter
> are redundant.
>
> Thanks,
> -melanie
>
> [1] Placement has been a new slate for resource management and prior to
> placement, there were conflicts between the different methods for setting
> overcommit ratios that were never addressed, such as, "which value to take
> if a compute node has overcommit set AND the aggregate has it set? Which
> takes precedence?" And, "if a compute node is in more than one aggregate,
> which overcommit value should be taken?" So, the ambiguities were not
> something that was desirable to bring forward into placement.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][cinder] Questions about truncked disk serial number

2018-01-15 Thread Zhenyu Zheng
Hi,

I ran into a problem like this recently:

When attaching a volume to an instance, the disk is described in the libvirt
XML as:

[image: libvirt disk XML, with the <serial> element set to the Cinder volume UUID]

where the serial number is the volume UUID in Cinder. Meanwhile, inside the
VM, in /dev/disk/by-id there is a link to /vdb named
"virtio-" plus the truncated serial number:

[image: /dev/disk/by-id listing showing the virtio-<truncated serial> symlink]

and according to
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/2/html/Getting_Started_Guide/ch16s03.html

it seems that this is what is used to mount the volume.

The truncation seems to happen here [1][2], at 20 characters.

*My question here is: *if two volumes have identical first 20 characters in
their UUIDs, it seems that the later-attached one will overwrite the first
one's link:

[image: /dev/disk/by-id listing where the virtio-15e... link now points at the new disk]

(the above snapshot is from a volume-backed instance; the virtio-15e... link
pointed to vda before, though the by-path entries look correct)

It is rare for two UUIDs to share their first 20 characters, but it is
possible, so what was the consideration behind truncating the volume UUID to
20 characters instead of using the full 32?
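
A quick illustration of the collision in plain Python; the 20-byte limit
mirrors VIRTIO_BLK_ID_BYTES from [1], and the UUIDs below are contrived:

    VIRTIO_BLK_ID_BYTES = 20  # from include/uapi/linux/virtio_blk.h [1]

    vol_a = '15ea71ed-1de9-4d21-a300-000000000001'
    vol_b = '15ea71ed-1de9-4d21-a300-000000000002'

    link_a = 'virtio-' + vol_a[:VIRTIO_BLK_ID_BYTES]
    link_b = 'virtio-' + vol_b[:VIRTIO_BLK_ID_BYTES]
    assert link_a == link_b  # both become virtio-15ea71ed-1de9-4d21-a,
    # so whichever volume is attached later overwrites the symlink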

BR,

Kevin Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Questions about truncked disk serial number

2018-01-15 Thread Zhenyu Zheng
Oops, forgot the references:
[1]
https://github.com/torvalds/linux/blob/1cc15701cd89b0ce695bbc5cff3a2bf3e2efd25f/include/uapi/linux/virtio_blk.h#L54
[2]
https://github.com/torvalds/linux/blob/1cc15701cd89b0ce695bbc5cff3a2bf3e2efd25f/drivers/block/virtio_blk.c#L363

On Tue, Jan 16, 2018 at 2:35 PM, Zhenyu Zheng <zhengzhenyul...@gmail.com>
wrote:

> Hi,
>
> I ran into a problem like this recently:
>
> When attaching a volume to an instance, the disk is described in the
> libvirt XML as:
>
> [image: libvirt disk XML, with the <serial> element set to the Cinder volume UUID]
> where the serial number is the volume UUID in Cinder. Meanwhile, inside
> the VM, in /dev/disk/by-id there is a link to /vdb named
> "virtio-" plus the truncated serial number:
>
> [image: /dev/disk/by-id listing showing the virtio-<truncated serial> symlink]
>
> and according to
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/2/html/Getting_Started_Guide/ch16s03.html
>
> it seems that this is what is used to mount the volume.
>
> The truncation seems to happen here [1][2], at 20 characters.
>
> *My question here is: *if two volumes have identical first 20 characters
> in their UUIDs, it seems that the later-attached one will overwrite the
> first one's link:
>
> [image: /dev/disk/by-id listing where the virtio-15e... link now points at the new disk]
> (the above snapshot is from a volume-backed instance; the virtio-15e...
> link pointed to vda before, though the by-path entries look correct)
>
> It is rare for two UUIDs to share their first 20 characters, but it is
> possible, so what was the consideration behind truncating the volume UUID
> to 20 characters instead of using the full 32?
>
> BR,
>
> Kevin Zheng
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-14 Thread Zhenyu Zheng
The contributor base in APAC is growing and wishes to be more involved in
OpenStack. It would be really hard for us to join informal meetups (visa
invitation letters, company support, etc.), so I really hope the current
official technical gathering remains, so that we can stay involved with the
community.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-14 Thread Zhenyu Zheng
On Thu, Dec 14, 2017 at 7:02 PM, Kashyap Chamarthy 
wrote:

> [..]
>
> For a relatively mature (~7 years; and ~5 years if we count from the
> time governance changed to OpenStack Foudation) project, one official
> contributor gathering per year sounds fine.  And for those that need
> more face time, people would continue to organize informal meetups.  But
> admittedly, this shifts the logistics apsect onto contributors -- that's
> probably okay, many other contributor communities self-organize meetups.
>
> [...]
>
> --
> /kashyap
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

The contributor base in APAC is growing and wishes to be more involved in
OpenStack. It would be really hard for us to join informal meetups (visa
invitation letters, company support, etc.), so I really hope the current
official technical gathering remains, so that we can stay involved with the
community.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] reset key pair during rebuilding

2017-09-24 Thread Zhenyu Zheng
Hi,

FYI, we are going to use the existing PUT /servers/{server_uuid} API,
adding the 'key_name' field.
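
A hedged sketch of what such a request might look like; the microversion
and exact body placement are assumptions until the change merges:

    PUT /v2.1/servers/{server_uuid}

    {
        "server": {
            "key_name": "my-new-keypair"
        }
    }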

On Sat, Sep 23, 2017 at 9:58 PM, LIU Yulong  wrote:

> Hi nova developers,
>
> This mail is proposed to reconsider the key pair resetting of instance.
> The nova queens PTG discuss is here: https://etherpad.opensta
> ck.org/p/nova-ptg-queens L498. And there are now two proposals.
>
> 1. SPEC 1:  https://review.openstack.org/#/c/375221/ started by me
> (liuyulong) since sep 2016.
>
>This spec will allow setting a new key_name for the instance during the
> rebuild API. That’s a very simple and well-understood approach:
>
>- Consistent with other rebuild API properties, such as name, imageRef,
>metadata, adminPass, etc.
>- The rebuild API is something like `recreating`; this is the right way to
>do key pair updating. For a keypair-login-only VM, this is the key point.
>- It does not involve other APIs like reboot/unshelve, etc.
>- Easy to use, only one API.
>
> By the way, here is the patch (https://review.openstack.org/#/c/379128/)
> which implements this spec; it has been there for more than a year too.
>
> 2. SPEC 2 : https://review.openstack.org/#/c/506552/ proposed by
> Kevin_zheng.
>
> This spec proposes adding a new API for updating one instance’s key pair.
> Its one foreseeable advantage is that it could allow key injection into a
> running instance.
>
> But it may cause some issues:
>
>- This approach needs to update the instance key pair first (one step, one
>API call), and then do a reboot/rebuild or any action causing the VM to
>restart (second step, another API call). Firstly, this is wasteful: it uses
>two API calls. Secondly, if the key pair update succeeds but the reboot
>does not, the key pair in the instance DB and the key inside the guest VM
>may become inconsistent, and the cloud user may be confused about which key
>to use to log in.
>- For the second step (reboot), there is a strong constraint: the
>cloud-init config needs to be set to run on every boot. But if all of a
>cloud platform's images set cloud-init to run per deployment, then to
>achieve this new API's goal all of the platform's images need updating.
>This means a huge amount of upgrade work across the platform's images:
>every image's cloud-init config must change from run-per-deployment to
>run-every-boot. And that still cannot solve the inconsistency between the
>DB key pair and the guest key. For instance, if the running VM is based on
>a run-once cloud-init image: 1. create an image from this VM; 2. change the
>key pair of the new VM; 3. a reboot still cannot work because of the old
>per-deployment config.
>- For the other second step (rebuild): if users have to rebuild, or
>rebuild is the only way to deploy the new key, we are going back to
>rebuild, i.e. SPEC 1. Two steps for key pair updating are not good; why not
>directly use SPEC 1?
>- Another perspective on this is that SPEC 2 is expanding
>the functionality of reboot. What if one day a user wants to change the
>password/name/personality at a reboot?
>- Cloud users may still ask: why can the key pair not be updated
>during rebuild, when name/image/personality can?
>- If the new API does not support injection into a running instance, the
>DB key pair and guest key pair become inconsistent if the cloud user
>forgets the rebuild, reboot, or unshelve API call.
>
>
> In conclusion, IMHO SPEC 1 is the reasonable way to set the key pair for
> the rebuild API. It’s simple and easy.
>
> SPEC 2 can be used for future running-instance key injection, and it is
> still a way for the reboot API to deploy the new key, but its disadvantages
> are as stated above.
>
>
> There has already been some discussion [1] about this with Matt and Kevin.
>
> I sincerely hope to receive your opinions. Feel free to ping me on IRC in
> #openstack-nova; my nick is liuyulong. Thank you very much.
>
>
>
> [1] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-
> nova.2017-09-22.log.html#t2017-09-22T14:05:07
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][Nova] Editing flavor causing instance flavor display error

2017-08-03 Thread Zhenyu Zheng
I was thinking: the current "edit" in Horizon is delete-and-create, and it
may exist just because a flavor has many fields; a user may want a new
flavor that modifies only one field of an old flavor, and doesn't want to
manually copy all the other fields. It is the automatic delete action that
causes all the problems. Maybe Horizon could provide a copy-and-modify
action and leave the deletion of the old flavor to the admin.
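
For illustration, copy-and-modify is already easy to do client-side. A
hedged sketch with python-novaclient, assuming an authenticated session
``sess`` and example flavor names:

    from novaclient import client

    nova = client.Client('2', session=sess)
    old = nova.flavors.find(name='m1.small')
    # Create a new flavor copying everything except the one changed field
    new = nova.flavors.create(name='m1.small.v2',
                              ram=old.ram * 2,   # the field we change
                              vcpus=old.vcpus,
                              disk=old.disk,
                              ephemeral=old.ephemeral,
                              swap=old.swap or 0)
    new.set_keys(old.get_keys())  # carry over the extra specs
    # Deleting the old flavor is now a separate, deliberate admin step.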

On Thu, Aug 3, 2017 at 5:59 PM, Sean Dague <s...@dague.net> wrote:

> On 08/02/2017 10:12 PM, Zhenyu Zheng wrote:
> > Hi All,
> >
> > Horizon provides a feature that allows users to edit existing flavors,
> > whether or not they are currently used by any instances. Since Nova
> > doesn't provide this kind of ability, Horizon achieves it by deleting the
> > old flavor first and creating a new flavor with the requested properties,
> > and the flavor ID is changed to a new UUID.
> >
> > This causes a problem when displaying instances with latest Nova: Horizon
> > displays the flavor name by requesting flavor details using the flavor id
> > returned by the get-instance request, but Nova has now moved flavors to
> > the api_db, and when a flavor is deleted it is removed directly from
> > the DB, so when displaying, the field shows "Not Available".
> >
> > Should we stop supporting editing of existing flavors? It is actually
> > delete-and-create.
>
> Yes, it should not be a feature in Horizon, because it's extremely
> misleading what it does. It's actually really breaking their
> environment. It's unfortunate that it was implemented there at all. :(
>
> > Maybe we should at least add WARNING notes about this when editing
> > flavors, about how it is actually done and what kind of influence it
> > will have on existing instances.
> >
> > Nova (since microversion 2.47) can now reply with embedded flavor details
> > including ram, vcpus, original_name, etc.
> >
> > Since we provide this flavor editing feature and we display "name" as
> > an identifier, after some flavor edits, even if we fix the above
> > mentioned problem and display the correct flavor name, the flavor
> > details will be different from the actual instance type.
>
> There was an extremely long and heated discussion in the Nova room in
> Atlanta about including that name in server document because of this
> Horizon behavior. I came down on the side of showing it, because people
> that use the flavor "rename" function are actually basically breaking
> their environment (in many ways). Lots of people don't do that, so this
> is useful to them so that they can create new instances like the ones
> there.
>
> The only way to change the flavor of an instance is to resize it.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon][Nova] Editing flavor causing instance flavor display error

2017-08-02 Thread Zhenyu Zheng
Hi All,

Horizon provides a feature that allows users to edit existing flavors,
whether or not they are currently used by any instances. Since Nova doesn't
provide this kind of ability, Horizon achieves it by deleting the old flavor
first and creating a new flavor with the requested properties, and the
flavor ID is changed to a new UUID.

This causes a problem when displaying instances with latest Nova: Horizon
displays the flavor name by requesting flavor details using the flavor id
returned by the get-instance request, but Nova has now moved flavors to the
api_db, and when a flavor is deleted it is removed directly from the DB, so
when displaying, the field shows "Not Available".

Should we stop supporting editing of existing flavors? It is actually
delete-and-create.

Maybe we should at least add WARNING notes about this when editing
flavors, about how it is actually done and what kind of influence it will
have on existing instances.

Nova (since microversion 2.47) can now reply with embedded flavor details
including ram, vcpus, original_name, etc.

Since we provide this flavor editing feature and we display "name" as an
identifier, after some flavor edits, even if we fix the above-mentioned
problem and display the correct flavor name, the flavor details will be
different from the actual instance type.

Thoughts?

BR,

Kevin Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Allow passing security groups when attaching interfaces?

2017-07-06 Thread Zhenyu Zheng
Thanks a lot. Actually, they are using Heat with its update-network
function, so I guess Heat has to do the work :)
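
For reference, the pre-created-port workaround Gary and Matt describe below
looks roughly like this (resource names are examples):

    openstack port create --network net1 --security-group my-sg my-port
    openstack server add port my-server my-port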

On Thu, Jul 6, 2017 at 10:50 PM, Jay Pipes <jaypi...@gmail.com> wrote:

> On 07/06/2017 10:39 AM, Matt Riedemann wrote:
>
>> On 7/6/2017 6:39 AM, Gary Kotton wrote:
>>
>>> Hi,
>>>
>>> When you attach an interface there are a number of options:
>>>
>>> 1. Pass an existing port
>>>
>>> 2. Pass a network
>>>
>>> In the second case a new port will be created and by default that will
>>> have the default security group.
>>>
>>> You could try the first option by attaching the security group to the
>>> port
>>>
>>> Thanks
>>>
>>> Gary
>>>
>>> *From: *Zhenyu Zheng <zhengzhenyul...@gmail.com>
>>> *Reply-To: *OpenStack List <openstack-dev@lists.openstack.org>
>>> *Date: *Thursday, July 6, 2017 at 12:45 PM
>>> *To: *OpenStack List <openstack-dev@lists.openstack.org>
>>> *Subject: *[openstack-dev] [Nova][Neutron] Allow passing security groups
>>> when attaching interfaces?
>>>
>>> Hi,
>>>
>>> Our product has met this kind of problem: when we boot instances, we
>>> are allowed to pass security groups, and if we provide a network ID, ports
>>> with the security groups we passed will be created; when we show the
>>> instance, we can see that its security groups field contains the groups we
>>> provided. But when we later attach some new interfaces (using network_id),
>>> the newly added interfaces will be in the default security group.
>>>
>>> We are wondering: would it be better to allow passing security groups
>>> when attaching interfaces? Or is it considered a proxy API, which we do
>>> not like?
>>>
>>
>> I don't think we want this, it's more proxy orchestration that would have
>> to live in Nova. As Gary pointed out, if you want a non-default security
>> group, create the port in neutron ahead of time, associate the non-default
>> security group(s) and then attach that port to the server instance in nova.
>>
>
> This +100.
>
> Best,
> -jay
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Neutron] Allow passing security groups when attaching interfaces?

2017-07-06 Thread Zhenyu Zheng
Hi,

Our product has met this kind of problem: when we boot instances, we are
allowed to pass security groups, and if we provide a network ID, ports with
the security groups we passed will be created; when we show the instance, we
can see that its security groups field contains the groups we provided. But
when we later attach some new interfaces (using network_id), the newly added
interfaces will be in the default security group.

We are wondering: would it be better to allow passing security groups when
attaching interfaces? Or is it considered a proxy API, which we do not like?

BR,

Kevin Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Need volunteer(s) to help migrate project docs

2017-06-24 Thread Zhenyu Zheng
I will help

On Sat, Jun 24, 2017 at 12:23 AM, Matt Riedemann 
wrote:

> The spec [1] with the plan to migrate project-specific docs from
> docs.openstack.org to each project has merged.
>
> There are a number of steps outlined in there which need people from the
> project teams, e.g. nova, to do for their project. Some of it we're already
> doing, like building a config reference, API reference, using the
> openstackdocstheme, etc. But there are other things like moving the install
> guide for compute into the nova repo.
>
> Is anyone interested in owning this work? There are enough tasks that it
> could probably be a couple of people coordinating. It also needs to be done
> by the end of the Pike release, so time is a factor.
>
> [1] https://review.openstack.org/#/c/472275/
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-dev[[nova] Simple question about sorting CPU topologies

2017-06-22 Thread Zhenyu Zheng
Thanks all for the replies. I guess it will be better to configure those
preferences via flavor/image properties according to the hardware, then.
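
For example, where sharing thread siblings hurts a workload, the preference
can be expressed per flavor (a sketch via python-novaclient; the flavor name
and session are illustrative):

    from novaclient import client as nova_client

    nova = nova_client.Client('2', session=sess)  # assumes an authenticated session
    flavor = nova.flavors.find(name='bigdata.large')
    flavor.set_keys({
        'hw:cpu_policy': 'dedicated',
        'hw:cpu_threads_policy': 'isolate',  # keep vCPUs off thread siblings
    })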

On Wed, Jun 21, 2017 at 1:21 AM, Mooney, Sean K <sean.k.moo...@intel.com>
wrote:

>
>
> > -Original Message-
> > From: Jay Pipes [mailto:jaypi...@gmail.com]
> > Sent: Tuesday, June 20, 2017 5:59 PM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [openstack-dev[[nova] Simple question
> > about sorting CPU topologies
> >
> > On 06/20/2017 12:53 PM, Chris Friesen wrote:
> > > On 06/20/2017 06:29 AM, Jay Pipes wrote:
> > >> On 06/19/2017 10:45 PM, Zhenyu Zheng wrote:
> > >>> Sorry, The mail sent accidentally by mis-typing ...
> > >>>
> > >>> My question is, what is the benefit of the above preference?
> > >>
> > >> Hi Kevin!
> > >>
> > >> I believe the benefit is so that the compute node prefers CPU
> > >> topologies that do not have hardware threads over CPU topologies
> > that
> > >> do include hardware threads.
> [Mooney, Sean K] If you have not expressed that you want the require or
> isolate policy, then you really can't infer which is better, as for some
> workloads preferring hyperthread siblings will improve performance (2
> threads sharing data via L2 cache) and for others it will reduce it (2
> threads that do not share data).
> > >>
> > >> I'm not sure exactly of the reason for this preference, but perhaps
> > >> it is due to assumptions that on some hardware, threads will compete
> > >> for the same cache resources as other siblings on a core whereas
> > >> cores may have their own caches (again, on some specific hardware).
> > >
> > > Isn't the definition of hardware threads basically the fact that the
> > > sibling threads share the resources of a single core?
> > >
> > > Are there architectures that OpenStack runs on where hardware threads
> > > don't compete for cache/TLB/execution units?  (And if there are, then
> > > why are they called threads and not cores?)
> [Mooney, Sean K] Well, on x86 when you turn on hyper-threading your L1
> data and instruction cache is partitioned in 2, with each half allocated
> to a thread sibling. The L2 cache, which is also per core, is shared
> between the 2 thread siblings, so on Intel's x86 implementation the
> threads do not compete for L1 cache but do share L2. That could easily
> change though in new generations.
>
> Pre-Zen architecture, I believe AMD shared the floating point units
> between each SMT thread but had separate integer execution units that
> were not shared. That meant for integer-heavy workloads their SMT
> implementation approached 2X performance, limited by the shared load and
> store units, and reduced to zero scaling if both threads tried to access
> the floating point execution unit concurrently.
>
> So it's not quite as clean cut as saying the threads do or don't share
> resources. Each vendor addresses this differently; even within x86 you
> are not required to have the partitioning described above for the cache,
> as Intel did, or for the execution units. On other architectures I'm sure
> they have come up with equally inventive ways to make this an interesting
> shade of grey when describing the difference between a hardware thread
> and a full core.
>
> >
> > I've learned over the years not to make any assumptions about hardware.
> >
> > Thus my "not sure exactly" bet-hedging ;)
> [Mooney, Sean K] yep hardware is weird and will always find ways to break
> your assumptions :)
> >
> > Best,
> > -jay
> >
> > ___
> > ___
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> > requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-dev[[nova] Simple question about sorting CPU topologies

2017-06-19 Thread Zhenyu Zheng
Sorry, the mail was sent accidentally by mis-typing ...

My question is, what is the benefit of the above preference?

BR,
Kevin

On Tue, Jun 20, 2017 at 10:43 AM, Zhenyu Zheng <zhengzhenyul...@gmail.com>
wrote:

> Hi,
>
> In https://github.com/openstack/nova/blob/master/
> nova/virt/hardware.py#L396 we calculate every possible CPU topology
> and sort by:
> # We want to
> # - Minimize threads (ie larger sockets * cores is best)
> # - Prefer sockets over cores
> possible = sorted(possible, reverse=True,
>                   key=lambda x: (x.sockets * x.cores,
>                                  x.sockets,
>                                  x.threads))
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-dev[[nova] Simple question about sorting CPU topologies

2017-06-19 Thread Zhenyu Zheng
Hi,

In https://github.com/openstack/nova/blob/master/nova/virt/hardware.py#L396
we calculate every possible CPU topology and sort by:

    # We want to
    # - Minimize threads (ie larger sockets * cores is best)
    # - Prefer sockets over cores
    possible = sorted(possible, reverse=True,
                      key=lambda x: (x.sockets * x.cores,
                                     x.sockets,
                                     x.threads))
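
A minimal standalone illustration of the resulting preference order (the
topologies are made up; all three expose 8 logical CPUs):

    import collections

    Topo = collections.namedtuple('Topo', 'sockets cores threads')
    possible = [Topo(1, 4, 2), Topo(2, 4, 1), Topo(4, 2, 1)]

    ordered = sorted(possible, reverse=True,
                     key=lambda x: (x.sockets * x.cores, x.sockets, x.threads))
    # -> [Topo(4, 2, 1), Topo(2, 4, 1), Topo(1, 4, 2)]
    # the hyperthreaded topology sorts last; among the rest, more sockets win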
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][cinder] Ceph volumes attached to local deleted instance could not be correctly handled

2017-03-14 Thread Zhenyu Zheng
Hi all,

We have met the following problem:

We deployed our environment with Ceph as the volume backend. We boot an
instance and attach a Ceph volume to it; when our nova-compute is down and
we delete this instance, the delete goes through the local_delete path, and
the Ceph volume attached to the instance changes to "available" status in
Cinder. But when we try to delete that volume, an error happens, so we end
up with an "available" volume that can be neither attached nor deleted. We
also tested with iSCSI volumes and they seem fine.
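
A rough reproduction sketch (client setup and IDs are illustrative; assumes
a Ceph-backed Cinder):

    from cinderclient import client as cinder_client
    from novaclient import client as nova_client

    # assumes an authenticated keystoneauth1 session `sess`
    nova = nova_client.Client('2', session=sess)
    cinder = cinder_client.Client('2', session=sess)

    vol = cinder.volumes.create(size=1)
    nova.volumes.create_server_volume(server_id, vol.id)

    # stop nova-compute on the instance's host, then:
    nova.servers.delete(server_id)   # falls back to local_delete
    cinder.volumes.delete(vol.id)    # errors out; volume stuck "available"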

I reported a bug about this:
https://bugs.launchpad.net/nova/+bug/1672624

Thanks,

Kevin Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][searchlight] When do instances get removed from Searchlight?

2017-03-06 Thread Zhenyu Zheng
Hi, Gibi

Yes, the soft_delete.end notification isn't handled in SL, and we should do
that. But what Matt means here is different: even if you 'hard' delete an
instance, the record still exists in the DB, and a user with a certain role
can list it using deleted=true, so we should support that in SL as well.
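
A sketch of the kind of admin listing that has to keep working (client setup
is illustrative):

    from novaclient import client as nova_client

    nova = nova_client.Client('2', session=sess)  # assumes an admin session
    # (soft-)deleted instances stay listable until archived or purged
    deleted_servers = nova.servers.list(search_opts={'deleted': True})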

On Monday, March 6, 2017, Balazs Gibizer <balazs.gibi...@ericsson.com>
wrote:

>
>
> On Mon, Mar 6, 2017 at 3:09 AM, Zhenyu Zheng <zhengzhenyul...@gmail.com>
> wrote:
>
>> Hi, Matt
>>
>> AFAIK, Searchlight does delete the record: it catches the instance.delete
>> notification and performs the action:
>> http://git.openstack.org/cgit/openstack/searchlight/tree/sea
>> rchlight/elasticsearch/plugins/nova/notification_handler.py#n100
>> -> http://git.openstack.org/cgit/openstack/searchlight/tree/sea
>> rchlight/elasticsearch/plugins/nova/notification_handler.py#n307
>>
>
> Hi,
>
> There is instance.soft_delete legacy notification [2] (delete_type ==
> 'soft_delete'). This could be transformed to versioned notification along
> with [3]. So I guess there could be a way to distinguish between soft
> delete and real delete on searchlight side based on these notifications.
>
> Cheers,
> gibi
>
> [2] https://github.com/openstack/nova/blob/master/nova/compute/a
> pi.py#L1872
> [3] https://review.openstack.org/#/c/410297/
>
>
> I will double check with others from the SL team, and if it is the case,
>> we will try to find a way to solve this ASAP.
>>
>> Thanks,
>>
>> Kevin Zheng
>>
>> On Mon, Mar 6, 2017 at 1:21 AM, Matt Riedemann <mriede...@gmail.com>
>> wrote:
>>
>>> I've posted a spec [1] for nova's integration with searchlight for
>>> listing instance across multiple cells. One of the open questions I have on
>>> that is when/how do instances get removed from searchlight?
>>>
>>> When an instance gets deleted via the compute API today, it's not really
>>> deleted from the database. It's considered "soft" deleted and you can still
>>> list (soft) deleted instances from the database via the compute API if
>>> you're an admin.
>>>
>>> Nova will be sending instance.destroy notifications to searchlight but
>>> we don't really want the ES entry removed because we still have to support
>>> the compute API contract to list deleted instances. Granted, this is a
>>> pretty limp contract because there is no guarantee that you'll be able to
>>> list those deleted instances forever because once they get archived (moved
>>> to shadow tables in the nova database) or purged (hard delete), then they
>>> are gone from that API query path.
>>>
>>> So I'm wondering at what point instances stored in searchlight will be
>>> removed. Maybe there is already an answer to this and the searchlight team
>>> can just inform me. Otherwise we might need to think about data retention
>>> policies and how long a deleted instances will be stored in searchlight
>>> before it's removed. Again, I'm not sure if nova would control this or if
>>> it's something searchlight supports already.
>>>
>>> [1] https://review.openstack.org/#/c/441692/
>>>
>>> --
>>>
>>> Thanks,
>>>
>>> Matt Riedemann
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][searchlight] When do instances get removed from Searchlight?

2017-03-05 Thread Zhenyu Zheng
Hi, Matt

AFAIK, Searchlight does delete the record: it catches the instance.delete
notification and performs the action:
http://git.openstack.org/cgit/openstack/searchlight/tree/searchlight/elasticsearch/plugins/nova/notification_handler.py#n100
->
http://git.openstack.org/cgit/openstack/searchlight/tree/searchlight/elasticsearch/plugins/nova/notification_handler.py#n307

I will double check with others from the SL team, and if it is the case, we
will try to find a way to solve this ASAP.

Thanks,

Kevin Zheng

On Mon, Mar 6, 2017 at 1:21 AM, Matt Riedemann  wrote:

> I've posted a spec [1] for nova's integration with searchlight for listing
> instance across multiple cells. One of the open questions I have on that is
> when/how do instances get removed from searchlight?
>
> When an instance gets deleted via the compute API today, it's not really
> deleted from the database. It's considered "soft" deleted and you can still
> list (soft) deleted instances from the database via the compute API if
> you're an admin.
>
> Nova will be sending instance.destroy notifications to searchlight but we
> don't really want the ES entry removed because we still have to support the
> compute API contract to list deleted instances. Granted, this is a pretty
> limp contract because there is no guarantee that you'll be able to list
> those deleted instances forever because once they get archived (moved to
> shadow tables in the nova database) or purged (hard delete), then they are
> gone from that API query path.
>
> So I'm wondering at what point instances stored in searchlight will be
> removed. Maybe there is already an answer to this and the searchlight team
> can just inform me. Otherwise we might need to think about data retention
> policies and how long a deleted instances will be stored in searchlight
> before it's removed. Again, I'm not sure if nova would control this or if
> it's something searchlight supports already.
>
> [1] https://review.openstack.org/#/c/441692/
>
> --
>
> Thanks,
>
> Matt Riedemann
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][docs] Is anyone interested in being the docs liaison for Nova?

2017-03-01 Thread Zhenyu Zheng
Hi,

I'm not a native English speaker, but I would like to give it a try if
possible :)

On Wed, Mar 1, 2017 at 11:45 PM, Matt Riedemann  wrote:

> There is a need for a liaison from Nova for the docs team to help with
> compute-specific docs in the install guide and various manuals.
>
> For example, we documented placement and cells v2 in the nova devref in
> Ocata but instructions on those aren't in the install guide, so the docs
> team is adding that here [1].
>
> I'm not entirely sure what the docs liaison role consists of, but I assume
> it at least means attending docs meetings, helping to review docs patches
> that are related to nova, helping to alert the docs team of big changes
> coming in a release that will impact the install guide, etc.
>
> From my point of view, I've historically pushed nova developers to be
> documenting new features within the nova devref since it was "closer to
> home" and could be tied to landing said feature in the nova tree, so there
> was more oversight on the docs actually happening *somewhere* rather than a
> promise to work them in the non-nova manuals, which a lot of the time was
> lip service and didn't actually happen once the feature was in. But there
> is still the need for the install guide as the first step to deploying nova
> so we need to balance both things.
>
> If no one else steps up for the docs liaison role, by default it lands on
> me, so I'd appreciate any help here.
>
> [1] https://review.openstack.org/#/c/438328/
>
> --
>
> Thanks,
>
> Matt Riedemann
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Pike PTG recap - cells

2017-02-27 Thread Zhenyu Zheng
Matt,

Thanks for the recap. It's a pity I could not attend the PTG due to personal
reasons. I am willing to take the work you mentioned, and will check the
details with you.

Another thing, I don't know whether you guys discussed it or not: I saw in
[1] that we are talking about adding the tags field (and others, of course)
to the instance notification payload, to be sent during instance.create and
instance.update. The fact is that currently we cannot add tags during boot,
nor do we send notifications when we add/update/delete tags later (it is a
direct DB change and no instance.update notification is sent out), so for
the tags field we have to wait for another instance.update action to get the
latest info. I have already been working on the boot part [2] and have
planned to work on the tag notification part [3].

So, are there any plans for those? Maybe it is OK to send out an
instance.update notification on tag actions once [1] gets merged?

Thanks,

Kevin Zheng

[1] https://blueprints.launchpad.net/nova/+spec/additional-notif
ication-fields-for-searchlight
[2]
https://blueprints.launchpad.net/nova/+spec/support-tag-instance-when-boot
[3] https://blueprints.launchpad.net/nova/+spec/send-tag-notification


On Tue, Feb 28, 2017 at 6:33 AM, Matt Riedemann  wrote:

> We talked about cells on Wednesday morning at the PTG. The full etherpad
> is here [1].
>
> Searchlight integration
> ---
>
> We talked a bit about what needs to happen for this, and it starts with
> getting the data into searchlight so that it can serve the REST API, which
> is being worked in this blueprint [2]. We want to get that done early in
> Pike.
>
> We plan on making the use of Searchlight configurable in Nova since at
> first you might not even have anything in it, so listing instances wouldn't
> work. We're also going to attempt to merge-sort when listing instances
> across multiple cells but it's going to be a known issue that it will be
> slow.
>
> For testing Nova with Searchlight, we need to start by enabling the
> Searchlight devstack plugin in the nova-next CI job, which I'll work on.
>
> I'm going to talk to Kevin Zheng about seeing if he can spend some time on
> getting Nova to use Searchlight if it's (1) configured for use and (2) is
> available (the endpoint is in the service catalog). Kevin is a Searchlight
> core and familiar with the Nova API code, so he's a good candidate for
> working on this (assuming he's available and willing to own it).
>
> Cells-aware gaps in the API
> ---
>
> Dan Smith has started a blueprint [3] for closing gaps in the API which
> break in a multi-cell deployment. He has a test patch [4] to expose the
> failures and then they can be worked on individually. The pattern of the
> work is in [5]. Help is welcome here, so please attend the weekly cells
> meeting [6] if you want to help out.
>
> Auto-discovery of compute hosts
> ---
>
> The "discover_hosts_in_cells_interval" config option was introduced in
> Ocata which controls a periodic task in the scheduler to discover new
> unmapped compute hosts but it's not very efficient since it queries all
> cell mappings and then all compute nodes in each cell mapping and checks to
> see if those compute nodes are yet mapped to the cell in the nova_api
> database. Dan Smith has a series of changes [7] which should make that
> discovery process more efficient, it just needs to be cleaned up a bit.
>
> Service arrangement
> ---
>
> Dan Smith is working on a series of changes in both Nova and devstack for
> testing with multiple cells [8]. The general idea is that there will still
> be two nodes and two nova-compute services. There will be three
> nova-conductor services, one per cell, and then another top-level "super
> conductor" which is there for building instances and sending the server
> create down to one of the cells. All three conductors are going to be
> running in the subnode just to balance the resources a bit otherwise the
> primary node is going to be starved. The multi-cell job won't be running
> migration tests since we don't currently support instance move operations
> between cells. We're going to work a hack into the scheduler to restrict a
> move operation to the same cell the instance is already in. This means the
> live migration job will still be a single-cell setup where both
> nova-computes are in the same cell.
>
> Getting rid of nova-consoleauth
> ---
>
> There is an unfinished blueprint [9] from Paul Murray which melwitt is
> going to pick up for Pike. The idea is to move the tokens into the database
> so we don't care where the consoleauth service lives and then we can also
> kill the service.
>
> [1] https://etherpad.openstack.org/p/nova-ptg-pike-cells
> [2] https://blueprints.launchpad.net/nova/+spec/additional-notif
> ication-fields-for-searchlight
> [3] 

Re: [openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2017-02-23 Thread Zhenyu Zheng
BTW, I think this can be done using the new placement service, with a custom
resource provider? Correct?

On Fri, Feb 24, 2017 at 10:18 AM, Zhenyu Zheng <zhengzhenyul...@gmail.com>
wrote:

> Matt,
>
> Thanks for the information, I will check that. But I still think the user
> demand here is to use a local disk from the compute node as a block
> device, as the data can be retained if the old VM gets deleted, and we can
> start a new one with the data while having the performance they wanted.
>
> Kevin Zheng
>
> On Fri, Feb 24, 2017 at 4:06 AM, Matt Riedemann <
> mrie...@linux.vnet.ibm.com> wrote:
>
>> On 9/26/2016 9:21 PM, Zhenyu Zheng wrote:
>>
>>> Hi,
>>>
>>> Thanks for the reply. Actually approach one is not what we are looking
>>> for; our demand is to attach a real physical volume from the compute
>>> node to VMs. This way we can achieve the performance we need for use
>>> cases such as big data, and it can be done by Cinder using the
>>> BlockDeviceDriver; it is quite different from approach one that you
>>> mentioned. The only problem now is that we cannot practically ensure the
>>> compute resource is located on the same host as the volume. As Matt
>>> mentioned above, currently we have to arrange 1:1 AZs in Cinder and Nova
>>> to do this, and that is not practical in commercial deployments.
>>>
>>> Thanks.
>>>
>>>
>> Kevin,
>>
>> Is the issue because you can't use ephemeral local disks (it must be a
>> persistent boot from volume)?
>>
>> Have you looked at using the LVM image backend for local storage in Nova?
>> I thought cfriesen said once that windriver is doing high performance
>> config using local LVM in nova.
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2017-02-23 Thread Zhenyu Zheng
Matt,

Thanks for the information, I will check that. But I still think the user
demand here is to use a local disk from the compute node as a block device,
as the data can be retained if the old VM gets deleted, and we can start a
new one with the data while having the performance they wanted.
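
On the LVM image backend Matt mentions below: if ephemeral local LVM disks
were acceptable, the compute-side setup would be roughly (a nova.conf
sketch; the volume group name is illustrative):

    [libvirt]
    images_type = lvm
    images_volume_group = nova-local-vg

But note those disks are ephemeral and are deleted with the instance, which
is exactly what we want to avoid here.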

Kevin Zheng

On Fri, Feb 24, 2017 at 4:06 AM, Matt Riedemann <mrie...@linux.vnet.ibm.com>
wrote:

> On 9/26/2016 9:21 PM, Zhenyu Zheng wrote:
>
>> Hi,
>>
>> Thanks for the reply. Actually approach one is not what we are looking
>> for; our demand is to attach a real physical volume from the compute
>> node to VMs. This way we can achieve the performance we need for use
>> cases such as big data, and it can be done by Cinder using the
>> BlockDeviceDriver; it is quite different from approach one that you
>> mentioned. The only problem now is that we cannot practically ensure the
>> compute resource is located on the same host as the volume. As Matt
>> mentioned above, currently we have to arrange 1:1 AZs in Cinder and Nova
>> to do this, and that is not practical in commercial deployments.
>>
>> Thanks.
>>
>>
> Kevin,
>
> Is the issue because you can't use ephemeral local disks (it must be a
> persistent boot from volume)?
>
> Have you looked at using the LVM image backend for local storage in Nova?
> I thought cfriesen said once that windriver is doing high performance
> config using local LVM in nova.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] How boot with bdm_v2={source_type=image, destination_type=local} should been used?

2017-02-08 Thread Zhenyu Zheng
Hi All,

As I was working on "Check destination_type when booting with bdm
provided": https://review.openstack.org/#/c/402372/ and addressing
reviewers' comments, I found out that the current source_type=image,
destination_type=local combination seems unusable.

According to the docs:
http://docs.openstack.org/developer/nova/block_device_mapping.html#block-device-mapping-v2,
it seems to me that "image --> local" means "boot from image", and as the
doc says, I should also provide the image_ref param, but if I do so, an
error is raised:

ERROR (BadRequest): Block Device Mapping is Invalid: Boot sequence for the
instance and image/block device mapping combination is not valid. (HTTP
400) (Request-ID: req-f848c6c1-0961-46c4-ac51-713fde042215)

If I just use the bdm (without image_ref), it goes:
2017-02-08 11:04:24.929 24141 ERROR nova.compute.manager [instance:
6e44cafd-b330-4a10-8c77-eac60d58f20c] ImageNotFound: Image could not be
found.

It turned out we used '' as the image ID to fetch the image from Glance, and
obviously we cannot get it.

A detailed log and explanation can be found in my bug report:
https://bugs.launchpad.net/nova/+bug/1662748
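
For reference, the kind of request I am making looks roughly like this (a
sketch via python-novaclient; IDs and the session are illustrative):

    from novaclient import client as nova_client

    nova = nova_client.Client('2', session=sess)

    bdm = [{'uuid': image_id,
            'source_type': 'image',
            'destination_type': 'local',
            'boot_index': 0,
            'delete_on_termination': True}]

    # passing both an image and the image-backed BDM triggers the HTTP 400
    # above; passing image=None instead hits the ImageNotFound during build
    nova.servers.create('test-vm', image_id, flavor_id,
                        block_device_mapping_v2=bdm)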

So, what do we expect for this API usage?

Thanks,
Kevin Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Device tag in the API breaks in the old microversion

2017-01-24 Thread Zhenyu Zheng
Thanks Alex for raising this widely. As the Chinese holiday is coming, Alex
and I might be away for a week, and it will be better to fix this sooner, so
thanks to Artom for taking over the fix :)
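
For reference, Sean's suggestion quoted below amounts to the following
pattern (a sketch; `req_version` stands for the request's APIVersionRequest):

    from nova.api.openstack import api_version_request

    # buggy: equality is true only at exactly 2.32, so later microversions
    # silently fall back to the old schema
    if req_version == api_version_request.APIVersionRequest('2.32'):
        pass

    # safer: match an open-ended range, so 2.32 and everything later qualifies
    if req_version.matches(api_version_request.APIVersionRequest('2.32'),
                           api_version_request.APIVersionRequest()):
        pass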

On Wed, Jan 25, 2017 at 7:50 AM, Ghanshyam Mann 
wrote:

>
> On Wed, Jan 25, 2017 at 1:18 AM, Matt Riedemann 
> wrote:
>
>> On 1/24/2017 2:05 AM, Alex Xu wrote:
>>
>>> Unfortunately the device tag support in the API was broken in the old
>>> Microversion https://bugs.launchpad.net/nova/+bug/1658571, which thanks
>>> to Kevin Zheng to find out that.
>>>
>>> Actually there are two bugs, just all of them are about device tag. The
>>> first one [0] is a mistake in the initial introduce of device tag. The
>>> new schema was only applied at exactly version 2.32; when the request
>>> version was greater than 2.32, the schema fell back to the old one.

>>>
>>> The second one [1] is that when we bump the API to 2.37, the network
>>> device tag was removed accidentally which also added in 2.32 [2].
>>>
>>> So the current API behavior is as below:
>>>
>>> 2.32: BDM tag and network device tag added.
>>> 2.33 - 2.36: 'tag' in the BDM disappeared. The network device tag still
>>> works.
>>> 2.37: The network device tag disappeared also.
>>>
>>> There are few questions we should think about:
>>>
>>> 1. Should we fix that by Microversion?
>>> Thanks to Chris Dent point that out in the review. I also think we
>>> need to bump Microversion, which follow the rule of Microversion.
>>>
>>> 2. If we need Microversion, is that something we can do before release?
>>> We are very close to the feature freeze. And in normal, we need spec
>>> for microversion. Maybe we only can do that in Pike. For now we can
>>> update the API-ref, and microversion history to notice that, maybe a
>>> reno also.
>>>
>>> 3. How can we prevent that from happening again?
>>>Both of those patches were reviewed multiple cycles. But we still
>>> miss that. It is worth to think about how to prevent that happened again.
>>>
>>>Talk with Sean. He suggests stop passing plain string version to the
>>> schema extension point. We should always pass APIVersionRequest object
>>> instead of plain string. Due to "version == APIVersionRequest('2.32')"
>>> is always wrong, we should remove the '__eq__'. The developer should
>>> always use the 'APIVersionRequest.matches' [3] method.
>>>
>>>That can prevent the first mistake we made. But nothing help for
>>> second mistake. Currently we only run the test on the specific
>>> Microversion for the specific interesting point. In the before, the new
>>> tests always inherits from the previous microversion tests, just like
>>> [4]. That can test the old API behavior won't be changed in the new
>>> microversion. But now, we said that is waste, we didn't do that again
>>> just like [5]. Should we change that back?
>>>
>>> Thanks
>>> Alex
>>>
>>> [0]
>>> https://review.openstack.org/#/c/304510/64/nova/api/openstac
>>> k/compute/block_device_mapping.py
>>> [1] https://review.openstack.org/#/c/316398/37/nova/api/openstac
>>> k/compute/schemas/servers.py@88
>>> [2] https://review.openstack.org/#/c/316398/37/nova/api/openstac
>>> k/compute/schemas/servers.py@79
>>> [3] https://github.com/openstack/nova/blob/master/nova/api/opens
>>> tack/api_version_request.py#L219
>>> [4] https://github.com/openstack/nova/blob/master/nova/tests/uni
>>> t/api/openstack/compute/test_evacuate.py#L415
>>> [5] https://github.com/openstack/nova/blob/master/nova/tests/uni
>>> t/api/openstack/compute/test_serversV21.py#L3584
>>>
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> First, thanks to Kevin and Alex for finding this issue and explaining it
>> in detail so we can understand the scope.
>>
>> This is a nasty unfortunate issue which I really wish we could just fix
>> without a microversion bump but we have microversions for a reason, which
>> is to fix issues in the API. In thinking about if this were the legacy 2.0
>> API, we always had a rule that you couldn't fix bugs in the API if they
>> changed the behavior, no matter how annoying.
>>
>> So let's fix this with a microversion. I don't think we need to hold it
>> to the feature freeze deadline as it's a microversion only for a bug fix,
>> it's not a new feature. So that's a compromise at least and gives us some
>> time to get this done correctly and still have it fixed in Ocata. We'll
>> also want to document this in the api-ref and REST API version history in
>> whatever way makes it clear about the limitations between microversions.
>>
>
> +1 for fixing in Ocata itself. We have the fix; we just need to put it
> under a new version. I can modify the tests to cover this bug scenario.
>
>
>
>>

Re: [openstack-dev] [nova] Different length limit for tags in object definition and db model definition

2017-01-17 Thread Zhenyu Zheng
OK, added to my todo for the next cycle.

On Tue, Jan 17, 2017 at 7:08 PM, Matt Riedemann 
wrote:

> On 1/17/2017 3:31 AM, Roman Podoliaka wrote:
>
>> Hi all,
>>
>> Changing the type of column from VARCHAR(80) to VARCHAR(60) would also
>> require a data migration (i.e. a schema migration to add a new column
>> with the "correct" type, changes to the object, data migration logic)
>> as it is not an "online" DDL operation according to [1].  Adding a new
>> API microversion seems to be easier.
>>
>> Thanks,
>> Roman
>>
>>
> Yeah if we're going to do anything we should do the microversion bump
> since the DB change requires an offline schema migration which we don't
> want to do.
>
> I didn't think about the interoperability issue with the change so I agree
> it will require a microversion.
>
> As for the timing, we're two weeks from feature freeze and all API changes
> require a spec according to our policy [1]. We also have a lot of unmerged
> blueprints yet to get reviewed [2] and frankly our review numbers are
> already down this release. So if this can be held until Pike I'd prefer
> that so it's not a distraction in Ocata.
>
> [1] http://docs.openstack.org/developer/nova/blueprints.html#specs
> [2] https://blueprints.launchpad.net/nova/ocata
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Different length limit for tags in object definition and db model definition

2017-01-17 Thread Zhenyu Zheng
OK then, let's try to work this out.

On Tue, Jan 17, 2017 at 4:19 PM, Sergey Nikitin 
wrote:

> Hi, Zhenyu!
>
> I think we should ask the DB guys about the migration. But my personal
> opinion is that a DB migration is much more painful than a new
> microversion.
>
>  But it seems too late to have a microversion for this cycle.
>>
>
> Correct me if I'm wrong but I thought that Feature Freeze will be in
> action Jan 26.
> https://wiki.openstack.org/wiki/Nova/Ocata_Release_Schedule
>
> Even if we need a new microversion, I think it will be a specless
> microversion and the patch will change about 5 lines of code. We can merge
> such a patch in one day.
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Different length limit for tags in object definition and db model definition

2017-01-16 Thread Zhenyu Zheng
Hi, Sergey!

Thanks for the info, but we are now at the point of deciding whether this
should be a microversion bump or not. The users would love to have longer
tags, of course. But it seems too late to have a microversion this cycle.
And will the DB migration be a problem? From 80 to 60?


On Tue, Jan 17, 2017 at 3:16 PM, Sergey Nikitin <sniki...@mirantis.com>
wrote:

> Hi, folks!
>
> I guess I found the reason of the problem. The first spec was created by
> Jay. At that moment I was just an implementer. In this spec we have a
> contradiction between lines #74 and #99.
> https://review.openstack.org/#/c/91444/16/specs/juno/tag-instances.rst
>
> Line 74 says "A tag shall be defined as a Unicode bytestring no longer
> than 60 bytes in length."
>
> Line 99 contains SQL instruction for migration "tag VARCHAR(80) NOT NULL
> CHARACTER SET utf8"
>
> It seems to me that everybody missed this contradiction and I just copied
> the whole migration script with length 80.
>
> So it's just an old mistake and I think we can change length of tag from
> 60 to 80.
>
> 2017-01-17 10:04 GMT+04:00 GHANSHYAM MANN <ghanshyamm...@gmail.com>:
>
>> On Tue, Jan 17, 2017 at 2:37 PM, Alex Xu <sou...@gmail.com> wrote:
>>
>>>
>>>
>>> 2017-01-17 10:26 GMT+08:00 Matt Riedemann <mrie...@linux.vnet.ibm.com>:
>>>
>>>> On 1/16/2017 7:12 PM, Zhenyu Zheng wrote:
>>>>
>>>>> Hi Nova,
>>>>>
>>>>> I just discovered something interesting, the tag has a limited length,
>>>>> and in the current implementation, it is 60 in the tag object
>>>>> definition:
>>>>> http://git.openstack.org/cgit/openstack/nova/tree/nova/objec
>>>>> ts/tag.py#n18
>>>>>
>>>>> but 80 in the db model:
>>>>> http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sq
>>>>> lalchemy/models.py#n1464
>>>>>
>>>>> As asked in the IRC and some of the cores responded(thanks to Matt and
>>>>> Jay), it seems to be an
>>>>> oversight and has no particular reason to do it this way.
>>>>>
>>>>> Since we have already created a 80 long space in DB and the current
>>>>> implementation might be confusing,  maybe we should expand the
>>>>> limitation in tag object definition to 80. Besides, users can enjoy
>>>>> longer tags.
>>>>>
>>>>> And the question could be, does anyone know why it is 60 in object but
>>>>> 80 in DB model? is it an oversight or we have some particular reason?
>>>>>
>>>>> If we could expand it to be the same as DB model (80 for both), it is
>>>>> ok
>>>>> to do this tiny change without microversion?
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Kevin Zheng
>>>>>
>>>>>
>>>>> 
>>>>> __
>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>> Unsubscribe: openstack-dev-requ...@lists.op
>>>>> enstack.org?subject:unsubscribe
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>
>>>>>
>>>> As I said in IRC, the tags feature took a long time to land (several
>>>> releases) so between the time that the spec was written and then the data
>>>> model patch and finally the REST API change, we might have just totally
>>>> missed that the length of the column in the DB was different than what was
>>>> allowed in the REST API.
>>>>
>>>> I'm not aware of any technical reason why they are different. I'm
>>>> hoping that Sergey Nikitin might remember something about this. But even
>>>> looking at the spec:
>>>>
>>>> https://specs.openstack.org/openstack/nova-specs/specs/liber
>>>> ty/approved/tag-instances.html
>>>>
>>>> The column was meant to be 60 so my guess is someone noticed that in
>>>> the REST API review but missed it in the data model review.
>>>>
>>>
>>> I can't remember the detail also. Hoping Sergey can remember something
>>> also.
>>>
>>>
>>>>
>>>> As for needing a microversion of changing this, I tend to think we
>>>> don't need a microversion because we're not restricting the schema in the
>>>> REST API, we're just increasing it to ma

[openstack-dev] [nova] Different length limit for tags in object definition and db model definition

2017-01-16 Thread Zhenyu Zheng
Hi Nova,

I just discovered something interesting: the tag has a limited length, and
in the current implementation it is 60 in the tag object definition:
http://git.openstack.org/cgit/openstack/nova/tree/nova/objects/tag.py#n18

but 80 in the db model:
http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/models.py#n1464
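
Roughly, the two definitions disagree like this (paraphrased from the linked
files, not copied exactly):

    # nova/objects/tag.py -- the object/API side caps tags at 60
    MAX_TAG_LENGTH = 60

    # nova/db/sqlalchemy/models.py -- the column allows 80
    tag = Column(String(80), primary_key=True, nullable=False)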

As asked on IRC, where some of the cores responded (thanks to Matt and
Jay), it seems to be an oversight with no particular reason to be this way.

Since we have already created an 80-character column in the DB and the
current implementation might be confusing, maybe we should expand the limit
in the tag object definition to 80. Besides, users can enjoy longer tags.

And the question is: does anyone know why it is 60 in the object but 80 in
the DB model? Is it an oversight, or is there some particular reason?

If we expand it to match the DB model (80 for both), is it OK to do this
tiny change without a microversion?

Thanks,

Kevin Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [searchlight] No Searchlight IRC meeting today

2016-12-21 Thread Zhenyu Zheng
Hi everyone,

As Travis sent out last week, most of the contributors will be on holiday
this week and next, so we decided not to hold the Searchlight IRC meeting
today.

The only remaining topic from last meeting was the pipeline patch:
https://review.openstack.org/#/c/359972/

We have discussed it in the Searchlight IRC channel; lei-zh will update the
patch according to the current comments, and we will help review it when it
gets updated. Hopefully we can have it in within a few weeks.

Thanks, Merry Christmas and Happy New Year.

Kevin Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Adding CONTRIBUTING.rst files to projects

2016-12-21 Thread Zhenyu Zheng
Agreed with Amrith. It might be useful, and maybe also good, for new
contributors to learn how to land a commit in OpenStack. BUT over 130
identical patches to 130 different projects from one company/person in one
run? I don't think this is going to help OpenStack grow. We should not
let this happen.

On Thu, Dec 22, 2016 at 12:44 AM, Amrith Kumar  wrote:

> For those who would like to know exactly what this set of changes cost in
> the CI, the answer is approximately 1050 jobs which consumed 190 compute
> hours of CI time.
>
> -amrith
>
> -Original Message-
> From: Amrith Kumar [mailto:amr...@tesora.com]
> Sent: Wednesday, December 21, 2016 11:13 AM
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [all] Adding CONTRIBUTING.rst files to
> projects
>
> Ian, Andreas, Emilien,
>
> My sentiments on the subject of these kinds of "production line" changes
> is unchanged from [1] and [2]. A complete list of these changes is at [3].
>
> I've updated all of the changes in this thread with a block comment and a
> -1. My apologies to other reviewers (and active contributors in those
> projects) for this automated comment across 131 commits.
>
> It is high time we eliminated these kinds of changes which do little to
> improve the overall quality of the product and serve merely to generate a
> huge amount of pointless work on the CI systems, and boost some meaningless
> statistics that someone wants to put on a slide someplace.
>
> -amrith
>
> [1] http://openstack.markmail.org/thread/dsuxy2sxxudfbij4
> [2] http://openstack.markmail.org/thread/3sr5c2u7fhpzanit
> [3] https://review.openstack.org/#/q/topic:addCONTRIBUTING.rst
>
> -Original Message-
> From: Andreas Jaeger [mailto:a...@suse.com]
> Sent: Wednesday, December 21, 2016 10:47 AM
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [all] Adding CONTRIBUTING.rst files to
> projects
>
> On 2016-12-21 16:22, Ian Cordasco wrote:
> > [...]
> > That said, I think there are two better places for this information
> > that are already standards in OpenStack:
> >
> > * README.rst
> > * HACKING.rst
> >
> > Most projects include links to the contributing documentation in at
> > least one of these files. I think the effort here is to standardize,
> > albeit in a brand new file, and that's admirable.
>
> If that's the goal - to standardize - then I would expect that we move all
> the documentation out of those files in one place.
>
> Right now, the changes duplicate information that exists - and the new
> information is often wrong. It points to place that do not exist or where
> better places exist. ;(
>
>
> I'm fine with the status quo - of using the two files that you mention.
> Having contribution information is important,
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is this a correct use of scheduler hints and nova-scheduler

2016-12-07 Thread Zhenyu Zheng
Thanks for the information, I will propose a blueprint for the next cycle
:)

On Wed, Dec 7, 2016 at 4:38 PM, Sylvain Bauza <sba...@redhat.com> wrote:

>
>
> On 07/12/2016 04:21, Zhenyu Zheng wrote:
> > Hi all,
> >
> > I want to ask a question about using scheduler hints: can we add custom
> > scheduler hint keys to work with our custom filters? Is it designed to
> > allow vendors to add their own custom filters and keys?
> >
>
> I tend to disagree with that approach from an interoperability
> perspective as two clouds could behave very differently.
>
> That said, there is a long-standing problem about scheduler hints being
> extensible with regards to our API input validation [1] and we basically
> agreed on allowing to relax the constraints [2].
>
> Long story short, you *can* technically do that for a custom filter but
> please take care of the communication you make around that new hint to
> your customers and make it clear that this hint is not interoperable.
>
> Also, I beg you to make sure that the hint name is self-explanatory and
> distinct enough from the other hints we already have, so that confusion
> is minimal.
>
>
> > Another question: as we now have persistent scheduler hints in the
> > request spec, is it possible to show the scheduler hints either in
> > server-show or in a new API? Because vendors may be interested in
> > having an idea of how this instance was built in the first place.
> >
>
> Well, I'd say it would be admin or owner information, but yeah it could
> be worth exposing.
> AFAIK, there is no current way to get that so a blueprint with a spec
> describing the problem and the proposal (including an API microversion)
> could be interesting to review.
>
> -Sylvain
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2015-
> June/067996.html
>
> [2]
> https://github.com/openstack/nova/blob/5cc5a841109b082395d9664edcfc11
> e31fb678fa/nova/api/openstack/compute/schemas/scheduler_hints.py#L67-L71
>
> > Thanks.
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Is this a correct use of scheduler hints and nova-scheduler

2016-12-06 Thread Zhenyu Zheng
Hi all,

I want to ask a question about using scheduler hints: can we add custom
scheduler hint keys to work with our custom filters? Is the mechanism
designed to allow vendors to add their own custom filters and keys?
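
For context, this is how such a hint would be passed at boot (a sketch; the
hint key and the custom filter that would consume it are hypothetical):

    from novaclient import client as nova_client

    nova = nova_client.Client('2', session=sess)  # assumes an authenticated session
    nova.servers.create('test-vm', image_id, flavor_id,
                        scheduler_hints={'our_custom_key': 'some-value'})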

Another question: as we now have persistent scheduler hints in the request
spec, is it possible to show the scheduler hints either in server-show or in
a new API? Because vendors may be interested in having an idea of how this
instance was built in the first place.

Thanks.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [searchlight] Propose Zhenyu Zheng for Searchlight core

2016-11-07 Thread Zhenyu Zheng
Thanks a lot; there is still so much to learn from you guys.

On Tue, Nov 1, 2016 at 4:58 PM, Lei Zhang <lei12zhan...@gmail.com> wrote:

> +1 for me
>
> On Thu, Oct 20, 2016 at 5:13 PM, Brian Rosmaita <
> brian.rosma...@rackspace.com> wrote:
>
>> +1 from me, I'll be happy to see Kevin on the core list.
>>
>> On 10/19/16, 10:10 AM, "McLellan, Steven" <steve.mclel...@hpe.com> wrote:
>>
>> Hi,
>>
>> I'd like to propose Zhenyu Zheng (Kevin_Zheng on IRC) for Searchlight
>> core. While he's most active on Nova, he's also been very active on
>> Searchlight, both in commits and reviews, during the Newton release and
>> into Ocata. Kevin has participated during the weekly meetings and during
>> the week, and his reviews have been very high quality as well as
>> numerous. This would also help move towards greater cross-project
>> participation, especially with Nova.
>>
>> If anyone has any objections, let me know, otherwise I will add Kevin to
>> the core list at the weekend.
>>
>> Thanks!
>>
>> Steve
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Propose to add FusionCompute driver to Nova

2016-10-13 Thread Zhenyu Zheng
Thanks a lot.

On Thu, Oct 13, 2016 at 5:26 PM, Christian Berendt <
bere...@betacloud-solutions.de> wrote:

> Hello Zheng.
>
> > On 13 Oct 2016, at 10:02, Zhenyu Zheng <zhengzhenyul...@gmail.com>
> wrote:
> >
> > As mentioned above, FusionCompute has been proved to be with high
> reliability and large user base, thus we would like to propose
> FusionCompute driver to Nova as an official Nova driver.
>
> In the past I helped with the TelekomCloud. Nice to see these efforts.
>
> Christian.
>
> --
> Christian Berendt
> Chief Executive Officer (CEO)
>
> Telefon: +49 711 21957003
> Mobil: +49 171 5542175
> Mail: bere...@betacloud-solutions.de
> Web: https://www.betacloud-solutions.de
>
> Betacloud Solutions GmbH
> Teckstrasse 62 / 70190 Stuttgart / Deutschland
>
> Geschäftsführer: Christian Berendt
> Unternehmenssitz: Stuttgart
> Amtsgericht: Stuttgart, HRB 756139
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Propose to add FusionCompute driver to Nova

2016-10-13 Thread Zhenyu Zheng
Hi All,

We would like to propose the FusionCompute driver to become an official
Nova driver.

FusionCompute is compute virtualization software developed by Huawei that
provides tuned high performance and high reliability in VM instance
provisioning, clustered resource pool management, and intelligent HA/FT
scheduling.

The concepts and technical details for FusionCompute could be found in:
https://wiki.openstack.org/wiki/FusionCompute

Huawei has been working on integrating FusionCompute and OpenStack since the
Folsom release using the Nova FusionCompute driver. FusionCompute has been
successfully deployed as the hypervisor within Huawei's FusionSphere
OpenStack Cloud Operating System solution in a large number of commercial
private and public clouds, running stably for several years, including:

 *Deutsche Telekom - Open Telekom Cloud*:
 http://www.cebit.de/en/news/open-telekom-cloud-is-live.xhtml
 https://www.telekom.com/media/company/291108
 http://www.huawei.com/en/news/2016/3/dian-xin-yun
 https://cloud.telekom.de/en/cloud-infrastructure/open-telekom-cloud/

 *Telefonica LatAm Public Cloud*:

https://www.business-solutions.telefonica.com/es/information-centre/news/telefonica-and-huawei-reach-a-global-agreement-to-promote-enterprise-migration-to-the-cloud/

http://www.lightreading.com/services/cloud-services/telefonica-and-huawei-debut-latam-public-cloud/d/d-id/726571

https://www.huawei.com/th-TH/news/2016/9/Telefonica-Brazil-Mexico-Chile-Cloud-Serve
 https://www.cloud.telefonica.com/en/

 *Huawei Enterprise Cloud:*
 http://www.hwclouds.com/en-us/

 *China Telecom Public Cloud:*
 http://www.ctyun.cn/
 http://www.ctyun.cn/product/oos_e

 *Other cases can be found in*:
 http://e.huawei.com/en/case-studies?product=Cloud%20Computing

As mentioned above, FusionCompute has proven high reliability and a large
user base, so we would like to propose the FusionCompute driver to Nova as
an official Nova driver.

We tried to propose this back in 2014; the blueprint and discussions can
be found at:
https://blueprints.launchpad.net/nova/+spec/driver-for-huawei-fusioncompute
http://lists.openstack.org/pipermail/openstack-dev/2014-February/026075.html

We have set up the third-party CI:
https://wiki.openstack.org/wiki/ThirdPartySystems/Huawei_FusionCompute_CI
and are adjusting it for Nova; it will be online very soon.

Thanks

Kevin Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2016-10-07 Thread Zhenyu Zheng
So do we like the idea of "volume-based scheduling"?

On Tue, Sep 27, 2016 at 11:39 AM, Joshua Harlow 
wrote:

> Huang Zhiteng wrote:
>
>>
>>
>> On Tue, Sep 27, 2016 at 12:00 AM, Joshua Harlow wrote:
>>
>> Huang Zhiteng wrote:
>>
>>
>> On Mon, Sep 26, 2016 at 12:05 PM, Joshua Harlow
>> wrote:
>>
>>  Huang Zhiteng wrote:
>>
>>  In eBay, we did some inhouse change to Nova so that our
>> big data
>>  type of
>>  use case can have physical disks as ephemeral disk for
>> this type of
>>  flavors.  It works well so far.   My 2 cents.
>>
>>
>>  Is there a published patch (or patchset) anywhere that
>> people can
>>  look at for said in-house changes?
>>
>>
>> Unfortunately no, but I think we can publish it if there are
>> enough
>> interests.  However, I don't think that can be easily adopted onto
>> upstream Nova since it depends on other in-house changes we've
>> done to Nova.
>>
>>
>> Is there any blog, or other that explains the full bunch of changes
>> that ebay has done (u got me curious)?
>>
>> The nice thing about OSS is that if u just get the patchsets out
>> (even to github or somewhere), those patches may trigger things to
>> change to match your usecase better just by the nature of people
>> being able to read them; but if they are never put out there, then
>> well ya, it's a little hard to get anything to change.
>>
>>
>> Anything stopping a full release of all in-house changes?
>>
>> Even if they are not 'super great quality' it really doesn't matter :)
>>
>> Apology for sidetracking the topic a bit.  While we encourage our
>> engineers to embrace community and open source, I think we didn't do a
>> good job to actually emphasize that. 'Time To Market' is another factor,
>> usually a feature requirement becomes deployed service in 2,3 sprint
>> (4~6 weeks), but you know how much can be done in same amount of time in
>> community, especially with Nova. :)
>>
>
> Ya, sorry for side-tracking,
>
> Overall yes I do know getting changes done in upstream is not a 4-6 week
> process (though maybe someday it could be). In general I don't want to turn
> this into a rant, and thankfully I think there is a decent LWN article
> about this kind of situation already. You might like it :)
>
> https://lwn.net/Articles/647524/ (replace embedded linux/kernel in this
> with openstack and imho it's equally useful/relevant).
>
>
> -Josh
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2016-09-26 Thread Zhenyu Zheng
Hi,

Thanks for the reply. Actually, approach one is not what we are looking
for: our demand is to attach real physical volumes from the compute node
to VMs, so we can achieve the performance we need for use cases such as
big data. This can be done by Cinder using the BlockDeviceDriver, which is
quite different from the approach one you mentioned. The only problem now
is that we cannot practically ensure the compute resource is located on
the same host as the volume; as Matt mentioned above, we currently have to
arrange 1:1 AZs in Cinder and Nova to do this, which is not practical in
commercial deployments.

Thanks.

On Mon, Sep 26, 2016 at 9:48 PM, Erlon Cruz <sombra...@gmail.com> wrote:

>
>
> On Fri, Sep 23, 2016 at 10:19 PM, Zhenyu Zheng <zhengzhenyul...@gmail.com>
> wrote:
>
>> Hi,
>>
>> Thanks all for the information, as for the filter Erlon(
>> InstanceLocalityFilter) mentioned, this only solves a part of the
>> problem,
>> we can create new volumes for existing instances using this filter and
>> then attach to it, but the root volume still cannot
>> be guranteed to be on the same host as the compute resource, right?
>>
>>
> You have two options to use a disk in the same node as the instance.
> 1 - The easiest, just don't use Cinder volumes. When you create an
> instance from an image, the default behavior in Nova, is to create the root
> disk on the local host (/var/lib/nova/instances). This has the advantage
> that Nova will cache the image locally and avoid the need to copy
> the image over the wire (or having to configure image caching in Cinder).
>
> 2 - Use Cinder volumes as root disk. Nova will somehow have to pass the
> hints to the scheduler so it properly can use the InstanceLocalityFilter.
> If you place this in Nova, and make sure that all requests have the proper
> hint, then the volumes created will be scheduled and the host.
>
> Is there any reason why you can't use the first approach?
>
>
>
>
>> The idea here is that all the volumes uses local disks.
>> I was wondering if we already have such a plan after the Resource
>> Provider structure has accomplished?
>>
>> Thanks
>>
>> On Sat, Sep 24, 2016 at 2:05 AM, Erlon Cruz <sombra...@gmail.com> wrote:
>>
>>> Not sure exactly what you mean, but in Cinder using the
>>> InstanceLocalityFilter[1], you can  schedule a volume to the same compute
>>> node the instance is located. Is this what you need?
>>>
>>> [1] http://docs.openstack.org/developer/cinder/scheduler-fil
>>> ters.html#instancelocalityfilter
>>>
>>> On Fri, Sep 23, 2016 at 12:19 PM, Jay S. Bryant <
>>> jsbry...@electronicjungle.net> wrote:
>>>
>>>> Kevin,
>>>>
>>>> This is functionality that has been requested in the past but has never
>>>> been implemented.
>>>>
>>>> The best way to proceed would likely be to propose a blueprint/spec for
>>>> this and start working this through that.
>>>>
>>>> -Jay
>>>>
>>>>
>>>> On 09/23/2016 02:51 AM, Zhenyu Zheng wrote:
>>>>
>>>> Hi Novaers and Cinders:
>>>>
>>>> Quite often application requirements would demand using locally
>>>> attached disks (or direct attached disks) for OpenStack compute instances.
>>>> One such example is running virtual hadoop clusters via OpenStack.
>>>>
>>>> We can now achieve this by using BlockDeviceDriver as Cinder driver and
>>>> using AZ in Nova and Cinder, illustrated in[1], which is not very feasible
>>>> in large scale production deployment.
>>>>
>>>> Now that Nova is working on resource provider trying to build an
>>>> generic-resource-pool, is it possible to perform "volume-based-scheduling"
>>>> to build instances according to volume? As this could be much easier to
>>>> build instances like mentioned above.
>>>>
>>>> Or do we have any other ways of doing this?
>>>>
>>>> References:
>>>> [1] http://cloudgeekz.com/71/how-to-setup-openstack-to-use-l
>>>> ocal-disks-for-instances.html
>>>>
>>>> Thanks,
>>>>
>>>> Kevin Zheng
>>>>
>>>>
>>>> __
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: 
>>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2016-09-25 Thread Zhenyu Zheng
Hi Matt,

Yes, we can only do this using 1:1 AZs mapped to each compute node in the
deployment, which is not very feasible in commercial deployments.
We could either pass some hints to Cinder (in the current code, Cinder's
"InstanceLocalityFilter" uses an instance uuid as its parameter, so it is
impossible for a user to pass it while booting an instance), or add
filters or something else to Nova's scheduling. And maybe we will have new
solutions after "Generic-resource-pool" is reached?

The implementation may vary, but this seems like a reasonable demand,
right?

Thanks

On Sun, Sep 25, 2016 at 1:02 AM, Matt Riedemann <mrie...@linux.vnet.ibm.com>
wrote:

> On 9/23/2016 8:19 PM, Zhenyu Zheng wrote:
>
>> Hi,
>>
>> Thanks all for the information, as for the filter
>> Erlon(InstanceLocalityFilter) mentioned, this only solves a part of the
>> problem,
>> we can create new volumes for existing instances using this filter and
>> then attach to it, but the root volume still cannot
>> be guranteed to be on the same host as the compute resource, right?
>>
>> The idea here is that all the volumes uses local disks.
>> I was wondering if we already have such a plan after the Resource
>> Provider structure has accomplished?
>>
>> Thanks
>>
>> On Sat, Sep 24, 2016 at 2:05 AM, Erlon Cruz <sombra...@gmail.com
>> <mailto:sombra...@gmail.com>> wrote:
>>
>> Not sure exactly what you mean, but in Cinder using the
>> InstanceLocalityFilter[1], you can  schedule a volume to the same
>> compute node the instance is located. Is this what you need?
>>
>> [1] http://docs.openstack.org/developer/cinder/scheduler-filters
>> .html#instancelocalityfilter
>> <http://docs.openstack.org/developer/cinder/scheduler-filter
>> s.html#instancelocalityfilter>
>>
>> On Fri, Sep 23, 2016 at 12:19 PM, Jay S. Bryant
>> <jsbry...@electronicjungle.net
>> <mailto:jsbry...@electronicjungle.net>> wrote:
>>
>> Kevin,
>>
>> This is functionality that has been requested in the past but
>> has never been implemented.
>>
>> The best way to proceed would likely be to propose a
>> blueprint/spec for this and start working this through that.
>>
>> -Jay
>>
>>
>> On 09/23/2016 02:51 AM, Zhenyu Zheng wrote:
>>
>>> Hi Novaers and Cinders:
>>>
>>> Quite often application requirements would demand using
>>> locally attached disks (or direct attached disks) for
>>> OpenStack compute instances. One such example is running
>>> virtual hadoop clusters via OpenStack.
>>>
>>> We can now achieve this by using BlockDeviceDriver as Cinder
>>> driver and using AZ in Nova and Cinder, illustrated in[1],
>>> which is not very feasible in large scale production deployment.
>>>
>>> Now that Nova is working on resource provider trying to build
>>> an generic-resource-pool, is it possible to perform
>>> "volume-based-scheduling" to build instances according to
>>> volume? As this could be much easier to build instances like
>>> mentioned above.
>>>
>>> Or do we have any other ways of doing this?
>>>
>>> References:
>>> [1] http://cloudgeekz.com/71/how-to-setup-openstack-to-use-local
>>> -disks-for-instances.html
>>> <http://cloudgeekz.com/71/how-to-setup-openstack-to-use-loca
>>> l-disks-for-instances.html>
>>>
>>> Thanks,
>>>
>>> Kevin Zheng
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> <mailto:openstack-dev-requ...@lists.openstack.org?subject:un
>>> subscribe>
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac
>>> k-dev
>>> <http://lists.openstack.org/cgi-bin/mailman/listinfo/opensta
>>> ck-dev>
>>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2016-09-23 Thread Zhenyu Zheng
Hi,

Thanks all for the information. As for the filter Erlon mentioned
(InstanceLocalityFilter), this only solves part of the problem: we can
create new volumes for existing instances using this filter and then
attach them, but the root volume still cannot be guaranteed to be on the
same host as the compute resource, right?
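For reference, a sketch of how the filter is used today (the
"local_to_instance" hint name follows the InstanceLocalityFilter
documentation; the uuid is a placeholder):

    # create a 100 GB volume on the same host as an existing instance
    cinder create --hint local_to_instance=<instance-uuid> 100

This only works once the instance already exists, which is exactly why the
root volume case is not covered.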

The idea here is that all the volumes use local disks.
I was wondering whether we already have such a plan for after the Resource
Provider structure is completed?

Thanks

On Sat, Sep 24, 2016 at 2:05 AM, Erlon Cruz <sombra...@gmail.com> wrote:

> Not sure exactly what you mean, but in Cinder using the
> InstanceLocalityFilter[1], you can  schedule a volume to the same compute
> node the instance is located. Is this what you need?
>
> [1] http://docs.openstack.org/developer/cinder/scheduler-filters.html#
> instancelocalityfilter
>
> On Fri, Sep 23, 2016 at 12:19 PM, Jay S. Bryant <
> jsbry...@electronicjungle.net> wrote:
>
>> Kevin,
>>
>> This is functionality that has been requested in the past but has never
>> been implemented.
>>
>> The best way to proceed would likely be to propose a blueprint/spec for
>> this and start working this through that.
>>
>> -Jay
>>
>>
>> On 09/23/2016 02:51 AM, Zhenyu Zheng wrote:
>>
>> Hi Novaers and Cinders:
>>
>> Quite often application requirements would demand using locally attached
>> disks (or direct attached disks) for OpenStack compute instances. One such
>> example is running virtual hadoop clusters via OpenStack.
>>
>> We can now achieve this by using BlockDeviceDriver as Cinder driver and
>> using AZ in Nova and Cinder, illustrated in[1], which is not very feasible
>> in large scale production deployment.
>>
>> Now that Nova is working on resource provider trying to build an
>> generic-resource-pool, is it possible to perform "volume-based-scheduling"
>> to build instances according to volume? As this could be much easier to
>> build instances like mentioned above.
>>
>> Or do we have any other ways of doing this?
>>
>> References:
>> [1] http://cloudgeekz.com/71/how-to-setup-openstack-to-use-l
>> ocal-disks-for-instances.html
>>
>> Thanks,
>>
>> Kevin Zheng
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2016-09-23 Thread Zhenyu Zheng
Hi Novaers and Cinders:

Quite often, application requirements demand using locally attached disks
(or direct-attached disks) for OpenStack compute instances. One such
example is running virtual Hadoop clusters via OpenStack.

We can currently achieve this by using the BlockDeviceDriver as the Cinder
driver and matching AZs in Nova and Cinder, as illustrated in [1], but
this is not very feasible in large-scale production deployments.
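To illustrate, here is a rough sketch of the kind of setup [1] describes
(option names follow the BlockDeviceDriver docs of that era; the values
are placeholders, with one storage AZ per host):

    # cinder.conf on each compute/storage host
    [DEFAULT]
    enabled_backends = local-disk
    storage_availability_zone = az-compute-01

    [local-disk]
    volume_driver = cinder.volume.drivers.block_device.BlockDeviceDriver
    available_devices = /dev/sdb,/dev/sdc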

Now that Nova is working on resource providers, trying to build a generic
resource pool, would it be possible to perform "volume-based scheduling"
and build instances according to their volumes? This would make it much
easier to build instances like those mentioned above.

Or do we have any other ways of doing this?

References:
[1]
http://cloudgeekz.com/71/how-to-setup-openstack-to-use-local-disks-for-instances.html

Thanks,

Kevin Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Add support for tag instances when booting

2016-09-07 Thread Zhenyu Zheng
Hi All,

Tags for servers are supported since microversion 2.26, but currently we
can only add tags to instances that already exist in the cloud; that is,
we cannot set tags on instances when we boot them. Users have to first
find the instances and then add tags with another API call. This is not
user-friendly when doing a bulk boot: it is not practical to add tags to
those instances one by one afterwards.
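For illustration, a boot request could carry tags directly in the server
body; the exact request format would be settled in the spec, so the "tags"
key below is an assumption:

    POST /v2.1/servers
    {
        "server": {
            "name": "bulk-node-01",
            "imageRef": "<image-uuid>",
            "flavorRef": "<flavor-id>",
            "tags": ["hadoop", "batch"]
        }
    }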

So, I think it would be good to add support for tagging instances when
booting them. I have registered
a BP for O:
https://blueprints.launchpad.net/nova/+spec/support-tag-instance-when-boot
https://review.openstack.org/#/c/366469/

Comments?

Thanks,

Zhenyu Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Support specified volume_type when boot instance, do we like it?

2016-08-30 Thread Zhenyu Zheng
Dear all,

Thanks all for the reply. I have read the etherpad note and there seems to
be no active BP/spec now.
So I have updated a BP/spec my colleague put up for Mitaka, adding a
microversion implementation, for the Ocata release:
BP:
https://blueprints.launchpad.net/nova/+spec/add-volume-type-when-boot-instances
SPEC: https://review.openstack.org/#/c/362698/

I'm aiming to implement this useful feature for the O release :-)

Thanks,

Kevin Zheng

On Tue, Aug 30, 2016 at 3:35 AM, Sean McGinnis <sean.mcgin...@gmx.com>
wrote:

> On Mon, Aug 29, 2016 at 09:29:57AM -0400, Andrew Laski wrote:
> >
> >
> >
> > On Mon, Aug 29, 2016, at 09:06 AM, Jordan Pittier wrote:
> > >
> > >
> > > On Mon, Aug 29, 2016 at 8:50 AM, Zhenyu Zheng
> > > <zhengzhenyul...@gmail.com> wrote:
> > >> Hi, all
> > >>
> > >> Currently we have customer demands about adding parameter
> > >> "volume_type" to --block-device to provide the support of specified
> > >> storage backend to boot instance. And I find one newly drafted
> > >> Blueprint that aiming to address the same feature:
> > >> https://blueprints.launchpad.net/nova/+spec/support-boot-
> instance-set-store-type
> > >> ;
> > >>
> > >> As I know this is kind of "proxy" feature for cinder and we don't
> > >> like it in general, but as the boot from volume functional was
> > >> already there, so maybe it is OK to support another parameter?
> > >>
> > >> So, my question is that what are your opinions about this in general?
> > >> Do you like it or it will not be able to got approved at all?
> > >>
> > >> Thanks,
> > >>
> > >> Kevin Zheng
> > >
> > > Hi,
> > > I think it's not a great idea. Not only for the reason you mention,
> > > but also because the "nova boot" command is already way to complicated
> > > with way to many options. IMO we should only add support for new
> > > features, not "features" we can have by other means, just for
> > > convenience.
> >
> > I completely agree with this. However I have some memory of us
> > saying(in Austin?) that adding volume_type would be acceptable since
> > it's a clear oversight in the list of parameters for specifying a block
> > device. So while I greatly dislike Nova creating volumes and would
> > rather users pass in pre-created volume ids I would support adding this
> > parameter. I do not support continuing to add parameters if Cinder adds
> > parameters though.
> >
>
> FWIW, I get asked the question on the Cinder side of how to specify
> which volume type to use when booting from a Cinder volume on a fairly
> regular basis.
>
> I agree with the approach of not adding more proxy functionality in
> Nova, but since this is an existing feature that is missing expected
> functionality, I would like to see this get in.
>
> Just my $0.02.
>
> Sean
>
> >
> > >
> > >
> > > -
> > > 
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: OpenStack-dev-
> > > requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Support specified volume_type when boot instance, do we like it?

2016-08-29 Thread Zhenyu Zheng
Hi, all

Currently we have customer demands for adding a "volume_type" parameter to
--block-device, to support specifying the storage backend when booting an
instance. And I found a newly drafted blueprint that aims to address the
same feature:
https://blueprints.launchpad.net/nova/+spec/support-boot-instance-set-store-type

As I know, this is kind of a "proxy" feature for Cinder and we don't like
those in general, but since the boot-from-volume functionality is already
there, maybe it is OK to support another parameter?
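For example, something like the following could work (a sketch;
"volume_type" is exactly the proposed addition, while the other
--block-device keys already exist):

    nova boot my-server --flavor m1.small \
        --block-device source=image,id=<image-uuid>,dest=volume,size=10,bootindex=0,volume_type=<cinder-volume-type>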

So, my question is: what are your opinions about this in general? Do you
like it, or will it not be approved at all?

Thanks,

Kevin Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] About deleting keypairs

2016-07-14 Thread Zhenyu Zheng
Hi All,

We have met some problems when trying to clean up resources, keypairs in
particular.

The scenario is like this: we have several projects in our public cloud,
and each project has its own admin, who can create and delete users; their
users may create keypairs. As keypairs are only related to users
(user_id), when a project admin deletes its users, they may forget to
delete the related keypairs; or they may have tried to delete the
keypairs, but something went wrong and it didn't work.

Now, when we, as public cloud admins, want to delete such a project and
clean up its resources, we can't delete the keypairs, because deleting a
keypair requires the related user_id. If that user has already been
deleted (Keystone uses hard delete, so we cannot find deleted users
there), we will never be able to delete the keypairs.
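For reference, since API microversion 2.10 an admin can target another
user's keypair roughly like this (a sketch; the endpoint and token
variables are placeholders):

    curl -X DELETE \
        -H "X-Auth-Token: $ADMIN_TOKEN" \
        -H "X-OpenStack-Nova-API-Version: 2.10" \
        "$NOVA_ENDPOINT/os-keypairs/mykey?user_id=<user-id>"

But the call still needs the user_id, which is exactly what we can no
longer look up once Keystone has hard-deleted the user.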

Does anyone have any comments or thoughts about the above problem?

Thanks

Kevin Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Questions about instance actions' update and finish

2016-07-10 Thread Zhenyu Zheng
OK, then I will work on it in O

On Thu, Jul 7, 2016 at 12:15 AM, Matt Riedemann <mrie...@linux.vnet.ibm.com>
wrote:

> On 7/4/2016 1:12 AM, Zhenyu Zheng wrote:
>
>> I'm willing to work on this, should this be a Blueprint for O?
>>
>>
> The spec will need to be re-proposed for Ocata and any adjustments for the
> sorting/paging/marker discussions from this thread and/or the review should
> be laid out in the spec.
>
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Questions about instance actions' update and finish

2016-07-04 Thread Zhenyu Zheng
I'm willing to work on this, should this be a Blueprint for O?

On Sun, Jul 3, 2016 at 9:05 PM, Matt Riedemann <mrie...@linux.vnet.ibm.com>
wrote:

> On 7/3/2016 6:21 AM, Alex Xu wrote:
>
>>
>>
>> 2016-07-02 2:32 GMT+08:00 Sean Dague <s...@dague.net
>> <mailto:s...@dague.net>>:
>>
>>
>> On 06/30/2016 08:31 AM, Andrew Laski wrote:
>> >
>> >
>> > On Wed, Jun 29, 2016, at 11:11 PM, Matt Riedemann wrote:
>> >> On 6/29/2016 10:10 PM, Matt Riedemann wrote:
>> >>> On 6/29/2016 6:40 AM, Andrew Laski wrote:
>> >>>>
>> >>>>
>> >>>>
>> >>>> On Tue, Jun 28, 2016, at 09:27 PM, Zhenyu Zheng wrote:
>> >>>>> How about I sync updated_at and created_at in my patch, and
>> leave the
>> >>>>> finish to the other BP, by this way, I can use updated_at for
>> the
>> >>>>> timestamp filter I added and it don't need to change again once
>> the
>> >>>>> finish BP is complete.
>> >>>>
>> >>>> Sounds good to me.
>> >>>>
>> >>>
>> >>> It's been a long day so my memory might be fried, but the options
>> we
>> >>> talked about in the API meeting were:
>> >>>
>> >>> 1. Setting updated_at = created_at when the instance action
>> record is
>> >>> created. Laski likes this, I'm not crazy about it, especially
>> since we
>> >>> don't do that for anything else.
>> >
>> > I would actually like for us to do this generally. I have the same
>> > thinking as Ed does elsewhere in this thread, the creation of a
>> record
>> > is an update of that record. So take my comments as applying to Nova
>> > overall and not just this issue.
>>
>> Agree. Also it just simplifies a number of things. We should just
>> start
>> doing this going forward, and probably put some online data migrations
>> in place next cycle to update all the old records. Once updated_at
>> can't
>> be null, we can handle things like this a bit better.
>>
>>
>> The marker should be a column with UniqueConstraint, the updated_at is
>> not. But if we say the accuracy is ok, there will have problem with
>> updated_at as None.
>>
>
> Yeah I thought about this later, we don't use a timestamp field as a
> marker for anything else, and as noted it's not a non-nullable unique
> field, plus it's mutable which worries me for a marker field (created_at
> wouldn't change, but updated_at could).
>
>
>> Anyway, we already freeze... probably we can begin to fix the updated_at
>> problem first.
>>
>>
>> >>> 2. Update the instance action's updated_at when instance action
>> events
>> >>> are created. I like this since the instance action is like a
>> parent
>> >>> resource and the event is the child, so when we create/modify an
>> event
>> >>> we can consider it an update to the parent. Laski thought this
>> might be
>> >>> weird UX given we don't expose instance action events in the REST
>> API
>> >>> unless you're an admin. This is also probably not something we'd
>> do for
>> >>> other related resources like server groups and server group
>> members (but
>> >>> we don't page on those either right now).
>> >
>> > Right. My concern is just that the ordering of actions can change
>> based
>> > on events happening which are not visible to the user. However
>> thinking
>> > about it further we don't really allow multiple actions at once,
>> except
>> > for a few special cases like delete, so this may not end up
>> affecting
>> > any ordering as actions are mostly serial. I think this is a fine
>> > solution for the issue at hand. I just think #1 is a more general
>> > solution.
>> >
>> >>>
>> >>> 3. Order the results by updated_at,created_at so that if
>> updated_at
>> >>> isn't set for older records, created_at will be used. I think we
>> all
>> >>> agreed in the meeting to do this regardless of #1 or #2 above.
>>
>> I kind of hate that as the order, because t

Re: [openstack-dev] [Nova] Questions about instance actions' update and finish

2016-06-28 Thread Zhenyu Zheng
How about I sync updated_at and created_at in my patch, and leave the
finish to the other BP? That way, I can use updated_at for the timestamp
filter I added, and it won't need to change again once the finish BP is
complete.

On Tue, Jun 28, 2016 at 8:28 PM, Andrew Laski <and...@lascii.com> wrote:

>
>
>
> On Tue, Jun 28, 2016, at 03:26 AM, Zhenyu Zheng wrote:
>
> Hi all,
>
> I'm working on add pagination and timestamp filter for os-instance-actions
> API:
> https://review.openstack.org/#/c/326326/
> As Alex_xu pointed out that it will be better to filter by `updated_at`
> for timestamp filter which is reasonable, but when I tried to modify the
> patch I found out that:
>
> 1. The current APIs only called _record_action_start
> (objects.InstanceAction.action_start) and never call action_finish, so
> the field of `finish_time` is always empty in instance_actions table;
>
>
> There was a spec proposed to address this, though I don't believe it was
> approved for Newton. So for now you have to assume this will continue to be
> empty.
>
>
>
> 2. The updated_at field is also empty, should we sync the updated_at time
> to the created_at time when we create the action and also update it
> whenever the action status changed, e.g finished.
>
>
> When a finish_time is recorded that should definitely also update
> updated_at. I would be in favor of having updated_at set when the instance
> action is created. I've never fully understood why Nova doesn't do that
> generally.
>
>
>
> Thanks,
> Kevin Zheng
>
> *__*
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Questions about instance actions' update and finish

2016-06-28 Thread Zhenyu Zheng
Hi all,

I'm working on adding pagination and a timestamp filter for the
os-instance-actions API:
https://review.openstack.org/#/c/326326/
As Alex_xu pointed out, it would be better to filter by `updated_at` for
the timestamp filter, which is reasonable, but when I tried to modify the
patch I found out that:

1. The current APIs only call _record_action_start
(objects.InstanceAction.action_start) and never call action_finish, so the
`finish_time` field is always empty in the instance_actions table;

2. The updated_at field is also empty. Should we sync the updated_at time
to the created_at time when we create the action, and also update it
whenever the action status changes, e.g. finishes?
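To illustrate option 2, a minimal sketch of what the syncing could look
like, assuming an SQLAlchemy model shaped like Nova's (all names here are
illustrative, not Nova's actual code):

    from datetime import datetime

    from sqlalchemy import Column, DateTime, Integer, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class InstanceActionSketch(Base):
        __tablename__ = 'instance_actions_sketch'
        id = Column(Integer, primary_key=True)
        action = Column(String(255))
        created_at = Column(DateTime)
        # bumped automatically on every later UPDATE (e.g. action_finish)
        updated_at = Column(DateTime, onupdate=datetime.utcnow)
        finish_time = Column(DateTime, nullable=True)

        def __init__(self, **kwargs):
            # sync updated_at to created_at at creation time, so
            # updated_at is never NULL and can back a timestamp filter
            now = datetime.utcnow()
            kwargs.setdefault('created_at', now)
            kwargs.setdefault('updated_at', now)
            super(InstanceActionSketch, self).__init__(**kwargs)

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()
    session.add(InstanceActionSketch(action='create'))
    session.commit()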

Thanks,
Kevin Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] API changes on limit / marker / sort in Newton

2016-05-30 Thread Zhenyu Zheng
I think it is good to share code, and a single microversion can make life
easier during coding.
Can we approve those specs first and then decide on the details in IRC and
patch review? The non-priority spec deadline is so close.

Thanks

On Tue, May 31, 2016 at 1:09 AM, Ken'ichi Ohmichi 
wrote:

> 2016-05-29 19:25 GMT-07:00 Alex Xu :
> >
> >
> > 2016-05-20 20:05 GMT+08:00 Sean Dague :
> >>
> >> There are a number of changes up for spec reviews that add parameters to
> >> LIST interfaces in Newton:
> >>
> >> * keypairs-pagination (MERGED) -
> >>
> >>
> https://github.com/openstack/nova-specs/blob/8d16fc11ee6d01b5a9fe1b8b7ab7fa6dff460e2a/specs/newton/approved/keypairs-pagination.rst#L2
> >> * os-instances-actions - https://review.openstack.org/#/c/240401/
> >> * hypervisors - https://review.openstack.org/#/c/240401/
> >> * os-migrations - https://review.openstack.org/#/c/239869/
> >>
> >> I think that limit / marker is always a legit thing to add, and I almost
> >> wish we just had a single spec which is "add limit / marker to the
> >> following APIs in Newton"
> >>
> >
> > Are you looking for code sharing or one microversion? For code sharing,
> it
> > sounds ok if people have some co-work. Probably we need a common
> pagination
> > supported model_query function for all of those. For one microversion,
> i'm a
> > little hesitate, we should keep one small change, or enable all in one
> > microversion. But if we have some base code for pagination support, we
> > probably can make the pagination as default thing support for all list
> > method?
>
> It is nice to share some common code for this, that would be nice for
> writing the api doc also to know what APIs support them.
> And also nice to do it with a single microversion for the above
> resources, because we can avoid microversion bumping conflict and all
> of them don't seem a big change.
>
> Thanks
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] API changes on limit / marker / sort in Newton

2016-05-25 Thread Zhenyu Zheng
Thanks for the information; I really hope these two can get merged for Newton:
 https://review.openstack.org/#/c/240401/
 https://review.openstack.org/#/c/239869/

On Sat, May 21, 2016 at 5:55 AM, Jay Pipes  wrote:

> +1 on all your suggestions below, Sean.
>
> -jay
>
>
> On 05/20/2016 08:05 AM, Sean Dague wrote:
>
>> There are a number of changes up for spec reviews that add parameters to
>> LIST interfaces in Newton:
>>
>> * keypairs-pagination (MERGED) -
>>
>> https://github.com/openstack/nova-specs/blob/8d16fc11ee6d01b5a9fe1b8b7ab7fa6dff460e2a/specs/newton/approved/keypairs-pagination.rst#L2
>> * os-instances-actions - https://review.openstack.org/#/c/240401/
>> * hypervisors - https://review.openstack.org/#/c/240401/
>> * os-migrations - https://review.openstack.org/#/c/239869/
>>
>> I think that limit / marker is always a legit thing to add, and I almost
>> wish we just had a single spec which is "add limit / marker to the
>> following APIs in Newton"
>>
>> Most of these came in with sort_keys as well. We currently don't have
>> schema enforcement on sort_keys, so I don't think we should add any more
>> instances of it until we scrub it. Right now sort_keys is mostly a way
>> to generate a lot of database load because users can sort by things not
>> indexed in your DB. We really should close that issue in the future, but
>> I don't think we should make it any worse. I have -1s on
>> os-instance-actions and hypervisors for that reason.
>>
>> os-instances-actions and os-migrations are time based, so they are
>> proposing a changes-since. That seems logical and fine. Date seems like
>> the natural sort order for those anyway, so it's "almost" limit/marker,
>> except from end not the beginning. I think that in general changes-since
>> on any resource which is time based should be fine, as long as that
>> resource is going to natural sort by the time field in question.
>>
>> So... I almost feel like this should just be soft policy at this point:
>>
>> limit / marker - always ok
>> sort_* - no more until we have a way to scrub sort (and we fix weird
>> sort key issues we have)
>> changes-since - ok on any resource that will natural sort with the
>> updated time
>>
>>
>> That should make proposing these kinds of additions easier for folks,
>>
>> -Sean
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Suggest to not deprecate os-migrations API

2016-05-23 Thread Zhenyu Zheng
Hi All,

According to
https://github.com/openstack/nova/blob/master/releasenotes/notes/os-migrations-ef225e5b309d5497.yaml
, we are going to deprecate the old os-migrations API, and two new APIs,
/servers/{server uuid}/migrations and /servers/{server
uuid}/migrations/{migration id}, have been added.

As we can see, the newly added APIs cannot work if we don't know which
instance is migrating. If our users run HA or DRS applications such as
openstack-watcher, automatic migrations can take place, and we may not
know which instance is migrating. An API like the old os-migrations is a
really good choice in that case: we can get all the currently running
migrations in a single call. So I suggest not deprecating this API.
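To make the difference concrete:

    # old API: one call, no prior knowledge needed
    GET /v2.1/os-migrations

    # new APIs: require already knowing which server is migrating
    GET /v2.1/servers/{server uuid}/migrations
    GET /v2.1/servers/{server uuid}/migrations/{migration id}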

Any thoughts?

BR,
Kevin Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Versioned notification work continues in Newton

2016-04-12 Thread Zhenyu Zheng
Great, glad to help.

On Tue, Apr 12, 2016 at 9:03 PM, Balázs Gibizer  wrote:

> Hi,
>
> I just want to let the community know that the versioned notification
> work we started in Mitaka is planned to be continued in Newton.
> The planned goals for Newton:
> * Transform the most important notification to the new format [1]
> * Help others to use the new framework adding new notifications [2], [3],
> [4]
> * Further help notification consumers by adding json schemas for the
> notifications [5]
>
> I will be in Austin during the whole summit week to discuss these ideas.
> I will restart the notification subteam meeting [6] after the summit to
> have
> a weekly synchronization point. All this work is followed up on the
> etherpad [7].
>
> Cheers,
> Gibi
>
> [1]
> https://blueprints.launchpad.net/nova/+spec/versioned-notification-transformation-newton
> [2]
> https://blueprints.launchpad.net/nova/+spec/add-swap-volume-notifications
> [3]
> https://blueprints.launchpad.net/nova/+spec/expose-quiesce-unquiesce-api
> [4] https://blueprints.launchpad.net/nova/+spec/hypervisor-notification
> [5]
> https://blueprints.launchpad.net/nova/+spec/json-schema-for-notifications
> [6] https://wiki.openstack.org/wiki/Meetings/NovaNotification
> [7] https://etherpad.openstack.org/p/nova-versioned-notifications
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] How about use IP instead of hostname when live Migrate.

2016-03-30 Thread Zhenyu Zheng
Hi, Nova,

Currently, we use the destination host name as the target node URI by
default (if the newly added config option live_migration_inbound_addr has
not been set). This requires resolving the hostname to an IP to perform
operations such as copying disks, and it depends on DNS; for example, on
Ubuntu we have to add the destination host to /etc/hosts.
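For reference, the existing option is set per compute node, roughly like
this (the address below is a placeholder):

    # nova.conf on the destination compute node
    [libvirt]
    live_migration_inbound_addr = 192.0.2.10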

Actually, it is not hard for Nova to get the destination node's IP
address, so why don't we pass it in the migrate data and use it as the URI
for disk migrations instead of the hostname?

Any thoughts?

I have submitted a bug report for this, including an error log; please
check out:
https://bugs.launchpad.net/nova/+bug/1564197

Thanks,

Kevin Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Non-Admin user can show deleted instances using changes-since parameter when calling list API

2016-03-06 Thread Zhenyu Zheng
Thanks a lot for the reply.

I have already registered a BP for this and will submit a spec for N; we
can discuss the details in the spec then.



On Sun, Mar 6, 2016 at 2:01 AM, Matt Riedemann <mrie...@linux.vnet.ibm.com>
wrote:

>
>
> On 3/5/2016 9:48 AM, Adam Young wrote:
>
>> On 03/05/2016 12:27 AM, Chris Friesen wrote:
>>
>>> On 03/04/2016 03:42 PM, Matt Riedemann wrote:
>>>
>>>>
>>>>
>>>> On 3/3/2016 9:14 PM, Zhenyu Zheng wrote:
>>>>
>>>>> Hm, I found out the reason:
>>>>>
>>>>> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/servers.py#L1139-L1145
>>>>>
>>>>>
>>>>> here we filtered out parameters like "deleted", and that's why the API
>>>>> behavior is like above mentioned.
>>>>>
>>>>> So should we simple add "deleted" to the tuple or a microversion is
>>>>> needed?
>>>>>
>>>>
>>> Nice find. This is basically the same as the ip6 case which required
>>>> microversion 2.5 [1]. So I think this is going to require a
>>>> microversion in
>>>> Newton to fix it (which means a blueprint and a spec since it's an
>>>> API change).
>>>> But it's pretty trivial, the paperwork is the majority of the work.
>>>>
>>>> [1] https://review.openstack.org/#/c/179569/
>>>>
>>>
>>> Does it really need a spec given that microversions are documented in
>>> the codebase?
>>>
>>> That almost seems like paperwork for the sake of following the rules
>>> rather than to serve any useful purpose.
>>>
>>> Is anyone really going to argue about the details?
>>>
>>>
>> I've been lurking on this discussion. I was worried that you were going
>> to try to hard code "admin" somewhere in here.
>>
>> If the only change is that the currently accepted set of params is
>> enforced with the current set of policy rules, just in a slightly
>> different place, it will not break people, and thus I would think no
>> microversion is essential.  However, if the the user might need to test
>> which way policy is enforced in order to use the right code path when
>> doing a client call, then a microversion would be needed.
>>
>>
>>
>> Chris
>>>
>>>
>>>
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> The ip6 case and microversion 2.5 is exactly the same scenario and sets
> precedent here, so yes we need a microversion which makes it an API change
> which according to our policy requires a spec. I realize it's trivial, but
> them's the rules.
>
> As far as I can tell, this is latent behavior since forever and no one has
> freaked out about it before, so I don't think doing things by the book and
> doing that in Newton is going to cause any problems.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Non-Admin user can show deleted instances using changes-since parameter when calling list API

2016-03-03 Thread Zhenyu Zheng
Hm, I found out the reason:
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/servers.py#L1139-L1145
Here we filter out parameters like "deleted", and that's why the API
behaves as described above.

So should we simply add "deleted" to the tuple, or is a microversion needed?
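For the first option, the change itself would be tiny; a rough sketch,
noting that the exact option names in the real tuple should be taken from
the servers.py linked above:

    def get_server_search_options():
        """Search options allowed for non-admin users (abridged sketch)."""
        return ('reservation_id', 'name', 'status', 'image', 'flavor',
                'ip', 'changes-since', 'all_tenants',
                'deleted')  # <-- the proposed addition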

On Fri, Mar 4, 2016 at 10:27 AM, Zhenyu Zheng <zhengzhenyul...@gmail.com>
wrote:

> Anyway, I updated the bug report:
> https://bugs.launchpad.net/nova/+bug/1552071
>
> and I will start to working on the bug first.
>
> On Fri, Mar 4, 2016 at 9:29 AM, Zhenyu Zheng <zhengzhenyul...@gmail.com>
> wrote:
>
>> Yes, so you are suggest fixing the return data of non-admin user use
>> 'nova list --deleted' but leave non-admin using 'nova list
>> --status=deleted' as is. Or it would be better to also submit a BP for next
>> cycle to add support for non-admin using '--status=deleted' with
>> microversions. Because in my opinion, if we allow non-admin use "nova list
>> --deleted", there will be no reason for us to limit the use of
>> "--status=deleted".
>>
>> On Fri, Mar 4, 2016 at 12:37 AM, Matt Riedemann <
>> mrie...@linux.vnet.ibm.com> wrote:
>>
>>>
>>>
>>> On 3/3/2016 10:02 AM, Matt Riedemann wrote:
>>>
>>>>
>>>>
>>>> On 3/3/2016 2:55 AM, Zhenyu Zheng wrote:
>>>>
>>>>> Yes, I agree with you guys, I'm also OK for non-admin users to list
>>>>> their own instances no matter what status they are.
>>>>>
>>>>> My question is this:
>>>>> I have done some tests, yet we have 2 different ways to list deleted
>>>>> instances (not counting using changes-since):
>>>>>
>>>>> 1.
>>>>> "GET
>>>>> /v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?status=deleted
>>>>> HTTP/1.1"
>>>>> (nova list --status deleted in CLI)
>>>>> 2. REQ: curl -g -i -X GET
>>>>>
>>>>> http://10.229.45.17:8774/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?deleted=True
>>>>> (nova
>>>>> list --deleted in CLI)
>>>>>
>>>>> for admin user, we can all get deleted instances(after the fix of
>>>>> Matt's
>>>>> patch).
>>>>>
>>>>> But for non-admin users, #1 is restricted here:
>>>>>
>>>>> https://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/servers.py#n350
>>>>>
>>>>> and it will return 403 error:
>>>>> RESP BODY: {"forbidden": {"message": "Only administrators may list
>>>>> deleted instances", "code": 403}}
>>>>>
>>>>
>>>> This is part of the API so if we were going to allow non-admins to query
>>>> for deleted servers using status=deleted, it would have to be a
>>>> microversion change. [1] I could also see that being policy-driven.
>>>>
>>>> It does seem odd and inconsistent though that non-admins can't query
>>>> with status=deleted but they can query with deleted=True in the query
>>>> options.
>>>>
>>>>
>>>>> and for #2 it will strangely return servers that are not in deleted
>>>>> status:
>>>>>
>>>>
>>>> This seems like a bug. I tried looking for something obvious in the code
>>>> but I'm not seeing the issue, I'd suspect something down in the DB API
>>>> code that's doing the filtering.
>>>>
>>>>
>>>>> DEBUG (connectionpool:387) "GET
>>>>> /v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?deleted=True
>>>>> HTTP/1.1" 200 3361
>>>>> DEBUG (session:235) RESP: [200] Content-Length: 3361
>>>>> X-Compute-Request-Id: req-bd073750-982a-4ef7-864a-a5db03e59a68 Vary:
>>>>> X-OpenStack-Nova-API-Version Connection: keep-alive
>>>>> X-Openstack-Nova-Api-Version: 2.1 Date: Thu, 03 Mar 2016 08:43:17 GMT
>>>>> Content-Type: application/json
>>>>> RESP BODY: {"servers": [{"status": "ACTIVE", "updated":
>>>>> "2016-02-29T06:24:16Z", "hostId":
>>>>> "56b12284bb4d1da6cbd066d15e17df252dac1f0dc6c81a74bf0634b7",
>>>>> "addresses":
>>>>> {"private": [{"OS-EXT-IPS-MAC:mac_addr": "

Re: [openstack-dev] [nova] Non-Admin user can show deleted instances using changes-since parameter when calling list API

2016-03-03 Thread Zhenyu Zheng
Anyway, I updated the bug report:
https://bugs.launchpad.net/nova/+bug/1552071

and I will start working on the bug first.

On Fri, Mar 4, 2016 at 9:29 AM, Zhenyu Zheng <zhengzhenyul...@gmail.com>
wrote:

> Yes, so you are suggest fixing the return data of non-admin user use 'nova
> list --deleted' but leave non-admin using 'nova list --status=deleted' as
> is. Or it would be better to also submit a BP for next cycle to add support
> for non-admin using '--status=deleted' with microversions. Because in my
> opinion, if we allow non-admin use "nova list --deleted", there will be no
> reason for us to limit the use of "--status=deleted".
>
> On Fri, Mar 4, 2016 at 12:37 AM, Matt Riedemann <
> mrie...@linux.vnet.ibm.com> wrote:
>
>>
>>
>> On 3/3/2016 10:02 AM, Matt Riedemann wrote:
>>
>>>
>>>
>>> On 3/3/2016 2:55 AM, Zhenyu Zheng wrote:
>>>
>>>> Yes, I agree with you guys, I'm also OK for non-admin users to list
>>>> their own instances no matter what status they are.
>>>>
>>>> My question is this:
>>>> I have done some tests, yet we have 2 different ways to list deleted
>>>> instances (not counting using changes-since):
>>>>
>>>> 1.
>>>> "GET
>>>> /v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?status=deleted
>>>> HTTP/1.1"
>>>> (nova list --status deleted in CLI)
>>>> 2. REQ: curl -g -i -X GET
>>>>
>>>> http://10.229.45.17:8774/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?deleted=True
>>>> (nova
>>>> list --deleted in CLI)
>>>>
>>>> for admin user, we can all get deleted instances(after the fix of Matt's
>>>> patch).
>>>>
>>>> But for non-admin users, #1 is restricted here:
>>>>
>>>> https://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/servers.py#n350
>>>>
>>>> and it will return 403 error:
>>>> RESP BODY: {"forbidden": {"message": "Only administrators may list
>>>> deleted instances", "code": 403}}
>>>>
>>>
>>> This is part of the API so if we were going to allow non-admins to query
>>> for deleted servers using status=deleted, it would have to be a
>>> microversion change. [1] I could also see that being policy-driven.
>>>
>>> It does seem odd and inconsistent though that non-admins can't query
>>> with status=deleted but they can query with deleted=True in the query
>>> options.
>>>
>>>
>>>> and for #2 it will strangely return servers that are not in deleted
>>>> status:
>>>>
>>>
>>> This seems like a bug. I tried looking for something obvious in the code
>>> but I'm not seeing the issue, I'd suspect something down in the DB API
>>> code that's doing the filtering.
>>>
>>>
>>>> DEBUG (connectionpool:387) "GET
>>>> /v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?deleted=True
>>>> HTTP/1.1" 200 3361
>>>> DEBUG (session:235) RESP: [200] Content-Length: 3361
>>>> X-Compute-Request-Id: req-bd073750-982a-4ef7-864a-a5db03e59a68 Vary:
>>>> X-OpenStack-Nova-API-Version Connection: keep-alive
>>>> X-Openstack-Nova-Api-Version: 2.1 Date: Thu, 03 Mar 2016 08:43:17 GMT
>>>> Content-Type: application/json
>>>> RESP BODY: {"servers": [{"status": "ACTIVE", "updated":
>>>> "2016-02-29T06:24:16Z", "hostId":
>>>> "56b12284bb4d1da6cbd066d15e17df252dac1f0dc6c81a74bf0634b7", "addresses":
>>>> {"private": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4f:1b:32", "version":
>>>> 4, "addr": "10.0.0.14", "OS-EXT-IPS:type": "fixed"},
>>>> {"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4f:1b:32", "version": 6, "addr":
>>>> "fdb7:5d7b:6dcd:0:f816:3eff:fe4f:1b32", "OS-EXT-IPS:type": "fixed"}]},
>>>> "links": [{"href":
>>>> "
>>>> http://10.229.45.17:8774/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/ee8907c7-0730-4051-8426-64be44300e70
>>>> ",
>>>>
>>>> "rel": "self"}, {"href":
>>>> "
>>>> h

Re: [openstack-dev] [nova] Non-Admin user can show deleted instances using changes-since parameter when calling list API

2016-03-03 Thread Zhenyu Zheng
Yes, so you are suggesting fixing the returned data when a non-admin uses
'nova list --deleted' but leaving non-admin use of 'nova list
--status=deleted' as is. Or would it be better to also submit a BP for the
next cycle to add support for non-admin '--status=deleted' with a
microversion? In my opinion, if we allow non-admins to use "nova list
--deleted", there will be no reason for us to limit the use of
"--status=deleted".

On Fri, Mar 4, 2016 at 12:37 AM, Matt Riedemann <mrie...@linux.vnet.ibm.com>
wrote:

>
>
> On 3/3/2016 10:02 AM, Matt Riedemann wrote:
>
>>
>>
>> On 3/3/2016 2:55 AM, Zhenyu Zheng wrote:
>>
>>> Yes, I agree with you guys, I'm also OK for non-admin users to list
>>> their own instances no matter what status they are.
>>>
>>> My question is this:
>>> I have done some tests, yet we have 2 different ways to list deleted
>>> instances (not counting using changes-since):
>>>
>>> 1.
>>> "GET
>>> /v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?status=deleted
>>> HTTP/1.1"
>>> (nova list --status deleted in CLI)
>>> 2. REQ: curl -g -i -X GET
>>>
>>> http://10.229.45.17:8774/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?deleted=True
>>> (nova
>>> list --deleted in CLI)
>>>
>>> for admin user, we can all get deleted instances(after the fix of Matt's
>>> patch).
>>>
>>> But for non-admin users, #1 is restricted here:
>>>
>>> https://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/servers.py#n350
>>>
>>> and it will return 403 error:
>>> RESP BODY: {"forbidden": {"message": "Only administrators may list
>>> deleted instances", "code": 403}}
>>>
>>
>> This is part of the API so if we were going to allow non-admins to query
>> for deleted servers using status=deleted, it would have to be a
>> microversion change. [1] I could also see that being policy-driven.
>>
>> It does seem odd and inconsistent though that non-admins can't query
>> with status=deleted but they can query with deleted=True in the query
>> options.
>>
>>
>>> and for #2 it will strangely return servers that are not in deleted
>>> status:
>>>
>>
>> This seems like a bug. I tried looking for something obvious in the code
>> but I'm not seeing the issue, I'd suspect something down in the DB API
>> code that's doing the filtering.
>>
>>
>>> DEBUG (connectionpool:387) "GET
>>> /v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?deleted=True
>>> HTTP/1.1" 200 3361
>>> DEBUG (session:235) RESP: [200] Content-Length: 3361
>>> X-Compute-Request-Id: req-bd073750-982a-4ef7-864a-a5db03e59a68 Vary:
>>> X-OpenStack-Nova-API-Version Connection: keep-alive
>>> X-Openstack-Nova-Api-Version: 2.1 Date: Thu, 03 Mar 2016 08:43:17 GMT
>>> Content-Type: application/json
>>> RESP BODY: {"servers": [{"status": "ACTIVE", "updated":
>>> "2016-02-29T06:24:16Z", "hostId":
>>> "56b12284bb4d1da6cbd066d15e17df252dac1f0dc6c81a74bf0634b7", "addresses":
>>> {"private": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4f:1b:32", "version":
>>> 4, "addr": "10.0.0.14", "OS-EXT-IPS:type": "fixed"},
>>> {"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4f:1b:32", "version": 6, "addr":
>>> "fdb7:5d7b:6dcd:0:f816:3eff:fe4f:1b32", "OS-EXT-IPS:type": "fixed"}]},
>>> "links": [{"href":
>>> "
>>> http://10.229.45.17:8774/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/ee8907c7-0730-4051-8426-64be44300e70
>>> ",
>>>
>>> "rel": "self"}, {"href":
>>> "
>>> http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/servers/ee8907c7-0730-4051-8426-64be44300e70
>>> ",
>>>
>>> "rel": "bookmark"}], "key_name": null, "image": {"id":
>>> "6455625c-a68d-4bd3-ac2e-07382ac5cbf4", "links": [{"href":
>>> "
>>> http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/images/6455625c-a68d-4bd3-ac2e-07382ac5cbf4
>>> ",
>>>
>>> "rel": "bookmark"}]}, "OS-EXT-STS:task_state": null,
>&

Re: [openstack-dev] [nova] Non-Admin user can show deleted instances using changes-since parameter when calling list API

2016-03-03 Thread Zhenyu Zheng
d-4bd3-ac2e-07382ac5cbf4", "links": [{"href": "
http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/images/6455625c-a68d-4bd3-ac2e-07382ac5cbf4;,
"rel": "bookmark"}]}, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state":
"active", "OS-SRV-USG:launched_at": "2016-02-29T06:21:22.00", "flavor":
{"id": "1", "links": [{"href": "
http://10.229.45.17:8774/62bfb653eb0d4d5cabdf635dd8181313/flavors/1;,
"rel": "bookmark"}]}, "id": "40bab05f-0692-43df-a8a9-e7c0d58a73bd",
"security_groups": [{"name": "default"}], "OS-SRV-USG:terminated_at": null,
"OS-EXT-AZ:availability_zone": "nova", "user_id":
"da935c024dc1444abb7b32390eac4e0b", "name": "test_inject", "created":
"2016-02-29T06:19:51Z", "tenant_id": "62bfb653eb0d4d5cabdf635dd8181313",
"OS-DCF:diskConfig": "MANUAL", "os-extended-volumes:volumes_attached": [],
"accessIPv4": "", "accessIPv6": "", "progress": 0,
"OS-EXT-STS:power_state": 1, "config_drive": "True", "metadata": {}}]}

This is obviously inconsistent. I think we should decide what the behavior
ought to be and make the two code paths consistent.
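
To make the asymmetry concrete, here is a paraphrased sketch of the
API-layer logic (not the exact nova code; see the servers.py link quoted
above for the real check):

    # paraphrased sketch, not the exact nova code
    if search_opts.get('status') == 'deleted' and not context.is_admin:
        # path #1: an explicit 403 for non-admins
        raise webob.exc.HTTPForbidden(
            explanation="Only administrators may list deleted instances")
    # path #2: '?deleted=True' never hits an is_admin check here, so it
    # falls through to the DB layer, which currently returns non-deleted
    # servers (the bug shown in the log above)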

Yours,

Kevin

On Thu, Mar 3, 2016 at 3:59 PM, Alex Xu <sou...@gmail.com> wrote:

>
>
> 2016-03-03 2:11 GMT+08:00 Matt Riedemann <mrie...@linux.vnet.ibm.com>:
>
>>
>>
>> On 3/2/2016 3:02 AM, Zhenyu Zheng wrote:
>>
>>> Hi, Nova,
>>>
>>> While I'm working on add "changes-since" parameter support for
>>> python-novaclient "list" CLI.
>>>
>>> I realized that non-admin can list all deleted instances using
>>> "changes-since" parameter. This is reasonable in some level, as delete
>>> is an update to instances. But as we have a limitation that when list
>>> instances, deleted parameter is only allowed for admin users.
>>>
>>> This will lead to inconsistent to the rule of show deleted instances, as
>>> we limit the list of deleted instances to admin only, but non-admin can
>>> get the information using changes-since.
>>>
>>> Should we fix this?
>>>
>>> https://bugs.launchpad.net/nova/+bug/1552071
>>>
>>> Thanks,
>>>
>>> Kevin Zheng
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> Unless I'm missing some use case, I think that listing instances for
>> non-admins should be restricted to the instances they own, regardless of
>> whether or not they are deleted, period.
>>
>
> Agree with this. I don't see a problem with showing non-admins their own
> deleted instances.
>
>
>>
>> As for listing deleted instances as an admin, that was broken with the
>> 2.16 microversion and there is a fix here:
>>
>> https://review.openstack.org/#/c/283820/
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Non-Admin user can show deleted instances using changes-since parameter when calling list API

2016-03-02 Thread Zhenyu Zheng
Hi, Nova,

While working on adding "changes-since" parameter support for the
python-novaclient "list" CLI,

I realized that non-admin users can list all deleted instances using the
"changes-since" parameter. This is reasonable at some level, as a delete is
an update to an instance. But we have a limitation that, when listing
instances, the 'deleted' parameter is only allowed for admin users.

This leads to an inconsistency in the rules for showing deleted instances:
we limit listing deleted instances to admins only, but non-admins can get
the same information using changes-since.
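
For context, here is a paraphrased sketch of why changes-since exposes
deleted instances (names approximate, not the exact nova code): the DB
layer widens read_deleted whenever a changes-since filter is present, while
the admin-only check in the API layer only guards the explicit
'deleted'/'status' options.

    # paraphrased sketch of the DB-layer behaviour, not the exact code
    if 'changes-since' in filters:
        # soft-deleted rows are implicitly included so that instances
        # deleted after the timestamp show up in the result
        context.read_deleted = 'yes'   # approximate; the real code differs
        changes_since = timeutils.normalize_time(filters['changes-since'])
        query = query.filter(models.Instance.updated_at >= changes_since)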

Should we fix this?

https://bugs.launchpad.net/nova/+bug/1552071

Thanks,

Kevin Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Migration progress

2016-02-04 Thread Zhenyu Zheng
I think we can add a config option for this and set a theoretically proper
default value, and also add help text informing the user how an
inappropriate value for this option will affect performance.
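
Something along these lines (a minimal sketch using oslo.config; the
option name, default, and wording are invented for illustration):

    # minimal sketch; the option name and default are invented
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.IntOpt('live_migration_progress_update_interval',
                   default=5,
                   min=1,
                   help='Interval in seconds between writes of live '
                        'migration progress to the database. Larger '
                        'values reduce RPC/DB load, but make a stuck '
                        'migration slower to detect.'),
    ])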



On Wed, Feb 3, 2016 at 7:45 PM, Daniel P. Berrange 
wrote:

> On Wed, Feb 03, 2016 at 11:27:16AM +, Paul Carlton wrote:
> > On 03/02/16 10:49, Daniel P. Berrange wrote:
> > >On Wed, Feb 03, 2016 at 10:44:36AM +, Daniel P. Berrange wrote:
> > >>On Wed, Feb 03, 2016 at 10:37:24AM +, Koniszewski, Pawel wrote:
> > >>>Hello everyone,
> > >>>
> > >>>On the yesterday's live migration meeting we had concerns that
> interval of
> > >>>writing migration progress to the database is too short.
> > >>>
> > >>>Information about migration progress will be stored in the database
> and
> > >>>exposed through the API (/servers//migrations/). In current
> > >>>proposition [1] migration progress will be updated every 2 seconds. It
> > >>>basically means that every 2 seconds a call through RPC will go from
> compute
> > >>>to conductor to write migration data to the database. In case of
> parallel
> > >>>live migrations each migration will report progress by itself.
> > >>>
> > >>>Isn't 2 seconds interval too short for updates if the information is
> exposed
> > >>>through the API and it requires RPC and DB call to actually save it
> in the
> > >>>DB?
> > >>>
> > >>>Our default configuration allows only for 1 concurrent live migration
> [2],
> > >>>but it might vary between different deployments and use cases as it is
> > >>>configurable. Someone might want to trigger 10 (or even more)
> parallel live
> > >>>migrations and each might take even a day to finish in case of block
> > >>>migration. Also if deployment is big enough rabbitmq might be
> fully-loaded.
> > >>>I'm not sure whether updating each migration every 2 seconds makes
> sense in
> > >>>this case. On the other hand it might be hard to observe fast enough
> that
> > >>>migration is stuck if we increase this interval...
> > >>Do we have any actual data that this is a real problem. I have a
> pretty hard
> > >>time believing that a database update of a single field every 2
> seconds is
> > >>going to be what pushes Nova over the edge into a performance
> collapse, even
> > >>if there are 20 migrations running in parallel, when you compare it to
> the
> > >>amount of DB queries & updates done across other areas of the code for
> pretty
> > >>much every singke API call and background job.
> > >Also note that progress is rounded to the nearest integer. So even if
> the
> > >migration runs all day, there is a maximum of 100 possible changes in
> value
> > >for the progress field, so most of the updates should turn in to no-ops
> at
> > >the database level.
> > >
> > >Regards,
> > >Daniel
> > I agree with Daniel, these rpc and db access ops are a tiny percentage
> > of the overall load on rabbit and mysql and properly configured these
> > subsystems should have no issues with this workload.
> >
> > One correction, unless I'm misreading it, the existing
> > _live_migration_monitor code updates the progress field of the instance
> > record every 5 seconds.  However this value can go up and down so
> > an infinite number of updates is possible?
>
> Oh yes, you are in fact correct. Technically you could have an unbounded
> number of updates if migration goes backwards. Some mitigation against
> this is if we see progress going backwards we'll actually abort the
> migration if it gets stuck for too long. We'll also be progressively
> increasing the permitted downtime. So except in pathological scenarios
> I think the number of updates should still be relatively small.
>
> > However, the issue raised here is not with the existing implementation
> > but with the proposed change
> > https://review.openstack.org/#/c/258813/5/nova/virt/libvirt/driver.py
> > This add a save() operation on the migration object every 2 seconds
>
> Ok, that is more heavy weight since it is recording the raw byte values
> and so it is guaranteed to do a database update pretty much every time.
> It still shouldn't be too unreasonable a loading though. FWIW I think
> it is worth being consistent in the update frequency betweeen the
> progress value & the migration object save, so switching to be every
> 5 seconds probably makes more sense, so we know both objects are
> reflecting the same point in time.
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

Re: [openstack-dev] [telemetry][aodh] announcing Liusheng as new Aodh liaison

2016-02-04 Thread Zhenyu Zheng
Congratulations Liusheng

On Fri, Feb 5, 2016 at 12:07 AM, gordon chung  wrote:

> hi,
>
> we've been searching for a lead/liaison/lieutenant for Aodh for some
> time. thankfully, we've had a volunteer.
>
> i'd like to announce Liusheng as the new lead of Aodh, the alarming
> service under Telemetry. he will help me with monitor bugs and specs and
> will be another resource for alarming related items. he will also help
> track some of the features we hope to implement[1].
>
> i'll let him mention some of the target goals but for now, i'd like to
> thank him for volunteering to help improve the community.
>
> [1] https://wiki.openstack.org/wiki/Telemetry/RoadMap#Aodh_.28alarming.29
>
> cheers,
> --
> gord
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] need your help on bug #1449498

2016-01-19 Thread Zhenyu Zheng
This is a bug: tenants and users are maintained by Keystone while quotas
are stored in the Nova DB, so Nova has no hook to clean up quota rows when
a user is deleted.
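
To be concrete about why read_deleted doesn't help here, a paraphrased
sketch (the model name follows nova's DB models; this is not the exact
query):

    # read_deleted applies to the quota row's own 'deleted' column.
    # Deleting the user in Keystone never touches this row, so the row
    # is still "not deleted" from Nova's point of view and is returned
    # even with read_deleted="no".
    rows = model_query(context, models.ProjectUserQuota,
                       read_deleted="no").\
        filter_by(project_id=project_id, user_id=user_id).\
        all()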

On Tue, Jan 19, 2016 at 8:12 PM, jialiang_song517 
wrote:

> Hi guys,
>
> I am working on bug #1449498 (nova still displays the quota of a user
> that has been deleted).
>
> Reproduction steps w/ devstack and Liberty:
> 1) create a tenant bug_test
> 2) create a user test1 in tenant bug_test
> 3) update the quota instances of test1 as 5 (the default instances value
> is 10)
> 4) delete user test1
> 5) query the quota information for user test1 in tenant bug_test
> in step5, the expected result should indicate user test1 doesn't exist,
> while nova returned the deleted user test1's quota information with
> instances as 5.
>
> After investigation, it is found that quota_get_all_by_project_and_user()
> and quota_get_all_by_project() will invoke model_query(context, model,
> args=None,
> session=None,
> use_slave=False,
> *read_deleted=None*,
> project_only=False)
> to query the quota information specified by project or project & user.
> While the model_query() does not work as expected; that is, in case a user
> was deleted, even *read_deleted *is set as *no*, the quota information
> associated with the deleted user will also be returned.
>
> I am not sure if this is a design behavior or this could be problem in
> oslo_db? Could you give some instruction on the further direction? Thanks.
>
> Any other comments are welcome.
>
> Best Regards,
> Jialiang
>
> --
> jialiang_song517
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Glance] Nova + Glance_v2 = Love

2016-01-04 Thread Zhenyu Zheng
Agreed with Sam; point 6 is actually very important for production
deployments, as volume-backed instances are very common. Maybe we should
keep it until we find a better solution.

On Tue, Dec 29, 2015 at 9:41 PM, Sam Matzek  wrote:

> On Thu, Dec 24, 2015 at 7:49 AM, Mikhail Fedosin 
> wrote:
> > Hello, it's another topic about glance v2 adoption in Nova, but it's
> > different from the others. I want to declare that there is a set of
> commits,
> > that make Nova version agnostic and allow to work with both glance apis.
> The
> > idea of the solution is to determine the current api version in the
> > beginning and make appropriate requests after.
> > (https://review.openstack.org/#/c/228578/,
> > https://review.openstack.org/#/c/238309/,
> > https://review.openstack.org/#/c/259097/)
> >
> > Indeed, it requires some additional (and painful) work, but now all
> tempest
> > tests pass in Jenkins.
> >
> > Note: this thread is not about xenplugin, there is another topic, called
> > 'Xenplugin + Glance_v2 = Hate'
> >
> > Here the main issues we faced and how we've solved them:
> >
> > 1. "changes-since" filter for image-list is not supported in v2 api.
> Steve
> > Lewis did a great job and implemented a set of filters with comparison
> > operators https://review.openstack.org/#/c/197388/. Filtering by
> > 'changes-since' is completely similar to 'gte:updated_at'.
> >
> > 2. Filtering by custom properties must have prefix 'property-'. In v2
> it's
> > not required.
> >
> > 3. V2 states that all custom properties must be image attributes, but
> Nova
> > assumes that they are included in 'properties' dictionary. It's fixed
> with
> > glanceclient's method 'is_base_property(prop_name)', that returns False
> in
> > case of custom property.
> >
> > 4. is_public=True/False is visibility="public"/"private" respectively.
> >
> > 5. Deleting custom image properties in Nova is performed with
> 'purge_props'
> > flag. If it's set to True, then all prop names, that are not included in
> > updated data, will be removed. In case of v2 we have to explicitly
> specify
> > prop names in the list param 'remove_props'. To implement this
> behaviour, if
> > 'purge_props' is set, we make additional 'get' request to determine the
> > existing properties and put in 'remove_prop' list only those, that are
> not
> > in updated_data.
> >
> > 6. My favourite:
> > There is an ability to activate an empty image by just providing 'size =
> 0':
> > https://review.openstack.org/#/c/9715/, in this case image will be a
> > collection of metadata. Glance v2 doesn't support this "feature" and
> that's
> > why we have to implement a very dirty workaround:
> > * v2 requires, that disk_format and container-format must be set
> before
> > the activation. if these params are not provided to 'create' method then
> we
> > hardcode it to 'qcow2' and 'bare'.
> > * we call 'upload' method with empty data (data = '') to activate
> image.
> > I (and the rest of the Glance team) think that this image activation with
> > zero-size is inconsistent and we won't implement it in v2. So, I'm going
> to
> > ask if Nova really needs this feature and maybe it's possible to make it
> > work without it.
>
> Nova uses this functionality when it creates snapshots of volume
> backed instances, that is, instances that only have Cinder volumes
> attached and do not have an ephemeral disk.
> In this case Nova API creates Cinder snapshots for the Cinder volumes
> and builds the block_device_mapping property with block devices that
> reference the Cinder snapshots.  Nova activates this image with size=0
> because this image does not have a disk and simply refers to the
> collection of Cinder snapshots that collectively comprise the binary
> image.  Callers of Glance outside of Nova may also use the APIs to
> create "block device mapping" images as well that contain references
> to Cinder volumes to attach, blank volumes to create, snapshots to
> create volumes from, etc during the server creation.  Not having the
> ability to create these images with V2 is a loss of function.
>
> The callers could supply 1 byte of junk data, like a space character,
> but that would result in a junk image being stored in Glance's default
> backing store for the image.  It would also give the impression that a
> real disk image exists for the image in a backing store which could be
> fetched, which is incorrect.
>
> If I remember correctly Glance V2 lets you transition an image from
> queued to active state with size = 0 if the image is using an external
> backing store such as http.  The store is then called to fetch the
> size.  Would it be possible to allow Glance to consider images with
> size = 0 and the block_device_mapping property to be "externally
> sourced" and allow the transition?
>
>
>
> >
> > In conclusion, I want to congratulate you with this huge progress and say
> > there is a big chance that we can deprecate glance v1 in 

Re: [openstack-dev] [openstack][nova] Microversions support for extensions without Controller

2015-12-13 Thread Zhenyu Zheng
Hi, I think for this kind of change you should register a Blueprint and
submit a spec for discussion. Sounds like it will be a bit change.

BR

On Sun, Dec 13, 2015 at 2:18 AM, Alexandre Levine  wrote:

> Hi all,
>
> os-user-data extension implements server_create method to add user_data
> for server creation. No Controller is used for this, only "class
> UserData(extensions.V21APIExtensionBase)".
>
> I want to add server_update method allowing to update the user_data.
> Obviously I have to add it as a microversioned functionality.
>
> And here is the problem: there is no information about the incoming
> request version in this code. It is available for Controllers only. But
> checking the version in controller would be too late, because the instance
> is already updated (non-generator extensions are post-processed).
>
> Can anybody guide me how to resolve this collision?
>
> Would it be possible to just retroactively add the user_data modification
> for the whole 2.1 version skipping the microversioning? Or we need to
> change nova so that request version is passed through to extension?
>
> Best regards,
>   Alex Levine
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][nova] Microversions support for extensions without Controller

2015-12-13 Thread Zhenyu Zheng
Sorry, s/bit/big

On Mon, Dec 14, 2015 at 10:46 AM, Zhenyu Zheng <zhengzhenyul...@gmail.com>
wrote:

> Hi, I think for this kind of change you should register a Blueprint and
> submit a spec for discussion. Sounds like it will be a bit change.
>
> BR
>
> On Sun, Dec 13, 2015 at 2:18 AM, Alexandre Levine <
> alexandrelev...@gmail.com> wrote:
>
>> Hi all,
>>
>> os-user-data extension implements server_create method to add user_data
>> for server creation. No Controller is used for this, only "class
>> UserData(extensions.V21APIExtensionBase)".
>>
>> I want to add server_update method allowing to update the user_data.
>> Obviously I have to add it as a microversioned functionality.
>>
>> And here is the problem: there is no information about the incoming
>> request version in this code. It is available for Controllers only. But
>> checking the version in controller would be too late, because the instance
>> is already updated (non-generator extensions are post-processed).
>>
>> Can anybody guide me how to resolve this collision?
>>
>> Would it be possible to just retroactively add the user_data modification
>> for the whole 2.1 version skipping the microversioning? Or we need to
>> change nova so that request version is passed through to extension?
>>
>> Best regards,
>>   Alex Levine
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] About rebuilding volume-backed instances.

2015-11-09 Thread Zhenyu Zheng
Hi, thanks all for replying; sorry, I might have been a bit unclear.

We have user demands to change only the root device of a volume-backed
instance, for upper-layer services. It's not very cloudy, but it is quite
common. Changing the OS is another, related demand.

Cinder supports backing up a live volume, but it does not support
live-restoring a volume.
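
Concretely, the CLI sequence looks roughly like this (flag names are from
memory and may vary by client version, so treat it as a sketch):

    cinder backup-create --force True <volume-id>
    # a live backup of an in-use volume works (Liberty+), but...
    cinder backup-restore <backup-id> --volume <volume-id>
    # ...the restore path requires the target volume to be 'available',
    # which a root volume never is while it is attached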

Are we planning to support this kind of action?

Yours,
Zheng

On Mon, Nov 9, 2015 at 8:24 PM, Duncan Thomas <duncan.tho...@gmail.com>
wrote:

> On 9 November 2015 at 09:04, Zhenyu Zheng <zhengzhenyul...@gmail.com>
> wrote:
>
>>  And Nova side also doesn't support detaching root device, that means we
>> cannot performing volume backup/restore from cinder side, because those
>> actions needs the volume in "available" status.
>>
>>
>
> It might be of interest to note that volume snapshots have always worked
> on attached volumes, and as of liberty, the backup operation now supports a
> --force=True option that does a backup of a live volume (via an internal
> snapshot, so it should be crash consistent)
>
>
> --
> --
> Duncan Thomas
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] About rebuilding volume-backed instances.

2015-11-09 Thread Zhenyu Zheng
Hi,

Thanks for the reply. For some scenarios, launching a new instance is
easier. But in a production deployment an instance may carry a large amount
of data such as keypairs, metadata, BDMs, etc., and it may have multiple
network interfaces connected to multiple networks. That is to say, for
operations like recovery or changing/updating the operating system,
building a new instance is a lot more "expensive" than rebuilding it. Also,
an instance may have volumes marked with delete_on_termination = True; in
that situation, building a new instance will not preserve the user data on
those volumes, but a rebuild can protect it.

So, I think this is quite a reasonable demand for OpenStack.



On Mon, Nov 9, 2015 at 3:28 PM, Clint Byrum  wrote:

> Excerpts from Zhenyu Zheng's message of 2015-11-08 23:04:59 -0800:
> > Hi All,
> >
> > Currently, we have strong demands about "rebuilding"(or actions like
> > rebuilding) volume-backed instances. As in production deployment, volume
> > backed instance is widely used. Users have the demands of performing the
> > rebuild(recovery) action for root device while maintain instance UUID
> sorts
> > of information, many users also wants to keep the volume uuid unchanged.
> >
> > Nova side doesn't support using Rebuild API directly for volume backed
> > instances (the volume will not change). And Nova side also doesn't
> support
> > detaching root device, that means we cannot performing volume
> > backup/restore from cinder side, because those actions needs the volume
> in
> > "available" status.
> >
> > Now there are couple of patches proposed in nova trying to fix this
> problem:
> > [1] https://review.openstack.org/#/c/201458/
> > [2] https://review.openstack.org/#/c/221732/
> > [3] https://review.openstack.org/#/c/223887/
> >
> > [1] and [2] are trying to expose the API of detaching root devices, [3]
> is
> > trying to fix it in the current Rebuild API. But yet none of them got
> much
> > attention.
> >
> > As we now have strong demand on performing the "rebuilding" action for
> > volume-backed instances, and yet there is not any clear information about
> >  it. I wonder is there any plans of how to support it in Nova and Cinder?
> >
>
> This seems entirely misguided by the users.
>
> Why not just boot a new instance on a new volume with the same image?
> Names can be the same.. UUID's should never be anything except a physical
> handle.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][cinder] About rebuilding volume-backed instances.

2015-11-08 Thread Zhenyu Zheng
Hi All,

Currently, we have strong demands for "rebuilding" (or actions like
rebuilding) volume-backed instances, as volume-backed instances are widely
used in production deployments. Users demand performing the rebuild
(recovery) action on the root device while keeping the instance UUID and
similar information, and many users also want to keep the volume UUID
unchanged.

Nova doesn't support using the Rebuild API directly for volume-backed
instances (the volume will not change). Nova also doesn't support detaching
the root device, which means we cannot perform a volume backup/restore from
the Cinder side, because those actions need the volume in "available"
status.

Now there are a couple of patches proposed in Nova trying to fix this problem:
[1] https://review.openstack.org/#/c/201458/
[2] https://review.openstack.org/#/c/221732/
[3] https://review.openstack.org/#/c/223887/

[1] and [2] try to expose an API for detaching root devices; [3] tries to
fix it in the current Rebuild API. Yet none of them has gotten much
attention.

As we now have a strong demand for performing the "rebuilding" action on
volume-backed instances, and there is not yet any clear plan for it, I
wonder: are there any plans for how to support this in Nova and Cinder?

Yours,

Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] Pagination in thre API

2015-11-05 Thread Zhenyu Zheng
So let's work on the API WG guideline first; looking forward to getting it
done soon. Pagination is actually very useful in production deployments.

On Thu, Nov 5, 2015 at 11:16 PM, Everett Toews 
wrote:

> On Nov 5, 2015, at 5:44 AM, John Garbutt  wrote:
>
>
> On 5 November 2015 at 09:46, Richard Jones  wrote:
>
> As a consumer of such APIs on the Horizon side, I'm all for consistency in
> pagination, and more of it, so yes please!
>
> On 5 November 2015 at 13:24, Tony Breeds  wrote:
>
>
> On Thu, Nov 05, 2015 at 01:09:36PM +1100, Tony Breeds wrote:
>
> Hi All,
>Around the middle of October a spec [1] was uploaded to add
> pagination
> support to the os-hypervisors API.  While I recognize the use case it
> seemed
> like adding another pagination implementation wasn't an awesome idea.
>
> Today I see 3 more requests to add pagination to APIs [2]
>
> Perhaps I'm over thinking it but should we do something more strategic
> rather
> than scattering "add pagination here".
>
>
> +1
>
> The plan, as I understand it, is to first finish off this API WG guideline:
>
> http://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html
>
>
>
> An attempt at an API guideline for pagination is here [1] but hasn't
> received any updates in over a month, which can be understandable as
> sometimes other work takes precedence.
>
> Perhaps we can get that guideline moving again?
>
> If it's becoming difficult to reach agreement on that approach in the
> guideline, it could be worthwhile to take a step back and do some analysis
> on the way pagination is done in the more established APIs. I've found that
> such analysis can be very helpful as you're moving forward from a known
> state.
>
> The place for that analysis is in Current Design [2] by filling in the
> Pagination page. You can find many examples of such analysis from the
> Current Design like Sorting [3].
>
> Cheers,
> Everett
>
>
> [1] https://review.openstack.org/#/c/190743/
> [2] https://wiki.openstack.org/wiki/API_Working_Group/Current_Design
> [3]
> https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Sorting
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Add expected_task for instance.save() used by block_device mapping in compute.manager._build_resources

2015-11-02 Thread Zhenyu Zheng
Hi all,

We have recently met this problem: in a large-scale deployment, commands
can be sent concurrently, and several times, when an instance was asked to
be deleted while it was launching, we observed the vm_state and task_state
of that instance changing abnormally like this:

When we delete the instance while its task state is networking:
scheduling -> none -> (networking) -> deleting -> block_device_mapping -> spawning -> none
The expected task_state transitions should be:
networking -> deleting -> deleted
and the vm_state changes like this:
BUILD -> ACTIVE -> disappears, which is also very strange for the user.

After diving deeper, we found that in the _build_resources code, the
instance.save() for block_device_mapping doesn't pass expected_task_state:
https://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n2135
so it also saves over "deleting", and both processes keep working, which
causes the situation above.

How about we also pass an expected_task_state for the block_device_mapping
save? The expected task states could be NETWORKING, SCHEDULING, and None.
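
A minimal sketch of what I have in mind (not a tested patch; the save()
call is the one linked above):

    # sketch for _build_resources, not a tested patch
    instance.vm_state = vm_states.BUILDING
    instance.task_state = task_states.BLOCK_DEVICE_MAPPING
    # raises UnexpectedTaskStateError instead of silently overwriting a
    # concurrent 'deleting' task_state
    instance.save(expected_task_state=(task_states.SCHEDULING,
                                       task_states.NETWORKING,
                                       None))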

I have registered a bug for this:
https://bugs.launchpad.net/nova/+bug/1512563

Any suggestions?

Best Regards,
Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-10-13 Thread Zhenyu Zheng
I think it would be better if you submitted a spec for your proposal; it
will be easier for people to comment on.

On Wed, Oct 14, 2015 at 10:05 AM, Tang Chen  wrote:

> Hi, all,
>
> Please help to review this BP.
>
> https://blueprints.launchpad.net/nova/+spec/live-migration-state-machine
>
>
> Currently, the migration_status field in Migration object is indicating
> the
> status of migration process. But in the current code, it is represented
> by pure string, like 'migrating', 'finished', and so on.
>
> The strings could be confusing to different developers, e.g. there are 3
> statuses representing the migration process is over successfully:
> 'finished', 'completed' and 'done'.
> And 2 for migration in process: 'running' and 'migrating'.
>
> So I think we should use constants or enum for these statuses.
>
>
> Furthermore, Nikola has proposed to create a state machine for the
> statuses,
> which is part of another abandoned BP. And this is also the work I'd like
> to go
> on with. Please refer to:
> https://review.openstack.org/#/c/197668/
> https://review.openstack.org/#/c/197669/
>
>
> Another proposal is: introduce a new member named "state" into Migration.
> Use a state machine to handle this Migration.state, and leave
> migration_status
> field a descriptive human readable free-form.
>
>
> So how do you think ?
>
> Thanks.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][python-novaclient] Functional test fail due to publicURL endpoint for volume service not found

2015-09-29 Thread Zhenyu Zheng
Hi, all

I submitted a patch for novaclient last night:
https://review.openstack.org/#/c/228769/ , and it turns out the functional
tests failed due to "publicURL endpoint for volume service not found". I
also found that another novaclient patch,
https://review.openstack.org/#/c/217131/ , fails with the same error, so
this must be a bug. Any ideas on how to fix this?

Thanks,

BR,

Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [api] Nova currently handles list with limit=0 quite different for different objects.

2015-09-23 Thread Zhenyu Zheng
Hi, Alex,

Thanks for the information; I was unable to join the meeting yesterday.
Then let's get the decision made before fixing it.

BR,

Zheng

On Wed, Sep 23, 2015 at 12:56 PM, Alex Xu <hejie...@intel.com> wrote:

> Hi, Zhenyu,
>
> We discussed this in yesterday's Nova API meeting. We think it should be
> made consistent in the API WG.
>
> There is already a patch for the pagination guideline,
> https://review.openstack.org/190743 , and there has also been some
> discussion on limits.
> So we had better wait for the guideline to settle before fixing it.
>
> Thanks
> Alex
>
> On Sep 23, 2015, at 9:18 AM, Zhenyu Zheng <zhengzhenyul...@gmail.com>
> wrote:
>
> Any thoughts on this?
>
> On Mon, Sep 14, 2015 at 11:53 AM, Zhenyu Zheng <zhengzhenyul...@gmail.com>
> wrote:
>
>> Hi, Thanks for your reply, after check again and I agree with you. I
>> think we should come up with a conclusion about how we should treat this
>> limit=0 across nova. And that's also why I sent out this mail. I will
>> register this topic in the API meeting open discussion section, my be a BP
>> in M to fix this.
>>
>> BR,
>>
>> Zheng
>>
>> On Sat, Sep 12, 2015 at 12:07 AM, Kevin L. Mitchell <
>> kevin.mitch...@rackspace.com> wrote:
>>
>>> On Fri, 2015-09-11 at 15:41 +0800, Zhenyu Zheng wrote:
>>> > Hi, I found out that nova currently handles list with limit=0 quite
>>> > different for different objects.
>>> >
>>> > Especially when list servers:
>>> >
>>> > According to the code:
>>> >
>>> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/common.py#n206
>>> >
>>> > when limit = 0, it should apply as max_limit, but currently, in:
>>> >
>>> http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/api.py#n1930
>>> >
>>> > we directly return [], this is quite different with comment in the api
>>> > code.
>>> >
>>> >
>>> > I checked other objects:
>>> >
>>> > when list security groups and server groups, it will return as no
>>> > limit has been set. And for flavors it returns []. I will continue to
>>> > try out other APIs if needed.
>>> >
>>> > I think maybe we should make a rule for all objects, at least fix the
>>> > servers to make it same in api and db code.
>>> >
>>> > I have reported a bug in launchpad:
>>> >
>>> > https://bugs.launchpad.net/nova/+bug/1494617
>>> >
>>> >
>>> > Any suggestions?
>>>
>>> After seeing the test failures that showed up on your proposed fix, I'm
>>> thinking that the proposed change reads like an API change, requiring a
>>> microversion bump.  That said, I approve of increased consistency across
>>> the API, and perhaps the behavior on limit=0 is something the API group
>>> needs to discuss a guideline for?
>>> --
>>> Kevin L. Mitchell <kevin.mitch...@rackspace.com>
>>> Rackspace
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> <http://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe>
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] About availability zones

2015-09-23 Thread Zhenyu Zheng
Hi,

Thanks for the reply. One possible use case: a user wants to live-migrate
to az2, so he specifies host2. Since we didn't update instance.az, if the
user live-migrates again without specifying a destination host, the
instance will migrate back to az1, which might be different from what the
user expects. Any thoughts about this?
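
To spell out the use case in CLI terms (host/AZ names are placeholders):

    nova live-migration <server> host2
    # destination given: the scheduler (and AZFilter) is bypassed, the
    # instance lands in az2, but instance.az still says az1
    nova live-migration <server>
    # no destination: the scheduler runs, and if an AZ was requested at
    # boot, the AZFilter sends the instance back to an az1 host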

BR,

Zheng

On Wed, Sep 23, 2015 at 4:30 PM, Sylvain Bauza <sba...@redhat.com> wrote:

>
>
> Le 23/09/2015 05:24, Zhenyu Zheng a écrit :
>
> Hi, all
>
> I have a question about availability zones when performing live-migration.
>
> Currently, when performing live-migration the AZ of the instance didn't
> update. In usecase like this:
> Instance_1 is in host1 which is in az1, we live-migrate it to host2
> (provide host2 in API request) which is in az2. The operation will secusess
> but the availability zone data stored in instance1 is still az1, this may
> cause inconsistency with the az data stored in instance db and the actual
> az. I think update the az information in instance using the host az can
> solve this.
>
>
> Well, no. Instance.AZ is only a reflection of what the user asked for, not what
> the current AZ is from the host the instance belongs to. In other words,
> instance.az is set once forever by taking the --az hint from the API
> request and persisting it in DB.
>
> That means that if you want to create a new VM without explicitly
> specifying one AZ in the CLI, it will take the default value of
> CONF.default_schedule_az which is None (unless you modify that flag).
>
> Consequently, when it will go to the scheduler, the AZFilter will not
> check the related AZs from any host because you didn't asked for an AZ.
> That means that the instance is considered "AZ-free".
>
> Now, when live-migrating, *if you specify a destination*, you totally
> bypass the scheduler and thus the AZFilter. By doing that, you can put your
> instance to another host without really checking the AZ.
>
> That said, if you *don't specify a destination*, then the scheduler will
> be called and will enforce the instance.az field with regards to the host
> AZ. That should still work (again, depending on whether you explicitly set
> an AZ at the boot time)
>
> To be clear, there is no reason of updating that instance AZ field. We can
> tho consider it's a new "request"' field and could be potentially moved to
> the RequestSpec object, but for the moment, this is a bit too early since
> we don't really use that new RequestSpec object yet.
>
>
>
> Also, I have heard from my collegue that in the future we are planning to
> use host az information for instances. I couldn't find informations about
> this, could anyone provide me some information about it if thats true?
>
>
> See my point above, I'd rather prefer to fix how live-migrations check the
> scheduler (and not bypass it when specifying a destination) and possibly
> move the instance AZ field to the RequestSpec object once that object is
> persisted, but I don't think we should check the host instead of the
> instance in the AZFilter.
>
>
> I assume all of that can be very confusing and mostly tribal knowledge,
> that's why we need to document that properly and one first shot is
> https://review.openstack.org/#/c/223802/
>
> -Sylvain
>
> Thanks,
>
> Best Regards,
>
> Zheng
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [api] Nova currently handles list with limit=0 quite different for different objects.

2015-09-22 Thread Zhenyu Zheng
Any thoughts on this?

On Mon, Sep 14, 2015 at 11:53 AM, Zhenyu Zheng <zhengzhenyul...@gmail.com>
wrote:

> Hi, Thanks for your reply, after check again and I agree with you. I think
> we should come up with a conclusion about how we should treat this limit=0
> across nova. And that's also why I sent out this mail. I will register this
> topic in the API meeting open discussion section, my be a BP in M to fix
> this.
>
> BR,
>
> Zheng
>
> On Sat, Sep 12, 2015 at 12:07 AM, Kevin L. Mitchell <
> kevin.mitch...@rackspace.com> wrote:
>
>> On Fri, 2015-09-11 at 15:41 +0800, Zhenyu Zheng wrote:
>> > Hi, I found out that nova currently handles list with limit=0 quite
>> > different for different objects.
>> >
>> > Especially when list servers:
>> >
>> > According to the code:
>> >
>> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/common.py#n206
>> >
>> > when limit = 0, it should apply as max_limit, but currently, in:
>> >
>> http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/api.py#n1930
>> >
>> > we directly return [], this is quite different with comment in the api
>> > code.
>> >
>> >
>> > I checked other objects:
>> >
>> > when list security groups and server groups, it will return as no
>> > limit has been set. And for flavors it returns []. I will continue to
>> > try out other APIs if needed.
>> >
>> > I think maybe we should make a rule for all objects, at least fix the
>> > servers to make it same in api and db code.
>> >
>> > I have reported a bug in launchpad:
>> >
>> > https://bugs.launchpad.net/nova/+bug/1494617
>> >
>> >
>> > Any suggestions?
>>
>> After seeing the test failures that showed up on your proposed fix, I'm
>> thinking that the proposed change reads like an API change, requiring a
>> microversion bump.  That said, I approve of increased consistency across
>> the API, and perhaps the behavior on limit=0 is something the API group
>> needs to discuss a guideline for?
>> --
>> Kevin L. Mitchell <kevin.mitch...@rackspace.com>
>> Rackspace
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] About availability zones

2015-09-22 Thread Zhenyu Zheng
Hi, all

I have a question about availability zones when performing live-migration.

Currently, when performing a live-migration, the AZ of the instance is not
updated. Consider a use case like this:
Instance_1 is on host1, which is in az1; we live-migrate it to host2
(providing host2 in the API request), which is in az2. The operation will
succeed, but the availability zone stored for Instance_1 is still az1,
which can cause an inconsistency between the AZ data stored in the instance
DB and the actual AZ. I think updating the AZ information in the instance
using the host's AZ can solve this.

Also, I have heard from my colleague that in the future we are planning to
use host AZ information for instances. I couldn't find information about
this; could anyone provide me some information about it if that's true?

Thanks,

Best Regards,

Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [api] Nova currently handles list with limit=0 quite different for different objects.

2015-09-13 Thread Zhenyu Zheng
Hi, thanks for your reply; after checking again, I agree with you. I think
we should come to a conclusion about how to treat limit=0 across Nova, and
that's also why I sent out this mail. I will register this topic in the API
meeting open discussion section; maybe a BP in M to fix this.

BR,

Zheng

On Sat, Sep 12, 2015 at 12:07 AM, Kevin L. Mitchell <
kevin.mitch...@rackspace.com> wrote:

> On Fri, 2015-09-11 at 15:41 +0800, Zhenyu Zheng wrote:
> > Hi, I found out that nova currently handles list with limit=0 quite
> > different for different objects.
> >
> > Especially when list servers:
> >
> > According to the code:
> >
> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/common.py#n206
> >
> > when limit = 0, it should apply as max_limit, but currently, in:
> >
> http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/api.py#n1930
> >
> > we directly return [], this is quite different with comment in the api
> > code.
> >
> >
> > I checked other objects:
> >
> > when list security groups and server groups, it will return as no
> > limit has been set. And for flavors it returns []. I will continue to
> > try out other APIs if needed.
> >
> > I think maybe we should make a rule for all objects, at least fix the
> > servers to make it same in api and db code.
> >
> > I have reported a bug in launchpad:
> >
> > https://bugs.launchpad.net/nova/+bug/1494617
> >
> >
> > Any suggestions?
>
> After seeing the test failures that showed up on your proposed fix, I'm
> thinking that the proposed change reads like an API change, requiring a
> microversion bump.  That said, I approve of increased consistency across
> the API, and perhaps the behavior on limit=0 is something the API group
> needs to discuss a guideline for?
> --
> Kevin L. Mitchell <kevin.mitch...@rackspace.com>
> Rackspace
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova currently handles list with limit=0 quite different for different objects.

2015-09-11 Thread Zhenyu Zheng
Hi, I found out that Nova currently handles listing with limit=0 quite
differently for different objects.

Especially when listing servers:

According to the code:
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/common.py#n206

when limit = 0, it should be applied as max_limit; but currently, in:
http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/api.py#n1930

we directly return [], which is quite different from the comment in the API code.
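
To illustrate the two inconsistent paths (paraphrased, not the exact code
from the links above):

    # nova/api/openstack/common.py (paraphrased): limit=0 is falsy,
    # so it falls back to the configured maximum
    limit = min(max_limit, limit or max_limit)

    # nova/db/sqlalchemy/api.py (paraphrased): limit=0 short-circuits
    if limit == 0:
        return []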


I checked other objects:

when listing security groups and server groups, the result is returned as
if no limit had been set, while for flavors it returns []. I will continue
to try out other APIs if needed.

I think maybe we should make a rule for all objects, and at least fix
servers so the API and DB code behave the same.

I have reported a bug in launchpad:

https://bugs.launchpad.net/nova/+bug/1494617

Any suggestions?

Best Regards,

Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Blazar] Anyone interested?

2015-08-31 Thread Zhenyu Zheng
Hello,
It seems like an interesting project.

On Fri, Aug 28, 2015 at 7:54 PM, Pierre Riteau  wrote:

> Hello,
>
> The NSF-funded Chameleon project (https://www.chameleoncloud.org) uses
> Blazar to provide advance reservations of resources for running cloud
> computing experiments.
>
> We would be interested in contributing as well.
>
> Pierre Riteau
>
> On 28 Aug 2015, at 07:56, Ildikó Váncsa 
> wrote:
>
> > Hi All,
> >
> > The resource reservation topic pops up time to time on different forums
> to cover use cases in terms of both IT and NFV. The Blazar project was
> intended to address this need, but according to my knowledge due to earlier
> integration and other difficulties the work has been stopped.
> >
> > My question is that who would be interested in resurrecting the Blazar
> project and/or working on a reservation system in OpenStack?
> >
> > Thanks and Best Regards,
> > Ildikó
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Bug 1482444] Abnormal changes of quota usage after instance restored by admin

2015-08-10 Thread Zhenyu Zheng
Hi, I couldn't reproduce this bug; maybe you can explain it in more detail.

On Tue, Aug 11, 2015 at 8:37 AM, 郑岳 zheng...@hihuron.com wrote:

 Hello everyone:
 I found a bug in openstack about project's quota usage in newest version.

 The link of bug description: https://bugs.launchpad.net/nova/+bug/1482444

 Reproduction steps:
 1. Enable soft delete by setting reclaim_instance_interval in nova.conf.

 2. A normal project, ProjectA, creates a new instance and then deletes it;
 its status changes to SOFT_DELETED.

 3. Now restore the instance as the admin user in the project "admin". The
 instance goes back to ACTIVE, but the quota usage of the admin project has
 changed: the flavor of that instance has been added to the admin project's
 quota usage.
 Please help me confirm whether the problem is real.
 Thanks!

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Adding Project_id to the display list when using nova server-group-list

2015-08-05 Thread Zhenyu Zheng
Hi All,

Currently, when using the command nova server-group-list, the server
groups' project id is not displayed. Since an admin user can use the
--all-projects option to list server groups in all projects, it is really
difficult to identify which server group belongs to which project. It would
be better if we also displayed the project id.
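
The change itself is tiny; roughly this, in novaclient's shell code (the
column names are approximate, see the review below for the real diff):

    # approximate sketch of the novaclient change, not the exact diff
    columns = ['Id', 'Name', 'Project Id', 'Policies', 'Members', 'Metadata']
    utils.print_list(server_groups, columns)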

I have submitted a patch for the problem mentioned above:
https://review.openstack.org/#/c/209018/

Since it is a really small fix (it only adds one line of code), is it
necessary to submit a spec for it?

Thanks,
BR,

Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][cinder] Should we allow this kind of action?

2015-07-26 Thread Zhenyu Zheng
Hi all,

Recently, I've been asked to perform this kind of action using OpenStack:

1. Launch a volume-backed instance.
2. Take a snapshot of this instance using nova image-create; an image will
be added in Glance with size zero, and the BDM will be saved in its
metadata.
3. Create a volume from this image (with Cinder); this volume will be
marked as bootable.
4. Launch a new volume-backed instance using this newly built volume.
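
For reference, the sequence as CLI commands would look roughly like this
(IDs are placeholders and flag spellings may vary by client version):

    nova boot --flavor m1.small \
        --block-device source=image,id=<image-id>,dest=volume,size=10,bootindex=0 vm1
    nova image-create vm1 vm1-snap
    # zero-size image; the BDM goes into the image metadata
    cinder create --image-id <vm1-snap-image-id> 10
    # step 3: a "bootable" volume created from that image
    nova boot --flavor m1.small --boot-volume <new-volume-id> vm2
    # step 4: fails as described below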

There will be errors when performing this action:
1. Cinder will create a volume with zero size, and the BDM saved in the
metadata is transformed from a dict to a string, so it cannot be used in
Nova for instance creation.
2. The BDM provided by the user via the REST API will conflict with the BDM
saved in the metadata.

Now, my questions are:
Should we allow this kind of action in Nova, or should we only use the
image directly for instance creation?
If this kind of action is not allowed, should we add checks in Cinder to
forbid volume creation from a zero-sized image?

Thanks,

BR,
Zhenyu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev