[openstack-dev] [cinder] [nova] Do we need a "force" parameter in cinder "re-image" API?
In Denver, we agreed to add a new "re-image" API in cinder to support volume-backed server rebuild with a new image. An initial blueprint has been drafted in [3]; reviews are welcome, thanks. : )

The API is very simple, something like:

URL: POST /v3/{project_id}/volumes/{volume_id}/action
Request body:
{
    'os-reimage': {
        'image_id': "71543ced-a8af-45b6-a5c4-a46282108a90"
    }
}

The question is: do we need a "force" parameter in the request body? Like:

{
    'os-reimage': {
        'image_id': "71543ced-a8af-45b6-a5c4-a46282108a90",
        *'force': True*
    }
}

The "force" parameter idea comes from [4], and means that:

1. we can re-image an "available" volume directly.
2. we can't re-image an "in-use"/"reserved" volume directly.
3. we can only re-image an "in-use"/"reserved" volume with the "force" parameter.

This also means nova would need to always call the re-image API with the extra "force" parameter, because the volume status is "in-use" or "reserved" when we rebuild the server.

*So, what's your idea? Do we really want to add this "force" parameter?*

[1] https://etherpad.openstack.org/p/nova-ptg-stein L483
[2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday-rebuild L12
[3] https://review.openstack.org/#/c/605317
[4] https://review.openstack.org/#/c/605317/1/specs/stein/add-volume-re-image-api.rst@75

Regards,
Yikun Jiang

Yikun(Kero)
Mail: yikunk...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
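A minimal sketch of the status check the proposed "force" semantics would imply. The function name, constants, and error type here are hypothetical illustrations for discussion, not the actual cinder implementation:

```python
# Hypothetical sketch of the "force" semantics from [4]; names are
# illustrative only, not the actual cinder code.
REIMAGEABLE_DIRECTLY = {'available'}
REIMAGEABLE_WITH_FORCE = {'in-use', 'reserved'}


def check_reimage_allowed(volume_status, force=False):
    """Return True if re-image may proceed, else raise ValueError."""
    if volume_status in REIMAGEABLE_DIRECTLY:
        return True
    if volume_status in REIMAGEABLE_WITH_FORCE:
        if force:
            return True
        raise ValueError(
            "Re-imaging a volume in status %r requires force=True"
            % volume_status)
    raise ValueError(
        "Volume status %r cannot be re-imaged" % volume_status)
```

Under this shape, nova's rebuild path would always pass force=True, since the volume is "in-use" or "reserved" at that point, which is exactly the concern raised above.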
Re: [openstack-dev] [Nova] A multi-cell instance-list performance test
Some more information:

*1. How did we record the time when listing?*

You can see all our changes in: http://paste.openstack.org/show/728162/
Total cost: L26
Construct view: L43
Data gather per cell cost: L152
Data gather all cells cost: L174
Merge sort cost: L198

*2. Why is it not parallel in the first result?*

The root reason the data gathering in the first table is not parallel is that we don't enable eventlet.monkey_patch (specifically, the time flag is not True) under the uwsgi mode. As a result, the oslo_db thread yield [1] doesn't work, and all db data gathering threads are blocked until they have fetched all their data from the db, so the gathering process is effectively executed serially. We fix this in [2], but even after the fix [2] there is no further improvement as we expected; the threads still appear to influence each other, so we need your ideas. : )

[1] https://github.com/openstack/oslo.db/blob/256ebc3/oslo_db/sqlalchemy/engines.py#L51
[2] https://review.openstack.org/#/c/592285/

Regards,
Yikun Jiang

Yikun(Kero)
Mail: yikunk...@gmail.com

Zhenyu Zheng wrote on Thu, Aug 16, 2018 at 3:54 PM:

> Hi, Nova
>
> As the Cells v2 architecture is getting mature, and CERN used it and it
> seems to work well, *Huawei* is also willing to consider using this in our
> Public Cloud deployments.
> As we still have concerns about the performance when doing multi-cell
> listing, recently *Yikun Jiang* and I have done a performance test for
> ``instance list`` across a multi-cell deployment; we would like to share
> our test results and findings.
>
> First, I want to point out our testing environment. As we (Yikun and I)
> are doing this as a concept test (to show the ratio between time
> consumptions for querying data from the DB, sorting, etc.), we are doing
> it on our own machine. The machine has 16 CPUs and 80 GB RAM, and as it
> is old, the disk might be slow. So we will not judge the time consumption
> data itself, but the overall logic and the ratios between different steps.
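Conceptually, the "data gather all cells" step fans out one DB query per cell and waits for all of them, so with real parallelism the wall time should approach the slowest single cell rather than the sum. A minimal stdlib sketch of that idea (nova actually uses eventlet greenthreads, and `query_cell` here is a hypothetical stand-in for a per-cell DB query):

```python
import time
from concurrent.futures import ThreadPoolExecutor


def query_cell(cell, delay=0.05):
    """Hypothetical stand-in for a per-cell DB query."""
    time.sleep(delay)  # simulates the DB round-trip time
    return [(cell, i) for i in range(3)]


def gather_all_cells(cells):
    # Fan out one query per cell; total wall time is roughly the max
    # per-cell delay, not the sum, when the queries really run in parallel.
    with ThreadPoolExecutor(max_workers=len(cells)) as pool:
        results = list(pool.map(query_cell, cells))
    return {rows[0][0]: rows for rows in results}


start = time.monotonic()
data = gather_all_cells(['cell1', 'cell2', 'cell3'])
elapsed = time.monotonic() - start
```

When the threads are instead serialized (the monkey_patch problem described above), `elapsed` degrades toward the sum of the per-cell delays, which matches the near-linear "Data Gather Cost" growth in the first table.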
> We are doing it with a devstack deployment on this single machine.
>
> Then I would like to share our test plan: we will set up 10 cells
> (cell1~cell10) and generate 10,000 instance records in each of those cells
> (considering 20 instances per host, that would be like 500 hosts, which
> seems a good size for a cell). cell0 is kept empty, as the number of
> errored instances should be very small and it doesn't really matter.
> We will test the time consumption for listing instances across 1, 2, 5,
> and 10 cells (cell0 is always queried, so it is actually 2, 3, 6 and 11
> cells) with limits of 100, 200, 500 and 1000, as the default maximum
> limit is 1000. In order to get more general results, we tested the list
> with the default sort key and direction, sorted by instance uuid, and
> sorted by uuid & name.
>
> This is what we got (time unit is second; Gather = Data Gather Cost,
> Merge = Merge Sort Cost, View = Construct View):
>
>                | Default sort                   | uuid sort                      | uuid+name sort
> Cells  Limit   | Total   Gather  Merge   View   | Total   Gather  Merge   View   | Total   Gather  Merge   View
> 10     100     | 2.3313  2.1306  0.1145  0.0672 | 2.3693  2.1343  0.1148  0.1016 | 2.3284  2.1264  0.1145  0.0679
>        200     | 3.5979  3.2137  0.2287  0.1265 | 3.5316  3.1509  0.2265  0.1255 | 3.481   3.054   0.2697  0.1284
>        500     | 7.1952  6.2597  0.5704  0.3029 | 7.5057  6.4761  0.6263  0.341  | 7.4885  6.4623  0.6239  0.3404
>        1000    | 13.5745 11.7012 1.1511  0.5966 | 13.8408 11.9007 1.2268  0.5939 | 13.8813 11.913  1.2301  0.6187
> 5      100     | 1.3142  1.1003  0.1163  0.0706 | 1.2458  1.0498  0.1163  0.0665 | 1.2528  1.0579  0.1161  0.066
>        200     | 2.0151  1.6063  0.2645  0.1255 | 1.9866  1.5386  0.2668  0.1615 | 2.0352  1.6246  0.2646  0.1262
>        500     | 4.2109  3.1358  0.7033  0.3343 | 4.1605  3.0893  0.6951  0
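The "Merge Sort Cost" column above measures combining the per-cell sorted result lists into one globally sorted, limited page. A minimal sketch of that idea (the data and function names are hypothetical, not the actual nova code, which merges full instance records):

```python
import heapq
from operator import itemgetter

# Each cell returns its own result list already sorted by the sort key.
# Hypothetical sample data standing in for per-cell instance records.
cell_results = {
    'cell1': [{'uuid': 'a1', 'created_at': 1}, {'uuid': 'a9', 'created_at': 5}],
    'cell2': [{'uuid': 'b2', 'created_at': 2}, {'uuid': 'b3', 'created_at': 3}],
    'cell3': [{'uuid': 'c4', 'created_at': 4}],
}


def merge_sorted_cells(cell_results, sort_key, limit):
    # heapq.merge lazily merges the already-sorted per-cell streams,
    # so only the first `limit` records are actually materialized.
    merged = heapq.merge(*cell_results.values(), key=itemgetter(sort_key))
    return [rec for _, rec in zip(range(limit), merged)]


page = merge_sorted_cells(cell_results, 'created_at', limit=3)
```

Note each cell must return up to `limit` records (not `limit / num_cells`), since in the worst case all of the first `limit` results live in one cell; that is consistent with the gather cost above growing with both cell count and limit.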
Re: [openstack-dev] [nova]Notification update week 25
I'd like to help with it. : )

Regards,
Yikun Jiang

Yikun(Kero)
Mail: yikunk...@gmail.com

Matt Riedemann wrote on Wed, Jun 20, 2018 at 1:07 AM:

> On 6/18/2018 10:10 AM, Balázs Gibizer wrote:
> > * Introduce instance.lock and instance.unlock notifications
> > https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
>
> This hasn't been updated in quite a while. I wonder if someone else wants
> to pick that up now?
>
> --
>
> Thanks,
>
> Matt
Re: [openstack-dev] [nova] PTL Election Season
Matt,

Thanks for all your work. As a newcomer to Nova upstream development, I really appreciate your patient reviews and warm help. : )

Regards,
Yikun Jiang

Yikun(Kero)
Mail: yikunk...@gmail.com

2018-01-23 7:09 GMT+08:00 Matt Riedemann <mriede...@gmail.com>:

> On 1/15/2018 11:04 AM, Kendall Nelson wrote:
>
>> Election details: https://governance.openstack.org/election/
>>
>> Please read the stipulations and timelines for candidates and electorate
>> contained in this governance documentation.
>>
>> Be aware, in the PTL elections if the program only has one candidate,
>> that candidate is acclaimed and there will be no poll. There will only
>> be a poll if there is more than one candidate stepping forward for a
>> program's PTL position.
>>
>> There will be further announcements posted to the mailing list as action
>> is required from the electorate or candidates. This email is for
>> information purposes only.
>>
>> If you have any questions which you feel affect others please reply to
>> this email thread.
>
> To anyone that cares, I don't plan on running for Nova PTL again for the
> Rocky release. Queens was my fourth tour and it's definitely time for
> someone else to get the opportunity to lead here. I don't plan on going
> anywhere and I'll be here to help with any transition needed, assuming
> someone else (or a couple of people, hopefully) will run in the election.
> It's been a great experience and I thank everyone that has had to put up
> with me and my obsessive paperwork and process disorder in the meantime.
>
> --
>
> Thanks,
>
> Matt
Re: [openstack-dev] [nova] question on quiesce using call and unquiesce using cast
Hi, @jichen:

Something I found, FYI:

Looking at the commit history here: https://review.openstack.org/#/c/138795
I noticed that unquiesce changed from call to cast between PS5 and PS6:
https://review.openstack.org/#/c/138795/5..6/nova/compute/rpcapi.py
and there are some comments on
https://review.openstack.org/#/c/138795/5/nova/compute/api.py@2235

According to the history comments, the reason for "use cast for unquiesce" is that PS6 added a _wait_for_snapshots_completion step to the unquiesce method, which would cause an rpc timeout before the snapshot finished if a call were used. As for "use call for quiesce", I think it's simply that quiesce is a short operation, so there was no need to change the call to a cast; in other words, either call or cast would work for the quiesce operation, so the author didn't change it.

Hope this helps, : )

Regards,
Yikun

----
Jiang Yikun(Kero)
Mail: yikunk...@gmail.com
Tel: (+86) 13572822142

2017-12-21 16:48 GMT+08:00 Chen CH Ji <jiche...@cn.ibm.com>:

> During review of https://review.openstack.org/#/c/529278/2, some questions
> came up on the methods for quiesce/unquiesce:
>
> https://github.com/openstack/nova/blob/master/nova/compute/rpcapi.py#L1140
> uses call for quiesce
> https://github.com/openstack/nova/blob/master/nova/compute/rpcapi.py#L1146
> uses cast for unquiesce
>
> Just curious, is there any special purpose for using different types here?
>
> Best Regards!
>
> Kevin (Chen) Ji 纪 晨
>
> Engineer, zVM Development, CSTL
> Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
> Phone: +86-10-82451493
> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
> Beijing 100193, PRC
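The call/cast distinction discussed above can be illustrated with a toy rpc shim. This is plain stdlib Python, not oslo.messaging, and the method names are just examples: a call blocks until the server side returns a result (so a slow handler risks an rpc timeout), while a cast returns immediately with no result.

```python
import queue
import threading
import time


class ToyRpc:
    """Toy illustration of rpc call vs cast semantics (not oslo.messaging)."""

    def __init__(self, handler):
        self.handler = handler

    def call(self, method, **kwargs):
        # call: block until the server side finishes and returns a result.
        reply = queue.Queue()
        threading.Thread(
            target=lambda: reply.put(self.handler(method, **kwargs))).start()
        return reply.get()  # waits; a slow handler means rpc timeout risk

    def cast(self, method, **kwargs):
        # cast: fire-and-forget, returns immediately with no result.
        threading.Thread(
            target=self.handler, args=(method,), kwargs=kwargs).start()
        return None


def compute_handler(method, **kwargs):
    if method == 'unquiesce_instance':
        time.sleep(0.2)  # e.g. waiting for snapshot completion
    return 'done: %s' % method


rpc = ToyRpc(compute_handler)
quiesce_result = rpc.call('quiesce_instance')   # short op, safe to wait
cast_result = rpc.cast('unquiesce_instance')    # long op, don't wait
```

This mirrors the reasoning above: quiesce is short, so waiting on a call is harmless; unquiesce gained a long wait in PS6, so it was switched to a cast.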