[openstack-dev] [nova] Support starting index when auto naming multiple instances

2014-09-16 Thread Yingjun Li
Currently when booting multiple instances, the instance display names will be 
something like 'test-1, test-2' if we set
multi_instance_display_name_template = %(name)s-%(count)s. Here is the problem:
if we need more instances
and want the instance names to start with 'test-3', there is no way to do that
now.

So a new template is introduced to solve the issue: '%(name)s-%(index)s'. If we
enter an instance name like `test-12`
when booting multiple instances, the display names of the remaining instances
will be incremented by 1, beginning with `12`.
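As a rough illustration of the difference between the two templates (plain Python %-interpolation, which the template syntax resembles; the index-parsing helper below is my own hypothetical sketch, not Nova's implementation):

```python
# Sketch: how '%(name)s-%(count)s' vs the proposed '%(name)s-%(index)s'
# could render instance display names. The index-extraction logic is
# hypothetical, for illustration only.

def names_with_count(name, num):
    # Existing behavior: the counter always starts at 1.
    return ['%(name)s-%(count)s' % {'name': name, 'count': i + 1}
            for i in range(num)]

def names_with_index(name, num):
    # Proposed behavior: if the entered name ends in '-<digits>', keep the
    # base name and continue numbering from that starting index.
    base, sep, tail = name.rpartition('-')
    if sep and tail.isdigit():
        start = int(tail)
    else:
        base, start = name, 1
    return ['%(name)s-%(index)s' % {'name': base, 'index': start + i}
            for i in range(num)]

print(names_with_count('test', 2))     # ['test-1', 'test-2']
print(names_with_index('test-12', 3))  # ['test-12', 'test-13', 'test-14']
```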

And here is the patch related to this:
https://review.openstack.org/#/c/38/. I'm proposing this
for discussion as Andrew suggested; any feedback would be appreciated, thanks!
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova V2 Quota API

2014-01-29 Thread Yingjun Li

On Jan 29, 2014, at 22:48, Vinod Kumar Boppanna vinod.kumar.boppa...@cern.ch 
wrote:

 Hi,
 
 In the Documentation, it was mentioned that there are two APIs to see the 
 quotas of a tenant.
 
 1. v2/{tenant_id}/os-quota-sets - Shows quotas for a tenant
  
 2. v2/{tenant_id}/os-quota-sets/{tenant_id}/{user_id} - Enables an admin to 
 show quotas for a specified tenant and a user
 
 I guess the first API can be used by a member of a tenant to get the quotas 
 of that tenant. The second one can be run by an admin to get the quotas of any 
 tenant or any user.
 
 But when, as a normal user, I run any of the below (after 
 authentication):
 
 $ nova --debug quota-show --tenant tenant_id   (tenant id of a project in 
 which this user is a member)
 it calls the second API, i.e. v2/{tenant_id}/os-quota-sets/{tenant_id}
 
 or even when I call the API directly:
 
 $ curl -i -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" 
 http://localhost:8774/v2/tenant_id/os-quota-sets/

I think the documentation is missing the tenant_id after os-quota-sets/.
It should be: curl -i -H "X-Auth-Token: $TOKEN" -H "Content-Type: 
application/json" http://localhost:8774/v2/tenant_id/os-quota-sets/tenant_id
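Spelled out, the two URL shapes under discussion look like this (placeholder IDs only; this is just illustrative string construction, not a client for the API):

```python
# Illustrative only: the two os-quota-sets URL shapes, with the trailing
# tenant_id included as noted above. 'tenant_id'/'user_id' are placeholders.
base = "http://localhost:8774/v2"
tenant_id = "tenant_id"
user_id = "user_id"

# Show a tenant's quotas:
tenant_quotas = "%s/%s/os-quota-sets/%s" % (base, tenant_id, tenant_id)
# Admin: show a specific user's quotas within the tenant:
user_quotas = "%s/%s/os-quota-sets/%s/%s" % (base, tenant_id, tenant_id, user_id)

print(tenant_quotas)  # .../v2/tenant_id/os-quota-sets/tenant_id
print(user_quotas)    # .../v2/tenant_id/os-quota-sets/tenant_id/user_id
```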

 It says Resource not found.
 
 So, is the first API available?
 
 Regards,
 Vinod Kumar Boppanna


Re: [openstack-dev] Nova V2 Quota API

2014-01-29 Thread Yingjun Li
I reported a bug here: https://bugs.launchpad.net/openstack-manuals/+bug/1274153

On Jan 29, 2014, at 23:33, Anne Gentle a...@openstack.org wrote:

 Hi can you point out where you're seeing documentation for the first without 
 tenant_id? 
 
 At http://api.openstack.org/api-ref-compute-ext.html#ext-os-quota-sets only 
 the tenant_id is documented. 
 
 This is documented identically at 
 http://docs.openstack.org/api/openstack-compute/2/content/ext-os-quota-sets.html
 
 Let us know where you're seeing the misleading documentation so we can log a 
 bug and fix it.
 Anne
 
 
 On Wed, Jan 29, 2014 at 8:48 AM, Vinod Kumar Boppanna 
 vinod.kumar.boppa...@cern.ch wrote:
 Hi,
 
 In the Documentation, it was mentioned that there are two APIs to see the 
 quotas of a tenant.
 
 1. v2/{tenant_id}/os-quota-sets - Shows quotas for a tenant
  
 2. v2/{tenant_id}/os-quota-sets/{tenant_id}/{user_id} - Enables an admin to 
 show quotas for a specified tenant and a user
 
 I guess the first API can be used by a member of a tenant to get the quotas 
 of that tenant. The second one can be run by an admin to get the quotas of any 
 tenant or any user.
 
 But when, as a normal user, I run any of the below (after 
 authentication):
 
 $ nova --debug quota-show --tenant tenant_id   (tenant id of a project in 
 which this user is a member)
 it calls the second API, i.e. v2/{tenant_id}/os-quota-sets/{tenant_id}
 
 or even when I call the API directly:
 
 $ curl -i -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" 
 http://localhost:8774/v2/tenant_id/os-quota-sets/
 It says Resource not found.
 
 So, is the first API available?
 
 Regards,
 Vinod Kumar Boppanna
 


Re: [openstack-dev] Quota Management

2014-04-04 Thread Yingjun Li
Glad to see this; I will be glad to contribute to it if the project moves 
forward.

On Apr 4, 2014, at 10:01, Cazzolato, Sergio J sergio.j.cazzol...@intel.com 
wrote:

 
 Glad to see that, for sure I'll participate in this session.
 
 Thanks
 
 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com] 
 Sent: Thursday, April 03, 2014 7:21 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Quota Management
 
 On Thu, 2014-04-03 at 14:41 -0500, Kevin L. Mitchell wrote:
 On Thu, 2014-04-03 at 19:16 +, Cazzolato, Sergio J wrote:
 Jay, thanks for taking ownership of this idea; we are really 
 interested in contributing to this, so what do you think are the next 
 steps to move on?
 
 Perhaps a summit session on quota management would be in order?
 
 Done:
 
 http://summit.openstack.org/cfp/details/221
 
 Best,
 -jay
 
 


Re: [openstack-dev] [tc][rally] Application for a new OpenStack Program: Performance and Scalability

2014-07-21 Thread Yingjun Li
Cool, Rally is really helpful for performance benchmarking and optimization of 
our OpenStack cloud.

On Jul 22, 2014, at 5:53, Boris Pavlovic bo...@pavlovic.me wrote:

 Hi Stackers and TC,
 
 The Rally contributor team would like to propose a new OpenStack program
 with a mission to provide scalability and performance benchmarking, and
 code profiling tools for OpenStack components.
 
 We feel we've achieved a critical mass in the Rally project, with an
 active, diverse contributor team. The Rally project will be the initial
 project in a new proposed Performance and Scalability program.
 
 Below, the details on our proposed new program.
 
 Thanks for your consideration,
 Boris
 
 
 
 [1] https://review.openstack.org/#/c/108502/
 
 
 Official Name
 =============
 
 Performance and Scalability
 
 Codename
 ========
 
 Rally
 
 Scope
 =====
 
 Scalability benchmarking, performance analysis, and profiling of
 OpenStack components and workloads
 
 Mission
 =======
 
 To increase the scalability and performance of OpenStack clouds by:
 
 * defining standard benchmarks
 * sharing performance data between operators and developers
 * providing transparency of code paths through profiling tools
 
 Maturity
 ========
 
 * Meeting logs http://eavesdrop.openstack.org/meetings/rally/2014/
 * IRC channel: #openstack-rally
 * Rally performance jobs are in the (Cinder, Glance, Keystone & Neutron)
 check pipelines.
 * > 950 commits over last 10 months
 * Large, diverse contributor community
  * 
 http://stackalytics.com/?release=juno&metric=commits&project_type=All&module=rally
  * http://stackalytics.com/report/contribution/rally/180
 
 * The unofficial lead of the project is Boris Pavlovic
  * Official election in progress.
 
 Deliverables
 ============
 
 Critical deliverables in the Juno cycle are:
 
 * extending Rally Benchmark framework to cover all use cases that are
 required by all OpenStack projects
 * integrating OSprofiler in all core projects
 * increasing functional & unit testing coverage of Rally.
 
 Discussion
 ==
 
 One of the major goals of Rally is to make it simple to share results of
 standardized benchmarks and experiments between operators and
 developers. When an operator needs to verify certain performance
 indicators meet some service level agreement, he will be able to run
 benchmarks (from Rally) and share with the developer community the
 results along with his OpenStack configuration. These benchmark results
 will assist developers in diagnosing particular performance and
 scalability problems experienced with the operator's configuration.
 
 Another interesting area is Rally & the OpenStack CI process. Currently,
 working on performance issues upstream tends to be a more social than
 technical process. We can use Rally in the upstream gates to identify
 performance regressions and measure improvement in scalability over
 time. The use of Rally in the upstream gates will allow a more rigorous,
 scientific approach to performance analysis. In the case of an
 integrated OSprofiler, it will be possible to get detailed information
 about API call flows (e.g. duration of API calls in different services).
 
 


Re: [openstack-dev] [Rally] PTL Candidacy

2014-07-21 Thread Yingjun Li
+1

On Jul 22, 2014, at 2:38, Boris Pavlovic bpavlo...@mirantis.com wrote:

 Hi, 
 
 I would like to propose my candidacy for Rally PTL.
 
 I started this project to make benchmarking of OpenStack as simple as possible. 
 This means not only load generation, but also an OpenStack-specific benchmark 
 framework, data analysis, and integration with gates. All these things should 
 make it simple for developers and operators to benchmark (perf, scale, stress 
 test) OpenStack, share experiments & results, and have a fast way to find 
 what produces a bottleneck, or just to ensure that OpenStack works well under 
 the load they are expecting. 
 
 I am the current unofficial PTL, and my responsibilities include:
 1) Adapting the Rally architecture to cover everybody's use cases
 2) Building & managing the work of the community
 3) Writing a lot of code
 4) Working on docs & wiki 
 5) Helping newcomers join the Rally team 
 
 As PTL I would like to continue this work and finish my initial goals:
 1) Ensure that everybody's use cases are fully covered
 2) Ensure there is no monopoly in the project
 3) Run Rally in the gates of all OpenStack projects (currently we have check jobs 
 in Keystone, Cinder, Glance & Neutron)
 4) Continue work on making the project more mature; this covers topics like 
 increasing unit and functional test coverage and making Rally absolutely safe 
 to run against any production cloud.
 
 
 Best regards,
 Boris Pavlovic


Re: [openstack-dev] Which program for Rally

2014-08-06 Thread Yingjun Li
From a user's perspective I do think Rally is more suitable for a
production-ready cloud, and that seems to be where it is focused. It makes it
very easy to evaluate whether the performance of the cloud is better after we
adjust some configs or do some other tuning. It also provides SLA support,
which may not be so powerful currently, but it's a good start. So I think Rally
is good enough to be in a separate program.

I totally agree that Tempest shouldn't try to cover everything; keeping things
simple makes them better.

On Aug 7, 2014, at 5:48, John Griffith john.griff...@solidfire.com wrote:

 I have to agree with Duncan here.  I also don't know if I fully understand 
 the limit in options.  Stress test seems like it could/should be different 
 (again overlap isn't a horrible thing) and I don't see it as siphoning off 
 resources so not sure of the issue.  We've become quite wrapped up in 
 projects, programs and the like lately and it seems to hinder forward 
 progress more than anything else.
 
 I'm also not convinced that Tempest is where all things belong, in fact I've 
 been thinking more and more that a good bit of what Tempest does today should 
 fall more on the responsibility of the projects themselves.  For example 
 functional testing of features etc, ideally I'd love to have more of that 
 fall on the projects and their respective teams.  That might even be 
 something as simple to start as saying if you contribute a new feature, you 
 have to also provide a link to a contribution to the Tempest test-suite that 
 checks it.  Sort of like we do for unit tests, cross-project tracking is 
 difficult of course, but it's a start.  The other idea is maybe functional 
 test harnesses live in their respective projects.
 
 Honestly I think who better to write tests for a project than the folks 
 building and contributing to the project.  At some point IMO the QA team 
 isn't going to scale.  I wonder if maybe we should be thinking about 
 proposals for delineating responsibility and goals in terms of functional 
 testing?
 
 
 
 
 On Wed, Aug 6, 2014 at 12:25 PM, Duncan Thomas duncan.tho...@gmail.com 
 wrote:
 I'm not following here - you complain about rally being monolithic,
 then suggest that parts of it should be baked into tempest - a tool
 that is already huge and difficult to get into. I'd rather see tools
 that do one thing well and some overlap than one tool to rule them
 all.
 
 On 6 August 2014 14:44, Sean Dague s...@dague.net wrote:
  On 08/06/2014 09:11 AM, Russell Bryant wrote:
  On 08/06/2014 06:30 AM, Thierry Carrez wrote:
  Hi everyone,
 
  At the TC meeting yesterday we discussed Rally program request and
  incubation request. We quickly dismissed the incubation request, as
  Rally appears to be able to live happily on top of OpenStack and would
  benefit from having a release cycle decoupled from the OpenStack
  integrated release.
 
  That leaves the question of the program. OpenStack programs are created
  by the Technical Committee, to bless existing efforts and teams that are
  considered *essential* to the production of the OpenStack integrated
  release and the completion of the OpenStack project mission. There are 3
  ways to look at Rally and official programs at this point:
 
  1. Rally as an essential QA tool
  Performance testing (and especially performance regression testing) is
  an essential QA function, and a feature that Rally provides. If the QA
  team is happy to use Rally to fill that function, then Rally can
  obviously be adopted by the (already-existing) QA program. That said,
  that would put Rally under the authority of the QA PTL, and that raises
  a few questions due to the current architecture of Rally, which is more
  product-oriented. There needs to be further discussion between the QA
  core team and the Rally team to see how that could work and if that
  option would be acceptable for both sides.
 
  2. Rally as an essential operator tool
  Regular benchmarking of OpenStack deployments is a best practice for
  cloud operators, and a feature that Rally provides. With a bit of a
  stretch, we could consider that benchmarking is essential to the
  completion of the OpenStack project mission. That program could one day
  evolve to include more such operations best practices tools. In
  addition to the slight stretch already mentioned, one concern here is
  that we still want to have performance testing in QA (which is clearly
  essential to the production of OpenStack). Letting Rally primarily be
  an operational tool might make that outcome more difficult.
 
  3. Let Rally be a product on top of OpenStack
  The last option is to not have Rally in any program, and not consider it
  *essential* to the production of the OpenStack integrated release or
  the completion of the OpenStack project mission. Rally can happily exist
  as an operator tool on top of OpenStack. It is built as a monolithic
  product: that approach works very well for external complementary
  solutions... Also be more 

Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-04 Thread Yingjun Li
+1. If we do so, a related bug may be solved as well: 
https://bugs.launchpad.net/nova/+bug/1323538

On Jun 3, 2014, at 21:29, Jay Pipes jaypi...@gmail.com wrote:

 Hi Stackers,
 
 tl;dr
 =
 
 Move CPU and RAM allocation ratio definition out of the Nova scheduler and 
 into the resource tracker. Remove the calculations for overcommit out of the 
 core_filter and ram_filter scheduler pieces.
 
 Details
 ===
 
 Currently, in the Nova code base, the thing that controls whether or not the 
 scheduler places an instance on a compute host that is already full (in 
 terms of memory or vCPU usage) is a pair of configuration options* called 
 cpu_allocation_ratio and ram_allocation_ratio.
 
 These configuration options are defined in, respectively, 
 nova/scheduler/filters/core_filter.py and 
 nova/scheduler/filters/ram_filter.py.
 
 Every time an instance is launched, the scheduler loops through a collection 
 of host state structures that contain resource consumption figures for each 
 compute node. For each compute host, the core_filter and ram_filter's 
 host_passes() method is called. In the host_passes() method, the host's 
 reported total amount of CPU or RAM is multiplied by this configuration 
 option, and the reported used amount of CPU or RAM is then subtracted from 
 that product. If the result is greater than or equal to the number of vCPUs 
 needed by the instance being launched, True is returned and the host 
 continues to be considered during scheduling decisions.
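That check can be sketched as follows (a paraphrase of the filter logic with illustrative names and numbers, not Nova's actual code):

```python
# Sketch of the overcommit check done by core_filter/ram_filter:
# limit = total * allocation_ratio; a host passes if the free amount
# (limit minus used) covers what the instance requests.

def host_passes(total, used, requested, allocation_ratio):
    limit = total * allocation_ratio
    free = limit - used
    return free >= requested

# 8 physical vCPUs with cpu_allocation_ratio = 16.0 -> 128 schedulable vCPUs.
print(host_passes(total=8, used=120, requested=4, allocation_ratio=16.0))  # True
print(host_passes(total=8, used=126, requested=4, allocation_ratio=16.0))  # False
```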
 
 I propose we move the definition of the allocation ratios out of the 
 scheduler entirely, as well as the calculation of the total amount of 
 resources each compute node contains. The resource tracker is the most 
 appropriate place to define these configuration options, as the resource 
 tracker is what is responsible for keeping track of total and used resource 
 amounts for all compute nodes.
 
 Benefits:
 
 * Allocation ratios determine the amount of resources that a compute node 
 advertises. The resource tracker is what determines the amount of resources 
 that each compute node has, and how much of a particular type of resource 
 have been used on a compute node. It therefore makes sense to put 
 calculations and definition of allocation ratios where they naturally belong.
 * The scheduler currently needlessly re-calculates total resource amounts on 
 every call to the scheduler. This isn't necessary: the total resource amounts 
 don't change unless a configuration option is changed on a compute 
 node (or host aggregate), and this calculation can be done more efficiently 
 once in the resource tracker.
 * Move more logic out of the scheduler
 * With the move to an extensible resource tracker, we can more easily evolve 
 to defining all resource-related options in the same place (instead of in 
 different filter files in the scheduler...)
 
 Thoughts?
 
 Best,
 -jay
 
 * Host aggregates may also have a separate allocation ratio that overrides 
 any configuration setting that a particular host may have
 


[openstack-dev] Multiple workers for neutron API server

2013-08-15 Thread Yingjun Li
Hi, all.

Currently, there is only one process running for neutron-server. That is not
enough to handle requests under heavy API load, so multiple
workers for neutron-server are urgently needed.

Please refer to
https://blueprints.launchpad.net/neutron/+spec/multi-workers-for-api-server to
get more details, and the BP needs approval from the core team.
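If the blueprint lands, the tuning would presumably be a single option in neutron.conf along these lines (the option name `api_workers` is an assumption here; the actual name is whatever the merged change defines):

```ini
# neutron.conf (hypothetical sketch): spawn several API worker processes
# instead of the single process described above.
[DEFAULT]
api_workers = 4
```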

Thanks!

Best,
Yingjun


Re: [openstack-dev] [Nova] Proposal to revert per-user-quotas

2013-08-20 Thread Yingjun Li
Thanks for addressing the issues. Regarding the bad state for fixed_ips and
floating_ips, I think we could set the user_id column to NULL when creating
the quota usage and reservation, so the usages for fixed_ips and
floating_ips would be synced within the project.
Does this make sense?
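A toy sketch of that idea (illustrative Python, not Nova code): resources not tied to a user would be recorded with user_id = NULL, so their usage is counted at the project level.

```python
# Illustrative sketch: quota usage rows keyed by (project_id, user_id),
# where user_id is None (NULL) for resources that are tracked per project
# rather than per user, as suggested above.
PER_PROJECT_RESOURCES = {'fixed_ips', 'floating_ips', 'networks'}

def usage_key(resource, project_id, user_id):
    if resource in PER_PROJECT_RESOURCES:
        # Synced within the project: no per-user attribution.
        return (project_id, None)
    return (project_id, user_id)

print(usage_key('fixed_ips', 'proj-a', 'alice'))  # ('proj-a', None)
print(usage_key('instances', 'proj-a', 'alice'))  # ('proj-a', 'alice')
```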


2013/8/20 Andrew Laski andrew.la...@rackspace.com

 The patch in question (https://review.openstack.org/#/c/28232/24)
 adds the ability to track quota usage on a per user basis within a project.
  I have run into two issues with it so far: the db migration is incomplete
 and leaves the data in a bad state, and the sync methods used during quota
 reservations no longer work for fixed_ips, floating_ips, and networks since
 they are not tied to a user.

 The db migration issue is documented at 
 https://bugs.launchpad.net/nova/+bug/1212798 but the
 tl;dr is that the quota usages that were in place before the migration is
 run can not be decremented and aren't fixed by the healing sync that
 occurs.  I sought to address this by introducing a new migration which
 performs a full sync of quota usages and removes the bad rows but that led
 me to the next issue.

 Some resources can't be synced properly because they're tracked per user
 in the quota table but they're not tied to a user so it's not feasible to
 grab a count of how many are being used by any particular user.  So right
 now the quota_usages table can get into a bad state with no good way to
 address it.

 Right now I think it will be better to revert this change and re-introduce
 it once these issues are worked out. Thoughts?

 As an addendum, the patch merged about a month ago on Jul 25th and looks
 to have some minor conflicts for a revert but should be minimally
 disruptive.



[openstack-dev] Review request for 40171

2013-09-02 Thread Yingjun Li
Hi, all,

Could any one from the nova core team take a look at the patch
https://review.openstack.org/#/c/40171/

Clean destroy for project quota

* Destroy user quotas under the project when deleting project quota.
* Fixes bug 1206479 https://code.launchpad.net/bugs/1206479

Change-Id: Id8391a2f6c25974b990c4a95a6bc99d696cd1c98
(https://review.openstack.org/#q,Id8391a2f6c25974b990c4a95a6bc99d696cd1c98,n,z)

Thanks

Yingjun


[openstack-dev] [nova] Needs approval again after rebase

2013-09-06 Thread Yingjun Li
Hi, the patch https://review.openstack.org/43583 was approved but failed to
get merged. Could any core reviewer take a look at it again after the rebase?

Thanks

Yingjun


Re: [openstack-dev] [openstack-operators][rally] What's new in Rally v0.0.2

2015-03-12 Thread Yingjun Li
Nice!

 On Mar 13, 2015, at 1:03 AM, Boris Pavlovic bo...@pavlovic.me wrote:
 
 Hi stackers, 
 
 For those who don't know, the Rally team started making releases. 
 
 There are 3 major reasons why we started doing releases: 
 
  * A lot of people started using Rally in their CI/CD. 
 
 Usually they don't like to depend on something from master,
 and would like to have smooth, testable upgrades between versions. 
 
  * Rally is used in the gates of many projects. 
 
 As you know, in Rally everything is pluggable. These plugins can be
 put in a project's tree. This is nice flexibility for all projects, but it
 blocks a lot of development of Rally. To resolve this issue we are going to
 allow projects to specify which version of Rally to run in their trees.
 This resolves 2 issues:
 1) project gates won't depend on Rally master
 2) projects have a smooth, no-downtime, testable way to switch to a newer
 version of Rally
 
  * Release notes - a simple way to track project changes. 
 
 
 
 Release stats: 
 +--------------+-----------------+
 | Commits      | **100**         |
 +--------------+-----------------+
 | Bug fixes    | **18**          |
 +--------------+-----------------+
 | Dev cycle    | **45 days**     |
 +--------------+-----------------+
 | Release date | **12/Mar/2015** |
 +--------------+-----------------+
 
 
 Release notes: 
 
 https://rally.readthedocs.org/en/latest/release_notes/v0.0.2.html
 
 
 Pypi:
 
 https://pypi.python.org/pypi/rally/0.0.2
 
 
 Future goals: 
 
 Our goal is to cut releases every 2 weeks. Since the project is quite stable 
 and has few bugs, we don't need a feature freeze at all, so I don't think it 
 will be hard to achieve this goal.
 
 
 Best regards,
 Boris Pavlovic 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Rally] Improve review process

2015-05-05 Thread Yingjun Li
Nice!

 On May 5, 2015, at 8:11 PM, Roman Vasilets rvasil...@mirantis.com wrote:
 
 Hi, Rally Team.
 I have created a Rally Gerrit dashboard that organizes patches into groups: 
 Critical for next release, Waiting for final approve, Bug fixes, Proposed 
 specs, Important patches, Ready for review, and Has -1 but passed tests. Please 
 use the link http://goo.gl/iRxA5t for your convenience. 
 The patch is here: https://review.openstack.org/#/c/179610/ 
 It was made with gerrit-dash-creator.
 The first group contains the patches that need to merge before the nearest 
 release. The content of the next three groups is obvious from the titles. 
 Important patches are patches chosen (starred) by Boris Pavlovic or 
 Mikhail Dubov. Ready for review - patches that are waiting for attention. And 
 the last section contains patches that have a -1 mark but passed CI.
 
 Roman Vasilets, Mirantis Inc.
 Intern Software Engineer


Re: [openstack-dev] [neutron][lbaasv2]How to configure lbaasv2 in devstack

2015-07-20 Thread Yingjun Li
Currently Horizon doesn't support LBaaS v2; there is a related blueprint,
but it is not implemented yet:
https://blueprints.launchpad.net/horizon/+spec/lbaas-v2-panel

2015-07-21 9:49 GMT+08:00 jiangshan0...@139.com jiangshan0...@139.com:

 Hi all,

  I have configured these lines in my devstack localrc

 # Load the external LBaaS plugin.
 enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas

 ## Neutron - Load Balancing
 ENABLED_SERVICES+=,q-lbaasv2

 # Horizon (Dashboard UI) - (always use the trunk)
 ENABLED_SERVICES+=,horizon

 # Neutron - Networking Service
 # If Neutron is not declared the old good nova-network will be used
 ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,neutron


 And I can use lbaasv2 through the CLI, but I do not have the
 load balancer pages in the dashboard (the other pages like routers and
 networks are all right).

 Is there anything wrong in my configuration? Or does some
 configuration need to be done in Horizon to use lbaasv2?

 Thanks a lot for your help!

 --




Re: [openstack-dev] [nova] Supporting volume_type when booting from volume

2017-05-23 Thread Yingjun Li
It's definitely a nice feature for end users; we actually implemented it
on our own because we needed it and
Nova doesn't support it.

Yingjun

> On May 24, 2017, at 6:58 AM, Jay Bryant  wrote:
> 
> 
> On 5/23/2017 9:56 AM, Duncan Thomas wrote:
>> 
>> 
>> On 23 May 2017 4:51 am, "Matt Riedemann" wrote:
>> 
>> Is this really something we are going to have to deny at least once per 
>> release? My God how is it that this is the #1 thing everyone for all time 
>> has always wanted Nova to do for them?
>> 
>> Is it entirely unreasonable to turn the question around and ask why, given 
>> it is such a commonly requested feature, the Nova team are so resistant to 
>> it?
>> 
>> 
> 
> I am going to jump into the fray here ...
> 
> I think that at some point we need to do a cost/benefit analysis.  If 
> customers really want this, then maybe it is worth the potential technical 
> debt.  Going down the route of hacking something together from the client seems 
> to potentially incur more technical debt and create a worse UX.
> 
> At the risks of having things thrown at me, I am going to say that this could 
> have a number of benefits.  It could be leveraged by the Cinder Ephemeral 
> driver that is being considered.  Volume types associated with compute hosts 
> could be used to ensure use of storage local to the compute host that is 
> managed by Cinder.
> 
> Anyway, that is my $0.02.
> 
> Jay