Re: [openstack-dev] [QA] Proposed changes to the team meeting time

2017-05-28 Thread Masayuki Igawa
Thanks Andrea,

+1
My only concern is that this change may not be good for you, right? Of
course, you don't need to attend both meetings, though :)
-- 
  Masayuki Igawa



On Fri, May 26, 2017, at 10:41 AM, zhu.fang...@zte.com.cn wrote:
> +1, thanks!
> 
> zhufl
> 
> Original Mail
> *Sender: * <andrea.fritt...@gmail.com>;
> *To: * <openstack-dev@lists.openstack.org>;
> *Date: *2017/05/25 21:19
> *Subject: *[openstack-dev] [QA] Proposed changes to the team meeting time
> 
> Hello team,
> 
> Our current QA team meeting schedule alternates between 9:00 UTC and
> 17:00 UTC. The 9:00 meeting is a bit towards the end of the day for our
> contributors in APAC, so I'm proposing to move the meeting to 8:00 UTC.
> 
> Please respond with +1 / -1 and/or comments. I will leave the poll
> open for about 10 days to make sure everyone interested gets a chance
> to comment.
> 
> Thank you
> 
> andrea
> 
> -
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] No team meeting today - 05/29/2017

2017-05-28 Thread Renat Akhmerov
Hi,

We’re cancelling today’s meeting since most of the key members can’t attend due 
to holidays in UK and US.

Team, please keep in mind that next week we will release Pike-2. Try to wrap up 
your most important work this week.

Thanks

Renat Akhmerov
 @Nokia 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron]Nova cells v2+Neutron+Tricircle, it works

2017-05-28 Thread joehuang
Hello,

There was a session at the OpenStack Boston summit about Neutron multi-site 
deployment, discussing how to make Neutron cells-aware [1].

We have run an experiment showing how Nova cells v2 + Neutron + Tricircle can 
work together very well:

Following the guide [2], you will get a two-cell environment: cell1 in node1 
and cell2 in node2, with nova-api/scheduler/conductor/placement in node1. To 
simplify the integration, the region for nova-api is registered as 
CentralRegion in Keystone.

At the same time, the Tricircle devstack plugin will install a RegionOne 
Neutron in node1 to work with cell1 and a RegionTwo Neutron in node2, i.e., 
each cell gets one local Neutron. The Neutron endpoint URL on each cell's 
compute node is configured to point at the local Neutron, and each local 
Neutron is configured with the Tricircle local Neutron plugin.

As mentioned above, nova-api is registered in CentralRegion. The Tricircle 
devstack plugin also starts a central Neutron server and registers it in 
CentralRegion (the same region as nova-api); the central Neutron server is 
installed with the Tricircle central Neutron plugin. In nova-api's 
configuration file, nova.conf, the Neutron endpoint URL is set to the central 
Neutron server's URL.

In both the central Neutron server and each local Neutron, the Nova endpoint 
URL is pointed at the central nova-api URL; this is used for callback 
messages to nova-api.
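
To make the wiring concrete, here is a rough sketch of the endpoint 
configuration described above (the URLs and ports are illustrative examples, 
not values taken from the guide):

```ini
# nova.conf on node1 (nova-api): Neutron API calls go to the central
# Neutron server registered in CentralRegion.
[neutron]
url = http://node1:20001/
region_name = CentralRegion

# nova.conf on a cell's compute node: Neutron calls go to that cell's
# local Neutron instead, e.g. RegionOne for cell1.
# [neutron]
# url = http://node1:9696/
# region_name = RegionOne
```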

After the installation you have one CentralRegion with one Nova API and one 
central Neutron server, and, with regard to Nova multi-cells, each cell is 
accompanied by one local Neutron. (A 1:1 mapping between cell and local 
Neutron is not required; multiple cells may map to one local Neutron.)

If you create a network/router without specifying an availability zone, you 
get a global network/router that instances from any cell can attach to. If 
you create a network/router with an availability zone specified, you get a 
scoped network/router, i.e., one that is only present in the corresponding 
cells.

Note:
1. Because Nova multi-cells is still under development, there may be some 
issues in deployment and usage; some typical troubleshooting steps are 
provided in the document [2].
2. The RegionOne and RegionTwo names registered by the local Neutrons can be 
changed to better names if needed, for example "cell1-subregion", 
"cell2-subregion", etc.

Feedback and contributions are welcome to make this mode work.

[1]http://lists.openstack.org/pipermail/openstack-dev/2017-May/116614.html
[2]https://docs.openstack.org/developer/tricircle/installation-guide.html#work-with-nova-cell-v2-experiment

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] - Summit "Making Neutron easy for people who want basic Networking" summary

2017-05-28 Thread Sukhdev Kapur
Folks,

I moderated the above-referenced session at the Boston Summit (sorry for
posting the summary a bit late, for personal reasons).

The etherpad of the session [1] gives the details of the discussion.
Based on those discussions, there were two critical takeaways from this
session:

   - IP address allocation (DHCP) for default/basic configurations: for
   simpler deployments, it would be desirable to have a straightforward
   method to specify the IP address for an instance - e.g. one should be
   able to use a config drive to configure the address and pass it to
   Neutron.
   - The documentation for simpler deployments is a bit confusing. It is
   OVS-centric and does not provide clear documentation or steps for non-OVS
   deployments. Perhaps it should be updated so that users who are not
   familiar with Neutron can deploy instances by answering a few simple
   questions in a single config file. I have filed an RFE [2] to address this.
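
(As a side note on the config-drive point: today the flow is the reverse -
Neutron allocates the address and the config drive merely delivers it to the
guest as network_data.json. A minimal example of that format, with
hypothetical addresses and IDs, looks like this:)

```json
{
  "links": [
    {"id": "tap-example", "type": "vif",
     "ethernet_mac_address": "fa:16:3e:00:00:01"}
  ],
  "networks": [
    {"id": "network0", "type": "ipv4", "link": "tap-example",
     "ip_address": "192.0.2.10", "netmask": "255.255.255.0",
     "network_id": "example-net-uuid"}
  ],
  "services": [
    {"type": "dns", "address": "192.0.2.1"}
  ]
}
```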


1. https://etherpad.openstack.org/p/pike-neutron-making-it-easy
2. https://bugs.launchpad.net/neutron/+bug/1694165

regards
-Sukhdev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [panko] dropping hbase driver support

2017-05-28 Thread Mehdi Abaakouk



On 2017-05-28 19:18, Julien Danjou wrote:
> On Fri, May 26 2017, gordon chung wrote:
>
>> as all of you know, we moved all storage out of ceilometer so it
>> handles only data generation and normalisation. there seems to be very
>> little contribution to panko, which handles metadata indexing and event
>> storage, so given how little it's being adopted and how few resources
>> are being put on supporting it, i'd like to propose to drop hbase
>> support as a first step in making the project more manageable for
>> whatever resource chooses to support it.
>
> No objection from me.

It's OK for me too.

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [panko] dropping hbase driver support

2017-05-28 Thread Julien Danjou
On Fri, May 26 2017, gordon chung wrote:

> as all of you know, we moved all storage out of ceilometer so it
> handles only data generation and normalisation. there seems to be very
> little contribution to panko, which handles metadata indexing and event
> storage, so given how little it's being adopted and how few resources
> are being put on supporting it, i'd like to propose to drop hbase
> support as a first step in making the project more manageable for
> whatever resource chooses to support it.

No objection from me.

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova][scheduler] Anyone relying on the host_subset_size config option?

2017-05-28 Thread Belmiro Moreira
This option is useful in large deployments.

Our scheduler strategy is to "pack"; however, we are not interested in this
strategy per individual compute node but per set of them. One of the
advantages is that when a user creates consecutive instances in the same
AVZ, it is unlikely that they will be started on the same compute node.

Also, a problem on the "best" compute node doesn't completely block the
creation of new instances when not using "retry".



Belmiro

--

CERN

On Fri, May 26, 2017 at 7:17 PM, Edward Leafe  wrote:

> [resending to include the operators list]
>
> The host_subset_size configuration option was added to the scheduler to
> help eliminate race conditions when two requests for a similar VM would be
> processed close together, since the scheduler’s algorithm would select the
> same host in both cases, leading to a race and a likely failure to build
> for the second request. By randomly choosing from the top N hosts, the
> likelihood of a race would be reduced, leading to fewer failed builds.
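>
(For reference, the behaviour of this option can be sketched roughly as
follows - illustrative code only, not the actual nova scheduler:)

```python
import random

def select_host(weighed_hosts, host_subset_size=1):
    """Pick a host at random from the top-N entries of a weighed list.

    weighed_hosts is assumed to be sorted best-first by the scheduler's
    weighers; host_subset_size=1 reproduces strict best-host selection.
    """
    if not weighed_hosts:
        raise ValueError("no hosts available")
    # Restrict to the top N hosts, then choose one at random so two
    # near-simultaneous identical requests usually land on different hosts.
    subset = weighed_hosts[:max(1, host_subset_size)]
    return random.choice(subset)

hosts = ["host-a", "host-b", "host-c", "host-d"]  # best first
print(select_host(hosts))                          # always "host-a"
print(select_host(hosts, host_subset_size=3))      # one of host-a/b/c
```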
>
> Current changes in the scheduling process now have the scheduler claiming
> the resources as soon as it selects a host. So in the case above with 2
> similar requests close together, the first request will claim successfully,
> but the second will fail *while still in the scheduler*. Upon failing the
> claim, the scheduler will simply pick the next host in its weighed list
> until it finds one that it can claim the resources from. So the
> host_subset_size configuration option is no longer needed.
>
> However, we have heard that some operators are relying on this option to
> help spread instances across their hosts, rather than using the RAM
> weigher. My question is: will removing this randomness from the scheduling
> process hurt any operators out there? Or can we safely remove that logic?
>
>
> -- Ed Leafe
>
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev