Re: [openstack-dev] Mitaka: Identity V3 status and observations using domains

2016-08-19 Thread Nick Papadonis
Note that I'm creating all of this using the OS_TOKEN instead of going
through the API.  I wonder if that is causing the issue?

So far, my suspicion is that IDv3 with domains isn't fully baked into the
Mitaka bits.

On Fri, Aug 19, 2016 at 3:19 PM, Nick Papadonis 
wrote:

> Hi Folks,
>
> I'm playing with IDv3 in Mitaka and it doesn't seem to work as I'd
> expect.  Hopefully I'm understanding the way domains work.  The strategy is
> to create a top level cloud_admin_dom and super user.  Then create a
> default domain and admin user and default project and admin user.  Then
> create another dom_0001 to test projects in a different domain.
>
> The cloud_admin user works fine and appears to have privileges to do most
> things.  Now, when I use the default domain admin user or default domain
> default project admin user, I either get authentication issues from
> Keystone or the policy json isn't allowing the default domain admin (not in
> a project) to do things like list projects or users.  It appears folks have
> used this a few different ways, and I'd appreciate insight from your experience.
>
> As I understand it, the process (please correct me) is:
>
> function get_id () {
> echo `"$@" | grep ' id ' | awk '{print $4}'`
> }
>
> # Create admin role
> admin_role_id=$(get_id openstack role create admin)
>
> # Create Cloud Admin Domain
> cloud_admin_dom_id=$(get_id openstack domain create \
> --description "Cloud Admin Domain" cloud_admin_dom)
>
> # Update policy for domain ID
> cat /etc/keystone/policy.v3cloudsample.json | \
> sed -e "s/admin_domain_id/${cloud_admin_dom_id}/g" >
> /etc/keystone/policy.json
>
> # Create admin user for cloud admin domain
> cloud_admin_user_id=$(get_id openstack user create \
> --password secrete \
> --domain "${cloud_admin_dom_id}" \
> --description "Cloud Admin Domain Admin" \
> admin_cloud_admin_dom)
>
> # Assign admin role to admin user
> openstack role add --domain "${cloud_admin_dom_id}" \
>--user "${cloud_admin_user_id}" \
>"${admin_role_id}"
>
> # Create default domain (for legacy services)
> def_dom_id=$(get_id openstack domain create \
> --description "Default Domain" default)
>
> # Create admin user for default domain
> def_user_id=$(get_id openstack user create \
> --password secrete \
> --domain "${def_dom_id}" \
> --description "Default Domain Admin" \
> admin_default_dom)
>
> # Assign admin role to admin user
> openstack role add --domain "${def_dom_id}" \
>--user "${def_user_id}" \
>--inherited \
>"${admin_role_id}"
>
> # Create default project in default domain (for legacy services)
> project_id=$(get_id openstack project create "${DEFAULT_PROJECT}" \
> --description "Default Project" --domain "${cloud_admin_dom_id}"
> --enable)
>
> # Create admin user for default project in default domain
> user_id=$(get_id openstack user create admin_dom_default_proj_default \
> --project "${project_id}" \
> --password secrete \
> --domain "${def_dom_id}")
>
> # Assign admin role to admin user in default domain and default project
> openstack role add --project "${project_id}" \
>--user "${user_id}" \
>--inherited \
>"${admin_role_id}"
>
> # Create service role
> service_role_id=$(get_id openstack role create service)
>
> # Create service project in default domain
> project_id=$(get_id openstack project create service \
> --description "Service Tenant" --domain "${def_dom_id}" --enable)
>
> # Create service project admin in default domain
> user_id=$(get_id openstack user create admin_default_dom_proj_service \
> --project "${project_id}" \
> --password secrete \
> --domain "${def_dom_id}")
>
> # Assign admin role to admin user in service project
> openstack role add --domain "${def_dom_id}" \
>--user "${user_id}" \
>--inherited \
>"${admin_role_id}"
>
> # First other Domain - dom_0001
> dom_id=$(get_id openstack domain create \
> --description "Domain dom_0001" dom_0001)
>
> # Create admin user for dom_0001
> user_id=$(get_id openstack user create \
> --password secrete \
> --domain "${dom_id}" \
> --description "dom_0001 Admin" \
> admin_dom_0001)
>
> # Assign admin role to admin_dom_0001 in domain dom_0001
> openstack role add --domain "${dom_id}" \
>--user "${user_id}" \
>--user-domain "${dom_id}" \
>--inherited \
>"${admin_role_id}"
>
> ==
>
> Also note, when adding:
> #--project-domain "${cloud_admin_dom_id}" \
>  #--user-domain "${def_dom_id}" \
>
> to openstack role add, I'm finding that OSC complains the user ID doesn't
> exist in that specified domain, when OSC user list --log shows it does. Odd

Re: [openstack-dev] mod_wsgi: what services are people using with it?

2016-08-19 Thread Nick Papadonis
On Fri, Aug 19, 2016 at 5:34 PM, John Dickinson  wrote:

>
>
> On 17 Aug 2016, at 15:27, Nick Papadonis wrote:
>
> > comments
> >
> > On Wed, Aug 17, 2016 at 4:53 PM, Matthew Thode <
> prometheanf...@gentoo.org>
> > wrote:
> >
> >> On 08/17/2016 03:52 PM, Nick Papadonis wrote:
> >>
> >>> Thanks for the quick response!
> >>>
> >>> Glance worked for me in Mitaka.  I had to specify 'chunked transfers'
> >>> and increase the size limit to 5GB.  I had to pull some of the WSGI
> >>> source from glance and alter it slightly to call from Apache.
> >>>
> >>> I saw that Nova claims mod_wsgi is 'experimental'.  Interested in whether
> >>> it's really experimental or folks use it in production.
> >>>
> >>> Nick
> >>
> >> ya, cinder is experimental too (at least in my usage) as I'm using
> >> python3 as well :D  For me it's a case of having to test the packages I
> >> build.
> >>
> >>
> > I converted Cinder to mod_wsgi because from what I recall, I found that
> SSL
> > support was removed from the Eventlet server.  Swift endpoint outputs a
> log
> > warning that Eventlet SSL is only for testing purposes, which is another
> > reason why I turned to mod_wsgi for that.
>
> FWIW, most prod Swift deployments I know of use HAProxy or stud to
> terminate TLS before forwarding the http stream to a proxy endpoint (local
> or remote). Especially when combined with a server that has AES-NI, this
> gives good performance.


Thanks.  I'd be interested if anyone has done a performance comparison of
HAProxy vs mod_wsgi to terminate.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] bugdaystats for different time ranges

2016-08-19 Thread Ken'ichi Ohmichi
Hi Infra-team,

(We discussed this a bit on IRC, but it would be great to get wider feedback)

Recently we have been triaging Tempest bugs on Launchpad, and that work
has been good for cleaning up old bugs and bugs already fixed by various
patches.
There is bugdaystats[1] for showing progress on bug days, which is good
for motivating us.
The current bugdaystats shows only the last several days, statically; can we
add/implement more time ranges, like daily over 30 days, or per release?
Bugs are reported randomly and bug triage also happens every day, so I
guess different time ranges would help us understand the bug situation
across a development cycle.

Thanks
Ken Ohmichi

---
[1]: http://status.openstack.org/bugday/



Re: [openstack-dev] [heat] convergence cancel messages

2016-08-19 Thread Zane Bitter

On 19/08/16 09:55, Anant Patil wrote:


What I'm suggesting is very close to that:

(1) stack-cancel-update <stack> will start another update using the
previous template/environment. We'll start rolling back; in-progress
resources will be allowed to complete normally.
(2) stack-cancel-update <stack> --no-rollback will set the
traversal_id to None so no further resources will be updated;
in-progress resources will be allowed to complete normally.
(3) resource-mark-unhealthy <stack> <resource> [<resource> ...]
Kill any threads running a CREATE or UPDATE on the given resources, mark
as CHECK_FAILED if they are not already in UPDATE_FAILED, don't do
anything else. If the resource was in progress, the stack won't progress
further, other resources currently in-progress will complete, and if
rollback is enabled and no other traversal has started then it will roll
back to the previous template/environment.

I have started implementation of the above three mechanisms. The first
two are implemented in https://review.openstack.org/#/c/357618


This looks great, thanks! That covers both our internal use of 
update-cancel and the current user API update-cancel nicely.



Note that the (2) needs a change in heat client (openstack client?) to
have a --no-rollback option.


Yeah, and also a (very minor) REST API change. I'd be in favour of 
trying to get this in before Newton FF, it'd be really useful to have.



(3) is a bit of long haul, and needs:
https://review.openstack.org/343076 : Adds mechanism to interrupt
convergence worker threads
https://review.openstack.org/301483 : Mechanism to send cancel message
and cancel worker upon receiving messages


Another thing I forgot is that when we delete a stack, we cancel all the 
threads working on it, so that any in-progress update/create used to be 
stopped (you're about to delete that stuff anyway, so you might as well 
not bother with anything else), and the lack of this functionality in 
convergence is causing problems for some users. It looks like this patch 
is intended to build on the previous two to resolve that:


https://review.openstack.org/#/c/354000/

(This is actually going to be much better than the old behaviour, 
because it turned out that cancelling threads was very much not the 
right thing to do, and it's much better to stop them at a yield point.)
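The yield-point idea can be illustrated outside of Heat with a toy worker
that checks a cancel flag at each safe point instead of being killed
mid-step (this is only a sketch of the concept, not Heat's actual code):

```python
import threading

def worker(steps, cancel_event, log):
    # Each loop iteration is a "yield point": a safe place to notice
    # cancellation, rather than being killed in the middle of an operation.
    for step in steps:
        if cancel_event.is_set():
            log.append('cancelled')
            return
        log.append('done:%s' % step)

cancel = threading.Event()
log = []
worker(['create'], cancel, log)            # runs to completion
cancel.set()                               # request cancellation
worker(['update', 'delete'], cancel, log)  # stops at the first yield point
print(log)                                 # ['done:create', 'cancelled']
```

The in-progress step always finishes cleanly; only subsequent steps are
skipped, which is the behaviour described above for in-progress resources.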


So I think all of the above apart from the API/client change for (2) are 
going to be critical to land for Newton. (They're all in a sense bugs at 
the moment.)



Apart from the above two, I am implementing the actual patch which will
leverage the above two to complete resource-mark-unhealthy feature in
convergence.


Great! Hopefully people will rarely need this, but it'll be much more 
comfortable unleashing convergence on the world if we know that this 
exists as a circuit-breaker in case something does get stuck.


Let me know if I can help with any of this stuff without stepping on any 
toes (time zones unfortunately make it hard for you and I to 
co-ordinate). I'll at least try to circle back regularly to the reviews.


cheers,
Zane.



Re: [openstack-dev] mod_wsgi: what services are people using with it?

2016-08-19 Thread John Dickinson


On 17 Aug 2016, at 15:27, Nick Papadonis wrote:

> comments
>
> On Wed, Aug 17, 2016 at 4:53 PM, Matthew Thode 
> wrote:
>
>> On 08/17/2016 03:52 PM, Nick Papadonis wrote:
>>
>>> Thanks for the quick response!
>>>
>>> Glance worked for me in Mitaka.  I had to specify 'chunked transfers'
>>> and increase the size limit to 5GB.  I had to pull some of the WSGI
>>> source from glance and alter it slightly to call from Apache.
>>>
>>> I saw that Nova claims mod_wsgi is 'experimental'.  Interested in it's
>>> really experimental or folks use it in production.
>>>
>>> Nick
>>
>> ya, cinder is experimental too (at least in my usage) as I'm using
>> python3 as well :D  For me it's a case of having to test the packages I
>> build.
>>
>>
> I converted Cinder to mod_wsgi because from what I recall, I found that SSL
> support was removed from the Eventlet server.  Swift endpoint outputs a log
> warning that Eventlet SSL is only for testing purposes, which is another
> reason why I turned to mod_wsgi for that.

FWIW, most prod Swift deployments I know of use HAProxy or stud to terminate 
TLS before forwarding the http stream to a proxy endpoint (local or remote). 
Especially when combined with a server that has AES-NI, this gives good 
performance.
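For reference, the TLS-termination pattern described above looks roughly
like this in HAProxy (the cert path and backend address are placeholders;
treat it as a sketch, not a tuned production config):

```
frontend swift_tls
    bind *:443 ssl crt /etc/haproxy/certs/swift.pem   # placeholder cert path
    mode http
    default_backend swift_proxy

backend swift_proxy
    mode http
    server proxy1 127.0.0.1:8080 check   # local swift-proxy endpoint
```

TLS is terminated at the frontend; the proxy server behind it only ever
sees plain HTTP, so Eventlet's SSL support never comes into play.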

--John



>
> Nick




Re: [openstack-dev] [TripleO] Easier way of trying TripleO

2016-08-19 Thread Dan Prince
On Tue, 2013-11-19 at 16:40 -0500, James Slagle wrote:
> I'd like to propose an idea around a simplified and complementary
> version of
> devtest that makes it easier for someone to get started and try
> TripleO.  
> 
> The goal being to get people using TripleO as a way to experience the
> deployment of OpenStack, and not necessarily a way to get an
> experience of a
> useable OpenStack cloud itself.
> 
> To that end, we could:
> 
> 1) Provide an undercloud vm image so that you could effectively skip
> the entire
>    seed setup.

The question here for me is what are you proposing to use to create
this image? Is it something that could live in tripleo-puppet-elements
like we manage the overcloud package dependencies? Or is it more than
this? I'd like to not have to build another alternate tool to help
manage this.

What if instead of an undercloud image we just created the undercloud
locally out of containers? Similar to what I've recently proposed with
the heat all-in-one installer here: https://dprince.github.io/tripleo-onward-dark-owl.html
we could leverage the containers composable service
work for the overcloud in t-h-t and get containers support in the
undercloud for free.

If you still want to run an undercloud VM you could configure things
that way locally, or provide an image with containers in it I guess
too.

I'm fine supporting an easier developer case for TripleO but I'd like
to ultimately have less duplication across the maintenance of the
Undercloud and Overcloud as part of our solutions for these things too.

Dan

> 2) Provide pre-built downloadable images for the overcloud and
> deployment
>    kernel and ramdisk.
> 3) Instructions on how to use these images to deploy a running
>    overcloud.
> 
> Images could be provided for Ubuntu and Fedora, since both those work
> fairly
> well today.
> 
> The instructions would look something like:
> 
> 1) Download all the images.
> 2) Perform initial host setup.  This would be much smaller than what
> is
>    required for devtest and off the top of my head would mostly be:
>    - openvswitch bridge setup
>    - libvirt configuration
>    - ssh configuration (for the baremetal virtual power driver)
> 3) Start the undercloud vm.  It would need to be bootstrapped with an
> initial
>    static json file for the heat metadata, same as the seed works
> today.
> 4) Any last mile manual configuration, such as nova.conf edits for
> the virtual
>    power driver user.
> 5) Use tuskar+horizon (running on the undercloud) to deploy the
> overcloud.
> 7) Overcloud configuration (don't see this being much different than
> what is
>    there today).
> 
> All the openstack clients, heat templates, etc., are on the
> undercloud vm, and
> that's where they're used from, as opposed to from the host (results
> in less stuff
> to install/configure on the host).
> 
> We could also provide instructions on how to configure the undercloud
> vm to
> provision baremetal.  I assume this would be possible, given the
> correct
> bridged networking setup.
> 
> It could make sense to use an all in one overcloud for this as well,
> given it's
> going for simplification.
> 
> Obviously, this approach implies some image management on the
> community's part,
> and I think we'd document and use all the existing tools (dib,
> elements) to
> build images, etc.
> 
> Thoughts on this approach?  
> 
> --
> -- James Slagle
> --
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [vote][kolla] Core nomination proposal for Eduardo Gonzalez Gutierrez (egonzales90 on irc)

2016-08-19 Thread Steven Dake (stdake)
Eduardo,

There was a unanimous response, with no veto votes, on this core reviewer
nomination.  As a result, voting is closed early.

Welcome to the Kolla core review team!  Looking forward to more good work from you! 
 I have added you to the kolla-core team in gerrit.  You should have access to 
full +2/-2 voting rights on patches, as well as voting rights on Kolla's policy 
votes.  If you need a primer ping me or another core reviewer.

Regards
-steve


From: Steven Dake
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, August 18, 2016 at 4:09 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [vote][kolla] Core nomination proposal for Eduardo 
Gonzalez Gutierrez (egonzales90 on irc)

Kolla Core Review Team:

I am nominating Eduardo for the core reviewer team.  His reviews are fantastic, 
as I'm sure most of you have seen after looking over the review queue.  His 30 
day stats place him at #3 by review count [1] and 60 day stats [2] at #4 by 
review count.  He is also first to review a significant amount of the time - 
which is impressive for someone new to Kolla.  He participates in IRC and he 
has done some nice code contribution as well [3] including the big chunk of 
work on enabling Senlin in Kolla, the dockerfile customizations work, as well 
as a few documentation fixes.  Eduardo is not affiliated with any particular 
company.  As a result he is not full time on Kolla like many of our other core 
reviewers.  The fact that he is part time and still doing fantastically well at 
reviewing is a great sign of things to come :)

Consider this nomination as my +1 vote.

Voting is open for 7 days until August 24th.  Joining the core review team 
requires a majority of the core review team to approve within a 1 week period 
with no veto (-1) votes.  If a veto or unanimous decision is reached prior to 
August 24th, voting will close early.

Regards
-steve

[1] http://stackalytics.com/report/contribution/kolla/30
[2] http://stackalytics.com/report/contribution/kolla/60
[3] 
https://review.openstack.org/#/q/owner:%22Eduardo+Gonzalez+%253Cdabarren%2540gmail.com%253E%22


[openstack-dev] Mitaka: Identity V3 status and observations using domains

2016-08-19 Thread Nick Papadonis
Hi Folks,

I'm playing with IDv3 in Mitaka and it doesn't seem to work as I'd expect.
Hopefully I'm understanding the way domains work.  The strategy is to
create a top level cloud_admin_dom and super user.  Then create a default
domain and admin user and default project and admin user.  Then create
another dom_0001 to test projects in a different domain.

The cloud_admin user works fine and appears to have privileges to do most
things.  Now, when I use the default domain admin user or default domain
default project admin user, I either get authentication issues from
Keystone or the policy json isn't allowing the default domain admin (not in
a project) to do things like list projects or users.  It appears folks have
used this a few different ways, and I'd appreciate insight from your experience.

As I understand it, the process (please correct me) is:

function get_id () {
echo `"$@" | grep ' id ' | awk '{print $4}'`
}
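As an aside, get_id screen-scrapes the human-readable table output; the
parsing can be reproduced against a canned row like the one below (the ID
value is made up for illustration). In real use, `openstack ... -f value -c id`
avoids the scraping entirely, if your OSC version supports those flags.

```shell
# A canned row in the shape of OSC's table output (ID is made up).
sample='| id | 0123456789abcdef |'
# Same pipeline get_id uses: whitespace-split fields make the ID field $4.
parsed=$(echo "$sample" | grep ' id ' | awk '{print $4}')
echo "$parsed"
```

If a field ever contains spaces, the `$4` assumption breaks, which is one
reason the machine-readable formatter is the safer choice.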

# Create admin role
admin_role_id=$(get_id openstack role create admin)

# Create Cloud Admin Domain
cloud_admin_dom_id=$(get_id openstack domain create \
--description "Cloud Admin Domain" cloud_admin_dom)

# Update policy for domain ID
cat /etc/keystone/policy.v3cloudsample.json | \
sed -e "s/admin_domain_id/${cloud_admin_dom_id}/g" >
/etc/keystone/policy.json
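The substitution step can be sanity-checked on a single sample line before
touching the real policy file (the rule text below only imitates the
policy.v3cloudsample.json style, and the domain ID is made up):

```shell
# Made-up domain ID standing in for the real cloud_admin_dom_id.
cloud_admin_dom_id=d0m41n1dd0m41n1d
# A line imitating the admin_domain_id placeholder in policy.v3cloudsample.json.
rule='"cloud_admin": "rule:admin_required and domain_id:admin_domain_id"'
substituted=$(echo "$rule" | sed -e "s/admin_domain_id/${cloud_admin_dom_id}/g")
echo "$substituted"
```

If the placeholder is not replaced everywhere, the cloud_admin rule will
never match your real domain ID, which would explain policy denials.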

# Create admin user for cloud admin domain
cloud_admin_user_id=$(get_id openstack user create \
--password secrete \
--domain "${cloud_admin_dom_id}" \
--description "Cloud Admin Domain Admin" \
admin_cloud_admin_dom)

# Assign admin role to admin user
openstack role add --domain "${cloud_admin_dom_id}" \
   --user "${cloud_admin_user_id}" \
   "${admin_role_id}"

# Create default domain (for legacy services)
def_dom_id=$(get_id openstack domain create \
--description "Default Domain" default)

# Create admin user for default domain
def_user_id=$(get_id openstack user create \
--password secrete \
--domain "${def_dom_id}" \
--description "Default Domain Admin" \
admin_default_dom)

# Assign admin role to admin user
openstack role add --domain "${def_dom_id}" \
   --user "${def_user_id}" \
   --inherited \
   "${admin_role_id}"

# Create default project in default domain (for legacy services)
project_id=$(get_id openstack project create "${DEFAULT_PROJECT}" \
--description "Default Project" --domain "${cloud_admin_dom_id}"
--enable)

# Create admin user for default project in default domain
user_id=$(get_id openstack user create admin_dom_default_proj_default \
--project "${project_id}" \
--password secrete \
--domain "${def_dom_id}")

# Assign admin role to admin user in default domain and default project
openstack role add --project "${project_id}" \
   --user "${user_id}" \
   --inherited \
   "${admin_role_id}"

# Create service role
service_role_id=$(get_id openstack role create service)

# Create service project in default domain
project_id=$(get_id openstack project create service \
--description "Service Tenant" --domain "${def_dom_id}" --enable)

# Create service project admin in default domain
user_id=$(get_id openstack user create admin_default_dom_proj_service \
--project "${project_id}" \
--password secrete \
--domain "${def_dom_id}")

# Assign admin role to admin user in service project
openstack role add --domain "${def_dom_id}" \
   --user "${user_id}" \
   --inherited \
   "${admin_role_id}"

# First other Domain - dom_0001
dom_id=$(get_id openstack domain create \
--description "Domain dom_0001" dom_0001)

# Create admin user for dom_0001
user_id=$(get_id openstack user create \
--password secrete \
--domain "${dom_id}" \
--description "dom_0001 Admin" \
admin_dom_0001)

# Assign admin role to admin_dom_0001 in domain dom_0001
openstack role add --domain "${dom_id}" \
   --user "${user_id}" \
   --user-domain "${dom_id}" \
   --inherited \
   "${admin_role_id}"

==

Also note, when adding:
#--project-domain "${cloud_admin_dom_id}" \
 #--user-domain "${def_dom_id}" \

to openstack role add, I'm finding that OSC complains the user ID doesn't
exist in that specified domain, when OSC user list --log shows it does. Odd
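One thing worth double-checking with the v3cloudsample policy is whether
the token being used is domain-scoped rather than project-scoped. A sketch
of the environment for requesting a domain-scoped v3 token (the auth URL is
a placeholder; the user/domain names follow the script above):

```shell
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL=http://keystone.example.com:5000/v3   # placeholder URL
export OS_USERNAME=admin_cloud_admin_dom
export OS_USER_DOMAIN_NAME=cloud_admin_dom
export OS_DOMAIN_NAME=cloud_admin_dom   # requests a domain-scoped token
export OS_PASSWORD=secrete
# With OS_DOMAIN_NAME set (and no OS_PROJECT_NAME), "openstack token issue"
# should return a domain-scoped token, which is what the cloud_admin and
# domain-admin rules in policy.v3cloudsample.json match against.
```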

Thanks,
Nick


Re: [openstack-dev] [vote][kolla] Core nomination proposal for Eduardo Gonzalez Gutierrez (egonzales90 on irc)

2016-08-19 Thread Martin André
+1

On Fri, Aug 19, 2016 at 8:52 PM, Vikram Hosakote (vhosakot)
 wrote:
> +1.
>
> Great work Eduardo contributing to the third party plugin support kolla
> blueprint!
>
> Regards,
> Vikram Hosakote
> IRC:  vhosakot
>
> From: "Steven Dake (stdake)" 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Thursday, August 18, 2016 at 7:09 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: [openstack-dev] [vote][kolla] Core nomination proposal for Eduardo
> Gonzalez Gutierrez (egonzales90 on irc)
>
> Kolla Core Review Team:
>
> I am nominating Eduardo for the core reviewer team.  His reviews are
> fantastic, as I'm sure most of you have seen after looking over the review
> queue.  His 30 day stats place him at #3 by review count [1] and 60 day
> stats [2] at #4 by review count.  He is also first to review a significant
> amount of the time – which is impressive for someone new to Kolla.  He
> participates in IRC and he has done some nice code contribution as well [3]
> including the big chunk of work on enabling Senlin in Kolla, the dockerfile
> customizations work, as well as a few documentation fixes.  Eduardo is not
> affiliated with any particular company.  As a result he is not full time on
> Kolla like many of our other core reviewers.  The fact that he is part time
> and still doing fantastically well at reviewing is a great sign of things to
> come :)
>
> Consider this nomination as my +1 vote.
>
> Voting is open for 7 days until August 24th.  Joining the core review team
> requires a majority of the core review team to approve within a 1 week
> period with no veto (-1) votes.  If a veto or unanimous decision is reached
> prior to August 24th, voting will close early.
>
> Regards
> -steve
>
> [1] http://stackalytics.com/report/contribution/kolla/30
> [2] http://stackalytics.com/report/contribution/kolla/60
> [3]
> https://review.openstack.org/#/q/owner:%22Eduardo+Gonzalez+%253Cdabarren%2540gmail.com%253E%22
>



Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-19 Thread Giulio Fidente

On 08/05/2016 01:21 PM, Steven Hardy wrote:

On Fri, Aug 05, 2016 at 12:27:40PM +0200, Dmitry Tantsur wrote:

On 08/04/2016 11:48 PM, Dan Prince wrote:

Last week I started some prototype work on what could be a new way to
install the Undercloud. The driving force behind this was some of the
recent "composable services" work we've done in TripleO, so initially I
called it the composable undercloud. There is an etherpad here with links
to some of the patches already posted upstream (many of which stand as
general improvements on their own outside the scope of what I'm
talking about here).

https://etherpad.openstack.org/p/tripleo-composable-undercloud

The idea in short is that we could spin up a small single process all-
in-one heat-all (engine and API) and thereby avoid things like Rabbit,
and MySQL. Then we can use Heat templates to drive the Undercloud
deployment just like we do in the Overcloud.


I don't want to sound rude, but please no. The fact that you have a hammer
does not mean everything around is nails :( What problem are you trying to
solve by doing it?


I think Dan explains it pretty well in his video, and your comment
indicates a fundamental misunderstanding around the entire TripleO vision,
which is about symmetry and reuse between deployment tooling and the
deployed cloud.

The problems this would solve are several:

1. Remove divergence between undercloud and overcloud puppet implementation
(instead of having an undercloud specific manifest, we reuse the *exact*
same stuff we use for overcloud deployments)


this; to reuse the service templates and puppet classes as they are 
sounds good



2. Better modularity, far easier to enable/disable services

3. Get container integration "for free" when we land it in the overcloud

4. Any introspection and debugging workflow becomes identical between the
undercloud and overcloud

5. We remove dependencies on a bunch of legacy scripts which run outside of
puppet

6. Whenever someone lands support for a new service in the overcloud, we
automatically get undercloud support for it, completely for free.

7. Potential for much easier implementation of a multi-node undercloud


Undercloud installation is already sometimes fragile, but it's probably the
least fragile part right now (at least from my experience) And at the very
least it's pretty obviously debuggable in most cases. THT is hard to
understand and often impossible to debug. I'd prefer we move away from THT
completely rather than trying to fix it in one more place where heat does
not fit..


I *do* see your point about the undercloud installation being the less 
problematic part, but part of that is because we didn't need to build into 
the undercloud the same level of flexibility we demand for the overcloud.


Now, maybe we also shouldn't make things complicated where they don't 
need to be (see points 2 and 3), but in addition to reusing tht/puppet code, 
I think it would be interesting to have undercloud/ha (point 7)


fwiw, I'd like to try this out myself before the summit to get a better 
picture.

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente



Re: [openstack-dev] [vote][kolla] Core nomination proposal for Eduardo Gonzalez Gutierrez (egonzales90 on irc)

2016-08-19 Thread Vikram Hosakote (vhosakot)
+1.

Great work Eduardo contributing to the third party plugin support kolla
blueprint!

Regards,
Vikram Hosakote
IRC:  vhosakot

From: "Steven Dake (stdake)"
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, August 18, 2016 at 7:09 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [vote][kolla] Core nomination proposal for Eduardo 
Gonzalez Gutierrez (egonzales90 on irc)

Kolla Core Review Team:

I am nominating Eduardo for the core reviewer team.  His reviews are fantastic, 
as I'm sure most of you have seen after looking over the review queue.  His 30 
day stats place him at #3 by review count [1] and 60 day stats [2] at #4 by 
review count.  He is also first to review a significant amount of the time - 
which is impressive for someone new to Kolla.  He participates in IRC and he 
has done some nice code contribution as well [3] including the big chunk of 
work on enabling Senlin in Kolla, the dockerfile customizations work, as well 
as a few documentation fixes.  Eduardo is not affiliated with any particular 
company.  As a result he is not full time on Kolla like many of our other core 
reviewers.  The fact that he is part time and still doing fantastically well at 
reviewing is a great sign of things to come :)

Consider this nomination as my +1 vote.

Voting is open for 7 days until August 24th.  Joining the core review team 
requires a majority of the core review team to approve within a 1 week period 
with no veto (-1) votes.  If a veto or unanimous decision is reached prior to 
August 24th, voting will close early.

Regards
-steve

[1] http://stackalytics.com/report/contribution/kolla/30
[2] http://stackalytics.com/report/contribution/kolla/60
[3] 
https://review.openstack.org/#/q/owner:%22Eduardo+Gonzalez+%253Cdabarren%2540gmail.com%253E%22


Re: [openstack-dev] mod_wsgi: what services are people using with it?

2016-08-19 Thread Nick Papadonis
Here's an example with SSL enabled:

mod_wsgi:
#!/usr/bin/python2.7
import sys
import threading

from oslo_utils import encodeutils

# XXX reduce dependencies
from glance.cmd.api import main
import glance_store
from oslo_config import cfg
from oslo_log import log as logging
import oslo_messaging
import osprofiler.notifier
import osprofiler.web

from glance.common import config
from glance.common import exception
from glance.common import wsgi
from glance import notifier

CONF = cfg.CONF
CONF.import_group("profiler", "glance.common.wsgi")
logging.register_options(CONF)

KNOWN_EXCEPTIONS = (RuntimeError,
                    exception.WorkerCreationFailure,
                    glance_store.exceptions.BadStoreConfiguration)


def fail(e):
    return_code = KNOWN_EXCEPTIONS.index(type(e)) + 1
    sys.stderr.write("ERROR: %s\n" % encodeutils.exception_to_unicode(e))
    sys.exit(return_code)


if __name__ == "__main__":
    sys.exit(main())
else:
    try:
        config_files = cfg.find_config_files(project='glance',
                                             prog='glance-api')
        config.parse_args(default_config_files=config_files)
        config.set_config_defaults()
        logging.setup(CONF, 'glance')  # XXX
        notifier.set_defaults()

        # Initialize the glance store.
        glance_store.register_opts(CONF)
        glance_store.create_stores(CONF)
        glance_store.verify_default_store()

        if CONF.profiler.enabled:
            _notifier = osprofiler.notifier.create("Messaging",
                                                   oslo_messaging, {},
                                                   notifier.get_transport(),
                                                   "glance", "api",
                                                   CONF.bind_host)
            osprofiler.notifier.set(_notifier)
            osprofiler.web.enable(CONF.profiler.hmac_keys)
        else:
            osprofiler.web.disable()

        # ``application`` is the module-level WSGI entry point that
        # mod_wsgi looks for.
        application = None
        app_lock = threading.Lock()

        with app_lock:
            if application is None:
                application = config.load_paste_app('glance-api')
    except KNOWN_EXCEPTIONS as e:
        fail(e)
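For anyone adapting the script above to another service: the only contract mod_wsgi actually imposes is that the script expose a module-level callable named ``application`` implementing the WSGI interface; everything else above is Glance-specific bootstrap. A minimal, dependency-free sketch of that contract (Python 3 syntax, names illustrative):

```python
from wsgiref.util import setup_testing_defaults


def application(environ, start_response):
    # mod_wsgi calls this once per request: ``environ`` carries the CGI-style
    # request dict, and ``start_response`` begins the HTTP response.
    body = b"OK"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]


# Exercise the callable directly, the way a WSGI server would.
environ = {}
setup_testing_defaults(environ)
statuses = []
result = application(environ, lambda status, headers: statuses.append(status))
assert statuses == ["200 OK"] and b"".join(result) == b"OK"
```

In the Apache config, a WSGIScriptAlias (or equivalent) directive then points at the file containing this callable.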

-
import sys
from subprocess import CalledProcessError, check_call


def httpd(cmd):
    cmd = ['/usr/apache2/2.4/bin/httpd', '-f',
           '/var/lib/glance/api-httpd.conf',
           '-k', cmd]
    try:
        # check_call waits for httpd to exit and raises CalledProcessError
        # on a non-zero status (Popen alone would never raise it).
        check_call(cmd, stdout=sys.stdout, stderr=sys.stderr)
    except CalledProcessError as err:
        print >> sys.stderr, 'Error executing %s: %s' % (cmd, err)
        sys.exit(1)
    sys.exit(0)


def start():
    httpd('start')


def stop():
    httpd('stop')


def restart():
    httpd('restart')



ServerRoot "/usr/apache2/2.4"

LoadModule authn_file_module libexec/mod_authn_file.so
LoadModule authn_core_module libexec/mod_authn_core.so
LoadModule authz_host_module libexec/mod_authz_host.so
LoadModule authz_groupfile_module libexec/mod_authz_groupfile.so
LoadModule authz_user_module libexec/mod_authz_user.so
LoadModule authz_core_module libexec/mod_authz_core.so
LoadModule access_compat_module libexec/mod_access_compat.so
LoadModule auth_basic_module libexec/mod_auth_basic.so
LoadModule reqtimeout_module libexec/mod_reqtimeout.so
LoadModule filter_module libexec/mod_filter.so
LoadModule log_config_module libexec/mod_log_config.so
LoadModule env_module libexec/mod_env.so
LoadModule headers_module libexec/mod_headers.so
LoadModule version_module libexec/mod_version.so
LoadModule slotmem_shm_module libexec/mod_slotmem_shm.so

LoadModule mpm_prefork_module libexec/mod_mpm_prefork.so


LoadModule mpm_worker_module libexec/mod_mpm_worker.so



LoadModule mpm_event_module libexec/mod_mpm_event.so


LoadModule unixd_module libexec/mod_unixd.so
LoadModule status_module libexec/mod_status.so
LoadModule alias_module libexec/mod_alias.so
LoadModule wsgi_module libexec/mod_wsgi-2.7.so

LoadModule ssl_module libexec/mod_ssl.so


User glance
Group glance


PidFile /var/lib/glance/glance-api.httpd.pid

ServerName XXX

Listen 9292

ErrorLogFormat "%{cu}t %M"
ErrorLog "/var/log/glance/glance-api_error.log"
LogLevel warn


LogFormat "%h %u %t \"%r\" %p %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined


CustomLog /var/log/glance/glance-api_access.log combined

# Limit request body size to 5 GB

Options Indexes FollowSymLinks MultiViews
AllowOverride None
Require all granted
LimitRequestBody 5368709122


WSGISocketPrefix /var/run/glance-api_wsgi_

# Enable chunked requests; Glance requires them


SSLEngine On

# Disable the known insecure SSLv3 protocol
SSLProtocol all -SSLv3

SSLCertificateFile /etc/glance/ssl/public/server-cert-fchain.pem
SSLCACertificateFile /etc/certs/ca-certificates.crt
SSLCertificateKeyFile /etc/glance/ssl/private/server-key.pem

WSGIChunkedRequest On
WSGIDaemonProcess glance-api processes=2 threads=1 user=glance

[openstack-dev] [glance] [osc] an important regression related to python-glanceclient 2.4.0

2016-08-19 Thread Nikhil Komawar
Hi,


Recently a bug [1] was discovered that makes python-glanceclient
incompatible with the openstack client.


As this issue is more complex than meets the eye, a revert [2] of the
commit that recently introduced the bug has been proposed. Nevertheless,
we would like to understand why the revert helps, as it may merely be
masking an underlying issue.


While we dig further into the details, a solution to help packagers,
operators, other projects, etc. has been proposed via commits [3, 4].
Please consider the global-requirements sync a short-term workaround for
this bug until we provide a cleaner solution.


You are welcome to drop by #openstack-glance or reach out via email if you
are having issues and the bug report isn't a sufficient source of
information.


[1] https://bugs.launchpad.net/python-glanceclient/+bug/1614971
[2] https://review.openstack.org/357624
[3] https://review.openstack.org/357937
[4] https://review.openstack.org/357955

-- 

Thanks,
Nikhil




Re: [openstack-dev] [vote][kolla] Core nomination proposal for Eduardo Gonzalez Gutierrez (egonzales90 on irc)

2016-08-19 Thread Mauricio Lima
+1
On 19/08/2016 14:14, "Jeffrey Zhang"  wrote:

> +1
>
> On Sat, Aug 20, 2016 at 1:07 AM, Swapnil Kulkarni 
> wrote:
> > Thanks sdake.
> >
> > +1 :)
> >
> > On Aug 19, 2016 10:35 PM, "Steven Dake (stdake)" 
> wrote:
> >>
> >> Coolsvap mentioned he wasn't receiving his emails at his home email
> >> server, so he will respond from a different mail service.  Just bringing
> >> this up again so he can see the thread.
> >>
> >> Regards
> >> -steve
> >>
> >> On 8/19/16, 7:09 AM, "Kwasniewska, Alicja" <
> alicja.kwasniew...@intel.com>
> >> wrote:
> >>
> >> >+1 :)
> >> >
> >> >-Original Message-
> >> >From: Ryan Hallisey [mailto:rhall...@redhat.com]
> >> >Sent: Friday, August 19, 2016 2:50 PM
> >> >To: OpenStack Development Mailing List (not for usage questions)
> >> >
> >> >Subject: Re: [openstack-dev] [vote][kolla] Core nomination proposal for
> >> >Eduardo Gonzalez Gutierrez (egonzales90 on irc)
> >> >
> >> >+1
> >> >
> >> >-Ryan
> >> >
> >> >- Original Message -
> >> >From: "Steven Dake (stdake)" 
> >> >To: "OpenStack Development Mailing List (not for usage questions)"
> >> >
> >> >Sent: Thursday, August 18, 2016 7:09:35 PM
> >> >Subject: [openstack-dev] [vote][kolla] Core nomination proposal for
> >> >Eduardo Gonzalez Gutierrez (egonzales90 on irc)
> >> >
> >> >Kolla Core Review Team:
> >> >
> >> >I am nominating Eduardo for the core reviewer team. His reviews are
> >> >fantastic, as I'm sure most of you have seen after looking over the
> >> >review queue. His 30 day stats place him at #3 by review count [1] and
> >> >60 day stats [2] at #4 by review count. He is also first to review a
> >> >significant amount of the time - which is impressive for someone new to
> >> >Kolla. He participates in IRC and he has done some nice code
> >> >contribution as well [3] including the big chunk of work on enabling
> >> >Senlin in Kolla, the dockerfile customizations work, as well as a few
> >> >documentation fixes. Eduardo is not affiliated with any particular
> >> >company. As a result he is not full time on Kolla like many of our
> >> >other core reviewers. The fact that he is part time and still doing
> >> >fantastically well at reviewing is a great sign of things to come :)
> >> >
> >> >Consider this nomination as my +1 vote.
> >> >
> >> >Voting is open for 7 days until August 24th. Joining the core review
> >> >team requires a majority of the core review team to approve within a 1
> >> >week period with no veto (-1) votes. If a veto or unanimous decision is
> >> >reached prior to August 24th, voting will close early.
> >> >
> >> >Regards
> >> >-steve
> >> >
> >> >[1] http://stackalytics.com/report/contribution/kolla/30
> >> >[2] http://stackalytics.com/report/contribution/kolla/60
> >> >[3]
> >> >https://review.openstack.org/#/q/owner:%22Eduardo+Gonzalez+%253Cdabarren%2540gmail.com%253E%22
> >> >
> >> >__
> >> >OpenStack Development Mailing List (not for usage questions)
> >> >Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Re: [openstack-dev] [ironic] Driver removal policies - should we make it softer?

2016-08-19 Thread Villalovos, John L
> -Original Message-
> From: Jim Rollenhagen [mailto:j...@jimrollenhagen.com]
> Sent: Friday, August 19, 2016 7:15 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: [openstack-dev] [ironic] Driver removal policies - should we make it
> softer?
> 
> Hi Ironickers,
> 
> There was a big thread here[0] about Cinder, driver removal, and standard
> deprecation policy. If you haven't read through it yet, please do before
> continuing here. :)
> 
> The outcome of that thread is summarized well here.[1]
> 
> I know that I previously had a different opinion on this, but I think we
> should go roughly the same route, for the sake of the users.
> 
> 1) A ``supported`` flag for each driver that is True if and only if the driver
>is tested in infra or third-party CI (and meets our third party CI
>requirements).
> 2) If the supported flag is False for a driver, deprecation is implied (and
>a warning is emitted at load time). A driver may be removed per standard
>deprecation policies, with turning the supported flag False to start the
>clock.
> 3) Add a ``enable_unsupported_drivers`` config option that allows enabling
>drivers marked supported=False. If a driver is in enabled_drivers, has
>supported=False, and enable_unsupported_drivers=False, ironic-
> conductor
>will fail to start. Setting enable_unsupported_drivers=True will allow
>ironic-conductor to start with warnings emitted.
> 
> It is important to note that (3) does still technically break the standard
> deprecation policy (old config may not work with new version of ironic).
> However, this is a much softer landing than the original plan. FWIW, I do
> expect (but not hope!) this part will be somewhat contentious.
> 
> I'd like to hear thoughts and get consensus on this from the rest of the
> ironic community, so please do reply whether you agree or disagree.
> 
> I'm happy to do the work required (update spec, code patches, doc updates)
> when we do come to agreement.
> 
> // jim
> 
> [0] http://lists.openstack.org/pipermail/openstack-dev/2016-
> August/101428.html
> [1] http://lists.openstack.org/pipermail/openstack-dev/2016-
> August/101898.html

Thanks Jim. This proposal makes sense to me. So put me into the agree camp.
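For concreteness, the check described in points (2) and (3) of the proposal could be sketched roughly as follows. The flag names (``supported``, ``enable_unsupported_drivers``) come from the proposal itself; the function, exception, and driver names around them are hypothetical, not ironic's actual loader code:

```python
class DriverLoadError(Exception):
    """Raised when an enabled driver may not be loaded."""


def check_driver_allowed(driver_name, supported,
                         enable_unsupported_drivers, warn=print):
    # Point (2): supported=False implies deprecation, so always warn.
    if not supported:
        warn("Driver %s is not tested in CI; it is deprecated and may be "
             "removed per the standard deprecation policy." % driver_name)
        # Point (3): without the explicit override, the conductor
        # refuses to start.
        if not enable_unsupported_drivers:
            raise DriverLoadError(
                "Driver %s is unsupported; set enable_unsupported_drivers="
                "True to load it anyway." % driver_name)


# A supported driver always loads; an unsupported one loads only when the
# operator opts in explicitly.
check_driver_allowed("ipmitool", supported=True,
                     enable_unsupported_drivers=False, warn=lambda m: None)
check_driver_allowed("vendor_x", supported=False,
                     enable_unsupported_drivers=True, warn=lambda m: None)
```

The softer landing comes from the warning path: operators get a full deprecation window with loud load-time warnings before anything actually refuses to start.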





[openstack-dev] [Performance][Documentation] Please take a look at the OpenStack performance-docs

2016-08-19 Thread Dina Belova
Folks,

During the previous weeks we merged lots of OpenStack performance-related
test plans and test results into performance-docs, and your feedback (as
always) is very much appreciated. I hope you can find something
interesting for you there and will share your opinion on what's going on
in our performance research.

Thanks in advance!

Cheers,
Dina

-- 
*Dina Belova*
*Senior Software Engineer*
Mirantis, Inc.
525 Almanor Avenue, 4th Floor
Sunnyvale, CA 94085

*Phone: 650-772-8418Email: dbel...@mirantis.com *
www.mirantis.com



Re: [openstack-dev] [vote][kolla] Core nomination proposal for Eduardo Gonzalez Gutierrez (egonzales90 on irc)

2016-08-19 Thread Jeffrey Zhang
+1

On Sat, Aug 20, 2016 at 1:07 AM, Swapnil Kulkarni  wrote:
> Thanks sdake.
>
> +1 :)
>
> On Aug 19, 2016 10:35 PM, "Steven Dake (stdake)"  wrote:
>>
>> Coolsvap mentioned he wasn't receiving his emails at his home email
>> server, so he will respond from a different mail service.  Just bringing
>> this up again so he can see the thread.
>>
>> Regards
>> -steve
>>
>> On 8/19/16, 7:09 AM, "Kwasniewska, Alicja" 
>> wrote:
>>
>> >+1 :)
>> >
>> >-Original Message-
>> >From: Ryan Hallisey [mailto:rhall...@redhat.com]
>> >Sent: Friday, August 19, 2016 2:50 PM
>> >To: OpenStack Development Mailing List (not for usage questions)
>> >
>> >Subject: Re: [openstack-dev] [vote][kolla] Core nomination proposal for
>> >Eduardo Gonzalez Gutierrez (egonzales90 on irc)
>> >
>> >+1
>> >
>> >-Ryan
>> >
>> >- Original Message -
>> >From: "Steven Dake (stdake)" 
>> >To: "OpenStack Development Mailing List (not for usage questions)"
>> >
>> >Sent: Thursday, August 18, 2016 7:09:35 PM
>> >Subject: [openstack-dev] [vote][kolla] Core nomination proposal for
>> >Eduardo Gonzalez Gutierrez (egonzales90 on irc)
>> >
>> >Kolla Core Review Team:
>> >
>> >I am nominating Eduardo for the core reviewer team. His reviews are
>> >fantastic, as I'm sure most of you have seen after looking over the
>> >review queue. His 30 day stats place him at #3 by review count [1] and 60
>> >day stats [2] at #4 by review count. He is also first to review a
>> >significant amount of the time - which is impressive for someone new to
>> >Kolla. He participates in IRC and he has done some nice code contribution
>> >as well [3] including the big chunk of work on enabling Senlin in Kolla,
>> >the dockerfile customizations work, as well as a few documentation fixes.
>> >Eduardo is not affiliated with any particular company. As a result he is
>> >not full time on Kolla like many of our other core reviewers. The fact
>> >that he is part time and still doing fantastically well at reviewing is a
>> >great sign of things to come :)
>> >
>> >Consider this nomination as my +1 vote.
>> >
>> >Voting is open for 7 days until August 24th. Joining the core review team
>> >requires a majority of the core review team to approve within a 1 week
>> >period with no veto (-1) votes. If a veto or unanimous decision is
>> >reached prior to August 24th, voting will close early.
>> >
>> >Regards
>> >-steve
>> >
>> >[1] http://stackalytics.com/report/contribution/kolla/30
>> >[2] http://stackalytics.com/report/contribution/kolla/60
>> >[3]
>> >https://review.openstack.org/#/q/owner:%22Eduardo+Gonzalez+%253Cdabarren%2540gmail.com%253E%22
>> >
>> >__
>> >OpenStack Development Mailing List (not for usage questions)
>> >Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me



Re: [openstack-dev] [vote][kolla] Core nomination proposal for Eduardo Gonzalez Gutierrez (egonzales90 on irc)

2016-08-19 Thread Swapnil Kulkarni
Thanks sdake.

+1 :)

On Aug 19, 2016 10:35 PM, "Steven Dake (stdake)"  wrote:
>
> Coolsvap mentioned he wasn't receiving his emails at his home email
> server, so he will respond from a different mail service.  Just bringing
> this up again so he can see the thread.
>
> Regards
> -steve
>
> On 8/19/16, 7:09 AM, "Kwasniewska, Alicja" 
> wrote:
>
> >+1 :)
> >
> >-Original Message-
> >From: Ryan Hallisey [mailto:rhall...@redhat.com]
> >Sent: Friday, August 19, 2016 2:50 PM
> >To: OpenStack Development Mailing List (not for usage questions)
> >
> >Subject: Re: [openstack-dev] [vote][kolla] Core nomination proposal for
> >Eduardo Gonzalez Gutierrez (egonzales90 on irc)
> >
> >+1
> >
> >-Ryan
> >
> >- Original Message -
> >From: "Steven Dake (stdake)" 
> >To: "OpenStack Development Mailing List (not for usage questions)"
> >
> >Sent: Thursday, August 18, 2016 7:09:35 PM
> >Subject: [openstack-dev] [vote][kolla] Core nomination proposal for
> >Eduardo Gonzalez Gutierrez (egonzales90 on irc)
> >
> >Kolla Core Review Team:
> >
> >I am nominating Eduardo for the core reviewer team. His reviews are
> >fantastic, as I'm sure most of you have seen after looking over the
> >review queue. His 30 day stats place him at #3 by review count [1] and 60
> >day stats [2] at #4 by review count. He is also first to review a
> >significant amount of the time - which is impressive for someone new to
> >Kolla. He participates in IRC and he has done some nice code contribution
> >as well [3] including the big chunk of work on enabling Senlin in Kolla,
> >the dockerfile customizations work, as well as a few documentation fixes.
> >Eduardo is not affiliated with any particular company. As a result he is
> >not full time on Kolla like many of our other core reviewers. The fact
> >that he is part time and still doing fantastically well at reviewing is a
> >great sign of things to come :)
> >
> >Consider this nomination as my +1 vote.
> >
> >Voting is open for 7 days until August 24th. Joining the core review team
> >requires a majority of the core review team to approve within a 1 week
> >period with no veto (-1) votes. If a veto or unanimous decision is
> >reached prior to August 24th, voting will close early.
> >
> >Regards
> >-steve
> >
> >[1] http://stackalytics.com/report/contribution/kolla/30
> >[2] http://stackalytics.com/report/contribution/kolla/60
> >[3]
> >https://review.openstack.org/#/q/owner:%22Eduardo+Gonzalez+%253Cdabarren%2540gmail.com%253E%22
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vote][kolla] Core nomination proposal for Eduardo Gonzalez Gutierrez (egonzales90 on irc)

2016-08-19 Thread Steven Dake (stdake)
Coolsvap mentioned he wasn't receiving his emails at his home email
server, so he will respond from a different mail service.  Just bringing
this up again so he can see the thread.

Regards
-steve

On 8/19/16, 7:09 AM, "Kwasniewska, Alicja" 
wrote:

>+1 :)
>
>-Original Message-
>From: Ryan Hallisey [mailto:rhall...@redhat.com]
>Sent: Friday, August 19, 2016 2:50 PM
>To: OpenStack Development Mailing List (not for usage questions)
>
>Subject: Re: [openstack-dev] [vote][kolla] Core nomination proposal for
>Eduardo Gonzalez Gutierrez (egonzales90 on irc)
>
>+1
>
>-Ryan
>
>- Original Message -
>From: "Steven Dake (stdake)" 
>To: "OpenStack Development Mailing List (not for usage questions)"
>
>Sent: Thursday, August 18, 2016 7:09:35 PM
>Subject: [openstack-dev] [vote][kolla] Core nomination proposal for
>Eduardo Gonzalez Gutierrez (egonzales90 on irc)
>
>Kolla Core Review Team:
>
>I am nominating Eduardo for the core reviewer team. His reviews are
>fantastic, as I'm sure most of you have seen after looking over the
>review queue. His 30 day stats place him at #3 by review count [1] and 60
>day stats [2] at #4 by review count. He is also first to review a
>significant amount of the time - which is impressive for someone new to
>Kolla. He participates in IRC and he has done some nice code contribution
>as well [3] including the big chunk of work on enabling Senlin in Kolla,
>the dockerfile customizations work, as well as a few documentation fixes.
>Eduardo is not affiliated with any particular company. As a result he is
>not full time on Kolla like many of our other core reviewers. The fact
>that he is part time and still doing fantastically well at reviewing is a
>great sign of things to come :)
>
>Consider this nomination as my +1 vote.
>
>Voting is open for 7 days until August 24th. Joining the core review team
>requires a majority of the core review team to approve within a 1 week
>period with no veto (-1) votes. If a veto or unanimous decision is
>reached prior to August 24th, voting will close early.
>
>Regards 
>-steve 
>
>[1] http://stackalytics.com/report/contribution/kolla/30
>[2] http://stackalytics.com/report/contribution/kolla/60
>[3] 
>https://review.openstack.org/#/q/owner:%22Eduardo+Gonzalez+%253Cdabarren%2
>540gmail.com%253E%22
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [ironic] Driver removal policies - should we make it softer?

2016-08-19 Thread Doug Hellmann
Excerpts from Jim Rollenhagen's message of 2016-08-19 11:02:58 -0400:
> On Fri, Aug 19, 2016 at 10:54 AM, Mathieu Mitchell
>  wrote:
> >
> >
> > On 2016-08-19 10:42 AM, Doug Hellmann wrote:
> >>>
> >>> 3) Add a ``enable_unsupported_drivers`` config option that allows
> >>> enabling
> >>> >drivers marked supported=False. If a driver is in enabled_drivers,
> >>> > has
> >>
> >> Do you mean "in disabled_drivers" there?
> >>
> >
> > enabled_drivers is the list of all drivers to be loaded by the conductor,
> > see [0]. In other words, if the operator wants a driver marked as
> > supported=False, enable_unsupported_drivers has to be enabled, otherwise
> > conductor will fail to start.
> 
> Right, ironic-conductor can load multiple drivers, and the driver
> to use is per-node (node being ironic's representation of a bare
> metal server). Since drivers are often vendor-specific, this allows
> a deployment of heterogenous hardware.
> 
> // jim
> 

Ah, OK. I thought you meant that as shorthand for "the list of drivers
where supported is not False", and didn't realize it was literally the
name of an option.

Doug



Re: [openstack-dev] [all][release][ptl] tentative ocata schedule up for review

2016-08-19 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2016-08-18 09:26:37 -0400:
> The release team has prepared a proposed schedule for the Ocata cycle.
> Please look over https://review.openstack.org/357214 and let us know if
> you spot any issues.
> 
> Doug
> 

Tony pointed out that I'd started the schedule from the wrong date
(starting from the Newton release and not the Ocata design summit).
I've updated the patch with new dates that correctly reflect the
shorter cycle.

Sorry for the confusion,
Doug



[openstack-dev] [nova][scheduler] Next Nova Scheduler Subteam Meeting

2016-08-19 Thread Ed Leafe
The next meeting of the Nova Scheduler subteam will be on Monday, August 22 at 
1400 UTC in #openstack-meeting-alt

http://www.timeanddate.com/worldclock/fixedtime.html?iso=20160822T14

The agenda is here: https://wiki.openstack.org/wiki/Meetings/NovaScheduler

If you have any items you wish to discuss, please add them to the agenda before 
the meeting.


-- Ed Leafe








Re: [openstack-dev] [NOVA] How boot an instance on specific compute with provider-network: physnet1

2016-08-19 Thread Leehom Li (feli5)
Hi, All

I use the command below to boot an instance with a specified IP address on
a specified compute node.

nova boot 
--image  \
--flavor  \
--nic net-id=,v4-fixed-ip= \
--availability-zone :




Hope it helps.

leehom

On 8/17/16, 11:53 PM, "Géza Gémes"  wrote:

>On 08/17/2016 05:38 PM, Rick Jones wrote:
>> On 08/17/2016 08:25 AM, Kelam, Koteswara Rao wrote:
>>> Hi All,
>>>
>>> I have two computes
>>>
>>> Compute node 1:
>>> 1. physnet3:br-eth0
>>>
>>> 2. physnet2: br-eth2
>>>
>>> Compute node 2:
>>> 1. physnet3:br-eth0
>>> 2. physnet1:br-eth1
>>> 3. physnet2:br-eth2
>>>
>>> When I boot an instance with a network of provider-network physnet1,
>>> nova is scheduling it on compute1 but there is no physnet1 on compute1
>>> and it fails.
>>>
>>> Is there any mechanism/way to choose correct compute with correct
>>> provider-network?
>>
>> Well, the --availability-zone option can be given a host name
>> separated from an optional actual availability zone identifier by a
>> colon:
>>
>> nova boot .. --availability-zone :hostname ...
>>
>> But specifying a specific host rather than just an availability zone
>> requires the project to have forced_host (or is it force_host?)
>> capabilities.  You could, perhaps, define the two computes to be
>> separate availability zones to work around that.
>>
>> rick jones
>>
>>
>> 
>>_
>>_ 
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>Hi,
>
>Does it help if you boot your VMs with pre-created neutron ports rather
>than a neutron network? I think nova is supposed to bind them, and failing
>that it should reschedule the VM (up to the configured number of
>reschedule attempts, 3 by default). I think this is an area where, if one
>of the physnets relates e.g. to an SR-IOV PF, the PciDeviceFilter would be
>able to select the right host from the beginning.
>
>Cheers,
>
>Geza
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [api-ref][networking-sfc][ceilometer][glance][heat][ironic][keystone][manila][designate][trove][neutron][nova][sahara][searchlight][senlin][swift][zaqar] OpenStack Docs theme migration

2016-08-19 Thread Hayes, Graham
Hi All,

I have just pushed reviews to all the repos using os-api-ref, to allow
us to migrate to the new sphinx theme gracefully.

I have pushed reviews to the following repos:

openstack/networking-sfc
openstack/ceilometer
openstack/glance
openstack/heat
openstack/ironic
openstack/keystone
openstack/manila
openstack/designate
openstack/trove
openstack/neutron-lib
openstack/nova
openstack/sahara
openstack/searchlight
openstack/senlin (*)
openstack/swift
openstack/zaqar

with the topic "os-api-ref-1.0.0-prep" [0]

If I have missed anyone, please let me know.

The next step would be to merge all of these, and then [1], and then do
a release of the os-api-ref library.

* Senlin looked like it was using the new theme already - so if they 
want to continue they can. Just be warned that the visual styling will
change at some point.

Thanks,

Graham


0 - https://review.openstack.org/#/q/topic:os-api-ref-1.0.0-prep
1 - https://review.openstack.org/#/c/322430/



Re: [openstack-dev] Let's drop the postgresql gate job

2016-08-19 Thread Matt Riedemann

On 8/19/2016 9:42 AM, Mike Bayer wrote:



On 08/18/2016 11:00 AM, Matt Riedemann wrote:

It's that time of year again to talk about killing this job, at least
from the integrated gate (move it to experimental for people that care
about postgresql, or make it gating on a smaller subset of projects like
oslo.db).



Running a full tempest load for Postgresql for everything is not very
critical.   I'm sure the software gets full-integration tested against
PG at some point past the gate at least so regressions are reportable,
so if that's all that's being dropped, I don't see any issue.

There is PG-specific code being proposed in Neutron [1], though.  The patch
here is planned to be rolled largely into oslo.db, so most of it would
be tested under oslo.db in any case, however Neutron would still have a
specific issue that is addressed by this library code, so local unit
testing of this issue would still be needed against both MySQL and
Postgresql.


Sean reminded me yesterday that the DB migrations in the projects are 
run through unit tests which are using both MySQL and PG, e.g.:


http://logs.openstack.org/38/356638/2/check/gate-nova-python27-db-ubuntu-xenial/801ca5f/console.html#_2016-08-18_00_44_47_146291

http://logs.openstack.org/38/356638/2/check/gate-nova-python27-db-ubuntu-xenial/801ca5f/console.html#_2016-08-18_00_44_34_175694
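The migration tests referenced above amount to walking every schema migration in order against a live database. A much-simplified sketch of the idea (sqlite stands in for the MySQL/PostgreSQL backends the real jobs use; the table and statements are invented):

```python
import sqlite3

# Hypothetical, heavily simplified "walk versions" migration test.
MIGRATIONS = [
    "CREATE TABLE instances (id INTEGER PRIMARY KEY, name TEXT)",
    "ALTER TABLE instances ADD COLUMN host TEXT",
]

def walk_versions(conn):
    """Apply every migration in order; any SQL error fails the walk."""
    for statement in MIGRATIONS:
        conn.execute(statement)
        conn.commit()
    return len(MIGRATIONS)

# The real jobs run this against MySQL and PostgreSQL; sqlite just keeps
# the sketch self-contained.
conn = sqlite3.connect(":memory:")
applied = walk_versions(conn)
columns = [row[1] for row in conn.execute("PRAGMA table_info(instances)")]
print(applied, columns)
```

Dropping the PG half of such a walk is exactly what would lose coverage for backend-specific DDL quirks.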



There is also the subject area of routines that are somehow dependent on
the transaction isolation behavior of the backend database, such as code
that's attempting to see if something has changed in another
transaction.   This is usually in PG's favor because Postgresql defaults
to a lower isolation level than MySQL, but there are probably some weird
edges to this particular subject area.   Again, these things should be
tested in a local unit-test kind of context.

For the specific goal of oslo.db running cross-project checks, I'd like
that a lot, not necessarily for the Postgresql use case, but just to
ensure that API changes in oslo.db don't break on any downstream
projects.   I would think that for all of oslo, seeing that oslo is
"horizontal" to openstack "verticals", that all oslo projects would
somehow have cross-project testing of new patches against consuming
projects. I run a very small and focused type of this kind of testing
on my own against downstream openstack for all proposed changes to
SQLAlchemy, Alembic, and dogpile.cache.


[1] https://review.openstack.org/#/c/314054/




--

Thanks,

Matt Riedemann




Re: [openstack-dev] [ironic] Driver removal policies - should we make it softer?

2016-08-19 Thread Jim Rollenhagen
On Fri, Aug 19, 2016 at 10:54 AM, Mathieu Mitchell
 wrote:
>
>
> On 2016-08-19 10:42 AM, Doug Hellmann wrote:
>>>
>>> 3) Add an ``enable_unsupported_drivers`` config option that allows
>>> enabling
>>> >drivers marked supported=False. If a driver is in enabled_drivers,
>>> > has
>>
>> Do you mean "in disabled_drivers" there?
>>
>
> enabled_drivers is the list of all drivers to be loaded by the conductor,
> see [0]. In other words, if the operator wants a driver marked as
> supported=False, enable_unsupported_drivers has to be enabled, otherwise
> conductor will fail to start.

Right, ironic-conductor can load multiple drivers, and the driver
to use is per-node (node being ironic's representation of a bare
metal server). Since drivers are often vendor-specific, this allows
a deployment of heterogeneous hardware.

// jim



Re: [openstack-dev] [ironic] Driver removal policies - should we make it softer?

2016-08-19 Thread Mathieu Mitchell



On 2016-08-19 10:42 AM, Doug Hellmann wrote:

3) Add an ``enable_unsupported_drivers`` config option that allows enabling
>drivers marked supported=False. If a driver is in enabled_drivers, has

Do you mean "in disabled_drivers" there?



enabled_drivers is the list of all drivers to be loaded by the 
conductor, see [0]. In other words, if the operator wants to enable a 
driver marked supported=False, enable_unsupported_drivers has to be set, 
otherwise the conductor will fail to start.


Mathieu

[0] 
https://github.com/openstack/ironic/blob/master/etc/ironic/ironic.conf.sample#L23




Re: [openstack-dev] [ironic] Driver removal policies - should we make it softer?

2016-08-19 Thread Mathieu Mitchell

On 2016-08-19 10:15 AM, Jim Rollenhagen wrote:

3) Add an ``enable_unsupported_drivers`` config option that allows enabling
   drivers marked supported=False. If a driver is in enabled_drivers, has
   supported=False, and enable_unsupported_drivers=False, ironic-conductor
   will fail to start. Setting enable_unsupported_drivers=True will allow
   ironic-conductor to start with warnings emitted.


I very much like that idea. It allows users (deployers/operators) to 
simply enable the setting and establish a discussion / put pressure on 
their hardware vendor. Plus, they can always accept the "unsupported" 
route and submit bugs / maintain it themselves, since they have access 
to the hardware.



Mathieu



Re: [openstack-dev] [ironic] Driver removal policies - should we make it softer?

2016-08-19 Thread Doug Hellmann
Excerpts from Jim Rollenhagen's message of 2016-08-19 10:15:20 -0400:
> Hi Ironickers,
> 
> There was a big thread here[0] about Cinder, driver removal, and standard
> deprecation policy. If you haven't read through it yet, please do before
> continuing here. :)
> 
> The outcome of that thread is summarized well here.[1]
> 
> I know that I previously had a different opinion on this, but I think we
> should go roughly the same route, for the sake of the users.
> 
> 1) A ``supported`` flag for each driver that is True if and only if the driver
>is tested in infra or third-party CI (and meets our third party CI
>requirements).
> 2) If the supported flag is False for a driver, deprecation is implied (and
>a warning is emitted at load time). A driver may be removed per standard
>deprecation policies, with turning the supported flag False to start the
>clock.
> 3) Add an ``enable_unsupported_drivers`` config option that allows enabling
>drivers marked supported=False. If a driver is in enabled_drivers, has

Do you mean "in disabled_drivers" there?

>supported=False, and enable_unsupported_drivers=False, ironic-conductor
>will fail to start. Setting enable_unsupported_drivers=True will allow
>ironic-conductor to start with warnings emitted.
> 
> It is important to note that (3) does still technically break the standard
> deprecation policy (old config may not work with new version of ironic).
> However, this is a much softer landing than the original plan. FWIW, I do
> expect (but not hope!) this part will be somewhat contentious.
> 
> I'd like to hear thoughts and get consensus on this from the rest of the
> ironic community, so please do reply whether you agree or disagree.
> 
> I'm happy to do the work required (update spec, code patches, doc updates)
> when we do come to agreement.
> 
> // jim
> 
> [0] http://lists.openstack.org/pipermail/openstack-dev/2016-August/101428.html
> [1] http://lists.openstack.org/pipermail/openstack-dev/2016-August/101898.html
> 



Re: [openstack-dev] Let's drop the postgresql gate job

2016-08-19 Thread Mike Bayer



On 08/18/2016 11:00 AM, Matt Riedemann wrote:

It's that time of year again to talk about killing this job, at least
from the integrated gate (move it to experimental for people that care
about postgresql, or make it gating on a smaller subset of projects like
oslo.db).



Running a full tempest load for Postgresql for everything is not very 
critical.   I'm sure the software gets full-integration tested against 
PG at some point past the gate at least so regressions are reportable, 
so if that's all that's being dropped, I don't see any issue.


There is PG-specific code being proposed in Neutron [1], though.  The patch 
here is planned to be rolled largely into oslo.db, so most of it would 
be tested under oslo.db in any case, however Neutron would still have a 
specific issue that is addressed by this library code, so local unit 
testing of this issue would still be needed against both MySQL and 
Postgresql.


There is also the subject area of routines that are somehow dependent on 
the transaction isolation behavior of the backend database, such as code 
that's attempting to see if something has changed in another 
transaction.   This is usually in PG's favor because Postgresql defaults 
to a lower isolation level than MySQL, but there are probably some weird 
edges to this particular subject area.   Again, these things should be 
tested in a local unit-test kind of context.


For the specific goal of oslo.db running cross-project checks, I'd like 
that a lot, not necessarily for the Postgresql use case, but just to 
ensure that API changes in oslo.db don't break on any downstream 
projects.   I would think that for all of oslo, seeing that oslo is 
"horizontal" to openstack "verticals", that all oslo projects would 
somehow have cross-project testing of new patches against consuming 
projects. I run a very small and focused type of this kind of testing 
on my own against downstream openstack for all proposed changes to 
SQLAlchemy, Alembic, and dogpile.cache.



[1] https://review.openstack.org/#/c/314054/



The postgresql job used to have three interesting things about it:

1. It ran keystone with eventlet (which is no longer a thing).
2. It runs the n-api-meta service rather than using config drive.
3. It uses postgresql for the database.

So #1 is gone, and for #3, according to the April 2016 user survey (page
40) [1], 4% of reporting deployments are using it in production.

I don't think we're running n-api-meta in any other integrated gate
jobs, but I'm pretty sure there is at least one neutron job out there
that's running with it that way. We could also consider making the
nova-net dsvm full gate job run n-api-meta, or vice-versa with the
neutron dsvm full gate job.

We also have to consider that with HP public cloud being gone as a node
provider we've got fewer test nodes to run with, so we have to make
tough decisions about which jobs we're going to run in the integrated gate.

I'm bringing this up again because Nova has a few more jobs it would
like to make voting on its repo (neutron LB and live migration, at
least in the check queue) but there are concerns about adding yet more
jobs that each change has to get through before it's merged, which means
if anything goes wrong in any of those we can have a 24 hour turnaround
on getting an approved change back through the gate.

[1]
https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf





Re: [openstack-dev] [vote][kolla] Core nomination proposal for Eduardo Gonzalez Gutierrez (egonzales90 on irc)

2016-08-19 Thread Kwasniewska, Alicja
+1 :)

-Original Message-
From: Ryan Hallisey [mailto:rhall...@redhat.com] 
Sent: Friday, August 19, 2016 2:50 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [vote][kolla] Core nomination proposal for Eduardo 
Gonzalez Gutierrez (egonzales90 on irc)

+1

-Ryan

- Original Message -
From: "Steven Dake (stdake)" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Thursday, August 18, 2016 7:09:35 PM
Subject: [openstack-dev] [vote][kolla] Core nomination proposal for Eduardo 
Gonzalez Gutierrez (egonzales90 on irc)

Kolla Core Review Team: 

I am nominating Eduardo for the core reviewer team. His reviews are fantastic, 
as I'm sure most of you have seen after looking over the review queue. His 30 
day stats place him at #3 by review count [1] and 60 day stats [2] at #4 by 
review count. He is also first to review a significant amount of the time – 
which is impressive for someone new to Kolla. He participates in IRC and he has 
done some nice code contribution as well [3] including the big chunk of work on 
enabling Senlin in Kolla, the dockerfile customizations work, as well as a few 
documentation fixes. Eduardo is not affiliated with any particular company. As 
a result he is not full time on Kolla like many of our other core reviewers. 
The fact that he is part time and still doing fantastically well at reviewing 
is a great sign of things to come :) 

Consider this nomination as my +1 vote. 

Voting is open for 7 days until August 24th. Joining the core review team 
requires a majority of the core review team to approve within a 1 week period 
with no veto (-1) votes. If a veto or unanimous decision is reached prior to 
August 24th, voting will close early. 

Regards 
-steve 

[1] http://stackalytics.com/report/contribution/kolla/30 
[2] http://stackalytics.com/report/contribution/kolla/60 
[3] 
https://review.openstack.org/#/q/owner:%22Eduardo+Gonzalez+%253Cdabarren%2540gmail.com%253E%22
 



[openstack-dev] [ironic] Driver removal policies - should we make it softer?

2016-08-19 Thread Jim Rollenhagen
Hi Ironickers,

There was a big thread here[0] about Cinder, driver removal, and standard
deprecation policy. If you haven't read through it yet, please do before
continuing here. :)

The outcome of that thread is summarized well here.[1]

I know that I previously had a different opinion on this, but I think we
should go roughly the same route, for the sake of the users.

1) A ``supported`` flag for each driver that is True if and only if the driver
   is tested in infra or third-party CI (and meets our third party CI
   requirements).
2) If the supported flag is False for a driver, deprecation is implied (and
   a warning is emitted at load time). A driver may be removed per standard
   deprecation policies, with turning the supported flag False to start the
   clock.
3) Add an ``enable_unsupported_drivers`` config option that allows enabling
   drivers marked supported=False. If a driver is in enabled_drivers, has
   supported=False, and enable_unsupported_drivers=False, ironic-conductor
   will fail to start. Setting enable_unsupported_drivers=True will allow
   ironic-conductor to start with warnings emitted.
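A rough sketch of the load-time behaviour proposed in (3) — driver names and function shapes here are invented for illustration, not ironic's actual loading code:

```python
import warnings

# Hypothetical supported-flag registry; real data would come from the
# driver classes themselves.
SUPPORTED = {"agent_ipmitool": True, "agent_vendor_x": False}

def load_drivers(enabled_drivers, enable_unsupported_drivers=False):
    """Mimic the proposed conductor startup check."""
    loaded = []
    for name in enabled_drivers:
        if not SUPPORTED[name] and not enable_unsupported_drivers:
            # conductor refuses to start at all
            raise RuntimeError(
                "driver %s is unsupported; set enable_unsupported_drivers"
                "=True to load it anyway" % name)
        if not SUPPORTED[name]:
            warnings.warn("driver %s is unsupported and deprecated" % name)
        loaded.append(name)
    return loaded

# Unsupported driver enabled without the opt-in: startup fails.
try:
    load_drivers(["agent_ipmitool", "agent_vendor_x"])
except RuntimeError as exc:
    print("refused: %s" % exc)

# With the opt-in, startup succeeds but a warning is emitted.
print(load_drivers(["agent_ipmitool", "agent_vendor_x"],
                   enable_unsupported_drivers=True))
```

The hard failure (rather than silently skipping the driver) is what makes the opt-in explicit for operators.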

It is important to note that (3) does still technically break the standard
deprecation policy (old config may not work with new version of ironic).
However, this is a much softer landing than the original plan. FWIW, I do
expect (but not hope!) this part will be somewhat contentious.

I'd like to hear thoughts and get consensus on this from the rest of the
ironic community, so please do reply whether you agree or disagree.

I'm happy to do the work required (update spec, code patches, doc updates)
when we do come to agreement.

// jim

[0] http://lists.openstack.org/pipermail/openstack-dev/2016-August/101428.html
[1] http://lists.openstack.org/pipermail/openstack-dev/2016-August/101898.html



[openstack-dev] [Manila] Feature proposal freeze is here

2016-08-19 Thread Ben Swartzlander
So the feature proposal deadline passed (yesterday evening) and I've 
started putting -2s on things which didn't meet the deadline. The 
purpose of this is to focus attention on features which we still want to 
land in Newton.


The goal of the next 7 days is to merge all of the features which are 
ready so we can enter feature freeze and start the QA cycle. Please turn 
your attention to reviewing and merging features which met the deadline, 
and of course testing stuff.


If you're still working on any feature, you've missed the deadline 
already and you should delay that work until Ocata and help us finish 
the Newton release. The only changes to feature patches in the next 2 
weeks should be responses to review comments and resolving merge conflicts.


Also please prioritize fixing bugs that affect the gate as we'll need 
every bit of cooperation we can get from the gate to merge the backlog 
of features we have.


thanks,
-Ben Swartzlander



Re: [openstack-dev] [heat] convergence cancel messages

2016-08-19 Thread Anant Patil
On Tue, Apr 19, 2016 at 9:36 PM Zane Bitter  wrote:

> On 17/04/16 00:44, Anant Patil wrote:
> > I think it is a good idea, but I see that a resource can be
> marked
> > unhealthy only after it is done.
> >
> >
> > Currently, yes. The idea would be to change that so that if it finds
> > the resource IN_PROGRESS then it kills the thread and makes sure the
> > resource is in a FAILED state. I
> >
> >
> > Move the resource to CHECK_FAILED?
>
> I'd say that if killing the thread gets it to UPDATE_FAILED then Mission
> Accomplished, but obviously we'd have to check for races and make sure
> we move it to CHECK_FAILED if the update completes successfully.
>
> > The trick would be if the stack update is still running and the
> > resource is currently IN_PROGRESS to make sure that we fail the
> > whole stack update (rolling back if the user has enabled that).
> >
> >
> > IMO, we can probably use the cancel  command do this, because when you
> > are marking a resource as unhealthy, you are
> > cancelling any action running on that resource. Would the following be
> ok?
> > (1) stack-cancel-update  will cancel the update, mark
> > cancelled resources failed and rollback (existing stuff)
> > (2) stack-cancel-update  --no-rollback will just cancel the
> > update and mark cancelled resources as failed
> > (3) stack-cancel-update   ...  Just
> > stop the action on given resources, mark as CHECK_FAILED, don't do
> > anything else. The stack won't progress further. Other resources running
> > while cancel-update will complete.
>
> None of those solve the use case I actually care about, which is "don't
> start any more resource updates, but don't mark the ones currently
> in-progress as failed either, and don't roll back". That would be a huge
> help in TripleO. We need a way to be able to stop updates that
> guarantees not unnecessarily destroying any part of the existing stack,
> and we need that to be the default.
>
> (We sort-of have the rollback version of this; it's equivalent to a
> stack update with the previous template/environment. But we need to make
> it easier and decouple it from the rollback IMHO.)
>
> So one way to do this would be:
>
> (1) stack-cancel-update  will start another update using the
> previous template/environment. We'll start rolling back; in-progress
> resources will be allowed to complete normally.
> (2) stack-cancel-update  --no-rollback will set the
> traversal_id to None so no further resources will be updated;
> in-progress resources will be allowed to complete normally.
> (3) stack-cancel-update  --stop-in-progress will stop the
> traversal, kill any running update threads (marking cancelled resources
> failed) and roll back
> (4) stack-cancel-update  --stop-in-progress --no-rollback will
> just stop the traversal and kill any running update threads (marking
> cancelled resources failed)
> (5) stack-cancel-update  --stop-in-progress  ...
>  Just stop the action on given resources, mark as
> UPDATE_FAILED, don't do anything else. The stack won't progress further.
> Other resources running while cancel-update will complete.
>
> That would cover all the use cases. Some problems with it are:
> - It's way complicated. Lots of options.
> - Those options don't translate well to legacy (pre-convergence) stacks
> using the same client. e.g. there is now a non-default
> --stop-in-progress option, but on legacy stacks we always stop in-progress.
> - Options don't commute. When you specify resources with the
> --stop-in-progress flag it never rolls back, even though you haven't set
> the --no-rollback flag.
>
> An alternative would be to just drop (3) and (4), and maybe rename (5).
> I'd be OK with that:
>
> (1) stack-cancel-update  will start another update using the
> previous template/environment. We'll start rolling back; in-progress
> resources will be allowed to complete normally.
> (2) stack-cancel-update  --no-rollback will set the
> traversal_id to None so no further resources will be updated;
> in-progress resources will be allowed to complete normally.
> (3) resource-stop-update   ...  Just
> stop the action on given resources, mark as UPDATE_FAILED, don't do
> anything else. The stack won't progress further. Other resources running
> while cancel-update will complete.
>
> That solves most of the issues, except that (3) has no real equivalent
> on legacy stacks (I guess we could just make it fail on the server side).
>
> What I'm suggesting is very close to that:
>
> (1) stack-cancel-update  will start another update using the
> previous template/environment. We'll start rolling back; in-progress
> resources will be allowed to complete normally.
> (2) stack-cancel-update  --no-rollback will set the
> traversal_id to None so no further resources will be updated;
> in-progress resources will be allowed to complete normally.
> (3) resource-mark-unhealthy   ... 
> Kill any threads running a CREATE or UPDATE on the given resources, 

Re: [openstack-dev] [TripleO][CI] Memory shortage in HA jobs, please increase it

2016-08-19 Thread Sagi Shnaidman
Hi, Derek

I suspect Sahara may be causing it; it started to run on the overcloud since
my patch was merged: https://review.openstack.org/#/c/352598/
I don't think it ever ran in these jobs before, because it was either
improperly configured or disabled. And according to the reports, it's the
most memory-consuming service on the overcloud controllers.


On Fri, Aug 19, 2016 at 12:41 PM, Derek Higgins  wrote:

> On 19 August 2016 at 00:07, Sagi Shnaidman  wrote:
> > Hi,
> >
> > we have a problem again with not enough memory in HA jobs, all of them
> > constantly fails in CI: http://status-tripleoci.rhcloud.com/
>
> Have we any idea why we need more memory all of a sudden? For months
> the overcloud nodes have had 5G of RAM, then last week[1] we bumped it
> to 5.5G; now we need it bumped to 6G.
>
> If a new service has been added that is needed on the overcloud then
> bumping to 6G is expected and probably the correct answer but I'd like
> to see us avoiding blindly increasing the resources each time we see
> out of memory errors without investigating if there was a regression
> causing something to start hogging memory.
>
> Sorry if it seems like I'm being picky about this (I seem to resist
> these bumps every time they come up) but there are two good reasons to
> avoid this if possible
> o at peak we are currently configured to run 75 simultaneous jobs
> (although we probably don't reach that at the moment), and each HA job
> has 5 baremetal nodes, so bumping from 5G to 6G increases the amount
> of RAM CI can use at peak by 375G
> o When we bump the RAM usage of baremetal nodes from 5G to 6G, what
> we're actually doing is increasing the minimum requirements for
> developers from 28G (or whatever the number is now) to 32G
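The capacity arithmetic in the bullets above can be checked directly:

```python
# Checking the quoted capacity arithmetic: each HA job uses 5 baremetal
# nodes, and each node's flavor grows by 1G (5G -> 6G).
jobs_at_peak = 75
nodes_per_ha_job = 5
bump_gb = 6 - 5
extra_peak_gb = jobs_at_peak * nodes_per_ha_job * bump_gb
print(extra_peak_gb)  # 375
```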
>
> So before we bump the number can we just check first if it's justified,
> as I've watched this number increase from 2G since we started running
> tripleo-ci
>
> thanks,
> Derek.
>
> [1] - https://review.openstack.org/#/c/353655/
>
> > I've created a patch that will increase it[1], but we need to increase it
> > right now on rh1.
> > I can't do it now, because unfortunately I'll not be able to watch this
> if
> > it works and no problems appear.
> > TripleO CI cloud admins, please increase the memory for baremetal flavor
> on
> > rh1 tomorrow (to 6144?).
> >
> > Thanks
> >
> > [1] https://review.openstack.org/#/c/357532/
> > --
> > Best regards
> > Sagi Shnaidman
>



-- 
Best regards
Sagi Shnaidman


Re: [openstack-dev] [Zun][Higgins] Proposing Sudipta Biswas and Wenzhi Yu for Zun core reviewer team

2016-08-19 Thread Hongbin Lu
Hi all,

Thanks for your vote. According to the feedback, Sudipta and Wenzhi have
been added to the core team [1].

Best regards,
Hongbin

On Sat, Aug 13, 2016 at 5:44 PM, Fei Long Wang 
wrote:

> +1
>
> On 12/08/16 19:22, taget wrote:
>
>>
>> +1 for both, they would be great addition to zun team.
>>
>> On 2016年08月12日 10:26, Yanyan Hu wrote:
>>
>>>
>>> Both Sudipta and Wenzhi have been actively contributing to the Zun
>>> project for a while. Sudipta provided helpful advice for the project
>>> roadmap and architecture design. Wenzhi consistently contributed high
>>> quality patches and insightful reviews. I think both of them are qualified
>>> to join the core team.
>>>
>>>
>>
> --
> Cheers & Best regards,
> Fei Long Wang (王飞龙)
> --
> Senior Cloud Software Engineer
> Tel: +64-48032246
> Email: flw...@catalyst.net.nz
> Catalyst IT Limited
> Level 6, Catalyst House, 150 Willis Street, Wellington
> --
>
>
>


Re: [openstack-dev] [TripleO][CI] Memory shortage in HA jobs, please increase it

2016-08-19 Thread Derek Higgins
On 19 August 2016 at 11:08, Giulio Fidente  wrote:
> On 08/19/2016 11:41 AM, Derek Higgins wrote:
>>
>> On 19 August 2016 at 00:07, Sagi Shnaidman  wrote:
>>>
>>> Hi,
>>>
>>> we have a problem again with not enough memory in HA jobs, all of them
>>> constantly fails in CI: http://status-tripleoci.rhcloud.com/
>>
>>
>> Have we any idea why we need more memory all of a sudden? For months
>> the overcloud nodes have had 5G of RAM, then last week[1] we bumped it
>> to 5.5G; now we need it bumped to 6G.
>>
>> If a new service has been added that is needed on the overcloud then
>> bumping to 6G is expected and probably the correct answer but I'd like
>> to see us avoiding blindly increasing the resources each time we see
>> out of memory errors without investigating if there was a regression
>> causing something to start hogging memory.
>
>
> fwiw, one recent addition was the cinder-backup service
>
> though this service wasn't enabled by default in mitaka so with [1] we can
> disable the service by default for newton as well

We still got memory errors with this patch. I'm going to bump up to 6G
as Sagi suggested to temporarily unblock things, but I strongly suggest
somebody looks into this, and that we prioritize undoing the bump next
week if possible.

>
> 1. https://review.openstack.org/#/c/357729
>
> --
> Giulio Fidente
> GPG KEY: 08D733BA | IRC: gfidente



Re: [openstack-dev] [vote][kolla] Core nomination proposal for Eduardo Gonzalez Gutierrez (egonzales90 on irc)

2016-08-19 Thread Ryan Hallisey
+1

-Ryan

- Original Message -
From: "Steven Dake (stdake)" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Thursday, August 18, 2016 7:09:35 PM
Subject: [openstack-dev] [vote][kolla] Core nomination proposal for Eduardo 
Gonzalez Gutierrez (egonzales90 on irc)

Kolla Core Review Team: 

I am nominating Eduardo for the core reviewer team. His reviews are fantastic, 
as I'm sure most of you have seen after looking over the review queue. His 30 
day stats place him at #3 by review count [1] and 60 day stats [2] at #4 by 
review count. He is also first to review a significant amount of the time – 
which is impressive for someone new to Kolla. He participates in IRC and he has 
done some nice code contribution as well [3] including the big chunk of work on 
enabling Senlin in Kolla, the dockerfile customizations work, as well as a few 
documentation fixes. Eduardo is not affiliated with any particular company. As 
a result he is not full time on Kolla like many of our other core reviewers. 
The fact that he is part time and still doing fantastically well at reviewing 
is a great sign of things to come :) 

Consider this nomination as my +1 vote. 

Voting is open for 7 days until August 24th. Joining the core review team 
requires a majority of the core review team to approve within a 1 week period 
with no veto (-1) votes. If a veto or unanimous decision is reached prior to 
August 24th, voting will close early. 

Regards 
-steve 

[1] http://stackalytics.com/report/contribution/kolla/30 
[2] http://stackalytics.com/report/contribution/kolla/60 
[3] 
https://review.openstack.org/#/q/owner:%22Eduardo+Gonzalez+%253Cdabarren%2540gmail.com%253E%22
 



Re: [openstack-dev] [OpenStack-docs] [cinder] [neutron] [ironic] [api] [doc][trove] API status report

2016-08-19 Thread Amrith Kumar
This email managed to fall through the cracks in the couch cushions so my 
apologies for doing this late. I’ve added [trove] to the subject line now.

Here’s the update from Trove:

Anne’s original change[1] was abandoned and replaced by [2] which has now 
merged. The infra change [3] has also merged and the api-ref job is now live in 
Trove.

My thanks to Anne, Sean Dague, Mariam, Laurel, Peter, Yolanda, Ricardo, and 
Trevor in getting this done.

Sorry I’m late, now, where’s the champagne?

-amrith

[1] https://review.openstack.org/#/c/316381/
[2] https://review.openstack.org/#/c/356631/
[3] https://review.openstack.org/#/c/356407/

From: Anne Gentle [mailto:annegen...@justwriteclick.com]
Sent: Thursday, August 11, 2016 10:53 AM
To: OpenStack Development Mailing List ; 
openstack-d...@lists.openstack.org
Subject: [OpenStack-docs] [cinder] [neutron] [ironic] [api] [doc] API status 
report



On Wed, Aug 10, 2016 at 2:49 PM, Anne Gentle 
> wrote:
Hi all,
I wanted to report on status and answer any questions you all have about the 
API reference and guide publishing process.

The expectation is that we provide all OpenStack API information on 
developer.openstack.org. In order to meet that 
goal, it's simplest for now to have all projects use the 
RST+YAML+openstackdocstheme+os-api-ref extension tooling so that users see 
available OpenStack APIs in a sidebar navigation drop-down list.

--Migration--
The current status for migration is that all WADL content is migrated except 
for trove. There is a patch in progress and I'm in contact with the team to 
assist in any way. https://review.openstack.org/#/c/316381/

--Theme, extension, release requirements--
The current status for the theme, navigation, and Sphinx extension tooling is 
covered in Graham's latest post, which proposes a solution for the release 
number switchover and offers help to teams as needed: 
http://lists.openstack.org/pipermail/openstack-dev/2016-August/101112.html I 
hope to meet the requirements deadline to get those changes landed. 
Requirements freeze is Aug 29.

--Project coverage--
The current status for project coverage is that these projects are now using 
the RST+YAML in-tree workflow and tools and publishing to 
http://developer.openstack.org/api-ref/ so they will be included 
in the upcoming API navigation sidebar intended to span all OpenStack APIs:

designate http://developer.openstack.org/api-ref/dns/
glance http://developer.openstack.org/api-ref/image/
heat http://developer.openstack.org/api-ref/orchestration/
ironic http://developer.openstack.org/api-ref/baremetal/
keystone http://developer.openstack.org/api-ref/identity/
manila http://developer.openstack.org/api-ref/shared-file-systems/
neutron-lib http://developer.openstack.org/api-ref/networking/
nova http://developer.openstack.org/api-ref/compute/
sahara http://developer.openstack.org/api-ref/data-processing/
senlin http://developer.openstack.org/api-ref/clustering/
swift http://developer.openstack.org/api-ref/object-storage/
zaqar http://developer.openstack.org/api-ref/messaging/

These projects are using the in-tree workflow and common tools, but do not have 
a publish job in project-config in the jenkins/jobs/projects.yaml file.

ceilometer

Sorry, in reviewing further today I found another project that does not have a 
publish job but has in-tree source files:

cinder

Team cinder: can you let me know where you are in your publishing comfort 
level? Please add an api-ref-jobs: line with a target of block-storage to 
jenkins/jobs/projects.yaml in the project-config repo to ensure publishing is 
correct.
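
For teams unsure what that change looks like, a minimal sketch follows. The
exact layout is hypothetical; copy the precise shape from a project that
already publishes (e.g. nova's entry in the same file):

```yaml
# jenkins/jobs/projects.yaml in project-config -- illustrative fragment only
- project:
    name: cinder
    api-ref-jobs:
      - target: block-storage
```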

Another issue is the name of the target directory for the final URL. Team 
ironic can I change your api-ref-jobs: line to bare-metal instead of baremetal? 
It'll be better for search engines and for alignment with the other projects 
URLs: https://review.openstack.org/354135

I've also uncovered a problem where a neutron project's API does not have an 
official service name, and am working on a solution but need help from the 
neutron team: https://review.openstack.org/#/c/351407
Thanks,
Anne


--Projects not using common tooling--
These projects have API docs but are not yet using the common tooling, as far 
as I can tell. Because of the user experience, I'm making a judgement call that 
these cannot be included in the common navigation. I have patched the 
projects.yaml file in the governance repo with the URLs I could screen-scrape, 
but if I'm incorrect please do patch the projects.yaml in the governance repo.

astara
cloudkitty
congress
magnum
mistral
monasca
solum
tacker
trove

Please reach out if you have questions or need assistance getting started with 
the new common tooling, documented here: 
http://docs.openstack.org/contributor-guide/api-guides.html.

For searchlight, looking at 

Re: [openstack-dev] [nova] Nova mascot - nominations and voting

2016-08-19 Thread Sean Dague
On 08/11/2016 10:14 AM, Sean Dague wrote:
> 
> So... I overstepped here and jumped to a conclusion based on an
> incorrect understanding of people's sentiments. And there has been some
> concern expressed that part of this conversation was private, which is a
> valid concern. I'm sorry about all of that.
> 
> Let's start afresh...
> 
> What's been publicly suggested so far (from all ML posts that seem to
> contain a suggestion):
> 
> ant - alexis (already chosen by infra)
> bee - alexis (already chosen by refstack)
> star - heidi
> supernova - markus, auggy, bob ball
> octopus - chris (already chosen by UX)
> 
> I'd suggest that we actually combine star/supernova into one item to
> give the graphic designers some flexibility and creativity. With less
> distinctive features than animals, the freedom is probably needed to
> make something cool.
> 
> Are there other suggestions? Those items already chosen by other teams
> are out of bounds per the FAQ
> (http://www.openstack.org/project-mascots). We can leave this open for
> the rest of the week, and if there are additional valid options do an
> ATC poll next week.

There were no additional suggestions over the last week, and the only
valid options (not taken by other teams) were the star/supernova that
was suggested.

So we've got our mascot, thanks folks.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] gate iffy, hold your rechecks

2016-08-19 Thread Doug Wiegley
The IP collisions with the devstack fixed range are no longer an issue, so 
rechecks and approvals can resume.

Thanks,
doug

> On Aug 19, 2016, at 12:08 PM, Doug Wiegley  
> wrote:
> 
> And cores, please hold your +A’s until the patch below has merged.
> 
> Thanks,
> doug
> 
>> On Aug 19, 2016, at 12:03 PM, Doug Wiegley  
>> wrote:
>> 
>> Hi all,
>> 
>> The CI system is having some issues with osic nodes running dsvm jobs right 
>> now, and the odds of getting one are pretty high with neutron or lbaas, 
>> because of how many dsvm jobs we run on each change. Please hold your 
>> rechecks until the following patch merges:
>> 
>> https://review.openstack.org/#/c/357764/
>> 
>> Thanks,
>> doug
>> 
>> 
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] gate iffy, hold your rechecks

2016-08-19 Thread Doug Wiegley
And cores, please hold your +A’s until the patch below has merged.

Thanks,
doug

> On Aug 19, 2016, at 12:03 PM, Doug Wiegley  
> wrote:
> 
> Hi all,
> 
> The CI system is having some issues with osic nodes running dsvm jobs right 
> now, and the odds of getting one are pretty high with neutron or lbaas, 
> because of how many dsvm jobs we run on each change. Please hold your 
> rechecks until the following patch merges:
> 
> https://review.openstack.org/#/c/357764/
> 
> Thanks,
> doug
> 
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][lbaas] gate iffy, hold your rechecks

2016-08-19 Thread Doug Wiegley
Hi all,

The CI system is having some issues with osic nodes running dsvm jobs right 
now, and the odds of getting one are pretty high with neutron or lbaas, because 
of how many dsvm jobs we run on each change. Please hold your rechecks until 
the following patch merges:

https://review.openstack.org/#/c/357764/

Thanks,
doug



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] Memory shortage in HA jobs, please increase it

2016-08-19 Thread Giulio Fidente

On 08/19/2016 12:12 PM, Erno Kuvaja wrote:

On Fri, Aug 19, 2016 at 10:53 AM, Hugh Brock  wrote:

On Fri, Aug 19, 2016 at 11:41 AM, Derek Higgins  wrote:

On 19 August 2016 at 00:07, Sagi Shnaidman  wrote:

Hi,

we have a problem again with not enough memory in HA jobs, all of them
constantly fail in CI: http://status-tripleoci.rhcloud.com/


Have we any idea why we need more memory all of a sudden? For months
the overcloud nodes have had 5G of RAM, then last week[1] we bumped it
to 5.5G and now we need it bumped to 6G.

If a new service has been added that is needed on the overcloud then
bumping to 6G is expected and probably the correct answer but I'd like
to see us avoiding blindly increasing the resources each time we see
out of memory errors without investigating if there was a regression
causing something to start hogging memory.

Sorry if it seems like I'm being picky about this (I seem to resist
these bumps every time they come up) but there are two good reasons to
avoid this if possible
o at peak we are currently configured to run 75 simultaneous jobs
(although we probably don't reach that at the moment), and each HA job
has 5 baremetal nodes, so bumping from 5G to 6G increases the amount
of RAM CI can use at peak by 375G
o when we bump the RAM usage of baremetal nodes from 5G to 6G, what
we're actually doing is increasing the minimum requirements for
developers from 28G (or whatever the number is now) to 32G

So before we bump the number can we just check first if it's justified,
as I've watched this number increase from 2G since we started running
tripleo-ci

thanks,
Derek.

[1] - https://review.openstack.org/#/c/353655/


Wondering if it makes sense to enable any but the most basic overcloud
services in TripleO CI. The idea of using some type of on-demand job
for services other than the ones needed for the ping test has been
proposed elsewhere -- maybe this should be our default mode for
TripleO CI. Thoughts?

--Hugh


The problem with periodic jobs is that the results are a bit hidden and
only one or two people look at them when they happen to have time. OTOH, if I
understand correctly, we don't test the services even now, just that
their deployment goes through without failures.


we do some testing of the overcloud in the gate jobs; we actually deploy 
a heat stack in the overcloud [1], creating a volume-backed nova guest 
(backed by Ceph for the HA job), set up some routing, and ping it (in network 
isolation!)


1. 
https://github.com/openstack-infra/tripleo-ci/blob/master/templates/tenantvm_floatingip.yaml

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] Memory shortage in HA jobs, please increase it

2016-08-19 Thread Erno Kuvaja
On Fri, Aug 19, 2016 at 10:53 AM, Hugh Brock  wrote:
> On Fri, Aug 19, 2016 at 11:41 AM, Derek Higgins  wrote:
>> On 19 August 2016 at 00:07, Sagi Shnaidman  wrote:
>>> Hi,
>>>
>>> we have a problem again with not enough memory in HA jobs, all of them
>>> constantly fail in CI: http://status-tripleoci.rhcloud.com/
>>
>> Have we any idea why we need more memory all of a sudden? For months
>> the overcloud nodes have had 5G of RAM, then last week[1] we bumped it
>> to 5.5G and now we need it bumped to 6G.
>>
>> If a new service has been added that is needed on the overcloud then
>> bumping to 6G is expected and probably the correct answer but I'd like
>> to see us avoiding blindly increasing the resources each time we see
>> out of memory errors without investigating if there was a regression
>> causing something to start hogging memory.
>>
>> Sorry if it seems like I'm being picky about this (I seem to resist
>> these bumps every time they come up) but there are two good reasons to
>> avoid this if possible
>> o at peak we are currently configured to run 75 simultaneous jobs
>> (although we probably don't reach that at the moment), and each HA job
>> has 5 baremetal nodes, so bumping from 5G to 6G increases the amount
>> of RAM CI can use at peak by 375G
>> o when we bump the RAM usage of baremetal nodes from 5G to 6G, what
>> we're actually doing is increasing the minimum requirements for
>> developers from 28G (or whatever the number is now) to 32G
>>
>> So before we bump the number can we just check first if it's justified,
>> as I've watched this number increase from 2G since we started running
>> tripleo-ci
>>
>> thanks,
>> Derek.
>>
>> [1] - https://review.openstack.org/#/c/353655/
>
> Wondering if it makes sense to enable any but the most basic overcloud
> services in TripleO CI. The idea of using some type of on-demand job
> for services other than the ones needed for the ping test has been
> proposed elsewhere -- maybe this should be our default mode for
> TripleO CI. Thoughts?
>
> --Hugh

The problem with periodic jobs is that the results are a bit hidden and
only one or two people look at them when they happen to have time. OTOH, if I
understand correctly, we don't test the services even now, just that
their deployment goes through without failures. Likely the best option
would be to test a different subset of services depending on what files
the change touches (as discussed yesterday/earlier this week; I can't
remember where, so I have no reference for that discussion, sorry).

In general I'm with Derek on this, we should not just blindly throw in
more resources without understanding why we need to do so.

- Erno
>
>
>>> I've created a patch that will increase it[1], but we need to increase it
>>> right now on rh1.
>>> I can't do it now, because unfortunately I'll not be able to watch this if
>>> it works and no problems appear.
>>> TripleO CI cloud admins, please increase the memory for baremetal flavor on
>>> rh1 tomorrow (to 6144?).
>>>
>>> Thanks
>>>
>>> [1] https://review.openstack.org/#/c/357532/
>>> --
>>> Best regards
>>> Sagi Shnaidman
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
>   ,   ,| Hugh Brock, hbr...@redhat.com
>   )-_"""_-(| Director of Engineering, OpenStack Management
>  ./ o\ /o \.   | TripleO: Install, configure, and scale OpenStack.
> . \__/ \__/ .  | http://rdoproject.org, http://tripleo.org
> ...   V   ...  |
> ... - - - ...  | "I know that you believe you understand what you
>  .   - -   .   | think I said, but I'm not sure you realize that what
>   `-.-´| you heard is not what I meant." --Robert McCloskey
>  "TripleOwl"
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] Memory shortage in HA jobs, please increase it

2016-08-19 Thread Giulio Fidente

On 08/19/2016 11:41 AM, Derek Higgins wrote:

On 19 August 2016 at 00:07, Sagi Shnaidman  wrote:

Hi,

we have a problem again with not enough memory in HA jobs, all of them
constantly fail in CI: http://status-tripleoci.rhcloud.com/


Have we any idea why we need more memory all of a sudden? For months
the overcloud nodes have had 5G of RAM, then last week[1] we bumped it
to 5.5G and now we need it bumped to 6G.

If a new service has been added that is needed on the overcloud then
bumping to 6G is expected and probably the correct answer but I'd like
to see us avoiding blindly increasing the resources each time we see
out of memory errors without investigating if there was a regression
causing something to start hogging memory.


fwiw, one recent addition was the cinder-backup service

though this service wasn't enabled by default in Mitaka, so with [1] we 
can disable it by default for Newton as well


1. https://review.openstack.org/#/c/357729

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] Memory shortage in HA jobs, please increase it

2016-08-19 Thread Hugh Brock
On Fri, Aug 19, 2016 at 11:41 AM, Derek Higgins  wrote:
> On 19 August 2016 at 00:07, Sagi Shnaidman  wrote:
>> Hi,
>>
>> we have a problem again with not enough memory in HA jobs, all of them
>> constantly fail in CI: http://status-tripleoci.rhcloud.com/
>
> Have we any idea why we need more memory all of a sudden? For months
> the overcloud nodes have had 5G of RAM, then last week[1] we bumped it
> to 5.5G and now we need it bumped to 6G.
>
> If a new service has been added that is needed on the overcloud then
> bumping to 6G is expected and probably the correct answer but I'd like
> to see us avoiding blindly increasing the resources each time we see
> out of memory errors without investigating if there was a regression
> causing something to start hogging memory.
>
> Sorry if it seems like I'm being picky about this (I seem to resist
> these bumps every time they come up) but there are two good reasons to
> avoid this if possible
> o at peak we are currently configured to run 75 simultaneous jobs
> (although we probably don't reach that at the moment), and each HA job
> has 5 baremetal nodes, so bumping from 5G to 6G increases the amount
> of RAM CI can use at peak by 375G
> o when we bump the RAM usage of baremetal nodes from 5G to 6G, what
> we're actually doing is increasing the minimum requirements for
> developers from 28G (or whatever the number is now) to 32G
>
> So before we bump the number can we just check first if it's justified,
> as I've watched this number increase from 2G since we started running
> tripleo-ci
>
> thanks,
> Derek.
>
> [1] - https://review.openstack.org/#/c/353655/

Wondering if it makes sense to enable any but the most basic overcloud
services in TripleO CI. The idea of using some type of on-demand job
for services other than the ones needed for the ping test has been
proposed elsewhere -- maybe this should be our default mode for
TripleO CI. Thoughts?

--Hugh


>> I've created a patch that will increase it[1], but we need to increase it
>> right now on rh1.
>> I can't do it now, because unfortunately I'll not be able to watch this if
>> it works and no problems appear.
>> TripleO CI cloud admins, please increase the memory for baremetal flavor on
>> rh1 tomorrow (to 6144?).
>>
>> Thanks
>>
>> [1] https://review.openstack.org/#/c/357532/
>> --
>> Best regards
>> Sagi Shnaidman
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
  ,   ,| Hugh Brock, hbr...@redhat.com
  )-_"""_-(| Director of Engineering, OpenStack Management
 ./ o\ /o \.   | TripleO: Install, configure, and scale OpenStack.
. \__/ \__/ .  | http://rdoproject.org, http://tripleo.org
...   V   ...  |
... - - - ...  | "I know that you believe you understand what you
 .   - -   .   | think I said, but I'm not sure you realize that what
  `-.-´| you heard is not what I meant." --Robert McCloskey
 "TripleOwl"

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] Memory shortage in HA jobs, please increase it

2016-08-19 Thread Derek Higgins
On 19 August 2016 at 00:07, Sagi Shnaidman  wrote:
> Hi,
>
> we have a problem again with not enough memory in HA jobs, all of them
> constantly fail in CI: http://status-tripleoci.rhcloud.com/

Have we any idea why we need more memory all of a sudden? For months
the overcloud nodes have had 5G of RAM, then last week[1] we bumped it
to 5.5G and now we need it bumped to 6G.

If a new service has been added that is needed on the overcloud then
bumping to 6G is expected and probably the correct answer but I'd like
to see us avoiding blindly increasing the resources each time we see
out of memory errors without investigating if there was a regression
causing something to start hogging memory.

Sorry if it seems like I'm being picky about this (I seem to resist
these bumps every time they come up) but there are two good reasons to
avoid this if possible
o at peak we are currently configured to run 75 simultaneous jobs
(although we probably don't reach that at the moment), and each HA job
has 5 baremetal nodes, so bumping from 5G to 6G increases the amount
of RAM CI can use at peak by 375G
o when we bump the RAM usage of baremetal nodes from 5G to 6G, what
we're actually doing is increasing the minimum requirements for
developers from 28G (or whatever the number is now) to 32G

So before we bump the number can we just check first if it's justified,
as I've watched this number increase from 2G since we started running
tripleo-ci

thanks,
Derek.

[1] - https://review.openstack.org/#/c/353655/
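
As a sanity check on the arithmetic above, a quick back-of-envelope sketch
(all inputs are the figures quoted in this thread, not measured values):

```python
# Back-of-envelope check of the peak RAM impact described above.
jobs_at_peak = 75        # simultaneous jobs CI is configured for at peak
nodes_per_ha_job = 5     # baremetal nodes used by each HA job
bump_gb = 6 - 5          # raising each node from 5G to 6G of RAM

extra_peak_ram_gb = jobs_at_peak * nodes_per_ha_job * bump_gb
print(extra_peak_ram_gb)  # 375, matching the estimate in the mail
```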

> I've created a patch that will increase it[1], but we need to increase it
> right now on rh1.
> I can't do it now, because unfortunately I'll not be able to watch this if
> it works and no problems appear.
> TripleO CI cloud admins, please increase the memory for baremetal flavor on
> rh1 tomorrow (to 6144?).
>
> Thanks
>
> [1] https://review.openstack.org/#/c/357532/
> --
> Best regards
> Sagi Shnaidman

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ovo] Need help to understand the exception of NeutronSyntheticFieldMultipleForeignKeys

2016-08-19 Thread Ihar Hrachyshka

taget  wrote:


hi Artur,

Thanks for your response and suggestion. I updated the patch per your  
suggestion [1], but I am not sure the synth_objs can be loaded correctly,
since loading requires foreign_keys to fetch the objects [2]. We may need to  
implement your proposed patch `Add support for multiple foreign keys in  
NeutronDbObject` [3], as some other patches require it too; I will follow up  
with it.


[1]https://review.openstack.org/306685
[2]https://github.com/openstack/neutron/blob/9.0.0.0b2/neutron/objects/base.py#L434
[3]https://review.openstack.org/#/c/357207


Agreed, the patch is needed, including for ports (which need to link to  
segments, through level objects, in addition to networks). We will land it  
shortly; I am respinning it now to handle the missing bits and issues.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ovo] Need help to understand the exception of NeutronSyntheticFieldMultipleForeignKeys

2016-08-19 Thread taget

hi Artur,

Thanks for your response and suggestion. I updated the patch per your 
suggestion [1], but I am not sure the synth_objs can be loaded correctly, 
since loading requires foreign_keys to fetch the objects [2]. We may need to 
implement your proposed patch `Add support for multiple foreign keys in 
NeutronDbObject` [3], as some other patches require it too; I will follow up 
with it.


[1]https://review.openstack.org/306685
[2]https://github.com/openstack/neutron/blob/9.0.0.0b2/neutron/objects/base.py#L434
[3]https://review.openstack.org/#/c/357207

Thanks Eli.

On 2016年08月18日 21:20, Korzeniewski, Artur wrote:

Hi,
So, in the first place, you do not need multiple foreign keys in the Flavor db 
use case.
You can declare only flavor_id and service_profile_id in the 
FlavorServiceProfileBinding object, since the relationships ('flavor' and 
'service_profile') are not used anywhere, only the ids.

Secondly, nobody right now is working on improving the situation. I have two 
ideas for how to fix it.
The current form:
{
'network_id': 'id',
'agent_id': 'id'
}

1) prefix the value with the related object's name:
{
'network_id': 'Network.id',
'agent_id': 'Agent.id'
}
It would be kind of complicated to use this in [1], where the foreign keys 
are accessed.

2) use a deeper structure like:
{
'Network': {'network_id': 'id'},
'Agent': {'agent_id': 'id'}
}
It looks better, because you just add the proper foreign key under the related 
object name. Then you can proceed without any issues in [1]: just grab the 
value under the object name as a dictionary with a single foreign key. You can 
check [2] to see how it would look for the second option.
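
A small sketch of how option 2 could be consumed. The names here are
illustrative only, not the actual Neutron object API:

```python
# Hypothetical sketch of option 2: foreign keys grouped per related object,
# so the loader can pick the key set for a single object unambiguously.
foreign_keys = {
    'Network': {'network_id': 'id'},
    'Agent': {'agent_id': 'id'},
}

def keys_for(obj_name):
    """Return the foreign-key mapping relevant to one related object."""
    try:
        return foreign_keys[obj_name]
    except KeyError:
        raise ValueError('No foreign keys declared for %s' % obj_name)

print(keys_for('Network'))  # {'network_id': 'id'}
```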

Regards,
Artur

[1] 
https://github.com/openstack/neutron/blob/9.0.0.0b2/neutron/objects/base.py#L433
[2] https://review.openstack.org/#/c/357207
-Original Message-
From: taget [mailto:qiaoliy...@gmail.com]
Sent: Thursday, August 18, 2016 5:08 AM
To: OpenStack Development Mailing List (not for usage questions) 

Cc: Qiao, Liyong ; Bhatia, Manjeet S 
; Korzeniewski, Artur 
Subject: [neutron][ovo] Need help to understand the exception of 
NeutronSyntheticFieldMultipleForeignKeys

hi neutron ovo hacker:

Recently I have been working on the neutron OVO blueprint, and found there are 
some blocking (slow-progress) issues when trying to add new objects to Neutron.

When I try to add Flavor-related objects in [1], I need to add 2 foreign keys to 
FlavorServiceProfileBinding:

flavor_id: Flavor.id
service_profile_id: ServiceProfile.id

For the ServiceProfile and Flavor objects, FlavorServiceProfileBinding is a 
synthetic field; we refer to FlavorServiceProfileBinding in [2], but the 
current object base implementation only allows a synthetic field to have one 
foreign key [3].

Can anyone help clarify this, or give some guidance on how to overcome [1]? Is 
there anyone working on fixing it?


P. S There are other use case for multiple foreign keys [4]
[1]https://review.openstack.org/#/c/306685/6/neutron/db/flavor/models.py@45
[2]https://github.com/openstack/neutron/blob/9.0.0.0b2/neutron/db/flavors_db.py#L86
[3]https://github.com/openstack/neutron/blob/9.0.0.0b2/neutron/objects/base.py#L429-L430
[4]https://review.openstack.org/#/c/307964/20/neutron/objects/router.py@33
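
A simplified sketch of the constraint described above (hypothetical helper
names; the real check lives in neutron/objects/base.py, see [3]): loading a
synthetic field needs exactly one foreign key to resolve the related rows, so
a declaration with two keys is rejected, analogous to the
NeutronSyntheticFieldMultipleForeignKeys exception.

```python
# Hypothetical sketch of the single-foreign-key constraint on synthetic
# fields: two keys (e.g. flavor_id and service_profile_id) are rejected.
def validate_foreign_keys(foreign_keys):
    if len(foreign_keys) != 1:
        raise ValueError(
            'synthetic field requires exactly one foreign key, got %d'
            % len(foreign_keys))
    return next(iter(foreign_keys.items()))

print(validate_foreign_keys({'flavor_id': 'id'}))  # ('flavor_id', 'id')
```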



--
Best Regards,
Eli Qiao (乔立勇), Intel OTC.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vote][kolla] Core nomination proposal for Eduardo Gonzalez Gutierrez (egonzales90 on irc)

2016-08-19 Thread Paul Bourke

+1

On 19/08/16 00:15, Michał Jastrzębski wrote:

+1

On 18 August 2016 at 18:09, Steven Dake (stdake)  wrote:

Kolla Core Review Team:

I am nominating Eduardo for the core reviewer team.  His reviews are
fantastic, as I'm sure most of you have seen after looking over the review
queue.  His 30 day stats place him at #3 by review count [1] and 60 day
stats [2] at #4 by review count.  He is also first to review a significant
amount of the time – which is impressive for someone new to Kolla.  He
participates in IRC and he has done some nice code contribution as well [3]
including the big chunk of work on enabling Senlin in Kolla, the dockerfile
customizations work, as well as a few documentation fixes.  Eduardo is not
affiliated with any particular company.  As a result he is not full time on
Kolla like many of our other core reviewers.  The fact that he is part time
and still doing fantastically well at reviewing is a great sign of things to
come :)

Consider this nomination as my +1 vote.

Voting is open for 7 days until August 24th.  Joining the core review team
requires a majority of the core review team to approve within a 1 week
period with no veto (-1) votes.  If a veto or unanimous decision is reached
prior to August 24th, voting will close early.

Regards
-steve

[1] http://stackalytics.com/report/contribution/kolla/30
[2] http://stackalytics.com/report/contribution/kolla/60
[3]
https://review.openstack.org/#/q/owner:%22Eduardo+Gonzalez+%253Cdabarren%2540gmail.com%253E%22

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] FFE request for Manila CephFS Native backend integration

2016-08-19 Thread Erno Kuvaja
Hi all,

I'm still working on getting all the pieces together for the Manila CephFS
driver integration and, realizing that we have about a week of busy
gating left 'till FF and the changes are not reviewed yet, I'd like to
ask the community to consider the feature for a Feature Freeze Exception.

I'm confident that I will get all the bits together over the next week or
so, but I'm far from confident that we will have them merged in time.
I would still like to see this feature make Newton.

Best,
Erno (jokke) Kuvaja

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][networking-odl] New error Mitaka together with ODL-Beryllium

2016-08-19 Thread Rui Zang

Hi Nikolas,

The `ovs-vsctl show` output does not show how physnet1 can be reached; 
that is what I meant by it not being managed by OpenDaylight. Maybe you 
need to set the "OVS_BRIDGE_MAPPINGS" configuration option.
However, that does not explain why networking-odl or OpenDaylight does 
not bind the port on the flat network; maybe other folks have the answer.
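
For what it's worth, a simplified sketch of the kind of check that produces the
"Unable to find any valid segment" error in the traceback is below. The
supported-type set is an assumption for illustration, not the actual
networking-odl code:

```python
# Hypothetical, simplified version of the segment filtering that fails:
# if only tunnel-type segments are considered bindable, the 'flat'
# segment on physnet1 is rejected and ValueError is raised.
SUPPORTED_NETWORK_TYPES = {'vxlan', 'gre'}  # assumption for illustration

def find_valid_segment(segments):
    for segment in segments:
        if segment.get('network_type') in SUPPORTED_NETWORK_TYPES:
            return segment
    raise ValueError('Unable to find any valid segment in given context.')

# The segment from the failing bind attempt in the log:
segments = [{'segmentation_id': None, 'physical_network': 'physnet1',
             'id': '9c2c66a2-557c-4547-912a-1f043ea76d9f',
             'network_type': 'flat'}]
try:
    find_valid_segment(segments)
except ValueError as exc:
    print(exc)  # Unable to find any valid segment in given context.
```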


Thanks,
Zang, Rui

On 8/19/2016 3:36 PM, Nikolas Hermanns wrote:

I redeployed the system because of other reasons, so the IDs changed. The 
problem still exists:
root@node-3:~# ovs-vsctl show
7c5fba05-9094-4306-87ed-2e44f2edc192
Manager "tcp:192.168.0.4:6640"
is_connected: true
Bridge br-int
Controller "tcp:192.168.0.4:6633"
is_connected: true
Port "TUNNEL:2"
Interface "TUNNEL:2"
type: vxlan
options: {key=flow, local_ip="192.168.2.4", 
remote_ip="192.168.2.1"}
Port "tap33a54c35-1b"
Interface "tap33a54c35-1b"
type: internal
Port "tap54ea3b3f-ee"
Interface "tap54ea3b3f-ee"
type: internal
Port br-int
Interface br-int
type: internal
Port "tap6cd7d327-11"
Interface "tap6cd7d327-11"
type: internal
Port "qr-15746548-70"
Interface "qr-15746548-70"
type: internal
Port "TUNNEL:10"
Interface "TUNNEL:10"
type: vxlan
options: {key=flow, local_ip="192.168.2.4", 
remote_ip="192.168.2.3"}
Port "tap15a2a33f-9f"
Interface "tap15a2a33f-9f"
type: internal
Port "tapfeea7fab-be"
Interface "tapfeea7fab-be"
type: internal
Port "TUNNEL:6"
Interface "TUNNEL:6"
type: vxlan
options: {key=flow, local_ip="192.168.2.4", 
remote_ip="192.168.2.2"}
Port "tapc1ad3c0d-3c"
Interface "tapc1ad3c0d-3c"
type: internal
ovs_version: "2.4.1"

## neutron port-list
| fcb2ecbe-07ba-41a1-8c5e-7dac6577d58b |  | fa:16:3e:9a:55:16 | {"subnet_id": 
"102e7306-8a2d-4448-bf69-9e0c3c8649b4", "ip_address": "172.16.0.130"}  |

## ERROR from the logs:
2016-08-19 07:29:30.170 23329 ERROR neutron.plugins.ml2.managers 
[req-f228e4bb-0808-42b5-9628-0b2cd9ad1c92 - - - - -] Failed to bind port 
fcb2ecbe-07ba-41a1-8c5e-7dac6577d58b on host node-3.domain.tld for vnic_type 
normal using segments [{'segmentation_id': None, 'physical_network': 
u'physnet1', 'id': u'9c2c66a2-557c-4547-912a-1f043ea76d9f', 'network_type': 
u'flat'}]
2016-08-19 07:29:30.171 23329 INFO neutron.plugins.ml2.plugin 
[req-f228e4bb-0808-42b5-9628-0b2cd9ad1c92 - - - - -] Attempt 10 to bind port 
fcb2ecbe-07ba-41a1-8c5e-7dac6577d58b
2016-08-19 07:29:30.178 23329 ERROR networking_odl.ml2.network_topology 
[req-f228e4bb-0808-42b5-9628-0b2cd9ad1c92 - - - - -] Network topology element 
has failed binding port:
{
"class": "networking_odl.ml2.ovsdb_topology.OvsdbNetworkTopologyElement",
"has_datapath_type_netdev": false,
"host_addresses": [
"192.168.0.4"
],
"support_vhost_user": false,
"uuid": "7c5fba05-9094-4306-87ed-2e44f2edc192",
"valid_vif_types": [
"ovs"
]
}
2016-08-19 07:29:30.178 23329 ERROR networking_odl.ml2.network_topology 
Traceback (most recent call last):
2016-08-19 07:29:30.178 23329 ERROR networking_odl.ml2.network_topology   File 
"/usr/local/lib/python2.7/dist-packages/networking_odl/ml2/network_topology.py",
 line 117, in bind_port
2016-08-19 07:29:30.178 23329 ERROR networking_odl.ml2.network_topology 
port_context, vif_type, self._vif_details)
2016-08-19 07:29:30.178 23329 ERROR networking_odl.ml2.network_topology   File 
"/usr/local/lib/python2.7/dist-packages/networking_odl/ml2/ovsdb_topology.py", 
line 173, in bind_port
2016-08-19 07:29:30.178 23329 ERROR networking_odl.ml2.network_topology 
raise ValueError('Unable to find any valid segment in given context.')
2016-08-19 07:29:30.178 23329 ERROR networking_odl.ml2.network_topology 
ValueError: Unable to find any valid segment in given context.
2016-08-19 07:29:30.178 23329 ERROR networking_odl.ml2.network_topology
2016-08-19 07:29:30.180 23329 ERROR networking_odl.ml2.network_topology 
[req-f228e4bb-0808-42b5-9628-0b2cd9ad1c92 - - - - -] Unable to bind port 
element for given host and valid VIF types:
hostname: node-3.domain.tld
valid VIF types: vhostuser, ovs
2016-08-19 07:29:30.181 23329 ERROR neutron.plugins.ml2.managers 
[req-f228e4bb-0808-42b5-9628-0b2cd9ad1c92 - - - - -] Failed to bind port 
fcb2ecbe-07ba-41a1-8c5e-7dac6577d58b on host node-3.domain.tld for vnic_type 
normal using segments [{'segmentation_id': None, 'physical_network': 
u'physnet1', 'id': u'9c2c66a2-557c-4547-912a-1f043ea76d9f', 'network_type': 
u'flat'}]
2016-08-19 07:29:30.644 23329 INFO networking_odl.journal.journal [-] Syncing 
update port 

[openstack-dev] [Magnum] about word "baymodel"

2016-08-19 Thread Shuu Mutou
Hi folks, 

My understanding is that "baymodel" or "Baymodel" is correct, and "bay model" 
or "BayModel" is not.

Magnum-UI has used the former since Rob's last patch. Before the 
implementation, Rob seems to have asked about it on IRC.

Which is correct?

Please also check https://review.openstack.org/#/c/355804/


Thanks, 
Shu


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][networking-odl] New error Mitaka together with ODL-Beryllium

2016-08-19 Thread Nikolas Hermanns
I redeployed the system cause of other reason so ids changed. Problem still 
exist:
root@node-3:~# ovs-vsctl show
7c5fba05-9094-4306-87ed-2e44f2edc192
    Manager "tcp:192.168.0.4:6640"
        is_connected: true
    Bridge br-int
        Controller "tcp:192.168.0.4:6633"
            is_connected: true
        Port "TUNNEL:2"
            Interface "TUNNEL:2"
                type: vxlan
                options: {key=flow, local_ip="192.168.2.4", remote_ip="192.168.2.1"}
        Port "tap33a54c35-1b"
            Interface "tap33a54c35-1b"
                type: internal
        Port "tap54ea3b3f-ee"
            Interface "tap54ea3b3f-ee"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "tap6cd7d327-11"
            Interface "tap6cd7d327-11"
                type: internal
        Port "qr-15746548-70"
            Interface "qr-15746548-70"
                type: internal
        Port "TUNNEL:10"
            Interface "TUNNEL:10"
                type: vxlan
                options: {key=flow, local_ip="192.168.2.4", remote_ip="192.168.2.3"}
        Port "tap15a2a33f-9f"
            Interface "tap15a2a33f-9f"
                type: internal
        Port "tapfeea7fab-be"
            Interface "tapfeea7fab-be"
                type: internal
        Port "TUNNEL:6"
            Interface "TUNNEL:6"
                type: vxlan
                options: {key=flow, local_ip="192.168.2.4", remote_ip="192.168.2.2"}
        Port "tapc1ad3c0d-3c"
            Interface "tapc1ad3c0d-3c"
                type: internal
    ovs_version: "2.4.1"

## neutron port-list
| fcb2ecbe-07ba-41a1-8c5e-7dac6577d58b |  | fa:16:3e:9a:55:16 | 
{"subnet_id": "102e7306-8a2d-4448-bf69-9e0c3c8649b4", "ip_address": 
"172.16.0.130"}  |

## ERROR from the logs:
2016-08-19 07:29:30.170 23329 ERROR neutron.plugins.ml2.managers 
[req-f228e4bb-0808-42b5-9628-0b2cd9ad1c92 - - - - -] Failed to bind port 
fcb2ecbe-07ba-41a1-8c5e-7dac6577d58b on host node-3.domain.tld for vnic_type 
normal using segments [{'segmentation_id': None, 'physical_network': 
u'physnet1', 'id': u'9c2c66a2-557c-4547-912a-1f043ea76d9f', 'network_type': 
u'flat'}]
2016-08-19 07:29:30.171 23329 INFO neutron.plugins.ml2.plugin 
[req-f228e4bb-0808-42b5-9628-0b2cd9ad1c92 - - - - -] Attempt 10 to bind port 
fcb2ecbe-07ba-41a1-8c5e-7dac6577d58b
2016-08-19 07:29:30.178 23329 ERROR networking_odl.ml2.network_topology 
[req-f228e4bb-0808-42b5-9628-0b2cd9ad1c92 - - - - -] Network topology element 
has failed binding port:
{
    "class": "networking_odl.ml2.ovsdb_topology.OvsdbNetworkTopologyElement",
    "has_datapath_type_netdev": false,
    "host_addresses": [
        "192.168.0.4"
    ],
    "support_vhost_user": false,
    "uuid": "7c5fba05-9094-4306-87ed-2e44f2edc192",
    "valid_vif_types": [
        "ovs"
    ]
}
2016-08-19 07:29:30.178 23329 ERROR networking_odl.ml2.network_topology 
Traceback (most recent call last):
2016-08-19 07:29:30.178 23329 ERROR networking_odl.ml2.network_topology   File 
"/usr/local/lib/python2.7/dist-packages/networking_odl/ml2/network_topology.py",
 line 117, in bind_port
2016-08-19 07:29:30.178 23329 ERROR networking_odl.ml2.network_topology 
port_context, vif_type, self._vif_details)
2016-08-19 07:29:30.178 23329 ERROR networking_odl.ml2.network_topology   File 
"/usr/local/lib/python2.7/dist-packages/networking_odl/ml2/ovsdb_topology.py", 
line 173, in bind_port
2016-08-19 07:29:30.178 23329 ERROR networking_odl.ml2.network_topology 
raise ValueError('Unable to find any valid segment in given context.')
2016-08-19 07:29:30.178 23329 ERROR networking_odl.ml2.network_topology 
ValueError: Unable to find any valid segment in given context.
2016-08-19 07:29:30.178 23329 ERROR networking_odl.ml2.network_topology
2016-08-19 07:29:30.180 23329 ERROR networking_odl.ml2.network_topology 
[req-f228e4bb-0808-42b5-9628-0b2cd9ad1c92 - - - - -] Unable to bind port 
element for given host and valid VIF types:
hostname: node-3.domain.tld
valid VIF types: vhostuser, ovs
2016-08-19 07:29:30.181 23329 ERROR neutron.plugins.ml2.managers 
[req-f228e4bb-0808-42b5-9628-0b2cd9ad1c92 - - - - -] Failed to bind port 
fcb2ecbe-07ba-41a1-8c5e-7dac6577d58b on host node-3.domain.tld for vnic_type 
normal using segments [{'segmentation_id': None, 'physical_network': 
u'physnet1', 'id': u'9c2c66a2-557c-4547-912a-1f043ea76d9f', 'network_type': 
u'flat'}]
2016-08-19 07:29:30.644 23329 INFO networking_odl.journal.journal [-] Syncing 
update port fcb2ecbe-07ba-41a1-8c5e-7dac6577d58b
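The ValueError in the traceback above comes from networking-odl's OVSDB topology element rejecting every segment it was offered. As an illustrative sketch (the accepted network types below are an assumption inferred from the error, not copied from the networking-odl source), the check behaves roughly like this, which is why a 'flat' segment on physnet1 never binds:

```python
# Illustrative sketch only -- VALID_NETWORK_TYPES is an assumption inferred
# from the traceback, not the actual networking-odl implementation.
VALID_NETWORK_TYPES = ("vxlan", "gre")  # tunnel types the OVSDB element can bind

def pick_segment(segments):
    """Return the first segment with a bindable network type."""
    for segment in segments:
        if segment.get("network_type") in VALID_NETWORK_TYPES:
            return segment
    # This is the exception seen in the log when only 'flat' segments exist.
    raise ValueError('Unable to find any valid segment in given context.')

# The segment from the failing bind attempt in the log above:
flat_segment = {"segmentation_id": None, "physical_network": "physnet1",
                "network_type": "flat"}
```

With only `flat_segment` offered, `pick_segment` raises exactly the error quoted in the log, regardless of how the OVS bridges are wired.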

Thanks for helping!

BR Nikolas  

> -Original Message-
> From: Rui Zang [mailto:rui.z...@foxmail.com]
> Sent: Friday, August 19, 2016 4:26 AM
> To: Nikolas Hermanns; OpenStack Development Mailing List (not for usage
> questions)
> Cc: Vishal Thapar; Michal Skalski; neutron-...@lists.opendaylight.org
> Subject: Re: [openstack-dev] [Neutron][networking-odl] New error Mitaka
> together with 

Re: [openstack-dev] [OpenStack-docs] [Magnum] Using common tooling for API docs

2016-08-19 Thread Shuu Mutou
>   AFAIK, the API WG adopted Swagger (OpenAPI) as common tool for API
> docs.
>   Anne, has not the adoption been changed? Otherwise, do we need to
> maintain much RST files also?
> 
> 
> 
> It does say either/or in the API WG guideline:
> http://specs.openstack.org/openstack/api-wg/guidelines/api-docs.html

Yes. Ken'ichi Omichi said so as well.


> This isn't about a contest between projects for the best approach. This
> is about serving end-users the best information we can.

Yes. Correct information is the best information; accuracy matters more than 
the web experience. When I was a user (and an SIer), documentation accuracy 
was not maintained, so in the end we had to read the source code. Now, as a 
developer (mainly of UI plugins), I don't want to maintain overlapping content 
in several places (API source code, API reference, client help, WebUI help, 
etc.), so I am putting my effort into spec auto-generation.


> I'm reporting what I'm seeing from a broader viewpoint than a single project.
> I don't have a solution other than RST/YAML for common navigation, and I'm
> asking you to provide ideas for that integration point.
> 
> My vision is that even if you choose to publish with OpenAPI, you would
> find a way to make this web experience better. We can do better than this
> scattered approach. I'm asking you to find a way to unify and consider the
> web experience of a consumer of OpenStack services. Can you generate HTML
> that can plug into the openstackdocstheme we are providing as a common tool?

I need to know more about the "common tools". Could you tell me the difference 
between the HTML built by Lars's patch and the HTML built by the common tools? 
Or can fairy-slipper do that from an OpenAPI file?


Thanks,
Shu


> -Original Message-
> From: Anne Gentle [mailto:annegen...@justwriteclick.com]
> Sent: Wednesday, August 17, 2016 11:55 AM
> To: Mutou Shuu(武藤 周) 
> Cc: openstack-dev@lists.openstack.org; m...@redhat.com; Katou Haruhiko(加
> 藤 治彦) ; openstack-d...@lists.openstack.org;
> kenichi.omi...@necam.com
> Subject: Re: [OpenStack-docs] [openstack-dev] [Magnum] Using common
> tooling for API docs
> 
> 
> 
> On Tue, Aug 16, 2016 at 1:05 AM, Shuu Mutou   > wrote:
> 
> 
>   Hi Anne,
> 
>   AFAIK, the API WG adopted Swagger (OpenAPI) as common tool for API
> docs.
>   Anne, has not the adoption been changed? Otherwise, do we need to
> maintain much RST files also?
> 
> 
> 
> It does say either/or in the API WG guideline:
> http://specs.openstack.org/openstack/api-wg/guidelines/api-docs.html
> 
> 
> 
>   IMO, so that the reference and the source code do not conflict, they
> should be kept as close to each other as possible, as follows. This
> decreases the maintenance cost of the documents and increases document
> reliability. So I believe our approach is closer to the ideal.
> 
> 
> 
> 
> This isn't about a contest between projects for the best approach. This
> is about serving end-users the best information we can.
> 
> 
>   The Best: the references generated from source code.
> 
> 
> 
> I don't want to argue, but anything generated from the source code suffers:
> if the source code changes in a way that reviewers don't catch as
> backwards-incompatible, you can break your contract.
> 
> 
>   Better: the references written in docstring.
> 
>   We know some projects abandoned this approach and now use
> RST + YAML.
>   But we hope to decrease the maintenance cost of the documents, so we
> should not create so many RST files, I think.
> 
> 
> 
> 
> I think you'll see the evolution of our discussions over the years has
> brought us to this point in time. Yes, there are trade-offs in these
> decisions.
> 
> 
> 
>   I'm proceeding with auto-generation of a Swagger spec from the Magnum
> source code using pecan-swagger [1], and improving pecan-swagger with
> Michael McCune [2].
>   These will generate almost all of the Magnum API spec automatically in
> OpenAPI format.
>   This approach can also be adopted by other APIs that use pecan
> and WSME.
>   Please check this as well.
> 
> 
> 
> I ask you to consider the experience of someone consuming the API documents
> OpenStack provides. There are 26 REST API services under an OpenStack
> umbrella. Twelve of them will be included in a unified side-bar navigation
> on developer.openstack.org due to
> using Sphinx tooling provided as a common web experience. Six of them don't
> have redirects yet from the "old way" to do API reference docs. Seven of
> them are not collected under a single landing page or common sidebar or
> navigation. Three of them have no API docs published to a website.
> 
> I'm reporting what I'm seeing from a broader viewpoint than a single project.
> I don't have a solution other than RST/YAML for common navigation, and I'm
> asking you to provide ideas for that integration point.
> 
> My 

Re: [openstack-dev] [Magnum] Next auto-scaling feature design?

2016-08-19 Thread hie...@vn.fujitsu.com
Thanks for all the information.

Yeah, I hope we can have a session about auto-scaling at this summit.

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: Friday, August 19, 2016 4:19 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Magnum] Next auto-scaling feature design?


We have had numerous discussions on this topic, including a presentation and a 
design session in Tokyo, but we have not really arrived at a consensus yet. 
Part of the problem is that auto-scaling at the container level is still being 
developed, so it is still a moving target.
However, a few points did emerge from the discussion (not necessarily 
consensus):

  *   It's preferable to have a single point of decision on auto-scaling for 
both the container and infrastructure level.
One approach is to make this decision at the container orchestration level, so 
the infrastructure level would just
provide the service to handle requests to scale the infrastructure. This would 
require coordinating support with
upstream like Kubernetes. This approach also means that we don't want a major 
component in Magnum to
drive auto-scaling.
  *   It's good to have a policy-driven mechanism for auto-scaling to handle 
complex scenarios. For this, Senlin
is a candidate; upstream is another potential choice.
We may want to revisit this topic as a design session at the next summit.
Ton Ngo,


From: Hongbin Lu
To: "OpenStack Development Mailing List (not for usage questions)"
Date: 08/18/2016 12:26 PM
Subject: Re: [openstack-dev] [Magnum] Next auto-scaling feature design?






> -Original Message-
> From: hie...@vn.fujitsu.com 
> [mailto:hie...@vn.fujitsu.com]
> Sent: August-18-16 3:57 AM
> To: 
> openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [Magnum] Next auto-scaling feature design?
>
> Hi Magnum folks,
>
> I have some interest in our auto-scaling features and am currently
> testing some container monitoring solutions such as Heapster,
> Telegraf and Prometheus. I saw the PoC session in cooperation with
> Senlin in Austin and have some questions regarding this design:
> - We have decided to move all container management from Magnum to Zun,
> so is there only one level of scaling (node) instead of both node and
> container?
> - The PoC design shows that Magnum (the Magnum scaler) needs to depend on
> Heat/Ceilometer for gathering metrics and do the scaling work based on
> auto-scaling policies, but is Heat/Ceilometer the best choice for
> Magnum auto-scaling?
>
> Currently, I see that Magnum only sends CPU and memory metrics to
> Ceilometer, and Heat can grab these to decide the right scaling method.
> IMO, this approach has some problems; please take a look and give
> feedback:
> - The AutoScaling Policy and AutoScaling Resource of Heat cannot handle
> complex scaling policies. For example:
> If CPU > 80% then scale out
> If Mem < 40% then scale in
> -> What if CPU = 90% and Mem = 30%? A policy conflict appears.
> There are some WIP patch sets for Heat conditional logic in [1]. But IMO,
> Heat's conditional logic also cannot resolve conflicts between
> scaling policies. For example:
> If CPU > 80% and Mem >70% then scale out If CPU < 30% or Mem < 50% then
> scale in
> -> What if CPU = 90% and Mem = 30%?
> Thus, I think we need to implement a Magnum scaler to validate
> policy conflicts.
> - Ceilometer may have troubles if we deploy thousands of COE.
>
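The conflict described above (CPU = 90% and Mem = 30% triggering both rules at once) can be sketched as a small validation step; the `Policy` structure and action names below are illustrative assumptions, not Magnum code:

```python
# Illustrative sketch of the policy-conflict check discussed above;
# the Policy tuple and action names are assumptions, not Magnum code.
from collections import namedtuple

Policy = namedtuple("Policy", ["metric", "op", "threshold", "action"])

def triggered(policy, metrics):
    """True when the policy's threshold comparison fires for these metrics."""
    value = metrics[policy.metric]
    return value > policy.threshold if policy.op == ">" else value < policy.threshold

def decide(policies, metrics):
    """Return a scaling action, or flag a conflict when rules disagree."""
    actions = {p.action for p in policies if triggered(p, metrics)}
    if actions == {"scale_out", "scale_in"}:
        return "conflict"  # both rules fired, e.g. CPU=90 with Mem=30
    return actions.pop() if actions else "noop"

policies = [
    Policy("cpu", ">", 80, "scale_out"),  # If CPU > 80% then scale out
    Policy("mem", "<", 40, "scale_in"),   # If Mem < 40% then scale in
]
print(decide(policies, {"cpu": 90, "mem": 30}))  # conflict
```

A scaler that validates policies up front can reject such rule sets (or require an explicit priority) instead of letting Heat receive contradictory signals.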
> I think we need a new design for the auto-scaling feature, not only for
> Magnum but also for Zun (because container-level scaling may be forked
> to Zun too). Here are some ideas:
> 1. Add a new field enable_monitor to the cluster template (ex baymodel) and
> show the monitoring URL when cluster (bay) creation completes. For
> example, we can use Prometheus as the monitoring container for each cluster.
> (Heapster is the best choice for k8s, but not good enough for Swarm or
> Mesos.)

[Hongbin Lu] Personally, I think this is a good idea.

> 2. Create a Magnum scaler manager (maybe a new service):
> - Monitor clusters with monitoring enabled and send metrics to Ceilometer
> if needed.
> - Manage user-defined scaling policies: not only CPU and memory but also
> other metrics like network bandwidth and CCU.
> - Validate user-defined scaling policies and trigger Heat for scaling
> actions (can trigger nova-scheduler for more scaling options).
> - Need a highly scalable architecture; as a first step we can