Re: [openstack-dev] [puppet] [Swift] Multiple proxy recipes will create out of sync rings

2015-06-12 Thread Mark Kirkwood
From what I can see, the ring gets created and rebalanced in 
puppet-swift/manifests/ringbuilder.pp, i.e. by calling:


  class { '::swift::ringbuilder':
    # the part power should be determined by assuming 100 partitions per drive
    part_power     => '18',
    replicas       => '3',
    min_part_hours => 1,
    require        => Class['swift'],
  }

*not* when each device is added.

Yeah, using a seed is probably a good solution too. For the moment I'm 
using the idea of one proxy being a 'ring server/master', which achieves 
the same thing (identical rings everywhere). However, I'll have a look at 
using a seed, as this may simplify the code and also the operational 
procedure needed to replace said 'master' if it fails (i.e. to avoid 
accidentally creating a new ring when you really don't need to...)
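
For reference, here is a rough sketch of what building identical rings
independently on each proxy could look like, assuming swift-ring-builder's
rebalance accepts an optional seed argument as Donagh describes below
(device strings and weights are made up for illustration):

import subprocess

def build_object_ring(devices, seed=42):
    builder = 'object.builder'
    # part_power=18, replicas=3, min_part_hours=1, matching the manifest above
    subprocess.check_call(['swift-ring-builder', builder, 'create', '18', '3', '1'])
    for dev in devices:  # e.g. 'r1z1-192.168.1.2:6000/sdb1'; same devices, same order, everywhere
        subprocess.check_call(['swift-ring-builder', builder, 'add', dev, '100'])
    # fixing the seed should make the rebalance deterministic, so every node
    # ends up with identical builder and ring files
    subprocess.check_call(['swift-ring-builder', builder, 'rebalance', str(seed)])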


Regards,

Mark

On 12/06/15 23:10, McCabe, Donagh wrote:

I skimmed the code, but since I'm not familiar with the environment, I could not find where 
"swift-ring-builder rebalance" is invoked. I'm guessing that each time you add a device 
to a ring, a rebalance is also done. Leaving aside how inefficient that is, the key thing is that 
the rebalance command has an optional "seed" parameter. Unless you explicitly set the 
seed (to the same value on all nodes, obviously), you won't get the same ring on all nodes. You also need 
to make sure you add the same set of drives, in the same order.

Regards,
Donagh
-Original Message-
From: Mark Kirkwood [mailto:mark.kirkw...@catalyst.net.nz]
Sent: 12 June 2015 06:28
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [puppet] [Swift] Multiple proxy recipes will create 
out of sync rings

I've been looking at using puppet-swift to deploy a Swift cluster.

Firstly - without
http://git.openstack.org/cgit/stackforge/puppet-swift/tree/tests/site.pp
I would have struggled a great deal more to get up and running, so a big thank 
you for a nice worked example of how to do multiple nodes!

However I have stumbled upon a problem - with respect to creating multiple proxy 
nodes. There are some recipes around that follow on from the site.pp above and 
explicitly build >1 proxy (e.g
https://github.com/CiscoSystems/puppet-openstack-ha/blob/folsom_ha/examples/swift-nodes.pp)

Now the problem is that each proxy node does a ring builder create, so each ends up 
with *different* builder (and therefore ring) files. This is not good, as the 
end result is a cluster with all storage nodes and *one* proxy sharing the same 
set of ring files, and *all* other proxies with
*different* ring (and builder) files.

I have used logic similar to the attached to work around this, i.e. only create 
rings if we are the 'ring server'; otherwise fetch them via rsync.
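
Roughly, the idea is as follows (only a sketch, not the attached manifest;
the hostname and rsync module path are hypothetical):

import socket
import subprocess

RING_MASTER = 'proxy01.example.com'   # the one node allowed to build rings
RING_FILES = ['account.ring.gz', 'container.ring.gz', 'object.ring.gz']

def sync_rings(ring_dir='/etc/swift'):
    if socket.getfqdn() == RING_MASTER:
        return  # only the ring master creates and rebalances rings
    for ring in RING_FILES:
        # every other node just pulls the built ring files from the master
        subprocess.check_call(
            ['rsync', '-a', 'rsync://%s/swift/%s' % (RING_MASTER, ring), ring_dir])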

Thoughts?

Regards

Mark
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port Blueprint

2015-06-12 Thread Adrian Otto
Hongbin,

Good use case. I suggest that we add a parameter to magnum bay-create that will 
allow the user to override the baymodel.apiserver_port attribute with a new 
value that will end up in the bay.api_address attribute as part of the URL. 
This approach assumes implementation of the magnum-api-address-url blueprint. 
This way we solve for the use case, and don't need a new attribute on the bay 
resource that requires users to concatenate multiple attribute values in order 
to get a native client tool working.

Adrian

On Jun 12, 2015, at 6:32 PM, Hongbin Lu 
mailto:hongbin...@huawei.com>> wrote:

A use case could be that the cloud is behind a proxy and the API port is filtered. 
In this case, users have to start the service on an alternative port.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: June-12-15 2:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port 
Blueprint

Thanks for raising this for discussion. Although I do think that the API port 
number should be expressed in a URL that the local client can immediately use 
for connecting a native client to the API, I am not convinced that this needs 
to be a separate attribute on the Bay resource.

In general, I think it’s a reasonable assumption that nova instances will have 
unique IP addresses assigned to them (public or private is not an issue here) 
so unique port numbers for running the API services on alternate ports seems 
like it may not be needed. I’d like to have input from at least one Magnum user 
explaining an actual use case for this feature before accepting this blueprint.

One possible workaround for this would be to instruct those who want to run 
nonstandard ports to copy the heat template, and specify a new heat template as 
an alternate when creating the BayModel, which can implement the port number as 
a parameter. If we learn that this happens a lot, we should revisit this as a 
feature in Magnum rather than allowing it through an external workaround.

I’d like to have a generic feature that allows for arbitrary key/value pairs 
for parameters and values to be passed to the heat stack create call so that 
this, and other values can be passed in using the standard magnum client and 
API without further modification. I’m going to look to see if we have a BP for 
this, and if not, I will make one.
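
For illustration, the kind of pass-through I have in mind might look roughly
like this (names are hypothetical and this assumes python-heatclient, not
Magnum's actual code):

from heatclient import client as heat_client

def create_bay_stack(heat_endpoint, token, stack_name, template, user_params):
    heat = heat_client.Client('1', endpoint=heat_endpoint, token=token)
    # user_params could be e.g. {'apiserver_port': '8080', 'dns_nameserver': '8.8.8.8'}
    # and is handed straight through to the Heat stack create call
    return heat.stacks.create(stack_name=stack_name,
                              template=template,
                              parameters=user_params)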

Adrian



On Jun 11, 2015, at 6:05 PM, Kai Qiang Wu(Kennan) 
mailto:wk...@cn.ibm.com>> wrote:

If I understand the bp correctly,

the apiserver_port is the public access or API call service endpoint. If that is 
the case, users would use that info as

http(s)://<ip>:<port>

so the port is useful information for users.


If we believe the above assumption is right, then:

1) Some users do not need to change the port, since the Heat templates have a 
default hard-coded port.

2) Some users may want to change the port (through Heat, we can do that), so we 
need to add such flexibility for them.
That's what the bp 
https://blueprints.launchpad.net/magnum/+spec/configurable-coe-api-port tries to 
solve.

It depends on how end-users use Magnum.


More input about this is welcome. If many of us think it is not necessary to 
customize the ports, we can drop the bp.


Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Jay Lau mailto:jay.lau@gmail.com>>
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: 06/11/2015 01:17 PM
Subject: Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port 
Blueprint




I think that we have a similar bp before: 
https://blueprints.launchpad.net/magnum/+spec/override-native-rest-port

 I had some discussion before with Larsks; it seems that it does not make much 
sense to customize this port, as the kubernetes/swarm/mesos cluster will be 
created by Heat and end users do not need to care about the ports. Different 
kubernetes/swarm/mesos clusters will have different IP addresses, so there will 
be no port conflict.

2015-06-11 9:35 GMT+08:00 Kai Qiang Wu 
mailto:wk...@cn.ibm.com>>:
I’m moving this whiteboard to the ML so we can have some discussion to refine 
it, and then go back and update the whiteboard.

Source:https://blueprints.launchpad.net/magnum/+spec/configurable-coe-api-port


@Sdake and I have some discussion now, but may need more input from you

[openstack-dev] [openstack][trove] The default configuration for redis instance

2015-06-12 Thread Li Tianqing
Hello,
   guys, I found that the default configuration of Redis does not change as the 
flavor changes. Why is this? I think it should be like the MySQL default 
configuration, which changes with the instance's flavor.
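
For illustration, the kind of flavor-based scaling I mean could look like this
(the names and the 70% ratio are my own, not Trove's actual template logic):

def redis_overrides_for_flavor(flavor_ram_mb, ratio=0.7):
    # derive maxmemory from the flavor's RAM, leaving headroom for the OS
    maxmemory_mb = int(flavor_ram_mb * ratio)
    return {
        'maxmemory': '%dmb' % maxmemory_mb,
        'maxmemory-policy': 'allkeys-lru',
    }

# e.g. a 4096 MB flavor -> {'maxmemory': '2867mb', 'maxmemory-policy': 'allkeys-lru'}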





--

Best
Li Tianqing
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][puppet] Federation using ipsilon

2015-06-12 Thread Adam Young

On 06/12/2015 04:53 PM, Rich Megginson wrote:
I've done a first pass of setting up a puppet module to configure 
Keystone to use ipsilon for federation, using 
https://github.com/richm/puppet-apache-auth-mods, and a version of 
ipsilon-client-install with patches 
https://fedorahosted.org/ipsilon/ticket/141 and 
https://fedorahosted.org/ipsilon/ticket/142, and a heavily modified 
version of the ipa/rdo federation setup scripts - 
https://github.com/richm/rdo-vm-factory.


I would like some feedback from the Keystone and puppet folks about 
this approach.


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I take it this is not WebSSO yet, but only Federation.

Around here...

https://github.com/richm/puppet-apache-auth-mods/blob/master/manifests/keystone_ipsilon.pp#L64

You would need to have the trusted dashboard, etc.

But I think that is what you intend.  However, without an ECP setup, we 
really have no way to test it.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port Blueprint

2015-06-12 Thread Hongbin Lu
A use case could be that the cloud is behind a proxy and the API port is filtered. 
In this case, users have to start the service on an alternative port.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: June-12-15 2:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port 
Blueprint

Thanks for raising this for discussion. Although I do think that the API port 
number should be expressed in a URL that the local client can immediately use 
for connecting a native client to the API, I am not convinced that this needs 
to be a separate attribute on the Bay resource.

In general, I think it’s a reasonable assumption that nova instances will have 
unique IP addresses assigned to them (public or private is not an issue here) 
so unique port numbers for running the API services on alternate ports seems 
like it may not be needed. I’d like to have input from at least one Magnum user 
explaining an actual use case for this feature before accepting this blueprint.

One possible workaround for this would be to instruct those who want to run 
nonstandard ports to copy the heat template, and specify a new heat template as 
an alternate when creating the BayModel, which can implement the port number as 
a parameter. If we learn that this happens a lot, we should revisit this as a 
feature in Magnum rather than allowing it through an external workaround.

I’d like to have a generic feature that allows for arbitrary key/value pairs 
for parameters and values to be passed to the heat stack create call so that 
this, and other values can be passed in using the standard magnum client and 
API without further modification. I’m going to look to see if we have a BP for 
this, and if not, I will make one.

Adrian



On Jun 11, 2015, at 6:05 PM, Kai Qiang Wu(Kennan) 
mailto:wk...@cn.ibm.com>> wrote:

If I understand the bp correctly,

the apiserver_port is the public access or API call service endpoint. If that is 
the case, users would use that info as

http(s)://<ip>:<port>

so the port is useful information for users.


If we believe the above assumption is right, then:

1) Some users do not need to change the port, since the Heat templates have a 
default hard-coded port.

2) Some users may want to change the port (through Heat, we can do that), so we 
need to add such flexibility for them.
That's what the bp 
https://blueprints.launchpad.net/magnum/+spec/configurable-coe-api-port tries to 
solve.

It depends on how end-users use Magnum.


More input about this is welcome. If many of us think it is not necessary to 
customize the ports, we can drop the bp.


Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Jay Lau mailto:jay.lau@gmail.com>>
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: 06/11/2015 01:17 PM
Subject: Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port 
Blueprint




I think that we have a similar bp before: 
https://blueprints.launchpad.net/magnum/+spec/override-native-rest-port

 I had some discussion before with Larsks; it seems that it does not make much 
sense to customize this port, as the kubernetes/swarm/mesos cluster will be 
created by Heat and end users do not need to care about the ports. Different 
kubernetes/swarm/mesos clusters will have different IP addresses, so there will 
be no port conflict.

2015-06-11 9:35 GMT+08:00 Kai Qiang Wu 
mailto:wk...@cn.ibm.com>>:
I’m moving this whiteboard to the ML so we can have some discussion to refine 
it, and then go back and update the whiteboard.

Source:https://blueprints.launchpad.net/magnum/+spec/configurable-coe-api-port


@Sdake and I have some discussion now, but may need more input from your side.


1. If we keep apiserver_port in the baymodel, it may mean that only admins (if we 
involve policy) can create that baymodel, which leaves less flexibility for other users.


2. apiserver_port was designed into the baymodel; moving it from baymodel to bay is 
a big change, so we should consider whether we have other, better ways. (This may 
also apply to other configuration fields, like dns-nameserver etc.)



Thanks



Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10

[openstack-dev] Gerrit maintenance concluded

2015-06-12 Thread Jeremy Stanley
Our maintenance has concluded successfully without incident and the
accompanying Gerrit outage was roughly an hour.

We moved 57 repositories to new Git namespaces:

stackforge/cookbook-openstack-bare-metal
-> openstack/cookbook-openstack-bare-metal
stackforge/cookbook-openstack-block-storage
-> openstack/cookbook-openstack-block-storage
stackforge/cookbook-openstack-client
-> openstack/cookbook-openstack-client
stackforge/cookbook-openstack-common
-> openstack/cookbook-openstack-common
stackforge/cookbook-openstack-compute
-> openstack/cookbook-openstack-compute
stackforge/cookbook-openstack-dashboard
-> openstack/cookbook-openstack-dashboard
stackforge/cookbook-openstack-data-processing
-> openstack/cookbook-openstack-data-processing
stackforge/cookbook-openstack-database
-> openstack/cookbook-openstack-database
stackforge/cookbook-openstack-identity
-> openstack/cookbook-openstack-identity
stackforge/cookbook-openstack-image
-> openstack/cookbook-openstack-image
stackforge/cookbook-openstack-integration-test
-> openstack/cookbook-openstack-integration-test
stackforge/cookbook-openstack-network
-> openstack/cookbook-openstack-network
stackforge/cookbook-openstack-object-storage
-> openstack/cookbook-openstack-object-storage
stackforge/cookbook-openstack-ops-database
-> openstack/cookbook-openstack-ops-database
stackforge/cookbook-openstack-ops-messaging
-> openstack/cookbook-openstack-ops-messaging
stackforge/cookbook-openstack-orchestration
-> openstack/cookbook-openstack-orchestration
stackforge/cookbook-openstack-telemetry
-> openstack/cookbook-openstack-telemetry
stackforge/dragonflow
-> openstack/dragonflow
stackforge/mistral
-> openstack/mistral
stackforge/mistral-dashboard
-> openstack/mistral-dashboard
stackforge/mistral-extra
-> openstack/mistral-extra
stackforge/networking-bgpvpn
-> openstack/networking-bgpvpn
stackforge/networking-cisco
-> openstack/networking-cisco
stackforge/networking-l2gw
-> openstack/networking-l2gw
stackforge/networking-midonet
-> openstack/networking-midonet
stackforge/networking-odl
-> openstack/networking-odl
stackforge/networking-ofagent
-> openstack/networking-ofagent
stackforge/networking-ovn
-> openstack/networking-ovn
stackforge/octavia
-> openstack/octavia
stackforge/openstack-chef-repo
-> openstack/openstack-chef-repo
stackforge/openstack-chef-specs
-> openstack/openstack-chef-specs
stackforge/puppet-ceilometer
-> openstack/puppet-ceilometer
stackforge/puppet-cinder
-> openstack/puppet-cinder
stackforge/puppet-designate
-> openstack/puppet-designate
stackforge/puppet-glance
-> openstack/puppet-glance
stackforge/puppet-gnocchi
-> openstack/puppet-gnocchi
stackforge/puppet-heat
-> openstack/puppet-heat
stackforge/puppet-horizon
-> openstack/puppet-horizon
stackforge/puppet-ironic
-> openstack/puppet-ironic
stackforge/puppet-keystone
-> openstack/puppet-keystone
stackforge/puppet-manila
-> openstack/puppet-manila
stackforge/puppet-modulesync-configs
-> openstack/puppet-modulesync-configs
stackforge/puppet-monasca
-> openstack/puppet-monasca
stackforge/puppet-neutron
-> openstack/puppet-neutron
stackforge/puppet-nova
-> openstack/puppet-nova
stackforge/puppet-openstack-specs
-> openstack/puppet-openstack-specs
stackforge/puppet-openstack_extras
-> openstack/puppet-openstack_extras
stackforge/puppet-openstacklib
-> openstack/puppet-openstacklib
stackforge/puppet-sahara
-> openstack/puppet-sahara
stackforge/puppet-swift
-> openstack/puppet-swift
stackforge/puppet-tempest
-> openstack/puppet-tempest
stackforge/puppet-tripleo
-> openstack/puppet-tripleo
stackforge/puppet-trove
-> openstack/puppet-trove
stackforge/puppet-tuskar
-> openstack/puppet-tuskar
stackforge/puppet-vswitch
-> openstack/puppet-vswitch
stackforge/python-mistralclient
-> openstack/python-mistralclient
stackforge/vmware-nsx
-> openstack/vmware-nsx

We moved and also renamed 1 repository:

stackforge/ironic-discoverd
-> openstack/ironic-inspector

We retired 3 unmaintained/abandoned repositories:

stackforge/fuel-plugin-external-nfs
-> stackforge-attic/fuel-plugin-external-nfs
stackforge/fuel-plugin-group-based-policy
-> stackforge-attic/fuel-plugin-group-based-policy
stackforge/zvm-driver
-> stackforge-attic/zvm-driver

I've uploaded these .gitreview updates and request the respecti

Re: [openstack-dev] [Neutron] Proposing YAMAMOTO Takashi for the Control Plane core team

2015-06-12 Thread Edgar Magana
I second Henry! Great addition to the team!

Edgar




On 6/12/15, 2:39 PM, "Henry Gessau"  wrote:

>Although I am not on your list I would like to add my +1! Yamamoto shows great
>attention to detail in code reviews and frequently finds real issues that were
>not spotted by others.
>
>On Thu, Jun 11, 2015, Kevin Benton  wrote:
>> Hello all!
>> 
>> As the Lieutenant of the built-in control plane[1], I would like YAMAMOTO
>> Takashi to be a member of the control plane core reviewer team.
>> 
>> He has been extensively reviewing the entire codebase[2] and his feedback on
>> patches related to the reference implementation has been very useful. This
>> includes everything ranging from the AMQP API to OVS flows.
>> 
>> Existing cores that have spent time working on the reference implementation
>> (agents and AMQP code), please vote +1/-1 for his addition to the team. 
>> Aaron,
>> Gary, Assaf, Maru, Kyle, Armando, Carl and Oleg; you have all been reviewing
>> things in these areas recently so I would like to hear from you specifically.
>> 
>> 1. 
>> http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy
>> 2. http://stackalytics.com/report/contribution/neutron-group/90
>> 
>> 
>> Cheers
>> -- 
>> Kevin Benton
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-12 Thread Adrian Otto
It turns out we already have one: https://launchpad.net/magnum-ui

The Driver is already set to "Magnum Drivers”.

On Jun 12, 2015, at 12:26 PM, Adrian Otto 
mailto:adrian.o...@rackspace.com>> wrote:

Okay, I will think on that a bit.

Adrian


 Original message 
From: "Steven Dake (stdake)" mailto:std...@cisco.com>>
Date: 06/12/2015 8:04 AM (GMT-08:00)
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - 
need a vote from the core team

Adrian,

Disagree.

The python-magnumclient and heat-coe-templates have separate launchpad 
trackers.  I suspect we will want a separate tracker for python-k8sclient when 
we are ready for that.

IMO each repo should have a separate launchpad tracker to make the lives of the 
folks maintaining the software easier :)  This is a common best practice in 
OpenStack.

Regards
-steve


From: Adrian Otto mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Friday, June 12, 2015 at 7:47 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - 
need a vote from the core team

Steve,

There is no need for a second LP project at all.

Adrian


 Original message 
From: "Steven Dake (stdake)" mailto:std...@cisco.com>>
Date: 06/12/2015 7:41 AM (GMT-08:00)
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - 
need a vote from the core team

Adrian,

Great thanks for that.  I would recommend one change, and that is to have one 
magnum-drivers team across launchpad trackers.  The magnum-drivers team, as you 
know (this is for the benefit of others), is responsible for maintaining the 
states of the launchpad trackers.

Regards,
-steve

From: Adrian Otto mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, June 11, 2015 at 7:21 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - 
need a vote from the core team

Team,

We are fortunate enough to have a thriving community of developers who want to 
make OpenStack great, and several of us have pledged support for this work in 
Magnum. Due to the amount of interest expressed in this pursuit, and the small 
amount of overlap between the developers in magnum-core, I’m authorizing the 
creation of a new gerrit ACL group named magnum-ui-core. Please install me as 
the pilot member of the group. I will seed the group with those who have 
pledged support for the effort from the “essential” subscribers to the 
following blueprint. If our contributors to the magnum-ui repo feel that review 
velocity is too low, I will add magnum-core as a member so we can help. On 
regular intervals, I will review the activity level of our new group, and make 
adjustments as needed to add/subtract from it in accordance with input from the 
active contributors. We will use the Magnum project on Launchpad for blueprints 
and bugs.

https://blueprints.launchpad.net/magnum/+spec/magnum-horizon-plugin

There are 8 contributors identified, who will comprise our initial 
magnum-ui-core group.

I ask that the ACLs be configured as follows:

[access "refs/heads/*"]
abandon = group magnum-ui-core
create = group magnum-milestone
label-Code-Review = -2..+2 group magnum-ui-core
label-Workflow = -1..+1 group magnum-ui-core

[access "refs/tags/*"]
pushSignedTag = group magnum-milestone

[receive]
requireChangeId = true
requireContributorAgreement = true

[submit]
mergeContent = true

Thanks everyone for your enthusiasm about this new pursuit. I look forward to 
working together with you to make this into something we are all proud of.

Adrian

PS: Special thanks to sdake for initiating this conversation, and helping us to 
arrive at a well reasoned decision about how to approach this.

On Jun 4, 2015, at 10:58 AM, Steven Dake (stdake) 
mailto:std...@cisco.com>> wrote:

Hey folks,

I think it is critical for self-service needs that we have a Horizon dashboard 
to represent Magnum.  I know the entire Magnum team has no experience in UI 
development, but I have found at least one volunteer, Bradley Jones, to tackle 
the work.

I am looking for more volunteers to tackle this high-impact effort to bring 
Containers to OpenStack, either in the existing Magnum core team or as new 
contributors.   If you're interested, please chime in on this thread.

As far as “how to get patches approved”, there are two models we c

[openstack-dev] [Solum] Why do app names have to be unique?

2015-06-12 Thread Adrian Otto
Team,

While triaging this bug, I got to thinking about unique names:

https://bugs.launchpad.net/solum/+bug/1434293

Should our app names be unique? Why? Should I open a blueprint for a new 
feature to make name uniqueness optional, and default it to “on”? If not, why?

Thanks,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-12 Thread Adrian Otto
Team,

We currently delete logs for an app when we delete the app[1].

https://bugs.launchpad.net/solum/+bug/1463986

Perhaps there should be an optional setting at the tenant level that determines 
whether your logs are deleted or not by default (set to off initially), and an 
optional parameter to our DELETE calls that allows for the opposite action from 
the default to be specified if the user wants to override it at the time of the 
deletion. Thoughts?
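
To make the precedence concrete, something along these lines (names are
hypothetical):

def should_delete_logs(tenant_default, request_param):
    # tenant_default: the tenant-level default for deleting logs
    # request_param: optional boolean from the DELETE call, or None if absent
    if request_param is not None:
        return request_param   # the per-request override wins
    return tenant_default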

Thanks,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-12 Thread Clint Byrum
Excerpts from Alec Hothan (ahothan)'s message of 2015-06-12 13:41:17 -0700:
> 
> On 6/1/15, 5:03 PM, "Davanum Srinivas"  wrote:
> 
> >fyi, the spec for zeromq driver in oslo.messaging is here:
> >https://review.openstack.org/#/c/187338/1/specs/liberty/zmq-patterns-usage
> >.rst,unified
> >
> >-- dims
> 
> I was about to provide some email comments on the above review off gerrit,
> but figured maybe it would be good to make a quick status of the state of
> this general effort for pushing out a better zmq driver for oslo messaging.
> So I started to look around the oslo/zeromq wiki and saw few email threads
> that drew my interest.
> 
> In this email (Nov 2014) Ilya proposes about getting rid of a central
> broker for zmq:
> http://lists.openstack.org/pipermail/openstack-dev/2014-November/050701.html
> Not clear if Ilya already had in mind to instead have a local proxy on
> every node (as proposed in the above spec)
> 
> 
> In this email (mar 2014), Yatin described the prospect of using zmq in a
> completely broker-less way (so not even a proxy per node), with the use of
> matchmaker rings to configure well known ports.
> http://lists.openstack.org/pipermail/openstack-dev/2014-March/030411.html
> Which is pretty close to what I think would be a better design (with the
> variant that I'd rather see a robust and highly available name server
> instead of fixed port assignments), I'd be interested to know what
> happened to that proposal and why we ended up with a proxy per node
> solution at this stage (I'll reply to the proxy per node design in a
> separate email to complement my gerrit comments).
> 
> 
> I could not find one document that summarizes the list of issues related
> to rabbitMQ deployments, all it appears is that many people are unhappy
> with it, some are willing to switch to zmq, many are hesitant and some are
> decidedly skeptical. On my side I know a number of issues related to oslo
> messaging over rabbitMQ.
> 
> I think it is important for the community to understand that of the many
> issues generally attributed to oslo messaging over rabbitMQ, not all of
> them are caused by the choice of rabbitMQ as a transport (and hence those
> will likely not be fixed if we just switched from rabbitMQ to ZMQ) and
> many are actually caused by the misuse of oslo messaging by the apps
> (Neutron, Nova...) and can only be fixed by modification of the app code.
> 
> I think personally that there is a strong case for a properly designed ZMQ
> driver but we first need to make the expectations very clear.
> 
> One long standing issue I can see is the fact that the oslo messaging API
> documentation is sorely lacking details on critical areas such as API
> behavior during fault conditions, load conditions and scale conditions.
> As a result, app developers are using the APIs sometimes indiscriminately
> and that will have an impact on the overall quality of openstack in
> deployment conditions.
> I understand that a lot of the existing code was written in a hurry and
> good enough to work properly on small setups, but some code will break
> really badly under load or when things start to go south in the cloud.
> That is unless the community realizes that perhaps there is something that
> needs to be done.
> 
> We're only starting to see today things breaking under load because we
> have more lab tests at scale, more deployments at scale and we only start
> to see real system level testing at scale with HA testing (the kind of
> test where you inject load and cause failures of all sorts). Today we know
> that openstack behaves terribly in these conditions, even in so-called HA
> deployments!
> 
> As a first step, would it be useful to have one single official document
> that characterizes all the issues we're trying to fix and perhaps used
> that document as a basis for showing which of all these issues will be
> fixed by the use of the zmq driver? I think that could help us focus
> better on the type of requirements we need from this new ZMQ driver.
> 

I think you missed "it is not tested in the gate" as a root cause for
some of the ambiguity. Anecdotes and bug reports are super important for
knowing where to invest next, but a test suite would at least establish a
base line and prevent the sort of thrashing and confusion that comes from
such a diverse community of users feeding bug reports into the system.

Also, not having a test in the gate is a serious infraction now, and will
lead to zmq's removal from oslo.messaging now that we have a ratified
policy requiring this. I suggest a first step being to strive to get a
devstack-gate job that runs using zmq instead of rabbitmq. You can
trigger it in oslo.messaging's check pipeline, and make it non-voting,
but eventually it needs to get into nova, neutron, cinder, heat, etc.
etc. Without that, you'll find that the community of potential
benefactors of any effort you put into zmq will shrink dramatically when
we are forced to remove the driver from oslo.messag

Re: [openstack-dev] [Neutron] Proposing Rossella Sblendido for the Control Plane core team

2015-06-12 Thread Carl Baldwin
+1!

On Fri, Jun 12, 2015 at 1:44 PM, Kevin Benton  wrote:
> Hello!
>
> As the Lieutenant of the built-in control plane[1], I would like Rossella
> Sblendido to be a member of the control plane core reviewer team.
>
> Her review stats are in line with other cores[2] and her feedback on patches
> related to the agents has been great. Additionally, she has been working
> quite a bit on the blueprint to restructure the L2 agent code so she is very
> familiar with the agent code and the APIs it leverages.
>
> Existing cores that have spent time working on the reference implementation
> (agents and AMQP code), please vote +1/-1 for her addition to the team.
> Aaron, Gary, Assaf, Maru, Kyle, Armando, Carl and Oleg; you have all been
> reviewing things in these areas recently so I would like to hear from you
> specifically.
>
> 1.
> http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy
> 2. http://stackalytics.com/report/contribution/neutron-group/30
>
> Cheers
> --
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-12 Thread ozamiatin

Hi, Alec

Thanks for email threads investigation.
I've decided to spend more time to dig into old zmq-related threads too.
Some notes inline.

6/12/15 23:41, Alec Hothan (ahothan) wrote:


On 6/1/15, 5:03 PM, "Davanum Srinivas"  wrote:


fyi, the spec for zeromq driver in oslo.messaging is here:
https://review.openstack.org/#/c/187338/1/specs/liberty/zmq-patterns-usage
.rst,unified

-- dims

I was about to provide some email comments on the above review off gerrit,
but figured maybe it would be good to make a quick status of the state of
this general effort for pushing out a better zmq driver for oslo messaging.
So I started to look around the oslo/zeromq wiki and saw few email threads
that drew my interest.

In this email (Nov 2014) Ilya proposes about getting rid of a central
broker for zmq:
http://lists.openstack.org/pipermail/openstack-dev/2014-November/050701.html
Not clear if Ilya already had in mind to instead have a local proxy on
every node (as proposed in the above spec)


In this email (mar 2014), Yatin described the prospect of using zmq in a
completely broker-less way (so not even a proxy per node), with the use of
matchmaker rings to configure well known ports.

This solution with matchmaker rings looks promising. I realize that
I've paid too little attention to the matchmaker ring. I like it much more than
a name server.
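
(For illustration only: a tiny broker-less sketch assuming pyzmq, with a static
topic-to-host:port table standing in for the matchmaker ring. This is not the
proposed driver, just the direct-connect idea.)

import zmq

# static "ring": topic -> (host, port), standing in for the matchmaker
RING = {'compute.node-1': ('node-1', 9501),
        'compute.node-2': ('node-2', 9501)}

def call(topic, payload):
    host, port = RING[topic]               # lookup, no central broker involved
    sock = zmq.Context.instance().socket(zmq.REQ)
    sock.connect('tcp://%s:%d' % (host, port))
    sock.send_json(payload)
    return sock.recv_json()                # blocks until the peer replies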

http://lists.openstack.org/pipermail/openstack-dev/2014-March/030411.html
Which is pretty close to what I think would be a better design (with the
variant that I'd rather see a robust and highly available name server
instead of fixed port assignments), I'd be interested to know what
happened to that proposal and why we ended up with a proxy per node
solution at this stage (I'll reply to the proxy per node design in a
separate email to complement my gerrit comments).


I could not find one document that summarizes the list of issues related
to rabbitMQ deployments, all it appears is that many people are unhappy
with it, some are willing to switch to zmq, many are hesitant and some are
decidedly skeptical. On my side I know a number of issues related to oslo
messaging over rabbitMQ.

I think it is important for the community to understand that of the many
issues generally attributed to oslo messaging over rabbitMQ, not all of
them are caused by the choice of rabbitMQ as a transport (and hence those
will likely not be fixed if we just switched from rabbitMQ to ZMQ) and
many are actually caused by the misuse of oslo messaging by the apps
(Neutron, Nova...) and can only be fixed by modification of the app code.

Agreed. During the integration of the new ZeroMQ driver we will probably
need to push a series of changes to make the services use 
oslo.messaging properly.

As an example, Neutron forks processes and breaks the global zmq Context.
There were also some issues with the RabbitMQ heartbeat implementation 
because of that fork, as I remember.


I think personally that there is a strong case for a properly designed ZMQ
driver but we first need to make the expectations very clear.

+1


One long standing issue I can see is the fact that the oslo messaging API
documentation is sorely lacking details on critical areas such as API
behavior during fault conditions, load conditions and scale conditions.
As a result, app developers are using the APIs sometimes indiscriminately
and that will have an impact on the overall quality of openstack in
deployment conditions.
I understand that a lot of the existing code was written in a hurry and
good enough to work properly on small setups, but some code will break
really badly under load or when things start to go south in the cloud.
That is unless the community realizes that perhaps there is something that
needs to be done.

We're only starting to see today things breaking under load because we
have more lab tests at scale, more deployments at scale and we only start
to see real system level testing at scale with HA testing (the kind of
test where you inject load and cause failures of all sorts). Today we know
that openstack behaves terribly in these conditions, even in so-called HA
deployments!

As a first step, would it be useful to have one single official document
that characterizes all the issues we're trying to fix and perhaps used
that document as a basis for showing which of all these issues will be
fixed by the use of the zmq driver?

Of course it would be very useful. +1 for making such a doc.

I think that could help us focus
better on the type of requirements we need from this new ZMQ driver.


Thanks,

   Alec



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Regards,
Oleksii

__
OpenStack Development Mailing List (not for usage questi

Re: [openstack-dev] Looking for help getting git-review to work over https

2015-06-12 Thread KARR, DAVID
> -Original Message-
> From: KARR, DAVID
> Sent: Thursday, June 11, 2015 8:00 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] Looking for help getting git-review to
> work over https
> 
> *** Security Advisory: This Message Originated Outside of AT&T ***.
> Reference http://cso.att.com/EmailSecurity/IDSP.html for more
> information.
> 
> I could use some help with setting up git-review in a slightly
> unfriendly firewall situation.
> 
> I'm trying to set up git-review on my CentOS7 VM, and our firewall
> blocks the non-standard ssh port.  I'm following the instructions
> at
> http://docs.openstack.org/infra/manual/developers.html#accessing-
> gerrit-over-https , for configuring git-review to use https on port
> 443, but this still isn't working (times out with "Could not
> connect to gerrit").  I've confirmed that I can reach other
> external sites on port 443.
> 
> Can someone give me a hand with this?

Thanks to everyone who helped.  I believe I've finally got it working with 
https tunneling.  It's amazing how many twisty little paths I had to go 
through.  Now to get my repo into a state where I can actually create a review.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing Rossella Sblendido for the Control Plane core team

2015-06-12 Thread Henry Gessau
A big +1 from me. Rossella is also a great community influence with her "Land
your first patch" talk for Neutron at the Paris summit.

On Fri, Jun 12, 2015, Kevin Benton  wrote:
> Hello!
> 
> As the Lieutenant of the built-in control plane[1], I would like Rossella
> Sblendido to be a member of the control plane core reviewer team.
> 
> Her review stats are in line with other cores[2] and her feedback on patches
> related to the agents has been great. Additionally, she has been working quite
> a bit on the blueprint to restructure the L2 agent code so she is very
> familiar with the agent code and the APIs it leverages.
> 
> Existing cores that have spent time working on the reference implementation
> (agents and AMQP code), please vote +1/-1 for her addition to the team. Aaron,
> Gary, Assaf, Maru, Kyle, Armando, Carl and Oleg; you have all been reviewing
> things in these areas recently so I would like to hear from you specifically.
> 
> 1. 
> http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy
> 2. http://stackalytics.com/report/contribution/neutron-group/30
> 
> Cheers
> -- 
> Kevin Benton



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing YAMAMOTO Takashi for the Control Plane core team

2015-06-12 Thread Henry Gessau
Although I am not on your list I would like to add my +1! Yamamoto shows great
attention to detail in code reviews and frequently finds real issues that were
not spotted by others.

On Thu, Jun 11, 2015, Kevin Benton  wrote:
> Hello all!
> 
> As the Lieutenant of the built-in control plane[1], I would like YAMAMOTO
> Takashi to be a member of the control plane core reviewer team.
> 
> He has been extensively reviewing the entire codebase[2] and his feedback on
> patches related to the reference implementation has been very useful. This
> includes everything ranging from the AMQP API to OVS flows.
> 
> Existing cores that have spent time working on the reference implementation
> (agents and AMQP code), please vote +1/-1 for his addition to the team. Aaron,
> Gary, Assaf, Maru, Kyle, Armando, Carl and Oleg; you have all been reviewing
> things in these areas recently so I would like to hear from you specifically.
> 
> 1. 
> http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy
> 2. http://stackalytics.com/report/contribution/neutron-group/90
> 
> 
> Cheers
> -- 
> Kevin Benton


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing Rossella Sblendido for the Control Plane core team

2015-06-12 Thread Armando M.
+1

On 12 June 2015 at 13:49, Edgar Magana  wrote:

>  Excellent news! +1
>
>  Cheers,
>
>  Edgar
>
>
> On Jun 12, 2015, at 12:50 PM, Kevin Benton  wrote:
>
>   Hello!
>
>  As the Lieutenant of the built-in control plane[1], I would
> like Rossella Sblendido to be a member of the control plane core reviewer
> team.
>
>  Her review stats are in line with other cores[2] and her feedback on
> patches related to the agents has been great. Additionally, she has been
> working quite a bit on the blueprint to restructure the L2 agent code so
> she is very familiar with the agent code and the APIs it leverages.
>
>  Existing cores that have spent time working on the reference
> implementation (agents and AMQP code), please vote +1/-1 for her addition
> to the team. Aaron, Gary, Assaf, Maru, Kyle, Armando, Carl and Oleg; you
> have all been reviewing things in these areas recently so I would like to
> hear from you specifically.
>
>  1.
> http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy
>  2. http://stackalytics.com/report/contribution/neutron-group/30
>
>  Cheers
> --
>  Kevin Benton
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Networking] Support for multiple gateways in neutron/nova-net subnets for provider networks

2015-06-12 Thread Assaf Muller
I think Shraddha was talking about the gateway IP the DHCP server will respond
with. Different VMs will get different gateways.

- Original Message -
> That logic is contained in the virtual machine. We have no control over that.
> 
> On Thu, Jun 11, 2015 at 3:34 PM, Shraddha Pandhe <
> spandhe.openst...@gmail.com > wrote:
> 
> 
> 
> The idea is to round-robin between gateways by using some sort of mod
> operation
> 
> So logically it can look something like:
> 
> idx = len(gateways) % ip
> gateway = gateways[idx]
> 
> 
> This is just one idea. I am open to more ideas.
> 
> 
> 
> 
> On Thu, Jun 11, 2015 at 3:10 PM, Kevin Benton < blak...@gmail.com > wrote:
> 
> 
> 
> 
> What gateway address do you give to regular clients via dhcp when you have
> multiple?
> 
> 
> On Jun 11, 2015 12:29 PM, "Shraddha Pandhe" < spandhe.openst...@gmail.com >
> wrote:
> > 
> > Hi,
> > Currently, the Subnets in Neutron and Nova-Network only support one
> > gateway. For provider networks in large data centers, quite often, the
> > architecture is such a way that multiple gateways are configured per
> > subnet. These multiple gateways are typically spread across backplanes so
> > that the production traffic can be load-balanced between backplanes.
> > This is just my use case for supporting multiple gateways, but other folks
> > might have more use cases as well and also want to take the community's
> > opinion about this feature. Is this something that's going to help a lot
> > of users?
> > I want to open up a discussion on this topic and figure out the best way to
> > handle this.
> > 1. Should this be done in a same way as dns-nameserver, with a separate
> > table with two columns: gateway_ip, subnet_id.
> > 2. Should Gateway field be converted to a List instead of String?
> > I have also opened a bug for Neutron here:
> > https://bugs.launchpad.net/neutron/+bug/1464361
> > 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> --
> Kevin Benton
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing Rossella Sblendido for the Control Plane core team

2015-06-12 Thread Assaf Muller
+1

- Original Message -
> Excellent news! +1
> 
> Cheers,
> 
> Edgar
> 
> 
> On Jun 12, 2015, at 12:50 PM, Kevin Benton < blak...@gmail.com > wrote:
> 
> 
> 
> 
> Hello!
> 
> As the Lieutenant of the built-in control plane[1], I would like Rossella
> Sblendido to be a member of the control plane core reviewer team.
> 
> Her review stats are in line with other cores[2] and her feedback on patches
> related to the agents has been great. Additionally, she has been working
> quite a bit on the blueprint to restructure the L2 agent code so she is very
> familiar with the agent code and the APIs it leverages.
> 
> Existing cores that have spent time working on the reference implementation
> (agents and AMQP code), please vote +1/-1 for her addition to the team.
> Aaron, Gary, Assaf, Maru, Kyle, Armando, Carl and Oleg; you have all been
> reviewing things in these areas recently so I would like to hear from you
> specifically.
> 
> 1.
> http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy
> 2. http://stackalytics.com/report/contribution/neutron-group/30
> 
> Cheers
> --
> Kevin Benton
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org ?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Networking] Support for multiple gateways in neutron/nova-net subnets for provider networks

2015-06-12 Thread Kevin Benton
That logic is contained in the virtual machine. We have no control over
that.

On Thu, Jun 11, 2015 at 3:34 PM, Shraddha Pandhe <
spandhe.openst...@gmail.com> wrote:

> The idea is to round-robin between gateways by using some sort of mod
> operation
>
> So logically it can look something like:
>
> idx = len(gateways) % ip
> gateway = gateways[idx]
>
>
> This is just one idea. I am open to more ideas.
>
>
>
>
> On Thu, Jun 11, 2015 at 3:10 PM, Kevin Benton  wrote:
>
>> What gateway address do you give to regular clients via dhcp when you
>> have multiple?
>>
>> On Jun 11, 2015 12:29 PM, "Shraddha Pandhe" 
>> wrote:
>> >
>> > Hi,
>> > Currently, the Subnets in Neutron and Nova-Network only support one
>> gateway. For provider networks in large data centers, quite often, the
>> architecture is such a way that multiple gateways are configured per
>> subnet. These multiple gateways are typically spread across backplanes so
>> that the production traffic can be load-balanced between backplanes.
>> > This is just my use case for supporting multiple gateways, but other
>> folks might have more use cases as well and also want to take the
>> community's opinion about this feature. Is this something that's going to
>> help a lot of users?
>> > I want to open up a discussion on this topic and figure out the best
>> way to handle this.
>> > 1. Should this be done in a same way as dns-nameserver, with a separate
>> table with two columns: gateway_ip, subnet_id.
>> > 2. Should Gateway field be converted to a List instead of String?
>> > I have also opened a bug  for Neutron here:
>> https://bugs.launchpad.net/neutron/+bug/1464361
>> >
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [fuel] more collaboration request

2015-06-12 Thread Zane Bitter
This thread kind of deteriorated a bit (though it looks like it's 
hopefully recovering), so I'd just like to add some observations.


What we have here is a classic case of a long-running fork, with all 
that that entails. In this case the fork is a public one, but that 
actually makes very little difference to the fundamentals. (I think it's 
great that Mirantis have chosen to develop Fuel in the open though! Kudos.)


The fact is that maintaining a fork is very expensive. And while it's 
expensive for the upstream community in terms of lost opportunities for 
bug fixes, it's far, far more expensive for the maintainers of the 
downstream fork. IMHO that's one of the reasons that permissive licenses 
like ASL2 have gained so much ground over the GPL - it didn't take very 
long for almost everyone to realise that there were more compelling 
reasons to contribute your code upstream than that you were compelled to 
by the license. I don't think a project like OpenStack could exist if 
they hadn't. It's simply better business, even if you consider the other 
upstream users to be competitors.


So I think both projects would benefit from more co-operation, but Fuel 
has by far the most to gain.


I see from the thread that a lot of well-intentioned policies have been 
put in place to try to improve co-operation, and it's actually not that 
surprising to see them not working that well because the incentives are 
wrong. When you set up a conflict between incentives and rules... 
incentives tend to win. (I probably don't need to try to prove this, 
because it was IMHO one of the great lessons of the communist 
experiment, and looking at the names in this thread I suspect that most 
of y'all at least know someone with direct experience of that.)


So at the moment committing a patch to Fuel is easy for a Fuel 
developer, whereas getting that same patch into upstream is hard. So it 
is much more likely that the downstream patch lands while the upstream 
patch languishes, despite the hidden cost that another Fuel developer 
will need to reconcile the two later. To get this to work, you need to 
make upstream the default (and therefore easiest) path to get changes 
included in Fuel.


Of course you will need a way to make urgent changes to your product 
without waiting for upstream. As an example, we do this in RDO Manager 
by maintaining patches on top of an upstream snapshot. (We do actually 
use Git - in a non-traditional way - as a tool to aid this process, but 
it's not really the point and there are many ways to tackle the 
problem.) The snapshot gets updated regularly, so changes that are 
committed upstream just show up without any extra work. If we need 
something urgently, we have to option to apply it as a patch, but our 
enthusiasm to do so is always tempered by the knowledge that if a change 
that is at least extremely similar does not land upstream then we are 
creating extra work for ourselves in the very near future. That's how we 
keep the incentives and policies aligned. (In this way, building a 
project around a library like this is very similar to building a 
downstream distribution around an upstream project. We use essentially 
the same techniques.)


And of course once the upstream becomes the default place to land 
patches, you'll very quickly stop thinking of upstream as 'them' and 
start thinking of them as 'us'. You'll start assimilating the ideas of 
what are and are not good coding standards so that you won't have to 
rework them nearly as much before they can be merged, and once you get 
involved in the community you'll have the opportunity to influence those 
ideas as well. Once everyone is up to speed I'm sure you'd see a lot of 
folks get added to core. Instead of upstream co-operation appearing to 
consume time that you don't have (which appears to be the problem at the 
moment), I'm quite sure those same people will be able to get a *lot* 
more done.


Tinkering with the current model by putting in place more policies or 
trying to offload work to the upstream openstack-puppet team will not 
work, and more importantly would not realise the same benefits to the 
Fuel team even if it did work.


The problem, of course, is that once you are on a long-running fork it 
takes a big up-front investment to get off it. (Ask anyone still running 
an OpenStack Folsom cloud ;) That can be hard to make a case for, 
especially when you have other priorities and the dividends take some 
time to appear. I think in this case it would be totally worth the 
investment, and I hope the Fuel team will consider making that investment.


As a bonus, it'll be more polite to the original authors of the code, 
it'll help everyone who is deploying OpenStack with Puppet (which is 
most people in the community), and it'll help Fuel users join a bigger 
critical mass of users so they can get better support from channels like 
ask.openstack.org.


cheers,
Zane.

On 11/06/15 10:36, Matthew Mosesohn wrote:

Hi Emilie

Re: [openstack-dev] [keystone][barbican] Regarding exposing X-Group-xxxx in token validation

2015-06-12 Thread Dolph Mathews
Just to follow up, I've posted a revised specification which only includes
group *IDs* in tokens (so, effectively promoting OS-FEDERATION's behavior
to core without modification) and mention of an X-Group-Ids header in
keystonemiddleware.auth_token:

  https://review.openstack.org/#/c/188564/
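
To make the consumer side concrete, here is a rough sketch of how a service
sitting behind keystonemiddleware.auth_token might read such a header once it
exists. The function name and the comma-separated encoding are assumptions for
illustration only; the actual header format will be whatever the spec above
settles on:

    def group_ids_from_headers(headers):
        """Parse the (proposed) X-Group-Ids header into a list of group IDs.

        headers is any dict-like mapping of request header names to values.
        """
        raw = headers.get('X-Group-Ids', '')
        return [group_id.strip() for group_id in raw.split(',')
                if group_id.strip()]

    # e.g. a Barbican ACL check could then test membership with:
    # if acl_group_id in group_ids_from_headers(request.headers): ...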

On Wed, Jun 10, 2015 at 10:47 AM, Dolph Mathews 
wrote:

> We're aiming for a Spec "Proposal" Freeze deadline for Liberty of June
> 23rd, but are requiring that specs are approved by our spec reviewers by
> that date. The spec [1] is currently pretty straightforward and provides us
> several benefits, so I don't expect it to be a complicated process, but is
> currently pending a revision from myself. I'm confident in Liberty at this
> point.
>
> [1] https://review.openstack.org/#/c/188564/
>
> On Wed, Jun 10, 2015 at 10:35 AM, John Wood 
> wrote:
>
>>  Hello folks,
>>
>>  Thanks for the consideration of this feature. Does it seem realistic
>> for a Liberty release of Keystone middleware to expose X-Group-Ids, or
>> would this be an M and beyond sort of thing?
>>
>>  Thanks,
>> John
>>
>>
>>   From: Henry Nash 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Friday, June 5, 2015 at 12:49 PM
>>
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> Subject: Re: [openstack-dev] [keystone][barbican] Regarding exposing
>> X-Group-xxxx in token validation
>>
>>   The one proviso is that in single LDAP situations, the cloud provider
>> can choose (for backward compatibility reasons) to allow the underlying LDAP
>> user/group ID….so we might want to advise this to be disabled (there’s a
>> config switch to use the Public ID mapping for even this case).
>>
>>  Henry
>>
>> On 5 Jun 2015, at 18:19, Dolph Mathews  wrote:
>>
>>
>> On Fri, Jun 5, 2015 at 11:50 AM, Henry Nash 
>> wrote:
>>
>>> So I think that GroupIDs are actually unique and safe, since in the
>>> multi LDAP case we provide an indirection already in Keystone and issue a
>>> "Public ID" (this is true for bother users and groups), that we map to the
>>> underlying local ID in the particular LDAP backend.
>>
>>
>>  Oh, awesome! I didn't realize we did that for groups as well. So then,
>> we're safe exposing X-Group-Ids to services via
>> keystonemiddleware.auth_token but still not X-Group-Names (in any trivial
>> form).
>>
>>
>>>
>>>
>>> Henry
>>>
>>>
>>>   From: Dolph Mathews
>>>   To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>, Henry Nash <hen...@linux.vnet.ibm.com>, Henry Nash/UK/IBM@IBMGB
>>>   Date: 05/06/2015 15:38
>>>   Subject: Re: [openstack-dev] [keystone][barbican] Regarding exposing X-Group-xxxx in token validation
>>>
>>> --
>>>
>>>
>>>
>>>
>>> On Thu, Jun 4, 2015 at 10:17 PM, John Wood <john.w...@rackspace.com> wrote:
>>> Hello folks,
>>>
>>> Regarding option C, if group IDs are unique within a given
>>> cloud/context, and these are discoverable by clients that can then set the
>>> ACL on a secret in Barbican, then that seems like a viable option to me. As
>>> it is now, the user information provided to the ACL is the user ID
>>> information as found in X-User-Ids now, not user names.
>>>
>>> To Kevin’s point though, are these group IDs unique across domains now,
>>> or in the future? If not, the more complex tuples suggested could be used,
>>> but seem more error prone to configure on an ACL.
>>>
>>> Well, that's a good question, because that depends on the backend, and
>>> our backend architecture has recently gotten very complicated in this area.
>>>
>>> If groups are backed by SQL, then they're going to be globally unique
>>> UUIDs, so the answer is always yes.
>>>
>>> If they're backed by LDAP, then actually it depends on LDAP, but the
>>> answer should be yes.
>>>
>>> But the nightmare scenario we now support is domain-specific identity
>>> drivers, where each domain can actually be configured to talk to a
>>> different LDAP server. In that case, I don't think you can make any
>>> guarantees about group ID uniqueness :( Instead, each domain could provide
>>> whatever IDs it wants, and those might conflict with those of other
>>> domains. We have a workaround for a similar issue with user IDs, but it
>>> hasn't been applied to groups, leaving them quite broken in this scenario.
>>> I'd consider this to be an issue we need to solve in Keystone, though, not
>>> something other projects need to worry about. I'm hoping Henry Nash can
>>> chime in and correct me!
>>>
>>>
>>> Thanks,
>>> John
>>>
>>> From: , Kevin M <kevin@pnnl.gov>
>>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
>>> Date: Thursday, June 4, 2015 at 6:01 PM
>>> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
>>>
>>> Subject: Re: [op

[openstack-dev] [keystone][puppet] Federation using ipsilon

2015-06-12 Thread Rich Megginson
I've done a first pass of setting up a puppet module to configure 
Keystone to use ipsilon for federation, using 
https://github.com/richm/puppet-apache-auth-mods, and a version of 
ipsilon-client-install with patches 
https://fedorahosted.org/ipsilon/ticket/141 and 
https://fedorahosted.org/ipsilon/ticket/142, and a heavily modified 
version of the ipa/rdo federation setup scripts - 
https://github.com/richm/rdo-vm-factory.


I would like some feedback from the Keystone and puppet folks about this 
approach.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Networking] Support for multiple gateways in neutron/nova-net subnets for provider networks

2015-06-12 Thread Shraddha Pandhe
Hi everyone,

Any thoughts on supporting multiple gateway IPs for subnets?





On Thu, Jun 11, 2015 at 3:34 PM, Shraddha Pandhe <
spandhe.openst...@gmail.com> wrote:

> The idea is to round-robin between gateways by using some sort of mod
> operation
>
> So logically it can look something like:
>
> idx = ip % len(gateways)
> gateway = gateways[idx]
>
>
> This is just one idea. I am open to more ideas.
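
To make that concrete, here is a minimal runnable sketch of the same idea (the
helper name and the IP-to-integer conversion are illustrative only, not part
of any proposed Neutron API):

    import socket
    import struct


    def pick_gateway(gateways, ip):
        """Deterministically map a client IP to one of the subnet's gateways.

        gateways -- list of gateway addresses configured for the subnet
        ip       -- the client's IPv4 address as a dotted-quad string
        """
        # Convert the address to an integer so the modulo spreads clients
        # evenly (and stably) across the configured gateways.
        ip_as_int = struct.unpack('!I', socket.inet_aton(ip))[0]
        return gateways[ip_as_int % len(gateways)]


    # Example: three gateways spread across backplanes.
    gateways = ['10.0.0.1', '10.0.0.2', '10.0.0.3']
    print(pick_gateway(gateways, '10.0.0.57'))

Because the mapping is a pure function of the client address, every node
computes the same answer, and a given client always gets the same gateway,
which matters if the result ends up in a DHCP reply.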
>
>
>
>
> On Thu, Jun 11, 2015 at 3:10 PM, Kevin Benton  wrote:
>
>> What gateway address do you give to regular clients via dhcp when you
>> have multiple?
>>
>> On Jun 11, 2015 12:29 PM, "Shraddha Pandhe" 
>> wrote:
>> >
>> > Hi,
>> > Currently, the Subnets in Neutron and Nova-Network only support one
>> gateway. For provider networks in large data centers, quite often, the
>> architecture is such a way that multiple gateways are configured per
>> subnet. These multiple gateways are typically spread across backplanes so
>> that the production traffic can be load-balanced between backplanes.
>> > This is just my use case for supporting multiple gateways, but other
>> folks might have more use cases as well and also want to take the
>> community's opinion about this feature. Is this something that's going to
>> help a lot of users?
>> > I want to open up a discussion on this topic and figure out the best
>> way to handle this.
>> > 1. Should this be done in the same way as dns-nameserver, with a separate
>> table with two columns: gateway_ip, subnet_id.
>> > 2. Should Gateway field be converted to a List instead of String?
>> > I have also opened a bug  for Neutron here:
>> https://bugs.launchpad.net/neutron/+bug/1464361
>> >
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Announcing git-review 1.25.0

2015-06-12 Thread Jeremy Stanley
I am pleased to announce git-review 1.25.0 is officially released
today (Friday, June 12, 2015). This version brings together 43 new
changes from 23 different collaborators including fixes for 9 bugs
and a variety of other improvements:

https://git.openstack.org/cgit/openstack-infra/git-review/log/?id=1.24..1.25.0

Some brief highlights:

  * Tested against Python 2.6, 2.7 and now 3.4.

  * A new -T/--no-topic option for skipping auto-topic generation.

  * A new --reviewers option for adding requested reviewers.

  * A new --track option for smart remote branch tracking.

  * More flexible handling of Git remotes.

  * Better integration with Git configuration options.

  * Improvements in Gerrit-over-HTTP/HTTPS support.

  * ANSI color support is disabled on non-interactive sessions.

  * Properly installs its manpage again when possible.

  * Documentation now split out of the README.rst and additionally
published at http://docs.openstack.org/infra/git-review/ for
added convenience.

  * Better error messages for rebase failures and CLA/Contact Info
issues.

  * Additional debugging output.

The release tarball can be installed from PyPI as usual, and can
also be found at:

http://tarballs.openstack.org/git-review/git-review-1.25.0.tar.gz

Its checksums are...

md5sum: 0a061d0e23ee9b93c6212a3fe68fb7ab

sha256: 087e0a7dc2415796a9f21c484a6f652c5410e6ba4562c36291c5399f9395a11d

It's also available as a Python wheel:

http://tarballs.openstack.org/git-review/git_review-1.25.0-py2.py3-none-any.whl

md5sum: c1e7de93d210afeb85f2fc4381c6cd92

sha256: 6402b83f4f4b6966979809df7bb8b39f1f821384672932fded78ae3a0635

A huge thank-you to the following people for their code
contributions in this release:

Anders Kaseorg
Antoine Musso
Cedric Brandily
Christian Berendt
Clint Adams
Darragh Bailey
Dexter Fryar
Dmitry Ratushnyy
Doug Hellmann
Eric Harney
JC Delay
James E. Blair
Jeremy Stanley
John Vandenberg
Julien Danjou
K Jonathan Harker
Michael Johnson
Michael Krotscheck
Michael Pratt
Monty Taylor
david
julien.marinfrisonroche
liuyang1

-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing Rossella Sblendido for the Control Plane core team

2015-06-12 Thread Edgar Magana
Excellent news! +1

Cheers,

Edgar


On Jun 12, 2015, at 12:50 PM, Kevin Benton <blak...@gmail.com> wrote:

Hello!

As the Lieutenant of the built-in control plane[1], I would like Rossella 
Sblendido to be a member of the control plane core reviewer team.

Her review stats are in line with other cores[2] and her feedback on patches 
related to the agents has been great. Additionally, she has been working quite 
a bit on the blueprint to restructure the L2 agent code so she is very familiar 
with the agent code and the APIs it leverages.

Existing cores that have spent time working on the reference implementation 
(agents and AMQP code), please vote +1/-1 for her addition to the team. Aaron, 
Gary, Assaf, Maru, Kyle, Armando, Carl and Oleg; you have all been reviewing 
things in these areas recently so I would like to hear from you specifically.

1. 
http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy
2. http://stackalytics.com/report/contribution/neutron-group/30

Cheers
--
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-12 Thread Alec Hothan (ahothan)


On 6/1/15, 5:03 PM, "Davanum Srinivas"  wrote:

>fyi, the spec for zeromq driver in oslo.messaging is here:
>https://review.openstack.org/#/c/187338/1/specs/liberty/zmq-patterns-usage.rst,unified
>
>-- dims

I was about to provide some email comments on the above review off gerrit,
but figured maybe it would be good to make a quick status of the state of
this general effort for pushing out a better zmq driver for oslo messaging.
So I started to look around the oslo/zeromq wiki and saw few email threads
that drew my interest.

In this email (Nov 2014), Ilya proposes getting rid of a central
broker for zmq:
http://lists.openstack.org/pipermail/openstack-dev/2014-November/050701.html
Not clear if Ilya already had in mind to instead have a local proxy on
every node (as proposed in the above spec)


In this email (mar 2014), Yatin described the prospect of using zmq in a
completely broker-less way (so not even a proxy per node), with the use of
matchmaker rings to configure well known ports.
http://lists.openstack.org/pipermail/openstack-dev/2014-March/030411.html
Which is pretty close to what I think would be a better design (with the
variant that I'd rather see a robust and highly available name server
instead of fixed port assignments), I'd be interested to know what
happened to that proposal and why we ended up with a proxy per node
solution at this stage (I'll reply to the proxy per node design in a
separate email to complement my gerrit comments).


I could not find one document that summarizes the list of issues related
to rabbitMQ deployments, all it appears is that many people are unhappy
with it, some are willing to switch to zmq, many are hesitant and some are
decidedly skeptical. On my side I know a number of issues related to oslo
messaging over rabbitMQ.

I think it is important for the community to understand that of the many
issues generally attributed to oslo messaging over rabbitMQ, not all of
them are caused by the choice of rabbitMQ as a transport (and hence those
will likely not be fixed if we just switched from rabbitMQ to ZMQ) and
many are actually caused by the misuse of oslo messaging by the apps
(Neutron, Nova...) and can only be fixed by modification of the app code.

I think personally that there is a strong case for a properly designed ZMQ
driver but we first need to make the expectations very clear.

One long standing issue I can see is the fact that the oslo messaging API
documentation is sorely lacking details on critical areas such as API
behavior during fault conditions, load conditions and scale conditions.
As a result, app developers are using the APIs sometimes indiscriminately
and that will have an impact on the overall quality of openstack in
deployment conditions.
I understand that a lot of the existing code was written in a hurry and
good enough to work properly on small setups, but some code will break
really badly under load or when things start to go south in the cloud.
That is unless the community realizes that perhaps there is something that
needs to be done.

We're only starting to see today things breaking under load because we
have more lab tests at scale, more deployments at scale and we only start
to see real system level testing at scale with HA testing (the kind of
test where you inject load and cause failures of all sorts). Today we know
that openstack behaves terribly in these conditions, even in so-called HA
deployments!

As a first step, would it be useful to have one single official document
that characterizes all the issues we're trying to fix and perhaps use
that document as a basis for showing which of all these issues will be
fixed by the use of the zmq driver? I think that could help us focus
better on the type of requirements we need from this new ZMQ driver.


Thanks,

  Alec




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How to microversion API code which is not in API layer

2015-06-12 Thread Matt Riedemann



On 6/12/2015 11:11 AM, Chen CH Ji wrote:

Hi
  We have [1] in the db layer and it's directly used by the API
layer; the filters come directly from the client's input.
  In this case, when doing [2] or similar changes, do we
need to consider microversion usage when we change options?
  Thanks

[1]
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L4440
[2] https://review.openstack.org/#/c/144883

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
District, Beijing 100193, PRC


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Sean has started documenting some of this here:

https://review.openstack.org/#/c/191188/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [fuel] more collaboration request

2015-06-12 Thread Dmitry Borodaenko
On Fri, Jun 12, 2015 at 01:23:28PM -0700, James Bottomley wrote:
> However, the commit history is vital to obtaining the provenance of the
> code.  If there's ever a question about who authored what part of the
> code (or worse, who copied it wrongly from a different project, as in
> the SCO suit against Linux) you need the commit history to establish the
> chain of conveyance into the code.  If we lose this, the protection of
> the OpenStack CLA and ICLA will be lost as well (along with any patent
> grants that may have been captured) because they rely on knowing where
> the code came from.  So in legal hygiene and governance terms, you're
> not free to flush the commit history without setting up the project for
> provenance problems on down the road.

This kind of provenance is currently provided by including the sha1 id of
the upstream commit from which the module was imported. That gives you
enough information to a) confirm that the imported version of the code
exactly matches the referenced version in upstream git, and b) use the
upstream git commit history to further track down the origin of any
imported line of code. Yes, it's a hassle, but at least the trail is not lost.
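
For illustration only, check (a) could be automated along these lines; this is
a hypothetical helper rather than part of Fuel's or Puppet OpenStack's actual
tooling, and it assumes a local clone of the upstream module's repository:

    import os
    import subprocess


    def imported_matches_upstream(upstream_repo, upstream_sha, vendored_path):
        """Return True if every file in the upstream tree at the recorded
        commit has a byte-identical counterpart in the imported copy.

        upstream_repo -- path to a local clone of the upstream module
        upstream_sha  -- commit id recorded when the module was imported
        vendored_path -- directory holding the imported copy of the module
        """
        listing = subprocess.check_output(
            ['git', '-C', upstream_repo, 'ls-tree', '-r', upstream_sha]
        ).decode('utf-8')
        for line in listing.splitlines():
            # Each line looks like: "<mode> blob <sha1>\t<path>"
            meta, path = line.split('\t', 1)
            fields = meta.split()
            if fields[1] != 'blob':  # skip submodule (commit) entries
                continue
            local_file = os.path.join(vendored_path, path)
            if not os.path.isfile(local_file):
                return False
            local_sha = subprocess.check_output(
                ['git', 'hash-object', local_file]).decode('utf-8').strip()
            if local_sha != fields[2]:
                return False
        return True

Check (b) then reduces to running git log or git blame against that same sha1
in the upstream clone.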

-- 
Dmitry Borodaenko

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [fuel] more collaboration request

2015-06-12 Thread Dmitry Borodaenko
On Fri, Jun 12, 2015 at 12:14:52PM -0400, Emilien Macchi wrote:
> On 06/12/2015 11:41 AM, Sergii Golovatiuk wrote:
> > IMO, it's a communication issue, and it relates more to the Puppet OpenStack
> > community than to the Fuel Library folks. In Fuel Library, when a patch from
> > an external contributor has some problems, we cherry-pick it and update the
> > patchset to meet our expectations. Meanwhile we contact the
> > contributor over IRC or email and explain why we did that. That's very
> > important, as the contributor may not know all the architectural details. He may
> > not know the details of the CI tests or how we test. That's a good
> > attitude for helping newcomers, so they are naturally drawn into the
> > community. Yes, it takes time for the Fuel Library folks, but we continue
> > doing it that way because we think that communication with the contributor is
> > key to success.
> 
> Adding someone by using Gerrit is not enough. Communication on IRC with
> Puppet OpenStack group would be good on the right channel, like it's
> done in other OpenStack projects quite often.

+1

> > I have looked over patches in progress. I may be wrong but I didn't find
> > that the Puppet OpenStack community updated a patch to pass CI. It's not
> > complex to cherry-pick and fix failed tests. It's also not complex to
> > contact a person over IRC or email to explain what needs to be done.
> > Trust me, it usually takes only once. Smart creatives are clever enough not
> > to make the same mistakes twice.

+1, and I also agree with Emilien that Fuel developers should join
#puppet-openstack for such discussions, instead of waiting for Puppet
OpenStack developers to find them on #fuel-dev.

> https://bugs.launchpad.net/puppet-openstack/+bugs is for
> puppet-openstack, which is deprecated in Juno.
> 
> You should look https://launchpad.net/openstack-puppet-modules which
> contains mostly triaged bugs.

Looks like you need to update the links here:
https://wiki.openstack.org/wiki/Puppet

It still sends bug reporters to https://launchpad.net/puppet-openstack/

> Honestly, if you submit a good patch now, it will land in maximum one
> week or so.

Yes, one week is a timeframe we can work with.

> If Fuel team could also participate in upstream reviews that would be
> awesome:
> * they would be involved in the community
> * they would get experience from other patches and provide better
> patches in the future, and get reviews merged faster.

Agreed. Even something as small as one review per week would be a good
start.

Do you have a gerrit review dashboard like the one we use in Fuel:
https://wiki.openstack.org/wiki/Fuel#Development_related_links

or something else to track the review backlog?

> > From Fuel side I see that some engineers will be involved to review
> > process. They will participate in weekly meetings. They also be active
> > in communication asking people for help in review or asking why CI failed.
> 
> Good.

I'm wondering if we could set up something like an "upstream liaison
duty roster", so that there's always a couple of engineers in the Fuel
team who make sure that communication with upstream is not falling
through the cracks.

-- 
Dmitry Borodaenko

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [fuel] more collaboration request

2015-06-12 Thread James Bottomley
On Fri, 2015-06-12 at 13:05 -0700, Dmitry Borodaenko wrote:
> On Fri, Jun 12, 2015 at 08:33:56AM -0700, James Bottomley wrote:
> > On Fri, 2015-06-12 at 02:43 -0700, Dmitry Borodaenko wrote:
> > > On Thu, Jun 11, 2015 at 11:43:09PM -0400, Emilien Macchi wrote:
> > > > What about code history and respect of commit ownership?
> > > > I'm personally wondering if it's fair to copy/paste several thousands of
> > > > lines of code from another Open-Source project without asking to the
> > > > community or notifying the authors before. I know it's Open-Source and
> > > > Apache 2.0 but well... :-)
> > > 
> > > Being able to copy code without having to ask for permission is exactly
> > > what Free Software (and more recently, Open Source) is for.
> > 
> > Copy and Paste forking of code into compatibly licenced code (without
> > asking permission) is legally fine as long as you observe the licence,
> > but it comes with a huge technical penalty:
> > 
> >  1. Copy and Paste forks can't share patches: a patch for one has to
> > be manually applied to the other.  The amount of manual
> > intervention required grows as the forks move out of sync.
> >  2. Usually one fork gets more attention than the other, so the
> > patch problem of point 1 eventually causes the less attended
> > fork to become unmaintainable (or if not patched, eventually
> > unusable).
> >  3. In the odd chance both forks receive equal attention, you're
> > still expending way over 2x the effort you would have on a
> > single code base.
> > 
> > There's no rule of thumb for this: we all paste snippets (pieces of code
> > of up to around 10 lines or so).  Sometimes these snippets contain
> > errors and suddenly hundreds of places need fixing.   The way around
> > this problem is to share code, either by inclusion, modularity or
> > linking.  The reason we paste snippets is because sharing for them is
> > enormous effort.  However, as the size of the paste grows, so does the
> > fork penalty and it becomes advantageous to avoid it and the effort of
> > sharing the code looks a lot less problematic.
> > 
> > Even in the case where the fork is simply "patch the original for bug
> > fixes and some new functionality", the fork penalty rules apply.
> > 
> > The data that supports all of this came from Red Hat and SUSE.  The end
> > of the 2.4 kernel release cycle for them was a disaster with patch sets
> > larger than the actual kernel itself.  Sorting through the resulting
> > rubble is where the "upstream first" policy actually came from.
> 
> Thanks for the excellent summary of the technical penalties incurred by
> straying too far from upstream.

You're welcome.

> It's funny how after years of trying to convince Fuel developers to put
> more effort into collaboration with upstream, in this thread I managed
> to come across as if I were arguing the opposite. To reiterate, I
> understand and support the practical reasons to reduce the gap between
> Fuel and Puppet OpenStack, and I believe that practical reasons are a
> much better way to motivate Fuel developers to collaborate than arguing
> whether what Fuel team has done in the past was fair or wrong.

I agree; recriminations never solve anything. But just to close out on
the topic of authorship and commit history, since I think there have been
some misunderstandings there as well:

The licence is the ultimate arbiter of what you absolutely *have* to do
to remain in compliance.  The licence governs only the code, not the
commit history, so under the licence, you're free to flush all the
commit history with no legal consequence from the terms of the licence.

However, the commit history is vital to obtaining the provenance of the
code.  If there's ever a question about who authored what part of the
code (or worse, who copied it wrongly from a different project, as in
the SCO suit against Linux) you need the commit history to establish the
chain of conveyance into the code.  If we lose this, the protection of
the OpenStack CLA and ICLA will be lost as well (along with any patent
grants that may have been captured) because they rely on knowing where
the code came from.  So in legal hygiene and governance terms, you're
not free to flush the commit history without setting up the project for
provenance problems on down the road.

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [fuel] more collaboration request

2015-06-12 Thread Emilien Macchi


On 06/12/2015 03:33 PM, Dmitry Borodaenko wrote:
> On Fri, Jun 12, 2015 at 02:25:31PM +0200, Flavio Percoco wrote:
>>> I have already explained in the thread how we address the problem of
>>> tracking down and managing the Fuel specific changes in forked modules.
>>> With that problem addressed, I don't see any other objective reason for
>>> frustration. Does anybody's bonus depend on the number of lines of code
>>> in stackforge repositories such as fuel-library that git blame
>>> attributes to their name?
>>
>> I don't think anyone here is talking about bonuses or worrying about
>> salaries. The fact that you mention it offends the purposes of this
>> thread and, as much as you don't care, I'm really sad to read that.
>>
>> The whole thing this thread is trying to achieve is improving
>> collaboration and you are derailing the conversation with completely
>> unfriendly/unhelpful comments like the one above.
> 
> I am really sorry that I made you feel bad about what I wrote, I didn't
> mean to do that. I actually completely agree with you that this aspect
> of the thread was derailing the conversation, and I tried to use
> reductio ad absurdum to demonstrate how ridiculous it can get if we
> focus on perfecting author attribution instead of discussing
> collaboration. I should have been more explicit in indicating that I
> didn't actually mean this as a serious question. Let's write it off as a
> bad joke that didn't make it across the language barrier.
> 
> >> It does cause frustration because, as you can read from Emilien's
> >> original email, it not only adds some extra burden to people in the
> >> puppet team but it also defeats the purpose of the team itself,
>> which is creating OpenStack puppet manifests that are consumable by
>> everyone.
> 
> Now I see that we're on the same page. I agree that it does add extra
> burden, and even though we've done what we could to reduce that burden
> in the process I've described earlier, the only way to eliminate it
> completely is to use upstream Puppet modules in Fuel directly and
> without Fuel specific modifications. I see a broad consensus on this
> thread in favor of setting this as the end goal, and I gladly join that
> sentiment.
> 
> To prove that I'm not merely trying to placate you, here's what I had to
> say about this to the Fuel team back in March 2014 when we first came up
> with our current process for tracking upstream:
> 
> https://lists.launchpad.net/fuel-dev/msg00727.html
> 
> Peace?

It seems we finally broke the ice and found some agreement here; I'm
quite happy.

Thanks for your help and your involvement in this topic, it's really
appreciated.

-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Looking for help getting git-review to work over https

2015-06-12 Thread Ian Cordasco


On 6/12/15, 14:46, "KARR, DAVID"  wrote:

>> -Original Message-
>> From: Kevin L. Mitchell [mailto:kevin.mitch...@rackspace.com]
>> Sent: Friday, June 12, 2015 12:05 PM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] Looking for help getting git-review to
>> work over https
>> 
>> On Fri, 2015-06-12 at 17:08 +, KARR, DAVID wrote:
>> > Thanks.  I already tried that. It's not even clear this is
>> failing to
>> > connect. I don't know what this is telling me.
>> > --
>> > # pip install --proxy http://one.proxy.att.com:8080 .
>> > Processing /home/dk068x/work/git-review
>> > Complete output from command python setup.py egg_info:
>> > Download error on https://pypi.python.org/simple/pbr/: [Errno
>> 1]
>> > _ssl.c:504: error:140770FC:SSL
>> routines:SSL23_GET_SERVER_HELLO:unknown
>> > protocol -- Some packages may not be found!
>> > Couldn't find index page for 'pbr' (maybe misspelled?)
>> > Download error on https://pypi.python.org/simple/: [Errno 1]
>> > _ssl.c:504: error:140770FC:SSL
>> routines:SSL23_GET_SERVER_HELLO:unknown
>> > protocol -- Some packages may not be found!
>> > No local packages or download links found for pbr
>> > Traceback (most recent call last):
>> >   File "", line 20, in 
>> >   File "/tmp/pip-MStPEo-build/setup.py", line 20, in 
>> > setuptools.setup(setup_requires=['pbr'], pbr=True)
>> 
>> Have you confirmed that your proxy at 8080 is capable of SSL?
>> Usually,
>> people use port 8080 for plain old HTTP servers or proxies, and
>> trying
>> to talk SSL to a plain HTTP proxy would probably result in that
>> error.
>> 
>> (Also noticed that your proxy URL is specified as "http://"; if you
>> know
>> that proxy works for SSL, try "https://"…)
>
>Yes, we have both http and https versions of that proxy.  I've tried both
>of them here, with the same result.

Can you get onto IRC? We might be able to fix this faster using a more
immediate medium. If so, I'm sigmavirus24 in #openstack-dev on
irc.freenode.net so feel free to ping me there.

--
Ian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [fuel] more collaboration request

2015-06-12 Thread Dmitry Borodaenko
On Fri, Jun 12, 2015 at 08:33:56AM -0700, James Bottomley wrote:
> On Fri, 2015-06-12 at 02:43 -0700, Dmitry Borodaenko wrote:
> > On Thu, Jun 11, 2015 at 11:43:09PM -0400, Emilien Macchi wrote:
> > > What about code history and respect of commit ownership?
> > > I'm personally wondering if it's fair to copy/paste several thousands of
> > > lines of code from another Open-Source project without asking to the
> > > community or notifying the authors before. I know it's Open-Source and
> > > Apache 2.0 but well... :-)
> > 
> > Being able to copy code without having to ask for permission is exactly
> > what Free Software (and more recently, Open Source) is for.
> 
> Copy and Paste forking of code into compatibly licenced code (without
> asking permission) is legally fine as long as you observe the licence,
> but it comes with a huge technical penalty:
> 
>  1. Copy and Paste forks can't share patches: a patch for one has to
> be manually applied to the other.  The amount of manual
> intervention required grows as the forks move out of sync.
>  2. Usually one fork gets more attention than the other, so the
> patch problem of point 1 eventually causes the less attended
> fork to become unmaintainable (or if not patched, eventually
> unusable).
>  3. In the odd chance both forks receive equal attention, you're
> still expending way over 2x the effort you would have on a
> single code base.
> 
> There's no rule of thumb for this: we all paste snippets (pieces of code
> of up to around 10 lines or so).  Sometimes these snippets contain
> errors and suddenly hundreds of places need fixing.   The way around
> this problem is to share code, either by inclusion, modularity or
> linking.  The reason we paste snippets is because sharing for them is
> enormous effort.  However, as the size of the paste grows, so does the
> fork penalty and it becomes advantageous to avoid it and the effort of
> sharing the code looks a lot less problematic.
> 
> Even in the case where the fork is simply "patch the original for bug
> fixes and some new functionality", the fork penalty rules apply.
> 
> The data that supports all of this came from Red Hat and SUSE.  The end
> of the 2.4 kernel release cycle for them was a disaster with patch sets
> larger than the actual kernel itself.  Sorting through the resulting
> rubble is where the "upstream first" policy actually came from.

Thanks for the excellent summary of the technical penalties incurred by
straying too far from upstream.

It's funny how after years of trying to convince Fuel developers to put
more effort into collaboration with upstream, in this thread I managed
to come across as if I were arguing the opposite. To reiterate, I
understand and support the practical reasons to reduce the gap between
Fuel and Puppet OpenStack, and I believe that practical reasons are a
much better way to motivate Fuel developers to collaborate than arguing
whether what Fuel team has done in the past was fair or wrong.

-- 
Dmitry Borodaenko

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Looking for help getting git-review to work over https

2015-06-12 Thread KARR, DAVID
> -Original Message-
> From: Ian Cordasco [mailto:ian.corda...@rackspace.com]
> Sent: Friday, June 12, 2015 12:05 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Looking for help getting git-review to
> work over https
> 
> It looks like it can't run the setup.py because it can't find pbr.
> 
> Could you provide the following:
> 
> - Version of pip you're using (pip --version)

pip 7.0.3 from /usr/lib/python2.7/site-packages (python 2.7)

> - How you installed pip (e.g., apt-get install -y python-pip)

Sigh.  I no longer remember.  I know I had to install the EPEL repo.  It would 
have been through "yum", most likely.

> - Contents of your global pip configuration (if one exists, it will
> be in
> ~/.pip/)

[global]
proxy=https://one.proxy.att.com:8080

> - What you get when you do `python -c 'import setuptools;
> print(setuptools.__version__)'`

--
$ `python -c 'import setuptools; > print(setuptools.__version__)'`
  File "", line 1
import setuptools; > print(setuptools.__version__)
   ^
SyntaxError: invalid syntax
---

I also tried without the wrapping backticks, which you may have used just to 
wrap the command line.  Same result.

> - Output of python -V

Python 2.7.5

> On 6/12/15, 12:08, "KARR, DAVID"  wrote:
> 
> >> -Original Message-
> >> From: Ian Cordasco [mailto:ian.corda...@rackspace.com]
> >> Sent: Friday, June 12, 2015 9:57 AM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] Looking for help getting git-review
> to
> >> work over https
> >>
> >> Python has generally awful support for doing HTTPS over an HTTPS
> >> proxy.
> >> Can you try doing:
> >>
> >> # pip install --proxy http://one.proxy.att.com: port>
> >
> >Thanks.  I already tried that. It's not even clear this is failing
> to
> >connect. I don't know what this is telling me.
> >--
> ># pip install --proxy http://one.proxy.att.com:8080 .
> >Processing /home/dk068x/work/git-review
> >Complete output from command python setup.py egg_info:
> >Download error on https://pypi.python.org/simple/pbr/: [Errno
> 1]
> >_ssl.c:504: error:140770FC:SSL
> routines:SSL23_GET_SERVER_HELLO:unknown
> >protocol -- Some packages may not be found!
> >Couldn't find index page for 'pbr' (maybe misspelled?)
> >Download error on https://pypi.python.org/simple/: [Errno 1]
> >_ssl.c:504: error:140770FC:SSL
> routines:SSL23_GET_SERVER_HELLO:unknown
> >protocol -- Some packages may not be found!
> >No local packages or download links found for pbr
> >Traceback (most recent call last):
> >  File "", line 20, in 
> >  File "/tmp/pip-MStPEo-build/setup.py", line 20, in 
> >setuptools.setup(setup_requires=['pbr'], pbr=True)
> >-
> >
> >>
> >> Cheers,
> >> Ian
> >>
> >> On 6/12/15, 10:26, "KARR, DAVID"  wrote:
> >>
> >> >As I apparently have to sudo this, it would likely be more
> >> effective, if
> >> >at all, to specify the proxy on the command line.
> Unfortunately,
> >> > it didn’t make any difference:
> >> >
> >> ># pip install --proxy "https://one.proxy.att.com:8080"; .
> >> >Processing /home/dk068x/work/git-review
> >> >Complete output from command python setup.py egg_info:
> >> >Download error on https://pypi.python.org/simple/pbr/:
> [Errno
> >> 1]
> >> >_ssl.c:504: error:140770FC:SSL
> >> routines:SSL23_GET_SERVER_HELLO:unknown
> >> >protocol -- Some packages may
> >> > not be found!
> >> >Couldn't find index page for 'pbr' (maybe misspelled?)
> >> >Download error on https://pypi.python.org/simple/: [Errno
> 1]
> >> >_ssl.c:504: error:140770FC:SSL
> >> routines:SSL23_GET_SERVER_HELLO:unknown
> >> >protocol -- Some packages may not
> >> > be found!
> >> >No local packages or download links found for pbr
> >> >Traceback (most recent call last):
> >> >  File "", line 20, in 
> >> >  File "/tmp/pip-D5jhCD-build/setup.py", line 20, in
> 
> >> >setuptools.setup(setup_requires=['pbr'], pbr=True)
> >> >  File "/usr/lib64/python2.7/distutils/core.py", line 112,
> in
> >> setup
> >> >_setup_distribution = dist = klass(attrs)
> >> >...
> >> >
> >> >
> >> >I’m providing the appropriate https proxy url, as pip appears
> to
> >> be using
> >> >a https url.
> >> >
> >> >From: Asselin, Ramy [mailto:ramy.asse...@hp.com]
> >> >
> >> >Sent: Thursday, June 11, 2015 4:35 PM
> >> >To: OpenStack Development Mailing List (not for usage
> questions)
> >> >Subject: Re: [openstack-dev] Looking for help getting git-
> review
> >> to work
> >> >over https
> >> >
> >> >
> >> >
> >> >Try creating/updated ~/.pip/pip.conf
> >> >
> >> >With contents:
> >> >
> >> >[global]
> >> >proxy =
> >> >http://your_proxy:port/
> >> >
> >> >From: KARR, DAVID [mailto:dk0...@att.com]
> >> >
> >> >Sent: Thursday, June 11, 2015 4:16 PM
> >> >To: OpenStack Development Mailing List (not for usage
> questions)
> >> >Su

[openstack-dev] [neutron] Service Chain project IRC meeting minutes - 06/11/2015

2015-06-12 Thread Cathy Zhang
Hi Everyone,

Thanks for joining the service chaining meeting on 6/11/2015. Here are the 
links to the meeting logs:
http://eavesdrop.openstack.org/meetings/sfc_project/2015/sfc_project.2015-06-11-17.01.html
http://eavesdrop.openstack.org/meetings/sfc_project/2015/sfc_project.2015-06-11-17.01.txt
http://eavesdrop.openstack.org/meetings/sfc_project/2015/sfc_project.2015-06-11-17.01.log.html

Thanks,
Cathy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing Rossella Sblendido for the Control Plane core team

2015-06-12 Thread Kyle Mestery
On Fri, Jun 12, 2015 at 2:44 PM, Kevin Benton  wrote:

> Hello!
>
> As the Lieutenant of the built-in control plane[1], I would like Rossella
> Sblendido to be a member of the control plane core reviewer team.
>
> Her review stats are in line with other cores[2] and her feedback on
> patches related to the agents has been great. Additionally, she has been
> working quite a bit on the blueprint to restructure the L2 agent code so
> she is very familiar with the agent code and the APIs it leverages.
>
> Existing cores that have spent time working on the reference
> implementation (agents and AMQP code), please vote +1/-1 for her addition
> to the team. Aaron, Gary, Assaf, Maru, Kyle, Armando, Carl and Oleg; you
> have all been reviewing things in these areas recently so I would like to
> hear from you specifically.
>
> 1.
> http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy
> 2. http://stackalytics.com/report/contribution/neutron-group/30
>
>
+1


> Cheers
> --
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Looking for help getting git-review to work over https

2015-06-12 Thread KARR, DAVID
> -Original Message-
> From: Kevin L. Mitchell [mailto:kevin.mitch...@rackspace.com]
> Sent: Friday, June 12, 2015 12:05 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] Looking for help getting git-review to
> work over https
> 
> On Fri, 2015-06-12 at 17:08 +, KARR, DAVID wrote:
> > Thanks.  I already tried that. It's not even clear this is
> failing to
> > connect. I don't know what this is telling me.
> > --
> > # pip install --proxy http://one.proxy.att.com:8080 .
> > Processing /home/dk068x/work/git-review
> > Complete output from command python setup.py egg_info:
> > Download error on https://pypi.python.org/simple/pbr/: [Errno
> 1]
> > _ssl.c:504: error:140770FC:SSL
> routines:SSL23_GET_SERVER_HELLO:unknown
> > protocol -- Some packages may not be found!
> > Couldn't find index page for 'pbr' (maybe misspelled?)
> > Download error on https://pypi.python.org/simple/: [Errno 1]
> > _ssl.c:504: error:140770FC:SSL
> routines:SSL23_GET_SERVER_HELLO:unknown
> > protocol -- Some packages may not be found!
> > No local packages or download links found for pbr
> > Traceback (most recent call last):
> >   File "", line 20, in 
> >   File "/tmp/pip-MStPEo-build/setup.py", line 20, in 
> > setuptools.setup(setup_requires=['pbr'], pbr=True)
> 
> Have you confirmed that your proxy at 8080 is capable of SSL?
> Usually,
> people use port 8080 for plain old HTTP servers or proxies, and
> trying
> to talk SSL to a plain HTTP proxy would probably result in that
> error.
> 
> (Also noticed that your proxy URL is specified as "http://"; if you
> know
> that proxy works for SSL, try "https://"…)

Yes, we have both http and https versions of that proxy.  I've tried both of 
them here, with the same result.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Proposing Rossella Sblendido for the Control Plane core team

2015-06-12 Thread Kevin Benton
Hello!

As the Lieutenant of the built-in control plane[1], I would like Rossella
Sblendido to be a member of the control plane core reviewer team.

Her review stats are in line with other cores[2] and her feedback on
patches related to the agents has been great. Additionally, she has been
working quite a bit on the blueprint to restructure the L2 agent code so
she is very familiar with the agent code and the APIs it leverages.

Existing cores that have spent time working on the reference implementation
(agents and AMQP code), please vote +1/-1 for her addition to the team.
Aaron, Gary, Assaf, Maru, Kyle, Armando, Carl and Oleg; you have all been
reviewing things in these areas recently so I would like to hear from you
specifically.

1.
http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy
2. http://stackalytics.com/report/contribution/neutron-group/30

Cheers
-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [fuel] more collaboration request

2015-06-12 Thread Dmitry Borodaenko
On Fri, Jun 12, 2015 at 02:25:31PM +0200, Flavio Percoco wrote:
> >I have already explained in the thread how we address the problem of
> >tracking down and managing the Fuel specific changes in forked modules.
> >With that problem addressed, I don't see any other objective reason for
> >frustration. Does anybody's bonus depend on the number of lines of code
> >in stackforge repositories such as fuel-library that git blame
> >attributes to their name?
> 
> I don't think anyone here is talking about bonuses or worrying about
> salaries. The fact that you mention it offends the purposes of this
> thread and, as much as you don't care, I'm really sad to read that.
> 
> The whole thing this thread is trying to achieve is improving
> collaboration and you are derailing the conversation with completely
> unfriendly/unhelpful comments like the one above.

I am really sorry that I made you feel bad about what I wrote, I didn't
mean to do that. I actually completely agree with you that this aspect
of the thread was derailing the conversation, and I tried to use
reductio ad absurdum to demonstrate how ridiculous it can get if we
focus on perfecting author attribution instead of discussing
collaboration. I should have been more explicit in indicating that I
didn't actually mean this as a serious question. Let's write it off as a
bad joke that didn't make it across the language barrier.

> It does cause frustration because, as you can read from Emilien's
> original email, it not only adds some extra burden to people in the
> puppet team but it also defeats the purpose of the team itself,
> which is creating OpenStack puppet manifests that are consumable by
> everyone.

Now I see that we're on the same page. I agree that it does add extra
burden, and even though we've done what we could to reduce that burden
in the process I've described earlier, the only way to eliminate it
completely is to use upstream Puppet modules in Fuel directly and
without Fuel specific modifications. I see a broad consensus on this
thread in favor of setting this as the end goal, and I gladly join that
sentiment.

To prove that I'm not merely trying to placate you, here's what I had to
say about this to the Fuel team back in March 2014 when we first came up
with our current process for tracking upstream:

https://lists.launchpad.net/fuel-dev/msg00727.html

Peace?

-- 
Dmitry Borodaenko

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-12 Thread Adrian Otto
Okay, I will think on that a bit.

Adrian


 Original message 
From: "Steven Dake (stdake)" 
Date: 06/12/2015 8:04 AM (GMT-08:00)
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - 
need a vote from the core team

Adrian,

Disagree.

The python-magnumclient and heat-coe-templates have separate launchpad 
trackers.  I suspect we will want a separate tracker for python-k8sclient when 
we are ready for that.

IMO each repo should have a separate launchpad tracker to make the lives of the 
folks maintaining the software easier :)  This is a common best practice in 
OpenStack.

Regards
-steve


From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Friday, June 12, 2015 at 7:47 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - 
need a vote from the core team

Steve,

There is no need for a second LP project at all.

Adrian


 Original message 
From: "Steven Dake (stdake)" mailto:std...@cisco.com>>
Date: 06/12/2015 7:41 AM (GMT-08:00)
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - 
need a vote from the core team

Adrian,

Great thanks for that.  I would recommend one change and that is for one 
magnum-drivers team across launchpad trackers.  The magnum-drivers team as you 
know (this is for the benefit of others) is responsible for maintaining the 
states of the launchpad trackers.

Regards,
-steve

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Thursday, June 11, 2015 at 7:21 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - 
need a vote from the core team

Team,

We are fortunate enough to have a thriving community of developers who want to 
make OpenStack great, and several of us have pledged support for this work in 
Magnum. Due to the amount of interest expressed in this pursuit, and the small 
amount of overlap between the developers in magnum-core, I’m authorizing the 
creation of a new gerrit ACL group named magnum-ui-core. Please install me as 
the pilot member of the group. I will seed the group with those who have 
pledged support for the effort from the “essential” subscribers to the 
following blueprint. If our contributors to the magnum-ui repo feel that review 
velocity is too low, I will add magnum-core as a member so we can help. On 
regular intervals, I will review the activity level of our new group, and make 
adjustments as needed to add/subtract from it in accordance with input from the 
active contributors. We will use the Magnum project on Launchpad for blueprints 
and bugs.

https://blueprints.launchpad.net/magnum/+spec/magnum-horizon-plugin

There are 8 contributors identified, who will comprise our initial 
magnum-ui-core group.

I ask that the ACLs be configured as follows:

[access "refs/heads/*"]
abandon = group magnum-ui-core
create = group magnum-milestone
label-Code-Review = -2..+2 group magnum-ui-core
label-Workflow = -1..+1 group magnum-ui-core

[access "refs/tags/*"]
pushSignedTag = group magnum-milestone

[receive]
requireChangeId = true
requireContributorAgreement = true

[submit]
mergeContent = true

Thanks everyone for your enthusiasm about this new pursuit. I look forward to 
working together with you to make this into something we are all proud of.

Adrian

PS: Special thanks to sdake for initiating this conversation, and helping us to 
arrive at a well reasoned decision about how to approach this.

On Jun 4, 2015, at 10:58 AM, Steven Dake (stdake) <std...@cisco.com> wrote:

Hey folks,

I think it is critical for self-service needs that we have a Horizon dashboard 
to represent Magnum.  I know the entire Magnum team has no experience in UI 
development, but I have found at least one volunteer, Bradley Jones, to tackle the 
work.

I am looking for more volunteers to tackle this high impact effort to bring 
Containers to OpenStack either in the existing Magnum core team or as new 
contributors. If you're interested, please chime in on this thread.

As far as “how to get patches approved”, there are two models we can go with.

Option #1:
We add these UI folks to the magnum-core team and trust them not to +2/+A 
Magnum infrastructure code.  This also preserves us as one team with one 
mission.

Option #2:
We make a new core team magnum-ui-core.  This presents special problems if

Re: [openstack-dev] Looking for help getting git-review to work over https

2015-06-12 Thread Ian Cordasco
It looks like it can't run the setup.py because it can't find pbr.

Could you provide the following:

- Version of pip you're using (pip --version)
- How you installed pip (e.g., apt-get install -y python-pip)
- Contents of your global pip configuration (if one exists, it will be in
~/.pip/)
- What you get when you do `python -c 'import setuptools;
print(setuptools.__version__)'`
- Output of python -V

On 6/12/15, 12:08, "KARR, DAVID"  wrote:

>> -Original Message-
>> From: Ian Cordasco [mailto:ian.corda...@rackspace.com]
>> Sent: Friday, June 12, 2015 9:57 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] Looking for help getting git-review to
>> work over https
>> 
>> Python has generally awful support for doing HTTPS over an HTTPS
>> proxy.
>> Can you try doing:
>> 
>> # pip install --proxy http://one.proxy.att.com:
>
>Thanks.  I already tried that. It's not even clear this is failing to
>connect. I don't know what this is telling me.
>--
># pip install --proxy http://one.proxy.att.com:8080 .
>Processing /home/dk068x/work/git-review
>Complete output from command python setup.py egg_info:
>Download error on https://pypi.python.org/simple/pbr/: [Errno 1]
>_ssl.c:504: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown
>protocol -- Some packages may not be found!
>Couldn't find index page for 'pbr' (maybe misspelled?)
>Download error on https://pypi.python.org/simple/: [Errno 1]
>_ssl.c:504: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown
>protocol -- Some packages may not be found!
>No local packages or download links found for pbr
>Traceback (most recent call last):
>  File "", line 20, in 
>  File "/tmp/pip-MStPEo-build/setup.py", line 20, in 
>setuptools.setup(setup_requires=['pbr'], pbr=True)
>-
>
>> 
>> Cheers,
>> Ian
>> 
>> On 6/12/15, 10:26, "KARR, DAVID"  wrote:
>> 
>> >As I apparently have to sudo this, it would likely be more
>> effective, if
>> >at all, to specify the proxy on the command line.  Unfortunately,
>> > it didn’t make any difference:
>> >
>> ># pip install --proxy "https://one.proxy.att.com:8080"; .
>> >Processing /home/dk068x/work/git-review
>> >Complete output from command python setup.py egg_info:
>> >Download error on https://pypi.python.org/simple/pbr/: [Errno
>> 1]
>> >_ssl.c:504: error:140770FC:SSL
>> routines:SSL23_GET_SERVER_HELLO:unknown
>> >protocol -- Some packages may
>> > not be found!
>> >Couldn't find index page for 'pbr' (maybe misspelled?)
>> >Download error on https://pypi.python.org/simple/: [Errno 1]
>> >_ssl.c:504: error:140770FC:SSL
>> routines:SSL23_GET_SERVER_HELLO:unknown
>> >protocol -- Some packages may not
>> > be found!
>> >No local packages or download links found for pbr
>> >Traceback (most recent call last):
>> >  File "", line 20, in 
>> >  File "/tmp/pip-D5jhCD-build/setup.py", line 20, in 
>> >setuptools.setup(setup_requires=['pbr'], pbr=True)
>> >  File "/usr/lib64/python2.7/distutils/core.py", line 112, in
>> setup
>> >_setup_distribution = dist = klass(attrs)
>> >...
>> >
>> >
>> >I’m providing the appropriate https proxy url, as pip appears to
>> be using
>> >a https url.
>> >
>> >From: Asselin, Ramy [mailto:ramy.asse...@hp.com]
>> >
>> >Sent: Thursday, June 11, 2015 4:35 PM
>> >To: OpenStack Development Mailing List (not for usage questions)
>> >Subject: Re: [openstack-dev] Looking for help getting git-review
>> to work
>> >over https
>> >
>> >
>> >
>> >Try creating/updated ~/.pip/pip.conf
>> >
>> >With contents:
>> >
>> >[global]
>> >proxy =
>> >http://your_proxy:port/
>> >
>> >From: KARR, DAVID [mailto:dk0...@att.com]
>> >
>> >Sent: Thursday, June 11, 2015 4:16 PM
>> >To: OpenStack Development Mailing List (not for usage questions)
>> >Subject: Re: [openstack-dev] Looking for help getting git-review
>> to work
>> >over https
>> >
>> >
>> >
>> >This is just going swimmingly.
>> >
>> >% sudo python setup.py install
>> >Download error on
>> >https://pypi.python.org/simple/pbr/: [Errno 110] Connection timed
>> out --
>> >Some packages may not be found!
>> >Couldn't find index page for 'pbr' (maybe misspelled?)
>> >Download error on
>> >https://pypi.python.org/simple/: [Errno 110] Connection timed out
>> -- Some
>> >packages may not be found!
>> >No local packages or download links found for pbr
>> >
>> >
>> >Do I have to do something special to set the proxy for this?  I
>> have
>> >“http_proxy” and “https_proxy” already set.
>> >
>> >From: ZZelle [mailto:zze...@gmail.com]
>> >
>> >Sent: Thursday, June 11, 2015 3:50 PM
>> >To: OpenStack Development Mailing List (not for usage questions)
>> >Subject: Re: [openstack-dev] Looking for help getting git-review
>> to work
>> >over https
>> >
>> >
>> >
>> >Indeed, the doc[1] is unclear
>> >
>> >git-review can be installed using: python setup.py install or pip
>> install
>> >.
>> >
>> >
>> >[1]http:/

Re: [openstack-dev] Looking for help getting git-review to work over https

2015-06-12 Thread Kevin L. Mitchell
On Fri, 2015-06-12 at 17:08 +, KARR, DAVID wrote:
> Thanks.  I already tried that. It's not even clear this is failing to
> connect. I don't know what this is telling me.
> --
> # pip install --proxy http://one.proxy.att.com:8080 .
> Processing /home/dk068x/work/git-review
> Complete output from command python setup.py egg_info:
> Download error on https://pypi.python.org/simple/pbr/: [Errno 1]
> _ssl.c:504: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown
> protocol -- Some packages may not be found!
> Couldn't find index page for 'pbr' (maybe misspelled?)
> Download error on https://pypi.python.org/simple/: [Errno 1]
> _ssl.c:504: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown
> protocol -- Some packages may not be found!
> No local packages or download links found for pbr
> Traceback (most recent call last):
>   File "", line 20, in 
>   File "/tmp/pip-MStPEo-build/setup.py", line 20, in 
> setuptools.setup(setup_requires=['pbr'], pbr=True)

Have you confirmed that your proxy at 8080 is capable of SSL?  Usually,
people use port 8080 for plain old HTTP servers or proxies, and trying
to talk SSL to a plain HTTP proxy would probably result in that error.

(Also noticed that your proxy URL is specified as "http://"; if you know
that proxy works for SSL, try "https://"…)
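
If it helps to narrow that down, here is a small diagnostic sketch (assuming the
python-requests library is available; the proxy URL is just the one quoted in this
thread) that tries a single HTTPS request through the proxy:

    import requests  # assumes python-requests is installed

    proxies = {"https": "http://one.proxy.att.com:8080"}  # proxy quoted in this thread
    try:
        resp = requests.get("https://pypi.python.org/simple/",
                            proxies=proxies, timeout=15)
        print("proxy can tunnel HTTPS, got status %s" % resp.status_code)
    except requests.exceptions.SSLError as exc:
        print("SSL handshake failed through the proxy: %s" % exc)
    except requests.exceptions.ProxyError as exc:
        print("proxy refused or broke the CONNECT: %s" % exc)

If the SSLError shows the same "unknown protocol" message as above, the proxy is
almost certainly not doing SSL on that port.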
-- 
Kevin L. Mitchell 
Rackspace


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] : DotHill FC/iSCSI Drivers ; Copyright attribution incident

2015-06-12 Thread Bandaru Nageswara Rao
Dear Mike

It has come to my attention that, during the work done to submit code into 
OpenStack, one of our engineers accidentally deleted the original copyright 
notice, thereby failing to give credit to the original authors of the code, 
namely “Objectif Libre”. It was inadvertent and there was no intent to take 
away the credit from the original work. We agree this was an oversight on our 
part and we take this seriously. We thank the community for detecting this and 
bringing it to our notice. Subsequently this has been corrected and submitted 
to the community for review.

We do realize that this runs contrary to the terms of our commitment to 
OpenStack and the Apache 2.0 Open Source License. Please be assured that we are 
treating this matter with utmost importance and will ensure that our 
contributors pay special attention to the copyright while reviewing and 
submitting code.


Regards

NageswaraRao Bandaru

HTC Manager -Dot Hill Systems
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][Cue] Add Cue to OpenStack

2015-06-12 Thread Vipul Sabhaya
Hello OpenStack TC and stackers!

We’ve submitted a patch to add Cue to the OpenStack project list.

https://review.openstack.org/#/c/191173/

For those not familiar, Cue is a Message Broker Provisioning and Lifecycle
Management service for OpenStack.  We’ve focused initially on RabbitMQ, and
are starting to look into adding Kafka clusters.

We’ve already made lots of progress:
- v1 API for cluster management
- DevStack integration
- Tempest tests and gate
- Rally scenarios and gate
- Extensive developer and deployment documentation [1]

Please reach out if there are any questions.  You can also find us on
#openstack-cue.

Thanks!
-Vipul

[1]: http://cue.readthedocs.org/en/latest/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port Blueprint

2015-06-12 Thread Adrian Otto
Egor makes a good point. If we have the API port number on the Baymodel, then 
Magnum can set the api_address value to a URL that contains the correct protocol, 
address, and port for the native API service. Here is the blueprint for that:

https://blueprints.launchpad.net/magnum/+spec/magnum-api-address-url
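
(Purely as illustration, a tiny sketch of composing such a URL; the names and
values are assumptions, not the final Magnum model:)

    # Hypothetical helper; the argument names and sample values are illustrative only.
    def build_api_address(protocol, address, apiserver_port):
        return "%s://%s:%d" % (protocol, address, apiserver_port)

    # e.g. build_api_address("https", "172.24.4.10", 8080) -> "https://172.24.4.10:8080"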

Adrian

On Jun 10, 2015, at 10:22 PM, Egor Guz 
mailto:e...@walmartlabs.com>> wrote:

Kai,

+1 for adding it to baymodel, but I don’t see many use cases where people need to 
change it. And if they really need to change it, they can always modify the heat 
template.
-1 for opening it just to admins. I think everyone who creates a model should 
be able to specify it, the same way as dns-nameserver for example.

―
Egor

From: Kai Qiang Wu 
mailto:wk...@cn.ibm.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, June 10, 2015 at 18:35
To: 
"openstack-dev@lists.openstack.org"
 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [Magnum] Discuss configurable-coe-api-port Blueprint


I’m moving this whiteboard to the ML so we can have some discussion to refine 
it, and then go back and update the whiteboard.

Source:https://blueprints.launchpad.net/magnum/+spec/configurable-coe-api-port


@Sdake and I have had some discussion, but we may need more input from your side.


1. Keeping apiserver_port in baymodel may mean that only an admin (if we 
introduce policy) can create that baymodel, which leaves less flexibility for other users.


2. apiserver_port was designed into baymodel; moving it from baymodel to bay is a 
big change, so we should consider whether there are better ways. (This may also apply to 
other configuration fields, like dns-nameserver, etc.)



Thanks



Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
   No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Essential and High priority blueprints

2015-06-12 Thread Adrian Otto
Team,

Starting next week, I will be asking our blueprint assignees (or a chosen 
delegate) to provide a brief team update on each of our Essential and High 
priority blueprints. We currently have 11 of them, so let’s plan to keep the 
updates concise, in the < 3 min range.

Remember that if you are the assignee of a blueprint, you are not required to 
do all the work, but you are expected to report back to the team about the 
progress of the feature, and take responsibility to help it land on schedule.

If you are assigned to a blueprint and cannot attend the team meeting, please 
prepare a delegate to update us.

Thanks,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Docker Native Networking

2015-06-12 Thread Adrian Otto
Team,

OpenStack Networking support for Magnum Bays was an important topic for us in 
Vancouver at the design summit. Here is one blueprint that requires discussion 
that’s beyond the scope of what we can easily fit in the BP whiteboard:

https://blueprints.launchpad.net/magnum/+spec/native-docker-network

Before we dive into implementation planning, I'll offer these as guardrails to 
use as a starting point:

1) Users of the Swarm bay type have the ability to create containers. Those 
containers may reside on different hosts (Nova instances). We want those 
containers to be able to communicate with each other over a network similar to 
the way that they can over the Flannel network used with Kubernetes.

2) We should leverage community work as much as possible, combining the best of 
Docker and OpenStack to produce an integrated solution that is easy to use, and 
exhibits performance that's suitable for common use cases.

3) Recognize that our Docker community is working on libnetwork [1] which will 
allow for the creation of logical "networks" similar to "links" that allow 
containers to communicate with each other across host boundaries. The 
implementation is pluggable, and we decided in Vancouver that working on a 
Neutron plugin for libnetwork could potentially make the user experience  
consistent whether you are using Docker within Magnum or not.

4) We would like to plug in Neutron to Flannel as a modular option for 
Kubernetes Bays, so both solutions leverage OpenStack networking, and users can 
use familiar, native tools.

References:
[1] https://github.com/docker/libnetwork

Please let me know what you think of this approach. I’d like to re-state the 
Blueprint description, clear the whiteboard, and put up a spec that will 
accommodate in-line comments so we can work on the implementation specifics 
better in context.

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port Blueprint

2015-06-12 Thread Adrian Otto
Thanks for raising this for discussion. Although I do think that the API port 
number should be expressed in a URL that a native client can immediately use 
for connecting to the API, I am not convinced that this needs 
to be a separate attribute on the Bay resource.

In general, I think it’s a reasonable assumption that nova instances will have 
unique IP addresses assigned to them (public or private is not an issue here), 
so running the API services on alternate, unique port numbers seems 
like it may not be needed. I’d like to have input from at least one Magnum user 
explaining an actual use case for this feature before accepting this blueprint.

One possible workaround for this would be to instruct those who want to run 
nonstandard ports to copy the heat template, and specify a new heat template as 
an alternate when creating the BayModel, which can implement the port number as 
a parameter. If we learn that this happens a lot, we should revisit this as a 
feature in Magnum rather than allowing it through an external workaround.

I’d like to have a generic feature that allows for arbitrary key/value pairs 
for parameters and values to be passed to the heat stack create call so that 
this, and other values can be passed in using the standard magnum client and 
API without further modification. I’m going to look to see if we have a BP for 
this, and if not, I will make one.
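
For reference, at the heat API level that generic feature would boil down to
something like the following sketch (python-heatclient usage; the endpoint, token,
template path, and parameter names are placeholders and assumptions, not an agreed
interface):

    from heatclient import client as heat_client  # assumes python-heatclient

    # Placeholder endpoint and token; a real deployment would get these from keystone.
    heat = heat_client.Client('1', 'http://heat.example.com:8004/v1/TENANT_ID',
                              token='AUTH_TOKEN')
    template_body = open('kubecluster.yaml').read()   # assumed path to the bay template
    heat.stacks.create(
        stack_name='my-bay',
        template=template_body,
        parameters={'apiserver_port': 8080,           # arbitrary key/value pairs
                    'dns_nameserver': '8.8.8.8'})     # passed straight through to heat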

Adrian



On Jun 11, 2015, at 6:05 PM, Kai Qiang Wu(Kennan) 
mailto:wk...@cn.ibm.com>> wrote:


If I understand the bp correctly,

the apiserver_port is the public access or API call service endpoint. If that is 
the case, users would use that info as

http(s)://<address>:<port>

so the port is good information for users.


If we believe the above assumption is right, then:

1) Some users do not need to change the port, since the heat template has a default 
hard-coded port in it.

2) If some users want to change the port (through heat, we can do that), we need to 
add such flexibility for users.
That's what bp 
https://blueprints.launchpad.net/magnum/+spec/configurable-coe-api-port tries to 
solve.

It depends on how end users work with magnum.


More input about this is welcome. If many of us think it is not necessary to 
customize the ports, we can drop the bp.


Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!

Jay Lau ---06/11/2015 01:17:42 PM---I think that we have a similar 
bp before: https://blueprints.launchpad.net/magnum/+spec/override-nat

From: Jay Lau mailto:jay.lau@gmail.com>>
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: 06/11/2015 01:17 PM
Subject: Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port 
Blueprint





I think that we had a similar bp before: 
https://blueprints.launchpad.net/magnum/+spec/override-native-rest-port

I had some discussion with Larsks before; it seems that it does not make much 
sense to customize this port, as the kubernetes/swarm/mesos cluster will be 
created by heat and end users do not need to care about the ports. Different 
kubernetes/swarm/mesos clusters will have different IP addresses, so there will 
be no port conflict.

2015-06-11 9:35 GMT+08:00 Kai Qiang Wu 
mailto:wk...@cn.ibm.com>>:

I’m moving this whiteboard to the ML so we can have some discussion to refine 
it, and then go back and update the whiteboard.

Source:https://blueprints.launchpad.net/magnum/+spec/configurable-coe-api-port


@Sdake and I have had some discussion, but we may need more input from your side.


1. Keeping apiserver_port in baymodel may mean that only an admin (if we 
introduce policy) can create that baymodel, which leaves less flexibility for other users.


2. apiserver_port was designed into baymodel; moving it from baymodel to bay is a 
big change, so we should consider whether there are better ways. (This may also apply to 
other configuration fields, like dns-nameserver, etc.)



Thanks



Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!

__
OpenStack Development Mailing List (not for usage

Re: [openstack-dev] [Neutron] Issue with pymysql

2015-06-12 Thread Boris Pavlovic
Sean,

Thanks for the quick fix/revert: https://review.openstack.org/#/c/191010/
This unblocked the Rally gates...

Best regards,
Boris Pavlovic

On Fri, Jun 12, 2015 at 8:56 PM, Clint Byrum  wrote:

> Excerpts from Mike Bayer's message of 2015-06-12 09:42:42 -0700:
> >
> > On 6/12/15 11:37 AM, Mike Bayer wrote:
> > >
> > >
> > > On 6/11/15 9:32 PM, Eugene Nikanorov wrote:
> > >> Hi neutrons,
> > >>
> > >> I'd like to draw your attention to an issue discovered by rally gate
> job:
> > >>
> http://logs.openstack.org/96/190796/4/check/gate-rally-dsvm-neutron-rally/7a18e43/logs/screen-q-svc.txt.gz?level=TRACE
> > >>
> > >> I don't have bandwidth to take a deep look at it, but first
> > >> impression is that it is some issue with nested transaction support
> > >> either on sqlalchemy or pymysql side.
> > >> Also, besides errors with nested transactions, there are a lot of
> > >> Lock wait timeouts.
> > >>
> > >> I think it makes sense to start with reverting the patch that moves
> > >> to pymysql.
> > > My immediate reaction is that this is perhaps a concurrency-related
> > > issue; because PyMySQL is pure python and allows for full blown
> > > eventlet monkeypatching, I wonder if somehow the same PyMySQL
> > > connection is being used in multiple contexts. E.g. one greenlet
> > > starts up a savepoint, using identifier "_3" which is based on a
> > > counter that is local to the SQLAlchemy Connection, but then another
> > > greenlet shares that PyMySQL connection somehow with another
> > > SQLAlchemy Connection that uses the same identifier.
> >
> > reading more of the log, it seems the main issue is just that there's a
> > deadlock on inserting into the securitygroups table.  The deadlock on
> > insert can be because of an index being locked.
> >
> >
> > I'd be curious to know how many greenlets are running concurrently here,
> > and what the overall transaction looks like within the operation that is
> > failing here (e.g. does each transaction insert multiple rows into
> > securitygroups?  that would make a deadlock seem more likely).
>
> This begs two questions:
>
> 1) Are we handling deadlocks with retries? It's important that we do
> that to be defensive.
>
> 2) Are we being careful to sort the table order in any multi-table
> transactions so that we minimize the chance of deadlocks happening
> because of any cross table deadlocks?
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Issue with pymysql

2015-06-12 Thread Clint Byrum
Excerpts from Mike Bayer's message of 2015-06-12 09:42:42 -0700:
> 
> On 6/12/15 11:37 AM, Mike Bayer wrote:
> >
> >
> > On 6/11/15 9:32 PM, Eugene Nikanorov wrote:
> >> Hi neutrons,
> >>
> >> I'd like to draw your attention to an issue discovered by rally gate job:
> >> http://logs.openstack.org/96/190796/4/check/gate-rally-dsvm-neutron-rally/7a18e43/logs/screen-q-svc.txt.gz?level=TRACE
> >>
> >> I don't have bandwidth to take a deep look at it, but first 
> >> impression is that it is some issue with nested transaction support 
> >> either on sqlalchemy or pymysql side.
> >> Also, besides errors with nested transactions, there are a lot of 
> >> Lock wait timeouts.
> >>
> >> I think it makes sense to start with reverting the patch that moves 
> >> to pymysql.
> > My immediate reaction is that this is perhaps a concurrency-related 
> > issue; because PyMySQL is pure python and allows for full blown 
> > eventlet monkeypatching, I wonder if somehow the same PyMySQL 
> > connection is being used in multiple contexts. E.g. one greenlet 
> > starts up a savepoint, using identifier "_3" which is based on a 
> > counter that is local to the SQLAlchemy Connection, but then another 
> > greenlet shares that PyMySQL connection somehow with another 
> > SQLAlchemy Connection that uses the same identifier.
> 
> reading more of the log, it seems the main issue is just that there's a 
> deadlock on inserting into the securitygroups table.  The deadlock on 
> insert can be because of an index being locked.
> 
> 
> I'd be curious to know how many greenlets are running concurrently here, 
> and what the overall transaction looks like within the operation that is 
> failing here (e.g. does each transaction insert multiple rows into 
> securitygroups?  that would make a deadlock seem more likely).

This begs two questions:

1) Are we handling deadlocks with retries? It's important that we do
that to be defensive.

2) Are we being careful to sort the table order in any multi-table
transactions so that we minimize the chance of deadlocks happening
because of any cross table deadlocks?
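
On (1), a minimal sketch of the defensive pattern, assuming oslo.db's wrap_db_retry
decorator is available; the session usage and table below are illustrative
assumptions, not Neutron's actual code:

    from oslo_db import api as oslo_db_api

    @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
    def create_security_group(context, name):
        # A DBDeadlock raised anywhere inside is caught by the decorator and the
        # whole unit of work is retried with an increasing interval.
        with context.session.begin(subtransactions=True):
            context.session.execute(
                "INSERT INTO securitygroups (name) VALUES (:name)", {"name": name})

The important part is that the retried unit is the whole transaction, not a single
statement, so the work is replayed cleanly after the deadlock victim rolls back.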

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] RPM Packaging for OpenStack IRC Meeting

2015-06-12 Thread Joshua Harlow

Dirk Müller wrote:

Hi Russell,


I'm just kind of curious ... as both the RDO and SUSE folks look closer
at this, how big are the differences?


From the overall big picture, we're doing pretty much the same thing.
We have both a tool chain to continuously track changes as they happen
in upstream git, packaging that up and building binary packages there,
testing them in an isolated area with a tempest-like run and when that
succeeds, promote them into the respective stable trees for operators
to consume.

In my personal view, there are surprisingly many overlapping
activities where collaborating makes sense and could save duplicated
effort on both sides.

However, the devil is a bit in the details: Right now there are
differences in the Fedora and openSUSE python packaging guidelines
that are leading to "almost the same but slightly different" .spec
files. We're looking at unifying those differences up to a point where
the remaining diff, should be anything left, can be handled by a post
processing tool that just generates the distro variant of the spec
file from the upstream spec file.

Just to give you an example (this is not an exhaustive list):
- SUSE requires the use of a spdx.org standardized License: tag in the
spec file, RDO uses something else
- SUSE requires packages to be called python-$pypi_name, while Fedora
escapes more things from the pypi name ('.', '_' and '+' are replaced
by '-' and the name is lowercased). This adds up in differences of
requires/provides/obsoletes/conflicts and so on. This can be likely
solved by substitution and by %if sections, we just need to work on
that.
- Indenting and whitespacing rules seem to be slightly different
between the distros

There are also some conventional changes (in some cases the RDO spec
file is more correctly packaged than the SUSE variant or vice versa)
and those can be easily resolved on a case by case base, and that will
immediately help both user bases.


If instead it seems
the differences are minor enough that combining efforts is a win for
everyone, then that's even better, but I don't see it as the required
outcome here personally.


Right. We've started with an open discussion and not started with any
of those two outcomes in mind already. I think thats also why we
agreed to start with a "green field" and not seed the repos with any
of the distro's existing spec files.

To me it looks promising that we can mechanically compile the $distro
policy conformant .spec file from the canonical upstream naming, and
at some point that compile step might end up being a "cp".


An example of some specs already doing this (they are built using the 
cheetah template engine/style):


https://github.com/stackforge/anvil/tree/master/conf/templates/packaging/specs

They are turned into 'normal' spec files (the compilation part) during 
build time.


Perhaps something similar can be done (ideally using jinja2 or better, 
since the cheetah project seems to be mostly dead...)
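
As a rough sketch of what such a compile step could look like (purely illustrative;
the template, license tags and naming rule below are assumptions based on the
differences listed above, not an agreed format):

    import re
    from jinja2 import Template

    SPEC_TEMPLATE = (
        "Name:           python-{{ module_name }}\n"
        "License:        {{ license_tag }}\n"
    )

    def fedora_module_name(pypi_name):
        # Fedora-style normalization mentioned above: lowercase, '.'/'_'/'+' -> '-'.
        return re.sub(r"[._+]", "-", pypi_name).lower()

    def render_spec(distro, pypi_name):
        if distro == "suse":
            ctx = {"module_name": pypi_name, "license_tag": "Apache-2.0"}   # SPDX tag
        else:
            ctx = {"module_name": fedora_module_name(pypi_name),
                   "license_tag": "ASL 2.0"}                                # Fedora short name
        return Template(SPEC_TEMPLATE).render(**ctx)

    # e.g. render_spec("suse", "oslo.config") vs render_spec("fedora", "oslo.config")

If the canonical upstream naming wins out, that render step eventually collapses
into a plain copy, as Dirk suggests.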





Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] RPM Packaging for OpenStack IRC Meeting

2015-06-12 Thread Russell Bryant
On 06/12/2015 01:31 PM, Dirk Müller wrote:
>> If instead it seems
>> the differences are minor enough that combining efforts is a win for
>> everyone, then that's even better, but I don't see it as the required
>> outcome here personally.
> 
> Right. We've started with an open discussion and not started with any
> of those two outcomes in mind already. I think thats also why we
> agreed to start with a "green field" and not seed the repos with any
> of the distro's existing spec files.
> 
> To me it looks promising that we can mechanically compile the $distro
> policy conformant .spec file from the canonical upstream naming, and
> at some point that compile step might end up being a "cp".

Yikes ... having to start green field and drop history from the last
several years seems quite unfortunate.  It kind of sounds like "too much
work to be worth it" to me, but I'm just on the sidelines here.

Anyway, my main objective was just to make sure nobody felt like
combining efforts was the only acceptable outcome.  I'm happy with
whatever you all end up deciding is most helpful overall.

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack][Neutron][OVN] OVN-Neutron - TODO List

2015-06-12 Thread Gal Sagie
Hello All,

I wanted to share some of our next working items and hopefully get more
people on board with the project.
I personally would mentor any newcomer who wants to get familiar with the
project and help
with any of these items.
You can also feel free to approach Russell Bryant (rbry...@redhat.com),
who is heading the OVN-OpenStack integration.

We are both usually active on IRC in #openstack-neutron-ovn (freenode);
feel free to drop by if you have any questions.

The Neutron sprint in Fort Collins [1] has a work item for OVN, hopefully
some work can start
there on some of these items (or others).
Russell Bryant and myself unfortunately won't be there, but feel free to
contact us online or in email.

*1. Security Group Implementation*
Currently security groups are not being configured in OVN; there is a
document written
about how to model security groups as OVN northbound ACLs. [2]
I suspect getting this right is not going to be trivial; hopefully I might
be able to also start tackling
this item next week.

*2. Provider Network support*
Russell sent a design proposal to the ovs-dev mailing list [3]; we need to
follow up on that
and implement it in OVN.

*3. Tempest configuration*
Russell has a patch for that [4] which needs additional help to make it
work.

*4. Unit Tests / Functional Tests *
We want to start adding more testing to the project on all fronts.

*5. Integration with OVS-DPDK*
OVS-DPDK has an ML2 mechanism driver [5] to enable a userspace DPDK dataplane
for OVS;
we want to try and see how this can be combined with the OVN mechanism driver.
(One idea is to
use hierarchical port binding for that.)
We need to design and test it and provide additional working items for this
integration.

*6. L2 Gateway Integration*
OVN supports L2 gateway translation between virtual and physical networks.
We want to leverage the current L2-Gateway sub-project in stackforge [6]
and use it
to enable configuration of L2 gateways in OVN.
I have looked briefly at the project and it seems the APIs are good, but
currently the
implementation relies on RPC and an agent (where we would like to
configure it using OVSDB), so this needs to be sorted out and tested.

Another issue is related to OVN itself, which doesn't have L2 gateway
awareness
in the Northbound DB (which is the DB that neutron configures) but only has
the API
in the Southbound DB.

*7. QoS Support*
We want to be able to support the new QoS API that is being implemented in
Liberty [7].
We need to see how we can leverage the work that will implement this for OVS
in the
reference implementation, and what additions need to be made for the OVN case.

*8. L3 Implementation*
L3 is not yet implemented in OVN; we need to follow up on the design and add
the L3 service plugin
and implementation.

*9. VLAN Aware VM's*
This is not directly related to OVN, but we need to make sure that the OVN use case
of configuring parent
ports (for the use case of containers inside a VM) is being addressed and,
once the implementation
is finished, to align the API for OVN as well.

As I mentioned above, if you are interested in working on any of these
items, please email me
or Russell back and we can get you started!

Thanks
Gal.

[1] https://etherpad.openstack.org/p/neutron-liberty-mid-cycle
[2]
https://github.com/stackforge/networking-ovn/blob/master/doc/source/design/data_model.rst
[3] http://openvswitch.org/pipermail/dev/2015-June/056212.html
[4] https://review.openstack.org/#/c/186894/
[5] https://github.com/stackforge/networking-ovs-dpdk
[6] https://github.com/stackforge/networking-l2gw
[7] https://review.openstack.org/#/c/182349/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] RPM Packaging for OpenStack IRC Meeting

2015-06-12 Thread Dirk Müller
Hi Russell,

> I'm just kind of curious ... as both the RDO and SUSE folks look closer
> at this, how big are the differences?

From the overall big picture, we're doing pretty much the same thing.
We have both a tool chain to continuously track changes as they happen
in upstream git, packaging that up and building binary packages there,
testing them in an isolated area with a tempest-like run and when that
succeeds, promote them into the respective stable trees for operators
to consume.

In my personal view, there are surprisingly many overlapping
activities where collaborating makes sense and could save duplicated
effort on both sides.

However, the devil is a bit in the details: Right now there are
differences in the Fedora and openSUSE python packaging guidelines
that are leading to "almost the same but slightly different" .spec
files. We're looking at unifying those differences up to a point where
the remaining diff, should anything be left, can be handled by a post
processing tool that just generates the distro variant of the spec
file from the upstream spec file.

Just to give you an example (this is not an exhaustive list):
- SUSE requires the use of a spdx.org standardized License: tag in the
spec file, RDO uses something else
- SUSE requires packages to be called python-$pypi_name, while Fedora
escapes more things from the pypi name ('.', '_' and '+' are replaced
by '-' and the name is lowercased). This adds up to differences in
requires/provides/obsoletes/conflicts and so on. This can likely be
solved by substitution and by %if sections; we just need to work on
that.
- Indenting and whitespacing rules seem to be slightly different
between the distros

There are also some conventional changes (in some cases the RDO spec
file is more correctly packaged than the SUSE variant or vice versa)
and those can be easily resolved on a case-by-case basis, and that will
immediately help both user bases.

> If instead it seems
> the differences are minor enough that combining efforts is a win for
> everyone, then that's even better, but I don't see it as the required
> outcome here personally.

Right. We started with an open discussion, without either of those two
outcomes already in mind. I think that's also why we
agreed to start with a "green field" and not seed the repos with either
distro's existing spec files.

To me it looks promising that we can mechanically compile the $distro
policy conformant .spec file from the canonical upstream naming, and
at some point that compile step might end up being a "cp".


Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Looking for help getting git-review to work over https

2015-06-12 Thread KARR, DAVID
> -Original Message-
> From: Ian Cordasco [mailto:ian.corda...@rackspace.com]
> Sent: Friday, June 12, 2015 9:57 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Looking for help getting git-review to
> work over https
> 
> Python has generally awful support for doing HTTPS over an HTTPS
> proxy.
> Can you try doing:
> 
> # pip install --proxy http://one.proxy.att.com:

Thanks.  I already tried that. It's not even clear this is failing to connect. 
I don't know what this is telling me.
--
# pip install --proxy http://one.proxy.att.com:8080 .
Processing /home/dk068x/work/git-review
Complete output from command python setup.py egg_info:
Download error on https://pypi.python.org/simple/pbr/: [Errno 1] 
_ssl.c:504: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol 
-- Some packages may not be found!
Couldn't find index page for 'pbr' (maybe misspelled?)
Download error on https://pypi.python.org/simple/: [Errno 1] _ssl.c:504: 
error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol -- Some 
packages may not be found!
No local packages or download links found for pbr
Traceback (most recent call last):
  File "", line 20, in 
  File "/tmp/pip-MStPEo-build/setup.py", line 20, in 
setuptools.setup(setup_requires=['pbr'], pbr=True)
-

> 
> Cheers,
> Ian
> 
> On 6/12/15, 10:26, "KARR, DAVID"  wrote:
> 
> >As I apparently have to sudo this, it would likely be more
> effective, if
> >at all, to specify the proxy on the command line.  Unfortunately,
> > it didn’t make any difference:
> >
> ># pip install --proxy "https://one.proxy.att.com:8080"; .
> >Processing /home/dk068x/work/git-review
> >Complete output from command python setup.py egg_info:
> >Download error on https://pypi.python.org/simple/pbr/: [Errno
> 1]
> >_ssl.c:504: error:140770FC:SSL
> routines:SSL23_GET_SERVER_HELLO:unknown
> >protocol -- Some packages may
> > not be found!
> >Couldn't find index page for 'pbr' (maybe misspelled?)
> >Download error on https://pypi.python.org/simple/: [Errno 1]
> >_ssl.c:504: error:140770FC:SSL
> routines:SSL23_GET_SERVER_HELLO:unknown
> >protocol -- Some packages may not
> > be found!
> >No local packages or download links found for pbr
> >Traceback (most recent call last):
> >  File "", line 20, in 
> >  File "/tmp/pip-D5jhCD-build/setup.py", line 20, in 
> >setuptools.setup(setup_requires=['pbr'], pbr=True)
> >  File "/usr/lib64/python2.7/distutils/core.py", line 112, in
> setup
> >_setup_distribution = dist = klass(attrs)
> >...
> >
> >
> >I’m providing the appropriate https proxy url, as pip appears to
> be using
> >a https url.
> >
> >From: Asselin, Ramy [mailto:ramy.asse...@hp.com]
> >
> >Sent: Thursday, June 11, 2015 4:35 PM
> >To: OpenStack Development Mailing List (not for usage questions)
> >Subject: Re: [openstack-dev] Looking for help getting git-review
> to work
> >over https
> >
> >
> >
> >Try creating/updated ~/.pip/pip.conf
> >
> >With contents:
> >
> >[global]
> >proxy =
> >http://your_proxy:port/
> >
> >From: KARR, DAVID [mailto:dk0...@att.com]
> >
> >Sent: Thursday, June 11, 2015 4:16 PM
> >To: OpenStack Development Mailing List (not for usage questions)
> >Subject: Re: [openstack-dev] Looking for help getting git-review
> to work
> >over https
> >
> >
> >
> >This is just going swimmingly.
> >
> >% sudo python setup.py install
> >Download error on
> >https://pypi.python.org/simple/pbr/: [Errno 110] Connection timed
> out --
> >Some packages may not be found!
> >Couldn't find index page for 'pbr' (maybe misspelled?)
> >Download error on
> >https://pypi.python.org/simple/: [Errno 110] Connection timed out
> -- Some
> >packages may not be found!
> >No local packages or download links found for pbr
> >
> >
> >Do I have to do something special to set the proxy for this?  I
> have
> >“http_proxy” and “https_proxy” already set.
> >
> >From: ZZelle [mailto:zze...@gmail.com]
> >
> >Sent: Thursday, June 11, 2015 3:50 PM
> >To: OpenStack Development Mailing List (not for usage questions)
> >Subject: Re: [openstack-dev] Looking for help getting git-review
> to work
> >over https
> >
> >
> >
> >Indeed, the doc[1] is unclear
> >
> >git-review can be installed using: python setup.py install or pip
> install
> >.
> >
> >
> >[1]http://docs.openstack.org/infra/manual/developers.html#accessin
> g-gerrit
> >-over-https
> >
> >
> >
> >On Thu, Jun 11, 2015 at 11:16 PM, KARR, DAVID 
> wrote:
> >
> >I see.  I would guess a footnote on the instructions about this
> would be
> >useful. Is
> >https://github.com/openstack-infra/git-review the proper location
> to get
> >the buildable source?  I don’t see any obvious build instructions
> there.
> >
> >From: ZZelle [mailto:zze...@gmail.com]
> >
> >Sent: Thursday, June 11, 2015 2:01 PM
> >
> >To: OpenStack Development Mailing List (not for usage questions)
> >Subject: 

Re: [openstack-dev] [packaging] RPM Packaging for OpenStack IRC Meeting

2015-06-12 Thread Russell Bryant
On 06/12/2015 12:47 PM, Dirk Müller wrote:
> Hi,
> 
> A couple of packagers from RDO and SUSE met today on IRC to kick off
> brain storming on unified upstream rpm packaging for OpenStack.
> 
> Please note: there are currently two movements going on: RDO would
> like to move their Liberty packaging from github.com and gerrithub to
> openstack gerrit, and RDO and SUSE would like to collaborate on
> unified packaging. For the rest of the mail I'm only talking about the
> latter.
> 
> The agreed goal is that by the time of the Liberty release, we have
> all something that anyone interested like RDO and SUSE can use for
> maintaining the packages for the whole Liberty maintenance lifecycle.
> For the rest of the meeting we were quickly explaining our setups to
> each other and shortly walked over the list of obvious differences
> that we need to work on.
> 
> The meeting minutes have been captured here:
> 
> https://etherpad.openstack.org/p/openstack-rpm-packaging
> 
> We're planning to have a followup meetings to work some more on the issues and
> are currently hanging out on #openstack-rpm-packaging on IRC.

I'm just kind of curious ... as both the RDO and SUSE folks look closer
at this, how big are the differences?  I could see one possible outcome
where the differences are significant enough that it would be less work
overall to just continue to treat them separately.  If instead it seems
the differences are minor enough that combining efforts is a win for
everyone, then that's even better, but I don't see it as the required
outcome here personally.

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Looking for help getting git-review to work over https

2015-06-12 Thread Ian Cordasco
Python has generally awful support for doing HTTPS over an HTTPS proxy.
Can you try doing:

# pip install --proxy http://one.proxy.att.com:<port>

Cheers,
Ian

On 6/12/15, 10:26, "KARR, DAVID"  wrote:

>As I apparently have to sudo this, it would likely be more effective, if
>at all, to specify the proxy on the command line.  Unfortunately,
> it didn’t make any difference:
>
># pip install --proxy "https://one.proxy.att.com:8080"; .
>Processing /home/dk068x/work/git-review
>Complete output from command python setup.py egg_info:
>Download error on https://pypi.python.org/simple/pbr/: [Errno 1]
>_ssl.c:504: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown
>protocol -- Some packages may
> not be found!
>Couldn't find index page for 'pbr' (maybe misspelled?)
>Download error on https://pypi.python.org/simple/: [Errno 1]
>_ssl.c:504: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown
>protocol -- Some packages may not
> be found!
>No local packages or download links found for pbr
>Traceback (most recent call last):
>  File "", line 20, in 
>  File "/tmp/pip-D5jhCD-build/setup.py", line 20, in 
>setuptools.setup(setup_requires=['pbr'], pbr=True)
>  File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
>_setup_distribution = dist = klass(attrs)
>...
>
> 
>I’m providing the appropriate https proxy url, as pip appears to be using
>a https url.
> 
>From: Asselin, Ramy [mailto:ramy.asse...@hp.com]
>
>Sent: Thursday, June 11, 2015 4:35 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] Looking for help getting git-review to work
>over https
>
>
> 
>Try creating/updated ~/.pip/pip.conf
> 
>With contents:
> 
>[global]
>proxy =
>http://your_proxy:port/
> 
>From: KARR, DAVID [mailto:dk0...@att.com]
>
>Sent: Thursday, June 11, 2015 4:16 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] Looking for help getting git-review to work
>over https
>
>
> 
>This is just going swimmingly.
>
>% sudo python setup.py install
>Download error on
>https://pypi.python.org/simple/pbr/: [Errno 110] Connection timed out --
>Some packages may not be found!
>Couldn't find index page for 'pbr' (maybe misspelled?)
>Download error on
>https://pypi.python.org/simple/: [Errno 110] Connection timed out -- Some
>packages may not be found!
>No local packages or download links found for pbr
>
> 
>Do I have to do something special to set the proxy for this?  I have
>“http_proxy” and “https_proxy” already set.
> 
>From: ZZelle [mailto:zze...@gmail.com]
>
>Sent: Thursday, June 11, 2015 3:50 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] Looking for help getting git-review to work
>over https
>
>
> 
>Indeed, the doc[1] is unclear
>
>git-review can be installed using: python setup.py install or pip install
>.
>
>
>[1]http://docs.openstack.org/infra/manual/developers.html#accessing-gerrit
>-over-https
>
>
> 
>On Thu, Jun 11, 2015 at 11:16 PM, KARR, DAVID  wrote:
>
>I see.  I would guess a footnote on the instructions about this would be
>useful. Is
>https://github.com/openstack-infra/git-review the proper location to get
>the buildable source?  I don’t see any obvious build instructions there.
> 
>From: ZZelle [mailto:zze...@gmail.com]
>
>Sent: Thursday, June 11, 2015 2:01 PM
>
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] Looking for help getting git-review to work
>over https
>
>
>
>
> 
>Hi David,
>
>
>Following git config options are supported by git-review
>(https://review.openstack.org/116035)
>
>   git config --global gitreview.scheme https
>   git config --global gitreview.port 443
>
>BUT the feature was merged after 1.24 (it's highlighted by your git
>review -vs)
>
>so the feature is currently only available on the git-review master
>branch (which
>
>is quite stable, i use it every day).
>
> 
>
> 
>
>Cedric/ZZelle@irc
>
> 
>
> 
>On Thu, Jun 11, 2015 at 10:14 PM, KARR, DAVID  wrote:
>
>I followed the instructions for installing and configuring corkscrew,
>similar to what you provided here.  The
> result seems to indicate it did something, but the overall result is the
>same:
>
>2015-06-11 13:07:25.866568 Running: git log --color=never --oneline
>HEAD^1..HEAD
>2015-06-11 13:07:25.869309 Running: git remote
>2015-06-11 13:07:25.872742 Running: git config --get gitreview.username
>No remote set, testing
>ssh://dk0...@review.openstack.org:29418/openstack/horizon.git
>
>2015-06-11 13:07:25.874869 Running: git push --dry-run
>ssh://dk0...@review.openstack.org:29418/openstack/horizon.git
>
> --all
>The authenticity of host '[review.openstack.org
>]:29418
> ()' can't be established.
>RSA key fingerprint is 28:c6:42:b7:44:d2:48:64:c1:3f:3

[openstack-dev] [nova][drivers] hypervisor support matrix: feature "serial console"

2015-06-12 Thread Markus Zoeller
The "serial console" feature was introduced during the Juno cycle and
is an alternative to remote access via VNC, RDP, or SPICE for platforms
which don't support these. The review 180912 adds a new row for this
feature in the hypervisor support matrix [1]. Please have a look to see if you
can make a support statement for the listed drivers, and add a comment.

As a hint where to start if not yet known:

The "nova.conf" needs:
[DEFAULT]
vnc_enabled = False

[serial_console]
enabled = True

Horizon is capable of using that console in the instance details view.

[1] https://review.openstack.org/#/c/180912/

Regards,
Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nominating Joshua Hesketh for infra-root

2015-06-12 Thread Elizabeth K. Joseph
On Thu, Jun 11, 2015 at 9:54 AM, James E. Blair  wrote:
> The Infrastructure program has a unique three-tier team structure:
> contributors (that's all of us!), core members (people with +2 ability
> on infra projects in Gerrit) and root members (people with
> administrative access).  Read all about it here:
>
>   http://docs.openstack.org/infra/system-config/project.html#teams
>
> Joshua has been a valuable member of infra-core for some time, having a
> high degree of familiarity with Zuul and related systems and an
> increasing interest in other operational aspects of the project
> infrastructure.  His eagerness to contribute is presently tempered by
> most of the infra-root team sleeping[1] during large parts of his day.
> I look forward to Joshua being able to approve changes with confidence.
>
> Joshua, if you are interested, please propose your SSH key to
> system-config, and thanks again for all of your work!

Very pleased to see him join our merry band!

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [packaging] RPM Packaging for OpenStack IRC Meeting

2015-06-12 Thread Dirk Müller
Hi,

A couple of packagers from RDO and SUSE met today on IRC to kick off
brainstorming on unified upstream RPM packaging for OpenStack.

Please note: there are currently two movements going on: RDO would
like to move their Liberty packaging from github.com and gerrithub to
openstack gerrit, and RDO and SUSE would like to collaborate on
unified packaging. For the rest of the mail I'm only talking about the
latter.

The agreed goal is that, by the time of the Liberty release, we all have
something that anyone interested, like RDO and SUSE, can use for
maintaining the packages for the whole Liberty maintenance lifecycle.
For the rest of the meeting we quickly explained our setups to
each other and briefly walked over the list of obvious differences
that we need to work on.

The meeting minutes have been captured here:

https://etherpad.openstack.org/p/openstack-rpm-packaging

We're planning to have followup meetings to work some more on the issues, and we
are currently hanging out in #openstack-rpm-packaging on IRC.


Greetings,
Dirk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Looking for help getting git-review to work over https

2015-06-12 Thread KARR, DAVID
Good idea, but that also made no difference.  In any case, I don’t think this 
is attempting a direct ssh connection, so I don’t see how setting up an ssh 
proxy would make any difference.  I tried this as “myself” and also with “sudo 
–E bash” with the same result:
$ pip install .
Processing /home/dk068x/work/git-review
Complete output from command python setup.py egg_info:
Download error on https://pypi.python.org/simple/pbr/: [Errno 1] 
_ssl.c:504: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol 
-- Some packages may not be found!
Couldn't find index page for 'pbr' (maybe misspelled?)
Download error on https://pypi.python.org/simple/: [Errno 1] _ssl.c:504: 
error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol -- Some 
packages may not be found!
No local packages or download links found for pbr
Traceback (most recent call last):
  File "", line 20, in 
  File "/tmp/pip-DQrJTz-build/setup.py", line 20, in 
setuptools.setup(setup_requires=['pbr'], pbr=True)


From: Paul Michali [mailto:p...@michali.net]
Sent: Friday, June 12, 2015 9:31 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Looking for help getting git-review to work over 
https

David,
FYI, I used corkscrew with pypi as well with another .ssh/config line of:

Host pypi.python.org
ProxyCommand corkscrew <proxy-host> 80 %h %p



On Fri, Jun 12, 2015 at 11:29 AM KARR, DAVID 
mailto:dk0...@att.com>> wrote:
As I apparently have to sudo this, it would likely be more effective, if at 
all, to specify the proxy on the command line.  Unfortunately, it didn’t make 
any difference:
# pip install --proxy "https://one.proxy.att.com:8080" .
Processing /home/dk068x/work/git-review
Complete output from command python setup.py egg_info:
Download error on https://pypi.python.org/simple/pbr/: [Errno 1] 
_ssl.c:504: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol 
-- Some packages may not be found!
Couldn't find index page for 'pbr' (maybe misspelled?)
Download error on https://pypi.python.org/simple/: [Errno 1] _ssl.c:504: 
error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol -- Some 
packages may not be found!
No local packages or download links found for pbr
Traceback (most recent call last):
  File "", line 20, in 
  File "/tmp/pip-D5jhCD-build/setup.py", line 20, in 
setuptools.setup(setup_requires=['pbr'], pbr=True)
  File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
...

I’m providing the appropriate https proxy url, as pip appears to be using a 
https url.

From: Asselin, Ramy [mailto:ramy.asse...@hp.com]
Sent: Thursday, June 11, 2015 4:35 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Looking for help getting git-review to work over 
https

Try creating/updated ~/.pip/pip.conf

With contents:

[global]
proxy = http://your_proxy:port/

From: KARR, DAVID [mailto:dk0...@att.com]
Sent: Thursday, June 11, 2015 4:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Looking for help getting git-review to work over 
https

This is just going swimmingly.
% sudo python setup.py install
Download error on https://pypi.python.org/simple/pbr/: [Errno 110] Connection 
timed out -- Some packages may not be found!
Couldn't find index page for 'pbr' (maybe misspelled?)
Download error on https://pypi.python.org/simple/: [Errno 110] Connection timed 
out -- Some packages may not be found!
No local packages or download links found for pbr

Do I have to do something special to set the proxy for this?  I have 
“http_proxy” and “https_proxy” already set.

From: ZZelle [mailto:zze...@gmail.com]
Sent: Thursday, June 11, 2015 3:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Looking for help getting git-review to work over 
https

Indeed, the doc[1] is unclear
git-review can be installed using: python setup.py install or pip install .


[1]http://docs.openstack.org/infra/manual/developers.html#accessing-gerrit-over-https

On Thu, Jun 11, 2015 at 11:16 PM, KARR, DAVID 
mailto:dk0...@att.com>> wrote:
I see.  I would guess a footnote on the instructions about this would be 
useful. Is https://github.com/openstack-infra/git-review the proper location to 
get the buildable source?  I don’t see any obvious build instructions there.

From: ZZelle [mailto:zze...@gmail.com]
Sent: Thursday, June 11, 2015 2:01 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Looking for help getting git-review to work over 
https

Hi David,


Following git config options are supported by git-review 
(https://review.openstack.org/116035)
   git config --global gitreview.scheme https
  

Re: [openstack-dev] [Neutron] Issue with pymysql

2015-06-12 Thread Mike Bayer



On 6/12/15 11:37 AM, Mike Bayer wrote:



On 6/11/15 9:32 PM, Eugene Nikanorov wrote:

Hi neutrons,

I'd like to draw your attention to an issue discovered by rally gate job:
http://logs.openstack.org/96/190796/4/check/gate-rally-dsvm-neutron-rally/7a18e43/logs/screen-q-svc.txt.gz?level=TRACE

I don't have bandwidth to take a deep look at it, but first 
impression is that it is some issue with nested transaction support 
either on sqlalchemy or pymysql side.
Also, besides errors with nested transactions, there are a lot of 
Lock wait timeouts.


I think it makes sense to start with reverting the patch that moves 
to pymysql.
My immediate reaction is that this is perhaps a concurrency-related 
issue; because PyMySQL is pure python and allows for full blown 
eventlet monkeypatching, I wonder if somehow the same PyMySQL 
connection is being used in multiple contexts. E.g. one greenlet 
starts up a savepoint, using identifier "_3" which is based on a 
counter that is local to the SQLAlchemy Connection, but then another 
greenlet shares that PyMySQL connection somehow with another 
SQLAlchemy Connection that uses the same identifier.
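
To make that concrete, a tiny sketch (not Neutron code; the DSN and table are
placeholders) of where those per-Connection savepoint identifiers come from:

    from sqlalchemy import create_engine, text

    engine = create_engine("mysql+pymysql://user:pass@localhost/neutron")  # placeholder DSN

    def insert_sg(name):
        with engine.connect() as conn:
            with conn.begin():               # outer transaction
                with conn.begin_nested():    # emits SAVEPOINT sa_savepoint_1, _2, ...
                    # The counter behind those names lives on this Connection object,
                    # so interleaving two Connections over one PyMySQL socket can
                    # RELEASE a savepoint name the server never saw.
                    conn.execute(
                        text("INSERT INTO securitygroups (name) VALUES (:n)"),
                        {"n": name})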


reading more of the log, it seems the main issue is just that there's a 
deadlock on inserting into the securitygroups table.  The deadlock on 
insert can be because of an index being locked.



I'd be curious to know how many greenlets are running concurrently here, 
and what the overall transaction looks like within the operation that is 
failing here (e.g. does each transaction insert multiple rows into 
securitygroups?  that would make a deadlock seem more likely).
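
For anyone who wants to test that hypothesis, here is a small diagnostic
sketch (not Neutron code; the engine URL is purely illustrative) that logs
which greenthread checks out which DBAPI connection. The same connection id
showing up under two greenthreads at once would support the "shared
connection" theory, and the log also answers how many greenthreads are
hitting the database concurrently:

import eventlet
eventlet.monkey_patch()

import logging

from eventlet import greenthread
from sqlalchemy import create_engine, event

LOG = logging.getLogger("conn-audit")
logging.basicConfig(level=logging.INFO)

# assumption: point this at the same database the service uses
engine = create_engine("mysql+pymysql://user:pass@localhost/neutron")


@event.listens_for(engine, "checkout")
def _log_checkout(dbapi_conn, connection_record, connection_proxy):
    # the pool "checkout" event fires every time a DBAPI connection is
    # handed out to application code
    LOG.info("dbapi connection %s checked out by greenthread %s",
             id(dbapi_conn), id(greenthread.getcurrent()))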


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Looking for help getting git-review to work over https

2015-06-12 Thread Paul Michali
David,

FYI, I used corkscrew with pypi as well with another .ssh/config line of:

Host pypi.python.org
ProxyCommand corkscrew  80 %h %p



On Fri, Jun 12, 2015 at 11:29 AM KARR, DAVID  wrote:

>   As I apparently have to sudo this, it would likely be more effective,
> if at all, to specify the proxy on the command line.  Unfortunately, it
> didn’t make any difference:
>
> # pip install --proxy "https://one.proxy.att.com:8080" .
>
> Processing /home/dk068x/work/git-review
>
> Complete output from command python setup.py egg_info:
>
> Download error on https://pypi.python.org/simple/pbr/: [Errno 1]
> _ssl.c:504: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown
> protocol -- Some packages may not be found!
>
> Couldn't find index page for 'pbr' (maybe misspelled?)
>
> Download error on https://pypi.python.org/simple/: [Errno 1]
> _ssl.c:504: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown
> protocol -- Some packages may not be found!
>
> No local packages or download links found for pbr
>
> Traceback (most recent call last):
>
>   File "", line 20, in 
>
>   File "/tmp/pip-D5jhCD-build/setup.py", line 20, in 
>
> setuptools.setup(setup_requires=['pbr'], pbr=True)
>
>   File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
>
> _setup_distribution = dist = klass(attrs)
>
> ...
>
>
>
> I’m providing the appropriate https proxy url, as pip appears to be using
> a https url.
>
>
>
> *From:* Asselin, Ramy [mailto:ramy.asse...@hp.com]
> *Sent:* Thursday, June 11, 2015 4:35 PM
>
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] Looking for help getting git-review to
> work over https
>
>
>
> Try creating/updating ~/.pip/pip.conf
>
>
>
> With contents:
>
>
>
> [global]
>
> proxy = http://your_proxy:port/
>
>
>
> *From:* KARR, DAVID [mailto:dk0...@att.com ]
> *Sent:* Thursday, June 11, 2015 4:16 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] Looking for help getting git-review to
> work over https
>
>
>
> This is just going swimmingly.
>
> % sudo python setup.py install
>
> Download error on https://pypi.python.org/simple/pbr/: [Errno 110]
> Connection timed out -- Some packages may not be found!
>
> Couldn't find index page for 'pbr' (maybe misspelled?)
>
> Download error on https://pypi.python.org/simple/: [Errno 110] Connection
> timed out -- Some packages may not be found!
>
> No local packages or download links found for pbr
>
>
>
> Do I have to do something special to set the proxy for this?  I have
> “http_proxy” and “https_proxy” already set.
>
>
>
> *From:* ZZelle [mailto:zze...@gmail.com ]
> *Sent:* Thursday, June 11, 2015 3:50 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] Looking for help getting git-review to
> work over https
>
>
>
> Indeed, the doc[1] is unclear
>
> git-review can be installed using: python setup.py install or pip install .
>
>
>
> [1]
> http://docs.openstack.org/infra/manual/developers.html#accessing-gerrit-over-https
>
>
>
> On Thu, Jun 11, 2015 at 11:16 PM, KARR, DAVID  wrote:
>
>  I see.  I would guess a footnote on the instructions about this would be
> useful. Is https://github.com/openstack-infra/git-review the proper
> location to get the buildable source?  I don’t see any obvious build
> instructions there.
>
>
>
> *From:* ZZelle [mailto:zze...@gmail.com]
> *Sent:* Thursday, June 11, 2015 2:01 PM
>
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] Looking for help getting git-review to
> work over https
>
>
>
> Hi David,
>
>
> Following git config options are supported by git-review (
> https://review.openstack.org/116035)
>
>git config --global gitreview.scheme https
>git config --global gitreview.port 443
>
> BUT the feature was merged after 1.24 (it's highlighted by your git review
> -vs)
> so the feature is currently only available on the git-review master branch
> (which
>
> is quite stable, i use it every day).
>
>
>
>
>
> Cedric/ZZelle@irc
>
>
>
>
>
> On Thu, Jun 11, 2015 at 10:14 PM, KARR, DAVID  wrote:
>
>   I followed the instructions for installing and configuring corkscrew,
> similar to what you provided here.  The result seems to indicate it did
> something, but the overall result is the same:
>
> 2015-06-11 13:07:25.866568 Running: git log --color=never --oneline
> HEAD^1..HEAD
>
> 2015-06-11 13:07:25.869309 Running: git remote
>
> 2015-06-11 13:07:25.872742 Running: git config --get gitreview.username
>
> No remote set, testing ssh://
> dk0...@review.openstack.org:29418/openstack/horizon.git
>
> 2015-06-11 13:07:25.874869 Running: git push --dry-run ssh://
> dk0...@review.openstack.org:29418/openstack/horizon.git --all
>
> The authenticity of host '[review.openstack.org]:29418 ( proxy command>)' can't be established.
>
> RSA key fingerprint is 28:c

Re: [openstack-dev] [OpenStack-Infra] Need help! Zuul can not connect to port 29418 of review.openstack.org

2015-06-12 Thread Jeremy Stanley
On 2015-06-12 09:04:09 + (+), liuxinguo wrote:
> Recently our CI can not connect to port 29418 of
> review.openstack.org. Following are the failuer
> message, is there anyone know the reasion why our CI can not
> cennect to 29418 of review.openstack.org?

See the several other threads on this mailing list from the past few
days. It seems that if your CI system is in mainland China, the
government there has very recently started blocking egress to
29418/tcp.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Issue with pymysql

2015-06-12 Thread Armando M.
This is exactly what I did in [1].

Cheers,
Armando

[1]
https://review.openstack.org/#/q/status:open+branch:master+topic:neutron-unstable,n,z

On 12 June 2015 at 09:00, Clint Byrum  wrote:

> Excerpts from Chris Dent's message of 2015-06-12 03:47:02 -0700:
> > On Fri, 12 Jun 2015, Joe Gordon wrote:
> >
> > > Glad to see us catch these issues early.
> >
> > Yes! CI is doing exactly the job it is supposed to be doing here. It
> > is finding bugs in code. When that happens we should fix the bugs,
> > not revert. Even if it stalls other stuff.
> >
>
> I'd like to offer an alternative path. Please do remember that people
> deploy OpenStack from trunk, and we want them to do that, because if an
> organization is focused and prepared, continuous deployment will produce
> a vibrant OpenStack experience for users, and will keep those operators
> in a position to contribute.
>
> We also have hundreds of developers who need the system to work well.
> There are maybe only a handful who will work on this issue. It does not
> make any sense to me to stall hundreds of developers while a handful
> work through something that may take weeks.
>
> So, consider reverting _quickly_, landing a test that fails at least more
> reliably with the current patch, and then have that handful of users who
> are capable of fixing it work in their own branch until the problem is
> resolved to a level that it won't stand in the way of progress for CD
> operators and the development community at large.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [fuel] more collaboration request

2015-06-12 Thread Emilien Macchi


On 06/12/2015 11:41 AM, Sergii Golovatiuk wrote:
> Hi,
> 
> I have read all this thread trying to understand what's going on. It has
> many emotions but very few logical proposals. Let me try to sum up and
> make some proposals.
> 
> * A bug is reported in both Fuel Library and the Puppet module having
> trouble. A patch is provided in Fuel Library (your fork of Puppet
> OpenStack modules) but not in Puppet upstream module. That means you fix
> the bug for Fuel, and not for Puppet OpenStack community. It does not
> happen all the time but quite often.
> 
> 
> I agree, Fuel Library had such issues in the past. As Dmitry Borodaenko
> noted we changed the policy and follow it. Fuel Core reviewers don't
> accept the patch if it's not submitted to upstream first. Fuel Library
> does all its best to have less divergence with community.
> 
>  
> 
> * A patch is submitted in a Puppet module and quite often does not land
> because there is no activity, no tests or is abandonned later because
> fixed in Fuel Library. I've noticed the patch is fixed in Fuel Library
> though.
> 
> 
> John F. Kennedy said:  
> "Ask not what your country can do for you — ask what you can do for your
> country"
> 
> IMO, it's a communication issue and related more to the Puppet OpenStack
> community than to the Fuel Library folks. In Fuel Library when patch from
> external contributor has some problems we cherry-pick it, update a
> patchset to succeed our expectations. Meanwhile we contact the
> contributor over IRC or email and explain why we did that. That's very
> important as contributor may not know all architectural details. He may
> not know the details of CI tests or details how we test. That's the good
> attitude to help newcomers. So they will be naturally involved to
> community. Yes, it takes time for Fuel Library folks, but we continue
> doing that way as we think that communication with contributor is a key
> of success.

Adding someone as a reviewer in Gerrit is not enough. Communicating with the
Puppet OpenStack group on IRC, on the right channel, would be good, as is
quite often done in other OpenStack projects.

> I have looked over patches in progress. I may be wrong but I didn't find
> that Puppet OpenStack community updated patch to pass CI. It's not
> complex to cherry-pick and fix failed tests. It's also not complex to
> contact person over IRC or in email to explain what needs to be done.
> Trust me, usually it takes once. Smart creatives are clever enough not
> to make same mistakes twice.
> 
>  RAW copy/paste between upstream modules code and your forks. In term
> of Licensing, I'm even not sure you have the right to do that (I'm not a
> CLA expert though) but well... in term of authorship and statistics on
> code, I'm not sure it's fair. Using submodules with custom patches would
> have been great to respect the authors who created the original code and
> you could have personalize the manifests.
> 
> 
> This happens. People fork projects when they have some communication
> issues or different architectural views. However, I like Compiz and
> Beryl story. After some time both communities negotiated the issues and
> merged the projects. Merging projects and making some concessions is more
> respectful in terms of attitude between smart creatives.
> 
> We as a community don't do a great job watching bugs, so personally
> I'd prefer that fuel developers just push patches, filing a bug too
> if you want. (Note: we do need to improve our bug tracking!) 
> 
> 
> Thank you Matt for bringing this up. That definitely needs
> improvement. I asked people in Fuel Library in person if they know how
> to open a bug in Puppet OpenStack community. They also said "We usually
> create a review and add someone's from community to review". Dmitry
> Borodaenko already pointed to "How To contribute" guide and policy that
> everyone in Fuel follows. It helps a lot for our external contributors.
> Also we do strictly require "Closes-Bug" or "Implements: blueprint" in
> every review.
> 
> There are only a few bugs on https://bugs.launchpad.net/puppet-openstack/+bugs
>  Though, there is no assignee or milestones. Some bugs can be
> invalidated. Some bugs require additional info but nobody asked to
> provide that info.

https://bugs.launchpad.net/puppet-openstack/+bugs is for
puppet-openstack, which is deprecated in Juno.

You should look https://launchpad.net/openstack-puppet-modules which
contains mostly triaged bugs.

> Velocity of review is also a big problem for Fuel. Sometimes it takes
> 3-6 months even when all tests are written and tested. That happened
> when we worked on Juno where Fuel library submitted many patches before
> official packages were released. This should be improved also.

"This should be improved also" > sorry but we can't do magic here.
Puppet OpenStack folks is already working a lot and we do our best to
clean the upstream patches backlog.

3-6 months rarely h

[openstack-dev] [nova] How to microversion API code which is not in API layer

2015-06-12 Thread Chen CH Ji

Hi
 We have [1] in the db layer and it's directly used by the API
layer; the filters come directly from the client's input.
 In this case, when doing [2] or similar changes, do we need to
consider microversion usage when we change options?
 Thanks

[1]
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L4440
[2] https://review.openstack.org/#/c/144883
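
For illustration only (this is not Nova code, and 'locked_by' below is a
made-up filter key): one common way to reason about it is that the db call
itself is not microversioned, but the set of filter keys the API layer is
willing to accept and pass down is, so a change like [2] would be gated on
the requested version before the filters ever reach the db layer. A rough
sketch of that idea:

# not Nova code -- purely a sketch of version-gated filter keys
BASE_FILTER_KEYS = {'project_id', 'host', 'deleted'}
FILTER_KEYS_BY_VERSION = {
    (2, 26): {'locked_by'},   # hypothetical key accepted from this version on
}


def allowed_filter_keys(requested_version):
    keys = set(BASE_FILTER_KEYS)
    for min_version, extra_keys in FILTER_KEYS_BY_VERSION.items():
        if requested_version >= min_version:
            keys |= extra_keys
    return keys


def build_db_filters(requested_version, query_params):
    # drop (or reject with a 400) keys the requested version does not know,
    # so the db layer sees a stable set of options per microversion
    allowed = allowed_filter_keys(requested_version)
    return {k: v for k, v in query_params.items() if k in allowed}


# build_db_filters((2, 1), {'host': 'c1', 'locked_by': 'admin'})
#   -> {'host': 'c1'}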

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Issue with pymysql

2015-06-12 Thread Clint Byrum
Excerpts from Chris Dent's message of 2015-06-12 03:47:02 -0700:
> On Fri, 12 Jun 2015, Joe Gordon wrote:
> 
> > Glad to see us catch these issues early.
> 
> Yes! CI is doing exactly the job it is supposed to be doing here. It
> is finding bugs in code. When that happens we should fix the bugs,
> not revert. Even if it stalls other stuff.
> 

I'd like to offer an alternative path. Please do remember that people
deploy OpenStack from trunk, and we want them to do that, because if an
organization is focused and prepared, continuous deployment will produce
a vibrant OpenStack experience for users, and will keep those operators
in a position to contribute.

We also have hundreds of developers who need the system to work well.
There are maybe only a handful who will work on this issue. It does not
make any sense to me to stall hundreds of developers while a handful
work through something that may take weeks.

So, consider reverting _quickly_, landing a test that fails at least more
reliably with the current patch, and then have that handful of users who
are capable of fixing it work in their own branch until the problem is
resolved to a level that it won't stand in the way of progress for CD
operators and the development community at large.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [taskflow] Returning information from reverted flow

2015-06-12 Thread Joshua Harlow

Dulko, Michal wrote:

Hi,

In Cinder we had merged a complicated piece of code[1] to be able to
return something from flow that was reverted. Basically outside we
needed an information if volume was rescheduled or not. Right now this
is done by injecting information needed into exception thrown from the
flow. Another idea was to use notifications mechanism of TaskFlow. Both
ways are rather workarounds than real solutions.


Unsure about notifications being a workaround (basically u are notifying 
to some other entities that rescheduling happened, which seems like 
exactly what it was made for) but I get the point ;)




I wonder if TaskFlow couldn’t provide a mechanism to mark stored element
to not be removed when revert occurs. Or maybe another way of returning
something from reverted flow?

Any thoughts/ideas?


I have a couple, I'll make some paste(s) and see what people think,

How would this look (as pseudo-code or other) to you, what would be your 
ideal, and maybe we can work from there (maybe u could do some paste(s) 
too and we can prototype it), just storing information that is returned 
from revert() somewhere? Or something else? There has been talk about 
task 'local storage' (or something like that/along those lines) that 
could also be used for this similar purpose.
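
To make the question concrete, here is a rough sketch (not the Cinder code
from [1]; the task and names are made up) of the "pass a mutable object
through the store" variant: the dict is injected before the run, revert()
writes into it, and it survives because reverting only rolls back task
results, not objects the caller still holds:

from taskflow import engines, task
from taskflow.patterns import linear_flow


class ScheduleVolume(task.Task):
    def execute(self, volume_id, revert_info):
        # pretend scheduling failed so the whole flow reverts
        raise RuntimeError("no valid host for %s" % volume_id)

    def revert(self, volume_id, revert_info, **kwargs):
        # record what the caller needs to know about the revert
        revert_info['rescheduled'] = True
        revert_info['volume_id'] = volume_id


revert_info = {}
flow = linear_flow.Flow('create-volume').add(ScheduleVolume())
try:
    engines.run(flow, store={'volume_id': 'vol-1',
                             'revert_info': revert_info})
except Exception:
    # the original failure is re-raised once the revert has finished
    pass
print(revert_info)  # {'rescheduled': True, 'volume_id': 'vol-1'}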




[1] https://review.openstack.org/#/c/154920/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [fuel] more collaboration request

2015-06-12 Thread Sergii Golovatiuk
Hi,

I have read all this thread trying to understand what's going on. It has
many emotions but very few logical proposals. Let me try to sum up and make
some proposals.

* A bug is reported in both Fuel Library and the Puppet module having
> trouble. A patch is provided in Fuel Library (your fork of Puppet
> OpenStack modules) but not in Puppet upstream module. That means you fix
> the bug for Fuel, and not for Puppet OpenStack community. It does not
> happen all the time but quite often.


I agree, Fuel Library had such issues in the past. As Dmitry Borodaenko
noted we changed the policy and follow it. Fuel Core reviewers don't accept
the patch if it's not submitted to upstream first. Fuel Library does all
its best to have less divergence with community.



> * A patch is submitted in a Puppet module and quite often does not land
> because there is no activity, no tests or is abandonned later because
> fixed in Fuel Library. I've noticed the patch is fixed in Fuel Library
> though.


John F. Kennedy said:
"Ask not what your country can do for you — ask what you can do for your
country"

IMO, it's a communication issue and related more to the Puppet OpenStack
community than to the Fuel Library folks. In Fuel Library when patch from
external contributor has some problems we cherry-pick it, update a patchset
to meet our expectations. Meanwhile we contact the contributor over IRC
or email and explain why we did that. That's very important as contributor
may not know all architectural details. He may not know the details of CI
tests or details how we test. That's the good attitude to help newcomers.
So they will be naturally involved to community. Yes, it takes time for
Fuel Library folks, but we continue doing that way as we think that
communication with contributor is a key of success.

I have looked over patches in progress. I may be wrong but I didn't find
that Puppet OpenStack community updated patch to pass CI. It's not complex
to cherry-pick and fix failed tests. It's also not complex to contact
person over IRC or in email to explain what needs to be done. Trust me,
usually it takes once. Smart creatives are clever enough not to make same
mistakes twice.

 RAW copy/paste between upstream modules code and your forks. In term
> of Licensing, I'm even not sure you have the right to do that (I'm not a
> CLA expert though) but well... in term of authorship and statistics on
> code, I'm not sure it's fair. Using submodules with custom patches would
> have been great to respect the authors who created the original code and
> you could have personalize the manifests.


This happens. People fork projects when they have some communication issues
or different architectural views. However, I like Compiz and Beryl story.
After some time both communities negotiated the issues and merged the
projects. Merging projects and making some concessions is more respectful in
terms of attitude between smart creatives.

We as a community don't do a great job watching bugs, so personally I'd
> prefer that fuel developers just push patches, filing a bug too if you
> want. (Note: we do need to improve our bug tracking!)


Thank you Matt for bringing this up. That definitely needs improvement. I
asked people in Fuel Library in person if they know how to open a bug in
Puppet OpenStack community. They also said "We usually create a review and
add someone from the community to review". Dmitry Borodaenko already pointed
to "How To contribute" guide and policy that everyone in Fuel follows. It
helps a lot for our external contributors. Also we do strictly require
"Closes-Bug" or "Implements: blueprint" in every review.

There are only a few bugs on https://bugs.launchpad.net/puppet-openstack/+bugs
 Though, there are no assignees or milestones. Some bugs can be invalidated.
Some bugs require additional info but nobody asked to provide that info.

Velocity of review is also a big problem for Fuel. Sometimes it takes 3-6
months even when all tests are written and tested. That happened when we
worked on Juno where Fuel library submitted many patches before official
packages were released. This should be improved also.


Going back to resolution and action items:

We all, as smart creatives, have our own vision. As Emilien mentioned, Fuel
developers don't participate in weekly meetings or mailing lists. That's
true :( Though we participate in the summits and follow the best practices in
code styling and test coverage. I believe that improved communication
may resolve such issues. If we both start involving people in reviews and
talking personally over IRC, that will be a big win for both projects. I
believe we'll have synergy so there will be no divergence. Fuel as well as
other projects like TripleO will use upstream manifests from master without
any modifications. The developers will work on enhancement functionality
speeding up the development process.

From the Fuel side I see that some engineers will be involved in the review
process. They will participate in weekly me

[openstack-dev] [Fuel] 7.0 (master branch) ISO nightly builds are available

2015-06-12 Thread Aleksandra Fedorova
Hi, everyone,

as we are heading to Fuel 6.1 release, we went through 6.1 Hard Code
Freeze milestone and created stable/6.1 branch in stackforge/fuel-*
repositories [1]

From now on our master branch (which is the future 7.0) is open for new
features and we've enabled nightly builds for it.

You can download latest 7.0 ISO from our public CI service:
https://ci.fuel-infra.org

Note that currently 7.0 Fuel ISO still uses OpenStack Juno.

Fuel 7.0 specs are available in stackforge/fuel-specs repository [2]

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-June/066152.html
[2] 
https://review.openstack.org/#/q/status:open+project:stackforge/fuel-specs,n,z

-- 
Aleksandra Fedorova
Fuel Devops Engineer
bookwar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Issue with pymysql

2015-06-12 Thread Mike Bayer



On 6/12/15 6:42 AM, Sean Dague wrote:

On 06/12/2015 06:31 AM, Joe Gordon wrote:


On Fri, Jun 12, 2015 at 7:13 PM, Sean Dague mailto:s...@dague.net>> wrote:

 On 06/12/2015 01:17 AM, Salvatore Orlando wrote:
 > It is however interesting that both "lock wait timeouts" and "missing
 > savepoint" errors occur in operations pertaining the same table -
 > securitygroups in this case.
 > I wonder if the switch to pymysl has not actually uncovered some other
 > bug in Neutron.
 >
 > I have no opposition to a revert, but since this will affect most
 > projects, it's probably worth finding some time to investigate what is
 > triggering this failure when sqlalchemy is backed by pymysql before
 > doing that.

 Right, we knew that the db driver would move some bugs around because
 we're no longer blocking python processes on db access (so there used to
 be a pseudo synchronization point before you ever got to the database).

 My feeling is this should be looked into before it is straight reverted
 (are jobs failing beyond Rally?). There are a number of benefits with


A quick look at logstash.openstack.org 
shows some of the stacktraces are happening in other neutron jobs as well.
  


 the new driver, and we can't get to python3 with the old one.


Agreed, pymysql is not to blame; it looks like we have hit some neutron
issues.  So let's try to fix neutron. Just because neutron reverts the
default sql connector doesn't mean operators won't end up trying pymysql.
  



 Rally failing is also an indicator that just such an implicit lock was
 behavior that was depended on before, because it will be sending a bunch
 of similar operations all at once as a kind of stress test. It would
 tend to expose issues like this first.


Glad to see us catch these issues early.

So, looking a little deeper, this is also showing up across the board as
well. I filed this bug - https://bugs.launchpad.net/neutron/+bug/1464612.

I'm going to trigger and push the revert, as it has blocked fixes for
stable/kilo from being able to merge code (so it's cascading).
Please keep me very deeply in the loop on this because it will save 
everyone a lot of time if I can decipher the interaction with SQLAlchemy 
internals for folks.







We can decide if we bring this back for all non-neutron jobs once things
are working again (it lives behind a config var in devstack, so easy to
set it per job).

-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Issue with pymysql

2015-06-12 Thread Mike Bayer



On 6/11/15 9:32 PM, Eugene Nikanorov wrote:

Hi neutrons,

I'd like to draw your attention to an issue discovered by rally gate job:
http://logs.openstack.org/96/190796/4/check/gate-rally-dsvm-neutron-rally/7a18e43/logs/screen-q-svc.txt.gz?level=TRACE

I don't have bandwidth to take a deep look at it, but first impression 
is that it is some issue with nested transaction support either on 
sqlalchemy or pymysql side.
Also, besides errors with nested transactions, there are a lot of Lock 
wait timeouts.


I think it makes sense to start with reverting the patch that moves to 
pymysql.
My immediate reaction is that this is perhaps a concurrency-related 
issue; because PyMySQL is pure python and allows for full blown eventlet 
monkeypatching, I wonder if somehow the same PyMySQL connection is being 
used in multiple contexts.  E.g. one greenlet starts up a savepoint, 
using identifier "_3" which is based on a counter that is local to the 
SQLAlchemy Connection, but then another greenlet shares that PyMySQL 
connection somehow with another SQLAlchemy Connection that uses the same 
identifier.


I'm not saying this is a bug in PyMySQL or Eventlet necessarily, it 
could be a bug in Neutron itself, since none of this code has ever been 
used with a true context-switching greenlet environment at the database 
connection level.
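
For contrast, here is a sketch of the pattern that should be safe under
eventlet (again not Neutron code; the URL, credentials and table are
illustrative): every greenthread checks out its own connection from the pool,
and therefore gets its own savepoint counter, instead of sharing one
Connection object:

import eventlet
eventlet.monkey_patch()

import sqlalchemy as sa

# assumption: a scratch database reachable with these illustrative credentials
engine = sa.create_engine("mysql+pymysql://user:pass@localhost/scratch",
                          pool_size=10)


def insert_security_group(sg_id):
    # engine.connect() hands this greenthread its own DBAPI connection,
    # so the SAVEPOINT identifiers used below are private to it
    with engine.connect() as conn:
        with conn.begin():
            with conn.begin_nested():
                conn.execute(
                    sa.text("INSERT INTO securitygroups (id) VALUES (:i)"),
                    {"i": sg_id})


threads = [eventlet.spawn(insert_security_group, i) for i in range(5)]
[t.wait() for t in threads]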








Thanks,
Eugene.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [TC] Discussion: changing Ironic's release model

2015-06-12 Thread Jim Rollenhagen
On Thu, May 28, 2015 at 12:55:50PM -0700, Jim Rollenhagen wrote:
> [snip]
> 
> I also put an informational spec about this change up in the
> ironic-specs repo: https://review.openstack.org/#/c/185171/. My goal was
> to discuss this in the spec, but the mailing list is fine too. There are
> some unanswered questions in that review that we should make sure we
> cover.

I've incorporated feedback from this thread and the spec review into a
new version of this spec. Thanks to everyone for the great feedback and
support, keep it coming. :)

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][openstack-ansible] Proposal to move project openstack-ansible

2015-06-12 Thread Kevin Carter
Hello TC members and fellow stackers,

The `os-ansible-deployment`[0] project has submitted a review to change the
Governance of the project[1]. Our community of developers and deployers have
discussed the move at length, in IRC/meetings and on etherpad[2], and believe
that we're ready to be considered for a move to "big tent". At this time, we
would like to formally ask the TC to consider our candidacy to be an official
OpenStack project.

Thank you

--

Kevin Carter

[0] https://github.com/stackforge/os-ansible-deployment
[1] https://review.openstack.org/#/c/191105/
[2] https://etherpad.openstack.org/p/osad-openstack-naming

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [fuel] more collaboration request

2015-06-12 Thread James Bottomley
On Fri, 2015-06-12 at 02:43 -0700, Dmitry Borodaenko wrote:
> On Thu, Jun 11, 2015 at 11:43:09PM -0400, Emilien Macchi wrote:
> > What about code history and respect of commit ownership?
> > I'm personally wondering if it's fair to copy/paste several thousands of
> > lines of code from another Open-Source project without asking to the
> > community or notifying the authors before. I know it's Open-Source and
> > Apache 2.0 but well... :-)
> 
> Being able to copy code without having to ask for permission is exactly
> what Free Software (and more recently, Open Source) is for.

Copy and Paste forking of code into compatibly licenced code (without
asking permission) is legally fine as long as you observe the licence,
but it comes with a huge technical penalty:

 1. Copy and Paste forks can't share patches: a patch for one has to
be manually applied to the other.  The amount of manual
intervention required grows as the forks move out of sync.
 2. Usually one fork gets more attention than the other, so the
patch problem of point 1 eventually causes the less attended
fork to become unmaintainable (or if not patched, eventually
unusable).
 3. In the odd chance both forks receive equal attention, you're
still expending way over 2x the effort you would have on a
single code base.

There's no rule of thumb for this: we all paste snippets (pieces of code
of up to around 10 lines or so).  Sometimes these snippets contain
errors and suddenly hundreds of places need fixing.   The way around
this problem is to share code, either by inclusion, modularity or
linking.  The reason we paste snippets is because sharing for them is
enormous effort.  However, as the size of the paste grows, so does the
fork penalty and it becomes advantageous to avoid it and the effort of
sharing the code looks a lot less problematic.

Even in the case where the fork is simply "patch the original for bug
fixes and some new functionality", the fork penalty rules apply.

The data that supports all of this came from Red Hat and SUSE.  The end
of the 2.4 kernel release cycle for them was a disaster with patch sets
larger than the actual kernel itself.  Sorting through the resulting
rubble is where the "upstream first" policy actually came from.

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Looking for help getting git-review to work over https

2015-06-12 Thread KARR, DAVID
As I apparently have to sudo this, it would likely be more effective, if at 
all, to specify the proxy on the command line.  Unfortunately, it didn’t make 
any difference:
# pip install --proxy "https://one.proxy.att.com:8080" .
Processing /home/dk068x/work/git-review
Complete output from command python setup.py egg_info:
Download error on https://pypi.python.org/simple/pbr/: [Errno 1] 
_ssl.c:504: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol 
-- Some packages may not be found!
Couldn't find index page for 'pbr' (maybe misspelled?)
Download error on https://pypi.python.org/simple/: [Errno 1] _ssl.c:504: 
error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol -- Some 
packages may not be found!
No local packages or download links found for pbr
Traceback (most recent call last):
  File "", line 20, in 
  File "/tmp/pip-D5jhCD-build/setup.py", line 20, in 
setuptools.setup(setup_requires=['pbr'], pbr=True)
  File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
...

I’m providing the appropriate https proxy url, as pip appears to be using a 
https url.
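
One thing worth trying: that "unknown protocol" error is what OpenSSL reports
when a client speaks TLS to something that answers in plain HTTP, and most
corporate proxies listen for plain HTTP on their port even when they tunnel
https traffic, so the proxy URL itself usually wants an http:// scheme. A
quick way to sanity-check this from Python (the proxy host is taken from the
command above, everything else is illustrative):

import requests

proxies = {
    # note the http:// scheme for the proxy itself; https:// destinations
    # are still tunneled through it via CONNECT
    "http": "http://one.proxy.att.com:8080",
    "https": "http://one.proxy.att.com:8080",
}
resp = requests.get("https://pypi.python.org/simple/pbr/",
                    proxies=proxies, timeout=30)
print(resp.status_code)

If that returns 200, the same URL with an http:// scheme is probably what
pip's --proxy option (and the pip.conf suggested earlier) wants.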

From: Asselin, Ramy [mailto:ramy.asse...@hp.com]
Sent: Thursday, June 11, 2015 4:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Looking for help getting git-review to work over 
https

Try creating/updating ~/.pip/pip.conf

With contents:

[global]
proxy = http://your_proxy:port/

From: KARR, DAVID [mailto:dk0...@att.com]
Sent: Thursday, June 11, 2015 4:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Looking for help getting git-review to work over 
https

This is just going swimmingly.
% sudo python setup.py install
Download error on https://pypi.python.org/simple/pbr/: [Errno 110] Connection 
timed out -- Some packages may not be found!
Couldn't find index page for 'pbr' (maybe misspelled?)
Download error on https://pypi.python.org/simple/: [Errno 110] Connection timed 
out -- Some packages may not be found!
No local packages or download links found for pbr

Do I have to do something special to set the proxy for this?  I have 
“http_proxy” and “https_proxy” already set.

From: ZZelle [mailto:zze...@gmail.com]
Sent: Thursday, June 11, 2015 3:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Looking for help getting git-review to work over 
https

Indeed, the doc[1] is unclear
git-review can be installed using: python setup.py install or pip install .


[1]http://docs.openstack.org/infra/manual/developers.html#accessing-gerrit-over-https

On Thu, Jun 11, 2015 at 11:16 PM, KARR, DAVID 
mailto:dk0...@att.com>> wrote:
I see.  I would guess a footnote on the instructions about this would be 
useful. Is https://github.com/openstack-infra/git-review the proper location to 
get the buildable source?  I don’t see any obvious build instructions there.

From: ZZelle [mailto:zze...@gmail.com]
Sent: Thursday, June 11, 2015 2:01 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Looking for help getting git-review to work over 
https

Hi David,


Following git config options are supported by git-review 
(https://review.openstack.org/116035)
   git config --global gitreview.scheme https
   git config --global gitreview.port 443
BUT the feature was merged after 1.24 (it's highlighted by your git review -vs)
so the feature is currently only available on the git-review master branch 
(which
is quite stable, i use it every day).


Cedric/ZZelle@irc


On Thu, Jun 11, 2015 at 10:14 PM, KARR, DAVID 
mailto:dk0...@att.com>> wrote:
I followed the instructions for installing and configuring corkscrew, similar 
to what you provided here.  The result seems to indicate it did something, but 
the overall result is the same:
2015-06-11 13:07:25.866568 Running: git log --color=never --oneline HEAD^1..HEAD
2015-06-11 13:07:25.869309 Running: git remote
2015-06-11 13:07:25.872742 Running: git config --get gitreview.username
No remote set, testing 
ssh://dk0...@review.openstack.org:29418/openstack/horizon.git
2015-06-11 13:07:25.874869 Running: git push --dry-run 
ssh://dk0...@review.openstack.org:29418/openstack/horizon.git
 --all
The authenticity of host 
'[review.openstack.org]:29418 ()' can't be established.
RSA key fingerprint is 28:c6:42:b7:44:d2:48:64:c1:3f:31:d8:1b:6e:3b:63.
Are you sure you want to continue connecting (yes/no)? yes
ssh://dk0...@review.openstack.org:29418/openstack/horizon.git
 did not work.
Could not connect to gerrit.
Enter your gerrit username:
---

From: Paul Mi

Re: [openstack-dev] [neutron] Microversioning work questions and kick-start

2015-06-12 Thread Henry Gessau
On Thu, Jun 11, 2015, Salvatore Orlando  wrote:
> Finally, I received queries from several community members that would be keen
> on helping supporting this microversioning effort. I wonder if the PTL and the
> API lieutenants would ok with agreeing to have a team of developers meeting
> regularly, working towards implementing this feature, and report progress
> and/or issues to the general Neutron meeting.

Yes, I am ok with agreeing to form such a team. ;) With an effort this complex
it makes sense to have tl;dr type summaries in the general meeting. This has
worked well for large-effort features before, and when the work winds down the
topic can fold back into the main meeting.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-12 Thread Steven Dake (stdake)
Adrian,

Disagree.

The python-magnumclient and heat-coe-templates have separate launchpad 
trackers.  I suspect we will want a separate tracker for python-k8sclient when 
we are ready for that.

IMO each repo should have a separate launchpad tracker to make the lives of the 
folks maintaining the software easier :)  This is a common best practice in 
OpenStack.

Regards
-steve


From: Adrian Otto mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Friday, June 12, 2015 at 7:47 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - 
need a vote from the core team

Steve,

There is no need for a second LP project at all.

Adrian


 Original message 
From: "Steven Dake (stdake)" mailto:std...@cisco.com>>
Date: 06/12/2015 7:41 AM (GMT-08:00)
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - 
need a vote from the core team

Adrian,

Great thanks for that.  I would recommend one change and that is for one 
magnum-drivers team across launchpad trackers.  The magnum-drivers team as you 
know (this is for the benefit of others) is responsible for maintaining the 
states of the launchpad trackers.

Regards,
-steve

From: Adrian Otto mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, June 11, 2015 at 7:21 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - 
need a vote from the core team

Team,

We are fortunate enough to have a thriving community of developers who want to 
make OpenStack great, and several of us have pledged support for this work in 
Magnum. Due to the amount of interest expressed in this pursuit, and the small 
amount of overlap between the developers in magnum-core, I’m authorizing the 
creation of a new gerrit ACL group named magnum-ui-core. Please install me as 
the pilot member of the group. I will seed the group with those who have 
pledged support for the effort from the “essential” subscribers to the 
following blueprint. If our contributors to the magnum-ui repo feel that review 
velocity is too low, I will add magnum-core as a member so we can help. On 
regular intervals, I will review the activity level of our new group, and make 
adjustments as needed to add/subtract from it in accordance with input from the 
active contributors. We will use the Magnum project on Launchpad for blueprints 
and bugs.

https://blueprints.launchpad.net/magnum/+spec/magnum-horizon-plugin

There are 8 contributors identified, who will comprise our initial 
magnum-ui-core group.

I ask that the ACLs be configured as follows:

[access "refs/heads/*"]
abandon = group magnum-ui-core
create = group magnum-milestone
label-Code-Review = -2..+2 group magnum-ui-core
label-Workflow = -1..+1 group magnum-ui-core

[access "refs/tags/*"]
pushSignedTag = group magnum-milestone

[receive]
requireChangeId = true
requireContributorAgreement = true

[submit]
mergeContent = true

Thanks everyone for your enthusiasm about this new pursuit. I look forward to 
working together with you to make this into something we are all proud of.

Adrian

PS: Special thanks to sdake for initiating this conversation, and helping us to 
arrive at a well reasoned decision about how to approach this.

On Jun 4, 2015, at 10:58 AM, Steven Dake (stdake) 
mailto:std...@cisco.com>> wrote:

Hey folks,

I think it is critical for self-service needs that we have a Horizon dashboard 
to represent Magnum.  I know the entire Magnum team has no experience in UI 
development, but I have found at least one volunteer, Bradley Jones, to tackle the 
work.

I am looking for more volunteers to tackle this high impact effort to bring 
Containers to OpenStack either in the existing Magnum core team or as new 
contributors.   If you're interested, please chime in on this thread.

As far as “how to get patches approved”, there are two models we can go with.

Option #1:
We add these UI folks to the magnum-core team and trust them not to +2/+A 
Magnum infrastructure code.  This also preserves us as one team with one 
mission.

Option #2:
We make a new core team magnum-ui-core.  This presents special problems if the 
UI contributor team isn’t large enough to get reviews in.  I suspect Option #2 
will be difficult to execute.

Cores, please vote on Option #1, or Option #2, and Adrian can make a decision 
based upon the results.

Regards
-steve

__
OpenStack Devel

[openstack-dev] [taskflow] Returning information from reverted flow

2015-06-12 Thread Dulko, Michal
Hi,

In Cinder we had merged a complicated piece of code[1] to be able to return
something from a flow that was reverted. Basically, outside the flow we needed to
know whether the volume was rescheduled or not. Right now this is done by
injecting the needed information into the exception thrown from the flow. Another
idea was to use the notifications mechanism of TaskFlow. Both ways are rather
workarounds than real solutions.
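
For readers who have not looked at [1], a minimal sketch (much simplified,
names made up) of the exception-injection workaround described above: the
caller digs the rescheduling outcome back out of the exception raised by the
reverted flow.

class FlowReverted(Exception):
    """Carries information out of a flow that was reverted."""

    def __init__(self, rescheduled):
        super(FlowReverted, self).__init__("volume creation flow reverted")
        self.rescheduled = rescheduled


def run_create_volume_flow():
    # ... build and run the real TaskFlow flow here; on failure the revert
    # path decides whether the volume was rescheduled and raises accordingly
    raise FlowReverted(rescheduled=True)


try:
    run_create_volume_flow()
except FlowReverted as exc:
    print("rescheduled:", exc.rescheduled)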

I wonder if TaskFlow couldn't provide a mechanism to mark a stored element so that
it is not removed when a revert occurs. Or maybe another way of returning something
from a reverted flow?

Any thoughts/ideas?

[1] https://review.openstack.org/#/c/154920/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [fuel] more collaboration request

2015-06-12 Thread Fox, Kevin M
-1. You are correct in that it is allowed in free source, and done in practice. 
There has to be a very good reason for it for it though. It feels like you are 
using the nuclear option when there are other ways to solve the problem that 
are more friendly to the attribution of the authors. One of the reasons free 
software developers do what they do is for recognition, and by this type of 
copying, that gets lost. Not a good thing. It discourages developers from 
committing.

The way other distros deal with this is to use an upstream release with a set of 
vendor patches that fix the bugs that haven't made it upstream yet.

It's a bit of a hassle to maintain the patches this way, but it does allow the 
bugs to be fixed relatively quickly if upstream is slow and creates incentive 
for the patches to get back upstream so they don't have to be carried. The 
patches should be able to be relatively easily dumped from gerrit as well, so 
you can submit the patch for review, and then dump it to a patch while it's 
waiting and stick it in the fuel repo until it lands. Once it lands you can 
simply delete the patch out of fuel's repo.

Would this work?

Thanks,
Kevin

From: Dmitry Borodaenko [dborodae...@mirantis.com]
Sent: Friday, June 12, 2015 2:43 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [puppet] [fuel] more collaboration request

On Thu, Jun 11, 2015 at 11:43:09PM -0400, Emilien Macchi wrote:
> What about code history and respect of commit ownership?
> I'm personally wondering if it's fair to copy/paste several thousands of
> lines of code from another Open-Source project without asking to the
> community or notifying the authors before. I know it's Open-Source and
> Apache 2.0 but well... :-)

Being able to copy code without having to ask for permission is exactly
what Free Software (and more recently, Open Source) is for.

You can't rely on commit history and even changelog to track attribution
and licensing, source tree itself should contain all appropriate
copyright notices and licenses, and we keep all of those intact:

https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/cinder/Modulefile#L3-L4
https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/cinder/LICENSE

Besides, there's a historic precedent that stripping commit history is
acceptable even with GPL:

https://lwn.net/Articles/432012/

> >> Should not it be the way around?
> >> Puppet OpenStack modules provide the original code. If there is a bug,
> >> it has to be fixed in the modules. Puppet OpenStack developers don't
> >> have time/bandwidth and moreover don't want to periodically have a
> >> look at Fuel git history. I'm not sure this is the best solution for
> >> the community.
> > (...)
> >> The reality (and again I won't blame any patch, you can find them on
> >> Gerrit) is that most of patches are not merged and in staled status.
> >> If I can suggest something, the policy should be more "upstream first"
> >> where you submit a patch upstream, you backport it downstream, and in
> >> the until it's merged you should make sure it land upstream after
> >> community review process. The last step is I think the problem I'm
> >> mentioning here and part of the root cause of this topic.
> >
> > Yes, this right here is the biggest point of contention in the whole
> > discussion.
> >
> > The most problematic implication of what you're asking for is the
> > additional effort that it would require from Fuel developers. When you
> > say that Puppet OpenStack developers don't have time to look at Fuel git
> > history for bugfixes, and then observe that actually Fuel developers do
> > propose their patches to upstream, but those patches are stalled in the
> > community review process, this indicates that you don't consider taking
> > over and landing these patches a priority:
>
> We don't consider taking the patches?

Why do you misinterpret my words this way here, and then a few paragraphs
later you demonstrate that you clearly understand the difference between
"taking patches" and "taking over patches"?

> Please go on Gerrit, look at the patches and tell me if there is no
> review from Puppet OpenStack community. Most of the patchs are -1 or
> not passing unit testing which means the code can't be merged.
>
> Let me give you examples so you can see Puppet OpenStack folks is doing
> reviews on patchs from Fuel team:
> https://review.openstack.org/#/c/170485/
> https://review.openstack.org/#/c/157004/
> https://review.openstack.org/#/c/176924/
> https://review.openstack.org/#/c/168848/
> https://review.openstack.org/#/c/130937/
> https://review.openstack.org/#/c/131710/
> https://review.openstack.org/#/c/174811/
>
> And this is only 'in progress' patches. A lot of fixed have been
> abandoned upstream. You can easily query them on Gerrit.

Once again, this only disproves your concern that Puppet OpenStack
developers would have to waste time digging thro

Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-12 Thread Adrian Otto
Steve,

There is no need for a second LP project at all.

Adrian


 Original message 
From: "Steven Dake (stdake)" 
Date: 06/12/2015 7:41 AM (GMT-08:00)
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - 
need a vote from the core team

Adrian,

Great thanks for that.  I would recommend one change and that is for one 
magnum-drivers team across launchpad trackers.  The magnum-drivers team as you 
know (this is for the benefit of others) is responsible for maintaining the 
states of the launchpad trackers.

Regards,
-steve

From: Adrian Otto mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, June 11, 2015 at 7:21 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - 
need a vote from the core team

Team,

We are fortunate enough to have a thriving community of developers who want to 
make OpenStack great, and several of us have pledged support for this work in 
Magnum. Due to the amount of interest expressed in this pursuit, and the small 
amount of overlap between the developers in magnum-core, I’m authorizing the 
creation of a new gerrit ACL group named magnum-ui-core. Please install me as 
the pilot member of the group. I will seed the group with those who have 
pledged support for the effort from the “essential” subscribers to the 
following blueprint. If our contributors to the magnum-ui repo feel that review 
velocity is too low, I will add magnum-core as a member so we can help. On 
regular intervals, I will review the activity level of our new group, and make 
adjustments as needed to add/subtract from it in accordance with input from the 
active contributors. We will use the Magnum project on Launchpad for blueprints 
and bugs.

https://blueprints.launchpad.net/magnum/+spec/magnum-horizon-plugin

There are 8 contributors identified, who will comprise our initial 
magnum-ui-core group.

I ask that the ACLs be configured as follows:

[access "refs/heads/*"]
abandon = group magnum-ui-core
create = group magnum-milestone
label-Code-Review = -2..+2 group magnum-ui-core
label-Workflow = -1..+1 group magnum-ui-core

[access "refs/tags/*"]
pushSignedTag = group magnum-milestone

[receive]
requireChangeId = true
requireContributorAgreement = true

[submit]
mergeContent = true

Thanks everyone for your enthusiasm about this new pursuit. I look forward to 
working together with you to make this into something we are all proud of.

Adrian

PS: Special thanks to sdake for initiating this conversation, and helping us to 
arrive at a well reasoned decision about how to approach this.

On Jun 4, 2015, at 10:58 AM, Steven Dake (stdake) 
mailto:std...@cisco.com>> wrote:

Hey folks,

I think it is critical for self-service needs that we have a Horizon dashboard 
to represent Magnum.  I know the entire Magnum team has no experience in UI 
development, but I have found at least one volunteer, Bradley Jones, to tackle the 
work.

I am looking for more volunteers to tackle this high impact effort to bring 
Containers to OpenStack either in the existing Magnum core team or as new 
contributors.   If you're interested, please chime in on this thread.

As far as “how to get patches approved”, there are two models we can go with.

Option #1:
We add these UI folks to the magnum-core team and trust them not to +2/+A 
Magnum infrastructure code.  This also preserves us as one team with one 
mission.

Option #2:
We make a new core team magnum-ui-core.  This presents special problems if the 
UI contributor team isn’t large enough to get reviews in.  I suspect Option #2 
will be difficult to execute.

Cores, please vote on Option #1, or Option #2, and Adrian can make a decision 
based upon the results.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-12 Thread Steven Dake (stdake)
Brad,

Each repo should have a separate launchpad tracker.  I reconfigured the 
magnum-ui launchpad tracker with the liberty series and the liberty milestones. 
 This is common best practice in OpenStack governance.  I recommend a common 
drivers team however, which should be magnum-drivers.

Regards
-steve


From: "Bradley Jones (bradjone)" mailto:bradj...@cisco.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Friday, June 12, 2015 at 3:31 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - 
need a vote from the core team

The review for the creation of the new project is here 
https://review.openstack.org/190998

To confirm Adrian, do you intend to use the Magnum launchpad for UI related bps 
and bugs or the Magnum UI launchpad indicated in the spec 
(https://launchpad.net/magnum-ui)? If it’s the former I shall update the spec 
to indicate so.

Thanks,
Brad

On 12 Jun 2015, at 03:21, Adrian Otto 
mailto:adrian.o...@rackspace.com>> wrote:

Team,

We are fortunate enough to have a thriving community of developers who want to 
make OpenStack great, and several of us have pledged support for this work in 
Magnum. Due to the amount of interest expressed in this pursuit, and the small 
amount of overlap between the developers in magnum-core, I’m authorizing the 
creation of a new gerrit ACL group named magnum-ui-core. Please install me as 
the pilot member of the group. I will seed the group with those who have 
pledged support for the effort from the “essential” subscribers to the 
following blueprint. If our contributors to the magnum-ui repo feel that review 
velocity is too low, I will add magnum-core as a member so we can help. On 
regular intervals, I will review the activity level of our new group, and make 
adjustments as needed to add/subtract from it in accordance with input from the 
active contributors. We will use the Magnum project on Launchpad for blueprints 
and bugs.

https://blueprints.launchpad.net/magnum/+spec/magnum-horizon-plugin

There are 8 contributors identified, who will comprise our initial 
magnum-ui-core group.

I ask that the ACLs be configured as follows:

[access "refs/heads/*"]
abandon = group magnum-ui-core
create = group magnum-milestone
label-Code-Review = -2..+2 group magnum-ui-core
label-Workflow = -1..+1 group magnum-ui-core

[access "refs/tags/*"]
pushSignedTag = group magnum-milestone

[receive]
requireChangeId = true
requireContributorAgreement = true

[submit]
mergeContent = true

Thanks everyone for your enthusiasm about this new pursuit. I look forward to 
working together with you to make this into something we are all proud of.

Adrian

PS: Special thanks to sdake for initiating this conversation, and helping us to 
arrive at a well reasoned decision about how to approach this.

On Jun 4, 2015, at 10:58 AM, Steven Dake (stdake) 
mailto:std...@cisco.com>> wrote:

Hey folks,

I think it is critical for self-service needs that we have a Horizon dashboard 
to represent Magnum.  I know the entire Magnum team has no experience in UI 
development, but I have found at least one volunteer, Bradley Jones, to tackle the 
work.

I am looking for more volunteers to tackle this high impact effort to bring 
Containers to OpenStack either in the existing Magnum core team or as new 
contributors. If you're interested, please chime in on this thread.

As far as “how to get patches approved”, there are two models we can go with.

Option #1:
We add these UI folks to the magnum-core team and trust them not to +2/+A 
Magnum infrastructure code.  This also preserves us as one team with one 
mission.

Option #2:
We make a new core team magnum-ui-core.  This presents special problems if the 
UI contributor team isn’t large enough to get reviews in.  I suspect Option #2 
will be difficult to execute.

Cores, please vote on Option #1, or Option #2, and Adrian can make a decision 
based upon the results.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-12 Thread Steven Dake (stdake)
Adrian,

Great thanks for that.  I would recommend one change, and that is to have one 
magnum-drivers team across launchpad trackers.  The magnum-drivers team, as you 
know (this is for the benefit of others), is responsible for maintaining the 
state of the launchpad trackers.

Regards,
-steve

From: Adrian Otto mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, June 11, 2015 at 7:21 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - 
need a vote from the core team

Team,

We are fortunate enough to have a thriving community of developers who want to 
make OpenStack great, and several of us have pledged support for this work in 
Magnum. Due to the amount of interest expressed in this pursuit, and the small 
amount of overlap between the developers in magnum-core, I’m authorizing the 
creation of a new gerrit ACL group named magnum-ui-core. Please install me as 
the pilot member of the group. I will seed the group with those who have 
pledged support for the effort from the “essential” subscribers to the 
following blueprint. If our contributors to the magnum-ui repo feel that review 
velocity is too low, I will add magnum-core as a member so we can help. On 
regular intervals, I will review the activity level of our new group, and make 
adjustments as needed to add/subtract from it in accordance with input from the 
active contributors. We will use the Magnum project on Launchpad for blueprints 
and bugs.

https://blueprints.launchpad.net/magnum/+spec/magnum-horizon-plugin

There are 8 contributors identified, who will comprise our initial 
magnum-ui-core group.

I ask that the ACLs be configured as follows:

[access "refs/heads/*"]
abandon = group magnum-ui-core
create = group magnum-milestone
label-Code-Review = -2..+2 group magnum-ui-core
label-Workflow = -1..+1 group magnum-ui-core

[access "refs/tags/*"]
pushSignedTag = group magnum-milestone

[receive]
requireChangeId = true
requireContributorAgreement = true

[submit]
mergeContent = true

Thanks everyone for your enthusiasm about this new pursuit. I look forward to 
working together with you to make this into something we are all proud of.

Adrian

PS: Special thanks to sdake for initiating this conversation, and helping us to 
arrive at a well reasoned decision about how to approach this.

On Jun 4, 2015, at 10:58 AM, Steven Dake (stdake) 
mailto:std...@cisco.com>> wrote:

Hey folks,

I think it is critical for self-service needs that we have a Horizon dashboard 
to represent Magnum.  I know the entire Magnum team has no experience in UI 
development, but I have found at least one volunteer, Bradley Jones, to tackle the 
work.

I am looking for more volunteers to tackle this high impact effort to bring 
Containers to OpenStack either in the existing Magnum core team or as new 
contributors. If you're interested, please chime in on this thread.

As far as “how to get patches approved”, there are two models we can go with.

Option #1:
We add these UI folks to the magnum-core team and trust them not to +2/+A 
Magnum infrastructure code.  This also preserves us as one team with one 
mission.

Option #2:
We make a new core team magnum-ui-core.  This presents special problems if the 
UI contributor team isn’t large enough to get reviews in.  I suspect Option #2 
will be difficult to execute.

Cores, please vote on Option #1, or Option #2, and Adrian can make a decision 
based upon the results.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] general invitation to people involved with Kolla to join the Kolla-drivers team

2015-06-12 Thread Steven Dake (stdake)
The kolla-drivers team in launchpad is a restricted team because people can 
damage the issue tracker and cause a bunch of work on my end to fix.  That 
said, I’m going to open up the kolla-drivers launchpad team to a wider group:
any individual involved in Kolla.  This is different from the core team.  The 
core team has +2/-2 capability for reviews and are the final guardians of the 
gate.

The drivers team is responsible for maintaining the state of the launchpad 
tracker.  If you want to join, hit me up on IRC (sdake).

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [fuel] more collaboration request

2015-06-12 Thread Emilien Macchi


On 06/12/2015 07:58 AM, Bogdan Dobrelya wrote:
>> Hi,
>>
>> Before you read me, please remember I know almost nothing about puppet. :)
>>
>> On 06/11/2015 11:03 PM, Matt Fischer wrote:
>>
>> Matt,
>>
>> I appreciate a lot who you are, and all the help you've given me so far,
>> but what you are asking here is wrong. You shouldn't ask Emilien to
>> track the work of the Fuel team, and ping them on IRC to contribute
>> back. It should be up to them to directly fix upstream *first*, and
>> *then* fix back in Fuel.
> 
> This is what we should do, indeed, as a Fuel library team. First, always
> get the patch merged upstream, and only then backport it to the Fuel fork.
> Ideally, we will next switch to upstream manifests, eventually, so there
> would be no need for forks anymore. Sadly, this *never* worked for us
> and doesn't work yet as we're, it seems, not ready for this *quite a
> long path* of changes landing> So there was a "lazy compromise" and
> shortcuts found, which I personally don't like.
> 
> I strongly believe that someday this will start to work for us. And this
> is not just words of hope. Before we did the first
> "get-closer-to-upstream" effort, our fork's code base divergence was ~97%
> and 0 patches contributed upstream by changes in Fuel library. An
> initial sync with upstream modules was the very first step on the right
> way. And we're keep doing the best to reduce the code diverge to be
> ready to switch upstream modules one day.

I'm actually happy to hear from you, since we have been discussing this
together over the last 2 summits, without a real plan between the two groups.

>>
>>
>> It shouldn't be the way either. The team working on fuel-library should
>> be pro-active and doing the contributions, Emilien shouldn't have to
> 
> Nothing to add, you are completely right.
> 
>> discuss a "specific bug of commits". I believe also that Emilien's
>> reasoning goes beyond just one or 2 commits, it's a general thinking.
>>
>> On 06/11/2015 04:36 PM, Matthew Mosesohn wrote:
>>
>> This isn't the only place where we have a huge git repository doing
>> everything. This IMO is a big mistake, which gives us more work because
>> we have to duplicate what's upstream and constantly rebase. This is yet
>> another technical debt... This only works because we have a lot of
> 
> Agreed, this only makes the technical debt grow uncontrollably.

Good feedback from the patch's author.

> 
>> Mirantis employees doing the work, so the inefficiency is
>> counterbalanced by the workforce. But as you know, we're always pushing
>> everyone to the very limit to release a new version of MOS and Fuel, so
>> maybe now is the time to rethink the way we work.
>>
>> To move forward, I really believe we (as in: Mirantis) should be:
>> 1/ Rework fuel-library to use multiple git repositories for puppet, and
>> maybe work out a way to package these individually.
>> 2/ Use unmodified versions of upstream puppet modules as much as possible
> 
>> 3/ Work *only* on upstream puppet and not on a separate fork
> 
> I'm all for this option. We have a backlog item to deploy

Sounds like another plan here, which sounds great.

> OpenStack from upstream packages with Fuel. I'd say this must be done by
> upstream puppet manifests as well.

Can you clarify what must be done by upstream manifests?

>>
>> As with a lot of the changes that I propose, this would be a one-off painful
>> effort to kill this technical debt, but in the long run we would really
>> benefit from such a reorganization.
>>
>> If we don't do the above, it's going to be "business as usual", no matter
>> how much effort Mirantis engineers put in: the pressure we have to
>> deliver Fuel/MOS should shift from the fork to what's upstream.
>>
>> Cheers,
>>
>> Thomas Goirand (zigo)
> 
> 

-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [fuel] more collaboration request

2015-06-12 Thread Emilien Macchi


On 06/12/2015 08:29 AM, Flavio Percoco wrote:
> On 12/06/15 03:04 -0700, Dmitry Borodaenko wrote:
>> On Fri, Jun 12, 2015 at 09:31:45AM +0200, Flavio Percoco wrote:
>>> On 11/06/15 17:36 +0300, Matthew Mosesohn wrote:
>>> >Secondly, I'd like to point out that Fuel is not so different from
>>> >what other teams are doing. At the Summit, I heard from others who all
>>> >maintain internal Gerrits and internal forks of the modules. The
>>> >difference is that Fuel is being worked on in the open in StackForge.
>>> >Anyone is free to contribute to Fuel as he or she wishes, take our
>>> >patches, or review changesets.
>>>
>>> TBH, I really dislike the fact that there are internal forks but as
>>> long as they are kept internal, I don't really care.
>>
>> "Internal" may apply to other projects Matt is referring to, but it does
>> not apply to Fuel. Fuel's forks of upstream puppet modules are not
>> internal, they're embedded into the fuel-library repository, which,
>> along with the rest of Fuel source code, is fully public.
> 
> Yup, I was referring to other projects too. I should've been more
> explicit but thanks for clarifying.
> 
>>
>>> It's not correct to just copy/paste code, sure, but at least they are
>>> not making it publicly consumable with the wrong attributions.
>>
>> We are making Fuel publicly consumable, and, as I've pointed out in
>> previous email, we're keeping all attributions in the source code
>> intact.
>>
>>> I do prefer (and I believe Emilien does as well) to have Fuel in the
>>> open,
>>
>> And yet in your previous statements you say that publishing Fuel source
>> code is somehow worse than keeping one's modifications of open source
>> code unavailable to public. Which one is it?
> 
> I was referring to other projects :)
> 
> I like Fuel open, I like every project open but I'd very much want
> them to do it right.

+1
Open-Source is not just about publishing your source code on GitHub.
I'm sure we all know this statement, but in practice it does not always
happen.

> Cheers,
> Flavio
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [fuel] more collaboration request

2015-06-12 Thread Emilien Macchi


On 06/12/2015 05:43 AM, Dmitry Borodaenko wrote:
> On Thu, Jun 11, 2015 at 11:43:09PM -0400, Emilien Macchi wrote:
>> What about code history and respect of commit ownership?
>> I'm personally wondering if it's fair to copy/paste several thousands of
>> lines of code from another Open-Source project without asking to the
>> community or notifying the authors before. I know it's Open-Source and
>> Apache 2.0 but well... :-)
> 
> Being able to copy code without having to ask for permission is exactly
> what Free Software (and more recently, Open Source) is for.
> 
> You can't rely on commit history and even changelog to track attribution
> and licensing, source tree itself should contain all appropriate
> copyright notices and licenses, and we keep all of those intact:
> 
> https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/cinder/Modulefile#L3-L4
> https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/cinder/LICENSE
> 
> Besides, there's a historic precedent that stripping commit history is
> acceptable even with GPL:
> 
> https://lwn.net/Articles/432012/

Oh I'm pretty sure you don't violate any law. I'm just not sure it was
the right way to use the Puppet modules.

I don't think there is value in discussing this 'copy/paste' thing
further; let's move on to more interesting stuff.

> 
 Should it not be the other way around?
 Puppet OpenStack modules provide the original code. If there is a bug,
 it has to be fixed in the modules. Puppet OpenStack developers don't
 have time/bandwidth and moreover don't want to periodically have a
 look at Fuel git history. I'm not sure this is the best solution for
 the community.
>>> (...)
 The reality (and again I won't blame any patch, you can find them on
 Gerrit) is that most of the patches are not merged and are in a stale state.
 If I can suggest something, the policy should be more "upstream first",
 where you submit a patch upstream, you backport it downstream, and in
 the meantime, until it's merged, you make sure it lands upstream through the
 community review process. The last step is, I think, the problem I'm
 mentioning here and part of the root cause of this topic.
>>>
>>> Yes, this right here is the biggest point of contention in the whole
>>> discussion.
>>>
>>> The most problematic implication of what you're asking for is the
>>> additional effort that it would require from Fuel developers. When you
>>> say that Puppet OpenStack developers don't have time to look at Fuel git
>>> history for bugfixes, and then observe that actually Fuel developers do
>>> propose their patches to upstream, but those patches are stalled in the
>>> community review process, this indicates that you don't consider taking
>>> over and landing these patches a priority:
>>
>> We don't consider taking the patches?
> 
> Why do you misinterpret my words this way here, and then few paragraphs
> later you demonstrate that you clearly understand the difference between
> "taking patches" and "taking over patches"?
> 
>> Please go on Gerrit, look at the patches and tell me if there is no
>> review from the Puppet OpenStack community. Most of the patches are -1 or
>> not passing unit tests, which means the code can't be merged.
>>
>> Let me give you examples so you can see Puppet OpenStack folks are doing
>> reviews on patches from the Fuel team:
>> https://review.openstack.org/#/c/170485/
>> https://review.openstack.org/#/c/157004/
>> https://review.openstack.org/#/c/176924/
>> https://review.openstack.org/#/c/168848/
>> https://review.openstack.org/#/c/130937/
>> https://review.openstack.org/#/c/131710/
>> https://review.openstack.org/#/c/174811/
>>
>> And these are only the 'in progress' patches. A lot of fixes have been
>> abandoned upstream. You can easily query them on Gerrit.
> 
> Once again, this only disproves your concern that Puppet OpenStack
> developers would have to waste time digging through Fuel commit history
> to track down bugfixes. Fuel developers are taking care of this for you.

Good to know, so we have a first action:
"Fuel developers to track down every bug in the Puppet modules that is
fixed in Fuel, at least by submitting a patch upstream and making sure
the review stays alive. If the developer can't spend more time on a
pending patch, the developer can gently ask on the ML or IRC if someone
from the Puppet OpenStack group can take over to finish the code, and use
Co-Authored-By".
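
For illustration only, such a commit message could carry a trailer like the
one below (the subject and the name are placeholders, not taken from any real
review):

  Fix keystone endpoint handling in the cinder module

  Co-Authored-By: Jane Doe <jane.doe@example.com>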

>>> The fact that you have started this thread means that you actually do
>>> care to get these bugfixes into Puppet OpenStack. If that's true, maybe
>>> you can consider a middleground approach: Fuel team agrees to propose
>>> all our changes to upstream (i.e. do a better job at something we've
>>> already committed to unilaterally), and you help us land the patches we
>>> propose, and take over those that get stalled when the submitter from
>>> Fuel team has moved on to other tasks?
>>
>> If I understand correctly, you're asking for Puppet OpenSta

Re: [openstack-dev] [puppet] [fuel] more collaboration request

2015-06-12 Thread Vladimir Kuklin
Folks

As Dmitry already said, we are open to merging with upstream and would
like to keep our code divergence to no more than a few percent of the lines,
but there are historical reasons for this divergence with which we are not
happy either. So let's just point out that both sides are looking in the same
direction and we just need to figure out workflows and technical details on
how to make it comfortable both for Puppet OpenStack and Fuel developers,
taking into consideration that we all need to meet our deadlines and goals.
Let's close this discussion with the positive result that we both want to merge
the codebases as much as possible, and stop all the blame and judgement games
here. Let's have a good friendly meeting and come up with a list of action
items for both sides. I do not think that moving towards a quarrel is in
any way productive.

Let's do an IRC or at least a voice-based meeting and figure out all the details.


On Fri, Jun 12, 2015 at 3:29 PM, Flavio Percoco  wrote:

> On 12/06/15 03:04 -0700, Dmitry Borodaenko wrote:
>
>> On Fri, Jun 12, 2015 at 09:31:45AM +0200, Flavio Percoco wrote:
>>
>>> On 11/06/15 17:36 +0300, Matthew Mosesohn wrote:
>>> >Secondly, I'd like to point out that Fuel is not so different from
>>> >what other teams are doing. At the Summit, I heard from others who all
>>> >maintain internal Gerrits and internal forks of the modules. The
>>> >difference is that Fuel is being worked on in the open in StackForge.
>>> >Anyone is free to contribute to Fuel as he or she wishes, take our
>>> >patches, or review changesets.
>>>
>>> TBH, I really dislike the fact that there are internal forks but as
>>> long as they are kept internal, I don't really care.
>>>
>>
>> "Internal" may apply to other projects Matt is referring to, but it does
>> not apply to Fuel. Fuel's forks of upstream puppet modules are not
>> internal, they're embedded into the fuel-library repository, which,
>> along with the rest of Fuel source code, is fully public.
>>
>
> Yup, I was referring to other projects too. I should've been more
> explicit but thanks for clarifying.
>
>
>>  It's not correct to just copy/paste code, sure, but at least they are
>>> not making it publicly consumable with the wrong attributions.
>>>
>>
>> We are making Fuel publicly consumable, and, as I've pointed out in
>> previous email, we're keeping all attributions in the source code
>> intact.
>>
>>  I do prefer (and I believe Emiliem does as well) to have Fuel in the
>>> open,
>>>
>>
>> And yet in your previous statements you say that publishing Fuel source
>> code is somehow worse than keeping one's modifications of open source
>> code unavailable to public. Which one is it?
>>
>
> I was referring to other projects :)
>
> I like Fuel open, I like every project open but I'd very much want
> them to do it right.
>
>
> Cheers,
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] VLAN-aware VMs meeting

2015-06-12 Thread Ildikó Váncsa
Hi,

Since we reopened the review for this blueprint we've got a large number of 
comments. It can be clearly seen that the original proposal has to be changed, 
although it still requires some discussion to define a reasonable design that 
provides the desired feature and is aligned with the architecture and 
guidelines of Neutron. In order to speed up the process to fit into the Liberty 
timeframe, we would like to have a discussion about this. The goal is to 
discuss the alternatives we have, decide which to go on with and sort out the 
possible issues. After this discussion the blueprint will be updated with the 
desired solution.

I would like to propose a time slot for _next Tuesday (06. 16.), 17:00UTC - 
18:00UTC_. I would like to have the discussion on the #openstack-neutron 
channel, which gives a chance to attend to people who might be interested but 
missed this mail. I tried to check the slot, but please let me know if it 
collides with any Neutron-related meeting.

Thanks and Best Regards,
Ildikó
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposal for changing 1600UTC meeting to 1700 UTC

2015-06-12 Thread Steven Dake (stdake)
Even though 7am is not ideal for the west coast, I'd be willing to go back
that far.  That would put the meeting at the morning school rush for the
west coast folks though (although we are on summer break in the US and we
could renegotiate a time in 3 months when school starts up again if it's a
problem) - so creating a different set of problems for a different set of
people :)

This would be a 1400 UTC meeting.

While I wake up prior to 7am (usually around 5:30), I am not going to put
people through the torture of a 6am meeting in any timezone if I can help
it, so 1400 is the earliest we can go :)

Regards
-steve


On 6/12/15, 4:37 AM, "Paul Bourke"  wrote:

>I'm fairly easy on this but, if the issue is that the meeting is running
>into people's evening schedules (in EMEA), would it not make sense to
>push it back an hour or two into office hours, rather than forward?
>
>On 10/06/15 18:20, Ryan Hallisey wrote:
>> After some upstream discussion, moving the meeting from 1600 to 1700
>>UTC does not seem very popular.
>> It was brought up that changing the time to 16:30 UTC could accommodate
>>more people.
>>
>> For the people that attend the 1600 UTC meeting time slot can you post
>>further feedback to address this?
>>
>> Thanks,
>> Ryan
>>
>> - Original Message -
>> From: "Jeff Peeler" 
>> To: "OpenStack Development Mailing List (not for usage questions)"
>>
>> Sent: Tuesday, June 9, 2015 2:19:00 PM
>> Subject: Re: [openstack-dev] [kolla] Proposal for changing 1600UTC
>>meeting to 1700 UTC
>>
>> On Mon, Jun 08, 2015 at 05:15:54PM +, Steven Dake (stdake) wrote:
>>> Folks,
>>>
>>> Several people have messaged me from EMEA timezones that 1600UTC fits
>>> right into the middle of their family life (ferrying kids from school
>>> and what-not) and 1700UTC while not perfect, would be a better fit
>>> time-wise.
>>>
>>> For all people that intend to attend the 1600 UTC, could I get your
>>> feedback on this thread if a change of the 1600UTC timeslot to 1700UTC
>>> would be acceptable?  If it wouldn't be acceptable, please chime in as
>>> well.
>>
>> Both 1600 and 1700 UTC are fine for me.
>>
>> Jeff
>>
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Should we add instance action event to live migration?

2015-06-12 Thread Sylvain Bauza



Le 12/06/2015 15:27, Matt Van Winkle a écrit :


On 6/12/15 7:46 AM, "Andrew Laski"  wrote:


On 06/11/15 at 06:54pm, Ian Wells wrote:

On 11 June 2015 at 12:37, Richard Raseley  wrote:


Andrew Laski wrote:


There are many reasons a deployer may want to live-migrate instances
around: capacity planning, security patching, noisy neighbors, host
maintenance, etc... and I just don't think the user needs to know or
care that it has taken place.


They might care, insofar as live migrations will often cause
performance
degradation from a users perspective.


Seconded.  If your app manager is warned that you're going to be live
migrating it can do something about the capacity drop.  I can imagine
cases
where a migrating VM would be brutally murdered [1] and replaced because
it's not delivering sufficient performance.

To be clear I see instance-action reporting more as a log of what has
previously happened to an instance, not a report of what's currently
happening.  A live migration still has task_state changes to make it
visible to a user.

My view is that in a cloud environment it shouldn't matter which host
you're on, as long as it meets the scheduling constraints required, so a
no-downtime movement between them is not an important event for a user.
But that does ignore the performance drop-off that may be noticed by a
user, so there are reasons to expose that information.  I'm just in
favor of making it optional, not hiding it in general.

Completely agree with all your points, Andrew.  As someone who's starting
to live migrate more and more tenants, this is exactly the approach that
would be useful for me.


Well, since live-migrate has RBAC, why not consider that listing live 
migrations should also be RBAC'd?


I mean, if the policy allows the user to live-migrate, then the API 
should provide the list of live migrations. If that's not the case, then 
the API should just drop the related entries.
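
As a purely illustrative sketch of that idea (the helper and field names
below are hypothetical, not actual Nova code), the listing could be filtered
on the caller's policy:

  def filter_instance_actions(actions, caller_may_live_migrate):
      # Hypothetical helper: hide live-migration records from callers
      # whose policy does not allow live migration, mirroring the RBAC
      # rule on the live-migrate action itself.
      if caller_may_live_migrate:
          return list(actions)
      return [a for a in actions if a.get('action') != 'live-migration']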


-Sylvain


Thanks!
Matt





--
Ian.

[1] See "nova help brutally-murder"
_
_
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FreeBSD host support, round 2

2015-06-12 Thread Roman Bogorodskiy
  Daniel P. Berrange wrote:

> On Tue, Jun 09, 2015 at 07:52:39PM +0300, Roman Bogorodskiy wrote:
> > Hi,
> > 
> > Few months ago I've started a discussion on FreeBSD host support for
> > OpenStack. [1] At a result of discussion it was figured out that there
> > are a number of limitations, mainly on the BHyVe (the FreeBSD hypervisor)
> > side, that make the effort not feasible.
> 
> Do you still intend to add the missing features to BHyve to let it be
> supported in Nova eventually too ?

As I don't take part in bhyve development, it's hard for me to give a
detailed answer on that. I know that bhyve developers are
planning/working on some important improvements like UEFI support,
moving to single step guest boot and some others. I'm just trying to
keep libvirt/bhyve driver up to date.

> 
> > However, some things changed since then. Specifically, FreeBSD's got Xen
> > dom0 support. [2] In context of OpenStack deployment that has a number of
> > benefits over bhyve. Specifically:
> > 
> >  * The stack is more mature and feature-rich
> >  * The toolstack is here already: libxl is available through the FreeBSD
> >ports tree, libvirt/libxl works there with minimal modifications
> >(already available in the git master)
> >  * OpenStack libvirt/libxl driver is already here
> > 
> > I was able to setup a proof-of-concept environment on FreeBSD that
> > required quite a small amount of modifications required in OpenStack:
> > 
> >  * Glance and Keystone didn't require any changes
> >  * Nova required some minor modifications, mainly in the linux_net.py
> >area
> > 
> > The summary of Nova modifications:
> > 
> >  * I had to implement FreeBSD version of linux_net.LinuxNetInterfaceDriver.
> >It currently doesn't support vlans though. 
> > 
> >
> > https://github.com/novel/bsdstack/blob/master/bsdstack/nova/network/freebsd_net.py
> > 
> >I keep it as an external package and configure Nova to use it through
> >linuxnet_interface_driver config option in nova.conf
> >  * I had to create a stub for the IptablesManager class. I also had to
> >add a config option to be able to override class for that in a way
> >similar to interface driver.
> >  * I had to fix a minor interface incompatibility for the NullL3 stub, that's
> >already in the Nova tree: https://review.openstack.org/#/c/189001/
> >  * I added a hack to use 'phy' driver in domain's xml for disks because
> >for some reason driver='qemu' results in guests not able to access
> >disk devices (tried both FreeBSD and Linux guests). Need to
> >investigate
> >  * Dropped some LinuxBridgeInterfaceDriver hardcodes in
> >virt.libvirt.vif.
> > 
> > Here's a quick overview of my changes:
> > 
> > https://github.com/novel/nova/compare/stable/kilo...novel:stable/kilo_freebsd?expand=1
> > 
> > With these changes I was able to get things working, i.e. VMs starting,
> > obtaining IP addresses (with nova-network configured with FlatDHCP) etc.
> > 
> > Having said that, I'm wondering if the community is interested in integrating
> > FreeBSD support through libvirt/libxl into mainline? Obviously, the
> > changes I provided are shortcuts and the appropriate specs should be
> > create with proposals of proper designs, not quick hacks like that.
> 
> I'd be happy to see FreeBSD support for any of the libvirt hypervisors
> added to Nova. As you point out, the changes to the libvirt code are
> going to be pretty minimal for libxl, so there's not going to be any
> appreciable ongoing maint burden for this. So I see no serious reason
> to reject the changes to Nova libvirt driver code for libxl+FreeBSD.
> The fun bit will be the changes to non-libvirt code in nova...
> 
> > The biggest part of the unportable code, just like it was in bhyve case,
> > is still linux_net.py, so probably it makes sense to revive the old
> > spec:
> > 
> > https://review.openstack.org/#/c/127827/
> > 
> > TBH, IMHO linux_net.py could have a refactor regardless of FreeBSD
> > support. It's approx. 2000 lines long, contains a lot of stuff like
> > dnsmasq handling code, interface handling code and firewall management
> > that could have their own place.
> 
> Yes, that network setup code is really awful and I'd love to see
> it refactored regardless of FreeBSD support.
> 
> With my crystal ball, probably the main question wrt any kind of
> FreeBSD support will be that of 3rd party CI testing. I don't know
> whether there is any company backing your work who has resources
> to provide a CI system, or whether FreeBSD project can manage it
> independently ?

I do all my FreeBSD related activities in my free time, in other words
it's not backed by any company. Anyway, I think it would not be a
problem to arrange virtual machines for running unit tests. As for
integration testing it's going to be harder because it'd need real
hardware, and I need to figure out how I could obtain it.

> The refactoring of the linux_net.py code could be done even without
> CI support of course -

Re: [openstack-dev] [nova] Should we add instance action event to live migration?

2015-06-12 Thread Matt Van Winkle


On 6/12/15 7:46 AM, "Andrew Laski"  wrote:

>On 06/11/15 at 06:54pm, Ian Wells wrote:
>>On 11 June 2015 at 12:37, Richard Raseley  wrote:
>>
>>> Andrew Laski wrote:
>>>
 There are many reasons a deployer may want to live-migrate instances
 around: capacity planning, security patching, noisy neighbors, host
 maintenance, etc... and I just don't think the user needs to know or
 care that it has taken place.

>>>
>>> They might care, insofar as live migrations will often cause
>>>performance
>>> degradation from a users perspective.
>>>
>>
>>Seconded.  If your app manager is warned that you're going to be live
>>migrating it can do something about the capacity drop.  I can imagine
>>cases
>>where a migrating VM would be brutally murdered [1] and replaced because
>>it's not delivering sufficient performance.
>
>To be clear I see instance-action reporting more as a log of what has
>previously happened to an instance, not a report of what's currently
>happening.  A live migration still has task_state changes to make it
>visible to a user.
>
>My view is that in a cloud environment it shouldn't matter which host
>you're on, as long as it meets the scheduling constraints required, so a
>no-downtime movement between them is not an important event for a user.
>But that does ignore the performance drop-off that may be noticed by a
>user, so there are reasons to expose that information.  I'm just in
>favor of making it optional, not hiding it in general.

Completely agree with all your points, Andrew.  As someone who's starting
to live migrate more and more tenants, this is exactly the approach that
would be useful for me.

Thanks!
Matt

>
>
>
>>-- 
>>Ian.
>>
>>[1] See "nova help brutally-murder"
>
>>_
>>_
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Should we add instance action event to live migration?

2015-06-12 Thread Andrew Laski

On 06/11/15 at 06:54pm, Ian Wells wrote:

On 11 June 2015 at 12:37, Richard Raseley  wrote:


Andrew Laski wrote:


There are many reasons a deployer may want to live-migrate instances
around: capacity planning, security patching, noisy neighbors, host
maintenance, etc... and I just don't think the user needs to know or
care that it has taken place.



They might care, insofar as live migrations will often cause performance
degradation from a users perspective.



Seconded.  If your app manager is warned that you're going to be live
migrating it can do something about the capacity drop.  I can imagine cases
where a migrating VM would be brutally murdered [1] and replaced because
it's not delivering sufficient performance.


To be clear I see instance-action reporting more as a log of what has 
previously happened to an instance, not a report of what's currently 
happening.  A live migration still has task_state changes to make it 
visible to a user.


My view is that in a cloud environment it shouldn't matter which host 
you're on, as long as it meets the scheduling constraints required, so a 
no-downtime movement between them is not an important event for a user.  
But that does ignore the performance drop-off that may be noticed by a 
user, so there are reasons to expose that information.  I'm just in 
favor of making it optional, not hiding it in general.





--
Ian.

[1] See "nova help brutally-murder"



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] vif type libvirt-network

2015-06-12 Thread Andreas Scheuring

> But I still need to
> figure out more details about device renaming and what other side
> effects might come with it.

Device renaming might not be a good idea in the macvtap context, as the
interface used could also be used by other applications that might
insist on a fixed device name. It's not like the sriov case, where
there's a pool of interfaces that is handled as an OpenStack resource. It's
more like the ovs vxlan approach - the eth device used for tunneling
could also be used by other apps and is not dedicated to OpenStack.



I dug deeper into the live migration code. I wonder whether, before live
migration, any check is done that the target hypervisor can serve
the same physical network with neutron as well? I haven't found
anything like that.
The only thing I found is that during pre_live_migration the nova plug
code is called and some instance-specific network info is gathered.

Are you aware of any additional checks?


I also had the following other ideas:

- use the libvirt network approach and have an API call to
neutron-agent-show in our "vendor" specific plug/unplug code (a rough
sketch of this idea follows below)

- if some other neutron pre_live_migration checks exist, maybe I can
enrich the data with some content...
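
A rough sketch of the first idea, assuming a pre-built python-neutronclient
Client and agents that report OVS-style bridge_mappings (the function and
variable names are only illustrative):

  def dest_host_serves_physnet(neutron, dest_host, physnet):
      # Ask Neutron for the L2 agents running on the destination host
      # and check whether any of them maps the physical network we need.
      agents = neutron.list_agents(host=dest_host).get('agents', [])
      for agent in agents:
          mappings = agent.get('configurations', {}).get('bridge_mappings', {})
          if physnet in mappings:
              return True
      return False

Something like this could in principle be wired into the vendor-specific plug
code or a pre_live_migration-style check, but as said, this still needs
evaluation.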


But I would need some more time for evaluation next week.


Thanks so far!

Andreas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Looking for help getting git-review to work over https

2015-06-12 Thread ZZelle
Hi David,

Ok, sudo python setup.py install without pbr installed is not working behind
an http proxy.

That's because you should use sudo -E python setup.py install to pass the
http(s)_proxy env variables.
But even if you do so, python setup.py install will try to install pbr
WITHOUT taking the proxy info into account :(

Option 1: you wait for the 1.25 release :)
Option 2: you manually install pbr with pip or aptitude/yum, and requests
with aptitude/yum (otherwise requests will complain about https
certificates), then


On Fri, Jun 12, 2015 at 2:55 AM, Jeremy Stanley  wrote:

> On 2015-06-11 23:32:58 + (+), KARR, DAVID wrote:
> > I managed to install pip, but I don’t understand what “pip install
> > git-review” is doing.  It doesn’t appear to be replacing the
> > already installed git-review.
>
> `pip install git-review` installs the latest git-review release from
> pypi.python.org, but `pip install .` installs the source tree from
> your current working directory.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [fuel] more collaboration request

2015-06-12 Thread Flavio Percoco

On 12/06/15 03:04 -0700, Dmitry Borodaenko wrote:

On Fri, Jun 12, 2015 at 09:31:45AM +0200, Flavio Percoco wrote:

On 11/06/15 17:36 +0300, Matthew Mosesohn wrote:
>Secondly, I'd like to point out that Fuel is not so different from
>what other teams are doing. At the Summit, I heard from others who all
>maintain internal Gerrits and internal forks of the modules. The
>difference is that Fuel is being worked on in the open in StackForge.
>Anyone is free to contribute to Fuel as he or she wishes, take our
>patches, or review changesets.

TBH, I really dislike the fact that there are internal forks but as
long as they are kept internal, I don't really care.


"Internal" may apply to other projects Matt is referring to, but it does
not apply to Fuel. Fuel's forks of upstream puppet modules are not
internal, they're embedded into the fuel-library repository, which,
along with the rest of Fuel source code, is fully public.


Yup, I was referring to other projects too. I should've been more
explicit but thanks for clarifying.




It's not correct to just copy/paste code, sure, but at least they are
not making it publicly consumable with the wrong attributions.


We are making Fuel publicly consumable, and, as I've pointed out in
previous email, we're keeping all attributions in the source code
intact.


I do prefer (and I believe Emilien does as well) to have Fuel in the
open,


And yet in your previous statements you say that publishing Fuel source
code is somehow worse than keeping one's modifications of open source
code unavailable to public. Which one is it?


I was referring to other projects :)

I like Fuel open, I like every project open but I'd very much want
them to do it right.

Cheers,
Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [fuel] more collaboration request

2015-06-12 Thread Flavio Percoco

On 12/06/15 03:28 -0700, Dmitry Borodaenko wrote:

On Fri, Jun 12, 2015 at 09:24:33AM +0200, Flavio Percoco wrote:

I'm sure you both, and the Fuel team, are acting in good faith, but I
believe, in this case, there's no problem that makes copy/pasting
code, and therefore losing commit attribution, acceptable.


To sum up my previous emails, you're wrong on all counts: commits and
attribution are different things; we're not losing attribution; losing
commit history is acceptable.


Do you keep the author of the code you copied?




The fact that you are developing Fuel in the open is awesome and I
really hope you never change your mind on that but please, do find a
solution for this issue. As you can see from this thread, it creates
lots of frustration and it makes other project's work more difficult.


I have already explained in the thread how we address the problem of
tracking down and managing the Fuel specific changes in forked modules.
With that problem addressed, I don't see any other objective reason for
frustration. Does anybody's bonus depend on the number of lines of code
in stackforge repositories such as fuel-library that git blame
attributes to their name?


I don't think anyone here is talking about bonuses or worrying about
salaries. The fact that you mention it offends the purpose of this
thread and, as much as you don't care, I'm really sad to read that.

The whole thing this thread is trying to achieve is improving
collaboration and you are derailing the conversation with completely
unfriendly/unhelpful comments like the one above.

It does cause frustration because, as you can read from Emilien's
original email, it not only adds some extra burden to people in the
puppet team but also defeats the purpose of the team itself,
which is creating OpenStack puppet manifests that are consumable by
everyone.

Help them be better and let them help you improve your workflow/app.
That's all.




>The most problematic implication of what you're asking for is the
>additional effort that it would require from Fuel developers. When you
>say that Puppet OpenStack developers don't have time to look at Fuel git
>history for bugfixes, and then observe that actually Fuel developers do
>propose their patches to upstream, but those patches are stalled in the
>community review process, this indicates that you don't consider taking
>over and landing these patches a priority:
>
>http://lifehacker.com/5892948/instead-of-saying-i-dont-have-time-say-its-not-a-priority
>
>The fact that you have started this thread means that you actually do
>care to get these bugfixes into Puppet OpenStack. If that's true, maybe
>you can consider a middleground approach: Fuel team agrees to propose
>all our changes to upstream (i.e. do a better job at something we've
>already committed to unilaterally), and you help us land the patches we
>propose, and take over those that get stalled when the submitter from
>Fuel team has moved on to other tasks?

Assuming good faith, I'd guess you meant something like: "Please, help us
prioritize patches coming from Fuel".


Please do not assume that what I actually meant (as I explained in
my previous reply to Emilien) is incompatible with good faith. I am a
strong believer in Free Software, and I want to improve collaboration
between Puppet OpenStack and Fuel. I also know that unless we come up
with collaboration improvements that are mutually beneficial to both
projects, nothing will change and this discussion will remain, as
Emilien has put it, just words.


Please, do come up with something that works for both. It'll be of
great benefit for both projects.

Just to be clear, I believe in your good faith and that of both teams. The
reason I mentioned it is that, as a non-native English speaker, I
could've read that paragraph in a different way. Just trying to be
explicit to avoid others reading it the same way I did.

Thanks a lot,
Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

