Re: [openstack-dev] [neutron] openwrt VM as service

2015-04-17 Thread Sridhar Ramaswamy
As mentioned earlier in this thread, there are a few VM-based L3
implementations already in neutron - from Cisco (CSR) and Brocade (Vyatta
vRouter). Both extend off the neutron L3 service-plugin framework, and they
both have been decomposed into stackforge in the current Kilo cycle. So all
you need is another L3 service-plugin for openwrt hosted in stackforge. I
don't see any framework-level enhancements required to support this.

However, I do see some value in extracting common elements of these
service-VM implementations - particularly those related to launching the VM,
plumbing the ports, etc. - into a utility library (oslo? tacker?).

- Sridhar

On Thu, Apr 16, 2015 at 2:06 PM, Sławek Kapłoński sla...@kaplonski.pl
wrote:

 Hello,

 --
Best regards / Pozdrawiam
 Sławek Kapłoński
 sla...@kaplonski.pl

 On Wed, Apr 15, 2015 at 11:06:49PM +0200, Salvatore Orlando wrote:
  I think this work falls into the service VM category.
 
  openwrt, unlike other service VMs used for networking services (like
  cloudstack's router vm), is very lightweight, and it's fairly easy to
  provision such VMs on the fly. It should also be easy to integrate with an
  ML2 control plane or even with other plugins.

  It is a decent alternative to the l3 agent, and possibly to the dhcp agent
  as well. As I see this as an alternative to part of the reference control
  plane, I expect it to provide its own metadata proxy. The only change in
  neutron would be some sort of configurability in the metadata proxy
  launcher (assuming you do not provide DHCP as well via openwrt, in which
  case the problem would probably not exist).
 
  It's not my call whether this should live in neutron or not. My vote
  is no - simply because I believe that neutron is not a control plane, and
  everything that is a control plane, or integrates with it, should live
  outside of neutron, including our agents.

  On the other hand, I don't really see what the 'aaS' part of this is.
  You're not exposing anything as a service specific to openwrt, are you?

 I was only describing my (maybe not good) idea: instead of openwrt as a
 service which provides router functionality in a VM, maybe it would be
 better to provide some mechanism that makes it possible to connect
 different VMs which provide such router functionality (that is the
 'service' part here).

 
  Salvatore
 
 
 
  On 15 April 2015 at 22:06, Sławek Kapłoński sla...@kaplonski.pl wrote:
 
   Hello,
  
   I agree. IMHO it should maybe be something like *aaS deployed on a VM. I
   think that Octavia is something like that for LBaaS now.
   Maybe it could be something like RouteraaS, which would provide all such
   functions in a VM?
  
   --
   Best regards / Pozdrawiam
   Sławek Kapłoński
   sla...@kaplonski.pl
  
   On Wed, Apr 15, 2015 at 11:55:06AM -0500, Dean Troyer wrote:
    On Wed, Apr 15, 2015 at 2:37 AM, Guo, Ruijing ruijing@intel.com wrote:

     I'd like to propose openwrt VM as service.

     What's openWRT VM as service:

     a) Tenant can download openWRT VM from http://downloads.openwrt.org/
     b) Tenant can create WAN interface from external public network
     c) Tenant can create private network and create instance from private
        network
     d) Tenant can configure openWRT for several services including DHCP,
        route, QoS, ACL and VPNs.

    So first off, I'll be the first one in line to promote using OpenWRT for
    the basis of appliances for this sort of thing.  I use it to overcome
    the 'joy' of VirtualBox's local networking and love what it can do in
    64M RAM.

    However, what you are describing are services, yes, but I think to focus
    on the OpenWRT part of it is missing the point.  For example, Neutron
    has a VPNaaS already, but I agree it can also be built using OpenWRT and
    OpenVPN.  I don't think it is a stand-alone service though; using a
    combination of Heat/{ansible|chef|puppet|salt}/any other
    deployment/orchestration can get you there.  I have a shell script
    somewhere for doing exactly that on AWS from way back.

    What I've always wanted was an image builder that would customize the
    packages pre-installed.  This would be especially useful for disposable
    ramdisk-only or JFFS images that really can't install additional
    packages.  Such a front-end to the SDK/imagebuilder sounds like about
    half of what you are talking about above.

    Also, FWIW, a while back I packaged up a micro cloud-init replacement[0]
    in shell that turns out to be really useful.  It's based on something I
    couldn't find again to give proper attribution so if anyone knows who
    originated this I'd be grateful.

    dt

    [0] https://github.com/dtroyer/openwrt-packages/tree/master/rc.cloud

    --
    Dean Troyer
    dtro...@gmail.com
 

Re: [openstack-dev] [wsme] [ironic] [ceilometer] [magnum] [kite] [tuskar] WSME unmaintained ?

2015-04-17 Thread Julien Danjou
On Thu, Apr 16 2015, Lucas Alvares Gomes wrote:

/me put his wsme-core hat on

 * Should projects relying on WSME start thinking about migrating their APIs
 to another technology?

Maybe not migrating, but at least not starting something new with it.

 * Can we somehow get the core team to start paying more attention to the
 project? Or can we elect to the core team some people willing to do some
 reviews? If so, is there anyone out there who wants to help with it?

Err, yeah, right. There are 4 people in wsme-core: Christophe (the
original author) hasn't used WSME for at least 2 years, I'm pretty
sure Doug & Ryan have other things to care about, and I don't use WSME
anymore either.

So if anyone has a project that relies on WSME and wants to take care of
it, no problem.

 * Is forking the project an option?

No need to do that, I'm OK to add any competent people to wsme-core. :)

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [wsme] [ironic] [ceilometer] [magnum] [kite] [tuskar] WSME unmaintained ?

2015-04-17 Thread Julien Danjou
On Thu, Apr 16 2015, Jay Pipes wrote:

 I think this may be the way to go. The intent of WSME is admirable
 (the multiprotocol stack) but the execution leads to unwarranted
 complexity.

 I think a framework more specifically dedicated to JSON APIs, or
 even just being webby and correct would be better. Something
 _much_ simpler and _much_ more aligned with WSGI.

 Amen. Honestly, I never liked WSME and now that projects are no longer
 supporting XML, I don't see any reason to continue using it.

Yeah, supporting XML was one of the good points of WSME… back then, and a
reason to pick it… back then.

 Like you say, it adds way too much unnecessary complexity to the API
 framework, IMHO. Better to just use a simple JSONSchema framework
 (using the jsonschema Python library) to do input validations and have
 schema definitions in JSONSchema instead of random attribute factories
 like:

In Gnocchi we rely on voluptuous¹. The good part is that it allows you to
write shorter and more Pythonic schemas. The downside might be that they're
obviously not portable.
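
For a flavor of the difference, here is a minimal sketch of a voluptuous
schema (the fields are made up for illustration; this is not Gnocchi's
actual schema):

    import voluptuous

    # Hypothetical resource schema: plain Python types and validators
    # instead of a JSON document describing the constraints.
    resource_schema = voluptuous.Schema({
        voluptuous.Required('name'): str,
        'description': str,
        'port': voluptuous.All(int, voluptuous.Range(min=1, max=65535)),
    })

    # Returns the validated data, or raises voluptuous.MultipleInvalid
    # if the input does not conform.
    resource = resource_schema({'name': 'thing', 'port': 8080})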

 Personally, I prefer the Falcon approach to routing, which is what I call
 explicit object dispatch ;)

 import falcon

 class ThingsResource:

     def on_get(self, req, resp, user_id):
         # ... do some stuff
         resp.set_header('X-Powered-By', 'Small Furry Creatures')
         resp.status = falcon.HTTP_200

 things = ThingsResource()
 app = falcon.API()
 app.add_route('/{user_id}/things', things)

After using Pecan for a while, I'm leaning toward explicit routing too.
This Falcon API looks right too.
Honestly, Pecan is messy. Most of the time, when coding, you have no clue
which one of the methods is going to be picked in which controller class.

Cheers,
-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Setting the subnet's gateway with the IP allocated to a ServiceVM's port?

2015-04-17 Thread Wang, Yalei
Hi all,

This is a problem with the gateway setting in the subnet when a VM acts as
a router/firewall. When a VM works as a router/firewall in the network, the
port where the VM connects to the subnet should be the gateway of the
subnet. But right now we can't set the gateway to any VM port plugged into
the subnet, because the gateway IP cannot be in the IP allocation pool.

The usage is like this:
1.  Create a subnet with an IP allocation pool, specifying the gateway as
normal.
2.  Create a router and attach its interfaces to the subnets. With some
vendor router-plugins, this will create a router VM and connect this VM to
the subnets.
   The router VM would get an IP from the pool, but not the gateway IP.
   This is where the limitation comes in: the gateway IP cannot be
allocated to a VM, and the subnet's gateway cannot be updated with an IP
which has already been assigned to some VM.

A GatewayConflictWithAllocationPools exception would be raised.
The related verification code is
https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_v2.py#L1112
It was added by the patch for this bug:
https://bugs.launchpad.net/neutron/+bug/1062061.

Here is an error example:
stack@yalie-Studio-XPS-8000:~/job/dev2/devstack$ neutron subnet-update subnet2 --gateway 10.0.0.3
Gateway ip 10.0.0.3 conflicts with allocation pool 10.0.0.2-10.0.0.254

I think we need to remove this API limitation considering the usage listed
above, and I want to file a bug about it, although I know it may appear to
be incompatible with the previously expected API behavior.
Maybe we could:
1.  Remove this limitation unconditionally. Simple, but it would conflict
with the previous API behavior. Is that behavior bound to something more?
2.  Remove this limitation conditionally: add a flag on the neutron router
to indicate whether it is a VM router or a legacy router. Just a rough
idea; a hypothetical sketch follows.
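
For illustration, here is a rough, hypothetical sketch of what option 2
could look like in the validation path (the function name, flag, and
signature are made up; the real check lives in db_base_plugin_v2.py, linked
above):

    import netaddr

    from neutron.common import exceptions as n_exc

    def _validate_gateway_not_in_pools(gateway_ip, allocation_pools,
                                       gateway_on_service_vm=False):
        # Hypothetical flag: when the gateway is a service-VM port,
        # skip the conflict check so the VM may own the gateway IP.
        if gateway_on_service_vm:
            return
        gw = netaddr.IPAddress(gateway_ip)
        for pool in allocation_pools:
            if gw in netaddr.IPRange(pool['start'], pool['end']):
                raise n_exc.GatewayConflictWithAllocationPools(
                    pool=pool, ip_address=gateway_ip)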

Any more ideas about it?

Any comments are much appreciated.

Thanks

/Yalei

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] eventlet 0.17.3 is now fully Python 3 compatible

2015-04-17 Thread Victor Stinner
 For the full list, see the wiki page: 
 https://wiki.openstack.org/wiki/Python3#Core_OpenStack_projects 

 Thanks for updating the wiki page; that is a very useful list.
 From the looks of things, it seems like nova getting Python 3 support in
 Liberty is not going to happen.

Why? I plan to work on porting nova to Python 3. I proposed a nova session on
Python 3 at the next OpenStack Summit in Vancouver, and I plan to write a
spec too.

I'm not aware of any real blocker for nova.

 What are your thoughts on how to tackle sqlalchemy-migrate? It looks like 
 that is a blocker for several projects. And something I think we have wanted 
 to move off of for some time now. 

I just checked sqlalchemy-migrate. The README and the documentation are
completely outdated, but the project is very active: the latest commit was
one month ago, and the latest release (0.9.6) was one month ago. There are
py33 and py34 environments and the tests pass on Python 3.3 and Python 3.4!
I didn't check yet, but I guess that sqlalchemy-migrate 0.9.6 already works
on Python 3. The Python 3 classifiers are just missing in setup.cfg.

I sent patches to update the docs, to add the Python 3 classifiers, and to
upgrade the requirements. The project moved to stackforge; reviews are at
review.openstack.org:

   
https://review.openstack.org/#/q/status:open+project:stackforge/sqlalchemy-migrate,n,z

The wiki page said that scripttest and ibm-db-sa were not Python 3
compatible. That's no longer true: scripttest is compatible with Python 3,
and there is ibm-db-sa-py3, which is Python 3 compatible.

I updated the wiki page for sqlalchemy-migrate.

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] openwrt VM as service

2015-04-17 Thread Guo, Ruijing
Service VM is a good way to go.

Possibly we need to change the metadata proxy in neutron for VM-based L3:

https://blueprints.launchpad.net/neutron/+spec/metadata-overlapping-networks

Thanks,
-Ruijing


From: Sridhar Ramaswamy [mailto:sric...@gmail.com]
Sent: Friday, April 17, 2015 1:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] openwrt VM as service

As mentioned earlier in this thread, there are a few VM-based L3
implementations already in neutron - from Cisco (CSR) and Brocade (Vyatta
vRouter). Both extend off the neutron L3 service-plugin framework, and they
both have been decomposed into stackforge in the current Kilo cycle. So all
you need is another L3 service-plugin for openwrt hosted in stackforge. I
don't see any framework-level enhancements required to support this.

However, I do see some value in extracting common elements of these
service-VM implementations - particularly those related to launching the VM,
plumbing the ports, etc. - into a utility library (oslo? tacker?).

- Sridhar

On Thu, Apr 16, 2015 at 2:06 PM, Sławek Kapłoński 
sla...@kaplonski.pl wrote:
Hello,

--
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

On Wed, Apr 15, 2015 at 11:06:49PM +0200, Salvatore Orlando wrote:
 I think this work falls into the service VM category.

 openwrt, unlike other service VMs used for networking services (like
 cloudstack's router vm), is very lightweight, and it's fairly easy to
 provision such VMs on the fly. It should also be easy to integrate with an
 ML2 control plane or even with other plugins.

 It is a decent alternative to the l3 agent, and possibly to the dhcp agent
 as well. As I see this as an alternative to part of the reference control
 plane, I expect it to provide its own metadata proxy. The only change in
 neutron would be some sort of configurability in the metadata proxy
 launcher (assuming you do not provide DHCP as well via openwrt, in which
 case the problem would probably not exist).

 It's not my call whether this should live in neutron or not. My vote
 is no - simply because I believe that neutron is not a control plane, and
 everything that is a control plane, or integrates with it, should live
 outside of neutron, including our agents.

 On the other hand, I don't really see what the 'aaS' part of this is.
 You're not exposing anything as a service specific to openwrt, are you?

I was only describing my (maybe not good) idea: instead of openwrt as a
service which provides router functionality in a VM, maybe it would be
better to provide some mechanism that makes it possible to connect
different VMs which provide such router functionality (that is the
'service' part here).


 Salvatore



 On 15 April 2015 at 22:06, Sławek Kapłoński sla...@kaplonski.pl wrote:

  Hello,
 
  I agree. IMHO it should maybe be something like *aaS deployed on a VM. I
  think that Octavia is something like that for LBaaS now.
  Maybe it could be something like RouteraaS, which would provide all such
  functions in a VM?
 
  --
  Best regards / Pozdrawiam
  Sławek Kapłoński
  sla...@kaplonski.pl
 
  On Wed, Apr 15, 2015 at 11:55:06AM -0500, Dean Troyer wrote:
   On Wed, Apr 15, 2015 at 2:37 AM, Guo, Ruijing ruijing@intel.com wrote:

    I'd like to propose openwrt VM as service.

    What's openWRT VM as service:

    a) Tenant can download openWRT VM from http://downloads.openwrt.org/
    b) Tenant can create WAN interface from external public network
    c) Tenant can create private network and create instance from private
       network
    d) Tenant can configure openWRT for several services including DHCP,
       route, QoS, ACL and VPNs.

   So first off, I'll be the first one in line to promote using OpenWRT for
   the basis of appliances for this sort of thing.  I use it to overcome the
   'joy' of VirtualBox's local networking and love what it can do in 64M RAM.

   However, what you are describing are services, yes, but I think to focus
   on the OpenWRT part of it is missing the point.  For example, Neutron has
   a VPNaaS already, but I agree it can also be built using OpenWRT and
   OpenVPN.  I don't think it is a stand-alone service though; using a
   combination of Heat/{ansible|chef|puppet|salt}/any other
   deployment/orchestration can get you there.  I have a shell script
   somewhere for doing exactly that on AWS from way back.

   What I've always wanted was an image builder that would customize the
   packages pre-installed.  This would be especially useful for disposable
   ramdisk-only or JFFS images that really can't install additional
   packages.  Such a front-end to the SDK/imagebuilder sounds like about
   half of what you are talking about above.

   Also, FWIW, a while back I 

Re: [openstack-dev] [neutron] Neutron scaling datapoints?

2015-04-17 Thread joehuang
Hi, Attila,

Only addressing the issue of agent status/liveness management is not enough
for Neutron scalability. The impact of concurrent dynamic load at large
scale (for example, 100k managed nodes with dynamic load like security
group rule updates, routers_updated, etc.) should also be taken into
account. So even if agent status/liveness management is improved in
Neutron, that doesn't mean the scalability issue is totally addressed.

On the other hand, Nova already supports several segregation concepts, for
example Cells and Availability Zones. If there are 100k nodes to be managed
by one OpenStack instance, it's impossible to work without hardware
resource segregation. It's weird to put the agent liveness manager in
availability zone (AZ for short) 1, but all managed agents in AZ 2: if AZ 1
is powered off, then all agents in AZ 2 lose management.

The benchmark is already here - a test report for million-port scalability
of Neutron:
http://www.slideshare.net/JoeHuang7/test-report-for-open-stack-cascading-solution-to-support-1-million-v-ms-in-100-data-centers

The cascading approach may not be perfect, but at least it provides a
feasible way if we really want scalability.

I am also working to evolve OpenStack, based on cascading, toward a world
with no need to worry about OpenStack scalability issues:

Tenant-level virtual OpenStack service over hybrid, federated, or multiple
OpenStack-based clouds:

There are lots of OpenStack-based clouds; each tenant will be allocated one
cascading OpenStack as its virtual OpenStack service, with a single
OpenStack API endpoint served for this tenant. The tenant's resources can
be distributed or dynamically scaled across multiple OpenStack-based
clouds; these clouds may be federated with Keystone, use a shared Keystone,
or even be OpenStack clouds built on AWS, Azure, or VMware vSphere.

Under this deployment scenario, unlimited scalability in a cloud can be
achieved: there is no unified cascading layer, and tenant-level resource
orchestration among multi-OpenStack clouds is fully distributed (even
geographically). The database and load for one cascading OpenStack are very
small, making disaster recovery and backup easy. Multiple tenants may share
one cascading OpenStack to reduce resource waste, but the principle is to
keep the cascading OpenStack as thin as possible.

You can find the information here:
https://wiki.openstack.org/wiki/OpenStack_cascading_solution#Use_Case

Best Regards
Chaoyi Huang ( joehuang )

-Original Message-
From: Attila Fazekas [mailto:afaze...@redhat.com] 
Sent: Thursday, April 16, 2015 3:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?





- Original Message -
 From: joehuang joehu...@huawei.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Sunday, April 12, 2015 3:46:24 AM
 Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?
 
 
 
 As Kevin is talking about agents, I want to remind everyone that in the
 TCP/IP stack, a port (not a Neutron port) is a two-byte field, i.e. ports
 range from 0 to 65535, supporting a maximum of 64k port numbers.
 
 
 
 "above 100k managed node" means more than 100k L2 agents/L3 agents...
 will be alive under Neutron.
 
 
 
 I want to know the detailed design of how to support a 99.9% possibility
 of scaling Neutron in this way; a PoC and tests would be good support for
 this idea.
 

Would you consider as a PoC something which uses the technology in a
similar way, with a similar port-security problem, but with a lower-level
API than neutron currently uses?

Is this an acceptable flaw:
if you kill -9 the q-svc once at the `right` millisecond, the rabbitmq
memory usage increases by ~1MiB? (Rabbit usually eats ~10GiB under
pressure.) The memory can be freed without a broker restart; it also gets
freed on agent restart.


 
 
 I'm 99.9% sure that, for scaling above 100k managed nodes, we do not
 really need to split OpenStack into multiple smaller OpenStacks, or use a
 significant number of extra controller machines.
 
 
 
 Best Regards
 
 
 
 Chaoyi Huang ( joehuang )
 
 
 
 From: Kevin Benton [blak...@gmail.com]
 Sent: 11 April 2015 12:34
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?
 
 Which periodic updates did you have in mind to eliminate? One of the 
 few remaining ones I can think of is sync_routers but it would be 
 great if you can enumerate the ones you observed because eliminating 
 overhead in agents is something I've been working on as well.
 
 One of the most common is the heartbeat from each agent. However, I
 don't think we can eliminate them, because they are used to determine
 if the agents are still alive for scheduling purposes. Did you have
 something else in mind to determine if an agent is alive?
 
 On Fri, Apr 10, 

[openstack-dev] [all] Liberty Design Summit - Proposed room / time layout

2015-04-17 Thread Thierry Carrez
Hi PTLs,

Following the slot allocation last week, here is the proposed room layout:

https://docs.google.com/spreadsheets/d/1VsFdRYGbX5eCde81XDV7TrPBfEC7cgtOFikruYmqbPY/edit?usp=sharing

It takes into account the following changes:
- Barbican donates one fishbowl room to Security
- Magnum takes one of the available Friday-afternoon sprints

You'll notice that we still have 3 unallocated Friday afternoon sprints,
as well as an unallocated fishbowl room. Ceilometer expressed interest
in having the latter (which would require some shuffling around since
they have a work session at that time). At this point I prefer to keep
it for late adjustments.

If you see major problems with the proposed layout let me know. I may
have missed a blatant issue. It's a bit difficult to find a combination
that works for everyone, especially when you add conference talk
conflicts in the mix, but I'll do my best to accommodate reported issues.

Barring major issues we'll push this to the Design Summit sched and let
PTLs update the content there as they refine content.

Regards,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][plugin] Location of 6.1 source code for nova and neutron

2015-04-17 Thread Emma Gordon (projectcalico.org)
Thanks Sergii.

From: Sergii Golovatiuk [mailto:sgolovat...@mirantis.com]
Sent: 16 April 2015 20:35
To: OpenStack Development Mailing List (not for usage questions)
Cc: Joe Marshall (projectcalico.org); Neil Jerram (projectcalico.org)
Subject: Re: [openstack-dev] [fuel][plugin] Location of 6.1 source code for 
nova and neutron

Hi,

http://obs-1.mirantis.com:82/trusty-fuel-6.1-stable/ubuntu/ is a repository
for the build system. It's not the same as
http://mirror.fuel-infra.org/mos/ubuntu/pool/main/n/nova/, as packages
should pass acceptance, performance, and security testing. Once all
validations are done, the package will finally appear in the mirror after
its long journey.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Thu, Apr 16, 2015 at 4:02 PM, Emma Gordon (projectcalico.org)
e...@projectcalico.org wrote:
Hi Roman,

Thanks for the links. In the meantime I found 
http://obs-1.mirantis.com:82/trusty-fuel-6.1-stable/ubuntu/ - are the files 
there the same thing?

Thanks,
Emma

From: Roman Vyalov [mailto:rvya...@mirantis.com]
Sent: 16 April 2015 14:11
To: OpenStack Development Mailing List (not for usage questions)
Cc: Joe Marshall (projectcalico.org); Neil Jerram (projectcalico.org)
Subject: Re: [openstack-dev] [fuel][plugin] Location of 6.1 source code for 
nova and neutron

Hi Emma,
Sources for nova: http://mirror.fuel-infra.org/mos/ubuntu/pool/main/n/nova/
Sources for neutron: 
http://mirror.fuel-infra.org/mos/ubuntu/pool/main/n/neutron/

On Thu, Apr 16, 2015 at 1:26 PM, Emma Gordon (projectcalico.org)
e...@projectcalico.org wrote:
I’m working on a calico plug-in for fuel and am looking for the orig.tar.gz and 
debian.tar.gz source code tar files for nova and neutron for 6.1 so that I can 
rebuild the packages with the changes required for calico. Can anyone tell me 
where I can find these?

Thanks,
Emma


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] why is evacuate marked as missing for libvirt?

2015-04-17 Thread Markus Zoeller
Daniel P. Berrange berra...@redhat.com wrote on 04/15/2015 11:35:39 AM:

 From: Daniel P. Berrange berra...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: 04/15/2015 11:42 AM
 Subject: Re: [openstack-dev] [nova] why is evacuate marked as missing 
 for libvirt?
 
 On Tue, Apr 14, 2015 at 01:44:45PM -0400, Russell Bryant wrote:
  On 04/14/2015 12:22 PM, Matt Riedemann wrote:
   This came up in IRC this morning, but the hypervisor support matrix is
   listing evacuate as 'missing' for the libvirt driver:

   http://docs.openstack.org/developer/nova/support-matrix.html#operation_evacuate

   Does anyone know why that is?  The rebuild method in the compute manager
   just re-uses other virt driver operations so by default it's implemented
   by all drivers.  The only one that overrides rebuild for evacuate is the
   ironic driver.

  I think it's a case where there are a couple of different things
  referred to with 'evacuate'.  I believe this was originally added to
  track something that was effectively XenServer specific and the
  description of the feature seems to reflect that.  We've since added the
  more generic 'evacuate' API, so it's pretty confusing.  It should
  probably be reworked to track which drivers work with the 'evacuate' API
  call, and perhaps have a different entry for whatever this different
  XenServer thing is (or was).

 Yep, if there's any mistakes or bizarre things in the support matrix
 just remember that the original wiki page had essentially zero information
 about what each feature item was referring to - just the two/three word
 feature name. When I turned it into formal docs I tried to add better
 explanations, but it is entirely possible my interpretations were wrong
 in places. So if in doubt assume the support matrix is wrong, and just
 send a review to update it to some saner state with better description
 of the feature. Pretty much all the features in the matrix could do
 with better explanations and/or being broken up into finer grained
 features - there's plenty of scope for people to submit patches to
 improve the granularity of items.

I think the confusion is caused by something called the host
maintenance mode [1]. When this is enabled, an evacuation is triggered
by the underlying hypervisor. This mode can be set via the CLI [2]
and is not implemented by the libvirt driver.
The probably intended API for the evacuate feature is [3], which can
be triggered via the CLI with:
* nova evacuate server
* nova host-evacuate host
* nova host-evacuate-live host

The evacuate feature thereby has a dependency on live-migration. As
the system z platform doesn't yet have [4] merged, evacuate is partial
there [5] (a TODO for me), whereas for x86 it should be complete.
Please correct me if I'm wrong here.

Unfortunately I couldn't find any tempest tests for the evacuate
feature, so I tested it manually.

[1] virt.driver.ComputeDriver.host_maintenance_mode(self, host, mode)

https://github.com/openstack/nova/blob/2015.1.0rc1/nova/virt/driver.py#L1016
[2] Nova CLI; command nova host-update

http://docs.openstack.org/cli-reference/content/novaclient_commands.html
[3] nova.api.openstack.compute.contrib.evacuate

https://github.com/openstack/nova/blob/2015.1.0rc1/nova/api/openstack/compute/contrib/evacuate.py
[4] libvirt: handle NotSupportedError in compareCPU
https://review.openstack.org/#/c/166130/
[5] Update hypervisor support matrix with column for kvm on system z
https://review.openstack.org/#/c/172391/

Regards,
Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Using DevStack in Kilo

2015-04-17 Thread Sean Dague
glanceclient has always been that way; it's terrible. :(

The nova message timeout looks like it's because nova-cert is not
running, and it keeps trying to ping it. What actions were you doing
that triggered that path?

On 04/16/2015 07:23 PM, Danny Choi (dannchoi) wrote:
 When use devstack with Kilo, after stack.sh, the following tracebacks are 
 logged:
 
 localadmin@qa4:/opt/stack/logs$ grep -r Traceback *
 g-api.log:2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client 
 Traceback (most recent call last):
 g-api.log:2015-04-16 14:16:20.237 TRACE glance.registry.client.v1.client 
 Traceback (most recent call last):
 g-api.log:2015-04-16 14:16:21.973 TRACE glance.registry.client.v1.client 
 Traceback (most recent call last):
 g-api.log:2015-04-16 14:16:24.538 TRACE glance.registry.client.v1.client 
 Traceback (most recent call last):
 g-api.log.2015-04-16-141127:2015-04-16 14:16:17.996 TRACE 
 glance.registry.client.v1.client Traceback (most recent call last):
 g-api.log.2015-04-16-141127:2015-04-16 14:16:20.237 TRACE 
 glance.registry.client.v1.client Traceback (most recent call last):
 g-api.log.2015-04-16-141127:2015-04-16 14:16:21.973 TRACE 
 glance.registry.client.v1.client Traceback (most recent call last):
 g-api.log.2015-04-16-141127:2015-04-16 14:16:24.538 TRACE 
 glance.registry.client.v1.client Traceback (most recent call last):
 n-api.log:2015-04-16 14:18:28.163 TRACE nova.api.openstack Traceback (most 
 recent call last):
 n-api.log:2015-04-16 14:19:34.470 TRACE nova.api.openstack Traceback (most 
 recent call last):
 n-api.log:2015-04-16 14:20:39.624 TRACE nova.api.openstack Traceback (most 
 recent call last):
 n-api.log:2015-04-16 14:21:44.879 TRACE nova.api.openstack Traceback (most 
 recent call last):
 n-api.log:2015-04-16 14:22:49.676 TRACE nova.api.openstack Traceback (most 
 recent call last):
 n-api.log:2015-04-16 14:23:55.475 TRACE nova.api.openstack Traceback (most 
 recent call last):
 n-api.log.2015-04-16-141127:2015-04-16 14:18:28.163 TRACE nova.api.openstack 
 Traceback (most recent call last):
 n-api.log.2015-04-16-141127:2015-04-16 14:19:34.470 TRACE nova.api.openstack 
 Traceback (most recent call last):
 n-api.log.2015-04-16-141127:2015-04-16 14:20:39.624 TRACE nova.api.openstack 
 Traceback (most recent call last):
 n-api.log.2015-04-16-141127:2015-04-16 14:21:44.879 TRACE nova.api.openstack 
 Traceback (most recent call last):
 n-api.log.2015-04-16-141127:2015-04-16 14:22:49.676 TRACE nova.api.openstack 
 Traceback (most recent call last):
 n-api.log.2015-04-16-141127:2015-04-16 14:23:55.475 TRACE nova.api.openstack 
 Traceback (most recent call last):
 
 Traceback #1:
 
 
 2015-04-16 14:16:17.996 ERROR glance.registry.client.v1.client 
 [req-dc54f751-eb4b-4a03-9146-52623ed733a9 d0feb86a66d54df8b4aff1848c35729e 
 8afd7feb78ee445b8d642a66c96d49d5] 
 Registry client request GET /images/cirros-0.3.3-x86_64-uec-kernel raised 
 NotFound
 2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client Traceback 
 (most recent call last):
 2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client   File 
 /opt/stack/glance/glance/registry/client/v1/client.py, line 117, in 
 do_request
 2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client **kwargs)
 2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client   File 
 /opt/stack/glance/glance/common/client.py, line 71, in wrapped
 2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client return 
 func(self, *args, **kwargs)
 2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client   File 
 /opt/stack/glance/glance/common/client.py, line 376, in do_request
 2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client 
 headers=copy.deepcopy(headers))
 2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client   File 
 /opt/stack/glance/glance/common/client.py, line 88, in wrapped
 2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client return 
 func(self, method, url, body, headers)
 2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client   File 
 /opt/stack/glance/glance/common/client.py, line 523, in _do_request
 2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client raise 
 exception.NotFound(res.read())
 2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client NotFound: 404 
 Not Found
 2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client 
 2015-04-16 14:16:17.996 TRACE glance.registry.client.v1.client The resource 
 could not be found.
 
 Traceback #2:
 ==
 
 2015-04-16 14:17:28.159 INFO oslo_messaging._drivers.impl_rabbit 
 [req-78c0070a-2a89-452f-8f3b-7aa549b9a3ed admin admin] Connected to AMQP 
 server on 172.29.172.1
 61:5672
 2015-04-16 14:18:28.163 ERROR nova.api.openstack 
 [req-78c0070a-2a89-452f-8f3b-7aa549b9a3ed admin admin] Caught error: Timed 
 out waiting for a reply to message I
 D 79c4306de3fc46aa986664735573100b
 2015-04-16 14:18:28.163 TRACE nova.api.openstack Traceback (most 

[openstack-dev] VM migration between two data centers

2015-04-17 Thread Abhishek Talwar/HYD/TCS
Hi Folks,

I have created two data centers and I am using OpenStack as the management
platform for them. So now my question is: is it possible to migrate VM
instances from one data center to the other?

Can two OpenStack clouds migrate VMs to each other?



Thanks and Regards
Abhishek Talwar
=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain 
confidential or privileged information. If you are 
not the intended recipient, any dissemination, use, 
review, distribution, printing or copying of the 
information contained in this e-mail message 
and/or attachments to it are strictly prohibited. If 
you have received this communication in error, 
please notify us by reply e-mail or telephone and 
immediately and permanently delete the message 
and any attachments. Thank you


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [wsme] [ironic] [ceilometer] [magnum] [kite] [tuskar] WSME unmaintained ?

2015-04-17 Thread Chris Dent

On Fri, 17 Apr 2015, Julien Danjou wrote:


After using Pecan for a while, I'm leaning toward explicit routing too.
This Falcon API looks right too.
Honestly, Pecan is messy. Most of the time, when coding, you have no clue
which one of the methods is going to be picked in which controller class.


Yes, and there's no straightforward way (that I can find) to dump
the available routes (which is handy for testing, debugging,
documenting, etc.).

While Falcon's style is a clear step in the right direction, I think
it would be even better with a slight extension (which may already
exist; if it doesn't, it would be easy to make): load the route mappings
from a file so you (as a human) can have a single index into the code.
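
A sketch of that extension, assuming a plain JSON file that maps path
templates to resource class names (all names here are illustrative):

    import json

    import falcon

    class ThingsResource:
        def on_get(self, req, resp, user_id):
            resp.status = falcon.HTTP_200

    # routes.json: {"/{user_id}/things": "ThingsResource"}
    app = falcon.API()
    with open('routes.json') as f:
        for path, class_name in json.load(f).items():
            # Resolve the class by name in this module; a real version
            # would import dotted paths instead of using globals().
            app.add_route(path, globals()[class_name]())

The routes file then doubles as the single, human-readable index into the
code.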

A few years ago I was completely in love with selector[1] because
all it did was delegate an incoming WSGI request to one of several
WSGI callables based on request method and parameterized paths.

[1] https://github.com/lukearno/selector/
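
The core idea is small enough to sketch without the library: one WSGI
callable that picks another WSGI callable from a table keyed on request
method and a parameterized path. This is an illustration of the concept,
not selector's actual API:

    import re

    class Dispatcher(object):
        def __init__(self):
            # Each entry is (compiled path regex, {METHOD: wsgi_app}).
            self.routes = []

        def add(self, template, **method_map):
            # '/{user_id}/things' becomes r'/(?P<user_id>[^/]+)/things$'.
            pattern = re.sub(r'{(\w+)}', r'(?P<\1>[^/]+)', template) + '$'
            self.routes.append((re.compile(pattern), method_map))

        def __call__(self, environ, start_response):
            for regex, method_map in self.routes:
                match = regex.match(environ.get('PATH_INFO', ''))
                if match:
                    app = method_map.get(environ['REQUEST_METHOD'])
                    if app is not None:
                        # Hand path parameters to the delegate app per
                        # the wsgiorg.routing_args convention.
                        environ['wsgiorg.routing_args'] = ((), match.groupdict())
                        return app(environ, start_response)
            start_response('404 Not Found', [('Content-Type', 'text/plain')])
            return [b'not found']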

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel]Format of notifications about Ubuntu repositories connectivity

2015-04-17 Thread Maciej Kwiek
Hi,

I am currently implementing a fix for
https://bugs.launchpad.net/fuel/+bug/1439686 .

I plan to notify the user, via Fuel notifications, about nodes which fail
to connect to the Ubuntu repositories. My question is as follows: when I
get the list of nodes which failed the repo connectivity test, do I add one
notification for each node, or can I add one big notification which
contains the names of all the nodes that failed?

What is the general UI strategy for decisions like that?

Cheers,
Maciej Kwiek
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VM migration between two data centers

2015-04-17 Thread Kashyap Chamarthy
On Fri, Apr 17, 2015 at 04:05:20PM +0530, Abhishek Talwar/HYD/TCS wrote:
 Hi Folks,
 
 I have created two data centers and I am using OpenStack as the
 management platform for them. So now my question is it is possible to
 migrate VM instances from one data center to the other. 

Please ask such questions on http://ask.openstack.org/, though you will
need to rephrase your question with more concrete details.

This mailing list is for development discussions.

 Can two OpenStack clouds migrate VM to each other ?

They can -- look up the documentation available around this.

-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][policy][neutron] oslo.policy API is not powerful enough to switch Neutron to it

2015-04-17 Thread Ihar Hrachyshka

Hi,

tl;dr neutron has special semantics for policy targets that relies on
private symbols from oslo.policy, and it's impossible to introduce
this semantics into oslo.policy itself due to backwards compatibility
concerns, meaning we need to expose some more symbols as part of
public API for the library to facilitate neutron switch to it.

===

oslo.policy graduated during Kilo [1]. Neutron considered the
switch to it [2], but failed to achieve it because some library
symbols that were originally public (or at least looked public)
in policy.py from oslo-incubator became private in oslo.policy.
Specifically, Neutron policy code [3] relies on the following symbols,
which are now hidden inside oslo_policy._checks (note the underscore in
the module name, which suggests we cannot use the module directly):

- RoleCheck
- RuleCheck
- AndCheck

Those symbols are used for the following matters:
(all the relevant neutron code is in neutron/policy.py)

1. debug logging in case policy does not authorize an action
(RuleCheck, AndCheck) [log_rule_list]

2. filling in admin context with admin roles (RoleCheck, RuleCheck,
AndCheck/OrCheck internals) [get_admin_roles]

3. aggregating core, attribute and subattribute policies (RuleCheck,
AndCheck) [_prepare_check]


== 1. debug logging in case policy does not authorize an action ==

Neutron logs the rules that failed to match if the policy module does not
authorize an action. I'm not sure whether Neutron developers really want
to keep those debug logs, or whether we can just kill them to avoid this
specific usage of private symbols; though it also seems that we could
easily use __str__, which is present for all types of Checks, instead.
So it does not look like a blocker for the switch.


== 2. filling in admin context with admin roles ==

The admin context object is filled with a .roles attribute that is a list
of roles considered to grant admin permissions [4]. The attribute would
then be used by plugins that would like to do explicit policy checks.
As per Salvatore, this attribute can probably be dropped now that all
plugins and services don't rely on it (Salvatore mentioned the lbaas
mixins as the ones that previously relied on it, but they no longer do
since the service split from the neutron tree (?)).

The problem with dropping the .roles attribute from context object in
Liberty is that we, as a responsible upstream with lots of plugins
maintained out-of-tree (see the ongoing vendor decomposition effort)
would need to support the attribute while it's marked as deprecated
for at least one cycle, meaning that if we don't get those oslo.policy
internals we rely on in Liberty, we would need to postpone the switch
till Mizzle, or rely on private symbols during the switch (while a new
release of oslo.policy can easily break us).

(BTW the code to extract admin roles is not really robust and has
bugs, e.g. it does not handle AndChecks that could be used in
context_is_admin. In theory, 'and' syntax would mean that both roles
are needed to claim someone is an admin, while the code to extract
admin roles handles 'and' the same way as 'or'. For the deprecation
period, we may need to document this limitation.)


== 3. aggregating core, attribute and subattribute policies ==

That's the most interesting issue.

For oslo.policy, policies are described as target: rule, where rule
is interpreted as per registered checks, while target is opaque to the
library.

Neutron extended the syntax for target as:
target[:attribute[:subattribute]].

If attribute is present in a policy entry, it applies to the target iff
the attribute is set, 'enforce_policy' is set in the attribute map for the
attribute in question, and the target is not read-only (= its name does
not start with get_).

If subattribute is present, the rule applies to the target if 'validate'
is set in the attribute map for the attribute and its type is dict, plus
all the requirements for :attribute described above.

Note that those rules are *aggregated* into a single matching rule with
AndCheck. For example, if the action is create_network and provider is set
in the target, then the actual rule validated would be all the rules for
create_network, create_network:provider, and
create_network:provider:physical_network, joined into a single rule with
AndCheck (meaning the target should conform to all of those requirements).
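
In code, the aggregation neutron performs looks roughly like the sketch
below; note that it can only be written by importing the private module,
which is exactly the problem this mail is about:

    # oslo_policy._checks is a private module: depending on it is the
    # crux of the issue described here.
    from oslo_policy import _checks

    rules_to_match = [
        _checks.RuleCheck('rule', 'create_network'),
        _checks.RuleCheck('rule', 'create_network:provider'),
        _checks.RuleCheck('rule', 'create_network:provider:physical_network'),
    ]
    # The target must satisfy all of the aggregated rules at once.
    match_rule = _checks.AndCheck(rules_to_match)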

This is a significant extension of oslo.policy's original intent.

Originally, I thought that we would be able to introduce neutron's
policy semantics into oslo.policy, and just switch to it once it's
there. But there is a problem with that approach. Other projects (like
nova [5]) already use similar syntax for their policy targets, while
not putting such semantics on top of what oslo.policy provides (which
is basically nothing, since the target is not interpreted in any special
way). AFAIU the way those projects use this syntax does not introduce
any new *meaning*; it's used for mere convenience to 

[openstack-dev] [nova-docker][ceilometer][heat] Autoscaling docker in openstack

2015-04-17 Thread ashish.jain14

Hi,

I have been working on running docker on openstack. I had discussions on
multiple IRC channels, and IIUC there are 5 different ways of running
docker on openstack. IIUC there is currently no way to autoscale docker on
openstack. Please correct me if I am wrong.


1) Using nova-docker driver - Running docker as a Nova::Server using 
nova-docker hypervisor
2) Using nova-plugin for heat - Running docker using 
DockerInc::Docker::Container
3) Using magnum - IIUC no automation as of now, manually it is possible. Not 
enough documentation available
4) heat compose - Saw some samples available 
@https://github.com/openstack/heat-templates/tree/master/hot/software-config/elements/heat-config-docker-compose
5) Swarm support - Still in development

Issues with each on the above approaches

1) Using the nova-docker driver - IIUC there is no way for ceilometer to
collect and emit statistics for the docker hypervisor, which means
ceilometer does not have any stats available once you switch to the docker
driver. This link
(https://github.com/openstack/ceilometer/tree/master/ceilometer/compute/virt)
currently does not have anything for a docker hypervisor.

2) Using the nova plugin for heat - using this approach, docker containers
run on a Nova VM. However, I do not see any illustration which suggests
that you can autoscale using this approach.

3) Using magnum - currently only possible by invoking it manually.

4) heat compose - the sample available at the above link just talks about
deploying, but says nothing about auto scaling.

5) Swarm support - still in development.

While I understand that some of these options may enable us to autoscale
docker on openstack in a future release, right now I feel option #1 is
(probably) the most mature, and by plugging in a ceilometer inspector for
the docker hypervisor it may be possible. Another approach could be to use
cfn-push-stats to push some stats from a docker container. (A rough sketch
of the inspector idea follows.)
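
For a feel of what such an inspector would have to do, here is a rough
sketch that reads a container's CPU usage straight from the cgroup files
docker populates. It is purely illustrative: the cgroup path depends on the
cgroup driver in use, and the real work would be wiring this into
ceilometer's compute.virt inspector interface:

    import os

    # Default cgroupfs layout used by docker; an assumption about the
    # host, not something ceilometer provides.
    CGROUP_CPUACCT = '/sys/fs/cgroup/cpuacct/docker'

    def container_cpu_time_ns(container_id):
        # cpuacct.usage holds the total CPU time consumed by the
        # container's cgroup, in nanoseconds.
        path = os.path.join(CGROUP_CPUACCT, container_id, 'cpuacct.usage')
        with open(path) as f:
            return int(f.read().strip())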

Please advise on the best way, for the time being, to achieve auto scaling
for docker on openstack. I am ready to contribute to it in the best
possible way.

Regards
Ashish






The information contained in this electronic message and any attachments to 
this message are intended for the exclusive use of the addressee(s) and may 
contain proprietary, confidential or privileged information. If you are not the 
intended recipient, you should not disseminate, distribute or copy this e-mail. 
Please notify the sender immediately and destroy all copies of this message and 
any attachments. WARNING: Computer viruses can be transmitted via email. The 
recipient should check this email and any attachments for the presence of 
viruses. The company accepts no liability for any damage caused by any virus 
transmitted by this email. www.wipro.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] PTL Election Conclusion and Results

2015-04-17 Thread Tristan Cacqueray
Thank you to the electorate, to all those who voted, and to all
candidates who put their names forward for PTL in this election. A
healthy, open process breeds trust in our decision-making capability;
thank you to all those who make this process possible.

Now for the results of the PTL election process, please join me in
extending congratulations to the following PTLs:

* Compute (Nova)
** John Garbutt
* Object Storage (Swift)
** John Dickinson
* Image Service (Glance)
** Nikhil Komawar
* Identity (Keystone)
** Morgan Fainberg
* Dashboard (Horizon)
** David Lyle
* Networking (Neutron)
** Kyle Mestery
* Block Storage (Cinder)
** Mike Perez
* Metering/Monitoring (Ceilometer)
** Gordon Chung
* Orchestration (Heat)
** Steve Baker
* Database Service (Trove)
** Nikhil Manchanda
* Bare metal (Ironic)
** Devananda van der Veen
* Common Libraries (Oslo)
** Davanum Srinivas
* Infrastructure
** James E. Blair
* Documentation
** Lana Brindley
* Quality Assurance (QA)
** Matthew Treinish
* Deployment (TripleO)
** James Slagle
* Release cycle management
** Thierry Carrez
* Message Service (Zaqar)
** Flavio Percoco
* Data Processing Service (Sahara)
** Sergey Lukjanov
* Key Management Service (Barbican)
** Douglas Mendizabal
* DNS Services (Designate)
** Kiall Mac Innes
* Shared File Systems (Manila)
** Ben Swartzlander
* Command Line Client (OpenStackClient)
** Dean Troyer
* OpenStack Containers Service (Magnum)
** Adrian Otto
* Application Catalog (Murano)
** Serg Melikyan
* Non-Domain Specific Policy Enforcement (Congress)
** Tim Hinrichs

Election Results:
* Nova:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_4a879ff581b99e7a
* Glance:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_13a14da9986cac8c

Shortly I will post the announcement opening TC nominations and then we
are into the TC election process.

Thank you to all involved in the PTL election process,
Tristan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Using DevStack in Kilo

2015-04-17 Thread Sean Dague
On 04/17/2015 06:44 AM, Jens Rosenboom wrote:
 
 
 2015-04-17 12:23 GMT+02:00 Sean Dague s...@dague.net
 mailto:s...@dague.net:
 
  glanceclient has always been that way; it's terrible. :(

  the nova message timeout looks like it's because nova-cert is not
  running, and it keeps trying to ping it. What actions were you doing
  that triggered that path?
 
 
 I think this may be https://bugs.launchpad.net/devstack/+bug/1441348.
 As a workaround, you can try adding
 
 ENABLED_SERVICES+=,n-crt
 
 into your config.

Yeh, that looks like the issue, I didn't realize there was a bug for it.

I just approved the fix for it: https://review.openstack.org/#/c/173493

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API WG Meeting Time

2015-04-17 Thread michael mccune

On 04/16/2015 05:25 PM, Everett Toews wrote:


It would be good to hear from those in Asia, Australia, and Europe on the 
subject.



+1


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][NetApp plugin] Samuel Bartel added as a maintainer

2015-04-17 Thread Sebastian Kalinowski
Hello,

Today we added Samuel Bartel as a core reviewer for the NetApp plugin [1].
He will now be the main person leading the plugin development.

Congrats!

Best,
Sebastian

[1] https://github.com/stackforge/fuel-plugin-cinder-netapp
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-17 Thread Fox, Kevin M
'Complex' is kind of the wrong word to describe the deployer complaint.
It's the learning curve: to debug issues, I have to learn something new,
and I don't want to because I don't believe I need that feature. I get it.
I really do. But there are three actors here, not just one: the deployer,
the app developer, and the user... the deployer often doesn't need it, but
the user/developer does.

I'm convinced OpenStack is a data center operating system. Linux has many
parallels. The Linux sysadmin is the deployer, the userspace app developer
is the OpenStack app developer, syscalls are the REST API... The main odd
difference in the parallel is that in Linux you have to purposefully
compile out major kernel subsystems, while in OpenStack you must install
them separately.

But think of it this way: what would you get if a third of the Linux
population compiled out the Unix pipe subsystem? That's the close parallel.
Shell scripting changes drastically from something that looks like Unix to
something that looks like Windows pre-PowerShell. The whole development
model for the app developers changes. The same is true in OpenStack: if you
leave out NaaS, you end up with heat templates that are very different.

By not wanting to learn something new, the deployers are forcing the
complexity of this fractured operating system target onto the users (who
should long term greatly outnumber deployers). The ecosystem the app
developers create on top is worse off because they have to target the
lowest common denominator.

In the end, any operating system lives or dies by the ecosystem of apps
built on top of it. My opinion is that the continued push for not
supporting NaaS is weakening the ecosystem. We should make sure neutron
will scale the way deployers need, then deprecate nova-network. Making it
too easy to gut the functionality of large subsystems, to cater to
deployers not wanting to learn something new, will hurt everyone in the
long run. Apps don't get written, and the deployers have less reason to
deploy due to lack of demand.

Let's focus on a strong OpenStack ecosystem.

Thanks,
Kevin


From: Kevin Benton
Sent: Thursday, April 16, 2015 9:17:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in 
DevStack [was: Status of the nova-network to Neutron migration work]


What do you disagree with? I was pointing out that using Linux bridge will not 
reduce the complexity of the model of self-service networking, which is what 
the quote was complaining about.

I just wanted to point out that one of the 'major disconnects' as I understand 
it will not get any better by switching drivers.

On 2015-04-16 18:34:40 -0700 (-0700), Kevin Benton wrote:
[...]
 This is referring to the complexity of the API model for Neutron.
 While this is a problem that I hope to address by making things
 like shared networks more useful, it's not really relevant to this
 particular discussion because the model complexity does not
 decrease with the switch to Linux bridge.

I disagree. Complexity for the operator includes more than the API.
It also includes troubleshooting what just went wrong when your
network decided to do a headstand. As a datapoint, compare the ease
of connecting tcpdump to a bridge interface vs the ease of
connecting tcpdump to an OVS instance.
--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [opentack-dev][meetings] Proposing changes in Rally meetings

2015-04-17 Thread Andrey Kurilin
  - We should start making agenda for each meeting and publish it to Rally
wiki

+1

 * Second is a release management meeting, where we discuss priorities for
   the current & next release, so the core team will know what to review
   first.

It would be nice to post a list of high-priority patches to an etherpad or
Google doc after each meeting

  - Move meetings from #openstack-meeting to #openstack-rally chat.

doesn't matter for me:)

   - We should adjust better time for current Rally team.

yeah. Current time is not good:( +1 for 15:00 UTC

  - Do meetings every Monday and Wednesday

Monday?) Monday is a very hard day...

On Fri, Apr 17, 2015 at 4:26 PM, Boris Pavlovic bo...@pavlovic.me wrote:

 Rally team,


 I would like to propose next changes in Rally meetings:

   - We should start making agenda for each meeting and publish it to Rally
 wiki

   - We should do 2 meeting per week:

  * First is regular meeting (like we have now) where we are discussing
 everything

  * Second is release management meeting, where we are discussing
 priorities for
current & next release. So core team will know what to review
 first.

   - Move meetings from #openstack-meeting to #openstack-rally chat.

   - We should adjust better time for current Rally team. Like at the
 moment it is too late
  for few of cores in Rally. it's 17:00 UTC and I would like to suggest
 to make at 15:00 UTC.

   - Do meetings every Monday and Wednesday


 Thoughts ?


 Best regards,
 Boris Pavlovic




-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Building images separation and moving images into right place at right time

2015-04-17 Thread Jaromir Coufal
Oh, I see it already got split (not in this patch)! Excellent. Thanks 
for prodding me, Jay; it actually helped. Many thanks!


-- Jarda

On 17/04/15 16:00, Jay Dobies wrote:

Have you seen Dan's first steps towards splitting the overcloud image
building out of devtest_overcloud? It's not the same thing that you're
talking about, but it might be a step in that direction.

https://review.openstack.org/#/c/173645/

On 04/17/2015 09:50 AM, Jaromir Coufal wrote:

Hi All,

at the moment we are building discovery, deploy and overcloud images all
at once. Then we face user to deal with uploading all images at one step.

User should not be exposed to discovery/deploy images. This should
happen automatically for the user during undercloud installation as
post-config step, so that undercloud is usable.

Once user installs undercloud (and have discovery & deploy images at
their place) he should be able to build / download / create overcloud
images (by overcloud images I mean overcloud-full.*). This is what user
should deal with.

For this we will need to separate building process for discovery+deploy
images and for overcloud images. Is that possible?

-- Jarda

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova-docker][ceilometer][heat] Autoscaling docker in openstack

2015-04-17 Thread Sergey Kraynev
@VACHNIS: yeah, in this case we are blocked by ceilometer. AFAIK, ceilometer
collects metrics from Nova::Server, not from docker directly.
So the mentioned bp makes sense (add support for this feature to ceilometer,
then to heat).


Regards,
Sergey.

On 17 April 2015 at 17:11, VACHNIS, AVI (AVI) 
avi.vach...@alcatel-lucent.com wrote:

  Hi,

 @Ashish, if the limitation you've mentioned for #1 still exists, I join
 your question how heat auto-scale-group may work w/o ceilometer being able
 to collect docker metrics?



 @Sergey, hey. Are you saying that ceilometer do collects metrics on docker
 underlying nova::server resource?







 -Avi



 -- Original message--

 *From: *ashish.jai...@wipro.com

 *Date: *Fri, Apr 17, 2015 4:56 PM

 *To: *openstack-dev@lists.openstack.org;

 *Subject:*Re: [openstack-dev] [nova-docker][ceilometer][heat] Autoscaling
 docker in openstack



 Hi Segey,


  So IIUC approach #2 may still help to autoscale docker on openstack. I
 will try that out and post questions on heat irc thanks.


  Regards

 Ashish
  --
 *From:* Sergey Kraynev skray...@mirantis.com
 *Sent:* Friday, April 17, 2015 7:01 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [nova-docker][ceilometer][heat]
 Autoscaling docker in openstack

   Hi, Ashish.

  Honestly I am not familiar with most part of these ways, but can add
 more information from Heat side (item 2).

  I am surprised, that you have missed Heat autoscaling mechanism (You
 should look it :) ). It's one of the important part of Heat project.
 It allows to scale vms/stacks by using Ceilometer alarms. There are couple
 examples of autoscale templates:


 https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml (with LoadBalancer)

 https://github.com/openstack/heat-templates/blob/master/hot/asg_of_servers.yaml

 https://github.com/openstack/heat-templates/blob/master/hot/asg_of_stacks.yaml
 It's true, that Docker plugin for Heat create docker server on
 Nova::Server resource. So you may write template Docker resource + Server
 resource (similar on third template) and scale by using Ceilometer alarms.
 If you have any questions how to use it, please got to #heat irc channel
 and ask us :)
 Also another way (AFAIK) is to use SoftwareDeployment/Config and deploy
 Server with docker inside (without docker plugin). In this way, I suppose,
 Steve Baker can help with advise :)


 On 17 April 2015 at 16:06, ashish.jai...@wipro.com wrote:


 Hi,

 I have been working on running docker on openstack. I had a discussion on
 multiple IRC and IIUC there are 5 different ways of running docker on
 openstack. IIUC currently there is no way to autoscale docker on openstack.
 Please correct me if I am wrong


 1) Using nova-docker driver - Running docker as a Nova::Server using
 nova-docker hypervisor
 2) Using nova-plugin for heat - Running docker using
 DockerInc::Docker::Container
 3) Using magnum - IIUC no automation as of now, manually it is possible.
 Not enough documentation available
 4) heat compose - Saw some samples available @
 https://github.com/openstack/heat-templates/tree/master/hot/software-config/elements/heat-config-docker-compose
 5) Swarm support - Still in development

 Issues with each on the above approaches

 1) Using nova-docker driver - IIUC there is no way for ceilometer to
 collect and emit statistics for docker hypervisor. So that mean ceilometer
 does not have any stats available once you switch to docker driver.
 This link (
 https://github.com/openstack/ceilometer/tree/master/ceilometer/compute/virt)
 currently does not have anything for docker hypervisor.

 2) Using nova-plugin for heat - Using this approach docker containers run
 on a Nova VM. However I do not see any illustration which suggests that you
 can autoscale using this approach.

 3) Using magnum - Currently only possible by manually invoking it.

 4) heat compose - Sample available at the above link just talks about
 deploying it up but nothing about auto scaling

 5) Swarm Support - Still in dev

 While I understand some of these options may enable us during the future
 release to autoscale docker on openstack. But looking currently I feel
 option #1 is most mature(probably) and by plugging in a ceilometer
 inspector for docker hypervisor it may be possible. Another approach could
 be to using cfn-push-stats to probably push some stats from docker
 container.

 Please advice through your valued suggestions that time being what is the
 best way to achieve auto scaling for docker on openstack. I am ready to
 contribute to it in the best possible way.

 Regards
 Ashish






 The information contained in this electronic message and any attachments
 to this message are intended for the exclusive use of 

[openstack-dev] [TripleO] on supporting multiple implementations of tripleo-heat-templates

2015-04-17 Thread Giulio Fidente

Hi,

the Heat/Puppet implementation of the Overcloud deployment seems to be 
surpassing the Heat/Elements implementation in features.


The changes for Ceph are an example: the Puppet-based version is already 
adding features which don't have their counterpart in the Elements-based one.


Recently we started working on the addition of Pacemaker into the 
Overcloud, to monitor the services and provide a number of 'auto 
healing' features, and again this is happening in the Puppet 
implementation only (at least for now) so I think the gap will become 
bigger.


Given we support different implementations with a single top-level 
template [1], to keep the other templates valid we're forced to propagate 
the params into the Elements-based templates as well, even though there 
is no use for them there; see for example [2].


The extra work itself is not of great concern, but I wonder if it 
wouldn't make sense to deprecate the Elements-based templates at this 
point, instead of continuing to add unused parts there? Thoughts?


1. 
https://github.com/openstack/tripleo-heat-templates/blob/master/overcloud-without-mergepy.yaml

2. https://review.openstack.org/#/c/173773

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova-docker][ceilometer][heat] Autoscaling docker in openstack

2015-04-17 Thread Sergey Kraynev
Hi, Ashish.

Honestly I am not familiar with most of these ways, but I can add more
information from the Heat side (item 2).

I am surprised that you have missed the Heat autoscaling mechanism (you should
look at it :) ). It's one of the important parts of the Heat project.
It allows scaling VMs/stacks by using Ceilometer alarms. There are a couple of
examples of autoscaling templates:

https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml
   (with LoadBalancer)
https://github.com/openstack/heat-templates/blob/master/hot/asg_of_servers.yaml
https://github.com/openstack/heat-templates/blob/master/hot/asg_of_stacks.yaml
It's true that the Docker plugin for Heat creates the docker server on a
Nova::Server resource. So you may write a template with a Docker resource +
Server resource (similar to the third template) and scale by using Ceilometer
alarms.
If you have any questions about how to use it, please go to the #heat irc
channel and ask us :)
Another way (AFAIK) is to use SoftwareDeployment/Config and deploy a Server
with docker inside (without the docker plugin). With that, I suppose,
Steve Baker can help with advice :)
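
A minimal sketch of the kind of template being described, assuming kilo-era
resource types; the image and flavor names are hypothetical:

  cat > asg.yaml <<'EOF'
  heat_template_version: 2014-10-16
  resources:
    asg:
      type: OS::Heat::AutoScalingGroup
      properties:
        min_size: 1
        max_size: 3
        resource:
          type: OS::Nova::Server
          properties:
            image: cirros      # hypothetical image
            flavor: m1.tiny    # hypothetical flavor
    scale_up:
      type: OS::Heat::ScalingPolicy
      properties:
        adjustment_type: change_in_capacity
        auto_scaling_group_id: {get_resource: asg}
        scaling_adjustment: 1
    cpu_alarm_high:
      type: OS::Ceilometer::Alarm
      properties:
        meter_name: cpu_util
        statistic: avg
        period: 60
        evaluation_periods: 1
        threshold: 50
        comparison_operator: gt
        alarm_actions: [{get_attr: [scale_up, alarm_url]}]
  EOF
  heat stack-create -f asg.yaml docker-asg

When the alarm fires, Ceilometer POSTs to the policy's alarm_url and the group
grows by one server; swapping the inner OS::Nova::Server for a nested template
is how a server+docker pair would scale together.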


On 17 April 2015 at 16:06, ashish.jai...@wipro.com wrote:


 Hi,

 I have been working on running docker on openstack. I had a discussion on
 multiple IRC and IIUC there are 5 different ways of running docker on
 openstack. IIUC currently there is no way to autoscale docker on openstack.
 Please correct me if I am wrong


 1) Using nova-docker driver - Running docker as a Nova::Server using
 nova-docker hypervisor
 2) Using nova-plugin for heat - Running docker using
 DockerInc::Docker::Container
 3) Using magnum - IIUC no automation as of now, manually it is possible.
 Not enough documentation available
 4) heat compose - Saw some samples available @
 https://github.com/openstack/heat-templates/tree/master/hot/software-config/elements/heat-config-docker-compose
 5) Swarm support - Still in development

 Issues with each on the above approaches

 1) Using nova-docker driver - IIUC there is no way for ceilometer to
 collect and emit statistics for docker hypervisor. So that mean ceilometer
 does not have any stats available once you switch to docker driver.
 This link (
 https://github.com/openstack/ceilometer/tree/master/ceilometer/compute/virt)
 currently does not have anything for docker hypervisor.

 2) Using nova-plugin for heat - Using this approach docker containers run
 on a Nova VM. However I do not see any illustration which suggests that you
 can autoscale using this approach.

 3) Using magnum - Currently only possible by manually invoking it.

 4) heat compose - Sample available at the above link just talks about
 deploying it up but nothing about auto scaling

 5) Swarm Support - Still in dev

 While I understand some of these options may enable us during the future
 release to autoscale docker on openstack. But looking currently I feel
 option #1 is most mature(probably) and by plugging in a ceilometer
 inspector for docker hypervisor it may be possible. Another approach could
 be to using cfn-push-stats to probably push some stats from docker
 container.

 Please advice through your valued suggestions that time being what is the
 best way to achieve auto scaling for docker on openstack. I am ready to
 contribute to it in the best possible way.

 Regards
 Ashish






 The information contained in this electronic message and any attachments
 to this message are intended for the exclusive use of the addressee(s) and
 may contain proprietary, confidential or privileged information. If you are
 not the intended recipient, you should not disseminate, distribute or copy
 this e-mail. Please notify the sender immediately and destroy all copies of
 this message and any attachments. WARNING: Computer viruses can be
 transmitted via email. The recipient should check this email and any
 attachments for the presence of viruses. The company accepts no liability
 for any damage caused by any virus transmitted by this email.
 www.wipro.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova-docker][ceilometer][heat] Autoscaling docker in openstack

2015-04-17 Thread ashish.jain14
Hi Sergey,


So IIUC approach #2 may still help to autoscale docker on openstack. I will try 
that out and post questions on the heat IRC. Thanks.


Regards

Ashish


From: Sergey Kraynev skray...@mirantis.com
Sent: Friday, April 17, 2015 7:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova-docker][ceilometer][heat] Autoscaling docker 
in openstack

Hi, Ashish.

Honestly I am not familiar with most part of these ways, but can add more 
information from Heat side (item 2).

I am surprised, that you have missed Heat autoscaling mechanism (You should 
look it :) ). It's one of the important part of Heat project.
It allows to scale vms/stacks by using Ceilometer alarms. There are couple 
examples of autoscale templates:

https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml
 (with LoadBalancer)
https://github.com/openstack/heat-templates/blob/master/hot/asg_of_servers.yaml
https://github.com/openstack/heat-templates/blob/master/hot/asg_of_stacks.yaml
It's true, that Docker plugin for Heat create docker server on Nova::Server 
resource. So you may write template Docker resource + Server resource (similar 
on third template) and scale by using Ceilometer alarms.
If you have any questions how to use it, please got to #heat irc channel and 
ask us :)
Also another way (AFAIK) is to use SoftwareDeployment/Config and deploy Server 
with docker inside (without docker plugin). In this way, I suppose, Steve Baker 
can help with advise :)


On 17 April 2015 at 16:06, ashish.jai...@wipro.com wrote:

Hi,

I have been working on running docker on openstack. I had a discussion on 
multiple IRC and IIUC there are 5 different ways of running docker on 
openstack. IIUC currently there is no way to autoscale docker on openstack. 
Please correct me if I am wrong


1) Using nova-docker driver - Running docker as a Nova::Server using 
nova-docker hypervisor
2) Using nova-plugin for heat - Running docker using 
DockerInc::Docker::Container
3) Using magnum - IIUC no automation as of now, manually it is possible. Not 
enough documentation available
4) heat compose - Saw some samples available 
@https://github.com/openstack/heat-templates/tree/master/hot/software-config/elements/heat-config-docker-compose
5) Swarm support - Still in development

Issues with each on the above approaches

1) Using nova-docker driver - IIUC there is no way for ceilometer to collect 
and emit statistics for docker hypervisor. So that mean ceilometer does not 
have any stats available once you switch to docker driver.
This link 
(https://github.com/openstack/ceilometer/tree/master/ceilometer/compute/virt) 
currently does not have anything for docker hypervisor.

2) Using nova-plugin for heat - Using this approach docker containers run on a 
Nova VM. However I do not see any illustration which suggests that you can 
autoscale using this approach.

3) Using magnum - Currently only possible by manually invoking it.

4) heat compose - Sample available at the above link just talks about deploying 
it up but nothing about auto scaling

5) Swarm Support - Still in dev

While I understand some of these options may enable us during the future 
release to autoscale docker on openstack. But looking currently I feel option 
#1 is most mature(probably) and by plugging in a ceilometer inspector for 
docker hypervisor it may be possible. Another approach could be to using 
cfn-push-stats to probably push some stats from docker container.

Please advice through your valued suggestions that time being what is the best 
way to achieve auto scaling for docker on openstack. I am ready to contribute 
to it in the best possible way.

Regards
Ashish






The information contained in this electronic message and any attachments to 
this message are intended for the exclusive use of the addressee(s) and may 
contain proprietary, confidential or privileged information. If you are not the 
intended recipient, you should not disseminate, distribute or copy this e-mail. 
Please notify the sender immediately and destroy all copies of this message and 
any attachments. WARNING: Computer viruses can be transmitted via email. The 
recipient should check this email and any attachments for the presence of 
viruses. The company accepts no liability for any damage caused by any virus 
transmitted by this email. www.wipro.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

The 

Re: [openstack-dev] [opentack-dev][meetings] Proposing changes in Rally meetings

2015-04-17 Thread yinjalee
15:00 UTC works for me, it's 23:00 here in China; 17:00 UTC is really late for me ;)


Cheers,
Yingjun Li


At 2015-04-17 21:26:29, Boris Pavlovic bo...@pavlovic.me wrote:

Rally team, 




I would like to propose next changes in Rally meetings: 


  - We should start making agenda for each meeting and publish it to Rally wiki 


  - We should do 2 meeting per week:


 * First is regular meeting (like we have now) where we are discussing 
everything 
  
 * Second is release management meeting, where we are discussing priorities 
for 
   current & next release. So core team will know what to review first. 


  - Move meetings from #openstack-meeting to #openstack-rally chat. 


  - We should adjust better time for current Rally team. Like at the moment it 
is too late 
 for few of cores in Rally. it's 17:00 UTC and I would like to suggest to 
make at 15:00 UTC. 


  - Do meetings every Monday and Wednesday




Thoughts ?  

 


Best regards,
Boris Pavlovic __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [opentack-dev][meetings] Proposing changes in Rally meetings

2015-04-17 Thread Boris Pavlovic
Rally team,


I would like to propose the following changes to Rally meetings:

  - We should start making an agenda for each meeting and publish it to the
Rally wiki

  - We should do 2 meetings per week:

 * First is a regular meeting (like we have now) where we discuss
everything

 * Second is a release management meeting, where we discuss priorities for
   the current & next release, so the core team will know what to review first.

  - Move meetings from #openstack-meeting to the #openstack-rally channel.

  - We should pick a better time for the current Rally team. At the moment it
is too late for a few of the cores in Rally: it's 17:00 UTC, and I would like
to suggest 15:00 UTC.

  - Do meetings every Monday and Wednesday


Thoughts ?


Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Building images separation and moving images into right place at right time

2015-04-17 Thread Jay Dobies
Have you seen Dan's first steps towards splitting the overcloud image 
building out of devtest_overcloud? It's not the same thing that you're 
talking about, but it might be a step in that direction.


https://review.openstack.org/#/c/173645/

On 04/17/2015 09:50 AM, Jaromir Coufal wrote:

Hi All,

at the moment we are building discovery, deploy and overcloud images all
at once. Then we face user to deal with uploading all images at one step.

User should not be exposed to discovery/deploy images. This should
happen automatically for the user during undercloud installation as
post-config step, so that undercloud is usable.

Once user installs undercloud (and have discovery & deploy images at
their place) he should be able to build / download / create overcloud
images (by overcloud images I mean overcloud-full.*). This is what user
should deal with.

For this we will need to separate building process for discovery+deploy
images and for overcloud images. Is that possible?

-- Jarda

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Building images separation and moving images into right place at right time

2015-04-17 Thread Jaromir Coufal

Hi All,

at the moment we are building the discovery, deploy and overcloud images all 
at once. Then we make the user deal with uploading all the images in one step.


The user should not be exposed to the discovery/deploy images. These should be 
put in place automatically during undercloud installation as a post-config 
step, so that the undercloud is usable.


Once the user installs the undercloud (and has the discovery & deploy images 
in place) he should be able to build / download / create overcloud images (by 
overcloud images I mean overcloud-full.*). This is what the user should deal 
with.


For this we will need to separate the building process for the discovery+deploy 
images from the one for the overcloud images. Is that possible?


-- Jarda

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] release critical oslo.messaging changes

2015-04-17 Thread Sean Dague
It turns out a number of people are hitting -
https://bugs.launchpad.net/oslo.messaging/+bug/1436769 (I tripped over
it this morning as well).

Under a currently unknown set of conditions you can get into a heartbeat
loop with oslo.messaging 1.8.1 which basically shuts down the RPC bus as
every service is heartbeat looping 100% of the time.

I had py-amqp < 1.4.0, and 1.4.0 seems to have a bug fix for one of the
issues here.

However, after chatting with sileht in IRC this morning it sounded like
the safer option might be to disable the rabbit heartbeat by default,
because this sort of heartbeat storm can kill the entire OpenStack
environment, and is not really clear how you recover from it.

All of which is recorded in the bug.

Proposed actions are to do both of:

- oslo.messaging release with heartbeats off by default (simulates 1.8.0
behavior before the heartbeat code landed)
- oslo.messaging requiring py-amqp >= 1.4.0, so that if you enable the
heartbeating, at least you are protected from the known bug

This would still let operators use the feature, we'd consider it
experimental, until we're sure there aren't any other dragons hidden in
there. I think the goal would be to make it default on again for Marmoset.
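
For operators who want to pin the behaviour explicitly rather than rely on the
release default, a sketch (heartbeat_timeout_threshold is the kilo-era option
name as I recall it, and 0 disables the feature; verify against your release):

  # equivalent to adding, in e.g. /etc/nova/nova.conf:
  #   [oslo_messaging_rabbit]
  #   heartbeat_timeout_threshold = 0
  crudini --set /etc/nova/nova.conf oslo_messaging_rabbit heartbeat_timeout_threshold 0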

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Using DevStack in Kilo

2015-04-17 Thread Jens Rosenboom
2015-04-17 12:23 GMT+02:00 Sean Dague s...@dague.net:

 glanceclient has always been that way; it's terrible. :(

 the nova message timeout looks like it's because nova-cert is not
 running, and it keeps trying to ping it. What actions were you doing
 that triggered that path?


I think this may be https://bugs.launchpad.net/devstack/+bug/1441348.
As a workaround, you can try adding

ENABLED_SERVICES+=,n-crt

into your config.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] 6.1 SCF declared, new HCF/GA dates are May 12th / May 28th

2015-04-17 Thread Eugene Bogdanov

Hello everyone,

I'd like to inform you that the Soft Code Freeze for the 6.1 release is now 
effective. This means that from now on we stop accepting fixes for 
Medium priority bugs and focus solely on Critical and High priority bugs.


As mentioned last week, we need to push Hard Code Freeze and GA 
deadlines. New dates are May 12th and May 28th respectively. 6.1 Release 
Schedule[1] has been updated.


With Soft Code Freeze in action, we still have 6.1 blueprints[2] not 
sorted out properly. Feature Leads, Component Leads and Core Reviewers, 
we need your help with sorting this out:


1. Blueprints that are not implemented should be moved to the next
   milestone.
2. If a blueprint is obsolete, the right update is to set its definition to
   Obsolete with no milestone target selected.

Thank you for your continued assistance. Hopefully, this is our last 
shift of 6.1 milestone dates and we'll be able to deliver on May 28th 
with high quality.


--
EugeneB


[1] Fuel 6.1 Release Schedule: 
https://wiki.openstack.org/wiki/Fuel/6.1_Release_Schedule
[2] Fuel 6.1 Milestone blueprint list: 
https://launchpad.net/fuel/+milestone/6.1


Eugene Bogdanov wrote:

Hello everyone,

Per the latest estimations from Feature Leads, we need 1 more week to 
complete FF exceptions and bugfixing so we can match the Soft Code 
Freeze milestone criteria[1]. In this case, the new Soft Code Freeze 
date is April 16th. That said, we will continue accepting medium/low 
priority bugfix commits into the 6.1 milestone[3] for one more week. I 
have updated the 6.1 Release Schedule[2] accordingly.


With the last 2 shifts of the Soft Code Freeze date, there's obviously too 
little time left to prepare for the Hard Code Freeze[4] and GA milestones, 
so we'll have to push these a bit as well. New dates are currently 
under discussion and will be communicated early next week.


I'd like to thank everyone for your continued contributions. Let's use 
the additional time efficiently and make the MOS 6.1 release quality as high 
as we possibly can.


[1] Soft Code Freeze definition: 
https://wiki.openstack.org/wiki/Fuel/Soft_Code_Freeze

[2] Release schedule:
https://wiki.openstack.org/wiki/Fuel/6.1_Release_Schedule
[3] Fuel 6.1 Milestone scope: https://launchpad.net/fuel/+milestone/6.1
[4] Hard Code Freeze definition: 
https://wiki.openstack.org/wiki/Fuel/Hard_Code_Freeze


--
EugeneB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Plugins] Moving old plugins to stackforge-attic

2015-04-17 Thread Sebastian Kalinowski
Hello,

I propose to move two old, unmaintained plugins to stackforge-attic, as
there is no interest in, or plans for, continuing their development:

* https://github.com/stackforge/fuel-plugin-group-based-policy -
development is now done in another repository as a different plugin
* https://github.com/stackforge/fuel-plugin-external-nfs - was created just
to validate a how-to article

Let's use lazy consensus and see if anyone has objections by next
week (April 24th).

Best,
Sebastian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Several nominations for fuel project cores

2015-04-17 Thread Dmitry Pyzhov
Thanks for your responses. I'm sorry for the combined request. I thought it was
better to have fewer e-mails if possible. OK, next time I will send a separate
request per nomination. If any of you think that I should restart the voting,
speak up and I will restart it. Otherwise the guys will get +2 permissions on
Monday.

On Wed, Apr 15, 2015 at 5:10 PM, Evgeniy L e...@mirantis.com wrote:

 1/ +1
 2/ +1
 3/ +1

 On Tue, Apr 14, 2015 at 2:45 PM, Aleksey Kasatkin akasat...@mirantis.com
 wrote:

 1/ +1
 2/ +1
 3/ +1


 Aleksey Kasatkin


 On Tue, Apr 14, 2015 at 12:26 PM, Tatyana Leontovich 
 tleontov...@mirantis.com wrote:


 3/ +1

 On Tue, Apr 14, 2015 at 11:49 AM, Sergii Golovatiuk 
 sgolovat...@mirantis.com wrote:

 +1 for separating.

 Let's follow the formal well established process.

 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 On Tue, Apr 14, 2015 at 10:32 AM, Igor Kalnitsky 
 ikalnit...@mirantis.com wrote:

 Dmitry,

 1/ +1

 2/ +1

 3/ +1

 P.S: Dmitry, please send one mail per nomination next time. It's much
 easier to vote for each candidate in separate threads. =)

 Thanks,
 Igor

 On Mon, Apr 13, 2015 at 4:24 PM, Dmitry Pyzhov dpyz...@mirantis.com
 wrote:
  Hi,
 
  1) I want to nominate Vladimir Sharshov to fuel-astute core. We
 badly need
  more core reviewers here. At the moment Vladimir is one of the main
  contributors and reviewers in astute.
 
  2) I want to nominate Alexander Kislitsky to fuel-stats core. He is
 the lead
  of this feature and one of the main authors in this repo.
 
  3) I want to nominate Dmitry Shulyak to fuel-web and fuel-ostf
 cores. He is
  one of the main contributors and reviewers in both repos.
 
  Core reviewers, please reply with +1/-1 for each nomination.
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][neutron] --config-dir vs. --config-file

2015-04-17 Thread Ihar Hrachyshka

On 04/13/2015 10:08 PM, Matthew Thode wrote:
 The loading seems to me in a sorted order, so we can do 1.conf
 2.conf etc.
 
 https://github.com/openstack/oslo.config/blob/1.9.3/oslo_config/cfg.py#L1265-L1268

It would need to be explicitly described in the documentation to rely on
it. I've sent a tiny patch to the library documentation:

https://review.openstack.org/174883
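
To illustrate the sorted-order behaviour (the directory and file names here
are illustrative, not prescribed):

  # with --config-dir /etc/neutron/conf.d/l3-agent, files load in sorted
  # order, so a later-sorting name can override an earlier one
  ls /etc/neutron/conf.d/l3-agent
  # 10-defaults.conf  99-local-overrides.conf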

 
 
 On 04/13/2015 02:45 PM, Kevin Benton wrote:
 What is the order of priority between the same option defined in
 two files with --config-dir?
 
 With '--config-file' args it seemed that it was that the latter
 ones took priority over the earlier ones. So an admin previously
 had the ability to abuse that by putting all of the desired
 global settings in one of the earlier loaded configs and then add
 some node-specific overrides to the ones loaded later.
 
 Will there still be the ability to do that with RDO?
 
 On Mon, Apr 13, 2015 at 8:25 AM, Ihar Hrachyshka
 ihrac...@redhat.com wrote:
 
 Hi,
 
 RDO/master (aka Delorean) moved neutron l3 agent to this
 configuration scheme, configuring l3 (and vpn) agent with
 --config-dir [1][2][3].
 
 We also provided a way to configure neutron services without
 ever touching a single configuration file from the package [4]
 where each service has a config-dir located under 
 /etc/neutron/conf.d/service-name that can be populated by
 *.conf files that will be automatically read by services during
 startup.
 
 All other distributions are welcome to follow the path. Please
 don't introduce your own alternative to /etc/neutron/conf.d/...
 directory to avoid unneeded platform dependent differences in
 deployment tools.
 
 As for devstack, it's not really feasible to introduce such a
 change there (at least from my perspective), so it's downstream
 only.
 
 [1]: https://github.com/openstack-packages/neutron/blob/f20-master/openstack-neutron.spec#L602
 [2]: https://github.com/openstack-packages/neutron/blob/f20-master/neutron-l3-agent.service#L8
 [3]: https://github.com/openstack-packages/neutron-vpnaas/blob/f20-master/openstack-neutron-vpnaas.spec#L97
 [4]: https://review.gerrithub.io/#/c/229162/
 
 Thanks, /Ihar
 
 On 03/13/2015 03:11 PM, Ihar Hrachyshka wrote:
 Hi all,
 
 (I'm starting a new [packaging] tag in this mailing list to
 reach out people who are packaging our software in
 distributions and whatnot.)
 
 Neutron vendor split [1] introduced situations where the set
 of configuration files for L3/VPN agent is not stable and
 depends on which packages are installed in the system.
 Specifically, fwaas_driver.ini file is now shipped in
 neutron_fwaas tarball (openstack-neutron-fwaas package in RDO),
 and so --config-file=/etc/neutron/fwaas_driver.ini argument
 should be passed to L3/VPN agent *only* when the new package
 with the file is installed.
 
 In devstack, we solve the problem by dynamically generating
 CLI arguments list based on which services are configured in 
 local.conf [2]. It's not a viable approach in proper
 distribution packages though, where we usually hardcode
 arguments [3] in our service manifests (systemd unit files, in
 case of RDO).
 
 The immediate solution to solve the issue would be to use 
 --config-dir argument that is also provided to us by
 oslo.config instead of --config-file, and put auxiliary files
 there [4] (those may be just symbolic links to actual files).
 
 I initially thought to put the directory under /etc/neutron/,
 but then realized we may be interested in keeping it out of
 user sight while it only references stock (upstream)
 configuration files.
 
 But then a question arises: whether it's useful just for this 
 particular case? Maybe there is value in using --config-dir
 outside of it? And in that case, maybe the approach should be
 replicated to other services?
 
 AFAIU --config-dir could actually be useful to configure
 services. Now instead of messing with configuration files that
 are shipped with packages (and handling .rpmnew files [5] that
 are generated on upgrade when local changes to those files are
 detected), users (or deployment/installation tools) could
 instead drop a *.conf file in that configuration directory,
 being sure their stock configuration file is always current,
 and no .rpmnew files are there to manually solve conflicts).
 
 We can also use two --config-dir arguments, one for
 stock/upstream configuration files, located out of
 /etc/neutron/, and another one available for population with
 user configuration files, under /etc/neutron/. This is similar
 to how we put settings considered to be 'sane distro defaults'
 in neutron-dist.conf file that is not available for
 modification [6][7].
 
 Of course users would still be able 

Re: [openstack-dev] [packaging][neutron] --config-dir vs. --config-file

2015-04-17 Thread Ihar Hrachyshka

On 04/13/2015 09:45 PM, Kevin Benton wrote:
 What is the order of priority between the same option defined in
 two files with --config-dir?

Should be alphabetically sorted, but it's not yet defined in
documentation. I've sent a patch for this:
https://review.openstack.org/174883

 
 With '--config-file' args it seemed that it was that the latter
 ones took priority over the earlier ones. So an admin previously
 had the ability to abuse that by putting all of the desired global
 settings in one of the earlier loaded configs and then add some
 node-specific overrides to the ones loaded later.

It's not actually an abuse, but behaviour that is guaranteed by public
library documentation, and it's fine to rely on it.

 
 Will there still be the ability to do that with RDO?

Nothing changes for users who do not want to use the conf.d directory and
instead store their configuration in upstream config files (like
neutron.conf or l3_agent.ini). RDO/neutron only *extends*
possibilities to configure services with the new conf.d feature.

The order of configuration storages as they are currently loaded in
RDO/neutron services is the same for all of them, and can be checked
in any systemd service file:

https://github.com/openstack-packages/neutron/blob/f20-master/neutron-metadata-agent.service#L8

The way it's defined there, conf.d beats all other configuration
files. But if you don't buy the new approach, you just don't have any
files inside the directory to beat your conventional configuration.
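
A sketch of that drop-in style under the layout described above (the exact
service directory and file names are assumptions, not from the packages):

  mkdir -p /etc/neutron/conf.d/neutron-l3-agent
  cat > /etc/neutron/conf.d/neutron-l3-agent/99-local.conf <<'EOF'
  [DEFAULT]
  debug = True
  EOF
  # stock files such as l3_agent.ini stay untouched, so upgrades produce no
  # .rpmnew conflicts; the drop-in wins per the service's --config-dir order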

 
 On Mon, Apr 13, 2015 at 8:25 AM, Ihar Hrachyshka
 ihrac...@redhat.com wrote:
 
 Hi,
 
 RDO/master (aka Delorean) moved neutron l3 agent to this
 configuration scheme, configuring l3 (and vpn) agent with
 --config-dir [1][2][3].
 
 We also provided a way to configure neutron services without ever 
 touching a single configuration file from the package [4] where
 each service has a config-dir located under 
 /etc/neutron/conf.d/service-name that can be populated by *.conf 
 files that will be automatically read by services during startup.
 
 All other distributions are welcome to follow the path. Please
 don't introduce your own alternative to /etc/neutron/conf.d/...
 directory to avoid unneeded platform dependent differences in
 deployment tools.
 
 As for devstack, it's not really feasible to introduce such a
 change there (at least from my perspective), so it's downstream
 only.
 
 [1]: https://github.com/openstack-packages/neutron/blob/f20-master/openstack-neutron.spec#L602
 [2]: https://github.com/openstack-packages/neutron/blob/f20-master/neutron-l3-agent.service#L8
 [3]: https://github.com/openstack-packages/neutron-vpnaas/blob/f20-master/openstack-neutron-vpnaas.spec#L97
 [4]: https://review.gerrithub.io/#/c/229162/
 
 Thanks, /Ihar
 
 On 03/13/2015 03:11 PM, Ihar Hrachyshka wrote:
 Hi all,
 
 (I'm starting a new [packaging] tag in this mailing list to
 reach out people who are packaging our software in distributions
 and whatnot.)
 
 Neutron vendor split [1] introduced situations where the set of 
 configuration files for L3/VPN agent is not stable and depends
 on which packages are installed in the system. Specifically, 
 fwaas_driver.ini file is now shipped in neutron_fwaas tarball 
 (openstack-neutron-fwaas package in RDO), and so 
 --config-file=/etc/neutron/fwaas_driver.ini argument should be 
 passed to L3/VPN agent *only* when the new package with the file
 is installed.
 
 In devstack, we solve the problem by dynamically generating CLI 
 arguments list based on which services are configured in 
 local.conf [2]. It's not a viable approach in proper
 distribution packages though, where we usually hardcode arguments
 [3] in our service manifests (systemd unit files, in case of
 RDO).
 
 The immediate solution to solve the issue would be to use 
 --config-dir argument that is also provided to us by oslo.config 
 instead of --config-file, and put auxiliary files there [4]
 (those may be just symbolic links to actual files).
 
 I initially thought to put the directory under /etc/neutron/,
 but then realized we may be interested in keeping it out of user
 sight while it only references stock (upstream) configuration
 files.
 
 But then a question arises: whether it's useful just for this 
 particular case? Maybe there is value in using --config-dir
 outside of it? And in that case, maybe the approach should be
 replicated to other services?
 
 AFAIU --config-dir could actually be useful to configure
 services. Now instead of messing with configuration files that
 are shipped with packages (and handling .rpmnew files [5] that
 are generated on upgrade when local changes to those files are
 detected), users (or 

Re: [openstack-dev] [packaging][neutron] --config-dir vs. --config-file

2015-04-17 Thread Ihar Hrachyshka

On 04/13/2015 07:47 PM, Dimitri John Ledkov wrote:
 Hello,
 
 For Clear Linux* for Intel Architecture we do not allow packaging 
 things in /etc; instead we leave /etc completely empty, for 
 user/admin modifications only. Typically we achieve this by moving
 sane distro defaults to be compiled-in defaults, or read from
 alternative locations somewhere under /usr. This is similar to e.g.
 how udev reads from /usr/lib & /etc. (ditto systemd units, XDG
 Freedesktop spec, etc.)

Some people may be used to having a configuration file with all
supported options mentioned, with descriptions, to edit. Though I
admire the direction your distribution chose for configuration. And
this is why RDO/neutron team feels that we should provide this
additional way of service configuration to our users.

 
 Integration-wise, it helps a lot if there is a conf.d-like
 directory somewhere under /usr & under /etc, such that both
 packaging/packages and the user can integrate things.
 
 I'll need to look more into this, but e.g. support for 
 /usr/share/neutron/conf.d/*.conf or 
 /usr/share/openstack/neutron/*.conf would be useful to us and
 other distributions as well.

I vote for /usr/share/neutron/conf.d/service-name

Where service name is e.g. 'server', 'l3-agent', 'dhcp-agent' etc.
That said, the file location is not intended to be modified by anyone
other than distribution, so it's not that important whether all
distributions are on the same page in this particular case. For
/etc/neutron/conf.d file structure, it's a lot more important.

 
 Shipping things in /etc is a pain on both dpkg & rpm based 
 distributions as config file handling is complex and has many
 corner cases, hence in the past we all had to do transitions of
 stock config from /etc -> /usr (e.g. udev rules).
 Please keep /etc for _only_ user-created configurations and changes,
 without any stock, documentation, or defaults shipped there.

Existing users won't buy the sentiment, so at least in RDO world, we
need to handle both old (neutron.conf) and new (conf.d/server/) ways
to configure our services.

If you start a new distribution, it's a bit different since you can
define more strict rules from the start while you haven't set
different expectations for your user base.

 
 Regards,
 
 Dimitri.
 
 ps sorry for loss of context, only recently subscribed, don't have 
 full access to the thread and hence the ugly top-post reply, sorry 
 about that.
 
 On 13 April 2015 at 09:25, Ihar Hrachyshka ihrac...@redhat.com
 wrote: Hi,
 
 RDO/master (aka Delorean) moved neutron l3 agent to this
 configuration scheme, configuring l3 (and vpn) agent with
 --config-dir [1][2][3].
 
 We also provided a way to configure neutron services without ever 
 touching a single configuration file from the package [4] where
 each service has a config-dir located under 
 /etc/neutron/conf.d/service-name that can be populated by *.conf 
 files that will be automatically read by services during startup.
 
 All other distributions are welcome to follow the path. Please
 don't introduce your own alternative to /etc/neutron/conf.d/...
 directory to avoid unneeded platform dependent differences in
 deployment tools.
 
 As for devstack, it's not really feasible to introduce such a
 change there (at least from my perspective), so it's downstream
 only.
 
 [1]: https://github.com/openstack-packages/neutron/blob/f20-master/openstack-neutron.spec#L602
 [2]: https://github.com/openstack-packages/neutron/blob/f20-master/neutron-l3-agent.service#L8
 [3]: https://github.com/openstack-packages/neutron-vpnaas/blob/f20-master/openstack-neutron-vpnaas.spec#L97
 [4]: https://review.gerrithub.io/#/c/229162/
 
 Thanks, /Ihar
 
 On 03/13/2015 03:11 PM, Ihar Hrachyshka wrote:
 Hi all,
 
 (I'm starting a new [packaging] tag in this mailing list to
 reach out people who are packaging our software in
 distributions and whatnot.)
 
 Neutron vendor split [1] introduced situations where the set
 of configuration files for L3/VPN agent is not stable and
 depends on which packages are installed in the system.
 Specifically, fwaas_driver.ini file is now shipped in
 neutron_fwaas tarball (openstack-neutron-fwaas package in
 RDO), and so --config-file=/etc/neutron/fwaas_driver.ini
 argument should be passed to L3/VPN agent *only* when the new
 package with the file is installed.
 
 In devstack, we solve the problem by dynamically generating
 CLI arguments list based on which services are configured in 
 local.conf [2]. It's not a viable approach in proper
 distribution packages though, where we usually hardcode
 arguments [3] in our service manifests (systemd unit files,
 in case of RDO).
 
 The immediate solution to solve the issue would be to use 
 --config-dir argument that is also provided to us by
 oslo.config instead of --config-file, and put auxiliary files
 there [4] (those may be just symbolic links to actual
 files).
 
 I initially 

Re: [openstack-dev] [nova-docker][ceilometer][heat] Autoscaling docker in openstack

2015-04-17 Thread VACHNIS, AVI (AVI)
Hi,

@Ashish, if the limitation you've mentioned for #1 still exists, I join your 
question how heat auto-scale-group may work w/o ceilometer being able to 
collect docker metrics?



@Sergey, hey. Are you saying that ceilometer does collect metrics on the 
Nova::Server resource underlying docker?



-Avi



-- Original message--

From: ashish.jai...@wipro.com

Date: Fri, Apr 17, 2015 4:56 PM

To: openstack-dev@lists.openstack.org;

Subject:Re: [openstack-dev] [nova-docker][ceilometer][heat] Autoscaling docker 
in openstack



Hi Segey,


So IIUC approach #2 may still help to autoscale docker on openstack. I will try 
that out and post questions on heat irc thanks.


Regards

Ashish


From: Sergey Kraynev skray...@mirantis.com
Sent: Friday, April 17, 2015 7:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova-docker][ceilometer][heat] Autoscaling docker 
in openstack

Hi, Ashish.

Honestly I am not familiar with most part of these ways, but can add more 
information from Heat side (item 2).

I am surprised, that you have missed Heat autoscaling mechanism (You should 
look it :) ). It's one of the important part of Heat project.
It allows to scale vms/stacks by using Ceilometer alarms. There are couple 
examples of autoscale templates:

https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml
 (with LoadBalancer)
https://github.com/openstack/heat-templates/blob/master/hot/asg_of_servers.yaml
https://github.com/openstack/heat-templates/blob/master/hot/asg_of_stacks.yaml
It's true, that Docker plugin for Heat create docker server on Nova::Server 
resource. So you may write template Docker resource + Server resource (similar 
on third template) and scale by using Ceilometer alarms.
If you have any questions how to use it, please got to #heat irc channel and 
ask us :)
Also another way (AFAIK) is to use SoftwareDeployment/Config and deploy Server 
with docker inside (without docker plugin). In this way, I suppose, Steve Baker 
can help with advise :)


On 17 April 2015 at 16:06, ashish.jai...@wipro.com wrote:

Hi,

I have been working on running docker on openstack. I had a discussion on 
multiple IRC and IIUC there are 5 different ways of running docker on 
openstack. IIUC currently there is no way to autoscale docker on openstack. 
Please correct me if I am wrong


1) Using nova-docker driver - Running docker as a Nova::Server using 
nova-docker hypervisor
2) Using nova-plugin for heat - Running docker using 
DockerInc::Docker::Container
3) Using magnum - IIUC no automation as of now, manually it is possible. Not 
enough documentation available
4) heat compose - Saw some samples available 
@https://github.com/openstack/heat-templates/tree/master/hot/software-config/elements/heat-config-docker-compose
5) Swarm support - Still in development

Issues with each on the above approaches

1) Using nova-docker driver - IIUC there is no way for ceilometer to collect 
and emit statistics for docker hypervisor. So that mean ceilometer does not 
have any stats available once you switch to docker driver.
This link 
(https://github.com/openstack/ceilometer/tree/master/ceilometer/compute/virt) 
currently does not have anything for docker hypervisor.

2) Using nova-plugin for heat - Using this approach docker containers run on a 
Nova VM. However I do not see any illustration which suggests that you can 
autoscale using this approach.

3) Using magnum - Currently only possible by manually invoking it.

4) heat compose - Sample available at the above link just talks about deploying 
it up but nothing about auto scaling

5) Swarm Support - Still in dev

While I understand some of these options may enable us during the future 
release to autoscale docker on openstack. But looking currently I feel option 
#1 is most mature(probably) and by plugging in a ceilometer inspector for 
docker hypervisor it may be possible. Another approach could be to using 
cfn-push-stats to probably push some stats from docker container.

Please advice through your valued suggestions that time being what is the best 
way to achieve auto scaling for docker on openstack. I am ready to contribute 
to it in the best possible way.

Regards
Ashish






The information contained in this electronic message and any attachments to 
this message are intended for the exclusive use of the addressee(s) and may 
contain proprietary, confidential or privileged information. If you are not the 
intended recipient, you should not disseminate, distribute or copy this e-mail. 
Please notify the sender immediately and destroy all copies of this message and 
any attachments. WARNING: Computer viruses can be transmitted via email. The 
recipient should check this email and 

Re: [openstack-dev] [nova-docker][ceilometer][heat] Autoscaling docker in openstack

2015-04-17 Thread ashish.jain14
Yes, limitation #1 indeed exists; I have got confirmation from a few of the 
developers. Here is one blueprint which talks about this:


https://blueprints.launchpad.net/ceilometer/+spec/container-monitoring



From: VACHNIS, AVI (AVI) avi.vach...@alcatel-lucent.com
Sent: Friday, April 17, 2015 7:41 PM
To: Ashish Jain (WT01 - BAS); openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova-docker][ceilometer][heat] Autoscaling docker 
in openstack


Hi,

@Ashish, if the limitation you've mentioned for #1 still exists, I join your 
question how heat auto-scale-group may work w/o ceilometer being able to 
collect docker metrics?



@Sergey, hey. Are you saying that ceilometer do collects metrics on docker 
underlying nova::server resource?



-Avi



-- Original message--

From: ashish.jai...@wipro.com

Date: Fri, Apr 17, 2015 4:56 PM

To: openstack-dev@lists.openstack.org;

Subject:Re: [openstack-dev] [nova-docker][ceilometer][heat] Autoscaling docker 
in openstack



Hi Segey,


So IIUC approach #2 may still help to autoscale docker on openstack. I will try 
that out and post questions on heat irc thanks.


Regards

Ashish


From: Sergey Kraynev skray...@mirantis.com
Sent: Friday, April 17, 2015 7:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova-docker][ceilometer][heat] Autoscaling docker 
in openstack

Hi, Ashish.

Honestly I am not familiar with most part of these ways, but can add more 
information from Heat side (item 2).

I am surprised, that you have missed Heat autoscaling mechanism (You should 
look it :) ). It's one of the important part of Heat project.
It allows to scale vms/stacks by using Ceilometer alarms. There are couple 
examples of autoscale templates:

https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml
 (with LoadBalancer)
https://github.com/openstack/heat-templates/blob/master/hot/asg_of_servers.yaml
https://github.com/openstack/heat-templates/blob/master/hot/asg_of_stacks.yaml
It's true, that Docker plugin for Heat create docker server on Nova::Server 
resource. So you may write template Docker resource + Server resource (similar 
on third template) and scale by using Ceilometer alarms.
If you have any questions how to use it, please got to #heat irc channel and 
ask us :)
Also another way (AFAIK) is to use SoftwareDeployment/Config and deploy Server 
with docker inside (without docker plugin). In this way, I suppose, Steve Baker 
can help with advise :)
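A very rough, untested sketch of what such a template could look like (modelled
on the autoscaling examples above; names, sizes and thresholds are illustrative,
and it assumes Ceilometer actually receives cpu_util samples for the group's
servers, which is exactly the gap discussed elsewhere in this thread for the
nova-docker driver):

cat > docker_asg.yaml <<'EOF'
heat_template_version: 2013-05-23
parameters:
  image:    {type: string}
  flavor:   {type: string}
  key_name: {type: string}
resources:
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      resource:
        type: OS::Nova::Server
        properties:
          image: {get_param: image}
          flavor: {get_param: flavor}
          key_name: {get_param: key_name}
          # tag the servers so the alarm below can match their samples
          metadata: {"metering.stack": {get_param: "OS::stack_id"}}
          user_data: |
            #!/bin/bash
            # start the docker container(s) here
  scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: asg}
      cooldown: 60
      scaling_adjustment: 1
  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 50
      comparison_operator: gt
      alarm_actions: [{get_attr: [scaleup_policy, alarm_url]}]
      matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
EOF
heat stack-create docker-asg -f docker_asg.yaml \
  -P "image=<image>;flavor=m1.small;key_name=<key>"

The same group could equally wrap a nested stack containing a Server plus a
DockerInc::Docker::Container resource instead of plain user_data.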


On 17 April 2015 at 16:06, ashish.jai...@wipro.com wrote:

Hi,

I have been working on running docker on openstack. I had a discussion on
multiple IRC channels and IIUC there are 5 different ways of running docker on
openstack. IIUC currently there is no way to autoscale docker on openstack.
Please correct me if I am wrong.


1) Using nova-docker driver - Running docker as a Nova::Server using 
nova-docker hypervisor
2) Using nova-plugin for heat - Running docker using 
DockerInc::Docker::Container
3) Using magnum - IIUC no automation as of now, manually it is possible. Not 
enough documentation available
4) heat compose - Saw some samples available 
@https://github.com/openstack/heat-templates/tree/master/hot/software-config/elements/heat-config-docker-compose
5) Swarm support - Still in development

Issues with each of the above approaches:

1) Using nova-docker driver - IIUC there is no way for ceilometer to collect
and emit statistics for the docker hypervisor. So that means ceilometer does not
have any stats available once you switch to the docker driver.
This link 
(https://github.com/openstack/ceilometer/tree/master/ceilometer/compute/virt) 
currently does not have anything for docker hypervisor.

2) Using nova-plugin for heat - Using this approach docker containers run on a 
Nova VM. However I do not see any illustration which suggests that you can 
autoscale using this approach.

3) Using magnum - Currently only possible by manually invoking it.

4) heat compose - The sample available at the above link just talks about
deploying, but says nothing about autoscaling.

5) Swarm Support - Still in dev

While I understand that some of these options may enable us to autoscale docker
on openstack in a future release, looking at the present I feel option #1 is
(probably) the most mature, and by plugging in a ceilometer inspector for the
docker hypervisor it may be possible. Another approach could be to use
cfn-push-stats to push some stats from the docker container.

Please advise, with your valued suggestions, what is for the time being the best
way to achieve autoscaling for docker on openstack. I am ready to contribute
to it in the best possible way.

Regards
Ashish







Re: [openstack-dev] [tripleo] Building images separation and moving images into right place at right time

2015-04-17 Thread Arkady_Kanevsky

We need the ability for an Admin to add/remove new images at will, to deploy new
overcloud images at any time.
I expect that this is standard glance functionality.
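e.g., with the standard client (names illustrative):

glance image-create --name overcloud-full --disk-format qcow2 \
  --container-format bare --is-public True --file overcloud-full.qcow2
glance image-delete <image-id>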


-Original Message-
From: Jaromir Coufal [mailto:jcou...@redhat.com] 
Sent: Friday, April 17, 2015 8:51 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [tripleo] Building images separation and moving images 
into right place at right time

Hi All,

at the moment we are building discovery, deploy and overcloud images all at
once. Then we make the user deal with uploading all the images in one step.

The user should not be exposed to discovery/deploy images. These should be handled
automatically for the user during undercloud installation, as a post-config step,
so that the undercloud is usable.

Once the user installs the undercloud (and has discovery & deploy images in
place) he should be able to build / download / create overcloud images (by
overcloud images I mean overcloud-full.*). This is what the user should deal with.

For this we will need to separate the building process for discovery+deploy
images from the one for overcloud images. Is that possible?

-- Jarda

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Mysql db connection leaking?

2015-04-17 Thread Jay Pipes

On 04/17/2015 05:26 AM, Qiming Teng wrote:

On Thu, Apr 16, 2015 at 01:27:51PM -0400, Jay Pipes wrote:

On 04/16/2015 09:54 AM, Sean Dague wrote:

On 04/16/2015 05:20 PM, Qiming Teng wrote:


Wondering if there is something misconfigured in my devstack
environment, which was reinstalled on RHEL7 about 10 days ago.
I'm often running into a mysql connection problem, as shown below:

$ mysql
ERROR 1040 (HY000): Too many connections

When I try to dump the mysql connection list, I'm getting the following
result after a 'systemctl restart mariadb.service':

$ mysqladmin processlist | grep nova | wc -l
125

Most of the connections are in Sleep status:

$ mysqladmin processlist | grep nova | grep Sleep | wc -l
123

As for the workload, I'm currently only running two VMs in a multi-host
devstack environment.

So, my questions:

   - Why do we have so many mysql connections from nova?
   - Is it possible this is caused by some misconfiguration?
   - 125 connections in such a toy setup is insane; any hints on nailing
 down the connections to the responsible nova components?

Thanks.

Regards,
   Qiming


No, that's about right. It's 1 connection per worker. By default most
daemons start 1 worker per processor. Each OpenStack service has a bunch
of daemons. It all adds up pretty quick.


And just to add to what Sean says above, there's nothing inherently
wrong with sleeping connections to MySQL. What *is* wrong, however,
is that the default max_connections setting in my.cnf is 150. :( I
frequently recommend upping that to 2000 or more on any modern
hardware or decent sized VM.
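For example, something along these lines (the runtime change requires SUPER
privileges and only lasts until the next restart):

$ mysql -uroot -p -e "SET GLOBAL max_connections = 2000"

# to make it permanent, set it in my.cnf under [mysqld]:
#   max_connections = 2000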

Best,
-jay


Thanks, guys. So the key takeaways for me are:

  - 100~200 mysql connections is not a big problem, provided those
connections are sleeping;


Yep, correct, for MySQL.


  - Tuning mysql to support a larger number of user connections is a
must;


Yes, because the default is a measly 150 max_connections.


  - The number of mysql connections is not proportional to the number of
VMs; it is more related to the number of cores, number of workers,
etc.


Yes. nova-compute workers (which are on each compute host that houses 
VMs) actually do not contact the database directly. Instead, they 
communicate with the nova-conductor service, which itself communicates 
with the database. The nova-conductor service by default will maintain a 
number of processes equal to the number of cores on the VM or baremetal 
machine. Each of these processes will keep a pool of connections to MySQL.


In addition to nova-conductor, there are various other Nova services 
that also make connections to MySQL.


Best,
-jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] naming of the project

2015-04-17 Thread Emilien Macchi


On 04/16/2015 02:32 PM, Emilien Macchi wrote:
 
 
 On 04/16/2015 02:23 PM, Richard Raseley wrote:
 Emilien Macchi wrote:
 Hi all,

 I sent a patch to openstack/governance to move our project under the big
 tent, and it came up [1] that we should decide of a project name and be
 careful about trademarks issues with Puppet name.

 I would like to hear from Puppetlabs if there is any issue to use Puppet
 in the project title; also, I open a new etherpad so people can suggest
 some names: https://etherpad.openstack.org/p/puppet-openstack-naming

 Thanks,

 [1] https://review.openstack.org/#/c/172112/1/reference/projects.yaml,cm

 Emilien,

 I went ahead and had a discussion with Puppet's legal team on this
 issue. Unfortunately at this time we are unable to sanction the use of
 Puppet's name or registered trademarks as part of the project's name.

 To be clear, this decision is in no way indicative of Puppet not feeling
 the project is 'worthy' or 'high quality' (in fact the opposite is
 true), but rather is a purely defensive decision.

 We are in the process of reevaluating our usage guidelines, but there is
 no firm timetable as of this moment.
 
 I guess our best option is to choose a name without Puppet in the title.
 We will proceed to a vote after all proposals on the etherpad.

While we wait to hear from Puppetlabs about the potential trademark issue, I
would like to run a vote for a name that does not contain `Puppet`, so
we can go ahead on the governance thing.
I took all the proposals on the etherpad [1] and created a poll that will
close next Tuesday at 3pm, just before our weekly meeting, so we can
make it official.

Anyone is welcome to vote:
http://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_6c81ad92b71422d6akey=f2e85294f17caa9a

Any feedback on the vote itself is also welcome.

Thanks,

[1] https://etherpad.openstack.org/p/puppet-openstack-naming

 
 Thank you for your help,
 
 Regards,

 Richard Raseley

 SysOps Engineer
 Puppet Labs

 

-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Mysql db connection leaking?

2015-04-17 Thread Jay Pipes

On 04/16/2015 06:40 PM, Matt Riedemann wrote:

On 4/16/2015 12:27 PM, Jay Pipes wrote:

On 04/16/2015 09:54 AM, Sean Dague wrote:

On 04/16/2015 05:20 PM, Qiming Teng wrote:


Wondering if there is something misconfigured in my devstack
environment, which was reinstalled on RHEL7 about 10 days ago.
I'm often running into a mysql connection problem, as shown below:

$ mysql
ERROR 1040 (HY000): Too many connections

When I try to dump the mysql connection list, I'm getting the following
result after a 'systemctl restart mariadb.service':

$ mysqladmin processlist | grep nova | wc -l
125

Most of the connections are in Sleep status:

$ mysqladmin processlist | grep nova | grep Sleep | wc -l
123

As for the workload, I'm currently only running two VMs in a multi-host
devstack environment.

So, my questions:

   - Why do we have so many mysql connections from nova?
   - Is it possible this is caused by some misconfiguration?
   - 125 connections in such a toy setup is insane; any hints on nailing
 down the connections to the responsible nova components?

Thanks.

Regards,
   Qiming


No, that's about right. It's 1 connection per worker. By default most
daemons start 1 worker per processor. Each OpenStack service has a bunch
of daemons. It all adds up pretty quick.


And just to add to what Sean says above, there's nothing inherently
wrong with sleeping connections to MySQL. What *is* wrong, however, is
that the default max_connections setting in my.cnf is 150. :( I
frequently recommend upping that to 2000 or more on any modern hardware
or decent sized VM.

Best,
-jay


What do you consider a decent sized VM?  In devstack we default
max_connections for postgresql to 200 because we were having connection
timeout failures in the gate for pg back in the day:

http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/databases/postgresql#n15

But we don't change this for mysql:

http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/databases/mysql

I think the VMs in the gate are running 8 VCPU + 8 GB RAM, not sure
about disk.


An 8 vCPU VM will have 80 connections to MySQL consumed by the 
nova-conductor (8 processes with a 10 connection pool in each process). 
There may be 10-12 other connections from various other Nova services, 
but the total number of MySQL connections that Nova would consume should 
not be more than around 90 or so. That said, Cinder, Keystone, Neutron, 
Glance and other services will also consume MySQL connections which 
could push things near 150.


An easy way to test this is just to run:

 mysql -uroot -p$PASS -e "SHOW GLOBAL STATUS LIKE 'Max_used_connections'"

before and after the OpenStack services are started.

Long term, I think it's wise *not* to use connection pooling for MySQL 
backends. As Clint mentioned in an earlier response, the process of 
connecting to MySQL is *extremely* lightweight, and the way that we use 
the database -- i.e. not using stored procedures or user functions -- 
means that the total amount of memory consumed per connection thread is 
very low. It doesn't really benefit OpenStack to pool MySQL connections 
(it does for PostgreSQL, however), and the drawback to pooling 
connections is that services like nova-conductor maintain long-lived 
connection threads that other services cannot use while maintained in 
the connection pool.
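For reference, the knobs that control this are the per-service worker counts
and the oslo.db pool options; a purely illustrative nova.conf fragment:

[conductor]
workers = 4            # fewer conductor processes

[database]
max_pool_size = 5      # smaller per-process connection pool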


Best,
-jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova-docker][ceilometer][heat] Autoscaling docker in openstack

2015-04-17 Thread ashish.jain14
So ultimately this means there is no way to autoscale docker containers on
openstack until and unless ceilometer adds an inspector for the docker
hypervisor, something similar to this
(https://github.com/openstack/ceilometer/tree/master/ceilometer/compute/virt).


Regards

Ashish


From: Sergey Kraynev skray...@mirantis.com
Sent: Friday, April 17, 2015 8:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova-docker][ceilometer][heat] Autoscaling docker 
in openstack

@VACHNIS: yeah, in this case we are blocked by ceilometer. AFAIK, ceilometer
collects metrics from Nova::Server, not from docker directly.
So the mentioned bp makes sense (add support for this feature to ceilometer,
then to heat).


Regards,
Sergey.

On 17 April 2015 at 17:11, VACHNIS, AVI (AVI)
avi.vach...@alcatel-lucent.com wrote:

Hi,

@Ashish, if the limitation you've mentioned for #1 still exists, I join your
question: how may a Heat auto-scaling group work without Ceilometer being able
to collect docker metrics?



@Sergey, hey. Are you saying that Ceilometer does collect metrics on the
Nova::Server resource underlying docker?






-Avi



-- Original message--

From: ashish.jai...@wipro.com

Date: Fri, Apr 17, 2015 4:56 PM

To: openstack-dev@lists.openstack.org;

Subject:Re: [openstack-dev] [nova-docker][ceilometer][heat] Autoscaling docker 
in openstack



Hi Sergey,


So IIUC approach #2 may still help to autoscale docker on openstack. I will
try that out and post questions on the Heat IRC channel. Thanks.


Regards

Ashish


From: Sergey Kraynev skray...@mirantis.com
Sent: Friday, April 17, 2015 7:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova-docker][ceilometer][heat] Autoscaling docker 
in openstack

Hi, Ashish.

Honestly, I am not familiar with most of these approaches, but I can add more
information from the Heat side (item 2).

I am surprised that you have missed the Heat autoscaling mechanism (you should
look at it :) ). It's one of the important parts of the Heat project.
It allows scaling VMs/stacks by using Ceilometer alarms. There are a couple of
examples of autoscaling templates:

https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml
 (with LoadBalancer)
https://github.com/openstack/heat-templates/blob/master/hot/asg_of_servers.yaml
https://github.com/openstack/heat-templates/blob/master/hot/asg_of_stacks.yaml

It's true that the Docker plugin for Heat creates the docker server on a
Nova::Server resource. So you may write a template with a Docker resource + a
Server resource (similar to the third template) and scale it by using
Ceilometer alarms.
If you have any questions about how to use it, please go to the #heat IRC
channel and ask us :)
Another way (AFAIK) is to use SoftwareDeployment/Config and deploy a Server
with docker inside (without the docker plugin). With this approach, I suppose,
Steve Baker can help with advice :)


On 17 April 2015 at 16:06, ashish.jai...@wipro.com wrote:

Hi,

I have been working on running docker on openstack. I had a discussion on
multiple IRC channels and IIUC there are 5 different ways of running docker on
openstack. IIUC currently there is no way to autoscale docker on openstack.
Please correct me if I am wrong.


1) Using nova-docker driver - Running docker as a Nova::Server using 
nova-docker hypervisor
2) Using nova-plugin for heat - Running docker using 
DockerInc::Docker::Container
3) Using magnum - IIUC no automation as of now, manually it is possible. Not 
enough documentation available
4) heat compose - Saw some samples available 
@https://github.com/openstack/heat-templates/tree/master/hot/software-config/elements/heat-config-docker-compose
5) Swarm support - Still in development

Issues with each of the above approaches:

1) Using nova-docker driver - IIUC there is no way for ceilometer to collect
and emit statistics for the docker hypervisor. So that means ceilometer does not
have any stats available once you switch to the docker driver.
This link 
(https://github.com/openstack/ceilometer/tree/master/ceilometer/compute/virt) 
currently does not have anything for docker hypervisor.

2) Using nova-plugin for heat - Using this approach docker containers run on a 
Nova VM. However I do not see any illustration which suggests that you can 
autoscale using this approach.

3) Using magnum - Currently only possible by manually invoking it.

4) heat compose - The sample available at the above link just talks about
deploying, but says nothing about autoscaling.

5) Swarm Support - Still in dev

While I understand some of these options may enable us during the future 

Re: [openstack-dev] [nova-docker][ceilometer][heat] Autoscaling docker in openstack

2015-04-17 Thread Zane Bitter

On 17/04/15 10:11, VACHNIS, AVI (AVI) wrote:

Hi,

@Ashish, if the limitation you've mentioned for #1 still exists, I join
your question: how may a Heat auto-scaling group work without Ceilometer
being able to collect docker metrics?


Yeah, you're correct. Approach #2 has exactly the same problem as 
approach #1, and adds in a few more.


The solution is either to write a Ceilometer agent for the Nova-Docker 
driver, or otherwise jury rig something to push your own custom metrics 
into Ceilometer. The cfn-push-stats tool in heat-cfntools might help 
with the latter.
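For instance, some of the older heat-templates examples push memory stats from
inside the instance with a cron entry roughly like this (flags as provided by
heat-cfntools; check cfn-push-stats --help before relying on them):

*/1 * * * * root /opt/aws/bin/cfn-push-stats --watch <watch-or-alarm-name> --mem-util

Something similar run inside (or alongside) the containers could feed
container-level numbers in.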



@Sergey, hey. Are you saying that Ceilometer does collect metrics on the
Nova::Server resource underlying docker?


I guess it actually does collect metrics on the OS::Nova::Server that
you're running Docker on (assuming you put it on an OS::Nova::Server),
but that doesn't really help you, since all of the containers are
deployed on that server - if the workload on the server gets too high,
then autoscaling will deploy even more containers to it, thus making
things worse :D


cheers,
Zane.


-Avi

-- Original message--

From: ashish.jai...@wipro.com

Date: Fri, Apr 17, 2015 4:56 PM

To: openstack-dev@lists.openstack.org;

Subject: Re: [openstack-dev] [nova-docker][ceilometer][heat]
Autoscaling docker in openstack

Hi Sergey,


So IIUC approach #2 may still help to autoscale docker on openstack. I
will try that out and post questions on the Heat IRC channel. Thanks.


Regards

Ashish


From: Sergey Kraynev skray...@mirantis.com
Sent: Friday, April 17, 2015 7:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova-docker][ceilometer][heat]
Autoscaling docker in openstack
Hi, Ashish.

Honestly, I am not familiar with most of these approaches, but I can add
more information from the Heat side (item 2).

I am surprised that you have missed the Heat autoscaling mechanism (you
should look at it :) ). It's one of the important parts of the Heat project.
It allows scaling VMs/stacks by using Ceilometer alarms. There are a
couple of examples of autoscaling templates:

https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml
 (with LoadBalancer)
https://github.com/openstack/heat-templates/blob/master/hot/asg_of_servers.yaml
https://github.com/openstack/heat-templates/blob/master/hot/asg_of_stacks.yaml

It's true that the Docker plugin for Heat creates the docker server on a
Nova::Server resource. So you may write a template with a Docker resource +
a Server resource (similar to the third template) and scale it by using
Ceilometer alarms.
If you have any questions about how to use it, please go to the #heat IRC
channel and ask us :)
Another way (AFAIK) is to use SoftwareDeployment/Config and deploy a
Server with docker inside (without the docker plugin). With this approach,
I suppose, Steve Baker can help with advice :)


On 17 April 2015 at 16:06, ashish.jai...@wipro.com wrote:


Hi,

 I have been working on running docker on openstack. I had a
 discussion on multiple IRC channels and IIUC there are 5 different ways
 of running docker on openstack. IIUC currently there is no way to
 autoscale docker on openstack. Please correct me if I am wrong.


1) Using nova-docker driver - Running docker as a Nova::Server using
nova-docker hypervisor
2) Using nova-plugin for heat - Running docker using
DockerInc::Docker::Container
3) Using magnum - IIUC no automation as of now, manually it is
possible. Not enough documentation available
4) heat compose - Saw some samples available

@https://github.com/openstack/heat-templates/tree/master/hot/software-config/elements/heat-config-docker-compose
5) Swarm support - Still in development

 Issues with each of the above approaches:

 1) Using nova-docker driver - IIUC there is no way for ceilometer to
 collect and emit statistics for the docker hypervisor. So that means
 ceilometer does not have any stats available once you switch to the
 docker driver.
This link

(https://github.com/openstack/ceilometer/tree/master/ceilometer/compute/virt)
currently does not have anything for docker hypervisor.

2) Using nova-plugin for heat - Using this approach docker
containers run on a Nova VM. However I do not see any illustration
which suggests that you can autoscale using this approach.

3) Using magnum - Currently only possible by manually invoking it.

 4) heat compose - The sample available at the above link just talks
 about deploying, but says nothing about autoscaling.

5) Swarm Support - Still in dev

While I understand some of these options may enable us during the
future release to autoscale docker on openstack. But looking
currently I feel option #1 is most 

Re: [openstack-dev] [tripleo] Building images separation and moving images into right place at right time

2015-04-17 Thread Arkady_Kanevsky

If the images under consideration are for overcloud nodes then they will be
changing all the time as well. It all depends on which layer you are working on.
Just consider the images for overcloud nodes as a cloud application, and then
follow the rules you would apply to any cloud application development.

I feel that we are in agreement on the goal and only specific work to be done 
is under discussion.

Do you have a blueprint or spec? That will simplify discussion.
Thanks,
Arkady

-Original Message-
From: Jaromir Coufal [mailto:jcou...@redhat.com] 
Sent: Friday, April 17, 2015 10:54 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tripleo] Building images separation and moving 
images into right place at right time

Hey Arkady,

yes, this should stay as a fundamental requirement. This is standard Glance
functionality; I just want to separate out the discovery and deploy images,
since these will very likely not be subject to change and they belong to the
undercloud installation stage.

That's why I want to separate the building of overcloud images (which is
actually already there) so that the user can easily replace this image with a
different one.

-- Jarda

On 17/04/15 16:18, arkady_kanev...@dell.com wrote:
 We need the ability for an Admin to add/remove new images at will, to deploy
 new overcloud images at any time.
 I expect that this is standard glance functionality.


 -Original Message-
 From: Jaromir Coufal [mailto:jcou...@redhat.com]
 Sent: Friday, April 17, 2015 8:51 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [tripleo] Building images separation and 
 moving images into right place at right time

 Hi All,

 at the moment we are building discovery, deploy and overcloud images all at
 once. Then we make the user deal with uploading all the images in one step.

 User should not be exposed to discovery/deploy images. This should happen 
 automatically for the user during undercloud installation as post-config 
 step, so that undercloud is usable.

 Once the user installs the undercloud (and has discovery & deploy images in
 place) he should be able to build / download / create overcloud images (by
 overcloud images I mean overcloud-full.*). This is what the user should deal with.

 For this we will need to separate building process for discovery+deploy 
 images and for overcloud images. Is that possible?

 -- Jarda

 __
  OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Has Neutron satisfied parity with Nova network FlatDHCP?

2015-04-17 Thread Salvatore Orlando
DVR probably still does not satisfy the same requirements as nova multi-host,
because of the lack of SNAT masquerade distribution.
Neutron DVR distributes the floating IP and east-west traffic, but the
default gateway for each VM is still centralised, thus making the network
node a SPOF.

Then from a data plane perspective the other aspect is that for DVR we
assume OVS while nova-network deployers would probably stick with Linux
Bridge. However this is a bit off topic here.

I am not aware of other differences, at least feature-wise. From a
usability perspective the infrastructure for achieving functionality
equivalent to FlatDHCP can be set up by an admin (or provided by neutron
itself using some configuration switch), and the users could be completely
oblivious to that - ie: they can boot a VM and connect a floating IP to it
without ever using the neutron utility or calling the Neutron API.
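Roughly, the admin-side setup would look something like this (names and
addresses illustrative):

neutron net-create ext-net --router:external
neutron subnet-create ext-net 203.0.113.0/24 --disable-dhcp \
  --allocation-pool start=203.0.113.10,end=203.0.113.200
neutron net-create shared-net --shared
neutron subnet-create shared-net 10.0.0.0/24 --name shared-subnet
neutron router-create r1
neutron router-gateway-set r1 ext-net
neutron router-interface-add r1 shared-subnet

After that, users only deal with booting VMs and associating floating IPs.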

Salvatore




On 17 April 2015 at 20:11, Kevin Benton blak...@gmail.com wrote:

 If the Neutron topology is configured to use a router connected to an
 external network and a shared network, will that achieve the same semantics
 as Nova with a FlatDHCP network?

 One of the last remaining items that I was aware of was ARP poisoning
 protection. At the end of Kilo, we added protection from that in OVS-based
 deployments.[1]

 Are there any other remaining major differences that we can fix on the
 Neutron side to make it a good replacement?

 1.
 https://github.com/openstack/neutron/commit/483de6313fab5913f9e68eb24afe65c36bd9b623

 --
 Kevin Benton

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Openstack live migration using devstack

2015-04-17 Thread Jordan Pittier
Hi
Double check that sql_connection in the [database] section of cinder.conf
is not empty.
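i.e. it should point at the cinder database, something like (credentials
illustrative; recent releases use 'connection', older ones 'sql_connection'):

[database]
connection = mysql://cinder:CINDER_DBPASS@127.0.0.1/cinder?charset=utf8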

Jordan

On Fri, Apr 17, 2015 at 7:24 PM, Erlon Cruz sombra...@gmail.com wrote:

 Had the same error, but with cinder. Did you find out something about
 this error?
 
 2015-04-17 14:12:31.957 TRACE cinder Traceback (most recent call last):
 2015-04-17 14:12:31.957 TRACE cinder   File
 /usr/local/bin/cinder-volume, line 10, in module
 2015-04-17 14:12:31.957 TRACE cinder sys.exit(main())
 2015-04-17 14:12:31.957 TRACE cinder   File
 /opt/stack/cinder/cinder/cmd/volume.py, line 72, in main
 2015-04-17 14:12:31.957 TRACE cinder binary='cinder-volume')
 2015-04-17 14:12:31.957 TRACE cinder   File
 /opt/stack/cinder/cinder/service.py, line 249, in create
 2015-04-17 14:12:31.957 TRACE cinder service_name=service_name)
 2015-04-17 14:12:31.957 TRACE cinder   File
 /opt/stack/cinder/cinder/service.py, line 129, in __init__
 2015-04-17 14:12:31.957 TRACE cinder *args, **kwargs)
 2015-04-17 14:12:31.957 TRACE cinder   File
 /opt/stack/cinder/cinder/volume/manager.py, line 195, in __init__
 2015-04-17 14:12:31.957 TRACE cinder *args, **kwargs)
 2015-04-17 14:12:31.957 TRACE cinder   File
 /opt/stack/cinder/cinder/manager.py, line 130, in __init__
 2015-04-17 14:12:31.957 TRACE cinder super(SchedulerDependentManager,
 self).__init__(host, db_driver)
 2015-04-17 14:12:31.957 TRACE cinder   File
 /opt/stack/cinder/cinder/manager.py, line 80, in __init__
 2015-04-17 14:12:31.957 TRACE cinder super(Manager,
 self).__init__(db_driver)
 2015-04-17 14:12:31.957 TRACE cinder   File
 /opt/stack/cinder/cinder/db/base.py, line 42, in __init__
 2015-04-17 14:12:31.957 TRACE cinder self.db.dispose_engine()
 2015-04-17 14:12:31.957 TRACE cinder   File
 /opt/stack/cinder/cinder/db/api.py, line 80, in dispose_engine
 2015-04-17 14:12:31.957 TRACE cinder if 'sqlite' not in
 IMPL.get_engine().name:
 2015-04-17 14:12:31.957 TRACE cinder   File
 /opt/stack/cinder/cinder/db/sqlalchemy/api.py, line 85, in get_engine
 2015-04-17 14:12:31.957 TRACE cinder facade = _create_facade_lazily()
 2015-04-17 14:12:31.957 TRACE cinder   File
 /opt/stack/cinder/cinder/db/sqlalchemy/api.py, line 72, in
 _create_facade_lazily
 2015-04-17 14:12:31.957 TRACE cinder **dict(CONF.database.iteritems())
 2015-04-17 14:12:31.957 TRACE cinder   File
 /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/session.py,
 line 796, in __init__
 2015-04-17 14:12:31.957 TRACE cinder **engine_kwargs)
 2015-04-17 14:12:31.957 TRACE cinder   File
 /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/session.py,
 line 376, in create_engine
 2015-04-17 14:12:31.957 TRACE cinder url =
 sqlalchemy.engine.url.make_url(sql_connection)
 2015-04-17 14:12:31.957 TRACE cinder   File
 /usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/url.py, line
 176, in make_url
 2015-04-17 14:12:31.957 TRACE cinder return
 _parse_rfc1738_args(name_or_url)
 2015-04-17 14:12:31.957 TRACE cinder   File
 /usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/url.py, line
 225, in _parse_rfc1738_args
 2015-04-17 14:12:31.957 TRACE cinder Could not parse rfc1738 URL from
 string '%s' % name)
 2015-04-17 14:12:31.957 TRACE cinder ArgumentError: Could not parse
 rfc1738 URL from string ''
 2015-04-17 14:12:31.957 TRACE cinder
 c-vol failed to start


 On Mon, Mar 10, 2014 at 12:44 PM, abhishek jain ashujain9...@gmail.com
 wrote:

 Hi all

 I have created one openstack setup using one controller node and one compute
 node, both installed using devstack. I'm running one instance on the
 controller node and want to migrate it over to the compute node.
 I'm using the following links for this.


 http://docs.openstack.org/grizzly/openstack-compute/admin/content/live-migration-usage.html

 http://docs.openstack.org/grizzly/openstack-compute/admin/content/configuring-migrations.html

 The output of 'nova-manage vm list' on the compute node is as follows:

 10 16:01:49.502 DEBUG nova.openstack.common.lockutils
 [req-019d2337-143e-4157-9d6c-3c1f2207f63b None None] Semaphore / lock
 released __get_backend inner
 /opt/stack/nova/nova/openstack/common/lockutils.py:252
 Command failed, please check log for more info
 2014-03-10 16:01:49.507 CRITICAL nova
 [req-019d2337-143e-4157-9d6c-3c1f2207f63b None None] Could not parse
 rfc1738 URL from string ''
 2014-03-10 16:01:49.507 9609 TRACE nova Traceback (most recent call
 last):
 2014-03-10 16:01:49.507 9609 TRACE nova   File /usr/bin/nova-manage,
 line 10, in module
 2014-03-10 16:01:49.507 9609 TRACE nova sys.exit(main())
 2014-03-10 16:01:49.507 9609 TRACE nova   File
 /opt/stack/nova/nova/cmd/manage.py, line 1378, in main
 2014-03-10 16:01:49.507 9609 TRACE nova ret = fn(*fn_args,
 **fn_kwargs)
 2014-03-10 16:01:49.507 9609 TRACE nova   File
 /opt/stack/nova/nova/cmd/manage.py, line 658, in list
 2014-03-10 16:01:49.507 9609 TRACE nova context.get_admin_context(),
 host)
 

[openstack-dev] [chef] PTL Candidacy

2015-04-17 Thread JJ Asghar
Hey everyone!

Starting the process of moving Chef into the “big tent” of OpenStack, I’d like 
to offer my candidacy for the PTL of the Liberty cycle.

My Qualifications

I have been shepherding the OpenStack Chef community since July of last year.
My role at Chef is the “OpenStack guy.” I have stepped up and done my best to
backfill Matt Ray and help the community move forward. Under my shepherding we
have two weekly meetings: one on Monday via Google Hangouts, recorded here [1],
and an IRC meeting on Thursday, which is recorded here [2].

My Plans for Liberty Cycle

With a working testing stack, my overarching goal is to create an automated
continuous integration system that is publicly accessible and is also initially
a non-voting member of our community. We have made huge leaps and bounds of
progress and I don’t want to lose the momentum. Along with the automation, I’m
actively recruiting for the community and attempting to get more people to
help test and join our community. This has proven to be challenging, but it’s
important for our community to grow.

With the movement towards the OpenStack Big Tent, this’ll require multiple
parallel tasks and collaboration between multiple companies and institutions. I
believe I can help shepherd us through this and I believe I’m up for that
challenge.

Thank you for your time reading this, and please don’t hesitate to reach out to 
me j...@chef.io mailto:j...@chef.io or on irc as j^2.

-JJ Asghar

[1]: https://www.youtube.com/channel/UCPQhSl-wxgWJH6_r7pk5PbQ/videos
[2]: http://eavesdrop.openstack.org/meetings/openstack_chef/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] on supporting multiple implementations of tripleo-heat-templates

2015-04-17 Thread James Slagle
On Fri, Apr 17, 2015 at 12:37 PM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Giulio Fidente's message of 2015-04-17 06:21:28 -0700:
 Hi,

 the Heat/Puppet implementation of the Overcloud deployment seems to be
 surpassing in features the Heat/Elements implementation.

 The changes for Ceph are an example: the Puppet-based version is already
 adding features which don't have their counterpart in the Elements-based one.

 Recently we started working on the addition of Pacemaker into the
 Overcloud, to monitor the services and provide a number of 'auto
 healing' features, and again this is happening in the Puppet
 implementation only (at least for now) so I think the gap will become
 bigger.

 Given we support different implementations with a single top-level
 template [1], to keep other templates valid we're forced to propagate
 the params into the Elements based templates as well, even though there
 is no use for these there, see for example [2].

 The extra work itself is not of great concern but I wonder if it
 wouldn't make sense to deprecate the Elements based templates at this
 point, instead of keep adding there unused parts? Thoughts?


 In a perfect world, templates wouldn't have implementation details like
 puppet-anything in them. We all know that isn't true, but in a perfect
 world.. ;)

 I was just wondering the other day if anybody is relying on the non-puppet
 jobs anymore. I think from my view of things, the elements approach
 can be deprecated and removed if nobody steps up to maintain them.

I think we should consider deprecation if it's clear no one is
maintaining them. The elements approach does offer testing
installation from source instead of packages, which eventually
wouldn't be tested any longer if we were to deprecate. It also has the
nice benefit of being able to CI test individual project reverts or
pins to see what might be causing failures. Maybe we could translate
these features somehow to the puppet world.

Not to put off the discussion, but I just added this as something to
discuss at the Summit[1]. Between now and then we can continue to
gauge the interest of maintaining the elements approach.

[1] https://etherpad.openstack.org/p/tripleo-liberty-proposed-sessions


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] is cloudwatch really deprecated?

2015-04-17 Thread Matt Fischer
On Fri, Apr 17, 2015 at 11:03 AM, Zane Bitter zbit...@redhat.com wrote:

 On 17/04/15 12:46, Matt Fischer wrote:

 The wiki for Using Cloudwatch states:

 This feature will be deprecated or removed during the Havana cycle as
 we move to using Ceilometer as a metric/alarm service instead. [1]

 However it seems that cloudwatch is still being developed.


 It doesn't seem that way to me, and without at least some kind of hint I'm
 not in a position to speculate on why it might seem that way to you.

  So is it deprecated or not?


 Yes, it's very deprecated.

 In fact, we should go ahead and disable it in the default config.

 - ZB


I was just looking at the dates in the commit log for the cloudwatch folder
and seeing things from 2015. If it's truly deprecated, that's great, I'll
remove it from my environment.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-17 Thread Fox, Kevin M
Currently Murano supports part of it. It provides per-cloud-region app-store-like
functionality, but I think each deployer needs to load in the apps they want in
the catalog. I'm thinking that UI should somehow plug into an
openstack.org-provided catalog of apps that OpenStack app developers can submit
to via the openstack.org website. This would make it very simple to grow a
large catalog of OpenStack-hostable apps.

But also, Murano mostly just wraps heat templates. Having written quite a few
heat templates, and having tried to make them generic enough to work in this
app-store-like model, I can tell you it's currently very, very hard for things
that aren't just a simple single server, and sometimes even that's hard. :/

The three major current stumbling blocks are:
 * Key management. (This helps:
https://blueprints.launchpad.net/barbican/+spec/vm-integration)
 * Lack of some basic conditional support in heat. (It can be hacked around very
nastily, like
https://github.com/EMSL-MSC/heat-templates/blob/master/cfn/lib/FloatingIp.yaml.
Don't tell me that's not painful...)
 * NaaS being optional. (This requires implementing your own VPNs if the provider
doesn't give you NaaS... way too costly to do.)

Thanks,
Kevin

From: Ihar Hrachyshka [ihrac...@redhat.com]
Sent: Friday, April 17, 2015 9:34 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in 
DevStack [was: Status of the nova-network to Neutron migration work]


On 04/17/2015 06:23 PM, Fox, Kevin M wrote:
 Really, what I expect to see long term in a healthy OpenStack
 ecosystem is some global AppStore-like functionality baked into
 horizon. A user goes to it, selects "my awesome scalable web
 hosting system", hits launch, and is given a link to log in via
 web browser to edit their site. Under the hood, the system just
 stood up a trove database, an elasticsearch cluster in its own
 network, a web tier, a load balancer, etc. The user didn't have to
 care how hard that used to be, and just gets charged for the
 resources consumed. This benefits the cloud deployer and the end user.
 The easier it is to use/create/consume cloud resources, the better
 it is for the deployer. If a bit steeper learning curve up front is
 necessary, that sucks, but it will be worth it.

 This sort of thing is what we need to get to, and is extremely
 difficult if OpenStack clouds differ wildly in functionality.


Isn't it what Murano project is intended to do?
/Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Barbican : What is the difference between secret and order resource

2015-04-17 Thread Asha Seshagiri
Hi All,

 I would like to know if the keys generated by Barbican through the order
resource are encrypted using KEKs and then stored in the secret object, or
are they stored in unencrypted format.

Any help  would be highly appreciated.

root@barbican:~# curl -H 'Accept: application/json' -H 'X-Project-Id:12345'
http://localhost:9311/v1/orders

Please find the command and response below :

{total: 3, orders: [{status: ACTIVE, secret_ref:
http://localhost:9311/v1/secrets/b3709da7-4691-40d6-af9a-1ae23772a7b2,
updated: 2015-03-13T22:27:48.866683, meta: {name: secretname2,
algorithm: aes, payload_content_type: application/octet-stream,
mode: cbc, bit_length: 256, expiration: null}, created:
2015-03-13T22:27:48.844860, type: key, order_ref:
http://localhost:9311/v1/orders/5a4844ca-47a9-4bd7-ae56-fb84655f48d9},

root@barbican:~# curl -H 'Accept: application/json' -H 'X-Project-Id:12345'
http://localhost:9311/v1/secrets/b3709da7-4691-40d6-af9a-1ae23772a7b2
{status: ACTIVE, secret_type: opaque, updated:
2015-03-13T22:27:48.863403, name: secretname2, algorithm: aes,
created: 2015-03-13T22:27:48.860600, secret_ref: 
http://localhost:9311/v1/secrets/b3709da7-4691-40d6-af9a-1ae23772a7b2;,
content_types: {default: application/octet-stream}, expiration:
null, bit_length: 256, mode: cbc}


root@barbican:~#  curl -H 'Accept:application/octet-stream' -H
'X-Project-Id:12345'
http://localhost:9311/v1/secrets/b3709da7-4691-40d6-af9a-1ae23772a7b2
▒▒R▒v▒▒▒W▒4▒A?Md▒L[▒K4A▒▒bx▒▒▒   - I would like to know if this response
is encrypted by barbican using KEKs, or if it is in unencrypted format whose
content type is application/octet-stream


Thanks and Regards,
Asha Seshagiri

On Fri, Apr 17, 2015 at 11:30 AM, Asha Seshagiri asha.seshag...@gmail.com
wrote:

 Thanks a lot, John, for your response.

 I also thank everyone who has been responding to my queries, in case I have
 missed someone.
 There was some problem while configuring my email; I do not receive the
 email responses directly from the openstack-dev group. I will check the
 archive folder for that and have a look into it.

 Once again , it's  nice working and collaborating with the openstack Dev
 -group.

 Thanks and Regards,
 Asha Seshagiri














 Thanks and Regards,
 Asha Seshagiri

 On Thu, Apr 16, 2015 at 8:10 AM, John Wood john.w...@rackspace.com
 wrote:

  Hello Asha,

  The /v1/secrets resource is used to upload, encrypt and store your
 secrets, and to decrypt and retrieve those secrets. Key encryption keys
 (KEKs) internal to Barbican are used to encrypt the secret.

  The /v1/orders resource is used when you want Barbican to generate
 secrets for you. When they are done they give you references to where the
 secrets are stored so you can retrieve them via the secrets resource above.
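For example, an order like the ones shown earlier in this thread can be
created with something like (project id illustrative):

curl -X POST http://localhost:9311/v1/orders \
  -H 'Content-Type: application/json' -H 'X-Project-Id: 12345' \
  -d '{"type": "key", "meta": {"name": "secretname2", "algorithm": "aes",
       "bit_length": 256, "mode": "cbc",
       "payload_content_type": "application/octet-stream"}}'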

  Hope that helps!

  Thanks,
 John

   From: Asha Seshagiri asha.seshag...@gmail.com
 Date: Thursday, April 16, 2015 at 1:23 AM
 To: openstack-dev openstack-dev@lists.openstack.org
 Cc: John Wood john.w...@rackspace.com, Reller, Nathan S. 
 nathan.rel...@jhuapl.edu, Douglas Mendizabal 
 douglas.mendiza...@rackspace.com, Paul Kehrer paul.keh...@rackspace.com,
 Adam Harwell adam.harw...@rackspace.com, Alexis Lee alex...@hp.com
 Subject: Barbican : What is the difference between secret and order
 resource

   Hi All ,

  What is the difference between secret and the order resource ?
 Where is the key stored that is used for encrypting the payload in the
 secret resource and how do we access it.

  According to my understanding ,

  Storing/posting the secret means we are encrypting the actual
 information (payload) using the key generated internally by barbican,
 based on the type mentioned in the secret type.
 Getting the secret means we are decrypting the information and getting the
 actual information.

  Posting the order refers to the generation of the actual keys by
 barbican and the encryption of those keys based on the algorithm and the
 internal key generated by barbican.
 This encrypted key is referred to through the secret reference and the whole
 metadata is referred to through an order reference.

  Please correct me if I am wrong.
 Any help would be highly appreciated.


  --
  *Thanks and Regards,*
 *Asha Seshagiri*




 --
 *Thanks and Regards,*
 *Asha Seshagiri*




-- 
*Thanks and Regards,*
*Asha Seshagiri*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Things to tackle in Liberty

2015-04-17 Thread Fox, Kevin M
True. For example, the infiniband passthrough blueprint might need port type 
info from neutron-nova?

Thanks,
Kevin

From: Neil Jerram [neil.jer...@metaswitch.com]
Sent: Friday, April 17, 2015 9:44 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] Things to tackle in Liberty

On 17/04/15 17:24, Daniel P. Berrange wrote:
 On Fri, Apr 17, 2015 at 12:16:25PM -0400, Jay Pipes wrote:
 On 04/10/2015 11:48 AM, Neil Jerram wrote:
 What I imagine, though, is that the _source_ of the plugging information
 could move from Nova to Neutron, so that future plugging-related code
 changes are a matter for Neutron rather than for Nova.  The plugging
 would still _happen_ from within Nova, as directed by that information.

 -1. One of the biggest problems I have with the current implementation for
 Nova VIF types is that stuff leaks improperly out of the Neutron API. Take a
 look at all the Contrail implementation specifics in here:

 https://github.com/openstack/nova/blob/master/nova/virt/libvirt/vif.py#L551-L589

 That belongs in the Neutron plugin/agent side of things. Nova should not
 need to know how to call the vrouter-port-control Contrail CLI command. That
 should be Neutron's domain -- and not the Neutron API, but instead the
 underlying L2 agent/plugin. Most of the port binding profile stuff that is
 returned by the Neutron API's primitives should never have been exposed by
 the Neutron API, in my opinion. There's just too much driver-implementation
 specifics in there.

 Yes, that's exactly the set of tasks that is going to be hidden from Nova
 in the work Brent is doing to enable scripts. Ultimately all the plug/unplug
 methods in vif.py should go away, and Neutron will just pass over the name of
 a script to execute at the plug & unplug stages. So all the vif.py file in
 libvirt will need to do is invoke the nominated script at the right time and
 build the libvirt XML config. All the business logic will be entirely under
 the control of the Neutron maintainer.

Yes; but, as I commented on the spec earlier today, I don't think the
vif-plugin-script work as it stands quite gets us there.  We also still
need either a set of base VIF types in Nova - e.g., in the libvirt case,
to control whether the generated XML is interface type='ethernet' ...
or interface type='bridge' ... - or some equivalently generic
information that is passed from Neutron to allow Nova to generate the
required XML (or equivalent for other hypervisors).

Regards,
Neil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Neutron] Has Neutron satisfied parity with Nova network FlatDHCP?

2015-04-17 Thread Kevin Benton
If the Neutron topology is configured to use a router connected to an
external network and a shared network, will that achieve the same semantics
as Nova with a FlatDHCP network?

One of the last remaining items that I was aware of was ARP poisoning
protection. At the end of Kilo, we added protection from that in OVS-based
deployments.[1]

Are there any other remaining major differences that we can fix on the
Neutron side to make it a good replacement?

1.
https://github.com/openstack/neutron/commit/483de6313fab5913f9e68eb24afe65c36bd9b623

-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] is cloudwatch really deprecated?

2015-04-17 Thread Zane Bitter

On 17/04/15 13:54, Matt Fischer wrote:

On Fri, Apr 17, 2015 at 11:03 AM, Zane Bitter zbit...@redhat.com wrote:

On 17/04/15 12:46, Matt Fischer wrote:

The wiki for Using Cloudwatch states:

This feature will be deprecated or removed during the Havana
cycle as
we move to using Ceilometer as a metric/alarm service instead. [1]

However it seems that cloudwatch is still being developed.


It doesn't seem that way to me, and without at least some kind of
hint I'm not in a position to speculate on why it might seem that
way to you.

So is it deprecated or not?


Yes, it's very deprecated.

In fact, we should go ahead and disable it in the default config.

- ZB


I was just looking at the dates in the commit log for the cloudwatch
folder and seeing things from 2015. If it's truly deprecated, that's
great, I'll remove it from my environment.


OK, that's what I looked at too, and there were a lot of recent changes 
but they all appeared to be global cleanups that went across the whole 
Heat codebase. The last thing that looked like active development was in 
July 2013. You definitely won't regret removing it ;)
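On the operator side that should just mean not running the heat-api-cloudwatch
service and, assuming the option is still called this in your release,
switching off the legacy resource in heat.conf:

[DEFAULT]
enable_cloud_watch_lite = false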


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Has Neutron satisfied parity with Nova network FlatDHCP?

2015-04-17 Thread Kevin Benton
But if I understand nova network with FlatDHCP correctly, there is no such
thing as a host without a floating IP. So if every instance is given a
floating IP in Neutron in a similar fashion, then the lack of SNAT
distribution would not matter in that case, correct?

On Fri, Apr 17, 2015 at 11:26 AM, Salvatore Orlando sorla...@nicira.com
wrote:

 DVR probably still does not satisfy the same requirements as nova multi-host,
 because of the lack of SNAT masquerade distribution.
 Neutron DVR distributes the floating IP and east-west traffic, but the
 default gateway for each VM is still centralised, thus making the network
 node a SPOF.

 Then from a data plane perspective the other aspect is that for DVR we
 assume OVS while nova-network deployers would probably stick with Linux
 Bridge. However this is a bit off topic here.

 I am not aware of other differences, at least feature-wise. From a
 usability perspective the infrastructure for achieving functionality
 equivalent to FlatDHCP can be set up by an admin (or provided by neutron
 itself using some configuration switch), and the users could be completely
 oblivious to that - ie: they can boot a VM and connect a floating IP to it
 without ever using the neutron utility or calling the Neutron API.

 Salvatore




 On 17 April 2015 at 20:11, Kevin Benton blak...@gmail.com wrote:

 If the Neutron topology is configured to use a router connected to an
 external network and a shared network, will that achieve the same semantics
 as Nova with a FlatDHCP network?

 One of the last remaining items that I was aware of was ARP poisoning
 protection. At the end of Kilo, we added protection from that in OVS-based
 deployments.[1]

 Are there any other remaining major differences that we can fix on the
 Neutron side to make it a good replacement?

 1.
 https://github.com/openstack/neutron/commit/483de6313fab5913f9e68eb24afe65c36bd9b623

 --
 Kevin Benton

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [chef] Started the review to move from stackforge to openstack

2015-04-17 Thread JJ Asghar
Hey Everyone!

I’ve started the review [1] to move from stackforge to the openstack namespace.
I’m looking for comments, ideas, and any pointers. Please don’t hesitate to
reach out via the review, email, or IRC on freenode.

Thanks!

JJ Asghar (j^2)



[1]: https://review.openstack.org/175000
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Openstack live migration using devstack

2015-04-17 Thread Erlon Cruz
Had the same error, but with cinder. Did you find out something about
this error?

2015-04-17 14:12:31.957 TRACE cinder Traceback (most recent call last):
2015-04-17 14:12:31.957 TRACE cinder   File "/usr/local/bin/cinder-volume", line 10, in <module>
2015-04-17 14:12:31.957 TRACE cinder     sys.exit(main())
2015-04-17 14:12:31.957 TRACE cinder   File "/opt/stack/cinder/cinder/cmd/volume.py", line 72, in main
2015-04-17 14:12:31.957 TRACE cinder     binary='cinder-volume')
2015-04-17 14:12:31.957 TRACE cinder   File "/opt/stack/cinder/cinder/service.py", line 249, in create
2015-04-17 14:12:31.957 TRACE cinder     service_name=service_name)
2015-04-17 14:12:31.957 TRACE cinder   File "/opt/stack/cinder/cinder/service.py", line 129, in __init__
2015-04-17 14:12:31.957 TRACE cinder     *args, **kwargs)
2015-04-17 14:12:31.957 TRACE cinder   File "/opt/stack/cinder/cinder/volume/manager.py", line 195, in __init__
2015-04-17 14:12:31.957 TRACE cinder     *args, **kwargs)
2015-04-17 14:12:31.957 TRACE cinder   File "/opt/stack/cinder/cinder/manager.py", line 130, in __init__
2015-04-17 14:12:31.957 TRACE cinder     super(SchedulerDependentManager, self).__init__(host, db_driver)
2015-04-17 14:12:31.957 TRACE cinder   File "/opt/stack/cinder/cinder/manager.py", line 80, in __init__
2015-04-17 14:12:31.957 TRACE cinder     super(Manager, self).__init__(db_driver)
2015-04-17 14:12:31.957 TRACE cinder   File "/opt/stack/cinder/cinder/db/base.py", line 42, in __init__
2015-04-17 14:12:31.957 TRACE cinder     self.db.dispose_engine()
2015-04-17 14:12:31.957 TRACE cinder   File "/opt/stack/cinder/cinder/db/api.py", line 80, in dispose_engine
2015-04-17 14:12:31.957 TRACE cinder     if 'sqlite' not in IMPL.get_engine().name:
2015-04-17 14:12:31.957 TRACE cinder   File "/opt/stack/cinder/cinder/db/sqlalchemy/api.py", line 85, in get_engine
2015-04-17 14:12:31.957 TRACE cinder     facade = _create_facade_lazily()
2015-04-17 14:12:31.957 TRACE cinder   File "/opt/stack/cinder/cinder/db/sqlalchemy/api.py", line 72, in _create_facade_lazily
2015-04-17 14:12:31.957 TRACE cinder     **dict(CONF.database.iteritems())
2015-04-17 14:12:31.957 TRACE cinder   File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/session.py", line 796, in __init__
2015-04-17 14:12:31.957 TRACE cinder     **engine_kwargs)
2015-04-17 14:12:31.957 TRACE cinder   File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/session.py", line 376, in create_engine
2015-04-17 14:12:31.957 TRACE cinder     url = sqlalchemy.engine.url.make_url(sql_connection)
2015-04-17 14:12:31.957 TRACE cinder   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/url.py", line 176, in make_url
2015-04-17 14:12:31.957 TRACE cinder     return _parse_rfc1738_args(name_or_url)
2015-04-17 14:12:31.957 TRACE cinder   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/url.py", line 225, in _parse_rfc1738_args
2015-04-17 14:12:31.957 TRACE cinder     "Could not parse rfc1738 URL from string '%s'" % name)
2015-04-17 14:12:31.957 TRACE cinder ArgumentError: Could not parse rfc1738 URL from string ''
2015-04-17 14:12:31.957 TRACE cinder
c-vol failed to start
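
For what it's worth, that ArgumentError is what SQLAlchemy raises when the
connection string it is handed is empty - usually a sign that
[database]/connection is unset or blank in cinder.conf. A minimal sketch
reproducing it (the mysql URL is just an example value):

  from sqlalchemy.engine.url import make_url
  from sqlalchemy.exc import ArgumentError

  try:
      # What oslo.db ends up doing when the connection option is blank.
      make_url('')
  except ArgumentError as e:
      print(e)  # Could not parse rfc1738 URL from string ''

  # A well-formed value, by contrast, parses fine:
  print(make_url('mysql://cinder:secret@127.0.0.1/cinder'))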


On Mon, Mar 10, 2014 at 12:44 PM, abhishek jain ashujain9...@gmail.com
wrote:

 Hi all

 I have created an OpenStack setup using one controller node and one compute
 node, both installed using devstack. I'm running one instance on the
 controller node and want to migrate it over to the compute node.
 I'm using the following links for this.


 http://docs.openstack.org/grizzly/openstack-compute/admin/content/live-migration-usage.html

 http://docs.openstack.org/grizzly/openstack-compute/admin/content/configuring-migrations.html

 The output of "nova-manage vm list" on the compute node is as follows:

 2014-03-10 16:01:49.502 DEBUG nova.openstack.common.lockutils
 [req-019d2337-143e-4157-9d6c-3c1f2207f63b None None] Semaphore / lock
 released __get_backend inner
 /opt/stack/nova/nova/openstack/common/lockutils.py:252
 Command failed, please check log for more info
 2014-03-10 16:01:49.507 CRITICAL nova
 [req-019d2337-143e-4157-9d6c-3c1f2207f63b None None] Could not parse
 rfc1738 URL from string ''
 2014-03-10 16:01:49.507 9609 TRACE nova Traceback (most recent call last):
 2014-03-10 16:01:49.507 9609 TRACE nova   File "/usr/bin/nova-manage", line 10, in <module>
 2014-03-10 16:01:49.507 9609 TRACE nova     sys.exit(main())
 2014-03-10 16:01:49.507 9609 TRACE nova   File "/opt/stack/nova/nova/cmd/manage.py", line 1378, in main
 2014-03-10 16:01:49.507 9609 TRACE nova     ret = fn(*fn_args, **fn_kwargs)
 2014-03-10 16:01:49.507 9609 TRACE nova   File "/opt/stack/nova/nova/cmd/manage.py", line 658, in list
 2014-03-10 16:01:49.507 9609 TRACE nova     context.get_admin_context(), host)
 2014-03-10 16:01:49.507 9609 TRACE nova   File "/opt/stack/nova/nova/db/api.py", line 671, in instance_get_all_by_host
 2014-03-10 16:01:49.507 9609 TRACE nova     return IMPL.instance_get_all_by_host(context, host, columns_to_join)
 2014-03-10 

Re: [openstack-dev] [Heat] moving sqlite migration scripts to tests

2015-04-17 Thread Zane Bitter

On 16/04/15 04:05, Anant Patil wrote:

Hi,

Sometime back we had a discussion on IRC regarding sqlite migration
scripts. Since sqlite is mostly used for testing, we were thinking
about moving the sqlite-migration-related code to the tests folder and
keeping migrate_repo sane (with only production code). There was a
utility class [1] added recently to help with sqlite migrations, and
there were questions about the location of that class. The utility
lives in the heat/db/sqlalchemy folder and, since it is used only for
testing, it should probably live somewhere in the tests folder (like
tests/migrate_repo?) along with the sqlite migration scripts.

It would be better if we had a separate path for testing code and,
depending on the configured DB back-end (for example, sqlite), passed
the appropriate path (something like tests/migrate_repo for sqlite) to
oslo_migration.db_sync().

If it is okay to assume that sqlite is *always* used for testing, then
IMO, we should re-factor the migration scripts. Please help us with your
thoughts.

[1]
https://github.com/openstack/heat/blob/master/heat/db/sqlalchemy/utils.py

- Anant


You're correct that we can assume SQLite is only used for tests.

However, I'm not convinced that we need to change the migration
scripts... it's bad enough that we have to write two different
migrations in many cases (and it's totally unclear to me how this is
testing anything useful), but having to write them in two different
places seems even worse.


I'd be more interested in seeing a change whereby we stop doing 
pointless migrations on an empty SQLite DB prior to testing and just 
generate it from the model. I think we can rely on the migration tests 
that run against the actual mariadb/postgresql clients to test the 
migrations themselves - we effectively already are in many cases.
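
To sketch what I mean (illustrative only - the import path and BASE
attribute below are assumptions about the current model layout):

  # Build the test schema straight from the model instead of replaying
  # migrations against an empty SQLite DB.
  import sqlalchemy

  from heat.db.sqlalchemy import models  # illustrative import path

  def setup_test_database():
      engine = sqlalchemy.create_engine('sqlite://')  # in-memory DB
      # create_all() emits CREATE TABLE for every mapped model, giving
      # the same end state the migrations would, without running them.
      models.BASE.metadata.create_all(engine)
      return engine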


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][policy][neutron] oslo.policy API is not powerful enough to switch Neutron to it

2015-04-17 Thread Salvatore Orlando
Thanks for this analysis Ihar.
Some comments inline.

On 17 April 2015 at 14:45, Ihar Hrachyshka ihrac...@redhat.com wrote:


 Hi,

 tl;dr neutron has special semantics for policy targets that relies on
 private symbols from oslo.policy, and it's impossible to introduce
 this semantics into oslo.policy itself due to backwards compatibility
 concerns, meaning we need to expose some more symbols as part of
 public API for the library to facilitate neutron switch to it.

 ===

 oslo.policy was graduated during Kilo [1]. Neutron considered the
 switch to it [2], but failed to achieve it because some library
 symbols that were originally public (or at least looked like public)
 in policy.py from oslo-incubator, became private in oslo.policy.
 Specifically, Neutron policy code [3] relies on the following symbols
 that are now hidden inside oslo_policy._checks (note the underscore in
 the name of the module that suggests we cannot use the module directly):

 - RoleCheck
 - RuleCheck
 - AndCheck

 Those symbols are used for the following matters:
 (all the relevant neutron code is in neutron/policy.py)

 1. debug logging in case policy does not authorize an action
 (RuleCheck, AndCheck) [log_rule_list]

 2. filling in admin context with admin roles (RoleCheck, RuleCheck,
 AndCheck/OrCheck internals) [get_admin_roles]

 3. aggregating core, attribute and subattribute policies (RuleCheck,
 AndCheck) [_prepare_check]


 == 1. debug logging in case policy does not authorize an action ==

 Neutron logs the rules that failed to match if the policy module does not
 authorize an action. I am not sure whether Neutron developers really want
 to keep those debug logs, and whether we can't just kill them to avoid
 this specific usage of private symbols; though it also seems that we
 could easily use __str__, which is present for all types of Checks,
 instead. So it does not look like a blocker for the switch.


Definitely not a blocker - we could do as you suggest, or remove that
logging altogether.
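
For example, something along these lines would avoid the private symbols
entirely (illustrative only, not actual neutron code):

  # Render the failing rule via str() instead of walking
  # oslo_policy._checks internals; every check type implements __str__.
  import logging

  LOG = logging.getLogger(__name__)

  def log_failed_policy(action, match_rule):
      # match_rule may be an AndCheck/OrCheck aggregate; str() flattens
      # it back to the policy language, e.g. "rule:admin_or_owner and ..."
      LOG.debug("Policy check for %s failed; evaluated rule: %s",
                action, match_rule)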



 == 2. filling in admin context with admin roles ==

 The admin context object is filled with a .roles attribute that is a list
 of roles considered to grant admin permissions [4]. The attribute would
 then be used by plugins that would like to do explicit policy checks.
 As per Salvatore, this attribute can probably be dropped now that
 plugins and services no longer rely on it (Salvatore mentioned the lbaas
 mixins as the ones that previously relied on it, but they no longer do
 since the services split from the neutron tree (?)).

 The problem with dropping the .roles attribute from context object in
 Liberty is that we, as a responsible upstream with lots of plugins
 maintained out-of-tree (see the ongoing vendor decomposition effort)
 would need to support the attribute while it's marked as deprecated
 for at least one cycle, meaning that if we don't get those oslo.policy
 internals we rely on in Liberty, we would need to postpone the switch
 till Mizzle, or rely on private symbols during the switch (while a new
 release of oslo.policy can easily break us).

 (BTW the code to extract admin roles is not really robust and has
 bugs, e.g. it does not handle AndChecks that could be used in
 context_is_admin. In theory, the 'and' syntax would mean that both roles
 are needed to claim someone is an admin, while the code to extract
 admin roles handles 'and' the same way as 'or'. For the deprecation
 period, we may need to document this limitation.)


Roles are normally populated by the keystone middleware. I'm not sure
whether we want to drop them altogether from the context, but I would
expect the policy engine to use them even after switching to oslo.policy -
plugins should never leverage this kind of information. I am indeed
tempted to drop roles and other AAA-related info from the context when it's
dispatched to the plugin. This should be done carefully - beyond breaking
some plugins it might also impact the notifiers we use to communicate with
nova.

The function which artificially populates roles in the context was built
for artificial contexts, which in some cases were created by plugins
performing db operations at startup. I would check whether we still have
this requirement, and if not, remove the function.



 == 3. aggregating core, attribute and subattribute policies ==

 That's the most interesting issue.

 For oslo.policy, policies are described as "target: rule", where the rule
 is interpreted as per the registered checks, while the target is opaque
 to the library.

 Neutron extended the syntax for target as:
 target[:attribute[:subattribute]].

 If attribute is present in a policy entry, it applies to the target iff
 the attribute is set, 'enforce_policy' is set in the attribute map for
 the attribute in question, and the target is not read-only (i.e. its
 name does not start with get_).
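
 To make the aggregation concrete, a hypothetical sketch of how a core
 policy and an attribute policy end up combined (this mirrors the shape of
 neutron's _prepare_check, not its exact code, and note the private-module
 import that is exactly the problem):

   from oslo_policy import _checks

   def aggregate_policies(action, attribute):
       # e.g. action='create_network', attribute='shared' combines the
       # 'create_network' and 'create_network:shared' policies into one
       # check that passes only when both pass.
       core = _checks.RuleCheck('rule', action)
       attr = _checks.RuleCheck('rule', '%s:%s' % (action, attribute))
       return _checks.AndCheck([core, attr])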

 If subattribute is present, the rule applies to target if 'validate'
 is set in attribute map for the attribute, and its type is dict, 

Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-17 Thread Kevin Benton
On the contrary, if you reread the message to which you were
previously replying, it was about the unnecessary complexity of
OVS (and Neutron in general) for deployments which explicitly
_don't_ need and can never take advantage of self-service
networking. The implication being that Neutron needs a "just connect
everything to a simple flat network on a bridge I can easily debug"
mode which hides or eliminates those complexities instead.

I understand. What I'm saying is that switching to Linux bridge will not
change the networking model to 'just connect everything to a simple flat
network'. All of the complaints about self-service networking will still
hold.

On Fri, Apr 17, 2015 at 8:22 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-04-16 21:17:03 -0700 (-0700), Kevin Benton wrote:
  What do you disagree with? I was pointing out that using Linux
  bridge will not reduce the complexity of the model of self-service
  networking, which is what the quote was complaining about.

 On the contrary, if you reread the message to which you were
 previously replying, it was about the unnecessary complexity of
 OVS (and Neutron in general) for deployments which explicitly
 _don't_ need and can never take advantage of self-service
 networking. The implication being that Neutron needs a "just connect
 everything to a simple flat network on a bridge I can easily debug"
 mode which hides or eliminates those complexities instead.
 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-17 Thread Kevin Benton
I definitely understand that. But what is the major complaint from
operators? I understood that quote to imply it was around Neutron's model
of self-service networking.

If the main reason the remaining Nova-net operators don't want to use
Neutron is due to the fact that they don't want to deal with the Neutron
API, swapping some implementation defaults isn't really going to get us
anywhere on that front.

It's an important distinction because it determines what actionable items
we can take (e.g. what Salvatore mentioned in his email about defaults).
Does that make sense?

On Fri, Apr 17, 2015 at 11:33 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-04-17 10:55:19 -0700 (-0700), Kevin Benton wrote:
  I understand. What I'm saying is that switching to Linux bridge
  will not change the networking model to 'just connect everything
  to a simple flat network'. All of the complaints about
  self-service networking will still hold.

 And conversely, swapping simple bridge interfaces for something else
 still means problems are harder to debug, whether or not you're
 stuck with self-service networking features you're not using.
 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Building images separation and moving images into right place at right time

2015-04-17 Thread Jaromir Coufal

Hey Arkady,

yes, this should stay as a fundamental requirement. This is standard Glance
functionality. I just want to separate the discovery and deploy images since
these will very likely not be subject to change, and they belong to the
undercloud installation stage.


That's why I want to separate out the building of overcloud images (which
is actually already there) so that the user can easily replace that image
with a different one.


-- Jarda

On 17/04/15 16:18, arkady_kanev...@dell.com wrote:

We need the ability for an admin to add/remove images at will, to deploy new
overcloud images at any time.
I expect that this is standard Glance functionality.


-Original Message-
From: Jaromir Coufal [mailto:jcou...@redhat.com]
Sent: Friday, April 17, 2015 8:51 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [tripleo] Building images separation and moving images 
into right place at right time

Hi All,

at the moment we are building discovery, deploy and overcloud images all at
once. Then we force the user to deal with uploading all the images in one
step.

The user should not be exposed to the discovery/deploy images. These should
be handled automatically for the user during undercloud installation as a
post-config step, so that the undercloud is usable.

Once the user installs the undercloud (and has the discovery & deploy images
in place) he should be able to build / download / create overcloud images (by
overcloud images I mean overcloud-full.*). This is what the user should deal
with.

For this we will need to separate the building process for discovery+deploy
images from that for overcloud images. Is that possible?

-- Jarda

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Things to tackle in Liberty

2015-04-17 Thread Neil Jerram

On 17/04/15 17:16, Jay Pipes wrote:

On 04/10/2015 11:48 AM, Neil Jerram wrote:

What I imagine, though, is that the _source_ of the plugging information
could move from Nova to Neutron, so that future plugging-related code
changes are a matter for Neutron rather than for Nova.  The plugging
would still _happen_ from within Nova, as directed by that information.


-1. One of the biggest problems I have with the current implementation
for Nova VIF types is that stuff leaks improperly out of the Neutron
API. Take a look at all the Contrail implementation specifics in here:

https://github.com/openstack/nova/blob/master/nova/virt/libvirt/vif.py#L551-L589


That belongs in the Neutron plugin/agent side of things. Nova should not
need to know how to call the vrouter-port-control Contrail CLI command.
That should be Neutron's domain -- and not the Neutron API, but instead
the underlying L2 agent/plugin.


I entirely agree on this big picture.  I think the disagreement (i.e. 
your -1) is just with some detail of my wording above; perhaps the "from 
within Nova" part?



Most of the port binding profile stuff
that is returned by the Neutron API's primitives should never have been
exposed by the Neutron API, in my opinion. There are just too many
driver-implementation specifics in there.


(For libvirt, I believe the required information would be the interface
type and parameters to go into the libvirt XML, and then any further
processing to do after that XML has been launched.  I believe the latter
is covered by the vif-plug-script spec, but not the former.)


Agreed.


However, on thinking this through, I see that this really is bound up
with the wider question of nova-net / Neutron migration - because it
obviously can't make sense for plugging information to come from Neutron
if some people are not using Neutron at all for their networking.

Stepping right back, though, is it agreed in principle that detailed
networking code should move from Nova to Neutron, if that is possible
while preserving all existing behaviours?  If it is agreed, that's
something that I'd like to help with.


What precisely do you mean by "preserving all existing behaviours"? :)


Just that we always need to respect back-compatibility.  Isn't that a 
normal assumption within OpenStack?  (Perhaps I didn't need to say it, 
then... :-))


Regards,
Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [wsme] [ironic] [ceilometer] [magnum] [kite] [tuskar] WSME unmaintained ?

2015-04-17 Thread Julien Danjou
On Fri, Apr 17 2015, Lucas Alvares Gomes wrote:

 Apparently not. But it would be good to get some bugs fixed in WSME
 before we come up with a final solution within each OpenStack project:
 whether to keep WSME or migrate to something else, since that requires
 time.

You are now both members of wsme-core.

Enjoy, and happy hacking! :-)

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up Doc? April 17, 2015

2015-04-17 Thread Anne Gentle | Just Write Click
_Migration Day!_
Both our End User Guide and our Admin User Guide have been migrated from 
DocBook source to RST source and now build with Sphinx and a shiny new 
theme:

http://docs.openstack.org/user-guide/index.html
http://docs.openstack.org/user-guide-admin/index.html

Multiple thanks and shout-outs to everyone who made this happen, it's about 
six months of effort and a big blueprint to mark COMPLETE! If you see any 
problems, please report them at http://bugs.launchpad.net/openstack-manuals. 
The new design may take a while to learn how to navigate but it's a marked 
improvement in both user and contributor experience. Now go, write some 
docs! 

_New Theme_
Developers, you can now use the openstackdocstheme rather than oslosphinx 
for your developer docs published to docs.openstack.org/developer. Please 
read the openstackdocstheme/README.rst [1] carefully as it has important 
information about required conf.py changes. You can also continue to use 
oslosphinx, though I believe consistency is best. 

_Docs Analytics_
Projects, I'd like all of you to rebuild your docs using either theme as we 
need to get web analytics working again. For the last year or so, only nova 
and swift have had web analytics consistently. Gold star to 
oslosphinx-the-project for being first to build with their updated theme 
containing analytics - who will be next?

_Bug Triage Day_
Join us next Thursday, April 23, for a doc bug triage day! [2] We have doc 
team members standing by around the clock to answer questions and triage 
doc bugs. 

_Liberty Summit Doc Topics_
Please add your docs hot topics here for consideration to discuss in 
Vancouver: 
https://etherpad.openstack.org/p/Docs_Liberty_Design_Sessions
Lana will combine and compile to add to the wiki using the schedule. [3]

Thanks,
Anne

1. 
http://git.openstack.org/cgit/openstack/openstackdocstheme/tree/README.rst
2. https://wiki.openstack.org/wiki/Documentation/BugDay
3. 
https://docs.google.com/spreadsheets/d/1VsFdRYGbX5eCde81XDV7TrPBfEC7cgtOFikruYmqbPY/edit?usp=sharing


-- 
Anne Gentle
annegen...@justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [docs] Networking Guide Doc Day - April 23rd

2015-04-17 Thread Edgar Magana
Hello Folks,

I would like to invite all available contributors to help us complete the
OpenStack Networking Guide.

We are having a Networking Doc Day on April 23rd in order to review the current 
guide and make a big push on its content.
Let's use both the Neutron and Docs IRC channels:
#openstack-neutron
#openstack-doc

All the expected content is being described in the TOC:
https://wiki.openstack.org/wiki/NetworkingGuide/TOC

Information for Doc contributors in here:
https://wiki.openstack.org/wiki/Documentation/HowTo#Edit_OpenStack_RST_and.2For_DocBook_documentation

We have prepared an etherpad to coordinate who is doing what:
https://etherpad.openstack.org/p/networking-guide

There are so many ways you can contribute:

  *   Assign yourself one of the available chapters
  *   Review the current content and open bugs if needed
  *   Review the existing gerrit commits if you are familiar with the information
  *   Be available on IRC to answer questions about configuration details
or functionality
  *   Cheer on the contributors!

Do not hesitate to contact me with any questions.

Cheers!

Edgar Magana
IRC: emagana
emag...@gmail.com
edgar.mag...@workday.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-17 Thread Jeremy Stanley
On 2015-04-17 10:55:19 -0700 (-0700), Kevin Benton wrote:
 I understand. What I'm saying is that switching to Linux bridge
 will not change the networking model to 'just connect everything
 to a simple flat network'. All of the complaints about
 self-service networking will still hold.

And conversely, swapping simple bridge interfaces for something else
still means problems are harder to debug, whether or not you're
stuck with self-service networking features you're not using.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-17 Thread Salvatore Orlando
And since we've circled back, I might add that perhaps we want nova-network
to deliver that: simple, reliable networking leveraging well-established
off-the-shelf technologies, satisfying the use cases Jeremy is referring
to.

If, regardless of changes in governance pertaining to the openstack project,
the consensus is still that nova-network's functionality should be provided
by neutron under the same assumptions, then what Kevin is suggesting goes in
the right direction, regardless of whether the deployer chooses linux
bridge, OVS, or some fancy advanced technology like [1]. However, there's
more to it than that. For instance, ask the average user who just wants
connectivity whether they think creating a router or pointing a floating
IP to a port should be part of their workflow. You can figure out the
answer by yourself.

I had a chat with Sean Dague a few days back on IRC [2]. The point seems to
be that when neutron is deployed as a replacement for nova-network it should
provide defaults that replicate nova-network's flatdhcp networking mode. For
instance this would be a shared network, a single router, and a single
external network (the floating IP pool).

If multi-host is required, that single router should be distributed (and
perhaps one day neutron will distribute SNAT too). Router distribution with
linux bridge might be a problem with the current framework, where we're
insisting on supporting the nova-network scenario using neutron's control
plane constructs, which were conceived for multi-tenancy and self-service
networking.

And then there's the API usability perspective. But if we provide defaults
for neutron resources then the problem is probably solved, as users will
have little to no interaction with the neutron API.

Salvatore


[1] https://github.com/salv-orlando/hdn
[2]
http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2015-04-15.log
from 2015-04-15T13:26:55

On 17 April 2015 at 17:22, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-04-16 21:17:03 -0700 (-0700), Kevin Benton wrote:
  What do you disagree with? I was pointing out that using Linux
  bridge will not reduce the complexity of the model of self-service
  networking, which is what the quote was complaining about.

 On the contrary, if you reread the message to which you were
 previously replying, it was about the unnecessary complexity of
 OVS (and Neutron in general) for deployments which explicitly
 _don't_ need and can never take advantage of self-service
 networking. The implication being that Neutron needs a "just connect
 everything to a simple flat network on a bridge I can easily debug"
 mode which hides or eliminates those complexities instead.
 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Barbican : What is the difference between secret and order resource

2015-04-17 Thread Asha Seshagiri
Thanks a lot, John, for your response.

I also thank everyone who has been responding to my queries, in case I
have missed someone. There was some problem while configuring my email:
I do not receive email responses directly from the openstack-dev group,
so I will check the archive folder for them and look into it.

Once again, it's nice working and collaborating with the openstack dev
group.

Thanks and Regards,
Asha Seshagiri












On Thu, Apr 16, 2015 at 8:10 AM, John Wood john.w...@rackspace.com wrote:

  Hello Asha,

  The /v1/secrets resource is used to upload, encrypt and store your
 secrets, and to decrypt and retrieve those secrets. Key encryption keys
 (KEKs) internal to Barbican are used to encrypt the secret.

  The /v1/orders resource is used when you want Barbican to generate
 secrets for you. When they are done they give you references to where the
 secrets are stored so you can retrieve them via the secrets resource above.

  Hope that helps!

  Thanks,
 John

   From: Asha Seshagiri asha.seshag...@gmail.com
 Date: Thursday, April 16, 2015 at 1:23 AM
 To: openstack-dev openstack-dev@lists.openstack.org
 Cc: John Wood john.w...@rackspace.com, Reller, Nathan S. 
 nathan.rel...@jhuapl.edu, Douglas Mendizabal 
 douglas.mendiza...@rackspace.com, Paul Kehrer paul.keh...@rackspace.com,
 Adam Harwell adam.harw...@rackspace.com, Alexis Lee alex...@hp.com
 Subject: Barbican : What is the difference between secret and order
 resource

   Hi All ,

  What is the difference between the secret and the order resource?
 Where is the key stored that is used for encrypting the payload in the
 secret resource, and how do we access it?

  According to my understanding,

  Storing/posting the secret means we are encrypting the actual
 information (payload) using a key generated internally by Barbican
 based on the type mentioned in the secret type.
 Getting the secret means we are decrypting the information and getting
 the actual information.

  Posting an order refers to the generation of the actual keys by
 Barbican, and encrypting those keys based on the algorithm and the
 internal key generated by Barbican.
 This encrypted key is referenced through the secret reference, and the
 whole metadata is referenced through an order reference.

  Please correct me if I am wrong.
 Any help would be highly appreciated.


  --
  *Thanks and Regards,*
 *Asha Seshagiri*




-- 
*Thanks and Regards,*
*Asha Seshagiri*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [wsme] [ironic] [ceilometer] [magnum] [kite] [tuskar] WSME unmaintained ?

2015-04-17 Thread Chris Dent

On Fri, 17 Apr 2015, Lucas Alvares Gomes wrote:


/me also deliberately volunteers cdent to wsme core :-)


Feh. I suppose since most of the recent code and conversation has
been you and me, that makes sense. If people agree, I'm happy to
participate, but only if you're there too. That's only fair.

I guess liking something is not a requirement for being core?

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [wsme] [ironic] [ceilometer] [magnum] [kite] [tuskar] WSME unmaintained ?

2015-04-17 Thread Lucas Alvares Gomes
 On Fri, 17 Apr 2015, Lucas Alvares Gomes wrote:

 /me also deliberately volunteers cdent to wsme core :-)


 Feh. I suppose since most of the recent code and conversation has
 been you and me, that makes sense. If people agree, I'm happy to
 participate, but only if you're there too. That's only fair.


Hah yeah good conversations in the #wsme channel. Sure, count on me :-)

 I guess liking something is not a requirement for being core?


Apparently not. But it would be good to get some bugs fixed in WSME
before we come up with a final solution within each OpenStack project:
whether to keep WSME or migrate to something else, since that requires
time.

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron scaling datapoints?

2015-04-17 Thread Attila Fazekas




- Original Message -
 From: joehuang joehu...@huawei.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Friday, April 17, 2015 9:46:12 AM
 Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?
 
 Hi, Attila,
 
 only addressing the issue of agent status/liveness management is not enough
 for Neutron scalability. The concurrent dynamic load impact at large scale
 (for example 100k managed nodes with dynamic load like security group rule
 updates, routers_updated, etc.) should be taken into account too. So
 even if agent status/liveness management is improved in Neutron, that
 doesn't mean the scalability issue is totally addressed.
 

This story is not about the heartbeat.
https://bugs.launchpad.net/neutron/+bug/1438159

What I am looking for is managing a lot of nodes with minimal `controller`
resources.

The actual rate of required system changes per second (for example,
regarding VM boot) is relatively low, even if you have many nodes and VMs
- consider the instances' average lifetime.

The `bug` is about the resources that the agents are related to and query
many times.
BTW: I am thinking about several alternatives and other variants.

In the neutron case a `system change`, like a security group rule change,
can affect multiple agents.

It seems possible to have all agents `query` a resource only once, and be
notified of any subsequent change `for free` (IP, sec group rule,
new neighbor).

This is the scenario where message brokers can shine and scale,
and it also offloads a lot of work from the DB.
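
A rough sketch of that pattern with kombu (the exchange name and payload
are made up; a real implementation would sit behind oslo.messaging):

  from kombu import Connection, Exchange, Queue

  updates = Exchange('neutron-resource-updates', type='fanout')

  def publish_change(payload):
      # Called once on the server side when e.g. a security group
      # rule changes; the broker fans it out to every bound queue.
      with Connection('amqp://guest:guest@localhost//') as conn:
          conn.Producer().publish(payload, exchange=updates,
                                  declare=[updates])

  def agent_listen(on_change):
      # Each agent binds its own exclusive, auto-named queue, so a
      # change is pushed to all agents without re-querying the DB.
      queue = Queue(exchange=updates, exclusive=True)
      with Connection('amqp://guest:guest@localhost//') as conn:
          with conn.Consumer(queue, callbacks=[on_change]):
              while True:
                  conn.drain_events()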


 And on the other hand, Nova already supports several segregation concepts,
 for example Cells and Availability Zones. If there are 100k nodes to be
 managed by one OpenStack instance, it's impossible to work without hardware
 resource segregation. It's weird to put the agent liveness manager in
 availability zone (AZ in short) 1, but all managed agents in AZ 2. If AZ 1
 is powered off, then all agents in AZ 2 lose management.
 

 The benchmark is already here: a scalability test report for million-port
 scalability of Neutron
 http://www.slideshare.net/JoeHuang7/test-report-for-open-stack-cascading-solution-to-support-1-million-v-ms-in-100-data-centers

 The cascading may not be perfect, but at least it provides a feasible way
 if we really want scalability.
 
 I am also working to evolve OpenStack, based on cascading, towards a world
 where there is no need to worry about OpenStack scalability:
 
 Tenant level virtual OpenStack service over hybrid or federated or multiple
 OpenStack based clouds:
 
 There are lots of OpenStack based clouds; each tenant will be allocated
 one cascading OpenStack as the virtual OpenStack service, with a single
 OpenStack API endpoint served for this tenant. The tenant's resources can
 be distributed or dynamically scaled across multiple OpenStack based
 clouds; these clouds may be federated with KeyStone, or use a shared
 KeyStone, or even be OpenStack clouds built on AWS or Azure, or VMWare
 vSphere.

 
 Under this deployment scenario, unlimited scalability in a cloud can be
 achieved: no unified cascading layer, and tenant-level resource
 orchestration among multiple OpenStack clouds, fully distributed (even
 geographically). The database and load for one cascading OpenStack are
 very, very small, easy for disaster recovery or backup. Multiple tenants
 may share one cascading OpenStack to reduce resource waste, but the
 principle is to keep the cascading OpenStack as thin as possible.

 You can find the information here:
 https://wiki.openstack.org/wiki/OpenStack_cascading_solution#Use_Case
 
 Best Regards
 Chaoyi Huang ( joehuang )
 
 -Original Message-
 From: Attila Fazekas [mailto:afaze...@redhat.com]
 Sent: Thursday, April 16, 2015 3:06 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?
 
 
 
 
 
 - Original Message -
  From: joehuang joehu...@huawei.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Sent: Sunday, April 12, 2015 3:46:24 AM
  Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?
  
  
  
  As Kevin was talking about agents, I want to point out that in the TCP/IP
  stack, a port (not a Neutron port) is a two-byte field, i.e. ports range
  from 0 ~ 65535, supporting a maximum of 64k port numbers.

  "above 100k managed nodes" means more than 100k L2 agents/L3
  agents... will be alive under Neutron.

  I want to know the detailed design for how to support, with 99.9%
  probability, scaling Neutron in this way; a PoC and tests would be good
  support for this idea.
  
 
 Would you consider as a PoC something which uses the technology in a
 similar way, with a similar port-security problem, but with a lower-level
 API than neutron currently uses?
 
 Is it an acceptable flaw:
 If you kill -9 the q-svc 1 times at 

[openstack-dev] [nova] No L DB migrations until...

2015-04-17 Thread Dan Smith
Hi all,

We're trying to land a specific DB migration as the first in L:

https://review.openstack.org/#/c/174480/

In order to do that, we need to get some changes into grenade, which are
blocked on other housekeeping. That should happen in a week or so.

In the meantime, please don't approve any new database migrations in
master so that we can land this as the first one.

Thanks!

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [wsme] [ironic] [ceilometer] [magnum] [kite] [tuskar] WSME unmaintained ?

2015-04-17 Thread Lucas Alvares Gomes
Hi,

 * Should projects relying on WSME start thinking about migrating their APIs
 to another technology?

 Maybe not migrating, but at least not starting something new with it.


Oh no, FWIW I don't even consider starting something new as a valid option here.

 Err, yeah, right. There are 4 people in wsme-core: Christophe (the
 original author) hasn't used WSME in at least 2 years, I'm pretty
 sure Doug & Ryan have other things to care about, and I don't use WSME
 anymore either,


Right, so at present it is abandoned.

 So if anyone has a project that relies on WSME and wants to take care of
 it, no problem.


That's good to know. While I'm not super familiar with the WSME code
base, I'm glad to help maintain it until we come to a final decision,
fixing some of the current bugs and helping with reviews.

/me also deliberately volunteers cdent to wsme core :-)

 * Forking the project an option?

 No need to do that, I'm OK to add any competent people to wsme-core. :)


Cool!

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Candidate proposals for TC (Technical Committee) positions are now open

2015-04-17 Thread Tristan Cacqueray
Candidate proposals for the Technical Committee positions (7 positions)
are now open and will remain open until 05:59 UTC April 23, 2015.

Candidates for the Technical Committee Positions: Any Foundation
individual member can propose their candidacy for an available,
directly-elected TC seat. [0] (except the six TC members who were
elected for a one-year seat last October: Monty Taylor, Sean Dague,
Doug Hellmann, Russell Bryant, Anne Gentle, John Griffith) [1]

Propose your candidacy by sending an email to the openstack-dev at
lists.openstack.org mailing-list, with the subject: TC candidacy.
Please start your own thread so we have one thread per candidate. Since
there will be many people voting for folks with whom they might not have
worked, including a platform or statement to help voters make an
informed choice is recommended, though not required. Note that unlike in
the last TC election, no set of questions are proposed and it's entirely
up to the candidate to compose their candidacy.

Elizabeth and I will confirm candidates with an email to the candidate
thread as well as create a link to the confirmed candidate's proposal
email on the wikipage for this election. [1]

The election will be held from April 24 through to 13:00 UTC April 30,
2015. The electorate are the Foundation individual members that are also
committers for one of the official programs projects [2] over the
Juno-Kilo timeframe (April 9, 2014 06:00 UTC to April 9, 2015 05:59
UTC), as well as the extra-ATCs who are acknowledged by the TC. [3]

Please see the wikipage for additional details about this election. [1]

If you have any questions please be sure to either voice them on the
mailing list or email Elizabeth or myself [4] or contact Elizabeth or
myself on IRC.

Thank you, and we look forward to reading your candidate proposals,
Tristan

[0] https://wiki.openstack.org/wiki/Governance/Foundation/TechnicalCommittee
[1] https://wiki.openstack.org/wiki/TC_Elections_April_2015
[2]
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=april-2015-elections
Note the tag for this repo, april-2015-elections.
[3]
http://git.openstack.org/cgit/openstack/governance/tree/reference/extra-atcs?id=april-2015-elections
[4] Elizabeth K. Joseph (pleia2): lyz at princessleia dot com
Tristan (tristanC): tristan dot cacqueray at enovance dot com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] HELP -- Please review some Kilo bug fixes

2015-04-17 Thread Ihar Hrachyshka

On 04/16/2015 09:54 AM, Bhandaru, Malini K wrote:
 Hello Nova and Neutron developers!
 
 OpenStack China developers held a bug fest on April 13-15. They worked on
 43 bugs and submitted patches for 29 of them. Etherpad with the bug
 fix details (at the bottom):
 https://etherpad.openstack.org/p/prc_kilo_nova_neutron_hackathon

 Their efforts to make the Kilo release better can reach fruition
 only with your reviews. Cores and PTLs, we would really
 appreciate your help.

 In addition to making Kilo stronger, you will be acknowledging and
 motivating our China OpenStack developer community.
 

I've walked through the list to see whether there is anything reviewable
for neutron. It's all either merged, or the launchpad bugs don't link to
any patch in review. So there was nothing for me (as a neutron
developer) to help with.
Maybe I missed something; in that case Doug's suggestion to create a
clear per-project list of actionable patches is valid.
/Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-17 Thread Jeremy Stanley
On 2015-04-16 21:17:03 -0700 (-0700), Kevin Benton wrote:
 What do you disagree with? I was pointing out that using Linux
 bridge will not reduce the complexity of the model of self-service
 networking, which is what the quote was complaining about.

On the contrary, if you reread the message to which you were
previously replying, it was about the unnecessary complexity of
OVS (and Neutron in general) for deployments which explicitly
_don't_ need and can never take advantage of self-service
networking. The implication being that Neutron needs a "just connect
everything to a simple flat network on a bridge I can easily debug"
mode which hides or eliminates those complexities instead.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] on supporting multiple implementations of tripleo-heat-templates

2015-04-17 Thread Clint Byrum
Excerpts from Giulio Fidente's message of 2015-04-17 06:21:28 -0700:
 Hi,
 
 the Heat/Puppet implementation of the Overcloud deployment seems to be
 surpassing the Heat/Elements implementation in features.
 
 The changes for Ceph are an example: the Puppet-based version is already
 adding features which don't have their counterpart in the Elements-based
 one.
 
 Recently we started working on the addition of Pacemaker into the
 Overcloud, to monitor the services and provide a number of 'auto
 healing' features, and again this is happening in the Puppet
 implementation only (at least for now), so I think the gap will become
 bigger.
 
 Given we support different implementations with a single top-level
 template [1], to keep the other templates valid we're forced to propagate
 the params into the Elements-based templates as well, even though there
 is no use for them there; see for example [2].
 
 The extra work itself is not of great concern, but I wonder if it
 wouldn't make sense to deprecate the Elements-based templates at this
 point, instead of keeping on adding unused parts there? Thoughts?
 

In a perfect world, templates wouldn't have implementation details like
puppet-anything in them. We all know that isn't true, but in a perfect
world.. ;)

I was just wondering the other day if anybody is relying on the non-puppet
jobs anymore. I think from my view of things, the elements approach
can be deprecated and removed if nobody steps up to maintain them.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] is cloudwatch really deprecated?

2015-04-17 Thread Matt Fischer
The wiki page "Using CloudWatch" states:

"This feature will be deprecated or removed during the Havana cycle as we
move to using Ceilometer as a metric/alarm service instead." [1]

However, it seems that CloudWatch is still being developed. So is it
deprecated or not?
[1] https://wiki.openstack.org/wiki/Heat/Using-CloudWatch
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [wsme] [ironic] [ceilometer] [magnum] [kite] [tuskar] WSME unmaintained ?

2015-04-17 Thread Chris K
Hello,

No need to do that, I'm OK to add any competent people to wsme-core. :)

While I don't have a great deal of experience with the WSME code base
(other than being a user of it), as Ironic is currently using WSME I would
offer to help. As a core on Ironic I expect that to continue to take
the bulk of my time, but even so I feel I would be able to contribute time
for reviews and other project maintenance. Please let me know if I can be
of any help with this.

Chris Krelle
-- NobodyCam in IRC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Things to tackle in Liberty

2015-04-17 Thread Neil Jerram

On 17/04/15 17:24, Daniel P. Berrange wrote:

On Fri, Apr 17, 2015 at 12:16:25PM -0400, Jay Pipes wrote:

On 04/10/2015 11:48 AM, Neil Jerram wrote:

What I imagine, though, is that the _source_ of the plugging information
could move from Nova to Neutron, so that future plugging-related code
changes are a matter for Neutron rather than for Nova.  The plugging
would still _happen_ from within Nova, as directed by that information.


-1. One of the biggest problems I have with the current implementation for
Nova VIF types is that stuff leaks improperly out of the Neutron API. Take a
look at all the Contrail implementation specifics in here:

https://github.com/openstack/nova/blob/master/nova/virt/libvirt/vif.py#L551-L589

That belongs in the Neutron plugin/agent side of things. Nova should not
need to know how to call the vrouter-port-control Contrail CLI command. That
should be Neutron's domain -- and not the Neutron API, but instead the
underlying L2 agent/plugin. Most of the port binding profile stuff that is
returned by the Neutron API's primitives should never have been exposed by
the Neutron API, in my opinion. There are just too many driver-implementation
specifics in there.


Yes, that's exactly the set of tasks that is going to be hidden from Nova
in the work Brent is doing to enable scripts. Ultimately all the plug/unplug
methods in vif.py should go away, and Neutron will just pass over the name of
a script to execute at the plug & unplug stages. So all the vif.py file in
libvirt will need to do is invoke the nominated script at the right time, and
build the libvirt XML config. All the business logic will be entirely under
the control of the Neutron maintainer.


Yes; but, as I commented on the spec earlier today, I don't think the
vif-plugin-script work as it stands quite gets us there.  We also still
need either a set of base VIF types in Nova - e.g., in the libvirt case,
to control whether the generated XML is <interface type='ethernet' ...>
or <interface type='bridge' ...> - or some equivalently generic
information that is passed from Neutron to allow Nova to generate the
required XML (or equivalent for other hypervisors).
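
For concreteness, the two libvirt flavours I have in mind differ roughly
as in this hand-written sketch (real Nova sets many more attributes, and
the device names here are invented):

  def bridge_interface_xml(bridge, mac):
      # The hypervisor itself attaches the tap device to a bridge.
      return ("<interface type='bridge'>"
              "<mac address='%s'/>"
              "<source bridge='%s'/>"
              "</interface>" % (mac, bridge))

  def ethernet_interface_xml(dev, mac):
      # The hypervisor only creates the tap device; plugging it into
      # the network is left to an external mechanism such as a
      # Neutron-supplied plug script.
      return ("<interface type='ethernet'>"
              "<mac address='%s'/>"
              "<target dev='%s'/>"
              "</interface>" % (mac, dev))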


Regards,
Neil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] is cloudwatch really deprecated?

2015-04-17 Thread Zane Bitter

On 17/04/15 12:46, Matt Fischer wrote:

The wiki for Using Cloudwatch states:

This feature will be deprecated or removed during the Havana cycle as
we move to using Ceilometer as a metric/alarm service instead. [1]

However it seems that cloudwatch is still being developed.


It doesn't seem that way to me, and without at least some kind of hint 
I'm not in a position to speculate on why it might seem that way to you.



So is it deprecated or not?


Yes, it's very deprecated.

In fact, we should go ahead and disable it in the default config.

- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova-docker][ceilometer][heat] Autoscaling docker in openstack

2015-04-17 Thread Sergey Kraynev
Let's wait for more opinions ;) I don't think that we know everything.

Regards,
Sergey.

On 17 April 2015 at 18:10, ashish.jai...@wipro.com wrote:

  So ultimately this means there is no way to autoscale docker containers
 on openstack until and unless ceilometer adds an inspector for the docker
 hypervisor, something similar to this:
 https://github.com/openstack/ceilometer/tree/master/ceilometer/compute/virt
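
 Very roughly, I imagine such an inspector would be a skeleton like the
 one below (the base-class interface is my reading of the Kilo-era
 ceilometer.compute.virt layout and the docker-py calls are assumptions,
 so treat this as a sketch, not a working driver):

   import docker

   from ceilometer.compute.virt import inspector as virt_inspector

   class DockerInspector(virt_inspector.Inspector):

       def __init__(self):
           self._client = docker.Client(
               base_url='unix://var/run/docker.sock')

       def inspect_cpus(self, instance):
           # One-shot read from the container stats stream.
           stats = next(self._client.stats(instance.name, decode=True))
           usage = stats['cpu_stats']['cpu_usage']
           return virt_inspector.CPUStats(
               number=len(usage.get('percpu_usage', [])),
               time=usage['total_usage'])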


  Regards

 Ashish
  --
 *From:* Sergey Kraynev skray...@mirantis.com
 *Sent:* Friday, April 17, 2015 8:12 PM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [nova-docker][ceilometer][heat]
 Autoscaling docker in openstack

   @VACHNIS: yeah, in this case we are blocked by ceilometer. AFAIK,
 ceilometer collects metrics from Nova::Server, not from docker directly.
 So the mentioned bp makes sense (add support for this feature to
 ceilometer, then to heat).


  Regards,
 Sergey.

 On 17 April 2015 at 17:11, VACHNIS, AVI (AVI) 
 avi.vach...@alcatel-lucent.com wrote:

  Hi,

 @Ashish, if the limitation you've mentioned for #1 still exists, I join
 your question of how a heat auto-scaling group may work without ceilometer
 being able to collect docker metrics.



 @Sergey, hey. Are you saying that ceilometer does collect metrics on the
 Nova::Server resource underlying docker?







 -Avi



 -- Original message--

 *From: *ashish.jai...@wipro.com

 *Date: *Fri, Apr 17, 2015 4:56 PM

 *To: *openstack-dev@lists.openstack.org;

 *Subject:*Re: [openstack-dev] [nova-docker][ceilometer][heat]
 Autoscaling docker in openstack



 Hi Segey,


  So IIUC approach #2 may still help to autoscale docker on openstack. I
 will try that out and post questions on the heat IRC channel. Thanks.


  Regards

 Ashish
  --
 *From:* Sergey Kraynev skray...@mirantis.com
 *Sent:* Friday, April 17, 2015 7:01 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [nova-docker][ceilometer][heat]
 Autoscaling docker in openstack

   Hi, Ashish.

  Honestly I am not familiar with most of these ways, but I can add
 more information from the Heat side (item 2).

  I am surprised that you have missed Heat's autoscaling mechanism (you
 should look at it :) ). It's one of the important parts of the Heat
 project. It allows scaling VMs/stacks by using Ceilometer alarms. There
 are a couple of examples of autoscaling templates:


 https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml (with LoadBalancer)

 https://github.com/openstack/heat-templates/blob/master/hot/asg_of_servers.yaml

 https://github.com/openstack/heat-templates/blob/master/hot/asg_of_stacks.yaml
 It's true that the Docker plugin for Heat creates the docker server on a
 Nova::Server resource. So you may write a template with a Docker resource +
 Server resource (similar to the third template) and scale by using
 Ceilometer alarms.
 If you have any questions about how to use it, please go to the #heat IRC
 channel and ask us :)
 Another way (AFAIK) is to use SoftwareDeployment/Config and deploy a
 Server with docker inside (without the docker plugin). In that case, I
 suppose, Steve Baker can help with advice :)


 On 17 April 2015 at 16:06, ashish.jai...@wipro.com wrote:


 Hi,

 I have been working on running docker on openstack. I had discussions
 on multiple IRC channels and IIUC there are 5 different ways of running
 docker on openstack. IIUC currently there is no way to autoscale docker on
 openstack. Please correct me if I am wrong.


 1) Using nova-docker driver - Running docker as a Nova::Server using
 nova-docker hypervisor
 2) Using the Docker plugin for heat - Running docker using
 DockerInc::Docker::Container
 3) Using magnum - IIUC no automation as of now; manually it is possible.
 Not enough documentation is available.
 4) heat compose - Saw some samples available @
 https://github.com/openstack/heat-templates/tree/master/hot/software-config/elements/heat-config-docker-compose
 5) Swarm support - Still in development

 Issues with each on the above approaches

 1) Using nova-docker driver - IIUC there is no way for ceilometer to
 collect and emit statistics for the docker hypervisor. That means ceilometer
 does not have any stats available once you switch to the docker driver.
 This link (
 https://github.com/openstack/ceilometer/tree/master/ceilometer/compute/virt)
 currently does not have anything for the docker hypervisor.

 2) Using the Docker plugin for heat - With this approach docker containers
 run on a Nova VM. However, I do not see any illustration suggesting that
 you can autoscale using this approach.

 3) Using magnum - Currently only possible by manually invoking it.

 4) heat compose - The sample available at the above link just talks about
 deployment, but says nothing about autoscaling.

 5) Swarm Support - 

Re: [openstack-dev] [chef] Naming the Project

2015-04-17 Thread Edgar Magana
Nice! Yes, let's get Chef in as a formal OpenStack project.

Edgar

From: JJ Asghar jasg...@chef.io
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday, April 17, 2015 at 8:41 AM
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [chef] Naming the Project

Following in the footsteps of the Puppet modules becoming an OpenStack project,
the Chef cookbooks are starting the process also. I'll be starting the review 
against the governance project soon, but the community needs to discuss it 
during our weekly hangout on Monday.

One of the largest challenges in getting this project off the ground, just like 
for Puppet, is naming the project.

I have created an etherpad [1] to help capture the suggested names. I'm taking 
the action
item to talk to the legal teams of both institutions to make sure we are all 
above board.

If you can, please add suggestions or comment back on this thread. I'd love to 
see people +1
the suggestions they like too if you can spare the cycles.

Thanks!

JJ Asghar


[1]: https://etherpad.openstack.org/p/chef-openstack-naming
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Things to tackle in Liberty

2015-04-17 Thread Jay Pipes

On 04/10/2015 11:48 AM, Neil Jerram wrote:

What I imagine, though, is that the _source_ of the plugging information
could move from Nova to Neutron, so that future plugging-related code
changes are a matter for Neutron rather than for Nova.  The plugging
would still _happen_ from within Nova, as directed by that information.


-1. One of the biggest problems I have with the current implementation 
for Nova VIF types is that stuff leaks improperly out of the Neutron 
API. Take a look at all the Contrail implementation specifics in here:


https://github.com/openstack/nova/blob/master/nova/virt/libvirt/vif.py#L551-L589

That belongs in the Neutron plugin/agent side of things. Nova should not 
need to know how to call the vrouter-port-control Contrail CLI command. 
That should be Neutron's domain -- and not the Neutron API, but instead 
the underlying L2 agent/plugin. Most of the port binding profile stuff 
that is returned by the Neutron API's primitives should never have been 
exposed by the Neutron API, in my opinion. There are just too many 
driver-implementation specifics in there.



(For libvirt, I believe the required information would be the interface
type and parameters to go into the libvirt XML, and then any further
processing to do after that XML has been launched.  I believe the latter
is covered by the vif-plug-script spec, but not the former.)


Agreed.


However, on thinking this through, I see that this really is bound up
with the wider question of nova-net / Neutron migration - because it
obviously can't make sense for plugging information to come from Neutron
if some people are not using Neutron at all for their networking.

Stepping right back, though, is it agreed in principle that detailed
networking code should move from Nova to Neutron, if that is possible
while preserving all existing behaviours?  If it is agreed, that's
something that I'd like to help with.


What precisely do you mean by preserving all existing behaviours? :)

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Things to tackle in Liberty

2015-04-17 Thread Daniel P. Berrange
On Fri, Apr 17, 2015 at 12:16:25PM -0400, Jay Pipes wrote:
 On 04/10/2015 11:48 AM, Neil Jerram wrote:
 What I imagine, though, is that the _source_ of the plugging information
 could move from Nova to Neutron, so that future plugging-related code
 changes are a matter for Neutron rather than for Nova.  The plugging
 would still _happen_ from within Nova, as directed by that information.
 
 -1. One of the biggest problems I have with the current implementation for
 Nova VIF types is that stuff leaks improperly out of the Neutron API. Take a
 look at all the Contrail implementation specifics in here:
 
 https://github.com/openstack/nova/blob/master/nova/virt/libvirt/vif.py#L551-L589
 
 That belongs in the Neutron plugin/agent side of things. Nova should not
 need to know how to call the vrouter-port-control Contrail CLI command. That
 should be Neutron's domain -- and not the Neutron API, but instead the
 underlying L2 agent/plugin. Most of the port binding profile stuff that is
 returned by the Neutron API's primitives should never have been exposed by
 the Neutron API, in my opinion. There are just too many driver-implementation
 specifics in there.

Yes, that's exactly the set of tasks that is going to be hidden from Nova
in the work Brent is doing to enable scripts. Ultimately all the plug/unplug
methods in vif.py should go away, and Neutron will just pass over the name of
a script to execute at the plug & unplug stages. So all the vif.py file in
libvirt will need to do is invoke the nominated script at the right time, and
build the libvirt XML config. All the business logic will be entirely under
the control of the Neutron maintainer.

Regards,
Daniel
-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-17 Thread Ihar Hrachyshka

On 04/17/2015 06:23 PM, Fox, Kevin M wrote:
 Really, what I expect to see long term in a healthy OpenStack
 ecosystem is some global AppStore-like functionality baked into
 horizon. A user goes to it, selects my awesome scalable web
 hosting system, hits launch, and is given a link to log in via
 web browser to edit their site. Under the hood, the system just
 stood up a trove database, an elasticsearch cluster in its own
 network, a web tier, a load balancer, etc. The user didn't have to
 care how hard that used to be, and just gets charged for the
 resources consumed. Benefiting the cloud deployer and the end user.
 The easier it is to use/create/consume cloud resources, the better
 it is for the deployer. If a bit steeper learning curve up front is
 necessary, that sucks, but it will be worth it.
 
 This sort of thing is what we need to get to, and is extremely
 difficult if OpenStack clouds differ wildly in functionality.
 

Isn't that what the Murano project is intended to do?
/Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] on supporting multiple implementations of tripleo-heat-templates

2015-04-17 Thread Clint Byrum
Excerpts from James Slagle's message of 2015-04-17 10:49:48 -0700:
 On Fri, Apr 17, 2015 at 12:37 PM, Clint Byrum cl...@fewbar.com wrote:
  Excerpts from Giulio Fidente's message of 2015-04-17 06:21:28 -0700:
  Hi,
 
  the Heat/Puppet implementation of the Overcloud deployment seems to be
  surpassing in features the Heat/Elements implementation.
 
  The changes for Ceph are an example: the Puppet based version is already
  adding features which don't have their counterpart in the Elements based one.
 
  Recently we started working on the addition of Pacemaker into the
  Overcloud, to monitor the services and provide a number of 'auto
  healing' features, and again this is happening in the Puppet
  implementation only (at least for now) so I think the gap will become
  bigger.
 
  Given we support different implementations with a single top-level
  template [1], to keep other templates valid we're forced to propagate
  the params into the Elements based templates as well, even though there
  is no use for them there; see for example [2].
 
  The extra work itself is not of great concern, but I wonder if it
  wouldn't make sense to deprecate the Elements based templates at this
  point, instead of keeping on adding unused parts there? Thoughts?
 
 
  In a perfect world, templates wouldn't have implementation details like
  puppet-anything in them. We all know that isn't true, but in a perfect
  world.. ;)
 
  I was just wondering the other day if anybody is relying on the non-puppet
  jobs anymore. I think from my view of things, the elements approach
  can be deprecated and removed if nobody steps up to maintain them.
 
 I think we should consider deprecation if it's clear no one is
 maintaining them. The elements approach does offer testing
 installation from source instead of packages, which eventually
 wouldn't be tested any longer if we were to deprecate. It also has the
 nice benefit of being able to CI test individual project reverts or
 pins to see what might be causing failures. Maybe we could translate
 these features somehow to the puppet world.
 
 Not to put off the discussion, but I just added this as something to
 discuss at the Summit[1]. Between now and then we can continue to
 gauge the interest of maintaining the elements approach.


So, we could replace the pip-from-source with jobs that just do that
in the check/gate of each project. It should be a quick test, and one
that I don't think anybody would mind being able to run themselves,
especially when adding entry points or modifying structure.

As far as testing individual reverts... that is useful; however, as the
maintenance level goes down, the value of those results will as well. So
without full-time maintainers, I don't think it will remain a positive
for long.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-17 Thread Jeremy Stanley
On 2015-04-17 11:49:23 -0700 (-0700), Kevin Benton wrote:
 I definitely understand that. But what is the major complaint from
 operators? I understood that quote to imply it was around
 Neutron's model of self-service networking.

My takeaway from Tom's message was that there was a concern about
complexity in all forms (not just of the API but also due to the
lack of maturity, documentation and debuggability of the underlying
technology), and that the self-service networking model was simply
one example of that. Perhaps I was reading between the lines too
much because of prior threads on both the operators and developers
mailing lists. Anyway, I'm sure Tom will clarify what he meant if
necessary.

 If the main reason the remaining Nova-net operators don't want to
 use Neutron is due to the fact that they don't want to deal with
 the Neutron API, swapping some implementation defaults isn't
 really going to get us anywhere on that front.

This is where I think the subthread has definitely wandered off
topic too. Swapping implementation defaults in DevStack because it's
quicker and easier to get running on the typical
all-in-one/single-node setup and faster to debug problems with
(particularly when you're trying to work on non-network-related bits
and just need to observe the network communication between your
services) doesn't seem like it should have a lot to do with the
recommended default configuration for a large production deployment.
One size definitely does not fit all.

 It's an important distinction because it determines what
 actionable items we can take (e.g. what Salvatore mentioned in his
 email about defaults). Does that make sense?

It makes sense in the context of the Neutron/Nova network parity
topic, but not so much in the context of the DevStack default
settings topic. DevStack needs a simple default that just works, and
doesn't need the kitchen sink. You can turn on more complex options
as you need to test them out. In some ways this has parallels to the
complexity concerns the operator community has over Neutron and OVS,
but I think they're still relatively distinct topics.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] Managing config file values and parameter defaults

2015-04-17 Thread Clayton O'Neill
How to handle config file values, defaults, and how they relate to manifest
parameters was brought up in the weekly meeting a few weeks ago.  I
promised to put something together.  I've written up my thoughts and my
understanding of the other viewpoints presented there in an etherpad, and
you can find the link below.  I apologize if I've not captured things
accurately.  I'd like to get feedback on this, either in the etherpad
or on the list.  I realize we might not be able to work this out
entirely before the summit, but we may be able to do some of the heavy
lifting and discuss it further there.

Ultimately I'd like to end up with a blueprint that describes how we want
to handle different types of config file options.  It will likely be a huge
amount of work to completely implement any policy we come up with.
However, if we can agree on a policy then we can at least ensure any new
changes follow it, and over time start working on the rest of the code base.

https://etherpad.openstack.org/p/puppet-config-parameter-defaults
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Barbican : What is the difference between secret and order resource

2015-04-17 Thread John Wood
Hello Asha,

So the last step you have is retrieving a decrypted secret from Barbican. 
Barbican indeed stores the secret internally, encrypted using an internal KEK. 
When it is retrieved, however, it is first decrypted by Barbican and then 
returned to the client decrypted.

Beyond TLS to protect this information on its way back to the client, there is 
also a transport key feature, not yet fully supported via the client library, 
that allows the client to select a session key that can be used to encrypt the 
secret between the client and Barbican.

Thanks,
John


From: Asha Seshagiri asha.seshag...@gmail.com
Date: Friday, April 17, 2015 at 1:02 PM
To: John Wood john.w...@rackspace.com
Cc: openstack-dev openstack-dev@lists.openstack.org, 
Reller, Nathan S. nathan.rel...@jhuapl.edu, Douglas Mendizabal 
douglas.mendiza...@rackspace.com, Paul Kehrer paul.keh...@rackspace.com, 
Adam Harwell adam.harw...@rackspace.com, Alexis Lee alex...@hp.com
Subject: Re: Barbican : What is the difference between secret and order resource

Hi All,

 I would like to know if the keys generated by Barbican through the order 
resource are encrypted using KEKs and then stored in the secret object, or 
stored in unencrypted format.

Any help would be highly appreciated.

Please find the commands and responses below:

root@barbican:~# curl -H 'Accept: application/json' -H 'X-Project-Id:12345' 
http://localhost:9311/v1/orders

{"total": 3, "orders": [{"status": "ACTIVE", "secret_ref": 
"http://localhost:9311/v1/secrets/b3709da7-4691-40d6-af9a-1ae23772a7b2", 
"updated": "2015-03-13T22:27:48.866683", "meta": {"name": "secretname2", 
"algorithm": "aes", "payload_content_type": "application/octet-stream", "mode": 
"cbc", "bit_length": 256, "expiration": null}, "created": 
"2015-03-13T22:27:48.844860", "type": "key", "order_ref": 
"http://localhost:9311/v1/orders/5a4844ca-47a9-4bd7-ae56-fb84655f48d9"},

root@barbican:~# curl -H 'Accept: application/json' -H 'X-Project-Id:12345' 
http://localhost:9311/v1/secrets/b3709da7-4691-40d6-af9a-1ae23772a7b2
{"status": "ACTIVE", "secret_type": "opaque", "updated": 
"2015-03-13T22:27:48.863403", "name": "secretname2", "algorithm": "aes", 
"created": "2015-03-13T22:27:48.860600", "secret_ref": 
"http://localhost:9311/v1/secrets/b3709da7-4691-40d6-af9a-1ae23772a7b2", 
"content_types": {"default": "application/octet-stream"}, "expiration": null, 
"bit_length": 256, "mode": "cbc"}


root@barbican:~# curl -H 'Accept:application/octet-stream' -H 
'X-Project-Id:12345' 
http://localhost:9311/v1/secrets/b3709da7-4691-40d6-af9a-1ae23772a7b2
▒▒R▒v▒▒▒W▒4▒A?Md▒L[▒K4A▒▒bx▒▒▒   -  I would like to know whether this response 
is encrypted by barbican using KEKs, or is in unencrypted format whose content 
type is application/octet-stream.


Thanks and Regards,
Asha Seshagiri

On Fri, Apr 17, 2015 at 11:30 AM, Asha Seshagiri 
asha.seshag...@gmail.commailto:asha.seshag...@gmail.com wrote:
Thanks a lot, John, for your response.

I also thank everyone who has been responding to my queries, in case I have 
missed someone. There was some problem while configuring my email: I do not 
receive email responses directly from the openstack Dev group, so I will check 
the archive folder for them.

Once again, it's nice working and collaborating with the openstack Dev group.

Thanks and Regards,
Asha Seshagiri

On Thu, Apr 16, 2015 at 8:10 AM, John Wood 
john.w...@rackspace.com wrote:
Hello Asha,

The /v1/secrets resource is used to upload, encrypt and store your secrets, and 
to decrypt and retrieve those secrets. Key encryption keys (KEKs) internal to 
Barbican are used to encrypt the secret.

The /v1/orders resource is used when you want Barbican to generate secrets for 
you. When an order is done, it gives you a reference to where the secret is 
stored, so you can retrieve it via the secrets resource above.

Hope that helps!

Thanks,
John

From: Asha Seshagiri asha.seshag...@gmail.com
Date: Thursday, April 16, 2015 at 1:23 AM
To: openstack-dev openstack-dev@lists.openstack.org
Cc: John Wood john.w...@rackspace.com, 
Reller, Nathan S. nathan.rel...@jhuapl.edu, Douglas Mendizabal 
douglas.mendiza...@rackspace.com, Paul Kehrer paul.keh...@rackspace.com, 
Adam Harwell adam.harw...@rackspace.com, Alexis Lee alex...@hp.com
Subject: Barbican : What is the difference between secret and order resource

Hi All ,

What is the difference between the secret and the order resource?
Where is the key stored that is used 

Re: [openstack-dev] [Fuel]Format of notifications about Ubuntu repositories connectivity

2015-04-17 Thread Tomasz Napierala
 On 17 Apr 2015, at 14:35, Maciej Kwiek mkw...@mirantis.com wrote:
 
 Hi,
 
 I am currently implementing fix for 
 https://bugs.launchpad.net/fuel/+bug/1439686 .
 
  I plan to notify the user about nodes which fail to connect to ubuntu 
  repositories via fuel notifications. My question is as follows: when I get 
  the list of nodes which failed the repo connectivity test - do I add one 
  notification for each node, or can I add one big notification which contains 
  the names of all nodes that failed?

If there are no code level restrictions, IMHO one notification should be fine.

Regards,
-- 
Tomasz 'Zen' Napierala
Product Engineering - Poland







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-17 Thread Fox, Kevin M
No, the complaint from ops I have heard even internally, which I think is 
being echoed here, is: I understand how linux bridge works; I don't understand 
openvswitch, and I don't want to be bothered to learn to debug openvswitch 
because I don't think we need it.

It would be a reasonable argument if linux bridge had feature parity with 
openvswitch, or if users truly didn't need the extra features provided by 
openvswitch/NaaS. I still assert, though, that linux bridge won't get feature 
parity with openvswitch and that the extra features are actually critical to 
users (DVR/NaaS), so it's worth switching to openvswitch and learning how to 
debug it. Linux Bridge is a non-solution at this point. :/ So is keeping 
nova-network around forever. :/ But other than requiring some more training 
for ops folks, I think Neutron can now suit the rest of the use cases 
nova-network provided. The sooner we can put the nova-network issue to bed, 
the better off the ecosystem will be. It will take a couple of years for the 
ecosystem to settle out after deprecating it, since a lot of clouds take years 
to upgrade. Let's do that sooner rather than later, so that a couple of years 
from now we're done. :/

Kevin


From: Kevin Benton [blak...@gmail.com]
Sent: Friday, April 17, 2015 11:49 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in 
DevStack [was: Status of the nova-network to Neutron migration work]

I definitely understand that. But what is the major complaint from operators? I 
understood that quote to imply it was around Neutron's model of self-service 
networking.

If the main reason the remaining Nova-net operators don't want to use Neutron 
is due to the fact that they don't want to deal with the Neutron API, swapping 
some implementation defaults isn't really going to get us anywhere on that 
front.

It's an important distinction because it determines what actionable items we 
can take (e.g. what Salvatore mentioned in his email about defaults). Does that 
make sense?

On Fri, Apr 17, 2015 at 11:33 AM, Jeremy Stanley 
fu...@yuggoth.orgmailto:fu...@yuggoth.org wrote:
On 2015-04-17 10:55:19 -0700 (-0700), Kevin Benton wrote:
 I understand. What I'm saying is that switching to Linux bridge
 will not change the networking model to 'just connect everything
 to a simple flat network'. All of the complaints about
 self-service networking will still hold.

And conversely, swapping simple bridge interfaces for something else
still means problems are harder to debug, whether or not you're
stuck with self-service networking features you're not using.
--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-17 Thread Fox, Kevin M
It's because someone recommended devstack be switched to linux bridge so that 
it's easier for folks to learn openstack. But my assertion is: if all 
production sites will have to run ovs (or some vendor plugin) and not linux 
bridge, you're hurting folks by making them think they are learning something 
useful when they are spending time learning something that won't apply when 
they try to go to production. It's a waste of their time. Set the default to 
be whatever the production default is.

Thanks,
Kevin 

From: Jeremy Stanley [fu...@yuggoth.org]
Sent: Friday, April 17, 2015 12:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in 
DevStack [was: Status of the nova-network to Neutron migration work]

On 2015-04-17 11:49:23 -0700 (-0700), Kevin Benton wrote:
 I definitely understand that. But what is the major complaint from
 operators? I understood that quote to imply it was around
 Neutron's model of self-service networking.

My takeaway from Tom's message was that there was a concern about
complexity in all forms (not just of the API but also due to the
lack of maturity, documentation and debuggability of the underlying
technology), and that the self-service networking model was simply
one example of that. Perhaps I was reading between the lines too
much because of prior threads on both the operators and developers
mailing lists. Anyway, I'm sure Tom will clarify what he meant if
necessary.

 If the main reason the remaining Nova-net operators don't want to
 use Neutron is due to the fact that they don't want to deal with
 the Neutron API, swapping some implementation defaults isn't
 really going to get us anywhere on that front.

This is where I think the subthread has definitely wandered off
topic too. Swapping implementation defaults in DevStack because it's
quicker and easier to get running on the typical
all-in-one/single-node setup and faster to debug problems with
(particularly when you're trying to work on non-network-related bits
and just need to observe the network communication between your
services) doesn't seem like it should have a lot to do with the
recommended default configuration for a large production deployment.
One size definitely does not fit all.

 It's an important distinction because it determines what
 actionable items we can take (e.g. what Salvatore mentioned in his
 email about defaults). Does that make sense?

It makes sense in the context of the Neutron/Nova network parity
topic, but not so much in the context of the DevStack default
settings topic. DevStack needs a simple default that just works, and
doesn't need the kitchen sink. You can turn on more complex options
as you need to test them out. In some ways this has parallels to the
complexity concerns the operator community has over Neutron and OVS,
but I think they're still relatively distinct topics.
--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-17 Thread Salvatore Orlando
On 17 April 2015 at 21:35, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-04-17 11:49:23 -0700 (-0700), Kevin Benton wrote:
  I definitely understand that. But what is the major complaint from
  operators? I understood that quote to imply it was around
  Neutron's model of self-service networking.

 My takeaway from Tom's message was that there was a concern about
 complexity in all forms (not just of the API but also due to the
 lack of maturity, documentation and debuggability of the underlying
 technology), and that the self-service networking model was simply
 one example of that. Perhaps I was reading between the lines too
 much because of prior threads on both the operators and developers
 mailing lists. Anyway, I'm sure Tom will clarify what he meant if
 necessary.

  If the main reason the remaining Nova-net operators don't want to
  use Neutron is due to the fact that they don't want to deal with
  the Neutron API, swapping some implementation defaults isn't
  really going to get us anywhere on that front.

 This is where I think the subthread has definitely wandered off
 topic too. Swapping implementation defaults in DevStack because it's
 quicker and easier to get running on the typical
 all-in-one/single-node setup and faster to debug problems with
 (particularly when you're trying to work on non-network-related bits
 and just need to observe the network communication between your
 services) doesn't seem like it should have a lot to do with the
 recommended default configuration for a large production deployment.
 One size definitely does not fit all.

  It's an important distinction because it determines what
  actionable items we can take (e.g. what Salvatore mentioned in his
  email about defaults). Does that make sense?

 It makes sense in the context of the Neutron/Nova network parity
 topic, but not so much in the context of the DevStack default
 settings topic. DevStack needs a simple default that just works, and
 doesn't need the kitchen sink. You can turn on more complex options
 as you need to test them out. In some ways this has parallels to the
 complexity concerns the operator community has over Neutron and OVS,
 but I think they're still relatively distinct topics.


I think this is the crux of this thread, which is drifting off in the wrong
direction.
For devstack defaults, I'd say even with OVS it just works imho, but my
opinion is partial, and also I've been using OVS for 4 years now. So I don't
count.
I accept the desire to default to a data plane technology whose stability
is proven by decades of use in production systems, and which probably still
has wider adoption than OVS.

The discussion about simple networking with neutron, whether or not the
operator needs to provide self-service networking, and whether OVS is good or
yet another piece of software junk, is super interesting. However, it does
not belong in this thread.

I believe there are a few fairly valid reasons why devstack is less
likely to fail with default params using linux bridge rather than OVS -
so let's default to linux bridge. At the end of the day I believe users
interested in OVS will find a simple way to enable it in the documentation -
possibly even in the README file. We might even ship a local.conf.ovs file
with a ready-to-use alternate, ovs-based configuration.

Salvatore


 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

