Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Chmouel Boudjnah
On Thu, Sep 25, 2014 at 6:02 AM, Clint Byrum cl...@fewbar.com wrote:

 However, this does make me think that Keystone domains should be exposable
 to services inside your cloud for use as SSO. It would be quite handy
 if the keystone users used for the VMs that host Kubernetes could use
 the same credentials to manage the containers.



I was thinking exactly the same thing, and looking at the code here:

https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/client/request.go#L263

it seems to use basic HTTP auth, which should be enough with the
REMOTE_USER/Apache external authentication feature of keystone:

http://docs.openstack.org/developer/keystone/external-auth.html#using-httpd-authentication

but if we want proper, full integration with OpenStack we would
probably at some point want to add auth modularity to Kubernetes and
contribute a keystone plugin for it.
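
For reference, the Keystone side of the REMOTE_USER approach is roughly the
following (the exact plugin path can differ between releases, so treat this
as a sketch):

[auth]
# enable external (REMOTE_USER) authentication alongside the usual methods
methods = external,password,token
external = keystone.auth.plugins.external.DefaultDomain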

Chmouel
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NOVA] security group fails to attach to an instance if port-id is specified during boot.

2014-09-25 Thread Parikshit Manur
Hi All,
Creation of a server with the command 'nova boot --image <image> 
--flavor m1.medium --nic port-id=<port-id> --security-groups sec_grp <name>' 
fails to attach the security group to the port/instance. The response payload 
has the security group added, but only the default security group is attached to the 
instance.  A separate action has to be performed on the instance to add sec_grp, 
and it is successful. Supplying the same with '--nic net-id=<net-id>' works as 
expected.

Is this the expected behaviour, or are there any other options which need to be 
specified to add the security group when a port-id is attached during 
boot?

Thanks,
Parikshit Manur
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][users]

2014-09-25 Thread Boris Pavlovic
Ajay,

Ya, adding support for *benchmarking OpenStack clouds using ordinary user
accounts that already exist* has been one of our major goals for more than
half a year now. As I said in my previous message, we will finally support it
soon.

Btw, we have a feature request page:
https://github.com/stackforge/rally/tree/master/doc/feature_request
with the list of features that we are working on now.
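
(For the admin-only case that already works, you just register your existing
cloud as a Rally deployment. Roughly like this; field names may differ a bit
between Rally versions:)

existing.json:
{
    "type": "ExistingCloud",
    "auth_url": "http://keystone.example.com:5000/v2.0/",
    "admin": {
        "username": "admin",
        "password": "secret",
        "tenant_name": "admin"
    }
}

$ rally deployment create --file existing.json --name existing-cloud

Scenarios that need admin access will then run against that existing admin
user instead of temporary ones.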


Best regards,
Boris Pavlovic

On Thu, Sep 25, 2014 at 5:30 AM, Ajay Kalambur (akalambu) 
akala...@cisco.com wrote:

  Hi Boris
 Existing users is one thing, but according to the Rally page, admin
 account benchmarking is already supported:

  Rally is on its way to support of *benchmarking OpenStack clouds using
 ordinary user accounts that already exist*. Rally lacked such
 functionality (it only supported benchmarking either from an admin account
 or from a bunch of temporarily created users), which posed a problem since
 some deployments don't allow temporary user creation. There have been two
 patches (https://review.openstack.org/#/c/116766/ and
 https://review.openstack.org/#/c/119344/) that prepare the code for this
 new functionality. It is going to come very soon - stay tuned.


  Ajay

   From: Boris Pavlovic bpavlo...@mirantis.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Wednesday, September 24, 2014 at 6:13 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [rally][users]

   Ajay,

  I am working on that feature. It's almost ready.
 I'll let you know when I finish.


  Best regards,
 Boris Pavlovic

 On Thu, Sep 25, 2014 at 5:02 AM, Ajay Kalambur (akalambu) 
 akala...@cisco.com wrote:

  Hi
 Our default mode of executing Rally lets it create a new
 user and tenant. Is there a way to have Rally use the existing admin tenant
 and user?
 I need to use Rally for some tests which need admin access, so I
 would like Rally to use the existing admin tenant and admin user for those tests.
  Ajay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Documentation process

2014-09-25 Thread Andreas Jaeger
On 09/24/2014 09:55 PM, Sergii Golovatiuk wrote:
 Hi,
 
 I would like to discuss the documentation process and align it to
 OpenStack flow.
 
 At the moment we add special tags to bugs in Launchpad, which is not
 optimal: everyone can add/remove tags, so we cannot reliably track
 the documentation process or enforce it.
 
 I suggest switching to the standard workflow used by the OpenStack community.
 All we need is to move the process of tracking documentation from
 Launchpad to Gerrit.
 
 This process gives individual developers and the community more control
 over tracking changes and reflecting them in the documentation.
 
 Every reviewer checks the commit. If they think the commit requires a
 documentation update, they set -1 with the comment "Docs impact
 required".
 
 This will force the author of the patch set to update the commit message
 with the DocImpact flag.
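 
 For example, the flag is just one line in the commit message; an
 illustrative commit:
 
     Add bonding support for the admin network
 
     DocImpact: new bonding options need to be documented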
 
 Our documentation team will collect all commits with DocImpact from 'git
 log'. The documentation team will then write the documentation, with the author
 of the patch playing a key role. All other reviewers from the original patch
 must give their own +1 to the documentation update.
 
 Patches in fuel-docs may carry the same Change-Id as the original patch. This
 will allow us to match documentation and code patches in Gerrit.
 
 More details about the DocImpact flow can be obtained at
 
 https://wiki.openstack.org/wiki/Documentation/DocImpact

Currently all bugs filed due to DocImpact land in the openstack-manuals
Launchpad bug area unless the repository is set up in the infrastructure
to push them elsewhere.

If you want to move forward with this, please first set up the
infrastructure to file the bugs properly,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Mistral 0.1 released

2014-09-25 Thread Renat Akhmerov
Mistral 0.1 and Mistral Client 0.1 have been released!

Thank you all for the hard work we've done together to make this huge 
step toward making Mistral a production-ready technology.

Release notes

* Mistral DSL version 2
* New Mistral API version 2
* Completely reworked Mistral Engine
* Much more consistent and simple DSL
* Integration with core OpenStack services (Nova, Glance, Neutron, Keystone, 
Heat)
* Extensible architecture that makes it easy to add new workflow types (currently, 
'direct' and 'reverse')
* Nested workflows
* Workflow task policies (retry, wait-before/wait-after, timeout)
* Task defaults defined on a workflow level
* Multiple workflows and actions in a workbook
* Workflow API to work with individual workflows
* Action API to work with individual actions (both system and adhoc)
* Engine commands (fail, succeed, pause)
* Simplified REST API
* Simplified CLI
* UI enhancements
* Bugfixes in DSL/API v1

Links

* Release 0.1 wiki page: https://wiki.openstack.org/wiki/Mistral/Releases/0.1
* Mistral launchpad: https://launchpad.net/mistral
* Mistral Client launchpad: https://launchpad.net/python-mistralclient
* DSL v2 Specification: https://wiki.openstack.org/wiki/Mistral/DSLv2
* ReST API v2 Specification: https://wiki.openstack.org/wiki/Mistral/RestAPIv2
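
For a quick flavor of the v2 DSL, a minimal workbook looks roughly like this
(action names and parameters are illustrative; see the DSL v2 specification
above for the authoritative syntax):

---
version: '2.0'
name: examples

workflows:
  deploy_vm:
    type: direct
    tasks:
      create_vm:
        action: nova.servers_create name='test_vm'
        on-success:
          - notify
      notify:
        action: std.echo output='VM creation requested'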

Contribution

In case you're looking for contribution opportunities, below are useful 
links that will help shape your understanding of what's going on in the 
project and where it is heading.

* Blueprints: https://blueprints.launchpad.net/mistral
* Roadmap: https://wiki.openstack.org/wiki/Mistral/Roadmap


Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Clint Byrum
Excerpts from Mike Spreitzer's message of 2014-09-24 22:01:54 -0700:
 Clint Byrum cl...@fewbar.com wrote on 09/25/2014 12:13:53 AM:
 
  Excerpts from Mike Spreitzer's message of 2014-09-24 20:49:20 -0700:
   Steven Dake sd...@redhat.com wrote on 09/24/2014 11:02:49 PM:
...
   ...
   Does TripleO require container functionality that is not available
   when using the Docker driver for Nova?
   
   As far as I can tell, the quantitative handling of capacities and
   demands in Kubernetes is much inferior to what Nova does today.
   
  
  Yes, TripleO needs to manage baremetal and containers from a single
  host. Nova and Neutron do not offer this as a feature unfortunately.
 
 In what sense would Kubernetes manage baremetal (at all)?
 By from a single host do you mean that a client on one host
 can manage remote baremetal and containers?
 
 I can see that Kubernetes allows a client on one host to get
 containers placed remotely --- but so does the Docker driver for Nova.
 

I mean that one box would need to host Ironic, Docker, and Nova, for
the purposes of deploying OpenStack. We call it the undercloud, or
sometimes the Deployment Cloud.

It's not necessarily something that Nova/Neutron cannot do by design,
but it doesn't work now.

  
As far as use cases go, the main use case is to run a specific 
Docker container on a specific Kubernetes minion bare metal host.
 
 Clint, in another branch of this email tree you referred to
 the VMs that host Kubernetes.  How does that square with
 Steve's text that seems to imply bare metal minions?
 

That was in a more general context, discussing using Kubernetes for
general deployment. I could just as easily have said hosts,
machines, or instances.

 I can see that some people have had much more detailed design
 discussions than I have yet found.  Perhaps it would be helpful
 to share an organized presentation of the design thoughts in
 more detail.
 

I personally have not had any detailed discussions about this before it
was announced. I've just dug into the design and some of the code of
Kubernetes because it is quite interesting to me.

   
   If TripleO already knows it wants to run a specific Docker image
   on a specific host then TripleO does not need a scheduler.
   
  
  TripleO does not ever specify destination host, because Nova does not
  allow that, nor should it. It does want to isolate failure domains so
  that all three Galera nodes aren't on the same PDU, but we've not really
  gotten to the point where we can do that yet.
 
 So I am still not clear on what Steve is trying to say is the main use 
 case.
 Kubernetes is even farther from balancing among PDUs than Nova is.
 At least Nova has a framework in which this issue can be posed and solved.
 I mean a framework that actually can carry the necessary information.
 The Kubernetes scheduler interface is extremely impoverished in the
 information it passes and it uses Go structs --- which, like C structs,
 cannot be subclassed.

I don't think this is totally clear yet. The thing that Steven seems to be
trying to solve is deploying OpenStack using docker, and Kubernetes may
very well be a better choice than Nova for this. There are some really
nice features, and a lot of the benefits we've been citing about image
based deployments are realized in docker without the pain of a full OS
image to redeploy all the time.

The structs vs. classes argument is completely out of line and has
nothing to do with where Kubernetes might go in the future. It's like
saying because cars use internal combustion engines they are limited. It
is just a facet of how it works today.

 Nova's filter scheduler includes a fatal bug that bites when balancing and 
 you want more than
 one element per area, see https://bugs.launchpad.net/nova/+bug/1373478.
 However: (a) you might not need more than one element per area and
 (b) fixing that bug is a much smaller job than expanding the mind of K8s.
 

Perhaps. I am quite a fan of set based design, and Kubernetes is a
narrowly focused single implementation solution, where Nova is a broadly
focused abstraction layer for VM's. I think it is worthwhile to push
a bit into the Kubernetes space and see whether the limitations are
important or not.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Thoughts on launchpad usage

2014-09-25 Thread Stephen Balukoff
I forgot to add to the etiquette section:

*Before doing work on an assigned blueprint, coordinate with the assignee*
This can help to make sure you aren't wasting time by duplicating efforts
already underway.


(Hmmm... starting to think I should add this to a wiki page...)

Stephen

On Wed, Sep 24, 2014 at 11:56 PM, Stephen Balukoff sbaluk...@bluebox.net
wrote:

 Hi folks!

 First off-- thanks to Brandon for running yesterday's Octavia meeting when
 I had to step out for an emergency at the last minute.

 Anyway, it's clear to me from the transcripts of the meeting that I've
 done a poor job of communicating what my intentions are, as far as what I'm
 doing for managing blueprints in launchpad. Y'all did notice that I'd added
 probably around a dozen new blueprints last night, and updated *all* the
 other blueprints with various tweaks-- and unfortunately wasn't around to
 explain my intentions during the meeting. So hey! Now y'all get another
 book to read by me. I apologize in advance.

 First off, let me say that I'm not a fan of the launchpad blueprint
 system, and have a hard time understanding its usefulness over, say, a text
 file or etherpad for tracking task lists and progress. In my opinion,
 launchpad has too much detail and functionality in areas that are not
 useful, and not enough in areas that actually would be useful. I could rant
 for a while on a lot of specific things launchpad gets wrong... but suffice
 to say I'm really looking forward to a transition to Storyboard (though I'm
 being told it's not the right time for us to start using that tool yet,
 dangit!). Perhaps launchpad is useful for projects that are relatively
 stable and established, or useful where volume of contribution necessitates
 more formal processes. Octavia definitely isn't that yet (and I would
 argue, very few of the existing OpenStack and Stackforge projects appear to
 be, IMO).

 (For the record, yes I am aware of this:
 https://wiki.openstack.org/wiki/Blueprints )

 So, having said this, please note that in using this tool to manage
 software and feature development in Octavia, my primary goals are as
 follows:

 *Keep a prioritized list of everything that needs to be accomplished to
 deliver v0.5*
 (And later, v1.0 and v2.0). This list is divided up into logical topics or
 areas (I'm using blueprints for this) which should contain smaller task
 lists (in the Work Items areas of the blueprints). Since there are still
 a significant number of architectural details to work out (ex. amphora
 lifecycle management), this means some blueprints are not so much about
 coding as they are about design and discussion, followed by documentation.
 Code will likely happen in one or more other blueprints. Also, some
 blueprints might be other non-design or non-coding tasks that need to be
 done in a coordinated way (ex. Do whatever we can to get Neutron LBaaS v2
 into Neutron.)

 The point here is that by tracking everything that needs to happen, a
 complete path from where we are to where we want to be emerges (and gets
 refined and updated, as we make progress, learn more and/or encounter
 obstacles).

 *Indicate who is working on what*
 This is both so that I know who is working on the most important things,
 and so that others contributing know who they should talk to if they want
 to get involved in or need to talk about a specific topic or area.

 *Keep a rough estimate of progress on any given topic*
 For what it's worth, I consider an implementation started when there's
 been a significant amount of work done on it that you can share (which
 includes specs). Heck, as we develop some of these features, specs are
 likely to change anyway. Keeping the Work Items up to date is probably
 the quickest way to provide some detail beyond that.

 *Try to make it obvious where attention is needed*
 Unfortunately, unless everyone is almost religiously using launchpad to
 keep blueprint information up-to-date, then this is difficult to accomplish
 with this tool. At best, a prioritized task list is a good place to start,
 and using the 'blocked' progress indicator can help (when blocked).

 *Try to make things as self-serve as possible*
 I hate being a bottleneck in the process, so the more I can get out of the
 way so people can get work done, the better. Ideally, this project should
 not be dependent on any single person in order to make progress at any
 stage in the game.

 This also means if you're working on a blueprint, try to keep good notes
 in the description, whiteboard, etc. so anyone can see what's going on with
 the blueprint. Links to other resources are less desirable (since following
 them off the launchpad site is distracting and disruptive), but are often
 necessary, especially when linking to what will become permanent
 documentation.

 ...

 Anyway, having said my intentions above, let me suggest the following as
 far as etiquette is concerned (please feel free to object to / discuss
 these things of 

Re: [openstack-dev] [NOVA] security group fails to attach to an instance if port-id is specified during boot.

2014-09-25 Thread Oleg Bondarev
Hi Parikshit,

Looks like a bug. Currently, if a port is specified, its security groups are
not updated; it should be fixed.
I've reported https://bugs.launchpad.net/nova/+bug/1373774 to track this.
Thanks for reporting!
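
In the meantime, a workaround along these lines should do the trick (using the
names from your example, untested):

# set the desired security group directly on the pre-created port
neutron port-update <port-id> --security-group sec_grp

# or add it to the instance after boot
nova add-secgroup <server-name-or-id> sec_grp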

Thanks,
Oleg

On Thu, Sep 25, 2014 at 10:15 AM, Parikshit Manur 
parikshit.ma...@citrix.com wrote:

  Hi All,

 Creation of server with command  ‘nova boot  --image
 image --flavor m1.medium --nic port-id=port-id --security-groups
  sec_grp name’ fails to attach the security group to the
 port/instance. The response payload has the security group added but only
 default security group is attached to the instance.  Separate action has to
 be performed on the instance to add sec_grp, and it is successful.
 Supplying the same with ‘--nic net-id=net-id’ works as expected.



 Is this the expected behaviour / are there any other options which needs
 to be specified to add the security group when port-id needs to be attached
 during boot.



 Thanks,

 Parikshit Manur

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] MySQL performance and Mongodb backend maturity question

2014-09-25 Thread Igor Degtiarov
Hi, Qiming Teng.

All backends now support events, so you may use MongoDB instead of
MySQL, or, if you like, you may choose HBase.
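
(Switching the backend is a matter of pointing the database connection at
MongoDB in ceilometer.conf, roughly as follows; host and credentials are
placeholders:)

[database]
connection = mongodb://ceilometer:CEILOMETER_DBPASS@mongo-host:27017/ceilometer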

Cheers, Igor.
-- Igor


On Thu, Sep 25, 2014 at 7:43 AM, Preston L. Bannister
pres...@bannister.us wrote:
 Sorry, I am jumping into this without enough context, but ...


 On Wed, Sep 24, 2014 at 8:37 PM, Qiming Teng teng...@linux.vnet.ibm.com
 wrote:

 mysql> select count(*) from metadata_text;
 +----------+
 | count(*) |
 +----------+
 | 25249913 |
 +----------+
 1 row in set (3.83 sec)



 There are problems where a simple sequential log file is superior to a
 database table. The above looks like a log ... a very large number of
 events, without an immediate customer. For sequential access, a simple file
 is *vastly* superior to a database table.

 If you are thinking about indexed access to the above as a table, think
 about the cost of adding items to the index, for that many items. The cost
 of building the index is not small. Running a map/reduce on sequential files
 might be faster.

 Again, I do not have enough context, but ... 25 million rows?




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Openstack-dev-Neutron and Keystone of Junos release

2014-09-25 Thread Ankit21 A
Hi All,

I am working on OpenStack with OpenDaylight (ODL) and wanted to understand what 
the impact of the Neutron and Keystone changes in the Juno release of OpenStack 
will be on Neutron's calls to ODL.
Can anyone help me with this?





Thanks & Regards
Ankit Agarwal


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] MySQL performance and Mongodb backend maturity question

2014-09-25 Thread Qiming Teng
So MongoDB support for events is ready in tree?

Regards,
  Qiming

On Thu, Sep 25, 2014 at 10:26:08AM +0300, Igor Degtiarov wrote:
 Hi, Qiming Teng.
 
 Now all backends support events. So you may use MongoDB instead of
 MySQL, or if you like you may choose HBase.
 
 Cheers, Igor.
 -- Igor
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] MySQL performance and Mongodb backend maturity question

2014-09-25 Thread Dina Belova
Qiming, yes - all the backends (MongoDB, DB2, HBase and the SQL-based ones)
support the events feature now; this was merged, AFAIR, a month or two
ago.

Cheers
Dina

On Thu, Sep 25, 2014 at 11:45 AM, Qiming Teng teng...@linux.vnet.ibm.com
wrote:

 So MongoDB support to events is ready in tree?

 Regards,
   Qiming

 On Thu, Sep 25, 2014 at 10:26:08AM +0300, Igor Degtiarov wrote:
  Hi, Qiming Teng.
 
  Now all backends support events. So you may use MongoDB instead of
  MySQL, or if you like you may choose HBase.
 
  Cheers, Igor.
  -- Igor
 



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Horizon: display status of the VM

2014-09-25 Thread Tien-Trung Trinh
Hi OpenStack-dev,

 

I've posted a question regarding the display of VM status on the
OpenStack forum:

https://ask.openstack.org/en/question/48450/horizon-display-status-of-the-vm
/

 

Any feedback/answer would be much appreciated.

 

Thanks and regards

Trung

 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] MySQL performance and Mongodb backend maturity question

2014-09-25 Thread Qiming Teng
On Wed, Sep 24, 2014 at 09:43:54PM -0700, Preston L. Bannister wrote:
 Sorry, I am jumping into this without enough context, but ...
 
 
 On Wed, Sep 24, 2014 at 8:37 PM, Qiming Teng teng...@linux.vnet.ibm.com
 wrote:
 
  mysql> select count(*) from metadata_text;
  +----------+
  | count(*) |
  +----------+
  | 25249913 |
  +----------+
  1 row in set (3.83 sec)
 
 
 
 There are problems where a simple sequential log file is superior to a
 database table. The above looks like a log ... a very large number of
 events, without an immediate customer. For sequential access, a simple file
 is *vastly* superior to a database table.
 
 If you are thinking about indexed access to the above as a table, think
 about the cost of adding items to the index, for that many items. The cost
 of building the index is not small. Running a map/reduce on sequential
 files might be faster.
 
 Again, I do not have enough context, but ... 25 million rows?

Yes, just about 3 VMs running on two hosts, for at most 3 weeks.  This
is leading me to another question -- any best practices/tools to retire
the old data on a regular basis?

Regards,
  Qiming
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Changes in networking part of Murano

2014-09-25 Thread Serg Melikyan
Murano has advanced networking features that free you from having to configure
networks for your application yourself. By default it will create
an isolated network for each environment and join all VMs needed by your
application to that network.

Previously, created network was joined to the first found router in the
tenant and this behaviour is wrong in many ways. At least some tenants may
have more than one router, and this may cause issues when Murano attaching
network to the wrong router.

We reworked this feature a little bit
(https://review.openstack.org/119800). Now
you can choose which router Murano should use to attach the created
networks. By default the router should be named *murano-default-router*.
You can change the name of the router to be used in the configuration file,
in the [networking] section:
[networking]
...

# Name of the router that going to be used in order to join
# all networks created by Murano (string value)
router_name=router04

Warning! This means that if you upgrade Murano to *juno-rc1* without
additional configuration, your deployments will stop working, failing with the
following error message: *KeyError: Router murano-default-router was not
found*

Requiring cloud providers to have a configured router for each tenant is a
burden on DevOps teams, therefore we improved (
https://review.openstack.org/121679) this feature a little bit more and
added the ability to create a router with the specified name if it is not
present in the tenant. This behaviour may be switched on/off via the
configuration file, and you can also specify which external network the
router should be attached to:
[networking]
...
# ID or name of the external network for routers to connect to
# (string value)
#external_network=ext-net
...
# This option will create a router when one with router_name
# does not exist (boolean value)
#create_router=true

-- 
Serg Melikyan
http://mirantis.com | smelik...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Create an instance with a custom uuid

2014-09-25 Thread Pasquale Porreca
I will briefly explain our use case. This idea is related to another 
project to enable network boot in OpenStack: 
https://blueprints.launchpad.net/nova/+spec/pxe-boot-instance


We want to make use of the extra-dhcp-opt to designate as TFTP server a 
specific instance inside our deployed system, so it will provide the 
right operating system to the other instances booting from the network (once 
the feature from the linked blueprint is implemented).
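
(For context, the extra-dhcp-opt mechanism already allows pointing a port at a
TFTP server along these lines; the addresses and option values here are only
illustrative:)

neutron port-create private-net \
    --extra-dhcp-opt opt_name=tftp-server,opt_value=192.0.2.10 \
    --extra-dhcp-opt opt_name=bootfile-name,opt_value=pxelinux.0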


On the TFTP server we want to be able to filter which boot file to 
provide to different classes of instances, and our idea was to identify 
each class by 2 hexadecimal digits of the UUID (while the rest would be 
randomly generated, still guaranteeing its uniqueness).


Anyway, this is a customization for our specific environment and for a 
feature that is still in the early proposal stage, so we wanted to propose 
allowing user-supplied custom UUIDs as a separate feature and manage the 
generation outside of OpenStack.



On 09/24/14 23:15, Matt Riedemann wrote:



On 9/24/2014 3:17 PM, Dean Troyer wrote:

On Wed, Sep 24, 2014 at 2:58 PM, Roman Podoliaka
rpodoly...@mirantis.com mailto:rpodoly...@mirantis.com wrote:

Are there any known gotchas with support of this feature in REST 
APIs

(in general)?


I'd be worried about relying on a user-defined attribute in that use
case, that's ripe for a DOS.  Since these are cloud-unique I wouldn't
even need to be in your project to block you from creating that clone
instance if I knew your UUID.

dt

--

Dean Troyer
dtro...@gmail.com mailto:dtro...@gmail.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



We talked about this a bit before approving the 
'enforce-unique-instance-uuid-in-db' blueprint [1].  As far as we knew 
there was no one using null instance UUIDs or duplicates for that matter.


The instance object already enforces that the UUID field is unique but 
the database schema doesn't.  I'll be re-proposing that for Kilo when 
it opens up.


If it's a matter of tagging an instance, there is also the tags 
blueprint [2] which will probably be proposed again for Kilo.


[1] 
https://blueprints.launchpad.net/nova/+spec/enforce-unique-instance-uuid-in-db

[2] https://blueprints.launchpad.net/nova/+spec/tag-instances



--
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile +39 3394823805
Skype paskporr


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API meeting

2014-09-25 Thread Christopher Yeoh
Hi,

Just a reminder that the weekly Nova API meeting is being held tomorrow,
Friday, at 0000 UTC.

We encourage cloud operators and those who use the REST API, such as
SDK developers and others who are interested in the future of the
API, to participate.

In other timezones the meeting is at:

EST 20:00 (Thu)
Japan 09:00 (Fri)
China 08:00 (Fri)
ACDT 9:30 (Fri)

The proposed agenda and meeting details are here: 

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda. 

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-25 Thread Flavio Percoco
On 09/24/2014 07:55 PM, Zane Bitter wrote:
 On 18/09/14 14:53, Monty Taylor wrote:
 Hey all,

 I've recently been thinking a lot about Sean's Layers stuff. So I wrote
 a blog post which Jim Blair and Devananda were kind enough to help me
 edit.

 http://inaugust.com/post/108
 
 Thanks Monty, I think there are some very interesting ideas in here.
 
 I'm particularly glad to see the 'big tent' camp reasserting itself,
 because I have no sympathy with anyone who wants to join the OpenStack
 community and then bolt the door behind them. Anyone who contributes to
 a project that is related to OpenStack's goals, is willing to do things
 the OpenStack way, and submits itself to the scrutiny of the TC deserves
 to be treated as a member of our community with voting rights, entry to
 the Design Summit and so on.
 
 I'm curious how you're suggesting we decide which projects satisfy those
 criteria though. Up until now, we've done it through the incubation
 process (or technically, the new program approval process... but in
 practice we've never added a project that was targeted for eventual
 inclusion in the integrated release to a program without incubating it).
 Would the TC continue to judge whether a project is doing things the
 OpenStack way prior to inclusion, or would we let projects self-certify?
 What does it mean for a project to submit itself to TC scrutiny if it
 knows that realistically the TC will never have time to actually
 scrutinise it? Or are you not suggesting a change to the current
 incubation process, just a willingness to incubate multiple projects in
 the same problem space?
 
 I feel like I need to play devil's advocate here, because overall I'm
 just not sure I understand the purpose of arbitrarily - and it *is*
 arbitrary - declaring Layer #1 to be anything required to run
 Wordpress. To anyone whose goal is not to run Wordpress, how is that
 relevant?
 
 Speaking of arbitrary, I had to laugh a little at this bit:
 
  Also, please someone notice that the above is too many steps and should
 be:
 
   openstack boot gentoo on-a 2G-VM with-a publicIP with-a 10G-volume
 call-it blog.inaugust.com
 
 That's kinda sorta exactly what Heat does ;) Minus the part about
 assuming there is only one kind of application, obviously.
 
 
 I think there are a number of unjustified assumptions behind this
 arrangement of things. I'm going to list some here, but I don't want
 anyone to interpret this as a personal criticism of Monty. The point is
 that we all suffer from biases - not for any questionable reasons but
 purely as a result of our own experiences, who we spend our time talking
 to and what we spend our time thinking about - and therefore we should
 all be extremely circumspect about trying to bake our own mental models
 of what OpenStack should be into the organisational structure of the
 project itself.
 
 * Assumption #1: The purpose of OpenStack is to provide a Compute cloud
 
 This assumption is front-and-centre throughout everything Monty wrote.
 Yet this wasn't how the OpenStack project started. In fact there are now
 at least three services - Swift, Nova, Zaqar - that could each make
 sense as the core of a standalone product.
 
 Yes, it's true that Nova effectively depends on Glance and Neutron (and
 everything depends on Keystone). We should definitely document that
 somewhere. But why does it make Nova special?
 
 * Assumption #2: Yawnoc's Law
 
 Don't bother Googling that, I just made it up. It's the reverse of
 Conway's Law:
 
   Infra engineers who design governance structures for OpenStack are
   constrained to produce designs that are copies of the structure of
   Tempest.
 
 I just don't understand why that needs to be the case. Currently, for
 understandable historic reasons, every project gates against every other
 project. That makes no sense any more, completely independently of the
 project governance structure. We should just change it! There is no
 organisational obstacle to changing how gating works.
 
 Even this proposal doesn't entirely make sense on this front - e.g.
 Designate requires only Neutron and Keystone... why should Nova, Glance
 and every other project in Layer 1 gate against it, and vice-versa?
 
 I suggested in another thread[1] a model where each project would
 publish a set of tests, each project would decide which sets of tests to
 pull in and gate on, and Tempest would just be a shell for setting up
 the environment and running the selected tests. Maybe that idea is crazy
 or at least needs more work (it certainly met with only crickets and
 tumbleweeds on the mailing list), but implementing it wouldn't require
 TC intervention and certainly not by-laws changes. It just requires...
 implementing it.
 
 Perhaps the idea here is that by designating Layer 1 the TC is
 indicating to projects which other projects they should accept gate test
 jobs from (a function previously fulfilled by Incubation). I'd argue
 that this is a very bad way to do it, because 

Re: [openstack-dev] [neutron][IPv6] Neighbor Discovery for HA

2014-09-25 Thread Xu Han Peng

Hi,

As we discussed in the last IPv6 sub-team meeting, I was able to construct and 
send an IPv6 unsolicited neighbor advertisement for the external gateway 
interface with the python tool *scapy*:


http://www.secdev.org/projects/scapy/

http://www.idsv6.de/Downloads/IPv6PacketCreationWithScapy.pdf


However, I am having trouble sending this unsolicited neighbor 
advertisement in a given namespace. All the current namespace operations 
leverage ip netns exec and shell commands, but we cannot do this with scapy 
since it's python code. Can anyone advise me on this?
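
For reference, the packet construction I am using looks roughly like this
(addresses, MAC and interface name are placeholders):

from scapy.all import Ether, IPv6, ICMPv6ND_NA, ICMPv6NDOptDstLLAddr, sendp

lladdr = 'fa:16:3e:aa:bb:cc'    # MAC of the qg- interface
target = '2001:db8::1'          # gateway address being advertised

# Unsolicited NA: R=1 (we are a router), S=0 (not a reply to a solicitation),
# O=1 (override existing neighbor cache entries), sent to all-nodes multicast.
pkt = (Ether(src=lladdr, dst='33:33:00:00:00:01') /
       IPv6(src='fe80::f816:3eff:feaa:bbcc', dst='ff02::1') /
       ICMPv6ND_NA(tgt=target, R=1, S=0, O=1) /
       ICMPv6NDOptDstLLAddr(lladdr=lladdr))

sendp(pkt, iface='qg-xxxxxxxx-xx')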


Thanks,
Xu Han

On 09/05/2014 05:46 PM, Xu Han Peng wrote:

Carl,

Seems so. I think GARPs for the internal router interfaces and the external 
gateway port are taken care of by keepalived during failover, and if HA is not 
enabled, _send_gratuitous_arp is called to send out the GARP.


I think we will need to take care of IPv6 for both cases, since keepalived 
1.2.0 supports IPv6; this may need a separate BP. For the case where HA is 
enabled externally, we still need an unsolicited neighbor advertisement for 
gateway failover. But for the internal router interface, since Router 
Advertisements are automatically sent out by RADVD after failover, we 
don't need to send out a neighbor advertisement anymore.


Xu Han


On 09/05/2014 03:04 AM, Carl Baldwin wrote:

Hi Xu Han,

Since I sent my message yesterday there has been some more discussion
in the review on that patch set.  See [1] again.  I think your
assessment is likely correct.

Carl

[1] 
https://review.openstack.org/#/c/70700/37/neutron/agent/l3_ha_agent.py


On Thu, Sep 4, 2014 at 3:32 AM, Xu Han Peng pengxu...@gmail.com wrote:

Carl,

Thanks a lot for your reply!

If I understand correctly, in VRRP case, keepalived will be 
responsible for

sending out GARPs? By checking the code you provided, I can see all the
_send_gratuitous_arp_packet call are wrapped by if not is_ha 
condition.


Xu Han



On 09/04/2014 06:06 AM, Carl Baldwin wrote:

It should be noted that send_arp_for_ha is a configuration option
that preceded the more recent in-progress work to add VRRP controlled
HA to Neutron's router.  The option was added, I believe, to cause the
router to send (default) 3 GARPs to the external gateway if the router
was removed from one network node and added to another by some
external script or manual intervention.  It did not send anything on
the internal network ports.

VRRP is a different story and the code in review [1] sends GARPs on
internal and external ports.

Hope this helps avoid confusion in this discussion.

Carl

[1] 
https://review.openstack.org/#/c/70700/37/neutron/agent/l3_ha_agent.py


On Mon, Sep 1, 2014 at 8:52 PM, Xu Han Peng pengxu...@gmail.com 
wrote:


Anthony,

Thanks for your reply.

If HA method like VRRP are used for IPv6 router, according to the 
VRRP RFC
with IPv6 included, the servers should be auto-configured with the 
active

router's LLA as the default route before the failover happens and still
remain that route after the failover. In other word, there should be 
no need

to use two LLAs for default route of a subnet unless load balance is
required.

When the backup router become the master router, the backup router 
should be
responsible for sending out an unsolicited ND neighbor advertisement 
with
the associated LLA (the previous master's LLA) immediately to update 
the
bridge learning state and sending out router advertisement with the 
same
options with the previous master to maintain the route and bridge 
learning.


This is shown in http://tools.ietf.org/html/rfc5798#section-4.1 and the
actions backup router should take after failover is documented here:
http://tools.ietf.org/html/rfc5798#section-6.4.2. The need for 
immediate

messaging sending and periodic message sending is documented here:
http://tools.ietf.org/html/rfc5798#section-2.4

Since the keepalived manager support for L3 HA is merged:
https://review.openstack.org/#/c/68142/43. And keepalived release 1.2.0
supports VRRP IPv6 features ( 
http://www.keepalived.org/changelog.html, see
Release 1.2.0 | VRRP IPv6 Release). I think we can check if 
keepalived can

satisfy our requirement here and if that will cause any conflicts with
RADVD.

Thoughts?

Xu Han


On 08/28/2014 10:11 PM, Veiga, Anthony wrote:



Anthony and Robert,

Thanks for your reply. I don't know if the arping is there for NAT, 
but I am
pretty sure it's for HA setup to broadcast the router's own change 
since the
arping is controlled by send_arp_for_ha config. By checking the 
man page
of arping, you can find the arping -A we use in code is sending 
out ARP
REPLY instead of ARP REQUEST. This is like saying I am here 
instead of
where are you. I didn't realize this either until Brian pointed 
this out

at my code review below.


That’s what I was trying to say earlier.  Sending out the RA is the 
same
effect.  RA says “I’m here, oh and I’m also a router” and should 
supersede
the need for an unsolicited NA.  The only thing to consider here is 
that RAs
are from LLAs.  If you’re doing IPv6 

Re: [openstack-dev] [Murano] Changes in networking part of Murano

2014-09-25 Thread Timur Nurlygayanov
Hi,

 what if we add a drop-down list of all the routers which are
 available in the specific tenant, so the user will have the ability to select
 the router during application configuration (like the user can now select,
 for example, an availability zone or keypair)?

Regards,
Timur

On Thu, Sep 25, 2014 at 12:15 PM, Serg Melikyan smelik...@mirantis.com
wrote:

 Murano have advanced networking features that give you ability to not care
 about configuring networks for your application. By default it will create
 an isolated network for each environment and join all VMs needed by your
 application to that network.

 Previously, created network was joined to the first found router in the
 tenant and this behaviour is wrong in many ways. At least some tenants may
 have more than one router, and this may cause issues when Murano attaching
 network to the wrong router.

 We reworked this feature a little bit (https://review.openstack.org/119800). 
 Now
 you can choose which router should be used by Murano to attach created
 networks. By default router should be named as *murano-default-router*.
 You can change name of the router that will be used in configuration file,
 in the [*networking]  *section:
 [networking]
 ...

 # Name of the router that going to be used in order to join
 # all networks created by Murano (string value)
 router_name=router04

 Warning! This means, that if you will upgrade Murano to the *juno-rc1* without
 additional configuration your deployment will stop working failing with
 following error message: *KeyError: Router murano-default-router was not
 found*

 Requiring cloud providers to have configured router for each tenant is a
 burden on DevOps teams, therefore we improved (
 https://review.openstack.org/121679) this feature a little bit more and
 added ability to create router with specified name if it is not present in
 the tenant. This behaviour may be switched on/off via configuration file,
 and you can also specify which external network should be used to attach
 router to:
 [networking]
 ...
 # ID or name of the external network for routers to connect to
 # (string value)
 #external_network=ext-net
 ...
 # This option will create a router when one with router_name
 # does not exist (boolean value)
 #create_router=true

 --
 Serg Melikyan
 http://mirantis.com | smelik...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Timur,
QA Engineer
OpenStack Projects
Mirantis Inc
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Changes in networking part of Murano

2014-09-25 Thread Serg Melikyan
We are thinking about providing extended networking configuration to
the user during environment creation through the UI, and it is
mentioned in the Networking Manager specification [1]. But we decided
to focus on improving usability and stability in the Juno cycle, and
many really cool features were moved to the next cycle, like manual
configuration of networking in the environment. We are
currently working on the roadmap for the Kilo cycle, and maybe this
feature will be available in Kilo, though there is no guarantee.

[1] https://wiki.openstack.org/wiki/Murano/Specifications/Network_Management

On Thu, Sep 25, 2014 at 1:05 PM, Timur Nurlygayanov
tnurlygaya...@mirantis.com wrote:
 Hi,

 what if we will add drop down list with the list of all routers which are
 available in the specific tenant and user will have the ability to select
 the router during the application configuration? (like now user can select,
 for example, availability zone or keypair).

 Regards,
 Timur

 On Thu, Sep 25, 2014 at 12:15 PM, Serg Melikyan smelik...@mirantis.com
 wrote:

 Murano have advanced networking features that give you ability to not care
 about configuring networks for your application. By default it will create
 an isolated network for each environment and join all VMs needed by your
 application to that network.

 Previously, created network was joined to the first found router in the
 tenant and this behaviour is wrong in many ways. At least some tenants may
 have more than one router, and this may cause issues when Murano attaching
 network to the wrong router.

 We reworked this feature a little bit
 (https://review.openstack.org/119800). Now you can choose which router
 should be used by Murano to attach created networks. By default router
 should be named as murano-default-router. You can change name of the
 router that will be used in configuration file, in the [networking]
 section:
 [networking]
 ...

 # Name of the router that going to be used in order to join
 # all networks created by Murano (string value)
 router_name=router04

 Warning! This means, that if you will upgrade Murano to the juno-rc1
 without additional configuration your deployment will stop working failing
 with following error message: KeyError: Router murano-default-router was not
 found

 Requiring cloud providers to have configured router for each tenant is a
 burden on DevOps teams, therefore we improved
 (https://review.openstack.org/121679) this feature a little bit more and
 added ability to create router with specified name if it is not present in
 the tenant. This behaviour may be switched on/off via configuration file,
 and you can also specify which external network should be used to attach
 router to:
 [networking]
 ...
 # ID or name of the external network for routers to connect to
 # (string value)
 #external_network=ext-net
 ...
 # This option will create a router when one with router_name
 # does not exist (boolean value)
 #create_router=true

 --
 Serg Melikyan
 http://mirantis.com | smelik...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Timur,
 QA Engineer
 OpenStack Projects
 Mirantis Inc

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Serg Melikyan
http://mirantis.com | smelik...@mirantis.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] MySQL performance and Mongodb backend maturity question

2014-09-25 Thread Daniele Venzano

On 09/25/14 10:12, Qiming Teng wrote:
Yes, just about 3 VMs running on two hosts, for at most 3 weeks. This 
is leading me to another question -- any best practices/tools to 
retire the old data on a regular basis? Regards, Qiming


There is a tool: ceilometer-expirer

I tried to use it on a MySQL database, since I had the same table size 
problem as you, and it made the machine hit swap. I think it tries to 
load the whole table into memory.
Just to see if it would eventually finish, I let it run for 1 week 
before throwing away the whole database and moving on.
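
(If you do want to try it, the expirer works off a TTL in ceilometer.conf,
roughly the following, and is usually run from cron. The option name has moved
around between releases, so check your version:)

[database]
# drop samples older than 30 days (in seconds); -1 keeps them forever
time_to_live = 2592000

$ ceilometer-expirer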


Now I use Ceilometer's pipeline to forward events to elasticsearch via 
udp + logstash and do not use Ceilometer's DB or API at all.
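
(The forwarding side is just a publisher entry in pipeline.yaml; a rough
sketch, with the host/port being whatever your logstash UDP input listens on:)

sinks:
    - name: udp_sink
      transformers:
      publishers:
          - udp://logstash-host:4952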



Best,
Daniele

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][IPv6] Neighbor Discovery for HA

2014-09-25 Thread Kevin Benton
Does running the python script with ip netns exec not work correctly?
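
For example, something along these lines (the script name is just a placeholder):

ip netns exec qrouter-<router-uuid> python send_unsolicited_na.py

The namespace is set for the whole child process, so the sockets scapy opens
afterwards should live in that namespace just like any other command run under
ip netns exec.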

On Thu, Sep 25, 2014 at 2:05 AM, Xu Han Peng pengxu...@gmail.com wrote:
 Hi,

 As we talked in last IPv6 sub-team meeting, I was able to construct and send
 IPv6 unsolicited neighbor advertisement for external gateway interface by
 python tool scapy:

 http://www.secdev.org/projects/scapy/

 http://www.idsv6.de/Downloads/IPv6PacketCreationWithScapy.pdf


 However, I am having trouble to send this unsolicited neighbor advertisement
 in a given namespace. All the current namespace operations leverage ip netns
 exec and shell command. But we cannot do this to scapy since it's python
 code. Can anyone advise me on this?

 Thanks,
 Xu Han


 On 09/05/2014 05:46 PM, Xu Han Peng wrote:

 Carl,

 Seem so. I think internal router interface and external gateway port GARP
 are taken care by keepalived during failover. And if HA is not enable,
 _send_gratuitous_arp is called to send out GARP.

 I think we will need to take care IPv6 for both cases since keepalived 1.2.0
 support IPv6. May need a separate BP. For the case HA is enabled externally,
 we still need unsolicited neighbor advertisement for gateway failover. But
 for internal router interface, since Router Advertisement is automatically
 send out by RADVD after failover, we don't need to send out neighbor
 advertisement anymore.

 Xu Han


 On 09/05/2014 03:04 AM, Carl Baldwin wrote:

 Hi Xu Han,

 Since I sent my message yesterday there has been some more discussion
 in the review on that patch set.  See [1] again.  I think your
 assessment is likely correct.

 Carl

 [1] https://review.openstack.org/#/c/70700/37/neutron/agent/l3_ha_agent.py

 On Thu, Sep 4, 2014 at 3:32 AM, Xu Han Peng pengxu...@gmail.com wrote:

 Carl,

 Thanks a lot for your reply!

 If I understand correctly, in VRRP case, keepalived will be responsible for
 sending out GARPs? By checking the code you provided, I can see all the
 _send_gratuitous_arp_packet call are wrapped by if not is_ha condition.

 Xu Han



 On 09/04/2014 06:06 AM, Carl Baldwin wrote:

 It should be noted that send_arp_for_ha is a configuration option
 that preceded the more recent in-progress work to add VRRP controlled
 HA to Neutron's router.  The option was added, I believe, to cause the
 router to send (default) 3 GARPs to the external gateway if the router
 was removed from one network node and added to another by some
 external script or manual intervention.  It did not send anything on
 the internal network ports.

 VRRP is a different story and the code in review [1] sends GARPs on
 internal and external ports.

 Hope this helps avoid confusion in this discussion.

 Carl

 [1] https://review.openstack.org/#/c/70700/37/neutron/agent/l3_ha_agent.py

 On Mon, Sep 1, 2014 at 8:52 PM, Xu Han Peng pengxu...@gmail.com wrote:

 Anthony,

 Thanks for your reply.

 If HA method like VRRP are used for IPv6 router, according to the VRRP RFC
 with IPv6 included, the servers should be auto-configured with the active
 router's LLA as the default route before the failover happens and still
 remain that route after the failover. In other word, there should be no need
 to use two LLAs for default route of a subnet unless load balance is
 required.

 When the backup router become the master router, the backup router should be
 responsible for sending out an unsolicited ND neighbor advertisement with
 the associated LLA (the previous master's LLA) immediately to update the
 bridge learning state and sending out router advertisement with the same
 options with the previous master to maintain the route and bridge learning.

 This is shown in http://tools.ietf.org/html/rfc5798#section-4.1 and the
 actions backup router should take after failover is documented here:
 http://tools.ietf.org/html/rfc5798#section-6.4.2. The need for immediate
 messaging sending and periodic message sending is documented here:
 http://tools.ietf.org/html/rfc5798#section-2.4

 Since the keepalived manager support for L3 HA is merged:
 https://review.openstack.org/#/c/68142/43. And keepalived release 1.2.0
 supports VRRP IPv6 features ( http://www.keepalived.org/changelog.html, see
 Release 1.2.0 | VRRP IPv6 Release). I think we can check if keepalived can
 satisfy our requirement here and if that will cause any conflicts with
 RADVD.

 Thoughts?

 Xu Han


 On 08/28/2014 10:11 PM, Veiga, Anthony wrote:



 Anthony and Robert,

 Thanks for your reply. I don't know if the arping is there for NAT, but I am
 pretty sure it's for HA setup to broadcast the router's own change since the
 arping is controlled by send_arp_for_ha config. By checking the man page
 of arping, you can find the arping -A we use in code is sending out ARP
 REPLY instead of ARP REQUEST. This is like saying I am here instead of
 where are you. I didn't realize this either until Brian pointed this out
 at my code review below.


 That’s what I was trying to say earlier.  Sending out the RA is the same
 effect.  RA 

[openstack-dev] [nova] Kilo Blueprints and Specs

2014-09-25 Thread John Garbutt
Hi,

A big thank you to jogo who has done a great job writing up plans for
kilo blueprints and specs:

1) Allow more code that doesn't need a blueprint and spec:
https://review.openstack.org/#/c/116699/3

Specs are a heavy process, so hopefully this will strike a better
balance between process and freedom. Basically it is forcing developers to
write documentation, which is always going to be painful.

2) As a community, get project wide buy-in to blueprint priorities:
https://review.openstack.org/#/c/112733/10

All attempts to do this so far have fallen flat on their face. The new summit
structure should really help the right conversations happen.


I see two big remaining (inter-related) issues:
* how to get more code merged into a Nova release
* how to get better at saying no to stuff that will not fit


There has been much discussion over getting more code into Nova, involving:
* fix technical debt that is slowing us down
* having sub-system (semi-)core reviewers, or similar
* splitting up the code base more (after establishing firmer interfaces)
I think we need a combination of all three, but I didn't want to go
there in this thread.


How do we get better at saying no ?

We have a politeness inversion here. Leaving it to the last moment to
tell people their code is not going to merge causes us lots of hidden
overhead, and frustration all round. Honestly, clicking -2 on so much
code that missed the feature freeze (and in many cases it was already
approved) left me really quite depressed, and the submitters probably
felt worse.

Discussion at the mid-cylce lead to the runway idea:
https://review.openstack.org/#/c/112733

The main push back, are worries over process heaviness of the
approach. The current spec process feels too heavy already, so adding
more process is certainly a risk.

Here is a possible compromise:
* only use the runway system for kilo-3

The idea being, we have a soft-ish feature freeze at the end of
kilo-2, and make any blueprints targeted for kilo-3 go through a
runway-like system, to help ensure what gets on there will get merged
before the end of kilo-3.

During kilo-1 and kilo-2, we can use medium and high priority as a
runway list. But during kilo-1 and kilo-2, it's a soft runway system,
i.e. we allow other blueprints to merge. We could aim to keep around 5
to 10 un-merged features in medium and higher priority slots, similar
to what was attempted in juno, but with more buy in and more open
discussion in nova-meetings.

So during kilo-3, we are adding priorities into the feature freeze
process. You get your blueprint, with core reviewers, signed up to one
of 10 or so slots. And you only get in a slot if it looks like we can
get it merged before feature freeze, meaning it was most likely up for
review in kilo-2. Once in a slot, the reviewers and submitter iterate
quite quickly to get that blueprint merged. All other blueprints will
be un-approved (leaving the spec approved), until they are given a
slot (basically because its the only toggle we a good ACL on).

We still keep the end of kilo-3 for the string, docs and requirement
freeze as normal, so things that don't need a blueprint, and are
feature-like can still be merged during kilo-3.

Why bother with any feature freeze at all? Well the hope is we use the
time to fix bugs and get through the massive backlog (including the
review backlog for bug fixes).


Thoughts? Does this feel like it strikes a good balance?


Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo Blueprints and Specs

2014-09-25 Thread Daniel P. Berrange
On Thu, Sep 25, 2014 at 11:27:09AM +0100, John Garbutt wrote:
 Hi,
 
 A big thank you to jogo who has done a great job writing up plans for
 kilo blueprints and specs:
 
 1) Allow more code that doesn't need a blueprint and spec:
 https://review.openstack.org/#/c/116699/3
 
 Specs are a heavy process, so hopefully this will strike a better
 balance between process and freedom. Basically it is forcing developer
 write documentation, which is always going to be painful.
 
 2) As a community, get project wide buy-in to blueprint priorities:
 https://review.openstack.org/#/c/112733/10
 
 All attempts to do this have fallen flat of their face. The new summit
 structure should really help allow the right conversations to happen.
 
 
 I see two big remaining (inter-related) issues:
 * how to get more code merged into a Nova release
 * how to get better at saying no to stuff that will not fit
 
 
 There has been much discussion over getting more code into Nova, involving:
 * fix technical debt that is slowing us down
 * having sub-system (semi-)core reviewers, or similar
 * splitting up the code base more (after establishing firmer interfaces)
 I think we need a combination of all three, but I didn't want to go
 there in this thread.
 
 
 How do we get better at saying no ?
 
 We have politeness inversion here. Leaving it to the last moment to
 tell people there code is not going to merge causes us lots of hidden
 overhead, and frustration all round. Honestly, clicking -2 on so much
 code that missed the feature freeze (and in many cases it was already
 approved) left me really quite depressed, and the submitters probably
 felt worse.
 
 Discussion at the mid-cylce lead to the runway idea:
 https://review.openstack.org/#/c/112733
 
 The main push back, are worries over process heaviness of the
 approach. The current spec process feels too heavy already, so adding
 more process is certainly a risk.
 
 Here is a possible compromise:
 * only use the runway system for kilo-3
 
 The idea being, we have a soft-ish feature freeze at the end of
 kilo-2, and make any blueprints targeted for kilo-3 go through a
 runway-like system, to help ensure what gets on there will get merged
 before the end of kilo-3.

To use the runway system, we need to have a frequently updated list
of blueprints which are a priority to review / merge. Once we have
such a list, IMHO, adding the fixed runway slots around that does
not do anything positive for me as a reviewer. If we have a priority
list of blueprints that is accurate and timely updated, I'd be far
more effective if I just worked directly from that list. The runways
idea is just going to make me less efficient at reviewing. So I'm
very much against it as an idea. Please just focus on maintaining
the blueprint priority list.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] step ahead regarding swift middleware related topic

2014-09-25 Thread Osanai, Hisashi

Hi Ceilometer Folks,

I would like to step ahead on the following two topics.

(1) Backporting an important fix to Icehouse
I think that this fix is really important and works OK.
Could you please review and approve it?
https://review.openstack.org/#/c/112806/

(2) Repackage the ceilometer and the ceilometerclient packages
I wrote this BP and I'm ready to set to this. Could you please 
review it? 
https://review.openstack.org/#/c/117745/

I registered this BP on specs/juno but it should be changed 
to kilo.

Thanks in advance,
Hisashi Osanai

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Create an instance with a custom uuid

2014-09-25 Thread Andrew Laski


On 09/25/2014 04:18 AM, Pasquale Porreca wrote:
I will briefly explain our use case. This idea is related to another 
project to enable the network boot in OpenStack 
https://blueprints.launchpad.net/nova/+spec/pxe-boot-instance


We want to make use of the extra-dhcp-opt to indicate as tftp server a 
specific instance inside our deployed system, so it will provide the 
right operating system to the other instances booting from network 
(once the feature from the linked blueprint will be implemented).


On the tftp server we want to be able to filter what boot file to 
provide to different class of instances and our idea was to identify 
each class with 2 hexadecimal of the UUID (while the rest would be 
random generated, still granting its uniqueness).


It seems like this would still be achievable using the instance tags 
feature that Matt mentioned.  And it would be more clear since you could 
use human readable class names rather than relying on knowing that part 
of the UUID had special meaning.


If you have a need to add specific information to an instance like 'boot 
class' or want to indicate that an instance in two different clouds is 
actually the same one, the Pumphouse use case, that information should 
be something we layer on top of an instance and not something we encode 
in the UUID.



Anyway this is a customization for our specific environment and for a 
feature that is still in early proposal stage, so we wanted to propose 
as a separate feature to allow user custom UUID and manage the 
generation out of OpenStack.

On 09/24/14 23:15, Matt Riedemann wrote:



On 9/24/2014 3:17 PM, Dean Troyer wrote:

On Wed, Sep 24, 2014 at 2:58 PM, Roman Podoliaka
rpodoly...@mirantis.com mailto:rpodoly...@mirantis.com wrote:

Are there any known gotchas with support of this feature in REST 
APIs

(in general)?


I'd be worried about relying on a user-defined attribute in that use
case, that's ripe for a DOS.  Since these are cloud-unique I wouldn't
even need to be in your project to block you from creating that clone
instance if I knew your UUID.

dt

--

Dean Troyer
dtro...@gmail.com mailto:dtro...@gmail.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



We talked about this a bit before approving the 
'enforce-unique-instance-uuid-in-db' blueprint [1].  As far as we 
knew there was no one using null instance UUIDs or duplicates for 
that matter.


The instance object already enforces that the UUID field is unique 
but the database schema doesn't.  I'll be re-proposing that for Kilo 
when it opens up.


If it's a matter of tagging an instance, there is also the tags 
blueprint [2] which will probably be proposed again for Kilo.


[1] 
https://blueprints.launchpad.net/nova/+spec/enforce-unique-instance-uuid-in-db

[2] https://blueprints.launchpad.net/nova/+spec/tag-instances






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] logging around olso lockutils

2014-09-25 Thread Sean Dague
Having spent a ton of time reading logs, I find that oslo locking ends up
creating a ton of output at DEBUG that you have to mentally filter to
find problems:

2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Created new semaphore iptables internal_lock
/opt/stack/new/nova/nova/openstack/common/lockutils.py:206
2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Acquired semaphore iptables lock
/opt/stack/new/nova/nova/openstack/common/lockutils.py:229
2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Attempting to grab external lock iptables external_lock
/opt/stack/new/nova/nova/openstack/common/lockutils.py:178
2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Got file lock /opt/stack/data/nova/nova-iptables acquire
/opt/stack/new/nova/nova/openstack/common/lockutils.py:93
2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Got semaphore / lock _do_refresh_provider_fw_rules inner
/opt/stack/new/nova/nova/openstack/common/lockutils.py:271
2014-09-24 18:44:49.244 DEBUG nova.compute.manager
[req-b91cb1c1-f211-43ef-9714-651eeb3b2302
DeleteServersAdminTestXML-1408641898
DeleteServersAdminTestXML-469708524] [instance:
98eb8e6e-088b-4dda-ada5-7b2b79f62506] terminating bdm
BlockDeviceMapping(boot_index=0,connection_info=None,created_at=2014-09-24T18:44:42Z,delete_on_termination=True,deleted=False,deleted_at=None,destination_type='local',device_name='/dev/vda',device_type='disk',disk_bus=None,guest_format=None,id=43,image_id='262ab8a2-0790-49b3-a8d3-e8ed73e3ed71',instance=?,instance_uuid=98eb8e6e-088b-4dda-ada5-7b2b79f62506,no_device=False,snapshot_id=None,source_type='image',updated_at=2014-09-24T18:44:42Z,volume_id=None,volume_size=None)
_cleanup_volumes /opt/stack/new/nova/nova/compute/manager.py:2407
2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Released file lock /opt/stack/data/nova/nova-iptables release
/opt/stack/new/nova/nova/openstack/common/lockutils.py:115
2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Releasing semaphore iptables lock
/opt/stack/new/nova/nova/openstack/common/lockutils.py:238
2014-09-24 18:44:49.249 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Semaphore / lock released _do_refresh_provider_fw_rules inner

Also readable here:
http://logs.openstack.org/01/123801/1/check/check-tempest-dsvm-full/b5f8b37/logs/screen-n-cpu.txt.gz#_2014-09-24_18_44_49_240

(Yes, it's kind of ugly)

What occurred to me is that in debugging locking issues what we actually
care about is 2 things semantically:

#1 - we tried to get a lock, but someone else has it. Then we know we've
got lock contention.
#2 - something is still holding a lock after some long amount of time.

#2 turned out to be a critical bit in understanding one of the worst
recent gate impacting issues.

You can write a tool today that analyzes the logs and shows you these
things. However, I wonder if we could actually do something creative in
the code itself to do this already. I'm curious if the creative use of
Timers might let us emit log messages under the conditions above
(someone with better understanding of python internals needs to speak up
here). Maybe it's too much overhead, but I think it's worth at least
asking the question.
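
As a rough sketch of what I mean (this is illustrative only, not the actual
oslo lockutils code; the names, thresholds and logging setup are made up),
timing the acquire covers #1 and a Timer covers #2:

    import contextlib
    import logging
    import threading
    import time

    LOG = logging.getLogger(__name__)

    # illustrative sketch only, not oslo lockutils
    @contextlib.contextmanager
    def monitored_lock(lock, name, acquire_warn=1.0, hold_warn=10.0):
        start = time.time()
        lock.acquire()
        waited = time.time() - start
        if waited > acquire_warn:
            # condition #1: we blocked, so someone else was holding the lock
            LOG.warning("Lock %s: waited %.2fs to acquire", name, waited)
        # condition #2: fire a warning if the lock is still held after
        # hold_warn seconds
        timer = threading.Timer(
            hold_warn, LOG.warning,
            ("Lock %s: still held after %.1fs", name, hold_warn))
        timer.daemon = True
        timer.start()
        try:
            yield
        finally:
            timer.cancel()
            lock.release()

    # usage:
    #   with monitored_lock(threading.Lock(), "iptables"):
    #       ...do the work...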

The same issue exists with processutils, I think: warning that a command
is still running after 10s might be really handy, because it turns out
that issue #2 was caused by this, and it took quite a bit of decoding to
figure that out.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo Blueprints and Specs

2014-09-25 Thread John Garbutt
On 25 September 2014 11:44, Daniel P. Berrange berra...@redhat.com wrote:
 To use the runway system, we need to have a frequently updated list
 of blueprints which are a priority to review / merge. Once we have
 such a list, IMHO, adding the fixed runway slots around that does
 not do anything positive for me as a reviewer. If we have a priority
 list of blueprints that is accurate and timely updated, I'd be far
 more effective if I just worked directly from that list.

I am proposing we do that for kilo-1 and kilo-2.

 Please just focus on the maintaining
 the blueprint priority list.

I am trying to. I clearly failed.


The proposal is to keep kilo-1, kilo-2 much the same as juno. Except,
we work harder on getting people to buy into the priorities that are
set, and actively provoke more debate on their correctness, and we
reduce the bar for what needs a blueprint.

We can't have 50 high priority blueprints; it doesn't mean anything,
right? We need to trim the list down to a manageable number, based on
the agreed project priorities. That's all I mean by slots / runway at
this point.

Does this sound reasonable?


 The runways
 idea is just going to make me less efficient at reviewing. So I'm
 very much against it as an idea.

This proposal is different to the runways idea, although it certainly
borrows aspects of it. I just don't understand how this proposal has
all the same issues?


The key to the kilo-3 proposal is getting better at saying no, this
blueprint isn't very likely to make kilo.

If we focus on a smaller number of blueprints to review, we should be
able to get a greater percentage of those fully completed.

I am just using slots/runway-like ideas to help pick the high priority
blueprints we should concentrate on during that final milestone, rather
than keeping the distraction of 15 or so low priority blueprints, with
those poor submitters jamming up the check queue, constantly rebasing,
and having to deal with the odd stray review comment they might be lucky
enough to get.


Maybe you think this bit is overkill, and that's fine. But I still
think we need a way to stop wasting so much of peoples time on things
that will not make it.


Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo Blueprints and Specs

2014-09-25 Thread Daniel P. Berrange
On Thu, Sep 25, 2014 at 01:52:48PM +0100, John Garbutt wrote:
 On 25 September 2014 11:44, Daniel P. Berrange berra...@redhat.com wrote:
  To use the runway system, we need to have a frequently updated list
  of blueprints which are a priority to review / merge. Once we have
  such a list, IMHO, adding the fixed runway slots around that does
  not do anything positive for me as a reviewer. If we have a priority
  list of blueprints that is accurate and timely updated, I'd be far
  more effective if I just worked directly from that list.
 
 I am proposing we do that for kilo-1 and kilo-2.
 
  Please just focus on the maintaining
  the blueprint priority list.
 
 I am trying to. I clearly failed.
 
 
 The proposal is to keep kilo-1, kilo-2 much the same as juno. Except,
 we work harder on getting people to buy into the priorities that are
 set, and actively provoke more debate on their correctness, and we
 reduce the bar for what needs a blueprint.
 
 We can't have 50 high priority blueprints, it doesn't mean anything,
 right? We need to trim the list down to a manageable number, based on
 the agreed project priorities. Thats all I mean by slots / runway at
 this point.

I would suggest we don't try to rank high/medium/low as that is
too coarse, but rather keep just an ordered priority list. Then you
would not be in the situation of having 50 high blueprints. We
would instead naturally just start at the highest priority and
work downwards. 

  The runways
  idea is just going to make me less efficient at reviewing. So I'm
  very much against it as an idea.
 
 This proposal is different to the runways idea, although it certainly
 borrows aspects of it. I just don't understand how this proposal has
 all the same issues?
 
 
 The key to the kilo-3 proposal, is about getting better at saying no,
 this blueprint isn't very likely to make kilo.
 
 If we focus on a smaller number of blueprints to review, we should be
 able to get a greater percentage of those fully completed.

 I am just using slots/runway-like ideas to help pick the high priority
 blueprints we should concentrate on, during that final milestone.
 Rather than keeping the distraction of 15 or so low priority
 blueprints, with those poor submitters jamming up the check queue, and
 constantly rebasing, and having to deal with the odd stray review
 comment they might get lucky enough to get.

 Maybe you think this bit is overkill, and thats fine. But I still
 think we need a way to stop wasting so much of peoples time on things
 that will not make it.

The high priority blueprints are going to end up being mostly the big
scope changes which take a lot of time to review and probably go through
many iterations. The low priority blueprints are going to end up being
the small things that don't consume significant resource to review and
are easy to deal with in the time we're waiting for the big items to
go through rebases or whatever. So what I don't like about the runways
slots idea is that it removes the ability to be agile and take the initiative
to review and approve the low priority stuff that would otherwise never
make it through.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Create an instance with a custom uuid

2014-09-25 Thread Pasquale Porreca
The problem with using a different tag than the UUID is that it won't be 
possible (as far as I know) to include this tag in the Bootstrap Protocol 
messages exchanged during the pre-boot phase.


Our original idea was to use the Client-identifier (option 61) or Vendor 
class identifier (option 60) of the dhcp request to achieve our target, 
but these fields cannot be controlled in libvirt template and so they 
cannot be set in OpenStack either. Instead the UUID is set in the 
libvirt template by OpenStack and it is included in the messages 
exchanged in the pre-boot phase (option 97) by the instance trying to 
boot from network.


Reference: 
http://www.iana.org/assignments/bootp-dhcp-parameters/bootp-dhcp-parameters.xhtml
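
Just to illustrate the kind of filtering we have in mind on the DHCP/PXE
side (assuming dnsmasq serves DHCP, that the class byte is the leading
byte of the UUID, and that option 97 carries a type byte of 00 followed
by the 16-byte UUID; the tags, class bytes and boot file names below are
invented):

    # illustrative only: tag clients by the leading "class" byte of the UUID
    dhcp-match=set:class-ab,97,00:ab:*
    dhcp-match=set:class-cd,97,00:cd:*
    # hand each class its own boot file
    dhcp-boot=tag:class-ab,boot-class-ab.ipxe
    dhcp-boot=tag:class-cd,boot-class-cd.ipxe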



On 09/25/14 14:43, Andrew Laski wrote:


On 09/25/2014 04:18 AM, Pasquale Porreca wrote:
I will briefly explain our use case. This idea is related to another 
project to enable the network boot in OpenStack 
https://blueprints.launchpad.net/nova/+spec/pxe-boot-instance


We want to make use of the extra-dhcp-opt to indicate as tftp server 
a specific instance inside our deployed system, so it will provide 
the right operating system to the other instances booting from 
network (once the feature from the linked blueprint will be 
implemented).


On the tftp server we want to be able to filter what boot file to 
provide to different class of instances and our idea was to identify 
each class with 2 hexadecimal of the UUID (while the rest would be 
random generated, still granting its uniqueness).


It seems like this would still be achievable using the instance tags 
feature that Matt mentioned.  And it would be more clear since you 
could use human readable class names rather than relying on knowing 
that part of the UUID had special meaning.


If you have a need to add specific information to an instance like 
'boot class' or want to indicate that an instance in two different 
clouds is actually the same one, the Pumphouse use case, that 
information should be something we layer on top of an instance and not 
something we encode in the UUID.



Anyway this is a customization for our specific environment and for a 
feature that is still in early proposal stage, so we wanted to 
propose as a separate feature to allow user custom UUID and manage 
the generation out of OpenStack.

On 09/24/14 23:15, Matt Riedemann wrote:



On 9/24/2014 3:17 PM, Dean Troyer wrote:

On Wed, Sep 24, 2014 at 2:58 PM, Roman Podoliaka
rpodoly...@mirantis.com mailto:rpodoly...@mirantis.com wrote:

Are there any known gotchas with support of this feature in 
REST APIs

(in general)?


I'd be worried about relying on a user-defined attribute in that use
case, that's ripe for a DOS.  Since these are cloud-unique I wouldn't
even need to be in your project to block you from creating that clone
instance if I knew your UUID.

dt

--

Dean Troyer
dtro...@gmail.com mailto:dtro...@gmail.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



We talked about this a bit before approving the 
'enforce-unique-instance-uuid-in-db' blueprint [1].  As far as we 
knew there was no one using null instance UUIDs or duplicates for 
that matter.


The instance object already enforces that the UUID field is unique 
but the database schema doesn't.  I'll be re-proposing that for Kilo 
when it opens up.


If it's a matter of tagging an instance, there is also the tags 
blueprint [2] which will probably be proposed again for Kilo.


[1] 
https://blueprints.launchpad.net/nova/+spec/enforce-unique-instance-uuid-in-db

[2] https://blueprints.launchpad.net/nova/+spec/tag-instances






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile +39 3394823805
Skype paskporr


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [RFC] Kilo release cycle schedule proposal

2014-09-25 Thread Thierry Carrez
Thierry Carrez wrote:
 Kilo Design Summit: Nov 4-7
 Kilo-1 milestone: Dec 11
 Kilo-2 milestone: Jan 29
 Kilo-3 milestone, feature freeze: March 12
 2015.1 (Kilo) release: Apr 23
 L Design Summit: May 18-22

Following feedback on the mailing-list and at the cross-project meeting,
there is growing consensus that shifting one week to the right would be
better. It makes for a short L cycle, but avoids losing 3 weeks between
Kilo release and L design summit. That gives:

Kilo Design Summit: Nov 4-7
Kilo-1 milestone: Dec 18
Kilo-2 milestone: Feb 5
Kilo-3 milestone, feature freeze: Mar 19
2015.1 (Kilo) release: Apr 30
L Design Summit: May 18-22

If you prefer a picture, see attached PDF.

-- 
Thierry Carrez (ttx)


kilo.pdf
Description: Adobe PDF document
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [RFC] Kilo release cycle schedule proposal

2014-09-25 Thread Sean Dague
On 09/25/2014 09:36 AM, Thierry Carrez wrote:
 Thierry Carrez wrote:
 Kilo Design Summit: Nov 4-7
 Kilo-1 milestone: Dec 11
 Kilo-2 milestone: Jan 29
 Kilo-3 milestone, feature freeze: March 12
 2015.1 (Kilo) release: Apr 23
 L Design Summit: May 18-22
 
 Following feedback on the mailing-list and at the cross-project meeting,
 there is growing consensus that shifting one week to the right would be
 better. It makes for a short L cycle, but avoids losing 3 weeks between
 Kilo release and L design summit. That gives:
 
 Kilo Design Summit: Nov 4-7
 Kilo-1 milestone: Dec 18
 Kilo-2 milestone: Feb 5
 Kilo-3 milestone, feature freeze: Mar 19
 2015.1 (Kilo) release: Apr 30
 L Design Summit: May 18-22
 
 If you prefer a picture, see attached PDF.

+1

-Sean


-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] MySQL performance and Mongodb backend maturity question

2014-09-25 Thread Qiming Teng
On Thu, Sep 25, 2014 at 11:40:11AM +0200, Daniele Venzano wrote:
 On 09/25/14 10:12, Qiming Teng wrote:
 Yes, just about 3 VMs running on two hosts, for at most 3 weeks.
 This is leading me to another question -- any best practices/tools
 to retire the old data on a regular basis? Regards, Qiming
 
 There is a tool: ceilometer-expirer
 
 I tried to use it on a mysql database, since I had the same table
 size problem as you and it made the machine hit swap. I think it
 tries to load the whole table in memory.
 Just to see if it would eventually finish, I let it run for 1 week
 before throwing away the whole database and move on.
 
 Now I use Ceilometer's pipeline to forward events to elasticsearch
 via udp + logstash and do not use Ceilometer's DB or API at all.
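
 For reference, the sink side of that looks roughly like this in
 pipeline.yaml (the collector address below is a placeholder and the exact
 layout depends on the Ceilometer release; logstash just needs a udp input
 listening on the same port):

     sources:
         - name: meter_source
           interval: 600
           meters:
               - "*"
           sinks:
               - udp_sink
     sinks:
         - name: udp_sink
           transformers:
           publishers:
               - udp://192.0.2.10:4952   # placeholder logstash address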

Ah, that is something worth a try.  Thanks.

Regards,
 Qiming
 
 Best,
 Daniele
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] PTL Candidacy

2014-09-25 Thread Tristan Cacqueray
confirmed

On 24/09/14 04:37 PM, Swartzlander, Ben wrote:
 Hello! I have been the de-facto PTL for Manila from the conception of
 the project up to now. Since Manila is an officially incubated OpenStack
 program, I have the opportunity to run for election and hopefully become
 the officially elected PTL for the Manila project.
 
 I'm running because I feel that the vision of the Manila project is
 still not achieved, even though we've made tremendous strides in the
 last year, and I want to see the project mature and become part of
 core OpenStack.
 
 Some of you may remember the roots of the Manila project, when we
 proposed shared file system management as an extension to the
 then-nascent Cinder project during the Folsom release. It's taken a lot
 of attempts and failures to arrive at the current Manila project, and
 it's been an exciting and humbling journey, where along the way I've
 had the opportunity to work with many great individuals.
 
 My vision for the future of the Manila includes:
 * Getting more integrated with the rest of OpenStack. We have Devstack,
   Tempest, and Horizon integration, and I'd like to get that code into
   the right places where it can be maintained. We also need to add Heat
   integration, and more complete documentation.
 * Working with distributions on issues related to packaging and
   installation to make Manila as easy to use as possible. This includes
   work with Chef and Puppet.
 * Making Manila usable in more environments. Manila's design center has
   been large-scale public clouds, but we haven't spent enough time on
   small/medium scale environments -- the kind the developers typically
   have and the kind that users typically start out with.
 * Taking good ideas from the rest of OpenStack. We're a small team and
   we can't do everything ourselves. The OpenStack ecosystem is full of
   excellent technology and I want to make sure we take the best ideas
   and apply them to Manila. In particular, there are some features I'd
   like to copy from the Cinder project.
 * A focus on quality. I want to make sure we keep test coverage high
   as we add new features, and increase test coverage on existing
   features. I also want to try to start vendor CI similar to what
   Cinder has.
 * Lastly, I expect to work with vendors to get more drivers contributed
   to expand Manila's hardware support. I am very interested in
   smoothing out some of the networking complexities that make it
   difficult to write drivers today.
 
 I hope you will support my candidacy so I can continue to lead Manila
 towards eventual integration with OpenStack and realize my dream of
 shared file system management in the cloud.
 
 Thank you,
 Ben Swartzlander
 Manila PTL, NetApp Architect
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] PTL Candidacy

2014-09-25 Thread Tristan Cacqueray
confirmed

On 24/09/14 05:19 PM, James Slagle wrote:
 I'd like to announce my candidacy for TripleO PTL.
 
 I think most folks who have worked in the TripleO community probably know me.
 For those who don't, I work for Red Hat, and over the last year and a half 
 that
 I've been involved with TripleO I've worked in different areas. My focus has
 been on improvements to the frameworks to support things such as other 
 distros,
 packages, and offering deployment choices. I've also tried to focus on
 stabilization and documentation as well.
 
 I stand by what I said in my last candidacy announcement[1], so I'm not going
 to repeat all of that here :-).
 
 One of the reasons I've been so active in reviewing changes to the project is
 because I want to help influence the direction and move progress forward for
 TripleO. The spec process was new for TripleO during the Juno cycle, and I 
 also
 helped define that. I think that process is working well and will continue to
 evolve during Kilo as we find what works best.
 
 The TripleO team has made a lot of great progress towards full HA deployments,
 CI improvements, rearchitecting Tuskar as a deployment planning service, and
 driving features in Heat to support our use cases. I support this work
 continuing in Kilo.
 
 I continue to believe in TripleO's mission to use OpenStack itself.  I think
 the feedback provided by TripleO to other projects is very valuable. Given the
 complexity to deploy OpenStack, TripleO has set a high bar for other
 integrated projects to meet to achieve this goal. The resulting new features
 and bug fixes that have surfaced as a result has been great for all of
 OpenStack.
 
 Given that TripleO is the Deployment program though, I also support 
 alternative
 implementations where they make sense. Those implementations may be in
 TripleO's existing projects themselves, new projects entirely, or pulling in
 existing projects under the Deployment program where a desire exists. Not 
 every
 operator is going to deploy OpenStack the same way, and some organizations
 already have entrenched and accepted tooling.
 
 To that end, I would also encourage integration with other deployment tools.
 Puppet is one such example and already has wide support in the broader
 OpenStack community. I'd also like to see TripleO support different update
 mechanisms potentially with Heat's SoftwareConfig feature, which didn't yet
 exist when TripleO first defined an update strategy.
 
 The tripleo-image-elements repository is a heavily used part of our process 
 and
 I've seen some recurring themes come up that I'd like to see addressed. 
 Element
 idempotence seems to often come up, as well as the ability to edit already
 built images. I'd also like to see our elements more generally applicable to
 installing OpenStack vs. just installing OpenStack in an image building
 context.  Personally, I support these features, but mostly, I'd like to drive
 to a consensus on those points during Kilo.
 
 I'd love to see more people developing and using TripleO where they can and
 providing feedback. To enable that, I'd like for easier developer setups to
 be a focus during Kilo so that it's simpler for people to contribute without
 such a large initial learning curve investment. Downloadable prebuilt images
 could be one way we could make that process easier.
 
 There have been a handful of mailing list threads recently about the
 organization of OpenStack and how TripleO/Deployment may fit into that going
 forward. One thing is clear, the team has made a ton of great progress since
 it's inception. I think we should continue on the mission of OpenStack owning
 it's own production deployment story, regardless of how programs may be
 organized in the future, or what different paths that story may take.
 
 Thanks for your consideration!
 
 [1] http://lists.openstack.org/pipermail/openstack-dev/2014-April/031772.html
 
 




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Docs] PTL Candidacy

2014-09-25 Thread Tristan Cacqueray
confirmed

On 25/09/14 12:10 AM, Anne Gentle wrote:
 I'm writing to announce my candidacy for the Documentation Program
 Technical Lead (PTL).
 
 The past six months have flown by. I still recall writing up wish lists
 per-deliverable on the plane home from the Atlanta Summit and the great
 news is many are completed. Of course we still have a lot to do.
 
 We face many challenges as an open source community as we grow and define
 ourselves through our users. As documentation specialists, we have to be
 creative with our resourcing for documentation as the number of teams and
 services increases each release. This release we have:
 - experimented with using RST sourcing for a chapter about Heat Templates
 - managed to keep automating where it makes sense, using the toolset we
 keep improving upon
 - held another successful book sprint for the Architecture and Design Guide
 - split out a repo for the training group focusing not only on training
 guides but also scripts and other training specialties
 - split out the Security Guide with their own review team; completed a
 thorough review of that guide
 - split out the High Availability Guide with their own review team from
 discussions at the Ops Meetup
 - began a Networking Guide pulling together as many interested parties as
 possible before and after the Ops Meetup with a plan for hiring a contract
 writer to work on it with the community
 - added the openstack common client help text to the CLI Reference
 - added Chinese, German, French, and Korean language landing pages to the
 docs site
 - generated config option tables with each milestone release (with few
 exceptions of individual projects)
 - lost a key contributor to API docs (Diane's stats didn't decline far yet:
 http://stackalytics.com/?user_id=diane-flemingrelease=juno)
 - still working towards a new design for page-based docs
 - still working on API reference information
 - still working on removing spec API documents to avoid duplication and
 confusion
 - still testing three of four install guides for the JUNO release (that
 we're nearly there is just so great)
 
 So you can see we have much more to do, but we have come so far. Even in
 compiling this list I worry I'm missing items, there's just so much scope
 to OpenStack docs. We serve users, deployers, administrators, and app
 developers. It continues to be challenging but we keep looking for ways to
 make it work.
 
 We have seen amazing contributors like Andreas Jaeger, Matt Kassawara,
 Gauvain Pocentek, and Christian Berendt find their stride and shine. Yes, I
 could name more but these people have done an incredible job this release.
 
 I'm especially eager to continue collaborating with great managers like
 Nick Chase at Mirantis and Lana Brindley at Rackspace -- they see what we
 can accomplish when enterprise doc teams work well with an upstream.
 They're behind-the-scenes much of the time but I must express my gratitude
 to these two pros up front.
 
 Thanks for your consideration. I'd be honored to continue to serve in this
 role.
 Anne
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [RFC] Kilo release cycle schedule proposal

2014-09-25 Thread Morgan Fainberg
On Thursday, September 25, 2014, Thierry Carrez thie...@openstack.org
wrote:

 Thierry Carrez wrote:
  Kilo Design Summit: Nov 4-7
  Kilo-1 milestone: Dec 11
  Kilo-2 milestone: Jan 29
  Kilo-3 milestone, feature freeze: March 12
  2015.1 (Kilo) release: Apr 23
  L Design Summit: May 18-22

 Following feedback on the mailing-list and at the cross-project meeting,
 there is growing consensus that shifting one week to the right would be
 better. It makes for a short L cycle, but avoids losing 3 weeks between
 Kilo release and L design summit. That gives:

 Kilo Design Summit: Nov 4-7
 Kilo-1 milestone: Dec 18
 Kilo-2 milestone: Feb 5
 Kilo-3 milestone, feature freeze: Mar 19
 2015.1 (Kilo) release: Apr 30
 L Design Summit: May 18-22

 If you prefer a picture, see attached PDF.

 --
 Thierry Carrez (ttx)


+1
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Release criticality of bug 1365606 (get_network_info efficiency for nova-network)

2014-09-25 Thread Dan Smith
 and I don't see how https://review.openstack.org/#/c/121663/ is actually
 dependent on https://review.openstack.org/#/c/119521/.

Yeah, agreed. I think that we _need_ the fix patch in Juno. The query
optimization is good, and something we should take, but it makes me
nervous sliding something like that in at the last minute without more
exposure. Especially given that it has been like this for more than one
release, it seems like Kilo material to me.

--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Create an instance with a custom uuid

2014-09-25 Thread Matt Riedemann



On 9/25/2014 8:26 AM, Pasquale Porreca wrote:

The problem to use a different tag than UUID is that it won't be
possible (for what I know) to include this tag in the Bootstrap Protocol
messages exchanged during the pre-boot phase.

Our original idea was to use the Client-identifier (option 61) or Vendor
class identifier (option 60) of the dhcp request to achieve our target,
but these fields cannot be controlled in libvirt template and so they
cannot be set in OpenStack either. Instead the UUID is set it the
libvirt template by OpenStack and it is included in the messages
exchanged in the pre-boot phase (option 97) by the instance trying to
boot from network.

Reference:
http://www.iana.org/assignments/bootp-dhcp-parameters/bootp-dhcp-parameters.xhtml



On 09/25/14 14:43, Andrew Laski wrote:


On 09/25/2014 04:18 AM, Pasquale Porreca wrote:

I will briefly explain our use case. This idea is related to another
project to enable the network boot in OpenStack
https://blueprints.launchpad.net/nova/+spec/pxe-boot-instance

We want to make use of the extra-dhcp-opt to indicate as tftp server
a specific instance inside our deployed system, so it will provide
the right operating system to the other instances booting from
network (once the feature from the linked blueprint will be
implemented).

On the tftp server we want to be able to filter what boot file to
provide to different class of instances and our idea was to identify
each class with 2 hexadecimal of the UUID (while the rest would be
random generated, still granting its uniqueness).


It seems like this would still be achievable using the instance tags
feature that Matt mentioned.  And it would be more clear since you
could use human readable class names rather than relying on knowing
that part of the UUID had special meaning.

If you have a need to add specific information to an instance like
'boot class' or want to indicate that an instance in two different
clouds is actually the same one, the Pumphouse use case, that
information should be something we layer on top of an instance and not
something we encode in the UUID.



Anyway this is a customization for our specific environment and for a
feature that is still in early proposal stage, so we wanted to
propose as a separate feature to allow user custom UUID and manage
the generation out of OpenStack.
On 09/24/14 23:15, Matt Riedemann wrote:



On 9/24/2014 3:17 PM, Dean Troyer wrote:

On Wed, Sep 24, 2014 at 2:58 PM, Roman Podoliaka
rpodoly...@mirantis.com mailto:rpodoly...@mirantis.com wrote:

Are there any known gotchas with support of this feature in
REST APIs
(in general)?


I'd be worried about relying on a user-defined attribute in that use
case, that's ripe for a DOS.  Since these are cloud-unique I wouldn't
even need to be in your project to block you from creating that clone
instance if I knew your UUID.

dt

--

Dean Troyer
dtro...@gmail.com mailto:dtro...@gmail.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



We talked about this a bit before approving the
'enforce-unique-instance-uuid-in-db' blueprint [1].  As far as we
knew there was no one using null instance UUIDs or duplicates for
that matter.

The instance object already enforces that the UUID field is unique
but the database schema doesn't.  I'll be re-proposing that for Kilo
when it opens up.

If it's a matter of tagging an instance, there is also the tags
blueprint [2] which will probably be proposed again for Kilo.

[1]
https://blueprints.launchpad.net/nova/+spec/enforce-unique-instance-uuid-in-db

[2] https://blueprints.launchpad.net/nova/+spec/tag-instances






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




If it's a matter of getting the instance tag information down to the 
libvirt driver on boot, that shouldn't be a problem. There are others 
asking for similar things, i.e. "I want to tag my instances at create 
time and store that tag metadata in some namespace in the libvirt domain 
xml so I can have an application outside of openstack consuming those 
domain xmls and reading that custom namespace information."
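
To illustrate, libvirt already allows arbitrary application-owned XML under 
the domain's metadata element, so a tag could in principle end up looking 
something like this (the namespace and element names are invented for the 
example; this is not something Nova writes today):

    <metadata>
      <!-- invented namespace for the example, not written by Nova -->
      <ex:tags xmlns:ex="http://example.com/instance-tags/1.0">
        <ex:tag>boot-class-ab</ex:tag>
      </ex:tags>
    </metadata>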


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Choice of Series goal of a blueprint

2014-09-25 Thread Angelo Matarazzo

Hi all,
Can I create a blueprint and choose a previous Series goal (e.g. Icehouse)?
I think it may be possible, but no reviewer or driver will be 
interested in it.

Right?

Best regards,
Angelo

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Create an instance with a custom uuid

2014-09-25 Thread Daniel P. Berrange
On Thu, Sep 25, 2014 at 09:19:03AM -0500, Matt Riedemann wrote:
 
 
 On 9/25/2014 8:26 AM, Pasquale Porreca wrote:
 The problem to use a different tag than UUID is that it won't be
 possible (for what I know) to include this tag in the Bootstrap Protocol
 messages exchanged during the pre-boot phase.
 
 Our original idea was to use the Client-identifier (option 61) or Vendor
 class identifier (option 60) of the dhcp request to achieve our target,
 but these fields cannot be controlled in libvirt template and so they
 cannot be set in OpenStack either. Instead the UUID is set it the
 libvirt template by OpenStack and it is included in the messages
 exchanged in the pre-boot phase (option 97) by the instance trying to
 boot from network.
 
 Reference:
 http://www.iana.org/assignments/bootp-dhcp-parameters/bootp-dhcp-parameters.xhtml

[snip]

 If it's a matter of getting the instance tag information down to the libvirt
 driver on boot that shouldn't be a problem, there are others asking for
 similar things, i.e. I want to tag my instances at create time and store
 that tag metadata in some namespace in the libvirt domain xml so I can have
 an application outside of openstack consuming those domain xml's and reading
 that custom namespace information.

Perhaps I'm misunderstanding something, but isn't the DHCP client that
needs to send the tag running in the guest OS ? Libvirt is involved wrt
UUID, because UUID is populated in the guest's virtual BIOS and then
extracted by the guest OS and from there used by the DHCP client. If
we're talking about making a different tag/identifier available for
the DHCP client, then this is probably not going to involve libvirt
unless it also gets pushed up via the virtual BIOS. IOW, couldn't you
just pass whatever tag is needed to the guest OS via the configdrive
or metadata service.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [elections] Last hours for PTL candidate announcements

2014-09-25 Thread Anita Kuno
Tristan has been doing a great job verifying most of the current
candidate announcements - thank you, Tristan! - while I am head down on
the project-config split in infra, but I did want to send out the
reminder that we are in the last hours for PTL candidate announcements.

If you want to stand for PTL, don't delay, follow the instructions on
the wikipage and make sure we know your intentions:
https://wiki.openstack.org/wiki/PTL_Elections_September/October_2014

Thank you,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] client release deadline - Sept 18th

2014-09-25 Thread Morgan Fainberg
Keystone team has released Keystonemiddleware 1.2.0

https://pypi.python.org/pypi/keystonemiddleware/1.2.0

This should be the version coinciding with the Juno OpenStack release. 


—
Morgan Fainberg


-Original Message-
From: Sergey Lukjanov slukja...@mirantis.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: September 23, 2014 at 17:12:16
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [release] client release deadline - Sept 18th

 We have a final sahara client release for Juno -
 https://pypi.python.org/pypi/python-saharaclient/0.7.4
  
 On Tue, Sep 23, 2014 at 12:59 PM, Eoghan Glynn wrote:
 
  The ceilometer team released python-ceilometerclient vesion 1.0.11 
  yesterday:  
 
  https://pypi.python.org/pypi/python-ceilometerclient/1.0.11
 
  Cheers,
  Eoghan
 
  Keystone team has released 0.11.1 of python-keystoneclient. Due to some
  delays getting things through the gate this took a few extra days.
 
  https://pypi.python.org/pypi/python-keystoneclient/0.11.1
 
  —Morgan
 
 
  —
  Morgan Fainberg
 
 
  -Original Message-
  From: John Dickinson  
  Reply: OpenStack Development Mailing List (not for usage questions)
  
  Date: September 17, 2014 at 20:54:19
  To: OpenStack Development Mailing List (not for usage questions)
  
  Subject: Re: [openstack-dev] [release] client release deadline - Sept 18th
 
   I just release python-swiftclient 2.3.0
  
   In addition to some smaller changes and bugfixes, the biggest changes are
   the support
   for Keystone v3 and a refactoring that allows for better testing and
   extensibility of
   the functionality exposed by the CLI.
  
   https://pypi.python.org/pypi/python-swiftclient/2.3.0
  
   --John
  
  
  
   On Sep 17, 2014, at 8:14 AM, Matt Riedemann wrote:
  
   
   
On 9/15/2014 12:57 PM, Matt Riedemann wrote:
   
   
On 9/10/2014 11:08 AM, Kyle Mestery wrote:
On Wed, Sep 10, 2014 at 10:01 AM, Matt Riedemann
wrote:
   
   
On 9/9/2014 4:19 PM, Sean Dague wrote:
   
As we try to stabilize OpenStack Juno, many server projects need to
get
out final client releases that expose new features of their 
servers.
While this seems like not a big deal, each of these clients 
releases
ends up having possibly destabilizing impacts on the OpenStack 
whole
(as
the clients do double duty in cross communicating between 
services).
   
As such in the release meeting today it was agreed clients should
have
their final release by Sept 18th. We'll start applying the 
dependency
freeze to oslo and clients shortly after that, all other 
requirements
should be frozen at this point unless there is a high priority bug
around them.
   
-Sean
   
   
Thanks for bringing this up. We do our own packaging and need time
for legal
clearances and having the final client releases done in a reasonable
time
before rc1 is helpful. I've been pinging a few projects to do a 
final
client release relatively soon. python-neutronclient has a release
this
week and I think John was planning a python-cinderclient release 
this
week
also.
   
Just a slight correction: python-neutronclient will have a final
release once the L3 HA CLI changes land [1].
   
Thanks,
Kyle
   
[1] https://review.openstack.org/#/c/108378/
   
--
   
Thanks,
   
Matt Riedemann
   
   
   
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
   
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
   
   
python-cinderclient 1.1.0 was released on Saturday:
   
https://pypi.python.org/pypi/python-cinderclient/1.1.0
   
   
python-novaclient 2.19.0 was released yesterday [1].
   
List of changes:
   
mriedem@ubuntu:~/git/python-novaclient$ git log 2.18.1..2.19.0 
--oneline  
--no-merges
cd56622 Stop using intersphinx
d96f13d delete python bytecode before every test run
4bd0c38 quota delete tenant_id parameter should be required
3d68063 Don't display duplicated security groups
2a1c07e Updated from global requirements
319b61a Fix test mistake with requests-mock
392148c Use oslo.utils
e871bd2 Use Token fixtures from keystoneclient
aa30c13 Update requirements.txt to include keystoneclient
bcc009a Updated from global requirements
f0beb29 Updated from global requirements
cc4f3df Enhance network-list to allow --fields
fe95fe4 Adding Nova Client support for auto find host APIv2
b3da3eb Adding Nova Client support for auto find host APIv3

Re: [openstack-dev] [nova] Create an instance with a custom uuid

2014-09-25 Thread Pasquale Porreca
This is correct, Daniel, except that it is done by the virtual 
firmware/BIOS of the virtual machine and not by the OS (not yet 
installed at that time).


This is the reason we thought about the UUID: it is already used by the iPXE 
client and included in Bootstrap Protocol messages, it is taken from 
the uuid field in the libvirt template, and the uuid in libvirt is set by 
OpenStack; the only missing piece is the ability to set the UUID in 
OpenStack instead of having it randomly generated.


Having another user defined tag in libvirt won't help with our issue, 
since it won't be included in Bootstrap Protocol messages, not without 
changes in the virtual BIOS/firmware (as you stated too), and honestly my 
team doesn't have interest in this (nor the competence).


I don't think the configdrive or metadata service would help either: the 
OS on the instance is not yet installed at that time (the target of the 
network boot is exactly to install the OS on the instance!), so it won't 
be able to mount it.


On 09/25/14 16:24, Daniel P. Berrange wrote:

On Thu, Sep 25, 2014 at 09:19:03AM -0500, Matt Riedemann wrote:


On 9/25/2014 8:26 AM, Pasquale Porreca wrote:

The problem to use a different tag than UUID is that it won't be
possible (for what I know) to include this tag in the Bootstrap Protocol
messages exchanged during the pre-boot phase.

Our original idea was to use the Client-identifier (option 61) or Vendor
class identifier (option 60) of the dhcp request to achieve our target,
but these fields cannot be controlled in libvirt template and so they
cannot be set in OpenStack either. Instead the UUID is set it the
libvirt template by OpenStack and it is included in the messages
exchanged in the pre-boot phase (option 97) by the instance trying to
boot from network.

Reference:
http://www.iana.org/assignments/bootp-dhcp-parameters/bootp-dhcp-parameters.xhtml

[snip]


If it's a matter of getting the instance tag information down to the libvirt
driver on boot that shouldn't be a problem, there are others asking for
similar things, i.e. I want to tag my instances at create time and store
that tag metadata in some namespace in the libvirt domain xml so I can have
an application outside of openstack consuming those domain xml's and reading
that custom namespace information.

Perhaps I'm misunderstanding something, but isn't the DHCP client that
needs to send the tag running in the guest OS ? Libvirt is involved wrt
UUID, because UUID is populated in the guest's virtual BIOS and then
extracted by the guest OS and from there used by the DHCP client. If
we're talking about making a different tag/identifier available for
the DHCP client, then this is probably not going to involve libvirt
unless it also gets pushed up via the virtual BIOS. IOW, couldn't you
just pass whatever tag is needed to the guest OS via the configdrive
or metadata service.

Regards,
Daniel


--
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile +39 3394823805
Skype paskporr


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] keystonemiddleware release 1.2.0

2014-09-25 Thread Morgan Fainberg
The Keystone team has released keystonemiddleware 1.2.0 [1]. This version is 
meant to be the release coinciding with the Juno release of OpenStack. 

Details of new features and bug fixes included in the 1.2.0 release of 
keystonemiddleware can be found on the milestone information page [2].


Cheers, 
Morgan Fainberg 

[1] https://pypi.python.org/pypi/keystonemiddleware/1.2.0
[2] https://launchpad.net/keystonemiddleware/+milestone/1.2.0



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Create an instance with a custom uuid

2014-09-25 Thread Daniel P. Berrange
On Thu, Sep 25, 2014 at 05:23:22PM +0200, Pasquale Porreca wrote:
 This is correct Daniel, except that that it is done by the virtual
 firmware/BIOS of the virtual machine and not by the OS (not yet installed at
 that time).
 
 This is the reason we thought about UUID: it is yet used by the iPXE client
 to be included in Bootstrap Protocol messages, it is taken from the uuid
 field in libvirt template and the uuid in libvirt is set by OpenStack; the
 only missing passage is the chance to set the UUID in OpenStack instead to
 have it randomly generated.
 
 Having another user defined tag in libvirt won't help for our issue, since
 it won't be included in Bootstrap Protocol messages, not without changes in
 the virtual BIOS/firmware (as you stated too) and honestly my team doesn't
 have interest in this (neither the competence).
 
 I don't think the configdrive or metadata service would help either: the OS
 on the instance is not yet installed at that time (the target if the network
 boot is exactly to install the OS on the instance!), so it won't be able to
 mount it.

Ok, yes, if we're considering the DHCP client inside the iPXE BIOS
blob, then I don't see any currently viable options besides UUID.
There's no mechanism for passing any other data into iPXE that I
am aware of, though if there is a desire to do that it could be
raised on the QEMU mailing list for discussion.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Release criticality of bug 1365606 (get_network_info efficiency for nova-network)

2014-09-25 Thread Matt Riedemann



On 9/25/2014 9:15 AM, Dan Smith wrote:

and I don't see how https://review.openstack.org/#/c/121663/ is actually
dependent on https://review.openstack.org/#/c/119521/.


Yeah, agreed. I think that we _need_ the fix patch in Juno. The query
optimization is good, and something we should take, but it makes me
nervous sliding something like that in at the last minute without more
exposure. Especially given that it has been like this for more than one
release, it seems like Kilo material to me.

--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I agree with this and said the same in IRC a few times when it was 
brought up.  Unfortunately the optimization patch was approved at one 
point but had to be rebased.  Then about three weeks went by and we're 
sitting on top of rc1, and I think that optimization is too risky at this
point: we have known gate issues and I wouldn't like to see us add to
them.  Granted, this might actually help with some gate races, I'm not
sure, but it seems too risky to me without more time to bake it in
before we do release candidates.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Steven Dake

On 09/25/2014 12:01 AM, Clint Byrum wrote:

Excerpts from Mike Spreitzer's message of 2014-09-24 22:01:54 -0700:

Clint Byrum cl...@fewbar.com wrote on 09/25/2014 12:13:53 AM:


Excerpts from Mike Spreitzer's message of 2014-09-24 20:49:20 -0700:

Steven Dake sd...@redhat.com wrote on 09/24/2014 11:02:49 PM:

...

...
Does TripleO require container functionality that is not available
when using the Docker driver for Nova?

As far as I can tell, the quantitative handling of capacities and
demands in Kubernetes is much inferior to what Nova does today.


Yes, TripleO needs to manage baremetal and containers from a single
host. Nova and Neutron do not offer this as a feature unfortunately.

In what sense would Kubernetes manage baremetal (at all)?
By from a single host do you mean that a client on one host
can manage remote baremetal and containers?

I can see that Kubernetes allows a client on one host to get
containers placed remotely --- but so does the Docker driver for Nova.


I mean that one box would need to host Ironic, Docker, and Nova, for
the purposes of deploying OpenStack. We call it the undercloud, or
sometimes the Deployment Cloud.

It's not necessarily something that Nova/Neutron cannot do by design,
but it doesn't work now.


As far as use cases go, the main use case is to run a specific
Docker container on a specific Kubernetes minion bare metal host.

Clint, in another branch of this email tree you referred to
the VMs that host Kubernetes.  How does that square with
Steve's text that seems to imply bare metal minions?


That was in a more general context, discussing using Kubernetes for
general deployment. Could have just as easily have said hosts,
machines, or instances.


I can see that some people have had much more detailed design
discussions than I have yet found.  Perhaps it would be helpful
to share an organized presentation of the design thoughts in
more detail.


I personally have not had any detailed discussions about this before it
was announced. I've just dug into the design and some of the code of
Kubernetes because it is quite interesting to me.


If TripleO already knows it wants to run a specific Docker image
on a specific host then TripleO does not need a scheduler.


TripleO does not ever specify destination host, because Nova does not
allow that, nor should it. It does want to isolate failure domains so
that all three Galera nodes aren't on the same PDU, but we've not really
gotten to the point where we can do that yet.

So I am still not clear on what Steve is trying to say is the main use
case.
Kubernetes is even farther from balancing among PDUs than Nova is.
At least Nova has a framework in which this issue can be posed and solved.
I mean a framework that actually can carry the necessary information.
The Kubernetes scheduler interface is extremely impoverished in the
information it passes, and it uses Go structs --- which, like C structs,
cannot be subclassed.

I don't think this is totally clear yet. The thing that Steven seems to be
trying to solve is deploying OpenStack using docker, and Kubernetes may
very well be a better choice than Nova for this. There are some really
nice features, and a lot of the benefits we've been citing about image
based deployments are realized in docker without the pain of a full OS
image to redeploy all the time.


This is precisely the problem I want to solve.  I looked at Nova+Docker 
as a solution, and it seems to me the runway to get to a successful 
codebase is longer with more risk.  That is why this is an experiment to 
see if a Kubernetes-based approach would work.  If at the end of the day 
we throw out Kubernetes as a scheduler once we have the other problems 
solved and reimplement Kubernetes in Nova+Docker, I think that would be 
an acceptable outcome, but not something I want to *start* with but 
*finish* with.


Regards
-steve


The structs vs. classes argument is completely out of line and has
nothing to do with where Kubernetes might go in the future. It's like
saying because cars use internal combustion engines they are limited. It
is just a facet of how it works today.


Nova's filter scheduler includes a fatal bug that bites when you are balancing and
want more than
one element per area, see https://bugs.launchpad.net/nova/+bug/1373478.
However: (a) you might not need more than one element per area and
(b) fixing that bug is a much smaller job than expanding the mind of K8s.


Perhaps. I am quite a fan of set based design, and Kubernetes is a
narrowly focused single implementation solution, where Nova is a broadly
focused abstraction layer for VM's. I think it is worthwhile to push
a bit into the Kubernetes space and see whether the limitations are
important or not.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list

Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Steven Dake

On 09/24/2014 10:01 PM, Mike Spreitzer wrote:

Clint Byrum cl...@fewbar.com wrote on 09/25/2014 12:13:53 AM:

 Excerpts from Mike Spreitzer's message of 2014-09-24 20:49:20 -0700:
  Steven Dake sd...@redhat.com wrote on 09/24/2014 11:02:49 PM:
   ...
  ...
  Does TripleO require container functionality that is not available
  when using the Docker driver for Nova?
 
  As far as I can tell, the quantitative handling of capacities and
  demands in Kubernetes is much inferior to what Nova does today.
 

 Yes, TripleO needs to manage baremetal and containers from a single
 host. Nova and Neutron do not offer this as a feature unfortunately.

In what sense would Kubernetes manage baremetal (at all)?
By from a single host do you mean that a client on one host
can manage remote baremetal and containers?

I can see that Kubernetes allows a client on one host to get
containers placed remotely --- but so does the Docker driver for Nova.


   As far as use cases go, the main use case is to run a specific
   Docker container on a specific Kubernetes minion bare metal host.

Clint, in another branch of this email tree you referred to
the VMs that host Kubernetes.  How does that square with
Steve's text that seems to imply bare metal minions?

I can see that some people have had much more detailed design
discussions than I have yet found.  Perhaps it would be helpful
to share an organized presentation of the design thoughts in
more detail.



Mike,

I have had no such design discussions.  Thus far the furthest along we 
are in the project is determining we need Docker containers for each of 
the OpenStack daemons.  We are working a bit on how that design should 
operate.  For example, our current model on reconfiguration of a docker 
container is to kill the docker container and start a fresh one with the 
new configuration.


This is literally where the design discussions have finished.  We have 
not had much discussion about Kubernetes at all other than that I know it is 
a docker scheduler and I know it can get the job done :) I think other 
folks' design discussions so far on this thread are speculation about 
what an architecture should look like.  That is great - let's have those 
Monday at 2000 UTC in #openstack-meeting at our first Kolla meeting.


Regards
-steve


 
  If TripleO already knows it wants to run a specific Docker image
  on a specific host then TripleO does not need a scheduler.
 

 TripleO does not ever specify destination host, because Nova does not
 allow that, nor should it. It does want to isolate failure domains so
 that all three Galera nodes aren't on the same PDU, but we've not really
 gotten to the point where we can do that yet.

So I am still not clear on what Steve is trying to say is the main use 
case.

Kubernetes is even farther from balancing among PDUs than Nova is.
At least Nova has a framework in which this issue can be posed and 
solved.

I mean a framework that actually can carry the necessary information.
The Kubernetes scheduler interface is extremely impoverished in the
information it passes, and it uses Go structs --- which, like C structs,
cannot be subclassed.
Nova's filter scheduler includes a fatal bug that bites when you are balancing 
and want more than

one element per area, see https://bugs.launchpad.net/nova/+bug/1373478.
However: (a) you might not need more than one element per area and
(b) fixing that bug is a much smaller job than expanding the mind of K8s.

Thanks,
Mike


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] MySQL performance and Mongodb backend maturity question

2014-09-25 Thread gordon chung
 mysql select count(*) from metadata_text;
 +--+
 | count(*) |
 +--+
 | 25249913 |
 +--+
 1 row in set (3.83 sec) 
 There were 25M records in one table.  The deletion time is reaching an
 unacceptable level (7 minutes for 4M records) and it was not increasing
 in a linear way.  Maybe DB experts can show me how to optimize this?
we don't do any customisations in the default ceilometer package so i'm sure 
there's a way to optimise... not sure if any devops ppl read this list. 
 Another question: does the mongodb backend support events now?
 (I asked this question in IRC, but, just as usual, no response from
 anyone in that community, no matter a silly question or not is it...)
regarding events, are you specifically asking about events 
(http://docs.openstack.org/developer/ceilometer/events.html) in ceilometer or 
using the events term in a generic sense? the table above has no relation to events 
in ceilometer, it's related to samples and the corresponding resource.  we did do 
some remodelling of the sql backend this cycle which should shrink the size of the 
metadata tables.
there's a euro-bias in ceilometer so you'll be more successful reaching people 
on irc during euro work hours... that said, you'll probably get the best response 
by posting to the list or pinging someone on the core team directly.
cheers,
gord
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] ./run_tests issue

2014-09-25 Thread Ben Nemec
On 09/22/2014 01:29 AM, Deepak Shetty wrote:
 That's incorrect, as I said in my original mail.. I am using devstack+manila
 and it wasn't very clear to me that mysql-devel needs to be installed and
 it didn't get installed. I am on F20, not sure if that causes this; if
 yes, then we need to debug and fix this.

This is because by default devstack only installs the packages needed to
actually run OpenStack.  For unit test deps, you need the
INSTALL_TESTONLY_PACKAGES variable set to true in your localrc.  I've
advocated to get it enabled by default in the past but was told that
running unit tests on a devstack vm isn't the recommended workflow so
they don't want to do that.

 
 Maybe it's a good idea to put a comment in requirements.txt stating that the
 following C libs need to be installed for the venv to work smoothly. That
 would help too for the short term.

It's worth noting that you would need multiple entries for each lib
since every distro tends to call them something different.

 
 On Sun, Sep 21, 2014 at 12:12 PM, Valeriy Ponomaryov 
 vponomar...@mirantis.com wrote:
 
 The MySQL-python dep is already in the test-requirements.txt file. As Andreas
 said, the second one, mysql-devel, is a C lib and cannot be installed via pip.
 So the project itself, like all projects in OpenStack, cannot install it.

 C lib deps are handled by Devstack, if it is used. See:
 https://github.com/openstack-dev/devstack/tree/master/files/rpms

 https://github.com/openstack-dev/devstack/blob/2f27a0ed3c609bfcd6344a55c121e56d5569afc9/functions-common#L895

 Yes, Manila could have its files in the same way in
 https://github.com/openstack/manila/tree/master/contrib/devstack , but
 this lib already exists in the deps for other projects. So I guess you used
 Manila's run_tests.sh file on a host without a devstack installation; in that
 case all other projects would fail in the same way.

 On Sun, Sep 21, 2014 at 2:54 AM, Alex Leonhardt aleonhardt...@gmail.com
 wrote:

 And yet it's a dependency so I'm with Deepak and it should at least be
 mentioned in the prerequisites on a webpage somewhere .. :) I might even
 try and update/add that myself as it caught me out a few times too..

 Alex
  On 20 Sep 2014 12:44, Andreas Jaeger a...@suse.com wrote:

 On 09/20/2014 09:34 AM, Deepak Shetty wrote:
 thanks , that worked.
 Any idea why it doesn't install it automatically and/or it isn't
 present
 in requirements.txt ?
 I thought that was the purpose of requirements.txt ?

 AFAIU requirements.txt has only Python dependencies, while
 mysql-devel is a C development package.

 Andreas
 --
  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
   SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
 GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kind Regards
 Valeriy Ponomaryov
 www.mirantis.com
 vponomar...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] MySQL performance and Mongodb backend maturity question

2014-09-25 Thread Clint Byrum
Excerpts from Daniele Venzano's message of 2014-09-25 02:40:11 -0700:
 On 09/25/14 10:12, Qiming Teng wrote:
  Yes, just about 3 VMs running on two hosts, for at most 3 weeks. This 
  is leading me to another question -- any best practices/tools to 
  retire the old data on a regular basis? Regards, Qiming
 
 There is a tool: ceilometer-expirer
 
 I tried to use it on a mysql database, since I had the same table size 
 problem as you and it made the machine hit swap. I think it tries to 
 load the whole table in memory.
 Just to see if it would eventually finish, I let it run for 1 week 
 before throwing away the whole database and moving on.
 
 Now I use Ceilometer's pipeline to forward events to elasticsearch via 
 udp + logstash and do not use Ceilometer's DB or API at all.
 

Interesting, this almost sounds like what should be the default
configuration honestly.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Fox, Kevin M
Doesn't nova with a docker driver and heat autoscaling handle cases 2 and 3 for 
control jobs? Has anyone tried yet?

Thanks,
Kevin

From: Angus Lees [g...@inodes.org]
Sent: Wednesday, September 24, 2014 6:33 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and
Manage OpenStack using Kubernetes and Docker

On Wed, 24 Sep 2014 10:31:19 PM Alan Kavanagh wrote:
 Steven
 I have to ask what is the motivation and benefits we get from integrating
 Kubernetes into Openstack? Would be really useful if you can elaborate and
 outline some use cases and benefits Openstack and Kubernetes can gain.

I've no idea what Steven's motivation is, but here's my reasoning for going
down a similar path:

OpenStack deployment is basically two types of software:
1. Control jobs, various API servers, etc that are basically just regular
python wsgi apps.
2. Compute/network node agents that run under hypervisors, configure host
networking, etc.

The 2nd group probably wants to run on baremetal and is mostly identical on
all such machines, but the 1st group wants higher level PaaS type things.

In particular, for the control jobs you want:

- Something to deploy the code (docker / distro packages / pip install / etc)
- Something to choose where to deploy
- Something to respond to machine outages / autoscaling and re-deploy as
necessary

These last few don't have strong existing options within OpenStack yet (as far
as I'm aware).  Having explored a few different approaches recently, kubernetes
is certainly not the only option - but is a reasonable contender here.


So: I certainly don't see kubernetes as competing with anything in OpenStack -
but as filling a gap in job management with something that has a fairly
lightweight config syntax and is relatively simple to deploy on VMs or
baremetal.  I also think the phrase integrating kubernetes into OpenStack is
overstating the task at hand.

The primary downside I've discovered so far seems to be that kubernetes is
very young and still has an awkward cli, a few easy to encounter bugs, etc.

 - Gus

 From: Steven Dake [mailto:sd...@redhat.com]
 Sent: September-24-14 7:41 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and
 Manage OpenStack using Kubernetes and Docker

 On 09/24/2014 10:12 AM, Joshua Harlow wrote:
 Sounds like an interesting project/goal and will be interesting to see where
 this goes.

 A few questions/comments:

 How much golang will people be exposed to with this addition?

 Joshua,

 I expect very little.  We intend to use Kubernetes as an upstream project,
 rather than something we contribute to directly.


 Seeing that this could be the first 'go' using project it will be
 interesting to see where this goes (since afaik none of the infra support
 exists, and people aren't likely to familiar with go vs python in the
 openstack community overall).

 What's your thoughts on how this will affect the existing openstack
 container effort?

 I don't think it will have any impact on the existing Magnum project.  At
 some point if Magnum implements scheduling of docker containers, we may add
 support for Magnum in addition to Kubernetes, but it is impossible to tell
 at this point.  I don't want to derail either project by trying to force
 them together unnaturally so early.


 I see that kubernetes isn't exactly a small project either (~90k LOC, for
 those who use these types of metrics), so I wonder how that will affect
 people getting involved here, aka, who has the resources/operators/other...
 available to actually setup/deploy/run kubernetes, when operators are
 likely still just struggling to run openstack itself (at least operators
 are getting used to the openstack warts, a new set of kubernetes warts
 could not be so helpful).

 Yup it is fairly large in size.  Time will tell if this approach will work.

 This is an experiment as Robert and others on the thread have pointed out
 :).

 Regards
 -steve


 On Sep 23, 2014, at 3:40 PM, Steven Dake
 sd...@redhat.commailto:sd...@redhat.com wrote:


 Hi folks,

 I'm pleased to announce the development of a new project Kolla which is
 Greek for glue :). Kolla has a goal of providing an implementation that
 deploys OpenStack using Kubernetes and Docker. This project will begin as a
 StackForge project separate from the TripleO/Deployment program code base.
 Our long term goal is to merge into the TripleO/Deployment program rather
 than create a new program.



 Docker is a container technology for delivering hermetically sealed
 applications and has about 620 technical contributors [1]. We intend to
 produce docker images for a variety of platforms beginning with Fedora 20.
 We are completely open to any distro support, so if folks want to add a new
 Linux distribution to Kolla please feel free to submit patches :)



 Kubernetes 

Re: [openstack-dev] [oslo] adding James Carey to oslo-i18n-core

2014-09-25 Thread Ben Nemec
+1.  He's on the short list of people who actually understand how all
that lazy translation stuff works. :-)

-Ben

On 09/23/2014 04:03 PM, Doug Hellmann wrote:
 James Carey (jecarey) from IBM has done the 3rd most reviews of oslo.i18n 
 this cycle [1]. His feedback has been useful, and I think he would be a good 
 addition to the team for maintaining oslo.i18n.
 
 Let me know what you think, please.
 
 Doug
 
 [1] http://stackalytics.com/?module=oslo.i18n
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MagnetoDB] IRC weekly meeting minutes 25-09-2014

2014-09-25 Thread Ilya Sviridov
Hello team,

Thank you for attending the meeting today.

I'm putting the meeting minutes and links to the logs here [1] [2]

Please note that we are having the meeting in #magnetodb because of a
schedule conflict.
The meeting agenda is free to be updated [3]

[1]
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html
[2]
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.txt
[3] https://wiki.openstack.org/wiki/MagnetoDB/WeeklyMeetingAgenda

Meeting summary

   1.
  1. from last meeting
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-18-13.01.html
   (isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-12,
  13:02:11)

   2. *Go through action items* (isviridov
   
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-14,
   13:02:35)
  1.
  https://wiki.openstack.org/wiki/MagnetoDB/specs/async-schema-operations
   (ikhudoshyn
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-19,
  13:03:38)
  2. https://review.openstack.org/#/c/122404/ (ikhudoshyn
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-31,
  13:07:23)
  3. ACTION: provide numbers about performance impact from big PKI
  token in ML (isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-38,
  13:09:01)

   3. *Asynchronous table creation and removal* (isviridov
   
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-39,
   13:09:25)
   4. *Monitoring API* (isviridov
   
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-58,
   13:16:26)
  1. https://blueprints.launchpad.net/magnetodb/+spec/monitoring-api (
  isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-62,
  13:18:28)

   5. *Light weight session for authorization* (isviridov
   
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-78,
   13:24:48)
   6. *Review tempest tests and move to stable test dir* (isviridov
   
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-105,
   13:33:49)
  1.
  https://blueprints.launchpad.net/magnetodb/+spec/review-tempest-tests
  (isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-109,
  13:35:09)

   7. *Monitoring - healthcheck http request* (isviridov
   
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-133,
   13:41:16)
  1. AGREED: file missed tests as bugs (isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-138,
  13:42:02)
  2. ACTION: aostapenko write a spec about healthcheck (isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-148,
  13:46:00)

   8. *Log management* (isviridov
   
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-149,
   13:46:16)
  1. https://blueprints.launchpad.net/magnetodb/+spec/log-rotating (
  isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-151,
  13:46:23)
  2. AGREED: put log rotation configs in mdb config. No separate
  logging config (isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-173,
  13:54:35)

   9. *Open discussion* (isviridov
   
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-176,
   13:56:05)
  1. https://blueprints.launchpad.net/magnetodb/+spec/oslo-notify (
  ikhudoshyn
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-187,
  14:00:13)
  2. ACTION: ikhudoshyn write a spec for migration to
  oslo.messaging.notify (isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-190,
  14:01:04)
  3. ACTION: isviridov look how to created magentodb-spec repo (
  isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-197,
  14:02:12)
  4. ACTION: ajayaa write spec for RBAC (isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html#l-205,
  14:03:43)



Meeting ended at 14:07:00 UTC (full logs
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html
).

Action items

   1. provide numbers about performance impact from big PKI token in ML
   2. 

Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Fox, Kevin M
Then you still need all the kubernetes api/daemons for the master and slaves. 
If you ignore the complexity this adds, then it seems simpler than just using 
openstack for it. But really, it still is an under/overcloud kind of setup, 
you're just using kubernetes for the undercloud, and openstack for the overcloud?

Thanks,
Kevin

From: Steven Dake [sd...@redhat.com]
Sent: Wednesday, September 24, 2014 8:02 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and 
Manage OpenStack using Kubernetes and Docker

On 09/24/2014 03:31 PM, Alan Kavanagh wrote:
Steven
I have to ask what is the motivation and benefits we get from integrating 
Kubernetes into Openstack? Would be really useful if you can elaborate and 
outline some use cases and benefits Openstack and Kubernetes can gain.

/Alan

Alan,

I am either unaware or ignorant of another Docker scheduler that is currently 
available that has a big (100+ folks) development community.  Kubernetes meets 
these requirements and is my main motivation for using it to schedule Docker 
containers.  There are other ways to skin this cat - The TripleO folks wanted 
at one point to deploy nova with the nova docker VM manager to do such a thing. 
 This model seemed a little clunky to me since it isn't purpose built around 
containers.

As far as use cases go, the main use case is to run a specific Docker container 
on a specific Kubernetes minion bare metal host.  These docker containers are 
then composed of the various config tools and services for each detailed 
service in OpenStack.  For example, mysql would be a container, and tools to 
configure the mysql service would exist in the container.  Kubernetes would 
pass config options for the mysql database prior to scheduling and once 
scheduled, Kubernetes would be responsible for connecting the various 
containers together.

Regards
-steve



From: Steven Dake [mailto:sd...@redhat.com]
Sent: September-24-14 7:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and 
Manage OpenStack using Kubernetes and Docker

On 09/24/2014 10:12 AM, Joshua Harlow wrote:
Sounds like an interesting project/goal and will be interesting to see where 
this goes.

A few questions/comments:

How much golang will people be exposed to with this addition?

Joshua,

I expect very little.  We intend to use Kubernetes as an upstream project, 
rather than something we contribute to directly.


Seeing that this could be the first 'go' using project it will be interesting 
to see where this goes (since afaik none of the infra support exists, and 
people aren't likely to familiar with go vs python in the openstack community 
overall).

What's your thoughts on how this will affect the existing openstack container 
effort?

I don't think it will have any impact on the existing Magnum project.  At some 
point if Magnum implements scheduling of docker containers, we may add support 
for Magnum in addition to Kubernetes, but it is impossible to tell at this 
point.  I don't want to derail either project by trying to force them together 
unnaturally so early.


I see that kubernetes isn't exactly a small project either (~90k LOC, for those 
who use these types of metrics), so I wonder how that will affect people 
getting involved here, aka, who has the resources/operators/other... available 
to actually setup/deploy/run kubernetes, when operators are likely still just 
struggling to run openstack itself (at least operators are getting used to the 
openstack warts, a new set of kubernetes warts could not be so helpful).

Yup it is fairly large in size.  Time will tell if this approach will work.

This is an experiment as Robert and others on the thread have pointed out :).

Regards
-steve


On Sep 23, 2014, at 3:40 PM, Steven Dake 
sd...@redhat.commailto:sd...@redhat.com wrote:


Hi folks,

I'm pleased to announce the development of a new project Kolla which is Greek 
for glue :). Kolla has a goal of providing an implementation that deploys 
OpenStack using Kubernetes and Docker. This project will begin as a StackForge 
project separate from the TripleO/Deployment program code base. Our long term 
goal is to merge into the TripleO/Deployment program rather than create a new 
program.



Docker is a container technology for delivering hermetically sealed 
applications and has about 620 technical contributors [1]. We intend to produce 
docker images for a variety of platforms beginning with Fedora 20. We are 
completely open to any distro support, so if folks want to add a new Linux 
distribution to Kolla please feel free to submit patches :)



Kubernetes at the most basic level is a Docker scheduler produced by and used 
within Google [2]. Kubernetes has in excess of 100 technical contributors. 
Kubernetes is more than just a scheduler; it provides 

Re: [openstack-dev] [Neutron][LBaaS] Migrations in feature branch

2014-09-25 Thread Mike Bayer

If Neutron is ready for more Alembic features I could in theory begin work on 
https://bitbucket.org/zzzeek/alembic/issue/167/multiple-heads-branch-resolution-support
 .Folks should ping me on IRC regarding this.


On Sep 24, 2014, at 5:30 AM, Salvatore Orlando sorla...@nicira.com wrote:

 Relying again on automatic schema generation could be error-prone. It can 
 only be enabled globally, and does not work when models are altered if the 
 table for the model being altered already exists in the DB schema.
 
 I don't think it would be a big problem to put these migrations in the main 
 sequence once the feature branch is merged back into master.
 Alembic unfortunately does not yet do a great job in maintaining multiple 
 timelines. Even if only a single migration branch is supported, in theory one 
 could have a separate alembic environment for the feature branch, but that in 
 my opinion just creates the additional problem of handling a new environment, 
 and does not solve the initial problem of re-sequencing migrations.
 
 Re-sequencing at merge time is not going to be a problem in my opinion. 
 However, keeping all the lbaas migrations chained together will help. You can 
 also do as Henry suggests, but that option has the extra (possibly 
 negligible) cost of squashing all migrations for the whole feature branch at 
 merge time.
 
 As an example:
 
 MASTER  --- X - X+1 - ... - X+n
 \
 FEATURE  \- Y - Y+1 - ... - Y+m
 
 At every rebase of rebase the migration timeline for the feature branch could 
 be rearranged as follows:
 
 MASTER  --- X - X+1 - ... - X+n ---
  \
 FEATURE   \- Y=X+n - Y+1 - ... - Y+m = X+n+m
 
 And therefore when the final merge in master comes, all the migrations in the 
 feature branch can be inserted in sequence on top of master's HEAD.
 I have not tried this, but I reckon that conceptually it should work.
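 
 To make the re-sequencing concrete: an alembic migration in the feature
 branch is just a Python module, and moving the branch on top of master's
 HEAD only means re-pointing the first feature migration's down_revision.
 A rough sketch with made-up revision ids (not real lbaas migrations):
 
     # illustrative only: first migration of the feature branch
     revision = 'y1_lbaas_v2'
     down_revision = 'xn_master_head'  # re-aimed at X+n on rebase/merge
 
     from alembic import op
     import sqlalchemy as sa
 
     def upgrade():
         # example schema change carried by the feature branch
         op.add_column('lbaas_pools',
                       sa.Column('new_attr', sa.String(36), nullable=True))
 
     def downgrade():
         op.drop_column('lbaas_pools', 'new_attr')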
 
 Salvatore
 
 
 On 24 September 2014 08:16, Kevin Benton blak...@gmail.com wrote:
 If these are just feature branches and they aren't intended to be
 deployed for long life cycles, why don't we just skip the db migration
 and enable auto-schema generation inside of the feature branch? Then a
 migration can be created once it's time to actually merge into master.
 
 On Tue, Sep 23, 2014 at 9:37 PM, Brandon Logan
 brandon.lo...@rackspace.com wrote:
  Well the problem with resequencing on a merge is that a code change for
  the first migration must be added first and merged into the feature
  branch before the merge is done.  Obviously this takes review time
  unless someone of authority pushes it through.  We'll run into this same
  problem on rebases too if we care about keeping the migration sequenced
  correctly after rebases (which we don't have to, only on a merge do we
  really need to care).  If we did what Henry suggested in that we only
  keep one migration file for the entire feature, we'd still have to do
  the same thing.  I'm not sure that buys us much other than keeping the
  feature's migration all in one file.
 
  I'd also say that code in master should definitely NOT be dependent on
  code in a feature branch, much less a migration.  This was a requirement
  of the incubator as well.
 
  So yeah this sounds like a problem but one that really only needs to be
  solved at merge time.  There will definitely need to be coordination
  with the cores when merge time comes.  Then again, I'd be a bit worried
  if there wasn't since a feature branch being merged into master is a
  huge deal.  Unless I am missing something I don't see this as a big
  problem, but I am highly capable of being blind to many things.
 
  Thanks,
  Brandon
 
 
  On Wed, 2014-09-24 at 01:38 +, Doug Wiegley wrote:
  Hi Eugene,
 
 
  Just my take, but I assumed that we’d re-sequence the migrations at
  merge time, if needed.  Feature branches aren’t meant to be optional
  add-on components (I think), nor are they meant to live that long.
   Just a place to collaborate and work on a large chunk of code until
  it’s ready to merge.  Though exactly what those merge criteria are is
  also yet to be determined.
 
 
  I understand that you’re raising a general problem, but given lbaas
  v2’s state, I don’t expect this issue to cause many practical problems
  in this particular case.
 
 
  This is also an issue for the incubator, whenever it rolls around.
 
 
  Thanks,
  doug
 
 
 
 
  On September 23, 2014 at 6:59:44 PM, Eugene Nikanorov
  (enikano...@mirantis.com) wrote:
 
  
   Hi neutron and lbaas folks.
  
  
   Recently I briefly looked at one of lbaas proposed into feature
   branch.
   I see migration IDs there are lined into a general migration
   sequence.
  
  
   I think something is definitely wrong with this approach as
   feature-branch components are optional, and also the master branch can't
   depend on revision IDs in
   the feature branch (as we moved to unconditional migrations)
  
  
   So far the solution to 

Re: [openstack-dev] [nova] Choice of Series goal of a blueprint

2014-09-25 Thread Joe Gordon
On Thu, Sep 25, 2014 at 7:22 AM, Angelo Matarazzo 
angelo.matara...@dektech.com.au wrote:

 Hi all,
 Can I create a blueprint and choose a previous series goal (e.g. Icehouse)?
 I think it may be possible, but no reviewer or driver will be
 interested in it.
 Right?


I am not sure what the 'why' is here, but Icehouse is under stable
maintenance mode so it is not accepting new features.


 Best regards,
 Angelo

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] - do we need .start and .end notifications in all cases ?

2014-09-25 Thread Day, Phil
  Hi Jay,
 
  So just to be clear, are you saying that we should generate 2
  notification messages on Rabbit for every DB update?   That feels
  like a big overkill for me.   If I follow that logic then the current
  state transition notifications should also be changed to Starting to
  update task state / finished updating task state  - which seems just
  daft and confusing logging with notifications.
  Sandy's answer where start/end are used if there is a significant
  amount of work between the two and/or the transaction spans multiple
  hosts makes a lot more sense to me.   Bracketing a single DB call
  with two notification messages rather than just a single one on
  success to show that something changed would seem to me to be much
  more in keeping with the concept of notifying on key events.
 
 I can see your point, Phil. But what about when the set of DB calls takes a
 not-insignificant amount of time? Would the event be considered significant
 then? If so, sending only the I completed creating this thing notification
 message might mask the fact that the total amount of time spent creating
 the thing was significant.

Sure, I think there's a judgment call to be made on a case by case basis on 
this.   In general though I'd say it's tasks that do more than just update the 
database that need to provide this kind of timing data.   Simple object 
creation / db table inserts don't really feel like they need to be individually 
timed by pairs of messages - if there is value in providing the creation time 
that could just be part of the payload of the single message, rather than 
doubling up on messages.
 
 
 That's why I think it's safer to always wrap tasks -- a series of actions that
 *do* one or more things -- with start/end/abort context managers that send
 the appropriate notification messages.
 
 Some notifications are for events that aren't tasks, and I don't think those
 need to follow start/end/abort semantics. Your example of an instance state
 change is not a task, and therefore would not need a start/end/abort
 notification manager. However, the user action of say, Reboot this server
 *would* have a start/end/abort wrapper for the REBOOT_SERVER event.
 In between the start and end notifications for this REBOOT_SERVER event,
 there may indeed be multiple SERVER_STATE_CHANGED notification
 messages sent, but those would not have start/end/abort wrappers around
 them.
 
 Make a bit more sense?
 -jay
 
Sure - it sounds like we're agreed in principle then that not all operations 
need start/end/abort messages, only those that are a series of operations.

So in that context the server group operations to me still look like they fall 
into the first group.
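
As a side note, the kind of start/end/abort wrapper being discussed could be
as small as a context manager. A sketch only -- the notifier calls and event
names here are illustrative, not Nova's actual code:

    import contextlib

    @contextlib.contextmanager
    def notify_task(notifier, ctxt, event, payload):
        # Emits <event>.start / <event>.end, or <event>.abort on failure.
        notifier.info(ctxt, '%s.start' % event, payload)
        try:
            yield
        except Exception:
            notifier.error(ctxt, '%s.abort' % event, payload)
            raise
        notifier.info(ctxt, '%s.end' % event, payload)

    # hypothetical usage:
    # with notify_task(notifier, ctxt, 'compute.instance.reboot', {'uuid': uuid}):
    #     do_reboot(instance)

Simple DB-only updates would then just send their single notification
directly, without the wrapper.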

Phil



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Fox, Kevin M
Why can't you manage baremetal and containers from a single host with 
nova/neutron? Is this a current missing feature, or have the development teams 
said they will never implement it?

Thanks,
Kevin

From: Clint Byrum [cl...@fewbar.com]
Sent: Wednesday, September 24, 2014 9:13 PM
To: openstack-dev
Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and
Manage OpenStack using Kubernetes and Docker

Excerpts from Mike Spreitzer's message of 2014-09-24 20:49:20 -0700:
 Steven Dake sd...@redhat.com wrote on 09/24/2014 11:02:49 PM:

  On 09/24/2014 03:31 PM, Alan Kavanagh wrote:
  Steven
  I have to ask what is the motivation and benefits we get from
  integrating Kubernetes into Openstack? Would be really useful if you
  can elaborate and outline some use cases and benefits Openstack and
  Kubernetes can gain.
 
  /Alan
 
  Alan,
 
  I am either unaware or ignorant of another Docker scheduler that is
  currently available that has a big (100+ folks) development
  community.  Kubernetes meets these requirements and is my main
  motivation for using it to schedule Docker containers.  There are
  other ways to skin this cat - The TripleO folks wanted at one point
  to deploy nova with the nova docker VM manager to do such a thing.
  This model seemed a little clunky to me since it isn't purpose built
  around containers.

 Does TripleO require container functionality that is not available
 when using the Docker driver for Nova?

 As far as I can tell, the quantitative handling of capacities and
 demands in Kubernetes is much inferior to what Nova does today.


Yes, TripleO needs to manage baremetal and containers from a single
host. Nova and Neutron do not offer this as a feature unfortunately.

  As far as use cases go, the main use case is to run a specific
  Docker container on a specific Kubernetes minion bare metal host.

 If TripleO already knows it wants to run a specific Docker image
 on a specific host then TripleO does not need a scheduler.


TripleO does not ever specify destination host, because Nova does not
allow that, nor should it. It does want to isolate failure domains so
that all three Galera nodes aren't on the same PDU, but we've not really
gotten to the point where we can do that yet.

  These docker containers are then composed of the various config
  tools and services for each detailed service in OpenStack.  For
  example, mysql would be a container, and tools to configure the
  mysql service would exist in the container.  Kubernetes would pass
  config options for the mysql database prior to scheduling

 I am not sure what is meant here by pass config options nor how it
 would be done prior to scheduling; can you please clarify?
 I do not imagine Kubernetes would *choose* the config values,
 K8s does not know anything about configuring OpenStack.
 Before scheduling, there is no running container to pass
 anything to.


Docker containers tend to use environment variables passed to the initial
command to configure things. The Kubernetes API allows setting these
environment variables on creation of the container.

and once
  scheduled, Kubernetes would be responsible for connecting the
  various containers together.

 Kubernetes has a limited role in connecting containers together.
 K8s creates the networking environment in which the containers
 *can* communicate, and passes environment variables into containers
 telling them from what protocol://host:port/ to import each imported
 endpoint.  Kubernetes creates a universal reverse proxy on each
 minion, to provide endpoints that do not vary as the servers
 move around.
 It is up to stuff outside Kubernetes to decide
 what should be connected to what, and it is up to the containers
 to read the environment variables and actually connect.


This is a nice simple interface though, and I like that it is narrowly
defined, not trying to be anything that containers want to share with
other containers.
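
As a rough illustration of how narrow that interface is, a containerised
service only has to read its wiring from the environment at startup. The
variable names below are assumptions for the example, not a fixed contract:

    import os

    # Kubernetes-style service discovery: host/port and credentials arrive
    # as environment variables set when the container is created.
    db_host = os.environ.get('MARIADB_SERVICE_HOST', '127.0.0.1')
    db_port = os.environ.get('MARIADB_SERVICE_PORT', '3306')
    db_pass = os.environ.get('DB_ROOT_PASSWORD', '')

    with open('/etc/mysql-client.cnf', 'w') as f:
        f.write('[client]\nhost=%s\nport=%s\npassword=%s\n'
                % (db_host, db_port, db_pass))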

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo Blueprints and Specs

2014-09-25 Thread John Garbutt
On 25 September 2014 14:10, Daniel P. Berrange berra...@redhat.com wrote:
 The proposal is to keep kilo-1, kilo-2 much the same as juno. Except,
 we work harder on getting people to buy into the priorities that are
 set, and actively provoke more debate on their correctness, and we
 reduce the bar for what needs a blueprint.

 We can't have 50 high priority blueprints, it doesn't mean anything,
 right? We need to trim the list down to a manageable number, based on
 the agreed project priorities. Thats all I mean by slots / runway at
 this point.

 I would suggest we don't try to rank high/medium/low as that is
 too coarse, but rather just an ordered priority list. Then you
 would not be in the situation of having 50 high blueprints. We
 would instead naturally just start at the highest priority and
 work downwards.

OK. I guess I was fixating on fitting things into Launchpad.

I guess having both might be what happens.

  The runways
  idea is just going to make me less efficient at reviewing. So I'm
  very much against it as an idea.

 This proposal is different to the runways idea, although it certainly
 borrows aspects of it. I just don't understand how this proposal has
 all the same issues?


 The key to the kilo-3 proposal is getting better at saying no,
 this blueprint isn't very likely to make kilo.

 If we focus on a smaller number of blueprints to review, we should be
 able to get a greater percentage of those fully completed.

 I am just using slots/runway-like ideas to help pick the high priority
 blueprints we should concentrate on, during that final milestone.
 Rather than keeping the distraction of 15 or so low priority
 blueprints, with those poor submitters jamming up the check queue, and
 constantly rebasing, and having to deal with the odd stray review
 comment they might get lucky enough to get.

 Maybe you think this bit is overkill, and thats fine. But I still
 think we need a way to stop wasting so much of peoples time on things
 that will not make it.

 The high priority blueprints are going to end up being mostly the big
 scope changes which take a lot of time to review and probably go through
 many iterations. The low priority blueprints are going to end up being
 the small things that don't consume significant resource to review and
 are easy to deal with in the time we're waiting for the big items to
 go through rebases or whatever. So what I don't like about the runways
 slots idea is that it removes the ability to be agile and take the initiative
 to review and approve the low priority stuff that would otherwise never
 make it through.

The idea is more around concentrating on the *same* list of things.

Certainly we need to avoid the priority inversion of concentrating
only on the big things.

It's also why I suggested that for kilo-1 and kilo-2, we allow any
blueprint to merge, and only restrict it to a specific list in kilo-3,
the idea being to maximise the number of things that get completed,
rather than merging some half blueprints, but not getting to the good
bits.


Anyways, it seems like this doesn't hit a middle ground that would
gain pre-summit approval. Or at least needs some online chat time to
work out something.


Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Concurrent update issue in Glance v2 API

2014-09-25 Thread Alexander Tivelkov
Hi folks!

There is a serious issue [0] in the v2 API of Glance which may lead to race
conditions during the concurrent updates of Images' metadata.
It can be fixed in a number of ways, but we need to have some solution
soon, as we are approaching rc1 release, and the race in image updates
looks like a serious problem which has to be fixed in J, imho.

A quick description of the problem:
When the image-update is called (PUT /v2/images/%image_id%/) we get the
image from the repository, which fetches a record from the DB and forms its
content into an Image Domain Object ([1]), which is then modified (has its
attributes updated) and passed through all the layers of our domain model.
This object is not managed by the SQLAlchemy's session, so the
modifications of its attributes are not tracked anywhere.
When all the processing is done and the updated object is passed back to
the DB repository, it serializes all the attributes of the image into a
dict ([2]) and then this dict is used to create an UPDATE query for the
database.
As this serialization includes all the attributes of the object (rather than
only the modified ones), the update query updates all the columns of the
appropriate database row, putting there the values which were originally
fetched when the processing began. This may obviously overwrite values
which could have been written there by some other concurrent request.

There are two possible solutions to fix this problem.
The first, known as optimistic concurrency control, checks whether the
appropriate database row was modified between the data fetch and the data
update. In case of such a modification the update operation reports a
conflict and fails (and may be retried based on the updated data if
needed). Modification detection is usually based on timestamps, i.e. the
query updates the row in the database only if the timestamp there matches the
timestamp of the initially fetched data.
I've introduced this approach in this patch [3], however it has a major
flaw: I used the 'updated_at' attribute as a timestamp, and this attribute
is mapped to a DateTime-typed column. In many RDBMSs (including
MySQL 5.6.4) this column stores values with per-second precision and does
not store fractions of seconds. So, even if patch [3] is merged, the race
conditions may still occur if there are many updates happening at the same
moment of time.
A better approach would be to add a new column with int (or longint) type
to store millisecond-based (or even microsecond-based) timestamps instead
of (or in addition to) the date-time based updated_at. But a data model
modification will require adding a new migration, etc., which is a major step
and I don't know if we want to make it so close to the release.
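
For illustration, the compare-and-swap at the heart of the first approach is
just an UPDATE whose WHERE clause re-checks the value read earlier. A sketch
only -- the model and exception handling are illustrative, not the actual
Glance code:

    def update_if_unchanged(session, image_model, image_id,
                            fetched_updated_at, changed_values):
        # The UPDATE matches only if updated_at still holds the value seen
        # when processing began; 0 rows affected means a concurrent writer won.
        rows = (session.query(image_model)
                .filter_by(id=image_id, updated_at=fetched_updated_at)
                .update(changed_values))
        if rows == 0:
            raise RuntimeError('image %s was modified concurrently' % image_id)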

The second solution is to keep track of the changed attributes and
properties of the image and not include the unchanged ones in the
UPDATE query, so nothing gets overwritten. This dramatically reduces the
threat of races, as the updates of different properties do not interfere
with each other. Also this is a useful change regardless of the race
itself: being able to differentiate between changed and unchanged
attributes may have its own value for other purposes; the DB performance
will also be better when updating just the needed fields instead of all of
them.
I've submitted a patch with this approach as well [4], but it still breaks
some unit tests and I am working to fix them right now.
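
A minimal way to picture the second approach (class and attribute names are
invented for the example):

    class ChangeTrackingImage(object):
        """Sketch: remember which attributes were assigned after load."""

        def __init__(self, **attrs):
            self.__dict__['_changed'] = set()
            self.__dict__.update(attrs)

        def __setattr__(self, name, value):
            self._changed.add(name)
            self.__dict__[name] = value

        def changed_values(self):
            # Only these keys would be serialized into the UPDATE query.
            return dict((name, self.__dict__[name]) for name in self._changed)

    # img = ChangeTrackingImage(id='...', name='foo', visibility='private')
    # img.name = 'bar'
    # img.changed_values()  ->  {'name': 'bar'}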

So, we need to decide which of these approaches (or their combination) to
take: we may stick with optimistic locking on the timestamp (and then decide if
we are ok with per-second timestamps or need to add a new column),
choose to track the state of attributes, or combine the two. So, could you
folks please review patches [3] and [4] and come up with some ideas on them?

Also, probably we should consider targeting [0] to juno-rc1 milestone to
make sure that this bug is fixed in J. Do you guys think it is possible at
this stage?

Thanks!


[0] https://bugs.launchpad.net/glance/+bug/1371728
[1]
https://github.com/openstack/glance/blob/master/glance/db/__init__.py#L74
[2]
https://github.com/openstack/glance/blob/master/glance/db/__init__.py#L169
[3] https://review.openstack.org/#/c/122814/
[4] https://review.openstack.org/#/c/123722/

--
Regards,
Alexander Tivelkov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Get rid of the sample config file

2014-09-25 Thread Lucas Alvares Gomes
Hi,

Today we have hit the problem of having an outdated sample
configuration file again[1]. The problem of the sample generation is
that it picks up configuration from other projects/libs
(keystoneclient in that case) and this breaks the Ironic gate without
us doing anything.

So, what do you guys think about removing the test that compares the
configuration files so that it no longer gates [2]?

We already have a tox command to generate the sample configuration
file [3], so folks that need it can generate it locally.

Does anyone disagree?

[1] https://review.openstack.org/#/c/124090/
[2] https://github.com/openstack/ironic/blob/master/tox.ini#L23
[3] https://github.com/openstack/ironic/blob/master/tox.ini#L32-L34

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] PTL Candidacy

2014-09-25 Thread Douglas Mendizabal
Hi OpenStack-dev,

I would like to put my name in the hat for PTL of the Key Management Service
Program, which includes Barbican, python-barbicanclient, Kite, and
python-kiteclient.

I’ve had the pleasure of being a part of the Barbican team since the very
beginning of the project.  During the last year and a half I’ve helped
Barbican grow from a project that only a couple of Rackers were hacking on,
to an Incubated OpenStack project that continues to gain adoption in the
community, and I would like to see that momentum continue through the Kilo
cycle.

I’ve been a big fan and supporter of Jarret Raim’s vision for Barbican, and
it would be an honor for me to continue his work as the new PTL for the Key
Management Program.  One of my goals for the Kilo cycle is to move Barbican
through the Integration process by working with other OpenStack projects to
enable the security minded use-cases that are now possible with Barbican.
Additionally, I would like to continue to focus on the quality of Barbican
code by leveraging the knowledge and lessons learned from deploying Barbican
at Rackspace.

Thank you,
Douglas Mendizábal


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5  0CC9 AD14 1F30 2D58 923C




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Clint Byrum
First, Kevin, please try to figure out a way to reply in-line when you're
replying to multiple levels of threads. Even if you have to copy and
quote it manually.. it took me reading your message and the previous
message 3 times to understand the context.

Second, I don't think anybody minds having a control plane for each
level of control. The point isn't to replace the undercloud, but to
replace nova rebuild as the way you push out new software while
retaining the benefits of the image approach.

Excerpts from Fox, Kevin M's message of 2014-09-25 09:07:10 -0700:
 Then you still need all the kubernetes api/daemons for the master and slaves. 
 If you ignore the complexity this adds, then it seems simpler then just using 
 openstack for it. but really, it still is an under/overcloud kind of setup, 
 your just using kubernetes for the undercloud, and openstack for the 
 overcloud?
 
 Thanks,
 Kevin
 
 From: Steven Dake [sd...@redhat.com]
 Sent: Wednesday, September 24, 2014 8:02 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and 
 Manage OpenStack using Kubernetes and Docker
 
 On 09/24/2014 03:31 PM, Alan Kavanagh wrote:
 Steven
 I have to ask what is the motivation and benefits we get from integrating 
 Kubernetes into Openstack? Would be really useful if you can elaborate and 
 outline some use cases and benefits Openstack and Kubernetes can gain.
 
 /Alan
 
 Alan,
 
 I am either unaware or ignorant of another Docker scheduler that is currently 
 available that has a big (100+ folks) development community.  Kubernetes 
 meets these requirements and is my main motivation for using it to schedule 
 Docker containers.  There are other ways to skin this cat - The TripleO folks 
 wanted at one point to deploy nova with the nova docker VM manager to do such 
 a thing.  This model seemed a little clunky to me since it isn't purpose 
 built around containers.
 
 As far as use cases go, the main use case is to run a specific Docker 
 container on a specific Kubernetes minion bare metal host.  These docker 
 containers are then composed of the various config tools and services for 
 each detailed service in OpenStack.  For example, mysql would be a container, 
 and tools to configure the mysql service would exist in the container.  
 Kubernetes would pass config options for the mysql database prior to 
 scheduling and once scheduled, Kubernetes would be responsible for connecting 
 the various containers together.
 
 Regards
 -steve
 
 
 
 From: Steven Dake [mailto:sd...@redhat.com]
 Sent: September-24-14 7:41 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and 
 Manage OpenStack using Kubernetes and Docker
 
 On 09/24/2014 10:12 AM, Joshua Harlow wrote:
 Sounds like an interesting project/goal and will be interesting to see where 
 this goes.
 
 A few questions/comments:
 
 How much golang will people be exposed to with this addition?
 
 Joshua,
 
 I expect very little.  We intend to use Kubernetes as an upstream project, 
 rather then something we contribute to directly.
 
 
 Seeing that this could be the first 'go' using project it will be interesting 
 to see where this goes (since afaik none of the infra support exists, and 
 people aren't likely to familiar with go vs python in the openstack community 
 overall).
 
 What's your thoughts on how this will affect the existing openstack container 
 effort?
 
 I don't think it will have any impact on the existing Magnum project.  At 
 some point if Magnum implements scheduling of docker containers, we may add 
 support for Magnum in addition to Kubernetes, but it is impossible to tell at 
 this point.  I don't want to derail either project by trying to force them 
 together unnaturally so early.
 
 
 I see that kubernetes isn't exactly a small project either (~90k LOC, for 
 those who use these types of metrics), so I wonder how that will affect 
 people getting involved here, aka, who has the resources/operators/other... 
 available to actually setup/deploy/run kubernetes, when operators are likely 
 still just struggling to run openstack itself (at least operators are getting 
 used to the openstack warts, a new set of kubernetes warts could not be so 
 helpful).
 
 Yup it is fairly large in size.  Time will tell if this approach will work.
 
 This is an experiment as Robert and others on the thread have pointed out :).
 
 Regards
 -steve
 
 
 On Sep 23, 2014, at 3:40 PM, Steven Dake 
 sd...@redhat.commailto:sd...@redhat.com wrote:
 
 
 Hi folks,
 
 I'm pleased to announce the development of a new project Kolla which is Greek 
 for glue :). Kolla has a goal of providing an implementation that deploys 
 OpenStack using Kubernetes and Docker. This project will begin as a 
 StackForge project separate from the TripleO/Deployment program code base. 

Re: [openstack-dev] [Ironic] Get rid of the sample config file

2014-09-25 Thread Dmitry Tantsur

On 09/25/2014 06:23 PM, Lucas Alvares Gomes wrote:

Hi,

Today we have hit the problem of having an outdated sample
configuration file again[1]. The problem of the sample generation is
that it picks up configuration from other projects/libs
(keystoneclient in that case) and this break the Ironic gate without
us doing anything.

So, what you guys think about removing the test that compares the
configuration files and makes it no longer gate[2]?

We already have a tox command to generate the sample configuration
file[3], so folks that needs it can generate it locally.

Does anyone disagree?
It's a pity we won't have a sample config by default, but I guess it can't 
be helped. +1 from me.




[1] https://review.openstack.org/#/c/124090/
[2] https://github.com/openstack/ironic/blob/master/tox.ini#L23
[3] https://github.com/openstack/ironic/blob/master/tox.ini#L32-L34

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Fox, Kevin M
Ah. So the goal of project Kolla, then, is to deploy OpenStack via Docker using 
whatever means works, not to deploy OpenStack using Docker+Kubernetes 
specifically; Kubernetes is just the first stab at an implementation. That 
seems like a much more reasonable goal to me.

Thanks,
Kevin

From: Steven Dake [sd...@redhat.com]
Sent: Thursday, September 25, 2014 8:30 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and 
Manage OpenStack using Kubernetes and Docker

On 09/25/2014 12:01 AM, Clint Byrum wrote:
 Excerpts from Mike Spreitzer's message of 2014-09-24 22:01:54 -0700:
 Clint Byrum cl...@fewbar.com wrote on 09/25/2014 12:13:53 AM:

 Excerpts from Mike Spreitzer's message of 2014-09-24 20:49:20 -0700:
 Steven Dake sd...@redhat.com wrote on 09/24/2014 11:02:49 PM:
 ...
 ...
 Does TripleO require container functionality that is not available
 when using the Docker driver for Nova?

 As far as I can tell, the quantitative handling of capacities and
 demands in Kubernetes is much inferior to what Nova does today.

 Yes, TripleO needs to manage baremetal and containers from a single
 host. Nova and Neutron do not offer this as a feature unfortunately.
 In what sense would Kubernetes manage baremetal (at all)?
 By from a single host do you mean that a client on one host
 can manage remote baremetal and containers?

 I can see that Kubernetes allows a client on one host to get
 containers placed remotely --- but so does the Docker driver for Nova.

 I mean that one box would need to host Ironic, Docker, and Nova, for
 the purposes of deploying OpenStack. We call it the undercloud, or
 sometimes the Deployment Cloud.

 It's not necessarily something that Nova/Neutron cannot do by design,
 but it doesn't work now.

 As far as use cases go, the main use case is to run a specific
 Docker container on a specific Kubernetes minion bare metal host.
 Clint, in another branch of this email tree you referred to
 the VMs that host Kubernetes.  How does that square with
 Steve's text that seems to imply bare metal minions?

 That was in a more general context, discussing using Kubernetes for
 general deployment. Could have just as easily have said hosts,
 machines, or instances.

 I can see that some people have had much more detailed design
 discussions than I have yet found.  Perhaps it would be helpful
 to share an organized presentation of the design thoughts in
 more detail.

 I personally have not had any detailed discussions about this before it
 was announced. I've just dug into the design and some of the code of
 Kubernetes because it is quite interesting to me.

 If TripleO already knows it wants to run a specific Docker image
 on a specific host then TripleO does not need a scheduler.

 TripleO does not ever specify destination host, because Nova does not
 allow that, nor should it. It does want to isolate failure domains so
 that all three Galera nodes aren't on the same PDU, but we've not really
 gotten to the point where we can do that yet.
 So I am still not clear on what Steve is trying to say is the main use
 case.
 Kubernetes is even farther from balancing among PDUs than Nova is.
 At least Nova has a framework in which this issue can be posed and solved.
 I mean a framework that actually can carry the necessary information.
 The Kubernetes scheduler interface is extremely impoverished in the
 information it passes and it uses GO structs --- which, like C structs,
 can not be subclassed.
 I don't think this is totally clear yet. The thing that Steven seems to be
 trying to solve is deploying OpenStack using docker, and Kubernetes may
 very well be a better choice than Nova for this. There are some really
 nice features, and a lot of the benefits we've been citing about image
 based deployments are realized in docker without the pain of a full OS
 image to redeploy all the time.

This is precisely the problem I want to solve.  I looked at Nova+Docker
as a solution, and it seems to me the runway to get to a successful
codebase is longer with more risk.  That is why this is an experiment to
see if a Kubernetes based approach would work.  if at the end of the day
we throw out Kubernetes as a scheduler once we have the other problems
solved and reimplement Kubernetes in Nova+Docker, I think that would be
an acceptable outcome, but not something I want to *start* with but
*finish* with.

Regards
-steve

 The structs vs. classes argument is completely out of line and has
 nothing to do with where Kubernetes might go in the future. It's like
 saying because cars use internal combustion engines they are limited. It
 is just a facet of how it works today.

 Nova's filter scheduler includes a fatal bug that bites when balancing and
 you want more than
 one element per area, see https://bugs.launchpad.net/nova/+bug/1373478.
 However: (a) you might not need more than one element per area 

[openstack-dev] [Barbican] PTL for Barbican

2014-09-25 Thread Jarret Raim
All,


It has been my pleasure to lead the Key Management program and Barbican
over the last year and a half. I'm proud of the work we have done, the
problems we are solving and the community that has developed around the
project. 

It should be no surprise to our community members that my day job has
pulled me further and further away from Barbican on a day to day basis. It
is for this reason that I am planning to step down as PTL for the program.

Thankfully, I've had great support from my team as Douglas Mendizabal has
stepped in to help with many of my PTL duties. He's been running our
weekly meetings, releases and shepherding specs through for a good chunk
of the Juno release cycle. Simply put, without his hard work, we wouldn't
have made the progress we have made for this release.

I encourage all our community members to support Douglas. He has my full
endorsement and I'm confident he is the right person to lead us through
the Kilo cycle, graduation and the first public Cloud deployment of
Barbican at Rackspace.



Thanks,

--
Jarret Raim 
@jarretraim




smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Clint Byrum
Excerpts from Fox, Kevin M's message of 2014-09-25 09:13:26 -0700:
 Why can't you manage baremetal and containers from a single host with 
 nova/neutron? Is this a current missing feature, or has the development teams 
 said they will never implement it?
 

It's a bug.

But it is also a complexity that isn't really handled well in Nova's
current design. Nova wants to send the workload onto the machine, and
that is it. In this case, you have two workloads, one hosted on the other,
and Nova has no model for that. You end up in a weird situation where one
(baremetal) is host for the other (containers), with no real way to separate
the two or identify that dependency.

I think it's worth pursuing in OpenStack, but Steven is solving deployment
of OpenStack today with tools that exist today. I think Kolla may very
well prove that the container approach is too different from Nova's design
and wants to be more separate, at which point our big tent will be in
an interesting position: Do we adopt Kubernetes and put an OpenStack
API on it, or do we re-implement it.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Release criticality of bug 1365606 (get_network_info efficiency for nova-network)

2014-09-25 Thread Vishvananda Ishaya
To explain my rationale:

I think it is totally reasonable to be conservative and wait to merge
the actual fixes to the network calls[1][2] until Kilo and have them
go through the stable/backports process. Unfortunately, due to our object
design, if we block https://review.openstack.org/#/c/119521/ then there
is no way we can backport those fixes, so we are stuck for a full 6
months with abysmal performance. This is why I’ve been pushing to get
that one fix in. That said, I will happily decouple the two patches.

Vish

[1] https://review.openstack.org/#/c/119522/9
[2] https://review.openstack.org/#/c/119523/10

On Sep 24, 2014, at 3:51 PM, Michael Still mi...@stillhq.com wrote:

 Hi,
 
 so, I'd really like to see https://review.openstack.org/#/c/121663/
 merged in rc1. That patch is approved right now.
 
 However, it depends on https://review.openstack.org/#/c/119521/, which
 is not approved. 119521 fixes a problem where we make five RPC calls
 per call to get_network_info, which is an obvious efficiency problem.
 
 Talking to Vish, who is the author of these patches, it sounds like
 the efficiency issue is a pretty big deal for users of nova-network
 and he'd like to see 119521 land in Juno. I think that means he's
 effectively arguing that the bug is release critical.
 
 On the other hand, its only a couple of days until rc1, so we're
 trying to be super conservative about what we land now in Juno.
 
 So... I'd like to see a bit of a conversation on what call we make
 here. Do we land 119521?
 
 Michael
 
 -- 
 Rackspace Australia



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] PTL Candidacy

2014-09-25 Thread Tristan Cacqueray
confirmed

On 25/09/14 12:31 PM, Douglas Mendizabal wrote:
 Hi OpenStack-dev,
 
 I would like to put my name in the hat for PTL of the Key Management Service
 Program, which includes Barbican, python-barbicanclient, Kite, and
 python-kiteclient.
 
 I’ve had the pleasure of being a part of the Barbican team since the very
 beginning of the project.  During the last year and half I’ve helped
 Barbican grow from a project that only a couple of Rackers were hacking on,
 to an Incubated OpenStack project that continues to gain adoption in the
 community, and I would like to see that momentum continue through the Kilo
 cycle.
 
 I’ve been a big fan and supporter of Jarret Raim’s vision for Barbican, and
 it would be an honor for me to continue his work as the new PTL for the Key
 Management Program.  One of my goals for the Kilo cycle is to move Barbican
 through the Integration process by working with other OpenStack projects to
 enable the security minded use-cases that are now possible with Barbican.
 Additionally, I would like to continue to focus on the quality of Barbican
 code by leveraging the knowledge and lessons learned from deploying Barbican
 at Rackspace.
 
 Thank you,
 Douglas Mendizábal
 
 
 Douglas Mendizábal
 IRC: redrobot
 PGP Key: 245C 7B6F 70E9 D8F3 F5D5  0CC9 AD14 1F30 2D58 923C
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 




signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Get rid of the sample config file

2014-09-25 Thread Jay Faulkner

On Sep 25, 2014, at 9:23 AM, Lucas Alvares Gomes lucasago...@gmail.com wrote:

 Hi,
 
 Today we have hit the problem of having an outdated sample
 configuration file again[1]. The problem of the sample generation is
 that it picks up configuration from other projects/libs
 (keystoneclient in that case) and this break the Ironic gate without
 us doing anything.
 
 So, what you guys think about removing the test that compares the
 configuration files and makes it no longer gate[2]?
 
 We already have a tox command to generate the sample configuration
 file[3], so folks that needs it can generate it locally.
 

+1

In a perfect world, one would be generated and put somewhere for easy access 
without a development environment setup. However, I think the impact of having 
this config file break pep8 non-interactively is important enough to do it now 
and worry about generating one for the docs later. :)

-
Jay Faulkner

 Does anyone disagree?
 
 [1] https://review.openstack.org/#/c/124090/
 [2] https://github.com/openstack/ironic/blob/master/tox.ini#L23
 [3] https://github.com/openstack/ironic/blob/master/tox.ini#L32-L34
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [UX] [Heat] [Mistral] Merlin project PoC update: shift from HOT builder to Mistral Workbook builder

2014-09-25 Thread Timur Sufiev
Hello, folks!

Following Drago Rosson's introduction of Barricade.js and our discussion on
the ML about the possibility of using it in Merlin [1], I've decided to change
the plans for the PoC: the goal for Merlin's PoC is now to implement a Mistral
Workbook builder on top of Barricade.js. The reasons for that are:

* To better understand Barricade.js's potential as a data abstraction layer in
Merlin, I need to learn much more about its possibilities and limitations than
simply examining/reviewing its source code allows. The best way to do this is
by building upon it.
* It's becoming too crowded in the HOT builder's sandbox - doing the same
work as Drago currently does [2] seems like a waste of resources to me
(especially if he open-sources his HOT builder someday, just as he did
with Barricade.js).
* Why Mistral and not Murano or Solum? Because Mistral's YAML templates
have a simpler structure than Murano's and are better defined at the
moment than the ones in Solum.

There are already some commits in https://github.com/stackforge/merlin, and
since the client-side app doesn't talk to the Mistral server yet, it is
pretty easy to run (just follow the instructions in README.md) and then see
it in a browser at http://localhost:8080. The UI is not great yet, as the
current focus is exploring the data abstraction layer, i.e. how to exploit
Barricade.js capabilities to reflect all the relations between Mistral's
entities. I hope to finish the minimal set of features in a few weeks - and
will certainly announce it in the ML.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-September/044591.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2014-August/044186.html

-- 
Timur Sufiev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Get rid of the sample config file

2014-09-25 Thread David Shrewsbury
Hi!

On Thu, Sep 25, 2014 at 12:23 PM, Lucas Alvares Gomes lucasago...@gmail.com
 wrote:

 Hi,

 Today we have hit the problem of having an outdated sample
 configuration file again[1]. The problem of the sample generation is
 that it picks up configuration from other projects/libs
 (keystoneclient in that case) and this break the Ironic gate without
 us doing anything.

 So, what you guys think about removing the test that compares the
 configuration files and makes it no longer gate[2]?

 We already have a tox command to generate the sample configuration
 file[3], so folks that needs it can generate it locally.

 Does anyone disagree?


+1 to this, but I think we should document how to generate the sample config
in our documentation (install guide?).

-Dave
-- 
David Shrewsbury (Shrews)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Fox, Kevin M


 -Original Message-
 From: Clint Byrum [mailto:cl...@fewbar.com]
 Sent: Thursday, September 25, 2014 9:35 AM
 To: openstack-dev
 Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and
 Manage OpenStack using Kubernetes and Docker
 
 First, Kevin, please try to figure out a way to reply in-line when you're
 replying to multiple levels of threads. Even if you have to copy and quote it
 manually.. it took me reading your message and the previous message 3
 times to understand the context.

I'm sorry. I think your frustration with it mirrors the frustration I have with 
having to use this blankity blank microsoft webmail that doesn't support inline 
commenting, or having to rdesktop to a windows terminal server so I can reply 
inline. :/

 
 Second, I don't think anybody minds having a control plane for each level of
 control. The point isn't to replace the undercloud, but to replace nova
 rebuild as the way you push out new software while retaining the benefits
 of the image approach.

I don't quite follow. Wouldn't you be using heat autoscaling, not nova directly?

Thanks,
Kevin
 
 Excerpts from Fox, Kevin M's message of 2014-09-25 09:07:10 -0700:
  Then you still need all the kubernetes api/daemons for the master and
 slaves. If you ignore the complexity this adds, then it seems simpler then
 just using openstack for it. but really, it still is an under/overcloud kind 
 of
 setup, your just using kubernetes for the undercloud, and openstack for the
 overcloud?
 
  Thanks,
  Kevin
  
  From: Steven Dake [sd...@redhat.com]
  Sent: Wednesday, September 24, 2014 8:02 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla:
  Deploy and Manage OpenStack using Kubernetes and Docker
 
  On 09/24/2014 03:31 PM, Alan Kavanagh wrote:
  Steven
  I have to ask what is the motivation and benefits we get from integrating
 Kubernetes into Openstack? Would be really useful if you can elaborate and
 outline some use cases and benefits Openstack and Kubernetes can gain.
 
  /Alan
 
  Alan,
 
  I am either unaware or ignorant of another Docker scheduler that is
 currently available that has a big (100+ folks) development community.
 Kubernetes meets these requirements and is my main motivation for using
 it to schedule Docker containers.  There are other ways to skin this cat - The
 TripleO folks wanted at one point to deploy nova with the nova docker VM
 manager to do such a thing.  This model seemed a little clunky to me since it
 isn't purpose built around containers.
 
  As far as use cases go, the main use case is to run a specific Docker
 container on a specific Kubernetes minion bare metal host.  These docker
 containers are then composed of the various config tools and services for
 each detailed service in OpenStack.  For example, mysql would be a
 container, and tools to configure the mysql service would exist in the
 container.  Kubernetes would pass config options for the mysql database
 prior to scheduling and once scheduled, Kubernetes would be responsible
 for connecting the various containers together.
 
  Regards
  -steve
 
 
 
  From: Steven Dake [mailto:sd...@redhat.com]
  Sent: September-24-14 7:41 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla:
  Deploy and Manage OpenStack using Kubernetes and Docker
 
  On 09/24/2014 10:12 AM, Joshua Harlow wrote:
  Sounds like an interesting project/goal and will be interesting to see
 where this goes.
 
  A few questions/comments:
 
  How much golang will people be exposed to with this addition?
 
  Joshua,
 
  I expect very little.  We intend to use Kubernetes as an upstream project,
 rather then something we contribute to directly.
 
 
  Seeing that this could be the first 'go' using project it will be 
  interesting to
 see where this goes (since afaik none of the infra support exists, and people
 aren't likely to familiar with go vs python in the openstack community
 overall).
 
  What's your thoughts on how this will affect the existing openstack
 container effort?
 
  I don't think it will have any impact on the existing Magnum project.  At
 some point if Magnum implements scheduling of docker containers, we
 may add support for Magnum in addition to Kubernetes, but it is impossible
 to tell at this point.  I don't want to derail either project by trying to 
 force
 them together unnaturally so early.
 
 
  I see that kubernetes isn't exactly a small project either (~90k LOC, for
 those who use these types of metrics), so I wonder how that will affect
 people getting involved here, aka, who has the
 resources/operators/other... available to actually setup/deploy/run
 kubernetes, when operators are likely still just struggling to run openstack
 itself (at least operators are getting used to the openstack warts, a new set
 of 

Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Fox, Kevin M


 -Original Message-
 From: Clint Byrum [mailto:cl...@fewbar.com]
 Sent: Thursday, September 25, 2014 9:44 AM
 To: openstack-dev
 Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and
 Manage OpenStack using Kubernetes and Docker
 
 Excerpts from Fox, Kevin M's message of 2014-09-25 09:13:26 -0700:
  Why can't you manage baremetal and containers from a single host with
 nova/neutron? Is this a current missing feature, or has the development
 teams said they will never implement it?
 
 
 It's a bug.
 
 But it is also a complexity that isn't really handled well in Nova's current
 design. Nova wants to send the workload onto the machine, and that is it. In
 this case, you have two workloads, one hosted on the other, and Nova has
 no model for that. You end up in a weird situation where one
 (baremetal) is host for other (containers) and no real way to separate the
 two or identify that dependency.

Ideally, like you say, you should be able to have one host managed by two 
different nova drivers in the same cell. But I think today you can simply use 
two different cells and it should work? One cell deploys bare metal images, one 
of which contains the nova docker compute resources; the other cell supports 
launching docker instances on those hosts. To the end user, it still looks like 
one unified cloud like we all want, but under the hood it's two separate 
subclouds: an undercloud and an overcloud.

 I think it's worth pursuing in OpenStack, but Steven is solving deployment of
 OpenStack today with tools that exist today. I think Kolla may very well
 prove that the container approach is too different from Nova's design and
 wants to be more separate, at which point our big tent will be in an
 interesting position: Do we adopt Kubernetes and put an OpenStack API on
 it, or do we re-implement it.

That is a very interesting question, worth pursuing.

I think either way, most of the work is going to be in dockerizing the 
services. So that alone is worth playing with too.

I managed to get libvirt to work in docker once. It was a pain. Getting nova 
and neutron bits in that container too would be even harder. I'm waiting to try 
again until I know that systemd will run nicely inside a docker container. It 
would make managing the startup/stopping of the container much easier to get 
right. 

Thanks,
Kevin

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] logging around olso lockutils

2014-09-25 Thread Jay Pipes
+1 for making those two changes. I also have been frustrated doing 
debugging in the gate recently, and any operational-ease-of-debugging 
things like this would be appreciated.


-jay

On 09/25/2014 08:49 AM, Sean Dague wrote:

Spending a ton of time reading logs, oslo locking ends up basically
creating a ton of output at DEBUG that you have to mentally filter to
find problems:

2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Created new semaphore iptables internal_lock
/opt/stack/new/nova/nova/openstack/common/lockutils.py:206
2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Acquired semaphore iptables lock
/opt/stack/new/nova/nova/openstack/common/lockutils.py:229
2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Attempting to grab external lock iptables external_lock
/opt/stack/new/nova/nova/openstack/common/lockutils.py:178
2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Got file lock /opt/stack/data/nova/nova-iptables acquire
/opt/stack/new/nova/nova/openstack/common/lockutils.py:93
2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Got semaphore / lock _do_refresh_provider_fw_rules inner
/opt/stack/new/nova/nova/openstack/common/lockutils.py:271
2014-09-24 18:44:49.244 DEBUG nova.compute.manager
[req-b91cb1c1-f211-43ef-9714-651eeb3b2302
DeleteServersAdminTestXML-1408641898
DeleteServersAdminTestXML-469708524] [instance:
98eb8e6e-088b-4dda-ada5-7b2b79f62506] terminating bdm
BlockDeviceMapping(boot_index=0,connection_info=None,created_at=2014-09-24T18:44:42Z,delete_on_termination=True,deleted=False,deleted_at=None,destination_type='local',device_name='/dev/vda',device_type='disk',disk_bus=None,guest_format=None,id=43,image_id='262ab8a2-0790-49b3-a8d3-e8ed73e3ed71',instance=?,instance_uuid=98eb8e6e-088b-4dda-ada5-7b2b79f62506,no_device=False,snapshot_id=None,source_type='image',updated_at=2014-09-24T18:44:42Z,volume_id=None,volume_size=None)
_cleanup_volumes /opt/stack/new/nova/nova/compute/manager.py:2407
2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Released file lock /opt/stack/data/nova/nova-iptables release
/opt/stack/new/nova/nova/openstack/common/lockutils.py:115
2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Releasing semaphore iptables lock
/opt/stack/new/nova/nova/openstack/common/lockutils.py:238
2014-09-24 18:44:49.249 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Semaphore / lock released _do_refresh_provider_fw_rules inner

Also readable here:
http://logs.openstack.org/01/123801/1/check/check-tempest-dsvm-full/b5f8b37/logs/screen-n-cpu.txt.gz#_2014-09-24_18_44_49_240

(Yes, it's kind of ugly)

What occured to me is that in debugging locking issues what we actually
care about is 2 things semantically:

#1 - tried to get a lock, but someone else has it. Then we know we've
got lock contention. .
#2 - something is still holding a lock after some long amount of time.

#2 turned out to be a critical bit in understanding one of the worst
recent gate impacting issues.

You can write a tool today that analyzes the logs and shows you these
things. However, I wonder if we could actually do something creative in
the code itself to do this already. I'm curious if the creative use of
Timers might let us emit log messages under the conditions above
(someone with better understanding of python internals needs to speak up
here). Maybe it's too much overhead, but I think it's worth at least
asking the question.
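
The kind of thing I have in mind is roughly this (a sketch using plain
threading primitives, ignoring the eventlet / monkey-patching details that
lockutils actually has to deal with):

# Rough sketch only, not lockutils code: wrap a lock so that waiting on a
# contended lock is logged (#1) and holding it for too long emits a
# warning (#2).
import logging
import threading
import time
from contextlib import contextmanager

LOG = logging.getLogger(__name__)


@contextmanager
def monitored_lock(lock, name, warn_after=10.0):
    # Case #1: someone else holds the lock, so log the contention.
    if not lock.acquire(False):
        wait_start = time.time()
        LOG.debug('lock "%s" is held by another thread, waiting', name)
        lock.acquire()
        LOG.debug('lock "%s" acquired after waiting %.2fs',
                  name, time.time() - wait_start)
    # Case #2: warn if we are still holding the lock after warn_after.
    timer = threading.Timer(
        warn_after,
        lambda: LOG.warning('lock "%s" still held after %ss',
                            name, warn_after))
    timer.daemon = True
    timer.start()
    try:
        yield
    finally:
        timer.cancel()
        lock.release()

# Example use (names made up):
# with monitored_lock(iptables_lock, 'iptables'):
#     apply_iptables_rules()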

The same issue exists when it comes to processutils I think, warning
that a command is still running after 10s might be really handy, because
it turns out that issue #2 was caused by this, and it took quite a bit
of decoding to figure that out.

-Sean



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Release criticality of bug 1365606 (get_network_info efficiency for nova-network)

2014-09-25 Thread Vishvananda Ishaya
OK, new versions have reversed the order, so we can take:

https://review.openstack.org/#/c/121663/4

before:

https://review.openstack.org/#/c/119521/10

I still strongly recommend that we take the second so we at least have
the possibility of backporting the other two patches. And I also wouldn’t
complain if we just took all 4 :)

Vish

On Sep 25, 2014, at 9:44 AM, Vishvananda Ishaya vishvana...@gmail.com wrote:

 To explain my rationale:
 
 I think it is totally reasonable to be conservative and wait to merge
 the actual fixes to the network calls[1][2] until Kilo and have them
 go through the stable/backports process. Unfortunately, due to our object
 design, if we block https://review.openstack.org/#/c/119521/ then there
 is no way we can backport those fixes, so we are stuck for a full 6
 months with abysmal performance. This is why I’ve been pushing to get
 that one fix in. That said, I will happily decouple the two patches.
 
 Vish
 
 [1] https://review.openstack.org/#/c/119522/9
 [2] https://review.openstack.org/#/c/119523/10
 
 On Sep 24, 2014, at 3:51 PM, Michael Still mi...@stillhq.com wrote:
 
 Hi,
 
 so, I'd really like to see https://review.openstack.org/#/c/121663/
 merged in rc1. That patch is approved right now.
 
 However, it depends on https://review.openstack.org/#/c/119521/, which
 is not approved. 119521 fixes a problem where we make five RPC calls
 per call to get_network_info, which is an obvious efficiency problem.
 
 Talking to Vish, who is the author of these patches, it sounds like
 the efficiency issue is a pretty big deal for users of nova-network
 and he'd like to see 119521 land in Juno. I think that means he's
 effectively arguing that the bug is release critical.
 
 On the other hand, its only a couple of days until rc1, so we're
 trying to be super conservative about what we land now in Juno.
 
 So... I'd like to see a bit of a conversation on what call we make
 here. Do we land 119521?
 
 Michael
 
 -- 
 Rackspace Australia
 



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Concurrent update issue in Glance v2 API

2014-09-25 Thread Mark Washenberger
Thanks for diving on this grenade, Alex!

FWIW, I agree with all of your assessments. Just in case I am mistaken, I
summarize them as smaller updates > logical clocks > wall clocks (due to
imprecision and skew).

Given the small size of your patch [4], I'd say let's try to land that. It
is nicer to solve this problem with software rather than with db schema if
that is possible.
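
For anyone following along, the dirty-attribute idea in [4] boils down to
remembering which attributes were actually assigned and sending only those
to the UPDATE. A rough sketch of the pattern (illustrative only, not the
actual patch):

# Illustrative sketch of tracking changed attributes so that only the
# modified columns end up in the UPDATE statement. Not Glance's real code.
class TrackedImage(object):
    def __init__(self, **attrs):
        self.__dict__['_attrs'] = dict(attrs)
        self.__dict__['_changed'] = set()

    def __getattr__(self, name):
        try:
            return self._attrs[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        # Record the attribute as dirty only if its value really changed.
        if name not in self._attrs or self._attrs[name] != value:
            self._changed.add(name)
        self._attrs[name] = value

    def changed_values(self):
        # Only these key/value pairs need to go into the UPDATE query.
        return dict((name, self._attrs[name]) for name in self._changed)

# image = TrackedImage(id='abc', name='old name', visibility='private')
# image.name = 'new name'
# image.changed_values()  => {'name': 'new name'}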

On Thu, Sep 25, 2014 at 9:21 AM, Alexander Tivelkov ativel...@mirantis.com
wrote:

 Hi folks!

 There is a serious issue [0] in the v2 API of Glance which may lead to
 race conditions during the concurrent updates of Images' metadata.
 It can be fixed in a number of ways, but we need to have some solution
 soon, as we are approaching rc1 release, and the race in image updates
 looks like a serious problem which has to be fixed in J, imho.

 A quick description of the problem:
 When the image-update is called (PUT /v2/images/%image_id%/) we get the
 image from the repository, which fetches a record from the DB and forms its
 content into an Image Domain Object ([1]), which is then modified (has its
 attributes updated) and passed through all the layers of our domain model.
 This object is not managed by the SQLAlchemy's session, so the
 modifications of its attributes are not tracked anywhere.
 When all the processing is done and the updated object is passed back to
 the DB repository, it serializes all the attributes of the image into a
 dict ([2]) and then this dict is used to create an UPDATE query for the
 database.
 As this serialization includes all the attribute of the object (rather
 then only the modified ones), the update query updates all the columns of
 the appropriate database row, putting there the values which were
 originally fetched when the processing began. This may obviously overwrite
 the values which could be written there by some other concurent request.

 There are two possible solutions to fix this problem.
 First, known as the optimistic concurrency control, checks if the
 appropriate database row was modified between the data fetching and data
 updates. In case of such modification the update operation reports a
 conflict and fails (and may be retried based on the updated data if
 needed). Modification detection is usually based on the timstamps, i.e. the
 query updates the row in database only if the timestamp there matches the
 timestamp of initially fetched data.
 I've introduced this approach in this patch [3], however it has a major
 flaw: I used the 'updated_at' attribute as a timestamp, and this attribute
 is mapped to a DateTime-typed column. In many RDBMS's (including
 MySql5.6.4) this column stores values with per-second precision and does
 not store fractions of seconds. So, even if patch [3] is merged the race
 conditions may still occur if there are many updates happening at the same
 moment of time.
 A better approach would be to add a new column with int (or longint) type
 to store millisecond-based (or even microsecond-based) timestamps instead
 of (or additionally to) date-time based updated_at. But data model
 modification will require to add new migration etc, which is a major step
 and I don't know if we want to make it so close to the release.

 The second solution is to keep track of the changed attributes and
 properties for the image and do not include the unchanged ones into the
 UPDATE query, so nothing gets overwritten. This dramatically reduces the
 threat of races, as the updates of different properties do not interfere
 with each other. Also this is a usefull change regardless of the race
 itself: being able to differentiate between changed and unchanged
 attributes may have its own value for other purposes; the DB performance
 will also be better when updating just the needed fields instead of all of
 them.
 I've submitted a patch with this approach as well [4], but it still breaks
 some unittests and I am working to fix them right now.

 So, we need to decide which of these approaches (or their combination) to
 take: we may stick with optimistic locking on timestamp (and then decide if
 we are ok with a per-second timestamps or we need to add a new column),
 choose to track state of attributes or combine them together. So, could you
 folks please review patches [3] and [4] and come up with some ideas on them?

 Also, probably we should consider targeting [0] to juno-rc1 milestone to
 make sure that this bug is fixed in J. Do you guys think it is possible at
 this stage?

 Thanks!


 [0] https://bugs.launchpad.net/glance/+bug/1371728
 [1]
 https://github.com/openstack/glance/blob/master/glance/db/__init__.py#L74
 [2]
 https://github.com/openstack/glance/blob/master/glance/db/__init__.py#L169
 [3] https://review.openstack.org/#/c/122814/
 [4] https://review.openstack.org/#/c/123722/

 --
 Regards,
 Alexander Tivelkov

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [oslo] logging around olso lockutils

2014-09-25 Thread Davanum Srinivas
Logged as a high-priority bug -
https://bugs.launchpad.net/oslo.concurrency/+bug/1374075

On Thu, Sep 25, 2014 at 1:57 PM, Jay Pipes jaypi...@gmail.com wrote:
 +1 for making those two changes. I also have been frustrated doing debugging
 in the gate recently, and any operational-ease-of-debugging things like this
 would be appreciated.

 -jay

 On 09/25/2014 08:49 AM, Sean Dague wrote:

 Spending a ton of time reading logs, oslo locking ends up basically
 creating a ton of output at DEBUG that you have to mentally filter to
 find problems:

 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Created new semaphore iptables internal_lock
 /opt/stack/new/nova/nova/openstack/common/lockutils.py:206
 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Acquired semaphore iptables lock
 /opt/stack/new/nova/nova/openstack/common/lockutils.py:229
 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Attempting to grab external lock iptables external_lock
 /opt/stack/new/nova/nova/openstack/common/lockutils.py:178
 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Got file lock /opt/stack/data/nova/nova-iptables acquire
 /opt/stack/new/nova/nova/openstack/common/lockutils.py:93
 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Got semaphore / lock _do_refresh_provider_fw_rules inner
 /opt/stack/new/nova/nova/openstack/common/lockutils.py:271
 2014-09-24 18:44:49.244 DEBUG nova.compute.manager
 [req-b91cb1c1-f211-43ef-9714-651eeb3b2302
 DeleteServersAdminTestXML-1408641898
 DeleteServersAdminTestXML-469708524] [instance:
 98eb8e6e-088b-4dda-ada5-7b2b79f62506] terminating bdm

 BlockDeviceMapping(boot_index=0,connection_info=None,created_at=2014-09-24T18:44:42Z,delete_on_termination=True,deleted=False,deleted_at=None,destination_type='local',device_name='/dev/vda',device_type='disk',disk_bus=None,guest_format=None,id=43,image_id='262ab8a2-0790-49b3-a8d3-e8ed73e3ed71',instance=?,instance_uuid=98eb8e6e-088b-4dda-ada5-7b2b79f62506,no_device=False,snapshot_id=None,source_type='image',updated_at=2014-09-24T18:44:42Z,volume_id=None,volume_size=None)
 _cleanup_volumes /opt/stack/new/nova/nova/compute/manager.py:2407
 2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Released file lock /opt/stack/data/nova/nova-iptables release
 /opt/stack/new/nova/nova/openstack/common/lockutils.py:115
 2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Releasing semaphore iptables lock
 /opt/stack/new/nova/nova/openstack/common/lockutils.py:238
 2014-09-24 18:44:49.249 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Semaphore / lock released _do_refresh_provider_fw_rules inner

 Also readable here:

 http://logs.openstack.org/01/123801/1/check/check-tempest-dsvm-full/b5f8b37/logs/screen-n-cpu.txt.gz#_2014-09-24_18_44_49_240

 (Yes, it's kind of ugly)

 What occured to me is that in debugging locking issues what we actually
 care about is 2 things semantically:

 #1 - tried to get a lock, but someone else has it. Then we know we've
 got lock contention. .
 #2 - something is still holding a lock after some long amount of time.

 #2 turned out to be a critical bit in understanding one of the worst
 recent gate impacting issues.

 You can write a tool today that analyzes the logs and shows you these
 things. However, I wonder if we could actually do something creative in
 the code itself to do this already. I'm curious if the creative use of
 Timers might let us emit log messages under the conditions above
 (someone with better understanding of python internals needs to speak up
 here). Maybe it's too much overhead, but I think it's worth at least
 asking the question.

 The same issue exists when it comes to processutils I think, warning
 that a command is still running after 10s might be really handy, because
 it turns out that issue #2 was caused by this, and it took quite a bit
 of decoding to figure that out.

 -Sean


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [Ironic] Get rid of the sample config file

2014-09-25 Thread Devdatta Kulkarni
Hi,

We have faced this situation in Solum several times; in fact, this was one of
the topics that we discussed in our last IRC meeting.

We landed on separating the sample check from the pep8 gate into a non-voting
gate. One reason to keep the sample check is that when, say, a feature in your
code fails due to some upstream change for which you don't have coverage in
your functional tests, a non-voting but failing sample-check gate can be used
as a starting point for the failure investigation.

More details about the discussion can be found here:
http://eavesdrop.openstack.org/meetings/solum_team_meeting/2014/solum_team_meeting.2014-09-23-16.00.log.txt

- Devdatta


From: David Shrewsbury [shrewsbury.d...@gmail.com]
Sent: Thursday, September 25, 2014 12:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ironic] Get rid of the sample config file

Hi!

On Thu, Sep 25, 2014 at 12:23 PM, Lucas Alvares Gomes 
lucasago...@gmail.commailto:lucasago...@gmail.com wrote:
Hi,

Today we have hit the problem of having an outdated sample
configuration file again[1]. The problem of the sample generation is
that it picks up configuration from other projects/libs
(keystoneclient in that case) and this break the Ironic gate without
us doing anything.

So, what you guys think about removing the test that compares the
configuration files and makes it no longer gate[2]?

We already have a tox command to generate the sample configuration
file[3], so folks that needs it can generate it locally.

Does anyone disagree?


+1 to this, but I think we should document how to generate the sample config
in our documentation (install guide?).

-Dave
--
David Shrewsbury (Shrews)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] team meeting Sept 25 1800 UTC

2014-09-25 Thread Andrew Lazarev
Thanks to everyone who joined the Sahara meeting.

Here are the logs from the meeting:

http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-09-25-18.02.html
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-09-25-18.02.log.html

Andrew.

On Wed, Sep 24, 2014 at 2:50 PM, Sergey Lukjanov slukja...@mirantis.com
wrote:

 Hi folks,

 We'll be having the Sahara team meeting as usual in
 #openstack-meeting-alt channel.

 Agenda:
 https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings


 http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meetingiso=20140925T18

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] PTL candidacy

2014-09-25 Thread John Dickinson
I'm announcing my candidacy for Swift PTL. I've been involved with Swift 
specifically and OpenStack in general since the beginning. I'd like to continue 
to serve in the role of Swift PTL.

In my last candidacy email[1], I talked about several things I wanted to focus 
on in Swift.

1) Storage policies. This is done, and we're currently building on it to 
implement erasure code storage in Swift.

2) Focus on performance and efficiency. This is an ongoing thing that is never 
done, but we have made improvements here, and there are some other 
interesting things in-progress right now (like zero-copy data paths).

3) Better QA. We've added a third-party test cluster to the CI system, but I'd 
like to improve this further, for example by adding our internal integration 
tests (probe tests) to our QA pipeline.

4) Better community efficiency. Again, we've made some small improvements here, 
but we have a ways to go yet. Our review backlog is large, and it takes a while 
for patches to land. We need to continue to improve community efficiency on 
these metrics.

Overall, I want to ensure that Swift continues to provide a stable and robust 
object storage engine. Focusing on the areas listed above will help us do that. 
We'll continue to build functionality that allows applications to rely on Swift 
to take over hard problems of storage so that apps can focus on adding their 
value without worrying about storage.

My vision for Swift is that everyone will use it every day, even if they don't 
realize it. Together we can make it happen.

--John

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-March/031450.html






signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] logging around olso lockutils

2014-09-25 Thread Ben Nemec
On 09/25/2014 07:49 AM, Sean Dague wrote:
 Spending a ton of time reading logs, oslo locking ends up basically
 creating a ton of output at DEBUG that you have to mentally filter to
 find problems:
 
 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Created new semaphore iptables internal_lock
 /opt/stack/new/nova/nova/openstack/common/lockutils.py:206
 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Acquired semaphore iptables lock
 /opt/stack/new/nova/nova/openstack/common/lockutils.py:229
 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Attempting to grab external lock iptables external_lock
 /opt/stack/new/nova/nova/openstack/common/lockutils.py:178
 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Got file lock /opt/stack/data/nova/nova-iptables acquire
 /opt/stack/new/nova/nova/openstack/common/lockutils.py:93
 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Got semaphore / lock _do_refresh_provider_fw_rules inner
 /opt/stack/new/nova/nova/openstack/common/lockutils.py:271
 2014-09-24 18:44:49.244 DEBUG nova.compute.manager
 [req-b91cb1c1-f211-43ef-9714-651eeb3b2302
 DeleteServersAdminTestXML-1408641898
 DeleteServersAdminTestXML-469708524] [instance:
 98eb8e6e-088b-4dda-ada5-7b2b79f62506] terminating bdm
 BlockDeviceMapping(boot_index=0,connection_info=None,created_at=2014-09-24T18:44:42Z,delete_on_termination=True,deleted=False,deleted_at=None,destination_type='local',device_name='/dev/vda',device_type='disk',disk_bus=None,guest_format=None,id=43,image_id='262ab8a2-0790-49b3-a8d3-e8ed73e3ed71',instance=?,instance_uuid=98eb8e6e-088b-4dda-ada5-7b2b79f62506,no_device=False,snapshot_id=None,source_type='image',updated_at=2014-09-24T18:44:42Z,volume_id=None,volume_size=None)
 _cleanup_volumes /opt/stack/new/nova/nova/compute/manager.py:2407
 2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Released file lock /opt/stack/data/nova/nova-iptables release
 /opt/stack/new/nova/nova/openstack/common/lockutils.py:115
 2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Releasing semaphore iptables lock
 /opt/stack/new/nova/nova/openstack/common/lockutils.py:238
 2014-09-24 18:44:49.249 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Semaphore / lock released _do_refresh_provider_fw_rules inner
 
 Also readable here:
 http://logs.openstack.org/01/123801/1/check/check-tempest-dsvm-full/b5f8b37/logs/screen-n-cpu.txt.gz#_2014-09-24_18_44_49_240
 
 (Yes, it's kind of ugly)
 
 What occured to me is that in debugging locking issues what we actually
 care about is 2 things semantically:
 
 #1 - tried to get a lock, but someone else has it. Then we know we've
 got lock contention. .
 #2 - something is still holding a lock after some long amount of time.

We did just merge https://review.openstack.org/#/c/122166/ which adds
some contention/timing information to the log messages and should at
least be a step toward what you're talking about.
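
To give a flavor of the kind of timing information that helps (purely an
illustration, not what the linked review actually does), think of a wrapper
along these lines:

# Illustration only: attach wait-time and hold-time to lock log messages.
import logging
import time
from contextlib import contextmanager

LOG = logging.getLogger(__name__)


@contextmanager
def timed_lock(lock, name):
    start = time.time()
    with lock:
        waited = time.time() - start
        LOG.debug('Acquired lock "%s" (waited %.3fs)', name, waited)
        try:
            yield
        finally:
            LOG.debug('Releasing lock "%s" (held %.3fs)',
                      name, time.time() - start - waited)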

For context, we had some bad logging that resulted in
https://bugs.launchpad.net/oslo.concurrency/+bug/1367941 .  That has
been cleaned up to at least be accurate, but it did add an extra log
message (created and acquired).  The reason we cared about that is we
thought there might be a bad interaction between our code and eventlet,
so we wanted to know whether we were in fact locking the same semaphore
twice or mistakenly creating two separate ones (as it turns out, neither
- it was just the bad logging I mentioned earlier).

So, given that I think everyone involved agrees that the double-locking
thing was a cosmetic issue and not a functional one, we could probably
just remove the created/using messages here:
https://github.com/openstack/oslo-incubator/blob/master/openstack/common/lockutils.py#L202
which would eliminate one message per lock operation without
significantly impacting debuggability of our code.  Those messages only
exist because we didn't trust what was going on at a lower level.

It would also be nice to reduce the duplication of the messages.  For
example, we log that we acquired a 

Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-25 Thread Vishvananda Ishaya

On Sep 24, 2014, at 10:55 AM, Zane Bitter zbit...@redhat.com wrote:

 On 18/09/14 14:53, Monty Taylor wrote:
 Hey all,
 
 I've recently been thinking a lot about Sean's Layers stuff. So I wrote
 a blog post which Jim Blair and Devananda were kind enough to help me edit.
 
 http://inaugust.com/post/108
 
 I think there are a number of unjustified assumptions behind this arrangement 
 of things. I'm going to list some here, but I don't want anyone to interpret 
 this as a personal criticism of Monty. The point is that we all suffer from 
 biases - not for any questionable reasons but purely as a result of our own 
 experiences, who we spend our time talking to and what we spend our time 
 thinking about - and therefore we should all be extremely circumspect about 
 trying to bake our own mental models of what OpenStack should be into the 
 organisational structure of the project itself.

I think there were some assumptions that led to the Layer1 model. Perhaps a 
little insight into the in-person debate[1] at OpenStack-SV might help explain 
where Monty was coming from.

The initial thought was a radical idea (pioneered by Jay) to completely 
dismantle the integrated release and have all projects release independently 
and functionally test against their real dependencies. This gained support from 
various people and I still think it is a great long-term goal.

The worry that Monty (and others) had are two-fold:

1. When we had no co-gating in the past, we ended up with a lot of 
cross-project breakage. If we jump right into this we could end up in the wild 
west where different projects expect different keystone versions and there is no
way to deploy a functional cloud.
2. We have set expectations in our community (and especially with 
distributions), that we release a set of things that all work together. It is 
not acceptable for us to just pull the rug out from under them.

These concerns show that we must (in the short term) provide some kind of 
integrated testing and release. I see the layer1 model as a stepping stone 
towards the long term goal of having the projects release independently and 
depend on stable interfaces. We aren’t going to get there immediately, so 
having a smaller, integrated set of services representing our most common use 
case seems like a good first step. As our interfaces get more stable and our 
testing gets better it could move to a (once every X months) release that just 
packages the current version of the layer1 projects or even be completely 
managed by distributions.

We need a way to move forward, but I’m hoping we can do it without a concept of 
“specialness” around layer1 projects. I actually see it as a limitation of 
these projects that we have to take this stepping stone and cannot disaggregate 
completely. Instead it should be seen as a necessary evil so that we don’t 
break our users.

In addition, we should encourage other shared use cases in openstack both for 
testing (functional tests against groups of services) and for releases (shared 
releases of related projects).

[1] Note this wasn’t a planned debate, but a spontaneous discussion that 
included (at various points) Monty Taylor, Jay Pipes, Joe Gordon, John 
Dickinson, myself, and (undoubtedly) one or two people I’m forgetting.


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Need a solution for large catalog in PKI tokens

2014-09-25 Thread Ved Lad
The OpenStack installation (Havana) at our company has a large number of
service endpoints in the catalog. As a consequence, when using PKI tokens,
my HTTP request header gets too big to handle for services like neutron. I'm
evaluating different options for reducing the size of the catalog in the
PKI token. Some that I have found are:

1. Using the per tenant endpoint filtering extension: This could break if
the per tenant endpoint list gets too big

2. Using PKIZ tokens (in Juno): We're using Havana, so I can't use this
feature, but it still doesn't look scalable

3. Using the ?nocatalog option. This is the best option for scalability but
isn't the catalog a required component for authorization?

Are there any other solutions that I am unaware of, that scale with the
number of endpoints?

Thanks,
Ved
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] logging around olso lockutils

2014-09-25 Thread Joshua Harlow
Or how about we add in a new log level?

A few libraries I have come across support the log level 5 (which is less than 
debug (10) but greater than notset (0))...

One usage of this is in the multiprocessing library in python itself @

https://hg.python.org/releasing/3.4/file/8671f89107c8/Lib/multiprocessing/util.py#l34

Kazoo calls it the 'BLATHER' level @

https://github.com/python-zk/kazoo/blob/master/kazoo/loggingsupport.py

Since these messages can actually be useful for lockutils developers, it could 
be worthwhile to keep them at such a level[1]?
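
For example, wiring such a level into the stdlib logging module only takes a
few lines (a rough sketch; the name and number just mirror what multiprocessing
and kazoo do, nothing below is existing oslo code):

import logging

# 5 sits below DEBUG (10) and above NOTSET (0), mirroring
# multiprocessing's SUBDEBUG and kazoo's BLATHER.
BLATHER = 5
logging.addLevelName(BLATHER, 'BLATHER')


def _blather(self, msg, *args, **kwargs):
    if self.isEnabledFor(BLATHER):
        self._log(BLATHER, msg, args, **kwargs)

# Attach a convenience method so callers can do LOG.blather(...).
logging.Logger.blather = _blather

LOG = logging.getLogger(__name__)
LOG.blather('Created new semaphore "%s"', 'iptables')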

Just a thought...

[1] One man's DEBUG is another man's garbage, ha.

On Sep 25, 2014, at 12:06 PM, Ben Nemec openst...@nemebean.com wrote:

 On 09/25/2014 07:49 AM, Sean Dague wrote:
 Spending a ton of time reading logs, oslo locking ends up basically
 creating a ton of output at DEBUG that you have to mentally filter to
 find problems:
 
 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Created new semaphore iptables internal_lock
 /opt/stack/new/nova/nova/openstack/common/lockutils.py:206
 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Acquired semaphore iptables lock
 /opt/stack/new/nova/nova/openstack/common/lockutils.py:229
 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Attempting to grab external lock iptables external_lock
 /opt/stack/new/nova/nova/openstack/common/lockutils.py:178
 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Got file lock /opt/stack/data/nova/nova-iptables acquire
 /opt/stack/new/nova/nova/openstack/common/lockutils.py:93
 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Got semaphore / lock _do_refresh_provider_fw_rules inner
 /opt/stack/new/nova/nova/openstack/common/lockutils.py:271
 2014-09-24 18:44:49.244 DEBUG nova.compute.manager
 [req-b91cb1c1-f211-43ef-9714-651eeb3b2302
 DeleteServersAdminTestXML-1408641898
 DeleteServersAdminTestXML-469708524] [instance:
 98eb8e6e-088b-4dda-ada5-7b2b79f62506] terminating bdm
 BlockDeviceMapping(boot_index=0,connection_info=None,created_at=2014-09-24T18:44:42Z,delete_on_termination=True,deleted=False,deleted_at=None,destination_type='local',device_name='/dev/vda',device_type='disk',disk_bus=None,guest_format=None,id=43,image_id='262ab8a2-0790-49b3-a8d3-e8ed73e3ed71',instance=?,instance_uuid=98eb8e6e-088b-4dda-ada5-7b2b79f62506,no_device=False,snapshot_id=None,source_type='image',updated_at=2014-09-24T18:44:42Z,volume_id=None,volume_size=None)
 _cleanup_volumes /opt/stack/new/nova/nova/compute/manager.py:2407
 2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Released file lock /opt/stack/data/nova/nova-iptables release
 /opt/stack/new/nova/nova/openstack/common/lockutils.py:115
 2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Releasing semaphore iptables lock
 /opt/stack/new/nova/nova/openstack/common/lockutils.py:238
 2014-09-24 18:44:49.249 DEBUG nova.openstack.common.lockutils
 [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
 ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
 Semaphore / lock released _do_refresh_provider_fw_rules inner
 
 Also readable here:
 http://logs.openstack.org/01/123801/1/check/check-tempest-dsvm-full/b5f8b37/logs/screen-n-cpu.txt.gz#_2014-09-24_18_44_49_240
 
 (Yes, it's kind of ugly)
 
 What occurred to me is that in debugging locking issues what we actually
 care about is 2 things semantically:
 
 #1 - tried to get a lock, but someone else has it. Then we know we've
 got lock contention.
 #2 - something is still holding a lock after some long amount of time.
 
 We did just merge https://review.openstack.org/#/c/122166/ which adds
 some contention/timing information to the log messages and should at
 least be a step toward what you're talking about.
 
 For context, we had some bad logging that resulted in
 https://bugs.launchpad.net/oslo.concurrency/+bug/1367941 .  That has
 been cleaned up to at least be accurate, but it did add an extra log
 message (created and acquired).  The reason we cared about that is we
 thought there might be a bad interaction between our code and eventlet,
 so we wanted to know whether we were in fact locking the same semaphore
 twice or 

Re: [openstack-dev] [Ironic] Get rid of the sample config file

2014-09-25 Thread John Griffith
On Thu, Sep 25, 2014 at 12:34 PM, Devdatta Kulkarni 
devdatta.kulka...@rackspace.com wrote:

  Hi,

 We have faced this situation in Solum several times. And in fact this was
 one of the topics
 that we discussed in our last irc meeting.

 We landed on separating the sample check from pep8 gate into a non-voting
 gate.
 One reason to keep the sample check is that when, say, a feature in your
 code fails due to some upstream changes for which you don't have coverage
 in your functional tests, then a non-voting but failing sample check gate
 can be used as a starting point for the failure investigation.

 More details about the discussion can be found here:

 http://eavesdrop.openstack.org/meetings/solum_team_meeting/2014/solum_team_meeting.2014-09-23-16.00.log.txt

 - Devdatta

  --
 *From:* David Shrewsbury [shrewsbury.d...@gmail.com]
 *Sent:* Thursday, September 25, 2014 12:42 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Ironic] Get rid of the sample config file

   Hi!

 On Thu, Sep 25, 2014 at 12:23 PM, Lucas Alvares Gomes 
 lucasago...@gmail.com wrote:

 Hi,

 Today we have hit the problem of having an outdated sample
 configuration file again[1]. The problem of the sample generation is
 that it picks up configuration from other projects/libs
 (keystoneclient in that case) and this break the Ironic gate without
 us doing anything.

 So, what you guys think about removing the test that compares the
 configuration files and makes it no longer gate[2]?

 We already have a tox command to generate the sample configuration
 file[3], so folks that needs it can generate it locally.

 Does anyone disagree?


  +1 to this, but I think we should document how to generate the sample
 config
 in our documentation (install guide?).

  -Dave
  --
  David Shrewsbury (Shrews)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I tried this in Cinder a while back and was actually rather surprised by
the overwhelming push-back I received from the Operator community, and
whether I agreed with all of it or not, the last thing I want to do is
ignore the Operators that are actually standing up and maintaining what
we're building.

Really, at the end of the day this isn't that big of a deal.  It's
relatively easy to update the config in most of the projects with
tox -egenconfig; see my posting back in May [1].  Given how rarely this
should need to happen, I'm not sure why we can't have enough contributors
who are proactive enough to fix it up when they see it has fallen out of
date.

John

[1]: http://lists.openstack.org/pipermail/openstack-dev/2014-May/036438.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] PTL Candidacy

2014-09-25 Thread Sergey Lukjanov
Hey folks,

I'd like to announce my intention to continue being PTL of the Data
Processing program (Sahara).

I’ve been working on the Sahara (ex. Savanna) project from scratch, from
the initial proof of concept implementation until now. I have been the
acting/elected PTL since Sahara was just an idea. Additionally, I’ve been
contributing to other OpenStack projects, especially Infrastructure, for
the last two releases, and I’m now a member of the core/root teams there.

My high-level focus as PTL is to coordinate work of subteams, code
review, release management and general architecture/design tracking.

During the Juno cycle I was especially focused on stability, improving
testing, and adding support for different data processing tools in
addition to Apache Hadoop. A very large list of bugs and improvements
has been worked through during the cycle, and I’m glad that we’re
ending Juno with the planned features completed and new plugins
available to end users, including Cloudera and Spark. Great work was
also done on keeping backward compatibility together with security
and usability improvements.

For Kilo I’d like to keep my own focus on the same things -
coordination, review, release management and general approach
tracking. As for the overall project focus, I’d like to continue
working on stability and test coverage, distributed architecture,
improved UX for non-expert EDP users, the ability to use Sahara out of
the box, and so on. Additionally, I’m thinking about adopting the idea
of a czars system for Sahara in the Kilo release and I’d like to
discuss it at the summit. So, my vision for Kilo is to continue moving
forward in implementing a scalable and flexible Data Processing aaS
for the OpenStack ecosystem by investing in quality and new features.

A few words about myself: I’m a Principal Software Engineer at Mirantis.
I worked a lot with Big Data projects and technologies (Hadoop, HDFS,
Cassandra, Twitter Storm, etc.) and enterprise-grade solutions before
starting to work on Sahara in the OpenStack ecosystem. You can see my
commit history [0] and review history [1] using the links below.

[0] http://stackalytics.com/?user_id=slukjanovmetric=commitsrelease=all
[1] http://stackalytics.com/?user_id=slukjanovmetric=marksrelease=all

Thanks.


-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Get rid of the sample config file

2014-09-25 Thread Morgan Fainberg
-Original Message-
From: John Griffith john.griffi...@gmail.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: September 25, 2014 at 12:27:52
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [Ironic] Get rid of the sample config file

 On Thu, Sep 25, 2014 at 12:34 PM, Devdatta Kulkarni 
 devdatta.kulka...@rackspace.com wrote:
  
  Hi,
 
  We have faced this situation in Solum several times. And in fact this was
  one of the topics
  that we discussed in our last irc meeting.
 
  We landed on separating the sample check from pep8 gate into a non-voting
  gate.
  One reason to keep the sample check is so that when say a feature in your
  code fails
  due to some upstream changes and for which you don't have coverage in your
  functional tests then
  a non-voting but failing sample check gate can be used as a starting point
  of the failure investigation.
 
  More details about the discussion can be found here:
 
  http://eavesdrop.openstack.org/meetings/solum_team_meeting/2014/solum_team_meeting.2014-09-23-16.00.log.txt

 
  - Devdatta
 
  --
  *From:* David Shrewsbury [shrewsbury.d...@gmail.com]
  *Sent:* Thursday, September 25, 2014 12:42 PM
  *To:* OpenStack Development Mailing List (not for usage questions)
  *Subject:* Re: [openstack-dev] [Ironic] Get rid of the sample config file
 
  Hi!
 
  On Thu, Sep 25, 2014 at 12:23 PM, Lucas Alvares Gomes 
  lucasago...@gmail.com wrote:
 
  Hi,
 
  Today we have hit the problem of having an outdated sample
  configuration file again[1]. The problem of the sample generation is
  that it picks up configuration from other projects/libs
  (keystoneclient in that case) and this break the Ironic gate without
  us doing anything.
 
  So, what you guys think about removing the test that compares the
  configuration files and makes it no longer gate[2]?
 
  We already have a tox command to generate the sample configuration
  file[3], so folks that needs it can generate it locally.
 
  Does anyone disagree?
 
 
  +1 to this, but I think we should document how to generate the sample
  config
  in our documentation (install guide?).
 
  -Dave
  --
  David Shrewsbury (Shrews)
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 I tried this in Cinder a while back and was actually rather surprised by
 the overwhelming push-back I received from the Operator community, and
 whether I agreed with all of it or not, the last thing I want to do is
 ignore the Operators that are actually standing up and maintaining what
 we're building.
  
 Really at the end of the day this isn't really that big of a deal. It's
 relatively easy to update the config in most of the projects tox
 -egenconfig see my posting back in May [1]. For all the more often this
 should happen I'm not sure why we can't have enough contributors that are
 just pro-active enough to fix it up when they see it falls out of date.
  
 John
  
 [1]: http://lists.openstack.org/pipermail/openstack-dev/2014-May/036438.html  

+1 to what John just said.
 
I know in Keystone we update the sample config (usually) whenever we notice it 
is out of date. Often we ask developers making config changes to run `tox 
-esample_config` and re-upload their patch. If someone misses it, we (the 
cores) will do a patch that just updates the sample config along the way. 
Ideally we 
should have a check job that just reports the config is out of date (instead of 
blocking the review).
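
Something along these lines would do for that non-voting check (a rough
sketch; it assumes the project has a `tox -egenconfig` environment and keeps
its sample at etc/project.conf.sample, both of which vary per project):

#!/usr/bin/env python
# Non-voting check: regenerate the sample config and see whether the
# in-tree copy is stale. Paths and env name are assumptions; adjust per
# project.
import subprocess
import sys

SAMPLE = 'etc/project.conf.sample'


def main():
    subprocess.check_call(['tox', '-egenconfig'])
    # git diff --exit-code returns 1 if the regenerated sample differs
    # from what is committed in the tree.
    rc = subprocess.call(['git', 'diff', '--exit-code', SAMPLE])
    if rc != 0:
        print('%s is out of date; run tox -egenconfig and commit the '
              'result.' % SAMPLE)
    return rc


if __name__ == '__main__':
    sys.exit(main())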

The issue is the premise that there are 2 options:

1) Gate on the sample config being current
2) Have no sample config in the tree.

The missing third option is the proactive approach (plus having something 
convenient like `tox -egenconfig` or `tox -eupdate_sample_config` to make 
updating the sample config easy), which covers both sides nicely. The 
Operators/deployers have the sample config in tree, and the developers don’t 
get patches rejected in the gate because the sample config doesn’t match 
new options in an external library.

I know a lot of operators and deployers appreciate the sample config being 
in-tree.

—Morgan







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] - do we need .start and .end notifications in all cases ?

2014-09-25 Thread Dolph Mathews
On Wed, Sep 24, 2014 at 9:48 AM, Day, Phil philip@hp.com wrote:

  
   I think we should aim to /always/ have 3 notifications using a pattern
   of
  
  try:
 ...notify start...
  
 ...do the work...
  
 ...notify end...
  except:
 ...notify abort...
 
  Precisely my viewpoint as well. Unless we standardize on the above, our
  notifications are less than useful, since they will be open to
 interpretation by
  the consumer as to what precisely they mean (and the consumer will need
 to
  go looking into the source code to determine when an event actually
  occurred...)
 
  Smells like a blueprint to me. Anyone have objections to me writing one
 up
  for Kilo?
 
  Best,
  -jay
 
 Hi Jay,

 So just to be clear, are you saying that we should generate 2 notification
 messages on Rabbit for every DB update ?   That feels like a big overkill
 for me.   If I follow that logic then the current state transition
 notifications should also be changed to Starting to update task state /
 finished updating task state  - which seems just daft and confusing
 logging with notifications.

 Sandy's answer where start /end are used if there is a significant amount
 of work between the two and/or the transaction spans multiple hosts makes a
 lot more sense to me.   Bracketing a single DB call with two notification
 messages rather than just a single one on success to show that something
 changed would seem to me to be much more in keeping with the concept of
 notifying on key events.


+1 Following similar thinking, Keystone recently dropped a pending
notification that preceded a single DB call, which was always followed by
either a success or failure notification.
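
For the operations where start/end notifications do make sense (real work
between them, possibly spanning hosts), a small helper keeps the
try/start/end/abort pattern from turning into boilerplate at every call site.
A sketch only; the notifier interface below is simplified and is not the
actual oslo.messaging API:

from contextlib import contextmanager


@contextmanager
def notify_span(notifier, event, payload):
    # Emits <event>.start / <event>.end, or <event>.abort on failure.
    # `notifier` is assumed to expose info(event_type, payload); the real
    # oslo.messaging Notifier also takes a context argument.
    notifier.info(event + '.start', payload)
    try:
        yield
    except Exception:
        notifier.info(event + '.abort', payload)
        raise
    notifier.info(event + '.end', payload)

# Usage (hypothetical):
# with notify_span(notifier, 'compute.instance.resize', {'uuid': uuid}):
#     do_the_work()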



 Phil


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Need a solution for large catalog in PKI tokens

2014-09-25 Thread Dolph Mathews
On Thu, Sep 25, 2014 at 3:21 PM, Ved Lad ved...@gmail.com wrote:

 The OpenStack installation (Havana) at our company has a large number of
 service endpoints in the catalog. As a consequence, when using PKI tokens,
 my HTTP request header gets too big to handle for services like neutron. I'm
 evaluating different options for reducing the size of the catalog in the
 PKI token. Some that I have found are:

 1. Using the per tenant endpoint filtering extension: This could break if
 the per tenant endpoint list gets too big


In Juno, there's a revision to this which makes the management easier:


https://blueprints.launchpad.net/keystone/+spec/multi-attribute-endpoint-grouping



 2. Using PKIZ tokens (in Juno): We're using Havana, so I can't use this
 feature, but it still doesn't look scalable


You're correct, it's a step in the right direction that we should have
taken in the first place, but it's still going to run into the same problem
with (even larger) large catalogs.



 3. Using the ?nocatalog option. This is the best option for scalability
 but isn't the catalog a required component for authorization?


The catalog (historically) does not convey any sort of authorization
information, but does provide some means of obscurity. There's been an
ongoing effort to make keystonemiddleware aware of the endpoint it's
protecting, and thus the catalog becomes pertinent authZ data in that
scenario. The bottom line is that the ?nocatalog auth flow is not a
completely viable code path yet.
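
For reference, the nocatalog flow is just a query parameter on the v3 token
call, roughly like this (a sketch using python-requests; the endpoint and
credentials are placeholders):

import json

import requests

KEYSTONE = 'http://keystone.example.com:5000'  # placeholder endpoint

body = {
    'auth': {
        'identity': {
            'methods': ['password'],
            'password': {
                'user': {
                    'name': 'demo',
                    'domain': {'id': 'default'},
                    'password': 'secret',
                },
            },
        },
    },
}

# ?nocatalog asks keystone to omit the service catalog from the token
# response body; the token itself comes back in the X-Subject-Token header.
resp = requests.post(KEYSTONE + '/v3/auth/tokens?nocatalog',
                     data=json.dumps(body),
                     headers={'Content-Type': 'application/json'})
token = resp.headers['X-Subject-Token']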



 Are there any other solutions that I am unaware of, that scale with the
 number of endpoints?


Use UUID tokens, which Keystone defaults to in Juno for some of the same
pain points that you're experiencing. UUID provides the same level of
security as PKI, with different scaling characteristics.



 Thanks,
 Ved

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Of wiki and contributors docs (was Re: [Nova] [All] API standards working group)

2014-09-25 Thread Stefano Maffulli
On 09/24/2014 09:09 PM, Anne Gentle wrote:
 I think the wiki is a great place to get ideas out while we look for a
 cross-project specs workflow in the meantime. 

The wiki is a great place to store things temporarily until they mature
and find a stable home :)

Speaking of wiki, those of you that follow the recent changes may have
noticed that I've been doing quite a bit of gardening lately in the
Category namespace[1].

The wiki pages have been growing at a fast pace, at a time when thinking of a
taxonomy and more structure was not really an option. Given the feedback
I'm getting from people interested in becoming contributors, I think
it's time to give the wiki more shape.

Some time ago, Katherine Cranford (a trained taxonomist) volunteered to go
through the wiki pages and draft a taxonomy for us. Shari Mahrdt, a recent
hire by the Foundation, volunteered a few hours per week to implement it,
and I finally took the lead on a project to reorganize content for the
developer (as in contributor) community[2].

We have a proposed taxonomy[3] and a first try at implementing it is
visible as a navigable tree on
https://wiki.openstack.org/wiki/Category:Home

Shari and I are keeping track of things to do on this etherpad:
https://etherpad.openstack.org/p/Action_Items_OpenStack_Wiki

We're very early in this project, things may change and we'll need help
from each editor of the wiki. I just wanted to let you know that work is
being done to improve life for new contributors. More details will follow.

/stef

[1]
https://wiki.openstack.org/w/index.php?namespace=14tagfilter=translations=filterhideminor=1title=Special%3ARecentChanges
[2]
http://maffulli.net/2014/09/18/improving-documentation-for-new-openstack-contributors/
[3]
https://docs.google.com/a/openstack.org/spreadsheets/d/1MA_u8RRnqCJC3AWQYLOz4r_zqOCewoP_ds1t_yvBak4/edit#gid=1014544834

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] PTL Candidacy

2014-09-25 Thread Tristan Cacqueray
confirmed

On 25/09/14 03:24 PM, Sergey Lukjanov wrote:
 Hey folks,
 
 I'd like to announce my intention to continue being PTL of the Data
 Processing program (Sahara).
 
 I’m working on Sahara (ex. Savanna) project from scratch, from the
 initial proof of concept implementation and till now. I have been the
 acting/elected PTL since Sahara was an idea. Additionally, I’m
 contributing to other OpenStack projects, especially Infrastructure
 for the last two releases where I’m core/root teams member now.
 
 My high-level focus as PTL is to coordinate work of subteams, code
 review, release management and general architecture/design tracking.
 
 During the Juno cycle I was especially focused on stability, improving
 testing and supporting for the different data processing tools in
 addition to the Apache Hadoop. The very huge lists of bugs and
 improvements has  been done during the cycle and I’m glad that we’re
 ending the Juno with completed list of planned features and new
 plugins available to end users including Cloudera and Spark. The great
 work was done on keeping backward compatibility together with security
 and usability improvements.
 
 For the Kilo I’d like to keep my own focus on the same stuff -
 coordination, review, release management and general approach
 tracking. As about the overall project focus I’d like to continue
 working on stability and tests coverage, distributed architecture,
 improved UX for non-expert EDP users, ability to use Sahara out of the
 box and etc. Additionally, I’m thinking about adopting an idea of
 czars system for Sahara in Kilo release and I’d like to discuss it on
 the summit. So, my vision of Kilo is to continue moving forward in
 implementing scalable and flexible Data Processing aaS for OpenStack
 ecosystem by investing in quality and new features.
 
 A few words about myself: I’m Principle Software Engineer in Mirantis.
 I was working a lot with  Big Data projects and technologies (Hadoop,
 HDFS, Cassandra, Twitter Storm, etc.) and enterprise-grade solutions
 before starting working on Sahara in OpenStack ecosystem. You can see
 my commit history [0], review history [1] using the links below.
 
 [0] http://stackalytics.com/?user_id=slukjanovmetric=commitsrelease=all
 [1] http://stackalytics.com/?user_id=slukjanovmetric=marksrelease=all
 
 Thanks.
 
 




signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] PTL candidacy

2014-09-25 Thread Tristan Cacqueray
confirmed

On 25/09/14 02:50 PM, John Dickinson wrote:
 I'm announcing my candidacy for Swift PTL. I've been involved with Swift 
 specifically and OpenStack in general since the beginning. I'd like to 
 continue to serve in the role as Swift PTL.
 
 In my last candidacy email[1], I talked about several things I wanted to 
 focus on in Swift.
 
 1) Storage policies. This is done, and we're currently building on it to 
 implement erasure code storage in Swift.
 
 2) Focus on performance and efficiency. This is an ongoing thing that is 
 never done, but we have made improvements here, and there are some other 
 interesting things in-progress right now (like zero-copy data paths).
 
 3) Better QA. We've added a third-party test cluster to the CI system, but 
 I'd like to improve this further, for example by adding our internal 
 integration tests (probe tests) to our QA pipeline.
 
 4) Better community efficiency. Again, we've made some small improvements 
 here, but we have a ways to go yet. Our review backlog is large, and it takes 
 a while for patches to land. We need to continue to improve community 
 efficiency on these metrics.
 
 Overall, I want to ensure that Swift continues to provide a stable and robust 
 object storage engine. Focusing on the areas listed above will help us do 
 that. We'll continue to build functionality that allows applications to rely 
 on Swift to take over hard problems of storage so that apps can focus on 
 adding their value without worrying about storage.
 
 My vision for Swift is that everyone will use it every day, even if they 
 don't realize it. Together we can make it happen.
 
 --John
 
 [1] http://lists.openstack.org/pipermail/openstack-dev/2014-March/031450.html
 
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 




signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] PTL Candidacy

2014-09-25 Thread Nikhil Manchanda
I'd like to announce my candidacy for the PTL role of the Database
(Trove) program for Kilo.

I'm the current PTL for Trove for Juno, and during the Juno time frame
we made some really good progress on multiple fronts. We completed the
Neutron integration work that we had started in Icehouse. We've added
support for asynchronous mysql master-slave replication. We added a
clustering API, and an initial implementation of clusters for MongoDB.
We furthered the testability of Trove by adding more Trove-related
tests to Tempest, and are continuing to make good progress updating
and cleaning up our developer docs, install guide, and user
documentation.

For Kilo, I'd like us to keep working on clustering, with the end goal
of being able to provision fully HA database clusters in Trove. This
means a continued focus on clustering for datastores (including a
semi-synchronous mysql clustering solution), as well as heat
integration. I'd also like to ensure that we make progress towards our
goal of integrating trove with a monitoring solution to enable
scenarios like auto-failover, which will be crucial to HA (for async
replication scenarios). I'd also like to ensure that we do a better job
integrating with the oslo libraries. And additionally, I'd like to
keep our momentum going with regards to improving Trove testability
and documentation.

Some of the other work-items that I hope we can get to in Kilo include:

- Packaging the Guest Agent separately from the other Trove services.
- Automated guest agent upgrades.
- Enabling hot pools for Trove instances.
- User access of datastore logs.
- Automated, and scheduled backups for instances.

No PTL candidate email is complete without the commit / review stats,
so here they are:

* My Patches:
  https://review.openstack.org/#/q/owner:slicknik,n,z

* My Reviews:
  https://review.openstack.org/#/q/-owner:slicknik+reviewer:slicknik,n,z

Thanks for taking the time to make it this far,
-Nikhil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] PTL candidacy

2014-09-25 Thread Nikhil Komawar
Hi,

I would like to take this opportunity to announce my candidacy for the role of 
Glance PTL.

I have been part of this program since the Folsom release and have had the 
opportunity to work with an awesome team. There have been really challenging 
changes in the way Glance works, and it has been a pleasure to contribute my 
reviews and code to many of those changes.

With the change in mission statement [1], which now provides a direction for 
other services to upload and discover data assets using Glance, my focus would 
be to enable new features like 'Artifacts' to merge smoothly into master. 
This is a paradigm change in the way Glance is consumed, and seeing it through 
would be my priority. In addition, as of Juno Glance supports a few new 
features, like async workers and metadefs, that could be improved in terms of 
bugs and maintainability. Seeing these through would be my next priority.

In addition to these, there are a few other challenges the Glance project 
faces - review/feedback time, triaging an ever-growing bug list, BP validation 
and follow-up, etc. I have some ideas to develop more momentum in each of 
these processes. With the advent of the Artifacts feature, new developers will 
be contributing to Glance. I would like to encourage them and work with them 
to become core members sooner rather than later. Also, there are many merge 
proposals which become stale due to a lack of reviews from core reviewers. My 
plan is to have bi-weekly sync-ups with the core and driver members to keep 
the review cycle active. As a lesson learned from Juno, I would like to work 
closely with all the developers and involved core reviewers to understand 
their sincere intent to accomplish a feature within the scope of the release 
timeline. There are some really talented people involved in Glance, and I 
would like to keep nurturing the ecosystem so that everyone involved can do 
their best.

Lastly, my salutations to Mark. He has provided great direction and leadership 
to this project. I would like to keep his strategy of rotating the weekly 
meeting times to accommodate people from various time zones.

Thanks for reading and I hope you will support my candidacy!

[1] 
https://github.com/openstack/governance/blob/master/reference/programs.yaml#L26

-Nikhil Komawar
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] How to set port_filter in port binding?

2014-09-25 Thread Alexandre Levine

Hi All,

I'm looking for a way to set the port_filter flag to False for port binding. 
Is there a way to do this in Icehouse or in the current Juno code? I use 
devstack with the default ML2 plugin and configuration.


According to this guide 
(http://docs.openstack.org/api/openstack-network/2.0/content/binding_ext_ports.html) 
it should be done via binding:profile, but it only gets recorded in the 
binding:profile dictionary and doesn't get reflected in vif_details as it 
is supposed to.
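
For completeness, this is roughly what I'm attempting with
python-neutronclient (a sketch; the credentials and port UUID are
placeholders):

from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(username='admin',
                                password='secret',
                                tenant_name='admin',
                                auth_url='http://controller:5000/v2.0')

port_id = 'PORT-UUID'  # placeholder

# Pass port_filter through binding:profile (admin only). Afterwards it
# shows up in binding:profile on the port, but binding:vif_details still
# reports port_filter=True.
neutron.update_port(port_id,
                    {'port': {'binding:profile': {'port_filter': False}}})

port = neutron.show_port(port_id)['port']
print(port.get('binding:profile'), port.get('binding:vif_details'))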


I tried to find any code in Neutron that could potentially do this transfer 
from the incoming binding:profile into binding:vif_details, and found none.


I'd be very grateful if anybody can point me in the right direction.

And by the way, the reason I'm trying to do this is that I want to use one 
instance as NAT for another one in a private subnet. When the private 
instance pings 8.8.8.8, the reply gets dropped by the security rule in 
iptables on the TAP interface of the NAT instance, because the source is 
different from the NAT instance IP. So I suppose that port_filter is 
responsible for this behavior and that setting it to False will remove this 
restriction in iptables.


Best regards,
  Alex Levine


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] How to set port_filter in port binding?

2014-09-25 Thread Alexandre Levine

Sorry,

I managed to misplace my question into the existing thread.


On 9/26/14, 12:56 AM, Alexandre Levine wrote:

Hi All,

I'm looking for a way to set port_filter flag to False for port 
binding. Is there a way to do this in IceHouse or in current Juno 
code? I use devstack with the default ML2 plugin and configuration.


According to this guide 
(http://docs.openstack.org/api/openstack-network/2.0/content/binding_ext_ports.html) 
it should be done via binding:profile but it gets only recorded in the 
dictionary of binding:profile and doesn't get reflected in vif_details 
as supposed to.


I tried to find any code in Neutron that can potentially do this 
transferring from incoming binding:profile into binding:vif_details 
and found none.


I'd be very grateful if anybody can point me in the right direction.

And by the by the reason I'm trying to do this is because I want to 
use one instance as NAT for another one in private subnet. As a result 
of ping 8.8.8.8 from private instance to NAT instance the reply gets 
Dropped by the security rule in iptables on TAP interface of NAT 
instance because the source is different from the NAT instance IP. So 
I suppose that port_filter is responsible for this behavior and will 
remove this restriction in iptables.


Best regards,
  Alex Levine



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

