[openstack-dev] [api] API Workgroup git repository

2014-10-22 Thread Christopher Yeoh
Hi,

The API Workgroup git repository has been set up and you can access it
here:

http://git.openstack.org/cgit/openstack/api-wg/

There is some content there, though not all of the proposed guidelines from
the wiki page have been added yet:

https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines

Please feel free to start submitting patches to the document.

I have submitted a patch to convert the initial content from Markdown to
RST and to set up the tox targets to produce an HTML document. This seemed
the easier route, since RST is the preferred format for OpenStack projects
and we can just copy all the build/check bits from the specs repositories.
It also doesn't require any changes to the required packages.

https://review.openstack.org/130120
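
For anyone who wants to build the HTML locally, a minimal tox target of
roughly this shape would do the job (a sketch only, not the contents of the
patch above; the doc/source path and the oslosphinx theme are assumptions):

    [testenv:docs]
    deps = sphinx
           oslosphinx
    commands = sphinx-build -b html doc/source doc/build/html

Running "tox -e docs" would then leave the rendered guidelines under
doc/build/html.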

Until this is merged, it's probably better to base any patches on it.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Summit] Coordination between OpenStack lower layer virt stack (libvirt, QEMU/KVM)

2014-10-22 Thread Joe Gordon
On Oct 21, 2014 4:10 AM, Daniel P. Berrange berra...@redhat.com wrote:

 On Tue, Oct 21, 2014 at 12:58:48PM +0200, Kashyap Chamarthy wrote:
  I was discussing $subject on #openstack-nova, Nikola Dipanov suggested

Sounds like a great idea.

  it's worthwhile to bring this up on the list.
 
  I was looking at
 
  http://kilodesignsummit.sched.org/
 
  and noticed there's no specific session (correct me if I'm wrong) that's
  targeted at coordination between OpenStack - libvirt - QEMU/KVM.

 At previous summits, Nova has given each virt driver a dedicated session
 in its track. Those sessions have pretty much just been a walkthrough of
 the various features each virt team was planning.

 We always have far more topics to discuss than we have time available,
 and for this summit we want to change direction to maximise the value
 extracted from face-to-face meetings.

 As such any session which is just duplicating stuff that could easily be
 dealt with over email or irc is being cut, to make room for topics where
 we really need to have the f2f discussions. So the virt driver general
 sessions from previous summits are not likely to be on the schedule this
 time around.

Agreed, this mailing list is a great place to kick off the closer libvirt
QEMU/KVM discussions.


 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] BGPVPN implementation discussions

2014-10-22 Thread Mathieu Rohon
Hi keshava,

 Hi,



 1.   From where the MPLS traffic will be initiated ?

In this design, MPLS traffic will be initiated from a network node,
where the qrouter is located. However, we thought of an alternative design
where MPLS traffic is initiated on the compute node, directly from a
VM plugged into an exported IPVPN network. In this case, /32 routes would be
advertised, and the BGP speaker would be hosted on each compute node.
See here:

https://docs.google.com/drawings/d/1bMXiOwHsbKS89xfE0vQMtu7D9H3XV8Cvkmcoz6rzDOE/edit?usp=sharing


 2.   How it will be mapped ?

In the proposed design, the mapping in br-mpls will be done on the
destination network received via BGP, and on the in_port, to
distinguish traffic from each qrouter. In br-mpls, there would be one
internal port per qrouter.
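
To make that concrete, the flows programmed into br-mpls could look roughly
like the following (a hand-written sketch with made-up port numbers, labels
and prefixes, not output from the actual implementation):

    # Traffic from qrouter-1 (internal port 10) towards a prefix learned via
    # BGP: push the MPLS label advertised for that route, then forward it.
    ovs-ofctl add-flow br-mpls \
      "in_port=10,ip,nw_dst=10.1.0.0/24,actions=push_mpls:0x8847,set_field:100->mpls_label,output:1"

    # Same prefix reached from qrouter-2 (internal port 11), with its own label.
    ovs-ofctl add-flow br-mpls \
      "in_port=11,ip,nw_dst=10.1.0.0/24,actions=push_mpls:0x8847,set_field:200->mpls_label,output:1"

The in_port match is what distinguishes traffic from each qrouter's internal
port, and the nw_dst match carries the destination network received in BGP.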

Regards

Mathieu




 Regards,

 Keshava

 From: Damon Wang [mailto:damon.dev...@gmail.com]
 Sent: Friday, October 17, 2014 12:42 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron] BGPVPN implementation discussions



 Good news, +1



 2014-10-17 0:48 GMT+08:00 Mathieu Rohon mathieu.ro...@gmail.com:

 Hi all,

 As discussed during today's L3 meeting, we keep working on the BGPVPN
 service plugin implementation [1].
 MPLS encapsulation is now supported in OVS [2], so we would like to
 submit a design to leverage OVS capabilities. A first design proposal,
 based on the l3agent, can be found here:

 https://docs.google.com/drawings/d/1NN4tDgnZlBRr8ZUf5-6zzUcnDOUkWSnSiPm8LuuAkoQ/edit

 This solution is based on bagpipe [3] and its capacity to manipulate
 OVS based on advertised and learned routes.

 [1]https://blueprints.launchpad.net/neutron/+spec/neutron-bgp-vpn
 [2]https://raw.githubusercontent.com/openvswitch/ovs/master/FAQ
 [3]https://github.com/Orange-OpenSource/bagpipe-bgp


 Thanks

 Mathieu



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Cells conversation starter

2014-10-22 Thread Vineet Menon
On 22 October 2014 06:24, Tom Fifield t...@openstack.org wrote:

 On 22/10/14 03:07, Andrew Laski wrote:
 
  On 10/21/2014 04:31 AM, Nikola Đipanov wrote:
  On 10/20/2014 08:00 PM, Andrew Laski wrote:
  One of the big goals for the Kilo cycle by users and developers of the
  cells functionality within Nova is to get it to a point where it can be
  considered a first class citizen of Nova.  Ultimately I think this
 comes
  down to getting it tested by default in Nova jobs, and making it easy
  for developers to work with.  But there's a lot of work to get there.
  In order to raise awareness of this effort, and get the conversation
  started on a few things, I've summarized a little bit about cells and
  this effort below.
 
 
  Goals:
 
  Testing of a single cell setup in the gate.
  Feature parity.
  Make cells the default implementation.  Developers write code once and
  it works for  cells.
 
  Ultimately the goal is to improve maintainability of a large feature
  within the Nova code base.
 
  Thanks for the write-up Andrew! Some thoughts/questions below. Looking
  forward to the discussion on some of these topics, and would be happy to
  review the code once we get to that point.
 
  Feature gaps:
 
  Host aggregates
  Security groups
  Server groups
 
 
  Shortcomings:
 
  Flavor syncing
   This needs to be addressed now.
 
  Cells scheduling/rescheduling
  Instances can not currently move between cells
   These two won't affect the default one cell setup so they will be
  addressed later.
 
 
  What does cells do:
 
  Schedule an instance to a cell based on flavor slots available.
  Proxy API requests to the proper cell.
  Keep a copy of instance data at the global level for quick retrieval.
  Sync data up from a child cell to keep the global level up to date.
 
 
  Simplifying assumptions:
 
  Cells will be treated as a two level tree structure.
 
  Are we thinking of making this official by removing code that actually
  allows cells to be an actual tree of depth N? I am not sure if doing so
  would be a win, although it does complicate the RPC/Messaging/State code
  a bit, but if it's not being used, even though a nice generalization,
  why keep it around?
 
  My preference would be to remove that code since I don't envision anyone
  writing tests to ensure that functionality works and/or doesn't
  regress.  But there's the challenge of not knowing if anyone is actually
  relying on that behavior.  So initially I'm not creating a specific work
  item to remove it.  But I think it needs to be made clear that it's not
  officially supported and may get removed unless a case is made for
  keeping it and work is put into testing it.

 While I agree that N is a bit interesting, I have seen N=3 in production

 [central API]--[state/region1]--[state/region DC1]
              |               \--[state/region DC2]
              |--[state/region2 DC]
              |--[state/region3 DC]
              \--[state/region4 DC]

 I'm curious.
What are the use cases for this deployment? Presumably the root node runs n-api
along with horizon, key management, etc. What components are deployed in
tier 2 and tier 3?
And AFAIK, currently an OpenStack cell deployment isn't even restricted to a
tree but can be a DAG, since one cell can have multiple parents. Has anyone
come up with such a requirement?



 
  Plan:
 
  Fix flavor breakage in child cell which causes boot tests to fail.
  Currently the libvirt driver needs flavor.extra_specs which is not
  synced to the child cell.  Some options are to sync flavor and extra
  specs to child cell db, or pass full data with the request.
  https://review.openstack.org/#/c/126620/1 offers a means of passing
 full
  data with the request.
 
  Determine proper switches to turn off Tempest tests for features that
  don't work with the goal of getting a voting job.  Once this is in
 place
  we can move towards feature parity and work on internal refactorings.
 
  Work towards adding parity for host aggregates, security groups, and
  server groups.  They should be made to work in a single cell setup, but
  the solution should not preclude them from being used in multiple
  cells.  There needs to be some discussion as to whether a host
 aggregate
  or server group is a global concept or per cell concept.
 
  Have there been any previous discussions on this topic? If so I'd really
  like to read up on those to make sure I understand the pros and cons
  before the summit session.
 
  The only discussion I'm aware of is some comments on
  https://review.openstack.org/#/c/59101/ , though they mention a
  discussion at the Utah mid-cycle.
 
  The main con I'm aware of for defining these as global concepts is that
  there is no rescheduling capability in the cells scheduler.  So if a
  build is sent to a cell with a host aggregate that can't fit that
  instance the build will fail even though there may be space in that host
  aggregate from a global perspective.  That should be somewhat
  straightforward to 

Re: [openstack-dev] [neutron][stable] metadata agent performance

2014-10-22 Thread Jakub Libosvar
On 10/22/2014 02:26 AM, Maru Newby wrote:
 We merged caching support for the metadata agent in juno, and backported to 
 icehouse.  It was enabled by default in juno, but disabled by default in 
 icehouse to satisfy the stable maint requirement of not changing functional 
 behavior.
 
 While performance of the agent was improved with caching enabled, it 
 regressed a reported 8x when caching was disabled [1].  This means that by 
 default, the caching backport severely impacts icehouse Neutron's performance.
 
 So, what is the way forward?  We definitely need to document the problem for 
 both icehouse and juno.  Is documentation enough?  Or can we enable caching 
 by default in icehouse?  Or remove the backport entirely.
 
 There is also a proposal to replace the metadata agent’s use of the neutron 
 client in favor of rpc [2].  There were comments on an old bug suggesting we 
 didn’t want to do this [3], but assuming that we want this change in Kilo, is 
 backporting even a possibility given that it implies a behavioral change to 
 be useful?
 
 Thanks,
 
 
 Maru
 
 
 
 1: https://bugs.launchpad.net/cloud-archive/+bug/1361357
 2: https://review.openstack.org/#/c/121782
 3: https://bugs.launchpad.net/neutron/+bug/1092043
 
I thought the performance regression was caused by broken keystone token
caching, leading to re-authentication for every neutron client instance. The
fix was backported to Icehouse [1].

Does this mean that patch hasn't solved the problem and the regression is
somewhere else?

Kuba

[1] https://review.openstack.org/#/c/120418/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] move deallocate port to after the hypervisor driver detaches the interface when doing detach_interface

2014-10-22 Thread Eli Qiao

Hi all,
While reviewing the code in nova/compute/manager.py
I found that detach_interface deallocates the port from Neutron first
and then calls detach_interface on the hypervisor driver. What happens if the
hypervisor detach_interface fails? The result is that the port is still visible
on the guest but has been removed from Neutron, which seems inconsistent.

I have submitted a patch [1] proposing to remove the port on the Neutron side
only after the hypervisor detach_interface succeeds, and to keep the Neutron
port and log a message if detach_interface raises an exception.
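
Roughly, the proposed ordering is the following (a simplified sketch of the
idea only, not the code in the review; names are approximations of the
nova-compute manager code):

    def detach_interface(self, context, instance, port_id):
        try:
            # 1) ask the hypervisor driver to detach the interface first
            self.driver.detach_interface(instance, port_id)
        except Exception:
            # 2) on failure, keep the Neutron port and log, so the guest and
            #    Neutron stay consistent; re-raise for the caller
            LOG.exception('Failed to detach interface %s; leaving the '
                          'Neutron port allocated', port_id)
            raise
        # 3) deallocate the port from Neutron only once the detach succeeded
        self.network_api.deallocate_port_for_instance(context, instance,
                                                      port_id)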

Could someone kindly take a look at it and give some comments?

[1] https://review.openstack.org/#/c/130151/

-- 
Thanks,
Eli (Li Yong) Qiao

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance][Cinder] The sorry state of cinder's driver in Glance

2014-10-22 Thread Flavio Percoco
Greetings,

Back in Havana, a partially-implemented [0][1] Cinder driver was merged
into Glance to provide an easier and hopefully more consistent interaction
between Glance, Cinder and Nova when it comes to managing volume images
and booting from volumes.

While I still don't fully understand the need for this driver, I think
there's a bigger problem we need to solve now. We have a partially
implemented driver that is almost useless and is creating lots of
confusion among users who are willing to use it but keep hitting 500
errors, because there's nothing they can do with it except create
an image that points to an existing volume.

I'd like us to discuss what the exact plan for this driver moving
forward is, what is missing and whether it'll actually be completed
during Kilo.

If there's a slight chance it won't be completed in Kilo, I'd like to
propose getting rid of it - with a deprecation period, I guess - and
giving it another chance in the future when it can be fully implemented.

[0] https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver
[1] https://review.openstack.org/#/c/32864/

Cheers,
Flavio

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-22 Thread Stephen Balukoff
Hi Jorge!

Welcome back, eh! You've been missed.

Anyway, I just wanted to say that your proposal sounds great to me, and
it's good to finally be closer to having concrete requirements for logging,
eh. Once this discussion is nearing a conclusion, could you write up the
specifics of logging into a specification proposal document?

Regarding the discussion itself: I think we can ignore UDP for now, as
there doesn't seem to be high demand for it, and it certainly won't be
supported in v 0.5 of Octavia (and maybe not in v1 or v2 either, unless we
see real demand).

Regarding the 'real-time usage' information: I have some ideas about
getting this from a combination of iptables and/or the haproxy stats
interface. Were you thinking of something different that involves on-the-fly
analysis of the logs?  (I tend to find that logs are great for
non-real-time data, but they can be lacking if you need, say, a gauge like
'currently open connections'.)
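
For what it's worth, that kind of gauge is already exposed by haproxy's stats
socket. Assuming the amphora's haproxy config declares one (e.g. "stats socket
/var/run/haproxy.sock" in the global section), something like:

    echo "show stat" | socat stdio /var/run/haproxy.sock

returns CSV counters per frontend/backend, including scur (current sessions),
which an agent on the amphora could poll and expose through the API.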

One other thing: If there's a chance we'll be storing logs on the amphorae
themselves, then we need to have log rotation as part of the configuration
here. It would be silly to have an amphora failure just because its
ephemeral disk fills up, eh.

Stephen

On Wed, Oct 15, 2014 at 4:03 PM, Jorge Miramontes 
jorge.miramon...@rackspace.com wrote:

 Hey Octavia folks!


 First off, yes, I'm still alive and kicking. :)

 I'd like to start a conversation on usage requirements and have a few
 suggestions. I advocate that, since we will be using TCP and HTTP/HTTPS
 based protocols, we inherently enable connection logging for load
 balancers for several reasons:

 1) We can use these logs as the raw and granular data needed to track
 usage. With logs, the operator has flexibility as to what usage metrics
 they want to bill against. For example, bandwidth is easy to track and can
 even be split into header and body data so that the provider can choose if
 they want to bill on header data or not. Also, the provider can determine
 if they will bill their customers for failed requests that were the fault
 of the provider themselves. These are just a few examples; the point is
 the flexible nature of logs.

 2) Creating billable usage from logs is easy compared to other options
 like polling. For example, in our current LBaaS iteration at Rackspace we
 bill partly on average concurrent connections. This is based on polling
 and is not as accurate as it possibly can be. It's very close, but it
 doesn't get more accurate than the logs themselves. Furthermore, polling
 is more complex and uses up resources on the polling cadence.

 3) Enabling logs for all load balancers can be used for debugging, support
 and audit purposes. While the customer may or may not want their logs
 uploaded to swift, operators and their support teams can still use this
 data to help customers out with billing and setup issues. Auditing will
 also be easier with raw logs.

 4) Enabling logs for all load balancers will help mitigate uncertainty in
 terms of capacity planning. Imagine if every customer suddenly enabled
 logging after it had never been turned on. This could produce a spike in
 resource utilization that will be hard to manage. Enabling logs from the
 start means we are certain as to what to plan for other than the nature of
 the customer's traffic pattern.

 Some Cons I can think of (please add more as I think the pros outweigh the
 cons):

 1) If we ever add UDP based protocols then this model won't work.  1% of
 our load balancers at Rackspace are UDP based so we are not looking at
 using this protocol for Octavia. I'm more of a fan of building a really
 good TCP/HTTP/HTTPS based load balancer because UDP load balancing solves
 a different problem. For me different problem == different product.

 2) I'm assuming HA Proxy. Thus, if we choose another technology for the
 amphora then this model may break.


 Also, and more generally speaking, I have categorized usage into three
 categories:

 1) Tracking usage - this is usage that will be used by operators and
 support teams to gain insight into what load balancers are doing in an
 attempt to monitor potential issues.
 2) Billable usage - this is usage that is a subset of tracking usage used
 to bill customers.
 3) Real-time usage - this is usage that should be exposed via the API so
 that customers can make decisions that affect their configuration (e.g.
 based on the number of connections my web heads can handle, when
 should I add another node to my pool?).

 These are my preliminary thoughts, and I'd love to gain insight into what
 the community thinks. I have built about 3 usage collection systems thus
 far (1 with Brandon) and have learned a lot. Some basic rules I have
 discovered with collecting usage are:

 1) Always collect granular usage as it paints a picture of what actually
 happened. Massaged/un-granular usage == lost information.
 2) Never imply, always be explicit. Implications usually stem from bad
 

Re: [openstack-dev] Propose to define the compute capability clearly

2014-10-22 Thread Daniel P. Berrange
On Tue, Oct 21, 2014 at 09:41:44PM +, Jiang, Yunhong wrote:
 Hi, Daniel and all,
   This is a follow up to Daniel's 
 http://osdir.com/ml/openstack-dev/2014-10/msg00557.html , Info on XenAPI 
 data format for 'host_data' call. 
   I'm considering changing the compute capability to be a Nova object
 with well-defined fields, for two reasons: a) currently the compute capability
 is a dict returned from the hypervisor, and different hypervisors may return
 different values; b) currently the compute capability filter makes its decision
 by simply matching the flavor extra_specs against this not-well-defined
 dict, which is not good IMHO.

If by compute capability you mean the return value of get_available_resource,
then I've already got patches which turn that from a dict into an object,
which we're discussing here:

https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/virt-driver-get-available-resources-object,n,z
http://lists.openstack.org/pipermail/openstack-dev/2014-October/048784.html
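
For anyone who hasn't looked at the series yet, the general shape of such an
object is roughly the following (an illustrative sketch only, not the actual
patches; the class and field names here are made up):

    from nova.objects import base
    from nova.objects import fields


    class ComputeCapability(base.NovaObject):
        # Version 1.0: initial version
        VERSION = '1.0'

        fields = {
            'hypervisor_type': fields.StringField(),
            'hypervisor_version': fields.IntegerField(),
            'vcpus': fields.IntegerField(),
            'memory_mb': fields.IntegerField(),
        }

The point is that the scheduler filters then match against typed, documented
fields instead of whatever free-form dict a particular hypervisor returns.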

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Pulling nova/virt/hardware.py into nova/objects/

2014-10-22 Thread Murray, Paul (HP Cloud)
I spent a little time trying to work out a good way to include this kind of 
data in the ComputeNode object. You will have seen that I added the 
supported_instances reported to the RT by the virt drivers as a list of HVSpec 
– where HVSpec is a new nova object I created for the purpose.

 The rationale behind two parallel data model hierarchies is that the
 format the virt drivers report data in, is not likely to be exactly
 the same as the format that the resource tracker / scheduler wishes to
 use in the database.

Yeah, and in cases where we know where that line is, it makes sense to
use the lighter-weight modeling for sure.

Something that happens in some cases in the RT is that the data reported by the virt 
driver is modified by the RT – this is certainly the case for stats (which 
should probably not have been used by the virt driver – ironic in this case). 
In other cases memory, cpu, disk etc are split out into other fields at the 
moment (which is where Jay Pipes is working on a model for resources in 
general).

Where the data in an object field is not simple (e.g. lists of something) it is 
not really easy to work with in an object. You can’t just add to a list that is 
already stored in an object field, you need to make sure the object knows it 
has been updated. So the easiest way to work is to assign entire fields on the 
object and not change them in situ. That suggests to me that dealing with 
non-Nova-object data types makes sense as something separate from, say, the 
ComputeNode object.
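
To illustrate why in-place changes are awkward, here is a minimal toy (not the
real Nova object base class) whose change tracking, like the behaviour being
described above, only notices attribute assignment:

    class TrackedObject(object):
        """Toy change tracking: only attribute *assignment* is recorded."""
        def __init__(self):
            self.__dict__['_changed'] = set()

        def __setattr__(self, name, value):
            self._changed.add(name)
            self.__dict__[name] = value

    node = TrackedObject()
    node.supported_instances = ['x86_64/kvm/hvm']
    node._changed.clear()

    # In-place mutation bypasses __setattr__, so no change is recorded:
    node.supported_instances.append('i686/kvm/hvm')
    print(node._changed)            # empty set

    # The safe pattern: copy, modify, reassign the whole field.
    specs = list(node.supported_instances)
    specs.append('aarch64/kvm/hvm')
    node.supported_instances = specs
    print(node._changed)            # contains 'supported_instances'

Reassigning the whole field is what lets the object record the change.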

An alternative would be to work on the way Nova objects handle nested data 
structures (i.e. recognizing updates in nested structures, an API allowing for 
list manipulation etc.) It depends whether you think the objects are just doing 
versioning to support upgrade/mixed versions or are a general purpose object 
model.

Note that this happens with NUMA data structures and PCI as well at the moment.

 FWIW, my patch series is logically split up into two parts. The first
 10 or so patches are just thought of as general cleanup and useful to
 Nova regardless of what we decide to do. The second 10 or so patches
 are where the objects start appearing and getting used, and the controversial
 bits needing more detailed discussion.

Right, so after some discussion I think we should go ahead and merge the
bottom of this set (all of them are now acked I think) and continue the
discussion on the top half where the modeling is introduced.


Paul
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-22 Thread Day, Phil
 -Original Message-
 From: henry hly [mailto:henry4...@gmail.com]
 Sent: 08 October 2014 09:16
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
 cascading
 
 Hi,
 
 Good questions: why not just keeping multiple endpoints, and leaving
 orchestration effort in the client side?
 
 From feedback from some large data center operators, they want the cloud
 exposed to tenants as a single region with multiple AZs, while each AZ may be
 distributed across the same or different locations, very similar to the AZ
 concept in AWS. And the OpenStack API is indispensable for the cloud to be
 eco-system friendly.
 
 Cascading is mainly doing one thing: mapping each standalone child
 OpenStack to an AZ in the parent OpenStack, hiding the separate child
 endpoints, and thus converging them into a single standard OS-API endpoint.
 
 One of the obvious benefits of doing so is networking: we can create a single
 Router/LB with subnet/port members from different children, just like in a
 single OpenStack instance. Without the parent OpenStack working as the
 aggregation layer it is not so easy to do so; an explicit VPN endpoint may be
 required in each child.

I've read through the thread and the various links, and to me this still sounds 
an awful lot like having multiple regions in Keystone.

First of all I think we're in danger of getting badly mixed up in terminology 
here around AZs which is an awfully overloaded term - esp when we make 
comparisons to AWS AZs.  Whether we like the current OpenStack usage of these 
terms or not, let's at least stick to how they are currently defined and used in 
Openstack:

AZs - A scheduling concept in Nova and Cinder.  Simply provides some 
isolation semantic about a compute host or storage server.  Nothing to do 
with explicit physical or geographical location, although some degree of that 
(separate racks, power, etc) is usually implied.

Regions - A keystone concept for a collection of Openstack Endpoints.   They 
may be distinct (a completely isolated set of OpenStack services) or overlap 
(some shared services).  Openstack clients support explicit user selection of a 
region.

Cells - A scalability / fault-isolation concept within Nova.  Because Cells 
aspires to provide all Nova features transparently across cells, this kind of 
acts like multiple regions where only the Nova service is distinct (Networking 
has to be common, Glance has to be common or at least federated in a 
transparent way, etc).   The difference from regions is that the user doesn’t 
have to make an explicit region choice - they get a single Nova URL for all 
cells.   From what I remember Cells originally started out also using the 
existing APIs as the way to connect the Cells together, but had to move away 
from that because of the performance overhead of going through multiple layers.



Now with Cascading it seems that we're pretty much building on the Regions 
concept, wrapping it behind a single set of endpoints for user convenience, 
overloading the term AZ to re-expose those sets of services to allow the user 
to choose between them (doesn't this kind of negate the advantage of not having 
to specify the region in the client - is that really such a big deal for users 
?) , and doing something to provide a sort of federated Neutron service - 
because as we all know the hard part in all of this is how you handle the 
Networking.

It kind of feels to me that if we just concentrated on the part of this that is 
working out how to distribute/federate Neutron then we'd have a solution that 
could be mapped just as easily onto cells and/or regions - and I wonder then 
whether we really need yet another aggregation concept?

Phil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Cells conversation starter

2014-10-22 Thread Dheeraj Gupta
Thanks Andrew for this (very) exhaustive list.

As you have pointed out, for all the missing features (I think flavors
can also be a part of that list) the community needs to decide where
the info lives primarily (API or compute cells) and how it is
propagated (Synced, sent with the request, asked on demand etc.)

With regards to flavors, I think the attention has shifted to getting
extra_specs to sync with child cells, which isn't going to help much
because even instance_type isn't synced yet. And since instance_type
relies on auto-generated IDs, syncing would be a major headache (e.g. one
cell is down when a new flavor is created or deleted). Storing
extra_specs along with instance_system_metadata is a good alternative,
but if we assume API cells to be authoritative for flavors, then we
can simply pass the flavor information with the boot request to the child
cell, as sketched below. (This would also clean up non-cell Nova code which
currently performs multiple DB lookups for flavors based on the underlying
virt driver.)
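
Something along these lines, purely as an illustration of the idea (this is
not the actual cells RPC schema; all names and values here are made up):

    # The API cell, being authoritative for flavors, ships the complete
    # flavor -- including extra_specs -- with the build request, so the
    # child cell never has to look it up in its own database.
    build_request = {
        'instance_uuid': '6f7c8b0a-1111-4222-8333-444455556666',
        'image_ref': 'cirros-0.3.2',
        'flavor': {
            'flavorid': 'm1.small',
            'memory_mb': 2048,
            'vcpus': 1,
            'root_gb': 20,
            'extra_specs': {'some:key': 'some-value'},
        },
    }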

Any potential solution for these should also probably be
evaluated based on its compatibility with existing cell setups.

Maybe we should create a thread for each of these missing features to
discuss their solutions individually, or start the discussion in the
bug reports themselves.

Regards,
Dheeraj

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV] Meeting today ? 22nd October

2014-10-22 Thread MENDELSOHN, ITAI (ITAI)
Hi all,

Do we have a meeting today?
I can't see anything on the wiki about today...

Itai

Sent from my iPhone

 On Oct 8, 2014, at 2:06 AM, Steve Gordon sgor...@redhat.com wrote:
 
 Hi all,
 
 Just a quick reminder that the NFV subteam meets Wednesday 8th October 2014 @ 
 1400 UTC in #openstack-meeting-alt on FreeNode. I have started putting an 
 agenda together here, feel free to add:
 
https://etherpad.openstack.org/p/nfv-meeting-agenda
 
 Also many thanks to Itai for running the meeting for me last week.
 
 Thanks,
 
 Steve
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] periodic jobs for master

2014-10-22 Thread Thierry Carrez
Ihar Hrachyshka wrote:
 [...]
 For stable branches, we have so called periodic jobs that are
 triggered once in a while against the current code in a stable branch,
 and report to openstack-stable-maint@ mailing list. An example of
 failing periodic job report can be found at [2]. I envision that
 similar approach can be applied to test auxiliary features in gate. So
 once something is broken in master, the interested parties behind the
 auxiliary feature will be informed in due time.
 [...]

The main issue with periodic jobs is that since they are non-blocking,
they can get ignored really easily. It takes a bit of organization and
process to get those failures addressed.

It's only recently (and a lot thanks to you) that failures in the
periodic jobs for stable branches are being taken into account quickly
and seriously. For years the failures just lingered until they blocked
someone's work enough for that person to go and fix them.

So while I think periodic jobs are a good way to increase corner case
testing coverage, I am skeptical of our collective ability to have the
discipline necessary for them not to become a pain. We'll need a strict
process around them: identified groups of people signed up to act on
failure, and failure stats so that we can remove jobs that don't get
enough attention.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] metadata agent performance

2014-10-22 Thread Ihar Hrachyshka

On 22/10/14 02:26, Maru Newby wrote:
 We merged caching support for the metadata agent in juno, and
 backported to icehouse.  It was enabled by default in juno, but
 disabled by default in icehouse to satisfy the stable maint
 requirement of not changing functional behavior.
 
 While performance of the agent was improved with caching enabled,
 it regressed a reported 8x when caching was disabled [1].  This
 means that by default, the caching backport severely impacts
 icehouse Neutron's performance.

If I correctly follow the degradation scenario, it's caused by
unneeded tokens being requested from keystone each time a request hits the
neutron metadata agent. This should already be fixed by [1] (included
in the latest 2014.1.3 release).

[1]: https://review.openstack.org/#/c/118996/
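
For context, the general pattern at issue looks something like this (a
simplified illustration only, not the actual metadata-agent code; the config
option names are placeholders):

    from neutronclient.v2_0 import client as neutron_client

    # Anti-pattern: a fresh client per metadata request means a fresh
    # keystone authentication round-trip for every request.
    def handle_request_slow(conf, instance_id):
        neutron = neutron_client.Client(
            username=conf.admin_user, password=conf.admin_password,
            tenant_name=conf.admin_tenant_name, auth_url=conf.auth_url)
        return neutron.list_ports(device_id=instance_id)

    # Reusing one authenticated client (and hence its token) across requests
    # avoids hammering keystone.
    _CLIENT = None

    def get_client(conf):
        global _CLIENT
        if _CLIENT is None:
            _CLIENT = neutron_client.Client(
                username=conf.admin_user, password=conf.admin_password,
                tenant_name=conf.admin_tenant_name, auth_url=conf.auth_url)
        return _CLIENT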

 
 So, what is the way forward?  We definitely need to document the
 problem for both icehouse and juno.  Is documentation enough?  Or
 can we enable caching by default in icehouse?  Or remove the
 backport entirely.

If I'm correct, the issue is already solved in the latest Icehouse
release, so there seems to be no need to document the regression for
2014.1.2. But yeah, sure we could put it in its release notes just in
case.

 
 There is also a proposal to replace the metadata agent’s use of the
 neutron client in favor of rpc [2].  There were comments on an old
 bug suggesting we didn’t want to do this [3], but assuming that we
 want this change in Kilo, is backporting even a possibility given
 that it implies a behavioral change to be useful?

We probably wouldn't consider backporting it to stable branches
because it touches RPC API, and we usually avoid it there. Anyway, it
shouldn't be an issue at all (as per my comment above).

 
 Thanks,
 
 
 Maru
 
 
 
 1: https://bugs.launchpad.net/cloud-archive/+bug/1361357 2:
 https://review.openstack.org/#/c/121782 3:
 https://bugs.launchpad.net/neutron/+bug/1092043
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] periodic jobs for master

2014-10-22 Thread Chris Dent

On Wed, 22 Oct 2014, Thierry Carrez wrote:


So while I think periodic jobs are a good way to increase corner case
testing coverage, I am skeptical of our collective ability to have the
discipline necessary for them not to become a pain. We'll need a strict
process around them: identified groups of people signed up to act on
failure, and failure stats so that we can remove jobs that don't get
enough attention.


It's a bummer that we often find ourselves turning to process to
make up for a lack of discipline. If that's how it has to be, how about
we make sure the pain is easy to feel? So, for example, if there are
periodic jobs on master and they've just failed for a project, how
about just closing the gate for that project until the failure
identified by the periodic job is fixed?

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Summit] Coordination between OpenStack lower layer virt stack (libvirt, QEMU/KVM)

2014-10-22 Thread Kashyap Chamarthy
On Tue, Oct 21, 2014 at 12:07:41PM +0100, Daniel P. Berrange wrote:
 On Tue, Oct 21, 2014 at 12:58:48PM +0200, Kashyap Chamarthy wrote:
  I was discussing $subject on #openstack-nova, Nikola Dipanov suggested
  it's worthwhile to bring this up on the list.
  
  I was looking at 
  
  http://kilodesignsummit.sched.org/
  
  and noticed there's no specific session (correct me if I'm wrong) that's
  targeted at coordination between OpenStack - libvirt - QEMU/KVM.
 
 At previous summits, Nova has given each virt driver a dedicated session
 in its track. Those sessions have pretty much just been a walkthrough of
 the various features each virt team was planning.
 
 We always have far more topics to discuss than we have time available,
 and for this summit we want to change direction to maximise the value
 extracted from face-to-face meetings.
 
 As such any session which is just duplicating stuff that could easily be
 dealt with over email or irc is being cut, to make room for topics where
 we really need to have the f2f discussions. So the virt driver general
 sessions from previous summits are not likely to be on the schedule this
 time around.
 
Optimized for efficiency, good to know, thanks.

-- 
/kashyap

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding new dependencies to stackforge projects

2014-10-22 Thread Davanum Srinivas
Matt,

I've submitted a review to remove the gate-nova-docker-requirements
from nova-docker:

https://review.openstack.org/#/c/130192/

I am good with treating the current situation with the DSVM jobs as a
bug if there is consensus. I'll try to dig in, but we may need Dean,
Sean etc. to help figure it out :)

thanks,
dims

On Tue, Oct 21, 2014 at 8:42 PM, Matthew Treinish mtrein...@kortar.org wrote:
 On Tue, Oct 21, 2014 at 08:09:38PM -0400, Davanum Srinivas wrote:
 Hi all,

 On the cross project meeting today, i promised to bring this to the
 ML[1]. So here it is:

 Question : Can a StackForge project (like nova-docker), depend on a
 library (docker-py) that is not specified in global requirements?

 So the answer is definitely yes, and this is definitely the case for most
 projects which aren't in the integrated release. We should only be enforcing
 requirements on projects in projects.txt in the requirements repo.


 Right now the answer seems to be No, as enforced by the CI systems.
 For the specific problems, see review:
 https://review.openstack.org/#/c/130065/

 You can see that check-tempest-dsvm-f20-docker fails:
 http://logs.openstack.org/65/130065/1/check/check-tempest-dsvm-f20-docker/f9000d4/devstacklog.txt.gz

 I think you've just hit a bug either in devstack or the nova-docker devstack
 bits. There isn't any reason these checks should be run on a project which
 isn't being tracked by global requirements.


 and the gate-nova-docker-requirements fails:
 http://logs.openstack.org/65/130065/1/check/gate-nova-docker-requirements/34256d2/console.html


 I'm not sure why this job is configured to be running on the nova-docker repo.
 The project should either decide to track global-requirements and then be 
 added
 to projects.txt or not run the requirements check job. It doesn't make much
 sense to enforce compliance with global requirements if the project is trying 
 to
 use libraries not included there.

 Just remove the job template from the zuul layout for nova-docker:
 http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml#n4602

 and then once the issue with devstack is figured out you can add the docker-py
 to the requirements list.

 For this specific instance, the reason for adding this dependency is
 to get rid of the custom HTTP client in the nova-docker project, which
 just duplicates the functionality, needs to be maintained, and does not
 do proper checking etc. But the question is more general
 in the broader sense: projects should be able to add dependencies and
 be able to run DSVM and requirements jobs until
 they are integrated, and the delta list of new dependencies to global
 requirements should be vetted during the process.

 If nova-docker isn't tracked by global requirements then there shouldn't be
 anything blocking you from adding docker-py to the nova-docker requirements.
 It looks like you're just hitting a bug and/or a configuration issue. Granted,
 there might be some complexity in moving the driver back into the nova tree if
 there are dependencies on packages not in global requirements, but that's
 something that can be addressed when/if the driver is being merged back into
 nova.


 Thanks,
 dims

 PS: A really long rambling version of this email with a proposal to
 add a flag in devstack-gate/devstack is at [2]. The actual review, with hacks
 to get DSVM running by hook or by crook that show docker-py can indeed be
 used, is at [3].

 [1] 
 http://eavesdrop.openstack.org/meetings/project/2014/project.2014-10-21-21.02.log.html
 [2] 
 https://etherpad.openstack.org/p/managing-reqs-for-projects-to-be-integrated
 [3] https://review.openstack.org/#/c/128790/


 -Matt Treinish





-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-22 Thread Robert van Leeuwen
 I,d like to start a conversation on usage requirements and have a few
 suggestions. I advocate that, since we will be using TCP and HTTP/HTTPS
 based protocols, we inherently enable connection logging for load
 balancers for several reasons:

Just a request from the operator side of things:
please think about scalability when storing all the logs.

E.g. we are currently logging HTTP requests for one load-balanced application
(which would be a fit for LBaaS).
It is about 500 requests per second, which adds up to 40GB per day (in
Elasticsearch).
Please make sure whatever solution is chosen can cope with machines doing
1000s of requests per second...
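
To put rough numbers on that (back-of-the-envelope only): 500 requests/s is
about 500 x 86,400 = ~43 million log entries per day, so 40GB/day works out
to roughly 1KB stored per request. A load balancer doing 5,000 requests/s
would then already be in the region of 400GB/day before any replication,
which is the scale an always-on logging design has to plan for.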

Cheers,
Robert van Leeuwen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] metadata agent performance

2014-10-22 Thread Akihiro Motoki
My understanding is the same as Ihar's, and we no longer have the degradation
in the latest Icehouse update. There was a degradation in 2014.1.2 [2],
but the fix was backported in 2014.1.3 [1].
We don't need to worry about backporting when considering the metadata RPC
patch.

[1] https://review.openstack.org/#/c/120418/
[2] https://review.openstack.org/#/c/95491/

Thanks,
Akihiro


On Wed, Oct 22, 2014 at 7:24 PM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 On 22/10/14 02:26, Maru Newby wrote:
 We merged caching support for the metadata agent in juno, and
 backported to icehouse.  It was enabled by default in juno, but
 disabled by default in icehouse to satisfy the stable maint
 requirement of not changing functional behavior.

 While performance of the agent was improved with caching enabled,
 it regressed a reported 8x when caching was disabled [1].  This
 means that by default, the caching backport severely impacts
 icehouse Neutron's performance.

 If I correctly follow the degradation scenario, it's caused by
 unneeded tokens requested from keystone each time a request hits
 neutron metadata agent. This should be already fixed as [1] (included
 in the latest 2014.1.3 release).

 [1]: https://review.openstack.org/#/c/118996/


 So, what is the way forward?  We definitely need to document the
 problem for both icehouse and juno.  Is documentation enough?  Or
 can we enable caching by default in icehouse?  Or remove the
 backport entirely.

 If I'm correct, the issue is already solved in the latest Icehouse
 release, so there seems to be no need to document the regression for
 2014.1.2. But yeah, sure we could put it in its release notes just in
 case.


 There is also a proposal to replace the metadata agent’s use of the
 neutron client in favor of rpc [2].  There were comments on an old
 bug suggesting we didn’t want to do this [3], but assuming that we
 want this change in Kilo, is backporting even a possibility given
 that it implies a behavioral change to be useful?

 We probably wouldn't consider backporting it to stable branches
 because it touches RPC API, and we usually avoid it there. Anyway, it
 shouldn't be an issue at all (as per my comment above).


 Thanks,


 Maru



 1: https://bugs.launchpad.net/cloud-archive/+bug/1361357 2:
 https://review.openstack.org/#/c/121782 3:
 https://bugs.launchpad.net/neutron/+bug/1092043





-- 
Akihiro Motoki amot...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Cinder] The sorry state of cinder's driver in Glance

2014-10-22 Thread Zhi Yan Liu
Greetings,

On Wed, Oct 22, 2014 at 4:56 PM, Flavio Percoco fla...@redhat.com wrote:
 Greetings,

 Back in Havana a, partially-implemented[0][1], Cinder driver was merged
 in Glance to provide an easier and hopefully more consistent interaction
 between glance, cinder and nova when it comes to manage volume images
 and booting from volumes.

In my view, it is not only for the VM provisioning and consumption
use case, but also for implementing a consistent and unified block storage
backend for the image store.  For historical reasons we have implemented
a lot of duplicated block storage drivers between Glance and Cinder.
IMO, Cinder can be regarded as a full-functional block storage backend
from OpenStack's perspective (I mean it contains both the data and control
planes), so Glance could just leverage Cinder as a unified block storage
backend. Essentially, Glance has two kinds of drivers, block storage
drivers and object storage drivers (e.g. the Swift and S3 drivers). From
that point of view, I consider giving Glance a Cinder driver very
sensible: it provides a unified and consistent way to access
different kinds of block backends instead of implementing duplicated
drivers in both projects.

I realize some people are tired of seeing similar drivers implemented in
different projects again and again, but at least I think this one is a
harmless and beneficial feature/driver.


 While I still don't fully understand the need of this driver, I think
 there's a bigger problem we need to solve now. We have a partially
 implemented driver that is almost useless and it's creating lots of
 confusion in users that are willing to use it but keep hitting 500
 errors because there's nothing they can do with it except for creating
 an image that points to an existing volume.

 I'd like us to discuss what the exact plan for this driver moving
 forward is, what is missing and whether it'll actually be completed
 during Kilo.

I'd like to enhance the Cinder driver of course, but currently it is blocked
on one thing: it needs a way that people agree is correct [0] to access
volumes from Glance (for both the data and control planes, e.g. creating
an image and uploading the bits). During the H cycle I was told Cinder would
soon release a separate lib, called Brick [1], which could be used by other
projects to let them access volumes directly from Cinder, but it seems it
still isn't ready to use even now. But anyway, we can talk about this with
the Cinder team to get Brick moving forward.

[0] https://review.openstack.org/#/c/20593/
[1] https://wiki.openstack.org/wiki/CinderBrick

I would really appreciate it if somebody could show me a clear plan/status
for CinderBrick; I still think it's a good way to go for the Glance Cinder
driver.


 If there's a slight chance it won't be completed in Kilo, I'd like to
 propose getting rid of it - with a deprecation period, I guess - and
 giving it another chance in the future when it can be fully implemented.

 [0] https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver
 [1] https://review.openstack.org/#/c/32864/


It obviously depends, according to my above information, but I'd like to try.

zhiyan

 Cheers,
 Flavio

 --
 @flaper87
 Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Triple-O] Openstack Onboarding

2014-10-22 Thread James Slagle
On Tue, Oct 21, 2014 at 1:08 PM, Clint Byrum cl...@fewbar.com wrote:
 So Tuskar would be a part of that deployment cloud, and would ask you
 things about your hardware, your desired configuration, and help you
 get the inventory loaded.

 So, ideally our gate would leave the images we test as part of the
 artifacts for build, and we could just distribute those as part of each
 release. That probably wouldn't be too hard to do, but those images
 aren't exactly small so I would want to have some kind of strategy for
 distributing them and limiting the unique images users are exposed to so
 we're not encouraging people to run CD by downloading each commit's image.

I think the downloadable images would be great. We've done such a
thing for RDO.  And (correct me if I'm wrong), but I think the Helion
community distro does so as well? If that's the use case that seems to
work well downstream, it'd be nice to have a similar model upstream as
well.

For a bootstrap process just to try things out or get setup for
development, we could skip one layer and go straight from the seed to
the overcloud. In such a scenario, it may make sense to refer to the
seed as the undercloud since it's also your deployment cloud, and so
tuskar would likely be installed there.

We could also explore using the same image for the seed and
undercloud, and that would give folks one less image to
build/download.

Agreed that this will be good to discuss at the contributor meetup at
the summit. I think the first 2 bullet points on the etherpad are
pretty related actually (Adam's OpenStack Setup and James Polley's
Pathways into TripleO).

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gantt] Scheduler group meeting - cancelled this week

2014-10-22 Thread Dugger, Donald D
Just a reminder that, as we mentioned last week, no meeting today.

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] Scheduler group meeting - cancelled this week

2014-10-22 Thread Ed Leafe

On 10/22/2014 08:03 AM, Dugger, Donald D wrote:
 Just a reminder that, as we mentioned last week, no meeting today.

The meetings are supposed to be on Tuesday, no? And yeah, no one showed
up yesterday.


-- Ed Leafe

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding new dependencies to stackforge projects

2014-10-22 Thread Davanum Srinivas
Dear requirements-core folks,

Here's the review as promised:
https://review.openstack.org/130210

Thanks,
dims

On Wed, Oct 22, 2014 at 7:27 AM, Davanum Srinivas dava...@gmail.com wrote:
 Matt,

 I've submitted a review to remove the gate-nova-docker-requirements
 from nova-docker:

 https://review.openstack.org/#/c/130192/

 I am good with treating the current situation with DSVM jobs as we
 bug if there is consensus. I'll try to dig in, but we may need Dean,
 Sean etc to help figure it out :)

 thanks,
 dims

 On Tue, Oct 21, 2014 at 8:42 PM, Matthew Treinish mtrein...@kortar.org 
 wrote:
 On Tue, Oct 21, 2014 at 08:09:38PM -0400, Davanum Srinivas wrote:
 Hi all,

 On the cross project meeting today, i promised to bring this to the
 ML[1]. So here it is:

 Question : Can a StackForge project (like nova-docker), depend on a
 library (docker-py) that is not specified in global requirements?

 So the answer is definitely yes, and this is definitely the case for most
 projects which aren't in the integrated release. We should only be enforcing
 requirements on projects in projects.txt in the requirements repo.


 Right now the answer seems to be No, as enforced by the CI systems.
 For the specific problems, see review:
 https://review.openstack.org/#/c/130065/

 You can see that check-tempest-dsvm-f20-docker fails:
 http://logs.openstack.org/65/130065/1/check/check-tempest-dsvm-f20-docker/f9000d4/devstacklog.txt.gz

 I think you've just hit a bug either in devstack or the nova-docker devstack
 bits. There isn't any reason these checks should be run on a project which
 isn't being tracked by global requirements.


 and the gate-nova-docker-requirements fails:
 http://logs.openstack.org/65/130065/1/check/gate-nova-docker-requirements/34256d2/console.html


 I'm not sure why this job is configured to be running on the nova-docker 
 repo.
 The project should either decide to track global-requirements and then be 
 added
 to projects.txt or not run the requirements check job. It doesn't make much
 sense to enforce compliance with global requirements if the project is 
 trying to
 use libraries not included there.

 Just remove the job template from the zuul layout for nova-docker:
 http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml#n4602

 and then once the issue with devstack is figured out you can add the 
 docker-py
 to the requirements list.

 For this specific instance, the reason for adding this dependency is
 to get rid of custom http client in nova-docker project that
 just duplicates the functionality, needs to be maintained and does not
 do proper checking etc. But the question is general
 in the broader since projects should be able to add dependencies and
 be able to run dsvm and requirements jobs until
 they are integrated and the delta list of new dependencies to global
 requirements should be vetted during the process.

 If nova-docker isn't tracked by global requirements then there shouldn't be
 anything blocking you from adding docker-py to the nova-docker requirements. 
 It
 looks like your just hitting a bug and/or a configuration issue. Granted, 
 there
 might be some complexity in moving the driver back into the nova tree if 
 there
 are dependencies on a packages not in global requirements, but that's 
 something
 that can be addressed when/if the driver is being merged back into nova.


 Thanks,
 dims

 PS: A really long rambling version of this email with a proposal to
 add a flag in devstack-gate/devstack is at [2], Actual review
 with hacks to get DSVM running by hook/crook that shows that docker-py
 indeed be used is at [3]

 [1] 
 http://eavesdrop.openstack.org/meetings/project/2014/project.2014-10-21-21.02.log.html
 [2] 
 https://etherpad.openstack.org/p/managing-reqs-for-projects-to-be-integrated
 [3] https://review.openstack.org/#/c/128790/


 -Matt Treinish





 --
 Davanum Srinivas :: https://twitter.com/dims



-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Triple-O] Openstack Onboarding

2014-10-22 Thread Ghe Rivero


On 22/10/14 14:40, James Slagle wrote:
 On Tue, Oct 21, 2014 at 1:08 PM, Clint Byrum cl...@fewbar.com wrote:
 So Tuskar would be a part of that deployment cloud, and would ask you
 things about your hardware, your desired configuration, and help you
 get the inventory loaded.

 So, ideally our gate would leave the images we test as part of the
 artifacts for build, and we could just distribute those as part of each
 release. That probably wouldn't be too hard to do, but those images
 aren't exactly small so I would want to have some kind of strategy for
 distributing them and limiting the unique images users are exposed to so
 we're not encouraging people to run CD by downloading each commit's image.
 
 I think the downloadable images would be great. We've done such a
 thing for RDO.  And (correct me if I'm wrong), but I think the Helion
 community distro does so as well? If that's the use case that seems to
 work well downstream, it'd be nice to have a similar model upstream as
 well.

There has been a patch hanging around for some time to retrieve images
and use them for a deployment. https://review.openstack.org/#/c/85130/

I rebased it one week ago, but apparently it needs another rebase. I will do that today.

Ghe Rivero

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] Scheduler group meeting - cancelled this week

2014-10-22 Thread Dugger, Donald D
Sigh.  I've progressed from being challenged by time of day (e.g. timezone) to 
being challenged by day of week.  Pretty soon I'll be confused about the year 
:-)

Sorry about that.

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

-Original Message-
From: Ed Leafe [mailto:e...@leafe.com] 
Sent: Wednesday, October 22, 2014 7:13 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [gantt] Scheduler group meeting - cancelled this 
week


On 10/22/2014 08:03 AM, Dugger, Donald D wrote:
 Just a reminder that, as we mentioned last week, no meeting today.

The meetings are supposed to be on Tuesday, no? And yeah, no one showed up 
yesterday.


-- Ed Leafe

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Generate documentation with Falcon api

2014-10-22 Thread ZIBA Romain
Hello everyone,

Is it possible to generate documentation while using Falcon API framework?

Thanks beforehand & best regards,
Romain Ziba.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Cinder] The sorry state of cinder's driver in Glance

2014-10-22 Thread Flavio Percoco
On 10/22/2014 02:30 PM, Zhi Yan Liu wrote:
 Greetings,
 
 On Wed, Oct 22, 2014 at 4:56 PM, Flavio Percoco fla...@redhat.com wrote:
 Greetings,

 Back in Havana, a partially-implemented [0][1] Cinder driver was merged
 in Glance to provide an easier and hopefully more consistent interaction
 between glance, cinder and nova when it comes to manage volume images
 and booting from volumes.
 
 With my idea, it not only for VM provisioning and consuming feature
 but also for implementing a consistent and unified block storage
 backend for image store.  For historical reasons, we have implemented
 a lot of duplicated block storage drivers between glance and cinder,
 IMO, cinder could regard as a full-functional block storage backend
 from OpenStack's perspective (I mean it contains both data and control
 plane), glance could just leverage cinder as a unified block storage
 backend. Essentially, Glance has two kind of drivers, block storage
 driver and object storage driver (e.g. swift and s3 driver),  from
 above opinion, I consider to give glance a cinder driver is very
 sensible, it could provide a unified and consistent way to access
 different kind of block backend instead of implement duplicated
 drivers in both projects.

Let me see if I got this right. You're suggesting to have a cinder
driver in Glance so we can basically remove the
'create-volume-from-image' functionality from Cinder. is this right?

 I see some people like to see implementing similar drivers in
 different projects again and again, but at least I think this is a
 hurtless and beneficial feature/driver.

It's not as harmless as it seems. There are many users confused as to
what the use case of this driver is. For example, should users create
volumes from images? or should the create images that are then stored in
a volume? What's the difference?

Technically, the answer is probably none, but from a deployment and
usability perspective, there's a huge difference that needs to be
considered.

I'm not saying it's a bad idea, I'm just saying we need to get this
story straight and probably just pick one (? /me *shrugs*)

 While I still don't fully understand the need of this driver, I think
 there's a bigger problem we need to solve now. We have a partially
 implemented driver that is almost useless and it's creating lots of
 confusion in users that are willing to use it but keep hitting 500
 errors because there's nothing they can do with it except for creating
 an image that points to an existing volume.

 I'd like us to discuss what the exact plan for this driver moving
 forward is, what is missing and whether it'll actually be completed
 during Kilo.
 
 I'd like to enhance the cinder driver of course, but currently it is blocked
 on one thing: it needs a way that people agree is correct [0] to access
 volumes from Glance (for both the data and control planes, e.g. creating
 an image and uploading bits). During the H cycle I was told cinder would
 release a separate lib soon, called Brick [0], which could be used by other
 projects to let them access volumes directly from cinder, but it still
 doesn't seem ready to use even now. Anyway, we can talk about this with the
 cinder team to get Brick moving forward.
 
 [0] https://review.openstack.org/#/c/20593/
 [1] https://wiki.openstack.org/wiki/CinderBrick
 
 I really appreciated if somebody could show me a clear plan/status on
 CinderBrick, I still think it's a good way to go for glance cinder
 driver.

+1 Mike? John ? Any extra info here?

If the brick's lib is not going to be released before k-2, I think we
should just remove this driver until we can actually complete the work.

As it is right now, it doesn't add any benefit and there's nothing this
driver adds that cannot be done already (creating volumes from images,
that is).

 If there's a slight chance it won't be completed in Kilo, I'd like to
 propose getting rid of it - with a deprecation period, I guess - and
 giving it another chance in the future when it can be fully implemented.

 [0] https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver
 [1] https://review.openstack.org/#/c/32864/

Fla


-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-22 Thread Kyle Mestery
There are currently at least two BPs registered for VLAN trunk support
to VMs in neutron-specs [1] [2]. This is clearly something that I'd
like to see us land in Kilo, as it enables a bunch of things for the
NFV use cases. I'm going to propose that we talk about this at an
upcoming Neutron meeting [3]. Given the rotating schedule of this
meeting, and the fact the Summit is fast approaching, I'm going to
propose we allocate a bit of time in next Monday's meeting to discuss
this. It's likely we can continue this discussion F2F in Paris as
well, but getting a head start would be good.

Thanks,
Kyle

[1] https://review.openstack.org/#/c/94612/
[2] https://review.openstack.org/#/c/97714
[3] https://wiki.openstack.org/wiki/Network/Meetings

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-22 Thread Jorge Miramontes
Hey Stephen (and Robert),

For real-time usage I was thinking something similar to what you are proposing. 
Using logs for this would be overkill IMO so your suggestions were what I was 
thinking of starting with.

As far as storing logs is concerned I was definitely thinking of offloading 
these onto separate storage devices. Robert, I totally hear you on the 
scalability part as our current LBaaS setup generates TB of request logs. I'll 
start planning out a spec and then I'll let everyone chime in there. I just 
wanted to get a general feel for the ideas I had mentioned. I'll also bring it 
up in today's meeting.

Cheers,
--Jorge

From: Stephen Balukoff sbaluk...@bluebox.net
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, October 22, 2014 4:04 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hi Jorge!

Welcome back, eh! You've been missed.

Anyway, I just wanted to say that your proposal sounds great to me, and it's 
good to finally be closer to having concrete requirements for logging, eh. Once 
this discussion is nearing a conclusion, could you write up the specifics of 
logging into a specification proposal document?

Regarding the discussion itself: I think we can ignore UDP for now, as there 
doesn't seem to be high demand for it, and it certainly won't be supported in v 
0.5 of Octavia (and maybe not in v1 or v2 either, unless we see real demand).

Regarding the 'real-time usage' information: I have some ideas regarding 
getting this from a combination of iptables and / or the haproxy stats 
interface. Were you thinking something different that involves on-the-fly 
analysis of the logs or something?  (I tend to find that logs are great for 
non-real time data, but can often be lacking if you need, say, a gauge like 
'currently open connections' or something.)
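
For example, something along these lines on the amphora itself -- a rough
sketch, assuming the haproxy config exposes a stats socket (socket path and
stats level are illustrative):

    # in haproxy.cfg (global section):
    #   stats socket /var/run/haproxy-stats.sock mode 600 level operator
    echo "show stat" | socat unix-connect:/var/run/haproxy-stats.sock stdio \
        | cut -d, -f1,2,5,9,10
    # pxname,svname,scur,bin,bout -- scur is the 'currently open connections'
    # gauge; verify the column numbers against the CSV header haproxy prints.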

One other thing: If there's a chance we'll be storing logs on the amphorae 
themselves, then we need to have log rotation as part of the configuration 
here. It would be silly to have an amphora failure just because its ephemeral 
disk fills up, eh.
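
Something as small as this would probably do -- a minimal sketch, with the
paths and retention purely illustrative:

    cat > /etc/logrotate.d/amphora-haproxy <<'EOF'
    /var/log/haproxy*.log {
        daily
        rotate 3
        compress
        missingok
        copytruncate
    }
    EOF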

Stephen

On Wed, Oct 15, 2014 at 4:03 PM, Jorge Miramontes 
jorge.miramon...@rackspace.com wrote:
Hey Octavia folks!


First off, yes, I'm still alive and kicking. :)

I'd like to start a conversation on usage requirements and have a few
suggestions. I advocate that, since we will be using TCP and HTTP/HTTPS
based protocols, we inherently enable connection logging for load
balancers for several reasons:

1) We can use these logs as the raw and granular data needed to track
usage. With logs, the operator has flexibility as to what usage metrics
they want to bill against. For example, bandwidth is easy to track and can
even be split into header and body data so that the provider can choose if
they want to bill on header data or not. Also, the provider can determine
if they will bill their customers for failed requests that were the fault
of the provider themselves. These are just a few examples; the point is
the flexible nature of logs (see the rough parsing sketch after this list).

2) Creating billable usage from logs is easy compared to other options
like polling. For example, in our current LBaaS iteration at Rackspace we
bill partly on average concurrent connections. This is based on polling
and is not as accurate as it possibly can be. It's very close, but it
doesn't get more accurate than the logs themselves. Furthermore, polling
is more complex and uses up resources on the polling cadence.

3) Enabling logs for all load balancers can be used for debugging, support
and audit purposes. While the customer may or may not want their logs
uploaded to swift, operators and their support teams can still use this
data to help customers out with billing and setup issues. Auditing will
also be easier with raw logs.

4) Enabling logs for all load balancers will help mitigate uncertainty in
terms of capacity planning. Imagine if every customer suddenly enabled
logs without it ever being turned on. This could produce a spike in
resource utilization that will be hard to manage. Enabling logs from the
start means we are certain as to what to plan for other than the nature of
the customer's traffic pattern.
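
As a taste of how cheap the log-based accounting can be, here is a very rough
parsing sketch for per-frontend response bytes from an haproxy HTTP log. Field
positions depend on the syslog prefix and log-format in use, so check a sample
line before trusting $8 (frontend name) and $12 (bytes_read):

    awk '{ bytes[$8] += $12 }
         END { for (fe in bytes) printf "%s %d\n", fe, bytes[fe] }' \
        /var/log/haproxy.log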

Some Cons I can think of (please add more as I think the pros outweigh the
cons):

1) If we ever add UDP based protocols then this model won't work.  1% of
our load balancers at Rackspace are UDP based so we are not looking at
using this protocol for Octavia. I'm more of a fan of building a really
good TCP/HTTP/HTTPS based load balancer because UDP load balancing solves
a different problem. For me different problem == different product.

2) I'm assuming HA Proxy. Thus, if we choose another technology for 

Re: [openstack-dev] [neutron][stable] metadata agent performance

2014-10-22 Thread Maru Newby
On Oct 22, 2014, at 12:53 AM, Jakub Libosvar libos...@redhat.com wrote:

 On 10/22/2014 02:26 AM, Maru Newby wrote:
 We merged caching support for the metadata agent in juno, and backported to 
 icehouse.  It was enabled by default in juno, but disabled by default in 
 icehouse to satisfy the stable maint requirement of not changing functional 
 behavior.
 
 While performance of the agent was improved with caching enabled, it 
 regressed a reported 8x when caching was disabled [1].  This means that by 
 default, the caching backport severely impacts icehouse Neutron's 
 performance.
 
 So, what is the way forward?  We definitely need to document the problem for 
 both icehouse and juno.  Is documentation enough?  Or can we enable caching 
 by default in icehouse?  Or remove the backport entirely.
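
 For operators who do want it on icehouse, enabling the cache should be a
 one-line config change.  A hedged sketch -- if memory serves the option is
 cache_url, but please check the sample metadata_agent.ini that ships with
 the backport before relying on this:

     # option name, value and service name taken from memory -- verify first
     crudini --set /etc/neutron/metadata_agent.ini DEFAULT cache_url \
         'memory://?default_ttl=5'
     service neutron-metadata-agent restart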
 
 There is also a proposal to replace the metadata agent’s use of the neutron 
 client in favor of rpc [2].  There were comments on an old bug suggesting we 
 didn’t want to do this [3], but assuming that we want this change in Kilo, 
 is backporting even a possibility given that it implies a behavioral change 
 to be useful?
 
 Thanks,
 
 
 Maru
 
 
 
 1: https://bugs.launchpad.net/cloud-archive/+bug/1361357
 2: https://review.openstack.org/#/c/121782
 3: https://bugs.launchpad.net/neutron/+bug/1092043
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 I thought the performance regression was caused by wrong keystone token
 caching leading to authentication per neutron client instance. Fix was
 backported to Icehouse [1].
 
 Does it mean this patch hasn't solved the problem and regression is
 somewhere else?

As you say (and as per Ihar and Akihiro’s responses), the problem was fixed.  I 
was confused by the bug that Oleg's RPC proposal had targeted as a related bug 
and thought the problem wasn’t yet resolved, but looking at it again it appears 
the bug is a Ubuntu distro issue.  Accordingly, I’ve removed Neutron as a 
target for that bug.  Apologies for the confusion.


Maru 

 Kuba
 
 [1] https://review.openstack.org/#/c/120418/
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] HA of dhcp agents?

2014-10-22 Thread Armando M.
Hi Noel,

On 22 October 2014 01:57, Noel Burton-Krahn n...@pistoncloud.com wrote:

 Hi Armando,

 Sort of... but what happens when the second one dies?


You mean, you lost both (all) agents? In this case, yes you'd need to
resurrect the agents or move the networks to another available agent.


 If one DHCP agent dies, I need to be able to start a new DHCP agent on
 another host and take over from it.  As far as I can tell right now, when
 one DHCP agent dies, another doesn't take up the slack.


I am not sure I fully understand the failure mode you are trying to
address. The DHCP agents can work in an active-active configuration, so if
you have N agents assigned per network, all of them should be able to
address DHCP traffic. If this is not your experience, ie. one agent dies
and DHCP is no longer served on the network by any other agent, then there
might be some other problem going on.
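
If it ever does come to losing every agent on a network, the manual fail-over
is just a couple of CLI calls -- a rough sketch, with the ids illustrative:

    neutron agent-list | grep "DHCP agent"          # spot dead vs. live agents
    neutron net-list-on-dhcp-agent <dead-agent-id>  # networks still bound to it
    neutron dhcp-agent-network-remove <dead-agent-id> <net-id>
    neutron dhcp-agent-network-add <live-agent-id> <net-id>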




 I have the same problem wit L3 agents by the way, that's next on my list

 --
 Noel


 On Tue, Oct 21, 2014 at 12:52 PM, Armando M. arma...@gmail.com wrote:

 As far as I can tell when you specify:

 dhcp_agents_per_network = X  1

 The server binds the network to all the agents (up to X), which means
 that you have multiple instances of dnsmasq serving dhcp requests at the
 same time. If one agent dies, there is no fail-over needed per se, as the
 other agent will continue to serve dhcp requests unaffected.

 For instance, in my env I have dhcp_agents_per_network=2, so If I create
 a network, and list the agents serving the network I will see the following:

 neutron dhcp-agent-list-hosting-net test

 +--------------------------------------+--------+----------------+-------+
 | id                                   | host   | admin_state_up | alive |
 +--------------------------------------+--------+----------------+-------+
 | 6dd09649-5e24-403b-9654-7aa0f69f04fb | host1  | True           | :-)   |
 | 7d47721a-2725-45f8-b7c4-2731cfabdb48 | host2  | True           | :-)   |
 +--------------------------------------+--------+----------------+-------+

 Isn't that what you're after?

 Cheers,
 Armando

 On 21 October 2014 22:26, Noel Burton-Krahn n...@pistoncloud.com wrote:

 We currently have a mechanism for restarting the DHCP agent on another
 node, but we'd like the new agent to take over all the old networks of the
 failed dhcp instance.  Right now, since dhcp agents are distinguished by
 host, and the host has to match the host of the ovs agent, and the ovs
 agent's host has to be unique per node, the new dhcp agent is registered as
 a completely new agent and doesn't take over the failed agent's networks.
 I'm looking for a way to give the new agent the same roles as the previous
 one.

 --
 Noel


 On Tue, Oct 21, 2014 at 12:12 AM, Kevin Benton blak...@gmail.com
 wrote:

 No, unfortunately when the DHCP agent dies there isn't automatic
 rescheduling at the moment.

 On Mon, Oct 20, 2014 at 11:56 PM, Noel Burton-Krahn 
 n...@pistoncloud.com wrote:

 Thanks for the pointer!

 I like how the first google hit for this is:

 Add details on dhcp_agents_per_network option for DHCP agent HA
 https://bugs.launchpad.net/openstack-manuals/+bug/1370934

 :) Seems reasonable to set dhcp_agents_per_network  1.  What happens
 when a DHCP agent dies?  Does the scheduler automatically bind another
 agent to that network?

 Cheers,
 --
 Noel



 On Mon, Oct 20, 2014 at 9:03 PM, Jian Wen wenjia...@gmail.com wrote:

 See dhcp_agents_per_network in neutron.conf.

 https://bugs.launchpad.net/neutron/+bug/1174132

 2014-10-21 6:47 GMT+08:00 Noel Burton-Krahn n...@pistoncloud.com:

 I've been working on failover for dhcp and L3 agents.  I see that in
 [1], multiple dhcp agents can host the same network.  However, it looks
 like I have to manually assign networks to multiple dhcp agents, which
 won't work.  Shouldn't multiple dhcp agents automatically fail over?

 [1]
 http://docs.openstack.org/trunk/config-reference/content/multi_agent_demo_configuration.html



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Best,

 Jian

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 

Re: [openstack-dev] [NFV] Meeting today ? 22nd October

2014-10-22 Thread Steve Gordon
- Original Message -
 From: ITAI MENDELSOHN (ITAI) itai.mendels...@alcatel-lucent.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org

 Hi all,
 
 Do we have a meeting today?
 I can't see anything in the wiki about today...
 
 Itai

Hi Itai,

Apologies, yes we do!

-Steve
 
 Sent from my iPhone
 
  On Oct 8, 2014, at 2:06 AM, Steve Gordon sgor...@redhat.com wrote:
  
  Hi all,
  
  Just a quick reminder that the NFV subteam meets Wednesday 8th October 2014
  @ 1400 UTC in #openstack-meeting-alt on FreeNode. I have started putting
  an agenda together here, feel free to add:
  
 https://etherpad.openstack.org/p/nfv-meeting-agenda
  
  Also many thanks to Itai for running the meeting for me last week.
  
  Thanks,
  
  Steve
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-- 
Steve Gordon, RHCE
Sr. Technical Product Manager,
Red Hat Enterprise Linux OpenStack Platform

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Cinder] The sorry state of cinder's driver in Glance

2014-10-22 Thread John Griffith
On Wed, Oct 22, 2014 at 7:33 AM, Flavio Percoco fla...@redhat.com wrote:

 On 10/22/2014 02:30 PM, Zhi Yan Liu wrote:
  Greetings,
 
  On Wed, Oct 22, 2014 at 4:56 PM, Flavio Percoco fla...@redhat.com
 wrote:
  Greetings,
 
  Back in Havana a, partially-implemented[0][1], Cinder driver was merged
  in Glance to provide an easier and hopefully more consistent interaction
  between glance, cinder and nova when it comes to manage volume images
  and booting from volumes.
 
  With my idea, it not only for VM provisioning and consuming feature
  but also for implementing a consistent and unified block storage
  backend for image store.  For historical reasons, we have implemented
  a lot of duplicated block storage drivers between glance and cinder,
  IMO, cinder could regard as a full-functional block storage backend
  from OpenStack's perspective (I mean it contains both data and control
  plane), glance could just leverage cinder as a unified block storage
  backend. Essentially, Glance has two kind of drivers, block storage
  driver and object storage driver (e.g. swift and s3 driver),  from
  above opinion, I consider to give glance a cinder driver is very
  sensible, it could provide a unified and consistent way to access
  different kind of block backend instead of implement duplicated
  drivers in both projects.

 Let me see if I got this right. You're suggesting to have a cinder
 driver in Glance so we can basically remove the
 'create-volume-from-image' functionality from Cinder. is this right?

  I see some people like to see implementing similar drivers in
  different projects again and again, but at least I think this is a
  hurtless and beneficial feature/driver.

 It's not as harmless as it seems. There are many users confused as to
 what the use case of this driver is. For example, should users create
 volumes from images? or should the create images that are then stored in
 a volume? What's the difference?

 Technically, the answer is probably none, but from a deployment and
 usability perspective, there's a huge difference that needs to be
 considered.

 I'm not saying it's a bad idea, I'm just saying we need to get this
 story straight and probably just pick one (? /me *shrugs*)

  While I still don't fully understand the need of this driver, I think
  there's a bigger problem we need to solve now. We have a partially
  implemented driver that is almost useless and it's creating lots of
  confusion in users that are willing to use it but keep hitting 500
  errors because there's nothing they can do with it except for creating
  an image that points to an existing volume.
 
  I'd like us to discuss what the exact plan for this driver moving
  forward is, what is missing and whether it'll actually be completed
  during Kilo.
 
  I'd like to enhance cinder driver of course, but currently it blocked
  on one thing it needs a correct people believed way [0] to access
  volume from Glance (for both data and control plane, e.g. creating
  image and upload bits). During H cycle I was told cinder will release
  a separated lib soon, called Brick[0], which could be used from other
  project to allow them access volume directly from cinder, but seems it
  didn't ready to use still until now. But anyway, we can talk this with
  cinder team to get Brick moving forward.
 
  [0] https://review.openstack.org/#/c/20593/
  [1] https://wiki.openstack.org/wiki/CinderBrick
 
  I really appreciated if somebody could show me a clear plan/status on
  CinderBrick, I still think it's a good way to go for glance cinder
  driver.

 +1 Mike? John ? Any extra info here?

 If the brick's lib is not going to be released before k-2, I think we
 should just remove this driver until we can actually complete the work.

 As it is right now, it doesn't add any benefit and there's nothing this
 driver adds that cannot be done already (creating volumes from images,
 that is).

  If there's a slight chance it won't be completed in Kilo, I'd like to
  propose getting rid of it - with a deprecation period, I guess - and
  giving it another chance in the future when it can be fully implemented.
 
  [0] https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver
  [1] https://review.openstack.org/#/c/32864/

 Fla


 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Sorry state is probably fair; the issue here, as you pointed out, is that it's
something that's partially done.  To be clear about the intended use-case
here: my intent was mostly to utilize Cinder block devices similar to the
model Ceph has in place.  We can make instance creation and migration quite
a bit more efficient IMO, and there are also some of the points you made
around cloning and creating new volumes.

Ideas started spreading from there to Using a Read Only Cinder Volume per
image, to A Glance owned Cinder Volume that would behave pretty much the
current local disk/file-system model (Create a Cinder Volume for Glance,
attach it to the Glance Server, partition, format and mount... use as image
store).

[openstack-dev] [all] Liaisons for Vulnerability Management Team

2014-10-22 Thread Thierry Carrez
Hi everyone,

TL;DR:
Update
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Vulnerability_management

Longer version:

In the same spirit as the Oslo Liaisons, we are introducing in the Kilo
cycle liaisons for the Vulnerability Management Team.

Historically we've been trying to rely on a group of people with ACL
access to the private security bugs for the project (the
$PROJECT-coresec group in Launchpad), but in some cases it resulted in an
"everyone in charge, nobody in charge" side effect. We think we could
benefit from stronger ties and involvement by designating specific liaisons.

VMT liaisons will help assessing the impact of reported issues,
coordinate the development of patches, review proposed patches and
propose backports. The liaison should be familiar with the Vulnerability
Management process
(https://wiki.openstack.org/wiki/Vulnerability_Management) and embargo
rules, and have a good grasp of security issues in software design. The
liaison may of course further delegate work to other subject matter experts.

The liaison should be a core reviewer for the project, but does not need
to be the PTL. By default, if nobody else is mentioned, the liaison will
be the PTL.

If you're up for it, talk to your PTL and add your name to:
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Vulnerability_management

Thanks for your help in keeping OpenStack secure !

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-22 Thread Salvatore Orlando
Kyle,

I pointed out the similarity of the two specifications while reviewing them
a few months ago (see patch set #4).
Ian then approached me on IRC (I'm afraid it's going to be a bit difficult
to retrieve those logs), and pointed out that actually the two
specifications, in his opinion, try to address different problems.

While the proposed approaches appear different, their ultimate goal is
apparently that of enabling instances to see multiple networks on the same
data-plane level port (as opposed to the mgmt-level logical port). While it
might be ok to have a variety of choice at the data plane level - my
suggestion is that we should have only a single way of specifying this at
the mgmt level, with the least possible changes to the simple logical model
we have - and here I'm referring to the proposed trunkport/subport approach
[1]

Salvatore

[1] https://review.openstack.org/#/c/94612/


On 22 October 2014 14:42, Kyle Mestery mest...@mestery.com wrote:

 There are currently at least two BPs registered for VLAN trunk support
 to VMs in neutron-specs [1] [2]. This is clearly something that I'd
 like to see us land in Kilo, as it enables a bunch of things for the
 NFV use cases. I'm going to propose that we talk about this at an
 upcoming Neutron meeting [3]. Given the rotating schedule of this
 meeting, and the fact the Summit is fast approaching, I'm going to
 propose we allocate a bit of time in next Monday's meeting to discuss
 this. It's likely we can continue this discussion F2F in Paris as
 well, but getting a head start would be good.

 Thanks,
 Kyle

 [1] https://review.openstack.org/#/c/94612/
 [2] https://review.openstack.org/#/c/97714
 [3] https://wiki.openstack.org/wiki/Network/Meetings

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-22 Thread Steve Gordon
- Original Message -
 From: Kyle Mestery mest...@mestery.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
 There are currently at least two BPs registered for VLAN trunk support
 to VMs in neutron-specs [1] [2]. This is clearly something that I'd
 like to see us land in Kilo, as it enables a bunch of things for the
 NFV use cases. I'm going to propose that we talk about this at an
 upcoming Neutron meeting [3]. Given the rotating schedule of this
 meeting, and the fact the Summit is fast approaching, I'm going to
 propose we allocate a bit of time in next Monday's meeting to discuss
 this. It's likely we can continue this discussion F2F in Paris as
 well, but getting a head start would be good.
 
 Thanks,
 Kyle
 
 [1] https://review.openstack.org/#/c/94612/
 [2] https://review.openstack.org/#/c/97714
 [3] https://wiki.openstack.org/wiki/Network/Meetings

Hi Kyle,

Thanks for raising this, it would be great to have a converged plan for 
addressing this use case [1] for Kilo. I plan to attend the Neutron meeting and 
I've CC'd Erik, Ian, and Calum to make sure they are aware as well.

Thanks,

Steve

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-October/047548.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NFV] Meeting minutes and summary for 2014-10-22

2014-10-22 Thread Steve Gordon
Hi all,

Thanks to those who attended the meeting today. For those who missed it, the
minutes and the full log are available at these locations:

* Meeting ended Wed Oct 22 14:25:30 2014 UTC.  Information about MeetBot at 
http://wiki.debian.org/MeetBot . (v 0.1.4)
* Minutes:
http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-10-22-14.01.html
* Minutes (text): 
http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-10-22-14.01.txt
* Log:
http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-10-22-14.01.log.html

I have also updated the Wiki per Itai's request to reflect the time for next 
week, I have not added meeting times beyond summit with the expectation we may 
wish to discuss future timing of meetings now that we have tried the 
alternating schedule for a month or two:

https://wiki.openstack.org/wiki/Teams/NFV#Agenda_for_next_meeting

Thanks!

-Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday October 23rd at 17:00 UTC

2014-10-22 Thread Matthew Treinish
Hi Everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
this Thursday, October 23rd at 17:00 UTC in the #openstack-meeting channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

It's also worth noting that a few weeks ago we started having a regular
dedicated Devstack topic during the meetings. So if anyone is interested in
Devstack development please join the meetings to be a part of the discussion.

To help people figure out what time 17:00 UTC is in other timezones tomorrow's
meeting will be at:

13:00 EDT
02:00 JST
02:30 ACST
19:00 CEST
12:00 CDT
10:00 PDT

-Matt Treinish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] Scheduler group meeting - cancelled this week

2014-10-22 Thread Ed Leafe
On Oct 22, 2014, at 8:21 AM, Dugger, Donald D donald.d.dug...@intel.com wrote:

 Sigh.  I've progressed from being challenged by time of day (e.g. timezone) 
 to being challenged by day of week.  Pretty soon I'll be confused about the 
 year :-)
 
 Sorry about that.

Heh, no worries – I've had a lot worse days!


-- Ed Leafe







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Cinder] The sorry state of cinder's driver in Glance

2014-10-22 Thread Zhi Yan Liu
Replied inline.

On Wed, Oct 22, 2014 at 9:33 PM, Flavio Percoco fla...@redhat.com wrote:
 On 10/22/2014 02:30 PM, Zhi Yan Liu wrote:
 Greetings,

 On Wed, Oct 22, 2014 at 4:56 PM, Flavio Percoco fla...@redhat.com wrote:
 Greetings,

 Back in Havana a, partially-implemented[0][1], Cinder driver was merged
 in Glance to provide an easier and hopefully more consistent interaction
 between glance, cinder and nova when it comes to manage volume images
 and booting from volumes.

 With my idea, it not only for VM provisioning and consuming feature
 but also for implementing a consistent and unified block storage
 backend for image store.  For historical reasons, we have implemented
 a lot of duplicated block storage drivers between glance and cinder,
 IMO, cinder could regard as a full-functional block storage backend
 from OpenStack's perspective (I mean it contains both data and control
 plane), glance could just leverage cinder as a unified block storage
 backend. Essentially, Glance has two kind of drivers, block storage
 driver and object storage driver (e.g. swift and s3 driver),  from
 above opinion, I consider to give glance a cinder driver is very
 sensible, it could provide a unified and consistent way to access
 different kind of block backend instead of implement duplicated
 drivers in both projects.

 Let me see if I got this right. You're suggesting to have a cinder
 driver in Glance so we can basically remove the
 'create-volume-from-image' functionality from Cinder. is this right?


I don't think we need to remove any feature that is an existing/reasonable
use case from the end user's perspective; 'create-volume-from-image' is a
useful function and needs to stay as-is, in my opinion. But I think we can
change the internal implementation if we have a cinder driver for glance.
E.g. for this use case, if glance already stores the image as a volume,
then cinder can create the volume efficiently by leveraging that
capability of the backend storage - I think this is just like what ceph
does today in this situation (so a duplication example again).

 I see some people like to see implementing similar drivers in
 different projects again and again, but at least I think this is a
 hurtless and beneficial feature/driver.

 It's not as harmless as it seems. There are many users confused as to
 what the use case of this driver is. For example, should users create
 volumes from images? or should the create images that are then stored in
 a volume? What's the difference?

I'm not sure I understood all the concerns from those folks, but for your
examples, one key reason I think is that they are still thinking about it
in too technical a way. I mean, create-image-from-volume and
create-volume-from-image are useful and reasonable _use cases_ from the end
user's perspective, because volume and image are totally different
concepts for the end user in a cloud context (at least, in an OpenStack
context). The benefit/purpose of leveraging a cinder store/driver in
glance is not to change those concepts and existing use cases for the end
user/operator, but to help us implement those features efficiently
inside glance and cinder, IMO, including lowering the duplication as much
as possible, as I mentioned before. So, in short, I see the impact of this
idea at the _implementation_ level, not at the exposed _use case_ level.


 Technically, the answer is probably none, but from a deployment and
 usability perspective, there's a huge difference that needs to be
 considered.

According to my explanations above, IMO, this driver/idea couldn't
(and shouldn't) break existing concepts and use cases for the end
user/operator, but if I'm still missing something please let me know.

zhiyan


 I'm not saying it's a bad idea, I'm just saying we need to get this
 story straight and probably just pick one (? /me *shrugs*)

 While I still don't fully understand the need of this driver, I think
 there's a bigger problem we need to solve now. We have a partially
 implemented driver that is almost useless and it's creating lots of
 confusion in users that are willing to use it but keep hitting 500
 errors because there's nothing they can do with it except for creating
 an image that points to an existing volume.

 I'd like us to discuss what the exact plan for this driver moving
 forward is, what is missing and whether it'll actually be completed
 during Kilo.

 I'd like to enhance cinder driver of course, but currently it blocked
 on one thing it needs a correct people believed way [0] to access
 volume from Glance (for both data and control plane, e.g. creating
 image and upload bits). During H cycle I was told cinder will release
 a separated lib soon, called Brick[0], which could be used from other
 project to allow them access volume directly from cinder, but seems it
 didn't ready to use still until now. But anyway, we can talk this with
 cinder team to get Brick moving forward.

 [0] https://review.openstack.org/#/c/20593/
 [1] https://wiki.openstack.org/wiki/CinderBrick

 I really appreciated if somebody could show me a clear plan/status on
 CinderBrick, I still think it's a good way to go for glance cinder
 driver.

[openstack-dev] [kolla][tripleo][nova][glance][keystone] announce of Kolla Milestone #1

2014-10-22 Thread Steven Dake
The Kolla development community would like to announce the release of 
Kolla Milestone #1.  This milestone constitutes two weeks of effort by 
the developers and is available for immediate download from 
https://github.com/stackforge/kolla/archive/version-m1.tar.gz.


Kolla is a project to containerize OpenStack based upon Kubernetes and 
Docker.  With this release, we have minimally functional containers 
available for:


 * glance (glance-api, glance-registry)
 * mariadb
 * keystone
 * nova-compute (nova-network, nova-compute)
 * nova-controller (nova-conductor, nova-api, nova-scheduler)
 * rabbitmq
 * heat (heat-api, heat-engine)

While these containers will boot and provide the expected OpenStack 
APIs, they should be considered a technology demonstration rather than a 
functional OpenStack deployment.


We are hopeful the community gives Kolla a spin.

Setting up a test environment
===

OPTION 1 (run steps 1.1-1.5 and 3.1-3.4):
For  those without an existing Kubernetes environment, two options are 
available for configuring one:
The upstream Kubernetes community provides instructions for running 
Kubernetes using Vagrant, available 
from: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/vagrant.md


The Kolla developers develop Kolla in OpenStack, using Heat to provision 
the necessary servers and other resources.  If you are familiar with 
Heat and you have a correctly configured environment available, this 
lets you deploy a working Kubernetes cluster automatically.  The Heat 
templates are available from https://github.com/larsks/heat-kubernetes/. 
The templates require at least Heat 2014.1.3 (earlier versions have a 
bug that will prevent the templates from working).

Here are some simple steps to get things rolling using the Heat templates:

1.1. git clone https://github.com/larsks/heat-kubernetes/; cd heat-kubernetes

1.2. Create an appropriate image by running the get_image.sh script in 
this repository.  This will generate an image called 
fedora-20-k8s.qcow2. Upload this image to Glance.  You can also obtain 
an appropriate image 
from https://fedorapeople.org/groups/heat/kolla/fedora-20-k8s.qcow2.


1.3. Create a file local.yaml with settings appropriate to your 
OpenStack environment. It should look something like:


    parameters:
      server_image: fedora-20-k8s
      ssh_key_name: sdake
      dns_nameserver: 8.8.8.8
      external_network_id: 6e7e7701-46a0-49c0-9f06-ac5abc79d6ae
      number_of_minions: 1
      server_flavor: m1.large

You *must* provide settings for external_network_id and
ssh_key_name; these are local to your environment. You will probably
also need to provide a value for server_image, which should be the
name (or UUID) of a Fedora 20 cloud image or derivative.


1.4. heat stack-create -f kubecluster.yaml -e local.yaml my-kube-cluster
1.5. Determine the ip addresses of your cluster hosts by running:
heat output-show my-kube-cluster kube_minions_external

OPTION 2 (run steps 2.1-2.12 and steps 3.1-3.4):

This document and the scripts provided assume Fedora on a virtual or 
physical environment outside of OpenStack. In this environment, Heat 
won't be accessible as the orchestration mechanism and scripts will be 
provided instead.

Install Master
--
2.1.: Install Fedora 20 x86-64 using whatever method is best for you 
(Kickstart+http works well if you have it setup)

2.2: Pick a node for your kubernetes master - note the IP address
2.3: Note the IP addresses of all other nodes. These will be referred to 
as minions

2.4: ssh root@{master-node} #It is very important that you ssh as root!
2.5: curl http://people.redhat.com/bholden/kube/master-install.sh > master-install.sh; chmod +x master-install.sh
2.6: Edit the master-install.sh file you just created. You will need to 
add the minion IP addresses to the variable MINION_ADDRESSES at the top, 
and then comment out the exit. MINION_ADDRESSES will expect commas 
between each minion IP address. This line should have entries that match the 
MINION_HOSTNAME variables set on each minion. The script will later set 
each MINION_HOSTNAME to its IP address.

2.7: sh master-install.sh

Can minion install run during master install or do we need to wait until 
after master is done to run minion installation?


Install Minions
---
For this task, you may want to generate an SSH key from your workstation 
(if you're using Linux, run ssh-keygen) and then copy it around to your 
minions (if Linux, use ssh-copy-id root@${remote-ip})
You can then loop over the minion hostnames or IP addresses using a 
simple bash loop, example: for i in ip1 ip2 ip3; do ssh root@$i 
command ; done
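
For example (key type and IPs purely illustrative):

    ssh-keygen -t rsa
    for i in 192.168.1.11 192.168.1.12; do ssh-copy-id root@$i; done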

This document will assume only one minion

2.8: ssh root@{minion-node}
2.9: curl http://people.redhat.com/bholden/kube/minion-install.sh > minion-install.sh; chmod +x minion-install.sh
2.10: Edit the minion-install.sh file you just created. You will need to 
change the 

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-22 Thread Erik Moe

Hi,

Great that we can have more focus on this. I'll attend the meeting on Monday 
and also attend the summit, looking forward to these discussions.

Thanks,
Erik


-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com] 
Sent: den 22 oktober 2014 16:29
To: OpenStack Development Mailing List (not for usage questions)
Cc: Erik Moe; iawe...@cisco.com; calum.lou...@metaswitch.com
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

- Original Message -
 From: Kyle Mestery mest...@mestery.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
 There are currently at least two BPs registered for VLAN trunk support 
 to VMs in neutron-specs [1] [2]. This is clearly something that I'd 
 like to see us land in Kilo, as it enables a bunch of things for the 
 NFV use cases. I'm going to propose that we talk about this at an 
 upcoming Neutron meeting [3]. Given the rotating schedule of this 
 meeting, and the fact the Summit is fast approaching, I'm going to 
 propose we allocate a bit of time in next Monday's meeting to discuss 
 this. It's likely we can continue this discussion F2F in Paris as 
 well, but getting a head start would be good.
 
 Thanks,
 Kyle
 
 [1] https://review.openstack.org/#/c/94612/
 [2] https://review.openstack.org/#/c/97714
 [3] https://wiki.openstack.org/wiki/Network/Meetings

Hi Kyle,

Thanks for raising this, it would be great to have a converged plan for 
addressing this use case [1] for Kilo. I plan to attend the Neutron meeting and 
I've CC'd Erik, Ian, and Calum to make sure they are aware as well.

Thanks,

Steve

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-October/047548.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] making Daneyon Hansen core

2014-10-22 Thread Steven Dake
A few weeks ago in IRC we discussed the criteria for joining the core 
team in Kolla.  I believe Daneyon has met all of these requirements by 
reviewing patches along with the rest of the core team and providing 
valuable comments, as well as implementing neutron and helping get 
nova-networking implementation rolling.


Please vote +1 or -1 if you're kolla core.  Recall a -1 is a veto.  It 
takes 3 votes.  This email counts as one vote ;)


Regards
-steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack] Proposal - add support for Markdown for docs

2014-10-22 Thread Collins, Sean
With some xargs, sed, and pandoc - I now present to you the first
attempt at converting the DevStack docs to RST, and making the doc build
look similar to other projects.

https://review.openstack.org/130241

It is extremely rough, I basically ran everything through Pandoc and
cleaned up any errors that Sphinx spat out. I'm sure there is a lot of
work that needs to be done to format it to be more readable - but I'm
pretty pleased with the result.
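
For the curious, the bulk conversion boils down to something like this (not
the exact commands from the review, which also needed some sed cleanup
afterwards):

    find . -name '*.md' | xargs -I{} \
        sh -c 'pandoc -f markdown -t rst "$0" -o "${0%.md}.rst"' {}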

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Removing nova-bm support within os-cloud-config

2014-10-22 Thread Ben Nemec
On 10/20/2014 10:00 PM, Steve Kowalik wrote:
 With the move to removing nova-baremetal, I'm concerned that portions
 of os-cloud-config will break once python-novaclient has released with
 the bits of the nova-baremetal gone -- import errors, and such like.
 
 I'm also concerned about backward compatibility -- in that we can't
 really remove the functionality, because it will break that
 compatibility. A further concern is that because nova-baremetal is no
 longer checked in CI, code paths may bitrot.

This is definitely a concern, but since it's been removed from Nova
master the only thing we can do is set up the proposed CI jobs to deploy
stable OpenStack from master TripleO.  For the time being I think that
would be our best path forward.

 
 Should we pony up and remove support for talking to nova-baremetal in
 os-cloud-config? Or any other suggestions?
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] move deallocate port from after hypervisor driver does detach interface when doing detach_interface

2014-10-22 Thread Ben Nemec
Please don't send review requests to the list.  The preferred methods of
asking for reviews are discussed in this post:
http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html

Thanks.

-Ben

On 10/22/2014 02:57 AM, Eli Qiao wrote:
 
 hi all.
 when I was reviewing code in nova/compute/manager.py,
 I found that detach_interface deallocates the port from neutron first and
 then calls detach_interface in the hypervisor. What happens if the
 hypervisor detach_interface fails? The result is that the port can still be
 seen on the guest but has been removed from neutron, which seems inconsistent.
 
 I submitted a patch [1] proposing to remove the port on the neutron side only
 after the hypervisor detach_interface succeeds, and to keep the neutron port
 and log a message if detach_interface raises an exception.
 
 can someone kindly help to take a look at it and give some comments?
 
 [1] https://review.openstack.org/#/c/130151/
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] periodic jobs for master

2014-10-22 Thread David Kranz

On 10/22/2014 06:07 AM, Thierry Carrez wrote:

Ihar Hrachyshka wrote:

[...]
For stable branches, we have so called periodic jobs that are
triggered once in a while against the current code in a stable branch,
and report to openstack-stable-maint@ mailing list. An example of
failing periodic job report can be found at [2]. I envision that
similar approach can be applied to test auxiliary features in gate. So
once something is broken in master, the interested parties behind the
auxiliary feature will be informed in due time.
[...]

The main issue with periodic jobs is that since they are non-blocking,
they can get ignored really easily. It takes a bit of organization and
process to get those failures addressed.

It's only recently (and a lot thanks to you) that failures in the
periodic jobs for stable branches are being taken into account quickly
and seriously. For years the failures just lingered until they blocked
someone's work enough for that person to go and fix them.

So while I think periodic jobs are a good way to increase corner case
testing coverage, I am skeptical of our collective ability to have the
discipline necessary for them not to become a pain. We'll need a strict
process around them: identified groups of people signed up to act on
failure, and failure stats so that we can remove jobs that don't get
enough attention.

While I share some of your skepticism, we have to find a way to make 
this work.
Saying we are doing our best to ensure the quality of upstream OpenStack 
based on a single tier of testing (the gate) that is limited to 40-minute runs
is not plausible. Of course a lot more testing happens downstream, but we 
can do better as a community. I think we should rephrase this subject as 
"non-gating jobs". We could have various kinds of stress and longevity 
jobs running to good effect if we can solve this process problem.


Following on your process suggestion, in practice the most likely way 
this could actually work is to have a rotation of build guardians that 
agree to keep an eye on jobs for a short period of time. There would 
need to be a separate rotation list for each project that has 
non-gating, project-specific jobs. This will likely happen as we move 
towards deeper functional testing in projects. The qa team would be the 
logical pool for a rotation of more global jobs of the kind I think Ihar 
was referring to.


As for failure status, each of these non-gating jobs would have their 
own name so logstash could be used to debug failures. Do we already have 
anything that tracks failure rates of jobs?


 -David




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] making Daneyon Hansen core

2014-10-22 Thread Ryan Hallisey
Great work Daneyon!  Excellent job with neutron and nova-networking!

+1
-Ryan

- Original Message -
From: Steven Dake sd...@redhat.com
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Sent: Wednesday, October 22, 2014 11:04:24 AM
Subject: [openstack-dev] [kolla] making Daneyon Hansen core

A few weeks ago in IRC we discussed the criteria for joining the core 
team in Kolla.  I believe Daneyon has met all of these requirements by 
reviewing patches along with the rest of the core team and providing 
valuable comments, as well as implementing neutron and helping get 
nova-networking implementation rolling.

Please vote +1 or -1 if your kolla core.  Recall a -1 is a veto.  It 
takes 3 votes.  This email counts as one vote ;)

Regards
-steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] disambiguating the term discovery

2014-10-22 Thread Lucas Alvares Gomes
On Tue, Oct 21, 2014 at 6:29 PM, Stuart Fox stu...@demonware.net wrote:
 Having written/worked on a few DC automation tools, I've typically broken
 down the process of getting unknown hardware into production in to 4
 distinct stages.
 1) Discovery (The discovery of unknown hardware)
 2) Normalising (Push initial configs like drac/imm/ilo settings, flashing to
 known good firmware etc etc)
 3) Analysis (Figure out what the hardware is and what its constituent parts
 are cpu/ram/disk/IO caps/serial numbers etc)
 4) Burnin (run linpack or equiv tests for 24hrs)

 At the end of stage 4 the hardware should be ready for provisioning.

Oh, thanks for that, I quite like this separation.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Cinder] The sorry state of cinder's driver in Glance

2014-10-22 Thread Mathieu Gagné

On 2014-10-22 10:05 AM, John Griffith wrote:


Ideas started spreading from there to Using a Read Only Cinder Volume
per image, to A Glance owned Cinder Volume that would behave pretty
much the current local disk/file-system model (Create a Cinder Volume
for Glance, attach it to the Glance Server, partition, format and
mount... use as image store).



To add to John Griffith's explanation:

This is a feature we have wanted for *several* months and finally 
implemented in-house in a different way directly in Cinder.


Creating a volume from an image can take *several* minutes depending on
the Cinder backend used. For someone using boot-from-volume (BFV) as their
main way to boot instances, this is a *HUGE* issue.


This causes several problems:

- When booting from volume, Nova thinks the volume creation failed because it took more
than 2 minutes to create the volume from the image. Nova will then
retry the volume creation, still without success, and the instance will go
into ERROR state.


You now have 2 orphan volumes in Cinder. This is because Nova cannot
clean up after itself properly, due to the volumes still being in the creating
state when deletion is attempted by Nova.


- So you try to create the volume yourself first and ask Nova to boot on 
it. When creating a volume from an image in Cinder (not through Nova), 
from a UX perspective, this time is too long.


The time required adds up when using a SolidFire backend with QoS. You have
the time to get several coffees and a whole breakfast with your friends
to talk about how creating a volume from an image is too damn slow.


What we did to fix the issue:

- We created a special tenant holding golden volumes, which are in fact
volumes created from images. Those golden volumes are used to speed up
volume creation.


The SolidFire driver has been modified so that when you create a volume
from an image, it first checks whether there is a corresponding golden
volume in that special tenant. If one is found, the volume is cloned into
the appropriate tenant in a matter of seconds. If none is found, the normal
creation process is used.
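
To make the idea concrete, here is a rough, hypothetical sketch of that
lookup-then-clone fallback (the registry and helper functions below are
illustrative placeholders, not the actual SolidFire driver API):

    # Maps image id -> golden volume id, maintained in the special tenant.
    GOLDEN_VOLUMES = {}

    def clone_volume(golden_id, dest_tenant, size):
        # Placeholder: a backend-side clone completes in seconds.
        return {'source': golden_id, 'tenant': dest_tenant, 'size': size}

    def create_from_glance(image_id, dest_tenant, size):
        # Placeholder: the slow path, downloading the image from Glance
        # and writing it into a freshly created volume.
        return {'image': image_id, 'tenant': dest_tenant, 'size': size}

    def create_volume_from_image(image_id, dest_tenant, size):
        golden_id = GOLDEN_VOLUMES.get(image_id)
        if golden_id:
            return clone_volume(golden_id, dest_tenant, size)
        return create_from_glance(image_id, dest_tenant, size)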



AFAIK, some storage backends (like Ceph) addressed the issue by
implementing support for themselves in all the OpenStack services: Nova,
Glance and Cinder. They now have the ability to optimize each step of the
lifecycle of an instance/volume by simply cloning volumes instead of
re-downloading a whole image only to end up in the same backend the
original image was stored in.


While this is cool for Ceph, other backends don't have this luxury and
we are stuck in this sorry state.


--
Mathieu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] Scheduler group meeting - cancelled this week INTERNAL

2014-10-22 Thread Elzur, Uri
Don

Will there be a meeting next week? What is the regular time slot for the 
meeting?

I'd like to work with you on a technical slide to use in Paris.
Do we need to socialize the Gantt topic more?


Thx

Uri (Oo-Ree)
C: 949-378-7568

From: Dugger, Donald D [mailto:donald.d.dug...@intel.com]
Sent: Wednesday, October 22, 2014 6:04 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [gantt] Scheduler group meeting - cancelled this week

Just a reminder that, as we mentioned last week, no meeting today.

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] making Daneyon Hansen core

2014-10-22 Thread Lars Kellogg-Stedman
On Wed, Oct 22, 2014 at 08:04:24AM -0700, Steven Dake wrote:
 A few weeks ago in IRC we discussed the criteria for joining the core team
 in Kolla.  I believe Daneyon has met all of these requirements by reviewing
 patches along with the rest of the core team and providing valuable
 comments, as well as implementing neutron and helping get nova-networking
 implementation rolling.
 
 Please vote +1 or -1 if your kolla core.  Recall a -1 is a veto.  It takes 3
 votes.  This email counts as one vote ;)

+1

-- 
Lars Kellogg-Stedman l...@redhat.com | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] Scheduler group meeting - cancelled this week INTERNAL

2014-10-22 Thread Ed Leafe

On 10/22/2014 10:54 AM, Elzur, Uri wrote:
 Will there be a meeting next week? What is the regular time slot for the
 meeting?

Tuesdays at 1500 UTC in IRC channel: #openstack-meeting

https://wiki.openstack.org/wiki/Meetings#Gantt_.28Scheduler.29_team_meeting


-- Ed Leafe

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Backup of information about nodes.

2014-10-22 Thread Andrey Volochay
Hi, everyone.

For one project we need to have a backup of info about the nodes (astute.yaml),
in case the Fuel node and a node-n are down.

How bad an idea is it to keep a copy of the astute.yaml file of each node on each
node of the cluster?
For example:
pod_state/node-1.yaml
pod_state/node-2.yaml
pod_state/node-3.yaml
pod_state/node-n.yaml

I have an idea: add a new deployment engine for astute.yaml and a switcher of
engines. Then we will be able to choose between the two ways.

engine here: fuel-astute/lib/astute/deployment_engine

-- 
Regards,
Andrey Volochay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] Scheduler group meeting - cancelled this week INTERNAL

2014-10-22 Thread Jay Pipes

The regular meeting time is Tuesdays at 15:00 UTC/11am EST:

https://wiki.openstack.org/wiki/Meetings#Gantt_.28Scheduler.29_team_meeting

Generally, we don't do slides for design summit sessions -- we use 
etherpads instead and the sessions are discussions, not presentations.


Next week's meeting we can and should create etherpads for the 
cross-project session(s) that we will get allocated for Gantt topics.


Best,
-jay

On 10/22/2014 11:54 AM, Elzur, Uri wrote:

Don

Will there be a meeting next week? What is the regular time slot for the
meeting?

I’d like to work w you on a technical slide to use in Paris

Do we need to socialize the Gantt topic more?

Thx

Uri (“Oo-Ree”)

C: 949-378-7568

*From:* Dugger, Donald D [mailto:donald.d.dug...@intel.com]
*Sent:* Wednesday, October 22, 2014 6:04 AM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* [openstack-dev] [gantt] Scheduler group meeting - cancelled
this week

Just a reminder that, as we mentioned last week, no meeting today.

--

Don Dugger

Censeo Toto nos in Kansa esse decisse. - D. Gale

Ph: 303/443-3786



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MagnetoDB][api] Monitoring API

2014-10-22 Thread Alexander Minakov (CS)
Hello Stackers,

Here is a first blueprint to discuss for Kilo. 
I would like to start a discussion about a monitoring API in MagnetoDB. I've
written a blueprint [1] about this.
The goal is to create an API exposing usage statistics to users and to
external monitoring or billing tools.

Please take a minute to review [1] and add your comments. 

Thanks!

[1] https://review.openstack.org/#/c/130239/

-- 
Regards,
Aleksandr Minakov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding new dependencies to stackforge projects

2014-10-22 Thread Davanum Srinivas
FYI, latest update after discussion on #openstack-infra: the consensus
seems to be to allow projects to add to g-r.

131+ All OpenStack projects, regardless of status, may add entries to
132+ ``global-requirements.txt`` for dependencies if the project is going
133+ to run integration tests under a devstack-configured environment. We
134+ want everyone testing with the same requirements, and any project
135+ that wants to test in a fully configured environment needs to have
136+ their dependencies in the global list.

Please see https://review.openstack.org/#/c/130245/

The review that updates g-r with docker-py has been rebased to depend on 130245:
https://review.openstack.org/#/c/128746/

Also, @ttx will be adding this discussion on next week's cross-meeting agenda.


thanks,
dims

On Wed, Oct 22, 2014 at 9:15 AM, Davanum Srinivas dava...@gmail.com wrote:
 Dear requirements-core folks,

 Here's the review as promised:
 https://review.openstack.org/130210

 Thanks,
 dims

 On Wed, Oct 22, 2014 at 7:27 AM, Davanum Srinivas dava...@gmail.com wrote:
 Matt,

 I've submitted a review to remove the gate-nova-docker-requirements
 from nova-docker:

 https://review.openstack.org/#/c/130192/

 I am good with treating the current situation with DSVM jobs as a
 bug if there is consensus. I'll try to dig in, but we may need Dean,
 Sean etc. to help figure it out :)

 thanks,
 dims

 On Tue, Oct 21, 2014 at 8:42 PM, Matthew Treinish mtrein...@kortar.org 
 wrote:
 On Tue, Oct 21, 2014 at 08:09:38PM -0400, Davanum Srinivas wrote:
 Hi all,

 On the cross project meeting today, i promised to bring this to the
 ML[1]. So here it is:

 Question : Can a StackForge project (like nova-docker), depend on a
 library (docker-py) that is not specified in global requirements?

 So the answer is definitely yes, and this is definitely the case for most
 projects which aren't in the integrated release. We should only be enforcing
 requirements on projects in projects.txt in the requirements repo.


 Right now the answer seems to be No, as enforced by the CI systems.
 For the specific problems, see review:
 https://review.openstack.org/#/c/130065/

 You can see that check-tempest-dsvm-f20-docker fails:
 http://logs.openstack.org/65/130065/1/check/check-tempest-dsvm-f20-docker/f9000d4/devstacklog.txt.gz

 I think you've just hit a bug either in devstack or the nova-docker devstack
 bits. There isn't any reason these checks should be run on a project which
 isn't being tracked by global requirements.


 and the gate-nova-docker-requirements fails:
 http://logs.openstack.org/65/130065/1/check/gate-nova-docker-requirements/34256d2/console.html


 I'm not sure why this job is configured to be running on the nova-docker 
 repo.
 The project should either decide to track global-requirements and then be 
 added
 to projects.txt or not run the requirements check job. It doesn't make much
 sense to enforce compliance with global requirements if the project is 
 trying to
 use libraries not included there.

 Just remove the job template from the zuul layout for nova-docker:
 http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml#n4602

 and then once the issue with devstack is figured out you can add the 
 docker-py
 to the requirements list.

 For this specific instance, the reason for adding this dependency is
 to get rid of custom http client in nova-docker project that
 just duplicates the functionality, needs to be maintained and does not
 do proper checking etc. But the question is general
 in the broader sense: projects should be able to add dependencies and
 be able to run dsvm and requirements jobs until
 they are integrated, and the delta list of new dependencies to global
 requirements should be vetted during the process.

 If nova-docker isn't tracked by global requirements then there shouldn't be
 anything blocking you from adding docker-py to the nova-docker 
 requirements. It
 looks like you're just hitting a bug and/or a configuration issue. Granted, 
 there
 might be some complexity in moving the driver back into the nova tree if 
 there
 are dependencies on a packages not in global requirements, but that's 
 something
 that can be addressed when/if the driver is being merged back into nova.


 Thanks,
 dims

 PS: A really long rambling version of this email with a proposal to
 add a flag in devstack-gate/devstack is at [2], Actual review
 with hacks to get DSVM running by hook/crook that shows that docker-py
 indeed be used is at [3]

 [1] 
 http://eavesdrop.openstack.org/meetings/project/2014/project.2014-10-21-21.02.log.html
 [2] 
 https://etherpad.openstack.org/p/managing-reqs-for-projects-to-be-integrated
 [3] https://review.openstack.org/#/c/128790/


 -Matt Treinish

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Davanum Srinivas :: https://twitter.com/dims



 --
 

Re: [openstack-dev] Time to Samba! :-)

2014-10-22 Thread Martinx - ジェームズ
Just for the record, they are watching us! :-O

https://aws.amazon.com/blogs/aws/new-aws-directory-service/

Best!
Thiago

On 16 August 2014 16:03, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:

 Hey Stackers,

  I'm wondering here... Samba4 is pretty solid (the upcoming 4.2 rocks), I'm
 using it on a daily basis as an AD DC controller, for both Windows and
 Linux Instances! With replication, file system ACLs - cifs, built-in LDAP,
 dynamic DNS with Bind9 as a backend (no netbios) and etc... Pretty cool!

  In OpenStack ecosystem, there are awesome solutions like Trove, Solum,
 Designate and etc... Amazing times BTW! So, why not try to integrate
 Samba4, working as an AD DC, within OpenStack itself?!

  If yes, then, what is the best way/approach to achieve this?!

  I mean, for SQL, we have Trove, for iSCSI, Cinder, Nova uses Libvirt...
 Don't you guys think that it is time to have an OpenStack project for LDAP
 too? And since Samba4 come with it, plus DNS, AD, Kerberos and etc, I think
 that it will be huge if we manage to integrate it with OpenStack.

  I think that it would be nice to have, for example: domains, users and
 groups management at Horizon, and each tenant with its own Administrator
  (not the Keystone global admin) (to manage its Samba4 domains), so, they
 will be able to fully manage its own account, while allowing Keystone to
 authenticate against these users...

  Also, maybe Designate can have support for it too! I don't know for
 sure...

  Today, I'm doing this Samba integration manually, I have an external
 Samba4, from OpenStack's point of view, then, each tenant/project, have its
 own DNS domains, when a instance boots up, I just need to do something like
 this (bootstrap):

 --
  echo 127.0.1.1 instance-1.tenant-1.domain-1.com instance-1 >> /etc/hosts
 net ads join -U administrator
 --

  To make this work, the instance just needs to use Samba4 AD DC as its
 Name Servers, configured at its /etc/resolv.conf, delivered by DHCP
 Agent. The packages `samba-common-bin` and `krb5-user` are also required.
 Including a ready to use smb.conf file.

  Then, ping instance-1.tenant-1.domain-1.com worldwide! It works for
 both IPv4 and IPv6!!

  Also, Samba4 works okay with Disjoint Namespaces
 http://technet.microsoft.com/en-us/library/cc731929(v=ws.10).aspx, so,
 each tenant can have one or more domains and subdomains! Like *.
 realm.domain.com, *.domain.com, *.cloud-net-1.domain.com,
 *.domain2.com... All dynamic managed by Samba4 and Bind9!

  What about that?!

 Cheers!
 Thiago

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Taking a break..

2014-10-22 Thread Chris Behrens
Hey all,

Just wanted to drop a quick note to say that I decided to leave Rackspace to 
pursue another opportunity. My last day was last Friday. I won’t have much time 
for OpenStack, but I’m going to continue to hang out in the channels. Having 
been involved in the project since day 1, I’m going to find it difficult to 
fully walk away. I really don’t know how much I’ll continue to stay involved. I 
am completely burned out on nova. However, I’d really like to see versioned 
objects broken out into oslo and Ironic synced with nova’s object advancements. 
So, if I work on anything, it’ll probably be related to that.

Cells will be left in a lot of capable hands. I have shared some thoughts with 
people on how I think we can proceed to make it ‘the way’ in nova. I’m going to 
work on documenting some of this in an etherpad so the thoughts aren’t lost.

Anyway, it’s been fun… the project has grown like crazy! Keep on trucking... 
And while I won’t be active much, don’t be afraid to ping me!

- Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Backup of information about nodes.

2014-10-22 Thread Sergii Golovatiuk
Hi Andrew,

Thank you for sharing your ideas. We have a similar blueprint where you
should be able to save/restore information about your environment:

https://blueprints.launchpad.net/fuel/+spec/save-and-restore-env-settings

For development, it's very useful when you need to create an identical
environment (including networks) or for other specific tasks. Also you may use
this to back up information about a cluster and restore one particular
node.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Wed, Oct 22, 2014 at 6:07 PM, Andrey Volochay avoloc...@mirantis.com
wrote:

 Hi, everyone.

 For one project we need to have backup of info about nodes (astute.yaml).
 In case the Fuel and a Node-n is down.

 How a bad idea to keep a copy of the astute.yaml file of each node to each
 node of the cluster?
 For example:
 pod_state/node-1.yaml
 pod_state/node-2.yaml
 pod_state/node-3.yaml
 pod_state/node-n.yaml

 I have idea. Add new deployment engine for astute.yaml and switcher of
 engines. Then we will be able to choose between two ways.

 engine here: fuel-astute/lib/astute/deployment_engine

 --
 Regards,
 Andrey Volochay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Pulling nova/virt/hardware.py into nova/objects/

2014-10-22 Thread Jay Pipes

On 10/21/2014 05:44 AM, Nikola Đipanov wrote:

On 10/20/2014 07:38 PM, Jay Pipes wrote:

Hi Dan, Dan, Nikola, all Nova devs,

OK, so in reviewing Dan B's patch series that refactors the virt
driver's get_available_resource() method [1], I am stuck between two
concerns. I like (love even) much of the refactoring work involved in
Dan's patches. They replace a whole bunch of our nested dicts that are
used in the resource tracker with real objects -- and this is something
I've been harping on about for months, since those dicts really hinder developers'
understanding of Nova's internals.

However, all of the object classes that Dan B has introduced have been
unversioned objects -- i.e. they have not derived from
nova.objects.base.NovaObject. This means that these objects cannot be
sent over the wire via an RPC API call. In practical terms, this issue
has not yet reared its head, because the resource tracker still sends a
dictified JSON representation of the object's fields directly over the
wire, in the same format as Icehouse, therefore there have been no
breakages in RPC API compatibility.

The problems with having all these objects not modelled by deriving from
nova.objects.base.NovaObject are two-fold:

  * The object's fields/schema cannot be changed -- or rather, cannot be
changed without introducing upgrade problems.
  * The objects introduce a different way of serializing the object
contents than is used in nova/objects -- it's not that much different,
but it's different, and only has not caused a problem because the
serialization routines are not yet being used to transfer data over the
wire

So, what to do? Clearly, I think the nova/virt/hardware.py objects are
badly needed. However, one of (the top?) priorities of the Nova project
is upgradeability, and by not deriving from
nova.objects.base.NovaObject, these nova.virt.hardware objects are
putting that mission in jeopardy, IMO.

My proposal is that before we go and approve any BPs or patches that add
to nova/virt/hardware.py, we first put together a patch series that
moves the object models in nova/virt/hardware.py to being full-fledged
objects in nova/objects/*


I think that we should have both in some cases, and although it makes
sense to have them only as objects in some cases - having them as
separate classes for some and not others may be confusing.

So when does it make sense to have them as separate classes? Well
basically whenever there is a need for driver-agnostic logic that will
be used outside of the driver (scheduler/claims/API/). Can this stuff go
in objects? Technically yes, but objects are really not a good place for
such logic as they may already be trying to solve too much (data
versioning and downgrading when there is a multi version cloud running,
database access for compute, and there are at least 2 more features
considered to be part of objects - cells integration and schema data
migrations).

Take CPU pinning as an example [1] - none of that logic would benefit
from living in the NovaObject child class itself, and will make it quite
bloated. Having it in the separate module objects can call into is
definitely beneficial, while we definitely should stay with objects for
versioning/backporting support. So I say in a number of cases we need both.

Both is exactly what I did for NUMA, with the exception of the compute
node side (we are hoping to start the json blob cleanup in K so I did
not concern myself with it for the sake of getting things done, but we
will need it). This is what I am doing now with CPU pinning.

The question I did not touch upon is what kind of interface does that
leave poor Nova developers with. Having everything as objects would
allow us to write things like (in the CPU pinning case):

   instance.cpu_pinning = compute.cpu_pinning.get_pinning_for_instance(
  instance)

Pretty slick, no? While keeping it completely separate would make us do
things like

   cpu_pinning = compute.cpu_pinning.topology_from_obj()
   if cpu_pinning:
 instance_pinning = cpu_pinning.get_pinning_for_instance(
 instance.cpu_pinning.topology_from_obj())
 instance.cpu_pinning = objects.InstanceCPUPinning.obj_from_topology(
 instance_pinning)

Way less slick, but can be easily fixed with a level of indirection.
Note that the above holds only once we have objectified everywhere --
until then we pretty much *have* to have both.

So to sum up - what I think we should do is:

1) Don't bloat the object code with low level stuff


By low-level stuff, if you mean methods that *do* something with the 
data in an object, I agree. However, the nova.objects framework should 
be used to represent the *data fields* of any object model that can 
potentially be transferred over the RPC API wires or stored in backend 
storage (DB or otherwise).
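
As a minimal illustration only (a sketch, not actual Nova code), a versioned
counterpart of a hardware.py class would declare its data fields through the
nova.objects field types and carry a version, e.g.:

    from nova.objects import base
    from nova.objects import fields

    class VirtCPUTopology(base.NovaObject):
        # Bump VERSION whenever fields are added/renamed/re-typed so that
        # older nodes can request a backported representation.
        VERSION = '1.0'

        fields = {
            'sockets': fields.IntegerField(nullable=True),
            'cores': fields.IntegerField(nullable=True),
            'threads': fields.IntegerField(nullable=True),
        }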


The reason is that the data fields of these objects will all surely need 
to undergo some changes -- field renames/adds/deletes/re-types, etc -- 
and that is where the nova.objects framework 

Re: [openstack-dev] [nova] Pulling nova/virt/hardware.py into nova/objects/

2014-10-22 Thread Jay Pipes

On 10/21/2014 04:51 PM, Dan Smith wrote:

The rationale behind two parallel data model hiercharies is that the
format the virt drivers report data in, is not likely to be exactly
the same as the format that the resoure tracker / scheduler wishes to
use in the database.


Yeah, and in cases where we know where that line is, it makes sense to
use the lighter-weight modeling for sure.


FWIW, my patch series is logically split up into two parts. THe first
10 or so patches are just thought of as general cleanup and useful to
Nova regardless of what we decide todo. The second 10 or so patches
are where the objects start appearing  getting used  the controversial
bits needing mor detailed discussion.


Right, so after some discussion I think we should go ahead and merge the
bottom of this set (all of them are now acked I think) and continue the
discussion on the top half where the modeling is introduced.


Agreed.

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Taking a break..

2014-10-22 Thread Devananda van der Veen
Chris,

All the best on your next adventure - you'll be missed here!

-Deva

On Wed, Oct 22, 2014 at 10:37 AM, Chris Behrens cbehr...@codestud.com wrote:
 Hey all,

 Just wanted to drop a quick note to say that I decided to leave Rackspace to 
 pursue another opportunity. My last day was last Friday. I won’t have much 
 time for OpenStack, but I’m going to continue to hang out in the channels. 
 Having been involved in the project since day 1, I’m going to find it 
 difficult to fully walk away. I really don’t know how much I’ll continue to 
 stay involved. I am completely burned out on nova. However, I’d really like 
 to see versioned objects broken out into oslo and Ironic synced with nova’s 
 object advancements. So, if I work on anything, it’ll probably be related to 
 that.

 Cells will be left in a lot of capable hands. I have shared some thoughts 
 with people on how I think we can proceed to make it ‘the way’ in nova. I’m 
 going to work on documenting some of this in an etherpad so the thoughts 
 aren’t lost.

 Anyway, it’s been fun… the project has grown like crazy! Keep on trucking... 
 And while I won’t be active much, don’t be afraid to ping me!

 - Chris


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Taking a break..

2014-10-22 Thread Morgan Fainberg
Chris,

Best of luck on the new adventure! Definitely don’t be a stranger! 

Cheers,
Morgan

 On Oct 22, 2014, at 10:37, Chris Behrens cbehr...@codestud.com wrote:
 
 Hey all,
 
 Just wanted to drop a quick note to say that I decided to leave Rackspace to 
 pursue another opportunity. My last day was last Friday. I won’t have much 
 time for OpenStack, but I’m going to continue to hang out in the channels. 
 Having been involved in the project since day 1, I’m going to find it 
 difficult to fully walk away. I really don’t know how much I’ll continue to 
 stay involved. I am completely burned out on nova. However, I’d really like 
 to see versioned objects broken out into oslo and Ironic synced with nova’s 
 object advancements. So, if I work on anything, it’ll probably be related to 
 that.
 
 Cells will be left in a lot of capable hands. I have shared some thoughts 
 with people on how I think we can proceed to make it ‘the way’ in nova. I’m 
 going to work on documenting some of this in an etherpad so the thoughts 
 aren’t lost.
 
 Anyway, it’s been fun… the project has grown like crazy! Keep on trucking... 
 And while I won’t be active much, don’t be afraid to ping me!
 
 - Chris
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Taking a break..

2014-10-22 Thread Chris K
Chris,

All the best to you on you new adventure.

Chris Krelle
NobodyCam

On Wed, Oct 22, 2014 at 10:37 AM, Chris Behrens cbehr...@codestud.com
wrote:

 Hey all,

 Just wanted to drop a quick note to say that I decided to leave Rackspace
 to pursue another opportunity. My last day was last Friday. I won’t have
 much time for OpenStack, but I’m going to continue to hang out in the
 channels. Having been involved in the project since day 1, I’m going to
 find it difficult to fully walk away. I really don’t know how much I’ll
 continue to stay involved. I am completely burned out on nova. However, I’d
 really like to see versioned objects broken out into oslo and Ironic synced
 with nova’s object advancements. So, if I work on anything, it’ll probably
 be related to that.

 Cells will be left in a lot of capable hands. I have shared some thoughts
 with people on how I think we can proceed to make it ‘the way’ in nova. I’m
 going to work on documenting some of this in an etherpad so the thoughts
 aren’t lost.

 Anyway, it’s been fun… the project has grown like crazy! Keep on
 trucking... And while I won’t be active much, don’t be afraid to ping me!

 - Chris


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Taking a break..

2014-10-22 Thread Dan Smith
 I won’t have much time for OpenStack, but I’m going to continue to
 hang out in the channels.

Nope, sorry, veto.

Some options to explain your way out:

1. Oops, I forgot it wasn't April
2. I have a sick sense of humor; I'm getting help for it
3. I've come to my senses after a brief break from reality

Seriously, I don't recall a gerrit review for this terrible plan...

 Anyway, it’s been fun… the project has grown like crazy! Keep on
 trucking... And while I won’t be active much, don’t be afraid to ping
 me!

Well, I for one am really sorry to see you go. I'd be lying if I said I
hope that your next opportunity leaves you daydreaming about going back
to OpenStack before too long. However, if not, good luck!

--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Taking a break..

2014-10-22 Thread Lucas Alvares Gomes
Chris,

It was great to work with you, best of luck and enjoy this new opportunity.

Cheers,
Lucas

On Wed, Oct 22, 2014 at 6:50 PM, Morgan Fainberg
morgan.fainb...@gmail.com wrote:
 Chris,

 Best of luck on the new adventure! Definitely don’t be a stranger!

 Cheers,
 Morgan

 On Oct 22, 2014, at 10:37, Chris Behrens cbehr...@codestud.com wrote:

 Hey all,

 Just wanted to drop a quick note to say that I decided to leave Rackspace to 
 pursue another opportunity. My last day was last Friday. I won’t have much 
 time for OpenStack, but I’m going to continue to hang out in the channels. 
 Having been involved in the project since day 1, I’m going to find it 
 difficult to fully walk away. I really don’t know how much I’ll continue to 
 stay involved. I am completely burned out on nova. However, I’d really like 
 to see versioned objects broken out into oslo and Ironic synced with nova’s 
 object advancements. So, if I work on anything, it’ll probably be related to 
 that.

 Cells will be left in a lot of capable hands. I have shared some thoughts 
 with people on how I think we can proceed to make it ‘the way’ in nova. I’m 
 going to work on documenting some of this in an etherpad so the thoughts 
 aren’t lost.

 Anyway, it’s been fun… the project has grown like crazy! Keep on trucking... 
 And while I won’t be active much, don’t be afraid to ping me!

 - Chris


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding new dependencies to stackforge projects

2014-10-22 Thread Jeremy Stanley
On 2014-10-22 12:31:53 -0400 (-0400), Davanum Srinivas wrote:
 fyi, latest update after discussion on #openstack-infra, consensus
 seems to be to allow projects to add to g-r.
[...]

If that's deemed unacceptable for other reasons, the alternative
solution which was floated is to tweak setup_package_with_req_sync()
in the openstack-dev/devstack functions-common file to only call
update.py on projects listed in the openstack/requirements
projects.txt file. This will keep from triggering the restriction
in update.py against unknown requirements in projects which don't
opt into requirements enforcement.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MagnetoDB] PTL Elections

2014-10-22 Thread Sergey Lukjanov
Hi folks,

due to the requirement to have PTL for the program, we're running
elections for the MagnetoDB PTL for Kilo cycle. Schedule and policies
are fully aligned with official OpenStack PTLs elections.

You can find more info in official elections wiki page [0] and
the same page for MagnetoDB elections [1], additionally some more info
in the past official nominations opening email [2].

Timeline:

till 05:59 UTC October 27, 2014: Open candidacy to PTL positions
October 27, 2014 - 1300 UTC October 31, 2014: PTL elections

To announce your candidacy please start a new openstack-dev at
lists.openstack.org mailing list thread with the following subject:
[MagnetoDB] PTL Candidacy.

[0] https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014
[1] https://wiki.openstack.org/wiki/MagnetoDB/PTL_Elections_Kilo
[2]
http://lists.openstack.org/pipermail/openstack-dev/2014-March/031239.html

Thank you.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-22 Thread Doug Hellmann
The application projects are dropping python 2.6 support during Kilo, and I’ve 
had several people ask recently about what this means for Oslo. Because we 
create libraries that will be used by stable versions of projects that still 
need to run on 2.6, we are going to need to maintain support for 2.6 in Oslo 
until Juno is no longer supported, at least for some of our projects. After 
Juno’s support period ends we can look again at dropping 2.6 support in all of 
the projects.


I think these rules cover all of the cases we have:

1. Any Oslo library in use by an API client that is used by a supported stable 
branch (Icehouse and Juno) needs to keep 2.6 support.

2. If a client library needs a library we graduate from this point forward, we 
will need to ensure that library supports 2.6.

3. Any Oslo library used directly by a supported stable branch of an 
application needs to keep 2.6 support.

4. Any Oslo library graduated during Kilo can drop 2.6 support, unless one of 
the previous rules applies.

5. The stable/icehouse and stable/juno branches of the incubator need to retain 
2.6 support for as long as those versions are supported.

6. The master branch of the incubator needs to retain 2.6 support until we 
graduate all of the modules that will go into libraries used by clients.


A few examples:

- oslo.utils was graduated during Juno and is used by some of the client 
libraries, so it needs to maintain python 2.6 support.

- oslo.config was graduated several releases ago and is used directly by the 
stable branches of the server projects, so it needs to maintain python 2.6 
support.

- oslo.log is being graduated in Kilo and is not yet in use by any projects, so 
it does not need python 2.6 support.

- oslo.cliutils and oslo.apiclient are on the list to graduate in Kilo, but 
both are used by client projects, so they need to keep python 2.6 support. At 
that point we can evaluate the code that remains in the incubator and see if 
we’re ready to turn of 2.6 support there.


Let me know if you have questions about any specific cases not listed in the 
examples.

Doug

PS - Thanks to fungi and clarkb for helping work out the rules above.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] command execution refactor plan

2014-10-22 Thread Csaba Henk
Hi,

I have these concerns regarding command execution in Manila.
I was going to propose these to be discussed at the Design Summit.
It might be too late for that; if so, we can discuss them here --
or at the summit, just informally.

Thanks to Valeriy for his ideas about some of the topics below.

Driver ancestry
---

See ExecuteMixin [1]:
- method _try_execute is not used anywhere
- method set_execute is only used in __init__ and could be inlined
- __init__ just inlines ShareDriver.__init__ and sets up self._execute
- ExecuteMixin itself is used only in conjunction with ShareDriver
  (to derive specific driver classes from them)

[1] 
http://git.openstack.org/cgit/openstack/manila/tree/manila/share/driver.py?id=f67311a#n49

Plan: eliminate ExecuteMixin
- Drop _try_execute. If it's needed in the future, it can be resurrected as a
  utility function (i.e. a top-level function in utils.py).
- Drop set_execute; instead set self._execute directly.
- ShareDriver.__init__ should take over execute management from ExecuteMixin,
  i.e. set self._execute from kwargs['execute'] if available, else fall back to
  utils.execute (a rough sketch follows the Impact example below).
- Drop the ExecuteMixin class.

Impact: a driver class definition that currently takes this form:

  SomeDriver(driver.ExecuteMixin, driver.ShareDriver)

will be identical to the

  SomeDriver(driver.ShareDriver)

definition after the change.
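
To illustrate the ShareDriver.__init__ item above, here is a rough,
simplified sketch (not the actual Manila code; it assumes manila.utils.execute
as the default local executor):

    from manila import utils

    class ShareDriver(object):
        # Sketch only: the real class has many more responsibilities.

        def __init__(self, *args, **kwargs):
            # Take over what ExecuteMixin.set_execute() used to do: accept
            # an injected executor, or fall back to the standard local one.
            self._execute = kwargs.pop('execute', utils.execute)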

SSH execution
-

## Signature

Terminology:
-  by 'executor' we mean a function that's used for command execution
   in some way (locally, remotely, etc.)
-  the standard (or default, cf. previous section) executor is
   utils.execute; we will refer to its signature [2] as the 'standard
   executor signature'
-  'standard signature executor': an executor the signature of which is
   the standard executor signature

[2] in the sense of http://en.wikipedia.org/wiki/Type_signature

The demand for transparent remote execution naturally arises for
drivers which do command-line based management of a resource or service
that can be available both locally and remotely.

That is, an instance of such a resource manager class ideally would get a
standard signature executor at initialization and it would perform executions
via this executor, with no knowledge of whether it acts locally or remotely.
Providing the executor, appropriately set up for either local or remote 
execution,
is up to the instantiator.

However, currently local command execution is typically done by utils.execute
(or a wrapper that changes some defaults), and remote execution is
typically done with processutils.ssh_execute (or some wrapper of it).
And these two functions differ in signature, because ssh_execute takes
also an ssh connection object as parameter.

Thus such manager classes either give up on execution transparency and come in
local and remote variants, or they implement some wrapper around
processutils.ssh_execute that ensures the standard executor signature.

My proposal is:
- implement and standardize an ssh executor with standard signature
- it would be built atop of processutils.ssh_execute
- it would apply some basic strategy for managing the ssh connection
  internally
- let all drivers use this executor instead of direct processutils.ssh_execute
  invocations (unless there is a manifest demand for custom management of
  the ssh connection)

Here is a prototype of that idea:

https://review.openstack.org/gitweb?p=openstack/manila.git;a=blob;f=manila/share/drivers/ganesha/utils.py;h=463da36;hb=7443672#l70

It's a class that's instantiated with the same parameters as an ssh connection,
and the corresponding ssh connection is created upon instantiation. Moreover,
it's a callable class [3], with standard executor signature, whereby a call of 
an
instance of it performs remote execution using the ssh connection with which the
instance had been initialized.

[3] ie. instances of it are callable, ie. the __call__ instance method is
defined, cf. https://docs.python.org/2/reference/datamodel.html#object.__call__

## Escaping

The other major difference between local and remote (SSH based) execution is that
the former is argument-array based while the latter is command-string based (the
string is passed to a shell at the remote end). This is not a problem, as the
argument array can be transformed in a safe and faithful manner into a shell
command string by shell-escaping the arguments and then joining them with spaces.
This algorithm is in fact required to get a correct result. So the standard ssh
executor will also include it properly. Therefore standardizing ssh execution will
improve safety by eliminating the need to do ad hoc conversions between argument
arrays and command strings.
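
For illustration, here is a simplified, hypothetical sketch of such a callable
executor (the real prototype is the Ganesha utils module linked above; the
processutils import path, the way the ssh connection is obtained, and the sudo
handling -- which anticipates the run_as_root section below -- are assumptions):

    from pipes import quote  # shell escaping (shlex.quote on Python 3)

    from manila.openstack.common import processutils

    class SSHExecutor(object):
        # Callable with the standard executor signature, bound to one
        # already-established ssh connection.

        def __init__(self, ssh_connection):
            self.ssh = ssh_connection

        def __call__(self, *cmd, **kwargs):
            # Escape each argument and join with spaces so the remote
            # shell sees a faithful reproduction of the argument array.
            cmd_string = ' '.join(quote(str(arg)) for arg in cmd)
            if kwargs.pop('run_as_root', False):
                cmd_string = 'sudo ' + cmd_string
            return processutils.ssh_execute(self.ssh, cmd_string, **kwargs)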

## run_as_root

processutils.ssh_execute does not handle the run_as_root parameter. That's however
low-hanging fruit -- when it is set, sudo(8) (or a custom root wrapper) just needs
to be prepended to the command string. (It is actually a realistic concern,
as remote nodes might be set up to allow remote log in with 

[openstack-dev] [sahara] team meeting Oct 23 1800 UTC

2014-10-22 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meetingiso=20141023T18

P.S. The main topic is finalisation of design summit schedule.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Taking a break..

2014-10-22 Thread Tim Bell

Chris,

Thanks for your work on cells in OpenStack nova... we're heavily exploiting it 
to scale out the CERN cloud.

Tim

 -Original Message-
 From: Chris Behrens [mailto:cbehr...@codestud.com]
 Sent: 22 October 2014 19:37
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] Taking a break..
 
 Hey all,
 
 Just wanted to drop a quick note to say that I decided to leave Rackspace to
 pursue another opportunity. My last day was last Friday. I won’t have much 
 time
 for OpenStack, but I’m going to continue to hang out in the channels. Having
 been involved in the project since day 1, I’m going to find it difficult to 
 fully walk
 away. I really don’t know how much I’ll continue to stay involved. I am
 completely burned out on nova. However, I’d really like to see versioned 
 objects
 broken out into oslo and Ironic synced with nova’s object advancements. So, 
 if I
 work on anything, it’ll probably be related to that.
 
 Cells will be left in a lot of capable hands. I have shared some thoughts with
 people on how I think we can proceed to make it ‘the way’ in nova. I’m going 
 to
 work on documenting some of this in an etherpad so the thoughts aren’t lost.
 
 Anyway, it’s been fun… the project has grown like crazy! Keep on trucking... 
 And
 while I won’t be active much, don’t be afraid to ping me!
 
 - Chris
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Cells conversation starter

2014-10-22 Thread Andrew Laski


On 10/22/2014 12:24 AM, Tom Fifield wrote:

On 22/10/14 03:07, Andrew Laski wrote:

On 10/21/2014 04:31 AM, Nikola Đipanov wrote:

On 10/20/2014 08:00 PM, Andrew Laski wrote:

One of the big goals for the Kilo cycle by users and developers of the
cells functionality within Nova is to get it to a point where it can be
considered a first class citizen of Nova.  Ultimately I think this comes
down to getting it tested by default in Nova jobs, and making it easy
for developers to work with.  But there's a lot of work to get there.
In order to raise awareness of this effort, and get the conversation
started on a few things, I've summarized a little bit about cells and
this effort below.


Goals:

Testing of a single cell setup in the gate.
Feature parity.
Make cells the default implementation.  Developers write code once and
it works for  cells.

Ultimately the goal is to improve maintainability of a large feature
within the Nova code base.


Thanks for the write-up Andrew! Some thoughts/questions below. Looking
forward to the discussion on some of these topics, and would be happy to
review the code once we get to that point.


Feature gaps:

Host aggregates
Security groups
Server groups


Shortcomings:

Flavor syncing
  This needs to be addressed now.

Cells scheduling/rescheduling
Instances can not currently move between cells
  These two won't affect the default one cell setup so they will be
addressed later.


What does cells do:

Schedule an instance to a cell based on flavor slots available.
Proxy API requests to the proper cell.
Keep a copy of instance data at the global level for quick retrieval.
Sync data up from a child cell to keep the global level up to date.


Simplifying assumptions:

Cells will be treated as a two level tree structure.


Are we thinking of making this official by removing code that actually
allows cells to be an actual tree of depth N? I am not sure if doing so
would be a win, although it does complicate the RPC/Messaging/State code
a bit, but if it's not being used, even though a nice generalization,
why keep it around?

My preference would be to remove that code since I don't envision anyone
writing tests to ensure that functionality works and/or doesn't
regress.  But there's the challenge of not knowing if anyone is actually
relying on that behavior.  So initially I'm not creating a specific work
item to remove it.  But I think it needs to be made clear that it's not
officially supported and may get removed unless a case is made for
keeping it and work is put into testing it.

While I agree that N is a bit interesting, I have seen N=3 in production

[central API]--[state/region1]--[state/region DC1]
\-[state/region DC2]
   --[state/region2 DC]
   --[state/region3 DC]
   --[state/region4 DC]


I would be curious to hear any information about how this is working 
out.  Does everything that works for N=2 work when N=3?  Were there fixes 
that needed to be added to make this work?  Why do it this way rather 
than bring [state/region DC1] and [state/region DC2] up a level?







Plan:

Fix flavor breakage in child cell which causes boot tests to fail.
Currently the libvirt driver needs flavor.extra_specs which is not
synced to the child cell.  Some options are to sync flavor and extra
specs to child cell db, or pass full data with the request.
https://review.openstack.org/#/c/126620/1 offers a means of passing full
data with the request.

Determine proper switches to turn off Tempest tests for features that
don't work with the goal of getting a voting job.  Once this is in place
we can move towards feature parity and work on internal refactorings.

Work towards adding parity for host aggregates, security groups, and
server groups.  They should be made to work in a single cell setup, but
the solution should not preclude them from being used in multiple
cells.  There needs to be some discussion as to whether a host aggregate
or server group is a global concept or per cell concept.


Have there been any previous discussions on this topic? If so I'd really
like to read up on those to make sure I understand the pros and cons
before the summit session.

The only discussion I'm aware of is some comments on
https://review.openstack.org/#/c/59101/ , though they mention a
discussion at the Utah mid-cycle.

The main con I'm aware of for defining these as global concepts is that
there is no rescheduling capability in the cells scheduler.  So if a
build is sent to a cell with a host aggregate that can't fit that
instance the build will fail even though there may be space in that host
aggregate from a global perspective.  That should be somewhat
straightforward to address though.

I think it makes sense to define these as global concepts.  But these
are features that aren't used with cells yet so I haven't put a lot of
thought into potential arguments or cases for doing this one way or
another.



Work towards merging 

Re: [openstack-dev] [Nova] Cells conversation starter

2014-10-22 Thread Andrew Laski


On 10/22/2014 03:42 AM, Vineet Menon wrote:


On 22 October 2014 06:24, Tom Fifield t...@openstack.org 
mailto:t...@openstack.org wrote:


On 22/10/14 03:07, Andrew Laski wrote:

 On 10/21/2014 04:31 AM, Nikola Đipanov wrote:
 On 10/20/2014 08:00 PM, Andrew Laski wrote:
 One of the big goals for the Kilo cycle by users and
developers of the
 cells functionality within Nova is to get it to a point where
it can be
 considered a first class citizen of Nova.  Ultimately I think
this comes
 down to getting it tested by default in Nova jobs, and making
it easy
 for developers to work with.  But there's a lot of work to get
there.
 In order to raise awareness of this effort, and get the
conversation
 started on a few things, I've summarized a little bit about
cells and
 this effort below.


 Goals:

 Testing of a single cell setup in the gate.
 Feature parity.
 Make cells the default implementation. Developers write code
once and
 it works for  cells.

 Ultimately the goal is to improve maintainability of a large
feature
 within the Nova code base.

 Thanks for the write-up Andrew! Some thoughts/questions below.
Looking
 forward to the discussion on some of these topics, and would be
happy to
 review the code once we get to that point.

 Feature gaps:

 Host aggregates
 Security groups
 Server groups


 Shortcomings:

 Flavor syncing
  This needs to be addressed now.

 Cells scheduling/rescheduling
 Instances can not currently move between cells
  These two won't affect the default one cell setup so they
will be
 addressed later.


 What does cells do:

 Schedule an instance to a cell based on flavor slots available.
 Proxy API requests to the proper cell.
 Keep a copy of instance data at the global level for quick
retrieval.
 Sync data up from a child cell to keep the global level up to
date.


 Simplifying assumptions:

 Cells will be treated as a two level tree structure.

 Are we thinking of making this official by removing code that
actually
 allows cells to be an actual tree of depth N? I am not sure if
doing so
 would be a win, although it does complicate the
RPC/Messaging/State code
 a bit, but if it's not being used, even though a nice
generalization,
 why keep it around?

 My preference would be to remove that code since I don't
envision anyone
 writing tests to ensure that functionality works and/or doesn't
 regress.  But there's the challenge of not knowing if anyone is
actually
 relying on that behavior.  So initially I'm not creating a
specific work
 item to remove it.  But I think it needs to be made clear that
it's not
 officially supported and may get removed unless a case is made for
 keeping it and work is put into testing it.

While I agree that N is a bit interesting, I have seen N=3 in
production

[central API]--[state/region1]--[state/region DC1]
   \-[state/region DC2]
  --[state/region2 DC]
  --[state/region3 DC]
  --[state/region4 DC]

I'm curious.
What are the use cases for this deployment? Presumably the root node runs 
n-api along with horizon, key management etc. What components are 
deployed in tier 2 and tier 3?
And AFAIK, currently, an OpenStack cell deployment isn't even a tree but a 
DAG, since one cell can have multiple parents. Has anyone come up with any 
such requirement?





While there's nothing to prevent a cell from having multiple parents I 
would be curious to know if this would actually work in practice, since 
I can imagine a number of cases that might cause problems. And is there 
a practical use for this?


Maybe we should start logging a warning when this is set up, stating that 
this is an unsupported (i.e. untested) configuration, to start to codify 
the design as that of a tree.  At least for the initial scope of work I 
think this makes sense, and if a case is made for a DAG setup that can 
be done independently.





 Plan:

 Fix flavor breakage in child cell which causes boot tests to fail.
 Currently the libvirt driver needs flavor.extra_specs which is not
 synced to the child cell.  Some options are to sync flavor and
extra
 specs to child cell db, or pass full data with the request.
 https://review.openstack.org/#/c/126620/1 offers a means of
passing full
 data with the request.

 Determine proper switches to turn off Tempest tests for
features that
 don't work with the goal of getting a voting job.  Once this
is in place
 we can move towards feature parity and work on internal
refactorings.

 Work towards adding parity for 

Re: [openstack-dev] [Fuel] Backup of information about nodes.

2014-10-22 Thread Adam Lawson
What is the current best practice to restore a failed Fuel node?


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


On Wed, Oct 22, 2014 at 10:40 AM, Sergii Golovatiuk 
sgolovat...@mirantis.com wrote:

 Hi Andrew,

 Thank you for sharing your ideas. We have similar blueprint where you
 should be able to save/restore information about your environment

 https://blueprints.launchpad.net/fuel/+spec/save-and-restore-env-settings

 For development, it's very useful when you need to create the identical
 environment (including networks) or other specific tasks. Also you may use
 the case to backup information about a cluster and restore one particular
 node.

 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 On Wed, Oct 22, 2014 at 6:07 PM, Andrey Volochay avoloc...@mirantis.com
 wrote:

 Hi, everyone.

 For one project we need to have backup of info about nodes (astute.yaml).
 In case the Fuel and a Node-n is down.

 How a bad idea to keep a copy of the astute.yaml file of each node to
 each node of the cluster?
 For example:
 pod_state/node-1.yaml
 pod_state/node-2.yaml
 pod_state/node-3.yaml
 pod_state/node-n.yaml

 I have idea. Add new deployment engine for astute.yaml and switcher of
 engines. Then we will be able to choose between two ways.

 engine here: fuel-astute/lib/astute/deployment_engine

 --
 Regards,
 Andrey Volochay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-22 Thread Bob Melander (bmelande)
I suppose this BP also has some relevance to such a discussion.

https://review.openstack.org/#/c/100278/

/ Bob


On 2014-10-22 15:42, Kyle Mestery mest...@mestery.com wrote:

There are currently at least two BPs registered for VLAN trunk support
to VMs in neutron-specs [1] [2]. This is clearly something that I'd
like to see us land in Kilo, as it enables a bunch of things for the
NFV use cases. I'm going to propose that we talk about this at an
upcoming Neutron meeting [3]. Given the rotating schedule of this
meeting, and the fact the Summit is fast approaching, I'm going to
propose we allocate a bit of time in next Monday's meeting to discuss
this. It's likely we can continue this discussion F2F in Paris as
well, but getting a head start would be good.

Thanks,
Kyle

[1] https://review.openstack.org/#/c/94612/
[2] https://review.openstack.org/#/c/97714
[3] https://wiki.openstack.org/wiki/Network/Meetings

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] proposed summit session topics

2014-10-22 Thread Doug Hellmann

On Oct 20, 2014, at 1:22 PM, Doug Hellmann d...@doughellmann.com wrote:

 After today’s meeting, we have filled our seven session slots. Here’s the 
 proposed list, in no particular order. If you think something else needs to 
 be on the list, speak up today because I’ll be plugging all of this into the 
 scheduling tool in the next day or so.
 
 https://etherpad.openstack.org/p/kilo-oslo-summit-topics
 
 * oslo.messaging
  * need more reviewers
  * what to do about keeping drivers up to date / moving them out of the main 
 tree
  * python 3 support
 
 * Graduation schedule
 
 * Python 3
  * what other than oslo.messaging / eventlet should (or can) we be working on?
 
 * Alpha versioning
 
 * Namespace packaging
 
 * Quota management
  * What should the library do?
  * How do we manage database schema info from the incubator or a library if 
 the app owns the migration scripts?
 
 * taskflow
  * needs more reviewers
  * removing duplication with other oslo libraries

I’ve pushed our schedule to http://kilodesignsummit.sched.org but it will take 
a little while for the sync to happen. In the mean time, here’s what I came up 
with:

2014-11-05 11:00  - Oslo graduation schedule 
2014-11-05 11:50  - oslo.messaging 
2014-11-05 13:50  - A Common Quota Management Library 
2014-11-06 11:50  - taskflow 
2014-11-06 13:40  - Using alpha versioning for Oslo libraries 
2014-11-06 16:30  - Python 3 support in Oslo 
2014-11-06 17:20  - Moving Oslo away from namespace packages 

That should allow the QA and Infra teams to participate in the versioning and 
packaging discussions, Salvatore to be present for the quota library session 
(and lead it, I hope), and the eNovance guys who also work on ceilometer to be 
there for the Python 3 session.

If you know you have a conflict with one of these times, let me know and I’ll 
see if we can juggle a little.

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Cells conversation starter

2014-10-22 Thread Andrew Laski


On 10/22/2014 12:52 AM, Michael Still wrote:

Thanks for this.

It would be interesting to see how much of this work you think is
achievable in Kilo. How long do you see this process taking? In line
with that, is it just you currently working on this? Would calling for
volunteers to help be meaningful?


I think that getting a single cell setup tested in the gate is 
achievable.  I think feature parity might be a stretch but could be 
achievable with enough hands to work on it.  Honestly I think that 
making cells the default implementation is going to take more than a 
cycle. But I think we can get some specifics worked out as to the 
direction and may be able to get to a point where the remaining work is 
mostly mechanical.


At the moment it is mainly me working on this with some support from a 
couple of people.  Volunteers would certainly be welcomed on this effort 
though.  If it would be useful perhaps we could even have a cells 
subgroup to track progress and direction of this effort.




Michael

On Tue, Oct 21, 2014 at 5:00 AM, Andrew Laski
andrew.la...@rackspace.com wrote:

One of the big goals for the Kilo cycle by users and developers of the cells
functionality within Nova is to get it to a point where it can be considered
a first class citizen of Nova.  Ultimately I think this comes down to
getting it tested by default in Nova jobs, and making it easy for developers
to work with.  But there's a lot of work to get there.  In order to raise
awareness of this effort, and get the conversation started on a few things,
I've summarized a little bit about cells and this effort below.


Goals:

Testing of a single cell setup in the gate.
Feature parity.
Make cells the default implementation.  Developers write code once and it
works for  cells.

Ultimately the goal is to improve maintainability of a large feature within
the Nova code base.


Feature gaps:

Host aggregates
Security groups
Server groups


Shortcomings:

Flavor syncing
 This needs to be addressed now.

Cells scheduling/rescheduling
Instances can not currently move between cells
 These two won't affect the default one cell setup so they will be
addressed later.


What does cells do:

Schedule an instance to a cell based on flavor slots available.
Proxy API requests to the proper cell.
Keep a copy of instance data at the global level for quick retrieval.
Sync data up from a child cell to keep the global level up to date.


Simplifying assumptions:

Cells will be treated as a two level tree structure.


Plan:

Fix flavor breakage in child cell which causes boot tests to fail. Currently
the libvirt driver needs flavor.extra_specs which is not synced to the child
cell.  Some options are to sync flavor and extra specs to child cell db, or
pass full data with the request. https://review.openstack.org/#/c/126620/1
offers a means of passing full data with the request.

Determine proper switches to turn off Tempest tests for features that don't
work with the goal of getting a voting job.  Once this is in place we can
move towards feature parity and work on internal refactorings.

Work towards adding parity for host aggregates, security groups, and server
groups.  They should be made to work in a single cell setup, but the
solution should not preclude them from being used in multiple cells.  There
needs to be some discussion as to whether a host aggregate or server group
is a global concept or per cell concept.

Work towards merging compute/api.py and compute/cells_api.py so that
developers only need to make changes/additions in once place.  The goal is
for as much as possible to be hidden by the RPC layer, which will determine
whether a call goes to a compute/conductor/cell.

For syncing data between cells, look at using objects to handle the logic of
writing data to the cell/parent and then syncing the data to the other.
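
As a purely illustrative sketch of that pattern (the class, method and RPC
call names here are assumptions, not nova's actual object code), the object
would persist the change locally and then push the same update to the other
level, hiding the sync from callers:

    # Illustrative only -- not the real nova.objects implementation.
    class CellAwareInstance(object):
        def __init__(self, context, db, cells_rpcapi, is_child_cell):
            self._context = context
            self._db = db
            self._cells_rpcapi = cells_rpcapi
            self._is_child_cell = is_child_cell
            self.uuid = None
            self._changes = {}

        def save(self):
            # Write the change to the local cell database first.
            self._db.instance_update(self._context, self.uuid, self._changes)
            # Then, if running in a child cell, push the same update to the
            # parent so its cached copy stays current.  Callers never see
            # this step, which is the point of doing the sync at this layer.
            if self._is_child_cell:
                self._cells_rpcapi.instance_update_at_top(
                    self._context, self.uuid, self._changes)
            self._changes = {}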

A potential migration scenario is to consider a non cells setup to be a
child cell and converting to cells will mean setting up a parent cell and
linking them.  There are periodic tasks in place to sync data up from a
child already, but a manual kick off mechanism will need to be added.


Future plans:

Something that has been considered, but is out of scope for now, is that the
parent/api cell doesn't need the same data model as the child cell.  Since
the majority of what it does is act as a cache for API requests, it does not
need all the data that a cell needs and what data it does need could be
stored in a form that's optimized for reads.


Thoughts?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API Workgroup git repository

2014-10-22 Thread Stefano Maffulli
Hi Chris

On 10/21/2014 11:08 PM, Christopher Yeoh wrote:
 The API Workgroup git repository has been setup and you can access it
 here.
Cool, adding it to the repos to watch. 
 There is some content there though not all the proposed guidelines from
 the wiki page are in yet:

 https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines

I think that as soon as possible the wiki pages should be deleted and 
redirected to wherever the wg will publish the authoritative content. The wiki 
gets lots of traffic from web searches, and stale content there really hurts us.

Where is the content going to live?

/stef

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API Workgroup git repository

2014-10-22 Thread Steve Martinelli
we could set up a job to publish under docs.o.org/api-wg pretty easily - 
it seems like a good place to start to publish this content.

thanks for getting the repo all setup chris and jay.


Thanks,

_
Steve Martinelli
OpenStack Development - Keystone Core Member
Phone: (905) 413-2851
E-Mail: steve...@ca.ibm.com



From:   Stefano Maffulli stef...@openstack.org
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   10/22/2014 03:22 PM
Subject:Re: [openstack-dev] [api] API Workgroup git repository



Hi Chris

On 10/21/2014 11:08 PM, Christopher Yeoh wrote:
 The API Workgroup git repository has been setup and you can access it
 here.
Cool, adding it to the repos to watch. 
 There is some content there though not all the proposed guidelines from
 the wiki page are in yet:

 https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines

I think that as soon as possible the wiki pages should be deleted and 
redirected to wherever the wg will publish the authoritative content. The 
wiki gets lots of traffic from web searches and stale content there really 
hurts us.

Where is the content going to live?

/stef

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-22 Thread Eichberger, German
Hi Jorge,

Good discussion so far + glad to have you back :)

I am not a big fan of using logs for billing information since ultimately (at 
least at HP) we need to pump it into ceilometer. So I am envisioning either the 
amphora (via a proxy) pumping it straight into that system, or collecting it on 
the controller and pumping it from there.

Allowing/enabling logging creates some requirements on the hardware, mainly 
that it can handle the I/O coming from logging. Some operators might choose to 
hook up very cheap, low-performing disks which might not be able to deal with 
the log traffic, so I would suggest some rate limiting on the log output to 
help with that.

Thanks,
German

From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Wednesday, October 22, 2014 6:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hey Stephen (and Robert),

For real-time usage I was thinking something similar to what you are proposing. 
Using logs for this would be overkill IMO so your suggestions were what I was 
thinking of starting with.

As far as storing logs is concerned I was definitely thinking of offloading 
these onto separate storage devices. Robert, I totally hear you on the 
scalability part as our current LBaaS setup generates TB of request logs. I'll 
start planning out a spec and then I'll let everyone chime in there. I just 
wanted to get a general feel for the ideas I had mentioned. I'll also bring it 
up in today's meeting.

Cheers,
--Jorge

From: Stephen Balukoff sbaluk...@bluebox.net
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, October 22, 2014 4:04 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hi Jorge!

Welcome back, eh! You've been missed.

Anyway, I just wanted to say that your proposal sounds great to me, and it's 
good to finally be closer to having concrete requirements for logging, eh. Once 
this discussion is nearing a conclusion, could you write up the specifics of 
logging into a specification proposal document?

Regarding the discussion itself: I think we can ignore UDP for now, as there 
doesn't seem to be high demand for it, and it certainly won't be supported in v 
0.5 of Octavia (and maybe not in v1 or v2 either, unless we see real demand).

Regarding the 'real-time usage' information: I have some ideas regarding 
getting this from a combination of iptables and / or the haproxy stats 
interface. Were you thinking something different that involves on-the-fly 
analysis of the logs or something?  (I tend to find that logs are great for 
non-real time data, but can often be lacking if you need, say, a gauge like 
'currently open connections' or something.)
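
For what it's worth, here is a small sketch of pulling a gauge like current
connections from haproxy's stats socket ('show stat' returns CSV with an
'scur' column; the socket path is an assumption and depends on the haproxy
configuration):

    import csv
    import socket

    def current_connections(stats_socket="/var/run/haproxy.sock"):
        # Requires haproxy to be configured with something like:
        #   stats socket /var/run/haproxy.sock
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(stats_socket)
        sock.sendall(b"show stat\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
        sock.close()
        # The first line is the CSV header, prefixed with "# ".
        text = b"".join(chunks).decode("utf-8").lstrip("# ")
        reader = csv.DictReader(text.splitlines())
        return {row["pxname"]: int(row["scur"])
                for row in reader if row.get("svname") == "FRONTEND"}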

One other thing: If there's a chance we'll be storing logs on the amphorae 
themselves, then we need to have log rotation as part of the configuration 
here. It would be silly to have an amphora failure just because its ephemeral 
disk fills up, eh.

Stephen

On Wed, Oct 15, 2014 at 4:03 PM, Jorge Miramontes 
jorge.miramon...@rackspace.com wrote:
Hey Octavia folks!


First off, yes, I'm still alive and kicking. :)

I'd like to start a conversation on usage requirements and have a few
suggestions. I advocate that, since we will be using TCP and HTTP/HTTPS
based protocols, we inherently enable connection logging for load
balancers for several reasons:

1) We can use these logs as the raw and granular data needed to track
usage. With logs, the operator has flexibility as to what usage metrics
they want to bill against. For example, bandwidth is easy to track and can
even be split into header and body data so that the provider can choose if
they want to bill on header data or not. Also, the provider can determine
if they will bill their customers for failed requests that were the fault
of the provider themselves. These are just a few examples; the point is
the flexible nature of logs.

2) Creating billable usage from logs is easy compared to other options
like polling. For example, in our current LBaaS iteration at Rackspace we
bill partly on average concurrent connections. This is based on polling
and is not as accurate as it possibly can be. It's very close, but it
doesn't get more accurate than the logs themselves. Furthermore, polling
is more complex and uses up resources on the polling cadence.

3) Enabling logs for all load balancers can be used for debugging, support
and audit purposes. While the customer may or may not want their logs
uploaded to swift, operators and their support teams can still use this
data to help customers out with billing and setup 

Re: [openstack-dev] [api] API Workgroup git repository

2014-10-22 Thread Anne Gentle
On Wed, Oct 22, 2014 at 2:26 PM, Steve Martinelli steve...@ca.ibm.com
wrote:

 we could set up a job to publish under docs.o.org/api-wg pretty easily -
 it seems like a good place to start to publish this content.

 thanks for getting the repo all setup chris and jay.


Thanks for setting up the repo.

We probably want it to go to docs.openstack.org/developer/api-wg and link
to it from here:

http://docs.openstack.org/developer/openstack-projects.html

Anne



 Thanks,

 _
 Steve Martinelli
 OpenStack Development - Keystone Core Member
 Phone: (905) 413-2851
 E-Mail: steve...@ca.ibm.com



 From:Stefano Maffulli stef...@openstack.org
 To:OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date:10/22/2014 03:22 PM
 Subject:Re: [openstack-dev] [api] API Workgroup git repository
 --



 Hi Chris

 On 10/21/2014 11:08 PM, Christopher Yeoh wrote:
  The API Workgroup git repository has been setup and you can access it
  here.
 Cool, adding it to the repos to watch.
  There is some content there though not all the proposed guidelines from
  the wiki page are in yet:
 
  https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines

 I think that as soon as possible the wiki pages should be deleted and
 redirected to wherever the wg will publish the authoritative content. The
 wiki gets lots of traffic from web searches and stale content there really
 hurts us.

 Where is the content going to live?

 /stef

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-22 Thread David Vossel


- Original Message -
 On 10/21/2014 07:53 PM, David Vossel wrote:
 
  - Original Message -
  -Original Message-
  From: Russell Bryant [mailto:rbry...@redhat.com]
  Sent: October 21, 2014 15:07
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Nova] Automatic evacuate
 
  On 10/21/2014 06:44 AM, Balázs Gibizer wrote:
  Hi,
 
  Sorry for the top posting but it was hard to fit my complete view
  inline.
 
  I'm also thinking about a possible solution for automatic server
  evacuation. I see two separate sub problems of this problem:
  1)compute node monitoring and fencing, 2)automatic server evacuation
 
  Compute node monitoring is currently implemented in servicegroup
  module of nova. As far as I understand pacemaker is the proposed
  solution in this thread to solve both monitoring and fencing but we
  tried and found out that pacemaker_remote on baremetal does not work
  together with fencing (yet), see [1]. So if we need fencing then
  either we have to go for normal pacemaker instead of pacemaker_remote
  but that solution doesn't scale or we configure and call stonith
  directly when pacemaker detect the compute node failure.
  I didn't get the same conclusion from the link you reference.  It says:
 
  That is not to say however that fencing of a baremetal node works any
  differently than that of a normal cluster-node. The Pacemaker policy
  engine
  understands how to fence baremetal remote-nodes. As long as a fencing
  device exists, the cluster is capable of ensuring baremetal nodes are
  fenced
  in the exact same way as normal cluster-nodes are fenced.
 
  So, it sounds like the core pacemaker cluster can fence the node to me.
I CC'd David Vossel, a pacemaker developer, to see if he can help
clarify.
  It seems there is a contradiction between chapter 1.5 and 7.2 in [1] as
  7.2
  states:
   There are some complications involved with understanding a bare-metal
  node's state that virtual nodes don't have. Once this logic is complete,
  pacemaker will be able to integrate bare-metal nodes in the same way
  virtual
  remote-nodes currently are. Some special considerations for fencing will
  need to be addressed. 
  Let's wait for David's statement on this.
  Hey, That's me!
 
  I can definitely clear all this up.
 
  First off, this document is out of sync with the current state upstream.
  We're
  already past Pacemaker v1.1.12 upstream. Section 7.2 of the document being
  referenced is still talking about future v1.1.11 features.
 
  I'll make it simple. If the document references anything that needs to be
  done
  in the future, it's already done.  Pacemaker remote is feature complete at
  this
  point. I've accomplished everything I originally set out to do. I see one
  change
  though. In 7.1 I talk about wanting pacemaker to be able to manage
  resources in
  containers. I mention something about libvirt sandbox. I scrapped whatever
  I was
  doing there. Pacemaker now has docker support.
  https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/docker
 
  I've known this document is out of date. It's on my giant list of things to
  do.
  Sorry for any confusion.
 
  As far as pacemaker remote and fencing goes, remote-nodes are fenced the
  exact
  same way as cluster-nodes. The only consideration that needs to be made is
  that
  the cluster-nodes (nodes running the full pacemaker+corosync stack) are the
  only
  nodes allowed to initiate fencing. All you have to do is make sure the
  fencing
  devices you want to use to fence remote-nodes are accessible to the
  cluster-nodes.
   From there you are good to go.
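
For anyone wanting to experiment, a rough sketch of those two pieces using pcs
(the fence agent, addresses and credentials below are placeholders, and option
names vary between fence agent versions):

    # A fencing device, defined on the cluster nodes, that can power off the
    # baremetal remote node (the compute node):
    pcs stonith create fence-compute1 fence_ipmilan \
        ipaddr=10.0.0.11 login=admin passwd=secret \
        pcmk_host_list=compute1

    # The compute node itself, integrated as a pacemaker_remote node:
    pcs resource create compute1 ocf:pacemaker:remote server=compute1.example.com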
 
  Let me know if there's anything else I can clear up. Pacemaker remote was
  designed
  to be the solution for the exact scenario you all are discussing here.
  Compute nodes
  and pacemaker remote are made for one another :D
 
  If anyone is interested in prototyping pacemaker remote for this compute
  node use
  case, make sure to include me. I have done quite a bit research into how to
  maximize
  pacemaker's ability to scale horizontally. As part of that research I've
  made a few
  changes that are directly related to all of this that are not yet in an
  official
  pacemaker release.  Come to me for the latest rpms and you'll have a less
  painful
  experience setting all this up :)
 
  -- Vossel
 
 
 Hi Vossel,
 
 Could you send us a link to the source RPMs please? We have tested on
 CentOS 7, and it might need a recompile.

Yes, centos 7.0 isn't going to have the rpms you need to test this.

There are a couple of things you can do.

1. I put the rhel7 related rpms I test with in this repo.
http://davidvossel.com/repo/os/el7/

*disclaimer* I only maintain this repo for myself. I'm not committed to keeping
it active or up-to-date. It just happens to be updated right now for my own use.

That will give you test rpms for the pacemaker version I'm currently using plus
the latest libqb. If you're going to do any 

[openstack-dev] [Neutron][L3][IPAM] No Team Meeting Thursday

2014-10-22 Thread Carl Baldwin
I just had a conflict come up.  I won't be able to make it to the meeting.

I wanted to announce that IPAM is very likely a topic for a design
session at the summit.  I will spend some time reviewing the old
etherpads starting here [1] since the topic was set aside early in
Juno.

Carl

[1] https://etherpad.openstack.org/p/neutron-ipam

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API Workgroup git repository

2014-10-22 Thread Everett Toews
I notice at the top of the GitHub mirror page [1] it reads, “API Working Group 
http://openstack.org”

Can we get that changed to “API Working Group 
https://wiki.openstack.org/wiki/API_Working_Group”?

That URL would be much more helpful to people who come across the GitHub repo. 
It's not a code change so we would need a repo owner to actually make the 
change. Who should I contact about that?

Thanks,
Everett

[1] https://github.com/openstack/api-wg/


On Oct 22, 2014, at 1:08 AM, Christopher Yeoh cbky...@gmail.com wrote:

 Hi,
 
 The API Workgroup git repository has been setup and you can access it
 here.
 
 http://git.openstack.org/cgit/openstack/api-wg/
 
 There is some content there though not all the proposed guidelines from
 the wiki page are in yet:
 
 https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines
 
 Please feel free to start submitting patches to the document.
 
 I have submitted a patch to convert the initial content from markdown to
 rst and setup the tox targets to produce an html document. Seemed to be
 an easier route as it seems to be the preferred format for OpenStack
 projects and we can just copy all the build/check bits from the specs
 repositories. Also doesn't require any changes to required packages.
 
 https://review.openstack.org/130120
 
 Until this is merged its probably better to base any patches on this
 one.
 
 Chris
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API Workgroup git repository

2014-10-22 Thread Clark Boylan
It is a code change :) Everything is a code change around here. You will
want to update the projects.yaml file in openstack-infra/project-config
[0]. If you add a 'homepage:
https://wiki.openstack.org/wiki/API_Working_Group' key value pair to the
api-wg dict there the jeepyb tooling should update the project homepage
in github for you. If it doesn't then we probably have a bug somewhere
and that should be fixed.

[0]
https://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/projects.yaml#n287
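
For example, the api-wg entry would end up looking roughly like this (the
exact surrounding keys are illustrative and may differ from the current file):

    - project: openstack/api-wg
      description: OpenStack API Working Group
      homepage: https://wiki.openstack.org/wiki/API_Working_Group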

All that said, we really try to downplay the importance of github as
well. It is a mirror for us, but it is a discoverable mirror so updating
the 'homepage' is probably a reasonable thing to do.

Clark

On Wed, Oct 22, 2014, at 01:36 PM, Everett Toews wrote:
 I notice at the top of the GitHub mirror page [1] it reads, API Working
 Group http://openstack.org”
 
 Can we get that changed to API Working Group
 https://wiki.openstack.org/wiki/API_Working_Group”?
 
 That URL would be much more helpful to people who come across the GitHub
 repo. It's not a code change so we would need a repo owner to actually
 make the change. Who should I contact about that?
 
 Thanks,
 Everett
 
 [1] https://github.com/openstack/api-wg/
 
 
 On Oct 22, 2014, at 1:08 AM, Christopher Yeoh cbky...@gmail.com wrote:
 
  Hi,
  
  The API Workgroup git repository has been setup and you can access it
  here.
  
  http://git.openstack.org/cgit/openstack/api-wg/
  
  There is some content there though not all the proposed guidelines from
  the wiki page are in yet:
  
  https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines
  
  Please feel free to start submitting patches to the document.
  
  I have submitted a patch to convert the initial content from markdown to
  rst and setup the tox targets to produce an html document. Seemed to be
  an easier route as it seems to be the preferred format for OpenStack
  projects and we can just copy all the build/check bits from the specs
  repositories. Also doesn't require any changes to required packages.
  
  https://review.openstack.org/130120
  
  Until this is merged its probably better to base any patches on this
  one.
  
  Chris
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API Workgroup git repository

2014-10-22 Thread Steve Martinelli
Everett, I think the description is managed by this file: 
https://github.com/openstack-infra/project-config/blob/master/gerrit/projects.yaml

- Steve



From:   Everett Toews everett.to...@rackspace.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   10/22/2014 04:40 PM
Subject:Re: [openstack-dev] [api] API Workgroup git repository



I notice at the top of the GitHub mirror page [1] it reads, “API Working 
Group http://openstack.org”

Can we get that changed to “API Working Group 
https://wiki.openstack.org/wiki/API_Working_Group”?

That URL would be much more helpful to people who come across the GitHub 
repo. It's not a code change so we would need a repo owner to actually 
make the change. Who should I contact about that?

Thanks,
Everett

[1] https://github.com/openstack/api-wg/


On Oct 22, 2014, at 1:08 AM, Christopher Yeoh cbky...@gmail.com wrote:

 Hi,
 
 The API Workgroup git repository has been setup and you can access it
 here.
 
 http://git.openstack.org/cgit/openstack/api-wg/
 
 There is some content there though not all the proposed guidelines from
 the wiki page are in yet:
 
 https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines
 
 Please feel free to start submitting patches to the document.
 
 I have submitted a patch to convert the initial content from markdown to
 rst and setup the tox targets to produce an html document. Seemed to be
 an easier route as it seems to be the preferred format for OpenStack
 projects and we can just copy all the build/check bits from the specs
 repositories. Also doesn't require any changes to required packages.
 
 https://review.openstack.org/130120
 
 Until this is merged its probably better to base any patches on this
 one.
 
 Chris
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] proposed summit session topics

2014-10-22 Thread Ben Nemec
+1 to the proposed schedule from me.

On 10/22/2014 02:11 PM, Doug Hellmann wrote:
 
 On Oct 20, 2014, at 1:22 PM, Doug Hellmann d...@doughellmann.com wrote:
 
 After today’s meeting, we have filled our seven session slots. Here’s the 
 proposed list, in no particular order. If you think something else needs to 
 be on the list, speak up today because I’ll be plugging all of this into the 
 scheduling tool in the next day or so.

 https://etherpad.openstack.org/p/kilo-oslo-summit-topics

 * oslo.messaging
  * need more reviewers
  * what to do about keeping drivers up to date / moving them out of the main 
 tree
  * python 3 support

 * Graduation schedule

 * Python 3
  * what other than oslo.messaging / eventlet should (or can) we be working 
 on?

 * Alpha versioning

 * Namespace packaging

 * Quota management
  * What should the library do?
  * How do we manage database schema info from the incubator or a library if 
 the app owns the migration scripts?

 * taskflow
  * needs more reviewers
  * removing duplication with other oslo libraries
 
 I’ve pushed our schedule to http://kilodesignsummit.sched.org but it will 
 take a little while for the sync to happen. In the mean time, here’s what I 
 came up with:
 
 2014-11-05 11:00  - Oslo graduation schedule 
 2014-11-05 11:50  - oslo.messaging 
 2014-11-05 13:50  - A Common Quota Management Library 
 2014-11-06 11:50  - taskflow 
 2014-11-06 13:40  - Using alpha versioning for Oslo libraries 
 2014-11-06 16:30  - Python 3 support in Oslo 
 2014-11-06 17:20  - Moving Oslo away from namespace packages 
 
 That should allow the QA and Infra teams to participate in the versioning and 
 packaging discussions, Salvatore to be present for the quota library session 
 (and lead it, I hope), and the eNovance guys who also work on ceilometer to 
 be there for the Python 3 session.
 
 If you know you have a conflict with one of these times, let me know and I’ll 
 see if we can juggle a little.
 
 Doug
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] proposed summit session topics

2014-10-22 Thread Davanum Srinivas
+1 from me as well.

-- dims

On Wed, Oct 22, 2014 at 5:05 PM, Ben Nemec openst...@nemebean.com wrote:
 +1 to the proposed schedule from me.

 On 10/22/2014 02:11 PM, Doug Hellmann wrote:

 On Oct 20, 2014, at 1:22 PM, Doug Hellmann d...@doughellmann.com wrote:

 After today’s meeting, we have filled our seven session slots. Here’s the 
 proposed list, in no particular order. If you think something else needs to 
 be on the list, speak up today because I’ll be plugging all of this into 
 the scheduling tool in the next day or so.

 https://etherpad.openstack.org/p/kilo-oslo-summit-topics

 * oslo.messaging
  * need more reviewers
  * what to do about keeping drivers up to date / moving them out of the 
 main tree
  * python 3 support

 * Graduation schedule

 * Python 3
  * what other than oslo.messaging / eventlet should (or can) we be working 
 on?

 * Alpha versioning

 * Namespace packaging

 * Quota management
  * What should the library do?
  * How do we manage database schema info from the incubator or a library if 
 the app owns the migration scripts?

 * taskflow
  * needs more reviewers
  * removing duplication with other oslo libraries

 I’ve pushed our schedule to http://kilodesignsummit.sched.org but it will 
 take a little while for the sync to happen. In the mean time, here’s what I 
 came up with:

 2014-11-05 11:00  - Oslo graduation schedule
 2014-11-05 11:50  - oslo.messaging
 2014-11-05 13:50  - A Common Quota Management Library
 2014-11-06 11:50  - taskflow
 2014-11-06 13:40  - Using alpha versioning for Oslo libraries
 2014-11-06 16:30  - Python 3 support in Oslo
 2014-11-06 17:20  - Moving Oslo away from namespace packages

 That should allow the QA and Infra teams to participate in the versioning 
 and packaging discussions, Salvatore to be present for the quota library 
 session (and lead it, I hope), and the eNovance guys who also work on 
 ceilometer to be there for the Python 3 session.

 If you know you have a conflict with one of these times, let me know and 
 I’ll see if we can juggle a little.

 Doug


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Scheduler / Resource Tracker designated core

2014-10-22 Thread Michael Still
Hello.

It's clear that the scheduler and resource tracker inside Nova are
areas where we need to innovate. There are a lot of proposals in this
space at the moment, and it can be hard to tell which ones are being
implemented and in which order.

I have therefore asked Jay Pipes to act as designated core for
scheduler and resource tracker work in Kilo. This means he will keep
an eye on specs and code reviews, as well as advise other core
reviewers on what is ready to land. If you're working on those things
it would be good if you can coordinate with him.

Hopefully having a core focused on this area will improve our velocity there.

Thanks to Jay for stepping up here.

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV] NFV BoF session for OpenStack Summit Paris

2014-10-22 Thread Steve Gordon
- Original Message -
 From: Steve Gordon sgor...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org

 
 - Original Message -
  From: Steve Gordon sgor...@redhat.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  
  Hi all,
  
  I took an action item in one of the meetings to try and find a
  date/time/space to do another NFV BoF session for Paris to take advantage
  of
  the fact that many of us will be in attendance for a face to face session.
  
  To try and avoid clashing with the general and design summit sessions I am
  proposing that we meet either before the sessions start one morning, during
  the lunch break, or after the sessions finish for the day. For the lunch
  sessions the meeting would be shorter to ensure people actually have time
  to
  grab lunch beforehand.
  
  I've put together a form here, please register your preferred date/time if
  you would be interested in attending an NFV BoF session:
  
  http://doodle.com/qchvmn4sw5x39cps
  
  I will try and work out the *where* once we have a clear picture of the
  preferences for the above. We can discuss further in the weekly meeting.
  
  Thanks!
  
  Steve
  
  [1]
  https://openstacksummitnovember2014paris.sched.org/event/f5bcb6033064494390342031e48747e3#.VEWEIOKmhkM
 
 Hi all,
 
 I have just noticed an update on a conversation I had been following on the
 community list:
 
 http://lists.openstack.org/pipermail/community/2014-October/000921.html
 
 It seems like after hours use of the venue will not be an option in Paris,
 though there may be some space available for BoF style activities on
 Wednesday. I also noticed this Win the telco BoF session on the summit
 schedule for the creation of a *new* working group:
 
 
 https://openstacksummitnovember2014paris.sched.org/event/f5bcb6033064494390342031e48747e3#.VEbRkOKmhkM
 
 Does anyone know anything about this? It's unclear if this is the appropriate
 place to discuss the planning and development activities we've been working
 on. Let's discuss further in the meeting tomorrow.
 
 Thanks,
 
 Steve

Ok, it looks like there is a user-committee email on this topic now:

http://lists.openstack.org/pipermail/user-committee/2014-October/000320.html

I did reach out to Carol to highlight the existing efforts before the above was 
sent but it seems it still does contain quite a bit of overlap.

-Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API Workgroup git repository

2014-10-22 Thread Christopher Yeoh
On Wed, 22 Oct 2014 14:44:26 -0500
Anne Gentle a...@openstack.org wrote:

 On Wed, Oct 22, 2014 at 2:26 PM, Steve Martinelli
 steve...@ca.ibm.com wrote:
 
  we could set up a job to publish under docs.o.org/api-wg pretty
  easily - it seems like a good place to start to publish this
  content.
 
  thanks for getting the repo all setup chris and jay.
 
 
 Thanks for setting up the repo.
 
 We probably want it to go to docs.openstack.org/developer/api-wg and
 link to it from here:
 
 http://docs.openstack.org/developer/openstack-projects.html


That sounds good to me. Steve has a proposed patch we can use here:

https://review.openstack.org/#/c/130363/

I'm not sure of the syntax myself (I had a bit of a look into it last
night)

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API Workgroup git repository

2014-10-22 Thread Christopher Yeoh
On Wed, 22 Oct 2014 20:36:27 +
Everett Toews everett.to...@rackspace.com wrote:

 I notice at the top of the GitHub mirror page [1] it reads, API
 Working Group http://openstack.org”
 
 Can we get that changed to API Working Group
 https://wiki.openstack.org/wiki/API_Working_Group”?
 
 That URL would be much more helpful to people who come across the
 GitHub repo. It's not a code change so we would need a repo owner to
 actually make the change. Who should I contact about that?

I think this will do it:

https://review.openstack.org/130377

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][keystone] why is lxml only in test-requirements.txt?

2014-10-22 Thread Dolph Mathews
Great question!

For some backstory, the community interest in supporting XML has always
been lackluster, so the XML translation middleware has been on a slow road
of decline. It's a burden for everyone to maintain, and only works for
certain API calls. For the bulk of Keystone's documented APIs, XML support
is largely untested, undocumented, and unsupported. Given all that, I
wouldn't recommend anyone deploy the XML middleware unless you *really*
need some aspect of its tested functionality.

In both Icehouse and Juno, we shipped the XML translation middleware with a
deprecation warning, but kept it in the default pipeline. That was
basically my fault, because both Keystone's functional tests and tempest
are hardcoded to expect XML support, and we didn't have time during
Icehouse to break those expectations... but still wanted to communicate out
the fact that XML was on the road to deprecation.

So, to remedy that, we now have a bunch of patches (thanks for your
help, Lance!) which complete the work we started back in Icehouse.

Tempest:
- Make XML support optional https://review.openstack.org/#/c/126564/

Devstack:
- Make XML support optional moving forward
https://review.openstack.org/#/c/126672/
- stable/icehouse continue testing XML support
https://review.openstack.org/#/c/127641/

Keystone:
- Remove XML support from keystone's default paste config (this makes lxml
truly a test-requirement) https://review.openstack.org/#/c/130371/
- (Potentially) remove XML support altogether
https://review.openstack.org/#/c/125738/

The patches to Tempest and Devstack should definitely land, and now we need
to have a conversation about our desire to continue support for XML in Kilo
(i.e. choose from the last two Keystone patches).
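
For context, this is roughly what the relevant bits of keystone-paste.ini look
like; filter and pipeline names vary by release, so treat this as a sketch
rather than the exact shipped config:

    [filter:xml_body]
    paste.filter_factory = keystone.middleware:XmlBodyMiddleware.factory

    [pipeline:public_api]
    # Removing xml_body from this list is what makes lxml a pure
    # test requirement.
    pipeline = sizelimit url_normalize token_auth admin_token_auth xml_body json_body public_service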

-Dolph

On Mon, Oct 20, 2014 at 8:05 AM, Xu (Simon) Chen xche...@gmail.com wrote:

 I am trying to understand why lxml is only in test-requirements.txt... The
 default pipelines do contain xml_body and xml_body_v2 filters, which
 depends on lxml to function properly.

 Since lxml is not in requirements.txt, my packaging system won't include
 lxml in the deployment drop. At the same time, my environment involves
 using browsers to directly authenticate with keystone - and browsers
 (firefox/chrome alike) send accept: application/xml in their request
 headers, which triggers xml_body to perform json to xml conversion, which
 fails because lxml is not there.

 My opinion is that if xml_body filters are in the example/default
 paste.ini file, lxml should be included in requirements.txt.

 Comments?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Paris Summit talking points

2014-10-22 Thread Adam Harwell
Let's get a list of questions / talking points set up on this etherpad: 
https://etherpad.openstack.org/p/paris_absentee_talking_points

If you can't make it to the summit (like me) then you can put any questions or 
concerns you have in this document.
If you are going to the summit, please take a look at this list (maybe keep it 
handy) and see if you can help us out by asking the questions or arguing the 
talking points for us!

I've populated it with a couple of the things that are on my mind at the 
moment. Please add anything you can think of, and ideally put answers and 
comments in once you've gotten them!
Even if you're going, it might not be a bad idea to jot some notes now so you 
don't forget during the hustle and bustle of traveling and after-parties! :)

--Adam

https://keybase.io/rm_you

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [congress] kilo design session

2014-10-22 Thread Sean Roberts
We are scheduled for Monday, 03 Nov, 14:30 - 16:00. I have a conflict with the 
“Meet the Influencers” talk that runs from 14:30-18:30, plus the GBP session is 
on Tuesday, 04 Nov, 12:05-12:45. I was thinking we would want to co-locate the 
Congress and GBP talks as much as possible.

The BOSH team has the Tuesday, 04 Nov, 16:40-18:10 slot and wants to switch. 

Does this switch work for everyone?

Maybe we can get some space in one of the pods or cross-project workshops on 
Tuesday between the GBP and the potential Congress session to make it even 
better.

~sean
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Cells conversation starter

2014-10-22 Thread Sam Morrison

 On 23 Oct 2014, at 5:55 am, Andrew Laski andrew.la...@rackspace.com wrote:
 
 While I agree that N is a bit interesting, I have seen N=3 in production
 
 [central API]--[state/region1]--[state/region DC1]
\-[state/region DC2]
   --[state/region2 DC]
   --[state/region3 DC]
   --[state/region4 DC]
 
 I would be curious to hear any information about how this is working out.  
 Does everything that works for N=2 work when N=3?  Are there fixes that 
 needed to be added to make this work?  Why do it this way rather than bring 
 [state/region DC1] and [state/region DC2] up a level?

We (NeCTAR) have 3 tiers: our current setup has one parent and 6 children, and 3 
of the children have 2 grandchildren each. All compute nodes are at the lowest 
level.

Everything works fine and we haven’t needed to do any modifications. 

We run in a 3 tier system because it matches how our infrastructure is 
logically laid out, but I don’t see a problem in just having a 2 tier system 
and getting rid of the middle man.

Sam


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

