Re: [openstack-dev] [Murano] Application Actions

2014-05-30 Thread Serg Melikyan
Hi, Alexander

Thank you for formalizing the specification and requirements! I will be glad
to work on the implementation of app actions once we finish discussing them. I
have created a new blueprint
https://blueprints.launchpad.net/murano/+spec/application-actions and
referenced the etherpad as its specification.
Your specification in the etherpad looks very comprehensive and quite
sufficient to implement the first version of the feature. I think I can start
drafting the implementation while we keep working on the specification at the
same time.

I think Events should be based on Actions, so we can return to Events in
the next milestone and revisit them.


On Thu, May 29, 2014 at 8:26 PM, Alexander Tivelkov ativel...@mirantis.com
wrote:

 Hi folks!

 During the Atlanta Summit there were quite a lot of discussions about
 Application Lifecycle Management and Murano's role in this process. There
 were several cross-project sessions between the Murano, Heat and Solum teams
 ([1]) at which it was decided that Murano has its own place in the
 application-management ecosystem and should be able to define custom
 actions or workflows for its applications, while using Heat and its ALM
 capabilities as the underlying service.

 At the same time I have had conversations with potential customers and
 contributors who have expressed strong interest in having actions in
 Murano this cycle.

 That's why I've decided to drive this process forward and formalize the
 spec and requirements for the Actions feature in Murano.
 I've created a draft of the spec - please see the etherpad at [2] for
 details. I'd like some comments and discussion on the spec, and once we all
 agree on that, I will be happy to find a volunteer eager to implement this
 during Juno :)

 BTW, we already have a number of blueprints created on the topic ([3],
 [4], [5], [6]), but they lack details and have some problems with
 terminology ('events' and 'actions' are definitely not the same, while the
 blueprints mix them). I think we should revisit these BPs and either change
 them to reflect the updated vision or mark them as superseded and create
 more appropriate ones.


 [1] https://etherpad.openstack.org/p/9XQ7Q2NQdv
 [2] https://etherpad.openstack.org/p/MuranoActions
 [3] https://blueprints.launchpad.net/murano/+spec/external-events
 [4] https://blueprints.launchpad.net/murano/+spec/api-list-events
 [5] https://blueprints.launchpad.net/murano/+spec/dsl-register-event
 [6]
 https://blueprints.launchpad.net/murano/+spec/ui-application-event-list


 --
 Regards,
 Alexander Tivelkov





-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] License Management

2014-05-30 Thread Tizy Ninan
Hi,

Are there any software license management tools available for OpenStack?
For example, a tool that tracks the number of instances
launched using a particular licensed image.
Are there any third-party tools available for this?

Thanks,
Tizy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Infra] Mid-Cycle Meet-up

2014-05-30 Thread Koderer, Marc
Hi all,

just some general information for your trip:

 - Darmstadt is quite close to Frankfurt am Main:
From Frankfurt Airport you can take a bus shuttle that takes you directly to 
the office campus (the trip takes around 25 minutes depending on your arrival 
gate):
http://www.rmv.de/linkableblob/de/33548-59802/data/AirLiner_Direktbus_Darmstadt_Frankfurt_Flughafen_PDF.pdf

- If your trip gets approved, please send me your name, email and company 
name so that I can create badges for you in advance

- I will create a wiki page with more details (like hotel information) and send 
it around

Regards
Marc

 -Ursprüngliche Nachricht-
 Von: Matthew Treinish [mailto:mtrein...@kortar.org]
 Gesendet: Donnerstag, 29. Mai 2014 18:07
 An: openstack-dev@lists.openstack.org; openstack-in...@lists.openstack.org
 Betreff: [openstack-dev] [QA][Infra] Mid-Cycle Meet-up
 
 
 Hi Everyone,
 
  So we'd like to announce that we're going to be holding a
  combined Infra and QA program mid-cycle meet-up. It will be the week of
  July 14th in Darmstadt, Germany at Deutsche Telekom, which has graciously
  offered to sponsor the event. The plan is to use the week both as time
  for face-to-face collaboration within each program and as a couple of
  days of bootstrapping for new users/contributors. The intent is that this
  will be useful for people who are interested in contributing to either
  Infra or QA, and for those who are running third-party CI systems.
 
 The current break down for the week that we're looking at is:
 
 July 14th: Infra
 July 15th: Infra
 July 16th: Bootstrapping for new users
 July 17th: More bootstrapping
 July 18th: QA
 
 We still have to work out more details, and will follow up once we have
 them.
 But, we thought it would be better to announce the event earlier so people
 can start to plan travel if they need it.
 
 
 Thanks,
 
 Matt Treinish
 Jim Blair

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] How to conditionally modify attributes in CreateNetwork class.

2014-05-30 Thread Nader Lahouti
Hi All,

Currently in
horizon/openstack_dashboard/dashboards/project/networks/workflows.py, in
classes such as CreateNetwork, CreateNetworkInfo and CreateSubnetInfo, the
contributes and default_steps attributes shown below are fixed. Is it
possible to add entries to those attributes conditionally?

156 class CreateSubnetInfo(workflows.Step):
157     action_class = CreateSubnetInfoAction
158     contributes = ("with_subnet", "subnet_name", "cidr",
159                    "ip_version", "gateway_ip", "no_gateway")
160

262 class CreateNetwork(workflows.Workflow):
263     slug = "create_network"
264     name = _("Create Network")
265     finalize_button_name = _("Create")
266     success_message = _('Created network %s.')
267     failure_message = _('Unable to create network %s.')
268     default_steps = (CreateNetworkInfo,
269                      CreateSubnetInfo,
270                      CreateSubnetDetail)

(from http://www.xrefs.info/openstack-horizon-latest/xref/openstack_dashboard/dashboards/project/networks/workflows.py)
Thanks for your input.
Nader.
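[Editor's note] One possible approach, shown as a self-contained sketch rather than the real Horizon API: since contributes is a plain class-level tuple, a subclass can extend it conditionally at class-definition time based on a flag. The Step base class, FEATURE_ENABLED flag, and the "dns_nameservers" entry below are all stand-ins for illustration.

```python
# Self-contained sketch: conditionally extending a class-level tuple
# such as ``contributes``. ``Step`` and ``FEATURE_ENABLED`` are
# stand-ins, not the real Horizon workflow classes.
FEATURE_ENABLED = True  # e.g. read from Django settings in real code


class Step(object):
    contributes = ()


class CreateSubnetInfo(Step):
    contributes = ("with_subnet", "subnet_name", "cidr",
                   "ip_version", "gateway_ip", "no_gateway")
    if FEATURE_ENABLED:
        # Extend the tuple at class-definition time.
        contributes += ("dns_nameservers",)


print(CreateSubnetInfo.contributes)
# -> ('with_subnet', 'subnet_name', 'cidr', 'ip_version',
#     'gateway_ip', 'no_gateway', 'dns_nameservers')
```

The same pattern would apply to default_steps on a Workflow subclass; an alternative is to append to the attribute in __init__ before calling the parent constructor.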


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Enable policy improvment both v2/v3 API or not

2014-05-30 Thread Alex Xu

Hi, guys

There are some BPs in progress to improve the usability of API policy.
Initially those BPs targeted only the v2.1/v3 API; for the v2 API, we just
wanted to keep the behavior the same as before.

At the Juno design summit, however, we got some complaints that policy is
hard to use (https://etherpad.openstack.org/p/juno-nova-devops).
I guess those complaints were about the v2 API, so I'm thinking about
whether we should enable those improvements for the v2 API too. I want to
hear suggestions from you and from CD operators before we decide.

The main proposal for improving policy is:
Policy should be enforced at the REST API layer
https://review.openstack.org/92005


In this proposal we remove the compute-api layer policy checks for the v3
API and move policy enforcement into the REST API layer, so only the v3
API gets the benefit.

The v2 API still has two policy checks for the same API.

For example:
At the API layer: compute_extension:admin_actions:pause: rule:admin_or_owner
At the compute API layer: compute:pause:
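[Editor's note] In policy.json terms, that v2 duplication corresponds to two entries like the following (illustrative values; an empty rule string means the check always passes):

```json
{
    "compute_extension:admin_actions:pause": "rule:admin_or_owner",
    "compute:pause": ""
}
```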

There are pros and cons of enabling this for the v2 API:

Pros:
* v2 API users can benefit from those improvements. We will still have some
users on the v2 API before we release v2.1/v3.

* We don't need backward-compatibility code for the v2 API, which makes the
code messy.

For example:
https://review.openstack.org/#/c/65071/5/nova/api/openstack/compute/contrib/shelve.py
There are two policy checks for one API: one used for the extension
(line 84), another kept for compatibility (line 85).

(There is another approach that avoids messy code while still supporting
backward compatibility: keep the compute-api layer policy code and just
skip the policy check for the v3 API. After the v2 API is deprecated, we
clean up the compute-api layer policy code.)

Cons:
* Maybe v2 API users don't have much pain here. We will have the v2.1/v3
API and v2 will be deprecated; if we change this, it may become an extra
burden for some operators to upgrade their policy config file when they
upgrade the Nova code.

* There is risk in touching existing v2 API code.

And there are other minor improvement proposals for API policy:
https://review.openstack.org/92325
https://review.openstack.org/92326

I think once we make a decision on the first proposal, these two proposals
can just follow that decision.

Thanks
Alex
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Deploying Ironic with devstack, instance stuck on spawning

2014-05-30 Thread Kai Brennenstuhl
Hi,

I'm trying to deploy Ironic with devstack. I'm using Ubuntu 14.04 and
follow the manual on
http://docs.openstack.org/developer/ironic/dev/dev-quickstart.html#deploying-ironic-with-devstack

Everything works fine until I try to spawn a new instance.
The instance remains in the spawning task. The command ironic node-list
shows that one of my two nodes is powered on and in provisioning state
wait call-back. This node is already associated with the instance uuid.
The ironic conductor log shows the following:

2014-05-27 15:49:36.724 2505 DEBUG ironic.openstack.common.processutils [-]
Running cmd (SSH): /usr/bin/virsh --connect qemu:///system dumpxml
baremetalbrbm_0 | grep "mac address" | awk -F"'" '{print $2}' | tr -d ':'
ssh_execute /opt/stack/ironic/ironic/openstack/common/processutils.py:219
2014-05-27 15:49:36.800 2505 DEBUG ironic.openstack.common.processutils [-]
Result was 0 ssh_execute
/opt/stack/ironic/ironic/openstack/common/processutils.py:240
2014-05-27 15:49:36.800 2505 DEBUG ironic.drivers.modules.ssh [-] Found Mac
address: 52:54:00:97:9d:8f _get_hosts_name_for_node
/opt/stack/ironic/ironic/drivers/modules/ssh.py:282

What can I do to get the instance to spawn successfully? I can't see
anything irregular in the other logs.

Another question: is there more information about the required fields for
Ironic drivers beyond what this example shows:
http://docs.openstack.org/developer/ironic/dev/api-spec-v1.html#id13 ?

Regards

Kai
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Selecting more carefully our dependencies

2014-05-30 Thread Chmouel Boudjnah
On Thu, May 29, 2014 at 11:25 AM, Thomas Goirand z...@debian.org wrote:

 So I'm wondering: are we being careful enough when selecting
 dependencies? In this case, I think we haven't, and I would recommend
 against using wrapt. Not only because it embeds six.py, but because
 upstream looks uncooperative, and bound to its own use cases.



Is this something that could be tested by an external CI on the
requirements repo whenever a new library is added?

Chmouel
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-05-30 Thread James Polley
On Fri, May 30, 2014 at 4:57 AM, Zane Bitter zbit...@redhat.com wrote:

 On 29/05/14 13:33, Mike Spreitzer wrote:

 Devananda van der Veen devananda@gmail.com wrote on 05/29/2014
 01:26:12 PM:

   Hi Jaromir,
  
   I agree that the midcycle meetup with TripleO and Ironic was very
   beneficial last cycle, but this cycle, Ironic is co-locating its
   sprint with Nova. Our focus needs to be working with them to merge
   the nova.virt.ironic driver. Details will be forthcoming as we work
   out the exact details with Nova. That said, I'll try to make the
   TripleO sprint as well -- assuming the dates don't overlap.
  
   Cheers,
   Devananda
  

   On Wed, May 28, 2014 at 4:05 AM, Jaromir Coufal jcou...@redhat.com
 wrote:
   Hi to all,
  
   after previous TripleO  Ironic mid-cycle meetup, which I believe
   was beneficial for all, I would like to suggest that we meet again
   in the middle of Juno cycle to discuss current progress, blockers,
   next steps and of course get some beer all together :)
  
   Last time, TripleO and Ironic merged their meetings together and I
   think it was great idea. This time I would like to invite also Heat
   team if they want to join. Our cooperation is increasing and I think
   it would be great, if we can discuss all issues together.
  
   Red Hat offered to host this event, so I am very happy to invite you
   all and I would like to ask, who would come if there was a mid-cycle
   meetup in following dates and place:
  
   * July 28 - Aug 1
   * Red Hat office, Raleigh, North Carolina
  
   If you are intending to join, please, fill yourselves into this
 etherpad:
   https://etherpad.openstack.org/p/juno-midcycle-meetup
  
   Cheers
   -- Jarda

 Given the organizers, I assume this will be strongly focused on TripleO
 and Ironic.
 Would this be a good venue for all the mid-cycle discussion that will be
 relevant to Heat?
 Is anyone planning a distinct Heat-focused mid-cycle meetup?


 We haven't had one in the past, but the project is getting bigger so,
 given our need to sync with the TripleO folks anyway, this may be a good
 opportunity to try. Certainly it's unlikely that any Heat developers
 attending will spend the _whole_ week working with the TripleO team, so
 there should be time to do something like what you're suggesting. I think
 we just need to see who is willing  able to attend, and work out an agenda
 on that basis.

 For my part, I will certainly be there for the whole week if it's July 28
 - Aug 1. If it's the week before I may not be able to make it at all.

 BTW one timing option I haven't seen mentioned is to follow Pycon-AU's
 model of running e.g. Friday-Tuesday (July 25-29). I know nobody wants to
 be stuck in Raleigh, NC on a weekend (I've lived there, I understand ;),
 but for folks who have a long ways to travel it's one weekend lost instead
 of two.


I quite like this idea. Yes, it means a weekend in Raleigh, but that's
better than spending 1.5 weekends on a plane, which is what I have to do at
the moment. I'll still be spending just as much time on a plane, but doing
it during the week has less of an impact on my work/life balance.

Alternatively maybe we can find somewhere to host the next mid-cycle Down
Under ;)


 cheers,
 Zane.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] How about deprecate cfg.CONF.allow_overlapping_ips?

2014-05-30 Thread Kevin Benton
Even though the OS supports it now, it's possible that there are still
network backends that don't handle it yet — specifically, certain L3 router
implementations that map to a physical router that doesn't support
overlapping address spaces.

I'm not sure how many deployments are like that, but it's definitely
something to consider.

--
Kevin Benton


On Thu, May 29, 2014 at 8:06 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi folks

 Today, we can choose whether to allow overlapping IPs via configuration.
 This has an impact on the database design, and this flag actually
 complicates the implementation.

 The reason we have this flag is historical: it was needed when many
 operating systems didn't support namespaces, but most now do.

 So IMO, we can deprecate it.
 Any thought on this?

 Best
 Nachi





-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-05-30 Thread Thomas Spatzier
Excerpt from Zane Bitter's message on 29/05/2014 20:57:10:

 From: Zane Bitter zbit...@redhat.com
 To: openstack-dev@lists.openstack.org
 Date: 29/05/2014 20:59
 Subject: Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle
 collaborative meetup
snip
 BTW one timing option I haven't seen mentioned is to follow Pycon-AU's
 model of running e.g. Friday-Tuesday (July 25-29). I know nobody wants
 to be stuck in Raleigh, NC on a weekend (I've lived there, I understand
 ;), but for folks who have a long ways to travel it's one weekend lost
 instead of two.

+1 - excellent idea!


 cheers,
 Zane.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][infra] Plan for the splitting of Horizon into two repositories

2014-05-30 Thread Radomir Dopieralski
On 05/29/2014 08:40 PM, Gabriel Hurley wrote:
 we could have a poll for the name.
 Gabriel, would you like to run that?
 
 I can run that if you like, though it might be more official coming from the 
 PTL. ;-)

I'm sure that David is reading this and that he will tell us if
anything we are doing is wrong or could be improved. I think we should
do whatever needs doing and not bother the PTL with every triviality.

It would be a great help if you could run the poll, because I'm not good
at this kind of thing, and I just want to have the name. I do realize
that the choice is important, as is the way it is made, but I'm hopeless
at it.

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

2014-05-30 Thread Salvatore Orlando
It seems that method has some room for optimization, and I suspect the same
logic has been used in other type drivers as well.
If optimization is possible, it might be worth opening a bug for it.
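[Editor's note] A rough, scaled-down sketch of the kind of optimization meant here. The real type driver uses SQLAlchemy sessions; sqlite3 stands in below, and the table name merely mirrors ml2_vxlan_allocations. The point is that replacing per-VNI inserts with one bulk statement removes most of the per-row overhead:

```python
import sqlite3

# Illustrative only: sqlite3 stands in for the real SQLAlchemy session.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ml2_vxlan_allocations "
             "(vxlan_vni INTEGER PRIMARY KEY, allocated BOOLEAN)")

vni_range = range(1, 100001)  # scaled down from [1, 16M]

# Slow pattern (one execute per VNI, as in the one-by-one insert):
#   for vni in vni_range:
#       conn.execute("INSERT INTO ml2_vxlan_allocations VALUES (?, 0)",
#                    (vni,))

# Faster: one bulk statement for the whole range.
conn.executemany("INSERT INTO ml2_vxlan_allocations VALUES (?, 0)",
                 ((vni,) for vni in vni_range))
conn.commit()

count = conn.execute(
    "SELECT COUNT(*) FROM ml2_vxlan_allocations").fetchone()[0]
print(count)  # -> 100000
```

In the real driver the equivalent would be batching the session adds (or using a bulk insert construct) instead of flushing each allocation row individually.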

Salvatore


On 30 May 2014 04:58, Xurong Yang ido...@gmail.com wrote:

 Hi,
 Thanks for your response. Yes, I understand the reason; that's why I'm
 asking whether a good solution can achieve high performance with a large
 VXLAN range. If possible, a blueprint deserves consideration.

 Thanks,
 Xurong Yang


 2014-05-29 18:12 GMT+08:00 ZZelle zze...@gmail.com:

 Hi,


 VXLAN networks are inserted/verified in the DB one by one, which could
 explain the time required:


 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/type_vxlan.py#L138-L172

 Cédric



 On Thu, May 29, 2014 at 12:01 PM, Xurong Yang ido...@gmail.com wrote:

 Hi, Folks,

 When we configure the VXLAN range [1, 16M], the neutron-server service
 takes a long time and CPU usage is very high (100%) during initialization.
 One test based on PostgreSQL confirmed it: more than 1h when the VXLAN
 range is [1, 1M].

 So, any good solution about this performance issue?

 Thanks,
 Xurong Yang





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Blazar] Weekly meeting Blazar (previously Climate) [Climate]

2014-05-30 Thread Sylvain Bauza
Hi,

Due to some important changes with Climate (which is now Blazar), and as
the team has changed quite a bit, I want to make sure we run the weekly
meeting today at 3pm UTC.

Thanks,
-Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Weekly meeting Blazar (previously Climate)

2014-05-30 Thread Sylvain Bauza
Hi,

Due to some important changes with Climate (which is now Blazar), and as
the team has changed quite a bit, I want to make sure we run the weekly
meeting today at 3pm UTC.

Thanks,
-Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] BGP Dynamic Routing Proposal

2014-05-30 Thread Jaume Devesa
Hello Takashi,

thanks for doing this! As we proposed ExaBgp[1] in the Dynamic Routing
blueprint[2], I've added a new column for this speaker to the wiki page. I
plan to fill it in soon.

ExaBgp was our first choice because we thought that running something in
library mode would be much easier to deal with (especially for exceptions
and corner cases) and the code would be much cleaner. But it seems that
Ryu BGP can also fit this requirement. And having the help of a Ryu
developer like you makes it a promising candidate!

I'll now start working on a proof of concept to run the agent with these
implementations and see whether we need more requirements to compare the
speakers.

[1]: https://wiki.openstack.org/wiki/Neutron/BGPSpeakersComparison
[2]: https://review.openstack.org/#/c/90833/

Regards,


On 29 May 2014 18:42, YAMAMOTO Takashi yamam...@valinux.co.jp wrote:

 As per discussions at the L3 subteam meeting today, I started
 a BGP speakers comparison wiki page for this BP.

 https://wiki.openstack.org/wiki/Neutron/BGPSpeakersComparison

 Artem, can you add the other requirements as columns?

 As one of the Ryu developers, I'm naturally biased towards Ryu BGP.
 I'd appreciate it if someone could provide more info for the other BGP
 speakers.

 YAMAMOTO Takashi

  Good afternoon Neutron developers!
 
  There has been a discussion about dynamic routing in Neutron for the
 past few weeks in the L3 subteam weekly meetings. I've submitted a review
 request of the blueprint documenting the proposal of this feature:
 https://review.openstack.org/#/c/90833/. If you have any feedback or
 suggestions for improvement, I would love to hear your comments and include
 your thoughts in the document.
 
  Thank you.
 
  Sincerely,
  Artem Dmytrenko





-- 
Jaume Devesa
Software Engineer at Midokura
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Application Actions

2014-05-30 Thread Stan Lagun
Agreed. Let's for now assume nothing about events and keep a separate list
of blueprints for actions. As soon as we get back to event design we will
decide how they connect with actions, and maybe then mark some of the event
blueprints as obsolete/superseded.

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

 sla...@mirantis.com


On Fri, May 30, 2014 at 9:56 AM, Serg Melikyan smelik...@mirantis.com
wrote:

 Hi, Alexander

 Thank you for formalizing the specification and requirements! I will be glad
 to work on the implementation of app actions once we finish discussing them. I
 have created a new blueprint
 https://blueprints.launchpad.net/murano/+spec/application-actions and
 referenced the etherpad as its specification.
 Your specification in the etherpad looks very comprehensive and quite
 sufficient to implement the first version of the feature. I think I can start
 drafting the implementation while we keep working on the specification at the
 same time.

 I think Events should be based on Actions, so we can return to Events in
 the next milestone and revisit them.


 On Thu, May 29, 2014 at 8:26 PM, Alexander Tivelkov 
 ativel...@mirantis.com wrote:

 Hi folks!

 During the Atlanta Summit there were quite a lot of discussions about
 Application Lifecycle Management and Murano's role in this process. There
 were several cross-project sessions between the Murano, Heat and Solum teams
 ([1]) at which it was decided that Murano has its own place in the
 application-management ecosystem and should be able to define custom
 actions or workflows for its applications, while using Heat and its ALM
 capabilities as the underlying service.

 At the same time I have had conversations with potential customers and
 contributors who have expressed strong interest in having actions in
 Murano this cycle.

 That's why I've decided to drive this process forward and formalize the
 spec and requirements for the Actions feature in Murano.
 I've created a draft of the spec - please see the etherpad at [2] for
 details. I'd like some comments and discussion on the spec, and once we all
 agree on that, I will be happy to find a volunteer eager to implement this
 during Juno :)

 BTW, we already have a number of blueprints created on the topic ([3],
 [4], [5], [6]), but they lack details and have some problems with
 terminology ('events' and 'actions' are definitely not the same, while the
 blueprints mix them). I think we should revisit these BPs and either change
 them to reflect the updated vision or mark them as superseded and create
 more appropriate ones.


 [1] https://etherpad.openstack.org/p/9XQ7Q2NQdv
 [2] https://etherpad.openstack.org/p/MuranoActions
 [3] https://blueprints.launchpad.net/murano/+spec/external-events
 [4] https://blueprints.launchpad.net/murano/+spec/api-list-events
 [5] https://blueprints.launchpad.net/murano/+spec/dsl-register-event
 [6]
 https://blueprints.launchpad.net/murano/+spec/ui-application-event-list


 --
 Regards,
 Alexander Tivelkov





 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com

 +7 (495) 640-4904, 0261
 +7 (903) 156-0836



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Introducing task oriented workflows

2014-05-30 Thread Salvatore Orlando
Hi Hirofumi,

I reckon this has been recognised from the start as a long-term effort.
However, I just want to clarify that by long term I don't mean pushing it
back until the next release cycle, only to realize then that we are in the
same place we are today!

It is totally correct that most Neutron resources have sloppy status
management, mostly because, as already pointed out, the 'status' for most
resources was conceived as a 'network fabric' status rather than a
resource synchronisation status.

As has emerged from previous posts in this thread, I reckon we have three
choices:
1) Add a new attribute describing configuration state. For instance
this would have values such as PENDING_UPDATE, PENDING_DELETE, IN_SYNC,
OUT_OF_SYNC, etc.
2) Merge status and configuration status into a single attribute. This will
probably be simpler from a client perspective, but there are open questions,
such as whether a resource which has a task in progress and is
down should be reported as 'down' or 'pending_update'.
3) Don't use any new flags, and use tasks to describe whether there are
operations in progress on a resource.
The status attribute would describe exclusively the 'fabric' status of a
resource; however, tasks would be exposed through the API, and a resource
in sync would be a resource with no PENDING or FAILED task active on it.

The above are just options at the moment; I tend to lean toward the latter,
but it would be great to have your feedback.
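[Editor's note] A hedged sketch of option (3), with every name invented for this example: the sync state is not stored on the resource at all but derived from the states of its tasks, while 'status' keeps describing only the fabric state.

```python
# Hypothetical sketch of option (3): no new status attribute; a
# resource's sync state is derived from the tasks attached to it.
# All names here are illustrative, not a real Neutron API.
PENDING, FAILED, COMPLETED = "PENDING", "FAILED", "COMPLETED"


def is_in_sync(task_states):
    """A resource is in sync iff no task on it is PENDING or FAILED."""
    return not any(s in (PENDING, FAILED) for s in task_states)


# 'status' would keep describing only the fabric state (ACTIVE/DOWN),
# while clients inspect tasks to learn about in-flight operations.
print(is_in_sync([COMPLETED, COMPLETED]))  # -> True
print(is_in_sync([COMPLETED, PENDING]))    # -> False
```

Under this model a client would report 'out of sync' purely by listing a resource's tasks, so no migration of the existing status field is needed.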

Salvatore



On 28 May 2014 11:20, Hirofumi Ichihara ichihara.hirof...@lab.ntt.co.jp
wrote:

 Hi, Salvatore

 I think Neutron needs task management too.

 IMO, the problem of Neutron resource status should be discussed
 separately.
 Task management would enable Neutron to roll back an API operation, delete
 leftover resources, and retry an API operation within one API process.
 Of course, we can use tasks to correct inconsistencies between the Neutron
 DB (resource status) and the actual resource configuration.
 But we should add resource status management to some resources before
 tasks.
 For example, LBaaS has resource status management[1].
 That Neutron routers and ports don't manage status is a fundamental problem.

 For instance a port is UP if it's been wired by the OVS agent; it often
 does not tell us whether the actual resource configuration is exactly the
 desired one in the database. For instance, if the ovs agent fails to apply
 security groups to a port, the port stays ACTIVE and the user might never
 know there was an error and the actual state diverged from the desired one.

 So, we should solve this problem with resource status management such as
 LBaaS's, rather than with tasks.

 I don't deny tasks, but we would need to discuss tasks over the long term;
 I hope the status management will be fixed right away.

 [1]
 https://wiki.openstack.org/wiki/Neutron/LBaaS/API_1.0#Synchronous_versus_Asynchronous_Plugin_Behavior

 thanks,
 Hirofumi

 -
 Hirofumi Ichihara
 NTT Software Innovation Center
 Tel:+81-422-59-2843  Fax:+81-422-59-2699
 Email:ichihara.hirof...@lab.ntt.co.jp
 -


 On 2014/05/23, at 7:34, Salvatore Orlando sorla...@nicira.com wrote:

 As most of you probably know already, this is one of the topics discussed
 during the Juno summit [1].
 I would like to kick off the discussion in order to move towards a
 concrete design.

 Preamble: Considering the meat that's already on the plate for Juno, I'm
 not advocating that whatever comes out of this discussion should be put on
 the Juno roadmap. However, preparation (or yak shaving) activities that
 should be identified as pre-requisite might happen during the Juno time
 frame assuming that they won't interfere with other critical or high
 priority activities.
 This is also a very long post; the TL;DR summary is that I would like to
 explore task-oriented communication with the backend and how it should be
 reflected in the API - gauging how the community feels about this, and
 collecting feedback regarding design, constructs, and related
 tools/techniques/technologies.

 At the summit a broad range of items were discussed during the session,
 and most of them have been reported in the etherpad [1].

 First, I think it would be good to clarify whether we're advocating a
 task-based API, a workflow-oriented operation processing, or both.

 -- About a task-based API

 In a task-based API, most PUT/POST API operations would return tasks
 rather than neutron resources, and users of the API will interact directly
 with tasks.
 I put an example in [2] to avoid cluttering this post with too much text.
 As the API operation simply launches a task - the database state won't be
 updated until the task is completed.
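To illustrate the interaction pattern being proposed (a POST returns a task rather than the resource, and the resource only materialises once the task completes), here is a small self-contained simulation. The class, endpoint names, and completion logic are invented for illustration only:

```python
import itertools
import uuid


class FakeNeutronTaskAPI:
    """Toy stand-in for a task-based API: create_port() returns a task,
    and the 'database' gains the port only when the task completes."""

    def __init__(self):
        self.tasks = {}
        self.ports = {}
        self._clock = itertools.count()   # stands in for async progress

    def create_port(self, body):
        task_id = str(uuid.uuid4())
        # No port exists in the DB yet -- only a PENDING task.
        self.tasks[task_id] = {"status": "PENDING", "body": body}
        return {"task_id": task_id}

    def get_task(self, task_id):
        task = self.tasks[task_id]
        # Simulate asynchronous completion after a couple of polls.
        if task["status"] == "PENDING" and next(self._clock) >= 2:
            port_id = str(uuid.uuid4())
            self.ports[port_id] = task["body"]
            task.update(status="COMPLETED", port_id=port_id)
        return task


api = FakeNeutronTaskAPI()
task_id = api.create_port({"name": "web0", "network_id": "net-1"})["task_id"]
while api.get_task(task_id)["status"] != "COMPLETED":
    pass  # a real client would sleep between polls
print(api.get_task(task_id)["status"])  # COMPLETED
```

The key behavioural difference from the v2 API is visible here: the client holds a task handle, not a resource, until the backend has actually converged.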

 Needless to say, this would be a radical change to Neutron's API; it
 should be carefully evaluated and not considered for the v2 API.
 Even if it is easily recognisable that this approach has a few benefits, I
 don't think this will improve usability of the 

Re: [openstack-dev] Designate Incubation Request

2014-05-30 Thread Thierry Carrez
Zane Bitter wrote:
 I think the problem is that we still have elements of the 'project'
 terminology around from the bad old days of the pointless
 core/core-but-don't-call-it-core/library/gating/supporting project
 taxonomy, where project == repository. The result is that every time a
 new project gets incubated, the reaction is always Oh man, you want a
 new *program* too? That sounds really *heavyweight*. If people treated
 the terms 'program' and 'project' as interchangeable and just referred
 to repositories by another name ('repositories', perhaps?) then this
 wouldn't keep coming up.
 
 (IMHO the quickest way to effect this change in mindset would be to drop
 the term 'program' and call the programs projects. In what meaningful
 sense is e.g. Infra or Docs not a project?)

You're right that the confusion now comes from 'project' terminology. We
replaced the old projects by having granular code repositories on one
side and grouping them into programs. The issue is, we still use
'project' (generally to mean code repository now, but in some cases to
mean program). Personally I use 'code repo' more and more instead of
'project' to avoid the confusion.

That said, I disagree that we should just deprecate the term program
and use project for the grouping instead... I think that
would create more confusion than it solves. History in OpenStack proved
that when we reuse terms (core, anyone?) we end up with a mess that
can't be easily untangled. I'd rather deprecate the use of the project
term now in favor of the new terminology. When I still use project
these days it is generally to say the OpenStack project.

In summary:

- The OpenStack project
- The Compute (Nova) program
- The openstack/nova git code repository

-- 
Thierry Carrez (ttx)



[openstack-dev] [Unit-test] Cinder Driver

2014-05-30 Thread Yogesh Prasad
Hi All,
I have developed a Cinder driver. Can you please share the steps to create
a unit test environment and how to run the unit tests?
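For reference, the usual workflow for in-tree OpenStack unit tests at the time used tox; the commands below follow that standard convention (the `mydriver` test filter is a hypothetical placeholder for your driver's test module name):

```shell
# Clone the project your driver lives in and enter it
git clone https://git.openstack.org/openstack/cinder
cd cinder

# tox builds an isolated virtualenv with the test dependencies
pip install tox

# Run the whole unit test suite under Python 2.7
tox -e py27

# Run only the tests matching a substring, e.g. your driver's tests
tox -e py27 -- mydriver

# Run the style checks as well before submitting
tox -e pep8
```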

*Thanks  Regards*,
  Yogesh Prasad.


Re: [openstack-dev] License Management

2014-05-30 Thread Thierry Carrez
Tizy Ninan wrote:
 Are there any software license management tools available for
 OpenStack? For example, a tool that tracks the number of
 instances launched using a particular licensed image.
 Are there any third-party tools available for this?

This is a development-focused mailing-list, to discuss the future of
OpenStack. Your question might get to a more appropriate audience if it
was asked on the OpenStack general mailing-list, which is focused on
questions about USING OpenStack today:

https://wiki.openstack.org/wiki/Mailing_Lists

Cheers,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [tripleO] Should #tuskar business be conducted in the #tripleo channel?

2014-05-30 Thread Tomas Sedovic
On 30/05/14 02:08, James Slagle wrote:
 On Thu, May 29, 2014 at 12:25 PM, Anita Kuno ante...@anteaya.info wrote:
 As I was reviewing this patch today:
 https://review.openstack.org/#/c/96160/

 It occurred to me that the tuskar project is part of the tripleo
 program:
 http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n247

 I wondered if business, including bots posting to irc for #tuskar is
 best conducted in the #tripleo channel. I spoke with Chris Jones in
 #tripleo and he said the topic hadn't come up before. He asked me if I
 wanted to kick off the email thread, so here we are.

 Should #tuskar business be conducted in the #tripleo channel?
 
 I'd say yes. I don't think the additional traffic would be a large
 distraction at all to normal TripleO business.

Agreed, I don't think the traffic increase would be problematic. Neither
channel seems particularly busy.

And it would probably be beneficial to the TripleO developers who aren't
working on the UI stuff as well as the UI people who aren't necessarily
hacking on the rest of TripleO. A discussion in one area can sometimes
use some input from the other, which is harder when you need to move the
conversation between channels.

 
 I can however see how it might be nice to have #tuskar to talk tuskar
 api and tuskar ui stuff in the same channel. Do folks usually do that?
 Or is tuskar-ui conversation already happening in #openstack-horizon?
 




Re: [openstack-dev] [keystone] Redesign of Keystone Federation

2014-05-30 Thread Matthieu Huin
Hello,

For what it is worth, I was toying with the possibility of extending the 
federation mapping mechanism to be used with Keystone's external auth plugin. I 
believe this would allow easy, immediate and generic support of other 
federation protocols through Apache mods, without the need to write 
protocol-specific auth plugins unless truly needed.
I've pushed a very early PoC on this and given some basic guidelines to make it 
work with OpenID here: 

* PoC: https://review.openstack.org/#/c/92079/
* Blueprint: 
https://blueprints.launchpad.net/keystone/+spec/external-auth-federation-mapping

I'd be happy to get some feedback and push work forward on it if it can be of 
any use to the project. Let me know what you think !
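For readers unfamiliar with the mapping mechanism being extended here: a federation mapping turns remote attributes (set by an Apache auth module) into local user/group assertions. The sketch below shows the general shape with a deliberately simplified substitution function; the remote attribute names (`HTTP_OIDC_SUB`, `HTTP_OIDC_EMAIL`) and the group id are assumptions for illustration, not taken from the PoC:

```python
# Simplified sketch of a federation mapping applied to env attributes.
sample_mapping = {
    "rules": [
        {
            "remote": [
                {"type": "HTTP_OIDC_SUB"},     # assumed attribute name
                {"type": "HTTP_OIDC_EMAIL"},   # assumed attribute name
            ],
            "local": [
                {"user": {"name": "{1}"}},     # {1} -> 2nd remote value
                {"group": {"id": "federated_users"}},
            ],
        }
    ]
}


def apply_rule(rule, env):
    """Substitute {N} placeholders with the N-th remote attribute value."""
    values = [env[r["type"]] for r in rule["remote"]]

    def subst(obj):
        if isinstance(obj, dict):
            return {k: subst(v) for k, v in obj.items()}
        if isinstance(obj, str):
            return obj.format(*values)
        return obj

    return [subst(item) for item in rule["local"]]


env = {"HTTP_OIDC_SUB": "abc123", "HTTP_OIDC_EMAIL": "alice@example.org"}
print(apply_rule(sample_mapping["rules"][0], env))
# [{'user': {'name': 'alice@example.org'}}, {'group': {'id': 'federated_users'}}]
```

The appeal of the proposal is that only the `remote` attribute names change per protocol; the mapping machinery itself stays generic.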

Regards

Matthieu Huin 

m...@enovance.com

- Original Message -
 From: David Chadwick d.w.chadw...@kent.ac.uk
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Wednesday, May 28, 2014 5:59:48 PM
 Subject: [openstack-dev]  [keystone] Redesign of Keystone Federation
 
 Hi Everyone
 
 at the Atlanta meeting the following slides were presented during the
 federation session
 
 http://www.slideshare.net/davidwchadwick/keystone-apach-authn
 
 It was acknowledged that the current design is sub-optimal, but it was a
 best first effort to get something working in time for the Icehouse
 release, which it did successfully.
 
 Now is the time to redesign federated access in Keystone in order to
 allow for:
 i) the inclusion of more federation protocols such as OpenID and OpenID
 Connect via Apache plugins
 ii) federating together multiple Keystone installations
 iii) the inclusion of federation protocols directly into Keystone where
 good Apache plugins don't yet exist, e.g. IETF ABFAB
 
 The Proposed Design (1) in the slide show is the simplest change to
 make, in which the Authn module has different plugins for different
 federation protocols, whether via Apache or not.
 
 The Proposed Design (2) is cleaner since the plugins are directly into
 Keystone and not via the Authn module, but it requires more
 re-engineering work, and it was questioned in Atlanta whether that
 effort exists or not.
 
 Kent therefore proposes that we go with Proposed Design (1). Kent will
 provide drafts of the revised APIs and the re-engineered code for
 inspection and approval by the group, if the group agrees to go with
 this revised design.
 
 If you have any questions about the proposed re-design, please don't
 hesitate to ask
 
 regards
 
 David and Kristy
 


Re: [openstack-dev] License Management

2014-05-30 Thread Tizy Ninan
Thank You.

Regards,
Tizy


On Fri, May 30, 2014 at 2:34 PM, Thierry Carrez thie...@openstack.org
wrote:

 Tizy Ninan wrote:
  Are there any software license management tools available for
  OpenStack? For example, a tool that tracks the number of
  instances launched using a particular licensed image.
  Are there any third-party tools available for this?

 This is a development-focused mailing-list, to discuss the future of
 OpenStack. Your question might get to a more appropriate audience if it
 was asked on the OpenStack general mailing-list, which is focused on
 questions about USING OpenStack today:

 https://wiki.openstack.org/wiki/Mailing_Lists

 Cheers,

 --
 Thierry Carrez (ttx)



Re: [openstack-dev] [Blazar] Weekly meeting Blazar (previously Climate) [Climate]

2014-05-30 Thread Dina Belova
It's still there, yes.
I'll be there with 50% activity, I guess, so I'd like to ask Pablo to
chair this one.


On Fri, May 30, 2014 at 12:44 PM, Sylvain Bauza sba...@redhat.com wrote:

 Hi,

 Due to some important changes with Climate (which is now Blazar), and as
 the team is changing quite a bit, I want to make sure we run the weekly
 meeting today at 3pm UTC.

 Thanks,
 -Sylvain





-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


Re: [openstack-dev] License Management

2014-05-30 Thread Diego Parrilla Santamaría
Hi Tizy,

Maybe this is not the right mailing list for that question (what about
openstack-operators?).

We do have that feature in our StackOps Chargeback product. It can work on
any OpenStack Nova solution since Folsom.

Regards
Diego

 --
Diego Parrilla, CEO
www.stackops.com | diego.parri...@stackops.com
+34 91 005-2164 | skype:diegoparrilla




On Fri, May 30, 2014 at 8:12 AM, Tizy Ninan tizy.e...@gmail.com wrote:

 Hi,

 Are there any software license management tools available for
 OpenStack? For example, a tool that tracks the number of
 instances launched using a particular licensed image.
 Are there any third-party tools available for this?

 Thanks,
 Tizy








Re: [openstack-dev] [Neutron][L3] BGP Dynamic Routing Proposal

2014-05-30 Thread Mathieu Rohon
Hi,

I was about to mention ExaBGP too! Can we also consider using those
BGP speakers for the BGPVPN implementation [1]?
It would be consistent to have the same BGP speaker used for every
BGP need inside Neutron.

[1]https://review.openstack.org/#/c/93329/


On Fri, May 30, 2014 at 10:54 AM, Jaume Devesa devv...@gmail.com wrote:
 Hello Takashi,

 thanks for doing this! As we have proposed ExaBgp[1] in the Dynamic Routing
 blueprint[2], I've added a new column for this speaker in the wiki page. I
 plan to fill it soon.

 ExaBgp was our first choice because we thought that running something in
 library mode would be much easier to deal with (especially the exceptions
 and corner cases) and the code would be much cleaner. But it seems that Ryu
 BGP can also fit this requirement. And having the help of a Ryu developer
 like you makes it a promising candidate!

 I'll now start working on a proof of concept to run the agent with these
 implementations and see if we need more requirements to compare the
 speakers.

 [1]: https://wiki.openstack.org/wiki/Neutron/BGPSpeakersComparison
 [2]: https://review.openstack.org/#/c/90833/
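Since the proof of concept needs to run the agent against multiple speakers (ExaBGP, Ryu, ...), one way to keep the comparison cheap is to hide them behind a thin driver interface. The sketch below is purely illustrative; the interface names are invented here, not taken from the blueprint:

```python
import abc


class BGPSpeakerDriver(abc.ABC):
    """Assumed minimal surface a dynamic-routing agent needs."""

    @abc.abstractmethod
    def start(self, local_as, router_id): ...

    @abc.abstractmethod
    def add_peer(self, peer_ip, peer_as): ...

    @abc.abstractmethod
    def advertise_route(self, prefix, next_hop): ...


class InMemorySpeaker(BGPSpeakerDriver):
    """Test double usable until a real ExaBGP/Ryu driver is chosen."""

    def __init__(self):
        self.peers, self.routes = [], []

    def start(self, local_as, router_id):
        self.local_as, self.router_id = local_as, router_id

    def add_peer(self, peer_ip, peer_as):
        self.peers.append((peer_ip, peer_as))

    def advertise_route(self, prefix, next_hop):
        self.routes.append((prefix, next_hop))


speaker = InMemorySpeaker()
speaker.start(local_as=64512, router_id="192.0.2.1")
speaker.add_peer("192.0.2.254", 64513)
speaker.advertise_route("10.0.0.0/24", "192.0.2.1")
print(speaker.routes)  # [('10.0.0.0/24', '192.0.2.1')]
```

With an abstraction like this, swapping speakers in the PoC becomes a one-line change, and the wiki comparison can focus on how well each speaker fills in the driver methods.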

 Regards,


 On 29 May 2014 18:42, YAMAMOTO Takashi yamam...@valinux.co.jp wrote:

 as per discussions at the L3 subteam meeting today, I started
 a BGP speakers comparison wiki page for this bp.

 https://wiki.openstack.org/wiki/Neutron/BGPSpeakersComparison

 Artem, can you add other requirements as columns?

 as one of the ryu developers, i'm naturally biased toward ryu bgp.
 i'd appreciate it if someone provides more info for the other bgp speakers.

 YAMAMOTO Takashi

  Good afternoon Neutron developers!
 
  There has been a discussion about dynamic routing in Neutron for the
  past few weeks in the L3 subteam weekly meetings. I've submitted a review
  request of the blueprint documenting the proposal of this feature:
  https://review.openstack.org/#/c/90833/. If you have any feedback or
  suggestions for improvement, I would love to hear your comments and include
  your thoughts in the document.
 
  Thank you.
 
  Sincerely,
  Artem Dmytrenko





 --
 Jaume Devesa
 Software Engineer at Midokura



Re: [openstack-dev] [neutron][L3] VM Scheduling v/s Network as input any consideration ?

2014-05-30 Thread Isaku Yamahata

Hi. At this moment, Neutron doesn't offer physical network information.
There is a proposal for it [1] and it was discussed at the summit [2].
Although it is still in an early design phase [3][4], using a routing
protocol will surely help to discover the physical network topology.

[1] https://wiki.openstack.org/wiki/Topology-as-a-service 
[2] https://etherpad.openstack.org/p/hierarchical_network_topology
[3] http://lists.openstack.org/pipermail/openstack-dev/2014-May/035868.html
[4] https://review.openstack.org/#/c/91275/

thanks,
Isaku Yamahata

On Fri, May 30, 2014 at 11:11:46AM +0200,
Mathieu Rohon mathieu.ro...@gmail.com wrote:

 I'm also very interested in scheduling VMs with network requirements. This
 seems to be in the scope of the NFV workgroup [1].
 For instance, I think that scheduling should take into account bandwidth/QoS
 requirements for a VM, or specific NIC requirements (availability of
 an SR-IOV VIF on compute nodes).
 
 To do this, it seems that Neutron should report its available capacity (VIF
 availability, bandwidth availability) for each compute node, and Gantt
 should take this reporting into account when scheduling.
 
 [1]https://etherpad.openstack.org/p/juno-nfv-bof
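A scheduler filter consuming such Neutron-reported capacity could look roughly like the sketch below. The capability names, the reporting format, and the `host_passes` shape are assumptions made for illustration (loosely modelled on how Nova filters decide per host), not an actual Nova or Gantt API:

```python
class NetworkCapabilityFilter:
    """Reject hosts whose reported network capacity can't satisfy
    a VM's requested bandwidth or SR-IOV VIFs (names illustrative)."""

    def host_passes(self, host_state, request):
        caps = host_state.get("network", {})
        if request.get("min_bandwidth_mbps", 0) > caps.get(
                "available_bandwidth_mbps", 0):
            return False
        if request.get("sriov_vifs", 0) > caps.get("free_sriov_vifs", 0):
            return False
        return True


# Hypothetical per-compute-node capacity as Neutron might report it.
hosts = [
    {"name": "cn1", "network": {"available_bandwidth_mbps": 500,
                                "free_sriov_vifs": 0}},
    {"name": "cn2", "network": {"available_bandwidth_mbps": 2000,
                                "free_sriov_vifs": 4}},
]
request = {"min_bandwidth_mbps": 1000, "sriov_vifs": 1}
f = NetworkCapabilityFilter()
print([h["name"] for h in hosts if f.host_passes(h, request)])  # ['cn2']
```

The separation-of-concerns question in this thread is essentially where the `hosts` data comes from: Neutron would own the reporting, while the scheduler (Nova today, Gantt later) would only consume it.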
 
 
 
 On Fri, May 30, 2014 at 10:47 AM, jcsf jcsf31...@gmail.com wrote:
 
  Sylvain,
 
 
 
  Thank you for the background -- I will educate myself on this work.
 
 
 
  Thanks,
 
  John
 
 
 
 
 
  *From:* Sylvain Bauza [mailto:sba...@redhat.com]
  *Sent:* Friday, May 30, 2014 11:31 AM
 
  *To:* OpenStack Development Mailing List (not for usage questions)
  *Cc:* jcsf; 'Carl Baldwin'; 'A, Keshava'
 
  *Subject:* Re: [openstack-dev] [neutron][L3] VM Scheduling v/s Network as
  input any consideration ?
 
 
 
  Le 30/05/2014 10:06, jcsf a écrit :
 
  Carl,
 
 
 
  A new routing protocol is certainly of great interest.   Are you working
  with IETF or can you share more here?
 
 
 
  WRT the Nova Scheduler: there are still requirements for the scheduler to
  take the network into consideration as a resource. My focus is to figure
  out how to add network capabilities to the scheduler's algorithm while
  still maintaining a clean separation of concerns between Nova and Neutron.
  We wouldn't want to get back into the nova-network situation.
 
 
 
  John
 
 
  As was previously mentioned, there are already different kinds of
  grouping for VMs in Nova that probably don't require adding new
  network-specific features:
   - aggregates and user-facing AZs allow defining a common set of
  capabilities for the physical hosts upon which you can boot VMs
   - ServerGroups with Affinity/Anti-Affinity filters allow you to enforce a
  certain level of network proximity for VMs
 
 
  Once that is said, there is also an effort to fork the Nova scheduler
  code into a separate project so that cross-project scheduling can happen
  (and consequently Neutron could use it). This project is planned to be
  delivered by the K release, and will be called Gantt.
 
 
  So, could you please mention which features you need from Nova, so we
  could discuss them here before issuing a spec?
 
  -Sylvain
 
 
 
 
 
 
 
 
 
  *From:* Carl Baldwin [mailto:c...@ecbaldwin.net]
  *Sent:* Friday, May 30, 2014 12:05 AM
  *To:* A, Keshava
  *Cc:* jcsf31...@gmail.com; Armando M.; OpenStack Development Mailing List
  (not for usage questions); Kyle Mestery
  *Subject:* Re: [openstack-dev] [neutron][L3] VM Scheduling v/s Network as
  input any consideration ?
 
 
 
  Keshava,
 
 
 
  How much of a problem is routing prefix fragmentation for you?
   Fragmentation causes routing table bloat and may reduce the performance of
  the routing table.  It also increases the amount of information traded by
  the routing protocol.  Which aspect(s) is (are) affecting you?  Can you
  quantify this effect?
 
 
 
  A major motivation for my interest in employing a dynamic routing protocol
  within a datacenter is to enable IP mobility so that I don't need to worry
  about doing things like scheduling instances based on their IP addresses.
   Also, I believe that it can make floating ips more floaty so that they
  can cross network boundaries without having to statically configure routers.
 
 
 
  To get this mobility, it seems inevitable to accept the fragmentation in
  the routing prefixes.  This level of fragmentation would be contained to a
  well-defined scope, like within a datacenter.  Is it your opinion that
  trading off fragmentation for mobility a bad trade-off?  Maybe it depends
  on the capabilities of the TOR switches and routers that you have.  Maybe
  others can chime in here.
 
 
 
  Carl
 
 
 
  On Wed, May 28, 2014 at 10:11 PM, A, Keshava keshav...@hp.com wrote:
 
  Hi,
 
  The motivation behind this requirement is “to achieve VM prefix
  aggregation using a routing protocol (BGP/OSPF)”, so that prefixes
  advertised from the cloud upstream will be aggregated.
 
 
 
  I do not know how the current scheduler is implemented.
 
  But schedule 

Re: [openstack-dev] Selecting more carefully our dependencies

2014-05-30 Thread Thierry Carrez
Thomas Goirand wrote:
 So I'm wondering: are we being careful enough when selecting
 dependencies? In this case, I think we haven't, and I would recommend
 against using wrapt. Not only because it embeds six.py, but because
 upstream looks uncooperative, and bound to its own use cases.

Proposed new dependencies all appear as proposed changes in the
requirements repository. We welcome and encourage distribution packagers
to participate in reviews there, to make sure the packaging pain is
taken into account in the approval process. And if something gets
accepted too fast for you to review and object to it, then raising a
thread on -dev like this is entirely appropriate.

 In a more general case, I would vouch for avoiding *any* Python package
 which is embedding a copy of another one. This should IMO be solved
 before the Python module reaches our global-requirements.txt.

That sounds like a good item in our requirements review checklist. At
the design summit we talked about including requirements rules and
review tips as a living document within the requirements repo itself.
That rule would definitely fit in there.

Cheers,

-- 
Thierry Carrez (ttx)



[openstack-dev] [TripleO] Adding Tuskar to weekly IRC meetings agenda

2014-05-30 Thread Jaromir Coufal

Hi All,

I would like to propose adding Tuskar as a permanent topic to the agenda 
for our weekly IRC meetings. It is an official TripleO project, there is 
quite a lot happening around it, and we are targeting Juno to have 
something solid. So I think that it is important for us to regularly 
keep track of what is going on there.


-- Jarda



[openstack-dev] [Neutron][ML2] Modular agent architecture

2014-05-30 Thread Mathieu Rohon
Hi all,

The modular agent seems to have to choose between two types of architecture [1].

As I understood during the last ML2 meeting [2], the Extension driver
approach seems to be the most reasonable choice.
But I think that those two approaches are complementary: Extension
drivers will deal with RPC callbacks from the plugin, whereas Agent
drivers will deal with controlling the underlying technology to
interpret those callbacks.

It looks like a control-plane/data-plane architecture. Could we have a
control plane manager with which each Extension driver registers
(along with the callbacks it is listening for), and a data plane manager,
with which each dataplane controller (ofagent, ovs, LB...) registers
and which implements a common abstract class?
A port would be managed by only one dataplane controller, and when a
control plane driver wants to apply a modification to a port, it would
retrieve the correct dataplane controller for this port in order to
call one of the abstract methods to modify the dataplane.


[1]https://wiki.openstack.org/wiki/Neutron/ModularL2Agent#Possible_Directions
[2]http://eavesdrop.openstack.org/meetings/networking_ml2/2014/networking_ml2.2014-05-28-16.02.log.html
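The control-plane/data-plane split described above could be sketched as follows. All class and method names here are invented to illustrate the proposal, not actual ML2 agent code:

```python
import abc


class DataplaneDriver(abc.ABC):
    """Common abstract class each dataplane controller implements."""

    @abc.abstractmethod
    def apply_port_update(self, port_id, config):
        ...


class OVSDriver(DataplaneDriver):
    def apply_port_update(self, port_id, config):
        # A real driver would program OVS flows here.
        return f"ovs: reconfigured {port_id} with {config}"


class DataplaneManager:
    def __init__(self):
        self._drivers = {}      # driver name -> driver instance
        self._port_owner = {}   # port id -> driver name

    def register(self, name, driver):
        self._drivers[name] = driver

    def bind_port(self, port_id, name):
        # Each port is managed by exactly one dataplane controller.
        self._port_owner[port_id] = name

    def driver_for(self, port_id):
        return self._drivers[self._port_owner[port_id]]


class SecurityGroupExtension:
    """Control-plane driver: handles an RPC callback from the plugin
    and delegates the dataplane change to the port's controller."""

    def __init__(self, dp_manager):
        self.dp = dp_manager

    def on_security_group_update(self, port_id, rules):
        driver = self.dp.driver_for(port_id)
        return driver.apply_port_update(port_id, {"sg_rules": rules})


dp = DataplaneManager()
dp.register("ovs", OVSDriver())
dp.bind_port("port-1", "ovs")
ext = SecurityGroupExtension(dp)
print(ext.on_security_group_update("port-1", ["allow tcp 22"]))
```

The point of the sketch is the indirection: the Extension driver never knows which technology backs the port; it only asks the dataplane manager for the one controller that owns it.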



Re: [openstack-dev] [tripleO] Should #tuskar business be conducted in the #tripleo channel?

2014-05-30 Thread Jiří Stránský

On 30.5.2014 11:06, Tomas Sedovic wrote:

On 30/05/14 02:08, James Slagle wrote:

On Thu, May 29, 2014 at 12:25 PM, Anita Kuno ante...@anteaya.info wrote:

As I was reviewing this patch today:
https://review.openstack.org/#/c/96160/

It occurred to me that the tuskar project is part of the tripleo
program:
http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n247

I wondered if business, including bots posting to irc for #tuskar is
best conducted in the #tripleo channel. I spoke with Chris Jones in
#tripleo and he said the topic hadn't come up before. He asked me if I
wanted to kick off the email thread, so here we are.

Should #tuskar business be conducted in the #tripleo channel?


I'd say yes. I don't think the additional traffic would be a large
distraction at all to normal TripleO business.


Agreed, I don't think the traffic increase would be problematic. Neither
channel seems particularly busy.

And it would probably be beneficial to the TripleO developers who aren't
working on the UI stuff as well as the UI people who aren't necessarily
hacking on the rest of TripleO. A discussion in one area can sometimes
use some input from the other, which is harder when you need to move the
conversation between channels.


+1

Jirka





I can however see how it might be nice to have #tuskar to talk tuskar
api and tuskar ui stuff in the same channel. Do folks usually do that?
Or is tuskar-ui conversation already happening in #openstack-horizon?






Re: [openstack-dev] [tripleO] Should #tuskar business be conducted in the #tripleo channel?

2014-05-30 Thread Jaromir Coufal


On 2014/30/05 02:08, James Slagle wrote:

On Thu, May 29, 2014 at 12:25 PM, Anita Kuno ante...@anteaya.info wrote:

As I was reviewing this patch today:
https://review.openstack.org/#/c/96160/

It occurred to me that the tuskar project is part of the tripleo
program:
http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n247

I wondered if business, including bots posting to irc for #tuskar is
best conducted in the #tripleo channel. I spoke with Chris Jones in
#tripleo and he said the topic hadn't come up before. He asked me if I
wanted to kick off the email thread, so here we are.

Should #tuskar business be conducted in the #tripleo channel?


+1


I'd say yes. I don't think the additional traffic would be a large
distraction at all to normal TripleO business.

I can however see how it might be nice to have #tuskar to talk tuskar
api and tuskar ui stuff in the same channel. Do folks usually do that?
Or is tuskar-ui conversation already happening in #openstack-horizon?


It is a mix, but a lot of UI-related discussion goes to Horizon and it 
*should* be part of Horizon, so I don't think there is a strong need to 
keep the #tuskar channel separate. So I am for moving #tuskar discussions 
to #tripleo.


-- Jarda




[openstack-dev] [UX] Reminder - UX initial meeting starts on Monday

2014-05-30 Thread Jaromir Coufal

Hi UXers,

I just wanted to remind you all that on Monday, June 2, 2014 at 1700 UTC we 
are starting the OpenStack UX meetings (#openstack-meeting-3).


More details: https://wiki.openstack.org/wiki/Meetings/UX

I'd like to ask all participants if you could write your time zones 
here: https://etherpad.openstack.org/p/ux-meetings


Thanks to all
-- Jarda



Re: [openstack-dev] [TripleO] [Ironic] [Heat] [Nova] Mid-cycle collaborative meetup

2014-05-30 Thread Jaromir Coufal

Hi Devananda,

it is interesting. I think that we can invite Nova as well and join our 
efforts in one place. It will be like a tiny (more focused) summit, but 
it sounds like all the projects could benefit a lot from it. What do you think?


[added Nova tag to the subject]

Nova folks, what do you think? Would you like to join our mid-cycle 
meeting and have the TripleO & Ironic & Heat & Nova teams together in one place?


More details about place, dates and attendees are here:
https://etherpad.openstack.org/p/juno-midcycle-meetup

-- Jarda

On 2014/29/05 19:26, Devananda van der Veen wrote:

Hi Jaromir,

I agree that the midcycle meetup with TripleO and Ironic was very
beneficial last cycle, but this cycle, Ironic is co-locating its sprint
with Nova. Our focus needs to be working with them to merge the
nova.virt.ironic driver. Details will be forthcoming as we work out the
exact details with Nova. That said, I'll try to make the TripleO sprint
as well -- assuming the dates don't overlap.

Cheers,
Devananda


On Wed, May 28, 2014 at 4:05 AM, Jaromir Coufal jcou...@redhat.com wrote:

Hi to all,

after the previous TripleO & Ironic mid-cycle meetup, which I believe
was beneficial for all, I would like to suggest that we meet again
in the middle of the Juno cycle to discuss current progress, blockers,
next steps and of course get some beer all together :)

Last time, TripleO and Ironic merged their meetings together and I
think it was a great idea. This time I would like to also invite the Heat
team if they want to join. Our cooperation is increasing and I think
it would be great, if we can discuss all issues together.

Red Hat offered to host this event, so I am very happy to invite you
all and I would like to ask, who would come if there was a mid-cycle
meetup in following dates and place:

* July 28 - Aug 1
* Red Hat office, Raleigh, North Carolina

If you are intending to join, please, fill yourselves into this
etherpad:
https://etherpad.openstack.org/p/juno-midcycle-meetup

Cheers
-- Jarda










Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-05-30 Thread Jaromir Coufal

On 2014/29/05 20:57, Zane Bitter wrote:

On 29/05/14 13:33, Mike Spreitzer wrote:

Devananda van der Veen devananda@gmail.com wrote on 05/29/2014
01:26:12 PM:

  Hi Jaromir,
 
  I agree that the midcycle meetup with TripleO and Ironic was very
  beneficial last cycle, but this cycle, Ironic is co-locating its
  sprint with Nova. Our focus needs to be working with them to merge
  the nova.virt.ironic driver. Details will be forthcoming as we work
  out the exact details with Nova. That said, I'll try to make the
  TripleO sprint as well -- assuming the dates don't overlap.
 
  Cheers,
  Devananda
 

  On Wed, May 28, 2014 at 4:05 AM, Jaromir Coufal jcou...@redhat.com
wrote:
  Hi to all,
 
  after previous TripleO  Ironic mid-cycle meetup, which I believe
  was beneficial for all, I would like to suggest that we meet again
  in the middle of Juno cycle to discuss current progress, blockers,
  next steps and of course get some beer all together :)
 
  Last time, TripleO and Ironic merged their meetings together and I
  think it was great idea. This time I would like to invite also Heat
  team if they want to join. Our cooperation is increasing and I think
  it would be great, if we can discuss all issues together.
 
  Red Hat offered to host this event, so I am very happy to invite you
  all and I would like to ask, who would come if there was a mid-cycle
  meetup in following dates and place:
 
  * July 28 - Aug 1
  * Red Hat office, Raleigh, North Carolina
 
  If you are intending to join, please, fill yourselves into this
etherpad:
  https://etherpad.openstack.org/p/juno-midcycle-meetup
 
  Cheers
  -- Jarda

Given the organizers, I assume this will be strongly focused on TripleO
and Ironic.
Would this be a good venue for all the mid-cycle discussion that will be
relevant to Heat?
Is anyone planning a distinct Heat-focused mid-cycle meetup?


We haven't had one in the past, but the project is getting bigger so,
given our need to sync with the TripleO folks anyway, this may be a good
opportunity to try. Certainly it's unlikely that any Heat developers
attending will spend the _whole_ week working with the TripleO team, so
there should be time to do something like what you're suggesting. I
think we just need to see who is willing & able to attend, and work out
an agenda on that basis.


Last time we also managed to work on items not related to other 
projects. So I think it is just a matter of putting together an agenda. 
Each team can work on their own, we can schedule cross-project topics, 
and the huge benefit is that we can discuss interrelated issues directly 
with members of other teams, easily and at any time, because we can get 
up and approach anybody face-to-face.


-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-05-30 Thread Jaromir Coufal

On 2014/30/05 10:00, Thomas Spatzier wrote:

Excerpt from Zane Bitter's message on 29/05/2014 20:57:10:


From: Zane Bitter zbit...@redhat.com
To: openstack-dev@lists.openstack.org
Date: 29/05/2014 20:59
Subject: Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle
collaborative meetup

snip

BTW one timing option I haven't seen mentioned is to follow Pycon-AU's
model of running e.g. Friday-Tuesday (July 25-29). I know nobody wants
to be stuck in Raleigh, NC on a weekend (I've lived there, I understand
;), but for folks who have a long way to travel it's one weekend lost
instead of two.


+1 - excellent idea!


It looks like there is interest in these dates, so I added a 3rd option 
to the etherpad [0].


Once more, I would like to ask potential attendees to put yourselves 
down for the dates which would work for you.


-- Jarda

[0] https://etherpad.openstack.org/p/juno-midcycle-meetup

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] [Blazar] [Ironic] Py26/27 gates failing because of keystoneclient-0.9.0

2014-05-30 Thread Sylvain Bauza
Le 30/05/2014 14:00, Sylvain Bauza a écrit :
 Hi Keystone developers,

 I just opened a bug [1] because Ironic and Blazar (ex. Climate) patches
 are failing due to a new release of the Keystone client which seems to
 regress on middleware auth.

 Do you have any ideas on whether it's quick to fix, or shall I provide a
 patch to openstack/global-requirements.txt to only accept keystoneclient
 < 0.9.0?

 Thanks,
 -Sylvain


The bug by itself...
[1] https://bugs.launchpad.net/keystone/+bug/1324861

-Sylvain
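For illustration only (the exact requirements syntax is up to the global-requirements reviewers), a cap of the form `keystoneclient < 0.9.0` simply rejects any release that compares greater than or equal to 0.9.0. A minimal sketch of that comparison, with a made-up helper name:

```python
# Toy illustration of what a "< 0.9.0" version cap means; not the
# actual pip/global-requirements implementation.

def parse(version):
    # "0.8.0" -> (0, 8, 0); tuples compare component by component
    return tuple(int(part) for part in version.split("."))

def satisfies_cap(version, cap="0.9.0"):
    return parse(version) < parse(cap)

print(satisfies_cap("0.8.0"))  # True  -> still installable
print(satisfies_cap("0.9.0"))  # False -> excluded by the cap
```

So a pinned requirements line would keep gates on 0.8.x until the middleware regression is fixed.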


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] [Blazar] [Ironic] Py26/27 gates failing because of keystoneclient-0.9.0

2014-05-30 Thread Sylvain Bauza
Hi Keystone developers,

I just opened a bug [1] because Ironic and Blazar (ex. Climate) patches
are failing due to a new release of the Keystone client which seems to
regress on middleware auth.

Do you have any ideas on whether it's quick to fix, or shall I provide a
patch to openstack/global-requirements.txt to only accept keystoneclient
< 0.9.0?

Thanks,
-Sylvain


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Mahout-as-a-service [job]

2014-05-30 Thread 68x DTS
On 05/28/2014 12:37 PM, Dat Tran wrote:
 Hi everyone,
 I have an idea for a new project: Mahout-as-a-service.
 The main idea of this project:
 - Install OpenStack
 - Deploy the OpenStack Sahara source
 - Deploy Mahout on the Sahara OpenStack system.
 - Construct the API.
 Through a web or mobile interface, users can:
 - Enable / disable Mahout on a Hadoop cluster
 - Run Mahout jobs
 - Get monitoring information related to Mahout jobs.
 - Get statistics and service costs over time and total resource use.
 Definitely!!! The APIs will be public. Looking forward to your comments.
 Hopefully this summer we can do something together.
 Thank you very much! :)

Hi,

Regarding the Mahout jobs: how do you intend to build this service?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] [Blazar] [Ironic] Py26/27 gates failing because of keystoneclient-0.9.0

2014-05-30 Thread Dina Belova
I did not look close to this concrete issue, but in the ceilometer there is
almost the same thing: https://bugs.launchpad.net/ceilometer/+bug/1324885
and fixes were already provided.

Will this help Blazar?

-- Dina


On Fri, May 30, 2014 at 4:00 PM, Sylvain Bauza sba...@redhat.com wrote:

 Hi Keystone developers,

 I just opened a bug [1] because Ironic and Blazar (ex. Climate) patches
 are failing due to a new release of the Keystone client which seems to
 regress on middleware auth.

 Do you have any ideas on whether it's quick to fix, or shall I provide a
 patch to openstack/global-requirements.txt to only accept keystoneclient
 < 0.9.0?

 Thanks,
 -Sylvain


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] [Blazar] [Ironic] Py26/27 gates failing because of keystoneclient-0.9.0

2014-05-30 Thread Sylvain Bauza
Le 30/05/2014 14:07, Dina Belova a écrit :
 I did not look close to this concrete issue, but in the ceilometer
 there is almost the same
 thing: https://bugs.launchpad.net/ceilometer/+bug/1324885 and fixes
 were already provided.

 Will this help Blazar?


Got the Ironic patch as well :

https://review.openstack.org/#/c/96576/1/ironic/tests/api/utils.py

Will provide a patch against Blazar.

Btw, I'll close the bug.

 -- Dina


 On Fri, May 30, 2014 at 4:00 PM, Sylvain Bauza sba...@redhat.com
 mailto:sba...@redhat.com wrote:

 Hi Keystone developers,

 I just opened a bug [1] because Ironic and Blazar (ex. Climate)
 patches
 are failing due to a new release of the Keystone client which seems to
 regress on middleware auth.

 Do you have any ideas on whether it's quick to fix, or shall I provide a
 patch to openstack/global-requirements.txt to only accept
 keystoneclient
 < 0.9.0?

 Thanks,
 -Sylvain


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 -- 

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][L3] VM Scheduling v/s Network as input any consideration ?

2014-05-30 Thread CARVER, PAUL
Mathieu Rohon wrote:

I'm also very interested in scheduling VMs with Network requirement. This 
seems to be in the scope of NFV workgroup [1].
For instance, I think that scheduling should take into account bandwidth/QoS 
requirements for a VM, or a specific NIC.
This falls in my area of interest as well. We’re working on making network 
quality of service guarantees by means of a combination of DSCP marking with a 
reservation database and separate hardware queues in physical network switches 
in order to ensure that the reservations granted don’t exceed the wire speed of 
the switches. Right now the only option if the total of requested reservations 
would exceed the wire speed of the switches is to deny reservations on the 
basis of “first come, first served” and “last come, doesn’t get served”, in 
other words simply issuing a failure response at reservation time to any 
tenants who attempt to make reservations after a particular switch port is 
maxed out (from a reservation perspective, not necessarily maxed out from an 
actual utilization perspective at any given moment.)
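The admission logic described above (grant reservations until the wire speed is committed, then deny) can be sketched as a toy model; the class and parameter names below are invented for illustration, not our actual implementation:

```python
# Toy first-come, first-served bandwidth admission check for one
# switch port: reservations are granted until the committed total
# would exceed the wire speed, after which new requests are denied.

class PortReservations(object):
    def __init__(self, wire_speed_mbps):
        self.capacity = wire_speed_mbps
        self.committed = 0

    def reserve(self, mbps):
        if self.committed + mbps > self.capacity:
            return False          # "last come, doesn't get served"
        self.committed += mbps
        return True

port = PortReservations(wire_speed_mbps=10000)
print(port.reserve(6000))  # True
print(port.reserve(5000))  # False: 11000 would exceed 10G wire speed
```

Note this models reservation capacity only, not actual utilization at any given moment.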
However, with the element of chance in VM scheduling to compute node, it’s 
possible that a tenant could get a deny response from the reservation server 
because their VM landed on a particularly reservation heavy rack. If their VM 
happened to land on a compute node in a different rack then there might well be 
plenty of excess bandwidth on that rack’s uplink. But our current 
implementation has no way to tell Nova or the tenant that a reservation that 
was denied could have been granted if the VM were relocated to a less network 
overloaded rack.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Bad perf on swift servers...

2014-05-30 Thread Shyam Prasad N
Hi Hugo,
Thanks for the reply. Sorry for the delay in this reply.

A couple of disks in one of the swift servers were accidentally wiped a
couple of days back, and swift has been trying hard to restore the data to
those disks. It looks like this was definitely contributing to the CPU
load.
Does swift use rsync to perform this data restoration? Also, is there a way
to configure swift or rsync to reduce the priority of such rsync? I realize
that since my replica count is 2, it makes sense for swift to try hard to
restore the data. But will it be any different if replica count was higher,
say 3 or 4?

Regarding the troubleshooting of account-server cpu usage, the cluster is
currently down for some other issues. Will report back if the issue
persists after I reboot the setup.
As for the topology, I have 4 swift symmetric servers
(proxy+object+container+account) each with 4GB of ram and 10G ethernet
cards to communicate to each other and to clients through a 10G switch on a
private network.

Regards,
Shyam



On Fri, May 30, 2014 at 7:49 AM, Kuo Hugo tonyt...@gmail.com wrote:

 Hi ,

 1. Correct! Once you add new devices and rebalance the rings, a portion of
 the partitions will be reassigned to the new devices. If those partitions were
 used by some objects, the object-replicator is going to move data to the new
 devices. You should see logs of the object-replicator transferring objects
 from one device to another by invoking rsync.
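As a toy illustration of why a rebalance generates rsync traffic (this is not Swift's real ring algorithm, which deliberately minimizes partition movement; the naive placement below only shows the effect):

```python
# Toy sketch: partitions whose device assignment changed after a
# rebalance must be copied to their new devices by the replicator.

def assignments(devices, num_partitions=8):
    # naive round-robin placement; Swift's real ring is much smarter
    return {p: devices[p % len(devices)] for p in range(num_partitions)}

before = assignments(["dev1", "dev2", "dev3"])
after = assignments(["dev1", "dev2", "dev3", "dev4"])

# partitions that landed on a different device need replication
moved = [p for p in before if before[p] != after[p]]
print(moved)  # [3, 4, 5, 6, 7]
```

With naive placement most partitions move; the real ring reassigns only a fraction, but each reassigned partition still costs one rsync transfer.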

 2. Regarding the busy swift-account-server, that's pretty abnormal, though.
 Is there any log indicating the account-server doing any jobs? A possibility
 is that the ring includes a wrong port number, pointing other workers at the
 account-server. Perhaps you can paste all your ring layouts to
 http://paste.openstack.org/ . Using strace on the account-server process may
 help to track its activity.

 3. In a deployment where the outward-facing interface shares network
 resources with the cluster-facing interface, it definitely causes some
 contention on network utilization. Hence the frontend traffic is now
 impacted by replication traffic.

 4. A detailed network topology diagram would help.

 Hugo Kuo


 2014-05-29 1:06 GMT+08:00 Shyam Prasad N nspmangal...@gmail.com:

 Hi,

 Confused about the right mailing list to ask this question. So including
 both openstack and openstack-dev in the CC list.

 I'm running a swift cluster with 4 nodes.
 All 4 nodes are symmetrical. i.e. proxy, object, container, and account
 servers running on each with similar storage configuration and conf files.
 The I/O traffic to this cluster is mainly to upload dynamic large objects
 (typically 1GB chunks (sub-objects) and around 5-6 chunks under each large
 object).

 The setup is running and serving data; but I've begun to see a few perf
 issues, as the traffic increases. I want to understand the reason behind
 some of these issues, and make sure that there is nothing wrong with the
 setup configuration.

 1. High CPU utilization from rsync. I have set replica count in each of
 account, container, and object rings to 2. From what I've read, this
 assigns 2 devices for each partition in the storage cluster. And for each
 PUT, the 2 replicas should be written synchronously. And for GET, the I/O
 is through one of the object servers. So nothing here should be
 asynchronous in nature. Then what is causing the rsync traffic here?

 I ran a ring rebalance command after adding a node recently.
 Could this be causing the issue?

 2. High CPU utilization from swift-account-server threads. All my
 frontend traffic uses 1 account and 1 container on the servers. There are
 hundreds of objects in the same container. I don't understand what's
 keeping the account servers busy.

 3. I've started noticing that the 1GB object transfers of the frontend
 traffic are taking significantly more time than they used to (more than
 double the time). Could this be because I'm using the same subnet for both
 the internal and the frontend traffic?

 4. Can someone provide me some pointers/tips to improving perf for my
 cluster configuration? (I guess I've given out most details above. Feel
 free to ask if you need more details)

 As always, thanks in advance for your replies. Appreciate the support. :)
 --
 -Shyam

 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack





-- 
-Shyam
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [climate] Friday Meeting [1]

2014-05-30 Thread nobi nobita
Thank you!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] [Heat] [Nova] Mid-cycle collaborative meetup

2014-05-30 Thread Clint Byrum
Excerpts from Jaromir Coufal's message of 2014-05-30 17:16:21 +0530:
 Hi Devananda,
 
 it is interesting. I think that we can invite Nova as well and join our 
 efforts in one place. It will be like a tiny (more focused) summit, but 
 it sounds like all projects could benefit a lot from it. What do you think?
 
 [added Nova tag to the subject]
 
 Nova folks, what do you think? Would you like to join our mid-cycle 
 meeting and have the TripleO & Ironic & Heat & Nova teams together at one place?
 
 More details about place, dates and attendees are here:
 https://etherpad.openstack.org/p/juno-midcycle-meetup

My personal opinion is that it needs to remain focused. Having two
groups together is good. Having three will increase the logistical
issues and add pressure to those who straddle all three. Having four
will just make it a mini-summit with everyone's heads spinning. We want
to get real work done at these events; they're not nearly as social.

I like the idea of it just being TripleO and Heat, but I'm biased, since
I'm a core reviewer on TripleO... and Heat. :)

I also like the idea of Ironic devs strategically placing themselves
between every Nova core reviewer and the nearest source of caffeine,
with a nice smile on their face and a "hey, would you like to review my
patch?".

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Marconi] Kafka support and high throughput

2014-05-30 Thread Keith Newstadt
Has anyone given thought to using Kafka to back Marconi? And has there been 
discussion about adding high-throughput APIs to Marconi?

We're looking at providing Kafka as a messaging service for our customers, in a 
scenario where throughput is a priority.  We've had good luck using both 
streaming HTTP interfaces and long poll interfaces to get high throughput for 
other web services we've built.  Would this use case be appropriate in the 
context of the Marconi roadmap?

Thanks,
Keith Newstadt
keith_newst...@symantec.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Monitoring as a Service

2014-05-30 Thread nobi nobita
Thanks!!!

I was actually going to ask this issue :)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

2014-05-30 Thread Kyle Mestery
I agree with Salvatore; I don't think the optimization of that method (and
possibly others) requires a BP, but rather a bug.

Can you please file one, Xurong?

Thanks,
Kyle


On Fri, May 30, 2014 at 3:39 AM, Salvatore Orlando sorla...@nicira.com wrote:
 It seems that method has some room for optimization, and I suspect the same
 logic has been used in other type drivers as well.
 If optimization is possible, it might be the case to open a bug for it.

 Salvatore


 On 30 May 2014 04:58, Xurong Yang ido...@gmail.com wrote:

 Hi,
 Thanks for your response. Yes, I see the reason; that's why I'm asking
 whether a good solution could achieve high performance with a large
 VXLAN range. If possible, it deserves a blueprint.

 Thanks,
 Xurong Yang


 2014-05-29 18:12 GMT+08:00 ZZelle zze...@gmail.com:

 Hi,


 VXLAN networks are inserted/verified in the DB one by one, which could
 explain the time required


 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/type_vxlan.py#L138-L172
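The per-row pattern in the linked code could, in principle, be replaced by a set difference computed once, followed by a single bulk insert/delete. A rough sketch in plain Python (no SQLAlchemy; the function name here is invented, not Neutron's API):

```python
# Sketch of the optimization idea: instead of checking/inserting each
# VNI in its own DB round trip, diff the configured ranges against the
# existing allocation rows once and apply the changes in bulk.

def sync_allocations(existing, vni_ranges):
    wanted = set()
    for lo, hi in vni_ranges:
        wanted.update(range(lo, hi + 1))
    to_add = wanted - existing        # one bulk INSERT instead of N
    to_del = existing - wanted        # stale, unallocated rows to drop
    return sorted(to_add), sorted(to_del)

adds, dels = sync_allocations({1, 2, 3, 50}, [(1, 10)])
print(adds)  # [4, 5, 6, 7, 8, 9, 10]
print(dels)  # [50]
```

For a [1, 16M] range the `wanted` set itself is large, so a real fix would likely also batch the inserts, but the round-trip reduction is the core of the idea.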

 Cédric



 On Thu, May 29, 2014 at 12:01 PM, Xurong Yang ido...@gmail.com wrote:

 Hi, Folks,

 When we configure a VXLAN range of [1,16M], the neutron-server service takes
 a long time and CPU usage is very high (100%) during initialization. One test
 based on PostgreSQL confirmed this: more than 1h when the VXLAN range is [1, 1M].

 So, any good solution about this performance issue?

 Thanks,
 Xurong Yang



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][L3] VM Scheduling v/s Network as input any consideration ?

2014-05-30 Thread Sylvain Bauza
Le 30/05/2014 14:44, CARVER, PAUL a écrit :

 Mathieu Rohon wrote:

  

 I'm also very interested in scheduling VMs with Network requirement.
 This seems to be in the scope of NFV workgroup [1].

 For instance, I think that scheduling should take into account
 bandwith/QoS requirement for a VM, or specific Nic

 This falls in my area of interest as well. We're working on making
 network quality of service guarantees by means of a combination of
 DSCP marking with a reservation database and separate hardware queues
 in physical network switches in order to ensure that the reservations
 granted don't exceed the wire speed of the switches. Right now the
 only option if the total of requested reservations would exceed the
 wire speed of the switches is to deny reservations on the basis of
 first come, first served and last come, doesn't get served, in
 other words simply issuing a failure response at reservation time to
 any tenants who attempt to make reservations after a particular switch
 port is maxed out (from a reservation perspective, not necessarily
 maxed out from an actual utilization perspective at any given moment.)


Hi Paul,

I don't know exactly what your needs are, nor the current state of your
work on implementing a reservation database, but please note
that there is a Stackforge project aiming to provide this for OpenStack
natively:

http://wiki.openstack.org/wiki/Blazar (formerly Climate).


During the last Juno summit, some discussions in Nova were related to having
a reservation system in place, and the agreement was to take a look at
Climate/Blazar to see if it's a good fit.

I think it would be valuable for both you and the Climate folks (where I'm
a core reviewer) to see if we can join efforts to address your
requirements with Blazar, so you wouldn't have to do all of this yourself.

There is a weekly meeting today at 3pm UTC in #openstack-meeting for the
Blazar team. If you have time, just jump in and raise your questions
there; that would be a good starting point.



 However, with the element of chance in VM scheduling to compute node,
 it's possible that a tenant could get a deny response from the
 reservation server because their VM landed on a particularly
 reservation heavy rack. If their VM happened to land on a compute node
 in a different rack then there might well be plenty of excess
 bandwidth on that rack's uplink. But our current implementation has no
 way to tell Nova or the tenant that a reservation that was denied
 could have been granted if the VM were relocated to a less network
 overloaded rack.


IMHO, this strategy requires both a reservation system (in my opinion,
Blazar), a resource placement system (Gantt) and possibly some AMQP
notifications which would go through the queue.
That's something I also have in scope; we can discuss that whenever
you want.

-Sylvain


  



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Juno mid-cycle sprint in Paris, July 2014

2014-05-30 Thread Julien Danjou
Hi fellow OpenStack developers,

I'm glad to announce that we're organizing a mid-cycle hacking sprint in
Paris (in eNovance office) from 2nd July to 4th July 2014.
This mid-cycle sprint is not tied to any particular OpenStack project,
so it can be the occasion to have some cross-project hacking too.
So every OpenStack developer's welcome.

Details are at:

  https://wiki.openstack.org/wiki/Sprints/ParisJuno2014

Feel free to register and join us!

Cheers,
-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleO] Should #tuskar business be conducted in the #tripleo channel?

2014-05-30 Thread Petr Blaho
On Thu, May 29, 2014 at 12:25:02PM -0400, Anita Kuno wrote:
 As I was reviewing this patch today:
 https://review.openstack.org/#/c/96160/
 
 It occurred to me that the tuskar project is part of the tripleo
 program:
 http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n247
 
 I wondered if business, including bots posting to irc for #tuskar is
 best conducted in the #tripleo channel. I spoke with Chris Jones in
 #tripleo and he said the topic hadn't come up before. He asked me if I
 wanted to kick off the email thread, so here we are.
 
 Should #tuskar business be conducted in the #tripleo channel?
 
 Thanks,
 Anita.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hi,

we already have gerritbot posting to #tripleo channel w/r/t patches in
tuskar (as for all tripleo repos).

And I think that we should talk on one common channel, ie #tripleo.

+1 for abandoning #tuskar channel.

-- 
Petr Blaho, pbl...@redhat.com
Software Engineer

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Unit-test] Cinder Driver

2014-05-30 Thread Erlon Cruz
Hi Yogesh,

The best way to start writing tests is to look at examples from tests
already implemented. Since the Icehouse release, all tests must be written
with mock¹. Most of the tests in the codebase are written with the old
framework (mox); please have a look at these implementations:

cinder/tests/{test_hp3par.py,test_ibmnas.py, test_netapp_eseries_iscsi.py}

This² is also an implementation I'm working on using mock.

Kind Regards,
Erlon



¹ https://github.com/openstack/cinder/blob/master/HACKING.rst
² https://review.openstack.org/#/c/84244/
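For example, a self-contained mock-based test might look like the sketch below; `FakeDriver` and its `client` attribute are invented for illustration, not a real Cinder driver (real driver tests live under cinder/tests/ as noted above). It uses Python 3's stdlib `unittest.mock`; on the 2014-era codebase the standalone `mock` package would be imported instead.

```python
# Minimal sketch of a mock-based driver unit test: the backend client
# is replaced with a Mock so no real storage backend is needed.
import unittest
from unittest import mock


class FakeDriver(object):
    """Illustrative stand-in for a volume driver wrapping a backend client."""

    def __init__(self, client):
        self.client = client

    def create_volume(self, name, size_gb):
        return self.client.create(name=name, size=size_gb)


class FakeDriverTestCase(unittest.TestCase):
    def test_create_volume_calls_backend(self):
        client = mock.Mock()
        client.create.return_value = {"id": "vol-1"}
        driver = FakeDriver(client)

        result = driver.create_volume("vol", 10)

        # verify the driver forwarded the right arguments to the backend
        client.create.assert_called_once_with(name="vol", size=10)
        self.assertEqual({"id": "vol-1"}, result)
```

Running it with `python -m unittest` (or tox, in-tree) exercises the driver logic without any backend.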


On Fri, May 30, 2014 at 6:05 AM, Yogesh Prasad yogesh.pra...@cloudbyte.com
wrote:


 Hi All,
 I have developed a cinder driver. Can you please share the steps to create
 a unit test environment and how to run unit tests?

 Thanks & Regards,
   Yogesh Prasad.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Change of meeting time

2014-05-30 Thread Dougal Matthews

- Original Message -
 From: James Polley j...@jamezpolley.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Saturday, 24 May, 2014 1:21:53 AM
 Subject: [openstack-dev] [TripleO] Change of meeting time
 
 Following a lengthy discussion under the subject Alternating meeting tmie
 for more TZ friendliness, the TripleO meeting now alternates between
 Tuesday 1900UTC (the former time) and Wednesday 0700UTC, for better
 coverage across Australia, India, China, Japan, and the other parts of the
 world that found it impossible to get to our previous meeting time.
 
 https://wiki.openstack.org/wiki/Meetings/TripleO#Weekly_TripleO_team_meeting
 has been updated with a link to the iCal feed so you can figure out which
 time we're using each week.
 
 The coming meeting will be our first Wednesday 0700UTC meeting. We look
 forward to seeing some fresh faces (well, fresh nicks at least)!

Thanks for pushing this forward, I look forward to making more of the meetings
with this new time. Unfortunately I was on annual leave for the first one 
however.

Dougal

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleO] Should #tuskar business be conducted in the #tripleo channel?

2014-05-30 Thread Dougal Matthews
- Original Message -
 From: Jaromir Coufal jcou...@redhat.com
 To: openstack-dev@lists.openstack.org
 Sent: Friday, 30 May, 2014 12:04:29 PM
 Subject: Re: [openstack-dev] [tripleO] Should #tuskar business be conducted 
 in the #tripleo channel?
 
  I'd say yes. I don't think the additional traffic would be a large
  distraction at all to normal TripleO business.
 
  I can however see how it might be nice to have #tuskar to talk tuskar
  api and tuskar ui stuff in the same channel. Do folks usually do that?
  Or is tuskar-ui conversation already happening in #openstack-horizon?
 
 It is a mix, but a lot of UI-related discussions go to Horizon and it
 *should* be part of Horizon, so I don't think there is a strong need to
 keep the #tuskar channel separate. So I am for moving #tuskar discussions
 to #tripleo.

+1, this sounds good to me.

Dougal

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Clone all of OpenStack and Stackforge?

2014-05-30 Thread Doug Hellmann
On Fri, May 30, 2014 at 3:58 AM, Clark, Robert Graham
robert.cl...@hp.com wrote:
 I'm sure there's a nice way to do this: I want to pull down all the stable 
 code for all the current OpenStack and Stackforge projects to plug into some 
 analytics tooling. What's the best way to do this?

 -Rob

I use a script with a variation of this loop that Monty posted a while back:

http://lists.openstack.org/pipermail/openstack-dev/2013-October/017532.html
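A stand-in for that loop looks roughly like the sketch below; the project list and URL layout (`git.openstack.org/<namespace>/<name>`) are assumptions for illustration, not taken from Monty's actual script, which derives the full project list rather than hard-coding it.

```python
# Sketch of a clone-everything loop; `run` is injectable so the loop
# can be exercised without touching the network.
import subprocess

PROJECTS = ["openstack/nova", "openstack/cinder", "stackforge/blazar"]

def clone_url(project):
    return "https://git.openstack.org/%s" % project

def clone_all(projects, run=subprocess.check_call):
    for project in projects:
        run(["git", "clone", clone_url(project)])

# dry run: record the commands instead of executing git
calls = []
clone_all(PROJECTS, run=calls.append)
print(calls[0])  # ['git', 'clone', 'https://git.openstack.org/openstack/nova']
```

Swapping `run=calls.append` for the default `subprocess.check_call` performs the real clones.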

Doug



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Adding Tuskar to weekly IRC meetings agenda

2014-05-30 Thread Jason Rist
On Fri 30 May 2014 04:13:31 AM MDT, Jaromir Coufal wrote:
 Hi All,

 I would like to propose adding Tuskar as a permanent topic to the
 agenda for our weekly IRC meetings. It is an official TripleO
 project, there is quite a lot happening around it, and we are targeting
 Juno to have something solid. So I think it is important for
 us to regularly keep track of what is going on there.

 -- Jarda

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

+1

--
Jason E. Rist
Senior Software Engineer
OpenStack Management UI
Red Hat, Inc.
openuc: +1.972.707.6408
mobile: +1.720.256.3933
Freenode: jrist
github/identi.ca: knowncitizen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][db] oslo.db repository review request

2014-05-30 Thread Matt Riedemann



On 4/25/2014 7:46 AM, Doug Hellmann wrote:

On Fri, Apr 25, 2014 at 8:33 AM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:



On 4/18/2014 1:18 PM, Doug Hellmann wrote:


Nice work, Victor!

I left a few comments on the commits that were made after the original
history was exported from the incubator. There were a couple of small
things to address before importing the library, and a couple that can
wait until we have the normal code review system. I'd say just add new
commits to fix the issues, rather than trying to amend the existing
commits.

We haven't really discussed how to communicate when we agree the new
repository is ready to be imported, but it seems reasonable to use the
patch in openstack-infra/config that will be used to do the import:
https://review.openstack.org/#/c/78955/

Doug

On Fri, Apr 18, 2014 at 10:28 AM, Victor Sergeyev
vserge...@mirantis.com wrote:


Hello all,

During the Icehouse release cycle our team has been working on splitting
the openstack common db code into a separate library (blueprint [1]). At
the moment the issues mentioned in this bp and in [2] are solved, and we
are moving forward to the graduation of oslo.db. You can find the new
oslo.db code at [3].

So, before moving forward, I want to ask the Oslo team to review the
oslo.db repository [3], and especially the commit that allows the unit
tests to pass [4].

Thanks,
Victor

[1] https://blueprints.launchpad.net/oslo/+spec/oslo-db-lib
[2] https://wiki.openstack.org/wiki/Oslo/GraduationStatus#oslo.db
[3] https://github.com/malor/oslo.db
[4]

https://github.com/malor/oslo.db/commit/276f7570d7af4a7a62d0e1ffb4edf904cfbf0600




I'm probably just late to the party, but simple question: why is it in the
malor group in github rather than the openstack group, like oslo.messaging
and oslo.rootwrap?  Is that temporary or will it be moved at some point?


This is the copy of the code being prepared to import into a new
oslo.db repository. It's easier to set up that temporary hosting on
github. The repo has been approved to be imported, and after that
happens it will be hosted on our git server like all of the other oslo
libraries.

Doug



--

Thanks,

Matt Riedemann






Are there any status updates on where we are with this [1]?  I see that 
oslo.db is in git.openstack.org now [2].  There is a super-alpha dev 
package on PyPI [3]; are we waiting for an official release?


I'd like to start moving nova over to using oslo.db or at least get an 
idea for how much work it's going to be.  I don't imagine it's going to 
be that difficult since I think a lot of the oslo.db code originated in 
nova.


[1] https://review.openstack.org/#/c/91407/
[2] http://git.openstack.org/cgit/openstack/oslo.db/
[3] https://pypi.python.org/pypi/oslo.db/0.0.1.dev15.g7efbf12

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [nova][vmware] Current state of the spawn refactor

2014-05-30 Thread Matt Riedemann



On 5/27/2014 4:43 PM, Davanum Srinivas wrote:

Hi Michael,

* Phase 1 has one review left - https://review.openstack.org/#/c/92691/
* I'll update the phase 2 patch shortly -
https://review.openstack.org/#/c/87002/
* Once the 2 reviews above get approved, we will resurrect the
oslo.vmware BP/review - https://review.openstack.org/#/c/70175/

There is a team etherpad with a game plan that we try to keep
up to date: https://etherpad.openstack.org/p/vmware-subteam-juno.
Based on discussions during the summit, we are hoping to get the 3
items above into juno-1 so we can work on the features mentioned in
the etherpad.

Tracy, GaryK, and others, please chime in.

thanks,
dims

On Tue, May 27, 2014 at 5:31 PM, Michael Still mi...@stillhq.com wrote:

Hi.

I've been looking at the current state of the vmware driver spawn
refactor work, and as best as I can tell phase one is now complete.
However, I can only find one phase two patch, and it is based on an
outdated commit. That patch is:

 https://review.openstack.org/#/c/87002/

There also aren't any phase three patches that I can see. What is the
current state of this work? Is it still targeting juno-1?

Thanks,
Michael

--
Rackspace Australia







I reviewed the test_spawn change [1] yesterday and have some issues I'd 
like to discuss first around how we refactor out or test 
copy_virtual_disk/create_virtual_disk in isolation before moving forward 
with phase 2 (or plan that as part of phase 2, which would move out 
phase 3 a bit).


There is also this change [2] which I'm not sure is part of phase 1, 
but it is associated with the refactor blueprint.  If so, it'd be nice 
if the commit message mentioned phase-1, like the other patches that 
target the different phases.


Otherwise the latest update is that John moved the blueprint to juno-2 
given what's left to get done in the next two weeks.


[1] https://review.openstack.org/#/c/92691/
[2] https://review.openstack.org/#/c/84713/

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Blazar] Weekly meeting Blazar (previously Climate) [Climate]

2014-05-30 Thread Nikolay Starodubtsev
I'll be back to weekly meetings from next Friday.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1


2014-05-30 2:20 GMT-07:00 Dina Belova dbel...@mirantis.com:

 It's still there, yes.
 I'll be there with 50% activity, I guess, so I'd like to ask Pablo to be
 chair on this one.


 On Fri, May 30, 2014 at 12:44 PM, Sylvain Bauza sba...@redhat.com wrote:

 Hi,

 Due to some important changes with Climate (which is now Blazar), and as
 the team is changing quite a bit, I want to make sure we run the weekly
 meeting today at 3pm UTC.

 Thanks,
 -Sylvain





 --

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.



Re: [openstack-dev] [Fuel-dev] [Openstack-dev] New RA for Galera

2014-05-30 Thread Vladimir Kuklin
Bartosz,

this looks like part of our global approach of rewriting the pacemaker
providers to use the crm_diff command instead of the cs_shadow/commit
approach (part of
https://blueprints.launchpad.net/fuel/+spec/ha-pacemaker-improvements).

If the reboot attributes do not help, then we should reconsider this
approach as a high-priority item. Thank you for the update.


On Thu, May 29, 2014 at 3:06 PM, Bartosz Kupidura bkupid...@mirantis.com
wrote:

 Hello,


 Wiadomość napisana przez Vladimir Kuklin vkuk...@mirantis.com w dniu 29
 maj 2014, o godz. 12:09:

  maybe the problem is that you are using lifetime crm attributes
 instead of 'reboot' ones. shadow/commit is used by us because we need
 transactional behaviour in some cases. if you turn crm_shadow off, then you
 will experience problems with multi-state resources and
 location/colocation/order constraints. so we need to find a way to make
 commits transactional. there are two ways:
  1) rewrite corosync providers to use crm_diff command and apply it
 instead of shadow commit that can swallow cluster attributes sometimes

 In PoC i removed all cs_commit/cs_shadow, and looks that everything is
 working. But as you says, this can lead to problems with more complicated
 deployments.
 This need to be verified.

  2) store 'reboot' attributes instead of lifetime ones

 I tested with --lifetime forever and with reboot. No difference for the
 cs_commit/cs_shadow failure.

 Moreover, we need a method to store the GTID permanently (to support
 whole-cluster reboot).
 If we want to stick to cs_commit/cs_shadow, we need a method other than
 crm_attribute to store the GTID.

 
 
 
  On Thu, May 29, 2014 at 12:42 PM, Bogdan Dobrelya 
 bdobre...@mirantis.com wrote:
  On 05/27/14 16:44, Bartosz Kupidura wrote:
   Hello,
   Responses inline.
  
  
   Wiadomość napisana przez Vladimir Kuklin vkuk...@mirantis.com w
 dniu 27 maj 2014, o godz. 15:12:
  
   Hi, Bartosz
  
   First of all, we are using openstack-dev for such discussions.
  
   Second, there is also Percona's RA for Percona XtraDB Cluster, which
 looks like pretty similar, although it is written in Perl. May be we could
 derive something useful from it.
  
   Next, if you are working on this stuff, let's make it as open for the
 community as possible. There is a blueprint for Galera OCF script:
 https://blueprints.launchpad.net/fuel/+spec/reliable-galera-ocf-script.
  It would be awesome if you wrote down the specification and sent the new
 Galera OCF code change request to fuel-library gerrit.
  
   Sure, I will update this blueprint.
   Change request in fuel-library:
 https://review.openstack.org/#/c/95764/
 
  That is a really nice catch, Bartosz, thank you. I believe we should
  review the new OCF script thoroughly and consider omitting
  cs_commits/cs_shadows as well. What would be the downsides?
 
  
  
   Speaking of crm_attribute stuff. I am very surprised that you are
 saying that node attributes are altered by crm shadow commit. We are using
 similar approach in our scripts and have never faced this issue.
  
   This is probably because you update crm_attribute very rarely, while
 with my approach the GTID attribute is updated every 60s on every node (3
 updates per 60s in a standard HA setup).
  
   You can try updating any attribute in a loop while deploying the
 cluster to trigger the failure with the corosync diff.
 
  It sounds reasonable and we should verify it.
  I've updated the statuses for related bugs and attached them to the
  aforementioned blueprint as well:
  https://bugs.launchpad.net/fuel/+bug/1283062/comments/7
  https://bugs.launchpad.net/fuel/+bug/1281592/comments/6
 
 
  
  
   Corosync 2.x support is in our roadmap, but we are not sure that we
 will use Corosync 2.x before the 6.x release series starts.
  
   Yeah, moreover corosync CMAP is not synced between cluster nodes (or
 maybe I'm doing something wrong?). So we need another solution for this...
  
 
  We should use CMAN for Corosync 1.x, perhaps.
 
  
  
   On Tue, May 27, 2014 at 3:08 PM, Bartosz Kupidura 
 bkupid...@mirantis.com wrote:
   Hello guys!
   I would like to start discussion on a new resource agent for
 galera/pacemaker.
  
   Main features:
   * Support cluster bootstrap
   * Support reboot any node in cluster
   * Support reboot whole cluster
   * To determine which node have latest DB version, we should use
 galera GTID (Global Transaction ID)
   * Node with latest GTID is galera PC (primary component) in case of
 reelection
   * Administrator can manually set node as PC
  
   GTID:
   * get GTID from mysqld --wsrep-recover or SQL query 'SHOW STATUS LIKE
 ‚wsrep_local_state_uuid''
   * store GTID as crm_attribute for node (crm_attribute --node
 $HOSTNAME --lifetime $LIFETIME --name gtid --update $GTID)
   * on every monitor/stop/start action update GTID for given node
   * GTID can have 3 formats:
- ----:123 - standard
 cluster-id:commit-id
- ----:-1 - standard non initialized
 cluster, 
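The GTID-based election described above (store each node's GTID via crm_attribute, then bootstrap from the node with the highest commit id, treating -1 as uninitialized) can be sketched as follows. This is an illustrative sketch only, not the actual OCF resource agent code; the function names are invented for the example.

```python
# Illustrative sketch of the primary-component election rule described
# above. GTIDs are "cluster-uuid:commit-id" strings, one per node, as
# stored via crm_attribute; a commit id of -1 means the node has no
# initialized state and must not bootstrap the cluster.

def parse_commit_id(gtid):
    """Extract the commit id (seqno) from a 'uuid:commit-id' GTID string."""
    return int(gtid.rsplit(":", 1)[1])

def pick_bootstrap_node(node_gtids):
    """Given {node_name: gtid}, return the node with the latest commit id
    (the one that should become the primary component), or None if no
    node has an initialized GTID."""
    initialized = {node: parse_commit_id(gtid)
                   for node, gtid in node_gtids.items()
                   if parse_commit_id(gtid) >= 0}
    if not initialized:
        return None
    return max(initialized, key=initialized.get)
```

For example, with three nodes at commit ids 120, 123 and -1, the node at 123 would be chosen to bootstrap; if every node reports -1, no automatic bootstrap is possible and the administrator would have to pick a node manually, as proposed above.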

Re: [openstack-dev] [FUEL][Design session 5.1] 28/05/2014 meeting minutes

2014-05-30 Thread Vladimir Kuklin
Guys, we are going to have a more extended FUEL Library design meeting in
the IRC channel during the regular FUEL meeting on May 5th. So feel free to
add blueprints to the meeting agenda and we will consider adding them to
the 5.1 Roadmap.


On Wed, May 28, 2014 at 7:41 PM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 Hey, folks

 We did a meeting today regarding 5.1-targeted blueprints and design.

 Here is the document with the results:

 https://etherpad.openstack.org/p/fuel-library-5.1-design-session

 Obviously, we need several additional meetings to build up roadmap for
 5.1, but I think this was a really good start. Thank you all.

 We will continue to work on this during this and next working week. Hope
 to see you all on weekly IRC meeting tomorrow. Feel free to propose your
 blueprints and ideas for 5.1 release.
 https://wiki.openstack.org/wiki/Meetings/Fuel

 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com




-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com http://www.mirantis.ru/
www.mirantis.ru
vkuk...@mirantis.com


Re: [openstack-dev] [oslo][db] oslo.db repository review request

2014-05-30 Thread Roman Podoliaka
Hi Matt,

We're waiting for a few important fixes to be merged (usage of
oslo.config, eventlet tpool support). Once those are merged, we'll cut
the initial release.

Thanks,
Roman

On Fri, May 30, 2014 at 5:19 PM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:


 On 4/25/2014 7:46 AM, Doug Hellmann wrote:

 On Fri, Apr 25, 2014 at 8:33 AM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:



 On 4/18/2014 1:18 PM, Doug Hellmann wrote:


 Nice work, Victor!

 I left a few comments on the commits that were made after the original
 history was exported from the incubator. There were a couple of small
 things to address before importing the library, and a couple that can
 wait until we have the normal code review system. I'd say just add new
 commits to fix the issues, rather than trying to amend the existing
 commits.

 We haven't really discussed how to communicate when we agree the new
 repository is ready to be imported, but it seems reasonable to use the
 patch in openstack-infra/config that will be used to do the import:
 https://review.openstack.org/#/c/78955/

 Doug

 On Fri, Apr 18, 2014 at 10:28 AM, Victor Sergeyev
 vserge...@mirantis.com wrote:


 Hello all,

 During Icehouse release cycle our team has been working on splitting of
 openstack common db code into a separate library blueprint [1]. At the
 moment the issues, mentioned in this bp and [2] are solved and we are
 moving
 forward to graduation of oslo.db. You can find the new oslo.db code at
 [3]

 So, before moving forward, I want to ask Oslo team to review oslo.db
 repository [3] and especially the commit, that allows the unit tests to
 pass
 [4].

 Thanks,
 Victor

 [1] https://blueprints.launchpad.net/oslo/+spec/oslo-db-lib
 [2] https://wiki.openstack.org/wiki/Oslo/GraduationStatus#oslo.db
 [3] https://github.com/malor/oslo.db
 [4]


 https://github.com/malor/oslo.db/commit/276f7570d7af4a7a62d0e1ffb4edf904cfbf0600



 I'm probably just late to the party, but simple question: why is it in
 the
 malor group in github rather than the openstack group, like
 oslo.messaging
 and oslo.rootwrap?  Is that temporary or will it be moved at some point?


 This is the copy of the code being prepared to import into a new
 oslo.db repository. It's easier to set up that temporary hosting on
 github. The repo has been approved to be imported, and after that
 happens it will be hosted on our git server like all of the other oslo
 libraries.

 Doug


 --

 Thanks,

 Matt Riedemann





 Are there any status updates on where we are with this [1]?  I see that
 oslo.db is in git.openstack.org now [2].  There is a super-alpha dev package
 on pypi [3], are we waiting for an official release?

 I'd like to start moving nova over to using oslo.db or at least get an idea
 for how much work it's going to be.  I don't imagine it's going to be that
 difficult since I think a lot of the oslo.db code originated in nova.

 [1] https://review.openstack.org/#/c/91407/
 [2] http://git.openstack.org/cgit/openstack/oslo.db/
 [3] https://pypi.python.org/pypi/oslo.db/0.0.1.dev15.g7efbf12


 --

 Thanks,

 Matt Riedemann




Re: [openstack-dev] [oslo][db] oslo.db repository review request

2014-05-30 Thread Sergey Lukjanov
Hey Roman,

will it be an alpha version that should not be used by other projects,
or will it be ready to use?

Thanks.

On Fri, May 30, 2014 at 6:36 PM, Roman Podoliaka
rpodoly...@mirantis.com wrote:
 Hi Matt,

 We're waiting for a few important fixes to be merged (usage of
 oslo.config, eventlet tpool support). Once those are merged, we'll cut
 the initial release.

 Thanks,
 Roman

 On Fri, May 30, 2014 at 5:19 PM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:


 On 4/25/2014 7:46 AM, Doug Hellmann wrote:

 On Fri, Apr 25, 2014 at 8:33 AM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:



 On 4/18/2014 1:18 PM, Doug Hellmann wrote:


 Nice work, Victor!

 I left a few comments on the commits that were made after the original
 history was exported from the incubator. There were a couple of small
 things to address before importing the library, and a couple that can
 wait until we have the normal code review system. I'd say just add new
 commits to fix the issues, rather than trying to amend the existing
 commits.

 We haven't really discussed how to communicate when we agree the new
 repository is ready to be imported, but it seems reasonable to use the
 patch in openstack-infra/config that will be used to do the import:
 https://review.openstack.org/#/c/78955/

 Doug

 On Fri, Apr 18, 2014 at 10:28 AM, Victor Sergeyev
 vserge...@mirantis.com wrote:


 Hello all,

 During Icehouse release cycle our team has been working on splitting of
 openstack common db code into a separate library blueprint [1]. At the
 moment the issues, mentioned in this bp and [2] are solved and we are
 moving
 forward to graduation of oslo.db. You can find the new oslo.db code at
 [3]

 So, before moving forward, I want to ask Oslo team to review oslo.db
 repository [3] and especially the commit, that allows the unit tests to
 pass
 [4].

 Thanks,
 Victor

 [1] https://blueprints.launchpad.net/oslo/+spec/oslo-db-lib
 [2] https://wiki.openstack.org/wiki/Oslo/GraduationStatus#oslo.db
 [3] https://github.com/malor/oslo.db
 [4]


 https://github.com/malor/oslo.db/commit/276f7570d7af4a7a62d0e1ffb4edf904cfbf0600



 I'm probably just late to the party, but simple question: why is it in
 the
 malor group in github rather than the openstack group, like
 oslo.messaging
 and oslo.rootwrap?  Is that temporary or will it be moved at some point?


 This is the copy of the code being prepared to import into a new
 oslo.db repository. It's easier to set up that temporary hosting on
 github. The repo has been approved to be imported, and after that
 happens it will be hosted on our git server like all of the other oslo
 libraries.

 Doug


 --

 Thanks,

 Matt Riedemann





 Are there any status updates on where we are with this [1]?  I see that
 oslo.db is in git.openstack.org now [2].  There is a super-alpha dev package
 on pypi [3], are we waiting for an official release?

 I'd like to start moving nova over to using oslo.db or at least get an idea
 for how much work it's going to be.  I don't imagine it's going to be that
 difficult since I think a lot of the oslo.db code originated in nova.

 [1] https://review.openstack.org/#/c/91407/
 [2] http://git.openstack.org/cgit/openstack/oslo.db/
 [3] https://pypi.python.org/pypi/oslo.db/0.0.1.dev15.g7efbf12


 --

 Thanks,

 Matt Riedemann





-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



Re: [openstack-dev] [FUEL][Design session 5.1] 28/05/2014 meeting minutes

2014-05-30 Thread Lukasz Oles
June 5th?


On Fri, May 30, 2014 at 4:33 PM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

  Guys, we are going to have a more extended FUEL Library design meeting in
  the IRC channel during the regular FUEL meeting on May 5th. So feel free
  to add blueprints to the meeting agenda and we will consider adding them
  to the 5.1 Roadmap.


 On Wed, May 28, 2014 at 7:41 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Hey, folks

 We did a meeting today regarding 5.1-targeted blueprints and design.

 Here is the document with the results:

 https://etherpad.openstack.org/p/fuel-library-5.1-design-session

 Obviously, we need several additional meetings to build up roadmap for
 5.1, but I think this was a really good start. Thank you all.

 We will continue to work on this during this and next working week. Hope
 to see you all on weekly IRC meeting tomorrow. Feel free to propose your
 blueprints and ideas for 5.1 release.
 https://wiki.openstack.org/wiki/Meetings/Fuel

 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com





-- 
Łukasz Oleś


[openstack-dev] [sahara] team meeting minutes May 29

2014-05-30 Thread Sergey Lukjanov
Thanks to everyone who joined the Sahara meeting.

Here are the logs from the meeting:

Minutes: 
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-05-29-18.01.html
Log: 
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-05-29-18.01.log.html

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



Re: [openstack-dev] [Neutron][LBaaS] dealing with M:N relashionships for Pools and Listeners

2014-05-30 Thread Brandon Logan
Hi Sam and Stephen,

On Thu, 2014-05-29 at 21:10 -0700, Stephen Balukoff wrote:
 Hi Sam,
 
 
 Here are my thoughts on this:
 
 
 On Thu, May 29, 2014 at 12:32 PM, Samuel Bercovici
 samu...@radware.com wrote:
 Before solving everything, I would like first to itemize the
 things we should solve/consider.
 
  So please focus first on what it is that we need to pay
  attention to, and less on how to solve such issues.
 
  
 
 Follows the list of items:
 
 ·Provisioning status/state
 
 o  Should it only be on the loadbalancer?
 
 
 That makes sense to me, unless we're also using this field to indicate
 a pending change from an update. Pending changes here should
 be reflected in the listener state.
I agree with this as well.
 
 
  
 o  Do we need a more granular status per logical child object?
 
 
 
 
 Than what? I'm not sure I understand what you're saying here. Can you
 give a couple examples?
  
 o  Update process
 
 § What happens when a logical child object is modified?
 
 
 Any affected parent objects should show a 'PENDING_UPDATE' status or
 something similar until the change is pushed to them. 
  
 § Where can a user check the success of the update?
 
 
 
 
 Depending on the object... either the status of the child object
 itself or all of its affected parent(s). Since we're allowing reusing
 of the pool object, when getting the status of a pool, maybe it makes
 sense to produce a list showing the status of all the pool's members,
 as well as the update status of all the listeners using the pool?

This is confusing to me.  Will there be a separate provisioning status
field on the loadbalancer and just a generic status on the child
objects?  I get the idea of a pool having a status that reflects the
state of all of its members.  Is that what you mean by status of a child
object?
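A minimal sketch of the aggregation being discussed, assuming invented status names (the actual Neutron LBaaS states were still under debate in this thread): a pool's operational status derived from the statuses of its members.

```python
# Hedged sketch: derive a pool's status from its members' statuses.
# The status names here are illustrative, not the settled LBaaS states.

def pool_status(member_statuses):
    """member_statuses: list of per-member status strings."""
    if not member_statuses:
        return "NO_MEMBERS"
    if all(s == "ACTIVE" for s in member_statuses):
        return "ACTIVE"
    if all(s == "DOWN" for s in member_statuses):
        return "DOWN"
    # Some members up, some down: the pool still serves traffic.
    return "DEGRADED"
```

A GET on the pool could then report this derived value alongside the raw per-member statuses, which is one way to answer "where does a user check" without a separate status field on every child object.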

  
 ·Operation status/state – this refers to information
 returning from the load balancing back end / driver
 
 o  How is the status of a member that failed its health monitor
 reflected, on which LBaaS object, and how can a user understand the
 failure?
 
 
 Assuming you're not talking about an alert which would be generated by
 a back-end load balancer and get routed to some notification system...
 I think you should be able to get the status of a member by just
 checking the member status directly (ie.  GET /members/[UUID]) or, if
 people like my suggestion above, by checking the status of the pool to
 which the member belongs (ie. GET /pools/[UUID]).
  
 
 ·Administrator state management 
 
 o  How does a change in admin_state on a member, pool, or listener
 get managed?
 
 
 I'm thinking that disabling members, pools, and listeners should
 propagate to all parent objects. (For example, disabling a member
 should propagate to all affected pools and listeners, which
 essentially pulls the member out of rotation for all load balancers
 but otherwise leaves listeners and pools up and running. This is
 probably what the user is trying to accomplish by disabling the
 member.)

Are you saying that, in this case, if a member is disabled and all members
are disabled, then the parent pool's status is disabled, which would in
turn disable the listener?

 
 I do not think it makes sense to propagate to all child objects. For
 example, disabling a listener should not disable all the pools it
 references.
 
 And by 'propagate' here I mean that config changes are pushed to all
 affected listeners and load balancers-- not that we actually update
 all parents to be 'ADMIN_STATUS_DOWN' or something. Does this make
 sense to you?

Propagating to child objects would be bad with shared listeners and
pools.

  
 o  Do we expect a change in the operation state to reflect
 this?
 
 
 Yes.
  
 ·Statistics consumption
 
 
 I actually started another thread on this to get some operator and
 user requirements here, but so far nobody has replied.  FWIW, I'm
 leaning toward having a RESTful interface for statistics that's
 separate from the main configuration interface tree and has implied
 context depending on how it's used.
 
 
 For example, if you want to see the stats for a particular member of a
 particular pool of a particular listener on a particular load
 balancer, you'd GET something like the following:
 
 
 GET 
 /stats/loadbalancer/LB_UUID/listener/LISTENER_UUID/pool/POOL_UUID/member/MEMBER_UUID
 
 
 ...which would give you just the stats for that member in that
 context.
 
 I think we might also want to support getting overall stats for a
 single logical object. So for example:
 
 GET /stats/member/MEMBER_UUID
 
 
 ...would get you total stats for that member, 
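The two URL forms proposed above can be sketched as simple path builders. This is illustrative only: the paths follow the proposal in this thread, not any settled LBaaS API.

```python
# Illustrative path builders for the two stats URL forms proposed above:
# a fully-qualified context path, and a single-object shortcut.

def stats_path_in_context(lb_uuid, listener_uuid, pool_uuid, member_uuid):
    """Stats for a member in the context of a specific load balancer,
    listener, and pool."""
    return ("/stats/loadbalancer/%s/listener/%s/pool/%s/member/%s"
            % (lb_uuid, listener_uuid, pool_uuid, member_uuid))

def stats_path_for_member(member_uuid):
    """Overall stats for a single logical member, across all contexts."""
    return "/stats/member/%s" % member_uuid
```

The context form matters because a member object can be shared between pools and listeners, so the same member can have different counters depending on which path it is reached through.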

Re: [openstack-dev] [oslo][db] oslo.db repository review request

2014-05-30 Thread Doug Hellmann
On Fri, May 30, 2014 at 11:06 AM, Sergey Lukjanov
slukja...@mirantis.com wrote:
 Hey Roman,

 will it be the alpha version that should not be used by other projects
 or it'll be ready to use?

The current plan is to do alpha releases of oslo libraries during this
cycle, with a final official release at the end. We're close to
finishing the infra work we need to make that possible.

Doug



 Thanks.

 On Fri, May 30, 2014 at 6:36 PM, Roman Podoliaka
 rpodoly...@mirantis.com wrote:
 Hi Matt,

 We're waiting for a few important fixes to be merged (usage of
 oslo.config, eventlet tpool support). Once those are merged, we'll cut
 the initial release.

 Thanks,
 Roman

 On Fri, May 30, 2014 at 5:19 PM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:


 On 4/25/2014 7:46 AM, Doug Hellmann wrote:

 On Fri, Apr 25, 2014 at 8:33 AM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:



 On 4/18/2014 1:18 PM, Doug Hellmann wrote:


 Nice work, Victor!

 I left a few comments on the commits that were made after the original
 history was exported from the incubator. There were a couple of small
 things to address before importing the library, and a couple that can
 wait until we have the normal code review system. I'd say just add new
 commits to fix the issues, rather than trying to amend the existing
 commits.

 We haven't really discussed how to communicate when we agree the new
 repository is ready to be imported, but it seems reasonable to use the
 patch in openstack-infra/config that will be used to do the import:
 https://review.openstack.org/#/c/78955/

 Doug

 On Fri, Apr 18, 2014 at 10:28 AM, Victor Sergeyev
 vserge...@mirantis.com wrote:


 Hello all,

 During Icehouse release cycle our team has been working on splitting of
 openstack common db code into a separate library blueprint [1]. At the
 moment the issues, mentioned in this bp and [2] are solved and we are
 moving
 forward to graduation of oslo.db. You can find the new oslo.db code at
 [3]

 So, before moving forward, I want to ask Oslo team to review oslo.db
 repository [3] and especially the commit, that allows the unit tests to
 pass
 [4].

 Thanks,
 Victor

 [1] https://blueprints.launchpad.net/oslo/+spec/oslo-db-lib
 [2] https://wiki.openstack.org/wiki/Oslo/GraduationStatus#oslo.db
 [3] https://github.com/malor/oslo.db
 [4]


 https://github.com/malor/oslo.db/commit/276f7570d7af4a7a62d0e1ffb4edf904cfbf0600



 I'm probably just late to the party, but simple question: why is it in
 the
 malor group in github rather than the openstack group, like
 oslo.messaging
 and oslo.rootwrap?  Is that temporary or will it be moved at some point?


 This is the copy of the code being prepared to import into a new
 oslo.db repository. It's easier to set up that temporary hosting on
 github. The repo has been approved to be imported, and after that
 happens it will be hosted on our git server like all of the other oslo
 libraries.

 Doug


 --

 Thanks,

 Matt Riedemann



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 Are there any status updates on where we are with this [1]?  I see that
 oslo.db is in git.openstack.org now [2].  There is a super-alpha dev package
 on pypi [3], are we waiting for an official release?

 I'd like to start moving nova over to using oslo.db or at least get an idea
 for how much work it's going to be.  I don't imagine it's going to be that
 difficult since I think a lot of the oslo.db code originated in nova.

 [1] https://review.openstack.org/#/c/91407/
 [2] http://git.openstack.org/cgit/openstack/oslo.db/
 [3] https://pypi.python.org/pypi/oslo.db/0.0.1.dev15.g7efbf12


 --

 Thanks,

 Matt Riedemann


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [oslo][db] oslo.db repository review request

2014-05-30 Thread Sergey Lukjanov
So, does this mean we'll be able to migrate to the oslo.db lib at the end
of Juno, or early in K?

On Fri, May 30, 2014 at 7:30 PM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:
 On Fri, May 30, 2014 at 11:06 AM, Sergey Lukjanov
 slukja...@mirantis.com wrote:
 Hey Roman,

 will it be the alpha version that should not be used by other projects
 or it'll be ready to use?

 The current plan is to do alpha releases of oslo libraries during this
 cycle, with a final official release at the end. We're close to
 finishing the infra work we need to make that possible.

 Doug



 Thanks.

 On Fri, May 30, 2014 at 6:36 PM, Roman Podoliaka
 rpodoly...@mirantis.com wrote:
 Hi Matt,

 We're waiting for a few important fixes to be merged (usage of
 oslo.config, eventlet tpool support). Once those are merged, we'll cut
 the initial release.

 Thanks,
 Roman

 On Fri, May 30, 2014 at 5:19 PM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:


 On 4/25/2014 7:46 AM, Doug Hellmann wrote:

 On Fri, Apr 25, 2014 at 8:33 AM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:



 On 4/18/2014 1:18 PM, Doug Hellmann wrote:


 Nice work, Victor!

 I left a few comments on the commits that were made after the original
 history was exported from the incubator. There were a couple of small
 things to address before importing the library, and a couple that can
 wait until we have the normal code review system. I'd say just add new
 commits to fix the issues, rather than trying to amend the existing
 commits.

 We haven't really discussed how to communicate when we agree the new
 repository is ready to be imported, but it seems reasonable to use the
 patch in openstack-infra/config that will be used to do the import:
 https://review.openstack.org/#/c/78955/

 Doug

 On Fri, Apr 18, 2014 at 10:28 AM, Victor Sergeyev
 vserge...@mirantis.com wrote:


 Hello all,

 During Icehouse release cycle our team has been working on splitting of
 openstack common db code into a separate library blueprint [1]. At the
 moment the issues, mentioned in this bp and [2] are solved and we are
 moving
 forward to graduation of oslo.db. You can find the new oslo.db code at
 [3]

 So, before moving forward, I want to ask Oslo team to review oslo.db
 repository [3] and especially the commit, that allows the unit tests to
 pass
 [4].

 Thanks,
 Victor

 [1] https://blueprints.launchpad.net/oslo/+spec/oslo-db-lib
 [2] https://wiki.openstack.org/wiki/Oslo/GraduationStatus#oslo.db
 [3] https://github.com/malor/oslo.db
 [4]


 https://github.com/malor/oslo.db/commit/276f7570d7af4a7a62d0e1ffb4edf904cfbf0600

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 I'm probably just late to the party, but simple question: why is it in
 the
 malor group in github rather than the openstack group, like
 oslo.messaging
 and oslo.rootwrap?  Is that temporary or will it be moved at some point?


 This is the copy of the code being prepared to import into a new
 oslo.db repository. It's easier to set up that temporary hosting on
 github. The repo has been approved to be imported, and after that
 happens it will be hosted on our git server like all of the other oslo
 libraries.

 Doug


 --

 Thanks,

 Matt Riedemann







 Are there any status updates on where we are with this [1]?  I see that
 oslo.db is in git.openstack.org now [2].  There is a super-alpha dev 
 package
 on pypi [3], are we waiting for an official release?

 I'd like to start moving nova over to using oslo.db or at least get an idea
 for how much work it's going to be.  I don't imagine it's going to be that
 difficult since I think a lot of the oslo.db code originated in nova.

 [1] https://review.openstack.org/#/c/91407/
 [2] http://git.openstack.org/cgit/openstack/oslo.db/
 [3] https://pypi.python.org/pypi/oslo.db/0.0.1.dev15.g7efbf12


 --

 Thanks,

 Matt Riedemann





 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis 

Re: [openstack-dev] [oslo][db] oslo.db repository review request

2014-05-30 Thread Roman Podoliaka
Hi Sergey,

tl;dr

I'd like it to be a ready-to-use version, but not 1.0.0.

So it's a good question and I'd like to hear more input on this from all.

If we start from 1.0.0, this will mean that we'll be very limited in
terms of the changes to the public API we can make without bumping the
MAJOR part of the version number. I don't expect the number of those
changes to be big, but I also don't want us to end up in a situation
where we have oslo.db 3.0.0 in a few months (if we follow semver
strictly).

Perhaps we should stick to 0.MINOR.PATCH versioning for now (as e.g. the
SQLAlchemy and TripleO projects do)? These won't be alphas, but rather
ready-to-use versions. And we would still have a bit more 'freedom' to
make small API changes by bumping the MINOR part of the version number (we
could also do intermediate releases deprecating some stuff, so that we
don't break people's projects every time we make an API change).

Thanks,
Roman

On Fri, May 30, 2014 at 6:06 PM, Sergey Lukjanov slukja...@mirantis.com wrote:
 Hey Roman,

 will it be the alpha version that should not be used by other projects
 or it'll be ready to use?

 Thanks.


[openstack-dev] [nova][vmware] iSCSI and managing cluster-wide config

2014-05-30 Thread Matthew Booth
The vmware driver doesn't currently use auth information passed to it by
cinder when attaching an iSCSI volume. I'm working on a patch to address
this.

Adding authentication to the existing code is relatively simple.
However, in going over the code I've noticed a problem.

The code assumes that the host has a software iSCSI HBA configured[1].
Simplifying slightly: it adds a new target to the HBA and maps this to
the VM as an RDM. The problem is that it only operates on the host
returned by vm_util.get_host_ref(). This is the first host returned by a
query, so it can be thought of as effectively random. This presents an
immediate problem, because the VM isn't guaranteed to run on this host.

I had hoped that the cluster might be able to move this config automatically
between software HBAs, but I can confirm that vMotion certainly does not
work if only one host's software HBA is configured with the target. I
haven't yet confirmed that it does work if both hosts are configured with
the target. I assume it does, because otherwise guests with iSCSI storage
would be very much second-class citizens in a cluster.

Assuming it does, we need to ensure that all hosts in the cluster are
configured with all iSCSI targets. Note that this isn't just at the time
the target is added, either. If a new host is added to the cluster, we
must ensure that all iSCSI targets present in the cluster are added to
it automatically. If we don't do this, at the very least DRS won't work.
Note that DRS can initiate vMotion at any time, so this can't be tied to
an api action. Do we currently do any kind of cluster maintenance along
these lines?
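As a purely illustrative sketch (the function name, host names, and
`(address, port)` target tuples below are invented for this email, not the
actual driver code), the cluster maintenance step being argued for reduces
to a reconciliation pass: compute which targets are missing from which
hosts, then add each missing pair.

```python
def missing_targets(cluster_hosts, desired_targets, configured):
    """Return (host, target) pairs that still need to be added, given
    configured[host] = set of targets already on that host's software HBA."""
    work = []
    for host in cluster_hosts:
        present = configured.get(host, set())  # a newly added host has none
        for target in sorted(desired_targets):
            if target not in present:
                work.append((host, target))
    return work

# Two-host cluster; "esx2" was just added and has nothing configured yet,
# so it must receive every target already present in the cluster.
todo = missing_targets(
    cluster_hosts=["esx1", "esx2"],
    desired_targets={("10.0.0.5", 3260)},
    configured={"esx1": {("10.0.0.5", 3260)}},
)
assert todo == [("esx2", ("10.0.0.5", 3260))]
```

Running this pass both when a target is added and when a host joins the
cluster would keep DRS-initiated vMotion safe, since it is not tied to any
single API action.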

This has also led me to think about vm_util.get_host_ref(). If the
host it returns is powered off, any attempt to manipulate it will fail.
This definitely breaks the current iSCSI code, but I suspect it breaks a
bunch of other things too. I can't help but feel that the API itself is
broken, and we should re-examine everywhere it is used to see whether we
should be doing something else instead.

Matt

[1] Incidentally, I had wanted to automatically add one of these if it
wasn't present, but after scouring the API I'm not convinced it's possible.
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490



Re: [openstack-dev] [oslo][db] oslo.db repository review request

2014-05-30 Thread Igor Kalnitsky
Hi guys,

+1 to Roman's suggestion.

I think we have to use 0.MINOR.PATCH for at least a few cycles.
API changes aren't a problem if we use a specific (frozen) version in
requirements.
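For illustration, freezing such a pre-1.0 library in a consumer's
requirements file might look like this (the version number here is
invented, not an actual oslo.db release):

```
# requirements.txt (hypothetical pin): freeze oslo.db so pre-1.0 API
# changes cannot break the consuming project until it chooses to upgrade
oslo.db==0.2.0
```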

Thanks,
Igor


On Fri, May 30, 2014 at 6:37 PM, Roman Podoliaka rpodoly...@mirantis.com
wrote:

 Hi Sergey,

 tl;dr

 I'd like to be a ready to use version, but not 1.0.0.

 So it's a good question and I'd like to hear more input on this from all.

 If we start from 1.0.0, this will mean that we'll be very limited in
 terms of changes to public API we can make without bumping the MAJOR
 part of the version number. I don't expect the number of those changes
 to be big, but I also don't want us to happen in a situation when we
 have oslo.db 3.0.0 in a few months (if we follow semver
 pragmatically).

 Perhaps, we should stick to 0.MINOR.PATCH versioning for now (as e.g.
 SQLAlchemy and TripleO projects do)? These won't be alphas, but rather
 ready to use versions. And we would still have a bit more 'freedom' to
 do small API changes bumping the MINOR part of the version number (we
 could also do intermediate releases deprecating some stuff, so we
 don't break people projects every time we make some API change).

 Thanks,
 Roman

 On Fri, May 30, 2014 at 6:06 PM, Sergey Lukjanov slukja...@mirantis.com
 wrote:
  Hey Roman,
 
  will it be the alpha version that should not be used by other projects
  or it'll be ready to use?
 
  Thanks.
 

Re: [openstack-dev] [nova] SR-IOV nova-specs

2014-05-30 Thread John Garbutt
Hey,

-2 has been removed; feel free to ping me on IRC if you need a quicker
turnaround. I've been traveling the last few days.

Thanks,
John

On 27 May 2014 19:21, Robert Li (baoli) ba...@cisco.com wrote:
 Hi John,

 Now that we have agreement during the summit on how to proceed in order to
 get it into Juno, please take a look at this:

 https://review.openstack.org/#/c/86606/16

 Please let us know your comments or what is still missing. I'm also not sure
 if your -2 needs to be removed before the other cores will take a look at
 it.

 thanks,
 Robert



Re: [openstack-dev] [Neutron][LBaaS] dealing with M:N relashionships for Pools and Listeners

2014-05-30 Thread Stephen Balukoff
Hi y'all!

Re-responses inline:


On Fri, May 30, 2014 at 8:25 AM, Brandon Logan brandon.lo...@rackspace.com
wrote:


  § Where can a user check the success of the update?
 
 
 
 
  Depending on the object... either the status of the child object
  itself or all of its affected parent(s). Since we're allowing reusing
  of the pool object, when getting the status of a pool, maybe it makes
  sense to produce a list showing the status of all the pool's members,
  as well as the update status of all the listeners using the pool?

 This is confusing to me.  Will there be a separate provisioning status
 field on the loadbalancer and just a generic status on the child
 objects?  I get the idea of a pool having a status the reflects the
 state of all of its members.  Is that what you mean by status of a child
 object?


It seems to me that we could use the 'generic status' field on the load
balancer to show provisioning status as well. :/  Is there a compelling
reason we couldn't do this? (Sam?)

And yes, I think that's what I mean with one addition. For example:

If I have Listener A and B which use pool X which has members M and N...
 if I set member 'M' to be 'ADMIN_STATE_DISABLED', then what I would expect
to see, if I ask for the status of pool X immediately after this change is:
* An array showing N is 'UP' and 'M' is in state 'ADMIN_STATE_DISABLED' and
* An array showing that listeners 'A' and 'B' are in 'PENDING_UPDATE' state
(or something similar).

I would also expect listeners 'A' and 'B' to go back to 'UP' state shortly
thereafter.

Does this make sense?

Note that there is a problem with my suggestion: What does the status of a
member mean when the member is referenced indirectly by several listeners?
 (For example, listener A could see member N as being UP, whereas listener
B could see member N as being DOWN.)  Should member statuses also be an
array from the perspective of each listener? (in other words, we'd have a
two-dimensional array here.)

If we do this then perhaps the right thing to do is just list the pool
members' statuses in context of the listeners.  In other words, if we're
reporting this way, then given the same scenario above, if we set member
'M' to be 'ADMIN_STATE_DISABLED', then asking for the status of pool X
immediately after this change is:
* (Possibly?) an array for each listener status showing them as
'PENDING_UPDATE'
* An array for member statuses which contain:
** An array which shows member N is 'UP' for listener 'A' and 'DOWN' for
listener 'B'
** An array which shows member M is 'PENDING_DISABLED' for both listener
'A' and 'B'

...and then shortly thereafter we would see member M's status for each
listener change to 'DISABLED' at the same time the listeners' statuses
change to 'UP'.

So... this second way of looking at it is less intuitive to me, though it
is probably more correct. Isn't object re-use fun?
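To make the two-dimensional idea above concrete, here is a minimal
hypothetical sketch — the function name, field names, and status strings
are invented for illustration, not a proposed API:

```python
def pool_status_report(members, listeners, member_status, admin_disabled):
    """Build a status view of a shared pool.

    member_status[(listener, member)] is the operational status ('UP'/'DOWN')
    of a member *as seen by that listener*; admin_disabled is the set of
    administratively disabled members, whose state is the same everywhere.
    """
    return {
        # Parent listeners briefly show PENDING_UPDATE while the change
        # propagates through the connected tree.
        "listeners": {l: "PENDING_UPDATE" for l in listeners},
        "members": {
            m: ({l: "PENDING_DISABLED" for l in listeners}
                if m in admin_disabled
                else {l: member_status[(l, m)] for l in listeners})
            for m in members
        },
    }

status = pool_status_report(
    members=["M", "N"],
    listeners=["A", "B"],
    member_status={("A", "N"): "UP", ("B", "N"): "DOWN"},
    admin_disabled={"M"},
)
# N's health legitimately differs per listener; M is disabled everywhere.
assert status["members"]["N"] == {"A": "UP", "B": "DOWN"}
assert status["members"]["M"] == {"A": "PENDING_DISABLED",
                                  "B": "PENDING_DISABLED"}
```

The per-listener inner dict is exactly the second dimension of the array
discussed above: an admin-state change is uniform across listeners, while
operational health is not.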


 
  ·Operation status/state – this refers to information
  returning from the load balancing back end / driver
 
  o  How is member status that failed health monitor reflected,
  on which LBaaS object and how can a user understand the
  failure?
 
 
  Assuming you're not talking about an alert which would be generated by
  a back-end load balancer and get routed to some notification system...
  I think you should be able to get the status of a member by just
  checking the member status directly (ie.  GET /members/[UUID]) or, if
  people like my suggestion above, by checking the status of the pool to
  which the member belongs (ie. GET /pools/[UUID]).
 
 
  ·Administrator state management
 
  o  How is a change in admin_state on member, pool, listener
  get managed
 
 
  I'm thinking that disabling members, pools, and listeners should
  propagate to all parent objects. (For example, disabling a member
  should propagate to all affected pools and listeners, which
  essentially pulls the member out of rotation for all load balancers
  but otherwise leaves listeners and pools up and running. This is
  probably what the user is trying to accomplish by disabling the
  member.)

 Are you saying that in this case, if a member is disabled and all members
 are disabled, then the parent pool's status is disabled, which would then
 in turn disable the listener?


No-- I mean that with object re-use, we have only one way to set the
admin_state for a shared object, and therefore disabling member 'M'
disables it for all connected pools and listeners. I specifically mean that
'admin status' changes of child objects do not affect the 'admin status' of
parent objects, though it will (briefly) affect the generic 'status' of the
parents as the admin state of the child gets propagated through the
connected tree.

Sorry, it's early... clear as mud still? :P



 
  I do not think it makes sense to propagate to all child objects. For
  example, disabling a listener should not disable all the pools it
  

Re: [openstack-dev] [Keystone] [Blazar] [Ironic] Py26/27 gates failing because of keystoneclient-0.9.0

2014-05-30 Thread Brant Knudson
The auth_token middleware changed recently[1] to check if tokens retrieved
from the cache are expired based on the expiration time in the token. The
unit tests for Blazar, Ceilometer, and Ironic are all using a copy-pasted
fake memcache implementation that's supposed to simulate what auth_token
stores in the cache, but the tokens that it had stored weren't valid.
Tokens have an expiration time in them and these ones didn't. I don't think
that it's safe for test code to make assumptions about how the auth_token
middleware is going to store data in its cache. The format of the cached
data isn't part of the public interface. It's changed before, when
expiration times changed from *nix timestamps to ISO 8601-formatted dates.

After looking at this, I proposed a couple of changes to the auth_token
middleware[2]. One is to have auth_token use the expiration time it has
cached and fail the auth request if the token is expired according to the
cache; it doesn't have to re-parse the token's own expiration time, because
the expiration was stored as part of the cache data. The other is to make
cached-token handling more efficient by not checking the token's expiration
time at all when the token came from the cache.
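A minimal toy sketch of the first proposed change — the `TokenCache` class
and its methods are invented here for illustration and are not
keystoneclient's actual cache API:

```python
import time

class TokenCache:
    """Toy sketch: store validated token data together with its expiration
    time, so expiry can be enforced straight from the cache."""

    def __init__(self):
        self._store = {}

    def set(self, token_id, data, expires_at):
        self._store[token_id] = (data, expires_at)

    def get(self, token_id, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(token_id)
        if entry is None:
            return None                 # miss: validate the token normally
        data, expires_at = entry
        if expires_at <= now:
            del self._store[token_id]
            return None                 # cached token expired: reject it
        return data                     # fresh hit: no re-parsing needed

cache = TokenCache()
cache.set("tok-1", {"user": "demo"}, expires_at=time.time() + 3600)
assert cache.get("tok-1") == {"user": "demo"}   # valid cached token
cache.set("tok-2", {"user": "demo"}, expires_at=time.time() - 1)
assert cache.get("tok-2") is None               # expired per cached data
```

Because the expiration rides along with the cached data, test fakes never
need to know the internal serialization format of the cached token.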

[1]
http://git.openstack.org/cgit/openstack/python-keystoneclient/commit/keystoneclient/middleware/auth_token.py?id=8574256f9342faeba2ce64080ab5190023524e0a
[2] https://review.openstack.org/#/c/96786/

- Brant



On Fri, May 30, 2014 at 7:11 AM, Sylvain Bauza sba...@redhat.com wrote:

  On 30/05/2014 14:07, Dina Belova wrote:

  I did not look closely at this particular issue, but in Ceilometer there
  is almost the same thing:
  https://bugs.launchpad.net/ceilometer/+bug/1324885, and fixes were already
  provided.

  Will this help Blazar?


 Got the Ironic patch as well :

 https://review.openstack.org/#/c/96576/1/ironic/tests/api/utils.py

 Will provide a patch against Blazar.

 Btw, I'll close the bug.


  -- Dina


 On Fri, May 30, 2014 at 4:00 PM, Sylvain Bauza sba...@redhat.com wrote:

 Hi Keystone developers,

  I just opened a bug [1] because Ironic and Blazar (ex-Climate) patches
  are failing due to a new release of the Keystone client which seems to
  regress on middleware auth.

  Do you have any ideas on whether it's quick to fix, or shall I provide a
  patch to openstack/global-requirements.txt to only accept keystoneclient
  < 0.9.0?

 Thanks,
 -Sylvain






  --

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.







Re: [openstack-dev] [keystone][federation] Coordination for Juno

2014-05-30 Thread Marco Fargetta
Hello,

I have just tried the new specs proposal. I hope it is not too bad (the
text could probably be better, but English is not my native language) :)


I have also included some of you involved with federations as reviewers.

Cheers,
Marco



On Tue, May 27, 2014 at 09:15:31AM -0500, Dolph Mathews wrote:
 On Tue, May 27, 2014 at 8:12 AM, Marco Fargetta
 marco.farge...@ct.infn.itwrote:
 
  On Tue, May 27, 2014 at 07:39:01AM -0500, Dolph Mathews wrote:
   On Tue, May 27, 2014 at 6:30 AM, Marco Fargetta
   marco.farge...@ct.infn.itwrote:
  
Hi All,
   
   
   • Federated Keystone and Horizon
   □ Completely open-ended, there isn't much an expectation that
  we
deliver
 this in Juno, but it's something we should start thinking
  about.
   □
   
I have just registered a new blueprint for this point:
   
https://blueprints.launchpad.net/keystone/+spec/saml-web-authn
   
Could you have a look and let me know if it make sense for the
  integration
with keystone
before I start with the code?
   
  
   That's a comparable blueprint to how we've written them historically, but
   you're about to be bitten by a change in process (sorry!).
  
   Starting with work landing in Juno milestone 2, we're going to start
   requiring that design work be done using the following template:
  
  
  https://github.com/openstack/keystone-specs/blob/master/specs/template.rst
  
   And proposed against the release during which the work is intended to
  ship,
   for example:
  
 https://github.com/openstack/keystone-specs/tree/master/specs/juno
  
 
  Therefore, this means that I have to write the specs using the template
  you sent and submit for review, correct?
 
 
 Yes - the impact of your proposal (un?)fortunately happens to be a great
 use case for the design detail required by the new template. Poke us in
 #openstack-keystone if you need any help getting a formal spec up!
 
 
 
 
 
   Since we're new to this as well, I'd also suggest referencing nova's
  -specs
   repo which has a head start on keystone's (and is where we're copying the
   overall process from):
  
 https://github.com/openstack/nova-specs
  
  
   
Cheers,
Marco
   
(NOTE: this is my first bp here so let me know if I miss something in
  the
process)
   
   
   
   
   
 
 
 
 
 



-- 

Eng. Marco Fargetta, PhD
 
Istituto Nazionale di Fisica Nucleare (INFN)
Catania, Italy

EMail: marco.farge...@ct.infn.it






[openstack-dev] [Nova] nova-compute deadlock

2014-05-30 Thread Qin Zhao
Hi all,

When I run Icehouse code, I encounter a strange problem: the nova-compute
service becomes stuck when I boot instances. I reported this bug in
https://bugs.launchpad.net/nova/+bug/1313477.

After thinking about it for several days, I feel I know the root cause. This
bug appears to be a deadlock caused by pipe fd leaking. I drew a diagram to
illustrate the problem:
https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960&h=720
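For anyone looking at the bug, here is a minimal standalone reproduction of
the underlying mechanism — not the actual nova/libguestfs code path, just a
sketch of the fd leak itself: a pipe's read end never sees EOF while any
leaked duplicate of the write end stays open in another process.

```python
import os
import select
import subprocess
import sys

def reader_sees_eof(read_fd, timeout=0.5):
    """True if reading read_fd would not block (EOF or data available)."""
    readable, _, _ = select.select([read_fd], [], [], timeout)
    return bool(readable)

# Simulate the leak: a long-running child inherits the pipe's write end.
r, w = os.pipe()
leaky = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(10)"],
                         pass_fds=(w,))
os.close(w)                      # the parent closes its copy of the write end...
leaked_eof = reader_sees_eof(r)  # ...but EOF never arrives: read() would hang

# The fix: don't let unrelated children inherit the fd (close_fds=True is the
# Python 3 default; C code would open the fd with O_CLOEXEC).
r2, w2 = os.pipe()
clean = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(10)"])
os.close(w2)
clean_eof = reader_sees_eof(r2)  # all write ends closed -> read() returns EOF

leaky.kill()
clean.kill()
print(leaked_eof, clean_eof)
```

The deadlock in the bug report follows the same shape: if a greened thread
blocks on read() of such a leaked pipe while holding a lock the writer
needs, neither side can make progress.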

However, I have not found a good solution to prevent this deadlock.
The problem involves the Python runtime, libguestfs, and eventlet, so the
situation is a little complicated. Is there any expert who can help me
look for a solution? I would appreciate your help!

-- 
Qin Zhao


Re: [openstack-dev] [All] Disabling Pushes of new Gerrit Draft Patchsets

2014-05-30 Thread Clark Boylan
On Wed, May 21, 2014 at 4:24 PM, Clark Boylan clark.boy...@gmail.com wrote:
 Hello everyone,

 Gerrit has long supported Draft patchsets, and the infra team has long
 recommended against using them as they are a source of bugs and
 confusion (see below for specific details if you are curious). The newer
 version of Gerrit that we recently upgraded to allows us to prevent
 people from pushing new Draft patchsets. We will take advantage of this
 and disable pushes of new Drafts on Friday May 30, 2014.

 The impact of this change should be small. You can use the Work in
 Progress state instead of Drafts for new patchsets. Any existing
 Draft patchsets will remain in a Draft state until they are published.

 Now for the fun details on why drafts are broken.

 * Drafts appear to be secure but they offer no security. This is bad
   for user expectations and may expose data that shouldn't be exposed.
 * Draft patchsets pushed after published patchsets confuse reviewers as
   they cannot vote with a value because the latest patchset is hidden.
 * Draft patchsets confuse the Gerrit event stream output making it
   difficult for automated tooling to do the correct thing with Drafts.
 * Child changes of Drafts will fail to merge without explanation.

 Let us know if you have any questions,

 Clark (on behalf of the infra team)

Heads up everyone, this is now in effect and pushes of new draft
patchsets have been disabled.

Thanks,
Clark



Re: [openstack-dev] [nova] SR-IOV nova-specs

2014-05-30 Thread Robert Li (baoli)
John, thanks for the review. I'm going to clarify the things you mentioned
in your comments, and upload a new version soon.

thanks,
Robert

On 5/30/14, 12:35 PM, John Garbutt j...@johngarbutt.com wrote:

Hey,

-2 has been removed, feel free to ping me in IRC if you need quicker
turn around, been traveling last few days.

Thanks,
John

On 27 May 2014 19:21, Robert Li (baoli) ba...@cisco.com wrote:
 Hi John,

 Now that we have agreement during the summit on how to proceed in order to
 get it into Juno, please take a look at this:

 https://review.openstack.org/#/c/86606/16

 Please let us know your comments or what is still missing. I'm also not sure
 if your -2 needs to be removed before the other cores will take a look at it.

 thanks,
 Robert




[openstack-dev] [Glance][TC] Glance Functional API and Cross-project API Consistency

2014-05-30 Thread Hemanth Makkapati
Hello All,
I'm writing to notify you of the approach the Glance community has decided to 
take for the functional API.  Also, I'm writing to solicit your feedback on 
this approach in the light of cross-project API consistency.

At the Atlanta Summit, the Glance team discussed introducing a functional API 
in Glance so as to be able to expose operations/actions that do not naturally 
fit into the CRUD style. A few approaches were proposed and discussed 
here: https://etherpad.openstack.org/p/glance-adding-functional-operations-to-api.
We have all converged on the approach of including 'action' and the action type 
in the URL. For instance, 'POST /images/{image_id}/actions/{action_type}'.

However, this is different from the way Nova does actions. Nova includes the 
action type in the payload. For instance, 'POST /servers/{server_id}/action 
{type: action_type, ...}'. At this point, we hit the cross-project API 
consistency issue mentioned 
here: https://etherpad.openstack.org/p/juno-cross-project-consistency-across-rest-apis
(under the heading 'How to act on resource - cloud perform on resources'). 
Though we are diverging from the way Nova does actions, and hence adding another 
source of cross-project API inconsistency, we have a few reasons to believe that 
Glance's way is helpful.

The reasons are as follows:
1. Discoverability of operations. It'll be easier to expose permitted actions 
through schemas or a JSON home document living at /images/{image_id}/actions/.
2. More conducive to rate limiting. It'll be easier to rate-limit actions in 
different ways if the action type is available in the URL.
3. Makes more sense for functional actions that don't require a request body 
(e.g., image deactivation).
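To make the contrast concrete, here is a small sketch of the two request shapes being compared (identifiers and payloads are invented for illustration; these are not real endpoints):

```python
# Sketch of the two action conventions (illustrative identifiers only).

def glance_style(image_id, action_type, body=None):
    # Glance proposal: the action type is part of the URL, and the body
    # is optional -- convenient for actions like image deactivation.
    return ("POST", "/images/%s/actions/%s" % (image_id, action_type),
            body or {})

def nova_style(server_id, action_type, **params):
    # Nova convention: one generic /action endpoint; the action type
    # travels in the JSON payload.
    return ("POST", "/servers/%s/action" % server_id,
            dict(params, type=action_type))

print(glance_style("abc123", "deactivate"))
# -> ('POST', '/images/abc123/actions/deactivate', {})
print(nova_style("srv42", "reboot", reboot_type="SOFT"))
# -> ('POST', '/servers/srv42/action', {'reboot_type': 'SOFT', 'type': 'reboot'})
```

Rate limiting and discoverability both become simple URL-prefix matches in the first form, which is the crux of reasons 1 and 2 above.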

At this point we are curious to see if the API conventions group believes this 
is a valid and reasonable approach.
Any feedback is much appreciated. Thank you!

Regards,
Hemanth Makkapati


Re: [openstack-dev] [Glance] [Heat] Glance Metadata Catalog for Capabilities and Tags

2014-05-30 Thread Zane Bitter

On 29/05/14 18:42, Tripp, Travis S wrote:

Hello everyone!

At the summit in Atlanta we demonstrated the “Graffiti” project
concepts.  We received very positive feedback from members of multiple
dev projects as well as numerous operators.  We were specifically asked
multiple times about getting the Graffiti metadata catalog concepts into
Glance so that we can start to officially support the ideas we
demonstrated in Horizon.

After a number of additional meetings at the summit and working through
ideas the past week, we’ve created the initial proposal for adding a
Metadata Catalog to Glance for capabilities and tags.  This is distinct
from the “Artifact Catalog”, but we do see that capability and tag
catalog can be used with the artifact catalog.

We’ve detailed our initial proposal in the following Google Doc.  Mark
Washenberger agreed that this was a good place to capture the initial
proposal and we can later move it over to the Glance spec repo which
will be integrated with Launchpad blueprints soon.

https://docs.google.com/document/d/1cS2tJZrj748ZsttAabdHJDzkbU9nML5S4oFktFNNd68

Please take a look and let’s discuss!

Also, the following video is a brief recap of what was demo'd at the
summit.  It should help to set a lot of understanding behind the ideas
in the proposal.

https://www.youtube.com/watch?v=Dhrthnq1bnw

Thank you!

Travis Tripp (HP)

Murali Sundar (Intel)

*A Few Related Blueprints *

https://blueprints.launchpad.net/horizon/+spec/instance-launch-using-capability-filtering

https://blueprints.launchpad.net/horizon/+spec/tagging

https://blueprints.launchpad.net/horizon/+spec/faceted-search

https://blueprints.launchpad.net/horizon/+spec/host-aggregate-update-metadata

https://blueprints.launchpad.net/python-cinderclient/+spec/support-volume-image-metadata


+1, this is something that will be increasingly important to 
orchestration. The folks working on the TOSCA (and others) to HOT 
translator project might be able to comment in more detail, but 
basically as people start wanting to write templates that run on 
multiple clouds (potentially even non-OpenStack clouds), some sort of 
catalog for capabilities will become crucial.


cheers,
Zane.



Re: [openstack-dev] [Glance] [Heat] Glance Metadata Catalog for Capabilities and Tags

2014-05-30 Thread Georgy Okrokvertskhov
I think this is a great feature to have in Glance. A tagging mechanism for
objects which are not owned by Glance is complementary to the artifact
catalog/repository in Glance. As long as we keep tags and artifact metadata
close to each other, the end user will be able to use them seamlessly.
Artifacts can also use tags to find objects outside of the artifact repository,
which is always good to have.
In the Murano project we use Glance tags to find the correct images which are
required by specific applications. It would be great to extend this to other
objects like networks, routers and flavors, so that an application writer can
specify which kinds of objects are required for his application.

Thanks,
Georgy


On Fri, May 30, 2014 at 11:45 AM, Zane Bitter zbit...@redhat.com wrote:

 On 29/05/14 18:42, Tripp, Travis S wrote:

 Hello everyone!

 At the summit in Atlanta we demonstrated the “Graffiti” project
 concepts.  We received very positive feedback from members of multiple
 dev projects as well as numerous operators.  We were specifically asked
 multiple times about getting the Graffiti metadata catalog concepts into
 Glance so that we can start to officially support the ideas we
 demonstrated in Horizon.

 After a number of additional meetings at the summit and working through
 ideas the past week, we’ve created the initial proposal for adding a
 Metadata Catalog to Glance for capabilities and tags.  This is distinct
 from the “Artifact Catalog”, but we do see that capability and tag
 catalog can be used with the artifact catalog.

 We’ve detailed our initial proposal in the following Google Doc.  Mark
 Washenberger agreed that this was a good place to capture the initial
 proposal and we can later move it over to the Glance spec repo which
 will be integrated with Launchpad blueprints soon.

 https://docs.google.com/document/d/1cS2tJZrj748ZsttAabdHJDzkbU9nM
 L5S4oFktFNNd68

 Please take a look and let’s discuss!

 Also, the following video is a brief recap of what was demo'd at the
 summit.  It should help to set a lot of understanding behind the ideas
 in the proposal.

 https://www.youtube.com/watch?v=Dhrthnq1bnw

 Thank you!

 Travis Tripp (HP)

 Murali Sundar (Intel)

 *A Few Related Blueprints *


 https://blueprints.launchpad.net/horizon/+spec/instance-
 launch-using-capability-filtering

 https://blueprints.launchpad.net/horizon/+spec/tagging

 https://blueprints.launchpad.net/horizon/+spec/faceted-search

 https://blueprints.launchpad.net/horizon/+spec/host-
 aggregate-update-metadata

 https://blueprints.launchpad.net/python-cinderclient/+spec/
 support-volume-image-metadata


 +1, this is something that will be increasingly important to
 orchestration. The folks working on the TOSCA (and others) - HOT
 translator project might be able to comment in more detail, but basically
 as people start wanting to write templates that run on multiple clouds
 (potentially even non-OpenStack clouds) some sort of catalog for
 capabilities will become crucial.

 cheers,
 Zane.





-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284


Re: [openstack-dev] [Glance] Announcing glance-specs repo

2014-05-30 Thread Arnaud Legendre
Greetings! 

The glance-specs repository is now open! 

- If you have a blueprint that is targeted for Juno-1 and Approved: you do not 
need to create a spec. Only two blueprints seem to fall into this category. See 
[1] for more details. 
- If you have a blueprint in Launchpad (approved or not, targeted or not): 
please create a spec and add a reference to the Launchpad blueprint in the 
spec. 
- If you want to submit a new blueprint: please create a spec. You do not need 
to create a Launchpad blueprint. The LP blueprint will be automatically 
created for you when the spec is approved. 

All the information to create the specs is in the README [2] and the template 
[3]. The spec needs to be created in the juno folder (path: root/specs/juno). 
Feel free to modify the template if you think something is not correct. 

If you have a problem or concern: please ping us (markwash, rosmaita, myself). 

Thank you! 
Arnaud 

[1] https://launchpad.net/glance/+milestone/juno-1 
[2] http://git.openstack.org/cgit/openstack/glance-specs/tree/README.rst 
[3] 
http://git.openstack.org/cgit/openstack/glance-specs/tree/specs/template.rst 

- Original Message -

From: Mark Washenberger mark.washenber...@markwash.net 
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org 
Sent: Friday, April 25, 2014 2:42:09 PM 
Subject: [openstack-dev] [Glance] Announcing glance-specs repo 

Hey hey glancy glance, 

Recently glance drivers made a somewhat snap decision to adopt the -specs 
gerrit repository approach for new blueprints. 

Pursuant to that, Arnaud has been kind enough to put forward some infra patches 
to set things up. After the patches to create the repo [1] and enable tests [2] 
land, we will need one more patch to add the base framework to the glance-specs 
repo, so there is a bit of time needed before people will be able to submit 
their specs. 

I'd like to see us use this system for Juno blueprints. I think it would also 
be very helpful if any blueprints being discussed at the design summit could 
adopt this format in time for review prior to the summit (which is just over 
two weeks away). I understand that this is all a bit late in the game to make 
such requirements, so obviously we'll try to be very understanding of any 
difficulties. 

Additionally, if any glance folks have serious reservations about adopting the 
glance-specs repo, please speak up now. 

Thanks again to Arnaud for spearheading this effort. And thanks to the Nova 
crew for paving a nice path for us to follow. 

Cheers, 
markwash 


[1] - https://review.openstack.org/#/c/90461/ 
[2] - https://review.openstack.org/#/c/90469/ 



Re: [openstack-dev] [TripleO] Adding Tuskar to weekly IRC meetings agenda

2014-05-30 Thread James Polley


 On 30 May 2014, at 8:13 pm, Jaromir Coufal jcou...@redhat.com wrote:
 
 Hi All,
 
 I would like to propose adding Tuskar as a permanent topic to the agenda for 
 our weekly IRC meetings. It is an official TripleO project, there is quite a 
 lot happening around it, and we are targeting Juno to have something solid. 
 So I think that it is important for us to regularly keep track of what is 
 going on there.
 

Sounds good to me.

What do you think we would talk about under this topic? I'm thinking that a 
brief summary of changes since last week, and any blockers Tuskar is seeing 
from the broader project, would be a good start?

 -- Jarda
 


Re: [openstack-dev] [All] Disabling Pushes of new Gerrit Draft Patchsets

2014-05-30 Thread Sergey Lukjanov
Yay!

No more weird CR chains.

On Fri, May 30, 2014 at 9:32 PM, Clark Boylan clark.boy...@gmail.com wrote:
 On Wed, May 21, 2014 at 4:24 PM, Clark Boylan clark.boy...@gmail.com wrote:
 Hello everyone,

 Gerrit has long supported Draft patchsets, and the infra team has long
 recommended against using them as they are a source of bugs and
 confusion (see below for specific details if you are curious). The newer
 version of Gerrit that we recently upgraded to allows us to prevent
 people from pushing new Draft patchsets. We will take advantage of this
 and disable pushes of new Drafts on Friday May 30, 2014.

 The impact of this change should be small. You can use the Work in
 Progress state instead of Drafts for new patchsets. Any existing
 Draft patchsets will remain in a Draft state until they are published.

 Now for the fun details on why drafts are broken.

 * Drafts appear to be secure but they offer no security. This is bad
   for user expectations and may expose data that shouldn't be exposed.
 * Draft patchsets pushed after published patchsets confuse reviewers as
   they cannot vote with a value because the latest patchset is hidden.
 * Draft patchsets confuse the Gerrit event stream output making it
   difficult for automated tooling to do the correct thing with Drafts.
 * Child changes of Drafts will fail to merge without explanation.

 Let us know if you have any questions,

 Clark (on behalf of the infra team)

 Heads up everyone, this is now in effect and pushes of new draft
 patchsets have been disabled.

 Thanks,
 Clark




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



Re: [openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

2014-05-30 Thread Eugene Nikanorov
Hi Carl,

The idea of in-memory storage was discussed for a similar problem, but it might
not work for multi-server deployments.
Some hybrid approach, though, may be used, I think.

Thanks,
Eugene.


On Fri, May 30, 2014 at 8:53 PM, Carl Baldwin c...@ecbaldwin.net wrote:

 This is very similar to IPAM...  There is a space of possible ids or
 addresses that can grow very large.  We need to track the allocation
 of individual ids or addresses from that space and be able to quickly
 come up with a new allocations and recycle old ones.  I've had this in
 the back of my mind for a week or two now.

 A similar problem came up when the database would get populated with
 the entire free space worth of ip addresses to reflect the
 availability of all of the individual addresses.  With a large space
 (like an ip4 /8 or practically any ip6 subnet) this would take a very
 long time or never finish.

 Neutron was a little smarter about this.  It compressed availability
 in to availability ranges in a separate table.  This solved the
 original problem but is not problem free.  It turns out that writing
 database operations to manipulate both the allocations table and the
 availability table atomically is very difficult and ends up being very
 slow and has caused us some grief.  The free space also gets
 fragmented which degrades performance.  This is what led me --
 somewhat reluctantly -- to change how IPs get recycled back in to the
 free pool which hasn't been very popular.

 I wonder if we can discuss a good pattern for handling allocations
 where the free space can grow very large.  We could use the pattern
 for the allocation of both IP addresses, VXlan ids, and other similar
 resource spaces.
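A toy, in-memory version of the availability-ranges idea (data structure only — it deliberately ignores the hard part mentioned above, namely keeping the allocations and availability tables consistent atomically in a database):

```python
# In-memory sketch of the "availability ranges" pattern: free space is
# stored as (start, end) ranges instead of one row per id, so a huge
# space (e.g. 16M VXLAN ids) needs only a handful of records.

class RangeAllocator:
    def __init__(self, start, end):
        self.free = [(start, end)]          # sorted, disjoint ranges

    def allocate(self):
        if not self.free:
            raise RuntimeError("pool exhausted")
        start, end = self.free[0]
        if start == end:
            self.free.pop(0)                # range used up entirely
        else:
            self.free[0] = (start + 1, end)  # shrink the range by one
        return start

    def release(self, value):
        # Naive recycle: add a single-id range, then merge neighbours.
        self.free.append((value, value))
        self.free.sort()
        merged = [self.free[0]]
        for s, e in self.free[1:]:
            ls, le = merged[-1]
            if s <= le + 1:
                merged[-1] = (ls, max(le, e))
            else:
                merged.append((s, e))
        self.free = merged

pool = RangeAllocator(1, 16_000_000)   # instant, unlike 16M row inserts
a, b = pool.allocate(), pool.allocate()
pool.release(a)
print(pool.free)   # -> [(1, 1), (3, 16000000)]
```

The appeal is that a 16M-id space is one record rather than 16M; the cost is exactly the range-splitting and merging bookkeeping that is painful to do atomically in SQL.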

 For IPAM, I have been entertaining the idea of creating an allocation
 agent that would manage the availability of IPs in memory rather than
 in the database.  I hesitate, because that brings up a whole new set
 of complications.  I'm sure there are other potential solutions that I
 haven't yet considered.

 The L3 subteam is currently working on a pluggable IPAM model.  Once
 the initial framework for this is done, we can more easily play around
 with changing the underlying IPAM implementation.

 Thoughts?

 Carl

 On Thu, May 29, 2014 at 4:01 AM, Xurong Yang ido...@gmail.com wrote:
  Hi, Folks,
 
  When we configure the VXLAN range [1, 16M], the neutron-server service takes
  a long time to initialize and the CPU rate is very high (100%). One test
  based on PostgreSQL has been verified: more than 1h when the VXLAN range is
  [1, 1M].
 
  So, is there any good solution to this performance issue?
 
  Thanks,
  Xurong Yang
 
 
 
 




Re: [openstack-dev] [oslo][db] oslo.db repository review request

2014-05-30 Thread Doug Hellmann
No matter what version number we use, we will have to be careful about
API changes. We cannot have 2 versions of the same library installed
at the same time, so in order for devstack to work (and therefore the
gate), we will have to make all changes backwards-compatible and
support older APIs until our projects have migrated to the new APIs.

This is *exactly* why we have the Oslo incubator. It gives us a place
to work out stable APIs in a way that does not restrict when and how
updates can be made, since the syncs can be handled by projects at
their own pace.

Doug

On Fri, May 30, 2014 at 11:37 AM, Roman Podoliaka
rpodoly...@mirantis.com wrote:
 Hi Sergey,

 tl;dr

 I'd like it to be a ready-to-use version, but not 1.0.0.

 So it's a good question and I'd like to hear more input on this from all.

 If we start from 1.0.0, this will mean that we'll be very limited in
 terms of the changes to the public API we can make without bumping the
 MAJOR part of the version number. I don't expect the number of those
 changes to be big, but I also don't want us to end up in a situation
 where we have oslo.db 3.0.0 in a few months (if we follow semver
 pragmatically).

 Perhaps we should stick to 0.MINOR.PATCH versioning for now (as e.g.
 the SQLAlchemy and TripleO projects do)? These won't be alphas, but rather
 ready-to-use versions. And we would still have a bit more 'freedom' to
 make small API changes by bumping the MINOR part of the version number (we
 could also do intermediate releases deprecating some stuff, so we
 don't break people's projects every time we make an API change).

 Thanks,
 Roman

 On Fri, May 30, 2014 at 6:06 PM, Sergey Lukjanov slukja...@mirantis.com 
 wrote:
 Hey Roman,

 will it be the alpha version that should not be used by other projects
 or it'll be ready to use?

 Thanks.

 On Fri, May 30, 2014 at 6:36 PM, Roman Podoliaka
 rpodoly...@mirantis.com wrote:
 Hi Matt,

 We're waiting for a few important fixes to be merged (usage of
 oslo.config, eventlet tpool support). Once those are merged, we'll cut
 the initial release.

 Thanks,
 Roman

 On Fri, May 30, 2014 at 5:19 PM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:


 On 4/25/2014 7:46 AM, Doug Hellmann wrote:

 On Fri, Apr 25, 2014 at 8:33 AM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:



 On 4/18/2014 1:18 PM, Doug Hellmann wrote:


 Nice work, Victor!

 I left a few comments on the commits that were made after the original
 history was exported from the incubator. There were a couple of small
 things to address before importing the library, and a couple that can
 wait until we have the normal code review system. I'd say just add new
 commits to fix the issues, rather than trying to amend the existing
 commits.

 We haven't really discussed how to communicate when we agree the new
 repository is ready to be imported, but it seems reasonable to use the
 patch in openstack-infra/config that will be used to do the import:
 https://review.openstack.org/#/c/78955/

 Doug

 On Fri, Apr 18, 2014 at 10:28 AM, Victor Sergeyev
 vserge...@mirantis.com wrote:


 Hello all,

 During Icehouse release cycle our team has been working on splitting of
 openstack common db code into a separate library blueprint [1]. At the
 moment the issues, mentioned in this bp and [2] are solved and we are
 moving
 forward to graduation of oslo.db. You can find the new oslo.db code at
 [3]

 So, before moving forward, I want to ask Oslo team to review oslo.db
 repository [3] and especially the commit, that allows the unit tests to
 pass
 [4].

 Thanks,
 Victor

 [1] https://blueprints.launchpad.net/oslo/+spec/oslo-db-lib
 [2] https://wiki.openstack.org/wiki/Oslo/GraduationStatus#oslo.db
 [3] https://github.com/malor/oslo.db
 [4]


 https://github.com/malor/oslo.db/commit/276f7570d7af4a7a62d0e1ffb4edf904cfbf0600





 I'm probably just late to the party, but simple question: why is it in
 the
 malor group in github rather than the openstack group, like
 oslo.messaging
 and oslo.rootwrap?  Is that temporary or will it be moved at some point?


 This is the copy of the code being prepared to import into a new
 oslo.db repository. It's easier to set up that temporary hosting on
 github. The repo has been approved to be imported, and after that
 happens it will be hosted on our git server like all of the other oslo
 libraries.

 Doug


 --

 Thanks,

 Matt Riedemann






Re: [openstack-dev] [oslo][db] oslo.db repository review request

2014-05-30 Thread Doug Hellmann
On Fri, May 30, 2014 at 11:37 AM, Sergey Lukjanov
slukja...@mirantis.com wrote:
 So, does it mean that we'll be able to migrate to the oslo.db lib at the end
 of Juno, or early in K?


Projects will be able to start migrating to the oslo.db during this
cycle. We will have a non-alpha release by the end of Juno.

Doug


 On Fri, May 30, 2014 at 7:30 PM, Doug Hellmann
 doug.hellm...@dreamhost.com wrote:
 On Fri, May 30, 2014 at 11:06 AM, Sergey Lukjanov
 slukja...@mirantis.com wrote:
 Hey Roman,

 will it be the alpha version that should not be used by other projects
 or it'll be ready to use?

 The current plan is to do alpha releases of oslo libraries during this
 cycle, with a final official release at the end. We're close to
 finishing the infra work we need to make that possible.

 Doug



 Thanks.

 On Fri, May 30, 2014 at 6:36 PM, Roman Podoliaka
 rpodoly...@mirantis.com wrote:
 Hi Matt,

 We're waiting for a few important fixes to be merged (usage of
 oslo.config, eventlet tpool support). Once those are merged, we'll cut
 the initial release.

 Thanks,
 Roman

 On Fri, May 30, 2014 at 5:19 PM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:


 On 4/25/2014 7:46 AM, Doug Hellmann wrote:

 On Fri, Apr 25, 2014 at 8:33 AM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:



 On 4/18/2014 1:18 PM, Doug Hellmann wrote:


 Nice work, Victor!

 I left a few comments on the commits that were made after the original
 history was exported from the incubator. There were a couple of small
 things to address before importing the library, and a couple that can
 wait until we have the normal code review system. I'd say just add new
 commits to fix the issues, rather than trying to amend the existing
 commits.

 We haven't really discussed how to communicate when we agree the new
 repository is ready to be imported, but it seems reasonable to use the
 patch in openstack-infra/config that will be used to do the import:
 https://review.openstack.org/#/c/78955/

 Doug

 On Fri, Apr 18, 2014 at 10:28 AM, Victor Sergeyev
 vserge...@mirantis.com wrote:


 Hello all,

 During Icehouse release cycle our team has been working on splitting 
 of
 openstack common db code into a separate library blueprint [1]. At the
 moment the issues, mentioned in this bp and [2] are solved and we are
 moving
 forward to graduation of oslo.db. You can find the new oslo.db code at
 [3]

 So, before moving forward, I want to ask Oslo team to review oslo.db
 repository [3] and especially the commit, that allows the unit tests 
 to
 pass
 [4].

 Thanks,
 Victor

 [1] https://blueprints.launchpad.net/oslo/+spec/oslo-db-lib
 [2] https://wiki.openstack.org/wiki/Oslo/GraduationStatus#oslo.db
 [3] https://github.com/malor/oslo.db
 [4]


 https://github.com/malor/oslo.db/commit/276f7570d7af4a7a62d0e1ffb4edf904cfbf0600





 I'm probably just late to the party, but simple question: why is it in
 the
 malor group in github rather than the openstack group, like
 oslo.messaging
 and oslo.rootwrap?  Is that temporary or will it be moved at some point?


 This is the copy of the code being prepared to import into a new
 oslo.db repository. It's easier to set up that temporary hosting on
 github. The repo has been approved to be imported, and after that
 happens it will be hosted on our git server like all of the other oslo
 libraries.

 Doug


 --

 Thanks,

 Matt Riedemann







 Are there any status updates on where we are with this [1]?  I see that
 oslo.db is in git.openstack.org now [2].  There is a super-alpha dev 
 package
 on pypi [3], are we waiting for an official release?

 I'd like to start moving nova over to using oslo.db or at least get an 
 idea
 for how much work it's going to be.  I don't imagine it's going to be that
 difficult since I think a lot of the oslo.db code originated in nova.

 [1] https://review.openstack.org/#/c/91407/
 [2] http://git.openstack.org/cgit/openstack/oslo.db/
 [3] https://pypi.python.org/pypi/oslo.db/0.0.1.dev15.g7efbf12


 --

 Thanks,

 Matt Riedemann




Re: [openstack-dev] [Keystone] [Blazar] [Ironic] Py26/27 gates failing because of keystoneclient-0.9.0

2014-05-30 Thread Doug Hellmann
Would it make sense to provide a test fixture in the middleware
library for projects who want or need to test with token management?

Doug
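As a rough sketch of what such a fixture could look like (the names and behaviour here are hypothetical, not the real keystoneclient API): a fake token cache that always stores a valid expiration alongside the token data, so tests stop depending on the middleware's private cache format:

```python
import datetime

# Hypothetical test fixture sketch (illustrative names, not the real
# keystoneclient API): a fake token cache that stores an expiry next to
# the token data and honours it on read, instead of copy-pasting
# assumptions about auth_token's internal cache format.

class FakeTokenCache:
    def __init__(self):
        self._store = {}

    def set(self, token_id, token_data, ttl=3600):
        expires = (datetime.datetime.utcnow()
                   + datetime.timedelta(seconds=ttl))
        self._store[token_id] = (token_data, expires)

    def get(self, token_id):
        entry = self._store.get(token_id)
        if entry is None:
            return None
        data, expires = entry
        if expires <= datetime.datetime.utcnow():
            del self._store[token_id]      # expired: behave like a miss
            return None
        return data

cache = FakeTokenCache()
cache.set("tok1", {"user": "demo"}, ttl=3600)
cache.set("tok2", {"user": "old"}, ttl=-1)     # already expired
print(cache.get("tok1"), cache.get("tok2"))    # -> {'user': 'demo'} None
```

Because an expiry is always stored, a middleware change that starts honouring cached expiration times would not break tests built on a fixture like this.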

On Fri, May 30, 2014 at 12:49 PM, Brant Knudson b...@acm.org wrote:

 The auth_token middleware changed recently[1] to check if tokens retrieved
 from the cache are expired based on the expiration time in the token. The
 unit tests for Blazar, Ceilometer, and Ironic are all using a copy-pasted
 fake memcache implementation that's supposed to simulate what auth_token
 stores in the cache, but the tokens that it had stored weren't valid. Tokens
 have an expiration time in them and these ones didn't. I don't think that
 it's safe for test code to make assumptions about how the auth_token
 middleware is going to store data in its cache. The format of the cached
 data isn't part of the public interface. It's changed before, when
 expiration times changed from *nix timestamps to ISO 8601 formatted dates.

 After looking at this, I proposed a couple of changes to the auth_token
 middleware. One is to have auth_token use the expiration time it has cached
 and fail the auth request if the token is expired according to the cache. It
 doesn't have to check the token's expiration time because it was stored as
 part of the cache data. The other is to make cached token handling more
 efficient by not checking the token expiration time if the token was cached.

 [1]
 http://git.openstack.org/cgit/openstack/python-keystoneclient/commit/keystoneclient/middleware/auth_token.py?id=8574256f9342faeba2ce64080ab5190023524e0a
 [2] https://review.openstack.org/#/c/96786/

 - Brant



 On Fri, May 30, 2014 at 7:11 AM, Sylvain Bauza sba...@redhat.com wrote:

 Le 30/05/2014 14:07, Dina Belova a écrit :

 I did not look closely at this concrete issue, but in Ceilometer there
 is almost the same thing: https://bugs.launchpad.net/ceilometer/+bug/1324885
 and fixes were already provided.

 Will this help Blazar?


 Got the Ironic patch as well :

 https://review.openstack.org/#/c/96576/1/ironic/tests/api/utils.py

 Will provide a patch against Blazar.

 Btw, I'll close the bug.


 -- Dina


 On Fri, May 30, 2014 at 4:00 PM, Sylvain Bauza sba...@redhat.com wrote:

 Hi Keystone developers,

 I just opened a bug [1] because Ironic and Blazar (ex. Climate) patches
 are failing due to a new release in Keystone client which seems to
 regress on middleware auth.

 Do you have any ideas on if it's quick to fix, or shall I provide a
 patch to openstack/global-requirements.txt to only accept keystoneclient
 < 0.9.0?

 Thanks,
 -Sylvain


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][db] oslo.db repository review request

2014-05-30 Thread Sergey Lukjanov
Doug, thanks for the clarification re migration time.

I absolutely agree with the point about the need to keep the API backward compatible.

On Sat, May 31, 2014 at 1:04 AM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:
 No matter what version number we use, we will have to be careful about
 API changes. We cannot have 2 versions of the same library installed
 at the same time, so in order for devstack to work (and therefore the
 gate), we will have to make all changes backwards-compatible and
 support older APIs until our projects have migrated to the new APIs.

 This is *exactly* why we have the Oslo incubator. It gives us a place
 to work out stable APIs in a way that does not restrict when and how
 updates can be made. since the syncs can be handled by projects at
 their own pace.

 Doug

 On Fri, May 30, 2014 at 11:37 AM, Roman Podoliaka
 rpodoly...@mirantis.com wrote:
 Hi Sergey,

 tl;dr

 I'd like it to be a ready-to-use version, but not 1.0.0.

 So it's a good question and I'd like to hear more input on this from all.

 If we start from 1.0.0, this will mean that we'll be very limited in
 terms of changes to public API we can make without bumping the MAJOR
 part of the version number. I don't expect the number of those changes
 to be big, but I also don't want us to end up in a situation where we
 have oslo.db 3.0.0 in a few months (if we follow semver
 pragmatically).

 Perhaps, we should stick to 0.MINOR.PATCH versioning for now (as e.g.
 SQLAlchemy and TripleO projects do)? These won't be alphas, but rather
 ready-to-use versions. And we would still have a bit more 'freedom' to
 do small API changes by bumping the MINOR part of the version number (we
 could also do intermediate releases deprecating some stuff, so we
 don't break people's projects every time we make an API change).
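 The practical effect of this proposal is that the MINOR number takes over the role semver normally assigns to MAJOR. A toy sketch of what consumers would then rely on (hypothetical helper, not oslo code):

```python
def parse(version):
    """Split a 'MAJOR.MINOR.PATCH' string into an integer tuple."""
    return tuple(int(part) for part in version.split("."))

def same_series(installed, wanted):
    """Under 0.MINOR.PATCH versioning, releases within one MINOR series
    (0.3.x) are API-compatible, while bumping MINOR (0.4.0) may change
    the public API -- so consumers would pin to a MINOR series."""
    return parse(installed)[:2] == parse(wanted)[:2]
```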

 Thanks,
 Roman

 On Fri, May 30, 2014 at 6:06 PM, Sergey Lukjanov slukja...@mirantis.com 
 wrote:
 Hey Roman,

 will it be an alpha version that should not be used by other projects,
 or will it be ready to use?

 Thanks.

 On Fri, May 30, 2014 at 6:36 PM, Roman Podoliaka
 rpodoly...@mirantis.com wrote:
 Hi Matt,

 We're waiting for a few important fixes to be merged (usage of
 oslo.config, eventlet tpool support). Once those are merged, we'll cut
 the initial release.

 Thanks,
 Roman

 On Fri, May 30, 2014 at 5:19 PM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:


 On 4/25/2014 7:46 AM, Doug Hellmann wrote:

 On Fri, Apr 25, 2014 at 8:33 AM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:



 On 4/18/2014 1:18 PM, Doug Hellmann wrote:


 Nice work, Victor!

 I left a few comments on the commits that were made after the original
 history was exported from the incubator. There were a couple of small
 things to address before importing the library, and a couple that can
 wait until we have the normal code review system. I'd say just add new
 commits to fix the issues, rather than trying to amend the existing
 commits.

 We haven't really discussed how to communicate when we agree the new
 repository is ready to be imported, but it seems reasonable to use the
 patch in openstack-infra/config that will be used to do the import:
 https://review.openstack.org/#/c/78955/

 Doug

 On Fri, Apr 18, 2014 at 10:28 AM, Victor Sergeyev
 vserge...@mirantis.com wrote:


 Hello all,

 During the Icehouse release cycle our team has been working on splitting
 the openstack common db code out into a separate library, blueprint [1]. At
 the moment the issues mentioned in this bp and in [2] are solved and we are
 moving forward to the graduation of oslo.db. You can find the new oslo.db
 code at [3].

 So, before moving forward, I want to ask Oslo team to review oslo.db
 repository [3] and especially the commit, that allows the unit tests 
 to
 pass
 [4].

 Thanks,
 Victor

 [1] https://blueprints.launchpad.net/oslo/+spec/oslo-db-lib
 [2] https://wiki.openstack.org/wiki/Oslo/GraduationStatus#oslo.db
 [3] https://github.com/malor/oslo.db
 [4]


 https://github.com/malor/oslo.db/commit/276f7570d7af4a7a62d0e1ffb4edf904cfbf0600

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 I'm probably just late to the party, but simple question: why is it in
 the
 malor group in github rather than the openstack group, like
 oslo.messaging
 and oslo.rootwrap?  Is that temporary or will it be moved at some point?


 This is the copy of the code being prepared to import into a new
 oslo.db repository. It's easier to set up that temporary hosting on
 github. The repo has been approved to be imported, and after that
 happens it will be hosted on our git server like all of the other oslo
 libraries.

 Doug


 --

 Thanks,

 Matt Riedemann



 

Re: [openstack-dev] [Keystone] [Blazar] [Ironic] Py26/27 gates failing because of keystoneclient-0.9.0

2014-05-30 Thread Morgan Fainberg
+1 to a fixture for the middleware if this is a common practice to do unit 
testing in this manner. The main issue here was mocking out the cache and using 
a hand-crafted “valid” token.

We have a mechanism provided in the keystone client library that allows for 
creating a valid token (all the required fields, etc): 
https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/fixture/v3.py#L48
 for example.
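As a toy illustration of the point (not the keystoneclient fixture API itself, which lives at the link above): a token that tests hand to the middleware must at minimum carry its required fields, including an expiration time -- exactly what the copy-pasted fakes were missing.

```python
import datetime
import uuid

ISO_FMT = "%Y-%m-%dT%H:%M:%S.%fZ"

def make_fake_token(expires_in=3600):
    """Build a minimal, well-formed token body for tests.

    Field names here are illustrative; the real helper is in
    keystoneclient.fixture. The key point is that 'expires_at' is
    always present, unlike the hand-crafted fakes that broke the gate.
    """
    now = datetime.datetime.utcnow()
    return {
        "token": {
            "id": uuid.uuid4().hex,
            "issued_at": now.strftime(ISO_FMT),
            "expires_at": (now + datetime.timedelta(seconds=expires_in))
                          .strftime(ISO_FMT),
        }
    }
```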
—
Morgan Fainberg


From: Doug Hellmann doug.hellm...@dreamhost.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: May 30, 2014 at 14:08:16
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [Keystone] [Blazar] [Ironic] Py26/27 gates 
failing because of keystoneclient-0.9.0  

Would it make sense to provide a test fixture in the middleware
library for projects who want or need to test with token management?

Doug

On Fri, May 30, 2014 at 12:49 PM, Brant Knudson b...@acm.org wrote:

 The auth_token middleware changed recently[1] to check if tokens retrieved
 from the cache are expired based on the expiration time in the token. The
 unit tests for Blazar, Ceilometer, and Ironic are all using a copy-pasted
 fake memcache implementation that's supposed to simulate what auth_token
 stores in the cache, but the tokens that it had stored weren't valid. Tokens
 have an expiration time in them and these ones didn't. I don't think that
 it's safe for test code to make assumptions about how the auth_token
 middleware is going to store data in its cache. The format of the cached
 data isn't part of the public interface. It's changed before, when
 expiration times changed from *nix timestamps to iso 8601 formatted dates.

 After looking at this, I proposed a couple of changes to the auth_token
 middleware. One is to have auth_token use the expiration time it has cached
 and fail the auth request if the token is expired according to the cache. It
 doesn't have to check the token's expiration time because it was stored as
 part of the cache data. The other is to make cached token handling more
 efficient by not checking the token expiration time if the token was cached.

 [1]
 http://git.openstack.org/cgit/openstack/python-keystoneclient/commit/keystoneclient/middleware/auth_token.py?id=8574256f9342faeba2ce64080ab5190023524e0a
 [2] https://review.openstack.org/#/c/96786/

 - Brant



 On Fri, May 30, 2014 at 7:11 AM, Sylvain Bauza sba...@redhat.com wrote:

 Le 30/05/2014 14:07, Dina Belova a écrit :

 I did not look close to this concrete issue, but in the ceilometer there
 is almost the same thing: https://bugs.launchpad.net/ceilometer/+bug/1324885
 and fixes were already provided.

 Will this help Blazar?


 Got the Ironic patch as well :

 https://review.openstack.org/#/c/96576/1/ironic/tests/api/utils.py

 Will provide a patch against Blazar.

 Btw, I'll close the bug.


 -- Dina


 On Fri, May 30, 2014 at 4:00 PM, Sylvain Bauza sba...@redhat.com wrote:

 Hi Keystone developers,

 I just opened a bug [1] because Ironic and Blazar (ex. Climate) patches
 are failing due to a new release in Keystone client which seems to
 regress on middleware auth.

 Do you have any ideas on if it's quick to fix, or shall I provide a
 patch to openstack/global-requirements.txt to only accept keystoneclient
 < 0.9.0?

 Thanks,
 -Sylvain


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Cleaning up configuration settings

2014-05-30 Thread W Chan
Is there an existing unit test for testing enabling keystone middleware in
pecan (setting cfg.CONF.pecan.auth_enable = True)?  I don't seem to find
one.  If there's one, it's not obvious.  Can someone kindly point me to it?


On Wed, May 28, 2014 at 9:53 AM, W Chan m4d.co...@gmail.com wrote:

 Thanks for following up.  I will publish this change as a separate patch
 from my current config cleanup.


 On Wed, May 28, 2014 at 2:38 AM, Renat Akhmerov rakhme...@mirantis.com
 wrote:


 On 28 May 2014, at 13:51, Angus Salkeld angus.salk...@rackspace.com
 wrote:

  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA1
 
  On 17/05/14 02:48, W Chan wrote:
  Regarding config opts for keystone, the keystoneclient middleware
 already
  registers the opts at
 
 https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/middleware/auth_token.py#L325
  under a keystone_authtoken group in the config file.  Currently,
 Mistral
  registers the opts again at
 
 https://github.com/stackforge/mistral/blob/master/mistral/config.py#L108
 under a
  different configuration group.  Should we remove the duplicate from
 Mistral and
  refactor the reference to keystone configurations to the
 keystone_authtoken
  group?  This seems more consistent.
 
  I think that is the only thing that makes sense. Seems like a bug
  waiting to happen having the same options registered twice.
 
  If some user used to other projects comes and configures
  keystone_authtoken then will their config take effect?
  (how much confusion will that generate)..
 
  I'd suggest just using the one that is registered keystoneclient.

 Ok, I had a feeling it was needed for some reason. But after having
 another look at this I think this is really a bug. Let’s do it.
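 The confusion Angus warns about can be shown with a toy registry (group and option names are illustrative; the real mechanism is oslo.config): a user coming from other projects configures the standard keystone_authtoken group, but the application reads its own duplicate group, so the user's setting is silently ignored.

```python
# Toy stand-in for a config registry with the same option registered
# twice: under the standard group and under a project-specific duplicate.
DEFAULT_URI = "http://127.0.0.1:5000/"
settings = {
    ("keystone_authtoken", "auth_uri"): DEFAULT_URI,  # standard group
    ("keystone", "auth_uri"): DEFAULT_URI,            # duplicate group
}

def user_sets(group, name, value):
    settings[(group, name)] = value

def app_reads(group, name):
    return settings[(group, name)]

# The user configures the group they know from other OpenStack projects...
user_sets("keystone_authtoken", "auth_uri",
          "https://keystone.example.com:5000/")
# ...but an app reading the duplicate group still sees only the default.
ignored_value = app_reads("keystone", "auth_uri")
```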

 Thanks guys
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Adding Tuskar to weekly IRC meetings agenda

2014-05-30 Thread Jason Rist
On Fri 30 May 2014 02:37:49 PM MDT, James Polley wrote:


 On 30 May 2014, at 8:13 pm, Jaromir Coufal jcou...@redhat.com wrote:

 Hi All,

 I would like to propose to add Tuskar as a permanent topic to the agenda for 
 our weekly IRC meetings. It is an official TripleO project, there is
 quite a lot happening around it, and we are targeting Juno to have
 something solid. So I think that it is important for us to regularly keep
 track of what is going on there.


 Sounds good to me.

 What do you think we would talk about under this topic? I'm thinking that a 
 brief summary of changes since last week, and any blockers tuskar is seeing 
 from the broader project would be a good start?

 -- Jarda

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Initially I also think it'd be good to cover some of the plans for Juno 
and how they're progressing?

--
Jason E. Rist
Senior Software Engineer
OpenStack Management UI
Red Hat, Inc.
openuc: +1.972.707.6408
mobile: +1.720.256.3933
Freenode: jrist
github/identi.ca: knowncitizen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Blueprint process (heat-specs repo)

2014-05-30 Thread Zane Bitter
Last week we agreed[1] to follow that other project in setting up a 
specs repo.[2] (Many thanks to Ying, Monty and Clint for getting this up 
and running.)


I'm still figuring out myself how this is going to work, but the basic 
idea seems to be this:


- All new blueprints should be submitted as Gerrit reviews to the specs 
repo. Do NOT submit new blueprints to Launchpad.

- Existing blueprints in Launchpad are fine, there's no need to touch them.
- If you need to add design information to an existing blueprint, please 
do so by submitting a Gerrit review to the specs repo and linking to it 
from Launchpad, instead of using a wiki page.


A script will create Launchpad blueprints from approved specs and 
heat-drivers (i.e. the core team) will target them to milestones. Once 
this system is up and running, anything not targeted to a milestone will 
be subject to getting bumped from the series goal by another script. 
(That's why you don't want to create new bps in Launchpad.)


If anybody has questions, I am happy to make up answers.

Let's continue to keep things lightweight. Remember, this is not a tool 
to enforce a process, it's a better medium for the communication that's 
already happening. As a guide:


- the more ambitious/crazy/weird your idea is, the more detail you need 
to communicate.
- the harder it would be to throw part or all of the work away, the 
earlier you need to communicate it.
- as always, do whatever you judge best, for whatever definition of 
'best' you judge best.


cheers,
Zane.


[1] 
http://eavesdrop.openstack.org/meetings/heat/2014/heat.2014-05-21-20.00.html

[2] http://git.openstack.org/cgit/openstack/heat-specs

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Rally] Tempest + Rally: first success

2014-05-30 Thread Andrey Kurilin
Hi stackers,

I would like to share with you great news.
We all know that it's quite hard to use Tempest outside the gates, especially
when you are going to benchmark different clouds, run just part of the tests,
and would like to store the results somewhere. As all this stuff doesn't belong
in Tempest, we decided to implement it in Rally.

More details about how to use Tempest in one click in my tutorial:
http://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler/

-- 
Best regards,
Andrey Kurilin.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova]Passing flat_injected flag through instance metadata

2014-05-30 Thread ebaysf, yvempati
Hello all,
I am new to the openstack community and I am looking for feedback.

We would like to implement a feature that allows a user to pass the flat_injected
flag through instance metadata. We would like to enable this feature for images
that support config drive. This feature helps us to decrease the dependency on
the DHCP server and to maintain a uniform configuration across all the hypervisors
running in our cloud. In order to enable this feature, should I create a blueprint
and later implement it, or can this feature be implemented by filing a bug?

Regards,
Yashwanth Vempati

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] [Heat] Glance Metadata Catalog for Capabilities and Tags

2014-05-30 Thread Tripp, Travis S
Thanks, Zane and Georgy!

We’ll begin getting all the expected sections for the new Glance spec repo into 
this document next week and then will upload in RST format for formal review. 
That is a bit more expedient since there are still several people editing. In 
the meantime, we’ll take any additional comments in the google doc.
Thanks,
Travis

From: Georgy Okrokvertskhov [mailto:gokrokvertsk...@mirantis.com]
Sent: Friday, May 30, 2014 1:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] [Heat] Glance Metadata Catalog for 
Capabilities and Tags
Importance: High

I think this is a great feature to have in Glance. A tagging mechanism for
objects which are not owned by Glance is complementary to the artifact
catalog/repository in Glance. As long as we keep tags and artifact metadata
close to each other, the end-user will be able to use them seamlessly.
Artifacts can also use tags to find objects outside of the artifact repository,
which is always good to have.
In the Murano project we use Glance tags to find the correct images required by
specific applications. It will be great to extend this to other objects like
networks, routers and flavors, so that an application writer can specify which
kinds of objects are required for his application.

Thanks,
Georgy

On Fri, May 30, 2014 at 11:45 AM, Zane Bitter 
zbit...@redhat.com wrote:
On 29/05/14 18:42, Tripp, Travis S wrote:
Hello everyone!

At the summit in Atlanta we demonstrated the “Graffiti” project
concepts.  We received very positive feedback from members of multiple
dev projects as well as numerous operators.  We were specifically asked
multiple times about getting the Graffiti metadata catalog concepts into
Glance so that we can start to officially support the ideas we
demonstrated in Horizon.

After a number of additional meetings at the summit and working through
ideas the past week, we’ve created the initial proposal for adding a
Metadata Catalog to Glance for capabilities and tags.  This is distinct
from the “Artifact Catalog”, but we do see that capability and tag
catalog can be used with the artifact catalog.

We’ve detailed our initial proposal in the following Google Doc.  Mark
Washenberger agreed that this was a good place to capture the initial
proposal and we can later move it over to the Glance spec repo which
will be integrated with Launchpad blueprints soon.

https://docs.google.com/document/d/1cS2tJZrj748ZsttAabdHJDzkbU9nML5S4oFktFNNd68

Please take a look and let’s discuss!

Also, the following video is a brief recap of what was demo’ d at the
summit.  It should help to set a lot of understanding behind the ideas
in the proposal.

https://www.youtube.com/watch?v=Dhrthnq1bnw

Thank you!

Travis Tripp (HP)

Murali Sundar (Intel)
*A Few Related Blueprints *


https://blueprints.launchpad.net/horizon/+spec/instance-launch-using-capability-filtering

https://blueprints.launchpad.net/horizon/+spec/tagging

https://blueprints.launchpad.net/horizon/+spec/faceted-search

https://blueprints.launchpad.net/horizon/+spec/host-aggregate-update-metadata

https://blueprints.launchpad.net/python-cinderclient/+spec/support-volume-image-metadata

+1, this is something that will be increasingly important to orchestration. The 
folks working on the TOSCA (and others) - HOT translator project might be able 
to comment in more detail, but basically as people start wanting to write 
templates that run on multiple clouds (potentially even non-OpenStack clouds) 
some sort of catalog for capabilities will become crucial.

cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com/
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

2014-05-30 Thread Carl Baldwin
Eugene,

That was part of the whole new set of complications that I
dismissively waved my hands at.  :)

I was thinking it would be a separate process that would communicate
over the RPC channel or something.  More complications come when you
think about making this process HA, etc.  It would mean going over RPC
to rabbit to get an allocation which would be slow.  But the current
implementation is slow.  At least going over RPC is greenthread
 friendly, whereas going to the database doesn't seem to be.

Carl

On Fri, May 30, 2014 at 2:56 PM, Eugene Nikanorov
enikano...@mirantis.com wrote:
 Hi Carl,

 The idea of in-memory storage was discussed for a similar problem, but might
 not work for multiple-server deployments.
 Some hybrid approach though may be used, I think.

 Thanks,
 Eugene.


 On Fri, May 30, 2014 at 8:53 PM, Carl Baldwin c...@ecbaldwin.net wrote:

 This is very similar to IPAM...  There is a space of possible ids or
 addresses that can grow very large.  We need to track the allocation
 of individual ids or addresses from that space and be able to quickly
 come up with new allocations and recycle old ones.  I've had this in
 the back of my mind for a week or two now.

 A similar problem came up when the database would get populated with
 the entire free space worth of ip addresses to reflect the
 availability of all of the individual addresses.  With a large space
 (like an ip4 /8 or practically any ip6 subnet) this would take a very
 long time or never finish.

 Neutron was a little smarter about this.  It compressed availability
 into availability ranges in a separate table.  This solved the
 original problem but is not problem free.  It turns out that writing
 database operations to manipulate both the allocations table and the
 availability table atomically is very difficult and ends up being very
 slow and has caused us some grief.  The free space also gets
 fragmented which degrades performance.  This is what led me --
 somewhat reluctantly -- to change how IPs get recycled back into the
 free pool which hasn't been very popular.
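 A rough in-memory sketch of the range-compression idea (allocation plus the naive recycling that causes the fragmentation mentioned above; the real Neutron code does this in database tables, transactionally):

```python
class RangeAllocator:
    """Track free space as inclusive (start, end) ranges instead of one
    row per id, so a [1, 16M] VXLAN pool starts as a single entry."""

    def __init__(self, start, end):
        self.free = [(start, end)]

    def allocate(self):
        if not self.free:
            raise RuntimeError("pool exhausted")
        start, end = self.free[0]
        if start == end:
            self.free.pop(0)          # range fully consumed
        else:
            self.free[0] = (start + 1, end)
        return start

    def release(self, value):
        # Naive recycling: append a single-item range. Without merging
        # adjacent ranges, the free list fragments over time -- one of
        # the problems described above.
        self.free.append((value, value))
```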

 I wonder if we can discuss a good pattern for handling allocations
 where the free space can grow very large.  We could use the pattern
 for the allocation of both IP addresses, VXlan ids, and other similar
 resource spaces.

 For IPAM, I have been entertaining the idea of creating an allocation
 agent that would manage the availability of IPs in memory rather than
 in the database.  I hesitate, because that brings up a whole new set
 of complications.  I'm sure there are other potential solutions that I
 haven't yet considered.

 The L3 subteam is currently working on a pluggable IPAM model.  Once
 the initial framework for this is done, we can more easily play around
 with changing the underlying IPAM implementation.

 Thoughts?

 Carl

 On Thu, May 29, 2014 at 4:01 AM, Xurong Yang ido...@gmail.com wrote:
  Hi, Folks,
 
  When we configure the VXLAN range [1, 16M], the neutron-server service takes a
  long time and the CPU rate is very high (100%) during initialization. One test
  based on PostgreSQL has been verified: more than 1h when the VXLAN range is [1, 1M].
 
  So, any good solution about this performance issue?
 
  Thanks,
  Xurong Yang
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

2014-05-30 Thread Eugene Nikanorov
 I was thinking it would be a separate process that would communicate over
the RPC channel or something.
memcached?

Eugene.


On Sat, May 31, 2014 at 2:27 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 Eugene,

 That was part of the whole new set of complications that I
 dismissively waved my hands at.  :)

 I was thinking it would be a separate process that would communicate
 over the RPC channel or something.  More complications come when you
 think about making this process HA, etc.  It would mean going over RPC
 to rabbit to get an allocation which would be slow.  But the current
 implementation is slow.  At least going over RPC is greenthread
 friendly, whereas going to the database doesn't seem to be.

 Carl

 On Fri, May 30, 2014 at 2:56 PM, Eugene Nikanorov
 enikano...@mirantis.com wrote:
  Hi Carl,
 
  The idea of in-memory storage was discussed for similar problem, but
 might
  not work for multiple server deployment.
  Some hybrid approach though may be used, I think.
 
  Thanks,
  Eugene.
 
 
  On Fri, May 30, 2014 at 8:53 PM, Carl Baldwin c...@ecbaldwin.net
 wrote:
 
  This is very similar to IPAM...  There is a space of possible ids or
  addresses that can grow very large.  We need to track the allocation
  of individual ids or addresses from that space and be able to quickly
  come up with new allocations and recycle old ones.  I've had this in
  the back of my mind for a week or two now.
 
  A similar problem came up when the database would get populated with
  the entire free space worth of ip addresses to reflect the
  availability of all of the individual addresses.  With a large space
  (like an ip4 /8 or practically any ip6 subnet) this would take a very
  long time or never finish.
 
  Neutron was a little smarter about this.  It compressed availability
  in to availability ranges in a separate table.  This solved the
  original problem but is not problem free.  It turns out that writing
  database operations to manipulate both the allocations table and the
  availability table atomically is very difficult and ends up being very
  slow and has caused us some grief.  The free space also gets
  fragmented which degrades performance.  This is what led me --
  somewhat reluctantly -- to change how IPs get recycled back in to the
  free pool which hasn't been very popular.
 
  I wonder if we can discuss a good pattern for handling allocations
  where the free space can grow very large.  We could use the pattern
  for the allocation of both IP addresses, VXlan ids, and other similar
  resource spaces.
 
  For IPAM, I have been entertaining the idea of creating an allocation
  agent that would manage the availability of IPs in memory rather than
  in the database.  I hesitate, because that brings up a whole new set
  of complications.  I'm sure there are other potential solutions that I
  haven't yet considered.
 
  The L3 subteam is currently working on a pluggable IPAM model.  Once
  the initial framework for this is done, we can more easily play around
  with changing the underlying IPAM implementation.
 
  Thoughts?
 
  Carl
 
  On Thu, May 29, 2014 at 4:01 AM, Xurong Yang ido...@gmail.com wrote:
   Hi, Folks,
  
   When we configure the VXLAN range [1, 16M], the neutron-server service takes
   a long time and the CPU rate is very high (100%) during initialization. One
   test based on PostgreSQL has been verified: more than 1h when the VXLAN range
   is [1, 1M].
  
   So, any good solution about this performance issue?
  
   Thanks,
   Xurong Yang
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Dev][Cinder] conf.sample conflicts

2014-05-30 Thread John Griffith
Hey Everyone,

Many of you may be aware that yesterday an upstream change landed that
outdated our sample cinder.conf file.  As a result the Jenkins tests will
fail (and continue to fail no matter how many times you hit recheck) :)

But what you may not have known was that YOU have the power to fix this!!
 First off, I know that running the entire test-suite in a venv is
something we all do EVERY time, right?  Well you'll notice when you do
that (it's the same tests that Jenkins will run) it will surprisingly
report the exact same error that Jenkins has been failing your patches for.
 What's even more handy is the fact that it will also give you a hint on
how to fix the problem!  Pretty cool eh?  If even just ONE person who
first hit this problem had updated their patch according to the
process, nobody would have even noticed that it happened and you wouldn't
read my stupid email on a Friday evening.

That wasn't the case though, and the result is a bunch of failed items in
the queue. I've also submitted a patch to do away with the whole sample
conf file thing anyway, and I've also submitted a patch to update the conf
file for everybody so this is taken care of (with the exception of all
the rechecks we'll need once my patch lands).

So why am I sending this email you ask?

Well, because what we have now is everybody continuing to pile on
submissions that are guaranteed to fail tests.  This clogs up the gate and
makes things like my proposed fix to actually update this, wait in
extremely long queues and get kicked out when something ahead of it fails.
 The result is we all sit here very unhappy waiting.  I just wanted to
point out, we all (myself included) have become a bit slack about running
tests in our own local env before submitting patches.  This messes up
Friday afternoon for all of us.

So... just a reminder (to myself as well);
1. PLEASE run tests before submitting your patch
2. PLEASE look at failures and don't just keep hitting recheck thinking
it will change.  Read the console output, it does in fact call out the
error.


Thanks!!
John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Unanswered questions in object model refactor blueprint

2014-05-30 Thread Brandon Logan
Stephen,

Were you still planning on doing the second blueprint that will
implement the new API calls?

Thanks,
Brandon

On Thu, 2014-05-29 at 22:36 -0700, Bo Lin wrote:
 Hi Brandon and Stephen,
 Thanks very much for your responses; that clears it up for me.
 
 
 Thanks!
 ---Bo
 
 __
 From: Brandon Logan brandon.lo...@rackspace.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Sent: Friday, May 30, 2014 1:17:57 PM
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Unanswered questions in
 object model refactor blueprint
 
 
 Hi Bo,
 Sorry, I forgot to respond but yes what Stephen said lol :)
 
 __
 From: Stephen Balukoff [sbaluk...@bluebox.net]
 Sent: Thursday, May 29, 2014 10:42 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Unanswered questions in
 object model refactor blueprint
 
 
 
 
 Hi Bo--
 
 
 Haproxy is able to have IPv4 front-ends with IPv6 back-ends (and vice
 versa) because it actually initiates a separate TCP connection between
 the front-end client and the back-end server. The front end thinks
 haproxy is the server, and the back end thinks haproxy is the client.
 In practice, therefore, it's totally possible to have an IPv6 front-end
 and an IPv4 back-end with haproxy (for both HTTP and generic TCP service
 types).
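[Not part of the original message.] A minimal Python sketch of the point above: a proxy-style load balancer terminates the client's TCP connection and opens a *separate* connection to the back-end, so the two sides can use different address families. The toy "listener" and "member" below are illustrative, not haproxy or the Neutron model; they assume IPv6 loopback (::1) is available.

```python
import socket
import threading

def start_backend_v4():
    """IPv4 'member': echoes one message back, then closes."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))   # OS picks a free port
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        conn.sendall(conn.recv(1024))
        conn.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()     # ('127.0.0.1', port)

def start_frontend_v6(backend_addr):
    """IPv6 'listener': accepts on ::1 and relays to the IPv4 back-end
    over a separate TCP connection, like a proxying load balancer."""
    srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    srv.bind(("::1", 0))
    srv.listen(1)

    def relay():
        client, _ = srv.accept()
        upstream = socket.create_connection(backend_addr)  # new IPv4 conn
        upstream.sendall(client.recv(1024))
        client.sendall(upstream.recv(1024))
        upstream.close()
        client.close()

    threading.Thread(target=relay, daemon=True).start()
    host, port = srv.getsockname()[:2]
    return host, port

backend_addr = start_backend_v4()
v6_host, v6_port = start_frontend_v6(backend_addr)

# The IPv6 client only talks to the IPv6 front-end;
# the IPv4 back-end never sees an IPv6 address.
client = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
client.connect((v6_host, v6_port))
client.sendall(b"ping")
reply = client.recv(1024)
client.close()
print(reply.decode())
```

This is exactly why a transparent network-layer balancer can't bridge the two families: there is no second TCP connection to rewrite.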
 
 
 I think this is similarly true for vendor appliances that are capable
 of doing IPv6, and are also initiating new TCP connections from the
 appliance to the back-end.
 
 
 Obviously, the above won't work if your load balancer implementation
 is doing something transparent at the network layer, like LVS load
 balancing.
 
 
 Stephen
 
 
 
 
 On Wed, May 28, 2014 at 9:14 PM, Bo Lin l...@vmware.com wrote:
 Hi Brandon,
 
 I have one question. If we support an M:N LoadBalancer-to-Listener
 relationship, then one listener with IPv4 back-end members may be
 shared by a load balancer instance with an IPv6 front-end. Does that
 mean we also need to provide IPv6-to-IPv4 port forwarding in LBaaS
 service products? Do iptables or most LBaaS products, such as haproxy,
 provide such a function? Or am I just wrong on some technical details
 of these LBaaS products?
 
 
 Thanks!
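[Not part of the original message.] A rough sketch of the relationship Bo describes, using illustrative dataclasses (field names are hypothetical, not the actual Neutron schema): one Listener, whose pool holds IPv4 members, attached to both an IPv4-VIP and an IPv6-VIP LoadBalancer under the proposed M:N model.

```python
from dataclasses import dataclass, field

@dataclass
class Member:
    address: str          # back-end server, e.g. an IPv4 address
    port: int

@dataclass
class Pool:
    members: list

@dataclass
class Listener:
    protocol: str
    protocol_port: int
    default_pool: Pool

@dataclass
class LoadBalancer:
    vip_address: str      # may be IPv4 or IPv6
    listeners: list = field(default_factory=list)

# One listener (with IPv4 members) shared by two load balancers:
pool = Pool(members=[Member("10.0.0.5", 80)])
shared = Listener("HTTP", 80, pool)
lb_v4 = LoadBalancer("203.0.113.10", [shared])
lb_v6 = LoadBalancer("2001:db8::10", [shared])
# The M:N relationship means the IPv6 VIP fronts IPv4 members, which
# only works if the implementation proxies rather than forwards packets.
```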
 
 __
 From: Vijay B os.v...@gmail.com
 
 To: OpenStack Development Mailing List (not for usage
 questions) openstack-dev@lists.openstack.org
 
 Sent: Thursday, May 29, 2014 6:18:42 AM
 
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Unanswered
 questions in object model refactor blueprint
 
 
 Hi Brandon!
 
 
 Please see inline..
 
 
 
 
 
 
 
 On Wed, May 28, 2014 at 12:01 PM, Brandon Logan
 brandon.lo...@rackspace.com wrote:
 Hi Vijay,
 
 On Tue, 2014-05-27 at 16:27 -0700, Vijay B wrote:
  Hi Brandon,
 
 
  The current reviews of the schema itself are absolutely valid and
  necessary, and must go on. However, the place of implementation of
  this schema needs to be clarified. Rather than make any changes
  whatsoever to the existing neutron db schema for LBaaS, this new db
  schema outlined needs to be implemented for a separate LBaaS core
  service.
 
 
 Are you suggesting a separate lbaas database from the neutron database?
 If not, then I could use some clarification. If so, I'd advocate against
 that right now because there are just too many things that would need to
 be changed.  Later, when LBaaS becomes its own service, then yeah, that
 will need to happen.
 
 
 v Ok, so as I understand it, in this scheme there is no new schema or
 db; there will be a new set of tables resident in the neutron_db schema
 itself, alongside the legacy lbaas tables. Let's consider a rough view
 of the implementation.


 Layer 1 - We'll have a new lbaas v3.0 api in neutron, with the current
 lbaas service plugin having to support it in addition to the legacy
 lbaas extensions that it already supports. We'll need to put in new
 code anyway that will process the v3.0
