[openstack-dev] [neutron] lbaas driver implementation question

2015-01-28 Thread Shane McGough
Thanks for your response.


We have analysed the issue further and the DG is not the problem.



We have taken tcpdumps on all interfaces and we can see that the ping reaches 
the VIP on the driver virtual appliance and that the appliance responds to 
the ping, using the MAC address of the switch. Once the reply hits the 
switch, the traffic is dropped. The virtual appliance itself has no issues and 
it is using the same MAC address.



I have done a dump of the OVS flows and I can see that the flow for the 
ping request is set up, but the flow for the reply is not.



The action on the port is, I believe, NORMAL, so it should handle the requests 
normally.
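
For reference, the kind of commands used for those checks, roughly (br-int is 
just the default OVS integration bridge in devstack; names will vary per setup):

  # OpenFlow rules programmed on the integration bridge
  sudo ovs-ofctl dump-flows br-int
  # datapath flows actually being hit by live traffic
  sudo ovs-dpctl dump-flows
  # port/bridge layout, including each port's VLAN tag
  sudo ovs-vsctl show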



What could be causing the flow not to be created for the reply? Is there a 
misconfiguration somewhere along the line?


Cheers,


Shane McGough
Junior Software Developer
KEMP Technologies
National Technology Park, Limerick, Ireland.

kemptechnologies.com (https://kemptechnologies.com/) | 
@KEMPtech (https://twitter.com/KEMPtech) | 
LinkedIn (https://www.linkedin.com/company/kemp-technologies)

From: Jain, Vivek vivekj...@ebay.com
Sent: Monday, January 26, 2015 6:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] lbaas driver implementation question

Can you try to ping the client IP from your load balancer and see if that 
works? I suspect that the default gateway used on the LB is not correct.

Thanks,
Vivek

From: Shane McGough smcgo...@kemptechnologies.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, January 26, 2015 at 7:32 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron] lbaas driver implementation question

Hi all,

I am implementing a driver for LBaaS and running into some trouble.

The load balancer is a VM hosted on the compute node within devstack and the 
driver can create vips, pools and members no problem.

The problem lies when trying to access the service hosted by the VIP.

From TCP dumps on the load balancer instance we can see that the traffic hits 
the load balancer and the load balancer replies but the reply does not reach 
the client.

Is there an implementation reason why this might occur (i.e. not creating a 
port or doing some neutron configuration within the driver) or could there be 
another issue at play here.

We are using stable/juno devstack for test purposes. We can access the load 
balancer GUI via the browser with no problem; it's just that anything hosted by 
the LB services is inaccessible.

Any help would be appreciated

Thanks,
Shane.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Bug squashing followup

2015-01-28 Thread Derek Higgins
Uptake on this was a bit lower than I would have hoped, but let's go
ahead with it anyway ;-)

We had 6 volunteers altogether.

I've taken the list of current tripleo bugs and split them into groups
of 15 (randomly shuffled) and assigned a group to each volunteer.

Each person should take a look at the bugs they have been assigned and
decide if each is still current (contacting others if necessary); if it's not,
close it. Once you're happy a bug has been correctly assessed, strike
it through on the etherpad.

To others: it's not too late ;-), just add your name to the etherpad and
assign yourself a group of 15 bugs.

thanks,
Derek.

On 18/12/14 11:25, Derek Higgins wrote:
 While bug squashing yesterday, I went through quite a lot of bugs, closing
 around 40 that were already fixed or no longer relevant. I eventually ran
 out of time, but I'm pretty sure that if we split the task up between us we
 could weed out a lot more.
 
 What I'd like to do is, as a one-off, randomly split up all the bugs among
 a group of volunteers (hopefully a large number of people); each person
 gets assigned X number of bugs and is then responsible for just deciding
 if each is still a relevant bug (or finding somebody who can help decide)
 and closing it if necessary. Nothing needs to get fixed here; we just need
 to make sure people have an up-to-date list of relevant bugs.
 
 So who wants to volunteer? We probably need about 15+ people for this to
 be split into manageable chunks. If you're willing to help out, just add
 your name to this list:
 https://etherpad.openstack.org/p/tripleo-bug-weeding
 
 If we get enough people I'll follow up by splitting out the load and
 assigning to people.
 
 The bug squashing day yesterday put a big dent in these, but it wasn't
 entirely focused on weeding out stale bugs: some people probably got
 caught up fixing individual bugs, and it wasn't helped by a temporary
 failure of our CI jobs (provoked by a pbr update; we were building
 pbr when we didn't need to be).
 
 thanks,
 Derek.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Question about VPNaas

2015-01-28 Thread Paul Michali
I can try to comment on your questions... inline @PCM


PCM (Paul Michali)

IRC pc_m (irc.freenode.com)
Twitter... @pmichali


On Tue, Jan 27, 2015 at 9:45 PM, shihanzhang ayshihanzh...@126.com wrote:

 Hi Stacker:

 I am a novice and I want to use Neutron VPNaaS. From my preliminary
 understanding of it, I have two questions:
 (1) Why can a 'vpnservice' have just one subnet?

(2) Why can't the subnet of a 'vpnservice' be changed?


@PCM Currently, the VPN service is designed to set up a site-to-site
connection between two private subnets. The service is associated 1:1 with
(and applies the connection to) a Neutron router that has an interface on
the private network and an interface on the public network. Changing the
subnet for the service would effectively change the router. One would have
to delete and recreate the service to use a different router.

I don't know if the user can attach multiple private subnets to a router,
and the VPN implementation assumes that there is only one private subnet.
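
As a rough illustration of that 1:1 association (the names below are
placeholders and the exact client syntax may differ between releases), a
service is created against exactly one router and one subnet:

  neutron vpn-service-create --name myvpn router1 private-subnet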


 As far as I know, OpenSwan does not have these limitations.
 I've learned that there is a BP to do this:

 https://blueprints.launchpad.net/neutron/+spec/vpn-multiple-subnet
  but this BP has made no progress.


 I want to know whether this will be done in the next cycle or later; can
 anyone help explain?


@PCM I don't know what happened with that BP, but it is effectively
abandoned (even though its status says 'new'). There has not been any activity
on it for over a year, and since we are at a new release, a BP spec would
have been required for Kilo. Also, the bug that drove the issue has been
placed into the Invalid state by Mark McClain in March of last year.

https://bugs.launchpad.net/neutron/+bug/1258375


You could ask Mark for clarification, but I think it may be because the
Neutron router doesn't support multiple subnets.

Regards.


Thanks.

 -shihanzhang




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][VMware] BP priorities

2015-01-28 Thread Gary Kotton
Hi,
At the mid-cycle meet up it was discussed that we need to prioritize the BP's 
that require review. This will at least give us a chance of getting something 
into Nova this cycle.
The following BP's are ready for review:

  1.  Ephemeral disk support - 
https://blueprints.launchpad.net/nova/+spec/vmware-ephemeral-disk-support
  2.  OVA support - 
https://blueprints.launchpad.net/nova/+spec/vmware-driver-ova-support
  3.  Video support - 
https://blueprints.launchpad.net/nova/+spec/vmware-driver-support-for-video-memory
 (I think that this requires 
https://blueprints.launchpad.net/nova/+spec/vmware-video-memory-filter-scheduler)
  4.  Console log - 
https://blueprints.launchpad.net/nova/+spec/vmware-console-log

I would like to propose that this be the order of priorities. Let's discuss it 
over the list and at the IRC meeting later today and next week.

The following BP's need to update their status as they are complete:

  1.  VSAN - https://blueprints.launchpad.net/nova/+spec/vmware-vsan-support

The following need to be abandoned:

  1.  SOAP session management - 
https://blueprints.launchpad.net/nova/+spec/vmware-soap-session-management 
(this is now part of oslo.vmware)

Thanks
Gary

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What should openstack-specs review approval rules be ?

2015-01-28 Thread Thierry Carrez
Hi everyone,

When we first introduced the cross-project specs (specs for things that
may potentially affect all OpenStack projects, or where more convergence
is desirable), we defaulted to rather simple rules for approval:

- discuss the spec in a cross-project meeting
- let everyone +1/-1 and seek consensus
- wait for the affected PTLs to vote
- wait even more
- tally the votes (and agree a consensus is reached) during a TC meeting
- give +2/Workflow+1 to all TC members to let them push the Go button

However, the recent approval of the Log guidelines
(https://review.openstack.org/#/c/132552/) revealed that those may not
be the rules we are looking for.

Sean suggested that only the TC chair should be able to workflow+1 to
avoid accidental approval.

Doug suggested that we should use the TC voting rules (7 YES, or at
least 5 YES and more YES than NO) on those.

In yesterday's meeting, Sean suggested that TC members should still have
a -2-like veto (if there is no TC consensus on the fact that community
consensus is reached, there probably is no real consensus).

There was little time to discuss this more in yesterday's TC meeting, so
I took the action to push that discussion to the ML.

So what is it we actually want for that repository? In a world where
Gerrit can do anything, what would you like to have?

Personally, I want our technical community in general, and our PTLs/CPLs
in particular, to be able to record their opinion on the proposed
cross-project spec. Then, if consensus is reached, the spec should be
approved.

This /could/ be implemented in Gerrit by giving +1/-1 to everyone to
express technical opinion and +2/-2 to TC members to evaluate consensus
(with Workflow+1 to the TC chair to mark when all votes are collected
and consensus is indeed reached).
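
As a rough sketch of how that could look in a Gerrit project.config (the
group names here are invented purely for illustration):

  [access "refs/heads/*"]
    # everyone records a technical opinion
    label-Code-Review = -1..+1 group Registered Users
    # TC members evaluate whether consensus is reached
    label-Code-Review = -2..+2 group openstack-tc
    # the TC chair pushes the button once votes are collected
    label-Workflow = +0..+1 group tc-chair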

Other personal opinions on how you'd like reviews in this repository to be
run?

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack-gate] How to pass through devstack config

2015-01-28 Thread Deepak Shetty
Putting the right tag in the subject to see if someone can help answer the
question below.

thanx,
deepak


On Tue, Jan 27, 2015 at 7:57 PM, Bharat Kumar bharat.kobag...@redhat.com
wrote:

  Hi,

 I have seen Sean Dague's patch [1]. If I understood correctly, with this
 patch we can reduce the number of DEVSTACK_GATE variables that we need. I am
 trying to follow this patch to configure my gate job
 DEVSTACK_GATE_GLUSTERFS [2].

 I am not able to figure out how to use this patch [1].
 Please suggest how to use the patch [1] to configure my gate job [2].

 [1] https://review.openstack.org/#/c/145321/
 [2] https://review.openstack.org/#/c/143308/7/devstack-vm-gate.sh

 --
 Warm Regards,
 Bharat Kumar Kobagana
 Software Engineer
 OpenStack Storage – RedHat India


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-01-28 Thread Kevin Benton
Hi,

Approximately a year and a half ago, the default DHCP lease time in Neutron
was increased from 120 seconds to 86400 seconds.[1] This was done with the
goal of reducing DHCP traffic with very little discussion (based on what I
can see in the review and bug report). While it does indeed reduce DHCP
traffic, I don't think any bug reports were filed showing that a 120 second
lease time resulted in too much traffic or that a jump all of the way to
86400 seconds was required instead of a value in the same order of
magnitude.
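
For context, the knob in question is the dhcp_lease_duration option in
neutron.conf; the value shown is the current default being discussed:

  [DEFAULT]
  dhcp_lease_duration = 86400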

Why does this matter?

Neutron ports can be updated with a new IP address from the same subnet or
another subnet on the same network. The port update will result in
anti-spoofing iptables rule changes that immediately stop the old IP
address from working on the host. This means the host is unreachable for
0-12 hours based on the current default lease time without manual
intervention[2] (assuming half-lease length DHCP renewal attempts).

Why is this on the mailing list?

In an attempt to make the VMs usable in a much shorter timeframe following
a Neutron port address change, I submitted a patch to reduce the default
DHCP lease time to 8 minutes.[3] However, this was upsetting to several
people,[4] so it was suggested I bring this discussion to the mailing list.
The following are the high-level concerns followed by my responses:

   - 8 minutes is arbitrary
     - Yes, but it's no more arbitrary than 1440 minutes. I picked it as an
       interval because it is still 4 times larger than the last short value,
       but it still allows VMs to regain connectivity in 5 minutes in the
       event their IP is changed. If someone has a good suggestion for
       another interval based on known dnsmasq QPS limits or some other
       quantitative reason, please chime in here.
   - other datacenters use long lease times
     - This is true, but it's not really a valid comparison. In most regular
       datacenters, updating a static DHCP lease has no effect on the data
       plane so it doesn't matter that the client doesn't react for
       hours/days (even with DHCP snooping enabled). However, in Neutron's
       case, the security groups are immediately updated so all traffic
       using the old address is blocked.
   - dhcp traffic is scary because it's broadcast
     - ARP traffic is also broadcast and many clients will expire entries
       every 5-10 minutes and re-ARP. L2population may be used to prevent
       ARP propagation, so the comparison between DHCP and ARP isn't always
       relevant here.


Please reply back with your opinions/anecdotes/data related to short DHCP
lease times.

Cheers

1.
https://github.com/openstack/neutron/commit/d9832282cf656b162c51afdefb830dacab72defe
2. Manual intervention could be an instance reboot, a dhcp client
invocation via the console, or a delayed invocation right before the
update. (all significantly more difficult to script than a simple update of
a port's IP via the API).
3. https://review.openstack.org/#/c/150595/
4. http://i.imgur.com/xtvatkP.jpg

-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] novaclient support for V2.1 micro versions

2015-01-28 Thread Christopher Yeoh
On Fri, 23 Jan 2015 16:53:55 +
Day, Phil philip@hp.com wrote:

 Hi Folks,
 
 Is there any support yet in novaclient for requesting a specific
 microversion ?   (looking at the final leg of extending
 clean-shutdown to the API, and wondering how to test this in devstack
 via the novaclient)
 

No, sorry; something should be up within a week.

Regards,

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-01-28 Thread Ihar Hrachyshka

On 01/28/2015 09:50 AM, Kevin Benton wrote:

Hi,

Approximately a year and a half ago, the default DHCP lease time in 
Neutron was increased from 120 seconds to 86400 seconds.[1] This was 
done with the goal of reducing DHCP traffic with very little 
discussion (based on what I can see in the review and bug report). 
While it does indeed reduce DHCP traffic, I don't think any bug 
reports were filed showing that a 120 second lease time resulted in 
too much traffic or that a jump all of the way to 86400 seconds was 
required instead of a value in the same order of magnitude.


I guess that would be a good case for the FORCERENEW DHCP extension [1], 
though after digging through the dnsmasq code a bit, I doubt it supports the 
extension (though e.g. the systemd DHCP client/server from the networkd module 
do). Le sigh.


[1]: https://tools.ietf.org/html/rfc3203



Why does this matter?

Neutron ports can be updated with a new IP address from the same 
subnet or another subnet on the same network. The port update will 
result in anti-spoofing iptables rule changes that immediately stop 
the old IP address from working on the host. This means the host is 
unreachable for 0-12 hours based on the current default lease time 
without manual intervention[2] (assuming half-lease length DHCP 
renewal attempts).


Why is this on the mailing list?

In an attempt to make the VMs usable in a much shorter timeframe 
following a Neutron port address change, I submitted a patch to reduce 
the default DHCP lease time to 8 minutes.[3] However, this was 
upsetting to several people,[4] so it was suggested I bring this 
discussion to the mailing list. The following are the high-level 
concerns followed by my responses:


  * 8 minutes is arbitrary
    o Yes, but it's no more arbitrary than 1440 minutes. I picked it as an
      interval because it is still 4 times larger than the last short value,
      but it still allows VMs to regain connectivity in 5 minutes in the
      event their IP is changed. If someone has a good suggestion for
      another interval based on known dnsmasq QPS limits or some other
      quantitative reason, please chime in here.
  * other datacenters use long lease times
    o This is true, but it's not really a valid comparison. In most regular
      datacenters, updating a static DHCP lease has no effect on the data
      plane so it doesn't matter that the client doesn't react for
      hours/days (even with DHCP snooping enabled). However, in Neutron's
      case, the security groups are immediately updated so all traffic
      using the old address is blocked.
  * dhcp traffic is scary because it's broadcast
    o ARP traffic is also broadcast and many clients will expire entries
      every 5-10 minutes and re-ARP. L2population may be used to prevent
      ARP propagation, so the comparison between DHCP and ARP isn't always
      relevant here.


Please reply back with your opinions/anecdotes/data related to short 
DHCP lease times.


Cheers

1. 
https://github.com/openstack/neutron/commit/d9832282cf656b162c51afdefb830dacab72defe
2. Manual intervention could be an instance reboot, a dhcp client 
invocation via the console, or a delayed invocation right before the 
update. (all significantly more difficult to script than a simple 
update of a port's IP via the API).

3. https://review.openstack.org/#/c/150595/
4. http://i.imgur.com/xtvatkP.jpg

--
Kevin Benton


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Take back the naming process

2015-01-28 Thread Monty Taylor
Quick top-post apology...

It's entirely possible that there are people who are reading these lists
who do not personally know me or my tendency to overuse hyperbole.

I would like to formally apologize for both the subject of this thread
and my use of the word autocratic. They are both inflammatory and both
imply a level of malice or ill-will that I'm certain is not present in anyone.

The salient point is that a thing that in the past felt both open and
fun this time around seemed to become more opaque and heavy handed. I
think, as I've said elsewhere, that we need to be very careful with
decisions, however well meaning, that take place outside of the public
context.

So please accept my apology for my language - and please engage with me
in the discussion around how to make sure people don't inadvertently
begin to feel disenfranchised.

Thanks,
Monty

On 01/27/2015 04:50 PM, Monty Taylor wrote:
 I do not like how we are selecting names for our releases right now.
 The current process is autocratic and opaque and not fun - which is the
 exact opposite of what a community selected name should be.
 
 I propose:
 
 * As soon as development starts on release X, we open the voting for the
 name of release X+1 (we're working on Kilo now, we should have known the
 name of L at the K summit)
 
 * Anyone can nominate a name - although we do suggest that something at
 least related to the location of the associated summit would be nice
 
 * We condorcet vote on the entire list of nominated names
 
 * After we have the winning list, the foundation trademark checks the name
 
 * If there is a trademark issue (and only a trademark issue - not a
 marketing doesn't like the name issue) we'll move down to the next
 name on the list
 
 If we cannot have this process be completely open and democratic, then
 what the heck is the point of having our massive meritocracy in the
 first place? There's a lot of overhead we deal with by being a
 leaderless collective you know - we should occasionally get to have fun
 with it.
 
 Monty
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of kickstart/preseed for all NEW releases

2015-01-28 Thread Vladimir Kozhukalov
Oleg,

In 6.0 we build OS target images during the Fuel ISO build and then we just
put them on the ISO. In 6.1 we are planning to build them (at least the Ubuntu
one) on the master node. We deliberately don't use DIB because it is all
about the cloud case. DIB downloads pre-built cloud images (Ubuntu, RHEL,
Fedora) and customizes them. Unlike the cloud case, we need a long-living OS on a
node (we need to be able to update it, install the kernel and grub locally, and put
different file systems on different block devices) because our case is
deployment. To achieve this goal we need a slightly lower-level mechanism
than the one provided by DIB. For Ubuntu we use debootstrap and for CentOS
we use python-imgcreate.
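
As a very rough sketch of the difference (suite, target directory and mirror
below are placeholders), the Ubuntu path boils down to bootstrapping a root
filesystem directly rather than customizing a pre-built cloud image:

  # build a minimal Ubuntu root filesystem from scratch
  sudo debootstrap --arch=amd64 trusty /var/tmp/ubuntu-rootfs \
      http://archive.ubuntu.com/ubuntu/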

Vladimir Kozhukalov

On Wed, Jan 28, 2015 at 10:35 AM, Oleg Gelbukh ogelb...@mirantis.com
wrote:

 Gentlemen,

 I have one small question about IBP, and I'm not sure if this is the right
 place to ask, but still: how do you plan to build the images for the
 image-based provisioning? Will you leverage diskimage-builder
 https://github.com/openstack/diskimage-builder or some other tool?

 Thanks,


 --
 Best regards,
 Oleg Gelbukh
 Mirantis Labs

 On Tue, Jan 27, 2015 at 10:55 PM, Andrew Woodward xar...@gmail.com
 wrote:

 I don't see this as crazy; it's not a feature of the cloud, it's a
 mechanism to get us there. It's not even something that most of the
 time anyone sees. Continuing to waste time supporting something we are
 ready to replace, and have been testing for a release already, is
 crazy. It adds to the technical debt around provisioning that is
 broken a lot of the time. We spend around 11% of all commits of
 fuel-library on cobbler, templates, pmanager etc.

 It's also not removing it; it will continue to be present to support
 prior releases, so it's even still available if we can't make IBP work
 the way we need to.

 On Tue, Jan 27, 2015 at 2:23 AM, Vladimir Kozhukalov
 vkozhuka...@mirantis.com wrote:
  Guys,
 
   First, we are not talking about deliberately disabling the preseed-based
   approach just because we are so crazy. The question is: what is the best
   way to achieve our 6.1 goals? We definitely need to be able to install two
   versions of Ubuntu, 12.04 and 14.04. Those versions have different sets of
   packages (for example the ntp-related ones) and we install some of those
   packages during provisioning (we point out which packages we need, with
   their versions). To make this work with the preseed-based approach we need
   either to insert some IF release==6.1 conditional lines into the preseed
   (not very beautiful, isn't it?) or to create different Distros and Profiles
   for different releases. The second is not a problem for Cobbler, but it is
   for nailgun/astute, because we do not deal with that stuff and it looks
   like we cannot implement this easily.
  
   IMO, the only options we have are to insert IFs into the preseed (I would
   say it is not more reliable than IBP) or to drop the preseed approach for
   ONLY NEW UPCOMING releases. You can call it crazy, but for me having a set
   of IFs together with pmanager.py, which are absolutely difficult to
   maintain, is crazy.
 
 
 
  Vladimir Kozhukalov
 
  On Tue, Jan 27, 2015 at 3:03 AM, Andrew Woodward xar...@gmail.com
 wrote:
 
  On Mon, Jan 26, 2015 at 10:47 AM, Sergii Golovatiuk
  sgolovat...@mirantis.com wrote:
   Until we are sure IBP solves operation phase where we need to deliver
   updated packages so client will be able to provision new machines
 with
   these
   fixed packages, I would leave backward compatibility with normal
   provision.
   ... Just in case.
 
  doesn't running 'apt-get upgrade' or 'yum update' after laying out the
  FS image resolve the gap until we can rebuild the images on the fly?
  
  
  
   --
   Best regards,
   Sergii Golovatiuk,
   Skype #golserge
   IRC #holser
  
   On Mon, Jan 26, 2015 at 4:56 PM, Vladimir Kozhukalov
   vkozhuka...@mirantis.com wrote:
  
    My suggestion is to make IBP the only option available for all upcoming
    OpenStack releases which are defined in openstack.yaml. It will still be
    possible to install the OS using kickstart for all currently available
    OpenStack releases.
  
   Vladimir Kozhukalov
  
   On Mon, Jan 26, 2015 at 6:22 PM, Igor Kalnitsky
   ikalnit...@mirantis.com
   wrote:
  
   Just want to be sure I understand you correctly: do you propose to
   FORBID kickstart/preseed installation way in upcoming release at
 all?
  
   On Mon, Jan 26, 2015 at 3:59 PM, Vladimir Kozhukalov
   vkozhuka...@mirantis.com wrote:
Subject is changed.
   
Vladimir Kozhukalov
   
On Mon, Jan 26, 2015 at 4:55 PM, Vladimir Kozhukalov
vkozhuka...@mirantis.com wrote:
   
Dear Fuelers,
   
As you might know we need it to be possible to install several
versions of
a particular OS (Ubuntu and Centos) by 6.1  As far as having
different
OS
versions also means having different sets of packages and some
 of
the
packages are installed and configured during provisioning
 stage, we
need 

Re: [openstack-dev] [Heat] core team changes

2015-01-28 Thread Jason Dunsmore
+1

On Tue, Jan 27 2015, Angus Salkeld wrote:

 Hi all

 After having a look at the stats:
 http://stackalytics.com/report/contribution/heat-group/90
 http://stackalytics.com/?module=heat-group&metric=person-day

 I'd like to propose the following changes to the Heat core team:

 Add:
 Qiming Teng
 Huang Tianhua

 Remove:
 Bartosz Górski (Bartosz has indicated that he is happy to be removed and
 doesn't have the time to work on heat ATM).

 Core team please respond with +/- 1.

 Thanks
 Angus
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-28 Thread Denis Makogon
On Tue, Jan 27, 2015 at 10:26 PM, Gordon Sim g...@redhat.com wrote:

 On 01/27/2015 06:31 PM, Doug Hellmann wrote:

 On Tue, Jan 27, 2015, at 12:28 PM, Denis Makogon wrote:

 I'd like to build tool that would be able to profile messaging over
 various deployments. This tool would give me an ability to compare
 results of performance testing produced by native tools and
 oslo.messaging-based tool, eventually it would lead us into digging into
 code and trying to figure out where bad things are happening (that's
 the
 actual place where we would need to profile messaging code). Correct me
 if
 i'm wrong.


 It would be interesting to have recommendations for deployment of rabbit
 or qpid based on performance testing with oslo.messaging. It would also
 be interesting to have recommendations for changes to the implementation
 of oslo.messaging based on performance testing. I'm not sure you want to
 do full-stack testing for the latter, though.

 Either way, I think you would be able to start the testing without any
 changes in oslo.messaging.


 I agree. I think the first step is to define what to measure and then
 construct an application using olso.messaging that allows the data of
 interest to be captured using different drivers and indeed different
 configurations of a given driver.

 I wrote a very simple test application to test one aspect that I felt was
 important, namely the scalability of the RPC mechanism as you increase the
 number of clients and servers involved. The code I used is
  https://github.com/grs/ombt; it's probably stale at the moment, and I only
  link to it as an example of the approach.

 Using that test code I was then able to compare performance in this one
 aspect across drivers (the 'rabbit', 'qpid' and new amqp 1.0 based drivers
 _ I wanted to try zmq, but couldn't figure out how to get it working at the
 time), and for different deployment options using a given driver (amqp 1.0
 using qpidd or qpid dispatch router in either standalone or with multiple
 connected routers).

  There are of course several other aspects that I think would be important
  to explore: notifications, more specific variations in the RPC 'topology'
  (i.e. the number of clients on a given server, the number of servers in a
  single group, etc.), and a better tool (or set of tools) would allow all of
  these to be explored.

 From my experimentation, I believe the biggest differences in scalability
 are going to come not from optimising the code in oslo.messaging so much as
 choosing different patterns for communication. Those choices may be
 constrained by other aspects as well of course, notably approach to
 reliability.
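
 As a minimal sketch of the kind of round-trip measurement such a tool makes
 (this assumes a locally running RabbitMQ, a matching RPC server exposing an
 'echo' method on the same topic, and the oslo.messaging API as of this
 writing; topic and payload are arbitrary):

import time

from oslo_config import cfg
import oslo_messaging as messaging


def time_rpc_calls(url='rabbit://guest:guest@localhost:5672/', calls=1000):
    # Build a transport and an RPC client pointed at a test topic.
    transport = messaging.get_transport(cfg.CONF, url=url)
    target = messaging.Target(topic='perf_test')
    client = messaging.RPCClient(transport, target)

    start = time.time()
    for _ in range(calls):
        # Synchronous round trip; the server side must expose 'echo'.
        client.call({}, 'echo', payload='x' * 1024)
    elapsed = time.time() - start
    print('%d calls in %.2fs (%.1f calls/sec)'
          % (calls, elapsed, calls / elapsed))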



After a couple of internal discussions and hours of investigation, I think I've
found the most applicable solution, one that will accomplish the performance
testing approach and will eventually be distilled into messaging driver
configuration and AMQP service deployment recommendations.

The solution I've been talking about is already pretty well known across
OpenStack components: Rally and its scenarios.
Why would it be the best option? Rally scenarios would not touch the messaging
core. Scenarios are gate-able.
Even if we're talking about internal testing, scenarios are very useful in
this case,
since they are something that can be tuned/configured taking into account
environment needs.

Doug, Gordon, what do you think about bringing scenarios into messaging?

__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Kind regards,
Denis M.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins][Orchestration] Unclear handling of primary-controler and controller roles

2015-01-28 Thread Dmitriy Shulyak
But without this separation at the orchestration layer, we are unable to
differentiate between nodes.
What I mean is: we need to run a subset of tasks on the primary first and then
on all the others, and we are using the role as a mapper
to node identities (and this mechanism has been hardcoded in nailgun for a long
time).

Let's say we have task A that is mapped to the primary controller and task B
that is mapped to the secondary controllers, and task B requires task A.
If there is no primary in the mapping, we will execute task A on all
controllers and then task B on all controllers.

And in such a case, how will the deployment code know that it should not
execute the commands in task A on the secondary controllers and
the commands in task B on the primary?
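
For illustration, the mapping looks roughly like this in a granular
deployment task definition (the task ids, manifests and exact field names
below are approximate and only meant as a sketch):

  - id: task_a_bootstrap_cluster
    type: puppet
    groups: [primary-controller]
    requires: [deploy_start]
    parameters:
      puppet_manifest: bootstrap_cluster.pp
      timeout: 3600
  - id: task_b_join_cluster
    type: puppet
    groups: [controller]
    requires: [task_a_bootstrap_cluster]
    parameters:
      puppet_manifest: join_cluster.pp
      timeout: 3600

Without a primary-controller role, both tasks could only be mapped to
'controller', which is exactly the problem described above.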

On Wed, Jan 28, 2015 at 10:44 AM, Sergii Golovatiuk 
sgolovat...@mirantis.com wrote:

 Hi,

 *But with introduction of plugins and granular deployment, in my opinion,
 we need to be able*
 *to specify that task should run specifically on primary, or on
 secondaries. Alternative to this approach would be - always run task on all
 controllers, and let task itself to verify that it is  executed on primary
 or not.*

  I wouldn't differentiate tasks for primary and other controllers.
  Primary-controller logic should be controlled by the task itself. That will
  allow us to have an elegant and tiny task framework ...

 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 On Tue, Jan 27, 2015 at 11:35 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Hello all,

  You may know that for deployment configuration we are serializing an
  additional prefix for the controller role (primary), with the goal of
  controlling deployment order (the primary controller should always be deployed
  before secondaries) and for some conditions in fuel-library code.

  However, we cannot guarantee that the primary controller will always be the
  same node, because it is not nailgun's business to control the election of the
  primary. Essentially the user should not rely on nailgun
  information to find the primary, but we need to persist the node elected as
  primary in the first deployment
  to resolve orchestration issues (when a new node is added to the cluster we
  should not mark it as primary).

  So we called primary-controller an internal role, which means that it is
  not exposed to users (or external developers).
  But with the introduction of plugins and granular deployment, in my opinion,
  we need to be able
  to specify that a task should run specifically on the primary, or on the
  secondaries. The alternative to this approach would be to always run a task on
  all controllers, and let the task itself verify whether it is executed on the
  primary or not.

  Is it possible to have significantly different sets of tasks for
  controller and primary-controller?
  The same goes for mongo, and I think we had a primary for swift also.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] core team changes

2015-01-28 Thread Steven Hardy
On Wed, Jan 28, 2015 at 11:36:41AM +1000, Angus Salkeld wrote:
Hi all
 
After having a look at the stats:
http://stackalytics.com/report/contribution/heat-group/90
http://stackalytics.com/?module=heat-group&metric=person-day
 
I'd like to propose the following changes to the Heat core team:
Add:
Qiming Teng
Huang Tianhua
 
Remove:
Bartosz Górski (Bartosz has indicated that he is happy to be removed and
doesn't have the time to work on heat ATM).
 
Core team please respond with +/- 1.

+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-novaclient][nova] future of --os-compute-api-version option and whole api versioning

2015-01-28 Thread Christopher Yeoh
On Fri, 23 Jan 2015 15:51:54 +0200
Andrey Kurilin akuri...@mirantis.com wrote:

 Hi everyone!
 After removing nova V3 API from novaclient[1], implementation of v1.1
 client is used for v1.1, v2 and v3 [2].
 Since we moving to micro versions, I wonder, do we need such
 mechanism of choosing api version(os-compute-api-version) or we can
 simply remove it, like in proposed change - [3]?
 If we remove it, how micro version should be selected?
 

So since v3 was never officially released I think we can re-use
os-compute-api-version for microversions which will map to the
X-OpenStack-Compute-API-Version header. See here for details on what
the header will look like. We need to also modify novaclient to handle
errors when a version requested is not supported by the server.

If the user does not specify a version number then we should not send
anything at all. The server will run the default behaviour, which for
quite a while will just be v2.1 (functionally equivalent to v2).

http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/api-microversions.html
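
Purely as an illustration of the header mechanism (endpoint, token and
version number below are placeholders):

  # request a specific microversion explicitly
  curl -H "X-OpenStack-Compute-API-Version: 2.3" \
       -H "X-Auth-Token: $TOKEN" \
       http://nova-api:8774/v2.1/servers

  # omitting the header entirely gets the server's default (v2.1) behaviour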


 
 [1] - https://review.openstack.org/#/c/138694
 [2] -
 https://github.com/openstack/python-novaclient/blob/master/novaclient/client.py#L763-L769
 [3] - https://review.openstack.org/#/c/149006
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-28 Thread Flavio Percoco

On 28/01/15 10:23 +0200, Denis Makogon wrote:



On Tue, Jan 27, 2015 at 10:26 PM, Gordon Sim g...@redhat.com wrote:

   On 01/27/2015 06:31 PM, Doug Hellmann wrote:
  
   On Tue, Jan 27, 2015, at 12:28 PM, Denis Makogon wrote:
  
   I'd like to build tool that would be able to profile messaging over

   various deployments. This tool would give me an ability to
   compare
   results of performance testing produced by native tools and
   oslo.messaging-based tool, eventually it would lead us into digging
   into
   code and trying to figure out where bad things are happening
   (that's
   the
   actual place where we would need to profile messaging code).
   Correct me
   if
   i'm wrong.


   It would be interesting to have recommendations for deployment of
   rabbit
   or qpid based on performance testing with oslo.messaging. It would also
   be interesting to have recommendations for changes to the
   implementation
   of oslo.messaging based on performance testing. I'm not sure you want
   to
   do full-stack testing for the latter, though.

   Either way, I think you would be able to start the testing without any
   changes in oslo.messaging.
  


   I agree. I think the first step is to define what to measure and then
   construct an application using olso.messaging that allows the data of
   interest to be captured using different drivers and indeed different
   configurations of a given driver.

   I wrote a very simple test application to test one aspect that I felt was
   important, namely the scalability of the RPC mechanism as you increase the
   number of clients and servers involved. The code I used is https://
   github.com/grs/ombt, its probably stale at the moment, I only link to it as
   an example of approach.

   Using that test code I was then able to compare performance in this one
   aspect across drivers (the 'rabbit', 'qpid' and new amqp 1.0 based drivers
   _ I wanted to try zmq, but couldn't figure out how to get it working at the
   time), and for different deployment options using a given driver (amqp 1.0
   using qpidd or qpid dispatch router in either standalone or with multiple
   connected routers).

   There are of course several other aspects that I think would be important
   to explore: notifications, more specific variations in the RPC 'topology'
   i.e. number of clients on given server number of servers in single group
   etc, and a better tool (or set of tools) would allow all of these to be
   explored.

   From my experimentation, I believe the biggest differences in scalability
   are going to come not from optimising the code in oslo.messaging so much as
   choosing different patterns for communication. Those choices may be
   constrained by other aspects as well of course, notably approach to
   reliability.




After a couple of internal discussions and hours of investigation, I think I've
found the most applicable solution, one that will accomplish the performance
testing approach and will eventually be distilled into messaging driver
configuration and AMQP service deployment recommendations.

Solution that i've been talking about is already pretty well-known across
OpenStack components - Rally and its scenarios.
Why it would be the best option? Rally scenarios would not touch messaging
 core part. Scenarios are gate-able. 
Even if we're talking about internal testing, scenarios are very useful in this
case, 
since they are something that can be tuned/configured taking into account
environment needs.

Doug, Gordon, what do you think about bringing scenarios into messaging? 


I personally wouldn't mind having them but I'd like us to first
discuss what kind of scenarios we want to test.

I'm assuming these scenarios would be pure oslo.messaging scenarios
and they won't require any of the openstack services. Therefore, I
guess these scenarios would test things like performance with many
consumers, performance with several (a)synchronous calls, etc. What
performance means in this context will have to be discussed as well.

In addition to the above, it'd be really interesting if we could have
tests for things like reconnect delays, which I think is doable with
Rally. Am I right?

Cheers,
Flavio




   __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Kind regards,
Denis M.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
@flaper87
Flavio Percoco



Re: [openstack-dev] [Swift] Swift GUI (free or open source)?

2015-01-28 Thread Martin Geisler
Adam Lawson alaw...@aqorn.com writes:

Hi Adam,

 I'm researching for a web-based visualization that simply displays
 OpenStack Swift and/or node status, cluster health etc in some manner.

I wrote Swift Browser, which will let you browse the containers and
objects in your Swift cluster:

  Repository: https://github.com/zerovm/swift-browser
  Demo here: http://www.zerovm.org/swift-browser/#/

You mention node status and cluster health -- this is unfortunately not
what I considered in Swift Browser.

-- 
Martin Geisler

http://google.com/+MartinGeisler


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Take back the naming process

2015-01-28 Thread Thierry Carrez
Monty Taylor wrote:
 You'll notice that I did say in my suggestion that ANYONE should be able
 to propose a name - I believe that would include non-dev people. Since
 the people in question are marketing people, I would imagine that if any
 of them feel strongly about a name, that it should be trivial for them
 to make their case in a persuasive way.

The proposal as it stands (https://review.openstack.org/#/c/150604/4)
currently excludes all non-ATCs from voting, though. The wider
community was included in previous iterations of the naming process,
so this very much feels like a TC power grab.

 I'm not willing to cede that choosing the name is by definition a
 marketing activity - and in fact the sense that such a position was
 developing is precisely why I think it's time to get this sorted. I
 think the dev community feels quite a bit of ownership on this topic and
 I would like to keep it that way.

It's not by definition a technical activity either, so we are walking a
thin line. Like I commented on the review: I think the TC can retain
ownership of this process and keep the last bits of fun that were still
in it[1], as long as we find a way to keep non-ATCs in the naming
process, and take into account the problematic names raised by the
marketing community team (which will use those names as much as the
technical community does).

[1] FWIW, it's been a long time since I last considered the naming
process as fun. It's not been fun for me at all to handle this process
recently and take hits from all sides (I receive more hatemail about
this process than you would think). As we formalize and clarify this
process, I would be glad to transfer the naming process to some
TC-nominated election official. I consider all this taking back the
naming process effort as a personal reflection on my inability to
preserve the neutrality of the process. It used to be fun, yes, when I
would throw random names on a whiteboard and get the room to pick. It no
longer is.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [UI] Deploy Changes dialog redesign

2015-01-28 Thread Igor Kalnitsky
Nik,

 I'm now here and I don't agree that we need to remove changes
 attribute. On the opposite, I think this is the only attribute which
 should be looked at on UI and backend, and all these
 pending_addition and pending_someotherstuff are obsolete and
 needless.

You're absolutely right. It's better to have one field rather than a
few. However, in the current implementation this field (changes) is
completely unusable. It's not even extensible, since it has a set of
pre-defined values.

So, I propose to solve first tasks first. We can remove it for now (in
order to drop the legacy) and introduce a new implementation when we need it.

Thanks,
Igor

On Tue, Jan 27, 2015 at 11:12 AM, Nikolay Markov nmar...@mirantis.com wrote:
 Guys,

 I'm now here and I don't agree that we need to remove changes
 attribute. On the opposite, I think this is the only attribute which
 should be looked at on UI and backend, and all these
 pending_addition and pending_someotherstuff are obsolete and
 needless.

 Just assume that we'll soon have some plugin or some other tech which
 allows us to modify some settings in the UI after the environment was deployed
 and somehow apply them onto nodes (like, for example, we're planning
 such a thing for VMware). In this case there is no
 pending_addition or other such stuff; these are just changes to
 apply on a node somehow, maybe by just executing some script on them. And
 the same goes for a lot of cases with plugins, which make some services
 on target nodes configurable.

 The pending_addition flag, on the other hand, is useless, because all
 changes we should apply on a node are already listed in the changes
 attribute. We can probably even add provisioning and deployment
 into these pending changes to avoid logic duplication. But still, as
 for me, this is the only working mechanism we should consider and
 the one which will really help us to cover complex cases in the future.

 On Tue, Jan 27, 2015 at 10:52 AM, Mike Scherbakov
 mscherba...@mirantis.com wrote:
 +1, I do not think it's usable as it is now. Let's think, though, about whether
 we can come up with a better idea of how to show what has been changed (or even,
 otherwise, what was not touched - and so might bring a surprise later).
 We might want to think about it after the wizard-like UI is implemented.

 On Mon, Jan 26, 2015 at 8:26 PM, Igor Kalnitsky ikalnit...@mirantis.com
 wrote:

 +1 for removing attribute.

 @Evgeniy, I'm not sure that this attribute really shows all changes
 that's going to be done.

 On Mon, Jan 26, 2015 at 7:11 PM, Evgeniy L e...@mirantis.com wrote:
  To be more specific, +1 for removing this information from UI, not from
  backend.
 
  On Mon, Jan 26, 2015 at 7:46 PM, Evgeniy L e...@mirantis.com wrote:
 
  Hi,
 
  I agree that this information is useless, but it's not really clear
  what
  you are going
  to show instead, will you completely remove the information about nodes
  for deployment?
  I think the list of nodes for deployment (without detailed list of
  changes) can be useful
  for the user.
 
  Thanks,
 
  On Mon, Jan 26, 2015 at 7:23 PM, Vitaly Kramskikh
  vkramsk...@mirantis.com wrote:
 
  +1 for removing changes attribute. It's useless now. If there are no
  plans to add something else there, let's remove it.
 
  2015-01-26 11:39 GMT+03:00 Julia Aranovich jkirnos...@mirantis.com:
 
  Hi All,
 
  Since we changed Deploy Changes pop-up and added processing of role
  limits and restrictions I would like to raise a question of it's
  subsequent
  refactoring.
 
  In particular, I mean 'changes' attribute of cluster model. It's
  displayed in Deploy Changes dialog in the following format:
 
  Changed disks configuration on the following nodes:
 
  node_name_list
 
  Changed interfaces configuration on the following nodes:
 
  node_name_list
 
  Changed network settings
  Changed OpenStack settings
 
  This list looks absolutely useless.
 
  It doesn't make any sense to display lists of new, not deployed nodes
  with changed disks/interfaces. It's obvious I think that new nodes
  attributes await deployment. At the same time user isn't able to
  change
  disks/interfaces on deployed nodes (at least in UI). So, such node
  name
  lists are definitely redundant.
  Networks and settings are also locked after deployment finished.
 
 
  I tend to get rid of cluster model 'changes' attribute at all.
 
  It is important for me to know your opinion, to make a final
  decision.
  Please feel free and share your ideas and concerns if any.
 
 
  Regards,
  Julia
 
  --
  Kind Regards,
  Julia Aranovich,
  Software Engineer,
  Mirantis, Inc
  +7 (905) 388-82-61 (cell)
  Skype: juliakirnosova
  www.mirantis.ru
  jaranov...@mirantis.com
 
 
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Vitaly Kramskikh,
  Fuel UI Tech 

Re: [openstack-dev] [Nova][Neutron] Thoughts on the nova-neutron interface

2015-01-28 Thread Brent Eagles

On 25/01/2015 11:00 PM, Ian Wells wrote:

Lots of open questions in here, because I think we need a long conversation
on the subject.

On 23 January 2015 at 15:51, Kevin Benton blak...@gmail.com wrote:


It seems like a change to using internal RPC interfaces would be pretty
unstable at this point.

Can we start by identifying the shortcomings of the HTTP interface and see
if we can address them before making the jump to using an interface which
has been internal to Neutron so far?



I think the protocol being used is a distraction from the actual
shortcomings.

Firstly, you'd have to explain to me why HTTP is so much slower than RPC.
If HTTP is incredibly slow, can be be sped up?  If RPC is moving the data
around using the same calls, what changes?  Secondly, the problem seems
more that we make too many roundtrips - which would be the same over RPC -
and if that's true, perhaps we should be doing bulk operations - which is
not transport-specific.


I agree. If the problem is too many round trips or the interaction 
being too chatty, I would expect moving towards more service-oriented 
APIs - where HTTP tends to be appropriate. I think we should focus on 
better separation of concerns, and approaches such as bulk operations 
using notifications where cross-process synchronization for a task is 
required. Exploring transport alternatives seems premature until after 
we are satisfied that our house is in order architecture-wise.


Furthermore, I have some off-the-cuff concerns over claims that HTTP 
is slower than RPC in our case. I'm actually used to arguing that RPC is 
faster than HTTP but based on how our RPCs work, I find such an argument 
counter-intuitive. Our REST API calls are direct client-server requests 
with GET's returning results immediately. Our RPC calls involve AMQP and 
a messaging queue server, with requests and replies encapsulated in 
separate messages. If no reply is required, then the RPC *might* be 
dispatched more quickly from the client side as it is simply a message 
being queued. The actual servicing of the request (server side dispatch 
or upcall in broker-parlance) happens some time later, meaning 
possibly never. If the RPC has a return value, then the client must wait 
for the return reply message, which again involves an AMQP message being 
constructed, published and queued, then finally consumed. At the very 
least, this implies latency dependent on the relative location and 
availability of the queue server.


As an aside (meaning you might want to skip this part), one way our RPC 
mechanism might be better than REST over HTTP calls is in the cost of 
constructing and encoding of requests and replies. However, this is more 
of a function of how requests are encoded and less of how they are sent. 
Changing how request payloads are constructed would close that gap. 
Again, reducing the number of requests required to do something would 
reduce the significance of any differences here. Unless the difference 
between the two methods is enormous (like double or an order of 
magnitude), reducing the number of calls to perform a task still has 
more gain than switching methods. Another difference might be in how 
well the transport implementation scales. I would consider disastrous 
scaling characteristics a pretty compelling argument.



I absolutely do agree that Neutron should be doing more of the work, and
Nova less, when it comes to port binding.  (And, in fact, I'd like that we
stopped considering it 'Nova-Neutron' port binding, since in theory another
service attaching stuff to the network could request a port be bound; it
just happens at the moment that it's always Nova.)

One other problem, not yet raised, is that Nova doesn't express its needs
when it asks for a port to be bound, and this is actually becoming a
problem for me right now.  At the moment, Neutron knows, almost
psychically, what binding type Nova will accept, and hands it over; Nova
then deals with whatever binding type it receives (optimistically
expecting it's one it will support, and getting shirty if it isn't).  The
problem I'm seeing at the moment, and that other people have mentioned, is
that certain forwarders can only bind a vhostuser port to a VM if the VM
itself has hugepages enabled.  They could fall back to another binding
type, but at the moment that isn't an option: Nova doesn't tell Neutron
anything about what it supports, so there's no data on which to choose.  It
should be saying 'I will take these binding types, in this preference
order'.  I think, in fact, that asking Neutron for bindings in a stated
preference order would give us much more flexibility - like, for instance,
not having to know exactly which binding type to deliver to which compute
node in multi-hypervisor environments, where at the moment the choice is
made in Neutron.
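
As a purely hypothetical sketch of what that request could look like - 
binding:host_id and binding:profile are real port attributes today, but the 
'allowed_vif_types' key is invented here just to illustrate the 
preference-order idea:

    port_update = {
        'port': {
            'binding:host_id': 'compute-1',
            'binding:profile': {
                # Preference order: fall back to plain OVS if the forwarder
                # cannot bind vhost-user (e.g. the VM has no hugepages).
                'allowed_vif_types': ['vhostuser', 'ovs'],
            },
        },
    }

Neutron could then pick the first type in the list that it is actually able 
to deliver on that host.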

I scanned through the etherpad and I really like Salvatore's idea of adding

a service plugin to Neutron that is designed specifically for 

Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-01-28 Thread Miguel Ángel Ajo


On Wednesday, 28 January 2015 at 09:50, Kevin Benton wrote:

 Hi,
  
 Approximately a year and a half ago, the default DHCP lease time in Neutron 
 was increased from 120 seconds to 86400 seconds.[1] This was done with the 
 goal of reducing DHCP traffic with very little discussion (based on what I 
 can see in the review and bug report). While it does indeed reduce DHCP 
 traffic, I don't think any bug reports were filed showing that a 120 second 
 lease time resulted in too much traffic or that a jump all of the way to 
 86400 seconds was required instead of a value in the same order of magnitude.
  
 Why does this matter?  
  
 Neutron ports can be updated with a new IP address from the same subnet or 
 another subnet on the same network. The port update will result in 
 anti-spoofing iptables rule changes that immediately stop the old IP address 
 from working on the host. This means the host is unreachable for 0-12 hours 
 based on the current default lease time without manual intervention[2] 
 (assuming half-lease length DHCP renewal attempts).
  
 Why is this on the mailing list?
  
 In an attempt to make the VMs usable in a much shorter timeframe following a 
 Neutron port address change, I submitted a patch to reduce the default DHCP 
 lease time to 8 minutes.[3] However, this was upsetting to several people,[4] 
 so it was suggested I bring this discussion to the mailing list. The 
 following are the high-level concerns followed by my responses:
 8 minutes is arbitrary
 Yes, but it's no more arbitrary than 1440 minutes. I picked it as an interval 
 because it is still 4 times larger than the last short value, but it still 
 allows VMs to regain connectivity in 5 minutes in the event their IP is 
 changed. If someone has a good suggestion for another interval based on known 
 dnsmasq QPS limits or some other quantitative reason, please chime in here.
  
 other datacenters use long lease times
 This is true, but it's not really a valid comparison. In most regular 
 datacenters, updating a static DHCP lease has no effect on the data plane so 
 it doesn't matter that the client doesn't react for hours/days (even with 
 DHCP snooping enabled). However, in Neutron's case, the security groups are 
 immediately updated so all traffic using the old address is blocked.
  
 dhcp traffic is scary because it's broadcast
 ARP traffic is also broadcast and many clients will expire entries every 5-10 
 minutes and re-ARP. L2population may be used to prevent ARP propagation, so 
 the comparison between DHCP and ARP isn't always relevant here.
  
  
  
  
From what I’ve seen, at least on Linux, the first DHCP request will be 
broadcast. After that, all lease renewals are unicast, unless the original
DHCP server can’t be contacted, in which case the DHCP client falls back to 
broadcast to try to find another server to renew its lease.

So, only initial boot of an instance should generate broadcast traffic.

Your proposal seems reasonable to me.

In this context, please see this ongoing work [5], especially the comments 
here [6], where we’re discussing optimization  
due to the theoretical 120-second limit for renewals at scale. We made some 
calculations of CPU usage for the current default, and I  
will recalculate those for the new proposed default of 8 minutes.

TL;DR:  
That patch fixes an issue found when you restart dnsmasq and old leases can’t 
be renewed, so we end up with a storm of requests.
To handle that we need to provide dnsmasq with a script for initialization of 
the leases table; initially such a script was provided in Python,
but that means the script is called for: init (once), lease (once per 
instance), and renew (every lease renew time * number of instances).
We should therefore minimize the impact of that script as much as possible, or 
contribute a flag to dnsmasq so that the script is not called
for lease renewals.
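
Just to put rough numbers on that (my own back-of-the-envelope figures, not 
from the patch), assuming clients renew at T1, i.e. half the lease time:

    def renewals_per_second(num_instances, lease_seconds):
        # One unicast DHCPREQUEST per instance every lease_seconds / 2.
        return num_instances / (lease_seconds / 2.0)

    for lease in (120, 480, 86400):  # old default, proposed 8 min, current default
        print(lease, renewals_per_second(1000, lease))
    # 120   -> ~16.7 renewals/s for 1000 ports on a network
    # 480   -> ~4.2 renewals/s
    # 86400 -> ~0.02 renewals/s

So 8 minutes sits well below the old 120-second default in terms of load, 
while keeping the reaction time to a port update bounded.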
  
  
 Please reply back with your opinions/anecdotes/data related to short DHCP 
 lease times.
  
 Cheers
  
 1. 
 https://github.com/openstack/neutron/commit/d9832282cf656b162c51afdefb830dacab72defe
 2. Manual intervention could be an instance reboot, a dhcp client invocation 
 via the console, or a delayed invocation right before the update. (all 
 significantly more difficult to script than a simple update of a port's IP 
 via the API).
 3. https://review.openstack.org/#/c/150595/
 4. http://i.imgur.com/xtvatkP.jpg
  
  
  
  

5. https://review.openstack.org/#/c/108272/ 
(https://review.openstack.org/#/c/108272/8/neutron/agent/linux/dhcp.py)
6. https://review.openstack.org/#/c/108272/8/neutron/agent/linux/dhcp.py  
  
 --  
 Kevin Benton  
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
 

Re: [openstack-dev] [nova] Questions on pep8 F811 hacking check for microversion

2015-01-28 Thread Christopher Yeoh
On Tue, 06 Jan 2015 07:31:19 -0500
Jay Pipes jaypi...@gmail.com wrote:

 On 01/06/2015 06:25 AM, Chen CH Ji wrote:
  Based on nova-specs api-microversions.rst
  we support the following function definition format, but it violates the
  pep8 F811 hacking check because of the duplicate function definition.
  We should use #noqa for them, but considering microversions may
  live for a long time,
  adding #noqa everywhere may be a little bit ugly. Can anyone suggest a
  good solution for it? Thanks
 
  @api_version(min_version='2.1')
  def _version_specific_func(self, req, arg1):
 pass
   
  @api_version(min_version='2.5')
  def _version_specific_func(self, req, arg1):
 pass
 
 Hey Kevin,
 
 This was actually one of my reservations about the proposed 
 microversioning implementation -- i.e. having functions that are
 named exactly the same, only decorated with the microversioning
 notation. It kinda reminds me of the hell of debugging C++ code that
 uses STL: how does one easily know which method one is in when inside
 a debugger?
 
 That said, the only other technique we could try to use would be to
 not use a decorator and instead have a top-level dispatch function
 that would inspect the API microversion (only when the API version
 makes a difference to the output or input of that function) and then
 dispatch the call to a helper method that had the version in its name.
 
 So, for instance, let's say you are calling the controller's GET 
 /$tenant/os-hosts method, which happens to get routed to the 
 nova.api.openstack.compute.contrib.hosts.HostController.index()
 method. If you wanted to modify the result of that method and the API 
 microversion is at 2.5, you might do something like:
 
   def index(self, req):
   req_api_ver = utils.get_max_requested_api_version(req)
   if req_api_ver == (2, 5):
   return self.index_2_5(req)
   return self.index_2_1(req)
 
   def index_2_5(self, req):
   results = self.index_2_1(req)
   # Replaces 'host' with 'host_name'
   for result in results:
   result['host_name'] = result['host']
   del result['host']
   return results
 
   def index_2_1(self, req):
   # Would be a rename of the existing index() method on
   # the controller
 

So having to manually add switching code every time we have an API
patch is, I think, not only longer and more complicated but more error
prone when updating. If we change something at the core in the future it
means changing all the microversioned code rather than just the
switching architecture at the core of wsgi.
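
To make the contrast concrete, here is a minimal, self-contained sketch (not 
Nova's actual implementation) of the decorator approach: both definitions 
deliberately share a name - which is exactly what F811 flags - while the 
decorator keeps the per-version bodies and picks one at call time.

    _REGISTRY = {}

    def api_version(min_version):
        def decorator(func):
            # Keyed by function name only for brevity; a real implementation
            # would scope this per controller class.
            versions = _REGISTRY.setdefault(func.__name__, [])
            versions.append((min_version, func))

            def dispatch(self, req, *args, **kwargs):
                # 'requested_version' is an assumed attribute for this sketch.
                eligible = [(v, f) for (v, f) in versions
                            if v <= req.requested_version]
                _, impl = max(eligible)  # newest version the request allows
                return impl(self, req, *args, **kwargs)
            return dispatch
        return decorator

    class Controller(object):
        @api_version(min_version=(2, 1))
        def index(self, req):
            return {'hosts': ['host']}

        @api_version(min_version=(2, 5))
        def index(self, req):  # noqa - intentional redefinition
            return {'hosts': ['host_name']}

    class Request(object):
        def __init__(self, version):
            self.requested_version = version

    print(Controller().index(Request((2, 1))))  # {'hosts': ['host']}
    print(Controller().index(Request((2, 5))))  # {'hosts': ['host_name']}

All of the version plumbing lives in one place, which is the point I'm trying
to make about keeping the switching inside the framework.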


 Another option would be to use something like JSON-patch to determine 
 the difference between two output schemas and automatically translate 
 one to another... but that would be a huge effort.
 
 That's the only other way I can think of besides disabling F811,
 which I really would not recommend, since it's a valuable safeguard
 against duplicate function names (especially duplicated test methods).

So I don't think we need to disable F811 in general - why not just
disable it for any method with the api_version decorator? On those ones
we can do checks on what is passed to api_version, which will help
verify that there hasn't been a typo in an api_version decorator.

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] core team changes

2015-01-28 Thread Sergey Kraynev
+1

Regards,
Sergey.

On 28 January 2015 at 10:52, Pavlo Shchelokovskyy 
pshchelokovs...@mirantis.com wrote:

 +1

 Pavlo Shchelokovskyy
 Software Engineer
 Mirantis Inc
 www.mirantis.com

 On Wed, Jan 28, 2015 at 8:26 AM, Thomas Herve thomas.he...@enovance.com
 wrote:


  Hi all
 
  After having a look at the stats:
  http://stackalytics.com/report/contribution/heat-group/90
  http://stackalytics.com/?module=heat-groupmetric=person-day
 
  I'd like to propose the following changes to the Heat core team:
 
  Add:
  Qiming Teng
  Huang Tianhua
 
  Remove:
  Bartosz Górski (Bartosz has indicated that he is happy to be removed and
  doesn't have the time to work on heat ATM).
 
  Core team please respond with +/- 1.
 
  Thanks
  Angus

 +1!

 --
 Thomas


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] [glance] Consistency in client side sorting

2015-01-28 Thread Christopher Yeoh
On Mon, 05 Jan 2015 11:10:41 -0500
Jay Pipes jaypi...@gmail.com wrote:

 
  Thoughts on getting consistency across all 3 projects (and possibly
  others)?
 
 Yeah, I personally like the second option as well, but agree that 
 consistency is the key (pun intended) here.
 
 I would say let's make a decision on the standard to go with
 (possibly via the API or SDK working groups?) and then move forward
 with support for that option in all three clients (and continue to
 support the old behaviour for 2 release cycles, with deprecation
 markings as appropriate).

+1 to making this available in a consistent way. We need to support
the old client behaviour for at least a couple of cycles (maybe a bit
longer), and the burden of doing so is pretty low. I don't think, however,
that we can drop the REST API behaviour that quickly. Two cycles for API
deprecation has in the past been considered an insufficient length of
time because of app breakage.

Regards,

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins][Orchestration] Unclear handling of primary-controler and controller roles

2015-01-28 Thread Sergii Golovatiuk
Hi,

*But with introduction of plugins and granular deployment, in my opinion,
we need to be able*
*to specify that task should run specifically on primary, or on
secondaries. Alternative to this approach would be - always run task on all
controllers, and let task itself to verify that it is  executed on primary
or not.*

I wouldn't differentiate tasks for primary and other controllers.
Primary-controller logic should be controlled by the task itself. That will
allow us to keep the task framework elegant and tiny ...

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Tue, Jan 27, 2015 at 11:35 PM, Dmitriy Shulyak dshul...@mirantis.com
wrote:

 Hello all,

 You may know that for deployment configuration we are serializing
 additional prefix for controller role (primary), with the goal of
 deployment order control (primary-controller always should be deployed
 before secondaries) and some conditions in fuel-library code.

 However, we cannot guarantee that primary controller will be always the
 same node, because it is not business of nailgun to control elections of
 primary. Essentially user should not rely on nailgun
 information to find primary, but we need to persist node elected as
 primary in first deployment
 to resolve orchestration issues (when new node added to cluster we should
 not mark it as primary).

 So we called primary-controller - internal role, which means that it is
 not exposed to users (or external developers).
 But with introduction of plugins and granular deployment, in my opinion,
 we need to be able
 to specify that task should run specifically on primary, or on
 secondaries. Alternative to this approach would be - always run task on all
 controllers, and let task itself to verify that it is  executed on primary
 or not.

 Is it possible to have significantly different sets of tasks for
 controller and primary-controller?
 And same goes for mongo, and i think we had primary for swift also.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins][Orchestration] Unclear handling of primary-controler and controller roles

2015-01-28 Thread Aleksandr Didenko
Hi,

we definitely need such separation on orchestration layer.

 Is it possible to have significantly different sets of tasks for
controller and primary-controller?

Right now we already do different things on primary and secondary
controllers, but it's all conducted in the same manifest and controlled by
conditionals inside the manifest. So when we split our tasks into smaller
ones, we may want/need to separate them for primary and secondary
controllers.

 I wouldn't differentiate tasks for primary and other controllers.
Primary-controller logic should be controlled by task itself. That will
allow to have elegant and tiny task framework

Sergii, we still need this separation on the orchestration layer and, as
you know, our deployment process is based on it. Currently we already have
separate task groups for primary and secondary controller roles. So it will
be up to the task developer how to handle a particular task for
different roles: the developer can write two different tasks (one for
'primary-controller' and the other one for 'controller'), or write
the same task for both groups and handle the differences inside the task.
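
To illustrate the two alternatives (hypothetical task definitions, written 
here as plain Python data for brevity rather than real tasks.yaml content):

    # Alternative 1: two tasks, each mapped to its own orchestration group,
    # with an explicit ordering dependency between them.
    split_tasks = [
        {'id': 'setup-cluster', 'groups': ['primary-controller'],
         'type': 'puppet'},
        {'id': 'join-cluster', 'groups': ['controller'],
         'type': 'puppet', 'requires': ['setup-cluster']},
    ]

    # Alternative 2: one task mapped to both groups; the manifest itself
    # decides what to do based on whether the node it runs on is the primary.
    shared_task = [
        {'id': 'configure-cluster',
         'groups': ['primary-controller', 'controller'],
         'type': 'puppet'},
    ]

Either way, the orchestration layer still needs to know which node is the 
primary, which is why I think we need to keep the separation there.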

--
Regards,
Aleksandr Didenko


On Wed, Jan 28, 2015 at 11:25 AM, Dmitriy Shulyak dshul...@mirantis.com
wrote:

 But without this separation on orchestration layer, we are unable to
 differentiate between nodes.
 What i mean is - we need to run subset of tasks on primary first and then
 on all others, and we are using role as mapper
 to node identities (and this mechanism was hardcoded in nailgun for a long
 time).

 Lets say we have task A that is mapped to primary-controller and B that is
 mapped to secondary controller, task B requires task A.
 If there is no primary in mapping - we will execute task A on all
 controllers and then task B on all controllers.

 And how in such case deployment code will know that it should not execute
 commands in task A for secondary controllers and
 in task B on primary ?

 On Wed, Jan 28, 2015 at 10:44 AM, Sergii Golovatiuk 
 sgolovat...@mirantis.com wrote:

 Hi,

 *But with introduction of plugins and granular deployment, in my opinion,
 we need to be able*
 *to specify that task should run specifically on primary, or on
 secondaries. Alternative to this approach would be - always run task on all
 controllers, and let task itself to verify that it is  executed on primary
 or not.*

 I wouldn't differentiate tasks for primary and other controllers.
 Primary-controller logic should be controlled by task itself. That will
 allow to have elegant and tiny task framework ...

 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 On Tue, Jan 27, 2015 at 11:35 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Hello all,

 You may know that for deployment configuration we are serializing
 additional prefix for controller role (primary), with the goal of
 deployment order control (primary-controller always should be deployed
 before secondaries) and some conditions in fuel-library code.

 However, we cannot guarantee that primary controller will be always the
 same node, because it is not business of nailgun to control elections of
 primary. Essentially user should not rely on nailgun
 information to find primary, but we need to persist node elected as
 primary in first deployment
 to resolve orchestration issues (when new node added to cluster we
 should not mark it as primary).

 So we called primary-controller - internal role, which means that it
 is not exposed to users (or external developers).
 But with introduction of plugins and granular deployment, in my opinion,
 we need to be able
 to specify that task should run specifically on primary, or on
 secondaries. Alternative to this approach would be - always run task on all
 controllers, and let task itself to verify that it is  executed on primary
 or not.

 Is it possible to have significantly different sets of tasks for
 controller and primary-controller?
 And same goes for mongo, and i think we had primary for swift also.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not 

Re: [openstack-dev] [Nova] Requirements.txt and optional requirements

2015-01-28 Thread Silvan Kaiser
Hi All!
Thanks for the feedback!

I'll remove xattr from the requirements in my change set.
Currently I'm working on a workaround that executes 'getfattr' instead of the
xattr API call. We can ensure getfattr is available via the package
dependencies of our client, which has to be installed anyway.
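
Roughly what I have in mind (a simplified sketch only - the helper names are 
illustrative, and the real driver code would go through the usual Nova 
execution utilities rather than calling subprocess directly):

    import subprocess

    def get_xattr(path, name):
        # e.g. get_xattr('/mnt/volume-1', 'user.some.attr')
        out = subprocess.check_output(
            ['getfattr', '--only-values', '-n', name, path])
        return out.strip()

    def set_xattr(path, name, value):
        subprocess.check_call(['setfattr', '-n', name, '-v', value, path])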

I'm also checking out your proposal in parallel, but I cannot find any
documentation about the 'configuration management manifests'. Do you mean
the Puppet manifests? Otherwise, could somebody please give me a pointer to
their documentation?

Best regards
Silvan


2015-01-27 18:32 GMT+01:00 Jay Pipes jaypi...@gmail.com:

 On 01/27/2015 09:13 AM, Silvan Kaiser wrote:

 Am 27.01.2015 um 16:51 schrieb Jay Pipes jaypi...@gmail.com:
 b) The Glance API image cache can use xattr if SQLite is not
 desired [1], and Glance does *not* list xattr as a dependency in
 requirements.txt. Swift also has a dependency on python-xattr [2].
 So, this particular Python library is not an unknown by any means.

 Do you happen to know how Glance handles this if the dep. is not
 handled in requirements.txt?


 Yep, it's considered a documentation thing and handled in configuration
 management manifests...

 Best,
 -jay


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 

--
*Quobyte* GmbH
Boyenstr. 41 - 10115 Berlin-Mitte - Germany
+49-30-814 591 800 - www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] core team changes

2015-01-28 Thread Thomas Spatzier
+1 on all changes.

Regards,
Thomas

 From: Angus Salkeld asalk...@mirantis.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 28/01/2015 02:40
 Subject: [openstack-dev] [Heat] core team changes

 Hi all

 After having a look at the stats:
 http://stackalytics.com/report/contribution/heat-group/90
 http://stackalytics.com/?module=heat-groupmetric=person-day

 I'd like to propose the following changes to the Heat core team:

 Add:
 Qiming Teng
 Huang Tianhua

 Remove:
 Bartosz Górski (Bartosz has indicated that he is happy to be removed
 and doesn't have the time to work on heat ATM).

 Core team please respond with +/- 1.

 Thanks
 Angus

__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Writing own L3 service plugin

2015-01-28 Thread Sławek Kapłoński

Hello,

I want to use the L3 functionality with my own device in place of the normal 
L3 agent. So I think the best way will be to write my own L3 service plugin 
which will configure the router on my special device. But there are a few 
things I don't know yet. For example, when I add a new port to a router 
(router-interface-add), a new port is created in the network but it is not 
bound anywhere. So how should I make it bind to my device so that, for 
example, the OVS agents will be able to establish VXLAN tunnels to 
this device?
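
The rough direction I have been considering looks something like this 
(untested; class and module names are from memory and likely 
release-dependent, and 'my-appliance-host' is just a placeholder): override 
add_router_interface and then bind the new port to the host where the 
appliance lives, so that l2pop and the OVS agents learn where to terminate 
the VXLAN tunnel.

    from neutron import manager
    from neutron.services.l3_router import l3_router_plugin

    class MyL3ServicePlugin(l3_router_plugin.L3RouterPlugin):

        def add_router_interface(self, context, router_id, interface_info):
            info = super(MyL3ServicePlugin, self).add_router_interface(
                context, router_id, interface_info)
            core = manager.NeutronManager.get_plugin()
            # Bind the router port to the appliance's host so the L2 agents
            # know where the other end of the tunnel is.
            core.update_port(context, info['port_id'],
                             {'port': {'binding:host_id': 'my-appliance-host'}})
            return info

Is that the right track, or is there a better supported way to get the port 
bound?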


--
Pozdrawiam
Sławek Kapłonski
sla...@kaplonski.pl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-28 Thread Dmitriy Shulyak


 It's not clear what problem you are going to solve with putting serializers
 alongside the deployment scripts/tasks.

I see two possible uses for task-specific serializers:
1. Compute additional information for deployment based not only on what is
present in astute.yaml
2. Request information from external sources based on values stored in fuel
inventory (like some token based on credentials)

For sure there is no way for these serializers to have access to the
 database,
 because with each release there would be a high probability of these
 serializers breaking, for example because of changes in the database schema.
 As Dmitry mentioned, in this case the solution is to create another layer
 which provides a stable external interface to the data.
 We already have this interface, where we support versioning and backward
 compatibility; in terms of deployment scripts it's the astute.yaml file.

That is the problem: it is impossible to cover everything with astute.yaml.
We need to think of a way to present all the data available in Nailgun as
deployment configuration.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] removing single mode

2015-01-28 Thread Sergey Vasilenko
+1 to replace simple to HA with one controller

/sv
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] temporarily disabling python 3.x testing for oslo.messaging and oslo.rootwrap

2015-01-28 Thread Flavio Percoco

On 27/01/15 13:22 -0500, Doug Hellmann wrote:



On Tue, Jan 27, 2015, at 12:44 PM, Julien Danjou wrote:

On Tue, Jan 27 2015, Clark Boylan wrote:

 So the issue is that the garbage collector segfaults on null objects in
 the to be garbage collected list. Which means that by the time garbage
 collection breaks you don't have the info you need to know what
 references lead to the segfault. I spent a bit of time in gdb debugging
 this and narrowed it down enough to realize what the bug was and find it
 was fixed in later python releases but didn't have the time to sort out
 how to figure out specifically which references in oslo.messaging caused
 the garbage collector to fall over.

╯‵Д′)╯彡┻━┻

Ok, then let's disable it I guess. If there's a chance to keep something
has even a non-voting job, that'd be cool, but I'm not even sure that's
an option if it just doesn't work and we can't keep py33.


I did think about a non-voting job, but there's not much point. We
expect it to fail with the segfault, so we would just be wasting
resources. :-/


+1 for disabling it for now, although this means we may have some
incompatibilities when we put it back.

Fla





--
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins][Orchestration] Unclear handling of primary-controler and controller roles

2015-01-28 Thread Evgeniy L
Hi,

+1 for having primary-controller role in terms of deployment.
In our tasks the user should be able to run a specific task on the primary
controller. But I agree that it can be tricky: after the cluster is deployed
we cannot say which node is really the primary. Is there a case where it's
important to know
which node is really the primary after deployment is done?

Also I would like to mention that in plugins user currently can write
'roles': ['controller'],
which means that the task will be applied on 'controller' and
'primary-controller' nodes.
Plugin developer can get this information from astute.yaml file. But I'm
curious if we
should change this behaviour for plugins (with backward compatibility of
course)?

Thanks,


On Wed, Jan 28, 2015 at 1:07 PM, Aleksandr Didenko adide...@mirantis.com
wrote:

 Hi,

 we definitely need such separation on orchestration layer.

  Is it possible to have significantly different sets of tasks for
 controller and primary-controller?

 Right now we already do different things on primary and secondary
 controllers, but it's all conducted in the same manifest and controlled by
 conditionals inside the manifest. So when we split our tasks into smaller
 ones, we may want/need to separate them for primary and secondary
 controllers.

  I wouldn't differentiate tasks for primary and other controllers.
 Primary-controller logic should be controlled by task itself. That will
 allow to have elegant and tiny task framework

 Sergii, we still need this separation on the orchestration layer and, as
 you know, our deployment process is based on it. Currently we already have
 separate task groups for primary and secondary controller roles. So it will
 be up to the task developer how to handle some particular task for
 different roles: developer can write 2 different tasks (one for
 'primary-controller' and the other one for 'controller'), or he can write
 the same task for both groups and handle differences inside the task.

 --
 Regards,
 Aleksandr Didenko


 On Wed, Jan 28, 2015 at 11:25 AM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 But without this separation on orchestration layer, we are unable to
 differentiate between nodes.
 What i mean is - we need to run subset of tasks on primary first and then
 on all others, and we are using role as mapper
 to node identities (and this mechanism was hardcoded in nailgun for a
 long time).

 Lets say we have task A that is mapped to primary-controller and B that
 is mapped to secondary controller, task B requires task A.
 If there is no primary in mapping - we will execute task A on all
 controllers and then task B on all controllers.

 And how in such case deployment code will know that it should not execute
 commands in task A for secondary controllers and
 in task B on primary ?

 On Wed, Jan 28, 2015 at 10:44 AM, Sergii Golovatiuk 
 sgolovat...@mirantis.com wrote:

 Hi,

 *But with introduction of plugins and granular deployment, in my
 opinion, we need to be able*
 *to specify that task should run specifically on primary, or on
 secondaries. Alternative to this approach would be - always run task on all
 controllers, and let task itself to verify that it is  executed on primary
 or not.*

 I wouldn't differentiate tasks for primary and other controllers.
 Primary-controller logic should be controlled by task itself. That will
 allow to have elegant and tiny task framework ...

 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 On Tue, Jan 27, 2015 at 11:35 PM, Dmitriy Shulyak dshul...@mirantis.com
  wrote:

 Hello all,

 You may know that for deployment configuration we are serializing
 additional prefix for controller role (primary), with the goal of
 deployment order control (primary-controller always should be deployed
 before secondaries) and some conditions in fuel-library code.

 However, we cannot guarantee that primary controller will be always the
 same node, because it is not business of nailgun to control elections of
 primary. Essentially user should not rely on nailgun
 information to find primary, but we need to persist node elected as
 primary in first deployment
 to resolve orchestration issues (when new node added to cluster we
 should not mark it as primary).

 So we called primary-controller - internal role, which means that it
 is not exposed to users (or external developers).
 But with introduction of plugins and granular deployment, in my
 opinion, we need to be able
 to specify that task should run specifically on primary, or on
 secondaries. Alternative to this approach would be - always run task on all
 controllers, and let task itself to verify that it is  executed on primary
 or not.

 Is it possible to have significantly different sets of tasks for
 controller and primary-controller?
 And same goes for mongo, and i think we had primary for swift also.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:

Re: [openstack-dev] [OpenStack Foundation] [tc] Take back the naming process

2015-01-28 Thread Monty Taylor
On 01/27/2015 10:35 PM, James E. Blair wrote:
 Lauren Sell lau...@openstack.org writes:
 
 Hey Monty,

 I’d like to weigh in here, because I think there have been some
 misunderstandings around Lemming-gate. I’m glad you raised your
 concerns; it’s a good test of release naming for us all to discuss and
 learn from.

 To provide a little context for those new to the discussion,
 historically, when it’s time to name the development cycle, open
 suggestions are taken on a wiki page
 (https://wiki.openstack.org/wiki/Release_Naming) after which the
 Technical Committee works to create a short list that are then voted
 on by the entire community. Typically, Foundation staff play a role in
 this process to work with our trademark counsel to vet the release
 names. We register them to ensure our rights, and they become
 significant brands for the OpenStack community, as well as all of the
 companies who are building and marketing their products on
 OpenStack. One of the names that was proposed for the L development
 cycle was Lemming.

 So little-known fact, I’m actually a huge fan of rodents (I’ve had
 several pet rats), but I’m afraid the name Lemming conjures up more
 than a small mammal. The dictionary.com definition is a member of any
 large group following an unthinking course towards mass destruction,
 or if you prefer Urban Dictionary, “a member of a crowd with no
 originality or voice of his own. One who speaks or repeats only what
 he has been told. A tool. A cretin.”

 When I heard that Lemming was a consideration, I was a bit
 concerned. Most of all, I care about and am protective of this
 community, and I think that would paint us with a pretty big / easy
 target. Regardless, I did the due diligence with our trademark
 counsel, and they provided the following feedback: “The proposed
 trademark LEMMING cleared our preliminary search for the usual
 goods/services, subject to the usual limitations.  The majority of
 applications/registrations that others have filed for the term are
 dead (no pun intended).  I take this to mean the brand generally has
 problems in the marketplace due to negative connotation.”

 So, I reached out to Thierry and a few of the TC members to share my
 perspective and concern from a marketing standpoint. I have a lot of
 respect for you and this community, and I would hate to jeopardize the
 perception of your work. I am very sensitive to the fact that I do not
 have a magical marketing veto; I was simply providing feedback and
 trying to bring another perspective to the conversation. My sense from
 talking to them was that Lemming was kind of a joke and not a serious
 option. I also read the notes of the following TC meeting, and it
 didn’t seem like there was much of an issue...so I stopped worrying
 about it.

 (TC meeting notes for reference, you can search Lemming in this
 discussion:
 http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-01-13-20.01.log.txt)

 Anyhow, it seems like it’s boiled into a larger issue, and I’m more
 than happy to have the discussion and get more input. I stand by my
 advice and hope our community leaders will make a reasonable
 decision. I certainly don’t want to take the fun out of release
 naming, but at the end of the day we are all pretty fortunate and have
 quite a bit of fun as part of this community. I was just trying to
 protect it.
 
 I said lemming three times in that meeting, (which is three times more
 than anyone else), so I should probably respond here.
 
 That meeting was the first time I heard about the disappearance of
 Lemming from the list.  I did not feel that the way in which it happened
 was in accordance with the way we expected the process to operate.
 Nonetheless, due to the vagaries of the current process and out of a
 desire to avoid further delay in naming the Lemming release, I chose not
 to argue for the inclusion of Lemming in the poll.  I continue to hold
 that position, and I am not advocating that we include it now.

I would like to echo Jim's sentiments that this is largely about the way
a decision was made, and not about the decision itself. The below
defense of lemmings notwithstanding - I agree that I do not think that
lemming is a good name. However, I think there is a very interesting
latent conversation available about the nature of choice and also about
the media's ability to fabricate complete falsehoods and have them
become part of the vernacular. If anyone can't find modern and relevant
analogues to the one-sided destruction of the popular image of the
lemming in the tech world, they're simply not paying attention.

We may not chose to pick up that particular topic as a fight we want to
have right now - but I ardently want to be able to have the discussion
about it.

We have chosen to have an unprecedented amount of corporate involvement
in our project - and I think it's proven to be extremely valuable. One
of the things it does bring with it though are a bunch of people who
come from the 

Re: [openstack-dev] [Fuel] [UI] Deploy Changes dialog redesign

2015-01-28 Thread Nikolay Markov
Igor,

But why can't we implement it properly on the first try? It doesn't
seem like a hard task and won't take much time.

On Wed, Jan 28, 2015 at 12:50 PM, Igor Kalnitsky
ikalnit...@mirantis.com wrote:
 Nik,

 I'm now here and I don't agree that we need to remove changes
 attribute. On the opposite, I think this is the only attribute which
 should be looked at on UI and backend, and all these
 pending_addition and pending_someotherstuff are obsolete and
 needless.

 You're absolutely right. It's better to have one field rather than
 few. However, in current implementation this field (changes) is
 completely unusable. It's not even extensible, since it has
 pre-defined values.

 So, I propose to solve first tasks first. We can remove it for now (in
 order to drop legacy) and introduce new implementation when we need.

 Thanks,
 Igor

 On Tue, Jan 27, 2015 at 11:12 AM, Nikolay Markov nmar...@mirantis.com wrote:
 Guys,

 I'm now here and I don't agree that we need to remove changes
 attribute. On the opposite, I think this is the only attribute which
 should be looked at on UI and backend, and all these
 pending_addition and pending_someotherstuff are obsolete and
 needless.

 Just assume, that we'll soon have some plugin or just some tech which
 allows us to modify some settings on UI after environment was deployed
 and somehow apply it onto nodes (like, for example, we're planning
 such thing for VMWare). In this case there is no any
 pending_addition or some other stuff, these are just changes to
 apply on a node somehow, maybe just execute some script on them. And
 the same goes to a lot of cases with plugins, which do some services
 on target nodes configurable.

 Pending_addition flag, on the other hand, is useless, because all
 changes we should apply on node are already listed in changes
 attribute. We can even probably add provisioning and deployment
 into these pending changes do avoid logic duplication. But still, as
 for me, this is the only working mechanism we should consider and
 which will really help us to cver complex cases in the future.

 On Tue, Jan 27, 2015 at 10:52 AM, Mike Scherbakov
 mscherba...@mirantis.com wrote:
 +1, I do not think it's usable as how it is now. Let's think though if we
 can come up with better idea how to show what has been changed (or even
 otherwise, what was not touched - and so might bring a surprise later).
 We might want to think about it after wizard-like UI is implemented.

 On Mon, Jan 26, 2015 at 8:26 PM, Igor Kalnitsky ikalnit...@mirantis.com
 wrote:

 +1 for removing attribute.

 @Evgeniy, I'm not sure that this attribute really shows all changes
 that's going to be done.

 On Mon, Jan 26, 2015 at 7:11 PM, Evgeniy L e...@mirantis.com wrote:
  To be more specific, +1 for removing this information from UI, not from
  backend.
 
  On Mon, Jan 26, 2015 at 7:46 PM, Evgeniy L e...@mirantis.com wrote:
 
  Hi,
 
  I agree that this information is useless, but it's not really clear
  what
  you are going
  to show instead, will you completely remove the information about nodes
  for deployment?
  I think the list of nodes for deployment (without detailed list of
  changes) can be useful
  for the user.
 
  Thanks,
 
  On Mon, Jan 26, 2015 at 7:23 PM, Vitaly Kramskikh
  vkramsk...@mirantis.com wrote:
 
  +1 for removing changes attribute. It's useless now. If there are no
  plans to add something else there, let's remove it.
 
  2015-01-26 11:39 GMT+03:00 Julia Aranovich jkirnos...@mirantis.com:
 
  Hi All,
 
  Since we changed Deploy Changes pop-up and added processing of role
  limits and restrictions I would like to raise a question of it's
  subsequent
  refactoring.
 
  In particular, I mean 'changes' attribute of cluster model. It's
  displayed in Deploy Changes dialog in the following format:
 
  Changed disks configuration on the following nodes:
 
  node_name_list
 
  Changed interfaces configuration on the following nodes:
 
  node_name_list
 
  Changed network settings
  Changed OpenStack settings
 
  This list looks absolutely useless.
 
  It doesn't make any sense to display lists of new, not deployed nodes
  with changed disks/interfaces. It's obvious I think that new nodes
  attributes await deployment. At the same time user isn't able to
  change
  disks/interfaces on deployed nodes (at least in UI). So, such node
  name
  lists are definitely redundant.
  Networks and settings are also locked after deployment finished.
 
 
  I tend to get rid of cluster model 'changes' attribute at all.
 
  It is important for me to know your opinion, to make a final
  decision.
  Please feel free and share your ideas and concerns if any.
 
 
  Regards,
  Julia
 
  --
  Kind Regards,
  Julia Aranovich,
  Software Engineer,
  Mirantis, Inc
  +7 (905) 388-82-61 (cell)
  Skype: juliakirnosova
  www.mirantis.ru
  jaranov...@mirantis.com
 
 
 
  __
  OpenStack Development Mailing List (not 

Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-28 Thread Evgeniy L
Hi Vladimir,

It's not clear what problem you are going to solve by putting serializers
alongside the deployment scripts/tasks.
For sure there is no way for these serializers to have access to the
database,
because with each release there would be a high probability of these
serializers breaking, for example because of changes in the database schema.
As Dmitry mentioned, in this case the solution is to create another layer
which provides a stable external interface to the data.
We already have this interface, where we support versioning and backward
compatibility; in terms of deployment scripts it's the astute.yaml file.
So we can add Python code which takes this hash/dict and retrieves all of the
data required for a specific task, but it means that if you want to pass
some new data, you have to fix the code in two places: in Nailgun, and in the
task-specific serializers. That looks like added complexity and
over-engineering.

Thanks,

On Tue, Jan 27, 2015 at 11:47 AM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 Dmitry

 This is an interesting topic. As per our discussions earlier, I suggest
 that in the future we move to different serializers for each granule of our
 deployment, so that we do not need to drag a lot of senseless data into
 particular task being executed. Say, we have a fencing task, which has a
 serializer module written in python. This module is imported by Nailgun and
 what it actually does, it executes specific Nailgun core methods that
 access database or other sources of information and retrieve data in the
 way this task wants it instead of adjusting the task to the only
 'astute.yaml'.

 On Thu, Jan 22, 2015 at 8:59 PM, Evgeniy L e...@mirantis.com wrote:

 Hi Dmitry,

 The problem with merging is that it's usually not clear how the system
 performs the merge.
 For example, you have the hash {'list': [{'k': 1}, {'k': 2}, {'k': 3}]},
 and I want
 {'list': [{'k': 4}]} to be merged. What should the system do? Replace the
 list or append {'k': 4}?
 Both cases should be covered.
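
In plain Python (just to illustrate the ambiguity; this is not Nailgun code),
the two readings are:

    generated = {'list': [{'k': 1}, {'k': 2}, {'k': 3}]}
    uploaded = {'list': [{'k': 4}]}

    # Interpretation 1: uploaded keys replace generated ones wholesale.
    replaced = dict(generated, **uploaded)
    # -> {'list': [{'k': 4}]}

    # Interpretation 2: lists are concatenated during a deep merge.
    merged = {'list': generated['list'] + uploaded['list']}
    # -> {'list': [{'k': 1}, {'k': 2}, {'k': 3}, {'k': 4}]}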

 Most of the users don't remember all of the keys, usually user gets the
 defaults, and
 changes some values in place, in this case we should ask user to remove
 the rest
 of the fields.

 The only solution which I see is to separate the data from the graph, not
 to send
 this information to user.

 Thanks,

 On Thu, Jan 22, 2015 at 5:18 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Hi guys,

 I want to discuss the way we are working with deployment configuration
 that were redefined for cluster.

 In case it was redefined by API - we are using that information instead
 of generated.
 With one exception, we will generate new repo sources and path to
 manifest if we are using update (patching feature in 6.0).

 Starting from 6.1 this configuration will be populated by tasks, which
 is a part of granular deployment
 workflow and replacement of configuration will lead to inability to use
 partial graph execution API.
 Ofcourse it is possible to hack around and make it work, but imo we need
 generic solution.

 Next problem - if user will upload replaced information, changes on
 cluster attributes, or networks, wont be reflected in deployment anymore
 and it constantly leads to problems for deployment engineers that are using
 fuel.

 What if user want to add data, and use generated of networks,
 attributes, etc?
 - it may be required as a part of manual plugin installation (ha_fencing
 requires a lot of configuration to be added into astute.yaml),
 - or you need to substitute networking data, e.g add specific parameters
 for linux bridges

 So given all this, i think that we should not substitute all
 information, but only part that is present in
 redefined info, and if there is additional parameters they will be
 simply merged into generated info


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack 

Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-28 Thread Denis Makogon
On Wed, Jan 28, 2015 at 11:39 AM, Flavio Percoco fla...@redhat.com wrote:

 On 28/01/15 10:23 +0200, Denis Makogon wrote:



 On Tue, Jan 27, 2015 at 10:26 PM, Gordon Sim g...@redhat.com wrote:

On 01/27/2015 06:31 PM, Doug Hellmann wrote:
  On Tue, Jan 27, 2015, at 12:28 PM, Denis Makogon wrote:
  I'd like to build tool that would be able to profile
 messaging over
various deployments. This tool would give me an ability to
compare
results of performance testing produced by native tools and
oslo.messaging-based tool, eventually it would lead us into
 digging
into
code and trying to figure out where bad things are happening
(that's
the
actual place where we would need to profile messaging code).
Correct me
if
i'm wrong.


It would be interesting to have recommendations for deployment of
rabbit
or qpid based on performance testing with oslo.messaging. It would
 also
be interesting to have recommendations for changes to the
implementation
of oslo.messaging based on performance testing. I'm not sure you
 want
to
do full-stack testing for the latter, though.

Either way, I think you would be able to start the testing without
 any
changes in oslo.messaging.

I agree. I think the first step is to define what to measure and then
construct an application using olso.messaging that allows the data of
interest to be captured using different drivers and indeed different
configurations of a given driver.

I wrote a very simple test application to test one aspect that I felt
 was
important, namely the scalability of the RPC mechanism as you increase
 the
number of clients and servers involved. The code I used is https://
github.com/grs/ombt, its probably stale at the moment, I only link to
 it as
an example of approach.

Using that test code I was then able to compare performance in this one
aspect across drivers (the 'rabbit', 'qpid' and new amqp 1.0 based
 drivers
_ I wanted to try zmq, but couldn't figure out how to get it working
 at the
time), and for different deployment options using a given driver (amqp
 1.0
using qpidd or qpid dispatch router in either standalone or with
 multiple
connected routers).

There are of course several other aspects that I think would be
 important
to explore: notifications, more specific variations in the RPC
 'topology'
i.e. number of clients on given server number of servers in single
 group
etc, and a better tool (or set of tools) would allow all of these to be
explored.

From my experimentation, I believe the biggest differences in
 scalability
are going to come not from optimising the code in oslo.messaging so
 much as
choosing different patterns for communication. Those choices may be
constrained by other aspects as well of course, notably approach to
reliability.




 After couple internal discussions and hours of investigations, i think
 i've
 foung the most applicabale solution
 that will accomplish performance testing approach and will eventually be
 evaluated as messaging drivers
 configuration and AMQP service deployment recommendataion.

 The solution I've been talking about is already pretty well known across
 OpenStack components - Rally and its scenarios.
 Why it would be the best option? Rally scenarios would not touch messaging
  core part. Scenarios are gate-able.
 Even if we're talking about internal testing, scenarios are very useful
 in this
 case,
 since they are something that can be tuned/configured taking into account
 environment needs.

 Doug, Gordon, what do you think about bringing scenarios into messaging?


 I personally wouldn't mind having them but I'd like us to first
 discuss what kind of scenarios we want to test.

 I'm assuming these scenarios would be pure oslo.messaging scenarios
 and they won't require any of the openstack services. Therefore, I
 guess these scenarios would test things like performance with many

consumers, performance with several (a)synchronous calls, etc. What
 performance means in this context will have to be discussed as well.


Correct, oslo.messaging scenarios would expect to have only AMQP service
and nothing else.
Yes, that's what I've been thinking about. Also, I'd like to share a doc that
i've found, see [1].
As I can see, it would be more than useful to enable the following scenarios:

   - Single multi-thread publisher (rpc client) against single multi-thread
   consumer
  - using RPC cast/call methods try to measure time between request and
  response.
   - Multiple multi-thread publishers against single multi-thread consumer
  - using RPC cast/call methods try to measure time between requests
  and responses to multiple publishers.
   - Multiple multi-thread 

Re: [openstack-dev] [cinder] [nova] [scheduler] Nova node name passed to Cinder

2015-01-28 Thread Philipp Marek
Hello Vishvananda,

 Initialize connection passes that data to cinder in the call. The connector
 dictionary in the call should contain the info from nova:
 
 https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L1051
Ah yes, I see.
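
For the archives: the connector dict passed in that call looks roughly like 
this (keys taken from a libvirt-based compute node; exact contents vary by 
driver and release), and the 'host' entry carries exactly the Nova node name 
I was after:

    connector = {
        'ip': '10.0.0.2',
        'host': 'compute-1.example.com',
        'initiator': 'iqn.1994-05.com.redhat:example',
        'multipath': False,
    }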

Thank you very much!


Regards,

Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] python-barbicanclient 3.0.2 released

2015-01-28 Thread Jeremy Stanley
On 2015-01-28 02:42:28 + (+), Douglas Mendizabal wrote:
 Thanks for the heads up, and for adding the stable-compat-jobs to
 the client. [1]  This is an interesting problem, since the
 proposal bot keeps the python-barbicanclient requirements in sync
 with global-requirements.  I’m not sure what the correct fix for
 this is?

The plan we discussed at the last summit and have refined further
since is to freeze/cap versions of all dependencies (including every
transitive dependency) in stable support branches and then create
stable branches of clients and libraries as needed to backport
security fixes.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] About Sahara Oozie plan and Spark CDH Issues

2015-01-28 Thread Trevor McKay
Intel folks,

Belated welcome to Sahara!  Thank you for your recent commits.

Moving this thread to openstack-dev so others may contribute, cc'ing
Daniele and Pietro who pioneered the Spark plugin.

I'll respond with another email about Oozie work, but I want to
address the Spark/Swift issue in CDH since I have been working
on it and there is a task which still needs to be done -- that
is to upgrade the CDH version in the spark image and see if
the situation improves (see below)

Relevant reviews are here:

https://review.openstack.org/146659
https://review.openstack.org/147955
https://review.openstack.org/147985
https://review.openstack.org/146659

In the first review, you can see that we set an extra driver
classpath to pull in '/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar'.

This is because the spark-assembly JAR in CDH4 contains classes from
jackson-mapper-asl-1.8.8 and jackson-core-asl-1.9.x. When the
hadoop-swift.jar dereferences a Swift path, it calls into code
from jackson-mapper-asl-1.8.8 which uses JsonClass.  But JsonClass
was removed in jackson-core-asl-1.9.x, so there is an exception.

Therefore, we need to use the classpath to either upgrade the version of
jackson-mapper-asl to 1.9.x or downgrade the version of jackson-core-asl
to 1.8.8 (both work in my testing).  However, the first of these options
requires us to bundle an extra jar.  Since /usr/lib/hadoop already
contains jackson-core-asl-1.8.8, it is easier to just add that to the
classpath and downgrade the jackson version.

Note, there are some references to this problem on the spark mailing list,
we are not the only ones to encounter it.

However, I am not completely comfortable with mixing versions and
patching the classpath this way.  It looks to me like the Spark assembly
used in CDH5 has consistent versions, and I would like to try updating
the CDH version in sahara-image-elements to CDH5 for Spark. If this fixes
the problem and removes the need for the extra classpath, that would be
great.

Would someone like to take on this change? (modifying sahara-image-elements
to use CDH5 for Spark images) I can make a blueprint for
it.

More to come about Oozie topics.

Best regards,

Trevor

On Thu, 2015-01-15 at 15:34 +, Chen, Weiting wrote:
 Hi Mckay.
 
  
 
 We are Intel team and contributing OpenStack Sahara project.
 
 We are new in Sahara and would like to do more contributions in this
 project.
 
 So far, we are focusing on Sahara CDH Plugin.
 
 So if there is any issues related on this, please feel free to discuss
 with us.
 
  
 
 During IRC meeting, there are two issues you mentioned and we would
 like to discuss with you.
 
 1.  Oozie Workflow Support: 
 
 Do you have any plan could share with us about your idea?
 
 Because in our case, we are testing to run a java action job with
 HBase library support and also facing some problems about Oozie
 support.
 
 So it should be good to share the experience with each other.
 
 
 
 2.  Spark CDH Issues: 
 
 Could you provide more information about this issue? In the CDH plugin, we
 have used CDH 5 to pass the Swift tests, so it should be fine to upgrade
 from CDH 4 to 5.
 
  
 
 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] design question : green thread model

2015-01-28 Thread murali reddy
Hello,

I am trying to understand how a Nova component can be run in parallel on a
host. The developer reference documentation seems to indicate that all the
OpenStack services use the green thread model of threading. Is it the only
model of parallelism for all the components, or can multiple processes be
used for a Nova service on a host? Is nova.service, which appears to do an
os.fork, used to fork multiple processes for a Nova service?

I am a newbie, so pardon me if this question has been asked before.

Thanks
Murali
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Take back the naming process

2015-01-28 Thread Jeremy Stanley
On 2015-01-28 23:37:18 +0800 (+0800), Tom Fifield wrote:
 If logistics are getting complicated, is it necessary to lock it
 down so much? I vaguely recall a launchpad poll in the past, which
 was effectively open to the public? Is voting on the shortlisted
 names something we should just open wide up so that we're
 including absolutely everyone in the fun?

If the proposal is Condorcet, then I don't think Launchpad polls are
going to suffice?
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Take back the naming process

2015-01-28 Thread Thierry Carrez
James E. Blair wrote:
 Considering that the process used to be
 a poll of the ~openstack group on launchpad, it seemed like a fairly
 straightforward mapping to ATCs.  I wanted to find the easiest way to
 get the most people in the community likely to vote as possible without
 needing to generate a new voting roll.  But you are correct: if we're
 fixing this, let's fix it right.

Actually, since Launchpad ~openstack group usage was discontinued, I
used an open surveymonkey poll for the last two picks. That meant
*anyone* (who knew about the poll) could vote.

 The next best thing I can think of is to use the entire Foundation
 Individual Membership to produce the roll for the CIVS poll.  It will be
 a bit of extra work, but I believe that is about as broad of a
 definition of our community that we use.

I didn't use CIVS in the past because it doesn't support such a large
number of voters.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Questions on pep8 F811 hacking check for microversion

2015-01-28 Thread Matthew Gilliard
F811 is not part of our hacking lib - it's in flake8.  As far as I know,
it's not possible to selectively disable that for particular files or
methods.  And as mentioned earlier in the list and when I asked in
#openstack-dev the feeling was that we don't want to disable F811 globally
because it's a useful check. So I think we have to choose between:

* Continuing to use #noqa
* Disabling F811 globally
* Modifying flake8
* Something else I haven't thought of

  Matthew
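
As a concrete illustration of the first option (an editorial sketch, based on
the example quoted further down; exactly which line flake8 reports the
redefinition against can vary, so the marker may need to sit on the def line
rather than the decorator line):

    @api_version(min_version='2.1')
    def _version_specific_func(self, req, arg1):
        pass

    @api_version(min_version='2.5')  # noqa
    def _version_specific_func(self, req, arg1):
        pass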

On Wed, Jan 28, 2015 at 7:43 AM, Chen CH Ji jiche...@cn.ibm.com wrote:

 Is there a way to override the rule in our hacking checks (I'm not familiar
 with them)?
 If so, maybe we can do as suggested and avoid F811 for the classes which have
 microversion definitions? Thanks

 Best Regards!

 Kevin (Chen) Ji 纪 晨

 Engineer, zVM Development, CSTL
 Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
 Phone: +86-10-82454158
 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
 Beijing 100193, PRC


 From: Christopher Yeoh cbky...@gmail.com
 To: openstack-dev@lists.openstack.org
 Date: 01/28/2015 09:37 AM
 Subject: Re: [openstack-dev] [nova] Questions on pep8 F811 hacking check
 for microversion
 --



 On Tue, 06 Jan 2015 07:31:19 -0500
 Jay Pipes jaypi...@gmail.com wrote:

  On 01/06/2015 06:25 AM, Chen CH Ji wrote:
   Based on the nova-specs api-microversions.rst, we support the following
   function definition format, but it violates the pep8/hacking rule F811
   because of the duplicate function definition. We should use #noqa for
   them, but considering that microversions may live for a long time,
   adding #noqa everywhere may be a little ugly. Can anyone suggest a
   good solution for this? Thanks
  
   @api_version(min_version='2.1')
   def _version_specific_func(self, req, arg1):
  pass

   @api_version(min_version='2.5')
   def _version_specific_func(self, req, arg1):
  pass
 
  Hey Kevin,
 
  This was actually one of my reservations about the proposed
  microversioning implementation -- i.e. having functions that are
  named exactly the same, only decorated with the microversioning
  notation. It kinda reminds me of the hell of debugging C++ code that
  uses STL: how does one easily know which method one is in when inside
  a debugger?
 
  That said, the only other technique we could try to use would be to
  not use a decorator and instead have a top-level dispatch function
  that would inspect the API microversion (only when the API version
  makes a difference to the output or input of that function) and then
  dispatch the call to a helper method that had the version in its name.
 
  So, for instance, let's say you are calling the controller's GET
  /$tenant/os-hosts method, which happens to get routed to the
  nova.api.openstack.compute.contrib.hosts.HostController.index()
  method. If you wanted to modify the result of that method and the API
  microversion is at 2.5, you might do something like:
 
def index(self, req):
req_api_ver = utils.get_max_requested_api_version(req)
if req_api_ver == (2, 5):
return self.index_2_5(req)
return self.index_2_1(req)
 
def index_2_5(self, req):
results = self.index_2_1(req)
# Replaces 'host' with 'host_name'
for result in results:
result['host_name'] = result['host']
del result['host']
return results
 
def index_2_1(self, req):
# Would be a rename of the existing index() method on
# the controller
 

 So having to manually add switching code every time we have an API
 patch is, I think, not only longer and more complicated but also more error
 prone when updating. If we change something at the core in the future, it
 means changing all the microversioned code rather than just the
 switching architecture at the core of wsgi.


  Another option would be to use something like JSON-patch to determine
  the difference between two output schemas and automatically translate
  one to another... but that would be a huge effort.
 
  That's the only other way I can think of besides disabling F811,
  which I really would not recommend, since it's a valuable safeguard
  against duplicate function names (especially duplicated test methods).

 So I don't think we need to disable F811 in general - why not just
 disable it for any method with the api_version decorator? On those ones
 we can do checks on what is passed to api_version which will help
 verify that there hasn't been a typo in an api_version decorator.

 Chris
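
For readers who have not seen the pattern, a toy sketch of the kind of
decorator-driven switching Chris describes follows. It is illustrative only:
the registry, the api_version name and the req.api_version attribute are
assumptions, not Nova's actual microversion machinery.

    # Illustrative sketch only -- not Nova's real implementation.
    _VERSIONED = {}  # method name -> [(min_version tuple, function), ...]

    def api_version(min_version):
        min_tuple = tuple(int(p) for p in min_version.split('.'))

        def decorator(func):
            variants = _VERSIONED.setdefault(func.__name__, [])
            variants.append((min_tuple, func))

            def dispatch(self, req, *args, **kwargs):
                requested = getattr(req, 'api_version', (2, 1))
                eligible = [v for v in variants if v[0] <= requested]
                # Highest-versioned variant the request satisfies wins.
                _, chosen = max(eligible, key=lambda v: v[0])
                return chosen(self, req, *args, **kwargs)

            return dispatch
        return decorator

    class HostController(object):
        @api_version('2.1')
        def index(self, req):
            return [{'host': 'node1'}]

        @api_version('2.5')        # same name on purpose; this is what
        def index(self, req):      # trips F811 and motivates the thread
            return [{'host_name': 'node1'}]

With the switching living in the decorator, the only extra guard needed is
the check on what is passed to api_version, as Chris suggests.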

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 

Re: [openstack-dev] [nova] [api] Get servers with limit and IP address filter

2015-01-28 Thread Vishvananda Ishaya

On Jan 28, 2015, at 7:05 AM, Steven Kaufer kau...@us.ibm.com wrote:

 Vishvananda Ishaya vishvana...@gmail.com wrote on 01/27/2015 04:29:50 PM:
 
  From: Vishvananda Ishaya vishvana...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
  Date: 01/27/2015 04:32 PM
  Subject: Re: [openstack-dev] [nova] [api] Get servers with limit and
  IP address filter
  
  The network info for an instance is cached as a blob of data 
  (neutron has the canonical version in most installs), so it isn’t 
  particularly easy to do at the database layer. You would likely need
  a pretty complex stored procedure to do it accurately.
  
  Vish
 
 Vish,
 
 Thanks for the reply.
 
 I agree with your point about the difficulty in accurately querying the blob
 of data; however, IMHO, the complexity of this fix does not preclude the current
 behavior from being classified as a bug.
 
 With that in mind, I was wondering if anyone in the community has any
 thoughts on whether the current behavior is considered a bug?
 

Yes it should be classified as a bug.
 
 If so, how should it be resolved? A couple options that I could think of:
 
 1. Disallow the combination of using both a limit and an IP address filter by 
 raising an error.
 

I think this is the simplest solution.

Vish

 2. Workaround the problem by removing the limit from the DB query and then 
 manually limiting the list of servers (after manually applying the IP address 
 filter).
 3. Break up the query so that the server UUIDs that match the IP filter are 
 retrieved first and then used as a UUID DB filter. As far as I can tell, this 
 type of solution was originally implemented but the network query was deemed 
 to expensive [1]. Is there a less expensive method to determine the UUIDs 
 (possibly querying the cached 'network_info' in the 'instance_info_caches' 
 table)?
 4. Figure out how to accurately query the blob of network info that is cached 
 in the nova DB and apply the IP filter at the DB layer.
 
 [1]: https://review.openstack.org/#/c/131460/
 
 Thanks,
 Steven Kaufer
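
 As an editorial aside, option 2 above amounts to something like the
 following hedged sketch (not the Nova code; the flat 'addresses' list is an
 assumption about what would be extracted from the cached network info):

     import re

     def filter_and_limit(instances, ip_regex, limit):
         pattern = re.compile(ip_regex)
         matched = []
         for inst in instances:
             # 'addresses' assumed to be a flat list of IP strings pulled
             # out of the instance's cached network info.
             if any(pattern.search(addr)
                    for addr in inst.get('addresses', [])):
                 matched.append(inst)
                 if limit is not None and len(matched) >= limit:
                     break
         return matched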
 
  
  On Jan 27, 2015, at 2:00 PM, Steven Kaufer kau...@us.ibm.com wrote:
  
  Hello,
  
  When applying an IP address filter to a paginated servers query (eg,
  supplying servers/detail?ip=192.168limit=100), the IP address 
  filtering is only being applied against the non-filtered page of 
  servers that were retrieved from the DB; see [1].
  
  I believe that the IP address filtering should be done before the 
  limit is applied, returning up to limit servers that match the IP 
  address filter.  Currently, if the servers in the page of data 
  returned from the DB do not happen to match the IP address filter 
  (applied in the compute API), then no servers will be returned by 
  the REST API (even if there are servers that match the IP address filter).
  
  This seems like a bug to me, shouldn't all filtering be done at the DB 
  layer?
  
  [1]: https://github.com/openstack/nova/blob/master/nova/compute/
  api.py#L2037-L2042
  
  Thanks,
  Steven Kaufer
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Take back the naming process

2015-01-28 Thread Thierry Carrez
Monty Taylor wrote:
 What if, to reduce stress on you, we make this 100% mechanical:
 
 - Anyone can propose a name
 - Election officials verify that the name matches the criteria
 -  * note: how do we approve additive exceptions without tons of effort

Devil is in the details, as reading some of my hatemail would tell you.
For example in the past I rejected Foo which was proposed because
there was a Foo Bar landmark in the vicinity. The rules would have to
be pretty detailed to be entirely objective.

 - Marketing team provides feedback to the election officials on names
 they find image-wise problematic
 - The poll is created with the roster of all foundation members
 containing all of the choices, but with the marketing issues clearly
 labeled, like this:
 
 * Love
 * Lumber
 * Lettuce
 * Lemming - marketing issues identified
 
 - post poll - foundation staff run trademarks checks on the winners in
 order until a legally acceptable winner is found
 
 This way nobody is excluded, it's not a burden on you, it's about as
 transparent as it could be - and there are no special privileges needed
 for anyone to volunteer to be an election official.
 
 I'm going to continue to advocate that we use condorcet instead of a
 launchpad poll because we need the ability to rank things for post-vote
 trademark checks to not get weird. (also, we're working on getting off
 of launchpad, so let's not re-add another connection)

It's been some time since we last used a Launchpad poll. I recently used
an open surveymonkey poll, which allowed crude ranking. Agree that
Condorcet is better, as long as you can determine a clear list of voters.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-01-28 Thread Carl Baldwin
On Wed, Jan 28, 2015 at 9:52 AM, Salvatore Orlando sorla...@nicira.com wrote:
 The patch Kevin points out increased the lease to 24 hours (which I agree is
 as arbitrary as 2 minutes, 8 minutes, or 1 century) because it introduced
 use of DHCPRELEASE message in the agent, which is supported by dnsmasq (to
 the best of my knowledge) and is functionally similar to FORCERENEW.

My understanding was that the dhcp release mechanism in dnsmasq does
not actually unicast a FORCERENEW message to the client.  Does it?  I
thought it just released dnsmasq's record of the lease.  If I'm right,
this is a huge difference.  It is a big pain knowing that there are
many clients out there who may not renew their leases to get updated
dhcp options for hours and hours.  I don't think there is a reliable
way for the server to force renew to the client, is there?  Do clients
support the FORCERENEW unicast message?

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Take back the naming process

2015-01-28 Thread Thierry Carrez
Thierry Carrez wrote:
 James E. Blair wrote:
 Considering that the process used to be
 a poll of the ~openstack group on launchpad, it seemed like a fairly
 straightforward mapping to ATCs.  I wanted to find the easiest way to
 get the most people in the community likely to vote as possible without
 needing to generate a new voting roll.  But you are correct: if we're
 fixing this, let's fix it right.
 
 Actually, since Launchpad ~openstack group usage was discontinued, I
 used an open surveymonkey poll for the last two picks. That meant
 *anyone* (who knew about the poll) could vote.
 
 The next best thing I can think of is to use the entire Foundation
 Individual Membership to produce the roll for the CIVS poll.  It will be
 a bit of extra work, but I believe that is about as broad of a
 definition of our community that we use.
 
 I didn't use CIVS in the past because it doesn't support such a large
 number of voters.

Reading their site again they seem to have raised the hard limit (which
was around 4K IIRC). So we can probably use that (entering voters 1000
at a time).

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] multicloud support for ec2

2015-01-28 Thread Hongbin Lu
Hi,

I would appreciate it if someone could reply to the email below. Thanks.

Best regards,
Hongbin

On Sun, Jan 25, 2015 at 12:03 AM, Hongbin Lu hongbin...@gmail.com wrote:

 Hi Heat team,

 I am looking for a solution to bridge between OpenStack and EC2. According
 to documents, it seems that Heat has multicloud support but the remote
 cloud(s) must be OpenStack. I wonder if Heat supports multicloud in the
 context of supporting remote EC2 cloud. For example, does Heat support a
 remote stack that contains resources from EC2 cloud? As a result, creating
 a stack will provision local OpenStack resources along with remote EC2
 resources.

 If this feature is not supported, will the dev team accept blueprint
 and/or contributions for that?

 Thanks,
 Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-01-28 Thread Vishvananda Ishaya

On Jan 28, 2015, at 9:36 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 On Wed, Jan 28, 2015 at 9:52 AM, Salvatore Orlando sorla...@nicira.com 
 wrote:
 The patch Kevin points out increased the lease to 24 hours (which I agree is
 as arbitrary as 2 minutes, 8 minutes, or 1 century) because it introduced
 use of DHCPRELEASE message in the agent, which is supported by dnsmasq (to
 the best of my knowledge) and is functionally similar to FORCERENEW.
 
 My understanding was that the dhcp release mechanism in dnsmasq does
 not actually unicast a FORCERENEW message to the client.  Does it?  I
 thought it just released dnsmasq's record of the lease.  If I'm right,
 this is a huge difference.  It is a big pain knowing that there are
 many clients out there who may not renew their leases to get updated
 dhcp options for hours and hours.  I don't think there is a reliable
 way for the server to force renew to the client, is there?  Do clients
 support the FORCERENEW unicast message?

If you are using the dhcp-release script (that we got included in ubuntu years
ago for nova-network), it sends a release packet on behalf of the client so
that dnsmasq can update its leases table, but it doesn’t send any message to
the client to tell it to update.

Vish

 
 Carl
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] proposal for unwinding database usage from tests

2015-01-28 Thread Chen CH Ji
Sorry for the late reply and thanks for bringing this up. I agree the create_db
flag will increase the complexity,
so I might do some PoC work and write a spec to do it next release.

For this sentence, I don't fully understand: are you suggesting that every DB
usage removal should be
a patch per test class? Thanks a lot.
I'd like to propose instead DB usage should be removed per test Class as
an atomic unit.

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Sean Dague s...@dague.net
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Cc: Chen CH Ji/China/IBM@IBMCN
Date:   01/24/2015 07:13 PM
Subject:[nova] proposal for unwinding database usage from tests



I've been looking at the following patch series -
https://review.openstack.org/#/c/131691/13 for removing database
requirements from some tests.

I whole heartedly support getting DB usage out of tests, but I'd like to
make sure that we don't create new challenges in the process. The
conditional create_db parameter in test functions adds a bit more
internal test complexity than I think we should have.

I'd like to propose instead DB usage should be removed per test Class as
an atomic unit. If that turns into too large a patch that probably means
breaking apart the test class into smaller test classes first.

The other thing is to be careful in understanding the results of timing
tests. The way the database fixture works, it caches the migration
process -
https://github.com/openstack/nova/blob/master/nova/tests/fixtures.py#L206

That actually means that the overhead of the db-migration sync is paid
only once per testr worker (it's 1s on my fast workstation, might be 2s
on gate nodes). The subsequent db setups for tests 2 - N in the worker
only take about 0.020s on my workstation (scale appropriately). So
removing all the unneeded db setup code is probably only going to save
~30s over an entire test run.

Which doesn't mean it shouldn't be done, there are other safety reasons
we shouldn't let every test randomly punch data into the db and still
pass. But time savings should not be the primary motivator here, because
it's actually not nearly as much gain as it looks like from running only
a small number of tests.
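
The caching described above works roughly like the sketch below (illustrative
only, not the actual nova/tests/fixtures.py code; the _migrate_and_dump helper
is hypothetical):

    import fixtures

    _DB_SCHEMA_CACHE = {}  # database name -> schema DDL from the migrations

    def _migrate_and_dump(database):
        # Hypothetical stand-in for walking the real migration scripts.
        return 'CREATE TABLE instances (uuid TEXT PRIMARY KEY);'

    class CachedDatabase(fixtures.Fixture):
        """Sketch of the per-worker caching idea, not the real fixture."""

        def __init__(self, database='main'):
            super(CachedDatabase, self).__init__()
            self.database = database

        def setUp(self):
            super(CachedDatabase, self).setUp()
            if self.database not in _DB_SCHEMA_CACHE:
                # The expensive migration walk runs once per testr worker.
                _DB_SCHEMA_CACHE[self.database] = _migrate_and_dump(
                    self.database)
            # Later tests in the same worker just replay the cached DDL,
            # which is why per-test DB setup drops to ~0.02s.
            self.schema = _DB_SCHEMA_CACHE[self.database]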

 -Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-01-28 Thread Evgeniy L
Hi Dmitry,

I'm not sure if we should use an approach where the task executor reads
some data from the file system; ideally Nailgun should push
all of the required data to Astute.
But that can be tricky to implement, so I vote for the 2nd approach.

Thanks,

On Wed, Jan 28, 2015 at 7:08 PM, Aleksandr Didenko adide...@mirantis.com
wrote:

 3rd option is about using rsyncd that we run under xinetd on primary
 controller. And yes, the main concern here is security.

 On Wed, Jan 28, 2015 at 6:04 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 Hi.
 I vote for the second option, because if we want to implement some
 unified hierarchy (like Fuel as a CA for keys on controllers for different
 envs) then it will fit better than the other options. If we implement the 3rd
 option then we will reinvent the wheel with SSL in the future. Bare rsync as
 storage for private keys sounds pretty uncomfortable to me.

 On Wed, Jan 28, 2015 at 6:44 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Hi folks,

 I want to discuss the way we are working with generated keys for
 nova/ceph/mongo and something else.

 Right now we are generating keys on the master itself, and then distributing
 them by mcollective
 transport to all nodes. As you may know, we are in the process of describing
 this process as a task.

 There are a couple of options:
 1. Expose keys in rsync server on master, in folder /etc/fuel/keys, and
 then copy them with rsync task (but it feels not very secure)
 2. Copy keys from /etc/fuel/keys on master, to /var/lib/astute on target
 nodes. It will require additional
 hook in astute, smth like copy_file, which will copy data from file on
 master and put it on the node.

 Also there is 3rd option to generate keys right on primary-controller
 and then distribute them on all other nodes, and i guess it will be
 responsibility of controller to store current keys that are valid for
 cluster. Alex please provide more details about 3rd approach.

 Maybe there is more options?




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] design question : green thread model

2015-01-28 Thread murali reddy
Thanks JE.

On hosts with multi-core processors, it does not seem optimal to run a
single service instance with just green threads. I understand that on a
controller node we can run one or more Nova services, but that still does not
seem to utilize multi-core processors.

Is this not a Nova scaling concern?

On Wed, Jan 28, 2015 at 10:38 PM, Johannes Erdfelt johan...@erdfelt.com
wrote:

 On Wed, Jan 28, 2015, murali reddy muralimmre...@gmail.com wrote:
  I am trying to understand how a nova component can be run parallely on a
  host. From the developer reference documentation it seems to indicate
 that
  all the openstack services use green thread model of threading. Is it the
  only model of parallelism for all the components or multiple processes
 can
  be used for a nova service on a host. Does nova.service which seems to do
  os.fork is being used to fork multiple processes for a nova service?

 Multiple processes are used in some places, for instance, nova-api can
 fork multiple processes. Each process would also use greenthreads as
 well.

 However, most services don't use multiple processes (at least in Nova).

 JE
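
 To make the shape of that concrete, here is a toy sketch (not Nova code) of
 "fork a few worker processes, each servicing many requests on green
 threads"; real services additionally monkey-patch the stdlib so ordinary
 blocking calls yield:

     import os
     import eventlet

     def handle_request(n):
         eventlet.sleep(0.1)      # stands in for I/O (DB call, RPC, ...)
         return 'worker %d handled request %d' % (os.getpid(), n)

     def worker():
         pool = eventlet.GreenPool(size=100)  # many cooperative greenthreads
         for result in pool.imap(handle_request, range(10)):
             print(result)

     if __name__ == '__main__':
         workers = 2                # e.g. one per core
         for _ in range(workers):
             if os.fork() == 0:     # child: run the green-thread loop
                 worker()
                 os._exit(0)
         for _ in range(workers):   # parent: wait for the children
             os.wait()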


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-01-28 Thread Dmitriy Shulyak
Thank you guys for the quick response.
Then, if there is no better option, we will go with the second approach.

On Wed, Jan 28, 2015 at 7:08 PM, Evgeniy L e...@mirantis.com wrote:

 Hi Dmitry,

 I'm not sure if we should user approach when task executor reads
 some data from the file system, ideally Nailgun should push
 all of the required data to Astute.
 But it can be tricky to implement, so I vote for 2nd approach.

 Thanks,

 On Wed, Jan 28, 2015 at 7:08 PM, Aleksandr Didenko adide...@mirantis.com
 wrote:

 3rd option is about using rsyncd that we run under xinetd on primary
 controller. And yes, the main concern here is security.

 On Wed, Jan 28, 2015 at 6:04 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 Hi.
 I'm vote for second option, cause if we will want to implement some
 unified hierarchy (like Fuel as CA for keys on controllers for different
 env's) then it will fit better than other options. If we implement 3rd
 option then we will reinvent the wheel with SSL in future. Bare rsync as
 storage for private keys sounds pretty uncomfortable for me.

 On Wed, Jan 28, 2015 at 6:44 PM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 Hi folks,

 I want to discuss the way we are working with generated keys for
 nova/ceph/mongo and something else.

 Right now we are generating keys on master itself, and then
 distributing them by mcollective
 transport to all nodes. As you may know we are in the process of making
 this process described as
 task.

 There is a couple of options:
 1. Expose keys in rsync server on master, in folder /etc/fuel/keys, and
 then copy them with rsync task (but it feels not very secure)
 2. Copy keys from /etc/fuel/keys on master, to /var/lib/astute on
 target nodes. It will require additional
 hook in astute, smth like copy_file, which will copy data from file on
 master and put it on the node.

 Also there is 3rd option to generate keys right on primary-controller
 and then distribute them on all other nodes, and i guess it will be
 responsibility of controller to store current keys that are valid for
 cluster. Alex please provide more details about 3rd approach.

 Maybe there is more options?




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-01-28 Thread Kevin Benton
If we are going to ignore the IP address changing use-case, can we just
make the default infinity? Then nobody ever has to worry about control
plane outages for existing clients. 24 hours is way too long to be useful
anyway.
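
(Editorial note: the knob being debated is an ordinary oslo.config option
that deployers can already override in neutron.conf. The sketch below is
illustrative; the option name and the 86400-second default match the thread,
while the help text and placement are assumptions.)

    from oslo_config import cfg

    dhcp_opts = [
        cfg.IntOpt('dhcp_lease_duration',
                   default=86400,
                   help='DHCP lease duration in seconds.'),
    ]

    cfg.CONF.register_opts(dhcp_opts)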
On Jan 28, 2015 12:44 PM, Salvatore Orlando sorla...@nicira.com wrote:



 On 28 January 2015 at 20:19, Brian Haley brian.ha...@hp.com wrote:

 Hi Kevin,

 On 01/28/2015 03:50 AM, Kevin Benton wrote:
  Hi,
 
  Approximately a year and a half ago, the default DHCP lease time in
 Neutron was
  increased from 120 seconds to 86400 seconds.[1] This was done with the
 goal of
  reducing DHCP traffic with very little discussion (based on what I can
 see in
  the review and bug report). While it it does indeed reduce DHCP
 traffic, I don't
  think any bug reports were filed showing that a 120 second lease time
 resulted
  in too much traffic or that a jump all of the way to 86400 seconds was
 required
  instead of a value in the same order of magnitude.
 
  Why does this matter?
 
  Neutron ports can be updated with a new IP address from the same subnet
 or
  another subnet on the same network. The port update will result in
 anti-spoofing
  iptables rule changes that immediately stop the old IP address from
 working on
  the host. This means the host is unreachable for 0-12 hours based on
 the current
  default lease time without manual intervention[2] (assuming half-lease
 length
  DHCP renewal attempts).

 So I'll first comment on the problem.  You're essentially pulling the
 rug out
 from under these VMs by changing their IP (and that of their router and
 DHCP/DNS
 server), but you expect they should fail quickly and come right back
 online.  In
 a non-Neutron environment wouldn't the IT person that did this need some
 pretty
 good heat-resistant pants for all the flames from pissed-off users?
 Sure, the
 guy on his laptop will just bounce the connection, but servers (aka VMs)
 should
 stay pretty static.  VMs are servers (and cows according to some).


 I actually expect this kind operation to not be one Neutron users will do
 very often, mostly because regardless of whether you're in the cloud or
 not, you'd still need to wear those heat resistant pants.



 The correct solution is to be able to renumber the network so there is no
 issue
 with the anti-spoofing rules dropping packets, or the VMs having an
 unreachable
 IP address, but that's a much bigger nut to crack.


 Indeed. In my opinion the update IP operation sets false expectations in
 users. I have considered disallowing PUT on fixed_ips in the past but that
 did not go ahead because there were users leveraging it.



  Why is this on the mailing list?
 
  In an attempt to make the VMs usable in a much shorter timeframe
 following a
  Neutron port address change, I submitted a patch to reduce the default
 DHCP
  lease time to 8 minutes.[3] However, this was upsetting to several
 people,[4] so
  it was suggested I bring this discussion to the mailing list. The
 following are
  the high-level concerns followed by my responses:
 
* 8 minutes is arbitrary
o Yes, but it's no more arbitrary than 1440 minutes. I picked it
 as an
  interval because it is still 4 times larger than the last short
 value,
  but it still allows VMs to regain connectivity in 5 minutes in
 the
  event their IP is changed. If someone has a good suggestion for
 another
  interval based on known dnsmasq QPS limits or some other
 quantitative
  reason, please chime in here.

 We run 48 hours as the default in our public cloud, and I did some
 digging to
 remind myself of the multiple reasons:

 1. Too much DHCP traffic.  Sure, only that initial request is broadcast,
 but
 dnsmasq is very verbose and loves writing to syslog for everything it
 does -
 less is more.  Do a scale test with 10K VMs and you'll quickly find out a
 large
 portion of traffic is DHCP RENEWs, and syslog is huge.


 This is correct, and something I overlooked in my previous post.
 Nevertheless I still think that it is really impossible to find an optimal
 default which is regarded as such by every user. The current default has
 been chosen mostly for the reason you explain below, and I don't see a
 strong reason for changing it.



 2. During a control-plane upgrade or outage, having a short DHCP lease
 time will
 take all your VMs offline.  The old value of 2 minutes is not a realistic
 value
 for an upgrade, and I don't think 8 minutes is much better.  Yes, when
 DHCP is
 down you can't boot a new VM, but as long as customers can get to their
 existing
 VMs they're pretty happy and won't scream bloody murder.


 In our cloud we were continuously hit bit this. We could not take our dhcp
 agents out, otherwise all VMs would lose their leases, unless the downtime
 of the agent was very brief.


 There's probably more, but those were the top two, with #2 being most
 important.


 Summarizing, I think that Kevin is exposing a real, albeit 

Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-01-28 Thread Joe Gordon
On Wed, Jan 28, 2015 at 11:56 AM, Sean Dague s...@dague.net wrote:

 The following review for Kilo deprecates the EC2 API in Nova -
 https://review.openstack.org/#/c/150929/

 There are a number of reasons for this. The EC2 API has been slowly
 rotting in the Nova tree, never was highly tested, implements a
 substantially older version of what AWS has, and currently can't work
 with any recent releases of the boto library (due to implementing
 extremely old version of auth). This has given the misunderstanding that
 it's a first class supported feature in OpenStack, which it hasn't been
 in quite sometime. Deprecating honestly communicates where we stand.

 There is a new stackforge project which is getting some activity now -
 https://github.com/stackforge/ec2-api. The intent and hope is that is
 the path forward for the portion of the community that wants this
 feature, and that efforts will be focused there.


FYI:
http://lists.openstack.org/pipermail/openstack-dev/2015-January/055041.html



 Comments are welcomed, but we've attempted to get more people engaged to
 address these issues over the last 18 months, and never really had
 anyone step up. Without some real maintainers of this code in Nova (and
 tests somewhere in the community) it's really no longer viable.

 -Sean

 --
 Sean Dague
 http://dague.net


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins][Orchestration] Unclear handling of primary-controler and controller roles

2015-01-28 Thread Dmitriy Shulyak
 Also I would like to mention that in plugins user currently can write
 'roles': ['controller'],
 which means that the task will be applied on 'controller' and
 'primary-controller' nodes.
 Plugin developer can get this information from astute.yaml file. But I'm
 curious if we
 should change this behaviour for plugins (with backward compatibility of
 course)?


In my opinion we should make the interface for task description identical for
plugins and for the library,
and if this separation makes sense for the library, there will be cases where it
will be expected by plugin developers
as well.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] reckoning time for nova ec2 stack

2015-01-28 Thread Matt Riedemann



On 1/15/2015 5:43 PM, Steven Hardy wrote:

On Thu, Jan 15, 2015 at 04:49:37PM -0600, Matt Riedemann wrote:



On 1/15/2015 11:40 AM, Matt Riedemann wrote:



On 1/13/2015 9:27 PM, Matt Riedemann wrote:



On 1/13/2015 12:11 PM, Steven Hardy wrote:

On Tue, Jan 13, 2015 at 10:00:04AM -0600, Matt Riedemann wrote:

Looks like the fix we merged didn't actually fix the problem. I have
a patch
[1] to uncap the boto requirement on master and it's failing the ec2
tests
in tempest the same as before.


FWIW, I just re-tested and boto 2.35.1 works fine for me locally, if you
revert my patch it breaks again with Signature not provided errors
(for
all ec2 API requests).

If you look at the failures in the log, it actually looks like a
different
problem:

EC2ResponseError: EC2ResponseError: 401 Unauthorized

This is not the same as the original error which rejected any request
inside the nova API before even calling keystone with a message like
this:

AuthFailure: Signature not provided

AFAICT this means my patch is working, and there's a different problem
affecting only a subset of the ec2 boto tests.

Steve

__


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Yeah, new bug reported, looks like we're hitting 401 Unauthorized errors
when trying to create security groups in the test:

https://bugs.launchpad.net/nova/+bug/1410622



I have a debug patch up here to try and recreate the tempest failures
with latest boto but using a nova debug change also to get more
information when we fail.

https://review.openstack.org/#/c/147601/



I finally narrowed this down to some code in keystone where it generates a
signature and compares that to what nova is passing in on the request for
ec2 credentials and they are different so keystone is rejecting the request
with a 401.

http://logs.openstack.org/01/147601/3/check/check-tempest-dsvm-full/96bb05e/logs/apache/keystone.txt.gz#_2015-01-15_22_00_27_046

I'm assuming something needs to change in keystone to support the version 4
format?


Keystone already supports the hmac v4 format, the code which creates the
signature keystone compares with that in the request actually lives in
keystoneclient:

https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/contrib/ec2/utils.py#L156
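
For reference, the Signature Version 4 key derivation that boto and
keystoneclient both have to agree on looks roughly like this (a generic
sketch of the published AWS algorithm, not the keystoneclient code itself):

    import hashlib
    import hmac

    def _sign(key, msg):
        return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()

    def aws4_signature(secret_key, date_stamp, region, service,
                       string_to_sign):
        # Chain of HMACs producing the signing key, then sign the request's
        # canonical string-to-sign with it.
        k_date = _sign(('AWS4' + secret_key).encode('utf-8'), date_stamp)
        k_region = _sign(k_date, region)
        k_service = _sign(k_region, service)
        k_signing = _sign(k_service, 'aws4_request')
        return hmac.new(k_signing, string_to_sign.encode('utf-8'),
                        hashlib.sha256).hexdigest()

Any disagreement about the canonical request, the credential scope (note the
'20150115/168/192/aws4_request' scope in the euca2ools debug output further
down, apparently derived from the endpoint IP), or the string-to-sign
produces exactly this kind of 401.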

We have been bitten by a couple of bugs in the past where boto changed the
way it calculated the signature (presumably in a way compatible with AWS),
but generally that breaks all requests not just a specific one as in this
case (CreateSecurityGroup).

Interestingly, this seems to work OK for me locally, e.g using
euca-add-group, which AFAICS is the same API action the test is failing on:

-bash-4.2$ source /opt/stack/devstack/accrc/demo/demo
-bash-4.2$ euca-add-group --debug --description foo shtest2
2015-01-15 23:33:22,890 euca2ools [DEBUG]:Using access key provided by
client.
2015-01-15 23:33:22,890 euca2ools [DEBUG]:Using secret key provided by
client.
2015-01-15 23:33:22,890 euca2ools [DEBUG]:Method: POST
2015-01-15 23:33:22,890 euca2ools [DEBUG]:Path: /services/Cloud/
2015-01-15 23:33:22,890 euca2ools [DEBUG]:Data:
2015-01-15 23:33:22,890 euca2ools [DEBUG]:Headers: {}
2015-01-15 23:33:22,891 euca2ools [DEBUG]:Host: 192.168.0.4
2015-01-15 23:33:22,891 euca2ools [DEBUG]:Port: 8773
2015-01-15 23:33:22,891 euca2ools [DEBUG]:Params: {'Action':
'CreateSecurityGroup', 'GroupName': 'shtest2', 'Version': '2009-11-30',
'GroupDescription': 'foo'}
2015-01-15 23:33:22,891 euca2ools [DEBUG]:establishing HTTP connection:
kwargs={'port': 8773, 'timeout': 70}
2015-01-15 23:33:22,891 euca2ools [DEBUG]:Token: None
2015-01-15 23:33:22,891 euca2ools [DEBUG]:CanonicalRequest:
POST
/services/Cloud/

host:192.168.0.4:8773
x-amz-date:20150115T233322Z

host;x-amz-date
a364b884b3e72160b8850f80c1b5b559011b38313da026d04dd60b745c0fd135
2015-01-15 23:33:22,892 euca2ools [DEBUG]:StringToSign:
AWS4-HMAC-SHA256
20150115T233322Z
20150115/168/192/aws4_request
6b751f8ad4ca1935c68a35f196928c1626d8fdb0ae4bf901050ab757649c9a9a
2015-01-15 23:33:22,892 euca2ools [DEBUG]:Signature:
3a0c741e4a8abf22d83ea8d909bb90e11627905338e42fdfaa51296fa5f2dd31
2015-01-15 23:33:22,892 euca2ools [DEBUG]:Final headers: {'Content-Length':
'84', 'User-Agent': 'Boto/2.35.1 Python/2.7.5
Linux/3.17.7-200.fc20.x86_64', 'Host': '192.168.0.4:8773', 'X-Amz-Date':
'20150115T233322Z', 'Content-Type': 'application/x-www-form-urlencoded;
charset=UTF-8', 'Authorization': 'AWS4-HMAC-SHA256
Credential=8943303abb374011be595587aa1fc986/20150115/168/192/aws4_request,SignedHeaders=host;x-amz-date,Signature=3a0c741e4a8abf22d83ea8d909bb90e11627905338e42fdfaa51296fa5f2dd31'}
2015-01-15 23:33:22,994 euca2ools [DEBUG]:Response headers: [('date', 'Thu,
15 Jan 2015 23:33:22 GMT'), ('content-length', '407'), ('content-type',

[openstack-dev] [Manila] Manila driver for CephFS

2015-01-28 Thread Jake Kugel
Hi,

I see there is a blueprint for a Manila driver for CephFS here [1].  It 
looks like it was opened back in 2013 but is still in the Drafting state.  Does 
anyone know more about the status of this one?

Thank you,
-Jake

[1]  https://blueprints.launchpad.net/manila/+spec/cephfs-driver


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-01-28 Thread Salvatore Orlando
On 28 January 2015 at 20:19, Brian Haley brian.ha...@hp.com wrote:

 Hi Kevin,

 On 01/28/2015 03:50 AM, Kevin Benton wrote:
  Hi,
 
  Approximately a year and a half ago, the default DHCP lease time in
 Neutron was
  increased from 120 seconds to 86400 seconds.[1] This was done with the
 goal of
  reducing DHCP traffic with very little discussion (based on what I can
 see in
  the review and bug report). While it it does indeed reduce DHCP traffic,
 I don't
  think any bug reports were filed showing that a 120 second lease time
 resulted
  in too much traffic or that a jump all of the way to 86400 seconds was
 required
  instead of a value in the same order of magnitude.
 
  Why does this matter?
 
  Neutron ports can be updated with a new IP address from the same subnet
 or
  another subnet on the same network. The port update will result in
 anti-spoofing
  iptables rule changes that immediately stop the old IP address from
 working on
  the host. This means the host is unreachable for 0-12 hours based on the
 current
  default lease time without manual intervention[2] (assuming half-lease
 length
  DHCP renewal attempts).

 So I'll first comment on the problem.  You're essentially pulling the
 rug out
 from under these VMs by changing their IP (and that of their router and
 DHCP/DNS
 server), but you expect they should fail quickly and come right back
 online.  In
 a non-Neutron environment wouldn't the IT person that did this need some
 pretty
 good heat-resistant pants for all the flames from pissed-off users?  Sure,
 the
 guy on his laptop will just bounce the connection, but servers (aka VMs)
 should
 stay pretty static.  VMs are servers (and cows according to some).


I actually expect this kind of operation not to be one Neutron users will do
very often, mostly because regardless of whether you're in the cloud or
not, you'd still need to wear those heat resistant pants.



 The correct solution is to be able to renumber the network so there is no
 issue
 with the anti-spoofing rules dropping packets, or the VMs having an
 unreachable
 IP address, but that's a much bigger nut to crack.


Indeed. In my opinion the update IP operation sets false expectations in
users. I have considered disallowing PUT on fixed_ips in the past but that
did not go ahead because there were users leveraging it.



  Why is this on the mailing list?
 
  In an attempt to make the VMs usable in a much shorter timeframe
 following a
  Neutron port address change, I submitted a patch to reduce the default
 DHCP
  lease time to 8 minutes.[3] However, this was upsetting to several
 people,[4] so
  it was suggested I bring this discussion to the mailing list. The
 following are
  the high-level concerns followed by my responses:
 
* 8 minutes is arbitrary
o Yes, but it's no more arbitrary than 1440 minutes. I picked it
 as an
  interval because it is still 4 times larger than the last short
 value,
  but it still allows VMs to regain connectivity in 5 minutes in
 the
  event their IP is changed. If someone has a good suggestion for
 another
  interval based on known dnsmasq QPS limits or some other
 quantitative
  reason, please chime in here.

 We run 48 hours as the default in our public cloud, and I did some digging
 to
 remind myself of the multiple reasons:

 1. Too much DHCP traffic.  Sure, only that initial request is broadcast,
 but
 dnsmasq is very verbose and loves writing to syslog for everything it does
 -
 less is more.  Do a scale test with 10K VMs and you'll quickly find out a
 large
 portion of traffic is DHCP RENEWs, and syslog is huge.


This is correct, and something I overlooked in my previous post.
Nevertheless I still think that it is really impossible to find an optimal
default which is regarded as such by every user. The current default has
been chosen mostly for the reason you explain below, and I don't see a
strong reason for changing it.



 2. During a control-plane upgrade or outage, having a short DHCP lease
 time will
 take all your VMs offline.  The old value of 2 minutes is not a realistic
 value
 for an upgrade, and I don't think 8 minutes is much better.  Yes, when
 DHCP is
 down you can't boot a new VM, but as long as customers can get to their
 existing
 VMs they're pretty happy and won't scream bloody murder.


In our cloud we were continuously hit by this. We could not take our dhcp
agents out, otherwise all VMs would lose their leases, unless the downtime
of the agent was very brief.


 There's probably more, but those were the top two, with #2 being most
 important.


Summarizing, I think that Kevin is exposing a real, albeit well-known
problem (sorry about my dhcp release faux pas - I can use jet lag as a
justification!), and he's proposing a mitigation to it. On the other hand,
this mitigation, as Brian explains, is going to cause real operational
issues. Still, we're arguing about the default value for a configuration
parameter. I 

Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-28 Thread Dmitriy Shulyak
 1. as I mentioned above, we should have an interface, and if interface
 doesn't
 provide required information, you will have to fix it in two places,
 in Nailgun and in external-serializers, instead of a single place i.e.
 in Nailgun,
 another thing if astute.yaml is a bad interface and we should provide
 another
 versioned interface, or add more data into deployment serializer.

But why add another interface when there is one already (the REST API)? A
plugin developer
may query whatever he wants (detailed information about volumes, interfaces,
master node settings).
It is the most complete source of information in Fuel and it already needs to be
protected from incompatible changes.

If our API is not enough for general use, of course we will need to
fix it, but I don't quite understand what you
mean by "fix it in two places". The API provides general information that
can be consumed by serializers (or any other service/human, actually),
and if there are issues with that information, the API should be fixed.
Serializers expect that information in a specific format and make
additional transformations or computations based on that info.

What is your opinion about serializing additional information in plugin
code? How can it be done without exposing the DB schema?

2. it can be handled in python or any other code (which can be wrapped into
 tasks),
 why should we implement here another entity (a.k.a external
 serializers)?

Yep, I guess this is true. I thought that we might not want to deliver
credentials to the target nodes, only a token that can be used
for a limited time, but...
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-28 Thread Clint Byrum
Excerpts from Johannes Erdfelt's message of 2015-01-28 15:33:25 -0800:
 On Wed, Jan 28, 2015, Mike Bayer mba...@redhat.com wrote:
  I can envision turning this driver into a total monster, adding
  C-speedups where needed but without getting in the way of async
  patching, adding new APIs for explicit async, and everything else.
  However, I’ve no idea what the developers have an appetite for.
 
 This is great information. I appreciate the work on evaluating it.
 
 Can I bring up the alternative of dropping eventlet and switching to
 native threads?
 
 We spend a lot of time working on the various incompatibilities between
 eventlet and other libraries we use. It also restricts us by making it
 difficult to use an entire class of python modules (that use C
 extensions for performance, etc).
 
 I personally have spent more time than I wish to admit fixing bugs in
 eventlet and troubleshooting problems we've had.
 
 And it's never been clear to me why we *need* to use eventlet or
 green threads in general.
 
 Our modern Nova appears to only be weakly tied to eventlet and greenlet.
 I think we would spend less time replacing eventlet with native threads
 than we'll spend in the future trying to fit our code and dependencies
 into the eventlet shaped hole we currently have.
 
 I'm not as familiar with the code in other OpenStack projects, but from
 what I have seen, they appear to be similar to Nova and are only weakly
 tied to eventlet/greenlet.

As is often the case with threading, a reason to avoid using it is
that libraries often aren't able or willing to assert thread safety.

That said, one way to fix that, is to fix those libraries that we do
want to use, to be thread safe. :)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-28 Thread Johannes Erdfelt
On Wed, Jan 28, 2015, Clint Byrum cl...@fewbar.com wrote:
 As is often the case with threading, a reason to avoid using it is
 that libraries often aren't able or willing to assert thread safety.
 
 That said, one way to fix that, is to fix those libraries that we do
 want to use, to be thread safe. :)

I floated this idea across some coworkers recently and they brought up a
similar concern, which is concurrency in general, both within our code
and dependencies.

I can't find many places in Nova (at least) that are concurrent in the
sense that one object will be used by multiple threads. nova-scheduler
is likely one place. nova-compute would likely be easy to fix if there
are any problems.

That said, I think the only way to know for sure is to try it out and
see. I'm going to hack up a proof of concept and see how difficult this
will be.

JE
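
For readers following along, the contrast under discussion looks roughly like
this (a toy sketch, not OpenStack code; real services also call
eventlet.monkey_patch(), which is the piece that collides with C-extension
libraries whose blocking happens inside C):

    # 1) Green threads: cooperative, one OS thread; tasks switch only at
    #    yielding calls such as eventlet.sleep() or patched I/O.
    import eventlet

    def green_task(n):
        eventlet.sleep(0.01)          # stands in for I/O
        return n * n

    green_pool = eventlet.GreenPool(size=50)
    print(list(green_pool.imap(green_task, range(5))))

    # 2) Native threads: preemptive OS threads; blocking C calls are fine,
    #    but shared state needs locks and libraries must be thread safe.
    import threading

    results = []
    lock = threading.Lock()

    def native_task(n):
        with lock:
            results.append(n * n)

    threads = [threading.Thread(target=native_task, args=(n,))
               for n in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(sorted(results))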


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Question about VPNaas

2015-01-28 Thread Sridhar Ramaswamy
I agree, it is kind of odd to restrict vpn-service to one private tenant
network. Particularly when the current VPN model does allow multiple remote
peer CIDRs to connect to,

neutron ipsec-site-connection-create --name ipsec0 --vpnservice-id vpnsvc0
--ikepolicy-id ike0 --ipsecpolicy-id esp0 --peer-address 192.168.110.21
--peer-id 192.168.110.21 --peer-cidr *13.1.0.0/24,14.1.0.0/24
http://13.1.0.0/24,14.1.0.0/24* --psk secret

Perhaps there is some history, may be Nachi might know?

- Sridhar

On Wed, Jan 28, 2015 at 6:26 AM, Paul Michali p...@michali.net wrote:

 I can try to comment on your questions... inline @PCM


 PCM (Paul Michali)

 IRC pc_m (irc.freenode.com)
 Twitter... @pmichali


 On Tue, Jan 27, 2015 at 9:45 PM, shihanzhang ayshihanzh...@126.com
 wrote:

 Hi Stacker:

 I am a novice, I want  use Neutron VPNaas, through my preliminary
 understanding on this it, I have two questions about it:
 (1) why a 'vpnservices' can just has one subnet?

 (2) why the subnet of 'vpnservices' can't be changed?


 @PCM Currently, the VPN service is designed to setup a site to site
 connection between two private subnets. The service is associated 1:1 with
 (and applies the connection to) a Neutron router that has a interface on
 the private network, and an interface on the public network. Changing the
 subnet for the service would effectively change the router. One would have
 to delete and recreate the service to use a different router.

 I don't know if the user can attach multiple private subnets to a
 router, and the VPN implementation assumes that there is only one private
 subnet.


  As I know, the OpenSwan does not has these limitations.
 I've learned that there is a BP to do this:

 https://blueprints.launchpad.net/neutron/+spec/vpn-multiple-subnet
  but this BP has been no progress.


  I want to know whether this will do in next cycle or later, who can
 help me to explain?


 @PCM I don't know what happened with that BP, but it is effectively
 abandoned (even though status says 'new'). There has not been any activity
 on it for over a year, and since we are at a new release, a BP spec would
 have been required for Kilo. Also, the bug that drove the issue, has been
 placed into Invalid state by Mark McClain in March of last year.

 https://bugs.launchpad.net/neutron/+bug/1258375


 You could ask Mark for clarification, but I think it may be because the
 Neutron router doesn't support multiple subnets.

 Regards.


 Thanks.

 -shihanzhang




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-28 Thread Mike Bayer


Johannes Erdfelt johan...@erdfelt.com wrote:

 On Wed, Jan 28, 2015, Mike Bayer mba...@redhat.com wrote:
 I can envision turning this driver into a total monster, adding
 C-speedups where needed but without getting in the way of async
 patching, adding new APIs for explicit async, and everything else.
 However, I’ve no idea what the developers have an appetite for.
 
 This is great information. I appreciate the work on evaluating it.
 
 Can I bring up the alternative of dropping eventlet and switching to
 native threads?

I'd be your best friend for life. But I don’t think you’re going to get much
traction here on that. There are three camps on this issue: the
threads-are-fine camp, the “everything should be async! you should never
trust threads / IO to run implicitly!” camp, and the “we need 1000
concurrent connections but don’t want to get into explicit async so let’s
use eventlet/gevent” camp.

Myself, I am in a fourth camp: use explicit async for those kinds of logic
where it makes sense from a code organization point of view, and not for
those where it does not. The use case of gevent/eventlet is itself in somewhat
of a fringe camp, the one where you need many hundreds of concurrent database
connections within one process for something (which you do not).

The kinds of use cases that are very valid for explicit async are things
like code that is heavy on web service requests and asynchronous message
queues. However, code that deals with databases and transactions really
gains nothing and loses everything by being muddied and complicated with
this very awkward approach.
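
To make the eventlet point concrete, here is a minimal sketch (purely
illustrative; host, credentials and the SLEEP query are placeholders for a
local test database) of why a pure-Python driver matters for monkey patching:

    import eventlet
    eventlet.monkey_patch()  # patches socket, so PyMySQL's I/O yields to the hub

    import pymysql  # pure Python: its socket reads/writes are now cooperative

    def query(i):
        # stand-in for a slow server-side operation; other greenthreads
        # keep running while this one waits on the socket
        conn = pymysql.connect(host='127.0.0.1', user='test', passwd='test', db='test')
        with conn.cursor() as cur:
            cur.execute("SELECT SLEEP(1)")
            row = cur.fetchone()
        conn.close()
        return i, row

    pool = eventlet.GreenPool(10)
    for result in pool.imap(query, range(10)):
        print(result)  # ~1 second of wall time, not ~10, because the waits overlap

With a C driver the same loop would serialize, because the blocking calls
happen outside of anything eventlet can patch.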

Unfortunately, the “debate” as I continue to follow it is really not about 
these things.  
From my observations, it seems to revolve around the following points:

A. async code is “faster” (it is not)

B. threads are “incorrect” (they are not)

C. code that performs IO without the programmer setting aside
explicit directives, deferrals, and other whatnot to accommodate this event is 
“wrong”
(it is not).

As long as the debate is over these largely ephemeral assertions, which keep 
coming
up over and over again, Openstack has stuck its feet in the mud with the 
eventlet 
thing as a kind of compromise (even though crowd C is not appeased by this).


 We spend a lot of time working on the various incompatibilities between
 eventlet and other libraries we use. It also restricts us by making it
 difficult to use an entire class of python modules (that use C
 extensions for performance, etc).
 
 I personally have spent more time than I wish to admit fixing bugs in
 eventlet and troubleshooting problems we've had.
 
 And it's never been clear to me why we *need* to use eventlet or
 green threads in general.
 
 Our modern Nova appears to only be weakly tied to eventlet and greenlet.
 I think we would spend less time replacing eventlet with native threads
 than we'll spend in the future trying to fit our code and dependencies
 into the eventlet shaped hole we currently have.
 
 I'm not as familiar with the code in other OpenStack projects, but from
 what I have seen, they appear to be similar to Nova and are only weakly
 tied to eventlet/greenlet.
 
 JE
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][oslo] oslo namespace package releases are done

2015-01-28 Thread Doug Hellmann
You will all, I am sure, be relieved to know that the oslo.vmware release today 
was the last library that needed to be released with namespace package changes. 
There are a few more patches to land to the requirements list to update the 
minimum required version of the oslo libs, and of course the work to update 
projects to actually use the new package name is still ongoing.

We have some more library releases planned for the next few weeks, but none 
should be so disruptive as these others have been. oslo.log, oslo.policy, and 
debtcollector are all new libraries. The oslo.vmware team has some feature 
work, but now that the requirement in juno is capped that release should be 
uneventful.

Doug


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-01-28 Thread Chuck Carlino

On 01/28/2015 12:51 PM, Kevin Benton wrote:


If we are going to ignore the IP address changing use-case, can we 
just make the default infinity? Then nobody ever has to worry about 
control plane outages for existing clients. 24 hours is way too long to 
be useful anyway.




Why would users want to change an active port's IP address anyway? I can 
see possible use in changing an inactive port's IP address, but that 
wouldn't cause the dhcp issues mentioned here.  I worry about setting a 
default config value to handle a very unusual use case.
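
For reference, the value being debated is a single option in neutron.conf (a
sketch; 86400 is the current default, and as far as I know -1 maps to an
infinite lease in dnsmasq):

    [DEFAULT]
    # DHCP lease duration in seconds
    dhcp_lease_duration = 86400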


Chuck


On Jan 28, 2015 12:44 PM, Salvatore Orlando sorla...@nicira.com 
mailto:sorla...@nicira.com wrote:




On 28 January 2015 at 20:19, Brian Haley brian.ha...@hp.com
mailto:brian.ha...@hp.com wrote:

Hi Kevin,

On 01/28/2015 03:50 AM, Kevin Benton wrote:
 Hi,

 Approximately a year and a half ago, the default DHCP lease
time in Neutron was
 increased from 120 seconds to 86400 seconds.[1] This was
done with the goal of
 reducing DHCP traffic with very little discussion (based on
what I can see in
 the review and bug report). While it it does indeed reduce
DHCP traffic, I don't
 think any bug reports were filed showing that a 120 second
lease time resulted
 in too much traffic or that a jump all of the way to 86400
seconds was required
 instead of a value in the same order of magnitude.

 Why does this matter?

 Neutron ports can be updated with a new IP address from the
same subnet or
 another subnet on the same network. The port update will
result in anti-spoofing
 iptables rule changes that immediately stop the old IP
address from working on
 the host. This means the host is unreachable for 0-12 hours
based on the current
 default lease time without manual intervention[2] (assuming
half-lease length
 DHCP renewal attempts).

So I'll first comment on the problem.  You're essentially
pulling the rug out
from under these VMs by changing their IP (and that of their
router and DHCP/DNS
server), but you expect they should fail quickly and come
right back online.  In
a non-Neutron environment wouldn't the IT person that did this
need some pretty
good heat-resistant pants for all the flames from pissed-off
users?  Sure, the
guy on his laptop will just bounce the connection, but servers
(aka VMs) should
stay pretty static.  VMs are servers (and cows according to some).


I actually expect this kind of operation to not be one Neutron users
will do very often, mostly because regardless of whether you're in
the cloud or not, you'd still need to wear those heat resistant pants.


The correct solution is to be able to renumber the network so
there is no issue
with the anti-spoofing rules dropping packets, or the VMs
having an unreachable
IP address, but that's a much bigger nut to crack.


Indeed. In my opinion the update IP operation sets false
expectations in users. I have considered disallowing PUT on
fixed_ips in the past but that did not go ahead because there were
users leveraging it.


 Why is this on the mailing list?

 In an attempt to make the VMs usable in a much shorter
timeframe following a
 Neutron port address change, I submitted a patch to reduce
the default DHCP
 lease time to 8 minutes.[3] However, this was upsetting to
several people,[4] so
 it was suggested I bring this discussion to the mailing
list. The following are
 the high-level concerns followed by my responses:

   * 8 minutes is arbitrary
   o Yes, but it's no more arbitrary than 1440 minutes. I
picked it as an
 interval because it is still 4 times larger than the last 
short value,
 but it still allows VMs to regain connectivity in 5
minutes in the
 event their IP is changed. If someone has a good
suggestion for another
 interval based on known dnsmasq QPS limits or some
other quantitative
 reason, please chime in here.

We run 48 hours as the default in our public cloud, and I did
some digging to
remind myself of the multiple reasons:

1. Too much DHCP traffic.  Sure, only that initial request is
broadcast, but
dnsmasq is very verbose and loves writing to syslog for
everything it does -
less is more.  Do a scale test with 10K VMs and you'll quickly
find out a large
portion of traffic is DHCP RENEWs, and syslog is huge.


This is correct, and something I overlooked in 

Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-28 Thread Johannes Erdfelt
On Wed, Jan 28, 2015, Mike Bayer mba...@redhat.com wrote:
 I can envision turning this driver into a total monster, adding
 C-speedups where needed but without getting in the way of async
 patching, adding new APIs for explicit async, and everything else.
 However, I’ve no idea what the developers have an appetite for.

This is great information. I appreciate the work on evaluating it.

Can I bring up the alternative of dropping eventlet and switching to
native threads?

We spend a lot of time working on the various incompatibilities between
eventlet and other libraries we use. It also restricts us by making it
difficult to use an entire class of python modules (that use C
extensions for performance, etc).

I personally have spent more time than I wish to admit fixing bugs in
eventlet and troubleshooting problems we've had.

And it's never been clear to me why we *need* to use eventlet or
green threads in general.

Our modern Nova appears to only be weakly tied to eventlet and greenlet.
I think we would spend less time replacing eventlet with native threads
than we'll spend in the future trying to fit our code and dependencies
into the eventlet shaped hole we currently have.
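
As a rough illustration of what the native-thread model looks like (a sketch
only, not Nova code; handle_request() and the worker count are made up):

    from concurrent.futures import ThreadPoolExecutor
    import time

    def handle_request(req_id):
        # stand-in for a blocking DB or hypervisor call; CPython releases
        # the GIL during real I/O, so other threads keep making progress
        time.sleep(0.1)
        return req_id

    with ThreadPoolExecutor(max_workers=64) as pool:
        results = list(pool.map(handle_request, range(1000)))
    print(len(results))

No monkey patching and no special-cased libraries -- any C extension or
driver just works, which is the point being argued here.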

I'm not as familiar with the code in other OpenStack projects, but from
what I have seen, they appear to be similar to Nova and are only weakly
tied to eventlet/greenlet.

JE


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Proposed Changes to Magnum Core

2015-01-28 Thread Jay Lau
+1!

Thanks!

2015-01-29 6:40 GMT+08:00 Steven Dake sd...@redhat.com:

 On 01/28/2015 03:27 PM, Adrian Otto wrote:

 Magnum Cores,

 I propose the following addition to the Magnum Core group[1]:

 + Hongbin Lu (hongbin034)

 Please let me know your votes by replying to this message.

 Thanks,

 Adrian

 +1

 Regards
 -steve


  [1] https://review.openstack.org/#/admin/groups/473,members Current
 Members

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-28 Thread Doug Hellmann


On Wed, Jan 28, 2015, at 06:50 PM, Johannes Erdfelt wrote:
 On Wed, Jan 28, 2015, Clint Byrum cl...@fewbar.com wrote:
  As is often the case with threading, a reason to avoid using it is
  that libraries often aren't able or willing to assert thread safety.
  
  That said, one way to fix that, is to fix those libraries that we do
  want to use, to be thread safe. :)
 
 I floated this idea across some coworkers recently and they brought up a
 similar concern, which is concurrency in general, both within our code
 and dependencies.
 
 I can't find many places in Nova (at least) that are concurrent in the
 sense that one object will be used by multiple threads. nova-scheduler
 is likely one place. nova-compute would likely be easy to fix if there
 are any problems.
 
 That said, I think the only way to know for sure is to try it out and
 see. I'm going to hack up a proof of concept and see how difficult this
 will be.

I hope someone who was around at the time will chime in with more detail
about why green threads were deemed better than regular threads, and I
look forward to seeing your analysis of a change. There is already a
thread-based executor in oslo.messaging, which *should* be usable in the
applications when you remove eventlet.

Doug

 
 JE
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-28 Thread Joshua Harlow

+1

Will be an interesting analysis; I've also never understood this desire 
to use eventlet (perhaps it's written somewhere?). It seems like most of 
the applications in openstack (maybe leaving out the WSGI entrypoints) 
can be scaled horizontally and mostly just do a large amount of work 
that blocks on I/O, sockets and other resources (and Python itself, AFAIK, 
will swap threads in and out when that happens).


The highly concurrent situation seems only *slightly* applicable on the 
WSGI/webserver entrypoints (although this IMHO is 'meh', since we have 
multiple workers here anyway, and those should be scalable with new 
processes if needed) and maybe 'conductor' applications (although this 
should also be horizontally scalable using child-processes if it's 
really a bottleneck).


-Josh

Johannes Erdfelt wrote:

On Wed, Jan 28, 2015, Clint Byrumcl...@fewbar.com  wrote:

As is often the case with threading, a reason to avoid using it is
that libraries often aren't able or willing to assert thread safety.

That said, one way to fix that, is to fix those libraries that we do
want to use, to be thread safe. :)


I floated this idea across some coworkers recently and they brought up a
similar concern, which is concurrency in general, both within our code
and dependencies.

I can't find many places in Nova (at least) that are concurrent in the
sense that one object will be used by multiple threads. nova-scheduler
is likely one place. nova-compute would likely be easy to fix if there
are any problems.

That said, I think the only way to know for sure is to try it out and
see. I'm going to hack up a proof of concept and see how difficult this
will be.

JE


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-28 Thread Vishvananda Ishaya

On Jan 28, 2015, at 4:03 PM, Doug Hellmann d...@doughellmann.com wrote:

 
 
 On Wed, Jan 28, 2015, at 06:50 PM, Johannes Erdfelt wrote:
 On Wed, Jan 28, 2015, Clint Byrum cl...@fewbar.com wrote:
 As is often the case with threading, a reason to avoid using it is
 that libraries often aren't able or willing to assert thread safety.
 
 That said, one way to fix that, is to fix those libraries that we do
 want to use, to be thread safe. :)
 
 I floated this idea across some coworkers recently and they brought up a
 similar concern, which is concurrency in general, both within our code
 and dependencies.
 
 I can't find many places in Nova (at least) that are concurrent in the
 sense that one object will be used by multiple threads. nova-scheduler
 is likely one place. nova-compute would likely be easy to fix if there
 are any problems.
 
 That said, I think the only way to know for sure is to try it out and
 see. I'm going to hack up a proof of concept and see how difficult this
 will be.
 
 I hope someone who was around at the time will chime in with more detail
 about why green threads were deemed better than regular threads, and I
 look forward to seeing your analysis of a change. There is already a
 thread-based executor in oslo.messaging, which *should* be usable in the
 applications when you remove eventlet.

Threading was never really considered. The initial version tried to get a
working api server up as quickly as possible and it used tornado. This was
quickly replaced with twisted since tornado was really new at the time and
had bugs. We then switched to eventlet when swift joined the party so we
didn’t have multiple concurrency stacks.

By the time someone came up with the idea of using different concurrency
models for the api server and the backend services, we were already pretty
far down the greenthread path.

Vish

 
 Doug
 
 
 JE
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-28 Thread Johannes Erdfelt
On Wed, Jan 28, 2015, Vishvananda Ishaya vishvana...@gmail.com wrote:
 On Jan 28, 2015, at 4:03 PM, Doug Hellmann d...@doughellmann.com wrote:
  I hope someone who was around at the time will chime in with more detail
  about why green threads were deemed better than regular threads, and I
  look forward to seeing your analysis of a change. There is already a
  thread-based executor in oslo.messaging, which *should* be usable in the
  applications when you remove eventlet.
 
 Threading was never really considered. The initial version tried to get a
 working api server up as quickly as possible and it used tonado. This was
 quickly replaced with twisted since tornado was really new at the time and
 had bugs. We then switched to eventlet when swift joined the party so we
 didn’t have multiple concurrency stacks.
 
 By the time someone came up with the idea of using different concurrency
 models for the api server and the backend services, we were already pretty
 far down the greenthread path.

Not sure if it helps more than this explanation, but there was a
blueprint and accompanying wiki page that explains the move from twisted
to eventlet:

https://blueprints.launchpad.net/nova/+spec/unified-service-architecture

https://wiki.openstack.org/wiki/UnifiedServiceArchitecture

JE


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [UI] Deploy Changes dialog redesign

2015-01-28 Thread Igor Kalnitsky
Nik,

I'm sure it requires at least a spec, since there are things that
should be discussed. Who can do it in this release cycle? If there's a
person, I'm +1 for refactoring; otherwise I'd prefer to remove it to
make the code clearer.

Thanks,
Igor

On Wed, Jan 28, 2015 at 12:44 PM, Nikolay Markov nmar...@mirantis.com wrote:
 Igor,

 But why can't we implement it properly on the first try? It doesn't
 seem like a hard task and won't take much time.

 On Wed, Jan 28, 2015 at 12:50 PM, Igor Kalnitsky
 ikalnit...@mirantis.com wrote:
 Nik,

 I'm now here and I don't agree that we need to remove changes
 attribute. On the opposite, I think this is the only attribute which
 should be looked at on UI and backend, and all these
 pending_addition and pending_someotherstuff are obsolete and
 needless.

 You're absolutely right. It's better to have one field rather than
 a few. However, in the current implementation this field (changes) is
 completely unusable. It's not even extensible, since it has
 pre-defined values.

 So, I propose to solve first tasks first. We can remove it for now (in
 order to drop legacy) and introduce new implementation when we need.

 Thanks,
 Igor

 On Tue, Jan 27, 2015 at 11:12 AM, Nikolay Markov nmar...@mirantis.com 
 wrote:
 Guys,

 I'm now here and I don't agree that we need to remove changes
 attribute. On the opposite, I think this is the only attribute which
 should be looked at on UI and backend, and all these
 pending_addition and pending_someotherstuff are obsolete and
 needless.

 Just assume that we'll soon have some plugin or some tech which
 allows us to modify some settings in the UI after the environment was deployed
 and somehow apply them onto nodes (like, for example, we're planning
 such a thing for VMware). In this case there is no
 pending_addition or other such stuff; these are just changes to
 apply on a node somehow, maybe just by executing some script on them. And
 the same goes for a lot of cases with plugins, which make some services
 on target nodes configurable.

 The pending_addition flag, on the other hand, is useless, because all
 changes we should apply on a node are already listed in the changes
 attribute. We can even probably add provisioning and deployment
 into these pending changes to avoid logic duplication. But still, as
 for me, this is the only working mechanism we should consider, and
 it will really help us to cover complex cases in the future.

 On Tue, Jan 27, 2015 at 10:52 AM, Mike Scherbakov
 mscherba...@mirantis.com wrote:
 +1, I do not think it's usable as it is now. Let's think, though, about whether we
 can come up with a better idea of how to show what has been changed (or even,
 otherwise, what was not touched - and so might bring a surprise later).
 We might want to think about it after the wizard-like UI is implemented.

 On Mon, Jan 26, 2015 at 8:26 PM, Igor Kalnitsky ikalnit...@mirantis.com
 wrote:

 +1 for removing attribute.

 @Evgeniy, I'm not sure that this attribute really shows all changes
 that's going to be done.

 On Mon, Jan 26, 2015 at 7:11 PM, Evgeniy L e...@mirantis.com wrote:
  To be more specific, +1 for removing this information from UI, not from
  backend.
 
  On Mon, Jan 26, 2015 at 7:46 PM, Evgeniy L e...@mirantis.com wrote:
 
  Hi,
 
  I agree that this information is useless, but it's not really clear
  what
  you are going
  to show instead, will you completely remove the information about nodes
  for deployment?
  I think the list of nodes for deployment (without detailed list of
  changes) can be useful
  for the user.
 
  Thanks,
 
  On Mon, Jan 26, 2015 at 7:23 PM, Vitaly Kramskikh
  vkramsk...@mirantis.com wrote:
 
  +1 for removing changes attribute. It's useless now. If there are no
  plans to add something else there, let's remove it.
 
  2015-01-26 11:39 GMT+03:00 Julia Aranovich jkirnos...@mirantis.com:
 
  Hi All,
 
  Since we changed Deploy Changes pop-up and added processing of role
  limits and restrictions I would like to raise a question of it's
  subsequent
  refactoring.
 
  In particular, I mean 'changes' attribute of cluster model. It's
  displayed in Deploy Changes dialog in the following format:
 
  Changed disks configuration on the following nodes:
 
  node_name_list
 
  Changed interfaces configuration on the following nodes:
 
  node_name_list
 
  Changed network settings
  Changed OpenStack settings
 
  This list looks absolutely useless.
 
  It doesn't make any sense to display lists of new, not deployed nodes
  with changed disks/interfaces. It's obvious I think that new nodes
  attributes await deployment. At the same time user isn't able to
  change
  disks/interfaces on deployed nodes (at least in UI). So, such node
  name
  lists are definitely redundant.
  Networks and settings are also locked after deployment finished.
 
 
  I tend to think we should get rid of the cluster model 'changes' attribute altogether.
 
  It is important for me to know your opinion, to make a final
  decision.
  Please feel free to share your ideas and concerns 

Re: [openstack-dev] [nova] [api] Get servers with limit and IP address filter

2015-01-28 Thread Steven Kaufer
Vishvananda Ishaya vishvana...@gmail.com wrote on 01/27/2015 04:29:50 PM:

 From: Vishvananda Ishaya vishvana...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 01/27/2015 04:32 PM
 Subject: Re: [openstack-dev] [nova] [api] Get servers with limit and
 IP address filter

 The network info for an instance is cached as a blob of data
 (neutron has the canonical version in most installs), so it isn’t
 particularly easy to do at the database layer. You would likely need
 a pretty complex stored procedure to do it accurately.

 Vish

Vish,

Thanks for the reply.

I agree with your point about the difficulty in accurately querying the
blob of data; however, IMHO, the complexity of this fix does not preclude the
current behavior from being classified as a bug.

With that in mind, I was wondering if anyone in the community has any
thoughts on if the current behavior is considered a bug?

If so, how should it be resolved? A couple options that I could think of:

1. Disallow the combination of using both a limit and an IP address filter
by raising an error.
2. Workaround the problem by removing the limit from the DB query and then
manually limiting the list of servers (after manually applying the IP
address filter).
3. Break up the query so that the server UUIDs that match the IP filter are
retrieved first and then used as a UUID DB filter. As far as I can tell,
this type of solution was originally implemented but the network query was
deemed too expensive [1]. Is there a less expensive method to determine the
UUIDs (possibly querying the cached 'network_info' in the '
instance_info_caches' table)?
4. Figure out how to accurately query the blob of network info that is
cached in the nova DB and apply the IP filter at the DB layer.

[1]: https://review.openstack.org/#/c/131460/
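
To make option 2 concrete, a rough sketch (purely illustrative; the helper
name and data shapes are not the actual Nova code):

    import re

    def list_servers_option2(all_instances, ip_filter, limit):
        # all_instances: result of the DB query with the SQL limit removed
        # ip_filter: the regex string supplied on the API request
        pattern = re.compile(ip_filter)
        matched = [inst for inst in all_instances
                   if any(pattern.search(addr) for addr in inst['addresses'])]
        return matched[:limit]

The obvious downside is that the unlimited DB query can be expensive on large
deployments, which is why options 3 and 4 push the filtering down instead.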

Thanks,
Steven Kaufer


 On Jan 27, 2015, at 2:00 PM, Steven Kaufer kau...@us.ibm.com wrote:

 Hello,

 When applying an IP address filter to a paginated servers query (eg,
 supplying servers/detail?ip=192.168limit=100), the IP address
 filtering is only being applied against the non-filtered page of
 servers that were retrieved from the DB; see [1].

 I believe that the IP address filtering should be done before the
 limit is applied, returning up to limit servers that match the IP
 address filter.  Currently, if the servers in the page of data
 returned from the DB do not happen to match the IP address filter
 (applied in the compute API), then no servers will be returned by
 the REST API (even if there are servers that match the IP address
filter).

 This seems like a bug to me, shouldn't all filtering be done at the DB
layer?

 [1]: https://github.com/openstack/nova/blob/master/nova/compute/
 api.py#L2037-L2042

 Thanks,
 Steven Kaufer

__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] core team changes

2015-01-28 Thread Zane Bitter

On 27/01/15 20:36, Angus Salkeld wrote:

Hi all

After having a look at the stats:
http://stackalytics.com/report/contribution/heat-group/90
http://stackalytics.com/?module=heat-groupmetric=person-day

I'd like to propose the following changes to the Heat core team:

Add:
Qiming Teng
Huang Tianhua

Remove:
Bartosz Górski (Bartosz has indicated that he is happy to be removed and
doesn't have the time to work on heat ATM).

Core team please respond with +/- 1.


+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] temporarily disabling python 3.x testing for oslo.messaging and oslo.rootwrap

2015-01-28 Thread Ben Nemec
On 01/27/2015 10:51 AM, Doug Hellmann wrote:
 The infra team has been working hard to update our Python 3 testing for all 
 projects to run on 3.4 instead of 3.3. Two of the last projects to be able to 
 shift are oslo.messaging and oslo.rootwrap. The test suites for both projects 
 trigger a segfault bug in the 3.4 interpreter as it is shipped on Ubuntu 
 Trusty. The fix for the segfault is already available upstream, and the team 
 at Canonical is working on packaging a new release, but our schedules are out 
 of sync. Maintaining a separate image and pool of testing nodes for 3.3 
 testing of just these two projects is going to be a bit of a burden, and so 
 the infra team has asked if we’re willing to turn off the 3.3 jobs for the 
 two projects, leaving us without 3.x testing in the gate until the 3.4 
 interpreter on Trusty is updated.
 
 The latest word from Canonical is that they plan to package Python 3.4.3, due 
 to be released in about a month. It will take some additional time to put it 
 through their release process, and so there’s some uncertainty about how long 
 we would be without 3.x gate jobs, but it doesn’t look like it will be 
 indefinitely.
 
 To mitigate that risk, fungi has suggested starting to work on Debian Jessie 
 worker images, which would include a version of Python 3.4 that doesn’t have 
 the segfault issue. His goal is to have something working by around the end 
 of March. That gives Canonical up to a month to release the 3.4.3 package 
 before we would definitely move those tests to Debian. Whether we move any of 
 the other projects, or would move anyway if fungi gets Debian working more 
 quickly than he expects, would remain to be seen.
 
 Although we do have some risk of introducing Python 3 regressions into the 
 two libraries, I am inclined to go along with the infra team’s request and 
 disable the tests for a short period of time. The rootwrap library doesn’t 
 see a lot of changes, and we can rely on the messaging lib devs to run tests 
 locally for a little while.
 
 Before I give the go-ahead, I want to hear concerns from the rest of the 
 team. Let’s try to have an answer by the 29th (Thursday).
 
 Doug

As long as 1) this is short-term, which it sounds like it is and 2) we
make sure to at least run the py3 tests locally before doing a release,
I think this will be fine.  A bug sneaking into the source isn't the end
of the world, but releasing with one would be bad.

-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Take back the naming process

2015-01-28 Thread James E. Blair
Thierry Carrez thie...@openstack.org writes:

 Monty Taylor wrote:
 You'll notice that I did say in my suggestion that ANYONE should be able
 to propose a name - I believe that would include non-dev people. Since
 the people in question are marketing people, I would imagine that if any
 of them feel strongly about a name, that it should be trivial for them
 to make their case in a persuasive way.

 The proposal as it stands (https://review.openstack.org/#/c/150604/4)
 currently excludes all non-ATCs from voting, though. The wider
 community was included in previous iterations of the naming process,
 so this very much feels like a TC power grab.

Egad, it was definitely not intended to be a power grab, quite the
opposite in fact (in my proposal, the TC is only granted the power to
exempt really cool names from the rules).  But since we're doing things
in the open now, we can fix it.  Considering that the process used to be
a poll of the ~openstack group on launchpad, it seemed like a fairly
straightforward mapping to ATCs.  I wanted to find the easiest way to
get the most people in the community likely to vote as possible without
needing to generate a new voting roll.  But you are correct: if we're
fixing this, let's fix it right.

The next best thing I can think of is to use the entire Foundation
Individual Membership to produce the roll for the CIVS poll.  It will be
a bit of extra work, but I believe that is about as broad of a
definition of our community that we use.

 I'm not willing to cede that choosing the name is by definition a
 marketing activity - and in fact the sense that such a position was
 developing is precisely why I think it's time to get this sorted. I
 think the dev community feels quite a bit of ownership on this topic and
 I would like to keep it that way.

 It's not by definition a technical activity either, so we are walking a
 thin line. Like I commented on the review: I think the TC can retain
 ownership of this process and keep the last bits of fun that were still
 in it[1], as long as we find a way to keep non-ATCs in the naming
 process, and take into account the problematic names raised by the
 marketing community team (which will use those names as much as the
 technical community does).

Sounds great!  I will revise my TC proposal.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-28 Thread Doug Hellmann


On Wed, Jan 28, 2015, at 03:23 AM, Denis Makogon wrote:
 On Tue, Jan 27, 2015 at 10:26 PM, Gordon Sim g...@redhat.com wrote:
 
  On 01/27/2015 06:31 PM, Doug Hellmann wrote:
 
  On Tue, Jan 27, 2015, at 12:28 PM, Denis Makogon wrote:
 
  I'd like to build tool that would be able to profile messaging over
  various deployments. This tool would give me an ability to compare
  results of performance testing produced by native tools and
  oslo.messaging-based tool, eventually it would lead us into digging into
  code and trying to figure out where bad things are happening (that's
  the
  actual place where we would need to profile messaging code). Correct me
  if
  i'm wrong.
 
 
  It would be interesting to have recommendations for deployment of rabbit
  or qpid based on performance testing with oslo.messaging. It would also
  be interesting to have recommendations for changes to the implementation
  of oslo.messaging based on performance testing. I'm not sure you want to
  do full-stack testing for the latter, though.
 
  Either way, I think you would be able to start the testing without any
  changes in oslo.messaging.
 
 
  I agree. I think the first step is to define what to measure and then
  construct an application using olso.messaging that allows the data of
  interest to be captured using different drivers and indeed different
  configurations of a given driver.
 
  I wrote a very simple test application to test one aspect that I felt was
  important, namely the scalability of the RPC mechanism as you increase the
  number of clients and servers involved. The code I used is
  https://github.com/grs/ombt, its probably stale at the moment, I only
  link to it as an example of approach.
 
  Using that test code I was then able to compare performance in this one
  aspect across drivers (the 'rabbit', 'qpid' and new amqp 1.0 based drivers
  _ I wanted to try zmq, but couldn't figure out how to get it working at the
  time), and for different deployment options using a given driver (amqp 1.0
  using qpidd or qpid dispatch router in either standalone or with multiple
  connected routers).
 
  There are of course several other aspects that I think would be important
  to explore: notifications, more specific variations in the RPC 'topology'
  i.e. number of clients on given server number of servers in single group
  etc, and a better tool (or set of tools) would allow all of these to be
  explored.
 
  From my experimentation, I believe the biggest differences in scalability
  are going to come not from optimising the code in oslo.messaging so much as
  choosing different patterns for communication. Those choices may be
  constrained by other aspects as well of course, notably approach to
  reliability.
 
 
 
 After a couple of internal discussions and hours of investigation, I think I've
 found the most applicable solution,
 one that will accomplish the performance testing approach and will eventually
 yield messaging driver
 configuration and AMQP service deployment recommendations.
 
 The solution I've been talking about is already pretty well known across
 OpenStack components - Rally and its scenarios.
 Why would it be the best option? Rally scenarios would not touch the
 messaging core, and scenarios are gate-able.
 Even if we're talking about internal testing, scenarios are very useful in
 this case,
 since they are something that can be tuned/configured taking into account
 environment needs.

 Doug, Gordon, what do you think about bringing scenarios into messaging?

I think I need more detail about what you mean by that.

Doug

 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 Kind regards,
 Denis M.
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Take back the naming process

2015-01-28 Thread Jeremy Stanley
On 2015-01-28 10:29:38 +0100 (+0100), Thierry Carrez wrote:
 The proposal as it stands (https://review.openstack.org/#/c/150604/4)
 currently excludes all non-ATCs from voting, though. The wider
 community was included in previous iterations of the naming process,
 so this very much feels like a TC power grab.
[...]

The only reason I'm in favor of that simplification is logistics. If
representatives of our developer community are expected to run this
poll on our own then ATCs are an electorate we're able to produce
for that purpose. If it's going to be a vote across all OpenStack
Foundation individual members instead, then that election will need
to be run by (or at least in close cooperation with) the same people
who currently manage the board elections.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting Jan 29 1400 UTC

2015-01-28 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting in #openstack-meeting-3 channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meetingiso=20150129T14

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Questions on pep8 F811 hacking check for microversion

2015-01-28 Thread Chen CH Ji
Is there a way to override the rule in our hacking checks (I'm not familiar with
them ...)?
If so, maybe we can do as suggested and avoid F811 for the classes which have
microversion definitions? Thanks
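
For reference, a sketch of the #noqa approach mentioned above (the decorator
and method names are the illustrative ones from the spec, not real Nova code):

    @api_version(min_version='2.1')
    def _version_specific_func(self, req, arg1):
        pass

    @api_version(min_version='2.5')  # noqa
    def _version_specific_func(self, req, arg1):
        pass

Depending on the Python/flake8 version, F811 may be reported on the decorator
line or on the def line, so the # noqa has to sit on whichever line is actually
flagged -- which is part of why it gets ugly to maintain.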

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Christopher Yeoh cbky...@gmail.com
To: openstack-dev@lists.openstack.org
Date:   01/28/2015 09:37 AM
Subject:Re: [openstack-dev] [nova] Questions on pep8 F811 hacking check
for microversion



On Tue, 06 Jan 2015 07:31:19 -0500
Jay Pipes jaypi...@gmail.com wrote:

 On 01/06/2015 06:25 AM, Chen CH Ji wrote:
  Based on nova-specs api-microversions.rst
  we support the following function definition format, but it violates the
  hacking rule pep8 F811 because of the duplicate function definition.
  We should use #noqa for them, but considering microversions may
  live for a long time,
  adding #noqa everywhere may be a little bit ugly; can anyone suggest a
  good solution for it? Thanks
 
  @api_version(min_version='2.1')
  def _version_specific_func(self, req, arg1):
 pass
   
  @api_version(min_version='2.5')
  def _version_specific_func(self, req, arg1):
 pass

 Hey Kevin,

 This was actually one of my reservations about the proposed
 microversioning implementation -- i.e. having functions that are
 named exactly the same, only decorated with the microversioning
 notation. It kinda reminds me of the hell of debugging C++ code that
 uses STL: how does one easily know which method one is in when inside
 a debugger?

 That said, the only other technique we could try to use would be to
 not use a decorator and instead have a top-level dispatch function
 that would inspect the API microversion (only when the API version
 makes a difference to the output or input of that function) and then
 dispatch the call to a helper method that had the version in its name.

 So, for instance, let's say you are calling the controller's GET
 /$tenant/os-hosts method, which happens to get routed to the
 nova.api.openstack.compute.contrib.hosts.HostController.index()
 method. If you wanted to modify the result of that method and the API
 microversion is at 2.5, you might do something like:

   def index(self, req):
   req_api_ver = utils.get_max_requested_api_version(req)
   if req_api_ver == (2, 5):
   return self.index_2_5(req)
   return self.index_2_1(req)

   def index_2_5(self, req):
   results = self.index_2_1(req)
   # Replaces 'host' with 'host_name'
   for result in results:
   result['host_name'] = result['host']
   del result['host']
   return results

   def index_2_1(self, req):
   # Would be a rename of the existing index() method on
   # the controller


So having to manually add switching code every time we have an API
patch is, I think, not only longer and more complicated but more error
prone when updating.
means changing all the microversioned code rather than just the
switching architecture at the core of wsgi.


 Another option would be to use something like JSON-patch to determine
 the difference between two output schemas and automatically translate
 one to another... but that would be a huge effort.

 That's the only other way I can think of besides disabling F811,
 which I really would not recommend, since it's a valuable safeguard
 against duplicate function names (especially duplicated test methods).

So I don't think we need to disable F811 in general - why not just
disable it for any method with the api_version decorator? On those ones
we can do checks on what is passed to api_version which will help
verify that there hasn't been a typo in an api_version decorator.

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Puppet] Manifests for granular deploy steps and testing results against the host OS

2015-01-28 Thread Dmitriy Shulyak
Guys, is it a crazy idea to write tests for the deployment state on a node in
Python?
It can even be done in a unit-test fashion.

I mean there is no strict dependency on a tool from the Puppet world; what is
needed is access to the OS and shell, maybe some utils.
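
A minimal sketch of what such a check could look like (purely illustrative;
service names, ports and process patterns are placeholders):

    import socket
    import subprocess
    import unittest

    class TestControllerDeployState(unittest.TestCase):
        def test_amqp_port_is_listening(self):
            # plain socket access is enough for a lot of these checks
            sock = socket.create_connection(('127.0.0.1', 5672), timeout=2)
            sock.close()

        def test_keystone_process_is_running(self):
            # shell access covers the rest
            out = subprocess.check_output(['pgrep', '-f', 'keystone'])
            self.assertTrue(out.strip())

    if __name__ == '__main__':
        unittest.main()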

 What plans have Fuel Nailgun team for testing the results of deploy steps
aka tasks?
From the nailgun/orchestration point of view, verification of deployment
should be done as another task, or included in the original one.

On Thu, Jan 22, 2015 at 5:44 PM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 Moreover I would suggest to use server spec as beaker is already
 duplicating part of our infrastructure automatization.

 On Thu, Jan 22, 2015 at 6:44 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Guys, I suggest that we create a blueprint how to integrate beaker with
 our existing infrastructure to increase test coverage. My optimistic
 estimate is that we can see its implementation in 7.0.

 On Thu, Jan 22, 2015 at 2:07 AM, Andrew Woodward xar...@gmail.com
 wrote:

 My understanding is serverspec is not going to work well / going to be
 supported. I think it was discusssed on IRC (as i cant find it in my
 email). Stackforge/puppet-ceph moved from ?(something)spec to beaker,
 as its more functional and actively developed.

 On Mon, Jan 12, 2015 at 6:10 AM, Sergii Golovatiuk
 sgolovat...@mirantis.com wrote:
  Hi,
 
  Puppet OpenStack community uses Beaker for acceptance testing. I would
  consider it as option [2]
 
  [2] https://github.com/puppetlabs/beaker
 
  --
  Best regards,
  Sergii Golovatiuk,
  Skype #golserge
  IRC #holser
 
  On Mon, Jan 12, 2015 at 2:53 PM, Bogdan Dobrelya 
 bdobre...@mirantis.com
  wrote:
 
  Hello.
 
  We are working on the modularization of Openstack deployment by puppet
  manifests in Fuel library [0].
 
  Each deploy step should be post-verified with some testing framework
 as
  well.
 
  I believe the framework should:
  * be shipped as a part of Fuel library for puppet manifests instead of
  orchestration or Nailgun backend logic;
  * allow the deployer to verify results right in-place, at the node
 being
  deployed, for example, with a rake tool;
  * be compatible / easy to integrate with the existing orchestration in
  Fuel and Mistral as an option?
 
  It looks like test resources provided by Serverspec [1] are a good
  option, what do you think?
 
  What plans have Fuel Nailgun team for testing the results of deploy
  steps aka tasks? The spec for blueprint gives no a clear answer.
 
  [0]
 
 https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
  [1] http://serverspec.org/resource_types.html
 
  --
  Best regards,
  Bogdan Dobrelya,
  Skype #bogdando_at_yahoo.com
  Irc #bogdando
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Andrew
 Mirantis
 Ceph community


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nominating Melanie Witt for python-novaclient-core

2015-01-28 Thread Russell Bryant
On 01/27/2015 05:41 PM, Michael Still wrote:
 Greetings,
 
 I would like to nominate Melanie Witt for the python-novaclient-core team.
 
 (What is python-novaclient-core? Its a new group which will contain
 all of nova-core as well as anyone else we think should have core
 reviewer powers on just the python-novaclient code).
 
 Melanie has been involved with nova for a long time now. She does
 solid reviews in python-novaclient, and at least two current
 nova-cores have suggested her as ready for core review powers on that
 repository.
 
 Please respond with +1s or any concerns.

+1

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Puppet] Manifests for granular deploy steps and testing results against the host OS

2015-01-28 Thread Sergii Golovatiuk
We need to write tests the way the Puppet community writes them. Though if a user
uses Salt at some stage, it's fine to write tests in Python.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Wed, Jan 28, 2015 at 11:15 PM, Dmitriy Shulyak dshul...@mirantis.com
wrote:

 Guys, is it crazy idea to write tests for deployment state on node in
 python?
 It even can be done in unit tests fashion..

 I mean there is no strict dependency on tool from puppet world, what is
 needed is access to os and shell, maybe some utils.

  What plans have Fuel Nailgun team for testing the results of deploy steps
 aka tasks?
 From nailgun/orchestration point of view - verification of deployment
 should be done as another task, or included in original.

 On Thu, Jan 22, 2015 at 5:44 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Moreover I would suggest to use server spec as beaker is already
 duplicating part of our infrastructure automatization.

 On Thu, Jan 22, 2015 at 6:44 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Guys, I suggest that we create a blueprint how to integrate beaker with
 our existing infrastructure to increase test coverage. My optimistic
 estimate is that we can see its implementation in 7.0.

 On Thu, Jan 22, 2015 at 2:07 AM, Andrew Woodward xar...@gmail.com
 wrote:

 My understanding is serverspec is not going to work well / going to be
 supported. I think it was discusssed on IRC (as i cant find it in my
 email). Stackforge/puppet-ceph moved from ?(something)spec to beaker,
 as its more functional and actively developed.

 On Mon, Jan 12, 2015 at 6:10 AM, Sergii Golovatiuk
 sgolovat...@mirantis.com wrote:
  Hi,
 
  Puppet OpenStack community uses Beaker for acceptance testing. I would
  consider it as option [2]
 
  [2] https://github.com/puppetlabs/beaker
 
  --
  Best regards,
  Sergii Golovatiuk,
  Skype #golserge
  IRC #holser
 
  On Mon, Jan 12, 2015 at 2:53 PM, Bogdan Dobrelya 
 bdobre...@mirantis.com
  wrote:
 
  Hello.
 
  We are working on the modularization of Openstack deployment by
 puppet
  manifests in Fuel library [0].
 
  Each deploy step should be post-verified with some testing framework
 as
  well.
 
  I believe the framework should:
  * be shipped as a part of Fuel library for puppet manifests instead
 of
  orchestration or Nailgun backend logic;
  * allow the deployer to verify results right in-place, at the node
 being
  deployed, for example, with a rake tool;
  * be compatible / easy to integrate with the existing orchestration
 in
  Fuel and Mistral as an option?
 
  It looks like test resources provided by Serverspec [1] are a good
  option, what do you think?
 
  What plans have Fuel Nailgun team for testing the results of deploy
  steps aka tasks? The spec for blueprint gives no a clear answer.
 
  [0]
 
 https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
  [1] http://serverspec.org/resource_types.html
 
  --
  Best regards,
  Bogdan Dobrelya,
  Skype #bogdando_at_yahoo.com
  Irc #bogdando
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Andrew
 Mirantis
 Ceph community


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 

Re: [openstack-dev] [Magnum] Proposed Changes to Magnum Core

2015-01-28 Thread Davanum Srinivas
Welcome Hongbin. +1 from me!

On Wed, Jan 28, 2015 at 2:27 PM, Adrian Otto adrian.o...@rackspace.com wrote:
 Magnum Cores,

 I propose the following addition to the Magnum Core group[1]:

 + Hongbin Lu (hongbin034)

 Please let me know your votes by replying to this message.

 Thanks,

 Adrian

 [1] https://review.openstack.org/#/admin/groups/473,members Current Members

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-01-28 Thread Thomas Goirand
On 01/28/2015 08:56 PM, Sean Dague wrote:
 There is a new stackforge project which is getting some activity now -
 https://github.com/stackforge/ec2-api. The intent and hope is that is
 the path forward for the portion of the community that wants this
 feature, and that efforts will be focused there.

I'd be happy to provide a Debian package for this; however, there's not
even a single git tag there. That's not so nice for tracking issues.
Who's working on it?

Also, is this supposed to be branch-less? Or will it follow juno/kilo/l... ?

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] pyCADF 0.7.1 released

2015-01-28 Thread gordon chung
 The pyCADF team is pleased to announce the release of pyCADF 0.7.1.  This
release includes several bug fixes as well as many other changes:

d59675b Add new CADF taxonomy types
8b82468 Pull out some CADF taxonomy to be constants
84544aa Updated from global requirements

For more details, please see the git log history below and
https://launchpad.net/pycadf/+milestone/0.7.1

Please report issues through launchpad: https://launchpad.net/pycadf

gord
  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.db] PyMySQL review

2015-01-28 Thread Mike Bayer
Hey list -

As many are aware, we’ve been looking for the one MySQL driver to rule them 
all.  As has been the case for some weeks now, that driver is PyMySQL, meeting 
all the critical requirements we have: 1. pure Python, so eventlet patchable; 
2. Python 3 support; 3. released on Pypi (which is what disqualifies 
MySQL-connector-Python).

I have experience with PyMySQL and I was at first a little skeptical that it’s 
ready for openstack’s level of activity, so I offered to have oslo.db write up 
a full review of everything we know about all the MySQL drivers, including that 
I’d evaluate PyMySQL and try to put some actual clue behind my vague notions.   

I finally got around to writing up the code review portion, so that we at least 
have awareness of what we’re getting with PyMySQL.  This is also a document 
that I’m very much hoping we can use to get the PyMySQL developers involved 
with.   Basically PyMySQL is fine, it lacks some polish and test coverage that 
can certainly be added, and as far as performance, we’re going to be really 
disappointed with pure-Python MySQL drivers in general, though PyMySQL itself 
can still be improved within the realm of pure Python.  It at least allows for 
eventlet monkey patching, so that we will for the first time be able to observe 
the benefits of some real database concurrency ever since Openstack decided not 
to use threads (if Openstack did in fact ever use normal threads…), which also 
means we will be able to observe the headaches of real database concurrency, 
especially the really scary ones I’ve already observed when working with 
eventlet/gevent style monkeypatching :).
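
To make the monkey patching point concrete, here is the kind of minimal sketch I
have in mind (illustrative only; the connection URL, pool sizing and the query
are made up):

    import eventlet
    eventlet.monkey_patch()  # patch socket I/O so PyMySQL yields to other greenthreads

    from sqlalchemy import create_engine, text

    # PyMySQL dialect; the DSN is a placeholder
    engine = create_engine("mysql+pymysql://user:secret@localhost/test",
                           pool_size=10, max_overflow=0)

    def worker(n):
        with engine.connect() as conn:
            # SELECT SLEEP(1) stands in for a slow query; a C-based driver would
            # block the whole eventlet hub here, while PyMySQL's pure-Python
            # socket I/O yields, so the ten queries overlap
            conn.execute(text("SELECT SLEEP(1)"))
        return n

    pool = eventlet.GreenPool(10)
    print(list(pool.imap(worker, range(10))))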

While PyMySQL is lacking test coverage in some areas, has no external 
documentation, and has at least some areas where Python performance can be 
improved, the basic structure of the driver is perfectly fine and 
straightforward.  I can envision turning this driver into a total monster, 
adding C-speedups where needed but without getting in the way of async 
patching, adding new APIs for explicit async, and everything else.   However, 
I’ve no idea what the developers have an appetite for.

Please review the document at 
https://wiki.openstack.org/wiki/PyMySQL_evaluation.

- mike





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Manila driver for CephFS

2015-01-28 Thread Valeriy Ponomaryov
Hello Jake,

The main thing to mention is that the blueprint has no assignee.
It was also created a long time ago and has had no activity since.
I have not heard of any intentions around it, nor seen even any drafts.

So I guess it is open for volunteers.

Regards,
Valeriy Ponomaryov

On Wed, Jan 28, 2015 at 11:30 PM, Jake Kugel jku...@us.ibm.com wrote:

 Hi,

 I see there is a blueprint for a Manila driver for CephFS here [1].  It
 looks like it was opened back in 2013 but still in Drafting state.  Does
 anyone know more status about this one?

 Thank you,
 -Jake

 [1]  https://blueprints.launchpad.net/manila/+spec/cephfs-driver


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Fuel-Library] MVP implementation of Granular Deployment merged into Fuel master branch

2015-01-28 Thread Dmitriy Shulyak
Andrew,
What should be sorted out? It is unavoidable that people will comment and
ask questions during the development cycle.
I am not sure that merging the spec as early as possible and then adding
comments and fixes afterwards is a good strategy.
On the other hand, we do need to eliminate risks... but how does merging
the spec help with that?

On Wed, Jan 28, 2015 at 8:49 PM, Andrew Woodward xar...@gmail.com wrote:

 Vova,

 It's great to see so much progress on this; however, it appears that we
 have started merging code prior to the spec landing [0]. Let's get it
 sorted ASAP.

 [0] https://review.openstack.org/#/c/113491/

 On Mon, Jan 19, 2015 at 8:21 AM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:
  Hi, Fuelers and Stackers
 
  I am glad to announce that we merged initial support for granular
 deployment
  feature which is described here:
 
 
 https://blueprints.launchpad.net/fuel/+spec/granular-deployment-based-on-tasks
 
  This is an important milestone for our overall deployment and operations
  architecture as well as it is going to significantly improve our testing
 and
  engineering process.
 
  Starting from now we can start merging code for:
 
  https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
  https://blueprints.launchpad.net/fuel/+spec/fuel-library-modular-testing
 
  We are still working on documentation and QA stuff, but it should be
 pretty
  simple for you to start trying it out. We would really appreciate your
  feedback.
 
  Existing issues are the following:
 
  1) pre and post deployment hooks are still out of the scope of main
  deployment graph
  2) there is currently only puppet task provider working reliably
  3) no developer published documentation
  4) acyclic graph testing not injected into CI
  5) there is currently no opportunity to execute particular task - only
 the
  whole deployment (code is being reviewed right now)
 
  --
  Yours Faithfully,
  Vladimir Kuklin,
  Fuel Library Tech Lead,
  Mirantis, Inc.
  +7 (495) 640-49-04
  +7 (926) 702-39-68
  Skype kuklinvv
  45bk3, Vorontsovskaya Str.
  Moscow, Russia,
  www.mirantis.com
  www.mirantis.ru
  vkuk...@mirantis.com
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Andrew
 Mirantis
 Fuel community ambassador
 Ceph community

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins][Orchestration] Unclear handling of primary-controler and controller roles

2015-01-28 Thread Andrew Woodward
On Wed, Jan 28, 2015 at 3:06 AM, Evgeniy L e...@mirantis.com wrote:
 Hi,

 +1 for having primary-controller role in terms of deployment.

Yes, we need to continue to be able to differentiate between the first
node in a set of roles and all the others.

For controllers we have logic around how the services start and whether
we attempt to create resources. This allows the deployment to run more
smoothly.
For mongo the logic is used to set up the primary vs backup data nodes.
For plugins I would expect to continue to see this kind of need, and we
would need to be able to expose similar logic when adding roles/tasks.

However, I'm not sure that we need to do this with some kind of role;
it could simply be a parameter that we then use to set the conditional
we already use to apply the primary logic. Alternately, it could lead
to the inclusion of 'primary' or 'first node' tasks that do this
specific work without needing the conditional at all.

 In our tasks user should be able to run specific task on primary-controller.
 But I agree that it can be tricky because after the cluster is deployed, we
 cannot say who is really primary, is there a case when it's important to
 know
 who is really primary after deployment is done?

For mongo, it's important to find out who is currently the primary
prior to deployment starting (which may not be the primary that the
deployment started with), so it may be a special case.

For controller, it's irrelevant as long as primary is not set to a newly
added node (a node with a lower node.id will cause this and create
problems).

 Also I would like to mention that in plugins user currently can write
 'roles': ['controller'],
 which means that the task will be applied on 'controller' and
 'primary-controller' nodes.
 Plugin developer can get this information from astute.yaml file. But I'm
 curious if we
 should change this behaviour for plugins (with backward compatibility of
 course)?


Writing roles: ['controller'] should apply to all controllers as
expected, with the addition of roles: ['primary-controller'] applying
only to the primary controller.
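
For what it's worth, if we go the 'let the task decide' route, the check inside
a task could be as small as the sketch below (rough and untested; the
/etc/astute.yaml path and the 'role' key are my assumptions here, not a settled
interface):

    # Sketch: decide inside the task whether this node runs the "primary only"
    # steps, assuming the node's astute.yaml carries a 'role' value such as
    # 'primary-controller' or 'controller'.
    import yaml

    def is_primary_controller(path='/etc/astute.yaml'):
        with open(path) as f:
            data = yaml.safe_load(f) or {}
        return data.get('role') == 'primary-controller'

    if is_primary_controller():
        pass  # bootstrap the cluster, create shared resources, etc.
    else:
        pass  # join the already-bootstrapped cluster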
 Thanks,


 On Wed, Jan 28, 2015 at 1:07 PM, Aleksandr Didenko adide...@mirantis.com
 wrote:

 Hi,

 we definitely need such separation on orchestration layer.

  Is it possible to have significantly different sets of tasks for
  controller and primary-controller?

 Right now we already do different things on primary and secondary
 controllers, but it's all conducted in the same manifest and controlled by
 conditionals inside the manifest. So when we split our tasks into smaller
 ones, we may want/need to separate them for primary and secondary
 controllers.

  I wouldn't differentiate tasks for primary and other controllers.
  Primary-controller logic should be controlled by task itself. That will
  allow to have elegant and tiny task framework

 Sergii, we still need this separation on the orchestration layer and, as
 you know, our deployment process is based on it. Currently we already have
 separate task groups for primary and secondary controller roles. So it will
 be up to the task developer how to handle some particular task for different
 roles: developer can write 2 different tasks (one for 'primary-controller'
 and the other one for 'controller'), or he can write the same task for both
 groups and handle differences inside the task.

 --
 Regards,
 Aleksandr Didenko


 On Wed, Jan 28, 2015 at 11:25 AM, Dmitriy Shulyak dshul...@mirantis.com
 wrote:

 But without this separation on orchestration layer, we are unable to
 differentiate between nodes.
 What i mean is - we need to run subset of tasks on primary first and then
 on all others, and we are using role as mapper
 to node identities (and this mechanism was hardcoded in nailgun for a
 long time).

 Lets say we have task A that is mapped to primary-controller and B that
 is mapped to secondary controller, task B requires task A.
 If there is no primary in mapping - we will execute task A on all
 controllers and then task B on all controllers.

 And how in such case deployment code will know that it should not execute
 commands in task A for secondary controllers and
 in task B on primary ?

 On Wed, Jan 28, 2015 at 10:44 AM, Sergii Golovatiuk
 sgolovat...@mirantis.com wrote:

 Hi,

 But with introduction of plugins and granular deployment, in my opinion,
 we need to be able
 to specify that task should run specifically on primary, or on
 secondaries. Alternative to this approach would be - always run task on all
 controllers, and let task itself to verify that it is  executed on primary
 or not.

 I wouldn't differentiate tasks for primary and other controllers.
 Primary-controller logic should be controlled by task itself. That will
 allow to have elegant and tiny task framework ...

 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 On Tue, Jan 27, 2015 at 11:35 PM, Dmitriy Shulyak
 dshul...@mirantis.com 

Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-28 Thread Mike Bayer


Mike Bayer mba...@redhat.com wrote:

 Hey list -
 
 As many are aware, we’ve been looking for the one MySQL driver to rule them 
 all.  As has been the case for some weeks now, that driver is PyMySQL, 
 meeting all the critical requirements we have of:  1. pure python, so 
 eventlet patchable, 2. Python 3 support 3. Is released on Pypi (which is what 
 disqualifies MySQL-connector-Python).
 
 I have experience with PyMySQL and I was at first a little skeptical that 
 it’s ready for openstack’s level of activity, so I offered to have oslo.db 
 write up a full review of everything we know about all the MySQL drivers, 
 including that I’d evaluate PyMySQL and try to put some actual clue behind my 
 vague notions.   
 
 I finally got around to writing up the code review portion, so that we at 
 least have awareness of what we’re getting with PyMySQL.  This is also a 
 document that I’m very much hoping we can use to get the PyMySQL developers 
 involved with.   Basically PyMySQL is fine, it lacks some polish and test 
 coverage that can certainly be added, and as far as performance, we’re going 
 to be really disappointed with pure-Python MySQL drivers in general, though 
 PyMySQL itself can still be improved within the realm of pure Python.  It at 
 least allows for eventlet monkey patching, so that we will for the first time 
 be able to observe the benefits of some real database concurrency ever since 
 Openstack decided not to use threads (if Openstack did in fact ever use 
 normal threads…), which also means we will be able to observe the headaches 
 of real database concurrency, especially the really scary ones I’ve already 
 observed when working with eventlet/gevent style monkeypatching :).
 
 While PyMySQL is lacking test coverage in some areas, has no external 
 documentation, and has at least some areas where Python performance can be 
 improved, the basic structure of the driver is perfectly fine and 
 straightforward.  I can envision turning this driver into a total monster, 
 adding C-speedups where needed but without getting in the way of async 
 patching, adding new APIs for explicit async, and everything else.   However, 
 I’ve no idea what the developers have an appetite for.
 
 Please review the document at 
 https://wiki.openstack.org/wiki/PyMySQL_evaluation.

I’ve also landed onto planet PyMySQL, walked down the gangway to the planet’s 
surface, and offered greetings and souvenirs from planet Openstack: 
https://groups.google.com/forum/#!topic/pymysql-users/LfLBD1zcMpY.   I now 
await the response of PyMySQL’s leaders, whether it be offers of peace or an 
armada of laser cannons (or just the so-typical crickets chirping), to our kind 
world’s offer of friendship.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] fixed ip info shown for port even when dhcp is disabled

2015-01-28 Thread Padmanabhan Krishnan
Some follow up questions on this.
In the specs, i see that during a create_port,  there's provisions to query 
the external source by  Pluggable IPAM to return the IP.This works fine for 
cases where the external source (say, DHCP server) can be queried for the IP 
address when a launch happens.
Is there a provision to have the flexibility of a late IP assignment?
 I am thinking of cases, like the temporary unavailability of external IP 
source or lack of standard interfaces in which case data packet snooping is 
used to find the IP address of a VM after launch. Something similar to late 
binding of IP addresses.This means the create_port  may not get the IP address 
from the pluggable IPAM. In that case, launch of a VM (or create_port) 
shouldn't fail. The Pluggable IPAM should have some provision to return 
something equivalent to unavailable during create_port and be able to do an 
update_port when the IP address becomes available.
I don't see that option. Please correct me if I am wrong.
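
To illustrate the kind of behaviour I am asking about, here is a rough sketch
(a made-up interface, not the driver API from the spec; all names are
placeholders):

    # Hypothetical deferred-allocation IPAM sketch; not the actual Neutron
    # pluggable IPAM interface.
    class LateBindingIPAM(object):
        def __init__(self):
            self._pending = set()  # ports still waiting for an external IP

        def _query_external_source(self, subnet_id):
            # placeholder: ask the external IPAM/DHCP system; None if unavailable
            return None

        def allocate(self, port_id, subnet_id):
            ip = self._query_external_source(subnet_id)
            if ip is None:
                # external source unavailable: do not fail the port/VM creation,
                # just remember that this port still needs an address
                self._pending.add(port_id)
                return None
            return ip

        def external_ip_learned(self, port_id, ip):
            # called later, e.g. when snooping discovers the VM's real address;
            # this is where the equivalent of an update_port would happen
            if port_id in self._pending:
                self._pending.discard(port_id)
                print('update_port(%s, %s)' % (port_id, ip))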
Thanks,
Paddu
 

 On Thursday, December 18, 2014 7:59 AM, Padmanabhan Krishnan 
kpr...@yahoo.com wrote:
   

 Hi John,
Thanks for the pointers. I shall take a look and get back.
Regards,
Paddu
 

 On Thursday, December 18, 2014 6:23 AM, John Belamaric 
jbelama...@infoblox.com wrote:
   

 Hi Paddu,

Take a look at what we are working on in Kilo [1] for external IPAM. While this
does not address DHCP specifically, it does allow you to use an external source
to allocate the IP that OpenStack uses, which may solve your problem.

Another solution to your question is to invert the logic - you need to take the
IP allocated by OpenStack and program the DHCP server to provide a fixed IP for
that MAC.

You may be interested in looking at this Etherpad [2] that Don Kehn put
together gathering all the various DHCP blueprints and related info, and also
at this BP [3] for including a DHCP relay so we can utilize external DHCP more
easily.

[1] https://blueprints.launchpad.net/neutron/+spec/neutron-ipam
[2] https://etherpad.openstack.org/p/neutron-dhcp-org
[3] https://blueprints.launchpad.net/neutron/+spec/dhcp-relay
John
From: Padmanabhan Krishnan kpr...@yahoo.com
Reply-To: Padmanabhan Krishnan kpr...@yahoo.com, OpenStack Development 
Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Wednesday, December 17, 2014 at 6:06 PM
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] fixed ip info shown for port even when 
dhcp is disabled

This means whatever tools the operators are using, they need to make sure the
IP address assigned inside the VM matches what OpenStack has assigned to the
port. Bringing up the question that I had in another thread on the same topic:

If one wants to use the provider DHCP server and not have OpenStack's DHCP or
L3 agent/DVR, it may not be possible to do so even with DHCP disabled in the
OpenStack network. Even if the provider DHCP server is configured with the same
start/end range in the same subnet, there's no guarantee that it will match
the OpenStack-assigned IP address for bulk VM launches or when there's a
failure case. So, how does one deploy external DHCP with OpenStack?

If OpenStack hasn't assigned an IP address when DHCP is disabled for a network,
can't a port_update be done with the provider-DHCP-specified IP address to put
the anti-spoofing and security rules in place? With the OpenStack-assigned IP
address, port_update cannot be done since the IP addresses aren't in sync and
can overlap.
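
If it were allowed, the port_update I have in mind would look roughly like this
(a sketch using python-neutronclient; the credentials, port UUID and address
are placeholders):

    # Push the address handed out by the provider DHCP server back onto the
    # Neutron port so the anti-spoofing / security group rules match reality.
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    port_id = 'PORT-UUID'  # placeholder
    body = {'port': {'fixed_ips': [{'ip_address': '10.0.0.42'}]}}
    neutron.update_port(port_id, body)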
Thanks,
Paddu



On 12/16/14 4:30 AM, Pasquale Porreca pasquale.porr...@dektech.com.au
wrote:

I understood and I agree that assigning the ip address to the port is
not a bug, however showing it to the user, at least in Horizon dashboard
where it pops up in the main instance screen without a specific search,
can be very confusing.

On 12/16/14 12:25, Salvatore Orlando wrote:
 In Neutron IP address management and distribution are separated
concepts.
 IP addresses are assigned to ports even when DHCP is disabled. That IP
 address is indeed used to configure anti-spoofing rules and security
groups.
 
 It is however understandable that one wonders why an IP address is
assigned
 to a port if there is no DHCP server to communicate that address.
Operators
 might decide to use different tools to ensure the IP address is then
 assigned to the instance's ports. On XenServer for instance one could
use a
 guest agent reading network configuration from XenStore; as another
 example, older versions of Openstack used to inject network
configuration
 into the instance file system; I reckon that today's configdrive might
also
 be used to configure instance's networking.
 
 Summarising I don't think this is a bug. Nevertheless if you have any
idea
 regarding improvements on the API UX feel free to file a bug report.
 
 Salvatore
 
 On 16 December 2014 at 10:41, Pasquale Porreca 
 pasquale.porr...@dektech.com.au wrote:

 Is there a specific reason for 

[openstack-dev] [nova] The libvirt.cpu_mode and libvirt.cpu_model

2015-01-28 Thread Jiang, Yunhong
Hi, Daniel
    I recently tried libvirt.cpu_mode and libvirt.cpu_model while I was
working on cpu_info related code and found bug
https://bugs.launchpad.net/nova/+bug/1412994 . The reason is that with these
two flags all guests launched on the host will use them, while when the host
reports back its compute capabilities, it reports the real hardware
capabilities instead of the capabilities masked by these two configs.

I think the key thing is, these two flags are per-instance properties 
instead of per-host properties. 

    If we do want to keep them as per-host properties, the libvirt driver
should return the capabilities as altered by these two items; but the problem
is, I checked the libvirt docs and it seems we can't get the cpu_info for a
custom cpu_mode.

    How about removing these two config items? I also don't think we should
present the cpu_mode/model options to the end user; instead, we should only
expose a feature request (e.g. disable/force certain CPU features) and let the
libvirt driver select the cpu_mode/model based on the user's feature
requirements.
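
To make that concrete, something along these lines is what I am picturing
(purely illustrative; the helper and the example feature names are made up,
not an existing Nova interface):

    # Sketch: turn a per-instance feature request into a libvirt <cpu> element.
    import xml.etree.ElementTree as ET

    def build_cpu_xml(required=(), forbidden=()):
        # host-model keeps the guest close to the host CPU while still letting
        # us force or disable individual flags per instance
        cpu = ET.Element('cpu', mode='host-model')
        for name in required:
            ET.SubElement(cpu, 'feature', policy='require', name=name)
        for name in forbidden:
            ET.SubElement(cpu, 'feature', policy='disable', name=name)
        return ET.tostring(cpu, encoding='unicode')

    # e.g. an instance that wants AES-NI but must not see x2apic
    print(build_cpu_xml(required=['aes'], forbidden=['x2apic']))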

Your opinion?

Thanks
--jyh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Proposed Changes to Magnum Core

2015-01-28 Thread Adrian Otto
Magnum Cores,

I propose the following addition to the Magnum Core group[1]:

+ Hongbin Lu (hongbin034)

Please let me know your votes by replying to this message.

Thanks,

Adrian

[1] https://review.openstack.org/#/admin/groups/473,members Current Members

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Proposed Changes to Magnum Core

2015-01-28 Thread Steven Dake

On 01/28/2015 03:27 PM, Adrian Otto wrote:

Magnum Cores,

I propose the following addition to the Magnum Core group[1]:

+ Hongbin Lu (hongbin034)

Please let me know your votes by replying to this message.

Thanks,

Adrian

+1

Regards
-steve


[1] https://review.openstack.org/#/admin/groups/473,members Current Members

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] core team changes

2015-01-28 Thread Qiming Teng
Thanks to the team for the trust. It's my pleasure to work with you.

Regards,
  Qiming


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Take back the naming process

2015-01-28 Thread Adam Lawson
I'm short on time so I apologize for my candor since I need to get straight
to the point.

I love reading the various opinions, and my team is immensely excited with
how OpenStack is maturing. But this is lunacy.

I looked at the patch being worked on [1] to change how things are done and
have more questions than I can count.
So I'll start with the obvious ones:

   - Are you proposing this change as a Foundation Individual Board
   Director tasked with representing the interests of all Individual Members
   of the OpenStack community or as a member of the TC? Context matters
   because your two hats are presenting a conflict of interest in my opinion.
   One cannot propose a change that gives them greater influence while
   suggesting they're doing it for everyone's benefit.
   - How is fun remotely relevant when discussing process improvement?
   I'm really hoping we aren't developing processes based on how fun a process
   is or isn't.
   - Why is this discussion being limited to the development community
   only? Where's the openness in that?
   - What exactly is the problem we're attempting to fix?
   - Does the current process not work?
   - Is there a group of individuals being disenfranchised by our current
   process somehow, suggesting the process should limit participation
   differently?

And some questions around the participation proposals:

   - Why is the election process change proposing to limit participation to
   ATC members only?
   There are numerous enthusiasts within our community that don't fall
   within the ATC category such as marketing (as some have brought up),
   corporate sponsors (where I live) and I'm sure there are many more.
   - Is taking back the process a hint that the current process is being
   mishandled, or does it restore a sense of process control?
   - Is the presumption that the election process belongs to someone or
   some group?
   That strikes me as an incredibly subjective assertion to make.

<opinion>This is one reason I feel so strongly folks should not be allowed
to hold more than one position of leadership within the OpenStack project.
Obfuscated context coupled with increased influence rarely produces
excellence on either front. But that's me.</opinion>

Mahalo,
Adam

[1] https://review.openstack.org/#/c/150604/


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


On Wed, Jan 28, 2015 at 10:23 AM, Anita Kuno ante...@anteaya.info wrote:

 On 01/28/2015 11:36 AM, Thierry Carrez wrote:
  Monty Taylor wrote:
  What if, to reduce stress on you, we make this 100% mechanical:
 
  - Anyone can propose a name
  - Election officials verify that the name matches the criteria
  -  * note: how do we approve additive exceptions without tons of effort
 
  Devil is in the details, as reading some of my hatemail would tell you.
  For example in the past I rejected Foo which was proposed because
  there was a Foo Bar landmark in the vicinity. The rules would have to
  be pretty detailed to be entirely objective.
 Naming isn't objective. That is both the value and the hardship.
 
  - Marketing team provides feedback to the election officials on names
  they find image-wise problematic
  - The poll is created with the roster of all foundation members
  containing all of the choices, but with the marketing issues clearly
  labeled, like this:
 
  * Love
  * Lumber
 Ohh, it gives me a thrill to see a name that means something even
 remotely Canadian. (not advocating it be added to this round)
  * Lettuce
  * Lemming - marketing issues identified
 
  - post poll - foundation staff run trademarks checks on the winners in
  order until a legally acceptable winner is found
 
  This way nobody is excluded, it's not a burden on you, it's about as
  transparent as it could be - and there are no special privileges needed
  for anyone to volunteer to be an election official.
 
  I'm going to continue to advocate that we use condorcet instead of a
  launchpad poll because we need the ability to rank things for post-vote
  trademark checks to not get weird. (also, we're working on getting off
  of launchpad, so let's not re-add another connection)
 
  It's been some time since we last used a Launchpad poll. I recently used
  an open surveymonkey poll, which allowed crude ranking. Agree that
  Condorcet is better, as long as you can determine a clear list of voters.
 

 Glad we are talking about this,
 Anita.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
