Re: [openstack-dev] [nova][NFV] VIF_VHOSTUSER

2014-09-01 Thread loy wolfe
If the neutron-side MD is just for snabbswitch, then I think there is no
chance of it being merged into the tree. Maybe we can learn from the SR-IOV
NIC: although the backend is vendor-specific, the MD is generic and can
support snabb, dpdkovs, and other userspace vswitches, etc.

As the reference implementation for CI, snabb can work in a very simple
mode without the need for an agent (just like the agentless SR-IOV VEB NIC).


On Sun, Aug 31, 2014 at 9:36 PM, Itzik Brown itz...@dev.mellanox.co.il
wrote:


 On 8/30/2014 11:22 PM, Ian Wells wrote:

 The problem here is that you've removed the vif_driver option and now
 you're preventing the inclusion of named VIF types into the generic driver,
 which means that rather than adding a package to an installation to add
 support for a VIF driver it's now necessary to change the Nova code (and
 repackage it, or - ew - patch it in place after installation).  I
 understand where you're coming from but unfortunately the two changes
 together make things very awkward.  Granted that vif_driver needed to go
 away - it was the wrong level of code and the actual value was coming from
 the wrong place anyway (nova config and not Neutron) - but it's been
 removed without a suitable substitute.

 It's a little late for a feature for Juno, but I think we need to write
 something that discovers VIF types installed on the system.  That way you can
 add a new VIF type to Nova by deploying a package (and perhaps naming it in
 config as an available selection to offer to Neutron) *without* changing
 the Nova tree itself.
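Purely as an illustration of the discovery idea above (not the actual Nova design): such discovery could be built on setuptools entry points, so that installing a package advertising an entry point in an agreed group is enough to register a new VIF type. The group name "nova.vif_plug" is an assumption, and in the 2014 timeframe this would have used pkg_resources or stevedore rather than the modern stdlib importlib.metadata.

```python
# Hedged sketch of plugin discovery via entry points; the group name
# "nova.vif_plug" is hypothetical.
from importlib.metadata import entry_points

def discover_vif_drivers(group="nova.vif_plug"):
    """Map VIF type name -> driver factory for every installed plugin."""
    drivers = {}
    try:
        eps = entry_points(group=group)          # Python 3.10+ API
    except TypeError:
        eps = entry_points().get(group, [])      # Python 3.8/3.9 API
    for ep in eps:
        drivers[ep.name] = ep.load()
    return drivers

if __name__ == "__main__":
    # With no such packages installed, this prints an empty list.
    print(sorted(discover_vif_drivers()))
```

A package would then register a driver in its setup.cfg/setup.py under that group, and Nova could offer the discovered names to Neutron without any change to its own tree.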


 In the meantime, I recommend you consult with the Neutron cores and see
 if you can make an exception for the VHOSTUSER driver for the current
 timescale.
 --
 Ian.

 I agree with Ian.
 My understanding from a conversation a month ago was that there would be
 an alternative to the deprecated config option.
 As far as I understand, there is now no such alternative in Juno, and
 anyone with an out-of-tree VIF driver will be left with a
 broken solution.
 What do you say about the option of reverting the change?
 In any case, it might be a good idea to discuss proposals to address this
 issue ahead of the Kilo summit.

 BR,
 Itzik


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Rally scenario Issue

2014-09-01 Thread masoom alam
How did you solve this syntax error? We are getting the same error:

# rally -v task start rally/doc/samples/tasks/scenarios/vm/boot-runcommand-
delete.json
Command failed, please check log for more info
2014-09-01 07:31:52.606 9155 CRITICAL rally [-] ParserError: while parsing
a flow mapping
  in string, line 23, column 24:
context: {
   ^
expected ',' or '}', but got 'scalar'
  in string, line 28, column 18:
 neutron_network: {
 ^
2014-09-01 07:31:52.606 9155 TRACE rally Traceback (most recent call last):
2014-09-01 07:31:52.606 9155 TRACE rally   File /usr/local/bin/rally,
line 10, in 

Any clue?

Thanks
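For what it's worth, the ParserError above points to a plain syntax problem in the task file itself. A quick, hedged way to locate such errors before invoking Rally (illustrative only, not Rally's own code) is to run the file through a JSON parser, which reports the exact line and column:

```python
# Hedged illustration: validate a task file's syntax with the stdlib
# json module. The "bad" fragment mimics the error in the traceback,
# where keys/values inside a mapping are missing quotes.
import json

def validate(text):
    try:
        json.loads(text)
        return "OK"
    except json.JSONDecodeError as exc:
        return "line %d col %d: %s" % (exc.lineno, exc.colno, exc.msg)

bad = '{"context": {"neutron_network": {cidr: 10.0.0.0/24}}}'
good = '{"context": {"neutron_network": {"cidr": "10.0.0.0/24"}}}'
print(validate(bad))   # reports a syntax error with location
print(validate(good))  # OK
```

To check a real file, read it with `open(path).read()` and pass the text to `validate`.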


On Sat, Aug 30, 2014 at 12:42 AM, Ajay Kalambur (akalambu) 
akala...@cisco.com wrote:

  Hi
 I am trying to run the Rally scenario boot-runcommand-delete. This
 scenario has the following code
    def boot_runcommand_delete(self, image, flavor,
                               script, interpreter, username,
                               fixed_network="private",
                               floating_network="public",
                               ip_version=4, port=22,
                               use_floatingip=True, **kwargs):
        server = None
        floating_ip = None
        try:
            print("fixed network:%s floating network:%s"
                  % (fixed_network, floating_network))
            server = self._boot_server(
                self._generate_random_name("rally_novaserver_"),
                image, flavor, key_name='rally_ssh_key', **kwargs)

            *self.check_network(server, fixed_network)*

  My question: the instance is created with a call to
 boot_server, but no networks are attached to this server instance. The next
 step checks whether the fixed network is attached to the instance, and
 sure enough it fails at the step highlighted in bold. Also, I cannot see
 this failure unless I run rally with the -v -d options. So it actually
 reports benchmark scenario numbers in a table with no errors when I run with
 rally task start boot-and-delete.json
 and reports results.

  First, what am I missing in this case? The thing is, I am
 using Neutron, not nova-network.
 Second, when most of the steps in the scenario failed (attaching to the
 network, SSH, and running the command), why bother reporting the results?

  Ajay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] (Non-)consistency of the Ironic hash ring implementation

2014-09-01 Thread Nejc Saje

Hey guys,

in Ceilometer we're using consistent hash rings to do workload 
partitioning[1]. We've considered generalizing your hash ring 
implementation and moving it up to oslo, but unfortunately your 
implementation is not actually consistent, which is our requirement.


Because you divide your ring into a number of equal-sized partitions, 
instead of hashing the hosts onto the ring, adding a new host remaps 
an unbounded number of keys to different hosts (instead of 
the ~1/#nodes remapping guaranteed by a consistent hash ring). I've 
confirmed this with the test in the aforementioned patch[2].
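For concreteness, here is a minimal toy sketch (not the Ironic or Ceilometer implementation) of a ring that hashes hosts onto the ring with virtual replicas, demonstrating the ~1/#nodes remapping property when a host is added:

```python
# Toy consistent hash ring: each host is hashed onto the ring many
# times ("replicas") to smooth the key distribution; a key maps to the
# first host position clockwise from the key's own hash.
import bisect
import hashlib

class HashRing(object):
    def __init__(self, hosts, replicas=100):
        self._ring = []  # sorted list of (position, host)
        for host in hosts:
            for i in range(replicas):
                pos = self._hash("%s-%d" % (host, i))
                bisect.insort(self._ring, (pos, host))

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)

    def get_host(self, key):
        # First host at or after the key's position, wrapping around.
        idx = bisect.bisect(self._ring, (self._hash(key), ""))
        return self._ring[idx % len(self._ring)][1]

keys = ["key-%d" % i for i in range(1000)]
ring3 = HashRing(["a", "b", "c"])
ring4 = HashRing(["a", "b", "c", "d"])  # one host added
moved = sum(1 for k in keys if ring3.get_host(k) != ring4.get_host(k))
print("remapped %d/1000 keys" % moved)  # roughly 1/4 should move
```

With a fixed-partition scheme, by contrast, adding a host shifts the partition-to-host assignment wholesale, which is the unbounded remapping described above.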


If this is good enough for your use-case, great, otherwise we can get a 
generalized hash ring implementation into oslo for use in both projects 
or we can both use an external library[3].


Cheers,
Nejc

[1] https://review.openstack.org/#/c/113549/
[2] 
https://review.openstack.org/#/c/113549/21/ceilometer/tests/test_utils.py

[3] https://pypi.python.org/pypi/hash_ring

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Trove Blueprint Meeting on 1 Sep canceled

2014-09-01 Thread Nikhil Manchanda
Hello team:

Most folks are going to be out on Monday, September 1st, on account of it
being Labor Day in the US. Consequently, I'd like to cancel the
Trove blueprint meeting this week.

See you guys at the regular Trove meeting on Wednesday.

Thanks,
Nikhil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][multidomain] - Time to leave LDAP backend?

2014-09-01 Thread Marcos Fermin Lobo
Hi all,

I found two keystone features that may conflict with each other.

Multi-domain feature (This functionality is new in Juno.)
---
Link: 
http://docs.openstack.org/developer/keystone/configuration.html#domain-specific-drivers
Keystone supports the option to specify identity driver configurations on a 
domain-by-domain basis, allowing, for example, a specific domain to have its 
own LDAP or SQL server. So we can use different backends for different 
domains. But, as Henry Nash said, it has not been validated with multiple SQL 
drivers: https://bugs.launchpad.net/keystone/+bug/1362181/comments/2
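For illustration, a hedged sketch of what the domain-specific driver setup looks like (the option names come from the Juno-era documentation linked above; the file paths and the domain name "mydomain" are examples, not prescriptions):

```ini
# In /etc/keystone/keystone.conf (example path):
[identity]
domain_specific_drivers_enabled = True
domain_config_dir = /etc/keystone/domains

# Then a per-domain file, /etc/keystone/domains/keystone.mydomain.conf,
# overrides the identity driver for the domain "mydomain" only:
[identity]
driver = keystone.identity.backends.ldap.Identity

[ldap]
url = ldap://ldap.example.com
suffix = dc=example,dc=com
```

Domains without such a file fall back to the default driver in keystone.conf.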

Hierarchical Multitenancy

Link: https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy
This is the nested-projects feature, but only for SQL, not LDAP.

So, if you are using LDAP and you want the nested-projects feature, you should 
migrate from LDAP to SQL. But if you also want the multi-domain feature, you 
can't use two SQL backends (you need at least one LDAP backend), because that 
has not been validated for multiple SQL drivers...

Maybe I'm missing something; please correct me if I'm wrong.

Here my questions:


-  If I want the Multi-domain and Hierarchical Multitenancy features, what 
are my options? What should I do (migrate or not migrate to SQL)?

-  Is LDAP going to be deprecated soon?

Thanks.

Cheers,
Marcos.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-09-01 Thread Thierry Carrez
James Polley wrote:
 I'm fairly certain the buzzing sound I can hear is a bee in my bonnet...
 so I suspect that I'm starting to sound like someone chasing a bee that
 only they can hear. I'm not sure if it's helpful to keep this discussion
 on this list - would there be a better forum somewhere else?

Not really, feel free to send to me personally if that works better for you.

 This page reflects the official list of programs, which is why it's
 protected. it's supposed to be replaced by an automatic publication from
 
 http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml
 which is the ultimate source of truth on that topic.
 
 
 I was going to ask about the reference to "The process new projects can
 follow to become an Integrated project" - is that intended to refer to a
 project or a program?
 
 But then I read https://review.openstack.org/#/c/116727/
 and 
 http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements.rst,
 which seem to make it clear that it's entirely possible that the Kitty program
 might have a mix of Integrated and non-Integrated projects.
 
 Is it safe to assume that the Governance repo is canonical and
 up-to-date, and rework the wiki pages based on the information in the
 Governance repo?

Yes, the governance repository reflects the current governance. The wiki
pages are derived from it.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vmware] Canonical list of os types

2014-09-01 Thread Matthew Booth
On 30/08/14 03:45, Steve Gordon wrote:
 - Original Message -
 From: Matthew Booth mbo...@redhat.com

 On 14/08/14 12:41, Steve Gordon wrote:
 - Original Message -
 From: Matthew Booth mbo...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org

 I've just spent the best part of a day tracking down why instance
 creation was failing on a particular setup. The error message from
 CreateVM_Task was: 'A specified parameter was not correct'.

 After discounting a great many possibilities, I finally discovered that
 the problem was guestId, which was being set to 'CirrosGuest'.
 Unusually, the vSphere API docs don't contain a list of valid values for
 that field. Given the unhelpfulness of the error message, it might be
 worthwhile validating that field (which we get from glance) and
 displaying an appropriate warning.

 Does anybody have a canonical list of valid values?

 Thanks,

 Matt

 I found a page [1] linked from the Grizzly edition of the compute guide
 [2] which has since been superseded. The content that would appear to
 have replaced it in more recent versions of the documentation suite [3]
 does not appear to contain such a link though. If a link to a more formal
 list is available it would be great to get this in the documentation.

 I just extracted a list of 126 os types from the ESX 5.5u1 installation
 iso. While this isn't ideal documentation, I'm fairly sure it will be
 accurate :)

 Matt
 
 Hi Matt,
 
 Any chance you can provide this list? Would be good to get into the 
 configuration reference.

I posted it here for review:

https://review.openstack.org/#/c/114529/

Matt

-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-09-01 Thread Maru Newby

On Aug 26, 2014, at 5:06 PM, Pradeep Kilambi (pkilambi) pkila...@cisco.com 
wrote:

 
 
 On 8/26/14, 4:49 AM, Maru Newby ma...@redhat.com wrote:
 
 
 On Aug 25, 2014, at 4:39 PM, Pradeep Kilambi (pkilambi)
 pkila...@cisco.com wrote:
 
 
 
 On 8/23/14, 5:36 PM, Maru Newby ma...@redhat.com wrote:
 
 
 On Aug 23, 2014, at 4:06 AM, Sumit Naiksatam sumitnaiksa...@gmail.com
 wrote:
 
 On Thu, Aug 21, 2014 at 7:28 AM, Kyle Mestery mest...@mestery.com
 wrote:
 On Thu, Aug 21, 2014 at 5:12 AM, Ihar Hrachyshka
 ihrac...@redhat.com
 wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512
 
 On 20/08/14 18:28, Salvatore Orlando wrote:
 Some comments inline.
 
 Salvatore
 
 On 20 August 2014 17:38, Ihar Hrachyshka ihrac...@redhat.com
 mailto:ihrac...@redhat.com wrote:
 
 Hi all,
 
 I've read the proposal for incubator as described at [1], and I
 have several comments/concerns/suggestions to this.
 
 Overall, the idea of giving some space for experimentation that
 does not alienate parts of community from Neutron is good. In that
  way, we may relax review rules and quicken turnaround for preview
  features without losing control of those features too much.
 
 Though the way it's to be implemented leaves several concerns, as
 follows:
 
 1. From packaging perspective, having a separate repository and
 tarballs seems not optimal. As a packager, I would better deal with
 a single tarball instead of two. Meaning, it would be better to
 keep the code in the same tree.
 
  I know that we're afraid of shipping the code for which some users
  may expect the usual level of support and stability and
  compatibility. This can be solved by making it explicit that the
  incubated code is unsupported and used at the user's own risk. 1) The
  experimental code wouldn't be installed unless explicitly
  requested, and 2) it would be put in a separate namespace (like
  'preview', 'experimental', or 'staging', as they call it in the Linux
  kernel world [2]).
  
  This would facilitate keeping the commit history instead of losing it
  during graduation.
 
 Yes, I know that people don't like to be called experimental or
 preview or incubator... And maybe neutron-labs repo sounds more
 appealing than an 'experimental' subtree in the core project.
 Well, there are lots of EXPERIMENTAL features in Linux kernel that
 we actively use (for example, btrfs is still considered
 experimental by Linux kernel devs, while being exposed as a
 supported option to RHEL7 users), so I don't see how that naming
 concern is significant.
 
 
 I think this is the whole point of the discussion around the
 incubator and the reason for which, to the best of my knowledge,
 no proposal has been accepted yet.
 
 
 I wonder where discussion around the proposal is running. Is it
 public?
 
 The discussion started out privately as the incubation proposal was
 put together, but it's now on the mailing list, in person, and in IRC
 meetings. Lets keep the discussion going on list now.
 
 
 In the spirit of keeping the discussion going, I think we probably
 need to iterate in practice on this idea a little bit before we can
 crystallize on the policy and process for this new repo. Here are few
 ideas on how we can start this iteration:
 
 * Namespace for the new repo:
 Should this be in the neutron namespace, or a completely different
 namespace like neutron labs? Perhaps creating a separate namespace
 will help the packagers to avoid issues of conflicting package owners
 of the namespace.
 
  I don't think there is a technical requirement to choose a new
  namespace.
  Python supports sharing a namespace, and packaging can support this
  feature (see: oslo.*).
 
 
 From what I understand there can be overlapping code between neutron and
 incubator to override/modify existing python/config files. In which
 case,
 packaging(for Eg: rpm) will raise a path conflict. So we probably will
 need to worry about namespaces?
 
 Doug's suggestion to use a separate namespace to indicate that the
 incubator codebase isn’t fully supported is a good idea and what I had in
 mind as a non-technical reason for a new namespace.  I still assert that
 the potential for path conflicts can be avoided easily enough, and is not
 a good reason on its own to use a different namespace.
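As an aside, the namespace-sharing mechanism mentioned above (the oslo.* pattern) can be demonstrated concretely. The sketch below is an illustration only, not Neutron code: two separately packaged trees share one package namespace if each ships a pkgutil-style `__init__.py`. Two temporary directories stand in for two installed distributions, and "demo_ns" is a hypothetical namespace.

```python
# Runnable demonstration of a shared (pkgutil-style) namespace package.
import os
import sys
import tempfile

NS_INIT = "__path__ = __import__('pkgutil').extend_path(__path__, __name__)\n"

def make_pkg(root, subpkg):
    """Create demo_ns/<subpkg> under root, with a namespace __init__."""
    ns = os.path.join(root, "demo_ns")
    os.makedirs(os.path.join(ns, subpkg))
    with open(os.path.join(ns, "__init__.py"), "w") as f:
        f.write(NS_INIT)
    with open(os.path.join(ns, subpkg, "__init__.py"), "w") as f:
        f.write("NAME = %r\n" % subpkg)

site_a = tempfile.mkdtemp()
site_b = tempfile.mkdtemp()
make_pkg(site_a, "core")      # stands in for the "supported" tree
make_pkg(site_b, "preview")   # stands in for a separately packaged tree
sys.path[:0] = [site_a, site_b]

from demo_ns import core, preview   # both resolve despite two roots
print(core.NAME, preview.NAME)
```

The packaging concern in the thread is orthogonal: even with a shared namespace, two RPMs must not ship the same file paths, which is where the conflict Pradeep describes can arise.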
 
 
 
 
 
 * Dependency on Neutron (core) repository:
 We would need to sort this out so that we can get UTs to run and pass
 in the new repo. Can we set the dependency on Neutron milestone
 releases? We already publish tar balls for the milestone releases, but
 I am not sure we publish these as packages to pypi. If not could we
 start doing that? With this in place, the incubator would always lag
 the Neutron core by at the most one milestone release.
 
 Given that it is possible to specify a dependency as a branch/hash/tag
 in
  a git repo [1], I'm not sure it's worth figuring out how to target
 tarballs.  Master branch of the incubation repo could then target the
 master branch of the Neutron repo and always be assured of being
 

Re: [openstack-dev] [nova] libvirt version_cap, a postmortem

2014-09-01 Thread Kashyap Chamarthy
On Sat, Aug 30, 2014 at 05:08:16PM +0100, Mark McLoughlin wrote:
 
 Hey
 
 The libvirt version_cap debacle continues to come up in conversation and
 one perception of the whole thing appears to be:
 
   A controversial patch was ninjaed by three Red Hat nova-cores and 
   then the same individuals piled on with -2s when a revert was proposed
   to allow further discussion.

As someone who tried to be a little helper to troubleshoot this live
snapshot bug (mentioned below) when it surfaced, I've been following
this discussion and ensuing threads about version_cap closely. 

And, FWIW, never for a moment did I feel there was any such intention of
"ninjaing" going on by the said folks, and I felt (still do) that it was
done in 'good technical faith'. Maybe my view is colored by having
observed the work and integrity of these people in different open
source communities over the years.

Thanks for taking time to do this write-up, Mark.

PS: Since email says @redhat.com, hope people reading this thread won't
misinterpret this comment.

 I hope it's clear to everyone why that's a pretty painful thing to hear.
 However, I do see that I didn't behave perfectly here. I apologize for
 that.
 
 In order to understand where this perception came from, I've gone back
 over the discussions spread across gerrit and the mailing list in order
 to piece together a precise timeline. I've appended that below.
 
 Some conclusions I draw from that tedious exercise:
 
  - Some people came at this from the perspective that we already have 
a firm, unwritten policy that all code must have functional written 
tests. Others see that test all the things is interpreted as a
worthy aspiration, but is only one of a number of nuanced factors
that needs to be taken into account when considering the addition of
a new feature.
 
i.e. the former camp saw Dan Smith's devref addition as attempting 
to document an existing policy (perhaps even a more forgiving 
    version of an existing policy), whereas others see it as a dramatic 
shift to a draconian implementation of test all the things.
 
  - Dan Berrange, Russell and I didn't feel like we were ninjaing a
controversial patch - you can see our perspective expressed in 
multiple places. The patch would have helped the live snapshot 
issue, and has other useful applications. It does not affect the 
broader testing debate.
 
Johannes was a solitary voice expressing concerns with the patch, 
and you could see that Dan was particularly engaged in trying to 
address those concerns and repeating his feeling that the patch was 
orthogonal to the testing debate.
 
That all being said - the patch did merge too quickly.
 
  - What exacerbates the situation - particularly when people attempt to 
look back at what happened - is how spread out our conversations 
are. You look at the version_cap review and don't see any of the 
related discussions on the devref policy review nor the mailing list 
threads. Our disjoint methods of communicating contribute to 
misunderstandings.
 
  - When it came to the revert, a couple of things resulted in 
misunderstandings, hurt feelings and frayed tempers - (a) that our 
retrospective veto revert policy wasn't well understood and (b) 
a feeling that there was private, in-person grumbling about us at 
the mid-cycle while we were absent, with no attempt to talk to us 
directly.
 
 
 To take an even further step back - successful communities like ours
 require a huge amount of trust between the participants. Trust requires
 communication and empathy. If communication breaks down and the pressure
 we're all under erodes our empathy for each others' positions, then
 situations can easily get horribly out of control.

--
/kashyap

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][qa][neutron] Neutron full job, advanced services, and the integrated gate

2014-09-01 Thread Maru Newby

On Aug 27, 2014, at 1:47 AM, Salvatore Orlando sorla...@nicira.com wrote:

 TL; DR
 A few folks are proposing to stop running tests for neutron advanced services 
 [ie: (lb|vpn|fw)aas] in the integrated gate, and run them only on the neutron 
 gate.
 
 Reason: projects like nova are 100% orthogonal to neutron advanced services. 
 Also, there have been episodes in the past of unreliability of tests for 
 these services, and it would be good to limit affected projects considering 
 that more api tests and scenarios are being added.

Given how many rechecks I’ve had to do to merge what are effectively no-op 
patches to infra/config, most often due to the full neutron job exhibiting 
sporadic failures, I fully support this change.  I think we need time to 
stabilize the tests for advanced services against just neutron before we 
consider slowing down merges for other projects.


 
 -
 
 So far the neutron full job runs tests (api and scenarios) for neutron core 
 functionality as well as neutron advanced services, which run as neutron 
 service plugin.
 
 It's highly unlikely, if not impossible, that changes in projects such as 
 nova, glance or ceilometer can have an impact on the stability of these 
 services.
 On the other hand, instability in these services can trigger gate failures in 
 unrelated projects as long as tests for these services are run in the neutron 
 full job in the integrated gate. There have already been several 
 gate-breaking bugs in lbaas scenario tests and firewall api tests.
 Admittedly, advanced services do not have the same level of coverage as core 
 neutron functionality. Therefore as more tests are being added, there is an 
 increased possibility of unearthing dormant bugs.
 
 For this reason we are proposing to no longer run tests for neutron 
 advanced services in the integrated gate, but keep them running on the 
 neutron gate.
 This means we will have two neutron jobs:
 1) check-tempest-dsvm-neutron-full which will run only core neutron 
 functionality
 2) check-tempest-dsvm-neutron-full-ext which will be what the neutron full 
 job is today.
 
 The former will be part of the integrated gate, the latter will be part of 
 the neutron gate.
 Considering that other integrating services should not have an impact on 
 neutron advanced services, this should not make gate testing asymmetric.
 
 However, there might be exceptions for:
 - orchestration project like heat which in the future might leverage 
 capabilities like load balancing
 - oslo-* libraries, as changes in them might have an impact on neutron 
 advanced services, since they consume those libraries
 
 Another good question is whether extended tests should be performed as part 
 of functional or tempest checks. My take on this is that scenario tests 
 should always be part of tempest. On the other hand I reckon API tests should 
 exclusively be part of functional tests, but since tempest is so far running 
 a gazillion API tests, this is probably a discussion for the medium/long 
 term. 

As you say, tempest should retain responsibility for ‘golden-path’ integration 
tests involving other OpenStack services (’scenario tests’).  Everything else 
should eventually be in-tree, though the transition period to achieve this is 
likely to be multi-cycle.


m.

 
 In order to add this new job there are a few patches under review:
 [1] and [2] Introduces the 'full-ext' job and devstack-gate support for it.
 [3] Are the patches implementing a blueprint which will enable us to specify 
 for which extensions test should be executed.
 
 Finally, one more note about smoketests. Although we're planning to get rid 
 of them soon, we still have failures in the pg job because of [4]. For this 
 reason, smoketests are still running for postgres in the integrated gate. As 
 load balancing and firewall API tests are part of it, they should be removed 
 from the smoke test executed on the integrated gate ([5], [6]). This is a 
 temporary measure until the postgres issue is fixed.
 
 Regards,
 Salvatore
 
 [1] https://review.openstack.org/#/c/114933/
 [2] https://review.openstack.org/#/c/114932/
 [3] 
 https://review.openstack.org/#/q/status:open+branch:master+topic:bp/branchless-tempest-extensions,n,z
 [4] https://bugs.launchpad.net/nova/+bug/1305892
 [5] https://review.openstack.org/#/c/115022/
 [6] https://review.openstack.org/#/c/115023/
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting reminder - 09/01/2014

2014-09-01 Thread Renat Akhmerov
Hi,

As usual, this is a reminder about the team meeting today at 16.00 UTC in 
#openstack-meeting.

Agenda:
Review action items
Current status (progress, issues, roadblocks, further plans)
Release 0.1 progress
Open discussion

(See it also at https://wiki.openstack.org/wiki/Meetings/MistralAgenda as well 
as the meeting archive)

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Feedback required on convergence observer implementation

2014-09-01 Thread Gurjar, Unmesh
Hi All,

As part of the implementation of the Convergence BP, I have submitted a change 
(https://review.openstack.org/#/c/118143/ ) moving the implementation of the 
check_create_complete method of the 'server' resource to the Observer. To start 
with, for Juno, the convergence feature will be turned off by default to avoid 
upgrade issues (since the convergence-related services won't be running). To 
enable convergence, one should set the 'convergence_enabled' flag (to True) in 
the heat configuration file and ensure that the convergence service(s) (at this 
time only the heat-observer service) are running!
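A hedged sketch of the steps described above (the flag name comes from the message; the section placement and file path are assumptions):

```ini
# /etc/heat/heat.conf (example path)
[DEFAULT]
convergence_enabled = True
```

After setting the flag, start the heat-observer service alongside the usual Heat services.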

This is an initial patch; please give it a look and provide your valuable 
feedback/suggestions on the overall approach.

Thanks,
Unmesh G.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-09-01 Thread joehuang
Hello,

Not all features that have already shipped in Neutron are supported by 
Horizon - for example, multi-provider networks. 

This is not unique to Neutron. For example, Glance 
delivered the v2 API in Icehouse or even earlier, with support for the image 
multi-locations feature, but that feature is also not available from Horizon.

Fortunately, the CLI/python client gives us the opportunity to use such 
powerful features.

So it's not necessary to link Neutron incubation tightly with Horizon. The 
Horizon side of a feature can be introduced when the incubation 
graduates.

Best regards.

Chaoyi Huang ( joehuang )


From: Maru Newby [ma...@redhat.com]
Sent: September 1, 2014, 17:53
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Incubator concerns from packaging 
perspective


Re: [openstack-dev] [OpenStack-Infra] [third-party] [infra] New mailing lists for third party announcements and account requests

2014-09-01 Thread Erlon Cruz
On Fri, Aug 29, 2014 at 5:03 PM, Stefano Maffulli stef...@openstack.org
wrote:

 On Fri 29 Aug 2014 12:47:00 PM PDT, Elizabeth K. Joseph wrote:
  Third-party-request
 
  This list is the new place to request the creation or modification of
  your third party account. Note that old requests sent to the
  openstack-infra mailing list don't need to be resubmitted, they are
  already in the queue for creation.

 I'm not happy about this decision: creating new lists is expensive, it
 multiplies entry points for newcomers, which need to be explained *and*
 understood. We're multiplying processes, rules, points of contact and
 places to monitor, be aware of... I feel overwhelmed. I wonder how much
 worse that feeling is for people who are not spending 150% of their time
 following discussions online and offline on all OpenStack channels.


I feel the same. As a newcomer to the OpenStack community, I can say that
digging through all the projects' wikis searching for information is very
cumbersome. In fact, it took me some months to get subscribed to all the
channels that were relevant to me. But who knows if I'm missing some at this
very moment.


 Are you sure that a mailing list is the most appropriate way of handling
 requests? Aren't bug trackers more appropriate instead?  And don't we
 have a bug tracker already?

  It would also be helpful for third party operators to join this
  mailing list as well as the -announce list in order to reply when they
  can to distribute workload and support new participants to the third
  party community.

 What makes you think they will join a list called 'request'? It's a
 request: I file a request, get back what I asked for, I say goodbye.
 Doesn't sound like a place for discussions.

 Also, if the problem with third-party operators is that they don't stick
 around, how did you come to the conclusion that two more mailing lists
 would solve (or help solving) the problem?


 --
 Ask and answer questions on https://ask.openstack.org

 ___
 OpenStack-Infra mailing list
 openstack-in...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-01 Thread Susanne Balle
Kyle, Adam,



Based on this thread Kyle is suggesting the follow moving forward plan:



1) We incubate Neutron LBaaS V2 in the “Neutron” incubator “and freeze
LBaaS V1.0”
2) “Eventually” It graduates into a project under the networking program.
3) “At that point” We deprecate Neutron LBaaS v1.



The words in quotes are words I added to make sure I/we understand the whole
picture.



And as Adam mentions: Octavia != LBaaS-v2. Octavia is a peer to F5 /
Radware / A10 / etc *appliances* which is a definition I agree with BTW.



What I am trying to now understand is how we will move Octavia into the new
LBaaS project?



If we do it later rather than develop Octavia in tree under the new
incubated LBaaS project, when do we plan to bring it in-tree from
Stackforge? Kilo? Later? When LBaaS is a separate project under the
Networking program?



What are the criteria to bring a driver into the LBaaS project and what do
we need to do to replace the existing reference driver? Maybe adding a
software driver to LBaaS source tree is less of a problem than converting a
whole project to an OpenStack project.



Again I am open to both directions I just want to make sure we understand
why we are choosing to do one or the other and that our decision is based
on data and not emotions.



I am assuming that keeping Octavia in Stackforge will increase the velocity
of the project and allow us more freedom which is goodness. We just need to
have a plan to make it part of the OpenStack LBaaS project.



Regards Susanne


On Sat, Aug 30, 2014 at 2:09 PM, Adam Harwell adam.harw...@rackspace.com
wrote:

   Only really have comments on two of your related points:

  [Susanne] To me Octavia is a driver so it is very hard for me to think of
 it as a standalone project. It needs the new Neutron LBaaS v2 to function
 which is why I think of them together. This of course can change since we
 can add whatever layers we want to Octavia.

  [Adam] I guess I've always shared Stephen's viewpoint — Octavia !=
 LBaaS-v2. Octavia is a peer to F5 / Radware / A10 / etc appliances, not
 to an OpenStack API layer like Neutron-LBaaS. It's a little tricky to
 clearly define this difference in conversation, and I have noticed that
 quite a few people are having the same issue differentiating. In a small
 group, having quite a few people not on the same page is a bit scary, so
 maybe we need to really sit down and map this out so everyone is together
 one way or the other.

  [Susanne] Ok, now I am confused… But I agree with you that it needs to
 focus on our use cases. I remember us discussing Octavia being the reference
 implementation for OpenStack LBaaS (whatever that is). Has that changed
 while I was on vacation?

  [Adam] I believe that having the Octavia driver (not the Octavia
 codebase itself, technically) become the reference implementation for
 Neutron-LBaaS is still the plan in my eyes. The Octavia Driver in
 Neutron-LBaaS is a separate bit of code from the actual Octavia project,
 similar to the way the A10 driver is a separate bit of code from the A10
 appliance. To do that though, we need Octavia to be fairly close to fully
 functional. I believe we can do this because even though the reference
 driver would then require an additional service to run, what it requires is
 still fully-open-source and (by way of our plan) available as part of
 OpenStack core.

   --Adam

  https://keybase.io/rm_you


   From: Susanne Balle sleipnir...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Friday, August 29, 2014 9:19 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org

 Subject: Re: [openstack-dev] [neutron][lbaas][octavia]

Stephen



 See inline comments.



 Susanne



 -



 Susanne--



 I think you are conflating the difference between OpenStack incubation
 and Neutron incubator. These are two very different matters and should be
 treated separately. So, addressing each one individually:



 *OpenStack Incubation*

 I think this has been the end-goal of Octavia all along and continues to
 be the end-goal. Under this scenario, Octavia is its own stand-alone
 project with its own PTL and core developer team, its own governance, and
 should eventually become part of the integrated OpenStack release. No
 project ever starts out as OpenStack incubated.



 [Susanne] I totally agree that the end goal is for Neutron LBaaS to become
 its own incubated project. I did miss the nuance that was pointed out by
 Mestery in an earlier email that if a Neutron incubator project wants to
 become a separate project it will have to apply for incubation again at
 that time. It was my understanding that such a Neutron incubated project
 would be grandfathered in but again we do not have much details on the
 process yet.



 To me Octavia is a driver so it is very hard for me to think of it as 

Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-09-01 Thread Robert Kukura
Sure, Horizon (or Heat) support is not always required for new features 
entering incubation, but when a goal in incubating a feature is to get 
it packaged with OpenStack distributions and into the hands of as many 
early adopters as possible to gather feedback, these integrations are 
very important.


-Bob

On 9/1/14, 9:05 AM, joehuang wrote:

Hello,

Not all features that have already shipped in Neutron are supported by 
Horizon. For example, multi-provider networks.

This is not a special case that only happens in Neutron. For example, Glance 
delivered its V2 API in Icehouse or even earlier and supports the image 
multi-locations feature, but this feature is also not available from Horizon.

Fortunately, the CLI/Python client gives us the opportunity to use this 
powerful feature.

So it's not necessary to tie Neutron incubation tightly to Horizon. The 
Horizon implementation of a feature can be introduced when the incubated 
feature graduates.

Best regards.

Chaoyi Huang ( joehuang )


From: Maru Newby [ma...@redhat.com]
Sent: September 1, 2014 17:53
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Incubator concerns from packaging 
perspective

On Aug 26, 2014, at 5:06 PM, Pradeep Kilambi (pkilambi) pkila...@cisco.com 
wrote:



On 8/26/14, 4:49 AM, Maru Newby ma...@redhat.com wrote:


On Aug 25, 2014, at 4:39 PM, Pradeep Kilambi (pkilambi)
pkila...@cisco.com wrote:



On 8/23/14, 5:36 PM, Maru Newby ma...@redhat.com wrote:


On Aug 23, 2014, at 4:06 AM, Sumit Naiksatam sumitnaiksa...@gmail.com
wrote:


On Thu, Aug 21, 2014 at 7:28 AM, Kyle Mestery mest...@mestery.com
wrote:

On Thu, Aug 21, 2014 at 5:12 AM, Ihar Hrachyshka
ihrac...@redhat.com
wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 20/08/14 18:28, Salvatore Orlando wrote:

Some comments inline.

Salvatore

On 20 August 2014 17:38, Ihar Hrachyshka ihrac...@redhat.com
mailto:ihrac...@redhat.com wrote:

Hi all,

I've read the proposal for incubator as described at [1], and I
have several comments/concerns/suggestions to this.

Overall, the idea of giving some space for experimentation that
does not alienate parts of community from Neutron is good. In that
way, we may relax review rules and quicken turnaround for preview
features without losing control on those features too much.

Though the way it's to be implemented leaves several concerns, as
follows:

1. From packaging perspective, having a separate repository and
tarballs seems not optimal. As a packager, I would better deal with
a single tarball instead of two. Meaning, it would be better to
keep the code in the same tree.

I know that we're afraid of shipping the code for which some users
may expect the usual level of support and stability and
compatibility. This can be solved by making it explicit that the
incubated code is unsupported and used at the user's own risk. 1) The
experimental code probably wouldn't be installed unless explicitly
requested, and 2) it would be put in a separate namespace (like
'preview', 'experimental', or 'staging', as they call it in the Linux
kernel world [2]).

This would facilitate keeping commit history instead of losing it
during graduation.

Yes, I know that people don't like to be called experimental or
preview or incubator... And maybe neutron-labs repo sounds more
appealing than an 'experimental' subtree in the core project.
Well, there are lots of EXPERIMENTAL features in Linux kernel that
we actively use (for example, btrfs is still considered
experimental by Linux kernel devs, while being exposed as a
supported option to RHEL7 users), so I don't see how that naming
concern is significant.



I think this is the whole point of the discussion around the
incubator and the reason for which, to the best of my knowledge,
no proposal has been accepted yet.

I wonder where discussion around the proposal is running. Is it
public?


The discussion started out privately as the incubation proposal was
put together, but it's now on the mailing list, in person, and in IRC
meetings. Let's keep the discussion going on list now.


In the spirit of keeping the discussion going, I think we probably
need to iterate in practice on this idea a little bit before we can
crystallize on the policy and process for this new repo. Here are a few
ideas on how we can start this iteration:

* Namespace for the new repo:
Should this be in the neutron namespace, or a completely different
namespace like neutron labs? Perhaps creating a separate namespace
will help the packagers to avoid issues of conflicting package owners
of the namespace.

I don't think there is a technical requirement to choose a new
namespace.
Python supports sharing a namespace, and packaging can support this
feature (see: oslo.*).


 From what I understand there can be overlapping code between neutron and
incubator to override/modify existing python/config files. In which
case,
packaging (e.g., rpm) will raise a path conflict. So we 

Re: [openstack-dev] [nova] refactoring of resize/migrate

2014-09-01 Thread Markus Zoeller
John Garbutt j...@johngarbutt.com wrote on 08/29/2014 07:59:38 PM:

 From: John Garbutt j...@johngarbutt.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: 08/29/2014 08:12 PM
 Subject: Re: [openstack-dev] [nova] refactoring of resize/migrate
 
 On 28 August 2014 09:50, Markus Zoeller mzoel...@de.ibm.com wrote:
  Jay Pipes jaypi...@gmail.com wrote on 08/27/2014 08:57:08 PM:
 
  From: Jay Pipes jaypi...@gmail.com
  To: openstack-dev@lists.openstack.org
  Date: 08/27/2014 08:59 PM
  Subject: Re: [openstack-dev] [nova] refactoring of resize/migrate
 
  On 08/27/2014 06:41 AM, Markus Zoeller wrote:
   The review of the spec to blueprint hot-resize has several 
comments
   about the need of refactoring the existing code base of resize 
and
   migrate before the blueprint could be considered (see [1]).
   I'm interested in the result of the blueprint therefore I want to
  offer
   my support. How can I participate?
  
   [1] https://review.openstack.org/95054
 
  Are you offering support to refactor resize/migrate, or are you 
offering
 
  support to work only on the hot-resize functionality?
 
  I'm offering support to refactor resize/migrate (with the goal in
  mind to have a hot resize feature in the future).
 
  I'm very much interested in refactoring the resize/migrate
  functionality, and would appreciate any help and insight you might 
have.
 
  Unfortunately, such a refactoring:
 
  a) Must start in Kilo
  b) Begins with un-crufting the simply horrible, inconsistent, and
  duplicative REST API and public behaviour of the resize and migrate
  actions
 
  If you give me some pointers to look at I can make some thoughts
  about them.
 
  In any case, I'm happy to start the conversation about this going in
  about a month or so, or whenever Kilo blueprints open up. Until then,
  we're pretty much working on reviews for already-approved blueprints 
and
 
  bug fixing.
 
  Best,
  -jay
 
  Just ping me and I will participate and give as much as I can.
 
 Happy to help with some planning/reviewing of specs etc.
 
 I did have a plan here. It was to move the migrate and live-migrate
 code paths to the conductor. The idea was to simplify the code paths,
 so the commonality and missing bits could be compared, etc:
 
https://github.com/openstack/nova/blob/master/nova/conductor/manager.py#L470

 
 That has proved hard to finish, probably because it was the wrong
 approach. It turns out there isn't much in common.
 
 I did also plan on updating the user API, but kinda decided to wait
 for v3 to get sorted, probably incorrectly.
 
 The main pain with the work is the lack of live-migrate testing in the
 gate, waiting for the multi-node gate work. It's starting to rot
 because people are scared of change in there, etc.
 
 Helping fix some live-migrate bugs, and helping out with live-migrate
 testing, might be good first steps? But it depends how you like to
 work, really.
 
 Anyways, happy to see that area get some more love!
 
 John

The unit tests and bugs are a good start, I agree. By the end of this 
week I will start a two-week vacation, but after that I will start 
working in this area. I have a few years of experience in refactoring
tangled code. Let's see what I can do here :)
Let's keep in touch.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Swift 2.1.0 released

2014-09-01 Thread John Dickinson
I'm happy to announce that Swift 2.1.0 has been released. This release includes 
several useful features that I'd like to highlight.

First, Swift's data placement algorithm was slightly changed to improve adding 
capacity. Specifically, now when you add a new region to an existing Swift 
cluster, there will not be a massive migration of data. If you've been wanting 
to expand your Swift cluster into another region, you can now do it painlessly.

Second, we've updated some of the logging and metrics tracking. We removed some 
needless log spam (cleaner logs!), added the process PID to the storage node 
log lines, and no longer count user errors as errors in StatsD metrics reporting.

We've also improved the object auditing process to allow for multiple processes 
at once. Using the new concurrency config value can speed up the overall 
auditor cycle time.

The tempurl middleware's default allowed methods have been updated to include POST 
and DELETE. This means that, with no additional configuration, users can create 
tempURLs for any supported verb.
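
For anyone new to tempURLs, client-side signing looks roughly like the sketch below; the key, host, and object path are made-up values for illustration, and note that POST is now in the default allowed methods:

```python
import hmac
from hashlib import sha1
from time import time

# Hypothetical account values for illustration only.
key = b"MYSECRETKEY"                  # X-Account-Meta-Temp-URL-Key
method = "POST"                       # allowed by default as of 2.1.0
path = "/v1/AUTH_test/container/object"
expires = int(time()) + 3600          # link valid for one hour

# The signature is an HMAC-SHA1 over "METHOD\nEXPIRES\nPATH".
hmac_body = "%s\n%s\n%s" % (method, expires, path)
sig = hmac.new(key, hmac_body.encode("utf-8"), sha1).hexdigest()

url = ("https://swift.example.com%s"
       "?temp_url_sig=%s&temp_url_expires=%s" % (path, sig, expires))
print(url)
```

The resulting URL can be handed to a user who has no Swift credentials; the proxy validates the signature and expiry on each request.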

Finally, the list_endpoints middleware now has a v2 response that supports 
storage policies.

Please take a look at the full changelog to see what else has changed. I'd 
encourage everyone to upgrade to this new version of Swift. As always, you can 
upgrade with no end-user downtime.


Changelog:
http://git.openstack.org/cgit/openstack/swift/tree/CHANGELOG

Tarball:
http://tarballs.openstack.org/swift/swift-2.1.0.tar.gz

Launchpad:
https://launchpad.net/swift/+milestone/2.1.0


This release is the result of 28 contributors, including 7 new contributors. 
The first-time contributors to Swift are:

Jing Liuqing
Steve Martinelli
Matthew Oliver
Pawel Palucki
Thiago da Silva
Nirmal Thacker
Lin Yang

Thank you to everyone who contributed to this release, both as a dev and as the 
sysadmins who keep Swift running every day at massive scale around the world.

My vision for Swift is that everyone uses it every day, even if they don't 
realize it. We're well on our way to that goal. Thank you.

--John






signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Use Launcher/ProcessLauncher in glance

2014-09-01 Thread stuart . mclaren


I've been looking at the most recent patch for this 
(https://review.openstack.org/#/c/117988/).

I'm wondering what behaviour folks feel is best in the following scenario:

1) an image download starts (download1)

2) send SIGHUP to glance-api

3) start another download (download2)

4) download1 completes

5) ?

I think there are a few potential options:

5a) The request for 'download2' is never accepted (in which case
the service is 'offline' on the node until all in-flight requests are 
completed).

5b) The request for 'download2' is killed when 'download1' completes
and the service restarts (not much point in new SIGHUP behaviour)

5c) The request for 'download2' is accepted and completes using the
existing process, but in this case the service potentially never restarts if new
requests keep coming in

5d) A 'swift reload' type operation is done where the old processes
are left alone to complete (download1 and download2 complete) but
new parent (and child) processes are spun up to handle new requests
('download3'). The disadvantage here is some extra process accounting
and potentially stray old code running on your system

(See http://docs.openstack.org/developer/swift/admin_guide.html#swift-orphans)

-Stuart


On Mon, Jul 28, 2014 at 8:12 AM, Tailor, Rajesh rajesh.tai...@nttdata.com
wrote:


Hi All,

I have submitted the patch "Made provision for glance service to use
Launcher" to the community Gerrit.
Please refer to: https://review.openstack.org/#/c/110012/

I have also set the workflow to 'work in progress'. I will start working
on writing unit tests for the proposed changes after receiving positive
feedback.

Could you please give your comments on this.

Could you also please suggest whether to file a Launchpad bug or a
blueprint to propose these changes in the Glance project?



Submitting to github.com/openstack/glance-specs would be best. Thanks.




Thanks,
Rajesh Tailor

-Original Message-
From: Tailor, Rajesh [mailto:rajesh.tai...@nttdata.com]
Sent: Wednesday, July 23, 2014 12:13 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Use Launcher/ProcessLauncher in
glance

Hi Jay,
Thank you for your response.
I will soon submit patch for the same.

Thanks,
Rajesh Tailor

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Tuesday, July 22, 2014 8:07 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [glance] Use Launcher/ProcessLauncher in
glance

On 07/17/2014 03:07 AM, Tailor, Rajesh wrote:
 Hi all,

 Why is glance not using Launcher/ProcessLauncher (oslo-incubator) for
 its WSGI service like it is used in other OpenStack projects, i.e.
 nova, cinder, keystone, etc.?

Glance uses the same WSGI service launch code as the other OpenStack
project from which that code was copied: Swift.

 As of now, when a SIGHUP signal is sent to the glance-api parent process,
 it calls the callback handler and then throws OSError.

 The OSError is thrown because the os.wait system call was interrupted
 by the SIGHUP callback handler.

 As a result of this, the parent process closes the server socket.

 All the child processes also get terminated without completing
 existing API requests, because the server socket is already closed and
 the service doesn't restart.

 Ideally, when a SIGHUP signal is received by the glance-api process, it
 should process all the pending requests and then restart the
 glance-api service.

 If the (oslo-incubator) Launcher/ProcessLauncher is used in glance, then
 it will handle service restart on the SIGHUP signal properly.

 Can anyone please let me know what the positive/negative impact of
 using Launcher/ProcessLauncher (oslo-incubator) in glance would be?

Sounds like you've identified at least one good reason to move to
oslo-incubator's Launcher/ProcessLauncher. Feel free to propose patches
which introduce that change to Glance. :)
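
The interrupted-wait failure described in the report can be sketched in a few lines. This is a hedged example assuming a POSIX system; PEP 475 made Python 3.5+ retry EINTR automatically inside os.wait(), so the explicit retry loop mainly matters on the Python 2 interpreters glance ran on at the time:

```python
import errno
import os
import signal
import time

# Install a do-nothing handler; the signal's arrival alone is enough to
# interrupt a blocking system call on pre-3.5 interpreters.
signal.signal(signal.SIGALRM, lambda signum, frame: None)

def wait_retrying():
    # Retry os.wait() when it is interrupted by a signal, instead of
    # letting OSError(EINTR) propagate and tear down the parent.
    while True:
        try:
            return os.wait()
        except OSError as e:
            if e.errno != errno.EINTR:
                raise  # a real failure, not an interrupted call

pid = os.fork()
if pid == 0:             # child: linger briefly, then exit cleanly
    time.sleep(1.0)
    os._exit(0)

# Fire SIGALRM while the parent is blocked inside os.wait().
signal.setitimer(signal.ITIMER_REAL, 0.2)
reaped, status = wait_retrying()
print(reaped == pid, os.WIFEXITED(status))  # True True
```

The oslo Launcher/ProcessLauncher code handles this retry (plus child respawning) for you, which is part of the motivation for adopting it.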

 Thank You,

 Rajesh Tailor
 __
 Disclaimer:This email and any attachments are sent in strictest
 confidence for the sole use of the addressee and may contain legally
 privileged, confidential, and proprietary data. If you are not the
 intended recipient, please advise the sender by replying promptly to
 this email and then delete and destroy this email and any attachments
 without any further use, copying or forwarding

Please advise your corporate IT department that the above disclaimer on
your emails is annoying, is entirely disregarded by 99.999% of the real
world, has no legal standing or enforcement, and may be a source of
problems with people's mailing list posts being sent into spam boxes.

All the best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
Disclaimer:This email and any attachments 

[openstack-dev] [mistral] Team meeting minutes/log - 09/01/2014

2014-09-01 Thread Renat Akhmerov
Thanks for joining our team meeting today!

Minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-09-01-16.00.html
Log: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-09-01-16.00.log.html

Agenda/archive: https://wiki.openstack.org/wiki/Meetings/MistralAgenda

The next meeting will be on Sep 8th.

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] change to deprecation policy in the incubator

2014-09-01 Thread Doug Hellmann

On Aug 29, 2014, at 6:28 PM, Ben Nemec openst...@nemebean.com wrote:

 On 08/28/2014 11:14 AM, Doug Hellmann wrote:
 Before Juno we set a deprecation policy for graduating libraries that said 
 the incubated versions of the modules would stay in the incubator repository 
 for one full cycle after graduation. This gives projects time to adopt the 
 libraries and still receive bug fixes to the incubated version (see 
 https://wiki.openstack.org/wiki/Oslo#Graduation).
 
 That policy worked well early on, but has recently introduced some 
 challenges with the low level modules. Other modules in the incubator are 
 still importing the incubated versions of, for example, timeutils, and so 
 tests that rely on mocking out or modifying the behavior of timeutils do not 
 work as expected when different parts of the application code end up calling 
 different versions of timeutils. We had similar issues with the notifiers 
 and RPC code, and I expect to find other cases as we continue with the 
 graduations.
 
 To deal with this problem, I propose that for Kilo we delete graduating 
 modules as soon as the new library is released, rather than waiting to the 
 end of the cycle. We can update the other incubated modules at the same 
 time, so that the incubator will always use the new libraries and be 
 consistent.
 
 So from a consumer perspective, this means projects will need to sync
 from stable/juno until they adopt the new libs and then they need to
 sync from master, which will also be using the new libs.

That’s right. I would expect the sync to be part of the adoption process.

 One thing I think is worth noting is the fact that this will require
 projects to adopt all of the libs at once (or at least all of the libs
 that need to match incubator, but that's not always obvious so probably
 safest to just say all).  It might be possible to sync some modules
 from master and some from stable, but that sounds like a mess waiting to
 happen. :-)

Quite.

 
 I guess my concern here is that I don't think projects have been
 adopting all of the oslo libs at once, so if, for example, a project was
 looking at adopting oslo.i18n and oslo.utils they may have to do both at
 the same time since adopting one will require them to start syncing from
 master, and then they won't have the ability to use the graduated
 modules anymore.
 
 This may be a necessary evil, but it does raise the short-term bar for
 adopting any oslo lib, even if the end result will be the same (all of
 the released libs adopted).

True, more below.

 
 
 We have not had a lot of patches where backports were necessary, but there 
 have been a few important ones, so we need to retain the ability to handle 
 them and allow projects to adopt libraries at a reasonable pace. To handle 
 backports cleanly, we can “freeze” all changes to the master branch version 
 of modules slated for graduation during Kilo (we would need to make a good 
 list very early in the cycle), and use the stable/juno branch for backports.
 
 The new process would be:
 
 1. Declare which modules we expect to graduate during Kilo.
 2. Changes to those pre-graduation modules could be made in the master 
 branch before their library is released, as long as the change is also 
 backported to the stable/juno branch at the same time (we should enforce 
 this by having both patches submitted before accepting either).
 3. When graduation for a library starts, freeze those modules in all 
 branches until the library is released.
 4. Remove modules from the incubator’s master branch after the library is 
 released.
 5. Land changes in the library first.
 6. Backport changes, as needed, to stable/juno instead of master.
 
 It would be better to begin the export/import process as early as possible 
 in Kilo to keep the window where point 2 applies very short.
 
 If there are objections to using stable/juno, we could introduce a new 
 branch with a name like backports/kilo, but I am afraid having the extra 
 branch to manage would just cause confusion.
 
 I would like to move ahead with this plan by creating the stable/juno branch 
 and starting to update the incubator as soon as the oslo.log repository is 
 imported (https://review.openstack.org/116934).
 
 Thoughts?
 
 I think the obvious concern for me is the extra overhead of trying to
 keep one more branch in sync with all the others.  With this we will
 require two commits for each change to incubator code that isn't
 graduating.  Backporting to Havana would require four changes.  I guess
 this is no worse than the situation with graduating code (one commit to
 the lib and one to incubator), but that's temporary pain for specific
 files.  This would continue indefinitely for all files in incubator.
 
 We could probably help this by requiring changes to be linked in their
 commit messages so reviewers can vote on both changes at once, but it's
 still additional work for everyone so I think it's worth bringing up.
 
 I don't have a 

Re: [openstack-dev] [oslo] change to deprecation policy in the incubator

2014-09-01 Thread Doug Hellmann

On Aug 29, 2014, at 5:53 AM, Flavio Percoco fla...@redhat.com wrote:

 On 08/28/2014 06:14 PM, Doug Hellmann wrote:
 Before Juno we set a deprecation policy for graduating libraries that said 
 the incubated versions of the modules would stay in the incubator repository 
 for one full cycle after graduation. This gives projects time to adopt the 
 libraries and still receive bug fixes to the incubated version (see 
 https://wiki.openstack.org/wiki/Oslo#Graduation).
 
 That policy worked well early on, but has recently introduced some 
 challenges with the low level modules. Other modules in the incubator are 
 still importing the incubated versions of, for example, timeutils, and so 
 tests that rely on mocking out or modifying the behavior of timeutils do not 
 work as expected when different parts of the application code end up calling 
 different versions of timeutils. We had similar issues with the notifiers 
 and RPC code, and I expect to find other cases as we continue with the 
 graduations.
 
 To deal with this problem, I propose that for Kilo we delete graduating 
 modules as soon as the new library is released, rather than waiting to the 
 end of the cycle. We can update the other incubated modules at the same 
 time, so that the incubator will always use the new libraries and be 
 consistent.
 
 We have not had a lot of patches where backports were necessary, but there 
 have been a few important ones, so we need to retain the ability to handle 
 them and allow projects to adopt libraries at a reasonable pace. To handle 
 backports cleanly, we can “freeze” all changes to the master branch version 
 of modules slated for graduation during Kilo (we would need to make a good 
 list very early in the cycle), and use the stable/juno branch for backports.
 
 The new process would be:
 
 1. Declare which modules we expect to graduate during Kilo.
 2. Changes to those pre-graduation modules could be made in the master 
 branch before their library is released, as long as the change is also 
 backported to the stable/juno branch at the same time (we should enforce 
 this by having both patches submitted before accepting either).
 3. When graduation for a library starts, freeze those modules in all 
 branches until the library is released.
 4. Remove modules from the incubator’s master branch after the library is 
 released.
 5. Land changes in the library first.
 6. Backport changes, as needed, to stable/juno instead of master.
 
 It would be better to begin the export/import process as early as possible 
 in Kilo to keep the window where point 2 applies very short.
 
 If there are objections to using stable/juno, we could introduce a new 
 branch with a name like backports/kilo, but I am afraid having the extra 
 branch to manage would just cause confusion.
 
 I would like to move ahead with this plan by creating the stable/juno branch 
 and starting to update the incubator as soon as the oslo.log repository is 
 imported (https://review.openstack.org/116934).
 
 Thoughts?
 
 I like the plan. Being more aggressive in the way we deprecate
 graduated modules from oslo-incubator helps make sure the projects are
 all aligned.
 
 One thing we may want to think about is to graduate fewer modules in
 order to give liaisons enough time to migrate the projects they're
 taking care of. The more libs we graduate, the more work we're putting
 on liaisons, which means they'll need more time (besides the time
 they're dedicating to other projects) to do that work.

I think the libs we’ll be working on in Kilo are a bit more complicated than 
what we’ve done in Juno, so that’s likely to happen as a natural consequence. 
We also don’t expect projects to adopt the libraries automatically in the cycle 
where they are graduated (that was the original intent behind delaying when we 
delete the code from the repository). 

 
 One more thing, we need to add to the list of ports to do during Kilo
 the backlog of ports that haven't happened yet. For example, I haven't
 ported glance to oslo.utils yet. I expect to do it before the end of the
 cycle but Murphy :)

You raise a good point, that we need to audit which projects are using code 
that has now graduated, and work with the liaisons from those projects on 
patches and reviews. It would be good if we could get caught up by K1 or K2 at 
the latest. I’m already working on checking the libraries we have released to 
ensure we finished all of the steps after that initial release. Does anyone 
else want to volunteer to do the audit of consuming projects?

Doug


 
 Flavio
 
 
 -- 
 @flaper87
 Flavio Percoco
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is the BP approval process broken?

2014-09-01 Thread Doug Hellmann

On Aug 29, 2014, at 5:07 AM, Thierry Carrez thie...@openstack.org wrote:

 Joe Gordon wrote:
 On Thu, Aug 28, 2014 at 2:43 PM, Alan Kavanagh
 alan.kavan...@ericsson.com mailto:alan.kavan...@ericsson.com wrote:
 
I share Donald's points here, I believe what would help is to
clearly describe in the Wiki the process and workflow for the BP
approval process and build in this process how to deal with
discrepancies/disagreements and build timeframes for each stage and
process of appeal etc.
The current process would benefit from some fine tuning and helping
to build safe guards and time limits/deadlines so folks can expect
responses within a reasonable time and not be left waiting in the cold.
 
 This is a resource problem, the nova team simply does not have enough
 people doing enough reviews to make this possible. 
 
 I think Nova lacks core reviewers more than it lacks reviewers, though.
 Just looking at the ratio of core developers vs. patchsets proposed,
 it's pretty clear that the core team is too small:
 
 Nova: 750 patchsets/month for 21 core = 36
 Heat: 230/14 = 16
 Swift: 50/16 = 3
 
 Neutron has the same issue (550/14 = 39). I think above 20, you have a
 dysfunctional setup. No amount of process, spec, or runway will solve
 that fundamental issue.
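
The per-core review load figures above are simple ratios; a quick sketch reproduces them (the patchset and core-team counts are the ones quoted in the thread, not independently verified):

```python
# Patchsets per month and core team size, as quoted in the thread.
load = {
    "Nova": (750, 21),
    "Neutron": (550, 14),
    "Heat": (230, 14),
    "Swift": (50, 16),
}

for project, (patchsets, cores) in load.items():
    print(f"{project}: {patchsets}/{cores} = {patchsets / cores:.0f}")
# Nova: 750/21 = 36
# Neutron: 550/14 = 39
# Heat: 230/14 = 16
# Swift: 50/16 = 3
```

Nova and Neutron land near 36-39 patchsets per core per month, against 16 for Heat and 3 for Swift, which is the gap the discussion is about.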
 
 The problem is, you can't just add core reviewers, they have to actually
 understand enough of the code base to be trusted with that +2 power. All
 potential candidates are probably already in. In Nova, the code base is
 so big it's difficult to find people that know enough of it. In Neutron,
 the contributors are often focused on subsections of the code base so
 they are not really interested in learning enough of the rest. That
 makes the pool of core candidates quite dry.
 
 I fear the only solution is smaller groups being experts on smaller
 codebases. There is less to review, and more candidates that are likely
 to be experts in this limited area.
 
 Applied to Nova, that means modularization -- having strong internal
 interfaces and trusting subteams to +2 the code they are experts on.
 Maybe VMware driver people should just +2 VMware-related code. We've had
 that discussion before, and I know there is a dangerous potential
 quality slope there -- I just fail to see any other solution to bring
 that 750/21=36 figure down to a bearable level, before we burn out all
 of the Nova core team.

Besides making packaging and testing easier, one of the reasons Oslo uses a 
separate git repo for each of our libraries is to allow this sort of 
specialization. We have a core group with +2 across all of the libraries, and 
we have some team members who only have +2 on one or two specific libraries 
where they focus their attention.

Doug

 
 -- 
 Thierry Carrez (ttx)
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Should docker plugin remove containers on delete?

2014-09-01 Thread Lars Kellogg-Stedman
Hello all,

I recently submitted this change:

  https://review.openstack.org/#/c/118190/

This causes the Docker plugin to *remove* containers on delete,
rather than simply *stopping* them.  When creating named containers,
the stop-but-do-not-remove behavior would cause conflicts when trying
to re-create the stack.

Do folks have an opinion on which behavior is correct?

-- 
Lars Kellogg-Stedman l...@redhat.com | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Audio Visual Content and Structure for training-guides

2014-09-01 Thread Sayali Lunkad
Hi,

This is the etherpad link to discuss the structure and content of the Audio
Visual content to be included in the training-guides.
https://etherpad.openstack.org/p/openstck-training-guides%28Audio_Visual_Content%29
Please feel free to add or edit the content here.

Regards,
Sayali.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] [ceilometer] [swift] tempests tests, grenade, old branches

2014-09-01 Thread Chris Dent


I've got a review in progress for adding a telemetry scenario test:

https://review.openstack.org/#/c/115971/

It can't pass the *-icehouse tests because ceilometer-api is not present
on the icehouse side of a havana-icehouse upgrade.

In the process of trying to figure out what's going on I discovered
so many confusing things that I'm no longer clear on:

* Whether this is a fixable problem?
* Whether it is worth fixing?
* How (or if) it is possible to disable the test in question for
  older branches?
* Maybe I should scrap the whole thing?[1]

The core problem is that older branches of grenade do not have an
upgrade-ceilometer, so though some ceilometer services do run in
Havana they are not restarted over the upgrade gap.

Presumably that could be fixed by backporting some stuff to the
relevant branch. I admit, though, that at times it can be rather
hard to tell which branch during a grenade run is providing the
configuration and environment variables. In part this is due to an
apparent difference in default local behavior and gate behavior.
Suppose I wanted to replicate exactly on a local setup what
happens on a gate run; where do I go to figure that out?

That seems a bit fragile, though. Wouldn't it be better to upgrade
services based on what services are actually running, rather than
some lines in a shell script?

I looked into how this might be done and the mapping from
ENABLED_SERVICES to actually-running-processes to
some-generic-name-to-identify-an-upgrade is not at all
straightforward. I suspect this is a known problem that people would
like to fix, but I don't know where to look for more discussion on
the matter. Please help?
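
For illustration only, here is a sketch of the kind of mapping being asked for: deriving upgrade targets from ENABLED_SERVICES rather than from hard-coded script lists. The service-name prefixes and the example service string below are assumptions for the sketch, not grenade's actual configuration.

```python
# Hypothetical sketch: map ENABLED_SERVICES entries to project names,
# each of which could then select an upgrade-<project> script if present.
ENABLED_SERVICES = "key,n-api,n-cpu,g-api,ceilometer-acompute,ceilometer-collector"

# Illustrative prefix-to-project mapping (not grenade's real one).
SERVICE_PREFIXES = {
    "n-": "nova",
    "g-": "glance",
    "key": "keystone",
    "ceilometer-": "ceilometer",
}

def projects_to_upgrade(enabled):
    """Return the sorted set of projects implied by an ENABLED_SERVICES string."""
    projects = set()
    for svc in enabled.split(","):
        for prefix, project in SERVICE_PREFIXES.items():
            if svc.startswith(prefix):
                projects.add(project)
    return sorted(projects)

print(projects_to_upgrade(ENABLED_SERVICES))
# -> ['ceilometer', 'glance', 'keystone', 'nova']
```

The real difficulty noted above is exactly the part this sketch hand-waves: building a prefix/process mapping that is actually correct across branches.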

[1] And finally, the other grenade runs, those that are currently
passing are only passing because a very long loop is waiting up to
two minutes for notification messages (from the middleware) to show
up at the ceilometer collector. Is this because the instance is just
that overloaded and process contention is so high and it is just
going to take that long? If so, is there much point in having a test
which introduces this kind of potential legacy? A scenario test
appears to be exactly what's needed here, but at what cost?

What I'm after here is basically threefold:

* Pointers to written info on how I can resolve these issues, if it
  exists.
* If it doesn't, some discussion here on options to reach some
  resolution.
* A cup of tea or other beverage of your choice and some sympathy
  and commiseration. A bit of "I too have suffered at the hands of
  grenade." Then we can all be friends.


From my side I can provide a promise to follow through on
improvements we discover.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday September 2nd at 19:00 UTC

2014-09-01 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting on Tuesday September 2nd, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] [Horizon] [Heat] Merlin project (formerly known as cross-project UI library for Heat/Mistral/Murano/Solum) plans for PoC and more

2014-09-01 Thread Drago Rosson
Timur,

I am excited to hear that you think that Barricade could be used in
Merlin! Your feedback is fantastic and I am impressed that you have been
able to understand Barricade so well from only the source and the spec.


I am replying to your feedback inline:

On 8/29/14, 2:59 PM, Timur Sufiev tsuf...@mirantis.com wrote:

Drago,

It sounds like you convinced me to give D3.js a second chance :). I'll
experiment with what can be achieved using force-directed graph layout
combined with some composable svg object; hopefully this will save me
from placing objects on the canvas on my own.

I've read the barricade_Spec.js several times and part of
barricade.js. The code is very interesting and allowed me to refresh some
JavaScript knowledge I last used a looong time ago :). The topic I definitely
haven't fully grasped is deferred/deferrable/referenced objects. The
thing I've understood is that if some scheme includes the '@ref' key, then
it tries to get the value returned from the resolver function no matter
what value was provided during scheme instantiation. Am I right? Is
the 'needs' section required for the value to be resolved? The examples
in file with tests are a bit mind-bending, so I failed to imagine how
it works for real use-cases. Also I'm interested whether it is
possible to define a schema that allows both to provide the value
directly and via reference?

Reference resolving is probably the most complicated part of Barricade
right now, and I think that it should be simpler and clearer than it is.

You are correct about resolver, except that the intent is to use the value
provided to find the correct reference. The spec is not a good real-world
example. To illustrate how @ref should really be used, I will use
intrinsic functions in a Heat template as an example. Take for instance
get_resource:

{get_resource: "some_resource_id"}

Here, “some_resource_id” is just a string, but it is referring to an
actual resource defined elsewhere (the resource section of the template).
To resolve this string into the actual resource, @ref can be used in this
way:

'get_resource': {
    '@type': String,
    '@ref': {
        to: function () { return hot.Resource; },
        needs: function () { return hot.Template; },
        resolver: function (json, template) {
            return template.get('resources').getByID(json.get());
        }
    }
}

This can be read as “get_resource is a string which refers to a Resource,
and it needs a Template in order to find what it’s referencing”. The idea
is that both the value that is supposed to be a reference and the actual
reference will have the same parent somewhere up the chain. The “needs”
property describes how far up the chain to go to find that parent and the
resolver is then used to find the needed reference somewhere within that
parent. Here, the resolver is called once the Template is encountered, and
the reference is found in the resources section of the template by using
the string that was originally provided.
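
A minimal sketch of that resolution rule (in Python rather than Barricade's actual JavaScript API, so all names here are illustrative): the raw string waits until the needed ancestor type is seen, then the resolver looks the real object up inside it.

```python
# Conceptual sketch of Barricade-style @ref resolution: a raw string
# value is only resolved once the ancestor it "needs" (here, Template)
# is available, and the resolver maps the string to the real object.
class Resource:
    def __init__(self, res_id):
        self.res_id = res_id

class Template:
    def __init__(self, resources):
        self._resources = {r.res_id: r for r in resources}

    def get_by_id(self, res_id):
        return self._resources[res_id]

def resolve_get_resource(raw_id, parents):
    # Walk up the parent chain until the "needed" type is found,
    # then apply the resolver step against it.
    for parent in parents:
        if isinstance(parent, Template):
            return parent.get_by_id(raw_id)
    raise LookupError("no enclosing Template to resolve against")

template = Template([Resource("some_resource_id")])
resolved = resolve_get_resource("some_resource_id", parents=[template])
print(resolved.res_id)  # -> some_resource_id
```

The 'needs' declaration in the real API plays the role of the isinstance check here: it names how far up the chain resolution must wait.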


 Among other things that inclined me to
give some feedback are:
* '@type' vs. '@class' - is the only difference between them that
'@type' refers to primitive type and '@class' refers to Barricade.js
scheme? Perhaps it could be reduced to a single word to make things
simpler?

The reason for separate tags was that I was not sure if they were truly
mutually exclusive. Right now, @class causes @type (among other things) to
be ignored, because that information should be contained in the class that
@class is referring to. Therefore, I think that the two tags could
definitely be combined into one.

* '?' vs '*' - seems they are used in different contexts, '?' is for
Object and '*' for Array - are 2 distinct markers actually needed?

No, you’re right. It was simply to help separate mutable objects from
arrays. It is not necessary.

* Is it better for objects with a fixed schema to fail when an unexpected
key is passed to them? Currently they don't.

For set(), it does fail and give an error (though only in the console
right now) [1], but for get(), it does not give any warning or error [2],
so that could be changed.

* Pushing an element of the wrong type into an arraylike scheme still
creates an element with an empty default value.

Yes, this should be fixed relatively soon now that better validation has
been added. I think that it should simply fail instead.

* Is it possible to create schemas with arbitrary default values (the
example from the spec caused me to think that default values cannot be
specified)?

It should be possible, but it is not yet.

* 'required' property does not really force such key to be provided
during schema instantiation - I presume this is going to change when
the real validation arrives?

Yes.

* What are the conceptual changes between objects instantiated from
mutable (with '?') and from immutable (with fixed keys) schema?

“Mutable” objects (for lack of a better name) are represented internally
as arrays with each element having an ID. Since each key of a 

Re: [openstack-dev] [all] Design Summit reloaded

2014-09-01 Thread Hayes, Graham
On Fri, 2014-08-29 at 17:56 +0200, Thierry Carrez wrote:
 Hayes, Graham wrote:
  Yep, I think this works in theory, the tough part will be when all the
  incubating projects realize they're sending people for a single day?
  Maybe it'll work out differently than I think though. It means fitting
  ironic, barbican, designate, manila, marconi in a day? 
 
  Actually those projects would get pod space for the rest of the week, so
  they should stay! Also some of them might have graduated by then :)
  
  Would the programs for those projects not get design summit time? I
  thought the Programs got Design summit time, not projects... If not, can
  the Programs get design summit time? 
 
 Sure, that's what Anne probably meant. Time for the program behind every
 incubated project.
 

Sure,

I was referring to the 2 main days (days 2 and 3)

I thought that was a benefit of having a Program? The PTL chooses the
sessions, and the PTL is over a program, so I assumed that programs
would get both Pods and some design summit time (not 1/2 a day on the
Tuesday)

I know we (designate) got some great work done last year, but most of it
was in isolation, as we had one 40 min session, and one 1/2 day session,
but the rest of the sessions were unofficial ones, which meant that
people in the community who were not as engaged missed out on the
discussions.

Would there be space for programs with incubated projects at the
'Contributors meetups' ?

Thanks, 

--
Graham

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Launchpad tracking of oslo projects

2014-09-01 Thread Doug Hellmann

On Aug 28, 2014, at 7:57 AM, Thierry Carrez thie...@openstack.org wrote:

 Thierry Carrez wrote:
 Doug Hellmann wrote:
 This makes sense to me, so let’s move ahead with your plan.
 
 OK, this is now done:
 
 Project group @ https://launchpad.net/oslo
 Oslo incubator: https://launchpad.net/oslo-incubator
 oslo.messaging: https://launchpad.net/oslo.messaging
 
 General blueprint view: https://blueprints.launchpad.net/oslo
 General bug view: https://bugs.launchpad.net/oslo
 
 We do have launchpad projects for some of the other oslo libraries, we just 
 haven’t been using them for release tracking:
 
 https://launchpad.net/python-stevedore
 https://launchpad.net/python-cliff
 https://launchpad.net/taskflow
 https://launchpad.net/pbr
 https://launchpad.net/oslo.vmware
 
 Cool, good to know. I'll include them in the oslo group if we create it.
 
 I added pbr, but I don't have the rights to move the other ones. It
 would generally be good to have oslo-drivers added as maintainer or
 driver for those projects so that we can fix them, if they are part of oslo.

I updated the others where I have access, and contacted the owners where I 
don’t. I think only oslo.vmware and pylockfile remain to be moved.

I also created projects for the other oslo libraries and configured their bug 
and blueprints trackers.

Oslo library owners, please have someone on your team go through your bugs on 
oslo-incubator and add/move them to your library project. If the bug does not 
affect the incubator, you can retarget it by replacing the oslo-incubator 
entry, but if it does affect the incubator please just add your lib and leave 
the incubator reference there. If you don’t have permission to edit bugs in 
that way, let me know and I’ll set you up in the oslo-bugs group.

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Pre-5.1 and master builds ISO are available for download

2014-09-01 Thread Dmitry Borodaenko
We should not confuse beta and rc builds, normally betas predate RCs and
serve a different purpose. In that sense, the nightlies we currently
publish are closest to what beta builds should be.

As discussed earlier in the thread, we already have full versioning and
provenance information in each build, so there is not a lot of value in
inventing a parallel versioning scheme just for the time period when our
builds are feature complete but not yet stable enough to declare an RC. The
only benefit is to explicitly indicate the beta status of these builds, and
we can achieve that without messing with versions. For example, by
generating a summary table of all community builds that have passed the
tests (using same build numbers we already have).

Not supporting upgrades from/to intermediate builds is a limitation that we
should not dismiss as inevitable; overcoming it should be in our backlog.
Image based provisioning should make it much easier to support.

My 2c,
-Dmitry
I would not use the word beta anywhere at all. These are nightly builds,
pre-5.1. So it will become 5.1 eventually, but for the moment - it is just
master branch. We've not even reached HCF.

After we reach HCF, we will start calling builds as Release Candidates
(RC1, RC2, etc.)  - and QA team runs acceptance testing against them. This
can be considered as another name instead of beta-1, etc.

Anyone can go to fuel-master-IP:8000/api/version to get sha commits of
git repos a particular build was created of. Yes, these are development
builds, and there will be no upgrade path provided from development build
to 5.1 release or any other release. We might want to think about it
though, if we could do it in theory, but I confirm what Evgeny says - we do
not support it now.



On Wed, Aug 27, 2014 at 1:11 PM, Evgeniy L e...@mirantis.com wrote:

 Hi guys, I have to say something about beta releases.

 As far as I know our beta release has the same version
 5.1 as our final release.

 I think these versions should be different, because in case
 of some problem it will be much easier to identify what
 version we are trying to debug.

 Also from the irc channel I've heard that somebody wanted
 to upgrade his system to the stable version; right now it's impossible
 because the upgrade system uses this version for the names of
 containers/images/temporary directories and we have
 validation which prevents the user from upgrading to the
 same version.

 In upgrade script we use python module [1] to compare versions
 for validation.
 Let me give an example of how development versions can look

 5.1a1 # alpha
 5.1b1 # beta 1
 5.1b2 # beta 2
 5.1b3 # beta 3
 5.1   # final release

 [1]
 http://epydoc.sourceforge.net/stdlib/distutils.version.StrictVersion-class.html
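
The ordering those suffixes give can be checked with a small key function that mirrors StrictVersion's behaviour for these simple cases (a/b pre-releases sort before the final release):

```python
# A tiny key function mirroring distutils StrictVersion ordering for
# versions of the form N.N[.N][a|b]N, as used in the examples above.
import re

def version_key(v):
    m = re.match(r"^(\d+)\.(\d+)(?:\.(\d+))?(?:([ab])(\d+))?$", v)
    major, minor, patch, pre, pre_n = m.groups()
    release = (int(major), int(minor), int(patch or 0))
    # A final release (no pre-release tag) sorts after any a/b pre-release.
    pre_rank = (pre or "z", int(pre_n or 0))
    return release + pre_rank

versions = ["5.1", "5.1b2", "5.1a1", "5.1b1"]
print(sorted(versions, key=version_key))
# -> ['5.1a1', '5.1b1', '5.1b2', '5.1']
```

So the upgrade-path validation mentioned above would see 5.1b1 as strictly older than the final 5.1 release.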

 Thanks,


 On Tue, Aug 26, 2014 at 11:15 AM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

 Igor,
 thanks a lot for improving UX over it - this table allows me to see which
 ISO passed verification tests.


 On Mon, Aug 25, 2014 at 7:54 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 I would also like to add that you can use our library called devops
 along with system tests we use for QA and CI. These tests use libvirt and
 kvm so that you can easily fire up an environment with specific
 configuration (Centos/Ubuntu Nova/Neutron Ceph/Swift and so on). All the
 documentation how to use this library is here:
 http://docs.mirantis.com/fuel-dev/devops.html. If you find any bugs or
 gaps in documentation, please feel free to file bugs to
 https://launchpad.net/fuel.


 On Mon, Aug 25, 2014 at 6:39 PM, Igor Shishkin ishish...@mirantis.com
 wrote:

 Hi all,
 along with building your own ISO following the instructions [1], you can
 always download a nightly build [2] and run it, using the virtualbox scripts
 [3], for example.

 For your convenience, you can see a build status table on CI [4]. The first
 tab now refers to pre-5.1 builds, and the second to master builds.
 The BVT column stands for Build Verification Test, which is essentially a
 full HA deployment test.

 Currently pre-5.1 and master builds are actually built from same master
 branch. As soon as we call for Hard Code Freeze, pre-5.1 builds will be
 reconfigured to use stable/5.1 branch.

 Thanks,

 [1]
 http://docs.mirantis.com/fuel-dev/develop/env.html#building-the-fuel-iso
 [2] https://wiki.openstack.org/wiki/Fuel#Nightly_builds
 [3] https://github.com/stackforge/fuel-main/tree/master/virtualbox
 [4] https://fuel-jenkins.mirantis.com/view/ISO/
 --
 Igor Shishkin
 DevOps




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com

 

[openstack-dev] [neutron] New meeting rotation starting next week

2014-09-01 Thread Kyle Mestery
Per discussion again today in the Neutron meeting, next week we'll
start rotating the meeting. This will mean next week we'll meet on
Tuesday (9-9-2014) at 1400 UTC in #openstack-meeting-alt.

I've updated the Neutron meeting page [1] as well as the meeting wiki
page [2] with the new details on the meeting page.

Please add any agenda items to the page.

Looking forward to seeing some new faces who can't normally join us at
the 2100UTC slot!

Thanks,
Kyle

[1] https://wiki.openstack.org/wiki/Network/Meetings
[2] https://wiki.openstack.org/wiki/Meetings#Neutron_team_meeting

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New meeting rotation starting next week

2014-09-01 Thread Kevin Benton
Is it possible to put an iCal on the wiki so we can automatically see when
meetings are updated/cancelled/moved?
On Sep 1, 2014 6:23 PM, Kyle Mestery mest...@mestery.com wrote:

 Per discussion again today in the Neutron meeting, next week we'll
 start rotating the meeting. This will mean next week we'll meet on
 Tuesday (9-9-2014) at 1400 UTC in #openstack-meeting-alt.

 I've updated the Neutron meeting page [1] as well as the meeting wiki
 page [2] with the new details on the meeting page.

 Please add any agenda items to the page.

 Looking forward to seeing some new faces who can't normally join us at
 the 2100UTC slot!

 Thanks,
 Kyle

 [1] https://wiki.openstack.org/wiki/Network/Meetings
 [2] https://wiki.openstack.org/wiki/Meetings#Neutron_team_meeting


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-09-01 Thread joehuang
Hello, 

1. As a dashboard, Horizon does not even support all stable OpenStack APIs 
(including the Neutron API), not to mention unstable APIs.
2. For an incubation feature, the introduced API is not stable, and it is not 
necessary for Horizon to support it.
3. An incubation feature can be experienced via the CLI/python client, but not 
in the general Horizon distribution.
4. If some customer asks the vendor to provide Horizon support for an 
incubation feature, the vendor can do the Horizon customization case by case, 
but this has no relationship with the general distribution of Horizon.

Is the logic above reasonable?

Best Regards

Chaoyi Huang ( Joe Huang )

-Original Message-
From: Robert Kukura [mailto:kuk...@noironetworks.com] 
Sent: September 1, 2014 22:37
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

Sure, Horizon (or Heat) support is not always required for new features 
entering incubation, but when a goal in incubating a feature is to get it 
packaged with OpenStack distributions and into the hands of as many early 
adopters as possible to gather feedback, these integrations are very important.

-Bob

On 9/1/14, 9:05 AM, joehuang wrote:
 Hello,

 Not all features that have already shipped in Neutron are supported by 
 Horizon. For example, multi provider network.

 This is not a special case that only happens in Neutron. For example, Glance 
 delivered the V2 API in Icehouse or even earlier and supports the image 
 multi-locations feature, but this feature is also not available from Horizon.

 Fortunately, the CLI/python client gives us the opportunity to use this 
 powerful feature.

 So, it's not necessary to link Neutron incubation with Horizon tightly. The 
 feature implemented in Horizon can be introduced when the incubation 
 graduates.

 Best regards.

 Chaoyi Huang ( joehuang )

 
 From: Maru Newby [ma...@redhat.com]
 Sent: September 1, 2014 17:53
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron] Incubator concerns from packaging 
 perspective

 On Aug 26, 2014, at 5:06 PM, Pradeep Kilambi (pkilambi) pkila...@cisco.com 
 wrote:


 On 8/26/14, 4:49 AM, Maru Newby ma...@redhat.com wrote:

 On Aug 25, 2014, at 4:39 PM, Pradeep Kilambi (pkilambi) 
 pkila...@cisco.com wrote:


 On 8/23/14, 5:36 PM, Maru Newby ma...@redhat.com wrote:

 On Aug 23, 2014, at 4:06 AM, Sumit Naiksatam 
 sumitnaiksa...@gmail.com
 wrote:

 On Thu, Aug 21, 2014 at 7:28 AM, Kyle Mestery 
 mest...@mestery.com
 wrote:
 On Thu, Aug 21, 2014 at 5:12 AM, Ihar Hrachyshka 
 ihrac...@redhat.com
 wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512

 On 20/08/14 18:28, Salvatore Orlando wrote:
 Some comments inline.

 Salvatore

 On 20 August 2014 17:38, Ihar Hrachyshka ihrac...@redhat.com 
 mailto:ihrac...@redhat.com wrote:

 Hi all,

 I've read the proposal for incubator as described at [1], and 
 I have several comments/concerns/suggestions to this.

 Overall, the idea of giving some space for experimentation 
 that does not alienate parts of community from Neutron is 
 good. In that way, we may relax review rules and quicken 
 turnaround for preview features without loosing control on those 
 features too much.

 Though the way it's to be implemented leaves several concerns, 
 as
 follows:

 1. From packaging perspective, having a separate repository 
 and tarballs seems not optimal. As a packager, I would better 
 deal with a single tarball instead of two. Meaning, it would 
 be better to keep the code in the same tree.

 I know that we're afraid of shipping the code for which some 
 users may expect the usual level of support and stability and 
 compatibility. This can be solved by making it explicit that 
 the incubated code is unsupported and used at the user's own 
 risk. 1) The experimental code probably wouldn't be installed 
 unless explicitly requested, and 2) it would be put in a 
 separate namespace (like 'preview', 'experimental', or 
 'staging', as they call it in the Linux kernel world [2]).

 This would facilitate keeping commit history instead of 
 losing it during graduation.

 Yes, I know that people don't like to be called experimental 
 or preview or incubator... And maybe neutron-labs repo sounds 
 more appealing than an 'experimental' subtree in the core project.
 Well, there are lots of EXPERIMENTAL features in Linux kernel 
 that we actively use (for example, btrfs is still considered 
 experimental by Linux kernel devs, while being exposed as a 
 supported option to RHEL7 users), so I don't see how that 
 naming concern is significant.


 I think this is the whole point of the discussion around the 
 incubator and the reason for which, to the best of my 
 knowledge, no proposal has been accepted yet.
 I wonder where discussion around the proposal is running. Is it 
 public?

 The discussion started out privately as the incubation proposal 
 was put together, but it's now on the 

Re: [openstack-dev] [neutron] New meeting rotation starting next week

2014-09-01 Thread Anne Gentle
Look on https://wiki.openstack.org/wiki/Meetings for a link to an iCal feed
of all OpenStack meetings.

https://www.google.com/calendar/ical/bj05mroquq28jhud58esggq...@group.calendar.google.com/public/basic.ics





On Mon, Sep 1, 2014 at 8:26 PM, Kevin Benton blak...@gmail.com wrote:

 Is it possible to put an iCal on the wiki so we can automatically see when
 meetings are updated/cancelled/moved?
  On Sep 1, 2014 6:23 PM, Kyle Mestery mest...@mestery.com wrote:

 Per discussion again today in the Neutron meeting, next week we'll
 start rotating the meeting. This will mean next week we'll meet on
 Tuesday (9-9-2014) at 1400 UTC in #openstack-meeting-alt.

 I've updated the Neutron meeting page [1] as well as the meeting wiki
 page [2] with the new details on the meeting page.

 Please add any agenda items to the page.

 Looking forward to seeing some new faces who can't normally join us at
 the 2100UTC slot!

 Thanks,
 Kyle

 [1] https://wiki.openstack.org/wiki/Network/Meetings
 [2] https://wiki.openstack.org/wiki/Meetings#Neutron_team_meeting

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-09-01 Thread Kevin Benton
Is it possible to have a /contrib folder or something similar where
experimental modules can be placed? Requiring a separate Horizon
distribution for every project in the incubator is really going to make it
difficult to get users to try them out.


On Mon, Sep 1, 2014 at 6:39 PM, joehuang joehu...@huawei.com wrote:

 Hello,

 1. As a dashboard, Horizon does not even support all stable OpenStack APIs
 (including the Neutron API), let alone unstable APIs.
 2. For an incubated feature, the introduced API is not stable, so it is not
 necessary for Horizon to support it.
 3. An incubated feature can be experienced via the CLI/Python client, but
 not in the generally delivered Horizon distribution.
 4. If a customer asks the vendor to provide Horizon support for an
 incubated feature, the vendor can customize Horizon case by
 case, with no bearing on the general distribution of Horizon.

 Is the logic above reasonable?

 Best Regards

 Chaoyi Huang ( Joe Huang )

 -----Original Message-----
 From: Robert Kukura [mailto:kuk...@noironetworks.com]
 Sent: September 1, 2014 22:37
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [neutron] Incubator concerns from packaging
 perspective

 Sure, Horizon (or Heat) support is not always required for new features
 entering incubation, but when a goal in incubating a feature is to get it
 packaged with OpenStack distributions and into the hands of as many early
 adopters as possible to gather feedback, these integrations are very
 important.

 -Bob

 On 9/1/14, 9:05 AM, joehuang wrote:
  Hello,
 
  Not all features already shipped in Neutron are supported by
 Horizon. For example, multi-provider networks.
 
  This is not unique to Neutron. For example,
 Glance delivered its v2 API in Icehouse or even earlier, supporting the image
 multi-locations feature, yet that feature is also not available from Horizon.
 
  Fortunately, the CLI/Python client gives us the opportunity to use
 these powerful features.
 
  So it's not necessary to tie Neutron incubation to Horizon tightly.
 Horizon support for a feature can be introduced when the
 incubation graduates.
 
  Best regards.
 
  Chaoyi Huang ( joehuang )
 
  
  From: Maru Newby [ma...@redhat.com]
  Sent: September 1, 2014 17:53
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [neutron] Incubator concerns from packaging
  perspective
 
  On Aug 26, 2014, at 5:06 PM, Pradeep Kilambi (pkilambi) 
 pkila...@cisco.com wrote:
 
 
  On 8/26/14, 4:49 AM, Maru Newby ma...@redhat.com wrote:
 
  On Aug 25, 2014, at 4:39 PM, Pradeep Kilambi (pkilambi)
  pkila...@cisco.com wrote:
 
 
  On 8/23/14, 5:36 PM, Maru Newby ma...@redhat.com wrote:
 
  On Aug 23, 2014, at 4:06 AM, Sumit Naiksatam
  sumitnaiksa...@gmail.com
  wrote:
 
  On Thu, Aug 21, 2014 at 7:28 AM, Kyle Mestery
  mest...@mestery.com
  wrote:
  On Thu, Aug 21, 2014 at 5:12 AM, Ihar Hrachyshka
  ihrac...@redhat.com
  wrote:
  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA512
 
  On 20/08/14 18:28, Salvatore Orlando wrote:
  Some comments inline.
 
  Salvatore
 
  On 20 August 2014 17:38, Ihar Hrachyshka ihrac...@redhat.com
  mailto:ihrac...@redhat.com wrote:
 
  Hi all,
 
  I've read the proposal for incubator as described at [1], and
  I have several comments/concerns/suggestions to this.
 
  Overall, the idea of giving some space for experimentation
  that does not alienate parts of community from Neutron is
  good. In that way, we may relax review rules and quicken
  turnaround for preview features without losing control of those
 features too much.
 
  Though the way it's to be implemented leaves several concerns,
  as
  follows:
 
  1. From a packaging perspective, having a separate repository
  and tarballs seems suboptimal. As a packager, I would rather
  deal with a single tarball than two. Meaning, it would
  be better to keep the code in the same tree.

  I know that we're afraid of shipping code for which some
  users may expect the usual level of support, stability, and
  compatibility. This can be solved by making it explicit that
  the incubated code is unsupported and used at the user's own
  risk: 1) the experimental code would probably not be installed
  unless explicitly requested, and 2) it would be put in a
  separate namespace (like 'preview', 'experimental', or
  'staging', as they call it in the Linux kernel world [2]).

  This would also facilitate keeping commit history instead of
  losing it during graduation.
 
  Yes, I know that people don't like to be called experimental
  or preview or incubator... And maybe neutron-labs repo sounds
  more appealing than an 'experimental' subtree in the core
 project.
  Well, there are lots of EXPERIMENTAL features in Linux kernel
  that we actively use (for example, btrfs is still considered
  experimental by Linux kernel devs, while being exposed as a
  supported option to RHEL7 users), so I don't see how 

Re: [openstack-dev] [neutron] New meeting rotation starting next week

2014-09-01 Thread Kevin Benton
Unfortunately, the master iCal feed has so many meetings that it's not useful
to display as part of a normal calendar.
I was hoping for a Neutron-specific one similar to TripleO's.
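For what it's worth, trimming the master feed down to one project's meetings is easy to script. Below is a rough, stdlib-only Python sketch; the `filter_ics` helper and the keyword matching are purely illustrative (not an existing tool), and a real run would first fetch the master .ics with urllib:

```python
# Hypothetical helper: keep only VEVENT blocks whose SUMMARY matches a
# keyword (e.g. "Neutron"); calendar header/footer lines pass through.
def filter_ics(ics_text, keyword):
    out, event, keep, in_event = [], [], False, False
    for line in ics_text.splitlines():
        if line.startswith("BEGIN:VEVENT"):
            in_event, event, keep = True, [line], False
        elif line.startswith("END:VEVENT"):
            event.append(line)
            if keep:
                out.extend(event)  # only emit matching events
            in_event = False
        elif in_event:
            event.append(line)
            if line.startswith("SUMMARY") and keyword.lower() in line.lower():
                keep = True
        else:
            out.append(line)
    return "\n".join(out)

# Tiny inline sample standing in for the master feed.
sample = "\n".join([
    "BEGIN:VCALENDAR",
    "BEGIN:VEVENT", "SUMMARY:Neutron team meeting", "END:VEVENT",
    "BEGIN:VEVENT", "SUMMARY:TripleO meeting", "END:VEVENT",
    "END:VCALENDAR",
])
print(filter_ics(sample, "Neutron"))
```

The filtered output could then be served or re-imported as a project-specific calendar.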


On Mon, Sep 1, 2014 at 6:52 PM, Anne Gentle a...@openstack.org wrote:

 Look on https://wiki.openstack.org/wiki/Meetings for a link to an iCal
 feed of all OpenStack meetings.


 https://www.google.com/calendar/ical/bj05mroquq28jhud58esggq...@group.calendar.google.com/public/basic.ics





 On Mon, Sep 1, 2014 at 8:26 PM, Kevin Benton blak...@gmail.com wrote:

 Is it possible to put an iCal on the wiki so we can automatically see
 when meetings are updated/cancelled/moved?
  On Sep 1, 2014 6:23 PM, Kyle Mestery mest...@mestery.com wrote:

 Per discussion again today in the Neutron meeting, next week we'll
 start rotating the meeting. This will mean next week we'll meet on
 Tuesday (9-9-2014) at 1400 UTC in #openstack-meeting-alt.

 I've updated the Neutron meeting page [1] as well as the meeting wiki
 page [2] with the new details on the meeting page.

 Please add any agenda items to the page.

 Looking forward to seeing some new faces who can't normally join us at
 the 2100UTC slot!

 Thanks,
 Kyle

 [1] https://wiki.openstack.org/wiki/Network/Meetings
 [2] https://wiki.openstack.org/wiki/Meetings#Neutron_team_meeting










-- 
Kevin Benton


Re: [openstack-dev] [neutron][IPv6] Neighbor Discovery for HA

2014-09-01 Thread Xu Han Peng

Anthony,

Thanks for your reply.

If an HA method like VRRP is used for the IPv6 router, then according to the 
VRRP RFC (which covers IPv6), the servers should be auto-configured with the 
active router's LLA as the default route before the failover happens, and 
should retain that route after the failover. In other words, there should 
be no need for two LLAs as the default route of a subnet unless load 
balancing is required.


When the backup router becomes the master, it is responsible for sending 
out an unsolicited ND neighbor advertisement with the associated LLA (the 
previous master's LLA) immediately, to update the bridge learning state, and 
for sending out router advertisements with the same options as the previous 
master, to maintain the route and bridge learning.


This is shown in http://tools.ietf.org/html/rfc5798#section-4.1, and the 
actions the backup router should take after failover are documented at 
http://tools.ietf.org/html/rfc5798#section-6.4.2. The need for immediate 
and periodic message sending is documented at 
http://tools.ietf.org/html/rfc5798#section-2.4.
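For reference, the unsolicited NA the new master must emit has a simple wire format (RFC 4861 section 4.4). The sketch below builds just the ICMPv6 body with Python's stdlib; the function name and addresses are illustrative, and actually sending it would need a raw ICMPv6 socket and root privileges, which a tool like ndsend or keepalived handles:

```python
import socket
import struct

def build_unsolicited_na(target_ip6, target_mac, router=True):
    # ICMPv6 Neighbor Advertisement: type 136, code 0. Checksum is left
    # zero here; the kernel fills it in for raw ICMPv6 sockets.
    # Flags: R (router) set for a router, S (solicited) clear because the
    # advertisement is unsolicited, O (override) set so stale neighbor
    # cache entries get replaced with the new link-layer address.
    flags = (0x80000000 if router else 0) | 0x20000000
    hdr = struct.pack("!BBHI", 136, 0, 0, flags)
    target = socket.inet_pton(socket.AF_INET6, target_ip6)
    # Target Link-Layer Address option: type 2, length 1 (8 octets total).
    opt = struct.pack("!BB", 2, 1) + bytes.fromhex(target_mac.replace(":", ""))
    return hdr + target + opt

# Illustrative addresses only: the shared LLA and its MAC.
pkt = build_unsolicited_na("fe80::1", "fa:16:3e:00:00:01")
```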


The keepalived manager support for L3 HA has merged 
(https://review.openstack.org/#/c/68142/43), and keepalived release 1.2.0 
supports VRRP IPv6 features (http://www.keepalived.org/changelog.html, 
see Release 1.2.0 | VRRP IPv6 Release). I think we can check whether 
keepalived satisfies our requirements here and whether that would cause any 
conflicts with radvd.
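If keepalived is tried for this, the VRRP instance would presumably carry the IPv6 address directly. A minimal sketch of what such a configuration might look like follows; the interface name, VRID, and addresses are all made up, and whether this coexists cleanly with radvd is exactly the open question above:

```
! Hypothetical keepalived.conf fragment (keepalived >= 1.2.0, VRRP IPv6).
vrrp_instance VI_IPV6 {
    state BACKUP
    interface qr-demo          ! illustrative router-side interface name
    virtual_router_id 51
    priority 100
    advert_int 2
    virtual_ipaddress {
        fe80::1/64 dev qr-demo ! shared LLA the servers use as default route
    }
}
```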


Thoughts?

Xu Han

On 08/28/2014 10:11 PM, Veiga, Anthony wrote:



Anthony and Robert,

Thanks for your reply. I don't know if the arping is there for
NAT, but I am pretty sure it's there for HA setups, to broadcast the
router's own change, since the arping is controlled by the
send_arp_for_ha config option. Checking the man page of arping, you
can see that the arping -A we use in the code sends out an ARP REPLY
instead of an ARP REQUEST. This is like saying "I am here" instead of
"where are you". I didn't realize this either until Brian pointed
it out in my code review below.
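To make the REPLY-vs-REQUEST distinction concrete, here is a stdlib-only Python sketch of the ARP payload that an arping -A style gratuitous reply carries: opcode 2 ("I am here") with the sender and target fields both set to the advertising host. The helper name and addresses are made up for illustration, and only the ARP payload is built (the Ethernet frame around it, with a broadcast destination, is omitted):

```python
import socket
import struct

def build_garp_reply(ip4, mac):
    # ARP payload: htype=1 (Ethernet), ptype=0x0800 (IPv4),
    # hlen=6, plen=4, opcode=2 (reply, not request).
    mac_b = bytes.fromhex(mac.replace(":", ""))
    ip_b = socket.inet_aton(ip4)
    # Gratuitous: sender (sha/spa) and target (tha/tpa) are the same host.
    return struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2) + mac_b + ip_b + mac_b + ip_b

pkt = build_garp_reply("192.0.2.1", "fa:16:3e:00:00:02")
```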


That's what I was trying to say earlier.  Sending out the RA has the 
same effect.  An RA says "I'm here, and I'm also a router," and should 
supersede the need for an unsolicited NA.  The only thing to consider 
here is that RAs come from LLAs.  If you're doing IPv6 HA, you'll need 
to have two gateway IPs for the RA of the standby to work.  So far as 
I know, there's still a bug open on this, since you can only 
have one gateway per subnet.




http://linux.die.net/man/8/arping

https://review.openstack.org/#/c/114437/2/neutron/agent/l3_agent.py

Thoughts?

Xu Han


On 08/27/2014 10:01 PM, Veiga, Anthony wrote:



Hi Xuhan,

What I saw is that GARP is sent to the gateway port and also
to the router ports, from a neutron router. I'm not sure why
it's sent to the router ports (internal network). My
understanding for arping to the gateway port is that it is
needed for proper NAT operation. Since we are not planning to
support IPv6 NAT, this is no longer required/needed for IPv6,
right?


I agree that this is no longer necessary.


There is an abandoned patch that disabled the arping for ipv6
gateway port:
https://review.openstack.org/#/c/77471/3/neutron/agent/l3_agent.py

thanks,
Robert

On 8/27/14, 1:03 AM, Xuhan Peng pengxu...@gmail.com
mailto:pengxu...@gmail.com wrote:

As a follow-up action of yesterday's IPv6 sub-team
meeting, I would like to start a discussion about how to
support l3 agent HA when IP version is IPv6.

This problem is triggered by bug [1], where sending a
gratuitous ARP packet for HA doesn't work for IPv6 subnet
gateways. This is because neighbor discovery, rather than
ARP, should be used for IPv6.

My thought to solve this problem turns into how to send
out neighbor advertisement for IPv6 routers just like
sending ARP reply for IPv4 routers after reading the
comments on code review [2].

I searched for utilities which can do this and only found
a utility called ndsend [3], part of vzctl on Ubuntu. I
could not find similar tools on other Linux distributions.

There are comments in yesterday's meeting that it's the
new router's job to send out RA and there is no need for
neighbor discovery. But we didn't get enough time to
finish the discussion.


Because OpenStack runs the l3 agent, it is the router.  Instead
of needing to do gratuitous ARP to alert all clients of the new
MAC, a simple RA from the new router for the same prefix would
accomplish the same, without having to resort to a special
package to generate unsolicited NA packets.  RAs must be
generated 

[openstack-dev] [third party][neutron] - OpenDaylight CI and -1 voting

2014-09-01 Thread Kevin Benton
I have had multiple occasions where the OpenDaylight CI will vote a -1 on a
patch for something completely unrelated (e.g. [1]). This would be fine
except for two issues. First, there doesn't appear to be any way to trigger
a recheck. Second, there is no maintainer listed on the Neutron third party
drivers page.[2] Because of this, there is effectively no way to get the -1
removed without uploading a new patch and losing current code review votes.

Can we remove the voting rights for the ODL CI until there is a documented
way to trigger rechecks and a public contact on the drivers page for when
things go wrong? Getting reviews is already hard enough, let alone when
there is a -1 in the 'verified' column.

1. https://review.openstack.org/#/c/116187/
2.
https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Existing_Plugin

-- 
Kevin Benton


Re: [openstack-dev] [third party][neutron] - OpenDaylight CI and -1 voting

2014-09-01 Thread YAMAMOTO Takashi
 I have had multiple occasions where the OpenDaylight CI will vote a -1 on a
 patch for something completely unrelated (e.g. [1]). This would be fine
 except for two issues. First, there doesn't appear to be any way to trigger
 a recheck. Second, there is no maintainer listed on the Neutron third party
 drivers page.[2] Because of this, there is effectively no way to get the -1
 removed without uploading a new patch and losing current code review votes.

http://stackalytics.com/report/driverlog says its maintainer is
irc:mestery.  Last time it happened to me, I asked him to trigger a
recheck and it worked.

YAMAMOTO Takashi

 
 Can we remove the voting rights for the ODL CI until there is a documented
 way to trigger rechecks and a public contact on the drivers page for when
 things go wrong? Getting reviews is already hard enough, let alone when
 there is a -1 in the 'verified' column.
 
 1. https://review.openstack.org/#/c/116187/
 2.
 https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Existing_Plugin
 
 -- 
 Kevin Benton



Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-09-01 Thread Wuhongning
It would satisfy everyone if Horizon supported all APIs, either in tree or in the 
lab, but only on the prerequisite that we have enough resources for Horizon.

Otherwise, given limited resources, we have to sort by priority; should we then 
focus first on the APIs being baked in the incubator?


From: Kevin Benton [blak...@gmail.com]
Sent: Tuesday, September 02, 2014 9:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Incubator concerns from packaging 
perspective

Is it possible to have a /contrib folder or something similar where 
experimental modules can be placed? Requiring a separate Horizon distribution 
for every project in the incubator is really going to make it difficult to get 
users to try them out.


On Mon, Sep 1, 2014 at 6:39 PM, joehuang 
joehu...@huawei.com wrote:
Hello,

1. As a dashboard, Horizon does not even support all stable OpenStack APIs 
(including the Neutron API), let alone unstable APIs.
2. For an incubated feature, the introduced API is not stable, so it is not 
necessary for Horizon to support it.
3. An incubated feature can be experienced via the CLI/Python client, but not in 
the generally delivered Horizon distribution.
4. If a customer asks the vendor to provide Horizon support for an incubated 
feature, the vendor can customize Horizon case by case, 
with no bearing on the general distribution of Horizon.

Is the logic above reasonable?

Best Regards

Chaoyi Huang ( Joe Huang )

-----Original Message-----
From: Robert Kukura [mailto:kuk...@noironetworks.com]
Sent: September 1, 2014 22:37
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

Sure, Horizon (or Heat) support is not always required for new features 
entering incubation, but when a goal in incubating a feature is to get it 
packaged with OpenStack distributions and into the hands of as many early 
adopters as possible to gather feedback, these integrations are very important.

-Bob

On 9/1/14, 9:05 AM, joehuang wrote:
 Hello,

 Not all features already shipped in Neutron are supported by 
 Horizon. For example, multi-provider networks.

 This is not unique to Neutron. For example, Glance 
 delivered its v2 API in Icehouse or even earlier, supporting the image 
 multi-locations feature, yet that feature is also not available from Horizon.

 Fortunately, the CLI/Python client gives us the opportunity to use these 
 powerful features.

 So it's not necessary to tie Neutron incubation to Horizon tightly. Horizon 
 support for a feature can be introduced when the incubation 
 graduates.

 Best regards.

 Chaoyi Huang ( joehuang )

 
 From: Maru Newby [ma...@redhat.com]
 Sent: September 1, 2014 17:53
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron] Incubator concerns from packaging 
 perspective

 On Aug 26, 2014, at 5:06 PM, Pradeep Kilambi (pkilambi) 
 pkila...@cisco.com wrote:


 On 8/26/14, 4:49 AM, Maru Newby 
 ma...@redhat.com wrote:

 On Aug 25, 2014, at 4:39 PM, Pradeep Kilambi (pkilambi)
 pkila...@cisco.com wrote:


 On 8/23/14, 5:36 PM, Maru Newby 
 ma...@redhat.com wrote:

 On Aug 23, 2014, at 4:06 AM, Sumit Naiksatam
 sumitnaiksa...@gmail.com
 wrote:

 On Thu, Aug 21, 2014 at 7:28 AM, Kyle Mestery
 mest...@mestery.com
 wrote:
 On Thu, Aug 21, 2014 at 5:12 AM, Ihar Hrachyshka
 ihrac...@redhat.com
 wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512

 On 20/08/14 18:28, Salvatore Orlando wrote:
 Some comments inline.

 Salvatore

 On 20 August 2014 17:38, Ihar Hrachyshka 
 ihrac...@redhat.com wrote:

 Hi all,

 I've read the proposal for incubator as described at [1], and
 I have several comments/concerns/suggestions to this.

 Overall, the idea of giving some space for experimentation
 that does not alienate parts of community from Neutron is
 good. In that way, we may relax review rules and quicken
 turnaround for preview features without losing control of those 
 features too much.

 Though the way it's to be implemented leaves several concerns,
 as
 follows:

 1. From a packaging perspective, having a separate repository
 and tarballs seems suboptimal. As a packager, I would rather
 deal with a single tarball than two. Meaning, it would
 be better to keep the code in the same tree.

 I know that we're afraid of shipping code for which some
 users may expect the usual level of support, stability, and
 compatibility. This can be solved by making it explicit 

Re: [openstack-dev] [third party][neutron] - OpenDaylight CI and -1 voting

2014-09-01 Thread Kevin Benton
Thank you YAMAMOTO. I didn't think to look at stackalytics.

Kyle, can you list yourself on the wiki? I don't want to do it in case
there is someone else doing that job full time.
Also, is there a re-trigger phrase that you can document on the Wiki or in
the message body the CI posts to the reviews?

Thanks,
Kevin Benton


On Mon, Sep 1, 2014 at 8:08 PM, YAMAMOTO Takashi yamam...@valinux.co.jp
wrote:

  I have had multiple occasions where the OpenDaylight CI will vote a -1
 on a
  patch for something completely unrelated (e.g. [1]). This would be fine
  except for two issues. First, there doesn't appear to be any way to
 trigger
  a recheck. Second, there is no maintainer listed on the Neutron third
 party
  drivers page.[2] Because of this, there is effectively no way to get the
 -1
  removed without uploading a new patch and losing current code review
 votes.

 http://stackalytics.com/report/driverlog says its maintainer is
 irc:mestery.  last time it happened to me, i asked him to trigger
 recheck and it worked.

 YAMAMOTO Takashi

 
  Can we remove the voting rights for the ODL CI until there is a
 documented
  way to trigger rechecks and a public contact on the drivers page for when
  things go wrong? Getting reviews is already hard enough, let alone when
  there is a -1 in the 'verified' column.
 
  1. https://review.openstack.org/#/c/116187/
  2.
 
 https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Existing_Plugin
 
  --
  Kevin Benton





-- 
Kevin Benton


[openstack-dev] [Cinder][Nova]Quest about Cinder Brick proposal

2014-09-01 Thread Emma Lin
Hi Gurus,
I saw the wiki page for the Cinder Brick proposal for Havana, but I didn't see any 
follow-up on it. Has there been any real progress on that idea?

As this proposal is meant to address the local storage issue, I'd like to know its 
status, and to see whether any task is required of hypervisor providers.

Any comments are appreciated.
Emma



Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-09-01 Thread Kevin Benton
Definitely don't prioritize it. I was just saying that people developing
these would probably be happy to do all of the development and testing.
Is the resource limitation in Horizon on the reviewer side or the code
contribution side? I'm sure there are people familiar with Neutron that
would be happy to help with developing the missing stable neutron features
you mentioned.


On Mon, Sep 1, 2014 at 8:16 PM, Wuhongning wuhongn...@huawei.com wrote:

  It would satisfy everyone if Horizon supported all APIs, either in tree or
 in the lab, but only on the prerequisite that we have enough resources for Horizon.

  Otherwise, given limited resources, we have to sort by priority;
 should we then focus first on the APIs being baked in the incubator?

  --
 *From:* Kevin Benton [blak...@gmail.com]
 *Sent:* Tuesday, September 02, 2014 9:55 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron] Incubator concerns from
 packaging perspective

   Is it possible to have a /contrib folder or something similar where
 experimental modules can be placed? Requiring a separate Horizon
 distribution for every project in the incubator is really going to make it
 difficult to get users to try them out.


 On Mon, Sep 1, 2014 at 6:39 PM, joehuang joehu...@huawei.com wrote:

 Hello,

 1. As a dashboard, Horizon does not even support all stable OpenStack APIs
 (including the Neutron API), let alone unstable APIs.
 2. For an incubated feature, the introduced API is not stable, so it is not
 necessary for Horizon to support it.
 3. An incubated feature can be experienced via the CLI/Python client, but
 not in the generally delivered Horizon distribution.
 4. If a customer asks the vendor to provide Horizon support for an
 incubated feature, the vendor can customize Horizon case by
 case, with no bearing on the general distribution of Horizon.

 Is the logic above reasonable?

 Best Regards

 Chaoyi Huang ( Joe Huang )

 -----Original Message-----
 From: Robert Kukura [mailto:kuk...@noironetworks.com]
 Sent: September 1, 2014 22:37
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [neutron] Incubator concerns from packaging
 perspective

 Sure, Horizon (or Heat) support is not always required for new features
 entering incubation, but when a goal in incubating a feature is to get it
 packaged with OpenStack distributions and into the hands of as many early
 adopters as possible to gather feedback, these integrations are very
 important.

 -Bob

 On 9/1/14, 9:05 AM, joehuang wrote:
  Hello,
 
  Not all features already shipped in Neutron are supported by
 Horizon. For example, multi-provider networks.
 
  This is not unique to Neutron. For example,
 Glance delivered its v2 API in Icehouse or even earlier, supporting the image
 multi-locations feature, yet that feature is also not available from Horizon.
 
  Fortunately, the CLI/Python client gives us the opportunity to use
 these powerful features.
 
  So it's not necessary to tie Neutron incubation to Horizon tightly.
 Horizon support for a feature can be introduced when the
 incubation graduates.
 
  Best regards.
 
  Chaoyi Huang ( joehuang )
 
  
  From: Maru Newby [ma...@redhat.com]
  Sent: September 1, 2014 17:53
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [neutron] Incubator concerns from packaging
  perspective
 
  On Aug 26, 2014, at 5:06 PM, Pradeep Kilambi (pkilambi) 
 pkila...@cisco.com wrote:
 
 
  On 8/26/14, 4:49 AM, Maru Newby ma...@redhat.com wrote:
 
  On Aug 25, 2014, at 4:39 PM, Pradeep Kilambi (pkilambi)
  pkila...@cisco.com wrote:
 
 
  On 8/23/14, 5:36 PM, Maru Newby ma...@redhat.com wrote:
 
  On Aug 23, 2014, at 4:06 AM, Sumit Naiksatam
  sumitnaiksa...@gmail.com
  wrote:
 
  On Thu, Aug 21, 2014 at 7:28 AM, Kyle Mestery
  mest...@mestery.com
  wrote:
  On Thu, Aug 21, 2014 at 5:12 AM, Ihar Hrachyshka
  ihrac...@redhat.com
  wrote:
  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA512
 
  On 20/08/14 18:28, Salvatore Orlando wrote:
  Some comments inline.
 
  Salvatore
 
  On 20 August 2014 17:38, Ihar Hrachyshka ihrac...@redhat.com
  mailto:ihrac...@redhat.com wrote:
 
  Hi all,
 
  I've read the proposal for incubator as described at [1], and
  I have several comments/concerns/suggestions to this.
 
  Overall, the idea of giving some space for experimentation
  that does not alienate parts of community from Neutron is
  good. In that way, we may relax review rules and quicken
  turnaround for preview features without losing control of
 those features too much.
 
  Though the way it's to be implemented leaves several concerns,
  as
  follows:
 
  1. From a packaging perspective, having a separate repository
  and tarballs seems suboptimal. As a packager, I would rather
  deal with a single tarball than two. Meaning, it would
  be better to keep the code in the 

[openstack-dev] [gantt] scheduler sub-group agenda 9/2

2014-09-01 Thread Dugger, Donald D
1) Forklift status
2) Next steps (now that Isolate Scheduler DB blueprint was rejected)
3) Opens

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786




Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-01 Thread Brandon Logan
Hi Susanne and everyone,

My opinion is that keeping it in Stackforge until it matures is
the best solution.  I'm pretty sure we can all agree on that.  When
it is mature, and only then, we should try to get it into OpenStack
one way or another.  If Neutron LBaaS v2 is still incubated then it
should be relatively easy to get it into that codebase.  If Neutron LBaaS
has already spun out, even easier for us.  If we want Octavia to
become an OpenStack project all its own, then that will be the difficult
part.

I think the best course of action is to get Octavia itself into the same
codebase as LBaaS (Neutron or spun out).  They do go together, and the
maintainers will almost always be the same for both.  This makes even
more sense when LBaaS is spun out into its own project.

I really think all of the answers to these questions will fall into
place when we actually deliver a product that we are all wanting and
talking about delivering with Octavia.  Once we prove that we can all
come together as a community and manage a product from inception to
maturity, we will then have the respect and trust to do what is best for
an OpenStack LBaaS product.

Thanks,
Brandon

On Mon, 2014-09-01 at 10:18 -0400, Susanne Balle wrote:
 Kyle, Adam,
 
  
 
 Based on this thread Kyle is suggesting the follow moving forward
 plan: 
 
  
 
 1) We incubate Neutron LBaaS v2 in the “Neutron” incubator “and freeze
 LBaaS v1.0”
 2) “Eventually” It graduates into a project under the networking
 program.
 3) “At that point” We deprecate Neutron LBaaS v1.
 
  
 
 The words in “xx” are words I added to make sure we all understand the
 whole picture.
 
  
 
 And as Adam mentions: Octavia != LBaaS-v2. Octavia is a peer to F5 /
 Radware / A10 / etc appliances which is a definition I agree with BTW.
 
  
 
 What I am now trying to understand is how we will move Octavia into
 the new LBaaS project. 
 
  
 
 If we do it later, rather than developing Octavia in tree under the newly
 incubated LBaaS project, when do we plan to bring it in-tree from
 Stackforge? Kilo? Later? When LBaaS is a separate project under the
 Networking program?

  
 
 What are the criteria to bring a driver into the LBaaS project and
 what do we need to do to replace the existing reference driver? Maybe
 adding a software driver to LBaaS source tree is less of a problem
 than converting a whole project to an OpenStack project.

  
 
 Again, I am open to both directions; I just want to make sure we
 understand why we are choosing one or the other, and that our
  decision is based on data and not emotions. 
 
  
 
 I am assuming that keeping Octavia in Stackforge will increase the
 velocity of the project and allow us more freedom which is goodness.
 We just need to have a plan to make it part of the Openstack LBaaS
 project.
 
  
 
 Regards Susanne
 
 
 
 
 On Sat, Aug 30, 2014 at 2:09 PM, Adam Harwell
 adam.harw...@rackspace.com wrote:
 Only really have comments on two of your related points:
 
 
 [Susanne] To me Octavia is a driver so it is very hard to me
 to think of it as a standalone project. It needs the new
 Neutron LBaaS v2 to function which is why I think of them
 together. This of course can change since we can add whatever
 layers we want to Octavia.
 
 
 [Adam] I guess I've always shared Stephen's
 viewpoint — Octavia != LBaaS-v2. Octavia is a peer to F5 /
 Radware / A10 / etc. appliances, not to an OpenStack API layer
 like Neutron-LBaaS. It's a little tricky to clearly define
 this difference in conversation, and I have noticed that quite
 a few people are having the same issue differentiating. In a
 small group, having quite a few people not on the same page is
 a bit scary, so maybe we need to really sit down and map this
 out so everyone is together one way or the other.
 
 
 [Susanne] OK, now I am confused… But I agree with you that we
 need to focus on our use cases. I remember us discussing
 Octavia being the reference implementation for OpenStack LBaaS
 (whatever that is). Has that changed while I was on vacation?
 
 
 [Adam] I believe that having the Octavia driver (not the
 Octavia codebase itself, technically) become the reference
 implementation for Neutron-LBaaS is still the plan in my eyes.
 The Octavia Driver in Neutron-LBaaS is a separate bit of code
 from the actual Octavia project, similar to the way the A10
 driver is a separate bit of code from the A10 appliance. To do
 that though, we need Octavia to be fairly close to fully
 functional. I believe we can do this because even though the
 reference driver would then require an additional service to
 run, what it requires is still fully-open-source and (by way
 of our plan) available as