Re: [openstack-dev] [QA] Proposed Changes to Tempest Core

2014-07-22 Thread Koderer, Marc
+1

 -Original Message-
 From: Matthew Treinish [mailto:mtrein...@kortar.org]
 Sent: Tuesday, 22 July 2014 00:34
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [QA] Proposed Changes to Tempest Core
 
 
 Hi Everyone,
 
 I would like to propose 2 changes to the Tempest core team:
 
 First, I'd like to nominate Andrea Frittoli to the Tempest core team. Over
 the past cycle Andrea has steadily become more actively engaged in the
 Tempest community. Besides his code contributions around refactoring
 Tempest's authentication and credentials code, he has been providing reviews
 of consistently high quality that show insight into both the project
 internals and its future direction. In addition he has been active in the
 qa-specs repo, both providing reviews and spec proposals, which has been
 very helpful as we've been adjusting to using the new process. Keeping in
 mind that becoming a member of the core team is about earning the trust of
 the members of the current core team through communication and quality
 reviews, not simply a matter of review numbers, I feel that Andrea will make
 an excellent addition to the team.
 
 As usual, current Tempest core team members, please vote +1 or -1 (veto)
 on the nomination when you get a chance. We'll keep the poll open for 5
 days or until everyone has voted.
 
 References:
 
 https://review.openstack.org/#/q/reviewer:%22Andrea+Frittoli+%22,n,z
 
 http://stackalytics.com/?user_id=andrea-frittoli&metric=marks&module=qa-group
 
 
 The second change that I'm proposing today is to remove Giulio Fidente from
 the core team. He asked to be removed from the core team a few weeks back
 because he is no longer able to dedicate the required time to Tempest
 reviews. So if there are no objections to this I will remove him from the
 core team in a few days.
 Sorry to see you leave the team, Giulio...
 
 
 Thanks,
 
 Matt Treinish



Re: [openstack-dev] [nova][neutron] Networks without subnets

2014-07-22 Thread loy wolfe
It is another BP about NFV:

https://review.openstack.org/#/c/97715


On Tue, Jul 22, 2014 at 9:37 AM, Isaku Yamahata isaku.yamah...@gmail.com
wrote:

 On Mon, Jul 21, 2014 at 02:52:04PM -0500,
 Kyle Mestery mest...@mestery.com wrote:

   Following up with post SAD status:
  
   * https://review.openstack.org/#/c/99873/ ML2 OVS: portsecurity
 extension support
  
   Remains unapproved, no negative feedback on current revision.
  
   * https://review.openstack.org/#/c/106222/ Add Port Security
 Implementation in ML2 Plugin
  
   Has a -2 to highlight the significant overlap with 99873 above.
  
  Although there were some discussions about these last week, I am not
 sure we reached consensus on whether either of these (or even both of them)
 is the correct path forward - particularly to address the problem Brent
 raised w.r.t. creation of networks without subnets - I believe this
 currently still works with nova-network?
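  For context, a "network without subnets" is literally a network on which
  create_subnet() is never called; a minimal sketch with python-neutronclient
  (credentials and names are illustrative):

      from neutronclient.v2_0 import client

      neutron = client.Client(username='admin', password='secret',
                              tenant_name='admin',
                              auth_url='http://controller:5000/v2.0')
      # No subnet is ever created on this network, so a port plugged into
      # it has no fixed IP - the case the NFV blueprints care about.
      net = neutron.create_network({'network': {'name': 'no-subnet-net'}})

  Whether nova can boot an instance attached to such a network is exactly
  the question raised above.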
  
   Regardless, I am wondering if either of the spec authors intend to
 propose these for a spec freeze exception?
  
  For the port security implementation in ML2, I've had one of the
  authors reach out to me. I'd like them to send an email to the
  openstack-dev ML though, so we can have the discussion here.

  As I commented on gerrit, we, the two authors of port security
  (Shweta and I), have agreed that the blueprints/specs will be unified.
 I'll send a mail for a spec freeze exception soon.

 thanks,
 --
 Isaku Yamahata isaku.yamah...@gmail.com




Re: [openstack-dev] [Neutron][Spec Freeze Exception] ml2-ovs-portsecurity

2014-07-22 Thread Isaku Yamahata
On Tue, Jul 22, 2014 at 01:12:24PM +0800,
loy wolfe loywo...@gmail.com wrote:

 any relation with this BP?
 
 https://review.openstack.org/#/c/97715/6/specs/juno/nfv-unaddressed-interfaces.rst

No direct relationship, because the above blueprint doesn't specify a
concrete plugin/mechanism driver; it defines only the REST API.
If it touches the iptables_firewall driver, both BPs will need mostly the
same code changes in the driver.
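
As a rough API-level illustration of what the extension exposes (a hedged
sketch; the port_security_enabled attribute comes from the existing port
security extension, the credentials and UUID are illustrative):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')
    port_id = '2f3d0e5a-0000-0000-0000-000000000000'  # illustrative
    # Disable anti-spoofing for one port, e.g. for a service VM that
    # forwards traffic; with the proposed BPs the iptables_firewall
    # driver would then skip its spoofing rules for this port.
    neutron.update_port(port_id,
                        {'port': {'port_security_enabled': False}})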

thanks,

 
 
 
 On Tue, Jul 22, 2014 at 11:17 AM, Isaku Yamahata isaku.yamah...@gmail.com
 wrote:
 
 
  I'd like to request Juno spec freeze exception for ML2 OVS portsecurity
  extension.
 
  - https://review.openstack.org/#/c/99873/
ML2 OVS: portsecurity extension support
 
  - https://blueprints.launchpad.net/neutron/+spec/ml2-ovs-portsecurity
Add portsecurity support to ML2 OVS mechanism driver
 
  The spec/blueprint adds portsecurity extension to ML2 plugin and implements
  it in ovs mechanism driver with iptables_firewall driver.
  The spec has gotten five +1s across many respins.
  This feature will be a foundation for running network services within a VM.

  There is another spec whose goal is the same:
  - https://review.openstack.org/#/c/106222/
    Add Port Security Implementation in ML2 Plugin
  The author, Shweta, and I have agreed to consolidate those specs/blueprints
  and unite behind the same goal.
 
  Thanks,
  --
  Isaku Yamahata isaku.yamah...@gmail.com
 
 



-- 
Isaku Yamahata isaku.yamah...@gmail.com



[openstack-dev] [Nova] [Spec freeze exception] add boot_mode filter for nova.virt.ironic driver

2014-07-22 Thread Faizan Barmawer
Hello everyone,

I would like to request a Juno spec freeze exception for "Add ironic boot
mode filters".

https://review.openstack.org/#/c/108582/

This change is required to support UEFI boot mode in ironic drivers. The
ironic spec to add UEFI support in ironic is still under review
https://review.openstack.org/#/c/99850/, and hence at present it cannot be
added to nova virt ironic driver code in the ironic repository.

This change is dependent on
https://blueprints.launchpad.net/nova/+spec/add-ironic-driver
and will be contained only in ironic virt driver space.

Regards,
Faizan Barmawer


Re: [openstack-dev] [Nova] [Spec freeze exception] add boot_mode filter for nova.virt.ironic driver

2014-07-22 Thread Michael Still
I think this sounds risky to me... I'd rather we landed _something_ in
terms of an ironic driver in juno, rather than adding features to what
we have now. In fact, I thought Devananda had frozen the ironic nova
driver to make this easier?

Michael

On Tue, Jul 22, 2014 at 4:45 PM, Faizan Barmawer
faizan.barma...@gmail.com wrote:
 Hello everyone,

 I would like to request a Juno spec freeze exception for "Add ironic boot
 mode filters".

 https://review.openstack.org/#/c/108582/

 This change is required to support UEFI boot mode in ironic drivers. The
 ironic spec to add UEFI support in ironic is still under review
 https://review.openstack.org/#/c/99850/, and hence at present it cannot be
 added to nova virt ironic driver code in the ironic repository.

 This change is dependent on
 https://blueprints.launchpad.net/nova/+spec/add-ironic-driver
 and will be contained only in ironic virt driver space.

 Regards,
 Faizan Barmawer






-- 
Rackspace Australia



Re: [openstack-dev] Virtio-scsi settings nova-specs exception

2014-07-22 Thread Michael Still
Fair enough. Let's roll with that then.

Michael

On Tue, Jul 22, 2014 at 6:33 AM, Sean Dague s...@dague.net wrote:
 On 07/21/2014 03:35 PM, Dan Smith wrote:
 We've already approved many other blueprints for Juno that involve features
 from new libvirt, so I don't think it is credible to reject this or any
 other feature that requires new libvirt in Juno.

 Furthermore this proposal for Nova is a targeted feature which is not
 enabled by default, so the risk of regression for people not using it
 is negligible. So I see no reason not to accept this feature.

 Yep, the proposal that started this discussion was never aimed at
 creating new test requirements for already-approved nova specs anyway. I
 definitely don't think we need to hold up something relatively simple
 like this on those grounds, given where we are in the discussion.

 --Dan

 Agreed. This was mostly about figuring out a future path for ensuring
 that the features that we say work in OpenStack either have some
 validation behind them, or some appropriate disclaimers so that people
 realize they aren't really tested in our normal system.

 I'm fine with the virtio-scsi settings moving forward.

 -Sean

 --
 Sean Dague
 http://dague.net




-- 
Rackspace Australia



Re: [openstack-dev] [neutron] Specs repository update and the way forward

2014-07-22 Thread Baohua Yang
Great!
Also, I'm not sure if this is right, but I cannot find a place on the website
to compare two commits, e.g., the latest patchset and the previous one.
I guess that would make it easier to see what changed in a new patch.


On Mon, Jul 21, 2014 at 9:47 PM, Kyle Mestery mest...@mestery.com wrote:

 On Mon, Jul 21, 2014 at 8:33 AM, Carlos Gonçalves m...@cgoncalves.pt
 wrote:
  On 12 Jun 2014, at 15:00, Carlos Gonçalves m...@cgoncalves.pt wrote:
 
  Is there any web page where all approved blueprints are being published
 to?
  Jenkins builds the kind of pages I'm looking for, but they are linked to
  each patchset individually (e.g.,
 
 http://docs-draft.openstack.org/77/92477/6/check/gate-neutron-specs-docs/f05cc1d/doc/build/html/
 ).
  In addition, listing BPs currently under reviewing and linking to its
  review.o.o page could potentially draw more attention/awareness to what’s
  being proposed to Neutron (and other OpenStack projects).
 
 
  Kyle? :-)
 
 I don't know of a published page, but you can always look at the git
 repository [1].

 [1] http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/juno

  Thanks,
  Carlos Goncalves
 
 





-- 
Best wishes!
Baohua


Re: [openstack-dev] [Cinder] Integrated with iSCSI target Question

2014-07-22 Thread Johnson Cheng
Dear Jay,

Yes, that answers my question.
Because I have a problem launching an instance from an image, I cannot do a
'nova volume-attach' to check whether the iSCSI LUN mechanism works.

Thanks for your reply.

Regards,
Johnson

-Original Message-
From: Jay S. Bryant [mailto:jsbry...@electronicjungle.net] 
Sent: Monday, July 21, 2014 11:53 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] Integrated with iSCSI target Question

Johnson,

I am not sure what you mean by 'attach volume manually'.  Do you mean when you 
do a 'nova volume-attach'?  If so, then, yes, the process will use the 
appropriate iscsi_helper and iscsi_ip_address to configure the attachment.

Does that answer your question?

Jay
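
A minimal sketch of that flow with python-novaclient (IDs and credentials are
illustrative); the iSCSI target is only created at this point, which is why
discovery shows nothing beforehand:

    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin',
                         'http://controller:5000/v2.0')
    server_id = '11111111-0000-0000-0000-000000000000'  # illustrative
    volume_id = '22222222-0000-0000-0000-000000000000'  # illustrative
    # Equivalent to 'nova volume-attach SERVER VOLUME /dev/vdb': cinder
    # exports the LV as an iSCSI target (via iscsi_helper) on
    # iscsi_ip_address, and the compute host's initiator logs in to it.
    nova.volumes.create_server_volume(server_id, volume_id, '/dev/vdb')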

On Mon, 2014-07-21 at 12:23 +, Johnson Cheng wrote:
 Dear Thomas,
 
 Thanks for your reply.
 So when I attach a volume manually, will the iSCSI LUN be automatically set
 up based on cinder.conf (iscsi_helper and iscsi_ip_address)?
 
 
 Regards,
 Johnson
 
 -Original Message-
 From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
 Sent: Monday, July 21, 2014 6:16 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Cinder] Integrated with iSCSI target 
 Question
 
 The iSCSI lun won't be set up until you try to attach the volume
 
 On 17 July 2014 12:44, Johnson Cheng johnson.ch...@qsantechnology.com wrote:
  Dear All,
 
 
 
  I installed iSCSI target at my controller node (IP: 192.168.106.20),
 
  #iscsitarget open-iscsi iscsitarget-dkms
 
 
 
  then modify my cinder.conf at controller node as below,

   [DEFAULT]
   rootwrap_config = /etc/cinder/rootwrap.conf
   api_paste_confg = /etc/cinder/api-paste.ini
   #iscsi_helper = tgtadm
   iscsi_helper = ietadm
   volume_name_template = volume-%s
   volume_group = cinder-volumes
   verbose = True
   auth_strategy = keystone
   #state_path = /var/lib/cinder
   #lock_path = /var/lock/cinder
   #volumes_dir = /var/lib/cinder/volumes
   iscsi_ip_address=192.168.106.20

   rpc_backend = cinder.openstack.common.rpc.impl_kombu
   rabbit_host = controller
   rabbit_port = 5672
   rabbit_userid = guest
   rabbit_password = demo

   glance_host = controller

   enabled_backends=lvmdriver-1,lvmdriver-2

   [lvmdriver-1]
   volume_group=cinder-volumes-1
   volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
   volume_backend_name=LVM_iSCSI

   [lvmdriver-2]
   volume_group=cinder-volumes-2
   volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
   volume_backend_name=LVM_iSCSI_b

   [database]
   connection = mysql://cinder:demo@controller/cinder

   [keystone_authtoken]
   auth_uri = http://controller:5000
   auth_host = controller
   auth_port = 35357
   auth_protocol = http
   admin_tenant_name = service
   admin_user = cinder
   admin_password = demo
 
 
 
  Now I use the following command to create a cinder volume, and it
  can be created successfully.

  # cinder create --volume-type lvm_controller --display-name vol 1

  Unfortunately it seems it does not get attached to an iSCSI LUN
  automatically, because I cannot discover it from the iSCSI initiator:

  # iscsiadm -m discovery -t st -p 192.168.106.20

  Do I miss something?
 
 
 
 
 
  Regards,
 
  Johnson
 
 
 
 
 
  From: Manickam, Kanagaraj [mailto:kanagaraj.manic...@hp.com]
  Sent: Thursday, July 17, 2014 1:19 PM
 
 
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Cinder] Integrated with iSCSI target 
  Question
 
 
 
  I think it should be on the cinder node, which is usually deployed
  on the controller node.
 
 
 
  From: Johnson Cheng [mailto:johnson.ch...@qsantechnology.com]
  Sent: Thursday, July 17, 2014 10:38 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [Cinder] Integrated with iSCSI target 
  Question
 
 
 
  Dear All,
 
 
 
  I have three nodes: a controller node and two compute nodes (volume nodes).

  The default value for iscsi_helper in cinder.conf is “tgtadm”; I will
  change it to “ietadm” to integrate with the iSCSI target.

  Unfortunately I am not sure whether iscsitarget should be installed on the
  controller node or on the compute nodes.

  Is there any reference?
 
 
 
 
 
  Regards,
 
  Johnson
 
 
 
 
 
 
 
 
 --
 Duncan Thomas
 




Re: [openstack-dev] [TripleO] os-refresh-config run frequency

2014-07-22 Thread Macdonald-Wallace, Matthew
Any chance for getting it streamed or at least IRC'd for those of us who have 
an interest in this but can't attend?

From: Robert Collins [robe...@robertcollins.net]
Sent: 20 July 2014 20:30
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [TripleO] os-refresh-config run frequency


Sure. Put it on the agenda, perhaps Tuesday morning.

On 20 Jul 2014 12:11, Chris Jones c...@tenshu.net wrote:
Hi

I also have some strong concerns about this. Can we get round a table this week 
and hash it out?

Cheers,
Chris


 On 20 Jul 2014, at 14:51, Dan Prince dpri...@redhat.com wrote:

 On Thu, 2014-07-17 at 15:54 +0100, Michael Kerrin wrote:
 On Thursday 26 June 2014 12:20:30 Clint Byrum wrote:

 Excerpts from Macdonald-Wallace, Matthew's message of 2014-06-26 04:13:31 -0700:

 Hi all,

 I've been working more and more with TripleO recently and whilst it does
 seem to solve a number of problems well, I have found a couple of
 idiosyncrasies that I feel would be easy to address.

 My primary concern lies in the fact that os-refresh-config does not run on
 every boot/reboot of a system. Surely a reboot *is* a configuration change
 and therefore we should ensure that the box has come up in the expected
 state with the correct config?

 This is easily fixed through the addition of an @reboot entry in
 /etc/crontab to run o-r-c, or (less easily) by re-designing o-r-c to run
 as a service.

 My secondary concern is that by not running os-refresh-config on a regular
 basis by default (i.e. every 15 minutes or something, in the same style as
 chef/cfengine/puppet), we leave ourselves exposed to someone trying to
 make a quick fix to a production node and taking that node offline the
 next time it reboots, because the config was still left broken owing to a
 lack of updates to Heat (I'm thinking of a quick change to allow root
 access via SSH during a major incident that is then left unchanged for
 months because no-one updated Heat).

 There are a number of options to fix this, including modifying
 os-collect-config to auto-run os-refresh-config on a regular basis, or
 setting os-refresh-config to be its own service running via upstart or
 similar that triggers every 15 minutes.

 I'm sure there are other solutions to these problems, however I know from
 experience that claiming this is solved through education of users or
 (more severely!) via HR is not a sensible approach to take, as by the time
 you realise that your configuration has been changed for the last 24 hours
 it's often too late!

 So I see two problems highlighted above.

 1) We don't re-assert ephemeral state set by o-r-c scripts. You're right,
 and we've been talking about it for a while. The right thing to do is to
 have os-collect-config re-run its command on boot. I don't think a cron
 job is the right way to go; we should just have a file in /var/run that
 is placed there only on a successful run of the command. If that file
 does not exist, then we run the command.

 I've just opened this bug in response:

 https://bugs.launchpad.net/os-collect-config/+bug/1334804
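
 A minimal sketch of that sentinel-file approach (paths and names here are
 illustrative, not the actual os-collect-config implementation):

     import os
     import subprocess

     # /var/run is a tmpfs, so the sentinel disappears on reboot and the
     # command runs again on the next boot.
     SENTINEL = '/var/run/os-collect-config/last-run'

     def run_command_if_needed(command):
         if os.path.exists(SENTINEL):
             return  # already ran successfully since the last boot
         subprocess.check_call(command)
         # Touch the sentinel only after a successful run.
         sentinel_dir = os.path.dirname(SENTINEL)
         if not os.path.isdir(sentinel_dir):
             os.makedirs(sentinel_dir)
         open(SENTINEL, 'w').close()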




 I have been looking into bug #1334804 and I have a review up to resolve it.
 I want to highlight something.

 Currently on a reboot we start all services via upstart (on debian anyway),
 and there have been quite a lot of issues around this - missing upstart
 scripts and timing issues. I don't know the issues on fedora.

 So with a fix to #1334804, on a reboot upstart will start all the services
 first (with potentially out-of-date configuration), then o-c-c will start
 o-r-c, which will now configure all services and restart them, or start
 them if upstart isn't configured properly.

 I would like to turn off all boot scripts for services we configure and
 leave all this to o-r-c. I think this will simplify things and put us in
 control of starting services. I believe that it will also narrow the gap
 between fedora and debian, or debian and debian, so what works on one
 should work on the other, and make it easier for developers.

 I'm not sold on this approach. At the very least I think we want to make
 this optional because not all deployments may want to have o-r-c be the
 central service starting agent. So I'm opposed to this being our (only!)
 default...

 The job of o-r-c in this regard is to assert state... which to me means
 making sure that a service is configured correctly (config files, set to
 start on boot, and initially started). Requiring o-r-c to be the service
 starting agent (always) is beyond the scope of the o-r-c tool.

 If people want to use it in that mode I think having an *option* to do
 this is fine. I don't think it should be required though. Furthermore I
 don't think we should get into the habit of writing our elements in such
 a matter that things no longer start on boot without o-r-c in the mix.

 I do 

Re: [openstack-dev] [tc][rally] Application for a new OpenStack Program: Performance and Scalability

2014-07-22 Thread Baohua Yang
It's interesting and practical!
There have been some early efforts to make OpenStack more robust and
efficient; however, we are still lacking such a framework.

Just a little question.
On the wiki, it says to consider the scalability problem.
How can we measure performance at large scale? By real deployment, or just
by simulating the input?
From the descriptions, it seems the method is to test every single
component.

Also, I suggest adding the necessary perf checks into the test/integration
jobs, too.
Thanks!



On Tue, Jul 22, 2014 at 5:53 AM, Boris Pavlovic bo...@pavlovic.me wrote:

 Hi Stackers and TC,

 The Rally contributor team would like to propose a new OpenStack program
 with a mission to provide scalability and performance benchmarking, and
 code profiling tools for OpenStack components.

 We feel we've achieved a critical mass in the Rally project, with an
 active, diverse contributor team. The Rally project will be the initial
 project in a new proposed Performance and Scalability program.

 Below, the details on our proposed new program.

 Thanks for your consideration,
 Boris



 [1] https://review.openstack.org/#/c/108502/


 Official Name
 =

 Performance and Scalability

 Codename
 

 Rally

 Scope
 =

 Scalability benchmarking, performance analysis, and profiling of
 OpenStack components and workloads

 Mission
 ===

 To increase the scalability and performance of OpenStack clouds by:

 * defining standard benchmarks
 * sharing performance data between operators and developers
 * providing transparency of code paths through profiling tools

 Maturity
 

 * Meeting logs http://eavesdrop.openstack.org/meetings/rally/2014/
 * IRC channel: #openstack-rally
  * Rally performance jobs are in (Cinder, Glance, Keystone & Neutron)
  check pipelines.
  * > 950 commits over the last 10 months
  * Large, diverse contributor community
   *
  http://stackalytics.com/?release=juno&metric=commits&project_type=All&module=rally
   * http://stackalytics.com/report/contribution/rally/180

  * Unofficial project lead: Boris Pavlovic
   * Official election in progress.

 Deliverables
 

 Critical deliverables in the Juno cycle are:

 * extending Rally Benchmark framework to cover all use cases that are
 required by all OpenStack projects
 * integrating OSprofiler in all core projects
  * increasing functional & unit testing coverage of Rally.

 Discussion
 ==

 One of the major goals of Rally is to make it simple to share results of
 standardized benchmarks and experiments between operators and
 developers. When an operator needs to verify certain performance
 indicators meet some service level agreement, he will be able to run
 benchmarks (from Rally) and share with the developer community the
 results along with his OpenStack configuration. These benchmark results
 will assist developers in diagnosing particular performance and
 scalability problems experienced with the operator's configuration.

 Another interesting area is Rally  the OpenStack CI process. Currently,
 working on performance issues upstream tends to be a more social than
 technical process. We can use Rally in the upstream gates to identify
 performance regressions and measure improvement in scalability over
 time. The use of Rally in the upstream gates will allow a more rigorous,
 scientific approach to performance analysis. In the case of an
 integrated OSprofiler, it will be possible to get detailed information
 about API call flows (e.g. duration of API calls in different services).
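
 As a concrete illustration, a minimal Rally benchmark task of this era looks
 roughly like the following (the scenario name is a real one; the argument
 values are illustrative):

     {
         "NovaServers.boot_and_delete_server": [
             {
                 "args": {
                     "flavor": {"name": "m1.tiny"},
                     "image": {"name": "cirros"}
                 },
                 "runner": {"type": "constant", "times": 10, "concurrency": 2}
             }
         ]
     }

 Sharing a task file like this together with its results is what makes a
 benchmark reproducible between operators and developers.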







-- 
Best wishes!
Baohua


Re: [openstack-dev] [Infra][Neutron] Request voting for Tail-f CI account

2014-07-22 Thread Luke Gorrie
Hi Sean,

On 21 July 2014 22:53, Collins, Sean sean_colli...@cable.comcast.com
wrote:

   The fact that I tried to reach out to the person who was listed as the
  contact back in November to try and resolve the -1 that this CI system
 gave, and never received a response until the public mailing list thread
 about revoking voting rights for Tail-F, makes me believe that the Tail-F
 CI system is still not ready to have that kind of privilege. Especially if
 the account was idle from around February, until June – that is a huge gap,
 if I understand correctly?


I understand your frustration. It seems like the experience of bringing up
our CI has been miserable for all concerned. I am sad about that. It does
not seem that it should have worked out this way, since everybody concerned
is a competent person and acting in good faith.

I hope we can finally clear this up and then continue with contributing to
OpenStack on good terms with everybody.

Back in November we were feeling eager to be good citizens and we wanted to
be amongst the first to setup a 3rd party CI for Neutron. We were trying to
be proactive: our driver was already in Havana and the deadlines for us to
setup the CI were far in the future. My colleague Tobbe was also planning
to take the lead on development of our OpenStack code from me and we
thought the perfect first step would be to setup our CI system, since that
would get him familiar with the code and since neither of us had prior
experience operating an OpenStack CI.

We read through the 3rd Party CI setup instructions and created a CI. Our
initial setup ran Jenkins and would use a custom script to create a
one-shot VM and inside that it would run the Neutron unit tests together
with a patch that made our driver talk to our real external system. This
got quite good test coverage because the unit tests really exercise the ML2
interface quite well. (Likely we should have used Tempest instead, as
everybody, including us, does nowadays, but we didn't know that back then.)

This seemed to work well and so we let it run. Honestly, we did not really
know what would happen with our results after they were posted, and we did
not have a definite goal for what service level we should uphold. That was
surely naive, but I think understandable. We were relatively new and minor
contributors to OpenStack and we were amongst the first wave of Neutron
people to setup a CI. We hadn't yet had the opportunity to learn from the
mistakes of others or see how reviews are used by the upstream people and
systems. We were also perhaps a little too relaxed because our total
contribution was around 150 lines of code that only run when explicitly
enabled, and we had our own test procedure in place separately from
OpenStack CI that we had been using since Havana, so it did not feel like
we had much potential to impact other OpenStack users and developers with
our code.

Anyway. The test runs started to fail unexpectedly, for a boring kind of
reason like that OpenStack needed a newer version of a library and our CI
script lacked a pip upgrade command that would pick it up, so all tests
would fail until manual intervention.

So what happens when the CI falls down and needs help to come back up?
First of all, it creates a big problem for upstream developers and slows
down work on OpenStack (ouch). Second, you poor guys who are having
problems try to contact the person responsible, but all you have is one
work email address and IRC nick. In that case, you guys did not get a
response. I think that was for the very pedestrian reason that my colleague
who was responsible was on vacation and didn't appreciate that an
operational issue with our CI would create an urgent problem for other
people and must be attended to at all times.

This must have been bad for you guys since you were stuck waiting on us and
couldn't fix the problem on your side. I was also contacted by email, as
the previous contact person for that driver, but the message simply asked
me to confirm my colleague's email address and did not tell me that there
was a problem that we had to resolve. So eventually the problem boiled over
and when we started getting publicly flamed on the mailing list then I
finally saw that there was an issue and called up my colleague directly, who
*then* jumped into action to sort it out (logging into gerrit and
reversing old negative votes, and so on).

So what do we take away from this first experience? To me it just looks
like processes to fix: people operating 3rd party CIs need to better
understand the required service level, there should be multiple contact
points to deal with mundane stuff like vacations and illness, and that
people should operate their CI successfully for a while before voting is
enabled. It sucks that work was interrupted and people got mad, but at the
end of the day this happened with everybody acting in good faith, and it
shows us what kind of problems to prevent in the future.

This is where it became a bit 

Re: [openstack-dev] [Rally] PTL Candidacy

2014-07-22 Thread Hugh Saunders
+1. Boris has the enthusiasm and time to take Rally forward. He is
especially good at encouraging people to get involved.

"Ensure that everybody's use cases are fully covered" - this must be
balanced against the need for focus and clear scope.

--
Hugh Saunders


On 21 July 2014 19:38, Boris Pavlovic bpavlo...@mirantis.com wrote:

 Hi,

 I would like to propose my candidacy for Rally PTL.

  I started this project to make benchmarking of OpenStack as simple as
  possible. This means not only load generation, but also an
  OpenStack-specific benchmark framework, data analysis, and integration
  with gates. All these things should make it simple for developers and
  operators to benchmark (perf, scale, stress test) OpenStack, share
  experiments & results, and have a fast way to find what produces a
  bottleneck, or just to ensure that OpenStack works well under the load
  they are expecting.

  I am the current unofficial PTL, and my responsibilities include things
  like:
  1) Adapting the Rally architecture to cover everybody's use cases
  2) Building & managing the work of the community
  3) Writing a lot of code
  4) Working on docs & wiki
  5) Helping newbies join the Rally team

  As PTL I would like to continue this work and finish my initial goals:
  1) Ensure that everybody's use cases are fully covered
  2) Ensure there is no monopoly in the project
  3) Run Rally in the gates of all OpenStack projects (currently we have
  check jobs in Keystone, Cinder, Glance & Neutron)
  4) Continue making the project more mature; this covers topics like
  increasing unit and functional test coverage and making Rally absolutely
  safe to run against any production cloud.


 Best regards,
 Boris Pavlovic





Re: [openstack-dev] [Rally] PTL Candidacy

2014-07-22 Thread Sergey Lukjanov
ack

On Mon, Jul 21, 2014 at 10:38 PM, Boris Pavlovic bpavlo...@mirantis.com wrote:
 Hi,

 I would like to propose my candidacy for Rally PTL.

 I started this project to make benchmarking of OpenStack as simple as
 possible. This means not only load generation, but also an
 OpenStack-specific benchmark framework, data analysis, and integration with
 gates. All these things should make it simple for developers and operators
 to benchmark (perf, scale, stress test) OpenStack, share experiments &
 results, and have a fast way to find what produces a bottleneck, or just to
 ensure that OpenStack works well under the load they are expecting.

 I am the current unofficial PTL, and my responsibilities include things
 like:
 1) Adapting the Rally architecture to cover everybody's use cases
 2) Building & managing the work of the community
 3) Writing a lot of code
 4) Working on docs & wiki
 5) Helping newbies join the Rally team

 As PTL I would like to continue this work and finish my initial goals:
 1) Ensure that everybody's use cases are fully covered
 2) Ensure there is no monopoly in the project
 3) Run Rally in the gates of all OpenStack projects (currently we have check
 jobs in Keystone, Cinder, Glance & Neutron)
 4) Continue making the project more mature; this covers topics like
 increasing unit and functional test coverage and making Rally absolutely
 safe to run against any production cloud.


 Best regards,
 Boris Pavlovic





-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



Re: [openstack-dev] [Infra][Neutron] Request voting for Tail-f CI account

2014-07-22 Thread Luke Gorrie
On 22 July 2014 11:06, Luke Gorrie l...@tail-f.com wrote:

 This must have been bad for you guys since you were stuck waiting on us
 and couldn't fix the problem on your side. I was also contacted by email,
 as the previous contact person for that driver, but the message simply
 asked me to confirm my colleague's email address and did not tell me that
 there was a problem that we had to resolve.


(I checked and that is not true: actually it did tell me that there was a
problem, and I just didn't get that it was urgent. This narrative is a
little clouded with emotion at this point I must admit :-)).


Re: [openstack-dev] [Rally] PTL Candidacy

2014-07-22 Thread Sergey Lukjanov
Folks, please don't +1 it. If we have >= 2 candidates, we'll have
CIVS elections.

On Tue, Jul 22, 2014 at 2:09 PM, Hugh Saunders h...@wherenow.org wrote:
 +1. Boris has the enthusiasm and time to take Rally forward. He is
 especially good at encouraging people to get involved.

 "Ensure that everybody's use cases are fully covered" - this must be
 balanced against the need for focus and clear scope.

 --
 Hugh Saunders


 On 21 July 2014 19:38, Boris Pavlovic bpavlo...@mirantis.com wrote:

 Hi,

 I would like to propose my candidacy for Rally PTL.

  I started this project to make benchmarking of OpenStack as simple as
  possible. This means not only load generation, but also an
  OpenStack-specific benchmark framework, data analysis, and integration
  with gates. All these things should make it simple for developers and
  operators to benchmark (perf, scale, stress test) OpenStack, share
  experiments & results, and have a fast way to find what produces a
  bottleneck, or just to ensure that OpenStack works well under the load
  they are expecting.

  I am the current unofficial PTL, and my responsibilities include things
  like:
  1) Adapting the Rally architecture to cover everybody's use cases
  2) Building & managing the work of the community
  3) Writing a lot of code
  4) Working on docs & wiki
  5) Helping newbies join the Rally team

  As PTL I would like to continue this work and finish my initial goals:
  1) Ensure that everybody's use cases are fully covered
  2) Ensure there is no monopoly in the project
  3) Run Rally in the gates of all OpenStack projects (currently we have
  check jobs in Keystone, Cinder, Glance & Neutron)
  4) Continue making the project more mature; this covers topics like
  increasing unit and functional test coverage and making Rally absolutely
  safe to run against any production cloud.


 Best regards,
 Boris Pavlovic








-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



[openstack-dev] [Nova] Libvirt not migrating cdrom

2014-07-22 Thread Dave Walker
Hi,

Kicking open an old thread[0] about libvirt not migrating cdrom
devices (config-drive), with the associated LP bug[1].

It seems that the direction was to consider switching to vfat, as
libvirt supports this.  It isn't clear to me if the cdrom limitation
is specific to libvirt, nor whether vfat could be made to work on Windows
(the thread seemed to imply there was a limitation).
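
For reference, the vfat alternative is essentially a one-line switch in
nova.conf (a sketch; config_drive_format is the relevant nova option):

    [DEFAULT]
    # Default is iso9660, which attaches the config drive as a cdrom
    # device; vfat exposes it as an ordinary, migratable block device.
    config_drive_format = vfat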

I wanted to check if the reasoning for libvirt not allowing cdrom
migration had been considered?  Is it that libvirt blocks it, as it
'could' be a physical cdrom - rather than an iso?

It feels to me that pushing the fix down the stack into libvirt seems
like the correct solution?

[0] http://lists.openstack.org/pipermail/openstack-dev/2014-February/027394.html
[1] https://bugs.launchpad.net/nova/+bug/1246201

--
Kind Regards,
Dave Walker



Re: [openstack-dev] [oslo] oslo.serialization and oslo.concurrency graduation call for help

2014-07-22 Thread Yuriy Taraday
Hello, Ben.

On Mon, Jul 21, 2014 at 7:23 PM, Ben Nemec openst...@nemebean.com wrote:

 Hi all,

 The oslo.serialization and oslo.concurrency graduation specs are both
 approved, but unfortunately I haven't made as much progress on them as I
 would like.  The serialization repo has been created and has enough acks
 to continue the process, and concurrency still needs to be started.

 Also unfortunately, I am unlikely to make progress on either over the
 next two weeks due to the tripleo meetup and vacation.  As discussed in
 the Oslo meeting last week
 (
 http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-07-18-16.00.log.html
 )
 we would like to continue work on them during that time, so Doug asked
 me to look for volunteers to pick up the work and run with it.

 The current status and next steps for oslo.serialization can be found in
 the bp:
 https://blueprints.launchpad.net/oslo/+spec/graduate-oslo-serialization

 As mentioned, oslo.concurrency isn't started and has a few more pending
 tasks, which are enumerated in the spec:

 http://git.openstack.org/cgit/openstack/oslo-specs/plain/specs/juno/graduate-oslo-concurrency.rst

 Any help would be appreciated.  I'm happy to pick this back up in a
 couple of weeks, but if someone could shepherd it along in the meantime
 that would be great!


I would be happy to work on graduating oslo.concurrency as well as
improving it after that. I like fiddling with OS's, threads and races :)
I can also help to finish work on oslo.serialization (it looks like some
steps are already finished there).

What would be needed to start working on that? I haven't been following
development of processes within Oslo. So I would need someone to answer
questions as they arise.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - certificates data persistency

2014-07-22 Thread Samuel Bercovici
Stephen,

This will increase the complexity of the code, since it will add managing the
cache lifecycle in tandem with the Barbican back end, and containers may be
shared by multiple listeners.
At this stage, I think it serves us all to keep the code as small and simple
as possible.

Let’s judge whether presenting this information on the fly (e.g. in the Web UI)
becomes a performance issue, and if it does, we can fix it then.

-Sam.


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Tuesday, July 22, 2014 3:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - certificates 
data persistency

Evgeny--

The only reason I see for storing certificate information in Neutron (and not 
private key information-- just the certificate) is to aid in presenting UI 
information to the user. Especially GUI users don't care about a certificate's 
UUID, they care about which hostnames it's valid for. Yes, this can be loaded 
on the fly whenever public certificate information is accessed, but the 
perception was that it would be a significant performance increase to cache it.

Stephen

On Sun, Jul 20, 2014 at 4:32 AM, Evgeny Fedoruk evge...@radware.com wrote:
Hi folks,

In a current version of TLS capabilities RST certificate SubjectCommonName and 
SubjectAltName information is cached in a database.
This may be not necessary and here is why:


1.   TLS containers are immutable, meaning once a container was associated 
to a listener and was validated, it’s not necessary to validate the container 
anymore.
This is relevant to both the default container and the containers used for SNI.

2.   The LBaaS front-end API can check whether TLS container ids were changed for a
listener as part of an update operation. Validation of containers will be done 
for
new containers only. This is stated in the “Performance Impact” section of the
RST, except for the last statement, which proposes persistency for SCN and SAN.

3.   Any interaction with Barbican API for getting containers data will be 
performed via a common module API only. This module’s API is mentioned in
“SNI certificates list management” section of the RST.

4.   In case when driver really needs to extract certificate information 
prior to the back-end system provisioning, it will do it via the common module 
API.

5.   Back-end provisioning system may cache any certificate data, except 
private key, in case of a specific need of the vendor.

IMO, there is no real need to store certificate data in the Neutron database
and manage its life cycle.
Does anyone see a reason why caching certificates’ data in the Neutron database
is critical?
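
For reference, extracting SubjectCommonName and SubjectAltName on the fly from
a PEM certificate (e.g. one fetched from a Barbican container) is only a few
lines with pyOpenSSL; a hedged sketch, not the proposed common module API:

    from OpenSSL import crypto

    def get_host_names(cert_pem):
        cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)
        common_name = cert.get_subject().commonName
        alt_names = []
        for i in range(cert.get_extension_count()):
            ext = cert.get_extension(i)
            if ext.get_short_name() == b'subjectAltName':
                # str(ext) looks like 'DNS:example.com, DNS:www.example.com'
                alt_names = [n.split(':', 1)[-1]
                             for n in str(ext).split(', ')]
        return common_name, alt_names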

Thank you,
Evg





--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


Re: [openstack-dev] [oslo] oslo.serialization and oslo.concurrency graduation call for help

2014-07-22 Thread Davanum Srinivas
Yuriy,
Hop onto #openstack-oslo, that's where we hang out.

Ben,
I can help as well.

thanks,
-- dims

On Tue, Jul 22, 2014 at 6:38 AM, Yuriy Taraday yorik@gmail.com wrote:
 Hello, Ben.

 On Mon, Jul 21, 2014 at 7:23 PM, Ben Nemec openst...@nemebean.com wrote:

 Hi all,

 The oslo.serialization and oslo.concurrency graduation specs are both
 approved, but unfortunately I haven't made as much progress on them as I
 would like.  The serialization repo has been created and has enough acks
 to continue the process, and concurrency still needs to be started.

 Also unfortunately, I am unlikely to make progress on either over the
 next two weeks due to the tripleo meetup and vacation.  As discussed in
 the Oslo meeting last week

 (http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-07-18-16.00.log.html)
 we would like to continue work on them during that time, so Doug asked
 me to look for volunteers to pick up the work and run with it.

 The current status and next steps for oslo.serialization can be found in
 the bp:
 https://blueprints.launchpad.net/oslo/+spec/graduate-oslo-serialization

 As mentioned, oslo.concurrency isn't started and has a few more pending
 tasks, which are enumerated in the spec:

 http://git.openstack.org/cgit/openstack/oslo-specs/plain/specs/juno/graduate-oslo-concurrency.rst

 Any help would be appreciated.  I'm happy to pick this back up in a
 couple of weeks, but if someone could shepherd it along in the meantime
 that would be great!


 I would be happy to work on graduating oslo.concurrency as well as improving
 it after that. I like fiddling with OS's, threads and races :)
 I can also help to finish work on oslo.serialization (it looks like some
 steps are already finished there).

 What would be needed to start working on that? I haven't been following
 development of processes within Oslo. So I would need someone to answer
 questions as they arise.

 --

 Kind regards, Yuriy.





-- 
Davanum Srinivas :: http://davanum.wordpress.com



Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-07-22 Thread Ihar Hrachyshka

FYI: I've moved the spec to oslo space since the switch is not really
limited to neutron, and most of coding is to be done in oslo.db
(though not much anyway).

New spec: https://review.openstack.org/#/c/108355/

On 09/07/14 13:17, Ihar Hrachyshka wrote:
 Hi all,
 
 Multiple projects are suffering from db lock timeouts due to
 deadlocks deep in mysqldb library that we use to interact with
 mysql servers. In essence, the problem is due to missing eventlet
 support in mysqldb module, meaning when a db lock is encountered,
 the library does not yield to the next green thread, allowing other
 threads to eventually unlock the grabbed lock, and instead it just
 blocks the main thread, that eventually raises timeout exception
 (OperationalError).
 
  The failed operation is not retried, leaving the failing request
  unserved. In Nova, there is a special retry mechanism for deadlocks,
  though I think it's more a hack than a proper fix.
 
 Neutron is one of the projects that suffer from those timeout
 errors a lot. Partly it's due to lack of discipline in how we do
 nested calls in l3_db and ml2_plugin code, but that's not something
 to change in foreseeable future, so we need to find another
 solution that is applicable for Juno. Ideally, the solution should
 be applicable for Icehouse too to allow distributors to resolve
 existing deadlocks without waiting for Juno.
 
 We've had several discussions and attempts to introduce a solution
 to the problem. Thanks to oslo.db guys, we now have more or less
 clear view on the cause of the failures and how to easily fix them.
 The solution is to switch mysqldb to something eventlet aware. The
 best candidate is probably MySQL Connector module that is an
 official MySQL client for Python and that shows some (preliminary)
 good results in terms of performance.
 
 I've posted a Neutron spec for the switch to the new client in Juno
 at [1]. Ideally, switch is just a matter of several fixes to
 oslo.db that would enable full support for the new driver already
 supported by SQLAlchemy, plus 'connection' string modified in
 service configuration files, plus documentation updates to refer to
 the new official way to configure services for MySQL. The database
 code won't, ideally, require any major changes, though some
 adaptation for the new client library may be needed. That said,
 Neutron does not seem to require any changes, though it was
 revealed that there are some alembic migration rules in Keystone or
 Glance that need (trivial) modifications.
 
  You can see how trivially the switch can be achieved for a service
  in the example for Neutron [2].
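
  To illustrate, the service-side change is essentially one line in the
  [database] section of each service's configuration (a sketch using the
  SQLAlchemy dialect name for MySQL Connector/Python; host and credentials
  are illustrative):

   [database]
   # Old, MySQLdb-based driver:
   # connection = mysql://neutron:secret@dbhost/neutron
   # New, eventlet-friendly MySQL Connector/Python driver:
   connection = mysql+mysqlconnector://neutron:secret@dbhost/neutron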
 
 While this is a Neutron specific proposal, there is an obvious wish
 to switch to the new library globally throughout all the projects,
 to reduce devops burden, among other things. My vision is that,
 ideally, we switch all projects to the new library in Juno, though
 we still may leave several projects for K in case any issues arise,
 similar to the way projects switched to oslo.messaging during two
 cycles instead of one. Though looking at how easy Neutron can be
 switched to the new library, I wouldn't expect any issues that
 would postpone the switch till K.
 
 It was mentioned in comments to the spec proposal that there were
 some discussions at the latest summit around possible switch in
 context of Nova that revealed some concerns, though they do not
 seem to be documented anywhere. So if you know anything about it,
 please comment.
 
 So, we'd like to hear from other projects what's your take on that 
 move, whether you see any issues or have concerns about it.
 
 Thanks for your comments, /Ihar
 
  [1]: https://review.openstack.org/#/c/104905/
  [2]: https://review.openstack.org/#/c/105209/
 
 



[openstack-dev] [TripleO] Strategy for recovering crashed nodes in the Overcloud?

2014-07-22 Thread Howley, Tom
Hi,

I'm running an HA overcloud configuration and, as far as I'm aware, there is
currently no mechanism in place for restarting failed nodes in the cluster.
Originally, I had been wondering if we would use a corosync/pacemaker cluster
across the control plane with STONITH resources configured for each node (a
STONITH plugin for Ironic could be written). This might be fine if a
corosync/pacemaker stack is already being used for HA of some components, but
it seems overkill otherwise. The undercloud Heat could be in a good position
to restart the overcloud nodes -- is that the plan, or are there other options
being considered?

Thanks,
Tom



[openstack-dev] [oslo] graduating oslo.middleware

2014-07-22 Thread gordon chung
hi,
following the oslo graduation protocol, could the oslo team review the 
oslo.middleware library[1] i've created and see if there are any issues.
[1] https://github.com/chungg/oslo.middleware
cheers,
gord




Re: [openstack-dev] [Neutron] [Spec freeze exception] Cisco Nexus ML2 driver feature work

2014-07-22 Thread Kyle Mestery
On Mon, Jul 21, 2014 at 1:15 PM, Henry Gessau ges...@cisco.com wrote:
 I would like to request Juno spec freeze exceptions for the following, all of
 which add features to the ML2 driver for the Cisco Nexus family of switches.


 https://review.openstack.org/95834  - Provider Segment Support
 https://review.openstack.org/95910  - Layer 3 Service plugin

  The above two features are needed for the Nexus ML2 driver to reach feature
  parity with the legacy Cisco Nexus plugin, which is going to be deprecated
  because it depends on the OVS plugin, which is itself being deprecated.

I'm ok with these being approved as an exception, given the
deprecation of the monolithic Cisco plugin and the fact they allow the
ML2 driver to reach parity with the plugin being deprecated.

 https://review.openstack.org/98177  - VxLAN Gateway Support

 The dependencies for this one are now approved. It could be approved as a
 low-priority item with the caveat of best effort for review of code patches.

Given this is new work, and how loaded we are, I'm less inclined to
approve this one. I know the specs it's dependent on are approved, but
it's unclear if the code for those will land, and that would be
required before this one could land. I'm leaning towards moving this
one to Kilo.

Thanks,
Kyle




Re: [openstack-dev] [Neutron] [Spec freeze exception] ml2-use-dpdkvhost

2014-07-22 Thread Kyle Mestery
On Mon, Jul 21, 2014 at 10:04 AM, Mooney, Sean K
sean.k.moo...@intel.com wrote:
 Hi

 I would like to propose
 https://review.openstack.org/#/c/107797/1/specs/juno/ml2-use-dpdkvhost.rst
 for a spec freeze exception.



 https://blueprints.launchpad.net/neutron/+spec/ml2-use-dpdkvhost



  This blueprint adds support for the Intel(R) DPDK userspace vhost port
  binding to the Open vSwitch and OpenDaylight ML2 mechanism drivers.

In general, I'd be ok with approving an exception for this BP.
However, please see below.



 This blueprint enables nova changes tracked by the following spec:

 https://review.openstack.org/#/c/95805/1/specs/juno/libvirt-ovs-use-usvhost.rst

This BP appears to also require an exception from the Nova team. I
think these both require exceptions for this work to have a shot at
landing in Juno. Given this, I'm actually leaning to move this to
Kilo. But if you can get a Nova freeze exception, I'd consider the
same for the Neutron BP.

Thanks,
Kyle



 regards

 sean







Re: [openstack-dev] [neutron] Spec Approval Deadline (SAD) has passed, next steps

2014-07-22 Thread Kyle Mestery
On Mon, Jul 21, 2014 at 11:32 AM, YAMAMOTO Takashi
yamam...@valinux.co.jp wrote:
 Hi all!

 A quick note that SAD has passed. We briskly approved a pile of BPs

 it's sad. ;-(

 over the weekend, most of them vendor related as low priority, best
 effort attempts for Juno-3. At this point, we're hugely oversubscribed
 for Juno-3, so it's unlikely we'll make exceptions for things into
 Juno-3 now.

 my specs are ok'ed by Kyle but failed to get another core reviewer.
 https://review.openstack.org/#/c/98702/
 https://review.openstack.org/#/c/103737/

  does it indicate a man-power problem among core reviewers?
  if so, can you consider increasing their number?
  postponing vendor stuff (like mine) for that reason would make
  the situation worse, as many developers/reviewers are paid for
  vendor stuff.

We've approved both of these as exceptions for Juno-3 as low priority.

Thanks!
Kyle

 YAMAMOTO Takashi


 I don't plan to open a Kilo directory in the specs repository quite
 yet. I'd like to first let things settle down a bit with Juno-3 before
 going there. Once I do, specs which were not approved should be moved
 to that directory where they can be reviewed with the idea they are
 targeting Kilo instead of Juno.

 Also, just a note that we have a handful of bugs and BPs we're trying
 to land in Juno-3 yet today, so core reviewers, please focus on those
 today.

 Thanks!
 Kyle

 [1] https://launchpad.net/neutron/+milestone/juno-2

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Spec Freeze Exception] ml2-ovs-portsecurity

2014-07-22 Thread Kyle Mestery
On Mon, Jul 21, 2014 at 10:17 PM, Isaku Yamahata
isaku.yamah...@gmail.com wrote:

 I'd like to request a Juno spec freeze exception for the ML2 OVS portsecurity
 extension.

 - https://review.openstack.org/#/c/99873/
   ML2 OVS: portsecurity extension support

 - https://blueprints.launchpad.net/neutron/+spec/ml2-ovs-portsecurity
   Add portsecurity support to ML2 OVS mechanism driver

 The spec/blueprint adds the portsecurity extension to the ML2 plugin and
 implements it in the OVS mechanism driver with the iptables_firewall driver.
 The spec has gotten five +1s over many respins.
 This feature will be a foundation for running network services within VMs.

 There is another spec whose goal is the same.
 - https://review.openstack.org/#/c/106222/
   Add Port Security Implementation in ML2 Plugin
 The author, Shweta, and I have agreed to consolidate those specs/blueprints
 and unite for the same goal.

Given that this is important for the NFV use cases, I'm leaning
towards approving this one as low priority. I'll wait to see if
another core reviewer can also +2 this one today and then we can merge
it and set the priority appropriately.

Thanks,
Kyle

 Thanks,
 --
 Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Use Launcher/ProcessLauncher in glance

2014-07-22 Thread Tailor, Rajesh
Hi all,

Please provide your valuable input on the proposal mentioned below.

Thanks,
Rajesh Tailor

From: Tailor, Rajesh
Sent: Thursday, July 17, 2014 12:38 PM
To: 'openstack-dev@lists.openstack.org'
Subject: [openstack-dev] [glance] Use Launcher/ProcessLauncher in glance

Hi all,

Why is glance not using Launcher/ProcessLauncher (from oslo-incubator) for its 
wsgi service, as is done in other OpenStack projects such as nova, cinder, and 
keystone?

As of now, when a SIGHUP signal is sent to the glance-api parent process, it 
calls the callback handler and then throws OSError.
The OSError is thrown because the os.wait system call was interrupted by the 
SIGHUP callback handler.
As a result, the parent process closes the server socket.
All the child processes then terminate without completing in-flight API 
requests, because the server socket is already closed, and the service does 
not restart.

Ideally, when SIGHUP is received by the glance-api process, it should finish 
processing all pending requests and then restart the glance-api service.

If the oslo-incubator Launcher/ProcessLauncher were used in glance, it would 
handle the service restart on a SIGHUP signal properly.

Can anyone let me know what the positive/negative impacts of using 
Launcher/ProcessLauncher (oslo-incubator) in glance would be?
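
For illustration, here is a minimal sketch of the nova/cinder-style pattern 
with the oslo-incubator service module; the module path and the APIService 
wrapper are assumptions for the example, not glance's actual code:

    from glance.openstack.common import service  # assumes the module is synced in

    class APIService(service.Service):
        # Hypothetical thin wrapper: a real one would start the eventlet
        # wsgi server in start() and shut it down in stop().
        pass

    launcher = service.ProcessLauncher()
    launcher.launch_service(APIService(), workers=4)
    # wait() installs the signal handlers: on SIGHUP the parent keeps the
    # listen socket, lets the children drain in-flight requests, and then
    # respawns them, instead of dying on the interrupted os.wait().
    launcher.wait()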

Thank You,
Rajesh Tailor


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] graduating oslo.middleware

2014-07-22 Thread Julien Danjou
On Tue, Jul 22 2014, gordon chung wrote:

 hi,
 following the oslo graduation protocol, could the oslo team review the
 oslo.middleware library[1] i've created and see if there are any issues.
 [1] https://github.com/chungg/oslo.middleware

LGTM. Don't forget to gate it on py33. :)

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][rally] Application for a new OpenStack Program: Performance and Scalability

2014-07-22 Thread Tim Bell

As I mentioned on the review, the title could cause confusion.

Rally tests performance and scalability, but the title could lead people to 
think that if you install Rally, you will get Performance and Scale from your 
OpenStack instance. Adding Benchmark or Testing into the description would 
clarify this.

Tim

From: Baohua Yang [mailto:yangbao...@gmail.com]
Sent: 22 July 2014 10:59
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc][rally] Application for a new OpenStack 
Program: Performance and Scalability

It's interesting and practical!
There have been some early efforts to make OpenStack more robust and 
efficient; however, we are still lacking a framework like this.

Just a little question.
On the wiki, it says to consider the scalability problem.
How can we measure performance at large scale? By real deployment, or just by 
simulating the input?
From the descriptions, the method seems to be to test every single component.

Also, I suggest adding the necessary performance checks to the 
test/integration jobs, too.
Thanks!


On Tue, Jul 22, 2014 at 5:53 AM, Boris Pavlovic bo...@pavlovic.me wrote:
Hi Stackers and TC,

The Rally contributor team would like to propose a new OpenStack program
with a mission to provide scalability and performance benchmarking, and
code profiling tools for OpenStack components.

We feel we've achieved a critical mass in the Rally project, with an
active, diverse contributor team. The Rally project will be the initial
project in a new proposed Performance and Scalability program.

Below, the details on our proposed new program.

Thanks for your consideration,
Boris



[1] https://review.openstack.org/#/c/108502/


Official Name
=

Performance and Scalability

Codename


Rally

Scope
=

Scalability benchmarking, performance analysis, and profiling of
OpenStack components and workloads

Mission
===

To increase the scalability and performance of OpenStack clouds by:

* defining standard benchmarks
* sharing performance data between operators and developers
* providing transparency of code paths through profiling tools

Maturity


* Meeting logs: http://eavesdrop.openstack.org/meetings/rally/2014/
* IRC channel: #openstack-rally
* Rally performance jobs are in the (Cinder, Glance, Keystone & Neutron)
check pipelines.
* > 950 commits over the last 10 months
* Large, diverse contributor community
 * http://stackalytics.com/?release=juno&metric=commits&project_type=All&module=rally
 * http://stackalytics.com/report/contribution/rally/180

* Unofficial project lead: Boris Pavlovic
 * Official election in progress.

Deliverables


Critical deliverables in the Juno cycle are:

* extending Rally Benchmark framework to cover all use cases that are
required by all OpenStack projects
* integrating OSprofiler in all core projects
* increasing functional & unit testing coverage of Rally.

Discussion
==

One of the major goals of Rally is to make it simple to share results of
standardized benchmarks and experiments between operators and
developers. When an operator needs to verify certain performance
indicators meet some service level agreement, he will be able to run
benchmarks (from Rally) and share with the developer community the
results along with his OpenStack configuration. These benchmark results
will assist developers in diagnosing particular performance and
scalability problems experienced with the operator's configuration.

Another interesting area is Rally & the OpenStack CI process. Currently,
working on performance issues upstream tends to be a more social than
technical process. We can use Rally in the upstream gates to identify
performance regressions and measure improvement in scalability over
time. The use of Rally in the upstream gates will allow a more rigorous,
scientific approach to performance analysis. In the case of an
integrated OSprofiler, it will be possible to get detailed information
about API call flows (e.g. duration of API calls in different services).
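
To make the profiling goal concrete, instrumenting a code path with OSprofiler 
looks roughly like the following (a hedged sketch; the traced function is made 
up for the example, and the decorator signature should be treated as 
approximate):

    from osprofiler import profiler

    @profiler.trace("list_items")  # emits start/stop trace points with timings
    def list_items(context):
        # Each traced call can later be reassembled into a per-request tree
        # showing where time was spent across services.
        ...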



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Best wishes!
Baohua
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][rally] Application for a new OpenStack Program: Performance and Scalability

2014-07-22 Thread Sean Dague
Honestly, I'm really not sure I see this as a different program; it is
really something that should be folded into the QA program. I feel like
a top-level effort like this is going to lead to a lot of duplication in
the data analysis that's currently going on, as well as in functionality
for better load-driver UX.

-Sean

On 07/21/2014 05:53 PM, Boris Pavlovic wrote:
 Hi Stackers and TC,
 
 The Rally contributor team would like to propose a new OpenStack program
 with a mission to provide scalability and performance benchmarking, and
 code profiling tools for OpenStack components.
 
 We feel we've achieved a critical mass in the Rally project, with an
 active, diverse contributor team. The Rally project will be the initial
 project in a new proposed Performance and Scalability program.
 
 Below, the details on our proposed new program.
 
 Thanks for your consideration,
 Boris
 
 
 
 [1] https://review.openstack.org/#/c/108502/
 
 
 [snip]


-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - certificates data persistency

2014-07-22 Thread Brandon Logan
I agree with Sam.  We're under a strict timeline here, and the simpler
the code, the faster it will be implemented and reviewed.  Is there any
strong reason why this caching can't wait until K, if it is decided it
is really needed?
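
For what it's worth, deriving the names on the fly is only a few lines of 
code. A hedged sketch with pyOpenSSL (the function name and return shape are 
illustrative, not the proposed common module's API):

    from OpenSSL import crypto

    def get_cert_host_names(pem_data):
        cert = crypto.load_certificate(crypto.FILETYPE_PEM, pem_data)
        names = {'cn': cert.get_subject().CN, 'dns_names': []}
        for i in range(cert.get_extension_count()):
            ext = cert.get_extension(i)
            if ext.get_short_name() == 'subjectAltName':
                # str(ext) renders like "DNS:example.com, DNS:www.example.com"
                names['dns_names'] = [p.strip()[4:]
                                      for p in str(ext).split(',')
                                      if p.strip().startswith('DNS:')]
        return names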

Thanks,
Brandon

On Tue, 2014-07-22 at 11:01 +, Samuel Bercovici wrote:
 Stephen,
 
  
 
 This will increase the complexity of the code, since it will add cache
 lifecycle management in tandem with the Barbican back end, and because
 containers may be shared by multiple listeners.
 
 At this stage, I think it serves us all to keep the code as small and
 simple as possible.
 
  
 
 Let’s judge if presenting this information on the fly (ex: in the Web
 UI) becomes a performance issue and if it does, we can fix it then.
 
  
 
 -Sam.
 
  
 
  
 
 From: Stephen Balukoff [mailto:sbaluk...@bluebox.net] 
 Sent: Tuesday, July 22, 2014 3:43 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability -
 certificates data persistency
 
  
 
 Evgeny--
 
  
 
 
 The only reason I see for storing certificate information in Neutron
 (and not private key information-- just the certificate) is to aid in
 presenting UI information to the user. GUI users especially don't care
 about a certificate's UUID; they care about which hostnames it's valid
 for. Yes, this can be loaded on the fly whenever public certificate
 information is accessed, but the perception was that it would be a
 significant performance increase to cache it.
 
 
  
 
 
 Stephen
 
 
  
 
 On Sun, Jul 20, 2014 at 4:32 AM, Evgeny Fedoruk evge...@radware.com
 wrote:
 
 Hi folks,
 
  
 
 In the current version of the TLS capabilities RST, certificate
 SubjectCommonName and SubjectAltName information is cached in the
 database.
 
 This may be not necessary and here is why:
 
  
 
 1.   TLS containers are immutable, meaning once a container has been
 associated with a listener and validated, it’s not necessary to
 validate the container anymore.
 This is relevant for both, default container and containers used for
 SNI.
 
 2.   LBaaS front-end API can check if TLS containers ids were
 changed for a listener as part of an update operation. Validation of
 containers will be done for
 new containers only. This is stated in the “Performance Impact” section of
 the RST, except for the last statement, which proposes persistency for
 SCN and SAN.
 
 3.   Any interaction with Barbican API for getting containers data
 will be performed via a common module API only. This module’s API is
 mentioned in
 “SNI certificates list management” section of the RST.
 
 4.   In cases where a driver really needs to extract certificate
 information prior to back-end system provisioning, it will do so
 via the common module API.
 
 5.   The back-end provisioning system may cache any certificate data,
 except the private key, in case of a specific vendor need.
 
  
 
 IMO, there is no real need to store certificate data in the Neutron
 database and manage its life cycle.
 
 Does anyone see a reason why caching certificates’ data in the Neutron
 database is critical?
 
  
 
 Thank you,
 
 Evg
 
  
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  
 
 
 -- 
 Stephen Balukoff 
 Blue Box Group, LLC 
 (800)613-4305 x807 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] The gate: a failure analysis

2014-07-22 Thread Chris Friesen

On 07/21/2014 12:03 PM, Clint Byrum wrote:

Thanks Matthew for the analysis.

I think you missed something though.

Right now the frustration is that unrelated intermittent bugs stop your
presumably good change from getting in.

Without gating, the result would be that even more bugs, many of them not
intermittent at all, would get in. Right now, the one random developer
who has to hunt down the rechecks and do them is inconvenienced. But
without a gate, _every single_ developer will be inconvenienced until
the fix is merged.


The problem I see with this is that it's fundamentally not a fair system.

If someone is trying to fix a bug in the libvirt driver, it's wrong to 
expect them to try to debug issues with neutron being unstable.  They 
likely don't have the skillset to do it, and we shouldn't expect them to 
do so.  It's a waste of developer time.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Boot from ISO feature status

2014-07-22 Thread Maksym Lobur
Hi Folks,

Could someone please share their experience with the Nova Boot from ISO feature
[1]?

We tested it on Havana + KVM, and uploaded the image with DISK_FORMAT set to
'iso'. The Windows deployment does not happen. The VM has two volumes: one is
config-2 (CDFS, ~400KB; we don't know what that is), and the second one is
our flavor volume (80GB). The Windows ISO contents (about 500MB) for some
reason end up inside the flavor volume instead of on a separate CD drive.

So far I have found only two patches for nova: vmware [2] and Xen [3].
Does it work with KVM? Maybe some specific nova configuration is required for
KVM.
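
For reference, an ISO upload of this kind is typically done along these lines 
(the image name and file are illustrative; flags per the Havana-era glance CLI):

    glance image-create --name windows-iso --disk-format iso \
                        --container-format bare --file win2012.iso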

[1] https://wiki.openstack.org/wiki/BootFromISO
[2] https://review.openstack.org/#/c/63084/
[3] https://review.openstack.org/#/c/38650/


Thanks beforehand!

Max Lobur,
OpenStack Developer, Mirantis, Inc.

Mobile: +38 (093) 665 14 28
Skype: max_lobur

38, Lenina ave. Kharkov, Ukraine
www.mirantis.com
www.mirantis.ru
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Spec freeze exception] add boot_mode filter for nova.virt.ironic driver

2014-07-22 Thread Devananda van der Veen
Hi Faizan,

The Nova proposal is missing a dependency reference for UEFI support in
Ironic. That spec has not been approved, and, since this feature will
require changes in the Nova driver and scheduler, I am blocking it now so
that we can stay focused on simply landing the Ironic driver in Nova.

Thanks for understanding,
Devananda



On Tue, Jul 22, 2014 at 1:27 AM, Michael Still mi...@stillhq.com wrote:

 I think this sounds risky to me... I'd rather we landed _something_ in
 terms of an ironic driver in juno, rather than adding features to what
 we have now. In fact, I thought Devananda had frozen the ironic nova
 driver to make this easier?

 Michael

 On Tue, Jul 22, 2014 at 4:45 PM, Faizan Barmawer
 faizan.barma...@gmail.com wrote:
  Hello everyone,
 
  I would like to request a juno spec freeze exception for "Add ironic boot
  mode filters".
 
  https://review.openstack.org/#/c/108582/
 
  This change is required to support UEFI boot mode in ironic drivers. The
  ironic spec to add UEFI support in ironic is still under review
  https://review.openstack.org/#/c/99850/, and hence at present it cannot
 be
  added to nova virt ironic driver code in the ironic repository.
 
  This change is dependent on
  https://blueprints.launchpad.net/nova/+spec/add-ironic-driver
  and will be contained only in ironic virt driver space.
 
  Regards,
  Faizan Barmawer
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Soft code freeze is planned for July, 24th

2014-07-22 Thread Mike Scherbakov
Hi Fuelers,
Looks like we are more or less good to call for a Soft Code Freeze [1] on
Thursday.

Then hard code freeze [2] will follow. It is planned to have no more than 2
weeks between SCF and HCF [3]. When hard code freeze is called, we create
stable/5.1 branch at the same time to accept only critical bug fixes, and
release will be produced out of this branch. At the same time master will
be re-opened for accepting new features and all types of bug fixes.

[1] https://wiki.openstack.org/wiki/Fuel/Soft_Code_Freeze
[2] https://wiki.openstack.org/wiki/Fuel/Hard_Code_Freeze
[3] https://wiki.openstack.org/wiki/Fuel/5.1_Release_Schedule

Let me know if anything blocks us from doing SCF on 24th.

Thanks,
-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Flavor Framework spec approval deadline exception

2014-07-22 Thread Eugene Nikanorov
Hi folks,

I'd like to request an exception for the Flavor Framework spec:
https://review.openstack.org/#/c/102723/

It already has a more or less complete server-side implementation:
https://review.openstack.org/#/c/105982/

CLI will be posted on review soon.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Flavor Framework spec approval deadline exception

2014-07-22 Thread Kyle Mestery
On Tue, Jul 22, 2014 at 10:10 AM, Eugene Nikanorov
enikano...@mirantis.com wrote:
 Hi folks,

 I'd like to request an exception for the Flavor Framework spec:
 https://review.openstack.org/#/c/102723/

 It already have more or less complete server-side implementation:
 https://review.openstack.org/#/c/105982/

 CLI will be posted on review soon.

We need the flavor framework to land for Juno, as LBaaS needs it. I'm
ok with an exception here. Can we work to close the gaps in the spec
review in the next few days? I see a few -1s on there still.

Thanks,
Kyle

 Thanks,
 Eugene.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Update on specs we needed approved

2014-07-22 Thread Avishay Balderman
We still need another core to approve L7
https://review.openstack.org/#/c/99709

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Tuesday, July 22, 2014 3:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Update on specs we needed approved

Yes, thanks guys! These are really important for features we want to get into 
Neutron LBaaS in Juno! :D

On Mon, Jul 21, 2014 at 2:42 PM, Brandon Logan
brandon.lo...@rackspace.com wrote:
In reference to these 3 specs:

TLS Termination - https://review.openstack.org/#/c/98640/
L7 Switching - https://review.openstack.org/#/c/99709/
Implementing TLS in reference Impl -
https://review.openstack.org/#/c/100931/

Kyle has +2'ed all three and once Mark Mcclain +2's them then one of
them will +A them.

Thanks again Kyle and Mark!


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance]code review needed for 'changing HTTP response code on errors'

2014-07-22 Thread Nikhil Komawar
Hey Kent,

Appreciate your effort and pro-activeness with your patches. For future review 
requests, please join us in the #openstack-glance channel on Freenode.

Thanks,
-Nikhil

From: Wang, Kent [kent.w...@intel.com]
Sent: Monday, July 21, 2014 1:20 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [glance]code review needed for 'changing HTTP response 
code on errors'

Hi I’m looking for some reviewers (especially core reviewers!) to review my 
patch that fixes this bug.

This is the bug description:

Glance v2: HTTP 404s are returned for unallowed methods
Requests for many resources in Glance v2 will return a 404 if the request is 
using an unsupported HTTP verb for that resource. For example, the /v2/images 
resource does exist but a 404 is returned when attempting a DELETE on that 
resource. Instead, this should return an HTTP 405 MethodNotAllowed response.
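
A minimal sketch of the behavior the fix asserts (the endpoint assumes a 
default glance-api on port 9292; a real call would also need an X-Auth-Token 
header):

    import requests

    resp = requests.delete('http://localhost:9292/v2/images')
    assert resp.status_code == 405  # MethodNotAllowed rather than 404
    # RFC 2616 also requires a 405 response to carry an Allow header
    # listing the verbs the resource does support.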


My fix for it can be found here:
https://review.openstack.org/#/c/103959/

Thanks!
Kent
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] The gate: a failure analysis

2014-07-22 Thread Jay Pipes

On 07/22/2014 10:48 AM, Chris Friesen wrote:

On 07/21/2014 12:03 PM, Clint Byrum wrote:

Thanks Matthew for the analysis.

I think you missed something though.

Right now the frustration is that unrelated intermittent bugs stop your
presumably good change from getting in.

Without gating, the result would be that even more bugs, many of them not
intermittent at all, would get in. Right now, the one random developer
who has to hunt down the rechecks and do them is inconvenienced. But
without a gate, _every single_ developer will be inconvenienced until
the fix is merged.


The problem I see with this is that it's fundamentally not a fair system.

If someone is trying to fix a bug in the libvirt driver, it's wrong to
expect them to try to debug issues with neutron being unstable.  They
likely don't have the skillset to do it, and we shouldn't expect them to
do so.  It's a waste of developer time.


Who is expecting the developer to debug issues with Neutron? It may be a 
waste of developer time to constantly recheck certain bugs (or no bug), 
but nobody is saying to the contributor of a libvirt fix, "Hey, this 
unrelated Neutron bug is causing a failure, so go fix it."


The point of the gate is specifically to provide the sort of rigidity 
that unfortunately manifests itself in discomfort from developers. 
Perhaps you don't have the history of when we had no strict gate, and it 
was a frequent source of frustration that code would sail through to 
master that would routinely break master and branches of other OpenStack 
projects. I, for one, don't want to revisit the bad old days. As much as 
a pain it is, the gate failures are a thorn in the side of folks 
precisely to push folks to fix the valid bugs that they highlight. What 
we need, like Sean said, is more folks fixing bugs and less folks 
working on features and vendor drivers.


Perhaps we, as a community, should make the bug triaging and fixing days 
a much more common thing? Maybe make Thursdays or Fridays dedicated bug 
days? How about monetary bug bounties being paid out by the OpenStack 
Foundation, with a payout scale based on the bug severity and 
importance? How about having dedicated bug-squashing teams that focus on 
a particular area of the code, that share their status reports at weekly 
meetings and on the ML?


best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][rally][qa] Application for a new OpenStack Program: Performance and Scalability

2014-07-22 Thread David Kranz

On 07/22/2014 10:44 AM, Sean Dague wrote:

Honestly, I'm really not sure I see this as a different program; it is
really something that should be folded into the QA program. I feel like
a top-level effort like this is going to lead to a lot of duplication in
the data analysis that's currently going on, as well as in functionality
for better load-driver UX.

-Sean

+1
It will also lead to pointless discussions/arguments about which 
activities are part of QA and which are part of

Performance and Scalability Testing.

QA Program mission:

 Develop, maintain, and initiate tools and plans to ensure the upstream 
stability and quality of OpenStack, and its release readiness at any 
point during the release cycle.


It is hard to see how $subj falls outside of that mission. Of course 
rally would continue to have its own repo, review team, etc. as do 
tempest and grenade.


  -David



On 07/21/2014 05:53 PM, Boris Pavlovic wrote:

Hi Stackers and TC,

The Rally contributor team would like to propose a new OpenStack program
with a mission to provide scalability and performance benchmarking, and
code profiling tools for OpenStack components.

We feel we've achieved a critical mass in the Rally project, with an
active, diverse contributor team. The Rally project will be the initial
project in a new proposed Performance and Scalability program.

Below, the details on our proposed new program.

Thanks for your consideration,
Boris



[1] https://review.openstack.org/#/c/108502/


[snip]


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][rally] Application for a new OpenStack Program: Performance and Scalability

2014-07-22 Thread Lingxian Kong
Thanks, Boris, really big +1! We at Huawei are using Rally as our
performance testing tool for the OpenStack APIs, and I hope Rally
will become even more mature with more and more talented folks within the
OpenStack community. I am really excited to be able to participate.

2014-07-22 5:53 GMT+08:00 Boris Pavlovic bo...@pavlovic.me:
 Hi Stackers and TC,

 The Rally contributor team would like to propose a new OpenStack program
 with a mission to provide scalability and performance benchmarking, and
 code profiling tools for OpenStack components.

 We feel we've achieved a critical mass in the Rally project, with an
 active, diverse contributor team. The Rally project will be the initial
 project in a new proposed Performance and Scalability program.

 Below, the details on our proposed new program.

 Thanks for your consideration,
 Boris



 [1] https://review.openstack.org/#/c/108502/


 [snip]




-- 
Regards!
---
Lingxian Kong

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][nova] resize

2014-07-22 Thread Lingxian Kong
Maybe you are using local storage as your VM system volume backend.
According to the 'resize' implementation, 'rsync' and 'scp' will be
executed during the resize process, and this will be the bottleneck.
2014-07-19 13:07 GMT+08:00 fdsafdsafd jaze...@163.com:
 Did someone test the concurrency of nova's resize? I found it has poor
 concurrency, and I do not know why. I found that most of the failed
 requests are RPC timeouts.
 The resize test I wrote for nova is boot-resize-confirm-delete.






 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Regards!
---
Lingxian Kong

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack] [Barbican] Cinder and Barbican

2014-07-22 Thread Duncan Thomas
No. There is a blueprint to do the integration, but no code has merged yet,
let alone documentation.

Duncan Thomas
On Jul 22, 2014 4:42 PM, Giuseppe Galeota giuseppegale...@gmail.com
wrote:

 Dear all,
 is Cinder capable today of using Barbican for encryption? If yes, can you
 link me to some useful docs?

 Thank you,
 Giuseppe

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][Scheduler] Promote select_destination as a REST API

2014-07-22 Thread Jay Pipes

On 07/21/2014 11:16 PM, Jay Lau wrote:

Hi Jay,

There are indeed some customers in China who want this feature: before
they do some operations, they want to check the action plan (such as
where a VM will be migrated or created), and they want to use an
interactive mode for some operations to make sure there are no errors.


This isn't something that normal tenants should have access to, IMO. The 
scheduler is not like a database optimizer that should give you a query 
plan for a SQL statement. The information the scheduler is acting on 
(compute node usage records, aggregate records, deployment 
configuration, etc) are absolutely NOT something that should be exposed 
to end-users.


I would certainly support a specification that intended to add detailed 
log message output from the scheduler that recorded how it made its 
decisions, so that an operator could evaluate the data and decision, but 
I'm not in favour of exposing this information via a tenant-facing API.


Best,
-jay


2014-07-22 10:23 GMT+08:00 Jay Pipes jaypi...@gmail.com:

On 07/21/2014 07:45 PM, Jay Lau wrote:

There is one requirement that some customers want: to get the possible
host list when creating/rebuilding/migrating/evacuating a VM, so as to
create a resource plan for those operations; but currently
select_destination is not a REST API. Is it possible to promote this
API to be a REST API?


Which customers want to get the possible host list?

/me imagines someone asking Amazon for a REST API that returned all
the possible servers that might be picked for placement... and what
answer Amazon might give to the request.

If by customer, you are referring to something like IBM Smart
Cloud Orchestrator, then I don't really see the point of supporting
something like this. Such a customer would only need to create a
resource plan for those operations if it was wholly supplanting
large pieces of OpenStack infrastructure, including parts of Nova
and much of Heat.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Thanks,

Jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Meeting time change

2014-07-22 Thread Kurt Griffiths
FYI, we chatted about this in #openstack-marconi today and decided to try
2100 UTC for tomorrow. If we would like to alternate at an earlier time
every other week, is 1900 UTC good, or shall we do something more like
1400 UTC?

On 7/21/14, 11:21 AM, Kurt Griffiths kurt.griffi...@rackspace.com
wrote:

I think Wednesday would be best. That way we can get an update on all the
bugs and blueprints before the weekly 1:1 project status meetings with
Thierry on Thursday. Mondays are often pretty busy with everyone having
meetings and catchup from the weekend.

If we do 2100 UTC, that is 9am NZT. Shall we alternate between 1900 and
2100 UTC on Wednesdays?

Also, when will we meet this week? Perhaps we should keep things the same
one more time while we get the new schedule finalized here on the ML.

On 7/17/14, 11:27 AM, Flavio Percoco fla...@redhat.com wrote:

On 07/16/2014 06:31 PM, Malini Kamalambal wrote:
 
 On 7/16/14 4:43 AM, Flavio Percoco fla...@redhat.com wrote:
 
 On 07/15/2014 06:20 PM, Kurt Griffiths wrote:
 Hi folks, we've been talking about this in IRC, but I wanted to bring it
 to the ML to get broader feedback and make sure everyone is aware. We'd
 like to change our meeting time to better accommodate folks that live
 around the globe. Proposals:

 Tuesdays, 1900 UTC
 Wednesdays, 2000 UTC
 Wednesdays, 2100 UTC

 I believe these time slots are free, based
 on: https://wiki.openstack.org/wiki/Meetings

 Please respond with ONE of the following:

 A. None of these times work for me
 B. An ordered list of the above times, by preference
 C. I am a robot

 I don't like the idea of switching days :/

 Since the reason we're using Wednesday is because we don't want the
 meeting to overlap with the TC and projects meeting, what if we change
 the day of both meeting times in order to keep them on the same day
(and
 perhaps also channel) but on different times?

 I think changing day and time will be more confusing than just
changing
 the time.
 
  If we can find an agreeable time on a non-Tuesday, I take the ownership of
  pinging & getting you to #openstack-meeting-alt ;)
 
From a quick look, #openstack-meeting-alt is free on Wednesdays on both
 times: 15 UTC and 21 UTC. Does this sound like a good day/time/idea to
 folks?
 
 1500 UTC might still be too early for our NZ folks - I thought we
wanted
 to have the meeting at/after 1900 UTC.
 That being said, I will be able to attend only part of the meeting any
 time after 1900 UTC - unless it is @ Thursday 1900 UTC
 Sorry for making this a puzzle :(

We'll have 2 times. The idea is to keep the current time and have a
second time slot that is good for NZ folks. What I'm proposing is to
pick a day in the week that is good for both times and just rotate on
the time instead of time+day_of_the_week.

Again, the proposal is not to have 1 time but just 1 day and alternate
times on that day. For example, Glance meetings are *always* on
Thursdays and time is alternated each other week. We can do the same for
Marconi on Mondays, Wednesdays or Fridays.

Thoughts?


Flavio

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano]

2014-07-22 Thread McLellan, Steven
Hi,

This is a little rambling, so I'll put the summary here and some discussion 
below. I would like to be able to add heat template fragments (primarily 
softwareconfig) to a template before an instance is created by Heat. This could 
be possible by updating, but not pushing, the heat template before 
instance.deploy, except that instance.deploy does a stack.push to configure 
networking before it adds information about the nova instance. This seems like 
the wrong place for the networking parts of the stack to be configured (maybe 
it belongs in the Environment, before it tries to deploy applications). Thoughts?

--

The long version:

I've been looking at using disk-image-builder (a project that came out of 
triple-o) to build images for consumption through Murano. Disk images are built 
from a base OS plus a set of 'elements' which can include packages to install 
when building the image, templatized config file etc, and allows for 
substitutions based on heat metadata at deploy time. This uses a lot of the 
existing heat software config agents taking configuration from StructuredConfig 
and StructuredDeployment heat elements.

I'm typically finding for our use cases that instances will tend to be single 
purpose (that is, the image will be created specifically to run a piece of 
software that requires some configuration). Currently Murano provisions the 
instance, and then adds software configuration as a separate stack-update step. 
This is quite inefficient since os-refresh-config ends up having to re-run, and 
so I'm wondering if there's strong opposition to allowing the object model to 
support injection of software configuration heat elements before the instance 
is deployed.
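
Concretely, the kind of fragment I'd like to see attached before the instance 
exists is something like this (a hedged HOT sketch; resource names are 
placeholders):

    resources:
      app_config:
        type: OS::Heat::StructuredConfig
        properties:
          group: os-apply-config
          config:
            app: {get_input: app_settings}
      app_deployment:
        type: OS::Heat::StructuredDeployment
        properties:
          config: {get_resource: app_config}
          server: {get_resource: instance}  # placeholder for the Murano server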

Alternatively maybe this is something that is best supported by pure HOT 
packages, but I think there's value having murano's composition ability even if 
just to be able to combine heat fragments (perhaps in the drag & drop manner 
that was briefly discussed in Atlanta).

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][rally][qa] Application for a new OpenStack Program: Performance and Scalability

2014-07-22 Thread Sean Dague
On 07/22/2014 11:58 AM, David Kranz wrote:
 On 07/22/2014 10:44 AM, Sean Dague wrote:
 Honestly, I'm really not sure I see this as a different program; it is
 really something that should be folded into the QA program. I feel like
 a top-level effort like this is going to lead to a lot of duplication in
 the data analysis that's currently going on, as well as in functionality
 for better load-driver UX.

  -Sean
 +1
 It will also lead to pointless discussions/arguments about which
 activities are part of QA and which are part of
 Performance and Scalability Testing.
 
 QA Program mission:
 
  Develop, maintain, and initiate tools and plans to ensure the upstream
 stability and quality of OpenStack, and its release readiness at any
 point during the release cycle.
 
 It is hard to see how $subj falls outside of that mission. Of course
 rally would continue to have its own repo, review team, etc. as do
 tempest and grenade.

Right, 100% agreed. Rally would remain with its own repo + review team,
just like grenade.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Requesting a spec review exception

2014-07-22 Thread Ronak Shah
Hi all,
Towards the end of the SAD, one of my specs
(https://review.openstack.org/#/c/104378/) did not make it on Sunday,
understandably, because there was a flurry of specs getting reviewed in
bursts.

I would like to ask the cores to take a look at this very basic change and see
if it can still make it in. Please consider my request.

Thanks,
Ronak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] python-swiftclient 2.2.0 release

2014-07-22 Thread John Dickinson
I'm happy to announce that python-swiftclient 2.2.0 has been released.

This release has the following significant features:

* Ability to set a storage policy on container and object upload
* Ability to generate Swift temporary URLs from the CLI and SDK
* Added context-sensitive help to the CLI commands
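
For example, generating a temporary URL with the new CLI support looks like 
this (the account path and key are placeholders; the key must first be set on 
the account):

    swift post -m "Temp-URL-Key:mysecretkey"
    swift tempurl GET 3600 /v1/AUTH_demo/container/object mysecretkey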

This release is available on PyPI at 
https://pypi.python.org/pypi/python-swiftclient/2.2.0

--John






signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] The image status is always 'BUILD'

2014-07-22 Thread Johnson Cheng
Dear Dan,

I understand.
Thanks for your information.

Regards,
Johnson

-Original Message-
From: Dan Genin [mailto:daniel.ge...@jhuapl.edu] 
Sent: Tuesday, July 22, 2014 9:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] The image status is always 'BUILD'

It seems that there already are posts out there explaining how to resolve your 
issue 
https://ask.openstack.org/en/question/6196/programmingerror-programmingerror-1146-table-novaservices-doesnt-exist/.
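
(In short, the usual fix there is to initialize the schema before starting the 
services, e.g.

    nova-manage db sync

which creates the missing nova.* tables.)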

Generally, usage questions should be directed to other fora such as 
ask.openstack.org. This mailing list is reserved for questions and discussions 
dealing exclusively with OpenStack development.

Dan

On 07/22/2014 05:12 AM, Johnson Cheng wrote:
 ProgrammingError: (ProgrammingError) (1146, Table 'nova.services' 
 doesn't exist)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Soft code freeze is planned for July, 24th

2014-07-22 Thread Andrew Woodward
Mike,

I don't think we should SCF until the review queue is addressed; there are
far too many outstanding reviews at present. I'm not saying the queue has to
be flushed and revised (although we should allow time for this, given the size
of the outstanding queue), but all patches should be reviewed, and merged or
minused (addressed). They should not be penalized because they are not high
priority and no one has gotten around to reviewing them.

My thought is: prior to SCF, the low and medium priority reviews must be
addressed, and the submitter should have one additional day to revise the
patch before their code is barred from the release. We could address
this by having a review deadline the day prior to SCF, or by watching the
exceptions intently for revisions the day after SCF.



On Tue, Jul 22, 2014 at 8:08 AM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Hi Fuelers,
 Looks like we are more or less good to call for a Soft Code Freeze [1] on
 Thursday.

 Then hard code freeze [2] will follow. It is planned to have no more than
 2 weeks between SCF and HCF [3]. When hard code freeze is called, we create
 stable/5.1 branch at the same time to accept only critical bug fixes, and
 release will be produced out of this branch. At the same time master will
 be re-opened for accepting new features and all types of bug fixes.

 [1] https://wiki.openstack.org/wiki/Fuel/Soft_Code_Freeze
 [2] https://wiki.openstack.org/wiki/Fuel/Hard_Code_Freeze
  [3] https://wiki.openstack.org/wiki/Fuel/5.1_Release_Schedule

 Let me know if anything blocks us from doing SCF on 24th.

 Thanks,
 --
 Mike Scherbakov
 #mihgen


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Andrew
Mirantis
Ceph community
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][rally] Application for a new OpenStack Program: Performance and Scalability

2014-07-22 Thread Marco Morais
+1. At Yahoo we are using and contributing to Rally.  Within our company, 
performance engineering and QE are distinct job groups with different skill 
sets and focus.  As a result, I think it makes perfect sense to have a 
dedicated program for performance and scalability within OpenStack.  I also 
think it makes sense to start growing that capability from the active community 
of developers already working on Rally and OSprofiler.
--
Regards,
Marco

From: Boris Pavlovic bo...@pavlovic.me
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, July 21, 2014 at 2:53 PM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org, 
openstack...@lists.openstack.org
Subject: [openstack-dev] [tc][rally] Application for a new OpenStack Program: 
Performance and Scalability

Hi Stackers and TC,

The Rally contributor team would like to propose a new OpenStack program
with a mission to provide scalability and performance benchmarking, and
code profiling tools for OpenStack components.

We feel we've achieved a critical mass in the Rally project, with an
active, diverse contributor team. The Rally project will be the initial
project in a new proposed Performance and Scalability program.

Below, the details on our proposed new program.

Thanks for your consideration,
Boris



[1] https://review.openstack.org/#/c/108502/


[snip]


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleoO] Switching SELinux to enforcing mode spec

2014-07-22 Thread Richard Su

Hello,

As discussed earlier this morning, we are working towards switching 
SELinux to enforcing mode in TripleO. The work required is detailed in 
this spec: https://review.openstack.org/#/c/108168/. I welcome 
additional comments and suggestions.


Thank you,

Richard

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Strategy for recovering crashed nodes in the Overcloud?

2014-07-22 Thread Charles Crouch


- Original Message -
 Hi,
 
 I'm running a HA overcloud configuration and as far as I'm aware, there is
 currently no mechanism in place for restarting failed nodes in the cluster.
 Originally, I had been wondering if we would use a corosync/pacemaker
 cluster across the control plane with STONITH resources configured for each
 node (a STONITH plugin for Ironic could be written). 

I know some people are starting to look at how to use pacemaker for fencing/
recovery with TripleO, but I'm not aware of any proposals yet. 
I'm sure as soon as that is published it will hit this list.

This might be fine if a
 corosync/pacemaker stack is already being used for HA of some components,
 but it seems overkill otherwise. 

There is a pending patch to add support for using pacemaker to deal with A/P
services: e.g. https://review.openstack.org/#/c/105397/
I'd expect additional patches like this in the future.

The undercloud heat could be in a good
 position to restart the overcloud nodes -- is that the plan or are there
 other options being considered?
 
 Thanks,
 Tom
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Add static routes on neutron router to devices in the external network

2014-07-22 Thread Kevin Benton
The issue (if I understand your diagram correctly) is that the VPN GW
address is on the other side of your home router from the neutron router.
The nexthop address has to be an address on one of the subnets directly
attached to the router. In this topology, the static route should be on
your home router.
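
As a hedged sketch (router ID, networks and addresses here are made up), a
valid extra route set with python-neutronclient would point at a nexthop on
the router's own external subnet:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://127.0.0.1:5000/v2.0')
    # 172.24.4.5 must live on a subnet directly attached to the router
    # (e.g. its external subnet); a host on the far side of the home
    # router, like the VPN gateway in this thread, cannot be a nexthop.
    neutron.update_router('ROUTER_ID', {
        'router': {
            'routes': [{'destination': '10.1.0.0/24',
                        'nexthop': '172.24.4.5'}],
        }
    })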

--
Kevin Benton


On Tue, Jul 22, 2014 at 6:55 AM, Ricardo Carrillo Cruz 
ricardo.carrillo.c...@gmail.com wrote:

 Hello guys

 I have the following network setup at home:

 [openstack instances] - [neutron router] - [  [home router] [vpn gw]   ]
  TENANT NETWORK  EXTERNAL NETWORK

 I need my instances to connect to machines that are connected through the vpn
 gw server.
 By default, all traffic that comes from openstack instances goes through the
 neutron router, and then hops onto the home router.

 I've seen there's an extra routes extension for neutron routers that would
 allow me to do that, but apparently I can't add extra routes to
 destinations in the external network, only subnets known by neutron.
 This can be seen from the neutron CLI command:

 <snip>
 neutron router-update <router name> --routes type=dict list=true
 destination=<network connected by VPN in CIDR>,nexthop=<vpn gw IP>
 Invalid format for routes: [{u'nexthop': u'<vpn gw IP>', u'destination':
 u'<network connected by VPN in CIDR>'}], the nexthop is not connected with
 router
 </snip>

 Is this use case not being possible to do at all?

 P.S.
 I found Heat BP
 https://blueprints.launchpad.net/heat/+spec/router-properties-object that
 in the description reads this can be done on Neutron, but can't figure out
 how.

 Regards

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Rally] PTL Candidacy

2014-07-22 Thread Marco Morais
Since Rally is the first OpenStack project that I contributed to, I can 
personally vouch for Boris' magical abilities to welcome new contributors and 
set the technical direction for the project.  If you spend some time in the 
#openstack-rally irc you quickly realize that boris-42 is available across 
multiple timezones to answer questions, delegate bugs and features, and provide 
a vision for what Rally should be.
--
Regards,
Marco

From: Boris Pavlovic bpavlo...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, July 21, 2014 at 11:38 AM
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Rally] PTL Candidacy

Hi,

I would like to propose my candidacy for Rally PTL.

I started this project to make benchmarking of OpenStack as simple as possible. 
This means not only load generation, but also an OpenStack-specific benchmark 
framework, data analysis, and integration with the gates. All these things should 
make it simple for developers and operators to benchmark (perf, scale, stress 
test) OpenStack, share experiments & results, and have a fast way to find what 
produces a bottleneck, or just to ensure that OpenStack works well under the load 
they are expecting.

I am the current unofficial PTL, and my responsibilities include things like:
1) Adapting the Rally architecture to cover everybody's use cases
2) Building & managing the work of the community
3) Writing a lot of code
4) Working on docs & wiki
5) Helping newbies to join the Rally team

As PTL I would like to continue this work and finish my initial goals:
1) Ensure that everybody's use cases are fully covered
2) Ensure there is no monopoly in the project
3) Run Rally in the gates of all OpenStack projects (currently we have check jobs 
in Keystone, Cinder, Glance & Neutron)
4) Continue working on making the project more mature. This covers topics like 
increasing unit and functional test coverage and making Rally absolutely safe 
to run against any production cloud


Best regards,
Boris Pavlovic
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] how do I change qemu execution path?

2014-07-22 Thread Gareth
Hi

I mean: how could I use another path instead of /usr/libexec/qemu-kvm?

Thanks
-- 
Gareth

Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball
OpenStack contributor, kun_huang@freenode
My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me and I'll donate $1 or ¥1 to an open organization you specify.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Image tagging

2014-07-22 Thread McLellan, Steven
Thanks for the response.

Primarily I’m thinking about a situation where I have an image that has a 
specific piece of software installed (let’s say MySQL for the sake of 
argument). My application (which configures mysql) requires a glance image that 
has MySQL pre-installed, and doesn’t particularly care what OS (though again 
for the sake of argument assume it’s linux of some kind, so that configuration 
files are expected to be in the same place regardless of OS).

Currently we have a list of three hardcoded values in the UI, and none of them 
apply properly. I’m suggesting instead of that list, we allow free-form text; 
if you’re tagging glance images, you are expected to know what applications 
will be looking for. This still leaves a problem in that I can upload a package 
but I don’t necessarily have the ability to mark any images as valid for it, 
but I think that can be a later evolution; for now, I’m focusing on the 
situation where an admin is both uploading glance images and murano packages.

As a slight side note, we do have the ability to filter image sizes based on 
glance properties (RAM, cpus), but this is in the UI code, not enforced at the 
contractual level. I agree reengineering some of this to be at the contract 
level is a good goal, but it seems like that would involve major reengineering 
of the dashboard to make it much dumber and go through the murano API for 
everything (which ultimately is probably a good thing).

From: Stan Lagun [mailto:sla...@mirantis.com]
Sent: Sunday, July 20, 2014 5:42 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Murano] Image tagging

Hi!

I think it would be useful to share the original vision on tagging that we had 
back in the 0.4 era when it was introduced.
Tagging was supposed to be JSON image metadata with an extendable schema. The workflow 
should be able to both utilize that metadata and impose some constraints on it. 
That feature was never really designed, so I cannot tell exactly how this JSON 
should work or look. As far as I can see, it could contain:

1. Operating system information. For example os: { "family": "Linux", "name": 
"Ubuntu", "version": "12.04", "arch": "x86_x64" } (this may also be encoded as 
a single string)
Workflows (MuranoPL contracts) need to be able to express requirements 
based on those attributes. For example

image:
  Contract($.class(Image).check($.family = 'Linux' and $.arch = 'x86'))

   In the UI only those images that match such a contract should be displayed.

2. Human readable image title Ubuntu Linux 12.04 x86

3. Information about built-in software for image-based deployment. Not sure 
exactly what information is needed. Maybe even a portion of the Object Model, so that 
if such an image is used, the Murano environment will automatically recognize and 
incorporate that application as if it had been added by the user to be installed on a 
clean instance. This will allow using pre-built images with preinstalled software 
(e.g. to speed up deployment) but will make it transparent for the user, so that 
this software can be managed just like the applications that the user chooses to 
install

4. Minimal hardware requirements for the image. Murano could use that 
information to guarantee that the user will not select a flavor that is too small for 
that operating system.

5. General-purpose tags

We need to think about how this concept fits into our roadmap and the new Glance design 
(probably there are other projects that can benefit from extended image 
metadata) before choosing one of your approaches



Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

On Fri, Jul 18, 2014 at 6:46 PM, McLellan, Steven 
steve.mclel...@hp.com wrote:
Hi,

Images that can be used for package deployment have to be tagged in glance in 
order to enable the UI to filter the list of images to present to a user (and 
potentially preselect). Currently the tags are defined in the dashboard code 
(images/forms.py) which makes things very inflexible; if I can upload an image 
and a package that consumes that image, I don’t want to have to make a code 
change to use it.

Anyone who can upload images should also be able to specify tags for them. 
There is also the question of whether a user should be allowed to tag images 
that don’t belong to them (e.g. a shared image used by a private package), but 
I think that can be punted down the road somewhat.

I think this needs to be more dynamic, and if that’s agreed upon, there are a 
couple of approaches:

1)  Store allowed tags in the database, and allow administrators to add to 
that list. Ordinary users would likely not be able to create tags, though they 
could use pre-defined ones for images they owned.

2)  Have some public tags, but also allow user-specified tags for private 
packages. I think this leads to all sorts of tricky edge cases

3)  Allow freeform tags (i.e. don’t provide any hints). Since there’s no 
formal link between the tag that a 

Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-22 Thread Joe Gordon
On Thu, Jul 17, 2014 at 7:17 AM, Dolph Mathews dolph.math...@gmail.com
wrote:


 On Thu, Jul 17, 2014 at 7:56 AM, Joe Gordon joe.gord...@gmail.com wrote:




 On Wed, Jul 16, 2014 at 5:07 PM, Morgan Fainberg 
 morgan.fainb...@gmail.com wrote:

 Reposted now with a lot fewer bad quote issues. Thanks for being patient
 with the re-send!

 --
 From: Joe Gordon joe.gord...@gmail.com
 Reply: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: July 16, 2014 at 02:27:42
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject:  Re: [openstack-dev] [devstack][keystone] Devstack, auth_token
 and keystone v3

  On Tue, Jul 15, 2014 at 7:20 AM, Morgan Fainberg
  wrote:
 
  
  
   On Tuesday, July 15, 2014, Steven Hardy wrote:
  
   On Mon, Jul 14, 2014 at 02:43:19PM -0400, Adam Young wrote:
On 07/14/2014 11:47 AM, Steven Hardy wrote:
Hi all,

I'm probably missing something, but can anyone please tell me
 when
   devstack
will be moving to keystone v3, and in particular when API
 auth_token
   will
be configured such that auth_version is v3.0 by default?

Some months ago, I posted this patch, which switched
 auth_version to
   v3.0
for Heat:

https://review.openstack.org/#/c/80341/

That patch was nack'd because there was apparently some version
   discovery
code coming which would handle it, but AFAICS I still have to
 manually
configure auth_version to v3.0 in the heat.conf for our API to
 work
properly with requests from domains other than the default.

The same issue is observed if you try to use non-default-domains
 via
python-heatclient using this soon-to-be-merged patch:

https://review.openstack.org/#/c/92728/

Can anyone enlighten me here, are we making a global devstack
 move to
   the
non-deprecated v3 keystone API, or do I need to revive this
 devstack
   patch?

The issue for Heat is we support notifications from stack domain
   users,
who are created in a heat-specific domain, thus won't work if the
auth_token middleware is configured to use the v2 keystone API.

Thanks for any information :)

Steve
There are reviews out there in client land now that should work.
 I was
testing discover just now and it seems to be doing the right
 thing. If
   the
AUTH_URL is chopped of the V2.0 or V3 the client should be able to
   handle
everything from there on forward.
  
   Perhaps I should restate my problem, as I think perhaps we still
 have
   crossed wires:
  
   - Certain configurations of Heat *only* work with v3 tokens,
 because we
   create users in a non-default domain
   - Current devstack still configures versioned endpoints, with v2.0
   keystone
   - Heat breaks in some circumstances on current devstack because of
 this.
   - Adding auth_version='v3.0' to the auth_token section of heat.conf
 fixes
   the problem.
  
   So, back in March, client changes were promised to fix this
 problem, and
   now, in July, they still have not - do I revive my patch, or are
 fixes for
   this really imminent this time?
  
   Basically I need the auth_token middleware to accept a v3 token for
 a user
   in a non-default domain, e.g validate it *always* with the v3 API
 not
   v2.0,
   even if the endpoint is still configured versioned to v2.0.
  
   Sorry to labour the point, but it's frustrating to see this still
 broken
   so long after I proposed a fix and it was rejected.
  
  
   We just did a test converting over the default to v3 (and falling
 back to
   v2 as needed, yes fallback will still be needed) yesterday (Dolph
 posted a
   couple of test patches and they seemed to succeed - yay!!) It looks
 like it
   will just work. Now there is a big caveate, this default will only
 change
   in the keystone middleware project, and it needs to have a patch or
 three
   get through gate converting projects over to use it before we accept
 the
   code.
  
   Nova has approved the patch to switch over, it is just fighting with
 Gate.
   Other patches are proposed for other projects and are in various
 states of
   approval.
  
 
  I assume you mean switch over to keystone middleware project [0], not

 Correct, switch to middleware (a requirement before we landed this patch
 in middleware). I was unclear in that statement. Sorry didn’t mean to make
 anyone jumpy that something was approved in Nova that shouldn’t have been
 or that did massive re-workings internal to Nova.

  switch over to keystone v3. Based on [1] my understanding is no
 changes to
  nova are needed to use the v2 compatible parts of the v3 API. But are
  changes needed to support domains or is this not a problem because the
 auth
  middleware uses uuids for user_id and project_id, so nova doesn't need
 to
  have any concept of domains? Are any nova changes 

[openstack-dev] [Neutron] [Spec freeze exception] Multiple IPv6 Prefixes/Addresses per port

2014-07-22 Thread Dane Leblanc (leblancd)
I would like to request Juno spec freeze exception for the Multiple IPv6 
Prefixes/Addresses per port blueprint:

https://review.openstack.org/98217

This feature defines how multiple IPv6 prefixes and/or addresses per Neutron 
port should be supported in OpenStack, in a manner that is consistent with the 
nature of IPv6. Without this change, the IPv6 addresses shown as active on a 
port may not match what's actually in use on the port.

The dependencies for this feature have already been merged.

The following work items listed in the design spec are complete:
   - Port-create API handling changes
   - Port-update API handling changes
   - Subnet-delete API handling changes
And the following work items are yet to be done:
   - L3 agent changes (number addresses per gateway port)
   - Unit tests
   - Tempest tests

Thanks,
Dane


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how scheduler handle messages?

2014-07-22 Thread Vishvananda Ishaya
Workers can consume more than one message at a time due to 
eventlet/greenthreads. The conf option rpc_thread_pool_size determines how many 
messages can theoretically be handled at once. Greenthread switching can happen 
any time a monkeypatched call is made.
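
A minimal sketch of that mechanism (plain eventlet, not nova code; the pool
size of 64 is arbitrary and merely plays the role of rpc_thread_pool_size):

    import eventlet
    eventlet.monkey_patch()  # I/O and sleep become cooperative yield points

    pool = eventlet.GreenPool(size=64)

    def handle_message(msg):
        # any monkeypatched call here (DB access, RPC, sleep) yields to
        # other greenthreads, so many messages are in flight at once
        eventlet.sleep(0.1)  # stand-in for a blocking call
        print("handled", msg)

    for i in range(10):
        pool.spawn_n(handle_message, i)
    pool.waitall()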

Vish

On Jul 21, 2014, at 3:36 AM, fdsafdsafd jaze...@163.com wrote:

 Hello,
recently, i use rally to test boot-and-delete. I thought that one 
 nova-scheduler will handle message sent to it one by one, but the log print 
 show differences. So Can some one how nova-scheduler handle messages? I read 
 the code in nova.service,  and found that one service will create fanout 
 consumer, and that all fanout message consumed in one thread. So I wonder 
 that, How the nova-scheduler handle message, if there are many messages 
 casted to call scheduler's run_instance?
 Thanks a lot.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.serialization and oslo.concurrency graduation call for help

2014-07-22 Thread Ben Nemec
Thanks guys, much appreciated!  I should be around on IRC for the rest
of the week if you have any questions.  Next week I'm unlikely to have
internet access so don't wait. :-)

-Ben

On 2014-07-22 06:37, Davanum Srinivas wrote:
 Yuriy,
 Hop onto #openstack-oslo, that's where we hang out.
 
 Ben,
 I can help as well.
 
 thanks,
 -- dims
 
 On Tue, Jul 22, 2014 at 6:38 AM, Yuriy Taraday yorik@gmail.com wrote:
 Hello, Ben.

 On Mon, Jul 21, 2014 at 7:23 PM, Ben Nemec openst...@nemebean.com wrote:

 Hi all,

 The oslo.serialization and oslo.concurrency graduation specs are both
 approved, but unfortunately I haven't made as much progress on them as I
 would like.  The serialization repo has been created and has enough acks
 to continue the process, and concurrency still needs to be started.

 Also unfortunately, I am unlikely to make progress on either over the
 next two weeks due to the tripleo meetup and vacation.  As discussed in
 the Oslo meeting last week

 (http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-07-18-16.00.log.html)
 we would like to continue work on them during that time, so Doug asked
 me to look for volunteers to pick up the work and run with it.

 The current status and next steps for oslo.serialization can be found in
 the bp:
 https://blueprints.launchpad.net/oslo/+spec/graduate-oslo-serialization

 As mentioned, oslo.concurrency isn't started and has a few more pending
 tasks, which are enumerated in the spec:

 http://git.openstack.org/cgit/openstack/oslo-specs/plain/specs/juno/graduate-oslo-concurrency.rst

 Any help would be appreciated.  I'm happy to pick this back up in a
 couple of weeks, but if someone could shepherd it along in the meantime
 that would be great!


 I would be happy to work on graduating oslo.concurrency as well as improving
 it after that. I like fiddling with OS's, threads and races :)
 I can also help to finish work on oslo.serialization (it looks like some
 steps are already finished there).

 What would be needed to start working on that? I haven't been following
 development of processes within Oslo. So I would need someone to answer
 questions as they arise.

 --

 Kind regards, Yuriy.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Add a hacking check to not use Python Source Code Encodings (PEP0263)

2014-07-22 Thread Doug Hellmann


On Tue, Jul 22, 2014, at 08:41 AM, Ihar Hrachyshka wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512
 
 On 21/07/14 17:11, Doug Hellmann wrote:
  
  On Jul 21, 2014, at 4:45 AM, Christian Berendt
  bere...@b1-systems.de wrote:
  
  Hello.
  
  There are some files using the Python source code encodings as
  the first line. That's normally not necessary and I want propose
  to introduce a hacking check to check for the absence of the
  source code encodings.
  
  We need to be testing with Unicode inputs. Will the hacking check
  still support that case?
 
 I suspect it just means that we won't be able to use Unicode
 codepoints in the code, while we still may specify those characters as
 escape sequences, as in:
 
 >>> "\N{GREEK CAPITAL LETTER DELTA}"  # Using the character name
 '\u0394'
 >>> "\u0394"  # Using a 16-bit hex value
 '\u0394'
 >>> "\U00000394"  # Using a 32-bit hex value
 '\u0394'
 
 (shamelessly copied from https://docs.python.org/3/howto/unicode.html)

And what is the benefit of doing that? I assume there's some reason for
suggesting this change, but I don't understand it. Is there a tool
breaking on unicode characters now that I haven't heard about, maybe?

Doug

 
  
  Doug
  
  
  Best, Christian.
  
  -- Christian Berendt Cloud Computing Solution Architect Mail:
  bere...@b1-systems.de
  
  B1 Systems GmbH Osterfeldstraße 7 / 85088 Vohburg /
  http://www.b1-systems.de GF: Ralph Dehner / Unternehmenssitz:
  Vohburg / AG: Ingolstadt,HRB 3537
  
  ___ OpenStack-dev
  mailing list OpenStack-dev@lists.openstack.org 
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  
  
  ___ OpenStack-dev
  mailing list OpenStack-dev@lists.openstack.org 
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 -BEGIN PGP SIGNATURE-
 Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
 Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
 
 iQEcBAEBCgAGBQJTzlv6AAoJEC5aWaUY1u57xK4H/iaHyoE2+0gz5sZHhbMHBXm3
 8Dti0ZOB2D4XEOhaxhXguuaDQQdzYHc3LOj09uzO+7qjgKQa5X7SExYDg7Tz6VaC
 lqgv5mnLZNgt2iNNK0PFGwzYV/n12w513DCuTAxROPKrZuaoMwAZFfqkcf6YGTDg
 tH16qU8nr/1SFZYVE7w/flDRI5gS04yZavIHwuMEzWN5fXebR5TxDe/JRwFNMocI
 jMwSZKY73TBzGlp8ND9bee0Wzv/IbUzjIi/R+FZAhgwK53Dc/jxUGJ9V+aKEox8d
 JdF7yvquOhBQRwuARt9O6IRvN3AG2oT+lcheSA/KAkOZ+1v1oRjNDqkn2Y9KNRk=
 =jGEy
 -END PGP SIGNATURE-
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.serialization and oslo.concurrency graduation call for help

2014-07-22 Thread Doug Hellmann


On Tue, Jul 22, 2014, at 06:38 AM, Yuriy Taraday wrote:
 Hello, Ben.
 
 On Mon, Jul 21, 2014 at 7:23 PM, Ben Nemec openst...@nemebean.com
 wrote:
 
  Hi all,
 
  The oslo.serialization and oslo.concurrency graduation specs are both
  approved, but unfortunately I haven't made as much progress on them as I
  would like.  The serialization repo has been created and has enough acks
  to continue the process, and concurrency still needs to be started.
 
  Also unfortunately, I am unlikely to make progress on either over the
  next two weeks due to the tripleo meetup and vacation.  As discussed in
  the Oslo meeting last week
  (
  http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-07-18-16.00.log.html
  )
  we would like to continue work on them during that time, so Doug asked
  me to look for volunteers to pick up the work and run with it.
 
  The current status and next steps for oslo.serialization can be found in
  the bp:
  https://blueprints.launchpad.net/oslo/+spec/graduate-oslo-serialization
 
  As mentioned, oslo.concurrency isn't started and has a few more pending
  tasks, which are enumerated in the spec:
 
  http://git.openstack.org/cgit/openstack/oslo-specs/plain/specs/juno/graduate-oslo-concurrency.rst
 
  Any help would be appreciated.  I'm happy to pick this back up in a
  couple of weeks, but if someone could shepherd it along in the meantime
  that would be great!
 
 
 I would be happy to work on graduating oslo.concurrency as well as
 improving it after that. I like fiddling with OS's, threads and races :)
 I can also help to finish work on oslo.serialization (it looks like some
 steps are already finished there).
 
 What would be needed to start working on that? I haven't been following
 development of processes within Oslo. So I would need someone to answer
 questions as they arise.

The basic process is described in
https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary, although there
are usually differences for each library as we have to do things like
tighten up the API. We can help with those parts after we have the new
repository, which is one of the more time-consuming aspects of the
graduation process, so if you were able to work on that it would be a
huge help.

Thanks,
Doug

 
 -- 
 
 Kind regards, Yuriy.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-22 Thread Andrea Frittoli
The v3 experimental jobs are available for tempest [0]:
  - check-tempest-dsvm-keystonev3-full
  - check-tempest-dsvm-neutron-keystonev3-full

At the moment the difference between these and the regular jobs is
what has been implemented in this bp [1]:
- tempest works with v3 credentials, which include domain_id - atm
configured to Default
- all tokens used by the API tests are v3 tokens

The intention for these experimental jobs is to have services eventually
using v3 tokens as well - the spec is available here [2].

As soon as v3 token support becomes available in the auth middleware used by the
various services, I'd like to enable it in the v3 jobs, until we have dsvm
jobs running with v3 alone.

andrea

[0]
http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/zuul/layout.yaml#n2260
[1]
https://github.com/openstack/qa-specs/blob/master/specs/multi-keystone-api-version-tests.rst
[2]
https://github.com/openstack/qa-specs/blob/master/specs/keystone-v3-jobs.rst



On 22 July 2014 19:28, Joe Gordon joe.gord...@gmail.com wrote:




 On Thu, Jul 17, 2014 at 11:12 AM, Morgan Fainberg 
 morgan.fainb...@gmail.com wrote:



   I wasn't aware that PKI tokens had domains in them. What happens to
 nova
   in this case, It just works?
  
 
  Both PKI and UUID responses from v3 contain:
 
  1. the user's domain
 
  And if it's a project scoped token:
 
  2. the project's domain
 
  Or if it's a domain-scoped token:
 
  3. a domain scope
 
  The answer to your question is that if nova receives a project-scoped
 token
  (1 & 2), it doesn't need to be domain-aware: project IDs are globally
  unique and nova doesn't need to know about project-domain relationships.
 
  If nova receives a domain-scoped token (1 & 3), the policy layer can
 balk
  with an HTTP 401 because there's no project in scope, and it's not
  domain-aware. From nova's perspective, this is identical to the scenario
  where the policy layer returns an HTTP 401 because nova was presented
 with
  an unscoped token (1 only) from keystone.

 Let me add some specifics based upon the IRC discussion I had with Joe
 Gordon.

 In addition to what Dolph has outlined here we have this document
 http://docs.openstack.org/developer/keystone/http-api.html#how-do-i-migrate-from-v2-0-to-v3
 that should help with what is needed to do the conversion. The change to
 use v3 largely relies on a deployer enabling the V3 API in Keystone.

 By and large, the change is all in the middleware. The middleware will
 handle either token, so it really comes down to when a V3 token is
 requested by the end user and subsequently used to interact with the
 various OpenStack services. This part requires no change on Nova (or any
 other services) part (with exception to the Domain-Scoped tokens outlined
 above and the needed changes to policy if those are to be supported).

 Each of the client libraries will need to be updated to utilize the V3
 API. This has been in process for a while (you’ve seen the code from Jamie
 Lennox and Guang Yee) and is mostly supported by converting each of the
 libraries to utilize the Session object from keystoneclient instead of the
 many various implementations to talk to auth.
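
 For reference, a minimal sketch of that Session pattern (the endpoint and
 credentials here are placeholders):

     from keystoneclient import session
     from keystoneclient.auth.identity import v3

     auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                        username='demo', password='secret',
                        user_domain_name='Default',
                        project_name='demo',
                        project_domain_name='Default')
     sess = session.Session(auth=auth)
     token = sess.get_token()  # a v3 token the other clients can reuse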

 Last but not least here are a couple bullet points that make V3 much
 better than the V2 Keystone API (all the details of what V3 brings to the
 table can be found here:
 https://github.com/openstack/identity-api/tree/master/v3/src/markdown ).
 A lot of these benefits are operator specific.

 * Federated Identity. V3 Keystone supports the use of SAML (via
 shibboleth) from a number of sources as a form of Identity (instead of
 having to keep the users all within Keystone’s Identity backend). The
 federation support relies heavily upon the domain constructs in Keystone
 (which are part of V3). There is work to expand the support beyond SAML
 (including a proposal to support keystone-to-keystone federation).

 * Pluggable Auth. V3 Keystone supports pluggable authentication
 mechanisms (a lightweight module that can authenticate the user), which is
 a bit friendlier than needing to subclass the entire Identity backend
 with a bunch of conditional logic. Plugins are configured via the Keystone
 configuration file.

 * Better admin-scoping support. Domains allow us to better handle “admin”
 vs “non-admin” and limit bleeding those roles across projects (a big
 complaint in v2: you were either an admin or not an admin globally). Due to
 backwards compatibility requirements, we have largely left this as it was,
 but the support is there and can be seen via the policy.v3cloudsample.json
 file provided in the Keystone tree.

 * The hierarchical multi tenancy work is being done against the V3
 Keystone API. This is again related to the domain construct and support.
 This will likely require changes to more than just Keystone to make full
 use of the new functionality, but specifics are still up in the air as this
 is under active development.

 

[openstack-dev] [python-openstacksdk] Meeting minutes from 2014-07-22

2014-07-22 Thread Brian Curtin
Today was a relatively quick meeting, but a meeting nonetheless.

Minutes:
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-07-22-19.00.html

Minutes (text):
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-07-22-19.00.txt

Log:
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-07-22-19.00.log.html

Next meeting scheduled for Tuesday July 29 at 1900 UTC.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Add a hacking check to not use Python Source Code Encodings (PEP0263)

2014-07-22 Thread John Dennis
On 07/21/2014 04:45 AM, Christian Berendt wrote:
 Hello.

 There are some files using the Python source code encodings as the first
 line. That's normally not necessary and I want propose to introduce a
 hacking check to check for the absence of the source code encodings.


I assume you mean you want to prohibit the use of the source code
encoding declaration, as opposed to mandating its use in every file; if
so, then ...

NAK. This is a very useful and essential feature, at least for the
specific case of UTF-8. Python 3 source files are defined to be UTF-8
encoded however the same is not true for Python 2 which requires an
explicit source coding for UTF-8. Properly handling
internationalization, especially in the unit tests is critical.
Embedding code points outside the ASCII range via obtuse notation is
both cumbersome and impossible to read without referring to a chart, a
definite drawback. OpenStack in general has not had a great track record
with respect to internationalization; let's not make it more difficult.
Instead, let's embrace those features which promote better
internationalization support.

Given Python 3 has declared the source code encoding is UTF-8 I see no
justification for capriciously declaring Python 2 cannot share the same
encoding. Nor do we want to make it difficult for developers by forcing
them to use unnatural hexadecimal escapes in strings.
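
For example (a trivial illustration, not taken from any OpenStack file),
the declaration is exactly what makes a readable non-ASCII literal legal
under Python 2:

    # -*- coding: utf-8 -*-
    # Under Python 2, removing the coding line above makes the non-ASCII
    # literal below a SyntaxError; Python 3 assumes UTF-8 by default.
    delta = u"Δ is the Greek capital letter delta"
    print(delta)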

However my concerns are strictly limited to UTF-8, I do not think we
should allow any other source code encoding aside from UTF-8. I'd
advocate for a hacking rule to check for any encoding other than UTF-8
(no need to check for ASCII since it's a proper subset of UTF-8). For
example Latin-1 should be flagged as a definite problem.

Do you have a reason for advancing this restriction?

-- 
John


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Spec Minimum Review Proposal

2014-07-22 Thread Jay Dobies
At the meetup today, the topic of our spec process came up. The general 
sentiment is that the process is still young and the hiccups are 
expected, but we do need to get better about making sure we're staying 
on top of them.


As a first step, it was proposed to add 1 spec review a week to the 
existing 3 reviews per day requirement for cores.


Additionally, we're going to start to capture and review the metrics on 
spec patches specifically during the weekly meeting. That should help 
bring to light how long reviews are sitting in the queue without being 
touched.


What are everyone's feelings on adding a 1 spec review per week 
requirement for cores?


Not surprisingly, I'm +1 for it  :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] Need opinion on bug 1347101

2014-07-22 Thread Tiwari, Arvind
I have logged the bug below to enforce a 'content-type' check before RBAC 
enforcement on POST requests, but it seems we have a difference in opinion.

https://bugs.launchpad.net/barbican/+bug/1347101

Please look at the above bug and share your thoughts.

IMO -
* Content-type enforcement is a concern of the REST subsystem (Pecan in this 
case) and RBAC is the application's concern. The application resides a level 
below the REST subsystem, so these checks and responses should also follow 
this notion.
* RBAC enforcement should be done only after all the necessary REST-level 
checks have been performed. This way we can avoid a costly RBAC validation; 
at the same time, returning an unauthorized response for a request 
with a bad content type does not make sense.
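
A hedged sketch of that ordering with Pecan hooks (illustrative only, not
Barbican's actual code; the application/json restriction is an assumption):

    from pecan.hooks import PecanHook
    import webob.exc

    class ContentTypeHook(PecanHook):
        # a lower priority value runs earlier, so this check fires
        # before an RBAC hook registered with a higher priority
        priority = 50

        def before(self, state):
            if (state.request.method == 'POST' and
                    state.request.content_type != 'application/json'):
                raise webob.exc.HTTPUnsupportedMediaType(
                    'Unsupported Content-Type: %s' %
                    state.request.content_type)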


Thanks,
Arvind




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][rally][qa] Application for a new OpenStack Program: Performance and Scalability

2014-07-22 Thread Boris Pavlovic
Sean, David

So it seems like I am better at writing code than English, sorry for that =)

Let me try to explain my position and try to make this situation clear.

We have the great QA program that helps to keep OpenStack working
(automation of testing, unit/functional/integration tests, log analysis,
elastic recheck, dsvm jobs and so on).
I really appreciate what you guys are doing, and thank you for that!

But the scope of what the Rally team is working on is quite different.
It's more about operations cases, e.g. making it simple to understand what
is happening inside production OpenStack clouds (especially under load).
To do that, it's not enough just to write tools (like Rally); it requires
extending the OpenStack APIs to make it simple to retrieve audit
information (profiling data - OSprofiler, querying logs - LogaaS,
configuration discovery - Satori), which is really out of scope for QA
(it should be done inside the OpenStack projects, not on top of them).

And we should have an Operations program to coordinate this work.

The reason why this program is called Performance & Scalability and not
Operations is that the current Rally team scope is Performance &
Scalability. But as I see it, the Rally team would really like to extend
its scope to Operations. So Performance & Scalability should be the first
piece of a big Operations program (that includes such projects as rally,
osprofiler, logaas, satori, rubick and probably some others).

So let's move to the Operations program with baby steps.


Best regards,
Boris Pavlovic


On Tue, Jul 22, 2014 at 8:18 PM, Sean Dague s...@dague.net wrote:

 On 07/22/2014 11:58 AM, David Kranz wrote:
  On 07/22/2014 10:44 AM, Sean Dague wrote:
  Honestly, I'm really not sure I see this as a different program; it is
  really something that should be folded into the QA program. I feel like
  a top-level effort like this is going to lead to a lot of duplication in
  the data analysis that's currently going on, as well as functionality
  for better load driver UX.
 
   -Sean
  +1
  It will also lead to pointless discussions/arguments about which
  activities are part of QA and which are part of
  Performance and Scalability Testing.
 
  QA Program mission:
 
   Develop, maintain, and initiate tools and plans to ensure the upstream
  stability and quality of OpenStack, and its release readiness at any
  point during the release cycle.
 
  It is hard to see how $subj falls outside of that mission. Of course
  rally would continue to have its own repo, review team, etc. as do
  tempest and grenade.

 Right, 100% agreed. Rally would remain with it's own repo + review team,
 just like grenade.

 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] The gate: a failure analysis

2014-07-22 Thread Sean Dague
On 07/22/2014 11:51 AM, Jay Pipes wrote:
 On 07/22/2014 10:48 AM, Chris Friesen wrote:
 On 07/21/2014 12:03 PM, Clint Byrum wrote:
 Thanks Matthew for the analysis.

 I think you missed something though.

 Right now the frustration is that unrelated intermittent bugs stop your
 presumably good change from getting in.

 Without gating, the result would be that even more bugs, many of them
 not
 intermittent at all, would get in. Right now, the one random developer
 who has to hunt down the rechecks and do them is inconvenienced. But
 without a gate, _every single_ developer will be inconvenienced until
 the fix is merged.

 The problem I see with this is that it's fundamentally not a fair system.

 If someone is trying to fix a bug in the libvirt driver, it's wrong to
 expect them to try to debug issues with neutron being unstable.  They
 likely don't have the skillset to do it, and we shouldn't expect them to
 do so.  It's a waste of developer time.
 
 Who is expecting the developer to debug issues with Neutron? It may be a
 waste of developer time to constantly recheck certain bugs (or no bug),
 but nobody is saying to the contributor of a libvirt fix "Hey, this
 unrelated Neutron bug is causing a failure, so go fix it."
 
 The point of the gate is specifically to provide the sort of rigidity
 that unfortunately manifests itself in discomfort from developers.
 Perhaps you don't have the history of when we had no strict gate, and it
 was a frequent source of frustration that code would sail through to
 master that would routinely break master and branches of other OpenStack
 projects. I, for one, don't want to revisit the bad old days. As much as
 a pain it is, the gate failures are a thorn in the side of folks
 precisely to push folks to fix the valid bugs that they highlight. What
 we need, like Sean said, is more folks fixing bugs and less folks
 working on features and vendor drivers.
 
 Perhaps we, as a community, should make the bug triaging and fixing days
 a much more common thing? Maybe make Thursdays or Fridays dedicated bug
 days? How about monetary bug bounties being paid out by the OpenStack
 Foundation, with a payout scale based on the bug severity and
 importance? How about having dedicated bug-squashing teams that focus on
 a particular area of the code, that share their status reports at weekly
 meetings and on the ML?

Something that's somewhat relevant to this discussion is one we had
last week in Darmstadt at the Infra / QA Sprint; it even has a pretty
picture (#notverypretty) - https://dague.net/2014/07/22/openstack-failures/

I think fairness is one of those things that's hard to figure out here.
Because while it might not seem fair to a developer that they can't land
their patch, let's consider the alternative, where we turned off all the
testing (or limited it to only things we were 100% sure would not false
negative). In that environment the review teams would have to be far
more careful about what they approved, as there was no backstop. Which
means I'd expect the review queue to grow by many integer multiples, and
land time for patches to actually increase.

An alternative to the current space of "man it's annoying that my patch
gets killed by bugs some times" isn't "yay I'm landing all the codes!",
it's probably "hmmm, how do I get anyone to look at my code, it's been
up for review for 6 months". Especially for newer developers without a
track record that haven't built up trust.

This is basically what you see in Linux. We could always evolve the
community in that direction, but I'm not sure it's what people actually
want. But in Linux if you show up as a new person the chance of anyone
reviewing your code is effectively 0%.

Every systemic change we've ever had to the gating system has 2nd and
3rd order effects, some we predict, and some we don't. Aren't emergent
systems fun? :)

For instance, when we implemented clean check, which demonstrably
decreased the gate queue length during rush times, many people now felt
like the system was punishing them because their code had to make more
round trips in the system. But so does everyone else's, which means some
really dubious behavior by some of the core teams in approving code that
hadn't been tested recently was now blocked. That was one of the
contributing factors to the January backup. So while it means that if
you hit a bug, your patch spends longer in the system, it actually means if
you don't, it is less likely to be stuck behind a ton of other failing code.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] network_rpcapi.migrate_instance_start

2014-07-22 Thread Russell Bryant
On 07/21/2014 04:10 PM, Nachi Ueno wrote:
 Hi nova folks
 
 QQ: Who uses migrate_instance_start/finish, and why do we need this rpc call?
 
 I greped code but I couldn't find implementation for it.
 
 https://github.com/openstack/nova/blob/372c54927ab4f6c226f5a1a2aead40b89617cf77/nova/network/manager.py#L1683

Agree that it appears to be a no-op and could probably be dropped (with
proper rpc backwards compat handling).  The manager side can't be removed
until the next major rev of the API.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano]

2014-07-22 Thread Stan Lagun
Hi Steve,

1. There are no objections whatsoever if you know how to do it without
breaking the entire concept
2. I think that the deployment workflow needs to be broken into more fine-grained
steps. Maybe instead of a single "deploy" method have "prepareDeploy" (which
doesn't push the changes to Heat), "deploy" and "finishDeploy". Maybe
more/other methods need to be defined. This will make the whole process
more customizable
3. If you want to have single-instance applications based on a fixed
prebuilt image then maybe what you need is to have your apps inherit both
Application and Instance classes and then override Instance's deploy method
and add the HOT snippet before VM instantiation. This may also require the
ability for a child class to bind fixed values to parent class properties
(narrowing the class's public contract, hiding those properties from the
user). This is not yet supported in MuranoPL but can be done in the UI form
as a temporary workaround
4. Didn't get why you mentioned the object model. The object model is mostly
user input. Do you suggest passing HOT snippets as part of user input? If so,
that is something I would oppose
5. I guess image tagging would be a better solution for image-based deployment
6. Personally I believe that the problem can be efficiently solved by Murano
today or in the nearest future without resorting to pure HOT packages. This
is not against Murano's design and is perfectly aligned with it


Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

 sla...@mirantis.com


On Tue, Jul 22, 2014 at 8:05 PM, McLellan, Steven steve.mclel...@hp.com
wrote:

  Hi,



 This is a little rambling, so I’ll put this summary here and some
 discussion below. I would like to be able to add heat template fragments
 (primarily softwareconfig) to a template before an instance is created by
 Heat. This could be possible by updating but not pushing the heat template
 before instance.deploy, except that instance.deploy does a stack.push to
 configure networking before it adds information about the nova instance.
 This seems like the wrong place for the networking parts of the stack to be
 configured (maybe in the Environment before it tries to deploy
 applications). Thoughts?



 --



 The long version:



 I’ve been looking at using disk-image-builder (a project that came out of
 triple-o) to build images for consumption through Murano. Disk images are
 built from a base OS plus a set of ‘elements’ which can include packages to
 install when building the image, templatized config files, etc., and allow
 for substitutions based on heat metadata at deploy time. This uses a lot of
 the existing heat software config agents taking configuration from
 StructuredConfig and StructuredDeployment heat elements.



 I’m typically finding for our use cases that instances will tend to be
 single purpose (that is, the image will be created specifically to run a
 piece of software that requires some configuration). Currently Murano
 provisions the instance, and then adds software configuration as a separate
 stack-update step. This is quite inefficient since os-refresh-config ends
 up having to re-run, and so I’m wondering if there’s strong opposition to
 allowing the object model to support injection of software configuration
 heat elements before the instance is deployed.



 Alternatively maybe this is something that is best supported by pure HOT
 packages, but I think there’s value having murano’s composition ability
 even if just to be able to combine heat fragments (perhaps in the drag 
 drop manner that was briefly discussed in Atlanta).



 Thanks,



 Steve



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Image tagging

2014-07-22 Thread Stan Lagun
How do you like this alternate design: users can choose any image they want (say
any Linux), but the JSON in the image tag has enough information on what
applications are installed on that image. And not just whether they are installed,
but the exact state in which the installation was frozen (say, binaries are
deployed but config files still need to be modified). The deployment workflow can
pick that state up from the image tag and continue right from the place it was
stopped last time. So if the user has chosen an image with MySQL preinstalled, the
workflow will just post-configure it, while if the user chose a clean Linux image,
it will do the whole deployment from scratch. Thus it becomes only a
matter of optimization, and the user will still be able to share an instance
between several applications (a good example is a Firewall app) or deploy an app
even if there is no image it was built into.

Those are only my thoughts and this needs a proper design. For now I agree
that we need to improve tagging to support your use case. But this needs to
be done in a way that allows both users and machines to work with it. The UI at
least needs to distinguish between Linux and Windows, while for users
free-form tagging may be appropriate. Both can be stored in a single JSON
tag.

So let's create a blueprint/etherpad for this and both think about the exact format
that can be implemented right now
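
As a strawman for that etherpad, a hedged sketch of attaching such a JSON
tag to an image with python-glanceclient (the property name
'murano_image_info', the schema, and all identifiers here are assumptions
to be settled in the design):

    import json
    from glanceclient import Client

    # endpoint and token are placeholders
    glance = Client('1', 'http://glance.example.com:9292', token='ADMIN_TOKEN')
    tag = {
        "title": "Ubuntu 12.04 x86_64 with MySQL",
        "os": {"family": "Linux", "name": "Ubuntu", "version": "12.04"},
        "software": ["mysql"],  # what workflows would match on
    }
    glance.images.update('IMAGE_ID',  # placeholder image UUID
                         properties={"murano_image_info": json.dumps(tag)})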

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

 sla...@mirantis.com


On Tue, Jul 22, 2014 at 10:08 PM, McLellan, Steven steve.mclel...@hp.com
wrote:

  Thanks for the response.



 Primarily I’m thinking about a situation where I have an image that has a
 specific piece of software installed (let’s say MySQL for the sake of
 argument). My application (which configures mysql) requires a glance image
 that has MySQL pre-installed, and doesn’t particularly care what OS (though
 again for the sake of argument assume it’s linux of some kind, so that
 configuration files are expected to be in the same place regardless of OS).



 Currently we have a list of three hardcoded values in the UI, and none of
 them apply properly. I’m suggesting instead of that list, we allow
 free-form text; if you’re tagging glance images, you are expected to know
 what applications will be looking for. This still leaves a problem in that
 I can upload a package but I don’t necessarily have the ability to mark any
 images as valid for it, but I think that can be a later evolution; for now,
 I’m focusing on the situation where an admin is both uploading glance
 images and murano packages.



 As a slight side note, we do have the ability to filter image sizes based
 on glance properties (RAM, cpus), but this is in the UI code, not enforced
 at the contractual level. I agree reengineering some of this to be at the
 contract level is a good goal, but it seems like that would involve major
 reengineering of the dashboard to make it much dumber and go through the
 murano API for everything (which ultimately is probably a good thing).



 *From:* Stan Lagun [mailto:sla...@mirantis.com]
 *Sent:* Sunday, July 20, 2014 5:42 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Murano] Image tagging



 Hi!



 I think it would be useful to share the original vision on tagging that
 we had back in the 0.4 era when it was introduced.

 Tagging was supposed to be JSON image metadata with an extendable schema.
 The workflow should be able to both utilize that metadata and impose some
 constraints on it. That feature was never really designed, so I cannot tell
 exactly how this JSON should work or look. As far as I can see, it could
 contain:



 1. Operating system information. For example os: { "family": "Linux",
 "name": "Ubuntu", "version": "12.04", "arch": "x86_x64" } (this may also be
 encoded as a single string)

 Workflows (MuranoPL contracts) need to be able to express requirements
 based on those attributes. For example



 image:

   Contract($.class(Image).check($.family = 'Linux' and $.arch = 'x86'))



    In the UI only those images that match such a contract should be displayed.



 2. Human readable image title Ubuntu Linux 12.04 x86



 3. Information about built-in software for image-based deployment. Not
 sure exactly what information is needed. Maybe even a portion of the Object
 Model, so that if such an image is used, the Murano environment will
 automatically recognize and incorporate that application as if it had been
 added by the user to be installed on a clean instance. This will allow using
 pre-built images with preinstalled software (e.g. to speed up deployment) but
 will make it transparent for the user, so that this software can be managed
 just like the applications that the user chooses to install



 4. Minimal hardware requirements for the image. Murano could use that
 information to guarantee that the user will not select a flavor that is too
 small for that operating system.



 5. General-purpose tags



 We need to think about how this concept fits into our roadmap and the new Glance
 

Re: [openstack-dev] [Murano]

2014-07-22 Thread Lee Calcote (lecalcot)
Gents,

For what it’s worth - we’ve long accounted for “extension points” within our 
VM and physical server provisioning flows, where developers may drop in code to 
augment OOTB behavior with customer/solution-specific needs.  While there are 
many extension points laced throughout different points in the provisioning 
flow, we pervasively injected “pre” and “post” provisioning extension points to 
allow for easy customization (like the one being attempted by Steve).

The notions of prepareDeploy and finishDeploy resonate well.

Regards,
Lee
Lee Calcote
Sr. Software Engineering Manager
Cloud and Virtualization Group

Phone: 512-378-8835
Mail/Jabber/Video: lecal...@cisco.com

United States
www.cisco.com

From: Stan Lagun sla...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, July 22, 2014 at 4:37 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Murano]

Hi Steve,

1. There are no objections whatsoever if you know how to do it without breaking 
the entire concept
2. I think the deployment workflow needs to be broken into more fine-grained 
steps. Maybe instead of a single deploy method, have prepareDeploy (which 
doesn't push the changes to Heat), deploy and finishDeploy. Maybe 
more/other methods need to be defined. This would make the whole process more 
customizable (see the sketch after this list)
3. If you want to have single-instance applications based on a fixed pre-built 
image, then maybe what you need is to have your apps inherit both the 
Application and Instance classes, then override Instance's deploy method and 
add the HOT snippet before VM instantiation. This may also require the ability 
for a child class to bind fixed values to parent class properties (narrowing 
the class's public contract, hiding those properties from the user). This is 
not yet supported in MuranoPL but can be done in the UI form as a temporary 
workaround
4. I didn't get why you mentioned the object model. The object model is mostly 
user input. Do you suggest passing HOT snippets as part of user input? If so, 
that is something I would oppose
5. I guess image tagging would be a better solution for image-based deployment
6. Personally I believe the problem can be efficiently solved by Murano today 
or in the near future without resorting to pure HOT packages. This is not 
against Murano's design and is perfectly aligned with it
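
To make point 2 concrete, here is a rough Python-style sketch of the split
(MuranoPL itself is YAML-based, and these method names are only the ones
proposed above, not an existing API):

    class Instance(object):

        def __init__(self):
            self.pending_changes = []  # Heat fragments not yet pushed

        def prepare_deploy(self):
            # Gather networking and software-config fragments here,
            # but do NOT push the stack yet.
            self.pending_changes.append({'resources': {}})

        def deploy(self):
            # One stack push with everything accumulated above, so
            # os-refresh-config runs once instead of twice.
            template = {}
            for fragment in self.pending_changes:
                template.update(fragment)  # merge fragments, push once
            return template

        def finish_deploy(self):
            # Post-provisioning hooks -- the "post" extension point.
            pass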


Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

sla...@mirantis.com


On Tue, Jul 22, 2014 at 8:05 PM, McLellan, Steven 
steve.mclel...@hp.com wrote:
Hi,

This is a little rambling, so I'll put a summary here and some discussion 
below. I would like to be able to add heat template fragments (primarily 
softwareconfig) to a template before an instance is created by Heat. This 
would be possible by updating but not pushing the heat template before 
instance.deploy, except that instance.deploy does a stack.push to configure 
networking before it adds information about the nova instance. This seems like 
the wrong place for the networking parts of the stack to be configured (maybe 
it belongs in the Environment, before it tries to deploy applications). 
Thoughts?

--

The long version:

I've been looking at using disk-image-builder (a project that came out of 
triple-o) to build images for consumption through Murano. Disk images are built 
from a base OS plus a set of 'elements' which can include packages to install 
when building the image, templatized config files, etc., and allow for 
substitutions based on heat metadata at deploy time. This uses a lot of the 
existing heat software config agents, taking configuration from StructuredConfig 
and StructuredDeployment heat elements.

I’m typically finding for our use cases that instances will tend to be single 
purpose (that is, the image will be created specifically to run a piece of 
software that requires some configuration). Currently Murano provisions the 
instance, and then adds software configuration as a separate stack-update step. 
This is quite inefficient since os-refresh-config ends up having to re-run, and 
so I’m wondering if there’s strong opposition to allowing the object model to 
support injection of software configuration heat elements before the instance 
is deployed.

Alternatively, maybe this is something that is best supported by pure HOT 
packages, but I think there's value in having murano's composition ability even 
if just to be able to combine heat fragments (perhaps in the drag & drop manner 
that was briefly discussed in Atlanta).

Thanks,

Steve


___
OpenStack-dev mailing list

Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-22 Thread Carlos Garza

On Jul 17, 2014, at 4:59 PM, Stephen Balukoff sbaluk...@bluebox.net wrote:

 From the comments there, I think the reason for storing the subjectAltNames 
 was to minimize the number of calls we will need to make to barbican, and 
 because the barbican container is immutable, and therefore the list of 
 subjectAltNames won't change so long as the container exists, and we don't 
 have to worry about cache invalidation. (Because really, storing the 
 subjectAltNames locally is a cache.)  We could accomplish the same thing by 
 storing the cert (NOT the key) in our database as well and extracting the 
 information from the x509 cert that we want on the fly. But this also seems 
 like we're doing more work than necessary to keep extracting the same data 
 from the same certificate that will never change.

I'm more forgiving of duplicating work at API time, but not on every backend 
LB https request. That's insane.
The question for me is more of a balancing act between attempting a Single 
Source of Truth design versus painful repeated computation, which is why I 
thought repeating the computations at the API layer was acceptable. If we are 
disciplined enough to guarantee the cert won't fall out of sync with the 
entries in the database, I'm fine with not re-parsing the X509 on the fly. I 
just don't know what level of trust we have in ourselves.
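
For what it's worth, re-parsing on the fly with stock pyOpenSSL (i.e. without
the PR 143 accessors) looks roughly like the sketch below; the string
splitting of the extension text is exactly the ugliness the PR is meant to
remove, and the helper name is mine:

    from OpenSSL import crypto

    def get_dns_alt_names(pem_data):
        # Return [('dNSName', 'www.somehost.com'), ...] tuples from a
        # PEM-encoded certificate.
        cert = crypto.load_certificate(crypto.FILETYPE_PEM, pem_data)
        names = []
        for i in range(cert.get_extension_count()):
            ext = cert.get_extension(i)
            if ext.get_short_name() == 'subjectAltName':
                # str(ext) renders e.g. "DNS:www.somehost.com, DNS:alt.host"
                for entry in str(ext).split(', '):
                    kind, _, value = entry.partition(':')
                    if kind == 'DNS':
                        names.append(('dNSName', value))
        return names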


 How we store this in the database is something I'm less opinionated about, 
 but your idea that storing this data in a separate table seems to make sense.
 
 Do you really see a need to be concerned with anything but GEN_DNS entries 
 here? Or put another way, is there an application that would likely be used 
 in load balancing that makes use of any subjectAltName entries that are not 
 DNSNames? (I'm pretty sure that's all that all the major browsers look at 
 anyway-- and I don't see them changing any time soon since this satisfies the 
 need for implementing SNI.)  Secondary to this, does supporting other 
 subjectAltName types in our code cause any extra significant complication?  

Well no, GEN_DNS is it for now, but to make the code 
(https://github.com/pyca/pyopenssl/pull/143) more attractive for merging 
to the pyopenssl folks I implemented most of the other entry types as well, 
but we can ignore all but the dNSName entries.

 In practice, I think anything that does TERMINATED_HTTPS as the listener 
 protocol is only going to care about dNSName entries and ignore the rest-- 
 but if supporting the rest opens the door for more general-purpose forms of 
 TLS, I don't see harm in extracting these other subjectAltName types from the 
 x509 cert. It certainly feels more correct to treat these for what they 
 are: the tuples you've described.
 
 Thanks,
 Stephen
 
 
 
 On Thu, Jul 17, 2014 at 2:29 PM, Carlos Garza carlos.ga...@rackspace.com 
 wrote:
 I added the following comments to patch 14. I'm not -1'ing it, but I think 
 it's a mistake to assume the subjectAltName is a string type. See below.
 
 --- Comments on patch 14 below 
 
 SubjectAltNames are not a single string and should be thought
 of as an array of tuples. Example:
 [('dNSName', 'www.somehost.com'),
  ('dirNameCN', 'www.somehostFromAltCN.org'),
  ('dirNameCN', 'www.anotherHostFromAltCN.org')]
 
 For right now we only care about entries that are of type dNSName,
 or entries of type DirName that also contain a CN in the DirName
 container. All other AltNames can be ignored as they don't seem to be a part
 of hostname validation in PKIX.
 
 Also, we don't need to store these in the object model, since they
 can be extracted from the X509 on the fly. Just be aware that
 the SubjectAltName should not be treated as a simple string but as a
 list of (general_name_type, general_name_value) tuples.
 
 We're really close to the end, but we can't mess this one up.
 
 I'm flexible about whether we store these values in the database
 or not. If we do store them in a database we need a table called
 general_names that contains varchars for type and value for
 now, with whatever you guys want to use for the keys to
 map back to the tls_container_id, unless we want to come to a
 firm decision on which strings in type should map to
 GEN_DNS and GEN_DIRNAME CN entries from the
 OpenSSL layer.
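
 Something like this illustrative SQLAlchemy sketch could hold them (names
 and sizes are placeholders, not a committed schema):

     import sqlalchemy as sa

     metadata = sa.MetaData()

     # One row per (type, value) general-name entry, keyed back to the
     # Barbican TLS container it was parsed from. Illustrative only.
     general_names = sa.Table(
         'general_names', metadata,
         sa.Column('tls_container_id', sa.String(36), nullable=False),
         sa.Column('type', sa.String(32), nullable=False),    # e.g. 'dNSName'
         sa.Column('value', sa.String(255), nullable=False),  # e.g. 'www.somehost.com'
     )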
 
 For now we can skip GEN_DIRNAME entries, since RFC 2818 doesn't mandate their 
 support and I'm not sure fetching the CN from the DirName is in practice 
 now. I'm leery of using CNs from DirName entries, as I can imagine people 
 signing different X509Names as a DirName with no intention of hostname 
 validation. Example:
 (dirName, 'cn=john.garza,ou=people,o=somecompany')
 
 dNSName and DirName encodings are mentioned in RFC 2459, if you want a more 
 formal definition.
 
 On Jul 17, 2014, at 10:19 AM, Stephen Balukoff sbaluk...@bluebox.net wrote:
 
  Ok, folks!
 
  Per the IRC meeting this morning, we came to the following consensus 
  regarding how TLS certificates are handled, how SAN is handled, and how 
  hostname 

Re: [openstack-dev] [Nova] network_rpcapi.migrate_instance_start

2014-07-22 Thread Nachi Ueno
Hi Russell

Thanks. I got it.

2014-07-22 14:21 GMT-07:00 Russell Bryant rbry...@redhat.com:
 On 07/21/2014 04:10 PM, Nachi Ueno wrote:
 Hi nova folks

 QQ: Who uses migrate_instance_start/finish, and why do we need this rpc call?

 I grepped the code but I couldn't find an implementation of it.

 https://github.com/openstack/nova/blob/372c54927ab4f6c226f5a1a2aead40b89617cf77/nova/network/manager.py#L1683

 Agree that it appears to be a no-op and could probably be dropped (with
 proper rpc backwards compat handling).  The manager side can't be removed
 until the next major rev of the API.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - certificates data persistency

2014-07-22 Thread Carlos Garza

On Jul 20, 2014, at 6:32 AM, Evgeny Fedoruk evge...@radware.com wrote:

 Hi folks,
  
 In the current version of the TLS capabilities RST, certificate 
 SubjectCommonName and SubjectAltName information is cached in a database.
 This may not be necessary, and here is why:
 
 1. TLS containers are immutable, meaning once a container has been 
 associated with a listener and validated, it's not necessary to validate 
 the container anymore.
 This is relevant for both the default container and containers used for SNI.
 2. The LBaaS front-end API can check whether TLS container ids were changed 
 for a listener as part of an update operation. Validation of containers will 
 be done for new containers only. This is stated in the "Performance Impact" 
 section of the RST, except for the last statement, which proposes 
 persistency for SCN and SAN.
 3. Any interaction with the Barbican API for getting container data will be 
 performed via a common module API only. This module's API is mentioned in 
 the "SNI certificates list management" section of the RST.
 4. In case a driver really needs to extract certificate information prior 
 to back-end system provisioning, it will do it via the common module API.
 5. The back-end provisioning system may cache any certificate data, except 
 the private key, in case of a vendor-specific need.
 
 IMO, there is no real need to store certificate data in the Neutron 
 database and manage its life cycle.
 Does anyone see a reason why caching certificate data in the Neutron 
 database is critical?

It's not so much caching the certificate. Let's just say that when an lb 
change comes into the API that wants to add an X509, we need to parse the 
subjectNames and SubjectAltNames from the previous X509s, which aren't 
available to us, so we must grab them all from Barbican over the REST 
interface. Like I said in an earlier email, it's a balancing act between 
Single Source of Truth and how much lag we're willing to deal with.



 Thank you,
 Evg
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Spec freeze exception] Instance tasks

2014-07-22 Thread Michael Still
Ok, this one has two cores, so the exception is approved. The
exception is in the form of another week to get the spec merged, so
quick iterations are the key.

Cheers,
Michael

On Tue, Jul 22, 2014 at 1:55 PM, Kenichi Oomichi
oomi...@mxs.nes.nec.co.jp wrote:

 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: Monday, July 21, 2014 6:54 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova] [Spec freeze exception] Instance tasks

 On 18 July 2014 14:28, Andrew Laski andrew.la...@rackspace.com wrote:
  Hello everybody,
 
  I would like to request a spec proposal extension for instance tasks,
  described in https://review.openstack.org/#/c/86938/ .  This has been a
  long-discussed and awaited feature with a lot of support from the community.
 
  This feature has been intertwined with the fate of the V3 API, which is
  still being worked out, and may not be completed in Juno.  This means that 
  I
  lack confidence that the tasks API work can be fully completed in Juno as
  well.  But there is more to the tasks work than just the API, and I would
  like to get some of that groundwork done. In fact, one of the challenges
  with the task work is how to handle an upgrade situation with an API that
  exposes tasks and computes which are not task aware and therefore don't
  update them properly. If it's acceptable I would propose stripping the API
  portion of the spec for now and focus on getting Juno computes to be task
  aware so that tasks exposed in the Kilo API would be handled properly with
  Juno computes.  This of course assumes that we're reasonably confident we
  want to add tasks to the API in Kilo.

 I see better task handling as a key to better organising the error
 handling inside Nova, and improving stability.

 As such I am happy to sponsor this spec.

 I am also glad to support this spec.
 This feature will be helpful for investigating gate failures in the future.


 Thanks
 Ken'ichi Ohmichi


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Virtio-scsi settings nova-specs exception

2014-07-22 Thread Michael Still
Ok, I am going to take Daniel and Dan's comments as agreement that
this spec freeze exception should go ahead, so the exception is
approved. The exception is in the form of another week to get the spec
merged, so quick iterations are the key.


Cheers,

Michael

On Tue, Jul 22, 2014 at 6:36 PM, Michael Still mi...@stillhq.com wrote:
 Fair enough. Let's roll with that then.

 Michael

 On Tue, Jul 22, 2014 at 6:33 AM, Sean Dague s...@dague.net wrote:
 On 07/21/2014 03:35 PM, Dan Smith wrote:
 We've already approved many other blueprints for Juno that involve features
 from new libvirt, so I don't think it is credible to reject this or any
 other feature that requires new libvirt in Juno.

 Furthermore this proposal for Nova is a targeted feature which is not
 enabled by default, so the risk of regression for people not using it
 is negligible. So I see no reason not to accept this feature.

 Yep, the proposal that started this discussion was never aimed at
 creating new test requirements for already-approved nova specs anyway. I
 definitely don't think we need to hold up something relatively simple
 like this on those grounds, given where we are in the discussion.

 --Dan

 Agreed. This was mostly about figuring out a future path for ensuring
 that the features that we say work in OpenStack either have some
 validation behind them, or some appropriate disclaimers so that people
 realize they aren't really tested in our normal system.

 I'm fine with the virtio-scsi settings moving forward.

 -Sean

 --
 Sean Dague
 http://dague.net




 --
 Rackspace Australia



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Spec freeze exception] Online Schema Changes

2014-07-22 Thread Michael Still
Ok, this one has two cores, so the exception is approved. The
exception is in the form of another week to get the spec merged, so
quick iterations are the key.

Cheers,
Michael

On Tue, Jul 22, 2014 at 2:02 AM, Kevin L. Mitchell
kevin.mitch...@rackspace.com wrote:
 On Mon, 2014-07-21 at 10:55 +0100, John Garbutt wrote:
 On 19 July 2014 03:53, Johannes Erdfelt johan...@erdfelt.com wrote:
  I'm requesting a spec freeze exception for online schema changes.
 
  https://review.openstack.org/102545
 
  This work is being done to try to minimize the downtime as part of
  upgrades. Database migrations have historically been a source of long
  periods of downtime. The spec is an attempt to start optimizing this
  part by allowing deployers to perform most schema changes online, while
  Nova is running.

 Improving upgrades is high priority, and I feel it will help reduce
 the amount of downtime required when performing database migrations.

 So I am happy to sponsor this.

 I will also sponsor this for an exception.
 --
 Kevin L. Mitchell kevin.mitch...@rackspace.com
 Rackspace


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Spec Freeze Exception] [Gantt] Scheduler Isolate DB spec

2014-07-22 Thread Michael Still
This spec freeze exception only has one core signed up. Are there any
other cores interested in working with Sylvain on this one?

Michael

On Mon, Jul 21, 2014 at 7:59 PM, John Garbutt j...@johngarbutt.com wrote:
 On 18 July 2014 09:10, Sylvain Bauza sba...@redhat.com wrote:
 Hi team,

 I would like to draw your attention to https://review.openstack.org/89893
 This spec aims to isolate access within the filters to only Scheduler
 bits. It is a prerequisite for a possible split of the scheduler
 into a separate project named Gantt, as it's necessary to remove direct
 access to other Nova objects (like aggregates and instances).

 This spec is one of the oldest specs so far, but its approval has been
 delayed because there were other concerns to discuss first about how we
 split the scheduler. Now that these concerns have been addressed, it is
 time for going back to that blueprint and iterate over it.

 I understand the exception is for a window of 7 days. In my opinion,
 this objective is achievable, as all the pieces are now there for reaching
 a consensus.

 The change by itself is only a refactoring of the existing code with no
 impact on APIs nor on the DB schema, so IMHO this blueprint is a good
 opportunity to stay on track with the objective of a split by the
 beginning of Kilo.

 Cores, I'll leave you to judge the urgency; I'm available by IRC or
 email to answer questions.

 Regardless of Gantt, tidying up the data dependencies here makes sense.

 I feel we need to consider how the above works with upgrades.

 I am happy to sponsor this blueprint. Although I worry we might not
 get agreement in time.

 Thanks,
 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party] - rebasing patches for CI

2014-07-22 Thread Jeremy Stanley
On 2014-07-21 11:36:43 -0700 (-0700), Kevin Benton wrote:
 I see. So then back to my other question, is it possible to get
 access to the same branch that is being passed to the OpenStack CI
 devstack tests?
 
 For example, in the console output I can see it uses a ref
 like refs/zuul/master/Z75ac747d605b4eb28d4add7fa5b99890.[1] Is
 that visible somewhere (other than the logs of course) that could be
 used by a third-party system?

Right now, no. It's information passed from Zuul to a Jenkins master
via Gearman, but as far as I know is currently only discoverable
within the logs and the job parameters displayed in Jenkins. There
has been some discussion in the past of Zuul providing some more
detailed information to third-party systems (perhaps the capability
to add them as additional Gearman workers) but that has never been
fully fleshed out.

For the case of independent pipelines (which is all I would expect a
third-party CI to have any interest in running for the purpose of
testing a proposed change) it should be entirely sufficient to
cherry-pick a patch/series from our Gerrit onto the target branch.
Only _dependent_ pipelines currently make use of Zuul's capability
to provide a common ref representing a set of different changes
across multiple projects, since independent pipelines will only ever
have an available ZUUL_REF on a single project (the same project for
which the change is being proposed).
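
In practice that's just something along these lines (illustrative
change/patchset numbers; the refspec follows Gerrit's
refs/changes/<last-two-digits>/<change>/<patchset> convention):

    git checkout master
    git fetch https://review.openstack.org/openstack/nova \
        refs/changes/45/102545/3
    git cherry-pick FETCH_HEAD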
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] putting [tag] in LP bug titles instead of using LP tags

2014-07-22 Thread Andrew Woodward
There has been an increased occurrence of using [tag] in the title instead
of adding the tag to the tags section of the LP bugs for Fuel.

As we discussed in the Fuel meeting last Thursday, we should stop doing
this, as it causes several issues:
* It spams e-mail.
* It breaks any threading your mail client may perform, since it changes the
subject.
* Tags in titles aren't as easily searchable as real tags.
* They look even uglier when more tags are added to or removed
from the bug.

-- 
Andrew
Mirantis
Ceph community
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Feasibility of adding global restrictions at trust creation time

2014-07-22 Thread Nathan Kinder
Hi,

I've had a few discussions recently related to Keystone trusts with
regards to imposing restrictions on trusts at a deployment level.
Currently, the creator of a trust is able to specify the following
restrictions on the trust at creation time:

  - an expiration time for the trust
  - the number of times that the trust can be used to issue trust tokens

If an expiration time (expires_at) is not specified by the creator of
the trust, then it never expires.  Similarly, if the number of uses
(remaining_uses) is not specified by the creator of the trust, it has an
unlimited number of uses.  The important thing to note is that the
restrictions are entirely in the control of the trust creator.

There may be cases where a particular deployment wants to specify global
maximum values for these restrictions to prevent a trust from being
granted indefinitely.  For example, Keystone configuration could specify
that a trust can't be created that has more than 100 remaining uses or is valid
for more than 6 months.  This would certainly cause problems for some
deployments that may be relying on indefinite trusts, but it is also a
nice security control for deployments that don't want to allow something
so open-ended.
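
To sketch what the enforcement could look like (hypothetical caps; neither
option exists in Keystone today), the check at trust creation time might be
something like:

    import datetime

    # Hypothetical deployment-level caps, not existing Keystone options.
    MAX_TRUST_LIFETIME = datetime.timedelta(days=180)
    MAX_REMAINING_USES = 100

    def violates_global_limits(expires_at, remaining_uses):
        # Return True if the requested trust exceeds the deployment caps,
        # in which case the API would answer 403 Forbidden.
        now = datetime.datetime.utcnow()
        if MAX_TRUST_LIFETIME is not None and (
                expires_at is None or expires_at - now > MAX_TRUST_LIFETIME):
            return True
        if MAX_REMAINING_USES is not None and (
                remaining_uses is None or remaining_uses > MAX_REMAINING_USES):
            return True
        return False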

I'm wondering about the feasibility of this sort of change, particularly
from an API compatibility perspective.  An attempt to create a trust
without an expires_at value should still be considered as an attempt to
create a trust that never expires, but Keystone could return a '403
Forbidden' response if this request violates the maximum specified in
configuration (this would be similar for remaining_uses).  The semantics
of the API remain the same, but the response has the potential to be
rejected for new reasons.  Is this considered as an API change, or would
this be considered to be OK to implement in the v3 API?  The existing
API docs [1][2] don't really go to this level of detail with regards to
when exactly a 403 will be returned for trust creation, though I know of
specific cases where this response is returned for the create-trust request.

Thanks,
-NGK

[1]
http://docs.openstack.org/api/openstack-identity-service/3/content/create-trust-post-os-trusttrusts.html
[2]
http://docs.openstack.org/api/openstack-identity-service/3/content/forbidden.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] putting [tag] in LP bug titles instead of using LP tags

2014-07-22 Thread Dmitry Borodaenko
+1

To provide some more context, we discussed this in the team meeting last week:
http://eavesdrop.openstack.org/meetings/fuel/2014/fuel.2014-07-17-16.00.log.html#l-107

and agreed to stop doing it pending further discussion, possibly for good.


On Tue, Jul 22, 2014 at 4:36 PM, Andrew Woodward xar...@gmail.com wrote:
 There has been an increased occurrence of using [tag] in the title instead
 of adding tag to the tags section of the LP bugs for Fuel.

 As we discussed in the Fuel meeting last Thursday, we should stop doing this,
 as it causes several issues:
 * It spams e-mail.
 * It breaks any threading your mail client may perform, since it changes the
 subject.
 * Tags in titles aren't as easily searchable as real tags.
 * They look even uglier when more tags are added to or removed
 from the bug.

 --
 Andrew
 Mirantis
 Ceph community

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Dmitry Borodaenko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] TLS capability - work division

2014-07-22 Thread Carlos Garza
    Since it looks like the TLS blueprint was approved, I'm sure
we're all eager to start coding, so how should we divide up the work
on the source code? I have a pull request open against pyopenssl
(https://github.com/pyca/pyopenssl/pull/143) and a few one-liners
in pyca/cryptography to expose the needed low-level calls, which I'm
hoping will be merged pretty soon so that PR 143's tests can pass. In
case they aren't, we will fall back to using pyasn1_modules, as it
already has a means to fetch what we want at a lower level.
    I'm just hoping that we can split the work up so that we can
collaborate together on this without over-serializing the work,
where people become dependent on waiting for someone else to
complete their work or, worse, one person ending up doing all
the work.


Carlos D. Garza
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Feasibility of adding global restrictions at trust creation time

2014-07-22 Thread Steven Hardy
On Tue, Jul 22, 2014 at 05:20:44PM -0700, Nathan Kinder wrote:
 Hi,
 
 I've had a few discussions recently related to Keystone trusts with
 regards to imposing restrictions on trusts at a deployment level.
 Currently, the creator of a trust is able to specify the following
 restrictions on the trust at creation time:
 
   - an expiration time for the trust
   - the number of times that the trust can be used to issue trust tokens
 
 If an expiration time (expires_at) is not specified by the creator of
 the trust, then it never expires.  Similarly, if the number of uses
 (remaining_uses) is not specified by the creator of the trust, it has an
 unlimited number of uses.  The important thing to note is that the
 restrictions are entirely in the control of the trust creator.
 
 There may be cases where a particular deployment wants to specify global
 maximum values for these restrictions to prevent a trust from being
 granted indefinitely.  For example, Keystone configuration could specify
 that a trust can't be created that has more than 100 remaining uses or is valid
 for more than 6 months.  This would certainly cause problems for some
 deployments that may be relying on indefinite trusts, but it is also a
 nice security control for deployments that don't want to allow something
 so open-ended.
 
 I'm wondering about the feasibility of this sort of change, particularly
 from an API compatibility perspective.  An attempt to create a trust
 without an expires_at value should still be considered as an attempt to
 create a trust that never expires, but Keystone could return a '403
 Forbidden' response if this request violates the maximum specified in
 configuration (this would be similar for remaining_uses).  The semantics
 of the API remain the same, but the response has the potential to be
 rejected for new reasons.  Is this considered as an API change, or would
 this be considered to be OK to implement in the v3 API?  The existing
 API docs [1][2] don't really go to this level of detail with regards to
 when exactly a 403 will be returned for trust creation, though I know of
 specific cases where this response is returned for the create-trust request.

FWIW if you start enforcing either of these restrictions by default, you
will break heat, and every other delegation-to-a-service use case I'm aware
of, where you simply don't have any idea how long the lifetime of the thing
created by the service (e.g. heat stack, Solum application definition,
Mistral workflow or whatever) will be.

So while I can understand the desire to make this configurable for some
environments, please leave the defaults as the current behavior and be
aware that adding these kind of restrictions won't work for many existing
trusts use-cases.

Maybe the solution would be some sort of policy-defined exception to these
limits?  E.g. when delegating to a user in the service project, they do not
apply?

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] Use FQDN in Ring files instead of ip

2014-07-22 Thread Matsuda, Kenichiro
Hi,

I want to use FQDNs in Ring files instead of IPs.
I tried the following Swift APIs using FQDNs and they succeeded.
(I used Swift 1.13.1.)

  - PUT Container
  - PUT Object

The documents I checked have no info about using FQDNs in Ring files:

  - swift 1.13.1 documentation, "The Rings > List of Devices"
    http://docs.openstack.org/developer/swift/1.13.1/overview_ring.html#list-of-devices
    "The IP address of the server containing the device."

  - swift-ring-builder's USAGE:
    swift-ring-builder <builder_file> add
        [--region <region>] --zone <zone> --ip <ip> --port <port>
        --replication-ip <r_ip> --replication-port <r_port>
        --device <device_name> --meta <meta> --weight <weight>
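
For example, here is the kind of invocation I would like to be sure is
supported, with an FQDN in place of the IP (host and device values are
made up):

    swift-ring-builder object.builder add \
        --region 1 --zone 1 --ip storage01.example.com --port 6000 \
        --device sdb1 --weight 100.0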

I would like to know whether FQDNs in Ring files are supported and/or how to 
evaluate supporting FQDNs in Ring files.

Could you please advise me on this?

Best Regards,
Kenichiro Matsuda.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] When VM do not have fixed_ip, Allowed address pair should not allow all the IPs by default

2014-07-22 Thread Liping Mao -X (limao - YI JIN XIN XI FU WU(SU ZHOU)YOU XIAN GONG SI at Cisco)
Hi Salvatore and Kyle,


Thanks for your review of the following bug:
https://review.openstack.org/#/c/97516/
https://launchpad.net/bugs/1325986


I think I did not make myself clear in the bug description.

And you left the following comments:

I have a question regarding the removal of the following rule
'-m mac --mac-source %s -j RETURN'
It was originally added to allow passing traffic to the specified additional 
MAC regardless. As a side effect however, it is also passing traffic for all 
the IPs, which is the bug you're trying to fix.
As you're removing the rule, would you agree that setting an allowed address 
pair with MAC only now does not make a lot of sense anymore? If you agree we 
should add this restriction to the API.
Otherwise we should build iptables rules for the specified MAC and all the IPs 
on that port known to neutron.



In my opinion, we have the anti-IP-spoofing feature to prevent a VM from using 
an IP which is not its fixed IP.

So if the VM has a fixed IP, the rules are something like the following:
Chain neutron-openvswi-sdcd32e11-1 (1 references)
pkts bytes target prot opt in out source   destination
 3026  382K RETURN all  --  *  *   10.224.148.121   0.0.0.0/0   
MAC FA:16:3E:4E:A9:3D
0 0 DROP   all  --  *  *   0.0.0.0/00.0.0.0/0

We will only allow the source IP (10.224.148.121) and source 
MAC (FA:16:3E:4E:A9:3D) to go out of the VM in this case.
This means that even if I modify the IP in the VM (for example, to use 
10.224.148.122), it still can't work.


Then, if I remove the fixed IP of the VM, so the port does not have any fixed 
IP, the rule will be:
Chain neutron-openvswi-sdcd32e11-1 (1 references)
pkts bytes target prot opt in out source   destination
 3026  382K RETURN all  --  *  *   0.0.0.0/0   0.0.0.0/0
   MAC FA:16:3E:4E:A9:3D
0 0 DROP   all  --  *  *   0.0.0.0/00.0.0.0/0

This will allow all traffic with source MAC FA:16:3E:4E:A9:3D to go out of the VM.
In this case, if I add an IP inside the VM (for example, 10.224.148.121), the IP 
can work.

So I think in this case the anti-IP-spoofing does not work well. I think when we 
do not have a fixed IP, the rule should be:
Chain neutron-openvswi-sdcd32e11-1 (1 references)
pkts bytes target prot opt in out source   destination
0 0 DROP   all  --  *  *   0.0.0.0/00.0.0.0/0

With this rule, if I add an IP in the VM, the IP can't work.
So my patch removes the rule '-m mac --mac-source %s -j RETURN' 
when the port does not have a fixed IP.
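
In code terms, the patch boils down to something like this sketch (illustrative
only, not the literal driver change):

    def build_egress_rules(port):
        # Per-port source-filtering chain. With fixed IPs we RETURN matching
        # (IP, MAC) pairs; with none, we emit no MAC-only RETURN rule at all,
        # so the chain falls through to the DROP and any IP configured by
        # hand inside the VM cannot leave it.
        rules = []
        for ip in port['fixed_ips']:
            rules.append('-s %s -m mac --mac-source %s -j RETURN'
                         % (ip, port['mac_address']))
        rules.append('-j DROP')
        return rules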


And in your comments, you said that "setting an allowed address pair with MAC 
only now does not make a lot of sense anymore".
I don't think so, because we still need this feature in many cases. For example, 
if we need to use DSR (Direct Server Return) in the VM, we need to allow all 
the IPs.



Thanks again for your review, and please let me know if I have any 
misunderstanding.  :)


Regards,
Liping Mao

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Feasibility of adding global restrictions at trust creation time

2014-07-22 Thread Nathan Kinder


On 07/22/2014 06:55 PM, Steven Hardy wrote:
 On Tue, Jul 22, 2014 at 05:20:44PM -0700, Nathan Kinder wrote:
 Hi,

 I've had a few discussions recently related to Keystone trusts with
 regards to imposing restrictions on trusts at a deployment level.
 Currently, the creator of a trust is able to specify the following
 restrictions on the trust at creation time:

   - an expiration time for the trust
   - the number of times that the trust can be used to issue trust tokens

 If an expiration time (expires_at) is not specified by the creator of
 the trust, then it never expires.  Similarly, if the number of uses
 (remaining_uses) is not specified by the creator of the trust, it has an
 unlimited number of uses.  The important thing to note is that the
 restrictions are entirely in the control of the trust creator.

 There may be cases where a particular deployment wants to specify global
 maximum values for these restrictions to prevent a trust from being
 granted indefinitely.  For example, Keystone configuration could specify
  that a trust can't be created that has more than 100 remaining uses or is valid
 for more than 6 months.  This would certainly cause problems for some
 deployments that may be relying on indefinite trusts, but it is also a
 nice security control for deployments that don't want to allow something
 so open-ended.

 I'm wondering about the feasibility of this sort of change, particularly
 from an API compatibility perspective.  An attempt to create a trust
 without an expires_at value should still be considered as an attempt to
 create a trust that never expires, but Keystone could return a '403
 Forbidden' response if this request violates the maximum specified in
 configuration (this would be similar for remaining_uses).  The semantics
 of the API remain the same, but the response has the potential to be
 rejected for new reasons.  Is this considered as an API change, or would
 this be considered to be OK to implement in the v3 API?  The existing
 API docs [1][2] don't really go to this level of detail with regards to
 when exactly a 403 will be returned for trust creation, though I know of
 specific cases where this response is returned for the create-trust request.
 
 FWIW if you start enforcing either of these restrictions by default, you
 will break heat, and every other delegation-to-a-service use case I'm aware
 of, where you simply don't have any idea how long the lifetime of the thing
 created by the service (e.g. heat stack, Solum application definition,
 Mistral workflow or whatever) will be.
 
 So while I can understand the desire to make this configurable for some
 environments, please leave the defaults as the current behavior and be
 aware that adding these kind of restrictions won't work for many existing
 trusts use-cases.

I fully agree.  In no way should the default behavior change.

 
 Maybe the solution would be some sort of policy-defined exception to these
 limits?  E.g. when delegating to a user in the service project, they do not
 apply?

Role-based limits seem to be a natural progression of the idea, though I
didn't want to throw that out there from the get-go.

-NGK

 
 Steve
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Use FQDN in Ring files instead of ip

2014-07-22 Thread Hua ZZ Zhang
Hi,
Here's a patch to allow hostnames in the Ring which is under development; you
might be interested in it:
https://review.openstack.org/#/c/80421/

-Edward Zhang



   

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [Spec freeze exception] Support Stateful and Stateless DHCPv6 by dnsmasq

2014-07-22 Thread Xu Han Peng
I would like to request one Juno spec freeze exception for the "Support 
Stateful and Stateless DHCPv6 by dnsmasq" BP.


This BP is an important part of IPv6 support in Juno. Router 
advertisement support by RADVD has been merged, and this BP is planned 
to configure OpenStack's dnsmasq to work with router advertisements 
from either an external router or the OpenStack-managed RADVD to make 
IPv6 networks fully functional. This BP also enables dnsmasq to work 
independently for DHCPv6 subnets. Without this BP, only SLAAC mode is 
enabled, without any stateful/stateless DHCPv6 support.
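
Roughly speaking, the two modes map onto dnsmasq configuration like this
(illustrative options only; the exact flags Neutron will generate are what
the spec below nails down):

    # Stateful DHCPv6: dnsmasq assigns addresses itself from a managed range
    dhcp-range=2001:db8:0:1::100,2001:db8:0:1::1ff,86400s
    # Stateless DHCPv6: addressing via SLAAC/RA; dnsmasq serves only the
    # extra options (DNS servers, domain search list, ...)
    dhcp-range=2001:db8:0:2::,ra-stateless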


The spec is under review:
https://review.openstack.org/#/c/102411/

Code change for this BP is submitted as well for a while:
https://review.openstack.org/#/c/106299/

Thanks,
Xu Han
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev