Re: [openstack-dev] [marconi] Proposal to add Fei Long Wang (flwang) as a core reviewer

2014-03-04 Thread Flavio Percoco

On 03/03/14 18:29 +, Kurt Griffiths wrote:

Hi folks, I’d like to propose adding Fei Long Wang (flwang) as a core reviewer
on the Marconi team. He has been contributing regularly over the past couple of
months, and has proven to be a careful reviewer with good judgment.

All Marconi ATCs, please respond with a +1 or -1.


+1 ;)

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [marconi] Proposal to add Fei Long Wang (flwang) as a core reviewer

2014-03-04 Thread Xurong Yang
+1


2014-03-04 2:29 GMT+08:00 Kurt Griffiths kurt.griffi...@rackspace.com:

  Hi folks, I'd like to propose adding *Fei Long Wang (flwang)* as a core
 reviewer on the *Marconi* team. He has been contributing regularly over
 the past couple of months, and has proven to be a careful reviewer with
 good judgment.

  *All Marconi ATCs, please respond with a +1 or -1.*

  Cheers,
 Kurt G. | @kgriffs | Marconi PTL



[openstack-dev] [nova][docker] docker and openstack disagree about the IP address

2014-03-04 Thread John Bresnahan

Hello all,

I am trying to set up a devstack Vagrant VirtualBox VM and running into 
network trouble.  I can successfully launch a container, but docker and 
OpenStack disagree on the IP address of that instance. Docker has the 
correct IP.  I am running devstack stable/havana on Ubuntu 12.04.  My 
session is below.  Any help is greatly appreciated.


Thanks,

John

nova boot --flavor 1 --image 5709b62d-0b54-43d9-82b6-fd149e477350 tst

nova list
+--------------------------------------+------+--------+------------+-------------+------------------+
| ID                                   | Name | Status | Task State | Power State | Networks         |
+--------------------------------------+------+--------+------------+-------------+------------------+
| 4f331975-497d-4cbe-ad73-ca333e178e19 | tst  | ACTIVE | -          | NOSTATE     | private=10.0.0.2 |
+--------------------------------------+------+--------+------------+-------------+------------------+

$ ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
From 10.0.0.1 icmp_seq=1 Destination Host Unreachable
From 10.0.0.1 icmp_seq=2 Destination Host Unreachable
From 10.0.0.1 icmp_seq=3 Destination Host Unreachable
^C
--- 10.0.0.2 ping statistics ---
4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2999ms
pipe 3

$ docker ps
CONTAINER ID   IMAGE                                      COMMAND                CREATED          STATUS          PORTS                    NAMES
ebc7eae3e777   172.16.129.20:5042/docker-busybox:latest   sh                     9 minutes ago    Up 9 minutes                             lonely_pasteur
a4ea16899219   docker-registry:latest                     ./docker-registry/ru   24 minutes ago   Up 24 minutes   0.0.0.0:5042->5000/tcp   tender_brown


$ docker inspect ebc7eae3e777

"NetworkSettings": {
    "IPAddress": "172.17.0.3",
    ...

$ ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
64 bytes from 172.17.0.3: icmp_req=1 ttl=64 time=0.081 ms
64 bytes from 172.17.0.3: icmp_req=2 ttl=64 time=0.053 ms








Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-04 Thread Qin Zhao
I think the current snapshot implementation can be a solution sometimes,
but it is NOT exactly what the user expects. For example, a new
blueprint was created last week,
https://blueprints.launchpad.net/nova/+spec/driver-specific-snapshot, which
seems somewhat similar to this discussion. I feel the user is asking
Nova to create an in-place snapshot (not a new image) in order to revert the
instance to a certain state. This capability should be very useful when
testing new software or system settings. For Nova it amounts to a short-term,
temporary snapshot associated with a running instance. Creating a new
instance is not that convenient, and may not be feasible for the user,
especially if he or she is using a public cloud.


On Tue, Mar 4, 2014 at 1:32 PM, Nandavar, Divakar Padiyar 
divakar.padiyar-nanda...@hp.com wrote:

  Why reboot an instance? What is wrong with deleting it and create a
 new one?

 You generally use non-persistent disk mode when you are testing new
 software or experimenting with settings. If something goes wrong, you just
 reboot and you are back to a clean state to start over again. I feel it's
 more convenient to handle this with just a reboot rather than recreating the
 instance.

 Thanks,
 Divakar

 -Original Message-
 From: Joe Gordon [mailto:joe.gord...@gmail.com]
 Sent: Tuesday, March 04, 2014 10:41 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova][cinder] non-persistent storage(after
 stopping VM, data will be rollback automatically), do you think we shoud
 introduce this feature?
 Importance: High

 On Mon, Mar 3, 2014 at 8:13 PM, Zhangleiqiang zhangleiqi...@huawei.com
 wrote:
 
  This sounds like ephemeral storage plus snapshots.  You build a base
  image, snapshot it then boot from the snapshot.
 
 
  Non-persistent storage/disk is useful for sandbox-like environments, and
 this feature has existed in VMware ESX since version 4.1. The
 implementation in ESX is the same as what you said, boot from a snapshot of
 the disk/volume, but it will also *automatically* delete the transient
 snapshot after the instance reboots or shuts down. I think the whole
 procedure should be controlled by OpenStack rather than by the user's manual
 operations.

 Why reboot an instance? What is wrong with deleting it and create a new
 one?

 
  As far as I know, libvirt already defines a corresponding transient
 element in the domain XML for non-persistent disks ( [1] ), but it cannot
 specify the location of the transient snapshot. Although qemu-kvm has
 provided support for this feature via the -snapshot command argument,
 which creates the transient snapshot under the /tmp directory, the qemu
 driver of libvirt doesn't support the transient element currently.
 
  I think the steps of creating and deleting the transient snapshot may be
 better done by Nova/Cinder rather than waiting for transient
 support to be added to libvirt, as the location of the transient snapshot should
 be specified by Nova.
 
 
  [1] http://libvirt.org/formatdomain.html#elementsDisks
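
  For illustration only - this is not Nova, Cinder or libvirt code, and the
  paths are hypothetical - the overlay approach described above can be
  sketched with qemu-img roughly like this:

  import os
  import subprocess


  def create_transient_overlay(base_disk, overlay_path):
      # Create a throw-away qcow2 overlay backed by the untouched base disk.
      # Newer qemu-img versions may also want the backing format (-F).
      subprocess.check_call([
          "qemu-img", "create",
          "-f", "qcow2",      # format of the overlay itself
          "-b", base_disk,    # backing file: the base disk stays unmodified
          overlay_path,
      ])
      return overlay_path


  def discard_transient_overlay(overlay_path):
      # Deleting the overlay throws away every write made since boot.
      os.unlink(overlay_path)


  # Hypothetical usage (placeholder paths): point the guest at the overlay
  # instead of the base disk, then discard the overlay on shutdown.
  #   overlay = create_transient_overlay(
  #       "/var/lib/nova/instances/UUID/disk",
  #       "/var/lib/nova/instances/UUID/disk.transient")
  #   ... run the instance against the overlay ...
  #   discard_transient_overlay(overlay)
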
  --
  zhangleiqiang
 
  Best Regards
 
 
  -Original Message-
  From: Joe Gordon [mailto:joe.gord...@gmail.com]
  Sent: Tuesday, March 04, 2014 11:26 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Cc: Luohao (brian)
  Subject: Re: [openstack-dev] [nova][cinder] non-persistent
  storage(after stopping VM, data will be rollback automatically), do
  you think we shoud introduce this feature?
 
  On Mon, Mar 3, 2014 at 6:00 PM, Yuzhou (C) vitas.yuz...@huawei.com
  wrote:
   Hi stackers,
  
   As far as I know, there are two types of storage used by VMs in OpenStack:
  Ephemeral Storage and Persistent Storage.
   Data on ephemeral storage ceases to exist when the instance it is associated
  with is terminated. Rebooting the VM or restarting the host server,
  however, will not destroy ephemeral data.
   Persistent storage means that the storage resource outlives any other
  resource and is always available, regardless of the state of a running instance.
  
   There is a use case that may need a new type of storage; maybe we can
  call it non-persistent storage.
   The use case is that VMs are assigned to the public ephemerally in public
  areas.
   After the VM is used, new data on the VM's storage ceases to exist when the
  instance it is associated with is stopped.
   In other words, when the VM is stopped, the non-persistent storage used by
  the VM will be rolled back automatically.
  
   Is there any other suggestions? Or any BPs about this use case?
  
 
  This sounds like ephemeral storage plus snapshots.  You build a base
  image, snapshot it then boot from the snapshot.
 
   Thanks!
  
   Zhou Yu
  

Re: [openstack-dev] [OpenStack-Dev] [Cinder] Open Source and community working together

2014-03-04 Thread Luke Gorrie
On 3 March 2014 18:30, Thierry Carrez thie...@openstack.org wrote:

 My advice was therefore that you should not wait for that to happen to
 engage in cooperative behavior, because you don't want to be the first
 company to get singled out.


Cooperative behavior is vague.

Case in point: I have not successfully set up 3rd-party CI for the ML2
driver that I've developed on behalf of a vendor. Does this make me one of
your uncooperative vendors? Do I need to worry about being fired because
somebody at OpenStack decides to name and shame the company I'm doing the
work for and make an example of it? (Is that what the deprecated neutron drivers
list will be used for?)

If one project official says driver contributors have to comply with X, Y,
Z by Icehouse-2, and then another project official says that uncooperative
contributors are going to be nailed to the wall, then, well, it sucks to be a
contributor.


Re: [openstack-dev] [Neutron] Flavor Framework

2014-03-04 Thread Salvatore Orlando
Hi,

I read this thread and I think this moves us in the right direction of
moving away from provider mapping, and, most importantly, abstracting away
backend-specific details.

I was, however, wondering whether flavours (or service offerings) will act
more like a catalog or a scheduler.
The difference, in my opinion, is the following:
In the first case, selecting an item from the catalog will uniquely
identify the backend which will implement the service. For instance, if you
select Gold or GoldwithSSL then your load balancer will be implemented using the
backend driver Iota, whereas if you select Copper it will be
implemented using driver Epsilon.
In the latter case, the selection of a flavour will be more like
expressing a desired configuration, and this sort of scheduler will then
pick the driver which offers the closest specification, or reject the
request if no driver is available (which might happen if the driver is
there but has no capacity).
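
Purely as an illustration of the second (scheduler-like) interpretation - not
a proposal for the actual Neutron API, and with all driver and flavour names
invented - driver selection by capability could look roughly like this:

class NoCapableDriver(Exception):
    """Raised when no registered driver satisfies the requested flavour."""


# Invented registry: driver name -> capabilities it advertises.
DRIVERS = {
    "iota": {"ha", "l7", "ssl"},
    "epsilon": {"l7", "ssl"},
}

# Invented flavour definitions: flavour name -> required capabilities.
FLAVOURS = {
    "advanced-adc": {"ha", "l7", "ssl"},
    "adc-for-testing": {"l7", "ssl"},
}


def schedule_driver(flavour_name, has_capacity=lambda driver: True):
    # Pick a driver whose capabilities cover the flavour's requirements,
    # preferring the closest match (fewest unused extra capabilities).
    required = FLAVOURS[flavour_name]
    candidates = [name for name, caps in DRIVERS.items()
                  if required <= caps and has_capacity(name)]
    if not candidates:
        raise NoCapableDriver(flavour_name)
    return min(candidates, key=lambda name: len(DRIVERS[name] - required))


print(schedule_driver("advanced-adc"))     # -> iota
print(schedule_driver("adc-for-testing"))  # -> epsilon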

From my perspective, it would also be important to not focus exclusively on
one service (I've read mostly about load balancing here), but provide a
solution, and then a PoC implementation, which will apply to Firewall and
VPN services as well.

Salvatore

PS: I'm terrible at names; so far I think we've been using mostly flavour
and service offering. Regardless of what makes sense, one also has to consider
uniformity with similar concepts across OpenStack projects.


On 4 March 2014 00:33, Samuel Bercovici samu...@radware.com wrote:

  Hi,



 The discussion about advanced services and scheduling was primarily around
 choosing backends based on capabilities.

 AFAIK, the Nova flavor specifies capacity.

 So I think that using the term flavor might not match what is intended.

 A better word might be capability or group of capabilities.



 Is the following what we want to achieve?

 · A tenant creates a vip and requires high availability with
 advanced L7 and SSL capabilities for production.

 · Another tenant creates a vip that requires advanced L7 and SSL
 capabilities for development.



 The admin or maybe even the tenant might group such capabilities (ha, L7,
 SSL) and name them advanced-adc and another group of capabilities (no-ha,
 L7, SSL) and name them adc-for-testing.



 This leads to an abbreviation of:

 · Tenant creates a vip that requires advanced-adc.

 · Tenant creates a vip that requires adc-for-testing.



 Regards,

 -Sam.















 *From:* Eugene Nikanorov [mailto:enikano...@mirantis.com]
 *Sent:* Thursday, February 27, 2014 12:12 AM
 *To:* OpenStack Development Mailing List
 *Subject:* [openstack-dev] [Neutron] Flavor Framework



 Hi neutron folks,



 I know that there are patches on gerrit for VPN, FWaaS and L3 services
 that are leveraging Provider Framework.

 Recently we've been discussing more comprehensive approach that will allow
 user to choose service capabilities rather than vendor or provider.



 I've started creating design draft of Flavor Framework, please take a
 look:

 https://wiki.openstack.org/wiki/Neutron/FlavorFramework



 It also now looks clear to me that the code that introduces providers for
 vpn, fwaas, l3 is really necessary to move forward to Flavors with one
 exception: providers should not be exposed to public API.

 While provider attribute could be visible to administrator (like
 segmentation_id of network), it can't be specified on creation and it's not
 available to a regular user.



 Looking forward to get your feedback.



 Thanks,

 Eugene.



Re: [openstack-dev] [Tempest - Stress Test][qa] : implement a full SSH connection on ssh_floating.py and improve it

2014-03-04 Thread Koderer, Marc
 Von: LELOUP Julien [mailto:julien.lel...@3ds.com]
 Gesendet: Montag, 3. März 2014 11:21
 An: OpenStack Development Mailing List (not for usage questions); Koderer,
 Marc
 Betreff: [Tempest - Stress Test][qa] : implement a full SSH connection on
 ssh_floating.py and improve it

 Hello,

 It turns out that this stress test may not be implemented in the right
 place.

 At the moment I have put the SSH stress test in large_ops, but Joe Gordon
 pointed out (in https://review.openstack.org/#/c/74067/) that the large_ops
 gates are not meant to launch real servers.

Yep, this is a discussion that came up at the last summit and I might have
undervalued the consequences. IMHO stress tests should work both with
fake drivers and with real environments. If they don't, and we really
want to support such cases, we need to flag such tests or make these
checks optional. If we do so, these checks would possibly never run in the
gate. Especially for stress tests, it might be the case that they
are designed for in-house usage and not for the gate.


 So where should I put this test? I'm thinking of creating a new scenario
 file inheriting from the class specified in the large_ops file
 (TestLargeOpsScenario) in order to avoid duplicating the nova_boot
 method.
 Is that OK for you?

 Is there a better place where I should put my test?

I mean, we still have tempest/stress/actions/ and we could put something
like that in there. In general I would like to discuss this topic in the next
QA meeting.
@Julien: are you able to join the next meeting? It would be at 22:00 UTC.



 Best Regards,

 Julien LELOUP
 julien.lel...@3ds.com


Regards,
Marc



 -Original Message-
 From: LELOUP Julien [mailto:julien.lel...@3ds.com]
 Sent: Monday, February 17, 2014 5:02 PM
 To: OpenStack Development Mailing List (not for usage questions); Koderer,
 Marc
 Subject: Re: [openstack-dev] [qa] [Tempest - Stress Test] : implement a
 full SSH connection on ssh_floating.py and improve it

 Hello Marc,

 I'm raising this subject again since I have pushed a patch set
 implementing this SSH stress test in Tempest.
 As you suggested, I wrote it as a scenario test: I'm currently using it
 to check the ability of my stack to launch x servers at the same time and
 have them fully available to use.

 As a reminder, here is the blueprint:
 https://blueprints.launchpad.net/tempest/+spec/stress-test-ssh-floating-ip

 Please feel free to check it and give feedback :)

 Best Regards,

 Julien LELOUP
 julien.lel...@3ds.com

 -Original Message-
 From: LELOUP Julien [mailto:julien.lel...@3ds.com]
 Sent: Monday, January 20, 2014 11:55 AM
 To: Koderer, Marc
 Cc: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [qa] [Tempest - Stress Test] : implement a
 full SSH connection on ssh_floating.py and improve it

 Hello Marc,

 Thanks again for your help, your blog post is helpful.

 So I will start writing a new scenario test to get this full SSH stress
 test on newly created VM.
 I will put more details about it in the blueprint I created for this :
 https://blueprints.launchpad.net/tempest/+spec/stress-test-ssh-floating-ip

 Best Regards,

 Julien LELOUP
 julien.lel...@3ds.com


 -Original Message-
 From: Koderer, Marc [mailto:m.kode...@telekom.de]
 Sent: Saturday, January 18, 2014 10:11 AM
 To: LELOUP Julien
 Cc: openstack-dev@lists.openstack.org
 Subject: RE: [qa] [Tempest - Stress Test] : implement a full SSH
 connection on ssh_floating.py and improve it

 Hello Julien,

 maybe my blog post helps you with some more details:

 http://telekomcloud.github.io/2013/09/11/new-ways-of-tempest-stress-
 testing.html

 You can run single test if you add a new json file with the test function
 you want to test. Like:
 https://github.com/openstack/tempest/blob/master/tempest/stress/etc/sample
 -unit-test.json

 With that you can launch them with the parameters you already described.

 Regards,
 Marc

 
 From: LELOUP Julien [julien.lel...@3ds.com]
 Sent: Friday, January 17, 2014 3:49 PM
 To: Koderer, Marc
 Cc: openstack-dev@lists.openstack.org
 Subject: RE: [Tempest - Stress Test] : implement a full SSH connection on
 ssh_floating.py and improve it

 Hi Marc,

 The Etherpad you provided was helpful to know the current state of the
 stress tests.

 I admit that I have some difficulty understanding how I can run a single
 test built with the @stresstest decorator (even though I am not a beginner in
 Python, I still have things to learn about this technology and a lot more
 about OpenStack/Tempest :) ).
 I used to run my test using ./run_stress.py -t <JSON configuration
 pointing at my action .py script> -d <duration>, which allowed me to run
 only one test with a dedicated configuration (number of threads, ...)

 From what I understand now in Tempest, I only managed to run all tests,
 using ./run_tests.sh, and the only configuration I found related to
 stress tests was the [stress] section in tempest.conf.

 For example : let say I ported my SSH 

Re: [openstack-dev] [nova] Thought exercise for a V2 only world

2014-03-04 Thread Christopher Yeoh
On Mon, 3 Mar 2014 21:31:23 -0800
Morgan Fainberg m...@metacloud.com wrote:
 
 I think that in a V2 only world a 2 cycle deprecation model would be
 sufficient for any extension. I don’t foresee any complaints on that
 front, especially if there is work to supersede or obsolete the need
 for the extension.

Given the context of feedback saying we're not able to deprecate
the V2 API as-is for a very long time, I don't see how a 2-cycle
deprecation model for an extension is sufficient. Perhaps it comes down
to how we really know it's unused. If it hasn't ever worked (we had
one of those!) or accidentally hadn't worked for a couple of cycles and
no one noticed, then perhaps deprecating it is OK. But otherwise,
whilst we can get input from public cloud providers fairly easily,
there are going to be a lot of small private deployments as well, with
custom bits of API-using code, which we won't hear from. And we'll be
forcing them off the API, which people say is exactly what we don't want
to do.

Deprecating functionality and still calling it V2 is, I think, nearly
always a bad thing, because it is very different from what people
consider to be major-version API stability - e.g. you may get new
features, but old ones stay.

It's for similar reasons that I'm pretty hesitant about using microversioning
for backwards-incompatible changes in addition to backwards-compatible
ones, because we end up with a concept of major-version stability which
is quite different from what people expect. I don't think we should be
seeing versioning as a magic bullet to get out of API mistakes we've made
(except perhaps under really exceptional circumstances),
because it really just shifts the pain to the users. Do it enough and
people lose an understanding of what it means to have version X.Y
of an API available versus X.(Y+n) and whether they can expect the
software to still work.




 
 Cheers,
 Morgan 
 —
 Morgan Fainberg
 Principal Software Engineer
 Core Developer, Keystone
 m...@metacloud.com
 
 
 On March 3, 2014 at 21:29:43, Joe Gordon (joe.gord...@gmail.com)
 wrote:
 
 Hi All,  
 
 here's a case worth exploring in a v2 only world ... what about some  
 extension we really think is dead and should go away? can we ever  
 remove it? In the past we have said backwards compatibility means no  
 we cannot remove any extensions, if we adopt the v2 only notion of  
 backwards compatibility is this still true?  
 
 best,  
 Joe  
 


Re: [openstack-dev] [TripleO] milestone-proposed branches

2014-03-04 Thread Thierry Carrez
Robert Collins wrote:
 On 3 March 2014 23:12, Thierry Carrez thie...@openstack.org wrote:
 James Slagle wrote:
 I'd like to ask that the following repositories for TripleO be included
 in next week's cutting of icehouse-3:

 http://git.openstack.org/openstack/tripleo-incubator
 http://git.openstack.org/openstack/tripleo-image-elements
 http://git.openstack.org/openstack/tripleo-heat-templates
 http://git.openstack.org/openstack/diskimage-builder
 http://git.openstack.org/openstack/os-collect-config
 http://git.openstack.org/openstack/os-refresh-config
 http://git.openstack.org/openstack/os-apply-config

 Are you willing to run through the steps on the How_To_Release wiki for
 these repos, or should I do it next week? Just let me know how or what
 to coordinate. Thanks.

 I looked into more details and there are a number of issues as TripleO
 projects were not really originally configured to be released.

 First, some basic jobs are missing, like a tarball job for
 tripleo-incubator.
 
 Do we need one? tripleo-incubator has no infrastructure to make
 tarballs. So that has to be created de novo, and it's not really
 structured to be sdistable - it's a proving ground. This needs more
 examination. Slagle could, however, use a git branch effectively.

I'd say you don't need such a job, but then I'm not the one asking for
that repository to be included in next week's cutting of icehouse-3.

James asks if I'd be OK to run through the steps on the How_To_Release
wiki, and that wiki page is all about publishing tarballs.

So my answer is, if you want to run the release scripts for
tripleo-incubator, then you need a tarball job.

 Then the release scripts are made for integrated projects, which follow
 a number of rules that TripleO doesn't follow:

 - One Launchpad project per code repository, under the same name (here
 you have tripleo-* under tripleo + diskimage-builder separately)
 
 Huh? diskimage-builder is a separate project, with a separate repo. No
 conflation. Same for os-*-config, though I haven't made a LP project
 for os-cloud-config yet (but its not a dependency yet either).

Just saying that IF you want to use the release scripts (and it looks
like you actually don't want that), you'll need a 1:1 LP - repo match.
Currently in LP you have tripleo (covering tripleo-* repos),
diskimage-builder, and the os-* projects (which I somehow missed). To
reuse the release scripts you'd have to split tripleo in LP into
multiple projects.

 Finally the person doing the release needs to have push annotated tags
 / create reference permissions over refs/tags/* in Gerrit. This seems
 to be missing for a number of projects.
 
 We have this for all the projects we release; probably not incubator
 because *we don't release it*- and we had no intent of doing releases
 for tripleo-incubator - just having a stable branch so that there is a
 thing RH can build rpms from is the key goal.

I agree with you. I only talked about it because James mentioned it in
his to be released list.

 In all cases I'd rather limit myself to incubated/integrated projects,
 rather than extend to other projects, especially on a busy week like
 feature freeze week. So I'd advise that for icehouse-3 you follow the
 following simplified procedure:

 - Add missing tarball-creation jobs
 - Add missing permissions for yourself in Gerrit
 - Skip milestone-proposed branch creation
 - Push tag on master when ready (this will result in tarballs getting
 built at tarballs.openstack.org)

 Optionally:
 - Create icehouse series / icehouse-3 milestone for projects in LP
 - Manually create release and upload resulting tarballs to Launchpad
 milestone page, under the projects that make the most sense (tripleo-*
 under tripleo, etc)

 I'm still a bit confused with the goals here. My original understanding
 was that TripleO was explicitly NOT following the release cycle. How
 much of the integrated projects release process do you want to reuse ?
 We do a feature freeze on icehouse-3, then bugfix on master until -rc1,
 then we cut an icehouse release branch (milestone-proposed), unfreeze
 master and let it continue as Juno. Is that what you want to do too ? Do
 you want releases ? Or do you actually just want stable branches ?
 
 This is the etherpad:
 https://etherpad.openstack.org/p/icehouse-updates-stablebranches -
 that captures our notes from the summit.
 
 TripleO as a whole is not committing to stable maintenance or API
 service integrated releases as yet: tuskar is our API service which
 will follow that process next cycle, but right now it has its guts
 open, undergoing open-heart surgery. Everything else we do semver on -
 like the OpenStack clients (novaclient etc.) - and our overall process
 is aimed at moving things from the incubator into stable trees as they
 mature. We'll be stabilising the interfaces in tripleo-heat-templates
 and tripleo-image-elements somehow in future too - but we don't have
 good answers to some aspects there yet.
 
 BUT
 
 We want 

Re: [openstack-dev] [Neutron] Flavor Framework

2014-03-04 Thread Eugene Nikanorov
Thanks for your interest, folks.

Salvatore, I think we mostly model this with load balancing examples
because, firstly, we're working on LBaaS and, secondly, LBaaS already has
providers/drivers; knowing the limitations of that, we are trying to
understand how to do Flavors better.
For sure we plan to make the framework generic.

Regarding catalog vs scheduler - I think we're planning scheduling rather
than catalog.
 In the latter case the selection of a flavour will be more like
expressing a desired configuration, and this sort of scheduler
 will then pick the driver which offer the closest specification, or
reject the request if no driver is available (which might happen if  the
driver is there but has no capacity).

Yes, that is how I see it working.

On some previous Jay's comment:
  Well, provider becomes a read-only attribute and for admin only (just to
  see which driver actually handles the resources), not too much API
  visibility.

 I'd very much prefer to keep the provider/driver name out of the public
 API entirely. I don't see how it is needed.
Yep, just as the network segmentation id (which is an implementation detail) is not
visible to the user,
the provider/driver will only be visible to the admin.

The driver attribute of the resource just represents the binding between
the resource and the driver that handles REST calls.
I think it would be useful for the admin to know that.

Thanks,
Eugene.




Re: [openstack-dev] Guru Meditation output seems useless

2014-03-04 Thread Daniel P. Berrange
On Mon, Mar 03, 2014 at 02:58:34PM -0700, Pete Zaitcev wrote:
 Dear Solly:
 
 I cobbled together a working prototype of Guru Meditation for Swift
 just to see how it worked. I did not use Oslo classes, but used the
 code from Dan's prototype and from your Nova review. Here's the

Sigh, please don't reinvent the wheel with special code just for Swift.
Use the oslo common report module for this, so we have standardized
behaviour and code across all projects - that's the whole point of oslo,
after all.

 Gerrit link:
  https://review.openstack.org/70513
 
 Looking at the collected tracebacks, most of all they seem singularly
 useless. No matter how loaded the process is, they always show
 something like:
 
    File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 226, in run
      self.wait(sleep_time)
    File "/usr/lib/python2.6/site-packages/eventlet/hubs/poll.py", line 84, in wait
      presult = self.do_poll(seconds)
    File "/usr/lib/python2.6/site-packages/eventlet/hubs/poll.py", line 73, in do_poll
      return self.poll.poll(int(seconds * 1000.0))
    File "/usr/lib/python2.6/site-packages/swift/common/daemon.py", line 103, in <lambda>
      *args))
    File "/usr/lib/python2.6/site-packages/swift/common/guru_meditation.py", line 79, in signal_handler
      dump_threads(gthr_model, report_fp)
    File "/usr/lib/python2.6/site-packages/swift/common/guru_meditation.py", line 53, in dump_threads
      thread.dump(report_fp)
    File "/usr/lib/python2.6/site-packages/swift/common/guru_meditation.py", line 29, in dump
      traceback.print_stack(self.stack, file=report_fp)
 
 The same is true for native and green threads: they all seem to be
 anchored in the lambda that passes parameters to the signal handler,
 so they show nothing of value.
 
 So, my question is: did you look at traces in Nova and if yes,
 did you catch anything? If yes, where is the final code that works?

Yes, we've successfully used the oslo common code to diagnose deadlock
in Nova recently, from the greenthread stack traces - eg this log:

  http://paste.openstack.org/show/62463/
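
For reference, the underlying idea for native threads can be sketched in a few
self-contained lines (this is not the oslo report code, and the output format
here is made up): capture the stack of every *other* thread via
sys._current_frames(), so the dump is not dominated by the frames of the
handler doing the dumping. Green threads are not OS threads, so they need
eventlet-specific introspection on top of this.

import sys
import threading
import time
import traceback


def dump_other_thread_stacks(out=sys.stderr):
    # Write a stack trace for every native thread except the one running
    # this function, which would only show the dump machinery itself.
    current = threading.current_thread().ident
    for thread_id, frame in sys._current_frames().items():
        if thread_id == current:
            continue
        out.write("\n--- Thread %d ---\n" % thread_id)
        traceback.print_stack(frame, file=out)


if __name__ == "__main__":
    worker = threading.Thread(target=time.sleep, args=(60,))
    worker.daemon = True
    worker.start()
    time.sleep(0.1)  # let the worker reach its sleep()
    dump_other_thread_stacks(sys.stdout)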

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-04 Thread Julien Danjou
On Tue, Mar 04 2014, James E. Blair wrote:

 If there aren't objections to this plan, I think we can propose a motion
 to the TC with a date and move forward with it fairly soon.

That plan LGTM, and +1 for OFTC. :)

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */




Re: [openstack-dev] [Nova] Regarding the icehouse release roadmap tracking page

2014-03-04 Thread Thierry Carrez
Yathiraj Udupi (yudupi) wrote:
 I was looking at the Icehouse release roadmap tracking page here
 - http://status.openstack.org/release/ 
 
 How is the list generated?

It's automatically generated using
https://git.openstack.org/cgit/openstack-infra/releasestatus/

 I couldn’t see the Nova Solver Scheduler blueprint here in this list
 - https://blueprints.launchpad.net/nova/+spec/solver-scheduler which is
 approved for icehouse-3. 
 Whoever knows or handles this release tracking page, can you please
 update this to include the above blueprint?

As mentioned on the page in a small note at the top:

We only track active blueprints where priority is Medium, High or
Essential (release radar)

Blueprints that are Low priority are explicitly excluded from this
view, unless they get Implemented. This page is about communicating
what's likely to land in icehouse, and PTLs use the Low priority to
communicate that they do not commit to that blueprint landing in the
release. Therefore those do not show up on this page. It is a trade-off
to try to predict the future as accurately as we can -- it's far from
perfect (as evidenced in this cycle) and we will certainly change that
in future cycles.

Note that your blueprint appears on the icehouse-3 tracking page:
https://launchpad.net/nova/+milestone/icehouse-3

-- 
Thierry Carrez (ttx)



[openstack-dev] [Murano] Community Meeting reminder - 03/04/2014

2014-03-04 Thread Alexander Tivelkov
Hi folks,

This is just a reminder that we are going to have a regular weekly
meeting in IRC (#openstack-meeting-alt at freenode) today at 17:00 UTC
(9am PST). The agenda is available at
https://wiki.openstack.org/wiki/Meetings/MuranoAgenda#Agenda

Feel free to add anything you want to discuss to the Agenda.

See you there!

--
Regards,
Alexander Tivelkov



Re: [openstack-dev] [Tempest - Stress Test][qa] : implement a full SSH connection on ssh_floating.py and improve it

2014-03-04 Thread LELOUP Julien
From: Koderer, Marc [mailto:m.kode...@telekom.de]
Sent: Tuesday, March 04, 2014 10:41 AM
To: LELOUP Julien; OpenStack Development Mailing List (not for usage questions)
Subject: AW: [Tempest - Stress Test][qa] : implement a full SSH connection on 
ssh_floating.py and improve it


 So where should I put this test ? I'm thinking of creating a new
 scenario file inheriting of the class specified in the large_ops file
 (TestLargeOpsScenario) in order to avoid duplicating the nova_boot
 method.
 Is it OK for you ?

 Is there a better place where I should put my test ?

I mean we still have the tempest/stress/actions/ and we could put something 
like that in there. In general I would like to discuss this topic in the next 
QA meeting..
@Julien: are you able to join the next meeting? It would be 22:00 UTC.

@Marc: so next Thursday (3/6/2014)? Yes, I can be there.


 Best Regards,

 Julien LELOUP
 julien.lel...@3ds.com


Regards,
Marc




[openstack-dev] Nova gate currently broken

2014-03-04 Thread Michael Still
Hi.

You might have noticed that the nova gate is currently broken. I
believe this is related to an oslo.messaging release today, and I have
proposed a fix at https://review.openstack.org/#/c/77844/

Cheers,
Michael

-- 
Rackspace Australia



Re: [openstack-dev] [nova] Thought exercise for a V2 only world

2014-03-04 Thread Morgan Fainberg
I think I missed the emphasis on the efforts towards superseding and/or making 
the call obsolete in my previous email (I was aiming to communicate far more 
than the few lines managed to convey). I honestly think that if we are 
sticking with V2 because Nova has such a large surface area, 
the only option is to treat each extension as its (almost) own isolated API in 
the context of deprecation, obsolescence, etc. I think the issue here is that 
we may be looking at the version of the API as something fixed in stone. 
Clearly, with the scope and surface area of Nova (this point has been made 
clear), there is no possible way we can handle a wholesale change.

As a deployer, and supporter of a number of OpenStack-based clouds, as long as 
the API is stable (yes, I'll give it a full year from when it is determined a 
change is needed), I don't see my customers complaining excessively; maybe we 
make it a 4-cycle deprecation? It is silly to say “because we called it X we 
can't ever take anything away”. I am all for not breaking the contract, but 
define the contract beyond “the spec”. This holds true especially when it comes 
down to continued growth and possibly moving the data from one place to 
somewhere better / more suited. Perhaps part of the real issue is the whole 
extension model. A well-documented, interoperable (across deployments) API 
doesn't have huge swaths of functionality that is optional (or more to the 
point, what is OpenStack Compute's API?). If you are adding a core feature, 
should it be an “Extension”?

Let's add one more step: ask deployments whether they are using the “extensions”. 
Make it part of the summit / development cycle. Get real information (send 
surveys?) and get to know the community running the software. That in itself 
will help indicate whether an extension is used. I think the crux is that we do not 
have a clear understanding (and have no way of getting the information) of what 
is being used and what isn't. Perhaps the best thing we can do is make this an 
exercise in understanding what people are using and how we can quantify that 
information, before we worry about “how do we remove functionality”.

Complete removal of functionality is probably going to be rare. It is far more 
likely that locations will shift and/or things will be rolled into more 
logical areas. At the speed we are moving (not slow, but not as fast as 
other things), it should be totally possible to support a 2+ cycle deprecation 
if something is being moved / shuffled / absorbed. But more importantly, 
knowing usage is far more important than knowing how to remove; the former will 
direct the latter.


So I propose changing the topic of this thread slightly: in a V2-only world, 
how do we know if something is used? How do we understand how it is used and 
when? Let's answer that instead.


—
Morgan Fainberg
Principal Software Engineer
Core Developer, Keystone
m...@metacloud.com



Re: [openstack-dev] [OpenStack-Dev] [Cinder] Open Source and community working together

2014-03-04 Thread Thierry Carrez
Luke Gorrie wrote:
 Cooperative behavior is vague.

I realize I was way too vague and apologize if you felt threatened in
any way. It is terminology borrowed from sociology, and I realize it
does not translate that well in general discussion (and appears stronger
than I really meant).

 Case in point: I have not successfully setup 3rd party CI for the ML2
 driver that I've developed on behalf of a vendor. Does this make me one
 of your uncooperative vendors? Do I need to worry about being fired
 because somebody at OpenStack decides to name and shame the company
 I'm doing the work for and make an example? (Is that what the
 deprecated neutron drivers list will be used for?)

This is a technical requirement, and failing to match those requirements
is clearly not the same as engaging in deception or otherwise failing
the OpenStack community code of conduct.

If you fail to match a technical requirement, the only risk for you is
getting removed from the mainline code because the developers can't
maintain it properly. There are no harsh feelings or blame involved; it's
just a natural thing. It's also perfectly valid to ship drivers for
OpenStack out of tree. They are not worse, they are just out of tree.

I hope this clarifies,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-04 Thread Thierry Carrez
James E. Blair wrote:
 Freenode has been having a rough time lately due to a series of DDoS
 attacks which have been increasingly disruptive to collaboration.
 Fortunately there's an alternative.
 
 OFTC URL:http://www.oftc.net/ is a robust and established alternative
 to Freenode.  It is a smaller network whose mission statement makes it a
 less attractive target.  It's significantly more stable than Freenode
 and has friendly and responsive operators.  The infrastructure team has
 been exploring this area and we think OpenStack should move to using
 OFTC.

There is quite a bit of literature out there pointing to Freenode, like
presentation slides from old conferences. We should expect people to
continue to join Freenode's channels forever. I don't think staying a
few weeks on those channels to redirect misled people will be nearly
enough. Could we have a longer plan? Like advertisement bots that would
advise people every n hours to join the right servers?

 [...]
 1) Create an irc.openstack.org CNAME record that points to
 chat.freenode.net.  Update instructions to suggest users configure their
 clients to use that alias.

I'm not sure that helps. The people who would get (and react to) the DNS
announcement are likely using proxies anyway, which you'll have to
unplug manually from Freenode on switch day. The vast majority of users
will just miss the announcement. So I'd rather just make a lot of noise
on switch day :)

Finally, I second Sean's question on OFTC's stability. As badly as
Freenode is hit by DoS, they have experience handling it, mitigation
procedures in place, and sponsors lined up to help, so the damage ends up
*relatively* limited. If OFTC raises its profile and becomes a target, are
we confident they would mitigate DoS as well as Freenode does? Or would
they just disappear from the map completely? I fear that we are trading
a known evil for some unknown here.

In all cases I would target post-release for the transition, maybe even
post-Summit.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-04 Thread Stephen Gran

On 04/03/14 11:01, Thierry Carrez wrote:

James E. Blair wrote:

Freenode has been having a rough time lately due to a series of DDoS
attacks which have been increasingly disruptive to collaboration.
Fortunately there's an alternative.

OFTC URL:http://www.oftc.net/ is a robust and established alternative
to Freenode.  It is a smaller network whose mission statement makes it a
less attractive target.  It's significantly more stable than Freenode
and has friendly and responsive operators.  The infrastructure team has
been exploring this area and we think OpenStack should move to using
OFTC.


There is quite a bit of literature out there pointing to Freenode, like
presentation slides from old conferences. We should expect people to
continue to join Freenode's channels forever. I don't think staying a
few weeks on those channels to redirect misled people will be nearly
enough. Could we have a longer plan ? Like advertisement bots that would
advise every n hours to join the right servers ?


Why not just set /topic to tell people to connect to OFTC and join there?

Cheers,
--
Stephen Gran
Senior Systems Integrator - theguardian.com


[openstack-dev] Disaster Recovery for OpenStack - call for stakeholder

2014-03-04 Thread Ronen Kat
Hello,

At the Hong Kong summit, there was a lot of interest around OpenStack 
support for Disaster Recovery, including a design summit session, an 
un-conference session and a break-out session.
In addition, we set up a wiki for OpenStack disaster recovery - see 
https://wiki.openstack.org/wiki/DisasterRecovery 
The first step was enabling volume replication in Cinder, which 
started in the Icehouse development cycle and will continue into Juno.

Toward the Juno summit and development cycle we would like to send out a 
call for disaster recovery stakeholders, looking to:
* Create a list of use cases and scenarios for disaster recovery with 
OpenStack
* Find interested parties who wish to contribute features and code to 
advance disaster recovery in OpenStack
* Plan the discussions needed at the Juno summit

To coordinate these efforts, I would like to invite you to a conference 
call on Wednesday, March 5 at 12pm ET to work together on coordinating 
actions for the Juno summit (an invitation is attached).
We will record minutes of the call at 
https://etherpad.openstack.org/p/juno-disaster-recovery-call-for-stakeholders 
(the link is also available from the disaster recovery wiki page).
If you are unable to join but are interested, please register yourself and 
share your thoughts.



Call-in numbers are available at 
https://www.teleconference.att.com/servlet/glbAccess?process=1&accessCode=6406941&accessNumber=1809417783#C2
 

Passcode: 6406941

Regards,
__
Ronen I. Kat, PhD
Storage Research
IBM Research - Haifa
Phone: +972.3.7689493
Email: ronen...@il.ibm.com




[openstack-dev] [Nova] What is the currently accepted way to do plugins

2014-03-04 Thread Murray, Paul (HP Cloud Services)
Hi All,

One of my patches has a query asking whether I am using the agreed way to load 
plugins: https://review.openstack.org/#/c/71557/

I followed the same approach as filters/weights/metrics, using nova.loadables. 
Was there an agreement to do it a different way? And if so, what is the agreed 
way of doing it? A pointer to an example, or even a documentation/wiki page, would 
be appreciated.

Thanks in advance,
Paul

Paul Murray
HP Cloud Services
+44 117 312 9309




Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-04 Thread Thierry Carrez
Stephen Gran wrote:
 On 04/03/14 11:01, Thierry Carrez wrote:
 James E. Blair wrote:
 Freenode has been having a rough time lately due to a series of DDoS
 attacks which have been increasingly disruptive to collaboration.
 Fortunately there's an alternative.

 OFTC URL:http://www.oftc.net/ is a robust and established alternative
 to Freenode.  It is a smaller network whose mission statement makes it a
 less attractive target.  It's significantly more stable than Freenode
 and has friendly and responsive operators.  The infrastructure team has
 been exploring this area and we think OpenStack should move to using
 OFTC.

 There is quite a bit of literature out there pointing to Freenode, like
 presentation slides from old conferences. We should expect people to
 continue to join Freenode's channels forever. I don't think staying a
 few weeks on those channels to redirect misled people will be nearly
 enough. Could we have a longer plan ? Like advertisement bots that would
 advise every n hours to join the right servers ?
 
 Why not just set /topic to tell people to connect to OFTC and join there?

We certainly would set /topic, but a lot of people ignore channel topics
(and a lot of IRC clients fail to display them). But if they read the
channel, they just can't ignore the public service announcement bots :)

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-04 Thread Daniel P. Berrange
On Tue, Mar 04, 2014 at 11:12:13AM +, Stephen Gran wrote:
 On 04/03/14 11:01, Thierry Carrez wrote:
 James E. Blair wrote:
 Freenode has been having a rough time lately due to a series of DDoS
 attacks which have been increasingly disruptive to collaboration.
 Fortunately there's an alternative.
 
  OFTC (http://www.oftc.net/) is a robust and established alternative
 to Freenode.  It is a smaller network whose mission statement makes it a
 less attractive target.  It's significantly more stable than Freenode
 and has friendly and responsive operators.  The infrastructure team has
 been exploring this area and we think OpenStack should move to using
 OFTC.
 
 There is quite a bit of literature out there pointing to Freenode, like
 presentation slides from old conferences. We should expect people to
 continue to join Freenode's channels forever. I don't think staying a
 few weeks on those channels to redirect misled people will be nearly
 enough. Could we have a longer plan ? Like advertisement bots that would
 advise every n hours to join the right servers ?
 
 Why not just set /topic to tell people to connect to OFTC and join there?

That's certainly something you want to do, but IME of moving IRC channels
in the past, plenty of people will never look at the #topic :-( You want
to be more aggressive like setting channel permissions to block anyone
except admins from speaking in the channel. Then set a bot with admin
rights to spam the channel once an hour telling people to go elsewhere.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova gate currently broken

2014-03-04 Thread Dina Belova
Michael, hello.

I found this issue a few days ago with oslo.messaging master (and now
it's released, as you see...)

The main problem here is: *Error importing module
nova.openstack.common.sslutils: duplicate option: ca_file*

That's because of https://review.openstack.org/#/c/71997/ merged to
oslo.messaging - and now released

The author changed opts not only in oslo.messaging files, but also in the
openstack.common one - sslutils - so we get a duplicate option error
with openstack.common.sslutils in nova.
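
To illustrate the failure mode (a minimal sketch only, not the actual nova
or oslo.messaging code): oslo.config refuses to register two different
definitions of the same option name in the same group, and since both
copies of sslutils register their options at import time, the error
surfaces as an import failure:

    # Minimal sketch of the duplicate-option failure; not the actual nova code.
    from oslo.config import cfg

    conf = cfg.ConfigOpts()
    conf.register_opt(cfg.StrOpt('ca_file'), group='ssl')
    # A second registration with a *different* definition of the same option
    # blows up, which is what shows as "duplicate option: ca_file" when the
    # module doing the registration is imported.
    conf.register_opt(cfg.StrOpt('ca_file', default='/etc/ssl/ca.pem'),
                      group='ssl')  # raises cfg.DuplicateOptError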

As I discussed with Doug Hellmann on #openstack-dev some days ago (as I
understood him), he proposed merging the same fix to oslo-incubator and
then updating all other projects that use oslo.messaging.

Here is the fix: https://review.openstack.org/#/c/76300/ to oslo.incubator

Although I suppose that, given the code freeze, your idea of simply not
using the latest oslo.messaging release will be much quicker.

On Tue, Mar 4, 2014 at 2:38 PM, Michael Still mi...@stillhq.com wrote:

 Hi.

 You might have noticed that the nova gate is currently broken. I
 believe this is related to an oslo.messaging release today, and have
 proposed a fix at https://review.openstack.org/#/c/77844/

 Cheers,
 Michael

 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [neutron] How to tell a compute host the control host is running Neutron

2014-03-04 Thread Sean Dague
On 03/03/2014 11:32 PM, Dean Troyer wrote:
 On Mon, Mar 3, 2014 at 8:36 PM, Kyle Mestery mest...@noironetworks.com wrote:
 
 In all cases today with Open Source plugins, Neutron agents have run
 on the hosts. For OpenDaylight, this is not the case. OpenDaylight
 integrates with Neutron as a ML2 MechanismDriver. But it has no
 Neutron code on the compute hosts. OpenDaylight itself communicates
 directly to those compute hosts to program Open vSwitch.
 
  
 
 devstack doesn't provide a way for me to express this today. On the
 compute hosts in the above scenario, there is no q-* services
 enabled, so the is_neutron_enabled function returns 1, meaning no
 neutron.
 
 
 True and working as designed.
  
 
 And then devstack sets Nova up to use nova-networking, which fails.
 
 
 This only happens if you have enabled nova-network.  Since it is on by
 default you must disable it.
  
 
 The patch I have submitted [1] modifies is_neutron_enabled to
 check for the meta neutron service being enabled, which will then
 configure nova to use Neutron instead of nova-networking on the
 hosts. If this sounds wonky and incorrect, I'm open to suggestions
 on how to make this happen.
 
 
 From the review: 
 
 is_neutron_enabled() is doing exactly what it is expected to do, return
 success if it finds any q-* service listed in ENABLED_SERVICES. If no
 neutron services are configured on a compute host, then this must not
 say they are.
 
 Putting 'neutron' in ENABLED_SERVICES does nothing and should do nothing.
 
 Since you are not implementing the ODS as a Neutron plugin (as far as
 DevStack is concerned) you should then treat it as a system service and
 configure it that way, adding 'opendaylight' to ENABLED_SERVICES
 whenever you want something to know it is being used.
 
  
 
 Note: I have another patch [2] which enables an OpenDaylight
 service, including configuration of OVS on hosts. But I cannot check
 if the opendaylight service is enabled, because this will only run
 on a single node, and again, not on each compute host.
 
 
 I don't understand this conclusion. in multi-node each node gets its own
 specific ENABLED_SERVICES list, you can check that on each node to
 determine how to configure that node.  That is what I'm trying to
 explain in that last paragraph above, maybe not too clearly.

So in an Open Daylight environment... what's running on the compute host
to coordinate host level networking?

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova gate currently broken

2014-03-04 Thread Sean Dague
Dina's analysis is a good one.

Basically oslo.messaging *completely violated* oslo-incubator rules and
updated common code in their tree. Then shipped a release, then broke
anyone trying to use them. This is actually very not cool.

This was not caught in the gate because the only thing that currently
tickles this is something which actually tries to touch all the config
options and imports oslo.messaging: nova's config generator, which runs
in the nova pep8 job.

Currently in the devstack/gate we aren't doing anything with code which
touches the ca_file option, which means python's lazy loading isn't ever
catching / exploding there.

The immediate fix is to block 1.3.0a8 -
https://review.openstack.org/#/c/77844/2/global-requirements.txt
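
(For anyone not following the review, the shape of such a block in
global-requirements.txt is just an exclusion pin - the line below is
illustrative only, not necessarily the exact line proposed there:)

    oslo.messaging>=1.3.0a4,!=1.3.0a8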

The longer term is to actually come up with a gate job which tests for
these kinds of incompatibilities in a live environment.

I also really think we need to revisit this idea of config options
getting declared inside oslo-incubator. Because at 5 source trees this
was manageable, but with the current number of projects including the
incubator, the upgrade process for something like this becomes crazy
really fast. This is just the tip of the iceberg as we go to carving up
the incubator more.

-Sean

On 03/04/2014 06:44 AM, Dina Belova wrote:
 Michael, hello.
 
 I've found this issue some days ago with oslo.messaging master (and now
 it's released, as you see..)
 
 The main problem here is: *Error importing module
 nova.openstack.common.sslutils: duplicate option: ca_file*
 
 That's because of https://review.openstack.org/#/c/71997/ merged to
 oslo.messaging - and now released
 
 Author changed opts not only in oslo.messaging files, but also in
 openstack.compute one - sslutils - and we've got duplicate option error
 with openstack.common.sslutils in nova.
 
 As I discussed that with Doug Hellmann (and understood him), on
 #openstack-dev some days ago, he proposed to merge the same fix to
 oslo-incubator and then update all other projects using oslo.messaging.
 
 Here is the fix: https://review.openstack.org/#/c/76300/ to oslo.incubator
 
 Although, I suppose now your idea with simply not using last
 oslo.messaging release will be much quicker due to code freeze.
 
 
 On Tue, Mar 4, 2014 at 2:38 PM, Michael Still mi...@stillhq.com wrote:
 
 Hi.
 
 You might have noticed that the nova gate is currently broken. I
 believe this is related to an oslo.messaging release today, and have
 proposed a fix at https://review.openstack.org/#/c/77844/
 
 Cheers,
 Michael
 
 --
 Rackspace Australia
 
 ___
 OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 
 Best regards,
 
 Dina Belova
 
 Software Engineer
 
 Mirantis Inc.
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Thought exercise for a V2 only world

2014-03-04 Thread Christopher Yeoh
On Mon, 3 Mar 2014 22:14:02 -0800
Chris Behrens cbehr...@codestud.com wrote:
 I don’t think I have an answer, but I’m going to throw out some of my
 random thoughts about extensions in general. They might influence a
 longer term decision. But I’m also curious if I’m the only one that
 feels this way:
 
 I tend to feel like extensions should start outside of nova and any
 other code needed to support the extension should be implemented by
 using hooks in nova. 

So at the moment we have this conflict between getting new API features
in NOW and balancing that against the pain of having to support
any mistakes forever. Especially at this stage of the cycle I feel really
bad about -1'ing API code because it might mean that they have to wait
another cycle and often the API code is just the tip of a really big
chunk of work. But at the same time once we merge these API features we
have to live with them forever.

So there's been the odd discussion going on about how we raise the bar
for making API changes, and I think what you have suggested would fit
in really well. It would allow us to continue to innovate quickly, but
extensions outside of nova could be seen as experimental and
potentially subject to backwards incompatible API changes so we're not
stuck with mistakes indefinitely.

 The modules implementing the hook code should be
 shipped with the extension. If hooks don’t exist where needed, they
 should be created in trunk. I like hooks. Of course, there’s probably
 such a thing as too many hooks, so… hmm… :)  Anyway, this addresses
 another annoyance of mine whereby code for extensions is mixed in all
 over the place. Is it really an extension if all of the supporting
 code is in ‘core nova’?


So the good news is that the V3 API framework already supports this
sort of model really well. Under the hood it is based on hooks, allowing
different parts of the API to offer hook points for other parts of the
API to extend. It's one of the techniques we used to clean up the mess
that is the V2 API server code, which is core but littered with
extension-specific code. Separating it all logically makes it much
easier to read/maintain.

There are also some nice extra controls: through whitelists/blacklists
you can, if you wish, ensure that the environment doesn't start loading
new API features unexpectedly, and that it complains loudly when ones
you think should be there aren't.

 That said, I then think that the only extensions shipped with nova
 are really ones we deem “optional core API components”. “optional”
 and “core” are probably oxymorons in this context, but I’m just going
 to go with it. There would be some sort of process by which we let
 extensions “graduate” into nova.
 
 Like I said, this is not really an answer. But if we had such a
 model, I wonder if it turns “deprecating extensions” into something
 more like “deprecating part of the API”… something less likely to
 happen. Extensions that aren’t used would more likely just never
 graduate into nova.

So, for want of a better term, incubating potential API features for a
cycle or two before graduating them into the Nova tree would, I think, be
very valuable. It avoids the situation where it's hard to really use the
API until it's implemented, yet you learn a lot from using it - and by
then it's too late. It allows us to have graduation-type criteria like:
is anyone using it? is all of the interface used, or only part of it?
how does this fit into the overall Nova API? etc. We can also fix all
the various little bugs in the API design before it gets locked in. And
we'll avoid situations like quota_classes, which got merged and shipped
for a while but, as far as we can tell, never really did anything useful
because the rest of it didn't get finished.

So a big +1 from me for this. But I think we'd want to be able to
symmetrically gate on this code, otherwise it becomes a bit of a
nightmare for people maintaining code in this external repository.

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Need unique ID for every Network Service

2014-03-04 Thread Srikanth Kumar Lingala

Hi Roger,
Yes, I agree with your comments.

Following are a few use cases for an LBaaS UUID:

i) There is a need for common configuration support for SLBs in the Network Node 
and the SLBs deployed [using Network Service Chains] for East-West traffic for 
a Tenant. To map the configuration for Network Node - SLB and Service Chain - 
SLB, we need a UUID for each SLB configuration, so that we can map the 
configuration based on Tenant network requirements.

ii) To support multiple configurations for an SLB, the UUID of the SLB can be 
useful to map the configuration dynamically.

Regards,
Srikanth

WICKES, ROGER rw3...@att.com wrote:


Maybe I am misunderstanding the debate, but imho Every OpenStack Service (XaaS) 
needs to be listed in the Service Catalog as being available (and stable and 
tested), and every instance of that service, when started, needs a service ID, 
and every X created by that service needs a UUID aka object id. This is 
regardless of how many of them are per tenant or host or whatever. This 
discussion may be semantics, but just to be clear, LBaaS is the service that is 
called to create an LB.

On the surface, it makes sense that you would only have one Service running per 
tenant; every object or instantiation created by that service (a Load Balancer, 
in this case) must have a UUID. I can't imagine why you would want multiple 
LBaaS services running at the same time, but again my imagination is limited. I 
am sure someone else has more imagination, such as a tenant having two vApps 
located on hosts in two different data centers, and they want an LBaaS in each 
data center since their inventory system or whatever is restricted to a single 
data center. If there were like two or three LBaaS' running, how would Neutron 
or Heat etc. know which one to call (criteria) when the network changes? It 
would be like having two butlers.

A UUID on each Load Balancer is needed for alarming, callbacks, service 
assurance, service delivery, service availability monitoring and reporting, 
billing, compliance audits, and simply being able to modify the service. If 
there is an n-ary tuple relationship between LB and anything, you might be 
inclined to restrict only one LB per vApp. However, for ultra-high volume and 
high-availability apps we may want cross-redundant LB's with a third LB in 
front of the first two; that way if one gets overloaded or crashes, we can 
route to the other. A user might want to even mix and match hard and soft LB's 
in a hybrid environment. So, even in that use case, restricting the number of 
LB's or their tupleness is restrictive.

I also want to say to those who are struggling with reasonable n-ary 
relationship modeling: This is just a problem with global app development, 
where there are so many use cases out there. It's tough to never say never, as 
in, you would never want more than one LBaaS per tenant.

[Roger] --
From: Srikanth Kumar Lingala [mailto:srikanth.ling...@freescale.com]
Sent: Monday, March 03, 2014 5:18 PM
To: Stephen Balukoff; Veera Reddy
Cc: openstack-dev@lists.openstack.org; openstack
Subject: Re: [openstack-dev] [Openstack] Need unique ID for every Network 
Service

Yes - I will send a mail to Eugene Nikanorov, requesting to add this to the 
agenda of the coming weekly discussion.
The detailed requirement is as follows:
In the current implementation, only one LBaaS configuration is possible per 
tenant. It would be better to have multiple LBaaS configurations for each tenant.
We are planning to configure haproxy as a VM in a Network Service Chain. In a 
chain, there may be multiple Network Services of the same type 
(e.g. haproxy). For that, each Network Service should have a unique ID 
(UUID) per tenant.

Regards,
Srikanth.

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Saturday, March 01, 2014 1:22 AM
To: Veera Reddy
Cc: Lingala Srikanth Kumar-B37208; 
openstack-dev@lists.openstack.org; 
openstack
Subject: Re: [Openstack] Need unique ID for every Network Service

Hi y'all!

The ongoing debate in the LBaaS group is whether the concept of a 
'Loadbalancer' needs to exist  as an entity. If it is decided that we need it, 
I'm sure it'll have a unique ID. (And please feel free to join the discussion 
on this as well, eh!)

Stephen

On Thu, Feb 27, 2014 at 10:27 PM, Veera Reddy veerare...@gmail.com wrote:
Hi,

Good idea to have a unique ID for each entry of network functions,
so that we can configure multiple network functions with different configurations.


Regards,
Veera.

On Fri, Feb 28, 2014 at 11:23 AM, Srikanth Kumar Lingala srikanth.ling...@freescale.com wrote:
Hi-
In the existing Neutron, we have FWaaS, LBaaS, VPNaaS, etc.
In FWaaS, each Firewall has its own UUID.
It is good to have a unique ID [UUID] for LBaaS also.

Please share your comments on the above.

Regards,
Srikanth.



Re: [openstack-dev] [nova] Thought exercise for a V2 only world

2014-03-04 Thread Sean Dague
On 03/04/2014 01:14 AM, Chris Behrens wrote:
 
 On Mar 3, 2014, at 9:23 PM, Joe Gordon joe.gord...@gmail.com wrote:
 
 Hi All,

 here's a case worth exploring in a v2 only world ... what about some
 extension we really think is dead and should go away?  can we ever
 remove it? In the past we have said backwards compatibility means no
 we cannot remove any extensions, if we adopt the v2 only notion of
 backwards compatibility is this still true?
 
 I don’t think I have an answer, but I’m going to throw out some of my random 
 thoughts about extensions in general. They might influence a longer term 
 decision. But I’m also curious if I’m the only one that feels this way:
 
 I tend to feel like extensions should start outside of nova and any other 
 code needed to support the extension should be implemented by using hooks in 
 nova. The modules implementing the hook code should be shipped with the 
 extension. If hooks don’t exist where needed, they should be created in 
 trunk. I like hooks. Of course, there’s probably such a thing as too many 
 hooks, so… hmm… :)  Anyway, this addresses another annoyance of mine whereby 
 code for extensions is mixed in all over the place. Is it really an extension 
 if all of the supporting code is in ‘core nova’?
 
 That said, I then think that the only extensions shipped with nova are really 
 ones we deem “optional core API components”. “optional” and “core” are 
 probably oxymorons in this context, but I’m just going to go with it. There 
 would be some sort of process by which we let extensions “graduate” into nova.
 
 Like I said, this is not really an answer. But if we had such a model, I 
 wonder if it turns “deprecating extensions” into something more like 
 “deprecating part of the API”… something less likely to happen. Extensions 
 that aren’t used would more likely just never graduate into nova.

So this approach actually really concerns me, because what it says is
that we should be optimizing Nova for out of tree changes to the API
which are vendor specific. Which I think is completely the wrong
direction. Because in that world you'll never be able to move between
Nova installations. What's worse is you'll get multiple people
implementing the same feature out of tree, slightly differently.

I 100% agree the current extensions approach is problematic. It's used
as a way to circumvent the idea of a stable API (mostly with oh, it's
an extension, we need this feature right now, and it's not part of core
so we don't need to give the same guaruntees.)

So realistically I want to march us towards a place where we stop doing
that. Nova out of the box should have all the knobs that anyone needs to
build these kinds of features on top of. If not, we should fix that. It
shouldn't be optional.

If that means we need to have more folks spending time on coming
together on that interface, so be it, but this world where Nova is a
bucket of parts isn't pro-user.

A really great instance is the events extension that Dan is working on.
Without it, Neutron can't work reliably. But it's an extension. So you
can turn it off. So we just made it optional to work with the rest of
OpenStack. Also the fact that it was an admin API was used as
justification on that. But if reliable integration with Neutron isn't
considered a core feature of Nova... well, I'm not sure what would be.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] Code review: bp/allow-multiple-subnets-on-gateway-port

2014-03-04 Thread Randy Tuttle
Resending as I realized I had omitted a subject. Sorry for blasting
again.
Hi all.

Just submitted the code[1] for supporting dual-stack (IPv6 and IPv4) on an
external gateway port of a tenant's router (l3_agent). It implements [2].

Please, if you would, have a look and provide any feedback. I would be
grateful.

Cheers,
Randy

[1]. https://review.openstack.org/#/c/77471/
[2].
https://blueprints.launchpad.net/neutron/+spec/allow-multiple-subnets-on-gateway-port
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Invitation to Survey on Design Patterns

2014-03-04 Thread Bouwers, P.
Dear OpenStack Developer,

We are two students from the University of Groningen doing research on the
impact of design patterns on software quality attributes. As part of this
research we are conducting a survey among software developers. We would
like to include your perspective on this subject and hope you would be
willing to participate in this survey. The survey will take approximately
10 to 15 minutes.

To complete the survey, please go to the following link:
https://www.surveymonkey.com/s/Y9XCWHB.

Thank you in advance; your input is very valuable to us.

Thanks for your time,

P. Bouwers (p.bouw...@student.rug.nl)
W. Visser (w.m.visse...@student.rug.nl)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [Cinder] Open Source and community working together

2014-03-04 Thread Luke Gorrie
On 4 March 2014 11:40, Thierry Carrez thie...@openstack.org wrote:

 This is a technical requirement, and failing to match those requirements
 is clearly not the same as engaging in deception or otherwise failing
 the OpenStack community code of conduct.


Thank you for clearing that up!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PCI SRIOV meeting suspend?

2014-03-04 Thread Robert Li (baoli)
Hi Yongli,

I have been looking at your patch set. Let me look at it again if you have
a new update. 

The meeting changed back to UTC 1300 Tuesday.

thanks,
Robert

On 3/4/14 12:39 AM, yongli he yongli...@intel.com wrote:

On 2014-03-04 13:33, Irena Berezovsky wrote:
 Hi Yongli He,
 The PCI SRIOV meeting switched back to weekly occurrences.
 Next meeting will be today at the usual time slot:
 https://wiki.openstack.org/wiki/Meetings#PCI_Passthrough_Meeting

 In coming meetings we would like to work on content to be proposed for Juno.
 BR,
 Irena

thanks, Irena.

Yongli he

 -Original Message-
 From: yongli he [mailto:yongli...@intel.com]
 Sent: Tuesday, March 04, 2014 3:28 AM
 To: Robert Li (baoli); Irena Berezovsky; OpenStack Development Mailing
List
 Subject: PCI SRIOV meeting suspend?

 HI, Robert

 does it stop for while?

 and if it is convenient, please review this patch set and check if the
interface is OK.


 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/pci-extra-info,n,z

 Yongli He



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Suggestions for alarm improvements

2014-03-04 Thread Sampath Priyankara
Hi All,

 Sandy did a great job by putting up some valuable points for alarm
improvements in
 https://wiki.openstack.org/wiki/Ceilometer/AlarmImprovements
 Is there any further discussion about [Part 4 - Moving Alarms into the
Pipelines] in the above doc?
 I tried to find more info about the above topic but failed.
 If someone is working on Part 4 above, I would really appreciate a quick
reply on this.
 Thanks,
  Sampath.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Sean Dague
So this thread is getting deep again, as I expect they all will, so I'm
just going to top post and take the ire for doing so.

I also want to summarize what I've seen in the threads so far:

v2 needed forever - if I do a sentiment analysis here looking at the
orgs people are coming from, most of this is currently coming from
Rackspace folks (publicly). There might be many reasons for this, one of
which is the fact that they've done a big transition in the near past
(between *not openstack* and Openstack), and felt that pain.
Understanding that pain is good.

It is interesting that Phil actually brings up a completely different
issue from the HP cloud side, which is the amount of complaints they are
having to field about how terrible the v2 API is. HP has actually had an
OpenStack cloud public longer than Rackspace. So this feedback shouldn't
be lost.

So I feel like while some deployers have expressed no interest in moving
forward on API, others can't get there soon enough.

Which makes me think a lot about point 4. As has already been suggested
we could actually make v2 a proxy to v3. And just like with images and
volumes, it becomes frozen in Juno, and people that want more features
will move to the v3 API. Just like with other services.

This requires internal cleanups as well. However it wouldn't shut down
future evolution of the interface.

Nova really has 4 interfaces today
 * Nova v2 JSON
 * Nova v2 XML
 * Nova v3 JSON
 * EC2

I feel like if we had to lose one to decrease maintenance cost, Nova v2
XML is the one to lose. And if we did, v2 on v3 isn't the craziest thing
in the world. It's not free, but neither is the backport.

I also feel like the code duplication approach for v2 and v3 was taken
because the Glance folks had a previously bad experience on common code
with 2 APIs. However, their surface was small enough that the dual code
trees were fine. Nova is a beast in this regard.

I also feel like more folks are excited about working on the v2 on v3
approach, given that Kenichi is already prototyping on it. And it is
important to have excitement in this work as well. The human factor,
about the parts that people want to work on, is critical to the success
of Nova as a project.

This project has always been about coming up with the best ideas,
regardless of where they came from. So I'd like to make sure people
aren't making up their mind early on this, and have retreated to the
idea of winning this discussion. The outcome of this is important.
Because it basically sets up the framework of how we work on improving
the Nova interface... for a long time.

I also feel like the 2 complaints that come up on v3 repeatedly, that
it's a bunch of copy and paste, and that it doesn't implement anything
new, are kind of disingenuous. Because those were explicit design
constraints imposed and agreed on by the nova core team on the approach.
And if those are the complaints on the approach, why weren't those
brought up in advance?

At the end of the day, I'm on the fence. Because what I actually want is
somewhat orthogonal to all of this. Which is a real discoverable
interface, and substantially less optional Nova core. And I've yet to
figure out which approach sets us up for doing that better.

-Sean

On 03/03/2014 12:32 PM, Russell Bryant wrote:
 There has been quite a bit of discussion about the future of the v3 API
 recently.  There has been growing support for the idea that we should
 change course and focus on evolving the existing v2 API instead of
 putting out a new major revision.  This message is a more complete
 presentation of that proposal that concludes that we can do what we
 really need to do with only the v2 API.
 
 Keeping only the v2 API requires some confidence that we can stick with
 it for years to come.  We don't want to be revisiting this any time
 soon.  This message addresses a bunch of different questions about how
 things would work if we only had v2.
 
 1) What about tasks?
 
 In some cases, the proposed integration of tasks is backwards
 compatible.  A task ID will be added to a header.  The biggest point of
 debate was if and how we would change the response for creating a
 server.  For tasks in v2, we would not change the response by default.
 The task ID would just be in a header.  However, if and when the client
 starts exposing version support information, we can provide an
 alternative/preferred response based on tasks.
 
 For example:
 
Accept: application/json;type=task
 
 2) Versioning extensions
 
 One of the points being addressed in the v3 API was the ability to
 version extensions.  In v2, we have historically required new API
 extensions, even for changes that are backwards compatible.  We propose
 the following:
 
  - Add a version number to v2 API extensions
  - Allow backwards compatible changes to these API extensions,
 accompanied by a version number increase
  - Add the option to advertise an extension as deprecated, which can be
 used for all 

Re: [openstack-dev] Neutron: Need help with tox failure in VPN code

2014-03-04 Thread Paul Michali
Bo,

I did that change, and it passes when I run neutron.tests.unit.services.vpn, 
but not when I run the full tox or neutron.tests.unit.services.  I still get 
failures (either error code 10 or the test fails with no info).

Irene,

Any thoughts on why the driver is not loading (even with the mod that Bo 
suggests)?

Nachi,

I just tried run_tests.sh and it fails to run the test (haven't used that in a 
very long time, so not sure I'm running it correctly). Do I need any special 
args, when running that? I tried './run_tests.sh -f -V -P' but it ran 0 tests.


All,

The bottom line here is that I can't seem to get the loading of the service driver 
from neutron.conf to work, irrespective of the blueprint change set. If I use a 
hard-coded driver (as on the master branch, and as used in the latest patch for 74144), 
all the tests work. But for this blueprint we need to be able to load the 
service driver (so that the blueprint driver can be loaded). The issue is 
unrelated to the blueprint functionality, as shown by the latest patch and by 
previous versions where I had the full service type framework implementation. 
It seems like there is some problem with this partial application of STF to 
load the service driver.

I took the (working) 74144 patch and made the changes below to load the service 
plugin from neutron.conf, and I see tox failures. I've also patched this into the 
master branch, and I see the same issue!  IOW, there is something wrong with 
the method I'm using to set up the service driver, at least with respect to the 
current test suite.
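
For reference, the neutron.conf entry this change is meant to load (equivalent
to the provider string set in the test override further down; shown here only
as an illustration) would be:

    [service_providers]
    service_provider = VPN:vpnaas:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default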

diff --git a/neutron/services/vpn/plugin.py b/neutron/services/vpn/plugin.py
index 5d818a3..41cbff0 100644
--- a/neutron/services/vpn/plugin.py
+++ b/neutron/services/vpn/plugin.py
@@ -18,11 +18,9 @@
 #
 # @author: Swaminathan Vasudevan, Hewlett-Packard
 
-# from neutron.db import servicetype_db as st_db
 from neutron.db.vpn import vpn_db
-# from neutron.plugins.common import constants
-# from neutron.services import service_base
-from neutron.services.vpn.service_drivers import ipsec as ipsec_driver
+from neutron.plugins.common import constants
+from neutron.services import service_base
 
 
 class VPNPlugin(vpn_db.VPNPluginDb):
@@ -41,12 +39,10 @@ class VPNDriverPlugin(VPNPlugin, 
vpn_db.VPNPluginRpcDbMixin):
 #TODO(nati) handle ikepolicy and ipsecpolicy update usecase
 def __init__(self):
 super(VPNDriverPlugin, self).__init__()
-self.ipsec_driver = ipsec_driver.IPsecVPNDriver(self)
-# Currently, if the following code is used, there are UT failures
-# self.service_type_manager = st_db.ServiceTypeManager.get_instance()
-# drivers, default_provider = service_base.load_drivers(
-# constants.VPN, self)
-# self.ipsec_driver = drivers[default_provider]
+# Dynamically load the current service driver
+drivers, default_provider = service_base.load_drivers(
+constants.VPN, self)
+self.ipsec_driver = drivers[default_provider]
 
 def _get_driver_for_vpnservice(self, vpnservice):
 return self.ipsec_driver
diff --git a/neutron/tests/unit/services/vpn/test_vpnaas_driver_plugin.py 
b/neutron/tests/unit/services/vpn/test_vpnaas_driver_plugin.py
index 8c25d7e..9531938 100644
--- a/neutron/tests/unit/services/vpn/test_vpnaas_driver_plugin.py
+++ b/neutron/tests/unit/services/vpn/test_vpnaas_driver_plugin.py
@@ -17,6 +17,7 @@
 import contextlib
 
 import mock
+from oslo.config import cfg
 
 from neutron.common import constants
 from neutron import context
@@ -44,6 +45,12 @@ class TestVPNDriverPlugin(test_db_vpnaas.TestVpnaas,
 self.driver = mock.Mock()
 self.driver.service_type = ipsec_driver.IPSEC
 driver_cls.return_value = self.driver
+vpnaas_provider = (p_constants.VPN +
+   ':vpnaas:neutron.services.vpn.'
+   'service_drivers.ipsec.IPsecVPNDriver:default')
+cfg.CONF.set_override('service_provider',
+  [vpnaas_provider],
+  'service_providers')
 super(TestVPNDriverPlugin, self).setUp(
 vpnaas_plugin=VPN_DRIVER_CLASS)


Any advise?


PCM (Paul Michali)

MAIL  p...@cisco.com
IRCpcm_  (irc.freenode.net)
TW@pmichali
GPG key4525ECC253E31A83
Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83

On Mar 4, 2014, at 1:31 AM, Bo Lin l...@vmware.com wrote:

 I don't know whether i got your point. But try to modify 
 /neutron/tests/unit/services/vpn/test_vpnaas_driver_plugin.py like the 
 following, the error would be fixed:
 --- a/neutron/tests/unit/services/vpn/test_vpnaas_driver_plugin.py
 +++ b/neutron/tests/unit/services/vpn/test_vpnaas_driver_plugin.py
 @@ -17,6 +17,7 @@
 import contextlib
 
 import mock
 +from oslo.config import cfg
 
 from neutron.common import constants
 from neutron import context
 @@ -44,6 +45,11 @@ class TestVPNDriverPlugin(test_db_vpnaas.TestVpnaas,
 

Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Daniel P. Berrange
On Tue, Mar 04, 2014 at 07:49:03AM -0500, Sean Dague wrote:
 So this thread is getting deep again, as I expect they all will, so I'm
 just going to top post and take the ire for doing so.
 
 I also want to summarize what I've seen in the threads so far:
 
 v2 needed forever - if I do a sentiment analysis here looking at the
 orgs people are coming from, most of this is currently coming from
 Rackspace folks (publicly). There might be many reasons for this, one of
 which is the fact that they've done a big transition in the near past
 (between *not openstack* and Openstack), and felt that pain.
 Understanding that pain is good.
 
 It is interesting that Phil actually brings up a completely different
 issue from the HP cloud side, which is the amount of complaints they are
 having to field about how terrible the v2 API is. HP has actually had an
 OpenStack cloud public longer than Rackspace. So this feedback shouldn't
 be lost.
 
 So I feel like while some deployers have expressed no interest in moving
 forward on API, others can't get there soon enough.
 
 Which makes me think a lot about point 4. As has already been suggested
 we could actually make v2 a proxy to v3. And just like with images and
 volumes, it becomes frozen in Juno, and people that want more features
 will move to the v3 API. Just like with other services.

 This requires internal cleanups as well. However it wouldn't shut down
 future evolution of the interface.
 
 Nova really has 4 interfaces today
  * Nova v2 JSON
  * Nova v2 XML
  * Nova v3 JSON
  * EC2
 
 I feel like if we had to lose one to decrease maintenance cost, Nova v2
 XML is the one to lose. And if we did, v2 on v3 isn't the craziest thing
 in the world. It's not free, but neither is the backport.

A proxy of v2 onto v3 is appealing, but do we think we have good enough
testing of v2 to ensure that any proxy impl is bug-for-bug compatible
with the original native v2 implementation ? Avoiding breakage of client
apps is to me the key reason for keeping v2 around, so we'd want very
high confidence that any proxy impl is functionally identical with the
orginal impl.

If we want to proxy v2 onto v3, then by that same argument should we
be proxying EC2 onto v3 as well.  ie Nova v3 JSON be the only supported
API, and every thing else just be a proxy, potentially maintained out
of tree from main nova codebase.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-04 Thread Dina Belova
Joe, thanks for discussion.


 I think nova should natively support booting an instance for a

 limited amount of time. I would use this all the time to boot up

 devstack instances (boot devstack instance for 5 hours)


Really nice idea, but to provide time-based resource management for any
resource type in OpenStack (instance, volume, compute host, Heat stack, etc.),
that would need to be implemented in every project. And even with that feature
implemented, without a central leasing service there are other
reservation-related capabilities - like user notifications about an approaching
end of lease, energy efficiency, etc. - that do not really fit the idea of some
already existing project / program.


 Reserved and Spot Instances. I like Amazon's concept of reserved and

 spot instances it would be cool if we could support something similar


AWS reserved instances look like your first idea of instances booted for
a limited amount of time - even though in the Amazon use case that's a *much*
longer time. As for spot instances, I believe this idea is more about some
billing service that computes the current instance/host/whatever price based
on the current compute capacity load, etc.


 Boot an instances for 4 hours every morning. This sounds like

 something that
https://wiki.openstack.org/wiki/Mistral#Tasks_Scheduling_-_Cloud_Cron

 can handle.


That's not really a thing we've implemented in Climate - we have not
implemented periodic tasks like that. Right now a lease might be not started,
started, or ended - without any 'sleeping' periods in between. Although, that's
quite a nice idea - to implement this feature using Mistral.


 Give someone 100 CPU hours per time period of quota. Support quotas

 by overall usage not current usage. This sounds like something that

 each service should support natively.


Quotas (if we speak about time management) should be satisfied in any time
period. Now in Climate that's done by taking cloud resources from the common
pool at the lease creation moment - but, as you can guess, that does not allow
resources to be reused while the lease has not started yet. To implement
resource reuse, advanced quota management is truly needed. That idea was there
at the very beginning of the Climate project and we definitely need it in the
future.


 Reserved Volume: Not sure how that works.


Now we're in the process of investigating this point too. Ideally that
should be some kind of volume state that means only a DB record, without
the real block storage created - it would be created only at the lease
start date. But that requires many changes to Cinder. The other idea is to do
the same as Climate does with compute hosts - consider cinder-volumes as
dedicated to Climate, and Climate will manage them itself. The reserved volume
idea came from thoughts about a 'reserved stack' - to have a working group like
vm+volume+assigned_ip you really need that.


 Virtual Private Cloud.  It would be great to see OpenStack support a

 hardware isolated virtual private cloud, but not sure what the best

 way to implement that is.


There was a proposal for pclouds by Phil Day, which was changed after the
Icehouse summit to something new. The first idea was to use exactly pclouds,
but as they are not implemented now, Climate works directly with host
aggregates to imitate them. In future, when we have the opportunity to use
pclouds (it does not matter how they'll be called, really), we'll do it, of
course.


 Capacity Planning. Sure, but it would be nice to see a more fleshed

 out story for it.


Sure. I believe that having a resource reuse opportunity (when the lease
creation and resource allocation steps are no longer the same one) will help to
manage future capacity peak loads - because the cloud provider will know about
future user needs before the resources are really used.


Cheers

Dina


On Tue, Mar 4, 2014 at 12:30 AM, Joe Gordon joe.gord...@gmail.com wrote:

 Overall I think Climate is trying to address some very real use cases,
 but its unclear to me where these solutions should live or how to
 solve them. Furthermore I understand what a reservation means for nova
 but I am not sure what it means in Cinder, Swift etc.

 To give a few examples:
 * I think nova should natively support booting an instance for a
 limited amount of time. I would use this all the time to boot up
 devstack instances (boot devstack instance for 5 hours)
 * Reserved and Spot Instances. I like Amazon's concept of reserved and
 spot instances it would be cool if we could support something similar
 * Boot an instances for 4 hours every morning. This sounds like
 something that
 https://wiki.openstack.org/wiki/Mistral#Tasks_Scheduling_-_Cloud_Cron
 can handle.
 * Give someone 100 CPU hours per time period of quota. Support quotas
 by overall usage not current usage. This sounds like something that
 each service should support natively.
 * Reserved Volume: Not sure how that works.
 * Virtual Private Cloud.  It would be great to see OpenStack support a
 hardware isolated virtual private cloud, but not sure what the 

Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Sean Dague
On 03/04/2014 08:14 AM, Daniel P. Berrange wrote:
 On Tue, Mar 04, 2014 at 07:49:03AM -0500, Sean Dague wrote:
 So this thread is getting deep again, as I expect they all will, so I'm
 just going to top post and take the ire for doing so.

 I also want to summarize what I've seen in the threads so far:

 v2 needed forever - if I do a sentiment analysis here looking at the
 orgs people are coming from, most of this is currently coming from
 Rackspace folks (publicly). There might be many reasons for this, one of
 which is the fact that they've done a big transition in the near past
 (between *not openstack* and Openstack), and felt that pain.
 Understanding that pain is good.

 It is interesting that Phil actually brings up a completely different
 issue from the HP cloud side, which is the amount of complaints they are
 having to field about how terrible the v2 API is. HP has actually had an
 OpenStack cloud public longer than Rackspace. So this feedback shouldn't
 be lost.

 So I feel like while some deployers have expressed no interest in moving
 forward on API, others can't get there soon enough.

 Which makes me think a lot about point 4. As has already been suggested
 we could actually make v2 a proxy to v3. And just like with images and
 volumes, it becomes frozen in Juno, and people that want more features
 will move to the v3 API. Just like with other services.
 
 This requires internal cleanups as well. However it wouldn't shut down
 future evolution of the interface.

 Nova really has 4 interfaces today
  * Nova v2 JSON
  * Nova v2 XML
  * Nova v3 JSON
  * EC2

 I feel like if we had to lose one to decrease maintenance cost, Nova v2
 XML is the one to lose. And if we did, v2 on v3 isn't the craziest thing
 in the world. It's not free, but neither is the backport.
 
 A proxy of v2 onto v3 is appealing, but do we think we have good enough
 testing of v2 to ensure that any proxy impl is bug-for-bug compatible
 with the original native v2 implementation ? Avoiding breakage of client
 apps is to me the key reason for keeping v2 around, so we'd want very
 high confidence that any proxy impl is functionally identical with the
 orginal impl.

So because of Nova objects, Icehouse v2 isn't bug-for-bug compatible
with Havana v2 anyway. The API isn't isolated enough from the internals
at this point to actually provide those guaruntees in v2.

It doesn't seem unreasonable to me that v2 could be as good a guarantee
as the one we have. A focus on increased coverage of the v2 API in
tempest would help. It would be great if the folks that are really
interested in keeping stable v2 (and ensuring we lock down the behavior)
would contribute there.

 If we want to proxy v2 onto v3, then by that same argument should we
 be proxying EC2 onto v3 as well.  ie Nova v3 JSON be the only supported
 API, and every thing else just be a proxy, potentially maintained out
 of tree from main nova codebase.

There are some fundamental issues around ids that make that problematic
(and that's only one of the concerns I know about). EC2 as a proxy (out
of tree) was discussed 3 cycles ago, and no one stepped up. And the
result has been no one working on EC2. So I'm actually really concerned
that if we don't reach a conclusion around Nova API that people are
excited about working on, we see the same core collapse there as we saw
on EC2.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [neutron] How to tell a compute host the control host is running Neutron

2014-03-04 Thread Kyle Mestery
On Mon, Mar 3, 2014 at 10:32 PM, Dean Troyer dtro...@gmail.com wrote:

 On Mon, Mar 3, 2014 at 8:36 PM, Kyle Mestery mest...@noironetworks.com wrote:

  In all cases today with Open Source plugins, Neutron agents have run on
 the hosts. For OpenDaylight, this is not the case. OpenDaylight integrates
 with Neutron as a ML2 MechanismDriver. But it has no Neutron code on the
 compute hosts. OpenDaylight itself communicates directly to those compute
 hosts to program Open vSwitch.



 devstack doesn't provide a way for me to express this today. On the
 compute hosts in the above scenario, there is no q-* services enabled, so
 the is_neutron_enabled function returns 1, meaning no neutron.


 True and working as designed.


  And then devstack sets Nova up to use nova-networking, which fails.


 This only happens if you have enabled nova-network.  Since it is on by
 default you must disable it.


I have disable_all_services in local.conf for my compute hosts. I then
selectively enable the following services:

enable_service nova n-cpu neutron n-novnc qpid

With that config, nova is setup to use nova networking.


 The patch I have submitted [1] modifies is_neutron_enabled to check for
 the meta neutron service being enabled, which will then configure nova to
 use Neutron instead of nova-networking on the hosts. If this sounds wonky
 and incorrect, I'm open to suggestions on how to make this happen.


 From the review:

 is_neutron_enabled() is doing exactly what it is expected to do, return
 success if it finds any q-* service listed in ENABLED_SERVICES. If no
 neutron services are configured on a compute host, then this must not say
 they are.

 Putting 'neutron' in ENABLED_SERVICES does nothing and should do nothing.

 Since you are not implementing the ODS as a Neutron plugin (as far as
 DevStack is concerned) you should then treat it as a system service and
 configure it that way, adding 'opendaylight' to ENABLED_SERVICES whenever
 you want something to know it is being used.


Note: I have another patch [2] which enables an OpenDaylight service,
 including configuration of OVS on hosts. But I cannot check if the
 opendaylight service is enabled, because this will only run on a single
 node, and again, not on each compute host.


 I don't understand this conclusion. in multi-node each node gets its own
 specific ENABLED_SERVICES list, you can check that on each node to
 determine how to configure that node.  That is what I'm trying to explain
 in that last paragraph above, maybe not too clearly.

 OK, I understand now. And what I need to do is add support for not only an
opendaylight service, but also an opendaylight-compute service, as that
is what I need to enable on compute nodes to set up OVS to point at
OpenDaylight.
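
So a compute-node local.conf under that model might look roughly like the
sketch below (the opendaylight-compute service name comes from the pending
patch and is subject to change; the rest mirrors what I already have today):

    [[local|localrc]]
    disable_all_services
    enable_service nova n-cpu neutron n-novnc qpid
    # hypothetical service name, pending the OpenDaylight devstack patch:
    enable_service opendaylight-compute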

Thanks,
Kyle


 dt

 --

 Dean Troyer
 dtro...@gmail.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [neutron] How to tell a compute host the control host is running Neutron

2014-03-04 Thread Kyle Mestery
On Tue, Mar 4, 2014 at 5:46 AM, Sean Dague s...@dague.net wrote:

 On 03/03/2014 11:32 PM, Dean Troyer wrote:
  On Mon, Mar 3, 2014 at 8:36 PM, Kyle Mestery mest...@noironetworks.com wrote:
 
  In all cases today with Open Source plugins, Neutron agents have run
  on the hosts. For OpenDaylight, this is not the case. OpenDaylight
  integrates with Neutron as a ML2 MechanismDriver. But it has no
  Neutron code on the compute hosts. OpenDaylight itself communicates
  directly to those compute hosts to program Open vSwitch.
 
 
 
  devstack doesn't provide a way for me to express this today. On the
  compute hosts in the above scenario, there is no q-* services
  enabled, so the is_neutron_enabled function returns 1, meaning no
  neutron.
 
 
  True and working as designed.
 
 
  And then devstack sets Nova up to use nova-networking, which fails.
 
 
  This only happens if you have enabled nova-network.  Since it is on by
  default you must disable it.
 
 
  The patch I have submitted [1] modifies is_neutron_enabled to
  check for the meta neutron service being enabled, which will then
  configure nova to use Neutron instead of nova-networking on the
  hosts. If this sounds wonky and incorrect, I'm open to suggestions
  on how to make this happen.
 
 
  From the review:
 
  is_neutron_enabled() is doing exactly what it is expected to do, return
  success if it finds any q-* service listed in ENABLED_SERVICES. If no
  neutron services are configured on a compute host, then this must not
  say they are.
 
  Putting 'neutron' in ENABLED_SERVICES does nothing and should do nothing.
 
  Since you are not implementing the ODS as a Neutron plugin (as far as
  DevStack is concerned) you should then treat it as a system service and
  configure it that way, adding 'opendaylight' to ENABLED_SERVICES
  whenever you want something to know it is being used.
 
 
 
  Note: I have another patch [2] which enables an OpenDaylight
  service, including configuration of OVS on hosts. But I cannot check
  if the opendaylight service is enabled, because this will only run
  on a single node, and again, not on each compute host.
 
 
  I don't understand this conclusion. in multi-node each node gets its own
  specific ENABLED_SERVICES list, you can check that on each node to
  determine how to configure that node.  That is what I'm trying to
  explain in that last paragraph above, maybe not too clearly.

 So in an Open Daylight environment... what's running on the compute host
 to coordinate host level networking?

 Nothing. OpenDaylight communicates to each host using OpenFlow and OVSDB
to manage networking on the host. In fact, this is one huge advantage for
the ODL MechanismDriver in Neutron, because it's one less agent running on
the host.

Thanks,
Kyle

-Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Get volumes REST API with filters and limit

2014-03-04 Thread Duncan Thomas
Definitely file a bug... a script to reproduce would be fantastic.
Needs to be fixed... I don't think you need a blueprint if the fix is
simple, but if you're making deep changes then a blueprint always
helps.

Thanks for pointing this out

On 28 February 2014 20:52, Steven Kaufer kau...@us.ibm.com wrote:
 I am investigating some pagination enhancements in nova and cinder (see nova
 blueprint https://blueprints.launchpad.net/nova/+spec/nova-pagination).

 In cinder, it appears that all filtering is done after the volumes are
 retrieved from the database (see the API.get_all function in
 https://github.com/openstack/cinder/blob/master/cinder/volume/api.py).
 Therefore, the usage combination of filters and limit will only work if all
 volumes matching the filters are in the page of data being retrieved from
 the database.

 For example, assume that all of the volumes with a name of foo would be
 retrieved from the database starting at index 100 and that you query for all
 volumes with a name of foo while specifying a limit of 50.  In this case,
 the query would yield 0 results since the filter did not match any of the
 first 50 entries retrieved from the database.

 Is this a known problem?
 Is this considered a bug?
 How should this get resolved?  As a blueprint for juno?

 I am new to the community and am trying to determine how this should be
 addressed.

 Thanks,

 Steven Kaufer


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [Cinder] Open Source and community working together

2014-03-04 Thread Jay Pipes
On Tue, 2014-03-04 at 10:11 +0100, Luke Gorrie wrote:
 On 3 March 2014 18:30, Thierry Carrez thie...@openstack.org wrote:
 My advice was therefore that you should not wait for that to
 happen to
 
 engage in cooperative behavior, because you don't want to be
 the first
 company to get singled out.
 
 
 Cooperative behavior is vague.

Not really, IMO.

 Case in point: I have not successfully setup 3rd party CI for the ML2
 driver that I've developed on behalf of a vendor.

Please feel free to engage with myself and others on IRC if you have
problems. We had a first meeting on #openstack-meeting yesterday and
have started putting answers to questions about 3rd party CI here:

https://etherpad.openstack.org/p/third-party-ci-workshop

Let me know how we can help you!

  Does this make me one of your uncooperative vendors? 

Of course not. Now, if you were not seeking out any assistance from the
community and instead were talking to a few other companies about just
replacing all of the upstream continuous integration system with
something you wrote yourself, yes, I would say that's being
uncooperative. :)

 Do I need to worry about being fired because somebody at OpenStack
 decides to name and shame the company I'm doing the work for and
 make an example? (Is that what the deprecated neutron drivers list
 will be used for?)

No. Not having success in setting up a required CI link does not make
you uncooperative. Simply reach out to others in the community for
assistance if you need it.

 If one project official says driver contributors have to comply with
 X, Y, Z by Icehouse-2 and then another project official says that
 uncooperative contributors are going to be nailed to the wall then,
 well, sucks to be contributors.

I think you are over-analyzing this :)

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread zhyu_139
+1 on the v2 on v3 proxy idea. Since none of the other suggestions leads to an 
easy decision, we might pay more attention here, especially when there are surely 
stackers, including myself, willing to make a solid contribution to it. 
Further, detailed discussion on technical feasibility, work sizing, etc. 
does help.

Sent from 139 Mail (mail.10086.cn)

The following is the content of the forwarded email
From:Sean Dague  s...@dague.net
To:openstack-dev openstack-dev@lists.openstack.org
Date:2014-03-04 20:49:03
Subject:Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

So this thread is getting deep again, as I expect they all will, so I'm
just going to top post and take the ire for doing so.

I also want to summarize what I've seen in the threads so far:

v2 needed forever - if I do a sentiment analysis here looking at the
orgs people are coming from, most of this is currently coming from
Rackspace folks (publicly). There might be many reasons for this, one of
which is the fact that they've done a big transition in the near past
(between *not openstack* and Openstack), and felt that pain.
Understanding that pain is good.

It is interesting that Phil actually brings up a completely different
issue from the HP cloud side, which is the amount of complaints they are
having to field about how terrible the v2 API is. HP has actually had an
OpenStack cloud public longer than Rackspace. So this feedback shouldn't
be lost.

So I feel like while some deployers have expressed no interest in moving
forward on API, others can't get there soon enough.

Which makes me think a lot about point 4. As has already been suggested
we could actually make v2 a proxy to v3. And just like with images and
volumes, it becomes frozen in Juno, and people that want more features
will move to the v3 API. Just like with other services.

This requires internal cleanups as well. However it wouldn't shut down
future evolution of the interface.

Nova really has 4 interfaces today
 * Nova v2 JSON
 * Nova v2 XML
 * Nova v3 JSON
 * EC2

I feel like if we had to lose one to decrease maintenance cost, Nova v2
XML is the one to lose. And if we did, v2 on v3 isn't the craziest thing
in the world. It's not free, but neither is the backport.

I also feel like the code duplication approach for v2 and v3 was taken
because the Glance folks had a previously bad experience on common code
with 2 APIs. However, their surface was small enough that the dual code
trees were fine. Nova is a beast in this regard.

I also feel like more folks are excited about working on the v2 on v3
approach, given that Kenichi is already prototyping on it. And it is
important to have excitement in this work as well. The human factor,
about the parts that people want to work on, is critical to the success
of Nova as a project.

This project has always been about coming up with the best ideas,
regardless of where they came from. So I'd like to make sure people
aren't making up their mind early on this, and have retreated to the
idea of winning this discussion. The outcome of this is important.
Because it basically sets up the framework of how we work on improving
the Nova interface... for a long time.

I also feel like the 2 complaints that come up on v3 repeatedly, that
it's a bunch of copy and paste, and that it doesn't implement anything
new, are kind of disingenuous. Because those were explicit design
constraints imposed and agreed on by the nova core team on the approach.
And if those are the complaints on the approach, why weren't those
brought up in advance?

At the end of the day, I'm on the fence. Because what I actually want is
somewhat orthogonal to all of this. Which is a real discoverable
interface, and substantially less optional Nova core. And I've yet to
figure out which approach sets us up for doing that better.

-Sean

On 03/03/2014 12:32 PM, Russell Bryant wrote:
 There has been quite a bit of discussion about the future of the v3 API
 recently. There has been growing support for the idea that we should
 change course and focus on evolving the existing v2 API instead of
 putting out a new major revision. This message is a more complete
 presentation of that proposal that concludes that we can do what we
 really need to do with only the v2 API.
 
 Keeping only the v2 API requires some confidence that we can stick with
 it for years to come. We don39t want to be revisiting this any time
 soon. This message addresses a bunch of different questions about how
 things would work if we only had v2.
 
 1) What about tasks?
 
 In some cases, the proposed integration of tasks is backwards
 compatible. A task ID will be added to a header. The biggest point of
 debate was if and how we would change the response for creating a
 server. For tasks in v2, we would not change the response by default.
 The task ID would just be in a header. However, if and when the client
 starts exposing version support information, we can provide an
 

[openstack-dev] [Murano] Discussion of the v2 Murano repository API

2014-03-04 Thread Ekaterina Fedorova

Hi everyone!

As we are moving towards the Application Catalog mission, we are 
improving various components of Murano to support the Catalog's specifics.
We've already introduced the changes to the workflows definition 
notation ([1]); now it is time for the next version of the Murano 
repository API.



It will introduce support for the new Murano Application Packages, stored 
as immutable entities and associated with the appropriate Murano 
applications and classes.
At the same time it adds browsing capabilities which will be used to 
inspect the Application Catalog.
For now, this API will be backed by our own storage; later it is planned 
to migrate to the Glance Artifact repository.

However, Murano's API will remain stable during this migration.

The draft of the API specification is located at [2], please feel free 
to take a look and send your feedback.

I've created an etherpad for feedback and discussion [3].

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2014-February/027202.html

[2] http://docs.muranorepositoryapi.apiary.io
[3] https://etherpad.openstack.org/p/muranorepository-api

Regards,
 Kate.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] milestone-proposed branches

2014-03-04 Thread James Slagle
On Tue, Mar 4, 2014 at 2:08 AM, Thierry Carrez thie...@openstack.org wrote:
 Robert Collins wrote:
 On 3 March 2014 23:12, Thierry Carrez thie...@openstack.org wrote:
 James Slagle wrote:
 I'd like to ask that the following repositories for TripleO be included
 in next week's cutting of icehouse-3:

 http://git.openstack.org/openstack/tripleo-incubator
 http://git.openstack.org/openstack/tripleo-image-elements
 http://git.openstack.org/openstack/tripleo-heat-templates
 http://git.openstack.org/openstack/diskimage-builder
 http://git.openstack.org/openstack/os-collect-config
 http://git.openstack.org/openstack/os-refresh-config
 http://git.openstack.org/openstack/os-apply-config

 Are you willing to run through the steps on the How_To_Release wiki for
 these repos, or should I do it next week? Just let me know how or what
 to coordinate. Thanks.

 I looked into more details and there are a number of issues as TripleO
 projects were not really originally configured to be released.

 First, some basic jobs are missing, like a tarball job for
 tripleo-incubator.

 Do we need one? tripleo-incubator has no infrastructure to make
 tarballs. So that has to be created de novo, and it's not really
 structured to be sdistable - it's a proving ground. This needs more
 examination. Slagle could however use a git branch effectively.

 I'd say you don't need such a job, but then I'm not the one asking for
 that repository to be included in next week's cutting of icehouse-3.

 James asks if I'd be OK to run through the steps on the How_To_Release
 wiki, and that wiki page is all about publishing tarballs.

 So my answer is, if you want to run the release scripts for
 tripleo-incubator, then you need a tarball job.

 Then the release scripts are made for integrated projects, which follow
 a number of rules that TripleO doesn't follow:

 - One Launchpad project per code repository, under the same name (here
 you have tripleo-* under tripleo + diskimage-builder separately)

 Huh? diskimage-builder is a separate project, with a separate repo. No
 conflation. Same for os-*-config, though I haven't made a LP project
 for os-cloud-config yet (but its not a dependency yet either).

 Just saying that IF you want to use the release scripts (and it looks
 like you actually don't want that), you'll need a 1:1 LP - repo match.
 Currently in LP you have tripleo (covering tripleo-* repos),
 diskimage-builder, and the os-* projects (which I somehow missed). To
 reuse the release scripts you'd have to split tripleo in LP into
 multiple projects.

 Finally the person doing the release needs to have push annotated tags
 / create reference permissions over refs/tags/* in Gerrit. This seems
 to be missing for a number of projects.

 We have this for all the projects we release; probably not incubator
 because *we don't release it*- and we had no intent of doing releases
 for tripleo-incubator - just having a stable branch so that there is a
 thing RH can build rpms from is the key goal.

 I agree with you. I only talked about it because James mentioned it in
 his to be released list.

 In all cases I'd rather limit myself to incubated/integrated projects,
 rather than extend to other projects, especially on a busy week like
 feature freeze week. So I'd advise that for icehouse-3 you follow the
 following simplified procedure:

 - Add missing tarball-creation jobs
 - Add missing permissions for yourself in Gerrit
 - Skip milestone-proposed branch creation
 - Push tag on master when ready (this will result in tarballs getting
 built at tarballs.openstack.org)

 Optionally:
 - Create icehouse series / icehouse-3 milestone for projects in LP
 - Manually create release and upload resulting tarballs to Launchpad
 milestone page, under the projects that make the most sense (tripleo-*
 under tripleo, etc)

 I'm still a bit confused with the goals here. My original understanding
 was that TripleO was explicitly NOT following the release cycle. How
 much of the integrated projects release process do you want to reuse ?
 We do a feature freeze on icehouse-3, then bugfix on master until -rc1,
 then we cut an icehouse release branch (milestone-proposed), unfreeze
 master and let it continue as Juno. Is that what you want to do too ? Do
 you want releases ? Or do you actually just want stable branches ?

 This is the etherpad:
 https://etherpad.openstack.org/p/icehouse-updates-stablebranches -
 that captures our notes from the summit.

 TripleO as a whole is not committing to stable maintenance nor API
 service integrated releases as yet: tuskar is our API service which
 will follow that process next cycle, but right now it has its guts
 open undergoing open heart surgery. Everything else we do semver on -
 like the openstack clients (novaclient etc) - and our overall process
 is aimed at moving things from incubator into stable trees as they
 mature. We'll be stabilising the interfaces in tripleo-heat-templates
 and tripleo-image-elements somehow in 

[openstack-dev] [Neutron][IPv6] Update the Wiki with links to blueprints and reviews

2014-03-04 Thread Collins, Sean
Hi All,

We've got a lot of work in progress, so if you
have a blueprint or bug that you are working on (or know about),
let's make sure that we keep track of them. Ideally, for bugs, add the
ipv6 tag

https://bugs.launchpad.net/neutron/+bugs?field.tag=ipv6

For blueprints and code reviews, please add them to the Wiki 

https://wiki.openstack.org/wiki/Neutron/IPv6

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] std:repeat action

2014-03-04 Thread Nikolay Makhotkin
Ok, Manas, I'll comment it very soon :)


On Tue, Mar 4, 2014 at 6:47 AM, Manas Kelshikar ma...@stackstorm.comwrote:

 Ping!

 @Nikolay - Can you take a look at the etherpad discussion and provide
 comments. I am going to start working on option (I) as that is the one
 which seems to make most sense. Thoughts?


 On Thu, Feb 27, 2014 at 8:24 AM, Renat Akhmerov rakhme...@mirantis.comwrote:

 Thanks Manas!

 This is one of the important things we need to get done within the next
 couple of weeks. Since it's going to affect engine I think we need to wait
 for a couple of days with the implementation till we merge the changes that
 are being worked on and that also affect engine significantly.

 Team, please research carefully this etherpad and leave your comments.
 It's a pretty tricky thing and we need to figure out the best strategy how
 to approach this kind of things. We're going to have more problems similar
 to this one.

 Renat Akhmerov
 @ Mirantis Inc.



 On 25 Feb 2014, at 10:07, manas kelshikar mana...@gmail.com wrote:

 Hi everyone,

 I have put down my thoughts about the standard repeat action blueprint.

 https://blueprints.launchpad.net/mistral/+spec/mistral-std-repeat-action

 I have added a link to an etherpad document which explores a few
 alternatives to the approach. I have explored details of how the std:repeat
 action should behave as defined in the blueprint. Further there are some
 thoughts on how it could be designed to remove ambiguity in the chaining.

 Please take a look.

 Thanks,
 Manas
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Best Regards,
Nikolay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [3rd party testing] How to setup CI? Take #2

2014-03-04 Thread Luke Gorrie
Hi Jay,

(Switching Subject to third party testing)

On 4 March 2014 15:13, Jay Pipes jaypi...@gmail.com wrote:

 Let me know how we can help you!


Thanks for the invitation! I will take you up on it :-).

My goal is to make sure the Tail-f NCS mechanism driver is fully supported
in Icehouse.

Question: How should I setup CI?

Option 1: Should I debug the CI implementation we developed and tested back
in December, which was not based on Tempest but rather on our private
integration test method that we used in the Havana cycle? The debugging
that is needed is more defensive programming in our automated attempt to
setup OpenStack for test -- so that if the installation fails for some
unexpected reason we don't vote based on that.

Option 2: Should I start over with a standard Tempest test insead? If so,
what's the best method to set it up (yours? Arista's? another?), and how do
I know when that method is sufficiently debugged that it's time to start?

I was on the 3rd party testing meeting last night (as 'lukego') and your
recommendation for me was to hold off for a week or so and then try your
method after your next update. That sounds totally fine to me in principle.
However, this will mean that I don't have a mature test procedure in place
by March 14th, and I'm concerned that there may be bad consequences on
this. This date was mentioned as a deadline in the Neutron meeting last
night, but I don't actually understand what the consequence of
non-compliance is for established drivers such as this one.

Cheers,
-Luke
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Nikola Đipanov
On 03/03/2014 06:32 PM, Russell Bryant wrote:
 There has been quite a bit of discussion about the future of the v3 API
 recently.  There has been growing support for the idea that we should
 change course and focus on evolving the existing v2 API instead of
 putting out a new major revision.  This message is a more complete
 presentation of that proposal that concludes that we can do what we
 really need to do with only the v2 API.
 
 Keeping only the v2 API requires some confidence that we can stick with
 it for years to come.  We don't want to be revisiting this any time
 soon.  This message addresses a bunch of different questions about how
 things would work if we only had v2.
 
 1) What about tasks?
 
 In some cases, the proposed integration of tasks is backwards
 compatible.  A task ID will be added to a header.  The biggest point of
 debate was if and how we would change the response for creating a
 server.  For tasks in v2, we would not change the response by default.
 The task ID would just be in a header.  However, if and when the client
 starts exposing version support information, we can provide an
 alternative/preferred response based on tasks.
 
 For example:
 
Accept: application/json;type=task
 

I feel that the ability to expose tasks is the single most important thing
we need to do from the API semantics standpoint, unless we redesign the
API from scratch (see below). Just looking at how awkward and edge-case
ridden the interaction between components that use each other's APIs in
OpenStack is should be enough to convince anyone that this needs fixing.

I am not sure if tasks will solve this, but the fact that we have
tried to solve this in several different ways up until now, and the
effort was being driven by large deployers, mostly tells me that this is
an issue people need solved.

From that point of view - if we can do this with V2 we absolutely should.

 2) Versioning extensions
 
 One of the points being addressed in the v3 API was the ability to
 version extensions.  In v2, we have historically required new API
 extensions, even for changes that are backwards compatible.  We propose
 the following:
 
  - Add a version number to v2 API extensions
  - Allow backwards compatible changes to these API extensions,
 accompanied by a version number increase
  - Add the option to advertise an extension as deprecated, which can be
 used for all those extensions created only to advertise the availability
 of new input parameters
 
 3) Core versioning
 
 Another pain point in API maintenance has been having to create API
 extensions for every small addition to the core API.  We propose that a
 version number be exposed for the core API that exposes the revision of
 the core API in use.  With that in place, backwards compatible changes
 such as adding a new property to a resource would be allowed when
 accompanied by a version number increase.
 
 With versioning of the core and API extensions, we will be able to cut
 down significantly on the number of changes that require an API
 extension without sacrificing the ability of a client to discover
 whether the addition is present or not.
 

The whole extensions vs. core discussion has been confusing me since the
beginning, and I can't say it has changed much.

After thinking about this for some time I've decided :) that I think
Nova needs 2 APIs. Amazon EC2 was always meant to be exposed as a web
service to its users and having a REST API that exposes resources
without actually going into details about what is happening is fine, and
it's fine for people using Nova in a similar manner. It is clear from
this that I think the Nova API borrows a lot from EC2.

But I think nova would benefit from having a lower level API, just as a
well designed software library would, that let's people build services
on top of it, that might provide different stability guarantees, as it's
customers would not be cloud applications, but rather other cloud
infrastructure. I think that having things tired in this manner would
answer a lot of questions about what is and isn't core and what we
mean by extensions.

If I were to attempt a new API for Nova - I would start from the above.

 4) API Proxying
 
 We don't see proxying APIs as a problem.  It is the cost we pay for
 choosing to split apart projects after they are released.  We don't
 think it's fair to break users just because we have chosen to split
 apart the backend implementation.
 
 Further, the APIs that are proxied are frozen while those in the other
 projects are evolving.  We believe that as more features are available
 only via the native APIs in Cinder, Glance, and Neutron, users will
 naturally migrate over to the native APIs.
 
 Over time, we can ensure clients are able to query the API without the
 need to proxy by adding new formats or extensions that don't return data
 that needed to be proxied.
 

Proxying is fine, and conveniently fits in the above story about tiered
APIs :)

 8) Input Validation

Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-04 Thread Sergey Lukjanov
++, OFTC looks nice.

On Tue, Mar 4, 2014 at 3:31 PM, Daniel P. Berrange berra...@redhat.com wrote:
 On Tue, Mar 04, 2014 at 11:12:13AM +, Stephen Gran wrote:
 On 04/03/14 11:01, Thierry Carrez wrote:
 James E. Blair wrote:
 Freenode has been having a rough time lately due to a series of DDoS
 attacks which have been increasingly disruptive to collaboration.
 Fortunately there's an alternative.
 
 OFTC URL:http://www.oftc.net/ is a robust and established alternative
 to Freenode.  It is a smaller network whose mission statement makes it a
 less attractive target.  It's significantly more stable than Freenode
 and has friendly and responsive operators.  The infrastructure team has
 been exploring this area and we think OpenStack should move to using
 OFTC.
 
 There is quite a bit of literature out there pointing to Freenode, like
 presentation slides from old conferences. We should expect people to
 continue to join Freenode's channels forever. I don't think staying a
 few weeks on those channels to redirect misled people will be nearly
 enough. Could we have a longer plan ? Like advertisement bots that would
 advise every n hours to join the right servers ?

 Why not just set /topic to tell people to connect to OFTC and join there?

 That's certainly something you want to do, but IME of moving IRC channels
 in the past, plenty of people will never look at the #topic :-( You want
 to be more aggressive like setting channel permissions to block anyone
 except admins from speaking in the channel. Then set a bot with admin
 rights to spam the channel once an hour telling people to go elsewhere.

 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Update the Wiki with links to blueprints and reviews

2014-03-04 Thread Robert Li (baoli)
Hi Sean,

I just added the ipv6-prefix-delegation BP that can be found using the
search link on the ipv6 wiki. More details about it will be added once I
find time.

thanks,
--Robert

On 3/4/14 10:05 AM, Collins, Sean sean_colli...@cable.comcast.com
wrote:

Hi All,

We've got a lot of work in progress, so if you
have a blueprint or bug that you are working on (or know about),
let's make sure that we keep track of them. Ideally, for bugs, add the
ipv6 tag

https://bugs.launchpad.net/neutron/+bugs?field.tag=ipv6

For blueprints and code reviews, please add them to the Wiki

https://wiki.openstack.org/wiki/Neutron/IPv6

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Proposal to add Fei Long Wang (flwang) as a core reviewer

2014-03-04 Thread Kurt Griffiths
The poll has closed. flwang has been promoted to Marconi core.

@kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [3rd party testing] How to setup CI? Take #2

2014-03-04 Thread Jay Pipes
On Tue, 2014-03-04 at 16:31 +0100, Luke Gorrie wrote:
 Option 1: Should I debug the CI implementation we developed and tested
 back in December, which was not based on Tempest but rather on our
 private integration test method that we used in the Havana cycle? The
 debugging that is needed is more defensive programming in our
 automated attempt to setup OpenStack for test -- so that if the
 installation fails for some unexpected reason we don't vote based on
 that.
 
 Option 2: Should I start over with a standard Tempest test insead? If
 so, what's the best method to set it up (yours? Arista's? another?),
 and how do I know when that method is sufficiently debugged that it's
 time to start?

Although I recognize you and your team have put in a substantial amount
of time in debugging your custom setup, I would advise dropping the
custom CI setup and going with a method that specifically uses the
upstream openstack-dev/devstack and openstack-infra/devstack-gate
projects. The reason is that these two projects are well supported by
the upstream Infrastructure team.

devstack will allow you to set up a complete OpenStack environment that
matches upstream -- with the exception of using the Tailf-NCS ML2 plugin
instead of the default plugin. devstack-gate will provide you the git
checkout plumbing that will populate the source directories for the
OpenStack projects that devstack uses to build its OpenStack
environment.

I'd recommend using my os-ext-testing repository (which is mostly just a
couple shell scripts and documentation that uses the upstream Puppet
modules to install and configure Jenkins, Zuul, Jenkins Job Builder,
Gearman, devstack-gate/nodepool scripts on a master and slave node).

 I was on the 3rd party testing meeting last night (as 'lukego') and
 your recommendation for me was to hold off for a week or so and then
 try your method after your next update. That sounds totally fine to me
 in principle. However, this will mean that I don't have a mature test
 procedure in place by March 14th, and I'm concerned that there may be
 bad consequences on this. This date was mentioned as a deadline in the
 Neutron meeting last night, but I don't actually understand what the
 consequence of non-compliance is for established drivers such as this
 one.

I'm not going to step on Mark McClain's toes regarding policy for
drivers in the Neutron code tree; Mark, please chime in here.

I mentioned waiting about a week because, after discussions with the
upstream Infrastructure team yesterday, it became clear that putting a
nodepool manager in place to spin up *single-use devstack slave nodes*
for running Tempest tests is going to be necessary.

I had previously thought that it was possible to reset a Devstack
environment to a clean state (thus being able to re-use the slave
Jenkins node for more than 1 test run). However, so much is changed on the slave
node during a Tempest run (and by devstack itself), that the only way to
truly ensure a clean test environment is to have a brand new devstack
slave node created/launched for each test run. Nodepool is the piece of
software that manages a pool of these devstack slave nodes, and it will
take me about a week to complete a new article and testing on the
os-ext-testing repository for integrating and installing nodepool
properly.

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Update the Wiki with links to blueprints and reviews

2014-03-04 Thread Collins, Sean
On Tue, Mar 04, 2014 at 04:06:02PM +, Robert Li (baoli) wrote:
 Hi Sean,
 
 I just added the ipv6-prefix-delegation BP that can be found using the
 search link on the ipv6 wiki. More details about it will be added once I
 find time.

Perfect - we'll probably want to do a session at the summit on it.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Proposal to add Fei Long Wang (flwang) as a core reviewer

2014-03-04 Thread Alejandro Cabrera
 Hi folks, I'd like to propose adding Fei Long Wang (flwang) as a core 
 reviewer on the Marconi team. He has been contributing regularly over the 
 past couple of months, and has proven to be a careful reviewer with good 
 judgment.

 All Marconi ATC's, please respond with a +1 or -1.

 Cheers,
 Kurt G. | @kgriffs | Marconi PTL

+1!

I second this thought. Fei Long Wang (flwang) has consistently
participated in discussions, meetings, and contributed.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] rpc concurrency control rfc

2014-03-04 Thread Duncan Thomas
On 28 November 2013 10:14, Daniel P. Berrange berra...@redhat.com wrote:

 For this specific block zero'ing case it occurred to me that it might
 be sufficient to just invoke 'ionice dd' instead of 'dd' and give it
 a lower I/O priority class than normal.

Excuse the thread necromancy, I've just been searching for thoughts
about this very issue. I've merged a patch that does I/O nice, and it
helps, but it is easy to DoS a volume server by creating and deleting
volumes fast while maintaining a high i/o load... the zeroing never
runs and so you run out of allocatable space.

I'll take a look at writing something with more controls than dd for
doing the zeroing...
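
For reference, this is the kind of invocation being discussed (a sketch only;
the device path is illustrative, and class 3 is the idle I/O scheduling class):

  ionice -c3 dd if=/dev/zero of=/dev/mapper/cinder--volumes-volume--XYZ bs=1M oflag=direct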

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-04 Thread Thomas Goirand
On 03/04/2014 06:13 PM, Julien Danjou wrote:
 On Tue, Mar 04 2014, James E. Blair wrote:
 
 If there aren't objections to this plan, I think we can propose a motion
 to the TC with a date and move forward with it fairly soon.
 
 That plan LGTM, and +1 for OFTC. :)

Same over here, +1 for OFTC.

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] rpc concurrency control rfc

2014-03-04 Thread Daniel P. Berrange
On Tue, Mar 04, 2014 at 04:15:03PM +, Duncan Thomas wrote:
 On 28 November 2013 10:14, Daniel P. Berrange berra...@redhat.com wrote:
 
  For this specific block zero'ing case it occurred to me that it might
  be sufficient to just invoke 'ionice dd' instead of 'dd' and give it
  a lower I/O priority class than normal.
 
 Excuse the thread necromancy, I've just been searching for thoughts
 about this very issue. I've merged a patch that does I/O nice, and it
 helps, but it is easy to DoS a volume server by creating and deleting
 volumes fast while maintaining a high i/o load... the zeroing never
 runs and so you run out of allocatable space.

Oh well, thanks for experimenting with this idea anyway.

 I'll take a look at writing something with more controls than dd for
 doing the zeroing...

Someone already beat you to it

  commit 71946855591a41dcc87ef59656a8a340774eeaf2
  Author: Pádraig Brady pbr...@redhat.com
  Date:   Tue Feb 11 11:51:39 2014 +

libvirt: support configurable wipe methods for LVM backed instances

Provide configurable methods to clear these volumes.
The new 'volume_clear' and 'volume_clear_size' options
are the same as currently supported by cinder.

* nova/virt/libvirt/imagebackend.py: Define the new options.
* nova/virt/libvirt/utils.py (clear_logical_volume): Support the
new options. Refactor the existing dd method out to
_zero_logic_volume().
* nova/tests/virt/libvirt/test_libvirt_utils.py: Add missing test cases
for the existing clear_logical_volume code, and for the new code
supporting the new clearing methods.
* etc/nova/nova.conf.sample: Add the 2 new config descriptions
to the [libvirt] section.

Change-Id: I5551197f9ec89ae2f9b051696bccdeb1af2c031f
Closes-Bug: #889299

this matches equivalent config in cinder.
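
For anyone wanting to try it, a sketch of the resulting nova.conf settings
(values are illustrative; the commit above has the authoritative descriptions):

  [libvirt]
  # wipe method for LVM-backed instance volumes: none, zero or shred
  volume_clear = zero
  # amount of the volume to wipe, in MiB; 0 means the entire volume
  volume_clear_size = 0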


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6][Security Group] BP: Support ICMP type filter by security group

2014-03-04 Thread Brian Haley
On 03/03/2014 11:18 AM, Collins, Sean wrote:
 On Mon, Mar 03, 2014 at 09:39:42PM +0800, Xuhan Peng wrote:
 Currently, only security group rule direction, protocol, ethertype and port
 range are supported by neutron security group rule data structure. To allow
 
 If I am not mistaken, I believe that when you use the ICMP protocol
 type, you can use the port range specs to limit the type.
 
 https://github.com/openstack/neutron/blob/master/neutron/db/securitygroups_db.py#L309
 
 http://i.imgur.com/3n858Pf.png
 
 I assume we just have to check and see if it applies to ICMPv6?

I tried using horizon to add an icmp type/code rule, and it didn't work.

Before:

-A neutron-linuxbri-i4533da4f-1 -p icmp -j RETURN

After:

-A neutron-linuxbri-i4533da4f-1 -p icmp -j RETURN
-A neutron-linuxbri-i4533da4f-1 -p icmp -j RETURN

I'd assume I'll have the same error with v6.

I am curious what's actually being done under the hood here now...

-Brian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Russell Bryant
On 03/03/2014 12:32 PM, Russell Bryant wrote:
 There has been quite a bit of discussion about the future of the v3 API
 recently.

snip  :-)

Since this proposal was posted, it is clear that there is not much
support for it, much less consensus.  That's progress because it now
seems clear to me that the path proposed (keep only v2) isn't the right
answer.

Let's reflect a bit on some of the other progress that I think has been
made:

1) Greater understanding and documentation of the v3 API effort

It has led to a larger group of people taking a much closer look at what
has been done with the v3 API so far.  That has widened the net for
feedback on what else should be done before we could call it done.

Chris has put together an excellent page with the most comprehensive
overview of the v3 API effort that I've seen.  I think this is very helpful:

http://ozlabs.org/~cyeoh/V3_API.html

2) Expansion on ideas to ease long term support of APIs

Thinking through this has led to a lot of deep thought about what
changes we can make to support an API for a longer period of time.
These are all ideas that can be applied to v3:

  - minor-versions for the core API and what changes would be
considered acceptable under that scheme

  - how we can make significant changes that normally are not
backwards compatible optional so that clients can declare
support for them, easing the possible future need for another
major API revision.

3) New ideas to ease keeping both v2 and v3

There has been some excellent input from those that have been working on
the v3 API with some new ideas for how we can lessen the burden of
keeping both APIs long term.  I'm personally especially interested in
the v2.1 approach where v2 turns into code that transforms requests
and responses to/from v3 format.  More on that here:

http://ozlabs.org/~cyeoh/V3_API.html#v2_v3_dual_maintenance


What I'd like to do next is work through a new proposal that includes
keeping both v2 and v3, but with a new added focus of minimizing the
cost.  This should include a path away from the dual code bases and to
something like the v2.1 proposal.

Thank you all for your participation on this topic.  It has been quite
controversial, but the API we expose to our users is a really big deal.
 I'm feeling more and more confident that we're coming through this with
a much better understanding of the problem space overall, as well as a
better plan going forward than we had a few weeks ago.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What is the currently accepted way to do plugins

2014-03-04 Thread Russell Bryant
On 03/04/2014 06:27 AM, Murray, Paul (HP Cloud Services) wrote:
 One of my patches has a query asking if I am using the agreed way to
 load plugins: https://review.openstack.org/#/c/71557/
 
 I followed the same approach as filters/weights/metrics using
 nova.loadables. Was there an agreement to do it a different way? And if
 so, what is the agreed way of doing it? A pointer to an example or even
 documentation/wiki page would be appreciated.

The short version is entry-point based plugins using stevedore.

We should be careful though.  We need to limit what we expose as
external plug points, even if we consider them unstable.  If we don't
want it to be public, it may not make sense for it to be a plugin
interface at all.
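
For example, a minimal sketch of the entry-point route with stevedore (the
namespace, entry point and class names below are made up for illustration,
not an existing Nova plug point):

  # setup.cfg of the package that ships the plugin
  [entry_points]
  nova.example.plugins =
      noop = mypackage.plugins:NoopPlugin

  # loading side
  from stevedore import driver

  mgr = driver.DriverManager(
      namespace='nova.example.plugins',
      name='noop',
      invoke_on_load=True,
  )
  plugin = mgr.driver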

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Anne Gentle
On Tue, Mar 4, 2014 at 11:10 AM, Russell Bryant rbry...@redhat.com wrote:

 On 03/03/2014 12:32 PM, Russell Bryant wrote:
  There has been quite a bit of discussion about the future of the v3 API
  recently.

 snip  :-)

 Since this proposal was posted, it is clear that there is not much
 support for it, much less consensus.  That's progress because it now
 seems clear to me that the path proposed (keep only v2) isn't the right
 answer.

 Let's reflect a bit on some of the other progress that I think has been
 made:

 1) Greater understanding and documentation of the v3 API effort

 It has led to a larger group of people taking a much closer look at what
 has been done with the v3 API so far.  That has widened the net for
 feedback on what else should be done before we could call it done.

 Chris has put together an excellent page with the most comprehensive
 overview of the v3 API effort that I've seen.  I think this is very
 helpful:

 http://ozlabs.org/~cyeoh/V3_API.html


I still sense that the struggle with Compute v3 is the lack of
documentation for contributor developers but also especially end users so
that we could get feedback early and often.

My original understanding, passed by word-of-mouth, was that the goal for
v3 was to define an expanded core that nearly all deployers could
confidently put into production to serve their users' needs. Since there's
no end-user-sympathetic documentation, we learned a bit too much about how
it's made, that supposedly it's implemented with all extensions -- a
revelation that I'd still prefer to be protected from. :) Or possibly I
don't understand. But the thing is, as a user advocate I shouldn't need to
know that. I should know what it does and what benefits it holds.

I recently had to write a paragraph about v3 for the Operations Guide, and
it was really difficult to write because of the conversational nature of
the discussion. Worse still, it was difficult to tell a deployer where
their voice could be best heard. I went with respond on the user survey.
I still sense we need to ensure we have data from users (deployers and end
users) and that won't be available until May.


 2) Expansion on ideas to ease long term support of APIs

 Thinking through this has led to a lot of deep thought about what
 changes we can make to support an API for a longer period of time.
 These are all ideas that can be applied to v3:

   - minor-versions for the core API and what changes would be
 considered acceptable under that scheme

   - how we can make significant changes that normally are not
 backwards compatible optional so that clients can declare
 support for them, easing the possible future need for another
 major API revision.

 3) New ideas to ease keeping both v2 and v3

 There has been some excellent input from those that have been working on
 the v3 API with some new ideas for how we can lessen the burden of
 keeping both APIs long term.  I'm personally especially interested in
 the v2.1 approach where v2 turns into code that transforms requests
 and responses to/from v3 format.  More on that here:

 http://ozlabs.org/~cyeoh/V3_API.html#v2_v3_dual_maintenance


 What I'd like to do next is work through a new proposal that includes
 keeping both v2 and v3, but with a new added focus of minimizing the
 cost.  This should include a path away from the dual code bases and to
 something like the v2.1 proposal.



I'd like to make a better API and I think details about this proposal help
us with that goal.

I'd like the effort to continue but I'd like an additional focus during the
Icehouse timeframe to write end user and SDK dev docs and to listen to the
user survey respondents.

Thanks Russell and Chris for the mega-efforts here. It matters and you're
fighting the good fight.
Anne





 Thank you all for your participation on this topic.  It has been quite
 controversial, but the API we expose to our users is a really big deal.
  I'm feeling more and more confident that we're coming through this with
 a much better understanding of the problem space overall, as well as a
 better plan going forward than we had a few weeks ago.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Vishvananda Ishaya

On Mar 4, 2014, at 9:10 AM, Russell Bryant rbry...@redhat.com wrote:
 
 Thank you all for your participation on this topic.  It has been quite
 controversial, but the API we expose to our users is a really big deal.
 I'm feeling more and more confident that we're coming through this with
 a much better understanding of the problem space overall, as well as a
 better plan going forward than we had a few weeks ago.

Hey Russell,

Thanks for bringing this to the mailing list and being open to discussion
and collaboration. Also, thanks to everyone who is participating in the
plan. Doing this kind of thing in the open is difficult and it has lead to
a ton of debate, but this is the right way to do things. It says a lot
about the strength of our community that we are able to have conversations
like this without devolving into arguments and flame wars.

Vish





signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Toward SQLAlchemy 0.9.x compatibility everywhere for Icehouse

2014-03-04 Thread Matt Riedemann



On 3/3/2014 8:59 AM, Thomas Goirand wrote:

On 03/03/2014 01:14 PM, Thomas Goirand wrote:

On 03/03/2014 11:24 AM, Thomas Goirand wrote:

It looks like my patch fixes the first unit test failure. Though we
still need a fix for the 2nd problem:
AttributeError: 'module' object has no attribute 'AbstractType'


Replying to myself...

It looks like AbstractType is not needed except for backwards
compatibility in SQLA 0.7  0.8, and it's gone away in 0.9. See:

http://docs.sqlalchemy.org/en/rel_0_7/core/types.html
http://docs.sqlalchemy.org/en/rel_0_8/core/types.html
http://docs.sqlalchemy.org/en/rel_0_9/core/types.html

(reference to AbstractType is gone from the 0.9 doc)

Therefore, I'm tempted to just remove lines 336 and 337, though I am
unsure of what was intended in this piece of code.

Your thoughts?

Thomas


Seems Sean already fixed that one, and it was lost in the git review
process (with patches going back and forth). I added it again as a
separate patch, and now the unit tests are ok. It just passed the
gating tests! :)

Cheers, and thanks to Sean and everyone else for the help, hoping to get
this series approved soon,

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



You're going to need to rebase on this [1] now since we have a Tempest 
job running against sqlalchemy-migrate patches as of yesterday.  I'm 
trying to figure out why that's failing in devstack-gate-cleanup-host 
though so any help there is appreciated.  I'm assuming we missed 
something in the job setup [2].


[1] https://review.openstack.org/#/c/77669/
[2] https://review.openstack.org/#/c/77679/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-04 Thread Jarret Raim
#1) do we believe OFTC is fundamentally better equipped to resist a
DDOS, or do we just believe they are a smaller target? The ongoing DDOS
on meetup.com the past 2 weeks is a good indicator that being a smaller
fish only helps for so long.

It seems like we would need an answer to this question. If the main reason
to switch is to avoid DDoS interruptions, the question would really boil
down to whether OFTC is actually more resilient to DDoS or if they
just haven't had to deal with it.


Jarret


smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat]Policy on upgades required config changes

2014-03-04 Thread Steven Hardy
Hi all,

As some of you know, I've been working on the instance-users blueprint[1].

This blueprint implementation requires three new items to be added to the
heat.conf, or some resources (those which create keystone users) will not
work:

https://review.openstack.org/#/c/73978/
https://review.openstack.org/#/c/76035/

So on upgrade, the deployer must create a keystone domain and domain-admin
user, and add the details to heat.conf, as has already been done in devstack[2].
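
For illustration, the new items look something like this (option names and
values here are a sketch; the reviews above are authoritative):

  [DEFAULT]
  # domain that holds the per-stack users, created by the deployer
  stack_user_domain = <UUID of the dedicated heat domain>
  # domain-admin user used by heat to manage users in that domain
  stack_domain_admin = heat_domain_admin
  stack_domain_admin_password = <password>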

The changes required for this to work have already landed in devstack, but
it was discussed today and Clint suggested this may be unacceptable
upgrade behavior - I'm not sure so looking for guidance/comments.

My plan was/is:
- Make devstack work
- Talk to tripleo folks to assist in any transition (what prompted this
  discussion)
- Document the upgrade requirements in the Icehouse release notes so the
  wider community can upgrade from Havana.
- Try to give a heads-up to those maintaining downstream heat deployment
  tools (e.g stackforge/puppet-heat) that some tweaks will be required for
  Icehouse.

However some have suggested there may be an openstack-wide policy which
requires people's old config files to continue working indefinitely on
upgrade between versions - is this right?  If so where is it documented?

The code itself will handle backwards compatibility where existing stacks
were created with the old code, but I had assumed (as a concession to code
simplicity) that some documented upgrade procedure would be acceptable
rather than hacking in some way to support the previous (broken, ref bug
#1089261) behavior when the config values are not found.

If anyone can clarify the requirement/expectation around config files and
upgrades that would be most helpful, thanks!

Steve

[1] https://blueprints.launchpad.net/heat/+spec/instance-users
[2] https://review.openstack.org/#/c/73324/
https://review.openstack.org/#/c/75424/
https://review.openstack.org/#/c/76036/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Sean Dague
On 03/04/2014 12:26 PM, Vishvananda Ishaya wrote:
 
 On Mar 4, 2014, at 9:10 AM, Russell Bryant rbry...@redhat.com wrote:

 Thank you all for your participation on this topic.  It has been quite
 controversial, but the API we expose to our users is a really big deal.
 I'm feeling more and more confident that we're coming through this with
 a much better understanding of the problem space overall, as well as a
 better plan going forward than we had a few weeks ago.
 
 Hey Russell,
 
 Thanks for bringing this to the mailing list and being open to discussion
 and collaboration. Also, thanks to everyone who is participating in the
 plan. Doing this kind of thing in the open is difficult and it has lead to
 a ton of debate, but this is the right way to do things. It says a lot
 about the strength of our community that we are able to have conversations
 like this without devolving into arguments and flame wars.
 
 Vish

+1, and definitely appreciate Russell's leadership through this whole
discussion.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Get volumes REST API with filters and limit

2014-03-04 Thread Steven Kaufer
Duncan Thomas duncan.tho...@gmail.com wrote on 03/04/2014 07:53:49 AM:

 From: Duncan Thomas duncan.tho...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 03/04/2014 08:06 AM
 Subject: Re: [openstack-dev] [Cinder] Get volumes REST API with
 filters and limit

 Definitely file a bug... a script to reproduce would be fantastic.
 Needs to be fixed... I don't think you need a blueprint if the fix is
 simple, but if you're making deep changes then a blueprint always
 helps.

Bug created: https://bugs.launchpad.net/cinder/+bug/1287813
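
A rough sketch of the direction a fix could take: apply the exact-match
filters in the database query itself, so that the limit only counts rows
that already match. This is SQLAlchemy-style; the model, column and helper
names below are illustrative, not Cinder's actual code.

  # toy example, not Cinder's schema
  from sqlalchemy import Column, Integer, String, create_engine
  from sqlalchemy.ext.declarative import declarative_base
  from sqlalchemy.orm import sessionmaker

  Base = declarative_base()

  class Volume(Base):
      __tablename__ = 'volumes'
      id = Column(Integer, primary_key=True)
      display_name = Column(String)

  def volume_get_all(session, filters=None, limit=None):
      query = session.query(Volume)
      for key, value in (filters or {}).items():
          # filters are pushed into SQL, so LIMIT sees only matching rows
          query = query.filter(getattr(Volume, key) == value)
      if limit is not None:
          query = query.limit(limit)
      return query.all()

  engine = create_engine('sqlite://')
  Base.metadata.create_all(engine)
  session = sessionmaker(bind=engine)()
  print(volume_get_all(session, {'display_name': 'foo'}, limit=50))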


 Thanks for pointing this out

 On 28 February 2014 20:52, Steven Kaufer kau...@us.ibm.com wrote:
  I am investigating some pagination enhancements in nova and cinder(see
nova
  blueprint https://blueprints.launchpad.net/nova/+spec/nova-pagination).
 
  In cinder, it appears that all filtering is done after the volumes are
  retrieved from the database (see the API.get_all function in
  https://github.com/openstack/cinder/blob/master/cinder/volume/api.py).
  Therefore, the usage combination of filters and limit will only work if
all
  volumes matching the filters are in the page of data being retrieved
from
  the database.
 
  For example, assume that all of the volumes with a name of foo would
be
  retrieved from the database starting at index 100 and that you query
for all
  volumes with a name of foo while specifying a limit of 50.  In this
case,
  the query would yield 0 results since the filter did not match any of
the
  first 50 entries retrieved from the database.
 
  Is this a known problem?
  Is this considered a bug?
  How should this get resolved?  As a blueprint for juno?
 
  I am new to the community and am trying to determine how this should be
  addressed.
 
  Thanks,
 
  Steven Kaufer
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Duncan Thomas

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Thanks,

Steven Kaufer___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What is the currently accepted way to do plugins

2014-03-04 Thread Sandy Walsh
This brings up something that's been gnawing at me for a while now ... why use 
entry-point based loaders at all? I don't see the problem they're trying to 
solve. (I thought I got it for a while, but I was clearly fooling myself)

1. If you use the load all drivers in this category feature, that's a 
security risk since any compromised python library could hold a trojan.

2. otherwise you have to explicitly name the plugins you want (or don't want) 
anyway, so why have the extra indirection of the entry-point? Why not just name 
the desired modules directly? 

3. the real value of a loader would be to also extend/manage the python path 
... that's where the deployment pain is. Use fully qualified filename driver 
and take care of the pathing for me. Abstracting the module/class/function 
name isn't a great win. 

I don't see where the value is for the added pain (entry-point 
management/package metadata) it brings. 

CMV,

-S

From: Russell Bryant [rbry...@redhat.com]
Sent: Tuesday, March 04, 2014 1:29 PM
To: Murray, Paul (HP Cloud Services); OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] What is the currently accepted way to do 
plugins

On 03/04/2014 06:27 AM, Murray, Paul (HP Cloud Services) wrote:
 One of my patches has a query asking if I am using the agreed way to
 load plugins: https://review.openstack.org/#/c/71557/

 I followed the same approach as filters/weights/metrics using
 nova.loadables. Was there an agreement to do it a different way? And if
 so, what is the agreed way of doing it? A pointer to an example or even
 documentation/wiki page would be appreciated.

The short version is entry-point based plugins using stevedore.

We should be careful though.  We need to limit what we expose as
external plug points, even if we consider them unstable.  If we don't
want it to be public, it may not make sense for it to be a plugin
interface at all.
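
For a concrete picture, a minimal stevedore sketch follows. The namespace and
plugin names are made up for illustration; they are not existing Nova plug
points.

# Minimal stevedore sketch; 'example.metrics_plugins' and the aliases below
# are made up for illustration. A plugin package would advertise its classes
# in its setup.cfg, e.g.:
#
#   [entry_points]
#   example.metrics_plugins =
#       cpu = example_plugins.cpu:CPUMetric
#       memory = example_plugins.memory:MemoryMetric
from stevedore import driver, named

# Load a single driver by its short alias -- the alias (not a class path) is
# what ends up in the operator-facing config file.
cpu_metric = driver.DriverManager(
    namespace='example.metrics_plugins',
    name='cpu',
    invoke_on_load=True,
).driver

# Or load an explicitly named subset; nothing is loaded implicitly.
enabled = named.NamedExtensionManager(
    namespace='example.metrics_plugins',
    names=['cpu', 'memory'],
    invoke_on_load=True,
)
for ext in enabled:
    print(ext.name, ext.obj)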

--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6][Security Group] BP: Support ICMP type filter by security group

2014-03-04 Thread Collins, Sean
On Tue, Mar 04, 2014 at 12:01:00PM -0500, Brian Haley wrote:
 On 03/03/2014 11:18 AM, Collins, Sean wrote:
  On Mon, Mar 03, 2014 at 09:39:42PM +0800, Xuhan Peng wrote:
  Currently, only security group rule direction, protocol, ethertype and port
  range are supported by neutron security group rule data structure. To allow
  
  If I am not mistaken, I believe that when you use the ICMP protocol
  type, you can use the port range specs to limit the type.
  
  https://github.com/openstack/neutron/blob/master/neutron/db/securitygroups_db.py#L309
  
  http://i.imgur.com/3n858Pf.png
  
  I assume we just have to check and see if it applies to ICMPv6?
 
 I tried using horizon to add an icmp type/code rule, and it didn't work.
 
 Before:
 
 -A neutron-linuxbri-i4533da4f-1 -p icmp -j RETURN
 
 After:
 
 -A neutron-linuxbri-i4533da4f-1 -p icmp -j RETURN
 -A neutron-linuxbri-i4533da4f-1 -p icmp -j RETURN
 
 I'd assume I'll have the same error with v6.
 
 I am curious what's actually being done under the hood here now...

Looks like _port_arg just returns an empty array when the protocol is
ICMP?

https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_firewall.py#L328

Called by: 

https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_firewall.py#L292
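
A hedged sketch of the kind of fix being discussed -- mapping the rule's port
range fields to an ICMP type/code match instead of dropping them -- might look
roughly like this (not the actual neutron code):

# Hedged sketch only: for ICMP/ICMPv6 rules the security group's
# port_range_min/max carry the ICMP type and code, so turn them into an
# --icmp-type / --icmpv6-type argument instead of returning [].
def _port_arg(direction, protocol, port_range_min, port_range_max):
    if port_range_min is None:
        return []

    if protocol in ('icmp', 'icmpv6'):
        icmp_type = str(port_range_min)              # ICMP type
        if port_range_max is not None:
            icmp_type += '/%s' % port_range_max      # optional ICMP code
        flag = '--icmp-type' if protocol == 'icmp' else '--icmpv6-type'
        return [flag, icmp_type]

    if port_range_max is None or port_range_max == port_range_min:
        return ['--%s' % direction, str(port_range_min)]
    return ['-m', 'multiport', '--%ss' % direction,
            '%s:%s' % (port_range_min, port_range_max)]

# e.g. _port_arg('dport', 'icmp', 8, 0) -> ['--icmp-type', '8/0']  (echo request)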


-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Community Meeting minutes - 03/04/2014

2014-03-04 Thread Alexander Tivelkov
Hi,

Thanks for joining murano weekly meeting.
Here are the meeting minutes and the logs:

http://eavesdrop.openstack.org/meetings/murano/2014/murano.2014-03-04-17.01.html
http://eavesdrop.openstack.org/meetings/murano/2014/murano.2014-03-04-17.01.log.html

See you next week!

--
Regards,
Alexander Tivelkov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Hooking the external events discussion

2014-03-04 Thread Alexander Tivelkov
Hi folks,

On today's IRC meeting there was a very interesting discussion about
publishing of handlers for external events in Murano applications. It
turns out that the topic is pretty hot and requires some more
discussions. So, it was suggested to host an additional meeting to
cover this topic.

So, let's meet tomorrow at #murano channel on freenode. The suggested
time is 16:00 UTC (8am PST).

Anybody who is interested in the topic, please feel free to join!
Thanks

--
Regards,
Alexander Tivelkov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron: Need help with tox failure in VPN code

2014-03-04 Thread Paul Michali
All, I found the problem…

There is a race condition on the LB vs VPN unit tests. I've seen it with just 
reference VPN code, when trying to load the service driver via configuration.

Essentially, VPN sets up cfg.CONF with service driver entry, and then starts 
the Neutron plugin to handle various northbound APIs. For some tests, before 
the VPN plugin is started by Neutron, LB runs and sets a different cfg.CONF (to 
LOADBALANCE). It has the Service Type Manager load that config in and when VPN 
plugin __init__ runs, it goes to Service Type Manager, gets the existing 
instance (it is a singleton) that has the LB settings, and then fails to find 
the VPN service driver, obviously.

My workaround was to have the VPN plugin __init__() clear the instance for Service 
Type Manager and force it to re-parse the configuration (and get the right 
thing).  This will have little performance impact, as it is only run during 
init of VPN plugin, the config to load is small, and worst case is it happens 
twice (LB then VPN loads).
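
For anyone trying to follow the race, here is a stripped-down illustration of
the pattern and the reset workaround (stand-in class names, not the real
neutron code):

# Stripped-down illustration of the singleton race and the reset workaround.
class ServiceTypeManager(object):
    _instance = None

    @classmethod
    def get_instance(cls, conf_drivers):
        # Configuration is captured only on first instantiation -- the root
        # of the race when another service configured cfg.CONF first.
        if cls._instance is None:
            cls._instance = cls(conf_drivers)
        return cls._instance

    def __init__(self, conf_drivers):
        self.drivers = dict(conf_drivers)


class VPNDriverPlugin(object):
    def __init__(self, conf_drivers):
        # Workaround: drop the cached singleton so the service driver list
        # is re-read from the VPN configuration.
        ServiceTypeManager._instance = None
        self.manager = ServiceTypeManager.get_instance(conf_drivers)


# LB tests configure and instantiate the manager first...
ServiceTypeManager.get_instance({'LOADBALANCER': 'haproxy'})
# ...then the VPN plugin starts; without the reset it would only see LB drivers.
vpn = VPNDriverPlugin({'VPN': 'ipsec'})
print(vpn.manager.drivers)   # {'VPN': 'ipsec'}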

I don't know of any way of preventing this race condition, other than mocking 
out the Service Type Manager to return the expected service driver (though that 
doesn't test that logic). Nor do I know why this was not seen, when we had the 
full Service Type Framework in place. Not sure if it just changed the timing 
enough to mask the issue?

Note: I found that the Service Type Manager code raises a SystemExit() 
exception when there are no matching configs. As a result, there is no 
traceback (just an error code), and it is really hard to tell why tox failed. 
Maybe sys.exit() would be better?

It was quite the setback, finding out yesterday afternoon that the VPN service 
type framework was definitely not going into I-3, having to rework the code to 
remove the dependency on that commit, and then hitting this test failure. Spent 
lots of time trying to figure this issue out, but many thanks for Akihiro, 
Henry G, and others for helping me trudge through the issue!

In any case, new reviews have been pushed out 
https://review.openstack.org/#/c/74144 and 
https://review.openstack.org/#/c/74156, which should be passing Jenkins again. 
We are in the process of bringing up Tempest with these patches to provide 3rd 
party testing.

I'd appreciate it very much, if you can (re)review these two change sets.

Thanks!


PCM (Paul Michali)

MAIL  p...@cisco.com
IRCpcm_  (irc.freenode.net)
TW@pmichali
GPG key4525ECC253E31A83
Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83

On Mar 4, 2014, at 8:06 AM, Paul Michali p...@cisco.com wrote:

 Bo,
 
 I did that change, and it passes when I run neutron.tests.unit.services.vpn, 
 but not when I run full tox or neutron.tests.unit.services.  I still get 
 failures (either error code 10 or test fails with no info).
 
 Irene,
 
 Any thoughts on why the driver is not loading (even with the mod that Bo 
 suggests)?
 
 Nachi,
 
 I just tried run_tests.sh and it fails to run the test (haven't used that in 
 a very long time, so not sure I'm running it correctly). Do I need any 
 special args, when running that? I tried './run_tests.sh -f -V -P' but it ran 
 0 tests.
 
 
 All,
 
 The bottom line here is that I can't seem to get the loading of service 
 driver from neutron.conf, irrespective of the blueprint change set. If I use 
 a hard coded driver (as is on master branch and used in the latest patch for 
 74144), all the tests work. But for this blueprint we need to be able to load 
 the service driver (so that the blueprint driver can be loaded). The issue is 
 unrelated to the blueprint functionality, as shown by the latest patch and by 
 previous versions where I had the full service type framework implementation. 
 It seems like there is some problem with this partial application of STF to 
 load the service driver.
 
 I took the (working) 74144 patch and made the changes below to load the 
 service plugin from neutron.conf, and see tox failures. I've also patched 
 this into the master branch, and I see the same issue!  IOW, there is 
 something wrong with the method I'm using to setup the service driver at 
 least with respect to the current test suite.
 
 diff --git a/neutron/services/vpn/plugin.py b/neutron/services/vpn/plugin.py
 index 5d818a3..41cbff0 100644
 --- a/neutron/services/vpn/plugin.py
 +++ b/neutron/services/vpn/plugin.py
 @@ -18,11 +18,9 @@
  #
  # @author: Swaminathan Vasudevan, Hewlett-Packard
  
 -# from neutron.db import servicetype_db as st_db
  from neutron.db.vpn import vpn_db
 -# from neutron.plugins.common import constants
 -# from neutron.services import service_base
 -from neutron.services.vpn.service_drivers import ipsec as ipsec_driver
 +from neutron.plugins.common import constants
 +from neutron.services import service_base
  
  
  class VPNPlugin(vpn_db.VPNPluginDb):
 @@ -41,12 +39,10 @@ class VPNDriverPlugin(VPNPlugin, 
 vpn_db.VPNPluginRpcDbMixin):
  #TODO(nati) handle 

Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-04 Thread Brian Cline

On 03/04/2014 05:01 AM, Thierry Carrez wrote:

James E. Blair wrote:

Freenode has been having a rough time lately due to a series of DDoS
attacks which have been increasingly disruptive to collaboration.
Fortunately there's an alternative.

OFTC (http://www.oftc.net/) is a robust and established alternative
to Freenode.  It is a smaller network whose mission statement makes it a
less attractive target.  It's significantly more stable than Freenode
and has friendly and responsive operators.  The infrastructure team has
been exploring this area and we think OpenStack should move to using
OFTC.

There is quite a bit of literature out there pointing to Freenode, like
presentation slides from old conferences. We should expect people to
continue to join Freenode's channels forever. I don't think staying a
few weeks on those channels to redirect misled people will be nearly
enough. Could we have a longer plan ? Like advertisement bots that would
advise every n hours to join the right servers ?


[...]
1) Create an irc.openstack.org CNAME record that points to
chat.freenode.net.  Update instructions to suggest users configure their
clients to use that alias.

I'm not sure that helps. The people who would get (and react to) the DNS
announcement are likely using proxies anyway, which you'll have to
unplug manually from Freenode on switch day. The vast majority of users
will just miss the announcement. So I'd rather just make a lot of noise
on switch day :)

Finally, I second Sean's question on OFTC's stability. As bad as
Freenode is hit by DoS, they have experience handling this, mitigation
procedures in place, sponsors lined up to help, so damage ends up
*relatively* limited. If OFTC raises profile and becomes a target, are
we confident they would mitigate DoS as well as Freenode does ? Or would
they just disappear from the map completely ? I fear that we are trading
a known evil for some unknown here.

In all cases I would target post-release for the transition, maybe even
post-Summit.



Indeed, I can't help but feel like the large amount of effort involved 
in changing networks is a bit of a riverboat gamble. DDoS has been an 
unfortunate reality for every well-known/trusted/stable IRC network for 
the last 15-20 years, and running from it rather than planning for it is 
usually a futile effort. It feels like we'd be chasing our tails trying 
to find a place where DDoS couldn't cause serious disruption; even then 
it's still not a sure thing. I would hate to see everyone's efforts to 
have been in vain once the same problem occurs there.


--
Brian Cline
br...@linux.vnet.ibm.com



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nova failed to spawn when download disk image from Glance timeout

2014-03-04 Thread Ben Nemec
 

Nora, 

This is a development list. Your questions sound more related to usage,
so you might have better luck asking on the users list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack 

Thanks. 

-Ben 

On 2014-03-03 03:09, Nora Zhou wrote: 

 Hi, 
 
 I recently deploy Bare-metal node instance using Heat Template. However, Nova 
 failed to spawn due to a timeout error. When I look into the code I found 
 that the timeout is related to Nova downloading disk image from Glance. The 
 nova-schedule.log shows below: 
 
 2014-02-28 02:49:48.046 2136 ERROR nova.compute.manager 
 [req-09e61b23-436f-4425-8db0-10dd1aea2e39 85bbc1abb4254761a5452654a6934b75 
 692e595702654930936a65d1a658cff4] [instance: 
 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] Instance failed to spawn 
 
 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager [instance: 
 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] Traceback (most recent call last): 
 
 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager [instance: 
 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] File 
 /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1417, in 
 _spawn/ 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager [instance: 
 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] network_info=network_info, 
 
 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager [instance: 
 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] File 
 /usr/lib/python2.7/dist-packages/nova/virt/baremetal/pxe.py, line 444, in 
 cache_images 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager 
 [instance: 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] 
 self._cache_tftp_images(context, instance, tftp_image_info) 
 
 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager [instance: 
 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] File 
 /usr/lib/python2.7/dist-packages/nova/virt/baremetal/pxe.py, line 335, in 
 _cache_tftp_images 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager 
 [instance: 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] 
 project_id=instance['project_id'], 
 
 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager [instance: 
 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] File 
 /usr/lib/python2.7/dist-packages/nova/virt/baremetal/utils.py, line 33, in 
 cache_image 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager 
 [instance: 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] user_id, project_id) 
 
 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager [instance: 
 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] File 
 /usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py, line 645, in 
 fetch_image 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager 
 [instance: 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] max_size=max_size) 
 
 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager [instance: 
 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] File 
 /usr/lib/python2.7/dist-packages/nova/virt/images.py, line 196, in 
 fetch_to_raw 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager 
 [instance: 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] max_size=max_size) 
 
 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager [instance: 
 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] File 
 /usr/lib/python2.7/dist-packages/nova/virt/images.py, line 190, in fetch 
 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager [instance: 
 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] image_service.download(context, 
 image_id, dst_path=path) 
 
 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager [instance: 
 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] File 
 /usr/lib/python2.7/dist-packages/nova/image/glance.py, line 360, in 
 download 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager [instance: 
 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] for chunk in image_chunks: 
 
 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager [instance: 
 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] File 
 /usr/lib/python2.7/dist-packages/glanceclient/common/http.py, line 478, in 
 __iter__ 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager [instance: 
 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] chunk = self.next() 
 
 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager [instance: 
 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] File 
 /usr/lib/python2.7/dist-packages/glanceclient/common/http.py, line 494, in 
 next 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager [instance: 
 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] chunk = self._resp.read(CHUNKSIZE) 
 
 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager [instance: 
 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] File /usr/lib/python2.7/httplib.py, 
 line 561, in read 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager 
 [instance: 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] s = self.fp.read(amt) 
 
 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager [instance: 
 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] File /usr/lib/python2.7/socket.py, 
 line 380, in read 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager 
 [instance: 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] data = self._sock.recv(left) 
 
 2014-02-28 02:49:48.046 2136 TRACE 

Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-03-04 Thread Vishvananda Ishaya

On Mar 3, 2014, at 11:32 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Mon, 2014-03-03 at 11:09 -0800, Vishvananda Ishaya wrote:
 On Mar 3, 2014, at 6:48 AM, Jay Pipes jaypi...@gmail.com wrote:
 
 On Sun, 2014-03-02 at 12:05 -0800, Morgan Fainberg wrote:
 Having done some work with MySQL (specifically around similar data
 sets) and discussing the changes with some former coworkers (MySQL
 experts) I am inclined to believe the move from varchar  to binary
 absolutely would increase performance like this.
 
 
 However, I would like to get some real benchmarks around it and if it
 really makes a difference we should get a smart UUID type into the
 common SQLlibs (can pgsql see a similar benefit? Db2?) I think this
 conversation. Should be split off from the keystone one at hand - I
 don't want valuable information / discussions to get lost.
 
 No disagreement on either point. However, this should be done after the
 standardization to a UUID user_id in Keystone, as a separate performance
 improvement patch. Agree?
 
 Best,
 -jay
 
 -1
 
 The expectation in other projects has been that project_ids and user_ids are 
 opaque strings. I need to see more clear benefit to enforcing stricter 
 typing on these, because I think it might break a lot of things.
 
 What does Nova lose here? The proposal is to have Keystone's user_id
 values be UUIDs all the time. There would be a migration or helper
 script against Nova's database that would change all non-UUID user_id
 values to the Keystone UUID values.

So I don’t have a problem with keystone internally using uuids, but forcing
a migration of user identifiers isn’t something that should be taken lightly.
One example is external logging and billing systems which now have to be
migrated.

I’m not opposed to the migration in principle. We may have to do a similar
thing for project_ids with hierarchical multitenancy, for example. I just
think we need a really good reason to do it, and the performance arguments
just don’t seem good enough without a little empirical data.

Vish

 
 If there's stuff in Nova that would break (which is doubtful,
 considering like you say, these are supposed to be opaque values and
 as such should not have any restrictions or validation on their value),
 then that is code in Nova that should be fixed.
 
 Honestly, we shouldn't accept poor or loose code just because stuff
 might break.
 
 -jay
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Update the Wiki with links to blueprints and reviews

2014-03-04 Thread Robert Li (baoli)
Yeah, that's a good idea. I will try to find time to work on the spec.

--Robert

On 3/4/14 11:17 AM, Collins, Sean sean_colli...@cable.comcast.com
wrote:

On Tue, Mar 04, 2014 at 04:06:02PM +, Robert Li (baoli) wrote:
 Hi Sean,
 
 I just added the ipv6-prefix-delegation BP that can be found using the
 search link on the ipv6 wiki. More details about it will be added once I
 find time.

Perfect - we'll probably want to do a session at the summit on it.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-04 Thread Joe Gordon
On Tue, Mar 4, 2014 at 5:25 AM, Dina Belova dbel...@mirantis.com wrote:
 Joe, thanks for discussion.


 I think nova should natively support booting an instance for a

 limited amount of time. I would use this all the time to boot up

 devstack instances (boot devstack instance for 5 hours)


 Really nice idea, but to provide time based resource management for any
 resource type in OS (instance, volume, compute host, Heat stack, etc.) that
 needs to be implemented in every project. And even with that feature
 implemented, without central leasing service, there should be some other
 reservation connected opportunities like user notifications about close end
 of lease / energy efficiency, etc. that do not really fit idea of some
 already existing project / program.


So I understand the use case where I want a instance for x amount of
time, because the cloud model makes compute resources (instances)
ephemeral. But volumes and object storage are explicitly persistent,
so not sure why you would want to consume one of those resources for a
finite amount of time.


 Reserved and Spot Instances. I like Amazon's concept of reserved and

 spot instances it would be cool if we could support something similar


 AWS reserved instances look like your first idea with instances booted for a
 limited amount of time - even that in Amazon use case that's *much* time. As
 for spot instances, I believe this idea is more about some billing service
 that counts current instance/host/whatever price due to current compute
 capacity load, etc.

Actually you have it backwards.
Reserved Instances are easy to use and require no change to how you
use EC2. When computing your bill, our system will automatically apply
Reserved Instance rates first to minimize your costs. An instance hour
will only be charged at the On-Demand rate when your total quantity of
instances running that hour exceeds the number of applicable Reserved
Instances you own.
https://aws.amazon.com/ec2/purchasing-options/reserved-instances/


https://aws.amazon.com/ec2/purchasing-options/spot-instances/




 Boot an instances for 4 hours every morning. This sounds like

 something that
 https://wiki.openstack.org/wiki/Mistral#Tasks_Scheduling_-_Cloud_Cron

 can handle.


 That's not really thing we've implemented in Climate - we have not
 implemented periodic tasks like that - now lease might be not started,
 started and ended - without any 'sleeping' periods. Although, that's quite
 nice idea to implement this feature using Mistral.


 Give someone 100 CPU hours per time period of quota. Support quotas

 by overall usage not current usage. This sounds like something that

 each service should support natively.


 Quotas (if we speak about time management) should be satisfied in any time
 period. Now in Climate that's done by getting cloud resources from common
 pool at the lease creation moment - but, as you guess, that does not allow
 to have resource reusage at the time lease has not started yet. To
 implement resource reusage advanced quota management is truly needed. That
 idea was the first at the very beginning of Climate project and we
 definitely need that in future.

This is the crux of my concern:  without 'resource reusage' at the
time lease has not started yet. I don't see what climate provides.

How would climate handle quotas? Currently quotas are up to each
project to manage.



 Reserved Volume: Not sure how that works.


 Now we're in the process of investigating this moment too. Ideally that
 should be some kind of volume state, that simply means only DB record
 without real block storage created - and it'll be created only at the lease
 start date. But that requires many changes to Cinder. Other idea is to do
 the same as Climate does with compute hosts - consider cinder-volumes as
 dedicated to Climate and Climate will manage them itself. Reserved volume
 idea came from thoughts about 'reserved stack' - to have working group like
 vm+volume+assigned_ip time you really need that.


I would like to see a clear roadmap for this with input from the
Cinder team. Because I am not sure if this really makes much sense.


 Virtual Private Cloud.  It would be great to see OpenStack support a

 hardware isolated virtual private cloud, but not sure what the best

 way to implement that is.


 There was proposal with pclouds by Phil Day, that was changed after Icehouse
 summit to something new. First idea was to use exactly pclouds, but as they
 are not implemented now, Climate works directly with hosts aggregates to
 imitate them. In future, when we'll have opportunity to use pcloud (it does
 not matter how it'll be called really), we'll do it, of course.


That brings up another point, having a project that imports nova code
directly is bad. You are using non-public non-contractual APIs that
nova can change at any time.
http://git.openstack.org/cgit/stackforge/climate-nova/tree/climatenova/api/extensions/reservation.py

Having a nova filter that lives in 

[openstack-dev] [GSoC 2014] Proposal Template

2014-03-04 Thread Masaru Nomura
Hi,


 I have a question about an application format as I can't find it on wiki
page. Is there any specific information I should provide within a proposal?
I checked other communities and some of them have an application template,
so I would just like to make it clear.


 Thank you,

Masaru Nomura

IRC : massa [freenode]
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WARNING: ... This application has not enabled MySQL traditional mode, which means silent data corruption may occur - real issue?

2014-03-04 Thread Sean Dague
On 03/04/2014 01:27 PM, Ben Nemec wrote:
 This warning should be gone by default once
 https://github.com/openstack/oslo-incubator/commit/dda24eb4a815914c29e801ad0176630786db8734
 gets synced.  I believe there is work underway by the db team to get
 that done.
 
 Note that the reason it will be gone is that we're changing the default
 oslo db mode to traditional, so if we have any code that would have
 triggered silent data corruption it's now going to be not so silent.
 
 -Ben

Ok, but we're at the i3 freeze. So is there a db patch set up for every
service to sync that, and FFE ready to let this land?

Because otherwise I'm very afraid this is going to get trapped as 1/2
implemented, which would be terrible for the release.

So basically, who is driving these patches out to the projects?

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [GSoC 2014] Proposal Template

2014-03-04 Thread Davanum Srinivas
Hi,

Is there something we can adopt? can you please send me some pointers
to templates from other communities?

-- dims

On Tue, Mar 4, 2014 at 1:46 PM, Masaru Nomura massa.nom...@gmail.com wrote:
 Hi,


 I have a question about an application format as I can't find it on wiki
 page. Is there any specific information I should provide within a proposal?
 I checked other communities and some of them have an application template,
 so I would just like to make it clear.


 Thank you,

 Masaru Nomura

 IRC : massa [freenode]


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-03-04 Thread Morgan Fainberg
On March 4, 2014 at 10:45:02, Vishvananda Ishaya (vishvana...@gmail.com) wrote:

On Mar 3, 2014, at 11:32 AM, Jay Pipes jaypi...@gmail.com wrote: 

 On Mon, 2014-03-03 at 11:09 -0800, Vishvananda Ishaya wrote: 
 On Mar 3, 2014, at 6:48 AM, Jay Pipes jaypi...@gmail.com wrote: 
 
 On Sun, 2014-03-02 at 12:05 -0800, Morgan Fainberg wrote: 
 Having done some work with MySQL (specifically around similar data 
 sets) and discussing the changes with some former coworkers (MySQL 
 experts) I am inclined to believe the move from varchar to binary 
 absolutely would increase performance like this. 
 
 
 However, I would like to get some real benchmarks around it and if it 
 really makes a difference we should get a smart UUID type into the 
 common SQLlibs (can pgsql see a similar benefit? Db2?) I think this 
 conversation. Should be split off from the keystone one at hand - I 
 don't want valuable information / discussions to get lost. 
 
 No disagreement on either point. However, this should be done after the 
 standardization to a UUID user_id in Keystone, as a separate performance 
 improvement patch. Agree? 
 
 Best, 
 -jay 
 
 -1 
 
 The expectation in other projects has been that project_ids and user_ids are 
 opaque strings. I need to see more clear benefit to enforcing stricter 
 typing on these, because I think it might break a lot of things. 
 
 What does Nova lose here? The proposal is to have Keystone's user_id 
 values be UUIDs all the time. There would be a migration or helper 
 script against Nova's database that would change all non-UUID user_id 
 values to the Keystone UUID values. 

So I don’t have a problem with keystone internally using uuids, but forcing 
a migration of user identifiers isn’t something that should be taken lightly. 
One example is external logging and billing systems which now have to be 
migrated. 

I’m not opposed to the migration in principle. We may have to do a similar 
thing for project_ids with hierarchical multitenancy, for example. I just 
think we need a really good reason to do it, and the performance arguments 
just don’t seem good enough without a little empirical data. 

Vish 
Any one of the proposals we’re planning on using will not affect any current 
IDs.  Since the user_id is a blob, if we start issuing a new “id” format, 
ideally it shouldn’t matter as long as old IDs continue to work. If we do make 
any kind of migration for issued IDs I would expect that to be very deliberate 
and outside of this change set. Specifically this change set would enable 
multiple LDAP backends (among other user_id uniqueness benefits for federation, 
etc). 

I am very concerned about the external tools that reference IDs we currently 
have.

—Morgan





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Thought exercise for a V2 only world

2014-03-04 Thread Chris Behrens

On Mar 4, 2014, at 4:09 AM, Sean Dague s...@dague.net wrote:

 On 03/04/2014 01:14 AM, Chris Behrens wrote:
 […]
 I don’t think I have an answer, but I’m going to throw out some of my random 
 thoughts about extensions in general. They might influence a longer term 
 decision. But I’m also curious if I’m the only one that feels this way:
 
 I tend to feel like extensions should start outside of nova and any other 
 code needed to support the extension should be implemented by using hooks in 
 nova. The modules implementing the hook code should be shipped with the 
 extension. If hooks don’t exist where needed, they should be created in 
 trunk. I like hooks. Of course, there’s probably such a thing as too many 
 hooks, so… hmm… :)  Anyway, this addresses another annoyance of mine whereby 
 code for extensions is mixed in all over the place. Is it really an 
 extension if all of the supporting code is in ‘core nova’?
 
 That said, I then think that the only extensions shipped with nova are 
 really ones we deem “optional core API components”. “optional” and “core” 
 are probably oxymorons in this context, but I’m just going to go with it. 
 There would be some sort of process by which we let extensions “graduate” 
 into nova.
 
 Like I said, this is not really an answer. But if we had such a model, I 
 wonder if it turns “deprecating extensions” into something more like 
 “deprecating part of the API”… something less likely to happen. Extensions 
 that aren’t used would more likely just never graduate into nova.
 
 So this approach actually really concerns me, because what it says is
 that we should be optimizing Nova for out of tree changes to the API
 which are vendor specific. Which I think is completely the wrong
 direction. Because in that world you'll never be able to move between
 Nova installations. What's worse is you'll get multiple people
 implementing the same feature out of tree, slightly differently.

Right. And I have an internal conflict because I also tend to agree with what 
you’re saying. :) But I think that if we have API extensions at all, we have 
your issue of “never being able to move”. Well, maybe not “never”, because at 
least they’d be easy to “turn on” if they are in nova. But I think for the 
random API extension that only 1 person ever wants to enable, there’s your same 
problem. This is somewhat off-topic, but I just don’t want a ton of bloat in 
nova for something few people use.

 
 I 100% agree the current extensions approach is problematic. It's used
 as a way to circumvent the idea of a stable API (mostly with oh, it's
 an extension, we need this feature right now, and it's not part of core
 so we don't need to give the same guaruntees.)

Yeah, totally..  that’s bad.

 
 So realistically I want to march us towards a place where we stop doing
 that. Nova out of the box should have all the knobs that anyone needs to
 build these kinds of features on top of. If not, we should fix that. It
 shouldn't be optional.

Agree, although I’m not sure if I’m reading this correctly as it sounds like 
you want the knobs that you said above concern you. I want some sort of 
balance. There’s extensions I think absolutely should be part of nova as 
optional features… but I don’t want everything. :)

- Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Change in openstack/neutron[master]: Permit ICMPv6 RAs only from known routers

2014-03-04 Thread Robert Li (baoli)
Hi Xu Han  Sean,

Is this code going to be committed as it is? Based on this morning's
discussion, I thought that the IP address used to install the RA rule
comes from the qr-xxx interface's LLA address. I think that I'm confused.
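
(For readers of the archive: the rule under discussion is essentially of the
following shape -- the link-local address here is made up for illustration,
this is not the actual patch output.)

# Illustrative only: accept router advertisements (ICMPv6 type 134) solely
# when sourced from the router's link-local address.
ra_rule = ['-p', 'ipv6-icmp', '--icmpv6-type', '134',
           '-s', 'fe80::f816:3eff:fe12:3456/128', '-j', 'RETURN']
# roughly: ip6tables -A neutron-...-i<port> -p ipv6-icmp --icmpv6-type 134 \
#          -s fe80::f816:3eff:fe12:3456/128 -j RETURN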

Also this bug: Allow LLA as router interface of IPv6 subnet
https://review.openstack.org/76125 was created due to comments to 72252.
If we don't need to create a new LLA for the gateway IP, is the fix still
needed? 

Just trying to sync up with you guys on them.

Thanks,
Robert



On 3/4/14 3:02 AM, Sean M. Collins (Code Review) rev...@openstack.org
wrote:

Sean M. Collins has posted comments on this change.

Change subject: Permit ICMPv6 RAs only from known routers
..


Patch Set 4: Looks good to me, but someone else must approve

Automatically re-added by Gerrit trivial rebase detection script.

--
To view, visit https://review.openstack.org/72252
To unsubscribe, visit https://review.openstack.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I1d5c7aaa8e4cf057204eb746c0faab2c70409a94
Gerrit-PatchSet: 4
Gerrit-Project: openstack/neutron
Gerrit-Branch: master
Gerrit-Owner: Xu Han Peng xuh...@cn.ibm.com
Gerrit-Reviewer: Arista Testing arista-openstack-t...@aristanetworks.com
Gerrit-Reviewer: Baodong (Robert) Li ba...@cisco.com
Gerrit-Reviewer: Big Switch CI openstack...@bigswitch.com
Gerrit-Reviewer: Brian Haley brian.ha...@hp.com
Gerrit-Reviewer: Brocade CI openstack_ger...@brocade.com
Gerrit-Reviewer: Cisco Neutron CI cisco-openstack-neutron...@cisco.com
Gerrit-Reviewer: Hyper-V CI hyper-v...@microsoft.com
Gerrit-Reviewer: Jenkins
Gerrit-Reviewer: Midokura CI Bot lu...@midokura.com
Gerrit-Reviewer: Miguel Angel Ajo miguelan...@ajo.es
Gerrit-Reviewer: NEC OpenStack CI nec-openstack...@iaas.jp.nec.com
Gerrit-Reviewer: Neutron Ryu ryu-openstack-rev...@lists.sourceforge.net
Gerrit-Reviewer: Nuage CI nuage...@nuagenetworks.net
Gerrit-Reviewer: One Convergence CI oc-neutron-t...@oneconvergence.com
Gerrit-Reviewer: PLUMgrid CI plumgrid-ci...@plumgrid.com
Gerrit-Reviewer: Sean M. Collins sean_colli...@cable.comcast.com
Gerrit-Reviewer: Xu Han Peng xuh...@cn.ibm.com
Gerrit-Reviewer: mark mcclain mmccl...@yahoo-inc.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Thought exercise for a V2 only world

2014-03-04 Thread Sean Dague
On 03/04/2014 02:03 PM, Chris Behrens wrote:
 
 On Mar 4, 2014, at 4:09 AM, Sean Dague s...@dague.net
 mailto:s...@dague.net wrote:
 
 On 03/04/2014 01:14 AM, Chris Behrens wrote:
 […]
 I don’t think I have an answer, but I’m going to throw out some of my
 random thoughts about extensions in general. They might influence a
 longer term decision. But I’m also curious if I’m the only one that
 feels this way:

 I tend to feel like extensions should start outside of nova and any
 other code needed to support the extension should be implemented by
 using hooks in nova. The modules implementing the hook code should be
 shipped with the extension. If hooks don’t exist where needed, they
 should be created in trunk. I like hooks. Of course, there’s probably
 such a thing as too many hooks, so… hmm… :)  Anyway, this addresses
 another annoyance of mine whereby code for extensions is mixed in all
 over the place. Is it really an extension if all of the supporting
 code is in ‘core nova’?

 That said, I then think that the only extensions shipped with nova
 are really ones we deem “optional core API components”. “optional”
 and “core” are probably oxymorons in this context, but I’m just going
 to go with it. There would be some sort of process by which we let
 extensions “graduate” into nova.

 Like I said, this is not really an answer. But if we had such a
 model, I wonder if it turns “deprecating extensions” into something
 more like “deprecating part of the API”… something less likely to
 happen. Extensions that aren’t used would more likely just never
 graduate into nova.

 So this approach actually really concerns me, because what it says is
 that we should be optimizing Nova for out of tree changes to the API
 which are vendor specific. Which I think is completely the wrong
 direction. Because in that world you'll never be able to move between
 Nova installations. What's worse is you'll get multiple people
 implementing the same feature out of tree, slightly differently.
 
 Right. And I have an internal conflict because I also tend to agree with
 what you’re saying. :) But I think that if we have API extensions at
 all, we have your issue of “never being able to move”. Well, maybe not
 “never”, because at least they’d be easy to “turn on” if they are in
 nova. But I think for the random API extension that only 1 person ever
 wants to enable, there’s your same problem. This is somewhat off-topic,
 but I just don’t want a ton of bloat in nova for something few people use.
 

 I 100% agree the current extensions approach is problematic. It's used
 as a way to circumvent the idea of a stable API (mostly with oh, it's
 an extension, we need this feature right now, and it's not part of core
 so we don't need to give the same guaruntees.)
 
 Yeah, totally..  that’s bad.
 

 So realistically I want to march us towards a place where we stop doing
 that. Nova out of the box should have all the knobs that anyone needs to
 build these kinds of features on top of. If not, we should fix that. It
 shouldn't be optional.
 
 Agree, although I’m not sure if I’m reading this correctly as it sounds
 like you want the knobs that you said above concern you. I want some
 sort of balance. There’s extensions I think absolutely should be part of
 nova as optional features… but I don’t want everything. :)

I want to give the knobs to the users. If we thought it was important
enough to review and test in Nova, then we made a judgement call that
people should have access to it.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WARNING: ... This application has not enabled MySQL traditional mode, which means silent data corruption may occur - real issue?

2014-03-04 Thread Ben Nemec

On 2014-03-04 12:51, Sean Dague wrote:

On 03/04/2014 01:27 PM, Ben Nemec wrote:

This warning should be gone by default once
https://github.com/openstack/oslo-incubator/commit/dda24eb4a815914c29e801ad0176630786db8734
gets synced.  I believe there is work underway by the db team to get
that done.

Note that the reason it will be gone is that we're changing the default
oslo db mode to traditional, so if we have any code that would have
triggered silent data corruption it's now going to be not so silent.

-Ben


Ok, but we're at the i3 freeze. So is there a db patch set up for every
service to sync that, and FFE ready to let this land?

Because otherwise I'm very afraid this is going to get trapped as 1/2
implemented, which would be terrible for the release.

So basically, who is driving these patches out to the projects?

-Sean


I'm not sure.  We're tracking the sync work here: 
https://etherpad.openstack.org/p/Icehouse-nova-oslo-sync but it just 
says the db team is working on it.


Adding Joe and Doug since I think they know more about what's going on 
with this.


If we can't get db synced, it's basically a bit flip to turn on 
traditional mode in the projects that are seeing this message right now. 
 I'd rather not since we want to drop support for that in favor of the 
general sql_mode option, but it can certainly be done if necessary.
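
For reference, the effect of that bit flip at the database layer is roughly
the following (an illustration, not the oslo.db implementation; the
connection URL is a placeholder):

# Illustration only: "traditional" mode makes MySQL raise errors on
# out-of-range or truncated data instead of silently mangling it.
import sqlalchemy
from sqlalchemy import event

engine = sqlalchemy.create_engine('mysql+pymysql://user:secret@localhost/nova')


@event.listens_for(engine, 'connect')
def _set_sql_mode(dbapi_connection, connection_record):
    cursor = dbapi_connection.cursor()
    cursor.execute("SET SESSION sql_mode = 'TRADITIONAL'")
    cursor.close()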


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-04 Thread Joe Gordon
On Tue, Mar 4, 2014 at 1:06 AM, Qin Zhao chaoc...@gmail.com wrote:
 I think the current snapshot implementation can be a solution sometimes, but
 it is NOT exact same as user's expectation. For example, a new blueprint is
 created last week,
 https://blueprints.launchpad.net/nova/+spec/driver-specific-snapshot, which
 seems a little similar with this discussion. I feel the user is requesting
 Nova to create in-place snapshot (not a new image), in order to revert the
 instance to a certain state. This capability should be very useful when
 testing new software or system settings. It seems a short-term temporary
 snapshot associated with a running instance for Nova. Creating a new
 instance is not that convenient, and may be not feasible for the user,
 especially if he or she is using public cloud.


Why isn't it easy to create a new instance from a snapshot?
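
(For context on the mechanism described in the quoted thread below -- qemu's
-snapshot flag and libvirt's transient disk element -- the effect is
essentially a throwaway qcow2 overlay on top of the real disk. A rough
sketch, with paths and the qemu-img dependency assumed purely for
illustration:)

# Rough sketch of the transient-disk mechanism discussed below; paths and the
# qemu-img dependency (including the -F backing-format flag) are assumptions.
import os
import subprocess


def create_transient_overlay(base_image):
    # Run the guest on a copy-on-write overlay whose backing file is the real
    # disk; all writes land in the overlay, the base image stays untouched.
    overlay = base_image + '.transient.qcow2'
    subprocess.check_call(['qemu-img', 'create', '-f', 'qcow2',
                           '-b', base_image, '-F', 'qcow2', overlay])
    return overlay   # boot the instance from this file instead of base_image


def discard_transient_overlay(overlay):
    # On stop (or reboot, if that is the chosen semantics) simply delete the
    # overlay: the instance comes back in its original, clean state.
    if os.path.exists(overlay):
        os.unlink(overlay)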


 On Tue, Mar 4, 2014 at 1:32 PM, Nandavar, Divakar Padiyar
 divakar.padiyar-nanda...@hp.com wrote:

  Why reboot an instance? What is wrong with deleting it and create a
  new one?

 You generally use non-persistent disk mode when you are testing new
 software or experimenting with settings.   If something goes wrong just
 reboot and you are back to clean state and start over again.I feel it's
 convenient to handle this with just a reboot rather than recreating the
 instance.

 Thanks,
 Divakar

 -Original Message-
 From: Joe Gordon [mailto:joe.gord...@gmail.com]
 Sent: Tuesday, March 04, 2014 10:41 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova][cinder] non-persistent storage(after
 stopping VM, data will be rollback automatically), do you think we shoud
 introduce this feature?
 Importance: High

 On Mon, Mar 3, 2014 at 8:13 PM, Zhangleiqiang zhangleiqi...@huawei.com
 wrote:
 
  This sounds like ephemeral storage plus snapshots.  You build a base
  image, snapshot it then boot from the snapshot.
 
 
  Non-persistent storage/disk is useful for sandbox-like environment, and
   this feature has existed in VMware ESX since version 4.1. The
  implementation of ESX is the same as what you said, boot from snapshot of
  the disk/volume, but it will also *automatically* delete the transient
  snapshot after the instance reboots or shutdowns. I think the whole
  procedure may be controlled by OpenStack other than user's manual
  operations.

 Why reboot an instance? What is wrong with deleting it and create a new
 one?

 
  As far as I know, libvirt already defines the corresponding transient
  element in domain xml for non-persistent disk ( [1] ), but it cannot 
  specify
  the location of the transient snapshot. Although qemu-kvm has provided
  support for this feature by the -snapshot command argument, which will
  create the transient snapshot under /tmp directory, the qemu driver of
  libvirt don't support transient element currently.
 
  I think the steps of creating and deleting transient snapshot may be
  better to done by Nova/Cinder other than waiting for the transient 
  support
  added to libvirt, as the location of transient snapshot should specified by
  Nova.
 
 
  [1] http://libvirt.org/formatdomain.html#elementsDisks
  --
  zhangleiqiang
 
  Best Regards
 
 
  -Original Message-
  From: Joe Gordon [mailto:joe.gord...@gmail.com]
  Sent: Tuesday, March 04, 2014 11:26 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Cc: Luohao (brian)
  Subject: Re: [openstack-dev] [nova][cinder] non-persistent
  storage(after stopping VM, data will be rollback automatically), do
  you think we shoud introduce this feature?
 
  On Mon, Mar 3, 2014 at 6:00 PM, Yuzhou (C) vitas.yuz...@huawei.com
  wrote:
   Hi stackers,
  
   As far as I know ,there are two types of storage used by VM in
   openstack:
  Ephemeral Storage and Persistent Storage.
   Data on ephemeral storage ceases to exist when the instance it is
   associated
  with is terminated. Rebooting the VM or restarting the host server,
  however, will not destroy ephemeral data.
   Persistent storage means that the storage resource outlives any
   other
  resource and is always available, regardless of the state of a running
  instance.
  
   There is a use case that maybe need a new type of storage, maybe we
   can
  call it non-persistent storage .
   The use case is that VMs are assigned to the public ephemerally in
   public
  areas.
   After the VM is used, new data on storage of VM ceases to exist
   when the
  instance it is associated with is stopped.
   It means stop the VM, Non-persistent storage used by VM will be
   rollback
  automatically.
  
   Is there any other suggestions? Or any BPs about this use case?
  
 
  This sounds like ephemeral storage plus snapshots.  You build a base
  image, snapshot it then boot from the snapshot.
 
   Thanks!
  
   Zhou Yu
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
  

Re: [openstack-dev] [Nova] What is the currently accepted way to do plugins

2014-03-04 Thread Murray, Paul (HP Cloud Services)
In a chat with Dan Smith on IRC, he was suggesting that the important thing was 
not to use class paths in the config file. I can see that internal 
implementation should not be exposed in the config files - that way the 
implementation can change without impacting the nova users/operators.

Sandy, I'm not sure I really get the security argument. Python provides every 
means possible to inject code, not sure plugins are so different. Certainly 
agree on choosing which plugins you want to use though.

-Original Message-
From: Sandy Walsh [mailto:sandy.wa...@rackspace.com] 
Sent: 04 March 2014 17:50
To: OpenStack Development Mailing List (not for usage questions); Murray, Paul 
(HP Cloud Services)
Subject: RE: [openstack-dev] [Nova] What is the currently accepted way to do 
plugins

This brings up something that's been gnawing at me for a while now ... why use 
entry-point based loaders at all? I don't see the problem they're trying to 
solve. (I thought I got it for a while, but I was clearly fooling myself)

1. If you use the load all drivers in this category feature, that's a 
security risk since any compromised python library could hold a trojan.

2. otherwise you have to explicitly name the plugins you want (or don't want) 
anyway, so why have the extra indirection of the entry-point? Why not just name 
the desired modules directly? 

3. the real value of a loader would be to also extend/manage the python path 
... that's where the deployment pain is. Use fully qualified filename driver 
and take care of the pathing for me. Abstracting the module/class/function 
name isn't a great win. 

I don't see where the value is for the added pain (entry-point 
management/package metadata) it brings. 

CMV,

-S

From: Russell Bryant [rbry...@redhat.com]
Sent: Tuesday, March 04, 2014 1:29 PM
To: Murray, Paul (HP Cloud Services); OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] What is the currently accepted way to do 
plugins

On 03/04/2014 06:27 AM, Murray, Paul (HP Cloud Services) wrote:
 One of my patches has a query asking if I am using the agreed way to 
 load plugins: https://review.openstack.org/#/c/71557/

 I followed the same approach as filters/weights/metrics using 
 nova.loadables. Was there an agreement to do it a different way? And 
 if so, what is the agreed way of doing it? A pointer to an example or 
 even documentation/wiki page would be appreciated.

The short version is entry-point based plugins using stevedore.

We should be careful though.  We need to limit what we expose as external plug 
points, even if we consider them unstable.  If we don't want it to be public, 
it may not make sense for it to be a plugin interface at all.

--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WARNING: ... This application has not enabled MySQL traditional mode, which means silent data corruption may occur - real issue?

2014-03-04 Thread Joe Gordon
On Tue, Mar 4, 2014 at 11:08 AM, Ben Nemec openst...@nemebean.com wrote:
 On 2014-03-04 12:51, Sean Dague wrote:

 On 03/04/2014 01:27 PM, Ben Nemec wrote:

 This warning should be gone by default once

 https://github.com/openstack/oslo-incubator/commit/dda24eb4a815914c29e801ad0176630786db8734
 gets synced.  I believe there is work underway by the db team to get
 that done.

 Note that the reason it will be gone is that we're changing the default
 oslo db mode to traditional, so if we have any code that would have
 triggered silent data corruption it's now going to be not so silent.

 -Ben


 Ok, but we're at the i3 freeze. So is there a db patch set up for every
 service to sync that, and FFE ready to let this land?

 Because otherwise I'm very afraid this is going to get trapped as 1/2
 implemented, which would be terrible for the release.

 So basically, who is driving these patches out to the projects?

 -Sean


 I'm not sure.  We're tracking the sync work here:
 https://etherpad.openstack.org/p/Icehouse-nova-oslo-sync but it just says
 the db team is working on it.

 Adding Joe and Doug since I think they know more about what's going on with
 this.

https://github.com/openstack/oslo-incubator/blob/master/MAINTAINERS#L100


 If we can't get db synced, it's basically a bit flip to turn on traditional
 mode in the projects that are seeing this message right now.  I'd rather not
 since we want to drop support for that in favor of the general sql_mode
 option, but it can certainly be done if necessary.

 -Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-03-04 Thread Gokul Kandiraju
Dear All,



We are working on a framework where we want to monitor the system and take
certain actions when specific events or situations occur. Here are two
examples of 'different' situations:



   Example 1: A VM's-Owner and N/W's-owner are different == this could
mean a violation == we need to take some action

   Example 2: A simple policy such as (VM-migrate of all VMs on possible
node failure) OR (a more complex Energy Policy that may involve
optimization).



Both these examples need monitoring and actions to be taken when certain
events happen (or through polling). However, the first one falls into the
Compliance domain with Boolean conditions getting evaluated while the
second one may require a richer set of expressions, allowing for
sequences or algorithms.

 So far, based on this discussion, it seems that these are *separate*
initiatives in the community. I am understanding the Congress project to be
in the domain of Boolean conditions (used for Compliance, etc.) where as
the Run-time-policies (Jay's proposal) where policies can be expressed as
rules, algorithms with higher-level goals. Is this understanding correct?

Also, looking at all the mails, this is what I am reading:



 1. Congress -- Focused on Compliance [ is that correct? ] (Boolean
constraints and logic)



 2. Runtime-Policies -- Jay's mail -- Focused on Runtime policies for
Load Balancing, Availability, Energy, etc. (sequences of actions, rules,
algorithms)



 3. SolverScheduler -- Focused on Placement [ static or runtime ] and
will be invoked by the (above) policy engines



 4. Gantt - Focused on (Holistic) Scheduling



 5. Neat -- seems to be a special case of Runtime-Policies  (policies
based on Load)



Would this be correct understanding?  We need to understand this to
contribute to the right project. :)



Thanks!

-Gokul



On Fri, Feb 28, 2014 at 5:46 PM, Jay Lau jay.lau@gmail.com wrote:

 Hi Yathiraj and Tim,

 Really appreciate your comments here ;-)

 I will prepare some detailed slides or documents before summit and we can
 have a review then. It would be great if OpenStack can provide DRS
 features.

 Thanks,

 Jay



 2014-03-01 6:00 GMT+08:00 Tim Hinrichs thinri...@vmware.com:

 Hi Jay,

 I think the Solver Scheduler is a better fit for your needs than Congress
 because you know what kinds of constraints and enforcement you want.  I'm
 not sure this topic deserves an entire design session--maybe just talking a
 bit at the summit would suffice (I *think* I'll be attending).

 Tim

 - Original Message -
 | From: Jay Lau jay.lau@gmail.com
 | To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 | Sent: Wednesday, February 26, 2014 6:30:54 PM
 | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for
 OpenStack run time policy to manage
 | compute/storage resource
 |
 |
 |
 |
 |
 |
 | Hi Tim,
 |
 | I'm not sure if we can put resource monitor and adjust to
 | solver-scheduler (Gantt), but I have proposed this to Gantt design
 | [1], you can refer to [1] and search jay-lau-513.
 |
 | IMHO, Congress does monitoring and also take actions, but the actions
 | seems mainly for adjusting single VM network or storage. It did not
 | consider migrating VM according to hypervisor load.
 |
 | Not sure if this topic deserved to be a design session for the coming
 | summit, but I will try to propose.
 |
 |
 |
 |
 | [1] https://etherpad.openstack.org/p/icehouse-external-scheduler
 |
 |
 |
 | Thanks,
 |
 |
 | Jay
 |
 |
 |
 | 2014-02-27 1:48 GMT+08:00 Tim Hinrichs  thinri...@vmware.com  :
 |
 |
 | Hi Jay and Sylvain,
 |
 | The solver-scheduler sounds like a good fit to me as well. It clearly
 | provisions resources in accordance with policy. Does it monitor
 | those resources and adjust them if the system falls out of
 | compliance with the policy?
 |
 | I mentioned Congress for two reasons. (i) It does monitoring. (ii)
 | There was mention of compute, networking, and storage, and I
 | couldn't tell if the idea was for policy that spans OS components or
 | not. Congress was designed for policies spanning OS components.
 |
 |
 | Tim
 |
 | - Original Message -
 |
 | | From: Jay Lau  jay.lau@gmail.com 
 | | To: OpenStack Development Mailing List (not for usage questions)
 | |  openstack-dev@lists.openstack.org 
 |
 |
 | | Sent: Tuesday, February 25, 2014 10:13:14 PM
 | | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal
 | | for OpenStack run time policy to manage
 | | compute/storage resource
 | |
 | |
 | |
 | |
 | |
 | | Thanks Sylvain and Tim for the great sharing.
 | |
| | @Tim, I also went through Congress and have the same feeling as
| | Sylvain: it is likely that Congress is doing something similar to
| | Gantt, providing a holistic way for deploying. What I want to do is
| | to provide some functions which are very similar to VMware DRS that
| | can do some adaptive 

Re: [openstack-dev] [nova] Thought exercise for a V2 only world

2014-03-04 Thread Chris Behrens

On Mar 4, 2014, at 11:14 AM, Sean Dague s...@dague.net wrote:

 
 I want to give the knobs to the users. If we thought it was important
 enough to review and test in Nova, then we made a judgement call that
 people should have access to it.

Oh, I see. But, I don’t agree, certainly not for every single knob. It’s less 
of an issue in the private cloud world, but when you start offering this as a 
service, not everything is appropriate to enable.

- Chris



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request: Murano

2014-03-04 Thread Georgy Okrokvertskhov
Hi,

Here is an etherpad page with current Murano status
http://etherpad.openstack.org/p/murano-incubation-status.

Thanks
Georgy


On Mon, Mar 3, 2014 at 9:04 PM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com wrote:

 Hi Zane,

 Thank you very much for this question.

 First of all let me highlight that Murano DSL was much inspired by TOSCA.
 We carefully read this standard before our move to the Murano DSL. The TOSCA
 standard has a lot of very well-designed concepts and ideas which we
 reused in Murano. There is one obvious drawback of TOSCA - a very heavy and
 verbose XML based syntax. Taking into account that OpenStack itself is
 clearly moving away from XML based representations, it would be strange to
 bring this huge XML monster back at a higher level. Frankly, the current
 Murano workflows language is XML based and it is quite painful to write
 workflows without an additional instrument like an IDE.

 Now let me remind you that TOSCA has a defined scope of responsibility.
 There is a list of areas which are out of scope. For Murano it was
 important to see that the following items are out of TOSCA scope:
 Citations from [1]:
 ...
 2. The definition of concrete plans, i.e. the definition of plans in any
 process modeling language like BPMN or BPEL.
 3. The definition of a language for defining plans (i.e. a new process
 modeling language).
 ...
 Plans, in TOSCA's understanding, are something similar to workflows. This is
 what we address with Murano workflows.

 Now let me go through the TOSCA ideas and show how they are reflected in
 Murano. It will be a long piece of text so feel free to skip it.

 Taking this into account, let's review what we have in Murano as an
 application package. Inside an application package we have:
 1. Application metadata which describes the application, its relations and
 properties
 2. Heat template snippets
 3. Scripts for deployment
 4. Workflow definitions

 In the TOSCA document, section 3.2.1 introduces Service Templates.
 These templates are declarative descriptions of service components and
 service topologies. Service templates can be stored in a catalog to be found
 and used by users. This service template description is abstracted from the
 actual infrastructure implementation, and each cloud provider maps this
 definition to the actual cloud infrastructure. This is definitely a part which
 is already covered by Heat.
 The same section says the following:  Making a concrete instance of a
 Topology Template can be done by running a corresponding Plan (so-called
 instantiating management plan, a.k.a. build plan). This build plan could be
 provided by the service developer who also creates the Service Template. This
 plan part, which is out of scope for TOSCA, is essentially what Murano adds as
 a part of the application definition.

 Section 3.3 of the TOSCA document introduces a new entity - artifacts.
 An artifact is a name for content which is needed for service deployment,
 including scripts, executables, binaries and images. That is why Murano
 has a metadata service to store artifacts as part of the application package.
 Moreover, Murano is working with the Glance team to move this metadata
 repository from Murano to Glance, providing a generic artifact repository
 which can be used not only by Murano but by any other service.

 Sections 3.4 and 3.5 explain the ideas of Relationships with
 Capabilities and Service Composition. Murano actually implements all of
 these high level features. The application definition has a section with
 contract definitions. This contract syntax is not just a declaration of the
 relations and capabilities but also a way to make assertions and do on-the-fly
 type validation and conversion if needed. Section 10 reveals the details of
 requirements. It explains that requirements can be complex: they can inherit
 from each other and be abstract in order to define a broad set of required
 values. For example, when a service requires a relational database it will
 require type=RDBMS without assuming the actual DB implementation (MySQL,
 PostgreSQL or MSSQL).

 In order to solve the problem of complex requirements, relations and
 service composition, we introduced classes in our DSL. This was presented and
 discussed in this e-mail thread [3]. The Murano DSL syntax allows the
 application package writer to compose applications from existing classes by
 using inheritance and class properties with complex types such as other
 classes. It is possible to define a requirement using abstract classes, to
 describe specific types of applications and services like databases, web
 servers and others. Using class inheritance, Murano will be able to satisfy a
 requirement with a specific object whose parent class is appropriate, by
 checking the object's whole hierarchy of parent classes, which can be
 abstract.
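
 As a rough illustration of that matching idea only (plain Python, not the
 actual Murano DSL; every name below is invented for this example), resolving
 a requirement by walking a candidate's chain of parent classes could look
 roughly like this:

     # Illustrative sketch of hierarchy-based requirement matching;
     # not Murano code.

     class Database(object):      # abstract requirement type
         pass

     class PostgreSQL(Database):  # concrete service implementation
         pass

     def satisfies(candidate, required_type):
         # Check the candidate's whole hierarchy of parent classes,
         # which may include abstract ones.
         return required_type in type(candidate).__mro__

     print(satisfies(PostgreSQL(), Database))  # True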

 I don't want to cover all of the entities defined in TOSCA, as the most
 important ones were discussed already. There are implementations of many
 TOSCA concepts in Murano, like class properties for TOSCA Properties, class
 methods for TOSCA Operations, etc.

 TL;DR

 Summarizing 

Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Russell Bryant
On 03/04/2014 12:42 PM, Sean Dague wrote:
 On 03/04/2014 12:26 PM, Vishvananda Ishaya wrote:
 
 On Mar 4, 2014, at 9:10 AM, Russell Bryant rbry...@redhat.com
 wrote:
 
 Thank you all for your participation on this topic.  It has
 been quite controversial, but the API we expose to our users is
 a really big deal. I'm feeling more and more confident that
 we're coming through this with a much better understanding of
 the problem space overall, as well as a better plan going
 forward than we had a few weeks ago.
 
 Hey Russell,
 
 Thanks for bringing this to the mailing list and being open to
 discussion and collaboration. Also, thanks to everyone who is
 participating in the plan. Doing this kind of thing in the open
 is difficult and it has lead to a ton of debate, but this is the
 right way to do things. It says a lot about the strength of our
 community that we are able to have conversations like this
 without devolving into arguments and flame wars.
 
 Vish
 
 +1, and definitely appreciate Russell's leadership through this
 whole discussion.

Thanks for the kind words.  It really is a group effort.  Even in the
face of an incredibly controversial topic, we can't be afraid to ask
hard questions.  It takes a lot of maturity and focus to work through
the answers toward some sort of consensus without it turning into, as
Vish said, arguments and flame wars.

Nova (and OpenStack overall) is made up of a pretty incredible group
of people.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Meeting Tuesday March 4th at 19:00 UTC

2014-03-04 Thread Elizabeth Krumbach Joseph
On Mon, Mar 3, 2014 at 8:53 AM, Elizabeth Krumbach Joseph
l...@princessleia.com wrote:
 The OpenStack Infrastructure (Infra) team is hosting our weekly
 meeting tomorrow, Tuesday March 4th, at 19:00 UTC in
 #openstack-meeting

Thanks to everyone who joined us, meeting minutes and log here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-03-04-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-03-04-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-03-04-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What is the currently accepted way to do plugins

2014-03-04 Thread Dan Smith
 In a chat with Dan Smith on IRC, he was suggesting that the important
 thing was not to use class paths in the config file. I can see that
 internal implementation should not be exposed in the config files -
 that way the implementation can change without impacting the nova
 users/operators.
 
 Sandy, I'm not sure I really get the security argument. Python
 provides every means possible to inject code, not sure plugins are so
 different. Certainly agree on choosing which plugins you want to use
 though.

Yeah, so I don't think there's any security reason why one is better
than the other. I think that we've decided that providing a class path
is ugly, and I agree, especially if we have entry points at our disposal.
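
For reference, a minimal sketch of the entry-point style (the namespace and
plugin names below are made up purely for illustration): the plugin package
advertises itself under an entry-point group the service owns, and the
deployer's config refers to it by short name rather than a Python class path.
One way to resolve it is via stevedore:

    # The plugin package would declare in its setup.cfg (hypothetical names):
    #
    #   [entry_points]
    #   example.plugins =
    #       my_plugin = example_pkg.plugin:MyPlugin
    #
    # and the consuming service loads the short name taken from config:

    from stevedore import driver

    def load_plugin(name):
        mgr = driver.DriverManager(
            namespace='example.plugins',  # entry-point group owned by the service
            name=name,                    # short name from config, not a class path
            invoke_on_load=True,          # instantiate the plugin class
        )
        return mgr.driver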

Now, the actual concern is not related to any of that, but about whether
we're going to open this up as a new thing we support. In general, my
reaction to adding new APIs people expect to be stable is no. However,
I understand why things like the resource reporting and even my events
mechanism are very useful for deployers to do some plumbing and
monitoring of their environment -- things that don't belong upstream anyway.

So I'm conflicted. I think that for these two cases, as long as we can
say that it's not a stable interface, I think it's probably okay.
However, things like we've had in the past, where we provide a clear
plug point for something like Compute manager API class are clearly
off the table, IMHO.

--Dan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-openstacksdk] Minutes for 4 March Meeting

2014-03-04 Thread Brian Curtin
Today was the second python-openstacksdk meeting

Minutes: 
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-03-04-19.00.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-03-04-19.00.txt
Log: 
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-03-04-19.00.log.html

Action Items:
1. Remove the existing API strawman (https://review.openstack.org/#/c/75362/)
2. Sketch out a core HTTP layer to build on
3. Write a rough Identity client

The next meeting is scheduled for Tuesday 11 March at 1900 UTC / 1300 CST.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Change in openstack/neutron[master]: Permit ICMPv6 RAs only from known routers

2014-03-04 Thread Collins, Sean
On Tue, Mar 04, 2014 at 02:08:03PM EST, Robert Li (baoli) wrote:
 Hi Xu Han  Sean,
 
 Is this code going to be committed as it is? Based on this morning's
 discussion, I thought that the IP address used to install the RA rule
 comes from the qr-xxx interface's LLA address. I think that I'm confused.

Xu Han has a better grasp on the query than I do, but I'm going to try
and take a crack at explaining the code as I read through it. Here's
some sample data from the Neutron database - built using
vagrant_devstack. 

https://gist.github.com/sc68cal/568d6119eecad753d696

I don't have V6 addresses working in vagrant_devstack just yet, but for
the sake of discourse I'm going to use it as an example.

If you look at the queries he's building in 72252 - he's querying all
the ports on the network that are q_const.DEVICE_OWNER_ROUTER_INTF
(network:router_interface). The IPs of those ports are added to the list of
IPs.

Then a second query is done to find the port connected from the router
to the gateway, q_const.DEVICE_OWNER_ROUTER_GW
('network:router_gateway'). Those IPs are then appended to the list of
IPs.

Finally, the last query adds the IPs of the gateway for each subnet
in the network.

So, ICMPv6 traffic from ports that are either:

A) A gateway device
B) A router
C) The subnet's gateway 

Will be passed through to an instance.
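
Put another way (a loose sketch of the described logic only - the helper and
attribute names here are invented, this is not the actual code in 72252), the
set of source addresses the RA rule ends up allowing is built roughly like so:

    # Rough sketch of the logic described above; not real Neutron code.

    def allowed_ra_sources(ports, subnets):
        allowed = []
        for port in ports:
            # router interface ports and router gateway ports on the network
            if port.device_owner in ('network:router_interface',
                                     'network:router_gateway'):
                allowed.extend(port.fixed_ips)
        # plus the configured gateway address of each subnet
        for subnet in subnets:
            if subnet.gateway_ip:
                allowed.append(subnet.gateway_ip)
        return allowed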

Now, please take note that I have *not* discussed what *kind* of IP
address will be picked up. We intend for it to be a Link Local address,
but that will be/is addressed in other patch sets.

 Also this bug: Allow LLA as router interface of IPv6 subnet
 https://review.openstack.org/76125 was created due to comments to 72252.
 If We don't need to create a new LLA for the gateway IP, is the fix still
 needed? 

Yes - we still need this patch - because that code path is how we are
able to create ports on routers that have a link local address.


This is at least my understanding of our progress so far, but I'm not
perfect - Xu Han will probably have the last word.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-04 Thread Stefano Maffulli
-1 to the switch from me.

this question from Sean is of fundamental value:

On 03/03/2014 03:19 PM, Sean Dague wrote:
 #1) do we believe OFTC is fundamentally better equipped to resist a
 DDOS, or do we just believe they are a smaller target? The ongoing DDOS
 on meetup.com the past 2 weeks is a good indicator that being a smaller
 fish only helps for so long.

until we can say that *fundamentally* OFTC is not going to suffer
disruptions in the future I wouldn't even remotely consider a painful
switch like this one.

/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

