Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2014-12-16 Thread Eduard Matei
Hi,

Can someone point me to some working documentation on how to set up third
party CI? (joinfu's instructions don't seem to work, and manually running
the devstack-gate scripts fails:

Running gate_hook
Job timeout set to: 163 minutes
timeout: failed to run command
‘/opt/stack/new/devstack-gate/devstack-vm-gate.sh’: No such file or
directory
ERROR: the main setup script run by this job failed - exit code: 127
please look at the relevant log files to determine the root cause
Cleaning up host
... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz)

Build step 'Execute shell' marked build as failure.

I have a working Jenkins slave with devstack and our internal libraries, I
have the Gerrit Trigger Plugin working and triggering on patch creation; I
just need the actual job contents so that it can comment with the
test results.

Thanks,

Eduard
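
[For reference, a rough sketch of what such a job body could look like, as
a hedged approximation rather than an official job definition. It assumes
the Gerrit Trigger plugin exports GERRIT_PATCHSET_REVISION, that
devstack-gate still needs to be cloned into the workspace (the "No such
file or directory" error above usually means it wasn't), and that CI_USER,
CI_KEY and the log link are placeholders for the service account details;
the gerrit review SSH command itself is standard Gerrit CLI:

  # Hedged sketch of a third-party CI job body: fetch devstack-gate, run
  # the gate wrapper, then comment on the patch via Gerrit's SSH interface.
  import os
  import subprocess

  GERRIT_HOST = 'review.openstack.org'
  CI_USER = 'my-ci-account'                      # placeholder service account
  CI_KEY = os.path.expanduser('~/.ssh/ci_key')   # placeholder SSH key

  def run_devstack_gate():
      # The error above usually means devstack-gate was never fetched into
      # the workspace, so clone it before calling the wrapper script.
      if not os.path.isdir('devstack-gate'):
          subprocess.check_call(
              ['git', 'clone',
               'https://git.openstack.org/openstack-infra/devstack-gate'])
      env = dict(os.environ,
                 PROJECTS='openstack/cinder',
                 DEVSTACK_GATE_TEMPEST='1')
      return subprocess.call(['devstack-gate/devstack-vm-gate-wrap.sh'],
                             env=env) == 0

  def comment_on_review(commit, success):
      message = 'Build %s: <link to logs>' % ('succeeded' if success
                                              else 'failed')
      cmd = ['ssh', '-p', '29418', '-i', CI_KEY,
             '%s@%s' % (CI_USER, GERRIT_HOST),
             'gerrit', 'review', '-m', '"%s"' % message]
      if success:
          cmd += ['--verified', '+1']  # only if the account may vote
      subprocess.check_call(cmd + [commit])

  if __name__ == '__main__':
      commit = os.environ['GERRIT_PATCHSET_REVISION']  # set by the trigger
      comment_on_review(commit, run_devstack_gate())
]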

On Tue, Dec 9, 2014 at 1:59 PM, Eduard Matei eduard.ma...@cloudfounders.com wrote:

 Hi Darragh, thanks for your input

 I double checked the job settings and fixed it:
 - build triggers is set to Gerrit event
 - Gerrit trigger server is Gerrit (configured from Gerrit Trigger Plugin
 and tested separately)
 - Trigger on: Patchset Created
 - Gerrit Project: Type: Path, Pattern: openstack-dev/sandbox, Branches:
 Type: Path, Pattern: ** (was Type: Plain on both)
 Now the job is triggered by commits on openstack-dev/sandbox :)

 Regarding the Query and Trigger Gerrit Patches page, I found my patch using
 the query: status:open project:openstack-dev/sandbox change:139585 and I can
 trigger it manually and it executes the job.

 But I still have the problem: what should the job do? It doesn't actually
 do anything; it doesn't run tests or comment on the patch.
 Do you have an example of a job?

 Thanks,
 Eduard

 On Tue, Dec 9, 2014 at 1:13 PM, Bailey, Darragh dbai...@hp.com wrote:

 Hi Eduard,


 I would check the trigger settings in the job, particularly which type
 of pattern matching is being used for the branches. I've found that tends
 to be the spot that catches most people out when configuring jobs with the
 Gerrit Trigger plugin. If you're looking to trigger against all branches
 then you want Type: Path and Pattern: ** appearing in the UI.

 If you have sufficient access, using the 'Query and Trigger Gerrit
 Patches' page accessible from the main view will make it easier to
 confirm that your Jenkins instance can actually see changes in gerrit
 for the given project (which should mean that it can see the
 corresponding events as well). You can also use the same page to re-trigger
 PatchsetCreated events to see if you've set the patterns on the job
 correctly.

 Regards,
 Darragh Bailey

 Nothing is foolproof to a sufficiently talented fool - Unknown

 On 08/12/14 14:33, Eduard Matei wrote:
  Resending this to the dev ML as it seems I get a quicker response :)
 
  I created a job in Jenkins, added as Build Trigger: Gerrit Event:
  Patchset Created, chose as server the configured Gerrit server that
  was previously tested, then added the project openstack-dev/sandbox
  and saved.
  I made a change on dev sandbox repo but couldn't trigger my job.
 
  Any ideas?
 
  Thanks,
  Eduard
 
  On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei
  eduard.ma...@cloudfounders.com wrote:
 
  Hello everyone,
 
  Thanks to the latest changes to the service account creation
  process, we're one step closer to setting up our own CI platform
  for Cinder.
 
  So far we've got:
  - Jenkins master (with Gerrit plugin) and slave (with DevStack and
  our storage solution)
  - Service account configured and tested (can manually connect to
  review.openstack.org and get events and publish comments)
 
  The next step would be to set up a job to do the actual testing; this
  is where we're stuck.
  Can someone please point us to a clear example of what a job should
  look like (preferably for testing Cinder on Kilo)? Most links
  we've found are broken, or the tools/scripts no longer work.
  Also, we cannot change the Jenkins master too much (it's owned by the
  Ops team and they need a list of tools/scripts to review before
  installing/running, so we're not allowed to experiment).
 
  Thanks,
  Eduard
 
  --

  *Eduard Biceri Matei, Senior Software Developer*
  www.cloudfounders.com | eduard.ma...@cloudfounders.com


  *CloudFounders, The Private Cloud Software Company*

  Disclaimer:
  This email and any files transmitted with it are confidential and
  intended solely for the use of the individual or entity to whom
  they are addressed. If you are not the named addressee or an
  employee or agent responsible for delivering this message to the
  named addressee, you are hereby notified that you are not
  authorized to 

[openstack-dev] Cross-Project meeting, Tue December 16th, 21:00 UTC

2014-12-16 Thread Thierry Carrez
Dear PTLs, cross-project liaisons and anyone else interested,

We'll have a cross-project meeting Tuesday at 21:00 UTC, with the
following agenda:

* Next steps for cascading (joehuang)
* Providing an alternative to shipping default config file (ttx)
  * operators thread at [1]
  * https://bugs.launchpad.net/nova/+bug/1301519
  * tools/config/generate_sample.sh -b . -p nova -o etc/nova
* Open discussion & announcements

[1]
http://lists.openstack.org/pipermail/openstack-operators/2014-December/005658.html

See you there!

For more details on this meeting, please see:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] Cross-Project meeting, Tue December 16th, 21:00 UTC

2014-12-16 Thread joehuang
Hello,

I'll attend the IRC meeting if the network is not blocked. 

Best Regards
Chaoyi Huang ( Joe Huang )

-Original Message-
From: Thierry Carrez [mailto:thie...@openstack.org] 
Sent: Tuesday, December 16, 2014 5:20 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] Cross-Project meeting, Tue December 16th, 21:00 UTC

Dear PTLs, cross-project liaisons and anyone else interested,

We'll have a cross-project meeting Tuesday at 21:00 UTC, with the following 
agenda:

* Next steps for cascading (joehuang)
* Providing an alternative to shipping default config file (ttx)
  * operators thread at [1]
  * https://bugs.launchpad.net/nova/+bug/1301519
  * tools/config/generate_sample.sh -b . -p nova -o etc/nova
* Open discussion & announcements

[1]
http://lists.openstack.org/pipermail/openstack-operators/2014-December/005658.html

See you there!

For more details on this meeting, please see:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

--
Thierry Carrez (ttx)



Re: [openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-16 Thread Neil Jerram
Stefano Maffulli stef...@openstack.org writes:

 On 12/09/2014 04:11 PM, vad wrote:
[vad] how about the documentation in this case?... because it needs some
 place to document (a short description and a link to the vendor page) or
 list these kinds of out-of-tree plugins/drivers... just to make the user
 aware of the availability of such plugins/drivers compatible with a given
 openstack release.
 I checked with the documentation team and according to them, only the
 following plugins/drivers will get documented...
 1) in-tree plugins/drivers (full documentation)
 2) third-party plugins/drivers (i.e., ones that implement and follow this new
 proposal, a.k.a. partially-in-tree due to the integration module/code)...
 
 *** no listing/mention of such completely out-of-tree plugins/drivers ***

 Discoverability of documentation is a serious issue. As I commented on
 docs spec [1], I think there are already too many places, mini-sites and
 random pages holding information that is relevant to users. We should
 make an effort to keep things discoverable, even if not maintained
 necessarily by the same, single team.

 I think the docs team means that they are not able to guarantee
 documentation for third-party modules *themselves* (and never has been).
 The docs team is already overworked as it is; they can't take on
 more responsibilities.

 So once Neutron's code is split, documentation for the users of all
 third-party modules should find a good place to live in, indexed and
 searchable together with the rest of the docs. I'm hoping that we
 can find a place (ideally under docs.openstack.org?) where third-party
 documentation can live and be maintained by the teams responsible for
 the code, too.

 Thoughts?

I suggest a simple table, under docs.openstack.org, where each row has
the plugin/driver name, and then links to the documentation and code.
There should ideally be a very lightweight process for vendors to add
their row(s) to this table, and to edit those rows.

I don't think it makes sense for the vendor documentation itself to be
under docs.openstack.org, while the code is out of tree.

Regards,
Neil




Re: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split

2014-12-16 Thread Ihar Hrachyshka

On 15/12/14 18:57, Doug Hellmann wrote:
 There may be a similar problem managing dependencies on libraries
 that live outside of either tree. I assume you already decided how
 to handle that. Are you doing the same thing, and adding the
 requirements to neutron’s lists?

I guess the idea is to keep in neutron-*aas only those oslo-incubator
modules that are used there solely (=not used in main repo).

I think requirements are a bit easier and should track all direct
dependencies in each of the repos, so that in case the main repo decides
to drop one, the neutron-*aas repos are not broken.

For requirements, it's different because there is no major burden due
to duplicate entries in repos.

 
 On Dec 15, 2014, at 12:16 PM, Doug Wiegley do...@a10networks.com
 wrote:
 
 Hi all,
 
 Ihar and I discussed this on IRC, and are going forward with
 option 2 unless someone has a big problem with it.
 
 Thanks, Doug
 
 
 On 12/15/14, 8:22 AM, Doug Wiegley do...@a10networks.com
 wrote:
 
 Hi Ihar,
 
 I’m actually in favor of option 2, but it implies a few things
 about your time, and I wanted to chat with you before
 presuming.
 
 Maintenance can not involve breaking changes. At this point,
 the co-gate will block it.  Also, oslo graduation changes will
 have to be made in the services repos first, and then Neutron.
 
 Thanks, doug
 
 
 On 12/15/14, 6:15 AM, Ihar Hrachyshka ihrac...@redhat.com
 wrote:
 
 Hi all,
 
 the question arose recently in one of reviews for neutron-*aas
 repos to remove all oslo-incubator code from those repos since
 it's duplicated in neutron main repo. (You can find the link to the
 review at the end of the email.)
 
 Brief history: neutron repo was recently split into 4 pieces
 (main, neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The split
 resulted in each repository keeping their own copy of 
 neutron/openstack/common/... tree (currently unused in all 
 neutron-*aas repos that are still bound to modules from main
 repo).
 
 As an oslo liaison for the project, I wonder what's the best way to 
 manage oslo-incubator files. We have several options:
 
 1. just kill all the neutron/openstack/common/ trees from
 neutron-*aas repositories and continue using modules from main
 repo.
 
 2. kill all duplicate modules from neutron-*aas repos and leave
 only those that are used in those repos but not in main repo.
 
 3. fully duplicate all those modules in each of four repos that use
 them.
 
 I think option 1. is a straw man, since we should be able to
 introduce new oslo-incubator modules into neutron-*aas repos even
 if they are not used in main repo.
 
 Option 2. is good when it comes to synching non-breaking bug fixes
 (or security fixes) from oslo-incubator, in that it will require
 only one sync patch instead of e.g. four. At the same time there
 may be potential issues when synchronizing updates from
 oslo-incubator that would break API and hence require changes to
 each of the modules that use it. Since we don't support atomic
 merges for multiple projects in gate, we will need to be cautious
 about those updates, and we will still need to leave neutron-*aas
 repos broken for some time (though the time may be mitigated with
 care).
 
 Option 3. is vice versa - in theory, you get total decoupling,
 meaning no oslo-incubator updates in main repo are expected to
 break neutron-*aas repos, but bug fixing becomes a huge PITA.
 
 I would vote for option 2., for two reasons: - most oslo-incubator
 syncs are non-breaking, and we may effectively apply care to
 updates that may result in potential breakage (f.e. being able to
 trigger an integrated run for each of neutron-*aas repos with the
 main sync patch, if there are any concerns). - it will make oslo
 liaison life a lot easier. OK, I'm probably too selfish on that.
 ;) - it will make stable maintainers life a lot easier. The main
 reason why stable maintainers and distributions like recent oslo
 graduation movement is that we don't need to track each bug fix we
 need in every project, and waste lots of cycles on it. Being able
 to fix a bug in one place only is *highly* anticipated. [OK, I'm
 quite selfish on that one too.] - it's a delusion that there will
 be no neutron-main syncs that will break neutron-*aas repos ever.
 There can still be problems due to incompatibility between neutron
 main and neutron-*aas code, resulting EXACTLY from multiple parts
 of the same process using different versions of the same module.
 
 That said, Doug Wiegley (lbaas core) seems to be in favour of
 option 3. due to lower coupling that is achieved in that way. I
 know that lbaas team had a bad experience due to tight coupling to
 neutron project in the past, so I appreciate their concerns.
 
 All in all, we should come up with some standard solution for both the
 advanced services that are already split out *and* the upcoming
 vendor plugin shrinking initiative.
 
 The initial discussion is captured at: 
 

Re: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split

2014-12-16 Thread Ihar Hrachyshka

On 15/12/14 17:22, Doug Wiegley wrote:
 Hi Ihar,
 
 I’m actually in favor of option 2, but it implies a few things
 about your time, and I wanted to chat with you before presuming.

I think the split didn't mean moving project trees under separate
governance, so I assume oslo (doc, qa, ...) liaisons should not be
split either.

 
 Maintenance can not involve breaking changes. At this point, the
 co-gate will block it.  Also, oslo graduation changes will have to
 be made in the services repos first, and then Neutron.

Do you mean that a change to oslo-incubator modules is co-gated (not
just co-checked with no vote) with each of advanced services?

As I pointed out in my previous email, sometimes breakages are inescapable.

Consider a change to a neutron oslo-incubator module used commonly in
all repos that breaks its API (they are quite rare, but still have a
chance of happening once in a while). If we co-gate main neutron
repo changes with the services, it will mean that we won't be able to
merge the change.

That would probably suggest that we go forward with option 3 and
manage all incubator files separately in each of the trees, though,
again, breakages are still possible in that scenario via introducing
incompatibility between versions of incubator modules in separate repos.

So we should be realistic about it and plan ahead for how we deal with
potential breakages that *may* occur.

As for oslo library graduations, the order is not really significant.
What is significant is that we drop an oslo-incubator module from the main
neutron repo only after all the neutron-*aas repos migrate to the
appropriate oslo.* library. The neutron migration itself may occur in
parallel (by postponing the module drop until later).

 
 Thanks, doug
 
 
 On 12/15/14, 6:15 AM, Ihar Hrachyshka ihrac...@redhat.com
 wrote:
 
 Hi all,
 
 the question arose recently in one of reviews for neutron-*aas
 repos to remove all oslo-incubator code from those repos since
 it's duplicated in neutron main repo. (You can find the link to the
 review at the end of the email.)
 
  Brief history: neutron repo was recently split into 4 pieces
 (main, neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The split
 resulted in each repository keeping their own copy of 
 neutron/openstack/common/... tree (currently unused in all 
 neutron-*aas repos that are still bound to modules from main
 repo).
 
  As an oslo liaison for the project, I wonder what's the best way to 
 manage oslo-incubator files. We have several options:
 
 1. just kill all the neutron/openstack/common/ trees from
 neutron-*aas repositories and continue using modules from main
 repo.
 
 2. kill all duplicate modules from neutron-*aas repos and leave
 only those that are used in those repos but not in main repo.
 
 3. fully duplicate all those modules in each of four repos that use
 them.
 
 I think option 1. is a straw man, since we should be able to
 introduce new oslo-incubator modules into neutron-*aas repos even
 if they are not used in main repo.
 
 Option 2. is good when it comes to synching non-breaking bug fixes
 (or security fixes) from oslo-incubator, in that it will require
 only one sync patch instead of e.g. four. At the same time there
 may be potential issues when synchronizing updates from
 oslo-incubator that would break API and hence require changes to
 each of the modules that use it. Since we don't support atomic
 merges for multiple projects in gate, we will need to be cautious
 about those updates, and we will still need to leave neutron-*aas
 repos broken for some time (though the time may be mitigated with
 care).
 
 Option 3. is vice versa - in theory, you get total decoupling,
 meaning no oslo-incubator updates in main repo are expected to
 break neutron-*aas repos, but bug fixing becomes a huge PITA.
 
 I would vote for option 2., for two reasons: - most oslo-incubator
 syncs are non-breaking, and we may effectively apply care to
 updates that may result in potential breakage (f.e. being able to
 trigger an integrated run for each of neutron-*aas repos with the
 main sync patch, if there are any concerns). - it will make oslo
 liaison life a lot easier. OK, I'm probably too selfish on that.
 ;) - it will make stable maintainers life a lot easier. The main
 reason why stable maintainers and distributions like recent oslo
 graduation movement is that we don't need to track each bug fix we
 need in every project, and waste lots of cycles on it. Being able
 to fix a bug in one place only is *highly* anticipated. [OK, I'm
 quite selfish on that one too.] - it's a delusion that there will
 be no neutron-main syncs that will break neutron-*aas repos ever.
 There can still be problems due to incompatibility between neutron
  main and neutron-*aas code, resulting EXACTLY from multiple parts
  of the same process using different versions of the same module.
 
 That said, Doug Wiegley (lbaas core) seems to be in favour of
 option 3. due to lower coupling 

[openstack-dev] [all][gerrit] Showing all inline comments from all patch sets

2014-12-16 Thread Radoslav Gerganov
I never liked how Gerrit displays inline comments and I find it 
hard to follow discussions on changes with many patch sets and inline 
comments.  So I tried to hack together an HTML view that displays all 
comments grouped by patch set, file and commented line.  You can see the 
result at http://gerrit-mirror.appspot.com/change-id.  Some examples:


http://gerrit-mirror.appspot.com/127283
http://gerrit-mirror.appspot.com/128508
http://gerrit-mirror.appspot.com/83207

There is room for many improvements (my CSS skills are very limited) but 
I am just curious whether someone else finds the idea useful.  The frontend 
uses the same APIs as the Gerrit UI, and the backend, running on 
Google App Engine, just proxies the requests to review.openstack.org. 
So in theory, if we serve the HTML page from our own Gerrit, it will work. 
You can find all the sources here: 
https://github.com/rgerganov/gerrit-hacks.  Let me know what you think.


Thanks,
Rado
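
[For the curious, the underlying calls are simple enough to sketch. Below
is a hedged approximation (not Rado's actual code) of fetching and grouping
inline comments via Gerrit's documented /changes/ REST endpoints; the
grouping and printing logic is purely illustrative:

  # Fetch every inline comment on a change and group it by patch set and
  # file. Gerrit prefixes JSON responses with ")]}'" to prevent XSSI.
  import json
  import requests

  GERRIT = 'https://review.openstack.org'

  def gerrit_get(path):
      resp = requests.get(GERRIT + path)
      resp.raise_for_status()
      return json.loads(resp.text[4:])  # strip the 4-char XSSI prefix

  def comments_by_patchset(change):
      info = gerrit_get('/changes/%s?o=ALL_REVISIONS' % change)
      grouped = {}
      for sha, rev in info['revisions'].items():
          comments = gerrit_get('/changes/%s/revisions/%s/comments'
                                % (change, sha))
          for filename, entries in comments.items():
              for c in entries:
                  grouped.setdefault(rev['_number'], {}).setdefault(
                      filename, []).append((c.get('line'), c['message']))
      return grouped

  for ps, files in sorted(comments_by_patchset('127283').items()):
      print('Patch set %d' % ps)
      for filename, notes in sorted(files.items()):
          for line, message in sorted(notes, key=lambda n: n[0] or 0):
              print('  %s:%s  %s' % (filename, line, message))
]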



[openstack-dev] [Neutron] fixed ip info shown for port even when dhcp is disabled

2014-12-16 Thread Pasquale Porreca
Is there a specific reason why a fixed IP is bound to a port on a
subnet where DHCP is disabled? It is confusing to have this info shown
when the instance doesn't actually have an IP on that port.
Should I file a bug report, or is this intended behavior?

-- 
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile +39 3394823805
Skype paskporr



Re: [openstack-dev] Do all OpenStack daemons support sd_notify?

2014-12-16 Thread Alan Pevec
2014-12-15 17:00 GMT+01:00 Clint Byrum cl...@fewbar.com:
 Excerpts from Ihar Hrachyshka's message of 2014-12-15 07:21:04 -0800:
 I guess Type=notify is supposed to be used with daemons that use the
 Service class from oslo-incubator, which provides the systemd notification
 mechanism, or that call systemd.notify_once() otherwise.
...
 BTW now that more distributions are interested in shipping unit files
 for services, should we upstream them and ship the same thing in all
 interested distributions?

 Since we can expect the five currently implemented OS's in TripleO to all
 have systemd by default soon (Debian, Fedora, openSUSE, RHEL, Ubuntu),
 it would make a lot of sense for us to make the systemd unit files that
 TripleO generates set Type=notify wherever possible. So hopefully we can
 actually make such a guarantee upstream sometime in the not-so-distant
 future, especially since our CI will run two of the more distinct forks,
 Ubuntu and Fedora.

There's one issue with Type=notify that Dan Prince discovered and
where Type=simple works better for his use case:
if service startup takes more than DefaultTimeoutStartSec (90s by
default) and systemd does not get a notification from the service, it
will consider it failed and try to restart it if Restart is defined in
the service unit file.
I tried to fix that by disabling the timeout (example in the Nova package:
https://review.gerrithub.io/13054 ) but then systemctl start blocks
until the notification is received.
Copying Dan's review comment: This still isn't quite right. It is
better in that systemctl doesn't think the service fails... rather
it just seems to hang indefinitely on 'systemctl start
openstack-nova-compute', as in I never get my terminal back.
My test case is:
1) Stop Nova compute. 2) Stop RabbitMQ. 3) Try to start Nova compute
via systemctl.
Could the RabbitMQ retry loop be preventing the notification?


The current implementation in the oslo service code sends the notification
just before entering the wait loop, because at that point all
initialization should be done and the service is ready to serve. Does
anyone have a suggestion for a better place to signal service
readiness?
Or should Dan's test-case step 3) just be modified to:
3) Start Nova compute via systemctl start ...   (i.e. in the background) ?


Cheers,
Alan
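
[For reference, the notification itself is tiny. A hedged sketch of what
the oslo-incubator systemd helper boils down to, per the sd_notify protocol
(the real module layout and names may differ); the open question above is
exactly where a call like this should sit relative to backend retry loops
such as RabbitMQ's:

  # Send READY=1 to the socket systemd names in NOTIFY_SOCKET; with
  # Type=notify, systemd waits for this before considering startup done.
  import os
  import socket

  def sd_notify_ready():
      addr = os.environ.get('NOTIFY_SOCKET')
      if not addr:
          return  # not started by systemd with Type=notify
      if addr.startswith('@'):
          addr = '\0' + addr[1:]  # abstract namespace socket
      sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
      try:
          sock.connect(addr)
          sock.sendall(b'READY=1')
      finally:
          sock.close()

  # A service would call sd_notify_ready() once initialization is complete,
  # just before entering its wait loop.
]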



Re: [openstack-dev] [TripleO] mid-cycle details -- CONFIRMED Feb. 18 - 20

2014-12-16 Thread James Polley
Is there a group hotel booking being arranged?

On Tue, Dec 16, 2014 at 5:26 AM, Clint Byrum cl...@fewbar.com wrote:

 I'm happy to announce we've cleared the schedule and the Mid-Cycle is
 confirmed for February 18 - 20 in Seattle, WA at HP's downtown offices.

 Please refer to the etherpad linked below for details including address
 and instructions for access to the building.

 PLEASE make sure you add yourself to the list of confirmed attendees
 on the etherpad *BEFORE* booking travel. We have a hard limit of 30
 participants, so if you are not certain you have a spot, please contact
 me before booking travel.

 Excerpts from Clint Byrum's message of 2014-12-01 14:58:58 -0800:
  Hello! I've received confirmation that our venue, the HP offices in
  downtown Seattle, will be available for the most-often-preferred
  least-often-cannot week of Feb 16 - 20.
 
  Our venue has a maximum of 20 participants, but I only have 16 possible
  attendees now. Please add yourself to that list _now_ if you will be
  joining us.
 
  I've asked our office staff to confirm Feb 18 - 20 (Wed-Fri). When they
  do, I will reply to this thread to let everyone know so you can all
  start to book travel. See the etherpad for travel details.
 
  https://etherpad.openstack.org/p/kilo-tripleo-midcycle-meetup




Re: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split

2014-12-16 Thread Doug Hellmann

On Dec 16, 2014, at 5:13 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 On 15/12/14 18:57, Doug Hellmann wrote:
  There may be a similar problem managing dependencies on libraries
  that live outside of either tree. I assume you already decided how
  to handle that. Are you doing the same thing, and adding the
  requirements to neutron’s lists?
 
 I guess the idea is to keep in neutron-*aas only those oslo-incubator
 modules that are used there solely (=not used in main repo).

How are the *aas packages installed? Are they separate libraries or 
applications that are installed on top of neutron? Or are their files copied 
into the neutron namespace?

 
 I think requirements are a bit easier and should track all direct
 dependencies in each of the repos, so that in case main repo decides
 to drop one, neutron-*aas repos are not broken.
 
 For requirements, it's different because there is no major burden due
 to duplicate entries in repos.
 
 
  On Dec 15, 2014, at 12:16 PM, Doug Wiegley do...@a10networks.com
  wrote:
 
  Hi all,
 
  Ihar and I discussed this on IRC, and are going forward with
  option 2 unless someone has a big problem with it.
 
  Thanks, Doug
 
 
  On 12/15/14, 8:22 AM, Doug Wiegley do...@a10networks.com
  wrote:
 
  Hi Ihar,
 
  I’m actually in favor of option 2, but it implies a few things
  about your time, and I wanted to chat with you before
  presuming.
 
  Maintenance can not involve breaking changes. At this point,
  the co-gate will block it.  Also, oslo graduation changes will
  have to be made in the services repos first, and then Neutron.
 
  Thanks, doug
 
 
  On 12/15/14, 6:15 AM, Ihar Hrachyshka ihrac...@redhat.com
  wrote:
 
  Hi all,
 
  the question arose recently in one of reviews for neutron-*aas
  repos to remove all oslo-incubator code from those repos since
  it's duplicated in neutron main repo. (You can find the link to the
  review at the end of the email.)
 
  Brief history: neutron repo was recently split into 4 pieces
  (main, neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The split
  resulted in each repository keeping their own copy of
  neutron/openstack/common/... tree (currently unused in all
  neutron-*aas repos that are still bound to modules from main
  repo).
 
  As an oslo liaison for the project, I wonder what's the best way to
  manage oslo-incubator files. We have several options:
 
  1. just kill all the neutron/openstack/common/ trees from
  neutron-*aas repositories and continue using modules from main
  repo.
 
  2. kill all duplicate modules from neutron-*aas repos and leave
  only those that are used in those repos but not in main repo.
 
  3. fully duplicate all those modules in each of four repos that use
  them.
 
  I think option 1. is a straw man, since we should be able to
  introduce new oslo-incubator modules into neutron-*aas repos even
  if they are not used in main repo.
 
  Option 2. is good when it comes to synching non-breaking bug fixes
  (or security fixes) from oslo-incubator, in that it will require
  only one sync patch instead of e.g. four. At the same time there
  may be potential issues when synchronizing updates from
  oslo-incubator that would break API and hence require changes to
  each of the modules that use it. Since we don't support atomic
  merges for multiple projects in gate, we will need to be cautious
  about those updates, and we will still need to leave neutron-*aas
  repos broken for some time (though the time may be mitigated with
  care).
 
  Option 3. is vice versa - in theory, you get total decoupling,
  meaning no oslo-incubator updates in main repo are expected to
  break neutron-*aas repos, but bug fixing becomes a huge PITA.
 
  I would vote for option 2., for two reasons: - most oslo-incubator
  syncs are non-breaking, and we may effectively apply care to
  updates that may result in potential breakage (f.e. being able to
  trigger an integrated run for each of neutron-*aas repos with the
  main sync patch, if there are any concerns). - it will make oslo
  liaison life a lot easier. OK, I'm probably too selfish on that.
  ;) - it will make stable maintainers life a lot easier. The main
  reason why stable maintainers and distributions like recent oslo
  graduation movement is that we don't need to track each bug fix we
  need in every project, and waste lots of cycles on it. Being able
  to fix a bug in one place only is *highly* anticipated. [OK, I'm
  quite selfish on that one too.] - it's a delusion that there will
  be no neutron-main syncs that will break neutron-*aas repos ever.
  There can still be problems due to incompatibility between neutron
  main and neutron-*aas code, resulting EXACTLY from multiple parts
  of the same process using different versions of the same module.
 
  That said, Doug Wiegley (lbaas core) seems to be in favour of
  option 3. due to lower coupling that is achieved in that way. I
  know that lbaas team had a bad experience due to tight 

Re: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split

2014-12-16 Thread Doug Hellmann

On Dec 16, 2014, at 5:22 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 On 15/12/14 17:22, Doug Wiegley wrote:
  Hi Ihar,
 
  I’m actually in favor of option 2, but it implies a few things
  about your time, and I wanted to chat with you before presuming.
 
 I think the split didn't mean moving project trees under separate
 governance, so I assume oslo (doc, qa, ...) liaisons should not be
 split either.
 
 
  Maintenance can not involve breaking changes. At this point, the
  co-gate will block it.  Also, oslo graduation changes will have to
  be made in the services repos first, and then Neutron.
 
 Do you mean that a change to oslo-incubator modules is co-gated (not
 just co-checked with no vote) with each of advanced services?
 
 As I pointed out in my previous email, sometimes breakages are inescapable.
 
 Consider a change to a neutron oslo-incubator module used commonly in
 all repos that breaks its API (they are quite rare, but still have a
 chance of happening once in a while). If we co-gate main neutron
 repo changes with the services, it will mean that we won't be able to
 merge the change.
 
 That would probably suggest that we go forward with option 3 and
 manage all incubator files separately in each of the trees, though,
 again, breakages are still possible in that scenario via introducing
 incompatibility between versions of incubator modules in separate repos.
 
 So we should be realistic about it and plan ahead for how we deal with
 potential breakages that *may* occur.
 
 As for oslo library graduations, the order is not really significant.
 What is significant is that we drop an oslo-incubator module from the main
 neutron repo only after all the neutron-*aas repos migrate to the
 appropriate oslo.* library. The neutron migration itself may occur in
 parallel (by postponing the module drop until later).

Don’t assume that it’s safe to combine the incubated version and library 
version of a module. We’ve had some examples where the APIs change or global 
state changes in a way that makes the two incompatible. We definitely don’t take 
any care to ensure that the two copies can be run together.

 
 
  Thanks, doug
 
 
  On 12/15/14, 6:15 AM, Ihar Hrachyshka ihrac...@redhat.com
  wrote:
 
  Hi all,
 
  the question arose recently in one of reviews for neutron-*aas
  repos to remove all oslo-incubator code from those repos since
  it's duplicated in neutron main repo. (You can find the link to the
  review at the end of the email.)
 
  Brief history: neutron repo was recently split into 4 pieces
  (main, neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The split
  resulted in each repository keeping their own copy of
  neutron/openstack/common/... tree (currently unused in all
  neutron-*aas repos that are still bound to modules from main
  repo).
 
  As an oslo liaison for the project, I wonder what's the best way to
  manage oslo-incubator files. We have several options:
 
  1. just kill all the neutron/openstack/common/ trees from
  neutron-*aas repositories and continue using modules from main
  repo.
 
  2. kill all duplicate modules from neutron-*aas repos and leave
  only those that are used in those repos but not in main repo.
 
  3. fully duplicate all those modules in each of four repos that use
  them.
 
  I think option 1. is a straw man, since we should be able to
  introduce new oslo-incubator modules into neutron-*aas repos even
  if they are not used in main repo.
 
  Option 2. is good when it comes to synching non-breaking bug fixes
  (or security fixes) from oslo-incubator, in that it will require
  only one sync patch instead of e.g. four. At the same time there
  may be potential issues when synchronizing updates from
  oslo-incubator that would break API and hence require changes to
  each of the modules that use it. Since we don't support atomic
  merges for multiple projects in gate, we will need to be cautious
  about those updates, and we will still need to leave neutron-*aas
  repos broken for some time (though the time may be mitigated with
  care).
 
  Option 3. is vice versa - in theory, you get total decoupling,
  meaning no oslo-incubator updates in main repo are expected to
  break neutron-*aas repos, but bug fixing becomes a huge PITA.
 
  I would vote for option 2., for two reasons: - most oslo-incubator
  syncs are non-breaking, and we may effectively apply care to
  updates that may result in potential breakage (f.e. being able to
  trigger an integrated run for each of neutron-*aas repos with the
  main sync patch, if there are any concerns). - it will make oslo
  liaison life a lot easier. OK, I'm probably too selfish on that.
  ;) - it will make stable maintainers life a lot easier. The main
  reason why stable maintainers and distributions like recent oslo
  graduation movement is that we don't need to track each bug fix we
  need in every project, and waste lots of cycles on it. Being able
  to fix a bug in one place only is *highly* anticipated. [OK, I'm
  quite 

Re: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split

2014-12-16 Thread Ihar Hrachyshka

On 16/12/14 12:50, Doug Hellmann wrote:
 
 On Dec 16, 2014, at 5:13 AM, Ihar Hrachyshka ihrac...@redhat.com
 wrote:
 
 On 15/12/14 18:57, Doug Hellmann wrote:
 There may be a similar problem managing dependencies on
 libraries that live outside of either tree. I assume you
 already decided how to handle that. Are you doing the same
 thing, and adding the requirements to neutron’s lists?
 
 I guess the idea is to keep in neutron-*aas only those
 oslo-incubator modules that are used there solely (=not used in
 main repo).
 
 How are the *aas packages installed? Are they separate libraries or
 applications that are installed on top of neutron? Or are their
 files copied into the neutron namespace?

They are separate libraries with their own setup.py, dependencies,
tarballs, all that, but they are free to use (public) code from the main
neutron package.

 
 
 I think requirements are a bit easier and should track all
 direct dependencies in each of the repos, so that in case main
 repo decides to drop one, neutron-*aas repos are not broken.
 
 For requirements, it's different because there is no major burden
 due to duplicate entries in repos.
 
 
 On Dec 15, 2014, at 12:16 PM, Doug Wiegley
 do...@a10networks.com wrote:
 
 Hi all,
 
 Ihar and I discussed this on IRC, and are going forward with 
 option 2 unless someone has a big problem with it.
 
 Thanks, Doug
 
 
 On 12/15/14, 8:22 AM, Doug Wiegley do...@a10networks.com 
 wrote:
 
 Hi Ihar,
 
 I’m actually in favor of option 2, but it implies a few
 things about your time, and I wanted to chat with you
 before presuming.
 
 Maintenance can not involve breaking changes. At this
 point, the co-gate will block it.  Also, oslo graduation
 changes will have to be made in the services repos first,
 and then Neutron.
 
 Thanks, doug
 
 
 On 12/15/14, 6:15 AM, Ihar Hrachyshka
 ihrac...@redhat.com wrote:
 
 Hi all,
 
 the question arose recently in one of reviews for neutron-*aas 
 repos to remove all oslo-incubator code from those repos since 
 it's duplicated in neutron main repo. (You can find the link to
 the review at the end of the email.)
 
  Brief history: neutron repo was recently split into 4 pieces 
 (main, neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The
 split resulted in each repository keeping their own copy of 
 neutron/openstack/common/... tree (currently unused in all 
 neutron-*aas repos that are still bound to modules from main 
 repo).
 
  As an oslo liaison for the project, I wonder what's the best way
 to manage oslo-incubator files. We have several options:
 
 1. just kill all the neutron/openstack/common/ trees from 
 neutron-*aas repositories and continue using modules from main 
 repo.
 
 2. kill all duplicate modules from neutron-*aas repos and
 leave only those that are used in those repos but not in main
 repo.
 
 3. fully duplicate all those modules in each of four repos that
 use them.
 
 I think option 1. is a straw man, since we should be able to 
 introduce new oslo-incubator modules into neutron-*aas repos
 even if they are not used in main repo.
 
 Option 2. is good when it comes to synching non-breaking bug
 fixes (or security fixes) from oslo-incubator, in that it will
 require only one sync patch instead of e.g. four. At the same
 time there may be potential issues when synchronizing updates
 from oslo-incubator that would break API and hence require
 changes to each of the modules that use it. Since we don't
 support atomic merges for multiple projects in gate, we will
 need to be cautious about those updates, and we will still need
 to leave neutron-*aas repos broken for some time (though the
 time may be mitigated with care).
 
 Option 3. is vice versa - in theory, you get total decoupling, 
 meaning no oslo-incubator updates in main repo are expected to 
 break neutron-*aas repos, but bug fixing becomes a huge PITA.
 
 I would vote for option 2., for two reasons: - most
 oslo-incubator syncs are non-breaking, and we may effectively
 apply care to updates that may result in potential breakage
 (f.e. being able to trigger an integrated run for each of
 neutron-*aas repos with the main sync patch, if there are any
 concerns). - it will make oslo liaison life a lot easier. OK,
 I'm probably too selfish on that. ;) - it will make stable
 maintainers life a lot easier. The main reason why stable
 maintainers and distributions like recent oslo graduation
 movement is that we don't need to track each bug fix we need in
 every project, and waste lots of cycles on it. Being able to
 fix a bug in one place only is *highly* anticipated. [OK, I'm 
 quite selfish on that one too.] - it's a delusion that there
 will be no neutron-main syncs that will break neutron-*aas
 repos ever. There can still be problems due to incompatibility
  between neutron main and neutron-*aas code, resulting EXACTLY
  from multiple parts of the same process using different
  versions of the same 

Re: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split

2014-12-16 Thread Doug Hellmann

On Dec 16, 2014, at 7:27 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 On 16/12/14 12:50, Doug Hellmann wrote:
 
  On Dec 16, 2014, at 5:13 AM, Ihar Hrachyshka ihrac...@redhat.com
  wrote:
 
  On 15/12/14 18:57, Doug Hellmann wrote:
  There may be a similar problem managing dependencies on
  libraries that live outside of either tree. I assume you
  already decided how to handle that. Are you doing the same
  thing, and adding the requirements to neutron’s lists?
 
  I guess the idea is to keep in neutron-*aas only those
  oslo-incubator modules that are used there solely (=not used in
  main repo).
 
  How are the *aas packages installed? Are they separate libraries or
  applications that are installed on top of neutron? Or are their
  files copied into the neutron namespace?
 
 They are separate libraries with their own setup.py, dependencies,
 tarballs, all that, but they are free to use (public) code from main
 neutron package.

OK.

If they don’t have copies of all of the incubated modules they use, how are 
they tested? Is neutron a dependency?

 
 
 
  I think requirements are a bit easier and should track all
  direct dependencies in each of the repos, so that in case main
  repo decides to drop one, neutron-*aas repos are not broken.
 
  For requirements, it's different because there is no major burden
  due to duplicate entries in repos.
 
 
  On Dec 15, 2014, at 12:16 PM, Doug Wiegley
  do...@a10networks.com wrote:
 
  Hi all,
 
  Ihar and I discussed this on IRC, and are going forward with
  option 2 unless someone has a big problem with it.
 
  Thanks, Doug
 
 
  On 12/15/14, 8:22 AM, Doug Wiegley do...@a10networks.com
  wrote:
 
  Hi Ihar,
 
  I’m actually in favor of option 2, but it implies a few
  things about your time, and I wanted to chat with you
  before presuming.
 
  Maintenance can not involve breaking changes. At this
  point, the co-gate will block it.  Also, oslo graduation
  changes will have to be made in the services repos first,
  and then Neutron.
 
  Thanks, doug
 
 
  On 12/15/14, 6:15 AM, Ihar Hrachyshka
  ihrac...@redhat.com wrote:
 
  Hi all,
 
  the question arose recently in one of reviews for neutron-*aas
  repos to remove all oslo-incubator code from those repos since
  it's duplicated in neutron main repo. (You can find the link to
  the review at the end of the email.)
 
   Brief history: neutron repo was recently split into 4 pieces
  (main, neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The
  split resulted in each repository keeping their own copy of
  neutron/openstack/common/... tree (currently unused in all
  neutron-*aas repos that are still bound to modules from main
  repo).
 
   As an oslo liaison for the project, I wonder what's the best way to
  to manage oslo-incubator files. We have several options:
 
  1. just kill all the neutron/openstack/common/ trees from
  neutron-*aas repositories and continue using modules from main
  repo.
 
  2. kill all duplicate modules from neutron-*aas repos and
  leave only those that are used in those repos but not in main
  repo.
 
  3. fully duplicate all those modules in each of four repos that
  use them.
 
  I think option 1. is a straw man, since we should be able to
  introduce new oslo-incubator modules into neutron-*aas repos
  even if they are not used in main repo.
 
  Option 2. is good when it comes to synching non-breaking bug
  fixes (or security fixes) from oslo-incubator, in that it will
  require only one sync patch instead of e.g. four. At the same
  time there may be potential issues when synchronizing updates
  from oslo-incubator that would break API and hence require
  changes to each of the modules that use it. Since we don't
  support atomic merges for multiple projects in gate, we will
  need to be cautious about those updates, and we will still need
  to leave neutron-*aas repos broken for some time (though the
  time may be mitigated with care).
 
  Option 3. is vice versa - in theory, you get total decoupling,
  meaning no oslo-incubator updates in main repo are expected to
  break neutron-*aas repos, but bug fixing becomes a huge PITA.
 
  I would vote for option 2., for two reasons: - most
  oslo-incubator syncs are non-breaking, and we may effectively
  apply care to updates that may result in potential breakage
  (f.e. being able to trigger an integrated run for each of
  neutron-*aas repos with the main sync patch, if there are any
  concerns). - it will make oslo liaison life a lot easier. OK,
  I'm probably too selfish on that. ;) - it will make stable
  maintainers life a lot easier. The main reason why stable
  maintainers and distributions like recent oslo graduation
  movement is that we don't need to track each bug fix we need in
  every project, and waste lots of cycles on it. Being able to
  fix a bug in one place only is *highly* anticipated. [OK, I'm
  quite selfish on that one too.] - it's a delusion that there
  will be no neutron-main 

Re: [openstack-dev] [oslo] interesting problem with config filter

2014-12-16 Thread Mark McLoughlin
Hi Doug,

On Mon, 2014-12-08 at 15:58 -0500, Doug Hellmann wrote:
 As we’ve discussed a few times, we want to isolate applications from
 the configuration options defined by libraries. One way we have of
 doing that is the ConfigFilter class in oslo.config. When a regular
 ConfigOpts instance is wrapped with a filter, a library can register
 new options on the filter that are not visible to anything that
 doesn’t have the filter object.

Or to put it more simply, the configuration options registered by the
library should not be part of the public API of the library.

  Unfortunately, the Neutron team has identified an issue with this
 approach. We have a bug report [1] from them about the way we’re using
 config filters in oslo.concurrency specifically, but the issue applies
 to their use everywhere. 
 
 The neutron tests set the default for oslo.concurrency’s lock_path
 variable to “$state_path/lock”, and the state_path option is defined
 in their application. With the filter in place, interpolation of
 $state_path to generate the lock_path value fails because state_path
 is not known to the ConfigFilter instance.

It seems that Neutron sets this default in its etc/neutron.conf file in
its git tree:

  lock_path = $state_path/lock

I think we should be aiming for defaults like this to be set in code,
and for the sample config files to contain nothing but comments. So,
neutron should do:

  lockutils.set_defaults(lock_path='$state_path/lock')

That's a side detail, however.

 The reverse would also happen (if the value of state_path was somehow
 defined to depend on lock_path),

This dependency wouldn't/shouldn't be code - because Neutron *code*
shouldn't know about the existence of library config options.
Neutron deployers absolutely will be aware of lock_path however.

  and that’s actually a bigger concern to me. A deployer should be able
 to use interpolation anywhere, and not worry about whether the options
 are in parts of the code that can see each other. The values are all
 in one file, as far as they know, and so interpolation should “just
 work”.

Yes, if a deployer looks at a sample configuration file, all options
listed in there seem like they're in-play for substitution use within
the value of another option. For string substitution only, I'd say there
should be a global namespace where all options are registered.

Now ... one caveat on all of this ... I do think the string substitution
feature is pretty obscure and mostly just used in default values.

 I see a few solutions:
 
 1. Don’t use the config filter at all.
 2. Make the config filter able to add new options and still see
 everything else that is already defined (only filter in one
 direction).
 3. Leave things as they are, and make the error message better.

4. Just tackle this specific case by making lock_path implicitly
relative to a base path the application can set via an API, so Neutron
would do:

  lockutils.set_base_path(CONF.state_path)

at startup.

5. Make the toplevel ConfigOpts aware of all filters hanging off it, and
somehow cycle through all of those filters just when doing string
substitution.
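
[To make option 4 concrete, a hedged sketch; note that set_base_path() is
the hypothetical API proposed here, not something the real oslo lockutils
offers:

  # Option 4 sketch: lock_path becomes implicitly relative to a base path
  # the application registers once at startup. Purely illustrative.
  import os

  _base_path = None

  def set_base_path(path):
      global _base_path
      _base_path = path

  def resolve_lock_path(configured):
      # An absolute lock_path wins; a relative one is joined to the base.
      if _base_path and not os.path.isabs(configured):
          return os.path.join(_base_path, configured)
      return configured

  # Neutron at startup: set_base_path(CONF.state_path); a configured
  # lock_path of 'lock' then resolves to <state_path>/lock.
]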

 Because of the deployment implications of using the filter, I’m
 inclined to go with choice 1 or 2. However, choice 2 leaves open the
 possibility of a deployer wanting to use the value of an option
 defined by one filtered set of code when defining another. I don’t
 know how frequently that might come up, but it seems like the error
 would be very confusing, especially if both options are set in the
 same config file.
 
 I think that leaves option 1, which means our plans for hiding options
 from applications need to be rethought.
 
 Does anyone else see another solution that I’m missing?

I'd do something like (3) and (4), then wait to see if it crops up
multiple times in the future before tackling a more general solution.

With option (1), the basic thing to think about is how to maintain API
compatibility - if we expose the options through the API, how do we deal
with future moves, removals, renames, and changing semantics of those
config options.

Mark.
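
[As an aside, the failure mode Doug describes is easy to reproduce. A
minimal sketch, assuming the kilo-era oslo.config ConfigFilter; import
paths may differ across releases:

  # lock_path lives on the filter, state_path on the parent ConfigOpts;
  # $state_path interpolation happens in the filter's namespace, which
  # cannot see the parent's options, so the lookup fails.
  from oslo.config import cfg
  from oslo.config import cfgfilter

  CONF = cfg.ConfigOpts()
  CONF.register_opt(cfg.StrOpt('state_path', default='/var/lib/neutron'))

  filtered = cfgfilter.ConfigFilter(CONF)
  filtered.register_opt(cfg.StrOpt('lock_path',
                                   default='$state_path/lock'))

  CONF([])
  print(CONF.state_path)     # /var/lib/neutron
  print(filtered.lock_path)  # fails: state_path unknown to the filter
]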




Re: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split

2014-12-16 Thread Ihar Hrachyshka

On 16/12/14 12:52, Doug Hellmann wrote:
 
 On Dec 16, 2014, at 5:22 AM, Ihar Hrachyshka ihrac...@redhat.com
 wrote:
 
  On 15/12/14 17:22, Doug Wiegley wrote:
 Hi Ihar,
 
 I’m actually in favor of option 2, but it implies a few things 
 about your time, and I wanted to chat with you before
 presuming.
 
  I think the split didn't mean moving project trees under separate
  governance, so I assume oslo (doc, qa, ...) liaisons should not
  be split either.
 
 
 Maintenance can not involve breaking changes. At this point,
 the co-gate will block it.  Also, oslo graduation changes will
 have to be made in the services repos first, and then Neutron.
 
  Do you mean that a change to oslo-incubator modules is co-gated
  (not just co-checked with no vote) with each of the advanced
  services?
 
  As I pointed out in my previous email, sometimes breakages are
  inescapable.
 
  Consider a change to a neutron oslo-incubator module used commonly
  in all repos that breaks its API (they are quite rare, but still
  have a chance of happening once in a while). If we co-gate main
  neutron repo changes with the services, it will mean that we won't
  be able to merge the change.
 
 That would probably suggest that we go forward with option 3 and 
 manage all incubator files separately in each of the trees,
 though, again, breakages are still possible in that scenario via
 introducing incompatibility between versions of incubator modules
 in separate repos.
 
  So we should be realistic about it and plan ahead for how we deal 
  with potential breakages that *may* occur.
 
  As for oslo library graduations, the order is not really
  significant. What is significant is that we drop an oslo-incubator
  module from the main neutron repo only after all the neutron-*aas
  repos migrate to the appropriate oslo.* library. The neutron
  migration itself may occur in parallel (by postponing the module
  drop until later).
 
  Don’t assume that it’s safe to combine the incubated version and
  library version of a module. We’ve had some examples where the APIs
  change or global state changes in a way that makes the two
  incompatible. We definitely don’t take any care to ensure that the
  two copies can be run together.

Hm. Does it leave us with option 3 only? In that case, should we care
about incompatibilities between different versions of incubator
modules running in the same process (one for core code, and another
one for a service)? That sounds more like we're not left with any safe
options.

 
 
 
 Thanks, doug
 
 
 On 12/15/14, 6:15 AM, Ihar Hrachyshka ihrac...@redhat.com 
 wrote:
 
 Hi all,
 
 the question arose recently in one of reviews for neutron-*aas 
 repos to remove all oslo-incubator code from those repos since 
 it's duplicated in neutron main repo. (You can find the link to
 the review at the end of the email.)
 
  Brief history: neutron repo was recently split into 4 pieces 
 (main, neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The
 split resulted in each repository keeping their own copy of 
 neutron/openstack/common/... tree (currently unused in all 
 neutron-*aas repos that are still bound to modules from main 
 repo).
 
  As an oslo liaison for the project, I wonder what's the best way
 to manage oslo-incubator files. We have several options:
 
 1. just kill all the neutron/openstack/common/ trees from 
 neutron-*aas repositories and continue using modules from main 
 repo.
 
 2. kill all duplicate modules from neutron-*aas repos and
 leave only those that are used in those repos but not in main
 repo.
 
 3. fully duplicate all those modules in each of four repos that
 use them.
 
 I think option 1. is a straw man, since we should be able to 
 introduce new oslo-incubator modules into neutron-*aas repos
 even if they are not used in main repo.
 
 Option 2. is good when it comes to synching non-breaking bug
 fixes (or security fixes) from oslo-incubator, in that it will
 require only one sync patch instead of e.g. four. At the same
 time there may be potential issues when synchronizing updates
 from oslo-incubator that would break API and hence require
 changes to each of the modules that use it. Since we don't
 support atomic merges for multiple projects in gate, we will
 need to be cautious about those updates, and we will still need
 to leave neutron-*aas repos broken for some time (though the
 time may be mitigated with care).
 
 Option 3. is vice versa - in theory, you get total decoupling, 
 meaning no oslo-incubator updates in main repo are expected to 
 break neutron-*aas repos, but bug fixing becomes a huge PITA.
 
 I would vote for option 2., for two reasons: - most
 oslo-incubator syncs are non-breaking, and we may effectively
 apply care to updates that may result in potential breakage
 (f.e. being able to trigger an integrated run for each of
 neutron-*aas repos with the main sync patch, if there are any
 concerns). - it will make oslo liaison life a lot easier. OK,
 I'm probably too selfish on that. ;) 

Re: [openstack-dev] Minimal ML2 mechanism driver after Neutron decomposition change

2014-12-16 Thread Henry


Sent from my iPad

On 2014-12-16, at 2:54 PM, Armando M. arma...@gmail.com wrote:

 
 
 Good questions. I'm also looking for the linux bridge MD, SRIOV MD...
 Who will be responsible for these drivers?
 
 Excellent question. In my opinion, 'technology'-specific but not vendor-specific 
 MDs (like SRIOV) should not be maintained by a specific vendor. They 
 should be open to all interested parties for contribution.
 
 I don't think that anyone is making the suggestion of developing these drivers 
 in silos; instead, one of the objectives is to allow them to evolve more rapidly, 
 and in the open, where anyone can participate.
  
 
 The OVS driver is maintained by the Neutron community, vendor-specific hardware 
 drivers by the vendor, and SDN controller drivers by their own community or vendor. 
 But there are also other drivers like SRIOV, which are general across a lot of 
 vendor-agnostic backends and can't be maintained by a single 
 vendor/community.
 
 Certain technologies, like the ones mentioned above may require specific 
 hardware; even though they may not be particularly associated with a specific 
 vendor, some sort of vendor support is indeed required, like 3rd party CI. 
 So, grouping them together under an hw-accelerated umbrella, or whichever 
 other name that sticks, may make sense long term should the number of drivers 
 really ramp up as hinted below.

There are also MDs not related to hardware, like vif-type-tap and vif-type-vhostuser. 
Even for SR-IOV, a stub agent is enough for testing; no real hardware is needed.

All these MDs should be very thin, handling only port binding (see the sketch below).
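
A rough sketch of such a thin, bind-only MD, assuming the ML2 driver API of
this era (neutron.plugins.ml2.driver_api); the 'tap' VIF type and the
segments_to_bind attribute are assumptions for illustration, not settled API:

    from neutron.plugins.ml2 import driver_api as api

    class ThinTapMechanismDriver(api.MechanismDriver):
        """Hypothetical backend-agnostic MD that only binds ports."""

        def initialize(self):
            pass

        def bind_port(self, context):
            # Bind the first segment offered; all backend-specific
            # service logic lives outside this driver (in an agent or
            # a derived driver, as discussed below).
            for segment in context.segments_to_bind:
                context.set_binding(segment[api.ID],
                                    'tap',  # assumed VIF type
                                    {'port_filter': False})
                return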

  
 
 So, it would be better to keep some general backend MDs in tree besides 
 SR-IOV. There are also vif-type-tap, vif-type-vhostuser, 
 hierarchy-binding-external-VTEP ... We can implement a very thin in-tree base 
 MD that only handles VIF binding and is backend agnostic; a backend 
 provider is then free to implement its own service logic, either via a backend 
 agent, or via a driver derived from the base MD for agentless scenarios.
 
 Keeping general backend MDs in tree sounds reasonable.
 Regards
 
  Many thanks,
   Neil
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split

2014-12-16 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 16/12/14 13:41, Doug Hellmann wrote:
 
 On Dec 16, 2014, at 7:27 AM, Ihar Hrachyshka ihrac...@redhat.com
 wrote:
 
 Signed PGP part On 16/12/14 12:50, Doug Hellmann wrote:
 
 On Dec 16, 2014, at 5:13 AM, Ihar Hrachyshka
 ihrac...@redhat.com wrote:
 
 Signed PGP part On 15/12/14 18:57, Doug Hellmann wrote:
 There may be a similar problem managing dependencies on 
 libraries that live outside of either tree. I assume you 
 already decided how to handle that. Are you doing the same 
 thing, and adding the requirements to neutron’s lists?
 
 I guess the idea is to keep in neutron-*aas only those 
 oslo-incubator modules that are used there solely (=not used
 in main repo).
 
 How are the *aas packages installed? Are they separate
 libraries or applications that are installed on top of neutron?
 Or are their files copied into the neutron namespace?
 
 They are separate libraries with their own setup.py,
 dependencies, tarballs, all that, but they are free to use
 (public) code from main neutron package.
 
 OK.
 
 If they don’t have copies of all of the incubated modules they use,
 how are they tested? Is neutron a dependency?

Yes, neutron is in their requirements.txt files.

 
 
 
 
 I think requirements are a bit easier and should track all 
 direct dependencies in each of the repos, so that in case
 main repo decides to drop one, neutron-*aas repos are not
 broken.
 
 For requirements, it's different because there is no major
 burden due to duplicate entries in repos.
 
 
 On Dec 15, 2014, at 12:16 PM, Doug Wiegley 
 do...@a10networks.com wrote:
 
 Hi all,
 
 Ihar and I discussed this on IRC, and are going forward
 with option 2 unless someone has a big problem with it.
 
 Thanks, Doug
 
 
 On 12/15/14, 8:22 AM, Doug Wiegley
 do...@a10networks.com wrote:
 
 Hi Ihar,
 
 I’m actually in favor of option 2, but it implies a
 few things about your time, and I wanted to chat with
 you before presuming.
 
 Maintenance can not involve breaking changes. At this 
 point, the co-gate will block it.  Also, oslo
 graduation changes will have to be made in the services
 repos first, and then Neutron.
 
 Thanks, doug
 
 
 On 12/15/14, 6:15 AM, Ihar Hrachyshka 
 ihrac...@redhat.com wrote:
 
 Hi all,
 
 the question arose recently in one of reviews for
 neutron-*aas repos to remove all oslo-incubator code from
 those repos since it's duplicated in neutron main repo.
 (You can find the link to the review at the end of the
 email.)
 
 Brief history: the neutron repo was recently split into 4 
 pieces (main, neutron-fwaas, neutron-lbaas, and
 neutron-vpnaas). The split resulted in each repository
 keeping their own copy of neutron/openstack/common/... tree
 (currently unused in all neutron-*aas repos that are still
 bound to modules from main repo).
 
 As an oslo liaison for the project, I wonder what's the best
 way to manage oslo-incubator files. We have several
 options:
 
 1. just kill all the neutron/openstack/common/ trees from 
 neutron-*aas repositories and continue using modules from
 main repo.
 
 2. kill all duplicate modules from neutron-*aas repos and 
 leave only those that are used in those repos but not in
 main repo.
 
 3. fully duplicate all those modules in each of four repos
 that use them.
 
 I think option 1. is a straw man, since we should be able
 to introduce new oslo-incubator modules into neutron-*aas
 repos even if they are not used in main repo.
 
 Option 2. is good when it comes to synching non-breaking
 bug fixes (or security fixes) from oslo-incubator, in that
 it will require only one sync patch instead of e.g. four.
 At the same time there may be potential issues when
 synchronizing updates from oslo-incubator that would break
 API and hence require changes to each of the modules that
 use it. Since we don't support atomic merges for multiple
 projects in gate, we will need to be cautious about those
 updates, and we will still need to leave neutron-*aas repos
 broken for some time (though the time may be mitigated with
 care).
 
 Option 3. is vice versa - in theory, you get total
 decoupling, meaning no oslo-incubator updates in main repo
 are expected to break neutron-*aas repos, but bug fixing
 becomes a huge PITA.
 
 I would vote for option 2., for two reasons: - most 
 oslo-incubator syncs are non-breaking, and we may
 effectively apply care to updates that may result in
 potential breakage (f.e. being able to trigger an
 integrated run for each of neutron-*aas repos with the main
 sync patch, if there are any concerns). - it will make oslo
 liaison life a lot easier. OK, I'm probably too selfish on
 that. ;) - it will make stable maintainers life a lot
 easier. The main reason why stable maintainers and
 distributions like recent oslo graduation movement is that
 we don't need to track each bug fix we need in every
 project, and waste lots of cycles on it. Being able to fix
 a bug in one place only is *highly* anticipated. [OK, I'm 
 quite 

Re: [openstack-dev] oslo.db 1.2.1 release coming to fix stable/juno

2014-12-16 Thread Doug Hellmann

On Dec 15, 2014, at 5:58 PM, Doug Hellmann d...@doughellmann.com wrote:

 
 On Dec 15, 2014, at 3:21 PM, Doug Hellmann d...@doughellmann.com wrote:
 
 The issue with stable/juno jobs failing because of the difference in the 
 SQLAlchemy requirements between the older applications and the newer oslo.db 
 is being addressed with a new release of the 1.2.x series. We will then cap 
 the requirements for stable/juno to 1.2.1. We decided we did not need to 
 raise the minimum version of oslo.db allowed in kilo, because the old 
 versions of the library do work, if they are installed from packages and not 
 through setuptools.
 
 Jeremy created a feature/1.2 branch for us, and I have 2 patches up [1][2] 
 to apply the requirements fix. The change to the oslo.db version in 
 stable/juno is [3].
 
 After the changes in oslo.db merge, I will tag 1.2.1.
 
 After spending several hours exploring a bunch of options to make this 
 actually work, some of which require making changes to test job definitions, 
 grenade, or other long-term changes, I’m proposing a new approach:
 
 1. Undo the change in master that broke the compatibility with versions of 
 SQLAlchemy by making master match juno: https://review.openstack.org/141927
 2. Update oslo.db after ^^ lands.
 3. Tag oslo.db 1.4.0 with a set of requirements compatible with Juno.
 4. Change the requirements in stable/juno to skip oslo.db 1.1, 1.2, and 1.3.
 
 I’ll proceed with that plan tomorrow morning (~15 hours from now) unless 
 someone points out why that won’t work in the meantime.
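
For concreteness, step 4 amounts to version-exclusion specifiers in
stable/juno's requirements; a hypothetical entry (the exact line that
merged may differ) might look like:

    # skip the 1.1, 1.2 and 1.3 series, allowing 1.0.x and 1.4.0+
    oslo.db>=1.0.0,!=1.1.*,!=1.2.*,!=1.3.*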

I just reset a few approved patches that were not going to land because of this 
issue to kick them out of the gate to expedite landing part of the fix. I did 
this by modifying their commit messages. I tried to limit the changes to simple 
cosmetic tweaks, so if you see a weird change to one of your patches that’s 
probably why.

 
 Doug
 
 
 Doug
 
 [1] https://review.openstack.org/#/c/141893/
 [2] https://review.openstack.org/#/c/141894/
 [3] https://review.openstack.org/#/c/141896/
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] interesting problem with config filter

2014-12-16 Thread Doug Hellmann

On Dec 16, 2014, at 7:41 AM, Mark McLoughlin mar...@redhat.com wrote:

 Hi Doug,
 
 On Mon, 2014-12-08 at 15:58 -0500, Doug Hellmann wrote:
 As we’ve discussed a few times, we want to isolate applications from
 the configuration options defined by libraries. One way we have of
 doing that is the ConfigFilter class in oslo.config. When a regular
 ConfigOpts instance is wrapped with a filter, a library can register
 new options on the filter that are not visible to anything that
 doesn’t have the filter object.
 
 Or to put it more simply, the configuration options registered by the
 library should not be part of the public API of the library.
 
 Unfortunately, the Neutron team has identified an issue with this
 approach. We have a bug report [1] from them about the way we’re using
 config filters in oslo.concurrency specifically, but the issue applies
 to their use everywhere. 
 
 The neutron tests set the default for oslo.concurrency’s lock_path
 variable to “$state_path/lock”, and the state_path option is defined
 in their application. With the filter in place, interpolation of
 $state_path to generate the lock_path value fails because state_path
 is not known to the ConfigFilter instance.
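
 (For concreteness, a minimal sketch of that failure mode, assuming the
 oslo.config API of this era -- ConfigFilter lives in oslo.config's
 cfgfilter module; paths may differ across releases:)

    from oslo.config import cfg
    from oslo.config import cfgfilter

    conf = cfg.ConfigOpts()
    # Application-level option, as in neutron.
    conf.register_opt(cfg.StrOpt('state_path', default='/var/lib/neutron'))

    # A library registers its own option on a filtered view of conf,
    # hidden from anything that doesn't hold the filter object.
    fconf = cfgfilter.ConfigFilter(conf)
    fconf.register_opt(cfg.StrOpt('lock_path', default='$state_path/lock'))

    # Interpolating $state_path fails here: per the bug report, the
    # filtered view cannot see options registered on the parent.
    print(fconf.lock_path)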
 
 It seems that Neutron sets this default in its etc/neutron.conf file in
 its git tree:
 
  lock_path = $state_path/lock
 
 I think we should be aiming for defaults like this to be set in code,
 and for the sample config files to contain nothing but comments. So,
 neutron should do:
 
  lockutils.set_defaults(lock_path='$state_path/lock')
 
 That's a side detail, however.
 
 The reverse would also happen (if the value of state_path was somehow
 defined to depend on lock_path),
 
 This dependency wouldn't/shouldn't be code - because Neutron *code*
 shouldn't know about the existence of library config options.
 Neutron deployers absolutely will be aware of lock_path however.
 
 and that’s actually a bigger concern to me. A deployer should be able
 to use interpolation anywhere, and not worry about whether the options
 are in parts of the code that can see each other. The values are all
 in one file, as far as they know, and so interpolation should “just
 work”.
 
 Yes, if a deployer looks at a sample configuration file, all options
 listed in there seem like they're in-play for substitution use within
 the value of another option. For string substitution only, I'd say there
 should be a global namespace where all options are registered.
 
 Now ... one caveat on all of this ... I do think the string substitution
 feature is pretty obscure and mostly just used in default values.
 
 I see a few solutions:
 
 1. Don’t use the config filter at all.
 2. Make the config filter able to add new options and still see
 everything else that is already defined (only filter in one
 direction).
 3. Leave things as they are, and make the error message better.
 
 4. Just tackle this specific case by making lock_path implicitly
 relative to a base path the application can set via an API, so Neutron
 would do:
 
  lockutils.set_base_path(CONF.state_path)
 
 at startup.
 
 5. Make the toplevel ConfigOpts aware of all filters hanging off it, and
 somehow cycle through all of those filters just when doing string
 substitution.

We would have to allow the reverse as well, since the filter object doesn’t see 
options not explicitly imported by the code creating the filter.

In either case, it only works if the filter object has been instantiated. I 
wonder if we have a similar problem with runtime option registration. I’ll have 
to test that.


 
 Because of the deployment implications of using the filter, I’m
 inclined to go with choice 1 or 2. However, choice 2 leaves open the
 possibility of a deployer wanting to use the value of an option
 defined by one filtered set of code when defining another. I don’t
 know how frequently that might come up, but it seems like the error
 would be very confusing, especially if both options are set in the
 same config file.
 
 I think that leaves option 1, which means our plans for hiding options
 from applications need to be rethought.
 
 Does anyone else see another solution that I’m missing?
 
 I'd do something like (3) and (4), then wait to see if it crops up
 multiple times in the future before tackling a more general solution.

Option 3 prevents neutron from adopting oslo.concurrency, and option 4 is a 
backwards-incompatible change to the way lock path is set.

 
 With option (1), the basic thing to think about is how to maintain API
 compatibility - if we expose the options through the API, how do we deal
 with future moves, removals, renames, and changing semantics of those
 config options.

The option is exposed through the existing set_defaults() method, so we can 
make that handle any backwards compatibility issues if we change it.

 
 Mark.
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

[openstack-dev] [Fuel]

2014-12-16 Thread Gil Meir
A performance issue was found when using the OVS mechanism driver (we deduced 
it's on the VM RX side): we get very limited bandwidth, tested with iperf.
When using Mellanox SR-IOV the problem does not occur, which also points to an 
OVS mechanism driver problem.
LP bug with all details: https://bugs.launchpad.net/fuel/+bug/1403047/

For further questions, Sasha from Mellanox, who reported the bug, is now on 
#fuel-dev with the nick t-sasha, and is also Cc'd here.


Regards,

Gil Meir
SW Cloud Solutions
Mellanox Technologies

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-16 Thread Thomas Morin

Hi Keshava,

2014-12-15 11:52, A, Keshava :

I have been thinking of starting MPLS right from the CN for the L2VPN/EVPN 
scenario also.

Below are my queries w.r.t. supporting MPLS from OVS:
1. Will MPLS be used even for VM-VM traffic across CNs, 
generated by OVS?


If E-VPN is used only to interconnect outside of a Neutron domain, then 
MPLS does not have to be used for traffic between VMs.


If E-VPN is used inside one DC for VM-VM traffic, then MPLS is only *one* of 
the possible encapsulations: the E-VPN specs have been defined to use 
VXLAN (handy because there is native kernel support); MPLS/GRE and 
MPLS/UDP are other possibilities.



2. Will MPLS be originated right from OVS and mapped at the 
gateway (it may be a NN/hardware router) to the SP network?
So will MPLS carry 2 Labels? (one for hop-by-hop, and the 
other one end-to-end to identify the network?)


On "will carry 2 Labels?": this would be one possibility, but not the 
one we target.
We would actually favor MPLS/GRE (GRE used instead of what you call the 
MPLS hop-by-hop label) inside the DC -- this requires only one label.
At the DC edge gateway, depending on the interconnection techniques to 
connect the WAN, different options can be used (RFC4364 section 10): 
Option A with back-to-back VRFs (no MPLS label, but typically VLANs), or 
option B (with one MPLS label); a mix of A/B is also possible and 
sometimes called option D (one label); option C also exists, but is 
not a good fit here.


Inside one DC, if vswitches see each other across an Ethernet segment, 
we can also use MPLS with just one label (the VPN label) without a GRE 
encap.


In a way, you can say that in Option B, the labels are mapped at the 
DC/WAN gateway(s), but this is really just MPLS label swapping, not to be 
misunderstood as mapping a DC label space to a WAN label space (see 
below: the label space is local to each device).




3. Will MPLS also go over the physical network 
infrastructure?


The use of MPLS/GRE means we are doing an overlay, just like your 
typical VXLAN-based solution, and the network physical infrastructure 
does not need to be MPLS-aware (it just needs to be able to carry IP 
traffic)



4. How will the labels be mapped across the virtual and physical 
worlds?


(I don't get the question, I'm not sure what you mean by mapping labels)


5. Who manages the label space? The virtual world, the physical 
world, or both? (OpenStack + ODL?)


In MPLS*, the label space is local to each device : a label is 
downstream-assigned, i.e. allocated by the receiving device for a 
specific purpose (e.g. forwarding in a VRF). It is then (typically) 
advertised in a routing protocol; the sender device will use this label 
to send traffic to the receiving device for this specific purpose.  As a 
result a sender device may then use label 42 to forward traffic in the 
context of VPN X to a receiving device A, and the same label 42 to 
forward traffic in the context of another VPN Y to another receiving 
device B, and locally use label 42 to receive traffic for VPN Z.  There 
is no global label space to manage.
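
To picture it, an illustrative (entirely invented) pair of per-device
label tables for the example above:

    # Illustrative only: each receiving device allocates labels from
    # its own local space, so 42 can be reused independently.
    label_table_A = {42: ('forward-in-VRF', 'VPN-X')}  # A uses 42 for X
    label_table_B = {42: ('forward-in-VRF', 'VPN-Y')}  # B reuses 42 for Y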


So, while you can design a solution where the label space is managed in 
a centralized fashion, this is not required.


You could design an SDN controller solution where the controller would 
manage one label space common to all nodes, or all the label spaces of 
all forwarding devices, but I think it's hard to derive any interesting 
property from such a design choice.


In our BaGPipe distributed design (and this is also true in OpenContrail 
for instance) the label space is managed locally on each compute node 
(or network node if the BGP speaker is on a network node). More 
precisely, in the VPN implementation.


If you take a step back, the only naming space that has to be managed 
in BGP VPNs is the Route Target space. This is only in the control 
plane. It is a very large space (48 bits), and it is structured (each AS 
has its own 32-bit space, and there are private AS numbers). The mapping 
to MPLS labels in the dataplane is per-device and purely local.


(*: MPLS also allows upstream-assigned labels; this is more recent and 
only used in specific cases where downstream-assigned does not work well)



6. Will the labels be nested (i.e. like L3 VPN end-to-end MPLS 
connectivity)?


In solutions where MPLS/GRE is used the label stack typically has only 
one label (the VPN label).




7. Or will it be label stitching between the virtual and physical 
networks?
How will the end-to-end path be set up?

Let me know your opinion for the same.



How the end-to-end path is set up may depend on the interconnection choice.
With an inter-AS option B or A+B, you would have the following:
- ingress DC overlay: one MPLS-over-GRE hop from vswitch to DC edge
- ingress DC edge to WAN: one MPLS label (VPN label 

Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-16 Thread Thomas Morin

Hi Ryan,

Mathieu Rohon :

We have been working on similar use cases to announce /32s with the
BaGPipe BGP speaker, which supports E-VPN.


Btw, the code for the BGP E-VPN implementation is at 
https://github.com/Orange-OpenSource/bagpipe-bgp
It reuses parts of ExaBGP (to which we contributed encodings for E-VPN 
and IP VPNs) and relies on the VXLAN native Linux kernel implementation 
for the E-VPN dataplane.


-Thomas


Please have a look at use case B in [1][2].
Note also that the l2population mechanism driver for ML2, which is
compatible with OVS, Linux bridge and the Ryu ofagent, is inspired by E-VPN,
and I'm sure it could help in your use case.

[1]http://fr.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe
[2]https://www.youtube.com/watch?v=q5z0aPrUZYcsns
[3]https://blueprints.launchpad.net/neutron/+spec/l2-population

Mathieu

On Thu, Dec 4, 2014 at 12:02 AM, Ryan Clevenger
ryan.cleven...@rackspace.com wrote:

Hi,

At Rackspace, we have a need to create a higher level networking service
primarily for the purpose of creating a Floating IP solution in our
environment. The current solutions for Floating IPs, being tied to plugin
implementations, do not meet our needs at scale for the following reasons:

1. Limited endpoint H/A mainly targeting failover only and not multi-active
endpoints,
2. Lack of noisy neighbor and DDOS mitigation,
3. IP fragmentation (with cells, public connectivity is terminated inside
each cell leading to fragmentation and IP stranding when cell CPU/Memory use
doesn't line up with allocated IP blocks. Abstracting public connectivity
away from nova installations allows for much more efficient use of those
precious IPv4 blocks).
4. Diversity in transit (multiple encapsulation and transit types on a per
floating ip basis).

We realize that network infrastructures are often unique and such a solution
would likely diverge from provider to provider. However, we would love to
collaborate with the community to see if such a project could be built that
would meet the needs of providers at scale. We believe that, at its core,
this solution would boil down to terminating north-south traffic
temporarily at a massively horizontally scalable centralized core and then
encapsulating traffic east-west to a specific host based on the
association setup via the current L3 router's extension's 'floatingips'
resource.

Our current idea, involves using Open vSwitch for header rewriting and
tunnel encapsulation combined with a set of Ryu applications for management:

https://i.imgur.com/bivSdcC.png

The Ryu application uses Ryu's BGP support to announce up to the Public
Routing layer individual floating ips (/32's or /128's) which are then
summarized and announced to the rest of the datacenter. If a particular
floating ip is experiencing unusually large traffic (DDOS, slashdot effect,
etc.), the Ryu application could change the announcements up to the Public
layer to shift that traffic to dedicated hosts setup for that purpose. It
also announces a single /32 Tunnel Endpoint ip downstream to the TunnelNet
Routing system which provides transit to and from the cells and their
hypervisors. Since traffic from either direction can then end up on any of
the FLIP hosts, a simple flow table to modify the MAC and IP in either the
SRC or DST fields (depending on traffic direction) allows the system to be
completely stateless. We have proven this out (with static routing and
flows) to work reliably in a small lab setup.

On the hypervisor side, we currently plumb networks into separate OVS
bridges. Another Ryu application would control the bridge that handles
overlay networking to selectively divert traffic destined for the default
gateway up to the FLIP NAT systems, taking into account any configured
logical routing and local L2 traffic to pass out into the existing overlay
fabric undisturbed.

Adding in support for L2VPN EVPN
(https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN
Overlay (https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) to the
Ryu BGP speaker will allow the hypervisor side Ryu application to advertise
up to the FLIP system reachability information to take into account VM
failover, live-migrate, and supported encapsulation types. We believe that
decoupling the tunnel endpoint discovery from the control plane
(Nova/Neutron) will provide for a more robust solution as well as allow for
use outside of openstack if desired.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split

2014-12-16 Thread Gary Kotton


On 12/16/14, 2:41 PM, Doug Hellmann d...@doughellmann.com wrote:


On Dec 16, 2014, at 7:27 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 Signed PGP part
 On 16/12/14 12:50, Doug Hellmann wrote:
 
  On Dec 16, 2014, at 5:13 AM, Ihar Hrachyshka ihrac...@redhat.com
  wrote:
 
  Signed PGP part On 15/12/14 18:57, Doug Hellmann wrote:
  There may be a similar problem managing dependencies on
  libraries that live outside of either tree. I assume you
  already decided how to handle that. Are you doing the same
  thing, and adding the requirements to neutron's lists?
 
  I guess the idea is to keep in neutron-*aas only those
  oslo-incubator modules that are used there solely (=not used in
  main repo).
 
  How are the *aas packages installed? Are they separate libraries or
  applications that are installed on top of neutron? Or are their
  files copied into the neutron namespace?
 
 They are separate libraries with their own setup.py, dependencies,
 tarballs, all that, but they are free to use (public) code from main
 neutron package.

OK.

If they don't have copies of all of the incubated modules they use, how
are they tested? Is neutron a dependency?

This is/was one of my concerns with the decomposition proposal. It is not
clear if neutron is a dependency. My two cents: it should be.


 
 
 
  I think requirements are a bit easier and should track all
  direct dependencies in each of the repos, so that in case main
  repo decides to drop one, neutron-*aas repos are not broken.
 
  For requirements, it's different because there is no major burden
  due to duplicate entries in repos.
 
 
  On Dec 15, 2014, at 12:16 PM, Doug Wiegley
  do...@a10networks.com wrote:
 
  Hi all,
 
  Ihar and I discussed this on IRC, and are going forward with
  option 2 unless someone has a big problem with it.
 
  Thanks, Doug
 
 
  On 12/15/14, 8:22 AM, Doug Wiegley do...@a10networks.com
  wrote:
 
  Hi Ihar,
 
  I'm actually in favor of option 2, but it implies a few
  things about your time, and I wanted to chat with you
  before presuming.
 
  Maintenance can not involve breaking changes. At this
  point, the co-gate will block it.  Also, oslo graduation
  changes will have to be made in the services repos first,
  and then Neutron.
 
  Thanks, doug
 
 
  On 12/15/14, 6:15 AM, Ihar Hrachyshka
  ihrac...@redhat.com wrote:
 
  Hi all,
 
  the question arose recently in one of reviews for neutron-*aas
  repos to remove all oslo-incubator code from those repos since
  it's duplicated in neutron main repo. (You can find the link to
  the review at the end of the email.)
 
  Brief history: the neutron repo was recently split into 4 pieces
  (main, neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The
  split resulted in each repository keeping their own copy of
  neutron/openstack/common/... tree (currently unused in all
  neutron-*aas repos that are still bound to modules from main
  repo).
 
  As an oslo liaison for the project, I wonder what's the best way
  to manage oslo-incubator files. We have several options:
 
  1. just kill all the neutron/openstack/common/ trees from
  neutron-*aas repositories and continue using modules from main
  repo.
 
  2. kill all duplicate modules from neutron-*aas repos and
  leave only those that are used in those repos but not in main
  repo.
 
  3. fully duplicate all those modules in each of four repos that
  use them.
 
  I think option 1. is a straw man, since we should be able to
  introduce new oslo-incubator modules into neutron-*aas repos
  even if they are not used in main repo.
 
  Option 2. is good when it comes to synching non-breaking bug
  fixes (or security fixes) from oslo-incubator, in that it will
  require only one sync patch instead of e.g. four. At the same
  time there may be potential issues when synchronizing updates
  from oslo-incubator that would break API and hence require
  changes to each of the modules that use it. Since we don't
  support atomic merges for multiple projects in gate, we will
  need to be cautious about those updates, and we will still need
  to leave neutron-*aas repos broken for some time (though the
  time may be mitigated with care).
 
  Option 3. is vice versa - in theory, you get total decoupling,
  meaning no oslo-incubator updates in main repo are expected to
  break neutron-*aas repos, but bug fixing becomes a huge PITA.
 
  I would vote for option 2., for two reasons: - most
  oslo-incubator syncs are non-breaking, and we may effectively
  apply care to updates that may result in potential breakage
  (f.e. being able to trigger an integrated run for each of
  neutron-*aas repos with the main sync patch, if there are any
  concerns). - it will make oslo liaison life a lot easier. OK,
  I'm probably too selfish on that. ;) - it will make stable
  maintainers life a lot easier. The main reason why stable
  maintainers and distributions like recent oslo graduation
  movement is that we don't need to track each bug fix we need in
  

Re: [openstack-dev] [Fuel] Logs format on UI (High/6.0)

2014-12-16 Thread Dmitry Pyzhov
Guys, thank you for your feedback. As a quick and dirty solution we will
continue to hide the extra information from the UI. It will not break the
existing user experience.

Roman, there were attempts to get rid of our current web logs page and use
Logstash. As usual, it's all about time and resources. It is our backlog,
but it is not in our current roadmap.

On Mon, Dec 15, 2014 at 6:11 PM, Roman Prykhodchenko m...@romcheg.me wrote:

 Hi folks!

 In most production environments I’ve seen, bare logs as they are shown now
 in the Fuel web UI were pretty useless. If someone has an infrastructure that
 consists of more than 5 servers and 5 services running on them, they are
 most likely to use Logstash, Loggly or another log management system.
 There are options for forwarding these logs to a remote log server, and
 that’s what is likely to be used IRL.

 Therefore, for production environments, formatting logs in the Fuel web UI or
 even showing them is a cool but pretty useless feature. In addition to
 being useless in production environments, it also creates additional load on
 the user interface.

 However, I can see that developers actually use it for debugging or
 troubleshooting, so my proposal is to introduce an option for disabling
 this feature completely.


 - romcheg

  On 15 Dec 2014, at 12:40, Tomasz Napierala tnapier...@mirantis.com
 wrote:
 
  Also +1 here.
  In huge envs we already have problems with parsing performance. In the
 long term we need to think about another log management solution.
 
 
  On 12 Dec 2014, at 23:17, Igor Kalnitsky ikalnit...@mirantis.com
 wrote:
 
  +1 to stop parsing logs on UI and show them as is. I think it's more
  than enough for all users.
 
  On Fri, Dec 12, 2014 at 8:35 PM, Dmitry Pyzhov dpyz...@mirantis.com
 wrote:
  We have a high priority bug in 6.0:
  https://bugs.launchpad.net/fuel/+bug/1401852. Here is the story.
 
  Our OpenStack services used to send logs in a strange format with an extra
  copy of the timestamp and log level:
  == ./neutron-metadata-agent.log ==
  2014-12-12T11:00:30.098105+00:00 info: 2014-12-12 11:00:30.003 14349
 INFO
  neutron.common.config [-] Logging enabled!
 
  And we have a workaround for this. We hide the extra timestamp and use the
  second log level.
 
  In Juno some of services have updated oslo.logging and now send logs in
  simple format:
  == ./nova-api.log ==
  2014-12-12T10:57:15.437488+00:00 debug: Loading app ec2 from
  /etc/nova/api-paste.ini
 
  In order to keep backward compatibility and deal with both formats we
 have a
  dirty workaround for our workaround:
  https://review.openstack.org/#/c/141450/
 
  As I see it, our best choice here is to throw away all workarounds and show
  logs in the UI as-is. If a service sends duplicated data, we should show
  duplicated data.
 
  The long-term fix here is to update oslo.logging in all packages. We can
  do it in 6.1.
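
A rough sketch (not Fuel's actual code) of what tolerating both formats
above involves: keep a line as-is unless its syslog-style prefix is
followed by a payload carrying its own oslo.logging header:

    import re

    SYSLOG_PREFIX = re.compile(r'^\S+ \w+: (?P<rest>.*)$')
    OSLO_HEADER = re.compile(
        r'^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+ \d+ [A-Z]+ ')

    def normalize(line):
        m = SYSLOG_PREFIX.match(line)
        if m and OSLO_HEADER.match(m.group('rest')):
            # Old format: drop the duplicated timestamp/level prefix.
            return m.group('rest')
        return line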
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  --
  Tomasz 'Zen' Napierala
  Sr. OpenStack Engineer
  tnapier...@mirantis.com
 
 
 
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][gerrit] Showing all inline comments from all patch sets

2014-12-16 Thread Jeremy Stanley
On 2014-12-16 12:27:15 +0200 (+0200), Radoslav Gerganov wrote:
[...]
 the backend running on GoogleAppEngine is just proxying the
 requests to review.openstack.org. So in theory if we serve the
 html page from our Gerrit it will work.
[...]

I'm having trouble locating the source code for Google App Engine,
and can instead only find source code for its SDK. How would we run
a GAE instance? (Please remember that our Infra team doesn't host
content backed by proprietary services, but do encourage Google to
release this under a free license if they haven't already.)
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split

2014-12-16 Thread Doug Hellmann

On Dec 16, 2014, at 7:42 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 Signed PGP part
 On 16/12/14 12:52, Doug Hellmann wrote:
 
  On Dec 16, 2014, at 5:22 AM, Ihar Hrachyshka ihrac...@redhat.com
  wrote:
 
  Signed PGP part On 15/12/14 17:22, Doug Wiegley wrote:
  Hi Ihar,
 
  I’m actually in favor of option 2, but it implies a few things
  about your time, and I wanted to chat with you before
  presuming.
 
  I think the split didn't mean moving project trees under separate
  governance, so I assume oslo (doc, qa, ...) liaisons should not
  be split either.
 
 
  Maintenance can not involve breaking changes. At this point,
  the co-gate will block it.  Also, oslo graduation changes will
  have to be made in the services repos first, and then Neutron.
 
  Do you mean that a change to oslo-incubator modules is co-gated
  (not just co-checked with no vote) with each of the advanced
  services?
 
  As I pointed out in my previous email, sometimes breakages are
  inescapable.
 
  Consider a change to a neutron oslo-incubator module used commonly
  in all repos that breaks its API (such changes are quite rare, but
  still have a chance of happening once in a while). If we co-gate main
  neutron repo changes with the services, we won't be
  able to merge the change.
 
  That would probably suggest that we go forward with option 3 and
  manage all incubator files separately in each of the trees,
  though, again, breakages are still possible in that scenario via
  introducing incompatibility between versions of incubator modules
  in separate repos.
 
  So we should be realistic about it and plan ahead for how we deal with
  potential breakages that *may* occur.
 
  As for oslo library graduations, the order is not really
  significant. What is significant is that we drop an oslo-incubator
  module from the main neutron repo only after all the neutron-*aas
  repos migrate to the appropriate oslo.* library. The neutron
  migration itself may occur in parallel (by postponing the module drop
  until later).
 
  Don’t assume that it’s safe to combine the incubated version and
  library version of a module. We’ve had some examples where the APIs
  change or global state changes in a way that make the two
  incompatible. We definitely don’t take any care to ensure that the
  two copies can be run together.
 
 Hm. Does it leave us with option 3 only? In that case, should we care
 about incompatibilities between different versions of incubator
 modules running in the same process (one for core code, and another
 one for a service)? That sounds more like we're not left with safe
 options.

I think you only want to have one copy of the Oslo modules active in a process 
at any given point. That probably means having the *aas projects use whatever 
incubated Oslo modules are in the main neutron repository instead of their own 
copy, but as you point out that will break those projects when neutron adopts a 
new library. You might end up having to build shims in neutron to hide the Oslo 
change during the transition.

OTOH, it may not be a big deal. We don’t go out of our way to break 
compatibility, so you might find that it works fine in a lot of cases. I think 
context won’t, because it holds global state, but some of the others should be 
fine.

FWIW, usually when we hit a dependency problem like this, the solution is to 
split one of the projects up so there is a library that can be used by all of 
the consumers. It sounds like neutron is trying to be both an application and a 
library.

 
 
 
 
  Thanks, doug
 
 
  On 12/15/14, 6:15 AM, Ihar Hrachyshka ihrac...@redhat.com
  wrote:
 
  Hi all,
 
  the question arose recently in one of reviews for neutron-*aas
  repos to remove all oslo-incubator code from those repos since
  it's duplicated in neutron main repo. (You can find the link to
  the review at the end of the email.)
 
  Brief history: the neutron repo was recently split into 4 pieces
  (main, neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The
  split resulted in each repository keeping their own copy of
  neutron/openstack/common/... tree (currently unused in all
  neutron-*aas repos that are still bound to modules from main
  repo).
 
  As an oslo liaison for the project, I wonder what's the best way
  to manage oslo-incubator files. We have several options:
 
  1. just kill all the neutron/openstack/common/ trees from
  neutron-*aas repositories and continue using modules from main
  repo.
 
  2. kill all duplicate modules from neutron-*aas repos and
  leave only those that are used in those repos but not in main
  repo.
 
  3. fully duplicate all those modules in each of four repos that
  use them.
 
  I think option 1. is a straw man, since we should be able to
  introduce new oslo-incubator modules into neutron-*aas repos
  even if they are not used in main repo.
 
  Option 2. is good when it comes to synching non-breaking bug
  fixes (or security fixes) from oslo-incubator, in that it will
  require only one sync patch instead of 

[openstack-dev] Hyper-V Meeting

2014-12-16 Thread Peter Pouliot
Hi All,

We're postponing the meeting this week due to everyone being swamped with 
higher priority tasks. If people have direct needs, please email us 
directly or contact us in the IRC channels.

p

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] [ux] Changing how the modals are closed in Horizon

2014-12-16 Thread Liz Blanchard

On Dec 12, 2014, at 2:26 PM, David Lyle dkly...@gmail.com wrote:

 works for me, less complexity +1

Sorry I’m a bit late to the game here… +1 to this though from my perspective!

Liz

 
 On Fri, Dec 12, 2014 at 11:09 AM, Timur Sufiev tsuf...@mirantis.com wrote:
 It seems to me that the consensus on keeping the simpler approach -- to make 
 Bootstrap data-backdrop=static as the default behavior -- has been reached. 
 Am I right?
 
 On Thu, Dec 4, 2014 at 10:59 PM, Kruithof, Piet pieter.c.kruithof...@hp.com 
 wrote:
 My preference would be “change the default behavior to 'static’” for the 
 following reasons:
 
 - There are plenty of ways to close the modal, so there’s not really a need 
 for this feature.
 - There are no visual cues, such as an “X” or a Cancel button, that selecting 
 outside of the modal closes it.
 - Downside is losing all of your data.
 
 My two cents…
 
 Begin forwarded message:
 
 From: Rob Cresswell (rcresswe) 
 rcres...@cisco.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: December 3, 2014 at 5:21:51 AM PST
 Subject: Re: [openstack-dev] [horizon] [ux] Changing how the modals are 
 closed in Horizon
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
 +1 to changing the behaviour to ‘static'. Modal inside a modal is potentially 
 slightly more useful, but looks messy and inconsistent, which I think 
 outweighs the functionality.
 
 Rob
 
 
 On 2 Dec 2014, at 12:21, Timur Sufiev 
 tsuf...@mirantis.com wrote:
 
 Hello, Horizoneers and UX-ers!
 
 The default behavior of modals in Horizon (defined in turn by Bootstrap 
 defaults) regarding their closing is to simply close the modal once user 
 clicks somewhere outside of it (on the backdrop element below and around the 
 modal). This is not very convenient for the modal forms containing a lot of 
 input - when it is closed without a warning all the data the user has already 
 provided is lost. Keeping this in mind, I've made a patch [1] changing 
 default Bootstrap 'modal_backdrop' parameter to 'static', which means that 
 forms are not closed once the user clicks on a backdrop, while it's still 
 possible to close them by pressing 'Esc' or clicking on the 'X' link at the 
 top right border of the form. Also the patch [1] allows to customize this 
 behavior (between 'true'-current one/'false' - no backdrop element/'static') 
 on a per-form basis.
 
 What I didn't know at the moment I was uploading my patch is that David Lyle 
 had been working on a similar solution [2] some time ago. It's a bit more 
 elaborate than mine: if the user has already filled some inputs in the 
 form, then a confirmation dialog is shown, otherwise the form is silently 
 dismissed as it happens now.
 
 The whole point of writing about this in the ML is to gather opinions which 
 approach is better:
 * stick to the current behavior;
 * change the default behavior to 'static';
 * use the David's solution with confirmation dialog (once it'll be rebased to 
 the current codebase).
 
 What do you think?
 
 [1] https://review.openstack.org/#/c/113206/
 [2] https://review.openstack.org/#/c/23037/
 
 P.S. I remember that I promised to write this email a week ago, but better 
 late than never :).
 
 --
 Timur Sufiev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 -- 
 Timur Sufiev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][gerrit] Showing all inline comments from all patch sets

2014-12-16 Thread Radoslav Gerganov

On 12/16/2014 03:59 PM, Jeremy Stanley wrote:

I'm having trouble locating the source code for Google App Engine,
and can instead only find source code for its SDK. How would we run
a GAE instance? (Please remember that our Infra team doesn't host
content backed by proprietary services, but do encourage Google to
release this under a free license if they haven't already.)



Hi Jeremy,

We don't need Google App Engine if we decide that this is useful.  We 
simply need to put the HTML page which renders the view on 
https://review.openstack.org.  It is all JavaScript which talks 
asynchronously to the Gerrit backend.
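
For illustration (not the page's actual code), the same data is reachable
from any HTTP client; a hypothetical Python sketch, assuming Gerrit's
"list change comments" endpoint (/changes/{id}/comments) and its
")]}'"-prefixed JSON responses:

    import json
    import requests

    def change_comments(change_id):
        # Published inline comments across all patch sets of a change.
        url = 'https://review.openstack.org/changes/%s/comments' % change_id
        body = requests.get(url).text
        # Gerrit prefixes JSON responses with )]}' to defeat XSSI.
        data = json.loads(body.split('\n', 1)[1])
        for path, comments in data.items():
            for c in comments:
                print('%s (ps %s): %s'
                      % (path, c.get('patch_set'), c.get('message')))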


I am using GAE to simply illustrate the idea without having to spin up 
an entire Gerrit server.  I guess I can also submit a patch to the infra 
project and see how this works on https://review-dev.openstack.org if 
you want.


Thanks,
Rado

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] New release of python-neutronclient: 2.3.10

2014-12-16 Thread Kyle Mestery
The neutron team is pleased to announce the release of a new version of
python-neutronclient. This release primarily contains bug fixes, including a
fix for a regression in how --enable-dhcp was handled. See bug 1401555 [1] for
more details. In addition, the following changes are also in this release:

[kmestery@fedora-mac python-neutronclient]$ git log --abbrev-commit
--pretty=oneline --no-merges 2.3.9..2.3.10
fea8706 subnet: allow --enable-dhcp=False/True syntax, again
66612c9 Router create distributed accepts lower case
89271b1 Add unit tests for agent related commands
56892bb Make help for agent-update more verbose
c5d8557 Use discovery fixture
497bb55 Cleanup copy and pasted token
8a77718 fix the firewall rule arg split error
a65f385 Updated from global requirements
3ed2a5e Disable name support for lb-healthmonitor-* commands
02c108f Fix mixed usage of _
12a87f2 Fixes neutronclient lb-member-show command
5d2bafa neutron port-list -f csv outputs poorly formatted JSON strings
9ed73c0 Updated from global requirements
81fe0c7 Don't allow update of ipv6-ra-mode and ipv6-address-mode
1ac542c Updated from global requirements
9c464ba Use graduated oslo libraries
d046a95 Fix E113 hacking check
64b2d8a Fix E129 hacking check
d812227 Updated from global requirements
092e668 Add InvalidIpForNetworkClient exception
0f7741d Add missing parameters to Client's docstring
72afc0f Leverage neutronclient.openstack.common.importutils import_class
27f02ac Remove extraneous vim editor configuration comments
4d2133c Fix E128 hacking check
2eba58a Don't get keystone session if using noauth
c02e782 Bump hacking to 0.9.x series
e3e0915 Change healthmonitor to health monitor in help info
bb4a0dc Correct 4xx/5xx response management in SessionClient
0fedd33 Change ipsecpolicies to 2 separate words: IPsec policies
a1a8a0e handles keyboard interrupt
1ab4335 Use six.moves cStringIO instead of cStringIO
8115c02 Updated from global requirements
9d8ab0d Replace httpretty with requests_mock
a9ed96f Fix to ensure endpoint_type is used by make_client()
42731a2 Work toward Python 3.4 support and testing
2840bdb Adds tty password entry for neutronclient
[kmestery@fedora-mac python-neutronclient]$

For more info, please see the LP page [2], and report any issues found on
the python-neutronclient LP page as a bug [3].

Thanks!
Kyle

[1] https://bugs.launchpad.net/python-neutronclient/+bug/1401555
[2] https://launchpad.net/python-neutronclient/+milestone/2.3.10
[3] https://bugs.launchpad.net/python-neutronclient
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] interesting problem with config filter

2014-12-16 Thread Ben Nemec
On 12/16/2014 07:20 AM, Doug Hellmann wrote:
 
 On Dec 16, 2014, at 7:41 AM, Mark McLoughlin mar...@redhat.com wrote:
 
 Hi Doug,

 On Mon, 2014-12-08 at 15:58 -0500, Doug Hellmann wrote:
 As we’ve discussed a few times, we want to isolate applications from
 the configuration options defined by libraries. One way we have of
 doing that is the ConfigFilter class in oslo.config. When a regular
 ConfigOpts instance is wrapped with a filter, a library can register
 new options on the filter that are not visible to anything that
 doesn’t have the filter object.

 Or to put it more simply, the configuration options registered by the
 library should not be part of the public API of the library.

 Unfortunately, the Neutron team has identified an issue with this
 approach. We have a bug report [1] from them about the way we’re using
 config filters in oslo.concurrency specifically, but the issue applies
 to their use everywhere. 

 The neutron tests set the default for oslo.concurrency’s lock_path
 variable to “$state_path/lock”, and the state_path option is defined
 in their application. With the filter in place, interpolation of
 $state_path to generate the lock_path value fails because state_path
 is not known to the ConfigFilter instance.

 It seems that Neutron sets this default in its etc/neutron.conf file in
 its git tree:

  lock_path = $state_path/lock

 I think we should be aiming for defaults like this to be set in code,
 and for the sample config files to contain nothing but comments. So,
 neutron should do:

  lockutils.set_defaults(lock_path='$state_path/lock')

 That's a side detail, however.

 The reverse would also happen (if the value of state_path was somehow
 defined to depend on lock_path),

 This dependency wouldn't/shouldn't be code - because Neutron *code*
 shouldn't know about the existence of library config options.
 Neutron deployers absolutely will be aware of lock_path however.

 and that’s actually a bigger concern to me. A deployer should be able
 to use interpolation anywhere, and not worry about whether the options
 are in parts of the code that can see each other. The values are all
 in one file, as far as they know, and so interpolation should “just
 work”.

 Yes, if a deployer looks at a sample configuration file, all options
 listed in there seem like they're in-play for substitution use within
 the value of another option. For string substitution only, I'd say there
 should be a global namespace where all options are registered.

 Now ... one caveat on all of this ... I do think the string substitution
 feature is pretty obscure and mostly just used in default values.

 I see a few solutions:

 1. Don’t use the config filter at all.
 2. Make the config filter able to add new options and still see
 everything else that is already defined (only filter in one
 direction).
 3. Leave things as they are, and make the error message better.

 4. Just tackle this specific case by making lock_path implicitly
 relative to a base path the application can set via an API, so Neutron
 would do:

  lockutils.set_base_path(CONF.state_path)

 at startup.

 5. Make the toplevel ConfigOpts aware of all filters hanging off it, and
 somehow cycle through all of those filters just when doing string
 substitution.
 
 We would have to allow the reverse as well, since the filter object doesn’t 
 see options not explicitly imported by the code creating the filter.

This doesn't seem like it should be difficult to do though.  The
ConfigFilter already takes a conf object when it gets initialized so it
should have access to all of the globally registered opts.  I'm a little
surprised it doesn't already.

I'm actually not 100% sure it makes sense to allow application opts to
reference library opts, since the application shouldn't depend on a
library setting. But since the config file is flat, I don't know that we
can enforce that separation, so _somebody_ is going to try to do it and
be confused about why it doesn't work.

So I guess I feel like making opt interpolation work in both directions
is the right way to do this, but it's kind of a moot point if runtime
registration breaks this anyway (which it probably does :-/).  Improving
the error message to explain why a particular value can't be used for
interpolation might be the only not insanely complicated way to
completely address this interpolation issue.

 
 In either case, it only works if the filter object has been instantiated. I 
 wonder if we have a similar problem with runtime option registration. I’ll 
 have to test that.
 
 

 Because of the deployment implications of using the filter, I’m
 inclined to go with choice 1 or 2. However, choice 2 leaves open the
 possibility of a deployer wanting to use the value of an option
 defined by one filtered set of code when defining another. I don’t
 know how frequently that might come up, but it seems like the error
 would be very confusing, especially if both options are set in the
 same config file.

 I think 

Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-12-16 Thread Anant Patil
On 16-Dec-14 00:59, Clint Byrum wrote:
 Excerpts from Anant Patil's message of 2014-12-15 07:15:30 -0800:
 On 13-Dec-14 05:42, Zane Bitter wrote:
 On 12/12/14 05:29, Murugan, Visnusaran wrote:


 -Original Message-
 From: Zane Bitter [mailto:zbit...@redhat.com]
 Sent: Friday, December 12, 2014 6:37 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept
 showdown

 On 11/12/14 08:26, Murugan, Visnusaran wrote:
 [Murugan, Visnusaran]
 In case of rollback where we have to cleanup earlier version of
 resources,
 we could get the order from old template. We'd prefer not to have a
 graph table.

 In theory you could get it by keeping old templates around. But that
 means keeping a lot of templates, and it will be hard to keep track
 of when you want to delete them. It also means that when starting an
 update you'll need to load every existing previous version of the
 template in order to calculate the dependencies. It also leaves the
 dependencies in an ambiguous state when a resource fails, and
 although that can be worked around it will be a giant pain to implement.


 Agree that looking at all templates for a delete is not good. But,
 barring complexity, we feel we could achieve it by way of having an
 update and a delete stream for a stack update operation. I will
 elaborate in detail in the etherpad sometime tomorrow :)

 I agree that I'd prefer not to have a graph table. After trying a
 couple of different things I decided to store the dependencies in the
 Resource table, where we can read or write them virtually for free
 because it turns out that we are always reading or updating the
 Resource itself at exactly the same time anyway.


 Not sure how this will work in an update scenario when a resource does
 not change and its dependencies do.

 We'll always update the requirements, even when the properties don't
 change.


 Can you elaborate a bit on rollback.

 I didn't do anything special to handle rollback. It's possible that we 
 need to - obviously the difference in the UpdateReplace + rollback case 
 is that the replaced resource is now the one we want to keep, and yet 
 the replaced_by/replaces dependency will force the newer (replacement) 
 resource to be checked for deletion first, which is an inversion of the 
 usual order.


 This is where the version is so handy! For UpdateReplaced ones, there is
 an older version to go back to. This version could just be the template ID,
 as I mentioned in another e-mail. All resources are at the current
 template ID if they are found in the current template, even if there is
 no need to update them. Otherwise, they need to be cleaned up in the
 order given in the previous templates.

 I think the template ID is used as a version as far as I can see in Zane's
 PoC. If the resource template key doesn't match the current template
 key, the resource is deleted. The version is a misnomer here, but that
 field (template ID) is used as though we had versions of resources.

 However, I tried to think of a scenario where that would cause problems 
 and I couldn't come up with one. Provided we know the actual, real-world 
 dependencies of each resource I don't think the ordering of those two 
 checks matters.

 In fact, I currently can't think of a case where the dependency order 
 between replacement and replaced resources matters at all. It matters in 
 the current Heat implementation because resources are artificially 
 segmented into the current and backup stacks, but with a holistic view 
 of dependencies that may well not be required. I tried taking that line 
 out of the simulator code and all the tests still passed. If anybody can 
 think of a scenario in which it would make a difference, I would be very 
 interested to hear it.

 In any event though, it should be no problem to reverse the direction of 
 that one edge in these particular circumstances if it does turn out to 
 be a problem.

 We had an approach with depends_on
 and needed_by columns in the Resource table, but dropped it when we
 figured out we had too many DB operations for update.

 Yeah, I initially ran into this problem too - you have a bunch of nodes 
 that are waiting on the current node, and now you have to go look them 
 all up in the database to see what else they're waiting on in order to 
 tell if they're ready to be triggered.

 It turns out the answer is to distribute the writes but centralise the 
 reads. So at the start of the update, we read all of the Resources, 
 obtain their dependencies and build one central graph[1]. We then make 
 that graph available to each resource (either by passing it as a 
 notification parameter, or storing it somewhere central in the DB that 
 they will all have to read anyway, i.e. the Stack). But when we update a 
 dependency we don't update the central graph, we update the individual 
 Resource so there's no global lock required.

 [1] 
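
 A minimal sketch of the "distribute the writes, centralise the reads"
 idea (names hypothetical, not the actual PoC code): the graph is built
 once per traversal from the Resource rows, and each worker then only
 needs the shared graph plus the individually updated rows.

     def build_graph(resources):
         # resources: rows exposing .id and .requires (a set of ids)
         return {r.id: set(r.requires) for r in resources}

     def ready_to_trigger(resource_id, graph, done_ids):
         # A node may be triggered once everything it requires is done.
         return graph[resource_id] <= set(done_ids)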
 

Re: [openstack-dev] [oslo] interesting problem with config filter

2014-12-16 Thread Doug Hellmann

On Dec 16, 2014, at 10:32 AM, Ben Nemec openst...@nemebean.com wrote:

 On 12/16/2014 07:20 AM, Doug Hellmann wrote:
 
 On Dec 16, 2014, at 7:41 AM, Mark McLoughlin mar...@redhat.com wrote:
 
 Hi Doug,
 
 On Mon, 2014-12-08 at 15:58 -0500, Doug Hellmann wrote:
 As we’ve discussed a few times, we want to isolate applications from
 the configuration options defined by libraries. One way we have of
 doing that is the ConfigFilter class in oslo.config. When a regular
 ConfigOpts instance is wrapped with a filter, a library can register
 new options on the filter that are not visible to anything that
 doesn’t have the filter object.
 
 Or to put it more simply, the configuration options registered by the
 library should not be part of the public API of the library.
 
 Unfortunately, the Neutron team has identified an issue with this
 approach. We have a bug report [1] from them about the way we’re using
 config filters in oslo.concurrency specifically, but the issue applies
 to their use everywhere. 
 
 The neutron tests set the default for oslo.concurrency’s lock_path
 variable to “$state_path/lock”, and the state_path option is defined
 in their application. With the filter in place, interpolation of
 $state_path to generate the lock_path value fails because state_path
 is not known to the ConfigFilter instance.
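
 A minimal reproduction sketch (assuming oslo.config's cfgfilter module
 as it existed at the time; the option names are taken from the bug
 report):

     from oslo.config import cfg
     from oslo.config import cfgfilter

     conf = cfg.ConfigOpts()
     conf.register_opt(cfg.StrOpt('state_path', default='/var/lib/neutron'))

     fconf = cfgfilter.ConfigFilter(conf)
     fconf.register_opt(cfg.StrOpt('lock_path', default='$state_path/lock'))

     conf([])
     # Fails: state_path is registered on the parent ConfigOpts, not on
     # the filter, so the $state_path substitution cannot be resolved.
     print(fconf.lock_path)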
 
 It seems that Neutron sets this default in its etc/neutron.conf file in
 its git tree:
 
 lock_path = $state_path/lock
 
 I think we should be aiming for defaults like this to be set in code,
 and for the sample config files to contain nothing but comments. So,
 neutron should do:
 
 lockutils.set_defaults(lock_path='$state_path/lock')
 
 That's a side detail, however.
 
 The reverse would also happen (if the value of state_path was somehow
 defined to depend on lock_path),
 
 This dependency wouldn't/shouldn't be code - because Neutron *code*
 shouldn't know about the existence of library config options.
 Neutron deployers absolutely will be aware of lock_path however.
 
 and that’s actually a bigger concern to me. A deployer should be able
 to use interpolation anywhere, and not worry about whether the options
 are in parts of the code that can see each other. The values are all
 in one file, as far as they know, and so interpolation should “just
 work”.
 
 Yes, if a deployer looks at a sample configuration file, all options
 listed in there seem like they're in-play for substitution use within
 the value of another option. For string substitution only, I'd say there
 should be a global namespace where all options are registered.
 
 Now ... one caveat on all of this ... I do think the string substitution
 feature is pretty obscure and mostly just used in default values.
 
 I see a few solutions:
 
 1. Don’t use the config filter at all.
 2. Make the config filter able to add new options and still see
 everything else that is already defined (only filter in one
 direction).
 3. Leave things as they are, and make the error message better.
 
 4. Just tackle this specific case by making lock_path implicitly
 relative to a base path the application can set via an API, so Neutron
 would do:
 
 lockutils.set_base_path(CONF.state_path)
 
 at startup.
 
 5. Make the toplevel ConfigOpts aware of all filters hanging off it, and
 somehow cycle through all of those filters just when doing string
 substitution.
 
 We would have to allow the reverse as well, since the filter object doesn’t 
 see options not explicitly imported by the code creating the filter.
 
 This doesn't seem like it should be difficult to do though.  The
 ConfigFilter already takes a conf object when it gets initialized so it
 should have access to all of the globally registered opts.  I'm a little
 surprised it doesn't already.
 
 I'm actually not 100% sure it makes sense to allow application opts to
 reference library opts since the application shouldn't depend on a
 library setting, but since the config file is flat I don't know that we
 can enforce that separation so _somebody_ is going to try to do it and
 be confused why it doesn't work.
 
 So I guess I feel like making opt interpolation work in both directions
 is the right way to do this, but it's kind of a moot point if runtime
 registration breaks this anyway (which it probably does :-/).  

If it does, we should probably change the interpolation code to use any option 
values it finds as a literal string without interpreting or validating it. That 
means changing the implementation to go through a different lookup path, but it 
sounds like we need that anyway.

 Improving
 the error message to explain why a particular value can't be used for
 interpolation might be the only not insanely complicated way to
 completely address this interpolation issue.

https://review.openstack.org/#/c/140143/

 
 
 In either case, it only works if the filter object has been instantiated. I 
 wonder if we have a similar problem with runtime option registration. I’ll 
 have to test that.
 
 
 
 

Re: [openstack-dev] [all][gerrit] Showing all inline comments from all patch sets

2014-12-16 Thread Jeremy Stanley
On 2014-12-16 17:19:55 +0200 (+0200), Radoslav Gerganov wrote:
 We don't need GoogleAppEngine if we decide that this is useful. We
 simply need to put the html page which renders the view on
 https://review.openstack.org. It is all javascript which talks
 asynchronously to the Gerrit backend.
 
 I am using GAE to simply illustrate the idea without having to
 spin up an entire Gerrit server.

That makes a lot more sense--thanks for the clarification!

 I guess I can also submit a patch to the infra project and see how
 this works on https://review-dev.openstack.org if you want.

If there's a general desire from the developer community for it,
then that's probably the next step. However, ultimately this seems
like something better suited as an upstream feature request for
Gerrit (there may even already be thread-oriented improvements in
the works for the new change screen--I haven't kept up with their
progress lately).
-- 
Jeremy Stanley



[openstack-dev] [Telco][NFV] Meeting Reminder - Wednesday December 17th @ 1400 UTC in #openstack-meeting-alt

2014-12-16 Thread Steve Gordon
Hi all,

Just a reminder that the Telco Working Group will be meeting @ 1400 UTC in 
#openstack-meeting on Wednesday December 17th. Draft agenda is available here:

https://etherpad.openstack.org/p/nfv-meeting-agenda

Please feel free to add items. Note that I would also like to propose that we 
skip the meetings which would have fallen on December 24th and December 31st 
due to it being a holiday period for many participants. This would make the 
next meeting following this one January 7th.

Thanks,

Steve



Re: [openstack-dev] [Fuel]

2014-12-16 Thread Sergey Vasilenko
Guys, it's a big and complicated architecture issue.

An issue like this was carefully researched about a month ago (while P***).

Root cause of the issue:

   - Now we use OVS to build the virtual network topology on each node.
   - OVS suffers performance degradation when passing a huge number of
   small network packets.
   - We can’t abandon OVS entirely and forever, because it's the most
   popular Neutron solution.
   - We can’t partially abandon OVS now either, because the low-level
   modules aren't ready for this yet. I started a blueprint (
   https://blueprints.launchpad.net/fuel/+spec/l23network-refactror-to-provider-based-resources)
   to allow combining OVS for Neutron purposes while not using it for
   management, storage, etc. purposes.

We, together with the L2 support team, the Neutron team, and other
network experts, tuned one of the existing production-like environments
after deployment and achieved the following values on bonds of two 10G
cards:

   - vm-to-vm speed (on different compute nodes): 2.56 Gbits/sec (GRE
   segmentation)
   - node-to-node speed: 17.6 Gbits/s

These values are close to the theoretical maximum for OVS 1.xx with GRE.
Some performance improvements may also be achieved by upgrading Open
vSwitch to the latest LTS branch (2.3.1 at this time) and using the
megaflow feature (
http://networkheresy.com/2014/11/13/accelerating-open-vswitch-to-ludicrous-speed/
).


After this research we concluded:


   - OVS can't pass a huge number of small packets without network
   performance degradation.
   - To fix this we should re-design the network topology on the
   environment's nodes.
   - Even a re-designed network topology can't fix this issue entirely.
   Some network parameters, like MTU, disabling offloading for NICs,
   buffers, etc., can be tuned only on a real environment.


My opinion: in Fuel we should add a new component (or extend the existing
network-checker). This component should test network performance on the
customer's real pre-configured environment, using different (already
defined) performance test cases, and recommend a better setup BEFORE the
main deployment cycle runs.
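
A sketch of what one such check could look like (hypothetical; iperf
assumed installed on the nodes), shelling out between two
already-provisioned nodes:

    import subprocess

    def run_node_to_node_check(remote_ip, seconds=10):
        # One TCP run; the caller compares the reported rate against
        # the expected line rate before the deployment is started.
        return subprocess.check_output(
            ['iperf', '-c', remote_ip, '-t', str(seconds), '-f', 'g'])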

/sv


Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-16 Thread Mathieu Gagné

On 2014-12-16 12:07 AM, Christopher Yeoh wrote:

So I think this is something we really should get agreement on across
the OpenStack API first before flipping back and forth on a case-by-case
basis.

Personally I think we should be using UUIDs for uniqueness and leave any
extra restrictions to a UI layer if really required. If we try to have
name uniqueness then "test" should be considered the same as " test" as
"test ", and it introduces all sorts of slightly different combos that
look the same except under very close comparison. Add unicode for extra fun.



Leaving such uniqueness validation to the UI layer is a *huge no-no*.

The problem I had in production occurred in a non-UI system.

Please consider making it great for all users, not just the one 
(Horizon) provided by OpenStack.


--
Mathieu



Re: [openstack-dev] [Nova] question about Get Guest Info row in HypervisorSupportMatrix

2014-12-16 Thread Dmitry Guryanov
On Tuesday 09 December 2014 18:15:01 Markus Zoeller wrote:
   On Tue, Dec 09, 2014 at 06:33:47PM +0300, Dmitry Guryanov wrote:
   
   Hello!
   
   There is a feature in HypervisorSupportMatrix
   (https://wiki.openstack.org/wiki/HypervisorSupportMatrix) called Get
   Guest Info. Does anybody know what it means? I haven't found anything
   like this in the nova API, in horizon, or in the nova command line.
 
 I think this maps to the nova driver function get_info:
  https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4054
 
 I believe (and didn't double-check) that this is used e.g. by the
 Nova CLI via `nova show [--minimal] server` command.
 

It seems Driver.get_info is used only for obtaining the instance's power
state. That's strange. I think we can clean up the code, rename get_info to
get_power_state, and return only the power state from this function.
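
Purely illustrative sketch of the suggested rename (hypothetical; not
the current nova virt driver API):

    class ComputeDriver(object):
        def get_power_state(self, instance):
            # Would replace get_info(), whose other fields appear to be
            # unused by the compute manager.
            raise NotImplementedError()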

 I tried to map the features of the hypervisor support matrix to
 specific nova driver functions on this wiki page:
 https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DriverAPI
 

Thanks!

  On Tue Dec 9 15:39:35 UTC 2014, Daniel P. Berrange wrote:
  I've pretty much no idea what the intention was for that field. I've
  been working on formally documenting all those things, but draw a blank
  for that
  
  FYI:
  
  https://review.openstack.org/#/c/136380/1/doc/hypervisor-support.ini
  
  Regards, Daniel
 
 Nice! I will keep an eye on that :)
 
 
 Regards,
 Markus Zoeller
 IRC: markus_z
 Launchpad: mzoeller
 
 

-- 
Dmitry Guryanov



Re: [openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-16 Thread Anne Gentle
On Tue, Dec 16, 2014 at 4:05 AM, Neil Jerram neil.jer...@metaswitch.com
wrote:

 Stefano Maffulli stef...@openstack.org writes:

  On 12/09/2014 04:11 PM,  by wrote:
  [vad] how about the documentation in this case?... because it needs some
  place to document (a short description and a link to the vendor page) or
  list these kinds of out-of-tree plugins/drivers... just to make the user
  aware of the availability of such plugins/drivers which are compatible
  with such and such an OpenStack release.
  I checked with the documentation team and according to them, only the
  following plugins/drivers will get documented...
  1) in-tree plugins/drivers (full documentation)
  2) third-party plugins/drivers (ie, one implements and follows this new
  proposal, a.k.a partially-in-tree due to the integration module/code)...
 
  *** no listing/mention about such completely out-of-tree
 plugins/drivers***
 
  Discoverability of documentation is a serious issue. As I commented on
  docs spec [1], I think there are already too many places, mini-sites and
  random pages holding information that is relevant to users. We should
  make an effort to keep things discoverable, even if not maintained
  necessarily by the same, single team.
 
  I think the docs team means that they are not able to guarantee
  documentation for third-party *themselves* (and have not been able to,
  either). The docs team is already overworked as it is now; they can't
  take on more responsibilities.
 
  So once Neutron's code is split, documentation for the users of all
  third-party modules should find a good place to live, indexed and
  searchable together with the rest of the docs. I'm hoping that we
  can find a place (ideally under docs.openstack.org?) where third-party
  documentation can live and be maintained by the teams responsible for
  the code, too.
 
  Thoughts?

 I suggest a simple table, under docs.openstack.org, where each row has
 the plugin/driver name, and then links to the documentation and code.
 There should ideally be a very lightweight process for vendors to add
 their row(s) to this table, and to edit those rows.

 I don't think it makes sense for the vendor documentation itself to be
 under docs.openstack.org, while the code is out of tree.


Stef has suggested docs.openstack.org/third-party as a potential location
on the review at [1] https://review.openstack.org/#/c/133372/.

The proposal currently is that the list's source would be in the
openstack-manuals repository, and the process for adding to that repo is
the same as all OpenStack contributions.

I plan to finalize the plan in January, thanks all for the input, and keep
it coming.

Anne


 Regards,
 Neil




Re: [openstack-dev] [stable] Organizational changes to support stable branches

2014-12-16 Thread Thierry Carrez
New status update:

The switch to per-project stable review teams is now completed.

People that originally were in the openstack-stable-maint have been
split between stable-maint-core (for cross-project stable policy
guardians) and $PROJECT-stable-maint (for those specialized in reviewing
one project stable branch in particular).

If you were a member of openstack-stable-maint and feel like you've been
misplaced, don't hesitate to contact me or another stable-maint-core member.

Regards,

Thierry Carrez wrote:
 OK, since there was no disagreement I pushed the changes to:
 https://wiki.openstack.org/wiki/StableBranch
 
 We'll get started setting up project-specific stable-maint teams ASAP.
 Cheers,
 
 Thierry Carrez wrote:
 TL;DR:
 Every project should designate a Stable branch liaison.

 Hi everyone,

 Last week at the summit we discussed evolving the governance around
 stable branches, in order to maintain them more efficiently (and
 hopefully for a longer time) in the future.

 The current situation is the following: there is a single
 stable-maint-core review team that reviews all backports for all
 projects, making sure the stable rules are followed. This does not scale
 that well, so we started adding project-specific people to the single
 group, but they (rightfully) only care about one project. Things had to
 change for Kilo. Here is what we came up with:

 1. We propose that integrated projects with stable branches designate a
 formal Stable Branch Liaison (by default, that would be the PTL, but I
 strongly encourage someone specifically interested in stable branches to
 step up). The Stable Branch Liaison is responsible for making sure
 backports are proposed for critical issues in their project, and make
 sure proposed backports are reviewed. They are also the contact point
 for stable branch release managers around point release times.

 2. We propose to set up project-specific review groups
 ($PROJECT-stable-core) which would be in charge of reviewing backports
 for a given project, following the stable rules. Originally that group
 should be the Stable Branch Liaison + stable-maint-core. The group is
 managed by stable-maint-core, so that we make sure any addition is well
 aware of the Stable Branch rules before they are added. The Stable
 Branch Liaison should suggest names for addition to the group as needed.

 3. The current stable-maint-core group would be reduced to stable branch
 release managers and other active cross-project stable branch rules
 custodians. We'll remove project-specific people and PTLs that were
 added in the past. The new group would be responsible for granting
 exceptions for all questionable backports raised by $PROJECT-stable-core
 groups, providing backports reviews help everywhere, maintain the stable
 branch rules (and make sure they are respected), and educate proposed
 $PROJECT-stable-core members on the rules.

 4. Each stable branch (stable/icehouse, stable/juno...) that we
 concurrently support should have a champion. Stable Branch Champions are
 tasked with championing a specific stable branch support, making sure
 the branch stays in good shape and remains usable at all times. They
 monitor periodic jobs failures and enlist the help of others in order to
 fix the branches in case of breakage. They should also raise flags if
 for some reason they are blocked and don't receive enough support, in
 which case early abandon of the branch will be considered. Adam
 Gandelman volunteered to be the stable/juno champion. Ihar Hrachyshka
 (was) volunteered to be the stable/icehouse champion.

  5. To set expectations right and evolve the meaning of "stable" over
  time to gradually mean "more not changing", we propose to introduce
  support phases for stable branches. During the first 6 months of life of
  a stable branch (Phase I) any significant bug may be backported. During
  the next 6 months of life of a stable branch (Phase II), only critical
 issues and security fixes may be backported. After that and until end of
 life (Phase III), only security fixes may be backported. That way, at
 any given time, there is only one stable branch in Phase I support.

 6. In order to raise awareness, all stable branch discussions will now
 happen on the -dev list (with prefix [stable]). The
 openstack-stable-maint list is now only used for periodic jobs reports,
 and is otherwise read-only.

 Let us know if you have any comment, otherwise we'll proceed to set
 those new policies up.

 
 


-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all][gerrit] Showing all inline comments from all patch sets

2014-12-16 Thread Dolph Mathews
I've envisioned basically the same feature before, but I don't find the
comments to be particularly useful without the complete context.

What I really want from gerrit is a 3-way diff, wherein the first column is
always the original state of the repo, the second column is a
user-selectable patchset between (patchset 1) and (latest patchset - 1),
and the third column is always the (latest patchset). And then make it easy
for me to switch the middle column to a different patchset, without
scrolling back to the top of the page. You'd be able to quickly skim
through the history of comments and see the evolution of a patch, which I
think is the same user experience that you're looking for?

I agree with Jeremy though, this is ideally an upstream effort to improve
gerrit itself.
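
The grouping itself should be cheap to do against Gerrit's REST API; a
rough sketch (the endpoint is from Gerrit's REST documentation, the
helper and grouping scheme are hypothetical):

    import json
    import requests

    def comments_by_patchset(change_id):
        url = ('https://review.openstack.org/changes/%s/comments'
               % change_id)
        raw = requests.get(url).text
        # Gerrit prefixes JSON responses with )]}' to prevent XSSI.
        data = json.loads(raw.split('\n', 1)[1])
        grouped = {}
        for path, comments in data.items():
            for c in comments:
                key = (c.get('patch_set'), path, c.get('line'))
                grouped.setdefault(key, []).append(c['message'])
        return grouped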

On Tue, Dec 16, 2014 at 4:27 AM, Radoslav Gerganov rgerga...@vmware.com
wrote:

 I never liked how Gerrit is displaying inline comments and I find it hard
 to follow discussions on changes with many patch sets and inline comments.
  So I tried to hack together an html view which displays all comments grouped
 by patch set, file and commented line.  You can see the result at
 http://gerrit-mirror.appspot.com/change-id.  Some examples:

 http://gerrit-mirror.appspot.com/127283
 http://gerrit-mirror.appspot.com/128508
 http://gerrit-mirror.appspot.com/83207

 There is room for many improvements (my css skills are very limited) but I
 am just curious if someone else finds the idea useful.  The frontend is
 using the same APIs as the Gerrit UI and the backend running on
 GoogleAppEngine is just proxying the requests to review.openstack.org. So
 in theory if we serve the html page from our Gerrit it will work. You can
 find all sources here: https://github.com/rgerganov/gerrit-hacks.  Let me
 know what you think.

 Thanks,
 Rado



[openstack-dev] [Neutron] Looking for feedback: spec for allowing additional IPs to be shared

2014-12-16 Thread Thomas Maddox
Hey all,

It seems I missed the Kilo proposal deadline for Neutron, unfortunately, but I 
still wanted to propose this spec for Neutron and get feedback/approval, sooner 
rather than later, so I can begin working on an implementation, even if it 
can't land in Kilo. I opted to put this in an etherpad for now for 
collaboration due to missing the Kilo proposal deadline.

Spec markdown in etherpad: 
https://etherpad.openstack.org/p/allow-sharing-additional-ips
Blueprint: 
https://blueprints.launchpad.net/neutron/+spec/allow-sharing-additional-ips

I also want to add this to the meeting agenda for Monday and hopefully we can 
get to chatting about it. :)

Cheers!
-Thomas



Re: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party

2014-12-16 Thread Kurt Taylor
On Mon, Dec 15, 2014 at 7:07 PM, Stefano Maffulli stef...@openstack.org
wrote:

 On 12/05/2014 07:08 AM, Kurt Taylor wrote:
  1. Meeting content: Having 2 meetings per week is more than is needed at
  this stage of the working group. There just isn't enough meeting content
  to justify having two meetings every week.

 I'd like to discuss this further: the stated objectives of the meetings
 are very wide and may allow for more than one slot per week. In
 particular I'm seeing the two below as good candidates for 'meet as many
 times as possible':

*  to provide a forum for the curious and for OpenStack programs who
 are not yet in this space but may be in the future
* to encourage questions from third party folks and support the
 sourcing of answers

 snip


 As I mentioned above, probably one way to do this is to make some slots
 more focused on engaging newcomers and answering questions, more like
 serendipitous mentoring sessions with the less involved, while another
 slot could be dedicated to more focused and long term efforts, with more
 committed people?


This is an excellent idea, let's split the meetings into:

1) Mentoring - mentoring new CI team members and operators, help them
understand infra tools and processes. Anita can continue her fantastic work
here.

2) Working Group - working meeting for documentation, reviewing patches for
relevant work, and improving the consumability of infra CI components. I
will be happy to chair these meetings initially. I am sure I can get help
with these meetings for the other time zones also.

With this approach we can also continue to use the new meeting times voted
on by the group, and each is focused on targeting a specific group with
very different needs.

Thanks Stefano!

Kurt Taylor (krtaylor)


Re: [openstack-dev] [Nova] question about Get Guest Info row in HypervisorSupportMatrix

2014-12-16 Thread Dmitry Guryanov
On Tuesday 09 December 2014 15:39:35 Daniel P. Berrange wrote:
 On Tue, Dec 09, 2014 at 06:33:47PM +0300, Dmitry Guryanov wrote:
  Hello!
  
  There is a feature in HypervisorSupportMatrix
  (https://wiki.openstack.org/wiki/HypervisorSupportMatrix) called Get
  Guest Info. Does anybody know what it means? I haven't found anything like
  this in the nova API, in horizon, or in the nova command line.
 
 I've pretty much no idea what the intention was for that field. I've
 been working on formally documenting all those things, but draw a blank
 for that
 
 FYI:
 
   https://review.openstack.org/#/c/136380/1/doc/hypervisor-support.ini
 
 

Thanks, looks much better than the previous one.


I think "Auto configure disk" refers to resizing the filesystem on the root
disk according to the value given in the flavor.


 Regards,
 Daniel

-- 
Dmitry Guryanov



Re: [openstack-dev] [Horizon] [UX] Curvature interactive virtual network design

2014-12-16 Thread Liz Blanchard

On Nov 7, 2014, at 11:16 AM, John Davidge (jodavidg) jodav...@cisco.com wrote:

 As discussed in the Horizon contributor meet up, here at Cisco we’re 
 interested in upstreaming our work on the Curvature dashboard into Horizon. 
 We think that it can solve a lot of issues around guidance for new users and 
 generally improving the experience of interacting with Neutron. Possibly an 
 alternative persona for novice users?
 
 For reference, see:
 http://youtu.be/oFTmHHCn2-g – Video Demo
 https://www.openstack.org/summit/portland-2013/session-videos/presentation/interactive-visual-orchestration-with-curvature-and-donabe
  – Portland presentation
 https://github.com/CiscoSystems/curvature – original (Rails based) code
 We’d like to gauge interest from the community on whether this is something 
 people want.
 
 Thanks,
 
 John, Brad  Sam

Hey guys,

Sorry for my delayed response here…just coming back from maternity leave.

I’ve been waiting and hoping since the Portland summit that the curvature work 
you have done would be brought in to Horizon. A definite +1 from me from a user 
experience point of view. It would be great to have a solid plan on how this 
could work with or be additional to the Orchestration and Network Topology 
pieces that currently exist in Horizon.

Let me know if I can help out with any design review, wireframe, or usability 
testing aspects.

Best,
Liz

 


[openstack-dev] [Congress] Re: Placement and Scheduling via Policy

2014-12-16 Thread Tim Hinrichs
[Adding openstack-dev to this thread.  For those of you just joining… We 
started kicking around ideas for how we might integrate a special-purpose VM 
placement engine into Congress.]

Kudva: responses inline.


On Dec 16, 2014, at 6:25 AM, Prabhakar Kudva 
ku...@us.ibm.com wrote:

Hi,

I am very interested in this.

So, it looks like there are two parts to this:
1. Policy analysis when there are a significant mix of logical and builtin 
predicates (i.e.,
runtime should identify a solution space when there are arithmetic operators). 
This will
require linear programming/ILP type solvers.  There might be a need to have a 
function
in runtime.py that specifically deals with this (Tim?)


I think it’s right that we expect there to be a mix of builtins and standard 
predicates.  But what we’re considering here is having the linear solver be 
treated as if it were a domain-specific policy engine.  So that solver wouldn’t 
be embedded into the runtime.py necessarily.  Rather, we’d delegate part of the 
policy to that domain-specific policy engine.

2. Enforcement. That is, with a large number of constraints in place for
placement and scheduling, how does the policy engine communicate and enforce
the placement constraints to the nova scheduler?


I would imagine that we could delegate either enforcement or monitoring or 
both.  Eventually we want enforcement here, but monitoring could be useful too.

And yes you’re asking the right questions.  I was trying to break the problem 
down into pieces in my bullet (1) below.  But I think there is significant 
overlap in the questions we need to answer whether we’re delegating monitoring 
or enforcement.

Both of these require some form of mathematical analysis.

Would be happy and interested to discuss more on these lines.


Maybe take a look at how I tried to breakdown the problem into separate 
questions in bullet (1) below and see if that makes sense.

Tim

Prabhakar






From: Tim Hinrichs thinri...@vmware.com
To: ruby.krishnasw...@orange.com
Cc: Ramki Krishnan (r...@brocade.com), Gokul B
Kandiraju/Watson/IBM@IBMUS, Prabhakar Kudva/Watson/IBM@IBMUS
Date:12/15/2014 12:09 PM
Subject:Re: Placement and Scheduling via Policy




[Adding Prabhakar and Gokul, in case they are interested.]

1) Ruby, thinking about the solver as taking 1 matrix of [vm, server] and 
returning another matrix helps me understand what we’re talking about—thanks.  
I think you’re right that once we move from placement to optimization problems 
in general we’ll need to figure out how to deal with actions.  But if it’s a 
placement-specific policy engine, then we can build VM-migration into it.

It seems to me that the only part left is figuring out how to take an arbitrary 
policy, carve off the placement-relevant portion, and create the inputs the 
solver needs to generate that new matrix.  Some thoughts...

- My gut tells me that the placement-solver should basically say “I enforce 
policies having to do with the schema nova:location.”  This way the Congress 
policy engine knows to give it policies relevant to nova:location (placement).  
If we do that, I believe we can carve off the right sub theory.

- That leaves taking a Datalog policy where we know nova:location is important 
and converting it to the input language required by a linear solver.  We need 
to remember that the Datalog rules may reference tables from other services 
like Neutron, Ceilometer, etc.  I think the key will be figuring out what class 
of policies we can actually do that for reliably.  Cool—a concrete question.
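
To make the target concrete, here is a toy version of the kind of input
such a solver wants, written with PuLP (binary variable x[vm][host] = 1
iff the VM lands on that host; the data and objective are invented for
illustration):

    from pulp import LpMinimize, LpProblem, LpVariable, lpSum

    vms = ['vm1', 'vm2']
    hosts = ['h1', 'h2']
    ram = {'vm1': 2, 'vm2': 4}       # GB required by each VM
    cap = {'h1': 4, 'h2': 8}         # GB available on each host
    cost = {'h1': 1.0, 'h2': 2.0}    # arbitrary per-host cost

    prob = LpProblem('placement', LpMinimize)
    x = LpVariable.dicts('x', (vms, hosts), cat='Binary')

    for v in vms:    # every VM is placed on exactly one host
        prob += lpSum(x[v][h] for h in hosts) == 1
    for h in hosts:  # never exceed a host's RAM capacity
        prob += lpSum(ram[v] * x[v][h] for v in vms) <= cap[h]

    prob += lpSum(cost[h] * x[v][h] for v in vms for h in hosts)
    prob.solve()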


2) We can definitely wait until January on this.  I’ll be out of touch starting 
Friday too; it seems we all get back early January, which seems like the right 
time to resume our discussions.  We have some concrete questions to answer, 
which was what I was hoping to accomplish before we all went on holiday.

Happy Holidays!
Tim


On Dec 15, 2014, at 5:53 AM, ruby.krishnasw...@orange.com wrote:

Hi Tim

“Questions:
1) Is there any more data the solver needs?  Seems like it needs something 
about CPU-load for each VM.
2) Which solver should we be using?  What does the linear program that we feed 
it look like?  How do we translate the results of the linear solver into a 
collection of ‘migrate_VM’ API calls?”



  Question (2) seems to me the first to address, in particular:
 “how to prepare the input (variables, constraints, goal) and invoke the 
solver”
=  We need rules that represent constraints to give the solver (e.g. a 
technical constraint that a VM should not be assigned to more than one server 
or that more than maximum 

[openstack-dev] [qa] Very first VM launched won't response to ARP request

2014-12-16 Thread Danny Choi (dannchoi)
Hi,

I have seen this issue consistently.

I freshly installed Ubuntu 14.04 onto a Cisco UCS and used devstack to deploy
OpenStack (stable Juno) to make it a compute node.

The very first VM launched on this node won't respond to ARP requests (I
ping from the router namespace).

The Linux bridge tap interface shows it’s sending packets to the VM, and 
tcpdump confirms it.


qbr8a29c673-4f Link encap:Ethernet  HWaddr b2:76:d7:47:c2:fe

  inet6 addr: fe80::98ac:73ff:fea8:8be1/64 Scope:Link

  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

  RX packets:1137 errors:0 dropped:0 overruns:0 frame:0

  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0

  collisions:0 txqueuelen:0

  RX bytes:49528 (49.5 KB)  TX bytes:648 (648.0 B)


qvb8a29c673-4f Link encap:Ethernet  HWaddr b2:76:d7:47:c2:fe

  inet6 addr: fe80::b076:d7ff:fe47:c2fe/64 Scope:Link

  UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1

  RX packets:1132 errors:0 dropped:0 overruns:0 frame:0

  TX packets:22 errors:0 dropped:0 overruns:0 carrier:0

  collisions:0 txqueuelen:1000

  RX bytes:63592 (63.5 KB)  TX bytes:3228 (3.2 KB)


qvo8a29c673-4f Link encap:Ethernet  HWaddr 9a:2b:5e:e4:22:f9

  inet6 addr: fe80::982b:5eff:fee4:22f9/64 Scope:Link

  UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1

  RX packets:22 errors:0 dropped:0 overruns:0 frame:0

  TX packets:1132 errors:0 dropped:0 overruns:0 carrier:0

  collisions:0 txqueuelen:1000

  RX bytes:3228 (3.2 KB)  TX bytes:63592 (63.5 KB)


tap8a29c673-4f Link encap:Ethernet  HWaddr fe:16:3e:12:49:10

  inet6 addr: fe80::fc16:3eff:fe12:4910/64 Scope:Link

  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

  RX packets:7 errors:0 dropped:0 overruns:0 frame:0

  TX packets:1143 errors:0 dropped:0 overruns:0 carrier:0

  collisions:0 txqueuelen:500

  RX bytes:2022 (2.0 KB)  TX bytes:64490 (64.4 KB)



localadmin@qa6:~/devstack$ ifconfig tap8a29c673-4f

tap8a29c673-4f Link encap:Ethernet  HWaddr fe:16:3e:12:49:10

  inet6 addr: fe80::fc16:3eff:fe12:4910/64 Scope:Link

  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

  RX packets:7 errors:0 dropped:0 overruns:0 frame:0

  TX packets:1236 errors:0 dropped:0 overruns:0 carrier:0

  collisions:0 txqueuelen:500

  RX bytes:2022 (2.0 KB)  TX bytes:69698 (69.6 KB)


localadmin@qa6:~/devstack$ ifconfig tap8a29c673-4f

tap8a29c673-4f Link encap:Ethernet  HWaddr fe:16:3e:12:49:10

  inet6 addr: fe80::fc16:3eff:fe12:4910/64 Scope:Link

  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

  RX packets:7 errors:0 dropped:0 overruns:0 frame:0

  TX packets:1239 errors:0 dropped:0 overruns:0 carrier:0

  collisions:0 txqueuelen:500

  RX bytes:2022 (2.0 KB)  TX bytes:69866 (69.8 KB)


localadmin@qa6:~/devstack$ sudo tcpdump -i tap8a29c673-4f

tcpdump: WARNING: tap8a29c673-4f: no IPv4 address assigned

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on tap8a29c673-4f, link-type EN10MB (Ethernet), capture size 65535 
bytes

13:07:31.678751 ARP, Request who-has 10.0.0.14 tell 10.0.0.1, length 42

13:07:32.678813 ARP, Request who-has 10.0.0.14 tell 10.0.0.1, length 42

13:07:32.678838 ARP, Request who-has 10.0.0.14 tell 10.0.0.1, length 42

13:07:33.678778 ARP, Request who-has 10.0.0.14 tell 10.0.0.1, length 42

13:07:34.678840 ARP, Request who-has 10.0.0.14 tell 10.0.0.1, length 42



Usually I would reboot the VM and the ping works fine afterwards.


localadmin@qa6:~/devstack$ sudo tcpdump -i tap8a29c673-4f

tcpdump: WARNING: tap8a29c673-4f: no IPv4 address assigned

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on tap8a29c673-4f, link-type EN10MB (Ethernet), capture size 65535 
bytes

13:13:18.154711 IP 10.0.0.1  10.0.0.14: ICMP echo request, id 25711, seq 32, 
length 64

13:13:18.154996 IP 10.0.0.14  10.0.0.1: ICMP echo reply, id 25711, seq 32, 
length 64

13:13:19.156244 IP 10.0.0.1  10.0.0.14: ICMP echo request, id 25711, seq 33, 
length 64

13:13:19.156502 IP 10.0.0.14  10.0.0.1: ICMP echo reply, id 25711, seq 33, 
length 64


I'm looking for suggestions on how to debug this issue.


Thanks,

Danny


Re: [openstack-dev] [api] Counting resources

2014-12-16 Thread Steven Kaufer

This is a follow up to this thread from a few weeks ago:
https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg40287.html

I've updated the nova spec in this area to include the total server count
in the server_links based on the existence of an include_count query
parameter (eg: GET /servers?include_count=1).  The spec no longer
references a GET /servers/count API.

Nova spec:  https://review.openstack.org/#/c/134279/

Thanks,
Steven Kaufer


[openstack-dev] [Fuel] Image based provisioning

2014-12-16 Thread Dmitry Pyzhov
Guys,

we are about to enable image based provisioning in our master by default.
I'm trying to figure out the requirements for this change. As far as I know,
it was not tested in the scale lab. Is that true? Have we ever run a full
system test cycle with this option?

Do we have any other prerequisites?


Re: [openstack-dev] Questions regarding Functional Testing (Paris Summit)

2014-12-16 Thread Sean Dague
On 12/12/2014 01:04 PM, Sean Toner wrote:
 Hi everyone,
 
 I have been reading the etherpad from the Paris summit wrt to moving the
 functional tests into their respective projects 
 (https://etherpad.openstack.org/p/kilo-crossproject-move-func-tests-to-projects).
   I am mostly interested this from the nova project 
 perspective. However, I still have a lot of questions.
 
 For example, is it permissible (or a good idea) to use the python-
 *clients as a library for the tasks?  I know these were not allowed in 
 Tempest, but I don't see why they couldn't be used here (especially 
 since, AFAIK, there is no testing done on the SDK clients themselves).

Sure, though realistically I'd actually expect the clients to have their
own tests.

 Another question is also about a difference between Tempest and these 
 new functional tests.  In nova's case, it would be very useful to 
 actually utilize the libvirt library in order to touch the hypervisor 
 itself.  In Tempest, it's not allowed to do that.  Would it make sense 
 to be able to make calls to libvirt within a nova functional test?

Examples would be handy here.

 Basically, since Tempest was a public only library, there needs to be 
 a different set of rules as to what can and can't be done.  Even the 
 definition of what exactly a functional test is should be more clearly 
 stated.  
 
 For example, I have been working on a project for some nova tests that 
 also use the glance and keystone clients (since I am using the python 
 SDK clients).  I saw this quote from the etherpad:
 
 Many api tests in Tempest require more than one service (eg, nova 
 api tests require glance)
 
 Is this an API test or an integration test or a functional test? 
 sounds to me like cross project integration tests +1+1
 
 I would disagree that a functional test should belong to only one 
 project.  IMHO, a functional test is essentially a black box test that 
 might span one or more projects, though the projects should be related.  
 For example, I have worked on one of the new features where the config 
 drive image property is set in the glance image itself, rather than 
 specified during the nova boot call.  
 
 I believe that's how a functional test can be defined.  A black box test 
 which may require looking under the hood that Tempest does not allow.

A black box test by definition doesn't look under the hood, and I think
that's where there has been a lot of disconnect.

 Has there been any other work or thoughts on how functional testing 
 should be done?

I've been working through early stages of this on the Nova side.

There are very pragmatic reasons for the tests to be owned by a single
project. When they are not, we get wedges, where due to factors beyond
our control you have 2 source trees in a bind over tests, and the
project doesn't have the ability to make an executive decision that
those tests aren't actually correct and should be changed.

I think the functional correctness of a project needs to be owned by
that project. As a community we've largely pushed that at the QA team
up until this point, and that's both not scalable and not very
debuggable (note how many folks just randomly type "recheck" because
they have no idea how to debug a failure).

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [Congress] Re: Placement and Scheduling via Policy

2014-12-16 Thread Yathiraj Udupi (yudupi)
Tim,

I read this conversation thread and it got me interested, as it relates to
the discussion we had at the Policy Summit (mid-cycle meet-up) held in Palo
Alto a few months ago.

This relates to our project, Nova Solver Scheduler, which I had talked about
at the Policy Summit. Please see:
https://github.com/stackforge/nova-solver-scheduler

We already have a working constraints-based solver framework/engine that 
handles Nova placement, and we are currently active in Stackforge, and aim to 
get this integrated into the Gantt project 
(https://blueprints.launchpad.net/nova/+spec/solver-scheduler), based on our 
discussions in the Nova scheduler sub group.

When I saw discussions around using linear programming (LP) solvers, PuLP,
etc., I thought of pitching in here to say that we have already demonstrated
integrating an LP-based solver for Nova compute placements. Please see:
https://www.youtube.com/watch?v=7QzDbhkk-BI#t=942 for a demo of this (from our 
talk at the Atlanta Openstack summit).
 Based on this email thread, I believe Ramki, one of our early collaborators,
is driving a similar solution in the NFV ETSI research group. Glad to know our
Solver Scheduler project is getting interest now.

As part of Congress integration, at the policy summit I had suggested that we
could try to translate a Congress policy into our Solver Scheduler's
constraints, and use this to enforce Nova placement policies.
We can already demonstrate policy-driven Nova placements using our pluggable
constraints model, so it should be easy to integrate with Congress.

The Nova Solver Scheduler team would be glad to help with any efforts toward
trying out a Congress integration for Nova placements.

Thanks,
Yathi.




Re: [openstack-dev] [Congress] Re: Placement and Scheduling via Policy

2014-12-16 Thread Yathiraj Udupi (yudupi)
To add to what I mentioned in my previous message: we on the Solver Scheduler
team are a small team here at Cisco, trying to drive this project and slowly
adding more complex use cases for scheduling and policy-driven placements. We
would really love to have some real contributions from everyone in the
community and build this the right way.
If it is of interest, some scheduler use cases based on one of our community
meetings in IRC are here:
https://etherpad.openstack.org/p/SchedulerUseCases
This could apply to Congress driving some of this too.

I am leading the effort for the Solver Scheduler project
(https://github.com/stackforge/nova-solver-scheduler), and if any of you are
willing to contribute code, APIs, benchmarks, and also work on integration, my
team and I can help guide you through this. We would be following the same
processes under Stackforge at the moment.

Thanks,
Yathi.






[openstack-dev] [Fuel] Adding code to add node to fuel UI

2014-12-16 Thread Satyasanjibani Rautaray
Hi,

I am in the process of creating an additional node by editing the code, where
the new node will serve a different purpose than installing OpenStack
components. Just for testing, currently the new node will install vim for me.
Please help me with what else I need to look into to create the complete setup
and deploy it with Fuel. I have edited openstack.yaml at
/root/fuel-web/nailgun/nailgun/fixtures: http://pastebin.com/P1MmDBzP
-- 
Thanks
Satya
Mob:9844101001

No one is the best by birth, Its his brain/ knowledge which make him the
best.


Re: [openstack-dev] [TripleO] Bug Squashing Day

2014-12-16 Thread Gregory Haynes
  On Wed, Dec 10, 2014 at 10:36 PM, Gregory Haynes g...@greghaynes.net
  wrote:
 
  A couple weeks ago we discussed having a bug squash day. AFAICT we all
  forgot, and we still have a huge bug backlog. I'd like to propose we
  make next Wed. (12/17, in whatever 24-hour window is Wed. in your time zone)
  a bug squashing day. Hopefully we can add this as an item to our weekly
  meeting on Tues. to help remind everyone the day before.

Friendly reminder that tomorrow (or today, for some time zones) is our
bug squash day! I hope to see you all in IRC squashing some of our
(least) favorite bugs.

Random Factoid: We currently have 299 open bugs.

Cheers,
Greg



Re: [openstack-dev] [Fuel] Adding code to add node to fuel UI

2014-12-16 Thread Andrey Danin
Hello.

What version of Fuel do you use? Did you reupload openstack.yaml into
Nailgun? Do you want just to deploy an operating system and configure a
network on a new node?

I would really appreciate it if you used periods at the end of your sentences.

On Tuesday, December 16, 2014, Satyasanjibani Rautaray engg.s...@gmail.com
wrote:

 Hi,

  I am in the process of creating an additional node by editing the code,
  where the new node will serve a different purpose than installing OpenStack
  components. Just for testing, currently the new node will install vim for
  me. Please help me with what else I need to look into to create the complete
  setup and deploy it with Fuel. I have edited openstack.yaml at
  /root/fuel-web/nailgun/nailgun/fixtures: http://pastebin.com/P1MmDBzP
 --
 Thanks
 Satya
 Mob:9844101001

 No one is the best by birth, Its his brain/ knowledge which make him the
 best.



-- 
Andrey Danin
ada...@mirantis.com
skype: gcon.monolake


Re: [openstack-dev] [Fuel] Image based provisioning

2014-12-16 Thread Andrey Danin
On Tuesday, December 16, 2014, Dmitry Pyzhov dpyz...@mirantis.com wrote:

 Guys,

 we are about to enable image based provisioning in our master by default.
 I'm trying to figure out the requirements for this change. As far as I know, it
 was not tested in the scale lab. Is that true? Have we ever run a full system
 test cycle with this option?

 Do we have any other pre-requirements?



-- 
Andrey Danin
ada...@mirantis.com
skype: gcon.monolake
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Option to skip deleting images in use?

2014-12-16 Thread Chris St. Pierre
Currently, with delay_delete enabled, the Glance scrubber happily deletes
whatever images you ask it to. That includes images that are currently in
use by Nova guests, which can really hose things. It'd be nice to have an
option to tell the scrubber to skip deletion of images that are currently
in use, which is fairly trivial to check for and provides a nice measure of
protection.
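
For what it's worth, a rough sketch of that check, assuming python-novaclient
and admin credentials (names and options here are illustrative, not an actual
scrubber patch):

    from novaclient import client as nova_client

    def images_in_use(username, password, tenant, auth_url):
        """Return the set of image IDs backing any instance, all tenants."""
        nova = nova_client.Client('2', username, password, tenant,
                                  auth_url=auth_url)
        in_use = set()
        for server in nova.servers.list(search_opts={'all_tenants': 1}):
            # server.image is empty for boot-from-volume instances.
            image = getattr(server, 'image', None) or {}
            if image.get('id'):
                in_use.add(image['id'])
        return in_use

    # The scrubber would then skip any image whose ID is in images_in_use().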

Without delay_delete enabled, checking for images in use likely takes too
much time, so this would be limited to just images that are scrubbed with
delay_delete.

I wanted to bring this up here before I go to the trouble of writing a spec
for it, particularly since it doesn't appear that glance currently talks to
Nova as a client at all. Is this something that folks would be interested
in having? Thanks!

-- 
Chris St. Pierre
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Image based provisioning

2014-12-16 Thread Andrey Danin
Adding Mellanox team explicitly.

Gil, Nurit, Aviram, can you confirm that you tested that feature? It can be
enabled on every fresh ISO. You just need to enable the Experimental mode
(please, see the documentation for instructions).

On Tuesday, December 16, 2014, Dmitry Pyzhov dpyz...@mirantis.com wrote:

 Guys,

 we are about to enable image based provisioning in our master by default.
 I'm trying to figure out the requirements for this change. As far as I know, it
 was not tested in the scale lab. Is that true? Have we ever run a full system
 test cycle with this option?

 Do we have any other pre-requirements?



-- 
Andrey Danin
ada...@mirantis.com
skype: gcon.monolake
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2014-12-16 Thread Matt Riedemann



On 12/12/2014 7:54 PM, melanie witt wrote:

Hi everybody,

At some point, our db archiving functionality got broken because there was a 
change to stop ever deleting instance system metadata [1]. For those 
unfamiliar, the 'nova-manage db archive_deleted_rows' is the thing that moves 
all soft-deleted (deleted=nonzero) rows to the shadow tables. This is a 
periodic cleaning that operators can do to maintain performance (as things can 
get sluggish when deleted=nonzero rows accumulate).
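
Conceptually, the archive step is "copy soft-deleted rows into the shadow table,
then hard-delete them from the live table". A very rough SQLAlchemy Core sketch
of that idea, with table names following the shadow_ convention described above
(everything else is illustrative, not nova's actual code):

    from sqlalchemy import MetaData, select

    def archive_table(engine, name, max_rows=1000):
        meta = MetaData()
        meta.reflect(bind=engine, only=[name, 'shadow_' + name])
        table = meta.tables[name]
        shadow = meta.tables['shadow_' + name]
        with engine.begin() as conn:
            # Grab a batch of soft-deleted rows (deleted=nonzero)...
            rows = conn.execute(select([table])
                                .where(table.c.deleted != 0)
                                .limit(max_rows)).fetchall()
            if rows:
                # ...copy them to the shadow table, then purge them.
                conn.execute(shadow.insert(), [dict(r) for r in rows])
                ids = [r['id'] for r in rows]
                conn.execute(table.delete().where(table.c.id.in_(ids)))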

The change was made because instance_type data still needed to be read even after 
instances had been deleted, because we allow admin to view deleted instances. I saw a bug 
[2] and two patches [3][4] which aimed to fix this by changing back to soft-deleting 
instance sysmeta when instances are deleted, and instead allow 
read_deleted=yes for the things that need to read instance_type for deleted 
instances present in the db.

My question is, is this approach okay? If so, I'd like to see these patches 
revived so we can have our db archiving working again. :) I think there's likely 
something I'm missing about the approach, so I'm hoping people who know more 
about instance sysmeta than I do, can chime in on how/if we can fix this for db 
archiving. Thanks.

[1] https://bugs.launchpad.net/nova/+bug/1185190
[2] https://bugs.launchpad.net/nova/+bug/1226049
[3] https://review.openstack.org/#/c/110875/
[4] https://review.openstack.org/#/c/109201/

melanie (melwitt)






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I changed this from In Progress to Confirmed, removed Alex as the owner 
(since I didn't see any patches from him) and marked it High:


https://bugs.launchpad.net/nova/+bug/1226049

It looks like that could be a duplicate of bug 
https://bugs.launchpad.net/nova/+bug/1183523 which sounds like a lot of 
the same problems. dripton had looked at it at one point and said it was 
Won't Fix at that time, but I don't think that's the case.  Note comment 
7 in there:


https://bugs.launchpad.net/nova/+bug/1183523/comments/7

comstud thinks we can fix this but we need to do instance_type data 
differently. Maybe embedded JSON blobs so we have all the information we 
need without a reference to the instances row. (My opinion: yuck.) So 
this bug is staying open for now, but it requires some significant 
redesign to fix.


I'm not sure if that's related to comstud's instance_type design summit 
topic in Atlanta or not, it sounds the same:


http://junodesignsummit.sched.org/event/e3f1d51c53fc484d070f02ea36d08601#.VJCS6yvF-KU

I can't find the etherpad for that. I'm wondering if Dan Smith's 
blueprint for flavor-from-sysmeta-to-blob handles that? [1] I've never 
been sure how those two items are related.


Anyway, I think the fix is up for the taking, assuming someone has a good 
fix.  As noted in one of the bugs, the foreign key constraints in nova 
don't have cascading deletes, so if it's a foreign key issue, we should 
find the one that's not being cleaned up before the delete.  It looked 
like dripton thought it was fixed_ips at one point:


https://review.openstack.org/#/c/32742/

[1] 
http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/flavor-from-sysmeta-to-blob.html


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-16 Thread Jay Pipes
Just set the images to is_public=False as an admin and they'll disappear 
from everyone except the admin.


-jay

On 12/16/2014 03:09 PM, Chris St. Pierre wrote:

Currently, with delay_delete enabled, the Glance scrubber happily
deletes whatever images you ask it to. That includes images that are
currently in use by Nova guests, which can really hose things. It'd be
nice to have an option to tell the scrubber to skip deletion of images
that are currently in use, which is fairly trivial to check for and
provides a nice measure of protection.

Without delay_delete enabled, checking for images in use likely takes
too much time, so this would be limited to just images that are scrubbed
with delay_delete.

I wanted to bring this up here before I go to the trouble of writing a
spec for it, particularly since it doesn't appear that glance currently
talks to Nova as a client at all. Is this something that folks would be
interested in having? Thanks!

--
Chris St. Pierre


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Adding code to add node to fuel UI

2014-12-16 Thread Satyasanjibani Rautaray
I just need to deploy the node and install my required packages.
On 17-Dec-2014 1:31 am, Andrey Danin ada...@mirantis.com wrote:

 Hello.

 What version of Fuel do you use? Did you reupload openstack.yaml into
 Nailgun? Do you want just to deploy an operating system and configure a
 network on a new node?

 I would really appreciate if you use a period at the end of sentences.

 On Tuesday, December 16, 2014, Satyasanjibani Rautaray 
 engg.s...@gmail.com wrote:

 Hi,

 *i am in a process of creating an additional node by editing the code
 where the new node will be solving a different propose than installing
 openstack components just for testing currently the new node will install
 vim for me please help me what else i need to look into to create the
 complete setup and deploy with fuel i have edited openstack.yaml at
 /root/fuel-web/nailgun/nailgun/fixtures http://pastebin.com/P1MmDBzP
 http://pastebin.com/P1MmDBzP*
 --
 Thanks
 Satya
 Mob:9844101001

 No one is the best by birth, Its his brain/ knowledge which make him the
 best.



 --
 Andrey Danin
 ada...@mirantis.com
 skype: gcon.monolake


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Adding code to add node to fuel UI

2014-12-16 Thread Satyasanjibani Rautaray
I am using community version 6.
Basically I am trying to create an ISO file after a code change, so I want to
understand the complete way to add a new node and class to Fuel.
On 17-Dec-2014 1:31 am, Andrey Danin ada...@mirantis.com wrote:

 Hello.

 What version of Fuel do you use? Did you reupload openstack.yaml into
 Nailgun? Do you want just to deploy an operating system and configure a
 network on a new node?

 I would really appreciate if you use a period at the end of sentences.

 On Tuesday, December 16, 2014, Satyasanjibani Rautaray 
 engg.s...@gmail.com wrote:

 Hi,

 *i am in a process of creating an additional node by editing the code
 where the new node will be solving a different propose than installing
 openstack components just for testing currently the new node will install
 vim for me please help me what else i need to look into to create the
 complete setup and deploy with fuel i have edited openstack.yaml at
 /root/fuel-web/nailgun/nailgun/fixtures http://pastebin.com/P1MmDBzP
 http://pastebin.com/P1MmDBzP*
 --
 Thanks
 Satya
 Mob:9844101001

 No one is the best by birth, Its his brain/ knowledge which make him the
 best.



 --
 Andrey Danin
 ada...@mirantis.com
 skype: gcon.monolake


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2014-12-16 Thread Andrew Laski


On 12/12/2014 08:54 PM, melanie witt wrote:

Hi everybody,

At some point, our db archiving functionality got broken because there was a 
change to stop ever deleting instance system metadata [1]. For those 
unfamiliar, the 'nova-manage db archive_deleted_rows' is the thing that moves 
all soft-deleted (deleted=nonzero) rows to the shadow tables. This is a 
periodic cleaning that operators can do to maintain performance (as things can 
get sluggish when deleted=nonzero rows accumulate).

The change was made because instance_type data still needed to be read even after 
instances had been deleted, because we allow admin to view deleted instances. I saw a bug 
[2] and two patches [3][4] which aimed to fix this by changing back to soft-deleting 
instance sysmeta when instances are deleted, and instead allow 
read_deleted=yes for the things that need to read instance_type for deleted 
instances present in the db.

My question is, is this approach okay? If so, I'd like to see these patches 
revived so we can have our db archiving working again. :) I think there's likely 
something I'm missing about the approach, so I'm hoping people who know more 
about instance sysmeta than I do, can chime in on how/if we can fix this for db 
archiving. Thanks.


I looked briefly into tackling this as well a while back.  The tricky 
piece that I hit is what system_metadata should be available when 
read_deleted='yes'.  Is it okay for it to be all deleted system_metadata 
or should it only be the system_metadata that was deleted at the same 
time as the instance?  I didn't get to dig in enough to answer that.


Also there are periodic tasks that query for deleted instances so those 
might need to pull system_metadata in addition to the API.





[1] https://bugs.launchpad.net/nova/+bug/1185190
[2] https://bugs.launchpad.net/nova/+bug/1226049
[3] https://review.openstack.org/#/c/110875/
[4] https://review.openstack.org/#/c/109201/

melanie (melwitt)






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][infra] Ceph CI status update

2014-12-16 Thread Matt Riedemann



On 12/11/2014 10:36 AM, Jon Bernard wrote:

Heya, quick Ceph CI status update.  Once the test_volume_boot_pattern
was marked as skipped, only the revert_resize test was failing.  I have
submitted a patch to nova for this [1], and that yields an all green
ceph ci run [2].  So at the moment, and with my revert patch, we're in
good shape.

I will fix up that patch today so that it can be properly reviewed and
hopefully merged.  From there I'll submit a patch to infra to move the
job to the check queue as non-voting, and we can go from there.

[1] https://review.openstack.org/#/c/139693/
[2] 
http://logs.openstack.org/93/139693/1/experimental/check-tempest-dsvm-full-ceph/12397fd/console.html

Cheers,



Jon,

Thanks, this is actually something I'm supposed to be tracking given the 
Kilo priorities for the project; it's nice to know that someone is 
already fixing this stuff. :)


I've reviewed https://review.openstack.org/#/c/139693/ so it's close, 
just needs a small fix.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Counting resources

2014-12-16 Thread Jay Pipes

Thanks, Steven, much appreciated! :)

On 12/16/2014 01:26 PM, Steven Kaufer wrote:

This is a follow up to this thread from a few weeks ago:
https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg40287.html

I've updated the nova spec in this area to include the total server
count in the server_links based on the existence of an include_count
query parameter (eg: GET /servers?include_count=1).  The spec no longer
references a GET /servers/count API.
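
For illustration only, the response shape being proposed would look roughly like
the following (expressed as a Python literal; the final field names and
placement are whatever the spec settles on):

    response = {
        "servers": ["..."],
        "servers_links": [
            {"rel": "next", "href": "http://.../v2/servers?marker=..."},
            # hypothetical representation of the piggybacked total
            {"rel": "count", "total": 1234},
        ],
    }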

Nova spec: https://review.openstack.org/#/c/134279/

Thanks,
Steven Kaufer


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-16 Thread Chris St. Pierre
The goal here is protection against deletion of in-use images, not a
workaround that can be executed by an admin. For instance, someone without
admin still can't do that, and someone with a fat finger can still delete
images in use.

"Don't lose your data" is a fine workaround for taking backups, but most of
us take backups anyway. Same deal.

On Tue, Dec 16, 2014 at 2:30 PM, Jay Pipes jaypi...@gmail.com wrote:

 Just set the images to is_public=False as an admin and they'll disappear
 from everyone except the admin.

 -jay


 On 12/16/2014 03:09 PM, Chris St. Pierre wrote:

 Currently, with delay_delete enabled, the Glance scrubber happily
 deletes whatever images you ask it to. That includes images that are
 currently in use by Nova guests, which can really hose things. It'd be
 nice to have an option to tell the scrubber to skip deletion of images
 that are currently in use, which is fairly trivial to check for and
 provides a nice measure of protection.

 Without delay_delete enabled, checking for images in use likely takes
 too much time, so this would be limited to just images that are scrubbed
 with delay_delete.

 I wanted to bring this up here before I go to the trouble of writing a
 spec for it, particularly since it doesn't appear that glance currently
 talks to Nova as a client at all. Is this something that folks would be
 interested in having? Thanks!

 --
 Chris St. Pierre


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Chris St. Pierre
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-16 Thread Jay Pipes

On 12/16/2014 04:23 PM, Chris St. Pierre wrote:

The goal here is protection against deletion of in-use images, not a
workaround that can be executed by an admin. For instance, someone
without admin still can't do that, and someone with a fat finger can
still delete images in use.


Then set the protected property on the image, which prevents it from 
being deleted.


From the glance CLI image-update help output:

--is-protected [True|False]
Prevent image from being deleted.


Don't lose your data is a fine workaround for taking backups, but most
of us take backups anyway. Same deal.

On Tue, Dec 16, 2014 at 2:30 PM, Jay Pipes jaypi...@gmail.com wrote:

Just set the images to is_public=False as an admin and they'll
disappear from everyone except the admin.

-jay


On 12/16/2014 03:09 PM, Chris St. Pierre wrote:

Currently, with delay_delete enabled, the Glance scrubber happily
deletes whatever images you ask it to. That includes images that are
currently in use by Nova guests, which can really hose things.
It'd be
nice to have an option to tell the scrubber to skip deletion of
images
that are currently in use, which is fairly trivial to check for and
provides a nice measure of protection.

Without delay_delete enabled, checking for images in use likely
takes
too much time, so this would be limited to just images that are
scrubbed
with delay_delete.

I wanted to bring this up here before I go to the trouble of
writing a
spec for it, particularly since it doesn't appear that glance
currently
talks to Nova as a client at all. Is this something that folks
would be
interested in having? Thanks!

--
Chris St. Pierre


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Chris St. Pierre


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-16 Thread Nikhil Komawar
+1

Thanks,
-Nikhil


From: Jay Pipes [jaypi...@gmail.com]
Sent: Tuesday, December 16, 2014 4:33 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [glance] Option to skip deleting images in use?

On 12/16/2014 04:23 PM, Chris St. Pierre wrote:
 The goal here is protection against deletion of in-use images, not a
 workaround that can be executed by an admin. For instance, someone
 without admin still can't do that, and someone with a fat finger can
 still delete images in use.

Then set the protected property on the image, which prevents it from
being deleted.

 From the glance CLI image-update help output:

--is-protected [True|False]
 Prevent image from being deleted.

 Don't lose your data is a fine workaround for taking backups, but most
 of us take backups anyway. Same deal.

 On Tue, Dec 16, 2014 at 2:30 PM, Jay Pipes jaypi...@gmail.com wrote:

 Just set the images to is_public=False as an admin and they'll
 disappear from everyone except the admin.

 -jay


 On 12/16/2014 03:09 PM, Chris St. Pierre wrote:

 Currently, with delay_delete enabled, the Glance scrubber happily
 deletes whatever images you ask it to. That includes images that are
 currently in use by Nova guests, which can really hose things.
 It'd be
 nice to have an option to tell the scrubber to skip deletion of
 images
 that are currently in use, which is fairly trivial to check for and
 provides a nice measure of protection.

 Without delay_delete enabled, checking for images in use likely
 takes
 too much time, so this would be limited to just images that are
 scrubbed
 with delay_delete.

 I wanted to bring this up here before I go to the trouble of
 writing a
 spec for it, particularly since it doesn't appear that glance
 currently
 talks to Nova as a client at all. Is this something that folks
 would be
 interested in having? Thanks!

 --
 Chris St. Pierre


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Chris St. Pierre


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-16 Thread Fei Long Wang

Hi Chris,

Are you looking for the 'protected' attribute? You can mark an image 
with 'protected'=True; then the image can't be deleted accidentally.


On 17/12/14 10:23, Chris St. Pierre wrote:
The goal here is protection against deletion of in-use images, not a 
workaround that can be executed by an admin. For instance, someone 
without admin still can't do that, and someone with a fat finger can 
still delete images in use.


Don't lose your data is a fine workaround for taking backups, but 
most of us take backups anyway. Same deal.


On Tue, Dec 16, 2014 at 2:30 PM, Jay Pipes jaypi...@gmail.com wrote:


Just set the images to is_public=False as an admin and they'll
disappear from everyone except the admin.

-jay


On 12/16/2014 03:09 PM, Chris St. Pierre wrote:

Currently, with delay_delete enabled, the Glance scrubber happily
deletes whatever images you ask it to. That includes images
that are
currently in use by Nova guests, which can really hose things.
It'd be
nice to have an option to tell the scrubber to skip deletion
of images
that are currently in use, which is fairly trivial to check
for and
provides a nice measure of protection.

Without delay_delete enabled, checking for images in use
likely takes
too much time, so this would be limited to just images that
are scrubbed
with delay_delete.

I wanted to bring this up here before I go to the trouble of
writing a
spec for it, particularly since it doesn't appear that glance
currently
talks to Nova as a client at all. Is this something that folks
would be
interested in having? Thanks!

--
Chris St. Pierre


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Chris St. Pierre


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Cheers  Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Anyone knows whether there is freezing day of spec approval?

2014-12-16 Thread Jay S. Bryant

Dave,

My apologies.  We have not yet set a day that we are freezing BP/Spec 
approval for Cinder.


We had a deadline in November for new drivers being proposed but haven't 
frozen other proposals yet.  I mixed things up with Nova's 12/18 cutoff.


Not sure when we will be cutting off BPs for Cinder.  The goal is to 
spend as much of K-2 and K-3 as possible on Cinder clean-up.  So, I wouldn't let 
anything you want considered linger too long.


Thanks,
Jay

On 12/15/2014 09:16 PM, Chen, Wei D wrote:

Hi,

I know nova has such a day around Dec. 18; is there a similar day in the Cinder 
project? Thanks!

Best Regards,
Dave Chen




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslo.db 1.2.1 release coming to fix stable/juno

2014-12-16 Thread Doug Hellmann

On Dec 16, 2014, at 8:15 AM, Doug Hellmann d...@doughellmann.com wrote:

 
 On Dec 15, 2014, at 5:58 PM, Doug Hellmann d...@doughellmann.com wrote:
 
 
 On Dec 15, 2014, at 3:21 PM, Doug Hellmann d...@doughellmann.com wrote:
 
 The issue with stable/juno jobs failing because of the difference in the 
 SQLAlchemy requirements between the older applications and the newer 
 oslo.db is being addressed with a new release of the 1.2.x series. We will 
 then cap the requirements for stable/juno to 1.2.1. We decided we did not 
 need to raise the minimum version of oslo.db allowed in kilo, because the 
 old versions of the library do work, if they are installed from packages 
 and not through setuptools.
 
 Jeremy created a feature/1.2 branch for us, and I have 2 patches up [1][2] 
 to apply the requirements fix. The change to the oslo.db version in 
 stable/juno is [3].
 
 After the changes in oslo.db merge, I will tag 1.2.1.
 
 After spending several hours exploring a bunch of options to make this 
 actually work, some of which require making changes to test job definitions, 
 grenade, or other long-term changes, I’m proposing a new approach:
 
 1. Undo the change in master that broke the compatibility with versions of 
 SQLAlchemy by making master match juno: https://review.openstack.org/141927
 2. Update oslo.db after ^^ lands.
 3. Tag oslo.db 1.4.0 with a set of requirements compatible with Juno.
 4. Change the requirements in stable/juno to skip oslo.db 1.1, 1.2, and 1.3.
 
 I’ll proceed with that plan tomorrow morning (~15 hours from now) unless 
 someone points out why that won’t work in the mean time.
 
 I just reset a few approved patches that were not going to land because of 
 this issue to kick them out of the gate to expedite landing part of the fix. 
 I did this by modifying their commit messages. I tried to limit the changes 
 to simple cosmetic tweaks, so if you see a weird change to one of your 
 patches that’s probably why.

That solution evolved into a third approach, which has taken most of the day to 
land.

We now have an oslo.db 1.0.3 with SQLAlchemy requirements that work with 
setuptools 8. stable/juno is currently capped to oslo.db>=1.0.0,<1.3 but another 
change to move the cap down to <1.1 is in the queue right now [1]. This is a 
lower cap than the last tests we were running, but it has the benefit of 
providing a version of oslo.db that does not introduce any other requirements 
changes in stable/juno as 1.2.1 would have. More details are available in the 
etherpad we used for notes today [2], and of course please post here if you 
have questions.

Doug

[1] https://review.openstack.org/#/c/142180/2
[2] https://etherpad.openstack.org/p/cloL2FzTRd

 
 
 Doug
 
 
 Doug
 
 [1] https://review.openstack.org/#/c/141893/
 [2] https://review.openstack.org/#/c/141894/
 [3] https://review.openstack.org/#/c/141896/
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-16 Thread Chris St. Pierre
No, I'm looking to prevent images that are in use from being deleted. "In
use" and "protected" are disjoint sets.

On Tue, Dec 16, 2014 at 3:36 PM, Fei Long Wang feil...@catalyst.net.nz
wrote:

  Hi Chris,

 Are you looking for the 'protected' attribute? You can mark an image with
 'protected'=True, then the image can't be deleted by accidentally.

 On 17/12/14 10:23, Chris St. Pierre wrote:

 The goal here is protection against deletion of in-use images, not a
 workaround that can be executed by an admin. For instance, someone without
 admin still can't do that, and someone with a fat finger can still delete
 images in use.

  Don't lose your data is a fine workaround for taking backups, but most
 of us take backups anyway. Same deal.

 On Tue, Dec 16, 2014 at 2:30 PM, Jay Pipes jaypi...@gmail.com wrote:

 Just set the images to is_public=False as an admin and they'll disappear
 from everyone except the admin.

 -jay


 On 12/16/2014 03:09 PM, Chris St. Pierre wrote:

  Currently, with delay_delete enabled, the Glance scrubber happily
 deletes whatever images you ask it to. That includes images that are
 currently in use by Nova guests, which can really hose things. It'd be
 nice to have an option to tell the scrubber to skip deletion of images
 that are currently in use, which is fairly trivial to check for and
 provides a nice measure of protection.

 Without delay_delete enabled, checking for images in use likely takes
 too much time, so this would be limited to just images that are scrubbed
 with delay_delete.

 I wanted to bring this up here before I go to the trouble of writing a
 spec for it, particularly since it doesn't appear that glance currently
 talks to Nova as a client at all. Is this something that folks would be
 interested in having? Thanks!

 --
 Chris St. Pierre


  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




  --
 Chris St. Pierre


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 --
 Cheers  Best regards,
 Fei Long Wang (王飞龙)
 --
 Senior Cloud Software Engineer
 Tel: +64-48032246
 Email: flw...@catalyst.net.nz
 Catalyst IT Limited
 Level 6, Catalyst House, 150 Willis Street, Wellington
 --


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Chris St. Pierre
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] - Revert change of default ephemeral fs to ext4

2014-12-16 Thread Matt Riedemann



On 12/30/2013 7:30 PM, Clint Byrum wrote:

Excerpts from Day, Phil's message of 2013-12-30 11:05:17 -0800:

Hi, so it seems we were saying the same thing - new vms get a shared blank 
(empty) file system,  not blank disc.  How big a problem it is that in many cases this 
will be the already created ext3 disk and not ext4 depends I guess on how important 
consistency is to you (to me its pretty important).  Either way the change as it stands 
wont give all new vms an ext4 fs as intended,  so its flawed in that regard.

Like you I was thinking that we may have to move away from default being in 
the file name to fix this.



Indeed, default's meaning is mutable and thus it is flawed as a
cache key.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



jogo brought this bug up in IRC today [1]. The bug report says that we 
should put in the Icehouse release notes that ext3 is going to be 
changed to ext4 in Juno but that never happened.  So question is, now 
that we're well into Kilo, what can be done about this now?  The thread 
here talks about doing more than just changing the default value like in 
the original change [2], but is someone willing to work on that?


[1] https://bugs.launchpad.net/nova/+bug/1266262
[2] https://review.openstack.org/#/c/63209/
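
The shape of the fix discussed above, as a hedged sketch (names are
illustrative, not nova's actual code): resolve "default" to the concrete
filesystem before it reaches the cache key, so changing the default from ext3
to ext4 cannot silently match a stale cached image.

    def ephemeral_cache_key(os_type, fs_type, size_gb, default_fs='ext4'):
        # Never embed the literal string "default" in the cache filename;
        # as noted above, its meaning is mutable, making it a broken cache key.
        resolved = default_fs if fs_type == 'default' else fs_type
        return 'ephemeral_%s_%s_%s' % (size_gb, os_type, resolved)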

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] - Revert change of default ephemeral fs to ext4

2014-12-16 Thread Davanum Srinivas
Matt, i'll take a stab at it

thanks,
dims

On Tue, Dec 16, 2014 at 5:18 PM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:


 On 12/30/2013 7:30 PM, Clint Byrum wrote:

 Excerpts from Day, Phil's message of 2013-12-30 11:05:17 -0800:

 Hi, so it seems we were saying the same thing - new vms get a shared
 blank (empty) file system,  not blank disc.  How big a problem it is that
 in many cases this will be the already created ext3 disk and not ext4
 depends I guess on how important consistency is to you (to me its pretty
 important).  Either way the change as it stands wont give all new vms an
 ext4 fs as intended,  so its flawed in that regard.

 Like you I was thinking that we may have to move away from default
 being in the file name to fix this.


 Indeed, default's meaning is mutable and thus it is flawed as a
 cache key.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 jogo brought this bug up in IRC today [1]. The bug report says that we
 should put in the Icehouse release notes that ext3 is going to be changed to
 ext4 in Juno but that never happened.  So question is, now that we're well
 into Kilo, what can be done about this now?  The thread here talks about
 doing more than just changing the default value like in the original change
 [2], but is someone willing to work on that?

 [1] https://bugs.launchpad.net/nova/+bug/1266262
 [2] https://review.openstack.org/#/c/63209/

 --

 Thanks,

 Matt Riedemann


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-16 Thread Nikhil Komawar
Hi Chris,

Apologies for not having understood your use case completely. From the description 
as well as the information you've provided, my recommendation is to use a 
protected property in Glance for the Image entity that is in use. You can then 
use it in the service of your choice (Nova, Cinder) to avoid deleting that image. 
It is that service which has the more accurate information, and is the source of 
truth, for the in-use state of the Image entity. Making a call out to a 
different service (except backend stores) is out of the scope of Glance. (Nova 
is the client of Glance and we would like to avoid the circular dependency mess 
there!)

Hope it helps. Please let me know if you need more information.

Thanks and Regards,
-Nikhil

From: Chris St. Pierre [chris.a.st.pie...@gmail.com]
Sent: Tuesday, December 16, 2014 5:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Option to skip deleting images in use?

No, I'm looking to prevent images that are in use from being deleted. In use 
and protected are disjoint sets.

On Tue, Dec 16, 2014 at 3:36 PM, Fei Long Wang 
feil...@catalyst.net.nz wrote:
Hi Chris,

Are you looking for the 'protected' attribute? You can mark an image with 
'protected'=True; then the image can't be deleted accidentally.

On 17/12/14 10:23, Chris St. Pierre wrote:
The goal here is protection against deletion of in-use images, not a workaround 
that can be executed by an admin. For instance, someone without admin still 
can't do that, and someone with a fat finger can still delete images in use.

Don't lose your data is a fine workaround for taking backups, but most of us 
take backups anyway. Same deal.

On Tue, Dec 16, 2014 at 2:30 PM, Jay Pipes 
jaypi...@gmail.com wrote:
Just set the images to is_public=False as an admin and they'll disappear from 
everyone except the admin.

-jay


On 12/16/2014 03:09 PM, Chris St. Pierre wrote:
Currently, with delay_delete enabled, the Glance scrubber happily
deletes whatever images you ask it to. That includes images that are
currently in use by Nova guests, which can really hose things. It'd be
nice to have an option to tell the scrubber to skip deletion of images
that are currently in use, which is fairly trivial to check for and
provides a nice measure of protection.

Without delay_delete enabled, checking for images in use likely takes
too much time, so this would be limited to just images that are scrubbed
with delay_delete.

I wanted to bring this up here before I go to the trouble of writing a
spec for it, particularly since it doesn't appear that glance currently
talks to Nova as a client at all. Is this something that folks would be
interested in having? Thanks!

--
Chris St. Pierre


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Chris St. Pierre



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Cheers  Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Chris St. Pierre
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-16 Thread Collins, Sean
On Tue, Dec 16, 2014 at 05:12:31PM EST, Chris St. Pierre wrote:
 No, I'm looking to prevent images that are in use from being deleted. In
 use and protected are disjoint sets.

I have seen multiple cases of images (and snapshots) being deleted while
still in use in Nova, which leads to some very, shall we say,
interesting bugs and support problems.

I do think that we should try and determine a way forward on this, they
are indeed disjoint sets. Setting an image as protected is a proactive
measure, we should try and figure out a way to keep tenants from
shooting themselves in the foot if possible.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-16 Thread Ben Nemec
Some thoughts inline.  I'll go ahead and push a change to remove the
things everyone seems to agree on.

On 12/09/2014 09:05 AM, Sean Dague wrote:
 On 12/09/2014 09:11 AM, Doug Hellmann wrote:

 On Dec 9, 2014, at 6:39 AM, Sean Dague s...@dague.net wrote:

 I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely.

 1 - the entire H8* group. This doesn't function on python code, it
 functions on git commit messages, which makes it tough to run locally. It
 also would prevent us from skipping test re-runs on commit

 2 - the entire H3* group - because of this -
 https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm

 A look at the H3* code shows that it's terribly complicated, and is
 often full of bugs (a few bit us last week). I'd rather just delete it
 and move on.

 I don’t have the hacking rules memorized. Could you describe them briefly?
 
 Sure, the H8* group is git commit messages. It's checking for line
 length in the commit message.
 
 - [H802] First, provide a brief summary of 50 characters or less.  Summaries
  of greater than 72 characters will be rejected by the gate.
 
 - [H801] The first line of the commit message should provide an accurate
   description of the change, not just a reference to a bug or
   blueprint.
 
 
 H802 is mechanically enforced (though not the 50 characters part, so the
 code isn't the same as the rule).
 
 H801 is enforced by a regex that looks to see if the first line is a
 launchpad bug and fails on it. You can't mechanically enforce that
 english provides an accurate description.

+1.  It would be nice to provide automatic notification to people if
they submit something with an absurdly long commit message, but I agree
that hacking isn't the place to do that.

 
 
 H3* are all the module import rules:
 
 Imports
 ---
 - [H302] Do not import objects, only modules (*)
 - [H301] Do not import more than one module per line (*)
 - [H303] Do not use wildcard ``*`` import (*)
 - [H304] Do not make relative imports
 - Order your imports by the full module path
 - [H305 H306 H307] Organize your imports according to the `Import order
   template`_ and `Real-world Import Order Examples`_ below.
 
 I think these remain reasonable guidelines, but H302 is exceptionally
 tricky to get right, and we keep not getting it right.
 
 H305-307 are actually impossible to get right. Things come in and out of
 stdlib in python all the time.

tl;dr: I'd like to remove H302, H305, and H307 and leave the rest.
Reasons below.

+1 to H305 and H307.  I'm going to have to admit defeat and accept that
I can't make them work in a sane fashion.

H306 is different though - that one is only checking alphabetical order
and only works on the text of the import so it doesn't have the issues
around having modules installed or mis-categorizing.  AFAIK it has never
actually caused any problems either (the H306 failure in
https://review.openstack.org/#/c/140168/2/nova/tests/unit/test_fixtures.py
is correct - nova.tests.fixtures should come before
nova.tests.unit.conf_fixture).
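
As a hedged illustration of why a text-only check stays simple (a sketch in the
spirit of H306, not the actual hacking implementation):

    def check_import_order(logical_lines):
        """Yield a message whenever an import sorts before its predecessor."""
        previous = None
        for line in logical_lines:
            stripped = line.strip()
            if not stripped.startswith(('import ', 'from ')):
                continue
            # Reduce "from a.b import c" / "import a.b" to a sortable key;
            # no module needs to be importable for this to work.
            words = stripped.replace('from ', '').replace('import ', '').split()
            key = words[0].lower()
            if previous is not None and key < previous:
                yield 0, 'H306: imports not in alphabetical order'
            previous = key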

As far as H301-H304 go, only H302 actually depends on the is_module stuff.
The others are all text-based too, so I think we should leave them.  H302
I'm kind of indifferent on - we hit an edge case with the oslo namespace
thing which is now fixed, but if removing it allows us to not install
requirements.txt to run pep8, I think I'm onboard with removing it too.

 
 
 I think it's time to just decide to be reasonable Humans and that these
 are guidelines.
 
 The H3* set of rules is also why you have to install *all* of
 requirements.txt and test-requirements.txt in your pep8 tox target,
 because H302 actually inspects the sys.modules to attempt to figure out
 if things are correct.
 
   -Sean
 

 Doug
 


 -Sean

 -- 
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-16 Thread Vishvananda Ishaya
A simple solution that wouldn’t require modification of glance would be a cron 
job
that lists images and snapshots and marks them protected while they are in use.
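
A minimal sketch of that cron job, assuming python-novaclient,
python-glanceclient (v2), and admin credentials; everything here is
illustrative rather than production code:

    from glanceclient import client as glance_client
    from keystoneclient.v2_0 import client as ks_client
    from novaclient import client as nova_client

    def protect_in_use_images(username, password, tenant, auth_url):
        ks = ks_client.Client(username=username, password=password,
                              tenant_name=tenant, auth_url=auth_url)
        glance = glance_client.Client(
            '2', ks.service_catalog.url_for(service_type='image'),
            token=ks.auth_token)
        nova = nova_client.Client('2', username, password, tenant,
                                  auth_url=auth_url)

        # Collect the image IDs backing any instance, across all tenants.
        in_use = set()
        for server in nova.servers.list(search_opts={'all_tenants': 1}):
            image = getattr(server, 'image', None) or {}
            if image.get('id'):
                in_use.add(image['id'])

        # Mark in-use images protected; a fuller version would also
        # un-protect images that have fallen out of use.
        for image in glance.images.list():
            if image['id'] in in_use and not image['protected']:
                glance.images.update(image['id'], protected=True)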

Vish

On Dec 16, 2014, at 3:19 PM, Collins, Sean sean_colli...@cable.comcast.com 
wrote:

 On Tue, Dec 16, 2014 at 05:12:31PM EST, Chris St. Pierre wrote:
 No, I'm looking to prevent images that are in use from being deleted. In
 use and protected are disjoint sets.
 
 I have seen multiple cases of images (and snapshots) being deleted while
 still in use in Nova, which leads to some very, shall we say,
 interesting bugs and support problems.
 
 I do think that we should try and determine a way forward on this, they
 are indeed disjoint sets. Setting an image as protected is a proactive
 measure, we should try and figure out a way to keep tenants from
 shooting themselves in the foot if possible.
 
 -- 
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Octavia] Meetings canceled until Jan 7

2014-12-16 Thread Stephen Balukoff
Since we're in the middle of the Octavia hack-a-thon (and have been meeting
in person and online all week), it doesn't make sense for us to have an
Octavia meeting next week.

I also suggested holding Octavia meetings on Christmas Eve and New Year's
Eve (when the following two meetings would be held), but I was assured by
the usual participants in these meetings that I would probably be the only
one attending them. As such, the next Octavia meeting we'll be holding will
happen on January 7th.

In the mean time, let's bang out some code and get it reviewed!

Thanks,
Stephen

--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] RFC - Action spec CLI

2014-12-16 Thread Lakshmi Kannan
Apologies for the long email. If this fancy email doesn’t render correctly
for you, please read it here:
https://gist.github.com/lakshmi-kannan/cf953f66a397b153254a

I was looking into fixing bug:
https://bugs.launchpad.net/mistral/+bug/1401039. My idea was to use shlex
to parse the string. This actually would work for anything that is supplied
in Linux shell syntax. The problem is that this craps out when we want to
support complex data structures such as arrays and dicts as arguments. I
did not think we supported a syntax for taking in complex data structures in a
one-line format. Consider, for example:

  task7:
    for-each:
      vm_info: $.vms
    workflow: wf2 is_true=true object_list=[1, null, str]
    on-complete:
      - task9
      - task10

Specifically

wf2 is_true=true object_list=[1, null, str]

shlex will not handle this correctly because object_list is an array. Same
problem with dict.

There are 3 potential options here:
Option 1

1) Provide a spec for specifying lists and dicts like so:
list_arg=1,2,3 and dict_arg=h1:h2,h3:h4,h5:h6

shlex will handle this fine, but there needs to be code that converts the
argument values to appropriate data types based on a schema. (ActionSpec
should probably have a parameter schema in jsonschema.) This is doable.

wf2 is_true=true object_list=1,null,str

Option 2

2) Allow JSON strings to be used as arguments so we can json.loads them (if
that fails, use them as a simple string). For example, with this approach, the
line becomes

wf2 is_true=true object_list=[1, null, str]

This would pretty much resemble
http://stackoverflow.com/questions/7625786/type-dict-in-argparse-add-argument
Option 3

3) Keep the spec as such and try to parse it. I have no idea how we can do
this reliably. We need a more rigorous lexer. This syntax doesn’t translate
well when we want to build a CLI. Linux shells cannot support this syntax
natively. This means people would have to use shlex syntax and a
translation would need to happen in the CLI layer. This will lead to
inconsistency: the CLI would use one syntax and the action input line in the
workflow definition another. We should try and avoid this.
Option 4

4) Completely drop support for this fancy one line syntax in workflow. This
is probably the least desired option.
My preference

Looking at the options, my order of preference is Option 2, Option 1,
Option 4, Option 3.

With some documentation, we can tell people why this is hard. People will
also grok it because they are already familiar with CLI limitations in Linux.

Thoughts?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2014-12-16 Thread Asselin, Ramy
Manually running the script requires a few environment settings. Take a look at 
the README here:
https://github.com/openstack-infra/devstack-gate

Regarding cinder, I’m using this repo to run our cinder jobs (fork from 
jaypipes).
https://github.com/rasselin/os-ext-testing

Note that this solution doesn’t use the Jenkins Gerrit Trigger plugin, but 
Zuul.

There’s a sample job for cinder here. It’s in Jenkins Job Builder format.
https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample

You can ask more questions in IRC: freenode #openstack-cinder (nick: asselin).

Ramy

From: Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
Sent: Tuesday, December 16, 2014 12:41 AM
To: Bailey, Darragh
Cc: OpenStack Development Mailing List (not for usage questions); OpenStack
Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting 
up CI

Hi,

Can someone point me to some working documentation on how to setup third party 
CI? (joinfu's instructions don't seem to work, and manually running 
devstack-gate scripts fails:

Running gate_hook

Job timeout set to: 163 minutes

timeout: failed to run command 
‘/opt/stack/new/devstack-gate/devstack-vm-gate.sh’: No such file or directory

ERROR: the main setup script run by this job failed - exit code: 127

please look at the relevant log files to determine the root cause

Cleaning up host

... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz)
Build step 'Execute shell' marked build as failure.

I have a working Jenkins slave with devstack and our internal libraries, i have 
Gerrit Trigger Plugin working and triggering on patches created, i just need 
the actual job contents so that it can get to comment with the test results.

Thanks,

Eduard

On Tue, Dec 9, 2014 at 1:59 PM, Eduard Matei 
eduard.ma...@cloudfounders.com wrote:
Hi Darragh, thanks for your input

I double checked the job settings and fixed it:
- build triggers is set to Gerrit event
- Gerrit trigger server is Gerrit (configured from Gerrit Trigger Plugin and 
tested separately)
- Trigger on: Patchset Created
- Gerrit Project: Type: Path, Pattern openstack-dev/sandbox, Branches: Type: 
Path, Pattern: ** (was Type Plain on both)
Now the job is triggered by commit on openstack-dev/sandbox :)

Regarding the Query and Trigger Gerrit Patches, i found my patch using query: 
status:open project:openstack-dev/sandbox change:139585 and i can trigger it 
manually and it executes the job.

But i still have the problem: what should the job do? It doesn't actually do 
anything, it doesn't run tests or comment on the patch.
Do you have an example of job?

Thanks,
Eduard

On Tue, Dec 9, 2014 at 1:13 PM, Bailey, Darragh 
dbai...@hp.com wrote:
Hi Eduard,


I would check the trigger settings in the job, particularly which type
of pattern matching is being used for the branches. Found it tends to be
the spot that catches most people out when configuring jobs with the
Gerrit Trigger plugin. If you're looking to trigger against all branches
then you would want Type: Path and Pattern: ** appearing in the UI.

If you have sufficient access using the 'Query and Trigger Gerrit
Patches' page accessible from the main view will make it easier to
confirm that your Jenkins instance can actually see changes in gerrit
for the given project (which should mean that it can see the
corresponding events as well). Can also use the same page to re-trigger
for PatchsetCreated events to see if you've set the patterns on the job
correctly.

Regards,
Darragh Bailey

Nothing is foolproof to a sufficiently talented fool - Unknown

On 08/12/14 14:33, Eduard Matei wrote:
 Resending this to dev ML as it seems i get quicker response :)

 I created a job in Jenkins, added as Build Trigger: Gerrit Event:
 Patchset Created, chose as server the configured Gerrit server that
 was previously tested, then added the project openstack-dev/sandbox
 and saved.
 I made a change on dev sandbox repo but couldn't trigger my job.

 Any ideas?

 Thanks,
 Eduard

 On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei
 eduard.ma...@cloudfounders.com wrote:

 Hello everyone,

 Thanks to the latest changes to the creation of service accounts
 process we're one step closer to setting up our own CI platform
 for Cinder.

 So far we've got:
 - Jenkins master (with Gerrit plugin) and slave (with DevStack and
 our storage solution)
 - Service account configured and tested (can manually connect to
 review.openstack.org and get events
 and publish comments)

 Next step would be to set up a job to do the actual testing, this
 is where we're stuck.
 Can someone please point us to a clear example on how a job should
 

Re: [openstack-dev] [Fuel] Adding code to add node to fuel UI

2014-12-16 Thread Mike Scherbakov
Hi,
did you come across
http://docs.mirantis.com/fuel-dev/develop/addition_examples.html ?

I believe it should cover your use case.

Thanks,

On Tue, Dec 16, 2014 at 11:43 PM, Satyasanjibani Rautaray 
engg.s...@gmail.com wrote:

 I just need to deploy the node and install my required packages.
 On 17-Dec-2014 1:31 am, Andrey Danin ada...@mirantis.com wrote:

 Hello.

 What version of Fuel do you use? Did you reupload openstack.yaml into
 Nailgun? Do you want just to deploy an operating system and configure a
 network on a new node?

 I would really appreciate if you use a period at the end of sentences.

 On Tuesday, December 16, 2014, Satyasanjibani Rautaray 
 engg.s...@gmail.com wrote:

 Hi,

 *i am in a process of creating an additional node by editing the code
 where the new node will be solving a different propose than installing
 openstack components just for testing currently the new node will install
 vim for me please help me what else i need to look into to create the
 complete setup and deploy with fuel i have edited openstack.yaml at
 /root/fuel-web/nailgun/nailgun/fixtures http://pastebin.com/P1MmDBzP
 http://pastebin.com/P1MmDBzP*
 --
 Thanks
 Satya
 Mob:9844101001

 No one is the best by birth, Its his brain/ knowledge which make him the
 best.



 --
 Andrey Danin
 ada...@mirantis.com
 skype: gcon.monolake





-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-16 Thread Joe Gordon
On Tue, Dec 16, 2014 at 3:22 PM, Ben Nemec openst...@nemebean.com wrote:

 Some thoughts inline.  I'll go ahead and push a change to remove the
 things everyone seems to agree on.

 On 12/09/2014 09:05 AM, Sean Dague wrote:
  On 12/09/2014 09:11 AM, Doug Hellmann wrote:
 
  On Dec 9, 2014, at 6:39 AM, Sean Dague s...@dague.net wrote:
 
  I'd like to propose that for hacking 1.0 we drop 2 groups of rules
 entirely.
 
   1 - the entire H8* group. This doesn't function on Python code, it
  functions on the git commit message, which makes it tough to run locally.
  It would also prevent us from skipping test reruns on commit-message-only
  changes (something we could do after the next gerrit update).
 
  2 - the entire H3* group - because of this -
  https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm
 
  A look at the H3* code shows that it's terribly complicated, and is
  often full of bugs (a few bit us last week). I'd rather just delete it
  and move on.
 
  I don’t have the hacking rules memorized. Could you describe them
 briefly?
 
  Sure, the H8* group is git commit messages. It's checking for line
  length in the commit message.
 
   - [H802] First, provide a brief summary of 50 characters or less.
     Summaries of greater than 72 characters will be rejected by the gate.
 
  - [H801] The first line of the commit message should provide an accurate
description of the change, not just a reference to a bug or
blueprint.
 
 
  H802 is mechanically enforced (though not the 50 characters part, so the
  code isn't the same as the rule).
 
   H801 is enforced by a regex that looks to see if the first line is a
   launchpad bug and fails on it. You can't mechanically enforce that
   English provides an accurate description.

 +1.  It would be nice to provide automatic notification to people if
 they submit something with an absurdly long commit message, but I agree
 that hacking isn't the place to do that.

 
 
  H3* are all the module import rules:
 
  Imports
  ---
  - [H302] Do not import objects, only modules (*)
  - [H301] Do not import more than one module per line (*)
  - [H303] Do not use wildcard ``*`` import (*)
  - [H304] Do not make relative imports
  - Order your imports by the full module path
  - [H305 H306 H307] Organize your imports according to the `Import order
template`_ and `Real-world Import Order Examples`_ below.
 
  I think these remain reasonable guidelines, but H302 is exceptionally
  tricky to get right, and we keep not getting it right.
 
  H305-307 are actually impossible to get right. Things come in and out of
  stdlib in python all the time.

  tl;dr: I'd like to remove H302, H305, and H307 and leave the rest.
  Reasons below.

 +1 to H305 and H307.  I'm going to have to admit defeat and accept that
 I can't make them work in a sane fashion.


++, these have been nothing but trouble.



 H306 is different though - that one is only checking alphabetical order
 and only works on the text of the import so it doesn't have the issues
 around having modules installed or mis-categorizing.  AFAIK it has never
 actually caused any problems either (the H306 failure in
 https://review.openstack.org/#/c/140168/2/nova/tests/unit/test_fixtures.py
 is correct - nova.tests.fixtures should come before
 nova.tests.unit.conf_fixture).
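
A rough sketch of that kind of purely text-based ordering check (illustrative
only, not the actual hacking implementation):

import re

def check_import_order(lines):
    # Compare each import line with the previous one using only the text;
    # nothing is ever imported or looked up on disk.
    previous = None
    for num, line in enumerate(lines, 1):
        match = re.match(r'(?:import|from)\s+([\w.]+)', line)
        if not match:
            previous = None  # a non-import line resets the comparison
            continue
        module = match.group(1)
        if previous is not None and module < previous:
            yield num, ('imports not in alphabetical order '
                        '(%s, %s)' % (previous, module))
        previous = module

# nova.tests.fixtures sorts before nova.tests.unit.conf_fixture, so the
# reversed order below is flagged:
for err in check_import_order(['import nova.tests.unit.conf_fixture',
                               'import nova.tests.fixtures']):
    print(err)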


Agreed, H306 is mechanically enforceable and is there in part to reduce the
risk of merge conflicts in the imports section.



 As far as 301-304, only 302 actually depends on the is_module stuff.
 The others are all text-based too so I think we should leave them.  H302
  I'm kind of indifferent on - we hit an edge case with the oslo namespace
 thing which is now fixed, but if removing that allows us to not install
 requirements.txt to run pep8 I think I'm onboard with removing it too.


As for H302, it comes from
https://google-styleguide.googlecode.com/svn/trunk/pyguide.html?showone=Imports#Imports

We still don't have this one working right; running flake8 outside a venv
is still causing oslo packaging-related issues for me:

./nova/i18n.py:21:1: H302  import only modules.'from oslo import i18n' does
not import a module

So +1 to just removing it.



 
 
   I think it's time to just decide to be reasonable humans and that these
   are guidelines.
 
  The H3* set of rules is also why you have to install *all* of
  requirements.txt and test-requirements.txt in your pep8 tox target,
  because H302 actually inspects the sys.modules to attempt to figure out
  if things are correct.
 
-Sean
 
 
  Doug
 
 
 
  -Sean
 
  --
  Sean Dague
  http://dague.net
 

Re: [openstack-dev] [Fuel] Image based provisioning

2014-12-16 Thread Mike Scherbakov
Dmitry,
as part of the 6.1 roadmap, we are going to work on the patching feature.
There are two types of workflow to consider:
- patch existing environment (already deployed nodes, aka target nodes)
- ensure that new nodes, added to the existing and already patched envs,
will install updated packages too.

In the case of an anaconda/preseed install, we can simply update the repo on
the master node and run createrepo/etc. What do we do in the case of an image?
Will we need a separate updates repo alongside the main one, and do a
post-provisioning yum update to fetch all patched packages?

On Tue, Dec 16, 2014 at 11:09 PM, Andrey Danin ada...@mirantis.com wrote:

 Adding Mellanox team explicitly.

 Gil, Nurit, Aviram, can you confirm that you tested that feature? It can
 be enabled on every fresh ISO. You just need to enable the Experimental
 mode (please, see the documentation for instructions).

 On Tuesday, December 16, 2014, Dmitry Pyzhov dpyz...@mirantis.com wrote:

 Guys,

 we are about to enable image-based provisioning in our master by default.
 I'm trying to figure out the requirements for this change. As far as I know,
 it was not tested in the scale lab. Is that true? Have we ever run a full
 system test cycle with this option?

 Do we have any other prerequisites?



 --
 Andrey Danin
 ada...@mirantis.com
 skype: gcon.monolake





-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] SRIOV-error

2014-12-16 Thread david jhon
Hi Irena,

Thanks a lot for your help; it helped me a lot to fix the bugs and get the
issues resolved.

Thanks again!

On Tue, Dec 16, 2014 at 4:40 PM, Irena Berezovsky ire...@mellanox.com
wrote:

  Hi David,

 As I mentioned before, you do not need to run the sriov agent in your setup;
 just set agent_required=False in your neutron-server configuration. I think
 that initially it may be easier to make things work this way.

 I also cannot understand why you have two neutron config files that
 contain the same sections with different settings.



 You can find me on the #openstack-neutron IRC channel, I can try to help.



 BR,

 Irena





 *From:* david jhon [mailto:djhon9...@gmail.com]
 *Sent:* Tuesday, December 16, 2014 9:44 AM
 *To:* Irena Berezovsky
 *Cc:* OpenStack Development Mailing List (not for usage questions);
 Murali B
 *Subject:* Re: [openstack-dev] SRIOV-error



 Hi Irena and Murali,

 Thanks a lot for your reply!

 Here is the output from pci_devices table of nova db:

 select * from pci_devices;

 (identical in every row: updated_at = deleted_at = instance_uuid =
 request_id = NULL, deleted = 0, compute_node_id = 1, product_id = 10ed,
 vendor_id = 8086, dev_type = type-VF, label = label_8086_10ed,
 status = available)

 +----+----------+---------------------+--------------+---------------------------+
 | id | address  | created_at          | dev_id       | extra_info                |
 +----+----------+---------------------+--------------+---------------------------+
 |  1 | :03:10.0 | 2014-12-15 12:10:52 | pci__03_10_0 | {phys_function: :03:00.0} |
 |  2 | :03:10.2 | 2014-12-15 12:10:52 | pci__03_10_2 | {phys_function: :03:00.0} |
 |  3 | :03:10.4 | 2014-12-15 12:10:52 | pci__03_10_4 | {phys_function: :03:00.0} |
 |  4 | :03:10.6 | 2014-12-15 12:10:52 | pci__03_10_6 | {phys_function: :03:00.0} |
 |  5 | :03:10.1 | 2014-12-15 12:10:53 | pci__03_10_1 | {phys_function: :03:00.1} |
 |  6 | :03:10.3 | 2014-12-15 12:10:53 | pci__03_10_3 | {phys_function: :03:00.1} |
 |  7 | :03:10.5 | 2014-12-15 12:10:53 | pci__03_10_5 | {phys_function: :03:00.1} |
 |  8 | :03:10.7 | 2014-12-15 12:10:53 | pci__03_10_7 | {phys_function: :03:00.1} |
 +----+----------+---------------------+--------------+---------------------------+

 output from select hypervisor_hostname, pci_stats from compute_nodes; is:

 +---------------------+-----------------------------------------------------------------------------+
 | hypervisor_hostname | pci_stats                                                                   |
 +---------------------+-----------------------------------------------------------------------------+
 | blade08             | [{count: 8, vendor_id: 8086, physical_network: ext-net, product_id: 10ed}] |
 +---------------------+-----------------------------------------------------------------------------+

 Moreover, I have set agent_required = True in
 /etc/neutron/plugins/ml2/ml2_conf_sriov.ini,
 but still found no sriov agent running.
 # Defines configuration options for SRIOV NIC Switch MechanismDriver
 # and Agent

 [ml2_sriov]
 # (ListOpt) Comma-separated list of
 # supported Vendor PCI Devices, in format vendor_id:product_id
 #
 #supported_pci_vendor_devs = 8086:10ca, 8086:10ed
 supported_pci_vendor_devs = 

Re: [openstack-dev] [Fuel] Feature delivery rules and automated tests

2014-12-16 Thread Mike Scherbakov
I fully support the idea.

The feature lead has to know that his feature is at risk if it's not yet
covered by system tests (unit/integration tests are not enough!!!), and
should proactively work with QA engineers to get tests implemented and
passing before SCF.

On Fri, Dec 12, 2014 at 5:55 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote:

 Guys,

 we've done a good job in 6.0. Most of the features were merged before
 feature freeze. Our QA were involved in testing even earlier. It was much
 better than before.

 We had a discussion with Anastasia. There were several bug reports for
 features yesterday, far beyond HCF, so we still have a long way to go to be
 perfect. We should add one rule: we need to have automated tests before HCF.

 Actually, we should have the results of these tests just after FF. That is
 quite challenging because we have a short development cycle. So my proposal
 is to require full deployment and a run of automated tests for each feature
 before soft code freeze. And it needs to be tracked in checklists and at
 feature syncups.

 Your opinion?




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] #PERSONAL# : Git checkout command for Blueprints submission

2014-12-16 Thread Swati Shukla1
 
Hi All,

Generally, for bug submissions, we use git checkout -b bug/bug_number

What is the similar 'git checkout' command for blueprints submission?

Swati Shukla
Tata Consultancy Services
Mailto: swati.shuk...@tcs.com
Website: http://www.tcs.com

Experience certainty.   IT Services
Business Solutions
Consulting



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party

2014-12-16 Thread Asselin, Ramy
In this case, I see the mentoring to be more like “office hours”, which can be
done by CI operators and other volunteers spread across time zones. I think
this is a great idea.

Ramy

From: Kurt Taylor [mailto:kurt.r.tay...@gmail.com]
Sent: Tuesday, December 16, 2014 9:39 AM
To: Stefano Maffulli
Cc: OpenStack Development Mailing List (not for usage questions); 
openstack-in...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional 
Meeting for third-party



On Mon, Dec 15, 2014 at 7:07 PM, Stefano Maffulli 
stef...@openstack.org wrote:
On 12/05/2014 07:08 AM, Kurt Taylor wrote:
 1. Meeting content: Having 2 meetings per week is more than is needed at
 this stage of the working group. There just isn't enough meeting content
 to justify having two meetings every week.

I'd like to discuss this further: the stated objectives of the meetings
are very wide and may allow for more than one slot per week. In
particular I'm seeing the two below as good candidates for 'meet as many
times as possible':

   *  to provide a forum for the curious and for OpenStack programs who
are not yet in this space but may be in the future
   * to encourage questions from third party folks and support the
sourcing of answers
snip

As I mentioned above, probably one way to do this is to make some slots
more focused on engaging newcomers and answering questions, more like
serendipitous mentoring sessions with the less involved, while another
slot could be dedicated to more focused and long term efforts, with more
committed people?

This is an excellent idea, let's split the meetings into:

1) Mentoring - mentoring new CI team members and operators, helping them
understand infra tools and processes. Anita can continue her fantastic work
here.

2) Working Group - working meeting for documentation, reviewing patches for 
relevant work, and improving the consumability of infra CI components. I will 
be happy to chair these meetings initially. I am sure I can get help with these 
meetings for the other time zones also.

With this approach we can also continue to use the new meeting times voted on 
by the group, and each is focused on targeting a specific group with very 
different needs.

Thanks Stefano!

Kurt Taylor (krtaylor)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Query regarding BluePrint submission for Review

2014-12-16 Thread Vikram Choudhary
Dear All,

We want to submit a new blueprint for review.
Can you please provide the steps for doing it?

Thanks
Vikram
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Not able to locate tests for glanceclient

2014-12-16 Thread Abhishek Talwar/HYD/TCS
Hi All,

I am currently working on the stable Juno release for a fix to glanceclient,
but I am not able to locate the tests in glanceclient. Could you help me
locate them, as I need to add a unit test?
The current path for glanceclient is
/usr/local/lib/python2.7/dist-packages/glanceclient.


Thanks and Regards
Abhishek Talwar


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] #PERSONAL# : Git checkout command for Blueprints submission

2014-12-16 Thread Amit Das
Hi,
It is git checkout -b bp/bp_name

Regards,
Amit
*CloudByte Inc.* http://www.cloudbyte.com/

On Wed, Dec 17, 2014 at 10:23 AM, Swati Shukla1 swati.shuk...@tcs.com
wrote:




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Query regarding BluePrint submission for Review

2014-12-16 Thread Jay Bryant
Vikram,

The process is documented here: https://wiki.openstack.org/wiki/Blueprints

Let me know if you have questions.

Jay
On Dec 16, 2014 11:18 PM, Vikram Choudhary vikram.choudh...@huawei.com
wrote:

  Dear All,



 We want to submit a new blueprint for review.

 Can you please provide the steps for doing it.



 Thanks

 Vikram



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Querries Regarding Blueprint for LBaaS API and Object Model improvement

2014-12-16 Thread Shreshtha Joshi
Hi All,

I wanted to know which approach has been followed for the LBaaS implementation
in OpenStack (Juno), out of the following from
https://etherpad.openstack.org/p/neutron-lbaas-api-proposals:
1. Existing Core Resource Model
2. LoadBalancer Instance Model
3. Vip-Centric Model
Currently, in various OpenStack documents, I find Pool as the root object that
has a VIP associated with it, rather than Listeners and LoadBalancers.

While investigating this, I came across a blueprint
(https://blueprints.launchpad.net/neutron/+spec/lbaas-api-and-objmodel-improvement)
for LBaaS API and object model improvement. It talks about moving the current
VIP object into a Listener, with the Listener linked to a LoadBalancer, in
upcoming releases.

I wanted to know the current approach followed for openstack-juno, and for the
upcoming release (Kilo):
  * Are we planning to have new APIs for /Listener and /Loader, with no VIP
object and no corresponding API?
  * Or will we keep the VIP object and its corresponding API, creation of
which will result in the creation of a Loadbalancer and /Listener at the
backend itself?
If you find the above investigation incorrect, please feel free to point me in
the right direction.


Thanks & Regards
Shreshtha Joshi


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] Querries Regarding Blueprint for LBaaS API and Object Model improvement

2014-12-16 Thread Doug Wiegley
Adding tags for [neutron][lbaas]

Juno lbaas (v1) has pool as the root model, with VIP.

Kilo lbaas (v2): you are correct, vip is splitting into loadbalancer and
listener, and loadbalancer is the root object. And yes, the new objects get
new URIs.

Both v1 and v2 plugins will be available in Kilo.
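
For anyone trying to picture the difference, a rough sketch of the two data
shapes (illustrative only; these dicts are not the actual neutron-lbaas
resources, and the field names are approximations):

# LBaaS v1: Pool is the root object; the VIP hangs off the pool.
v1_pool = {
    'name': 'web-pool',
    'vip': {'address': '10.0.0.10', 'protocol': 'HTTP', 'protocol_port': 80},
    'members': ['10.0.0.11:80', '10.0.0.12:80'],
}

# LBaaS v2: LoadBalancer is the root; the old VIP data is split between the
# loadbalancer (the address) and its listeners (protocol/port), and pools
# hang off listeners instead of the other way around.
v2_loadbalancer = {
    'name': 'web-lb',
    'vip_address': '10.0.0.10',
    'listeners': [{
        'protocol': 'HTTP',
        'protocol_port': 80,
        'default_pool': {'members': ['10.0.0.11:80', '10.0.0.12:80']},
    }],
}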

Thanks,
doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [UX] Curvature interactive virtual network design

2014-12-16 Thread Amit Das
+1 after looking at the videos.

Regards,
Amit
*CloudByte Inc.* http://www.cloudbyte.com/

On Tue, Dec 16, 2014 at 11:43 PM, Liz Blanchard lsure...@redhat.com wrote:


 On Nov 7, 2014, at 11:16 AM, John Davidge (jodavidg) jodav...@cisco.com
 wrote:

   As discussed in the Horizon contributor meet up, here at Cisco we’re
 interested in upstreaming our work on the Curvature dashboard into Horizon.
 We think that it can solve a lot of issues around guidance for new users
 and generally improving the experience of interacting with Neutron.
 Possibly an alternative persona for novice users?

  For reference, see:

   1. http://youtu.be/oFTmHHCn2-g – Video Demo
   2. https://www.openstack.org/summit/portland-2013/session-videos/presentation/interactive-visual-orchestration-with-curvature-and-donabe – Portland presentation
   3. https://github.com/CiscoSystems/curvature – original (Rails based) code

  We’d like to gauge interest from the community on whether this is
 something people want.

  Thanks,

  John, Brad  Sam


 Hey guys,

 Sorry for my delayed response here…just coming back from maternity leave.

 I’ve been waiting and hoping since the Portland summit that the curvature
 work you have done would be brought in to Horizon. A definite +1 from me
 from a user experience point of view. It would be great to have a solid
 plan on how this could work with or be additional to the Orchestration and
 Network Topology pieces that currently exist in Horizon.

 Let me know if I can help out with any design review, wireframe, or
 usability testing aspects.

 Best,
 Liz




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] RFC - Action spec CLI

2014-12-16 Thread Renat Akhmerov
Ok, I would prefer to spend some time and think about how to improve the
existing regexp that we use to parse key-value pairs. We definitely can’t just
drop support for this syntax, and we can’t even change it significantly, since
people already use it.

Renat Akhmerov
@ Mirantis Inc.



 On 17 Dec 2014, at 07:28, Lakshmi Kannan laks...@lakshmikannan.me wrote:
 
 Apologies for the long email. If this fancy email doesn’t render correctly
 for you, please read it here:
 https://gist.github.com/lakshmi-kannan/cf953f66a397b153254a
 I was looking into fixing bug
 https://bugs.launchpad.net/mistral/+bug/1401039. My idea was to use shlex
 to parse the string. This actually would work for anything that is supplied
 in the Linux shell syntax. The problem is that this craps out when we want
 to support complex data structures such as arrays and dicts as arguments. I
 did not think we supported a syntax to take in complex data structures in a
 one-line format. Consider for example:
 
   task7:
     for-each:
       vm_info: $.vms
     workflow: wf2 is_true=true object_list=[1, null, "str"]
     on-complete:
       - task9
       - task10
 Specifically,

 wf2 is_true=true object_list=[1, null, "str"]

 shlex will not handle this correctly because object_list is an array. The
 same problem applies to dicts.
 
 There are 3 potential options here:
 
 Option 1
 
 1) Provide a spec for specifying lists and dicts like so:
 list_arg=1,2,3 and dict_arg=h1:h2,h3:h4,h5:h6

 shlex will handle this fine, but there needs to be code that converts the
 argument values to the appropriate data types based on a schema. (The
 ActionSpec should have a parameter schema, probably in jsonschema.) This is
 doable.
 
 wf2 is_true=true object_list=1,null,str
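
 A rough sketch of that conversion (the schema and helper names here are
 made up for illustration; a real schema would come from the ActionSpec):

 import shlex

 # Hypothetical parameter schema; a real one would be declared on the
 # ActionSpec, probably in jsonschema as suggested above.
 SCHEMA = {'is_true': bool, 'object_list': list}

 def parse(line):
     tokens = shlex.split(line)
     name, params = tokens[0], {}
     for token in tokens[1:]:
         key, _, raw = token.partition('=')
         kind = SCHEMA.get(key, str)
         if kind is bool:
             params[key] = raw.lower() == 'true'
         elif kind is list:
             params[key] = raw.split(',')
         else:
             params[key] = raw
     return name, params

 # -> ('wf2', {'is_true': True, 'object_list': ['1', 'null', 'str']})
 print(parse('wf2 is_true=true object_list=1,null,str'))
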
 Option 2
 
 2) Allow JSON strings to be used as arguments so we can json.loads them (and
 if that fails, use them as a simple string). For example, with this
 approach, the line becomes

 wf2 is_true=true object_list=[1, null, "str"]

 This would pretty much resemble
 http://stackoverflow.com/questions/7625786/type-dict-in-argparse-add-argument
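
 A minimal sketch of that fallback (note that the JSON value must not contain
 unquoted spaces, or shlex will split it into separate tokens, which is
 exactly the problem described above):

 import json
 import shlex

 def parse_value(raw):
     # Try the value as JSON first; fall back to a plain string.
     try:
         return json.loads(raw)
     except ValueError:
         return raw

 def parse(line):
     tokens = shlex.split(line)
     return tokens[0], dict((key, parse_value(raw)) for key, _, raw
                            in (t.partition('=') for t in tokens[1:]))

 # -> ('wf2', {'is_true': True, 'object_list': [1, None, 'str']})
 print(parse('wf2 is_true=true object_list=[1,null,"str"]'))
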
 Option 3
 
 3) Keep the spec as is and try to parse it. I have no idea how we can do
 this reliably; we would need a more rigorous lexer. This syntax also doesn’t
 translate well when we want to build a CLI. Linux shells cannot support it
 natively, which means people would have to use shlex syntax and a
 translation would need to happen in the CLI layer. This will lead to
 inconsistency: the CLI would use one syntax and the action input line in the
 workflow definition another. We should try to avoid this.
 
 Option 4
 
 4) Completely drop support for this fancy one-line syntax in the workflow.
 This is probably the least desired option.
 
 My preference
 
 Looking at the options, my order of preference is option 2, then option 1,
 then option 4, then option 3.
 
 With some documentation, we can tell people why this is hard. People will
 also grok it, because they are already familiar with CLI limitations in
 Linux.
 
 Thoughts?
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] Querries Regarding Blueprint for LBaaS API and Object Model improvement

2014-12-16 Thread Shreshtha Joshi
Thanks Doug.

Regards
Shreshtha Joshi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][gerrit] Showing all inline comments from all patch sets

2014-12-16 Thread James Polley
I was looking at the new change screen on https://review.openstack.org
today[1] and it seems to do something vaguely similar.

Rather than saying "James Polley made 4 inline comments", the contents of
the comments are shown, along with a link to the file so you can see the
context.

Have you seen this? It seems fairly similar to what you're wanting.

[1] To activate it, go to
https://review.openstack.org/#/settings/preferences and set Change view
to New Screen, then look at a change screen (such as
https://review.openstack.org/#/c/127283/)
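
As an aside, you don't need a UI at all to get at the raw data. A small
sketch against Gerrit's REST API (the /changes/{id}/comments endpoint
returns published inline comments across all patch sets; the change number
below is just the example above):

import json

import requests  # third-party HTTP library

GERRIT = 'https://review.openstack.org'
CHANGE = '127283'

# Gerrit prefixes JSON responses with a ")]}'" anti-XSSI line that has to
# be stripped before parsing.
raw = requests.get('%s/changes/%s/comments' % (GERRIT, CHANGE)).text
comments = json.loads(raw.split('\n', 1)[1])

for path, entries in comments.items():
    for c in entries:
        print('%s (PS%s, line %s): %s' % (
            path, c.get('patch_set'), c.get('line', '-'), c['message']))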

On Tue, Dec 16, 2014 at 4:45 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2014-12-16 17:19:55 +0200 (+0200), Radoslav Gerganov wrote:
  We don't need GoogleAppEngine if we decide that this is useful. We
  simply need to put the html page which renders the view on
  https://review.openstack.org. It is all javascript which talks
  asynchronously to the Gerrit backend.
 
  I am using GAE to simply illustrate the idea without having to
  spin up an entire Gerrit server.

 That makes a lot more sense--thanks for the clarification!

  I guess I can also submit a patch to the infra project and see how
  this works on https://review-dev.openstack.org if you want.

 If there's a general desire from the developer community for it,
 then that's probably the next step. However, ultimately this seems
 like something better suited as an upstream feature request for
 Gerrit (there may even already be thread-oriented improvements in
 the works for the new change screen--I haven't kept up with their
 progress lately).
 --
 Jeremy Stanley

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] ActionProvider

2014-12-16 Thread W Chan
Renat,

We want to introduce the concept of an ActionProvider to Mistral.  We are
thinking that with an ActionProvider, a third-party system can extend
Mistral with its own action catalog and a set of dedicated and specialized
action executors.  The ActionProvider will return its own list of actions
via an abstract interface.  This minimizes the complexity and latency in
managing and sync'ing the Action table.  In the DSL, we can define
provider-specific context/configuration separately and apply it to all
provider-specific actions without explicitly passing it as inputs.  WDYT?
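
To make the proposal concrete, a rough sketch of what such an interface
might look like (all names here are illustrative; nothing like this exists
in Mistral yet):

import abc

import six  # the usual OpenStack py2/py3 compatibility shim


@six.add_metaclass(abc.ABCMeta)
class ActionProvider(object):
    """Hypothetical extension point a third-party system implements to
    expose its own action catalog and executors to Mistral."""

    @abc.abstractmethod
    def list_actions(self):
        """Return descriptors for every action this provider exposes,
        so Mistral never has to sync them into its own Action table."""

    @abc.abstractmethod
    def get_executor(self, action_name):
        """Return the dedicated executor responsible for running the
        given action, with provider-specific context already applied."""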

Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >