Re: [openstack-dev] [neutron] BPs for Juno-1

2014-05-28 Thread trinath.soman...@freescale.com
Hi-

Our CI FTP server is active now.

You may check the same at http://115.249.211.42/


--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048

-Original Message-
From: trinath.soman...@freescale.com [mailto:trinath.soman...@freescale.com] 
Sent: Wednesday, May 28, 2014 12:28 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] BPs for Juno-1

Hi Kyle-

I'm working on the issues with our FTP server, which hosts the CI testing logs.

Will update the status of the Server in this Email chain.

-
Trinath



From: Kyle Mestery mest...@noironetworks.com
Sent: Wednesday, May 28, 2014 12:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] BPs for Juno-1

I've marked it as Juno-1 for now, but as Salvatore indicated, there are some 
issues with third party testing which need to be addressed before this can be 
merged. It would be a good idea to attend the IRC meeting Anita pointed out as 
well.

Thanks,
Kyle

On Tue, May 27, 2014 at 1:45 PM, trinath.soman...@freescale.com 
trinath.soman...@freescale.com wrote:
 Hi Kyle-

 The BP spec has been approved.

 https://review.openstack.org/#/c/88190/

 Kindly consider my BP spec for Juno-1.

 Thanks to all the code reviewers for taking the time to review my ML2 MD spec
 and helping me improve it.

 -
 Trinaths

 
 From: Kyle Mestery mest...@noironetworks.com
 Sent: Tuesday, May 27, 2014 10:46 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron] BPs for Juno-1

 On Tue, May 27, 2014 at 12:12 PM, Vinay Yadhav vinayyad...@gmail.com wrote:
 Hi,

 I have been working on the port mirroring blueprint, and I was asked
 to submit a neutron spec related to it. I have drafted the spec and am
 ready to commit it to the
 'git://git.openstack.org/openstack/neutron-specs' repository for
 review. Since the blueprint for this is old, I was asked to submit
 the spec for review. Should I also bring the old blueprint back to life
 by linking it to the spec that I will commit?

 You can file a BP in launchpad. We're using those to track progress 
 against milestones for release management reasons. Please see the 
 instructions here [1]. Once the BP is filed in neutron-specs, reviewed 
 and approved, we can track it to milestones. But at this point, this 
 won't make it to Juno-1, given that's 2 weeks away.

 Thanks,
 Kyle

 [1] https://wiki.openstack.org/wiki/Blueprints#Neutron

 We are calling the new spec Tap-as-a-Service

 Cheers,



 On Tue, May 27, 2014 at 4:14 PM, Kyle Mestery 
 mest...@noironetworks.com
 wrote:

 Hi Neutron developers:

 I've spent some time cleaning up the BPs for Juno-1, and they are 
 documented at the link below [1]. There are a large number of BPs 
 currently under review right now in neutron-specs. If we land some 
 of those specs this week, it's possible some of these could make it 
 into Juno-1, pending review cycles and such. But I just wanted to
 highlight that I removed from Juno-1 a large number of BPs which had
 neither specifications linked to them nor specifications actively
 under review in neutron-specs.

 Also, a gentle reminder that the process for submitting 
 specifications to Neutron is documented here [2].

 Thanks, and please reach out to me if you have any questions!

 Kyle

 [1] https://launchpad.net/neutron/+milestone/juno-1
 [2] https://wiki.openstack.org/wiki/Blueprints#Neutron

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Neutron][IPv6] Minutes from May 27 2014

2014-05-28 Thread Xu Han Peng

Shixiong,

Sean and I were thinking about throwing an error when someone is
trying to attach a router to a subnet when the gateway is already set (for
IPv6, it could be an LLA). In the long term, we need to figure out how to
use the link-local address of the gateway port to overwrite the
gateway IP that was set before attaching.
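A minimal sketch of the short-term behavior described above (names and structures are illustrative, not actual Neutron code):

```python
class GatewayConflict(Exception):
    """Raised when the subnet's gateway was already set by other means."""

def attach_router_interface(subnet, router_port):
    # Short-term behavior: refuse to attach when gateway_ip is already set
    # (for IPv6 this could be a link-local address, LLA) and does not
    # belong to the router port being attached.
    gateway = subnet.get("gateway_ip")
    if gateway and gateway not in router_port["ips"]:
        raise GatewayConflict(
            "subnet %s already has gateway %s" % (subnet["id"], gateway))
    # Long-term idea from the thread: use the link-local address of the
    # gateway port to overwrite whatever gateway was set before attaching.
    subnet["gateway_ip"] = router_port["ips"][0]
    return subnet

subnet = {"id": "sub-1", "gateway_ip": None}
print(attach_router_interface(subnet, {"ips": ["fe80::1"]})["gateway_ip"])  # fe80::1
```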


Xuhan

On 05/28/2014 08:53 AM, Shixiong Shang wrote:

I am reading the meeting minutes, and saw the discussion on this BP submitted by
xuhanp:

https://review.openstack.org/#/c/76125/

I don’t see any reason why we need it….do you?

Shixiong



On May 27, 2014, at 12:47 PM, Collins, Sean sean_colli...@cable.comcast.com 
wrote:


http://eavesdrop.openstack.org/meetings/neutron_ipv6/2014/neutron_ipv6.2014-05-27-14.01.html

--
Sean M. Collins


Re: [openstack-dev] [Mistral] Cleaning up configuration settings

2014-05-28 Thread Angus Salkeld

On 17/05/14 02:48, W Chan wrote:
 Regarding config opts for keystone, the keystoneclient middleware already 
 registers the opts at 
 https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/middleware/auth_token.py#L325
  
 under a keystone_authtoken group in the config file.  Currently, Mistral 
 registers the opts again at 
 https://github.com/stackforge/mistral/blob/master/mistral/config.py#L108 
 under a 
 different configuration group.  Should we remove the duplicate from Mistral 
 and 
 refactor the reference to keystone configurations to the keystone_authtoken 
 group?  This seems more consistent.

I think that is the only thing that makes sense. Seems like a bug
waiting to happen having the same options registered twice.

If a user who is used to other projects comes along and configures
keystone_authtoken, will their config take effect?
(How much confusion would that generate?)

I'd suggest just using the one that is registered by keystoneclient.
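A toy illustration (plain Python, deliberately not the real oslo.config API) of why the duplicate registration is confusing: a deployer who sets the standard keystone_authtoken group ends up configuring nothing that the duplicate-group code reads:

```python
# Toy registry: {group_name: {option_name: value}}. Both groups carry the
# same logical option because it was registered twice.
config = {
    "keystone_authtoken": {"auth_uri": None},  # registered by keystoneclient
    "keystone": {"auth_uri": None},            # duplicate registered elsewhere
}

# A user familiar with other OpenStack projects sets the standard group...
config["keystone_authtoken"]["auth_uri"] = "http://keystone:5000/v2.0"

# ...but code that reads the duplicate group still sees no value:
print(config["keystone"]["auth_uri"])            # None
print(config["keystone_authtoken"]["auth_uri"])  # http://keystone:5000/v2.0
```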

- -Angus

 
 
 On Thu, May 15, 2014 at 1:13 PM, W Chan m4d.co...@gmail.com 
 mailto:m4d.co...@gmail.com wrote:
 
 Currently, the various configurations are registered in 
 ./mistral/config.py.
   The configurations are registered when mistral.config is referenced.
   Given the way the code is written, PEP8 throws a "referenced but not used"
 error if mistral.config is referenced but not called in the module.  In
 various use cases, this is avoided by using importutils to import
 mistral.config (i.e.
 
 https://github.com/stackforge/mistral/blob/master/mistral/tests/unit/engine/test_transport.py#L34).
   I want to break down registration code in ./mistral/config.py into
 separate functions for api, engine, db, etc and move the registration 
 closer
 to the module where the configuration is needed.  Any objections?
 
 




Re: [openstack-dev] [Nova] status of quota class

2014-05-28 Thread Mark McLoughlin
On Wed, 2014-02-19 at 10:27 -0600, Kevin L. Mitchell wrote:
 On Wed, 2014-02-19 at 13:47 +0100, Mehdi Abaakouk wrote:
  But 'quota_class' is never set when a nova RequestContext is created.
 
 When I created quota classes, I envisioned the authentication component
 of the WSGI stack setting the quota_class on the RequestContext, but
 there was no corresponding concept in Keystone.  We need some means of
 identifying groups of tenants.
 
  So my question, what is the plan to finish the 'quota class' feature ? 
 
 I currently have no plan to work on that, and I am not aware of any such
 work.

Just for reference, we discussed the fact that this code was unused two
years ago:

  https://lists.launchpad.net/openstack/msg12200.html

and I see Joe has now completed the process of removing it again:

  https://review.openstack.org/75535
  https://review.openstack.org/91480
  https://review.openstack.org/91699
  https://review.openstack.org/91700

Mark.




[openstack-dev] q-agt error

2014-05-28 Thread abhishek jain
Hi

I'm trying to run my q-agt service and getting following error ...


2014-05-28 02:00:51.205 15377 DEBUG neutron.agent.linux.utils [-]
Command: ['ip', '-o', 'link', 'show', 'br-int']
Exit code: 0
Stdout: '28: br-int: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UNKNOWN mode DEFAULT \\link/ether 3a:ed:a9:bd:14:19 brd
ff:ff:ff:ff:ff:ff\n'
Stderr: '' execute /opt/stack/neutron/neutron/agent/linux/utils.py:74
2014-05-28 02:00:51.209 15377 CRITICAL neutron [-] Policy configuration
policy.json could not be found
2014-05-28 02:00:51.209 15377 TRACE neutron Traceback (most recent call
last):
2014-05-28 02:00:51.209 15377 TRACE neutron   File
"/usr/local/bin/neutron-openvswitch-agent", line 10, in <module>
2014-05-28 02:00:51.209 15377 TRACE neutron     sys.exit(main())
2014-05-28 02:00:51.209 15377 TRACE neutron   File
"/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
line 1485, in main
2014-05-28 02:00:51.209 15377 TRACE neutron     agent =
OVSNeutronAgent(**agent_config)
2014-05-28 02:00:51.209 15377 TRACE neutron   File
"/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
line 207, in __init__

Please help regarding this.


Thanks
Abhishek Jain


Re: [openstack-dev] [Horizon] Selenium test fixes

2014-05-28 Thread Matthias Runge
On Wed, May 28, 2014 at 03:45:18PM +1000, Kieran Spear wrote:
 No failures in the last 24 hours. \o/
 

Thank you for looking into this (and apparently fixing it)!
-- 
Matthias Runge mru...@redhat.com



Re: [openstack-dev] [Cinder] Support LVM on a shared LU

2014-05-28 Thread Deepak Shetty
Mitsuhiro,
  Few questions that come to my mind based on your proposal

1) There is a lot of manual work needed here: every time a new host is
added, the admin needs to do FC zoning to ensure that the LU is visible to
the host. Also, the method you mentioned for refreshing (echo '---'  ...)
doesn't work reliably across all storage types, does it?

2) In slide 1-1, how (and who?) ensures that the compute nodes don't
step on each other when using the LVs? In other words, how is it ensured
that LV1 is not used by compute nodes 1 and 2 at the same time?

3) In slide 1-2, you show that LU1 is seen as /dev/sdx on all the
nodes. This is wrong: it can be seen as anything (/dev/sdx on the control
node, sdn on compute 1, sdz on compute 2), so assuming sdx on all nodes is
incorrect.
How are these different device names handled? In short, how does compute
node 2 know that LU1 is actually sdn and not sdz (assuming you had more
than one LU provisioned)?
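On the device-naming point, one conventional answer (not taken from the proposal itself) is to identify an LU by its persistent WWID symlink under /dev/disk/by-id rather than the kernel-assigned sdX name; a hedged sketch:

```python
import os

def stable_device_map(by_id_dir="/dev/disk/by-id"):
    """Map persistent IDs (e.g. wwn-0x... names) to kernel device nodes.

    The sdX name can differ on every host, but the WWN-based symlink name
    is identical on each node that can see the LU.
    """
    if not os.path.isdir(by_id_dir):  # e.g. a host with no udev-managed disks
        return {}
    return {
        name: os.path.realpath(os.path.join(by_id_dir, name))
        for name in sorted(os.listdir(by_id_dir))
    }

print(type(stable_device_map()).__name__)  # dict
```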

4) What about multipath? In most production environments the FC storage will
be multipathed, so you will actually see sdx and sdy on each node, and you
need to use the mpathN device (which is multipathed to sdx and sdy) and NOT
the sd? device to take advantage of the customer's multipath environment.
How do the nodes know which mpathN device to use and which mpathN device
maps to which LU on the array?

5) Doesn't this new proposal also require the compute nodes to be physically
connected (via FC) to the array? That means more wiring and the need for an
FC HBA on each compute node. With LVMiSCSI we don't need FC HBAs on compute
nodes, so you are actually adding the cost of an FC HBA to each compute node
and slowly turning a commodity system into a non-commodity one ;-) (in a way)

6) Last but not least: since you are using one big LU on the array to
host multiple volumes, you cannot take advantage of the premium,
efficient snapshot/clone/mirroring features of the array, since they operate
at the LU level, not the LV level. LV snapshots have limitations (as
mentioned by you in the other thread) and are always less efficient than
array snapshots. Why would someone use a less efficient method when they
invested in an expensive array?

thanx,
deepak



On Tue, May 20, 2014 at 9:01 PM, Mitsuhiro Tanino
mitsuhiro.tan...@hds.com wrote:

  Hello All,



 I’m proposing a feature of LVM driver to support LVM on a shared LU.

 The proposed LVM volume driver provides these benefits.
   - Reduce hardware based storage workload by offloading the workload to
 software based volume operation.
   - Provide quicker volume creation and snapshot creation without storage
 workloads.
   - Enable cinder to use any kind of shared storage volume without a
 storage-specific cinder driver.

   - Better I/O performance using direct volume access via Fibre channel.



 In the attachment pdf, following contents are explained.

   1. Detail of Proposed LVM volume driver

   1-1. Big Picture

   1-2. Administrator preparation

   1-3. Work flow of volume creation and attachment

   2. Target of Proposed LVM volume driver

   3. Comparison of Proposed LVM volume driver



 Could you review the attachment?

 Any comments, questions, additional ideas would be appreciated.





 Also there are blueprints, wiki and patches related to the slide.

 https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage

 https://blueprints.launchpad.net/nova/+spec/lvm-driver-for-shared-storage


 https://wiki.openstack.org/wiki/Cinder/NewLVMbasedDriverForSharedStorageInCinder

 https://review.openstack.org/#/c/92479/

 https://review.openstack.org/#/c/92443/



 Regards,

 Mitsuhiro Tanino mitsuhiro.tan...@hds.com

  *HITACHI DATA SYSTEMS*

  c/o Red Hat, 314 Littleton Road, Westford, MA 01886



[openstack-dev] Specifying file encoding

2014-05-28 Thread Martin Geisler
Hi everybody,

I'm trying to get my feet wet with OpenStack development, so I recently
tried to submit some small patches. One small thing I noticed was that
some files used

  # -*- encoding: utf-8 -*-

to specify the file encoding for both Python and Emacs. Unfortunately,
Emacs expects you to set coding, not encoding. Python is fine with
either. I submitted a set of patches for this:

* https://review.openstack.org/95862
* https://review.openstack.org/95864
* https://review.openstack.org/95865
* https://review.openstack.org/95869
* https://review.openstack.org/95871
* https://review.openstack.org/95880
* https://review.openstack.org/95882
* https://review.openstack.org/95886

It was pointed out to me that such a change ought to be coordinated
better via bug(s) or the mailinglist, so here I am :)

It was also suggested that the lines should be removed completely with a
reference to this thread:

  http://lists.openstack.org/pipermail/openstack-dev/2013-October/017353.html

Unfortunately, the issue is only somewhat similar: the thread is about
removing Vim modelines:

  # vim: tabstop=4 shiftwidth=4 softtabstop=4

My patches is about using an Emacs-friendly way to specify the file
encoding. PEP 263 (http://legacy.python.org/dev/peps/pep-0263/) explains
that is it a SyntaxError not to specify an encoding when a file has
non-ASCII characters.

Many of the files (but not all) have a © copyright symbol in the file
header and so the encoding line cannot be removed from these files
unless the copyright symbol is removed at the same time.
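The distinction is small but concrete: PEP 263's check matches any comment containing coding[:=] in the first two lines, so Python accepts both spellings, while Emacs only honors the exact variable name coding. A sketch of a file using the form that satisfies both tools, with a non-ASCII character that makes the declaration necessary on Python 2:

```python
# -*- coding: utf-8 -*-
# "coding" (not "encoding") is what Emacs looks for; Python's PEP 263
# check matches "coding[:=]" anywhere in this comment, so this single
# line keeps both tools happy.
COPYRIGHT = u"© 2014 OpenStack contributors"  # illustrative header text
print(COPYRIGHT)
```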

I saw references to hacking in the Vim modeline thread. At first I
thought it might be a tool used to check the style, but now I think it
migth just be this document:

  http://docs.openstack.org/developer/hacking/


I look forward to hearing your comments!

-- 
Martin Geisler

https://plus.google.com/+MartinGeisler/




Re: [openstack-dev] [Horizon] Selenium test fixes

2014-05-28 Thread Zhenguo Niu
Thank you for the fix.


On Wed, May 28, 2014 at 3:09 PM, Matthias Runge mru...@redhat.com wrote:

 On Wed, May 28, 2014 at 03:45:18PM +1000, Kieran Spear wrote:
  No failures in the last 24 hours. \o/
 

 Thank you for looking into this (and apparently fixing it)!
 --
 Matthias Runge mru...@redhat.com





-- 
Best Regards,
Zhenguo Niu


Re: [openstack-dev] [MagnetoDB] Bulk load API draft

2014-05-28 Thread Illia Khudoshyn
Hi Dima,

Sounds good, thank you for the point.


On Tue, May 27, 2014 at 7:34 PM, Dmitriy Ukhlov dukh...@mirantis.com wrote:

 Hi Illia,

  Looks good. But I suggest returning all of these fields for successful
  requests as well as for error responses:

 read: string,
 processed: string,
 failed: string,

  but leave the next fields optional, filling them in only for an error
  response (failed > 0), to specify what exactly happened:

 last_read:
 errors (maybe not processed will be better)
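As a sketch, the two response shapes being discussed might look like this (field names taken from the thread, values purely illustrative):

```python
import json

# Success: the counters are always present.
success = {"read": "1000", "processed": "1000", "failed": "0"}

# Error (failed > 0): the optional fields pinpoint what happened.
error = {
    "read": "1000",
    "processed": "950",
    "failed": "50",
    "last_read": "row-999",                # illustrative identifier
    "errors": ["row-950: malformed key"],  # or "not processed", per Dmitriy
}

print(json.dumps(success, sort_keys=True))
# {"failed": "0", "processed": "1000", "read": "1000"}
```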



 On Tue, May 27, 2014 at 3:39 PM, Illia Khudoshyn 
 ikhudos...@mirantis.com wrote:

 Hi openstackers,

 While working on bulk load, I found the previously proposed batch-oriented
 asynchronous approach both resource-consuming on the server side and somewhat
 complicated to use.
 So I tried to outline a more straightforward streaming way of
 uploading data.

 By the link below you can found a draft for a new streaming API
 https://wiki.openstack.org/wiki/MagnetoDB/streamingbulkload.

 Any feedback is welcome as usual.



 On Wed, May 14, 2014 at 5:04 PM, Illia Khudoshyn ikhudos...@mirantis.com
  wrote:

 Hi openstackers,

 I'm working on bulk load for MagnetoDB, the facility for inserting large
 amounts of data, like,  millions of rows, gigabytes of data. Below is the
 link to draft API description.


 https://wiki.openstack.org/wiki/MagnetoDB/bulkload#.5BDraft.5D_MagnetoDB_Bulk_Load_workflow_and_API

 Any feedback is welcome.

 --

 Best regards,

 Illia Khudoshyn,
 Software Engineer, Mirantis, Inc.



 38, Lenina ave. Kharkov, Ukraine

 www.mirantis.com http://www.mirantis.ru/

 www.mirantis.ru



 Skype: gluke_work

 ikhudos...@mirantis.com









 --
 Best regards,
 Dmitriy Ukhlov
 Mirantis Inc.







Re: [openstack-dev] Xen and Libvirt.

2014-05-28 Thread Álvaro López García
On Wed 28 May 2014 (01:08), Tian, Shuangtai wrote:
 Hi,

Hi.

   Does anyone use the latest libvirt + xen + openstack? In my Ubuntu
 environment, it cannot create VMs, because blktap2 does not work.

Which kernel are you using? Why does blktap2 not work?

We're using libvirt+xen on Ubuntu, with the blktap2-dkms package and a
3.5 kernel (we couldn't use it with a 3.8 kernel due to this bug, which
seems resolved now [1]).

[1] https://bugs.launchpad.net/ubuntu/+source/blktap-dkms/+bug/1078843

Regards,
-- 
Álvaro López García  al...@ifca.unican.es
Instituto de Física de Cantabria http://alvarolopez.github.io
Ed. Juan Jordá, Campus UC  tel: (+34) 942 200 969
Avda. de los Castros s/n
39005 Santander (SPAIN)
_
http://xkcd.com/571/



Re: [openstack-dev] [Openstack-docs] [Heat][Documentation] Heat template documentation

2014-05-28 Thread Julien Danjou
On Tue, May 27 2014, Gauvain Pocentek wrote:

 So my feeling is that we should work on the tools to convert RST (or
 whatever format, but RST seems to be the norm for openstack projects) to
 docbook, and generate our online documentation from there. There are tools
 that can help us doing that, and I don't see an other solution that would
 make us move forward.

 Anne, you talked about experimenting with the end user guide, and after the
 discussion and the technical info brought by Doug, Steve and Steven, I now
 think it is worth trying.

I think it's a very good idea.

FWIW, AsciiDoc¹ has a nice markup format that can be converted to
Docbook. I know it's not RST, but it's still better than writing XML
IMHO.


¹  http://www.methods.co.nz/asciidoc/

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info




Re: [openstack-dev] Xen and Libvirt.

2014-05-28 Thread Álvaro López García
On Tue 27 May 2014 (10:11), Daniel P. Berrange wrote:
 On Mon, May 26, 2014 at 09:45:07AM -0400, Alvin Starr wrote:
  
  What is the status of Xen and libvirt under Openstack?
  I noticed bits of discussions about deprecating the interface but I did not
  see any clear answers.
 
 There is *no* intention to deprecated it. It was merely marked as being
 in Tier 3 support state, primarily due to lack of any automated testing.
 The lack of CI testing is being addressed, hopefully to be up and running
 before end of Juno.

This is great news. We tried to invest some effort on libvirt + Xen,
but unfortunately we didn't have enough resources to commit for this by
ourselves. Who is taking care of this?

Regards,
-- 
Álvaro López García  al...@ifca.unican.es
Instituto de Física de Cantabria http://alvarolopez.github.io
Ed. Juan Jordá, Campus UC  tel: (+34) 942 200 969
Avda. de los Castros s/n
39005 Santander (SPAIN)
_
Optimization hinders evolution. -- Alan Perlis



Re: [openstack-dev] Xen and Libvirt.

2014-05-28 Thread Daniel P. Berrange
On Wed, May 28, 2014 at 10:26:18AM +0200, Álvaro López García wrote:
 On Tue 27 May 2014 (10:11), Daniel P. Berrange wrote:
  On Mon, May 26, 2014 at 09:45:07AM -0400, Alvin Starr wrote:
   
   What is the status of Xen and libvirt under Openstack?
   I noticed bits of discussions about deprecating the interface but I did 
   not
   see any clear answers.
  
  There is *no* intention to deprecated it. It was merely marked as being
  in Tier 3 support state, primarily due to lack of any automated testing.
   The lack of CI testing is being addressed, hopefully to be up and running
  before end of Juno.
 
 This is great news. We tried to invest some effort on libvirt + Xen,
 but unfortunately we didn't have enough resources to commit for this by
 ourselves. Who is taking care of this?

A combination of folks from B1 systems, Citrix and SUSE indicated that
they'd work together on CI. See minutes from the previous libvirt
team meeting

https://wiki.openstack.org/wiki/Meeting/Libvirt/Minutes/20140520
http://eavesdrop.openstack.org/meetings/libvirt/2014/libvirt.2014-05-20-15.00.html

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] Designate Incubation Request

2014-05-28 Thread Mac Innes, Kiall
On Tue, 2014-05-27 at 17:42 -0700, Joe Gordon wrote:


 On Sat, May 24, 2014 at 10:24 AM, Hayes, Graham graham.ha...@hp.com
 wrote:
 
 * You mention nova's dns capabilities as not being adequate one of the
 incubation requirements is:

   Project should not inadvertently duplicate functionality present in other
   OpenStack projects. If they do, they should have a clear plan and timeframe
   to prevent long-term scope duplication

 So what is the plan for this?

Our current belief is that the DNS functionality in nova sees little to
no use; with replacement functionality (Designate) incubated, I would
personally like to see it deprecated and removed.

Additionally, as the functionality is driver based, we can likely
implement a driver that forwards requests to Designate during the
deprecation period.

 * Can you expand on why this doesn't make sense in neutron when things
 like LBaaS do.

LBaaS (and VPNaaS, FWaaS etc) certainly feel like a good fit inside
Neutron. Their core functionality revolves around physically
transporting or otherwise inspecting bits moving from one place to
another and their primary interfaces are Neutron ports, leading to a
desire for tight integration.

Designate, and authoritative DNS in general, is closer to a phone book.
We have no involvement in the transportation of bits, and behave much
closer to a specialized database than any traditional networking gear.

 * Your application doesn't cover all the items raised in the
 incubation requirements list. For example the QA requirement of
 Project must have a basic devstack-gate job set up which as far as I
 can tell isn't really there, although there appears to be a devstack
 based job run as third party which in at least one case didn't run on
 a merged patch (https://review.openstack.org/#/c/91115/)
 
The application is based on the request template, which for better or
 worse doesn't map directly to the governance document :)

If there are other requirements beyond the devstack-gate one you
mentioned, please ask and we'll respond as best we can!

You're correct that we do not yet have a DevStack gate running directly
in the CI system, and that we do have a 3rd party Jenkins running
DevStack with Designate and some basic functional tests against our
repositories.

The 3rd party jobs were originally set up before DevStack supported
plugins (or at least, before we knew it did!), and were based on a fork
of DevStack which made using the official CI system difficult.

After DevStack gained plugin support, we converted our fork to a plugin,
and looked into getting the official CI system to run DevStack jobs with
our DevStack plugin. This again proved difficult, so the status quo was
left in place. We're looking forward to being able to merge our plugin
into DevStack so we can shutdown the 3rd party tests :)

Re the DevStack jobs not running on a merged patch, after the recent
Gerrit updates, the devstack job was failing for period of time due to
the change in how gerrit accepts reviews from 3rd party systems. This
was fixed recently, and all patches are again running through these
jobs.

Please keep the questions coming :)

Thanks,
Kiall


Re: [openstack-dev] [Neutron] Introducing task oriented workflows

2014-05-28 Thread Hirofumi Ichihara
Hi, Salvatore

I think neutron needs task management too.

IMO, the problem of neutron resource status should be discussed individually.
Task management enables neutron to roll back an API operation, clean up
leftover resources, and retry the operation within a single API process.
Of course, we can use tasks to correct inconsistencies between the neutron
DB (resource status) and the actual resource configuration.
But we should add resource status management to some resources before tasks.
For example, LBaaS has resource status management[1].
The fact that neutron routers and ports don't manage status is a basic problem.

 For instance a port is UP if it's been wired by the OVS agent; it often 
 does not tell us whether the actual resource configuration is exactly the 
 desired one in the database. For instance, if the ovs agent fails to apply 
 security groups to a port, the port stays ACTIVE and the user might never 
 know there was an error and the actual state diverged from the desired one.
So, we should solve this problem with resource status management like
LBaaS's, rather than with tasks.

I don't deny the need for tasks, but we will need to discuss tasks over the
long term; I hope the status management will be fixed right away.

[1] 
https://wiki.openstack.org/wiki/Neutron/LBaaS/API_1.0#Synchronous_versus_Asynchronous_Plugin_Behavior

thanks,
Hirofumi

-
Hirofumi Ichihara
NTT Software Innovation Center
Tel:+81-422-59-2843  Fax:+81-422-59-2699
Email:ichihara.hirof...@lab.ntt.co.jp
-


On 2014/05/23, at 7:34, Salvatore Orlando sorla...@nicira.com wrote:

 As most of you probably know already, this is one of the topics discussed 
 during the Juno summit [1].
 I would like to kick off the discussion in order to move towards a concrete 
 design.
 
 Preamble: Considering the meat that's already on the plate for Juno, I'm not 
 advocating that whatever comes out of this discussion should be put on the 
 Juno roadmap. However, preparation (or yak shaving) activities that should be 
 identified as pre-requisite might happen during the Juno time frame assuming 
 that they won't interfere with other critical or high priority activities.
 This is also a very long post; the TL;DR summary is that I would like to 
 explore task-oriented communication with the backend and how it should be 
 reflected in the API - gauging how the community feels about this, and 
 collecting feedback regarding design, constructs, and related 
 tools/techniques/technologies.
 
 At the summit a broad range of items were discussed during the session, and 
 most of them have been reported in the etherpad [1].
 
 First, I think it would be good to clarify whether we're advocating a 
 task-based API, a workflow-oriented operation processing, or both.
 
 -- About a task-based API
 
 In a task-based API, most PUT/POST API operations would return tasks rather 
 than neutron resources, and users of the API will interact directly with 
 tasks.
 I put an example in [2] to avoid cluttering this post with too much text.
 As the API operation simply launches a task, the database state won't be 
 updated until the task completes.
 
 Needless to say, this would be a radical change to Neutron's API; it should 
 be carefully evaluated and not considered for the v2 API.
 Even though this approach clearly has a few benefits, I don't think it will 
 improve the usability of the API at all. Indeed it will limit the ability to 
 operate on a resource while a task is executing on it, and will also require 
 Neutron API users to change the paradigm they use to interact with the API; 
 not to mention that it would look odd if Neutron were the only API endpoint 
 in OpenStack operating this way.
 For the Neutron API, I think that its operations should still be manipulating 
 the database state, and possibly return immediately after that (*) - a task, 
 or to better say a workflow will then be started, executed asynchronously, 
 and update the resource status on completion.
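A minimal, self-contained sketch of the pattern described above: the API call persists the desired state and returns immediately, while an asynchronous worker updates the resource status on completion. All names here (`update_router`, the in-memory store) are illustrative, not Neutron's actual plugin interface.

```python
import threading

# In-memory stand-in for the Neutron DB; real code would go through SQLAlchemy.
RESOURCES = {}
LOCK = threading.Lock()

def update_router(router_id, new_attrs, backend_task):
    """Persist the desired state synchronously, then hand off to a workflow."""
    with LOCK:
        RESOURCES[router_id] = dict(new_attrs, status='PENDING_UPDATE')
        response = dict(RESOURCES[router_id])  # snapshot before dispatching
    worker = threading.Thread(target=_run_workflow,
                              args=(router_id, backend_task))
    worker.start()
    # The API response reflects the database state, not the backend state.
    return response, worker

def _run_workflow(router_id, backend_task):
    try:
        backend_task()        # e.g. configure the backend / notify the agent
        status = 'ACTIVE'
    except Exception:
        status = 'ERROR'      # the divergence becomes visible to the user
    with LOCK:
        RESOURCES[router_id]['status'] = status
```

Note that a backend failure now surfaces as an ERROR status instead of a silently stale ACTIVE one, which is exactly the gap Hirofumi points out above.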
 
 -- On workflow-oriented operations
 
 The benefits when it comes to easily controlling operations and ensuring 
 consistency in case of failure are obvious. For what it's worth, I have been 
 experimenting with introducing this kind of capability in the NSX plugin 
 over the past few months. I've been using celery as a task queue, and writing 
 the task management code from scratch - only to realize that the same 
 features I was implementing are already supported by taskflow.
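For readers unfamiliar with taskflow, the core semantics in play here — run tasks in order and, on failure, revert the already-completed ones in reverse — can be sketched in a few lines of plain Python. This is an illustration of the idea only, not taskflow's actual API:

```python
class Task:
    """Tiny stand-in for a taskflow-style task: an execute step plus a revert hook."""
    def __init__(self, name, execute, revert=None):
        self.name = name
        self.execute = execute
        self.revert = revert

def run_flow(tasks):
    """Run tasks in order; on failure, revert the completed ones in reverse."""
    done = []
    for t in tasks:
        try:
            t.execute()
            done.append(t)
        except Exception:
            for prev in reversed(done):
                if prev.revert:
                    prev.revert()
            raise  # surface the original failure after cleanup
```

taskflow adds the parts that matter in production on top of this skeleton: persistence of task state, resumption after a crash, and parallel/graph flows in addition to linear ones.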
 
 I think that all parts of Neutron API can greatly benefit from introducing a 
 flow-based approach.
 Some examples:
 - pre/post commit operations in the ML2 plugin can be orchestrated a lot 
 better as a workflow, articulating operations on the various drivers in a 
 graph
 - operation spanning multiple plugins (eg: add router interface) could be 
 simplified using clearly defined tasks for the L2 and L3 parts
 - it would be finally possible to properly 

Re: [openstack-dev] q-agt error

2014-05-28 Thread abhishek jain
Hi Asaf

Thanks
I'm able to solve the above error, but I'm stuck on another one.


Below is the code that I'm referring to:

 vim /opt/stack/nova/nova/virt/disk/vfs/api.py

#

hasGuestfs = False

try:
    LOG.debug(_("Trying to import guestfs"))
    importutils.import_module("guestfs")
    hasGuestfs = True
except Exception:
    pass

if hasGuestfs:
    LOG.debug(_("Using primary VFSGuestFS"))
    return importutils.import_object(
        "nova.virt.disk.vfs.guestfs.VFSGuestFS",
        imgfile, imgfmt, partition)
else:
    LOG.debug(_("Falling back to VFSLocalFS"))
    return importutils.import_object(
        "nova.virt.disk.vfs.localfs.VFSLocalFS",
        imgfile, imgfmt, partition)

###

When I launch a VM from the controller node onto the compute node, the
nova-compute log on the compute node displays "Falling back to
VFSLocalFS" and the VM gets stuck in the spawning state.
However, when I launch a VM onto the controller node from the
controller node itself, the nova-compute log on the controller node
displays "Using primary VFSGuestFS" and I'm able to launch the VM on the
controller node.
Is there any kernel module or package that I need to
enable? Please help.
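Per the snippet above, VFSGuestFS is chosen only when the guestfs Python bindings can be imported, so "Falling back to VFSLocalFS" simply means the compute node is missing those bindings (usually packaged with libguestfs, often as python-guestfs, though the package name varies by distro). The selection logic boils down to:

```python
import importlib.util

def vfs_backend(module='guestfs'):
    """Mirror the api.py logic: prefer VFSGuestFS when guestfs imports cleanly."""
    if importlib.util.find_spec(module) is not None:
        return 'VFSGuestFS'
    return 'VFSLocalFS'
```

Running `python -c "import guestfs"` on both nodes is a quick way to confirm which side is missing the bindings.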




On Wed, May 28, 2014 at 3:00 PM, Assaf Muller amul...@redhat.com wrote:



 - Original Message -
  Hi
 
  I'm trying to run my q-agt service and getting following error ...
 

 You've stumbled on the development mailing list. You will have better
 luck with ask.openstack.org or the users mailing list.

 Good luck!

 
  2014-05-28 02:00:51.205 15377 DEBUG neutron.agent.linux.utils [-]
  Command: ['ip', '-o', 'link', 'show', 'br-int']
  Exit code: 0
  Stdout: '28: br-int: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
  UNKNOWN mode DEFAULT \\ link/ether 3a:ed:a9:bd:14:19 brd
  ff:ff:ff:ff:ff:ff\n'
  Stderr: '' execute /opt/stack/neutron/neutron/agent/linux/utils.py:74
  2014-05-28 02:00:51.209 15377 CRITICAL neutron [-] Policy configuration
  policy.json could not be found
  2014-05-28 02:00:51.209 15377 TRACE neutron Traceback (most recent call
  last):
  2014-05-28 02:00:51.209 15377 TRACE neutron File
  "/usr/local/bin/neutron-openvswitch-agent", line 10, in <module>
  2014-05-28 02:00:51.209 15377 TRACE neutron sys.exit(main())
  2014-05-28 02:00:51.209 15377 TRACE neutron File
 
 /opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
  line 1485, in main
  2014-05-28 02:00:51.209 15377 TRACE neutron agent =
  OVSNeutronAgent(**agent_config)
  2014-05-28 02:00:51.209 15377 TRACE neutron File
 
 /opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
  line 207, in __init__
 
  Please help regarding this.
 
 
  Thanks
  Abhishek Jain
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] q-agt error

2014-05-28 Thread Assaf Muller


- Original Message -
 Hi
 
 I'm trying to run my q-agt service and getting following error ...
 

You've stumbled on the development mailing list. You will have better
luck with ask.openstack.org or the users mailing list.

Good luck!

 
 2014-05-28 02:00:51.205 15377 DEBUG neutron.agent.linux.utils [-]
 Command: ['ip', '-o', 'link', 'show', 'br-int']
 Exit code: 0
 Stdout: '28: br-int: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
 UNKNOWN mode DEFAULT \\ link/ether 3a:ed:a9:bd:14:19 brd
 ff:ff:ff:ff:ff:ff\n'
 Stderr: '' execute /opt/stack/neutron/neutron/agent/linux/utils.py:74
 2014-05-28 02:00:51.209 15377 CRITICAL neutron [-] Policy configuration
 policy.json could not be found
 2014-05-28 02:00:51.209 15377 TRACE neutron Traceback (most recent call
 last):
 2014-05-28 02:00:51.209 15377 TRACE neutron File
 "/usr/local/bin/neutron-openvswitch-agent", line 10, in <module>
 2014-05-28 02:00:51.209 15377 TRACE neutron sys.exit(main())
 2014-05-28 02:00:51.209 15377 TRACE neutron File
 /opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 1485, in main
 2014-05-28 02:00:51.209 15377 TRACE neutron agent =
 OVSNeutronAgent(**agent_config)
 2014-05-28 02:00:51.209 15377 TRACE neutron File
 /opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 207, in __init__
 
 Please help regarding this.
 
 
 Thanks
 Abhishek Jain
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Cleaning up configuration settings

2014-05-28 Thread Renat Akhmerov

On 28 May 2014, at 13:51, Angus Salkeld angus.salk...@rackspace.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 On 17/05/14 02:48, W Chan wrote:
 Regarding config opts for keystone, the keystoneclient middleware already 
 registers the opts at 
 https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/middleware/auth_token.py#L325
  
 under a keystone_authtoken group in the config file.  Currently, Mistral 
 registers the opts again at 
 https://github.com/stackforge/mistral/blob/master/mistral/config.py#L108 
 under a 
 different configuration group.  Should we remove the duplicate from Mistral 
 and 
 refactor the reference to keystone configurations to the keystone_authtoken 
 group?  This seems more consistent.
 
 I think that is the only thing that makes sense. It seems like a bug
 waiting to happen, having the same options registered twice.
 
 If a user who is used to other projects comes along and configures
 keystone_authtoken, will their config take effect?
 (How much confusion will that generate?)
 
 I'd suggest just using the one that is registered by keystoneclient.

Ok, I had a feeling it was needed for some reason. But after taking another 
look at this, I think this really is a bug. Let's do it.
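The hazard Angus describes can be shown with a toy option registry. oslo.config's real behavior differs in its details, but the idea is the same: two components registering the same option under the same group with conflicting definitions is an error waiting to surface.

```python
class DuplicateOptError(Exception):
    pass

class OptRegistry:
    """Toy registry illustrating why double registration is fragile."""
    def __init__(self):
        self._opts = {}

    def register(self, group, name, default=None):
        key = (group, name)
        if key in self._opts and self._opts[key] != default:
            # Two components disagree about the same option: hard error.
            raise DuplicateOptError('%s.%s' % (group, name))
        self._opts[key] = default
```

Registering an identical definition twice is benign; the moment Mistral's copy of a keystone option drifts from keystoneclient's, the conflict appears — which is why deduplicating to the keystoneclient-registered group is the safer path.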

Thanks guys
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MagnetoDB] PTL Candidacy

2014-05-28 Thread Ilya Sviridov
Hello openstackers,

I'd like to announce  my candidacy as PTL of MagnetoDB[1] project.

A few words about me:
I'm a software developer at Mirantis. I've been working with OpenStack for a
bit more than a year. I started with the integration and customization of
Heat for a customer. After that I contributed to Trove, and now I'm working on
MagnetoDB 100% [2] of my time.

I started with MagnetoDB as an idea [3] at the Hong Kong summit, and now it is
a project with a community from two major companies [4], with regular
releases, a roadmap [5], and plans for incubation.

As PTL of MagnetoDB I'll continue my work on building a great environment
for contributors, making MagnetoDB a successful software product, and
eventually getting it integrated into OpenStack.

[1] https://launchpad.net/magnetodb
[2] http://www.stackalytics.com/report/contribution/magnetodb/90
[3]
http://lists.openstack.org/pipermail/openstack-dev/2013-October/017930.html
[4]
http://www.stackalytics.com/?release=all&metric=commits&project_type=stackforge&module=magnetodb&company=&user_id=
[5] https://etherpad.openstack.org/p/magnetodb-juno-roadmap

Thank you,
Ilya Sviridov
isviridov @ FreeNode
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone

2014-05-28 Thread Tizy Ninan
Hi,

Thanks for the reply.
I am still not successful in integrating Keystone with Active Directory.
Could you please provide some clarification on the following questions?
1. Currently, my Active Directory schema does not have projects/tenants and
roles OUs. Do I need to create projects/tenants and roles OUs in the Active
Directory schema for Keystone to authenticate against Active Directory?
2. We added values to user_tree_dn. Do the tenant_tree_dn, role_tree_dn and
group_tree_dn fields also need to be filled in for authentication?
3. How will the mapping of a user to a project/tenant and role be done if I
use Active Directory to authenticate only the users, and keep using the
already existing projects and roles tables in the MySQL database?

Kindly provide me some insight into these questions.

Thanks,
Tizy
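One Havana-era approach to question 3 is to keep identity (users) in LDAP while leaving assignment (projects/tenants and roles) in SQL, which also means no projects/roles OUs are needed in AD (questions 1 and 2). The fragment below is a hedged sketch only: the driver paths and the AD attribute names and DNs (`sAMAccountName`, `OU=Users,...`) are illustrative assumptions and should be verified against your installed keystone.

```ini
[identity]
# Users are authenticated against Active Directory...
driver = keystone.identity.backends.ldap.Identity

[assignment]
# ...while projects/tenants and roles stay in the existing MySQL tables.
driver = keystone.assignment.backends.sql.Assignment

[ldap]
url = ldap://ad.example.com
user = CN=svc-keystone,OU=ServiceAccounts,DC=example,DC=com
password = secret
suffix = DC=example,DC=com
query_scope = sub
user_tree_dn = OU=Users,DC=example,DC=com
user_objectclass = person
user_id_attribute = cn
user_name_attribute = sAMAccountName
user_mail_attribute = mail
# Read-only bind: disable every write operation against AD.
user_allow_create = False
user_allow_update = False
user_allow_delete = False
```

With the assignment driver on SQL, the tenant_* and role_* options in [ldap] should not need to be set at all.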

On Tue, May 20, 2014 at 8:27 AM, Adam Young ayo...@redhat.com wrote:

  On 05/16/2014 05:08 AM, Tizy Ninan wrote:

  Hi,

  We have an openstack Havana deployment on CentOS 6.4 and nova-network
 network service installed using Mirantis Fuel v4.0.
 We are trying to integrate the openstack setup with the Microsoft Active
 Directory(LDAP server). I  only have  a read access to the LDAP server.
 What will be the minimum changes needed to be made under the [ldap] tag in
 keystone.conf file?Can you please specify what variables need to be set and
 what should be the values for each variable?

  [ldap]
 # url = ldap://localhost
 # user = dc=Manager,dc=example,dc=com
 # password = None
 # suffix = cn=example,cn=com
 # use_dumb_member = False
 # allow_subtree_delete = False
 # dumb_member = cn=dumb,dc=example,dc=com

  # Maximum results per page; a value of zero ('0') disables paging
 (default)
 # page_size = 0

  # The LDAP dereferencing option for queries. This can be either 'never',
 # 'searching', 'always', 'finding' or 'default'. The 'default' option falls
 # back to using default dereferencing configured by your ldap.conf.
 # alias_dereferencing = default

  # The LDAP scope for queries, this can be either 'one'
 # (onelevel/singleLevel) or 'sub' (subtree/wholeSubtree)
 # query_scope = one

  # user_tree_dn = ou=Users,dc=example,dc=com
 # user_filter =
 # user_objectclass = inetOrgPerson
 # user_id_attribute = cn
 # user_name_attribute = sn
 # user_mail_attribute = email
 # user_pass_attribute = userPassword
 # user_enabled_attribute = enabled
 # user_enabled_mask = 0
 # user_enabled_default = True
 # user_attribute_ignore = default_project_id,tenants
 # user_default_project_id_attribute =
 # user_allow_create = True
 # user_allow_update = True
 # user_allow_delete = True
 # user_enabled_emulation = False
 # user_enabled_emulation_dn =

  # tenant_tree_dn = ou=Projects,dc=example,dc=com
 # tenant_filter =
 # tenant_objectclass = groupOfNames
 # tenant_domain_id_attribute = businessCategory
 # tenant_id_attribute = cn
 # tenant_member_attribute = member
 # tenant_name_attribute = ou
 # tenant_desc_attribute = desc
 # tenant_enabled_attribute = enabled
 # tenant_attribute_ignore =
 # tenant_allow_create = True
 # tenant_allow_update = True
 # tenant_allow_delete = True
 # tenant_enabled_emulation = False
 # tenant_enabled_emulation_dn =

  # role_tree_dn = ou=Roles,dc=example,dc=com
 # role_filter =
 # role_objectclass = organizationalRole
 # role_id_attribute = cn
 # role_name_attribute = ou
 # role_member_attribute = roleOccupant
 # role_attribute_ignore =
 # role_allow_create = True
 # role_allow_update = True
 # role_allow_delete = True

  # group_tree_dn =
 # group_filter =
 # group_objectclass = groupOfNames
 # group_id_attribute = cn
 # group_name_attribute = ou
 # group_member_attribute = member
 # group_desc_attribute = desc
 # group_attribute_ignore =
 # group_allow_create = True
 # group_allow_update = True
 # group_allow_delete = True

  Kindly help us to resolve the issue.

  Thanks,
 Tizy



 ___
 OpenStack-dev mailing 
 listOpenStack-dev@lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 http://www.youtube.com/watch?v=w3Yjlmb_68g


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Selenium test fixes

2014-05-28 Thread Julie Pichon
On 28/05/14 06:45, Kieran Spear wrote:
 No failures in the last 24 hours. \o/

That's awesome, thanks Kieran for looking into this!

Julie

 
 On 26 May 2014 23:44, Kieran Spear kisp...@gmail.com wrote:
 Hi peeps,

 Could I ask reviewers to prioritise the following:

 https://review.openstack.org/#/c/95392/

 It should eliminate our selenium gate failures, which seem to be happening 
 many times per day now.

 Cheers,
 Kieran

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Gerrit downtime on May 23 for project renames

2014-05-28 Thread Sergey Lukjanov
Regarding Launchpad project renames: only climate → blazar.

On Wed, May 28, 2014 at 4:23 AM, Stefano Maffulli stef...@openstack.org wrote:
 On 05/23/2014 03:58 PM, James E. Blair wrote:
 This is complete.  The actual list of renamed projects is:
 [...]

 have any of these projects changed also their launchpad name?

 thanks
 .stef

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev][Neutron] Port Mirroring Extension in Neutron

2014-05-28 Thread Vinay Yadhav
Hi all,

I am experiencing an issue when I try to commit the spec for review.

This is the message that I get:

 fatal: ICLA contributor agreement requires current contact information.

Please review your contact information:

  https://review.openstack.org/#/settings/contact


fatal: The remote end hung up unexpectedly
ericsson@ericsson-VirtualBox:~/openstack_neutron/checkin/neutron-specs/specs/juno$
git review
fatal: ICLA contributor agreement requires current contact information.

Please review your contact information:

  https://review.openstack.org/#/settings/contact


fatal: The remote end hung up unexpectedly

I am trying to see what has gone wrong with my account.

In the meantime, please use the attached spec for the IRC meeting today.

I will try to fix the issue with my account soon and commit it for review.

Cheers,

main(i){putchar((5852758((i-1)/2)*8)-!(1i)*'\r')^89main(++i);}


On Wed, May 21, 2014 at 7:38 PM, Anil Rao anil@gigamon.com wrote:

 Thanks Vinay. I’ll review the spec and get back with my comments soon.



 -Anil



 *From:* Vinay Yadhav [mailto:vinayyad...@gmail.com]
 *Sent:* Wednesday, May 21, 2014 10:23 AM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Openstack-dev][Neutron] Port Mirroring
 Extension in Neutron



 Hi,



 I am attaching the first version of the neutron spec for Tap-as-a-Service
 (Port Mirroring).



 It will be formally commited soon in git.



 Cheers,

 main(i){putchar((5852758((i-1)/2)*8)-!(1i)*'\r')^89main(++i);}



 On Tue, May 20, 2014 at 7:12 AM, Kanzhe Jiang kanzhe.ji...@bigswitch.com
 wrote:

 Vinay's proposal was based on OVS's mirroring feature.



 On Mon, May 19, 2014 at 9:11 PM, YAMAMOTO Takashi yamam...@valinux.co.jp
 wrote:

  Hi,
 
  I am Vinay, working with Ericsson.
 
  I am interested in the following blueprint regarding port mirroring
  extension in neutron:
  https://blueprints.launchpad.net/neutron/+spec/port-mirroring
 
  I am close to finishing an implementation for this extension in OVS
 plugin
  and would be submitting a neutron spec related to the blueprint soon.

 does your implementation use OVS' mirroring functionality?
 or is it flow-based?

 YAMAMOTO Takashi


 
  I would like to know other who are also interested in introducing Port
  Mirroring extension in neutron.
 
  It would be great if we can discuss and collaborate in development and
  testing this extension
 
  I am currently attending the OpenStack Summit in Atlanta, so if any of
 you
  are interested in the blueprint, we can meet here in the summit and
 discuss
  how to proceed with the blueprint.
 
  Cheers,
  main(i){putchar((5852758((i-1)/2)*8)-!(1i)*'\r')^89main(++i);}

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 --
 Kanzhe Jiang

 MTS at BigSwitch


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




tap-as-a-service.rst
Description: Binary data
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Specifying file encoding

2014-05-28 Thread Pádraig Brady
On 05/28/2014 08:16 AM, Martin Geisler wrote:
 Hi everybody,
 
 I'm trying to get my feet wet with OpenStack development, so I recently
 tried to submit some small patches. One small thing I noticed was that
 some files used
 
   # -*- encoding: utf-8 -*-
 
 to specify the file encoding for both Python and Emacs. Unfortunately,
 Emacs expects you to set coding, not encoding. Python is fine with
 either. I submitted a set of patches for this:
 
 * https://review.openstack.org/95862
 * https://review.openstack.org/95864
 * https://review.openstack.org/95865
 * https://review.openstack.org/95869
 * https://review.openstack.org/95871
 * https://review.openstack.org/95880
 * https://review.openstack.org/95882
 * https://review.openstack.org/95886
 
 It was pointed out to me that such a change ought to be coordinated
 better via bug(s) or the mailinglist, so here I am :)

This is a valid change.
I don't see why there is any question,
as it only improves the situation for Emacs,
which would otherwise pop up an error when trying to edit these files.

You could create a bug, I suppose,
to reference all the changes, though I don't
think that's mandatory since this isn't user-facing.

cheers,
Pádraig.
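The reason Python tolerates both spellings while Emacs does not: PEP 263 has CPython search the magic comment for a pattern of the form `coding[:=]`, and the substring `coding:` happens to occur inside `encoding:`. A quick check with a simplified version of the pattern (not CPython's exact regex):

```python
import re

# Simplified form of the PEP 263 declaration pattern.
PEP263 = re.compile(r'coding[:=]\s*([-\w.]+)')

for line in ('# -*- coding: utf-8 -*-',       # what Emacs expects
             '# -*- encoding: utf-8 -*-'):    # rejected by Emacs, fine for Python
    match = PEP263.search(line)
    print('%-28s -> %s' % (line, match.group(1)))
```

So the patches cost Python nothing and fix the Emacs experience, which is why the change is uncontroversial.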

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]TLS API support for authentication

2014-05-28 Thread Samuel Bercovici
This is very good news.
Please point to the code review in Gerrit.

-Sam.


-Original Message-
From: Eichberger, German [mailto:german.eichber...@hp.com] 
Sent: Saturday, May 24, 2014 12:54 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]TLS API support for authentication

All,

Susanne and I saw a demonstration of live code by HP's Barbican team today for 
certificate storage. The code is submitted for review in the Barbican project.

Barbican will be able to store all the certificate parts (cert, key, pwd) in a 
secure container. We will follow up with more details next week -- 

So in short all we need to store in LBaaS is the container-id. The rest will 
come from Barbican and the user will interact straight with Barbican to 
upload/manage his certificates, keys, pwd...

This functionality/use case also is considered in the Marketplace / Murano 
project -- making the need for a central certificate storage in OpenStack a bit 
more pressing.

German
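The split German describes — LBaaS persisting only an opaque container id while the backend pulls certificate material from Barbican at deploy time — can be sketched as below. All names here are hypothetical; in particular, `fetch_container` stands in for a real Barbican client call and is not the actual barbicanclient API.

```python
from dataclasses import dataclass

@dataclass
class TlsListener:
    """LBaaS-side record: only an opaque Barbican reference is persisted."""
    listener_id: str
    protocol_port: int
    tls_container_ref: str  # e.g. a Barbican container URL

def deploy_listener(listener, fetch_container):
    """fetch_container(ref) -> dict is a stand-in for a Barbican client call."""
    bundle = fetch_container(listener.tls_container_ref)
    # Only the backend ever sees the certificate/key material;
    # the LBaaS API and database never store it.
    return {'port': listener.protocol_port,
            'certificate': bundle['certificate'],
            'private_key': bundle['private_key']}
```

The design choice this illustrates: because the key material never transits the LBaaS API, the concern raised later in the thread about never returning private keys to users is satisfied by construction.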

-Original Message-
From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
Sent: Friday, May 23, 2014 1:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]TLS API support for authentication

Right, so are you advocating that the front-end API never return a private 
key back to the user, regardless of whether the key was generated on the back 
end or sent in to the API by the user? We are already kind of implying that 
they can refer to the key via a private key id.


On May 23, 2014, at 9:11 AM, John Dennis jden...@redhat.com
 wrote:

 Using standard formats such as PEM and PKCS12 (most people don't use
 PKCS8 directly) is a good approach.

We had to deal with PKCS8 manually in our CLB 1.0 offering at Rackspace. Too 
many customers complained.

 Be mindful that some cryptographic
 services do not provide *any* direct access to private keys (makes 
 sense, right?). Private keys are shielded in some hardened container 
 and the only way to refer to the private key is via some form of name 
 association.

We're anticipating referring to the keys via a Barbican key id, which will be 
named later.


 Therefore your design should never depend on having access to a 
 private key and

But we need enough access to transport the key to the back-end 
implementation.

 should permit having the private key stored in some type of secure key 
 storage.

   A secure repository for the private key is already a requirement that we are 
attempting to meet with Barbican.


 --
 John
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [MagnetoDB] Bulk load API draft

2014-05-28 Thread Illia Khudoshyn
Hi Ilya

As for 'string' vs 'number': in the MDB REST API we pass numbers as strings
since we want to support big ints, so I just wanted to be consistent.

As for the last parameter name, I'd prefer 'failed_items', because we 1)
already have 'failed' and I think it would be good if they match, and 2) they
were actually processed but failed.
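Putting the points from this thread together (integer counters, optional detail fields filled only on error, and the `failed_items` name proposed above), a response builder might look like the sketch below. The function name and exact shape are illustrative, not the final MagnetoDB API.

```python
def make_bulkload_response(read, processed, failed,
                           last_read=None, failed_items=None):
    """Counts are integers; detail fields appear only when failed > 0."""
    resp = {'read': read, 'processed': processed, 'failed': failed}
    if failed:
        resp['last_read'] = last_read
        resp['failed_items'] = failed_items or []
    return resp
```

A success response then stays minimal, while an error response carries enough context to resume or retry the stream.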


On Wed, May 28, 2014 at 1:03 PM, Ilya Sviridov isviri...@mirantis.comwrote:

 Hello Illia,

 Great job!

 Several moments about spec itself.

 read: string,
 processed: string,
 failed: string,

 I believe all of them are integers, not strings.
 
 About Dmitriy's suggestion to rename errors, I would also prefer to do
 that, since the error itself is reported along with the item.
 What do you think about unprocessed_items?

 And I think we can go with it.

 Ilya



 On Wed, May 28, 2014 at 10:28 AM, Illia Khudoshyn ikhudos...@mirantis.com
  wrote:

 Hi Dima,

 Sounds good, thank you for the point.


 On Tue, May 27, 2014 at 7:34 PM, Dmitriy Ukhlov dukh...@mirantis.comwrote:

 Hi Illia,

 Looks good. But I suggest returning all of these fields for successful
 requests as well as for error responses:
 
 read: string,
 processed: string,
 failed: string,
 
 but leaving the next fields optional, filling them in only for an error
 response (failed > 0) to specify what exactly happened:

 last_read:
 errors (maybe not processed will be better)



 On Tue, May 27, 2014 at 3:39 PM, Illia Khudoshyn 
 ikhudos...@mirantis.com wrote:

 Hi openstackers,

 While working on bulk load, I found the previously proposed batch-oriented
 asynchronous approach both resource-consuming on the server side and somewhat
 complicated to use.
 So I tried to outline a more straightforward, streaming way of
 uploading data.
 
 At the link below you can find a draft of the new streaming API:
 https://wiki.openstack.org/wiki/MagnetoDB/streamingbulkload

 Any feedback is welcome as usual.



 On Wed, May 14, 2014 at 5:04 PM, Illia Khudoshyn 
 ikhudos...@mirantis.com wrote:

 Hi openstackers,

 I'm working on bulk load for MagnetoDB, the facility for inserting
 large amounts of data, like,  millions of rows, gigabytes of data. Below 
 is
 the link to draft API description.


 https://wiki.openstack.org/wiki/MagnetoDB/bulkload#.5BDraft.5D_MagnetoDB_Bulk_Load_workflow_and_API

 Any feedback is welcome.

 --

 Best regards,

 Illia Khudoshyn,
 Software Engineer, Mirantis, Inc.



 38, Lenina ave. Kharkov, Ukraine

 www.mirantis.com



 Skype: gluke_work

 ikhudos...@mirantis.com




 --

 Best regards,

 Illia Khudoshyn,
 Software Engineer, Mirantis, Inc.



 38, Lenina ave. Kharkov, Ukraine

 www.mirantis.com



 Skype: gluke_work

 ikhudos...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Best regards,
 Dmitriy Ukhlov
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Best regards,

 Illia Khudoshyn,
 Software Engineer, Mirantis, Inc.



 38, Lenina ave. Kharkov, Ukraine

 www.mirantis.com



 Skype: gluke_work

 ikhudos...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Illia Khudoshyn,
Software Engineer, Mirantis, Inc.



38, Lenina ave. Kharkov, Ukraine

www.mirantis.com



Skype: gluke_work

ikhudos...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-05-28 Thread Jaromir Coufal

Hi to all,

after the previous TripleO & Ironic mid-cycle meetup, which I believe was 
beneficial for all, I would like to suggest that we meet again in the 
middle of the Juno cycle to discuss current progress, blockers, next steps 
and, of course, get some beer all together :)


Last time, TripleO and Ironic merged their meetups, and I think 
it was a great idea. This time I would like to invite the Heat team as well, 
if they want to join. Our cooperation is increasing, and I think it would be 
great if we could discuss all issues together.


Red Hat has offered to host this event, so I am very happy to invite you all, 
and I would like to ask who would come if there were a mid-cycle meetup 
at the following dates and place:


* July 28 - Aug 1
* Red Hat office, Raleigh, North Carolina

If you are intending to join, please add yourselves to this etherpad:
https://etherpad.openstack.org/p/juno-midcycle-meetup

Cheers
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]TLS API support for authentication

2014-05-28 Thread Clark, Robert Graham
Several OSSG members have expressed an interest in reviewing this
functionality too.

-Rob

On 28/05/2014 11:35, Samuel Bercovici samu...@radware.com wrote:

This is very good news.
Please point to the code review in Gerrit.

-Sam.


-Original Message-
From: Eichberger, German [mailto:german.eichber...@hp.com]
Sent: Saturday, May 24, 2014 12:54 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]TLS API support for
authentication

All,

Susanne and I saw a demonstration of live code by HP's Barbican team
today for certificate storage. The code is submitted for review in the
Barbican project.

Barbican will be able to store all the certificate parts (cert, key, pwd)
in a secure container. We will follow up with more details next week --

So in short all we need to store in LBaaS is the container-id. The rest
will come from Barbican and the user will interact straight with Barbican
to upload/manage his certificates, keys, pwd...

This functionality/use case also is considered in the Marketplace /
Murano project -- making the need for a central certificate storage in
OpenStack a bit more pressing.

German

-Original Message-
From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
Sent: Friday, May 23, 2014 1:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]TLS API support for
authentication

Right, so are you advocating that the front-end API never return a
private key back to the user, regardless of whether the key was generated on
the back end or sent in to the API by the user? We are already kind of
implying that they can refer to the key via a private key id.


On May 23, 2014, at 9:11 AM, John Dennis jden...@redhat.com
 wrote:

 Using standard formats such as PEM and PKCS12 (most people don't use
 PKCS8 directly) is a good approach.

We had to deal with PKCS8 manually in our CLB1.0 offering at
rackspace. Too many customers complained.

 Be mindful that some cryptographic
 services do not provide *any* direct access to private keys (makes
 sense, right?). Private keys are shielded in some hardened container
 and the only way to refer to the private key is via some form of name
 association.

We're anticipating referring to the keys via a Barbican key id, which will
be named later.


 Therefore your design should never depend on having access to a
 private key and

But we need access enough to transport the key to the back end
implementation though.

 should permit having the private key stored in some type of secure key
 storage.

   A secure repository for the private key is already a requirement that
we are attempting to meet with Barbican.


 --
 John
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [MagnetoDB] MagnetoDB events & notifications

2014-05-28 Thread Ilya Sviridov
Hello Charles,

The specification looks very good and we can start with it for table and
data item events.

The only thing I'm not quite sure about is the implementation of the
table.usage information.
I mean this part of the description: "Periodic usage notification generated
by the magnetodb-table-usage-audit cron job".

Until now we haven't had a central place for running such a job for all
tables, and I wouldn't want to introduce one; but having it run on each MDB
API instance will multiply the number of notifications. Also, I'm not sure
it will scale well with a growing count of tables.

I believe that MDB should be polled for table.usage, as well as for other
data that describes the general state of the system, by the monitoring
system.

What do you think?

Thanks,
Ilya



On Tue, May 27, 2014 at 5:56 PM, Charles Wang charles_w...@symantec.com wrote:

 Hi Dmitriy,

 Thank you very much for your feedback.

 Although it looks like the MagnetoDB Events & Notifications component has
 some similarities to Ceilometer, its scope is much narrower. We only plan to
 provide immediate and periodic notifications of MagnetoDB table/data item
 CRUD activities based on Oslo Notification. There’s no backend database
 storing them, and no query API for those notifications. They are different
 from Ceilometer metrics and events. In the future when we integrate with
 Ceilometer, the MagnetoDB notifications are fed into Ceilometer to collect
 Ceilometer metrics, and/or generate Ceilometer events. Basically Ceilometer
 will be a consumer of MagnetoDB notifications.

 I’ll update the wiki further to define our scope more clearly, and possibly
 drop the word “events” to indicate we focus on notifications.

 Regards,

 Charles



 From: Dmitriy Ukhlov dukh...@mirantis.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Monday, May 26, 2014 at 7:28 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [MagnetoDB] MagnetoDB events & notifications

 Hi Charles!

 It looks to me like we are duplicating functionality of the Ceilometer
 project.
 Am I wrong? Have you considered Ceilometer integration for monitoring
 MagnetoDB?


 On Fri, May 23, 2014 at 6:55 PM, Charles Wang
 charles_w...@symantec.com wrote:

 Folks,

 Please take a look at the initial draft of MagnetoDB Events and
 Notifications wiki page:
 https://wiki.openstack.org/wiki/MagnetoDB/notification. Your feedback
 will be appreciated.

 Thanks,

 Charles Wang
 charles_w...@symantec.com







 --
 Best regards,
 Dmitriy Ukhlov
 Mirantis Inc.



Re: [openstack-dev] [Openstack-dev][Neutron] Port Mirroring Extension in Neutron

2014-05-28 Thread Vinay Yadhav
Hi,

Issue resolved. The specification is now submitted for review:
https://review.openstack.org/96149

Cheers,
main(i){putchar((5852758((i-1)/2)*8)-!(1i)*'\r')^89main(++i);}


On Wed, May 28, 2014 at 12:16 PM, Vinay Yadhav vinayyad...@gmail.com wrote:

 Hi all,

 I am experiencing an issue when i try to commit the spec for review.

 This is the message that i get:

  fatal: ICLA contributor agreement requires current contact information.

 Please review your contact information:

   https://review.openstack.org/#/settings/contact


 fatal: The remote end hung up unexpectedly
 ericsson@ericsson-VirtualBox:~/openstack_neutron/checkin/neutron-specs/specs/juno$
 git review
 fatal: ICLA contributor agreement requires current contact information.

 Please review your contact information:

   https://review.openstack.org/#/settings/contact


 fatal: The remote end hung up unexpectedly

 I am trying to see what has gone wrong with my account.

 In the mean while please use the attached spec for the irc meeting today.

 I will try to fix the issue with my account soon and commit it for review.

 Cheers,

 main(i){putchar((5852758((i-1)/2)*8)-!(1i)*'\r')^89main(++i);}


 On Wed, May 21, 2014 at 7:38 PM, Anil Rao anil@gigamon.com wrote:

 Thanks Vinay. I’ll review the spec and get back with my comments soon.



 -Anil



 *From:* Vinay Yadhav [mailto:vinayyad...@gmail.com]
 *Sent:* Wednesday, May 21, 2014 10:23 AM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Openstack-dev][Neutron] Port Mirroring
 Extension in Neutron



 Hi,



 I am attaching the first version of the neutron spec for Tap-as-a-Service
 (Port Mirroring).



 It will be formally commited soon in git.



 Cheers,

 main(i){putchar((5852758((i-1)/2)*8)-!(1i)*'\r')^89main(++i);}



 On Tue, May 20, 2014 at 7:12 AM, Kanzhe Jiang kanzhe.ji...@bigswitch.com
 wrote:

 Vinay's proposal was based on OVS's mirroring feature.



 On Mon, May 19, 2014 at 9:11 PM, YAMAMOTO Takashi yamam...@valinux.co.jp
 wrote:

  Hi,
 
  I am Vinay, working with Ericsson.
 
  I am interested in the following blueprint regarding port mirroring
  extension in neutron:
  https://blueprints.launchpad.net/neutron/+spec/port-mirroring
 
  I am close to finishing an implementation for this extension in OVS
 plugin
  and would be submitting a neutron spec related to the blueprint soon.

 does your implementation use OVS' mirroring functionality?
 or is it flow-based?

 YAMAMOTO Takashi


 
  I would like to know other who are also interested in introducing Port
  Mirroring extension in neutron.
 
  It would be great if we can discuss and collaborate in development and
  testing this extension
 
  I am currently attending the OpenStack Summit in Atlanta, so if any of
 you
  are interested in the blueprint, we can meet here in the summit and
 discuss
  how to proceed with the blueprint.
 
  Cheers,
  main(i){putchar((5852758((i-1)/2)*8)-!(1i)*'\r')^89main(++i);}






 --
 Kanzhe Jiang

 MTS at BigSwitch




Re: [openstack-dev] [MagnetoDB] MagnetoDB events & notifications

2014-05-28 Thread Flavio Percoco

On 23/05/14 08:55 -0700, Charles Wang wrote:

Folks,

Please take a look at the initial draft of MagnetoDB Events and Notifications
wiki page:  https://wiki.openstack.org/wiki/MagnetoDB/notification. Your
feedback will be appreciated. 


Just one nit.

The wiki page mentions that Oslo Notifier will be used. Oslo notifier
is on its way to deprecation. Instead, oslo.messaging[0] should be used.

[0] http://docs.openstack.org/developer/oslo.messaging/
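For illustration, a stripped-down sketch of what such a notification could carry; the event-type strings and payload fields here are invented for the example, and a real deployment would emit them through an oslo.messaging Notifier rather than this plain function:

```python
# Hypothetical sketch of a MagnetoDB table CRUD notification payload.
# A plain builder function stands in for the notifier so the payload
# shape is the focus.

import json


def build_notification(event_type, tenant_id, table_name, priority="INFO"):
    """Build a notification payload for a MagnetoDB table event."""
    return {
        "event_type": event_type,  # e.g. "magnetodb.table.create"
        "priority": priority,
        "payload": {
            "tenant_id": tenant_id,
            "table_name": table_name,
        },
    }


msg = build_notification("magnetodb.table.create", "tenant-1", "users")
print(json.dumps(msg, sort_keys=True))
```

With oslo.messaging, the dict built above would be passed to something like `notifier.info(ctxt, event_type, payload)` instead of being printed.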


--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Openstack-docs] [Heat][Documentation] Heat template documentation

2014-05-28 Thread Tomas Sedovic
On 28/05/14 10:05, Julien Danjou wrote:
 On Tue, May 27 2014, Gauvain Pocentek wrote:
 
 So my feeling is that we should work on the tools to convert RST
 (or whatever format, but RST seems to be the norm for openstack
 projects) to docbook, and generate our online documentation from
 there. There are tools that can help us doing that, and I don't
 see an other solution that would make us move forward.
 
 Anne, you talked about experimenting with the end user guide, and
 after the discussion and the technical info brought by Doug,
 Steve and Steven, I now think it is worth trying.
 
 I think it's a very good idea.
 
 FWIW, AsciiDoc¹ has a nice markup format that can be converted to 
 Docbook. I know it's not RST, but it's still better than writing
 XML IMHO.

I would voice my support for AsciiDoc as well. Conversion to DocBook
was what it was designed for, and the two should be semantically
equivalent (i.e. any markup we're using in our DocBook sources should
be available in AsciiDoc as well).


These two articles provide a good quick introduction:

http://asciidoctor.org/docs/asciidoc-writers-guide/

http://asciidoctor.org/docs/asciidoc-syntax-quick-reference/
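As a rough illustration of that semantic mapping (the package name is invented for the example), a fragment like the following converts to a DocBook section containing an orderedlist, with the listing block becoming a screen element:

```asciidoc
== Install the dashboard

. Install the package:
+
----
# apt-get install openstack-dashboard
----

. Restart the web server.
```

So the structures our DocBook sources rely on (sections, procedures, screen listings) all have direct AsciiDoc spellings.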

 
 
 ¹  http://www.methods.co.nz/asciidoc/
 
 
 


[openstack-dev] [Fuel] Review blueprint specs on gerrit

2014-05-28 Thread Dmitry Pyzhov
Guys,

from now on we should keep all our 5.1 blueprint specs in one place: the
fuel-specs repo https://github.com/stackforge/fuel-specs. We do it the same
way as nova, so you can use their instructions
https://wiki.openstack.org/wiki/Blueprints#Nova as a guideline.

Once again. All specifications for 5.1 blueprints need to be moved to
stackforge. Here is example link:
https://github.com/stackforge/fuel-specs/blob/master/specs/template.rst.

Jenkins builds every request and adds a link to the HTML docs in the
comments. For example: https://review.openstack.org/#/c/96145/.

I propose sending feedback on this workflow in this mailing thread.

Also, take a look at the review guidelines
https://wiki.openstack.org/wiki/Blueprints#Blueprint_Review_Criteria.
It contains some useful information, you know.


[openstack-dev] [horizon][infra] Plan for the splitting of Horizon into two repositories

2014-05-28 Thread Radomir Dopieralski
Hello,

we plan to finally do the split in this cycle, and I started some
preparations for that. I also started to prepare a detailed plan for the
whole operation, as it seems to be a rather big endeavor.

You can view and amend the plan at the etherpad at:
https://etherpad.openstack.org/p/horizon-split-plan

It's still a little vague, but I plan to gradually get it more detailed.
All the points are up for discussion, if anybody has any good ideas or
suggestions, or can help in any way, please don't hesitate to add to
this document.

We still don't have any dates or anything -- I suppose we will work that
out soonish.

Oh, and great thanks to all the people who have helped me so far with
it, I wouldn't even dream about trying such a thing without you. Also
thanks in advance to anybody who plans to help!

-- 
Radomir Dopieralski



[openstack-dev] Recommended way of having a project admin

2014-05-28 Thread Ajaya Agrawal
Hi All,

We want to introduce a project admin role in our cloud: a user who can add
users only in the project in which he is an admin. AFAIK such RBAC policies
are not supported by the keystone v2 API, so I suppose we will need to use
keystone v3 to support the concept of a project admin. But I hear that not
all projects talk keystone v3 as of now.

What is the recommended way of doing this?
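To give a sense of the shape such a rule takes, here is a purely illustrative v3-style policy fragment. The rule name, role name, and target attribute are hypothetical; the sample v3 policy shipped with your keystone version (e.g. policy.v3cloudsample.json) is the thing to check:

```json
{
    "project_admin": "role:project_admin and project_id:%(target.project.id)s",
    "identity:create_grant": "rule:admin_required or rule:project_admin"
}
```

The idea is that the grant-creation action is allowed either to the cloud admin or to a user holding the project-admin role on the very project being targeted.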

Cheers,
Ajaya


[openstack-dev] [Ironic] Random thoughts on asynchronous API spec

2014-05-28 Thread Dmitry Tantsur
Hi Ironic folks, hi Devananda!

I'd like to share with you my thoughts on the asynchronous API, which is
spec https://review.openstack.org/#/c/94923
At first I planned this as comments on the review, but it proved to be
much larger, so I'm posting it for discussion on the ML.

Here is a list of different considerations I'd like to take into account
when prototyping async support; some are reflected in the spec already,
some are from my and others' comments:
1. Executability
We need to make sure that request can be theoretically executed,
which includes:
a) Validating request body
b) For each of entities (e.g. nodes) touched, check that they are
available
   at the moment (at least exist).
   This is arguable, as checking for entity existence requires going to
DB.

2. Appropriate state
For each entity in question, ensure that it's either in a proper state
or
moving to a proper state.
It would help avoid users e.g. setting deploy twice on the same node
It will still require some kind of NodeInAWrongStateError, but we won't
necessary need a client retry on this one.

Allowing the entity to be _moving_ to appropriate state gives us a
problem:
Imagine OP1 was running and OP2 got scheduled, hoping that OP1 will come
to desired state. What if OP1 fails? What if conductor, doing OP1
crashes?
That's why we may want to approve only operations on entities that do
not
undergo state changes. What do you think?

Similar problem with checking node state.
Imagine we schedule OP2 while we had OP1 - regular checking node state.
OP1 discovers that node is actually absent and puts it to maintenance
state.
What to do with OP2?
a) Obvious answer is to fail it
b) Can we make client wait for the results of periodic check?
   That is, wait for OP1 _before scheduling_ OP2?

Anyway, this point requires some state framework, that knows about
states,
transitions, actions and their compatibility with each other.

3. Status feedback
People would like to know, how things are going with their task.
What they know is that their request was scheduled. Options:
a) Poll: return some REQUEST_ID and expect users to poll some endpoint.
   Pros:
   - Should be easy to implement
   Cons:
   - Requires persistent storage for tasks. Does AMQP allow doing these
kinds of queries? If not, we'll need to duplicate tasks in the DB.
   - Increased load on API instances and DB
b) Callback: take endpoint, call it once task is done/fails.
   Pros:
   - Less load on both client and server
   - Answer exactly when it's ready
   Cons:
   - Will not work for cli and similar
   - If conductor crashes, there will be no callback.

Seems like we'd want both (a) and (b) to comply with current needs.

If we have a state framework from (2), we can also add notifications to
it.
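A minimal sketch of what combining (a) and (b) might look like: a task store that hands back a request id for polling and fires an optional callback on completion. All names are invented for the illustration, not Ironic's actual design:

```python
# Hypothetical sketch: tasks live in a store that clients can poll by
# request id (option a), with an optional callback fired when the task
# finishes (option b).

import uuid


class TaskStore:
    def __init__(self):
        self._tasks = {}

    def schedule(self, callback=None):
        request_id = str(uuid.uuid4())
        self._tasks[request_id] = {"state": "scheduled", "callback": callback}
        return request_id  # returned to the user for later polling

    def poll(self, request_id):
        return self._tasks[request_id]["state"]

    def finish(self, request_id, state="done"):
        task = self._tasks[request_id]
        task["state"] = state
        if task["callback"]:
            task["callback"](request_id, state)  # push notification


results = []
store = TaskStore()
rid = store.schedule(callback=lambda r, s: results.append((r, s)))
print(store.poll(rid))  # scheduled
store.finish(rid)
print(store.poll(rid))  # done
```

A CLI user would rely only on `poll`; a service with an HTTP endpoint to call back could pass `callback` as well, which is why both paths coexist here.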

4. Debugging consideration
a) This is an open question: how to debug, if we have a lot of requests
   and something went wrong?
b) One more thing to consider: how to make command like `node-show`
aware of
   scheduled transitioning, so that people don't try operations that are
   doomed to failure.

5. Performance considerations
a) With the async approach, users will be able to schedule a nearly
   unlimited number of tasks, thus essentially blocking Ironic's work,
   without any sign of the problem (at least for some time).
   I think there are 2 common answers to this problem:
   - Request throttling: disallow user to make too many requests in some
 amount of time. Send them 503 with Retry-After header set.
   - Queue management: watch queue length, deny new requests if it's too
large.
   This means actually getting back error 503 and will require retrying
again!
   At least it will be exceptional case, and won't affect Tempest run...
b) State framework from (2), if invented, can become a bottleneck as
well.
   Especially with polling approach.
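The request-throttling idea from (5a) can be sketched as follows; the fixed-window counter and the numbers are illustrative, not a proposed Ironic implementation:

```python
# Hypothetical sketch of (5a): over-limit requests are denied with
# 503 plus a Retry-After hint; accepted requests get 202 (queued).

import time


class Throttle:
    """Fixed-window throttle: at most `limit` requests per `window` seconds."""

    def __init__(self, limit, window, clock=time.monotonic):
        self.limit, self.window, self.clock = limit, window, clock
        self.window_start, self.count = clock(), 0

    def check(self):
        """Return (http_status, retry_after_seconds_or_None)."""
        now = self.clock()
        if now - self.window_start >= self.window:
            self.window_start, self.count = now, 0  # new window
        if self.count >= self.limit:
            return 503, self.window - (now - self.window_start)
        self.count += 1
        return 202, None  # accepted for asynchronous processing


throttle = Throttle(limit=2, window=60)
statuses = [throttle.check()[0] for _ in range(3)]
print(statuses)  # [202, 202, 503]
```

On the client side this implies one simple behavior: on 503, sleep for the Retry-After value and resubmit, which is the "handling of 503" mentioned in the takeaways below.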

6. Usability considerations
a) People will be unaware of when, and whether, their request is going to
   finish. As they will be tempted to retry, we may get flooded by
   duplicates. I would suggest at least making it possible to request
   canceling any task (which will be possible only if it has not started
   yet, obviously).
b) We should try to avoid scheduling contradictive requests.
c) Can we somehow detect duplicated requests and ignore them?
   E.g. we won't want user to make 2-3-4 reboots in a row just because
the user
   was not patient enough.

--

Possible takeaways from this letter:
- We'll need at least throttling to avoid DoS
- We'll still need handling of 503 error, though it should not happen
under
  normal conditions
- Think about state framework that unifies all this complex logic with
features:
  * Track entities, their states and actions on entities
  * Check whether new action is compatible with states of entities it
touches
and with other ongoing and scheduled actions on these entities.
  * Handle notifications for finished and failed actions by providing
both
pull and push approaches.
  * Track whether started action is still executed, 

[openstack-dev] [sahara] summit wrap-up: backward compat

2014-05-28 Thread Sergey Lukjanov
Hey folks,

it's a small wrap-up for the two topics Sahara backward compat and
Hadoop cluster backward compatibility, both of which were discussed at the
design summit; etherpad [0] contains info about them. There are some open
questions listed at the end of the email, please don't skip them :)

 Sahara backward compat

Keeping released APIs stable since the Icehouse release. So, for now
we have one stable API, v1.1 (and v1.0 as a subset of it). Any change
to existing semantics requires a new API version; how to handle additions
is an open question. As part of the API stability decision, the python
client should work with all previous Sahara versions. The API of
python-saharaclient should itself be stable, because we aren't limiting
the client version for an OpenStack release; so client v123 shouldn't
change its own API exposed to a user who is working with stable-release
REST API versions.

 Hadoop cluster backward compat

It was decided to at least keep released versions of cluster (Hadoop)
plugins for the next release. It means that if we have vanilla-2.0.1
released as part of Icehouse, then we could remove its support only
after releasing it as part of Juno with a note that it's deprecated and
will not be available in the next release. Additionally, we've decided
to add some docs with upgrade recommendations.

 Open questions

1. How should we handle additions of new functionality to the API:
should we bump the minor version and just add new endpoints?
2. For how long should we keep a deprecated API and the client for it?
3. How do we publish all images and/or keep the building of images
stable for plugins?

[0] https://etherpad.openstack.org/p/juno-summit-sahara-relmngmt-backward

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



Re: [openstack-dev] [horizon][infra] Plan for the splitting of Horizon into two repositories

2014-05-28 Thread Ana Krivokapic

On 05/28/2014 02:54 PM, Radomir Dopieralski wrote:

Hello,

we plan to finally do the split in this cycle, and I started some
preparations for that. I also started to prepare a detailed plan for the
whole operation, as it seems to be a rather big endeavor.

You can view and amend the plan at the etherpad at:
https://etherpad.openstack.org/p/horizon-split-plan

It's still a little vague, but I plan to gradually get it more detailed.
All the points are up for discussion, if anybody has any good ideas or
suggestions, or can help in any way, please don't hesitate to add to
this document.

We still don't have any dates or anything -- I suppose we will work that
out soonish.

Oh, and great thanks to all the people who have helped me so far with
it, I wouldn't even dream about trying such a thing without you. Also
thanks in advance to anybody who plans to help!



This high level overview looks quite reasonable to me.

Count me in on helping out with this effort.

Thanks for driving this, Radomir!

--
Regards,

Ana Krivokapic
Software Engineer
OpenStack team
Red Hat Inc.




Re: [openstack-dev] [nova] nova default quotas

2014-05-28 Thread Cazzolato, Sergio J
Hi Kieran, 

What do you think about the approach proposed in 
https://review.openstack.org/#/c/94519/ ?

What we are trying to do is simplify the way default quotas are managed,
through an API, while keeping backward compatibility. With this there is no
need to restart any service once a default quota is changed, something that
could be painful when there are many services running in parallel.


From: Kieran Spear [mailto:kisp...@gmail.com] 
Sent: Wednesday, May 28, 2014 2:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] nova default quotas

Hi Joe,

On 28/05/2014, at 11:21 AM, Joe Gordon joe.gord...@gmail.com wrote:




On Tue, May 27, 2014 at 1:30 PM, Kieran Spear kisp...@gmail.com wrote:


On 28/05/2014, at 6:11 AM, Vishvananda Ishaya vishvana...@gmail.com wrote:

 Phil,

 You are correct and this seems to be an error. I don't think in the earlier 
 ML thread[1] that anyone remembered that the quota classes were being used 
 for default quotas. IMO we need to revert this removal as we (accidentally) 
 removed a Havana feature with no notification to the community. I've 
 reactivated a bug[2] and marked it critical.

+1.

We rely on this to set the default quotas in our cloud.

Hi Kieran,

Can you elaborate on this point. Do you actually use the full quota-class 
functionality that allows for quota classes, if so what provides the quota 
classes? If you only use this for setting the default quotas, why do you prefer 
the API and not setting the config file?

We just need the defaults. My comment was more to indicate that yes, this is 
being used by people. I'm sure we could switch to using the config file, and 
generally I prefer to keep configuration in code, but finding out about this 
half way through a release cycle isn't ideal.

I notice that only the API has been removed in Icehouse, so I'm assuming the 
impact is limited to *changing* the defaults, which we don't do often. I was 
initially worried that after upgrading to Icehouse we'd be left with either no 
quotas or whatever the config file defaults are, but it looks like this isn't 
the case.

Unfortunately the API removal in Nova was followed by similar changes in 
novaclient and Horizon, so fixing Icehouse at this point is probably going to 
be difficult.

Cheers,
Kieran


 

Kieran


 Vish

 [1] 
 http://lists.openstack.org/pipermail/openstack-dev/2014-February/027574.html
 [2] https://bugs.launchpad.net/nova/+bug/1299517

 On May 27, 2014, at 12:19 PM, Day, Phil philip@hp.com wrote:

 Hi Vish,

 I think quota classes have been removed from Nova now.

 Phil


 Sent from Samsung Mobile


  Original message 
 From: Vishvananda Ishaya
 Date:27/05/2014 19:24 (GMT+00:00)
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] nova default quotas

 Are you aware that there is already a way to do this through the cli using 
 quota-class-update?

 http://docs.openstack.org/user-guide-admin/content/cli_set_quotas.html (near 
 the bottom)

 Are you suggesting that we also add the ability to use just regular
 quota-update? I'm not sure I see the need for both.

 Vish

 On May 20, 2014, at 9:52 AM, Cazzolato, Sergio J 
 sergio.j.cazzol...@intel.com wrote:

 I would to hear your thoughts about an idea to add a way to manage the 
 default quota values through the API.

 The idea is to use the current quota api, but sending ''default' instead of 
 the tenant_id. This change would apply to quota-show and quota-update 
 methods.

 This approach will help to simplify the implementation of another blueprint 
 named per-flavor-quotas

 Feedback? Suggestions?


 Sergio Juan Cazzolato
 Intel Software Argentina



[openstack-dev] [sahara] team meeting May 29 1800 UTC

2014-05-28 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: 
https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Agenda_for_May.2C_29

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meetingiso=20140529T18


-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



Re: [openstack-dev] [Cinder] Qcow2 support for cinder-backup

2014-05-28 Thread Duncan Thomas
On 18 May 2014 12:32, Murali Balcha murali.bal...@triliodata.com wrote:
 Hi,
 I did a design session on Friday though my proposal was to capture the
 delta as qcow2. Here is the link to ether pad notes.

 https://etherpad.openstack.org/p/juno-cinder-changed-block-list


 Do you see synergies between what you are proposing and my proposal?
 Shouldn't we standardize on one format for all backups? I believe Cinder
 backup API currently uses JSON based list with pointers to all swift
 objects that make up the backup data of a volume.

I think the problem being referred to in this thread is that the
backup code assumes the *source* is a raw volume. The destination
(i.e. swift) should absolutely remain universal across all volume
back-ends - a JSON list with pointers. The JSON file is versioned, so
there is scope to add more to it (like we did volume metadata), but I
don't want to see QCOW or similar going into swift.
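To make the "JSON list with pointers" idea concrete, here is a purely illustrative fragment; the field names are invented for this sketch and are not Cinder's actual backup-manifest schema:

```json
{
    "version": "1.0.0",
    "volume_id": "vol-0001",
    "objects": [
        {"name": "backup_0001-00001", "offset": 0, "length": 1048576},
        {"name": "backup_0001-00002", "offset": 1048576, "length": 1048576}
    ],
    "volume_meta": {"display_name": "example"}
}
```

Because the manifest is versioned, new fields (such as the volume metadata added earlier) can be introduced without breaking older readers, which is what keeps the swift-side format back-end-agnostic.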



[openstack-dev] [Glance] Policy

2014-05-28 Thread André Aranha
Hello,
I have a question about Glance's policy. I'm setting it to deny all
calls to images.list ("get_image": "!", "get_images": "!") but it
doesn't work. I tried the same thing on keystone and it worked fine, but
glance ignores the change I made to the policy.
Does anyone know if the glance code is enforcing the rules in policy?
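For reference, the policy fragment being described would presumably look like the following, assuming the file being edited is the one glance-api actually reads (e.g. /etc/glance/policy.json, per the policy_file config option):

```json
{
    "default": "",
    "get_image": "!",
    "get_images": "!"
}
```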

Thank you,
Andre Aranha


[openstack-dev] What's Up Doc? May 28 2014

2014-05-28 Thread Anne Gentle
Here's the latest news in docland. Thanks for all the great input at the
Summit and excellent follow up conversations on the mailing lists.

1. In review and merged recently:
Conversion of the High Availability Guide to DocBook to ease maintenance.
It's so interesting that the conversation about the End User Guide and
adding Heat Template information has Julien suggesting asciidoc -- we
haven't seen an uptick in contributions to the single asciidoc file we had
in the repo. But it's an unfair comparison since High Availability is a
specialty that not many would have contributed to. So let's keep talking
about the priorities for authoring ease.

Addition of roadmap.rst to each guide's directory. There you can see which
tasks are not large enough to merit a blueprint but still represent needed
work on any given guide.

Also note that the project-specs repos were rumored to be renamed to
program-specs, but they are going to remain project-specs. I haven't
been compelled to create a docs-specs repo and instead went with the
roadmap file mentioned above.

The database-api repository is now deleted and the API documentation for
the trove project now lives in the trove repository. Thanks for the
interest and let's keep doing that move for all the projects.

2. High priority doc work:

I've re-started an effort to bring in the O'Reilly content with index
entries wholesale at https://review.openstack.org/#/c/96015/ - let's merge
quickly so that rebasing isn't so difficult and so we can merge in the
havana to icehouse upgrade information.

The Architecture design guide book sprint is being planned by Ken Hui and
we have enough authors as of now but he is wait listing so do contact him
if you're interested. The dates are July 7-11 and VMware is hosting in Palo
Alto.

Still working on persona definitions for the Documentation so that the
intended audience can be added to each guide in the front matter.

Also we're working on further organizing the Documentation/HowTo wiki page
to encourage newcomers.

The Security Guide will be moving to its own repository with a separate
core review team. Thanks all for the interest there!

3. Doc work going on that I know of:
We're continuing discussions about the User Guide to see if we can automate
some of the Orchestration reference information and incorporate into the
End User Guide or Admin User Guide.

Matt is working on writing up standards for file names and xml:ids as part
of the install guide work. Follow along at
http://lists.openstack.org/pipermail/openstack-docs/2014-May/004471.html.

4. New incoming doc requests:
I'd like to get started on app developer documentation and have been
talking to various community members from HP and other places.

I'm also meeting with Todd Morey, the Foundation designer who originally
designed docs.openstack.org, this week for design ideas for docs.

5. Doc tools updates:
The Maven Clouddocs plugin is at 2.0.2 now, and openstack-doc-tools is at
0.15. With 2.0.2 you can use markup within a code listing to emphasize
(with bold output) anything you want to highlight in what's returned to the
user.

There's now a checklang non-voting gate test to test translations. When
those break, please contact the openstack-docs list. Andreas has filed this
bug for further investigation and tracking:
https://bugs.launchpad.net/openstack-i18n/+bug/1324007

6. Other doc news:

I still want to have the discussion about moving the training guides out of
the openstack-manuals repo with a separate core review team similar to the
Security Guide. Let me know about good next steps there and we can make a
plan.


Re: [openstack-dev] [Ironic] Random thoughts on asynchronous API spec

2014-05-28 Thread Lucas Alvares Gomes
On Wed, May 28, 2014 at 2:02 PM, Dmitry Tantsur dtant...@redhat.com wrote:
 Hi Ironic folks, hi Devananda!

 I'd like to share with you my thoughts on asynchronous API, which is
 spec https://review.openstack.org/#/c/94923
 First I planned this as comments on the review, but it proved to be
 much larger, so I am posting it for discussion on the ML.

 Here is a list of the different considerations I'd like to take into
 account when prototyping async support; some are reflected in the spec
 already, some are from my and others' comments:

 1. Executability
 We need to make sure that a request can theoretically be executed,
 which includes:
 a) Validating request body
 b) For each of entities (e.g. nodes) touched, check that they are
 available
at the moment (at least exist).
This is arguable, as checking for entity existence requires going to
 DB.

+ 1


 2. Appropriate state
 For each entity in question, ensure that it's either in a proper state
 or
 moving to a proper state.
 It would help avoid users e.g. setting deploy twice on the same node
 It will still require some kind of NodeInAWrongStateError, but we won't
 necessarily need a client retry on this one.

 Allowing the entity to be _moving_ to appropriate state gives us a
 problem:
 Imagine OP1 was running and OP2 got scheduled, hoping that OP1 will come
 to desired state. What if OP1 fails? What if conductor, doing OP1
 crashes?
 That's why we may want to approve only operations on entities that do
 not
 undergo state changes. What do you think?

 Similar problem with checking node state.
 Imagine we schedule OP2 while we had OP1 - a regular node state check.
 OP1 discovers that node is actually absent and puts it to maintenance
 state.
 What to do with OP2?
 a) Obvious answer is to fail it
 b) Can we make client wait for the results of periodic check?
That is, wait for OP1 _before scheduling_ OP2?

 Anyway, this point requires some state framework, that knows about
 states,
 transitions, actions and their compatibility with each other.

For {power, provision} state changes, should we queue the requests? We
may want to accept only one state-change request at a time; if a second
request comes in while another state change is mid-operation, we can
just return 409 (Conflict) to indicate that a state change is already
in progress. This is similar to what we have today, but instead of
checking the node lock and states on the conductor side, the API
service could do it, since that information is in the DB.
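A minimal API-side sketch of that idea (purely illustrative -- the field names and "busy" states below are assumptions, not Ironic's actual schema):

```python
# Hypothetical busy states; real Ironic states differ.
PROVISION_BUSY = {"deploying", "deleting"}

def request_provision_change(node, requested_state):
    """Return an (HTTP status, body) pair for a state-change request.

    If a state change is already mid-operation, answer 409 (Conflict)
    instead of queueing a second change, as suggested above.
    """
    busy = (node["provision_state"] in PROVISION_BUSY
            or node.get("target_provision_state") is not None)
    if busy:
        return 409, {"error": "a state change is already in progress"}
    node["target_provision_state"] = requested_state
    return 202, {"node": node["uuid"], "target": requested_state}
```

A check like this could run entirely in the API service, since it only needs the state columns already in the DB.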


 3. Status feedback
 People would like to know, how things are going with their task.
 What they know is that their request was scheduled. Options:
 a) Poll: return some REQUEST_ID and expect users to poll some endpoint.
Pros:
- Should be easy to implement
Cons:
    - Requires persistent storage for tasks. Does AMQP allow this kind
      of query? If not, we'll need to duplicate tasks in the DB.
- Increased load on API instances and DB
 b) Callback: take endpoint, call it once task is done/fails.
Pros:
- Less load on both client and server
- Answer exactly when it's ready
Cons:
- Will not work for cli and similar
- If conductor crashes, there will be no callback.

 Seems like we'd want both (a) and (b) to comply with current needs.

+1, we could allow polling by default (like checking
nodes/uuid/states to know the current and target state of the node),
but we may also want to include a callback parameter that users could
use to supply a URL that the conductor will call as soon as the
operation is finished. So if the callback URL exists, the conductor
will submit a POST request to that URL with some data structure
identifying the operation and the current state.
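As a sketch of that callback half (the payload shape and delivery mechanism are assumptions for illustration, not Ironic code), the conductor-side notification could look like the following; `opener` is injectable so the HTTP call can be faked in tests:

```python
import json
from urllib import request


def notify_callback(callback_url, node_uuid, operation, state, opener=None):
    """POST a small JSON document describing a finished operation to the
    user-supplied callback URL (illustrative only)."""
    payload = json.dumps({
        "node": node_uuid,
        "operation": operation,
        "state": state,
    }).encode("utf-8")
    req = request.Request(callback_url, data=payload,
                          headers={"Content-Type": "application/json"},
                          method="POST")
    if opener is None:
        return request.urlopen(req)  # fires the real HTTP request
    return opener(req)  # test hook: inspect the request instead
```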


 If we have a state framework from (2), we can also add notifications to
 it.

 4. Debugging consideration
 a) This is an open question: how to debug, if we have a lot of requests
and something went wrong?
 b) One more thing to consider: how to make command like `node-show`
 aware of
scheduled transitioning, so that people don't try operations that are
doomed to failure.

 5. Performance considerations
 a) With the async approach, users will be able to schedule a nearly
    unlimited number of tasks, thus essentially blocking the work of
    Ironic, without any signs of the problem (at least for some time).
    I think there are 2 common answers to this problem:
    - Request throttling: disallow users from making too many requests in
      some amount of time. Send them 503 with the Retry-After header set.
- Queue management: watch queue length, deny new requests if it's too
 large.
This means actually getting back error 503 and will require retrying
 again!
At least it will be exceptional case, and won't affect Tempest run...
 b) State framework from (2), if invented, can become a bottleneck as
 well.
Especially with polling approach.

 6. Usability considerations
 a) People will be unaware of when and whether their request is going to
    be finished.
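The request-throttling answer in (5a) above can be sketched as a sliding-window counter that hands back 503 with a Retry-After hint once the limit is hit (a sketch under assumed limits, not Ironic code):

```python
import time
from collections import deque


class Throttle:
    """Allow at most `limit` requests per `window` seconds; otherwise
    answer 503 with a Retry-After header, as suggested in (5a)."""

    def __init__(self, limit=5, window=60.0):
        self.limit, self.window = limit, window
        self.stamps = deque()  # timestamps of accepted requests

    def check(self, now=None):
        """Return (status, headers) for an incoming request."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.stamps and now - self.stamps[0] > self.window:
            self.stamps.popleft()
        if len(self.stamps) >= self.limit:
            retry_after = self.window - (now - self.stamps[0])
            return 503, {"Retry-After": str(int(retry_after) + 1)}
        self.stamps.append(now)
        return 202, {}
```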

Re: [openstack-dev] [Glance] Policy

2014-05-28 Thread Guangyu Suo
I think you should restart glance-api, because it does not reload
policy.json the way other services do.
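For reference, the kind of policy.json fragment under discussion would presumably look like this (key names taken from the thread; the exact file layout is an assumption):

```json
{
    "get_image": "!",
    "get_images": "!"
}
```

After editing the file, restart glance-api for the change to take effect.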


2014-05-28 22:05 GMT+08:00 André Aranha andre.f.ara...@gmail.com:

 Hello,
  I have a question about Glance's policy. I'm setting it to deny all
  calls to images.list ("get_image": "!", "get_images": "!"), but it
  doesn't work. I tried this on Keystone and it worked fine, but Glance
  ignores the change I made to the policy.
  Does anyone know whether the Glance code enforces the rules in the policy?

 Thank you,
 Andre Aranha





-- 
索广宇(Guangyu Suo)
UnitedStack Inc.


Re: [openstack-dev] Specifying file encoding

2014-05-28 Thread Martin Geisler
Pádraig Brady p...@draigbrady.com writes:

 On 05/28/2014 08:16 AM, Martin Geisler wrote:
 Hi everybody,
 
 I'm trying to get my feet wet with OpenStack development, so I recently
 tried to submit some small patches. One small thing I noticed was that
 some files used
 
   # -*- encoding: utf-8 -*-
 
 to specify the file encoding for both Python and Emacs. Unfortunately,
 Emacs expects you to set coding, not encoding. Python is fine with
 either. I submitted a set of patches for this:
 
 * https://review.openstack.org/95862
 * https://review.openstack.org/95864
 * https://review.openstack.org/95865
 * https://review.openstack.org/95869
 * https://review.openstack.org/95871
 * https://review.openstack.org/95880
 * https://review.openstack.org/95882
 * https://review.openstack.org/95886
 
 It was pointed out to me that such a change ought to be coordinated
 better via bug(s) or the mailinglist, so here I am :)

 This is a valid change.
 I don't see why there is any question,
 as it only improves the situation for Emacs,
 which will pop up an error when trying to edit these files.

Yes, exactly :)

It's also worth noting that the files *already* use Emacs-specific
markup in the form of the -*- markers. Python doesn't care about those
at all, but Emacs relies on them.
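The reason Python is fine with either spelling is that PEP 263 only looks for the substring `coding[:=]` in the first two lines of the file, and `encoding:` happens to contain it. A rough sketch of that check (an approximation; the exact CPython regex may differ slightly):

```python
import re

# Approximation of the PEP 263 magic-comment pattern.
CODING_RE = re.compile(r'^[ \t\f]*#.*?coding[:=][ \t]*([-_.a-zA-Z0-9]+)')


def declared_coding(first_line):
    """Return the encoding name a source line declares, or None."""
    m = CODING_RE.match(first_line)
    return m.group(1) if m else None
```

Both `# -*- coding: utf-8 -*-` and `# -*- encoding: utf-8 -*-` therefore declare utf-8 to Python, while Emacs parses the `-*- ... -*-` block as variable/value pairs and only recognizes a variable literally named `coding`.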

The reviewers in https://review.openstack.org/95886/ suggest removing
the coding lines completely and also removing non-ASCII characters as
necessary to make that possible.

I can definitely do that instead if people like that better. The only
problem I see is that we'll have to hope that © is the only non-ASCII
character -- I haven't checked if that's the case yet, but it's not
uncommon to find non-ASCII characters in author names as well.

 You could create a bug I suppose to reference all the changes though I
 don't think that's mandatory since this isn't user facing.

Okay, I'm not really familiar with the process around here. I hoped I
could just make some commits and submit them for review :)

-- 
Martin Geisler

https://plus.google.com/+MartinGeisler/




Re: [openstack-dev] [Glance] Policy

2014-05-28 Thread André Aranha
Worked fine, thank you!



On 28 May 2014 11:12, Guangyu Suo guan...@unitedstack.com wrote:

 I think you should restart the glance-api, because it will not reload the
 policy.json like others.


 2014-05-28 22:05 GMT+08:00 André Aranha andre.f.ara...@gmail.com:

 Hello,
  I have a question about Glance's policy. I'm setting it to deny all
  calls to images.list ("get_image": "!", "get_images": "!"), but it
  doesn't work. I tried this on Keystone and it worked fine, but Glance
  ignores the change I made to the policy.
  Does anyone know whether the Glance code enforces the rules in the policy?

 Thank you,
 Andre Aranha





 --
 索广宇(Guangyu Suo)
 UnitedStack Inc.






Re: [openstack-dev] [Horizon] Selenium test fixes

2014-05-28 Thread Jason Rist
On 05/27/2014 11:45 PM, Kieran Spear wrote:
 No failures in the last 24 hours. \o/
 
 On 26 May 2014 23:44, Kieran Spear kisp...@gmail.com wrote:
 Hi peeps,

 Could I ask reviewers to prioritise the following:

 https://review.openstack.org/#/c/95392/

 It should eliminate our selenium gate failures, which seem to be happening 
 many times per day now.

 Cheers,
 Kieran

 
 


Well done!

-J

-- 
Jason E. Rist
Senior Software Engineer
OpenStack Management UI
Red Hat, Inc.
openuc: +1.972.707.6408
mobile: +1.720.256.3933
Freenode: jrist
github/identi.ca: knowncitizen



[openstack-dev] Oslo logging eats system level tracebacks by default

2014-05-28 Thread Sean Dague
When attempting to build a new tool for Tempest, I found that my python
syntax errors were being completely eaten. After 2 days of debugging I
found that oslo log.py does the following *very unexpected* thing.

 - replaces the sys.excepthook with its own function
 - eats the exception traceback unless debug or verbose are set to True
 - sets debug and verbose to False by default
 - prints out a completely useless summary log message at Critical
([CRITICAL] [-] 'id' was my favorite of these)

This is basically for an exit level event. Something so breaking that
your program just crashed.

Note this has nothing to do with preventing stack traces that are
currently littering up the logs that happen at many logging levels, it's
only about removing the stack trace of a CRITICAL level event that's
going to very possibly result in a crashed daemon with no information as
to why.

So the process of including oslo log makes the code immediately
undebuggable unless you change your config file away from the defaults.
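An illustrative reduction of the described behaviour (this is a sketch of the effect, not oslo's actual code):

```python
import traceback


def format_crash(exc, debug=False):
    """Render an unhandled exception the way the hook described above does.

    With debug off, the traceback is discarded and only str(exc) survives
    at CRITICAL -- for KeyError('id') that is just 'id'.
    """
    if debug:
        return "".join(traceback.format_exception(type(exc), exc,
                                                  exc.__traceback__))
    return str(exc)
```

So a crashing daemon logs one opaque line unless debug/verbose were flipped on beforehand.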

Whether or not there was justification for this before, one of the
things we heard loud and clear from the operator's meetup was:

 - Most operators are running at DEBUG level for all their OpenStack
services because you can't actually do problem determination in
OpenStack for anything less than that.
 - Operators reacted negatively to the idea of removing stack traces
from logs, as that's typically the only way to figure out what's going
on. It took a while of back and forth to explain that our initiative to
do that wasn't about removing them per se, but about having the code
correctly recover.

So the current oslo logging behavior seems inconsistent (we spew
exceptions at INFO and WARN levels, and hide all the important stuff
with a legitimately uncaught system level crash), undebuggable, and
completely against the prevailing wishes of the operator community.

I'd like to change that here - https://review.openstack.org/#/c/95860/

-Sean

-- 
Sean Dague
http://dague.net





[openstack-dev] [QA] Meeting Thursday May 29th at 22:00UTC

2014-05-28 Thread Matthew Treinish
Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, May 29th at 22:00 UTC in the #openstack-meeting channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 22:00 UTC is in other timezones tomorrow's
meeting will be at:

18:00 EDT
07:00 JST
07:30 ACST
0:00 CEST
17:00 CDT
15:00 PDT
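For what it's worth, the conversions above can be reproduced with the standard library (zoneinfo needs Python 3.9+; this is just a convenience sketch, not part of the announcement):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Thursday, May 29th, 22:00 UTC
meeting = datetime(2014, 5, 29, 22, 0, tzinfo=timezone.utc)
for zone in ("America/New_York", "Asia/Tokyo", "Australia/Adelaide",
             "Europe/Paris", "America/Chicago", "America/Los_Angeles"):
    local = meeting.astimezone(ZoneInfo(zone))
    print(zone, local.strftime("%H:%M %Z"))
```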

-Matt Treinish


pgpN9WZqd81jU.pgp
Description: PGP signature


Re: [openstack-dev] [Nova] [Neutron] heal_instance_info_cache_interval - Can we kill it?

2014-05-28 Thread Assaf Muller


- Original Message -
 Hi,
 
 Sorry somehow I missed this email. I don't think you want to disable it,
 though we can definitely have it run less often. The issue with disabling it
 is if one of the notifications from neutron-nova never gets sent
 successfully to nova (neutron-server is restarted before the event is sent
 or some other internal failure). Nova will never update its cache if the
 heal_instance_info_cache_interval is set to 0.

The thing is, this periodic healing doesn't imply correctness either.
In the case where you lose a notification and the compute node hosting
the VM is hosting a non-trivial number of VMs, it can take (with the
default of 60 seconds) dozens of minutes to update the cache, since you
only update one VM per minute. I could understand the use of a sanity
check if it were performed much more often, but as it is now it seems
useless to me, since you can't really rely on it.

What I'm trying to say is that with the inefficiency of the implementation,
coupled with Neutron's default plugin inability to cope with a large
amount of API calls, I feel like the disadvantages outweigh the
advantages when it comes to the cache healing.

How would you feel about disabling it, optimizing the implementation
(For example by introducing a new networking_for_instance API verb to Neutron?)
then enabling it again?
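For anyone wanting to experiment, disabling the periodic task presumably comes down to a nova.conf change like the following (option name as discussed above; 0 disables it):

```ini
[DEFAULT]
# Disable the periodic info_cache healing task entirely
heal_instance_info_cache_interval = 0
```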

 The neutron-nova events help
 to ensure that the nova info_cache is up to date sooner by having neutron
 inform nova whenever a port's data has changed (@Joe Gordon - this happens
 regardless of virt driver).
 
 If you're using the libvirt virt driver the neutron-nova events will also be
 used to ensure that the networking is 'ready' before the instance is powered
 on.
 
 Best,
 
 Aaron
 
 P.S: we're working on making the heal_network call to neutron a lot less
 expensive as well in the future.
 
 
 
 
 On Tue, May 27, 2014 at 7:25 PM, Joe Gordon  joe.gord...@gmail.com  wrote:
 
 
 
 
 
 
 On Wed, May 21, 2014 at 6:21 AM, Assaf Muller  amul...@redhat.com  wrote:
 
 
 Dear Nova aficionados,
 
 Please make sure I understand this correctly:
 Each nova compute instance selects a single VM out of all of the VMs
 that it hosts, and every heal_instance_info_cache_interval seconds
 queries Neutron for all of its networking information, then updates
 Nova's DB.
 
 If the information above is correct, then I fail to see how that
 is in any way useful. For example, for a compute node hosting 20 VMs,
 it would take 20 minutes to update the last one. Seems unacceptable
 to me.
 
 Considering Icehouse's Neutron to Nova notifications, my question
 is if we can change the default to 0 (Disable the feature), deprecate
 it, then delete it in the K cycle. Is there a good reason not to do this?
 
 Based on the patch that introduced this function [0] you may be on to
 something, but AFAIK unfortunately the neutron to nova notifications only
 work in libvirt right now [1], so I don't think we can fully deprecate this
 periodic task. That being said turning it off by default may be an option.
 Have you tried disabling this feature and seeing what happens (in the gate
 and/or in production)?
 

We've disabled it in a scale lab and didn't observe any black holes forming
or other catastrophes.

 
 [0] https://review.openstack.org/#/c/4269/
 [1] https://wiki.openstack.org/wiki/ReleaseNotes/Icehouse
 
 
 
 
 Assaf Muller, Cloud Networking Engineer
 Red Hat
 
 
 
 
 
 
 



Re: [openstack-dev] [neutron] BPs for Juno-1

2014-05-28 Thread Kyle Mestery
On Wed, May 28, 2014 at 12:41 AM, mar...@redhat.com mandr...@redhat.com wrote:
 On 27/05/14 17:14, Kyle Mestery wrote:
 Hi Neutron developers:

 I've spent some time cleaning up the BPs for Juno-1, and they are
 documented at the link below [1]. There are a large number of BPs
 currently under review right now in neutron-specs. If we land some of
 those specs this week, it's possible some of these could make it into
 Juno-1, pending review cycles and such. But I just wanted to highlight
 that I removed a large number of BPs from targeting Juno-1 now which
 did not have specifications linked to them nor specifications which
 were actively under review in neutron-specs.

 Also, a gentle reminder that the process for submitting specifications
 to Neutron is documented here [2].

 Thanks, and please reach out to me if you have any questions!


 Hi Kyle,

 Can you please consider my PUT /subnets/subnet allocation_pools:{}
  review at [1] for Juno-1? Also, I see you have included a bug [2] and
  its associated review [3] that I've worked on, but that review has
  already been merged to master. Is it there for any pending backports?

Done, I've added the bug referenced in [2] to Juno-1.

With regards to [3] below, are you saying you would like to submit
that as a backport to stable?

 thanks! marios

 [1] https://review.openstack.org/#/c/62042/
 [2] https://bugs.launchpad.net/neutron/+bug/1255338
 [3] https://review.openstack.org/#/c/59212/



 Kyle

 [1] https://launchpad.net/neutron/+milestone/juno-1
 [2] https://wiki.openstack.org/wiki/Blueprints#Neutron







Re: [openstack-dev] Specifying file encoding

2014-05-28 Thread Martin Geisler
Ryan Brady rbr...@redhat.com writes:

 - Original Message -
 From: Pádraig Brady p...@draigbrady.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Sent: Wednesday, May 28, 2014 6:28:28 AM
 Subject: Re: [openstack-dev] Specifying file encoding
 
 On 05/28/2014 08:16 AM, Martin Geisler wrote:
  Hi everybody,
  
  I'm trying to get my feet wet with OpenStack development, 

 Welcome aboard!  Thanks for contributing.  :)

 so I recently
  tried to submit some small patches. One small thing I noticed was that
  some files used
  
# -*- encoding: utf-8 -*-
  
  to specify the file encoding for both Python and Emacs. Unfortunately,
  Emacs expects you to set coding, not encoding. Python is fine with
  either. I submitted a set of patches for this:
  
  * https://review.openstack.org/95862
  * https://review.openstack.org/95864
  * https://review.openstack.org/95865
  * https://review.openstack.org/95869
  * https://review.openstack.org/95871
  * https://review.openstack.org/95880
  * https://review.openstack.org/95882
  * https://review.openstack.org/95886
  
  It was pointed out to me that such a change ought to be coordinated
  better via bug(s) or the mailinglist, so here I am :)
 
  This is a valid change.
 I don't see why there is any question
 as it only improves the situation for emacs
 which will pop up an error when trying to edit these files.

 I guess I approach this differently. When I saw this patch, my first
 thought was to check whether the line being changed needed to exist at
 all.

That makes a lot of sense!

 If the file has valid non-ASCII characters that affect its execution,
 are absolutely required for documentation to convey a specific
 meaning, or appear in strings that need to be translated, then I agree
 the change is valid. But in the case the characters in the file can be
 changed, it seems like the bug is the extra encoding comment itself.

I see what you mean -- I also try to keep my files just ASCII for
convenience (even though I'm from Denmark where we have the extra three
vowels æ, ø, and å). As a brand new contributor, changing copyright
statements seemed like a bigger change than updating the coding line :)

 Taking tuskar for example, the files in question seem to only need
 this encoding line to support the copyright symbol.

 [rb@localhost tuskar]$ grep -R -i -P [^\x00-\x7F] ./*
 Binary file ./doc/source/api/img/model_v2.jpg matches
 Binary file ./doc/source/api/img/model_v4.odg matches
 Binary file ./doc/source/api/img/model.odg matches
 Binary file ./doc/source/api/img/model_v3.jpg matches
 Binary file ./doc/source/api/img/model_v3.odg matches
 Binary file ./doc/source/api/img/model_v2.odg matches
 Binary file ./doc/source/api/img/model_v4.jpg matches
 Binary file ./doc/source/api/img/model_v1.jpg matches
 Binary file ./doc/source/_static/header_bg.jpg matches
 Binary file ./doc/source/_static/header-line.gif matches
 Binary file ./doc/source/_static/openstack_logo.png matches
 ./tuskar/api/hooks.py:# Copyright © 2012 New Dream Network, LLC (DreamHost)
 ./tuskar/api/app.py:# Copyright © 2012 New Dream Network, LLC (DreamHost)
 ./tuskar/api/acl.py:# Copyright © 2012 New Dream Network, LLC (DreamHost)
 ./tuskar/common/service.py:# Copyright © 2012 eNovance 
 licens...@enovance.com
 ./tuskar/tests/api/api.py:# Copyright © 2012 New Dream Network, LLC 
 (DreamHost)


 In the U.S., copyright notices haven't really been needed since 1989.
 You also only need to include
 one instance of the symbol, Copyright or copr [1]. If the
 requirements for copyright are different
 outside the U.S., then I hope we capture that in the copyright wiki
 page [2]. Maybe the current info
 in that wiki needs to be updated or at least notated as to why the
 specific notice text is suggested.

 Unless there is another valid requirement to keep the © in the files,
 I think it's best if we just remove them altogether and eliminate the
 need to add the encoding comments at all.

Sounds good to me. I'll update my script to do this and rework the patch
sets. I've made most patches as two: one that removes coding lines that
are currently redundant and one that adjusts the remaining lines to make
Emacs happy.

Would you want to merge the patches that simply remove the unneeded
lines and then let me followup with patches that remove © along with the
then unnecessary coding lines?

I'm asking since it seems that Gerrit encourages a different style of
development than most other projects I know -- single large commits
instead of a series of smaller commits, each one a logical step building
on the previous.

-- 
Martin Geisler

https://plus.google.com/+MartinGeisler/




Re: [openstack-dev] Oslo logging eats system level tracebacks by default

2014-05-28 Thread Doug Hellmann
On Wed, May 28, 2014 at 10:38 AM, Sean Dague s...@dague.net wrote:
 When attempting to build a new tool for Tempest, I found that my python
 syntax errors were being completely eaten. After 2 days of debugging I
 found that oslo log.py does the following *very unexpected* thing.

  - replaces the sys.excepthook with its own function
  - eats the exception traceback unless debug or verbose are set to True
  - sets debug and verbose to False by default
  - prints out a completely useless summary log message at Critical
 ([CRITICAL] [-] 'id' was my favorite of these)

 This is basically for an exit level event. Something so breaking that
 your program just crashed.

 Note this has nothing to do with preventing stack traces that are
 currently littering up the logs that happen at many logging levels, it's
 only about removing the stack trace of a CRITICAL level event that's
 going to very possibly result in a crashed daemon with no information as
 to why.

 So the process of including oslo log makes the code immediately
  undebuggable unless you change your config file away from the defaults.

 Whether or not there was justification for this before, one of the
 things we heard loud and clear from the operator's meetup was:

  - Most operators are running at DEBUG level for all their OpenStack
 services because you can't actually do problem determination in
  OpenStack for anything less than that.
  - Operators reacted negatively to the idea of removing stack traces
 from logs, as that's typically the only way to figure out what's going
 on. It took a while of back and forth to explain that our initiative to
  do that wasn't about removing them per se, but about having the code
 correctly recover.

 So the current oslo logging behavior seems inconsistent (we spew
 exceptions at INFO and WARN levels, and hide all the important stuff
 with a legitimately uncaught system level crash), undebuggable, and
 completely against the prevailing wishes of the operator community.

 I'd like to change that here - https://review.openstack.org/#/c/95860/

 -Sean

I agree, we should dump as much detail as we can when we encounter an
unhandled exception that causes an app to die.

Doug


 --
 Sean Dague
 http://dague.net






[openstack-dev] [FUEL][Design session 5.1] 28/05/2014 meeting minutes

2014-05-28 Thread Vladimir Kuklin
Hey, folks

We did a meeting today regarding 5.1-targeted blueprints and design.

Here is the document with the results:

https://etherpad.openstack.org/p/fuel-library-5.1-design-session

Obviously, we need several additional meetings to build up roadmap for 5.1,
but I think this was a really good start. Thank you all.

We will continue to work on this during this and next working week. Hope to
see you all on weekly IRC meeting tomorrow. Feel free to propose your
blueprints and ideas for 5.1 release.
https://wiki.openstack.org/wiki/Meetings/Fuel

-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com http://www.mirantis.ru/
www.mirantis.ru
vkuk...@mirantis.com


Re: [openstack-dev] Oslo logging eats system level tracebacks by default

2014-05-28 Thread Jay Pipes

On 05/28/2014 11:39 AM, Doug Hellmann wrote:

On Wed, May 28, 2014 at 10:38 AM, Sean Dague s...@dague.net wrote:

When attempting to build a new tool for Tempest, I found that my python
syntax errors were being completely eaten. After 2 days of debugging I
found that oslo log.py does the following *very unexpected* thing.

  - replaces the sys.excepthook with its own function
  - eats the exception traceback unless debug or verbose are set to True
  - sets debug and verbose to False by default
  - prints out a completely useless summary log message at Critical
([CRITICAL] [-] 'id' was my favorite of these)

This is basically for an exit level event. Something so breaking that
your program just crashed.

Note this has nothing to do with preventing stack traces that are
currently littering up the logs that happen at many logging levels, it's
only about removing the stack trace of a CRITICAL level event that's
going to very possibly result in a crashed daemon with no information as
to why.

So the process of including oslo log makes the code immediately
undebuggable unless you change your config file away from the defaults.

Whether or not there was justification for this before, one of the
things we heard loud and clear from the operator's meetup was:

  - Most operators are running at DEBUG level for all their OpenStack
services because you can't actually do problem determination in
OpenStack for anything less than that.
  - Operators reacted negatively to the idea of removing stack traces
from logs, as that's typically the only way to figure out what's going
on. It took a while of back and forth to explain that our initiative to
do that wasn't about removing them per say, but having the code
correctly recover.

So the current oslo logging behavior seems inconsistent (we spew
exceptions at INFO and WARN levels, and hide all the important stuff
with a legitimately uncaught system level crash), undebuggable, and
completely against the prevailing wishes of the operator community.

I'd like to change that here - https://review.openstack.org/#/c/95860/

 -Sean


I agree, we should dump as much detail as we can when we encounter an
unhandled exception that causes an app to die.


+1

-jay



Re: [openstack-dev] [Baremetal][Neutron] SDN Solution for Openstack Baremetal Driver

2014-05-28 Thread Tim Chim
Ryota,

Thanks for your invitation. Unfortunately I did not join the summit. I
hope you had a good time and enjoyed the summit ;). However, I am really
interested in your baremetal + SDN solution. Would you mind elaborating
a bit more on the architecture and design of the solution?

Regards,
Tim

On Sat, May 17, 2014 at 1:51 AM, Tim Chim fsc...@gmail.com wrote:

 Hi fellow stacker,

 I am designing for a Baremetal stack with SDN support recently and would
 like to seek for the help from you all for how to do so.

 I am building an automated testing framework on top of Openstack Havana.
 For performance and platform sake I would need to provision my stack using
 baremetal driver (Not Ironic). But on the other hand I also need the
 network virtualization provided by neutron in order to maximize resource
 utilization. Therefore I started looking for SDN solutions for the
 baremetal use case. My network infrastructure is from Cisco and so I
 started with Cisco plugin within Neutron.

 According to [1], I found that the plugin pretty much supported all
 functionality for the virtualized VM case (i.e. dynamical VLAN creation and
 VLAN tag propagation from controller to compute host). But for baremetal
 case, it seems that the solution would not work as there is no OVS agent
 running on the BM node.

  Since information about baremetal + SDN is quite lacking, I wonder if
  anybody here has tried baremetal + SDN before and could share your
  experience in doing so? Or is it simply impossible to do SDN with the
  baremetal driver? And how about the case for Ironic? Thanks.

 Regards,
 Tim

 Reference:
 [1] -
 http://www.cisco.com/c/en/us/products/collateral/switches/nexus-3000-series-switches/data_sheet_c78-727737.html



Re: [openstack-dev] [MagnetoDB] PTL Candidacy

2014-05-28 Thread Serge Kovaleff
+1.

Do you plan a meetup or something? I am deeply interested in a good storage.


On Wed, May 28, 2014 at 6:07 AM, Sergey Lukjanov slukja...@mirantis.com wrote:

 ack

 On Wed, May 28, 2014 at 1:34 PM, Ilya Sviridov isviri...@mirantis.com
 wrote:
  Hello openstackers,
 
  I'd like to announce  my candidacy as PTL of MagnetoDB[1] project.
 
  Several words about me:
  I'm software developer at Mirantis. I'm working with OpenStack for a year
  and a bit more. At the beginning it was integration and customization of
  HEAT for customer. After that I've contributed to Trove and now working
 on
  MagnetoDB 100% [2] of my time.
 
  I've started with MagnetoDB as idea [3] on Hong Kong summit and now it
 is a
  project with community of two major companies [4], with regular releases,
  roadmap[5] and plans for incubation.
 
  As a PTL of MagnetoDB I'll continue my work on building great environment
  for contributors, making MagnetoDB successful software product and
  eventually to be integrated to OpenStack.
 
  [1] https://launchpad.net/magnetodb
  [2] http://www.stackalytics.com/report/contribution/magnetodb/90
  [3]
 
 http://lists.openstack.org/pipermail/openstack-dev/2013-October/017930.html
  [4]
 
  http://www.stackalytics.com/?release=all&metric=commits&project_type=stackforge&module=magnetodb&company=&user_id=
   [5] https://etherpad.openstack.org/p/magnetodb-juno-roadmap
 
  Thank you,
  Ilya Sviridov
  isviridov @ FreeNode
 
 



 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] Oslo logging eats system level tracebacks by default

2014-05-28 Thread Morgan Fainberg
+1. Providing service crash information is very valuable. In general we need 
to provide our operators with as much information as possible about why the 
service exited (critically/traceback/unexpectedly).

—Morgan
—
Morgan Fainberg


From: Jay Pipes jaypi...@gmail.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: May 28, 2014 at 08:50:25
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] Oslo logging eats system level tracebacks by 
default  

On 05/28/2014 11:39 AM, Doug Hellmann wrote:  
 On Wed, May 28, 2014 at 10:38 AM, Sean Dague s...@dague.net wrote:  
 When attempting to build a new tool for Tempest, I found that my python  
 syntax errors were being completely eaten. After 2 days of debugging I  
 found that oslo log.py does the following *very unexpected* thing.  
  
 - replaces the sys.excepthook with its own function  
 - eats the exception traceback unless debug or verbose are set to True  
 - sets debug and verbose to False by default  
 - prints out a completely useless summary log message at Critical  
 ([CRITICAL] [-] 'id' was my favorite of these)  
  
 This is basically for an exit level event. Something so breaking that  
 your program just crashed.  
  
 Note this has nothing to do with preventing stack traces that are  
 currently littering up the logs that happen at many logging levels, it's  
 only about removing the stack trace of a CRITICAL level event that's  
 going to very possibly result in a crashed daemon with no information as  
 to why.  
  
 So the process of including oslo log makes the code immediately  
 undebuggable unless you change your config file to not the default.  
  
 Whether or not there was justification for this before, one of the  
 things we heard loud and clear from the operator's meetup was:  
  
 - Most operators are running at DEBUG level for all their OpenStack  
 services because you can't actually do problem determination in  
 OpenStack for anything less than that.  
 - Operators reacted negatively to the idea of removing stack traces  
 from logs, as that's typically the only way to figure out what's going  
 on. It took a while of back and forth to explain that our initiative to  
 do that wasn't about removing them per se, but having the code  
 correctly recover.  
  
 So the current oslo logging behavior seems inconsistent (we spew  
 exceptions at INFO and WARN levels, and hide all the important stuff  
 with a legitimately uncaught system level crash), undebuggable, and  
 completely against the prevailing wishes of the operator community.  
  
 I'd like to change that here - https://review.openstack.org/#/c/95860/  
  
 -Sean  
  
 I agree, we should dump as much detail as we can when we encounter an  
 unhandled exception that causes an app to die.  

+1  

-jay  

___  
OpenStack-dev mailing list  
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  


Re: [openstack-dev] [Ironic] Random thoughts on asynchronous API spec

2014-05-28 Thread Maksym Lobur
Hi All,

You've raised a good discussion; something similar was already started back
in February. Could someone please find the long etherpad with the discussion
between Deva and Lifeless? As I recall, most of the points mentioned above
have good comments there.

Up to this point I have only one idea for how to elegantly address these
problems: a tasks concept, and probably a scheduler service, which does not
necessarily need to be separate from the API at the moment (after all, we
already have a hash ring on the API side, which is a kind of scheduler,
right?). It was already proposed earlier, but I would like to try to fit all
these issues into this concept.


 1. Executability
 We need to make sure that request can be theoretically executed,
 which includes:
 a) Validating request body


We cannot validate everything on the API side; relying on the fact that the
DB state is up to date is not a good idea, especially under heavy load.

In the tasks concept we could assume that all requests are executable, and
not perform any validation in the API thread at all. Instead, the
API will just create a task and return its ID to the user. The task scheduler
may perform some minor validations before the task is queued or started, for
convenience, but they should be duplicated inside the task body because there
is an arbitrary time between queuing up and starting ((c) lifeless). I assume
the scheduler will have its own thread or even process. The user will need
to poke the received ID to know the current state of their submission.
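As a sketch, the submit-then-poke flow could look like this (names and the in-memory store are illustrative, not a proposed implementation):

```python
import uuid

# In-memory stand-in for whatever persistent task store is chosen.
_TASKS = {}

def submit_task(payload):
    """API-thread side: record the task and return its ID immediately.
    No validation happens here."""
    task_id = str(uuid.uuid4())
    _TASKS[task_id] = {'state': 'QUEUED', 'payload': payload, 'error': None}
    return task_id

def run_task(task_id, validate, execute):
    """Worker/scheduler side: validation happens inside the task body,
    after an arbitrary queuing delay."""
    task = _TASKS[task_id]
    task['state'] = 'RUNNING'
    try:
        validate(task['payload'])
        execute(task['payload'])
        task['state'] = 'DONE'
    except Exception as exc:
        task['state'] = 'FAILED'
        task['error'] = str(exc)

def get_task(task_id):
    """What the user pokes to learn the current state of the submission."""
    return _TASKS[task_id]
```

A request against a missing node would simply surface as a `FAILED` task with an error message, rather than an API-side validation error.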


 b) For each of entities (e.g. nodes) touched, check that they are
 available
at the moment (at least exist).
This is arguable, as checking for entity existence requires going to
 DB.


Same here, DB round trip is a potential block, therefore this will be done
inside task (after it's queued and started) and will not affect the API.
The user will just observe the task state by poking the API (or using
callback as an option).

2. Appropriate state
 For each entity in question, ensure that it's either in a proper state
 or
 moving to a proper state.
 It would help avoid users e.g. setting deploy twice on the same node
 It will still require some kind of NodeInAWrongStateError, but we won't
 necessary need a client retry on this one.
 Allowing the entity to be _moving_ to appropriate state gives us a
 problem:
 Imagine OP1 was running and OP2 got scheduled, hoping that OP1 will come
 to desired state. What if OP1 fails? What if conductor, doing OP1
 crashes?


Let's say OP1 and OP2 are two separate tasks. Each one has the initial
state validation inside its body. Once OP2 gets its turn it will perform
validation and fail, which looks reasonable to me.


 Similar problem with checking node state.
 Imagine we schedule OP2 while we had OP1 - regular checking node state.
 OP1 discovers that node is actually absent and puts it to maintenance
 state.
 What to do with OP2?


The task will fail once it gets its turn.


 b) Can we make client wait for the results of periodic check?
That is, wait for OP1 _before scheduling_ OP2?


We will just schedule the task and the user will observe its progress; once
OP1 has finished and OP2 started, they will see a failure.


 3. Status feedback
 People would like to know, how things are going with their task.
 What they know is that their request was scheduled. Options:
 a) Poll: return some REQUEST_ID and expect users to poll some endpoint.
Pros:
- Should be easy to implement
Cons:
    - Requires persistent storage for tasks. Does AMQP allow these kinds
  of queries? If not, we'll need to duplicate tasks in the DB.
- Increased load on API instances and DB


Exactly described the tasks concept :)
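A client-side sketch of that polling contract (the `poll` callable stands in for whatever GET-task call the API would expose; names are illustrative):

```python
import time

def wait_for_task(poll, task_id, interval=0.5, timeout=60.0):
    """Keep asking the API for the task's state until it reaches a
    terminal one, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = poll(task_id)
        if task['state'] in ('DONE', 'FAILED'):
            return task
        time.sleep(interval)
    raise TimeoutError('task %s did not finish in %ss' % (task_id, timeout))
```

This is the extra API/DB load mentioned in the Cons: every waiting client issues one request per interval until its task terminates.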


 b) Callback: take endpoint, call it once task is done/fails.
Pros:
- Less load on both client and server
- Answer exactly when it's ready
Cons:
- Will not work for cli and similar
- If conductor crashes, there will be no callback.


Add to Cons:
- A callback is not reliable, since it may get lost.
We should have the ability to poke anyway, though I see a great benefit in
implementing callbacks: decreased API load.


 4. Debugging consideration
 a) This is an open question: how to debug, if we have a lot of requests
and something went wrong?


We will be able to see the queue state (btw, what about security here:
should the user be able to see all the tasks, or just their own, or all but
with others' details hidden?).


 b) One more thing to consider: how to make command like `node-show`
 aware of
scheduled transitioning, so that people don't try operations that are
doomed to failure.


node-show will always show the current state of the node, though we may check
if there are any tasks queued or running which will change the state. If any,
add a notification to the response.
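For illustration, such a notification could be attached like this (all function names here are hypothetical stand-ins, not Ironic's API):

```python
def node_show(node_id, get_node, list_tasks):
    """Return the node's current state, plus a notice when queued or
    running tasks are about to change it."""
    node = dict(get_node(node_id))
    pending = [t for t in list_tasks(node_id)
               if t['state'] in ('QUEUED', 'RUNNING')]
    if pending:
        node['notice'] = ('%d task(s) in progress; state may change'
                          % len(pending))
    return node
```

This keeps node-show cheap (one node lookup plus one task-queue lookup) while warning users away from operations doomed to fail.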


 5. Performance considerations
 a) With async approach, users will be able to schedule nearly unlimited
number of tasks, thus essentially blocking work 

[openstack-dev] [keystone] Redesign of Keystone Federation

2014-05-28 Thread David Chadwick
Hi Everyone

at the Atlanta meeting the following slides were presented during the
federation session

http://www.slideshare.net/davidwchadwick/keystone-apach-authn

It was acknowledged that the current design is sub-optimal, but was a
best first effort to get something working in time for the Icehouse
release, which it did successfully.

Now is the time to redesign federated access in Keystone in order to
allow for:
i) the inclusion of more federation protocols such as OpenID and OpenID
Connect via Apache plugins
ii) federating together multiple Keystone installations
iii) the inclusion of federation protocols directly into Keystone where
good Apache plugins don't yet exist, e.g. IETF ABFAB

The Proposed Design (1) in the slide show is the simplest change to
make, in which the Authn module has different plugins for different
federation protocols, whether via Apache or not.

The Proposed Design (2) is cleaner since the plugins are directly into
Keystone and not via the Authn module, but it requires more
re-engineering work, and it was questioned in Atlanta whether that
effort exists or not.

Kent therefore proposes that we go with Proposed Design (1). Kent will
provide drafts of the revised APIs and the re-engineered code for
inspection and approval by the group, if the group agrees to go with
this revised design.

If you have any questions about the proposed re-design, please don't
hesitate to ask

regards

David and Kristy



Re: [openstack-dev] Specifying file encoding

2014-05-28 Thread Ryan Brady


- Original Message -
 From: Martin Geisler mar...@geisler.net
 To: openstack-dev@lists.openstack.org
 Sent: Wednesday, May 28, 2014 11:35:02 AM
 Subject: Re: [openstack-dev] Specifying file encoding
 
 Ryan Brady rbr...@redhat.com writes:
 
  - Original Message -
  From: Pádraig Brady p...@draigbrady.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Sent: Wednesday, May 28, 2014 6:28:28 AM
  Subject: Re: [openstack-dev] Specifying file encoding
  
  On 05/28/2014 08:16 AM, Martin Geisler wrote:
   Hi everybody,
   
   I'm trying to get my feet wet with OpenStack development,
 
  Welcome aboard!  Thanks for contributing.  :)
 
  so I recently
   tried to submit some small patches. One small thing I noticed was that
   some files used
   
 # -*- encoding: utf-8 -*-
   
   to specify the file encoding for both Python and Emacs. Unfortunately,
   Emacs expects you to set coding, not encoding. Python is fine with
   either. I submitted a set of patches for this:
   
   * https://review.openstack.org/95862
   * https://review.openstack.org/95864
   * https://review.openstack.org/95865
   * https://review.openstack.org/95869
   * https://review.openstack.org/95871
   * https://review.openstack.org/95880
   * https://review.openstack.org/95882
   * https://review.openstack.org/95886
   
   It was pointed out to me that such a change ought to be coordinated
   better via bug(s) or the mailinglist, so here I am :)
  
  This is valid change.
  I don't see why there is any question
  as it only improves the situation for emacs
  which will pop up an error when trying to edit these files.
 
  I guess I approach this differently. When I saw this patch, my first
  thought was to validate if the line being changed needed to exist at
  all.
 
 That makes a lot of sense!
 
  If the file has valid non-ASCII characters that affect its execution,
  are absolutely required for documentation to convey a specific
  meaning, or in strings that need to translate, then I agree the change
  is valid. But in the case the characters in the file can be changed,
  it seems like the bug is the extra encoding comment itself.
 
 I see what you mean -- I also try to keep my files just ASCII for
 convenience (even though I'm from Denmark where we have the extra three
 vowels æ, ø, and å). As a brand new contributor, changing copyright
 statements seemed like a bigger change than updating the coding line :)
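For reference, the reason Python tolerates both spellings is PEP 263's recognition pattern, which matches `coding[:=]` anywhere in the comment — so `# -*- encoding: utf-8 -*-` matches via its `coding:` substring, while Emacs's file-variable parser only honors the `coding:` form. A simplified sketch of that check:

```python
import re

# Simplified form of PEP 263's declaration pattern: any comment containing
# "coding[:=] <name>" counts.  "encoding:" contains "coding:" as a
# substring, which is why Python accepts both spellings.
CODING_RE = re.compile(r'coding[=:][ \t]*([-_.a-zA-Z0-9]+)')

for line in ('# -*- coding: utf-8 -*-', '# -*- encoding: utf-8 -*-'):
    match = CODING_RE.search(line)
    print(line, '->', match.group(1))  # both extract 'utf-8'
```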
 

It helps me to think about effort, value, complexity, future maintenance,
and dependencies when evaluating changes.

  Taking tuskar for example, the files in question seem to only need
  this encoding line to support the copyright symbol.
 
  [rb@localhost tuskar]$ grep -R -i -P [^\x00-\x7F] ./*
  Binary file ./doc/source/api/img/model_v2.jpg matches
  Binary file ./doc/source/api/img/model_v4.odg matches
  Binary file ./doc/source/api/img/model.odg matches
  Binary file ./doc/source/api/img/model_v3.jpg matches
  Binary file ./doc/source/api/img/model_v3.odg matches
  Binary file ./doc/source/api/img/model_v2.odg matches
  Binary file ./doc/source/api/img/model_v4.jpg matches
  Binary file ./doc/source/api/img/model_v1.jpg matches
  Binary file ./doc/source/_static/header_bg.jpg matches
  Binary file ./doc/source/_static/header-line.gif matches
  Binary file ./doc/source/_static/openstack_logo.png matches
  ./tuskar/api/hooks.py:# Copyright © 2012 New Dream Network, LLC (DreamHost)
  ./tuskar/api/app.py:# Copyright © 2012 New Dream Network, LLC (DreamHost)
  ./tuskar/api/acl.py:# Copyright © 2012 New Dream Network, LLC (DreamHost)
  ./tuskar/common/service.py:# Copyright © 2012 eNovance
  licens...@enovance.com
  ./tuskar/tests/api/api.py:# Copyright © 2012 New Dream Network, LLC
  (DreamHost)
 
 
  In the U.S., copyright notices haven't really been needed since 1989.
  You also only need to include
  one instance of the symbol, Copyright or copr [1]. If the
  requirements for copyright are different
  outside the U.S., then I hope we capture that in the copyright wiki
  page [2]. Maybe the current info
  in that wiki needs to be updated or at least notated as to why the
  specific notice text is suggested.
 
  Unless there is another valid requirement to keep the © in the files,
  I think it's best if we just remove them altogether and eliminate the
  need to add the encoding comments at all.
 
 Sounds good to me. I'll update my script to do this and rework the patch
 sets. I've made most patches as two: one that removes coding lines that
 are currently redundant and one that adjusts the remaining lines to make
 Emacs happy.
 
 Would you want to merge the patches that simply remove the unneeded
 lines and then let me followup with patches that remove © along with the
 then unnecessary coding lines?

If I were in your position, I'd update the patches that remove lines to include 
all
of the affected files and also remove the ©.  I'd abandon the patches that 

[openstack-dev] [Swift] storage policies are upon us; soft freeze in effect

2014-05-28 Thread John Dickinson
The series of patches implementing storage policies in Swift has been proposed 
to master. The first patch set is https://review.openstack.org/#/c/96026/.

This is a major feature in Swift, and it requires a lot of work in reviewing 
and integrating it. In order to focus as reviewers, Swift is under a soft 
freeze until the storage policies patches land.

--John








[openstack-dev] [sahara] summit wrap-up: subprojects

2014-05-28 Thread Sergey Lukjanov
Hey folks,

it's a small wrap-up for the topic Sahara subprojects releasing and
versioning that was discussed partially on summit and requires some
more discussions. You can find details in [0].

 common

We'll include only one tarball for sahara on the release Launchpad
pages. All other links will be provided in the docs.

 sahara-dashboard

The merging-into-Horizon process is now in progress. We've decided that
j1 is the deadline for merging the main code parts, and during j2 all
the code should be merged into Horizon. So, if by the end of j2 some of
the work on merging sahara-dashboard into Horizon is not done, we'll
need to fall back to a separate sahara-dashboard repo release for the
Juno cycle and continue merging the code into Horizon, to be able
to completely kill the sahara-dashboard repo in the K release.

Where should we keep our UI integration tests?

 sahara-image-elements

We agreed that some common parts should be merged into the
diskimage-builder repo (like java support, ssh, etc.). The main issue
with keeping -image-elements separate is how to release them and
provide a mapping from sahara version to elements version. You can find
different options in the etherpad [0]; I'll write here about the option
that I think will work best for us.

So, the idea is that sahara-image-elements is a bunch of scripts and
tools for building images for Sahara. It's highly coupled with the plugins'
code in Sahara, so we need to align them well. The current default
decision is to keep aligned versioning like 2014.1 etc. It'll be
discussed at the weekly IRC team meeting May 29.

 sahara-extra

Keep it as is, no need to stop releasing, because we're not publishing
anything to pypi. No real need for tags.


 open questions

If you have any objections to this model, please share your thoughts
before June 3, ahead of Juno-1 (June 12), so we have enough time to apply
the selected approach.

[0] https://etherpad.openstack.org/p/juno-summit-sahara-relmngmt-backward

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



Re: [openstack-dev] [Neutron][LBaaS]TLS API support for authentication

2014-05-28 Thread Carlos Garza

On May 27, 2014, at 9:13 PM, Stephen Balukoff sbaluk...@bluebox.net
 wrote:

 Hi y'all!
 
 I would advocate that if the user asks the front-end API for the private key 
 information (ie. GET request), what they get back is the key's modulus and 
 nothing else. This should work to verify whether a given key matches a given 
 cert, and doesn't break security requirements for those who are never allowed 
 to actually touch private key data. And if a user added the key themselves 
 through the front-end API, I think it's safe to assume the responsibility for 
 keeping a copy of the key they can access lies with the user.

    I'm thinking at this point that all user interaction with their cert and key 
should be handled by barbican directly instead of through our API. It seems like 
we've punted everything but the IDs to barbican. Returning the modulus is still 
RSA-centric, though. 


 
 Having said this, of course anything which spins up virtual appliances, or 
 configures physical appliances is going to need access to the actual private 
 key. So any back-end API(s) will probably need to have different behavior 
 here.
 
 One thing I do want to point out is that with the 'transient' nature of 
 back-end guests / virtual servers, it's probably going to be important to 
 store the private key data in something non-volatile, like barbican's store. 
 While it may be a good idea to add a feature that generates a private key and 
 certificate signing request via our API someday for certain organizations' 
 security requirements, one should never have the only store for this private 
 key be a single virtual server. This is also going to be important if a 
 certificate + key combination gets re-used in another listener in some way, 
 or when horizontal scaling features get added.

    I don't think our API needs to handle the CSRs; it looks like barbican 
aspires to do this, so our API really is pretty insulated.

 
 Thanks,
 Stephen
 
 -- 
 Stephen Balukoff 
 Blue Box Group, LLC 
 (800)613-4305 x807
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [infra] Meeting Tuesday May 27th at 19:00 UTC

2014-05-28 Thread Elizabeth K. Joseph
On Mon, May 26, 2014 at 11:02 AM, Elizabeth K. Joseph
l...@princessleia.com wrote:
 Hi everyone,

 The OpenStack Infrastructure (Infra) team is hosting our weekly
 meeting on Tuesday May 27th, at 19:00 UTC in #openstack-meeting

Meeting minutes and log from our meeting yesterday available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-05-27-19.00.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-05-27-19.00.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-05-27-19.00.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com



Re: [openstack-dev] [MagnetoDB] PTL Candidacy

2014-05-28 Thread Sergey Lukjanov
Please don't reply to the nominations with +1s; voting will be done
after the nominations week has closed. If you'd like to ask some questions,
please use a separate thread / IRC.

Thanks.

On Wed, May 28, 2014 at 7:53 PM, Serge Kovaleff
sergey_kova...@symantec40.com wrote:
 +1.

 Do you plan a meetup or something? I am deeply interested in a good storage.


 On Wed, May 28, 2014 at 6:07 AM, Sergey Lukjanov slukja...@mirantis.com
 wrote:

 ack

 On Wed, May 28, 2014 at 1:34 PM, Ilya Sviridov isviri...@mirantis.com
 wrote:
  Hello openstackers,
 
  I'd like to announce  my candidacy as PTL of MagnetoDB[1] project.
 
  Several words about me:
  I'm software developer at Mirantis. I'm working with OpenStack for a
  year
  and a bit more. At the beginning it was integration and customization of
  HEAT for customer. After that I've contributed to Trove and now working
  on
  MagnetoDB 100% [2] of my time.
 
  I've started with MagnetoDB as idea [3] on Hong Kong summit and now it
  is a
  project with community of two major companies [4], with regular
  releases,
  roadmap[5] and plans for incubation.
 
  As a PTL of MagnetoDB I'll continue my work on building great
  environment
  for contributors, making MagnetoDB successful software product and
  eventually to be integrated to OpenStack.
 
  [1] https://launchpad.net/magnetodb
  [2] http://www.stackalytics.com/report/contribution/magnetodb/90
  [3]
 
  http://lists.openstack.org/pipermail/openstack-dev/2013-October/017930.html
  [4]
 
   http://www.stackalytics.com/?release=all&metric=commits&project_type=stackforge&module=magnetodb&company=&user_id=
   [5] https://etherpad.openstack.org/p/magnetodb-juno-roadmap
 
  Thank you,
  Ilya Sviridov
  isviridov @ FreeNode
 
 



 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



Re: [openstack-dev] [keystone] Redesign of Keystone Federation

2014-05-28 Thread Adam Young

On 05/28/2014 11:59 AM, David Chadwick wrote:

Hi Everyone

at the Atlanta meeting the following slides were presented during the
federation session

http://www.slideshare.net/davidwchadwick/keystone-apach-authn

It was acknowledged that the current design is sub-optimal, but was a
best first effort to get something working in time for the Icehouse
release, which it did successfully.

Now is the time to redesign federated access in Keystone in order to
allow for:
i) the inclusion of more federation protocols such as OpenID and OpenID
Connect via Apache plugins


These are underway:  Steve Mar just posted a review for OpenID Connect.

ii) federating together multiple Keystone installations
I think Keystone should be dealt with separately. Keystone is not a good 
stand-alone authentication mechanism.



iii) the inclusion of federation protocols directly into Keystone where
good Apache plugins don't yet exist, e.g. IETF ABFAB

I thought this was mostly pulling together other protocols such as RADIUS?
http://freeradius.org/mod_auth_radius/



The Proposed Design (1) in the slide show is the simplest change to
make, in which the Authn module has different plugins for different
federation protocols, whether via Apache or not.


I'd like to avoid doing non-HTTPD modules for as long as possible.



The Proposed Design (2) is cleaner since the plugins are directly into
Keystone and not via the Authn module, but it requires more
re-engineering work, and it was questioned in Atlanta whether that
effort exists or not.


The method parameter is all that is going to vary for most of the Auth 
mechanisms.  X509 and Kerberos both require special setup of the HTTP 
connection to work, which means the client and server sides need to be in 
sync; SAML, OpenID and the rest have no such requirements.




Kent therefore proposes that we go with Proposed Design (1). Kent will
provide drafts of the revised APIs and the re-engineered code for
inspection and approval by the group, if the group agrees to go with
this revised design.

If you have any questions about the proposed re-design, please don't
hesitate to ask

regards

David and Kristy






Re: [openstack-dev] [DriverLog][Stackalytics] Detailed view for vendor drivers

2014-05-28 Thread Jay Pipes

On 05/28/2014 10:20 AM, Ilya Shakhat wrote:

Hi!

At the summit we've announced a new dashboard called DriverLog [1]
http://stackalytics.com/report/driverlog that tracks all available
open-source drivers for OpenStack. DriverLog allows maintainers to
describe their drivers with list of OpenStack releases and also provide
name of external CI if available. For CI-enabled drivers the summary screen
shows a bright green mark, but it is actually more than just a sign. Under
the hood DriverLog finds the most recent merged change request and
extracts tests execution results. DriverLog can work with voting and
non-voting CIs and supports the case when one CI runs tests against several
drivers or hardware configurations.

Starting today, CI execution data becomes available to users by clicking
the driver name in the summary screen. The pop-up shows a table with time,
status and message from the external CI for every release, together with
driver details and the list of maintainers.


I know that driver verification isn't the sexiest thing in the world, 
but I just wanted to say thank you to you and your team for making this 
happen. DriverLog is a slick little dashboard that, over time, I think 
will really be a great help to the operator and developer communities alike.


Thanks,
-jay



Re: [openstack-dev] [keystone] Redesign of Keystone Federation

2014-05-28 Thread Tim Bell

 -Original Message-
 From: Adam Young [mailto:ayo...@redhat.com]
 Sent: 28 May 2014 18:23
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [keystone] Redesign of Keystone Federation
 
 On 05/28/2014 11:59 AM, David Chadwick wrote:
  Hi Everyone
 
  at the Atlanta meeting the following slides were presented during the
  federation session
 
  http://www.slideshare.net/davidwchadwick/keystone-apach-authn
 
  It was acknowledged that the current design is sub-optimal, but was a
  best first efforts to get something working in time for the IceHouse
  release, which it did successfully.
 
  Now is the time to redesign federated access in Keystone in order to

Getting some working clients for the existing implementation would also be very 
desirable :-)

I would hope that any re-design would retain backwards compatibility for those 
of us who are deploying.

Tim



[openstack-dev] [Horizon][Tuskar-UI] Location for common dashboard code?

2014-05-28 Thread Tzu-Mainn Chen
Heya,

Tuskar-UI is currently extending classes directly from openstack-dashboard.  
For example, right now
our UI for Flavors extends classes in both 
openstack_dashboard.dashboards.admin.flavors.tables and
openstack_dashboard.dashboards.admin.flavors.workflows.  In the future, this 
sort of pattern will
increase; we anticipate doing similar things with Heat code in 
openstack-dashboard.

However, since tuskar-ui is intended to be a separate dashboard that has the 
potential to live
away from openstack-dashboard, it does feel odd to directly extend 
openstack-dashboard dashboard
components.  Is there a separate place where such code might live?  Something 
similar in concept
to https://github.com/openstack/horizon/tree/master/openstack_dashboard/usage ?
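For illustration, the extension pattern in question looks roughly like this. The base class below is a hypothetical stand-in for `openstack_dashboard.dashboards.admin.flavors.tables.FlavorsTable`, so the snippet stays self-contained:

```python
# Hypothetical stand-in for the upstream Horizon table class; in tuskar-ui
# the real base would be imported from
# openstack_dashboard.dashboards.admin.flavors.tables.
class FlavorsTable(object):
    columns = ('name', 'vcpus', 'ram')

    class Meta(object):
        name = 'flavors'
        verbose_name = 'Flavors'

# The tuskar-ui side: reuse the upstream columns/behavior, override Meta.
class NodeProfilesTable(FlavorsTable):
    class Meta(FlavorsTable.Meta):
        name = 'node_profiles'
        verbose_name = 'Node Profiles'
```

Moving such shared base classes into a neutral package (analogous to the `usage` module linked above) would let both dashboards import them without tuskar-ui depending on openstack-dashboard's internal layout.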


Thanks,
Tzu-Mainn Chen



[openstack-dev] Mahout-as-a-service

2014-05-28 Thread Dat Tran
Hi everyone,

I have an idea for a new project: Mahout-as-a-service.
The main steps of this project:
- Install OpenStack
- Deploy OpenStack Sahara
- Deploy Mahout on the Sahara OpenStack system
- Build the API

Through a web or mobile interface, users can:
- Enable/disable Mahout on a Hadoop cluster
- Run Mahout jobs
- Get monitoring information related to Mahout jobs
- View statistics on service costs over time and total resource usage

The APIs will, of course, be public. I look forward to your comments. Hopefully
this summer we can do something together.

Thank you very much! :)


Re: [openstack-dev] [keystone] Redesign of Keystone Federation

2014-05-28 Thread Marco Fargetta
Hi David and Kristy,

looking at the slides, it is not clear to me why you need an authn plugin
for each Apache plugin. I have experience with only a few Apache plugins,
and they provide the possibility to customise the attributes exposed to
the application behind them. As an example, with mod_shib it is possible
to define how the information coming from the IdP should be provided
to the application. Maybe this is not possible with all plugins, but
I am wondering whether the other way around is possible. In other words,
create a single configurable authn plugin for all Apache plugins. In the
configuration you would provide the names of the attributes to look at
and their mapping to Keystone attributes.
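For illustration, a single configurable authn plugin along those lines might boil down to an attribute-mapping table like the sketch below (the attribute names and mapping format are assumptions for illustration, not Keystone's actual configuration):

```python
# Illustrative sketch of one configurable authn plugin that maps whatever
# environment attributes an Apache module exposes to Keystone attributes.
# Both the attribute names and the mapping format below are assumptions.

ATTRIBUTE_MAP = {
    # Apache env var (e.g. set by mod_shib)  ->  Keystone attribute
    "REMOTE_USER": "user_name",
    "Shib-Identity-Provider": "identity_provider",
    "eppn": "email",
}

def map_environ(environ, attribute_map=ATTRIBUTE_MAP):
    """Translate Apache-provided attributes into Keystone terms."""
    return {keystone_attr: environ[apache_attr]
            for apache_attr, keystone_attr in attribute_map.items()
            if apache_attr in environ}

env = {"REMOTE_USER": "alice", "eppn": "alice@example.org", "unrelated": "x"}
print(map_environ(env))
```

One mapping table per deployed Apache module would then replace one hand-written plugin per module.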

BTW: design 2 seems better, although it requires more work.

Cheers,
Marco

On Wed, May 28, 2014 at 04:59:48PM +0100, David Chadwick wrote:
 Hi Everyone
 
 at the Atlanta meeting the following slides were presented during the
 federation session
 
 http://www.slideshare.net/davidwchadwick/keystone-apach-authn
 
 It was acknowledged that the current design is sub-optimal, but was a
 best first effort to get something working in time for the Icehouse
 release, which it did successfully.
 
 Now is the time to redesign federated access in Keystone in order to
 allow for:
 i) the inclusion of more federation protocols such as OpenID and OpenID
 Connect via Apache plugins
 ii) federating together multiple Keystone installations
 iii) the inclusion of federation protocols directly into Keystone where
 good Apache plugins don't yet exist, e.g. IETF ABFAB
 
 The Proposed Design (1) in the slide show is the simplest change to
 make, in which the Authn module has different plugins for different
 federation protocols, whether via Apache or not.
 
 The Proposed Design (2) is cleaner since the plugins are directly into
 Keystone and not via the Authn module, but it requires more
 re-engineering work, and it was questioned in Atlanta whether that
 effort exists or not.
 
 Kent therefore proposes that we go with Proposed Design (1). Kent will
 provide drafts of the revised APIs and the re-engineered code for
 inspection and approval by the group, if the group agrees to go with
 this revised design.
 
 If you have any questions about the proposed re-design, please don't
 hesitate to ask
 
 regards
 
 David and Kristy
 

-- 

Eng. Marco Fargetta, PhD
 
Istituto Nazionale di Fisica Nucleare (INFN)
Catania, Italy

EMail: marco.farge...@ct.infn.it






Re: [openstack-dev] [Mistral] Cleaning up configuration settings

2014-05-28 Thread W Chan
Thanks for following up.  I will publish this change as a separate patch
from my current config cleanup.


On Wed, May 28, 2014 at 2:38 AM, Renat Akhmerov rakhme...@mirantis.comwrote:


 On 28 May 2014, at 13:51, Angus Salkeld angus.salk...@rackspace.com
 wrote:

  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA1
 
  On 17/05/14 02:48, W Chan wrote:
   Regarding config opts for keystone, the keystoneclient middleware already
   registers the opts at
   https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/middleware/auth_token.py#L325
   under a keystone_authtoken group in the config file.  Currently, Mistral
   registers the opts again at
   https://github.com/stackforge/mistral/blob/master/mistral/config.py#L108
   under a different configuration group.  Should we remove the duplicate from
   Mistral and refactor the reference to keystone configurations to the
   keystone_authtoken group?  This seems more consistent.
 
  I think that is the only thing that makes sense. Seems like a bug
  waiting to happen having the same options registered twice.
 
  If some user used to other projects comes and configures
  keystone_authtoken then will their config take effect?
  (how much confusion will that generate)..
 
  I'd suggest just using the one that is registered keystoneclient.

 Ok, I had a feeling it was needed for some reason. But after having
 another look at this I think this is really a bug. Let’s do it.

 Thanks guys
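The single-group approach being agreed on above can be sketched minimally as follows (configparser is used here for brevity; real code would register options via oslo.config, and the option names are illustrative):

```python
import configparser

# Sketch of the consistency argument: both the service and the keystoneclient
# middleware read the one canonical [keystone_authtoken] section, so operators
# configure the auth settings exactly once and nothing is registered twice.
CONF_TEXT = """
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
admin_user = mistral
"""

conf = configparser.ConfigParser()
conf.read_string(CONF_TEXT)

auth = conf["keystone_authtoken"]
print(auth["auth_host"], auth["auth_port"], auth["admin_user"])
```

With a duplicate group, a user who configures `keystone_authtoken` out of habit would silently have no effect, which is exactly the "bug waiting to happen" Angus describes.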



[openstack-dev] [marconi] Removing Get and Delete Messages by ID

2014-05-28 Thread Kurt Griffiths
Crew, as discussed in the last team meeting, I have updated the API v1.1 spec
(https://wiki.openstack.org/wiki/Marconi/specs/api/v1.1) to remove the
ability to get one or more messages by ID. This was done to remove unnecessary
complexity from the API, and to make it easier to support different types of 
message store backends.

However, this now leaves us with asymmetric semantics. On the one hand, we do 
not allow retrieving messages by ID, but we still support deleting them by ID. 
It seems to me that deleting a message only makes sense in the context of a 
claim or pop operation. In the case of a pop, the message is already deleted by 
the time the client receives it, so I don’t see a need for including a message 
ID in the response. When claiming a batch of messages, however, the client 
still needs some way to delete each message after processing it. In this case, 
we either need to allow the client to delete an entire batch of messages using 
the claim ID, or we still need individual message IDs (hrefs) that can be 
DELETEd.

Deleting a batch of messages can be accomplished in V1.0 using “delete multiple 
messages by ID”. Regardless of how it is done, I’ve been wondering if it is 
actually an anti-pattern; if a worker crashes after processing N messages, but 
before deleting those same N messages, the system is left with several messages 
that another worker will pick up and potentially reprocess, although the work 
has already been done. If the work is idempotent, this isn’t a big deal. 
Otherwise, the client will have to have a way to check whether a message has 
already been processed, ignoring it if it has. But whether it is 1 message or N 
messages left in a bad state by the first worker, the other worker has to 
follow the same logic, so perhaps it would make sense after all to simply allow 
deleting entire batches of claimed messages by claim ID, and not worrying about 
providing individual message hrefs/IDs for deletion.
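Deleting an entire claimed batch by claim ID, as suggested, could look roughly like this toy model (purely illustrative; not Marconi's actual data model or API):

```python
import uuid

# Toy in-memory model of "delete claimed messages by claim ID": workers never
# need individual message hrefs, only the claim ID they were handed.

class Queue:
    def __init__(self):
        self.messages = {}   # msg_id -> body
        self.claims = {}     # claim_id -> set of claimed msg_ids

    def post(self, body):
        msg_id = uuid.uuid4().hex
        self.messages[msg_id] = body
        return msg_id

    def claim(self, limit):
        """Hand a batch of messages to a worker under one claim ID."""
        claim_id = uuid.uuid4().hex
        claimed = set(list(self.messages)[:limit])
        self.claims[claim_id] = claimed
        return claim_id, [self.messages[m] for m in claimed]

    def delete_claim(self, claim_id):
        """Delete the whole batch at once, instead of message by message."""
        for msg_id in self.claims.pop(claim_id):
            self.messages.pop(msg_id, None)

q = Queue()
for i in range(3):
    q.post(f"job-{i}")
claim_id, batch = q.claim(limit=2)
# ... worker processes the whole batch, then acknowledges it in one call ...
q.delete_claim(claim_id)
print(len(q.messages))
```

The all-or-nothing acknowledgement also makes the crash scenario above symmetric: either the whole batch is redelivered or none of it is.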

With all this in mind, I’m starting to wonder if I should revert my changes to 
the spec, and wait to address these changes in the v2.0 API, since it seems 
that to do this right, we need to make some changes that are anything but 
“minor” (for a minor release).

What does everyone think? Should we postpone this work to 2.0?

—Kurt



[openstack-dev] Bad perf on swift servers...

2014-05-28 Thread Shyam Prasad N
Hi,

Confused about the right mailing list to ask this question. So including
both openstack and openstack-dev in the CC list.

I'm running a swift cluster with 4 nodes.
All 4 nodes are symmetrical. i.e. proxy, object, container, and account
servers running on each with similar storage configuration and conf files.
The I/O traffic to this cluster is mainly to upload dynamic large objects
(typically 1GB chunks (sub-objects) and around 5-6 chunks under each large
object).

The setup is running and serving data; but I've begun to see a few perf
issues, as the traffic increases. I want to understand the reason behind
some of these issues, and make sure that there is nothing wrong with the
setup configuration.

1. High CPU utilization from rsync. I have set replica count in each of
account, container, and object rings to 2. From what I've read, this
assigns 2 devices for each partition in the storage cluster. And for each
PUT, the 2 replicas should be written synchronously. And for GET, the I/O
is through one of the object servers. So nothing here should be
asynchronous in nature. Then what is causing the rsync traffic here?

I recently ran a ring rebalance command after adding a node. Could
this be causing the issue?

2. High CPU utilization from swift-account-server threads. All my frontend
traffic uses 1 account and 1 container on the servers. There are hundreds of
such objects in the same container. I don't understand what's keeping the
account servers busy.

3. I've started noticing that the 1GB object transfers of the frontend
traffic are taking significantly more time than they used to (more than
double the time). Could this be because I'm using the same subnet for both
the internal and the frontend traffic?

4. Can someone provide me some pointers/tips to improving perf for my
cluster configuration? (I guess I've given out most details above. Feel
free to ask if you need more details)

As always, thanks in advance for your replies. Appreciate the support. :)
-- 
-Shyam


Re: [openstack-dev] [Ironic] Random thoughts on asynchronous API spec

2014-05-28 Thread Maksym Lobur
BTW a very similar discussion is going in Neutron community right now,
please find a thread under the *[openstack-dev] [Neutron] Introducing task
oriented workflows* label.

Best regards,
Max Lobur,
Python Developer, Mirantis, Inc.

Mobile: +38 (093) 665 14 28
Skype: max_lobur

38, Lenina ave. Kharkov, Ukraine
www.mirantis.com
www.mirantis.ru


On Wed, May 28, 2014 at 6:56 PM, Maksym Lobur mlo...@mirantis.com wrote:

 Hi All,

 You've raised a good discussion; something similar was already started
 back in February. Could someone please find the long etherpad with the
 discussion between Deva and Lifeless? As I recall, most of the points
 mentioned above have good comments there.

 Up to this point I have only one idea for how to elegantly address these
 problems: a task concept, and probably a scheduler service, which does not
 necessarily need to be separate from the API at the moment. (After all, we
 already have a hash ring on the API side, which is a kind of scheduler,
 right?) It was already proposed earlier, but I would like to try to fit all
 these issues into this concept.


 1. Executability
 We need to make sure that request can be theoretically executed,
 which includes:
 a) Validating request body


 We cannot validate everything on the API side, relying on the fact that DB
 state is actual is not a good idea, especially under heavy load.

 In the task concept we could assume that all requests are executable, and
 not perform any validation in the API thread at all. Instead, the API will
 just create a task and return its ID to the user. The task scheduler
 may perform some minor validations before the task is queued or started for
 convenience, but they should be duplicated inside the task body because there
 is an arbitrary time between queuing and start ((c) lifeless). I assume
 the scheduler will have its own thread or even process. The user will need
 to poll the received ID to learn the current state of the submission.


 b) For each of entities (e.g. nodes) touched, check that they are
 available
at the moment (at least exist).
This is arguable, as checking for entity existence requires going to
 DB.


 Same here: a DB round trip is a potential block, therefore this will be done
 inside the task (after it's queued and started) and will not affect the API.
 The user will just observe the task state by polling the API (or via a
 callback as an option).

 2. Appropriate state
 For each entity in question, ensure that it's either in a proper state
 or
 moving to a proper state.
 It would help avoid users e.g. setting deploy twice on the same node
 It will still require some kind of NodeInAWrongStateError, but we won't
 necessary need a client retry on this one.
 Allowing the entity to be _moving_ to appropriate state gives us a
 problem:
 Imagine OP1 was running and OP2 got scheduled, hoping that OP1 will come
 to desired state. What if OP1 fails? What if conductor, doing OP1
 crashes?


 Let's say OP1 and OP2 are two separate tasks. Each one has the initial
 state validation inside its body. Once OP2 gets its turn it will perform
 validation and fail, which looks reasonable to me.


 Similar problem with checking node state.
 Imagine we schedule OP2 while we had OP1 - regular checking node state.
 OP1 discovers that node is actually absent and puts it to maintenance
 state.
 What to do with OP2?


 The task will fail once it gets its turn.


 b) Can we make client wait for the results of periodic check?
That is, wait for OP1 _before scheduling_ OP2?


 We will just schedule the task and the user will observe its progress;
 once OP1 has finished and OP2 started, they will see the failure.


 3. Status feedback
 People would like to know, how things are going with their task.
 What they know is that their request was scheduled. Options:
 a) Poll: return some REQUEST_ID and expect users to poll some endpoint.
Pros:
- Should be easy to implement
Cons:
- Requires persistent storage for tasks. Does AMQP allow to do this
 kinds
  of queries? If not, we'll need to duplicate tasks in DB.
- Increased load on API instances and DB


 Exactly described the tasks concept :)


 b) Callback: take endpoint, call it once task is done/fails.
Pros:
- Less load on both client and server
- Answer exactly when it's ready
Cons:
- Will not work for cli and similar
- If conductor crashes, there will be no callback.


 Add to Cons:
 - Callbacks are not reliable, since one may get lost.
 We should have the ability to poll anyway, though I see a great benefit
 in implementing callbacks: decreasing API load.


 4. Debugging consideration
 a) This is an open question: how to debug, if we have a lot of requests
and something went wrong?


 We will be able to see the queue state (by the way, what about security
 here: should a user be able to see all tasks, or just their own, or all
 tasks but with others' details hidden?).


 b) One more thing to consider: how to make command like 

Re: [openstack-dev] [sahara] summit wrap-up: subprojects

2014-05-28 Thread Matthew Farrellee

On 05/28/2014 12:02 PM, Sergey Lukjanov wrote:

Hey folks,

here's a small wrap-up of the topic "Sahara subprojects releasing and
versioning", which was partially discussed at the summit and requires some
more discussion. You can find details in [0].


common


We'll include only one tarball for sahara to the release launchpad
pages. All other links will be provided in docs.


safe to assume this is in addition to the client tarball?



sahara-dashboard


The merging to Horizon process is now in progress. We've decided that
j1 is the deadline for merging main code parts and during the j2 all
the code should be merged into Horizon, so, if in time of j2 we'll
have some work on merging sahara-dashboard to Horizon not done we'll
need to fallback to the separated sahara-dashboard repo release for
Juno cycle and continue merging the code into the Horizon to be able
to completely kill sahara-dashboard repo in K release.


we really need to kill sahara-dashboard before the juno release



Where we should keep our UI integration tests?


ideally w/ the code it tests, so horizon. are there problems w/ that 
approach?


as a fallback they can go into the sahara repo



sahara-image-elements


We're agreed that some common parts should be merged into the
diskimage-builder repo (like java support, ssh, etc.). The main issue
of keeping -image-elements separated is how to release them and
provide mapping sahara version - elements version. You can find
different options in etherpad [0], I'll write here about the option
that I think will work best for us.

So, the idea is that sahara-image-elements is a bunch of scripts and
tools for building images for Sahara. It's highly coupled with the plugins'
code in Sahara, so we need to align them well. The current default
decision is to keep versioning aligned (2014.1, etc.). It'll be
discussed at the weekly IRC team meeting on May 29.


i vote for merging sahara-image-elements into the sahara repo and 
keeping the strategic direction that common-enough elements get pushed 
to diskimage-builder




sahara-extra


Keep it as is, no need to stop releasing, because we're not publishing
anything to pypi. No real need for tags.


we still need to figure out the examples and swift plugin, but seems 
reasonable to punt that from the juno cycle if there is no bandwidth




open questions


If you have any objections to this model, please share your thoughts
before June 3, ahead of Juno-1 (June 12), so that there is enough time to
apply the selected approach.

[0] https://etherpad.openstack.org/p/juno-summit-sahara-relmngmt-backward


so ideal situation imho -

 . sahara (includes image elements and possibly ui tests)
 . python-saharaclient (as before)
 . sahara-extra (handle later)
 . horizon (everything that was in sahara-dashboard)

this misses the puppet modules. possibly they should also be merged into 
the sahara repo.


best,


matt




Re: [openstack-dev] [neutron] BPs for Juno-1

2014-05-28 Thread Edgar Magana Perdomo (eperdomo)
Nader,

My understanding is that you DO need a BP for those changes as well, since 
python-neutronclient is a separate project, but it should be easier to merge 
because the main one in Neutron has already been approved.

Edgar

From: Nader Lahouti nader.laho...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, May 27, 2014 at 3:56 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] BPs for Juno-1

Thanks Kyle.

One more question: I have a BP for changes in python-neutronclient. Do I need 
to add a neutron-spec for that?

Thanks,
Nader.




On Tue, May 27, 2014 at 10:08 AM, Kyle Mestery mest...@noironetworks.com wrote:
On Tue, May 27, 2014 at 12:01 PM, Nader Lahouti nader.laho...@gmail.com wrote:
 Hi Kyle,

 ... But I just wanted to highlight

 that I removed a large number of BPs from targeting Juno-1 now which
 did not have specifications linked to them...

 Will those BP be reviewed after updating the link to the specification?

The BPs will be reviewed per normal, but Juno-1 is 2 weeks out now, so
realistically given review cycles, it's unlikely additional things
(unless small) will land in Juno-1.

Thanks,
Kyle


 Thanks,
 Nader.





 On Tue, May 27, 2014 at 7:14 AM, Kyle Mestery mest...@noironetworks.com wrote:
 wrote:

 Hi Neutron developers:

 I've spent some time cleaning up the BPs for Juno-1, and they are
 documented at the link below [1]. There are a large number of BPs
 currently under review right now in neutron-specs. If we land some of
 those specs this week, it's possible some of these could make it into
 Juno-1, pending review cycles and such. But I just wanted to highlight
 that I removed a large number of BPs from targeting Juno-1 now which
 did not have specifications linked to them nor specifications which
 were actively under review in neutron-specs.

 Also, a gentle reminder that the process for submitting specifications
 to Neutron is documented here [2].

 Thanks, and please reach out to me if you have any questions!

 Kyle

 [1] https://launchpad.net/neutron/+milestone/juno-1
 [2] https://wiki.openstack.org/wiki/Blueprints#Neutron








Re: [openstack-dev] [MagnetoDB] MagnetoDB events notifications

2014-05-28 Thread Charles Wang
Hi Flavio,

Thank you very much for taking the time to review the MagnetoDB Notification
spec. Regarding Oslo Notifier vs. Oslo Messaging, could you please provide links
to example projects showing how Oslo Messaging's Notifier component is
used in OpenStack? I noticed Oslo Notifier is being graduated into Oslo
Messaging, but it seems both are actively being developed.

https://blueprints.launchpad.net/oslo/+spec/graduate-notifier-middleware

Charles Wang
charles_w...@symantec.com


On 5/28/14, 8:17 AM, Flavio Percoco fla...@redhat.com wrote:

On 23/05/14 08:55 -0700, Charles Wang wrote:
Folks,

Please take a look at the initial draft of MagnetoDB Events and
Notifications
wiki page:  https://wiki.openstack.org/wiki/MagnetoDB/notification. Your
feedback will be appreciated.

Just one nit.

The wiki page mentions that Oslo Notifier will be used. Oslo Notifier
is on its way to deprecation. Instead, oslo.messaging[0] should be used.

[0] http://docs.openstack.org/developer/oslo.messaging/


-- 
@flaper87
Flavio Percoco




Re: [openstack-dev] [Openstack-dev][Neutron] Port Mirroring Extension in Neutron

2014-05-28 Thread Anil Rao
Great!

Should this also be added to the Neutron milestones/blueprints page for Juno-2?

Thanks,
Anil

From: Vinay Yadhav [mailto:vinayyad...@gmail.com]
Sent: Wednesday, May 28, 2014 5:13 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Openstack-dev][Neutron] Port Mirroring Extension 
in Neutron

Hi,

Issue resolved. The specification is now submitted for review: 
https://review.openstack.org/96149

Cheers,
main(i){putchar((5852758((i-1)/2)*8)-!(1i)*'\r')^89main(++i);}

On Wed, May 28, 2014 at 12:16 PM, Vinay Yadhav vinayyad...@gmail.com wrote:
Hi all,

I am experiencing an issue when I try to commit the spec for review.

This is the message that i get:

 fatal: ICLA contributor agreement requires current contact information.

Please review your contact information:

  https://review.openstack.org/#/settings/contact


fatal: The remote end hung up unexpectedly
ericsson@ericsson-VirtualBox:~/openstack_neutron/checkin/neutron-specs/specs/juno$
 git review
fatal: ICLA contributor agreement requires current contact information.

Please review your contact information:

  https://review.openstack.org/#/settings/contact


fatal: The remote end hung up unexpectedly

I am trying to see what has gone wrong with my account.

In the meantime, please use the attached spec for the IRC meeting today.

I will try to fix the issue with my account soon and commit it for review.

Cheers,

main(i){putchar((5852758((i-1)/2)*8)-!(1i)*'\r')^89main(++i);}

On Wed, May 21, 2014 at 7:38 PM, Anil Rao anil@gigamon.com wrote:
Thanks Vinay. I’ll review the spec and get back with my comments soon.

-Anil

From: Vinay Yadhav [mailto:vinayyad...@gmail.com]
Sent: Wednesday, May 21, 2014 10:23 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Openstack-dev][Neutron] Port Mirroring Extension 
in Neutron

Hi,

I am attaching the first version of the neutron spec for Tap-as-a-Service (Port 
Mirroring).

It will be formally commited soon in git.

Cheers,
main(i){putchar((5852758((i-1)/2)*8)-!(1i)*'\r')^89main(++i);}

On Tue, May 20, 2014 at 7:12 AM, Kanzhe Jiang kanzhe.ji...@bigswitch.com wrote:
Vinay's proposal was based on OVS's mirroring feature.

On Mon, May 19, 2014 at 9:11 PM, YAMAMOTO Takashi yamam...@valinux.co.jp wrote:
 Hi,

 I am Vinay, working with Ericsson.

 I am interested in the following blueprint regarding port mirroring
 extension in neutron:
 https://blueprints.launchpad.net/neutron/+spec/port-mirroring

 I am close to finishing an implementation for this extension in OVS plugin
 and would be submitting a neutron spec related to the blueprint soon.
does your implementation use OVS' mirroring functionality?
or is it flow-based?

YAMAMOTO Takashi


 I would like to know other who are also interested in introducing Port
 Mirroring extension in neutron.

 It would be great if we can discuss and collaborate in development and
 testing this extension

 I am currently attending the OpenStack Summit in Atlanta, so if any of you
 are interested in the blueprint, we can meet here in the summit and discuss
 how to proceed with the blueprint.

 Cheers,
 main(i){putchar((5852758((i-1)/2)*8)-!(1i)*'\r')^89main(++i);}



--
Kanzhe Jiang
MTS at BigSwitch



Re: [openstack-dev] [sahara] summit wrap-up: backward compat

2014-05-28 Thread Michael McCune


- Original Message -
  Open questions
 
 1. How should we handle addition of new functionality to the API,
 should we bump minor version and just add new endpoints?

I think we should not include the minor revision number in the URL. Looking at 
some of the other projects (nova, keystone), it looks like the preference is to 
make the version endpoint return information about the specific version 
implemented. Going forward, if we choose to bump the minor version for small 
features, we can just change what the version endpoint returns. Any client 
would then be able to decide whether it can use newer features based on the 
version reported in the return value. If we maintain a consistent version API 
endpoint, then I don't see an issue with increasing the minor version as new 
features are added. But I only endorse this if we decide to solidify the 
version endpoint (e.g. /v2, not /v2.1).

I realize this creates some confusion as we already have /v1 and /v1.1. I'm 
guessing we might subsume v1.1 at a point in time where we choose to deprecate.
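The stable-endpoint idea can be sketched as follows (the response format and feature names here are assumptions for illustration, not Sahara's actual API):

```python
# Sketch: the URL stays at /v2, and the endpoint's response body reports the
# implemented minor version plus an explicit feature list, so clients can
# feature-detect. Both fields below are assumed, not real Sahara fields.
VERSION_RESPONSE = {"id": "v2", "version": "2.2",
                    "features": ["base", "new-thing"]}

def client_supports(response, feature):
    """Client-side check: is this feature usable against this server?"""
    major, minor = (int(x) for x in response["version"].split("."))
    return feature in response.get("features", []) and (major, minor) >= (2, 1)

print(client_supports(VERSION_RESPONSE, "new-thing"))
```

With this, bumping the minor version changes only the response body, never the URL clients are coded against.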

 2. For which period of time should we keep deprecated API and client for it?

Not sure what the standard for OpenStack projects is, but I would imagine we 
keep the deprecated API version for one release to give users time to migrate.

 3. How to publish all images and/or keep stability of building images
 for plugins?

This is a good question; I don't have a strong opinion at this time. My gut 
feeling is that we should maintain official images somewhere, but I realize 
this introduces more maintenance work.

regards,
mike



Re: [openstack-dev] [Ironic] Random thoughts on asynchronous API spec

2014-05-28 Thread Devananda van der Veen
While I appreciate the many ideas being discussed here (some of which we've
explored previously and agreed to continue exploring), there is a
fundamental difference vs. what I propose in that spec. I believe that what
I'm proposing will be achievable without any significant visible changes in
the API -- no new API end points or resources, and the client interaction
will be nearly the same. A few status codes may be different in certain
circumstances -- but it will not require a new major version of the REST
API. And it solves a scalability and stability problem that folks are
encountering today. (It seems my spec didn't describe those problems well
enough -- I'm updating it now.)

Cheers,
Devananda



On Wed, May 28, 2014 at 10:14 AM, Maksym Lobur mlo...@mirantis.com wrote:

 BTW a very similar discussion is going in Neutron community right now,
 please find a thread under the *[openstack-dev] [Neutron] Introducing
 task oriented workflows* label.

 Best regards,
 Max Lobur,
 Python Developer, Mirantis, Inc.

 Mobile: +38 (093) 665 14 28
 Skype: max_lobur

 38, Lenina ave. Kharkov, Ukraine
 www.mirantis.com
 www.mirantis.ru


 On Wed, May 28, 2014 at 6:56 PM, Maksym Lobur mlo...@mirantis.com wrote:

 Hi All,

 You've raised a good discussion; something similar was already started
 back in February. Could someone please find the long etherpad with the
 discussion between Deva and Lifeless? As I recall, most of the points
 mentioned above have good comments there.

 Up to this point I have only one idea for how to elegantly address these
 problems: a task concept, and probably a scheduler service, which does not
 necessarily need to be separate from the API at the moment. (After all, we
 already have a hash ring on the API side, which is a kind of scheduler,
 right?) It was already proposed earlier, but I would like to try to fit all
 these issues into this concept.


 1. Executability
 We need to make sure that request can be theoretically executed,
 which includes:
 a) Validating request body


 We cannot validate everything on the API side, relying on the fact that
 DB state is actual is not a good idea, especially under heavy load.

 In the task concept we could assume that all requests are executable, and
 not perform any validation in the API thread at all. Instead, the API will
 just create a task and return its ID to the user. The task scheduler
 may perform some minor validations before the task is queued or started for
 convenience, but they should be duplicated inside the task body because there
 is an arbitrary time between queuing and start ((c) lifeless). I assume
 the scheduler will have its own thread or even process. The user will need
 to poll the received ID to learn the current state of the submission.


 b) For each of entities (e.g. nodes) touched, check that they are
 available
at the moment (at least exist).
This is arguable, as checking for entity existence requires going to
 DB.


 Same here: a DB round trip is a potential block, so this will be done
 inside the task (after it is queued and started) and will not affect the
 API. The user will just observe the task state by polling the API (or,
 optionally, via a callback).

 2. Appropriate state
 For each entity in question, ensure that it's either in a proper state
 or
 moving to a proper state.
 It would help avoid users e.g. setting deploy twice on the same node
 It will still require some kind of NodeInAWrongStateError, but we won't
 necessary need a client retry on this one.
 Allowing the entity to be _moving_ to appropriate state gives us a
 problem:
 Imagine OP1 was running and OP2 got scheduled, hoping that OP1 will come
 to desired state. What if OP1 fails? What if conductor, doing OP1
 crashes?


 Let's say OP1 and OP2 are two separate tasks. Each one has the initial
 state validation inside its body. Once OP2 gets its turn it will perform
 the validation and fail, which looks reasonable to me.


 Similar problem with checking node state.
 Imagine we schedule OP2 while we had OP1 - regular checking node state.
 OP1 discovers that node is actually absent and puts it to maintenance
 state.
 What to do with OP2?


 The task will fail once it gets its turn.


 b) Can we make client wait for the results of periodic check?
That is, wait for OP1 _before scheduling_ OP2?


 We will just schedule the task and the user will observe its progress;
 once OP1 has finished and OP2 has started, they will see the failure.


 3. Status feedback
 People would like to know, how things are going with their task.
 What they know is that their request was scheduled. Options:
 a) Poll: return some REQUEST_ID and expect users to poll some endpoint.
Pros:
- Should be easy to implement
Cons:
    - Requires persistent storage for tasks. Does AMQP allow these
      kinds of queries? If not, we'll need to duplicate tasks in the DB.
- Increased load on API instances and DB


 That exactly describes the tasks concept :)
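 The poll-based flow being discussed can be sketched in a few lines of
 Python. The names below (TaskManager, submit, get_state) are illustrative,
 not Ironic's actual API, and the worker thread is joined immediately only
 to keep the example deterministic:

```python
import threading
import uuid

class TaskManager:
    """Toy in-memory version of the tasks concept: submit() returns an ID
    right away, all validation happens inside the task body, and the
    caller polls get_state() with that ID."""

    def __init__(self):
        self._states = {}
        self._lock = threading.Lock()

    def submit(self, fn, *args):
        task_id = str(uuid.uuid4())
        with self._lock:
            self._states[task_id] = "queued"

        def run():
            with self._lock:
                self._states[task_id] = "running"
            try:
                fn(*args)  # validation happens here, not in the API thread
                final = "done"
            except Exception:
                final = "failed"
            with self._lock:
                self._states[task_id] = final

        worker = threading.Thread(target=run)
        worker.start()
        worker.join()  # joined here only so the example is deterministic
        return task_id

    def get_state(self, task_id):
        with self._lock:
            return self._states[task_id]


def deploy(node_state):
    # the "appropriate state" check lives in the task body itself
    if node_state != "available":
        raise ValueError("node is in the wrong state")


manager = TaskManager()
ok = manager.submit(deploy, "available")
bad = manager.submit(deploy, "maintenance")
print(manager.get_state(ok))   # done
print(manager.get_state(bad))  # failed
```

 Note how the second submission still gets a task ID back immediately; the
 wrong-state failure only shows up when the caller polls.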


 b) Callback: take endpoint, 

Re: [openstack-dev] Designate Incubation Request

2014-05-28 Thread Carl Baldwin
Does this make sense in Neutron?  In my opinion it doesn't.

DNSaaS is external to Neutron and is independent.  It serves DNS
requests that can come from the internet just as well as they can come
from VMs in the cloud (but through the network external to the cloud).
 It can serve IPs for cloud resources just as well as it can serve IPs
for resources outside the cloud. The services are separated by the
external network (from Neutron's perspective).

Neutron only provides very limited DNS functionality, forwarding DNS
queries to an external resolver so that VMs can look up DNS names. It
injects names and IPs for VMs on the same network, but currently this
needs some work in Neutron. I don't think it makes sense for Neutron
to provide an external-facing DNS service.
Neutron is about moving network traffic within a cloud and between the
cloud and external networks.

My $0.02.

Carl

On Tue, May 27, 2014 at 6:42 PM, Joe Gordon joe.gord...@gmail.com wrote:



 On Sat, May 24, 2014 at 10:24 AM, Hayes, Graham graham.ha...@hp.com wrote:


 Hi all,

 Designate would like to apply for incubation status in OpenStack.


 Our application is here:
 https://wiki.openstack.org/wiki/Designate/Incubation_Application


 Based on
 http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements.rst
 I have a few questions:

 * You mention nova's DNS capabilities as not being adequate. One of the
 incubation requirements is:


   Project should not inadvertently duplicate functionality present in other
   OpenStack projects. If they do, they should have a clear plan and
 timeframe
   to prevent long-term scope duplication


 So what is the plan for this?

 * Can you expand on why this doesn't make sense in Neutron when things like
 LBaaS do?

 * Your application doesn't cover all the items raised in the incubation
 requirements list. For example the QA requirement of


  Project must have a basic devstack-gate job set up



   which as far as I can tell isn't really there, although there appears to
 be a devstack-based job run as third party which in at least one case
 didn't run on a merged patch (https://review.openstack.org/#/c/91115/)




 As part of our application we would like to apply for a new program. Our
 application for the program is here:

 https://wiki.openstack.org/wiki/Designate/Program_Application

 Designate is a DNS as a Service project, providing end users,
 developers, and administrators with an easy-to-use REST API to manage
 their DNS zones and records.

 Thanks,

 Graham

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







Re: [openstack-dev] [sahara] summit wrap-up: backward compat

2014-05-28 Thread Matthew Farrellee

On 05/28/2014 09:14 AM, Sergey Lukjanov wrote:

Hey folks,

here's a small wrap-up of the two topics, Sahara backward compat and
Hadoop cluster backward compatibility, both of which were discussed at
the design summit; etherpad [0] contains info about them. There are some
open questions listed at the end of this email; please don't skip them :)


Sahara backward compat


We are keeping released APIs stable as of the Icehouse release. So, for
now we have one stable API, v1.1 (with v1.0 as a subset of it). Any change
to existing semantics requires a new API version; how to handle additions
is an open question. As part of the API stability decision, the Python
client should work with all previous Sahara versions. The API of
python-saharaclient should itself be stable, because we aren't pinning
the client version to an OpenStack release; so client v123 shouldn't
change the API it exposes to users who are working with stable-release
REST API versions.


for juno we should just have a v1 api (there can still be a v1.1 
endpoint, but it should be deprecated), and maybe a v2 api


+1 any semantic changes require new major version number

+1 api should only have a major number (no 1.1 or 2.1)



Hadoop cluster backward compat


It was decided to keep released versions of cluster (Hadoop) plugins for
at least the next release. It means that if we have vanilla-2.0.1
released as part of Icehouse, then we can remove its support only after
releasing it as part of Juno with a note that it is deprecated and will
not be available in the next release. Additionally, we've decided to add
some docs with upgrade recommendations.


we should only be producing images for the currently supported plugin 
versions. images for deprecated versions can be found with the releases 
where the version wasn't deprecated.



best,


matt


Open questions


1. How should we handle addition of new functionality to the API,
should we bump minor version and just add new endpoints?
2. For what period of time should we keep a deprecated API and the client for it?
3. How to publish all images and/or keep stability of building images
for plugins?

[0] https://etherpad.openstack.org/p/juno-summit-sahara-relmngmt-backward

Thanks.






Re: [openstack-dev] [neutron] BPs for Juno-1

2014-05-28 Thread Nader Lahouti
Hi Edgar,

Thanks for the reply. I do have a separate BP for the neutronclient (
https://blueprints.launchpad.net/python-neutronclient/+spec/python-neutronclient-for-cisco-dfa)
but I didn't file any spec for it in neutron-specs as I thought it was
separate. I don't know whether the existing BP is enough or whether it
should be added to neutron-specs.

Thanks,
Nader.




Re: [openstack-dev] [neutron] Supporting retries in neutronclient

2014-05-28 Thread Paul Ward

Would it be feasible to make the retry logic only apply to read-only
operations?  This would still require a nova change to specify the number
of retries, but it'd also prevent invokers from shooting themselves in the
foot if they call for a write operation.



Aaron Rosen aaronoro...@gmail.com wrote on 05/27/2014 09:40:00 PM:

 From: Aaron Rosen aaronoro...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 05/27/2014 09:44 PM
 Subject: Re: [openstack-dev] [neutron] Supporting retries in
neutronclient

 Hi,

 Is it possible to detect when the ssl handshaking error occurs on
 the client side (and only retry for that)? If so I think we should
 do that rather than retrying multiple times. The danger here is
 mostly for POST operations (as Eugene pointed out) where it's
 possible for the response to not make it back to the client and for
 the operation to actually succeed.

 Having this retry logic nested in the client also prevents things
 like nova from handling these types of failures individually since
 this retry logic is happening inside of the client. I think it would
 be better not to have this internal mechanism in the client and
 instead make the user of the client implement retry so they are
 aware of failures.

 Aaron


 On Tue, May 27, 2014 at 10:48 AM, Paul Ward wpw...@us.ibm.com wrote:
 Currently, neutronclient is hardcoded to only try a request once in
 retry_request by virtue of the fact that it uses self.retries as the
 retry count, and that's initialized to 0 and never changed.  We've
 seen an issue where we intermittently get an SSL handshaking error
 (seems like more of an SSL bug), and a retry would probably have
 worked.  Yet, since neutronclient only tries once and gives up, it
 fails the entire operation.  Here is the code in question:

 https://github.com/openstack/python-neutronclient/blob/master/
 neutronclient/v2_0/client.py#L1296

 Does anybody know if there's some explicit reason we don't currently
 allow configuring the number of retries?  If not, I'm inclined to
 propose a change for just that.




Re: [openstack-dev] [sahara] summit wrap-up: backward compat

2014-05-28 Thread Matthew Farrellee

On 05/28/2014 01:59 PM, Michael McCune wrote:



- Original Message -

Open questions


1. How should we handle addition of new functionality to the API,
should we bump minor version and just add new endpoints?


I think we should not include the minor revision number in the URL.
Looking at some of the other projects (nova, keystone), it looks like
the preference is to make the version endpoint able to return
information about the specific version implemented. I think, going
forward, if we choose to bump the minor version for small features we
can just change what the version endpoint returns. Any client would
then be able to decide whether it can use newer features based on the
version reported in the return value. If we maintain a consistent
version API endpoint then I don't see an issue with increasing the
minor version as new features are added. But I only endorse
this if we decide to solidify the version endpoint (e.g. /v2, not
/v2.1).

I realize this creates some confusion as we already have /v1 and
/v1.1. I'm guessing we might subsume v1.1 at a point in time where we
choose to deprecate.
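The kind of client-side check this enables can be sketched as follows;
the payload shape and function name are illustrative only, not Sahara's
actual version document:

```python
def supports(reported, required):
    """True if a feature needing the `required` (major, minor) version is
    usable against a server reporting `reported`: same major version,
    minor at least as new."""
    return reported[0] == required[0] and reported[1] >= required[1]

# hypothetical response body from GET /v2 on a server implementing 2.3
payload = {"id": "v2", "version": "2.3", "status": "CURRENT"}
major, minor = (int(part) for part in payload["version"].split("."))

print(supports((major, minor), (2, 1)))  # True: a 2.1 feature is usable
print(supports((major, minor), (2, 4)))  # False: 2.4 not implemented yet
print(supports((major, minor), (1, 0)))  # False: different major version
```

The endpoint stays at /v2 forever; only the reported minor number moves.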


i agree, no minor version number. we should even collapse v1.1 and v1 
for juno.


i don't think we need a capability discovery step. the api should 
already properly respond w/ a 404 for endpoints that do not exist.


the concern about only discovering that a function isn't available until 
a few steps into a call sequence can be addressed with upfront endpoint 
detection. and i think this is an extremely rare corner case.




2. For which period of time should we keep deprecated API and client for it?


Not sure what the standard for OpenStack project is, but I would
imagine we keep the deprecated API version for one release to give
users time to migrate.


i'd say 1-2 cycles. pragmatically, we will probably never be able to 
remove api versions.




3. How to publish all images and/or keep stability of building images
for plugins?


This is a good question, I don't have a strong opinion at this time.
My gut feeling is that we should maintain official images somewhere,
but I realize this introduces more work in maintenance.


for each release we should distribute images for the non-deprecated 
plugin versions.


best,


matt



Re: [openstack-dev] Designate Incubation Request

2014-05-28 Thread Sean Dague
I would agree this doesn't make sense in Neutron.

I do wonder if it makes sense in the Network program. I'm getting
suspicious of the programs-for-projects model if every new project
incubating seems to need a new program. Which isn't really a
reflection on Designate, but possibly on our program structure.

-Sean

On 05/28/2014 02:21 PM, Carl Baldwin wrote:
 Does this make sense in Neutron?  In my opinion it doesn't.
 
 DNSaaS is external to Neutron and is independent.  It serves DNS
 requests that can come from the internet just as well as they can come
 from VMs in the cloud (but through the network external to the cloud).
  It can serve IPs for cloud resources just as well as it can serve IPs
 for resources outside the cloud. The services are separated by the
 external network (from Neutron's perspective).
 
 Neutron only provides very limited DNS functionality which forwards
 DNS queries to an external resolver to facilitate the ability for VMs
 to lookup DNS.   It injects names and IPs for VMs on the same network
 but currently this needs some work with Neutron.  I don't think it
 makes sense for Neutron to provide an external facing DNS service.
 Neutron is about moving network traffic within a cloud and between the
 cloud and external networks.
 
 My $0.02.
 
 Carl
 
 On Tue, May 27, 2014 at 6:42 PM, Joe Gordon joe.gord...@gmail.com wrote:



 On Sat, May 24, 2014 at 10:24 AM, Hayes, Graham graham.ha...@hp.com wrote:


 Hi all,

 Designate would like to apply for incubation status in OpenStack.


 Our application is here:
 https://wiki.openstack.org/wiki/Designate/Incubation_Application


 Based on
 http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements.rst
 I have a few questions:

 * You mention nova's dns capabilities as not being adequate one of the
 incubation requirements is:


   Project should not inadvertently duplicate functionality present in other
   OpenStack projects. If they do, they should have a clear plan and
 timeframe
   to prevent long-term scope duplication


 So what is the plan for this?

 * Can you expand on why this doesn't make sense in neutron when things like
 LBaaS do.

 * Your application doesn't cover all the items raised in the incubation
 requirements list. For example the QA requirement of


  Project must have a basic devstack-gate job set up



   which as far as I can tell isn't really there, although there appears to
 be a devstack based job run as third party which in at least once case
 didn't run on a merged patch (https://review.openstack.org/#/c/91115/)




 As part of our application we would like to apply for a new program. Our
 application for the program is here:

 https://wiki.openstack.org/wiki/Designate/Program_Application

 Designate is a DNS as a Service project, providing both end users,
 developers, and administrators with an easy to use REST API to manage
 their DNS Zones and Records.

 Thanks,

 Graham





 
 


-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [Neutron][LBaaS] Unanswered questions in object model refactor blueprint

2014-05-28 Thread Brandon Logan
Hi Vijay,

On Tue, 2014-05-27 at 16:27 -0700, Vijay B wrote:
 Hi Brandon,
 
 
 The current reviews of the schema itself are absolutely valid and
 necessary, and must go on. However, the place of implementation of
 this schema needs to be clarified. Rather than make any changes
 whatsoever to the existing neutron db schema for LBaaS, this new db
 schema outlined needs to be implemented for a separate LBaaS core
 service.
 
Are you suggesting a separate lbaas database from the neutron database?
If not, then I could use some clarification. If so, I'd advocate against
that right now because there's just too many things that would need to
be changed.  Later, when LBaaS becomes its own service then yeah that
will need to happen.
 
 What we should be providing in neutron is a switch (a global conf)
 that can be set to instruct neutron to do one of two things:
 
 
 1. Use the existing neutron LBaaS API, with the backend being the
 existing neutron LBaaS db schema. This is the status quo.
 2. Use the existing neutron LBaaS API, with the backend being the new
 LBaaS service. This will invoke calls not to neutron's current LBaaS
 code at all, rather, it will call into a new set of proxy backend
 code in neutron that will translate the older LBaaS API calls into the
 newer REST calls serviced by the new LBaaS service, which will write
 down these details accordingly in its new db schema. As long as the
 request and response objects to legacy neutron LBaaS calls are
 preserved as is, there should be no issues. Writing unit tests should
 also be comparatively more straightforward, and old functional tests
 can be retained, and newer ones will not clash with legacy code.
 Legacy code itself will work, having not been touched at all. The
 blueprint for the db schema that you have referenced
 (https://review.openstack.org/#/c/89903/5/specs/juno/lbaas-api-and-objmodel-improvement.rst)
  should be implemented for this new LBaaS service, post reviews.
 
I think the point of this blueprint is to get the API and object model
less confusing for the Neutron LBaaS service plugin.  I think it's too
early to create an LBaaS service because we have not yet cleaned up the
tight integration points between Neutron LBaaS and LBaaS.  Creating a
new service would require only API interactions between Neutron and this
LBaaS service, which currently is not possible due to these tight
integration points.  

 
 The third option would be to turn off neutron LBaaS API, and use the
 new LBaaS core service directly, but for this we can simply disable
 neutron lbaas, and don't need a config parameter in neutron.
 
 
 Implementing this db schema within neutron instead will be not just
 complicated, but a huge effort that will go waste in future once the
 new LBaaS service is implemented. Also, migration will unnecessarily
 retain the same steps needed to go from legacy neutron LBaaS to the
 new core LBaaS service in this approach (twice, in succession) in case
 for any reason the version goes from legacy neutron LBaaS - new
 neutron LBaaS - new LBaaS core service.
I totally agree that this is technical debt, but I believe it is the
best option we have right now since LBaaS needs to live in the Neutron
code and process because of the tight integration points.  Since this
object model refactor has been slated for Juno, and these tight
integration points may or may not be cleaned up by Juno, staying within
Neutron seems to be the best option right now.
 
 
 Going forward, the legacy neutron LBaaS API can be deprecated, and the
 new API that directly contacts the new LBaaS core service can be used.
 
 
 We have discussed the above architecture previously, but outside of
 the ML, and a draft of the blueprint for this new LBaaS core service
 is underway, and is a collation of all the discussions among a large
 number of LBaaS engineers including yourself during the summit - I
 will be posting it for review within a couple of days, as planned.
 
 
 
 
 Regards,
 Vijay
 
 
 On Tue, May 27, 2014 at 12:32 PM, Brandon Logan
 brandon.lo...@rackspace.com wrote:
 Referencing this blueprint:
 
 https://review.openstack.org/#/c/89903/5/specs/juno/lbaas-api-and-objmodel-improvement.rst
 
 Anyone who has suggestions to possible issues or can answer
 some of
 these questions please respond.
 
 
 1. LoadBalancer to Listener relationship M:N vs 1:N
 The main reason we went with the M:N was so IPv6 could use the
 same
 listener as IPv4.  However this can be accomplished by the
 user just
 creating a second listener and pool with the same
 configuration.  This
 will end up being a bad user experience when the listener and
 pool
 configuration starts getting complex (adding in TLS, health
 monitors,
 SNI, etc). A good reason to not do the M:N is because the
  logic might
 get complex when dealing 

Re: [openstack-dev] Oslo logging eats system level tracebacks by default

2014-05-28 Thread Davanum Srinivas
+1 from me.

On Wed, May 28, 2014 at 11:49 AM, Jay Pipes jaypi...@gmail.com wrote:
 On 05/28/2014 11:39 AM, Doug Hellmann wrote:

 On Wed, May 28, 2014 at 10:38 AM, Sean Dague s...@dague.net wrote:

 When attempting to build a new tool for Tempest, I found that my python
 syntax errors were being completely eaten. After 2 days of debugging I
 found that oslo log.py does the following *very unexpected* thing.

  - replaces sys.excepthook with its own function
  - eats the exception traceback unless debug or verbose are set to True
  - sets debug and verbose to False by default
  - prints out a completely useless summary log message at Critical
 ([CRITICAL] [-] 'id' was my favorite of these)

 This is basically for an exit level event. Something so breaking that
 your program just crashed.

 Note this has nothing to do with preventing stack traces that are
 currently littering up the logs that happen at many logging levels, it's
 only about removing the stack trace of a CRITICAL level event that's
 going to very possibly result in a crashed daemon with no information as
 to why.

 So the process of including oslo log makes the code immediately
 undebuggable unless you change your config file away from the default.
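 The excepthook behavior described above can be illustrated with a minimal
 stand-in; this mimics the behavior Sean describes, it is not oslo's
 actual code:

```python
import sys
import traceback

VERBOSE = False  # mirrors debug/verbose defaulting to False

def quiet_hook(exc_type, exc_value, exc_tb):
    """Stand-in for the replaced excepthook: swallow the traceback and
    emit only a terse CRITICAL summary unless verbose is enabled."""
    if VERBOSE:
        traceback.print_exception(exc_type, exc_value, exc_tb)
        return None
    msg = "CRITICAL [-] %s" % exc_value
    print(msg)
    return msg

sys.excepthook = quiet_hook  # uncaught exceptions now lose their tracebacks

# Simulate the crash that produced the infamous log line:
try:
    {}["id"]
except KeyError:
    quiet_hook(*sys.exc_info())  # prints: CRITICAL [-] 'id'
```

 The file, line number, and call stack of the fatal error are all gone;
 only the exception's string representation survives.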

 Whether or not there was justification for this before, one of the
 things we heard loud and clear from the operator's meetup was:

   - Most operators are running at DEBUG level for all their OpenStack
 services because you can't actually do problem determination in
 OpenStack for anything less than that.
   - Operators reacted negatively to the idea of removing stack traces
 from logs, as that's typically the only way to figure out what's going
 on. It took a while of back and forth to explain that our initiative to
 do that wasn't about removing them per se, but about having the code
 correctly recover.

 So the current oslo logging behavior seems inconsistent (we spew
 exceptions at INFO and WARN levels, and hide all the important stuff
 with a legitimately uncaught system level crash), undebuggable, and
 completely against the prevailing wishes of the operator community.

 I'd like to change that here - https://review.openstack.org/#/c/95860/

  -Sean


 I agree, we should dump as much detail as we can when we encounter an
 unhandled exception that causes an app to die.


 +1

 -jay





-- 
Davanum Srinivas :: http://davanum.wordpress.com



[openstack-dev] [Openstack-stable-maint] Stable exception

2014-05-28 Thread Paul Ward

I'll start by saying that we don't need this ported to icehouse as we've
included it in our distro, as Alan suggested.

However, I would like to explain why we needed it.  We do generate
cert files for the controller node.  However, for cases where the different
services are all running on the controller node, we use 127.0.0.1 as the
address they communicate on.  Since the cert was based on hostname,
this will fail unless we have the api_insecure flag set.  And since we're
communicating on 127.0.0.1, it's ok to ignore ssl errors.

Since this is in Juno, and we've patched it in Icehouse for our distro, we
have no pressing need to backport this one.  Thanks for keeping an
eye on it!

Alan Pevec wrote:
 https://bugs.launchpad.net/neutron/+bug/1306822
 https://bugs.launchpad.net/neutron/+bug/1309694

 Those bugs describe the missing options, but do not do a great job of
 describing the impact of not having them. My guess is that without those
 parameters, you have to rely on system certificates (as you can't
 provide your own and you can't disable the check). Is that a correct
 assumption ? Who is impacted by these bugs ?

I think you're right that 1309694 can be worked around by using system
cert store.
Disabling the cert check (bug 1306822) is definitely not needed - why would
you use certs if you don't check them?
So unless more justification is provided in the bugs (importance of
both is Undecided) I don't think we have the case for granting the
exception.

Distributions are of course free to take those patches, if it suits
their policies.
BTW having such backports proposed is fine even if denied for stable
merge, we can use stable reviews as a mean to share patches among
distros.

 If my interpretation is correct, then this falls a bit in a grey area:
 it is a feature to allow your own certificate to be provided, but it
 could be seen as a bug (feature gap) if Neutron was the only project in
 Icehouse not having that feature (and people would generally expect
 those parameters to be present). Is Neutron the only project that misses
 those parameters ?

Currently yes; only Neutron has a new feature in Icehouse to send port
events to Nova, but Cinder will need the same to properly fix the race
with volumes during VM setup.

Cheers,
Alan


Re: [openstack-dev] Keystone

2014-05-28 Thread Adam Young

On 05/28/2014 05:57 AM, Tizy Ninan wrote:

Hi,

Thanks for the reply.
I am still not successful in integrating Keystone with Active 
Directory. Can you please provide some clarifications on the 
following questions?
1. Currently, my Active Directory schema does not have 
projects/tenants and roles OUs. Is it necessary to create 
projects/tenants and roles OUs in the Active Directory schema for 
Keystone to authenticate against Active Directory?

No.  Set the Assignment driver to SQL, not LDAP.

2. We added values to user_tree_dn. Do the tenant_tree_dn, 
role_tree_dn, and group_tree_dn fields need to be filled in for 
authentication?
No, tenant values are used for assignment, and you should not be doing 
assignments in AD. Those go into SQL.



3. How will the mapping of a user to a project/tenant and role be 
done if I try to use Active Directory to authenticate only the users 
and use the already existing projects and roles tables in the MySQL 
database?
You need a role assignment, based either on the userid or on a groupid 
that the user is in.  These are stored in the assignment backend.
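For a read-only AD setup like the one described, a minimal keystone.conf
sketch of the split Adam describes (identity from LDAP/AD, assignments in
SQL) might look like the following. The driver paths match Havana-era
Keystone; all DNs, the hostname, and the attribute choices are placeholder
assumptions to adapt to your directory:

```ini
[identity]
driver = keystone.identity.backends.ldap.Identity

[assignment]
# roles and project/tenant assignments stay in the SQL database
driver = keystone.assignment.backends.sql.Assignment

[ldap]
url = ldap://ad.example.com
user = CN=svc-keystone,OU=ServiceAccounts,DC=example,DC=com
password = REPLACE_ME
suffix = DC=example,DC=com
query_scope = sub
user_tree_dn = OU=Users,DC=example,DC=com
user_objectclass = person
user_id_attribute = cn
user_name_attribute = sAMAccountName
user_mail_attribute = mail
# AD is read-only here, so disable writes from Keystone
user_allow_create = False
user_allow_update = False
user_allow_delete = False
```

With this split, no tenant_tree_dn, role_tree_dn, or group_tree_dn values
are needed at all, since assignment data never touches the directory.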





Kindly provide me some insight into these questions.

Thanks,
Tizy

On Tue, May 20, 2014 at 8:27 AM, Adam Young ayo...@redhat.com 
mailto:ayo...@redhat.com wrote:


On 05/16/2014 05:08 AM, Tizy Ninan wrote:

Hi,

We have an openstack Havana deployment on CentOS 6.4 and
nova-network network service installed using Mirantis Fuel v4.0.
We are trying to integrate the openstack setup with the Microsoft
Active Directory(LDAP server). I  only have  a read access to the
LDAP server.
What will be the minimum changes needed to be made under the
[ldap] tag in keystone.conf file?Can you please specify what
variables need to be set and what should be the values for each
variable?

[ldap]
# url = ldap://localhost
# user = dc=Manager,dc=example,dc=com
# password = None
# suffix = cn=example,cn=com
# use_dumb_member = False
# allow_subtree_delete = False
# dumb_member = cn=dumb,dc=example,dc=com

# Maximum results per page; a value of zero ('0') disables paging
(default)
# page_size = 0

# The LDAP dereferencing option for queries. This can be either
'never',
# 'searching', 'always', 'finding' or 'default'. The 'default'
option falls
# back to using default dereferencing configured by your ldap.conf.
# alias_dereferencing = default

# The LDAP scope for queries, this can be either 'one'
# (onelevel/singleLevel) or 'sub' (subtree/wholeSubtree)
# query_scope = one

# user_tree_dn = ou=Users,dc=example,dc=com
# user_filter =
# user_objectclass = inetOrgPerson
# user_id_attribute = cn
# user_name_attribute = sn
# user_mail_attribute = email
# user_pass_attribute = userPassword
# user_enabled_attribute = enabled
# user_enabled_mask = 0
# user_enabled_default = True
# user_attribute_ignore = default_project_id,tenants
# user_default_project_id_attribute =
# user_allow_create = True
# user_allow_update = True
# user_allow_delete = True
# user_enabled_emulation = False
# user_enabled_emulation_dn =

# tenant_tree_dn = ou=Projects,dc=example,dc=com
# tenant_filter =
# tenant_objectclass = groupOfNames
# tenant_domain_id_attribute = businessCategory
# tenant_id_attribute = cn
# tenant_member_attribute = member
# tenant_name_attribute = ou
# tenant_desc_attribute = desc
# tenant_enabled_attribute = enabled
# tenant_attribute_ignore =
# tenant_allow_create = True
# tenant_allow_update = True
# tenant_allow_delete = True
# tenant_enabled_emulation = False
# tenant_enabled_emulation_dn =

# role_tree_dn = ou=Roles,dc=example,dc=com
# role_filter =
# role_objectclass = organizationalRole
# role_id_attribute = cn
# role_name_attribute = ou
# role_member_attribute = roleOccupant
# role_attribute_ignore =
# role_allow_create = True
# role_allow_update = True
# role_allow_delete = True

# group_tree_dn =
# group_filter =
# group_objectclass = groupOfNames
# group_id_attribute = cn
# group_name_attribute = ou
# group_member_attribute = member
# group_desc_attribute = desc
# group_attribute_ignore =
# group_allow_create = True
# group_allow_update = True
# group_allow_delete = True

Kindly help us to resolve the issue.

Thanks,
Tizy






http://www.youtube.com/watch?v=w3Yjlmb_68g



Re: [openstack-dev] Designate Incubation Request

2014-05-28 Thread Zane Bitter

On 28/05/14 14:52, Sean Dague wrote:

I would agree this doesn't make sense in Neutron.

I do wonder if it makes sense in the Network program. I'm getting
suspicious of the 'programs for projects' model if every new project
incubating seems to need a new program. Which isn't really a
reflection on Designate, but possibly on our program structure.


I agree, the whole program/project thing is confusing (as we've 
discovered this week, programs actually have code names which are... 
identical to the name of a project in the program) and IMHO unnecessary. 
Programs share a common core team, so it is inevitable that most 
incubated projects (including, I would think, Designate) will make sense 
in a separate program. (TripleO incorporating Tuskar is the only 
counterexample I can think of.)


Let's just get rid of the 'project' terminology. Let programs organise 
whatever repos they have in whatever way they see fit, with the proviso 
that they consult with the TC on any change in scope.


cheers,
Zane.



Re: [openstack-dev] [sahara] summit wrap-up: backward compat

2014-05-28 Thread Andrew Lazarev
 for juno we should just have a v1 api (there can still be a v1.1 endpoint,
 but it should be deprecated), and maybe a v2 api

 +1 any semantic changes require new major version number

 +1 api should only have a major number (no 1.1 or 2.1)


In this case we will end up with a new major version number each release, even
if no major changes were made.

we should only be producing images for the currently supported plugin
 versions. images for deprecated versions can be found with the releases
 where the version wasn't deprecated.


Agreed. We just need to store the images for previous releases somewhere.

Andrew.


Re: [openstack-dev] Designate Incubation Request

2014-05-28 Thread Kyle Mestery
On Wed, May 28, 2014 at 1:52 PM, Sean Dague s...@dague.net wrote:
 I would agree this doesn't make sense in Neutron.

 I do wonder if it makes sense in the Network program. I'm getting
 suspicious of the 'programs for projects' model if every new project
 incubating seems to need a new program. Which isn't really a
 reflection on Designate, but possibly on our program structure.

+1

I think this is an example of something which makes sense in the
Network program. We've already discussed having the Service VM
project incubate in the Network program initially as well, and once
LBaaS spins out in the future, it will land in the Network program as
well.

Thanks,
Kyle

 -Sean

 On 05/28/2014 02:21 PM, Carl Baldwin wrote:
 Does this make sense in Neutron?  In my opinion it doesn't.

 DNSaaS is external to Neutron and is independent.  It serves DNS
 requests that can come from the internet just as well as they can come
 from VMs in the cloud (but through the network external to the cloud).
  It can serve IPs for cloud resources just as well as it can serve IPs
 for resources outside the cloud. The services are separated by the
 external network (from Neutron's perspective).

 Neutron only provides very limited DNS functionality, which forwards
 DNS queries to an external resolver so that VMs can look up DNS names.
 It injects names and IPs for VMs on the same network,
 but currently this needs some work in Neutron.  I don't think it
 makes sense for Neutron to provide an external-facing DNS service.
 Neutron is about moving network traffic within a cloud and between the
 cloud and external networks.

 My $0.02.

 Carl

 On Tue, May 27, 2014 at 6:42 PM, Joe Gordon joe.gord...@gmail.com wrote:



 On Sat, May 24, 2014 at 10:24 AM, Hayes, Graham graham.ha...@hp.com wrote:


 Hi all,

 Designate would like to apply for incubation status in OpenStack.


 Our application is here:
 https://wiki.openstack.org/wiki/Designate/Incubation_Application


 Based on
 http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements.rst
 I have a few questions:

  * You mention nova's DNS capabilities as not being adequate. One of the
  incubation requirements is:


   Project should not inadvertently duplicate functionality present in other
   OpenStack projects. If they do, they should have a clear plan and
 timeframe
   to prevent long-term scope duplication


 So what is the plan for this?

 * Can you expand on why this doesn't make sense in neutron when things like
 LBaaS do.

 * Your application doesn't cover all the items raised in the incubation
 requirements list. For example the QA requirement of


  Project must have a basic devstack-gate job set up



   which as far as I can tell isn't really there, although there appears to
  be a devstack-based job run as third party which in at least one case
  didn't run on a merged patch (https://review.openstack.org/#/c/91115/)




 As part of our application we would like to apply for a new program. Our
 application for the program is here:

 https://wiki.openstack.org/wiki/Designate/Program_Application

  Designate is a DNS as a Service project, providing end users, developers,
  and administrators with an easy-to-use REST API to manage their DNS zones
  and records.

 Thanks,

 Graham









 --
 Sean Dague
 http://dague.net






Re: [openstack-dev] [Horizon][Tuskar-UI] Location for common dashboard code?

2014-05-28 Thread Tzu-Mainn Chen
Hi Doug,

Thanks for the response!  I agree with you in the cases where we are extending
things like panels; if you're extending those, you're extending the dashboard
itself.  However, things such as workflows feel like they could reasonably live
independently of the dashboard for re-use elsewhere.

Incidentally, I know that within openstack_dashboard there are cases where, say,
the admin dashboard extends instances tables from the project dashboard.  That
feels a bit odd to me; wouldn't it be cleaner to have both dashboards extend
some common instances table that lives independently of either dashboard?
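The structure suggested above might look something like this (a plain-Python
sketch; the class names are hypothetical and this is not the actual Horizon
table API):

```python
# Hypothetical shared module that lives independently of any one dashboard.
class BaseInstancesTable:
    """Common columns shared by every dashboard's instances table."""
    columns = ["name", "status", "power_state"]

# Each dashboard extends the shared base instead of another dashboard.
class ProjectInstancesTable(BaseInstancesTable):
    columns = BaseInstancesTable.columns + ["ip_address"]

class AdminInstancesTable(BaseInstancesTable):
    # The admin view adds the owning tenant rather than importing
    # from the project dashboard.
    columns = BaseInstancesTable.columns + ["tenant"]

print(AdminInstancesTable.columns)
# ['name', 'status', 'power_state', 'tenant']
```

With this layout neither dashboard depends on the other, and a third consumer
such as Tuskar-UI could extend the same base.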

Thanks,
Tzu-Mainn Chen

- Original Message -
 Hey Tzu-Mainn,
 
 I've actually discouraged people from doing this sort of thing when
 customizing Horizon.  IMO it's risky to extend those panels because they
 really aren't intended as extension points.  We intend Horizon to be
 extensible by adding additional panels or dashboards.  I know you are
  closely involved in Horizon development, so you are able to manage that
  better than most customizers.
 
 Still, I wonder if we can better address this for Tuskar-UI as well as
 other situations by defining extensibility points in the dashboard panels
 and workflows themselves.  Like well defined ways to add/show a column of
 data, add/hide row actions, add/skip a workflow step, override text
 elements, etc.  Is it viable to create a few well defined extension points
 and meet your need to modify existing dashboard panels?
 
 In any case, it seems to me that if you are overriding the dashboard
 panels, it's reasonable that tuskar-ui should be dependent on the
 dashboard.
 
 Doug Fish
 
 
 
 
 
 From: Tzu-Mainn Chen tzuma...@redhat.com
 To:   OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 05/28/2014 11:40 AM
 Subject:  [openstack-dev] [Horizon][Tuskar-UI] Location for common
 dashboard code?
 
 
 
 Heya,
 
 Tuskar-UI is currently extending classes directly from openstack-dashboard.
 For example, right now our UI for Flavors extends classes in both
 openstack_dashboard.dashboards.admin.flavors.tables and
 openstack_dashboard.dashboards.admin.flavors.workflows.  In the future, this
 sort of pattern will increase; we anticipate doing similar things with Heat
 code in openstack-dashboard.
 
 However, since tuskar-ui is intended to be a separate dashboard that has
 the potential to live
 away from openstack-dashboard, it does feel odd to directly extend
 openstack-dashboard dashboard
 components.  Is there a separate place where such code might live?
 Something similar in concept
 to
 https://github.com/openstack/horizon/tree/master/openstack_dashboard/usage
 ?
 
 
 Thanks,
 Tzu-Mainn Chen
 
 
 
 
 
 



Re: [openstack-dev] [sahara] summit wrap-up: backward compat

2014-05-28 Thread Matthew Farrellee

On 05/28/2014 03:50 PM, Andrew Lazarev wrote:


for juno we should just have a v1 api (there can still be a v1.1
endpoint, but it should be deprecated), and maybe a v2 api

+1 any semantic changes require new major version number

+1 api should only have a major number (no 1.1 or 2.1)


In this case we will end up with a new major version number each release,
even if no major changes were made.


a semantic addition (e.g. adding EDP and v1.1) doesn't warrant a new 
version.


so more specifically: +1 any change in existing semantics requires a new 
major version number
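the rule being proposed could be sketched as a tiny helper (the change
categories and the function name are illustrative, not Sahara code):

```python
def next_api_version(major, minor, change):
    """Bump an API version per the rule above: changes to existing
    semantics bump the major number; purely additive changes (e.g.
    adding EDP as v1.1) bump the minor number; anything else is a no-op.
    """
    if change == "breaking":   # existing semantics changed
        return (major + 1, 0)
    if change == "additive":   # new endpoints only, old behavior intact
        return (major, minor + 1)
    return (major, minor)

print(next_api_version(1, 0, "additive"))  # (1, 1)
print(next_api_version(1, 1, "breaking"))  # (2, 0)
```

under that rule a release with only additive changes does not force a new
major version.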


but maybe i'm missing why we'd end up w/ a new version per release

best,


matt




Re: [openstack-dev] [horizon][infra] Plan for the splitting of Horizon into two repositories

2014-05-28 Thread Gabriel Hurley
It's sort of a silly point, but as someone who would likely consume the 
split-off package outside of the OpenStack context, please give it a proper 
name instead of django_horizon. The module only works in Django, the name 
adds both clutter and confusion, and it's against the Django community's 
conventions to have the python import name be prefixed with django_.

If the name horizon needs to stay with the reference implementation of the 
dashboard rather than keeping it with the framework as-is that's fine, but 
choose a new real name for the framework code.

Just to get the ball rolling, and as a nod to the old-timers, I'll propose the 
runner up in the original naming debate: bourbon. ;-)

If there are architectural questions I can help with in this process let me 
know. I'll try to keep an eye on the progress as it goes.

All the best,

   - Gabriel

 -Original Message-
 From: Radomir Dopieralski [mailto:openst...@sheep.art.pl]
 Sent: Wednesday, May 28, 2014 5:55 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [horizon][infra] Plan for the splitting of Horizon 
 into
 two repositories
 
 Hello,
 
 we plan to finally do the split in this cycle, and I started some 
 preparations for
 that. I also started to prepare a detailed plan for the whole operation, as it
 seems to be a rather big endeavor.
 
 You can view and amend the plan at the etherpad at:
 https://etherpad.openstack.org/p/horizon-split-plan
 
 It's still a little vague, but I plan to gradually get it more detailed.
 All the points are up for discussion, if anybody has any good ideas or
 suggestions, or can help in any way, please don't hesitate to add to this
 document.
 
 We still don't have any dates or anything -- I suppose we will work that out
 soonish.
 
 Oh, and great thanks to all the people who have helped me so far with it, I
 wouldn't even dream about trying such a thing without you. Also thanks in
 advance to anybody who plans to help!
 
 --
 Radomir Dopieralski
 


