Re: [openstack-dev] [cinder] All Cinder Volume Drivers Must Have A Third Party CI by March 19, 2015

2015-03-18 Thread Bharat Kumar

Hi Mike,

Regarding the GlusterFS CI:

As I am handling the end-to-end CI process for GlusterFS, please update 
the contact person to "bharat.kobag...@redhat.com".


Otherwise I may miss important announcements from you regarding 
the CI.


On 03/19/2015 11:11 AM, Mike Perez wrote:

The deadline is almost here. First off, I want to thank all driver
maintainers who are reporting successfully with their driver CI in
Cinder reviews. For many of you, I know you have discovered how useful the
CI is, just from the bugs it has caught or revealed. OpenStack users
that use your solution will appreciate the added stability as well. I
have been keeping a report of the different vendors, which I promised
to make public:

https://docs.google.com/spreadsheets/d/1GrIzXY4djNbJnF3RMw44_2e3aTWgBbgnBlSKafEDNGQ/edit?usp=sharing

If you're not marked with a light green or dark green color and you
believe this is a mistake, please let me know on IRC via thingee or
this email address, and provide proof of multiple reviews your CI has
posted results to.

For the drivers that have not responded and won't be able to make the
deadline: proposing your driver back into Cinder in Liberty will
require a reporting CI before it is merged back in. I want to make this as
easy as possible, so I will just do a diff
of what's being proposed and what was previously in tree. This should
cut down on review time quite a bit. Drivers that are removed in the
Kilo release will be mentioned in the release notes if they were in
tree prior to Kilo.

--
Mike Perez


On Thu, Jan 15, 2015 at 7:31 PM, Mike Perez  wrote:

*Note: A more detailed email about this has been sent to all Cinder
volume driver maintainers directly.*

In the Jan 14th 2015 16:00 UTC Cinder IRC meeting [1], it was agreed
by Cinder core and participating vendors that the deadline for vendors
to have a third party CI would be:

March 19th 2015

There are requirements set for OpenStack third party CIs [2]. In
addition, Cinder third party CIs must:

1) Test all volume drivers your company has integrated in Cinder.
2) Test all fabrics your solution uses.

For example, if your company has two volume drivers in Cinder and they
both use iSCSI and Fibre Channel, you would need to have a CI that
tests against four backends and reports the results for each backend,
for every Cinder upstream patch. For testing, we're using a subset of tests
in Tempest [6].

To get started, read OpenStack's third party testing documentation
[32]. There are a variety of solutions [3] that help setting up a CI,
third party mentoring meetings [4], and designated people to answer
questions with setting up a third party CI in the #openstack-cinder
room [5].

If a solution is not being tested in a CI system and reporting to
OpenStack gerrit Cinder patches by the deadline of March 19th 2015, a
volume driver could be removed from the Cinder repository as of the
Kilo release. Without a CI system, Cinder core is unable to verify
your driver works in the Kilo release of Cinder. We will make sure
OpenStack users are aware of this via the OpenStack users mailing list
and Kilo release notes.

Cinder third party CIs have been discussed in a variety of
venues over the last year:

* Cinder IRC Meetings: [1][9][10][11][12][13][14][15][16]
* Midcycle meetups: [17]
* OpenStack dev list: [18][19][20][21][22][23][24][25][26][27][28][29]
* Design summit sessions: [30][31]

If there is something not clear about this email, please email me
*directly* with your question. You can also reach me as thingee on
Freenode IRC in the #openstack-cinder channel. Again I want you all to
be successful in this, and take advantage of this testing you will
have with your product. Please communicate with me and reach out to
the team for help.

--
Mike Perez

[1] - 
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-01-14-16.00.log.html#l-21
[2] - http://ci.openstack.org/third_party.html#requirements
[3] - 
https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Existing_CI_Solutions
[4] - https://wiki.openstack.org/wiki/Meetings/ThirdParty
[5] - https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Questions
[6] - 
https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#What_tests_do_I_use.3F
[7] - 
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-12-10-16.00.log.html#l-471
[8] - 
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-11-19-16.00.log.html#l-34
[9] - 
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-10-29-16.00.log.html#l-224
[10] - 
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-10-15-16.00.log.html#l-59
[11] - 
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-10-08-16.00.log.html#l-17
[12] - 
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-09-17-16.00.log.html#l-244
[13] - 
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-07-02-16.01.log.html#l-141
[14] - 
http://eavesdrop.openstack.org/meeting

Re: [openstack-dev] [openstack][neutron] Debugging L3 Agent with PyCharm

2015-03-18 Thread Gal Sagie
File -> Settings -> Python Debugger

Mark "Gevent compatible debugging"

On Thu, Mar 19, 2015 at 12:06 AM, Daniel Comnea 
wrote:

> Gal,
>
> while I don't have an answer to your question, can you please share how you
> enabled the Gevent debugging?
>
> Thx,
> Dani
>
>
>
> On Wed, Mar 18, 2015 at 10:16 AM, Gal Sagie  wrote:
>
>> Hello all,
>>
>> I am trying to debug the L3 agent code with PyCharm, but the debugger
>> doesn't stop on my breakpoints.
>>
>> I have enabled PyCharm Gevent compatible debugging but that doesn't solve
>> the issue
>> (I am able to debug neutron server correctly)
>>
>> Anyone might know what is the problem?
>>
>> Thanks
>> Gal.
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] All Cinder Volume Drivers Must Have A Third Party CI by March 19, 2015

2015-03-18 Thread Mike Perez
The deadline is almost here. First off, I want to thank all driver
maintainers who are reporting successfully with their driver CI in
Cinder reviews. For many of you, I know you have discovered how useful the
CI is, just from the bugs it has caught or revealed. OpenStack users
that use your solution will appreciate the added stability as well. I
have been keeping a report of the different vendors, which I promised
to make public:

https://docs.google.com/spreadsheets/d/1GrIzXY4djNbJnF3RMw44_2e3aTWgBbgnBlSKafEDNGQ/edit?usp=sharing

If you're not marked with a light green or dark green color and you
believe this is a mistake, please let me know on IRC via thingee or
this email address, and provide proof of multiple reviews your CI has
posted results to.

For the drivers that have not responded and won't be able to make the
deadline: proposing your driver back into Cinder in Liberty will
require a reporting CI before it is merged back in. I want to make this as
easy as possible, so I will just do a diff
of what's being proposed and what was previously in tree. This should
cut down on review time quite a bit. Drivers that are removed in the
Kilo release will be mentioned in the release notes if they were in
tree prior to Kilo.

--
Mike Perez


On Thu, Jan 15, 2015 at 7:31 PM, Mike Perez  wrote:
> *Note: A more detailed email about this has been sent to all Cinder
> volume driver maintainers directly.*
>
> In the Jan 14th 2015 16:00 UTC Cinder IRC meeting [1], it was agreed
> by Cinder core and participating vendors that the deadline for vendors
> to have a third party CI would be:
>
> March 19th 2015
>
> There are requirements set for OpenStack third party CIs [2]. In
> addition, Cinder third party CIs must:
>
> 1) Test all volume drivers your company has integrated in Cinder.
> 2) Test all fabrics your solution uses.
>
> For example, if your company has two volume drivers in Cinder and they
> both use iSCSI and Fibre Channel, you would need to have a CI that
> tests against four backends and reports the results for each backend,
> for every Cinder upstream patch. For testing, we're using a subset of tests
> in Tempest [6].
>
> To get started, read OpenStack's third party testing documentation
> [32]. There are a variety of solutions [3] that help setting up a CI,
> third party mentoring meetings [4], and designated people to answer
> questions with setting up a third party CI in the #openstack-cinder
> room [5].
>
> If a solution is not being tested in a CI system and reporting to
> OpenStack gerrit Cinder patches by the deadline of March 19th 2015, a
> volume driver could be removed from the Cinder repository as of the
> Kilo release. Without a CI system, Cinder core is unable to verify
> your driver works in the Kilo release of Cinder. We will make sure
> OpenStack users are aware of this via the OpenStack users mailing list
> and Kilo release notes.
>
> Cinder third party CIs have been discussed in a variety of
> venues over the last year:
>
> * Cinder IRC Meetings: [1][9][10][11][12][13][14][15][16]
> * Midcycle meetups: [17]
> * OpenStack dev list: [18][19][20][21][22][23][24][25][26][27][28][29]
> * Design summit sessions: [30][31]
>
> If there is something not clear about this email, please email me
> *directly* with your question. You can also reach me as thingee on
> Freenode IRC in the #openstack-cinder channel. Again I want you all to
> be successful in this, and take advantage of this testing you will
> have with your product. Please communicate with me and reach out to
> the team for help.
>
> --
> Mike Perez
>
> [1] - 
> http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-01-14-16.00.log.html#l-21
> [2] - http://ci.openstack.org/third_party.html#requirements
> [3] - 
> https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Existing_CI_Solutions
> [4] - https://wiki.openstack.org/wiki/Meetings/ThirdParty
> [5] - https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Questions
> [6] - 
> https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#What_tests_do_I_use.3F
> [7] - 
> http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-12-10-16.00.log.html#l-471
> [8] - 
> http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-11-19-16.00.log.html#l-34
> [9] - 
> http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-10-29-16.00.log.html#l-224
> [10] - 
> http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-10-15-16.00.log.html#l-59
> [11] - 
> http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-10-08-16.00.log.html#l-17
> [12] - 
> http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-09-17-16.00.log.html#l-244
> [13] - 
> http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-07-02-16.01.log.html#l-141
> [14] - 
> http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-07-23-16.00.log.html#l-161
> [15] - 
> http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-06-18-16.02.log.ht

Re: [openstack-dev] [openstack][neutron] Debugging L3 Agent with PyCharm

2015-03-18 Thread Damon Wang
Hi,

I suggest you use pdb or ipdb to debug neutron...
If you prefer an IDE, Komodo can do remote debugging of neutron, in my experience.
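
For example, a minimal sketch of the pdb approach (run the agent in a
foreground terminal so the debugger prompt is reachable; ipdb is an assumed
optional install):

    # Pause execution inside the agent code path you want to inspect:
    import pdb; pdb.set_trace()   # or: import ipdb; ipdb.set_trace()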

Hope this helps,
Damon

2015-03-19 6:06 GMT+08:00 Daniel Comnea :

> Gal,
>
> while I don't have an answer to your question, can you please share how you
> enabled the Gevent debugging?
>
> Thx,
> Dani
>
>
>
> On Wed, Mar 18, 2015 at 10:16 AM, Gal Sagie  wrote:
>
>> Hello all,
>>
>> I am trying to debug the L3 agent code with PyCharm, but the debugger
>> doesn't stop on my breakpoints.
>>
>> I have enabled PyCharm Gevent compatible debugging but that doesn't solve
>> the issue
>> (I am able to debug neutron server correctly)
>>
>> Anyone might know what is the problem?
>>
>> Thanks
>> Gal.
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge token size

2015-03-18 Thread Zhipeng Huang
BP is at
https://blueprints.launchpad.net/keystone/+spec/keystone-ha-multisite ,
spec will come later :)

On Thu, Mar 19, 2015 at 11:21 AM, Adam Young  wrote:

>  On 03/18/2015 08:59 PM, joehuang wrote:
>
>  [Joe]: For reliability purpose, I suggest that the keystone client
> should provide a fail-safe design: primary KeyStone server, the second
> KeyStone server (or even the third KeyStone server). If the primary
> KeyStone server is out of service, then the KeyStone client will try the
> second KeyStone server. Different KeyStone client may be configured with
> different primary KeyStone server and the second KeyStone server.
>
>
> [Adam]: Makes sense, but that can be handled outside of Keystone using HA
> and Heartbeat and a whole slew of technologies.  Each Keystone server can
> validate each other's tokens.
>
> For cross-site KeyStone HA, the backend of HA can leverage MySQL Galera
> cluster for multisite database synchronous replication to provide high
> availability, but for the KeyStone front-end, the API server, it's a web
> service accessed through the endpoint address ( name, or domain name,
> or ip address ) , like http:// or ip address.
>
>
>
> AFAIK, the HA for web service will usually be done through DNS based
> geo-load balancer in multi-site scenario. The shortcoming for this HA is
> that the fault recovery ( forward request to the healthy web service) will
> take longer time, it's up to the configuration in the DNS system. The other
> way is to put a load balancer like LVS ahead of KeyStone web services in
> multi-site. Then either the LVS is put in one site(so that KeyStone client
> only configured with one IP address based endpoint item, but LVS cross-site
> HA is lacking), or in multiple sites, and register the multi-LVS's IP to the
> DNS or Name server(so that KeyStone client only configured with one Domain
> name or name based endpoint item, same issue just mentioned).
>
>
>
> Therefore, I still think that keystone client with a fail-safe design(
> primary KeyStone server, the second KeyStone server ) will be a “very high
> gain but low investment” multisite high availability solution. Just like MySQL
> itself, we know there are outbound high availability solutions (for
> example, Pacemaker+Corosync+DRBD), but there is also Galera-like inbound
> clusterware.
>
>
> Write it up as a full spec, and we will discuss at the summit.
>
>
>
> Best Regards
>
> Chaoyi Huang ( Joe Huang )
>
>
>
>
>
> *From:* Adam Young [mailto:ayo...@redhat.com ]
> *Sent:* Tuesday, March 17, 2015 10:00 PM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite]
> Huge token size
>
>
>
> On 03/17/2015 02:51 AM, joehuang wrote:
>
> It's not realistic to deploy the KeyStone service ( including backend store ) in
> each site if the number, for example, is more than 10.  The reason is that
> the stored data including data related to revocation need to be replicated
> to all sites in synchronization manner. Otherwise, the API server might
> attempt to use the token before it's able to be validated in the target
> site.
>
>
> Replicating revocation data across 10 sites will be tricky, but far
> better than replicating all of the token data.  Revocations should be
> relatively rare.
>
>
>
> When Fernet token is used in multisite scenario, each API request will ask
> for token validation from KeyStone. The cloud will be out of service if
> KeyStone stops working, therefore the KeyStone service needs to run in several
> sites.
>
>
> There will be multiple Keystone servers, so each should talk to their
> local instance.
>
>
>
> For reliability purpose, I suggest that the keystone client should provide
> a fail-safe design: primary KeyStone server, the second KeyStone server (or
> even the third KeyStone server). If the primary KeyStone server is out of
> service, then the KeyStone client will try the second KeyStone server.
> Different KeyStone client may be configured with different primary KeyStone
> server and the second KeyStone server.
>
>
> Makes sense, but that can be handled outside of Keystone using HA and
> Heartbeat and a whole slew of technologies.  Each Keystone server can
> validate each other's tokens.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang,

Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge token size

2015-03-18 Thread Adam Young

On 03/18/2015 08:59 PM, joehuang wrote:


[Joe]: For reliability purpose, I suggest that the keystone client 
should provide a fail-safe design: primary KeyStone server, the second 
KeyStone server (or even the third KeyStone server). If the primary 
KeyStone server is out of service, then the KeyStone client will try 
the second KeyStone server. Different KeyStone client may be 
configured with different primary KeyStone server and the second 
KeyStone server.



[Adam]: Makes sense, but that can be handled outside of Keystone using 
HA and Heartbeat and a whole slew of technologies.  Each Keystone 
server can validate each other's tokens.


For cross-site KeyStone HA, the backend of HA can leverage MySQL 
Galera cluster for multisite database synchronous replication to 
provide high availability, but for the KeyStone front-end, the API 
server, it's a web service accessed through the endpoint address ( 
name, or domain name, or ip address ) , like http:// or ip address.


AFAIK, the HA for web service will usually be done through DNS based 
geo-load balancer in multi-site scenario. The shortcoming for this HA 
is that the fault recovery ( forward request to the healthy web 
service) will take longer time, it's up to the configuration in the 
DNS system. The other way is to put a load balancer like LVS ahead of 
KeyStone web services in multi-site. Then either the LVS is put in one 
site(so that KeyStone client only configured with one IP address based 
endpoint item, but LVS cross-site HA is lacking), or in multiple sites, 
and register the multi-LVS’s IP to the DNS or Name server(so that 
KeyStone client only configured with one Domain name or name based 
endpoint item, same issue just mentioned).


Therefore, I still think that keystone client with a fail-safe design( 
primary KeyStone server, the second KeyStone server ) will be a “very 
high gain but low investment” multisite high availability solution. Just 
like MySQL itself, we know there are outbound high availability 
solutions (for example, Pacemaker+Corosync+DRBD), but there is also 
Galera-like inbound clusterware.




Write it up as a full spec, and we will discuss at the summit.


Best Regards

Chaoyi Huang ( Joe Huang )

*From:*Adam Young [mailto:ayo...@redhat.com]
*Sent:* Tuesday, March 17, 2015 10:00 PM
*To:* openstack-dev@lists.openstack.org
*Subject:* Re: [openstack-dev] [opnfv-tech-discuss] 
[Keystone][Multisite] Huge token size


On 03/17/2015 02:51 AM, joehuang wrote:

It's not realistic to deploy the KeyStone service ( including backend
store ) in each site if the number, for example, is more than 10.
 The reason is that the stored data including data related to
revocation need to be replicated to all sites in synchronization
manner. Otherwise, the API server might attempt to use the token
before it's able to be validated in the target site.


Replicating revocation data across 10 sites will be tricky, but far 
better than replicating all of the token data. Revocations should be 
relatively rare.


When Fernet token is used in multisite scenario, each API request will 
ask for token validation from KeyStone. The cloud will be out of 
service if KeyStone stops working, therefore the KeyStone service needs to 
run in several sites.



There will be multiple Keystone servers, so each should talk to their 
local instance.


For reliability purpose, I suggest that the keystone client should 
provide a fail-safe design: primary KeyStone server, the second 
KeyStone server (or even the third KeyStone server). If the primary 
KeyStone server is out of service, then the KeyStone client will try 
the second KeyStone server. Different KeyStone client may be 
configured with different primary KeyStone server and the second 
KeyStone server.



Makes sense, but that can be handled outside of Keystone using HA and 
Heartbeat and a whole slew of technologies. Each Keystone server can 
validate each other's tokens.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara][Horizon] Can't open "Data Processing" panel after update sahara & horizon

2015-03-18 Thread Li, Chen
Thanks all for the help.

I got help on IRC, and have now solved my issues:


1.   The reason the "Data Processing" panel did not work after I updated 
horizon is that I missed a step required for horizon to work.

The correct steps to update horizon are:

a.   git pull origin master  (update horizon code)

b.  python manage.py collectstatic

c.   python manage.py compress

d.  sudo service apache2 restart



2.   I can't open "job_executions" page is due to bug:  
https://bugs.launchpad.net/horizon/+bug/1376738

It can be solved by patch https://review.openstack.org/#/c/125927/



3.   I can't delete  "job" is because a job can only be deleted after all 
related "job_executions" deleted.


Thanks.
-chen


From: Li, Chen [mailto:chen...@intel.com]
Sent: Wednesday, March 18, 2015 11:05 AM
To: OpenStack Development Mailing List (not for usage questions) 
(openstack-dev@lists.openstack.org)
Subject: [openstack-dev] [Sahara][Horizon] Can't open "Data Processing" panel 
after update sahara & horizon

Hi all,

I'm working under Ubuntu14.04 with devstack.

After the fresh devstack installation, I ran an integration test to check the 
environment.
After the test, the cluster and the tested EDP jobs remained in my environment.

Then I updated sahara to the latest code.
To make the newest code work, I also did:

1.   manually downloaded python-novaclient and installed it by running "python 
setup.py install"

2.   run "sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade 
head"

Then I restarted sahara.

I tried to delete the things remaining from the last test from the dashboard, but:

1.   The table for "job_executions" can't be opened anymore.

2.   When I try to delete "job", an error happened:

2015-03-18 10:34:33.031 ERROR oslo_db.sqlalchemy.exc_filters [-] DBAPIError 
exception wrapped from (IntegrityError) (1451, 'Cannot delete or update a 
parent row: a foreign key constraint fails (`sahara`.`job_executions`, 
CONSTRAINT `job_executions_ibfk_3` FOREIGN KEY (`job_id`) REFERENCES `jobs` 
(`id`))') 'DELETE FROM jobs WHERE jobs.id = %s' 
('10c36a9b-a855-44b6-af60-0effee31efc9',)
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters Traceback (most 
recent call last):
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 951, 
in _execute_context
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters context)
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 
436, in do_execute
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 174, in execute
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters 
self.errorhandler(self, exc, value)
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in 
defaulterrorhandler
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters raise 
errorclass, errorvalue
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters IntegrityError: 
(1451, 'Cannot delete or update a parent row: a foreign key constraint fails 
(`sahara`.`job_executions`, CONSTRAINT `job_executions_ibfk_3` FOREIGN KEY 
(`job_id`) REFERENCES `jobs` (`id`))')
2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters
2015-03-18 10:34:33.073 DEBUG sahara.openstack.common.periodic_task [-] Running 
periodic task SaharaPeriodicTasks.terminate_unneeded_transient_clusters from 
(pid=8084) run_periodic_tasks 
/opt/stack/sahara/sahara/openstack/common/periodic_task.py:219
2015-03-18 10:34:33.073 DEBUG sahara.service.periodic [-] Terminating unneeded 
transient clusters from (pid=8084) terminate_unneeded_transient_clusters 
/opt/stack/sahara/sahara/service/periodic.py:131
2015-03-18 10:34:33.108 ERROR sahara.utils.api [-] Validation Error occurred: 
error_code=400, error_message=Job deletion failed on foreign key constraint
Error ID: e65b3fb1-b142-45a7-bc96-416efb14de84, error_name=DELETION_FAILED

I assume this might be caused by an old horizon version, so I did:

1.   update horizon code.

2.   python manage.py compress

3.   sudo python setup.py install

4.   sudo service apache2 restart

But these only made things worse.
Now, when I click "Data Processing" on the dashboard, nothing happens
anymore.

Can anyone help me here?
What did I do wrong?
How can I fix this?

I tested the sahara CLI; commands like "sahara job-list" & "sahara job-delete" 
still work.
So I guess sahara is working fine.

Thanks.
-chen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstac

Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge token size

2015-03-18 Thread joehuang
[Joe]: For reliability purpose, I suggest that the keystone client should 
provide a fail-safe design: primary KeyStone server, the second KeyStone server 
(or even the third KeyStone server). If the primary KeyStone server is out of 
service, then the KeyStone client will try the second KeyStone server. 
Different KeyStone client may be configured with different primary KeyStone 
server and the second KeyStone server.

[Adam]: Makes sense, but that can be handled outside of Keystone using HA and 
Heartbeat and a whole slew of technologies.  Each Keystone server can validate 
each other's tokens.
For cross-site KeyStone HA, the backend of HA can leverage MySQL Galera cluster 
for multisite database synchronous replication to provide high availability, 
but for the KeyStone front-end, the API server, it's a web service accessed 
through the endpoint address ( name, or domain name, or ip address ) , like 
http:// or ip address.

AFAIK, the HA for web service will usually be done through DNS based geo-load 
balancer in multi-site scenario. The shortcoming for this HA is that the fault 
recovery ( forward request to the healthy web service) will take longer time, 
it's up to the configuration in the DNS system. The other way is to put a load 
balancer like LVS ahead of KeyStone web services in multi-site. Then either the 
LVS is put in one site(so that KeyStone client only configured with one IP 
address based endpoint item, but LVS cross-site HA is lacking), or in multiple 
sites, and register the multi-LVS's IP to the DNS or Name server(so that 
KeyStone client only configured with one Domain name or name based endpoint 
item, same issue just mentioned).

Therefore, I still think that keystone client with a fail-safe design( primary 
KeyStone server, the second KeyStone server ) will be a "very high gain but low 
invest" multisite high availability solution. Just like MySQL itself, we know 
there is some outbound high availability solution (for example, 
PaceMaker+ColoSync+DRDB), but also there is  Galera like inbound cluster ware.
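
To make the idea concrete, here is a minimal client-side sketch (the endpoint
URLs are placeholders and this is not an existing keystoneclient feature; it
simply issues a KeyStone v3 token request against each configured endpoint in
turn):

    import requests

    KEYSTONE_ENDPOINTS = [
        "http://keystone-site1.example.com:5000/v3",   # primary (placeholder)
        "http://keystone-site2.example.com:5000/v3",   # second  (placeholder)
    ]

    def get_token(auth_payload, timeout=5):
        """Try each configured KeyStone endpoint in order until one answers."""
        last_error = None
        for endpoint in KEYSTONE_ENDPOINTS:
            try:
                resp = requests.post(endpoint + "/auth/tokens",
                                     json=auth_payload, timeout=timeout)
                resp.raise_for_status()
                return resp.headers["X-Subject-Token"]
            except requests.RequestException as exc:
                last_error = exc   # endpoint unreachable/unhealthy, try the next one
        raise RuntimeError("all KeyStone endpoints failed: %s" % last_error)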

Best Regards
Chaoyi Huang ( Joe Huang )


From: Adam Young [mailto:ayo...@redhat.com]
Sent: Tuesday, March 17, 2015 10:00 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge 
token size

On 03/17/2015 02:51 AM, joehuang wrote:
It's not realistic to deploy the KeyStone service ( including backend store ) in each 
site if the number, for example, is more than 10.  The reason is that the 
stored data including data related to revocation need to be replicated to all 
sites in synchronization manner. Otherwise, the API server might attempt to use 
the token before it's able to be validated in the target site.

Replicating revocation data across 10 sites will be tricky, but far better 
than replicating all of the token data.  Revocations should be relatively rare.

When Fernet token is used in multisite scenario, each API request will ask for 
token validation from KeyStone. The cloud will be out of service if KeyStone 
stops working, therefore the KeyStone service needs to run in several sites.

There will be multiple Keystone servers, so each should talk to their local 
instance.

For reliability purpose, I suggest that the keystone client should provide a 
fail-safe design: primary KeyStone server, the second KeyStone server (or even 
the third KeyStone server). If the primary KeyStone server is out of service, 
then the KeyStone client will try the second KeyStone server. Different 
KeyStone client may be configured with different primary KeyStone server and 
the second KeyStone server.

Makes sense, but that can be handled outside of Keystone using HA and Heartbeat 
and a whole slew of technologies.  Each Keystone server can validate each 
other's tokens.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday March 19th at 22:00 UTC

2015-03-18 Thread Matthew Treinish
Hi everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, March 19th at 22:00 UTC in the #openstack-meeting
channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 22:00 UTC is in other timezones, tomorrow's
meeting will be at:

18:00 EDT
07:00 JST
08:30 ACDT
23:00 CET
17:00 CDT
15:00 PDT
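
For any other timezone, a quick way to check (a sketch assuming pytz is
installed):

    from datetime import datetime
    import pytz

    meeting = pytz.utc.localize(datetime(2015, 3, 19, 22, 0))
    for tz in ("US/Eastern", "Asia/Tokyo", "Australia/Adelaide",
               "Europe/Paris", "US/Central", "US/Pacific"):
        print(tz, meeting.astimezone(pytz.timezone(tz)).strftime("%H:%M %Z"))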

-Matt Treinish


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Cinder] Questions re progress

2015-03-18 Thread Billy Olsen
Specifically to the point of Swift backend for Cinder...

From my understanding, Swift was never intended to provide block-device
abstractions the way that Ceph does. That's not to say that it couldn't,
but it doesn't today.

I wonder if you might be targeting the wrong audience by going to the
Cinder community for the Swift-backed volume support in Cinder. Since
Cinder is not in the datapath it cannot provide the block level
abstractions necessary for Swift objects to be treated as block devices.

If you're really interested in this, you might want to reach out to the
Swift community to see if there is an interest in adding block support.
After some form of block device abstraction is available for Swift, a
driver can be written for Cinder which exposes the block abstractions.

- Billy


On Wed, Mar 18, 2015 at 4:43 PM John Griffith 
wrote:

> On Wed, Mar 18, 2015 at 12:25 PM, Adam Lawson  wrote:
>
>> The aim is cloud storage that isn't affected by a host failure and major
>> players who deploy hyper-scaling clouds architect them to prevent that from
>> happening. To me that's cloud 101. Physical machine goes down, data
>> disappears, VM's using it fail and folks scratch their head and ask this
>> was in the cloud right? That's the indication of a service failure, not a
>> feature.
>>
>
> Yeah, the idea of an auto-evacuate is def nice, and I know there's
> progress there just maybe not as far along as some would like.  I'm far
> from a domain expert there though so I can't say much, other than I keep
> beating the drum that that doesn't require shared storage.
>
> Also, I would argue depending on who you ask, cloud 101 actually says;
> "The Instance puked, auto-spin up another one and get on wit it".  I'm
> certainly not arguing your points, just noting their are multiple views on
> this.  Also.
> ​
>
>
>>
>> I'm just a very big proponent of cloud arch that provides a seamless
> abstraction between the service and the hardware. Ceph and DRBD are decent
>> enough. But tying data access to a single host by design is a mistake IMHO
>> so I'm asking why we do things the way we do and whether that's the way
>> it's always going to be.
>>
>
> So others have/will chime in here... one thing I think is kinda missing
> in the statement above is the "single host", that's actually the whole
> point of Ceph and other vendor driven clustered storage technologies out
> there.  There's a ton to choose from at this point, open source as well as
> proprietary and a lot of them are really really good.  This is also very
> much what DRBD aims to solve for you.  You're not tying data access to a
> single host/node, that's kinda the whole point.
>
> Granted in the case of DRBD we've still got a ways to go and something we
> haven't even scratched the surface on much is virtual/shared IP's for
> targets but we're getting there albeit slowly (there are folks who are
> doing this already but haven't contributed their work back upstream), so in
> that case yes we still have a shortcoming in that if the node that's acting
> as your target server goes down you're kinda hosed.
>
>
>>
>> Of course this bumps into the question whether all apps hosted in the
>> cloud should be cloud aware or whether the cloud should have some tolerance
>> for legacy apps that are not written that way.
>>
>
> I've always felt "it depends".  I think you should be able to do both
> honestly (and IMHO you can currently), but if you want to take full
> advantage of everything that's offered in an OpenStack context at least,
> the best way to do that is to design and build with failure and dynamic
> provisioning in mind.
>
>
>>
>>
>>
>> *Adam Lawson*
>>
>> AQORN, Inc.
>> 427 North Tatnall Street
>> Ste. 58461
>> Wilmington, Delaware 19801-2230
>> Toll-free: (844) 4-AQORN-NOW ext. 101
>> International: +1 302-387-4660
>> Direct: +1 916-246-2072
>>
>>
> Just my 2 cents, hope it's helpful.
>
> John
>
>
>>
>> On Wed, Mar 18, 2015 at 10:59 AM, Duncan Thomas 
>> wrote:
>>
>>> I'm not sure of any particular benefit to trying to run cinder volumes
>>> over swift, and I'm a little confused by the aim - you'd do better to use
>>> something closer to purpose designed for the job if you want software fault
>>> tolerant block storage - Ceph and DRBD are the two open-source options I
>>> know of.
>>>
>>> On 18 March 2015 at 19:40, Adam Lawson  wrote:
>>>
 Hi everyone,

 Got some questions for whether certain use cases have been addressed
 and if so, where things are at. A few things I find particularly
 interesting:

- Automatic Nova evacuation for VM's using shared storage
- Using Swift as a back-end for Cinder

 I know we discussed Nova evacuate last year with some dialog leading
 into the Paris Operator Summit and there were valid unknowns around what
 would be required to constitute a host being "down", by what logic that
 would be calculated and what would be required to initiate the move and
>

[openstack-dev] Security Groups - What is the future direction?

2015-03-18 Thread Andrew Mann
The Nova API attaches security groups to servers.  The Neutron API attaches
security groups to ports. A server can of course have multiple ports. Up
through Icehouse at least the Horizon GUI only exposes the ability to map
security groups to servers (I haven't looked beyond Icehouse).

Are both server and port associations of security groups planned to be
supported into the future, or with the progression towards neutron, will
server association be retired?
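
For reference, a sketch of the two association styles as they look from the
Python clients (client construction omitted; call signatures are taken from
the python-novaclient/python-neutronclient of this era and may differ in your
release):

    def associate_security_group(nova, neutron, server_id, port_id, sg_name, sg_id):
        """Illustrative only: Nova-style vs Neutron-style association."""
        # Nova-style: the group is associated with the server as a whole.
        nova.servers.add_security_group(server_id, sg_name)

        # Neutron-style: the group is associated with one specific port.
        neutron.update_port(port_id, {"port": {"security_groups": [sg_id]}})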


-- 
Andrew Mann
DivvyCloud Inc.
www.divvycloud.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Capability Discovery API

2015-03-18 Thread Andrew Mann
Here's a possibly relevant use case for this discussion:

1) Running Icehouse OpenStack
2) Keystone reports v3.0 auth capabilities
3) If you actually use the v3.0 auth, then any nova call that gets passed
through to cinder fails due to the code in Icehouse being unable to parse
the 3.0 service catalog format

Due to the limited ability to interrogate OpenStack and determine what is
running, we have to auth with v3, and then make a volume related nova call
and see if it fails. Afterward we can go down code paths to work around the
OS bugs in the presumed version.  If a more robust API for determining the
running components and their capabilities were available, this would be an
easier situation to deal with.
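
For illustration, this is roughly the probe-and-branch pattern we are forced
into today (names are hypothetical; any call that exercises the suspect code
path would do):

    def cloud_supports(probe_call, *args, **kwargs):
        """Return True if this cloud handles the call, False if we must
        fall back to a workaround code path."""
        try:
            probe_call(*args, **kwargs)
            return True
        except Exception:
            # e.g. Icehouse failing to parse the v3 service catalog on
            # nova->cinder passthrough calls
            return False

    # hypothetical usage: probe once, cache the answer, branch accordingly
    # V3_SAFE_FOR_VOLUMES = cloud_supports(nova.volumes.list)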

The main point of this is that a capabilities API requires an absolutely
flawless implementation to be sufficient. It fails if a capability is
reported as available, but the implementation in that particular release
has a bug. The version of implementation code also needs to be exposed
through the API for consumers to be able to know when issues are present
and work around them.

-Andrew



On Wed, Mar 18, 2015 at 1:38 PM, Ian Wells  wrote:

> On 18 March 2015 at 03:33, Duncan Thomas  wrote:
>
>> On 17 March 2015 at 22:02, Davis, Amos (PaaS-Core) <
>> amos.steven.da...@hp.com> wrote:
>>
>>> Ceph/Cinder:
>>> LVM or other?
>>> SCSI-backed?
>>> Any others?
>>>
>>
>> I'm wondering why any of the above matter to an application.
>>
>
> The Neutron requirements list is the same.  Everything you've listed
> concerns implementation details, with the exception of shared networks (which
> are a core feature, and so it's actually rather unclear what you had in
> mind there).
>
> Implementation details should be hidden from cloud users - they don't care
> if I'm using ovs/vlan, and they don't care that I change my cloud one day
> to run ovs/vxlan, they only care that I deliver a cloud that will run their
> application - and since I care that I don't break applications when I make
> under the cover changes I will be thinking carefully about that too. I
> think you could develop a feature list, mind, just that you've not managed
> it here.
>
> For instance: why is an LVM disk different from one on a Netapp when
> you're a cloud application and you always attach a volume via a VM?  Well,
> it basically isn't, unless there are features (like for instance a minimum
> TPS guarantee) that are different between the drivers.  Cinder's even
> stranger here, since you can have multiple backend drivers simultaneously
> and a feature may not be present in all of them.
>
> Also, in Neutron, the current MTU and VLAN work is intended to expose some
> of those features to the app more than they were previously (e.g. 'can I
> use a large MTU on this network?'), but there are complexities in exposing
> this in advance of running the application.  The MTU size is not easy to
> discover in advance (it varies depending on what sort of network you're
> making), and what MTU you get for a specific network is very dependent on
> the network controller (network controllers can choose to not expose it at
> all, expose it with upper bounds in place, or expose it and try so hard to
> implement what the user requests that it's not immediately obvious whether
> a request will succeed or fail, for instance).  You could say 'you can ask
> for large MTU networks' - that is a straightforward feature - but some apps
> will fail to run if they ask and get declined.
>
> This is not to say there isn't useful work that could be done here, just
> that there may be some limitations on what is possible.
> --
> Ian.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Andrew Mann
DivvyCloud Inc.
www.divvycloud.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Questions re progress

2015-03-18 Thread John Griffith
On Wed, Mar 18, 2015 at 12:25 PM, Adam Lawson  wrote:

> The aim is cloud storage that isn't affected by a host failure and major
> players who deploy hyper-scaling clouds architect them to prevent that from
> happening. To me that's cloud 101. Physical machine goes down, data
> disappears, VM's using it fail and folks scratch their head and ask this
> was in the cloud right? That's the indication of a service failure, not a
> feature.
>
Yeah, the idea of an auto-evacuate is def nice, and I know there's progress
there just maybe not as far along as some would like.  I'm far from a
domain expert there though so I can't say much, other than I keep beating
the drum that that doesn't require shared storage.

Also, I would argue depending on who you ask, cloud 101 actually says; "The
Instance puked, auto-spin up another one and get on with it".  I'm certainly
not arguing your points, just noting there are multiple views on this.
Also.


>
> I'm just a very big proponent of cloud arch that provides a seamless
abstraction between the service and the hardware. Ceph and DRBD are decent
> enough. But tying data access to a single host by design is a mistake IMHO
> so I'm asking why we do things the way we do and whether that's the way
> it's always going to be.
>

So others have/will chime in here... one thing I think is kinda missing in
the statement above is the "single host", that's actually the whole point
of Ceph and other vendor driven clustered storage technologies out there.
There's a ton to choose from at this point, open source as well as
proprietary and a lot of them are really really good.  This is also very
much what DRBD aims to solve for you.  You're not tying data access to a
single host/node, that's kinda the whole point.

Granted in the case of DRBD we've still got a ways to go and something we
haven't even scratched the surface on much is virtual/shared IP's for
targets but we're getting there albeit slowly (there are folks who are
doing this already but haven't contributed their work back upstream), so in
that case yes we still have a shortcoming in that if the node that's acting
as your target server goes down you're kinda hosed.


>
> Of course this bumps into the question whether all apps hosted in the
> cloud should be cloud aware or whether the cloud should have some tolerance
> for legacy apps that are not written that way.
>

I've always felt "it depends".  I think you should be able to do both
honestly (and IMHO you can currently), but if you want to take full
advantage of everything that's offered in an OpenStack context at least,
the best way to do that is to design and build with failure and dynamic
provisioning in mind.


>
>
>
> *Adam Lawson*
>
> AQORN, Inc.
> 427 North Tatnall Street
> Ste. 58461
> Wilmington, Delaware 19801-2230
> Toll-free: (844) 4-AQORN-NOW ext. 101
> International: +1 302-387-4660
> Direct: +1 916-246-2072
>
>
Just my 2 cents, hope it's helpful.

John


>
> On Wed, Mar 18, 2015 at 10:59 AM, Duncan Thomas 
> wrote:
>
>> I'm not sure of any particular benefit to trying to run cinder volumes
>> over swift, and I'm a little confused by the aim - you'd do better to use
>> something closer to purpose designed for the job if you want software fault
>> tolerant block storage - Ceph and DRBD are the two open-source options I
>> know of.
>>
>> On 18 March 2015 at 19:40, Adam Lawson  wrote:
>>
>>> Hi everyone,
>>>
>>> Got some questions for whether certain use cases have been addressed and
>>> if so, where things are at. A few things I find particularly interesting:
>>>
>>>- Automatic Nova evacuation for VM's using shared storage
>>>- Using Swift as a back-end for Cinder
>>>
>>> I know we discussed Nova evacuate last year with some dialog leading
>>> into the Paris Operator Summit and there were valid unknowns around what
>>> would be required to constitute a host being "down", by what logic that
>>> would be calculated and what would be required to initiate the move and
>>> which project should own the code to make it happen. Just wondering where
>>> we are with that.
>>>
>>> On a separate note, Ceph has the ability to act as a back-end for
>>> Cinder, Swift does not. Perhaps there are performance trade-offs to
>>> consider but I'm a big fan of service plane abstraction and what I'm not a
>>> fan of is tying data to physical hardware. The fact this continues to be
>>> the case with Cinder troubles me.
>>>
>>> So a question; are these being addressed somewhere in some context? I
>>> admittedly don't want to distract momentum on the Nova/Cinder teams, but I
>>> am curious if these exist (or conflict) with our current infrastructure
>>> blueprints?
>>>
>>> Mahalo,
>>> Adam
>>>
>>> *Adam Lawson*
>>>
>>> AQORN, Inc.
>>> 427 North Tatnall Street
>>> Ste. 58461
>>> Wilmington, Delaware 19801-2230
>>> Toll-free: (844) 4-AQORN-NOW ext. 101
>>> International: +1 302-387-4660
>>> Direct: +1 916-246-2072
>>>
>>>
>>>
>>> ___

[openstack-dev] [Fuel][Docs] Triage rules for documentation bugs

2015-03-18 Thread Dmitry Borodaenko
I've added following bug importance guidelines for documentation bugs
in the public Fuel wiki [0]:

* Critical = following the instructions from documentation can cause
outage or data loss
* High = documentation includes information that is not true, or
instructions that do not yield the advertised outcome
* Medium = important information is missing from documentation (e.g.
new feature description)
* Low = additional information would improve reader's understanding of a feature
* Wishlist = cosmetic formatting and grammar issues

The "How to contribute" page doesn't include the definition of our
code freeze process, so I don't have a good place to publish it yet.
In short, code freeze doesn't apply the same way to the fuel-docs
repository: documentation changes, including bugfixes, can be worked on
throughout code freeze all the way until the release week.

More generic bug importance criteria based on functionality still
apply. For example, the definition of High importance as "specific
hardware, configurations, or components are unusable and there's no
workaround; or everything is broken but there's a workaround" means
that when a feature is not usable without documentation, lack of
documentation for a feature brings the bug importance up to High.

[0] 
https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Confirm_and_triage_bugs

-- 
Dmitry Borodaenko

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting Mar 19 1800 UTC

2015-03-18 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting in #openstack-meeting-alt channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20150319T18

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fuel 6.1 Feature Freeze in action, further milestones postponed for 2 weeks

2015-03-18 Thread Eugene Bogdanov

Hi everyone,

As we continue working on the Fuel 6.1 release, let me share some updates. I 
do realize I should have shared this earlier; I apologize for the delay 
in this communication. So, here we go:


1. We are officially at Feature Freeze for 6.1.
According to our Release Schedule for 6.1 [1], with quite a few 
exceptions [2], we have entered the Feature Freeze [3] state. As usual, 
let me request Feature Leads', Component Leads' and core reviewers' help 
with sorting out the blueprints - many have to be properly updated, and 
others (not started/not completed) moved to the next milestone.


2. 6.1 Soft Code Freeze and further milestones are postponed for 2 weeks.
We cannot declare Soft Code Freeze until we are done with all 
exceptions. Based on cumulative Feature Leads' feedback, we'll need 
roughly 2 weeks to deal with all exceptions and perform the necessary 
level of QA, so we have updated the Release Schedule with new dates for 
Soft Code Freeze (March 31st), Hard Code Freeze (April 23rd) and GA 
Release (May 14th).


Another challenge is that we have a quite considerable number of Medium 
Priority bugs. To ensure good quality for this release we also need to 
fix as many Medium Priority bugs as possible before we declare Soft Code 
Freeze, so we greatly appreciate your contributions here.


Thank you very much for your continued contributions, your efforts are 
greatly appreciated.


[1] https://wiki.openstack.org/wiki/Fuel/6.1_Release_Schedule

[2] List of current FF exceptions for 6.1 release:
https://blueprints.launchpad.net/fuel/+spec/plugins-deployment-order
https://blueprints.launchpad.net/fuel/+spec/consume-external-ubuntu
https://blueprints.launchpad.net/fuel/+spec/200-nodes-support
https://blueprints.launchpad.net/fuel/+spec/separate-mos-from-linux
https://blueprints.launchpad.net/murano/+spec/muraniclient-url-download
https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
https://blueprints.launchpad.net/fuel/+spec/support-ubuntu-trusty

[3] https://wiki.openstack.org/wiki/FeatureFreeze


--
EugeneB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][neutron] Debugging L3 Agent with PyCharm

2015-03-18 Thread Daniel Comnea
Gal,

while I don't have an answer to your question, can you please share how you
enabled the Gevent debugging?

Thx,
Dani



On Wed, Mar 18, 2015 at 10:16 AM, Gal Sagie  wrote:

> Hello all,
>
> I am trying to debug the L3 agent code with PyCharm, but the debugger
> doesn't stop on my breakpoints.
>
> I have enabled PyCharm Gevent compatible debugging but that doesn't solve
> the issue
> (I am able to debug neutron server correctly)
>
> Anyone might know what is the problem?
>
> Thanks
> Gal.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][pbr] fixing up pbr's master branch

2015-03-18 Thread Robert Collins
On 19 March 2015 at 10:51, Doug Hellmann  wrote:
> Excerpts from Robert Collins's message of 2015-03-19 09:15:36 +1300:

> I wonder if it had to do with Oslo's alpha releases? Since we're no
> longer doing that, do we still care? Are we still actually "broken"?

Yes, we do and should fix it. Details in the IRC log (sorry:/)

>>
>> I don't recall the exact detail of the conflict here - but its all in
>> -infra channel logs if that matters. Making the change should be
>> pretty straight forward.
>>
>> A second but also mandatory change is to synchronise on the final
>> pre-release tag definitions in PEP-440, IIRC that was just 'rc' ->
>> 'c'.
>
> How do we use those?

git tag 1.0.0.0rc1 for instance.

We read in any of the version formats we've previously used and output
a canonical form.
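
For illustration only (a sketch using the stand-alone `packaging` library,
which implements PEP-440 parsing and ordering; this is not pbr code):

    from packaging.version import Version

    print(Version("1.0.0.0rc1"))         # already canonical: 1.0.0.0rc1
    print(Version("1.2.3.alpha1.dev1"))  # normalizes to 1.2.3a1.dev1

    # dev releases sort before pre-releases of the same release, which in
    # turn sort before the final release:
    assert Version("1.2.3.dev2") < Version("1.2.3.alpha1.dev1") < Version("1.2.3")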
..
>> PEP-440. To reiterate the current situation (AIUI) is warn not
>> enforce, and no action should be needed here.
>
> OK, so the problem was with installing packages with "bad" versions,
> rather than building them in the first place?

There were a couple of bugs where we couldn't read old versions but
they were fixed immediately, since they were gate breakers.

> I thought maybe someone closer to the issue would remember the
> details. I should probably just try to use trunk pbr and see where
> it's failing, if at all.

We stopped using trunk the day that pip implemented pep-440 and we
found out we had this skew :(.

>>
>> > Some of the other special casing seems to be for TripleO's benefit
>> > (especially the stuff that generates versions from untagged commits).
>> > Is that working? If not, is it still necessary to have?
>>
>> Huh, no. That's all about unbreaking our behaviour in the gate. We've
>> had (and can still have with the current release) cases where we end up
>> installing from pypi rather than the thing we're testing, if the
>> version numbers align wrongly. (All due to accidentally rewinding
>> versions in the presence of pre-release versions). Pbr has generated
>> versions forever. It just generates completely broken ones in the
>> released code. Yes broken - they go backwards :).
>
> By pre-release do you mean things with "alpha" in them, or do you mean
> commits that were made after a release tag?

alpha etc.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][pbr] fixing up pbr's master branch

2015-03-18 Thread Doug Hellmann
Excerpts from Robert Collins's message of 2015-03-19 09:15:36 +1300:
> On 18 March 2015 at 03:03, Doug Hellmann  wrote:
> > Now that we have good processes in place for the other Oslo libraries, I
> > want to bring pbr into the fold so to speak and start putting it through
> > the same reviews and release procedures. We also have some open bugs
> > that I'd like to find someone to help fix, but because we can't actually
> > release from the master branch right now working on fixes is more
> > complicated than it needs to be. I don't want to focus on placing blame,
> > just understanding where things actually stand and then talking about
> > how to get them to a better state.
> 
> +1.
> 
> > From what I can tell, the main problem we have in master right now
> > is that the new semver rules added as part of [1] don't like some
> > of the existing stable branch tags being used by projects. This
> > feels a bit like we overreached with the spec, and so I would like
> > to explore options for pulling back and changing directions. It is
> > quite likely I don't fully understand either the original intent
> > or the current state of things, but I want to start the discussion
> > with the hope that those with the details can correct my mistakes
> > and fill in any gaps.
> 
> My understanding is different. The thing preventing a release of trunk
> was that pbr and PEP-440 ended up (after lots of effort!) at odds, and
> pip strictly implements PEP-440.
> 
> The key change is to tweak the generation of versions when pre-release
> tags are in play.
> 
> Given this state:
> commit X
> commit Y tagged 1.2.3.alpha1
> commit Z tagged 1.2.2
> 
> PEP-440 says that 1.2.3.alpha1.dev1
> is legitimate
> 
> but we'd chosen to do 1.2.3.dev2 - discarding the .alpha1 and walking
> back to the last tag.

I wonder if it had to do with Oslo's alpha releases? Since we're no
longer doing that, do we still care? Are we still actually "broken"?

> 
> I don't recall the exact details of the conflict here - but it's all in
> the -infra channel logs if that matters. Making the change should be
> pretty straightforward.
> 
> A second but also mandatory change is to synchronise on the final
> pre-release tag definitions in PEP-440, IIRC that was just 'rc' ->
> 'c'.

How do we use those?

> 
> With those done we should be able to release trunk.
> 
> *all* the extra bits are extra and should not hold up releases
> 
> 
> On 'do what I say' for version numbers, pbr still permits that: if you
> tag, it will honour the tag. I'd like eventually to make it error
> rather than warn on weirdness there - both the requirement that things
> be in canonical form, and the insistence on honouring semver (no API
> breaking .minor releases!) - are aids for interop with pip and
> PEP-440. To reiterate the current situation (AIUI) is warn not
> enforce, and no action should be needed here.

OK, so the problem was with installing packages with "bad" versions,
rather than building them in the first place?

I thought maybe someone closer to the issue would remember the
details. I should probably just try to use trunk pbr and see where
it's failing, if at all.

> 
> > Some of the other special casing seems to be for TripleO's benefit
> > (especially the stuff that generates versions from untagged commits).
> > Is that working? If not, is it still necessary to have?
> 
> Huh, no. That's all about unbreaking our behaviour in the gate. We've
> had (and can still have with the current release) cases where we end up
> installing from pypi rather than the thing we're testing, if the
> version numbers align wrongly. (All due to accidentally rewinding
> versions in the presence of pre-release versions). Pbr has generated
> versions forever. It just generates completely broken ones in the
> released code. Yes broken - they go backwards :).

By pre-release do you mean things with "alpha" in them, or do you mean
commits that were made after a release tag?

> 
> > The tag-release command isn't necessary for OpenStack as far as I
> > can tell. We have a whole separate repository of tools with
> > release-related scripts and tooling [2], and those tools automate
> > far more than just creating a tag for us. I don't expect any OpenStack
> > project to directly use a pbr command for creating a tag. Maybe we
> > missed the window of opportunity there? How much of that work is done?
> > Should we drop any remaining plans?
> 
> I think we can defer this part of the conversation.
> 
> > Did I miss anything that's currently broken, or needs to be done before
> > we can consider pbr releasable for liberty?
> 
> We should update the spec as we do this.

Yes, it would be really good to have a concrete list of things we need
to bring master back into a working state. It sounds like we're closer
than I expected, which is good.

Doug

> 
> -Rob
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Fuel] Let's stick to OpenStack global requirements

2015-03-18 Thread Dmitry Borodaenko
Roman,

I like this proposal very much, thanks for the idea and for putting
together a straightforward process.

I assume you meant: "If a requirement that previously was only in Fuel
Requirements is merged to Global Requirements, it should be removed
from *Fuel* Requirements".

Sebastian,

We have found ways to resolve the conflicts between clvm and docker,
and between ruby versions 1.8 and 2.1, without introducing a separate
package repo for Fuel master. I've updated the blueprint to make note
of that:
https://blueprints.launchpad.net/fuel/+spec/separate-repo-for-master-node




On Wed, Mar 18, 2015 at 9:25 AM, Sebastian Kalinowski
 wrote:
> I assume that you considered a situation when we have a common repository
> with RPMs for Fuel master and for nodes.
> There are some plans (unfortunately I do not know details, so maybe someone
> from OSCI could tell more) to split those repositories. How this workflow
> will work with those separated repos? Will we still need it?
>
> Thanks!
> Sebastian
>
> 2015-03-18 11:04 GMT+01:00 Roman Prykhodchenko :
>>
>> Hi folks,
>>
>> before you say «romcheg, go away and never come back again!», please read
>> the story that caused me to propose this and the proposed solution. Perhaps
>> it makes you reconsider :)
>>
>> As you know for different reasons, among which are being able to set up
>> everything online and bringing up-to-date packages, we maintain an OSCI
>> repository which is used for building ISOs and deploying OpenStack services.
>> Managing that repo is a pretty hard job. Thus a dedicated group of people is
>> devoted to perform that duty, they are always busy because of a lot of
>> responsibilities they have.
>>
>> At the same time Fuel’s developers are pretty energetic and always want to
>> add new features to Fuel. For that they love to use different libraries,
>> many of which aren’t in the OSCI mirror yet. So they ask OSCI guys to add
>> more and more of those and I guess that’s pretty fine except one little
>> thing — sometimes those libraries conflict with ones, required by OpenStack
>> services.
>>
>> To prevent that from happening someone has to check every patch against
>> the OSCI repo and OpenStack’s global requirements, to detect whether a
>> version bump or adding a new library is required and whether it can be
>> performed. As you can guess, there’s too much of a human factor so
>> statistically no one does that until problems appear. Moreover, there’s
>> nothing but a «it’s not compatible with OpenStack» yelling from the OSCI team
>> that stops developers from changing dependencies in Fuel.
>>
>> All the stuff described above causes sometimes tremendous time losses and
>> is very problem-prone.
>>
>> I’d like to propose to make everyone’s life easier by following these
>> steps:
>>
>>  - Create a new project called Fuel Requirements, all changes to it should
>> go through a standard review procedure
>>  - We restrict ourselves to using only packages from both Fuel Requirements
>> and Global Requirements for the version of OpenStack Fuel is installing, in
>> the following manner:
>> - If a requirement is in Global Requirements, the version spec in all
>> Fuel’s components should be exactly like that.
>> - OSCI mirror should contain the maximum version of a requirement
>> that matches its version specification.
>> - If a requirement is not in the global requirements list, then Fuel
>> Requirements list should be used to check whether all Fuel’s components
>> require the same version of a library/package.
>> - OSCI mirror should contain the maximum version of a requirement
>> that matches its version specification.
>> - If a requirement that previously was only in Fuel Requirements is
>> merged to Global Requirements, it should be removed from Global Requirements
>>   - Set up CI jobs in both OpenStack CI and FuelCI to check all patches
>> against both Global Requirements and Fuel Requirements and block, if either
>> of checks doesn’t pass
>>   - Set up CI jobs to notify OSCI team if either Global Requirements or
>> Fuel Requirements are changed.
>>   - Set up requirements proposal jobs that will automatically propose
>> changes to all fuel projects once either of requirements lists was changed,
>> just like it’s done for OpenStack projects.
>>
>>
>> These steps may look terribly hard, but most of the job is already done in
>> OpenStack projects and therefore it can be reused for Fuel.
>> Looking forward for your feedback folks!
>>
>>
>> - romcheg
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [oslo][pbr] fixing up pbr's master branch

2015-03-18 Thread Donald Stufft

> On Mar 18, 2015, at 4:21 PM, Jeremy Stanley  wrote:
> 
> On 2015-03-19 09:15:36 +1300 (+1300), Robert Collins wrote:
> [...]
>> A second but also mandatory change is to synchronise on the final
>> pre-release tag definitions in PEP-440, IIRC that was just 'rc' ->
>> 'c'.
> [...]
> 
> Mmmwaffles. It was for a time, then by popular demand it got
> switched back to "rc" again.
> 
>http://legacy.python.org/dev/peps/pep-0440/#pre-releases
> 
> --
> Jeremy Stanley
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

To be clear, both “rc” and “c” are completely supported, the only thing
we changed is which one was the canonical representation. Other than that
using one is equivalent to using the other.
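
A quick way to see this locally (illustration only, using the `packaging`
library that pip vendors for its PEP-440 handling - the version strings are
just examples):

# Quick check of the "rc" vs "c" equivalence with the `packaging` library.
from packaging.version import Version

a = Version("1.0rc1")
b = Version("1.0c1")

print(a == b)                   # True - "c" and "rc" name the same pre-release
print(str(b))                   # "1.0rc1" - "rc" is the canonical spelling
print(Version("1.2.3.alpha1"))  # normalizes to "1.2.3a1"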

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Nominate Igor Malinovskiy for core team

2015-03-18 Thread yang, xing
+1

-Original Message-
From: Ben Swartzlander [mailto:b...@swartzlander.org] 
Sent: Wednesday, March 18, 2015 3:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Manila] Nominate Igor Malinovskiy for core team

Igor (u_glide on IRC) joined the Manila team back in December and has done a 
consistent amount of reviews and contributed significant new core features in 
the last 2-3 months. I would like to nominate him to join the Manila core 
reviewer team.

-Ben Swartzlander
Manila PTL


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][pbr] fixing up pbr's master branch

2015-03-18 Thread Robert Collins
On 19 March 2015 at 09:21, Jeremy Stanley  wrote:
> On 2015-03-19 09:15:36 +1300 (+1300), Robert Collins wrote:
> [...]
>> A second but also mandatory change is to synchronise on the final
>> pre-release tag definitions in PEP-440, IIRC that was just 'rc' ->
>> 'c'.
> [...]
>
> Mmmwaffles. It was for a time, then by popular demand it got
> switched back to "rc" again.
>
> http://legacy.python.org/dev/peps/pep-0440/#pre-releases

Man.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] requirements-py{2, 3} and universal wheels

2015-03-18 Thread Robert Collins
On 18 March 2015 at 02:33, Monty Taylor  wrote:

 If so, an option would be to have pbr recognize the version-specific
 input files as implying a particular rule, and adding that environment
 marker to the dependencies list automatically until we can migrate to a
 single requirements.txt (for which no rules would be implied).
>>>
>>> We could, or we could just migrate - I don't think its worth writing a
>>> compat shim.
>>
>> Also agree.
>
> Actually - no, I just realized - we need to do a compat shim - because
> pbr has no such thing as a stable release or ability to be versioned. We
> have requirements-pyX in the wild, which means we must support them
> basically until the end of time.

We have more options than that.
We can:
 a) keep them as first class things. That's what I think you mean above.
 b) freeze their feature set now, and tell folk wanting newer features
to stop using requirements-X files
 c) deprecate them: keep them working but nag folk to stop using them

All three things will meet our needs w.r.t. stable branches and
released things 'out there'.
Further - and I need to check this when the time comes - I am
reasonably sure we don't ship the requirements files in sdists, rather
it's all reflected into metadata, so we *totally* can evolve stuff if
we're willing to break git checkouts: not that we should or need to,
just that the window there is narrower than 'end of time', which
things on pypi might imply :).

Separately we can and should do a stable release and versioned deps
for pbr in future, but that requires a whole detailed discussion and
analysis.

> So I'm going to propose that we add a shim such as the one dhellmann
> suggests above so that pbr will support our old releases, but moving
> forward as a project, we should use markers and not requirements-pyX

I still don't think we need to do that. I am proposing we do b: we
stop adding features to requirements-X files, and advise folk of the
migration path.
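
(For anyone not familiar with the markers in question, a tiny illustration
using the `packaging` library - the requirement line is only an example:)

# Tiny illustration of the environment markers that replace requirements-pyX
# files: one requirements line, conditional on the interpreter.
from packaging.markers import Marker

marker = Marker("python_version < '3'")
print(marker.evaluate())  # True on Python 2, False on Python 3

# The equivalent requirements.txt line would look like:
#   futures; python_version < '3'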

I am poking at pip a bit right now to scratch the freaking annoying
setup_requires itch : not the evil fix to solve it for all releases
out there, just one to make our life with pbr and testtools and the
like a lot better - and that will take this very thread a bit further,
so I'd like to suggest we don't do anything right now.

Lets fix up trunk to be releasable and then discuss the next pbr
evolution after that. Julien's work should be incorporable trivially,
but shims for requirements etc - lets defer for a week or two.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][pbr] fixing up pbr's master branch

2015-03-18 Thread Jeremy Stanley
On 2015-03-19 09:15:36 +1300 (+1300), Robert Collins wrote:
[...]
> A second but also mandatory change is to synchronise on the final
> pre-release tag definitions in PEP-440, IIRC that was just 'rc' ->
> 'c'.
[...]

Mmmwaffles. It was for a time, then by popular demand it got
switched back to "rc" again.

http://legacy.python.org/dev/peps/pep-0440/#pre-releases

-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][pbr] fixing up pbr's master branch

2015-03-18 Thread Robert Collins
On 18 March 2015 at 03:03, Doug Hellmann  wrote:
> Now that we have good processes in place for the other Oslo libraries, I
> want to bring pbr into the fold so to speak and start putting it through
> the same reviews and release procedures. We also have some open bugs
> that I'd like to find someone to help fix, but because we can't actually
> release from the master branch right now working on fixes is more
> complicated than it needs to be. I don't want to focus on placing blame,
> just understanding where things actually stand and then talking about
> how to get them to a better state.

+1.

> From what I can tell, the main problem we have in master right now
> is that the new semver rules added as part of [1] don't like some
> of the existing stable branch tags being used by projects. This
> feels a bit like we overreached with the spec, and so I would like
> to explore options for pulling back and changing directions. It is
> quite likely I don't fully understand either the original intent
> or the current state of things, but I want to start the discussion
> with the hope that those with the details can correct my mistakes
> and fill in any gaps.

My understanding is different. The thing preventing a release of trunk
was that pbr and PEP-440 ended up (after lots of effort!) at odds, and
pip strictly implements PEP-440.

The key change is to tweak the generation of versions when pre-release
tags are in play.

Given this state:
commit X
commit Y tagged 1.2.3.alpha1
commit Z tagged 1.2.2

PEP-440 says that 1.2.3.alpha1.dev1
is legitimate

but we'd chosen to do 1.2.3.dev2 - discarding the .alpha1 and walking
back to the last tag.
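
(Illustration only, using the `packaging` library pip vendors, to make the
resulting PEP-440 ordering concrete - the generated dev version sorts *below*
the alpha that has already been tagged, which is how an already-published
pre-release can beat the local dev build:)

# How PEP-440 orders the versions in the scenario above.
from packaging.version import Version

versions = ["1.2.2", "1.2.3.dev2", "1.2.3.alpha1.dev1", "1.2.3.alpha1", "1.2.3"]
for v in sorted(versions, key=Version):
    print(v)
# 1.2.2
# 1.2.3.dev2         <- what pbr generated after the alpha tag
# 1.2.3.alpha1.dev1  <- what PEP-440 would also allow
# 1.2.3.alpha1       <- the tag that is already published
# 1.2.3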

I don't recall the exact details of the conflict here - but it's all in
the -infra channel logs if that matters. Making the change should be
pretty straightforward.

A second but also mandatory change is to synchronise on the final
pre-release tag definitions in PEP-440, IIRC that was just 'rc' ->
'c'.

With those done we should be able to release trunk.

*all* the extra bits are extra and should not hold up releases


On 'do what I say' for version numbers, pbr still permits that: if you
tag, it will honour the tag. I'd like eventually to make it error
rather than warn on weirdness there - both the requirement that things
be in canonical form, and the insistence on honouring semver (no API
breaking .minor releases!) - are aids for interop with pip and
PEP-440. To reiterate the current situation (AIUI) is warn not
enforce, and no action should be needed here.

> Some of the other special casing seems to be for TripleO's benefit
> (especially the stuff that generates versions from untagged commits).
> Is that working? If not, is it still necessary to have?

Huh, no. That's all about unbreaking our behaviour in the gate. We've
had (and can still have with the current release) cases where we end up
installing from pypi rather than the thing we're testing, if the
version numbers align wrongly. (All due to accidentally rewinding
versions in the presence of pre-release versions). Pbr has generated
versions forever. It just generates completely broken ones in the
released code. Yes broken - they go backwards :).

> The tag-release command isn't necessary for OpenStack as far as I
> can tell. We have a whole separate repository of tools with
> release-related scripts and tooling [2], and those tools automate
> far more than just creating a tag for us. I don't expect any OpenStack
> project to directly use a pbr command for creating a tag. Maybe we
> missed the window of opportunity there? How much of that work is done?
> Should we drop any remaining plans?

I think we can defer this part of the conversation.

> Did I miss anything that's currently broken, or needs to be done before
> we can consider pbr releasable for liberty?

We should update the spec as we do this.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Questions re progress

2015-03-18 Thread Clint Byrum
Excerpts from Adam Lawson's message of 2015-03-18 11:25:37 -0700:
> The aim is cloud storage that isn't affected by a host failure and major
> players who deploy hyper-scaling clouds architect them to prevent that from
> happening. To me that's cloud 101. A physical machine goes down, data
> disappears, VMs using it fail, and folks scratch their heads and ask "this
> was in the cloud, right?" That's the indication of a service failure, not a
> feature.
>

Ceph provides this for cinder installations that use it.

> I'm just a very big proponent of cloud arch that provides a seamless
> abstraction between the service and the hardware. Ceph and DRBD are decent
> enough. But tying data access to a single host by design is a mistake IMHO
> so I'm asking why we do things the way we do and whether that's the way
> it's always going to be.
> 

Why do you say Ceph is "decent"? It solves all the issues you're
talking about, and does so on commodity hardware.

> Of course this bumps into the question whether all apps hosted in the cloud
> should be cloud aware or whether the cloud should have some tolerance for
> legacy apps that are not written that way.
> 

Using volumes is more expensive than using specialized scale-out storage,
aka "cloud aware" storage. But finding and migrating to that scale-out
storage takes time and has a cost too, so volumes have their place and
always will.

So, can you be more clear, what is it that you're suggesting isn't
available now?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Subject: Re: Barbican : Usage of public_key, private_key and private_key_passphrase under RSA type Container

2015-03-18 Thread Asha Seshagiri
Hi Douglas ,

Thanks for your response .
Yeah it's Asha Again :)

I guess Barbican is not validating while storing the secret references
under private_key and public_key,
i.e. I am able to store a private secret type under public_key and a public
secret type under private_key.
The container resource stores the secret references irrespective of the
secret type.
Please find the example below :

*Command to create the public key *

root@barbican:~# curl -X POST -H 'content-type:application/json' -H
'X-Project-Id:12345' -d '{ "name": "AES key","payload": "public-secret",
"payload_content_type": "text/plain", *"secret_type": "public"*}'
http://localhost:9311/v1/secrets
{"secret_ref": "http://localhost:9311/v1/secrets/bd1f75e2-8c8d-40a1-8eb5-7c855eed84f9"}

*Command to create the private key*

curl -X POST -H 'content-type:application/json' -H 'X-Project-Id:12345' -d
'{ "name": "AES key","payload": "private-secret", "payload_content_type":
"text/plain",* "secret_type": "private"*}' http://localhost:9311/v1/secrets
{"secret_ref": "
http://localhost:9311/v1/secrets/7be75254-4137-4a90-ae4f-1fe43299bfbe
"}root@barbican:~#

root@barbican:~# curl -X POST -H 'content-type:application/json' -H
'X-Project-Id: 12345' -d '{ "name": "container3" ,"type":
"rsa","secret_refs": [ *{ "name": "private_key", "secret_ref":
"http://localhost:9311/v1/secrets/bd1f75e2-8c8d-40a1-8eb5-7c855eed84f9
" }*,
{ *"name": "public_key",
"secret_ref":"http://localhost:9311/v1/secrets/7be75254-4137-4a90-ae4f-1fe43299bfbe
"* }
] } ' http://localhost:9311/v1/containers
{"container_ref": "
http://localhost:9311/v1/containers/1005b36f-f6d5-4709-b9ca-030e2df841cc"}

Please correct me if I am wrong.
It would be great if you could help me on this.

Thanks and Regards,
Asha Seshagiri

Hello again Asha,

Yes, the predefined secret names in an RSA container should match up with
secret refs for those actual things.  "private_key" should point to the
private key of the RSA pair, "public_key" should point to the matching
public key.

private_key_passphrase is optional, and it is only used for
passphrase-protected keys.  It should point to a secret that has the plain
text passphrase used to unlock the private key.
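
(A small sketch of the matching done right, using the same endpoints as the
curl examples above - the payloads are placeholders, not real key material:)

# Sketch: create the two secrets and an RSA container with the names matched
# up correctly (private secret -> "private_key", public secret -> "public_key").
import json
import requests

BARBICAN = 'http://localhost:9311/v1'
HEADERS = {'content-type': 'application/json', 'X-Project-Id': '12345'}

def create_secret(name, payload, secret_type):
    body = {'name': name, 'payload': payload,
            'payload_content_type': 'text/plain', 'secret_type': secret_type}
    r = requests.post(BARBICAN + '/secrets', headers=HEADERS, data=json.dumps(body))
    return r.json()['secret_ref']

private_ref = create_secret('rsa private', 'PEM-private-key-here', 'private')
public_ref = create_secret('rsa public', 'PEM-public-key-here', 'public')

container = {
    'name': 'rsa-container', 'type': 'rsa',
    'secret_refs': [
        {'name': 'private_key', 'secret_ref': private_ref},  # private -> private_key
        {'name': 'public_key', 'secret_ref': public_ref},    # public  -> public_key
    ],
}
r = requests.post(BARBICAN + '/containers', headers=HEADERS, data=json.dumps(container))
print(r.json()['container_ref'])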

-Doug


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5  0CC9 AD14 1F30 2D58 923C

-- 
*Thanks and Regards,*
*Asha Seshagiri*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Nominate Igor Malinovskiy for core team

2015-03-18 Thread Ben Swartzlander
Igor (u_glide on IRC) joined the Manila team back in December and has 
done a consistent amount of reviews and contributed significant new core 
features in the last 2-3 months. I would like to nominate him to join 
the Manila core reviewer team.


-Ben Swartzlander
Manila PTL


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Capability Discovery API

2015-03-18 Thread Ian Wells
On 18 March 2015 at 03:33, Duncan Thomas  wrote:

> On 17 March 2015 at 22:02, Davis, Amos (PaaS-Core) <
> amos.steven.da...@hp.com> wrote:
>
>> Ceph/Cinder:
>> LVM or other?
>> SCSI-backed?
>> Any others?
>>
>
> I'm wondering why any of the above matter to an application.
>

The Neutron requirements list is the same.  Everything you've listed
describes implementation details, with the exception of shared networks
(which are a core feature, and so it's actually rather unclear what you had
in mind there).

Implementation details should be hidden from cloud users - they don't care
if I'm using ovs/vlan, and they don't care that I change my cloud one day
to run ovs/vxlan, they only care that I deliver a cloud that will run their
application - and since I care that I don't break applications when I make
under the cover changes I will be thinking carefully about that too. I
think you could develop a feature list, mind, just that you've not managed
it here.

For instance: why is an LVM disk different from one on a Netapp when you're
a cloud application and you always attach a volume via a VM?  Well, it
basically isn't, unless there are features (like for instance a minimum TPS
guarantee) that are different between the drivers.  Cinder's even stranger
here, since you can have multiple backend drivers simultaneously and a
feature may not be present in all of them.

Also, in Neutron, the current MTU and VLAN work is intended to expose some
of those features to the app more than they were previously (e.g. 'can I
use a large MTU on this network?'), but there are complexities in exposing
this in advance of running the application.  The MTU size is not easy to
discover in advance (it varies depending on what sort of network you're
making), and what MTU you get for a specific network is very dependent on
the network controller (network controllers can choose to not expose it at
all, expose it with upper bounds in place, or expose it and try so hard to
implement what the user requests that it's not immediately obvious whether
a request will succeed or fail, for instance).  You could say 'you can ask
for large MTU networks' - that is a straightforward feature - but some apps
will fail to run if they ask and get declined.

This is not to say there isn't useful work that could be done here, just
that there may be some limitations on what is possible.
-- 
Ian.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Questions re progress

2015-03-18 Thread Adam Lawson
The aim is cloud storage that isn't affected by a host failure and major
players who deploy hyper-scaling clouds architect them to prevent that from
happening. To me that's cloud 101. A physical machine goes down, data
disappears, VMs using it fail, and folks scratch their heads and ask "this
was in the cloud, right?" That's the indication of a service failure, not a
feature.

I'm just a very big proponent of cloud arch that provides a seamless
abstraction between the service and the hardware. Ceph and DRBD are decent
enough. But tying data access to a single host by design is a mistake IMHO
so I'm asking why we do things the way we do and whether that's the way
it's always going to be.

Of course this bumps into the question whether all apps hosted in the cloud
should be cloud aware or whether the cloud should have some tolerance for
legacy apps that are not written that way.



*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


On Wed, Mar 18, 2015 at 10:59 AM, Duncan Thomas 
wrote:

> I'm not sure of any particular benefit to trying to run cinder volumes
> over swift, and I'm a little confused by the aim - you'd do better to use
> something closer to purpose designed for the job if you want software fault
> tolerant block storage - ceph and drbd are the two open-source options I
> know of.
>
> On 18 March 2015 at 19:40, Adam Lawson  wrote:
>
>> Hi everyone,
>>
>> Got some questions for whether certain use cases have been addressed and
>> if so, where things are at. A few things I find particularly interesting:
>>
>>- Automatic Nova evacuation for VM's using shared storage
>>- Using Swift as a back-end for Cinder
>>
>> I know we discussed Nova evacuate last year with some dialog leading into
>> the Paris Operator Summit and there were valid unknowns around what would
>> be required to constitute a host being "down", by what logic that would be
>> calculated and what would be required to initiate the move and which
>> project should own the code to make it happen. Just wondering where we are
>> with that.
>>
>> On a separate note, Ceph has the ability to act as a back-end for Cinder,
>> Swift does not. Perhaps there are performance trade-offs to consider but
>> I'm a big fan of service plane abstraction and what I'm not a fan of is
>> tying data to physical hardware. The fact this continues to be the case
>> with Cinder troubles me.
>>
>> So a question; are these being addressed somewhere in some context? I
>> admittedly don't want to distract momentum on the Nova/Cinder teams, but I
>> am curious if these exist (or conflict) with our current infrastructure
>> blueprints?
>>
>> Mahalo,
>> Adam
>>
>> *Adam Lawson*
>>
>> AQORN, Inc.
>> 427 North Tatnall Street
>> Ste. 58461
>> Wilmington, Delaware 19801-2230
>> Toll-free: (844) 4-AQORN-NOW ext. 101
>> International: +1 302-387-4660
>> Direct: +1 916-246-2072
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Duncan Thomas
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Block migrations and Cinder volumes

2015-03-18 Thread Daniel P. Berrange
On Wed, Mar 18, 2015 at 10:59:19AM -0700, Joe Gordon wrote:
> On Wed, Mar 18, 2015 at 3:09 AM, Daniel P. Berrange 
> wrote:
> 
> > On Wed, Mar 18, 2015 at 08:33:26AM +0100, Thomas Herve wrote:
> > > > Interesting bug.  I think I agree with you that there isn't a good
> > solution
> > > > currently for instances that have a mix of shared and not-shared
> > storage.
> > > >
> > > > I'm curious what Daniel meant by saying that marking the disk
> > shareable is
> > > > not
> > > > as reliable as we would want.
> > >
> > > I think this is the bug I reported here:
> > https://bugs.launchpad.net/nova/+bug/1376615
> > >
> > > My initial approach was indeed to mark the disks as shareable: the
> > patch (https://review.openstack.org/#/c/125616/) has comments around the
> > issues, mainly around I/O cache and SELinux isolation being disabled.
> >
> > Yep, those are both show stopper issues. The only solution is to fix the
> > libvirt API for this first.
> >
> 
> Thanks for the clarification, is there a  bug tracking this in libvirt
> already?

Actually I don't think there is one, so feel free to file one


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Block migrations and Cinder volumes

2015-03-18 Thread Joe Gordon
On Wed, Mar 18, 2015 at 3:09 AM, Daniel P. Berrange 
wrote:

> On Wed, Mar 18, 2015 at 08:33:26AM +0100, Thomas Herve wrote:
> > > Interesting bug.  I think I agree with you that there isn't a good
> solution
> > > currently for instances that have a mix of shared and not-shared
> storage.
> > >
> > > I'm curious what Daniel meant by saying that marking the disk
> shareable is
> > > not
> > > as reliable as we would want.
> >
> > I think this is the bug I reported here:
> https://bugs.launchpad.net/nova/+bug/1376615
> >
> > My initial approach was indeed to mark the disks as shareable: the
> patch (https://review.openstack.org/#/c/125616/) has comments around the
> issues, mainly around I/O cache and SELinux isolation being disabled.
>
> Yep, those are both show stopper issues. The only solution is to fix the
> libvirt API for this first.
>

Thanks for the clarification, is there a  bug tracking this in libvirt
already?


>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Questions re progress

2015-03-18 Thread Duncan Thomas
I'm not sure of any particular benefit to trying to run cinder volumes over
swift, and I'm a little confused by the aim - you'd do better to use
something closer to purpose designed for the job if you want software fault
tolerant block storage - ceph and drbd are the two open-source options I
know of.

On 18 March 2015 at 19:40, Adam Lawson  wrote:

> Hi everyone,
>
> Got some questions for whether certain use cases have been addressed and
> if so, where things are at. A few things I find particularly interesting:
>
>- Automatic Nova evacuation for VM's using shared storage
>- Using Swift as a back-end for Cinder
>
> I know we discussed Nova evacuate last year with some dialog leading into
> the Paris Operator Summit and there were valid unknowns around what would
> be required to constitute a host being "down", by what logic that would be
> calculated and what would be required to initiate the move and which
> project should own the code to make it happen. Just wondering where we are
> with that.
>
> On a separate note, Ceph has the ability to act as a back-end for Cinder,
> Swift does not. Perhaps there are performance trade-offs to consider but
> I'm a big fan of service plane abstraction and what I'm not a fan of is
> tying data to physical hardware. The fact this continues to be the case
> with Cinder troubles me.
>
> So a question; are these being addressed somewhere in some context? I
> admittedly don't want to distract momentum on the Nova/Cinder teams, but I
> am curious if these exist (or conflict) with our current infrastructure
> blueprints?
>
> Mahalo,
> Adam
>
> *Adam Lawson*
>
> AQORN, Inc.
> 427 North Tatnall Street
> Ste. 58461
> Wilmington, Delaware 19801-2230
> Toll-free: (844) 4-AQORN-NOW ext. 101
> International: +1 302-387-4660
> Direct: +1 916-246-2072
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][FFE] - IdP ID (remote_id) registration and validation

2015-03-18 Thread Yee, Guang
I think we can create a mapping which restricts which IdP it is applicable to. 
When playing around with K2K, I've experimented with multiple IdPs. I basically 
chained the IdPs in shibboleth2.xml like this






And with a mapping intended for Acme IdP, we can ensure that only Acme users 
can map to group '1234567890'.

{
    "mapping": {
        "rules": [
            {
                "local": [
                    {
                        "user": {
                            "name": "{0}"
                        }
                    },
                    {
                        "group": {
                            "id": "1234567890"
                        }
                    }
                ],
                "remote": [
                    {
                        "type": "openstack_user"
                    },
                    {
                        "type": "Shib-Identity-Provider",
                        "any_one_of": [
                            "https://acme.com/v3/OS-FEDERATION/saml2/idp"
                        ]
                    }
                ]
            }
        ]
    }
}

Shibboleth does convey the "Shib-Identity-Provider" attribute in the request 
environment. With this mechanism we should be able to create a rule for 
multiple IdPs as well.
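
(For completeness, a small sketch of uploading such a mapping through the
OS-FEDERATION API with python-requests - the endpoint, token and mapping ID
below are placeholders:)

# Sketch: upload the mapping above via Keystone's OS-FEDERATION API.
import json
import requests

KEYSTONE = 'http://localhost:5000/v3'   # placeholder endpoint
TOKEN = 'ADMIN_TOKEN'                   # placeholder token

mapping = {
    "mapping": {
        "rules": [
            {
                "local": [
                    {"user": {"name": "{0}"}},
                    {"group": {"id": "1234567890"}}
                ],
                "remote": [
                    {"type": "openstack_user"},
                    {"type": "Shib-Identity-Provider",
                     "any_one_of": ["https://acme.com/v3/OS-FEDERATION/saml2/idp"]}
                ]
            }
        ]
    }
}

resp = requests.put(
    KEYSTONE + '/OS-FEDERATION/mappings/acme_mapping',   # placeholder mapping ID
    headers={'X-Auth-Token': TOKEN, 'Content-Type': 'application/json'},
    data=json.dumps(mapping))
print(resp.status_code, resp.json())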


Guang


-Original Message-
From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk] 
Sent: Wednesday, March 18, 2015 2:41 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Keystone][FFE] - IdP ID (remote_id) registration 
and validation

In my opinion you have got into this situation because your federation trust 
model is essentially misguided. As I have been arguing since the inception of 
federated Keystone, you should have rules for trusted IdPs (already done), 
trusted attributes (not done), and then one set of mapping rules that apply to 
all IdPs and all attributes (not done). If you had followed this model (the one 
Kent originally implemented) you would not be in this situation now.

Concerning the remote user ID, we can guarantee that it is always globally 
unique by concatenating the IdP name with the IdP-issued user ID, so this won't 
cause a problem in mapping rules.

Concerning other identity attributes, there are two types:
- globally known and assigned attributes (such email address and other LDAP 
ones) that have unique IDs regardless of the IDP that issued them - the 
eduPerson schema attributes are of this type, so the mapping rules for these 
are IDP independent, and the trusted IDP rules ensure that you filter out 
untrusted ones
- locally issued attributes that mean different things to different IDPs. In 
this case you need to concatenate the name of the IDP to the attribute to make 
it globally unique, and then the mapping rules will always apply. The trusted 
IDP rules will again filter these out or let them pass.

So instead of fixing the model, you are adding more layers of complexity to the 
implementation in order to fix conceptual errors in your federation model.

Sadly yours

David


On 17/03/2015 22:28, Marek Denis wrote:
> Hello,
> 
> One very important feature that we have been working on in the Kilo 
> development cycle is management of remote_id attributes tied to 
> Identity Providers in keystone.
> 
> This work is crucial for:
> 
> -  Secure OpenStack identity federation configuration. User is 
> required to specify what Identity Provider (IdP) issues an assertion 
> as well as what protocol (s)he wishes to use (typically it would be 
> SAML2 or OpenId Connect). Based on that knowledge (arbitrarily 
> specified by a user), keystone fetches mapping rules configured for 
> {IdP, protocol} pair and applies it on the assertion. As an effect a 
> set of groups is returned, and by membership of those dynamically 
> assigned groups (and later roles), an ephemeral user is being granted 
> access to certain OpenStack resources. Without remote_id attributes, a 
> user can arbitrarily choose the pair {Identity Provider, protocol} 
> without respect to the issuing Identity Provider. This may lead to a 
> situation where Identity Provider X issues an assertion, but user 
> chooses mapping ruleset dedicated for Identity Provider Y, effectively 
> being granted improper groups (roles). As part of various federation 
> protocols, every Identity Provider issues an identifier allowing 
> trusting peers (Keystone  servers in this case) to reliably identify 
> issuer of the assertion. That said, remote_id attributes allow cloud 
> administrators to match assertions with Identity Providers objects 
> configured in keystone (i.e. situation depicted above would not 
> happen, as keystone object Identity Provider Y would accept assertions issued 
> by Identity Provider Y only).
> 
> - WebSSO implementation - a highly requested feature that allows to 
> use federation in OpenStack via web browsers, especially Horizon. 
> Without remote_ids server (keystone) is not able to distinguish wh

Re: [openstack-dev] [Congress] [Delegation] Meeting scheduling

2015-03-18 Thread Tim Hinrichs
Ruby,

The Custom constraint class was something Yathiraj mentioned a while back.  But 
yes the idea is that MemoryCapacityConstraint would be a special case of what 
we can express in the custom constraints.

Tim


On Mar 18, 2015, at 10:05 AM, 
ruby.krishnasw...@orange.com wrote:

Hello

o) custom constraint class
What did you mean by the “custom” constraint class?

  Did you mean we specify a “meta model” to specify constraints?  And then each 
“Policy” specifying a constraint  ( ) will lead to generation of the constraint 
in that meta-model.
Then the solver-scheduler could pick up the constraint?

This then will not require the “solver scheduler” to implement specific 
constraint classes such as “MemoryCapacityConstraint”.

We may have rules (not in sense of Datalog ☺ ) for name (e.g. variables or 
constants) generation?


Ruby

From: Tim Hinrichs [mailto:thinri...@vmware.com]
Sent: Wednesday, 18 March 2015 16:34
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] [Delegation] Meeting scheduling

I responded in the gdoc.  Here’s a copy.

One of my goals for delegation is to avoid asking people to write policy 
statements specific to any particular domain-specific solver.  People ought to 
encode policy however they like, and the system ought to figure out how best to 
enforce that policy  (delegation being one option).

Assuming that's a reasonable goal, I see two options for delegation to  
solverScheduler

(1) SolverScheduler exposes a custom constraint class.  Congress generates the 
LP program from the Datalog, similar to what is described in this doc, and 
gives that LP program as custom constraints to the  SolverScheduler.  
SolverScheduler is then responsible for enforcing that policy both during 
provisioning of new servers and for monitoring/migrating servers once 
provisioning is finished.

(2) The Congress adapter for SolverScheduler understands the semantics of 
MemoryCapacityConstraint, identifies when the user has asked for that 
constraint, and replaces that part of the LP program with the 
MemoryCapacityConstraint.

We probably want a combination of (1) and (2) so that we handle any gaps in the 
pre-defined constraints that SolverScheduler has, while at the same time 
leveraging the pre-defined constraints when possible.

Tim


On Mar 17, 2015, at 6:09 PM, Yathiraj Udupi (yudupi) 
mailto:yud...@cisco.com>> wrote:

Hi Tim,

I posted this comment on the doc.  I am still pondering over the possibility of 
having a policy-driven scheduler workflow via the Solver Scheduler placement 
engine, which is also LP-based, like you describe in your doc.
I know in your initial meeting, you plan to go over your proposal of building a 
VM placement engine that subscribes to the Congress DSE,  I probably will 
understand the Congress workflows better and see how I could incorporate this 
proposal to talk to the Solver Scheduler to make the placement decisions.

The example you provide in the doc, is a very good scenario, where a VM 
placement engine should continuously monitor and trigger VM migrations.

I am also interested in the case of a policy-driven scheduling for the initial 
creation of VMs. This is where say people will call Nova APIs and create a new 
set of VMs. Here the scheduler workflow should address the constraints as 
imposed from the user's policies.

Say the simple policy is " Host's free RAM >= 0.25 * Memory_Capacity"
I would like the scheduler to use this policy as defined from Congress, and 
apply it during the scheduling as part of the Nova boot call.

I am really interested in and need help in coming up with a solution 
integrating Solver Scheduler, so say if I have an implementation of a 
"MemoryCapacityConstraint", which takes a hint value "free_memory_limit" (0.25 
in this example),
could we have a policy in Datalog

placement_requirement(id) :-
nova:host(id),
solver_scheduler:applicable_constraints(id, ["MemoryCapacityConstraint", ]),
applicable_metadata(id, {"free_memory_limit": 0.25, })

This policy could be set and delegated by Congress to solver scheduler via the 
"set_policy" API. or the Solver Scheduler can query Congress via a "get_policy" 
API to get this policy, and incorporate it as part of the solver scheduler 
workflow ?
Does this sound doable ?

Thanks,
Yathi.



On 3/16/15, 11:05 AM, "Tim Hinrichs" 
mailto:thinri...@vmware.com>> wrote:

Hi all,

The feedback on the POC delegation proposal has been mostly positive.  Several 
people have asked for a meeting to discuss further.  Given time zone 
constraints, it will likely be 8a or 9a Pacific.  Let me know in the next 2 
days if you want to participate, and we will try to find a day that everyone 
can attend.

https://docs.google.com/document/d/1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI/edit

[openstack-dev] [Nova][Cinder] Questions re progress

2015-03-18 Thread Adam Lawson
Hi everyone,

Got some questions for whether certain use cases have been addressed and if
so, where things are at. A few things I find particularly interesting:

   - Automatic Nova evacuation for VM's using shared storage
   - Using Swift as a back-end for Cinder

I know we discussed Nova evacuate last year with some dialog leading into
the Paris Operator Summit and there were valid unknowns around what would
be required to constitute a host being "down", by what logic that would be
calculated and what would be required to initiate the move and which
project should own the code to make it happen. Just wondering where we are
with that.

On a separate note, Ceph has the ability to act as a back-end for Cinder,
Swift does not. Perhaps there are performance trade-offs to consider but
I'm a big fan of service plane abstraction and what I'm not a fan of is
tying data to physical hardware. The fact this continues to be the case
with Cinder troubles me.

So a question; are these being addressed somewhere in some context? I
admittedly don't want to distract momentum on the Nova/Cinder teams, but I
am curious if these exist (or conflict) with our current infrastructure
blueprints?

Mahalo,
Adam

*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack][infra] Gate job works incorrectly

2015-03-18 Thread Nikolay Starodubtsev
Hi all,
This week I tried to discover why the job
gate-tempest-dsvm-neutron-src-python-saharaclient-juno fails,
and what the difference is from the same jobs in other clients. The results
are here:
1) The Sahara client job fails because of the check the Heat templates make. You
can see it here [1].
2) Other jobs don't fail because devstack installs Heat after the OpenStack
project clients, and Heat fetches a client version which is OK for it. You
can find it in the stack.sh log (reproducible on an all-in-one devstack node).

So, here we faced 2 issues:
1) The gate job works incorrectly because it uses client versions from
the Juno global requirements.
2) If we fix the first issue, the job will fail because of the Heat check
at [1].

Steps to reproduce:
1) Install all-in-one devstack node with Sahara and enabled stack.sh
logging.
2) Use LIBS_FROM_GIT for python-saharaclient and python-novaclient.
3) Check pip freeze | grep client
python-novaclient would be 2.20.0
python-saharaclient would have a long ref and would be fetched directly
from GitHub.

Now this job is skipped in saharaclient, but we should find a way to fix
the gate job ASAP.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] [Delegation] Meeting scheduling

2015-03-18 Thread ruby.krishnaswamy
Hello

o) custom constraint class
What did you mean by the “custom” constraint class?

  Did you mean we specify a “meta model” to specify constraints?  And then each 
“Policy” specifying a constraint  ( ) will lead to generation of the constraint 
in that meta-model.
Then the solver-scheduler could pick up the constraint?

This then will not require the “solver scheduler” to implement specific 
constraint classes such as “MemoryCapacityConstraint”.

We may have rules (not in sense of Datalog ☺ ) for name (e.g. variables or 
constants) generation?


Ruby

From: Tim Hinrichs [mailto:thinri...@vmware.com]
Sent: Wednesday, 18 March 2015 16:34
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] [Delegation] Meeting scheduling

I responded in the gdoc.  Here’s a copy.

One of my goals for delegation is to avoid asking people to write policy 
statements specific to any particular domain-specific solver.  People ought to 
encode policy however they like, and the system ought to figure out how best to 
enforce that policy  (delegation being one option).

Assuming that's a reasonable goal, I see two options for delegation to  
solverScheduler

(1) SolverScheduler exposes a custom constraint class.  Congress generates the 
LP program from the Datalog, similar to what is described in this doc, and 
gives that LP program as custom constraints to the  SolverScheduler.  
SolverScheduler is then responsible for enforcing that policy both during 
provisioning of new servers and for monitoring/migrating servers once 
provisioning is finished.

(2) The Congress adapter for SolverScheduler understands the semantics of 
MemoryCapacityConstraint, identifies when the user has asked for that 
constraint, and replaces that part of the LP program with the 
MemoryCapacityConstraint.

We probably want a combination of (1) and (2) so that we handle any gaps in the 
pre-defined constraints that SolverScheduler has, while at the same time 
leveraging the pre-defined constraints when possible.

Tim


On Mar 17, 2015, at 6:09 PM, Yathiraj Udupi (yudupi) 
mailto:yud...@cisco.com>> wrote:

Hi Tim,

I posted this comment on the doc.  I am still pondering over the possibility of 
having a policy-driven scheduler workflow via the Solver Scheduler placement 
engine, which is also LP-based, like you describe in your doc.
I know in your initial meeting, you plan to go over your proposal of building a 
VM placement engine that subscribes to the Congress DSE,  I probably will 
understand the Congress workflows better and see how I could incorporate this 
proposal to talk to the Solver Scheduler to make the placement decisions.

The example you provide in the doc, is a very good scenario, where a VM 
placement engine should continuously monitor and trigger VM migrations.

I am also interested in the case of a policy-driven scheduling for the initial 
creation of VMs. This is where say people will call Nova APIs and create a new 
set of VMs. Here the scheduler workflow should address the constraints as 
imposed from the user's policies.

Say the simple policy is " Host's free RAM >= 0.25 * Memory_Capacity"
I would like the scheduler to use this policy as defined from Congress, and 
apply it during the scheduling as part of the Nova boot call.

I am really interested in and need help in coming up with a solution 
integrating Solver Scheduler, so say if I have an implementation of a 
"MemoryCapacityConstraint", which takes a hint value "free_memory_limit" (0.25 
in this example),
could we have a policy in Datalog

placement_requirement(id) :-
nova:host(id),
solver_scheduler:applicable_constraints(id, ["MemoryCapacityConstraint", ]),
applicable_metadata(id, {"free_memory_limit": 0.25, })

This policy could be set and delegated by Congress to solver scheduler via the 
"set_policy" API. or the Solver Scheduler can query Congress via a "get_policy" 
API to get this policy, and incorporate it as part of the solver scheduler 
workflow ?
Does this sound doable ?
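
(As a toy illustration of the semantics I have in mind for
"MemoryCapacityConstraint" with free_memory_limit=0.25 - the host numbers
below are made up:)

# Toy illustration: a host is an acceptable placement only if, after placing
# the VM, its free RAM stays >= free_memory_limit * total capacity.
FREE_MEMORY_LIMIT = 0.25

hosts = {
    'host1': {'memory_mb': 32768, 'memory_mb_used': 22000},
    'host2': {'memory_mb': 32768, 'memory_mb_used': 8000},
}

def acceptable_hosts(vm_memory_mb):
    for name, h in hosts.items():
        free_after = h['memory_mb'] - h['memory_mb_used'] - vm_memory_mb
        if free_after >= FREE_MEMORY_LIMIT * h['memory_mb']:
            yield name

print(list(acceptable_hosts(4096)))  # ['host2'] with the numbers above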

Thanks,
Yathi.



On 3/16/15, 11:05 AM, "Tim Hinrichs" 
mailto:thinri...@vmware.com>> wrote:

Hi all,

The feedback on the POC delegation proposal has been mostly positive.  Several 
people have asked for a meeting to discuss further.  Given time zone 
constraints, it will likely be 8a or 9a Pacific.  Let me know in the next 2 
days if you want to participate, and we will try to find a day that everyone 
can attend.

https://docs.google.com/document/d/1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI/edit

Thanks!
Tim
__
OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [puppet] Release management and bug triage

2015-03-18 Thread Mathieu Gagné
On 2015-03-18 12:26 PM, Emilien Macchi wrote:
>>
>> The challenge is with release management at scale. I have a bunch of
>> tools which I use to create new series, milestones and release them. So
>> it's not that big of a deal.
> 
> Are you willing to share it?
> 

Sure. I'll make it a priority to publish it before the end of the week.

It needs a bit of cleanup though. =)

-- 
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Can I change the username for review.openstack.org?

2015-03-18 Thread Jeremy Stanley
On 2015-03-18 13:41:03 +0800 (+0800), Lily.Sing wrote:
> I follow the account setup steps here
>  and
> it says the username for review.openstack.org should be the same as
> launchpad.

Well, it says "The first time you sign into ... review.openstack.org
... Please enter your Launchpad username." It doesn't say that it
won't work if you don't use the same username (in fact it will work
just fine), but I've now proposed a clarification to the document:

https://review.openstack.org/165507

> But I input a mismatched one by mistake. Does it still work?

Yes, it works. There's no real need for them to be identical.

> If not, how can I change it? Thanks!

You can't. Gerrit is designed to assume that once a username is set
for an account it will never be changed.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Let's stick to OpenStack global requirements

2015-03-18 Thread Sebastian Kalinowski
I assume that you considered a situation when we have a common repository
with RPMs for Fuel master and for nodes.
There are some plans (unfortunately I do not know the details, so maybe someone
from OSCI could tell us more) to split those repositories. How will this workflow
work with those separated repos? Will we still need it?

Thanks!
Sebastian

2015-03-18 11:04 GMT+01:00 Roman Prykhodchenko :

> Hi folks,
>
> before you say «romcheg, go away and never come back again!», please read
> the story that caused me to propose this and the proposed solution. Perhaps
> it makes you reconsider :)
>
> As you know for different reasons, among which are being able to set up
> everything online and bringing up-to-date packages, we maintain an OSCI
> repository which is used for building ISOs and deploying OpenStack
> services. Managing that repo is a pretty hard job. Thus a dedicated group
> of people is devoted to performing that duty; they are always busy because of
> the many responsibilities they have.
>
> At the same time Fuel’s developers are pretty energetic and always want to
> add new features to Fuel. For that they love to use different libraries,
> many of which aren’t in the OSCI mirror yet. So they ask OSCI guys to add
> more and more of those and I guess that’s pretty fine except one little
> thing — sometimes those libraries conflict with ones, required by OpenStack
> services.
>
> To prevent that from happening someone has to check every patch against
> the OSCI repo and OpenStack’s global requirements, to detect whether a
> version bump or adding a new library is required and whether it can be
> performed. As you can guess, there's too much of a human factor so
> statistically no one does that until problems appear. Moreover, there's
> nothing but a «it's not compatible with OpenStack» yelling from the OSCI team
> that stops developers from changing dependencies in Fuel.
>
> All the stuff described above causes sometimes tremendous time losses and
> is very problem-prone.
>
> I’d like to propose to make everyone’s life easier by following these
> steps:
>
>  - Create a new project called Fuel Requirements; all changes to it should
> go through a standard review procedure
>  - We restrict ourselves to using only packages from both Fuel Requirements
> and Global Requirements for the version of OpenStack that Fuel is installing,
> in the following manner:
> - If a requirement is in Global Requirements, the version spec in all
> Fuel’s components should be exactly like that.
> - OSCI mirror should contain the maximum version of a requirement
> that matches its version specification.
> - If a requirement is not in the global requirements list, then Fuel
> Requirements list should be used to check whether all Fuel’s components
> require the same version of a library/package.
> - OSCI mirror should contain the maximum version of a requirement
> that matches its version specification.
> - If a requirement that previously was only in Fuel Requirements is
> merged to Global Requirements, it should be removed from Fuel Requirements
>   - Set up CI jobs in both OpenStack CI and FuelCI to check all patches
> against both Global Requirements and Fuel Requirements, and block if either
> of the checks doesn't pass
>   - Set up CI jobs to notify OSCI team if either Global Requirements or
> Fuel Requirements are changed.
>   - Set up requirements proposal jobs that will automatically propose
> changes to all Fuel projects once either of the requirements lists is changed,
> just like it's done for OpenStack projects.
>
>
> These steps may look terribly hard, but most of the job is already done in
> OpenStack projects and therefore it can be reused for Fuel.
> Looking forward to your feedback, folks!
>
>
> - romcheg
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Release management and bug triage

2015-03-18 Thread Emilien Macchi


On 03/18/2015 12:23 PM, Mathieu Gagné wrote:
> On 2015-03-17 3:22 PM, Emilien Macchi wrote:
>>
>> A first question that comes in my mind is: should we continue to manage
>> every Puppet module in a different Launchpad project? Or should we
>> migrate all modules to a single project.
> 
> I prefer multiple Launchpad projects due to the fact that each project
> assumes you manage one project for every aspect, especially milestones
> management (which is intrinsically linked to bug management).
> 
> 
>> So far this is what I think about both solutions, feel free to comment:
>>
>> "Having one project per module"
>> Pros:
>> * Really useful when having the right tools to manage Launchpad, and
>> also to manage one module as a real project.
>> * The solution scales to the number of modules we support.
>>
>> Cons:
>> * I think some people don't go on Launchpad because there are so many
>> projects (one per module), so they have not subscribed to emails or don't
>> visit the page very often.
> 
> They can subscribe to the project group instead:
> https://bugs.launchpad.net/openstack-puppet-modules
> 
> 
>> * Each time we create a module (it's not every day, I would say each
>> time a new OpenStack project is started), we have to repeat the process
>> for a new launchpad project.
> 
> It takes me ~2 minutes to create a project. It's not a burden at all for me.
> 
> The challenge is with release management at scale. I have a bunch of
> tools which I use to create new series, milestones and release them. So
> it's not that big of a deal.

Are you willing to share it?

> 
> 
>> "Having everything in a single project"
>> Pro:
>> * Release management could be simpler
> 
> It's not simpler, especially if you wish to associate bugs to
> milestones. You would have to assume that EVERY project will be part of
> a very synchronized release cycle (which isn't true).
> 
>> * A single view for all the bugs in Puppet modules
> 
> It exists already:
> https://bugs.launchpad.net/openstack-puppet-modules
> 
>> * Maybe a bad idea, but we can use tags to track puppet modules issues
>> (i.e.: puppet-openstacklib would be openstacklib)
> 
> I already tried using tags to track issues. The challenge is with
> versions and releases management. You cannot associate issues to
> milestones unless you assume that we have one single version
> for ALL our modules. So far, we have had occasions where a single module was
> released instead of all of them.
> 
> 
>> Con:
>> * The solution does not scale much; it depends again on how we decide to
>> do bug triage and release management;
> 
> If we wish to be under the big tent, I think we have to have a strong
> bug triage and release management. And having only one LP project is
> going to make it difficult, not easier.

Big +1.

> 
> 

-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Release management and bug triage

2015-03-18 Thread Mathieu Gagné
On 2015-03-17 3:22 PM, Emilien Macchi wrote:
> 
> A first question that comes in my mind is: should we continue to manage
> every Puppet module in a different Launchpad project? Or should we
> migrate all modules to a single project.

I prefer multiple Launchpad projects due to the fact that each project
assumes you manage one project for every aspect, especially milestones
management (which is intrinsically linked to bug management).


> So far this is what I think about both solutions, feel free to comment:
> 
> "Having one project per module"
> Pros:
> * Really useful when having the right tools to manage Launchpad, and
> also to manage one module as a real project.
> * The solution scales to the number of modules we support.
> 
> Cons:
> * I think some people don't go on Launchpad because there are so many
> projects (one per module), so they have not subscribed to emails or don't
> visit the page very often.

They can subscribe to the project group instead:
https://bugs.launchpad.net/openstack-puppet-modules


> * Each time we create a module (it's not every day, I would say each
> time a new OpenStack project is started), we have to repeat the process
> for a new launchpad project.

It takes me ~2 minutes to create a project. It's not a burden at all for me.

The challenge is with release management at scale. I have a bunch of
tools which I use to create new series, milestones and release them. So
it's not that big of a deal.


> "Having everything in a single project"
> Pro:
> * Release management could be simpler

It's not simpler, especially if you wish to associate bugs to
milestones. You would have to assume that EVERY project will be part of
a very synchronized release cycle (which isn't true).

> * A single view for all the bugs in Puppet modules

It exists already:
https://bugs.launchpad.net/openstack-puppet-modules

> * Maybe a bad idea, but we can use tags to track puppet modules issues
> (i.e.: puppet-openstacklib would be openstacklib)

I already tried using tags to track issues. The challenge is with
versions and releases management. You cannot associate issues to
milestones unless you assume that we have one single version
for ALL our modules. So far, we have had occasions where a single module was
released instead of all of them.


> Con:
> * The solution does not scale much; it depends again on how we decide to
> do bug triage and release management;

If we wish to be under the big tent, I think we have to have a strong
bug triage and release management. And having only one LP project is
going to make it difficult, not easier.


-- 
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Capability Discovery API

2015-03-18 Thread Dolph Mathews
What you're proposing quickly becomes an authorization question. "What
capabilities can this service provide?" is a far less useful question than
"what capabilities is the user authorized to consume?" More generally, why
would you advertise any capability that the user is going to receive a
4xx/5xx for using? It's a longstanding problem that the community has
discussed many times in the past.

On Tue, Mar 17, 2015 at 3:02 PM, Davis, Amos (PaaS-Core) <
amos.steven.da...@hp.com> wrote:

> All,
> The Application EcoSystem Working Group realized during the mid-cycle
> meetup in Philadelphia that there is no way to get the capabilities of an
> Openstack cloud so that applications can measure their compatibility
> against that cloud.  In other words,  if we create an Openstack App
> Marketplace and have developers make apps to be in that marketplace, then
> we'll have no way for apps to verify that they can run on that cloud.  We'd
> like to ask that there be a standard set of API calls created that allow a
> cloud to list its capabilities.  The cloud "features" or capabilities list
> should return True/False API responses and could include but is not limited
> to the below examples.  Also, https://review.openstack.org/#/c/162655/
> may be a good starting point for this request.
>
>
> Glance:
> URL/upload
> types (raw, qcow, etc)
>
> Nova:
> Suspend/Resume VM
> Resize
> Flavor sizes supported
> Images Available
> Quota Limits
> VNC support
>
> Neutron:
> Types of Networking (neutron, neutron + ml2, nova-network aka linux
> bridge, other)
> Types of SDN in use?
> Shared tenant networks
> Anything else?
>
>
> Ceph/Cinder:
> LVM or other?
> SCSI-backed?
> Any others?
>
> Swift:
> ?
>
> Best Regards,
> Amos Davis
> amos.da...@hp.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][osc] updating review ACLs for cliff

2015-03-18 Thread Doug Hellmann
Yesterday the TC approved the python-openstackclient project as an
official OpenStack project. The governance change also included the
previously discussed move of openstack/cliff from the Oslo team
over to the OSC team. I've updated gerrit to add
python-openstackclient-core to cliff-core and
python-openstackclient-milestone to cliff-release, but I've left
oslo-core and oslo-release in those groups for now until Oslo core team
members who are interested in staying cliff core reviewers can express
their intent.

If you would like to stay on as a cliff-core reviewer, please reply to
this email so I can update the group ACLs to include you. I'll remove
oslo-core some time next week (that's not a deadline for you to express
interest, but it may cause review hiccups if you don't reply before
then).

Thanks,
Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Bug in documentation for host aggregates and availability zones?

2015-03-18 Thread Chris Friesen

On 03/18/2015 09:35 AM, Steve Gordon wrote:

- Original Message -

From: "Chris Friesen" 
I think I've found some bugs around host aggregates in the documentation,
curious what people think.



Agree on both counts, can you file a bug against openstack-manuals and I will 
pick it up?


Done.

https://bugs.launchpad.net/openstack-manuals/+bug/1433673

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] [Delegation] Meeting scheduling

2015-03-18 Thread Tim Hinrichs
I responded in the gdoc.  Here’s a copy.

One of my goals for delegation is to avoid asking people to write policy 
statements specific to any particular domain-specific solver.  People ought to 
encode policy however they like, and the system ought to figure out how best to 
enforce that policy  (delegation being one option).

Assuming that's a reasonable goal, I see two options for delegation to the
SolverScheduler:

(1) SolverScheduler exposes a custom constraint class.  Congress generates the 
LP program from the Datalog, similar to what is described in this doc, and 
gives that LP program as custom constraints to the  SolverScheduler.  
SolverScheduler is then responsible for enforcing that policy both during 
provisioning of new servers and for monitoring/migrating servers once 
provisioning is finished.

(2) The Congress adapter for SolverScheduler understands the semantics of 
MemoryCapacityConstraint, identifies when the user has asked for that 
constraint, and replaces that part of the LP program with the 
MemoryCapacityConstraint.

We probably want a combination of (1) and (2) so that we handle any gaps in the 
pre-defined constraints that SolverScheduler has, while at the same time 
leveraging the pre-defined constraints when possible.
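
To make that combination concrete, here is a rough sketch of the adapter logic I have in mind; every name in it (the constraint registry, the function, the tuple format) is invented for illustration and is not existing Congress or SolverScheduler code:

# Hypothetical sketch only: hand constraints the SolverScheduler already
# implements to it natively (option 2), and leave the rest to be expressed
# as a generated LP program (option 1).

PREDEFINED = {
    "MemoryCapacityConstraint":
        "nova.scheduler.solvers.constraints.MemoryCapacityConstraint",
}

def split_constraints(policy_constraints):
    """policy_constraints: list of (name, metadata) pairs derived from Datalog."""
    native, generated_lp = [], []
    for name, metadata in policy_constraints:
        if name in PREDEFINED:
            native.append((PREDEFINED[name], metadata))
        else:
            generated_lp.append((name, metadata))
    return native, generated_lp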

Tim


On Mar 17, 2015, at 6:09 PM, Yathiraj Udupi (yudupi) <yud...@cisco.com> wrote:

Hi Tim,

I posted this comment on the doc.  I am still pondering the possibility of
having a policy-driven scheduler workflow via the Solver Scheduler placement
engine, which is also LP-based, like the one you describe in your doc.
I know that in your initial meeting you plan to go over your proposal of building
a VM placement engine that subscribes to the Congress DSE. After that I will
probably understand the Congress workflows better and see how I could incorporate
this proposal to talk to the Solver Scheduler to make the placement decisions.

The example you provide in the doc is a very good scenario, where a VM
placement engine should continuously monitor and trigger VM migrations.

I am also interested in the case of policy-driven scheduling for the initial
creation of VMs. This is where, say, people will call Nova APIs and create a new
set of VMs. Here the scheduler workflow should address the constraints
imposed by the user's policies.

Say the simple policy is "Host's free RAM >= 0.25 * Memory_Capacity".
I would like the scheduler to use this policy as defined by Congress, and
apply it during the scheduling as part of the Nova boot call.

I am really interested in, and need help with, coming up with a solution
integrating the Solver Scheduler. So, say I have an implementation of a
"MemoryCapacityConstraint" which takes a hint value "free_memory_limit" (0.25
in this example); could we have a policy in Datalog such as:

placement_requirement(id) :-
nova:host(id),
solver_scheduler:applicable_constraints(id, ["MemoryCapacityConstraint", ]),
applicable_metadata(id, {"free_memory_limit": 0.25, })

This policy could be set and delegated by Congress to the Solver Scheduler via a
"set_policy" API, or the Solver Scheduler could query Congress via a "get_policy"
API to get this policy and incorporate it as part of the solver scheduler
workflow.

Does this sound doable?

Thanks,
Yathi.



On 3/16/15, 11:05 AM, "Tim Hinrichs" <thinri...@vmware.com> wrote:

Hi all,

The feedback on the POC delegation proposal has been mostly positive.  Several 
people have asked for a meeting to discuss further.  Given time zone 
constraints, it will likely be 8a or 9a Pacific.  Let me know in the next 2 
days if you want to participate, and we will try to find a day that everyone 
can attend.

https://docs.google.com/document/d/1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI/edit

Thanks!
Tim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Bug in documentation for host aggregates and availability zones?

2015-03-18 Thread Steve Gordon
- Original Message -
> From: "Chris Friesen" 
> To: openstack-dev@lists.openstack.org
> 
> Hi,
> 
> I think I've found some bugs around host aggregates in the documentation,
> curious what people think.
> 
> The docs at
> "http://docs.openstack.org/trunk/config-reference/content/section_compute-scheduler.html#host-aggregates";
> contain this:
> 
> 
> nova aggregate-create <name> <availability-zone>
> 
>  Create a new aggregate named <name> in availability zone
> <availability-zone>. Returns the ID of the newly created aggregate. Hosts can
> be
> made available to multiple availability zones, but administrators should be
> careful when adding the host to a different host aggregate within the same
> availability zone and pay attention when using the aggregate-set-metadata and
> aggregate-update commands to avoid user confusion when they boot instances in
> different availability zones. An error occurs if you cannot add a particular
> host to an aggregate zone for which it is not intended.
> 
> 
> 
> I'm pretty sure that there are multiple errors in there:
> 
> 1) This command creates a new aggregate that is *exposed as* an availability
> zone.  It doesn't create a new aggregate within an availability zone. [1]
> Because of that the idea of a "different host aggregate within the same
> availability zone" doesn't make any sense.
> 
> 2) Hosts can be part of multiple host aggregates, they cannot be part of
> multiple availability zones. [2]
> 
> 
> Chris
> 
> 
> 
> 
> References:
> [1]
> http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
> 
> [2] See the check in nova.compute.api.AggregateAPI._check_az_for_host()

Agree on both counts, can you file a bug against openstack-manuals and I will 
pick it up?

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Common library for shared code

2015-03-18 Thread Ben Nemec
On 03/17/2015 09:13 AM, Zane Bitter wrote:
> On 16/03/15 16:38, Ben Nemec wrote:
>> On 03/13/2015 05:53 AM, Jan Provaznik wrote:
>>> On 03/10/2015 05:53 PM, James Slagle wrote:
 On Mon, Mar 9, 2015 at 4:35 PM, Jan Provazník  wrote:
> Hi,
> it would make sense to have a library for the code shared by Tuskar UI and
> CLI (I mean TripleO CLI - whatever it will be, not tuskarclient which is
> just a thin wrapper for the Tuskar API). There are various actions which
> consist of "more than a single API call to an openstack service", to give
> some examples:
>
> - nodes registration - for loading a list of nodes from a user defined 
> file,
> this means parsing a CSV file and then feeding Ironic with this data
> - decommission a resource node - this might consist of disabling
> monitoring/health checks on this node, then gracefully shut down the node
> - stack breakpoints - setting breakpoints will allow manual
> inspection/validation of changes during stack-update, user can then update
> nodes one-by-one and trigger rollback if needed

 I agree something is needed. In addition to the items above, it's much
 of the post deployment steps from devtest_overcloud.sh. I'd like to see 
 that be
 consumable from the UI and CLI.

 I think we should be aware though that where it makes sense to add things
 to os-cloud-config directly, we should just do that.

>>>
>>> Yes, actually I think most of the devtest_overcloud content fits
>>> os-cloud-config (and IIRC for this purpose os-cloud-config was created).
>>>
>
> It would be nice to have a place (library) where the code could live and
> where it could be shared both by web UI and CLI. We already have
> os-cloud-config [1] library which focuses on configuring OS cloud after
> first installation only (setting endpoints, certificates, flavors...) so 
> not
> all shared code fits here. It would make sense to create a new library 
> where
> this code could live. This lib could be placed on Stackforge for now and 
> it
> might have very similar structure as os-cloud-config.
>
> And most important... what is the best name? Some of ideas were:
> - tuskar-common

 I agree with Dougal here, -1 on this.

> - tripleo-common
> - os-cloud-management - I like this one, it's consistent with the
> os-cloud-config naming

 I'm more or less happy with any of those.

 However, If we wanted something to match the os-*-config pattern we might
 could go with:
 - os-management-config
 - os-deployment-config

>>>
>>> Well, the scope of this lib will be beyond configuration of a cloud so
>>> having "-config" in the name is not ideal. Based on feedback in this
>>> thread I tend to go ahead with os-cloud-management and unless someone
>>> raises an objection here now, I'll ask the infra team what the process is
>>> for adding the lib to stackforge.
>>
>> Any particular reason you want to start on stackforge?  If we're going
>> to be consuming this in TripleO (and it's basically going to be
>> functionality graduating from incubator) I'd rather just have it in the
>> openstack namespace.  The overhead of some day having to rename this
>> project seems unnecessary in this case.
> 
> I think the long-term hope for this code is for it to move behind the 
> Tuskar API, so at this stage the library is mostly to bootstrap that 
> development to the point where the API is more or less settled. In that 
> sense stackforge seems like a natural fit, but if folks feel strongly 
> that it should be part of TripleO (i.e. in the openstack namespace) from 
> the beginning then there's probably nothing wrong with that either.

So is this eventually going to live in Tuskar?  If so, I would point out
that it's going to be awkward to move it there if it starts out as a
separate thing.  There's no good way I know of to copy code from one git
repo to another without losing its history.

I guess my main thing is that everyone seems to agree we need to do
this, so it's not like we're testing the viability of a new project.
I'd rather put this code in the right place up front than have to mess
around with moving it later.  That said, this is kind of outside my
purview so I don't want to hold things up, I just want to make sure
we've given some thought to where it lives.

-Ben


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] requirements-py{2, 3} and universal wheels

2015-03-18 Thread Victor Stinner
>> I haven't tested yet (and someone should) that it does all JUST WORK,
>> but thats easy: put an environment marker in a requirements.txt file
>> like so:
>> 
>>  argparse; python_version < '3'

> I think the last time this came up the feature wasn't available in pip
> yet, and so using separate files was the work-around. Are environment
> markers fully supported by pip/setuptools/whatever now?

Yeah, I developed this feature for OpenStack. My change was merged into pip 6.0:
https://github.com/pypa/pip/pull/1472

I forgot to finish the work in OpenStack. I don't know where the pip 
requirement is specified: we need at least pip 6.0.
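
To make that concrete, a single merged requirements.txt could look roughly like this (the entries are only examples, not the real global-requirements content):

argparse; python_version < '3'
enum34; python_version < '3.4'
six>=1.9.0

With pip >= 6.0 the markers are evaluated at install time, so one file can serve both Python 2 and Python 3 and the separate requirements-py2/requirements-py3 files become unnecessary.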

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Bug in documentation for host aggregates and availability zones?

2015-03-18 Thread Chris Friesen


Hi,

I think I've found some bugs around host aggregates in the documentation, 
curious what people think.


The docs at 
"http://docs.openstack.org/trunk/config-reference/content/section_compute-scheduler.html#host-aggregates"; 
contain this:



nova aggregate-create <name> <availability-zone>

Create a new aggregate named <name> in availability zone
<availability-zone>. Returns the ID of the newly created aggregate. Hosts can be
made available to multiple availability zones, but administrators should be 
careful when adding the host to a different host aggregate within the same 
availability zone and pay attention when using the aggregate-set-metadata and 
aggregate-update commands to avoid user confusion when they boot instances in 
different availability zones. An error occurs if you cannot add a particular 
host to an aggregate zone for which it is not intended.




I'm pretty sure that there are multiple errors in there:

1) This command creates a new aggregate that is *exposed as* an availability 
zone.  It doesn't create a new aggregate within an availability zone. [1] 
Because of that the idea of a "different host aggregate within the same 
availability zone" doesn't make any sense.


2) Hosts can be part of multiple host aggregates, they cannot be part of 
multiple availability zones. [2]
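
For what it's worth, the behaviour described in [1] is easy to see from the nova CLI (the aggregate, availability zone and host names below are made up):

nova aggregate-create rack1-agg az-rack1   # creates the aggregate and exposes az-rack1 as an AZ
nova aggregate-add-host rack1-agg compute-1
nova availability-zone-list                # az-rack1 now shows up in the AZ list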



Chris




References:
[1] 
http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/


[2] See the check in nova.compute.api.AggregateAPI._check_az_for_host()

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Prefix delegation and user facing API thoughts

2015-03-18 Thread Sean M. Collins
On Wed, Mar 18, 2015 at 06:45:59AM PDT, John Davidge (jodavidg) wrote:
> In the IPv6 meeting yesterday you mentioned doing this
> with an extension rather than modifying the core API. Could you go into
> some detail about how you see this extension looking?

The easiest approach is to evaluate the REST API that is being worked on by the
subnet allocation spec:

http://specs.openstack.org/openstack/neutron-specs/specs/kilo/subnet-allocation.html#rest-api-impact

Since it also solves the issue of the CIDR being a required attribute in
the subnet-create call.

This was part of my comments when reviewing the spec: that we should
rely on the API changes from the subnet allocation spec as the "user
facing" portion of prefix delegation.

Anthony Veiga and I did some preliminary sketches on what an API
extension that handled prefix delegation would look like nearly a year
ago ( 
http://lists.openstack.org/pipermail/openstack-dev/2014-March/030581.html),
and I have some additional thoughts on how the REST API would behave,
but at this stage of the game I think the subnet allocation REST API is
a superior spec.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara][Horizon] Can't open "Data Processing" panel after update sahara & horizon

2015-03-18 Thread David Lyle
If you are not seeing the horizon panels for Sahara, I believe you are
seeing https://bugs.launchpad.net/horizon/+bug/1429987

The fix for that was merged on March 9
https://review.openstack.org/#/c/162736/

There are several bugs and fixes around the switch of the endpoint type
from data_processing to data-processing. Seems like you may have got caught
in the middle of the change. All should work now.

David
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Block Device Mapping is Invalid error

2015-03-18 Thread Nikola Đipanov
On 03/16/2015 03:55 PM, aburluka wrote:
> Hello Nova!
> 
> I'd like to ask the community to help me with some unclear things. I'm
> currently working on adding persistent storage support to the Parallels
> driver.
> 
> I'm trying to start VM.
> 
> nova boot test-vm --flavor m1.medium --image centos-vm-32 --nic
> net-id=c3f40e33-d535-4217-916b-1450b8cd3987 --block-device
> id=26b7b917-2794-452a-95e5-2efb2ca6e32d,bus=sata,source=volume,bootindex=1
> 
> Got an error:
> ERROR (BadRequest): Block Device Mapping is Invalid: Boot sequence for
> the instance and image/block device mapping combination is not valid.
> (HTTP 400) (Request-ID: req-454a512c-c9c0-4f01-a4c8-dd0df0c2e052)
> 
> 
> nova/api/openstack/compute/servers.py
> def create(self, req, body)
> has the following "body" arg:
> {u'server':
> {u'name': u'test-vm',
>  u'imageRef': u'b9349d54-6fd3-4c09-94f5-8d1d5c5ada5c',
>  u'block_device_mapping_v2': [{u'disk_bus': u'sata',
>u'source_type': u'volume',
>u'boot_index': u'1',
>u'uuid':
> u'26b7b917-2794-452a-95e5-2efb2ca6e32d'}],
>  u'flavorRef': u'3',
>  u'max_count': 1,
>  u'min_count': 1,
>  u'networks': [{u'uuid': u'c3f40e33-d535-4217-916b-1450b8cd3987'}],
>  'scheduler_hints': {}
> }
> }
> 

So the reason you get such an error is because there is no block device
mapping with boot_index 0. This is for somewhat historical reasons -
when the new block device mapping syntax (so v2, see [1]) was
introduced, the idea was to stop special-casing images, and treat them
as just another block device. Still most of the driver code
special-cases the image field, so this block device is not really used
internally, but is checked for in the API when we try to validate the
boot sequence passed.

In order for this to work properly, we added code in the
python-novaclient to add a (somewhat useless) block device entry (see
commit [2]) so that the DB is used consistently and the validation passes.

[1] https://wiki.openstack.org/wiki/BlockDeviceConfig
[2] https://review.openstack.org/#/c/46537/1

> Such a block device mapping leads to a bad boot index list.
> I've tried to watch this argument while executing a similar command with the
> KVM hypervisor on Juno RDO and got something like this in "body":
> 
> {u'server': {u'name': u'test-vm',
>  u'imageRef': u'78ad3d84-a165-42bb-93c0-a4ad1f1ddefc',
>  u'block_device_mapping_v2': [{u'source_type': u'image',
>u'destination_type': u'local',
>u'boot_index': 0,
>u'delete_on_termination': True,
>u'uuid':
> u'78ad3d84-a165-42bb-93c0-a4ad1f1ddefc'},
> 
>  {u'disk_bus': u'sata',
>   u'source_type': u'volume',
>   u'boot_index': u'1',
>   u'uuid':
> u'57a27723-65a6-472d-a67d-a551d7dc8405'}],
>  u'flavorRef': u'3',
>  u'max_count': 1,
>  u'min_count': 1,
>  'scheduler_hints': {}}}
> 

The telling sign here was that you used RDO to test.

I spent some time looking at this, and the actual problem here is that a
line of code was removed from python-novaclient not too long ago. That line
is present in the RDO Juno novaclient, and it is what actually makes this
work for you.

The offending commit that breaks this for you and does not exist in the
RDO-shipped client is:

https://review.openstack.org/#/c/153203/

This basically removes the code that would add an image bdm if there are
other block devices specified. This is indeed a bug in master, but it is
not as simple as reverting the offending commit in the nova-client, as
it was a part of a separate bug fix [3]

Based on that, I suspect that pointing the older (RDO Juno) client at a Nova
that contains the fix for [3] will also exhibit issues.

Actually there is (at the time of this writing) still code in the Nova
API that expects and special-cases the case the above commit removes [4]

[3] https://bugs.launchpad.net/nova/+bug/1377958
[4]
https://github.com/openstack/nova/blob/4b1951622e4b7fcee5ef86396620e91b4b5fa1a1/nova/compute/api.py#L733

> Can you answer the following questions please:
> 1) Does the first version miss a 'source_type': 'image' arg?
> 2) Where should an image block_device be added to this arg? Does it
> come from novaclient or is it added by some callback or decorator?
> 

I think both questions are answered above. The question that we want to
answer is how to fix it and make sure that it does not regress as easily
in the future.

I have created a bug for this:

https://bugs.launchpad.net/nova/+bug/1433609

so we can continue the discussion there.

N.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Neutron][IPv6] Prefix delegation and user facing API thoughts

2015-03-18 Thread John Davidge (jodavidg)
Copying my response on the review below:

Yes that completely makes sense Sean. In our original proposal we wanted
to allow the user to initiate a subnet-create without providing a CIDR,
and have an 'ipv6_pd_enabled' flag which could be set in the API call to
tell Neutron that this particular subnet needs to have its CIDR defined by
PD. The consensus from the community early in the Kilo development cycle
was that changes to the API should be avoided if at all possible, and so
it was agreed that we would use a special ::/64 CIDR for the initial
implementation. In the IPv6 meeting yesterday you mentioned doing this
with an extension rather than modifying the core API. Could you go into
some detail about how you see this extension looking?

Cheers,


John




On 18/03/2015 13:12, "Sean M. Collins"  wrote:

>Hi all,
>
>I recently posted this comment in the review for
>https://review.openstack.org/#/c/158697/,
>and wanted to post it here so that people can respond. Basically, I have
>concerns that I raised during the spec submission process
>(https://review.openstack.org/#/c/93054/).
>
>I'm still not totally on board with the proposed user facing API, where
>they create a subnet cidr of ::/64, then later it is updated by Neutron
>to actually be the cidr that is delegated. My hope is to have a user
>facing API that would require little to no user input (since we are
>relying on an external system to delegate us a subnet) and Neutron would
>create the required constructs internally. My hope is that either the new
>IPAM subsystem for subnet allocations would provide this, or that a small
>API extension could "paper over" some of the sharper edges.
>
>Basically, I know we need the user to create a CIDR of ::/64 to satisfy
>the Neutron core API's requirement that a subnet MUST have a CIDR when
>creating, but I think that in the case of prefix delegation we shouldn't
>expose this sharp edge to the user by default.
>
>Does this make sense?
>
>-- 
>Sean M. Collins
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara][Horizon] Can't open "Data Processing" panel after update sahara & horizon

2015-03-18 Thread Trevor McKay
Hi Li,

  I am using a fresh devstack with Horizon deployed as part of devstack.
I am running Sahara separately from the command line from the git
sources (master branch).

  I use a little script to register the Sahara endpoint so that Horizon
sees it.
The only change I had to make was to register the service type  as
data-processing instead
of data_processing (below). Other than that, I don't have any problems.

  If you are really stuck, you can always wipe out the database and
rebuild to get beyond the issue.
With mysql I use

$ mysqladmin drop sahara
$ mysqladmin create sahara
$ sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head

If this error is reliably reproducible, would you create a bug in
launchpad with detailed steps to reproduce?
It's not clear to me what the issue is.

Thanks,

Trevor

-- 

#!/bin/bash
keystone service-create --name sahara --type data-processing
keystone endpoint-create --region RegionOne --service sahara --publicurl
'http://localhost:8386/v1.1/$(tenant_id)s'

On Wed, 2015-03-18 at 03:05 +, Li, Chen wrote:
> Hi all,
> 
>  
> 
> I’m working under Ubuntu14.04 with devstack.
> 
>  
> 
> After the fresh devstack installation, I run a integration test to
> test the environment.
> 
> After the test, cluster and tested edp jobs remains in my environment.
> 
>  
> 
> Then I updated sahara to the latest code.
> 
> To make the newest code work, I also did :
> 
> 1.  manually downloaded python-novaclient and installed it by running “python
> setup.py install”
> 
> 2.  run “sahara-db-manage --config-file /etc/sahara/sahara.conf
> upgrade head”
> 
>  
> 
> Then I restarted sahara.
> 
>  
> 
> I tried to delete things remaining from the last test from the dashboard,
> but:
> 
> 1.  The table for “job_executions” can’t be opened anymore.
> 
> 2.  When I try to delete “job”, an error happened:
> 
>  
> 
> 2015-03-18 10:34:33.031 ERROR oslo_db.sqlalchemy.exc_filters [-]
> DBAPIError exception wrapped from (IntegrityError) (1451, 'Cannot
> delete or update a parent row: a foreign key constraint fails
> (`sahara`.`job_executions`, CONSTRAINT `job_executions_ibfk_3` FOREIGN
> KEY (`job_id`) REFERENCES `jobs` (`id`))') 'DELETE FROM jobs WHERE
> jobs.id = %s' ('10c36a9b-a855-44b6-af60-0effee31efc9',)
> 
> 2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters Traceback
> (most recent call last):
> 
> 2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters   File
> "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py",
> line 951, in _execute_context
> 
> 2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters
> context)
> 
> 2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters   File
> "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py",
> line 436, in do_execute
> 
> 2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters
> cursor.execute(statement, parameters)
> 
> 2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters   File
> "/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 174, in
> execute
> 
> 2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters
> self.errorhandler(self, exc, value)
> 
> 2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters   File
> "/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in
> defaulterrorhandler
> 
> 2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters raise
> errorclass, errorvalue
> 
> 2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters
> IntegrityError: (1451, 'Cannot delete or update a parent row: a
> foreign key constraint fails (`sahara`.`job_executions`, CONSTRAINT
> `job_executions_ibfk_3` FOREIGN KEY (`job_id`) REFERENCES `jobs`
> (`id`))')
> 
> 2015-03-18 10:34:33.031 TRACE oslo_db.sqlalchemy.exc_filters
> 
> 2015-03-18 10:34:33.073 DEBUG sahara.openstack.common.periodic_task
> [-] Running periodic task
> SaharaPeriodicTasks.terminate_unneeded_transient_clusters from
> (pid=8084)
> run_periodic_tasks 
> /opt/stack/sahara/sahara/openstack/common/periodic_task.py:219
> 
> 2015-03-18 10:34:33.073 DEBUG sahara.service.periodic [-] Terminating
> unneeded transient clusters from (pid=8084)
> terminate_unneeded_transient_clusters 
> /opt/stack/sahara/sahara/service/periodic.py:131
> 
> 2015-03-18 10:34:33.108 ERROR sahara.utils.api [-] Validation Error
> occurred: error_code=400, error_message=Job deletion failed on foreign
> key constraint
> 
> Error ID: e65b3fb1-b142-45a7-bc96-416efb14de84,
> error_name=DELETION_FAILED
> 
>  
> 
> I assume this might be caused by an old horizon version, so I did :
> 
> 1.  update horizon code.
> 
> 2.  python manage.py compress
> 
> 3.  sudo python setup.py install
> 
> 4.  sudo service apache2 restart
> 
>  
> 
> But these only make things worse.
> 
> Now, when I click “Data Processing” on the dashboard, there is no
> response anymore.
> 
>  
> 
> Can anyone help me here?
> 
> What did I do wrong?
> 
> How can I fix this?
> 
>  
> 
> I tested sahara 

[openstack-dev] [Neutron][IPv6] Prefix delegation and user facing API thoughts

2015-03-18 Thread Sean M. Collins
Hi all,

I recently posted this comment in the review for 
https://review.openstack.org/#/c/158697/,
and wanted to post it here so that people can respond. Basically, I have
concerns that I raised during the spec submission process
(https://review.openstack.org/#/c/93054/).

I'm still not totally on board with the proposed user facing API, where they 
create a subnet cidr of ::/64, then later it is updated by Neutron to actually 
be the cidr that is delegated. My hope is to have a user facing API that would 
require little to no user input (since we are relying on an external system to 
delegate us a subnet) and Neutron would create the required constructs 
internally. My hope is that either the new IPAM subsystem for subnet 
allocations would provide this, or that a small API extension could "paper 
over" some of the sharper edges.

Basically, I know we need the user to create a CIDR of ::/64 to satisfy the 
Neutron core API's requirement that a subnet MUST have a CIDR when creating, 
but I think that in the case of prefix delegation we shouldn't expose this 
sharp edge to the user by default.
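
For illustration, the workflow implied by the current proposal looks roughly like this from the CLI (the network and subnet names are made up and the flags are only indicative):

neutron subnet-create --ip-version 6 --name pd-subnet private-net ::/64
# Neutron later replaces ::/64 with the prefix actually received via DHCPv6 PD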

Does this make sense?

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] PTL elections

2015-03-18 Thread Serg Melikyan
Thank you!

On Wed, Mar 18, 2015 at 8:28 AM, Sergey Lukjanov 
wrote:

> The PTL candidacy proposal time frame ended and we have only one candidate.
>
> So, Serg Melikyan, my congratulations!
>
> Results documented in
> https://wiki.openstack.org/wiki/Murano/PTL_Elections_Kilo_Liberty#PTL
>
> On Wed, Mar 11, 2015 at 2:04 AM, Sergey Lukjanov 
> wrote:
>
>> Hi folks,
>>
>> due to the requirement to have officially elected PTL, we're running
>> elections for the Murano PTL for Kilo and Liberty cycles. Schedule
>> and policies are fully aligned with official OpenStack PTLs elections.
>>
>> You can find more info in official elections wiki page [0] and the same
>> page for Murano elections [1], additionally some more info in the past
>> official nominations opening email [2].
>>
>> Timeline:
>>
>> till 05:59 UTC March 17, 2015: Open candidacy to PTL positions
>> March 17, 2015 - 1300 UTC March 24, 2015: PTL elections
>>
>> To announce your candidacy please start a new openstack-dev at
>> lists.openstack.org mailing list thread with the following subject:
>> "[murano] PTL Candidacy".
>>
>> [0] https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014
>> [1] https://wiki.openstack.org/wiki/Murano/PTL_Elections_Kilo_Liberty
>> [2]
>> http://lists.openstack.org/pipermail/openstack-dev/2014-March/031239.html
>>
>> Thank you.
>>
>> --
>> Sincerely yours,
>> Sergey Lukjanov
>> Sahara Technical Lead
>> (OpenStack Data Processing)
>> Principal Software Engineer
>> Mirantis Inc.
>>
>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Principal Software Engineer
> Mirantis Inc.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Stable/icehouse: oslo.messaging RPCClient segmentation fault core dumped

2015-03-18 Thread ZIBA Romain
Hello everyone,
I am having an issue using the RPCClient of the oslo.messaging package 
delivered through the stable/icehouse release of devstack (v 1.4.1).

With this simple script:


import sys

from oslo.config import cfg
from oslo import messaging

from project.openstack.common import log

LOG = log.getLogger(__name__)

log_levels = (cfg.CONF.default_log_levels +
['stevedore=INFO', 'keystoneclient=INFO'])
cfg.set_defaults(log.log_opts, default_log_levels=log_levels)

argv = sys.argv
cfg.CONF(argv[1:], project='test_rpc_server')

log.setup('test_rpc_server')

transport_url = 'rabbit://guest:guest@localhost:5672/'
transport = messaging.get_transport(cfg.CONF, transport_url)
target = messaging.Target(topic='test_rpc', server='server1')
client = messaging.RPCClient(transport, target)
ctxt = {'some':'context'}
try:
    res = client.call(ctxt, 'call_method_1')
except Exception as e:
    LOG.debug(e)
print res


svcdev@svcdev-openstack: python rpc_client.py
2015-03-18 11:44:01.018 15967 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connecting to AMQP server on localhost:5672
2015-03-18 11:44:01.125 15967 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connected to AMQP server on localhost:5672
2015-03-18 11:44:01.134 15967 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connecting to AMQP server on localhost:5672
2015-03-18 11:44:01.169 15967 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connected to AMQP server on localhost:5672
Segmentation fault (core dumped)

The last Python method called is the following one (in librabbitmq package, v 
1.0.3):

def basic_publish(self, body, exchange='', routing_key='',
                  mandatory=False, immediate=False, **properties):
    if isinstance(body, tuple):
        body, properties = body
    elif isinstance(body, self.Message):
        body, properties = body.body, body.properties
    return self.connection._basic_publish(self.channel_id,
        body, exchange, routing_key, properties,
        mandatory or False, immediate or False)

The script crashes after trying to call _basic_publish.

For information, I've got trusty's rabbitmq-server version (v 3.2.4-1).
Plus, replacing the call method with a cast method results in a message being queued.

Could you please tell me if I'm doing something wrong? Is there a bug in the 
c-library used by librabbitmq?
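
One idea I have not tried yet (a guess on my side, not something from the oslo.messaging documentation): enabling faulthandler before making the call, so the interpreter at least prints the Python stack when it receives SIGSEGV. On Python 2 it is a separate package (pip install faulthandler):

import faulthandler
faulthandler.enable()  # dumps the Python traceback to stderr on a segfault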

Thanks beforehand,
Romain Ziba.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Capability Discovery API

2015-03-18 Thread Duncan Thomas
On 17 March 2015 at 22:02, Davis, Amos (PaaS-Core)  wrote:

> Ceph/Cinder:
> LVM or other?
> SCSI-backed?
> Any others?
>

I'm wondering why any of the above matters to an application. The entire
point of cinder is to abstract those details from the application. I'd be
very strongly resistant to adding any API to cinder that exposed any of the
above. Is there some significant difference the above makes, from an
application POV? If so, please do let me know, since it is probably a bug.

There *are* details about cinder that are sensible to expose however:

Are consistency groups supported?
Is replication supported?
Is the backup service enabled?
Do snapshots of an attached volume work?
Are there restrictions to backing up snapshots, or snapshotted volumes,
or volume from snapshots?

...and probably others. I don't think there's a good answer yet to how to
answer these questions, and I agree we need to get that on the road
map. Some of the above questions should ideally only have one answer, but
there are limitations on various drivers that we've not yet fixed.

-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack][neutron] Debugging L3 Agent with PyCharm

2015-03-18 Thread Gal Sagie
Hello all,

I am trying to debug the L3 agent code with PyCharm, but the debugger
doesn't stop on my breakpoints.

I have enabled PyCharm's gevent-compatible debugging but that doesn't solve
the issue (I am able to debug the Neutron server correctly).

Does anyone know what the problem might be?

Thanks
Gal.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Block migrations and Cinder volumes

2015-03-18 Thread Daniel P. Berrange
On Wed, Mar 18, 2015 at 08:33:26AM +0100, Thomas Herve wrote:
> > Interesting bug.  I think I agree with you that there isn't a good solution
> > currently for instances that have a mix of shared and not-shared storage.
> > 
> > I'm curious what Daniel meant by saying that marking the disk shareable is
> > not
> > as reliable as we would want.
> 
> I think this is the bug I reported here: 
> https://bugs.launchpad.net/nova/+bug/1376615
> 
> My initial approach was indeed to mark the disks are shareable: the patch 
> (https://review.openstack.org/#/c/125616/) has comments around the issues, 
> mainly around the I/O cache and SELinux isolation being disabled.

Yep, those are both show stopper issues. The only solution is to fix the
libvirt API for this first.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Block migrations and Cinder volumes

2015-03-18 Thread Daniel P. Berrange
On Tue, Mar 17, 2015 at 01:33:26PM -0700, Joe Gordon wrote:
> On Thu, Jun 19, 2014 at 1:38 AM, Daniel P. Berrange 
> wrote:
> 
> > On Wed, Jun 18, 2014 at 11:09:33PM -0700, Rafi Khardalian wrote:
> > > I am concerned about how block migration functions when Cinder volumes are
> > > attached to an instance being migrated.  We noticed some unexpected
> > > behavior recently, whereby attached generic NFS-based volumes would become
> > > entirely unsparse over the course of a migration.  After spending some time
> > > reviewing the code paths in Nova, I'm more concerned that this was actually
> > > a minor symptom of a much more significant issue.
> > >
> > > For those unfamiliar, NFS-based volumes are simply RAW files residing on an
> > > NFS mount.  From Libvirt's perspective, these volumes look no different
> > > than root or ephemeral disks.  We are currently not filtering out volumes
> > > whatsoever when making the request into Libvirt to perform the migration.
> > > Libvirt simply receives an additional flag (VIR_MIGRATE_NON_SHARED_INC)
> > > when a block migration is requested, which applied to the entire migration
> > > process, not differentiated on a per-disk basis.  Numerous guards within
> > > Nova to prevent a block based migration from being allowed if the instance
> > > disks exist on the destination; yet volumes remain attached and within the
> > > defined XML during a block migration.
> > >
> > > Unless Libvirt has a lot more logic around this than I am lead to believe,
> > > this seems like a recipe for corruption.  It seems as though this would
> > > also impact any type of volume attached to an instance (iSCSI, RBD, etc.),
> > > NFS just happens to be what we were testing.  If I am wrong and someone can
> > > correct my understanding, I would really appreciate it.  Otherwise, I'm
> > > surprised we haven't had more reports of issues when block migrations are
> > > used in conjunction with any attached volumes.
> >
> > Libvirt/QEMU has no special logic. When told to block-migrate, it will do
> > so for *all* disks attached to the VM in read-write-exclusive mode. It will
> > only skip those marked read-only or read-write-shared mode. Even that
> > distinction is somewhat dubious and so not reliably what you would want.
> >
> > It seems like we should just disallow block migrate when any cinder volumes
> > are attached to the VM, since there is never any valid use case for doing
> > block migrate from a cinder volume to itself.
> 
> Digging up this old thread because I am working on getting multi node live
> migration testing working (https://review.openstack.org/#/c/165182/), and
> just ran into this issue (bug 1398999).
> 
> And I am not sure I agree with this statement. I think there is a valid
> case for doing block migrate with a cinder volume attached to an instance:

To be clear, I'm not saying the use cases for block migrating cinder are
invalid. Just that with the way libvirt exposes block migration today
it isn't safe for us to allow it, because we don't have fine grained
control to make it reliably safe from openstack. We need to improve the
libvirt API in this area and then we can support this feature properly.

> * Cloud isn't using a shared filesystem for ephemeral storage
> * Instance is booted from an image, and a volume is attached afterwards. An
> admin wants to take the box the instance is running on offline for
> maintenance with minimal impact to the instances running on it.
> 
> What is the recommended solution for that use case? If the admin
> disconnects and reconnects the volume themselves, is there a risk of
> impacting what's running on the instance? etc.

Yes, and that sucks, but that's the only safe option today, otherwise
libvirt is going to try copying the data in the cinder volumes itself,
which means it is copying from the volume on one host, back into the
very same volume on the other host. IOW it is rewriting all the data
even though the volume is shared betwteen the hosts. This has dangerous
data corruption failure scenarios as well as being massively wasteful
of CPU and network bandwidth.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Let's stick to OpenStack global requirements

2015-03-18 Thread Roman Prykhodchenko
Hi folks,

before you say «romcheg, go away and never come back again!», please read the 
story that caused me to propose this and the proposed solution. Perhaps it
will make you reconsider :)

As you know, for different reasons, among which are being able to set up
everything online and bringing up-to-date packages, we maintain an OSCI
repository which is used for building ISOs and deploying OpenStack services.
Managing that repo is a pretty hard job, so a dedicated group of people is
devoted to that duty; they are always busy because of the many
responsibilities they have.

At the same time, Fuel’s developers are pretty energetic and always want to add
new features to Fuel. For that they love to use different libraries, many of
which aren’t in the OSCI mirror yet. So they ask the OSCI folks to add more and
more of those, and I guess that’s pretty fine except for one little thing:
sometimes those libraries conflict with the ones required by OpenStack services.

To prevent that from happening, someone has to check every patch against the
OSCI repo and OpenStack’s global requirements, to detect whether a version bump
or a new library is required and whether it can be performed. As you can
guess, there’s too much of a human factor here, so statistically no one does that
until problems appear. Moreover, there’s nothing but «it’s not compatible
with OpenStack» yelling from the OSCI team that stops developers from changing
dependencies in Fuel.

All the stuff described above sometimes causes tremendous time losses and is
very error-prone.

I’d like to propose making everyone’s life easier by following these steps:

 - Create a new project called Fuel Requirements; all changes to it should go
   through the standard review procedure.
 - We restrict ourselves to using only packages from both Fuel Requirements and
   Global Requirements for the version of OpenStack that Fuel is installing, in
   the following manner:
   - If a requirement is in Global Requirements, the version spec in all
     Fuel’s components should be exactly the same.
     - The OSCI mirror should contain the maximum version of a requirement
       that matches its version specification.
   - If a requirement is not in the global requirements list, then the Fuel
     Requirements list should be used to check whether all Fuel’s components
     require the same version of a library/package.
     - The OSCI mirror should contain the maximum version of a requirement
       that matches its version specification.
   - If a requirement that previously was only in Fuel Requirements is merged
     to Global Requirements, it should be removed from Fuel Requirements.
 - Set up CI jobs in both OpenStack CI and Fuel CI to check all patches against
   both Global Requirements and Fuel Requirements, and block if either of the
   checks doesn’t pass.
 - Set up CI jobs to notify the OSCI team if either Global Requirements or Fuel
   Requirements are changed.
 - Set up requirements proposal jobs that will automatically propose changes
   to all Fuel projects once either of the requirements lists is changed, just
   like it’s done for OpenStack projects.


These steps may look terribly hard, but most of the job is already done in 
OpenStack projects and therefore it can be reused for Fuel.
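
As a rough illustration, the per-patch check could look something like the
sketch below; the file names and the exact matching policy are illustrative
assumptions, not the actual CI job:

    from packaging.requirements import Requirement


    def parse_requirements(path):
        reqs = {}
        with open(path) as handle:
            for line in handle:
                line = line.split('#', 1)[0].strip()
                if not line or line.startswith('-'):  # skip blanks and pip options
                    continue
                req = Requirement(line)
                reqs[req.name.lower()] = str(req.specifier)
        return reqs


    def check(component_file, global_file, fuel_file):
        component = parse_requirements(component_file)
        global_reqs = parse_requirements(global_file)
        fuel_reqs = parse_requirements(fuel_file)

        failures = []
        for name, spec in component.items():
            if name in global_reqs:
                if spec != global_reqs[name]:
                    failures.append('%s: "%s" does not match global "%s"'
                                    % (name, spec, global_reqs[name]))
            elif name in fuel_reqs:
                if spec != fuel_reqs[name]:
                    failures.append('%s: "%s" does not match fuel "%s"'
                                    % (name, spec, fuel_reqs[name]))
            else:
                failures.append('%s is in neither global nor fuel requirements'
                                % name)
        return failures


    for problem in check('requirements.txt',
                         'global-requirements.txt',
                         'fuel-requirements.txt'):
        print(problem)
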
Looking forward to your feedback, folks!


- romcheg





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] KVM Forum 2015 Call for Participation

2015-03-18 Thread Daniel P. Berrange
=
KVM Forum 2015: Call For Participation
August 19-21, 2015 - Sheraton Seattle - Seattle, WA

(All submissions must be received before midnight May 1, 2015)
=

KVM is an industry leading open source hypervisor that provides an ideal
platform for datacenter virtualization, virtual desktop infrastructure,
and cloud computing.  Once again, it's time to bring together the
community of developers and users that define the KVM ecosystem for
our annual technical conference.  We will discuss the current state of
affairs and plan for the future of KVM, its surrounding infrastructure,
and management tools.  Mark your calendar and join us in advancing KVM.
http://events.linuxfoundation.org/events/kvm-forum/

This year, the KVM Forum is moving back to North America.  We will be
colocated with the Linux Foundation's LinuxCon North America, CloudOpen
North America, ContainerCon and Linux Plumbers Conference events.
Attendees of KVM Forum will also be able to attend a shared hackathon
event with Xen Project Developer Summit on August 18, 2015.

We invite you to lead part of the discussion by submitting a speaking
proposal for KVM Forum 2015.
http://events.linuxfoundation.org/cfp


Suggested topics:

KVM/Kernel
* Scaling and optimizations
* Nested virtualization
* Linux kernel performance improvements
* Resource management (CPU, I/O, memory)
* Hardening and security
* VFIO: SR-IOV, GPU, platform device assignment
* Architecture ports

QEMU

* Management interfaces: QOM and QMP
* New devices, new boards, new architectures
* Scaling and optimizations
* Desktop virtualization and SPICE
* Virtual GPU
* virtio and vhost, including non-Linux or non-virtualized uses
* Hardening and security
* New storage features
* Live migration and fault tolerance
* High availability and continuous backup
* Real-time guest support
* Emulation and TCG
* Firmware: ACPI, UEFI, coreboot, u-Boot, etc.
* Testing

Management and infrastructure

* Managing KVM: Libvirt, OpenStack, oVirt, etc.
* Storage: glusterfs, Ceph, etc.
* Software defined networking: Open vSwitch, OpenDaylight, etc.
* Network Function Virtualization
* Security
* Provisioning
* Performance tuning


===
SUBMITTING YOUR PROPOSAL
===
Abstracts due: May 1, 2015

Please submit a short abstract (~150 words) describing your presentation
proposal.  Slots vary in length up to 45 minutes.  Also include in your
proposal the proposal type -- one of:
- technical talk
- end-user talk

Submit your proposal here:
http://events.linuxfoundation.org/cfp
Please only use the categories "presentation" and "panel discussion"

You will receive a notification whether or not your presentation proposal
was accepted by May 29, 2015.

Speakers will receive a complimentary pass for the event.  In the instance
that your submission has multiple presenters, only the primary speaker for a
proposal will receive a complimentary event pass.  For panel discussions, all
panelists will receive a complimentary event pass.

TECHNICAL TALKS

A good technical talk should not just report on what has happened over
the last year; it should present a concrete problem and how it impacts
the user and/or developer community.  Whenever applicable, focus on
work that needs to be done, difficulties that haven't yet been solved,
and on decisions that other developers should be aware of.  Summarizing
recent developments is okay but it should not be more than a small
portion of the overall talk.

END-USER TALKS

One of the big challenges as developers is to know what, where and how
people actually use our software.  We will reserve a few slots for end
users talking about their deployment challenges and achievements.

If you are using KVM in production you are encouraged to submit a speaking
proposal.  Simply mark it as an end-user talk.  As an end user, this is a
unique opportunity to get your input to developers.

HANDS-ON / BOF SESSIONS

We will reserve some time for people to get together and discuss
strategic decisions as well as other topics that are best solved within
smaller groups.

These sessions will be announced during the event.  If you are interested
in organizing such a session, please add it to the list at

   http://www.linux-kvm.org/page/KVM_Forum_2015_BOF

Let people you think might be interested know about it, and encourage
them to add their names to the wiki page as well.  Please try to
add your ideas to the list before KVM Forum starts.


PANEL DISCUSSIONS

If you are proposing a panel discussion, please make sure that you list
all of your potential panelists in your abstract.  We will request full
biographies if a panel is accepted.


===
HOTEL / TRAVEL
===
KVM Forum 2015 will be taking place at the Sheraton Seattle Hotel.  We
are pleased to offer attendees a discounted room rate of US$199/night
(plus applicable taxes) which includes wifi in your room.

Re: [openstack-dev] [Keystone][FFE] - IdP ID (remote_id) registration and validation

2015-03-18 Thread David Chadwick
In my opinion you have got into this situation because your federation
trust model is essentially misguided. As I have been arguing since the
inception of federated Keystone, you should have rules for trusted IdPs
(already done), trusted attributes (not done), and then one set of
mapping rules that apply to all IdPs and all attributes (not done). If
you had followed this model (the one Kent originally implemented) you
would not be in this situation now.

Concerning the remote user ID, we can guarantee that it is always
globally unique by concatenating the IdP name with the IdP-issued user
ID, so this won't cause a problem in mapping rules.

Concerning other identity attributes, there are two types:
- globally known and assigned attributes (such as email address and other
LDAP ones) that have unique IDs regardless of the IDP that issued them -
the eduPerson schema attributes are of this type, so the mapping rules
for these are IDP independent, and the trusted IDP rules ensure that you
filter out untrusted ones
- locally issued attributes that mean different things to different
IDPs. In this case you need to concatenate the name of the IDP to the
attribute to make it globally unique, and then the mapping rules will
always apply. The trusted IDP rules will again filter these out or let
them pass.
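
A small sketch of the namespacing described above; which attributes count as
"global" and the separator used are assumptions for the sketch, not Keystone
code:

    GLOBALLY_DEFINED = {'mail', 'eduPersonPrincipalName', 'eduPersonAffiliation'}


    def qualify_assertion(idp_name, remote_user_id, attributes):
        """Return a globally unique user id plus namespaced attributes."""
        # Concatenating the IdP name with the IdP-issued user ID guarantees
        # global uniqueness of the resulting identifier.
        unique_user_id = '%s:%s' % (idp_name, remote_user_id)

        qualified = {}
        for name, value in attributes.items():
            if name in GLOBALLY_DEFINED:
                # Globally assigned attributes (e.g. the eduPerson schema) keep
                # their names; trusted-IdP rules decide whether to accept them.
                qualified[name] = value
            else:
                # Locally issued attributes mean different things at different
                # IdPs, so prefix them with the issuer to disambiguate.
                qualified['%s:%s' % (idp_name, name)] = value
        return unique_user_id, qualified


    print(qualify_assertion('idp-x', 'alice',
                            {'mail': 'alice@example.org', 'department': 'R&D'}))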

So instead of fixing the model, you are adding more layers of complexity
to the implementation in order to fix conceptual errors in your
federation model.

Sadly yours

David


On 17/03/2015 22:28, Marek Denis wrote:
> Hello,
> 
> One very important feature that we have been working on in the Kilo
> development cycle is management of remote_id attributes tied to Identity
> Providers in keystone.
> 
> This work is crucial for:
> 
> -  Secure OpenStack identity federation configuration. User is required
> to specify what Identity Provider (IdP) issues an assertion as well as
> what protocol (s)he wishes to use (typically it would be SAML2 or OpenId
> Connect). Based on that knowledge (arbitrarily specified by a user),
> keystone fetches mapping rules configured for {IdP, protocol} pair and
> applies it on the assertion. As an effect a set of groups is returned,
> and by membership of those dynamically assigned groups (and later
> roles), an ephemeral user is being granted access to certain OpenStack
> resources. Without remote_id attributes, a user can arbitrarily choose the
> pair {Identity Provider, protocol} without respect to the issuing Identity
> Provider. This may lead to a situation where Identity Provider X issues
> an assertion, but user chooses mapping ruleset dedicated for Identity
> Provider Y, effectively being granted improper groups (roles). As part
> of various federation protocols, every Identity Provider issues an
> identifier allowing trusting peers (Keystone  servers in this case) to
> reliably identify issuer of the assertion. That said, remote_id
> attributes allow cloud administrators to match assertions with Identity
> Providers objects configured in keystone (i.e. situation depicted above
> would not happen, as keystone object Identity Provider Y would accept
> assertions issued by Identity Provider Y only).
> 
> - WebSSO implementation - a highly requested feature that allows using
> federation in OpenStack via web browsers, especially Horizon. Without
> remote_ids, the server (keystone) is not able to distinguish which mapping rule
> set should be used for transforming an assertion into a set of local
> attributes (groups, users etc).
> 
> 
> Status of the work:
> 
> So far we have implemented and merged feature where each Identity
> Provider object can have one remote_id specified. However, there have
> been a few requests from stakeholders for the ability to configure multiple
> remote_id attributes per Identity Provider object. This is extremely
> useful in configuring federations where 10s or 100s of Identity Providers
> work within one federation and where one mapping ruleset is used among
> them.
> This has been discussed and widely accepted during the Keystone mid-cycle
> meetup in January 2015. The first version of the implementation was
> proposed on February 2nd. During the implementation process we discovered
> the bug (https://bugs.launchpad.net/keystone/+bug/1426334) that was
> blocking further work. Fixing it took a considerable amount of effort and
> significantly delayed delivery of the main feature. Eventually,
> the bug was fixed and now we are ready to get final reviews (mind, the
> patch was already reviewed and all the comments and issues were
> constantly being addressed) and hopefully get it landed in the Kilo release.
> 
> Specification link:
> https://github.com/openstack/keystone-specs/blob/master/specs/kilo/idp-id-registration.rst
> 
> Implementation link: https://review.openstack.org/#/c/152156/
> 
> I hereby ask for an exception to accept the provided work in the Kilo
> release cycle.
> 
> With kind regards,
> 
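
For illustration, the remote_id check described above boils down to something
like the following sketch; the data structures and issuer URLs are made up and
this is not the keystone implementation:

    IDENTITY_PROVIDERS = {
        'idp_x': {'remote_ids': ['https://idp-x.example.org/idp/shibboleth']},
        'idp_y': {'remote_ids': ['https://idp-y.example.org/idp/shibboleth',
                                 'https://idp-y-backup.example.org/idp/shibboleth']},
    }


    def validate_issuer(claimed_idp, assertion_issuer):
        """Only apply an IdP's mapping rules to assertions it actually issued."""
        idp = IDENTITY_PROVIDERS.get(claimed_idp)
        if idp is None:
            raise ValueError('Unknown Identity Provider: %s' % claimed_idp)
        if assertion_issuer not in idp['remote_ids']:
            # Without this check a user could pick {IdP Y, saml2} for an
            # assertion issued by IdP X and be mapped with the wrong rules,
            # ending up with improper groups and roles.
            raise ValueError('Assertion issued by %s is not trusted for %s'
                             % (assertion_issuer, claimed_idp))
        return claimed_idp


    validate_issuer('idp_y', 'https://idp-y.example.org/idp/shibboleth')  # OK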

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge token size

2015-03-18 Thread joehuang
[Joe]: For reliability purposes, I suggest that the keystone client should
provide a fail-safe design: a primary KeyStone server and a second KeyStone server
(or even a third KeyStone server). If the primary KeyStone server is out of
service, then the KeyStone client will try the second KeyStone server.
Different KeyStone clients may be configured with different primary and second
KeyStone servers.

[Adam]: Makes sense, but that can be handled outside of Keystone using HA and
Heartbeat and a whole slew of technologies.  Each Keystone server can validate
each other's tokens.
For cross-site KeyStone HA, the backend can leverage a MySQL Galera cluster
for synchronous multisite database replication to provide high availability,
but the KeyStone front end (the API server) is a web service accessed
through an endpoint address (a name, a domain name, or an IP address), like
http:// or an IP address.

AFAIK, HA for a web service in a multi-site scenario is usually done through a
DNS-based geo load balancer. The shortcoming of this kind of HA is that fault
recovery (forwarding requests to a healthy web service) takes longer, depending
on the configuration of the DNS system. The other way is to put a load
balancer like LVS in front of the KeyStone web services across sites. Then either
the LVS is placed in one site (so that the KeyStone client is configured with only
one IP-address-based endpoint entry, but cross-site HA for the LVS itself is
missing), or in multiple sites, with the IPs of the multiple LVS instances
registered in DNS or a name server (so that the KeyStone client is configured
with only one domain-name or name-based endpoint entry, which has the same issue
just mentioned).

Therefore, I still think that a keystone client with a fail-safe design (a primary
KeyStone server plus a second KeyStone server) would be a "very high gain but low
investment" multisite high availability solution. Just like MySQL itself: we know
there are external high availability solutions (for example,
Pacemaker+Corosync+DRBD), but there is also Galera-like built-in clustering.
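
As an illustration of the proposed fail-safe behaviour, a client could simply
walk an ordered list of endpoints; this is a sketch only, not part of
keystoneclient today, and the endpoint URLs are made up:

    import requests

    # Illustrative endpoints; in practice these would come from configuration.
    KEYSTONE_ENDPOINTS = [
        'https://keystone-site1.example.org:5000/v3',   # primary
        'https://keystone-site2.example.org:5000/v3',   # second
        'https://keystone-site3.example.org:5000/v3',   # third
    ]


    def validate_token(token, timeout=3):
        """Validate a token, falling back to the next KeyStone server on failure."""
        last_error = None
        for endpoint in KEYSTONE_ENDPOINTS:
            try:
                resp = requests.get('%s/auth/tokens' % endpoint,
                                    headers={'X-Auth-Token': token,
                                             'X-Subject-Token': token},
                                    timeout=timeout)
                resp.raise_for_status()
                return resp.json()
            except requests.RequestException as exc:
                last_error = exc   # endpoint unreachable or erroring, try the next
        raise RuntimeError('All KeyStone endpoints failed: %s' % last_error)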

Best Regards
Chaoyi Huang ( Joe Huang )


From: Adam Young [mailto:ayo...@redhat.com]
Sent: Tuesday, March 17, 2015 10:00 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge 
token size

On 03/17/2015 02:51 AM, joehuang wrote:
It's not realistic to deploy the KeyStone service (including the backend store) in
each site if the number of sites, for example, is more than 10.  The reason is that
the stored data, including data related to revocation, needs to be replicated to all
sites in a synchronous manner. Otherwise, the API server might attempt to use
the token before it is able to be validated in the target site.

Replicating revocation data across 10 sites will be tricky, but far better
than replicating all of the token data.  Revocations should be relatively rare.


When Fernet tokens are used in a multisite scenario, each API request will ask for
token validation from KeyStone. The cloud will be out of service if KeyStone
stops working, therefore the KeyStone service needs to run in several sites.

There will be multiple Keystone servers, so each should talk to their local 
instance.


For reliability purposes, I suggest that the keystone client should provide a
fail-safe design: a primary KeyStone server and a second KeyStone server (or even
a third KeyStone server). If the primary KeyStone server is out of service,
then the KeyStone client will try the second KeyStone server. Different
KeyStone clients may be configured with different primary and second KeyStone
servers.

Makes sense, but that can be handled outside of Keystone using HA and Heartbeat
and a whole slew of technologies.  Each Keystone server can validate each
other's tokens.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] is it possible to microversion a static class method?

2015-03-18 Thread Matthew Gilliard
I think that both ways of doing this should be supported.

Decorated private methods make sense if the different microversions
have nicely interchangeable bits of functionality but not if one of
the private methods would have to be a no-op. A method which just
passes is noise. Additionally there has been talk (but no code I'm
aware of yet) about having a check that version ranges of decorated
methods are both continuous and non-overlapping.

Explicit inline checks do lose some visual impact, but there will be
cases where it makes sense to do something extra small (add a
single key to a response-body dict or similar, for example), and a
couple-line inline check is a much simpler change in that case.

So, I've commented on all the patches in what I think is the correct
sequence to leave both suggestions in the devref.

  MG
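
To make the two styles concrete, here is a self-contained toy sketch; the
api_version decorator and the naive version handling below are illustrative
stand-ins, not nova's actual microversion plumbing:

    import functools

    _REGISTRY = {}


    def api_version(min_ver, max_ver='999.999'):
        """Toy per-version dispatch, registered under the method name."""
        def decorator(func):
            _REGISTRY.setdefault(func.__name__, []).append((min_ver, max_ver, func))

            @functools.wraps(func)
            def dispatch(self, req, *args, **kwargs):
                # Naive string comparison of versions; fine for this toy only.
                for lo, hi, impl in _REGISTRY[func.__name__]:
                    if lo <= req['version'] <= hi:
                        return impl(self, req, *args, **kwargs)
                raise ValueError('no implementation for %s' % req['version'])
            return dispatch
        return decorator


    class Controller(object):
        # Style 1: whole per-version method bodies, best when versions differ
        # substantially; a version whose body would just "pass" is pure noise.
        @api_version('2.1', '2.4')
        def _detail(self, req, server):
            return {'id': server['id']}

        @api_version('2.5')  # noqa
        def _detail(self, req, server):
            return {'id': server['id'], 'locked': server['locked']}

        def show(self, req, server):
            body = self._detail(req, server)
            # Style 2: a small inline check when a version only adds one key.
            if req['version'] >= '2.6':
                body['host_status'] = server.get('host_status', 'UP')
            return body


    c = Controller()
    print(c.show({'version': '2.3'}, {'id': 'abc', 'locked': False}))
    print(c.show({'version': '2.6'}, {'id': 'abc', 'locked': False}))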

On Mon, Mar 16, 2015 at 11:32 AM, Alex Xu  wrote:
>
>
> 2015-03-16 12:26 GMT+08:00 Christopher Yeoh :
>>
>> So ultimately I think this is a style issue rather than a technical one. I
>> think there are situations where one way looks clearer than the other, and
>> vice versa. Sorry I can't get around to putting up a couple of examples
>> ATM, but to be clear there is no difference in the end result (no different
>> side effects etc).
>
>
> Emm, one more point for the multiple-version top-level method: we should
> declare the version explicitly. Multiple-version internal methods and manual
> version comparison just hide the version declaration in the code.
>
>>
>>
>>
>>
>> On Mon, Mar 16, 2015 at 12:21 PM, Alex Xu  wrote:
>>>
>>>
>>>
>>> 2015-03-16 9:48 GMT+08:00 Alex Xu :



 2015-03-13 19:10 GMT+08:00 Sean Dague :
>
> On 03/13/2015 02:55 AM, Chris Friesen wrote:
> > On 03/12/2015 12:13 PM, Sean Dague wrote:
> >> On 03/12/2015 02:03 PM, Chris Friesen wrote:
> >>> Hi,
> >>>
> >>> I'm having an issue with microversions.
> >>>
> >>> The api_version() code has a comment saying "This decorator MUST
> >>> appear
> >>> first (the outermost decorator) on an API method for it to work
> >>> correctly"
> >>>
> >>> I tried making a microversioned static class method like this:
> >>>
> >>>  @wsgi.Controller.api_version("2.4")  # noqa
> >>>  @staticmethod
> >>>  def _my_func(req, foo):
> >>>
> >>> and pycharm highlighted the api_version decorator and complained
> >>> that
> >>> "This decorator will not receive a callable it may expect; the
> >>> built-in
> >>> decorator returns a special object."
> >>>
> >>> Is this a spurious warning from pycharm?  The pep8 checks don't
> >>> complain.
> >>>
> >>> If I don't make it static, then pycharm suggests that the method
> >>> could
> >>> be static.
> >>
> >> *API method*
> >>
> >> This is not intended for use by methods below the top controller
> >> level.
> >> If you want conditionals lower down in your call stack pull the
> >> request
> >> version out yourself and use that.
> >
> > Both the original spec and doc/source/devref/api_microversions.rst
> > contain text talking about decorating a private method.  The latter
> > gives this example:
> >
> > @api_version("2.1", "2.4")
> > def _version_specific_func(self, req, arg1):
> > pass
> >
> > @api_version(min_version="2.5") #noqa
> > def _version_specific_func(self, req, arg1):
> > pass
> >
> > def show(self, req, id):
> >  common stuff 
> > self._version_specific_func(req, "foo")
> >  common stuff 
> >
> > It's entirely possible that such a private method might not need to
> > reference "self", and could therefore be static, so I think it's a
> > valid
> > question.
>
> That's a doc bug, we should change it.



 Actually it is not a bug. It's a controversial point in the spec, but
 finally it was kept in the spec.

 http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/api-microversions.html

 The discussion at line 268
 https://review.openstack.org/#/c/127127/7/specs/kilo/approved/api-microversions.rst
>>>
>>>
>>> Submitted a patch for the devref: https://review.openstack.org/164555  Let's see
>>> whether we can get agreement
>>>


>
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> 

Re: [openstack-dev] [GBP] PTL elections - Results

2015-03-18 Thread Bhandaru, Malini K
Hello OpenStackers!

   The nomination deadline has passed, and Sumit Naiksatam is the
uncontested PTL of OpenStack GBP!
Congratulations Sumit, and all the very best!

Regards
Malini

From: Bhandaru, Malini K
Sent: Wednesday, March 11, 2015 2:18 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [GBP] PTL elections


Hello OpenStackers!



To meet the requirement of an officially elected PTL, we're running elections
for the Group Based Policy (GBP) PTL for the Kilo and Liberty cycles. Schedule and
policies are fully aligned with the official OpenStack PTL elections.



You can find more information on the official elections wiki page [0] and the
corresponding page for the GBP elections [1], as well as some more info in the
past official nominations opening email [2].



Timeline:



Till 05:59 UTC March 17, 2015: Open candidacy to PTL positions
March 17, 2015 - 1300 UTC March 24, 2015: PTL elections



To announce your candidacy please start a new openstack-dev at 
lists.openstack.org mailing list thread with the following subject:

"[GBP] PTL Candidacy".

[0] https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014

[1] https://wiki.openstack.org/wiki/GroupBasedPolicy/PTL_Elections_Kilo_Liberty



Thank you.



Sincerely yours,



Malini Bhandaru
Architect and Engineering Manager,
Open source Technology Center,
Intel

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Requesting FFE for last few remaining patches for domain configuration SQL support

2015-03-18 Thread Henry Nash
Rich,

Yes, I am adding this ability to the keystone client library and then to osc.

Henry

On 17 Mar 2015, at 20:17, Rich Megginson wrote:

> On 03/17/2015 01:26 PM, Henry Nash wrote:
>> Hi
>>
>> Prior to Kilo, Keystone supported the ability for its Identity backends
>> to be specified on a domain-by-domain basis - primarily so that different
>> domains could be backed by different LDAP servers. In this previous
>> support, you defined the domain-specific configuration options in a
>> separate config file (one for each domain that was not using the default
>> options). While functional, this can make onboarding new domains somewhat
>> problematic, since you need to create the domains via REST and then
>> create a config file and push it out to the keystone server (and restart
>> the server). As part of the Keystone Kilo release we are supporting the
>> ability to manage these domain-specific configuration options via REST
>> (and allow them to be stored in the Keystone SQL database). More detailed
>> information can be found in the spec for this change at:
>> https://review.openstack.org/#/c/123238/
>>
>> The actual code change for this is split into 11 patches (to make it
>> easier to review), the majority of which have already merged - and the
>> basic functionality described is already working.  There are some final
>> patches that are in-flight, a few of which are unlikely to meet the m3
>> deadline.  These relate to:
>>
>> 1) Migration assistance for those that want to move from the current
>> file-based domain-specific configuration files to the SQL-based support
>> (i.e. a one-off upload of their config files).  This is handled in the
>> keystone-manage tool - see: https://review.openstack.org/160364
>> 2) The notification between multiple keystone server processes that a
>> domain has a new configuration (so that a restart of keystone is not
>> required) - see: https://review.openstack.org/163322
>> 3) Support for substitution of sensitive config options into whitelisted
>> options (this might actually make the m3 deadline anyway) - see:
>> https://review.openstack.org/159928
>>
>> Given that we have the core support for this feature already merged, I am
>> requesting an FFE to enable these final patches to be merged ahead of RC.
>>
>> Henry
>
> This would be nice to use in puppet-keystone for domain configuration.  Is
> there support planned for the openstack client?
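
For illustration, onboarding a domain-specific LDAP configuration over the new
REST interface would look roughly like the sketch below, assuming the API shape
described in the spec (PUT /v3/domains/{domain_id}/config); the endpoint, token
and option values are made up:

    import json

    import requests

    KEYSTONE = 'https://keystone.example.org:5000/v3'
    TOKEN = 'ADMIN_TOKEN'      # illustrative admin token
    DOMAIN_ID = 'DOMAIN_ID'    # illustrative domain id

    # Domain-specific options that previously lived in a per-domain config file.
    config = {
        'config': {
            'identity': {'driver': 'ldap'},
            'ldap': {
                'url': 'ldap://ldap.example.org',
                'user_tree_dn': 'ou=Users,dc=example,dc=org',
            },
        }
    }

    resp = requests.put('%s/domains/%s/config' % (KEYSTONE, DOMAIN_ID),
                        headers={'X-Auth-Token': TOKEN,
                                 'Content-Type': 'application/json'},
                        data=json.dumps(config))
    resp.raise_for_status()
    print(resp.json())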


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Capability Discovery API

2015-03-18 Thread Ziad Sawalha
What does "Flavor sizes" include? Memory, CPU count? Is there
wide enough interest in other measures of performance or compatibility, like:
- virtualization type: none (hardware/metal), xen, kvm, hyperv
- cpu speed, cache or some form of performance index
- volume types: SATA, SSD, iSCSI, and a performance index


On 3/17/15, 9:22 PM, "John Dickinson"  wrote:

>
>> On Mar 17, 2015, at 1:02 PM, Davis, Amos (PaaS-Core)
>> wrote:
>> 
>> All,
>> The Application EcoSystem Working Group realized during the mid-cycle
>>meetup in Philadelphia that there is no way to get the capabilities of
>>an Openstack cloud so that applications can measure their compatibility
>>against that cloud.  In other words,  if we create an Openstack App
>>Marketplace and have developers make apps to be in that marketplace,
>>then we'll have no way for apps to verify that they can run on that
>>cloud.  We'd like to ask that there be a standard set of API calls
>>created that allow a cloud to list its capabilities.  The cloud
>>"features" or capabilities list should return True/False API responses
>>and could include but is not limited to the below examples.  Also,
>>https://review.openstack.org/#/c/162655/ may be a good starting point
>>for this request.
>> 
>> 
>> Glance:
>> URL/upload
>> types (raw, qcow, etc)
>> 
>> Nova:
>> Suspend/Resume VM
>> Resize
>> Flavor sizes supported
>> Images Available
>> Quota Limits
>> VNC support
>> 
>> Neutron:
>> Types of Networking (neutron, neutron + ml2, nova-network aka linux
>>bridge, other)
>> Types of SDN in use?
>> Shared tenant networks
>> Anything else?
>> 
>> 
>> Ceph/Cinder:
>> LVM or other?
>> SCSI-backed?
>> Any others?
>> 
>> Swift:
>> ?
>
>Swift's capabilities are discoverable via an "/info" endpoint. The docs
>are at:
>
>http://docs.openstack.org/developer/swift/api/discoverability.html
>
>Example output from my dev environment and from Rackspace Cloud Files and
>from a SwiftStack lab cluster:
>
>https://gist.github.com/notmyname/438392d57c2f3d3ee327
>
>
>Clients use these to ensure a unified experience across clusters and that
>features are supported before trying to use them.
>
>> 
>> Best Regards,
>> Amos Davis
>> amos.da...@hp.com
>> 
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
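
For reference, querying the discoverability endpoint described above is a
single unauthenticated GET against the cluster root; the host below is
illustrative:

    import requests

    resp = requests.get('https://swift.example.org/info')
    resp.raise_for_status()
    capabilities = resp.json()

    # e.g. check for static large object support and the maximum object size
    # before relying on either.
    print('slo' in capabilities)
    print(capabilities.get('swift', {}).get('max_file_size'))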


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infrastructure] Document to write a new service

2015-03-18 Thread Pradip Mukhopadhyay
Hello,


Is there any documentation available that can be followed to start writing
a new service from scratch?


Thanks,
Pradip
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Block migrations and Cinder volumes

2015-03-18 Thread Thomas Herve
> Interesting bug.  I think I agree with you that there isn't a good solution
> currently for instances that have a mix of shared and not-shared storage.
> 
> I'm curious what Daniel meant by saying that marking the disk shareable is
> not
> as reliable as we would want.

I think this is the bug I reported here: 
https://bugs.launchpad.net/nova/+bug/1376615

My initial approach was indeed to mark the disks as shareable: the patch 
(https://review.openstack.org/#/c/125616/) has comments around the issues, 
mainly around the I/O cache and SELinux isolation being disabled.

-- 
Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] PTL Candidacy

2015-03-18 Thread Daniel Comnea
Congrats Steve!

On Wed, Mar 18, 2015 at 12:51 AM, Daneyon Hansen (danehans) <
daneh...@cisco.com> wrote:

>
>  Congratulations Steve!
>
>  Regards,
> Daneyon Hansen
> Software Engineer
> Email: daneh...@cisco.com
> Phone: 303-718-0400
> http://about.me/daneyon_hansen
>
>   From: Angus Salkeld 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Tuesday, March 17, 2015 at 5:05 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Kolla] PTL Candidacy
>
>There have been no other candidates within the allowed time, so
> congratulations Steve on being the new Kolla PTL.
>
>  Regards
>  Angus Salkeld
>
>
>
> On Thu, Mar 12, 2015 at 8:13 PM, Angus Salkeld 
> wrote:
>
>>  Candidacy confirmed.
>>
>>  -Angus
>>
>>  On Thu, Mar 12, 2015 at 6:54 PM, Steven Dake (stdake) 
>> wrote:
>>
>>>   I am running for PTL for the Kolla project.  I have been executing in
>>> an unofficial PTL capacity for the project for the Kilo cycle, but I feel
>>> it is important for our community to have an elected PTL and have asked
>>> Angus Salkeld, who has no stake in the outcome of the election, to officiate the
>>> election [1].
>>>
>>>  For the Kilo cycle our community went from zero LOC to a fully working
>>> implementation of most of the services based upon Kubernetes as the
>>> backend.  Recently I led the effort to remove Kubernetes as a backend and
>>> provide container contents, building, and management on bare metal using
>>> docker-compose which is nearly finished.  At the conclusion of Kilo, it
>>> should be possible from one shell script to start an AIO full deployment of
>>> all of the current OpenStack git-namespaced services using containers built
>>> from RPM packaging.
>>>
>>>  For Liberty, I’d like to take our community and code to the next
>>> level.  Since our containers are fairly solid, I’d like to integrate with
>>> existing projects such as TripleO, os-ansible-deployment, or Fuel.
>>> Alternatively the community has shown some interest in creating a
>>> multi-node HA-ified installation toolchain.
>>>
>>>  I am deeply committed to leading the community where the core
>>> developers want the project to go, wherever that may be.
>>>
>>>  I am strongly in favor of adding HA features to our container
>>> architecture.
>>>
>>>  I would like to add .deb package support and from-source support to
>>> our docker container build system.
>>>
>>>  I would like to implement a reference architecture where our
>>> containers can be used as a building block for deploying a reference
>>> platform of 3 controller nodes, ~100 compute nodes, and ~10 storage nodes.
>>>
>>>  I am open to expanding our scope to address full deployment, but would
>>> prefer to merge our work with one or more existing upstreams such as
>>> TripleO, os-ansible-deployment, and Fuel.
>>>
>>>  Finally I want to finish the job on functional testing, so all of our
>>> containers are functionally checked and gated per commit on Fedora, CentOS,
>>> and Ubuntu.
>>>
>>>  I am experienced as a PTL, leading the Heat Orchestration program from
>>> zero LOC through OpenStack integration for 3 development cycles.  I write
>>> code as a PTL and was instrumental in getting the Magnum Container Service
>>> code-base kicked off from zero LOC where Adrian Otto serves as PTL.  My
>>> past experiences include leading Corosync from zero LOC to a stable
>>> building block of High Availability in Linux.  Prior to that I was part of
>>> a team that implemented Carrier Grade Linux.  I have a deep and broad
>>> understanding of open source, software development, high performance team
>>> leadership, and distributed computing.
>>>
>>>  I would be pleased to serve as PTL for Kolla for the Liberty cycle and
>>> welcome your vote.
>>>
>>>  Regards
>>> -steve
>>>
>>>  [1] https://wiki.openstack.org/wiki/Kolla/PTL_Elections_March_2015
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev